[MUSIC PLAYING] BRAD ABRAMS: If you're new
to developing for the Google Assistant, you have
come to the right talk. And if you're an experienced
Assistant developer, don't worry. We're going to tell
you what's new. Our mission for the
Google Assistant is to be the best way
to get things done. That is an ambitious mission. I think it sort of
rhymes with organize the world's information and
make it useful and accessible. And just like Google Search's
mission from 20 years ago, our mission today
for the Assistant requires a large and vibrant
ecosystem of developers. And that's where
all of you come in. So whether you're joining
us here at the amphitheater at Shoreline watching on the
live stream or on YouTube later, this talk is
going to tell you how you can make your
experiences shine on the Google Assistant. If you're a content
owner, this talk is going to tell you
about [INAUDIBLE] markup, and templates, and how
you can make your content look great on the Assistant. And if you're an Android
developer-- wait. I hear there's a few Android
developers here today. Where are the
Android developers? Where are you? Yes. AUDIENCE: Woo! BRAD ABRAMS: Thank you. If you're an Android
developer, this talk is going to tell you how you can
use App Actions to voice-enable your Android apps. And if you're an innovator
in this new generation of conversational
computing, this talk is going to cover
Interactive Canvas and conversational actions-- how you can use HTML, CSS,
and JavaScript to build rich, immersive actions
for the Assistant. And if you're among
the few, the proud, the hardware developers at
I/O-- any hardware developers? A couple? This talk is going to tell
you about the new innovations in the smart home SDK. But before we do
any of that, Naomi is going to tell us a
little bit about why now is the right time for you to
invest in the Google Assistant. NAOMI MAKOFSY: All right. So we're going to start
by going back in time. I want you to think about when
you first used a computer. For some of you,
it was the '80s. Maybe you played a game,
or stored a receipt, or if it was the mid-80s,
even used a word processor. For others, it may
have been the '90s. Games were a little bit better. And you navigated via the mouse,
instead of the command line. 10 years after computers
entered our home, cell phones entered
many of our hands. And by the mid to late
'90s, communication was more portable than
it had ever been before. But it was still very early. You remember the days
when text messaging was a game of back and forth
between the inbox from and the sent folders? Yes, we've come a long way. Now another 10 years
later, in about 2007, the first smartphones
entered the market. And then mobile
computing exploded. So you may notice a trend here. About every 10
years or so, there's a fundamental shift
in the way that we are computing-- from
desktop, to mobile, and now to conversation. So what do these shifts mean
for all of you, as developers? Well, it means that you
have a lot to think about, a lot to build on, and a
whole lot to build with. And this is because
each new wave is additive to the
one that came before. We're still clicking. We're still typing. We're still tapping. And yes, now we're also talking. We're looking for more
assistance in our daily lives, and we're turning to the devices
around us to get things done. Now, Google's approach to this
era of conversational computing is the Google Assistant. It's a conversation
between the user and Google that helps them get things done. In addition to the
Google Assistant, there are also
assistant-enabled devices, like Google Home,
Android phones, and more. And finally, there's
Actions on Google, which is our third-party platform. This enables developers to
build their own experience on the Assistant. It's an entirely new way
to engage with your users as they're using conversation
to get their things done. And it was announced on
this very stage just three years ago. Now in just a few
short years, we've seen an incredible evolution
in terms of how users are talking to the Assistant. And this presents
opportunities for developers across the globe. Now think back to
those first use cases on conversational platforms. They were very, very
simple and straightforward. They were limited to things
like, turn on the music, turn on the lights, turn off
the music, turn off the lights. Again, it's simple
straightforward commands that fulfilled users' very
low and limited expectations. But there have been
three incredible shifts in querying that have occurred
over the last couple of years. First, we're seeing
that users are having longer conversations. In fact, query strings on
the Assistant are about 20% longer than similar
queries on search. Second, they're
more conversational. They're 200 times more
conversational than search. So queries are going
from weather, 94043, to something like, do I
need an umbrella today? Like you might ask a friend,
a family member, or even a real life assistant. And third, queries
are action-oriented. It's 40 times more
action-oriented than search. So users aren't just
searching for information, but they're actually
looking to get things done. They're finding that
restaurant for Friday night, and they're booking
that dinner reservation. And the evolution
of the query is due to a couple of things
happening simultaneously. So first, we're seeing that
technology is improving. Natural language processing
and understanding improvements have actually decreased
the word error rate, which is a key metric
for speech recognition. It's now better than what
humans can understand. Simultaneously, the number
of Assistant-ready devices has soared. So it's turned this new way of
computing into an ambient one. It's always there when we need
it, no matter what environment we're in or what
device we're on. It's magical, but it poses
a really new challenge for all of us,
which is how do we reach the right user
with the right experience in the right moment
all at the same time? So let's walk through a pretty typical day. We'll talk about some
of the touchpoints where the Assistant
might be helpful. So first, you wake up. Good start. Now if you're anything
like me, you really would love to keep your eyes
shut for that extra 20 seconds, but you also need to
kick start your day and find out where you
need to be and when. Well, the Assistant
can help with that. Now you're waiting
for the subway. You're in a crowded,
loud station. You have a couple of
moments of idle time before that train comes to maybe
preorder your cup of coffee or buy your friend that
birthday gift you've been meaning to send them. The Assistant on your
mobile or your watch can help in those
moments as well. And finally, you're
sitting on the couch at the end of a long day. Your laptop or your mobile phone
are probably not too far away, but neither is your Google Home. It's there to help you. So across these
moments and more, Google is handling
the orchestration that's required to
deliver that personalized experience for the user with
the context-appropriate content. So you, as the
developer, don't have to worry about which experience,
which device, which manner. You can just leave that to us. So what does this mean
for the developer? Well, you have more ways than
ever to be there for your user. You can reach users across
the globe in over 19 languages across 80 countries on over
1 billion devices today with over one million actions. But more than that,
it's actually easier than it's ever been. And this is something we're
all really excited about. I know we're all balancing
far too many projects for the number of hours in a day. So today, we're going
to talk about how you can get started if you
have an hour, a week, or even an entire quarter to
build for the Assistant. We'll talk about how to
use existing ecosystems, as well as how to build
net new for the Assistant. And we'll focus on
four major pillars. So first, we'll talk about how
to use your existing content. This leverages what you're
already doing in search. So web developers,
we're going to be looking at you for this one. Second, we'll talk about how
to extend your existing Android investments, leverage the
continued investments you're making in your mobile apps. And app developers,
I heard you before. So you're going to want to
pay attention to that section. Third, we'll talk
about how to build net new for the Assistant. So if you're an innovator
in the conversational space, we will share how
to get started. And finally hardware
developers-- saw a few hands go up before-- if you're looking to control
your existing device cloud, our Smart Home section
will appeal to all of you. Within each section, we'll talk
about what it is and how to get started. But before we do, Dan is going
to tee up a very sweet example that we will use throughout
the rest of our presentation. DANIEL MYERS: So a
single unifying example that shows all
the different ways you can use Google Assistant. Now, this gets me
thinking about two things. One, I love s'mores. I have a sweet tooth. And I'm also an engineer
here in Silicon Valley, home of tech
startups of all kinds and the tech hub of the world. So how can I combine
my love of s'mores and my love of technology? Talking it over with my
colleagues, Brad and Naomi, we thought of the idea of
using a fictional example company that you,
as a developer, can show all of
the different ways that Assistant can help you
and your company with things like building a global brand
through Google Assistant, increasing your global
sales, customer growth and acquisition, and even
things like user re-engagement, like the very important
metric of daily active users. And so the first
pillar that we have is how you can
leverage your existing content with Google Assistant. NAOMI MAKOFSY: So like
many of your companies, SmoreSmores has a lot
of existing content that's ready for the Assistant. They have a website. They have a podcast. And of course,
they have recipes, so that we can all understand
how to make that perfect s'more at our next bonfire. Also, just like you, they
spend a great deal of time optimizing their
site for search. So we're going to talk about
how they and, of course, how you can extend existing
efforts and optimizations to the Google Assistant. Now, Google's presented ways
to optimize your content for search since the '90s. We work hard to understand
the content of a page, but we also take explicit
cues from developers who share details about their
site via structured data. Structured data is a
standardized format for providing information
about a web page and classifying
that page content. For example, on a
recipe page, you can disambiguate the ingredients
from the cooking time, the temperature, the
calories, and so on.
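To make that concrete, a minimal sketch of the kind of schema.org Recipe markup a page like SmoreSmores' might embed is shown below; the values are purely illustrative.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Recipe",
  "name": "Classic Campfire S'more",
  "recipeIngredient": [
    "2 graham crackers",
    "1 marshmallow",
    "1 square of milk chocolate"
  ],
  "cookTime": "PT2M",
  "nutrition": {
    "@type": "NutritionInformation",
    "calories": "130 calories"
  }
}
</script>
```

And because of this markup,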
we can provide users with richer content on
the search results page, answers to questions,
and a whole lot more. And this brings Google search
beyond just 10 blue links. And over the last year,
we've been hard at work to enhance the Google
search experience and enable developers to extend
their content from search to other Google properties,
like the Assistant. So for sites with
content in popular areas, like news, podcasts,
and recipes, we have structured
data markup to make your content available
in richer ways on search. And those same optimizations
that you make for search will also help your content
be both discoverable and accessible on the Assistant. And it's just standard RSS. You've seen this before. And that's the approach
we've always taken. We're using industry
standards and ensuring those optimizations are ready
for search and the Assistant too. And I'm so excited now to
announce two brand new programs that we're adding-- how-to guides and FAQs. So the additional optimizations
that you make for search will soon yield a
richer experience and an automatic extension
to the Assistant. Let's dive into each. So how-to guides
enable developers to mark up their
how-to content and make it discoverable to users on
both search and the Assistant. What displays is then a
step-by-step guide to the user on anything from how to change
a flat tire to, of course, how to make that perfect s'more.
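For the how-to case, the markup follows the schema.org HowTo type; a rough, illustrative sketch for SmoreSmores might look like this (the image URL and wording are placeholders).

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "HowTo",
  "name": "How to make the perfect s'more",
  "image": "https://www.example.com/images/perfect-smore.jpg",
  "step": [
    { "@type": "HowToStep", "name": "Toast", "text": "Toast a marshmallow until golden brown." },
    { "@type": "HowToStep", "name": "Stack", "text": "Place it on a graham cracker with a square of chocolate." },
    { "@type": "HowToStep", "name": "Press", "text": "Top with a second cracker and press gently." }
  ]
}
</script>
```

So on the left here, you can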
see a nice image-based preview of the how-to content
in SmoreSmores site. It allows the user to
engage with your brand further upstream
in their journey. And it differentiates
your results on the search results page. And if you don't have
images, don't worry. We have a text-based version
of this feature as well. Now on the right, you can see
the full guided experience on a home hub device, again,
all powered by the same markup on SmoreSmores site. Now the best thing about
creating a step-by-step guide is that you actually don't
have to be technical to do so. Now I know we're at I/O, and
I/O is a developers conference. But if you have one hour to
dip your toes in the Assistant pool, and you don't
have a developer who can devote the time to adding
the markup, don't worry. We have ways for you to get
your content onto the Assistant, even as simply as
using a spreadsheet. So now, you can combine
your existing YouTube how-to videos and a
simple spreadsheet and the Actions console to get
a guided how-to experience across many
Assistant-enabled devices. So smoresmores.com now has two ways to get their step-by-step guides on how to
make that perfect s'more onto the Assistant. If they have a developer
with some extra time, they can add the markup. Or they can use a
simple spreadsheet to extend their existing
YouTube content. Now, we're going to
switch gears a little. I want you to think about
how many times you've turned to search for
answers to questions. Maybe some of you are
even doing it right now. That's OK. Maybe you're trying to
find out the return policy to your favorite
store, or if there's a delivery fee for your
favorite restaurant, blackout dates for travel-- the list truly goes on and on. Well, our FAQs markup enables
a rich answer experience on search, giving users answers
directly from your customer service page. So the same
optimization will then enable queries on the
Assistant to be answered by the markup you already did. And it's so easy to implement.
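As a rough sketch, FAQ markup uses the schema.org FAQPage type; the question and answer below are invented for the SmoreSmores example.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is the SmoreSmores delivery fee?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Delivery is free on orders over $10. Smaller orders have a flat $2 fee."
    }
  }]
}
</script>
```

So when a user queries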
something like what's SmoreSmores delivery
fee on the Assistant, Google will soon
be able to render the answer from that same markup
on your customer service page. And here's some developers
that have already gotten started with FAQs
and how-to guides. And we'd love to have you
join us tomorrow at 10:30 in the morning to
learn more about how to enhance your search
and Assistant presence with structured data. Of course, the talk will
also be live-streamed, or you can catch it
later on YouTube. So as you've seen, there
are three main ways that smoresmores.com and you
can leverage existing content. First, you can ensure the
hygiene of your structured data markup for podcasts,
news, or recipe content. You can add the new FAQs markup
to your customer service site. Or you can use the new
template to bring your content to the Google Assistant. We're so excited about
the ways that we're making it even
easier for developers to extend existing content
from search to the Assistant. But we're also making it
even easier for companies to engage their existing
ecosystems, like Android. So let's talk more
about App Actions. BRAD ABRAMS: All right. Thank you. How about that sun, huh? You enjoying the
sun in Shoreline? I can't see anything
without these, so I'm going to go with this. So where are my Android
developers again? Android developers? Yes. Just like many of you,
the SmoreSmores Company has a popular Android app. They want to make it as
easy to order s'mores as it is to enjoy them. But just like many of you,
they face the high cost of driving app installs,
coupled with the reality that users are using fewer
and fewer apps each year. The sea of icons found
on many users' phones might be a contributing factor. It's hard for users
to remember your icon, and much less find it. What we need is a new way
for users to find and launch your Android apps, one
that's focused more on what users are trying to do,
rather than what icon to click. Last year at I/O, we gave a
sneak peek at App Actions, a simple way for
Android developers to connect their apps with the
helpfulness of the Assistant. And with the Google Assistant
on nearly one billion Android phones, App Actions
is a great way for you to reconnect with your users. App Actions uses Google's
natural language understanding technology, so
it's easy for users to naturally invoke
your application. And finally, App Actions
doesn't just launch your app, it launches deeply
into your app. So we fast-forward users
right to the good parts. Now to help you
understand this concept, let's walk through an
example, of course, using our SmoreSmores app. Let's first take a look at how
this looks the traditional way. So first, of course, I
find the SmoreSmores app in the sea of icons. Does anybody see it? Next, I select the cracker. OK, that makes sense. And then I have to
choose a marshmallow. All right. And then I get to pick the
chocolate and, of course, the toast level. OK, I've got to say
how many I want. That's important too. And then finally, I can
review my order and confirm. Now that's a long
intent-to-fulfillment chain. It's a long way
from when I first had the desire for a warm, delicious s'more to when I got it successfully ordered. And that means there's
opportunities for drop-off all along the way. Now, let's take a look
at what this looks like once SmoreSmores
has enabled App Actions. First, you'll notice I
get to naturally invoke the app with just my voice. I can say, order one milk
chocolate from SmoreSmores. And then immediately, we
jump right to the good part of this application,
confirming that order. Notice, we got all the
parameters correct. And then we just
confirm, and we're done. We're ordered. It's a short path from when I
had the intent for the warm, delicious s'more to when I got it ordered. But of course, we didn't build App Actions just for ordering s'mores. We had a few partners
that have already started to look at App Actions. So for example, I can
say to the Assistant, order a maple glazed
donut from Dunkin' Donuts. Of course, I might
need to work that off, so I can say, start a
run on Nike Run Club. And I might want to settle that
bet from last night by saying, send $50 to Naomi on PayPal. So let's take a look, though,
at what enables this technology. What's going on under
the covers here? Foundationally at
Google, we connect users that express some
intent with third parties that can fulfill it. And App Actions is the mechanism
that you, as app developers, can use to indicate what
your Android app can do. Each built-in intent
represents an atomic thing a user could want
to do, including all possible arguments for that. So you just need to implement
that built-in intent and handle the arguments
that we pass to you. The cool thing about
built-in intents is that they model
all the different ways users might express an intent. For example, these are
the ways users could say, start an exercise. Notice, as an app
developer, you don't need to handle all
of this complexity with these different
kinds of grammar. We handle all of that for you. You just implement
the built-in intent. So speaking of which,
let's take a look at how it looks for you, as a
developer, to implement that. Well, the first thing you'll
do is open up Android Studio and add an actions.xml file. You notice on that second line
there it's, ORDER_MENU_ITEM. That is the name of
the built-in intent that we have implemented
for our SmoreSmores app. And then in that
fulfillment line, you'll notice a
custom scheme URL. So you could, of course,
use an HTTPS URL as well. This just tells us
where in the application we should fast forward into. And then you'll notice we
map the arguments there. So the menuItem.name
is the argument name from the built-in intent. And then notice our URL is
expecting the item name. And then finally at
the bottom there, we're giving some inventory. What are the kinds
of things users might say for this application? And just for brevity,
I put THE_USUAL.
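Putting those pieces together, the actions.xml might look roughly like the sketch below. The element names follow the App Actions developer preview, and the smoresmores:// deep-link scheme is invented for this example, so treat it as illustrative rather than definitive.

```xml
<?xml version="1.0" encoding="utf-8"?>
<actions>
    <!-- Built-in intent for ordering an item from a menu -->
    <action intentName="actions.intent.ORDER_MENU_ITEM">
        <!-- Where the Assistant fast-forwards into the app; an https:// URL works too -->
        <fulfillment urlTemplate="smoresmores://order{?itemName}">
            <!-- Map the built-in intent argument onto our URL parameter -->
            <parameter-mapping intentParameter="menuItem.name" urlParameter="itemName" />
        </fulfillment>
        <parameter name="menuItem.name">
            <entity-set-reference entitySetId="MenuItemSet" />
        </parameter>
    </action>
    <!-- Inventory: things users might say for menuItem.name (trimmed for brevity) -->
    <entity-set entitySetId="MenuItemSet">
        <entity identifier="THE_USUAL" name="the usual" />
    </entity-set>
</actions>
```

Now we just need to handle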
that in our onCreate function. So very simply, we parse
that item name parameter out of the URL. We check to see if
it's that identifier. And if so, we just
prepopulate the UI with exactly what we want.
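In the Activity that receives that deep link, the handling could look something like this Kotlin sketch; the itemName parameter and the prefillOrder helper are placeholders for this example.

```kotlin
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_order)

    // Deep link from the Assistant, e.g. smoresmores://order?itemName=THE_USUAL
    val itemName = intent?.data?.getQueryParameter("itemName")
    if (itemName == "THE_USUAL") {
        // Prepopulate the order UI with the user's usual order
        prefillOrder(itemName)
    }
}
```

This is a very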
simple integration that you can get done very
quickly on the Assistant. So the good news is that you can
build and test with App Actions starting today. We're releasing built-in intents
in these four categories-- in finance, food ordering,
ride sharing, and fitness. So if your app is in
one of those categories, you can build and
test right now. And of course, the
team is already working on the next
set of intents. So if you have ideas or thoughts
on what intent would be great, we'd love it if you gave
us feedback at this URL. But there's one more thing. App Actions and built-in
intents also enable Slices. Slices is a way that you,
as an Android developer, can create a declarative version
of part of your application and embed it into
Google surfaces, like the Google Assistant. So in this case, we're
implementing the Track Order built-in intent. And then you can
see inline there, that's our Android slice showing
up right in line in the Google Assistant, making
it quick and easy for users to get
that information. And then launch into
your app, if they need more advanced functionality. So what did we see here? You can enable users to invoke
your app with just their voice with App Actions. There is a simple
integration model. All you need to do is map
the intents in your app to a common set of
built-in intents. And then the good
news is you can build and test starting today,
with more intents coming soon. DANIEL MYERS: So we've seen how
you can leverage your existing content with Google Assistant. We've seen how you can integrate
your Android applications with Google Assistant. Now I want to talk about
conversation, specifically conversation
actions, that is how you can build new custom
experiences for the Google Assistant. So why are conversation
actions so important? Well for one, it's a way
that you can natively control the device's capabilities. So if the device has a
screen, show an image. If the device supports
touch, show a suggestion chip that they can tap. It's a way that you can
increase your brand awareness through things like custom
dialog and agent personas. You can grow your
user re-engagement through well-crafted
conversation design and things like action links. And furthermore, you can
drive habits and interactions with features like
routines, daily updates, and push notifications. Now what exactly are
conversation actions? It's the idea that
you, as a developer, have full control
over the dialogue, the back and forth
of what's said. This is distinctly
different than that of App Actions or content
actions or even smart home actions where you have
some type of fixed markup, or maybe you're using a
built-in intent, something that already defines the material that you're trying to access. Google Assistant takes that fixed markup, applies its own natural language understanding, and automatically matches what the user says to that material. With conversation
actions, that's flipped. You, as a developer,
define custom intents. You define the types of
phrases that a user might say to match that custom intent. You even define the information
that you want extracted out of what they say for
those same intents. So with conversation actions,
we need some type of tool that can help us do this. And that is Dialogflow. Out-of-the-box, it
provides two key elements-- the concept of user intent, or
what the user actually wants, and the concept of
entity abstraction, the way that you
glean information out of what they say. Let's dive in a little
bit with a small example. So we take s'mores-- I would like a large
s'more with dark chocolate. And I want it to go. Dialogflow can take
this phrase as a whole and match it to the user intent,
that they want to purchase a snack of some kind. Now you see here a few words
highlighted in the sentence. Large-- Dialogflow
can understand that they want a large snack. S'more-- the type of snack. Dark chocolate-- the
topping of that snack. And they want it to go, rather
than for delivery or for there. So when we take a look at
this at a sequence of dialogue and expand it a little
bit more, the user might say something like,
hey G talk to SmoreSmores. SmoreSmores, in this case,
is the invocation name of your action. Google Assistant
takes that audio, transcribes it into text,
applies its natural language understanding, and invokes
your conversation action. From that point forward,
it's between you, as a developer, and
Dialogflow that's controlling the responses back to the user.
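To give a feel for what that fulfillment side can look like, here is a minimal Node.js webhook sketch using the actions-on-google client library; the intent name and parameter names are invented for the SmoreSmores example.

```javascript
const { dialogflow } = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

// Handles the Dialogflow intent for ordering a snack (names are illustrative)
app.intent('order.snack', (conv, { size, snackType, topping, method }) => {
  conv.ask(`You've ordered a ${size} ${snackType} with ${topping}, for ${method}. ` +
           'Anything else?');
});

// Expose the webhook, here as a Cloud Function for Firebase
exports.smoreSmoresFulfillment = functions.https.onRequest(app);
```

And so let's take a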
look at a live demo. Here, I have Dialogflow
and a few intents that I've already defined. I have a s'more shop where
you can order a snack. You can order whatever you
last ordered, or a gift card. And so let's take a deeper
look into ordering a snack. When I look at this,
I have a few things. I have my contexts. I have my training phrases
that I've already supplied. These are the phrases that
I think, as a developer, the user might say that
matches the intent of them wanting to purchase
a snack of some kind. If I scroll down, I can
see the types of parameters and the relating entities
of those parameters-- specifically things
like delivery or pickup, the type of snack, the size of
the snack, toppings, et cetera. Now if I scroll
down further, you'll see the responses that I've
created as well that reference the entities that
I've also defined. If I scroll down even further,
you'll see the fulfillment. If I wanted to have
a custom fulfillment, I can have a standard web hook
call for this type of intent. Now let's look at
an example here. If I say, one large s'more
with milk chocolate, you'll notice instantly, without
additional input from me, Dialogflow has highlighted
several key elements within this phrase. Large-- it knows that's the
size of the snack that I want. S'more-- the type of snack. Milk chocolate-- the topping. So there you go. That's pretty powerful stuff. Now let's take a look
at it in the context of a full conversation. If I say, I would like to
order a large s'more with dark chocolate, instantly it
gets the information. It has the various contexts. It matched it to the
intent of ordering a snack. And we scroll down, it also
still has the various entities and parameters that
it's extracted. Now the default response
here is that I've defined a required parameter
of delivery or pickup. And so it's asking me, will
it be for delivery or pickup? I respond, delivery. And there you go. It understands you've ordered
a large dark chocolate s'more, and you want
it for delivery. There you go. So this is powerful stuff
for Google Assistant. Now let's go back to the slides. Here, what we have is a
way to build conversation. Now Google Assistant
supports a huge array of devices and surfaces. And we want to be able
to scale across them all. Different devices support
different capabilities. A smart speaker is voice-only. A car-- voice forward. It has a screen, but you still
want to have voice control. To intermodal devices, like your cell phone. To maybe your smart watch,
which is screen-only. We need some type
of feature that can fully utilize the screen. And in the case of
SmoreSmores, they want to be able to build
a rich game, something that's voice first, custom
full-screen visuals. They want to build a turn-based battle system that works across multiple surfaces and supports all these different kinds of devices. So today, I'm happy to announce the brand new API called Interactive Canvas. This is an API that
enables pixel-level control of rendering any HTML,
any CSS, and JavaScript. Let me reiterate. That is a full web view
running on the device. It supports full-screen
visuals, animations, even video playback. Now, it wouldn't be Google
I/O without another live demo. Let's take a look here. What I have is a hub. And for this demo, I'm going to
need some audience interaction here. I'm going to be
playing a trivia game, and it's going to be a
really hard question. So I need you guys
all to scream out as loud as you can the
answer to this question. Hey, Google. Play HQ University. GOOGLE ASSISTANT: All right. Here's HQ University. [AUDIO PLAYBACK] [MUSIC PLAYING] - Oh, hi there. Scott Rogowsky here. Recognize this voice? It's the voice of HQ Trivia. Yes, that's me, the host,
Quiz Khalifa, Host Malone, the [INAUDIBLE] Trebek. And I'm here to welcome
you to HQ University. You've been accepted
into this elite program to help smarten up and
sharpen up your trivia skills to help you win HQ. The rules are very simple. My assistant, Alfredo,
is going to ask you a series of questions
that, just like HQ, start easy and get harder. You will respond with the
answer you think is correct. And the HQ universe
will reward you. Let's get down to
the nitty-gritty. Let's get this show
on the road, baby. Alfredo, hit him with
qumero numero uno. - Ready. Question 1-- if you put on a
hoodie, what type of clothing are you wearing? Your choices are-- cloak,
sweatshirt, or cape. [END PLAYBACK] DANIEL MYERS: So
what's the answer? AUDIENCE: Sweatshirt. DANIEL MYERS: Sweatshirt? [AUDIO PLAYBACK] - Yeah, baby. You did it. [END PLAYBACK] DANIEL MYERS: We did it. Awesome. So that is Interactive
Canvas, a way that you can build full-screen
visuals and custom animations. So how does something
like this work? Well first off, we have to start
with a regular conversation action where the user says
something to the device. That in turn goes to the
Actions on Google Platform, in turn to Dialogflow,
and finally your custom fulfillment. Your fulfillment supplies
a certain type of response. In this case, it needs to
supply a rich response. When we add the web
application to Canvas, it supplies an
immersive response which tells the Google
Home to load the web app. Your web application in turn has
a specific JavaScript library called the
Interactive Canvas API. Let's look at some
code for this. When we look at this, this
is the custom fulfillment. You can see here, I have
the default welcome intent that's supplying a new
immersive response that supplies the URL of
the web application and the initial state
of that web application.
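In code, that immersive response from the fulfillment looks roughly like the sketch below; the class name and fields follow the Interactive Canvas developer preview and the URL is a placeholder, so check the current client library before relying on it.

```javascript
const { dialogflow, ImmersiveResponse } = require('actions-on-google');

const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome to HQ University!');
  // Ask the device to load the Canvas web app and pass it an initial state
  conv.ask(new ImmersiveResponse({
    url: 'https://example.com/hq-university/index.html',
    state: { screen: 'welcome' },
  }));
});
```

What does this look like on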
the web application side? Well when we see this,
there's two main elements that you need to include. There's the CSS style sheet-- this supplies the
specific padding for the header on the devices
and things like that-- and then the actual
JavaScript library. This library itself manages the
state of your web application with the state of
the conversation, so that you can
control both in unison. So some key takeaways
around conversation is that Dialogflow is the tool
for developers to where you can build custom conversations. You control the back and
forth of your dialogue. And two, we've announced
the Interactive Canvas API, the pixel-level control
over the display for games where you can run any HTML,
any CSS, and any JavaScript. Now, I want to switch
it up a little bit, talk about smart home,
the ability for you to control any hardware
with your voice. Now traditionally,
smart home has been all about
cloud-to-cloud communication. So when I turn on my
Philips Hue light bulbs, what's actually happening
is that Google Assistant takes in the audio,
transcribes it into text, applies natural
language understanding, and sends a specific response
to Philips Hue's servers. Philips Hue, in turn,
controls the light bulb. And so now, I'm glad
to announce a new API-- the Local Home SDK. This provides new local
control over your devices with post-assistance latency
of under 200 milliseconds. Supports local discovery
protocols, like UDP broadcasts, mDNS UPnP. For actually
controlling devices-- UDP, TCP, and HTTPS. Now with Smart Home, there's
device types and device traits. It supports all of
them out of the box, with the exception of
two-factor authentication. Now my favorite
part is that it's come as you are,
meaning you don't need to change the embedded
code on your device for the messages. What's actually
happening is that you develop a JavaScript application
that runs on the home device. That's pretty awesome. So let's take a look
at how this works. When the user says
something like, hey, G turn on
the lights, again, that audio's sent up
to Google Assistant. Transcribes it into text. Applies natural
language understanding. Google Assistant creates
a execution intent and sends this intent, this
structured JSON response, down to the Google Home
where the home is running your JavaScript application. Your JavaScript
application in turn understands the intent
of trying to turn it on with the exact device
ID, things like that, and constructs the
specific message that your device supports-- in turn, turning on the light.
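As a very rough skeleton, such a Local Home SDK app could be structured like this; the handler bodies are left as comments and the API surface shown follows the developer preview, so consult the current reference before building on it.

```javascript
// Runs inside the Assistant runtime on the Google Home device.
const app = new smarthome.App('1.0.0');

const identifyHandler = (request) => {
  // Match the locally discovered device (e.g. from a UDP broadcast) to the
  // device ID reported during cloud SYNC, and return an IdentifyResponse.
};

const executeHandler = (request) => {
  // Translate the EXECUTE intent (e.g. action.devices.commands.OnOff) into
  // the raw TCP/UDP/HTTP message the toaster understands, send it via
  // app.getDeviceManager().send(...), and return an ExecuteResponse.
};

app.onIdentify(identifyHandler)
   .onExecute(executeHandler)
   .listen()
   .then(() => console.log('Local Home app is ready'));
```

So I want to show more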
traits and more types today. We're always adding
more device types that we support with
the Smart Home APIs. And today, we're
releasing even more with things like
Open Close, StartStop with zones, Lock
Unlock, even devices and types like your door,
boiler, garage door. Now again, we're adding
these all the time. And today, we're releasing
a huge number more. So I want to show now how
SmoreSmores can use Smart Home. So what I have here
is a toaster oven. Some of you might have already
seen this up here and wondering what it's used for. So I have a toaster oven
here with, inside of it, some s'mores that I want to eat
and I want to toast perfectly. I also have a Google AIY Vision
Kit which, inside of that, is a Raspberry Pi 0. And I'm using this to control
the power to this toaster oven. And so let's take a look. Hey, Google, turn on
my s'mores toaster. GOOGLE ASSISTANT: OK,
turning s'mores toaster on. DANIEL MYERS: Awesome. So there you have it-- a smart home device that's
being controlled via voice with Google Assistant. And so let's recap
some key takeaways. One, we announced
the Local Home SDK where you can control
real world devices using local Wi-Fi with your voice. And second, we've
announced a huge number of new traits and device types. These are available
today for you to use in your own devices. And so, what should you do next?
a lot today, but we are so excited to
share with you all of the ways you can get started
building your first action. So to quickly recap, you can use
your web content and leverage what you're already
doing in search. You can use App Actions and
leverage the Android ecosystem you're already participating in. You can build a custom
experience for the Assistant by building a
conversation action, or you can extend your hardware
and build smart home actions to control the devices
around your home. But this is just the beginning. There are 12 more
sessions this week that will dive into topics we
only had a chance to introduce in our time together today. So these additional talks will
be geared toward Android app developers, web developers,
hardware developers, or anyone who just wants to learn how
to build with conversation or some insights around
this new ecosystem. So please check them out live,
watch them on the live stream, or of course, tune
in later on YouTube. Now for those of you
here with us this week, we have a Sandbox outback,
Office Hours, and a Codelab. Now, I think I heard
our toaster is ready. So it's time for us
to go enjoy s'mores. But visit our developer
site, talk to us on Twitter. And we can't wait to see
the rest of I/O. Thank you, so much, for joining us today. [APPLAUSE] DANIEL MYERS: Bye-bye. Thank you. [MUSIC PLAYING]