[MUSIC PLAYING] MARIYA NAGORNA:
My name is Mariya, and I am a technical program
manager here at Google. And with me is Krishna, who
does product management. We both work on a
developer platform called Actions on Google. And in this session,
we'll discuss how you can create actions for
your existing Android apps. And then we'll
talk about how you can leverage those actions
to build deeper integrations with the Google Assistant
so that you can reach more users on more devices. KRISHNA KUMAR: Great. Is everybody here bright and
energetic in the morning? All right. Let's have a quick
show of hands first. How many of you attended the
App Actions session on Tuesday? All right. I see a few hands. So today, we are going
to talk a little bit about how you can enable
actions on your Android apps and how you can
make them surface across various touchpoints
on the Android platform. If that sounds interesting, we
had this very detailed session two days ago, which had a lot
of code samples, et cetera. So I'd definitely encourage you to take a look at those code samples and that video. We also have office hours immediately afterward, so you can come and ask us questions. So we're going to provide a
quick recap on App Actions and Android apps. Then we're going to talk
a little bit about how you as Android developers
can use the same development mechanism that you
use for App Actions to basically create
conversational actions to take your services to the plethora
of new smart devices coming out. So that's the agenda for today. But let's start with,
what are actions? Our devices are our
portal to the world. We do everything with them. We watch videos. We consume content. We navigate the world
using our devices. And there are two primary
critical user needs for how people use devices. One is they actually
consume content. And that includes everything
from listening to music to watching videos to getting
your best recipes, et cetera. The other is we use our
devices to get things done. The world has changed. You no longer run back to your desktop to find maps or to book a ticket or to purchase something online. You use your device. It's our single
portal to the world. However, there are something
like 2 million Android apps out there. And there are probably several hundred million websites. That's awesome. That gives users enormous
choice and possibilities. But that can also be a
little bit overwhelming. Just try to think, off the top of your head: which app do I use to find the best fish taco in San Diego? There's just a lot
of apps out there. And for you, as
a developer, it's even more problematic
because it causes problems of discovery and re-engagement. So there was a statistic in
Android Authority in March 2016 that 77% of users don't use an app within three days of installing it. Man, that's crazy. And that number goes up
to 90% within a month. So in this very
crowded space, it's hard to get this re-engagement. And you're developing all
these cool new features. But even though the user
might have installed your app, they just don't know
what's going on. They don't know what the new capabilities of the app are. So we are providing a sneak peek at what we call App Actions. Dave Burke mentioned
this in the consumer keynote today, or a
couple of days ago. And App Actions is a way for
you, as Android developers, to surface the capabilities of your app so that Google can surface your capabilities, your actions, and your content across many different touchpoints on the Android platform. As the Android platform
evolves to an AI first world, we are moving from just
predicting the next app to predicting the next action
that a user might take. And we will surface the
capabilities of your app at just the right
moment, either based on the user context,
the user routine, or based on things
like the user query. So this gives you an instant
increase in both reach and re-engagement. But how does this all work? How does this show up
in the Android platform? Let's start with
Google Assistant, which everybody is interested in. With App Actions, you can
basically get your Android apps to have a basic integration
and shortcut with the Google Assistant for
task-based queries. So in this particular
scenario, the user has asked something
like, manage my budget. And that immediately
brings up the Mint app straight to the budget page. How is this done? It is because the Mint
app, in this example, has registered for an
intent of budget management in a file called actions.xml. And we'll go into a lot more detail on exactly how you register for intents.
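As a rough illustration of the idea (the intent name and URL here are hypothetical, not Mint's actual configuration, and the element names approximate the preview format):

```xml
<!-- actions.xml: illustrative sketch only -->
<actions>
  <!-- Register the app's budget feature for a hypothetical
       budget-management built-in intent -->
  <action intentName="actions.intent.MANAGE_BUDGET">
    <!-- Deep link straight to the app's budget page -->
    <fulfillment urlTemplate="https://example.com/budgets" />
  </action>
</actions>
```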
This also works for content-based queries. So I asked, what is
Lady Gaga's real name. And believe it or not, I
actually didn't know till now. Stefani, OK. We now try to
understand and predict, what is the next follow-on
action that the user might do? If I'm basically
searching for Lady Gaga, I think it's a
natural conclusion that I might want to
listen to her latest album or watch a video or buy
tickets for her concerts. So we try to predict
the next action and show that as suggestion
chips down on the bottom. Clicking on any of
those suggestion chips immediately opens
the appropriate Android app straight to the Lady Gaga page. Just think about how many steps you are saving here. You're not rummaging through your app drawer to find the app, you're not typing Lady Gaga again, and you're not hunting for the right page. All of this happens magically
so that it just surfaces up across the Android platform. We're also exploring
how we can actually surface apps that you use a
lot directly in Google search. So I use Fandango
to book tickets. So if I type a movie
like "Avengers," it shows me various
suggestion chips for Fandango right at the bottom. So clicking on any of
those suggestion chips will basically take me directly
into Fandango, where I can immediately purchase a ticket. As Android evolves,
last year, in Android O, we introduced this concept of predicting the next app that you might use; that is the top row that you see there. This actually had a 60% prediction accuracy rate and was wildly successful. Now what we're doing is moving from predicting the next app to predicting the next action that you might take. So in this blue
highlighted column there, you see basically
two suggestions. And these are two suggestions
that happen on my phone at around 5:00 to
5:30 in the evening. And this is based on
the frequency of usage for various tasks
and actions I do. So here's my routine. At 5 o'clock, I call my spouse, Mona-- my wife-- and
then I argue with her as to who has to
pick up the kids. I inevitably lose that argument. And then I navigate to my kids'
school to pick up the kids. So those two suggestions, those
two actions happen very often. And therefore they are shown as suggestions. Now, these suggestions are
contextual and routine-based. So, for example, if you
live in San Francisco and you use a transit app
at the transit station, when you approach
the transit station, it knows that particular context
and it will automatically show a suggestion
for the transit app. And those are the
types of predictions that we can do based on many
different inputs of context and routines. We're also directly working
with App Actions and things like Smart Select. So in an email or
in the browser, if I choose a piece
of text, what happens is that Smart Text Selection uses machine learning to predict the full entity. So if I click on Flour in this particular example, it basically selects the whole name, Flour and Water, which is a restaurant,
and shows me the action for reserving
a table using OpenTable. Now, when I click on
OpenTable, it immediately takes me to OpenTable's Flour and Water page, where I can
book the table immediately. So these are all ways in
which App Actions surface across many different
places for the apps that you have installed on
your device, which thus leads to more re-engagement
because we understand the type of capabilities
that your apps have and continue to expand upon. But what about apps
which are not installed? App Actions also shows up
on the Play Store page. There, if I type in a query like "Lady Gaga" or, you know, "flight tickets," it shows me various actions
that apps have, both for apps that are installed
on my device and apps which are not installed. This allows for great
discovery, because you actually understand the capabilities
of the apps that can actually directly act upon the
query that you provided. In this case, of Lady
Gaga, you get a bunch of suggestions for apps
which can do everything from play music to show lyrics. And if that particular show
lyrics app is not installed, when I click on it
it will basically give me the option
to install that app, and then it will
directly take me to the deep link for Lady Gaga. Now, App Actions is
very interesting. It is part of the larger Actions on Google framework. So we have created a common
development mechanism, which includes built-in intents,
which is how you actually show the capabilities of your app. That is the API to show the
capabilities of your app. And that same built-in
intent mechanism can be used for App Actions,
conversational actions, vertical programs, and
so on and so forth. We have created a foundation layer that enables you
to create actions across multiple
platforms, operating systems, and surfaces. We'll go a lot more into that. MARIYA NAGORNA: Great. So those were some
really cool examples of how the actions
of your Android app can be featured in suggestions
across the Android surfaces. Now let's take a look at some
of the steps of creating app actions to achieve just that. So first, using tools you're
all familiar with for building and publishing Android apps-- namely Android Studio and
the Play Developer console-- you'll create an
actions.xml file. And this is the central
place for all of the actions that your Android
app can support. There are two key
pieces of information that you'll need to provide for each action. And that is the intent
and the fulfillment, which describes the what
and the how of your actions. So let's dive deeper into
some of these concepts, and we'll start with built-in
intents, which is how you indicate what your action does. My team at Google has
built and published a catalog of built-in intents,
and as Krishna mentioned, this is one of the core
foundational elements of app actions. If you think about the way
users ask for information, there's a myriad of
linguistic variations that they can use to
construct their query. So for example, they can
say calming activities, or they can say
breathing exercises to relax before my presentation. Or they can ask for 10-minute meditation techniques. All of these queries indicate that the user would
like to de-stress, or they'd like to calm down. So we've designed built-in intents in such a way that
they abstract away all of the natural
linguistic variations, and we pass on only the relevant
information to your application from the user's query
using parameters. And so to give Google a
deep understanding of what your Android app can
do, what you need to do is register for the built-in intents that are relevant to your app, and this will help Google show
the right actions to users at the right time. Here are some of the intents
that we're working on now. And the ones that have
a star next to them are available for developers
to try out today and preview. And throughout the
remainder of the year, we'll be continuously extending
this catalog to cover as many of your use cases as we can. And if you have an Android
app where we're not covering the use case currently,
please do give us feedback. We take it very
seriously, and we'll show you how to do that
at the end of this talk. Now let's talk
about fulfillment, which describes how a user
would invoke a specific action within your app. So when users express an
action that they'd like to accomplish-- a task or
an intent that they have-- you can help them fulfill that
action by specifying deep links into your Android apps. So in your actions.xml,
you can define the mapping between the built-in
intents and the deep link URL. And this will enable Google
to show the right content to your users from
your Android app and deep link them directly
into the experience that they seek in
that specific moment. So we have two models
for fulfillment. In the first model, which
is the URL template model, we construct the deep link URL based on the user's query parameters. And your actions.xml
will tell us how to map the parameters
from the built-in intent to the URL parameters. And this model is ideal for action-centric apps with deep-link APIs.
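As a sketch of this first model (using a hypothetical food-ordering example; the URL and exact attribute names are illustrative), the parameter-to-URL mapping might look like this:

```xml
<actions>
  <action intentName="actions.intent.ORDER_MENU_ITEM">
    <!-- {item} in the URL template is filled in from the
         matched built-in intent parameter -->
    <fulfillment urlTemplate="https://example.com/order{?item}">
      <parameter-mapping
          intentParameter="menuItem.name"
          urlParameter="item" />
    </fulfillment>
  </action>
</actions>
```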
In the content-driven model, we discover the fulfillment URL through your web content
or the structured data that you give us. And then based on
the user's query, we find the relevant
content and then use your actions.xml to
connect the content to the appropriate intent. So let's see an example
of how this would work, and we'll use the
Coursera Android app, and we'll show you
how they registered for the action of TAKE_COURSE. And the reason that
they want to do this is so that when a user
comes to their Android device and wants to know information about courses or education, we can ensure
that the Coursera app is shown to them as a suggestion. And for this demo,
we'll also show you how the suggestion will show
up on the Google Assistant. So for the built-in
intent, we created one, and it's called TAKE_COURSE,
and it takes a single parameter of the type course. And this is the parameter that, if you register for this particular intent, you would need to handle from the user's query. And so for example, the user
might say, take a machine learning course with
Coursera, and the assistant will know that machine learning
is of type course.name. Or, if they ask to find data science courses on Coursera, the Assistant will match the data science parameter to the correct type, course.name, because that's what the course is about.
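Conceptually, the match hands your action a structured payload along these lines (an illustrative shape, not an exact wire format):

```json
{
  "intent": "actions.intent.TAKE_COURSE",
  "parameters": {
    "course.name": "data science"
  }
}
```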
Now, you might wonder: how exactly does Google know this particular mapping? So in Coursera's
case, each course page is annotated with
schema.org markup, and there is a page
called Machine Learning, and it is associated
with a specific URL. So this is an example of the
content-based fulfillment that we talked about a
couple of slides back.
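For illustration, the schema.org annotation on such a course page might look roughly like this as JSON-LD (a sketch; Coursera's actual markup may differ):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Course",
  "name": "Machine Learning",
  "provider": { "@type": "Organization", "name": "Coursera" },
  "url": "https://www.coursera.org/learn/machine-learning"
}
</script>
```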
And so to bring these two things together, we have our actions.xml, which ties them together in a really simple way. So first, you register for the
built-in intent of TAKE_COURSE, and then we take the
course parameter, and map it to the main web page
URL for Coursera for learning. And that's it. It is that simple.
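Sketched out, the whole file could look something like this (element and attribute names approximate the preview format; the URL pattern is illustrative):

```xml
<actions>
  <!-- Register for the TAKE_COURSE built-in intent -->
  <action intentName="actions.intent.TAKE_COURSE">
    <!-- Map the course parameter onto Coursera's /learn pages -->
    <fulfillment urlTemplate="https://www.coursera.org/learn{/course}">
      <parameter-mapping
          intentParameter="course.name"
          urlParameter="course" />
    </fulfillment>
  </action>
</actions>
```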
So let's now see how this app action will show up on my Pixel device in the Assistant. So let's go to the Assistant, and we'll ask it about
machine learning. Machine learning. Great. So what we see here is a basic
card about machine learning from the Assistant, but now
below in the suggestions, we see the TAKE_COURSE
action that we just created, and we know that it's from Coursera because of the app's icon. Now let's see what happens
when I tap into it. Awesome. So it takes me to
the Coursera app, directly into the Machine
Learning course page that I just asked about. And from here, I can
do things like enroll, and I can also explore
the rest of the app. So as a user, I found it really refreshing how quick and simple that experience was. In just two taps, I was
able to enroll for my course right away. And for Coursera, it was
also fairly simple, right? Just by creating a
single actions.xml file, Coursera can now get users
to discover and reengage with their Android app
across multiple touch points on the Android device. But there's one caveat here. This will only work
on Android devices. Today, we know that users
are beginning to increasingly turn to new device types to
accomplish their daily tasks. And so in this next
part, we'll talk about how you can
go beyond Android and how you can reach
these users that are using these new devices
that don't run Android. KRISHNA KUMAR: Thanks, Mariya. That was very cool. Just a single file-- actions.xml-- and you
can get your actions surfacing up in various
touch points across operating systems. Works great for Android
devices, and you can bring up the content and
capabilities of your app. But what about all these
other devices which are coming up in the market? You cannot go to an electronics
store without seeing smart speakers, smart TVs, smart
displays, smart whatever. Right? And all of these are
Google Assistant enabled, but these are not Android
devices, or at least many of them are not Android devices. How do you, as a
service provider, bring your services--
bring your functionality-- to all these new devices? And first of all, as
an Android developer, why should you even
care about that? Let's talk a little
bit about that. So, Google has been doing
a bunch of user research on how people interact,
get information, and get their tasks
done throughout the day. And our research has shown
that the ways people interact to accomplish their tasks throughout the day are very different. And the types of tasks that they do are also very different. For example, in the
morning when you're quickly cooking breakfast and
you're in a rush to get out, you might quickly catch up
on news, find the traffic, find the weather, check your
calendar, and then rush out. And when you're commuting, you
might be checking the news, or listening to
music, navigating, and that's very
different from when you're at home in the evening-- relaxing, cooking,
dinner, or watching the TV with your family and you want
to find out the latest buzz Netflix show, right? The types of devices
you use is different. The types of information
that you want to get are very different. And the contexts that you're in--
you're relaxing on the couch, or you're driving. Those contexts are
also very different. And increasingly,
users are starting to use many different
types of devices. When you're cooking breakfast-- when your hands are greasy,
you don't necessarily want to pull out your phone-- your brand-spanking-new, expensive phone. And when you're
commuting, you might want to get some information
on the latest score, or what your day looks like,
but your hands are busy. Or at least you shouldn't
be using your Android phone in those contexts. When you are going
for a jog, you might want to check
your calories, or you might want to check
the steps that you've done. But again, you may not necessarily be carrying your phone. And finally, when you're
in front of the TV and want to get the
latest Netflix show, the phone may not necessarily be the best device for that context. Users are increasingly becoming
more sophisticated on how they interact with
devices, and they expect that the devices will
provide the right information to them, as well as
help them accomplish their tasks in the
most hassle-free, and the most natural
way possible. They don't want to
contort themselves into trying to
use a device that's not right for that
particular context. And Google Assistant
has been spending a lot of time thinking of
these critical user journeys and being there for the user
in all of these contexts. Google Assistant
enables you to interact with users in
completely new ways using a combination of voice,
rich UI, cards UI, and many of these inter-modal behaviors. It enables you to create
fundamentally new experiences, thus reaching and engaging
your users in a different way for these different contexts. And the Google Assistant is
now available across more than 500 million devices. Everything from phones, to
smart speakers, to smart TVs, to smart displays
and headphones. So you can reach your users in the right context, in the right place where they are, and that increases the breadth of your services' reach across all of these different contexts. Now, building a conversational interaction may seem complex, but we have built
a cutting-edge technology stack to take away a lot
of that complexity. We have built all this natural language processing-- individual voice recognition, machine learning algorithms for inflection, different voice types, et cetera-- and, using Google's core assets in the Knowledge Graph, identity, and payments, put together a technology stack so that we can make it easier for you as developers. Can you imagine building
all of this infrastructure by yourself for
every single app? It's just not feasible. So we have built this
technology stack, and we invite you
to build on top of our technology stack and focus on building compelling new
experiences for your users. Conversational
actions are, again, part of the larger Actions on Google framework. You will use the same built-in intents that Mariya talked about-- the ones you use for App Actions-- for conversational actions as well. And during the rest
of this session, we will see some of the
foundational elements that enable your Android
apps and services to work more seamlessly with
your conversational actions. But you're probably thinking
now, oh man, here's Google asking me to rebuild
my whole app for voice. Come on. OK, so I'm not asking you
to rebuild or replicate your whole app. I want to introduce this concept of a companion conversational action. A companion action basically provides a snippet of information, or helps the user complete a specific task from your service, in a different context. The best way to think about
this is through an example. Let us take the
case of GoalAlert. GoalAlert is a pretty
popular soccer app, which has rich information on
teams in Europe, league tables, points, all of that good stuff. Fantastic. I use it every day
during the World Cup. All right. So what happens when I go to a
completely different context? I'm driving to work
and I just want to catch up with the
latest score as to what happened yesterday. Keep in mind, an Android
app such as this, which is not a messaging or media app, will not show full-screen
on Android Auto. So how do you actually get
this information to the user and extend that reach
to a new context? So what GoalAlert has done is-- they found the piece
of their service-- such as checking the latest score-- that makes sense for the user in a completely different context, and they have created a simple conversational action, which provides the response for what are the results of the Premier League, or what happened
to that match yesterday, or what are the scores? So they created a rich
conversational-plus-visual response, which can be used
across all Assistant devices. Let's take another example-- Todoist. Todoist, as everybody knows,
is a popular, to-do app. It's a rich to-do app with
more than 10 million downloads. It has all sorts of
detailed task information, different ways you can
manipulate it, et cetera. Great. What do you do when
you're cooking breakfast, and you just want to find out what tasks you have due today? Here again, Todoist has thought
through the critical user journeys that their users
have and figured out that "What tasks do I have due today?" is a service which goes
across multiple contexts, across multiple devices. So they've created a
conversational action to provide a single,
easy response to, what do I have due today? So here's how the developer journey goes for building these actions. For App Actions, you're familiar with this: you've created an Android APK, you're going to add an actions.xml
file using Android Studio. You will test it out
in Android Studio, and you will publish it using
the standard Play console mechanisms for publishing
your Android app. Let's take that
for conversational. For conversational,
you're going to, again, use the same built-in
intent mechanism to build conversational actions. You will build simple
conversational actions using Dialogflow,
which is a tool provided by Actions on Google. And you'll go to the Actions
on Google Actions console to manage and publish your conversational agent and conversational actions. Let us see how all of this
works together from a high level architecture perspective. If I'm creating an
app, and let's say this is Krishna's To-Do App. It's a task-based app. What I'm going to do is I'm
going to put up a web server-- Node.js or Apache or whatever--
connect it to a database where all of my users'
to-do tasks are stored, then I'm going to create an
Android app, which basically talks to the web server
to get that user's information provided back to my
app, which will render it based on the UI design of my app. The same concept works for
our friends down the street, and it also works for a web-based interface to your app. When you then go to
conversational agents, what you're going to
do is you're going to create a conversational
agent, which basically will talk to your web server
using the same API, same infrastructure that
you've provided. What happens is, for a query of what tasks do I have today, the Assistant will take care of all of the natural language processing and query understanding-- it understands what the query is-- and then invokes your agent. Your agent will then basically
understand that query, and now your agent
has to actually do a fulfillment for that query. So to do the fulfillment, what
you're going to do is use a webhook, which will connect to the web
server on your back end, and then provide the
response back to the user. The key point here is
that you, as a developer, are using the exact same
infrastructure, API, identity mechanisms that you
already used for your apps, to also provide a
conversational response for all Assistant-enabled devices.
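Here is a minimal sketch of that idea in Node.js with the actions-on-google client library. The intent name, backend endpoint, and response wording are hypothetical; the point is that the webhook simply calls the same API the Android app already uses:

```javascript
const express = require('express');
const { dialogflow } = require('actions-on-google');

const app = dialogflow();

// "What tasks do I have due today?" -- Dialogflow has already done the
// natural language understanding; we only fulfill the matched intent.
app.intent('GET_TASKS_DUE_TODAY', async (conv) => {
  // Reuse the same backend API the Android app calls. The access token
  // comes from account linking; global fetch assumes Node 18+.
  const res = await fetch('https://api.example.com/tasks?due=today', {
    headers: { Authorization: `Bearer ${conv.user.access.token}` },
  });
  const tasks = await res.json();
  conv.ask(`You have ${tasks.length} tasks due today: ` +
      tasks.map((t) => t.title).join(', '));
});

// Mount the Dialogflow fulfillment app as Express middleware.
express().use(express.json(), app).listen(8080);
```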
Now, let's talk a little bit more about this foundational layer. Mariya explained a little
bit about built-in intents, but we also provide a certain
set of foundational elements, which basically lets
you create easy, seamless interaction
between your Android apps, and your conversational agents. Let's, again, go back to
Todoist, our to-do app, and let's think a little
bit about account linking. As an Android developer, especially if you have some sort of service back end, you probably have some form of an
account management system. Todoist uses various
identity mechanisms, including Google as an identity. And as a user, when I start
using Todoist on Android, I can create my
identity using Google or through some other
identity mechanisms. When you go to the Assistant,
and when you set up Todoist for the first
time on your Assistant, you will have to
link your Todoist identity to the Assistant. Once you do the
linking, you can use your conversational response
across any Assistant-enabled device without having to
log in into Todoist again. The key point here is
that in Actions console, you can link your accounts so
that things work seamlessly. And with one step of account
linking for the user, the user can use Todoist across
any Assistant-enabled device. That was account linking. Let us talk a little bit
about Play, entitlements, and seamless digital subscriptions. The interesting thing is that both the Assistant and Android use Google Play for
inventory and order management, and entitlements management. Because of the fact
that both of them use the same Google Play
infrastructure, entitlements, subscriptions, in-app
purchases work seamlessly across both Assistant
and Android. Let us take the case
of Economist Espresso. "Economist" is a great
magazine, and they have an app called
Economist Espresso, which has a subscription feature to
get their premium articles. I went and logged
into my Android app, and paid the
subscription in Android. Now, when I go to the Assistant
and use Economist Espresso, I immediately get access to
all of their premium content, because the Assistant also
knows that I have a premium subscription to
Economist Espresso, and hence I'm eligible for
all of the premium content. And this works in reverse, also. You can purchase subscriptions
in the Assistant, and it will work on
your Android app, also. And the way this works, again, can be configured in the Actions console. The Assistant first will
check whether you have access to these premium subscriptions. You can then purchase content
using the same Inventory and Order Management System. And then your
premium subscriptions work across the Assistant,
and, as mentioned, also across Android. So in all of these scenarios
about companion conversational actions, these
developers-- what they did is, they thought of this
very specific use case and built a
conversational action. So here are my principles
for building a companion conversational action. You're not just an app provider. You are a service provider. And as a service
provider, you want to take your service,
your functionality, to many new contexts. So think beyond the app
and think of the service that you're providing. Also, what is the critical
user journey for your service? Like, how do people actually
interact with your service? In what context? In what device? And can you think of
new contexts and devices in which your service actually
makes sense to the user? And also, you're going to
expand the reach of your service quite a bit, but please do
not think of just replicating your whole app for voice. You need to think of the
specific service that makes sense for the user. So my challenge to you is-- what are the compelling
new experiences that you can create by
integrating with the Google Assistant? Please join these
number of providers who have already
created an Android app, but have also
expanded their reach to a number of
Google-enabled devices by building a
conversational agent. MARIYA NAGORNA: Thanks, Krishna. So that was a really
high-level overview of how you can think
about companion actions to your Android app. How to really think
about the experience that you can provide your users
for those times in their life when their Android device
just might not be handy. So there's two such times that
come to my mind right away. First of which is while I'm
driving and my full attention needs to be on the road,
and my Android device is usually in my pocket. And then the second
is while I'm cooking, and my hands are usually
either busy or they're dirty, and it's really
inconvenient to have to keep going back and forth
between washing my hands, unlocking my phone, reading
the recipe, making my food, washing my hands, unlocking
my phone-- you get the idea, right? This is a normal
cooking process. So I have two devices to
help me in these situations. I have the Assistant
in my car, and I have the Smart Display at home. So I'd like to go back
to the Coursera example that we introduced
earlier, and I'd like to show you how
we helped them take their app beyond Android so
that it could accompany me throughout my day
with the situations that I just described. So we all know machine
learning is a pretty hot topic right now. And so as we saw earlier,
I did register for it on Coursera. And I like to go through
my courses after work. So I like to listen
to the course podcast while I'm driving in my car,
and then when I get home, I like to continue watching
the video for the course that I started to
listen to in the car. Now neither my car nor my smart
device at home runs Android. Both of these devices only
work with the Google Assistant. And so let's see how we helped
build the Coursera companion app that works on
the Google Assistant. Here's the actions.xml
that we showed you earlier where they registered for the
TAKE_COURSE built-in intent. And now after their APK
was approved and published in the Play Developer
console, there appeared an option to enhance your actions with Actions on Google. And clicking there lands us in the Actions console project claiming page. Now, this console is
a really focused way to develop and manage your
companion action for the Google Assistant. And we're very excited
because this week, we announced a major
redesign of this console, and it basically helps
you do three main things. It first allows
you to configure, set up the metadata
and the directory listing information
for your companion app. And then it allows you to manage
the development, the testing, and the deployment process
in a very fine-grained way. And then once your
action is published, it gives you
analytics so that you can track how your
action is doing out there with real users
in the real world. Now, there's a lot of
magic that happens here during this project
claiming stage. For example, we automatically
import most of the information that you provided
in the actions.xml to make it easier for you to
develop your companion app. Now let's see what the Actions console looks like for Coursera
the claiming process. So this is the Actions console. And we can see that it's already
set up with the Coursera demo app. Let's look into this
Actions section. And we see here the
TAKE_COURSE built-in intent that we had in Android
was automatically imported from
actions.xml when we went through that claiming process. And when we click
in, we'll hopefully see some magical
information show up here. There you go. So we see all of
the triggering-- we see all of the
important information that you would want to
know about this action. So for example, the
triggering phrases up there, these are sample
invocations that users can use to invoke this
particular action. And we see the
parameters, as well as the fulfillment information
that you provided us in your actions.xml, and
that was automatically imported into here. Now that we have this basic
wiring set up, what we can do is we can create a rich response
using Dialogflow to cover those two cases that I described earlier for Coursera. So to listen to my course
podcast while I'm driving and then to continue watching
my video when I get home. And to do that,
you would just add the fulfillment of
conversational type here, and this will land
us in Dialogflow, where you can specify the
details about the fulfillment. Now, because we're a
bit short on time today, I won't show you how you can
build fulfillment from scratch, but there are a
few key components that I'd like to walk
through so that you can see how you do this for yourself. So this is the Dialogflow tool, and this is the JavaScript for our fulfillment code. We see here that we first use
the TAKE_COURSE built-in intent that we walked through
earlier, and then we reference the
course parameter to understand what is the name
of the course that is coming in from the user's query. We also created a function that
calls the back end Coursera service to get the
details from the course that the user is asking about. And then when we get the
response, what we can do is parse it. And for this particular
example, we'll just create a simple card that
shows us all of the course information, and then we
use our media response API to return the audio or
the video of the course based on what device
I'm running on.
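Pieced together, the fulfillment might look roughly like this with the actions-on-google library (the intent wiring, parameter key, and the fetchCourseDetails helper are illustrative stand-ins, not Coursera's actual code):

```javascript
const { dialogflow, BasicCard, Image, MediaObject, Suggestions } =
    require('actions-on-google');

const app = dialogflow();

// Hypothetical helper standing in for a call to the existing
// Coursera backend service (global fetch assumes Node 18+).
async function fetchCourseDetails(name) {
  const res = await fetch(
      `https://api.example.com/courses?name=${encodeURIComponent(name)}`);
  return res.json();
}

app.intent('TAKE_COURSE', async (conv, params) => {
  // The built-in intent hands us the course name from the user's query.
  const course = await fetchCourseDetails(params['course']);

  // A simple spoken/text response comes first...
  conv.ask(`Here's more information about ${course.title}. ` +
      'Do you want to start the course?');

  // ...then the card with the course details...
  conv.ask(new BasicCard({
    title: course.title,
    text: course.description,
    image: new Image({ url: course.imageUrl, alt: course.title }),
  }));

  // ...and a media response: audio in the car, video on a Smart Display.
  conv.ask(new MediaObject({ name: course.title, url: course.mediaUrl }));
  conv.ask(new Suggestions('Yes', 'No'));
});
```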
Now, one really amazing thing about Dialogflow is that it's deeply integrated with the Actions console. And so this provides a really
nice development environment where you can do
things like test how this would work on
the Google Assistant directly from this
UI, just by simply clicking in this section. So let's see how this
basic card that we just created would look in the simulator. So this is the
simulator, and we also have a Test On
Device option so you can test during your
process to see how it would work on the device. But for now, let's see how
this would work when we invoke our Coursera demo app. So we'll say, ask Coursera demo
to start my machine learning course. GOOGLE ASSISTANT: Sure,
here's the test version of Coursera demo. SPEAKER 1: Here's
more information about machine learning. Do you want to start the course? MARIYA NAGORNA: Great. So what we see here is the
card that we created, right? So all of this description
information down here, the title of the card,
this image, the suggestion chips down there-- they were all specified
in the fulfillment code that I showed you
in Dialogflow. So now let's say yes to
start the course podcast. SPEAKER 1: Sure, here's your
course on machine learning. SPEAKER 2: Our first
learning algorithm will be linear regression. MARIYA NAGORNA: Great. So this allows me to listen
to the course podcast while I'm driving. Now let's imagine that I
started doing that in my car, and then when I get home, I want
to continue watching my video on the Smart Display device. So let's try it here
and hopefully it works. OK, Google, continue my course. GOOGLE ASSISTANT: Sorry, I don't
know how to help with that yet. But I'm trying to learn. MARIYA NAGORNA: OK, Google. Continue my course. GOOGLE ASSISTANT: Sorry, I don't
know how to help with that. I'm still trying. KRISHNA KUMAR: Third try. MARIYA NAGORNA: OK Google,
continue my course. GOOGLE ASSISTANT: OK,
let's get Coursera. SPEAKER 1: Welcome
to Coursera, do you want to continue your
machine learning course? MARIYA NAGORNA: Yes. SPEAKER 1: Sure,
here is your course. SPEAKER 2: [INAUDIBLE]. MARIYA NAGORNA: Thank you. [APPLAUSE] Thank you, demo gods. All right, so let me jump in on
a quick recap of what we just did. We showed you how to
create actions.xml and how to use built-in
intents to enable actions in your Android app. And then we showed
you how by doing so your app can
now be discovered across the many
surfaces on Android. Now, we briefly glossed
over these two steps, but we had our colleagues
give a whole talk on how to build app actions
two days ago, and if you missed
it, don't worry, please do go ahead and
watch the video on YouTube. Now in the main
part of the demo, we walked through how to claim
your Actions on Google project so that you can enable your
actions to work on new devices. And finally, we showed
you how to build and test a simple companion app
that works great with audio and video on a Smart Display. Now, we certainly think
this is very cool. We're very excited, and we tried
to make it really easy for you so that a companion app like
this is just another interface to your existing service. Building experiences
that work on devices which don't run Android
is not about bringing up completely new
infrastructure from scratch. It's more about just
extending your apps to be more action-centric and
more focused on connecting with your users throughout
their day, wherever they are-- whether they're at home
or they're on the go. So with that, let me
invite you to visit our web page at actions.google.com
to learn more about the concepts and the
tools that we covered today. And we want to hear from
you, so please do sign up at g.co/AppActions so that
you can give us feedback on our built-in intent catalog. And also by signing up,
you'll get notified when app actions become available. But most importantly, what
we encourage you to do today is to start thinking
about the experience you can provide your users
for those times in their life when their Android
device just might be impractical or just
inconvenient to use in that moment. Take the next step today
and build companion actions on devices that work for
the Google Assistant. We want to thank
you for joining us. And if you have any questions,
please come by our office hours in the sandbox area. Please enjoy the rest
of your time at I/O. [MUSIC PLAYING]