[GOOGLE LOGO MUSIC] AYLIN ALTIOK: Hi, everyone. Welcome to our session. My name is Aylin, and
I am a product manager on Search and the Assistant. WILL LESZCZUK: I'm Will, and
I'm an engineering manager on Search. AYLIN ALTIOK: Today
we will talk about how to enhance your presence on
Google with structured data. This talk is for
content creators who want to bring rich experiences to Search and the Assistant. If you watched our
announcements yesterday, you may already know we
launched two new structured data types, How-tos and FAQs. Today we are going to cover
these use cases in depth and show you how to get started. We will also do
recaps on use cases we introduced last year like
News, Recipes, and Podcasts. Before we get started,
let's take a look at the content ecosystem. For over two decades,
Google Search has been helping creators
get discovered and helping users find and engage with great content. This has been very useful given
the amount of content that's online, and Google
Assistant, which is helping users
get things done, has also been helping
users find and engage with great content, and we are seeing this increasingly. Just last year, we have seen the Assistant's active usage grow four times. It's available on more than 1 billion devices, in nearly 30 languages, across 80 countries. And let's take a look at how
users engage with content. When we look back
20 years ago, users were mostly consuming
content on the web, and they were mostly
on desktop machines. However, the way users
are engaging with content has been constantly evolving. In today's world, we see
users are consuming content on the web, mobile apps, and
the conversational Assistant. They are using multiple devices and engaging with content in more moments than before. Let's take a look at a typical user journey, and let's see how they engage
with content in a given day. In the morning,
users can start their day by asking the Google Assistant for the latest news and hear fresh content
from many news publishers. During a commute,
users can ask the Assistant to play the latest episode of their favorite podcast while they keep their eyes on the road. And during the day,
users browse and find information on many topics
by using Google Search. For example, Mother's
Day is coming up soon, so users could search for
flower delivery timelines from different providers, and
if they are too late to order, like I always am,
they could start searching for last-minute
gift solutions, like how to make
origami flowers at home. And in the evening,
users can ask the Assistant to find recipes and get step-by-step guidance while they cook
with their family. Reaching users in all these moments and on all these surfaces is a great opportunity for content creators, but building and maintaining custom experiences on each of these surfaces and devices could add extra overhead for you. Solving this problem
is our motivation for working on
the structured data program. You focus on creating the best
content using open standards, and Google will help you
reach your users across Search and the Assistant without having
to build a custom experience on multiple platforms. Last year we announced
News, Recipes, and Podcasts. The idea was that by using structured data on your website, your content becomes eligible for rich results in Search and voice-enabled
experiences on the Assistant. Let's take a quick look at
how you can start with these. By publishing your podcast content with an RSS feed, your users can find
and engage with it on Search and the Assistant. By using AMP and implementing NewsArticle markup on your site, your content becomes eligible to get rich results in Search and get surfaced on the
Assistant automatically. And by just implementing
the recipe schema.org markup on your website,
your recipe content becomes eligible
to get rich results on Search and a voice-enabled Assistant experience on devices like the Google Home Hub. You get out-of-the-box machine
learning and speech recognition technologies without
doing any custom work. We have been seeing
great publisher success with the structured data program. You can see a few quotes from recipe publishers highlighting the ease of engaging with their users on the Assistant and getting an out-of-the-box machine learning experience without doing any additional code work other than marking up their site. And that's why we continue
working on our structured data program. I am happy to let you know that
our Recipe schema.org markup now supports VideoObject. With this, you can mark up the video content on your site, and it will show up on the Google Home Hub, where your users can engage with your content further.
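As a rough sketch of what that looks like in practice (the recipe, URLs, and dates below are made-up placeholders rather than anything from the session), a recipe with video is just a Recipe object carrying a VideoObject in its video property:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Pancakes",
  "image": "https://example.com/images/pancakes.jpg",
  "recipeIngredient": ["2 cups flour", "2 eggs", "1 cup milk"],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Whisk everything into a smooth batter." },
    { "@type": "HowToStep", "text": "Cook on a hot griddle until golden on both sides." }
  ],
  "video": {
    "@type": "VideoObject",
    "name": "How to make classic pancakes",
    "description": "A short walkthrough of the recipe.",
    "thumbnailUrl": "https://example.com/images/pancakes-thumb.jpg",
    "contentUrl": "https://example.com/videos/pancakes.mp4",
    "uploadDate": "2019-05-01"
  }
}
</script>
```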
Not only are we investing in our existing structured data types, but we are also very excited to announce two new types at this I/O. These will enhance your
content on Google Search and the Assistant. So we announced How-to. How-tos are step-by-step tutorials for real-world tasks, like how to tie a tie, that walk users through a guided journey. And FAQs are questions that
your users are commonly asking about your
product or a service. Let's take How-to as
an example, and let us show you what the consumer experience looks like. For our demo today, we
created a sample website for making origami flowers. We added two tutorials to our website, and other than that, all we did was use How-to markup on this site. Let's see how this
transforms the appearance on Search and the Assistant. Let's take Search as an example. By just adding markup to our site, the appearance on Search became richer, with a preview of the tutorial's steps along with a rich carousel of images. And on the Assistant,
when users ask for things using natural language, like, how do I make origami, our system will find and surface the right content. Let's see how our two origami tutorials show up on the Assistant. Let's switch to the live demo on the smart display device, and let me show how
this works in real life. So I'll go ahead-- GOOGLE VOICE: The mic's back on. AYLIN ALTIOK: Hey Google, how
do I make origami flowers? GOOGLE VOICE: Here's some
instructions to explore. AYLIN ALTIOK: So you can see the two tutorials we added to our website with just markup are now available and surfaced on the Assistant. I'll go ahead and pick one, and you can see now that, with the markup, our site's description is displayed in the Assistant along with the supplies and the step count. So we can go ahead and
start the tutorial. GOOGLE VOICE:
There are 13 steps. I'll read them one by one. With-- AYLIN ALTIOK: With this, the Assistant will automatically start talking through each step. So users can go ahead and engage with it using just voice. Hey Google, next step. GOOGLE VOICE: Second step. Orient the triangle with
the long side along the-- AYLIN ALTIOK: And you see how each of the steps we marked up is showing up. Users can also engage with it
by just tapping the screen. And as you see for
each step, our images are showing up along
with the supplies. And users can also engage with it without you doing any custom speech recognition work, like, OK Google, last step. GOOGLE VOICE: Grab a pot
and some crinkle paper, and plant your succulent. AYLIN ALTIOK: So you get the idea of how your content shows up in rich experiences on the Assistant, with out-of-the-box voice and speech recognition. And users can go back and engage more. And this will also show up later, so they can continue from the tutorials in the future. Let's switch back to-- GOOGLE VOICE: Mic off. AYLIN ALTIOK: --slides, please. By just adding a bit
of markup, your content becomes eligible for
rich results in search, speech cognition
and major language understanding on the
Assistant, and discovery on all these devices and surfaces. Now Will is going to show us
how to get started with markup and how easy it is to implement. WILL LESZCZUK:
OK, so we're going to take a look at how Aylin
built that demo we just saw, and we'll walk through the
entire end-to-end developer journey here. What you got on Search plus
that whole experience, the voice guidance as well
as the navigation, all that stuff is powered
by the same markup. And before we jump
into that, I just wanted to touch
again on schema.org. Aylin showed us a
couple examples earlier in the presentation of
different types of objects. And schema.org, if
you're not familiar, is just an open
standard that defines a vocabulary you can use to add
structured data to your site to add semantics. There's a couple of different
ways you can do this. You can use JSON-LD or
you can use Microdata. I like JSON-LD. It's a little bit
easier to work with. So that's what I'll use today. And a couple of things
just to call out here: those top two highlighted lines. The first one tells us we're using JSON-LD, and the second one tells us that we're using that schema.org vocabulary, which Google understands.
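Since the slide itself isn't reproduced in this transcript, those two highlighted lines correspond to the script tag's type attribute and the @context property; a minimal sketch looks roughly like this (the Recipe type and name are just stand-ins for whatever you're marking up):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Classic Pancakes"
}
</script>
```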
A couple of other things to call out here: structured data is not just for your text content. If you have great multimedia assets like images and videos, you can also specify those in the structured data, and we'll be able to do some really cool things with them, as you'll see in a moment. So please go ahead and
mark this up as well. With that, let's switch
over to the laptop, please, for the demo. OK. And this is the demo
site that Aylin mentioned a couple of moments ago. That How-to demo was built by
adding markup to this page, how to make an
origami succulent. Scrolling around just
a little bit here, we have all the
tools and materials you'll need to make this as
well as the individual steps and some really nice
instructional images to go along with that content. So we're going to
add markup to this, and I'll show you how to use
the search tools to validate your implementation and get a
nice preview of what this might look like on the search page. The way that I kind of like
to do this is to use this tool called the rich results tester. There's a couple different
things you could do. You could develop locally
and then stage your URL so that the tool can see them. Or you can just edit
code live in the tester. I think that's a little
easier, especially if you're using
JSON-LD, so we're going to do some
live coding here. And then you would just go
ahead and copy that back to your website. It also will give
you, as I said, a kind of a rich preview
of what you'll get on Search, which is really neat. If you're familiar with the
structured data testing tool, by the way, this is effectively
the successor to that. So you can go ahead and start
using the rich results tester tool. OK, so we'll open this up and
switch over to the Code tab. And what I would do is
go ahead and just put in a skeleton of my document to
make it easy to work with here. I've done that and another tab. So I just put, again, a
skeleton kind of HTML document, and then, again,
those hallmarks. We have the LD JSON script tag-- this will be in the head
or the body of your page. And we're using the
schema.org vocabulary. Now we're going to use the
new How-to object, which we announced today. And just give our How-to a name, how to make an origami succulent.
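At this point the document is roughly the sketch below; note that it deliberately has no steps yet, which is what triggers the error described next (the HTML wrapper is trimmed down for brevity):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>How to make an origami succulent</title>
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "HowTo",
      "name": "How to make an origami succulent"
    }
    </script>
  </head>
  <body></body>
</html>
```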
We'll hit Test Code, and with any luck, we should have a
valid How-to object. And we don't, right? We have an error. But that's OK, because the rich
results testing tool detected that our How-to is on
the page, but we're missing a required field. You can't have a step-by-step
How-to without having steps, obviously. So this is what that's
telling us here. And then we're going
to go ahead and we're going to add in those steps,
and then we'll have a How-to. Again, I've kind of just
kind of put that code sample into another tab,
and I have the steps here. This is going to be a collection
of How-to step objects. And this is all
documented on schema.org. I'll take these, and
I'll bring these up into our entire document here. Hit Run Test. Give it a minute. And now we have a valid How-to.
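The pasted-in steps are an array of HowToStep objects under the HowTo's step property, along these lines (the step names and text paraphrase the demo site, so treat them as illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to make an origami succulent",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Fold the paper into a triangle",
      "text": "Fold the square sheet in half diagonally to form a triangle."
    },
    {
      "@type": "HowToStep",
      "name": "Orient the triangle",
      "text": "Orient the triangle with the long side facing you."
    }
  ]
}
</script>
```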
OK, so this is the really cool part. I'll show you this new
tool, the search preview. You can hit this, and we
could see what this might look like on Google Search. And there are a couple of
things you might get here. This is kind of the visual
image forward treatment where you could see
the names of the steps that we added a moment
ago below the tiles. It doesn't look like much
yet because we haven't yet added the images, of course. But we also have something for
text forward How-to content, if you don't have those
great instructional images but you still have How-tos. We have another treatment,
which we might show you, which is this
accordion interface. And this also does
some pretty neat stuff. It'll show you some metadata. And as we go back
later and add that in, you'll see how this livens up. OK? So, we'll go back, and this
time what I'd like to do is add in the images. So again, I don't have
those URLs memorized, but I have them in
another tab over here. We'll go ahead and we'll
put this instructional image from the site, so it's going
to be the URL for this image. We'll put that in step 1. And we'll add the other image
for the next step to step 2. Run our test. View the search result.
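Concretely, adding the images just means giving each HowToStep an image property pointing at the instructional image URL from the page, roughly like this fragment (the URL is a placeholder):

```json
{
  "@type": "HowToStep",
  "name": "Fold the paper into a triangle",
  "text": "Fold the square sheet in half diagonally to form a triangle.",
  "image": "https://example.com/images/origami-step-1.jpg"
}
```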
And now you can see how these How-tos really come to life if you do have that great image content. So please go ahead,
mark them up. We think this looks
really, really great and really helps you enhance
your presence on Search. If we go back one more
time and take a look at the message we're getting
from the rich results testing tool, you can see that we
have a few warnings here. And so there is a lot of
other stuff you can add to your How-tos-- a lot of metadata: time to completion, description, and, again, those tools and materials we saw on the demo page. And when you go ahead and add all of those in, you'll see these optional field warnings go away.
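Those optional fields hang directly off the HowTo object; a sketch with the kinds of properties mentioned here might look like the following (values are placeholders, and the step list is cut to one entry for brevity):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to make an origami succulent",
  "description": "Fold a paper succulent you can keep in a pot on your desk.",
  "totalTime": "PT30M",
  "supply": [
    { "@type": "HowToSupply", "name": "Green origami paper" },
    { "@type": "HowToSupply", "name": "Crinkle paper" }
  ],
  "tool": [
    { "@type": "HowToTool", "name": "Scissors" }
  ],
  "step": [
    {
      "@type": "HowToStep",
      "name": "Fold the paper into a triangle",
      "text": "Fold the square sheet in half diagonally to form a triangle.",
      "image": "https://example.com/images/origami-step-1.jpg"
    }
  ]
}
</script>
```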
So what I've done is-- actually, what I'm about to do is go back to the rich results tester one more time, take the URL for that site, and validate the live implementation. Just getting in here. OK, great. Preview the result, and now
you can see all the steps here in the carousel. You can see this nice
metadata about the materials that you'll need to do the How-to, which will really help your user decide if this is the right content for them. And in this experience, you can
see that we have the estimated amount of time it will take
to complete the How-to, and, again, all those
tools and materials. And by the way, Google will
pick the right rich result type based on your user
and the context. So if you have images,
you still might get this accordion
interface, in which case the images will appear in
line in these sections here. So that's pretty much it. I think we did that in
about five or six minutes. And it's just one investment on
top of your existing content, and we think this is
a really great way to, again, help your results
stand out on Google Search. It goes without
saying that anything you do in the rich
results tester, if you do it just
like I did it here, won't actually affect
your search appearance. You have to copy it
back to your site. So don't forget to do that. And also head into
Search Console. And you could submit
your site map or URLs for re-indexing to
just give Google a hint, hey, there's
some stuff we might want to check out there. So go ahead and do that. Let's switch back to
the slides, please. Just a second here. Great. OK. So, to close the loop,
let's talk about metrics. Search Console is
a place you can go, and it will give you tools and reports for monitoring traffic and detecting issues with your structured data implementation on your site. This is kind of always
the place to go, and it's the place
that you'll continue to go to make sure
that you're using these new types correctly. So, along with the
markup, we're also introducing two new
things for How-to and FAQ. We have these new
search appearances, and these search
appearances will give you all those stats that
you know and love like impressions and clicks
and your average position for search results
that show with one of these new rich result types. So you can go ahead and you
can start checking those out as soon as your
content is indexed and you start getting
traffic on Google Search. Finally, make sure to
monitor your new enhancement reports when you have
the structured data. We'll also have one
for FAQ and How-to. And this basically shows you-- gives you a bird's-eye
view of the health of your implementation
for your site. Each of the green
bars there you see is, in this case,
a valid How-to, yellow would be How-tos with warnings, and red would be broken How-tos. So especially as you're adding
more content or more types of markup over
time, you're going to want to keep an eye on
these enhancements reports. And finally, don't
forget to check out the unparseable structured
data report, which hopefully is pretty self-explanatory. But this just went live,
I think, last week. So again, it's a good
tool to kind of get a grip on your site's
overall health. OK, and as Aylin
mentioned, we're also pleased to announce
that FAQ is launching today, and for FAQ you get all the same tools and support that we just saw for How-to. So we won't go through
that whole devex. But what we'll do
is a quick demo, and you get some really
cool stuff out of the box that I'm going to
want to show off. So going back to our demo site, in addition to the How-tos, we also added an FAQ page. And I just picked
a couple FAQs here. What's the point of origami,
and what's the right age to start making origami? And this is what you get
on Search and the Assistant. It's a little bit hard to
see, but on Search again, you get this new rich result type. And you get to bring
some of that content forward onto Search to help
your users kind of navigate. On the right, you get more
of a one shot answer format, where it will take
that FAQ content, and it will surface the
correct answer for the user based on their query. And there's more than
meets the eye there, so let's switch over to
the pixel device, please, and I'd like to do a demo. OK. So first I'll just
start out by querying-- asking the Assistant
what's the point of origami exactly the way that it's
put on the FAQ page there. Hey, Google? Hey, Google? Ask origami flowers what
is the point of origami? GOOGLE VOICE: According
to origami flowers, the goal of origami
paper crafting is to create an often
intricate design from a single sheet of paper
in a series of simple folds. WILL LESZCZUK: Great, OK. So it knew how to kind of-- again, no custom
action built for this. It just pulls the markup right off your page and serves it to the user as an answer. And we'll try the other
one just to be sure. So hey, Google, ask origami
flowers what's the right age to start making origami? GOOGLE VOICE: These
came back from a search. WILL LESZCZUK: Hey, Google,
ask origami flowers, what's the right age to
start making origami? GOOGLE VOICE: According
to origami flowers, many children in Japan begin
making origami as early as preschool, but
it is up to you to determine when your child
has the hand-eye coordination, manual dexterity, and
patience to practice this art. WILL LESZCZUK: Great. Sometimes this thing has
trouble with my accent-- New Jersey. AUDIENCE: [LAUGHING] WILL LESZCZUK: OK, so now this
is kind of the cool part here. If I formulate this query in a
way that's more natural to me, and doesn't actually
overlap at all with any of the text on the
page, it will still see how to-- it will still
serve the right answer. So if I say, hey, Google,
ask origami flowers if my four-year-old is
too young to make origami. GOOGLE VOICE: According
to origami flowers, many children-- WILL LESZCZUK: And there it is. So let's switch back
to the slides, please. And what we saw there was
speech recognition, NLU, and this stuff called
semantic matching, which you get right out of
the box from Google Assistant, and it's powered by
Google's machine learning. And so here what we saw was two
completely different queries, one as stated on the page
and one completely different. And Google in kind of a
different twist to what we saw with How-to not
only understands what the user intends,
but also understands the meaning of your content and
can deliver the right answer. And so this is pretty cool. No defining grammars, no
new code, no deployments, right out of the box. And this is the implementation for FAQ, and it's not really much of a demo, because there really isn't very much to it.
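For reference, the whole FAQ implementation is just an FAQPage object with a list of Question and Answer pairs, along these lines (the wording paraphrases the demo answers, so treat it as a sketch):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is the point of origami?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The goal of origami paper crafting is to create an often intricate design from a single sheet of paper in a series of simple folds."
      }
    },
    {
      "@type": "Question",
      "name": "What is the right age to start making origami?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Many children in Japan begin making origami as early as preschool, but it is up to you to determine when your child has the hand-eye coordination, manual dexterity, and patience to practice this art."
      }
    }
  ]
}
</script>
```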
So we think you get a whole lot out of just a minimal investment on top of your existing content.
So to recap this section, we talked about schema.org and FAQ and How-to markup. I showed you how to
use the rich results tester to validate
your implementation and get a preview of what you
might get on a Google search, and we used the Search
Console to monitor our site's health over time. And here are some tips. If you're using the
rich results tester, and especially if you're
copying around code samples, watch out for special characters
in the markup, smart quotes and stuff like that. They might give you some weird output, so just stick to the good stuff. For How-to markup,
this is really intended for step-by-step How-tos, so
a sequence of instructions to help a user achieve
a specific task. Don't use it for lists
of related information. We really want this step-by-step
guidance Aylin demoed. And don't include step
prefixes or suffixes, either, things like step 1, for instance. Google, based on the surface and the context of the user, will understand the
right way to frame up the information for the user. So a visual interface
versus a voice modality, you may want to do different
things with that content. So just give us the
content, and we'll put it in the right
context for the user. And finally, when you're
marking up your images, please just mark up
the original images that you're using on your site. Don't give us thumbnails, don't
give us previews or anything like that. And again, the reason
for that is Google will create the
right size thumbnails in the right aspect
ratio for the surface that the user is on. For FAQ markup, just one
thing I want to call out is this is really for answers
provided by your brand. So if you have a user forum on your site, for instance, that's OK too. We have a type of markup called QAPage markup, which launched last year, so you can go ahead and use that instead. And that's linked right from the FAQ documentation.
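For comparison, QAPage markup wraps a single user-submitted question together with its community answers; a minimal sketch, with invented forum content, looks something like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "QAPage",
  "mainEntity": {
    "@type": "Question",
    "name": "How do I keep origami folds crisp?",
    "text": "My folds keep loosening up. How do other people keep them crisp?",
    "answerCount": 2,
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Run a bone folder or the back of a spoon along each crease.",
      "upvoteCount": 7
    },
    "suggestedAnswer": [
      {
        "@type": "Answer",
        "text": "Use thinner paper; heavy stock resists sharp folds.",
        "upvoteCount": 2
      }
    ]
  }
}
</script>
```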
So now Aylin is going to talk to us about another way to add structured data to your great How-to YouTube content, called templates. AYLIN ALTIOK: Will
showed us how easy it is to get started
on the Assistant by just implementing markup on your site. And that was really cool. But we recognize not all content creators maintain a website. That's why we have an alternative solution for building actions on the Assistant, called templates. Templates are the easiest way to build for the Assistant. No programming
knowledge is necessary. To create an action,
all you need to do is to fill in your
content in Google Sheets and upload it in the Actions Console. With templates, you can create
engaging conversational actions like trivia games, personality quizzes, and flash cards, and we are very excited to introduce a newly added type for How-to video tutorials. It's a video-based template where you can create step-by-step tutorials on the Assistant by using your own YouTube videos. All you need to do is anchor the How-to steps from your video with a timestamp, a title, and a description.
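As an illustration, the information that goes into the sheet for one video might look like the rows below (the actual template defines its own column layout, and the timestamps and wording here are invented for the example):

```
Video URL:      https://www.youtube.com/watch?v=<your-video-id>
Tutorial name:  How to make an origami bouquet
Summary:        Fold a bouquet of paper lilies for Mother's Day.

Step title                Timestamp   Description
Fold the base square      0:15        Fold the paper in half horizontally and vertically.
Collapse the kite shape   1:40        Unfold the right flap and collapse it into the kite shape.
Finish the bouquet        4:05        Make as many lilies as you'd like to fill the bouquet.
```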
And again, similar to what we showed in the earlier demo, your content becomes available to show up when your users ask natural questions, like, how do I make origami. In this example, we specifically created a YouTube video and uploaded it in the Actions Console. So let me show you what that experience looks like on a device. Can we switch to the smart
display demo, please? GOOGLE VOICE: The mic's back on. AYLIN ALTIOK: Hey, Google, how
do I make an origami bouquet? GOOGLE VOICE: Fold the
square piece of paper in half horizontally
and vertically. Flip the paper over-- AYLIN ALTIOK: Hey, Google-- GOOGLE VOICE: --and fold along-- AYLIN ALTIOK: Pause. So you'll notice, similar to our earlier demo with the markup, again, we get all the steps we anchored in the spreadsheet here, and we get the video alongside the steps. And this is the video that we just hosted on YouTube. And again, we can have the
same experience by tapping. GOOGLE VOICE: Unfold
the right flap. Hold the left side
of the kite down-- AYLIN ALTIOK: Hey, Google. GOOGLE VOICE: --open the flap-- AYLIN ALTIOK: Maximize. GOOGLE VOICE: --made
in step 3, so it collapses into the kite shape. Repeat-- AYLIN ALTIOK: Hey, Google. GOOGLE VOICE: --all of-- AYLIN ALTIOK: Last step. GOOGLE VOICE: Make
as many lilies as you'd like to fill a bouquet. You have the perfect
Mother's Day gift. AYLIN ALTIOK: Basically
with this example, again, your users
on Assistant can go through this
step-by-step guidance without needing to
type, and while they are following the tutorials,
they can navigate with voice. And we created all this
just with the spreadsheet. And now let me go to
my laptop and show you how you can get started
with the same experience. Can we switch to laptop, please? Awesome. So I already went ahead and
created a project earlier, but How-to video templates are
available for developer preview today. You need to opt in. You can go to our
documentation to see how you can opt in and get started. And very soon we are going to make it the default. So the first thing you do when you go to the Actions on Google Console is create a project, pick Templates, and you'll see the How-to Video template. And once you click that,
this is the first screen that you will see. So all you need to do is
to fill in your content. So let's get started on how to do that. So the first thing you have
to do is to create the sheet. So we have predefined Google Sheets that you can make a copy of and start filling in immediately. So this is going
to take a couple seconds to make a copy
for us, and we will see-- awesome. We just start with
the Read me file. You can also go through the
documentation to read more, but this is just to
familiarize yourself which experience you are
getting with the templates. You can add as much as
YouTube tutorials as you want, and, as you add
the YouTube videos, this shows up as a tile
card in your home page. And this is how your users
may find your information, by explicitly asking for your
action like Talk To or Ask. And this is the video screen
where your videos will show up with the steps, the description,
and the anchors that you added. So this was similar
to what I demoed just on the smart display device. And let's go ahead and
see how you can get started filling in the content. This is where you just take your YouTube URL and paste it in. And you can just go ahead
and name your tutorial and give it a summary. And once you do that, you
can move to the Steps section, and each of the tutorials you added will automatically be populated here, and you can go ahead and anchor them by steps. You give a title to each step, a timestamp in the video, and just bullet points for an additional description for your users. So let me just edit something
minor, and say hello from I/O. This is not a real step, but just to show something in here. Let's save it. So all we need to do is to copy
this, go back to our Actions on Google Console, and now the next step is for me to upload the content. So let's go ahead and
paste this and upload. While this is uploading, we are running some validations behind the scenes. This validation passed, but I just want to familiarize you with what's happening behind the scenes. If you noticed, in our sheets we have some sections that are required fields. So just ensure that you are filling in all the required fields to make sure validation doesn't fail. And just to ensure we
create the best user experience in the devices,
we have some limitation to characters. So make sure you handle those. But don't worry, even
if you miss something, in our client side, we ordered
around some validations to warn you. Let's go back to
Actions Console, and now we are ready
to go to simulator and test our sample action. So for those of you who
are not very familiar with the actions on
Google's simulator, you can see the first time the
preview may take a few seconds to load our draft in here. Let's give it-- some time. But you can-- I just want to familiarize
you with the stuff with the simulator
you see in here. You can simulate experience by
giving a different language, different location. So if you are
specifically testing for certain users in
certain demographics, you can go ahead and mimic
those in the simulator. And you see in here
there is a version. So if you are iterating keep
publishing different versions, you will see all
of your versions. So you can test, simulate,
and compare different versions and decide which
one looks the best. For this project, I
only have the draft one, and let's go ahead and start
the simulation with the draft. So we see in here that this is
where your users will-- where you will just mimic
what your users might be saying to your action. You see now two of the
instructions we put in our spreadsheets showed up. We can go ahead
and pick the one. This automatically
starts the video. This is the video
we just played, and you notice that our change in the spreadsheet is automatically populated in real time. The idea is the same. In the simulation, you can go ahead and navigate through and just try to simulate how
your users would engage. Check out the experience. You can also do the same thing with voice commands. You can either type or enable the mic to input by voice. Let's say next step. And this is jumping
to the next step. So you get the idea of how you can basically test your actions before you publish. And once ready, you can go ahead and add your information, and you would be ready
to submit this action. Basically, can we go to-- slides, please? Basically, it was that easy to build an action just by filling out the spreadsheet. And once ready,
once you publish, you will also start
getting the analytics. So you can see your traffic,
active usage, and the details about your
conversational metrics, like the number of conversations and messages happening. Just to recap, with templates,
we built a How-to video tutorial by going to
the Actions on Google Console, picking the How-to Video template, filling out the spreadsheet with our YouTube videos, and we were ready to simulate in real time. And, once we feel comfortable with the action and have tested it, we can go ahead and publish
that and start monitoring to see how users are engaging. And some tips, just make
sure you are anchoring your own YouTube videos. Make sure you are
logged into Actions on Google Console
with the same account that you use for
your YouTube videos. And, if you are a brand account that is verified on YouTube, make sure you also follow the similar verification steps in the Actions on Google Console. Similar to markup, watch
out for smart quotes and special characters
when you are pasting your step descriptions. When you test
locally in simulator, you can catch these
type of errors and fix it, so always make sure
you go to simulator and test before you publish. And just make sure you fill
in all required fields, and make sure you are
following the specifications to pass the validations. And now Will is going
to help us put together for the session recap. WILL LESZCZUK: OK,
so, to recap, users are engaging with your
content in more moments than ever before, but being
present in these moments while it represents a great
opportunity for creators to connect with
their audiences, can be a real pain for developers. It's hard to keep up with the
proliferation of platforms and devices over time,
and a lot of times you have to build a custom
app for each one of those. So this is the
problem that we're trying to solve with the
structured data program. By adding a single edition
to your existing content, you can get that content out
to more users more of the time and more of those
moments than ever before across the
entire Google ecosystem. And that's why we're expanding
the program even further to add two new use cases
to serve two large user needs, FAQs and How-tos. Not only that, you get the
bring the very best of Google to bear for your users. You get the power of
Google's machine learning on the Assistant, and you
get these lush visual search results on Google Search. So we showed you two
developer journeys today, one for site owners who
have great existing content, and another for creators who
have great How-to content on YouTube. The markup journey: in that, I showed you how to add How-to and FAQ markup to your existing content. We used the rich
results testing tool to validate our
implementation and get a preview of what that's
going to look like on Search. And don't forget to keep an
eye on your Search Console over time to check your enhancement reports and the unparseable
structured data report. For our YouTube How-to,
we used Google Sheets to add structured data to
our video via templates in the Actions
Console, and we also showed you how to
use the AoG Simulator and keep an eye on
those AoG analytics. So here it all is again in one place. And, when you leave the session, you can start right away adding How-to and FAQ markup. You'll immediately be eligible
for these rich results on Google Search. And, coming soon, you'll get
those amazing Assistant demos that we showed you moments ago,
the How-to guided experience on a smart display and
the conversational UI on the mobile
assistant for FAQs. And How-to video templates
are now in developer preview. So go ahead, go log into
the Actions Console, check it out, and check out our
documentation to get started. With that, thanks a lot. Don't forget to come
check out our sandbox. It's H, and the
documentation links for everything we showed
you today are on the screen. We hope you enjoy
the rest of I/O. [APPLAUSE] [GOOGLE LOGO MUSIC]