[MUSIC PLAYING] JOANNA KIM: Over the
last couple years, our team has been working
really hard to bring the first augmented reality feature to
one of our most-loved products-- Google Maps. We're still in the
midst of this journey, but we thought we'd share
some of our learnings with all of you here today to
hopefully provide some insight into the process, and help you
avoid the same mistakes that we made. You know, people
often criticize AR as being a hammer without a
nail, and it's a fair point. Oftentimes, this is said
about a lot of new technology. However, we on the Maps
team saw an opportunity to address one of our
longest-standing user problems. This scene might look
familiar to some of you. You're in an unfamiliar
area, maybe exiting a subway, and then you pull up Google
Maps to try to figure out which direction to go. And then you just
stand there, thinking, am I facing this
way, or this way? And if you're anything like me,
you just get really impatient, and then you start
walking, only to realize that you've been going the
wrong way the entire time. Well, it turns out this is
a really common problem, and one that can cause
significant stress to some of our users. So we saw an
opportunity to introduce a complementary
technical approach, and combine it with
augmented reality to make navigation
better for our users. Our vision-based approach
works similar to how humans orient and navigate-- by
recognizing familiar features and landmarks. It's a human-like approach
to the human-scale problem of navigating on foot. Solving this has
been one of the most technically-challenging
projects that I've been a part of during my
eight years at Google, and I'm thrilled
to be here today with a team to tell you
more about how we did it. But rather than walk you
through the technology myself, let me introduce Jeremy,
our Tech Lead for Localization. [APPLAUSE] JEREMY PACK: Thanks, Joanna. So let's talk about
the blue dot-- that is, that blue dot that
shows up in Google Maps to show you where
you are in the world, and which direction
your device is pointed. In augmented reality,
robotics, and related domains, we call this process
localization. That is, figuring out
where something is in space and how it's rotated.
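(As a quick aside, in code a localization result is usually represented as a pose-- a position plus an orientation. Here is a minimal sketch in Python, with made-up field names and values, just to show the shape of the data:)

    from dataclasses import dataclass

    @dataclass
    class Pose:
        # Where the device is, in meters, in some map or world frame...
        x: float
        y: float
        z: float
        # ...and how it is rotated, here as a unit quaternion.
        qw: float = 1.0
        qx: float = 0.0
        qy: float = 0.0
        qz: float = 0.0

    # Illustrative values only -- the "blue dot" is essentially one of
    # these, continuously re-estimated as you move.
    blue_dot = Pose(x=12.3, y=-4.5, z=1.4)
    print(blue_dot)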
And humans have been inventing technology to help us localize and
navigate in the real world for millennia. So think of things like star
maps and constellations, or handheld compasses,
magnets that can point you towards north. Or think about street signs
and street maps, or astrolabes and sextants, and
other technology for navigating at sea. But in the past
few decades, there have been some
huge leaps forward in how we can localize
ourselves in the world. The global navigation
satellite systems-- such as GPS,
GLONASS, or Galileo-- consist of satellites in space
that can send signals to a GPS receiver-- such as
your smartphone-- that allow the device
to then calculate the approximate distance
to those satellites, and triangulate its
position in the real world. And this technology
often results in very fast and very
accurate location information for our devices.
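(To make that concrete, here is a rough sketch of the position calculation, assuming we already know the distances to a few satellites at known positions. It is deliberately simplified-- a real GNSS solver also estimates the receiver's clock bias and handles many error sources-- and the coordinates below are made up for illustration:)

    import numpy as np

    # Known satellite positions (meters) and the measured distances to them.
    sat_positions = np.array([
        [20000e3, 5000e3, 15000e3],
        [-12000e3, 18000e3, 14000e3],
        [6000e3, -20000e3, 16000e3],
        [-15000e3, -8000e3, 19000e3],
    ])
    true_receiver = np.array([1000e3, 2000e3, 6000e3])  # "ground truth" for the demo
    ranges = np.linalg.norm(sat_positions - true_receiver, axis=1)

    # Gauss-Newton least squares: start from a rough guess and refine.
    x = np.array([0.0, 0.0, 6371e3])
    for _ in range(10):
        diffs = x - sat_positions
        predicted = np.linalg.norm(diffs, axis=1)
        residuals = ranges - predicted
        jacobian = diffs / predicted[:, None]   # d(range)/d(position)
        delta, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        x = x + delta

    print("position error after solving (m):", np.linalg.norm(x - true_receiver))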
Now, of course, for walking navigation, we need a very high
level of accuracy. We need to be able to tell
you which side of the street to be on, and when to cross it. And, well, where do
people walk the most? Often in dense
urban environments. And so in these urban
canyons, we call them-- so streets with big
buildings on either side-- the signals from
these satellites can actually get blocked
by the buildings, or in some cases bounce
off of the buildings before they reach your device. And that makes the device
miscalculate the distance to the satellites and
incorrectly triangulate its position. Now, these aren't
the only difficulties with using this sort of
satellite-based GPS approach. In addition, when you're
calculating distances to satellites, it requires
estimating how much time has passed since the
signal left the satellite on your smartphone. And timing's already hard
enough in software engineering-- getting time synchronized
across devices. But when you're talking
about signals moving at the speed of light,
even a nanosecond of error is going to result in you
miscalculating by a foot-- or a third of a
meter for most of us.
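(The arithmetic behind that claim is just distance equals the speed of light times the timing error:)

    # A 1-nanosecond timing error, at the speed of light, is about a foot.
    speed_of_light = 299_792_458          # meters per second
    timing_error = 1e-9                   # one nanosecond, in seconds
    print(speed_of_light * timing_error)  # ~0.3 meters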
Now, a nanosecond is small enough that even a highly
accurate atomic clock can drift by a nanosecond
once in a while. And I don't know about you,
but my smartphone does not have an atomic clock in it. So I looked inside-- I wanted to see
if maybe we could fit an atomic clock in there. So if we took a nice
$2,000 atomic clock, there are already
a lot of components in your average
smartphone, and so I think we could fit the atomic
clock in there if we just removed the battery. It's a nice trade-off. Now, honestly, I
don't think anybody's going to pay for that
type of smartphone, but as long as
we're in here, let's take a look at some of the
components we have in there. There are actually a lot
of different things in here that can help with localization. You see an arrow pointing,
there, to the GPS antenna. I'm actually told that you
can get better reception for your GPS chip if you replace
that little antenna there with a paperclip. Now, if any of you
are interested, I do have some paper clips
and a soldering iron, and if you would like to void
your warranty afterwards, just come talk to me. But don't tell the
Pixel team, OK? Now, in addition
to the GPS antenna, there's some other
ancient hardware-- or the newest version of
ancient hardware in here. We talked about handheld
compasses, right? So if you've ever used
a handheld compass, you know that you have
to be careful with it. When I've taught kids how
to use a handheld compass, they inevitably
stand next to a pole, and then the compass
points at the pole. A big, metal object-- or anything magnetic--
can actually throw off the direction of the compass. And so we're going to take one
of those handheld compasses, except we're going to shrink
it really, really small. And we're going to stick
it into your smartphone, and then we're
going to surround it with metal components
and other circuitry. And then we're going to
put a big battery in there, and we're going to run
electricity through everything. Now, on a completely
unrelated note, that's also how you
make an electromagnet. So now we have this
electromagnet smartphone thing, and we take it into
an urban environment, and we surround it
with, let's see, subway trains, cars, utility
poles, fire hydrants, and all sorts of nice, big,
metal, magnetic objects. And we should be surprised
that these smartphone compasses can work at all in that
type of environment. So we need some more sensors. There are some
additional sensors in the smartphone
that can help us. Inertial measurement
units consist of a set of sensors
that can be used to track how the
device is navigating, how it's moving through space. Things like gyros
and accelerometers can tell us how the
device is rotating and how its position
changes over time. And so your
smartphone has these. But now, if we're trying
to use an accelerometer to measure how position
changes over time, I'll bet there
are a bunch of you out there now
thinking that if we have such a small,
inexpensive accelerometer and we're using it to
measure acceleration and it has small errors,
those errors will grow to big errors in
our positional estimates, especially over longer
periods of time.
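(A toy illustration of why those small errors matter: take a tiny, constant accelerometer bias, integrate it twice to get position, and the error grows quadratically with time. The numbers here are made up:)

    # Double-integrating a small, constant accelerometer bias.
    bias = 0.01        # m/s^2 -- a tiny, constant sensor error
    dt = 0.01          # 100 Hz samples
    velocity_error = 0.0
    position_error = 0.0
    for step in range(1, 60 * 100 + 1):           # simulate 60 seconds
        velocity_error += bias * dt               # grows linearly with time
        position_error += velocity_error * dt     # grows quadratically with time
        if step % (10 * 100) == 0:
            print(f"after {step * dt:4.0f} s: position error ~ {position_error:5.1f} m")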
And so this type of inertial odometry that is tracking the
movement of something through space using
inertial sensors needs additional signals
to keep it anchored, to keep it working
well over time, and keep accurately
estimating your position, especially when we have
to cram the sensors down so small in this device. We have one more
sensor in the device that we haven't talked about-- the camera. So we can use the
camera in a smartphone to detect visual features
in the scene around us. Think corners of chairs, text on
the wall, texture on the floor. And use those to anchor
these inertial calculations. And so by estimating
where those things are, then we can correct for the
errors in the inertial sensors over time. And this visual
inertial odometry helps make this type
of relative tracking-- this tracking how the device
is navigating through space-- much more accurate.
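(Here is a heavily simplified, one-dimensional sketch of that idea-- a drifting inertial estimate that gets pulled back toward reality whenever a visual fix on a recognized feature is available. This is not how ARCore is implemented internally; the point is just that an outside reference keeps the drift bounded. All numbers are invented:)

    import random

    random.seed(0)
    true_position = 0.0        # walking in a straight line at ~1.4 m/s
    inertial_only = 0.0
    fused = 0.0
    velocity_drift = 0.0
    bias, dt = 0.02, 0.1       # made-up accelerometer bias and time step

    for step in range(1, 301):                 # 30 seconds of walking
        true_position += 1.4 * dt
        velocity_drift += bias * dt            # drift accumulates in the IMU
        inertial_only += 1.4 * dt + velocity_drift * dt
        fused += 1.4 * dt + velocity_drift * dt
        if step % 20 == 0:                     # every 2 s: a noisy visual position fix
            visual_fix = true_position + random.gauss(0.0, 0.2)
            fused += 0.8 * (visual_fix - fused)

    print("error, inertial only:     ", round(abs(inertial_only - true_position), 2), "m")
    print("error, with visual fixes: ", round(abs(fused - true_position), 2), "m")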
We can actually take it a step further, though. We can have the device start to
memorize these visual features, making it so that the more
the device moves around within a given
environment, the better it can learn that
environment, the better it can map that environment,
and the better it can track its own movement
within that environment over time. And let's take that
a step further. We could actually precompute
a map using imagery that we've collected
around the world, say, from something
like Street View. And so we could take all
of these visual features from the world, build them
up into a precomputed map, and now use an image
from that same camera, match it against the Street
View, the visual features, and precisely
position and orient that device in the real world.
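(A very rough sketch of just the matching step, with randomly generated stand-in descriptors-- the real system is far more sophisticated, and in practice the confident matches would feed a pose solver rather than a print statement:)

    import numpy as np

    rng = np.random.default_rng(42)

    # Pretend these are feature descriptors precomputed from Street View
    # imagery, and descriptors extracted from the phone's current frame
    # (the true matches, plus a little noise).
    map_descriptors = rng.normal(size=(500, 64))
    query = map_descriptors[rng.choice(500, size=20, replace=False)]
    query = query + rng.normal(scale=0.1, size=query.shape)

    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(map_descriptors - q, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < 0.8 * dists[second]:   # ratio test: keep unambiguous matches
            matches.append((i, int(best)))

    print(f"{len(matches)} confident matches between the camera frame and the map")
    # Each match ties an image feature to a mapped, georeferenced feature,
    # which is what lets a pose solver recover position and orientation.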
Now, this visual positioning system is the core piece that makes
our augmented reality walking navigation experience
so fast and accurate, and it achieves a
level of accuracy that is much higher
than you can achieve with those other sensors
that we just discussed. Now, the speed and
accuracy of localization that you see in
this example here is actually pretty typical
in that type of environment, and we need localization to
be that fast and accurate everywhere you live,
work, or travel. So for that reason,
we have built out this visual positioning
service to work everywhere that we have high-quality
Street View data. That is a lot of countries. Now, this is the result of more
than a decade of investment in collecting imagery
around the world so that we can build
products like this. Now, this map-- it makes me
happy and sad at the same time. You can already tell,
at this zoom level, that places like China
and most of Africa don't really have any imagery. And other places look
OK zoomed out this far, but if you zoom
in, you'll notice that coverage can be very
sparse in places like India, and that even in
places like here, there will be some streets
missing here or there, and some of the data is older. Now, lack of imagery, of
course, is not the only problem that we encounter when doing
a camera-based localization. So there are a
lot of things that can confuse a visual system-- things like people,
cars, or anything in the environment that
changes frequently. And so for trees
in particular, we had to do a lot of machine
learning and a lot of geometry to help the system learn to
ignore things like leaves that change frequently. Visual algorithms love that
sort of high-quality feature. And we had to teach it to pay
attention to the structure that is more permanent-- so things like the
buildings themselves, and the trunks of the trees. Also, of course-- I don't know about
you, but the last time I got myself hopelessly lost-- which was very recent-- I didn't check to
make sure it was going to be light outside first. And the core of the
problem is that the camera on your smartphone is far less
sensitive than the human eye. Your human eye is this marvelous
thing that does automatic HDR. I mean, it can correct for all
sorts of lighting conditions that your smartphone camera
just cannot handle as easily. And so in a scene that
looks reasonable to me where I can navigate fine, all
that the smartphone can see is the headlights on the cars. Now, those headlights
are generally moving, which means that
ARCore can't successfully latch onto any visual
features in order to tell how the device
moved through space, and none of the
permanent structure is actually visible
to the camera, meaning there is
nothing for us to match against from Street View. Now, even in the
middle image, as you start to get more
light in the scene and you start to be able to
see more of the structure, it still looks so
different from what it looked like when the Street
View car drove by a year before that it's very confusing,
still, to the visual system, and it usually still won't work. Now, as the light
increases a bit more, or in very well-lit areas-- think like certain streets in
New York which are well-lit all the time-- then the system can
start to work more often, but it's really not
something that you can rely on in low light. Now, I already showed you
how packed-in everything is in a smartphone, but the battery
takes up a lot of space-- and that's for a good reason. We don't want it to run out. Now, if you've ever built an
app that uses the GPS a lot, that leaves it on for
long periods of time, you may have noticed that
that can run down the battery. But the cost of
using the GPS chip-- or any of these other
localization sensors-- pales in comparison to the
cost of using that camera. So turning on the camera
and putting that image on the screen of the device
is a significant power draw. And so just opening
up your camera app already uses up a lot of
power and isn't something that you can do for your
entire 2 or 3 kilometer walk-- at least, not if you
want to have your phone last till the end of the day. Now, we looked into different
approaches for handling this. And the first idea
was just we need to make everybody carry around
an extra battery all the time. All we got to do is
make it socially cool to carry around a
battery all the time. And so we experimented
with this. I took a battery out. I plugged my phone into it with
a USB cable through the pocket, and walked around-- trying
to use walking nav-- to see how people
reacted and see if anybody started adopting it. They did not, and
I felt pretty silly walking around like
that-- with my phone plugged into my pocket. In addition, of course,
just having that extra power going in from the
battery in my pocket made the compass
performance even worse. So my next idea was
that I was going to go get a map of Mountain View
printed on the back of my phone so that whenever the battery
dies, I still won't be lost. [LAUGHTER] But when it came
down to it, we really realized that we have to
make some trade-offs, here. Just like any of your
location-based apps have to be careful about not
having the GPS on all day, we do the same thing
with the camera. We want to have
the camera on when it's essential to have
highly precise localization. And so when you want to
know exactly where to go and which turn to take,
then we have the camera on, and we can localize you
with very high quality. But once you are on
your way and know where you're
supposed to be going, then the camera turns off, and
we rely on the other sensors to localize the device and
keep you along your path.
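(Purely as a hypothetical sketch of that kind of trade-off-- not the actual Maps implementation-- you can think of it as a tiny policy that only asks for the camera near decision points:)

    # Hypothetical policy: run power-hungry camera-based localization only
    # near a decision point; otherwise rely on GPS + inertial tracking.
    NEAR_MANEUVER_M = 30.0

    def camera_should_be_on(distance_to_next_maneuver_m, user_requested_ar):
        return user_requested_ar or distance_to_next_maneuver_m < NEAR_MANEUVER_M

    for distance in (400.0, 120.0, 25.0, 5.0):
        mode = "camera + VPS" if camera_should_be_on(distance, False) else "GPS + IMU"
        print(f"{distance:5.0f} m to next turn -> localize with {mode}")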
Of course, we've got to keep you along the right path. Walking directions have to be quite a bit different from driving directions. Remember, in driving, we have to keep you on a fast route. So we have to take into account
speed limits and traffic. We want to tell
you where to make the right turns,
legal turns, and we want to tell you
which lane of traffic you should be in when
you make that turn. Whereas in walking
navigation, it's very rare that I exceed the speed limit. And I need to know which side
of the street I should be on, and whether or not
there's a sidewalk there. I need to know where
the crosswalks are so that I can cross safely. And not all walking paths
are right next to a road. Many of them are actually
on a trail, or in a park. And often, there are very
convenient pedestrian overpasses and underpasses,
and all of these mechanisms-- all of these different
types of path-- can actually be the
best and most scenic and safest ways to complete
your walking journey. So we need to take all
of these into account. Now, we get a lot of
the data about where these are from other
providers, but for much of it, we have to use all of this
imagery we've collected-- so Street View data, and
aerial data from planes and satellites. And then using a combination
of manual labeling and machine learning, we find
all of these features automatically so that we can surface them in the map and give you the best way to
get from point A to point B. All right, so we've talked
about some of the difficulty, some of the technology that
we've built to make it so that we can accurately
localize you at point A, and then direct you along
a good route to point B. But this core technology--
this navigation technology-- doesn't really do any
good without being able to be presented to
the user in the right way. Now, remember, when
we're using this, we are walking
down a busy street, and we need to get there safely,
and we want our smartphone to surface the right information
in the way we need it, and that is hard
no matter how good the underlying
localization and map are. And so our user experience team
has done a lot of iteration on this, and Rachel here-- who leads that-- team is
going to tell us all about it. [APPLAUSE] RACHEL INMAN: Thanks, Jeremy. So if developing a new
technical approach using VPS plus Street View plus
ML wasn't complex enough, we also had to dive into a
totally new area of interaction design-- designing for outdoor,
world-scale AR experiences. So what does it mean to reinvent
walking navigation for people around the globe? Well, first we
have to understand how people navigate in a
variety of contexts, city layouts, and city densities. Just like Jeremy
and his team have to understand all the technical
factors for scaling globally, we have to make sure we take
all those different contexts into account. Most people don't realize it,
but cities across the globe are laid out really differently,
and these different layouts deeply affect the
way that people navigate on foot within them. In New York City,
for example, it's pretty easy to understand where
you are at any given time, especially above 14th
street, because it's all laid out on a grid, and you
can just follow the numbers as you go up. But in Tokyo, you don't
always see street signs, most people use
landmarks to navigate, and things are so dense
that that restaurant you're trying to get to might be inside
a building on the eighth floor, and only accessible
through the rear elevator. When starting this
project, we also wanted to investigate
the types of questions that people are asking
as they navigate on foot around the world. Which way should
I start walking? And is this my turn, or
is it one more block up? Where exactly is my destination? And these questions
are coming up because GPS and compass
aren't cutting it, but also, through
user research, we found that many people
struggle with map abstraction. For lots of folks, it's hard-- and sometimes even
anxiety-inducing-- to quickly understand
the relationship between what they're seeing
in the real world with what's on the 2D map. Well, turns out one
of the strengths of AR is allowing us to believably
place things in the real world, just like you might put
your favorite pink alligator on your dining room table. So we thought, what if we
took that strength of AR and combined it with a
new technical approach that Jeremy described? Could we solve
the, which way do I start walking problem, but maybe
also the abstraction issue? Well, that sounds
simple enough, right? All we have to do is put a
blue line on the ground, same as the one that we have
in the 2D map, right? Well, not quite. This is an early
design exploration where we tried exactly that. The trouble with
this approach is that putting a precise-looking
line on the ground doesn't flex well with varying
levels of localization or data quality. Plus, in user testing,
we found that people feel compelled to walk
right on that AR blue line. [LAUGHTER] So that's not good. We needed to find
a solution that would strike the right
balance between providing the specificity and clarity
users were looking for, but also flex well with those
varying levels of localization and data quality. So we did a lot
of explorations-- over 120 prototype
variants, in fact. And the thing is
we had to do this. There's no material
design spec site that the team and I can pull
up to understand how to design for outdoor, world-scale AR. We've literally been
uncovering these best practices throughout this
project, and it wouldn't have been possible
without the ability to iterate quickly and test
with people on a weekly basis. So I wanted to walk you
through a few examples of how the experience evolved
throughout this project. But first, there might
be a prototype up here that looks familiar to you-- maybe a furry friend that made
an appearance at I/O last year? JOANNA KIM: The fox! RACHEL INMAN: Joanna knows. OK, all right, I might as well
address the fox in the room. So at I/O last year, we showed
how AR walking navigation could provide a lot of
user value in Google Maps. We also showed this
friendly, navigating fox. Over the past year, we've done
over 25 prototype variants-- and tested them with people--
of the fox experience. We've found that
it is really hard to get the experience
right between a helpful AR character and a person. People expect her to be a lot
smarter than she really is. Imbuing her with
intelligence, they expect her to know
shortcuts, to avoid poles, to avoid fire hydrants. Some people even expect her to
lead them to interesting things to do in the city. [LAUGHTER] I wish. Plus, people are
enamored by her. I mean, how could you not be? She's adorable! But with all these expectations
put on an AR character, it becomes even harder to
get the interaction right. So rest assured we're continuing
to prototype and test the fox experience, but we
want to make sure that we're providing a
delightful experience, but also being helpful in
the moments that matter. All right, so back
to walking you through a few examples of
how the experience evolved throughout the course
of this project. When working on the
localization experience, we needed to
understand how long it would take to get the user
effectively localized, and how we needed them to look
around at the environment. So at the beginning
of this project, I remember coming to Jeremy
over here and asking him, how long do you think it's
going to take the user to get localized? And I remember you said
something like 10 seconds. It's like, 10 seconds is a lot. OK, what can we
do with that time? So we developed this
particular approach where we're having the user
fill in these 3D shapes as they look around. So that was OK, but
a few months passed, and our localization technology
was getting better and better. It wasn't taking
10 seconds anymore. In fact, in some cases, it
was taking less than 1 second. So we decided to go
back to the drawing board on this
particular interaction, and we developed what you
see here in the experience. We're simply asking users to
point their phones at buildings and signs across the street-- something that we know often
yields quick localization. We have a little bit of
a moment of confirmation when we get that
good localization, but it's pretty straightforward. When working on the
path visualization, we are really excited about
this particular direction. The idea was that
the user would be able to follow the stream
of 3D particles all the way to their destination. It was going to be great. The stream of particles would be
able to flex in width depending on if we had good localization
or good data quality, providing specificity when
we had it, and vagueness when we didn't have it. We were super excited
to test this with users. So user testing came around,
we put it in front of people, and people hated it. The whole reason they wanted
to use AR in the first place was for that
specificity and clarity, not to be shown some weird,
vague path of particles. Plus, the motion
was distracting. Oh, and people didn't
think that these particles were lovely and
ethereal like we did. They literally
described them as trash. [LAUGHTER] So not wanting our
users to follow trash, we kept iterating. In a design sprint, we
started to gravitate around this particular direction. The idea was that we could
show a little bit of the 2D map in combination with the
AR view, the thinking being that for many
people, using AR walking navigation in Maps will be
their very first time using any sort of AR
experience, so providing some familiar elements can
actually be really helpful. It might be the case
that one day we take out the 2D map at the
bottom, but for now, it provides a really
good gut check. Our journey with AR walking
navigation in Google Maps is nowhere near done, but
we have learned a few things that we think will
be helpful for anyone who might be thinking
about designing an outdoor, world-scale
AR experience. This is by no means meant to be
a comprehensive or definitive list, but merely
a starting point. We know that as we roll out to
more people, we'll learn more, and we'll revisit
these principles. I'm just not going to
move from this slide. OK. We'll learn more, and we'll
revisit these principles. But I wanted to give you
a peek into what we're thinking so far. So the first principle is to
embrace the sense of immediacy. What do we mean by that? Well, when your AR environment
includes nearby buildings, buildings in the distance,
streets, sidewalks, and more, it can be really hard to
manage the user's attention-- especially through this
narrow field of view of the smartphone-- while also communicating
how far away something is. You're much better off
focusing the user's attention on one thing at a time, and
highlighting that focused area. You also want to consider how
you're representing things that are occluded to make it even
clearer what's near versus far, visible versus not. The next principle is to provide
grounding and glanceability. When doing those 120
plus prototype variants, we began to realize
that we needed to strike a certain visual
balance to be most helpful. The AR objects need to
simultaneously stand out from their
surroundings, but also be placed in a
particular location to provide maximum
clarity to users. So show me exactly where
that turn is, but stand out from the real world. This balance of providing
grounding, but also glanceability, might be
a little bit different than what you're used to seeing
in other AR applications, where the whole idea is to blend into
the environment-- like how you might want to try out that
AR lamp in your living room before you buy it,
or how you want that realistic dragon
to look like it's really flying in front of your friend. But for outdoor,
world-scale AR experiences specifically focused on utility
rather than entertainment, it's important to be both
grounded and glanceable. All right, the third principle
is to leverage the familiar. Google Maps has long been
a 2D experience focused on navigation and discovery. AR now makes up a very small
percentage of that experience. The familiar that we're
referring to in this principle is all the representations
in the 2D UI that users have become
used to over the years-- things like the fact
that we use a red pin to represent the destination,
or a blue callout for upcoming turns. If we had reinvented how we
presented information in AR and come up with a
new visual metaphor, people would have
had to have gotten used to that new visual
metaphor in addition to getting used to the whole
interaction patterns with AR, and that would have been
a lot to ask of people. All right, so our
last principle is one that extends
beyond UX design and has important, but
practical product implications. So I'm going to pass
it back to Joanna, and she's going to tell you a
little bit more about our last, and arguably most
important principle. [APPLAUSE] JOANNA KIM: Thanks, Rachel. So as Rachel just showed,
crafting this experience required considerations that go
far beyond the traditional 2D or web application,
and it introduced a lot of new and complex questions. When we first
started this project, we all envisioned
an AR mode where a user would both start
and end their journey, all while using the camera. But as we continued
to develop, we realized that this was
actually a really bad idea. For starters, people have a
lot of trouble paying attention to their surroundings if they're
only looking at the world through this small screen. In fact, people are often
overconfident in this regard, because they think that
by having the camera view, it actually erases the
negative effects of looking at your phone, but that's
definitely not the case. So we needed to
build in safeguards to effectively prevent people
from walking into a pole, or worse, the street. We did a lot of
experimentation to try to see what would work best
for this type of problem. In the beginning,
we actually just tried punting them
back to the 2D map once they started
walking, but it actually made them think that the
AR feature was broken. We also tried a pop-up
method, like you might see in some
other applications, but they said that they found
it obtrusive and annoying. We also tried
blurring the screen based on the amount of time
that they'd been in AR, but that didn't
really work either. Our current solution
looks like this. If a user starts
walking while using AR, we'll first ask them really
nicely to pay attention by using a subtle
message at the top. If they continue, we'll
be a bit more insistent, with a full-screen overlay
that effectively prevents them from continuing to use AR.
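(That escalation can be sketched as a simple policy keyed off how long someone has been walking with the camera up-- again, a hypothetical illustration with invented thresholds, not the shipped code:)

    # Hypothetical sketch of the escalating safety response described above.
    def safety_response(seconds_walking_in_ar):
        if seconds_walking_in_ar < 3:
            return "no prompt"
        if seconds_walking_in_ar < 10:
            return "subtle 'keep your eyes up' message at the top"
        return "full-screen overlay that pauses the AR view"

    for t in (1, 5, 20):
        print(f"walking in AR for {t:2d}s -> {safety_response(t)}")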
The feedback has admittedly been mixed, with some people who find
it still too obtrusive, while others express gratitude
for the reminder to actually pay better attention. We know that we won't
please everyone, but it was really
important to us to try to build in
the right affordances to nudge people to be safe,
and ultimately just enjoy more of the world around them. We think that this design is
a step in the right direction, but if it's not, we'll
start again from scratch. Because at the
end of the day, we want to make sure that our
experience is not only helpful, but also responsibly designed. As you think about creating your
own world-scale AR experiences, we highly encourage
you to think about all the different environments
a user might find themselves in, and help build in
those right affordances to make sure that they're using
the experience in the most responsible way. Encouraging momentary uses
of AR versus one long, continuous path also
helps with battery usage. So as Jeremy mentioned, using
the camera is really expensive. But on the flip side,
using the camera more often actually results
in better tracking and faster relocalization. Truly solving this
particular problem requires hardware
innovations that will take some time to be realized. But what we can do, as
product teams and developers, is encourage user
interaction patterns that help mitigate these efforts. Therefore, the last principle
that we wanted to emphasize is keep AR moments
short and assistive. In some of these earlier
builds, we actually presented all of the
information that you might find on a 2D Google
Maps view and the AR view-- things like time and
distance to destination. And while that information
is undoubtedly useful, we realized that it was
actually encouraging people to stay in AR for
longer periods of time than they actually needed. We basically want to encourage
people to just use it when they want that glanceable visual information. And then for everything else,
send them back to the 2D map, or encourage them to
put their phones away. They don't really
need AR just to know that they need to walk five
minutes straight, right? So just to summarize, here
are the four principles that we talked about today-- 1, embrace the sense
of immediacy, 2, provide grounding
and glanceability, 3, leverage the
familiar, and 4, keep AR moments short and assistive. It's definitely not
a comprehensive list, but they're all things that
we had to learn the hard way, so we hope that you
enjoy them, because we had to fail a lot of times
to actually get to this. All right, so we
talked a lot today about why it's so
hard and challenging to make this experience
work well everywhere, but now I want to talk a little
bit about how we're balancing getting that real-world feedback
needed to actually improve the experience, with making
sure that it's robust enough to add value. Normally at Google, we
dogfood all of our products, meaning we basically have
Googlers just test and use it and provide feedback. However, we don't have Google
offices in all of the areas that we have Street View. Also, the world
changes really rapidly, like Jeremy was
talking about before, and it's just not
scalable to continuously collect Street View on a monthly
basis all over the world. Getting this experience and
the underlying technology right requires real-world
testing by a diverse set of users in all the locations
we hope to launch it in. But we also didn't want
to roll it out too quickly and see the headline, "Google
Maps Launches Useless AR Feature." So that's why, earlier
this year, we actually enlisted our most ardent
Google Maps fans-- local guides-- any local guides
in the audience here today? So we're really lucky to have
a community of Google Maps fans who are willing to take some
of our most nascent experiences and test them out
in the real world, and provide really
valuable feedback to us. Over the last
several months, we've had local guides all
around the world tell us how AR was able to
help them figure out which way to go when they're
exiting the tube in London, or explore a rural
area of Sri Lanka, or find that nondescript
office building in Ghana. At the same time,
they've also told us how we've missed
opportunities to route them on walking paths, how it
can use a lot of battery, and how sometimes, those bright,
big, 3D arrows scare them when they pop up too suddenly. So we clearly have
a lot of work to do, but we're really thankful to
have this community of users stick by us as we try to improve
and create the future of Maps. All right, so what's next? We know that walking is
not confined to daytime-- as convenient as that
would make our jobs-- and so we're exploring new
ways where we can assist users better at night. It's a really hard
computer vision problem, but one that we're really
motivated to try to solve. We also want to expand
Street View coverage so that we can enable
this experience in more areas across the world. And ultimately, we
know that we need to get better at localizing
users faster and more reliably. But like I said earlier,
improving this experience requires knowing when and
where we fail to be useful, and then investigating
the causes why. That's why yesterday, we were
really excited to announce that we'll actually be expanding
the experience to all Pixel users. Our goal is to expand this
experience as widely as possible, but we
want to balance that with making sure that
we're adding value and making a good
first impression. So we'll be working with both
our local guides community and Pixel users to make sure
that we're continuing to shape and improve this experience. Thank you so much for
coming to our talk today. I can't wait to see what
all of you guys build. And if you have any
questions, we'll just be off to the side of the
stage, and we're happy to chat. [APPLAUSE] [MUSIC PLAYING]