[MUSIC PLAYING] ALESHA UNPINGCO: We're
here to talk to you today about designing AR applications. So Google has a long
history in designing for AR. We've been doing mobile
AR for the past four years and working on other augmented
reality projects before that. What's changed most recently
is that mobile AR is really starting to take off,
and Google's ARCore makes it possible for anybody to create quality AR content. So I'll share some fun facts about ARCore. The first thing is
that we released 1.0 at the end of February, and
that made AR content available to more than 100
million devices. And we're already
seeing rapid growth with more than 300 apps already
available in the Google Play Store. So you're here because you want
to learn how to design for AR. And what we've found is
that once you understand your users and the
type of experience you're trying to create,
design principles for getting started fall into five
different categories, which we call the
pillars of AR design. First, you want to understand
your user's environment. Where will the users be
experiencing your app? Think about the surfaces
that are available and how your app can adapt
to different environmental constraints. Then you want to consider
the user's movement and how much space the
user will need in order to experience your app. And when it comes to
initialization and onboarding, you want to make the onboarding
process as clear as possible so that users understand
exactly what to do in this entirely new medium. When it comes to
object interactions, design natural
object interactions that convey the affordances
and also the feedback so that users understand how
these digital objects fit in the context of your
real physical space. And when you're thinking
about user interfaces, balance the on-screen UI with
volumetric interface design so that you're able to
create an experience that is meaningful and
usable for your users. So we have some
examples that showcase the different guidelines
within each of these pillars. And the thing that
we want to point out is that this framework can
help anybody get started with AR content creation. So throughout our
talk, we're going to show you some different
demos that you'll be able to play with very
soon through an app we're launching on the Google Play
Store called ARCore Elements. And many of the core interaction patterns you'll see in this talk are already available in Sceneform or will be available for Unity later this summer. ALEX FAABORG: All right. So let's start out talking
about the user's environment-- the first pillar of AR design. So ARCore is a relatively
new technology. So just to begin, let's talk
about what ARCore can actually do. So it does lots of things. The first thing that
everyone's familiar with is surface plane
detection, where it can understand
surfaces, tables, floors, those types of things. It can also do walls. Vertical surfaces. And then, of course,
what's better than horizontal and
vertical surfaces? It can also do angled
surfaces with oriented points, being able to place objects at any angle. ARCore does light estimation. This is really important for
having objects sort of look realistic in a scene. And there's also
some other fun things you can do with that
that we'll get into. And announced at I/O
yesterday, Cloud Anchors, which is available
now for ARCore both on Android and also on iOS. It lets you do multi-player
experiences in ARCore with two people
viewing the same thing. And also announced
yesterday, augmented images. The ability to recognize
an image, but then also not just recognize
it, but use that image to get 3D pose data off of it so
you know where it is in space. So that's the current set of ARCore capabilities, and this is, of course, growing over time.
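To make that list concrete, here is a minimal sketch of what switching those capabilities on looks like when you configure an ARCore Session in Java. The reference image name and bitmap are placeholder assumptions, and if you build on Sceneform, its ArFragment handles most of this setup for you.

```java
import android.content.Context;
import android.graphics.Bitmap;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.UnavailableException;

/** Minimal sketch of switching on the ARCore capabilities discussed above. */
final class ArSessionFactory {
  static Session createConfiguredSession(Context context, Bitmap referenceImage)
      throws UnavailableException {
    Session session = new Session(context);
    Config config = new Config(session);

    // Surface detection: horizontal and vertical planes.
    config.setPlaneFindingMode(Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL);

    // Light estimation, so virtual objects can be lit to match the room.
    config.setLightEstimationMode(Config.LightEstimationMode.AMBIENT_INTENSITY);

    // Cloud Anchors, for shared multi-user experiences.
    config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);

    // Augmented images: recognize a known image and track its 3D pose.
    AugmentedImageDatabase imageDatabase = new AugmentedImageDatabase(session);
    imageDatabase.addImage("reference_image", referenceImage);  // name is arbitrary
    config.setAugmentedImageDatabase(imageDatabase);

    session.configure(config);
    return session;
  }
}
```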
And what we found is the more your app integrates with the
user's environment, really, the more magical
your app is going to feel. So let's look through
a few examples here. First, surface plane detection. A lot of AR apps currently
only use one surface. Say you're playing a game-- a game board will
appear on a surface. What we found is there's
really no reason that you have to use one surface. You could use all of
the detected surfaces. And at moments when you
have your game interacting with multiple
surfaces, those can be really kind of like
breakout moments in your game where it feels very magical. Even something like if, say,
you're playing a physics based game and you destroy
your opponent's castle or something and the
bricks fall onto the floor. At that moment when you
see the items on the floor, it can be really quite stunning. All right. Light estimation. This is critical
for making objects look realistic in the scene. Here's an example that we're
working on to sort of test out some different techniques. Here we have three fake
plants and one real plant, although I think it's
actually a real fake plant. And we have unlit, which is the
most basic lighting you can do. Really not very realistic. Then dynamic and a combination
of dynamic and baked. And what's great with dynamic
is you get real time shadows. And then when you combine
dynamic with baked, you start to see things
like soft shadowing where the leaves of
the plant are actually a little bit darker and
it's picking that up. You can sort of see
an example of how this looks with some movement. And lighting, there's
definitely a lot of innovation that's going to be
occurring in this space as we try to get more and
more realistic lighting. You can see where
we are right now. And especially as the
scene gets darker, you start to see that
the unlit object just doesn't perform as well. So it's really
important that you're using the real-time lighting
APIs that are in ARCore. The other thing you can
do is you can actually use lighting as a trigger
to change something. So here in this example when you turn the light switch off in the room, the city actually glows and it responds to that change.
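Here is a rough sketch of both ideas in Java: reading ARCore's per-frame light estimate to light your objects, and also using it as a trigger. The renderer call, the night-mode helpers, and the 0.2 threshold are assumptions made up for illustration; Sceneform applies the estimate to its materials for you.

```java
import com.google.ar.core.Frame;
import com.google.ar.core.LightEstimate;

// Sketch: read ARCore's per-frame light estimate, both to light objects and,
// optionally, as a trigger for changes in the scene.
void onFrameUpdate(Frame frame) {
  LightEstimate estimate = frame.getLightEstimate();
  if (estimate.getState() != LightEstimate.State.VALID) {
    return;
  }
  float intensity = estimate.getPixelIntensity();  // roughly 0 (dark) to 1 (bright)

  // Use the estimate so virtual objects dim and brighten with the real room.
  sceneRenderer.setAmbientIntensity(intensity);  // hypothetical renderer hook

  // Use it as a trigger: when the room goes dark, the virtual city lights up.
  if (intensity < 0.2f) {
    enterNightMode();  // hypothetical helper
  } else {
    exitNightMode();   // hypothetical helper
  }
}
```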
And these types of moments can really feel great and magical, where-- imagine you're playing a sort of a city simulation game and it's having these kinds of significant, meaningful changes based on the environmental light. All right. Oriented points. This is actually a
very new feature. So we don't have a whole
lot of examples here, but here is one of
the more basic ones. I filmed this when
I was out skiing. And here I'm just attaching
Androids to the side of a tree. And you can see that
as I place them, they stick to exactly
that point on the tree, at the angle of where the branches were.
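For developers, the relevant piece is the trackable type on the hit test result. This is a minimal sketch, not the demo's actual code, of checking whether a hit landed on a plane or on an oriented point like the branches the Androids above are stuck to:

```java
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Point;
import com.google.ar.core.Trackable;

// Sketch: decide whether a hit test result is somewhere an object can be placed.
boolean isPlaceable(HitResult hit) {
  Trackable trackable = hit.getTrackable();
  if (trackable instanceof Plane) {
    // A horizontal or vertical plane; make sure the hit is inside its polygon.
    return ((Plane) trackable).isPoseInPolygon(hit.getHitPose());
  }
  if (trackable instanceof Point) {
    // An oriented point: the hit pose is rotated to match the estimated surface
    // normal, so an anchor created here sits at the surface's angle.
    return ((Point) trackable).getOrientationMode()
        == Point.OrientationMode.ESTIMATED_SURFACE_NORMAL;
  }
  return false;
}
```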
All right. Cloud Anchors. Announced yesterday. Here's an example of that being used. Here both players see
the same game board in exactly the same
place, and they can play that game together. And this is really
tremendously fun. You can actually try
it out in the Sandbox if you want to stop
by later today. And again, this works with both Android and iOS.
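At the API level, a shared experience like that game board boils down to hosting an anchor on one device and resolving it on the other. A minimal sketch, which assumes you handle sending the cloud anchor ID between devices yourself:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Session;

// Sketch: sharing one anchor between two players with Cloud Anchors.
// How the cloud anchor ID travels between devices (your own backend,
// Firebase, and so on) is up to you and not shown here.
final class CloudAnchorSharing {

  // Player 1: host a local anchor in the cloud.
  static Anchor host(Session session, Anchor localAnchor) {
    Anchor hosted = session.hostCloudAnchor(localAnchor);
    // Poll hosted.getCloudAnchorState() each frame; once it is SUCCESS,
    // send hosted.getCloudAnchorId() to the other player.
    return hosted;
  }

  // Player 2: resolve the shared ID so the game board shows up in the
  // same physical spot for both players.
  static Anchor resolve(Session session, String cloudAnchorId) {
    return session.resolveCloudAnchor(cloudAnchorId);
  }
}
```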
Augmented images. There are a lot of different ways you can use this. One demo that we have in the
Sandbox that you can go check out is actually an
art exhibit that was built using augmented images. [MUSIC PLAYING] So we're really
excited about what you can do with augmented images. And there's really
lots of possibilities from artwork to even
something like just having a toy sort of come to life. You know, the surface
of a product box where you can see sort of 3D models of what you're about to play with.
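Once the image database from the earlier configuration sketch is set up, recognizing an image is a per-frame check. This is a hedged sketch; attachContentTo() is a placeholder for whatever your app does with the resulting anchor:

```java
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.Frame;
import com.google.ar.core.TrackingState;

// Sketch: react when an image registered in the augmented image database
// is recognized in the camera feed.
void checkForAugmentedImages(Frame frame) {
  for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
    if (image.getTrackingState() == TrackingState.TRACKING) {
      // getCenterPose() gives the image's position and orientation in space,
      // so you can pin 3D content (an artwork, a toy) right on top of it.
      attachContentTo(image.createAnchor(image.getCenterPose()));  // placeholder
    }
  }
}
```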
All right. So now that we've gone over some of the basics on the core
capabilities of ARCore, let's talk about
how you'd actually start to design an app for AR. So one of the first things
you're thinking is, OK, where do I actually start? You know, blank page
and ready to start having brainstorming
and new ideas for AR. And one of the things that
I'd want you to first focus on is AR exists outside
of the phone, so your design work
should really exist outside of the phone as well. So something I've found
with a lot of people who've done tremendous amounts
of mobile design is they tend to be very attached
to sort of the phone frame and sort of flows of screens. They've been doing
that for so long. One of the first things you
need to do when you're starting to think about AR is to
actually put away all of those, you know, 2D UI stencils. And don't really think
about the phone at all. Instead, what you
want to do is you want to sketch the actual
environment that the user's in. So you should be sketching
living rooms, and tables, and outdoor spaces. And then as you sketch
the user's environment, then you start to sketch
the various objects that they're going
to be interacting with in that environment. In many ways, you can
sort of think of AR as having a lot of
the same challenges as responsive design for the
web in terms of different window sizes, but it's even
more complicated because now you have responsive
design for 3D spaces that are the user's actual living room. So you want to sketch
the user for scale to get a sense of how
you're going to start crafting this experience. The user could be very large
relative to the AR objects or very small. And then you want
to start thinking about how that user is
going to move around in that environment. ALESHA UNPINGCO: And that
brings us to user movement. So now that we understand how
to design for the environment, let's think about how to
design for user movement. And as Alex mentioned,
it's completely OK to design beyond the
bounds of the screen. And what we've found
is that in many ways, this can make the experience
feel more delightful and even more immersive. Because when you have an object
that begins on-screen and also extends beyond the boundaries
of the phone's viewport, it can make the user feel like
the object is really there. And beyond that, it can
also motivate the user to organically begin moving the
phone around their environment so that they can
appreciate the full scale of these digital objects
in their physical space. And that brings us to
our next observation. Because users are more familiar
with 2D mobile applications that don't typically
require user movement as a form
of interaction, it can be very challenging
to help convey to users that they're able
to move around. So many users don't
because it just doesn't feel
natural based on how they've used 2D apps in the past. But what we realized is
that characters, animations, or objects that convey
visual interest on-screen and then move off-screen
can be a natural way to motivate users to move. So here we have a
bird and it appears in the middle of the screen. And when it flies
off-screen, it's replaced with a marker that
moves around and slides along the edge to help users understand the bird's location in relation to the user.
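One simple way to build that kind of edge marker, sketched here in Sceneform under the assumption that the marker is an ordinary Android View overlaid on the AR view (this version ignores the case where the object is behind the camera):

```java
import android.view.View;
import com.google.ar.sceneform.Camera;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Vector3;

// Sketch: when a tracked object (the bird) moves off-screen, clamp its
// projected position to the edge of the viewport and show an edge marker
// there instead.
void updateOffscreenMarker(Camera camera, Node bird, View marker,
                           int viewportWidth, int viewportHeight) {
  Vector3 screen = camera.worldToScreenPoint(bird.getWorldPosition());
  boolean onScreen = screen.x >= 0 && screen.x <= viewportWidth
      && screen.y >= 0 && screen.y <= viewportHeight;
  marker.setVisibility(onScreen ? View.GONE : View.VISIBLE);
  if (!onScreen) {
    // Clamp to the viewport bounds so the marker slides along the border.
    marker.setX(Math.min(Math.max(screen.x, 0f), viewportWidth - marker.getWidth()));
    marker.setY(Math.min(Math.max(screen.y, 0f), viewportHeight - marker.getHeight()));
  }
}
```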
Another major thing that you want to think about is that whenever you have an
experience that requires a user to move, you also want to
think about how much space a user needs. So we found that
experiences fall into three different sizes. There's table scale,
there's room scale, and there's also world scale. And when it comes to
table scale, what we found is that your experience
is able to scale to the smallest of surfaces
so that many users are able to enjoy your experience. And with room scale, it
expands the impact of AR so that content will
start to feel life sized and you're able to do a lot
more with the space that's available. And world scale has no limits. It allows users to appreciate AR
in whatever area they see fit. And this is an area
we're particularly excited about
because what it means for procedurally generated
content in world scale. So no matter what size your
experience ends up being, just remember to set the right
expectation for users so they have an understanding of
how much space they will need. Because it can be a
very frustrating part of the experience if the
user is playing a game and in the middle
of the game they realize they don't have
enough space to enjoy it. And when it comes to how
much movement your experience requires, there's no one
size fits all solution. It really depends
on the experience that you're trying to create. For example, if you have a game
that requires user movement as a core part of
interaction, that can be a very
delightful experience. You can use
proximity or distance to trigger different actions
so that as a user gets closer to this frog, it can
leap behind the mushroom, or maybe the mushroom
can disappear. And that can be
really cool to see. However, if you have
a utility app where the core purpose of the app
is to help users understand very complex data
and information, then requiring
users to move might be a really bad experience. Because what it
means is that users who have different movement
or environment limitations won't be able to get the
complete app experience. So allowing users to manipulate
the object to rotate it, to move it around in a space
that's more appropriate will ensure that all users
have easy access to the data that they seek. ALEX FAABORG: Because
AR is relatively new, the actual process for users to
flow from 2D parts of your app into 3D can be, at
times, a bit awkward. And we're just starting to create some sort of standards around that. So we'll talk about
initializing into AR. One of the first
things you can do is you can leverage the standard View in AR Material icon so that users, when they see it, know that hitting this icon is going to take them into AR. You can use this in
all the normal places that icons appear, like a floating action
button or on top of cards as the indicator that
you can actually view this object in 3D
in your environment. One of the next
things you'll see if you've been playing
with lots of AR apps is something that you might
not initially understand. I want to talk about
the concept of how understanding depth actually
requires some movement. So you'll see these
types of animations where I was trying
to get the user to move their phone around. So why is that
actually happening? Basically, we perceive depth
because we have two eyes, but we actually get a lot
of our depth information by actually sort of moving
our head around and being in the scene. And for the case of ARCore, most
current phones on the market only have a single
camera on the back, so the device only has one eye. And if it hasn't moved
yet, it doesn't necessarily know what's going on. So this is the first
thing the phone sees. It's going to say, all right,
well, that's interesting, but I don't totally have a sense
of where these objects are yet. And once you just move
just a little bit, then it becomes clear. As soon as you bring in a
little bit of movement, then you have enough of that data and
different angles on the scene that it can start to build up
a model of what it's seeing. So that's why we
have these animations at the start of the app to
try to get that movement, to try to get ARCore to
have enough information to recognize the scene.
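A common, simple way to decide when to dismiss that onboarding animation is to wait until ARCore is actually tracking a plane. A minimal sketch:

```java
import com.google.ar.core.Plane;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Sketch: keep showing the "move your phone" onboarding hint until ARCore
// has at least one tracked plane, then dismiss it.
boolean hasTrackedPlane(Session session) {
  for (Plane plane : session.getAllTrackables(Plane.class)) {
    if (plane.getTrackingState() == TrackingState.TRACKING) {
      return true;
    }
  }
  return false;
}
```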
The next thing you want to think about is deciding if users
are able to easily move the objects after
they've been placed or if these are really
more permanent objects. And there's, again,
no right answer here. So more persistent
objects might be like a game board or something
that itself takes input, but we want to recommend that
you use standard icons to set expectations for users so
they know as they're placing that object if that object
is going to move around later on as they swipe on it. So some examples of that. Let's say you are placing,
like, a city game. And here as you're
swiping on the city, you're actually going
to be interacting with the game itself. So we recommend using an
anchor icon for these more persistent object placements. And you still want
to enable the user to move the game board
later, perhaps through a menu screen or some type
of re-anchoring flow. So set expectations
ahead of time that the city actually
is going to be sort of stuck to the
ground there for a while as you interact with the game. Versus something like, say,
you're shopping for furniture and you're just placing
a chair on the scene. Here, the chair itself
is interactive, so you can actually map
swipe gestures onto the chair and just easily move it around. So using the plus icon to
kind of set expectations ahead of time that you're not really committing to exactly where you're placing this object.
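In Sceneform, that movable, plus-icon style of placement maps naturally onto a tap-to-place listener and a TransformableNode. This is a sketch, assuming the model has already been loaded:

```java
import com.google.ar.core.Anchor;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.ArFragment;
import com.google.ar.sceneform.ux.TransformableNode;

// Sketch: tap a detected plane to place a movable object. The renderable is
// assumed to have been loaded earlier with ModelRenderable.builder().
void enableTapToPlace(ArFragment arFragment, ModelRenderable chairRenderable) {
  arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
    Anchor anchor = hitResult.createAnchor();
    AnchorNode anchorNode = new AnchorNode(anchor);
    anchorNode.setParent(arFragment.getArSceneView().getScene());

    // TransformableNode gives the user drag, pinch-to-scale, and
    // twist-to-rotate gestures out of the box. For an "anchor icon" style,
    // more permanent placement, use a plain Node instead and move it only
    // through an explicit re-anchoring flow.
    TransformableNode chair = new TransformableNode(arFragment.getTransformationSystem());
    chair.setParent(anchorNode);
    chair.setRenderable(chairRenderable);
    chair.select();
  });
}
```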
All right. So now that we're talking about object interactions, there's actually quite
a bit of details there. ALESHA UNPINGCO: So
now that we understand how to onboard
users, let's start thinking about how
users can interact with objects in their space. One of the things
that we challenge you to think about as designers
and developers in the community is thinking about
how to problem solve for user behavior,
even when it's unintentional. So one of the
things we recommend is giving users feedback
on object collisions. And this solves a huge problem
that we see in mobile AR where a user will be
moving the device around and once the device collides
with an object in AR, that object might
disappear and the user has no feedback in
terms of how to fix it. So what we recommend is we
recommend providing feedback in the form of camera filters
or special effects that help users understand when
object collision is not an intended interaction. And this tends to work really well.
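Detecting the situation is usually just a distance check between the camera and the object; the feedback itself is up to your art direction. A sketch with an illustrative threshold and hypothetical effect helpers:

```java
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.math.Vector3;

// Sketch: detect when the phone gets too close to an AR object and show
// a camera effect instead of letting the object silently clip away.
// The 0.3 m threshold is made up for illustration.
private static final float NEAR_THRESHOLD_METERS = 0.3f;

void checkCameraCollision(Scene scene, Node objectNode) {
  Vector3 cameraPosition = scene.getCamera().getWorldPosition();
  Vector3 objectPosition = objectNode.getWorldPosition();
  float distance = Vector3.subtract(cameraPosition, objectPosition).length();
  if (distance < NEAR_THRESHOLD_METERS) {
    showCollisionEffect();  // hypothetical: fade, blur, or vignette overlay
  } else {
    hideCollisionEffect();  // hypothetical
  }
}
```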
The other thing that you want to think about is how to give users the
right type of feedback on object placement. And it's really
important in this case to think of each stage
of the user journey, even as it relates
to surface feedback. So surface feedback in
AR is very important because it helps users
understand how ARCore understands the environment. It gives the users a
sense of the surfaces that are available, the
range of the surfaces that are available. So we recommend
including feedback on the surfaces when the user
is placing objects in the scene. The other thing
that we recommend is maintaining the height
of the tallest surface as a user drags an object
from one surface to another. And once an object is
suspended in the air, make sure that you're always
communicating visual feedback on the drop point. That way it's very clear
to the user at all times where the object
is going to land. And once an object is
placed into the scene, we also recommend
providing feedback in the form of visual
feedback on the surface, or even on the
object itself, just to communicate the object's entry into the physical environment.
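A sketch of the two drag-feedback guidelines above, assuming you track surface heights yourself and pass them in; the node names are placeholders:

```java
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Vector3;

// Sketch: while the user drags an object across surfaces, keep it at the
// height of the tallest surface involved so it never dips inside a table,
// and keep a drop indicator on the surface directly beneath it so the
// landing spot is always visible.
void onDragBetweenSurfaces(Node draggedNode, Node dropIndicator,
                           float surfaceBelowHeight, float tallestSurfaceHeight) {
  Vector3 position = draggedNode.getWorldPosition();
  float dragHeight = Math.max(surfaceBelowHeight, tallestSurfaceHeight);
  draggedNode.setWorldPosition(new Vector3(position.x, dragHeight, position.z));

  // The indicator shows where the object will land if released right now.
  dropIndicator.setWorldPosition(new Vector3(position.x, surfaceBelowHeight, position.z));
}
```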
So now that we know how to play with objects in your scene,
object might get there. We recommend using gallery
interfaces in order to communicate to
users how they can take objects that live
on-screen and drag them out into their real world. So here you see we
have a gallery strip at the bottom bar. And as a user selects
an object, they're able to drag it
onto their space. And not only that, we're able
to support both selection states and also very familiar
gestures that allow users to manipulate the objects. So you can use pinch to
scale, twist to rotate, and even drag to move. And you see many examples in
our talk of how dragging objects is a very common and
expected behavior. But another alternative for
object selection and object movement is through a reticle. So reticle selection
is also very effective in that it allows
users to manipulate objects in their scene without covering
too much of the user's view. So we have an example here
where reticle selection is being used to select a rock. And that's triggered
via the Action button in the bottom right. But what it allows
users to do is that it allows users to
see the many surfaces that are available. And as you can
imagine, if a user is selecting an object
with their finger and dragging it
across the screen, you don't have as much
screen real estate to see all of the surfaces
that the user might want to place the object on. So reticle selection is very, very impactful here.
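A reticle is straightforward to sketch in Sceneform: cast a ray from the center of the screen when the action button is pressed and select whatever it hits. This is an illustrative sketch, not the demo's actual implementation:

```java
import com.google.ar.sceneform.Camera;
import com.google.ar.sceneform.HitTestResult;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.collision.Ray;

// Sketch: reticle-style selection. Instead of dragging with a finger (which
// covers the view), cast a ray from the center of the screen and select
// whatever node it hits.
Node pickWithReticle(Scene scene, int viewWidthPx, int viewHeightPx) {
  Camera camera = scene.getCamera();
  Ray centerRay = camera.screenPointToRay(viewWidthPx / 2f, viewHeightPx / 2f);
  HitTestResult hit = scene.hitTest(centerRay);
  return hit.getNode();  // null if the reticle isn't over any object
}
```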
The other thing that you get with reticle selection is raycasts. So raycasts are very
effective in helping the user get a sense of
the virtual weights applied to each of these objects. So here we have another
example where the user is able to pick up a feather. And once the feather
is picked up, you'll notice that the raycast
has very little movement and very little bend in it. And for the most part,
it remains straight. However, when the user
picks up the rock, you're able to see a more
dramatic bend applied to the raycast. That signifies the
larger amount of mass, the heavier weight of
this object in relation to the feather. ALEX FAABORG: All right. So let's move on to the final
pillar, which is volumetric interface design. I think one of the first things
you want to consider here is that the phone is
the user's viewport. They're actually using the
phone to look out into the scene and see the application. And because of that,
you don't actually want to place a lot of
2D UI on the screen. That's actually going to
obscure the user's view onto your application. So showing a sort
of quick example. It's obviously a lot nicer to
have a limited set of controls. As soon as you start
to clutter the screen, it really gets in the
way of the user's ability to enjoy the AR app. And the sort of
counter-intuitive thing that we've even found is that
users are so focused on the app out in the world
that often designers will place a control
on a screen level because they want to draw
attention to that control, but it's actually having
the opposite effect. The users are
actually more focused out in the scene, so they'll
actually just miss controls that are drawn on the surface. Just kind of tune those out. So really, you want to be
very mindful of when you're making decisions on if you're
going to put a control up on the screen versus out
into the scene itself. For not just obscuring the view,
but also for discoverability of them finding that. And that's not to say that you
should never put a control up onto the screen, but you
want to be considering a few different metrics on it. So our recommendation is
that you only really leverage on-screen surface UI
for things like controls that have a very high
frequency of use or controls that require very fast access. So like a camera
shutter button is kind of the perfect
example of something that hits both criteria,
where in a camera you're obviously
taking lots of pictures and also you want to take
pictures very quickly. But imagine if
you're playing a game and there is some ability
to, like, fire or something. That would be a good candidate
for an onscreen control because you're both
hitting that button a lot and also you need to get to
that button very quickly. So we talked about
using the View in AR icon to get people into the
experience and transition from 2D into AR, but you also want to be very careful about the opposite: when users are now in AR and
they're actually transitioning back to a 2D experience. And one thing we found is
that if the user is not initiating that action to go
back into a 2D experience, it can actually be pretty
obnoxious because they're so focused out in the scene. So the user's viewing
the application, and then suddenly
a 2D UI shows up and blocks the entire viewport. That can be pretty annoying. So depending on even
if the user is exiting, or they're customizing
an item in the scene, or whatever the use case
is, you want that flow back to 2D screen level UI that's
covering most of the screen to be something that the
user's actively doing, and not something that
happens by surprise. So a common thing with
mobile application design is you want to maintain
touch targets that are about the size
of the user's finger. For 3D, this is, of
course, a bit harder because the object could be any
distance away from the user. So quick example of some
things you can do here. Here we have two tennis balls. And when you tap
on the tennis ball, confetti fires out of
it because in AR you can do whatever you want. And we're showing
the touch target size with the dotted line. So one of these tennis
balls is actually maintaining a reasonable
touch target size as it gets farther away,
whereas the other one is not. It's just mapping to the
virtual size of the object. And, of course,
it's a lot easier to interact with the one that
is maintaining a large target size. We've also found
for interfaces where you're manipulating objects,
if you're not doing tricks to kind of maintain
target size, you often get these problems where you
swipe an object very far away, and then it's actually hard
to bring the object back because it's now
such a small target. So you have to actually walk
over to the object to get it, which is a little
bit frustrating. Maybe you could say
it's very immersive, but either way, it's nicer to be able to actually bring the objects back as well.
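One way to maintain a usable target size, sketched here with an illustrative pixel radius rather than any particular app's implementation:

```java
import com.google.ar.sceneform.Camera;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Vector3;

// Sketch: keep a minimum on-screen touch target for distant objects.
// Rather than relying only on the object's collider (which shrinks on screen
// as the object moves away), also accept taps within a fixed pixel radius of
// its projected screen position. 48 px is an illustrative value, roughly a
// fingertip; in practice you'd convert from dp.
private static final float MIN_TOUCH_RADIUS_PX = 48f;

boolean isTapOnNode(Camera camera, Node node, float tapX, float tapY) {
  Vector3 screenPoint = camera.worldToScreenPoint(node.getWorldPosition());
  float dx = tapX - screenPoint.x;
  float dy = tapY - screenPoint.y;
  return (dx * dx + dy * dy) <= MIN_TOUCH_RADIUS_PX * MIN_TOUCH_RADIUS_PX;
}
```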
So on the whole, you want to be thinking about what controls are going to be on the screen
versus what controls are going to be out in the scene. And kind of a mantra
that the team has had is to say, you know,
scene over screen. Obviously, we talked about certain cases when you'd want to put something
on a screen level, but I found that it's many
people's initial reaction to design everything
for the screen level because that's the
type of design work we've been doing
for 2D applications, but you really want
to start thinking more about volumetric UI and having
your UI out in the scene itself. To give a quick example of this, this is actually one of the demos that ships with Sceneform. It's a solar system simulator. Loads of fun. Also, it's missing
a planet right now. We did fix that for the
public release, in case you notice that in the video. But imagine now you need
to design the UI for this. So a lot of people
would initially think, oh, I'll have a
gear menu up in the corner. That will throw something
up on the screen. You know, the
problem there is then you're not going to be able
to sort of be as immersed in the simulation itself as
you're interacting with it. So an alternative
way of doing that is actually leverage these
objects on the scene itself. So as you're tapping
on planets, you'll get feedback on
what planet that is, which is nice for
educational use cases. And then in this particular
demo, when you tap on the sun, that's how you start to control
the entire solar system. So here the user is
tapping on the sun, and that brings up a panel. This is actually
an Android view. So in Sceneform, you can map just standard Android views into AR.
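A minimal sketch of that pattern, using a placeholder layout name and an arbitrary offset above the tapped node:

```java
import android.content.Context;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.math.Vector3;
import com.google.ar.sceneform.rendering.ViewRenderable;

// Sketch: attach a standard Android layout to a node in the scene, the way
// the solar system demo shows its control panel when you tap the sun.
// R.layout.solar_controls is a placeholder layout name.
void showControlPanel(Context context, Node tappedNode) {
  ViewRenderable.builder()
      .setView(context, R.layout.solar_controls)
      .build()
      .thenAccept(viewRenderable -> {
        Node panel = new Node();
        panel.setParent(tappedNode);
        panel.setLocalPosition(new Vector3(0f, 0.5f, 0f));  // float above the node
        panel.setRenderable(viewRenderable);
      });
}
```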
And here you have controls like changing the orbit speed or
the rotational speed of the planets themselves. And it's really nice
to be able to interact with these objects
in the scene, and not to have that sort of sudden
loss of being able to see things and being sort of taken
out of the experience. And that kind of brings me
to my final point, which is this idea of AR presence. So we'd actually seen this
coming up in user research studies where people would
be looking through a phone and then they would kind of
look outside of the phone to see that something
was placed correctly. And then, of course, we're
recording and so they laugh. And they're like, oh, yeah. Right. Of course. I can only see it
through the phone. And we always laughed
when we saw this happen. And then I was
testing out an app. It was these sort of
plastic interlocking bricks and I had instructions
of what I was building, and I was playing
it for a long time. And at one moment I looked over
to see the instruction book, and it wasn't there. And I had the reaction
that you normally have if, like, an object
just disappears in real life. And, of course, then immediately
I'm like, oh, that's silly. Yeah. It's AR. But I was so immersed in the
experience and the application, and I'd been playing it for
so long that I was no longer kind of mentally tracking what
was real and what was virtual. And I was just sort of buying
that the experience was happening. And you're going to start
to have this experience as well as you're interacting
with these applications. And I would say that's the
moment when your application is performing really,
really well because it means that the user is
just completely immersed and engrossed in
the application. So if you ever have
these moments where people are looking at a
vase through their phone and then they look down and
it disappears and they react, that's good. That means the app
is performing great. ALESHA UNPINGCO: All right. So we've gone through the
five pillars of AR design, which, again, include
understanding the user's environment, planning
for users' movement, onboarding users by
initializing smoothly, designing natural
object interactions, and balancing on-screen and
volumetric interface design. And again, this
framework, we believe, will help anybody get
started with creating apps that everybody can enjoy. So we have a quick
video for you. Some amazing content that
many designers and developers like yourselves from the
community have created. We hope you enjoy. [MUSIC PLAYING] We're very happy to
share with you everything that we had today,
and we look forward to seeing what you create. Please fill out our survey and
check out our resources online. Thank you. [MUSIC PLAYING]