ROMAIN GUY: Welcome, everyone. LUCY ABRAMYAN: Welcome. ROMAIN GUY: I'm Romain Guy from
the Android Framework Team, and now I can finally reveal
some of the stuff I've been working on. LUCY ABRAMYAN: And
I'm Lucy Abramyan. I'm an engineer on
the AR and VR team. I'm actually going to
start off by telling you something very personal
to me and that I'm very passionate about. But it's not just one thing. It's actually a combination
of things that I love. Ever since I was a child,
I've marveled at space. Isn't it amazing? Just think about
it for a second. The diversity of the
planets, everything that's going on around the
solar system right now. Wouldn't it be amazing if we
could just see it up close? There's just one
problem, though. Humans can't go
to another planet. Not even Mars, just yet. Or can we? And this starts the
second part of my passion. Augmented reality is delightful. It helps you interact with the world through a whole new experience that is catered to your environment. ARCore is Google's platform for
augmented reality applications. But even if we
use ARCore to help us understand the environment,
what do we do about 3D rendering? OK, so while it's not
exactly rocket science, today, 3D rendering has
a steep learning curve. But if you're like
me, you're still really passionate about space
and AR and 3D and rendering and graphics and matrix
math, so you just start coding and coding
and coding and coding. And then you quickly realize
there's a lot of code. ROMAIN GUY: Yeah,
actually, how many of you have written OpenGL in your developer life? So first of all, I'm so sorry. And second of all, you probably
know how difficult it is. I've been doing this kind of
stuff for basically 10 years at Google, and it's a pain. LUCY ABRAMYAN: This proves
my point right here. It's difficult.
And we've noticed that there's so much in
common throughout all AR apps, things like streaming the
camera image to the background, or even just making an
object appear on the screen. So we wanted to
take care of that so you don't have to go
through all the pain. ROMAIN GUY: No, I
go through the pain. LUCY ABRAMYAN: That's why
we built the Sceneform SDK, to make AR development
quick, simple, and familiar to any
Android developer. Sceneform is a 3D framework
that makes it easy for you to build ARCore apps. It includes an
Android Studio plugin that allows you to import,
view, and even edit 3D models. The API offers a high-level
way of working with 3D, and it's tightly
integrated with ARCore, so that makes it especially
easy to build AR apps. Because it integrates with
the Android view framework, you can easily add AR
into an existing app or create one from scratch. The New York Times used
Sceneform in their new AR articles. You can download the New York
Times app today and search for augmented reality, and not
only read about David Bowie, but also walk around a
mannequin wearing his costumes. Otto, the online
furnishing retailer, used Sceneform to
allow customers to see what a piece of furniture
looks like in their living room before they buy it. If you're from a
European country, you'll be able to download
the app and try it out. ROMAIN GUY: And
actually, a couple of the engineers from Otto
are in the audience somewhere, and I just want to
thank them because they were very patient with us when
we were working on Sceneform. They went through all the
early versions of the API, and it wasn't always
working right. So thank you. LUCY ABRAMYAN: Thanks
for your patience. Well, thanks for
helping us build it. ROMAIN GUY: Yes. LUCY ABRAMYAN: Yes. I'm going to play this video
again, because I love it. Remember that solar
system I was talking about that I wanted to bring to you? Here it is, right
inside your living room. And trust me,
without the thousands of lines of rendering code. You can see this
code online right now by going to our
GitHub repo linked at the end of the presentation. So let's walk through
how we built this app. So first, we'll start out
with some common concepts of AR apps, and then show you
code snippets using Sceneform. After we've covered
the basics, Romain will go into detail about
physically based materials and give you all sorts
of rendering knowledge to help you optimize
and make beautiful 3D objects in your app. The Sceneform API
consists of two concepts-- the scene and the view. The scene represents the objects
you are adding to the world, like the 3D models that you
want to place in your AR app. The view is where your
scene will be drawn. In this case, it's
your device's screen, positioned wherever the device is in the world. The renderer will draw the
scene from this perspective. The view ties into the Android
view framework or system, and that's your
hook into the app. As a developer, you
will build your scene by defining the spatial
relationships of objects. To do this, Sceneform
provides a high level SceneGraph API for defining
the hierarchy of objects and their spatial relationships. One analogy that I like to
think about is the Android view hierarchy, but instead of-- obviously, in 3D. And instead of views,
we use graph nodes. Each node contains
all the information that Sceneform needs to render
it and to interact with it. And finally, nodes can
be added to other nodes, forming a parent-child
relationship. In our solar system
example, planets orbit around the sun, and
moons of these planets orbit around the planets. A natural way to define the
solar system in a SceneGraph is to make the sun the
root node and add planets as the sun's children and add
the moons as their children. That way, if you want to animate
the Earth orbiting the sun, the moon will just
follow the Earth. You don't have to do the
complicated math to figure out the moon's
relationship to the sun while the Earth is orbiting it. In this video, it's the
same solar system example that we saw a little
bit ago, but we've also added touch interaction. Nodes become interactive when you add touch listeners to them, and we propagate touch events through the SceneGraph the same way Android touch events are propagated through the view hierarchy.
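
As a minimal sketch of what that listener looks like (the earthNode here is a hypothetical node, not code from the talk):

    import android.view.MotionEvent;
    import com.google.ar.sceneform.HitTestResult;
    import com.google.ar.sceneform.Node;

    // Hypothetical node from the solar system; any Node can receive taps.
    Node earthNode = new Node();

    // The tap is dispatched through the SceneGraph much like a regular
    // Android touch event moves through the view hierarchy.
    earthNode.setOnTapListener((HitTestResult hitTestResult, MotionEvent motionEvent) -> {
        // React to the tap here, e.g. show an info card for this planet.
    });
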
Nodes can contain 3D models, like the planets or the sun, or 2D Android views. You can create an
Android view just as you would in
your layout editor and put it in the world
and interact with it like you would
with any other app. It's a first-class
citizen of our scene. So how does an Android
developer usually get started? With Android Studio, of course. I won't go too deep into this
because there is another talk tomorrow morning called
Build, Iterate, and Launch that will show you
all of these steps, but I want to note that we have
built a plug-in for Android Studio that allows you to drop
in 3D models that are built with standard modeling
tools like Maya, or you can download
them from Poly. You go through the import flow,
and that converts the models into the SFA and SFB formats. These formats are particular to Sceneform, and the SFB is what gets bundled into your app. So drop the SFB file into the res/raw folder or the assets folder, and you can ship it with your app. One thing we wanted to note-- yes, you can view
and edit your models right there in Android Studio. ROMAIN GUY: And what's
important to note is that we use the same
renderer on the device and inside Android Studio,
so what you see in Studio will be what you
see on the device. One other thing to
note, again, there are going to be more
details tomorrow-- the importer acts
as a Gradle plugin, so for every asset that you import inside your project, you're going to get a new Gradle task, which means that if your
designers give you a new version of the
asset, all you have to do is replace the file
in your project, and the next time you build,
we will reconvert the asset automatically and
you'll be up to date. So you won't have to worry about
going through a manual wizard every time you get a new
version of the asset. LUCY ABRAMYAN:
Let's start coding? Yeah. Let's start off with our onCreate method in your activity, just as you would. Find the AR fragment-- Sceneform's ArFragment. This takes care of setting up the ARCore session for you and also manages the lifecycle of Sceneform. And naturally, it will contain the AR view, and the view holds a reference to the scene.
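
Roughly, that setup looks like this (a minimal sketch; the layout and id names are illustrative, and the exact fragment base class depends on your Sceneform and AndroidX versions):

    import android.os.Bundle;
    import androidx.appcompat.app.AppCompatActivity;
    import com.google.ar.sceneform.Scene;
    import com.google.ar.sceneform.ux.ArFragment;

    public class SolarActivity extends AppCompatActivity {
        private ArFragment arFragment;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_solar);  // a layout containing an ArFragment

            // R.id.ar_fragment is whatever id you gave the fragment in the layout.
            arFragment = (ArFragment) getSupportFragmentManager()
                    .findFragmentById(R.id.ar_fragment);

            // The fragment owns the AR view, and the view holds the scene.
            Scene scene = arFragment.getArSceneView().getScene();
        }
    }
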
I want to note here that you don't have to use our fragment. You can use the view, and therefore the scene, directly. ROMAIN GUY: And this is
one of the powerful things about Sceneform. Because it's just using a
regular Android view that happens to be a
surface view, you can drop it wherever you
want in your application. It doesn't have
to be full screen. And more importantly, we don't
take over your application. If you use something like Unity, for instance, Unity becomes your application. Sceneform is a very easy way to embed
AR inside an Android application that already exists. And the New York
Times, for instance, is a great example of how
far you can push this, because they show AR
through a web view. So they just put an AR view
behind a transparent web view. And as you scroll
the web view, they load content into the AR view. So you can do pretty
complex things. It doesn't have to be just
a simple full screen example like you're seeing here. LUCY ABRAMYAN: So let's start
loading all of our models. And in this case, I have a to-do for you to load the rest of them, but I'll show you how to build one 3D model. In Sceneform, we have two types of-- not models, sorry-- renderables. Renderables are the things that are going to be rendered on the screen. I know, we're very creative with the naming. In this case, we want to load the sun model, which was dropped into our res/raw folder. And so you set the source and build. I also want to point out that these are CompletableFutures, so the loading happens in the background, and you can accept and handle the result as you would with any CompletableFuture.
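
As a sketch, the loading call looks something like this (sunRenderable and TAG are assumed fields of the activity, and R.raw.sun is the imported .sfb resource):

    import android.util.Log;
    import com.google.ar.sceneform.rendering.ModelRenderable;

    // Builds the renderable asynchronously; the CompletableFuture completes
    // once the .sfb has been loaded in the background.
    ModelRenderable.builder()
            .setSource(this, R.raw.sun)
            .build()
            .thenAccept(renderable -> sunRenderable = renderable)
            .exceptionally(throwable -> {
                Log.e(TAG, "Unable to load the sun renderable", throwable);
                return null;
            });
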
I should also mention Kotlin here. We could do all of this in Kotlin, but all of the code snippets will be in Java. ROMAIN GUY: One thing
we haven't done yet is build Kotlin
extensions for Sceneform, but if you are
inclined to do so, I would love to see your code. LUCY ABRAMYAN: So
remember earlier, we showed you this
graph representing the sun and the planets. Keep the structure in
mind, because we're going to be loading and
creating the SceneGraph. We've loaded all
the models, so now let's build the solar system. We start off with the sun node: create a new node and set its renderable to the sun model that you loaded. Next, we create the Earth node. We set its parent to the sun, which starts building the SceneGraph. Notice here that I have defined the Sun-to-Earth distance in meters as a constant. In this case, I've used 0.5 meters, because I wanted to fit the solar system inside the living room. We set the Earth's local position relative to its parent, the sun, set the renderable that we loaded, and continue. Now, the moon: create the node, set its parent to Earth, and notice that the moon's local position is the Earth-to-moon distance in meters-- I think I've set it to 0.1. It has nothing to do with the sun. Set the moon renderable, and you're set.
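
A minimal sketch of that graph in code might look like this (the renderable fields are assumed to have been loaded as shown earlier):

    import com.google.ar.sceneform.Node;
    import com.google.ar.sceneform.math.Vector3;

    private Node createSolarSystem() {
        float sunToEarthMeters = 0.5f;   // scaled down to fit in a living room
        float earthToMoonMeters = 0.1f;

        Node sun = new Node();
        sun.setRenderable(sunRenderable);

        Node earth = new Node();
        earth.setParent(sun);  // parent-child relationship in the SceneGraph
        earth.setLocalPosition(new Vector3(sunToEarthMeters, 0f, 0f));
        earth.setRenderable(earthRenderable);

        Node moon = new Node();
        moon.setParent(earth); // the moon's position is relative to the Earth, not the sun
        moon.setLocalPosition(new Vector3(earthToMoonMeters, 0f, 0f));
        moon.setRenderable(moonRenderable);

        return sun;  // the root node gives us the whole graph
    }
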
Now, to animate their orbit, you can just use Android's property animation system, and we've created some evaluators for you to use.
our solar system. So let's bring the solar
system into the scene. All we have to do is
parent it to the scene. And that's it-- or parent
the sun to the scene. And the solar system
is now in your room. Or we could do something else. Anchors-- I don't know if you
know about ARCore anchors, but they are how you attach
your content to the world, and ARCore makes sure that
they are anchored to the world. You can get anchors
from ARCore's hit test, based on the user's
touch events, or, with our new
cloud anchors, you can use an anchor that
another friend has created on another device. For using anchors, we've
created an extension to node, called the anchor node,
that just takes an anchor and now brings that anchor
into our SceneGraph. Set the anchor node's parent
easily, just to the scene, and then set the Sun's
parent to the anchor node. Notice how we've
rearranged it a little bit. First we had the scene with
the Sun as a root node, but now we've created the
scene, the anchor node, and then the Sun and the rest
of the solar system. I was mentioning model
renderables before. So now, we'll talk about
the 2D view renderables. To add 2D into your
app, in this case, I've used a little
info card that pops up about a quarter of
a meter above a planet, say, and you want it to display
some information on it. So I've created this node that
would hold the view renderable. I've set it to a
planet, and it will now float a quarter of a meter
above the planet, wherever that planet is. Start off by building
a view renderable. But notice here that
instead of the [? resara ?] source for the model renderable,
I've set it to the view ID, and that's all there is to it. Now you have the 2D
view, just as you would have in any
other application, and you could do things with
that view like set the text. And finally, if you want
to drag, scale, and rotate objects, we've made
that really easy for you by creating
a transformable node. The transformable node
is an extension to node, but it also understands
touch events and gestures, like dragging and scaling
and rotating objects. So in this case, if, instead
of creating a sun node, I created a
transformable sun node, I would just do everything
else that I did before, but now we can actually drag
and move the solar system. I should mention that there
was a talk earlier today about UX interactions in AR. So if you want to know
more about the best practices for UX, please go
back and look at that talk. I'll hand it over to Romain. ROMAIN GUY: Let's talk
about the materials. So this is a deceivingly
simple part of Sceneform, but before you
can understand how to create your own
material, you have to understand the concept behind
physically based rendering. So after importing assets through the Android Studio plugin, like Lucy said, you end up with two files: a .sfa and a .sfb. The .sfb is the binary that goes into your application and that we load at runtime. The .sfa is, effectively, a
json description of the asset. It looks a little bit
something like this. So this is the sfa
from the moon-- the Earth's moon from our example. And you can see these attributes called baseColor, normal, and metallicRoughness. Those point to
textures that are defined somewhere else in the sfa file. I'm not going to go into
details about the syntax and the structure
of the sfa file, mostly because there is
excellent documentation that's available online. There's also the talk
tomorrow morning, so you should check that out. But what I want to talk about
is what kind of textures we need to create: the base color, the normal, and the two less familiar ones, the metallic and roughness textures. But again, before
we do this, we have to talk about physically
based rendering. So physically
based rendering has started becoming quite popular
over the past, I would say, three or four years. It started in the VFX industry and is now used by a lot of triple-A games. It's still fairly uncommon on mobile. The basic idea behind it is that we rely on physical principles to define all the behaviors and all the equations that we use in the rendering system. That includes things like separating the lighting code from the code that defines the surface, so the lighting doesn't impact the materials themselves. It means that we have to take
into account laws of physics, like the energy conservation. We use physical light
units, so for instance, when we declare a sun as a directional light, we use the unit called lux. If you were to add a light in your AR scene that's just a light bulb, you can use lumens or watts as the unit. So it's a number
of things like this that are grounded in reality
that will help us validate our rendering, which also makes
our life easier, because those are things we deal with every day
and that feel natural to us. And I'm going to
show you an example. So this is an example-- it's not in AR,
but this has been rendered using a
rendering engine, just a very simple sphere. And you can see that there
are reflections on the sphere. What you are seeing
here in action is a physical effect called the Fresnel effect, named after a 19th-century physicist. The closer you are to the edge of the sphere, the more you can see the reflections.
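
The talk doesn't show the math, but for reference, real-time engines commonly approximate the Fresnel reflectance with Schlick's formula, where F_0 is the reflectance looking straight at the surface and theta is the angle between the view direction and the surface normal:

    F(\theta) \approx F_0 + (1 - F_0)(1 - \cos\theta)^5

As theta approaches a grazing angle, cos(theta) goes to 0 and F approaches 1, which is exactly the "more reflections at the edges" behavior described here.
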
And this is a natural phenomenon that you can see everywhere. So this is a photo I
took in Lake Tahoe, and you can see that, close to
the edge where I was standing, you can see through the water. And the further away
you get, the more you can see the
reflections on the water. And, again, you can see
this on every object. Every object around you
follows this principle. Might not be obvious at first,
but this is what happens. And we recreate this kind
of physical behaviors in our rendering
engine to make things look as realistic as possible. So we follow a work flow format
called the metallic roughness workflow. If you look that
up, you're going to see there's a lot of
information out there. The metallic
roughness workflow is available in a lot
of popular tools. If you've used Unity 5,
you have access to it. Unreal Engine 4 uses it. Recent versions of Blender have access to the metallic workflow, as well. So the way it works is, when you
want to define a new material, you're actually trying
to describe it and define a surface. And to define a surface, you're
going to need three things, and we're going to take them
in this order for a very specific reason, and we're
going to explain them in a little bit. The first one is you need to
define the metallic property of the surface. Often, we talk
about metallicness, but it's not even a word. So metallic, you
know, at a high level, is whether or not the
object is a metal. And we're going to see why
this is very important to us. Then you're going to define
the color of the object, and we call it the
base color, as opposed to the diffuse color
or the specular color. Those are terms
that you might have seen if you use older
engines, and, again, there's a reason for that. Finally, you need to define
something called a roughness. Roughly, it means how
shiny the object is. And if you want to
go further, there are two other things
you can define to give your surface
more details and a more natural appearance. So the first one is the
normal or the normal map. It just helps break the
evenness of the surface. And the second one is called the
ambient occlusion, or occlusion. So, first, what is a non-metal? I'll spare you the equations. This is actually fairly simple. So here at the bottom, we have
an object-- an orange object. It's drawn in gray to make
the diagram easier to read. But we have light coming
from a light source and hitting the
surface of the object. And the light that
hits the object gets split into two components. The first one in the white
is the reflected part. So those are all
the reflections. So in that orange ball that
we saw before, at the edges, you could see those reflections. That's this part of the light. And then the other
part of the light is refracted into the object. And most objects around us,
they absorb some of the light. The light enters the object. It gets scattered, bounces
around inside the object. And, eventually,
some of the light will come out,
but not all of it. And this is what
gives objects their colors. In that particular case, because
the object appears orange, it just means that green and
blue components of the light have been absorbed
inside the object, and that orange
light that comes out is called the diffuse light. The white light is called
the specular light. So specular is for
the reflections, and diffuse is for
everything else. So this is an example of a
non-metallic orange ball, and you can see
these reflections. If you look at the
reflections here, this is simulating
an environment inside the classroom where we
have overhead white lights. And you can see that,
even though the object itself is orange, the
lights appear white. That's because they
just bounce off. They don't enter the object. They don't take the color
of the object into account. And then the rest of the object,
the non-reflective parts, are orange as expected. Now, when you have a metallic
object, metals are conductors, and they're called conductors
because, when that energy hits them-- in this case, light-- the part of the energy that's
refracted into the surface gets absorbed. It gets transmitted into the object and does not get scattered back outside, so there's no diffuse light. You only get reflections. However, what happens
with these reflections is that they get to take
the color of the object. So you don't get the
white reflections anymore. You get the orange
reflections instead. So if we take the same ball
that we just saw, and we turn it into metal, look again
at the overhead lights of that classroom. You can see that they
now appear orange, and this is because the
rest of the specular light was absorbed inside the object. You can also see another side
effect of this Fresnel law that we just talked about-- at the edges of
the sphere, you can see that the reflections
are not orange anymore. They take on the color
of the environment. And this is, again, a
very natural effect. It feels weird when you
see it for the first time in the rendering engine, but
rendering engineers like me, who are obsessed with their work, come Christmastime, they look at the Christmas tree and take a picture of one of the ornaments. Here, I was holding one of my phones to the side, lighting this green metallic ball with an orange light from the phone's wallpaper, and you can see that, on the edge of the ball, the reflections are suddenly not green anymore. They take on the orange light
coming from that light source. So what we just saw
here can appear weird when you see it on the
computer for the first time. But this is an effect that's
perfectly natural that happens everywhere around you. So the metallic property-- I mentioned that this is the
first thing you should decide, and you've now seen
why, because when you define the color of the
object, when you define the base color, depending on whether the object is metallic or not, it's going to dramatically change the appearance of the surface. So you should
always decide first whether you're dealing
with metal or non-metal. So when it comes time to
create the actual texture, the metallicness
of the object can be defined as a
grayscale texture, so it uses values between 0 and 255. At 0, the object is not a metal, and at 255, when it's white, the object is a metal. Most of the time, the value should be either 0 or 255. All the values in
between are mostly used for anti-aliasing
purposes, because, you know, your texture could contain
maybe a metal that's painted, and the paint itself
is not a metal, so, at the edges of the paint,
you want nice transitions from metal to non-metal. Certain alloys in real life
happen to be a mixture, and you can use intermediate values as well. But most of the time, you won't have to deal with this. You don't have to
worry about it. Very quickly, when
a metal gets rusty, the surface becomes non-metallic. So if you are trying to
create a surface that's rusty, the rust stains
will be non-metal. Yeah-- oh, yeah, so that's
something that, again, you'll see tomorrow. You don't have to use
textures for everything. You can also use constants. And very often, you
can get away with not having a texture for the
metallicness of the object. You can just say either
it's a metal or it's not. OK. Next step, you have to
define the base color, so the color of the object. It defines either
the diffuse color of the object for
nonmetals, or the specular color, or the color
of the reflections, for metallic objects. And what's quite difficult to
do when you create a base color texture is that it must be completely devoid of any lighting information or shadowing, and we're going to
look at an example. And it can be hard
because, as human beings, we never see the actual
color of an object. We only see objects
through lighting. So it's difficult to
imagine what it looks like, but you can quickly
get used to it. And whenever you use a tool
like Photoshop or Affinity Photo to build your
textures, make sure that you're working in the
sRGB color space, which is what these tools should be
doing by default, but just in case, make sure
you and your artists work in that color space. So this is a quick guide of how
to build colors for objects. Based on real world data,
most of nonmetallic objects use most of the range
of the brightness, so whenever you pick a
color in the color picker for a nonmetallic object,
the values of your RGB colors should be between 10 and 240. And there's nothing
as dark as 0, there's nothing as bright as
255 when we deal with nonmetals. In metals, on the
other hand, the values are always fairly bright. So dark metals basically don't exist, and you should stay in the range shown here. It was mentioned
that you should not have any lighting
information inside your base color of the object. And you can see here a set of
swatches taken from real world observation. And you can see that gold, for
instance, which in real life appears quite yellow and saturated, has a base color that is actually not that saturated. All the colors in the base color tend to be pale compared to what you actually perceive. So here's another example.
material to represent bricks, and on the left, you can
see the base color texture. And you see the difference. Once we light the object, all
the contrast and saturation appears. But all of that information
is not in the original texture that we're using to
create this material. So, again, if you
work with your artist, make sure they're familiar
with the metallic roughness workflow, or make
sure that they just understand that the base
color map should not contain any lighting or shadowing. Now, I mentioned the
third parameter, which is called the roughness, and it
defines how shiny an object is. So a simple way to
define a surface is like this-- it's
infinitely smooth. There's no object in
the real world that's perfectly smooth, of course. And what happens when
you have a smooth object, rays of light that are coming
parallel to each other bounce off parallel to
each other, as well, so you get very
sharp reflections. Rough objects, on
the other hand, have what we call
microfacets at the surface. You can think of those
as very tiny mirrors that may not be oriented
in the same directions. So when light comes in, you
have these rays of light that can bounce off
in random directions, and that creates
blurry reflections. And those are examples. At the top, you can see
a ball of yellow metal, and we increased the
roughness from 0 to 1. And at the bottom, we
have a nonmetallic ball, and we increased the
roughness from 0 to 1. And you can see the effect here. We start with very
sharp reflection, and as we get closer
and closer to 1, the reflections become so blurry
that we can't even perceive that they are reflections. They're there, but they're
spread over, basically, the entire surface-- the entire visible surface. So this is a very
powerful feature, because it lets you,
again, create things like polished metals
or plastic that has been used for quite
a while, and it's become, basically, rough. So roughness is very similar
to the metallic property. It's a grayscale texture. Use values between 0 and 255. At 0, your surface is going
to be very glossy and shiny. At 255, it's going to
be extremely rough. You're not going to be able to
see the reflections anymore. And just be aware that
there might be differences between different tools. So if you specify roughness
of 100 in Blender, that same roughness
might look a little bit different in a different
engine, because there are different ways of
doing those computations. You shouldn't worry
too much about this. Just tweak the
asset until it looks right. Sometimes, the roughness will
be called glossiness instead, and glossiness is just
the opposite value, so you can just invert
the texture in Photoshop, for instance, to get
the roughness map. Next, we want to
add some detail, so to save on
performance and memory, we try to use simple, smooth surfaces when we build our meshes out of triangles. So here we have an
example of bricks that are completely smooth. To add some details, we
can use a normal map. A normal map looks like this. When you apply it to
the object, suddenly you get a little bit of shadowing
and more information, more details on the surface. I'm not going to go into too
much detail about normal maps, because there's a ton of
information available online, so we're going to skip that
before we run out of time. The only thing to know is
that the colors inside the normal map encode a vector. It's a direction, not a color. The next one is ambient occlusion. So here we
have our bricks, and they have been
textured properly. They have a roughness map. They have a metallic map. They have a normal map. But we are lacking what we
call macro scale shadowing information, because a
brick has depth, so it will create shadows on itself. Like, the surface should be
casting shadows on itself. But because we don't have
access to the triangles, instead, what we've created is
just a black and white texture that tells us where
the shadows should be. So, again, that's called
ambient occlusion. Let's see the before and after. That's after, before, after. So, again, adds a lot of detail
and depth to your object. And the ambient occlusion map
is just a grayscale texture. When the values are 0, the
pixel is completely in the dark. You should never have
values set exactly to 0. At 255, there's going to be no shadowing. It doesn't affect all of the lighting, only the indirect lighting-- I'm not going to go into
too much detail here. So any time you
create an object that has cracks or crevices,
that kind of stuff, you should be using the
ambient occlusion map. So in the end, we have
our five textures. We have the metallicness, the
base color, the roughness, the normal, and the
ambient occlusion. And if we put them all together
in this particular example, you can create an
object where everything varies from pixel to pixel. So here we have a metal
ball, but for some reason, some of the tiles are missing,
and those are not metal anymore. Their reflections are just gone. So you can see, just with these textures, we can create very impressive
variations from pixel to pixel and create most
real world materials in a photorealistic manner. One thing you can do to
optimize your materials, especially if you use the file format called glTF, which I'm sure they're going to talk about in more detail tomorrow: you can pack the channels into a single texture. Ambient occlusion, roughness, and metallicness are grayscale images, so they can each fit in one of the channels of an RGB image. You can do this easily in any good photo editor, like GIMP, Affinity Photo, or Photoshop, and then you have only one texture instead of three. It's going to speed up your load times, and it's going to speed up your rendering as well.
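
If you'd rather script this than do it by hand, a rough illustrative helper (not part of Sceneform) could pack the maps with plain Android bitmaps, following the glTF convention of occlusion in red, roughness in green, and metallic in blue:

    import android.graphics.Bitmap;
    import android.graphics.Color;

    // Packs three same-sized grayscale maps into one RGB texture.
    // In practice you'd usually do this in a photo editor or as a build step.
    static Bitmap packOrmTexture(Bitmap occlusion, Bitmap roughness, Bitmap metallic) {
        int w = occlusion.getWidth();
        int h = occlusion.getHeight();
        Bitmap packed = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int r = Color.red(occlusion.getPixel(x, y)); // occlusion -> red
                int g = Color.red(roughness.getPixel(x, y)); // roughness -> green
                int b = Color.red(metallic.getPixel(x, y));  // metallic  -> blue
                packed.setPixel(x, y, Color.rgb(r, g, b));
            }
        }
        return packed;
    }
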
So now I want to talk about performance a little bit. One of the features that we
have inside of our rendering engine is something
called dynamic resolution. What we do is we always watch
the time spent on the GPU to render every frame. And instead of dropping
frames when there's too much to render, we adapt
the resolution of the rendering, so we smoothly adapt
the resolution, both on the vertical and
horizontal axis-- sometimes first one axis, then the other, sometimes
both at the same time. So what this means
for you is that, as you're building
an AR scene, if you make it too complicated,
if you have too many objects, if you have materials
that are too complex, we're not going to drop frames. We're always going to favor
performance over anything else. But we are going to lower
the resolution of your scene. It works really
well on our phones, because we have really
high density displays, so it's really hard to
tell when this is going on. And I'm sure that most
of you won't even notice as you are using the app. But, basically,
it boils down to, do you want to see a complex
scene at lower resolution, or do you want to see a simple
scene at higher resolution? The maximum resolution
we currently use is 1080p, which means that on
devices like the Pixel 2 XL, we're not going to use the
full resolution of the display, because it's just
way too many pixels to be able to drive
physically based rendering. LUCY ABRAMYAN: Romain, I want
to call out that the renderer does this automatically. ROMAIN GUY: Yes, it's
done automatically. You don't have to
worry about it. We take care of
performance, in this case. Meshes-- when
you create your object, it can be really tempting,
especially for your artist, to add a lot of triangles to
create really smooth surfaces. But we're running all
this on mobile phones, so we should be careful with
the complexity of the objects. To give you a rough idea of
what we call a hero object-- an object you can get pretty close to-- it should have maybe, at
most, 10,000 triangles. But even that, if you can avoid
using that many triangles, it would be great. And if you use 10,000
triangles in one object, make sure there's only one
of them and not 100 of them. Otherwise, performance
is going to suffer. And this is already-- with some
of our early access partners, this is one of the common
issues that we've seen. Really tell your artist
to simplify the models as much as possible. And it's particularly important
because, every time we have a triangle that's
smaller than a pixel, the GPU is going to
do way too much work. I'm not going to
go into the details here, because you
probably don't care. But, basically, we might end
up doing the work four times, and we really don't
want to do that. As for the complexity of the scene: thankfully, in AR, you're probably not going to add a lot of objects, but if you create an AR scene, for instance, where you
have a model of a city, you might be tempted to put a
lot of objects in that city. You know, every building or
every car, every pedestrian. What we're recommending
is that, at most, you should have maybe
100 objects visible at a time on screen. And the reason here
is because we're going to run into-- the CPU is
going to become a bottleneck. So this is not something
that dynamic resolution can help with. We avoid rendering anything
that's not on the screen. We have a lot of
optimizations around that. But if we have too
many objects on screen, we're going to be
bound by the CPU, and there's not much
we can do about it, and you're going to
start dropping frames. And dropping frames is
particularly bad in AR. I mentioned the
format called glTF. We support OBJ and FBX, and glTF is a new standard driven by Khronos. Khronos is the consortium behind OpenGL and Vulkan. A lot of tools support glTF, websites like Sketchfab use it, and a lot of assets are available in the glTF format. And one of the reasons why I like glTF is because, in the standard material definition, the occlusion, the roughness, and the metallic go inside a single RGB texture, which is something you should be doing for
performance reasons. So if you can ask your artist
to give you glTF models, it's going to make, by default,
your models a little more efficient to render. One thing we didn't show you
how to do with the APIs: you can add lights to your scene, and you can add many, many of them. This is an example of an early demo of our rendering engine running on my Pixel 2. Here, I think we had something like 128 lights in the scene, so you can have many of them and still run at 60 frames per second. What is very important is that, if you add many lights to your scene, make sure they don't overlap, or don't overlap too much, because if two lights touch the same pixel, we have to do the work twice. If you have 100 lights on one pixel, we're basically doing 100 frames' worth of work. So use a lot of lights if you want, but make sure they don't overlap too much. And to do this in the APIs, you can give us a maximum sphere of influence for each light.
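
A sketch of what that looks like with Sceneform's Light builder (the color, intensity, and radius values are arbitrary, and sunNode is an assumed existing node):

    import com.google.ar.sceneform.Node;
    import com.google.ar.sceneform.rendering.Color;
    import com.google.ar.sceneform.rendering.Light;

    // A warm point light with a limited sphere of influence, so it only
    // touches nearby pixels.
    Light pointLight = Light.builder(Light.Type.POINT)
            .setColor(new Color(1.0f, 0.85f, 0.6f))
            .setIntensity(2500)
            .setFalloffRadius(2.0f)   // the maximum sphere of influence, in meters
            .build();

    Node lightNode = new Node();
    lightNode.setParent(sunNode);     // attach the light anywhere in the SceneGraph
    lightNode.setLight(pointLight);
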
Finally, the view renderables are extremely convenient. You can create views
the way you do it in the rest of your application. You can put them in the scene. Super useful. But every view is rendered in
software, and for every view, we have to allocate what's
called a surface texture, and this is going
to cost memory, it's going to cost CPU time,
and it's going to cost GPU time. So try to reuse the views
as much as possible. Don't try to put too many
of them on the screen, and don't try to allocate too
many of them at the same time. And, finally, all
the usual advice that we give you for performance
inside Android applications applies to AR, obviously. So doesn't allocate
in the render loop. Don't do too much work. Be mindful of the size of your
APK and all that good stuff. With that, we're out of time. There's a talk tomorrow morning,
Build, Iterate, and Launch your apps. There's another one called Designing AR Applications that was-- LUCY ABRAMYAN: Today. ROMAIN GUY: Yeah. Today, earlier this afternoon. So come back two hours ago. We also have office hours. There's a code lab
available online to create a similar
scene to what you saw. And that's it. And if you have
questions, you can find us after this talk or tomorrow. We'll be around to answer
all your questions.