♪ [MUSIC] ♪ [NARRATOR] Welcome to Unite Now
where we bring Unity to you wherever you are. [DAN] Hello, and welcome
to what's new in Unity's AR Foundation. Create augmented
reality experiences that blend seamlessly
into the real world. My name is Dan Miller, and I work
as a Senior Developer Advocate focusing on XR here at Unity. [TODD] I'm Todd Stinson,
a Senior Software Engineer on the XR Features team at Unity. [DAN] In today's session,
we'll first start off by giving a brief overview
of what is AR Foundation, talking about the different
package versions and platforms, and then I'll briefly introduce how
to get started with AR Foundation: installing the packages
and configuring your Scene for augmented reality. After that, Todd will go over
some of the latest updates with the Configuration Management, giving you more control
on how features are enabled within your augmented
reality experiences. From there, we'll both overview
some of the latest features enabled in AR Foundation, including Meshing, Depth,
and the Universal Render Pipeline. And finally, we'll talk about some
projects made with AR Foundation. Let's go ahead and get started. AR Foundation Overview. AR Foundation is available
in the Package Manager. It consists of different packages
for the platforms, like ARKit and ARCore, and the ARFoundation package,
which is an abstraction that sits on top of these
different platforms. Depending on the version of Unity
that you're using, there's different verified
versions of the packages. Here, you can see in Unity 2019,
the verified version is 2.1.0. The latest verified
version is 4.0.2 verified in Unity 2020.2. This chart here shows
the different package versions alongside the verified states
with AR Foundation. The color blocking is indicating
the different compatibilities that these packages have with
the different versions of Unity. Looking at the bottom, we'll notice
that the latest verified version of AR Foundation, 4.0.2, is verified
with Unity 2020.2, but it's still compatible and works
with Unity 2019.4, the latest LTS version
of the Unity Editor. When thinking about
the different versions of packages, and whether you should be
working in a preview state versus a verified state, we have
some specific guidance around this. Verified Packages
have gone through the rigorous quality
assurance testing required with certain versions of Unity. Preview Packages have not gone
through this quality assurance, but it's where you'll find
the most up-to-date features and functionality. The latest Depth API is currently
available in a preview package, AR Foundation 4.1. When talking about which versions
of Unity's AR Foundation work with which version
of the Editor, you'll notice that some packages,
like Unity AR Foundation 4.0.2, are neither Verified
nor in a Preview state. This is because the package
was verified with a newer version of Unity,
in this case, Unity 2020.2. But it's still compatible
with Unity 2019.4. The big idea behind Unity's
AR Foundation is that you can build once
and deploy it anywhere. In this chart, we see
the devices at the bottom. Next, we have the different native
Augmented Reality SDKs built by the platforms,
and on top of that, we have the Platform Packages,
which are the integrations of these XR platforms
into the Unity Editor. AR Foundation sits on top
of all these packages and gets data fed in through
something called <i>subsystems</i>, which enable the different
features and platforms within the Augmented
Reality ecosystem. Here, depending on
which build target you have, AR Foundation will link up the associated
AR SDK behind the scenes. So, if you're building for an
iOS device, we'll link up ARKit, and if you're building for an
Android device, we'll use ARCore. This allows you as
a developer to just focus on building your
Augmented Reality app, not worrying about
platform-specific features like an ARKit plane
or an ARCore plane, and instead just thinking
about an AR plane. Let's get started
with AR Foundation. First, we'll walk through
downloading the packages. Next, configuring
the platform-specific settings for mobile augmented reality. Then we'll set up XR management, modify an existing empty Scene
to use Augmented Reality, and finally, add some AR features, like Plane Tracking
and Hit Testing. I've loaded up
an empty Unity project, and we'll start by going
to Window > Package Manager. This is where all the packages
for AR Foundation live. I can filter the Package Manager
by searching for AR. Here you can see
the AR Foundation Package, which we'll go ahead and install. Next, we'll install
the ARCore Package. Notice that there's
a separate package for ARKit Face Tracking. Since our demo app is not
going to utilize Face Tracking, instead, we'll install
the ARKit Plug-in. Now we've installed all
the packages for AR Foundation, so let's configure
the Project settings under the Player tab in order to properly configure
both the iOS and Android settings. For iOS, the first thing
we need to add is the Camera Usage Description, which will appear
as a pop-up to our users, the first time the
application is launched. Next, we'll change the target
minimum iOS version to 11.0. Finally, change the Architecture
from Universal to ARM64. Since we're building this project
for both Android and iOS, we can select the Android
settings here and configure them as well. The first thing is that we need
to remove the Vulkan Graphics API, since ARCore runs on OpenGLES3. From there, we'll want to make sure
to configure the minimum API level; in this case, we use Android 7.0,
or API Level 24. Here in the XR settings,
if we wanted to override some of the platform-specific
settings, we could. Instead, we'll go to XR Management and make sure that
the plug-in providers for ARKit and ARCore
are properly configured for Android and iOS. Now let's go and start
configuring our Scene to utilize Augmented Reality. To start, we'll delete
the Main Camera since we'll be creating an XR
Object that has a main camera or an AR Camera already configured. This is the AR Session Origin.
Next, we'll create the AR Session. Now that we've adjusted
the settings, added our functionality, we can go ahead
and build out this project and see what each
platform looks like. First, we get prompted in order
to allow access to the camera since we're using
Augmented Reality. From there, there's no
additional functionality, instead we just see a live camera
feed from the camera on the device. Now, let's add some additional
functionality to our AR app. We'll start by adding
an AR Plane Manager to my AR Session origin. From there, I can go
to my Create menu and create an AR Default Plane. Next, I'll drag it from
the Hierarchy into the Project window, so that the Default Plane
becomes a Prefab. Then we can assign it to
our AR Plane Manager component. Most managers in AR Foundation
have additional settings. Here in the AR Plane Manager,
we see that there's the ability to detect different planes
for horizontal, vertical, or both.
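For reference, the same detection setting can also be driven from script. Here is a minimal sketch, assuming AR Foundation 4.x where the property is exposed as requestedDetectionMode; the component name is made up:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Hypothetical helper: restrict plane detection to horizontal surfaces at runtime.
public class HorizontalPlanesOnly : MonoBehaviour
{
    [SerializeField]
    ARPlaneManager m_PlaneManager;

    void Start()
    {
        // PlaneDetectionMode is a flags enum, so Horizontal | Vertical is also valid.
        m_PlaneManager.requestedDetectionMode = PlaneDetectionMode.Horizontal;
    }
}
```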
Next, let's create a C# script in order to place our Object on Planes. We won't use the Start method,
so I'll delete that now. We'll want to store
a serialized reference to an AR Raycast Manager and a static list
of AR Raycast Hits, which will be populated
by our AR Raycast. Next, we'll store a serialized
field for the GameObject that we will be instantiating
on our planes. In Update, first we'll check
to make sure that the touchCount
is greater than zero. Then we'll store a reference
to the zeroth index Touch, and check the TouchPhase to make sure
that we're only spawning a single Object each Touch. Next, we'll do an AR Raycast
from the AR Raycast Manager, passing in the touch.position,
populating the Hits list, and using the TrackableType
PlaneWithinPolygon, since we're raycasting
against found Planes. If we find a Plane,
we'll then store the Pose from the zeroth Hit index, and use that for the position and
rotation of the Object we spawn.
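Here is a minimal sketch of a placement script along those lines. The class and field names are illustrative rather than the exact ones written in the video:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class PlaceOnPlane : MonoBehaviour
{
    [SerializeField]
    ARRaycastManager m_RaycastManager;

    [SerializeField]
    GameObject m_PlacedPrefab;

    // Reused every frame so the raycast doesn't allocate a new list.
    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0)
            return;

        var touch = Input.GetTouch(0);

        // Only spawn one object per touch.
        if (touch.phase != TouchPhase.Began)
            return;

        // Raycast against the polygons of detected planes.
        if (m_RaycastManager.Raycast(touch.position, s_Hits, TrackableType.PlaneWithinPolygon))
        {
            var hitPose = s_Hits[0].pose;
            Instantiate(m_PlacedPrefab, hitPose.position, hitPose.rotation);
        }
    }
}
```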
Back in Unity, we'll add the AR Raycast Manager to our AR Session Origin,
as well as adding our new script. Now let's create a simple sphere
to place on our Plane. First, we'll create
a primitive sphere. We'll also create
an empty GameObject, which will act as the parent
for the Object that we're placing. Since units in Unity are in meters, we want to scale down
the sphere for AR. Next, we can offset our Object
so that the empty RootObject appears at the base
of the 3D Object. This will let the sphere be placed
on top of the planes. If we don't set up
this parent-child relationship, our sphere would penetrate
through the plane and wouldn't quite look right. Lastly, let's turn this
sphere object into a Prefab and delete it from the Scene,
and then assign it to our script. Now let's build our project
for both ARCore and ARKit. Here you can see that both
experiences act very similarly. We can find a plane,
and every time I tap the screen, it places a sphere
on the found plane. [TODD] Thanks, Dan. Now that
we know how to get started, let's talk about some updates
introduced in AR Foundation 4. For a while, we've actively
listened to issues from developers where changes in AR Foundation
settings did not result in the expected change
in device configuration. Configuration Management, which
had been implicit functionality in prior versions of AR Foundation, becomes an explicit operation
that can be more openly handled. Breaking down the issue
of Configuration Management, let's identify the source
of the problem. AR Foundation supports
multiple platforms. Each platform has
a multitude of devices. Each device supports
several configurations. Each configuration supports
a different set of AR features, meaning that some features
may not always be compatible with other features. Finally, for an AR Session to run, one and only one configuration
must be chosen. To solve this problem
with AR Foundation 4, we added an overridable
ConfigurationChooser class. Let's look at three
devices and setups: the Pixel 3, an iPad Pro with LIDAR, and an iPhone XR with the ARKit
Face Tracking package not included. This chart displays the number
of configurations that are currently available
on each of these devices. The Pixel 3 offers
three configurations, the iPad Pro supports six,
and the iPhone XR offers five because, in this example, I removed
the Face Tracking package, so no configuration includes
the User Facing Camera. For clarity, we'll focus
on a specific use case, targeting exclusively
the iPad Pro with LIDAR. In this use case,
we want to build an app that uses both
the meshing functionality and the 3D Body
Tracking functionality. Meshing is a supported
configuration on this device. 3D Body Tracking is also supported
in a configuration, but those are separate
configurations. Because there is no single
configuration that supports both Meshing and 3D Body Tracking, these features cannot
operate at the same time. This is where the
ConfigurationChooser comes in. The job of the ConfigurationChooser is to look at the available
configurations that are possible and to choose a configuration
that best matches the needs of the application. Let's dive into a specific example
by looking at the code for the default ConfigurationChooser
and writing our own. This is the code for the default
ConfigurationChooser. This implementation
is extremely simplistic. The goal of the default
ConfigurationChooser is to choose the configuration that matches
the greatest number of features. That's it. Nothing fancy.
Simply choosing the configuration with the maximum number
of matching features. Dissecting this code, Line 9 illustrates that
all ConfigurationChoosers derive from the Abstract class
ConfigurationChooser. This Abstract Class
has one Abstract method that you must implement: the Choose
Configuration method on Line 27. This method is given
two parameters: first, the set of all available
configuration descriptors, and second,
a set of requested features. The requested features include
all features being requested across all active ARSubsystems. The Choose Configuration Method
is expected to return a single configuration. This method starts
with a few sanity checks ensuring that valid descriptors
are passed in, and that at least one
tracking mode and at least one camera mode
are requested. The core of this
implementation iterates over each of the available
configurations to find the configuration
with the highest capabilities matching the requested feature set. If there are multiple configurations
with an equal number of matching features,
the tie is decided by a rank value. Each configuration has a rank value
that is set by the device provider. This value is only used
in an instance of a tie to serve as a tiebreaker. After each available configuration
is checked, the configuration with the highest number
of matching features is returned and becomes the configuration
for the active AR Session.
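As a rough, hand-written skeleton of that logic (not the actual package source), a chooser in this spirit could look like the following; the exact sanity checks from the real DefaultConfigurationChooser are omitted:

```csharp
using Unity.Collections;
using UnityEngine.XR.ARSubsystems;

// Sketch: pick the configuration supporting the most requested features, break ties by rank.
class FeatureCountChooser : ConfigurationChooser
{
    public override Configuration ChooseConfiguration(
        NativeSlice<ConfigurationDescriptor> descriptors, Feature requestedFeatures)
    {
        var bestDescriptor = descriptors[0];
        int bestScore = -1;

        foreach (var descriptor in descriptors)
        {
            // How many of the requested features does this configuration support?
            int score = descriptor.capabilities.Intersection(requestedFeatures).Count();

            // Prefer more matching features; on a tie, prefer the higher provider rank.
            if (score > bestScore ||
                (score == bestScore && descriptor.rank > bestDescriptor.rank))
            {
                bestScore = score;
                bestDescriptor = descriptor;
            }
        }

        // Run with only the requested features that the chosen configuration supports.
        return new Configuration(bestDescriptor,
            requestedFeatures.Intersection(bestDescriptor.capabilities));
    }
}
```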
This simple logic should suffice for most AR applications. However, we can override
this behavior with our own ConfigurationChooser. Let's focus on the scenario
where we want our app to switch between the Front Facing
and Rear Facing Cameras. And we want our camera choice
to take precedence over all other AR features. We can now achieve that behavior
with our own ConfigurationChooser. This Sample does just that. We're putting this code
into a MonoBehaviour so we can attach it
to a GameObject. We have a field for the AR Session
that we will be modifying. In the Start method, we create a new
instance of our ConfigurationChooser and set it on the AR Session's
subsystem, as shown on Line 26. Next, we have our own
ConfigurationChooser implementation. As before, we derive from the
ConfigurationChooser Abstract Class and implement our own
ChooseConfiguration method. We still do the same
sanity checks as before. On Line 55, we see new code
for our implementation. With this line, we are extracting
all of the camera features from the set of RequestedFeatures. We are explicitly remembering
which camera feature is requested. On Line 63, like the previous code,
we walk through each of the available descriptors
and count the number of features that match the
requested feature set. The main difference
in our implementation is on Lines 66 through 68. This code adds extra weight
if the configuration matches our requested camera feature. This basic heuristic
gives configurations with the matching camera direction
an added bonus to ensure that they are preferred over
all other AR configurations. The rest of the code is almost
identical to the default implementation.
We still choose a single configuration with the best weight. However,
we have given configurations with the matching
camera direction an added advantage
so that they will be preferred.
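Putting both pieces together, a hedged sketch of the MonoBehaviour and the camera-preferring chooser might look like this. The class names and the weighting constant are ours, and Feature.AnyCamera stands in for whatever camera mask your AR Foundation version exposes; see the ConfigurationChooser sample in the arfoundation-samples repository for the full version:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class CameraPreferenceInstaller : MonoBehaviour
{
    [SerializeField]
    ARSession m_Session;

    void Start()
    {
        // Replace the default chooser on the session subsystem with our own.
        m_Session.subsystem.configurationChooser = new CameraPreferringChooser();
    }
}

// Prefers configurations that support the requested camera direction
// over configurations that merely match more features overall.
class CameraPreferringChooser : ConfigurationChooser
{
    const int k_CameraWeight = 100; // Large enough to outweigh any other feature count.

    public override Configuration ChooseConfiguration(
        NativeSlice<ConfigurationDescriptor> descriptors, Feature requestedFeatures)
    {
        // Remember which camera direction (user- or world-facing) was requested.
        var requestedCamera = requestedFeatures.Intersection(Feature.AnyCamera);

        var bestDescriptor = descriptors[0];
        int bestScore = int.MinValue;

        foreach (var descriptor in descriptors)
        {
            int score = descriptor.capabilities.Intersection(requestedFeatures).Count();

            // Extra weight when the configuration supports the requested camera direction.
            if (descriptor.capabilities.Intersection(requestedCamera) == requestedCamera)
                score += k_CameraWeight;

            if (score > bestScore)
            {
                bestScore = score;
                bestDescriptor = descriptor;
            }
        }

        return new Configuration(bestDescriptor,
            requestedFeatures.Intersection(bestDescriptor.capabilities));
    }
}
```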
Let's see how this runs. This is our test Scene. We have a GameObject
that will display the AR Session information to the device screen. And we have added
a ConfigurationChooser GameObject. On the ConfigurationChooser
GameObject, we have a basic script that will toggle the requested
camera direction when pressed, and we have our own
ConfigurationChooser component that will override
the default chooser. First, let's build with our
ConfigurationChooser disabled. Be sure to make
a Development build, as there is additional
logging information about the configurations
in a Development build. The build is deployed to my iPad. We can see that clicking the Swap
Camera button does have an effect. The Session information reports
that the requested facing direction is changing from world to user, but the current facing direction
never changes. This is expected behavior with
the currently active subsystems and using the default
ConfigurationChooser. I will point out some additional
information about the configuration that is logged
only in Development builds. When a configuration change occurs, you'll see some
debug information logged. This information includes
the complete set of RequestedFeatures,
the features that are supported in the chosen configuration,
and the features that are not satisfied
by the chosen configuration. As you can see in this run,
the User Facing Camera is not satisfied
by the chosen configuration. For iOS, this log information
will appear in Xcode. For Android, this log information
may be found in the Logcat output. Let's go back to Unity.
Let's enable our ConfigurationChooser script
and let's build again. The new build
is deployed to my iPad. We can see that clicking
the Swap Camera button has our desired effect now. The Camera Facing direction
has changed, but do note the other subsystems
that had been running are no longer running
with this configuration. This is because these features
are not supported in the new configuration. Focusing in on the log output, we can see the configuration
debug information calls out which RequestedFeatures are not
satisfied in this configuration. In summary, writing your own
ConfigurationChooser will give you more power to control
your AR application state. [DAN] Thanks a lot, Todd.
Now let's talk about AR Foundation's
latest features: Meshing, which is enabled on the latest iOS
device with a LIDAR sensor, generating a runtime Mesh
that can be used for occlusion, collision, and more. Depth, which is an API that returns
a unique depth image at runtime. This can also be used for occlusion as well as applying
unique effects to the world. And finally,
Universal Render Pipeline. Now that Universal Render Pipeline
has been officially released out of the Preview state,
the support within AR Foundation is much more robust. This enables things like
Shader Graph, VFX Graph, and more. First, let's talk about Meshing. This was enabled
in ARKit 3.5 and greater and requires AR Foundation 4.0
and greater. It's available on iOS devices
with a LIDAR sensor. It is not currently available
on Android or ARCore. Within the Mesh API, there's a set
of unique classifications for different surfaces. These include Ceiling, Door, Floor, None, Seat, Table,
Wall, and Window. When enabling Meshing
within AR Foundation and ARKit, if you also enable Plane Tracking,
it smooths out the Meshes for horizontal and vertical surfaces
where planes would be found. If you want a more accurate Mesh, it's recommended to not enable
Plane Tracking as it will pick up
on the finer details and not try to flatten surfaces. Here's a demo built on top
of the latest AR Foundation and ARKit Meshing. You'll see that there's a reticle that snaps to the
currently found Mesh. Depending on the surface I'm on, which is labeled in the
top part of the screen, different icons will appear
for me to place content. Here, I'm looking at the floor
which allows me to place this dresser and the chair, and when I'm looking at the wall,
I have the option to place a clock as well as two different
picture frames. The way this demo works is it's looking at
the Mesh classification and providing the user
with different UI buttons, depending on what surface
they're looking at. Here, I have the ability to go in
and visualize the Mesh. You can see that the different
surfaces are colored. For the Wall, we have purple,
for None, we have green, and for the Floor we have yellow. You can also see the Mesh
being generated there at runtime. Now, let's walk through
how this demo was created and where you can get access to it. Here in Unity, we have
a Mesh Manager. It has the AR Mesh Manager
Component on it, which is required to be a child
object of the AR Session Origin. Like other managers
within AR Foundation, there's some additional
settings associated. For ARKit Meshing, the only
settings that are applicable are the Normals and the
Concurrent Queue Size, as well as the Mesh Prefab. As ARKit is constructing
Mesh geometry, the vertex normals
for the Mesh are calculated. You can disable the Normals
if you do not require the Mesh vertex normals, to save
on memory and CPU time. To avoid blocking the main thread, the tasks of converting
the ARKit Mesh into a Unity Mesh and creating the
physics collision Mesh, if there's a Mesh Collider
on the Mesh Prefab object, are moved to a job queue and
processed on a background thread. Concurrent Queue Size specifies
the number of Meshes to be processed concurrently. Now let's take a look
at our Mesh Prefab. The components
on the Mesh Prefab determine how your Mesh
is visualized and how it interacts
with the real world. Our Prefab requires a Mesh Filter
in order to work properly. Here, we also have a Mesh Collider so our Prefab is generating
collision geometry. If we were to also add
a Mesh Renderer and assign a material, then
we could visualize this base Mesh. Since we're just using
this for physics collision we only require the Mesh Filter
and the Mesh Collider. The key components of this demo
are classifying each Mesh and surface
through the Classification API, which determines which
classification of the Mesh the reticle is raycasting to. From there, we're also
showing the reticle and snapping it
to the generated Mesh, and finally placing objects aligned
with the Mesh based on UI. For classifying each Mesh, I'm
using a slightly modified version of the Mesh Fracking script, available in AR Foundation's
Samples repository. This creates unique Meshes
that update at runtime based on each classification
and the returned vertices. By default, this script
enables the Meshes and has a visualization on them. I've set the transparency
of each Mesh visualization or each Mesh material,
to be completely transparent. So they're not visible by default. For raycasting against these
Meshes, I store a dictionary that holds
the TrackableId of each unique Mesh as the key, and a native
array of the Mesh classifications as the value. From there, I subscribe
to the meshes-changed event from the AR Mesh Manager, and
manage the Mesh dictionary based on the lists of Mesh Filters returned in the
event arguments. From there I'm adding,
updating, or removing items from the dictionary. And one small note here is that
we're capturing the TrackableIds
from the Mesh name.
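A rough sketch of that bookkeeping is shown below. It assumes the ARKit-specific GetFaceClassifications extension that the classification sample in arfoundation-samples relies on, and the field names and the way the TrackableId is parsed out of the mesh name are illustrative rather than the exact demo code:

```csharp
using System.Collections.Generic;
using Unity.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARKit;
using UnityEngine.XR.ARSubsystems;

public class MeshClassificationTracker : MonoBehaviour
{
    [SerializeField]
    ARMeshManager m_MeshManager;

    // TrackableId of each mesh -> per-triangle classifications.
    readonly Dictionary<TrackableId, NativeArray<ARMeshClassification>> m_Classifications =
        new Dictionary<TrackableId, NativeArray<ARMeshClassification>>();

    void OnEnable() => m_MeshManager.meshesChanged += OnMeshesChanged;
    void OnDisable() => m_MeshManager.meshesChanged -= OnMeshesChanged;

    void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        var subsystem = m_MeshManager.subsystem;

        foreach (var meshFilter in args.added)
            UpdateClassifications(subsystem, ExtractTrackableId(meshFilter.name));

        foreach (var meshFilter in args.updated)
            UpdateClassifications(subsystem, ExtractTrackableId(meshFilter.name));

        foreach (var meshFilter in args.removed)
        {
            var id = ExtractTrackableId(meshFilter.name);
            if (m_Classifications.TryGetValue(id, out var classifications))
            {
                classifications.Dispose();
                m_Classifications.Remove(id);
            }
        }
    }

    void UpdateClassifications(UnityEngine.XR.XRMeshSubsystem subsystem, TrackableId id)
    {
        // ARKit-only extension: one classification per mesh face (triangle).
        if (m_Classifications.TryGetValue(id, out var old))
            old.Dispose();
        m_Classifications[id] = subsystem.GetFaceClassifications(id, Allocator.Persistent);
    }

    // Assumption: the generated mesh is named "Mesh <TrackableId>", so parse the id out.
    static TrackableId ExtractTrackableId(string meshName)
    {
        var parts = meshName.Split(' ');
        return new TrackableId(parts[1]);
    }
}
```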
Now that my dictionary is properly managed, the next step is to use a physics
Raycast in the Update method, based on the center
screen position. My base Mesh here
has a Mesh Collider on it, and by default, the way the Meshing
system works is that it updates this Mesh Collider
on a Worker thread every frame if the Base Prefab has
the Mesh Collider on it. With my Physics.Raycast
based on the Hit result, I'm again extracting
the TrackableID from the Name and storing the Triangle Index
from the Physics Hit, in order to search
through my dictionary and return the
correct classification based on which triangle
of the Mesh I'm hitting. Here we're doing some safety checks
to make sure that our TrackableId is contained within the dictionary, and that the Triangle Index
is within range. From there, I'm storing
the classification in the current classification field
by using the Mesh ID as a key to my dictionary and the Triangle Index as the index
in my stored Native Array. Lastly, for displaying the Name,
I'm simply converting the enum value to a more readable string
that I set in the UI.
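Continuing the sketch above, the per-frame lookup could be added to that same class roughly as follows. Physics.Raycast and triangleIndex are standard Unity; the rest reuses the dictionary and helper shown earlier:

```csharp
// Additional members for the MeshClassificationTracker sketch above.
ARMeshClassification m_CurrentClassification = ARMeshClassification.None;

void Update()
{
    // Raycast from the center of the screen against the generated mesh colliders.
    var ray = Camera.main.ScreenPointToRay(
        new Vector3(Screen.width * 0.5f, Screen.height * 0.5f));

    if (!Physics.Raycast(ray, out RaycastHit hit))
        return;

    var id = ExtractTrackableId(hit.collider.name);

    // Safety checks: a known mesh, and a triangle index inside the stored array.
    if (m_Classifications.TryGetValue(id, out var classifications) &&
        hit.triangleIndex >= 0 && hit.triangleIndex < classifications.Length)
    {
        m_CurrentClassification = classifications[hit.triangleIndex];
    }
}
```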
For the reticle, we have an optional boolean for snapping the reticle
to a Mesh or found planes. Since I want the reticle to appear
on all the unique surfaces, I'm snapping
to TrackableType.PlaneEstimated, as this is what ARKit expects when doing an AR Raycast
against the Mesh. Finally, in order for
the reticle to maintain a visible and constant size, I'm scaling the size of it based on
the distance away from the user, or in this case
the Camera Transform. For placing, I'm linking into
the ClassificationManager and checking for Table, Floor,
and Wall surfaces. Based on those surfaces,
I'm displaying some unique UI and linking into a list of Prefabs in order to instantiate
different objects based on the reticle position. I'm also doing a little bit extra
to rotate some of the floor and table objects, so that
they always face the user when they're instantiated.
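A compact sketch of that reticle behaviour, with illustrative names and an arbitrary scale factor:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class MeshReticle : MonoBehaviour
{
    [SerializeField] ARRaycastManager m_RaycastManager;
    [SerializeField] Transform m_Reticle;
    [SerializeField] float m_ScalePerMeter = 0.1f;

    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

    void Update()
    {
        var screenCenter = new Vector2(Screen.width * 0.5f, Screen.height * 0.5f);

        // PlaneEstimated lets the raycast land on arbitrary meshed surfaces, not just planes.
        if (!m_RaycastManager.Raycast(screenCenter, s_Hits, TrackableType.PlaneEstimated))
            return;

        var pose = s_Hits[0].pose;
        m_Reticle.SetPositionAndRotation(pose.position, pose.rotation);

        // Keep the reticle a roughly constant size on screen by scaling with camera distance.
        float distance = Vector3.Distance(Camera.main.transform.position, pose.position);
        m_Reticle.localScale = Vector3.one * distance * m_ScalePerMeter;
    }
}
```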
And now, I'll hand it over to Todd to talk about Depth. [TODD] Depth textures
are a new feature added recently to both ARCore and ARKit. AR Foundation 4.1 Preview
introduces support for environmentDepthTextures
and automatic occlusion. With Environment Depth enabled, the device produces
a texture per frame, representing the distance
from the camera to the visible real-world surfaces. The range and accuracy of depth
information varies across devices. Some devices use a time-of-flight
sensor, whereas other devices might use alternate techniques
to estimate the depth. Depth Images from AR Foundation
are a single-channel texture, in either an R16 or an R32 format. Each pixel of a single-channel
texture is a floating-point value that represents
a distance in meters. Depth Textures are commonly used
to render occlusion between virtual content
and real-world geometry, thus deepening the visual
integration of the experience. Additionally, you can use
Depth Images with computer vision algorithms to build
scanning apps that construct virtual content
based on real-world geometry. Both ARCore and ARKit
now support Depth Textures. ARCore 1.18 introduces
support for depth on a subset of Android devices. For iOS, ARKit 4.0 adds
depth support on devices with the LIDAR sensor,
like the new iPad Pro. With Depth information,
AR Foundation offers automatic occlusion functionality. When enabled on supported devices, depth information
is written into the Z-buffer during the camera background rendering
pass. With this depth information in the renderer, virtual content
that is further away will be occluded
by nearby real-world surfaces. Likewise, when virtual content
is closer to the camera, this content will occlude
the real-world camera background, thus giving the appearance
that virtual content is interacting with the real world. To set up, you just add
the AR Occlusion Manager to your AR Camera GameObject, and enable the
environmentDepthMode. For the Environment Depth,
there are four possible settings: Disabled, Fastest,
Best, and Medium. Disabled will disable the
environmentDepthTexture entirely. The other three settings offer
a range of rendering qualities, but with the trade-off that each
more advanced rendering choice comes with additional
frame computation. You will need to tune
your rendering quality to achieve the desired results
for your app, without incurring excessive computational overhead.
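In script form, that setup is roughly the following, assuming AR Foundation 4.1 where the mode is exposed as requestedEnvironmentDepthMode:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Attach alongside the AROcclusionManager on the AR Camera GameObject.
public class OcclusionSetup : MonoBehaviour
{
    void Start()
    {
        var occlusionManager = GetComponent<AROcclusionManager>();

        // Disabled, Fastest, Medium, or Best: trade rendering quality for per-frame cost.
        occlusionManager.requestedEnvironmentDepthMode = EnvironmentDepthMode.Best;
    }
}
```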
Let's look at a demo of automatic occlusion in action. With automatic occlusion enabled, we can see this flying robot
will get occluded by passing vehicles. The Depth Texture
updated per frame provides a depth value
for each pixel against which the robot
is clipped when it is behind. Also, we can move the camera
behind this post to show how the post
will occlude this virtual content. The depth texture can be used
in numerous ways. Let's walk through
an example accessing that DepthTexture inside a Shader and displaying
a picture-in-picture view of the depth information
on top of the camera background. We start with a depth
Gradient Shader. Because the pixel values
are in meters, we need to convert
that information into colors to better visualize the data. We're going to pass
the depth texture into the Shader through the _MainTex property. Because the pixel values
are in meters, we need a range of distances over
which to visualize this data. Thus, we have min
and max distance properties. The Depth Image remains
in a fixed orientation, regardless of how the user
holds the device. To compensate
for that device orientation, we need to rotate the texture
for a good picture-in-picture view. The Vertex Shader
sets the position and texture coordinates
for each vertex. Using the texture
rotation property, we do an affine rotation
of the texture coordinates. This is standard color conversion
from the HSV color space to the RGB color space. Moving into the Fragment Shader,
we first sample the depth texture, but we only need the red component
because the depth texture is a single channel. This gives us a distance in meters. Next, we do some linear
interpolation math to map our distance value
from a value between min and max distances
to an HSV color. By interpolating
the hue component as such, we are mapping
nearby distances to blue, medium distances to green
turning into yellow, and faraway distances to red
turning into purple. Finally, we convert
our HSV color to an RGB, and return that
as the fragment value. Next, let's look at
our component script. We'll declare the names
of the Shader properties that we want to use,
and we'll look up the PropertyIDs to make setting the values
on the material quicker. We have some internal fields
that store temporary state information. We have some
fields that show up on the GameObject and will be
serialized with a Scene. Getting into the real code,
we start in the OnEnable method where we store the
initial screen orientation, and we update
the RawImage UI element. We will get into the
UpdateRawImage method shortly. In the Update method,
we make sure that the device supports
the EnvironmentDepthImage. If it doesn't, we display
a message on the screen, and early exit the Update method. We grab the environmentDepthTexture
off the OcclusionManager, then we log some information
about the texture either to the screen
or to the Debug.Log. We set the environmentDepthTexture
to the RawImage, we compute the AspectRatio of the
environmentDepthTexture. The AspectRatio can change,
most commonly when the camera
configuration changes. The environmentDepthTexture
AspectRatio typically mirrors the Screen AspectRatio
from the camera configuration. Finally, in the Update method,
if either the Screen.orientation has changed, or the
textureAspectRatio has changed, we want to update the RawImage UI
to best reflect these changes. When we log texture
information above, this is the information
that we log: the texture.format, the texture.dimensions,
and the mipmapCount. In the UpdateRawImage method, we want to configure
the RawImage UI and the environmentDepthTexture
we're rendering to match the ScreenOrientation
and the textureAspectRatio. This will provide us with the
optimal picture-in-picture view. Lastly, we update
the RawImage UI element with new dimensions
and material properties.
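A compressed, hypothetical version of such a component is sketched below. It covers only the core idea of grabbing the environment depth texture and feeding it to a RawImage and material; the orientation and aspect-ratio handling from the full script is omitted, and the _MaxDistance property name is whatever your gradient Shader declares:

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.XR.ARFoundation;

public class DepthPictureInPicture : MonoBehaviour
{
    [SerializeField] AROcclusionManager m_OcclusionManager;
    [SerializeField] RawImage m_RawImage;
    [SerializeField] Material m_DepthGradientMaterial;
    [SerializeField] float m_MaxDistance = 8f;

    // Cache the property ID so setting material values each frame is cheap.
    static readonly int k_MaxDistanceId = Shader.PropertyToID("_MaxDistance");

    void Update()
    {
        // The texture may be null on devices without environment depth support.
        var depthTexture = m_OcclusionManager.environmentDepthTexture;
        if (depthTexture == null)
            return;

        // Feed the single-channel depth texture into our gradient material via the RawImage.
        m_RawImage.texture = depthTexture;
        m_RawImage.material = m_DepthGradientMaterial;
        m_DepthGradientMaterial.SetFloat(k_MaxDistanceId, m_MaxDistance);
    }
}
```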
Switching to the Unity Scene, first let's note that in this demo, we've added the AR Occlusion
Manager to the AR Session Origin, rather than to the AR Camera. For this demo, we want to use
the DepthTextures not to do automatic occlusion, but to use them in
our own custom Shader. Automatic occlusion is enabled
if we add the OcclusionManager to the AR Camera, with the
AR Camera Background component. However, in this specific case,
that is not the behavior we desire. We do not want to enable
automatic occlusion. We achieve this by adding
the AR Occlusion Manager to any other GameObject. In this example, we've added it
to the AR Session Origin. Next, we put our Script component
on an otherwise empty GameObject, and we've hooked up the required
fields for the OcclusionManager, the RawImage UI,
the texture Image Info, the Depth Material,
and the Max Distance value. Let's build the Scene
to see how it works. As you can see, we have
a picture-in-picture view of the Depth Image, overlaid
on top of the AR Camera Background. We can see that the Depth Image
updates in real-time. Let's move on to another example
of how to use Depth Texture in a Scene. Fog effects can be used to add
ambiance to your experience. What if you could add fog using
the Depth measured by the device? To start, we begin with a Shader. Let's start with the
Background Shader, the ARKit Background Shader
in this example, and we remove the parts
that we don't need for our demo. For the properties,
our Shader needs three textures. The first two textures are how
ARKit provides the camera video for the background image. This is a standard
YCbCr color space. Next, we have our
environmentDepthTexture. In the Vertex Shader, we are
transforming the geometry, which will be a full-screen quad,
and remapping the texture coordinates
based on the device orientation. In the fragment Shader,
we will read the ARKit Camera Video textures and transform the color
from YCbCr color space into the sRGB color space. If the project is using
linear color space and not the gamma color space, we need to convert
our sRGB color to linear. We read the red channel value
from the single-channel environmentDepthTexture
to get a distance value in meters. We convert that distance value into
a Depth value for the Z-buffer. With the Unity Fog functions,
we use the Depth value to calculate a Fog factor, which is then used
to linearly interpolate between the background color
and the Scene's fog color. Finally, we return the computed
color and Depth value for the fragment.
Switching over to our Unity Scene, first, we'll start with
the Lighting settings, under Window > Rendering. In the Other Settings section,
we enable Fog. We have the Fog mode set to Linear with the Start distance at 0
and the End distance at 35. On the AR Camera, we've added
an AR Occlusion Manager with the Environment Depth
Mode set to Best. In the AR Camera
Background component, we have enabled
Use Custom Material, and we've added a material that uses
our custom FogBackground Shader. Finally, we have a simple
script component that will change the Fog
Distance based on a UI Slider, so that we can interactively
manipulate the fog at runtime.
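That slider hookup only needs Unity's built-in fog settings; a minimal sketch, with our own component and method names:

```csharp
using UnityEngine;

// Hook a UI Slider's OnValueChanged event to SetFogDistance to adjust fog at runtime.
public class FogDistanceSlider : MonoBehaviour
{
    public void SetFogDistance(float endDistance)
    {
        // The Scene's Fog mode is Linear, so only the end distance needs to change here.
        RenderSettings.fogEndDistance = endDistance;
    }
}
```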
That's it for our changes. Let's build and run. We have a robot standing
at the end of the hallway. We altered the Fog Distance
using our Slider. We can see as the camera moves,
the fog calculations are using the depth values from the device
to shade the background. Looking at both the Meshing
and the Depth functionalities, you might notice that you can
achieve occlusion effects by using both approaches. So you might be wondering,
is one approach better than another to achieve an occlusion effect?
Mesh-based occlusion will give you nice,
smooth boundaries on surfaces. You will see fewer jagged
edges, and in some cases, Meshing will give you a better
visual appearance. However, Meshing has its downsides, most notably, while it'll work best
in static environments, it will fail miserably
in highly dynamic environments. This is because it will take several
frames after a surface is scanned before the Mesh
geometry is updated. Additionally, there are
computational costs associated with computing the Mesh.
This results in more power consumption
and more heat for the devices. Depth-based occlusion works well
in dynamic environments. The Depth texture is updated
on a per-frame basis. The demo with the flying robot
on the street earlier with the vehicles passing by
was using depth-based occlusion. That demo would be impossible
using Mesh-based occlusion. Moreover, depth is supported on
both Android and iOS platforms on a subset of devices. [DAN] Thanks, Todd.
Now, let's look at Universal Render Pipeline support, which was added in
AR Foundation 3.1 and above. It's enabled through a custom render feature
on the Pipeline asset, and becomes available
once you've added AR Foundation to a Project
with Universal Render Pipeline. Universal Render Pipeline
supports Shader Graph on all the supported platforms,
as well as VFX Graph on specific platforms, depending
on the enabled graphics API. Here we are in the Unity Project
that we used for getting started. Let's go ahead and open up
the Package Manager and install the Universal
Render Pipeline. After we've installed the package,
we go to the Create menu and select Rendering > Universal Render Pipeline >
Pipeline Asset. This creates two objects
in our Project. In the Forward Renderer,
we can go and add the additional render feature,
with the button at the bottom. Once we've done this,
we can go to the Graphics settings and assign our Universal
Render Pipeline asset. Notice that the sphere
in our Scene turns pink. This is because it was using the
Built-in Renderer. By assigning the Universal Render
Pipeline asset in the Graphics settings, we've overridden the renderer. Luckily, Universal Render Pipeline
has a built-in method to update all of your
standard materials that you already have
in your Project. This swaps known Legacy Shaders
to the equivalent of Universal Render Pipeline Shaders. It does not convert
any custom Legacy Shaders that you might have in your
project. You can find this under Edit > Render Pipeline >
Universal Render Pipeline, where you have the option
to upgrade selected materials or all materials in your Project. Notice once we do that,
our material is rendering properly and we can build out
to our devices. Now let's take a look
at the opposite. Here, we have a Universal Render
Pipeline Project, and let's add
AR functionality to it. We'll start by opening up
the Package Manager and downloading AR Foundation. Here, I'm grabbing
AR Foundation 4.0.2. Now that AR Foundation is enabled,
the next step is to locate the Render Pipeline asset
and add that rendering feature, similar to what we did
in the previous Project. With the Universal Render
Pipeline enabled, we can do cool visual effects
without coding specific Shaders. We can also enable the VFX Graph, which is the next level
of particle simulation, enabling complex simulations
and visualizations. I want to wrap up this presentation by talking about
some sample repositories that are available for you
when getting started and working with AR Foundation. The first repository
is AR Foundation samples. This is available on the
Unity-Technologies GitHub, under arfoundation-samples. This project covers all the
features enabled in AR Foundation with unique scenes and examples
for each of the features. It also has a loader Scene, so you can try out all
the different features and Scenes in a single-build application. This is helpful for understanding
what features are supported on your AR-enabled device, as well as just trying out and
understanding the examples more. Next is AR Foundation demos,
also available on the Unity-Technologies
GitHub repository, under arfoundation-demos.
These are fewer projects that are more focused on single features
and have a higher level of polish. This includes a UI/UX framework
for guiding your users and recently, we added
localization support. There's also an example
of how to do Image tracking by properly referencing the GUIDs, as well as being able to attach
unique objects to unique images. This demos repository contains many
of the demos that you saw here, including the Meshing placement
example, as well as the fog demo. Documentation is included
in the Readme, as well as some of the projects being
hosted on the Unity Asset Store. And last but not least,
is Unity MARS. This is a project specifically
built on top of AR Foundation, so all of the unique and new
features that we talked about today are enabled within MARS. It is a set of specific tools
and functionality, specifically designed
for building augmented reality within the Unity Editor. This includes the ability
to test and simulate what augmented reality
experiences are like in different simulated
environments as well as things like
fuzzy authoring that enables you to procedurally place content
around the world. Find out more
at unity.com/products/unity-mars. Thank you very much for coming
to our presentation. I hope you learned a lot
about AR Foundation, some of the latest
features, and more. If you have additional questions,
you can meet us on the forums for a Q&A,
specific to this session. ♪ [MUSIC] ♪