[MUSIC PLAYING] ARRAN LANGMEAD: Hey, everyone. My name's Arran Langmead,
technical artist and evangelist for Epic Games. Today I'm going to
take you through one of the most incredible
features in Unreal Engine 5-- Nanite, a new system that
allows for super fast rendering of geometry, allowing artists
to create film-quality assets for real-time games. In this talk we'll explain
what Nanite actually is and what it's doing. We'll go through how to set
Nanite up on your assets, whether they're existing
inside a current Unreal project or whether you're building
them from scratch. We'll go through
workflow changes that are going to come from this
new way of building, mainly covering UVs, baking,
and remeshing. And then we'll finish up
with some extra things that you need to keep
in mind or consider when building with Nanite. When we say Nanite,
we're actually talking about
virtualized geometry. Similarly to virtual texturing,
Nanite loads in only the data it needs to render the scene. Nanite can render high-fidelity
assets quicker than normal mesh rendering, and can render
hundreds of thousands of instances dynamically. When you create a
Nanite mesh in Unreal, it processes the mesh so that it
can render it more efficiently. The mesh is broken down into
logical, seamless clusters of triangles based on
a number of factors, including smoothing
groups and UV seams. Cluster LODs are generated
at multiple levels of detail, so they can be swapped out
or not rendered at all, depending on distance from
the camera, visibility, and the camera frustum. This is different
from regular LODs, as it's only loading in
the sections of the mesh that it needs to
render the scene. When these meshes
are being rendered, the best clusters are
picked based on the screen percentage they take up. Nanite will always try and keep
the difference from the source mesh at less than one
pixel, so the maximum level of required detail
is being rendered, but doesn't waste any
memory or processing power rendering triangle detail
that would never be picked up. This system makes poly-count
concerns a thing of the past, but it does come
with a few caveats that you'll need to keep
in mind when creating. Let's start by answering
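That less-than-one-pixel rule can be sketched in a few lines of Python. This is purely my own simplification for illustration — one error value per LOD and a pinhole projection — not engine code:

```python
import math

def pick_cluster_lod(lod_errors, distance, fov_deg=90.0, screen_height_px=1080):
    """Pick the coarsest cluster LOD whose geometric error projects to
    less than one pixel on screen. lod_errors is ordered coarse -> fine,
    each entry being that LOD's error in world units (a toy stand-in for
    Nanite's per-cluster error metric)."""
    # How many pixels one world unit covers at this distance.
    pixels_per_unit = screen_height_px / (
        2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))
    for lod, error in enumerate(lod_errors):
        if error * pixels_per_unit < 1.0:  # error is sub-pixel: good enough
            return lod
    return len(lod_errors) - 1  # camera is close: fall back to finest detail

# Far away, a coarse LOD is already sub-pixel; up close we need the finest.
print(pick_cluster_lod([4.0, 2.0, 1.0, 0.5], distance=1000.0))
print(pick_cluster_lod([4.0, 2.0, 1.0, 0.5], distance=100.0))
```

The key property is the one the talk describes: detail is spent only where the screen can actually resolve it.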
the most important question of all-- how do I enable Nanite
on one of my projects? And luckily, this is a really
easy question to answer, because it's a tick box
and it's on any mesh that you want Nanite
to be enabled for. All we need to do-- if we have a mesh that's
already been imported, we can find it in
the Content Browser, or we can double-click
it in the scene to open the Static Mesh Editor. Search for Nanite and just
tick Enable Nanite Support. Then all you need to
do is Apply Changes, and this mesh will
then use Nanite. You can also do this
as a bulk action, if you want to, just
by using the property matrix by right-clicking. Go to Asset Actions and Bulk
Edit via Property Matrix. And if you're importing
a mesh for the first time from something like Blender, we
can go in-- go to Export, FBX. And let's just call this
Canyon Rock Example. Press Export. And then we just wait for
the mesh importer to pop up, or we can do this manually
from the Content Drawer. Tick Build Nanite. We probably want to untick
Generate Lightmap UVs as well, if we're using a
dynamically lit scene. And for our import
settings, in this instance, I'm going to use Import
Normals, but by default, this will probably be set
to Compute Normals. And we don't want Compute
Weighted Normals on, unless we want this to be
automatically generated for us. Then all we have to
do is press Import, and then, depending on
the size of the asset, we wait a few seconds. Now that's imported in. You can see it's building a
distance field for this mesh, and that's now completed. And the DDC got updated. And if I open up this
mesh and type in Nanite, we can see Enable
Nanite Support is ticked and Build Nanite
is ticked as well. And the other quick
way of just seeing if this mesh is Nanite
inside the Static Mesh Editor is just going to
Wireframe, and you'll see that nothing renders. So a Nanite mesh won't render
into the wireframe view mode, and that's a nice quick way of
making sure that this thing is actually Nanite. Now, there are
two other settings that come with Nanite as well. We have Position Precision
and Proxy Triangle Percentage. Let's start with the
Proxy Triangle Percentage. Now, when you input
a Nanite mesh, it does a lot of work
in the background that you don't really
have to worry about. But one of the things that it
does generate is a proxy mesh, and we can see this proxy mesh
if we go to Show and tick Nanite Proxy. This Nanite proxy is a low-res
representation of the original mesh, and it's used for a
number of different situations-- mainly ones where
you wouldn't want to use the super high-poly
geometry of your normal mesh. So the three core use cases of
this are complex collision-- which, again, we definitely
don't want to be using complex collision on a
650,000-triangle mesh, so we use the Nanite
proxy instead. The other is lightmapping. So this mesh will be used
instead of the high-poly, when it comes to lightmap baking. And then the third
is platform support. If you have a game
that's going to release onto the PS5 and
the Xbox Series X, but it's also going to release
onto some platforms that won't be able to support
Nanite, like mobile, then you need a fallback. And the default fallback for
this will be the proxy mesh. And that way, you can build your
games, and your environments, and your assets once,
and then distribute them across multiple platforms. So when you build for
PlayStation, and Xbox, and maybe high-end PC, you
can set it to use Nanite, and the Nanite mesh will
be used on these platforms. But when you build
out to mobile, you'll use the proxy mesh,
which automatically gets set as LOD 0. Now, if you don't like the
results of the proxy triangle percentage, you have
got a few options. The first is changing
this percentage. So by default it will be 0. That doesn't mean that
the proxy will have 0 triangles. It just means that it will
use the default triangle count, which is set to 2,000. And you can see that on the left
here under the Nanite settings. By default, any mesh with
a proxy triangle percentage set to 0 will be set
to 2,000 triangles, unless the mesh is already under
2,000 triangles, in which case, it will just be that number. If we set this
number to 1%, then it will be 1% of the
original mesh geometry. One percent of this roughly 650,000-triangle model is 6,470 triangles,
so it gets set to 6,470. And you can see we're still
looking at the Nanite proxy here, so we can
see the actual mesh representation of this asset. If we're still not happy with
that for the platforms that don't support
Nanite, we can still use the regular LOD levels
for those platforms. For the completed
version of this mesh, if we go to our LOD
settings, I have a number of LODs set for
this particular asset. LOD 0 is set to 2,000
triangles, which we'll be able to set for
our complex collision for our Nanite assets. But for any platforms
that don't support it, we'll start at LOD 1,
or even LOD 2 or 3. LOD 1 is a much
higher triangle count, but still is a normal
game-ready asset. It's 32,000 triangles. And we can see our
wireframe here. And if we go to
our LOD settings, you can see our
reduction settings. We have Percent
Triangles set to 5%, so this is 5% of the
original Nanite mesh. And then, if we
go to LOD 2, we're setting this to
50% of base LOD 1. And if we go to LOD 3, that's
set to 50% of base LOD 2. So for this mesh, I'm
just using the base LOD tools that exist
inside the engine, but you can also import your
own LODs to use instead. The last thing you need
to change is Minimum LOD. You can set this per
platform, and for any platform that won't use
Nanite, you just want to make sure that's set
to use 1 as a minimum. Next, let's talk about
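To make those numbers concrete, here is a small Python sketch of both rules — the zero-means-default proxy behavior and the percentage-based LOD chain. This is my own back-of-envelope model of the settings described in the talk, not how the engine computes them internally:

```python
def proxy_triangle_target(source_triangles, percent=0.0, default_cap=2000):
    """Proxy Triangle Percentage: 0 means 'use the default 2,000-triangle
    cap' (or the source count, if the mesh is already smaller);
    any other value is a straight percentage of the source mesh."""
    if percent == 0:
        return min(source_triangles, default_cap)
    return round(source_triangles * percent / 100.0)

def lod_chain(source_triangles, reductions):
    """Each entry in reductions is (base_lod_index, percent): the new LOD
    keeps `percent` of the triangles of the LOD it is based on."""
    lods = [source_triangles]
    for base, percent in reductions:
        lods.append(round(lods[base] * percent / 100.0))
    return lods

# Roughly the settings used for the rock in the talk:
print(proxy_triangle_target(647_000))             # default -> 2,000 triangles
print(proxy_triangle_target(647_000, percent=1))  # 1% -> 6,470 triangles
print(lod_chain(650_000, [(0, 5), (1, 50), (2, 50)]))
```

The chain call mirrors the editor setup above: LOD 1 at 5% of the source, then LOD 2 and LOD 3 each at 50% of the previous level.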
Position Precision. Nanite assets are
highly compressed, which means that you can
have millions and millions of triangles in your
assets without immediately filling the hard drive of anyone
who wants to play your game. Position Precision
allows you to set the quality of those
compression settings, and by default, this
will just be set to auto. This will automatically pick
the best compression setting for your model based on
the size and the density of the triangles of your mesh. If you notice any errors or
any issues with your asset, you can override the
setting just by picking a different precision setting. You can see your
default position on the top left here
in the Nanite settings. This is currently
set to 1/16, and we can set this higher or lower
based on the compression setting that we want. The last thing that you need
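Conceptually, Position Precision is a quantization step: vertex positions snap to a grid, and a finer grid costs more bits per vertex. A toy illustration of a 1/16-unit grid (my own sketch — the real Nanite encoding is more involved):

```python
def quantize_position(position, fractions_per_unit=16):
    """Snap each coordinate to a 1/16-unit grid, like the 1/16
    Position Precision value shown in the editor."""
    return tuple(round(c * fractions_per_unit) / fractions_per_unit
                 for c in position)

print(quantize_position((1.2301, 0.031, 7.77)))  # -> (1.25, 0.0, 7.75)
```

A coarser grid (fewer fractions per unit) compresses better but can visibly shift vertices, which is exactly the kind of error you'd override the Auto setting to fix.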
to keep in mind with a Nanite asset is the materials
that you're using on it. Nanite can handle
most material setups, but there are a few
caveats to that. Your material can't
be set to two-sided, and it has to use the
opaque blend mode. You can use most of
the shading models, but you can't use World
Position Offset or Pixel Depth Offset on your asset. The same goes for Opacity
and Opacity Masking with different blend modes. For the most part,
as long as you're not using any of these
settings, your materials will work just fine. If you do put a setting
in onto a Nanite mesh that it doesn't support,
the Nanite mesh in question will just render black. So it's pretty easy to tell
when you've done something on the material that
won't be supported. Now we've enabled Nanite. We want to test it. So we want to see what
it's actually doing, and we have some
really great previewing options and debugging options
to let you see what's actually happening under the hood. If we go to our lighting mode
and down to Visualization, we can see we have
Nanite Visualization. And we have a load of
different preview modes to actually see
what's happening. I'm going to focus on just a few
of the really important ones-- first, triangles. So we can see our
triangle count here, and we can see the
number of triangles being rendered in our scene. And as we get closer
to these meshes, you can see that these
triangles actually get swapped out for more
high-density triangles until we reach the
original source mesh. What's great about this is
that we're only ever using that kind of level of detail on
the meshes where it's actually rendered and actually used. So as we zoom out of this
and our meshes take up a smaller and
smaller screen size, they actually use a lower
and lower triangle count. And we can keep zooming
and keep zooming, and you can see that
those triangles are getting larger and larger. And this is one of the
core benefits of Nanite. It's doing all of
that LODing and detail modification per mesh, and it's
focusing on just the geometry that you can see. Next up is clusters,
and this is where the geometry is grouped together
into its logical components. And you can see that
these clusters, again, are getting bigger and smaller
as we get closer to or further from this geometry. Next up, we have a really
useful one, which is overdraw. And overdraw is still
something that we need to be considering on meshes. Now, this scene is good. One of the issues that you
might notice when using Nanite is when you start having
very, very dense geometry overlapping on top of
other very dense geometry. This is something you
want to try and avoid as much as possible,
though, obviously, you will still use some of it. And this is a really
useful way of visualizing that information. So to sum up, Nanite can be
enabled on any static mesh that doesn't deform. So that means no skeletal
meshes, no world position offset in the material. And you can have both
Nanite and non-Nanite assets inside the same scene. They will coexist
together happily. Nanite allows for a huge
amount of polygon geometry, although it doesn't have to have
a huge amount of polygon geometry. And it's able to do this
because it renders faster, more efficiently, and only the
detail that can be seen, at the resolution that
it needs to be seen at, is loaded and shown. One thing you should understand
about Nanite that might not be 100% clear is
that, while you can have millions and millions of
triangles, you don't need to. So all of the assets that
you've already generated don't have to be redone. You can still enable
Nanite for those assets, and you'll still get a gain. Now let's talk about the changes
to the art production pipeline, if we start using this
one pixel to one triangle ratio for geometry. The old production
pipeline method was this. We had our low-poly
mesh, which we unwrapped. We then created
a high-poly asset to go alongside this, which
would contain all of the detail that we'd want for
that low-poly asset, and then we would bake the
high-poly information down onto the low-poly. From that bake we would get
normal maps, ambient occlusion maps, curvature, thickness,
height, and so on. And then we'd be able to
use those baked maps to fake a level of detail that
wasn't actually there. And we could also then
use that information to texture our final asset. And then we have
the new pipeline, which has been used in
lots of other industries outside games for quite a while. But we have the
high-poly pipeline, which is where we
discard the low-poly, instead of discarding
the high-poly. So we might still start
off with our low-poly mesh, and we'll probably want
to unwrap that in advance. But then we turn
that low-poly mesh into the high-density
polygon mesh that we'll use in
the final asset. With that high-poly mesh
unwrapped, we can then bake it. But instead of getting
the information that we'd normally only get
from baking high-poly detail onto a low-poly detail,
like the normal map, we're just getting information
about the curvature in the surface of the high-poly
that we're working with. So in this instance, we might
have an ambient occlusion map, might have a curvature map,
we might have a thickness map, but we wouldn't
have a normal map, because there's no additional
information that needs to be baked onto the asset. And then we can use these
textures to build our texture pipeline like we would
inside Quixel's Mixer or inside something
like Substance Painter. But the real benefit
of this pipeline is that you're no longer
wasting time generating assets that you aren't
ever going to actually use. So with our previous
pipeline, we were spending a huge
amount of time generating this high-poly asset that
we'd then only used for a bake and then have to discard. Now we can directly use
the high-poly asset, and the low-poly just becomes
the base that we eventually develop into the final asset. One of the harder
parts of building game assets at this higher
resolution is UV mapping. Normally, this isn't a
particularly painful process, but when you're working with
such high-definition meshes, this can become
really difficult. Here I have an asset that
is 650,000 triangles, and even just previewing
what we unwrap on this is going to look like
is really difficult. I have to actually
zoom in really far just to see what the actual
triangles or quads that I'm working with look like. And even if that
wasn't a problem, just moving these elements
around is really difficult, let alone unwrapping. There are a few
different ways that you can tackle this to make this
a bit easier for yourself. I've outlined a few of
the unwrapping steps here that you can
use to make working with high-definition
geometry a bit easier. First, we have the pretty
standard methodology, which is building high-poly
from an unwrapped base mesh. This is exactly what I've done
with this particular asset. You can see that it's a
reasonably complex shape, but the actual unwrap itself
is just based off a cylinder. So here you can see the
side paneling, and then the top and bottom of each
of these cylindrical tops. The really useful
thing about doing this is that we can just start
off with these simple shapes, like this one over
here, and then we can modify this
unwrapped pole if we want. In this instance, I
had these up here, and I scaled this to
match the UV size just to give me as much
definition as possible. In this instance, I don't
really care about the UV space for the top and
bottom of the shape, because I know that I'm
going to be blending out the top and the
bottom with this shape with a standardized texture. Once I've got that, I can start
adding some extra geometry to this shape. Let's just adds to the
top and bottom as well. And then this model is now
ready to go into something like ZBrush, or I can
start sculpting it inside Blender's
sculpting tools to create the shape I'm happy with. And as long as I don't
use any tools to destroy the original unwrap, I'll be
able to continue using this. And then, once I
finish sculpting, I have my UVs already complete. The second option is
preserving your subdivisions and just unwrapping
on the lowest level, and there's a number of
different ways to do this. Inside Blender,
you have the option of subdividing your
mesh as a modifier, but you can leave this
on as a modifier, which preserves the original shape. You can use things
like Crease to create harder or softer edges, or you
can bevel the edges as well. In this particular
example that I have here, I have a number of different
objects and shapes, each with a different
subdivision modifier on, so I can control
this independently. But the actual unwrap
itself is still showing the base subdivision,
which is much easier for me to work with, modify, and edit. The benefit to this is
that I never actually have to delete this
particular subdivision. I can leave this
on as a modifier, and then, when I
export it as FBX, collapse it down to its actual
high-definition geometry. And we can do the exact same
process again in ZBrush. So here I have the canyon rock. And it has seven
subdivision levels, so I can go up and down
based on what current process I want to be working on. But all I need to do to work
on the unwrap of my low-poly is export out, give it a name-- UV Example-- and then I can
import this into Blender, go to my UV edits,
and start unwrapping this particular shape. So I'll just mark
a few seams here. I will select this
loop, mark that seam. And that's about it for the unwrap. And for this example, I'm just
going to leave this like this. And once I'm happy with that,
I can go back in, export, overwrite my existing OBJ
file, import that back in. And if I apply a
texture map to this, you can see that
we've got our new UVs on this particular object. And I can still go up to my
original subdivision level. The third option is a
little bit more work. It involves recreating the
object at a lower subdivision level, unwrapping
that, and re-projecting. In this example, I use poly
groups and zGuides to create a new mesh at a
lower subdivision level. I take this lower-poly model
into Blender for unwrapping, and once it's unwrapped,
I can position my islands and create my UV. Once I'm happy with that, I
can bring it back into ZBrush again, subdivide that
mesh, and then project it onto the original high-poly. With just a little
bit of tweaking, I'll have an exact copy of
that high-definition mesh, but now it's unwrapped. The fourth option is
by far the easiest, but will give you the least
control over your unwrap. You can use the automatic
unwrapping processes inside most
applications to give you a very quick and easy unwrap. Back with our canyon rock,
if we go into the UV editing mode of Blender, select
all, and then just tell it to Smart UV Project. You'll see it has to think
about it for a few seconds, and depending on the
size of your model, it might have to think
about it for a few minutes, but it will unwrap
the object for you. As you can see, the job that it
does isn't particularly great. There's a lot of
wasted texture space, and a very large
number of islands have been generated, which will
most likely create seams when the texture is applied to it. There are other
tools that have been created that will
do a much better job of automatic unwrapping. Houdini is a good
example of this, but there's still no
substitute for good unwrap work done by an artist. The final step is to not
bother with unwrapping at all. As we're using Nanite,
we can use meshes with a very high vertex
count, and with that comes a huge amount of potential
data in the form of vertex colors. In this instance, I'm
baking mask data into the geometry and using that
to blend between several world space textures. This means that I don't
have to unwrap anything. I can bake a series of maps
into the vertex colors, and then use these to create
my unique information, which comes with a number of other
benefits as well. To demonstrate this, I
have this scene here. These three assets are using
the exact same material, but create different
blends based on the vertex colors that are being used. This saves a huge amount
of texture memory, as I'm not having to export any
additional textures to the base ones that I'm using. I can just create some
base material types. So in this instance, I
have a metal, a rust, and a painted metal. And then I can blend between
them based on the masks that I'm using. In this instance, I've baked
ambient occlusion, thickness, and curvature into
the R, G, and B channels of the vertex
colors, and then I'm using this to blend between
these three base materials-- paint, rust, and metal. Once these are in, I can control
the density of this paint information and the strength
of the paint information with an additional
noise channel to control the random variation. I could also turn
this off altogether, leaving me with
just two channels. And I can control the
strength of the rust build-up on that surface. And then, once I'm
happy with the result, I can apply this exact
material to any other mesh in the scene that has
those same channels baked into their vertex colors. This allows me to create
lots of different material variation using the exact
same materials without having to change any of the properties. So to sum up, Nanite is an
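The blend described above can be mimicked on the CPU to see what the material graph is doing per pixel. This is a minimal sketch of the idea — lerping between base materials using the vertex-color channels as masks — and the channel assignments here are my illustrative choice, not the exact graph from the talk:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def blend_surface(metal, rust, paint, vertex_color):
    """vertex_color = (r, g, b) masks baked into the mesh: here the
    green channel blends rust over metal, then red blends paint on top."""
    r, g, _b = vertex_color
    return lerp(lerp(metal, rust, g), paint, r)

metal, rust, paint = (0.6, 0.6, 0.6), (0.5, 0.2, 0.1), (0.1, 0.3, 0.8)
print(blend_surface(metal, rust, paint, (0.0, 0.0, 0.0)))  # bare metal
print(blend_surface(metal, rust, paint, (1.0, 0.0, 0.0)))  # fully painted
```

Because the masks live in the vertices, any mesh with the same channels baked in can reuse the identical material — which is where the texture-memory saving comes from.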
incredibly powerful new tool, but it does come with
some restrictions. Materials for Nanite
can't be two-sided. They have to be opaque,
and you can't use tools like World Position Offset
or Pixel Depth Offset. Nanite rendering
currently doesn't support split-screen,
forward rendering, MSAA, or the stencil buffer. Keep in mind that Nanite,
along with Unreal Engine 5, is in early access. That means that it's still
in an experimental state, and it's still being worked on.
to happen with both Nanite and Unreal Engine 5, as it
gets closer to full release. Remember, Nanite is
super easy to set up-- just a tick box, and that's it. It'll work on any mesh
that will support it. It renders quicker and
takes up less memory, so if you can use it, do. You can use the Nanite
proxy, imported LODs, or UE-generated LODs
in place of Nanite for non-supporting
platforms so your game will work across any platform
that you distribute on to. Nanite compressed geometry, so
file sizes won't be massive. 1 million triangles is
about 14 meg on disk. And some final art
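That figure works out to roughly 14–15 bytes per triangle, which makes quick budgeting easy. Here's a back-of-envelope helper — the per-triangle constant is derived from the number quoted above, and real compression will vary per mesh and per Position Precision setting:

```python
def estimated_nanite_disk_mb(triangles, bytes_per_triangle=14.7):
    """Rough disk-size estimate from the ~14 MB per million triangles
    figure (about 14.7 bytes per triangle)."""
    return triangles * bytes_per_triangle / (1024 * 1024)

print(f"{estimated_nanite_disk_mb(1_000_000):.1f} MB")  # ~14 MB
print(f"{estimated_nanite_disk_mb(650_000):.1f} MB")    # the canyon rock, roughly
```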
production tips-- keep your geometry
combined, where possible. This will allow Nanite to
reduce the geometry down more efficiently. Always remember to check
your smoothing groups. When you're working with
really high geometry meshes, it's actually quite difficult
to tell whether you have smoothing groups on or off. Just make sure
you've got them on, as this will impact
Nanite LOD generation. Try to preserve your
subdivisions, where possible, and unwrap early. It will make your
life a lot easier. And finally, I'd
always recommend to build at the
closest resolution the player will be able
to see it at, and no more. While Nanite compression
is very good, having actual levels
of detail that will never be rendered
or loaded will still take up additional memory. And that's everything. Thanks so much for watching. I hope you found this
primer on next-gen art production with Nanite useful. You can find out more about
Nanite on our docs page along with all other features
new to Unreal Engine 5. If you want to dive
into a live example, you can download the
Valley of the Ancients demo from the Epic launcher. And if you have any questions,
you can contact me on Twitter-- @ArranLangmead.