SPEAKER 2: And
SUITSAT is deployed. Although haunting, evoking the
image of a stranded astronaut floating away from
their spacecraft, SUITSAT is on its way, heading
into the Earth's orbit. Filled with ham
radio equipment, it's ready to transmit pre-recorded
messages from school students and enthusiasts
around the world. [INTERPOSING VOICES] SPEAKER 2: Houston
reports a good deploy to ensure no re-contact with
the International Space Station. SUITSAT's orbit will
decay in a few weeks, when it will then enter the
Earth's atmosphere and burn up. [MUSIC PLAYING] DIAZ: All right,
this is to Houston. This is Commander Diaz. Do you see any debris close to our trajectory? SPEAKER 3: Evening, commander. Negative on that. Clear sailing as far as
we can see down here. If there was any cause for
alarm, you know we'd see it too. Your crew members can
keep sleeping tight. DIAZ: Well, I'm seeing
something out there. I can't make it out,
but whatever it is, it's getting closer. SPEAKER 3: Tell you,
commander, we're not-- DIAZ: Houston, repeat again. SPEAKER 4: [GARBLED
RADIO TRANSMISSION] SPEAKER 3: Diaz, do you copy? Commander Diaz, do you copy? DIAZ: Houston, you're not
going to believe this. I'm picking up
transmissions on the ham radio that sound identical
to the SUITSAT experiment. And that debris? It's an Orlan space suit. SPEAKER 3: I'm not sure
I'm hearing you right. Repeat that, commander. DIAZ: SUITSAT. I'm seeing SUITSAT. SPEAKER 3: You're
mistaken, Diaz. SUITSAT re-entered
the atmosphere and burned up years ago. It's impossible. DIAZ: Yeah. I know it's impossible,
but I know what I'm seeing. It's SUITSAT. It's come back. And it's not just in orbit,
it's headed right for the ISS. SPEAKER 3: Commander Diaz,
you're not making any sense. Say this again. Commander? Commander? [GARBLED VOICES OVER RADIO] DIAZ: I need to alert the crew. [INTERPOSING VOICES] SPEAKER 3: Diaz? Diaz? Diaz? Commander Diaz, do you copy? Diaz? [MUSIC PLAYING] AMANDA: Hey, everyone! Hike over to the Marketplace
and get lost in this month's free content. Mix things up with a
procedural generation tool, download a treemendous
stylized forest, build a legion of
low-poly robots, get moving with a modular
underground subway, and create your own
adventure with a third-person template--all available
through the end of the month. Plus, raise the roof with
the Easy Building System, now part of our permanently
free collection. Just released into the wild,
Stonefly isn't your typical mech game--it's a chill and
tranquil action-adventure. We caught up with Epic
MegaGrants recipient Flight School Studio, known for
their pinball-inspired hack-and-slash,
Creature in the Well, to learn all about how an
interest in both nature and bugs served as the basis
for this thoughtfully-crafted, non-traditional game. When global brands need to
cater to tastes and cultures around the world,
MediaMonks delivers! On the Unreal Engine feed, learn
how this creative production company utilizes Unreal
Engine to replace live-action packaging
with a CG equivalent to save time and money
without the need for reshoots. Last week, we released Unreal
Engine 5 Early Access and many of you have already started
exploring the new tools and features--and let us say,
your test projects sure do look amazing! Keep sharing your
experiments and tests, and during this
time, please report any bugs you find via
the Bug Submission Form and drop your feedback into
the dedicated UE5 Early Access forums for us to
track and iterate on. And now over to this
week's top karma earners. Many thanks to: ClockworkOcean, mindsurferdev,
Everynone, T_Sumisaki, zeaf, Xelj, Dr Caligineus,
zompi2, Luos, and Mahoukyou. Dashing over to the community
spotlights, in Wild Dive, cross vibrant but unforgiving
terrain as the young ferret Weasley, in the frenetic
first-person endless runner. Dodge obstacles, cross ravines,
slide down slopes and more for the highest score. Download Wild Dive on Steam. Next up is a short
film by Guido Ponzini. Created in their spare time, the
R&D project helped them explore the challenges of creating an
automotive demo--storytelling, rigging, environment
effects, and so forth. Share your feedback
in the forum for Life is a Rally, Guido's
love letter to life and our ability to continue
marching forward, even through obstacles. And last up, give
a round of applause for Arnaud Claudet, a student
at Rubika Supinfogame, who received a Rookie for this
gorgeous short, The Aorta Valley. Inspired by underwater fauna
and Middle Eastern architecture, you can see their process
on their Rookies' page with a complete breakdown
coming soon to ArtStation. Thanks for watching this week's
News and Community Spotlight. VICTOR: Hi, everyone
and welcome to Inside Unreal, a weekly show where we
learn, explore, and celebrate everything Unreal. I'm your host Victor
Brodin and please let me introduce my
co-host for today, Chance Ivey, Senior
Technical Product Designer. CHANCE: Hey everyone. Good to see you. VICTOR: But today, we
are going to talk about Nanite. And to help us
with this endeavor, I have to introduce
engineering fellow for graphics, Mr. Brian Karis. BRIAN: Hello. VICTOR: As well as Galen
Davis, evangelist for Quixel. GALEN: Hey, guys. How's it going? VICTOR: Who I'm sure you
all have seen at this point. Because if you haven't, go watch
the Welcome UE 5 announce video that we released last week. CHANCE: And
yeah, this just proves that he is not in fact
a MetaHuman, which has been asked several times. VICTOR: Does it, though? CHANCE: Yeah. VICTOR: Does it, though? CHANCE: [INAUDIBLE] VICTOR: We cannot know. To kick this off, I would
like to hand it over to Brian to just talk a little bit. What is Nanite? BRIAN: Yeah. So Nanite is our virtual
geometry system now available in UE 5. So virtual geometry
is a way of only kind of drawing the amount of
geometry that you can see, that you can perceive
at the detail level, down to the pixel level. So virtualized means
that it's streaming in just the data that is
necessary to draw that frame. Only that amount
needs to be in memory. And through the way that Nanite
rendering system works is it only needs the processing power
to actually render the geometry that you need to see and not
really spending much effort on the stuff that you don't. So the goals behind this,
the history if we go, if we look back a
bit, I have a lot of experience in my past working
on virtual texturing systems, which do a similar sort
of thing for texture data. So you only bring in the exact
texture data for the stuff that actually lands on
the pixels in the screen. And anything else that
you'd have in memory would just be
working like a cache. That has enabled
artists in the past to be able to use super
high resolution textures and not really have to worry
about texture budgets as much. They just can kind of go nuts
with high resolution textures and not have to
worry about things. So with that
experience in the past, the dream was, well,
we'd really like to be able to do the
same thing for geometry. And being able to do
that for a geometry has a lot more
impacts than just what it would mean for textures. So the idea there
would be to get past the numerous different budgets that artists typically have to deal with. And that has been a dream going on for a long time. People have talked about
this in various forms throughout the years. It's something that I've been
thinking about and researching for over a decade at this point. And have had various thoughts
on how that could possibly be achieved, that have
morphed and evolved throughout the years to
try to figure that out. And it seemed like it
was finally time now. As of a few years ago, we
kicked off the effort in earnest to develop Nanite for UE 5. A lot of effort and more than
just myself, a group of us have contributed
to this technology, and it's finally ready for
the world to experience. We've achieved that vision to
at least a respect that I think is pretty reasonable for what
the goals were, for at least at this point of time. There are things that
it doesn't do yet, but for what it does
do, it hits the target for what we're looking to do
and it's now in your hands ready to play with. So we'll start showing
off some things that we've used it for and
our experiences with working with the tech and go into
more, later on in this stream, I'll talk a bit about
how it works or at least a high level view. CHANCE: Well,
that's awesome. And Galen and myself were
able to take a lot of the work that you've done on
Nanite and run it through some paces for the
Valley of The Ancient Project too. I think we had the Lumen in the
Land of Nanite demonstration from last summer that
had really great caves and some stalactites and
whatnot in that environment. And we took a little different
approach, going outside. There's a number
of different things that I'm sure we learned along
the way along with you that we can probably discuss today. But I think pretty much
in Valley of The Ancient, I think all of our static
meshes are using Nanite. Is that right, Galen? GALEN: Yeah. Everything in here is fully
using Nanite, which is amazing. I mean, we kind of mentioned
during the presentation, every asset that we've
loaded in is about one to two million triangles per
asset, which is pretty crazy, considering we have them
scattered in the millions actually in the actual project. So I mean, the math
on that's pretty wild. BRIAN: Yeah. That comes out to like
over a trillion triangles. GALEN: Yeah. BRIAN: You have the
million times a million, and it's actually
more than that. There's more than a million
instances and most of them are more than a
million triangles. GALEN: I have
a really fun memory actually from GDC a couple
of years ago, Brian. I don't know if
you remember this, but this is before Quixel
was actually a part of Epic. And we were up in a meeting
room in the Epic booth and you pulled out your
laptop and kind of showed us an early prototype of this
with very primitive shapes, right? BRIAN: Yeah. GALEN: You were like, hey,
so this box here is actually millions of triangles. So just imagine that it's
millions of triangles here for the sake of
what we're talking about and like showing
all these different kind of primitive objects. And that was kind
of my first exposure to it was several years
ago at GDC in that meeting. And I don't know exactly how
you put it, but you're like, do you guys think you could
maybe do anything with this? And it's like, I think we
could figure something out. Yeah, I think we
could make it work. So that's kind of my first
memory of actually hearing about Nanite. BRIAN: You guys were
one of the first people that had seen it outside
of the Epic walls. We're like, we really
want to do something with you guys going forward. And then, behind the scenes,
wink, wink, nudge, nudge, we should really
acquire this company. CHANCE: Yeah. [INTERPOSING VOICES] CHANCE: I was wondering
why, Galen, when we first started this, that you were
really hellbent on getting a bunch of primitives in
the environment, and not actual Megascans. He was like, Brian showed me
some cubes a long time ago. And I really want to see
if we can make cube world. Make it happen here using-- [INTERPOSING VOICES] CHANCE: No, that's awesome. So yeah, some of the workflows-- I mean, this is the first
time that I'd used the tech. Galen, I know you'd
worked a little bit on Lumen in the Land of Nanite last summer. The Valley of the Ancient project posed a number of really unique challenges to us, where we hadn't
used the tools yet. And so, I mean,
Galen, you've worked on numerous other
projects, that you've had to go through your
traditional building, super high res
geometry, and then decimating it down,
building your normal maps, putting that in there. What were some of the
things that were obvious up front that you didn't have
to do, or at the same time, where did you go and just say,
I've been promised this thing, and I want to kind
of dig into that. What was the workflow
for you there? GALEN: Yeah, I
mean, well, to touch on your previous
projects, I mean, I feel like something that
I've been just laser focused on for a very long time,
and spent a huge amount of time and effort perfecting
and kind of learning about is normal map projection. Going from high to low, and
getting the closest and most representative bake
that you could possibly get through that process. And the fact is, that we don't
even need to do that right now. I think that that's a
really exciting place to be. One of the things you
guys might have noticed, if you've actually
opened up the Valley, is that if you load up any
Megascans asset that we have in the environment,
it's actually not using a unique
normal map at all. And so, this speaks
to two things. First being that Nanite
is bananas, just as far as what you can actually do. And the second being, that
our scanning technology has gotten to the
point where we actually don't need normal maps to
drive a lot of that otherwise macro level detail
that you otherwise would need through normal
map projection techniques. And so, while I mean we did
lose a little bit of quality, a small amount I would
say, we compensated for that with detailed tiling
normal maps, which is something that I think adds just enough
surface variation there to kind of get us
that kind of crunchy feeling that we otherwise
wanted for these assets when you really get up close. So that's the
biggest one for me. I've spent lots of time
in ZBrush and lots of time perfecting cages, and doing
all that type of stuff forever. And this is the first project
where that hasn't even been part of the vernacular,
which is amazing. BRIAN: So none
of the Megascans in that demo had
unique normal maps? None of them? GALEN: No. Yeah, so we're using our
cinematic high master meshes that you can actually drag
in now through Bridge. They're native Nanite actors, so the .uasset actually streams in in seconds. And so, we actually wiped
the unique normal maps for every asset in
the entire project, basically for the
purpose of reducing file size on disk. Because we knew that this
was going to be something that people were actually going
to engage with and download. So unique normal maps are
not represented per asset, and unique roughness is actually
not represented here as well. So we use detail tiling
solutions for each of those. Which I think is pretty cool,
and I think that, again, it just sort of speaks
to the quality of the scan assets themselves. And I think that one thing
I'd mention with that is that we've obviously identified Nanite as being one of the main things that we want to target as far as the product is concerned with Megascans. We want to constantly refine the processes for actually gathering the data itself, so that we can get as
close and true to life as we possibly can from the
scan-- the raw scan asset-- down into what actually gets
dragged into the engine. CHANCE: That's fantastic. Yeah. I'm not an artist. But one of the reasons
I'm not an artist is because of the numerous
tricks that you all have just discussed right now, and
making things actually work to both quality and performance
as things move around. Brian, I wanted to ask-- you seemed a little
surprised that based on Galen's response there. Are you discovering new things? When you're setting out to
make this technology, I guess, when you're working
on stuff that's a little bit more
experimental, and it's not well-established
under a set space, there's going to be some side
effects or some positive things that show up that you
weren't expecting. But I mean, was the artist's
work flow part of the thought with doing this, or was
it mostly focused on, how do we make the highest
possible quality that we can and stay performant? BRIAN: No, so that
was a surprise to me. I knew that some
assets had done that. I just didn't know
that all of them were. That was the part that
was surprising to me. So no, it's not having to use-- not having to do that
bake down to normal maps was definitely part of the
original goal for Nanite-- or one of the many
goals for Nanite. But on the Lumen in the
Land of Nanite demo, it was something that was
used for a few of the assets. But most of the Megascans
still had unique normal maps in addition to the geometry. The reason being
there, not because they had to necessarily, but just
the difference of resolution that you get from a one
or two million poly mesh. But if you have an 8K normal map on it, the resolution that you get from an 8K normal map is significantly higher than what you'd get from a million triangles. So just going for absolute
extreme maximum detail quality, that was what was in that demo. So there's different approaches
there, just because-- when we say, you don't
have to use normal maps, that doesn't mean
that you can't. You can. You don't have to make
multi million poly meshes. You can. That's the thing that
Nanite enables that now. And that you can have
very efficient rendering of these very high poly assets. You certainly don't need to
have many million poly meshes. We've shown-- and
we'll show later-- running live, some assets that are 30 million triangles. And it's something you could
do if you want to do that. It's capable of it. But that's not
necessarily the best idea to use stuff of that
high density level in shipping game
content, because it'll be more difficult to work with. It'll be larger on disk. It's able to handle the sort
of extreme amounts of data, and means that you can
directly drop that stuff in without the quality loss,
and be able to just render it directly. But it's up to you how you
want to make your assets. And what's the best
trade-off for disk size, what's the best trade-off for your workflow, for working with other DCC packages, things like that. To kind of return to what
your original question was, as far as whether
artist workflow was a part of the goals. Absolutely. That's kind of how I
framed it originally for the goals of Nanite was
about trying to get rid of-- or smooth over at
the very least-- but hopefully be
able to just get rid of the sort of thought
of trying to hit specific technical
budgets, and getting a lot of highly technical
tuning of artistic content out of the heads of 3D artists. So that they're not having
to be constantly mindful of, I want to do this thing. I have some artistic
vision, but I need to work within these
specific technical constraints, or I need to do a
bunch of busy work to get the thing that I
had made into something that is usable in an
interactive fashion. Like, here, I made this thing,
and I've got it in ZBrush, I have it in 3ds Max, I
can render this in Arnold, or whatever, some
other offline renderer. Here's this thing that
looks great in KeyShot, and now I try to bring
it into the game. And I have to go through
this various process of baking my high poly down
into a low poly in a normal map. And I need to make
collision geometry for it. And I need to generate
multiple LOD levels for it, and I need to pack the lightmap UVs. All of that sort of nonsense is just technical busywork. And those are the
types of things that we want to either
automate or get rid of, such that you could just take
that thing that was created, that the work went into
making it look good, and then just drop it
directly in the game and be able to use it without
both the amount of work that it took to get
that to run fast, or the technical know how of
knowing what you're balancing, and knowing what
budgets you have to hit, and coordinating that amongst
the larger team of artists. There's just a lot
of extra things that weigh on artists from
that point of view, that should be ideally unnecessary. GALEN: And just the shade
that in a little bit further, for this demo
specifically, we are using the cinematic high
master versions of those assets specifically. But if you're making
your own content, not using Megascans
content, it's up to you to decide how far
you want to push the tessellation or the
density of the geometry just in general, for
anything that you're making. I mean, we've done
our own personal tests of what are the diminishing
returns of how far you actually push the numbers. And to be honest,
A/B-ing some of this stuff, the
difference between some of what we currently have in
product represented right now to where we probably
could have it is, we're still
trying to figure out what that sweet spot
is actually going to be for these types of assets. And the same thing applies
to making your own content. If you're doing some
hard surface modeling, and you have something that's
really basic, could just be a cylinder, you don't need
to necessarily tessellate that cylinder to the
nth degree in order to kind of get the quality
that you need for that. But the considerations are
always going to be size on disk there. Which I think is one thing that
if artists were to sort of self evaluate in on this
project, I think that we could have maybe gone
through those steps of maybe going through the assets that we
actually use for this project, figuring out where is that
point of the diminishing return, as far as the density
of the geometry, and therefore reducing the size
on disk for the actual project itself. But in the interest of time, it
was a very, very short timeline for this project, we
just started building. So we dragged the assets in,
and we just started actually kind of constructing that. But again, that speaks to
the power of Nanite here, just I'm going to be dragging
in multimillion triangle meshes and just start building
immediately with it. So pretty awesome. BRIAN: Yeah. There's a lot of
tools and things for that specific
purpose that we're planning on making that
aren't really that great yet. So being able to do
asset review and being able to trim down the data
size on disk as a post-process, so not something where you have
to guess ahead of time, here is exactly how much you
should be dedicating towards this asset, and so on. Here's exactly
how many triangles that this thing needs,
and here's how big it's going to be on disk. But so that you can-- late in the game-- when nearing the time
when you need to ship, and you're actually seeing how big your game is sitting at-- how big is the
download size going to be, what is the size
of your game package, that you can start
trimming at that point, and make your optimizations
after the fact, and not have to worry about
going into some other DCC package and reimporting
meshes and stuff like that. So tools like that aren't
in the early access build, but will be in
later versions of UE5. CHANCE: I like making your
compromises later once you have all the information, as opposed
to having to do it beforehand, and then trying
to figure out what you could have done better. BRIAN: Yeah, because
what you really don't want is to undershoot, just to be
conservative, so that you're not screwed late in the game. Instead, it'd be great to be
able to overshoot it and then trim it back to what
you need to ship. CHANCE: Well, and
yeah, and to follow kind of what Galen said
there, for this project, we were in a few
ways trying to see what we could get away with. And we'll talk a little
bit about what we learned in doing that in a second. But yeah, the download size
for Valley of the Ancient is 100 gigabytes. But even when you
package it, it's a quarter of that--
the entire game. And we could have certainly
gone further down from there. It was just, what can we get
in in time, how far can we push these bounds for a bit,
and early access kind of see where we are. And I think, we've learned
quite a bit about how it works at least from going
through the process of building a project that we
can profile and then release to the community. I know Brian,
you've been chatting with Galen and the Quixel
folks, and pretty much all of us since the beginning
here, and I know that we were using
things in ways that you had not
expected in certain ways. And I know that
there are a number of things that we learned while
we were working through this. Galen, do you have
anything to add there from what were
some of the things that we tried up front
that we ultimately decided that didn't
work, or we had to maybe change our approach? GALEN: Yeah, sure. We could touch on
some of those things. I mean, I think the ultimate
goal for us with this project was that, whatever workflow we pursued from an art point of view, we wanted to stay in engine the entire time. I think that was
something that we really-- as far as releasing this type
of project day one for people, we didn't want there to be these
kind of barriers for entry, just additional DCCs, while
there's amazing tools that allow us to do
that in other DCCs, it was something that we
definitely wanted to make sure that any person that
was watching this day one, that they didn't
feel like, oh, well, I'm not going to be able
to achieve x, because I don't have that package. And so, that was something
that we explored. And so, we actually dug up
kind of an old tool called procedural foliage volumes. And we started loading
Nanite actors into those, and started to propagate
hundreds of thousands of these objects all
throughout the environment, just to see if it
would actually work. And it did actually work, but
there were some considerations that we just needed
to take there as far as that overdraw,
Nanite specifically and then kind of stacking
objects with regard to Lumen and the performance
implications of those workflows. And so, we actually opted
to build this entire four kilometer environment by hand. Which sounds crazy on its
face, but it's definitely something that we were
enabled to do, I would say, by the tech, and that we
built out a palette of assets, and not only in just
the Megascans actors that you see if you go
to just the Utah pack. But we're really excited, like
I mentioned in the presentation, that we have a new asset
type that we're debuting called mega assemblies. And those mega
assemblies are basically larger elements that are
Megascans assets combined into much larger assemblies. And those assemblies
allow us to just propagate a huge amount of space in a
very short amount of time. And so, what we did is that
we actually kind of dragged in GIS data from
the actual location that we scanned in Moab. So real quick, in case anyone
hasn't picked up on this, this is from an area in Utah. And it's just an
amazing space, actually. The scale is just
ridiculously impressive when you're actually standing
out in the desert there. And so, we did a scan trip
out there for about a week, scanning with drones, scanning
with our handheld scanners, everything, to kind of
get as much as we possibly could, really, to sort
of recreate this area. And from there, we get
this palette of assets that we then bash together-- just kind of kit bashing. And our guys are looking
at tons of reference, kind of pulling from all the
different kind of pictures and footage that we took
while we were actually out there in the desert, and
assembling lots of pieces that we otherwise were
not able to scan, just because of limitations. There's lots of limitations
as far as scanning, just accessibility. If you look at, just a lot of
the massive canyon walls that are literally hundreds and
hundreds of feet in the air. It's like, yeah, we could
send a drone up there, but we're probably going
to get a very small sample size from that area. Why not reconstruct some of that
from scratch, and actually kind of build it based on
the assets that we were able to go and get. And so, all that to
say, that we used that GIS data from this
exact location. And we started layering
assets on top of it. And that was something that
I think sort of allowed us to use the GIS data
as more like tracing paper, which is kind of the
best way to think about it. And if you notice inside
of the environment, there's actually
not a terrain actor. So we had terrain originally,
and we used that, again, just sort of as tracing paper. And we were able to sort
of layer assets on top. Now, I want to make sure
we really heavily caveat this specific approach here. Brian, I've had
many conversations around this topic specifically. The purpose of this
demo, again, for us was to really push
the boundaries of what the engine can do. We wanted to step on as many
rakes as we possibly could, going into early access, so
that ultimately people would have a good day one experience. And I think that we
did a lot of that. So a shout out to
the entire team for taking all those rakes. Because it was a massive
lift from the entire team to get to where we are now. But with that
layering assets on top of what was otherwise
terrain, everything you're seeing there is densely
tessellated Nanite geometry. So it's pretty crazy,
every single inch of the environment hand
assembled and is actually represented by geometry. CHANCE: Yeah,
all the ground. All of the ground planes
are Nanite Megascans. BRIAN: Do we want to
bring in the engine and show it? GALEN: Yeah, sure. Sure. BRIAN: People can look
at what we're talking about? GALEN: Yeah. I've got it up on my
screen right here. So I was just talking
about covering the ground with geometry. So this is a great example
here, just sort of one of our mid-ground shots. Everything that you're seeing
here that's on the ground is something that is represented
by densely tessellated geometry. So it's pretty neat
to see just how far the artists on this
project were able to push this, just as far as literally
just taking these insanely tessellated pieces,
and just covering every single piece of
the landscape with it. So, with that, this
is something that's been a limitation in
games for a long time, is just the actual
landscape itself, and figuring out
how to kind of get as close to the tessellated look
that you would have with actors that are sitting on
top of the landscape, integrating with the
pieces down below. And there's been a lot
of advances with that. I think one of the
examples of that: this was something that we were actually R&D'ing early on, when landscape was still sort of a piece of the equation on this project. We love being able to kind of leverage runtime virtual texturing (RVT) specifically. That's something that really gives us a nice variation of color from the different elements that are actually laid on the ground. You can actually have a really nice, believable blend from these assets coming up from below it. You can affect that transition based on some really simple shader math to get you there.
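A minimal sketch of the kind of height-based blend being described here, written as plain C++ rather than material nodes; the function names, the falloff, and the constants are illustrative assumptions, not the project's actual shader.

```cpp
// Minimal sketch (not the project's material): the kind of simple blend math
// that fades a placed mesh into the ground color sampled from a runtime
// virtual texture. All names and the exact falloff are illustrative.
#include <algorithm>
#include <cstdio>

struct Color { float r, g, b; };

Color Lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t };
}

// Blend toward the RVT ground color near the contact line, based on how far
// the shaded point sits above the sampled ground height.
Color BlendWithGround(Color meshColor, Color rvtGroundColor,
                      float pixelWorldZ, float rvtGroundZ, float blendDistance) {
    float t = std::clamp((pixelWorldZ - rvtGroundZ) / blendDistance, 0.0f, 1.0f);
    return Lerp(rvtGroundColor, meshColor, t); // t = 0 at the ground, 1 above the blend band
}

int main() {
    Color rock{0.45f, 0.40f, 0.35f}, sand{0.80f, 0.65f, 0.45f};
    Color c = BlendWithGround(rock, sand, /*pixelWorldZ=*/10.0f,
                              /*rvtGroundZ=*/0.0f, /*blendDistance=*/25.0f);
    std::printf("blended: %.2f %.2f %.2f\n", c.r, c.g, c.b);
}
```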
And it's really awesome to see that. But again, we had to pull back from that approach in this project just in favor of
pushing Nanite to the furthest that we possibly could. And I want to kick
it over to Brian specifically around
this approach, and sort of talk about
some of the pitfalls there. Because we did do some
course correcting on this specifically, but there are
some technical implications that I think are definitely
worth mentioning to this group. BRIAN: Yeah, sure. So I'm a little blind here. I don't have the feed coming
of what you're sharing anymore. So I can't see exactly
what you've got on screen. But yeah, before we get
into some of the caveats, I think I would also
want to just highlight some of the advantages of
this approach versus using just landscape alone
at the very least. And it's not just from
the triangle density point of view, which is still true. It's also what you
get from static meshes and scanned ones in particular,
is that it's not just a height map anymore. When you've got cliff
faces, those cliff faces can have overhangs that can
have actual interesting relief to them. And that can happen not
just on the typical sort of overhang cave case,
which everybody knows can't be done with height maps. But even in the
smallest scale sort of situations, when you see
how rocks on the ground look, when there's a rock
sticking out of dirt. It's not just that there's a sort of rocky relief to it, a rocky displacement coming off of the ground. Instead, it actually looks like there are rocks sticking out of the ground, because there actually are. There's just a
realism that comes from that that's just not
achievable through a height map only solution. So I think the sort
of results that were achieved with the
approach that you took with tons and tons of
static mesh instances actually gives a look
that hasn't really been seen before,
or not at the scale, not at this fidelity
level in real time, which is really cool. GALEN: Yeah. Another real quick,
before the caveats, so the other trick that we
weren't able to sort of lean on specifically with this approach
is pixel depth offset, which is another thing that we've used for a long time for sort of blending assets together. Getting that nice dither look
between the different kind of groupings of assets that
you would place on the ground. That was something
that we actually didn't need to use. And the reason being is that,
since the assets are so densely tessellated, they actually kind
of butt up against each other really nicely, which is crazy. BRIAN: Yeah. GALEN: It's so crazy. BRIAN: Yeah. I found that to be
really surprising too. When I first started taking-- because yeah, some of
the very first data that I tried testing-- early, early
prototypes of Nanite-- was with some Megascans data. Because I was
trying to find, OK, where can I find a bunch
of really high poly meshes? Well, Megascan's library has
a bunch of high poly meshes to use. And yeah, it's quite surprising
that what you expect needs soft blends, what traditionally would
need a soft blend to go from one mesh to the other, when
the intersection isn't low poly polygonal anymore, and it's
actually detailed intersection between them-- it's just
sort of noisy crooked line-- it's not as obvious when it
goes from one to the other. They kind of fit together
in a fairly natural way even without any sort of feathering. Let's see. Can I get this going
again, so I could see what you're looking at? VICTOR: Yeah, so we
just got the screen share back for you Brian, and
Galen, and Chance, since I'm the only one who
actually can see what's going on on the stream PC right now. BRIAN: All right. VICTOR: It should
be back up for you. BRIAN: OK, it's
not yet on his PC. There we go. Cool. So an issue that came
up with this approach is if you try to kit
bash things together, Nanite in some respects can
handle this really well, because it's doing fine
grain occlusion culling. So that, if you have two
meshes that are really heavily intersecting with
one another, such that a large portion of one
mesh is below the other mesh, and the same thing going
the other direction, that parts of either
one of those meshes are completely
embedded in the other, those things won't
cost you anything. They'll be completely
hidden below the surface. And Nanite will cull all
of that geometry that is well embedded
in something else, and it won't cost you anything. So compare that to
traditional meshes and the cost of drawing
that mesh is the cost of all of the triangles of that mesh. No matter whether you
can see them or not. So long as you can see
any portion of that mesh, you're going to
draw that, and that includes all of
the triangles that are well below the surface. That's no longer
true with Nanite. For the most part, you just pay
the cost of what you can see. And it for the most part
scales with screen resolution because of that. So it's really in
most use cases, the cost of rendering
Nanite geometry, no matter how high poly it
is, no matter how it's placed, for the most part scales with
the number of pixels on screen. So it gets more expensive
the higher resolution you want to render at. But there are some situations
where that property doesn't work out so well. And those actually show up
in some of this content. So yeah, if you go back
to the overdraw view mode. So there are a bunch of
debug modes for Nanite, but different visualizations. This is one of them. And this shows a
heat map of overdraw. So you'll see, in
some of these places where it's this darker
purple, is where there's not much overdraw happening. But if you actually
look at the content, there will be a lot of geometry
buried under that surface. And you're not getting
overdraw from it, because Nanite is
culling it well. But there are other cases
where things get quite hot, and that's places where Nanite
is drawing multiple things. So that's the number
of times that Nanite tried to draw to that pixel. That's what this
view is showing. So that's called overdraw. And when it happens a lot, it can be very expensive.
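A hedged sketch of what the overdraw visualization is counting; this is illustrative C++, not Engine code, and all names are assumptions.

```cpp
// Hedged sketch (illustrative, not Engine code): what the overdraw heat map is
// counting. Each time the rasterizer attempts to write a pixel for some cluster,
// the per-pixel counter goes up; the view mode maps that count to a color.
#include <cstdint>
#include <cstdio>
#include <vector>

struct OverdrawCounter {
    int width, height;
    std::vector<uint32_t> attempts;   // raster attempts per pixel this frame

    OverdrawCounter(int w, int h) : width(w), height(h), attempts(w * h, 0) {}

    // Call for every pixel a visible cluster's triangles cover, even when a
    // later depth test discards the write; the work was still done.
    void OnRasterAttempt(int x, int y) { ++attempts[y * width + x]; }

    // One attempt per pixel is the ideal; several stacked, nearly coplanar
    // surfaces just under the visible one push this number (and the cost) up.
    float Heat(int x, int y, float hotAt = 8.0f) const {
        return attempts[y * width + x] / hotAt;
    }
};

int main() {
    OverdrawCounter od(4, 4);
    for (int i = 0; i < 5; ++i) od.OnRasterAttempt(1, 1);  // a "hot", stacked pixel
    std::printf("heat at (1,1): %.2f\n", od.Heat(1, 1));
}
```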
The reason why that happens a lot-- there can be numerous different things that can cause that. It's rare in most use cases. But in the Valley of the Ancient demo, the one case that can
cause this to happen is actually quite prevalent
throughout the map. And that is surfaces that are
really close to one another. So they're buried, but
the thing that is buried is actually really close
to the top surface. So if you have that
just happening once, it can kind of be-- it's going to be more
expensive than if it wasn't happening at all,
but it won't be that bad. But if you get lots of layers
of that, and all of those layers are really close to the
surface, the overdraw can get quite a
bit more expensive. So in this demo, what we see
versus some other content that we've tested in the past,
is Nanite can be maybe up to 2x as expensive for content that
is heavily stacked like this. So this demo in general
ends up being about-- Nanite scales up to about twice
as expensive as we have seen in other content examples. Now, granted, that's
still fairly fast. So we're still able to
hit 30 hertz in this demo on Xbox and PlayStation 5. It's just, it's
something to know, because if you're trying
to hit a 60 Hertz game, or if you can just
optimize your content and make sure that
that's not happening, then you can make
stuff run faster. So that's sort of stacking
that I'm talking about. Are you in a free
camera right now, or are you on a sequencer track? GALEN: I can be. Yeah, I mean, we can-- BRIAN: If you
want to just fly in, like crash the camera into
the ground, so you can see-- GALEN: I wanted to
call out, too, specifically, that I wish we had some
of the pictures of where we got with the overdraw,
whenever we were first trying to do our optimizations, where
the whole screen was mostly yellow. BRIAN: Or when I
said, buried geometry, there is a ridiculous amount
of geometry below the surface. And in the places where
it gets really bad, you'll see a lot of
layering above it. So this is an area
where it's not so bad. That already seems
like it would have been an absolute horror for the things below, before Nanite. This amount of
overdraw, this amount of stacking that you're
seeing right here, actually isn't so bad. But there are other
places in this where it's probably five times
as much as what you just saw. And those are the places
that get really expensive. So unfortunately, that can be
difficult to avoid altogether when you're doing kit
bashing, especially if you want to cover an entire large
scale terrain all with just overlapping instanced meshes. Doing so is-- some amount of this is just going to be something you'd have to accept if you're trying to make content like this. But there are some
approaches that the team has learned after
doing this demo to try to optimize for
that in the future. But yeah, as was
previously talked about, this demo was a bit
of an experiment. It was, how would you
make a terrain out of instanced static meshes in
Nanite without using landscape? How would you do it
completely in engine, and just kind of push things
to its limits? How many instances can
Nanite actually support? Can you cover the
complete ground without any holes,
any cracks, just with instanced static meshes? There were a lot
of things that we were pushing the
boundaries on and seeing how it would go,
how it would work, what would we learn
from that process. What worked, what didn't work. And this is kind of the
result of those learnings in progress. GALEN: Brian,
could you maybe talk a little bit about
the grazing angles issue specifically here? Because I think that was
something that I definitely feel we picked up on a lot. Just quickly, to sort of
visualize this for people. So if you look at things that
are kind of facing the camera, these boulders up here
up top are actually really great examples
of, these are just placed assets that
are facing the camera, as opposed to these types of
areas where you're looking out. And could you maybe
sort of explain the technical implications of
that grazing angle, and what that means for making-- BRIAN: Sure. When it's doing its-- I'll get into-- some of this
will make more sense after I've kind of done a bit of an explanation of how Nanite works. GALEN: Sure. BRIAN: But when it's
doing its occlusion culling, it's taking the bounds of pieces
of that geometry, clusters of triangles, and
it's trying to test to see whether they're visible. And when you get in that
glancing angle case, with many flat things, which
the cases where this happens the worst in this map
are the sort of meshes that look like pancakes. And they have detail to them. They're like scanned
bits of ground geometry. And those bits of ground from
the macro scale are very flat. When you zoom in on them,
they're not so flat. They're very jagged. But if you see them from a
distance, they're fairly flat. And they're kind of stacked up like mini pancakes. And that sort of
stacking, each one is ever so slightly rotated and
shifted, such that you see one, and then that one goes
below another one, and then you see
that one, and then that goes below
something else, and then you see a different one. And all of that stuff is kind of
stitched up almost like a quilt to make the overall surface. But because they're
fairly flat, once one goes below the
surface of another, it ends up being really
close to that surface. But if you see it
from a glancing angle, that bit of geometry is now-- it's harder for Nanite to know
that that piece of geometry is below the surface of another,
both from a glancing angle and from a distance. So if you fly up way in the
air and look a really far distance-- yeah, like that. And then turn on
the overdraw mode, you'll see that it gets really
hot as you get really far out. So if you're standing on the
ground, it's less of an issue. But if you're flying
up in the drone-- especially if you keep on
going further, and further, and further away, it
can become an issue if there's no merging of these
meshes to other things, which was not set up in this project. So-- [INTERPOSING VOICES] BRIAN: --the
things to know. In this sort of
view, not so bad. It probably will run
fairly efficiently. But when you get really
far away from it, and you see it from an angle,
it starts getting worse. So this sort of property
isn't universal. I guess it's important
to note, this isn't a universal property of Nanite, where all Nanite content will have this sort of bad overdraw problem. It's just when stuff is
very closely stacked, when they're
overlapped, and there's a stack of-- in some places
there's over 10 surfaces layered up in the matter
of like a few centimeters. That's how heavy
some of it gets. And it's kind of
worst case scenario. It's cases like those
where, once you see them from a distance at a glancing angle, things can get quite expensive. But it all depends
on how it's set up. GALEN: Yeah,
and this is where we did a fair amount
of course correcting. We built out our
set of-- mega assemblies, sorry-- that we wanted to
use for this project. And with that, we were able to
sort of get that Nanite debug view. I don't remember exactly
when that came online for us in the early access branch. But once we got it,
it was like, oh, this is a really, really
great way for us to just sort of
dissect these pieces, and figure out exactly
how we should maybe go back to the well and figure
out how to actually build these in a more efficient way. So using a lot of in
engine modeling tools, just jacketing certain
assets to cull out areas that are just completely
hidden and that type of stuff. And this is going
to be something, just as far as
product is concerned, that we're going to continue to
keep an eye on and figure out the best way to actually
build these things. But yeah, we want to make
sure that we laid out some of the caveats as far
as how we were actually building the stuff. Because we're still developing
a lot of technology on the fly, if you can't tell. So-- BRIAN: Maybe it
would be good at this point to talk about how it works,
because some of these things might not make sense until you
understand a bit more of what is Nanite actually doing. GALEN: Yeah. Jump over to slides? BRIAN: Yeah,
switch over to my screen. VICTOR: Yes,
Brian, we might need you to just stop and
restart your stream to us. That seemed to have
gone down earlier during our little internet out. BRIAN: All right. Started again. Are you picking me up? CHANCE: Checking in. And we're good. BRIAN: All right. Cool. So before I jump
directly into the slides, because this is
referencing our last demo, I just want to bring that up to give a bit of context, so it doesn't seem like I'm
referring to something else. I think there's been a
little bit of some confusion in the community that
this doesn't work anymore, or requires specifically like
it only runs on a PlayStation 5, or I think there's been just
a bunch of misconceptions. That's not true. It works perfectly fine
in the latest build. It runs great. This is on my PC. Obviously I'm in
editor right now. So I made this slide
deck-- some of these slides were off of this demo, the
Lumen in the Land of Nanite. And I'll be showing a bit
more of this going forward as another example
of content, just like the Valley of
the Ancients demo, this is also a good example
but in a different form. So jump over to the slides. Talked about this a bit before,
but just kind of spell it out. The sort of dream
that Nanite was trying to go for, the
goals that we had was to virtualize geometry in a
similar way to how virtual textures work, to try
to reduce caring about the multiple
different budgets that are associated with geometry. So that artists for
the most part, just don't have to care about these. So that means polycount
budgets, draw call budgets, and memory associated
with meshes. So with these
budgets being gone, you could directly use
film quality source art, bring it in from wherever that
happens to be created, and just drop it directly in the
engine, and be able to use it, and have this just work. Like I was talking
about before: all those technical steps that an artist might have to do to get something manually optimized for real-time or game purposes-- to not have to do
that stuff anymore. To have the engine do whatever
sort of automatic process is necessary, and to make it so
you could drop it in, and have this show up. And all of that, not just as the, oh, we can make some sort of auto LOD and it figures out all the right settings for you. But in addition to being automatic and seamless, to also not reduce the quality to
be able to achieve this. So that's the dream. The reality is that the
problem is so much harder to achieve than a virtual
texturing system is. Because geometry--
the problem isn't just a memory management problem. Geometric detail directly
impacts rendering costs. The number of
triangles that you draw is going to scale
up the cost that it takes to render that, which is
very different than textures which are just memory
that is accessed. So if you have more
memory, it's not like every texel
of a texture has to have some amount
of computation to put it on screen
like geometry has. And geometry isn't trivially
filterable like textures are. It's very simple to generate
a mip map of a texture. It is not very simple to
generate a, quote unquote, mip map of geometry. CHANCE: Is
there a name for that yet, for what a mip map
for a mesh would be? BRIAN: I mean, there's
LODs, which are probably the closest form of this. But even generating
those, it's very easy to mip map a
texture, because texture data is filterable. You can just take four
texels, you take their colors, you sum-- you just calculate
what their average is, and that's your mip map. You can't just
average up triangles, and then, here is
the average triangle. And when you display that
on screen, it's not the-- the anti-aliased version of a texture at a distance is what a mip map is encoding. That's what I mean by filterable.
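A minimal sketch of the box filter being described, to show what "filterable" means for texture data; this is illustrative code, not any particular engine's mip generation.

```cpp
// Minimal sketch of the box filter Brian describes: each mip texel is the
// average of the four texels beneath it. This is what makes texture data
// "filterable" in a way triangles are not.
#include <cstdio>
#include <vector>

struct Texel { float r, g, b; };

// Build one mip level from a (width x height) image; width and height are even here.
std::vector<Texel> DownsampleMip(const std::vector<Texel>& src, int width, int height) {
    std::vector<Texel> mip((width / 2) * (height / 2));
    for (int y = 0; y < height / 2; ++y)
        for (int x = 0; x < width / 2; ++x) {
            const Texel& a = src[(2 * y)     * width + 2 * x];
            const Texel& b = src[(2 * y)     * width + 2 * x + 1];
            const Texel& c = src[(2 * y + 1) * width + 2 * x];
            const Texel& d = src[(2 * y + 1) * width + 2 * x + 1];
            mip[y * (width / 2) + x] = { (a.r + b.r + c.r + d.r) * 0.25f,
                                         (a.g + b.g + c.g + d.g) * 0.25f,
                                         (a.b + b.b + c.b + d.b) * 0.25f };
        }
    return mip;
}

int main() {
    std::vector<Texel> image(4 * 4, Texel{1.0f, 0.5f, 0.25f});
    std::vector<Texel> mip = DownsampleMip(image, 4, 4);   // 2x2 result
    std::printf("mip[0] = %.2f %.2f %.2f\n", mip[0].r, mip[0].g, mip[0].b);
}
```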
And that's not true of geometry-- the computed LOD
for geometry is not going to be a filtered
version of that geometry. If you draw it at
a distance, it's not the same thing as
the anti-aliased version. CHANCE: Yeah. BRIAN: So anyways,
there are a number of different approaches that
could be taken to solve this. There are a lot of them that
have been suggested in academia throughout the years,
and I've explored numerous of these in my research trying to solve this problem. But it's important to
note some requirements that we had for this,
that we were not interested in completely
changing the whole CG authoring workflow. So we want to be able to
support importing of meshes that are authored from anywhere. It's not like we could only
support meshes authored through our own provided tools. We want to make
it so that you can import these meshes
from wherever you happen to have authored them. That they'll still have
UVs and tiling detail maps. They'll still have
shaders that were created in the material
node graph editor, just like you have been
creating them for years. We only wanted to replace the
meshes, and just kind of slot in a different thing, and
not have to replace textures, materials, and all the tools. There's just a giant
ecosystem out there for creating art assets. And we didn't want to have to
replace every part of that, just because we wanted
to change this one portion of the problem. Which rules out a number
of these different other possibilities. So it's been a very
long time exploring these different options. But for our
requirements, we haven't found any higher quality or
faster solution than triangles. So this is kind of the
foundation of computer graphics for a good reason. So there are other good uses for
these different data structures-- I don't want to knock them. There are very good reasons to choose them for different purposes. But for ours, this
was the best choice. So if we're going to do a
triangle based pipeline, what would it take
to just bring UE4 up to the kind of state of the art? So I'll just really
quickly review this. Nanite is a completely
GPU driven pipeline. So the renderer is
now in what would be called a retained mode. So there is a complete
version of the scene existing in GPU memory. It's sparsely updated
when things change. So every frame, it
is not uploaded. It's only the changes that
get uploaded to the GPU. That includes all
vertex and index data, which is all stored in
a single resource. And then, per view
on the GPU, it would do GPU instance culling
and triangle rasterization. And with that done, everything that I've got on this slide, we could draw just a depth-only pass of the entire scene in a single indirect draw call. You don't need to do draw calls for every individual thing. If we do all of this, you can do it in a single draw call.
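A hedged, conceptual sketch of that GPU-driven idea: a culling pass fills one argument buffer and the CPU issues a single indirect draw that consumes it. The structures and names here are illustrative assumptions, not the Engine's.

```cpp
// Hedged sketch (illustrative, not the Engine's structures): the idea behind a
// GPU-driven depth pass. A culling pass appends surviving batches of geometry
// into one argument buffer, and the CPU issues a single indirect draw over it.
#include <cstdint>
#include <vector>

// Generic indirect-draw arguments; real APIs (D3D12, Vulkan) have equivalents.
struct DrawArgs {
    uint32_t indexCount;
    uint32_t instanceCount;
    uint32_t firstIndex;
    int32_t  baseVertex;
    uint32_t firstInstance;
};

struct IndirectDrawList {
    std::vector<DrawArgs> args;   // lives in GPU memory in the real pipeline

    // The GPU culling pass appends one entry per surviving batch of geometry.
    void Append(uint32_t indexCount, uint32_t firstIndex) {
        args.push_back({indexCount, 1u, firstIndex, 0, 0u});
    }
};

// Conceptually, the CPU's only per-view job is "execute whatever the GPU wrote",
// e.g. one ExecuteIndirect / vkCmdDrawIndexedIndirect-style submission.
void SubmitDepthOnlyPass(const IndirectDrawList& list) {
    (void)list;   // submission itself is API-specific and omitted here
}

int main() {
    IndirectDrawList list;
    list.Append(/*indexCount=*/128 * 3, /*firstIndex=*/0);  // one 128-triangle cluster
    SubmitDepthOnlyPass(list);
}
```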
So we have all the benefits of GPU-driven now, but are still doing a fair amount of work for triangles that aren't visible. So we can add on top of that
triangle cluster culling to trim out that
unnecessary work. So I was talking
about this a bit before for occlusion
culling stuff that is buried and can't be seen. This is the reason
why that happens. So to do that, you can group
up triangles into clusters. In our case, it's 128
triangles per cluster. For each one of
those clusters, you build bounding data, so a
bounding box for each cluster. And then we can cull those
clusters based on those bounds. We can cull them
against the frustum, and we can do occlusion
culling of them. So if it's hidden behind something-- if something is closer than it, and that cluster is behind something else-- we can determine that, and then not draw those triangle clusters.
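A hedged sketch of the per-cluster culling just described, assuming illustrative names and placeholder tests rather than actual Engine source.

```cpp
// Hedged sketch (illustrative only): frustum and occlusion culling of fixed-size
// triangle clusters. Clusters of ~128 triangles each carry a bounding box; boxes
// that fail the frustum test or are occluded are skipped entirely.
#include <vector>

struct AABB    { float min[3], max[3]; };
struct Cluster { AABB bounds; unsigned firstTriangle; unsigned triangleCount; };

// Placeholder tests so the sketch compiles. A real renderer tests the box
// against the view frustum planes and against a hierarchical depth buffer (HZB)
// built from already-rendered geometry.
static bool InsideFrustum(const AABB&) { return true;  }
static bool OccludedByHZB(const AABB&) { return false; }

std::vector<const Cluster*> CullClusters(const std::vector<Cluster>& clusters) {
    std::vector<const Cluster*> visible;
    for (const Cluster& c : clusters) {
        if (!InsideFrustum(c.bounds)) continue;   // outside the view: skip
        if (OccludedByHZB(c.bounds))  continue;   // buried behind closer geometry: skip
        visible.push_back(&c);                    // survives: rasterize its triangles
    }
    return visible;
}

int main() {
    std::vector<Cluster> clusters(4, Cluster{{{0, 0, 0}, {1, 1, 1}}, 0, 128});
    return (int)CullClusters(clusters).size();    // 4 in this trivial setup
}
```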
But if we want to move past a depth-only rendering and support materials too, there are numerous different solutions
an option where we decouple the visibility
from the materials. And by that, I mean
determining visibility on a per-pixel basis, which is what depth-buffered rasterization does, and disconnecting it from the material evaluation. The reasons why we
would want to do this is that switching shaders during rasterization can be expensive. We would want to eliminate that. We want to eliminate
any sort of overdraw from a material
evaluation or a depth pre-pass that would be necessary
to avoid that overdraw. And pixel quad inefficiencies
from extremely dense meshes. And that's definitely
a target for this tech. We want to get rid of
all of those things. So there are some
different options. The one that was most attractive
for us is deferred materials through a technique called
a visibility buffer. So what this is
doing, basically, is there are
material passes that are deferred-- separated
from the rasterization of the geometry. And we do material passes, one draw per material that is present in the scene. Not per object anymore. The objects are all drawn at once. It's now just, for each material that is present in the scene, there will be a single draw call for it. And that material pass writes out to the GBuffer. It wouldn't need to.
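A hedged, conceptual sketch of a visibility buffer and deferred material passes; the types and the pass skeleton are illustrative assumptions, not the Engine's implementation.

```cpp
// Hedged sketch (conceptual only): a visibility buffer decouples "which triangle
// is visible at this pixel" from material evaluation. Rasterization writes IDs;
// afterwards there is one full-screen pass per material. Names are illustrative.
#include <cstdint>
#include <vector>

struct VisibilitySample {
    float    depth;
    uint32_t instanceId;   // which object instance won the depth test here
    uint32_t triangleId;   // which triangle of that instance
};

struct VisibilityBuffer {
    int width, height;
    std::vector<VisibilitySample> pixels;
    VisibilityBuffer(int w, int h) : width(w), height(h), pixels(w * h) {}
};

// One deferred pass per material: for each pixel owned by that material, fetch
// the triangle via the stored IDs, interpolate its attributes, run the material,
// and write the result to the GBuffer. Only the skeleton is shown here.
void MaterialPass(const VisibilityBuffer& vb, uint32_t materialId,
                  bool (*pixelUsesMaterial)(const VisibilitySample&, uint32_t)) {
    for (const VisibilitySample& s : vb.pixels) {
        if (!pixelUsesMaterial(s, materialId)) continue;
        // ...fetch triangle s.triangleId of instance s.instanceId,
        //    evaluate the material, write the GBuffer...
    }
}

int main() {
    VisibilityBuffer vb(1920, 1080);
    MaterialPass(vb, /*materialId=*/0,
                 [](const VisibilitySample&, uint32_t) { return false; });
}
```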
We'll probably end up supporting the forward renderer in the future. But for now, that was done
so that we could mate it up with the rest of the
deferred shading renderer without having to change
everything going on there. And there's still some very good
reasons why UE5 is deferred, which I won't get into here. But there are some good
reasons why we still want to keep around the GBuffer
and some of the advantages that that has. So with deferred
materials, we can now draw all opaque geometry
with a single draw. It's completely GPU driven. All this work is all on the GPU
without much CPU involvement. And it's no longer
just a depth pre-pass, but it can do the materials too. So it's just full
on opaque geometry. And we're also rasterizing the
triangles only once per view. There's not a depth pass
and then a base pass. There's just the
single geometry pass. Which is great,
because that means it's going to be
expensive enough to draw this amount of geometry. We certainly don't want
to do it more than once. So with that, it's much
faster than before, but it still scales linearly
in both instance count and triangle count. Linear scaling of
instances can be OK, at least within the limit
of scale of levels-- the number of instances
that you'd probably want loaded at a time. We can handle a million
instances easily, but linear scaling in
triangles is not OK. We can't achieve the
goals of just works no matter how much we throw
at it if we scale linearly in the number of triangles. So if we used ray
tracing approach, that scales with log
in of the triangles, which is nice, but not enough. We couldn't fit all of
the data of this demo in memory, even if we could
render it fast enough. We still have to remember,
virtualized geometry is partly about memory. We're trying to
virtualize the memory. Ray tracing isn't fast
enough for our target on all the hardware
that we want to support, even if it could fit in memory. So we really need something that
is better than log-n scaling. To think about this another
way, there are only so many pixels on screen. Why should we draw more
triangles than pixels? Ideally, we would just
draw a single triangle per pixel at most. But think about this
in terms of clusters-- because that's what we had
going with the cluster culling-- we want to draw the same
number of clusters every frame, regardless of how many
objects are on screen or how dense they are. It's impractical
to be perfect here. But in general, the cost
of rendering geometry should scale with screen resolution, not scene complexity. This means constant time
in terms of scene complexity, and constant time
really means level of detail. So we can do level of
detail with clusters too if we build a
hierarchy of them. And the most basic
form-- imagine a tree of clusters, where the
parents are simplified versions of their children. At runtime, we can find
a cut of this tree that matches the desired
LOD, and that means, different parts of
the same mesh can be at different levels of
detail based on what's needed. This is done in a view-dependent way, based on the screen-space projected error of the cluster. A parent will be drawn instead of its children if we determine that you can't tell the difference from this point of view.
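[As a rough illustration of selecting a cut of the cluster tree by screen-space error; this is not Nanite's actual code--the real structure is more involved and the selection runs in parallel on the GPU--and Cluster, ProjectedError, and the one-pixel threshold are assumptions.]

```cpp
// Illustrative sketch only: pick a cut of a cluster tree by projected error.
#include <vector>

struct Cluster {
    float objectSpaceError = 0.0f;     // simplification error baked at build time
    std::vector<Cluster> children;     // empty for leaf clusters
};

// Projected size of the cluster's error in pixels for the current view
// (distance to the camera, FOV, and output resolution all factor in).
float ProjectedError(const Cluster& c) { return c.objectSpaceError; /* stand-in */ }

// Draw a parent when its error is imperceptible (under a pixel); otherwise
// descend, so different parts of the same mesh refine independently.
void SelectCut(const Cluster& c, std::vector<const Cluster*>& toDraw)
{
    if (ProjectedError(c) < 1.0f || c.children.empty()) {
        toDraw.push_back(&c);          // this cluster stands in for its subtree
        return;
    }
    for (const Cluster& child : c.children)
        SelectCut(child, toDraw);      // refine only where this view needs it
}
```

This gives us all that we need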
to achieve the virtualized part of virtual geometry. We don't need an entire tree
in memory at once to render it. At any point, we can
mark a cut of the tree as leaves, and then, not store
anything past it in memory. So just like virtual texturing,
we request data on demand, based on what it's trying to
render from frame to frame. If we don't have the children
resident, and we want them, they are requested
from the disk. If we have the
children resident, but haven't drawn
them in a while, we can evict them and put something more important in their place.
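[A minimal sketch of that request-and-evict behavior, assuming a simple least-recently-used policy; PageId, ClusterPage, and LoadPageFromDisk are invented names, and the real streaming system is asynchronous and more sophisticated.]

```cpp
// Hedged sketch: on-demand residency with LRU eviction, like virtual texturing.
#include <cstdint>
#include <list>
#include <unordered_map>

using PageId = std::uint64_t;
struct ClusterPage { /* compressed cluster data */ };

ClusterPage LoadPageFromDisk(PageId) { return {}; }   // placeholder I/O

class StreamingCache {
public:
    explicit StreamingCache(std::size_t maxPages) : budget(maxPages) {}

    // Called when rendering wants a cluster's children this frame.
    const ClusterPage& Request(PageId id)
    {
        if (auto it = resident.find(id); it != resident.end()) {
            lru.remove(id);                 // already resident: mark as recently used
            lru.push_front(id);
            return it->second;
        }
        if (resident.size() >= budget) {    // over budget: evict the page we
            resident.erase(lru.back());     // haven't drawn in the longest time
            lru.pop_back();
        }
        lru.push_front(id);
        return resident[id] = LoadPageFromDisk(id);    // request from disk
    }

private:
    std::size_t budget;
    std::list<PageId> lru;                              // front = most recent
    std::unordered_map<PageId, ClusterPage> resident;
};
```

So now that we have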
fine-grained view-dependent LOD, and we're mostly scaling
with screen resolution, how many triangles do we
actually need to draw? Remember, we're trying to hit
zero perceptual loss in detail. So how small do
the triangles need to be, such that the error
is less than a pixel big, and is effectively
imperceptible? We can do that-- or, can we do that
with triangles that are larger than pixels? It turns out, in a
lot of cases, yes. Triangles are adaptive and
can go where they're needed. If something is
flat, you certainly don't need to have pixel sized
triangles to make it look no different than the original. But in general, no. Pixel sized features need pixel
sized triangles to represent them without visible error. It's content dependent. So is it practical to
draw pixel size triangles, or in the worst case,
an entire screen worth of pixel size triangles? Turns out, tiny
triangles are terrible for typical rasterizers, hardware rasterizers included. They are designed to be highly parallel in the number of pixels, not in the number of triangles, since that's what their typical workload is. So could we possibly beat the hardware with a software rasterizer? Yes, we can do a lot better. Three times faster on
average than the hardware compared to our fastest
primitive shader implementation that we've measured. Even more than that for
pure micro poly cases, and quite a bit more than
that if we compared it to the old vertex shader/pixel shader path instead of the primitive shaders. The vast majority of the triangles in this demo were software rasterized. So how about the rest? Well, for big triangles, we can use the hardware rasterizer--for those, or for other cases
that we aren't faster at. It's still good
for big triangles. That is what it's designed for. So we might as well use the
hardware for exactly what it's designed for. We're not going to
be able to beat it. So we choose the software or hardware rasterizer on a per-cluster basis, based on which one we determine will be faster.
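[A hedged sketch of what such a per-cluster choice could look like; the heuristic and cutoff are invented for illustration and are not the engine's actual test.]

```cpp
// Hedged sketch: pick a raster path per cluster from its projected triangle size.
enum class RasterPath { Software, Hardware };

struct ClusterDrawInfo {
    float avgProjectedTriangleArea = 0.0f;  // estimated pixels covered per triangle
};

RasterPath ChooseRasterizer(const ClusterDrawInfo& c)
{
    // Tiny triangles swamp the hardware's per-pixel parallelism, so a compute
    // shader rasterizer wins there; big triangles are what the hardware is for.
    constexpr float kSmallTrianglePixels = 4.0f;        // illustrative cutoff only
    return (c.avgProjectedTriangleArea < kSmallTrianglePixels)
               ? RasterPath::Software
               : RasterPath::Hardware;
}
```

So all that together, what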
does Nanite perform at? That's kind of the complete
pipeline in a very bird's eye, high level view. So with all of that,
what sort of performance do we get out of this? So in the Lumen in the
Land of Nanite demo, there was dynamic
resolution used. This was an earlier version. This was before
much of the work had been done on the temporal super
resolution that's now in UE5. But regardless, we still used
a form of temporal upsampling that was kind of from the UE4 era of the temporal upsampler. And on an average frame throughout the demo, it hovers at about 1400p before the upsample. The time it takes to cull and rasterize all of the geometry is about 2.5 milliseconds, and it costs us nearly zero CPU time. All that work is on the GPU. And then, the base pass, which
is the deferred materials, applying all those deferred
materials to the geometry that had been rendered out to what's
called a visibility buffer, takes approximately 2
milliseconds on average throughout the demo. And that has a small CPU
cost, because there's one draw call per material. But that really just scales with the number of materials in your scene--that's the number of draw calls there. It doesn't scale with the number of triangles or the number of actual object instances placed. So putting this all together
at 4 and 1/2 milliseconds is actually totally within
budget for a 60 hertz game. The comparable cost before would be the depth pre-pass plus the base pass from UE4. So it's also worth talking a
bit about the data on disk, because if you go
completely nuts here, you could easily blow through
many gigabytes of data and have very large game downloads,
if this wasn't-- [INTERPOSING VOICES] GALEN: We wouldn't have
any idea about how that works. [LAUGHTER] BRIAN: So we have some
pretty good compression. So yeah, the big triangle
counts would be really big if we didn't compress them. And Nanite, using our own proprietary compression format, is stored significantly more compactly than standard static meshes were without Nanite enabled. So in this demo, all
of the Nanite data on disk in its compressed form
comes out to 16.14 gigabytes. So it's a decent chunk of size,
but it's not ridiculously huge. I think there are some misconceptions that there were hundreds of gigs of Nanite data. And that's just not the case. In this demo, the texture
data was far larger than any of the geometry data. I think that was probably true
for the Valley of the Ancient demo as well. I'm not sure of the
exact numbers there, but I think the numbers
for the Nanite data were fairly similar in size. Maybe a bit smaller than this. GALEN: Yeah, I
think we were smaller as far as the geo goes,
but we were a bit larger in the textures, just because
we had so many different assets. BRIAN: Yeah. So anyways, these are rough numbers. Your own will vary depending on what sort of resolution you go for, how much variety you have, and how all that stuff is optimized. But at least
currently, on average, it's about 14 bytes
per input triangle. The actual Nanite data
stores more triangles, because we store this whole
hierarchy of clusters. So there will be more
triangles stored in it. But this is on average for
the triangles that you import. So with that, if you import a one million triangle mesh, it will be approximately 14 megabytes on disk on average (14 bytes times one million triangles comes to roughly 14 MB). It'll vary--your mileage will vary depending on what the mesh has. There are various attributes that affect how well it will compress. But on average, you should
see something like this. And it's worth noting
that that size is actually smaller than a 4K normal map-- considerably smaller. A 4K normal map will probably
be more like 20 megs on disk. So it's not a crazy
amount of data to get a fairly
high fidelity mesh. So it's also worth noting
that Nanite enables some new techniques
that just weren't really all that practical before. So a key one is
virtual shadow maps, which we haven't talked a ton
about so far in the video. And some of that
is known out there, and people are starting
to go play with these. But it's some really cool tech. It's worth talking
a little bit about. So all shadow maps now
with virtual shadow maps are 16K by 16K. So way, way higher resolution
than you've seen before. And the way that
this can be practical is that, just like
Nanite does, it picks the detail level that matches what's up on screen. So it picks to render the shadows such that one texel matches up to
roughly one pixel on screen. And then, using the
virtual shadow tech combined with Nanite,
we only render it to the shadow map pixels
that are visible on screen. Nanite culls and lobs it down
to that detail level required. And we support caching as well. So we can avoid drawing
anywhere in the shadow map that we've already covered
in a previous frame. So that means,
for the most part, the only regions of the shadow maps that get updated each frame are the ones that objects are moving in.
view rendering. So that it can render all shadow
maps for every light and scene in the scene to all of their
virtualized mip maps at once, only drawing
into them where is needed. So you want sharp shadows,
you've got them now. Virtual shadow maps can
do that in a way that wasn't really possible
before without ray tracing. So we've got the
resolution to go sharp, but no one wants razor
sharp shadows everywhere. So we simulate physically based penumbras by ray marching through these virtualized shadow maps.
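[The general effect resembles the classic percentage-closer soft shadows estimate sketched below; this is a simplified stand-in for intuition, not the engine's ray-marched implementation, and the parameters are assumptions.]

```cpp
// Simplified PCSS-style estimate: penumbra width grows with the gap between
// the shadow caster and the receiver, scaled by the light's size.
float EstimatePenumbraTexels(float receiverDepth, float avgBlockerDepth,
                             float lightSizeTexels)
{
    if (avgBlockerDepth >= receiverDepth)
        return 0.0f;   // nothing between the light and the receiver: hard edge
    // Similar triangles: a larger light or a bigger caster-receiver gap
    // produces a wider, softer penumbra.
    return lightSizeTexels * (receiverDepth - avgBlockerDepth) / avgBlockerDepth;
}
```

It's also cool to note that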
it no longer requires manual tuning of depth biases. That was another goal of this effort. So shadow acne and Peter Panning artifacts are, for the most part, not really present. There will still be some cases where there'll be some issues. But nothing like before. For the most part,
it kind of works. So Nanite is real. It works today, as you guys are
playing around with and seeing for yourselves. And this demo, and
then the following one, the Valley of the Ancient demo, we've tried to push it to its limits. And it's kind of kept up past
expectations for the most part. But it isn't done yet. There's still a lot
more that we wish to do. It's very workable. Obviously you guys
have seen that you can make cool stuff with it. But there are things that we're still planning on working on, still planning on improving. The compression is
absolutely one of those. So the numbers that I mentioned
a few slides back as far as those sizes on disk, we
expect that we can shrink those pretty considerably, because
we're only really scratching the surface for what sort
of compression methods that we can use. But there's also
some other things that are noted in
the documentation that it's worth highlighting. We've focused on
rigid geometry first, because that's over
90% of the geometry that you're going to
have in a typical scene. So this was our
highest priority. But that doesn't mean that the
whole scene can't move at all. You can move individual
objects around. You can scale them. You can rotate them. You can translate them. It just doesn't support
non-rigid deformation. So skeletal animation and other
sorts of deformers like that. World position offset,
unfortunately, is one of those. We also don't support
translucent or masked materials, and don't yet support
tessellation and displacement. It's not great for
aggregate type of geometry. That doesn't mean that-- by aggregates, I mean many
tiny things coming together to become a porous volume, like
leaves in the canopy of a tree, for example. It won't do as well of a
job, but that doesn't mean that it can't still be fast. Just don't expect it to
be the sort of magical-- the cost of it only scales
with the resolution on screen. That sort of property
works in many cases, but will likely not
work in this case. But it could still
be pretty fast. So anyways, that's
kind of the slides I had prepared to explain how it
works, showing it in motion. You guys have seen
this demo before. I won't go through
all of it again. This is it running in
editor, by the way. That's obvious. I'll full screen it here. So what I was
talking about before with that sort of
detailed shadows, they get very, very
sharp and detailed, which is great to
be able to show off that sort of intricate geometry. Because without
that, it's harder to tell the difference between
geometry and normal maps when there isn't self
shadowing involved. But when we have
that real geometry, and we can cast
shadows from it, you can really see the definition
in its geometric form when you can see all those little details casting shadows themselves. So anyways, let's-- CHANCE: Awesome. BRIAN: There were
some misconceptions that we put this crack in
here to hide streaming. I have the whole level
loaded in memory right now. It was actually so
that we could just show the camera getting like
really close to this rock. Just an excuse to put
the animation there. It wasn't really to hide any
sort of streaming things. GALEN: On
that note, too, this is before we had implemented
any type of world partition or anything like that as well. BRIAN: Yeah. This is all using the previous sublevel streaming. This isn't using
world partition, because this came before
that tech was ready. Anyways. Lined up here, you can see a bit of the difference between the sharp versus soft shadows. Go into the view modes
here to look at-- so let me turn off
the anti-alias thing so it stops shaking. So this is a view of the
individual triangles. That video compression
is probably going to suffer a bit here,
just because of how noisy it is. GALEN: Yeah, if you
turn off anti-aliasing, some of that jitter goes away. It doesn't help a ton,
but it might help. BRIAN: Yeah, I just did. So if I get in really
close here, you can see, here are the size of
the actual triangles. So we're not drawing points,
we're not drawing voxels, we are drawing the
actual triangles. And when you get close
to it, these triangles are the ones that were imported. It's not like Nanite re-samples
your data into some other form. It's just, once you start
getting further away, you might be able to
tell that it's changing what triangles it's drawing. But what's easier to
see, is if we switch over to the cluster view. So how I was
describing, that there is this hierarchy of clusters,
these are those clusters. And as we get closer, it
swaps those clusters out for different ones. And what's important to realize--and the reason why this works at all--is that, I mean, the math it's doing is a bit more complex than this, but from a basic understanding, we're trying to keep the
same number of triangles. There are 128
triangles per cluster. And if the clusters end up staying roughly the same size on screen at all these different view distances, then that means that the number of triangles we're drawing is about the same at all these different distances. And I can show
that exactly with-- VICTOR: Brian, just
real quick, checking in here, did you have a hard out today,
or can we continue for a bit? BRIAN: I don't. VICTOR: OK, great. Because the amount of
questions we've received and the amount of questions we
have from the previous stream are starting to add up. So I just wanted to
make sure that we get a chance to cover some of them. BRIAN: All right. [INTERPOSING VOICES] CHANCE: This is all great. We don't want to stop you. BRIAN: OK, because-- [INTERPOSING VOICES] VICTOR: No, no, no. Just making sure you don't
have to ditch real soon. And Galen, I assume
you're good as well? GALEN: Yeah, I'm good. VICTOR: Perfect, awesome. Please continue. BRIAN: So I'll turn
that visualization off so you could actually see the
text on the side of the screen. You can bring this up yourself
by typing Nanite stats into the console. This shows what Nanite
is actually drawing. There are two passes, because
it uses a form of two pass occlusion culling. But the key thing to look
at here is the number of triangles-- down
here at the bottom-- that it draws after
these two passes. So the number of clusters,
which should be obvious, and the number of
total triangles. So even though the number
of triangles in the source geometry for this might
actually be over a billion of what it would be for all
the meshes that are visible, the number the Nanite
actually draws-- rastorizes to the screen,
is much lower than that. It's still fairly high. It's in the tens of millions. In this case, it's 12
million for this view. But as you change
for different views, we can see this in
numerous different scenes. So I go back to the last
one, see here, it's about 16. For all these
different views, it ends up being fairly similar. No matter what
geometry is in view, it doesn't really get much
more than 20 million triangles. And that's kind of demonstrating
this concept of its scaling with screen resolution. So no matter what sort of
content is thrown at it, it's similar in its cost. And that cost is mostly in the
way of a number of triangles. So before I move on, I'm
going to take a look at-- I've just made a test here
for to show off the shadows. So I've got a point
light there with-- CHANCE: That
bicycle's in the demo? BRIAN: No. I mean, this hallway was. This was the area that-- the beetles, they were walking around here. It has been running for hours in the background. So the beetles have walked away, I guess. Anyways, so I just
put this bicycle in to get a really thin object so
you could see the shadows well. So these are physically based
shadows with proper penumbras. So they're really sharp
right at the caster, and then as they get further
away, they get blurrier. Which is exactly what you
want for area light shadows. And when seen in
an actual setup, it's less obvious
to see that effect, but it's happening everywhere. That sort of sharpness there
has-- near his fingers, and it gets quite blurry
as it goes further out, which really gives a lot of-- a really nice look to
the shadow casting. Come out to this next scene, and
we've talked about it before, this is the statue, and every one of the statues. It's all the same statue--it was 33 million triangles, I think, in the original source
I think there's close to 500 of them,
making it such that there is very clearly billions
of triangles of source geometry on the screen. But you can see, again,
over in the corner, that we're drawing about 22
million triangles in the end. So if I show-- the
triangle view will be really hard to see with
the video compression. But if you look at
the clusters, again it has that same sort
of attribute, where, as we get closer to the clusters, they stay similarly sized on screen at all these different views. And it just draws more or fewer
triangles as you get closer. So still, even though the
scene is far more complex, it's still running it at similar
performance to the hallway that we were in before,
even though there's these hundreds of these
many million poly statues in the scene. As well as, I mean, it's
important to note, all of it is kind of insanely complex. The art in this is
just incredible. So I put a couple of things in
this scene that weren't there previously, just to show
something and kind of dispel another myth that I
think is out there. Where did I put them? Here we go. So I've got that bicycle again. I wanted to highlight--
although we've shown off a lot that Nanite is great
for rendering these really high poly Megascans for
photogrammetry sort of content, for very organic stuff,
whether it be scanned or ZBrush sculpted,
those sorts of things. Nanite also supports hard-surface modeled meshes as well. This was actually
a key reason why we didn't pursue some of the
other possible approaches that wouldn't have been a
great fit for hard surface. So if we look here,
there's still-- let me actually reduce
the FOV, so we can see-- we can look at some
of the stuff closer. So we look at these
gears, they're actual geometric gearing. And if we look at the chain, there are individual links in the chain. And they're all modeled out. So if we look at the polys in
this case, it's not uniform. There are big triangles that
end up going across these slowly changing surfaces. And then there are cases that
have really tiny triangles when stuff gets really dense. Another example of
this is this bust from Chain,
a character in Paragon. So this is the
high poly mesh that was used to bake the normal maps
for this character in Paragon. This bust is 3.5
million triangles. It was sub-d modeled for all
of the sort of mechanical bits. And for the organic parts--
his skin was ZBrush sculpted. And Nanite does a really
good job of that as well. And this is also a great
case to show off just how sharp these shadows can get. CHANCE: Yeah,
looks amazing. BRIAN: And
then, one last fun bit before we move
on to some questions. Fly out here to show this vista. I don't want anybody
to get the impression that I skipped something
on purpose, because it doesn't work. This vista still works. And it's also worth noting, all
of the stuff here in this vista is detailed out to the same sort
of insane geometric detail-- CHANCE: Wow. BRIAN: --as all of it. Every bit of this. It's all built in the same
way as everything else. So last little bit here to end. But I didn't even realize
until preparing this last night for the stream. If you come all the way
down here to the end, where Echo walks through
this portal at the end, the ring on the portal
is not a mesh itself. It's actually made
up of these instances that were placed on the
wall of the previous area. All of these around. So if you go to
the instance view, they're all individual instances
duped around in a ring. VICTOR: Keep
that as your stream. BRIAN: Yeah. So it's worth mentioning
that that's another-- we talk a lot about
the triangle count, of how Nanite can enable
these really high poly meshes. But that both of
these demos really demonstrate the power of
the huge amounts of objects in the scene, and
what you can get out of that, and the power
of being able to kit bash those huge number of meshes,
and what that enables. So-- CHANCE: Yeah. And then, speaking of that
same super dense high poly-- or high resolution meshes,
everything, and building out an entire level
like this, and then using some of the
open world features that we have in Valley
of the Ancient, I mean, you could kind of
just keep this going and keep this growing as it
streams things in and out. So you're trying to stay
inside memory and whatnot, without having to sacrifice
the amount of instances you have in your game or
your experience, right? BRIAN: Yeah. CHANCE: It's fantastic. VICTOR: Awesome. Before we move
over to questions-- and we can probably
be there for a while-- Galen, I wanted to make
sure that there was nothing that we left off here that
you were planning to cover. GALEN: No. I mean, I guess, we had one
note here just about texturing these types of assets. Do we want to touch
on that really quick? VICTOR: Yeah,
let's dive into that. GALEN: Yeah. I think this has been
a question that's come up a handful
of times actually, and it's a very fair question. The community of
like, hey, so you guys have shown these ridiculous
models that have millions and millions of triangles. But I'd like to
texture these things. How does that work? And one of the things that I'm
not sure if a lot of people are aware of-- so with Mixer
now being entirely free for the community-- that by the way, I
mean, entirely free. So you get access to the
entire Megascans library, and you can actually go in and retexture assets if you'd like. So I was actually just
kind of noodling over here, while Brian was
talking on an asset. So I've loaded in the
highest resolution version of this asset from Utah,
and getting very high frame rates here, working at 4K, just
created a simple smart material here. So just kind of grabbing
this asset from Utah and just quickly retexturing it. And so, I don't know
if this is something that people are aware of,
but you can very easily just start to grab these
types of assets and reconfigure them
however you'd like. And then, I'm not even using--
if you noticed at the top here-- I haven't even downloaded
our newest version of Mixer. So I'm still using the
2020 version, actually. And we've made some really
amazing modifications to that tool as well, to
where you can actually bring in multiple
IDs and texture pretty complex assets, pretty
detailed resolutions as well. So just something
to be aware of. And also, if you want, one
thing that's really nice here, if I just go up to
the setup tab here. So as you can see, I'm
using the highest resolution version of this asset. So millions of triangles
here in the viewport. You can actually load in
the lower LOD versions of that asset. You can texture at that level. You can also reduce your
working resolution here. Lots of ways to scale it. And then you can bump it
up to the highest level and get textured
assets pretty quickly. So anyways, I just
figured we'd kind of touch on that, if that was something
that community was not aware of, as far as texturing
really, really dense assets. VICTOR: Yeah,
we've definitely received a lot of questions
in regards to UV unwrapping, as well as tessellation,
landscapes. You all think we can cover
some of those a little bit? Sort of generically
talking about-- I think there was 10,
20 different questions in regards to how are we
going to work with our UVs? Can we shed a little
bit of light on that? What we've been talking
about at a big-- BRIAN: So yeah. These meshes are just like--
the Nanite meshes are just like any other static mesh
from the import point of view, what data comes into the engine. So they'll still have
UVs. They'll still have textures mapped to
them, just like any others. As far as how to UV
them in other DCCs, I'm not going to be the
best person to answer that, because I'm not an artist that
goes through this day to day. But I can share a
bit about what I've heard from our art
team, which is, think about UVing early,
and keep that in mind. Keep in mind the fact that
you will need to do that. And don't leave it
towards the end. So if you have duplicated
elements, so for example, there's the shield
of that soldier has a bunch of
duplicated bits of detail that go around the ring on
the outside of the shield. The artist that sculpted
and modeled that, mentioned if he would
have UVd that bit before he started
duplicating it, he wouldn't have had to
worry about UVing the thing after it had been
repeated many times. A similar sort of thing happens if you start adding subdivisions to your mesh: if you can UV it before you
start adding the subdivisions and start sculpting
in more detail, or if it's just a
standard smooth sub-d. If you can UV it before you
start adding subdivisions, it'll be much easier
than if you try to UV it after you've done things. So I guess, that would be one
thing I would try to impart. I guess it would
be useful to say, we don't really have a fancy
UVing solution ourselves in Unreal to solve this
problem, at least at the moment. So a lot of it, just
I think, is going to come from just experience
of artists in the community, and sharing what you learn. And teach us as well. If you find techniques that help
this sort of process, share it. We'd like to know. We're learning just
like you guys are. GALEN: Yeah, I
would add to that as well, the modeling toolset
that's in the engine currently is still being
developed, obviously, with a lot of other
features in the engine. There are some auto UV
features that are in there. We're working very closely
actually with the modeling team on the art side to make it
so that those tools allow us to do a lot more
than we otherwise would in earlier versions
of the engine. Victor's getting attacked,
I think, by a fly. So-- VICTOR: Like
one banana fly. It really likes my
face, apparently. BRIAN: Yeah. It's also worth noting that
you can store things in vertex color on a Nanite mesh. So that's another
possibility as well. I think there might
be a video that is going to be
posted in the future for us showing off how
that workflow could work. It's not something that we've
done a ton of testing on yet. So again, that's another area
that we're experimenting with, and we'll share what
we find out as it goes. GALEN: Yeah, we actually
small amount of testing with that actually for Valley. And I don't remember,
TOPAS time is very bizarre as far as how it actually-- I don't remember when we
were doing those tests. But I think we did-- Victor specifically did
some tests on that-- Wiktor Öhman,
and he did a great job of showing some A/Bs with that. I don't remember-- I can't
quite remember why we decided to not go that route. I'm not sure if it was size
on disk, or just A/B-ing the quality, but yeah, I think it's
something that we'll definitely explore in the future. So-- VICTOR: Thank you both. Go ahead. CHANCE: Yeah. On the topic of future
support, there's been a handful here
that have come through. And Victor touched on it a bit. But tessellation,
animated skeletal meshes, world position
offset, other deformations. Anything that's on the road map? Anything that you've got an idea
of how we're going to tackle, or we're not going to tackle,
those kinds of things? BRIAN: So those
things are a little bit on a further horizon. I don't know-- they
certainly won't be coming for the full 5.0 release. But they're very
much on our mind, and we've got ideas on
how we can attack them, and would very much like to
start getting movement on that soon. Things that will probably be happening in a shorter time frame include better scaling up for high instance counts. So already, the Valley of the Ancient has one to two million
instances in it. We're looking to see how far
we can push instance counts. Some extra just editor tooling,
trying to get improvements on compression,
those are things that are in the shorter time frame. But yeah, some of those
bigger ticket things are definitely planned
on the horizon. CHANCE: Sounds good. On the topic of
supported hardware, where are we right now as far
as what Nanite works with, what Nanite requires,
and where do you think we might go next, whether
it be for 5.0 or in the future? BRIAN: Oh, I don't have
it in front of me right now. We've got our documentation
for what we're supporting for the exact model GPUs. So for right now, it's
Nvidia and AMD, I think. Nvidia's something
like a 1080, I think, is the min spec for Nanite. But don't quote
me on these things. We've got official documentation
for what those are. CHANCE: Yeah, we can send
people to the docs for that. Cool. VICTOR: There
were quite a few-- I just want to go
back to the question in regards to landscapes,
and how one today might want to approach that, if that's
what you are looking to do. It's a twofold question. Are we developing a new
method to produce landscapes, or should folks at least in
sort of the about the now time frame, until say, 5.0 full
release, should they just be using regular static
meshes, if they're looking to use Nanite? BRIAN: So if you're
looking to explicitly use Nanite, we do not have
anything in a short term time frame for Nanite
landscape support, other than something of the
style of the Valley of the Ancient, which is lots of
static mesh instances. I guess, the other thing
that I would point people at, if you're looking for a
higher density landscape, is there is an
experimental feature called virtualized landscape? Is that the correct name for it? Virtualized height
map landscape. I think that's it. So you could try
experimenting with that. And it will give
you a much denser geometry than traditional
landscape will. I think it's an experimental
feature right now. And I think that
was out in 4.26, but will be in UE5 as well. But I am not the expert in that. So those details
are probably off, considering I don't even
remember the exact name of it. GALEN: Yeah, It's
the virtual heightfield mesh. Yeah, so I would need to dig
up Jacob's amazing flow chart that we were talking about
before we started streaming here. There are some
dependencies there that I don't remember
off the top of my head, maybe with regard
to Lumen, I think. That are kind of
worth mentioning. BRIAN: Yeah, so
Lumen in the Early Access build does not bounce light
off of landscape currently, but I believe that's intended
to be supported by 5.0. CHANCE: And that
doc that Galen mentioned, we wanted to kind
of sanitize it, and get that out to folks, too. So they could kind of
see what at early access is and is not supported. And some of the things that
we've learned along the way. Like we said, this
is in many ways an experiment for us on
how a lot of this works. And I know the Quixel team found
out so much of what we can do. GALEN: Yeah,
one thing that I would add to this
topic of terrain, specifically, is
that Brian and I've been having some offline
conversations about this. This is something that we
hope to work very closely together on as, far as
solving this, and creating what could be landscape 2.0. We're not exactly sure exactly
all the details, obviously, as of this call right now. But it's something that we're
definitely thinking about. It's stuff that we really,
really want to tackle. So expect more in the future. BRIAN: Yeah, definitely. It's clearly something
that major improvements can be made in that
sort of workflow and the quality that
could be achieved with the different sort of
approach and some new tech. But yeah, that's
a bit more future. That's certainly not going
to be in a 5.0 timeframe. VICTOR: Next
question comes from-- not sure I can read it. Are there any papers you would
recommend to read to understand the Nanite technology? BRIAN: Yeah,
I forgot to mention. So if you're looking for
more information on Nanite, I will be doing a talk in
the advances in real time rendering course at SIGGRAPH
in a couple of months. So check that out for far more
in-depth technical breakdown of how Nanite works, kind
of from top to bottom. And I'll have tons of
other paper references and things in that talk. VICTOR: You know
you're talking to the pro when the pro goes, yeah,
just watch the talk that I'm about to do in a bit. That'll be the best information
you can find on this. CHANCE: But yeah,
there will be a white paper is what we're hearing. VICTOR: Yeah,
I'm writing it. CHANCE: Oh, thanks, Brian. This is a good one, Brian. I think we discussed
this early on, but I'd like to hear
it from you too. So how's Nanite changed
collision and creation for assets? Don't we need that anymore? Can we have super
complex collisions? I would say, that's
kind of like-- there's two ways
to look at that. One is about the actual
assets themselves. And then, two, oh God, what
does that do to physics? Right? [LAUGHTER] BRIAN: It doesn't
change anything about physics. Physics still have
their own constraints. They need things in
a similar fashion than they needed them before. There is no Nanite for
physics yet, at least. So as far as what they
need, you can still author collision meshes,
and provide those as custom imported
collision, if you choose, to provide an automated
way of getting either a complex collision
or the simple collision. We have some tools
that are in engine that you can read in
the Nanite documentation for it generating what we
call these proxy meshes, which are kind of just like
a stand in old style static mesh to use for any of
the places that don't interface directly with the
Nanite data structure. CHANCE: Yeah. And generally, what we did
for Valley of the Ancient was, we certainly didn't use
billion poly collision meshes for everything, lest our
computers explode on us trying to calculate that stuff. But we did prioritize
certain areas, like the ground where
the characters walk. We wanted that to be a little
bit higher res, of course, so we could show off
some of the FBIK stuff. But a lot of the actual
meshes out in the scene, we were kind of judicious
about our budget for those kinds of
things, and made decisions based on what would actually
matter to the game play. So same kind of as
before, certainly don't want to
bring something in, import collision as whatever the
native one that comes in there is, still be a little
bit thoughtful as you put those things together. GALEN: Yeah,
one thing that I think is worth mentioning while
we're talking about collisions. So we actually
made a custom tool when we were creating collision for MegaAssemblies, because that was going to be a
pretty serious concern as far as how we wanted to tackle that. With the process of actually
making those packed level blueprints, we're literally jamming millions, and millions, and millions of triangles kitbashed together to sort of create something new. We created a Blutility,
I think Marion and Aaron were the guys that worked on that specifically, and it's effectively like a voxel wrap. So using some of the modeling
tools to basically throw a blanket over the top of the mesh--for lack of a better descriptor there--and be able to sort of decimate that down into something that's actually usable.
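[Sketched very loosely, that kind of "voxel wrap" collision pipeline could look like the following; every function here is a hypothetical placeholder, not the actual Blutility.]

```cpp
// Illustrative pipeline only: voxelize the kitbashed geometry, extract a
// watertight surface, then simplify it to a triangle budget.
struct Mesh      { /* triangles */ };
struct VoxelGrid { /* occupancy  */ };

VoxelGrid Voxelize(const Mesh&, float /*voxelSize*/)      { return {}; }
Mesh      ExtractSurface(const VoxelGrid&)                { return {}; } // e.g. marching cubes
Mesh      Simplify(const Mesh&, int /*targetTriangles*/)  { return {}; }

Mesh BuildProxyCollision(const Mesh& assembly)
{
    // Coarse voxels "throw a blanket" over the detail, so tiny gaps and
    // overlapping kitbashed pieces merge into one closed shell.
    const VoxelGrid grid    = Voxelize(assembly, /*voxelSize=*/25.0f);
    const Mesh      blanket = ExtractSurface(grid);
    // Decimate to something cheap enough to use as physics collision.
    return Simplify(blanket, /*targetTriangles=*/2000);
}
```

And that was a really,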
really amazing tool that allowed us
to actually create very performant collision
for those assemblies. And I'm not sure if we're going
to be able to release that specifically. Probably have to do a little
bit of cleanup for that. But when we actually do properly
release the MegaAssemblies pack, it might be something
that we can include, because it might help
certain people out with some of the problems they
might run into in creating custom collision for geometry,
or for Nanite geometry specifically. CHANCE: And that
might be something that we can talk to
Aaron and Marion about, just discussing maybe
in a future live stream, or put up some docs
about it and stuff. Because it's a
really powerful tool that we utilized quite
a bit on the project. GALEN: Mm-hmm. BRIAN: One thing I
wanted to mention here, because I think there's a
really cool story with Valley of the Ancient, Great
White Buffalo asks, if Nanite can't do foliage
due to transparencies, why not generate
foliage as polygons? [LAUGHTER] GALEN: We did. We actually tried that. I sent it to both
Brian and Whiting. And it's the stuff
of nightmares, I think, for engineers. It's absolutely
horrifying to look at. BRIAN: I'm
not sure I saw that. GALEN: Oh, really? BRIAN: I don't know. I don't remember that. GALEN: I might be able
to pull it up here, actually. Oh, don't look at my
screen yet, IT team here. If I can pull it up. See if I can find it. But yeah, we did actually
try that specifically. And it was pretty crazy. So we actually used an external
application to generate that. So the way that it
worked was effectively, we used the opacity maps, from
the actual assets themselves, like the vegetation. And then, we were able to
take that and tessellate it inside of Houdini to
create something like that. But I am not going to be able to
find it here in the short time on this stream and
sort of show that. But there were some
limitations there, obviously. I think one of the main problems
is the thin spindly bits that make up most pieces of foliage. As you start to
get further away, they turn into these
crazy spider monsters, as it currently exists. Brian, I'm sure you've
seen that, where it's just like this weird
ball that starts to-- it looks like a weird tumbleweed,
or something like that. BRIAN: But yes,
send that mesh to me. In places where I've
seen that happen before, those are bugs
that I have fixed. I have not seen one that
does that in a long time. So if you've got a
mesh that does that, I still have a bug to fix. GALEN: It's a
big contender for you. So yeah, no, it's a-- but no,
one of the main limitations that we had actually--
so as Brian mentioned, currently we're not
supporting two-sided materials with Nanite. So foliages actually
looks really goofy when you don't have
that applied to it. We were able to kind of
hack it in a certain way, being that there's some
subsurface properties that you can designate to those objects. But it was, again, just didn't
look right when you placed it in the world. And again, it's
just not something that we can necessarily
prescribe from a product level, at this point. So we decided to abandon it. But yeah, no, Brian, I'll
send you the screenshots. They're absolutely terrifying. BRIAN: Cool. Yeah, send me the mesh too. I want to see what's
going on with it. But yeah, I guess what I'd
say with other things is, try it out. I'm actually interested to see
what you guys have done that-- I've seen a few in
the community that have brought in
full geometry trees and were getting
some interesting looking results already. So yeah, try things
out, see how it works. If you make a giant
forest of them, it probably won't
perform super well. So I definitely
wouldn't recommend building a game
with this assumption that this would work out. But as always, do experiments,
profile, see what things are, and share if you
learn something cool. GALEN: Yeah. It's worth noting too, if
you guys do open up Valley, and you see the
project itself, you will see that foliage
is really kind of the only thing that's
not Nanite for the desert, specifically. So when you switch
over to that debug view and you start
looking at it, those are the only things
really that are not Nanite in the entire project. CHANCE: So on
that note, I think it's important to maybe call
out something we've learned. And Graham from our team was on
to us quite a bit about this--flipping off Nanite to show what renders and what doesn't will show you what you need to convert to Nanite. And so, for us, we had our
character Echo and the grass. So any time we
would flip that off to just make sure that
anything that got in was actually converted
over to be a Nanite mesh. As long as that was the
only thing in the scene, we were in good shape. But Brian, can you
explain why that might be? Is it kind of like a-- use mostly Nanite, or
a mix is maybe bad, or what is the rule there? BRIAN: Yeah, so I
guess the rule of thumb is, for the most
part, if you can, you probably should enable
Nanite on static meshes. So for anything that
Nanite supports, I'd recommend turning it on. Especially if you've got a
scene that has lots of Nanite meshes already in it. Even if it seems like there
won't be a benefit for that, let's say, in these
demos, we've got cases where we have blocker
geometry to block shadows. And it's just like a cube. And the question would be,
well, I mean, it's a cube. We don't need to
make that Nanite. Well, so far we've seen, that
it actually works out better, and how it mixes with
the virtual shadow maps. At least that's been
our experience so far. It's not critical to
make everything Nanite. But the general
rule of thumb is, if it supports what
that mesh is being used for-- if Nanite supports
how that mesh is being used, I would recommend
switching to it. We haven't done as much
testing with scenes that are quite low poly. So if you took a
complete scene that, say, was built for
last generation stuff. It wasn't really built
with Nanite in mind. And you've got a whole scene
filled with lower poly meshes. Your mileage may vary there. I'd be interested to see
what sort of results people get there. I haven't done as much testing
for that sort of thing. Most of this experience has been in these sorts of projects that were, for the most part, Nanite. And the question was, well, I have this other thing--does it need to be Nanite? It doesn't matter that much. Should it be or shouldn't it be? The answer so far has been, yeah, for everything that can be, use it. GALEN: And it's cool
to see the community too. I mean, if you're just
browsing ArtStation and YouTube just in the last week, to see the number of people that have taken scenes that they've developed in UE4, and flipped all the levers on
to get that UE5 experience, that's been really cool to see. And I'd really be interested
to see the data behind it too, of just A/B-ing
some of the perf numbers there, seeing what it
actually looks like, with having everything
now just switched to virtualized geometry. It's pretty exciting. BRIAN: Yeah, I
would be very interested if anybody has numbers
like that of their scenes. We only have so much data
ourselves to compare on. And we should probably be
doing a lot more investigations and gather those numbers for the
scenes that we have internally. But there's so much more data
out there in the community than we could possibly
ever test ourselves. So yeah, I'd be interested
if you guys have numbers out in the community to let
us know how that ends up working out for you. I guess there is one
extra thing to say there, as far as, again, we've talked
a lot about these super high
because it's all completely GPU driven for handling
really high instance counts, and not having to deal with the
draw call sort of performance. So even in scenes that might not
be high poly meshes themselves, it can scale a lot better
to high object count scenes and reducing the CPU burden for
a scene that would previously have been fine on the GPU. It might run a lot faster on
the CPU after enabling Nanite. VICTOR: In regards
to the topic of when-- if it works, if Nanite works,
could we touch a little bit on multi-view, stereo,
virtual reality, and what the future for
Nanite might look like there? BRIAN: Yeah, there's
no real technical limitation there at all. In fact, Nanite
should be even better than the traditional pipeline. Because Nanite can render-- we already support
multi-view in a way. It's the reason why we can
draw to all virtual shadow maps in a single pass. We just need to do the
plumbing, basically. We need to hook up
the split screen, and we need to hook up
stereo rendering for VR, and make sure that
those paths are using the same sort of Nanite
multi-view functionality. So that you'd be able to render
both eyes in a single draw. You could render split screen-- two completely different
views in a single go. We just haven't
connected the dots yet. VICTOR: This
screen is a good call out, because that's
something I definitely haven't seen anyone ask for. But that's a good way to think
about it, if you're not as used to tackling stereographic
displays, such as VR, and et cetera. Thank you for that, Brian. And next week-- for
all of those of you who have asked about Lumen--
we will be covering Lumen next week on the live stream. So we'll get into that as well. We had a question here from
[INAUDIBLE] who is asking, are we able to transform any
older object to Nanite version with one click, or do we
need to spend some quality time to do it? BRIAN: No, but
it really is one click. In fact, you can convert many
assets with a single click. If you select a ton of
assets in the asset browser, you can do a right click menu. And there's just a way that
you can convert all of them in a batch operation. Or alternatively,
it's just a checkbox in the static mesh editor. In code, they just use the static mesh class. They're the same asset type. They're the same static
mesh asset type as before. It's just a checkbox on
it, which tells the engine to build this Nanite data
structure and store that, instead of what
it used to build. And then when we render
that mesh for the frame, it goes down this
Nanite rendering path. But otherwise, from the
data asset point of view, it's just a static mesh. And really, it's
just a checkbox. CHANCE: No
workflow changes there. Just what you've got,
it's going to work. GALEN: Is it represented
in the bulk property matrix, Brian? Do you know? BRIAN: Probably. CHANCE: Yeah,
I'm sure probably. BRIAN: Yeah. I think you could probably
enable it that way as well. But we added a special-- there's a Nanite thing
in the right click menu. You can specifically enable
things in bulk selection. VICTOR: Mr.
Soren asked, is there a limitation on unique objects? It seems like it is more
focused on many instances of a few high res
unique objects. Is that correct? BRIAN: No, not really. So as always, with
unique data, the more different pieces of
data needs to be stored uniquely in memory, I guess
that should be obvious, but as far as Nanite's
efficiency in rendering, it is actually more efficient
than previous techniques for handling instancing. Where instancing
before, it would be a draw call for each
different instanced static mesh. So each time you would have a different mesh that would be instanced many times, it would be a separate draw call for each one of them. With Nanite, it's a
single compute dispatch, or single draw indirect
call for all types of meshes in the entire scene. So the entire scene
goes in a single pass, regardless of whether there
are a million instances of the same mesh, or
many different instances of different mesh types. If you had a million
different unique meshes, that might end up consuming
a huge amount of memory. I don't think we
have any examples of trying to push
it for that much different unique
different types of meshes. But it doesn't really-- other than just the amount
of base memory for having a mesh in memory,
there's not really any sort of performance
cost for it. It would be more
just-- there might be a memory impact for having
a million different-- that one would probably cause problems. I'm not sure what
our actual limit is as far as number
of unique meshes that can be loaded at once. Probably have one, I just don't
know it off the top of my head. VICTOR: Yeah,
it's interesting. When you're not using the draw
call count as a measurement for that, getting those statistics out. CHANCE: That's another
one that the community can go nuts on, and see
how far you can push it. I think that there's somebody
out there that's probably already working on that. BRIAN: It could be. I'm pretty sure there's
an actual number at which it could be a problem. But I would have to look it up. I don't know it. [INTERPOSING VOICES] BRIAN: It's large
enough that it has not been a concern for literally
anyone in Epic so far. There's only so many-- because
that also comes down to, unless you're building these
in some sort of procedurally generated automated way,
there's just only so many meshes artists can author. And how many the ones that
you could actually reasonably import before you
just run out of time. CHANCE: Yeah. I think for our
Megascans, I mean, we have a few hundred
in there, right Galen? GALEN: I don't know
if it's even 100 actually. CHANCE: Oh, really,
It's lower than that. OK. Oh, I'm sorry. I deleted about 300. That's what it was, because
we had the whole download, and we ended up down there. That's right. BRIAN: Yeah, I
guess what you're saying, there is something that's
not specifically the mesh. But on the material, there is
a draw call per unique material in the scene. So each one of those meshes
had a different material, which is fairly common, then
it could start scaling up with the number of
unique materials. But if there are
many meshes that all have the same material on them,
it should render just the same. CHANCE: I have
a personal question. It's my question, not
from the community. But I'm sure you've seen the
photo scan dog, and the tens of thousands of things in there. I just wanted to make
sure you had seen it, because it's pretty amazing. BRIAN: Yeah, it's great. I was very happy to see that. The way that that
tweet was phrased is, I got up to 1,000
before I got bored. And I was like, my reaction
was, oh, yeah, that's great. But only 1,000? How about a million? [LAUGHTER] BRIAN: Because you
can do a million too. GALEN: I like the dog,
and I also like the bananite. That's that my other
one that I love. So-- CHANCE: Really. Yeah. That must have been
what you were imagining whenever you were putting--
how can we make infinite poly bananas. Oh goodness. VICTOR: All
I ever dreamt of. Next question comes from
Derek Revere, who's wondering, will Nanite support translucent
and masked materials? CHANCE: Oh,
that's a good one. BRIAN: Not currently. Will it in the future? Yes. Masked is something that we're
definitely interested in. Translucent becomes
harder, because of this-- how I was describing-- we rasterize out to this thing
called a visibility buffer, and then apply deferred
materials coming off of that. That doesn't work. You can't do a deferred
material to something that has-- one pixel has
many layers of materials that all contribute
to that pixel. That technique only
applies if there is one material that gets
applied to this pixel, which means it has to be opaque. Masked is not-- within
the engine terminology, is not opaque in how that
selection box says opaque and masked. But from a conceptual
point of view, masked is a form of opaque,
just with a texture mask, deciding where it is opaque
versus where it's nonexistent. So masked is something that,
especially from the foliage or chain link fence-- and there's many examples
where you'd want to use that-- is attractive for us to support. But is a bit of a
challenge to do so. So we need to think through
how to make that happen. Really, if you want to get
down to the nitty gritty, the reason why
that's challenging is that, we would need
to evaluate an arbitrary shader for what
the masked value is in the middle of rasterization. And that rasterization right now is a fixed-function compute shader software rasterizer. Having it do an arbitrary
shader evaluation in the middle of that is
something that we can't really support at the moment. And if we add that
support in the future, it'll have to be
delicately handled to not make it tank performance. Which if we did it naively,
it absolutely would. Kind of a similar problem to
masked materials in hardware ray tracing. They support it there. But if you use
them, things start getting slower pretty rapidly. So we could do the same thing
with a great deal of work to just make that work at all. I'm not sure how valuable it would
be if the performance tanked. So we have to do it carefully. CHANCE: I'm
actually going-- [INTERPOSING VOICES] BRIAN: Oh. Sorry. VICTOR: Go ahead. No, you go, Chance. It's your turn. CHANCE: I'm super
curious about this one. Can we manually
tweak the density of the triangles or
clusters based on distance, or is it something that's fixed? BRIAN: It is fixed. And that is actually on purpose. The algorithm that it's
doing is complex enough. [LAUGHTER] BRIAN: If
there was an area, such that artists could
start controlling it, having those edits actually
persist after a change would also be complicated. So if we make a change
to how the algorithm that builds the Nanite data
works, make an optimization and improvement in
the future, I don't know how it could
possibly retain that data. CHANCE: Right. BRIAN: And then, how would
we expose an edit towards it? Really, the core
design here is to try to make it so that you wouldn't
have to do such a thing. That it's just
completely automatic, such that you don't
need the care. And then, you won't have
to waste your time with it. I understand the
motivation though, because although Nanite-- 99% of the time, the
algorithm works perfectly. And you get something
that is imperceptible as far as its degradation
as it gets further away, it looks like the
original authored mesh as you'd see it in
some offline renderer. But there are cases
where it fails there. It's not perfect. So it may make some
guesses that Nanite thinks it's making an
imperceptible change, and it actually is perceptible. In some cases, it's very
clearly perceptible, because it actually
looks pretty bad. But those cases
are extremely rare. And extremely rare,
such that hopefully we won't really
editing of them on the fine-grained
sort of detail level. If we do add sort of artist
tweakable knobs, that will probably be a very high
level concept, which is maybe, here's a whole area of it
that I want to preserve at a higher detail level. You paint in and say, I
don't care about this. Or here's a fix up area. And it wouldn't even be a
care about, it would be, because it should be caring
about the things that matter automatically. But it'd be like a, it
screwed up right here. So don't screw up there next
time, or something like that. Or an even higher
level thing, which would be just an entire
slider, which is like, this thing seems to degrade
quicker than it should be. Bump it by 50%, and don't
reduce the triangles as quickly for this one asset. [INTERPOSING VOICES] GALEN: But it's a
great question, though. So I mean, that fear
stems probably, if I had to guess, from this user thinking, maybe I have horror stories of LODs jumping and popping
on previous projects, and I want to be able to
control that as an artist. That totally makes sense. Now, I can say, having worked
on both of these two projects, that that has been a
nonfactor on either of them. So hopefully that makes it a
little bit easier to digest. BRIAN: Yeah. It's funny how, I
mean, it was very challenging to build this piece
of software, I assure you. But it's funny how much
easier simplification-- mesh simplification-- gets,
when what you're targeting is not giant polys that have
some sort of meta meaning, the sort of like human
perceptual shape meaning, and start turning into
pixel size differences. Once things get
down to that level, some of the challenges of
building level of detail actually get easier. And then, you just have
to deal with the, well, how do I actually achieve that
level, which is difficult. I guess the way to summarize that problem is: the sort of really bad quality, auto-generated level of detail you might be used to--if you draw it really, really tiny on screen, you stop caring.
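As a rough illustration of the idea Brian is describing here--not Nanite's actual code--this is the generic screen-space-error test many LOD systems use to decide when a simplification error has shrunk below a pixel; the function name and sample numbers are made up for the example:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only -- a generic screen-space-error test, not Nanite's implementation.
// Projects a cluster's world-space simplification error to pixels and asks whether
// the viewer could perceive it at the current distance.
bool ErrorIsSubPixel(float worldErrorMeters,   // error introduced by simplification
                     float distanceMeters,     // distance from camera to the geometry
                     float verticalFovRadians, // camera vertical field of view
                     float screenHeightPixels)
{
    // Pixels per meter at this distance for a perspective projection.
    const float pixelsPerMeter =
        screenHeightPixels / (2.0f * distanceMeters * std::tan(verticalFovRadians * 0.5f));
    const float errorPixels = worldErrorMeters * pixelsPerMeter;
    return errorPixels < 1.0f; // below one pixel: the simplification is imperceptible
}

int main()
{
    // A 5 mm simplification error viewed from 20 m away on a 1080p screen (~60 degree FOV)
    // projects to well under a pixel, so the coarser representation is safe to draw.
    std::printf("sub-pixel: %d\n", ErrorIsSubPixel(0.005f, 20.0f, 1.0472f, 1080.0f));
    return 0;
}
```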
CHANCE: That makes sense. Victor, I interrupted you. VICTOR: There's just a plethora of questions, so-- CHANCE: Yeah. VICTOR: We'll
continue down the list. Next question comes from
gamer Lieber, who's wondering, will Nanite support
procedural meshes? BRIAN: Not any time soon. So that's the data
structure that I discussed at a super high level. And there's a ton of intricacies
to make this actually work out. The cluster hierarchy is
semi-expensive to build. So we've worked a lot on making the whole Nanite data-building pipeline as well optimized as the renderer--I'm not sure it's been quite as much, but a fair amount of time. Maybe not an equal, but an almost equal amount of time has been spent making the build efficient and super parallelized, and just trimming down the time it takes to build the data, as has been spent on rendering that data. And the reason why is
that, we want to make it so that when you
import your data, you're not waiting for minutes
or even worse than that, hours, waiting, and waiting, and
waiting to get your thing in. Because if that
was true, no matter how great we could make
the runtime experience, if you had to wait for
hours to get your thing in, it just doesn't matter. You wouldn't use it. So we want to make that
process as fast as possible. But that said, it's still
a heavyweight operation. Something that can't really be done in real time, because there are a lot of things it needs to compute to build that hierarchy, such that it doesn't create cracks, and such that we can get a high quality result without an unreasonably huge number of triangles drawn--getting those sorts of numbers, looking like 20 million triangles on screen at this resolution, takes care to make happen. So if it was a procedurally
generated mesh-- I'm sorry, I'm assuming
by procedural meshes, what you're saying
there is procedurally generated every frame. VICTOR: Right, yes. BRIAN: If you're talking
about procedural output, from Houdini or something
like that, in which case, I would say, it already works. VICTOR: Yeah. BRIAN: I'm assuming you
mean procedural, like runtime procedural. VICTOR: Yes, runtime. BRIAN: Yeah. So that is probably not
going to be supported. Or if we ever figure that
out, it won't be any time soon. I don't know how to attack
that problem at all. VICTOR: And if you don't,
then I'm not sure who would. BRIAN: I don't want
to say that it's completely impossible, because I don't
know, maybe two years from now I'll be like, aha. But I have no idea how
to attack that problem. GALEN: I'll take a crack
at it, Brian, don't worry. VICTOR: Game
jam it up next weekend. GALEN: I guess. VICTOR: Let's see. We had another question
from Crucifear, is it advisable to use
Nanite for lower poly meshes in a very large scene
with multiple instances? BRIAN: So yeah, I think
we talked about this already. And yeah, if you've got
very high instance count, Nanite will probably be
more efficient than not. And as always, try
things and profile. That's always going to be the
best answer to a question. We can give advice. But at the end of the day, we
can't guess all the things. And we can't know exactly
what your data is. So always try things and profile. But yes, if I were
to make a prediction, I would say, yeah, in a lot of
cases, Nanite will be faster. In some cases, significantly
faster than with Nanite disabled. VICTOR: I remember
seeing another question-- it's somewhere in the doc here. But it was in regards
to, are there still any reasons why one
would want to optimize a poly count of a mesh
when you're using Nanite? Is there any reason, if
you had an A and B button, one would export less,
one would export more, is there any reason
to go with less? BRIAN: Absolutely. And the biggest
reason is disk size. The size on disk is going to scale up linearly with the number of triangles that were imported. So when I gave those numbers earlier--what did I say, about 18 bytes per imported triangle?--if you can import fewer triangles, you'll get fewer bytes on disk.
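To make that scaling concrete, here is a small back-of-the-envelope sketch; the ~18 bytes per imported triangle is the rough figure quoted in the stream, not a guaranteed constant, and this ignores platform compression:

```cpp
#include <cstdint>
#include <cstdio>

// Rough, illustrative estimate of Nanite data size on disk.
// Assumption: ~18 bytes per imported triangle, the approximate figure quoted
// in the stream; real sizes vary per mesh and before/after platform compression.
constexpr double kBytesPerImportedTriangle = 18.0;

double EstimateNaniteMegabytes(std::uint64_t importedTriangles)
{
    return importedTriangles * kBytesPerImportedTriangle / (1024.0 * 1024.0);
}

int main()
{
    // A 1M-triangle import lands around ~17 MB; a 10M-triangle import around ~172 MB.
    std::printf("1M tris  ~ %.1f MB\n", EstimateNaniteMegabytes(1'000'000));
    std::printf("10M tris ~ %.1f MB\n", EstimateNaniteMegabytes(10'000'000));
    return 0;
}
```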
There are also knock-on effects when you're authoring things in other modeling packages. Working with less may
make your lives easier. So if there's not a quality
reason or a workflow advantage from working at a higher poly count--if it's not saving you time, and it doesn't end up in a better quality result--then don't waste your time doing it. I guess that's the
other thing there. As we mentioned earlier
in the stream though, there are some tools
that I'm hoping to get in for future versions
of Unreal, maybe even 5.0, to have it so that you could
make those determinations--what is the maximum quality, what is the highest number of triangles that will be there when you get close--something that you could tweak in the engine after the fact. But even in that case,
that would save you on the disk space
side of things. So you could adjust that
afterwards, and essentially change the-- it would be similar
to you changing how you imported it originally. It would just be, oh, there
would be an after-import modification before it's stored in the Nanite data structure. Then it would be like,
well, how much time did it take to import? If it was a smaller mesh, it
would take less time to import. So it really just
comes down to: have more data there if you get perceivably, valuably better quality results, or if it's making your lives easier by not having to optimize, bake out a normal map, or do some other step that would cost you more time. VICTOR: I saw Chance
almost joke about this a little earlier-- oh. CHANCE: I was just going
to say, not only is it less time to import, less
disk space, but it's less for your teammates that are on
a VPN, coming into the office, to actually download-- [INTERPOSING VOICES] BRIAN: Yeah, there
was something on that topic that I wanted to bring up
here that you're just reminding me of. So when we talk
about disk space, I think most people
are going to assume, to see how big these
things are on disk, I can look at the U asset
for that static mesh sitting in my directory, in your project
folder, look at that and say, aha, this is how big
this asset's going to be. That U asset contains a lot
more than just the Nanite data. And it's not necessarily
stored in the way that it will be in the final packaged, cooked version of the game that you would ship. It includes the actual
source data, not in exactly like an FBX format,
but something like that, in a
completely lossless way, and not stored in a compressed form at all. It stores other bits of
metadata about that mesh and how it was imported. So there is the source asset,
and that can be fairly large. Then there's the Nanite
data, or other render data that would be generated if you had Nanite disabled. That will not be stored in the .uasset at all, but stored in the DDC--the derived data cache. That's the actual
data that would end up in your final cooked version. But even that, as it's stored
out directly as DDC data, is before any sort of platform-specific compression. So for example, on PS5, there's going to be additional Kraken compression. On other platforms, there'll be some other form of LZ compression that's
just that file-- if you could find
it-- it's going to be some weird
hash numbered thing. So if you look at
those sizes, they won't be the final
compressed size. They'll be significantly larger than what you'll end up with on disk.
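A tiny sketch of the relationship Brian is describing--purely illustrative numbers, not real engine figures; the compression ratio is a made-up placeholder, and the point is only that the cooked render data, compressed, is what actually ships:

```cpp
#include <cstdio>

// Illustrative only: the relationships described above, not real engine numbers.
// What you see during development (.uasset plus DDC files) overstates what ships,
// because the shipped package contains only the cooked render data, after
// platform-specific compression (e.g. Kraken on PS5, some LZ variant elsewhere).
struct DevFootprint
{
    double uassetMB;         // source data + metadata, lossless and uncompressed
    double ddcRenderDataMB;  // cooked Nanite/render data, before compression
};

// 'compressionRatio' is a hypothetical placeholder; real ratios vary per asset and codec.
double EstimateShippedMB(const DevFootprint& dev, double compressionRatio)
{
    // Only the render data ships, and it ships compressed; the source data does not ship.
    return dev.ddcRenderDataMB * compressionRatio;
}

int main()
{
    const DevFootprint mesh{250.0, 80.0};  // e.g. a large scanned mesh during development
    std::printf("shipped ~ %.0f MB (vs ~%.0f MB seen on the dev machine)\n",
                EstimateShippedMB(mesh, 0.5),  // assumed 2:1 compression, purely illustrative
                mesh.uassetMB + mesh.ddcRenderDataMB);
    return 0;
}
```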
So I think there's going to be this sort of misconception of, oh, you said it
was going to be this big, but it's like 20 times
bigger than that. What are you talking about? That thing that's
bigger is not the data that we're talking about. It's not what you'll end
up shipping at the end. It's your development data. As Chance was saying, that
can still be burdensome as you're developing things. Because everybody that
syncs to your source control is going to have to
get that data too. For future versions of Unreal, we will be improving that process--improving cooking and storage of that stuff, and how the DDC works--to make it a bit more streamlined. But those are concerns
worth considering: just how much of your hard drive space do you want this project to consume? It can be a factor--even if your final download is only tens of gigs, the project could be hundreds of gigs. And that can be seen just in the
download from us, is was it 100 gig or something like that? CHANCE: It's
right around 100. And then, you actually-- so you download a copy, and
then when you make a project, it makes a copy of that over. So the requirements
are kind of high. BRIAN: But then when
you package it for shipping, the thing of how it would end
up on, say like a PlayStation, for example, is more like-- I don't remember what it was. But it's 20 gigs or something? CHANCE: Yeah. Win 64 is like 26, 25. So we're looking at 75%
less than what you would actually download there. And again, we didn't
go through the process of being selective
about, am I ever going to see this
mesh to the point that it needs to be
this many polygons. The same with the
actual texture data. And still, I think
the texture data is the vast majority of
that size, not necessarily the mesh data. BRIAN: Yeah. So unfortunately,
how it is right now--to see what stuff in your final game package is taking up most of your space, or if you just want to see, hey, this mesh I just imported, how big is it going to be, final, compressed, on disk--we don't have ways of displaying
that easily in the engine right now, which is actually
a big problem. So it's something
that we're going to be improving going forward
so that you can better analyze and understand what of your
asset's textures, meshes, whatever, is contributing
to your final package size. CHANCE: And on that, it's
not the exact same one to one, and I wouldn't say it's 75%,
just take whatever and then times 0.25 you get the final. But in the content
browser, you can still select assets and use the
size map there, so at least get a good idea of where
everything is on disk. And you're probably going
to find the vast majority is going to be in textures. VICTOR: I'd like
to tackle some questions that we received during the
last live stream as well. We have already
tackled some of that just throughout
the presentation. But one question
that came in was, if Nanite will support
world position offset? BRIAN: So that is a form
of deformation-- arbitrary deformation of the mesh. There's actually-- I
could just like run off numerous cases of these. So skeletal meshes
are one of them--linear blend skinning with bones and skeletons. Morph targets on skeletal meshes are a different form. World position offset is a form of it. Spline meshes, although that's not really an animated form, are still a way to deform a mesh in a way that isn't just as simple as a scale. All those types of things
are not currently supported with Nanite. We only support rigid meshes. So that you can translate--
you can rotate-- you can apply non-uniform scale. So different scale
in all three axes. That is it. That is what we support for Nanite meshes currently.
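As a plain-C++ sketch of that distinction (no engine types; the functions are hypothetical, not Unreal API): an instance transform is applied identically to every vertex, which is what Nanite supports today, while something like world position offset is a per-vertex, per-frame deformation, which it does not:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Supported today: a rigid instance transform -- non-uniform scale, a rotation
// (just a yaw here for brevity), then a translation -- applied identically to
// every vertex, so the prebuilt Nanite cluster data stays valid.
Vec3 ApplyInstanceTransform(Vec3 v, Vec3 scale, float yawRadians, Vec3 translation)
{
    v = { v.x * scale.x, v.y * scale.y, v.z * scale.z };
    const float c = std::cos(yawRadians), s = std::sin(yawRadians);
    v = { c * v.x - s * v.y, s * v.x + c * v.y, v.z };
    return { v.x + translation.x, v.y + translation.y, v.z + translation.z };
}

// Not supported: a per-vertex offset re-evaluated every frame (which is what
// world position offset in a material does), deforming the mesh underneath
// the precomputed cluster hierarchy.
Vec3 ApplyPerVertexOffset(Vec3 v, float timeSeconds)
{
    return { v.x, v.y, v.z + 0.1f * std::sin(timeSeconds + v.x) }; // arbitrary wobble
}

int main()
{
    Vec3 p = ApplyInstanceTransform({1, 0, 0}, {2, 1, 1}, 1.5708f, {0, 0, 5});
    Vec3 q = ApplyPerVertexOffset({1, 0, 0}, 0.5f);
    std::printf("rigid: (%.2f, %.2f, %.2f)  deformed: (%.2f, %.2f, %.2f)\n",
                p.x, p.y, p.z, q.x, q.y, q.z);
    return 0;
}
```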
World position offset is obviously super useful though, and it's something that we
want to support in the future. But that kind of goes along with
solving just the deformation problem in general. So I kind of lump it into the
same bin of, if we can support arbitrary deformation in all
of these various forms, WPO would be in that bucket
that we would be wanting to hit in that entire effort. VICTOR: Thanks. I actually went
through pretty much all of the other questions
that were related to Nanite during our previous stream. And I think we have covered
pretty much everything that came out of that stream. Chance, were there
any more further up that you were interested
in snagging for today? CHANCE: I
think there's still a lot of really great
questions in here that we could dive into. I know we're kind of
pretty far on time, and I think Galen
has to get a haircut. I'm just kidding. But I think that we can probably
collect some of these things and take back to the forum post,
and see what we can get answers on for things we didn't cover. A lot of the ones that have
been coming through as dupes, I think we've
answered pretty well. Like the big trending
ones for sure. VICTOR: I
had one for Brian. Brian, someone was
asking if we can share the slide deck
that you were presenting, after the stream? BRIAN: Sure. VICTOR: Cool. Well, we'll figure
that out off-line. Just have to ask you first. Don't just want to start
uploading your stuff. We will get that to you. All of that information that's
related to the licensing topic, that includes--whenever in the future we do
have a sample project that's specific for the live
stream that we share, any slide decks, any other
presentations or documentation, you can always find that
in the forum announcement post, which is on the events
section of the Unreal Engine forums. And that is also where the link
to the slide deck download will be. BRIAN: Actually, I
should modify my answer there. Probably I will
have to ask people. VICTOR: If
it's a yes, it will be in the forum announcement post. And then, there was,
I guess, a question I wanted to leave
till the end, which I thought was interesting, was
where did the name Nanite come from? How did Nanite come to happen? BRIAN: There was a lot
of different suggestions. So really it was like, early
on, I usually don't personally-- I don't like giving
just brand names for features that could be just
explained for just what it is. For instance, the
virtualized shadow maps. We didn't come up with
some flashy name for those. It was just like, it is
virtualized shadow maps. But it started
becoming difficult just in the code, where there were
a lot of different algorithms that all fit together
to form what Nanite is. And there was no
name that didn't get ridiculously long
that could kind of sum up all these things. Or if it did use one
name, it would just be about one part of
the data structure, or one part of the algorithm. So I really needed a
name for a namespace to put around all of it. So then it became a hunt for a name, which had nothing to do with
any sort of marketing. It was just like, what
can I call all this code, so I can refer to it,
so I can namespace it. So we went through a
lot of different names. It was actually-- I'm not sure if I'm
supposed to say this. I'll avoid saying it. It used to be called
something else in the code, and then got renamed. It doesn't matter. But anyways, yeah, in the hunt
for the name, the reason why Nanite was attractive
was, we were trying to talk about
really tiny things to refer to the sort
of micro poly stuff. So I was just like, micro
meshes, no, that's actually like a type of clothing. So I didn't want
to call it that. And it's like, oh, nanoscale,
like nano something. And I don't know. I'm a big cyberpunk nerd. So Nanites came up,
and people liked it. So we went with that. GALEN: That's great. CHANCE: Yeah, naming
everything is hard. BRIAN: Naming
things is hard. CHANCE: You just described what I go through. I'm trying to name a variable when I know exactly what it does. BRIAN: Yeah. Just naming things in
the code is hard enough. But when it also needs
to be a public name, you need to make
sure that it doesn't clash with something else. If somebody's going to
Google something, what hits are they going to get-- what is
already existing that it could conflict with. Yeah. It's hard. VICTOR: It is hard. Any last thoughts
before we actually wrap this wonderful
time together up, with all this information? GALEN: I just
can't wait to see what people come up with. I mean, I think that's
the most exciting part about early access. And that's why the surprise
at the end of the presentation was one that we kept
pretty close to the chest. It's been super inspiring
just in the last week, just like I said,
pop over to YouTube, and look at ArtStation, and just see what people are making. I mean, and we're
only a week in. It's crazy to see how
far people are already pushing the tech, and
tutorials that are popping up, and everything like that. So I'm just super inspired. And I just can't wait to see
what other people come up with. So-- BRIAN: Yep, that's
my exact same response: can't wait to see
what you do with it. And especially so in
the odder use cases. It's great to see people making
stuff that looks like the stuff that we've put out so far,
and seeing people get really high quality
results just quickly just drag in this
Megascan, that Megascan. And some of these very
first day art station posts were really gratifying to see. It's like, wow, that is
a great looking shot. And you've probably
did this in one day. Because you didn't
have this yesterday. So it's like, I [AUDIO OUT]
you made this scene today. So that's a really cool to see. But the ones that are also
really exciting for me to see is, try using it in
something in a way that's completely unlike what
we have shown so far. I'm really excited to see a lot
more hard surface model cases. I'd like to see
people try out stuff that we even suggested
might not go well. Try making a polygonal tree out
of it, and see what happens. I'd love to see it. Just make stuff like
we haven't seen before. VICTOR: Thank you,
both, so much for coming on the stream today. It's been a pleasure. I know, Galen, we will
see you again next week. So will Chance. So biggest thanks to you, Brian. Because you're the biggest victim here. If you are still watching,
and you've been with us here from the start, thank you
very much for tuning in today. We do have a survey that
we link every week that you can go ahead and fill out. Let us know what we did
well, what we didn't do well, what you would like
to see in the future. Unfortunately, the
next two months are kind of occupied with all
this UE5 stuff that's going on. But we do add all of
the recommendations to our list of streams
for the future, since we try to find
the right people for it. If you are new to the world
of game development and Unreal Engine, and you would
like to get started, you can go to UnrealEngine.com,
download the Epic Games Launcher, and install UE4 or UE5, whichever
one you would like to choose. That's a whole
conversation on its own. If you're learning though, and
looking for a lot of materials, Unreal Online Learning, which
is Learn.UnrealEngine.com contains a lot of
tutorials for UE4. And there is no reason why you
can't use a tutorial for UE4 and then apply your
knowledge in UE5. The viewer might look a little different. And the placement of checkboxes and settings might be in different spots. But the knowledge itself is almost the same. I don't know. Can I say the same? At least when it comes to
the features that are in UE4 and that have now sort
of moved over to UE5. And that's a great
place to start. Make sure you like and
subscribe on all of the places. We got the Twitters,
we got the Facebooks, we got the-- you
know what they are. I don't have to
repeat them all again. But the forums are
a great place where you can talk to other
developers in regards to your projects, your
problems, anything else that you might have. And I also like
to do a shout out to UnrealSlackers.org, which
is our unofficial Discord community, where you can do that
in real time, which is great. We're all on there, I think. I don't know, except Galen. Galen's not on. That's an internal pun. Weird. All right. Community spotlights. Every week we highlight
a couple of the projects. Go ahead and add us on Twitter. It's a good place-- the work
in progress sections on the forums are also good, as well as the Discord channel. We follow you as much as we can,
and we try to find everything around there. You can also just go
ahead and send an email to community@unrealengine.com. It's another good
place, in case you haven't announced
anything publicly, and you would like
to let us know. That's always cool. I can tell you that. If you've seen on
Twitch, there's a cool Unreal Engine tag there
you can add to your streams. Go and combine that
with game development, if that's what you're doing. That's the easiest way
for folks to find you and your live content
that you are producing. Make sure you hit that
notification bell on YouTube. I just said like, subscribe,
but there's that bell as well. And next week, we
are going to cover Lumen, which is our global
illumination technology that was released with
UE5 in early access. And we're going to have Daniel
Wright, Galen, and Chance on for that. So a couple of familiar
faces, which will be great. And once again, do
want to thank Brian, and Galen, and I guess,
Chance, too for coming on. It is nice to have Chance there. I'm enjoying the dual hosting. I can focus on these pages
and pages of questions. [INTERPOSING VOICES] CHANCE: Happy to be here. This has been awesome. Thank you, Brian, Galen. BRIAN: Thank you, guys. GALEN: Thanks
for having us. This was awesome. VICTOR: Cool. And any conversations following
up in regards to this topic, feel free to use the
forum announcement post. We will be looking at
that quite intensely. All right. It's time to get off. It's time for our weekend. I wish you all the very best. Stay safe out there, and
we'll see you again next week at the same time. Bye, everyone. [MUSIC PLAYING]