VICTOR: Hi, everyone,
and welcome to Inside Unreal, a weekly show where we
learn, explore, and celebrate everything Unreal. I'm your host, Victor Brodin. And with me today I have two
amazing artists from our teams. Let me introduce Matt Oztalay. MATT: Hello. VICTOR: Technical artist. I was going to say that
before you said hello, but we didn't make that. And Jakob Keudel. JAKOB: Hi. [LAUGHTER] VICTOR: There we go. We're live, boys. This is it. Today we're going
to talk a little bit about how you can optimize your
environment in Unreal Engine. But to start off, I
wanted to have both of you introduce yourself, because
this is the first time you're on the stream. And so why don't we
start with Jakob? JAKOB: Yeah, hi. So my name is Jakob. And I'm here as an
Environment Artist for Quixel at Epic Games. That's my job. Yeah, that's pretty much it. Very short and concise. VICTOR: That's good. Matt's turn. MATT: And
I'm Matt Oztalay. As Victor said, I'm a Technical
Artist, Developer Relations specifically. So my job is working with
all of our licensees, which is a lot of fun. I've been working in
the video games industry for like 10 years, and
like the common thread through just about every
project I've ever worked on is performance and optimization. And that's why I'm so excited
to be on the livestream to talk about performance. VICTOR: It's
your time to shine. With that said-- MATT: I'm so excited. So-- VICTOR: Yeah,
please go ahead, Matt. The floor is yours. MATT: So a couple
of things to get started. This is not talking about
like mobile performance or VR hardware, but a lot of the stuff
we're going to be talking about is broadly applicable. We're really focused on like
deferred rendering, current gen pipelines. And like, yeah, of course,
there's the fewer triangles, you know, cheaper shaders,
all that fun stuff, but that's only
part of the story. And this kind of assumes
that you are intelligently building your environments. You don't have 50 bajillion
triangles in your rocks or anything like that. And we're not going to talk
about like quality sliders or anything like that. This is like we're going to
optimize for your high end. Oh, and before we
get started, I should note that I'm running on like a
2080 RTX big, beefy processor, big RAM. So I am-- you know,
some of the stuff is going to look like it's
running really quickly, but we can get it
running a little faster. But the thing about
performance is that a lot of this conversation
has to happen in context. Not every game or
project is going to have the same
problems for performance. It's going to vary from project
to project, how you set up your project, how
you've constructed your environments, which is
why I'm so excited that I got to work with Jakob at Quixel. And he was working
on this scene. So Jakob, why don't you
tell us about the scene that we're working on today. JAKOB: Yeah, so
this is a scene that is currently in development. This is not released yet. It will be released. It's essentially
a medieval scene, and we're planning to have
this actually be a game. Because the entire thing with
the demo projects or projects that Quixel has released
so far, we always strive for VFX quality graphics
inside of Unreal Engine. And even with Rebirth,
which had sections that ran absolutely
fine in real time, it wasn't actually
on peak performance with you being able
to walk through. And so since we've
joined Epic, our focus sort of shifted a little
bit to, A, providing you these demo scenes so that
you're able to dive in and sort of see and
dissect how we do things, but also understand
how things are done. And up until now,
we've never really hit on how do you create a game? Or how do you make
things run in the game? We've always just
demonstrated visuals, but never actually
proven our point of yeah, this actually runs in real
time on current gen hardware. So with this project,
we will actually want to prove ourselves
and demonstrate to our customers
and Epic's customers that this is something that
is absolutely possible, even with a super small
and condensed team. And so me being just an artist--
so a super quick background. I sort of dropped out
of university super quickly into Quixel. And my entire focus
with Unreal Engine has always been just cinematic
shots, so just pure visuals. I never worked as a game artist. So now, with this
project especially, I'm sort of out of
my depth considering what we want to hit
on visually and what steps we actually need to take
to make this run smoothly. And so this is where Matt
jumps in with his knowledge and advises us on how to build
systems, what systems are smarter ways to approach things
compared to maybe vanilla setups, or just
brute forcing things. And this is why we
have this stream in the sort of perfect
context of this project, because performance
optimization, as Matt stated, is something that continues
throughout the process of a project. And it wouldn't make
sense to show you a finished scene and just
talk about, hey, yeah, these are the steps that we
took to optimize the scene. But rather we dive in now
when everything is not really finished at all
and show you what steps we are taking
in this moment, and continuously
throughout the project to sort of get the
best out of the engine and out of this project. Hope this will make sense. MATT: Yeah,
it works for me. So one of the things when I
first pitched this to Galen, Jakob, and Victor was that-- we'd started talking
about this a while back. I got into the project
on the earlier side. So I got to do the
thing that I think is the most important
for performance optimization across the board,
and that is I got to set up my perf metrics early. Because if you are checking your
performance early and often, you know when you've
added something that affects performance. I'll give you a great example. I was working on a project
many, many years ago, and our QA department every
day had cameras set up throughout the map. And they would go
to each camera. We had console commands. So it was the exact
same camera every time, exact same transforms,
same hardware. And they would take
like a two or three second average of
the frame time. And then they'd add
that into a report. And so for any
given map, it might have had 20 cameras, because
they were pretty large. And I remember, there was this
one cave that for the longest time had been running at 6
milliseconds, 9 milliseconds. We were targeting 60 frames
per second, 16.66 milliseconds. So everything was going great. And then one day,
suddenly it shot up to like 25 milliseconds. And because I knew
that the day before it had been running
at 9 milliseconds, I knew that there was a specific
camera that was causing issues. And I could go to that
camera in my build, run a PIX capture on it,
which is a GPU profiler, and dive into the draw calls and
figure out what was going on. And it turned out that
there was a distortion texture that was
like 2048 x 2048, and this was on the Xbox One. And so it was a
really heavy texture with a really
heavy shader on it, and it was full
screen, so affecting every pixel on the screen. And so because we had
been checking performance every single day for
every single camera, we knew something was wrong,
and we could immediately react to it. And so that kind of leads
into the other thing. How you set up your project
is going to be different. So we're working with
a first person camera. So I always know
that the camera is going to be right down at
eye level for the most part. We're not going to be way
up in the sky looking down on all of the trees way
out in the distance. I know for the most
part that we're going to be down here
and super contained. The other thing is
making sure that when you are deciding on
your perf metrics, you are being
consistent about it. So like I said, I
know that every time I profile this project I'm going
to be launching the executable. I have a blueprint that
goes through and does the profiling for me, and
I'll show that in a second. And I know that I'm going
to be targeting, let's say, 20 milliseconds, or
something like that. As long as you're
consistent, and as long as you're focused on the
same things every day, then you notice if any changes you've
made have been for the better or to the detriment
of your performance. The other thing that I
think is really important when we talk about
performance-- so I'm going to go into my
console commands, hit stat FPS so we
can talk about this. So there are two numbers
when you do stat FPS. There is the frames per
second and the millisecond. And when we talk
about it colloquially, we'll talk about
frames per second. We want to hit 30 frames. We want to hit 60 frames,
or something like that. But I think milliseconds are more important, because the difference between 20 frames per second and 30 frames per second is about 17 milliseconds. The difference between 30 frames per second and 60 frames per second is also about 17 milliseconds. But the difference between 60 frames per second and 75 frames per second is only about 3 milliseconds. And so saying that,
oh, this change we made got us 50 frames back
or 20 frames back, that number means
different things because the amount of time it
takes to calculate that frame changes. So I like to talk
about performance in terms of milliseconds,
because this number is a straight line. So if I look at
something and I say, hey, this is going to
save us 2 milliseconds, or this costs us 2
milliseconds, that's always going to be 2 milliseconds. Whether that's the
difference between 20 frames per second and 25
frames per second is variable. So I can always look
at milliseconds, and I know that's
just sort of true.
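To make that concrete, here is a quick back-of-the-envelope check (not from the project, just plain C++): frame time in milliseconds is 1000 divided by the frame rate, which is why the same frames-per-second gain is worth very different amounts of frame time at different frame rates.

```cpp
// Standalone sanity check: frame time (ms) = 1000 / frames per second.
#include <cstdio>

static float FrameTimeMs(float Fps)
{
    return 1000.0f / Fps;
}

int main()
{
    // The same 10-15 fps gain means wildly different savings in frame time.
    std::printf("20 -> 30 fps saves %.1f ms\n", FrameTimeMs(20.0f) - FrameTimeMs(30.0f)); // ~16.7 ms
    std::printf("30 -> 60 fps saves %.1f ms\n", FrameTimeMs(30.0f) - FrameTimeMs(60.0f)); // ~16.7 ms
    std::printf("60 -> 75 fps saves %.1f ms\n", FrameTimeMs(60.0f) - FrameTimeMs(75.0f)); // ~3.3 ms
    return 0;
}
```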
The other thing to keep in mind is that performance is not a menu. I don't think of
performance as a menu. Like, oh, yes, we can
have 10 characters, and then we have to stop. Perf is a pie chart. And it's a reflection
of your values. So maybe you're
working on a game, and you know that
you really only need a few thousand
triangles to represent your entire environment,
because you're going to spend a bunch of
time with your lighting or your post-processing. And that's a decision
for you to make. So in this context--
this is, again, why I like talking about
this scene in context-- in this context we
know that we want to focus on the environment. We want to highlight the quality
of Quixel's assets, which is very high. And so we're going to be
focused on the environment and showcasing the
environment assets. So that means, yeah,
our triangle count is going to be
really, really high. And maybe we can
make up for that somewhere else that
doesn't affect performance. The other thing I really
want to talk about is like, I am not really going
to be doing anything today that is so drastic as
like we're going to take all of our
textures, and we're going to down res them to 1024. So I'm not going to rip out
instructions out of a material. I'm not going to reduce the
triangle count of anything. Because I want to maintain
visual quality for as long as humanly possible. If you get to the
point of like, man, we tried everything,
every single thing, and we just can't
get the triangle, or we just can't get
the frame times we need, maybe it's time to look
into triangle reduction. Maybe not. But I think for
the most part, we can get there without it,
especially in this scene. The other thing to
keep in mind as you're looking at performance
is the amount of effort you are putting
in, versus the number of pixels that are affected. Because one of the big
things about performance is that if I can
make an optimization to a post processed material
that saves to one instruction, that's one instruction. I don't have to run for every
single pixel on the screen. But if I have this one unique
asset that's like this rock off in the corner,
I'm not going to focus on making this the most truly
optimized I possibly can, because it's only right there,
and it's only there one time. So let's take a
look at the scene. Let's run some perf. Let's do that. So I have set up a
blueprint, and this blueprint has a bunch of
perf cameras in it. And what it'll do is
it'll go through-- we're going to
launch real quick. This blueprint will
go through the scene. It will move from camera,
to camera, to camera. It will run a GPU trace
using Unreal Insights, and we'll go back and
look at that trace. And then we can get the
average of the frame time for each of those cameras. And I'll show you what that
looks like when this builds. VICTOR: And Matt,
can you explain now why you're picking launch
rather than plain editor? MATT: Yes, so
again, this goes back to the consistency question. As long as you are-- I like to compare
apples to apples. So if I want to make sure that
the scene is running at frame rate, I know that I'm going
to launch it through Unreal, as opposed to play in
editor, because there's a little bit of overhead when
you're running the editor. So if I want like a
ground truth, if you will, of performance, I'm
going to look at it here. And the other thing
I'm going to do is I'm going to
full screen that. So I have a blueprint
setup in this scene. And again, this
is a sample scene that will be released
at a later date. So this blueprint is
going to be in there too if you want to take a look. And I can just do a
ke * StartPerf. And ke * is just going to call the custom event, StartPerf, on all of the blueprints
that have that custom event in the scene. There's only one. So I can hit this. And we can go. And we can take a look at that. So I've got that stat
FPS up in the corner. I'm running in Unreal trace. I'm doing an Unreal
Insights trace on the GPU, and I'm gathering a screenshot. And the other cool
thing is that, because I have all these
screenshots, if I wanted to go back and do like a little
progress GIF, it's built in. I get that for free
from doing this. And so we can see
like, hey, we're running 43 milliseconds, for
the most part, and we're done. So I only have
about eight cameras. So now that I'm
done with that, I'm going to Alt-F4 out of there. And that's going
to finish running. And I'm going to
have Unreal Insights. So Unreal Insights is a thing
that ships with UE 4.25. We've made some
improvements in 4.26. They're super, super cool. And I'll go into
that in a second. Because we're running
off of GitHub, I built this through
Visual Studio. And we've got instructions
for that on our website. So in Unreal, when you run
a trace, it will output a-- excuse me. It'll output a trace file. I'm going to go, I'm
going to open that file, because I know where it is. Livestream, Saves, Stage
Builds, Windows Editor, Medieval Village Saved. Where'd that go? There we go. OK. Is that my perf? That's not my perf. Where did it go? Oh, no. VICTOR: We can
mention, if you're curious about how Matt is
using Unreal Insights here, there's plenty of
documentation out there. There is documentation on
how to use Unreal Insights, to sort of gather
the trace and use it, exactly what Matt's
doing right now. MATT: So I cannot
find the trace file from that session. That's OK, because
I have backups. JAKOB: Very
smart thing to have. MATT: Yes. VICTOR: It's been
cooking all night, right? JAKOB: Yep. MATT: All right, so
I'm going to grab this one. And this looks
like a lot, right? I'm not too worried about it. So this is going
to show you every-- this is the big full trace. This is every single
trace, every single thread, every single call. But we're focused right
up here on the GPU. And what I did in
my blueprint is I call trace.start
and trace.stop. And that means I can
look at this and go, all right, this was camera one. This was camera two. This was camera three,
camera four, and so on. And what I can do is I can
Show All Columns, View Column, Coverage Inclusive Time. And if I highlight this
little section here-- Collapse GPU-- I can look
over here and see the GPU. So the average for that
was 48.64 milliseconds on camera one. I can look over at camera two. Collapse that. Again, 60.59 milliseconds. All right, so 48.9. So I can see that
each of these frames took a little bit
of time to render. And I can keep that in mind. And I can even dive in,
and I can see, all right, so the pre-pass
was 3 milliseconds. Shadow depths was whatever,
13.2 milliseconds. And this gives me an idea. This gives me a place to look. And it also gives me metrics
to which I can compare. And I can go back tomorrow
after we make these changes, and I can run this
test again, and we can see whether or not that
was a net gain or a detriment to perf. And these things match up with-- and after we open the map here. But those numbers will
match up with the numbers that we see in the editor. And I can show you. Ooh, ha, I knew this
was going to happen. All right, so we're
going to close that. VICTOR: Welcome
to the club, Matt. MATT: We're
doing it live. VICTOR:
We're doing it live. So that's not the first time. It's not going to be the last. In the meantime, mind answering? Oh, actually that's
loading pretty quick. You know what? You're going to be
up and running here. MATT: I also
got to open the map. So I'll take a question. VICTOR: All right,
all right, all right. Let me get the right doc here. Let's see, there was
one that was related. Oh, we did get
the question again if the demo scene will be
available for download. I think it's worth repeating
that ultimately this is an example scene that will be
available for free for download using both Megascans assets
and some other assets that are permanently free
on the marketplace. MATT: Yep. VICTOR: Let's see. It's still up. MATT: That's I
think my favorite part about this project is that
once we're done, all of this is going to be freely available. JAKOB: Yeah,
it's essentially going to be a huge learning
opportunity for people to just really dive into
and dissect everything that's going on in the scene. And there's a bunch
of cool tech in there. So we're talking-- I mean,
obviously your performance blueprints. We have volumetric clouds. We have in engine
created assets in there. So this scene is going
to be fun to dive into. MATT: I'm partial
to the landscape material. I think it's really cool. [LAUGHS] JAKOB: Oh, the
landscape material is the best. MATT: But that's
a different livestream. All right, so we're back. And so all of those numbers
we saw in Unreal Insights map to the numbers that we see
when we run the console command stat GPU. Right? So I see here's my total. Here's my shadow depths. Here's my base pass. But you'll notice
that the average here is only 30.5 milliseconds, and
we were seeing 40 milliseconds. Well, Matt, what
was going on there? Well, I'm rendering
a much smaller frame. And that's why I
think it's so, so, so, so, so important that when
you're measuring performance you are consistent. I know that I am
rendering at 1920 x 1200. Those are my numbers. And so if I go to
compare the shadow depths value in this shot to
something that I was seeing in one of my perf cameras, I'm
not comparing apples to apples, and the numbers won't match up. Now, if I make a change here and
I don't move my camera at all, then I can see how it changes. So the other thing to
keep in mind, there are a number of
commands to look at. Like I said, we're focused
on basically in stat GPU. I'll turn that off. But the other things are
like, we got stat game, right? I can see what my
world tick time is. I can see all the transforms,
and all that fun stuff. I've got stat RHI. And we'll talk a little
bit more about this, but this shows me
the draw primitives. It shows me the triangle counts. It shows me the memory load of
everything, and the render thread commands. There are a lot of different
places to look for perf. So the F draw scene
command, you'll notice that this number
looks really close to the one that we see in stat GPU. And that's what that is. And so what I like to do
when I see performance-- Oh, now, let's talk about
the blueprint first. So I have a sublevel
called Performance. And that has all of
my perf cams in it. And that has all of
my perf cameras in it. JAKOB: Are you
searching for the app level? MATT: Oh, no. I'm searching for the folder
to just show all of them. There we go. And these are just
camera actors. And I really use
them for placement and piloting in
their transforms, and they all have an actor
tag, which is perf cam. And I have a blueprint
called BP perf analysis. Let's take a look
at that real quick. Drag that onto the screen. So on construct-- on BeginPlay-- it goes through, and it gets all actors with the tag, perf cam. And I'm only doing this once,
so I'm not worried about this. The thing to keep in mind
is that the order of this is not necessarily
deterministic. So what might be called in
the editor camera actor two-- and it should be
the first camera, in my mind, that might end
up in a different order for various CPU reasons. So I have a sort
perf cams by name. I will probably optimize this,
but for our current purposes-- for current purposes
I get the number from the end of the camera's
actor name, and I bubble sort. And it's not the most
cool, but it does give me a deterministic order. And when I go to release
the game, this is probably-- quote, unquote
release the game, I wouldn't be including
all of my perf stuff. This would all be
hidden in debug. And then start perf is
that console command we called earlier. ke * StartPerf
calls this function. Run stat FPS sets
my index to zero, and we're off to the races. And it will grab that index. It will get the camera at
that index in the array, move the camera to it,
and run start trace GPU so we know that we're
starting a GPU trace on that, wait two seconds, stop,
take a screenshot, let us know that it happened,
wait a little bit longer for the camera to settle out. Cause sometimes if
you call shot and then you immediately move on, you
might get like a blurry frame. And we don't want that. Because then we don't
know what we saw. And then if we're
supposed to keep going, because I've
also set this up to work on a single camera, if we're
supposed to keep going, make sure that we're
not at the end. And it will either
continue, or it'll stop. And all of these are console
commands that I can call. So I can call SetCam.
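If it helps to see that flow as code rather than Blueprint nodes, here is a rough C++ sketch of the same idea. This is not the actual BP_PerfAnalysis blueprint that will ship with the scene; the APerfAnalysis class, its members, the simple name sort, and the two-second wait are all illustrative assumptions, and it leans on the same trace.start, trace.stop, and Shot console commands described above.

```cpp
// Hypothetical C++ version of the perf-camera loop, for illustration only.
// Assumes an actor class with these members:
//   TArray<AActor*> PerfCams;  // camera actors tagged "PerfCam"
//   int32 CamIndex = 0;
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetSystemLibrary.h"
#include "TimerManager.h"

void APerfAnalysis::StartPerf()
{
    // Equivalent of triggering the StartPerf custom event via "ke * StartPerf".
    UGameplayStatics::GetAllActorsWithTag(GetWorld(), TEXT("PerfCam"), PerfCams);

    // Simple name sort for a deterministic order (the stream version parses the trailing number).
    PerfCams.Sort([](const AActor& A, const AActor& B) { return A.GetName() < B.GetName(); });

    CamIndex = 0;
    RunNextCamera();
}

void APerfAnalysis::RunNextCamera()
{
    if (!PerfCams.IsValidIndex(CamIndex))
    {
        return; // walked every camera, we're done
    }

    // Snap the view to the current perf camera and start an Unreal Insights trace.
    GetWorld()->GetFirstPlayerController()->SetViewTargetWithBlend(PerfCams[CamIndex]);
    UKismetSystemLibrary::ExecuteConsoleCommand(GetWorld(), TEXT("trace.start"));

    // Let it settle for a couple of seconds, then stop, screenshot, and move on.
    FTimerHandle Handle;
    GetWorldTimerManager().SetTimer(Handle, [this]()
    {
        UKismetSystemLibrary::ExecuteConsoleCommand(GetWorld(), TEXT("trace.stop"));
        UKismetSystemLibrary::ExecuteConsoleCommand(GetWorld(), TEXT("Shot"));
        ++CamIndex;
        RunNextCamera();
    }, 2.0f, false);
}
```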
So maybe I was looking at my perf data, and I saw that camera one, or
camera index one, AKA camera two, was at like 60
frames per second. So what I can do is I
can hit Play, cause I want to go to that
camera, and I want to take a look at some of
the issues that it's having. So I will ke * SetCam 2. Aha, I've already
off-by-one-ed myself. ke * SetCam 1. All right, so this was
a shot that I noticed was particularly heavy. I should also note,
in Jakob's defense, I have turned some
things off, optimizations that he had already done. JAKOB: No, no, no, no. MATT: The numbers
you see here are staged. So I can look at this,
I can look at stat GPU, and I start my like
Dr. House level of differential diagnosis. I look at the
highest number, and I see that the highest number
we're seeing is shadow depth. And this is going
to be a lot of fun. So shadow depths is
basically rendering all of our dynamic shadows. And we have a lot
of dynamic shadows. And I see this a lot where the
questions may either be, hey, my shadows disappear
after a certain distance. Or the shadows really
close to the camera are super, super blurry. What do I do? And this all falls under the
realm of cascaded shadow maps. And these are, basically
we are rendering from us, from the camera
forward different shadow maps. And these are-- they
might be 1024, 2048, or something like that,
and they cascade out to a certain distance. And the system has a way to
break up where the first cascade ends-- X units away-- and it all fits into the
dynamic shadow distance. So now that I know that my
shadow depths are super, super high, I'll stop playing,
and I'll go look at the directional light. Directional light. So I know that this is
going to be casting shadows. It's movable. I know that it's casting shadows
because that's our sunlight, and it should be
casting shadows. So there's a couple of things
that are happening here. The dynamic shadow
distance is 30,000 units. That means we are running
dynamic shadows all the way out to 30,000 units away from
us, 30,000 units away from the camera. And that is really, really far. VICTOR: Right. MATT: That's from here-- gosh, what is this asset at? This asset is 920 by that. Way out into the distance. Grab a cube for distance, right? So this cube isn't even 30,000
units away from our main level. So we're rendering dynamic
shadows all the way out here. Well, that's scary. Which, hey, maybe we
don't need to do that. And again, the focus
here is let's try to improve performance without
sacrificing visual quality. And since I know
that we're not really going to be seeing dynamic
shadows out to that distance, I can bring that number in. So I'm going to
leave stat GPU on, and we're going to go
down to our camera. Cause again, we want to
compare apples to apples. All right, and I'm not going
to move my camera from here, so we can see these
numbers change. So I'll come down
here, and I'll say, let's move the dynamic
shadow distance into like 15,000 units. And we see that number
start creeping down. OK, so that saves
me 4 milliseconds. And I don't notice a difference. JAKOB: The number
can be a big deal. MATT: Right? That can be make or break. Now we got to keep going. JAKOB: No,
I've done that. MATT: So the number-- in my experience--
your mileage may vary. In my experience, the
number of cascades has a bigger effect
on performance than the size of the
cascades or the distance. So if I crank this up to 10-- because, again, this goes
back to how many operations are you doing over, and
over, and over again? And if we are doing 10 shadow
cascades every single frame, that's going to cost a lot. But if we are doing
one shadow cascade with a higher resolution,
that might be cheaper. So 10 shadow cascades cost us-- that we went from 16 to 23. That's 8 milliseconds. If you're targeting a 60
frames per second game, that's half of your draw
budget, and we haven't even started talking
about characters. And we haven't even
started talking about particle effects,
or anything like that, all the dynamic
stuff that happens. And that's another thing I
should've mentioned at the top. When we're talking
about environment, I like to keep my number
like a few milliseconds lower than the overall
target for the game. So if I'm targeting
a game that's running at 30 frames per
second, that's 33 milliseconds. My environment target might be
25, 27 and 1/2 milliseconds. Again, it depends
on do I know if I'm going to have 1,000
characters on screen or if I'm only going to have 5? And these are value judgments
you have to make for yourself. So let's say I'm only
going to need one cascade. And we can watch
that number drop. Oh, wow. OK, so with two value changes
just on the directional light we've gone from 20 milliseconds
to 11 milliseconds. Haven't changed the camera. So I know that these
numbers are relatively correct to each other.
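For reference, the two changes made here are plain properties on the directional light component, so they can also be applied from code. A minimal sketch, assuming you already have a pointer to the movable sun light's component; the setter names are the real UDirectionalLightComponent functions, while the helper and the specific values are just the ones tried on stream, not recommendations.

```cpp
// Hedged sketch: the same two cascaded-shadow tweaks made on stream, applied in C++.
#include "Components/DirectionalLightComponent.h"

void ApplySunShadowSettings(UDirectionalLightComponent* Sun)
{
    // Pull dynamic shadows in from 30,000 units to 15,000.
    Sun->SetDynamicShadowDistanceMovableLight(15000.0f);

    // Cascade count tends to cost more than cascade distance or resolution, so drop to one.
    Sun->SetDynamicShadowCascades(1);
}
```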
So the other thing you can do: maybe you do want shadows that extend
beyond 15,000 units. And that's where distance
field shadows come into play. Now, keep in mind, when you
generate distance fields for your meshes, that
is a project setting. And that has to happen for
all of your static meshes. So there will be an
additional memory load there, but we're now going
to be calculating shadows against the
static distance fields. Auto save, of course. And those are a little
cheaper to calculate. And so the distance
field shadow will basically-- this distance
will cover everything from the end of your dynamic
shadows up to this distance. And then beyond that
will be the far cascade. And effectively, the far cascade
is an opt in shadow cascade. If I have a giant
pillar that I need to be casting a shadow
across the entire map, but the giant pillar is
a bajillion units away, I might opt that into the
far cascade and nothing else, and work with that. So the far cascade
settings are down here. But again, for our
purposes we don't really need a far cascade. We can cover all of our
shadows within, say, with one cascade at 15,000 units. And it looks like
distance field shadows didn't have a huge effect on
our perf, positive or negative, which is great. I just turn that. All right, so that was 8
milliseconds right there. Everything's going great. And you know, there's
probably going to be a little overhead
here too for the GPU, because I'm streaming
off of this computer. My GPU is also doing a
bunch of work to do that. So that's something
to keep in mind too. OK, so let's move on from
that, because the next thing I noticed is that my
base pass and my pre-pass are really high. And this is where
we talk about, this is your number of draw calls. These are your triangle counts. These are your shaders. That's where all of
this is coming from. So let's take a look at that. So what do I mean
by a draw call? You can think of a draw
call as the same geometry with the same material. And the reason I say geometry
is because vertex colors are part of geometry. So that means if
you have two rocks and they are the same
static mesh asset, but you have applied different
vertex paints to them, they are different draw calls. And this is all within the
same like static mesh actor. And we have added
dynamic mesh instancing, which is super, super useful. That was added in 4.22. And that gives us
the ability to-- it tells the CPU to
intelligently group draw calls together. And the easiest way
you can inspect that is with stat RHI, our Render
Hardware Interface for those interested. And again, this shows
us our draw calls. So draw primitive
calls, almost 14,000. That's a lot. My advice is usually if
you're targeting something like the Xbox One, that
should be maybe 2,000. Higher end, maybe
you have a bunch of stuff in a different
quality setting. Maybe you can go
up a little higher. So on my machine, again, I can
process a lot of assets, right? But we can see what happens
if we do r.MeshDrawCommands.DynamicInstancing, and we just set this to zero. So watch that draw
us almost 11,000 draw calls, which is huge. And it may be more huge. It may have more of an
effect on GPU performance on lower end hardware. Again, we're running
beefy, beefy hardware. JAKOB: What
are you running, Matt? MATT: I am
running a 2080 RTX. I've got 128 GB of RAM. And I've got some
monster server processor. This is a very, very
powerful machine. And I'm glad to have
it, cause for some of our licensees
I'll be answering ray tracing questions. Or I will be working on
a very, very heavy scene. And it's really nice to
have that horsepower. OK, so I know that in my
base pass, so in this shot, base pass is 4 milliseconds,
shadow depth is 9 milliseconds. So one of the
things that I can do to reduce the
number of draw calls is we have to think
about occlusion and how the draw thread prepares
all of the data for the GPU. So the first thing it does is
it looks at everything's cull distance. So every actor has
a max draw distance. So I'll go grab that. And I can go here,
max draw distance. And I can say,
hey, so this is set at 0, which means
that as far away as I could possibly get from
this thing, 50 bajillion units, right? This draw distance
is going to be zero. And zero means don't
worry about it, which means we're always
going to be drawing it. And that's not necessarily
a good thing, especially the further and further
away you get. Because one of the big
things about GPU performance is, as the size of the triangles
get smaller, and smaller, and smaller, the more
expensive it gets for the GPU to render them. So if you have a triangle that
is smaller than a single pixel, the GPU is actually rendering-- let me see if I get this right. It's effectively rendering four
pixels around that triangle, and if you have like
four subpixel triangles all next to each
other, it's now having to render those same
pixels 16 times, which is super expensive. Again, the more operations we do
all at once, the more expensive it's going to be. So the things that
I can do to reduce the number of draw
calls, right off the hop, I can set max draw distances. I can set that really easily. And I can do that with
cull distance volumes. So that's just under Place
Actor, Cull Distance Volume. I can drag that into the world. And what this does is it
gives me options for sizes. And this size, keep in mind,
is the largest single dimension of the bounding box. So if you have an asset that is
like 10 units tall, by 10 units wide, by 1,000 units long,
that's going to be cut-- that's going to be hit by
anything that is 1,000 units. If I have something that is
50 by 50 by 50, that's 50. So maybe I'll say
everything that is 50 units, I want to start culling
that at 1,000 units. I'm going to make that
cull distance volume super, super big. All right, so I'm going
to make that 15,000 units, because that's
basically the size of my map. And you can have a bunch
of these in your level, and these will be-- basically anything
within this volume will be affected by
the cull distances. And so maybe I want
to start culling out stuff that is 50 units
after 1,000, 1,000 units. And maybe I want to
start culling stuff out that is 200 units or
larger after 2,000. And, again, this is all going
to be a value judgment for you.
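As a reference for what those Details-panel entries map to, here is a hedged C++ sketch of the same setup. ACullDistanceVolume and FCullDistanceSizePair are the real engine types; the helper function and the exact values are just the ones being tried in this scene.

```cpp
// Hedged sketch: the cull distance pairs configured on stream, set up in C++.
// Size is the largest bounding-box dimension an actor needs to match the pair;
// a CullDistance of 0 means "never cull".
#include "Engine/CullDistanceVolume.h"

void ConfigureCullDistances(ACullDistanceVolume* Volume)
{
    Volume->CullDistances.Empty();

    FCullDistanceSizePair SmallProps;
    SmallProps.Size = 50.0f;
    SmallProps.CullDistance = 1000.0f;   // cull ~50-unit props at 1,000 units
    Volume->CullDistances.Add(SmallProps);

    FCullDistanceSizePair MediumProps;
    MediumProps.Size = 200.0f;
    MediumProps.CullDistance = 2000.0f;  // cull ~200-unit props at 2,000 units
    Volume->CullDistances.Add(MediumProps);

    FCullDistanceSizePair BigStuff;
    BigStuff.Size = 500.0f;
    BigStuff.CullDistance = 0.0f;        // anything 500 units or bigger never culls
    Volume->CullDistances.Add(BigStuff);
}
```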
I should also note that this will only take effect if you have Game View enabled. And it looks like we've culled
out a bunch of our trees already. So that's good to keep in mind. So if you had a
game view-- JAKOB: It looks like
you've affected pretty much the entire foliage [INAUDIBLE]. MATT: Right, so
I think that's because we have this size zero
cull distance zero. So I'm going to
make that 500 units. Yeah, there we go. The trees are back. So
maybe I want to start culling everything that is
bigger than 500 units at 5,000 units away from the camera. Aha. Maybe I want that to be 15,000
units away from the camera, or 20,000 units away. Or maybe I want everything
that's bigger than 500 units to never cull. And that's what I
think I'll do for now. So again, so I moved
my camera around. So this number is not going to
be perfectly accurate, right? All right, so with that,
that's at 13,000 units if I set that to 0. So that was 1,000 draw calls. And we're looking
at the frame, and I didn't see anything change. Did you see anything
change, Jakob? JAKOB: No, not really. MATT: Great. So yeah, let's say I
leave that at 2,000 units. Shoot, that's 1,000
draw calls right there. Hey, maybe that's
half your budget. Great. So that's one quick
technique to reduce the number of draw calls. And I'll show you with
our good pal RenderDoc why that can get
super, super scary. So I have, over here on the
other screen, RenderDoc. This may look scary, but
I think it's a lot of fun. So currently I'm having
a little bit of trouble making the packaged build. Obviously we will have
this fixed before we release the full sample. But when I do the launch
it will make an executable in saved stage builds. So I know I can launch that. And I'm going to say, hey,
when I hit launch, open the map medieval village block out. And while we're at it,
just enable Unreal Insights with a GPU trace. I'm not going to launch that. I already have a capture. Oh, big, big, big
important note. If you were launching
with RenderDoc tracing kind of
attached to the process, make sure you Capture
Child Processes enabled. The other thing is if you were
targeting DirectX12 hardware, make sure that you have
developer mode turned on in your window settings,
otherwise none of these things will show up. So what we can see here is all
of the commands that got sent to the GPU to draw the frame. So this is after call
distances have been checked. This is after frustum
culling has been done. And I can talk about
that in a second. But we can go in here, and
we can look at base pass. And this will show
us all the stuff that happened in the base pass. And I can see the straw material
drew the static mesh, SMR car test 11 times, 11 instances. This is the dynamic
mesh instancing happening all at once. Scene color deferred. I'm going to overlay
with wireframe mesh. We're going to scroll down. We can see all of
our stuff start to show up in our
different buffers. And as we go further
down the command list, more stuff is getting added. Whoops. We're slowing down. JAKOB: Oh, no. Oh, no. MATT: We're
doing all right. We're doing all right. JAKOB: But this is
really the beautiful thing, because for me as a sort of
just purely visual artist who never really needed
to, quote, unquote, worry about performance, because
my real time was Sequencer real time compared to actual
real time, this Insight, even for me who works now at
Epic Games, is super valuable, in seeing how you can actually
deconstruct what your scene is doing. And, of course, it crashed. MATT: Right,
too many triangles. Correct. JAKOB: This is
super, super important and super valuable,
to sort of get a breakdown of the tools
that are available, and how to use them to
their full potential. MATT: So
one of the notes that I would like to make right
off the hop is when you are attaching profiling tools to
your game or to your editor, or something like that, there is
going to be a bit of overhead, because when we attach the
process we're basically saying, hey, I want the verbose
log of all the render commands. And that's going to take
a little bit of extra time to calculate. So in some situations
it may not be too bad. In some situations, it may be
many, many, many, many extra milliseconds spread across
each of your draw calls. So all right, let's
jump back in here. So right, I see there are many,
many, many, many, many, many, many, draw calls. And that's OK, cause we're going
to start getting them down. But what I wanted to show you
very, very briefly in here-- let's go all the way to the end. Let's maybe look at that. And let's look at triangle size. Oh, it does not want to
look at triangle size. That's OK, we can
do this over here. So we can see this in the quad
overdraw optimization view mode. So what this is showing us-- VICTOR: All right,
sorry about that, everyone. Quick little audio
interface crash. And if you watch the
stream, it has happened. I'm still waiting
on my new interface. It's difficult now
in the pandemic times to get the kind of
hardware you need. But we are back, and Matt is
ready to show you everything about overdrawing. MATT: We're back. OK, so we're going to
talk about quad overdraw. So back in the days
of forward rendering when we had to render basically
everything from back to front, where if you had a giant fog
plane that would have to render on top of everything
behind it, you'd have to run the pixel shader
over, and over, and over again. And there's certainly
still some of that, right? This is one of our
fog particle effects. And we can see that that's
causing a lot of overdraw. But what this view shows
you is the number of times each pixel is getting rendered. And we can see that in
a lot of situations, each pixel is getting
rendered four times, right? And what this is
is small triangles. And one of the best ways you
can combat that is with LODs. And what I love, love,
love about Unreal is that back in the
day, I had to think about making sure that all of
my LODs were perfectly setup. Of course, with
Quixel, you get LODs for everything, right out of
the box, right out of Quixel. But maybe you were
working on a custom asset, and you were building
something yourself, and you haven't
thought to make LODs. Or maybe you just want to
make sure if LODs do affect or do improve your
performance, you can generate them
automatically in Unreal. This is I think one
of my favorite things. JAKOB: And I
think one of the-- sorry for sort of jumping ahead. MATT: No, no, go ahead. JAKOB: I think it is
genuinely one of the most overlooked core
features that will help you deal with pretty
much any asset, whether it's a Megascans asset and
you bring in high poly data for a cinematic, but
you want to still crunch that down a bit. The automated LOD setup,
and the reduction settings, so the non-destructive reduction
power of the static mesh editor is insanely useful. It is brilliant. And so many people don't
really either know about it, especially when they start
getting into Unreal Engine, or they just don't use it. But you should use that
every single time you get the chance to. MATT: Yes. JAKOB: It
can single handedly get you a quad overdraw
from, let's say, four times or even seven times
drawn per triangle, to easily down to two or one
without any fuss of creating manual LODs and
importing all of that. MATT: Right, and
maybe for your hero assets, maybe for some assets that
are more important, or maybe for various computational
algorithmic reasons, your LODs aren't up to
your level of quality. You can always import your own. Or maybe you import your LOD
one, but not your LOD two. But it is truly as simple as
setting the number of LODs. And this will automatically--
you know, if you have one LOD, so that's LOD 0, and
you want another one, you can turn that number to two. And we're going to
spend a little time. And we're going to spin up-- boom-- another LOD for you. And all of those reduction
settings are right here. Percent triangles is going
to be percent triangles at the last level, and
so on and so forth. And these are all,
back in the day it used to be distance based. Nowadays it's screen
size based, so based on how big that thing is on
the screen, which LOD are we going to pick. So up in the corner, I can
see all the way back here, this is LOD 6. Its screen size is 0.17. And now it's only 40 triangles. And that's great,
because I wouldn't want to render this
triangle this far away. So we can see that looks solid. I only want to
render 40 triangles. I don't want to
render all of them. Because that's not
detail you can see. And I think that's one of
the things I've heard before, that, oh, but this is
going to reduce my quality. Well, do you see that
from that distance? Not really. JAKOB: And
the quality reduction is something that-- or the
supposed quality reduction is something that I was
afraid of starting out, but especially my workflow when
I bring in assets is I usually just bring in the
Megascans LOD 0, which is a very high resolution mesh. It is up there. And then just in the editor,
I start bringing down the reduction settings from the
original 100% triangles down to 50%, down to 35% of the
original triangle count. And you really don't
see any difference, unless you start
rendering out in 4K or 6K, or do burnouts in
6K to render down. MATT: Right,
at which point you are rendering at
a higher resolution. And yes, you may
see that detail. And then, suddenly, that is not
a sub pixel triangle, right? JAKOB: Yes, exactly. MATT: So
one of the things I hear, one of the
concerns I hear is, Matt, the number of LODs,
that setting is not exposed to the bulk property editor. And you're right, it's not. But you know what
is? LOD Group. So LOD groups are
also super useful. These are prebuilt
settings for all of the different
LODs for how you want to set up your LODs
for different settings. So a large prop, a small
prop, a Vista asset, an architectural
asset, all of these are built into the editor. And either I have shared those
documents in the forum post, or I will after the livestream. And these set these
numbers automatically. So a foliage asset
may have eight LODs. A small prop may have
two LODs or four LODs. So if I go, select all
these assets, right click, Asset Actions, Bulk Edit
via Property Matrix, it's right there. And I can select all of these. And I can just type small prop. Or I might select those three
assets and go to LOD settings, and I might type
small prop over here. And now it's going
to go through. Now, everything is going
to be set up to small prop. And it generated all
of the LODs for it. And it did all of the
other reduction settings. And it did the auto
screen size settings. And it took 30 seconds.
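For anyone who would rather script this than use the property matrix, the same LOD Group assignment can be done in editor-side code. UStaticMesh::SetLODGroup is the real call; the helper function and asset list here are illustrative, and the group name has to match one of the groups mentioned above (small prop, large prop, foliage, and so on) defined in the project's static mesh LOD settings.

```cpp
// Hedged editor-side sketch: bulk-assigning an LOD Group, which regenerates LODs
// using that group's reduction and screen-size settings.
#include "Engine/StaticMesh.h"

void ApplyLODGroup(const TArray<UStaticMesh*>& Meshes, FName LODGroupName)
{
    for (UStaticMesh* Mesh : Meshes)
    {
        if (Mesh)
        {
            Mesh->Modify();                  // dirty the asset so the change can be saved/undone
            Mesh->SetLODGroup(LODGroupName); // e.g. "SmallProp" -- rebuilds the LOD chain
        }
    }
}
```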
JAKOB: Yes, that is the big point with the sort of in
engine LOD crunching, and poly count
reduction settings is they're incredibly fast. It really takes--
I mean, you've seen it changing three
relatively high quality assets to completely shuffle
their settings around. What was that? A second? Two seconds of actual
computation time? MATT: Mhm. JAKOB: It's so quick,
and it's an absolute non hassle to work that way, to
optimize assets that way. MATT: Right,
so that's huge. LODs are huge. And again, the thing
that I want to focus on is rendering the detail
that we can actually see, because so much
of our frame time is probably spent rendering
detail that we can't see. And that's the crux
of this is I want to render the details
that we can actually see. Speaking of details
we can actually see, I made a little test map. I want to show you
guys this test map. So one of the things
you can do-- hey, I'm going to save this. One of the things that you can
do with your landscape material that I have a lot of fun doing
is using landscape grass types. And landscape grass
types basically say, if I sample this layer,
say grass, put grass fully, put this foliage asset
Where I tell you to. Oh, did it not do it? Oh, it didn't save right. JAKOB: Oh, no. MATT: We're doing
foliage enabled. OK. Find parent. Oh, maybe it's thinking. VICTOR: Computers
have emotions too you know. MATT: It's true. I love my computer. I hope it's doing well. JAKOB: You need
to hold them very dear and talk sweetly to them. VICTOR: The same way
you should talk to your plants. JAKOB: Yep. MATT: My plants
are very important to me. Almost. Oh, yeah, I think it's thinking. VICTOR: I mean, your
fans in your power supply, they like carbon dioxide
just like your plants, right? You know, someone back
in school told me that. I'm sure it's still true. MATT: Yeah, definitely. I've got a gas generator
running this computer right now. Did we mention the
storms outside? JAKOB: I was
just about to ask how it's looking on your end? VICTOR: I don't think we
mentioned, the trees are going way back and forth there
on the East coast here and close to Raleigh. MATT: Yep. They're coming in. OK, it's happening. All right, so aha ha,
ha, there it goes. JAKOB: Perfect. Brilliant. MATT: That's
what I'm looking for. Right? So one of the things that I set
up on this landscape material is different foliage types,
different landscape grass types per layer. And for the purposes
of demonstration, I have turned off the cull
distances on all of them. And so we can see way
out into the distance all of these tiny
little foliage. Oh, man. And again, that's a
detail that I can't see. And so what I can do, I can go
to my landscape grass types, grab this one. So this is a landscape
grass type asset, slightly different from a foliage asset. And we'll get into foliage
assets in a second. And so the start and end cull
distances are very, very high. So again, let's
do a little test. Let's come down here, go
into game view, stat RHI. Woo. Only 2,000 draw calls,
because of dynamic instancing. Let's see what happens
when we do this. r.MeshDrawCommands.DynamicInstancing 0. Ah, OK, so those are
all instanced together, which is fine, because
they were all basically part of the instanced
foliage actor of the level. But that doesn't mean I
shouldn't bring these down. So maybe I only want to render
these out to 5,000 units. So that brought everything
in, and it's probably going to go through. Yeah, it's got to
regenerate some stuff. It's got to think
about what it's done, because the settings
in here are also-- those are also settings
that focus on distribution. So it probably is
regenerating those. Yeah, there it goes. So now, that was dirt. And that dirt
layer, you can see, we're only rendering
dirt out to 5,000 units. And, of course, these
things have LODs, because really at
a certain point, we really can't see that detail. And, of course, this
is my little test map. So there would, of course,
be more environment art built on top of this that
would hide just the flat plane. But cull distances are
critical, because this is the first and cheapest way
to discard stuff from rendering. The other thing that
discards stuff from rendering is not something you
necessarily have control over, but that is
frustum culling, which we talked about earlier. And I can see that
with FreezeRendering. VICTOR: That's a
really useful command. MATT: So we can
see that one of the things that we immediately
frustum culled. So frustum,
the view, so that's the near clip plane, the far
clip plane, and your FOVs. And so that looks
at that, and it does I think bounds
testing to see if something falls within your frame. And it'll discard. It won't even send to the GPU
things that you cannot see. So stuff that's behind
you, not going to see it. If you have a big rock that
has a really big bounds, but it's behind other stuff,
it will get sent to the GPU. And then on the GPU, we will
figure out whether or not we need to do the
base pass for it. Excuse me. So frustum culling
basically said, nuke out. We're not going to render
all of this landscape stuff behind you. And, of course, you'll see
rendering frozen at the top there to let you
know that that's happening. Boom, we can pop
that back in, right? So now my frustum is
getting updated every frame. OK, so we're still focused
on draw calls, right? So I do know that
in my level, I'm just going to filter
all foliage here. So I have these
landscape grass types. I'm actually going
to filter those out. So I have static mesh
foliage and actor foliage, all great stuff. This is all placed. Jakob, did you hand place these? Did you paint these in? Or did you use foliage volumes? JAKOB: Both actually. Right now, the project, we have
a few larger foliage volumes that just helps you expand the
level beyond what makes sense to hand paint stuff in. But, of course, as soon as
we're talking about small set dressing and very specific
set dressing, [INAUDIBLE] the props, then we're not
using the volumes anymore, then actually starting
to hand paint these. MATT: Yeah,
right, because, again, artistic control is
really important. JAKOB: Oh, yes. MATT: So
the cool thing is is a lot of the settings
for static mesh foliage are bulk editable. So what I'm going
to do, again, you can pin properties
to add columns. And I will freely admit that
I hit pin cull distance. And I started clicking around. I'm like, I can't edit these. What's going on? I can edit these. VICTOR: It might be
worth mentioning a little bit about the bulk editing matrix. MATT: Yeah,
I mean, this is one of the things I
do as a tech artist. Oh, man, all that yellow
is really bouncing. Raytrace GI. JAKOB: Oh, yes. MATT: Yeah, so the bulk
edit by property matrix thing is really handy. One of the things I hate
to do as a tech artist is the same thing over, and
over, and over, and over, and over, and over again. My first tool was
just automating a thing that toggled collision
on and off, because I just couldn't bear doing it by hand. JAKOB: Sorry,
the small delay is-- MATT: Oh,
no, you're fine. So getting access to that
is as easy as selecting a bunch of assets. Go into Asset Actions, Bulk
Edit Via Property Matrix. And that'll bring up everything
that you have selected. And you can go in, and you
can select a subset of assets. So maybe I want to set the
cull distances on my fern. And maybe I'm going
to say, I want to cull ferns at 1,000 units. Boom. We pop ferns to 1,000 units. And maybe I want to set
dry plants to 500 units. Or maybe I want to go through,
and I want to grab everything. And I will say, you know what? I just want to do
some quick testing. Let's see how this goes. I can come in here and
I say, cull everything out at 5,000 units. It's going to take a bit,
cause it's got to go through and set those settings. And I'm just going to
set that to 4,500 just to be on the safe side. All right, so, again,
we looked at our scene. We said, you know, we're really
not rendering anything out beyond 15,000 units. And at a certain
point you're not going to see the
foliage anyways, or you're not going to
see the trees anyways. Or you're not going to
see the clovers that are on the ground cover all
the way out at a distance. And we can say those
can go away, or not go away necessarily, but we
don't need to render them. Again, only render the details
that you can actually see. And, of course, we're
going to save this.
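If you wanted to do that same pass in code rather than through the property matrix, foliage types expose the same setting. A rough sketch, assuming UFoliageType's CullDistance interval (Min is where instances start to fade, Max is where they are fully culled); the helper function and values are hypothetical.

```cpp
// Hedged sketch: applying the same start/end cull distances to foliage types in code.
#include "FoliageType.h"

void SetFoliageCullDistances(const TArray<UFoliageType*>& Types, int32 StartFade, int32 FullyCulled)
{
    for (UFoliageType* Type : Types)
    {
        if (Type)
        {
            Type->Modify();
            Type->CullDistance.Min = StartFade;    // e.g. 4500: instances begin fading here
            Type->CullDistance.Max = FullyCulled;  // e.g. 5000: instances are gone past here
        }
    }
}
```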
JAKOB: I think the Bulk Edit via Property Matrix is, I mean, not only
for, for example, setting cull distances. For me as an artist
you sort of talk about getting in the zone
when you, for example, edit a layer and start set dressing. You don't want to set
dress and every two minutes stop because you need
to change a setting and do something technical. You want to be in that
zone for as long as possible to sort of stay
productive and actually get work done. So what we do, for example,
is experiment a lot with different mip map settings
for textures, for example. So how can I reduce, for
example, texture size to just make sure that my GPU
memory just isn't completely overloaded with just loading
massive amounts of 4K and 8K textures, while still
keeping very high quality and crisp looks in
the actual scene. And so instead of you
going in and changing every single texture,
we're like, yeah, this works, and then going
back, no, this doesn't work, and now I'm stuck
editing textures for-- I don't know--
four hours instead of actually set dressing. It's way easier to just go into
the property matrix and just bulk change these,
especially once you've found a setting
that works, and you want to apply it to
previously worked on textures. VICTOR: Oh,
Matt, you're muted. MATT: And don't
forget, you can also set up your texture groups. JAKOB: Yes. MATT: Right? So you may have texture
groups in a world. I want to make sure that the
mip LOD bias on world is one, or something like that. I want to discard
the highest thing. You can do that here. You can set up your
compression settings. All of those, you
have a lot of control just by setting
the texture group. And, of course, what
I love about that is that it is effectively
non-destructive. You can go in,
and you can change all these different settings
for your texture groups without having to
go through and then manually update everything.
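As a sketch of that non-destructive idea: assigning textures to a texture group is a single property, and the group-wide LOD bias or max size then lives in the project's device profile settings rather than in each asset. UTexture::LODGroup and TEXTUREGROUP_World are the real engine names; the helper function below is illustrative.

```cpp
// Hedged sketch: bulk-assigning a texture LOD group, the same change the property matrix makes.
#include "Engine/Texture.h"

void AssignTextureGroup(const TArray<UTexture*>& Textures, TextureGroup Group = TEXTUREGROUP_World)
{
    for (UTexture* Tex : Textures)
    {
        if (Tex)
        {
            Tex->Modify();
            Tex->LODGroup = Group;  // group-level settings (LOD bias, max size) now apply
            Tex->UpdateResource();  // rebuild the texture resource with the new group
        }
    }
}
```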
So sometimes I might do this toward the end of a project when really I'm trying to
get a half millisecond back. And it's like, all
right, well we've got five 4096 normal maps. I'm like, well, maybe
those are a little big, and maybe we don't
really see those details. And maybe we can chop those
down to 1024s or 2048s. And that's, again,
because we're trying to get that little bit left. Right now we're
focused on a big chunk, and, again, how
much work do we have to do that affects
as many pixels as possible without affecting the
overall quality of the scene? That's what I'm
focused on right now. And that's what I think
a lot of people can do, and they can do really,
really quickly, right? I saved 8 milliseconds
off the scene just by not rendering shadows
that I don't even see, and now we're going from there. So I'm going to open up for
the next thing on draw call reduction. I'm going to open
up the other map. That might take a little bit. Victor, is there a question
from the chat that I can take? VICTOR:
Yeah, we got plenty. [CHUCKLES] MATT: Oh, boy. VICTOR: You're
going to be here a while. MATT: Oh, boy. VICTOR: Do large
textures in landscape layers impact performance? MATT: Sampling
large textures does affect performance. The bigger the
texture, the more time it takes to filter, sample,
and all that fun stuff. And as the rendering
engineers have told me, we have done some
optimizations to that so that it's not quite like
it was, cause that is advice that I remember from
the Xbox 360 days. A large texture is really
expensive to sample. Well, that's, to an
extent, still true. But if you are using
runtime virtual textures for your landscapes,
like we are in this, it actually ends
up ultimately being cheaper to do it with a
runtime virtual texture, even though the runtime virtual
texture is effectively larger, because of a lot of
the optimizations we've done there, which
are really, really cool. And I think there was a
livestream about that with-- was it Jeremy Moore? VICTOR: Yeah, we did
one right as we shipped. I don't exactly remember
which version that was. But if you go and search "Inside
Unreal" virtual texturing, or virtual textures, you'll
find that livestream. MATT: Mhm. So one of the things that I
noticed while I was working, while I was looking at
the scene, and, of course, while I'm looking at
my stat RHI, right, so I have many, many, many
thousands of draw calls. And this goes back to if you
are looking at perf early and often, and if
you as a tech artist, or you as an environment artist
are communicating freely back and forth, keep the lines
of communication open, then you can catch stuff early. And you can talk about how to
construct assets, or construct your environment in a way
that is conducive to perf. So as I mentioned earlier, we
have dynamic mesh instances, which does save a
number of draw calls. As we saw, that saved
whatever, 10,000 draw calls for that one shot. It might be more. It might be less in this scene. But that only goes so far. So again, as we said,
that is same geometry. And that includes LODs. So LOD 0 and LOD 1 are
different geometries, even though they are
from the same actor. And it's also kind of
doing some clustering. So it'll say that these
five trees over here are going to be a dynamic
instance together. And these five trees,
even though they're the same geometry, because they're
on the other side of the screen, those might be a
different thing. There's ways we can
get around this. And so one of the
things I noticed, as Victor was talking, or-- I'm sorry-- Jakob was
talking about constructing the cottages, the
thatched roof cottages-- I can't believe I
made it this far in without talking about
the thatched roof cottages. So these, Victor, and-- my god-- Jakob has gone through
and constructed these thatched roof cottages with-- that's why, translucent
selection-- with individual thatches as individual
static meshes. And this has created a very,
very beautiful thatched roof. MATT: What's that? JAKOB: To the
house on the right, I think it's lit a
bit more beautifully. That one on the
bottom-- MATT: This one? This one's your favorite? JAKOB: But then
on the other side of it. Yes, there we go. The quality starts
to pop up there. MATT: I
really love these. And luckily, a few of
these have been split off into their own sublevels
so we can work on these. And I'm going to
go into unlit mode. We can work on these
directly and specifically. So I can show you kind
of what's going on here. So I pre-staged my
little Emeril Lagasse. All right, and then
we're going to let this rest for four hours. And four hours later,
we're going to pull out the perfectly rested thing. So I noticed that there are a
number of actors that were used to construct the thatched roof. In fact, 563-- 460. And so each of
these is going to-- now, it's going
to LOD separately. It might not dynamically
batch instance together quite the same. And Jacob, you were
telling me that one of the things that
you wanted to maintain is that nice kind of
gradient as you move away from it so that it's not-- if the whole roof LOD popped
all at once, we'd notice, right? JAKOB: Exactly,
like my, quote, unquote, basic artistic brain went,
OK, I have this thatched roof. And I know that I
need a base mesh. And I need some sort of cards on
top just to fake masked volume. And so my first
instinct was, well, when I have all of
this in one mesh, and I get away from
the camera, or will start getting away
from the object, then obviously all of that will
sort of pop out in an instant. And what I did for
these is, first of all, set their culling
distances, but also have their opacity decrease
based on distance to camera. So they are on their own,
even if they are not actually culled, fading away,
which sort of helps you hide that sort of immediate
disappearing of cards. But in my head,
it made more sense to keep those as individual
meshes or individual pieces, which I could cull out
more aggressively without me rendering a completely
transparent object for longer than it needs to be rendered. MATT: Right, cause
translucency can get expensive. So now we're going to
get into the construction without affecting
visual quality. So I want to make sure
that these cards fade nicely and separately. And I will freely admit
that I changed the material so that we can do this live. JAKOB: How dare you. [LAUGHS] MATT: Did I
leave that plugged in? I left that plugged in. So the way he's doing
that is down here. So he takes the opacity
mask, does a little bit of multiplication on it. And then basically we get the distance between the camera and the position of the pixel that we are drawing. We add our fade-out distance, and we divide that. So this is basically a normalize-to-range function. We do a little math on that and saturate it to keep that value from 0 to 1. And then we LERP between our base opacity and 0, and we plug that straight into the opacity mask. And so that's how he's doing the per-pixel fade out, which is super, super cool, and super, super good. And again, it's on a masked material, so it's a little cheaper.
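Spelled out as plain math rather than nodes, that fade comes down to something like the sketch below. The function and parameter names are made up here, and the exact node arrangement in the real material may differ a little:

```cpp
// Rough stand-in for the material graph: distance from camera to pixel,
// normalize-to-range, saturate, then lerp the opacity toward 0.
float DistanceFadedOpacityMask(float BaseOpacityMask, float PixelToCameraDistance,
                               float FadeStartDistance, float FadeLength)
{
    // 0 while closer than FadeStartDistance, ramping up to 1 once we're
    // FadeLength units past it -- the "normalize to range" step.
    float Fade = (PixelToCameraDistance - FadeStartDistance) / FadeLength;
    Fade = FMath::Clamp(Fade, 0.0f, 1.0f);           // the Saturate node
    // Lerp between the base opacity and 0; the result drives the Opacity Mask pin.
    return FMath::Lerp(BaseOpacityMask, 0.0f, Fade);
}
```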
But what this means is that all of this is happening at
the material level. And I can use this
to my advantage. So I'm not going to
move the camera at all. I am going to stat RHI so
we can see how many draw calls we're working with. So this house effectively is
anywhere between 800 and 1200 draw calls. JAKOB: It's an entire
mobile game essentially. [LAUGHING] MATT: You
think about, I remember somebody saying like the
sound file for the "SEGA" chant at the beginning of
Sonic is like 1/8 of the total cartridge. These are all the
draw calls you get for your entire mobile game. JAKOB: It's good
if you can be wasteful, but it's smarter to do it
the smart way obviously. MATT: Right,
well, so the cool thing is, we have a couple of built
in tools that can make this really, really, really easy. I'm going to show you
two things you can do. And both of them are
just with a right click. So the draw calls
popped up there because of the highlighting. So we're going to remember
our old value of 1,000. We're going to select
all of these bad boys. There we go. Select all those bad boys,
right click, Merge Actors. And the one I'm going to use
is this one over on the right, harvest geometry
from selected actors, and merge them into an actor
with multiple instanced static mesh components. This is what I would consider
a destructive change. So this might be
something as we're getting toward the end
of this project, Jakob, we know that the
houses aren't going to change around too much,
we might pull this trigger. VICTOR: There's a way-- MATT: Go ahead. VICTOR: There's a way to
make it non-destructive that I used in a project,
where we basically have a separate
sublevel where we keep all of the
original, pre-merge meshes. And so right before
you do the merge, you essentially copy over
to your un-merged level, and you make sure that that
level is not loaded by default. And then you can
work it or do it in a non-destructive way, where
you then later can open up that sublevel, select
meshes in there, merge them, and
then move that over. MATT: Right,
and so as you're going through your project, and
you want to set something up like that, do it. It makes things so
much easier on you. So for the time
being, we're just going to do this little test. Again, our draw calls
were around 1,000. And again, our dynamic
instancing does a lot of work here. But we can help it out. We can tell it what
to instance together. And we can do that right here. And so the key to
me that this was a good idea is
that we have maybe four different
static mesh assets, thatch L, light, heavy, SM card. I think there are
maybe four or five. But there are a lot of them. And that's what cued me in that
this was kind of the way to go. So we're going to do all that. We're going to turn
it into an actor. It's going to be instanced
static mesh components. Boom, merge actors. I haven't moved the camera. That was 500 draw calls. That's a mobile game. JAKOB: Ship it. MATT: Ship it. And so now this whole thing-- roof base, ha, ha. Give me. There it is. So the whole actor itself
is all of these little-- it's all of these. They're all instanced static mesh components, which means that the computer knows that these should all be drawn in the same draw call.
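For anyone who hasn't touched them from code, an instanced static mesh component is just one component holding a list of transforms for one mesh, which is why the merged actor collapses to a handful of draw calls. A small illustrative sketch, with placeholder names:

```cpp
// One UInstancedStaticMeshComponent per unique mesh: all of its instances are
// submitted together instead of one draw call per thatch card.
#include "Components/InstancedStaticMeshComponent.h"

void BuildThatchInstances(AActor* Owner, UStaticMesh* ThatchCardMesh,
                          const TArray<FTransform>& CardTransforms)
{
    UInstancedStaticMeshComponent* Cards =
        NewObject<UInstancedStaticMeshComponent>(Owner, TEXT("ThatchCards"));
    Cards->SetStaticMesh(ThatchCardMesh);
    Cards->SetupAttachment(Owner->GetRootComponent());
    Cards->RegisterComponent();

    for (const FTransform& CardTransform : CardTransforms)
    {
        Cards->AddInstance(CardTransform); // just another transform in the same batch
    }
}
```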
So now, I know that there are probably some more draw calls in here, depending
on how things are set up, same kind of thing. So that's not the only thing. That's not the only
thing that we can do. So let's turn those
bad boys back on. OK, that was a lot. So say maybe for
whatever hardware reason we can't instance all
of these together. What we can do instead-- oh, boy, scrolling,
scrolling, scrolling. JAKOB: It takes
a while scrolling through 500 individual actors. MATT: Right, so
I can come back here. I can merge actors. And I'm going to do
this harvest geometry, and merge it into
a single actor. So what this is going to do is
create a new static mesh actor. And we're going to put this
under my Developer folder. Oh, secret's out. Save. So this is creating
a mesh proxy. So this is going to do a
little bit of extra work to generate this geometry. And it's going to
bake some stuff down. And I might just
tell this to not. So I would just say that this
version of the merge actor's tool is really, really good
if you have a bunch of-- if you want to create a
whole new static mesh, merge it together,
combine it, do a bunch of extra optimization. That's really good. That's what the proxy is really,
really, really good for, maybe less optimal for this use case. What I really wanted to
show you was merge actors. So what I want to do
is merge all of these into a single static mesh, not
change the geometry at all, but merge the
materials together. That's this one
over on the left, grouping them by materials. Because in certain
circumstances, depending on how you
press the buttons, you could end up with all
of these merged together, and each mesh will be its
own element, its own material element. And that, for certain purposes,
is a different material. So when we say same
geometry, same material, different material,
that's more draw calls. So we're going to merge that
together, developer's folder. Yeah, that's what
we're looking for. Hey. Namespaces, they're
really important. All right. So now, I have this, and I
can drag that into the world. And we can see that that's
all of our thatches grouped together. I can come in here. And I can see that it's only the
materials for the thatches that are there. There's only five. So this is now effectively
five draw calls. But it does have the
LOD popping question. But because we've added
that little extra dithering to the mask material, it kind
of hides that transition really, really nicely. So the caveat with this, as
you merge static mesh actors together, this is
a unique asset. This is going to have its
own memory footprint, which may or may not be desirable. But these are all options. These are all options that
you can take advantage of and to keep in mind. So as Jakob and I
progress with this scene, as we look to optimize it
as much as we possibly can, squeeze out all those GPUs,
all those frames per second, we're probably
going to go through and probably instance
these together just to reduce that
number of draw calls. And there are a bunch of
places that we can do that. So I'm going to bop
back to the main map. And I'll take another question,
or at least come up for air. VICTOR: We can do that. JAKOB: Make sure
you drink something. MATT: I am. VICTOR: Have a bit of water. JAKOB: Perfect. VICTOR: Matt Blulu123123
asked, are there any updates in regards to full
trees from Quixel? MATT: Ooh,
good question. Jakob? JAKOB: Well, I can't
really comment on that directly, other than
it is being worked on, and it's going to be good. [LAUGHS] MATT: I'm excited. JAKOB: Good. VICTOR: Great. It's going to be great. JAKOB: Yeah,
I'm not really allowed to directly
talk about that stuff. It's still something we
want to guard very tightly. But it's going to look great. VICTOR: I can tell you
that you'll know when we know. MATT: When we
know, I will blast it. I will sing it
from the rooftops. I will stand on top of
my roof with a bugle playing some beautiful tune. JAKOB: Oh, yes. When they release, I'm going
to do a full dance routine. VICTOR: Maybe I can get
you on for another livestream. JAKOB: Oh, yes,
please, absolutely. Gladly. MATT: So we're talking
about draw call optimization, talking about construction. So they've built out this-- the windmill. JAKOB: This
is the best example. MATT: And this is
what we call kit bashing. This is when you've
got a kit of stuff, and you bash it all together. JAKOB: This
is literally in level prototyping for block out. MATT: Right. So this was one of
the whatchamacallit. Oh, we're getting
into the live stream where my brain
starts falling apart. This is kit bashing. One of the first games I ever
worked on we built like this. But there were some
optimizations in that game that instanced everything
together and kind of did those draw calls, because that
was a different game engine. Here we've got tools
to do that, right? So as this gets finalized,
as somebody said, [INAUDIBLE] approval,
yes, this is what we want the windmill to look like. We'll go through. We'll merge it together. We might instance everything
together so everything draws really, really quickly. The other thing-- I can't
believe I forgot about this-- and Chris Murphy has
talked about this before, one of our evangelists
out of Australia. So we have a foliage dense
level, and a foliage dense-- I didn't know you could do that. A foliage dense level
is kind of a cue. There are some cues in here. So we go to our
project settings. We go to rendering. We go down to optimization. So right now, I
have early Z pass. So that's when we're
rendering our depth pass. I'm going to set that to
opaque and masked meshes. And it's going to
think about that. And then I'm going to
turn on mask material only in early Z pass. So this will
basically help the GPU know how to pick out masked
materials in the Z pass. So I'll just turn that bad boy on, and we're going to restart the project.
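For reference, those two Project Settings toggles appear to map to renderer console variables, so you can also pin them in config or set them from code. This is only a hedged sketch of that route; the usual way is exactly what Matt does here in Project Settings, and these generally want a restart, which is why he relaunches:

```cpp
// Sketch: code-side equivalent of Project Settings > Rendering > Optimization.
// The same values can live in DefaultEngine.ini under [/Script/Engine.RendererSettings].
#include "HAL/IConsoleManager.h"

void EnableMaskedEarlyZPass()
{
    // 2 = run the early Z (depth) pass for opaque *and* masked meshes.
    if (IConsoleVariable* EarlyZ = IConsoleManager::Get().FindConsoleVariable(TEXT("r.EarlyZPass")))
    {
        EarlyZ->Set(2);
    }
    // Only evaluate the opacity mask in the early Z pass, so later passes can skip it.
    if (IConsoleVariable* MaskedOnly = IConsoleManager::Get().FindConsoleVariable(TEXT("r.EarlyZPassOnlyMaterialMasking")))
    {
        MaskedOnly->Set(1);
    }
}
```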
I'm going to come up for air, and we're going to take another question. VICTOR: Sure thing,
just looking at that. So [community member] asked,
how does Unreal handle a huge amount
of geometry overlap when crafting an
entire environment full of overlapping objects? As a primarily mobile
and VR developer, the sheer volume of triangles
and range of textures seems too much to handle. MATT: Mhm,
so I mean, that's kind of what we're
talking about. If you have a bunch of geometry
stacked on top of each other, that's when we start
talking about maybe turning on HZB occlusion-- hierarchical Z-buffer occlusion-- which I need to get a better explanation of,
but it's effectively a faster but more
conservative way of picking out what to occlude. And there is usually tricks
to figure out whether or not that needs to be
enabled or disabled. It's a project setting. So that would be
something to look into. Of course, we also have
I think VR [INAUDIBLE] occlusion is another occlusion
method for VR specifically that can be really,
really useful. And the other thing to
think about, so, Victor, I mentioned this before
we started the call, and I meant to work
this in early on. The hierarchical instance static
mesh [INAUDIBLE] maker that you had made for a VR project-- and you have to start getting
into those like really, really nitty gritty optimizations
at that point, because now you have
to render everything in both eyes, which means
twice, which means more stuff. And you have to do
it 90 times a second. So that's 11.11 milliseconds per frame. You have to render it more.
the amount of optimization that you have to do there. I know that wasn't like
the perfect answer to that. But so mask and
early Z pass really, really helps us, again, because
we have so much foliage. We have so many
masked materials. And we'll see how that
affects our quad overdraw. I think that looks
better, I think. We'll work on it. Again, we're going to
keep optimizing it. And you'll see what we are able
to do when we don't have to do it live, and we can keep going. But yeah, that's another
project setting that really, really helps. And so this is a
good example of if I want to see if that really,
really helped, I can go back, and I can run my trace. I could run my full perf test,
because that is a project level setting that's going to affect
every pixel on the screen any time it's rendered. That's a cue that it's
time to do the full perf test instead of doing
it camera by camera or something like that. So while that's compiling, do we have another question? VICTOR: Sure do. Mug17 asked, are draw calls the
only reason for high texture performance loss? If so, is it because of
texture complexity or size? MATT: Hm, let's see
if I can parse this right. So a texture is going to be an
input into the material that is part of the draw call. So if you have a super high
resolution texture, which means super high
resolution MIPS, and that is going into
a very complex material, and it is spread across
the entire screen, then, yeah, that's going
to be more expensive. The reason the number of
draw calls is expensive is because it's the
number of operations that we have to do over, and
over, and over, and over, and over again. So the fewer operations we have
to do, the cheaper, the faster things can go is
effectively how that goes. So if a high
resolution texture is an input into a
bunch of draw calls, then it is going
to be expensive. So I am going to
fullscreen this, because, again, that is
the basis of our test is the full screen. And I'm going to do
ke * StartPerf. 30 milliseconds. Hey, that shot's down
to 40 milliseconds. That's great, because
that started at 60. JAKOB: Yeah, that looks-- MATT: So just our shadow
change and our mask material change, that's starting
to bring things down. That's great. I'm super excited about that. And that doesn't include
our draw call optimizations for the thatched roofs that
we will do at a later date. And I have buried one perf bomb. So I don't even need to
go look at the trace. But I can look at that and say,
yeah, that looks a lot faster. So I'm going to hit Exit
and see if that works, instead of hitting Alt-F4. JAKOB: I mean,
one thing, for example, when we talk about the thatched
roof is the biggest reason why they're separate right
now, and actually not smartly merged and reduced
in terms of their draw calls is that we might want
to make changes to them. It's the great thing of
having all of that in engine right now. And it's also a great thing why
we have these in sublevels so that we can constantly
and continuously update these meshes, if need
be, and push them back to the main level. But if you think about how
many draw calls we've lost or we've saved by merging these
together in one single house, and we have-- ah, let me not lie-- I think eight or
nine houses with completely thatched roofs in the level. So this is going
to be a huge save. So even now, just seeing
how rather quick and simple performance optimizations can bring down the performance cost, even while one of the possibly biggest culprits is still in there, is super encouraging. And yeah. MATT: Mhm, yeah, so I
can look at my Unreal Insights timings. So I figured out what
happened last time. Instead of exiting out of
the scene, I hit Alt-F4. So some of the shutdown
stuff didn't happen, which is why I didn't
have the timing file, cause this gets
written out at shutdown, kind of. Either you are connecting
to a live session, or the file gets written out. So what happened was I
crashed the game basically instead of exiting smartly. So I can look at this, and I can
see that, yeah, it looks like-- I'm looking at my
average, 41. Right, that was like 60
earlier, or 58 earlier, 41. This is going great. So I'm really happy with that. So that was two changes. I barely had to do any work. It saved me 20 milliseconds. Let me check my notes
and see how we're doing. Aha, that's right. So sometimes as you're
working on your frame, as you're working on a scene,
you're working down this list, right? We're burning down this list
to see what needs cast shadows. Maybe I turn off shadow casting
on a bunch of stuff that's really close to the
ground, or I know it's going to be in
shadow all the time. Maybe I'm optimizing
my materials. Maybe I'm changing how I do
shadows or something like that. And I really get it down. I get it, and I
squeeze, squeeze, squeeze, squeeze, squeeze. So I have to start looking in
other places for performance. And maybe I have to look for
a quarter of a millisecond somewhere. And maybe I say, hey,
post-processing's super high. That's really strange. How did that get there? He asked, knowing the
answer to that question. So oh, this is a good example. So maybe I need to figure
out what's costing me time in post-processing. But I don't want to
fire up RenderDoc. So if I hit
Control-Shift-Comma, that will pop open the
GPU Visualizer, which is one of my favorite tools. Aha, there we go. So this is a visual
representation of the kind of stuff you
would see in RenderDoc. But you don't have the
granularity to like-- in RenderDoc I can find this-- I can look at the geometry that
is being sent to the GPU that's getting rendered. This one is a little
less featured, but it is still super
useful for this purpose. This is sometimes where I
will start on a project. So I can look at this. I can see all of the
different groups of draws. I can go in here. It takes me what? 0.5 milliseconds to
render my virtual texture. That's great. But then I can look
at shadow depth, and I can say, hey, what's
going on with my shadows? Oh, man, I've got two different
shadow atlases rendering. That's really strange. And I can come in here,
and I go, oh, man, I have all of these
other dynamic lights that are casting shadows. And I can look at that. And I can say, man, I'm
spending 5 milliseconds to render dynamic shadows for
lights that maybe I don't even see or lights that don't even
need to be rendering shadows. Ooph, that's 5
milliseconds right there. Pop those bad boys out. But maybe we're past that point. Maybe we need to get down
into the post-processing. There's all this stuff that
happens in post-processing. Some of it's useful,
some of it maybe not. Motion blur is happening there. Let's see if my post-processing
material is happening. Did my post-processing
material not apply? Oh, how shameful. So let's say I had a
post-processing material that was super, super expensive. Yeah, that didn't save. That's so strange. We can edit, though. Post-process. And you'll notice
nothing happened. There's a reason for that. OK, so I added that
post-processing material again. My camera hasn't changed. Pop this open. Scene, Post-processing. There it is. There's my
post-process material. And let's say that this
was super, super big. But I can look at
this, and I can see that, yeah, this
post-process material was taking 3 milliseconds,
let's say. And then I would know
to go look at that and say, all right, so a
post-processed material is something that is happening
to every pixel, every frame, no matter what. And so I might look
at this material and see what it's doing. And I might look at
this material and go, this is doing a blur. And it's doing a lot of it. So we're sampling the
scene color nine times. And then we're doing an
if on it a bunch of times. Ifs are really
instruction heavy. And then we're adding it all
together and dividing by 9. And that might be a
super expensive material. One of the things
I might do, in here I might use a LERP
instead of an if. Let me show you what
that looks like. This is my little fun hack. Component mask this. So this comes out as a
gradient from 0 to 1, and maybe I want to do like-- so maybe I want something
to be either red or blue. Let's say that. And instead of using an if-- And ifs give us
really hard lines, because they're basically
only running per pixel. And maybe that's not optimal. So I'll do that. I'll do that. And maybe I say if it is greater
than 0.5, I want it to be blue. And if it is less than
0.5, I want it to be red. So I can say divide by-- I remember my math. Yeah. Divide by 2. Remember your math. And then I'm going to floor it. Aha, nope. That's right. Right, I remember my math now. I went to art school. Sometimes I don't remember. Divide by 0.5. So that means anything
that is more than 0.5 is going to be more than 1. So 0.75 divided by 0.5
is going to be 1.5. And then what I can do is I
can come in here with a floor. Now, after the divide, anything that started below 0.5 is less than 1, so the floor takes it to 0. Anything that started at 0.5 or above lands between 1 and 2, so the floor takes it to 1. And that means that anything that is less than 0.5 is 0, and anything that is greater than 0.5 is 1. And I can plug
that into my LERP. And now, anything that
is less than 0.5 is red. Everything that is
greater than 0.5 is blue. And it saves me-- I think I ran the numbers on this-- a couple of instructions over using an if method.
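Spelled out in code rather than nodes, the two versions of that red-or-blue pick look roughly like this. It's purely illustrative-- in the graph these are the Divide, Floor, and Lerp nodes Matt is dragging in, and the extra clamp is a small guard that isn't part of the on-stream graph:

```cpp
#include "CoreMinimal.h"

// Branchy version: a straight per-pixel if/ternary on the mask gives a hard edge.
FLinearColor ColorWithIf(float Mask, const FLinearColor& Red, const FLinearColor& Blue)
{
    return (Mask > 0.5f) ? Blue : Red;
}

// Arithmetic version: divide by 0.5, floor, use the 0-or-1 result as the lerp alpha.
FLinearColor ColorWithFloorLerp(float Mask, const FLinearColor& Red, const FLinearColor& Blue)
{
    float Alpha = FMath::FloorToFloat(Mask / 0.5f);  // < 0.5 -> 0, >= 0.5 -> 1 (2 right at 1.0)
    Alpha = FMath::Clamp(Alpha, 0.0f, 1.0f);         // guard for the Mask == 1.0 case, not shown on stream
    return Red * (1.0f - Alpha) + Blue * Alpha;      // the Lerp node
}
```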
And now, of course, we're getting into the super, super nitty
gritty optimization, where I have to make sure that I
save 10 instructions out of this material. And maybe this does save
you some instructions. And maybe this is in
your master material, and your master
material is applied to literally every
material in your game for literally every
static mesh in your game. And so it would behoove you
to find ways to optimize that, because, again, how many times
are we doing this optimization? How many pixels are we
affecting with a change that we are about to make? And I think I've got it all. I think I covered everything. JAKOB: Man,
you covered a lot. VICTOR: Everything? MATT: So not everything. So I do a lot of
performance optimization. I love doing this. I love talking about this. And there is so much
more that goes into this. We can talk more
about occlusion. We can talk more about
construction methods. We can talk more about,
how do we optimize our shadow pass even further? There's so much more
that goes into this. And I really love talking
about it, because I like it when things run at frame rate. That's one of my
favorite things. So yeah, I think
that'll cover it. Questions? VICTOR: Yes. And something that you
were touching on there that I thought we could
start off with the QA section to talk about, often I
see the question like, what is my performance budget? And that's an impossible
question, right? MATT: It is. VICTOR: You can't
answer that question. And so could you
talk a little bit about the difference
between all of the things you need to take in mind
when you are working towards performance of a game. And I want to hear a little
bit about the difference of like time cost versus
like the savings you get, and how that actually applies in
the real world for, say, a game studio or a film studio. MATT: OK, so
when you're coming up, you have to decide your budget. And as I mentioned kind
of at the top of this-- and I will reiterate this-- your budgets are a
reflection of your values. What graphical features do
you value more than others? It's always going to be
relative to something else. So if I have a game where I know
that I need dynamic lighting, maybe every static mesh
is going to be dynamic, and I need to hit 60
frames per second-- Or maybe I'll give
myself some room. I'll say 30 frames per second. So 33 milliseconds. So right off the top-- maybe I have a bunch of
characters running around here too. Maybe I'm making a hero
shooter or something like that. I will say, all right,
I'm going to reserve about eight milliseconds just
off the top for my characters. Give myself some
buffer, because I know I'm going to have a bunch
of characters running around. There's going to be a
bunch of particle effects. I might be doing
some in world UI with mesh widgets
that are dynamic. And that's going to be a lot. So I'm going to say,
maybe that'll give me about 8 milliseconds. Because sometimes
your characters only take up a little
bit of your screen. The environment
is going to cover a huge amount of your screen. That's why I will say,
hey, for that game I'm going to give
myself 25 milliseconds. And if I have 25 milliseconds
to render everything-- that's shadows. That's base pass. That's lights. That's volumetrics. That's post-processing. That's all of my
environmental effects. It might be-- I say my base pass
should be 6 milliseconds. And I'm throwing out numbers. And you can change these
numbers based on your values. So if you know that you don't
really need dynamic lighting, you can bake all
of your lighting, and you want to spend more
time in your base pass, great, 8 milliseconds
on your base pass. But you may have to sacrifice
some of your lighting features. Maybe you don't get
light function materials on everything. You don't use IES profiles on
all of your dynamic lights, because those are expensive too. So maybe I might give
myself, yeah, 2 milliseconds for a GPU scene update. Great, that's fine. Screen space AO,
0.2 milliseconds. That's totally fine. If I see that number creep
up, then maybe somebody changed a quality slider. And it's all give and take. Usually you're probably going
to spend most of your time on your base pass. You're probably going to spend
a lot of time on your lighting. And then translucency,
of course, is its own separate
pass these days. So if I have a bunch of windows
in my downtown city scene, I'm going to want to spend
more time in translucency. That might be 2 milliseconds. And these are all little
things that we can do to optimize our performance. And I think it's
really important to think about these things
really, really early on. Maybe you need to make
a stress test level. And you just take this
level, and you throw everything you have at it. You turn on every feature
in your post-processing. You crank all the quality
sliders all the way up. And you see what your
scene looks like. And if you're like, well,
that's too expensive, all of these features turned
on and building it this way is maybe not going
to be the best way-- maybe I know that I should
reduce my poly budget. Or maybe I know that I shouldn't
spend 16 milliseconds rendering shadows. These are all
things that I think should be part of
preproduction in your game. These cannot be overlooked,
because the most stressful thing for everybody in a project
is when you get to two months before you're supposed
to submit for cert. And oh, we got to start
talking about performance. And everybody's been making
everything look really, really good, and nobody's
been looking at performance. And now, as the
tech artist I might have to be the big bad guy
that comes in and says, no, you got to take
out all your polygons. And that's stressful
for everybody. That's stressful for me. That's stressful
for the artists, because they put a
lot of work into it. And they put a lot
of work into stuff that isn't going to
ship with the game because we weren't thinking
about perf holistically and from an early, early point,
and thinking about our budgets. I hope that answers it. VICTOR: Yeah, I think so. Always great to hear, you know,
you've clearly done this a lot. And so just getting
your thoughts out there, the terminology,
and everything else. And I know that that's
extremely valuable. Yeah, let's move on to some
of the questions we received. And there are a couple
really good ones here. I'm going to start with a
question from [community member]. He asked, or they asked,
what is the best way to test performance
for computers slash consoles with
different specifications without buying all of them? MATT: Ooh,
that's really tough. So I will talk about
a situation where I have had to optimize perform-- yeah, so I have had to think
about performance for VR for a low spec project,
or a VR project and worry about it on a low spec machine. And so what I might end
up doing to kind of get an idea of what that looks
like is I might come into-- where did it go? There it is. I might come into my engine
scalability settings, and I might crank
them down to low, just to see what it looks like. And then if I do
that, on my machine it might be 10 milliseconds. And you're like,
great, all right, that's at 10 milliseconds. Ship it. I'm like, well, no,
that's not the target. So maybe you have to do a
little bit of mental math and think to yourself,
all right, well, if I need to hit 10 milliseconds
on a 980, that means it should be 5 milliseconds on a 2080. And you might have to scale
your numbers down and say, yeah, if it needs to be well
optimized on a low spec machine, it should be hyper optimized
on a high spec machine. That's one way to
think about it. At the end of the
day, though, there's kind of no substitute
for testing on device, because at the end
of the day, if I need to hit 30 milliseconds
on an Xbox One, the only way I will know
for absolute certain that I'm hitting 30
milliseconds on the Xbox One is to test it on the Xbox One. VICTOR: Yeah,
that real world test is really the only way that you
can get those numbers, right? I was saying this in
chat a little bit earlier when the question came up that
it all depends on your budget. But if you're able to scrape
together a GPU from a friend, bring your build over
to someone-- maybe not during these times,
but send it over. I once set up a blueprint that
would just automatically run console commands on
development builds, and I asked the
community for the game, as well as people I knew. Hey, can you please
spend a couple minutes, and just download
this and run it, and take me a couple
of screenshots, and send that back to me.
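If you wanted to build something like the helper Victor describes in C++ rather than Blueprint, it can be as small as an actor that fires a few console commands in non-shipping builds. This is purely a sketch-- the class name and the exact commands are placeholders, not the blueprint that ships with the project:

```cpp
// Hypothetical dev-only helper: drop one of these in a test map, and anyone running a
// development build gets the perf readouts on screen without touching the console.
#include "GameFramework/Actor.h"
#include "Kismet/KismetSystemLibrary.h"
#include "DevPerfCaptureActor.generated.h"

UCLASS()
class ADevPerfCaptureActor : public AActor
{
    GENERATED_BODY()

public:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
#if !UE_BUILD_SHIPPING
        UKismetSystemLibrary::ExecuteConsoleCommand(this, TEXT("stat unit"));
        UKismetSystemLibrary::ExecuteConsoleCommand(this, TEXT("stat rhi"));
#endif
    }
};
```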
MATT: Right, so you raised a really good point. I've already got
this project setup with a blueprint that does that. That's a great idea. So I'm on my workstation, my
big, powerful game development PC. My personal PC is not
quite as powerful. I will readily admit that. Maybe I need to test how
this scene runs on a 780. Actually, I think it's a 760. So I might package
this project, or I might grab that staged build,
and I might copy it over to my PC. And I'll just sit there, run
the game, ke * StartPerf. And then zip that up, and send
it over to Unreal Insights over here. JAKOB: I mean,
especially if you're not working on a project that
is locked under NDAs, and you have friends. And just even family,
and you're just working on it
yourself, then, Jesus, feel free to send that over. And ask them, hey,
you don't need to know anything about how a
game engine works or anything. MATT: Right. JAKOB: Or
how to play the game. Just please, open that and
tell me what the readout on XYZ is, and especially
with, for example, Matt's blueprint that will
be provided in the project. That should be a huge
help to all of you. And the best thing about having
a lot of friends and family that are not in
the industry or not really interested in gaming,
they won't have good hardware. So you get optimal testing for
old and sort of out of date or lower spec hardware that
you can test your game against. VICTOR: And 18 processes
running in the background. JAKOB: Yes. MATT: I have
a very old iPhone. A very old-- it's
an iPhone 7, right? But I got Test Flight installed. And my mom has Test Flight
installed on her iPhone, which is older than mine I think. And she knows how to use it. She can download a
thing, same thing. Maybe you have a
little UI button that you set up for your
iOS game, start perf. These are all-- or your Android. These are all totally fine. Ask your friends. VICTOR: Yeah,
and this is something that you might want to think
about when you sort of-- in the initial design
process of your game. If we take a look at--
so I had Sumo Digital on, and they were talking
about their mobile game Spyder for Apple Arcade. And to ship on
Apple Arcade, you're required to hit a certain
benchmark, or a certain number of years' worth of iOS devices that
you're required to run on. And so they needed
all of those devices in-house actually throughout
their development to test that. And so if you're
a smaller studio, that might be a budget that
you're required to think about already when you set off. Will you be able to hit
the performance benchmarks on all the devices that
you're supposed to launch on? What's that going to cost us? What's that going to add in
terms of QA, time, and money? And so it's definitely something
that you should do throughout. And I think Matt brought
up something really important earlier. Do not wait until the
last couple months. Even the first game
build that you package to have anyone test, you
should run a little performance benchmark on that. MATT: Mhm. JAKOB: I mean,
especially if you don't want to annoy
artists and arts directors. Because if they
lock down visuals, and artists spent
countless hours dumping little teeny tiny stones
into crevices of your level, and the art director really
nailed it down to, yeah, this is supposed
to look like that, and we're not going
to change that, and then at the last minute
before shipping when everything was, OK, yeah,
we've done our job, tech art comes in and
says, well, we're sorry, but we have to nuke half of
your art process, or progress, nobody's going to be
happy in that instance. So yeah, make sure to
start as early as possible, and be just considerate. I mean, even with
talking to other people and their experiences,
when you don't have quickly available, amazing tech
artists to your rescue, just ask in the community,
and ask for other people's experience, because a lot of
the stuff I see new people or new users struggle
with are relatively-- I don't want to call it
basic, but surface level. So what Matt, for
example, touched on, easy steps that are super
effective that affect all pixels on the screen. And maybe you just don't
know about these things. Just ask around. And these starter level
optimization passes will help you out a lot. MATT: Yeah. VICTOR: And
that also goes back to the time question, right? Because sometimes
it might not even be worth the time getting
those extra milliseconds. It might not be worth the cost. You might not be able to
afford that studio wise, right? And that's why performance
and optimization will, for the rest of our
lives, be a very big topic that is complex. But there are steps
to take to mitigate the problems that might come
to your project a little bit later. Let's move on to
some of the questions here to make sure we get them. Little Sarah Lee asked, I'm
making an open world map with world composition,
currently empty with just a basic landscape material. I constantly get small lags when
I move from one streaming level to another. How can I diagnose this? MATT: Ooh, I don't
specifically know the answer to that question. But it sounds like it might be
a hitch, so like a little one frame thing. I will absolutely post
in the forums after this. I'm going to type my note now. I will post in the forums
after this a quick and easy way to do that. Because yeah, those are
really, really tough, because that's
not like a-- yeah, I can set my camera up and go. And you can't like, ooh, let me
in RenderDoc just hit the GPU capture like right
at the last second. Those things will show up. And you'll see those happen in
Unreal Insights, those hitches. And then you can dive in
to that specific call. And then I believe
there is a way to for the game to detect when a
hitch occurred and then output a bunch of data about that. I will post some info about
that in forums after this. VICTOR: Yeah,
there's a console command. Perhaps that is from an older
time before Unreal Insights, but stat start
hitch, and then stop. You can actually set a threshold
for like a certain process that took a certain amount of time. And you can dump all
of those to a log, and then go through, hey, what
are these hitches coming from? So there are ways to
troubleshoot that. MATT: Yeah. VICTOR: And Matt
mentioned the forum thread, shameless plug. I also was going to throw
in there, if you're curious, have these questions. The forums are a
great place where you can talk to other
developers in the community. And I also want to drop
UnrealSlackers.org, which is the unofficial Discord
community for Unreal Engine. There are many of those,
but Slackers is great. MATT: Mhm. VICTOR: Let's see here. I thought this was a
pretty good question. [Community member] asked
a quick question. What is the best mechanism to
help artists slash engineers understand the guidelines for
a particular target platform? MATT: Ooh. VICTOR: He's
bringing some details here. There's an art type-- oh, he's referencing
Linter, which is one of the tools on
the marketplace that allow you to troubleshoot
or take a look at how your project performs. It's used if you
are actually looking to submit your content
to the marketplace. And you can use that
tool to get a quick check to see if you have
anything that's like completely whack, that's
easy to change in terms of performance, and
naming hierarchies, and stuff like that. MATT: Oh, oh, oh, OK. I think I conceptually
understand what this is. It's basically like
a project level, go through, look at my
settings, see if there's time. Yeah, because Linter
is a thing that I'm vaguely familiar
with from Python where it'll go through
your Python script. And yeah, OK, I don't know. Again, there's
really no substitute for testing on a device and
stress testing for device. And you might have to say-- or at a certain
point you may have to go through and set poly
budgets, and actor budgets, and things like that. But nothing is better
than communication, and open and honest, and
constant communication so that everybody's
on the same page, everybody's on the same page
from the start of the project. Those are all, I think,
super, super critical. VICTOR: And educate
throughout, and update the team with your findings,
right, if you're the person who is the
dedicated performance judge. I don't know. MATT: Yeah, they're
the performance guardian. Before I started at Epic, every
project I worked on, the QA team was doing their perf analysis.
the, OK, here's the 10 cameras from this map. Here's the 10 cameras
from that map. And then that would cue me in
to go into highlight areas. And then I would
go through and do the highlight report of,
all right, this camera took 10 milliseconds. And yesterday it was
taking 8 milliseconds. And here's the breakdown of what
changed, what the deltas were. I'll show you the
spreadsheet that I have for our little project here. Super, super simple
right now, just so I can see how things change. But here's all our cameras. This gives us a little
heartbeat from change list, to change list, to change
list to show us what the cameras were running at. And these are a
little out of order, cause I hadn't set up the
deterministic ordering before I started this, but
it gives you an idea of like, all right,
when I started checking things were rough. And then one day things
were a lot better. And then the next day
something changed. These numbers went up. And a little bit of spreadsheet
goes a long way there. VICTOR: Yeah, it took
me a while as I got started, but getting over your
fear of spreadsheets is quite a good thing to put
on your learning roadmap. So that's what
you're looking to do. There are many reasons
why spreadsheets and CSV files can be extremely useful in
any form of real time product. MATT: Yeah. VICTOR: So
[community member] asked another question I
thought was pretty good. Advice on what aspects
to simplify first when optimizing for
lower spec computers, for the lower graphical
settings without making the game look terrible? MATT: Yeah,
so the first thing that you can do
is-- there are a lot of knobs that can get
turned under engine scalability settings. There are a lot of
knobs that can get turned with material quality. So we can very
briefly touch on that. So if you have a
quality switch, so this lets you determine
how your material will work at different
material quality levels, which are set in your
engine scalability settings. Excuse me. And so you might say
at the Epic level I'm going to use
this whole big graph. But maybe at the low level,
instead of doing a full box blur on all nine pixels, I
might only do it on four pixels. Or I might only do it on two
pixels, or something like that. There are a lot of ways
to basically build content for your high end and your
low end all in the same place. The other thing that
is super, super helpful is in your static meshes--
because, again, you can only push so many
polygons, so much data through lower spec GPUs
or different hardware. So you might need to
set your minimum LOD. So this little plus
icon over here, that's an override
for a platform. Or it adds an override
for a platform group, or a quality level, or
something like that. So I might say, cause
I'm going to make this a mobile game,
and a console game, and a desktop game. On mobile I'm
going to use LOD 2. That's going to be my new LOD 0. And that reduces the
number of polygons that the mobile renderer
is going to have to parse, without making any additional
changes to your assets, or without making any
additional changes to what your game looks like on PC. And that is a whole
extra talk about how you can build a really, really
high quality visuals, and then with some settings,
without having to have a full sublevel
that is like low or high. You can also set the detail mode
for all of your meshes, right? So I can say, hey, this one's
only going to show up in high, but this one's going to
show up in low, right? There's so many
different things you can do without having
to have like completely separate assets, which
I really, really love. VICTOR: Let's move on. I have a few more
good ones here. MATT: Cool. VICTOR: Sarah Lee
asked another question. Can you set different
shadow distances, et cetera, for different use cases? For example, if a player
is flying a plane, can you set a higher
value than when the player is on the ground? MATT: So
what I would do, if I know that
those are dynamic, and if I knew that
I was flying, I might have a blueprint
that changes that when I get into a plane,
for example, or when I pass a certain altitude. I might have a blueprint
that goes through and changes that, cause those
are all dynamic settings, especially if you-- Direct channel. Can I not spell? VICTOR: Or did
someone rename it? MATT: In
the wrong level? Lighting? Oh, this happened
to me once before. I have to reload the level. But yeah, so all the settings
on directional lights, hey, if you know when
a player is going to be getting into
those situations, you can set up functionality to apply those different settings.
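One way to wire that up-- not necessarily how Matt would do it, and the numbers here are made up-- is to scale the dynamic shadow distance through a console variable when the player takes off, and put it back on landing:

```cpp
// Hypothetical sketch: bump the cascaded shadow distance while flying, restore it on the ground.
#include "Kismet/KismetSystemLibrary.h"

void SetShadowDistanceScale(UObject* WorldContext, float Scale)
{
    // r.Shadow.DistanceScale multiplies the dynamic shadow distance globally.
    UKismetSystemLibrary::ExecuteConsoleCommand(
        WorldContext, FString::Printf(TEXT("r.Shadow.DistanceScale %.2f"), Scale));
}

// e.g. SetShadowDistanceScale(this, 2.0f) on takeoff, SetShadowDistanceScale(this, 1.0f) on landing.
```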
Oh, don't do that. Reload the map. Don't save. VICTOR: Ah. MATT: Ah, there it goes. JAKOB: It needed to happen one last time. MATT: Yeah,
one last time. VICTOR: And
thankfully, you have a really nice
background on your PC. MATT: I know,
how beautiful is this? VICTOR:
The Digital Cowboy asked, when it comes to LODs,
is there a benefit to not using LOD 0 and LOD 1 if
the asset has very high poly counts, such as the Megascans assets? And then he had another
little question. To mainly help cut
down on project size? MATT: Ah,
project size is critical. So I think if I'm worried
about how much room-- especially in these times,
or for a distributed team or something like that where
you know, my household internet might not be the best. And I don't want
to have to subject my contractors or
my friends that are working on this project to-- and I don't want to
have to subject them to downloading 20 GB or 30 GB. That's the kind of thing you need to think about at the asset level. Like I think-- Jakob, correct me if I'm
wrong, in Quixel Bridge, you can set which
LODs you export? JAKOB: You can currently
set what base LOD you export. And depending on whether
or not in the Live Link you chose that you want to
actually have an LOD setup, then it'll automatically
pull in all subsequent LODs. This behavior might
change to something that is even more user friendly. We're currently working
on that to make this as pleasant as possible. MATT: Excellent. JAKOB: So huge
user improvements are being worked on. And especially
with the LODs it's a bit tricky, because Unreal
Engine does a fantastic job at still making use of
the LOD 0 normal map on completely different topology
without breaking normals. And so that's something
that we really want to leverage, because
it's unique to Unreal Engine. MATT: So that,
as you're going through and you're like, well, if I
know that the platform targets that I'm working
with, I know that it can't handle 50 million
triangles, or even 5 million triangles,
then if I know that I'm never
going to use them, then I will just
not import them. You may have a situation
where you want like your-- so maybe you have an asset
that the character holds in the first
person, and you also want to have, without
changing anything, the asset on the ground. Maybe you set the screen size
for that asset, for LOD 0 to be really, really high. And then like as soon as it
gets out of the player's hand it pops over to LOD 1. I've seen this done before so
that you're not rendering out the little crevices
on your pickaxe, or something like that,
all the little wood grain. As soon as it gets
out of your hand, it switches over to that LOD. That's always
something to consider. Yeah, like it like
I said, if you know that you are not going
to use all of that data, I wouldn't import it. JAKOB: Yeah. VICTOR:
Bob'sNotYourUncle asked, how many layers are recommended
for painting layers which are landscape material? And in turn, can all of those
have different foliage grass types without impacting
performance too much? MATT: Ooh, it depends. VICTOR: I
didn't say they were going to be easy questions. MATT: Yeah, no, I mean,
these are all good questions. And certainly, everybody
wants guidance. And you know, it depends on
how much of your landscape are you going to be seeing? What platforms
are you targeting? Because at a certain
point the number of layers will increase the
number of textures, and then you have to use
shared texture samplers, but maybe the platform
you're targeting doesn't have shared
texture samplers. So I think for mobile,
that may be an issue. So it depends. The other thing to keep in mind
is that-- and I'll shift to-- the other thing
to keep in mind is that this will increase
the number of materials that you have to compile,
because the more layers you have per component, the more
sub permutations of the material that it'll create. So we've got one, two,
three, four, five, six, seven layers on
this one right now. And those are all just running
through a layer blend. And again, you'll get
to see how all of this plays out when we release this. The other thing I've added are
two non-weight blended layers for snow and water. And the other thing I've
added, if it'll show up, is a foliage mask, which
is not weight blended. So you can paint out
foliage in certain areas. But at the end of the day,
I wouldn't necessarily recommend more than like 16. That'd be huge. This is kind of where
we wanted to stop it. But all of this is going into a
runtime virtual texture, which greatly simplifies the draw
cost of the landscape for us in our situation. But yeah, the more
permutations you have, the more draw calls it is. The more draw calls it
is, the more expensive it's going to be, and
so on and so forth. So that's something
to keep in mind. The trouble is, once we
start getting into that, once we start
getting into the like how many layers can I have? What texture size? What are my poly budgets? These are all, again-- there are certain,
not hard limitations, but reasonable limitations. And some of that is stuff that I
have developed an instinct for. Like I might look at
something and say, that feels a little high. But that's because
I've been doing this for a really, really long time. And it's just, again, a
statement of your values, of your chosen art
direction, what compromises you want to make. And if you say, I want-- And maybe you have a
situation where you want a bunch of different layers. And maybe you know that
four of those layers are only going to be on
this landscape over here. And four of those
layers are going to be on this
landscape over here. And maybe you just have
two different landscapes with two different materials. And that's maybe OK too. It's all something
to think about. VICTOR: If you leave
with anything today, folks, it's all something
to think about. MATT: It depends. And a lot of this is
going to be up to you. Sometimes I wish there were hard
limits, where I could just say, no, you cannot have
more than this many. But some of those
limits are hardcoded, and they are very, very
high, and they are probably much higher than most people
are ever going to need. VICTOR: SqueeTimes
asked a question earlier. And I thought we could
iterate on this a little bit further, because there's
actually quite a bit-- if you all still have some
time for more questions? MATT: Yeah, I have
a little bit more time. VICTOR: OK, I was
thinking another 15 minutes, and then I think
we're calling it. MATT: Perfect. VICTOR: So it was
when you were showing off the thatched house. And the question was, why
are the houses separated into a level by themselves? Is that a common practice
or just for performance? And so could you
iterate a little bit about sort of the
workflow benefits, as well as some of the
potential performance benefits when it comes to
using sublevels? MATT: So I'll let Jakob
take the like construction. JAKOB: Yeah. And you take over when
we get into performance. So when it comes to
constructing these, we are constructing them
out of modular pieces from the Megascans
library, custom created assets to sort of
make them feel a bit more rundown and fantasy-esque. So the problem with
that if I'd assemble all of that in my main
level is that I'd possibly create a lot of dirty files and
have a super cluttered world outliner. By keeping them in
a separate level, I have one place for
each and every house that is specific to that
construction process. So if I want to
change a house, I don't need to worry
about anything else. I just need to go in there,
know what house I'm working on, and can change everything
without affecting my main level. I can change lighting
scenarios, switch from HDRIs to
procedural sky systems, play around with
my fog, really dig into my post-process settings
to see when certain materials or effects break, and
just really sort of break the construction
level without actually breaking my main level. Now, the cool thing that
we've done on this project is that we've actually created
a template actor system. What that means is
that I essentially propagate all of the mesh data
that I have in my construction level to a template actor. And I can just spawn
that template actor in my main level, and
it will automatically place all the instant
meshes, the static meshes in the correct order, and
with their correct translate in my level, which means now I
have one actor that I can start moving around containing all
of the different static meshes. And especially with
still needing or still possibly changing
these things, it's good to be able to
communicate back and forth. So now I can, in my
main level, for example, change the placement
of certain elements, and propagate that back
to my construction level. And this is just a really
good workflow, especially when you start having sub assemblies
in engine to keep things clean. Because nobody wants to go
in the main level, especially other artists working
on the same project, and dig through hundreds,
and hundreds, and hundreds of actors that just
belong to the construction of houses that are
completely irrelevant for the actual gameplay
level, if that makes sense. Now, performance wise, Matt
will be able to address that. MATT: Yeah, so from
a performance standpoint, one of the benefits of having
different chunks broken up into sublevels, obviously-- OK, and our cull distance volume is turned back on, so we're losing all the trees. Very aggressive cull distances. So maybe I know that
at a certain distance I don't need the
inside of this house. Maybe we can go into
all of our houses. And that's where we can start
worrying about streaming in all of these
sublevels, and say, I'm only going to stream
in the inside of this house once you're maybe, depending
on how long it takes, 1,000 units away, 2,000 units away.
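In code, that distance-based streaming could look something like the sketch below; the level name and distances are placeholders, and a Level Streaming Volume gets you the same behavior without any code at all:

```cpp
// Hypothetical sketch: load the interior sublevel when the player is close, drop it otherwise.
#include "Kismet/GameplayStatics.h"
#include "Engine/LatentActionManager.h"

void UpdateInteriorStreaming(UObject* WorldContext, float DistanceToHouse)
{
    FLatentActionInfo LatentInfo; // no completion callback needed for this sketch

    if (DistanceToHouse < 2000.0f)
    {
        UGameplayStatics::LoadStreamLevel(WorldContext, TEXT("House_A_Interior"),
                                          /*bMakeVisibleAfterLoad=*/ true,
                                          /*bShouldBlockOnLoad=*/ false, LatentInfo);
    }
    else
    {
        UGameplayStatics::UnloadStreamLevel(WorldContext, TEXT("House_A_Interior"),
                                            LatentInfo, /*bShouldBlockOnUnload=*/ false);
    }
}
```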
The other benefit is HLODs, which is another
livestream altogether. But the super, super,
super simplest idea I can communicate to you is
like group a bunch of assets, make an LOD for them. Making an LOD of the LODs. So like I know that when I
am super, super far away-- maybe I'll set this up
for this project too, just get all the perf working-- is maybe when I get
super, super far away, we'll have HLODs
for all of these so that now this house is
its own single draw call, single mesh baked
down into a 2048. Yeah, it'll cost us a
little bit of memory, and we'll plan for that. But then this thing, because
it's in its own sublevel, we don't stream it in
until we get much closer. And then the HLOD
for it will kind of persist in a different place.
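The HLOD setup itself is its own topic, but the "group it into one mesh, one draw call" idea can be sketched by hand with the editor scripting merge utility, which is roughly what the Merge Actors window does. A rough example with made-up package and label names; on the real project the HLOD system or the Merge Actors tool would do this instead.

```python
import unreal

# Sketch: merge the selected house actors into a single static mesh actor,
# i.e. one mesh / one draw call. Names are illustrative only.

actors = unreal.EditorLevelLibrary.get_selected_level_actors()
mesh_actors = [a for a in actors if isinstance(a, unreal.StaticMeshActor)]

options = unreal.EditorScriptingMergeStaticMeshActorsOptions()
options.base_package_name = "/Game/Meshes/Merged/House01_Merged"  # hypothetical
options.new_actor_label = "House01_Merged"
options.spawn_merged_actor = True
options.destroy_source_actors = False   # keep the originals while testing

merged = unreal.EditorLevelLibrary.merge_static_mesh_actors(mesh_actors, options)
if merged:
    unreal.log("Merged %d actors into one static mesh actor" % len(mesh_actors))
```

Yeah, and so having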
everything broken up into sublevels kind of
supports that workflow. And I really hope
that when we release
example of how that works, cause I think it's
really, really useful to do for a lot of
people in a lot of situations. JAKOB: One quick note,
just for some differentiation purposes, we have
houses in sublevels that are actually grouped
under the persistent level. So you can unload
and load these. But the actual
construction levels are not the sublevels
of the persistent level. They're a completely
separate map, which is in no way
directly affiliated to the persistent level we are
working in as our game map, just to make that clear. MATT: Yeah. VICTOR: The one
last thing I wanted to add, which is probably the first
reason why I got into sublevels was because it makes
collaboration when you-- and you should have your project
source controlled, version controlled, whichever
term you prefer-- MATT: Yes. VICTOR: Because
it allows you to-- so if you're unfamiliar
with version control, go check out the stream
I did with Aaron. Great stuff there. It's in the playlist. Say if Jakob wanted to
go in and play around with little thatched
pieces on a house. And if everything is in
just one persistent level, he needs
to check out that level. And what checking out
means is that he basically locks that file. Then no one can actually
submit changes to it while he has it checked out. If that's your
workflow, you're going to have to talk to each other. Otherwise someone is going to
overwrite and clobber changes, which means
that you're both going to push two separate
versions of the same file to the repo. And it's going to go, hey,
you have to pick one of these. And so sublevels allows-- the typical setup would be
like environment art, maybe one for audio, maybe one for
gameplay purposes and whatnot. And that allows the team
to, at the same time, work together on separate
sort of little subsections of the level. And it just makes-- Yeah, and even if
you're not, say you're a solo dev
on this project, it still makes sense, because
it allows you to track down problems much easier. If you break something,
or you made a change, you can see exactly, oh,
it was in this sublevel. And instead of you
going around looking, take a profile GPU capture here,
a profile GPU capture there, you know that it was
just this one sublevel. So a lot of good
reasons to do that. Also, just general organization. It helps out a lot. MATT: Right. JAKOB: I mean, you
can see in there actually that we have a basic
atmospherics folder, audio. We have foliage volumes. We have the forge and
the different houses as our separate sublevels, the
gameplay and the performance, and the lighting as
separate sublevels. And the same goes for
two basic sublevels that we call geometry
Victor and geometry Jakob. And Victor Ruman
is the second artist working on this project, or
the second environment artist. And we want to make sure
that we are, A, able to work on separate houses,
and start to move them around if necessary, and B, do
different dressing passes simultaneously in the
same persistent level. And this can get hairy. And we're only two people. So with sublevels you can
just really make sure that the changes go where
you want them to go, and that you can actually
collaboratively stream them back into the
main content stream. MATT: Yeah,
that's critical. JAKOB: Yeah. MATT: I remember
my senior thesis in college, which was an
Unreal 3 cinematic project. We didn't have
source control, and I was working with a partner. And our version of
source control-- we had a shared directory. And our version of source
control was, hey, Matt, I'm working in this level. Please, don't touch it. Just over the shoulder,
and that was it. JAKOB: Yeah. MATT: We've come a
long way, especially with-- it's easier to get access
to things like Git. It's easier to get access
to things like Perforce. I remember at one point
I set up my own Perforce server on my own
computer, just cause I love source control that much. And the other thing
to think about too is like source control also
allows you to do backups. And so the thing I was
showing in our perf report, those were the
different change lists. And what knowing
these different change lists gives me is I can track
down the specific change, or I know the range of changes
in which a change could have occurred that
affected performance. So if you are running
your perf tests on every change list,
or every 10 change lists, or something like that, that'll
help you narrow things down.
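To make that narrowing-down idea concrete: once there is a perf number per changelist, finding the change that broke the budget is just a bisect. A tiny sketch in plain Python -- measure_frame_time_at() is a made-up stand-in for however your project syncs to a changelist and runs its perf test.

```python
# Sketch: bisect a range of changelists to find the first one where the
# average frame time went over budget. measure_frame_time_at(cl) is a
# made-up stand-in for "sync to changelist cl, run the perf test, return
# the average frame time in milliseconds".

def find_regression(first_cl, last_cl, budget_ms, measure_frame_time_at):
    good, bad = first_cl, last_cl
    while bad - good > 1:
        mid = (good + bad) // 2
        if measure_frame_time_at(mid) <= budget_ms:
            good = mid    # still within budget, the regression came later
        else:
            bad = mid     # over budget, the regression is here or earlier
    return bad            # first changelist that blew the budget

# e.g. find_regression(1200, 1260, budget_ms=16.6, measure_frame_time_at=run_perf_test)
```

And it gives you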
a lot of backup. Because God forbid,
Victor and I were talking about things are
super windy around here. The power might go out. If the power goes out
while your hard drive is in the middle of a
read-write operation it may corrupt your file. If you corrupt a file and
you do not have a backup, that data is gone. Like the bits for that file
are completely blown away. I've seen it happen a couple of
times, and it's soul crushing. JAKOB: It is painful. MATT: Because there's
no recovering from that unless you have
backups in source control. And then you're fine. You just, OK, I lost
a day's worth of work, versus a month's worth of work. VICTOR: Power supplies
are not eternal, folks. MATT: I have
two, and I'm like, oh, boy, I have 10 minutes. Even then I'm like, oh, God. VICTOR: The Digital
Cowboy asked another question. Foliage that is instanced
has no collision. Do merged meshes
like this-- and he was referring to when
you were playing around with the merge actor tool--
do merged meshes like this retain their collision
after merging? MATT: So if we have
just a merged static mesh actor, we can go take a look at that. I don't actually know if
those had a collision on them. But, of course, if
you merge it like so-- oh, don't crash. If you merge it like so, you
can set up your own collision for this one, just like you
would any other static mesh. Because this is not
an imported asset, you may not have the ability
to do the full custom collision shape UCX kind of thing. Instanced static mesh components? I don't actually know the
answer to that question. And I will answer
that on the forums.
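As a follow-up to the collision question: for a merged static mesh asset, one way to add auto-generated simple collision from a script is the editor's static mesh library. A sketch with an assumed asset path; this produces simple collision primitives, not the hand-authored UCX collision an imported asset could have.

```python
import unreal

# Sketch: give a merged static mesh some auto-generated simple collision.
# The asset path is made up for illustration.

mesh = unreal.EditorAssetLibrary.load_asset("/Game/Meshes/Merged/House01_Merged")
if mesh:
    unreal.EditorStaticMeshLibrary.remove_collisions(mesh)   # start from a clean slate
    unreal.EditorStaticMeshLibrary.add_simple_collisions(
        mesh, unreal.ScriptingCollisionShapeType.NDOP26)     # or BOX, SPHERE, ...
    unreal.EditorAssetLibrary.save_loaded_asset(mesh)
    unreal.log("Simple collision added to the merged mesh")
```

VICTOR: We're getting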
potato quality from you right now, Matt. MATT: Oh, no. VICTOR: But
good thing we're almost at the end
of the stream, yeah. MATT: I've used up all
of my Internets for the day. VICTOR: [Community member] asked, do you have any experience with
Imposter Baker for optimizing levels? MATT: Ah, I do
enjoy the Imposter Baker. I have a little bit
of experience with it, and I find it to be a really
useful tool for optimizing specifically, very specifically
like foliage assets. Because what it'll do is
it'll do basically two things. It'll give you the baked
card representation of it. And it'll give you a single
frame representation of it that kind of interpolates
which subframe it looks at based on
your camera angle. The Imposter Baker is super,
super, super, super useful. And if you know
that you're going to have a bunch of trees
that are off in the distance, right, I don't need to
render all of these-- ooh, lord. I don't need to render
all of the triangles for this tree at this
distance, cause everything kind of flattens out at a distance. That's just how
perspective works. Maybe I only need
this to be a card. And that card is just
looking up at a texture to figure out what to
draw instead of focusing on all of the
individual triangles, or all of the different
parts of this tree. The caveat is that it
won't be an animated tree, but the swaying of this
tree at a certain distance is not going to be-- that might be a half
a pixel left or right, and maybe I don't
need to see that. Yeah, I like the Imposter Baker.
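To make the "which subframe it looks at" idea concrete: an impostor atlas is just a grid of snapshots of the mesh captured from different directions, and at runtime the material picks (and blends between) frames based on the camera direction. A simplified sketch of that mapping in plain Python -- a lat/long grid rather than the octahedral layout the actual tool bakes, so treat it as the idea, not the implementation.

```python
import math

def impostor_frame(camera_dir, frames_per_axis=16):
    # Pick which sub-frame of a frames_per_axis x frames_per_axis impostor
    # atlas to sample, given a unit vector pointing from the tree toward the
    # camera. Simplified lat/long mapping for illustration only.
    x, y, z = camera_dir
    yaw = math.atan2(y, x)                      # angle around the vertical axis
    pitch = math.asin(max(-1.0, min(1.0, z)))   # up/down angle
    u = (yaw / (2.0 * math.pi)) % 1.0           # 0..1 across the atlas
    v = (pitch / math.pi) + 0.5                 # 0..1 down the atlas
    col = min(int(u * frames_per_axis), frames_per_axis - 1)
    row = min(int(v * frames_per_axis), frames_per_axis - 1)
    return row * frames_per_axis + col

# e.g. impostor_frame((0.7, 0.7, 0.1)) -> index of the frame to sample
```

VICTOR: All right,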
definitely end here. So there are a couple more
questions that we're not going to have time to answer. I will make sure I submit all of
these to Matt and Jakob in case there's a couple that
they thought were good and they want to
provide an answer for. Let's see. We can have Sam drop the link
to the forum announcement post. That's where all of the
discussions post stream occur, questions, answers in
regards to the livestream. So go check that out. But I do have one
last question for you. And I want you both
individually to answer this. What is your
favorite spooky game? MATT: Oh, man. Aw. So I guess I can talk
about this a little bit. I'm going to do Extra Life
again for the first time in like seven years. And that's the 24
hour gaming charity. And the first time I
ever did it I set a goal. I was like, all
right, if I get $500 raised I am going to play
Amnesia Dark Descent starting at midnight. And I will not stop
until I finish the game. And by gosh, I did. And it was I think the
single most terrifying thing I've ever done or played. JAKOB: Oh, boy. Yeah, like I'm not as hardcore. I get scared really easily. Like gory stuff I don't
have problems with. But-- MATT: I do too. That's why it was the $500 goal. JAKOB: So I usually stay
away from these types of games. And for example, I count
the Metro franchise into my list of super scary
games that I've played. But I think the definite top
pick would be Alien Isolation. MATT: Oh. JAKOB: Because that AI
is way too smart for my liking at scaring the crap out of me
and hunting me through levels. It is mm-mm. MATT: That was a game I
had to play with the lights on. JAKOB: Yeah,
with my friends near me. MATT: Yes. I think when I did
that Amnesia livestream I had a friend that was like my
support friend, my buddy Pat. He was just there. He was like, everything OK? JAKOB: Yes,
it's going to be fine. It is not real. It can't hurt you. [LAUGHING] But there are some really
gnarly, scary games out there. MATT:
Masterfully horror. JAKOB: Yeah. VICTOR: And more and
more are going to be developed. There is no end to
horror, and they're going to get
scarier and scarier. I'm also in the same boat. I really can't deal with it. And cause I've been
doing VR for a long time. VR horror games
is just a big no. And I have to play
them for the game jams. [LAUGHTER] JAKOB: I
was about to say, if it's only a monitor in
front of you, that's one thing. But if you're actually
stuck in this game-- [INTERPOSING VOICES] JAKOB: Like
I watched some-- because I don't own a VR setup,
I watched some Half Life Alex footage. MATT: Oh. JAKOB: And some
of these instances, I said that I would
right about there crap my pants, because
that is really scary when you think about not being
able to just look away. MATT: Disengage. Yeah. VICTOR: I remember
having to explain to someone who was testing a horror VR game. And they asked, like
I can't not see this. And I had to explain that
like you can close your eyes. You know, you can
close your eyes down, just what you do in real
life, and you won't see it. But due to audio, it still
feels like you're there, right? Anyway, that's
wrapping up for today. Thank you so much
to Matt and Jakob for coming on, providing all
of their knowledge in regards to performance optimization
for environments. JAKOB: [INAUDIBLE]
to Matt for that. I was just the
counterpart artist that needs some
teaching on performance. MATT: Kudos to Jakob
for this beautiful environment that I get to optimize
and work with every day. It's so much fun. JAKOB: Well,
thank you very much. But same goes to
Chris, Alex, Victor, and all the amazing dudes
and dudettes working on this. MATT: This is a
group project, for sure. VICTOR: And
it will eventually be available to all
of you for free, probably on the
marketplace I would think. JAKOB: Yes, exactly. It will be available in
all its glory-- there we go-- on the Unreal
Engine Marketplace sometime in the future. Yes, fully furnished,
packed, optimized with all of the goodies from
the process in there for you to take apart and dig through. VICTOR: That is great. If you joined us
today and you've been here from the start, thank
you so much for hanging out. We hope you had a good time. If you are new to
Unreal Engine and you're interested in what it might
mean to work with Unreal Engine and produce all kinds
of amazing things, from games, to movies, to
archviz and simulation, go to UnrealEngine.com. You can download the
launcher for free, as well as Unreal Engine, and you
can get started immediately. Unreal Online Learning
is a great resource if you're just getting started. There are I don't
know the number. There are a lot
of courses on how to do various things in
Unreal Engine, everything from getting started into doing
deep dives into lighting, VR, you name it. Most of it is there. If not, we also have
an amazing community that is producing all kinds
of specific niche tutorials, on YouTube as well. Go ahead and check them out. Also, if you enjoy watching
live content on Twitch, there is actually an
Unreal Engine tag. And so you can go
ahead and filter for Unreal Engine
and game development, and you will find some of
our amazing community members that are streaming live,
probably right now on Twitch. And we are probably going
to try to raid one of them when we're done here. Make sure that you let
us know how we did today and what you thought
of the stream and what you would like
to see in the future. We have a little
survey that we drop. If you're watching
the stream afterwards, you can also find that in
the forum announcement post. We love your feedback,
and it's important for us to know what you'd like
to see in the future. So go ahead and make
sure you fill that out if you have the spare time. Even though we are not able
to see each other in real life as much as we want to, there
are still the virtual community meet ups happening
around the world. If you are interested in
that, and you're maybe getting a little bit excited
about the potential possibility of getting back
into the real world and doing things together,
communities.unrealengine.com is your place for these
virtual meetup groups. If you can't find a
group in your area, there is a nice little button
that says Become a Leader up at the top right. You can click that,
fill out a form, and we will get
in touch with you in terms of what it means
becoming a meetup group leader. As always, make sure you check
out our forums, UnrealSlackers.com, Facebook, Reddit, Twitter. We're pretty much
everywhere you could want to see us, in terms of
our news, updates, tutorials, and everything else that's
coming out in the world of Unreal Engine. So go ahead and make
sure you follow us, and yeah, read all that stuff. I'll also go ahead and say, if you're checking this
out on YouTube, make sure you hit that notification
bell so that you see when we
produce new content. Every week we are releasing
content, tips and tricks, webinars, et cetera. Our educator streams,
which is tomorrow, it all goes up on YouTube as well
if you prefer that platform, or if you are unable
to catch the streams, because we know it's
like in the middle of the night in Australia right
now, to Chris Murphy's dismay. He is going to have to
come back on the livestream at some point. That's going to happen. Thankfully he has
better internet now, so he won't be potato
quality throughout. If you are interested in
seeing your content as one of our countdown videos at the
beginning of the livestream, go ahead and record around
30 minutes of development, fast forward that
to five minutes, and send us that file to
community@unrealengine.com. Keep it completely plain. We will add the countdown. Oh, and also make sure you
drop your logo in that email so that we can make sure that
we credit the right person who made it. I mentioned this. If you're streaming
on Twitch, make sure you use the Unreal
Engine tag as well as the game development tag. That's the best way for us to
see what you're working on, as well as the rest
of the community. And yeah, next week I am
actually going to have Wyeth Johnson and John
Lindquist on the stream. And they're going to talk
about advanced Niagara effects. That forum announcement post
is already up on the forum. If you're curious about
that or have any questions you would like to ask
them prior to the stream, go ahead and make sure you
drop those in the thread. And once again, special
thanks to Jakob and Matt for coming on the stream today. It's been a pleasure. I'm really glad
to have you here. And hopefully get to do
this again sometime. And then we wouldn't be here
if it wasn't for all of you out there. Big thanks to you
for sticking around, asking awesome questions,
and enjoying our content. Hopefully you learned something. If not, go and check
the YouTube video again and try to see what console
commands Matt typed in. There is a lazy dog sheet. I don't know if that's
the proper English term. But we call it a lazy
dog sheet in Swedish. I don't really know if
that's the English term. Sheet, sheet, sheet, yeah. It's so easy to say. I think there is one
out there in terms of these console commands. And what they are,
there is a slew of them. I think I read a tutorial. There's so much content that
I've seen that's in my head, and I realized like I don't
have links to half of these. So hopefully your Google
Fu is better than mine. Which leads me to the
point of our transcript. All of our livestreams
are actually transcribed, which means that later on
YouTube, about a week roughly after it's been
uploaded, we also upload accurate
captions for the stream. And so if there were any terms,
any terminology, or something that you had a difficult
time hearing what we said, you can either turn on
captions on YouTube, or just go ahead and download
that entire transcription file. What's really cool about
that is that if you watched the livestream,
and you remember, hey, they were talking
about landscape, or foliage, or anything else, you can open
up that text file, Control-F, search for the term, and
you'll actually see a timestamp when we were talking about that
in the stream, which is a useful way if you want to
get a little bit more context about that word
that you were looking for. With that said, it's
time to say goodbye. Y'all stick around
for a little bit. And to you all out there, I
hope to see you again next week. Until then, stay safe
and have a great time. Learn as much as possible. And yes, time to say goodbye. MATT: Bye y'all. JAKOB: Wonderful evening. VICTOR: Bye.