VICTOR: Hey, everyone,
and welcome to Inside Unreal, a weekly show where we
learn, explore, and celebrate everything Unreal. I'm your host, Victor Brodin. And my guest today is Senior
Technical Artist Jon Lindquist. Welcome to the stream. JON: Hi. Thanks for having me. VICTOR: Well, of course. Today we're going to talk
a little bit about advanced Niagara effects. But first off, I
wanted to ask you-- you mind telling our
viewers who you are and what you do at Epic? JON: Yeah, no problem. I'm Jonathan Lindquist. I've worked in games for
the last 13 years or so. 10 of those were at Epic. Started off on Fortnite
and moved over to Niagara. And since then I've been
creating the library and working on interesting
problems for GDC and other demos. But, yeah, that's-- I guess we can just jump
right into the presentation? VICTOR: Let's go. The floor is yours. JON: Let's go. So actually, could we
play the first video? VICTOR: Yes. Let's go ahead and do that. One moment. All right.
I think that got everyone excited. JON: Great. That's awesome. So what you just saw was a
combination of techniques and new technology that
have been packaged
together within Niagara for users like yourself. So cracking open that
package and understanding how everything works
together can be made easy. So let's just go
into how that works. So first of all,
we'll be talking about how particles can
interact with the World and how they can learn more
about their environment geometry. And then we can start
to dive into some of those specific
examples that you saw-- the flocking bats and
swarms of insects, and then, finally, the
position-based dynamics demo, which is a very unfortunate
man made out of corn who gets popped into kernels. VICTOR: The popcorn dude. JON: Yeah. So first we'll talk
about World interaction. So we want to look
at the methods that Niagara can use to gain
information about the World. The first is, Niagara can
read a model's triangles. And this is a great tool for
very specific one-off effects. Say, if you wanted
to place particles on a character's
skin, you'd want to read the character's
triangle positions and then place
particles on them. And next, we can trace against
physics Volumes on the CPU for CPU particle collisions. And then GPU particles can
query the scene depth. So from the point of
view of the camera, a particle can trace
outward and find the closest opaque surface. And that's super helpful for
certain types of effects. And finally, we can query
distance fields and Volume Textures. So given the problems that
we're facing within these demos that we just showed,
we can start ruling out different methods of
analyzing the World based on our requirements. So first of all, triangles. Triangles only work in-- for certain use cases. Niagara would have to be
pointed towards a specific mesh to query. And that wouldn't work
well for the insects, which are supposed to crawl
across the entire World. Then there's physics Volumes. As you can see in this picture,
a collision for a given mesh can be fairly coarse. So that wouldn't be good
enough for the insect swarms. They would clip through
the visible geometry. Another option is
the scene depth, but that has limitations. And then it's two-dimensional. If an insect were to crawl
along the scene depth and crawl behind a
surface, it would no longer have any information to
pull about the World. So that leaves us
with distance fields. And as you can see,
distance fields provide a great
combination of accuracy and volumetric representation. So for all of the demos that
you're going to see here today we're going to be referencing
the global distance field inside of Unreal Engine. So a global distance field is
a volumetric Texture, a Volume Texture, that's black and white. And it contains values. And each of those
pixels, or voxels, say how close that
location in space is to the nearest solid surface. So inside of Niagara,
we might want to query a specific
location in World space. We'll call that P. And
P will be this red dot. So with P-- or, with the
global distance field, we can ask it for its
distance to the nearest surface at this location. And it might tell
us that the nearest surface is 5 centimeters away. That's great, but if we
really want to place particles on that surface
we'll need to know which direction the surface
is from our current location. So we can sample the global
distance field several times in a pattern-- in a cross pattern around
our current location. And that'll give us a gradient,
if we compare all those samples together. And that will point us
towards the surface. So if we take the distance
from the first sample to the surface and the
gradient that we've just calculated from
this operation we can move the particle
to the surface.
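In rough HLSL terms, that cross-pattern sample and snap might look something like the sketch below. SampleGlobalDistanceField is a stand-in for the actual Niagara distance field query, and Epsilon is an assumed sampling offset, so treat this as an illustration rather than the real Module code.

```hlsl
// Stand-in for Niagara's global distance field query; not the real data interface call.
float SampleGlobalDistanceField(float3 WorldPos);

// Estimate the direction toward the nearest surface with a cross-pattern (central difference)
// gradient, then move the particle onto that surface.
float3 SnapToNearestSurface(float3 P, float Epsilon)
{
    float DistanceToSurface = SampleGlobalDistanceField(P);

    float3 Gradient;
    Gradient.x = SampleGlobalDistanceField(P + float3(Epsilon, 0, 0)) - SampleGlobalDistanceField(P - float3(Epsilon, 0, 0));
    Gradient.y = SampleGlobalDistanceField(P + float3(0, Epsilon, 0)) - SampleGlobalDistanceField(P - float3(0, Epsilon, 0));
    Gradient.z = SampleGlobalDistanceField(P + float3(0, 0, Epsilon)) - SampleGlobalDistanceField(P - float3(0, 0, Epsilon));

    // The gradient points away from the surface, so step against it by the sampled distance.
    return P - normalize(Gradient) * DistanceToSurface;
}
```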
That's one of the two ways that we typically use distance fields. The second way is
to do something called sphere casting. So sphere casting operates
on similar principles, but it allows us to trace
in a given direction. So say our particle
is P zero, and we want to trace along this
line to find the nearest opaque geometry. What we can do is we can
say, at this position the closest surface is
10 centimeters away. So that means that we can
safely move along this arrow 10 centimeters without
hitting a surface. So we haven't found
the surface yet. So we'd like to do that again. So at P1 we can sample
again and find the distance and then move along
that vector once more. And then again and
again and again until we get to a solid surface.
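As a sketch, that stepping loop looks roughly like this; MaxSteps and HitThreshold are assumed tuning values, and SampleGlobalDistanceField again stands in for the real distance field query.

```hlsl
// Stand-in for Niagara's global distance field query.
float SampleGlobalDistanceField(float3 WorldPos);

// March along a direction, advancing by the sampled distance each step,
// until we are within a small threshold of a surface or run out of steps.
float3 SphereCast(float3 Origin, float3 Direction, int MaxSteps, float HitThreshold)
{
    float3 P = Origin;
    for (int Step = 0; Step < MaxSteps; ++Step)
    {
        float DistanceToSurface = SampleGlobalDistanceField(P);
        if (DistanceToSurface < HitThreshold)
        {
            break; // Close enough to treat as a hit.
        }
        // We can safely move this far along the ray without passing through anything.
        P += Direction * DistanceToSurface;
    }
    return P;
}
```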
So that allows us to basically ray trace within the World and find surfaces. And it also allows us to
just find the nearest surface easily. And you'll see how useful
those are in the future. So before we go
any further, you'll see a lot of debug
visualizations that I've made. And that's really
helpful, especially when you're working
with complex effects. So I made this Module,
which is in the tool set, called Sprite Based Line. And basically, you can give
the Module a point to start at and a point to stop at. And it'll produce
variables for you that you can use to draw lines. Here's a video showing
that in action. So these birds are drawing
lines to the nearest surface that they can find. And it's also casting forward
using the other technique that we talked about,
called sphere casting. So additionally, sometimes you
might want numeric information. So if you go to
the Material Editor you can find a series of
material functions called Debug Float X. And that will actually
take any numeric input in and display it as a number. So in this example, you can
see that the birds, while they flock around, are
actually telling me more information
about what data's being computed internally. So what I've done is I've
taken information that's inside of the particle effect and I fed
it into the dynamic parameter. And then I've placed a
sprite in the emitter that shows this Material, which
draws the numbers out in space. In the future
we're going to have debug functions
that don't require additional work on our part. But as of right now, this can
help you in a number of ways. So there are a lot of
high-level functions that we've already produced
for users, which are great. But if you want to get into
the nitty-gritty details you can also go
inside of a Module and then query the distance
field directly using a collision query. So you just feed one
of these functions a location and
the query, and you can learn more about the World. So we can get into flocks now. Flocks are interesting
because they only need to know about the
World and avoid the World. But they also have
to avoid each other. And they have to
interact with each other. So each bird needs to know about
its immediate surroundings. And if you were
to do it naively, it'd be really expensive. But luckily, Unreal has included
tools to make that less so. We'll go into those, as well. So when we're talking
about flocking mechanics we have to think about each
bird as an individual actor. And each one of these actors
has a view of the World. It only knows certain things. And we can model that
through different equations. So the steps that we're
going to take today is, we're going to first
make each bird particle avoid the environment. And then they're
going to look around in their immediate
surroundings and find birds that they would like to flock
together with or be attracted towards. And then, when two birds
get too close to each other we want to make sure that
they don't inter-penetrate. So we're going to make
them avoid each other. And then, to mimic
real behaviors, when several birds are
flocking together we're going to have them match
each other's velocity. And we're going to have a
little bit of extra code to handle special
cases like when two birds face
towards each other and they're going to collide. We'll specifically
write something that'll make them turn
away from each other. So using those functions that
we talked about earlier we can start to see
how they're useful inside of a flocking system. So in this case, I've
spawned a grid of particles. And then I'm using this Avoid
Distance Field Surfaces GPU Module to push
the particles away from any surface that's nearby. Most birds wouldn't
want to fly into a wall, so that's the first behavior
that we're going to mimic. And then, Additionally,
we know that birds often look straight and avoid
obstacles that they see coming toward them. So in this case, we're using
a sphere trace to mimic that. And there are certain
issues that one might find as you write
an avoidance simulator that aren't immediately apparent. So as the needle gets closer
and closer to reality, you start to pick up
on little behaviors that you might not
notice initially. So one of those
issues is that, say if you were to take a particle
and you were moving forward and it-- you wanted to make it avoid
an obstacle in front of it, one thing you could do is you
could just apply a force to it to slow it down and then push
it in the opposite direction. But that would be very
unnatural for a bird to do. It would fall out of the sky. So instead, we can
redirect the bird using a cross-product operation. We can actually just reflect
its velocity off of the wall.
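One simple way to express that redirection is a standard reflection of the velocity about the surface normal. This is a sketch, not the actual Module code; it assumes the normal comes from the distance field gradient shown earlier, and the real behavior may damp or limit the result differently.

```hlsl
// Turn the bird instead of stopping it: reflect its velocity about the obstacle's normal.
float3 RedirectVelocity(float3 Velocity, float3 SurfaceNormal)
{
    // Remove twice the component of velocity that points into the surface.
    return Velocity - 2.0f * dot(Velocity, SurfaceNormal) * SurfaceNormal;
}
```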
So this forces birds to turn away from obstacles as opposed to just slowing down and then
stopping in certain cases. That'll be a little bit
of a theme during today's presentation, is
that not only will we want to mimic certain
behaviors, but we also want to be very careful
with how they're exhibited. So our particles
right now, if we were to use the example
that was just shown, would avoid the World. But they're not
avoiding each other. They don't see each other. So what we can do is we can-- within our content examples
you can find easily understood examples that
will show how this is done. But for this
presentation, I would just like to cover the
theory behind it. So say that this
first group of dots is your particles in 2D space. If we're going to mimic
a bird's avoidance system you might say only the particles
nearby will really affect it. So we can say that we want to
collect the nearby particles and then react to
them in some way. In a naive solution,
you might just query every particle
in the system and then ask it
how far away it is and which direction
it's facing and then react to it accordingly. But that would be
really expensive. So we can use something
called a spatial hash. Or, inside of Niagara, it's
called a neighbor grid. Basically, we can take
all those particles that we're talking
about, all those birds, and we can throw them into a
volumetric, three-dimensional grid. And once they're in
a grid we can then ask for certain grid cells
and locate any particles that are within those grid cells. So that narrows down
the problem for us. Now, instead of having to
poll 10 particle locations, we can pull, say, four. And this is really important
when it comes to efficiency.
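The core of that idea is just mapping a world position to integer cell coordinates, something like the sketch below; GridOrigin, CellSize, and NumCells are assumed parameters for illustration rather than the actual neighbor grid data interface inputs.

```hlsl
// Map a world position to neighbor-grid cell coordinates so that nearby
// particles land in the same cell or in adjacent cells.
int3 WorldPositionToCell(float3 WorldPos, float3 GridOrigin, float CellSize, int3 NumCells)
{
    int3 Cell = (int3) floor((WorldPos - GridOrigin) / CellSize);
    return clamp(Cell, int3(0, 0, 0), NumCells - 1);
}
```

A bird then only has to look at its own cell and the cells immediately around it instead of every particle in the system.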
Now that we know what other birds we have to interact with
at a very coarse level, we want to get into a much
more narrow band of what the bird can actually see. So in this case we're
mimicking a cone of vision. So we're taking the
particle's forward velocity and we're using a
dot product to locate any particles that are right
in front of this bird's path. And we're also measuring the
distance between this bird and all the other birds. And we're able to
ignore the other birds and just focus in on
the ones that matter.
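Roughly, in HLSL terms, that visibility test looks something like this; MaxDistance and MinCosAngle are assumed tuning values, not the Module's actual inputs.

```hlsl
// Is another bird close enough and roughly in front of us to be worth reacting to?
bool IsNeighborVisible(float3 MyPosition, float3 MyVelocity, float3 OtherPosition,
                       float MaxDistance, float MinCosAngle)
{
    float3 ToOther = OtherPosition - MyPosition;
    float Distance = length(ToOther);
    if (Distance > MaxDistance || Distance < 0.0001f)
    {
        return false;
    }
    // The dot product of our forward direction with the direction to the other bird
    // is the cosine of the angle between them; large values mean "in front of us".
    float CosAngle = dot(normalize(MyVelocity), ToOther / Distance);
    return CosAngle > MinCosAngle;
}
```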
Luckily, all of this is done for you inside of our Module sets. So you can just drop down
to Boid Force Module, and much of it's taken care of. So I also have to note
that some of the Modules that I'm pointing
out here aren't a part of the official
Niagara Module set yet. So in 4.26 we have a
content examples map. And you can find all
of this content there. And eventually, as
additional iterations are made on the Modules,
they'll get incorporated into the official library. VICTOR: I should
mention that the content examples for 4.26 will be
available once 4.26 is out in full release-- which is pretty soon. So if you're excited
to open up this part and take a look at it,
you will be able to in just a couple weeks. JON: Cool. So let's see. So there's the
actual model that's used to mimic bird flight. And you can see that we have
an avoidance force, cohesion strength-- which basically means that the
bird looks in front of itself and then finds all of the
birds that it cares about. And then it averages
their positions together. And then it finds
the center location. And the bird attempts to
become part of the group by aiming towards
the center location. So there's a force
associated with that. And you can tweak
it as you need. And then there's
a certain distance that each bird wants to keep
away from all of its neighbors that it can see. So we've exposed that value. And then as a little bit
of a tweak, what we've done is to expose the movement
preference vector. And that'll force the birds to
move more on the x and y-axis rather than straight up
or down, because it seemed pretty unnatural when the birds
started flying directly upward. And then, finally,
the last force is one that makes them
match each other's speed.
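Pulling those pieces together, the per-bird steering could be sketched roughly like this. The parameter names are invented stand-ins for the exposed values, not the actual Boid Force Module inputs, and the neighbor averages are assumed to have been gathered already from the visible birds.

```hlsl
// Combine the exposed flocking forces for one bird. A rough illustration only.
float3 ComputeFlockSteering(float3 MyPosition, float3 MyVelocity,
                            float3 AvgNeighborPosition, float3 AvgNeighborVelocity,
                            float3 NearestNeighborOffset,
                            float CohesionStrength, float SeparationStrength,
                            float VelocityMatchStrength, float3 MovementPreference)
{
    // Cohesion: aim toward the center of the group we can see.
    float3 Cohesion = (AvgNeighborPosition - MyPosition) * CohesionStrength;

    // Separation: keep a certain distance from the nearest visible neighbor.
    float3 Separation = -NearestNeighborOffset * SeparationStrength;

    // Velocity matching: drift toward the average velocity of the group.
    float3 Matching = (AvgNeighborVelocity - MyVelocity) * VelocityMatchStrength;

    // Movement preference: bias motion onto the X/Y plane so birds don't fly straight up or down.
    return (Cohesion + Separation + Matching) * MovementPreference;
}
```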
Let me go back to the presentation. So say we got the particles to
move the way that we wanted. That's great, but we want
to represent those particles as meshes. And orientation is really
important in that, say, case. So we've released a Module
called flight orientation. And what that does is it tracks
the velocity of each particle. And it uses that
forward velocity as a look-at direction. But it doesn't just do that. It also banks the models as
it turns, or as it rotates, which feels fairly nice. And you don't have
to know how it works, but just in case
you're curious, this is a little bit of
the math behind it. We take the previous
frame's forward vector and the current
frame's forward vector. And then we find the
delta there, between them. So if the particle's
turning this way we have a small arrow that's
pointing in that direction. And then we find the cross
product with the up vector. And we can use the
result of that to rotate an alignment quaternion. And then what we
can do is we can limit the amount of rotation
that each bird can possibly take. So and we can also decay
that rotation over time and do a number of other
things to make sure that each bird tries to stay
upright as much as it can. And as it turns quickly,
it should naturally bank into the curve. So you have your
models now, and they're rotating as you might want. You'd want to animate them. So there's a tool out there
called VAT, or Vertex Animation Textures. And it allows you to take a
fully rigged Skeletal Mesh and bake out the morph
targets for that mesh into a Texture that can be used
inside of Unreal as a World position offset for one
Static Mesh's vertex positions to provide you
with very natural motion. There are additional plugins
out there that can also be used, but this one works best
when a mesh deforms in a very organic manner. So if you'd like
to know more, you can search for vertex
animation tools online and probably
find the link. It's also here right now. So we can talk more about
the swarms now, the insects. A lot of what you've
seen so far also gets applied to the swarms. Insects, as they're
crawling along surfaces, have to orient themselves
to those surfaces. So we actually use the
same set of Modules that you saw for the
flocks in the swarms. So first what we
need to do is we need to place the
particles on the surface. So we reference the global
distance field again. And we find-- we
first spawn particles within a box, just randomly. And then any surf-- any particle that's too
far away from a surface we just kill right away. It's called rejection sampling. And then we take the
remaining particles and we move them just to the
surface, as you've seen before. And then, whenever
we move the particles or update their
location in the future, we perform this operation again. We don't kill them
again, but we always snap them back to the surface.
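The kill test itself is tiny, something like the sketch below, where MaxSurfaceDistance is an assumed spawn threshold; the survivors are then snapped onto the surface with the same gradient trick shown earlier.

```hlsl
// Stand-in for Niagara's global distance field query.
float SampleGlobalDistanceField(float3 WorldPos);

// Rejection sampling at spawn: only keep particles that start close to a surface.
bool ShouldKeepSpawnedParticle(float3 SpawnPos, float MaxSurfaceDistance)
{
    return SampleGlobalDistanceField(SpawnPos) < MaxSurfaceDistance;
}
```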
So as you learn more about advanced particle effects you may want to accomplish
multiple goals at once. And sometimes that requires a
little bit of a forceful hand. So in this case,
we want the insects to move around randomly. But we also want them to
adhere to the surface. So after we allow them
to move around randomly they often become
detached from the surface. And so as a post-process we can
just take their final position and then pull it back. And you'll see that
type of approach time and again throughout this-- throughout these effects. So in this case, we did
something a little bit differently. The insect legs
are rather small. And we had hundreds of
them running around. So modeling those out
would have been too costly. And it wouldn't have yielded the
types of looks that we'd want. So there's this
approach called spline thickening that one can do
inside of the Material Editor that became quite helpful. So the legs of each insect
were actually modeled out as a strip of polygons. And then within the
Material Editor, we took those polygons and
then stretched them out and thickened them
so that regardless of whatever direction you looked
at the insects from their legs always seem to have volume. So that was great, but it
wouldn't have worked well with the previous approach
that we mentioned for the bats. It wouldn't work well for
that animation system. There's an alternate
animation system that we created which
basically attempts to make a Static Mesh move
around as if it were a Skeletal Mesh. You can check out the post
about it on my Twitter account. But you can also look into the
Unreal Engine's extras folder. And you'll find the script
there, along with tutorials. So this shows one Skeletal
Mesh animation being applied to multiple Static Meshes. And that's an additional
benefit that one gets by mimicking
Skeletal Mesh animations. If you were to track the
movement of individual parts, then you wouldn't be able
to apply that animation to other assets easily. And additionally, since we're
tracking bone rotations and location shifts, we can
actually layer other animations on top of it. So when you see the
insect's wings flap, that is actually another-- a second animation that
we've captured that was layered into the walk cycle. In the video that
you saw earlier you might have noticed that
the insects would crawl around slowly and then, as the
player approached them or as the player threw a
light in their direction, they would scatter quickly
or search out an exit. And we might call that
a state machine, then. The insects do one
thing, and then they do another when
they're provoked. So what I've done here
is I've created a struct. And that struct has a
number of properties to it. And I created multiple structs. And as the insects move
from one state to another we slowly lerp between all of
these values that you see here. And each of these
values then later get used in the simulation to
change their behavior patterns. So it becomes a really
nice way to consolidate all of your animations
into one location. So in this one lerp function
I'm actually driving, say, 10 different Modules.
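A compressed illustration of that state blend is below; the field names are invented stand-ins for the real struct properties used in the demo.

```hlsl
// Every behavior parameter lives in a state struct; the simulation reads the blended values.
struct FInsectState
{
    float MoveSpeed;
    float TurnRate;
    float PauseProbability;
};

FInsectState BlendStates(FInsectState Calm, FInsectState Scared, float Alpha)
{
    FInsectState Result;
    Result.MoveSpeed        = lerp(Calm.MoveSpeed,        Scared.MoveSpeed,        Alpha);
    Result.TurnRate         = lerp(Calm.TurnRate,         Scared.TurnRate,         Alpha);
    Result.PauseProbability = lerp(Calm.PauseProbability, Scared.PauseProbability, Alpha);
    return Result;
}
```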
And time to talk about the kernel, I guess. So before we talk about the
specific collision technique that was used for
the kernel we should talk about alternative
collision methods. So here's a method called
continuous collision detection, which basically takes
an Object at point A and then finds out where
it will be at point B and then attempts to
find any collisions that happen within the traversal
from point A to point B. So in this diagram you can
see that this particle has moved toward the right
location, but it's now penetrating the ground. So to solve that issue you
can move the particle back to the location of
penetration, or the moment before it first penetrated. And then you can reflect its
velocity and the remainder of the delta time
for that update and send the
particle on its way. That's actually how
the collision Module inside of Niagara works. But this demo
required collisions that were far more complex. So we used another method,
called position based dynamics. And in this example we
can see that in one frame, two particles are not
colliding with each other. So it's fine. But on the next frame, after
we've done all of our updates, we can see that the blue
particle and the red particle are penetrating each other. So to resolve that what we do
is we move the two particles in opposite directions. And based off of the
mass of each particle, we move them less or more. So if a really heavy Object
hits a really light Object, we're going to move the really
heavy Object a small amount of distance and the light
Object a much larger amount of distance.
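One pairwise correction in that spirit could be sketched like this; the particles are treated as spheres, and the masses and radii are assumed per-particle attributes rather than the demo's actual attribute names.

```hlsl
// Mass-weighted separation of one overlapping pair of sphere particles.
void ResolvePenetration(inout float3 PosA, inout float3 PosB,
                        float RadiusA, float RadiusB, float MassA, float MassB)
{
    float3 Delta = PosB - PosA;
    float Distance = max(length(Delta), 0.0001f);
    float Penetration = (RadiusA + RadiusB) - Distance;
    if (Penetration <= 0.0f)
    {
        return; // Not overlapping, nothing to do.
    }
    float3 Direction = Delta / Distance;

    // Heavier particles move less, lighter particles move more.
    float WeightA = MassB / (MassA + MassB);
    float WeightB = MassA / (MassA + MassB);

    PosA -= Direction * Penetration * WeightA;
    PosB += Direction * Penetration * WeightB;
    // Velocity gets re-derived afterward from how far each particle actually moved this frame.
}
```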
So this can happen many, many times. Say if you had 100
particles all bouncing against each other, in order to
find all of these little issues and to correct
them all you might have to cycle through every
particle several times. So what we can do is we
can find the position that the particle
initially wanted to be at and the position
that the particle ended at. And we can derive the
velocity from that delta. So we can say that over
this small slice of time, this particle went from this
location to this location. So we can say, if it moved this
far over this amount of time, the velocity is X.
And that's basically how the collisions work
inside of the kernel demo. VICTOR: That was good. JON: Thanks, man. So this is just an overview
of the overall implementation inside of Niagara. And it's not meant to
be a bible or anything. It's just meant to give you
an idea of the flow of data and how these different
constraints can be applied to a particle system. So here, when we solve for
Newtonian motion, we're just saying that this
particle is located right here. And it was moving at 10
centimeters per second. And it traveled for one second,
so given the laws of physics, the particle should
be over here. That's great. Now we know where the
particle wants to be. And then we want this particle
to know about its neighbors. So we feed this particle
and all of its neighbors into one of those grids that
we talked about earlier. So this is where position-based
dynamics comes into play. We have this grid. And each particle
looks around and finds all of the other particles
that it could collide with and then corrects those
inter-penetrations. And after that
process is finished we then check to see if
the particle has collided with the World afterward. And if it does, we then pull
the particles out of the surface and update the velocity
based off of that trajectory and animate it for
the rest of the time. So if you were to look very
carefully at the output you may notice that the
particles were fixed up correctly here so that
they weren't penetrating each other, but then if
the particle penetrated the World it could be moved
out of that collision surface, and it could potentially be
penetrating other particles again. But you don't actually see this
in practice, for the most part. One could iteratively go
through this whole cycle again and again,
but it would make it too expensive for games. So this is actually
a decent tradeoff. So the physics behind
moving the particles around is one part of the problem. Another part is that we want
to represent the character. So typically, with
effects, one might just place particles on a mesh's
surface and be done with it. But in this case, since we're
working with kernels of corn that will fill up
an entire Volume, we have to use a
volumetric representation of the character. So for that reason-- this is something that you
do when you make demos. You go off in the deep end. But for that reason,
I cut the character up into multiple parts. Each limb was separated
from the rest of his body. And then I baked out a
distance field for that limb. I could then use
that distance field to pack it with particles. And you can see
that process here. In this case, I'm
only using a box. So I'm spawning a bunch
of particles within a box, giving them volume. And then I'm allowing
position-based dynamics to pull them out of each
other so that they're no longer penetrating each other. And then, after
that process is run you can see that the
particles no longer take the shape of a box. So I can reference that
same distance field that I was talking
about earlier again and I can kill any particle
that's outside of a surface. So you can actually
watch the process happening in this video. So the particles spawn. And then they pull
themselves out of each other. And then they kill off
all the extra elements. So having particles that
kind of look like a character isn't really going to
get you all of the way. What we wanted to do is have
a laser cut through the-- cut through the character
and to detach particles that would have been supported
by those particles that were cut. So luckily, the human skeleton
does provide support structures. And it makes it easy to
create a support system from the alignment of bones. So for every signed distance
field-- say, the hand, or the arm-- I associated those signed
distance fields with a limb. So as you can see, this
arm is associated-- all the particles
in this arm are associated with the shoulder. And his shoulder can
provide me with an arrow, an arrow that's pointed
along the bone direction. The hand can do the same. And the idea is that I
want to find an arrow that traveled across the
body down to the feet, because the feet are
going to be what supports the rest of the character. So we're going to take
a step back and look at the whole process again-- spawning particles
inside of an SDF. I'm using PBD to push them
outside of each other. And then I'm cutting
off all of the excess. And then I'm copying
those particles over to another emitter. And then I'm running
this process on it-- this process, which injects all
of the particles into a grid, and then finds the
nearest neighbors, and then looks along that arrow
that I defined in this step and finds particles
that are in alignment with that arrow
that are close by, and then assigns this
particle its parent. So if this particle is located
at the top of the character and this particle is
located just below it, this particle will attempt to
look downward to find the feet. And the first
particle it will find, which is the closest and
the most in alignment with that vector, will
be the second particle. So this particle will
then say that it relies on the support from this-- from the second
particle, which is just to say that if something
happens to the bottom particle, then something should
happen to the top particle. This video demonstrates
that process. What this emitter
is doing is it's taking the particles that
have been given parents and have a support
system in place and it's randomly
detaching them. So you might notice
a single particle turning red and
then falling off. What we're doing
is we're saying, detach from the
rest of the emitter and just become a position-based
dynamics particle. And then you can see that
happening again and again, randomly. And each time one of those
particles are detached, all of its children
also get detached. So one might ask, how do we
attach something or detach it? So attachment's an
interesting area. There's a lot of
ways to go about it. But the simplest way, and
the way used in this demo, is to just lerp between
two sets of values. So if these particles are
attached to a character, then they should
just remain in place. And their velocities
should be zero. And they shouldn't
be affected or moved by any other particles. So that's what we're doing here. At the very end, after
everything else is done, we're telling these particles,
if they are supported, which is just a
bool that I wrote-- if they are supported,
then stay in place and remove your velocity. And that's it. But if they are not
supported then what we do is, we allow gravity and other
forces to act on the particles.
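In sketch form, that final constraint is just a branch at the end of particle update; the names here are illustrative, not the actual attributes used in the demo.

```hlsl
// End-of-frame constraint: supported particles are pinned, the rest fall back to the sim result.
void ApplySupportConstraint(inout float3 Position, inout float3 Velocity, bool bSupported,
                            float3 AttachedPosition, float3 SimulatedPosition, float3 SimulatedVelocity)
{
    if (bSupported)
    {
        Position = AttachedPosition;  // Stay in place on the character.
        Velocity = float3(0, 0, 0);   // Remove velocity so nothing drags it away.
    }
    else
    {
        Position = SimulatedPosition; // Let gravity and the PBD sim take over.
        Velocity = SimulatedVelocity;
    }
}
```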
So it becomes a fairly simple process when you break it
down in that way. So this is an example of how you
might just completely override a physics simulation if you
wanted to apply a constraint. But you might also just apply
other physics to it instead. So maybe what you'd want to do
is apply a spring constraint to a particle. And then, as soon as that
constraint breaks or it's no longer supported, you
could remove the force from that spring constraint. And then the particle can just
fly off into the distance. I guess we could probably just
watch this over and over again and enjoy his pain. VICTOR: I already did on Twitter the first time you posted it. [LAUGHTER] JON: That's funny. So this became an
interesting problem. As things progressed
I got to the point where I was cutting the
model apart via laser. And since this is all
just math and stuff, it's like, what is a laser? You could say that-- a point in space. You could use a
dot product to find particles that are on a
line from one point in space to another. That'll work. But what if that
line moves quickly? You could do that
multiple times. You could say that
over this frame the line moved from
point A to point B, so check here, check
here, check here-- like, check multiple times. But that's kind of lossy. Never get great
results that way. There's always a
particle or two that'll slip through the cracks. And you can see that
here, in this diagram. So say this is our popcorn
dude, and a laser just happens to miss the center
point of all those kernels. None of the kernels get popped. It's really unrewarding. Another method that we can do
is, we can sweep collision, or basically say, we want to
find every possible collision that occurs from point A to
point B. So in this case, I'm using a triangle, which is
a function that we've written and added to Niagara now. Basically, you can
take any point in space and find out how far away it
is from a triangle in space. This is super
useful for a number of-- a number of reasons. But in this case, it allows
us to create a triangle from where the laser is starting
to where it ends on that frame. And then we're able to find
the distance of every particle in the kernel to the
surface plane of this laser as it's sweeping through space. And once we know what the
distance is between the laser's path and the
individual particles, we can find penetrations
by simply subtracting the radius of the particle. If the laser were two units away
from the center of the particle and the particle's
radius is 5 units, then we can say that the
particle was hit by the laser.
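The check itself reduces to a point-to-triangle distance minus the particle's radius. In the sketch below, DistanceToTriangle is a stand-in for the triangle distance function mentioned above (this is not its real signature), and the triangle is built from the laser's pivot plus its previous and current endpoints.

```hlsl
// Stand-in for a point-to-triangle distance helper; not the real Niagara function signature.
float DistanceToTriangle(float3 Point, float3 A, float3 B, float3 C);

// Did the laser's sweep over this frame clip a kernel?
bool IsKernelHitBySweep(float3 KernelPosition, float KernelRadius,
                        float3 LaserOrigin, float3 PrevLaserEnd, float3 CurrLaserEnd)
{
    // The triangle covers everything the beam swept through between the two frames.
    float Distance = DistanceToTriangle(KernelPosition, LaserOrigin, PrevLaserEnd, CurrLaserEnd);

    // If the swept surface comes closer than the kernel's radius, the kernel was hit.
    return (Distance - KernelRadius) < 0.0f;
}
```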
So one last thing that I wanted to talk about was methods that you can use
to cheaply lay particles out in an organic way without
seeing penetrations-- interpenetrations with
the resulting particles. So given all of these
fancy tools that we have now one might say, I want to-- and this would be
totally valid-- I want to place a bunch
of particles around. And then I want to check this
particle's position compared all the nearest particles. And then I want to
delete any particle that interpenetrates with it. That's one way to do it. Another way to do it
might be to spawn-- even though you want to make
these particles look very organic in the
end, you might want to spawn them in a regular
grid and then randomly kill those particles, and then,
within each grid cell, randomize the location
of those particles. So no particle ever jumps
outside of its grid cell. So there could never be any
cohabitation of particles. And then you can trace
the distance field, find the nearest
penetration point, and then kill the particles
that never penetrated anything, and then retain the
particles that did. So that becomes a very efficient
way of organically distributing your particles
through the World. For instance, this
was used on the bats in the cave in the UE5 demo. So that's it. Sorry if I'm a
little-- VICTOR: That's it? That's it? JON: Yeah. [LAUGHTER] VICTOR:
[INAUDIBLE] there. Thanks a bunch, Jon. We have received quite a
few questions. And we definitely
have some time left. So if you're cool with that,
we can go into questions. JON: Yeah,
that sounds good. VICTOR: All right. So I'm going to try to start off
a little bit with the kernels, since that's where we ended off. And that we'll move back towards
the swarms and the flocks after. Wyeth chimed in in chat. Let's see. He asked me to ask Jon about
how he's doing more than one iteration to fix
up the collisions and how that works in Niagara
using the new features. And he wanted you to talk a
little bit about the simulation stages. JON: Oh, yeah. Simulation stages are great. From this stack you can see that
we have our standard emitter spawn, update, particle spawn,
particle update, and our event handler. And in this specific emitter,
which is a GPU emitter, I've enabled simulation stages. That allows you to add extra
stacks to the overall effect. So in this case, I've added
an extra simulation stage that I've called Populate Grid. And that just takes a
particle and injects it into a neighbor grid. And then I've created another
simulation stage-- just to clarify, we have
multiple simulation stages to delineate operations. When we're populating
a neighbor grid we want every single
particle to go through the process of injecting
itself into the neighbor grid. So you want this
neighbor grid to know where every single particle
is by the end of its operation so that the next
operation can know for sure that it has a fully
populated grid that it's working with. So we've populated the grid. And then I've created
another simulation stage. And this simulation stage
handles the collisions. And each time you have
a simulation stage you can tell it how many
times you want it to operate, how many loops you
want it to perform. So in this case, with the
position-based dynamics, I've only had-- I only have it
looping three times. And a lot of the-- the number of
iterations that you need varies greatly based on
your particular setup. But the kernel, for instance,
only loops 12 times or so. So if I were to
create that simulation and only have it
loop once you might see a large pile
of popcorn kernels laying on top of each other. And then, for each
iteration, each particle would check to find
any penetrations. And it would correct itself. And this is in the case
of, say, one iteration's-- one iteration cycle. It would find a penetration
and correct that penetration, and it would be done with it. But that wouldn't
yield the results that we'd want because after
one penetration is fixed up another has probably started. Just imagine a bunch of spheres
all penetrating each other. And then one thing
corrects itself. That probably means that
another problem has occurred. So if we up the iteration count
we can correct one penetration. And then, as another forms,
you can correct that as well. And onward and so forth. So you obviously want to
limit the number of iterations as much as possible for
performance reasons. But you'll get more
accurate results with the increased number,
or with larger numbers. VICTOR: It's
always a balance, right? JON: Yeah, definitely. VICTOR: The chat was
curious if it might be possible for us to make the slide
deck available for download after the stream. JON: Yeah. That's-- VICTOR: I can
go ahead and help you with the videos
and such and make sure they get uploaded
so they properly play for everyone else, as well. JON: OK, cool. That'd be great. Yeah. I just have it sitting
in my Google Drive, so-- VICTOR: Yeah. It might take a little while,
so not immediately today, but perhaps early
next week we'll try to get it out
for all of you. Let's see. Keep going on the position-based
dynamic stuff here. I'll start off where
those questions started. One moment. Let's see. Here we go. Sedokun asked, "Are the
collisions frame rate dependent? Or better question-- how much
are they frame rate-dependent?" And this was in regards to
the position-based dynamics. JON: So it would
be frame rate dependent because with
position-based dynamics-- I wouldn't say that this
is true with all equations or implementations, but with
this specific one that we've written we take the particles
and we fully update them. And we find out
where the particle would like to be at
the end of that update. And then we run our
position-based dynamic sim off of that new location. So if a particle were
moving really quickly it could potentially jump
from point A to point B and miss a number of
collisions along that path. So for that reason, this
particular implementation could be frame rate dependent. But other than
that, the math still works out, because what we're
doing is we take two particles and we pull them
out of each other. And we derive the time-- or, the velocity based off
of the amount of distance that they traveled over
the given amount of time. So you shouldn't really see
a large amount of popping based off of that reason-- for that reason. VICTOR: Crestycomb was wondering, "Is position-based
dynamics doable on the CPU in any kind of way? Alternatively, is
a ribbon renderer on the GPU something
on the to-do list?" JON: I can't speak
to the ribbon renderer. I know that it's been brought
up once or twice before. Position-based dynamics
would be possible on the CPU. It would just be overly costly
because we don't currently support neighbor
grids on the CPU. It's a GPU-only feature. So you would have to
rely on the event system in order to communicate the
position of each particle to every other particle. And that would be, probably,
too costly to actually run. But it would be a possibility. VICTOR: PartikelStudio
is wondering, "Was the baked distance
field only used for the initial
packing, or did it drive the animations, as well?" JON: That's actually
a really good question. So I think I might
have glossed over that. So in this instance we use
the signed distance field to initially pack the limbs. And after that what
we can do is we can store off the
local space location of every particle compared to
its root or the anchor point. So from then on, as the
character moves around in space all we're doing
is storing off the bone ID inside of the particle. And then, on update, we're
reading the bone's transform. And then we're transforming the
particle into the bone space. So we're basically
attaching the particle to that local space position. So in the end, the
runtime emitter only has to have access
to its local space position in relationship
to the limb and the limb ID that it should be following. But in the actual
simulation it isn't just a slave to that position. It uses that as an
attraction location. So each particle attempts to
reach its local space position, but it doesn't force it. So you might have noticed
in one of the videos, as the character gets up
and he starts moving around, his feet, as they
penetrate the ground, kind of dissolve and
spread out a little bit because their attraction
location, the position that they're
attempting to get to, is beneath the ground plane. And it looks kind of
appealing, I guess. So I think that these
types of loose constraints can be more organic and
interesting than some of the rigid constraints. VICTOR: We had
another follow-up question from PartikelStudio. "Does the stages
run sequentially, or will one stage
finish all its loops before moving to the next one?" And then, "can you
chain two together as loop buddies, et cetera?" JON: You can't
chain them together as loop buddies at the moment. But each stage does
completely finish itself before passing the baton on
to the next simulation stage. So that's really important. And that's why a lot of
this stuff functions the way it does, is because
every stage is known to be completely finished
before the next stage starts. VICTOR: knnthwdrff asked,
"can the kernel collisions be accurate according to the mesh? Or are they generalized
as spheres?" JON: There
would be additional work to make that a possibility. And it's something that I've
been interested in doing at some point. But the math-- the simulation
becomes a little bit more heavy at that point. And some have questioned
whether that's the right thing to do inside of a
particle system. It starts to become more of a
physics simulation territory type of thing. But I think that
there would be value in trying to do, maybe,
soft body sims in the future or something like that. VICTOR: Sedokun
had another question. "Can we fracture
geometry based
on those sphere locations?" JON: So
the question at that point becomes what you're,
exactly, moving around. You can do-- say, if each
particle were represented by a piece of geometry, then you
could technically break apart a piece of-- I don't know, a
model or something. But you'd have to
assign a different mesh to every particle if you
wanted it to be unique. Theoretically, the particles
will move around as spheres and will look fairly realistic. So that's a possibility. VICTOR: Sedokun
had another question. A lot of good questions
today, Sedokun. "Can particles be separated
based on strain between them?" JON: Yeah, actually. So you're able to query all
your neighbors every frame. So if I were to, say, look at
my pairing on a regular basis and ask how far away it
is and what kind of forces are operating on
it, I might want to detach that
connection between them if it grows to be too strong. So you could actually come
up with a very compelling physical simulation
using that method. VICTOR: Let's
go back a little bit, to the flocks and the swarms. Bob'sNotMyUncle asked, "How do
you get particles to collide with distance field GPU
particles on a transparent Material slash mesh
such as water?" JON: Sorry,
could you repeat that? VICTOR: Mm-hmm. "How do you get particles to
collide with distance field GPU particles on a
transparent Material slash mesh such as water?" JON: Oh,
so how would we get flocks to
interact with water? Is that the gist of it? So I guess in one
of these instances there's certain approaches
that you could use. When you're querying the
global distance field, that is the information that
the particles have access to-- basically, their
understanding of the World. So if a given mesh doesn't
contribute to that information then you would have to
rely on another means to emulate that influence. So you could do that through
several different processes. But in this case
what I've done is-- as you can see, these
particles are turning red. And they're detaching them
from the rest of the body. And they're moving around. But they're bouncing against
this invisible surface. And I've actually placed
meshes around this display that contributes to the
global distance field. But it's transparent. So it can be done just
by adding something that contributes to
the global distance field that's not visible. VICTOR: Like a hidden mesh
with collision turned on. JON: Yeah. Actually, collision doesn't
need to be turned on. There are just settings
to be aware of, though. I think it has to cast
some type of shadow, because the global distance
field ignores meshes that do not cast shadows. So that was one of the
features that-- or, that's one of the little
caveats that I learned about while making this. But yeah, I used a
vertex shader to offset the location of the mesh. VICTOR:
DragonHeart00799 asked, "how do you know when
to use CPU or GPU simulation for your particles?" JON: I guess it
depends on what your needs are. In most cases I would
rely on GPU particles because they're very
efficient and they can push a large number rather easily. We also have a great
number of features that are available for GPU particles. There are few things
that are missing at the moment, which I'm
sure we'll get to eventually. But I believe ribbons
is one of them. And events are another. But events can be emulated
through particle-- the particle attribute
reader, which is what we've been
using a lot here. So say if I had a
particle and I wanted it to know about
its neighbors, I would create a particle
attribute reader data interface that would allow me
to, via particle ID or index, read information from the
payload of another particle. So I guess long answer short
is that I would generally steer toward GPU emitters unless
there was something that a GPU emitter did not provide. VICTOR: There was
another question from Sedokun which might relate. "Can we trigger sound events
from particle timeline or events?" JON: You can
actually-- there's actually-- I believe there's a renderer
now or a data interface that plays sound. So you can actually do
it within Niagara itself. So there's that-- there is a-- let's see. It's somewhere around here. VICTOR: A
little quick fly through the content examples. This is what it looks like,
by the way, if you're watching and you haven't downloaded
our content examples yet. All levels, based on
everything from blueprints to particles and-- JON: I'm
sure that Wyeth, who's watching right now,
who's worked on this example, is screaming my name. He's probably like,
it's the other way. Go the other direction. VICTOR: He's typing. "It's a data interface. And there are new content
examples for playing audio. It's in the Niagara hallway." JON: Yeah. So maybe this is it? Or, no. Sorry, I must have gone
past it or something. VICTOR: He's typing
again, trying to give us some-- oh, he said "not in
Niagara_advanced," which is where Jon is. And so I believe it's
the Niagara hallway map. JON: Oh, OK. It must have moved, I guess. VICTOR: And for those of
you wondering about the content examples, you can find
them under the Learn tab in the launcher. They're available for each
and every one of our engine versions. And this particular one
that Jon is in right here will ship when 4.26 comes
out in full release. But a lot-- some of
these things already-- some of the Niagara examples
already exist in 4.25. JON: Yeah, so I think
the goal of this presentation wasn't to connect all of
the lines for everyone so that they would be able to
replicate any of the effects that we're talking about here. But the content examples go a
lot further in that direction. So cracking open any one
of these particle effects will really teach you a lot. It's probably-- in my mind,
would be the preferable way to learn. VICTOR:
And on that topic, we are going to have Wyeth
on this stream later, probably at the
beginning of next year. And we're actually going to
go through some of the content examples specifically. Let's continue with some
more questions here. Sedokun, another good question. "Is spatial hash part of
Niagara or just an overlay built on top of it? As an example, what
controls grid cell spacing?" JON: Oh. So neighbor grid is a DI, a Data
Interface, inside of Niagara. So basically, you
can create a grid. And you specify the number of
cells that are in that grid. And then you have the job
of translating World space positions into this grid space. So we've done that and we've
made that fairly simple for you through a number of Modules. So this is a bare bones
example of how that's done. And I can bring up this emitter. So this one just visualizes
that neighbor grid in 3D space. So I have a 4K monitor
and a 1080p monitor, or a 1080 monitor, right
next to each other. And I'm trying to figure out
how to get this over there. There it goes. So here I'm using the
initialize neighbor grid Module, which both
defines a neighbor grid, which looks like this-- it asks you how
many neighbors you-- what the maximum
number of neighbors is per individual cell. And 15 is really high,
but it's just a demo, so it doesn't matter too much. And then the number of cells
on X, Y, and Z that you'd like. And then we have
this function here which allows you to give the
grid a location, a rotation, and a scale. And that pretty much
defines the transform that you need to move
particles from World space into your grid space. So this particular Module isn't
a part of the library just yet. It's going to go through
some additional iteration before it gets
contributed to it. But it is available
within the 4.26 Project. And you could potentially
move it out of there and move it into another branch. But yeah, so just to
close that loop there, this Module creates a
number of transforms that you can reference to
transform World space into grid space. And it also creates
the grid itself. And the grid Object is something
that you interact with. And you tell it to-- you add particles to it
and, basically, the grid stores off particle IDs that
you can reference later. VICTOR: hiroyanyade asked,
"Last time I looked at the
neighbor grid 3D stuff and nodes that came with it
they had to be used together with custom HLSL scripts. Are these HLSL blocks going
to be proper Niagara nodes?" JON:
There's still a lot of work being done in
that area, so the scripts that we've written so far
aren't being incorporated into the package yet. But we'd like to maybe
refine a few UX-- a few rough edges,
and then I think that we will bring those
over into the library. But yeah, for the moment,
right now, you mostly interact with it inside of HLSL. But there are the nodes-- you can do things inside of
the node graph if you like. I guess it's just-- comes down to preference. VICTOR: Jumping a
little bit between the subjects here, but there's
also other cover, and we've received questions
throughout the stream. "What's the rough
cost for having a Niagara boid simulation? How does the cost increase as
the particle count goes up?" JON: I haven't run
a cost analysis on that yet. But the PBD sim was much
less than a millisecond. So that had many more Actors and
it was doing a lot more work, so I imagine that the boid
sim would actually be cheaper. VICTOR: In
regards to the vertex offsets being baked
into Textures, the question from
ALegend4ever is, "When you have a lerp
driving that many animations, wouldn't that be
resource intensive? And if not, how does
it keep it down?" JON: So inside
of the material graph it's referencing
a single Texture. And each line
within that Texture is a frame of animation. So the material graph
actually just reads one line and then reads the next line. And then, over time,
lerps between them. And then, as soon as
one line finishes up, or one frame finishes
up, then it just jumps down to the next
line and then lerps again. So it just keeps on doing that. So the overall costs-- the runtime cost of performing
that operation is very small. And the costs of the
Textures, I guess, wouldn't be insignificant,
because you would need-- you need HDR
Textures, basically, to store vertex positions. But it depends on your Project
and what your needs are. I've shipped many
Projects using them, and it wasn't really an issue. VICTOR: Are you
good to take a couple more questions before we wrap up? JON: Yeah, sure. VICTOR: All right. Let's keep going. This is good. HeliusFlame was
wondering, "how is the interaction between the
light and the bugs built?" JON: Oh, that's cool. Yeah, there's actually
a Module that I created. And let me see if I
can find the video. So basically what I did was I
mathematically modeled a cone. And I made it into a
Module within the library. So I believe it's
called Avoid Cone. But let's see. Yeah. So you can see that there's
a Module called Avoid Cone. And it basically
asks you for the apex of the cone, which is the pivot
point, maybe, and an axis, and then the angle of the cone. And it'll create a
falloff, a 0 to 1 falloff, as to whether a particle
is within that cone or not. And then it also creates a few
additional vectors for you. It finds the closest vector
to the axis from your particle position. So you can use that
to push particles away from the cone in the most
efficient way possible.
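A rough sketch of that cone test is below. Apex, Axis (assumed normalized), and CosHalfAngle stand in for the Module's inputs, and the hard 0/1 result here is a simplification of the smooth falloff the actual Module produces.

```hlsl
// Returns a push direction (xyz) and 1 if the particle is inside the cone, 0 otherwise (w).
float4 AvoidCone(float3 ParticlePos, float3 Apex, float3 Axis, float CosHalfAngle)
{
    float3 ToParticle = ParticlePos - Apex;
    float DistAlongAxis = dot(ToParticle, Axis);

    // Closest point on the cone's axis to the particle.
    float3 ClosestOnAxis = Apex + Axis * DistAlongAxis;
    float3 AwayFromAxis = ParticlePos - ClosestOnAxis;

    // Compare the particle's angle from the axis with the cone's half angle.
    float CosAngle = DistAlongAxis / max(length(ToParticle), 0.0001f);
    float InsideCone = (DistAlongAxis > 0.0f && CosAngle > CosHalfAngle) ? 1.0f : 0.0f;

    // Pushing straight away from the axis is the shortest route out of the cone.
    float3 PushDirection = AwayFromAxis / max(length(AwayFromAxis), 0.0001f);
    return float4(PushDirection, InsideCone);
}
```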
So in that video that you saw within UE5, as the character is walking through the dark hallway
and our flashlight is shining on the bugs, we're just
mathematically recreating that flashlight. And each bug is checking to
see if it's inside of that cone or not. And if it is, it finds the
most efficient route outside of the cone and then
uses that as a force to push the particles away. And it also changes
the insect state. So instead of just
wandering around, it enters a scared state
and it moves quickly. If the insect is
wandering around, usually, just randomly over time,
it'll walk a little bit, and it'll stop and chill out. And it'll do that
over and over again. But when it's scared
it doesn't stop. It just runs faster than usual. Its movement is more chaotic. And it looks for an exit, too. But that's another
story, I guess. Oh, I just wanted to
bring up one more thing. I think it leads
into an area that we didn't discuss that much. And it's the use of constraints
throughout your effect. So I mentioned a
constraint that could be applied to an effect at
the very end with the PBD sim. I was saying you can
run this physics sim, and then at the very
end you can choose to either lock the
particles to the mesh or you can run the physics sim. In this case, I
wanted to make sure that the bugs behaved
in a very realistic way. And I was finding that if I just
relied on a single constraint at the very end you
might find insects that would jump out of
geometry or clear small walls or something like that. So with the flashlight,
for instance, as I find the most efficient
route outside of the cone I also look for the nearest
distance field normal. So say this is the insect
and this is the ground plane and this is the cone. And the light is
telling the insect to move this way, which would
push it through the ground. I actually read
the distance field normal, which is right here,
and the distance field position. And I say, instead of
pointing down here, snap to the ground and
push along the surface. So that's a-- that'll be another
demo that will be included in the content examples. VICTOR: Let's see. 1bulletpr00f was asking,
"how performance heavy is this? Is it really possible
to use something like kernels example in a game?" JON:
Like I was saying, the kernel example was less than
a millisecond on my machine. So the final
rendered version was using ray tracing,
which was actually the majority of the cost. So I imagine if you had
a millisecond of time available to you, you
would be able to do that. VICTOR:
Yeah, that seems-- absolutely seems doable. It also depends-- if your game
is all about blasting lasers at popcorn, dude, one,
then you can probably go pretty heavy, right? JON: Yeah. VICTOR: Twice as big. Ten times as big. jsafx asked,
"In Cascade, there was an option to stop rotation
upon colliding with the ground. The Niagara collision Module
doesn't seem to have it." JON: OK, so yeah, I
would have to look into that, I guess. VICTOR: We'll follow
up on that one, folks. See here. Get into some of the
last questions that have come in as we've been
answering the other ones. Let's see. knnthwdrff asked, "regarding
attribute reading, do you use a Tick in another
Actor to query that?" JON: No. It all happens within Niagara. So just on every
frame of execution you can perform
operations and particle update or anywhere else. And an attribute reading is
another one of those operations that you can perform. VICTOR: Let's see. JON: The
Niagara attempts to make the most intelligent
decision possible. So if you have two emitters
and one reads from another, it will find that
dependency and it will make the emitter that will
be read from execute first. And then the second, the
reader, will be executed second. VICTOR: PartikelStudio
asked, "are there any examples of emulated events
using the particle reader?" JON: Oh. Yeah, actually, I wonder if
Wyeth can answer that question. I know that he worked on that. VICTOR: Cool. We'll let him write
that up for me. And then I will get
to the next one. Let me scroll up, make sure I
didn't miss anything earlier. Sedokun was wondering, "Is Chaos
using distance field collisions as well?" JON:
Yeah, it's using it. But it's different than-- well, actually, I shouldn't-- I don't think I
should talk about it. Never mind. It's not an area of mine. VICTOR: We'll leave
that one for the Chaos team, which is coming up. And you talked about cone casting. Let's see. "In regards to
position-based dynamics, does PBD check each
particle predicted position against stationary
position--" oh, OK, sorry. And I'm about to sneeze. It's tough. "Does position-based
dynamics check each particle predicted position
against stationary position of other particles? What if two particles
move towards each other? Would they intersect
for a single frame?" JON:
That's actually taken care of by the iteration
loop that we were talking about a little bit earlier. So say if one particle's
stationary and then another particle is moving
and the two would penetrate each other, at the end of
the position update, we start looking
for intersections. And we attempt to correct
any of those intersections multiple times in
order to make sure that, as one intersection
is corrected, another doesn't form. So it should operate correctly.
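A minimal, standalone sketch of that iteration loop (the general position-based dynamics pattern, not the exact Niagara emitter setup, with illustrative names) could look like this:

```cpp
// Sketch of the PBD iteration described above: predict positions from
// velocity, then repeatedly sweep for overlapping particle pairs and push
// them apart, so that fixing one contact can't silently create another.
#include <array>
#include <cmath>
#include <vector>

struct Particle
{
    std::array<float, 3> Pos;   // current position
    std::array<float, 3> Vel;   // velocity
};

void SimulatePBD(std::vector<Particle>& Particles, float Radius, float Dt, int Iterations)
{
    const float MinDist = 2.f * Radius;

    // 1. Predict where each particle wants to be this frame.
    std::vector<std::array<float, 3>> Predicted(Particles.size());
    for (size_t i = 0; i < Particles.size(); ++i)
        for (int a = 0; a < 3; ++a)
            Predicted[i][a] = Particles[i].Pos[a] + Particles[i].Vel[a] * Dt;

    // 2. Iterate the contact constraint several times: correcting one overlap
    //    can create a new one, so we sweep the pairs again and again.
    for (int It = 0; It < Iterations; ++It)
    {
        for (size_t i = 0; i < Predicted.size(); ++i)
        {
            for (size_t j = i + 1; j < Predicted.size(); ++j)
            {
                const float Dx = Predicted[j][0] - Predicted[i][0];
                const float Dy = Predicted[j][1] - Predicted[i][1];
                const float Dz = Predicted[j][2] - Predicted[i][2];
                const float Dist = std::sqrt(Dx * Dx + Dy * Dy + Dz * Dz);
                if (Dist >= MinDist || Dist < 1e-6f) { continue; }

                // Push both particles apart along the contact normal, each
                // taking half of the penetration depth.
                const float Push = 0.5f * (MinDist - Dist) / Dist;
                Predicted[i][0] -= Dx * Push;  Predicted[j][0] += Dx * Push;
                Predicted[i][1] -= Dy * Push;  Predicted[j][1] += Dy * Push;
                Predicted[i][2] -= Dz * Push;  Predicted[j][2] += Dz * Push;
            }
        }
    }

    // 3. Derive velocity from the corrected motion and commit the positions.
    for (size_t i = 0; i < Particles.size(); ++i)
        for (int a = 0; a < 3; ++a)
        {
            Particles[i].Vel[a] = (Predicted[i][a] - Particles[i].Pos[a]) / Dt;
            Particles[i].Pos[a] = Predicted[i][a];
        }
}
```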
In one case, in one of my videos, you might have seen one
particle going through another. And that was because one
emitter is siloed off from another emitter. So I actually had multiple
emitters sitting next to each other. And one was using a PBD sim,
and the others were not. So in that case, where-- think it's over here. Yeah. So this is one emitter. And then this is
another emitter. And then that's another emitter. So if you see any
penetrations there, it's because they're not
looking for each other. VICTOR: All
right, all right. Let's see. One brief check over. I think that's it for now. We're also at the one
and a half hour mark. That's actually pretty short
for-- some of the last few that I've done have been
almost two and a half hours, running up on three hours. So that's great. Go ahead and manage this. One last question for you, Jon. Mugs17 asked, "Are
there any resources that you would
recommend to learn about particles or Niagara? Perhaps Jon has any
personal recommendations." JON: Oh, there's
this coding training website that I like. Let's see. Oh, processing.org, I
believe, is a good reference that I sometimes rely
on for information. Cool. VICTOR: Awesome. Well, Jon, it's been a pleasure
to have you on the stream. I hope everyone out there
had a good time checking out some of the Niagara systems
that were made for the UE5 demo. It's exciting, especially since
you can actually work on them or use all of these
tools already in UE4. There's no reason why-- if
you're interested in this, you don't have to wait for UE5. Just download the 4.26 preview
and start digging into it. Going to do my little
outro spiel here. If you've been watching from
the start of the stream, thanks so much for
hanging out with us. We do these every Thursday
at 2:00 PM Eastern Time. Next week I actually have the
team behind What Remains of-- or, not the entire team, but a large percentage of it. Ian Dallas and Brandon Martynowicz are coming on to
talk about the art behind building What
Remains of Edith Finch. If you haven't played
that game, I highly recommend you do so
before next Thursday if you're interested
in watching the stream. I would almost put it in my top 10 games of all time, because it's such an original game. I barely want to call it a game. Anyway, they're going to
be on the stream next week. If you're interested in
that, make sure you tune in. And we have a little
surprise for all of you-- something that's going
to be announced very close to the stream next week. If you are curious about
some of the terminology or you don't remember
when we said something throughout the stream,
we have Courtney
behind the scenes. And she writes a transcript
for the entire stream. What that means is that you
can go ahead and download that transcript. And there are timestamps next
to all of these sentences that were said. And so if you remember that
we spoke about something and you want to look it up,
even if the stream was only an hour and a half, it might
still be a little bit difficult to know where that was. And you can download that
transcript, Control-F, and you can search
for the terminology that you're looking for and
find all of the occasions when either Jon or me mentioned
that throughout the stream. We do a survey every week. I think it's going-- if it
hasn't been pasted in the chat yet, it will be pretty
much as soon as I say it. Please let us know what you
thought about today's topic, what you would like to see in
the future, and how we did. It's very important for us to
know what you'd like to see. And we take that very
close to our hearts. We are still-- virtual
meet ups are still happening around the world. If you go to
communities.unrealengine.com, you can find a
meet-up group that is either close to you
or perhaps somewhere where you would like to move to. Check out the land. They are usually throwing
meet ups on Discord, since we are not able to
see each other in person at the moment. If there is no community
meet-up in your area, there's a little
button that allows you to request to become a leader. Fill that form out, and we will
hopefully get in touch with you as soon as possible. Also, make sure you
check out our forums. There's a great community
Discord called Unreal Slackers, UnrealSlackers.org. Plenty of people to chat
with, talk about UE4. And go ahead and jump
into that voice chat. I was there the other week,
and it's a good time just to hang out with some people. Also, of course, Facebook,
Reddit, Twitter, social media, LinkedIn-- you got it. We're on all the places. And that's where you'll find
all the latest and greatest news from Unreal Engine. If you would like to be
featured as one of our community spotlights at the
beginning of the stream, make sure you let us know
what you're working on. The forums are a great place. The Discord's
another good place. ArtStation has a whole Unreal
Engine category right now. They're all fantastic places. Or you can just
add us on Twitter. They're all good
places to let us know. We want to see what you're
working on, because it's usually very exciting. You all are so great. We also have countdown videos
that we do for every stream. They are generally 30
minutes of development. Fast forward that to five
minutes and send a video to us and you might be featured
as one of our countdowns. Don't put anything on
it-- your logo, et cetera. Just send it to us. And we will edit that together. If you stream on
Twitch make sure that you use the Unreal
Engine tag as well as the game development tag. Those are the two best
ways to be featured or for people to find you
if you're specifically streaming Unreal Engine
development on Twitch. Make sure-- if you're
watching this on YouTube after the stream was live, make
sure you hit that notification bell. And you will get a notification when all of our content is
being uploaded to YouTube. Sometimes they're big drops. Like the Unreal
Fest Online, I think we pushed almost 50
videos from all the talks that were uploaded. So there's a lot of good
content on our YouTube channel if you're looking for it. I already mentioned
that next week we have the team from Giant
Sparrow, who developed What Remains of Edith Finch. They're going to talk a bit
about the environment art and how they were able
to put that together. The forum post is
already up, so you can go and check
that out if you're interested, or already start
asking questions for the team if you would like to. And as always, thanks to our
guest today, Jon. It's been a pleasure. Hope to get to see you
sometime on the stream, maybe next time with Wyeth. JON: Yeah. VICTOR: Yeah, we're going
through the content examples. Anything else you want
to leave the audience with before we tune out? JON: Oh, yeah. In addition to
processing.org, I think that The Coding Train
on YouTube is great. That's it. VICTOR: Awesome. Well, with those
words we're going to say goodbye to all of you. We hope you're staying
safe out there. And we'll see you
again next week. Bye, everyone. JON: Bye now.