>>Jurre de Barre: Hi.
First of all, thank you for coming this early
to the first talk of the day. This talk will be
about new animation features we have added to the Unreal
Engine over the past year. I will mainly try to talk about
why we added these features, the problems we ran into,
and how we tried to tackle them. Even though
most of these features were driven by Fortnite, they should be applicable
to any size project or game you are working on. First of all,
a little bit about me. My name is Jurre de Barre. I am an Animation
Programmer at Epic, and I have been there for about four years now. To start off
this presentation, I would like to give you
some insight into the anatomy
of a Fortnite Character, and that will give you
a reference frame for other features
in this presentation. A Character Skin
is actually composed of multiple Skeletal Meshes, the first one being
the Base Skeleton. We also call this
the invisible one because it is actually
a Skeletal Mesh that does not have
any skin geometry. It is basically
just a bare Skeleton. On top of that Skeleton, we layer these
individual Character parts that make up
a total skin for Fortnite. At a minimum, you would need a head and a body, but it could contain
any other parts such as hats, backpacks or any accessories
that the Character uses. That is what it looks like. We actually have multiple types
of these Base Skeletons, and that is to make sure we can accommodate the
different sizes of Characters, but also differentiate between
the different genders as well. Something to note about this
is that the hierarchy - the topology of the bones
of these Skeletons are all exactly the same. We actually lock the rotations in Maya, and that is because that helps us when we do our Runtime Retargeting. But that means the translations in the bones can differ - that is what gives each Character its different shape and proportions. Now we have a Mesh,
now we have a Skeleton, so now we need to animate it. Our basic Character Animation
is driven by a single animation Blueprint that is run
on this Base Skeleton. When I am talking about
basic Character Animation, it is locomotion, using weapons,
or playing emotes. In our case, most of the
animations are actually authored on the medium sized
Base Skeleton. Then at runtime,
we actually retarget it whenever this Animation Blueprint
is used on a different Skeleton than the one the animation
was authored on. This gives us an overall pose
for the Character, the main Character Animation.
Then the individual Character parts use that animation
from the Base Skeleton, and we can actually copy that
because the Character parts share a subset of the
Base Skeleton’s hierarchy, so we can directly copy
over that pose data. Then on top of that
the Character parts have the ability
to have leaf joints, and that allows for adding
additional animation for the specific parts
such as facial animation, hair, clothes, or other stuff. Each of these
individual Character parts will then have their own
Animation Blueprint that drives the animation data
for those leaf joints and adds
the procedural animation. As you can see in the image,
this is an animation graph where we take the copied
pose from the Base Mesh, and then in the middle
we run some AnimDynamics to add the procedural animation, and that is the final pose
for this Character part. That is just an overview of how
we build our Characters, and that brings me
to our first feature I want to talk about today, which is Control Rig
and how we use it to ship a specific
Character on Fortnite. As you might have noticed,
this is the Werewolf skin. It was a big beat in Season 6,
but introduced something new, which is that it has dog legs, which means it has
three bones in its legs rather than two
on the human Character. Now the question is, how can
we still share animations that are authored
for a human leg and make sure it runs
on this three-boned Character? Because we do not want
to throw away our work for all of these animations. This is what the Character part
looks like. What we did, we added the three bones
to the Character part itself and kept the original
human bones as well so we can transfer
the animation at runtime. That means that the geometry
of the part is actually skinned to the dog bones
rather than the Character bones. This is what it actually
looks like if you just use the copy pose. You can see
the Character’s motion is basically working
except for the legs, and that is because the legs are
of course skinned to the dogleg rather than the Character leg. The first thing we tried
was to use IK to drive the animation
of the dog bones. But as you can see,
that falls apart really quickly, and in even more extreme poses,
it almost looks like it is a puppet that we are
pulling on with strings. What we need to fix is basically
some custom retargeting because we want to go from
the human leg to the dogleg. Then we can always use IK
to make that pose better. The feature we used
was Control Rig. Just a quick refresh
on what Control Rig is. Control Rig is our in-Engine
rigging system. It is actually graph-based. The nodes in this graph
are called Rig Units. They have the ability to modify
transforms for a specific pose. Whenever you create
one of these Control Rigs, you can actually use it
in the Animation Blueprints and it just shows up
as a regular node, and it will take a pose
as an input and then will run the Rig Units to modify this pose according
to your predefined behavior and then output the pose
when it is finished. Why did we use Control Rig
to fix this? Of course, you could say, well,
this is a specific problem we are having
with this Character, and we could fix it with creating
a new native animation node to provide a specific solution. But because we are always
working on so many skins, we want to make sure
that our pipeline is flexible and make sure that
whenever we run into this issue, we provide a solution that we can reuse on any
different problem that comes up. Control Rig is basically
what we need, because we need to be able
to manipulate the bone transform to do
this simple retargeting. Just to summarize, Control Rig
gives you a scriptable animation node that allows you to manipulate bones. In doing that, you can create procedural animation, do retargeting
which we need in this case, or actually any rigging as well.
The really cool part of this is whenever
we compile a Control Rig, it compiles down to native code. That means that we do not
actually need to run any byte code on our VM such as when you are using
an Animation Blueprint or a regular Blueprint.
This makes it very fast as well. To fix this specific problem, we take the relative
transformation between the bones
in the Character leg and the dog leg
in the bind pose, and then we use that
to transform the animation data to make sure we can transfer
the animation from the Character leg
to the dogleg. This is what it looks
like visually. What we are doing here is we are taking
the information for Bone 1 and 2 and moving it over
to the dogleg. Because the third bone
does not actually exist, we do not actually
have the ability to transfer any animation, so we will need to solve that
in a different way. Then the foot bone
is still the same, so we can transfer that as well.
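To make that retargeting step concrete, here is a minimal sketch of the bind-pose delta idea, assuming component-space transforms; it is not the actual Control Rig graph, and the bone naming is illustrative:

```cpp
// Sketch only: capture the bind-pose delta between a human leg bone and its dog-leg
// counterpart once, then re-apply that delta to the animated human bone every frame.
// Component-space FTransforms are assumed; names are illustrative.
FTransform ComputeBindPoseDelta(const FTransform& DogBoneBindCS, const FTransform& HumanBoneBindCS)
{
	// Delta such that Delta * HumanBoneBindCS equals DogBoneBindCS.
	return DogBoneBindCS.GetRelativeTransform(HumanBoneBindCS);
}

FTransform RetargetHumanToDogBone(const FTransform& AnimatedHumanBoneCS, const FTransform& BindPoseDelta)
{
	// Keep the same relative relationship the two bones had in the bind pose.
	return BindPoseDelta * AnimatedHumanBoneCS;
}
```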
As you can see, the pose looks much better now
in comparison to just using IK. If you look at,
for example, the feet, you can see the dog leg bones actually do not reach
the targets that were actually defined
in the Character Animation. What we do is we add IK to make
sure we hit those targets. This is what the final pose
looks like in the game. This is the extreme
pose as well. The really cool part
is that this solution was completely created
by our technical animators. That means there was not any need for an Engineer to create this native node, or to go back and forth to find this behavior. Normally you would be working with the technical animator, creating a native solution, and then it might not be totally what they are looking for, so you need to go back and forth. But this actually shows
how powerful it is to just completely be able
to do this in the Engine. On the right, you can actually
see the Control Rig that does this retargeting. This is actually
a relatively simple example. It can do a lot more. One of the things is
one of our technical animators created a Biped Rig
using this system. If you want to have
a deeper dive, I totally recommend checking out
the Enhancing Animation with Control Rig video
which is on our YouTube channel. Or, if you want to have some
hands-on experience, it is available in early access
in 4.22. That is Control Rig. Next up is our
Runtime Retargeting. As I already mentioned,
we use Runtime Retargeting whenever we are playing back
an animation that is authored
on a different Skeleton than the one we are currently running the animation for. There were some improvements requested by our animators in terms of what we could do with it. The main thing is being able
to do squash and stretch. That was mainly because
the animators wanted to be able to push that cartoon feel
of the game that is Fortnite. But because we did not actually
have any support for animated bone translations, you can see in this specific example that whenever the Character lands, the spine compresses on the medium male. But whenever the retargeting
is done to the large male, this is completely lost. That is because of the different
scales of the Skeleton. We actually do our retargeting
in two steps. The first step, make sure
that we transfer the animation from the Source Skeleton
to the Target Skeleton. That is set up
in the Animation Blueprint. As you can see, there are a couple of animation
sequence players here. Each of them has a property
called Retarget Source which basically maps
to the Skeleton this animation was authored on. Whenever that Skeleton
is different than the base Skeleton this Animation
Blueprint is running for, we need to do some retargeting. But it also allows us
for a lot of flexibility, because you can use animations
that were authored on all types
of these Base Skeletons and run them in the same graph,
which makes it very flexible. Then the second step is we use
Space Switching and IK to fix up contact points
which might get lost whenever we transfer between
different sizes of Characters. What do we actually need
to retarget? I mentioned earlier
we locked rotations between all of these
Base Skeletons, so we actually do not need
to do any retargeting there. The scale gets picked up
because of the proportions between the different Skeletons. That means translation
is the last thing that remains. We used to have three modes
to actually do this retargeting. The first one is Skeleton, and that means it takes
the translation from the Skeleton
in bind pose. For this specific example,
what we are trying to do is we are trying
to pull the clavicle bone up. But because the bind pose is
static, nothing actually happens.
The second mode is Animation. This will take the translation
from the animation data, but that means that between all
of those different Skeletons, the bones will always be
in the same place because we are just using
the translation and we are not doing
anything special with it. That means we had this extra
mode called Animation Scaled. This means we are still using
the translation data from the animation, but we are now scaling
the length of the vector according to the ratio
between the Base Skeleton and the Retarget Skeleton. That makes it
so it is offset correctly. But this still left
something to do, because now we have
the correct length but not the correct orientation. We simply added something called Orient and Scale, which reorients the translation and scales it as well, using a delta between the bind poses of the Source and Target Skeletons. This was already available in 4.20. This is what we mainly use on most of our bones on our Characters, and whenever we have a bone that does not need retargeting, we use the Animation Mode, for example, in this case, on the root.
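As a rough sketch of those last two translation modes, assuming local-space bone translations and illustrative helper names (this is not the engine source):

```cpp
// Sketch only: the Animation Scaled and Orient and Scale ideas for a single bone's
// local translation. SourceBind/TargetBind are the bone's bind-pose translations on
// the source and target Skeletons.
FVector RetargetTranslation_AnimationScaled(const FVector& AnimTranslation,
                                            const FVector& SourceBind,
                                            const FVector& TargetBind)
{
	const float SourceLen = SourceBind.Size();
	const float TargetLen = TargetBind.Size();
	// Scale the length of the animated translation by the bind-pose length ratio.
	return (SourceLen > SMALL_NUMBER) ? AnimTranslation * (TargetLen / SourceLen) : AnimTranslation;
}

FVector RetargetTranslation_OrientAndScale(const FVector& AnimTranslation,
                                           const FVector& SourceBind,
                                           const FVector& TargetBind)
{
	// Delta rotation between the source and target bind-pose translation directions.
	const FQuat DeltaRotation = FQuat::FindBetweenVectors(SourceBind, TargetBind);
	const float SourceLen = SourceBind.Size();
	const float TargetLen = TargetBind.Size();
	const float Scale = (SourceLen > SMALL_NUMBER) ? (TargetLen / SourceLen) : 1.0f;
	// Reorient the animated translation and scale its length.
	return DeltaRotation.RotateVector(AnimTranslation) * Scale;
}
```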
This is what the same example looks like. If you look on the right, you can now see
whenever he lands, we actually get that squash
effect working as well, which was something
that we were looking for. But whenever you start
retargeting animations from a smaller
to a larger Character, you might start to miss
these contact points. In this example,
there is a clapping emote where on the medium male,
the hands actually reach. But because
the larger male character has much broader shoulders,
the hands do not actually reach, so we need to fix
this up as well. As I mentioned, we use Space
Switching and IK for this. In this specific example, normally the hands
are independent of each other, so we do not have
any information for it and it is just driven
by animation. What we did to actually
solve this problem is we added a bone that describes the relationship
between the two hands. That means that we now
have a bone that describes where
the hands should be. In the example GIF, on the animation transfer one, you can see that the bone in orange is actually showing where the hand should be, and the white hands do not reach those targets yet. But when we use
the Space Switching IK, we can now make sure
we drive the hands using IK and make sure
that they still clap, so to say. Space Switching basically means
you are changing the relationship
between bones to do this. We actually drive the Space
Switching and IK using curves. Those curves are just stored
on the animation sequence. Then they are used inside
of our Animation Blueprints to either drive Animation Nodes
or Rig Units to do this specific IK. Just a quick recap. This is
our entire retargeting process. We start with the source
animation data. If you only use the copy pose, you can see that it is a really
scrunched-looking Character. His shoulders are looking weird. His wrists
are doing something funny. That is where the Orient and Scale comes in. Now we have the correct pose, but we start to see we are
missing these contact points. Finally, the Space Switching IK
gets us there. We actually use Space Switching for other things
in Fortnite as well. In this example,
there is parent-child bone fighting in this animation.
It is because the guitar was parented to the arm bone or the hand bone. Because we have these large
swinging motions, whenever we are interpolating
between those keys, there might be
some precision issues which causes
this jittering effect. As I mentioned, because we use
this more extensively, we wanted to be able
to have something to fix this up in the Engine. Back in 4.14,
we added virtual bones. Virtual bones allow you
to define a new relationship between bones,
just as we did with the clap. You can do this in the Engine, so you do not need to reimport
any animation data or go back to Maya
to fix this up. For this specific case, what we did is we reparented the guitar to be in chest space. Because the chest is more static and will not have this large swinging motion, it is much more still. To fix up the hands, we transfer the hands to be in guitar space so we actually get the correct motion.
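The underlying space-switching math is just a relative transform; a minimal sketch, assuming component-space transforms and illustrative bone names:

```cpp
// Sketch only: express the guitar bone relative to the chest instead of the hand.
// GetRelativeTransform is real FTransform API; the bone naming is illustrative.
FTransform GetGuitarInChestSpace(const FTransform& GuitarComponentSpace,
                                 const FTransform& ChestComponentSpace)
{
	// Result such that Result * ChestComponentSpace equals GuitarComponentSpace.
	return GuitarComponentSpace.GetRelativeTransform(ChestComponentSpace);
}
```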
This is actually what it looks like in the Anim Blueprint. This again just shows
how powerful Animation Blueprints are and Blueprints
in general, because content creators
are very flexible in what behavior they want and also are able to iterate
really quickly. Which brings me to this slide
called “The dilemma with Blueprints”. As I mentioned,
they are very powerful. They allow content creators
to iterate very quickly, get a behavior
that they are looking for without any Engineering
intervention, so to say. But the big downside
is executing them is slow. That is simply because they get
compiled down to byte code. It runs on our virtual machine.
As most of you know, running anything
on a virtual machine will be slower
than actually natively. Because on Fortnite,
performance is critical. We are shipping to all these
different types of platforms. We actually had to do
hand nativization. This means the technical
animator creates the behavior, creates the Animation Blueprint, and then it is handed off
to an Engineer who recreates
that behavior in C++, and then we swap out
the Animation Blueprints. This gives us the perf
increase we are looking for, but it also means
that we are introducing a bottleneck on Engineering. It basically means, how many
programmers can you dedicate to nativizing Animation
Blueprints? That will actually determine
how fast your game is running. We tried to make
some improvements to reduce the overall overhead
of running Animation Blueprints. As of 4.20, you are now able
to run Animation Nodes natively driven
by float, bool, or Curve values. That means it removes
the overhead from converting
or fetching data in Blueprints. This is an example of it.
As you can also see, it reduces the amount of clutter
in your graph as well, because normally you would need
three nodes to do this. You get the Curve value, and then you would negate it
to get a correct alpha range and then you feed
it into your IK node. But that means that the first two nodes are actually executed in Blueprint, which makes it slow. On the right is
the exact same behavior, but just as a single node. As you can see,
we define the name of the Curve. You can actually do
any transformations on the value
of the Curve as well. If you have been
using this before, you might recognize
the little lightning icon which indicates that it is
running on the fast path which means it runs
completely natively and does not introduce
any Blueprints VM overhead. This actually makes this
a little bit more flexible. We added different methods
of processing or transforming the value
that is read. Before, we had bias and scale
and clamping. In 4.20, we also added range
remapping and interpolation so you can do all this natively
rather than needing to do this in a Blueprint.
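Conceptually, the bias, scale, clamp, and range remapping settings boil down to something like the following sketch; the names are illustrative, not the engine's input scale/bias implementation:

```cpp
// Sketch only: roughly what the bias, scale, clamp, and range remapping settings do
// to a Curve value before it drives the node. Illustrative, not the engine's code.
float ProcessCurveAlpha(float CurveValue,
                        float Bias, float Scale,
                        float InMin, float InMax,
                        float OutMin, float OutMax)
{
	float Value = (CurveValue + Bias) * Scale;
	// Range remapping: map [InMin, InMax] onto [OutMin, OutMax], clamped.
	const float Range = InMax - InMin;
	const float Pct = (Range > SMALL_NUMBER) ? FMath::Clamp((Value - InMin) / Range, 0.0f, 1.0f) : 0.0f;
	return FMath::Lerp(OutMin, OutMax, Pct);
}
```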
For bool value specifically, you are also able to define
the blend in and out times, the blend function
you want to use, or even define a complete custom
blend curve to do this blending
in and out of a node. That is some stuff we added
to the Animation Blueprints to make them faster in runtime. That brings me to Animation
Compression, how we use it on Fortnite
and the improvements we made. First of all, memory matters. I think we all know whenever
you are creating a game, you always need to keep
in the back of your mind how much memory it is using. Because Fortnite is a project
that is ever-growing, we are always adding content,
whether it is skins, animations,
emotes, whatever, it will add
runtime animation data. It also means that we always need to keep in the back of our minds that we are shipping to these low-end devices, for example mobile or Switch, where the memory limitations are much stricter and you cannot actually go over them, because you will just crash your device. On Fortnite, we use our
automatic animation compression. What this does is it basically
tries out all the existing codecs
with different settings and then tries to find
the most optimal way to compress your animation clip. It does this using a strict error measurement. That basically means that the compression will not impact the overall look of your animation, so you can trust it to look the way you intend it to.
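The selection itself is conceptually simple; a hypothetical sketch of the idea, with illustrative types rather than real engine API:

```cpp
// Hypothetical sketch: try every whitelisted codec, reject results whose worst-case
// error exceeds the threshold, and keep the smallest of the rest. Illustrative types.
struct FCodecTrialResult
{
	int32 CompressedBytes = 0;
	float MaxBoneError = 0.0f;
};

int32 PickBestCodec(const TArray<FCodecTrialResult>& Results, float MaxAllowedError)
{
	int32 BestIndex = INDEX_NONE;
	for (int32 Index = 0; Index < Results.Num(); ++Index)
	{
		const FCodecTrialResult& Result = Results[Index];
		if (Result.MaxBoneError > MaxAllowedError)
		{
			continue; // the strict error measurement rejects codecs that would change the look
		}
		if (BestIndex == INDEX_NONE || Result.CompressedBytes < Results[BestIndex].CompressedBytes)
		{
			BestIndex = Index; // keep the smallest acceptable result
		}
	}
	return BestIndex;
}
```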
A little bit of insight on Fortnite: we are actually shipping with 4500
individual clips of animation. That actually adds up to
a whopping 760 megs of RAW data. Using this system, that actually
allows us to reduce this by 86 percent. But the big downside
is it takes six and a half hours if we would want to compress
this entire set of content. That is just too slow
to enable it by default. We needed to make
some improvements there. In 4.21, we started with whitelisting of codecs. This basically meant
we looked at the codecs, looked at how effective
they were. If they were not effective,
we just threw them away. That allowed us to reduce
the overall number of codecs from 35 down to 11, which also makes it
so that doing the compression, because we now need
to check less codecs to find the most optimal
compression method, that takes down the time
to a little less than 2 hours. On top of that, we added the ability to evaluate in parallel. This means that the compression
will scale according to how many cores
you have available. In our case,
this was able to bring it down from 2 hours to 40 minutes. That is just a little
about the compression time. Then we also added a couple
features to even further reduce the runtime memory usage. The first one is per platform
Animation Downsampling. This is like last resort
kind of feature, because what it does,
it takes your animation data and it will cut
the sampling rate in half for any sequences that are
longer than a predefined length, so in our case,
longer than 10 frames. This means it is destructive, because we will just take your keys, throw away half of them, and rely on interpolation between the remaining keys.
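As a tiny illustration of that destructive step, assuming a stand-in raw track type rather than the engine's own:

```cpp
// Sketch only: keep every other key and rely on runtime interpolation between the
// survivors. FDownsampleTrack is a stand-in, not the engine's raw track type.
struct FDownsampleTrack
{
	TArray<FVector> PosKeys;
};

void HalveKeyRate(FDownsampleTrack& Track)
{
	TArray<FVector> Kept;
	for (int32 KeyIndex = 0; KeyIndex < Track.PosKeys.Num(); KeyIndex += 2)
	{
		Kept.Add(Track.PosKeys[KeyIndex]); // every second key is simply thrown away
	}
	Track.PosKeys = MoveTemp(Kept);
}
```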
If you have any clips that are actually important to you, which contain a special animation for a cut scene or whatever, you can actually opt out as well. In our case, this actually
allowed us to reduce the overall runtime memory usage
by another 30 percent. Lastly, we have
Float Curve Compression. We actually added compression
to our Rich Curve, which makes sure that it is packed more tightly. It already gives us
a little bit of a speed up in terms of decompression time, but also a little bit of gain in
compression of the overall data. Then we have a second option which is called
Uniform Curve Compression. What this does is take your Curve and sample it at fixed intervals. The sampling rate is actually
tied to the sampling rate inside of your animation itself, which means that there is always
a Curve key for each animation key
it contains. Then we will compress that data using the same approach
as the Rich Curve. Actually, that gives us a 1
to 2.5 compression ratio and a 1 to 8 decompression
speedup, so that is great. It is smaller and it is faster.
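A rough sketch of the uniform sampling step described above, using the real FRichCurve::Eval but leaving the actual packing out:

```cpp
// Sketch only: sample the source curve at the animation's own key rate so there is
// one curve key per animation key. FRichCurve::Eval is real API; packing is omitted.
TArray<float> SampleCurveUniformly(const FRichCurve& Curve, int32 NumAnimKeys, float SequenceLength)
{
	TArray<float> Samples;
	Samples.Reserve(NumAnimKeys);
	for (int32 KeyIndex = 0; KeyIndex < NumAnimKeys; ++KeyIndex)
	{
		const float Time = (NumAnimKeys > 1) ? SequenceLength * KeyIndex / (NumAnimKeys - 1) : 0.0f;
		Samples.Add(Curve.Eval(Time));
	}
	return Samples; // these fixed-interval samples then get compressed like the Rich Curve data
}
```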
But we are not done yet. There is still a future
for Animation Compression. First of all, we want to make
sure the operation is actually Async, which means
whenever a content creator is importing animations, he can import them
and he can just continue working and the compression will run
in the background. On top of that, we are also
looking to allow plugins that add specific
compression schemes to further extend the Engine,
so to say. That is compression.
Next up is sharing. We actually added this feature
for a specific game mode that we are
thinking about on Fortnite. We wanted to add large crowds
of animated AIs to the game on top of the already
hundreds of Characters we already had in there, which means that we needed
a solution that was going to have
a small impact on our CPU usage to make sure that we can animate
all of these enemies. Because of course,
animation is heavy. We also needed a system that was
easy to tweak, easy to setup, and allowed us to start
using it quickly. The basic approach we took is that we share animated poses
between instances. This means that we only need
to animate a certain number of poses.
Those poses are then distributed over the individual members
of the crowds, which reduces the animation
time by the ratio between how many animated poses
you need for a certain number
of people in the crowds. We already had this cool feature
called Master Component. This allows you to copy
a generated pose from one Skeletal Mesh Component
to another, which means that you can
set up Child Components for a specific Master Component. Whenever we need to evaluate
the animation, we only evaluate it
for the Master Component, and then the data is copied over
to the instances, which is basically
what we are looking for. The first thing we had to do
is we had to define States because we need to know
what State a pawn is in and which animation
ties back to that State. In our case, we had idle,
running, and of course, dancing, because Fortnite. Now we have a State, but now
we need to be able to determine which State
a specific pawn is in. For that, we introduced
something called the AnimationStateProcessor. This is very similar
to an Animation Blueprint, because as an input,
it takes a pawn, and then it looks at data
on the pawn. Then according to that,
it will output an enum value that determines
which State you are in. In Fortnite, we actually used
replicated properties that were replicated
from the server to the client. Then we would read
those in this class, and then according to that,
it would output a State. That is how we did it
in Fortnite. You can actually implement this processor in Blueprints when you are prototyping and want to set up the behavior. It is also possible to do it natively in C++ for speed. Of course, when you are shipping, I recommend doing it in C++.
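As an illustration of what such a processor boils down to, here is a hypothetical sketch; it is not the plugin's exact API, and the velocity check just stands in for the replicated properties we read on Fortnite:

```cpp
// Hypothetical sketch: look at data on the pawn and return an enum State.
// On Fortnite this read replicated properties; velocity is just a stand-in here.
enum class ECrowdAnimState : uint8
{
	Idle,
	Running,
	Dancing
};

ECrowdAnimState DetermineCrowdState(const APawn* Pawn)
{
	const float Speed = Pawn ? Pawn->GetVelocity().Size() : 0.0f;
	return (Speed > 10.0f) ? ECrowdAnimState::Running : ECrowdAnimState::Idle;
}
```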
Now combining these two, we now have the ability to animate a crowd. As you might notice, all of them
are using the same pose. We needed the ability
to add some variation. You can actually add variations
in two ways. The first one is you can set up a different animation
asset per State, which means you could have
different idle animations or running animations. Or you can have the ability
to set the number of permutations
per animation asset, which means we offset
the start time of the animation into the clip. If you would have
two permutations, the first clip would start
at 0 percent, the second one at 50 percent,
and then that makes sure that we do not actually
have these matching poses. This is what it looks like when we add
a little bit of variation. It becomes much harder
to find the identical poses. Next up is on demand States, because what happens when we
need to play a timed animation? For example,
when we hit a Character or we kill a Character, it needs to play an animation
at that specific time. We cannot just jump into
a looping death animation, because it might
look very weird. On demand is basically whenever
a Character enters this State, we kick off an instance
that plays this animation. Whenever it is finished, it either returns
to its previous State or it moves to a new State.
The user can actually define how many of these
concurrent instances you want to have available.
Because you still want to limit the number of overall
animated poses. Whenever we run
out of instances, we actually jump to the instance
which started last. For example, if four frames ago
an AI started dying, we just reuse that instance rather than
spinning up a new one. Combining that all,
we now have looping State, we now have on demand State,
so we can do attacks and hits. But as you can see, any transitioning between the
two States looks rather gnarly. That is because we are
just popping between them. There is no blending. For blending,
what we actually do, we take the poses
from the two States that the instance
is moving between, and then we use
an Animation Blueprint to blend those two poses, and then we distribute
those over the instances. Which is really cool,
because whenever two or three or an X number of AIs
are actually moving between the same States, you can actually share
this blend as well. Just the same as with the on-demand States, you have the ability to limit
the number of blends happening concurrently,
again using it for scalability. This is what it looks like
with blending. We now have the looping State,
the attack, and now we blend between them. You can see it is
a little bit more smooth than just rather popping. The last thing we added
was additive animation. For example, whenever a
Character is running towards you and you hit it, you kind of want
it not to stop running, but you also want it to give
some feedback to the player that it is actually hit. For additive animations,
we take the State that a Character
is currently in, and then we apply
the additive animation on top so we actually get
a response as well. This entire system
is completely data-driven, so we have a single repository
defining the States, the variations,
the blend in and out, the blend in times
for the specific States whenever we are blending, and also we have some per
platform scalability settings which I will get back to
in a minute. This is actually
what it looks like, what the debug view looks like. The top row shows
the blend instances, so you see them kicking whenever a Character
is moving between instances. The bottom row is actually
the individual States that are running
to drive these poses. I hope the video works.
Yes, there it is. You can see the Characters
running around, and then you can see
the different States kicking in, and whenever they are moving
between States, you can see the blend. When you start hitting them,
you can also see that the death animations
are starting to be used as well. This is rather fun.
That is that. As I mentioned, because we are shipping to
all these different platforms, we need to be able to scale
our features for them. For this specific feature, you are able to set
per platform limits. For example,
the number of concurrent blends, as I mentioned, but also which animation
variations or permutations you want to use
on a specific platform. You also have the ability
to completely disable blending. This is actually a screenshot
from our mobile release where we disabled blending
to get a little bit more CPU perf back. On top of that, we also use
the significance manager to drive whether or not
we actually need to blend. This is just because if you have
an AI that is very far away, you will not actually notice
that pop whenever you are moving
between States. We are not completely done yet. We are still thinking
about stuff we can add to improve this. One of the first things
is actually sharing the render data for the pose. Because as I mentioned,
the MasterPose Component just copies over
that entire batch of pose data. For this specific use case, we do not actually need
to copy it around. We can just make sure it points
to the correct location and avoid
any extra memory usage. Another thing is instance
rendering for Skeletal Meshes. We actually did not opt to do that for this specific Game Mode, because we were actually limited on CPU rather than GPU. But if you are going to push a much, much larger crowd, you might end up being GPU-limited, and that is something
where instance Skeletal Mesh rendering would come in. You can even take that
a step further as well by using imposters, just like you would use
imposters for trees that are very far away,
so just some billboarding. Rather than actually needing
to render a Skeletal Mesh, we would just do an imposter. It is actually available
in 4.22, so you can go check it out.
It is a plugin. It is very customizable
for your project because most of it
is data driven. But we are always welcome
to hear your feedback and see how you are using it. As a little bit of detail
on how it performs, this is an actual crowd
of 200 AIs. It actually takes a little more
than 1 millisecond to do all of the animation, generating all of
the animation poses. You can actually add
another 200 AI and keep the same
number of poses, which means that you would not
actually increase your CPU usage to actually animate
this entire crowd. That is animation sharing. Next up, animation budgeting. Last GDC, we talked about how we use the significance manager to work out how significant all of the objects are to a Character, and then we bucket them according to how important they are. For animation,
we use these values to drive something called update rate
optimization, or URO for short. What this basically does,
it limits the amount of ticks. It changes the rate
at which an animation or a Skeletal Mesh is ticked. On the left, you can see it
is ticking every frame. The next image shows it ticking
every 4 frames, and the next one is even 10. Then another thing
you can do with it is actually disable
interpolation. That is because
whenever you do interpolation in between ticked frames, you are still going to pay
a GameThread cost for that. If you want to decrease
the usage even further, you can opt to do
no interpolation. But the thing we found was, because we are doing
these large events and we are also introducing
large Game Modes with lots of Characters
close to the player, this system started
to fall apart. It is because whenever
we are adding players, there will always be
a cost to them. In this specific graph, you can see the increase of CPU time for doing animation, and the different inflection points of the graph are where we enter a new quality bucket according to significance. As I mentioned,
whenever we add a Character, we are going to
introduce a cost. We cannot reduce the animation quality to zero, because that would mean a Character is not animating whatsoever, so we needed to have a different solution. The thing we came up with
is very similar to how we would do dynamic
resolution for rendering. We have a fixed budget, and according
to the fixed budget, we adjust the quality
and in this case the quantity of the animation
rather than the time. This allows us to optimize for
quality rather than performance. It also means that
on low-end platforms, where previously we used URO,
you might find that we still have
a little bit of budget left. We could say, well, we can
increase the animation quality on low-end devices
because we now actually know how long it takes
to do this animation. I can see we can up
the animation quality because we still have
a little bit of budget left. The first thing we needed
for this was to be able to estimate
how long it takes to animate a single
Skeletal Mesh in our system. To do this, we take the total animation time for a certain frame and divide it by the total number of Animated Meshes for that frame, and then we smooth that over a window of a couple of frames, which gives us a rolling average of how long it will take to animate a single Skeletal Mesh.
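That estimate can be sketched as a small rolling average, with illustrative names and an assumed window size:

```cpp
// Sketch only: average animation cost per mesh this frame, smoothed over a small
// window of frames to get a rolling average. The window size is an assumption.
class FAnimCostEstimator
{
public:
	void AddFrame(double TotalAnimTimeMs, int32 NumAnimatedMeshes)
	{
		if (NumAnimatedMeshes > 0)
		{
			Samples.Add(TotalAnimTimeMs / NumAnimatedMeshes);
			if (Samples.Num() > WindowSize)
			{
				Samples.RemoveAt(0); // keep only the most recent frames
			}
		}
	}

	double GetAverageCostPerMeshMs() const
	{
		double Sum = 0.0;
		for (double Sample : Samples)
		{
			Sum += Sample;
		}
		return (Samples.Num() > 0) ? Sum / Samples.Num() : 0.0;
	}

private:
	static const int32 WindowSize = 30; // assumed smoothing window, not an engine value
	TArray<double> Samples;
};
```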
Using this, as shown in the graph: on the horizontal axis, we have the CPU time it takes to do all of the animation. On the vertical axis, we have the rates at which
we are ticking Components. One means it is ticking
every frame, and each box is an individual
Component. This is the ideal case, because we have a certain
number of Components and it fits perfectly within our
budget, so nothing to do here. But what happens when we need
to add another Component? As I mentioned, we needed to
make sure it fits within budget. To do that,
we actually reduced the quality. Just like we did with URO,
we halve the tick rate, so that the two Components now alternate, one ticking each frame, which means that the cost for them is halved, which makes sure that
we actually fit within budget with the new number
of Components. This is actually more
of a real-life situation. The system also allows you to set fixed Components that should always tick; for example, in our case, that would be the Character itself or any squad mates that are nearby. Then for the rest of the Components, we sort them according to their significance value, and then we adjust their tick rates to make sure that we hit budget.
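A simplified sketch of that budgeting step, assuming a per-tick cost estimate and illustrative types (the real allocator has more dials):

```cpp
// Simplified sketch: always-ticking Components stay at full rate; the rest are sorted
// by significance and dropped to half rate, least significant first, until the
// estimated cost fits the budget. Illustrative types, not the 4.22 allocator.
struct FBudgetedComponent
{
	float Significance = 0.0f;
	int32 TickInterval = 1; // 1 = every frame, 2 = every other frame
	bool bAlwaysTick = false;
};

void FitToBudget(TArray<FBudgetedComponent>& Components, double CostPerTickMs, double BudgetMs)
{
	// Most significant Components first.
	Components.Sort([](const FBudgetedComponent& A, const FBudgetedComponent& B)
	{
		return A.Significance > B.Significance;
	});

	double EstimatedCostMs = 0.0;
	for (FBudgetedComponent& Component : Components)
	{
		Component.TickInterval = 1; // start everyone at full rate
		EstimatedCostMs += CostPerTickMs;
	}

	// Halve the tick rate of the least significant, non-fixed Components until we fit.
	for (int32 Index = Components.Num() - 1; Index >= 0 && EstimatedCostMs > BudgetMs; --Index)
	{
		FBudgetedComponent& Component = Components[Index];
		if (!Component.bAlwaysTick)
		{
			Component.TickInterval = 2;
			EstimatedCostMs -= CostPerTickMs * 0.5; // amortized cost is halved
		}
	}
}
```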
This is actually what it looks like in a graph. In this specific situation,
the player is playing a game. Then suddenly,
there is a large crowd or an increased number
of Characters on the screen. You can see the dark blue line. Suddenly, we are going
over budget without the system and we start to drop frames.
With this new system, you can see we might bump over
the budget for a little bit, but then we start adjusting
the animation quality to make sure we scale back
the time it needs to do the animation, to make sure we hit our budget
and maintain frame rate. That was the first step
to getting it up and running, and then we needed
to make some tweaks and changes because it is a new system. You are always
running into problems whenever you are doing
your development. The first thing
we needed to add, or the first thing we found was that you need
a minimal bar for quality. This is again, very similar to
dynamic resolution for rendering where you can have
a minimum screen percentage at which you want
to do your rendering before you start
impacting frame rate. In our case, the equivalent
is the tick rate. As you can see,
the full rate looks fine. That is what the animation
is intended to do. 25 percent means it ticks
every 4 frames, which is still reasonable. It looks different,
but it is still there. But whenever you start ticking
at a lower rate, for example, every tenth frame,
it starts to become a flip book. We wanted to make sure
that we always hit a certain lower bar of quality
that you can predefine. Another thing,
just like we did with URO, we added the ability
to not do any interpolation for a specific number
of Components, which means
that it reduces a cost for them a little bit more. We have more dials to tweak to
make sure we are within budget. We can even take
that a step further. That means we are using
the MasterPose only, which is just animation
on the Base Skeleton. We are not actually running
any of the individual Character parts. This means that we lose
the procedural animation and any animated leaf joints on those Character parts, which means that the decrease in quality of your animation is rather big, so we only use this mode
whenever we go over budget. Another thing we did
was previously we had an area around the player
for which we tick any Component that was off screen
but within that area. In theory, with a 100-player game, it could mean that you are looking at a wall, there are 50 people standing behind you, and we are ticking all of them. We needed to tick them to make sure we still hit the animation notifies, which drive footsteps, so that you do not suddenly get jumped when there is someone behind you. Using this system,
we now make sure that we only tick a fixed
number of off screen Components, because we also notice you do not need
footsteps for 50 people, because you will not be able
to distinguish them. As I mentioned, this MasterPose option has a large impact on the overall look and the quality of the animation. We ran into an issue: whenever we were close to being
a little bit over or in budget, you can see we are switching
between these States to make sure we dial down
the animation quality and that we are in budget, and then we see,
oh, we are in budget. We can increase the quality.
We had this flip-flopping effect. You can see we are constantly
changing between States, which looks rather weird,
especially up close. We throttled the rate at which
we switch between States to make sure we have a certain number of frames in between going from
one State to another. But when we did that, we found
that we would now be overshooting the budget
too much and for too long, and it took too long to actually
adapt back to the budget. So what we did was the rate
at which we reduced the quality is now scaled according to
the increased load of animation. That means that whenever we are
increasing the animation loads by a lot, we make sure we dial down
the quality much faster than whenever there is
just a small increase. This makes sure
that we are still within budget or in budget much faster
with an increased load. This is actually the system
in action. I will actually dare you to try and find out all the individual
States it is entering. But this actually shows
the quality of the system and how it allows us
to do these large Game Modes with lots of relevant
and close Characters without actually going over
our animation budget. Just a couple of details on how
it is actually implemented. First of all,
it was released in 4.22, so you can start using it
whenever you want. Of course, just as with all of the other features, we made sure that we can actually scale this system down according to our specific platforms, so you can define your budget on a per platform basis, and all of the other tweaks as well are configurable and driven by CVars. The actual system that manages
this is implemented in a very data-oriented way. Any data we need to determine which State or which bucket a specific instance is in is packed very tightly, so whenever we need to iterate over it for a specific frame, we get a very cache-friendly memory access pattern. Actually, to use this, we added
a new Skeletal Mesh Component that automatically registers
with the system and that allows you to opt into
the system rather than apply it to all of your Skeletal
Mesh Components in your game. Another cool thing: because we were constantly looking at profiles of our animation systems when we were working on this system, we found loads of micro-optimizations that we could make to even make the overall
upgrading to 4.22, you get free perf, so if you like free perf,
that is great. That is all for the budgeting. That brings me to the Skeletal
Mesh Simplifier. In 4.22, we now have the ability to do Skeletal Mesh reduction
as well. We already had Static Mesh
reduction and proxy LOD as well. But we now also
have the capability to reduce your Skeletal Mesh. It is actually six times faster
than our previous solution while still using an edge
collapse algorithm. On a per LOD basis, you can set a target
for the number of triangles or the number of vertices
you want to end up with. In addition, you can also reduce
the number of influences per vertex
according to the number of bones you want to be able
to influence them. Again, when you are working on
new features, you find problems. In our case, because the
Character parts are all separate and they will be reduced
in isolation, we started to have these cracks
where the meshes originally met. To address that, we have
the ability to lock free edges. That means that any free edge
in your mesh will not actually
be reduced whatsoever. This makes sure
that we preserve continuity between the original
Character parts and makes them look baby smooth.
But that does not mean that we actually do not hit
the targets anymore, because we will now reduce
the rest of geometry to make sure we still get
the numbers you are looking for. Another thing was that when you are trying to generate a very low LOD with a very, very low number of triangles, you might notice that you start to lose specific characteristics
of your Characters. In this case,
the eyes start to fall apart. To address that,
you can define important bones. That basically means that for any vertex that is weighted to this bone, you are able to bias the amount of reduction that is performed on it. This is what it looks like in wireframe mode, and in this case, we made sure
that we do not actually reduce any geometry that is skinned
by the facial bones, but just as with
the free edges feature, we will now make sure that we still hit the vertex
or triangle target, but just by reducing
the rest of the mesh rather than in this case,
the face. That is everything that shipped
in 4.22. You can all go
and check that out. I have got two more features to talk about that are expected in 4.23, no promises. First of all,
we did a pass on the UX and the overall look of our
animation and montage editors. If you are familiar
with the Montage Editor, this might look very familiar. We have our Montage sections.
You can preview them. You have your timings
for the individual sections, and then your notifies. This is what it looks like
in 4.22, and this is what it might
look like in 4.23. Everything is now combined
into one animation timeline. You can see the actual
animation timeline on the bottom has disappeared and is now on the top
of this combined Widget. This is very, very similar
to the way the Sequencer looks, and it also allows
for having this common UX for Widgets in our Editor. It still of course supports drag
and drop and it comes with a lot of features
that will be very familiar to you if you have used
Sequencer before. One thing you might
have noticed as well, previewing is now pulled out
into a separate tab. You can still preview
your individual sections or all sections, but you can also set up a chain
of specific Montage sections that you want to preview to see
the transitions between them. That is the montage editor.
Next up is the Animation Editor. Again, very familiar,
your notify track, your Curves, and again,
your animation timeline. Of course, it would not surprise
you that it looks very similar to how the Montage Editor
now looks. We have a unified timeline. The Curves actually
look different as well, and that is because you are
now able to edit the Curves in our Curve Editor. We reused the Curve Editor from Sequencer, which is also used by Niagara, so whenever we make
an improvement to any of these common features,
this common tool, all of the individual Editors that use them will benefit
from it as well. This is what it looks like when you are actually using
the animation timeline. You can now scroll. You can also have
a scroll wheel on the left. One thing to mention as well, you can set it up
to be displayed in number of frames
or in seconds and also show the percentage at that specific point
of your sequence length. You can also jump
to specific frames, so you can enter say frame 40
and it will jump to frame 40. Then just as with Sequencer, you can resize the zoom
of your timeline. You can scrub it, and you can
also define a specific range inside of your sequence
that you want to zoom into. That is something to look
forward to in 4.23. Then the last feature is
the skin weight profile system. We are actually
adding this system to be able to improve
scaled down animation. As I mentioned, we are shipping
to low-end devices, which means that we might not
be able to run Rigid Body nodes, cloth simulation, or any animation dynamics for these specific platforms. On the medium
and high-end platforms, we might disable these
features for specific LODs. Because we know if a Character
is at a certain LOD distance, we do not necessarily need
all this animation information. Then even on top of that, with introducing
the animation budgeting, it might be that we use this MasterPose Component
option up close, which means that
we are disabling this procedural animation, which makes a very large
visual impact. This example is an issue
on mobile where the bandana is not
actually being simulated, so it sticks straight out,
which looks rather weird, and it is something
we needed to address. This is where skin
weight profile comes in. Skin weight profile is basically
a set of custom skin weights, and you import it
on a Skeletal Mesh basis, so you import your exact
same Skeletal Mesh with a different set of weights. It will generate the data
for the skin weights for this specific profile, and all of this is done
in the Skeletal Mesh Editor. On top, you can see you can import it, or copy it from a mesh that is already
in your project. Then at the bottom,
you can see what it looks like if you have imported a profile. It has a name, whether or not
it is the default profile, which I will get back to
in a little bit, and also the source
files per LOD, which also means
that if you do not use the in-Engine reduction tools
to generate your LODs, you can import your specific
skin weight profile for the specific LOD that you
have imported before as well. If you have used
the reduction tool, we will make sure
that we generate these skin weight profiles
for the LODs that we are generating as well.
When you have imported it, you can actually preview
what it looks like in the Engine or in all of
the animation editors. This is possible. In this case, the finger
is reweighted to the hand, so you can see the influence
of the hand changes according to the profile. You are also able to do this
in the level editor as well. You can select your instance, set up the profile
if you want to preview it. It also works in PIE,
so if you want to preview what it looks like using
a specific Animation Blueprint, that is also possible. As I mentioned,
you have this ability to define a default profile. The default profile
comes into a runtime system called Default Overriding.
What this allows you to do is that during serialization
of the Skeletal Mesh, you are able to override the
original set of skin weights. You can do it in two ways.
You can do it statically, which means
that we will serialize the full skin weight buffer,
and then we will override it with the data from
the skin weight profile, which means that it
is a destructive operation because the original set
of weights are actually lost because you are
overriding the buffer. But this also means
that you are not introducing any memory overheads, so you can use these
on platforms on which you know, oh, I need this profile to
always be used on this platform, so I do not really care about
the original weights anymore. Then the second way
to do it is dynamically. In this case, we allocate
an extra buffer on top of the default
skin weight buffer so it introduces
a little bit of memory usage. But it allows you to dynamically
turn it on and off as well because we still have
the original buffer. Whenever a system asks for the skin weight buffer, we either point it to the alternate skin weights or to the default skin weights, according to how you set it up.
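Conceptually, the dynamic override is just a pointer redirection; a sketch with illustrative types, not the engine's skin weight buffer classes:

```cpp
// Sketch only: keep the original weights, optionally allocate an alternate buffer,
// and hand out whichever one is active. Illustrative types, not the engine's
// skin weight buffer classes.
struct FSkinWeightsView
{
	const uint8* Data = nullptr;
	int32 NumBytes = 0;
};

struct FSkinWeightSelector
{
	FSkinWeightsView DefaultWeights;
	FSkinWeightsView ProfileWeights;
	bool bUseProfile = false;

	FSkinWeightsView GetActiveWeights() const
	{
		// Systems that skin the mesh just get redirected to the active buffer.
		return (bUseProfile && ProfileWeights.Data != nullptr) ? ProfileWeights : DefaultWeights;
	}
};
```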
Again, this is all driven by CVar, so you can set this up
for your specific platforms and your specific
scalability settings. On Fortnite, there is still
an API available to set up profiles for a specific
Component during runtime, so this means that we will load
the profile buffer similarly to how we do
the dynamic overriding. We allocate the buffer
on demand, and then we load
the skin weights into it, and then again, whenever a system needs to use
the skin weight buffer, we return the dynamic one
rather than the original one. Whenever the animation
budget allocator is switched in the MasterPose, we actually set up
a specific skin weight profile to be used rather
than default skins. This is cool and all, but what
does it actually look like? On the left, you can see
what it looks like whenever we are
running the dynamics, so the Animation Blueprint for
the specific Character parts, in this case, the skirt,
is actually running. You can see it is
a rather extreme pose, but this is
the intended behavior. In the middle is
what it would look like whenever we switched
the MasterPose, which means that because
there is actually no animation data available to drive
these dynamic skirt bones, it will just stick out. That is the main goal
for the system. Whenever you use
a skin weight profile, you are able
to reskin your geometry. In this case, we reskinned
the skirt to the legs, which means that even though
we are using the base pose from the skeleton,
we can still use that pose to actually skin the geometry
to have a similar look to when you are using the procedural animation. That is actually the last
feature I want to talk about. I want to finish off
with a big shout out to all of the people who actually contributed
and worked on these features and give them a loud applause.
Thank you very much. [Applause] ♫ Unreal logo music ♫