>>Amanda: Hey, everyone! Just over one year ago,
we launched Epic MegaGrants,
a $100 million program to accelerate the work of
talented teams and individuals working with Unreal Engine,
3D graphics tools, and open source software. We are happy to share that to date,
more than 600 remarkable recipients have
been issued MegaGrants, totaling over $42 million
in financial support! As part of the program,
we've also earmarked 129 Magic Leap 1 devices for recipients. And now,
we are thrilled to welcome AMD,
who has introduced their support with the generous
contribution of AMD Ryzen 7 3800X desktop processors
eligible for giveaway to Epic MegaGrants recipients. Head over to the Unreal
Engine feed to see a selection of projects included in the
latest round of MegaGrants and visit
unrealengine.com/megagrants to apply! We're excited to announce
the launch of The Pulse--a new series of interviews and live
Q&As that will uncover emerging trends in real-time 3D. In the first episode David Basulto,
Founder and Director of ArchDaily.com,
will lead a panel to discuss how real-time tools
are transforming collaboration in architecture, engineering,
and construction. Learn more and register
for our upcoming episodes at unrealengine.com/thepulse. Unreal Engine 4.25.3 is now
available on the Epic Games launcher and via GitHub. See the full list of
fixes on our forums, then update to take
advantage of these changes. Hit Apple Arcade title
Spyder is the latest entry to emerge from Sumo
Digital's internal game jams. In preparation for
an official launch, it was time to rebuild the
existing Blueprint-based prototype in C , with the intent
of making it more characterful, maintainable, and performant. Get a walkthrough of the
animation and movement systems for Agent 8,
the game's many-legged protagonist from Sumo's own programmers! After an impressive showing
for Unreal Engine developers at May's Inside Xbox event,
an array of Unreal Engine-developed
games for Xbox Series X were shown during Microsoft's
highly-anticipated Xbox Games Showcase. From world-bending
thrillers to the return of an award-winning franchise,
check out the Unreal Engine-powered
games that took center stage during the Xbox Games Showcase. To prepare students
for successful careers in visual effects and
game development, the DAVE school encourages
collaboration, iteration, and problem-solving
under the same demands of real studio environments. Offering two tracks
in VFX and gamedev, they recently collaborated
to produce Deception, a short film developed
by the school's students and faculty using
virtual production techniques and Unreal Engine. View more of the
cross-discipline project and see why they believe that
"real-time is a game-changer." Speaking of student work,
there's only one week left
to submit for a chance to be featured in our Fall
2020 Student Showcase. All submissions
are due by August 5. Head over to the
feed for full details. And over to our top
weekly karma earners. Many, many thanks to: ClockworkOcean, Everynone,
FadedXNova, EvilCleric, knapeczadam, Jasdela,
Shadowriver, Tuerer, and Prince Latte This unique
spotlight is Rachael, the world's first fully automated
autonomous 3D design influencer on Instagram. Designed to create
"likeable" images, Rachael's been posting one
new image per day since May and will continue to do so
without any human intervention, as long as the computer
she is running on is online. Find out more about Rachael
and creator Matthias Winckelmann on their website. Inspired by the
fantastical worlds of Studio Ghibli,
real-world Havana, Cuba and the bustling
markets of Thailand, the Scavenger's Colony is an
exciting miniature environment created by Arnaud Claudet,
a 3D environment artist studying at Rubika Supinfogame, France. Watch Arnaud's full
showcase and see more of the incredible
figures used in the piece! And lastly, play as Cassie,
an intrepid scavenger seeking salvage in abandoned mining
facilities in the narratively driven puzzle
platformer Sky Beneath . Embark on an adventure
with your inventor friend Annie, take control of gravity,
and solve challenging puzzles in a mysterious sci-fi setting. Find out more on Steam! Thanks for watching this week's
News and Community Spotlight! >>Victor: Hi everyone, and welcome
to Inside Unreal, a weekly show where we learn, explore,
and celebrate everything Unreal. I'm your host, Victor Brodin,
and my guests today are Jeremiah Grant,
Product Designer-- welcome to the stream-- as well as Helge Mathee,
Senior Engine Programmer. Welcome you both. Today, we are going to
cover a little bit about how to animate with Control Rig. Would one of you like to give a
brief breakdown of what Control Rig is before we get started? >>Jeremiah: Yeah, so Control
Rig is a node-based rigging system. It allows you to animate in Unreal,
create procedural rigs, and drive them in various ways. So real quick,
what we're going to go over today-- we're going to
go over what Control Rig is in a bit more detail,
show you some little micro lessons of little features you can
do in Control Rig, and some of the new features
that you'll find in 4.25. And then, I'm going to go
over the Control Rig mannequin sample. And,
we put this on the Marketplace just a few weeks ago, I think. And, I want to dive
into it a little bit more. Talk about why we did
things the way we did and how you can
expand it yourself. Then, we'll talk about
animating with Control Rig. This is using Sequencer,
being able to create content. We'll start with this Control
Rig mannequin example, and how to use that animation
in your own gameplay. And then, runtime rigging. We haven't talked about
this much in a while, but we're going to
explore this a little bit, creating procedural
motion and using that in your Anim Blueprints. And then,
we're going to give a sneak peak on a couple of things that
you can expect from 4.26. >>Victor: Sounds exciting. Let's get started. >>Jeremiah: So, Control Rig. As I mentioned before,
it's a node-based rigging system. You can build animation
rigs with familiar functionality that an animator may be
used to from a DCC like Maya or Blender. We can see the example
of Control Rig here. In fact,
this is the same mannequin example that's available in
the Marketplace now. You can expose those
controls to animators, or you can expose
them to gameplay. You can create really
cool procedural animation, and we're going to show a
few examples of that pretty soon. And, one of my favorite
things is Control Rig is accessible through Python. You can create scripts to
automate your workflows, or augment your pipeline. So I'm going to jump into a
couple of micro-lessons here. I'm going to show a
couple of things in 4.25 that I'm pretty excited about. And, yeah, let's jump right in. So first off,
let's look at this mannequin here. I have a quick setup,
and I'm going to use an Aim Node. First off,
I should probably talk about this editor in general. This is the Control Rig Editor. We have a viewport over here. We have a rig graph
or a certain node graph. And rig hierarchy
and execution stack. So you can see,
in our rig hierarchy, we've already
imported our skeleton. We'll talk a little
bit more about how to create a Control Rig later. And, all I'm going to be doing
is maybe some transforms. So I have to Begin Execute,
which is where my graph begins. And,
it's flowing right into my Aim Node. This is very similar
to any other Blueprints that you'll see
throughout the editor. So, in this case,
I have an Aim Node and the very first
thing I want to show you is a new feature in 4.25,
which is Control Pin Value. So, normally you would
have to type in a value and move my head around. And,
it's kind of slow and cumbersome. But now,
I can right click on this Pin and say Control Pin Value. And,
you'll quickly see it created a manipulator in the viewport. I can drag that around,
and you can see my head responding. And, you see the value
updating in my target and my aim. So next,
let's take that to the next level. I'm going to add a
couple vectors here. Let me just connect this up,
and you'll see, now, I have a few things
showing up already. So let's talk about those. First, you might notice,
there's some weird pins on this Add Node. And, what they're showing,
is that there's some debug information
or some embedded nodes inside these pins. So we can see the screen if
I right click on this result pin, but see there's visual debugs. Let me just go ahead
and remove that. In fact,
I've got a couple of them in here. So let me right click on
here and add visual debug. We immediately
see this line turn red, which is saying that there's a
node embedded between these two pins along this path. And, our details pane shows the
details for that embedded node. It's got a couple
of parameters here. I'm just going to change
the color of it to green. And,
since I'm adding values here, I'm able to make
adjustments in multiple ways. You also notice when
I connected those pins, they updated immediately. The results in the
viewport appeared, and that's because we now
have auto-compile in 4.25. So you don't have to keep
going back and manually compiling every time you
make the change to your graph. So let's go ahead and take
this value from my pin A. And, again, I'm going to
go into the control pin value, and see how I can
move things around. So let's look over here. And then,
maybe I want manipulate this. Again, I can adjust-- the switch is down. Origin and just adding
those values together. Now, if I take this,
again, to the next level, I'm going to add in a sin,
so I can look back and forth, still adjust my head movements. This will be really helpful,
let's say, if I just kind of a scanning thing,
and I want to just adjust where that camera is scanning,
for example. So I'm just going to hook
this up to my second Add Pin. You can see it's
barely moving there. That's because my
Sin is between -1 and 1. So I want to highlight
another feature that we have. I'm going to right
click on this Sin Pin, and I have add interpolates. So this is very similar to
some of the alpha settings you may see in Anim Blueprints. I can do mapping, map ranges. I can clamp my results. I can interpolate-- so I
can speed up or slow down, depending on which
direction the value is going. So, in this case,
I'm just going to map the range from -1 to 1,
which is my incoming from my sin. And,
we'll do something reasonable, like -30 to 30. So, now, you see my head
scanning back and forth. It might be helpful,
though, to see this value. So I'm going to go and
add another visual debug. And, I can see where my
head is aiming, and then added on this procedural movement. So now,
if I right-click on this pin, I go to my control pin value, I can look around and
quickly see my character with this procedural motion
and my art directed motion. >>Helge: So I want to point
out one thing, Jeremiah, actually. Can you zoom in larger than
what we can see on the screen right now? I think with holding control or
something, you can zoom larger? >>Jeremiah: Like that? >>Helge: Yeah. So, first off,
I'd like to highlight that they're not weird pins. They're bugs. Look at them. They're like tiny,
mini bug icons. That's what it was
supposed to look like, anyway. So, either hollow, soulless,
or with a soul, anyway. So one of the things
that we added here, is not necessarily just
bug and interpolate, but the notion of what
we call an injected node. So really,
what's going on here, is that, where you see
these bug icons on the pin, there's a node hidden
that you can edit. If you go into edit interpolate, you can get to the
details panel of that node. What you can also do, though,
is you can actually bring out the node to the graph. So, if you right-click
and click on eject, it takes that node out
of the pin-- so to say, the hidden node-- and now, you can see the
settings you have on that node are the same settings that
you saw earlier in the details panel when you
chose edit interpolate, or whatever it was. The reason I want
to highlight this, is that you can learn
about these notes that way, and you can use them separately
as well for other purposes. And the visual
debug is the same. The visual debug
is actually a node, that,
as the value is going through it, draws it on the screen. And,
because you do this so many times, we decided to have a workflow
where it's hidden in the pin, but you can always bring
it out to look at that node, and you can use it
for other purposes as well if you want to. >>Jeremiah: Yeah, exactly. It's been really
helpful for us when we're creating complex rigs to
see a point space, maybe a pull vector, or interpolation. I really like having
consolidated graphs. So I like having these
nodes injected in the pins. Right now,
we don't have the ability to re-inject that pin
so once you eject it, that's kind of
where you're stuck. But, it's a quick way
to see what's going on, explore what we're doing
under the hood on each of these, and just to consolidate
your graphs. So I'm going to go over-- >>Helge: I didn't want
to say anything, sorry. >>Jeremiah: I'm going to go
over to my next little example here, and talk about a
couple more things that we have in 4.25. Here, I'm going to be
using a distribute rotation, and a vector 2D control. So,
I've already created this control. I have this available here. And, one thing you may notice,
if you've used this before, we now have limits available. They're both visible and
editable in your details. Now,
I've set up this graph so when I move this control around,
it's limited and rotates my neck and my head bones. And, the way I'm doing that,
is I'm just getting my vector 2D, and then multiplying that value. So right now, it's set-- it's limited to -1 to 1,
and then multiplying it by 30. But, I could just as easily use
a remap, or an evaluate curve to adjust the way that
this behavior occurs. In this case, this is just
the easiest way to do it. And then,
I'm converting that to a rotate from my distribute rotation. One really cool thing
about using these nodes like distribute rotations,
instead of just setting transforms directly,
is you can do things across a chain. And so, in this case,
I could have it just on the head, or I could have it on the spine. So let's go ahead
and take a look at what this looks like
if I do the entire spine. So now, really quickly,
I'm able to get some quick motion across my entire character
by using a single 2D vector. This could be additive, as well. So if I wanted to have full
FK controls between this and then do an additive
distribute rotation, I could do that. You have all the control
you want by using math nodes and processing in a stack. In fact,
let me just point out here, that we have this
evaluation stack. You can see directly,
the order these executions are happening in. So if you wanted to create more
complex, art-directed behavior, and then layer on
procedural motion, you can define the
order that that happens. And,
one last thing I wanted to point out. On controls, we now have
this animatable [AUDIO OUT]. And,
that allows you to create controls just as a rigger to determine,
let's say, debug scenarios,
or complex procedural motion that's driving controls and
then driving bones. But, you don't necessarily want
all those controls showing up in sequencer in your rig. You can uncheck that. And, only the controls
if you want to expose are exposed in the Sequencer. With that,
I'm going to hop over to Helge. He's got a few
things to show as well. >>Helge: Let's see
if I can figure this out. So does that work for you guys? >>Victor: Yep. >>Helge: Great. So I'm actually going
to go ahead and create a Control Rig from scratch,
and then we'll jump to prepared assets. Depending if I get stuck,
I might jump to the prepared
assets anyway, but I want to show
you how you get started. There's a bunch of different
ways you can do this, and I'm going to go
with the simplest way, I'd say, in terms of UX. And that is, creating a Control
Rig off of a skeletal mesh. So what I'll do is I'll enable
the skeletal mesh filter, so you can see the skeletal
meshes in this project. And, I'm going to right
click on the mannequin, and I'm going to say
create a Control Rig. That will set up
everything for you so you don't have to
do the steps manually. So if you do that,
and you look for the Control Rig that was just created,
you can see what this has done-- it actually
has done a bunch of things. You can do all of this
manually if you want to. It's imported the hierarchy. So it's imported all the
bones of that skeleton into the rig hierarchy view. Also, we have another view,
which is called Curve Container which contains all the curves. So in this case, it's imported
those two curves, as well. And, it's done another thing. It's actually set up the
preview scene for the editor. And it will use this
skeletal mesh as well. So it seems obvious,
but now what you have, is you have the hierarchy,
and you can see the particular
skeletal mesh over here. The reason that that's sort of
a setup phase or a setup step, is because Control
Rig is actually independent from
these assets completely. It has its own
hierarchical representation, and you can change the
preview mesh at any time. Currently, in this version,
the topologies of those skeletons have to match to make it work. But, what's nice is, if you have
slightly different characters, or you wanted to try it
on different meshes that share a skeleton,
you can switch those. So if you hit compile-- I'm actually going to choose
one option here, which is draw all the hierarchies. You can see all the
bones in the editor. And then, I'm going to
start building up my demo. So the first one will be-- in preparation to this,
we've got a lot of questions around, are there IK solvers? What kind of IK
solvers are there? So I'll try to cover
three of them in this particular example. And bear with me. It's a very simple example, but I want to take my time to
sort of show you all the steps. So, first off,
I'm actually going to create a control
for my IK solver. So I'm going to
right-click in here and say create a new control. You can also do
that using control-N. And, you can see it's born
this control down there, outside of the hierarchy. And I'll call this
hand left control. Next, I'm going to move
that control to where my hand is, so somewhere here. And then, we're going to use
the contextual menu, and say, set its initial transform to
the closest bone that you find. So it's going to snap it onto
where the hand is, and it's going to change its initial. So let's look at the details
so you can see what I mean. So the initial of
this control now is set to where the hand is, which means if I move it
anywhere and I hit Control-G, or I right-click and
say Reset Transform, it's going to snap it
back to where it was. So this is the
zeroed-out transform for animation purposes, right? This is where you would
start with the animation. So in this case,
for the hand, that's correct. Obviously, for a large,
elaborate rig, you wouldn't just have
this in global space. You'd probably have the
controls parented to each other. You'd have a main control
a pelvis control-- whatever you need. But for this example,
we'll just have one. Next, I can bring in
my control to this graph and read its value. I'm going to say,
get the control in. This is going to set
up a node for me-- setting the right
control to read, and then you have access
to different things here. So we're interested
in a transform. But in case this had limits,
you can also get access to a minimum and maximum limit. And then I'm going
to set up an IK solver. So I'll just pull
out the execute-- you could also right-click-- and type in IK. And you can see we have
a bunch of different ones. I'm just going to
focus on the first three, and actually is going to
start with the most simple one, the Two Bone IK, or basic IK. Next, we have to pick,
what bones do we want to use for this? So I'm going to choose upper
arm on the left side and lower arm on the left side. And for the effector,
I'm going to use the hand on the left side. And you can see that it now
does something-- obviously not the right thing,
because this looks pretty screwed up. But let's look at
what is happening. Basically what's happening
is that our effector, which is this port,
is at the center of the world. So it's pulling the arm down. Also, because I don't have this
flag enabled called Propagate to Children,
it's just moved these three bones. But all the nested bones,
like the fingers, are still in place. So I'll just click this. So you see the whole arm
is now being pulled down. So that's better. Next up,
you could use the feature that Jeremiah was presenting,
which is Control Pin Values. You could do that-- and then move it up. But instead, I'm actually
going to plug in my control. So now you can see
it's solving the arm. But there's still a
couple problems, and I'm going to
fix them as I go. First of all,
there's what's called a pole vector in this thing. And that's used to define
the three points of the triangle. So we started with the upper
arm-- the root, the effector-- and then the pole vector,
which is currently, probably, at 0, or 001, or something like that. So I'll change it. I'll change it to be a location. And I'll put it somewhere else. So if I go ahead-- and
what I'm actually going to do is I'm going to use point,
which is nested to this control. So I'll make an
absolute out of this, and I'll use this
control as a parent. And then I'll move it. And the control's axes,
like Z a bit backwards and at negative x-- so I'll do, like, 30, -10. And then I wire this into
the translation of that result. I'll wire into the pole vector. So now if we rotate the hand,
for example, we can see how the
pole vector is also turning. Again, this might not be
necessarily the right solution. But for demonstration purposes,
it's what I'm going to use. And you can
visualize this as well, but adding a
visual debug to this so you can now see
where that point is. So you can see it, this is what I'm
currently using-- not ideal, but we'll leave it for now. You can also see there's
some other problems here. So you can see how these
bones aren't aligned correctly. And you have
some options for this. So the secondary axis
is what the roll is used for. So the primary axis
is where the bone is pointing along the chain. And the secondary axis is,
what's the axis that needs to point
towards the pole vector? So this is incorrect,
so I'll change this to negative Y. And there we are. So now we have
our basic IK set up. And now we can
go ahead and start looking at some of the
other IK solvers that we have. So for that, I'm going to go
over to the prepared asset that I have set up already that
looks very similar to what we just built.
So you can see, this is what I was
showing you before-- basic IK. Make absolute and
whatnot-- a hand control. Down here,
we have some other IK solvers. So we have the FABRIK, which is
Forward And Backwards Reaching Inverse Kinematics. And that's a solver that can
use more than two bones. It can work along
a longer chain. So what I've done is I've set
it up to solve from the spine-- the first spine bone
all the way to the hand. And then down there,
we have yet another multi-bone solver called CCDIK. It's going from spine
one to hand as well. This has some more
settings around rotation limits and other things. But we're not going to go
into too many details of that at all. I just want to show you that we
can switch between those here. So what I'm going to do
is I'm going to pull the hand to a location-- let's bring the character
into an interesting pose, maybe pull it down a little bit. And you can see how the
FABRIK solver is actually giving us this solve along the spine. You can see I can pull
him a bit more like that. Whereas if we use-- if
I pull this back up here, obviously,
the basic IK will just go like that, and that's it. It can't do more, because it
only solves these two bones. So we have FABRIK. And then having a setup like this,
we can leave the control, rely on autocompile to
change the rig as we go, but keeping the
control where it is. It's actually interesting to
just switch between and see the different solves. So CCDIK has a slightly
different passive reaction. As you can see,
he's already leaning to the side, even though I'm
pushing just a little bit. If I go out here, you can see
FABRIK solves very differently. So this gives you an opportunity
to test for different parts of the body. You can test the
different solvers. You can blend them with each other,
obviously, and can choose
the right algorithm for the right job
on each body part and even along your animation. Next up, we're going to
look at spaces and controls. So if you've already
used controls a little bit, then I'm going to
try to cut this short. But we had the question
a lot of times in preparation to this, which was,
what's the difference between spaces and controls? Why do you even have the two? So controls is what
we've used so far. Controls are objects
you interact with and you put animation on. They're usually visible. They don't have to be visible,
but they're usually visible in 3D. So they'll be dials
you click on and move. Spaces, however,
are intermediate transforms that you can use to organize,
to offset controls. And spaces can also relate
to existing objects in the tree. So in this scene,
what we have-- on this rig, we have two spaces,
which are nested below bone. So one is this hand left space. That's below the hand left. And then we have the
same thing on the other side. So here's hand right space. And that's under hand right. And then we have a third space, which is at the top
of the hierarchy, which is called
hand center space. And below that,
there's a control. So there's actually
a control here. You can't really see it. It's in the body in there. And then we have a little graph. And what that does-- it takes the left hand space
and the right hand space, interpolates the two, right? And then sets the center space. So for simplicity's sake,
I've just set position. And what that means-- if we add the procedural
animation here-- if we add an animation clip,
rather-- for example,
I don't know, this guy, where he's hovering in midair-- you can see how this space
where the control is parented in is now between the two hands. And I can still use this
guy for animation purposes. So now I have a control
space which is procedural. And obviously,
you can change the settings. So you could set this
to be 0.2 or something, interpolate it between-- and I could go ahead and,
I don't know, wire-- like Jeremiah was showing,
wire a sin wave to this, right? And then you would see
this guy go back and forth. And the control isn't changing. It's the space that controls in-- oh,
this is going to -1. Anyway,
you can see how this is going over-- interpolating. And this is what you
would use a space for, right? Not only for hovering transforms
like this, but for anything where you have
to relate controls to bones or other things,
or where you need intermediate transforms if you want to
procedurally set and drive. So that's what spaces are for. Anything to add here, Jeremiah? Any comments? >>Jeremiah: No, I think that
you pretty much covered it. Spaces can be really useful
for both offsetting transforms-- you can also nest
controls under controls. You have options to do either. This type of scenario
that Helge has shown here is maybe creating some
sort of weapons system. If you want to have
a weapon bone, and you want to be able to control,
procedurally, which hand that attaches to,
or attached to the back, for example,
a space really help you out there. >>Helge: Yeah. All right. So next up-- the next one that
is also based on some questions we got is squash and stretch. So this is not an elaborate
squash and stretch setup, but I wanted to
take a try at this and see what we can get to. The quick answer is, yes,
you can do squash and stretch. You can do it in multiple ways. What I've decided to do,
though, is showcasing some of the
other nodes that we have. So in this example,
I can pull the head up, and you can see
how this is creating a stretch and a squash. Again, I wouldn't say it
necessarily well art directed. But what's interesting,
also-- we also have this IK spine alongside. And the way that this works,
actually, is in multiple phases. So let's go in and
disconnect this part. Again, kind of neat
highlighting in this release, we have autocompile. So as I disconnect it,
the rig is already reflecting the change. I don't have to do anything. So you can see how,
right now, these are just placing the
bones without changing any of the scale. And then we'll add
to squash after-- the stretch after. So this is using a node
called Fit Chain on Curve. And what that does-- it takes a
four-point Bézier curve that we procedurally create here-- I'll show it in a second-- and then positions
the chain on that curve. So I can enable it by
going into the Details Panel and enabling the debug. And then you will actually-- if
I disabled bones for a second, you can see it really well. You can see the
curve as it's drawn. You can also see its tangents. So this guy here has a
tangent point down here. And then this is
the interpolation that it's putting the chain on. And then once that's done,
what I do down here is I take the second spine bone,
I measure the distance-- and this is a rough estimation-- I measure the
distance to the pelvis. And then I
interpolate that value, giving a minimum
of 10 and 30 units. And I run it through
this curve to give it a smooth interpolation. And then I change the
scale of these bones. So if I just bring this back in, that now gives me this behavior
where they will start scaling if they're in that range. And you can use
debug watches and stuff like that to see what is
giving you, you can go in-- this is the way I set this up. So you can say,
watch this value, and then watch this value
to figure out the ranges. And now as I move it,
you can see, right? The minimum that I chose was 10. So at 10,
it stops with the maximum scale. And then up here
at 30 somewhere, it stops again at the
minimum scale for the stretch. And then this curve, you can re-adjust it
to a slightly different interpolation. So I can adjust the tangents
to give it a different feel, and so on. So I wanted to show the curves,
because they're actually quite interesting. We have a bunch of
interpolation nodes for curves, and they can be used for
better art directability of rigs. And of course,
you can blend between them and do a bunch
of things with them. All right. So then my last one. So what I want to
also mention is-- this is not necessarily
new for this version. We've had it for a while. But in difference to Blueprint,
most of our nodes have the ability
of being stateful. So what does that mean? Usually when you want
to simulate a value-- let's say a float growing over
time or something like that-- you have to set up a variable,
and you have to add to that
variable over time, and do your own
math to figure that out. And for a rig,
that can be very involved, because you might want to
do simulations and things go everywhere, right? And so what we've decided to
do is come up with stateful nodes. So in this example,
I have this rig. You've seen it before, actually. It's the exact same one
that we use to look at. But we're doing something
to the control value as it's coming into the rig--
again, just for demonstration purposes. So right now,
what I'm doing is I'm basically drawing what that
resulting transform looks like somewhere-- I think down here. I'm drawing this value so
you can see it as an axis but don't have it selected. You can see the axis here. And obviously,
as we're just piping it through, it's in the same position,
right? Now,
the next thing I could do is, instead of just running
this value through, I could go ahead and
wire it to the Accumulate. Now, the Accumulate is
one of the simulation nodes. You can find them
in the Contextual Menu on your simulation. There's a bunch
of different ones. And we have Average,
and Delta, and Value Over Time, and Overlay Integrations. What this does-- it basically
blends towards the target value over time. So it doesn't copy
the value straight. It will stick to the
previous value a little bit and then blend against it. And that means,
as I'm moving this control around, I get this hand to arrive
at that location, right? So for an IK on a hand,
that's probably not a good thing to do. But for lots of other
things inside of of procedural animation,
that's really useful. Good examples are
creatures like fish, where you want
the fins to follow, with a certain delta over time,
a certain motion, combined with harmonics. That can be really useful. And obviously,
you could do this on anything. You can not only
do this on vectors, but on floats and other
values you're running. But you could do a metric,
right? How far are the hands apart? And then run that through
an accumulate lerp, and then use that to
drive procedural animation. So there's a lot of
options with that. And we also have
other simulation nodes. So the only other one I'm
going to show is overlay. And that's just
overlay integration. It gives you springy behavior,
all right? And you can do that,
again, on vectors, transforms, and floats. And it's useful for
creating spring alike. This is not a full
physics simulation. It's just a little
integration that you can use for secondary dynamics,
antennae on creatures, and all sorts of stuff, right? But it's interesting you
have it directly here. And obviously, you can have-- you could say, create a lerp. So we'll just create-- and interpolate, right? And then we would go
ahead and run this guy in here. And now we'll run
this and interpolate it with the original control, let's say,
just to demonstrate this. And now, right? You have something where you
can animate how much overlay is visible, right? So right now, it's 100% visible. If we put this to 1,
it's turned off. We should probably
turn around these ports. But anyway,
you can set it up in a way where you can drive exactly,
right? What you want to see, features
turning on and off, et cetera. And with that,
if there's nothing to add, Jeremiah,
I'll hand it back to you. >>Jeremiah: Yeah,
I did want to point out real quick on your squash
and stretch example, just to backtrack real quick,
it does highlight one of the powers
of the Propagate to Children. We've been asked
about this before. In what scenario
would you use this? Well, in this case,
we see the body is doing the scaling, but the arms are still
maintaining their scale. So you can control exactly-- >>Helge: Yeah, that's true. >>Jeremiah: --what's
changing and what's not. Yeah. >>Helge: Yeah,
these guys are turned off. That's right. Yeah,
I'm scaling the spine bones, but I'm not scaling
the arms with it. Yeah, that's true. >>Jeremiah: Yeah. And additionally,
the arms are going along for the ride when you're adjusting
the fit to curve. But they're not going
along for the ride in scale. So you're able to
finely tune what's happening when it's happening. >>Helge: Yeah, that's very true. So we do have the
parent-child constraints that you know from
other 3D packages that you can drive if you
want that to happen or not. And they're literally--
they wouldn't have to exist, but we keep the
hierarchy as it is. So based on the child-parent
behavior in the tree, they're going to inherit or not,
depending on the setting, which is why you'll see this setting
on every single mutating node that mutates the hierarchy. And we've decided-- arguably,
you could say this is wrong or not-- but we've
decided to have-- turned off on all of
them by default. So you always have to turn it on. The idea here is that turning
it off means the rig runs faster. And we'd much rather
have you consciously turn it on than
forgetting about it and then having this
extra cost everywhere. Yeah, that's all. I'll close my screen share. >>Victor: Thanks
a bunch for that. All right. We definitely have a
couple of questions, if you don't mind tackling them. >>Helge: Yeah, that's fine. We have-- [INTERPOSING VOICES] >>Helge: --after, right? If I recall correctly, Jeremiah? But yeah, let's spend a bit
of time now taking questions that came up until now. Sure. >>Victor: Someone's
wondering if it's possible to cache the
results of Control Rig. Maybe it's worth mentioning
how you go through-- how to specifically
record it as an animation. >>Jeremiah: Yeah,
you absolutely can. And I was going to go
over some of that stuff with our Sequencer Demo,
where you can place your character-- you can either run
it through gameplay, or you can add it
in a level sequence and record the results there
[INAUDIBLE] so you can use in any of your gameplay. >>Victor: I think if that's the case,
let's just go ahead with the presentation. And we'll continue with
the questions at the end of it. >>Jeremiah: OK, yeah. >>Helge: Sounds good. >>Jeremiah: All right. So again, next, I wanted to go
over the Mannequin Rig Demo. I'm going to go over, again,
what we did, why we did it, and how you can
adjust it yourself. So this is the
Mannequin Control Rig. When you open up the project,
you're going to get something
that looks like this. We have our mannequin in here,
and it's just got some basic animation
on it using Sequencer. And actually, just to address
that question we just had, if I wanted to
save this animation and use it during gameplay,
I could just right-click on the Actor here
and create animation sequence. And that'll save
an anim sequence and avoid having
to go back to FBX, so if you wanted to export the FBX,
you could. >>Helge: It also--
just for clarification-- also records any of the
procedural animation on the bone. >>Jeremiah: Exactly. It records all the-- >>Helge: --animation on your hand,
but you had something
else that's connected with a procedural
spring that you built, that also gets recorded. >>Jeremiah: Yeah,
and that's a good point. I've used this
before on Fortnite, for example, where I wanted
some dynamic simulation, some procedural motion that
I had created in Control Rig, but I wanted to bake
that into an animation. So I created that procedural motion,
recorded that motion, and then merged that in
with my base animation. It worked really well. So in that way,
I was able to use Control Rig as a tool and not just
an animation platform. So let's take a look
at this real quick. We have our mannequin. We have a set of controls. And we have a
pretty basic graph. When I created this, I wanted
it to be an educational platform. I didn't want to go
too complex with it. And that's one of the reasons
why it doesn't have fingers and it doesn't have IK/FK
on the arms or an IK/FK spine. I wanted it to be
a way for everyone to step through and
build complexity as they go through this graph. So in order to do that,
I started just with grabbing each control. And in this case,
I'm getting a control and it's global space,
and I'm passing it into a set transform for a bone. And all the way through,
I have Propagate to Children turned on. You'll see I just started
with the root pelvis. I go down the spine. It's all doing the same thing. Nothing new here-- and the head. As we step down,
now I get into an arm. An arm is a clavicle. In this case, it's a clavicle. And it's an IK-- a basic IK. And I've created a basic
pole vector, we've got available
back here that I'm feeding in the position to. And you saw Helge
do a demonstration of exactly how to set that up. And then I have a control that's
representing my end effector. And with that,
we have animation. So we have the
same things-- left and the right arm. And it gets a little more complex,
again, when we go into the legs. Here,
I wanted to introduce math notes just to illustrate that you can
do all these calculations yourself, or you can-- these calculations
can come externally if you wanted to feed in and
determine where these pole vectors are, for example. So I wanted to create
a little bit of a foot-- kind of a reverse foot. So again,
I have my foot control, which is just like my hand control. And then I'm doing a
little bit of calculation for where my IK vector is. So a few ways that people
would do a reverse foot would be a stack of controls,
and then you'd parent the IK under the lowest control. Or you could have
constraints across-- have a reverse foot, a forward foot,
and constrain between those. There's a ton of ways to do it. In this case, I wanted to
have each one of these controls affect my IK. And so I have a toe. I have this little
effect that I have here. And there's nothing procedural. It's just the
parenting structure that I have that allows this. But what it means
is that I wanted my IK to be essentially parented
under this ball control effects. But I also wanted my
IK to be under this foot. So in order to do that,
I have the calculation from my foot control to my ball control,
and getting the relative offsets, and piping that in. You can think of it as like
a parent constraint when you maintain offset in Maya. And then we do the same
thing again here for the toe pivot. So I'm able to calculate these. And this is an interesting
way to see that I'm actually doing some calculations
from the bones and controls, and piping it in,
again, two bone. And again, I'm just
duplicating logic for left to right. >>Helge: One of the things
I'd like to mention real quick here is that some
of this is constant. So I want to bring
this up in terms of performance real quick. So get initial transform and
get control initial transform-- this little branch of these
three nodes that say get initial, then Make Relative, this
whole thing, these three nodes-- they're all constant, right? So they don't change at runtime,
because the initial transforms never change. And so the compiler,
when we optimize the rig, actually computes this
once and keeps the value. And these nodes
are not run at runtime. I just want to highlight this,
because a lot of times, people think,
can I add all these nodes? Is it going to cost us anything? And anything which
is constant gets folded. We call it folded. So we remove the nodes. We keep the result.
We keep the nodes here, obviously, for the user so
you can see what you're doing. But a lot of the rig that you're
showing isn't actually run. It's just there to describe,
right? What is the overall
relationship of everything? But then during runtime,
we optimize it as much as we can. >>Jeremiah: Yeah,
that's a good point. And I've seen,
online, a lot of people are focused on this execution
stack and using the number as a gauge for performance. But with Control Rig,
it's designed to be hyper-performant. And there's a lot of
optimizations for you, like that internal
caching if the lines are going to be static. So, so far,
I recommend using it as-- go crazy. If you run into
performance issues, I'd love to hear about them
and hear about those cases, because it is really
designed to be optimal. and-- just really quick,
just for procedural motion and for authoring animation. >>Helge: Yeah,
the way I usually go about it is very
similar to coding, is, don't try to outsmart the
compiler by trying anything which seems smarter or-- in fact, go for readability. Go for what you feel like
makes it the best usable asset. And if you come across
performance issues, like Jeremiah said, we'd love
to hear about the performance issues, because then
that makes it our problem. We can't go for a
system where you have to trick it to make it fast,
and then have it-- you pass on the rig,
and the other person can't use it, right? Go for simplicity
over performance, and we'll try to
solve that for you. >>Victor: I'm going
to take this opportunity to let everyone know that if
you are interested in continuing the conversation, or if you're
working with Control Rig, the forum announcement
post for the stream is a good place for
those discussions. So I'll go ahead
and drop the link. >>Jeremiah: Let's see. Next, I want to talk about-- what do I want
to talk about next? All right. So I went over
the rig real quick. I don't know if there are
any questions about this rig outside of what it mentions. But I do want to talk about
how we can augment this, adjust it,
how a user can make this their own. So a common question is,
why didn't you add fingers? That's fair. It's because,
at the end of the day, there's a lot of
different ways to do it. And I didn't want
to just duplicate the same logic that we
already learned at the beginning. So this keeps it a way to learn
varying ranges of complexity. But in order to add fingers,
you could duplicate this logic for
just straight FK controls on the fingers. Or you could use
the distribute rotation that we learned earlier to
where I showed from the spine to the head just
based off a vector 2D. Or you could even use
the CCDIK or FABRIK IK that Helge showed to
do some sort of IK control. And you can blend
all of those together. An example that we
could do on the arm, if we wanted to do IK/FK arms,
for example, would be we could create a new flow control
and drive the weight value. Then you can blend
on and off your IK, and you just have IK controls
and FK controls-- again, very similar to
the type of logic that we do down here at the leg. So from these building blocks
that we put in this mannequin rig, you can extrapolate
a lot of different ways to design the rig to do exactly
what you want in the way that your animators
would like to interact with it. >>Helge: I'll also
take the opportunity to mention,
stay on the live stream because eventually,
we're going to talk about a bit of an outlook for 4.26. And we'll probably
mention fingers again then. That's all I'm going
to say for now. >>Jeremiah: And
a little sneak peek. Yeah. All right. So let's go ahead and-- I think I have just one
slide just so we can officially be on the next step,
which is Sequencer. So once you have a rig,
we want to animate the rig. What good is a rig if
you can't animate it? As a rigger,
you feel really good about yourself. You've made
something pretty cool. But we want to create
that cool content. So I'm going to jump into the
next step of this Sequencer. As I mentioned, when you
download the sample projects, we already have a
level sequence in there for you and a quick map. You can see this
animation playing. And again, you can right-click
on this Create an Animation Sequence and use
that for your gameplay. Let me get to my notes again. So a few things to mention-- I want to actually show
several ways to use Sequencer to animator your rigs. We're going to start basic. I'm just going to
create a new scene, and I'm going to drag
in my mannequin here. And a way that people are
used to interacting with a level sequence and adding
in their animation is, you have your Actor,
and you just add in an animation. So I'm just going to add in,
let's say, the walk. And we'll play this. All right. So I have a walk animation. And let's say this is mocap,
for example. And the fingers aren't animated. Well,
a quick way to do that is to use a new track available to you
called Additive Control Rig. This is an
auto-generated Control Rig that allows you to manipulate
bones additively on top of your animation sequence. So I'm just going to go
ahead and add that here. And you see it created a track,
and it's got all of the bones
available for you to key. Let's go ahead and
keep playing this just to-- I'm going to do this a
couple times just so I can illustrate what we can do. I'm going to set some
keys and rotate that up. All right. There we go. So I do have auto-key turned on,
which is allowing me to [INAUDIBLE]. So you can see that I've
got some additive-- horrible additive-- animation on
top of this walk sequence. If I wanted to take my mocap
again and tweak it real quick-- let's say I adjust
the finger pose-- and then save that out again-- right-click,
Create Animation Sequence. Or if you're a tech
animator or rigger and you want to create some
quick poses for a pose asset, you could do the
same thing here. You can right-click and
say Create Pose Asset. So this is using the
Additive Control Rig. And it could be a
little cumbersome when you have all of
these controls showing up. There is a way to
work around that. You could just right-click
here in this section on your Additive Control Rig. And you can toggle rig controls. So here, I'm just going
to turn off all of them. And then I'm going to
turn back on the ones that I want, shortly, I'm sure. >>Helge: Quick question,
Jeremiah. Quick-- >>Jeremiah: Yeah. >>Helge: --question, Jeremiah. And this is putting
you a bit on the spot. But don't you have a
degree in animation? This edit is a bit-- >>Jeremiah: That's true. >>Helge: --simplistic,
if you ask me. >>Jeremiah: Technically-- >>Helge: And he's looking
like he's enjoying some music. That's what I think. >>Jeremiah: Yes. >>Helge: But yeah. >>Jeremiah: Yeah,
I say I technically have a degree in animation,
which is why I'm a technical animator. >>Helge: Nice. All right. >>Jeremiah: [CHUCKLES] So
now I can turn on my upper arms. Where's my left one? There we go. All right. So now let's say I just
want my upper arms. Or if I just want fingers,
I can add those in so that those are the only
channels that I see here in my Additive Control Rig. So that's one way to
animate with Control Rig. But let's look at another. And now I'm just in
my Skeletal Mesh. Also, I'm just going to
click on Track again instead of adding an animation,
or my Additive Control Rig. I'm going to choose one of
the control rigs that I've created. And in this case, it's going to
be the Mannequin Control Rig. Now I'm able to animate
this directly as if I were just animating in a DCC. And I'm going to take a quick
moment to pause here and point out a few tips and tricks
for animating in Sequencer. So the first one
I want to mention is this animation mode,
which should automatically pop up when you drag this in. It has a couple cool features. One is Only Select Rig Control. So now as I'm
clicking through here, I'm not clicking through-- I'm getting [INAUDIBLE]. I'm not getting my
skybox or my backgrounds. I'm only able to
select controls. This makes it a lot easier
for interacting with your rigs when you're
animating in your levels. Another one is I have
a control hierarchy. So I can make my search up here. Let's say I only want upper arms,
if I can spell. Let's see. Upper-- all right. I can't spell. So additionally,
this is live controls. As I'm selecting-- this is
also selecting my control in my view port, and it's
also selecting the control here in Sequencer. And if you want your Sequencer
to be a little simpler to read, you can adjust this filter
and say Selected Control Rig. I'm maxing out my
system here. All right. So now as I click through this-- maybe I lied. Ignore that. >>Helge: It's making it faster,
for sure. >>Jeremiah: It definitely
made it way faster. Yeah, now you don't
have the controls. Yeah. >>Helge: Yeah. >>Jeremiah: So that part
doesn't seem to be working. But it is really helpful to
have this control hierarchy up when you're
working so you know-- you don't have the flat list. And you can also use the
filters to search through the tracks And setting on auto-key
so you can quickly key to your tracks. >>Helge: I think I got it now. I think it's because everything
is called forearm and not upper arm in this rig. >>Jeremiah: [INAUDIBLE] >>Helge: So that's why
"upper" wasn't finding anything. >>Jeremiah: Oh,
because they're the controls and I have the bone
names in my head. >>Helge: Isn't that right? I think that's why. I was really confused,
because the search-- >>Jeremiah: Yeah. >>Helge: --is working. It's just-- I think
it's forearm-- >>Jeremiah: There you go. Yeah, I just-- >>Helge: --not upper arm. So-- >>Jeremiah: I've been in
skeleton land for so long. >>Helge: Yeah, it's fine. >>Jeremiah: Yeah. So those are two ways to
interact with your rig, both dragging in a Skeletal
Mesh and animation, or it could be an
Actor, you can throw an animation on there and
use your Additive Control Rig, or have a Skeletal
Mesh or an Actor and adding in a
Control Rig as a track. One more thing
that you can do is-- to really quickly set this up. Sometimes it's tedious to
go through at a level sequence and plug everything in-- is I can just drag and drop
a Control Rig into my level, and it'll set it up for me. So here, I'm just going
to drag in my Control Rig. Control rig pops up. It automatically created
a level sequence for me and added that Mannequin
Control Rig in as a track for me to start animating immediately. All right. See if there's any less things. Yeah, I think that's-- that's
what I wanted to cover for this. >>Helge: That's great. And just to clarify,
setting up the sequence like this is the same as
doing it manually. So of course,
it's faster, right? But the point is, if you want
to customize this, add tracks to it,
It's just a level sequence. It's the same set up. It's just a little script
that makes things faster. As you drag a Control Rig in,
it takes the right Skeletal Mesh, creates a level sequence,
puts the Actor in, puts the track in, and sets
everything up for you, right? >>Jeremiah: Yeah. Yeah, exactly. A couple more quick
things to note-- actually, when you're animating in here,
especially when you're creating controls with limits,
like a vector 2D control, make sure that your
snapping is turned off. I've run into this
issue if my snapping is set to 10,
because I've been doing some set dressing or something,
and then I'm trying to move a float or
vector that's limited between -1 and 1, it's not going to move. And you'll think
that it's broken. But really,
it's just your snapping is turned on. And so the control is not
able to leave that 0 to 1 space. Again, the Select Only
Controls is really helpful-- or,
Only Select Rig Controls-- which is available in the
animation details here. You can turn on
your hierarchy if you want to see your skeleton
and see what that's doing. Maybe you have
some procedural bones, and you want to see that moving. Additionally,
as you have controls selected here, you have your
transform available. So that allows you
to set values directly as a transform and
also key them here. And one more thing
I meant to mention as we're going
through the mannequin-- but I have some
nested controls here, and there was a little bit of
confusion about why I did that. That's just coming
from animation. You want to be able to
have some additive control. So maybe movements
is on my pelvis. But then I still
want to tweak my-- or, on my body. And I want to tweak the
pelvis a little bit just for offsets. Sometimes you'll
layer controls just to give animators
ability to highly refine the way that they're
interacting with the rig. Another great example is,
you have gross movements-- gross general
movement-- and then you want to have just
fine jitters on a control. You want that gross
movement to be on one control and that fine jitter not be
muddying up your curves for the [INAUDIBLE]. That's why we added
some of those in. >>Helge: So also, as we're
going to look at runtime rigging next, I just want to
highlight that if you're not planning on using a
rig for runtime rigging, but for authoring
animations like this for creating animation assets,
you could go nuts with the
complexity here, right? Because when it's baked
down to an animation asset, there's no performance
cost to any of these controls. So in other words,
adding more granular control and getting real high fidelity
on authoring and making it really easy for animators
to get just the exact animation out of the character
that you want doesn't cost you anything
in terms of performance when you bake it down
to an animation asset. So that way, like go nuts. There's no cost in having
multiple-layer controls like Jeremiah was showing. >>Jeremiah: Right. Yeah. So I definitely
recommend-- just have fun, and add in procedural motion,
add in authored controls. You can do some
really cool things when you start combining
the types of notes that we have available
in the Control Rig. Something else you'll
see in Control Rig is,
we tend to build it as a framework. So it's from the ground up. So you'll see very
small math nodes. We've got just atomic nodes. In fact, in Control Rig,
we call them rig units. So each rig unit is
originally designed to be small,
little bits of logic that you can build into
more complex logic. And as we continue
building Control Rig, we'll see higher-level
nodes that are essentially going to be versions of a
bunch of smaller nodes combined together. So you should have
the ability to create really complex behavior. And eventually,
we'll see the ability to have just IK/FK switch
with my arm and all the controls that I need for
that as an example. All right. So now I want to hop
on over to runtime rigging. And again,
we've mentioned this before. And really, this is the ability
to have a procedural rig that's doing some sort of behavior
on top of your animation. Or it could be a
Control Rig that's doing all the motion for you
based on some sort of gameplay. Let's take a look
at this real quick. I want to show you, first,
what we're going to be looking at. And then we'll take it
apart and see how we did it. All right. This is my trusty mannequin. And we've done some
slight modifications. So he's got a hat on
his head with a propeller. And as we run around,
it's based off the velocity. The propeller spins. I'm also using
some interpolation. So once I stop running,
the propeller slows down to a stop. I'm using the entire
velocity of the character-- so not just using 2D. And that gives us
some fun results. Additionally, I've added on
just a little piston onto his arm so we can see it aligning
based off the arm movements. So this is illustrating
taking gameplay logic and feeding it into Control Rig,
and taking existing bone transforms
and manipulating something procedurally. So we can-- >>Helge: Can I just say
I really love this content? I think it's awesome. It's so simple, but it's so fun. I like it. >>Jeremiah: Thank you. All right. So let's look at this. First,
let me show you my Anim Graph. It's pretty straightforward. Everything we've shown
before has been a Control Rig as an asset we've dropped
into a level and manipulated. But now we're
looking at Control Rig as an Anim Node
and an Anim Blueprint. And all I'm doing here is
I'm using this Anim Blueprints as a post-process graph
on my Skeletal Mesh. So we can see-- I've got my Skeletal
Mesh up here. And I've just plugged in
my example content here. Oh,
we also see something interesting. I've added a speed
variable that I've calculated in my Event Graph. I'm just grabbing
the pawn and getting his velocity. And I'm plugging that
into my Control Rig. And this is
illustrating parameters. So this is a way that you can
get external gameplay logic, however you want
to calculate that. Maybe your anim
instances have some logic that it could pass through,
or you can calculate it [AUDIO OUT]. And I'm using that as an
input into my Control Rig. And let's see how
we set that up. All right. Here's my Control Rig,
and we're just going to focus on this part,
which is the propeller spinning. In order to create
a new parameter-- let's go to create one now-- you see New Parameter
and you see New Variable. There is a difference. And in 4.25, we see parameters
as taking external data into the Control Rig,
whereas variables are managing internal data-- just storing a value and
using it somewhere else within that Control Rig. And that's illustrated through,
variables have getters and setters,
whereas parameters only have getters. So if I wanted to
create a new float, I could just click in here
and type in something. I can say, my cool parameter. And now I have a new
parameter called my cool param. I already have one
created called propeller spin. And if I click on this,
you'll see that's the only one
I've got available. So my setup is,
take in the value-- in this case,
its speed is getting passed from my Anim Blueprints-- and run it across this curve. I'm using a curve
instead of a remap, or instead of a multiplier,
so that I can refine the way that
the speed of the propeller ramps up. So if I want to wait
until I'm fully sprinting-- so I want the propeller
to slowly speed up, and when I'm fully sprinting,
then it's really going-- I could adjust this curve
to be something like this. I'm doing the remapping. So I know my speed just from
watching my Anim Blueprints during debug. It's going between 0 and 600. So I'm remapping 0 to
600 to a max rotation speed. In this case,
40 seemed about right. I'm using accumulation nodes. We saw a few of those,
previously, in our examples. In this case,
I'm just doing accumulate add. So I'm slowly adding the value,
and I'm starting at 0. And then I'm just
converting that to a rotate. So here,
I'm just spinning on x and setting the rotation in local space. I don't need to
be in local space. I just need to spin
around [AUDIO OUT]. So let's test this in here. Let's create a quick
example of how to test it. I'm going to use my
favorite example-- you've seen it a few
times already-- where I'm going to create a sin. And I'm going to use time. I'm going to need something
that goes a bit more extreme. So once again, we're going
to right-click Add an Interpolate Node. I really love these things. My in range is -1. My out range-- we'll call it-- let's leave it 0 to 1,
so we'll clamp it. And I'll say 0 to 600. There we go. So now we see,
as time is fluctuating, or as the sin is fluctuating,
it spins up. And I'm using the properties
of the map to ignore anything below -1. So that allows
us to see it settle once sin is down below 1. And this allows me to fine-tune
what that speed may look like. So here, it may go real fast,
real quick. I don't want a slow ramp-up. So I have some art
direction available here. Let's go ahead and go
back to my parameters so that I can get
my incoming data. And now let's go over and
take a look at this piston that I've set up here. Here, I have just a couple aims. And I did change a
couple values in here to make it work better for me. In this case,
I have my upper-arm piston, which is this bone. And it looks at the lower arm. I have these bones just parented
under upper arm and lower arm. And I have set these
values to work for each bone. So in this case,
since both of these bones are pointing down the arm,
I want this bone-- the lower piston bone to be
looking back, in which case I'm using negative
1 on my axis to get that behavior that I want. I'm using location,
because I'm using the targets of the opposite
bone as my location. I think I may have adjusted my-- no, I think I left my
secondary axis the same. Since in this case,
these are round, it doesn't really matter if
they spin along their axis. So I just left that alone. And that's all I have
here for this logic. I could make this more complex. I could say,
once I've reached a certain extent, then scale these pistons. Or I could-- so I get some sort
of stretch or some compositions so I don't get a gap. But I left it simple for now. >>Helge: There's something
I'd like to [INAUDIBLE] on that which is that
it's hard to understand initially, especially coming from
an environment like Maya, where the execution is bound to
the DG, the Dependency Graph, right? Where there's-- you
can't introduce cycles in a Dependency Graph. You can,
but it's not recommended, right? Whereas, for us, you can have-- this is a good example. You have one bone
looking at the other one. But then you have
bone A looking at bone B. But then you can also
have bone B looking at bone A. It's not actually a cycle. And that's because
our graph only describes what needs to happen in order. And there is no problem if you,
for example, aim something once and aim it again later. It's not a problem. Or you can decide to
not aim it again later, and have it procedurally
turn on or off, or whatever it is. And that's because
we're actually creating a stack of execution. We don't have a
graph at runtime. We have a list of things to do. And that gives you
much more freedom than you would have in a
DG-based execution environment. >>Jeremiah: Yeah, absolutely. And you saw some of those,
as another example, with the way that
Helge was controlling the IK with overlay. And you can add those
effects on top of the control to move the control
in space and then animate on top of that control. You get this cool,
complex behavior that just wouldn't be
possible otherwise. Let's see. So one thing I didn't mention
here in the Anim Graph is, how do I get this
pin off the Control Rig? Once you've created
a parameter-- in fact, let's go to Create a
New Parameter in-- let's call it-- let's just say
make a bool. I'm going to say My Bool. And if I go-- does this show up automatically? It does. So once I click on the Control Rig,
it automatically shows up. I can pass in a pin,
or I can pass in a curve. So if you have some really
complex logic [AUDIO OUT] as curves that you're
passing through, you can map those
through your Anim Blueprints to drive your Control Rig. In this case,
I'm just setting my value based off of gameplay logic. And in order to have
that pin show up, I'm just going
to check this pin, and the pin shows
up on my Control Rig. And now I can use
it however I'd like. >>Helge: A way to rephrase this,
which I quite like sometimes when people are really
experienced with the Anim Graph,
is that runtime rigging really means you have the ability of
building your own anim nodes, right? That's basically what this is. You now have a Control Rig node,
which is running another piece of logic,
where you have more freedom, and more flexibility,
simulation capabilities-- lots of things that you don't
have in the Anim Graph. So you can use it for
building your own nodes, basically, right? >>Jeremiah: Yeah. And another thing to mention is,
runtime rigging allows you to create
complex behavior on top of existing animation without
relying on baking animation-- or, having all that logic
existing in your DCC rig. So in this case,
let's just show, really quick-- I have my trusty mannequin here. Let's do-- I want, actually,
my base mannequin. Here we go. So I'm using this mannequin,
because he doesn't have the propeller and the-- actually,
I'm doing this all wrong. I want to use my runtime guy. Here we go. All right. So I have this guy,
because he does have the additional
context there. I'm going to go ahead
and add a level sequence. And I'm going to-- And I'm going to
add in animation. In this case,
it's just animation that's available from the
third-person sample content. So let's do walk, for example. So as I play this,
you can see the piston is working automatically. Even though there's no animation
available in this walk cycle, I'm still able to solve
using post-process graphs. So what this means for your
characters and in your games is you can have a base locomotion
set for all your characters, and you can do all of
your secondary complex-- let's say shoulder pads,
loin cloths, pistons-- anything you can think of
that would just be procedural, dynamic motion-- don't bake those into
your anim sequence. You can build a Control
Rig and do that logic for you. And then all of that secondary
logic is reactive to your game as you add in more content. So I add in a new emote for this,
the piston. I don't need to update
all of that in my animation. I feel like I have
one more thing. I think that's good. All right. I think we're ready to
go on to the next thing, unless we have any
questions about runtime rigging. >>Helge: And maybe
you want to pause. Is the next thing the look out? What is next? >>Jeremiah: The next thing is
looking forward [INAUDIBLE]. >>Helge: So yeah, I'd say,
let's do a series of questions, if we have any,
before we go there. >>Jeremiah: Yeah. >>Victor: We have
plenty of questions. Some of them, I believe,
you have addressed. There were definitely a
couple around the topic of runtime rigging. And I think we
tackled that fairly well. If there are any more
questions there, let us know. Something that I
wanted to point out-- the runtime
rigging-- for example, that's the stuff that you can do for,
say, a full-body IK rig for a VR avatar, right? And you would essentially
construct your own Control Rig and then use those nodes
inside the Animation Blueprint to be able to drive that rig. Is that correct? >>Jeremiah: Yeah,
that's correct. So for example, with this guy,
if I wanted to, instead of driving-- in fact, I could even do it. I could create a parameter that
would be a transform parameter. And in this case,
I'm taking transform inputs, and I could feed
it whenever I want. So in this case,
let's go ahead and-- this is a great way to show
how I can combine logic on top of controls,
still drive controls, and have animated controls. So I'm going to have
this transform parameter-- it just has a default
name right now-- and feed in a transform. Here, I'm going to
drive the control location. I'm not going to do
a full setup of this. But hopefully,
we can illustrate what we're doing. So I have my left hand here. I would set my
control transform. And then connect that here-- So I could drive
my control transform through an external parameter
using the Control Rig Anim Blueprint that we showed here. And that transform
would just show up right on my Control Rig node. And I could feed in transforms
through my gameplay logic. If it's VR inputs that you're
handling through Blueprints, you could set through an
instance or any other gameplay logic [INAUDIBLE]. >>Helge: Also,
you could add parameters that-- building a rig that,
by default, has controls. It also has transforms,
but just ignore it. The weight would be
set to 0 in this case. And then you can
have a parameter, which is your "turn on for
runtime" parameter, right? And then what it would
do-- it would set up all these weights-- so
to override the controls with runtime values. So that way,
you don't have to have two rigs. You can just
integrate it with one. And again,
for some optimizations we do, if you have branches which
are constant-- in other words, there's a branch
that is never going to be active because
you have the weight that's always to 0 with optimizing that out,
anyway. So don't worry about building
more logic into a single rig if you're not going to use it. It's not a cost. >>Victor: And you could even,
then, use the same Control Rig to capture those transforms,
recording those transforms using Sequencer,
and act, right? Still on the topic
of the VR avatar-- and record those transforms,
and then play them back at runtime to have a sequence
in runtime playing in the editor where you see
someone who looks like it could be what the player
would look like, and act that out. >>Helge: Absolutely. One of the things we found-- and this is maybe
getting super nerdy. >>Victor: That's
what we're here for. >>Helge: It does make me nerdy,
which is that you can use Control Rig as classic
replacement for linear content creation for authoring animation,
right? But you can also use
it for runtime rigging. But you can also
use it-- and that's where it gets real interesting--
as part of your tooling. You just use it like-- Jeremiah was
mentioning it earlier. You use it as a middle step
somewhere in your custom workflow where
you use it to create some procedural animation
mixed with something else. Maybe you just use it
to bake the upper body, because that's what you need
for your particular area in VR. And then you combine that
with other keyframe animation from another source. It's really there for
you to experiment-- part of our Lego bricks, really. >>Victor: So one of the
questions that came up in regards to runtime rigging was if
we have done any tests on the performance cost. Do you have a ballpark that-- I think that's
what we're looking for-- not exact numbers. But I think the question was,
what is the performance cost of
running a full-body control rig over an animation
instance in gameplay with default specifications,
as an example? >>Jeremiah: So we've
shared this before, which is why I'm
comfortable sharing it again. We use Control Rig on a majority
of our characters on Fortnite. And we've seen some characters,
in previous, early seasons-- for example,
with Werewolf and the Mech-- those are running
full Control Rig and being driven by
retargeting animation. So we have Control Rig. We have animation. We have layers of-- your backpack is
attached in the backpack, and they have dynamic motion
that's also using Control Rig. And then you multiply
that by the scale of the number of characters
that you may have on-screen. And we're running
that on a switch, or running that on
a phone or a PC. So it's designed
to be performant, and it's designed
to be flexible. >>Helge: That being said,
there's a grain of salt. As it's a big,
visual system, you can build a rig which
is not performant, right? You, as a user-- you can
build one that is just so big and does so many
things that it's not going to fit your requirements,
which is why we've integrated
a profiler into it so you can look at all the numbers. You can see how
long things take. You can see which
nodes are taking the time. And we use these tools,
like Jeremiah said, to run it on the majority
of the characters that you have in a
current Fortnite session. So those rigs, to be fair,
are usually not full-body rigs. But their sheer number,
I guess, compares, right? So if you have 100
players with partial rigs, let's say-- half-body rigs,
if you will-- then that gives you
an idea of the invocation cost if you have, like,
three characters. >>Jeremiah: Yeah. >>Victor: All right. Let's move on. Let's see how many
of these we can grab. Can you pause a running
game and adjust the pulse with a Control Rig? >>Jeremiah: Helge? >>Helge: I'd wish you could. I don't think you can right now. You can attach control rigs,
like Blueprints, to runtime instances. So you can look at the
value and the pose going through a Control Rig
for a running game. But you can't adjust it. And I don't think you
can pull out the values. You might be able to copy
and paste off values, right? Off that instance
when it's paused. But yeah, you can't adjust
an animation in the gameplay. That being said,
Control Rig has been designed to eventually do this. I'm not going to say when we do it,
if we do it. It's more, it has been
designed with a clear separation that all the things
necessary to do adjustments to the rig at runtime are there. Just, workflows around
that are not exposed. >>Jeremiah: Yeah. And just talking
about debugging, I thought I'd bring up just
a real quick example here. I'm running around. I have a Control Rig--
or actually let's debug this guy. All right. I don't want to fuss
with this too much. But you can live
debug your Control Rig and tweak your parameters. It's not adjusting controls,
but it's adjusting your graph and
having it [INAUDIBLE]. I think we actually had a
couple questions that we wanted to answer that we'd seen. >>Victor: Yeah, you can go
ahead and tackle those first. And then in case some
of them aren't covered, I'll just move on with
the other questions. >>Jeremiah: Yeah. Helge jumped in, and he talked
about spaces versus controls, and different ways
that you can drive logic. One thing that came up was,
how can a bone be used? How can you add bones-- or, when you add bones
into your skeleton, what can you do with those? In 4.25, you can use those to
calculate intermediate values. You can use, for example,
your IK solvers and solve those on
those intermediate bones, and then derive some
sort of logic from those. It also allows you
to start [AUDIO OUT] and feed those values
back out to an Anim Graph or different areas of your rig. You can solve things
in different ways parallel. Helge might-- >>Helge: Basically, in technical
speak, if you add a bone-- maybe you want to
do that here in this rig, just so we know what
we're talking about. >>Jeremiah: Yeah. >>Helge: If you add
one of these green bones, they show up in
a different color if you add them
to your hierarchy. We're talking about adding
it to the hierarchy, so-- >>Jeremiah: Yeah. >>Helge: --what's happening,
man? All right. >>Jeremiah: What have I done? All right. So-- >>Helge: Just add a
single bone somewhere. That's what I mean. We don't actually have
to use it for anything. Just add it there
so we can see it. So there's a new bone
being added to the hierarchy. It's green. You can rename it. You can put it anywhere. And they're just parented in. And so basically what these are,
right? They're not real bones. They're not part
of the skeleton. But they have the
same characteristics as-- and they have
parent-child relationships. So if you give them
a local transform, you can ask for the
global transform as facility. And that makes it easier. But really what they are-- they
are transform variables, right? They're just
variables you can use for simulation with a
parent-child relationship. So lots of times,
you want to put something, like a jiggle under the
clavicle or something. And you want an easy way to
ask for where that is globally. You could do that with
a transform variable and compute the global
all the time yourself. Or you can just
put a bone there, simulate it, and then,
in some later section of the rig, ask for the global of that. And it's an easy accessor,
right? That's the reason why we
make it part of the hierarchy. But that's really what that is. They're not real bones. They're just, yeah,
transform variables with parent-child relationship. I hope that answers it. >>Jeremiah: Another question
was about Control Rig sequence. Now, this was something,
I think, in 4.24 and that was in Sequencer. You were able to create a
Control Rig sequence or a level sequence. And level sequences-- now,
everything gets added into
a level sequence, and you're able to bake out that
animation as an anim sequence, if you would like. So we don't have
that difference. I think [AUDIO OUT]. We're going to get through
some more questions here. >>Victor: Yeah,
I can grab one from chat in between, if you don't mind. >>Jeremiah: Yeah, go for it. >>Victor: Could
positioning and rotation of rig elements in a scene
be accessed via Python during runtime of the game? >>Jeremiah: No. Python-- well, maybe yes,
but I'm going to say no. Python-- I don't know
if it could be done. I wouldn't-- >>Helge: No. I'll ring-fence this topic. No. It's because-- it's very simple. We use Python as an
orchestration language, is what we call it. So for a tooling
inside of the editor. So that's what it's there for. So yes,
you can adjust the pose of control with Python in the editor. In the runtime, you have access
to C++ and Blueprint, right? That's what you have. In both of those,
you can access controls and set them. So you can use
Blueprint as well for that. It's just, you don't have access
to Python in the runtime, which is why you can't use Python
for controls in runtime, either. >>Jeremiah: Yeah. So the case that I
would use Python is generating controls,
generating your Node Graph in controlling. So this is constructing
your Control Rig, not driving your
Control Rig in the editor. >>Victor: If you're curious
about our Python integration, you can go check out a stream
we did last year with Aaron-- Aaron Carlisle. It's in the playlist. Did you want to move
on with your question-- the next ones, Jeremiah? >>Jeremiah: Sure. Yeah, we had a couple
questions around physics-- both physics and driving
Control Rig with physics, or attaching objects to
something that's dynamic, like cloth. So currently, you're not able to
feed in a point on a Cloth Mesh into Control Rig. It must be, in some way,
deriving that logic through Blueprint,
in which case you would pass that value through a parameter. But that's not
directly supported. That would have to be
logically built out. And driving Control
Rig with simulation is not something
that's directly supported. But you would be able to
in serial have a rigid body node and then your Control Rig node,
in which case, since the Control Rig
node would be taking the outgoing pose
from your rigid body, it would act relative. So you have the ability to
determine where your control rig is solving. >>Victor: All right. Should I continue? >>Jeremiah: Go for it. >>Victor: How much
time do you guys have? >>Helge: Sure. >>Victor: Still good to go? All right. Let's continue. >>Helge: Sure. If there's still questions,
let's try to cover some of them. >>Victor: Yes, yes. Yes, there are. How can we use
multiple control rigs based on the same Skeletal
Mesh in the Sequencer? Is it possible to create
interaction between two characters using this? >>Jeremiah: We'll pause
that question for now. >>Victor: Pause
that question for now? >>Jeremiah: Yeah, yeah. >>Victor: We're going to
pause that question for now. In Two Bone IK node,
what is the difference in result between the two
options under pole vector kind? >>Jeremiah: It's
a good question. And that's really confusing. >>Helge: That's a-- yeah,
it's a great question. I love that that's coming up. It's been confusing everyone,
and I apologize. >>Jeremiah: Yes. So-- >>Helge: Basically,
that's my answer, is I apologize. >>Jeremiah: To just let everyone
know what we're talking about, on a basic IK,
there's pole vector kinds, and there's direction
and location. You also have a
pole vector space, where you can define a bone. So pole vector location is,
there is an object in space, and you're looking
at that object. Pole vector direction
is the direction around a bone based off the
specified space. So if it's none, I think it's
just that bone specifically. But if you define
a different space, you could rotate, say,
around the chest. Did I answer that well enough,
Helge? >>Helge: So, no. But this is where he shows
you that this is a bad option. So basically,
location means, the vector that is connected to your
pole vector pin is a location. It's represented as
a position in space. So in this case,
it's the global space position of the left knee-- whatever
control you have there, right? And we're aiming at a position. But sometimes what you're
feeding in is not a position. It's a direction in space. It says, like, that way. It's not exactly the
position where you're going, but it's that way. And then this is understood to
be a vector and not a position. Now,
I added this when we designed this, because a lot of my cases
where I was using directions to orient something, like,
upwards or sideways. But apparently--
this is my learning from this-- is riggers
think in locations, and everybody wants
this to be set to location. And the default,
I believe, in 4.25, is direction,
which confuses everyone. Now, I can't talk too much
about the next release, but I can tell you that we've
changed the default to location in the next release,
because this was a major complaint that everybody had. So I hope that
clarifies a little bit. >>Victor: Yeah, I think so. Next one up. Do you include a Three
Bone Digigrade solver? >>Helge: No. >>Victor: I have no
idea what that is, so-- >>Jeremiah: Yeah. We did illustrate a
few of the IK solvers. Or, Helge demonstrated
a few of those available. So play with those ones. >>Helge: Yeah, we don't have
a specific three bone solver. You can do it different ways. There are different approaches. When I helped rig
the the mech, we actually used three basic
IK stacked on top of each other, because In Control Rig,
remember, you can actually do
things multiple times. And we also have--
can you right-click? So let's see. I'm a bit on the jump here. So I don't know if that
exists in this release. But if you right-click
and say Basic IK-- yeah, so we do have it. So there is the
basic IK transforms. You can use left node there. So this node is the
same math as a basic IK, but it's not actually
modifying a skeleton. So you can use it
to give it some data. Say, like,
this is where my root is, and it gives you
back the transforms that would need to be set. And what this means is you
can actually do multiply IKs, mix them, and create results-- art directed results for
three bones, for example, It's a good approach,
just to say, create an artificial-- a fiction chain from the
root of your three bone all the way to the vector,
solve that in some form, and then do two additional
IK solves for the segments to solve a leg like that,
for example. Or, like Jeremiah said,
use FABRIK or CCDIK. >>Jeremiah: Or Spring IK,
which is our third one. >>Helge: Yeah,
there's another one, I think. >>Jeremiah: Yeah. >>Helge: Just try them, yeah. >>Victor: Are there
plans for it to be possible to have a single
Control Rig which works on several different skeletons? >>Helge: It's a bit of a hint,
but we'll pause that as well. >>Victor: We will
pause that as well. Could you control the
keyframe slash tangents on the curve node
with other controller? >>Helge: No, no. Do you mean which curve
node are we referring to here? >>Jeremiah: Are you talking
about that evaluate curve? >>Victor: I believe so, yeah-- the one Helge had
in his demonstration. >>Jeremiah: So he just
had an evaluate curve and I think it pulled off and
created a curve this way. >>Helge: Yeah,
other than through the UI or Python, you can't drive the
contents of this curve right now procedurally. So you can't create the
curve itself procedurally. But if you want to take note,
Jeremiah, that's totally something
we could do eventually. There is another curve-- and
maybe just in case that was the question-- the Bézier
curve that you create to fit the chain-- you can totally drive
that procedurally, 100%. It is always a fixed four-point. So there is a node that's
called four-point Bézier, and you can create it-- set each point-- so the start,
end point to two tangent values,
and then drive those with controls or drive them differently. >>Jeremiah: Yeah. And that's actually exactly
what Helge is doing here. He's driving-- >>Helge: Yeah. >>Jeremiah: --those
points through controls, and then calculating the
secondary control points. >>Helge: Yeah. >>Jeremiah: But that could be
an external parameter or something else. >>Helge: Yeah. >>Victor: Let's see. Can you expose a property
like strength slash weight on a node to sequencer
along with the controls? >>Helge: So yes. We haven't shown this yet,
but the way you'd do it is you change the
type of the control. So you can create a control-- maybe,
Jeremiah, if you want to show
this in this one. It doesn't actually matter. >>Jeremiah: Yeah. Yeah,
I'll do that for sampling precision. So I'm just dragging off
the sampling precision, and I'm going to
create a parameter. >>Helge: Yeah,
don't create-- sorry, to intercept. Don't create a parameter. Actually create-- [INTERPOSING VOICES] >>Helge: --going to show. Yeah. So-- >>Jeremiah: Yeah. >>Helge: --go
into rig hierarchy. Create a new
control just at the root for demonstration purposes. And change its type
from transform to float. >>Jeremiah: Yeah. >>Helge: And that
changes a bunch of things. You can see it changes the
input fields in the Details Panel. It changes limits. And it will also change
the way the limits and the controls are drawn in 3D,
because now it's basically a slider, right? >>Jeremiah: Yeah. >>Helge: And then you
can use this to set a value, and you can animate this. Yeah. >>Jeremiah: Yeah. >>Helge: You can
disable the gizmo. So in the Details Panel,
you have settings around the gizmo,
which is a 3D representation of the control. And then that way,
it doesn't show up in 3D. But it's animatable
in Sequencer, and you have a float input. >>Jeremiah: Yeah. Additionally,
it's worth mentioning that there's different
ways to adjust your limits. And something I hadn't
mentioned earlier-- but I like to think of controls as,
essentially, a UI-- an interface to your rig--
not necessarily transforms that you're manipulating. >>Helge: Yeah. >>Jeremiah: And so that
lets you think of rigging a little differently. So in this case,
I could move this float control, just sitting somewhere in space,
and scale it up by adjusting my
gizmo transforms. Or I could create
a space above this and scale that up--
and non-uniform scale it,
because it doesn't really matter, at the end of the day,
because you're just adjusting this float value. There's a different
representation of the output value. The output value is not linked. Whereas, something like
Maya rig in the transform. Manipulating that transform in
space has direct ramifications. >>Helge: Basically,
we go in the opposite direction. The way I always explain it-- I think, though,
the metaphor to the UI is great. But the way I look at it is,
in Maya, you take 3D transforms, and you shoehorn
them into the slider by limiting other components,
removing rotation from it-- all this stuff. We do the opposite. We basically say,
we have a float, which is just a single number. And we put it in
space somewhere. But it only has
that single float. So the only thing you can
animate is that single float. And you're never going
to get into a situation where you animate the wrong
component on a transform or anything like that. >>Jeremiah: So I'm
just going to drag this into Sequencer to demonstrate. So now I have a couple
transform controls. And at the bottom here,
I have a new control, which is just a float-- >>Helge: Yeah. >>Jeremiah: --like we'd expect. >>Helge: Great. >>Victor: About complexity,
do you plan to add something like functions
or macros to Control Rig? >>Jeremiah: It's
a good question. That's all I could
say about that right now. >>Victor: I think you
tackled this briefly. But dynamic squash and
stretch was demonstrated earlier. Do physics bodies
automatically respond to those animated changes? >>Helge: It depends on where
you run the Control Rig, right? >>Jeremiah: Yeah. So that's a good question. I could see you running into issues,
because depending on how you're adjusting those bones,
if you have rigid bodies
happening afterwards, and you're scaling
those bones non-uniformly, you may end up with some
results you're not expecting, including trying
to push collision bodies into each other and
getting some sort of explosion as they're trying to
push each other away. >>Victor: Is it possible
to have something like Maya's Set Driven Key? For example,
where you have an animation curve driven by a slider value? >>Jeremiah: So if you're
talking about building poses and feeding those in,
currently, we don't have poses
implemented in Control Rig where you can store those poses,
then drive them based off of curve. But one way you
might be able to do this is using Control Rig to drive
logic and feeding out curves. So in this way, I could say,
set curve value. And as long as those
curves exist on your skeleton, I could drive this value through
some sort of gameplay logic. And in my Anim Graph,
I could use my Pose Driver, and-- let's go to target,
for example-- and I can have that pose,
which-- my pose asset that I showed
earlier where I can create poses in Sequencer. I could save some poses,
use that as a pose asset, load that in here,
and drive that via curve, either demonstrating it here,
or names, if I had names curves skeleton. And those curves would
link between Control Rig and the Pose Driver. I would need my Control Rig
to come before my Pose Driver so it can drive
those curves before-- >>Helge: One of
the testers has done that using the green bones. If you recall,
we have the green bones you can add. So if this is about
poses on multiple bones, then what Jeremiah is
suggesting is a better approach. But if this is about-- I don't know, have a pose in
the clavicle have, like, four different
locations this needs to go in, and you want to
interpolate using a float, then you could do that. You basically feed in the float. You store the expected
different clavicle transforms that you want as green bones. You just put them there as well. And these are the poses
you're going to refer to. And then the Control Rig
would blend between those using procedural logic. What we don't have
is this one-click solution where you just say,
like, store these poses and then interpolate
between them. The workflow that sits on top
of [INAUDIBLE] in Maya-- we don't have that. >>Victor: Can we create new
or assign existing control rigs to Skeletal Meshes at runtime? >>Jeremiah: You have the
ability to change out what-- not change out,
but drive what Control Rig is active in your Anim Blueprints. So that's a quick way
that you could have logic to determine what your-- you could create
branches in here to say, based off this branch,
I'm using this Control Rig A versus Control Rig B. So
that's one way you could do it. I'm sure there are many
other ways-- >>Helge: There is another way. I don't know if this is
accessible in Blueprint. I'd have to check. But the approach we're taking
for Sequencer, we have added a new custom instance
you can add to Skeletal Meshes. And we are actually doing
that at runtime in the editor as well. You could do it in
runtime in your game. So if you wanted to assign
that animation instance to your Skeletal Mesh,
that has all the facilities you need to set the Control
Rig and interact with it. That's basically what
Sequencer is using under the hood as well. And it's there as part of 4.25. So, sadly, there isn't a
big page of documentation on how to do that. But if you look at those
animation instances, there's only one in the Control
Rig's plug-in source code. That's the one we're using. And it's a new anim
instance for this purpose. What it doesn't allow you to do
is mix with other Anim Graphs, because it's a
custom anim instance. But it depends on your case. If you just want, at runtime-- have a Control Rig take
over a Skeletal Mesh, you could do it with that. >>Jeremiah: Yeah. >>Victor: We had a
few questions coming in-- if it's possible to use
Animation Insights to debug Control Rig. >>Jeremiah: Not currently. There are some profiling
tools available in Control Rig. But with Animation Insights-- made it into 4.25 and focused
first on the gameplay side. And I think we'll see
Animation Insights rolling out into the others
as we go forward. >>Victor: OK. Could you set up a filter on
your sequence based on name
control underscore? >>Jeremiah: Is that just,
can I type "control underscore" here? Is that the question? >>Victor: I think so, yeah. >>Jeremiah: I think
I can type "control." But yeah. Yeah,
if you are asking about being able to save those filters,
I don't think you can save filters,
currently. >>Victor: Yeah,
I'm not sure about-- yeah, if it's possible,
but I guess not having just a button as the other filters in,
say, the content browser. >>Jeremiah: Right. But that does point
out something. Since Control Rig
and Sequencer both can be driven through Blueprint, you can create Editor
Utility Widgets that can do some quick logic. So if you wanted to create
a character [AUDIO OUT] using Editor Utility Widgets, you could absolutely do that,
and just do some name-based
selection stuff. That would drive your
selection in viewport. >>Helge: And we've
mentioned this before, but there's actually
several people who have built stuff like
that and posting it on Twitter, where they're basically using
Blutility editor widgets together with Python. And we've said that before,
but I'll just repeat it again. Control rig is 100%
exposed to Python. So you have access
to the hierarchy. You can introspect what bones exist,
what controls exist. You can look at every node
and I directed all these things. So if you want to have
a custom widget that has buttons where you can
click and it selects all the hands, let's say, as a simple example,
you can absolutely do that. And there's examples
of people building these kind of workflows. >>Jeremiah: I'm sure we
have a few more questions, but I want to make sure
that we can talk about 4.26, because I know it will
introduce more questions. And I'll answer a few
that we have already. >>Victor: Let's go. >>Helge: Let's do it. >>Jeremiah: All right. So let me get my slide back up. All right. Looking at 4.26. Let's start at the top. Helge,
what's a Control Rig component? >>Helge: It's a component
that has a Control Rig in it. So-- >>Jeremiah: Yes. >>Helge: --basically
what this means is that you can add to any Actor,
any world. You can add a component
that hosts a Control Rig. And then you can
drive that Control Rig and map it procedurally, right? So that might be as
simple as mapping the Skeletal Mesh
the way you used to, or it might be add
Swell Static Meshes, and drive their
transforms and Control Rig, and feed data in and out. So at that point,
Control Rig becomes a function that you can use
inside of your Blueprints. And then you could do all
sorts of interesting logic with this. Do you want to cover
earlier questions, Jeremiah? There was, like,
using a Control Rig with different kind of
topologies or different kind of skeletons? >>Jeremiah: Yeah. So to add onto this,
Control Rig component allows you to define a Control
Rig and the Skeletal Mesh to drive. So you can drive
different Skeletal Meshes with differently control rigs. You would be able to--
I think Helge mentioned, you can drive controls
through Blueprints. You can drive bones
through Blueprints. And you can also re-initialize
those bones in new poses. So this would allow you
to deal with proportion changes for different Actors,
for example. Additionally, since you're
able to pull bone values out, or control values,
you're able to drive Static Meshes. This allows you
to create rigs that can now drive Static
Meshes without having to do any skinning. A great example of this would
be doing a car, for example, and you want to have
some doors open, and it's just a bunch
of Static Meshes. You can do that
through a Control Rig component [INAUDIBLE]. This also-- we'd
had some questions about retargeting between
control rigs of driving control rigs with control rigs. And this is where
you can start getting into this really cool area of
feeding values between control rigs or across Blueprints,
and gameplay logic. So-- >>Helge: We had a question
before around VR, right? And given two hand positions
and a head position transform, and things--
solving a character. Now,
using an Animation Blueprint, unless you need the features, Animation Blueprint
is too big for that. In the next release, you just
can create an Actor or your pawn and add the component to it,
add the rig to it, feed it the data, and solve
for all the other bones, right? So it becomes much simpler. Control rig becomes
part of the tool box and is no longer caught
just in the Anim Blueprints. You can use it for
other stuff at that point. >>Jeremiah: Yeah. And that way,
you don't have to rely on parameters, if you don't want to. You can just feed
the values directly in. Next, let's talk about
Setup Graph real quick. So Setup Graph is
another graph you can create in your Control Rig. It's another event. And it allows you to
initialize your rig before it runs the updated event. So in the example that I
mentioned earlier where I have, let's say, six characters
of different proportions, and I have one Control Rig,
I can re-initialize those controls to be in the
correct place for that Skel Mesh without having
to create unique control rigs for each one
of those characters. >>Helge: Yeah,
just to clarify, because he said "run before the update,"
it runs only once, right? So the Setup Graph--
it can run multiple times if you use a component,
because there, you have the capability of
saying when it should run. In general, though,
setup runs once, adjusts whatever
it needs to adjust, caches whatever it needs to cache,
and then update goes every frame. So topology changes are useful. Sometimes for your
custom IK solver, you might want to measure
the original distance between the hand and the clavicle,
right? You do that in the Setup Graph. You store it
somewhere in a variable. And later,
you use it during the update. So for simulations,
a lot of times, you want to remember
something for simulations-- when you start and so on, right? So that's what this is for. >>Jeremiah: Another great example
would be with the VR example. One way that's
commonly used so that you don't get hyperextension
of the arms is you measure your
arm length holding out your VR controllers. You could do that and
then feed those values into your Control Rig so you
never hyperextend your arms. Anything else to mention about
Control Rig component or Setup Graph, Helge? >>Helge: No, no. That's good. >>Jeremiah: All right. Looping and branching. A little self-explanatory,
but we're going to have looping for For loops. And you can do For Each. You can gather--
create collections in a lot of various ways. We're calling groups
of things collections. You can do it by name. You could do it by hierarchy. You could do it
between two bones. You could say,
give me everything that's just under the hands,
or, give me each finger and combine these collections. So if I wanted to rig a hand,
for example, I could say, give me
everything under the hand bone and treat this as fingers
as individual chains. And then feed those
collections into transformations. Over to you, Helge,
because I'm running out of words. >>Helge: Yeah, that's fine. So do you want to show
the biped rig real quick? >>Jeremiah: Yes. >>Helge: Can you switch
to just showing the biped rig? I'm not even going to
go into too many details. So with this feature,
we're adding looping and branching. Looping means doing
something multiple times, right? And branching means
switching between different things, either A or B [INAUDIBLE]. A or B switching
is really important. It's self-explanatory. It just means you can
have features in your rig that are on or not on. You can do it or not do it. And that means you
can now build things which react to context, and-- VR rig that works for
animation that works for real life. And just build it in a
way that works for both. If you just scroll
back in your rig, just look at the whole thing-- sorry, I mean the graph. That's also nice,
but I mean the graph. So in the top, right? So he's doing something there-- I think just measuring
roughly seven times, right? >>Jeremiah: Yeah. >>Helge: He's taking control,
putting on a bone. Taking control,
putting on a bone. In the next version, you just
have to build this graph once and then loop over the
things you want to do this for. So your graph
becomes much smaller. There's also another benefit. In case there was
another bone intermediate, because your rig has changed,
your skeleton has changed, now you just have to change
what the loop is running on instead of having to rebuild the graph. And the reason I said "a
bit of a hint for fingers"-- fingers are a really
good example of this, because you have a lot
of chains that are all similar and you want to do the exact
same thing 10 times or something on your character,
then looping is your friend, right? >>Jeremiah: Yeah. It gets into a place
where you're constructing runtime rigs, or animation rigs,
dynamically based off of some sort of
dynamic collection. Let's go with that. >>Helge: Yeah, let's stop there. Can you go back to the slide? >>Jeremiah: Yes. Backward compatibility--
we've have these questions a bit before. Hopefully, this livestream
illustrates that Control Rig's not going anywhere. It's currently marked
as experimental, but we're fully supporting it. And as we go forward,
we're adjusting nodes as we find better
ways to do things. So when we get into 4.26,
you may see a lot of nodes may be deprecated. But they're deprecated. They're not removed. So your rigs won't break. If you develop things
now and fingers crossed they won't break. If you create things now,
there should be a path forward. And there's going
to be the better way to do it,
but everything [AUDIO OUT]. >>Helge: Yeah,
we're using [INAUDIBLE], so we're not going
to break your content. Control rig is fully supported
and is a big initiative for us. So we are investing in it. The reason we wanted
to show you a bit of 4.26-- we usually don't do
outlooks-- is just because there have been a lot of
questions around, should I invest in this now? Should I use it for animation? Should I wait? And it's a major
initiative for animation. Go ahead. Test it. Use it for your stuff. It's a fully supported feature. >>Victor: There was a
question that came up, which was, when will Control
Rig not be experimental? >>Jeremiah: Unfortunately,
we can't answer that right now. >>Helge: No. >>Victor: Do you have time
for a couple more questions? >>Jeremiah: We've got
a slide for questions, so-- >>Victor: Look at that. >>Jeremiah: Yeah,
let's do a few more. >>Victor: Do you have any
examples of inverse kinematics, forward kinematics blending? So if you wanted to sometimes
use inverse kinematics and blend to forward
kinematics during a clip? >>Jeremiah: Oh,
we don't have an example of that up. But I tried to illustrate
quickly how you would go about doing
that by driving the weight parameter of a basic IK. Yeah. >>Helge: I'd say
maybe we can try. No promises here,
but maybe we can try. As we update--
with future versions, we update the mannequin to
start adding more features into it. And there's obvious things
that we want to do there. But maybe we can try to do
that for either the arms or the legs to illustrate it
in a simple way. I don't know. But again, no promises here. But we'll take note. >>Jeremiah: Yeah. >>Victor: Let's see. Can you build your own custom
script-based Control Rig node in Python and/or some
C-like optimized language? >>Jeremiah: You can create your
own custom rig units using C++, but you can't construct
nodes [AUDIO OUT]. >>Helge: Yes. So we focus on making it really,
really easy to build units. So I always suggest that
if you're curious about this, take a look at the source code. Under the Control Rig Source,
there is a folder called Units. And that's where all our
units are-- all our rig units are. And they're very
simple to implement. So for Anim Graph-- this is
maybe where some people are coming from-- we had, like,
two or three classes you'd have to build-- runtime version of the node,
editor version of the node, edit mode,
and a bunch of those things. And there are reasons
for that-- for Control Rig, we went for a simpler path,
which is, you build a struct. The struct properties are
the pins you'll see in the node. And then there's
one function you have to implement for
the execution of the node. But to make it really easy
for people to build those. So yes, that's C++. But we've had a bunch of
people giving us feedback saying, I have these three
transforms coming in. I have some other setting. And all I want to do is do
some custom math with that. For that,
you don't need a 101 primer for C++. You can probably
still turn that around if you're willing
to learn a little bit. And a lot of the nodes
that we have are simple like that if you look at them. So it's a good
resource for learning. You can use Python
to build your graph. So if you want to orchestrate
using Python-- say, like, create these three nodes
and wire them up this way, because that's what I need-- you can do that with Python. So while I know that's not a
yes to your original question, there's these two paths
for you to look at the stuff. >>Victor: Let's see. There was another
question about physics. We already covered that. Can you control
animation keyframes slash tangents using
other Control Rig node to drive or change-- >>Jeremiah: I think
we did that one. >>Victor: I think
we did that one? OK. >>Helge: We did that one. >>Victor: Is it possible to
make IK with constraints or limits in Control Rig? You can probably set that
up using just computer data, right? Like, clamp-- >>Jeremiah: Yeah. >>Victor: --in this range-- use a Map Range
Clamped or something. And I guess that would
be the approach, then? >>Jeremiah: Yeah, if you're-- >>Helge: Well-- yeah. Go ahead, Jeremiah. >>Jeremiah: I was
just going to say, if you're looking for a simple,
one-and-done node to do that, I don't think
we have a node specifically for that scenario. But you can do the calculations. >>Helge: Also,
one of the things we didn't show-- and we're not going to show,
I think, because we're running out of
time-- is you can set controls, right? And this seems a bit strange. So you can, for example,
have your hand control drive your bones. And then you might have
moved the control too far away from your arm, right? So you want to
constrain something else. What you can do
after you've solved the bones is set the
control again to correct it. So if you say,
the control can never go further than this place
or closer to this bone, or whatever your logic is,
then you can just set it. And in the animation experience,
what it will be like-- you'll move your control
freely until you reach that. And then the rig
will correct where the control is afterwards. It's a bit of an
interesting workflow, because that's not
possible in other systems. But because you have the
stack where you can just change things one after another--
so solve the bones using the control, and then move
the control again to limit it-- you can come up with some
interesting solutions there. >>Victor: I'm trying to catch
some of the new questions that came in here. We tackled that already. >>Helge: I have a hard stop
in seven minutes, just so-- >>Victor: All right. Well, if that's the case,
I think this is a good time to end it. I will make sure
to provide you guys with all the other questions
that came in in case there's something that-- >>Jeremiah: Sure. >>Victor: --you'd like
to tackle afterwards. If you're looking
for those answers, go ahead over to the
discussion thread, which is on the Unreal Engine forums. That's where we post all
of the upcoming livestreams and follow up afterwards as
the video lives on in perpetuity on Twitch and YouTube. With that said,
thank you guys so much for coming on. Before we leave,
I do want to take the moment to let everyone know that
we covered a lot of stuff today,
which you might know if you spent the entire stream with us. Thank you for sticking around. If there was something
during the stream that you don't remember
exactly what timestamp we were talking about,
we do add transcripts for all of our livestreams on YouTube. And so in about a week
to a week and a half, depending on how
long the streams are, we upload the transcript
in the YouTube description, and we have the
timestamps in that transcript right next to what was spoken. And so right here, right now,
this will be a transcript. And if you go look at it,
it will be, like, the 1 hour, 55 minute mark, roughly. And then you can actually
Control-F in the transcript, find the terminology you were
looking for, get the timestamp, and then go and watch
the video and find that. It's a nice little tip. I think we have transcripts
for almost at least two years of livestreams are available. So you can go check that out. It's a good little tip. We're still looking for
new count-down videos. In case you've been
following us from the beginning, we do a five-minute
count-down video that is 30 minutes of
development from your project. Go ahead and speed
that up to five minutes and send that to us
together with your logo, and you might become one
of our count-down videos. As always, we're looking for all
the projects and amazing things that you guys are
creating out there. And so make sure you
let us know on the forums if there's a good
release channel. You can also hit up our
unofficial Discord channel, Unreal Slackers. Or you can just-- you can PM us on
Discord or-- yeah, release forums are a good place,
though. And then others can come in,
and chime in, and look at what you're doing. It's all good. Make sure you follow
us on social media. If you're curious about
the upcoming livestreams, we do have the
schedule listed on Twitch. It's a little short right now,
but there's more coming there soon. And I think with that said,
thank you guys so much for taking
time out of your day to come on and talk to us
about these amazing new tools. I'm excited to see what
the community comes up with in terms of using them. Make sure you do let us know. Post in the thread
for the livestream itself or on the Work in Progress
section on the forums. That's also another
good little shout out. I think we might end up doing
another one at some point. It seems like you all are
working on some new things that we might be able
to cover in the future. How does that sound? >>Helge: Yeah, sounds great. This was really, really fun. Thank you for setting that up,
Victor, and it has been a
great experience. Looking forward to the
outstanding questions on the forums and
ongoing discussions. It has been very
rewarding to see what people come up with
on Twitter and other places. So I'm looking forward to
feedback from the community. And yes, I am totally
up for doing another one. And Jeremiah has been
forced to do it with me. >>Jeremiah: Yeah,
I'm looking forward to it. >>Victor: Awesome. With that said, I wish you
all to stay safe out there, and we will see
you again next week. Bye, everyone.