>> Thanks for coming.
My name is Ryan. I am one of the product
managers on Unreal Engine. And we are here today
to talk about in-camera visual effects
with Unreal Engine. So first off,
quick show of hands, how many people went to
the User Group event on Monday? Many people. I am just going
to reshow the video; it kind of sets the stage
for a lot of the -- or basically this
kind of scope of work. And also, you know, just to make
sure everybody has seen it, because we worked
really hard on it. So here we go. [VIDEO PLAYS] [FOOTSTEPS IN GRAVEL] [ENGINE STARTS] >> Great. Let's cut and reset. [MUSIC] >> No matter what
the Project is, the creatives always want to see
the closest representation to the final product as early on
in the creative process. >> When I first walked
on the set and the wall was up, and we started to look
through the camera, it really started to feel like I was just filming
in this actual location. >> We can track a camera's
position in space in real-time, and render its perspective so that we can
compellingly convince a camera that something else is
happening in front of it that really is not there. >> The thing to keep in mind
is that the 3D World that you see on the wall
is also a 3D scene that you can manipulate
in Unreal Engine. So this gives the filmmakers
full flexibility to make any change
they want to the scene live. >> So it is really exciting
to see that we can use the real-time lighting to not only change
the environment virtually, but also have it affect
the on-set lighting as well. This opens up kind of like a
virtual playground to shoot in. >> With a VR scout, you have
your department heads collaboratively
going into the World together. They are able to make decisions that inform the creative process
down the road. It gets them more grounded
into the scene that they are shooting, into the story
they are trying to tell. >> Because you can interactively
change the World, it brings all of
those departments together, because each one of them
has a role in how this World is portrayed at some point
along in the production. >> The director and the DP,
they are back. And they are now able to work directly
with the teams in real-time. It is a really exciting,
creative process to be part of. >>Ryan: So yeah, you might
have been watching that, and been, like,
that is pretty cool. It looks pretty easy, right?
Like, how hard could it be? Just, like, step one,
build a wall, right? Look how fast -- it does not
take any time at all. They move really, really fast,
the wall just goes up. You know, step two,
just get a nice environment like this one,
from the Quixel team. Looks pretty good,
looks pretty real, right? Then last step,
point the camera at the wall and shoot some stuff, right?
It is, like, final, right? Just party. We are good.
Nailed it! Yeah, so it turns out there is
a little bit more to that. Otherwise that would be
the end of the talk, and I would have stayed up
all night on this talk for no reason.
But we are here to help, and that is actually what
the rest of the talk is about. You can still give yourself
a high five at the end, like I will, that is for sure. So yeah, this talk centers around a project that we have been working on; the making of the video that we just showed was the genesis of this project.
It is a big collaboration between all of the
companies you see up there. They are people
that we have worked with on a lot of previous projects; it is kind of like a Voltron of virtual production, trying to build out all these
workflows, figure things out, and democratize these tools so that they are just
available in UE4, and available to everybody
in the audience to try out for themselves. So we are going to kind of focus
more on the specific LED wall shoot aspects of it. But I did want to quickly
highlight the Magnopus team. We worked really closely
with the Magnopus team on VR scouting. There is a suite of VR scouting
tools that is coming in 4.23, which is releasing next month, and then there is a second wave
of it coming in 4.24. And we are really focused
on providing filmmaker-friendly UI and UX for exploring the scene,
collaborating in VR. And Magnopus
brings a lot of expertise to the table in this area, which we are trying to leverage and fold into the product. On the Quixel side,
you know, our demo scene is from Quixel, uses their kind of
Megascans library of all of these great-looking
photogrammetry Assets. I am always blown away
by their work, and we kind of want to point
people to this training piece. It was given by Galen, who actually worked with us on stage, tuning the environment. It is a really, really
informative training piece. It is like its own talk so I encourage you
to watch this video. But sort of getting back
to the shoot aspect, there are kind of
three key components here; there is the camera
tracking aspect, so we are live-tracking
the camera, like, where it is in space,
what lens it is, essentially what it is
supposed to be seeing. There is the LED wall, and then
there is Unreal Engine itself. Right, so the live camera
tracking essentially lets the background
do what it is supposed to. We render the lens's field of view
so that anywhere you move it, you get this sort of
real parallax. When you move side to side,
the parallax happens, and you will see a shift -- you will see extra depth
when Matt, the DP, is kind of moving the camera
back and forth, and panning. So this is more than just
a flat video wall, like there has been --
a number of pieces out there, where they are showing
a flat video wall. A lot of those shots look great, but we feel like this
addition of live camera tracking adds quite a lot
to the experience. And then similarly,
on this sort of wider shot, we are both booming up
and dollying in, so you can see when you boom up,
the horizon shifts. And then as we dolly in, what the lens is actually seeing in relation to the wall becomes more of a picture-in-picture within the wall. So essentially, what is inside
of the camera view is the thing
that is always moving, and then what is
outside is static. And the reason for that
is because we are using
that static outer capture to add real-world reflections
and lighting. This here is a little bit better
example, where we have the camera
pointed on the actor, and he is moving
the camera around, but in this case, the camera is definitely
not seeing that side wall. And you can see that no matter
how he moves the camera, it is always static right there
because in a nutshell, you do not really want moving
reflections or moving lighting, because that is not what would
happen in the real world if you were just sitting there. And the end result
is like a shot like this, which we sort of stage
to showcase the reflections. But you just get nice-looking, real reflections. That is sort of all there is to it.
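To make that inner-versus-outer idea a bit more concrete, here is a rough sketch, in Python, of the kind of containment test that decides whether a point on the wall falls inside the tracked camera's frustum. This is not how nDisplay actually implements it; the coordinate convention, the function name, and the numbers are all just illustrative.

```python
import numpy as np

def is_inside_inner_frustum(point_ws, cam_pos, cam_rot, h_fov_deg, aspect):
    """Rough test: does a point on the LED wall fall inside the tracked camera's view?

    point_ws  : 3D point on the wall in stage/world space
    cam_pos   : tracked camera position in the same space
    cam_rot   : 3x3 rotation matrix taking world-space vectors into camera space
    h_fov_deg : horizontal field of view reported by the lens data
    aspect    : sensor aspect ratio (width / height)
    """
    # Express the wall point in camera space (assumed convention: +z forward,
    # +x right, +y up -- purely illustrative, not the engine's convention).
    p_cam = cam_rot @ (np.asarray(point_ws, dtype=float) - np.asarray(cam_pos, dtype=float))
    x, y, z = p_cam
    if z <= 0.0:
        return False  # behind the camera: definitely outer frustum
    half_h = np.tan(np.radians(h_fov_deg) / 2.0)
    half_v = half_h / aspect
    return abs(x / z) <= half_h and abs(y / z) <= half_v

# A point 4 m in front of a camera with a 40-degree lens: inner frustum.
print(is_inside_inner_frustum([0.5, 0.2, 4.0], [0, 0, 0], np.eye(3), 40.0, 16 / 9))
```

Everything inside that test gets the moving, tracked render; everything outside stays with the static capture used for lighting and reflections.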
So those are kind of the three main components. Then on the Unreal Engine side, we have all of these different stakeholders interacting with one another, and that all kind of
comes together with the Multiuser Editor. So this is a feature
that we have added and worked on over
the last couple of releases. And we really see it as the core of our
virtual production toolset; it is the thing that lets, as Matt was saying, all the different parties work together. They are all, technically speaking, likely to be on their own machine. And all of those things need
to interact with one another, all in the same scene. So they are all talking
to each other, and any modifications they are
making get communicated out. And that is all through
the Multiuser Editor feature. I should also add that LiveLink
helps a lot, too. We are going to go through the difference between those two things. But in a nutshell,
Multiuser and LiveLink are how we sort of take
all this data that is happening, and changes
that are happening on set, and fold them
all together into one. To start to get more specific on our particular shoot, we ended up having essentially eight machines that we needed to track, or that were doing something of interest, and it is all centralized through Multiuser. At the top of our pyramid of machines on set, there was the Profile machine, right? That is where the camera tracking data goes. So Profile has a system
that can live track the camera, and that data comes through --
that is actually their server that then broadcasts it out
to everybody else. We have the primary
Unreal Engine machine, and this is what
the stage operator is on, right, so any modifications that we are making live on set, or recording in Take Recorder, all of that is happening on the primary box. Then we have three render boxes. These three render boxes
are the things that actually put pixels
on the wall. We are going to kind of go into each one of these sections
in more depth, but this is sort of
setting the stage for all the different parts. There is another machine
that is doing a live comp -- potentially optional, but if you are maybe going to
have some green screen shots, you may also want
to do a live comp. That is kind of its own station. Then last, but not least,
there are the VR scout machines. In our case, we had two, so two
people could be doing scouts. Obviously you could have
any number of these things, if you had a whole bunch
of people collaborating. The people might be doing
set dressing, they may just be watching
what is going on in the scene. Maybe they are prepping,
planning shots that they are going
to shoot next, while other shooting
is happening. The point is that
all of these things are going to be happening at once,
all in the same scene, live. So we will start with Profile
at the top. This is a quick video of some
of Profile's work, right, so Profile has a mix of in-house
and third party tools that they use to prep things, and this is them kind of setting
up their camera for the day. When that data is transmitted
to Unreal Engine, you will get something
like this, right? So you can see
in the background, we kind of go back
and forth here, but Matt Workman is moving
the camera around, he is moving it up,
he is moving it back down. So as he moves it back down, we will kind of rack-focus
back on the UE4 screen. You can see
the camera is moving. And this is just to show
that anything that is happening in the real world in the camera is being reflected live
in Unreal Engine. So this is a live camera,
and as a result, we can do a live render
based on that camera. A few things to note about LiveLink specific to 4.23: in 4.23, LiveLink got an upgrade. It now has this concept
of roles, so that kind of helps you manage the different types of data
that are coming in. In this case, Profile's stuff
is all coming in as animation. They do a bunch of mocap work as well, so it is simpler for them to keep it all consistent. But you can create other roles, like the main one
we are seeing is tracking. There is a lot of different
camera tracking providers, so this is kind of
making it easier for them to stream in their camera
tracking data, for folks like Ncam, Mo-Sys, et cetera. We also have this concept
of presets, so presets just make it easier to manage
all of these different settings, of which there are many
that can be easy to screw up. Oftentimes you have
different settings, even for the same provider. So in our case,
we had a different LiveLink setup when we are doing
live comp with Composure than we did
for our standard preset. That is why we have
two up there; we just have
the main Profile one, and that is what was used
for the wall. Then we have the separate one
for Composure. So next we will try to make
this distinction between what LiveLink is for,
and what we use Multiuser for. This is actually kind of
a little bit of a puzzle, so I hope this diagram reads. But there are kind
of two concepts that have strengths
and weaknesses. And depending on the use case, you want to use one
versus the other. So we will start with LiveLink. The approach with LiveLink is a multicast workflow. So Profile takes
in the tracking stream, and they basically
broadcast it out to all of the other machines that are interested
in the camera data. So that means it goes out,
and we are just streaming it out to all of
the different machines. The idea here is that
you want to use LiveLink for things that have rapid,
high-volume transactions, like frame-to-frame
camera tracking data. Profile actually sends us four pieces of info for every frame.
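As a loose illustration of what multicast means here, the sketch below pushes one frame of camera data to a multicast group that every interested machine can subscribe to. This is not the actual LiveLink or Profile wire protocol, and the group address, port, and the choice of the four values (position, rotation, focal length, focus distance) are assumptions made just for the example.

```python
import json
import socket
import struct
import time

MCAST_GRP, MCAST_PORT = "239.0.0.1", 5005  # hypothetical multicast group and port

def broadcast_tracking(position, rotation, focal_length, focus_distance):
    """Send one frame of camera data to every machine listening on the group."""
    packet = json.dumps({
        "pos": position,          # XYZ in stage space
        "rot": rotation,          # pitch / yaw / roll
        "focal": focal_length,    # lens focal length
        "focus": focus_distance,  # distance to the focus plane
        "t": time.time(),
    }).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
    sock.sendto(packet, (MCAST_GRP, MCAST_PORT))
    sock.close()

# Called once per frame by the tracking server in this toy model.
broadcast_tracking([120.0, 45.0, 170.0], [0.0, 90.0, 0.0], 35.0, 250.0)
```

The point is just that the sender does not care how many machines are listening, which is why it scales nicely to rapid, per-frame data.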
Then the distinction to note is that with the nDisplay render machines, we also have to keep in mind that the different machines rendering the images to the wall also need to stay in sync. So our approach
to that is more of like a master-slave
style relationship. In our case, we designate the
first render box as the master. That is the only one
that gets the LiveLink data, and then it is
the responsibility of the first render machine
to tell the others. That is how we make sure all of the rendering machines
are in sync. This is in contrast with the Multiuser Editor, which is a different approach; it is not necessarily as live as the LiveLink approach, right? So there could be
a little bit more delay. But the advantage to this is that it is a more robust
collaboration mechanism, and it supports anything
that you are going to potentially change
in the Editor; you do not have to do
anything special, you are just working
in the Editor as normal, and it is just being sort of
broadcast to everybody else. This one is more
of a client-server model. So we have a Multiuser server
that we start up, and then all the different
machines connect to it, right? So all of the UE4 machines
connect to it, and then we still have
the same concept here where the main render box connects to the Multiuser
session, and that is receiving any changes. So say the primary box increases the exposure or moves something around, or, maybe more likely, somebody doing VR scouting is dressing the set. Maybe they move some rocks, or they move a prop around -- that gets sent
to all the other machines through the Multiuser feature. That is also how it would
be reflected on the wall. This is a kind of a slight pivot
here to more of a pipeline chat, but as you can see, this is
a lot of machines, right? It is a lot of stuff to manage,
to actually make this happen. We have invested in some tools
to do that, just because if you
really think about it, every time you needed
to start this up, or maybe something happened, or even if you just had
to do this once a day -- starting up all these things
and setting them up is kind of a pain
in the ass, right? You start out,
maybe nothing is on. So here you are starting out,
you are, like, here I am, let us get this started.
So first thing you do is, you light up the
Multiuser server, you are, like, cool.
Multiuser is up. Then you go to you
first machine, the primary, so you turn that on, connect it. At this point,
you might be thinking, hey, this is, like -- I still have a lot more
of these to go, right? You keep going. You got render, you are, like,
everybody is waiting on me. At the comp, everybody is mad.
People blow up, and then at the end,
it took so long, everybody is asleep --
like, cannot shoot. So to get around that, we invested in this pretty basic PySide app, which we just called a "stage control app." But the key here is that
it is a big launcher, right? It knows how to launch
all the machines. It does all of the different maintenance and management of lighting up a session and turning it off. At the top here, this is how you connect
all the different machines. There is a way to define
all the different machines. There is a space here to say
which scene you want to open. Our model for managing
scenes on the wall is, you basically use the launcher
to launch a specific scene, and then you sort of bring
it down to bring it back up. That way, you can make changes
on the fly much faster. Or if there is a problem, it is very easy
to start it back up again. We kind of emphasized
that route, because it is new tech, and we thought
that there might be issues with the scene going down. And now we have come to realize
that that is actually just a really nice approach to bring
your stuff up, bring it down -- it is very clear
what is happening. You always know
that you are starting up from a clean session.
Then on the side, you can define all the different
machines that you have, so these are all the guys
that we had on stage. There are also some other
permutations that we started to find as we got into more of
a production rhythm, where sometimes
people would drop out, sometimes people would drop in, they would be doing
their own stuff. So the control app lets you maintain that: you can drop people out, you can bring them back in. It has worked pretty well.
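To give a flavor of what a launcher like this is doing under the hood, here is a minimal sketch. The executable names, arguments, and the idea of launching everything locally are placeholders; the real app had its own UI and had to reach out to remote machines as well.

```python
import subprocess

# Hypothetical stage definition: every box the launcher brings up, with a
# placeholder command line per machine. A real setup would fill in its own
# paths, session flags, and a remote-launch mechanism for the other boxes.
STAGE = {
    "multiuser_server": ["UnrealMultiUserServer.exe"],
    "primary":  ["UE4Editor.exe", "MyProject.uproject", "/Game/Stage/StageMap"],
    "render_1": ["UE4Editor.exe", "MyProject.uproject", "/Game/Stage/StageMap"],
    # ... render_2, render_3, comp, vr_scout_1, vr_scout_2
}

def bring_up(scene=None):
    """Start everything in a fixed order and hand back the process handles."""
    procs = {}
    for name, cmd in STAGE.items():
        if scene and "Editor" in cmd[0]:
            cmd = cmd[:2] + [scene]  # swap in the scene chosen in the UI
        procs[name] = subprocess.Popen(cmd)
        print(f"launched {name}")
    return procs

def bring_down(procs):
    """Tear the whole session down so the next start is from a clean slate."""
    for name, proc in procs.items():
        proc.terminate()
        print(f"stopped {name}")
```

The bring-it-up, bring-it-down model described above maps directly onto those two functions.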
I have to say that when we were going into
this production cycle, it was a little bit
of an afterthought, but this app
was really important. I actually cannot imagine
doing the shoots efficiently without it. So yeah, pipeline tools --
pipeline tools are good, I guess, is the takeaway. All right, so nDisplay is
a feature in Unreal Engine that kind of comes from the live
events projector space, like a lot of CAVE setups
are done using nDisplay. Now it has been extended
to kind of work with this LED wall
shoot workflow. So the idea here
is that it is a scalable rendering setup
for multiple displays; you can have as many displays
as you want, so potentially an LED wall
can be as big as you want. You just need
more machines, right? So depending on how big it is,
you just get more machines, you buy more graphics cards
from Rick -- that is the idea here. So bringing it back
to the camera tracking aspect, and the inner frustum
versus outer frustum, like a couple of things
to note about the feature set. First off is that
we have designed it so that you can get the highest possible resolution in the inner frustum, so most of the graphics hardware and the setup as a whole is set up to make sure the inner frustum is always as big as it can be. Typically, that is at least
3K or 4K. You also have the ability
to reduce the resolution outside of the camera
to preserve scene performance. Like, again,
you are never seeing what is not in the camera. It is just there for lighting
and reflections. Oftentimes you will run that
at half-res or lower res, just to save performance, and really make sure
that the inner frustum is the highest
possible resolution you can get. Then we have also got support
for curved LED walls. So we are going to get into a little bit
more of the details on that, but that was another feature
add that came with this round. As mentioned earlier
in the diagram, there were
three render machines, so this is a little mockup
we did just to show which machines were on
which sections of the wall. There were, again,
three machines, so render one was the left wall and the left side
of the front wall, render two was the right side
of the front wall and the right wall, and then render three
was the ceiling tile, or the ceiling wall. So to put it all together,
they are all 4K outputs. As a total, they were around
12 million total pixels. So nDisplay is
a little bit tricky to set up, so we are going to go through
a few slides here to kind of guide you through those steps, and show off
some of the new features that allow this setup to go in. First step is to enable
the nDisplay Plugin. This has been around for a bit, but obviously you are going
to have to enable this guy. Second is this new plugin,
right, so the nDisplay team
has added this additional plugin called
"Warp Utils," and this is what is used
to generate the data that actually drives
the mapping of the wall. So when they were kind of
putting this together, they were talking about
their PFM-based approach. I work remote from the
rest of the team in LA, and so they were going over this
on a call, and I was trying
to play it cool, but in the meantime I was just
Googling what is a PFM file. [LAUGHTER] So a little bit of background
on the PFM file, so PFM stands for
"point float map," but I believe it can also stand
for "pixel float map." But we call it point float map,
for what it is worth. It is basically
a Mesh of the screen. So each vertex in the Mesh corresponds
to a pixel in the LED, so it is like
a really dense Mesh. And the idea here is that this produces the best quality when they do the mapping.
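The Warp Utils plugin generates these files for you, but just to demystify what is inside one, here is a sketch that writes a PFM point map for a simple flat wall, one 3D point per LED pixel, as described above. The flat-wall geometry, the axis convention, and the use of meters are assumptions made for the example.

```python
import numpy as np

def write_flat_wall_pfm(path, tiles_x, tiles_y, tile_px, tile_size_m, origin=(0.0, 0.0, 0.0)):
    """Write a PFM 'point map' for a flat LED wall: one 3D point per LED pixel.

    The wall is assumed to lie in the XZ plane (X right, Z up), starting at
    `origin` at its top-left corner, mirroring the top-left placement
    convention mentioned later. Curvature and inter-tile gaps are ignored.
    """
    width_px = tiles_x * tile_px
    height_px = tiles_y * tile_px
    pitch = tile_size_m / tile_px                 # physical spacing between LEDs

    ox, oy, oz = origin
    xs = ox + np.arange(width_px) * pitch         # left to right
    zs = oz - np.arange(height_px) * pitch        # top to bottom
    points = np.zeros((height_px, width_px, 3), dtype=np.float32)
    points[..., 0] = xs[np.newaxis, :]            # X varies per column
    points[..., 1] = oy                           # the wall sits at a fixed depth Y
    points[..., 2] = zs[:, np.newaxis]            # Z varies per row

    with open(path, "wb") as f:
        # PFM header: "PF" = three floats per pixel; negative scale = little-endian.
        f.write(b"PF\n")
        f.write(f"{width_px} {height_px}\n".encode("ascii"))
        f.write(b"-1.0\n")
        # The PFM convention stores rows bottom-to-top, hence the vertical flip.
        f.write(np.flipud(points).tobytes())

# A single 50 cm / 176 px tile, kept tiny so the file stays small.
write_flat_wall_pfm("wall_tile.pfm", tiles_x=1, tiles_y=1, tile_px=176, tile_size_m=0.5)
```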
Most LED walls have the same nature. There is a bunch of different
ways to generate the PFMs, this is kind of going through
the golden path, or what I think is the easiest
to understand, right? So when you are building a wall, you are likely to have all of the same panels. Most of them are going to have
at least the same nature. So it is typically tiles
of the same size and resolution, so part of the plugin setup
that nDisplay offers, it has a procedural generation
setup for configuring the wall. So in this case,
our panels were 50 centimeters by 50 centimeters. So this is important information
to remember for the next steps. Once that plugin is enabled, some additional Blueprint
Classes will be created. This is going to kind of guide you through this
step-by-step process. The first step is to create
an LED tile Blueprint, so you can create that,
you can name it. Then when you open it up,
there are parameters in it for you to define essentially
the size of the tile. So we measured them before, and they were 50 centimeters
by 50 centimeters. Then pixel-wise,
they were 176 by 176. This is information
that you would get from whoever you bought
the tiles from. Once you have defined your tile, the next step is to construct
a wall based on those tiles. So for that there is
another Blueprint you create, which is the LED wall
Blueprint -- kind of same setup; create it, name it.
It has got its own settings, which are pretty
straight-forward. So you are going to define the
type of tile that you are using. Oftentimes on a stage, you may
have different types of tiles for different sections
of the wall. For our wall, in our shoot we had
two different types of tiles; we had a certain type that was used for the front
curve and the side. Then we had a different setup
for the ceiling, because the ceiling tiles
needed to be lighter in order to be hung
from the ceiling, without it sort of
dropping on us. These are kind of
some of the nuances to putting these shoots
together, which has been
pretty interesting. But the idea here
is that we can support that. So you can define --
it assumes that each wall has a certain type of tile, so we define all those walls
separately and differently. In this case, we are mapping
the front curved wall, so that one was 18 by 17.
You can also tell it an angle; in our case the angle was 3.5 degrees, so that was the curvature. So you fire those in, set those up, and you are pretty much good to go there.
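Just to sanity-check those numbers, here is the quick arithmetic on that front curved wall. Note I am assuming the 3.5 degrees is per tile column, which is how I read it, and simply multiplying the figures given above.

```python
# Front curved wall as described: 18 x 17 tiles, 176 px and 50 cm per tile,
# 3.5 degrees of curve per tile column (assumption for illustration).
tiles_x, tiles_y = 18, 17
tile_px, tile_m, tile_angle_deg = 176, 0.5, 3.5

print(tiles_x * tile_px, "x", tiles_y * tile_px, "LED pixels")             # 3168 x 2992
print(tiles_x * tile_m, "m along the curve,", tiles_y * tile_m, "m tall")  # 9.0, 8.5
print(tiles_x * tile_angle_deg, "degrees of total curvature")              # 63.0
```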
Next step is to drag it into the scene. This is basically to place
it into the World, right? So you are setting up
a scene in UE4 that establishes where
the stage walls are. So you drag that in. You are going to want to set it
to the top left coordinate, so when we set up the wall, you are placing the point
in the top left. Then if you hit
the Build button, it acts as a preview,
so that when you hit Build, it is going to show you
where the wall is. You can then continue
to move it around to place it in the right spot. Then the next step
is to give it an origin. The origin is kind of what maps
all of the walls together, so in this case, in this demo,
I am only going to show one. But if you had multiple, they all need to know where they
are relative to one another and that is what
the origin is for. You can pull that guy out, you can pull it
out of the Warp Utils plugin content. You have just got
to drag that guy in there and place it and position it
to a known place. One thing to note is that we generally suggest, as a best practice, keeping it the same as your camera
tracking origin, so that way, essentially zero is zero
for both the live camera track and the stage wall space.
Eventually those are going to have to be
kind of married together, so it just makes sense to keep
them consistent in that way. Otherwise, you are going to have
to chase an offset all the time. You can use the offset,
and we actually tried it. At this point, I would really strongly recommend that you keep them the same. It just makes your life a lot easier.
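To spell out what chasing an offset means: if the tracking origin and the wall origin do not coincide, every tracked camera pose has to be re-expressed in stage space before it can be used, something like the sketch below. The translation and yaw values are made up for illustration.

```python
import numpy as np

def make_offset(translation, yaw_deg):
    """Build a 4x4 transform taking points from tracking space into stage space."""
    yaw = np.radians(yaw_deg)
    t = np.eye(4)
    t[:3, :3] = [[np.cos(yaw), -np.sin(yaw), 0.0],
                 [np.sin(yaw),  np.cos(yaw), 0.0],
                 [0.0,          0.0,         1.0]]
    t[:3, 3] = translation
    return t

# Hypothetical mismatch: the tracking origin is 2 m over and 15 degrees rotated
# from the stage/PFM origin, so every pose needs this extra hop.
TRACKING_TO_STAGE = make_offset([2.0, 0.0, 0.0], 15.0)

def camera_position_in_stage(pos_tracking):
    p = np.append(np.asarray(pos_tracking, dtype=float), 1.0)  # homogeneous point
    return (TRACKING_TO_STAGE @ p)[:3]

print(camera_position_in_stage([1.0, 0.0, 1.7]))
# With identical origins this whole step simply disappears, which is the point.
```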
All right, so I kind of wish I had cut these down a little bit, but we are kind of
in the home stretch here. Now you are ready to generate
this PFM file. You give it a path,
you tell it what the origin is, you select the wall
and generate it, and that gives you this file. Now you are set on the PFMs. And the last step is to bring
the camera into the mix, right? So you are going to need
to bring in a cine-camera, and then basically also add
this re-projection camera, which comes
from the nDisplay content, and you have just got to marry
those two guys up. So you basically need
to tell the nDisplay camera what the Unreal camera
is that you are going to use. Also note this viewport ID, because that is what
you are going to use later to map the inner frustum setup. Last, but not least, it is
another best practice to include it in your hierarchy. So in the Outliner,
you will have this kind of tree of cameras
that is the Unreal camera, the camera you get
from the camera track, and then also this guy. So you are generally going
to want to keep them together, that is my advice to you. Last steps to configuring
nDisplay is the nDisplay config file. This is a big spaghetti soup
of text. We are going to go through
it bit by bit, because I was always really
intimidated by this thing. But over the past week, I have come to understand
it much better. So I am sharing that
with you now. If you go through it section by section, it is actually
quite hierarchical. So first step is, you need
to define a cluster node. A cluster node
is a render machine. Very straight-forward; you are basically just giving the IPs for the three boxes that we established earlier. A window defines views
for each node. We render the inner camera
on each node, so this basically means
that if you pan the camera, and the in-camera view is on
any of the walls like that, it will show up. Then this is also defining
what we were describing earlier. So render machine one
is on the left side, in the front left, and then render two
is on front right -- this is where
you make that definition. You always want to be
in full-screen. This is a really long story, but just always stay
in full-screen. Really bad things happen
if you do not have this full-screen thing on. Rick was on the email thread
about that, I think. A viewport dictates
which pixels go where. This is something that you will typically work out with the people who manage the wall. It basically just comes
from the LED processors. It is part of the interplay. You only have to
set this up once, but it goes into the mapping. Then last, but not least here,
is the projection. So the projection knows where to
get the data and how to use it. This is where you kind of tie
in this PFM definition of where your walls are.
Then it kind of ties together with what the camera
is supposed to be. So you are telling it
where all these things are, and this is what
lets nDisplay know how to map the data from the PFM to the data
that it is going to render. MPCDI just means it is a PFM-based mapping.
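I will not reproduce the exact config syntax here, since it varies by engine version, but the hierarchy the file describes is roughly the structure below: cluster nodes own windows, windows own viewports, and viewports reference a projection that points at the PFM/MPCDI data. The names, IP addresses, and pixel regions are placeholders, not our actual values.

```python
# Rough shape of what an nDisplay config describes, written out as plain data.
CLUSTER = {
    "render_1": {                       # cluster node = one render machine
        "address": "192.168.1.101",     # placeholder IP
        "master": True,                 # receives LiveLink and drives sync for the others
        "window": {
            "fullscreen": True,         # always run full-screen
            "viewports": [
                # which pixels of this node's output map to which wall section
                {"name": "left_wall",  "region": (0, 0, 1920, 2160),    "projection": "pfm_left"},
                {"name": "front_left", "region": (1920, 0, 1920, 2160), "projection": "pfm_front"},
            ],
        },
    },
    # "render_2" and "render_3" follow the same pattern for the right wall and ceiling.
}

# Each projection ties a viewport back to the wall geometry generated earlier.
PROJECTIONS = {
    "pfm_left":  {"type": "mpcdi", "file": "left_wall.pfm"},
    "pfm_front": {"type": "mpcdi", "file": "front_wall.pfm"},
}
```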
Alright, I think I got through that part. We are going to switch
gears a little bit, and just talk a little bit
about scene organization. This is another thing
that we have been iterating on, and it is something that we will
keep working on with production. But all of this stuff
I just went through, it all talks about how you set
up the stage content, right? So, where are the walls,
where is the camera -- there is a whole bunch of stuff
that you are always going to have to kind of
have set up in the scene. But at the same time,
if you are doing a higher volume LED wall shoot,
you are likely going to have a whole bunch
of different environments. The way we do that
is to separate all of the stage stuff
out into a stage level. So that is separate
from the actual content. The idea is that you can have
a stage level that has always got
the stage stuff set up, and then you can
kind of keep it together just by subbing
different environments in. Otherwise, you have to keep duplicating
all of the stage stuff. This has worked
pretty well for us, but it is something that we are
going to keep thinking about, and try to make easier to manage
for the content side of things. There is some balance
there between the two. Then another tip here that
the team wanted me to pass along is that you do
always need to set all of the levels to loaded,
and that is a Multiuser feature. So if you do not
have them all loaded, then the Multiuser changes
will not be transacted properly. All right, so Genlock
is the next topic. Genlock is basically how we make sure the camera and the walls driven by Unreal Engine are all in the same phase; otherwise, different things could be refreshing or capturing at different times. That would result in footage
that does not look good. So Genlock really just keeps
everything at an even 24fps. We do that mainly through NVIDIA Quadro Sync, so the cards in the machines have this NVIDIA Quadro Sync feature. We synchronize them,
in our case, to a master clock, so there is something here
basically telling all of the different machines
to render at 24fps, because that is what we are
capturing and shooting at. This is how it ends up
laying out. We have a master clock, this is the thing
that is basically the metronome; it is saying 24fps, and it is going to tell every machine to make sure to render and/or display
at the same interval. So the master clock really only
needs to be connected to anything
that is feeding the wall. So the primary machine
and the three render machines are the ones that are connected
through to the master clock, and they are the ones that are
always hitting the same even 24 when they are feeding
to the wall. The other machines,
like the VR scout machines, it does not necessarily matter that they are rendering
at the same interval, because they are just making
content adjustments, likely. This is kind of confusing,
I have to admit. It has taken me
a really long time to wrap my head around it,
personally. But I keep coming back
to this kind of metronome idea, which is just that you want
everything on 24fps because that is what
we are shooting at. Another thing to point out here
is the comp. It may seem counterintuitive that the comp is not tied
to this master clock. It potentially can be,
but for the most part, the comp really only needs to be
in sync with the camera feed. It is receiving a video feed, so it sort of ends up
being separate from it, because it just cares about
the video feed it is getting. Hopefully this video
is going to explain the sync stuff
a little bit better. This is something
that we stare at a lot. It is just a red bouncing ball, and it is just
how we check sync. As you remember, we have a seam
in the middle with the way we have configured the wall, so render box one is on one side of the front wall and render box two is on the other. We just set up this ball
to bounce across the seam. In this case, the ball
looks like a ball, right? It is very smooth, right?
We are saying sync is good. The counterpoint is when it is not good; it looks like this. It means that the machines are not in sync, not running at the same interval, so the metronome is off -- they are not rendering the same frames. This is something
that we also stared at a lot when we were learning about all of this stuff. In this case, this would
be tearing, and tearing is bad. Especially on a case
where we are panning across, you would get tearing,
and it would not -- you would not get a final with something
like this on there. So that is just something
to be aware of. Another thing that we have done
a lot is looked at the back
of the machine. You just want to see
these green lights on the NVIDIA Quadro cards.
The green light is good. That is sync. This is latency -- another video
that I did not turn the audio off on -- sorry. But full disclosure -- there is
a little bit of latency, right? In order to get all of the data
through, there is a bit of lag. You can see as the operator pans
back and forth, it is a little bit behind. We have worked really hard
to make this as low as possible. We have made
a lot of improvements, but there is definitely
more to come. It is a big collaboration between, essentially,
every link in the chain. The LED teams
and the processors, the cards that we are using -- we want to get this
as low as possible so it is as responsive
as possible for filmmakers. But there is a little bit
of latency to be aware of. We measure it with this very
scientific process of taking a photo
with our phone, showing timecode on the wall through Unreal Engine, and the timecode readout on this little device.
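For what it is worth, once you have the two timecodes from a photo like that, turning them into a latency number is just frame math at 24fps. A small sketch, with made-up timecodes:

```python
FPS = 24  # the project frame rate everything is genlocked to

def tc_to_frames(tc: str) -> int:
    """Convert an HH:MM:SS:FF timecode string into a frame count at FPS."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def latency_ms(wall_tc: str, reference_tc: str) -> float:
    """How far the wall lags the reference readout, in milliseconds."""
    return (tc_to_frames(reference_tc) - tc_to_frames(wall_tc)) * 1000.0 / FPS

# Hypothetical reading: the wall is three frames behind the reference device.
print(latency_ms("01:02:03:09", "01:02:03:12"))  # 125.0
```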
So we take a lot of these photos. And just as a public
service announcement, be careful when you are taking
these photos on set. Oftentimes sets do not like it
when you take photos on them -- there are all these rules.
Do not be like Andy -- he took photos on set,
he got in trouble. But he is really happy about it.
We kept Andy, he is great. Anyways, what are we on now --
iPad. The iPad runs a Remote Control app; we wanted to take a lot of these creative controls and put them more directly in the hands of the DP, the production designer, the gaffer, or the director himself.
The idea here is that on stage, you may not have the actual
desktop machine very close. Oftentimes the production teams
just do not want that. Then there are a lot of frequently done operations that are maybe too spread out, or a little bit too complicated, to do in the Editor. So we kind of try to consolidate
into this concept of a Remote Control app. That uses a new plugin
called "Remote Control." This is basically a rest API that allows you
to create web style apps that adjust parameters
on anything in the scene. We tried different approaches,
like using OSC -- there is a bunch of other ways
of approaching this, but ultimately we have kind of
settled on this REST API idea, which has worked
pretty well so far. After you have enabled it,
the next step is that you have to enable this console command,
which is called "Web Control." Apparently this comes
from a Fortnite creative, but it is a step
that you can easily overlook. It is going to become a project
setting, I hope, in 4.24, but I just wanted
to point it out because I know
I got tripped up on it. When I first started using it, I forgot to turn it on and did not know why it did not work. So do not be like me. The Remote Control
is another thing that connects into this kind
of ecosystem of Multiuser. If we go back to the diagram
we had before with Multiuser, we are now adding
this Remote Control web server, which runs on the primary box. Essentially, the iPad will
communicate directly to the primary box as if somebody on
the primary machine did it. So that is how it gets into
Multiuser sessions. Anything that the filmmaker
changes on the iPad app is also reflected on the wall,
and all the other machines. So step through a couple
of examples here, this is one of the tabs on the app
which is for stage manipulation. It is one of the simpler examples, and it is for the director or the DP
to move the World, right? They have the camera set up, but the stage itself
is not moving, so your way of moving around
the World is, you can just shift
where you are in the 3D space. This is an iPad app where they
can kind of move through space, or rotate the World to get
the right angle and camera, without necessarily
moving the camera. All it is doing here
is pointing to an Actor that is in the scene,
which is this guy here. Really, all it is doing
is just moving these values, but through the iPad. This is kind of a simpler example, but it is, again, something where, through the REST API, you are just modifying these parameters on the Actor.
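As a sketch of what that looks like from the app side, the request below asks the editor on the primary box to write one property on one Actor over HTTP. The host, port, route, object path, and payload shape are placeholders; the Remote Control plugin's HTTP interface has changed across engine versions, so check the documentation for the version you are on.

```python
import requests

# Placeholder endpoint for the Remote Control web server on the primary box.
REMOTE = "http://primary-box.local:30010/remote/object/property"

def set_actor_property(object_path, prop, value):
    """Ask the editor on the primary box to change one property on one Actor."""
    payload = {
        "objectPath": object_path,        # e.g. the stage-offset Actor shown above
        "propertyName": prop,
        "propertyValue": {prop: value},
        "access": "WRITE_ACCESS",
    }
    resp = requests.put(REMOTE, json=payload, timeout=2.0)
    resp.raise_for_status()

# Hypothetical example: yaw the whole world 15 degrees, like the stage-rotation tab.
set_actor_property(
    "/Game/Maps/StageMap.StageMap:PersistentLevel.StageOffsetActor.RootComponent",
    "RelativeRotation",
    {"Pitch": 0.0, "Yaw": 15.0, "Roll": 0.0},
)
```

Because the change lands on the primary machine, it propagates through the Multiuser session to the wall and everyone else, exactly as described above.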
The result is something like this. In this case, Colin is rotating
the World here. He is on the iPad,
he is just rotating, figuring out which angle
he wants to see there. A more sophisticated example
is this sky switcher. This is something that we set up
that allows the teams to potentially
change time of day. They may dress a set
to be a certain way, and then you may shoot
different scenes there, or you can make it sunset
for the entire shoot, or for the entire day, say. You do not have to worry
about the actual sun moving. So on the top, you can kind of
switch between different skies you may have preselected and liked, and there are other parameters
there as well. These are then put through
a Blueprint. It is the same idea; instead of being a simple Actor
that we point to, in this case
we have made a Blueprint that exposes
all of the controls. Again, it is basically just like
a one-to-one match. All of the stuff that you saw on the iPad maps directly to controls here, which you can adjust through the REST API. The result
is something like this. So this is one of our demo shots
where we show just how you can swap the time of day. This is a dynamically-lit scene
in Unreal Engine; the sky is acting as an IBL. It is just showing how the
entire environment can change, but at the same time it is also
showing the kind of flexibility you can get
from the control app. A few more screens to show here,
like this is the Templates page. This is for saving different templates that you might like. At the beginning of the day,
the team might sort of plot out the different shots,
so they can move things around, make the adjustments they want,
save them, and then come back
to them later in the day. These can also be recorded,
if you are using Take Recorder. There is the ability to make
these kinds of adjustments inside and outside
of the frustum, like once we started
working with DPs, they had all these requests for the different kind of
fine-tuned controls they wanted. Through the iPad, we set it up so they have
these types of controls. Sometimes outside they want
to make it brighter to light the actor, but they do not want it
in-camera so much, or vice-versa.
It is just another example of kind of
the production efficiency we are trying to add
through an app like this. As another public service
announcement, apps like this actually generate
quite a lot of data, so you just have to be careful
from a network perspective. We recommend having -- if you are doing an iPad
like this on set, give it its own network. We have had many misfires
where we did not do that, and the laggy iPad really does not fly
very well with directors. Another thing to note is that
you should also avoid putting it on the mocap or camera
tracking network as well. These are all kind of dos
and don'ts, but the mocap
and the camera tracking, it actually generates
a lot of data. So you have got other data that needs to travel
on the same network. They can kind of get
in each other's way. I highly recommend
giving your iPad app, or any of your control
apps their own network. All right, so the last bit
we will show here is the green screen. On the green screen page, we also have the ability to get a green screen with basically one click. So for the most part, we hope
that people are able to -- I'll just go back -- so here is the video
of turning a green screen on. Just with a simple click
of the iPad, you can add green. You can also add
tracking markers, then you can also change
the density or the thickness of these guys. The idea here is that there
are still going to be some cases where you need a green screen,
like maybe the background does not look good enough
for a particular shot, or maybe for various constraints you just are not able
to get a final. The idea is that when you are
shooting with an LED wall in Unreal Engine
and camera tracking, it is still adding
a lot of efficiency in that way. If you imagine a world where you were shooting on a set and did not have a green screen up, and then somebody had to come in and install one, that certainly takes longer than one click. On a personal note,
with the tracking markers, it really sort of blows my mind that you can just
set these things up. I spent a lot of time -- I do not know if anybody
has ever done tracking markers. But it takes a really long time. Sticking all of these
tracking markers up, going on a scissor lift,
taking them down -- it was a real moment for me,
personally, when we first rolled this out, just because it would have taken
a full afternoon to put the tracking markers
up otherwise. Last bit on green screen is that
we also have some live compositing features through Composure, and in this case,
our AJA SDI input. This is just us testing, moving the camera around with just the green screen in the frustum; the idea here is that we are still getting better lighting, and we still have the environment capture on our stand-in. There is less spill, he is still
getting lit by his environment. You would hopefully get a better green screen plate out of it. Then you are also getting
this preview comp, so Composure
has a keying feature. We also have SDI out
to pass it back out. So this is another example
of one of the virtual production features of Unreal
Engine at work. We will close with an outtake. This is the shot we were
just messing around with. We mostly shot with
the physical motorcycle, but we also had a CG one. Originally,
we were going to shoot this walk-up shot to the CG
motorcycle, and sort of --
that was how we prevised it. The way that the shot
came up was, we were about to shoot it and we forgot to add the
motorcycle back into the scene. So we had rigged it, and when we spawned it, it bounced. It had suspension. So everybody thought
it was funny, it just looked really funny
on the wall. The actor was, like,
"What if I snapped my fingers," and we just triggered it.
Then everybody liked it, and we kind of workshopped
it for a couple of minutes, and that ended up
being the shot. And the takeaway for us is that it may seem small, but it is something
that we are really excited about with virtual production,
bringing these tools -- this is something
that never would have happened if it was always
all green, right? Nobody would have ever thought
to try that shot. So it was kind of an interesting
microcosm of the way people are working together
with this technology, where it was a happy accident
to some degree; that we forgot to put it in,
and then we dropped it in, and everybody thought
it was funny. Now it is this shot.
There it is. This is my dog.
I took this picture of my dog. I was trying to find a much more
organic way to get this in, but I did not have one. So this is my dog
in front of the LED screen. >> What is his name?
>>Ryan: It is Mattie. >> Mattie?
>>Ryan: It is a she. >> She.
>>Ryan: It is, you know, at the very least,
we are working on our dog influencing through
the LED walls, too. That is my talk. Thank you.
That is all I can think to say. [APPLAUSE]