Welcome to the advanced lighting session.
My name is Jerome Platteaux. I am a Lead Artist at Epic Games. I have been working at Epic
for a little over 2 years. Before that I was a CG Supervisor at ILM
(Industrial Light & Magic) in San Francisco. Today, we are going to try to figure out what is the best way
to light a project in Unreal Engine. There is a lot to cover today. I might go a little bit fast.
I have 200 slides. Bear with me.
[Laughing] A quick overview. There are technical
and artistic decisions you have to make. What is the best way to prepare
your assets and your scenes? We are going to cover the baked lighting setup, the dynamic lighting setup,
and the image based lighting setup. Then, we are going to talk about
the tricks we can use with shadows. Then we will talk about the overall
reflection solution in Unreal Engine. First of all,
a technical question. What is your target medium? Are you targeting mobile? That might be the lowest GPU power you can get. Or are you going to have
a project that is more VR oriented? This means it is going
to be the most demanding. You have 90 fps (indistinct) in stereo, and it is slightly over 2k.
It is very demanding. Or is it a project that is
a realtime cinematic, so it requires the highest
fidelity for your project? This includes the best anti-aliasing, the sharpest reflections, etc. The good thing about projects like realtime cinematics is they can be optimized per shot, so you can put all your lights and effort on what is visible in that shot. That is another advantage those types of projects have: they only need to run at 24 fps. As you know, movies are shot at 24 fps. That is a lot less demanding than a VR project at 90 fps, or a game at 60 fps; at 24 fps you have roughly 41 ms to render a frame, while at 90 fps in stereo you have about 11 ms, twice. Other projects don't need to be completely realtime. They call for an interactive workflow, like
being able to do your lighting, and being able to work in-engine,
but if the final image takes a little bit longer,
it is not a big deal. For example, that type of project
also exists in movies. ILM did a render of K-2SO on those two shots, and they were taking slightly over a millisecond, so they were working almost in realtime on the desktop. But when they were about to do the final render,
they were actually rendering in higher resolution with all of
the parameters maxed out, basically. One thing you have to think about is whether the overall game or project is static. If it is, you can bake static lighting, which is another good thing to have. For example in Fortnite, the lighting
constantly needs to be reevaluated. Things are destructible,
or you spawn new assets, so the lighting is constantly changing. Then sometimes almost nothing is moving,
but it is just too big. You have an open world,
so you cannot bake the lighting. It would take hundreds of thousands
of textures and lightmaps. In an open world like that, you would need to use dynamic lighting. What is the target hardware? Is it consumer hardware
like a console or a regular PC? For example when we did Robo Recall, we knew we had to meet the minimum specs
required by Oculus to make sure it runs. At that time, it was an
NVIDIA GTX 970. Or sometimes you have
a more specialized project. ILM did a VR project
that was only shown in museums, so they were completely
in control of the final specs used for PC. They just jammed a bunch of
hardware together. It is something that would cost
way too much for a regular consumer. Another big question when
you start a project is what renderer do you want to use? Do you want to use the deferred renderer
or the forward renderer? I will give a simple explanation of
the differences and how it works globally. The deferred renderer. The deferred renderer is the
default renderer for UE4. This is how it works.
You store the raw data into Screen Space Buffers, like I show on the top right. Once all the raw data is stored,
then you compose them back together and that gives you the final image. Lighting is evaluated
at "light" draw time. That means the deferred renderer
kind of evaluates the light all at once. The extra screen space data can be used for more
sophisticated post process effects, and a simpler combination
of lighting methods. For example, we use the extra data for Screen Space Ambient Occlusion (SSAO) and Screen Space Reflection (SSR). It is the golden path for Unreal Engine. It supports all the rendering features. The deferred renderer is
recommended for any project. Why would you use the forward renderer? It was developed with VR in mind when we were shipping Robo Recall. All the lighting and shading is evaluated at object draw time. The renderer has to evaluate every light for every object it draws, so one object with 20 lights is going to be a lot more expensive.
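To make the cost difference concrete, here is a small C++-style sketch of the two loops (my own illustration of the idea, not actual UE4 source):

```cpp
#include <vector>

struct GBufferPixel { float BaseColor[3]; float Normal[3]; float Roughness; };
struct Light { /* direction, color, shadow settings, ... */ };
struct Mesh  { /* geometry plus its material */ };

std::vector<GBufferPixel> GBuffer;                    // the screen-space buffers
void RasterizeToGBuffer(const Mesh&) {}               // store raw data per pixel
void AccumulateLight(const Light&, GBufferPixel&) {}  // shade one stored pixel
void ShadeAtDraw(const Mesh&, const std::vector<Light>&) {}

// Deferred: lighting is evaluated at "light" draw time, once per light over
// the stored screen-space data, no matter how many objects were drawn.
void RenderDeferred(const std::vector<Mesh>& Meshes, const std::vector<Light>& Lights) {
    for (const Mesh& M : Meshes) RasterizeToGBuffer(M);
    for (const Light& L : Lights)
        for (GBufferPixel& P : GBuffer) AccumulateLight(L, P);
}

// Forward: lighting is evaluated at object draw time, so one object touched
// by 20 lights pays for all 20 inside its own draw call.
void RenderForward(const std::vector<Mesh>& Meshes, const std::vector<Light>& Lights) {
    for (const Mesh& M : Meshes) ShadeAtDraw(M, Lights);
}
```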
The forward renderer can make very specific optimizations. We can turn the evaluation of roughness on or off, or use a very cheap reflection evaluation on specific shaders. You can turn that option
on and off per shader. You have another anti-aliasing option
that is great for VR, which is MSAA compared to TAA. In VR, the headset is
already slightly blurry. Using temporal anti-aliasing (TAA)
would make it even more blurry. We use MSAA because
it gives you a sharper image so you don't lose any
extra detail in your pictures. How do we prepare your assets
for good lighting? First, make sure your Materials
are physically correct. One of the most common mistakes we see is Base Color values that are slightly too dark,
or are slightly too bright. Watch out for those. We see a lot of people setting
the Metallic value to 0.5, or something not completely 0 or 1. Think about Metallic as a mask: either it is a metal object, or it is a non-metallic object. It is the same thing with Specular. We see people putting color in Specular, which creates an inconsistency in the reflection intensity. To adjust the color of the Specular value,
you adjust the Base Color value. Overall, we have two types of Materials: non-metallic, also known as
dielectrics and insulators, or the metal Material,
which is a conductor. As you can see, there are a lot of references on the internet for the most commonly used Base Color values of non-metallic Materials. You have a lot more range
on the non-metallic Material compared to the metallic Material. Pure metallic Materials tend to stay in
the same range, like very bright. To give you a global idea, charcoal, which is one of the darkest things we have in our assets, has an RGB value of 0.3. That is what I mean
by not going too dark. It is never purely black. There is always a little bit of
information in the dark. It is the same thing for
the white or bright part, it is never completely white.
You never have a value of pure 1. For example, snow has a value of around 0.9. Here is an interesting fact: this is an actual photo of the darkest substance we can find in the world. It is not completely black;
there is still a little information. This is how something that is
close to 0 looks in the real world. It looks alien.
It looks very interesting. To give you an idea of pure metal,
their RGB values tend to stay bright. When you create your asset,
try to stay physically correct. How do we prepare a scene for good lighting? When I first start the lighting,
I deactivate Auto Exposure. You want to make sure the lighting or the intensity of your picture is not
automatically adjusted by the engine. Turn off Auto Exposure. Deactivate Screen Space
Ambient Occlusion (SSAO) and Screen Space Reflection (SSR). By deactivating those,
nothing will darken your shadows and you know what it will look like.
It is the same with reflections. You only work with the
Reflection Capture Actor first. Then when you are happy with that,
turn on Screen Space Reflections. I recommend keeping
the default tonemapper. For example, I know some people who find a given mesh slightly too dark. Instead of brightening up their lights, they start to brighten the tonemapper. I would say that is a very
last minute adjustment. Don't touch the tonemapper
unless there is a really good reason. I turn off Vignette Intensity,
so nothing darkens my image. I turn off Bloom so nothing
contaminates any light on top of it. I recommend always creating a chrome sphere. You can place the chrome Sphere
in the world as a good reference. I use a pure mirror chrome Sphere. Base Color is set to 1, Metallic is set to 1,
and Roughness is set to 0. The sphere purely reflects the
Reflection Capture Actor. I know exactly how
my reflection will look in the world.
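If you prefer to build those reference spheres from code instead of by hand, a minimal sketch could look like this (the parameter names BaseColor, Metallic, and Roughness assume a simple parent Material you created with parameters of those names; they are my naming, not engine defaults):

```cpp
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

void SetupCalibrationSpheres(UStaticMeshComponent* ChromeSphere,
                             UStaticMeshComponent* GreySphere,
                             UMaterialInterface* ParentMaterial)
{
    // Pure mirror chrome: Base Color 1, Metallic 1, Roughness 0.
    // It purely reflects the Reflection Capture Actors.
    UMaterialInstanceDynamic* Chrome =
        UMaterialInstanceDynamic::Create(ParentMaterial, ChromeSphere);
    Chrome->SetVectorParameterValue("BaseColor", FLinearColor(1.f, 1.f, 1.f));
    Chrome->SetScalarParameterValue("Metallic", 1.f);
    Chrome->SetScalarParameterValue("Roughness", 0.f);
    ChromeSphere->SetMaterial(0, Chrome);

    // 50% grey reference: 0.18 in linear reads as middle grey onscreen.
    UMaterialInstanceDynamic* Grey =
        UMaterialInstanceDynamic::Create(ParentMaterial, GreySphere);
    Grey->SetVectorParameterValue("BaseColor", FLinearColor(0.18f, 0.18f, 0.18f));
    Grey->SetScalarParameterValue("Metallic", 0.f);
    GreySphere->SetMaterial(0, Grey);
}
```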
Then, I create a Grey Sphere, which is 50% Grey. What does that mean? Onscreen, it looks 50% Grey in RGB; the linear value is 0.18 (through the sRGB curve, linear 0.18 maps to roughly 0.46-0.5 on screen). If you set up your Material with Base Color set to 0.18, the Material will read as 50% Grey onscreen. Then, I set up a Directional Light
set at 3.14 (pi). Now, we can verify what I am trying to do: I am trying to create a normalized lighting setup. I see my Material is 50% Grey, or 0.18 in linear, and with the Directional Light set at 3.14, I get the same value in the Viewport. My triangle is working: I have a Material that looks the same in the world and gives me the Grey intensity I want.
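Why 3.14? With a Lambertian diffuse surface the shading math divides the Base Color by pi, so a Directional Light with an intensity of pi hitting the surface head-on cancels that division exactly (this is standard diffuse shading, not something specific to this setup):

$$L_o = \frac{\text{BaseColor}}{\pi} \cdot E \cdot \cos\theta = \frac{0.18}{\pi} \cdot \pi \cdot 1 = 0.18$$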
What are the advantages of and differences between baked and dynamic lighting? With baked lighting, you have GI (Global Illumination). You have cheap soft shadows, which are not evaluated at runtime because they are baked. Overall, it is less demanding on the GPU. Here are the cons.
You have slower iterations. Every time you change something,
you need to rebake. The lighting is static.
That basically means nothing moves. If you spawn another asset,
it would not influence the lighting. Baking also demands more memory, because all the lighting is baked into lightmaps and then stored in memory. What are the advantages of
dynamic lighting? It is "what you see is what you get"
(WYSIWYG). Basically whatever you adjust,
is what you see onscreen. You basically don't have
any surprises when you bake. The good thing is it is not static. It is completely real-time,
so you can adjust everything. The big con is there is no GI. It is too expensive right now; we don't have a way to calculate GI dynamically yet. Obviously, dynamic lighting
is more GPU demanding. What is the default light baking setup, and what is its use case? Usually, you start with the Directional Light set as Stationary. Then you do the classic Lightmass baking,
so you see the GI start appearing. It is not very clear on the screen,
but you get GI light bouncing around. Another aspect is the sky dome,
which is static too. That gives you the overall
global lighting you need. I will go over the lightmaps.
What are the lightmaps? Lightmaps store the lighting information
and are packed into textures. If you go into World Settings, you can basically see all of
your lightmaps in the bottom right. You can see each one; each lightmap is a collection of HDR textures. Additionally, it is not just storing
light intensity, but also light direction. Sometimes people want to use VRay for example,
and then bake the lighting. Then they will use the black and white
or color maps, import them into the engine, and then assume they will get the same lighting. But you need to extract more information. You need the lighting direction, because we use a combination of those two maps together with the normal map. Basically, there are lots
of things happening on the back end. Lightmap resolution. Lightmap resolution is an important consideration when you are baking the lighting. Of course the lower
the lightmap resolution, the faster it is to generate.
You have faster iteration. You also get a very soft shadow.
Globally, there is a lack of shadow definition. With a higher lightmap resolution
it is slower to generate, but you have much better
shadow definitions. I want to show you the
difference between the two. One is clearly defined
compared to the other. Some people still generate
lightmaps outside of UE4. For example, they use 3ds Max and pack their lightmap UVs there. Now we have a very good packing algorithm inside UE4, so I don't go outside of UE4 to repack those lightmaps. I recommend you stay in UE4, especially because sometimes an object
that was supposed to be far is now used very close,
and you want to repack the UVs. Do everything in UE4, because it is faster to adjust things. I recommend packing the UVs as tightly as possible, because then you don't waste
that many pixels. You don't waste pixels
between each item. That basically gives you a better
shadow definition for the same resolution. Remember the minimum lightmap
resolution always needs to be less than or equal to
the lightmap resolution. You have to set the parameters,
and usually I keep them at the same value. Lighting types. When you are baking the lighting, only two types of light are baked: Static and Stationary. One thing to note is that
the Directional Light, the Point Light,
and the Spot Light are the only ones which
emit photons, for now. The Sky Light doesn't emit photons. What does that mean? It means sky lighting only gets one bounce. For example, for this part we should have more light coming into the dark area. But because we only evaluate one bounce on the Sky Light, it stays really dark. Lightmass settings. You can find the obvious area
to improve the quality of your lighting by clicking the Build button, going to Lighting Quality, and selecting between Preview and Production. The other important area
is the World Settings. You have Static Lighting Level Scale, Num Indirect Lighting Bounces,
Indirect Lighting Quality, Indirect Lighting Smoothness,
and Volume Light Sample Placement Scale. We are going to go over each one. Num Indirect Lighting Bounces. This tells you how many times
the light is going to bounce around. This picture gives you an idea
of what it looks like with zero bounces; that means it is only direct lighting. This is the light bouncing one time, two times, three times, five times, ten times, and twenty times. As you can see, once the light bounces more than 3-5 times, you don't see much of a difference.
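A rough back-of-the-envelope (my own, not from the slides) shows why: if the average surface albedo is $\rho$, the $n$-th bounce contributes on the order of $\rho^n$ of the direct lighting, so with $\rho \approx 0.5$:

$$0.5^3 \approx 12.5\%, \qquad 0.5^5 \approx 3\%, \qquad 0.5^{10} \approx 0.1\%$$

Past a handful of bounces, the added energy drops below what you can see.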
To give you an idea, most of the time in movies the lighting doesn't bounce more than
three times because it gets very expensive. The good thing is that
UE4 doesn't take much longer to bounce
the light 100 times. People tend to have a lot of
indirect lighting bounces. I don't think it is necessary; around 3-5 is good enough. Static Lighting Level Scale. Again, this is in World Settings. What does it do? By reducing the scale, you evaluate a smaller portion of the world, which means you sample the world more densely. You start to pick up more and more detail, but you also get a lot more noise. I am going to show you
how we get rid of that. Indirect Lighting Smoothness. The idea is to smooth out that noise per object. By default, Indirect Lighting Smoothness
is set to 1. When you start to increase the value
too high, it starts to smooth too much. Then, you actually start
to get light bleeding. The corner and dark area
is bleeding into the bright area. I tend to stay between 0.7 and 1 for
the Indirect Lighting Smoothness value. Indirect Lighting Quality basically increases the number of rays in the final gather. This slide shows the build times. By default, the quality is set to 1. As you increase it, you can see it gets rid
of most of the noise. Of course, you can see
it also takes more time. Lighting Build Quality. This is what I was talking about
when you click the Build button. In terms of time, the cost ramps up quickly toward the high-end Production setting. I work a lot in Preview. Sometimes I check
if it looks good in Medium. Then over the weekend,
I might look at High and Production. Those are the rules
we try to stick with. Basically, the Static Lighting Level Scale multiplied by Indirect Lighting Quality,
should stay around 1.0. What I mean is this. If we try to get a lot of detail by setting the Static Lighting Level Scale very low, we tend to introduce a lot of noise because of the very small scale. This is where you want to increase the Indirect Lighting Quality; for example, if you drop the scale to 0.5, raising the quality to about 2 keeps the product near 1.0. If you multiply the two values together
and it is close to 1.0, your lighting should be
pretty clean overall. Another important thing to know: the most time-consuming setting is Indirect Lighting Quality. Static Lighting Level Scale does not have that much influence on the build time, Num Indirect Lighting Bounces has a pretty small impact, and Indirect Lighting Smoothness barely affects it at all. The most important setting
is Indirect Lighting Quality. Volume Light Sample Scale. When you build and bake the lighting,
everything is now static. If you spawn in an asset afterwards, it needs to pick up the lighting that was baked. For that, we use the Volume Light Samples: a bunch of little Spherical Harmonics placed in the world that are picked up by movable objects. By reducing the Static Lighting Level Scale, you increase the number of probes in the world. Lightmass Portals are something
we tend to forget, but they are very useful. This image is globally lit
by a Sky Light. In the middle, you can see there is
a small door and a small portal. The portal indicates to the renderer where to focus most of the photons and where to focus the samples. This is without the Lightmass Portal. Without it,
in the corner you start to see noise appearing. It is even more obvious on this image. You get a lot more noise. Whenever you can, try to add
some Lightmass Portals. They are good for small areas like windows
or doors where you need to direct the light. When you get to an area that is way too big, larger portals produce varying results and are not as useful. Keep your Lightmass Portals
smaller in size. Lighting scenarios. What is the idea with lighting scenarios? Sometimes you want multiple lighting
scenarios, like daytime and nighttime, but you only want to manage one environment. We used to do this by having
two environments: one for the daytime
and one for the nighttime. You would bake the two. Then depending
on which level you want to load, you load the daytime
or the nighttime. That means anytime you changed
something in the daytime, you had to change it in the nighttime.
It was just a nightmare. The good thing is now you manage
one environment and you store all the lighting information
in the Lighting Scenario Level. All of the lightmaps are stored
in the Lighting Scenario Level. You have a lightmap for the daytime and one for the nighttime, and then you load the one you need. You can only have one lighting scenario
loaded at a time. You cannot have the two and then transition
between the daytime and nighttime. It is a feature we would love to have,
but not yet. At least for Robo Recall,
having daytime and nighttime gave us more variation in terms of having
a player rediscover the environment with fresh eyes. It also introduced
a different type of gameplay. People would tend to play slightly
differently at night than in the daytime. How to do that. It is pretty simple. Save all the lighting
information in a sub-level. Right-click on the sub-level and select
Change to Lighting Scenario. This converts the sub-level
into a lighting scenario. Things to remember. Again,
only one lighting scenario at a time. All of the Reflection Capture Actors
in your level are regenerated at loading. That means for nighttime,
it will look at recreating all the Reflection Capture
Actors for the night, and then recreate them for the daytime
again at loading. The goal is to store the information and then you don't have to recapture that,
but that feature will come one day. All of the baked lighting scenarios
we have set up right now was actually used for
Paragon and Robo Recall. Let's go over the
dynamic lighting setup. A classic setup is a Directional Light
set to Movable, and a Sky Light set to Movable.
As you can see, it is overly bright. That is why you need to activate
Distance Field Ambient Occlusion. On top of that, we add
Screen Space Ambient Occlusion. It is kind of like adding a finer occlusion for the smaller details. What is a distance field? A distance field stores the distance to the nearest surface at every point, and one is stored with each asset.
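As a minimal sketch of the stored quantity, here is a signed distance function evaluated analytically for a sphere (real mesh distance fields are baked into small volume textures per asset, but the value stored is the same idea):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float Length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Negative inside the surface, zero on it, positive outside.
float SphereDistance(const Vec3& p, const Vec3& center, float radius)
{
    const Vec3 d{p.x - center.x, p.y - center.y, p.z - center.z};
    return Length(d) - radius;
}

int main()
{
    // Sample the field along a line through a unit sphere at the origin.
    for (float x = -2.f; x <= 2.f; x += 0.5f)
        std::printf("d(%+.1f, 0, 0) = %+.2f\n", x,
                    SphereDistance({x, 0.f, 0.f}, {0.f, 0.f, 0.f}, 1.f));
    return 0;
}
```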
To activate distance fields, go into the Project Settings and check Generate Mesh Distance Fields; it will generate one for each asset. Then you will start to see UE4 building
distance fields for each asset. Once built, the distance field is stored, so you don't have to rebuild it every time you open the engine; it is saved with the asset. When you put all the
distance fields together, this is what it looks like. Distance fields can be useful
for a lot of different things. Distance fields can be used by default
for generating the DFAO. DFAO is the default one
used by the Sky Light. For example, you can use those
to generate your own AO. You have a way to access those
distance fields in the Material Editor. Then, you could create a Material function and plug that into your own AO. The big difference between
those two techniques is this: the distance field gradient generated in the Material Editor uses the simplest evaluation, which is how far is this object from the other asset? You just get a gradient based on that. DFAO is more interesting, because it gives you that dome effect, like the sky occlusion you get when you go outside. A purely mathematical method just gives you distance; it is very linear. DFAO is more natural looking. Distance fields can also be used
to generate soft shadows. By adjusting the radius of the Spot Light, you can see the distance field being used to evaluate a raytraced soft shadow. Here is what it could look like. It gives you that really,
really nice shadow. But it is pretty expensive. People also use distance fields to generate some sort of
soft body deformation. It can be used for a lot of other things. Things to know about distance field. Distance fields are only for
Static Mesh Components. I would recommend avoiding huge non-uniform scaling. When you go too far,
you start to see some artifacting. When you have really large objects, they start to have really poor
distance field resolution. You might have to break them
into smaller pieces. The ambient occlusion updates are spread over multiple frames. You don't have instant AO. It is barely noticeable. You can fix the artifacting. On the left, you can see
there are black areas around the arch. You can just increase
the resolution for the distance field and it should get rid of the artifacting. The DFAO can also be tinted. If you want a sort of bounce effect
coming from the ground, you can make it slightly bright and tint
it to the overall color of your scene. It's not very obvious on this monitor, but I swear it is brighter.
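From code, that tint lives on the Sky Light. A sketch (OcclusionTint is a real Sky Light property; the color value is just my example):

```cpp
#include "Components/SkyLightComponent.h"

// Tint the distance field AO so occluded areas read like warm bounce light
// from the ground instead of flat black.
void TintSkyOcclusion(USkyLightComponent* SkyLight)
{
    SkyLight->SetOcclusionTint(FColor(90, 70, 50)); // slightly bright, warm
}
```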
This dynamic lighting setup was actually used for the Fortnite game and also the trailer
we released a few months ago. Let's go over Image Based Lighting (IBL). This is a project we did 2-3 months ago to show off the new Composure Editor in UE4. Those are the plates
we filmed at Epic headquarters. The idea was to integrate a
CG Character on top of it with interactive shadows, based on the lighting
captured in the scene. How do we do that?
First, we need to set up the scene. Make sure you have a plane
to cast and receive the shadows. Then, we have the back plate
that was filmed. We did some tracking and reconstructed the camera. We imported all the cameras and used the new Composure Editor in UE4. The Composure Editor is kind of like a Nuke inside UE4. It takes whatever back plate you have
and you can compose CG on top of it, or the other way around. Globally there is one
Directional Light, and a Sky Light with
an HDRI plugged into it. The Composure Editor
uses a lot of Render Targets. You have the Render Target
for the background, You have the Render Target
for the Character. There is another Render Target
for the shadow, and then we start to
compose it all together. The shadow is used to
color-correct the back plate. By using more and more
Render Targets, you start to fill up
the memory really quickly. We didn't have any more Render Targets to spare to generate the shadows and extra AO for the Character. Instead, we just use a Blueprint
that follows the Character's footsteps and darkens those areas. It is very subtle,
but you will need those. On the left is the shot,
the back plate. On the right is the
HDRI that was captured. Make sure those match perfectly. Make sure all the values for
the darks are correct. Since the Character was going from the dark side to the sunny side, we captured two HDRIs: one on the sunny side, and one on the dark side. Make sure those are aligned. Then the idea is to lerp between the two as the
Character goes from the shadow to the sun. How do we do that?
We actually have a way in Blueprint to track the position
of the Character within the world. Then, we also define
where the shadow happens. If the Character is on this side of the shadow line, we activate that lerp and transition between the two.
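In C++ the same idea could look like this (a sketch of that Blueprint logic; the function and parameter names are mine, and it assumes one Sky Light per captured HDRI):

```cpp
#include "Components/SkyLightComponent.h"
#include "GameFramework/Character.h"

// Cross-fade two Sky Lights as the Character crosses the shadow line,
// modeled here as a plane at X = ShadowEdgeX with a soft blend region.
void UpdateHDRILerp(const ACharacter* Character,
                    USkyLightComponent* SunnySky,
                    USkyLightComponent* ShadowSky,
                    float ShadowEdgeX, float BlendWidth)
{
    const float X = Character->GetActorLocation().X;
    const float Alpha = FMath::Clamp((X - ShadowEdgeX) / BlendWidth, 0.f, 1.f);

    SunnySky->SetIntensity(Alpha);         // full strength on the sunny side
    ShadowSky->SetIntensity(1.f - Alpha);  // full strength on the dark side
}
```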
That technique was used for The Human Race demo, except they didn't have to lerp
between two HDRI. The good thing was
they had that cart. The cart was driving around the roads capturing HDRIs, so they had a whole sequence of HDRIs. Instead of loading two HDRIs, they load a sequence and constantly update the lighting. Then, they changed the HDRI sequence for each shot. Shadow maps. How do they work?
Just a quick recap. How do we go from
the left side to the right side, knowing that we only have one
Directional Light coming from the left? From the perspective of the
Directional Light, we capture a depth map. The idea is to give you
a kind of Z gradient. We re-project the depth map,
and then we do a depth test. That gives you an idea if part of
the asset is in shadow or not. I am talking about this because the resolution of your shadow map is very important. You go from something that
could be very pixelated or way too smooth, to something that
is really nice and sharp. Of course, that comes with a cost. One thing that is important
with shadow maps is the bias. When you have a pixel
that is kind of covering part of the asset in the shadow
or in the bright area at the same time, you end up with those kinds of artifacts. You fix that by pushing the shadow map
inside the object. Then, you start to get rid
of the artifacts. Right now, what we have
on the left is a lot of artifacts. By pushing the shadow
into the ground, we lose the weird artifacts. It comes with a cost.
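Here is the depth test and the bias together in one small sketch (my own illustration of the technique, not engine code):

```cpp
// Stand-in for the depth map captured from the light's point of view:
// ShadowMap(u, v) = depth of the closest surface the light sees there.
float SampleShadowMap(float /*U*/, float /*V*/) { return 1.0f; } // stub

bool IsInShadow(float LightSpaceU, float LightSpaceV,
                float DepthFromLight, float Bias)
{
    // The depth test: the pixel is in shadow if something sits between it
    // and the light. Subtracting the bias pushes the comparison "into" the
    // object so a surface does not shadow itself; too large a bias detaches
    // the shadow and light starts leaking under small objects.
    return DepthFromLight - Bias > SampleShadowMap(LightSpaceU, LightSpaceV);
}
```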
Sometimes it is problematic. For example,
when you have a bias that is too big for a smaller asset,
then you start to get the light leaking you see
on the left on the Character. When you light something that is small, make sure the bias is
as small as possible. When you have to light something really big,
you have to increase the bias. But if your Character is using that same shadow map, we start to get some light leaking in the face. This is a trade-off that you have to figure out. If you are focusing on the Character, make sure you have a specific light for the Character with the smallest bias possible. Cascaded shadows. We use these a lot, of course. What is the idea behind
cascade shadowing? When you have a large surface
to cover and create shadows, you might use one large shadow map
for the whole surface, but then the shadows start to be pixelated. The idea is to cut the large area into individual pieces: you split the camera frustum into pieces. This allows a higher resolution for the nearest part of the shadow; then it goes lower and lower over the distance.
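A sketch of how those frustum splits might be chosen, blending uniform and logarithmic spacing (a common scheme and my own illustration; the engine drives its distribution with its own settings):

```cpp
#include <cmath>
#include <cstdio>

// Pick the far distance of each cascade between NearZ and FarZ. Lambda = 0
// gives equal slices; Lambda = 1 packs the resolution close to the camera.
void ComputeCascadeSplits(float NearZ, float FarZ, int NumCascades, float Lambda)
{
    for (int i = 1; i <= NumCascades; ++i) {
        const float T = static_cast<float>(i) / NumCascades;
        const float Uniform     = NearZ + (FarZ - NearZ) * T;
        const float Logarithmic = NearZ * std::pow(FarZ / NearZ, T);
        std::printf("cascade %d ends at %.0f\n",
                    i, Lambda * Logarithmic + (1.f - Lambda) * Uniform);
    }
}

int main() { ComputeCascadeSplits(10.f, 20000.f, 3, 0.7f); }
```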
How does it look in realtime? The red part is the high-resolution cascade, which focuses on the close area. Then, it goes to a lower resolution in the back, covering a greater distance. The problem with a
Cascaded Shadow map is when you pull back and
sometimes you start to see those very weird artifacts where it goes from a high-resolution
Cascaded Shadow map to another map that is still about the
same resolution, but covers a bigger surface. It looks low-resolution. A way to get rid of that is to increase the number of Cascaded
Shadow maps between the two. Or just increase the overall resolution,
if you can. Then you have the open world situation
where even with a Cascaded Shadow map, it is still way too low-resolution. You might want to actually use raytraced distance field shadows
in that case. Directional Light stationary area
shadows. What does that mean? By default, Directional Light shadows are extremely sharp. By enabling the Area Shadows for Stationary Light property, broader shadows are baked into the lightmaps. What does this allow us to do? Sometimes you want overcast lighting, where the source of the light is actually very broad. This is where you can change the Light Source Angle. On the left for example, the Light Source Angle is set to 1.0. With the Light Source Angle
set to 10.0 it tends to broaden the shadow
like on an overcast day. Capsule Shadows. We actually use a physics asset, the little capsules, as an approximation of the Character to support very soft shadowing. Those are really great.
We use Capsule Shadows a lot in VR because they create some
very cheap shadows. The biggest problem with Capsule Shadows is when you have a Character that needs fine self-shadowing, like the nose shadowing the rest of the face. You lose all those details, because you are using a large capsule approximation. If that is important to you, you may not
want to use Capsule Shadows. What matters most is the Character casting a shadow on the ground. If you are optimizing very aggressively, I would recommend switching to Capsule Shadows. Capsule Indirect Shadows
are also useful. Sometimes the Character is actually
standing in the shadow and you need the AO (ambient occlusion). This is where Capsule Shadows
are very useful. It uses the probes we saw earlier
in the baked lighting. Those are Spherical Harmonics; they give an indication of where the shadow should be projected onto the world. Raytraced Contact Shadows
are a pretty new feature. Going back to the bias,
sometimes you need a pretty large bias, but you still want nice shadows
on smaller objects. Now, we actually have a property
called Contact Shadow Length. With a small value,
you do sort of raytracing, and then you get those details on smaller assets. It is pretty useful, though I think it is a bit expensive.
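From C++ it is a single property on the light (ContactShadowLength is the real property; the value is just an illustration):

```cpp
#include "Components/LightComponent.h"

// Enable screen-space raytraced contact shadows on a light. A small length
// (a fraction of the screen) is enough to catch fine contact detail on
// small assets, even when the regular shadow bias has to stay large.
void EnableContactShadows(ULightComponent* Light)
{
    Light->ContactShadowLength = 0.05f;
    Light->MarkRenderStateDirty(); // push the change to the renderer
}
```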
Reflections. The High Resolution Reflection Capture. This is also something new that we had to create. By default, I think the Reflection
Capture Resolution is 64 or 128. As you can tell on the left,
there are some very blurry reflections. By increasing Reflection Capture Resolution,
you are increasing the memory a lot. Be careful when you change the Reflection
Capture Resolution. It is not free. But the reflection is a lot sharper.
Why is it useful? We introduced it with the demo
we made with McLaren where we needed a very nice
and sharp reflection on the cars. Planar Reflections. These are really useful. It actually creates a Planar
Reflection Actor and then it reflects everything in the world based on the Planar Reflection.
It is actually mirroring it. It doesn't work for anything that has hills or curvature; it only works on purely planar surfaces. The good thing is
you can optimize those. You can tell it not to
reflect the entire world, and just reflect those specific assets. It can get really expensive.
What you want to do is make sure the overall reflection looks
good with the Reflection Capture Actor. When you want extra details,
add the Planar Reflections. Then, you have a list of assets on
the right and choose which ones you want. Choose what makes the most sense,
and makes the most change in the picture. Another way to optimize is to have several Planar Reflections. This worked well for us in Robo Recall because we teleport: by teleporting to another place, we turn off one reflection and turn on another, for example when switching from one room to another.
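A sketch of that toggle (my own helper; it assumes one Planar Reflection Actor per room and simply hides the inactive one):

```cpp
#include "Engine/PlanarReflection.h"

// On teleport, disable the planar reflection of the room we left and
// enable the one for the room we arrived in.
void SwapPlanarReflections(APlanarReflection* OldRoom, APlanarReflection* NewRoom)
{
    if (OldRoom) OldRoom->GetRootComponent()->SetVisibility(false, /*bPropagate*/ true);
    if (NewRoom) NewRoom->GetRootComponent()->SetVisibility(true,  /*bPropagate*/ true);
}
```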
The last thing is Bent Normal maps, which are a new feature. Bent Normal maps act as a reflection occlusion. If you bake your Character in a T-pose, you get a Bent Normal map that will start to occlude the reflection. It avoids light leaking from the Reflection Capture Actor. I am running out of time, but I would recommend diving into
Bent Normal maps because they are very useful. What is coming up for new
features in Unreal Engine? We are going to have a new way to use the Volume Light Samples; they will be a lot more accurate. We are also going to have a Sky Light that will use a Cube map instead of a
Spherical Harmonic to generate the lighting. You are going to have a lot more
control over the direction of your light. For a sunset, you are going to feel the sunset on your Character when it's baked. Finally, we have multi-bounce
on the Sky Light, on emissive objects, and on a lot more. Take a look at 4.18.
It is going to be awesome. Thank you.
That's it. [Audience Applause]