♪ [MUSIC] ♪ [NARRATOR] Welcome to Unite Now
where we bring Unity to you wherever you are. [PIERRE] Hello everyone,
and welcome to my session about high fidelity graphics
for games with the High Definition Render
Pipeline (HDRP) for Unity. During the next hour,
we're going to take a deep dive into HDRP’s rendering
and lighting settings. Hopefully, after watching
this session, you will have a much better understanding
of the key features of HDRP, and you should be able
to increase the visual quality in your game Projects. My name is Pierre Yves Donzallaz. I'm a Senior Rendering artist
at Unity. I specialize in lighting,
rendering, user interface, and tools design for art workflows. I've worked as a lighting artist
on several productions such as the <i>Crysis</i> franchise, <i>Ryse,
Grand Theft Auto 5,</i> and more recently
<i>Red Dead Redemption 2.</i> What can you expect
from this session? I will present to you
several key features of HDRP. I will let you know how to install
it, and tune its main settings. I will then show you
how to use the Volume system. HDRP focuses a lot
on physically-correct setups, so I will spend a lot
of time emphasizing photographic concepts
for beginners. Therefore, we will also have
a fairly long segment about exposure and lighting setups. We’ll cover lights, shadows,
reflections, and fog. We’ll end the session talking
about the most important post effects, like Tone Mapping
and depth of field. You might recognize
the environment I used: it is based on the Amalienborg
Palace in Copenhagen, and it was recently used
for a demo Unity produced in collaboration with Lexus. It was a showcase
of virtual production for automotive scenarios. The entire environment
is modeled in 3D, and it's a good testing ground to showcase
the physically-based lighting, the shadowing, the volumetrics,
and the post-processing effects that HDRP has to offer. But first, let’s see why you would
want to use HDRP in the first place. The High Definition Render Pipeline (HDRP)
has been designed to produce high-quality visuals, mostly
for higher-end platforms such as desktop PC and consoles. It supports all the advanced
rendering features you can expect from a modern rendering pipeline,
and then some. It also has one of the most
convenient post-processing pipelines out there,
the lighting can be set in a physically-correct way,
and the shadowing quality can be pushed really far. HDRP also offers volumetric fog,
so that you can better simulate thick atmospheres. You might think HDRP is tailored
for photo-realistic rendering, however, nothing prevents you
from creating a very stylized game, as long as you’re prepared
to dig into Unity’s Shader Graph to create a more distinct look. Let’s see how you can install HDRP
in a couple of clicks. First, you will want to use
the Unity Hub to download Unity. Then, you can create a new Project
and pick a template. You could pick the HDRP template
and you’d be good to go, however, for this demonstration,
I’ll show you how to set up HDRP from scratch
with the 3D template. Once Unity is opened,
open the Package Manager, and search for High Definition
Render Pipeline. Make sure you select the correct item in the list
and then click the Install button. The installation process
will take a couple of minutes, to import the package
and compile the Assets. In the end, a nifty tool called
the HDRP Wizard will pop up. If it detects problems
with the setup of your Project that might impact HDRP,
it will let you know. Simply hit the “Fix All” button,
and accept any request from the tool, and
your Project will be good to go. To tune the HDRP graphics settings,
go to Edit > Project Settings. In the Graphics section, you can
see that an HDRP Asset is already assigned
to your Project. If this field is empty, you can
either look for the missing Asset or run the HDRP Wizard
to fix this issue. This Asset is saved on disk
and contains all the HDRP Quality settings for your Project. You might have several HDRP Assets,
if you wish to support multiple platforms, like consoles
and PC, for instance, and have different
settings for each. You can easily swap this Asset
in the Editor or at runtime.
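If you want to perform that swap from a script, a minimal sketch could look like the following; the consoleQualityAsset field is only an illustrative name, and it assumes the QualitySettings.renderPipeline override available in recent Unity versions.

using UnityEngine;
using UnityEngine.Rendering;

public class HDRPAssetSwitcher : MonoBehaviour
{
    // Illustrative field: assign an alternative HDRP Asset (for example, a console profile) in the Inspector.
    public RenderPipelineAsset consoleQualityAsset;

    public void UseConsoleSettings()
    {
        // Overrides the render pipeline asset for the currently active quality level at runtime.
        QualitySettings.renderPipeline = consoleQualityAsset;
    }
}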
Let’s look at the many settings that can be tuned for HDRP. Let’s head to Quality > HDRP. You will be presented with a long list of features that you can enable globally
for your Project, and Quality settings that you can set
to fit your Project’s requirements. Keep in mind that whenever
you enable something, HDRP will allocate resources
to this feature, like memory and processing power. So you need to be careful
and not enable features you have no need for, otherwise, you might pay a performance
cost unnecessarily. An important setting is
the Lit Shader Mode, which lets you choose between Forward
and Deferred. The Lit Shader is normally applied
to most opaque objects. In terms of visual quality,
there are no major differences, although in theory,
Forward should give you the best shading precision,
but at a cost. Therefore, you might want
to stick to Deferred for games and use Forward for applications where
performance isn’t always a concern. What’s great about our system
is that you can select both modes. This way, if you head
to the HDRP Default settings, you can control independently
how your Cameras, baked reflection probes,
and real-time reflection probes will render the Lit Shader. This could be helpful in certain
situations where one mode might be a lot more efficient
than the other one. For example, you could rely
on Deferred for the Cameras and the real-time reflections,
but use Forward with Multi-Sampling Anti-Aliasing
for the baked reflections, to increase their quality. With HDRP, you even
get custom settings for each of your Cameras
and probe objects. This way, you are able to override
the Lit Shader mode for these objects and switch
between Deferred and Forward, as well as toggling
many other rendering features. I didn’t mention yet that one
major drawback of Deferred is the lack of support
for Multi-Sampling Anti-Aliasing. If you select Deferred, the MSAA within Forward checkbox
will be unselectable. But don’t worry,
this can be easily mitigated by using another type
of anti-aliasing. In a minute, we'll cover all types
of anti-aliasing offered by HDRP. Finally, another way to control
the rendering settings for your Cameras is to use
the Render Pipeline Debug window. There you’ll be able to toggle
features on and off, for all the active Cameras,
including the Scene Camera. It is particularly useful
when debugging and during your optimization pass. You’ll be able to easily track down
the culprit. For instance, you can disable the
rendering of transparent objects, the post-processing, or the fog, including
its volumetric effects, and don’t forget
about the Reset button, in case you want to quickly revert
all the changes you’ve applied to the Cameras. And remember, this window doesn’t
affect any of your Scene data, those changes are not saved. Previously, I’ve been
talking briefly about Multi-Sampling Anti-Aliasing,
or MSAA for short. Let’s go into more detail. The purpose of MSAA is to remove
the jagged edges along polygons. It is an expensive effect
in terms of memory consumption, bandwidth, and processing power.
So it must be used with care. I’ve also mentioned Deferred Mode
doesn’t offer MSAA. So let’s have a look
at the alternatives. In your Camera Inspector,
you will find several post-processing
anti-aliasing techniques. As their names suggest,
they are run during the post-processing phase
of the rendering. The most affordable one is the Fast Approximate
Anti-Aliasing or FXAA. It detects edges in the image
and blurs them, no matter if they belong
to Geometries or Textures. It has a very low impact
on performance, but it often produces
blurry results. It is a good solution, however,
for lower-end platforms that can’t always run
more expensive solutions. Next is SMAA, which is a slightly
more expensive solution that focuses on reducing
aliasing artifacts while preserving the sharpness
of the image. It doesn’t always produce
the best results, however, as it tends to favor sharpness
and can miss certain jaggies. Finally, Temporal AA or TAA is one
of the more advanced solutions that can combat
most types of aliasing. It does require motion vectors
to be enabled in your HDRP Settings, however. These are required anyway
for object motion blur, and they can be used
to enhance other effects like Screen Space Ambient Occlusion, so you will probably want
to keep them turned on in any case. You might observe some ghosting
around fast-moving objects or trailing artifacts on new pixels that don’t have
a motion history yet. However, because the image
is moving, it is fairly difficult to notice,
except if you pause the rendering. TAA is also a great solution
to prevent specular aliasing, like on this door which uses
a normal map to create the illusion of geometric details. In comparison, MSAA
is totally unable to remove this type of aliasing,
because MSAA only works on polygons. HDRP even allows you to customize
the sharpness of the effect, depending on your tastes
or artistic requirements. You can opt for a softer image
or a sharper one. Let’s have a quick recap: here is the Scene at 5x zoom
with no anti-aliasing. Then, with MSAA; FXAA: noticeably blurrier; SMAA: clearly sharper; And TAA: a good compromise. TAA is very popular nowadays as it
provides a great mix of performance and visual quality,
with a few caveats, however. Also, note that you can combine
MSAA with one of these techniques, for instance, you could use
2x MSAA with FXAA, in case TAA isn’t suitable
for your Project.
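You can also drive this choice per Camera from a script. The following is only a hedged sketch: it assumes the HDAdditionalCameraData component and its antialiasing property, whose exact names can vary between HDRP versions.

using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

[RequireComponent(typeof(HDAdditionalCameraData))]
public class CameraAntiAliasingSetup : MonoBehaviour
{
    void Start()
    {
        // Every HDRP Camera carries an HDAdditionalCameraData component that exposes
        // the post-process anti-aliasing mode shown in the Inspector.
        var hdCameraData = GetComponent<HDAdditionalCameraData>();
        hdCameraData.antialiasing = HDAdditionalCameraData.AntialiasingMode.TemporalAntialiasing;
    }
}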
Now, let’s have a look at our Scene in more detail. I have a very simple setup. I have two main GameObjects,
one for the environment, with all the geometries,
props, and decals, and one for the Lighting. I have 3 time-of-day options:
afternoon, evening, and night. I also have a fourth Overcast
Light setup that I only used for baking and testing assets
in neutral lighting. In HDRP, many settings are set
with a Volume system. We’ll get to this system
in detail in a few minutes. As you can see, if I disable
all my lighting objects, HDRP still automatically applies
some basic settings to my Scenes. This is to make sure I get
at least a sky, some shadows, and a few basic
post-processing effects. If I open my Project Settings and
check the HDRP Default settings, I can see which settings
are applied by default. If you’re not happy with these
values, you can, of course, use your own, based
on your Project's requirements. Just make sure you tick
the override checkbox on the left to activate the override field. The whole idea with these settings
is that you can tune the default values for your Project
in this window, and then you can override
them with volumes that you place in your Scene to customize the lighting
for specific parts of your level only. For example, if I disable
the sky overrides on my volume, it will revert
to the default sky value I just showed you
in the Project Settings. If I re-enable my sky overrides,
the default settings of the Project will be overridden by mine. Let’s dig deeper
in the Volume system. Adding a volume is very simple,
you can use one of the provided shortcuts that
will have pre-applied components. Instead, I’ll just create
a volume from scratch, so you can understand
the system better. Make sure you rename
your object properly. I’m going to assign
a volume component to it. This volume can be global,
which means it has infinite bounds,
or it can be local. Let’s keep it global for now, so
that it affects the entire level, no matter where my Camera is. For now, the volume doesn’t
do anything, that’s because it doesn’t have any profile
assigned. So let’s add one. These Assets are stored on disk,
which gives you a lot of flexibility to share them between volumes. The fun part
can finally start, I can now add overrides to my volume. The list offers many overrides
for fog, skies, post-processing effects, etc. We’ll go through these
in detail a bit later. I’m going to add
a color adjustment effect first, to tune the saturation. I want to create a very high
contrast black and white style. So I’ll need to add
a color-grading effect called Shadows Midtones Highlights. Let’s drop the shadows radically,
lift the highlights slightly, and reduce the mid-tones to enhance
the dark tones even further. Remember that the volume component
has a Weight value that you can use to fade
the effect in and out, via Script or Timeline,
for instance.
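Because the Weight value is a plain float on the Volume component, fading it from a script is straightforward. Here is a minimal sketch; the class name, the two-second duration, and fading in on Start are purely illustrative choices.

using System.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public class VolumeFader : MonoBehaviour
{
    public Volume volume;         // the Volume carrying the black and white grading profile
    public float fadeDuration = 2f;

    void Start()
    {
        StartCoroutine(FadeIn());
    }

    // Fades the Volume's contribution from 0 to 1 over fadeDuration seconds.
    IEnumerator FadeIn()
    {
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            volume.weight = Mathf.Clamp01(t / fadeDuration);
            yield return null;
        }
        volume.weight = 1f;
    }
}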
Now, let’s switch the mode to Local. As you can see,
the effect disappears. That’s because the volume
doesn’t yet have a physical size. So let’s add
a Box Collider to fix this. Then I can use the very
practical handles to resize the volume to fit my needs. Then, as I move the Camera
inside the volume, you can see that the effect
will activate immediately as I cross the boundary
of the volume. And my grading disappears,
as soon as I leave the box. Obviously, those harsh transitions
are rarely wanted, that’s why we provide
a blend distance. This way, you can have
a very smooth transition entering and leaving the volume. There can sometimes be confusion
about the global volumes. You can have several global volumes
active at any given time. However, if you’re not careful,
you might end up with some conflicts between them. I’m now creating 2 global volumes. The first one will turn on
the volumetric fog, whereas the other one
will keep it disabled. So you can certainly understand
that those 2 volumes are going to compete
over the same parameter. How does HDRP decide which one
will win over the other one? As you can see,
the last active volume will take precedence
over the other one. This isn’t a desirable behavior,
as it isn’t predictable. This is why you have the ability
to set a priority for each volume. The volume with the highest
priority will win, whenever there is a conflict,
like in this example with those 2 volumes fighting
over the volumetric fog. You can obviously add these volume
components to any GameObjects. Adding them to Cameras,
with a local Volume, is an especially convenient way
to not pollute your Scene view
with global Volume settings. I’d recommend you avoid using
global Volumes unnecessarily, especially if a set of effects only
applies to a certain Camera or area. As you can see in this example,
my game Camera uses a very dense fog
via a local fog volume, and it won’t impact
my Scene View Camera, as long as it doesn’t
enter this local Volume. If I used a global Volume
for that Camera, my Scene Camera would also
be constantly fogged out. Finally, you need to use
a Visual Environment override when you want to change the type
of sky, like PBR Sky or HDRI Sky. Only then will the overrides
for the sky you chose be interpreted correctly
by the Volume System. Now, let’s talk about
the fundamentals of photography, and by extension rendering. Exposure is something beginners
have a lot of trouble understanding. Sometimes the image turns
entirely black or it becomes too bright when pointing the Camera
at dark parts of the Scene. Understanding exposure is therefore
crucial to the lighting process, if you want to use physically
based lighting especially. So, what's exposure?
It’s actually very simple. It’s the amount of light that will
affect the sensor of the Camera. A high exposure value
means a lot of light, whereas a very low exposure value
means a low amount of light. For a sunny Scene,
your exposure might be at 14, whereas for a moonlit Scene,
it’ll be around -1. Let’s use the concept
of the Exposure Triangle, very common in photography. But we won’t go into too much
detail because it can get complicated very quickly if you’ve
never dealt with photography before. When your Camera takes a photo,
you or the Camera will need to play
with 3 main factors. In no particular order,
we have the Sensitivity: this defines how reactive
the sensor is to the light. The more sensitive,
the brighter the image. Then we have the Shutter Speed: this defines how long the Camera
will take to absorb the light. This is controlled by a mechanical
or electronic shutter. And finally, we have the aperture: this controls how wide
the opening of the lens will be. This is controlled by the aperture
blades inside the lens. These blades open to bring
more light onto the sensor, and close to reduce
the amount of light. Exactly like the iris in your eyes. Whenever you take a photo,
each of those 3 parameters can be increased and decreased,
to reach the desired exposure. Here’s the formula that
will give you the exposure value,
based on the settings you or the Camera picked.
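For reference, the standard photographic formula is EV100 = log2(N² / t) − log2(ISO / 100), where N is the f-number, t is the shutter time in seconds, and ISO is the sensitivity. Here is that formula as a small C# sketch, checked against the photos used in this session.

using UnityEngine;

public static class ExposureMath
{
    // EV100 = log2(N^2 / t) - log2(ISO / 100)
    // aperture: the f-number N, shutterTime: exposure time in seconds, iso: sensitivity.
    public static float ComputeEV100(float aperture, float shutterTime, float iso)
    {
        return Mathf.Log(aperture * aperture / shutterTime, 2f)
             - Mathf.Log(iso / 100f, 2f);
    }
}

// ComputeEV100(1.7f, 1f / 2480f, 100f) -> ~12.8, roughly EV 13 (the daylight shot below)
// ComputeEV100(1.4f, 1f / 20f,   800f) -> ~2.3,  roughly EV 2  (the night reference used later)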
Let’s have a look at a few photos I took with a smartphone, and see what
the exposure value will be. Here we have a daylight shot. If we check the properties
of the image, we can see the aperture is given
by the f-number, at 1.7. The shutter speed, usually given
as a fraction, is here at 1/2480, and the sensitivity, that’s
the ISO value, is at 100. If you plug these numbers into
the formula I gave previously, you end up with
an Exposure Value of 13. Here we have an overcast shot,
and using the formula, the Exposure Value is around 10. With the sunset, we’re at 8. In a very bright interior,
around 6. At night, with artificial
lighting, about 2. And in a very dark interior,
around 1. Now, you should
have a pretty good idea what the Exposure
Value corresponds to. And you should be able to look up
any of your photos, and calculate the Exposure
Value for yourself. This is especially useful
if you want to recreate a certain real-world Scene
in 3D, in Unity. And you don’t need a fancy Camera
for this, any smartphone will do. If you want to light your Scene
in a physically correct way, it is crucial that you use
the right exposure. If you don’t, you will have
many lighting inconsistencies, and it will be much harder
to balance your Scene between natural
and artificial lighting. Now, let’s see how you can apply
this knowledge in HDRP, which uses a very similar system
to the ones found in real Cameras. By default, you might not be happy with the way
the exposure is handled. This is normal, this is something
you have to tune for yourself, as we have no idea of the type
of Scene you will create in HDRP. For instance, the default settings
might darken your objects too much when looking at the sky,
or you might want to prevent dark parts of the Scene
from becoming too bright. Or overall, you might think
the Scene is just underexposed. To review all the default settings
applied to your Scene, open the HDRP Default Settings. If you scroll a bit, you will
eventually find the Default profile that is applied
to any of your Scenes. You can see that the Exposure Mode
is set to automatic by default. Based on the version
of HDRP you downloaded, these settings
might differ slightly. So we’re going to set
the exposure ourselves. I could do it here,
in the Default Settings, if I had a very simple Scene
with 1 light setup only. However, I have a complex Scene
with 3 different times of day, so I will need to override
the exposure settings with the Volume System
for every time of day. For this exercise, I will create a
Global Volume, though in practice I would recommend you use
Local Volumes instead, if you have larger levels with many
different zones that might require their own exposure overrides,
like interiors for instance. To override the Exposure Mode,
which was set globally to automatic if you remember, simply
tick the override checkbox. As you can see the entire screen
turned white. Why is that? It’s because the default value
for the exposure is set to 0. And if you remember correctly,
an exposure value of 0 should be used for night time
or very dark interiors. So obviously, your Camera
is now wide open, ready to collect as much light as possible, and
therefore, everything turns white. To fix this, let’s just override
the fixed exposure, and use a value that
corresponds to bright daylight, somewhere around 13.5. The exposure is literally fixed,
and no matter where you look, the brightness of the image won’t
fluctuate anymore, no matter if you look at the sun,
or at a dark corner of your level. Don’t get me wrong, the automatic
mode is also very useful. It’s the mode I use the most. As we’ve seen, by default,
the exposure will be free-floating, based on what the Camera sees.
And this might not be wanted. Thankfully, the automatic mode
provides several metering options, to let you control
how the Scene lighting is sampled. First, the average mode will use
the entire frame to calculate the target exposure. It is a very
stable way to control the exposure. Then, we have the spot meter,
which only samples a small number of pixels right in the center
of the frame. This mode is fairly unstable
because very bright pixels in the middle of the image
will force the Exposure system to underexpose the whole Scene,
and dark pixels will overexpose the Scene. Finally, we have
my favorite method, which is the Center-Weighted Mode. In this mode, the entire frame
is checked, however, higher importance is given to
pixels in the center of the image. You get the best of both worlds. At this stage, you will still
experience unwanted exposure changes when entering
dark parts of your level. For instance, the sky will
become totally overblown and look like a daylight sky. By opposition, placing
the Camera right in front of very bright pixels will force
the Exposure system to underexpose the whole frame and
darken the entire sky, for example. To prevent this, HDRP provides
the min and max limits. To avoid underexposing
the entire image, simply lower the Limit Max parameter until
you find an acceptable Value. I will lower it until the sky
becomes more visible. Then, to reduce the overbrightening
of the image when looking at a majority
of dark pixels in your Scene, I will increase the Limit Min Value
until the sky looks less overblown. And I now have a perfectly tuned
automatic exposure: looking at very dark areas
won’t overexpose my sky, and having very bright
objects in the view won’t radically influence
my final image. All this, while still having
a dynamic adaptation to the Scene. You can also use
the Compensation Mode to slightly over or underexpose
the Scene manually. However, be aware that
the Limit Min and Limit Max will remain active, and therefore,
you won’t be able to push the Exposure beyond those 2 values. Only use the compensation to
slightly nudge the exposure target. It should not be used
to solve problems related to poor exposure setups. The curve mapping is only meant
for expert users that want to remap
the exposure curve. Add keyframes into the curve,
and change the exposure output on the Y-axis, for each input
on the X-axis. It can be useful in very dynamic
lighting scenarios, such as open-world games,
or in a situation like this, where I want to specifically
flatten the exposure curve for this particular time of day,
between 0 and 5 EV. However, it requires knowledge
of exposure, and more often than not,
a well-tuned Automatic exposure can offer the same result. Do you remember the Exposure
triangle I mentioned previously? In a real Camera, the shutter
speed, the aperture, and sensitivity
control the exposure. Because HDRP focuses
on physical correctness, of course, we also offer
a physical Camera that will let you set up your
exposure like on a real Camera. I will add a local Volume
to my Camera, to not pollute my Scene view
with unwanted exposure changes. Let’s add a Box Collider
so that my local Volume gets proper boundaries. And after I created my profile,
I can add an Exposure override and set it to use
the Physical Camera. When switching to this mode, you
might notice the same phenomenon as when using the fixed exposure
for the first time, the image might turn totally white
or totally black. Let’s head to the physical
section of my Camera, and tune these 3 parameters. If you remember, I showed you this
photo I took with a smartphone. Let’s use these settings
and see how HDRP handles it. I’ll input 1.7 for the aperture,
that’s the f-number. Then 2480 for the shutter speed. And 100 for the ISO. And voila, I get a perfectly fine
exposure for my daylight Scene. I want to point out that this mode
is technically very similar to the fixed exposure mode. So it is mostly recommended
for static lighting condition, such as cutScenes
or beauty shots, as it requires a fully manual setup. As a bonus, let’s have a look
at a real reference of the palace. It's a photo I took after I made
the lighting for the palace, that’s an important detail. Again, I just need to switch the
exposure mode to Physical Camera,
and head to the physical section of my Camera to match my real Camera setup: I’ll use ISO 800,
1/20th of a second, and f/1.4. We’re already
in the right ballpark. Let’s adjust the white balance
to match the reference. I’ll also reduce the fog density,
as it was a much clearer night than the lighting I decided
to go for in this demo. Then I can slightly adjust
the midtones to match the tonemapping on the Camera,
and maybe slightly adjust the night sky, as it was darker
than in the HDRI I chose. And here’s the reference again,
for comparison. My point isn’t about making a
perfect comparison, it isn’t at all, but you can be in the right
ballpark at least with HDRP, and you can certainly
trust this renderer when you want to make high-quality
lighting for your AAA games. Finally, I want
to emphasize that no one forces you to use
a correct exposure. You could decide,
for whatever reason, to stick to a fantasy exposure
that makes no sense physically. However, we’ll see in a few minutes
that you would then have to come up with light intensities
that do not make any sense at all. And as a beginner, it is useful to
better follow the rules initially. If you’re an expert and understand
the implications of not using a proper exposure,
then please be my guest. You have to play with several key
components to set up the lighting. First, the direct lighting
is usually provided by the sun. Then we usually have a strong
source of indirect lighting, which is the HDRI sky
in my global Volume. You can see how much impact it has on the indirect lighting
in the Scene. And finally, the fog,
which at this time of day has a dramatic effect,
because it is lit by the sunlight. To visualize some
of these lighting contributions, head to Window > Render Pipeline,
and open the Debug window. You can enable Fullscreen
Lighting Debug Mode to see the contribution
of each component. Direct Diffuse
will only render your lights without any indirect lighting,
which means shadow will be black. Indirect Diffuse will only show you
the ambient lighting provided by your light bakes. Speaking of light bakes, let’s see
how you could bake this Scene. Go to Window > Rendering,
and open the Lighting settings. You have to provide a volume
profile that will be used for baking. It's the one currently
active in the viewport; it’s an overcast lighting condition,
ideal for baking indirect lighting only. Make sure you pick the HDRI Sky
for the static lighting to match the sky type
in the baking profile. For this Scene, I will only create
an indirect only bake, so that I can reuse the bake
in multiple lighting conditions later, for afternoon, evening,
and night. I’ll now brush
over the Lightmapper setting, as it's beyond
the scope of this session. First, pick the GPU Lightmapper, as it can greatly reduce
the baking time. Then activate the AI Denoiser,
as it can provide great results with a fairly low amount
of direct and indirect samples. Finally, set the texel resolution
based on the quality you’re after. In this case, I use 8 texels per
unit, that’s one meter here, which is more
than enough to capture the soft indirect
light bake I’m after. Let’s hit the Generate button,
and wait for the bake to complete. You can keep working on the Scene, the GPU Lightmapper
will work in the background. It can be many times faster
than the CPU, nearly 10 times faster
in this example. Note that only Mesh Renderers marked
to Receive Global Illumination from lightmaps will be baked
into the lightmaps. Otherwise, they would need
to rely on the Light Probe Groups, which I am not using for this demo, as I do not have any
dynamic objects in the Scene. Let’s wait for the bake
to complete, it should take a bit more than half a minute
for this level. And we’re done. I can now modulate
the intensity of this bake with an Indirect Lighting
Controller in my global Volume. So, how do you set up
the direct lighting in your Scene? That could be your sun, the
moonlight, or your artificial lights. HDRP is physically based, and
it offers real-world light units. This is something beginners
can have trouble understanding, but it’s just a matter
of looking up values in a table, inputting them in Unity,
and that’s it. No more guessing,
wondering if the sun intensity should be 3.2 or 17.6,
which it isn’t, by the way. There are many tools to measure
the amount of lighting received, emitted, or sent towards a specific
direction. A luxmeter will measure the lighting
received in a certain area. It is useful to measure
the sunlight, the skylight, when in the shade, or the general
lighting received on a worktop. The light is collected
by the small hemisphere and then a sensor
reads the intensity. The unit is lux. That’s what
Unity uses for the sunlight and the sky intensity. And we have the Integrating Sphere,
which measures the entire output of a given light source,
like a bulb or a tube light. The bulb is turned on, the door
is closed, the light bounces around in the white sphere, and a sensor
will read the light intensity. It is important to understand
that this measurement doesn't give you any information about
the directionality of the light. The unit is lumen,
and Unity supports it as well. It is a great unit for point lights
shining light uniformly in all directions. Then finally, we have
the Goniophotometer, which is able to calculate the intensity
coming from any angle around the light source. This is usually used to measure
lights that have reflectors to focus the light
in particular directions, like headlights and spotlights. If you’re familiar
with architectural rendering, this is how IES profiles
are generated. The Unit for it is candela. Unity supports it too, it's great
for spotlights, for instance. Here’s a quick roundup
of useful intensities for your directional light.
As you can see, the range is huge: from 0.001 lux all the way
to 120,000 lux for the sunlight. Let’s see how it looks in Unity. When you add a light to your Scene,
you might sometimes wonder why you can’t see it.
If I add a directional light and keep its intensity at 3,
you can see that the light doesn’t seem to contribute
at all in my Scene, yet I still pay the processing cost
of this light, under the hood. The physically correct value
for the sunlight around noon
is roughly 100,000 lux. When using proper exposure
for the Scene, you can see that it looks
correct at this value. Let’s also tune
the color temperature. When the sun is high in the sky
on a clear day, the temperature should be
around 5500 Kelvin, depending on
the atmospheric conditions. If you don’t use a physically
correct color temperature for your sun,
you will immediately break the photorealism of the image.
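If you prefer to set these values from a script rather than the Inspector, a hedged sketch could look like this; it assumes the HDAdditionalLightData component and its SetIntensity(value, unit) overload, whose exact signature may differ between HDRP versions.

using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

[RequireComponent(typeof(Light), typeof(HDAdditionalLightData))]
public class SunSetup : MonoBehaviour
{
    void Start()
    {
        var sun = GetComponent<Light>();
        var hdLightData = GetComponent<HDAdditionalLightData>();

        // Roughly 100,000 lux for a clear midday sun.
        hdLightData.SetIntensity(100000f, LightUnit.Lux);

        // Around 5500 Kelvin for a high sun on a clear day.
        sun.useColorTemperature = true;
        sun.colorTemperature = 5500f;
    }
}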
What about your sky? You can use your own multiplier, or simply input the exact
exposure you desire. For this bright afternoon sky,
I’ll use EV 14.5. After the long section previously
about exposure, there should be
no secret about it anymore. You can even tell HDRP
the lux value you need, and it will automatically expose the sky
for you, based on your input. To make my point regarding
the importance of using the right light intensity
for a given exposure, let’s simulate a sunlight
for our night Scene. As you can see, the image
is totally overblown. Let’s revert to
the correct value of half a lux. The color temperature
of the moonlight is actually fairly warm at 4100 Kelvin. When activating
my artificial lighting, you can see that the image
remains perfectly balanced. Let’s have a look
at their intensity too. Let’s turn on my street lamp Prefab which contains the emissive bulb
and the light. The light is set to 2500 lumen,
which is roughly equivalent to a 200-watt
incandescent bulb. Where can you find
these intensity values? Simply shop around,
every light bulb has a spec sheet where you can find the lumen output
and the color temperature. Simply input those in Unity,
and you’re set. There is however a little
complication for spotlights, if you want to use them
with lumens. Let’s pretend you found on the web a car
headlight, rated at 2000 lumen, with a blueish tint at 7000 Kelvin. Like this, it doesn’t look very
powerful at all for a headlight. Did they lie, or is there
something wrong with Unity? None of that actually. You need
to enable the reflector flag. Display the Advanced Options,
and tick the Reflector checkbox. Let’s activate Shadows too. Now the headlight looks believable. When the reflector flag is
turned off, the spot will actually behave like a point light
in terms of light intensity. This can be useful if you
don’t want the lighting to change when switching between
point and spotlight. However, this isn’t really
physically correct. Therefore, I highly
recommend you enable the Reflector Mode
if you work with Lumens. Like in real life, if you increase
the cone of light, you will reduce its luminous
intensity. And if you focus the beam, the luminous intensity
will radically increase. For instance, this is how tactical
torchlights work. They don’t necessarily have
a very high lumen output, but they have a great reflector
able to focus the light for maximum effect
in a very narrow cone of light. As you can see, when switching
between light units, Unity will automatically do
the conversion for you, so you don’t have to do the math.
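Under the hood, this conversion is simple solid-angle bookkeeping. The sketch below only illustrates the idea; HDRP’s internal conversion may differ in its details.

using UnityEngine;

public static class LightUnitMath
{
    // Point light, or a spot with the Reflector flag turned off:
    // the luminous flux is spread over the full sphere (4 * pi steradians).
    public static float LumenToCandelaPoint(float lumen)
    {
        return lumen / (4f * Mathf.PI);
    }

    // Spot with the Reflector flag turned on: the flux is focused into the cone only.
    // coneAngleDegrees is the full opening angle of the spot cone.
    public static float LumenToCandelaSpot(float lumen, float coneAngleDegrees)
    {
        float halfAngle = coneAngleDegrees * 0.5f * Mathf.Deg2Rad;
        float solidAngle = 2f * Mathf.PI * (1f - Mathf.Cos(halfAngle));
        return lumen / solidAngle;
    }
}

// For the 2000 lumen headlight: spread over a full sphere it is only about 159 candela,
// but focused into a hypothetical 30 degree cone it becomes roughly 9,300 candela,
// which is why the Reflector checkbox makes such a dramatic difference.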
Finally, if you don’t like any of these units, and prefer a more artistic way
to control the light, I would suggest
you use the EV100 units. Each increment of this value
will actually double the luminous intensity
of the light. In technical terms, you’re
increasing or decreasing the intensity by 1 stop. We’ve talked a lot about lights,
but what about shadows? In the HDRP Quality settings,
you'll be able to tune the default resolution, precision,
and overall quality. I’ll focus on the directional
shadows first. Let’s have a look at the shadow
section in our sun’s Inspector. The Update Mode is particularly
useful if you want to force HDRP to refresh the shadows
constantly or not. If you have a dynamic sun,
for instance, you’ll certainly want to keep
this mode to Every Frame. When set to On Enable, the shadows
will only be generated when the light gets activated. Obviously, if the sun
or any other object moves, even the Camera,
you could be in trouble. You can also pick the shadow
resolution you wish for any light. The lower the resolution,
the blurrier they will look and the less connected to their
shadow caster they might be. A very high resolution without
appropriate filtering might look very sharp, however.
More on this in a minute. The values for Low, Medium,
and High can be tuned in the HDRP Quality Settings. Scroll to the shadow section,
and tune these numbers to fit your Project’s needs.
You can also use a dimmer, if you want to simulate
semi-transparent shadows. As well as changing the color
of the shadows produced by this light only, if you want
to simulate the light going through a tinted window,
for instance. Don’t abuse this feature, however,
as shadows should normally be pure black
for opaque objects. With the medium filtering quality, our high-resolution shadows
look artificially sharp. This can be radically improved
by activating the high-quality filtering
in the HDRP Quality settings. Now the shadows
will become blurrier the further they are
from the shadow caster. Exactly like in real life. Obviously, there is a performance
hit associated with this mode. You can see how dramatic
the difference is between medium, and high quality. The blurriness
is controlled mainly by the angular diameter of
the light. The bigger the source, the blurrier the shadows,
like in the real world. A value around 0.5 degrees should
be used for the sun to correctly simulate its angular size
from the perspective of the Earth. Let’s have a look
at the Shadow Debug view with the Render
Pipeline Debug Mode. In the lighting tab, I’ll activate the Fullscreen Debug view
for our sunlight. Now you can appreciate
the effect more easily. For comparison, here’s
the medium filtering quality, and here’s the high-quality one. And remember that this effect works
for point and spotlights as well. Let’s tune again
the angular size of the light to showcase the effect. It is certainly one of my
favorite features in HDRP. I can also tune the Shadow Cascade
settings with the Volume system. Cascaded shadows are only used
for the directional lights. Other lights won’t rely
on this technique at all. I can tune the maximum distance my sun shadows are going
to be rendered at. Let’s keep it
at 200 meters for this area. I can then override
the distance splits for each cascade. Let’s activate the Debug view
to easily tune them. At first, I’ll have 2 cascades,
to better showcase the features. I can control how cascades are
spread by changing the split distance. I can also tune the blending
between both cascades by overriding the border value. Without this blending, the shadow
transition between each cascade would be very harsh.
In fact, each cascade will halve the resolution
to keep the shadowing cost down. You will most likely want to use
as many cascades as possible. You should tune them in a way
that each one roughly fills a quarter of the screen,
in order to spread the cost efficiently, while maximizing
the quality at short range. What about shadows
for point and spotlights? They don’t use the same cascade
system as the directional lights, otherwise, they are very
similar in their behavior. Let’s have a look at common issues
on Point lights instead, related to shadow biasing. Let’s activate the Shadow Debug
view in the Debug window. First, you can tune the near plane
and increase it as much as you possibly can
to get maximum shadow precision. Make sure you don’t push it
too high, otherwise objects between the light and this
near plane won’t cast any shadows. The Slope Scale Depth bias
is incredibly useful to solve any stepping you might see on surfaces
perpendicular to the light. In the past, such artifacts
would have to be fixed by increasing the normal bias,
but this had the negative effect of sinking the shadows
inside the objects. The Shadow Fade Distance
is a very practical feature to make the shadow disappear
beyond a certain distance. This is incredibly important
when optimizing the lighting, as rendering many shadows
at the same time, especially from point lights,
is expensive. It is very common
for current-gen games to have a shadow distance
under 30 meters, by the way. Finally, you can also impact
the shadow blurriness of the light by tuning its radius, like on the
directional light I've shown you. This also has the effect of
reducing the sharpness of the specular highlights produced
by this light, like in reality. Activating the Fullscreen
Specular Lighting Debug Mode, will make this effect more obvious. The last weapon in your arsenal for
shadows are the contact shadows. These are using the information
on the screen to generate additional shadow details
by raymarching in the depth buffer. It is a very useful technique
that can fill the holes due to the lower resolution
in the shadow cascades, especially at medium
and long-range. You can add a contact shadow
override to one of your volumes to set your own settings, like the
length of the rays, set in meters. Enabling the Contact
Shadow Debug view would make it easier
to demonstrate. As usual, you will have
the choice between low, medium, and high quality. You can tune these
in the HDRP Quality settings. You can use your own sample count,
but be aware that this can, of course, increase the impact
on performance. By the way, don’t worry about
the flickering in the Debug views, this is only related
to the temporal anti-aliasing. The contact shadows can also
be very useful at long range, beyond the maximum shadow distance
for the directional light. As a test, let’s set
the sun shadow distance to 0, and see how the contact shadows
will behave. Here’s my Scene with contact
shadows off, and then on. As you can see, the effect could
provide a decent shadow fallback for distant objects
that are visible on the screen. I’ve been talking
about lights and shadows. The next component
we will study is the reflections. In HDRP, your sky will be automatically reflected
onto the Scene. However, how can you get
your objects to reflect as well? For this, you will need to manually
place reflection probes. These will capture
the surrounding environment and then cast the specular
information onto nearby objects. Then, another technique
will be screen-space reflections, which will attempt to reflect
the Scene onto the entire frame. As we’ll see, this feature
has many caveats. First, let’s look
into the reflection probes. As I said, these are objects you
manually place inside the world. As you can see in this chrome ball,
the reflection of the statue doesn’t seem to match
the actual Scene. This is because the capture point
of the reflection probe that affects my chrome ball
is relatively far away. My point is that those reflections
have a fixed position in space, therefore, you need to place them
in your environment as you please, to ensure the reflections are as
good as required for your Project. Here’s another case
where my chrome ball doesn’t receive
correct reflections. To fix this, I have created
a reflection probe with a box shape to capture
the surrounding columns and shield this area from the sky
reflection. Finally, my chrome ball receives decent reflections
that match this area. As it moves away from my
box-shaped reflection probe, it will receive the reflections
from the other probe in the center of the arena. The probes in HDRP have
many new features compared to the built-in
render pipeline, therefore I invite you to go
through the documentation to get familiar with all
the new functionalities, such as the Projection Settings,
the Reflection Proxy Volumes, and the new Planar Reflections that are great
for flat surfaces and mirrors. In the probe’s Inspector, you can
set the probe type to Baked. The resulting reflection
will be baked into a texture. This mode is great
for static Scenes without many dynamic objects. When set to real-time, the probe
will be generated on the fly. You can set it to refresh only
when the object is activated, or every frame, which is very
expensive. However, it is a great way to get real-time
reflections for dynamic objects. For instance, when I move
the statue, I can see the reflection updating
in real-time on my chrome ball. To reduce the cost of such
reflections, you should also take a look into their Frame settings
in the HDRP Default settings. You might want to disable certain
expensive effects globally. Or, you can tune
each probe individually with the Custom Frame Settings
in the Inspector of the probe. For instance, you could disable
MSAA, so that the probe won’t render the world
using this expensive anti-aliasing. You can cull transparent objects,
decals, shadows, or fog. It will entirely depend on the type
of the game, you're doing, how you use the probes,
how many you have, and if they are set
to refresh constantly or not. As a rule of thumb, if you cannot
see the visual benefits of a given feature, you should
try to keep it off or at least minimize its impact
as much as possible. The final component
of the reflection system is Screen Space Reflection,
or SSR for short. To use them, make sure
the feature is enabled in the HDRP Quality settings. The SSR settings can be tuned
with the Volume system as well. As the name suggests, this technique
only relies on the screen data, so it is unable to provide reflections
for objects outside the screen. The usage is, therefore,
fairly limited, but it can provide great reflections on flat surfaces
parallel to the Camera. As I move my chrome ball,
you can see that it actually occludes the reflection
from the lamp behind. This is because
the renderer doesn’t have any knowledge of the world
behind opaque objects. If we enable the Screen Space
Reflection Debug view, the effect is easier to assess. For instance, you can tune the
minimum smoothness of the materials that will be able to receive the
effect, with a couple of sliders. You can also decide if the effect
should consider the sky or not,
when applying the reflection. In case you see strong artifacts
on the sides of the screen, you can increase
the edge fade distance. As usual, you will find
the low, medium, and high-quality settings
in your Project settings. You’re always free to use
the custom quality and tune the number of samples
to boost the quality, however, be aware
of the potential additional costs. I must admit, I’m not
a particular fan of this effect, I usually prefer well-placed
reflection probes in my environment that will give me near-perfect
360 degrees reflections. However, as a fallback, Screen
Space Reflections can be acceptable for certain use cases where having
too many probes isn’t feasible. Obviously, the main drawback
will be that those reflections will disappear as soon
as the reflection caster is offscreen. The fog and its
volumetric effects simulate the atmosphere and give
a better sense of depth. Volumetrics can really improve
the visual quality of your levels. Make sure they are enabled
in your HDRP Quality settings. The high-quality mode is designed
for very high-end platforms, so think twice before enabling it. To tune the fog,
you will need to use a Fog override
assigned to a Volume. The fog attenuation distance
will let you create a wide range of atmospheres.
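The same settings can also be driven from a script through the Volume system. The sketch below is hedged: it assumes HDRP’s Fog override with meanFreePath (the attenuation distance) and enableVolumetricFog parameters, names that can differ between HDRP versions.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class FogSetup : MonoBehaviour
{
    public Volume volume; // the global or local Volume that should carry the fog

    void Start()
    {
        // Fetch the Fog override from the Volume's profile, or add one if it is missing.
        if (!volume.profile.TryGet(out Fog fog))
            fog = volume.profile.Add<Fog>(true);

        fog.enabled.value = true;
        fog.enableVolumetricFog.value = true;
        // Attenuation distance (mean free path) in meters: lower values mean a denser atmosphere.
        fog.meanFreePath.value = 50f;
    }
}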
Like in real life, increasing the fog density will also dim the sunlight. Obviously, turning off
the Volumetric checkbox will revert the fog
to its Unlit Mode, making the Scene a lot less
dramatic, in this example at least. By the way, in case you cannot
afford the volumetric fog, an alternative is to rely
on the sky to tint the fog. You can tune how far
the effect extends and control the sharpness
of the effect by tuning the mip level of the sky
used to tint the fog. Obviously, the effect
won’t be as great visually as the volumetric fog, but it can
provide a boost in quality compared to a flat unlit fog. I can also tint
the volumetric fog color. However, scientifically speaking,
the fog should remain as colorless as possible,
in order to pick up the color from the sky and ambient lighting. The anisotropy is particularly
useful to create front or backscattering
for the particles in the air. A positive value will increase
the fog brightness towards the light source,
whereas a negative value will have the opposite effect. If you activate the advanced
settings, you will be able to tune the depth extent,
which controls how far from the Camera
the volumetrics are rendered. Increasing the Uniformity,
will in practice give a more linear precision
to the effect, no matter the distance
to the Camera, whereas a low value will provide a greater precision
near the Camera. This slider is useful
if you notice nearby fog is lacking resolution,
for instance. The filter checkbox is particularly
useful if you notice strong noise and flickering
around light sources. An alternative way to reduce
flickering is to use the Reprojection option in the Frame
settings of the Camera. However, this effect only works
in the Game view. You could activate both at the same
time to reach the best quality, but this would potentially have
a fairly high performance cost. To create very local pockets
of fog in your level, you will have to use
the density volumes. Add the GameObject to your Scene,
and tune its size. You can use the very
practical handles to resize it. Each Density Volume
can have a custom density, as well as a custom blend distance, to fade the harsh transition
at the edge of the Volume. Let’s assign a 3D noise
texture to it, and let’s jump into the game to see
the effect on my cutscene Camera. You can tune the speed
of the effect to something more reasonable, as well as changing the tiling
of the 3D noise. As you can see, the anisotropy
will also affect the fog around my local lights. The effect is very stable
and the transition at the edge of the Density Volume
can be seamless. And finally, you can tune the color
of this particular Density Volume. Again, scientifically speaking,
using white will ensure that the fog only picks up its color
from the surrounding lighting. These Density Volumes are terrific,
they can really enhance the atmosphere wherever you need,
without having to mess with the global fog. A very common effect you’ll see
in games these days is Screen Space Ambient Occlusion. It is used to simulate the lack
of indirect lighting in parts of the image. It is very useful if you don’t have
a light bake or AO maps, or if you have many dynamic
objects in your Scene. To use it in your Project, enable
the Screen Space Ambient Occlusion in the HDRP Quality Settings. To control this effect, just add
an Ambient Occlusion override to your Volume,
and you will be set. You can play with the intensity until you reach
the desired result. Make sure to not abuse
this effect, as it can introduce
cartoony results. By the way, the SSAO
in HDRP gets tinted by the albedo, to offer
the best possible quality. It isn’t a monochromatic effect,
like in many other game engines. Then, you can control the influence
of the effect in areas affected by direct lighting, such as
sunlight or artificial lights. This effect isn’t very realistic, but it can help ground objects
into the Scene. Then, you can tune
the radius of the effect, which might help you get
a wider AO around your objects, however, larger radiuses
will come at a cost.
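The same override can be configured from code as well. This is only a sketch under assumptions: it relies on HDRP’s AmbientOcclusion volume override and its intensity, radius, and directLightingStrength parameters, whose names may vary between versions.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

public class AmbientOcclusionSetup : MonoBehaviour
{
    public Volume volume; // the Volume carrying your post-processing profile

    void Start()
    {
        // Fetch the Ambient Occlusion override from the profile, or add one if it is missing.
        if (!volume.profile.TryGet(out AmbientOcclusion ambientOcclusion))
            ambientOcclusion = volume.profile.Add<AmbientOcclusion>(true);

        ambientOcclusion.intensity.value = 0.6f;               // keep it subtle to avoid a cartoony look
        ambientOcclusion.radius.value = 1.5f;                  // a wider radius comes at a cost
        ambientOcclusion.directLightingStrength.value = 0.25f; // influence in directly lit areas
    }
}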
To tune these effects, it can be useful to use the Render Pipeline Debug Window,
and enable the SSAO Debug Mode. This view is very practical
to tune the radius and steps used for this effect. It is much easier to judge
the quality of the effect as well. You can pick a quality level too. Those are actually set
in the HDRP Quality settings. You can use them to have
more granularity on the quality and cost in different areas
of your game, for instance. If you pick the Custom Mode,
you can then freely override any quality
settings underneath. And make sure you hit
the More Options button to display all the hidden
advanced parameters. Of course, using
the Full Resolution Mode will give you the best
quality, however, it will noticeably increase
the cost of the effect. The major drawback
of Screen Space AO (SSAO) is that it can only rely on depth
information in the current frame. This means it has zero knowledge
of the world itself and it can only use what the Camera
sees to produce the darkening. If you pan the Camera left
and right, you will see the effect appear and disappear
on the edge of the screen. Obviously, it is more visible
when using the Debug view. HDRP provides a very useful tool
to manage your lighting objects. Go to Window > Rendering
and open the Light Explorer. You will be presented with a list
of all lighting related objects, such as Lights, Volumes,
and Reflection Probes. It also offers
the possibility to edit some of their
most important settings. You can, for instance, tune the
color of lights, their intensity, and whether they cast shadows,
as well as the shadow resolution, without having
to leave the Light Explorer. Each property can be sorted, so it
can be very useful, for instance, to find the brightest lights
in your Scene, the ones that don’t cast shadows, or even
order them by color temperature. There is no doubt the Volume tab
will be very convenient because tracking where volumes are
and which ones are active can be very difficult, if you
aren’t familiar with a Scene. This is because the Volume
components can be added to any GameObjects, and therefore
they can be tricky to track. You can even use
the Light Explorer to apply settings
to your lighting Prefabs. Simply select your objects
and use the right-click to either apply the settings
or revert them. This is a very useful feature
when you want to standardize the lighting across all your lamps, without having to leave
the Scene view. Let’s look into a very nice HDRP
feature: the Light Layer system. As a lighting artist,
I can tell you this is the best thing
since sliced bread. You can use it to tell which lights
or reflection probes should affect
a certain set of objects. In this example, I'll place a light
in the garage behind the doors. For this scenario, I’ve reduced the
shadow fade distance to 20 meters. That’s a very typical distance
for many games these days, especially when you have
hundreds of them on screen. As I move the Camera more
than 20 meters away from the light, the light will stop
rendering its shadows, and I get unwanted light leaking. This is expected. But as a lighter, light leaking
is my worst nightmare. Fixing this problem
with HDRP is very simple, thanks to the Light Layer system.
I just need to re-assign the light to another layer
than the default one, I’ll pick the interior light layer. Now, you can see
that the light leaking is fixed and the light will only
be able to affect objects tagged with the Interior
light layer. I’ll assign the guard hut
to the same interior layer, and you can see that the hut
is now affected by the light, as they are both on the same layer.
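You can also assign light layers from a script. Treat the sketch below as an assumption-based example: the property and enum names (lightlayersMask on HDAdditionalLightData, renderingLayerMask on Renderer, and the LightLayerEnum values) can differ between HDRP versions.

using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

public class InteriorLightLayerSetup : MonoBehaviour
{
    public HDAdditionalLightData garageLight; // the HDRP data of the interior light
    public Renderer guardHutRenderer;         // the hut that should receive this light

    void Start()
    {
        // Move the light off the default layer and onto the layer used for interiors.
        garageLight.lightlayersMask = LightLayerEnum.LightLayer1;

        // Tag the renderer so it is affected by that same light layer.
        guardHutRenderer.renderingLayerMask = (uint)LightLayerEnum.LightLayer1;
    }
}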
Obviously, in this scenario, this light leaking isn’t wanted. However, you might have
situations where you want outdoor lights to cast light
into the interior only, for instance, to fake natural
light coming through windows. Here’s another example
where light layers can be incredibly practical,
that’s for a cutScene scenario. I have my cinematic Camera setup with custom post-processing
effects, and my character is surrounded by three
cutscene spotlights. I’m not satisfied that
the columns are being affected by these cutscene lights.
Fixing this is very simple: I assign the lights
to the cutscene layer only. I then select my character
and mark it to be affected by the cutscene light layer. Now the lights
will only affect my character and they won’t affect
the rest of the environment. If you’ve ever done
cutscene lighting before, you can certainly understand
how practical this is. HDRP even pushes further
by offering control over the shadowing
of these light layers. I can prevent the objects
on the default light layer from casting shadows
from my cutscene lights. I can even tell my characters to
not cast shadows from these lights. Obviously, this is quite
a niche feature, however, having the ability to prevent
an entire set of objects from casting shadows
from very specific lights is quite common
in real productions, where you don’t want to get stray
shadows from unwanted props entering the cones
of your spotlights. For the last part of this session, I’ll focus on
post-processing effects. Let’s start with tonemapping. Tonemapping is an effect
applied to an HDR image in order to display it on a low
dynamic range display, like most screens
on the market today. By default, HDRP will use
the ACES tonemapping curves, which will give
a filmic look to your image, with increased contrast,
notably in the shadows, and slightly
desaturated highlights. As you can see, using
no tone mapping at all would be terrible
for your image quality. All the highlights in your image
will be clamped. The neutral Tonemapper, on the
other hand, will provide a fairly flat look, preventing harsh
shadows and clamped highlights. It is a great Tonemapper
to use when reviewing Assets like Materials and Textures,
as you do not want to apply unnecessary contrast
and desaturation to them. Then again, we have
the ACES Tonemapper, you can see that it is more
contrasted than the neutral one, it will have a better
highlight reproduction and will desaturate
them slightly as well. With the Custom Mode,
you can create your own tonemapping curve
to achieve a specific look. You have control over the shadows
and the highlights via the toe and shoulder
controls on the curve. This mode should be reserved
to expert users, however, as you can introduce unwanted
contrast and may unnecessarily darken
the image, for instance. For beginners, I would
highly recommend you stick to the ACES Tonemapper. Next is White Balance
which is a very important effect when using physically correct
colors for your lights. In real life, Cameras or your brain
will attempt to automatically adapt to the environment by compensating for
the strong color casts they see. It is important to use
this effect at night, for instance, when you have very warm
incandescent lights in your world that produce
a very orange color cast. You might want to fix this
by lowering the temperature, which will in effect
reduce the reds and push the blues in your image. Alternatively, you can also
be creative with this effect, and use it as a color filter. Bloom is often an effect
that is being abused in games. Thankfully, HDRP now provides
a physically correct bloom that will simulate light bouncing
around in your lens and, in practice, reduce
the clarity of the image. Increasing the bloom intensity
to the maximum value will basically
heavily blur the image. You then control how much the light
is being scattered across the lens
with the scatter Slider. If you want to create
a less realistic effect, we do still provide a Threshold
Slider that allows for discarding the blurring effect
on pixels under a certain limit. Finally, you can also tint
the bloom in an artistic fashion, though for correctness, white
should be your color of choice, as the color of the bloom should
depend on the Scene lighting. The last effect I will mention is
the Depth of Field, DoF for short. What is it exactly?
When you take a photo, you or the Camera
will focus on a subject. This subject
will be sharp, hopefully. The depth of field will be
the distance between the closest and furthest points
that appear sharp in the image. So, to make sure
everything is in focus, you will need a huge depth
of field, whereas if you want your foreground and background
heavily blurred, that’s called bokeh, you will
need a very shallow depth of field. This effect is controlled
by the aperture of the lens. If the lens is wide open, you will
have a very low depth of field, and if the lens is closed,
you will get a high depth of field. So again, DoF
doesn’t mean blurriness, that’s something beginners often
wrongly associate with depth of field. A large DoF is actually synonymous
with sharpness in practice. As usual, you can control the DoF
with the Volume system, for instance by adding a local
volume to your cutscene Camera. HDRP has 2 distinct modes. The first one is the manual mode,
which has no physical correctness. You are able to specify exactly where the sharpest point
of the image starts, and where it ends, by controlling
the near and far blur. Naturally, if you aren’t careful,
you can end up with totally unrealistic results
that will raise a few eyebrows. As an example, if I create
a very strong foreground blur, followed by a very wide depth
of field, it will look unrealistic. Therefore, you need to be careful
when setting those distances, so that it doesn’t look like you're
bending the laws of physics, otherwise, your player
will notice it. At least, I will. If you head to the Physical Camera
section, increase the aperture, and you’ll then be able
to set the aperture blade count. This will change the look of your
bokeh circles in the background. Usually, more expensive lenses
will have more than 7 blades, in order to create
rounder circles of confusion. Using a low value
will look cheap, literally. Then, we also offer a physical mode that will be based
on the aperture of the lens. In the Volume, you can set
the focus distance. But for the actual depth of field, head to the physical
Camera section instead. The lower the f-number,
the less depth of field you get, and the blurrier
the background becomes. Increase the aperture, and your
image will become sharper overall. HDRP also simulates many lens
aberrations like lens distortions, which will, in practice, give a
radial warp to your defocused area, like on many real lenses.
The effect is very subtle though. Finally, you can simulate
an anamorphic look seen in higher production value
content, something that can radically boost the cinematic
quality of your cutscenes. Make sure you use a positive value
to get this very specific effect. If you activate the advanced
option for the DoF overrides, you will be able to tune the quality
of the effect more precisely. As usual, head
to the HDRP Quality settings to review them and tune them. Or simply use your own
by switching the Quality to Custom. Be aware that using
the full resolution mode and increasing the samples
will improve quality, but also greatly affect
your performance. So choose very wisely. And we’ve finally reached
the end of this session. I hope you’ve enjoyed it, and hopefully, you’ve learned
a few things. If you’re new to lighting,
it’s a lot to digest, so I would suggest you have a look at the excellent documentation
for HDRP online. Make sure you select the right
version, as each version of HDRP brings many new features
and improvements. Speaking of which, you can expect
more improvements coming to the Exposure system,
with more controls and better debugging tools,
Light Layers, and Volume Visual Debug Modes, as well as better temporal
anti-aliasing and depth of field. Again, thank you for watching,
and if you liked the video, please let me know!
Take care and au revoir! ♪ [MUSIC] ♪