Harnessing Light with URP and the GPU Lightmapper | Unite Now 2020

Video Statistics and Information

Captions
♪ [MUSIC] ♪ [NARRATOR] Welcome to Unite Now where we bring Unity to you, wherever you are. [CIRO] Hey, everyone, and welcome. Today we're gonna talk about Harnessing Light with the Universal Render Pipeline and the GPU Lightmapper. My name is Ciro Continisio, and I'm the Lead Evangelist for the EMEA region at Unity. In today's agenda we have a few things. First up, the Universal Render Pipeline, specifically, we're going to take a look at how to harness real-time lighting and shadows, so real-time lighting and real-time shadows. Then we're going to jump onto our bigger section, which is baked lighting, and specifically with the GPU Progressive Lightmapper. We're going to take a look at many things, but specifically I wanna touch on a few things, which I think are very important to control the quality of your bakes, and that create a lot of misconceptions, which is filtering and charts, UVs, and then we're going to see how to do mixed lighting. Finally, we're going to touch on Light Probes and Reflection Probes to top it off and really achieve the look that we're looking for. This talk is going to be very interesting for lighting artists, environment artists, and in general, anyone who wants to make their Scene look beautiful, take control of lighting, and understand what's going on under the hood, and at the same time, make it performant. During this talk, we're gonna take a closer look at this small Project I've put together using assets from the FPS Sample and from the Snaps HD Art pack. And this Project features a few interesting uses of light. First, let's get started with Universal Render Pipeline, or URP. Universal Render Pipeline is the optimized and scalable rendering pipeline that is built on top of SRP technology. It is a pipeline that is production ready right now. You install it via the Package Manager like many Unity features lately, and it's multiplatform, so it supports all of Unity's build targets, VR, AR, obviously, console, mobile, PC, Desktop, Mac. And finally, it's scalable. This means that basically the pipeline is able to scale to change its capabilities according to the device capabilities. And you can always get the best rendering possible on each device. This is really demonstrated in our demo <i>Boat Attack,</i> which was made by one of our technical artists, Andre McGrail. We released it last year and it features URP at its best. It has water shader, planar reflections, the water shader has caustics as well, so it has additional render passes. It uses terrain, it uses vegetation, it has a day/night cycle as well, so it's really, really complete, and it really showcases the best of URP. Now, let's take a look at the lighting strategy for my small demo, starting with real-time lighting and shadows. And let's do it in the Editor. So here's the Scene, as you can see, it's a small diorama Scene, it's an exterior, and then it has an interior part here you can see in this hangar, let's say. For this Scene, I decided to employ a strategy that relies on a main directional light, which is the Sun at the top. And as you can see, this light can be anywhere in the Scene, so rather than having it here at the bottom, which can be annoying to me, usually I bring these directionals very up high because it doesn't matter where they are, since the rays are parallel, it only matters the rotation of the GameObject. 
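To make that last point concrete, here is a minimal C# sketch (a hypothetical helper, not part of the talk's Project): only the rotation set here changes the lighting, because a directional light's rays are defined by direction alone.

```csharp
using UnityEngine;

// Hypothetical helper: orient a "Sun" directional light from two angles.
// The GameObject's position is irrelevant for a directional light, so it can
// be parked anywhere convenient, for example high above the Scene.
public class SunController : MonoBehaviour
{
    [Range(0f, 90f)]  public float elevation = 50f; // angle above the horizon
    [Range(0f, 360f)] public float azimuth = 30f;   // compass direction of the light

    void Update()
    {
        // Only the rotation matters: parallel rays are defined by direction alone.
        transform.rotation = Quaternion.Euler(elevation, azimuth, 0f);
    }
}
```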
So that light gives me a main contribution of light on the whole Scene, and a big part of it, as you can see, these objects here at the back, are marked as Static geometry so they're going to be baked. But then I also have some real-time contribution, specifically on the character so that, as you can see, if I move the character around, the shadow is going to be updated. This is called mixed lighting and I'm going to touch on it later. Let's focus on the real-time aspect for now. And then another light that I have here in the Scene is on top of the Terraformer Gun, you will see it if I go here, if I select this one, and this is a Spot Light. And as you can see, the Spot Light is marked as Realtime, so when I go into the game, I can actually walk into the interior and then the Light is gonna cast real-time lighting on top of this spaceship, on top of the crates, and also, it's gonna cast real-time shadows, as you can see here, when she moves the gun, the shadows change. So these are two lights that have some kind of real-time contribution to the Scene, and they're able to cast their light on both real-time objects and static objects. For the purpose of illustrating how real-time lighting and shadows work, I've created this simplified Scene, which only contains non-static objects. So as you can see, if I select any of these, they're not marked Static, meaning that they will cast a real-time shadow. And also the Sun here is a Directional and a Realtime light, so here, you can see where it's placed, and if we move it, the shadows change. So to control the quality of this lighting and shadows, what you will do is go to your Universal Render Pipeline Asset, this is the same Asset that you have here in Project Settings > Graphics, and it's the one that enables the Render Pipeline itself. So if I select it, I can see it's here. And if I go here, I want to focus on a couple of sections, Lighting and Shadows. Lighting allows you to enable the real-time lighting so you can have it disabled, and this means that the light is only gonna give a color to the Scene. But if you enable it, then the light is actually going to cast shadows and perform Per Pixel lighting. And the Shadows, you control them here. You can enable Shadows or not, and the resolution of the Shadows is very important. I have it on 2048, you can set it to what you want. To understand what this means, we need to understand that, basically, the technique here is that from the point of view of the Sun, so if you imagine, if I go to Align View to Selected, and now I'm looking through the eyes of the Sun, I'm basically casting a 2048 by 2048 texture on the Scene, and for every pixel of this texture, Unity's calculating the geometries and it's checking whether these geometries cast shadows on other geometries. So as you can see, the projection is really exact and from that point of view, each pixel of this texture is stamping, let's say, a shadow on a texture, which is called the Shadow Map. And you can go up in Resolution, so the Shadows become sharper. If you go down, they become more blurry, if you want. And they also start showing artifacts, which we'll see how to correct. For me, this works but this really depends on your device and the performance you want to hit. You can also enable these Shadows for Additional Lights, and this is something I have because I have the Spot Light on the Terraformer Gun, so I want to have additional Shadows. 
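For reference, this is roughly what that spot-light setup looks like from script; a sketch only, assuming a Light component on the gun (in URP the additional light still needs Cast Shadows enabled for Additional Lights on the pipeline asset):

```csharp
using UnityEngine;

// Sketch (not the talk's actual Project code): configure a Spot Light so it
// contributes real-time per-pixel lighting and real-time shadows.
public static class GunLightSetup
{
    public static void Configure(Light spot)
    {
        spot.type = LightType.Spot;
        spot.range = 10f;                 // how far the cone reaches, in units
        spot.spotAngle = 60f;             // cone aperture in degrees
        spot.intensity = 2f;
        spot.shadows = LightShadows.Soft; // real-time shadows from this light

        // In URP this light only casts shadows if the active Universal Render
        // Pipeline Asset also has Shadows enabled for Additional Lights.
    }
}
```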
And for this one, you can see that I employed a much lower Shadow Resolution, and that's fine for me, that works for my Scene because the Shadows that the Gun is going to cast are only very small in the Scene, very small in the view, so 512 works for me. The other important factor here is the Distance. So you can see here, these Smokestacks are roughly ten units apart, ten meters apart, and so I'm covering an area of 40 units from the camera, so I only get to see Shadows up to this point. If I were to bring this Distance up, you can see that the Shadows now appear for the last Smokestacks. But I also lose resolution. As I bring this up, you can see that the more I bring it up, the more I lose resolution along the whole length. And that's because this resolution is basically being spread from here to like 600 units away in the Scene, and maybe sometimes, you don't even see these 600 units. So for now, I'm only seeing 50 of them. So for me, for example, I could bring it down to 70 and then I get really nice Shadows. But obviously, if you have a point of view which looks like this, you might want to bring this up to a certain point. The second technique you can use is Cascades. If I have No Cascades, it means that basically, this Distance and the Resolution are being used in full and in the same way from distance zero to distance 70, in this case. If I use Cascades, what Unity does is it basically calculates different Shadow Maps, and it does it in kind of intervals, so you have an interval where you have a Shadow Map which is higher resolution, and then it goes lower resolution the further you get from the camera. So in this case, Four Cascades means I have four Shadow Maps, four Resolutions to play with, and you define the percentage of these different distances here. So if I go to No Cascades, as I was saying, basically what I'm seeing here, and you can see it better in the Game view, is that the same Resolution is being used for this Shadow and for this Shadow, so they have the same blurriness. If I go to 500, you can really see that it's blurry here and it's blurry here. But if I enable Cascades, you can now see how this Shadow gained quite a lot of resolution, while this one still stays a bit blurry, especially if I go to this distance here, you can see how this is very blurry, while this one is still very decent. And then you can control the percentage of these Cascades, you can see that if I bring it down, this part here really becomes neat because now this is something like 5%, so this one is 5% of my distance, then 20% means that it goes from 5 here to 20 here, and this is a percentage of this number here. So you can imagine an imaginary line that goes from the camera to 480 units away, and now you can tweak these percentages to be a percentage of that length. The other thing I want to touch on is Depth Bias and Normal Bias. These are basically there to combat what is called Shadow Acne. If I bring, for example, this one a little bit down, you can see how this Shadow Map that I'm requesting from the camera is intersecting the geometry on the ground, and I see these artifacts. And the higher this Distance is, the more you see the artifacts. If this was like 40, you would see way less. But if you want Shadows that cover a certain distance, then you start seeing artifacts and then you use these values to combat those artifacts. And you bring them up to basically find the balance where you don't see the artifacts or maybe they're very slightly noticeable. 
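A small scripting sketch of the same two knobs, assuming the active pipeline asset is a UniversalRenderPipelineAsset and that you want to drive the per-light bias from code (depending on the URP version, the bias may instead be taken from the pipeline asset):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Rough sketch: trim the shadow distance and nudge the bias values from script.
public static class ShadowTuning
{
    public static void SetShadowDistance(float distance)
    {
        var urpAsset = GraphicsSettings.renderPipelineAsset as UniversalRenderPipelineAsset;
        if (urpAsset != null)
            urpAsset.shadowDistance = distance; // e.g. 70 instead of several hundred units
    }

    public static void FightShadowAcne(Light light)
    {
        // Larger bias hides acne, but too much detaches shadows from their casters.
        light.shadowBias = 0.05f;
        light.shadowNormalBias = 0.4f;
    }
}
```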
I wanna say one thing that is very important, I want to give this suggestion to you: When you tweak these values, and you're analyzing whether the artifacts are there or not, artists usually get too close and they see the artifacts, they bring this value up, and then they're fighting all the time with this value and with the artifacts. The point is, as a designer, as an artist, as a developer, you see the game through one set of eyes, but players will see the game through different eyes. So they will see it from here, from a different point of view, and also, they will be very busy looking at other things. So you don't really need to be too precise with these values. You can achieve some compromise, which actually looks good. For instance, another aspect to consider in this Scene is that these ground Planes have no textures. When they have a texture, the artifacts will be less visible. So you need to take several factors into consideration and really look at the game through the eyes of the potential player of your game and not through your own eyes, when you tweak these values. As a recap, this is the situation of real-time lighting in URP. You have several options, you have Directional lights, Spotlights, Point lights and Area Lights. And some of them, as you can see, feature real-time lighting, some of them also feature real-time shadows. That is the case for the Directional light and the Spotlights. That's why I decided to use them for my real-time lighting contribution. The Point lights, they don't have real-time shadows right now because URP only features a forward renderer. With the arrival of a deferred renderer in the future, they will gain the ability to cast real-time shadows. But for now, that's not a possibility, as of this presentation. So I decided to leave Point lights out for my demo. And then you have Area Lights, which are by definition only baked. They're more complex lights, they are kind of like a surface that emits light, and I've used them in the interiors, and I'll show you how when I touch on baked lighting. There's no way to have Area Lights be real-time for now. Now that we covered real-time lighting, let's talk about baked lighting, and specifically with the GPU Progressive Lightmapper. To do that, let's go directly into the Editor. As you can see, in my demo, I have plenty of static Scenery. I have the structure, the rocks, the plants, and none of them move, except obviously, the player and some crates. So for me, it makes a lot of sense to use baked lighting to obtain better lighting in the Scene, and also squeeze out some performance. To enable baked lighting, you need to do a couple of things. The first thing is you need to make sure that every structure, every piece that you want to bake the light on, is marked as Static here in the Inspector. You do that by going to the GameObject that includes the Mesh and then you mark it as Static. And obviously, I suggest doing it on the Prefab level so you can then reuse that piece over and over again, and it will always be Static. The other thing you need to do is to make sure that the Lights are being baked. So in this case, for example, my Sun Light here is marked as Mixed, it could be Baked or Mixed. Mixed lighting, again, I'm gonna cover it later, but Mixed means that this light is gonna participate in the bake. And then I have other lights here that are participating in the bake. 
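Here is an editor-only sketch of those two steps done from script rather than the Inspector (the menu path is illustrative):

```csharp
using UnityEngine;
using UnityEditor;

// Editor sketch: flag selected scenery for baking and mark any light on it as Baked.
public static class BakePrep
{
    [MenuItem("Tools/Lighting/Mark Selection For Baking")]
    static void MarkSelection()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            // "Static" in the Inspector is a set of flags; ContributeGI is the
            // one the lightmapper cares about.
            GameObjectUtility.SetStaticEditorFlags(go, StaticEditorFlags.ContributeGI);

            var light = go.GetComponent<Light>();
            if (light != null)
                light.lightmapBakeType = LightmapBakeType.Baked; // or Mixed for the sun
        }
    }
}
```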
If you go inside here, you will notice that there are two Area Lights that are basically a little trick that I learned a while ago to capture the emissive light that is coming out of these crates. So basically, these crates, if you inspect their material, you can see that they're emissive, and this means that the simulated light is gonna be baked into Lightmaps. However, because these details are very thin, it might be that the resolution of the Lightmap doesn't allow me to capture them properly. So what I did is I faked it by placing a baked Area Light just in front of it with a blueish color, and the Area Light provides this nice gradient here on the ground. And then I placed another one for good measure, just because I wanted the bounce back on the structure so it looks like these lights are very powerful. If you notice, I haven't done it on the crates at the top because these lights are so small compared to the natural sunlight, that I don't expect them to emit any blue light, so I don't need to fake that. And I have another baked Area Light here to cast on the ground. Same situation, I wanna simulate the emissive and I wanna enhance it somehow. Once I have the lights set up, once I have the structure set up, I need to go to the Lighting panel and make sure that Baked Global Illumination is checked. Once you check it, you uncover all of these options, and then you can press Generate Lighting and start the baking process. I've actually already completed the baking process for this Scene, so I don't wanna launch it right now. It doesn't make much sense to just run it live. The thing that I wanna point out is that one of the most important settings is the Lightmap Resolution. Lightmap Resolution defines how many texels, so the pixels on the texture, are gonna be used for one unit in the Scene. So it means that if you have a Cube of one unit, that unit, that Cube is gonna be baked with 30 by 30 texels on the Lightmaps. I have 30 texels per unit and I can check that, once I've baked, if I go to this menu and select Baked Lightmap, and now I can see the actual Lightmaps that have been produced by the bake. And also, as you can see, you're able to see the resolution that has been used for those Lightmaps. You will also notice that, actually, some objects seem to have bigger texels, especially these rocks here on the back, compared to other objects which seem to have smaller texels. This is because there's another setting, which is very important when you bake, you will find it on each and every object on their Mesh Renderer, and it's here in the Mesh Renderer > Scale in Lightmap. This is a multiplier and is basically saying, of those 30 texels per unit, I want only this object to have 0.2 of that, so one-fifth, which means that this object is going to occupy less space on the Lightmap. This is very important because, for instance, in my case, the focus point of this Scene is here, is here at the center where the Terraformer, the character, is walking. For these rocks here at the back, I don't really need very detailed shadows. Remember what we talked about before, the player's gonna look here, not look there all the time, so you don't want to waste a lot of Lightmap space for those rocks. And that's why you can select them and you can set this parameter to something less than 1. In the case of objects in the foreground, I have 1, and that produces this difference in resolution. 
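The arithmetic behind Lightmap Resolution and Scale in Lightmap is simple enough to sketch (an illustrative helper only, no bake API involved):

```csharp
using UnityEngine;

// Back-of-the-envelope texel budget: with 30 texels per unit, a 1x1 unit face
// gets 30x30 texels; a Scale In Lightmap of 0.2 drops that to 6x6.
public static class LightmapBudget
{
    public static Vector2Int TexelsForFace(Vector2 faceSizeInUnits,
                                           float texelsPerUnit,
                                           float scaleInLightmap = 1f)
    {
        float t = texelsPerUnit * scaleInLightmap;
        return new Vector2Int(Mathf.CeilToInt(faceSizeInUnits.x * t),
                              Mathf.CeilToInt(faceSizeInUnits.y * t));
    }
}

// Example: TexelsForFace(new Vector2(1, 1), 30f)       -> 30 x 30 texels
//          TexelsForFace(new Vector2(1, 1), 30f, 0.2f) ->  6 x  6 texels
```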
The other important settings of the baking are the Direct Samples, the Indirect Samples, and the Environment Samples; they really affect the quality of the bake. But to explain that, let me go back to the slides. To understand those three settings, we need to talk a little bit about how Lightmappers actually work. The idea is that after you define the resolution, the texels per unit that you're going to use, you can imagine that the structures, the geometry, is covered by a texture like this one, right? And for each texel, you can imagine that they're shooting rays out, rays that are trying to find light sources and they're bouncing around and capturing light as they bounce. So the Direct Samples, they are just trying to find lights. And in the case of the Directional Light, you can imagine that the texel is shooting a ray in the direction of the Directional Light, the rays are all parallel, and if it reaches the sky, it means that the Directional Light has exposure on that texel. And it means that that texel is lit, and this defines the shape of the shadows, the main shadows that you see on the ground, and on the walls, and stuff like that. So why use more than one Direct Sample? That's the question that everybody's thinking. When you're talking about lights that are not the Directional, for example, a baked Area Light, in this case, the area of the light is actually bigger and the rays can be emitted in many directions. So the point here is that you want to emit multiple samples in different directions and see if they hit the Area Light. And some of them are not gonna hit the Area Light, so depending on how many find the Area Light, that texel is gonna be more or less lit. So if the texel is more exposed to the Area Light, for example, it will be brighter. That's very simple. In the case of Indirect and Environment Samples, what happens, for example in the case of the Environment ones, is that the rays are being shot by the texels, and are trying to find the sky, and they can do it not only in a direct way, but they can do it by bouncing. So as you can see here, that texel up there on the ceiling is shooting a ray which is bouncing once and then it's managing, let's say, to escape and find the sky. And every time these rays bounce, the color that they find is sampled and is brought back to the original texel. So this means that that texel there on the ceiling is gonna be a little bit blueish because after one bounce it finds the sky, so it samples a color from the sky, which is light blue, brings it back to the ground, that texel is also gonna be a little bit blueish, and then it brings it back to the ceiling. But it loses some power because it's bounced once, and so that texel on the ceiling is gonna be just slightly blue. And that's how the bounces are calculated. So you can imagine now, if you go back to the Editor, that you understand this setting here, the setting of the Bounces. This means that if I set it to None those rays are never gonna bounce, meaning that only something that is exposed to the sky is gonna receive that lighting. And with two bounces, it means that each texel is emitting rays that can bounce twice. This setting here, Russian Roulette, means that there's a random chance that those rays can be killed, so stopped from bouncing any further, and that will inform how many rays bounce, but it also makes baking times slightly shorter because a certain percentage of those rays is actually discarded. 
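To see why more samples mean less noise, here is a conceptual sketch (not Unity's lightmapper code) of the same idea using scene raycasts: each random ray that reaches the sky counts as one environment sample. Indirect samples extend this by letting rays bounce off what they hit and carry the surface color back, and Russian Roulette randomly terminates those bounce chains to save time.

```csharp
using UnityEngine;

// Conceptual sketch: estimate how exposed a point is to the sky by shooting
// random rays from it. More samples give a smoother (less noisy) estimate.
public static class SkyVisibility
{
    public static float Estimate(Vector3 position, Vector3 normal,
                                 int sampleCount, float maxDistance = 100f)
    {
        int unoccluded = 0;
        for (int i = 0; i < sampleCount; i++)
        {
            // Random direction in the hemisphere above the surface.
            Vector3 dir = Random.onUnitSphere;
            if (Vector3.Dot(dir, normal) < 0f) dir = -dir;

            // If nothing blocks the ray, this sample "found the sky".
            if (!Physics.Raycast(position + normal * 0.01f, dir, maxDistance))
                unoccluded++;
        }
        return (float)unoccluded / sampleCount; // 1 = fully exposed, 0 = fully occluded
    }
}
```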
Let's talk about another option that you have, which is Filtering. When Filtering is not on, you might get something like this. You might get a bake that is very noisy. The noise comes from the fact that each texel is gonna shoot rays, is gonna shoot these samples in random directions and sometimes the samples find a lot of light, sometimes they find less light. And this randomness creates this difference in color, which results in noise. And in this image, for example, you can almost see the resolution of the Lightmap in the noise. When you apply Filtering, you can get a result like this. So that was the same bake with the same parameters but with Filtering on. The point is that you could potentially up the number of the samples so high that you get no noise, or very little noise, but the cost of that, the time that it will take to bake that will skyrocket. While you could keep the same number of samples and enable Filtering and get a result that is much, much better for a fraction of the time. Let me show you in the Editor. So Filtering, you find it here, you find it in your Settings under the Lighting panel, and the default value is Auto, meaning that there is some filtering on. You could also have None, and if you go to Advanced, you uncover all of the options that you can have for Filtering. So you have Filtering on Direct samples, Filtering on Indirect samples, and Filtering on the Ambient Occlusion. As you can see, I can also select the type of filter, and you have two filters: Gaussian and A-Trous. The difference between them, so Gaussian is a filter that doesn't differentiate between any pixel and between geometries. So the idea is that, let me go to Baked Lightmap, the idea is that you take a couple of pixels and Gaussian is just gonna blur them. If you select up to 5, it blurs across 5 pixels, and it does that also under geometry. So for example, here you can see how this pixel is a bit lit, and that's because the Gaussian filter blurred it with some lit texels that are under the geometry. On the other hand, A-Trous, which I haven't used here, just to demonstrate what Gaussian can do, A-Trous does something different. A-Trous is like a smarter filter which reads geometry and tries not to filter, not to blend texels that belong to different geometries. So for example, in this case, because this geometry is partly in the ground, the A-Trous filter will detect that and it will not blur this texel with whatever texel is under here because they are intersecting. And so what A-Trous does is that it gives you slightly better results with geometry that is touching and is intersecting. In addition to Filtering, you also have the Denoiser, which is another option that we added recently, and Denoisers are basically AI-based filters so they don't just filter anything but they run an algorithm to decide what to blur, filter, and what not. And you have different types and some of them rely on Graphic Cards, so it really depends on which hardware you're running. And between these two options, you will see you can really get the results you want. Filtering doesn't happen on the whole object but it just happens within something called UV Charts. Basically, the object is divided into different islands, kind of like UV islands, and then the Filtering is applied, as you can see, only within the boundaries of each Chart. 
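The chart-aware part of that filtering can be illustrated with a toy example (this is not Unity's filter implementation): blur texels, but only ever average neighbours that belong to the same UV chart, so light never bleeds across charts. A Gaussian filter would additionally weight neighbours by distance, and A-Trous also takes the geometry into account.

```csharp
using UnityEngine;

// Toy 1D blur over lightmap texels that never blends across chart boundaries.
public static class ChartAwareBlur
{
    public static Color[] Blur(Color[] texels, int[] chartId, int radius)
    {
        var result = new Color[texels.Length];
        for (int i = 0; i < texels.Length; i++)
        {
            Color sum = Color.black;
            int count = 0;
            for (int k = -radius; k <= radius; k++)
            {
                int j = i + k;
                if (j < 0 || j >= texels.Length) continue;
                if (chartId[j] != chartId[i]) continue; // never blend across charts
                sum += texels[j];
                count++;
            }
            result[i] = sum / count;
        }
        return result;
    }
}
```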
So for instance here, it will go all the way up to the boundaries of the Chart, it will blur the pixels that are inside but not the ones that are outside, so it will not blur with the ones that are outside of this Chart, and so forth for each Chart. And if you go to the Scene, you can see that we actually have a Visualization mode for UV Charts and now I can see how each and every object has been divided into Charts. So for example, this one here, you can see that there's this Mesh Chart at the top, this blue one, and so the Filtering will happen only between the texels that are part of the same Chart. So these two will blur and these two won't. As for the way you divide objects into Charts, there are a couple of ways. The first way: for example, in this case, you can totally see the difference between the Charts within this object and within this object here. These are much more neatly organized. So if you select one of these objects and you go to the Inspector, and you select the Mesh that this object is using, you can see that, for example, for this one we're asking Unity to Generate Lightmap UVs. So we're asking Unity to make the UVs for us, and this means that these Charts will come from those UVs that Unity will generate for us. Actually, a better way would be to create the Lightmap UVs yourself. So if you have an object which has been properly UVed, then the Charts will be much, much better organized, as you can see. And this makes a lot of sense because in this case, for instance, I might want to have some filtering happen on this area, but I don't want the filtering to blur things in this area, otherwise, it will kill my shadows. So when you do proper UVs, Charts are much more organized and produce better results. This brings me to another topic which is very important to baking, which is UVs. UVs are basically our way to tell the software how to map a 2D texture to a 3D geometry. They basically create a correspondence between each and every pixel of the texture and the geometry, the triangles of the Mesh. And you usually create your UVs in DCC tools and then you import them into Unity. And you usually do that for, for example, albedo textures, so you decide how the albedo texture gets mapped around the object. But it's also very important to do that for Lightmaps. So if you go into Unity, actually, we can inspect the UVs of objects, if you select the object that has the Mesh and then you click on the Mesh and find it in the Project view, you select it here. In newer versions of Unity you have the ability not only to inspect the Mesh, but also to check its UV Layout. And as you can see here, this one is a bit off-center, but you can totally see how the polygons are present on the UVs. And objects can have multiple sets of UVs. They can have one for the albedo, for example, for the color, and another one for the Lightmaps, and they can differ because with the albedo, it's okay to have multiple parts of the object overlapping each other because, for instance, this coordinate here and this coordinate here of the crate might have the same color and the same scratches, that's okay. But when you talk about Lightmaps, you want them all to be unique because if the light is hitting the crates from this side, and you're baking the light in the Lightmap, then you want this side to be lit, but this side has to be in shadow. So those triangles can't occupy the same space on the UVs. So they have to be unique. 
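If you do let Unity build that unique second UV set, you can also trigger it from an editor script; a sketch, assuming the Mesh is selected in the Project view and using mostly default unwrap settings:

```csharp
using UnityEngine;
using UnityEditor;

// Editor sketch: generate lightmap UVs (the mesh's second UV set) from script,
// the same operation as the "Generate Lightmap UVs" import option.
public static class LightmapUVTools
{
    [MenuItem("Tools/Lighting/Generate Lightmap UVs For Selected Mesh")]
    static void Generate()
    {
        var mesh = Selection.activeObject as Mesh;
        if (mesh == null) return;

        UnwrapParam settings;
        UnwrapParam.SetDefaults(out settings);
        settings.packMargin = 4f / 1024f; // extra padding between islands, in UV (0..1) space

        Unwrapping.GenerateSecondaryUVSet(mesh, settings);
        Debug.Log($"Generated UV2 for {mesh.name}: {mesh.uv2.Length} coordinates");
    }
}
```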
So if you don't want to author your UVs manually outside of Unity, one thing you can do is you can select an object and check this checkbox. So for instance, I've done it on this object here, and what that does is it asks Unity to Generate Lightmap UVs for you, and Unity will do it according to the settings of this menu here. Basically, here you're saying how Unity should reason when creating these UVs, and one of the most important areas is this one, Margin Method. It can be Manual or it can be Calculate in 2020.1, and Unity, what it's doing is it's trying to create some margin between the islands of the UVs. So if I go back to the UVs of this object, let me go to this Channel here, you can see that the islands, so these parts here, are separated by a certain amount of hypothetical pixels. This depends on the resolution of your Lightmap. So sometimes when these two islands are not separated enough, you can get some overlap because they're maybe sharing the same pixel. If there's enough pixels in between these then it would be fine. But if this object actually is too small on the Lightmap, then you will see some nasty artifacts. And these parameters are all interconnected, and they could really make or break your bakes. When you select an object, actually, you can inspect where this object is in the Lightmap. You can do it in two ways actually. You can do it in the Inspector, and you see the Lightmap here and then you can open a Preview of the Lightmap and you see the object here. So this object is being, let's say, it's being printed here on the Lightmap. And you can see its different islands and you can see that, for example, between these two islands there's only two pixels. So this tolerance is the one that we saw before that you can either do it manually in your DCC tool, or you can have it here and you can see Unity can calculate it for you, assuming that the Lightmap Resolution is 40 pixels. So in this case, for example, where I have 30 pixels it might not be enough, or it might be okay. But as you can see, they don't share any pixels, so this should be okay. But sometimes for other objects, details could be too small, so you really need to make sure that these UVs are well done, either by tweaking them into the options on the Inspector, but my suggestion is always you wanna author your UVs manually. So you really need to make sure that they're well done to support the resolution that you wanna bake at. Another option is to go here in Lighting > Baked Lightmaps, and then you can see the Lightmaps, for example, if I select this object, it shows me that it's contained here and then I can click this and I should be able. Sorry, I'm not selecting the Mesh, now I am, and now I can see that this object is this one here in the light. So you can really see where they are, you can turn the geometry on and off, and you can also tweak the intensity, the exposure of the Lightmap. And you can inspect issues. So when these UVs are overlapping, for some reason, you can actually see it here, there's another Visualization mode which is called UV Overlap, and UV Overlap, as you can see, it gives you an overview of where some objects might have UVs that overlap. So let's inspect this one actually. Let's open the Preview, you see it's here, so for some reason, this object has been scaled very small on the Lightmap. 
Maybe it's the, yeah, it's the Scale in Lightmap, you see it's very small, so the object has become very small on the Lightmap, which means that if we go and check its area, you can see that in this part here, the UVs are overlapping. And I guess it's this part here, you can see how these two UVs, I don't know if I can zoom in even further, these two UVs are basically sharing the same pixel and that creates this overlap. Now, the point is, in this case, I really don't care because as you can see, it's such a small detail on the Scene that I don't, it's not producing any artifact, and if you see here, this mechanical arm has the same thing but it's very small again. So this is all tolerable overlap and it's fine for my case. Sometimes you'll see bigger patches and then you need to correct by making sure that the UVs are well done. I don't have time to go over each and every setting for the baking, but I just wanted to finish with this one. Basically, there's two types of settings, let's say. This is my distinction but you could see this in a different way, but I will say that there's some settings which will affect the performance of the game, and some which are just like artistic decisions, so to speak. So the ones that you see highlighted here in yellow are settings which will affect the size of the Lightmap, the compression of the Lightmap, which means that the file will be bigger or smaller and thus require more memory or less memory to be managed during runtime. And the Direction mode will also affect the number of Lightmaps that you get. So basically those settings highlighted in yellow are something that you want to discuss with a technical person and establish some kind of guideline for your game, depending on the device and depending on the performance that you wanna hit. The ones in blue are the ones that are not really affecting how the game runs because once you bake the Lightmap, that's it, that's the texture. And whether that's a nice bake or that's a bad bake, the pixels are the same, the game doesn't care. So these blue settings are the ones that you, as an artist, needs to control and here you can really create a workflow. Because, for instance, you could define the ones in yellow at the beginning, make some tests, and then you could totally keep working with lower settings on the blue side, and experiment and experiment. And then once you're satisfied with the results and you found your settings, for example, for the main light, for the color of the light, for the intensity, and so forth, you could launch a final bake. Maybe you launch a final bake everyday, and you bring all of the blue settings to the maximum and you just leave it baking for awhile. So there's some workflow that you can find in there where you can iterate faster and then launch a final bake, which is higher quality, every now and then. Now that we understand real-time lighting and baked lighting, we can talk about mixed lighting. Mixed lighting is a bit of a complex subject because you can have different types of mixed lighting, we call them mixed lighting modes. And URP supports two out of three of them. So it supports Baked Indirect and Subtractive. It doesn't yet support Shadowmask and Distance Shadowmask, which are basically the same mode, but it will come in the future. So Baked Indirect works this way, you have lights that cast shadows and everything is real-time, and then the bounces of those lights are baked. 
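One way to formalize that draft-versus-final workflow is a pair of editor menu items; a sketch only, where the sample counts and bounces (the "blue" settings) are assumed to be tuned in the Lighting window before each pass:

```csharp
using UnityEditor;

// Workflow sketch: keep iteration bakes cheap and make the expensive final bake
// a deliberate step. Lower the sample counts in the Lighting window while you
// iterate on light colors and intensities, raise them before the final bake.
public static class BakeWorkflow
{
    [MenuItem("Tools/Lighting/Draft Bake (async)")]
    static void DraftBake()
    {
        Lightmapping.BakeAsync();
    }

    [MenuItem("Tools/Lighting/Clear Baked Data")]
    static void ClearBake()
    {
        Lightmapping.Clear();
    }
}
```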
So the indirect lighting, the lighting that's bouncing, is baked into Lightmaps, and then the direct lights cast the lights and shadows in real-time. This is very flexible, this allows you to have something like a day/night cycle, for instance, when you can turn off the main Directional Light and you could make it blueish and turn it very, very low, and have like a night Scene. And if you want, you can turn it on and have a day Scene, and they could both live in the same Scene and you would be able to switch between them at runtime. But at the same time, you would have indirect lighting with bounces which makes it a bit more realistic. Subtractive mode, instead, is the one that I used. And in this case, you basically bake almost everything, you bake all of the static objects, you bake the lights, you bake the shadows, and the bounces on those objects. And then you have the ability to have dynamic objects cast shadows on top of them. So let's see in the Editor. In my situation, as I said at the beginning, almost everything is static here. The only things that are dynamic are the character and these crates, and one of the two robotic arms, this one here. And this is because they move, so if I press Play, you will notice that they are all able to cast shadows that can change at runtime. So this one is casting shadows as well. And if we go back to Edit mode, this shadow here is baked because these are static objects, so they're casting a baked shadow. This shadow here is dynamic, and if I select the Terraformer, and I bring it, for example, here, you will notice that the shadow of the Terraformer is overlapping with the shadow that is baked. So as you can see, Subtractive, which is also one of the cheapest lighting modes, is not perfect but it gives some pretty good results. And the point is you need to go into the Lighting menu, and then Environment, and here you can choose the color of that shadow that is being superimposed on top of the baked lighting. And you need to find the right color to really match the color of the shadows, and you can get some pretty good results. Obviously, you see the game from here, so this matches pretty okay. And again, this is the most performant because now, almost all of my Scenery is baked, and then I only have this small shadow to calculate, which allows me, as I was saying before, to have my light. Sorry, it's here in the Settings, to bring my distance to something very, very low, and utilize all of the resolution of the Shadowmap for this shadow here because it's the only one I'm seeing. So I don't care about objects that are in the back because they're baked. And the other thing I wanna say on your Pipeline Asset, you need to enable Mixed Lighting here. And then in the Lighting panel, in the Scene, in Lighting settings you're able to select which Mixed Lighting Mode you wanna use. As you can see, I only have Baked Indirect and Subtractive, which is the one I'm using. You also need to make sure that your Directional Light is set to Mixed. Now you finally understand what Mixed means, it's both real-time and baked, and it's mixed in the sense that it's real-time or baked for different objects. So it's baked for these ones and real-time for this one. So now I hope you understand what mixed lighting modes are and which one to use in which case. And one small tip that I can leave you with is that you could also have different mixed lighting modes in different Scenes, depending on the needs of your game. 
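Going back to that Subtractive setup for a moment, the shadow color is also exposed to scripts; a small sketch, assuming you want to drive it from code rather than the Lighting > Environment panel (the sun itself stays a Mixed light, and Mixed Lighting must be enabled on the URP asset):

```csharp
using UnityEngine;

// Sketch for the Subtractive mode: the color of the real-time shadow that
// dynamic objects stamp on top of the baked lightmaps.
public static class SubtractiveSetup
{
    public static void SetRealtimeShadowColor(Color color)
    {
        // Same value as the "Realtime Shadow Color" field in Lighting > Environment;
        // tune it until it roughly matches the baked shadows.
        RenderSettings.subtractiveShadowColor = color;
    }
}
```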
Sometimes a Scene can be in the open and you wanna use Baked Indirect because you wanna have a day/night cycle. Whereas maybe in an interior, you might have Subtractive and just bake everything out. And once Shadowmask and Distance Shadowmask become available, you can also put those into the mix and really take advantage of whichever is best for whichever case. You can see more information about Lighting Modes in the Documentation and read all about them. Let's talk now about another technique, which really allows you to tie together the world of baked lighting with dynamic objects. This technique is called Light Probes. Basically, Light Probes are small spheres that sit in the Scene in groups. There's gonna be plenty of them. And they capture the light when you bake lighting, and then when the game is running, dynamic objects can sample these Probes, they sample the closest Probes and they take the Lighting data from them, and that basically allows a dynamic object to receive Lighting data from what was baked during the baking process, including all the light bounces, all the emissive materials, and all of that. So it really makes dynamic objects stand out because they can now receive really realistic lighting even if that lighting has been baked beforehand. To create Light Probes, you just need to right-click here, and go Light > Light Probe Group. Now, I've already created one and I've already baked one, so you can see how here I have plenty of Probes in my Scene. And as you can see, you really need to place them where it matters so you wanna distribute them by hand and capture the differences in lighting. So for instance, here I have some of them in the open, and then I have some of them in the interior. You can see how I have one of them here on the threshold, one of them here inside, and then at the interesting points, like the Area Lights that I showed you before, and these Probes are set to capture that light. So basically, they constitute a network of little Probes, of little interest points, if you want. And then when you have a dynamic object like the Terraformer, or let's take the Crate, that's an easier-to-understand example, and if I select the Mesh, you can really see that this Mesh is sampling from the closest Probes, and it doesn't just take one, but it averages the Lighting on the four closest ones. So basically they need to form a pyramid. Let's take another Crate as well. So you can see here how this is sampling from these Probes here, which are darker, this one too. And you see that it's sampling from these two dark ones, and then there's one here, which is capturing some of the blue emissive lighting from the Area Light here. So this Crate shows some blueish tints, and you can see how I can move it closer and closer to the Probes that have captured more light. I can move it further, it's still dark. I can move it to this side, it becomes darker. When I move it to this side, it progressively becomes brighter until it gets into sunlight. As you can see, Light Probes don't provide the silhouette of the shadows. The shadows have been baked, so Light Probes just provide lighting information, the intensity and the color of the light. So you will never see a baked shadow being cast in real-time onto a dynamic object. That's not possible. But as you can see, the approximation is quite good, you can really feel the difference when the object is inside versus outside in the open. 
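For completeness, probe positions can also be filled from an editor script; a sketch that lays down a coarse local-space grid which you would then prune by hand, exactly because only the interesting spots deserve probes:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Editor-usage sketch: fill a LightProbeGroup with a grid of positions
// (local to the group), to be hand-pruned afterwards.
public static class ProbePlacement
{
    public static void FillGrid(LightProbeGroup group, Vector3 size, float spacing)
    {
        var positions = new List<Vector3>();
        for (float x = 0f; x <= size.x; x += spacing)
            for (float y = 0f; y <= size.y; y += spacing)
                for (float z = 0f; z <= size.z; z += spacing)
                    positions.Add(new Vector3(x, y, z));

        group.probePositions = positions.ToArray();
    }
}
```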
Just one quick note about the placement of Light Probes, as I was saying, they work in pyramids, so you wanna organize them in little pyramids. And as you can see, I didn't place them where my character couldn't go, for example, so I didn't place many around here 'cause the character cannot reach. I didn't place them here at the top because the character cannot walk there, so I'm only placing the Light Probes, as I said before, where I have interest points, where I have places where the lighting is interesting and changes in some way. So obviously, I didn't place any Probe under this rock because I don't want the character to be darkened by this shadow. The character, obviously, doesn't go under the rock. But I do have one here, under this rock because when the character walks close to it, then it gets some kind of, let me show you, it gets some kind of shadow. Where is the character? It's here. So if I drag the character to this position, you can see that it's very, very faint, the change in... actually, sorry, it's too high. So if I place it correctly, you can see that the character is now slightly darker as a result of that Probe. So there's a slight change in lighting between this position and this position. And then, obviously, here, it can become darker and so forth. This brings me to another little tip that I can give you. So these plants here, they are baked but they are using Light Probes for lighting. And if I select any of them, and I select the Mesh, you will see here, that in the Inspector, basically, these have been marked for... Where is it? Contribute GI. So they are static in that sense and they do contribute to the GI. But then here instead of selecting Lightmaps to Receive Global Illumination, I decided that these plants actually are lit by Light Probes. And this is because these plants contain a lot of detail but it's very small and spreading all these polygons on a Lightmap means that I'm using a lot of space on the Lightmap, which will make my bake times higher. But I'm not actually seeing these details because I will see these plants from up here. So even though you could obviously scale them in Lightmap by... I'm not gonna see the value here but the value that we saw before, you could use Scale in Lightmap and put it down to something smaller, but then you will just capture details in a more imprecise way. So a better technique is to actually select Light Probes for these objects, so these objects are gonna contribute to the GI, and as you can see, they do cast a shadow, and the shadow is baked into the Lightmap on the ground, but the object itself is not being lit by a Lightmap, but is being lit by Light Probes. In fact, if I enable the Gizmos again, you can see that this object is showing me that this resulting lighting is the interpolation of these four Light Probes that are nearby. And actually, if you wanna see Light Probes a little bit better, this is another little trick, you can go to Lighting and at the bottom, you will see Workflow Settings, and there's a setting which allows you to visualize All Probes No Cells. And if you do that, and you have Gizmos on, you will see all the Probes in the Scene and at all times, you don't need to select the Light Probe group. And this is really useful because it allows you to debug and see if something is wrong. I can take a look at my Probes and see if the colors that I see that they have are something that I expect. 
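The plant setup described here can be expressed as an editor script too; a sketch (the menu path is illustrative): the object contributes to GI, so it still casts a baked shadow, but it is lit from Light Probes instead of taking up lightmap space.

```csharp
using UnityEngine;
using UnityEditor;

// Editor sketch: small detail meshes contribute to GI but receive it from probes.
public static class DetailMeshSetup
{
    [MenuItem("Tools/Lighting/Light Selected Detail Meshes From Probes")]
    static void Apply()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            GameObjectUtility.SetStaticEditorFlags(go, StaticEditorFlags.ContributeGI);

            var renderer = go.GetComponent<MeshRenderer>();
            if (renderer != null)
                renderer.receiveGI = ReceiveGI.LightProbes; // instead of ReceiveGI.Lightmaps
        }
    }
}
```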
As you can see here, this is carrying some of the blueish light from the Skybox, but it's darker because of the shadow of this walkway, let's say. And similarly, the ones that are inside, they carry some of the blueish light from the Area Light, so I can really debug and make sure that all the Light Probes that I placed are capturing the right light and the right shadows. If you wanna turn them off, you go here, and you select Only Probes Used By Selection, and now you can only see the Probes if you actually select them. One final thing on this topic is that we have a new Visualization mode here, which is called Contributors/Receivers, and this one really allows you to check the Scene and basically debug, if you want, and check if you forgot to set the settings of any object, and if the settings are correct. As you can see here, all of my dynamic objects, excluding the character, actually, because the character is a Skinned Mesh Renderer so it's always dynamic, but the ones that are just Mesh Renderers and are not set to receive or contribute to GI, you see them in yellow. And then the plants, for example, the small rocks here, I set them to Contribute to GI so they cast a shadow, but they receive lighting from the Probes, they're in red, and everything that is completely baked and contributes to GI is in blue. So this is a really good visualization to check if everything is correctly set up. Finally, Reflection Probes. Reflection Probes are used to capture reflections of the Scene to make materials like metal and glass more realistic. And reflections are usually a very expensive technique, you might have heard of the rise of Ray tracing these years, and that's a technique to capture reflections. But Ray tracing is a very expensive technique. So if you don't have that type of hardware, or your game runs on a different type of hardware, then you really need Reflection Probes to the rescue. So Reflection Probes are basically spheres, and you can see one here, which capture the reflections of the Scene in a spherical way. And then the objects that are close to them can sample that Probe and use it for reflections. So my setup here is I have two Probes in this Scene, and this is basically my tip: you wanna have multiple Probes for different, again, like the Light Probes, different interest points. In this Scene, obviously, we have the exterior and then we have the interior. So I have the Terraformer, which has a very reflective helmet, and I want her helmet to reflect different things depending on whether she's inside or outside. So I created two Reflection Probes, and obviously, you create them just by going here, Light > Reflection Probe. And once you have it, you need to define, as you can see, a box which defines where this Reflection Probe is used. So in this case, I created a big box that comprises the whole Scene for this one and this means that wherever she's gonna be in the Scene, she's gonna use this Reflection Probe to create reflections. So if you see her helmet... let me disable Gizmos. She's now reflecting the exterior and this side is not maybe the most interesting, but if you turn her around, you will see that she's reflecting the structure that's behind us, the hangar. So that's capturing that reflection here. And then when she goes inside, she will use another Reflection Probe. And now, she's capturing the inside of the hangar, and we see the door and we see the Area Light that you have up here, so this reflection now, accurately matches the inside of the hangar. 
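A sketch of that two-probe arrangement from script (sizes and importance values are illustrative): a large, low-importance probe for the exterior and a tighter, higher-importance probe for the hangar, so the interior probe wins whenever the character is inside its box.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: the exterior probe covers the whole diorama, the interior probe
// overrides it inside the hangar thanks to its higher importance.
public static class ReflectionProbeSetup
{
    public static void Configure(ReflectionProbe exterior, ReflectionProbe interior, Bounds hangar)
    {
        exterior.mode = ReflectionProbeMode.Baked;
        exterior.size = new Vector3(200f, 100f, 200f); // big box around everything
        exterior.importance = 1;

        interior.mode = ReflectionProbeMode.Baked;
        interior.center = interior.transform.InverseTransformPoint(hangar.center);
        interior.size = hangar.size; // just the hangar volume
        interior.importance = 2;     // wins over the exterior probe inside this box
    }
}
```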
As I was saying, it's an approximation. You don't see the reflection updating as she moves here or here, but it's good enough. And to obtain this secondary reflection, I created another Reflection Probe, which is here, and as you can see, this one has captured, you can actually see it here in the Preview, it has captured the inside of the hangar. And the point here is that this Reflection Probe has a different box which is just the size of the hangar, and I set it to be on Importance 2, which means that when she's inside this box, it will take this Reflection Probe and not the one outside. So it's kind of like a Subtractive mode, if you want. So there's this box here, and then this one here subtracts from it, and inside this volume this Probe is gonna be used instead of this one. I just wanna point out that actually even if you didn't have these two Reflection Probes, and I wanna show you by bringing the Terraformer out, even if I didn't have these two Reflection Probes enabled, she will still reflect the sky, and that's because even if you don't have any Reflection Probe in the Scene, when you bake lighting, even if Baked Global Illumination is not enabled and it's just real-time lighting, Unity still produces a third Reflection Probe for you. So you see here I have three Reflection Probes. The two that I created, the interior and the exterior one, and then there's always a third one, I call it the courtesy Reflection Probe: for any area where you don't have a Reflection Probe in effect, Unity effectively creates a third one that just includes the Skybox. So that's the reason why, even if you have an empty Scene and you create, like, a sphere, and you make it very reflective, you still see the sky, and it's because of this Reflection Probe, which Unity creates for you, even when you don't have the ones that you create in your Scene. We've reached the conclusion of this talk. I just wanna leave you with a few final remarks and suggestions to make your bakes better when you're baking lighting with URP. The most important thing is, obviously, understanding the contribution of each and every setting. Baked lighting, especially, has a few settings and they're all documented, so I suggest, after you watch this talk, going into the Documentation and just reading the ones that are not super clear to you, and really understanding what they do, because they can really make or break your bakes. I suggest baking smartly and refining at the end of the process. What I mean is, when I showed you before the settings that contribute to the quality of the game and the settings that contribute to the performance of the game, there are some of them that you can keep low when you bake and when you make experiments, and you don't really need to have them at the maximum quality all the time. So bake smartly to be able to be faster. And obviously, use the GPU Lightmapper, which is much, much faster. And then you want to refine at the end of the process. So when you're done, when you've decided the color of your lights, the intensity, the position, you can then bump up all of those settings and produce the final quality of the Lightmap. You want to prepare your assets accordingly; this is very important. Establish a proper workflow. So prepare the UVs, as I said before, they really affect the way that the baked lighting calculates the UV Charts, which in turn is gonna affect the filtering. You wanna prepare the normals accordingly because if the normals are wrong, then the lighting is gonna be wrong. This is super important. 
Sounds silly but so many times there's more mistakes in the normals, like rounded edges when there shouldn't be, and then you get light spillage, lights that seem to come from nowhere, and that's because the normals are pointing in the wrong direction. And then, obviously, the suggestion that I always give, look at the game through the eyes of the player. You're not the player, you're inspecting the game too close when you make decisions on the fidelity of the lighting, and especially when that's connected to the performance, especially when that's, for example, real-time lighting, that can really affect the performance. You need to look at the game through the eyes of the player and think of what they will notice, not what you notice right when you build the game. So if you liked this talk, you can actually download the Project from this URL, you can just grab the assets and play around with them, and really explore the Scene that I created and that I explained during this talk. And if you have any questions, you can come to the Unity Forums, we have a forum thread where you can ask questions to me, to the lighting team, to the URP team, so yeah, really come with any questions you have in your mind, and we'll answer promptly. This is all from me. I hope you enjoyed this talk. I hope, now, lighting with URP is a bit more clear and you can really harness the power of it to make your Scenes look beautiful and performant at the same time. Thank you so much for watching the talk. And if you want to chat, you can hit me on Twitter at that address. Otherwise, I'll see you in the next talk. Bye-bye. ♪ [MUSIC] ♪
Info
Channel: Unity
Views: 59,956
Keywords: Unity3d, Unity, Unity Technologies, Games, Game Development, Game Dev, Game Engine, baking gi, lightmaps, lightmapping, gi, global illumination, lighting, urp, universal rendering pipeline, gpu lightmapper, lightmapper
Id: hMnetI4-dNY
Length: 60min 11sec (3611 seconds)
Published: Wed Jul 22 2020