Unreal 5.3 - Making a volumetric ray marching shader from scratch (part 1)

Video Statistics and Information

Captions
Hola amigos, welcome to this new tutorial series where we're going to create not only the volumetric ray marching material that you can see here, but also an editor that will allow us to create our own custom pseudo-volume textures within the engine. These kinds of materials can be used to render things like these clouds here, and as you can see they support ambient and direct lighting, self-shadowing, and all kinds of transforms, so we can scale, rotate, and move this and it works as one would expect. As you can see here as well, it supports intersection with opaque objects.

Now, there are a couple of reasons why I'm making this tutorial series. The first is to teach you all about volumetrics and the ray marching algorithm, which I think is pretty fun. The second is that, while some of this content is available in the engine (if you search for "volumetrics" in the plugins folder, there's a plugin that is a library of volume creation and rendering tools using Blueprints), at least as of Unreal 5.3.1, the version I have installed on my PC, some of that content doesn't work. In particular, everything related to pseudo-volume textures doesn't render depth correctly, so if we used the content straight from the engine, things like the intersection with this sphere wouldn't work right. We also get artifacts when the camera gets too far away or too close, and things are not rendered correctly.

The other thing I want to do before starting the tutorial is thank Ryan Brucks, principal technical artist at Epic Games, who has a blog called Shader Bits that teaches all of these concepts in wonderful detail. And now, without further ado, let's get started.

In Unreal, most objects are rendered using polygons. These define the surface of an object, and of course, if that surface is closed, we have a volume. But with certain kinds of objects, like this cloud here, polygons are not really a viable
option. We could always turn it into a billboard, but if we wanted to move the camera in and out of the cloud, we would need volumetric rendering.

Now, there are multiple ways to define the contents, or in this case the density, of a volume (for example, using signed distance fields and some math), but for this series of videos we're going to use textures. There are also a couple of options here. Since version 5, Unreal supports volume textures, which store the information in a three-dimensional space, much like a 3D grid of voxels: each point in the texture has XYZ (or UVW) coordinates and one value. The advantages of this method are that you only need one sample to get any value from the texture, and that it can be accelerated by GPU hardware, so it's very efficient. However, authoring and editing these textures is more complicated, and since Unreal doesn't have volume render targets yet, it requires other 3D software like Blender or Houdini.

Since we're going to make a cloud painter in a future video, for this one we'll use pseudo-volume textures, which are very similar to sprite sheets. As we can see here, we have a single texture with multiple frames: 12 per side, so 144 total. To render this as a volume, we can cut and arrange these frames so each one becomes a different slice of our volume, and when we put them all together we have something we can render. I mentioned earlier that with volume textures you only need one sample; with this type of texture, when we want to know one value from the volume, we need to sample it twice. For example, say we want the value at this point: there's no slice exactly here, so we need to sample this slice and then this one, and then blend, or lerp, between both to approximate the value around this point. Otherwise we would see stepping on the axis that is sliced.

Now that we understand how we're going to use the textures, let's see an overview of how a volume rasterizer, or ray marching, algorithm works.
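To make that two-sample lookup concrete, here is a small CPU-side sketch in Python. This is illustrative only: the `sample_slice` callback stands in for a 2D texture fetch of one frame, and the atlas layout (slices tiled left-to-right, top-to-bottom) mirrors the pseudo-volume textures described above; the real version is HLSL inside Unreal.

```python
def frame_to_atlas_uv(u, v, frame, frames_per_side):
    # A pseudo-volume texture tiles its Z slices left-to-right,
    # top-to-bottom. Map a slice-local UV plus a frame index to the
    # UV within the big atlas texture.
    col = frame % frames_per_side
    row = frame // frames_per_side
    return ((col + u) / frames_per_side, (row + v) / frames_per_side)

def sample_pseudo_volume(sample_slice, num_frames, u, v, w):
    # W rarely lands exactly on a stored slice, so sample the slice
    # below and the slice above, then lerp between them. Without the
    # lerp we would see stepping along the sliced axis.
    z = w * (num_frames - 1)
    z0 = int(z)                        # slice below
    z1 = min(z0 + 1, num_frames - 1)   # slice above
    t = z - z0                         # fractional blend factor
    return (1.0 - t) * sample_slice(u, v, z0) + t * sample_slice(u, v, z1)
```

With 144 slices, sampling at w = 0.5 blends slices 71 and 72 equally, which is exactly the "sample twice and lerp" idea from the video.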
Since our texture defines a cube, we'll use that as our base mesh; here we're looking at it from the side. For this example, let's take a sphere. If we want to render this sphere's volume from this camera's perspective, we need to paint the correct color into all of these pixels here. We can take the direction from the camera through each pixel and move forward one step at a time, sampling the texture at each position along the way, and this idea of the rays marching forward is what gives the algorithm its name.

Now, if we sample the texture and, for example, take just the first collision, we would get something like this: these pixels would be green and these ones would be the background color, and with that we could render a perfect volumetric sphere. But that's not very interesting, and we want clouds, so let's do that instead. We're going to let the rays continue throughout the volume and return the accumulated density they find. For example, if this sphere had constant density, the rays that went through its center would have a higher accumulated value than the ones that barely grazed it, which gives us a result more like this. This is going to get a bit more complicated once we add lighting, color, and other effects, but the ray marching algorithm is still going to be the base of most of it. And now, without further ado, let's head over to Unreal and start working on it.

Let's start by creating a new material that we can call something like Tutorial_RayMarch. Let's open it up and, in the material properties, change the shading model to Unlit, because we only need the emissive color for now. All of our operations are going to happen in a Custom node running some code, so add a Custom node and let's start by adding some inputs. We're going to need seven of them, so: one, two, three, four, five, six, and seven. Let's give them some names. First, we
want to specify which pseudo-volume texture we want to sample, so call this one Tex. Next, the number of frames per side in our texture: XYFrames. Next, in the code we're going to use the total number of frames, which is very easy to calculate (it's just XYFrames times XYFrames), but if we run that calculation inside the Custom node it's going to be slightly less performant, adding a couple more instructions compared to doing it outside; this is just because of how Unreal works. So let's also add the total number of frames as an input and call it NumFrames. Next, we want to specify the number of steps for our ray marching algorithm, so call this one MaxSteps. Similarly, in the code we're going to use the size of each step to move each ray forward, and that size is just 1 (the total side of the cube) divided by the maximum number of steps. Again, we could do that operation in the code, but moving it outside saves an instruction or two, so let's add another input called StepSize. Only two more now: we need the direction of the camera in local space, so call this one LocalCamVec, and finally we need to tell the code which position we want to sample the volume from, so call this one CurPos.

Now let's fill in these parameters. For the texture we want a Texture Object parameter, so we can change it later on different instances; we can call it VolumeTexture. Next, a scalar parameter for XYFrames, which in our case is 12. Now let's add a Multiply node, connect XYFrames to both of its inputs to multiply it by itself, and connect the result to NumFrames. Then another scalar parameter for MaxSteps; let's start with maybe 16.
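Purely as an illustration of what we just precomputed in the node graph, here are the two derived values in a Python sketch (the names mirror the Custom node inputs created above):

```python
def derived_inputs(xy_frames, max_steps):
    # These are computed in the node graph rather than inside the
    # Custom node's HLSL, which saves a couple of shader instructions.
    num_frames = xy_frames * xy_frames  # e.g. 12 frames per side -> 144
    step_size = 1.0 / max_steps         # the cube is 1 unit per side
    return num_frames, step_size
```

With 12 frames per side and 16 steps, this gives NumFrames = 144 and StepSize = 0.0625.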
And let's also add a Divide node to do 1 divided by MaxSteps, and connect it to StepSize. Now, for the local camera vector, let's get the Camera Vector node, which as you can see says CameraVectorWS, WS for world space, so we need to add a Transform node and convert from world space to local space. Finally, for the current position we're going to use a node called BoundingBoxBased_0-1_UVW, and before we connect it, let me show you what it does. If we connect its RGB output to the emissive color and change the preview to a cube, we get this beautiful rainbow cube, and if we look at the bottom you might get an idea of what's happening here: this looks very much like default UV coordinates. If we connect just the red output to the emissive color, we get a gradient, and if you look at the gizmo at the bottom, we can see the gradient matches the X axis, going 0 to 1 along X. The same goes for the green channel, which in this case maps to the Y axis, and finally the blue channel maps to the Z axis. Cool, now this RGB makes a lot more sense. Let's put it back and connect the RGB output to CurPos. We can connect the output of the Custom node to the emissive color for now.

Instead of editing the code here, because this editor kind of sucks, let me open a Notepad or WordPad session, edit the code there, and I'll be right back to explain what it does. Here's the code; as you can see, it's pretty simple for now. First we create a new variable called accumdens, for accumulated density, and this is going to store all the density in the volume that our ray encounters as it marches forward through it. Next we have a for loop that is going to run MaxSteps times, and inside this loop we're doing three things, one per line. In the first line, we take a sample from the texture and store it in a variable called
currsample. We do this using a function called PseudoVolumeTexture, which requires the following parameters: first, a texture to sample and a sampler to use (don't worry too much about this parameter, just use TexSampler); next, a position to sample, which we saturate to make sure it stays in the 0-to-1 range; then the number of frames per side as a float2 (or a float that is then converted to a float2); and the total number of frames. We store currsample as a single float because we only keep the red channel (or any one channel) of the texture. In the next line we update our accumulated density, which started at zero, by adding the current sample value times the step size. We scale by the step size because, if you think about it, a ray marching forward in big steps would otherwise accumulate more density than one marching in tiny steps. Finally, we update the current position so our ray marches forward, by adding the negative of the local camera vector (our camera direction), also scaled by the step size. We reverse the local camera vector, so it's minus the vector, because otherwise the ray would march back toward the camera; that's because of the way our cube is mapped. After the for loop is complete, we just return the total accumulated density.

Now let me copy all this code, go back to Unreal, paste it over the old code, and press Enter. And we have a weird mess. It's pretty cool actually when you look at it from the top, but it's pretty weird, and that's because the texture we're sampling is not a pseudo-volume texture. So let's replace it with a wisp texture and see what we have. Cool, we have a volume now. It has the alpha from the cube, but don't worry too much about that, and it looks pretty cool; when we move the camera around we can see that the tendrils are actually in 3D space, and not just flat planes or anything like that.
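The whole loop described above, translated from the Custom node's HLSL into a CPU-side Python sketch for clarity: `sample_volume` is a stand-in for the PseudoVolumeTexture call, and `entry_pos` and `cam_dir` play the roles of the CurPos and LocalCamVec inputs.

```python
def ray_march_density(sample_volume, entry_pos, cam_dir, max_steps):
    # Accumulate density along a ray through the unit cube. The ray
    # advances against the camera vector (hence the minus sign), and
    # each sample is scaled by the step size so the total does not
    # blow up as the step count grows.
    step_size = 1.0 / max_steps
    pos = list(entry_pos)
    accum_dens = 0.0
    for _ in range(max_steps):
        u, v, w = (min(max(c, 0.0), 1.0) for c in pos)  # saturate to [0, 1]
        accum_dens += sample_volume(u, v, w) * step_size
        pos = [p - d * step_size for p, d in zip(pos, cam_dir)]
    return accum_dens
```

A sanity check on the step-size scaling: marching straight through a constant-density-1 volume returns 1.0 whether we take 16 steps or 32.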
But we have this weird frosted-glass effect on the sides of the cube, which actually betrays that our shape is rendered as a fake. If we look at it from the front, or from any of the sides, it doesn't look bad, but when we look at it from grazing angles we can see this weird artifact. Now, we could increase the number of steps to maybe 128, and that gets rid of the artifact, but we still want to fix the underlying problem, so let me go back to 16 steps and switch to Photoshop for a moment to explain why this is happening.

The reason for this artifact is the way we're passing the current position to our function. If you remember, we're using BoundingBoxBased_0-1_UVW, which gives us gradients that are aligned with the box: a 0-to-1 gradient on every axis. That means that no matter where the camera is pointing from, when the rays intersect this box they will march through sample positions aligned with this grid, and if we're using a small number of samples, or if our texture has values around the edges, those values get blended in a way that is visibly aligned with the grid. To fix this problem, we just need to realign the slicing so it faces the camera, so we have something like this; that way the sampling planes are aligned with the camera and the artifacts become almost invisible.

Now let's go back to Unreal and make this happen. Conveniently for us, the Volumetrics plugin that I mentioned at the beginning of the video has a function that does exactly that, so let's add a Material Function Call; the one we're looking for is called VolumeBoxIntersect (and you can recreate this function yourself in case you don't want to enable the Volumetrics plugin). Just to explain what it does, let's go inside by double-clicking on it and examine the nodes. We have a custom node with
three inputs. First, a scalar that can be 0 or 1 depending on whether or not we want to align the planes with the camera; next, the maximum number of steps of our ray marching algorithm; and finally, the scene depth. Here's the code. Feel free to pause the video and copy it into your own custom node if you want, but basically, it finds the planes that are perpendicular to the camera and intersect this bounding box. It finds the first intersection, the first point where the ray touches the bounding box, but also the last one, the last point where the ray is inside the volume before leaving it. In theory we wouldn't need that second point, right? We would just need the entry position. However, if you look at the last line, it says return float4(EntryPosition, BoxThickness), and if we examine the rest of the function, we can see that the result is split with a Component Mask: the first three components are the intersection of the ray with the mesh, and the fourth is the thickness of the box.

Now, there are still reasons to compute that thickness. The first is that if we just march over a 0-to-1 range, as we were doing earlier with our bounding box gradient, and the camera is looking, for example, from this corner, the distance between the entry point and the point where the ray leaves the box is around 1.7, so a 0-to-1 range would make the ray go like this and then stop around here, and this part of the volume would never be rendered. The second reason is a little bit of an optimization: say we're looking at the box right at this point, so we don't need a lot of steps to cover the distance between this point and this point, and a lot of our steps would be wasted tracing nothing. We can use the box thickness to pre-calculate the number of steps we need to cover any distance. Now, there's a third reason to use this node.
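The entry-point-plus-thickness idea can be sketched as a standard ray/box "slab" test. This is an illustrative reimplementation in Python against a unit cube, not the plugin's actual HLSL; the function name and parameters are assumptions for the sketch.

```python
def box_intersect(cam_pos, ray_dir, lo=0.0, hi=1.0):
    # Slab test against the unit cube: for each axis, find the t range
    # where the ray is between that pair of planes, then intersect the
    # ranges. Returns (entry_point, thickness) or None on a miss.
    t_near, t_far = float("-inf"), float("inf")
    for axis in range(3):
        if ray_dir[axis] == 0.0:
            if not (lo <= cam_pos[axis] <= hi):
                return None  # parallel to this slab and outside it
            continue
        t0 = (lo - cam_pos[axis]) / ray_dir[axis]
        t1 = (hi - cam_pos[axis]) / ray_dir[axis]
        t0, t1 = min(t0, t1), max(t0, t1)
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    if t_far < max(t_near, 0.0):
        return None  # the ray misses the box, or the box is behind us
    t_near = max(t_near, 0.0)  # the camera may already be inside
    entry = [cam_pos[i] + ray_dir[i] * t_near for i in range(3)]
    return entry, t_far - t_near
```

Note the `max(t_near, 0.0)`: when the camera is inside the volume, the entry point becomes the camera position itself, which is exactly the behavior we will need later for flying into the cloud.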
It has to do with a problem you might have noticed earlier: we cannot go inside this mesh. If we move the camera inside the cube, everything's gone. So let's first connect these inputs, and then we'll fix the camera-inside-the-volume problem. Here we need to connect the maximum number of steps to the Steps input, and for the second one we can create a new scalar parameter, call it AlignPlanes, and connect it here. If we now just replace the CurPos input with our intersection position, we get an error that says only translucent or post-process materials can read from scene depth, and as we just saw, this node is using the scene depth. So let's go to our material parameters and change the blend mode, in this case to AlphaComposite (premultiplied alpha). Now our volume is back, and if we reduce the number of steps back to something like 16 we have the artifacts again, but if we enable the AlignPlanes option, they're gone. We can still see a little bit of an artifact in the direction the camera is moving, like a very soft triangle here, but it's very subtle, and we're still at only 16 steps, so if we set it to something like 32 or 64,
we already get pretty good results. Now, we still have the can't-go-inside problem, so let's see how to fix that. Next we're going to need a new mesh that we'll create from scratch. We could do this in any 3D package like 3ds Max, Blender, or Maya, but the operations we have to do are so simple that we can do them in Unreal using the modeling tools. To enable them, go to Plugins, search for "modeling", and there are two plugins we can enable: Modeling Tools Editor Mode and Static Mesh Editor Modeling Mode. After enabling these and restarting, we have a new mode in the viewport called Modeling. First we'll build the box: go to Create > Box and place a box next to the other one, then click Accept. Now we have a box, but the pivot is not centered like on the other one; it's centered on one face. So let's go to the XForm tab, choose Edit Pivot, click Center, and accept. Perfect. Now we need to invert the normals on this cube so we only see the interior faces, and to do that we go to the Attributes tab, where we have Normals, and check the Invert Normals checkbox. When it's on, the normals are inverted, so, as I mentioned, we only see the interior faces. Perfect.

The reason this is going to work is that, now that we're using VolumeBoxIntersect, we get a ray that goes from the camera to the first face it encounters and then marches toward the other side of the volume; that's basically the algorithm. Obviously, if we do this here, the ray is still going to intersect the first face it sees, which with inverted normals is the face at the far side of the volume, and from there it will march toward the camera. To test that, let's go back to selection mode and apply our material to both of these. Actually, we can double-check that our new mesh has been
created in the same folder, and we can even rename it to something like Box_NormalInverted. Now we can apply the material, and at first glance they look the same; however, we still cannot go inside the original one, but finally we can move the camera inside our little cloud with the new one, and that's pretty cool. Perfect.

The next thing I want to address, finally, is the opacity of this cube. We can create a new vector parameter, call it something like CloudBaseColor, connect it to the emissive color, and then reconnect the Custom node's output to the opacity. Now we have a cloud whose color we can change, maybe to something more festive like pink. Yay. That's pretty cool, but it's not really accurate, and there is a better way to calculate how this density affects visibility: something called the Beer-Lambert law. It's normally used in chemistry to calculate how much light is absorbed by a medium, and Unreal has a version of it. If we search for "beer" we find Beer's Law, which, if we look at the tooltip, is just e to the power of minus the density. Our density is the output of our Custom node, so we can connect it to the Thickness input. Then for Depth Scale, let's create a new scalar parameter, call it maybe DensityScale, give it a value of 1 for now, and connect it to the Depth Scale input. Let's connect the result to the opacity and see what happens. We have something, but it's the negative of what we want, and that's because this formula gives us how much light is absorbed by the medium; since we're using it for opacity, we need the opposite of this value, which we can get with a simple OneMinus node. Search for OneMinus, reconnect it to the opacity, and there we have our cloud. Now we can scale the density, maybe to something like 16. Yeah, that looks pretty cool. Now we can apply and save and go back to the viewport.
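The Beer's Law node plus the OneMinus node amount to a one-liner. A Python sketch of the same math, where `density_scale` plays the role of our DensityScale parameter:

```python
import math

def cloud_opacity(accum_density, density_scale=1.0):
    # Beer-Lambert: e^(-density * scale) is the transmittance, the
    # fraction of light that survives the medium. Opacity is one
    # minus that (which is why we needed the OneMinus node).
    transmittance = math.exp(-accum_density * density_scale)
    return 1.0 - transmittance
```

At zero density the opacity is 0, and it approaches (but never quite reaches) 1 as density grows, which is why cranking DensityScale up to 16 makes the cloud read as nearly solid.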
Back in the viewport for a moment, let me demonstrate the next problem we'll have to fix. Here we have our beautiful pink clouds; we can still go inside this one and outside, and it looks great, right? However, let's see what happens when I move this sphere inside the cloud. Whoop. I'm completely inside the collision, but it's clearly not working, and if I move it around this part you get an idea of what's happening: what we're seeing here is the intersection between the sphere and the cube, the mesh, not the volume inside. Here we're looking at the back faces of our cube, and if we move it to the other one, we see the front faces. That makes sense, but it's not what we want, so let's go back to the material editor and see how to fix this new problem.

The first thing we want to do is disable depth testing. We don't really care about testing the mesh, this box, against the rest of the scene; we only care about the volume. So let's search for "depth" in the material parameters and check the box for Disable Depth Test. Now it's up to us to determine which objects are painted in front of the cloud and which behind it, and we can do that using the box thickness output that we haven't used yet. If we want to use this as our number of steps, we have to remember that this value currently goes from 0 to about 1.7, so we need to scale it by our MaxSteps parameter. Let's multiply the box thickness by MaxSteps, so it goes from zero steps at the edge to a hundred and something in the extreme case. We need to pass an integer, so let's Floor this number, and then Clamp it just to make sure it doesn't reach extreme values, maybe 256 as the maximum. Now we can reconnect this to MaxSteps, and it should be working. Let's apply and save, and you'll see that it really doesn't. We have this sphere clearly a couple of units behind the cloud, but it's still painted on top.
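The Multiply, Floor, and Clamp chain described above boils down to this (a sketch; 256 is just the clamp maximum we picked):

```python
def steps_from_thickness(box_thickness, max_steps, clamp_max=256):
    # box_thickness runs from 0 at the silhouette edge up to ~1.73
    # (the cube's diagonal) in local units, so scale it by the step
    # budget, floor it to an integer step count, and clamp away
    # extreme values.
    return min(int(box_thickness * max_steps), clamp_max)
```

So a grazing ray through a thin sliver of the box gets only a handful of steps, while a ray down the diagonal gets the full budget, which is the optimization the box thickness buys us.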
And if we move the camera closer, we can now see the effect we're looking for; this is what we want to happen when the sphere is inside the cloud, but it clearly isn't inside, so there's something wrong here. If we move the camera up, we can also see that our cloud disappears, kind of fading starting from the bottom. If you've been using the content from the Volumetrics plugin, you might have encountered this problem and wondered how to fix it, because some of the materials in the plugin, at least in version 5.3, are still out of date.

So let's go to our material and, for now, disconnect the opacity and connect the box distance output to the emissive color. I'm not worrying about rearranging things; this is just to show what's happening. At first glance it seems to be working: we have a value that is zero at the edges, and as the planes get farther from the camera, we get values that are closer to one, in some cases even higher than one. So there seems to be nothing wrong; it works fine, right? Still, no: there's some camera-depth issue. If we hit apply and save and go back to the viewport, we can see that there's something wrong with the way the formula is calculating the depth. This sphere is behind the cube, so the values here should be one, and if the sphere were inside the box, we should see values kind of like this, getting darker as the sphere moves farther away; but that would be if the sphere were here. If we move the camera up, we can see the same problem: in this case we're looking at the depth of the floor.

The problem, if we go back to the material, is in this VolumeBoxIntersect function. Let me put things back the way they were, connect this to the opacity and our CloudBaseColor to the emissive, and now we can go inside the function and fix the problem. The problem is in the code of the custom node, and thankfully for us, Ryan
Brucks, who I'm pretty sure is the one who wrote this code, left some comments from a previous version that will be very useful. Take a look at these two lines here: we're normalizing a value into the 0-to-1 range by first dividing LocalSceneDepth by some scale factor, and then dividing it again by the dot product of the camera forward vector and the camera vector. These lines are commented out right now, so they're not being used, and if we uncomment them we get an error that says Scale is not defined. It's true, we don't have a Scale variable, so let me go back to WordPad for a moment and show you the line we need to add. It's pretty simple: we're defining a new variable called Scale as the length of a vector, this one here, (1, 0, 0), a vector in local space that measures one unit. We need to transform that to world units, so we're using TransformLocalVectorToWorld, and that's the value we store as Scale. Let's copy this line, go back to the code, and paste it near the top, as the second line.

Now we need to comment out a few lines and uncomment a couple of others. We're not going to use the DepthPosition variable, so we can comment out that one, and then all the lines that use it: the float3 position line, the next one where we divide by 256, and finally the one where we set the LocalSceneDepth variable to the length of DepthPosition. You can comment them out or just remove the entire block, as you prefer. The last thing we want to do is uncomment these two lines: LocalSceneDepth /= Scale, and the other LocalSceneDepth /= line. Now we press Enter, apply and save, then go back to our material and save it again. Let's see if this works: let's move the sphere back inside the cloud, and now it's working. It behaves as one would expect, and we
see more of the sphere the closer it gets, though it's still partially occluded. And we can still move the camera in and out, even really far away or really high up, and the clouds are still being rendered. Now, you might have noticed a new artifact: there's this banding that happens around, or inside, the sphere, and that is because of the way we're pre-calculating the number of steps. For example, here we have, let's say, three steps and here four steps (for simplification, three and four here, and then five and six). For the sphere, however, the correct distance might be something like one and a half steps, and we cannot tell the for loop to run four and a half times. What we can do is run the for loop for the whole number of steps we need, and then take one extra step that is shorter: just the distance we still need to cover up to whatever point we're sampling. Hopefully that made sense; let's go back to the material editor and change our code a little bit.

Actually, at some point I changed the blend mode to Additive, so let's change it back to AlphaComposite. Oh, and now we have this artifact, which we can fix by multiplying our color by our opacity mask and then reconnecting it here. Perfect. Let me rearrange this a little bit. Now, for the final step of our ray marching algorithm: we already have the integer number of steps here in this Floor node, and it's already scaled by the maximum number of steps we want, so if instead of the floor we take the fractional part of that number, that's going to be our last step. When we write our code, we'll need to take into account that this is a fraction of one step. Anyway, let's add a new input to our function, call it FinalStepSize, and connect this Frac to it. Cool, and nothing has changed yet, so let's
apply and save, and head over to WordPad to see the changes in the code. Here is the updated code for our custom node, with all the new lines highlighted in red. The first one is not really related to our final ray marching step; the main reason I'm adding it is this function here, LWCToFloat, which stands for "large world coordinate to float". If we go back to our material for a moment (where is it... here): to calculate the camera vector, which we pass in as the LocalCamVec input, we take the Camera Vector node and run it through a Transform node set from world space to local space. However, by default, Unreal now uses the Large World Coordinates system, which is meant to reduce or remove the precision problems of large worlds when you have objects really far from the origin. The problem for us is that there is no large-world transform we can apply here in the node graph. Apparently that's not a problem, and things seem to be working fine, but let's see what happens if I scale this object, say, five times and zoom the camera out. Whoops, now it's clearly not working as intended; we can still see the other wisp, but this bigger one is clearly broken. So let's go back to the code and copy this first line; if we paste it here, overriding that input value, and hit apply and save, the mesh with a modified scale now works properly too. Let's set it back to a scale of one and move this so we can see and fix the banding artifact.

So that's it for that line. The rest is basically a copy of what happens inside the for loop. Think about the order of operations inside the loop: first we take a sample at the current position, then we accumulate the density, and finally we move the ray one step forward. That means that when the loop ends, we have taken our last sample and then moved the ray one extra step, past the point of that final sample.
That means that to get the final position correctly, we need to first take a step back: for the current position we subtract the local camera vector this time, instead of adding it, and then we take one final step with the step size scaled by this FinalStepSize input. After that, it's business as usual: we take the sample and accumulate the density, remembering to scale that density accumulation by the size of this final, shorter step. Now we can copy this and go to our material editor; let's paste these four new lines after the for loop and hit apply and save, and the banding improves. It's gone. Let's expand this and see: we're still at only 32 samples and this looks pretty good. There's a little bit of banding left, but it's very, very hard to notice, mostly at the points where the sphere intersects, where you can sometimes see some clouds kind of increase in density; the overall effect, I would say, is pretty good and works as intended.

And that is going to be it for this first tutorial. As we saw, learning the ray marching algorithm wasn't as complicated as it sounded. In the next video we'll see some lighting options, like direct lighting and ambient lighting, and probably self-shadowing. In the meantime, please consider subscribing and clicking that bell notification button if you want to get alerts when I upload new videos. See you next time!
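The four lines appended after the loop do that step-back-then-short-step dance. Continuing the same Python sketch used earlier (`final_step_frac` is the Frac output we wired into FinalStepSize; `sample_volume` again stands in for the texture fetch):

```python
def final_partial_step(pos, cam_dir, accum_dens, sample_volume,
                       step_size, final_step_frac):
    # The loop leaves the ray one full step past its last sample, so
    # back up one step, then advance by just the fractional remainder,
    # and weight the extra sample by that shorter step size.
    pos = [p + d * step_size for p, d in zip(pos, cam_dir)]         # step back
    pos = [p - d * step_size * final_step_frac                      # short step
           for p, d in zip(pos, cam_dir)]
    u, v, w = (min(max(c, 0.0), 1.0) for c in pos)
    accum_dens += sample_volume(u, v, w) * step_size * final_step_frac
    return pos, accum_dens
```

With a step size of 0.125 and a fractional remainder of 0.5, the ray ends up half a step short of where the loop left it, and the last sample contributes half a step's worth of density, which is exactly what removes the banding.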
Info
Channel: Enrique Ventura
Views: 8,146
Keywords: material, shader, tutorial, unreal, unity, lesson, course, graph, shadergraph, shader graph, nodes, node, texture, 2d, 3d, high quality, 4k, volume, rendering, raymarch, raymarching, 5.3
Id: eDYyBc3cRmw
Length: 46min 18sec (2778 seconds)
Published: Mon Oct 02 2023