I Made A Blob Shooting Game With Ray Marching

Video Statistics and Information

Captions
In this video we're going to create a hybrid game engine with ray marching. Ray marching is a powerful technique that is used a lot in video games to render clouds, smoke, reflections and other cool things, but you can also use it to render 3D objects with a technique called sphere tracing. The number of games that ship every year using that technique is zero, simply because sphere tracing is too slow. So today we're going to build a hybrid solution: we're going to take our normal game engine, which has a character controller with physics, build a ray marching engine separately, and then combine the two to get the final result. All of this should be super duper easy, right? Right.

So we're going to take our normal engine and add a post-processing shader. This shader runs for every single pixel on the screen: we can sample the color of the rendered image and display it as the final color, and if we subtract that color from one we invert the colors, so we have full control over the colors of our pixels.

Now let's implement ray marching, but first we need to understand how sphere tracing works. Imagine that you're blind in a room with some objects. Now I'm God, and I tell you that the distance to the nearest surface is 5 meters. That's not a lot of information, but you know for sure that you need to move at least five meters forward in order to touch anything. So you move 5 meters forward, and I tell you again that the nearest surface is two meters away. You do what you did last time, because if you move any less than two meters you know for sure that you're not going to touch anything, and you repeat that until you reach a surface or you just get tired and give up. This is how ray marching works, except that instead of one person, every pixel on your screen marches forward. You start with the camera position and you pick a ray direction based on the position of the pixel on the screen; we'll come back to this line later, but let's move forward for now. Then you march forward in a loop and call the SDF function to get the distance to the nearest surface, and when that distance is less than some epsilon value you know that you've hit something. For now I'm just going to draw white on the pixels of the screen.

For the signed distance function of our scene we're going to pick a cube's SDF, and all we have to do is define an origin and the size of the cube. If we take a look at the result we can notice that something has gone wrong, and this wasn't too hard to figure out, because the ratio of the width and height of the box doesn't match what we have in the code. To fix this you need to take the aspect ratio of the camera into account, so all you have to do is multiply uv.x by the aspect ratio. To test the result, I'm going to add another cube with the same position and size to our normal engine, which renders triangles. If you take a look at the result, the two cubes do not match, and when you move or rotate the camera it's even worse. So I'm going to update the camera's position to match the camera position in our normal engine, and to fix the camera's rotation we can apply the camera's quaternion to the ray direction. To make things a little bit simpler we can put all of this code into a getCameraToPixel function. If we run this, the result has improved, but the cubes still don't match.
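As a reference for the marching step described above, here is a minimal GLSL sketch of a box SDF and the basic sphere-tracing loop. The names (sdBox, sceneSDF, rayMarch) and the constants are illustrative, not the exact ones used in the video:

```glsl
// Standard signed-distance function for a box centered at `center`.
float sdBox(vec3 p, vec3 center, vec3 halfSize) {
    vec3 q = abs(p - center) - halfSize;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0);
}

float sceneSDF(vec3 p) {
    // One cube at the origin; a real scene would combine several SDFs.
    return sdBox(p, vec3(0.0), vec3(0.5));
}

// Returns the distance travelled along the ray, or -1.0 on a miss.
float rayMarch(vec3 rayOrigin, vec3 rayDir) {
    const int   MAX_STEPS = 128;
    const float MAX_DIST  = 100.0;
    const float EPSILON   = 0.001;

    float t = 0.0;
    for (int i = 0; i < MAX_STEPS; i++) {
        float d = sceneSDF(rayOrigin + rayDir * t);
        if (d < EPSILON) return t;  // hit: the video draws white here at first
        t += d;                     // safe to step forward by the nearest distance
        if (t > MAX_DIST) break;    // gave up
    }
    return -1.0;
}
```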
That's because we've overlooked a very important implementation detail: the ray direction is a three-dimensional vector with UV as its X and Y components and one as its Z component. Naturally I experimented a little with this number and arrived at 1.3, and 1.3 makes everything look perfect in our scene. Why is that? What is 1.3? At this point I could move on and, you know, not care about this number, but that's not how engineering works, so I went into deep thought to understand where this number comes from.

Let's take a look at a simplified version of the problem. I have two cubes and two scenes, and for the sake of simplicity, for the rest of this video I'm going to call these the real and virtual scenes: the real one is rendered using triangles, and the virtual one is rendered using ray marching. I also have two cameras which need to render their scenes. Now, if our cubes have the same position and size, then why is there an inconsistency between the two renders? You guessed it: the two cameras need to have the same settings.

To fix this we need to explore how perspective cameras work. Let's say we have a camera in space that is looking in a certain direction. In computer graphics we have the concept of a near and a far plane, simply because the computer is not able to handle infinite precision, so we need to limit the rendering area to avoid precision issues. If we draw the lines that go from the camera to the four corners of the far plane, they shape the camera's viewing frustum, and we can also change the angle between these four lines, which is called the field of view of the camera. In real cameras there is an image plane that forms behind the camera, but in 3D graphics this doesn't exist and the image plane is simply imaginary; for simplicity we're going to assume it sits on the near plane for the rest of this video. So if we move the near plane forward past the object, the object will no longer be in the viewing frustum and as a result we won't render it. But this description of the perspective camera doesn't quite align with how the camera in ray marching works.

Because I was confused about this 1.3 number, I deleted the getCameraToPixel function and decided to rethink and rewrite it. Let's think about the camera's viewing frustum again. The rays that we're going to shoot are all going to be inside the viewing frustum of the camera, so let's consider the four rays that hit the corners of the far plane. If we extend all of our rays far enough, we can assume that every ray is going to hit the far plane somewhere in the middle. With that key piece of information in mind, we can assume that every single ray hits the far plane at some world position, and if we calculate that world position, we can subtract the camera's position from it, normalize the result, and get the ray direction that we want. So all we have to do is calculate the far plane size, use this algorithm to get the world position of the intersection point, and then normalize it to get the ray direction. Notice that we don't need to subtract the camera's position here, simply because this vector is already positioned relative to the camera. And now everything works beautifully. By the way, if you're still curious about what that magic number 1.3 was, you can calculate it with this algorithm; notice how it takes the field of view of the camera into account. I found this algorithm in a game dev forum way after I found my original solution, but both functions are mathematically correct and will give you the same result.
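Here is a minimal GLSL sketch of how such a camera-to-pixel function could look. The names (getRayDirection, rotateByQuat) and the inputs (uv remapped to the -1..1 range, the vertical field of view in radians, the camera rotation as a unit quaternion) are assumptions for illustration, not the exact code from the video:

```glsl
// Rotate a vector by a unit quaternion (xyz = vector part, w = scalar part).
vec3 rotateByQuat(vec3 v, vec4 q) {
    return v + 2.0 * cross(q.xyz, cross(q.xyz, v) + q.w * v);
}

vec3 getRayDirection(vec2 uv, float aspect, float fov, vec4 cameraQuat) {
    // Distance from the camera to an imaginary image plane of height 2.
    // For a fairly typical 75-degree field of view this is 1.0 / tan(37.5 deg),
    // roughly 1.30, which would explain a "magic" 1.3 found by experimentation.
    float planeDist = 1.0 / tan(fov * 0.5);

    // Flip the sign of the Z component if your camera looks down -Z
    // (the three.js convention).
    vec3 dir = vec3(uv.x * aspect, uv.y, planeDist);
    return rotateByQuat(normalize(dir), cameraQuat);
}
```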
All right, so now, after rather a lot of work, we can render some objects correctly. Let's use some of the cool powers of ray marching and merge the geometry of our cube with the objects in the scene. I'm going to add a sphere that intersects our cube, and for this to work we need the SDF of our scene. Unreal Engine calculates the scene's SDF and stores it in a 3D texture, but that's simply too much work, so instead we're going to do a simple trick: thanks to the camera's depth texture we know the depth of every single pixel, so we can use that together with the ray direction we calculated to get the world position of every pixel, and we can always calculate the distance to that world position. That's what our SDF of the real scene is going to look like. I know that this isn't mathematically correct, but we can use this fake SDF to do what we want: we can smoothly merge the geometries of our virtual cube and the real scene using the smooth min function, which I grabbed from IQ's website.

This is all very cool, but I'm getting tired of seeing white as the color of my pixels, so let's add some lighting and make this look a little bit better. I'm going to gather all the lights in the scene in an array and add some lighting. For lighting we need normals, though, and we can calculate those easily by sampling the SDF a couple of times. Remember that we're using a fake SDF for the real scene, so the normals at the edges of the blending aren't always going to look right, but that's fine; we can hide those using some coloring tricks. Consider looking at water, for example: if you look at the water at a 90-degree angle you're going to see all the rocks under it, but if you look at it at a low angle you're going to see reflections. This is called the Fresnel effect, and we can calculate the Fresnel factor using the normals, which tells us at what angle we're looking at the pixel. We can use that to add some reflections, and finally we can calculate a soft edge around the areas that get merged with the scene and use it to softly fade the edges of our object.

We can make this look a lot better by blending the colors of our objects. By refactoring our code to use a surface struct we can store more information about the color, specular color and shininess of our objects, and then we can write a new smooth min to handle merging these surfaces. Blending with the color of the real scene doesn't really look good, though, so let's just blend between the colors of our virtual objects. Finally, we can add some wobbly noise to our spheres by pushing the distances to the spheres around in the direction of the normals; keep in mind that we can calculate the normals of the spheres very easily, because a sphere's normal is just the vector from the center of the sphere to the point on its surface. Next I add some physics to the spheres so I can push them around, and let's add a shooting impulse to the collider whenever the player clicks the mouse, making sure to push the objects in the camera's viewing direction. And now I can shoot some blobs around. Fantastic!
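For reference, here are minimal GLSL sketches of two pieces described above: the fake SDF of the real scene reconstructed from the depth buffer, and Inigo Quilez's polynomial smooth minimum used to blend it with the virtual geometry. The names, and the assumption that the depth has already been converted to a distance along the ray, are mine rather than the video's:

```glsl
// "Fake" signed distance to the rasterized scene: the distance from point p to
// the world position this pixel's ray hits, reconstructed from the depth buffer.
// Only meaningful along this pixel's ray, which is why it is called fake.
float realSceneSDF(vec3 p, vec3 cameraPos, vec3 rayDir, float depthAlongRay) {
    vec3 sceneWorldPos = cameraPos + rayDir * depthAlongRay;
    return distance(p, sceneWorldPos);
}

// Polynomial smooth minimum (from iquilezles.org); k controls the blend width.
float smin(float a, float b, float k) {
    float h = clamp(0.5 + 0.5 * (b - a) / k, 0.0, 1.0);
    return mix(b, a, h) - k * h * (1.0 - h);
}

// Merging the virtual geometry with the fake scene distance, for example:
// float d = smin(virtualDist, realSceneDist, 0.3);
```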
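And sketches of the normal estimation and Fresnel factor mentioned above, where sceneSDF stands for the combined distance function and the epsilon and exponent are illustrative values:

```glsl
// Estimate the surface normal by sampling the SDF around the hit point
// (central differences along each axis).
vec3 estimateNormal(vec3 p) {
    const float e = 0.001;
    return normalize(vec3(
        sceneSDF(p + vec3(e, 0.0, 0.0)) - sceneSDF(p - vec3(e, 0.0, 0.0)),
        sceneSDF(p + vec3(0.0, e, 0.0)) - sceneSDF(p - vec3(0.0, e, 0.0)),
        sceneSDF(p + vec3(0.0, 0.0, e)) - sceneSDF(p - vec3(0.0, 0.0, e))
    ));
}

// Schlick-style Fresnel factor: near 0 when looking straight at the surface,
// near 1 at grazing angles, which is where the reflections get blended in.
float fresnelFactor(vec3 rayDir, vec3 normal, float power) {
    return pow(1.0 - max(dot(-rayDir, normal), 0.0), power);
}
```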
But unfortunately, the more these blobs grow in number, the slower my game becomes, and at this point I'm thinking about implementing some performance optimization tricks. While researching, I came across a paper called Enhanced Sphere Tracing, and the idea behind it is very simple: sometimes when we shoot rays, a ray gets really close to the surface of an object but never touches it. We can speed this process up by increasing the real nearest distance just a little bit, and this way we sample fewer points along the way. But this means that if we get to an object we're going to pass right through it, so to handle this case we check whether the signed distance to the nearest surface is negative, and if it is, that means we're inside the object; in that case we go back one step and from that point on use the real distances again. Quick pause here: I just want to point out that my approach doesn't match the approach of the authors of the paper, simply because my approach doesn't give you a perfect image, and there are some edge cases we're not handling here, such as a ray going straight through an object because the object is too thin. Even though my approach isn't perfect, it produces near-perfect results for this specific game, so I just went with the huge performance boost, because the artifacts weren't a big deal. Now, back to the video: we can also increase the epsilon value by just a little bit in every iteration, and that seems to give us pretty much the same result with better performance. And now, after implementing these optimization tricks, the frame rate improved by a considerable amount without sacrificing too much quality.

Thank you. [Music] Now, to be honest, I've kept one secret from you this entire time: everything you saw in this video is built on the web, which means you can go ahead and click on one of the links in the description to play the game in your browser. And that's all I have to say. I really hope you enjoyed this video and learned something new, and if you did learn something new, leave a like and subscribe to my channel. I produce a lot of 3D graphics related content: you can check out one of my shader tutorials here if you want to get started with shaders, and one of my roadmap videos here if you want to start learning 3D graphics and Three.js, which is the render engine I used here. With that, I'll catch you guys in the next videos. [Music]
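As a postscript to the optimization discussion above, here is a rough GLSL sketch of that relaxed marching loop, with the growing epsilon included. This is a simplified reading of what the video describes, not the paper's exact algorithm, and sceneSDF plus all the constants are assumed names and values:

```glsl
// Returns the distance travelled along the ray, or -1.0 on a miss.
float rayMarchRelaxed(vec3 rayOrigin, vec3 rayDir) {
    const int   MAX_STEPS = 128;
    const float MAX_DIST  = 100.0;

    float relaxation = 1.2;   // step a bit further than the reported distance
    float eps        = 0.001; // hit threshold, grown slightly each iteration
    float t          = 0.0;
    float lastStep   = 0.0;

    for (int i = 0; i < MAX_STEPS; i++) {
        float d = sceneSDF(rayOrigin + rayDir * t);

        if (d < 0.0) {
            // We overshot into an object: undo the last step and march with
            // the real distances from here on.
            t -= lastStep;
            relaxation = 1.0;
            continue;
        }
        if (d < eps) return t;   // close enough: call it a hit

        lastStep = d * relaxation;
        t += lastStep;
        eps *= 1.02;             // trade a little accuracy for fewer steps
        if (t > MAX_DIST) break;
    }
    return -1.0;                 // gave up
}
```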
Info
Channel: Visionary 3D
Views: 35,413
Keywords: raymarching, game engine, raytracing, raymarch, smooth min, smooth min function, raymarching shader, glsl shader, hlsl shader, raymarching in 3d, blending color in raymarching, blending geometry in raymarching, three.js, 3d graphics, 3d, computer graphics, graphics programming, ray marching, raymarching game, ray marching game, ray marching game engine, mixing raymarching and renderer, raymarching explanation, raymarching camera, shaders, fragment shader, optimizing raymarching
Id: 9wZL2RzBQyE
Length: 13min 32sec (812 seconds)
Published: Thu Jul 27 2023