A while ago, I posted a video showing some real-time fractals I created, but I never got a chance to explain how they work and how you can build your own. It's actually super interesting, with some really beautiful mathematics and tricks. I'm going to do my best to explain everything, hopefully at a level that everyone can follow, and I'll also be sharing the source code so you can try it out yourself.

At the very basic level, we first need to understand the different types of renderers. A renderer simply takes a 3D scene, which might have triangles, spheres, lines, etc., and renders it to a 2D screen, since unfortunately we don't have holographic displays yet. There are three types of renderers I'll talk about today.

The first is by far the most common, called a rasterizer. With this technique, you start by projecting your 3D vertices onto a 2D camera plane. The rasterizer then finds all the pixels whose centers are inside the geometry, and fills them in with the color or texture of the object. Rasterizers generally support other primitives too, like lines and points. GPUs are heavily optimized for this, which is why you'll generally see it used in video games, both old and new. It's what allows games to run in real time.

The next technique, which you've probably heard of, is called ray tracing. With ray tracing, you send out one or more rays for every pixel and then check for intersections with geometry in the scene. This is done recursively to allow for reflections, refractions, reflections of reflections, and so on. Computing these intersections is not cheap, though, and since every pixel has to trace so many paths and test intersections against so many objects, it's very slow. But it's more realistic, and the results look amazing. This is why you'll typically see ray tracing used in animated movies, where it can often take hours to render just a few seconds of footage.

The last method I'm going to talk about is ray marching.
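To make the ray-tracing idea concrete, here's a minimal sketch of the per-pixel intersection test for a single sphere. The function name and constants are my own choices for illustration, not from the actual source code; a real tracer would loop this over every object and bounce.

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.
    `direction` is assumed to be normalized."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic in t).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0.0 else None

# A ray from the origin looking down +z hits a unit sphere at z=5 at t=4.
print(ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

Doing this test for every ray against every object is exactly the per-pixel cost that makes ray tracing slow.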
Most people have probably never heard of ray marching, which is a shame, because it's probably the most elegant of them all, and it's what we'll be using to generate our real-time fractals.

It starts off very similar to ray tracing: we have a camera, and we shoot a ray out of every pixel. But this time, we don't compute any expensive intersections. We just march the ray forward until it hits the object. If the step size is too big, we'll overshoot, so it needs to be small. But when it's small, the ray has to march so many times that the algorithm becomes completely inefficient and useless.

But suppose we had a magic formula that tells us how far away we are from the scene at every point in space. We'll call this the distance estimator, usually abbreviated DE. If we knew this, we wouldn't have to march a fixed distance anymore. Suppose the distance estimator tells us we're at least five meters away from the scene. Then we know that, no matter what, we can safely march five meters in any direction without overshooting. After marching, we check the distance estimator again at the new point. This process repeats until we're within a tolerance of the object, and it converges exponentially as we get closer.

Also, keep in mind that the distance estimator doesn't have to be EXACTLY the distance to the object. It's really only important that it never be MORE than the true distance. But obviously, the better the estimate, the fewer marches it'll take, and the faster it can render.

So, how do we find this magic formula? It turns out that exact formulas are really easy to derive for primitive objects like spheres and cubes. But what about combining objects? Suppose we have two objects in a scene, each with its own distance estimator. It should make sense that the distance estimator for the entire scene is simply the minimum of the two estimates, since that guarantees you can't overshoot either one.
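The marching loop described above (often called "sphere tracing") can be sketched in a few lines. The sphere DE and the min-of-two-objects rule are from the explanation; the constants and function names here are my own illustrative choices.

```python
import math

def de_sphere(p, center, radius):
    # Exact distance from point p to the sphere's surface (negative inside).
    return math.dist(p, center) - radius

def de_scene(p):
    # Two objects combined: the scene's DE is the min of the parts' DEs.
    return min(de_sphere(p, (0, 0, 5), 1.0), de_sphere(p, (3, 0, 5), 1.0))

def march(origin, direction, max_steps=100, tolerance=1e-4):
    """March until within `tolerance` of the scene; return hit distance or None."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = de_scene(p)
        if dist < tolerance:
            return t       # close enough: call it a hit
        t += dist          # safe to step this far without overshooting
    return None            # ray escaped (or ran out of steps)

print(march((0, 0, 0), (0, 0, 1)))  # → 4.0 (front face of the first sphere)
```

Note how the step size adapts: far from the scene it takes meter-long strides, and near the surface the steps shrink, which is the exponential convergence mentioned above.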
So one way to build up a scene with a million objects is just to take the minimum of a million distance estimators. But that's just as slow as ray tracing, so we haven't really gained any advantage yet.

So now let's start blowing some minds. I'm gonna make a small modification to the sphere's distance estimator by adding a modulo operator. Suddenly, we have an infinite number of spheres, and it runs at nearly the same speed. So, what happened? When we applied the modulo operator, we weren't really changing the sphere; we were actually distorting the space, transforming the open universe into a cylindrical one, where you can go off one edge and reappear at the opposite side.

Now, this isn't the only way to distort space. We can shift, scale, and rotate space as well. But the most interesting of them all are reflections. Reflections are basically giant infinite mirrors that copy one side of space onto the other. And unlike real mirrors, you can actually intersect and pass through them. I can show this with a still pool of water. Notice that the mirrors can take a single object and appear to split it into multiple pieces.

Let's take the classic example of the Sierpinski tetrahedron. We start with a tetrahedron, scale and translate it, and then slide in the first mirror plane. Then we slide in the next mirror plane, and finally one more. That completes one iteration. By the recursive nature of fractals, we can just iterate this until we get the level of detail we want. How cool is that? After just ten iterations, we're already rendering a million triangles.

Before we look at more examples, there's still the issue of lighting effects and color. A lot of effects, like reflections, hard shadows, and depth of field, are essentially identical to how you do them with ray tracing, so I won't go over those. But there are some extra cool effects you can get for free that are unique to ray marching.
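Here's a rough sketch of the two space tricks above, modulo repetition and a mirror fold, applied to a sphere DE. This is just an illustration under my own naming and constants, not the actual shader code; a real fractal chains many such folds, rotations, and scales per iteration.

```python
import math

def de_sphere(p, radius=1.0):
    # Distance from point p to a unit sphere centered at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def de_infinite_spheres(p, spacing=4.0):
    # Distorting space with a modulo: map p into one "cell", centered on
    # the origin, then evaluate a single sphere. The result repeats forever,
    # yet costs exactly one DE evaluation.
    q = tuple(((c + spacing / 2) % spacing) - spacing / 2 for c in p)
    return de_sphere(q)

def fold(p, axis=0):
    # A mirror plane through the origin: reflect one half-space onto the
    # other with an absolute value. One object then appears as two.
    q = list(p)
    q[axis] = abs(q[axis])
    return tuple(q)

# The sphere at each cell center repeats every `spacing` units:
print(de_infinite_spheres((0, 0, 0)))  # → -1.0 (inside a sphere)
print(de_infinite_spheres((4, 0, 0)))  # → -1.0 (inside the next copy over)
```

The key point: the geometry never changed, only the point being queried did, which is why an infinite scene costs the same as a single sphere.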
The first is ambient occlusion, which basically says that the more complex a surface is, with creases and holes, the fewer ways ambient light can get into it, and the darker it should appear. The nice thing about ray marching is that surface complexity is usually proportional to the number of steps taken, so this works great in practice and doesn't require any extra calculation.

Glow is another effect you can get for free. While marching, simply keep track of the minimum value the distance estimator ever returned. If the ray never hits the object, we'll still know how close it came, and glow can be applied based on that.

Another effect is soft shadows. Real light sources like the sun are not points; they take up an angle in the sky, so real shadows appear soft. When marching a ray from the intersection point to the light source, all you need to do is keep track of the minimum angle the distance estimator implies along the ray, and that determines how much light should illuminate the point. It really helps make the surface look better, especially with fractal shadows. And again, it barely adds any computation at all.

Lastly, that brings us to color. Since these are fractals, we can kind of color them however we want. The most common technique is called an orbit trap. Basically, as a surface point iterates through the fractal's transformations, we can look at how far away it gets from the origin. You can take the minimum, maximum, sum, or any other operation on the X, Y, and Z components, which will correspond to the red, green, and blue channels. The results are really wild, and it takes some hand-tuning to make anything specific, like when I tried to make those trees have green leaves and brown trunks. But hey, they're fractals! Random colors look trippy, and maybe that's for the best.

Anyway, I hope you enjoyed this very brief introduction to ray marching and fractals. If you want to read more about the subject, I've linked some articles that basically taught me everything I know.
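One last aside for the curious: here's roughly what an orbit trap might look like in code. This is only a sketch, with a toy fold-and-scale transform standing in for a real fractal's transform stack; the function name and constants are mine, not from the actual source code.

```python
def orbit_trap_color(p, iterations=8, scale=2.0):
    """Track the component-wise minimum |x|, |y|, |z| across iterations
    and map it to an RGB triple in [0, 1]."""
    trap = [float("inf")] * 3
    x, y, z = p
    for _ in range(iterations):
        # Toy transform: mirror fold then scale-and-shift, standing in
        # for the fractal's own sequence of folds, rotations, and scales.
        x, y, z = abs(x) * scale - 1.0, abs(y) * scale - 1.0, abs(z) * scale - 1.0
        # The "trap": remember how close each component ever got to zero.
        trap = [min(t, abs(c)) for t, c in zip(trap, (x, y, z))]
    # Clamp each channel into [0, 1] for use as red, green, and blue.
    return tuple(min(1.0, t) for t in trap)
```

Swapping `min` for `max` or a running sum, or trapping distance to a point or line instead of the axes, is where the hand-tuning comes in.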
And as promised, I uploaded the source code that generated every fractal I've shown. Thanks for watching.