Graphics in general are well suited to handling
opaque objects. We can simulate the lighting on the surface with a simple equation, and voila,
you have a lit object. But transparent volumes present problems, because of their very nature.
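For an opaque surface, that "simple equation" is often just Lambertian diffuse shading: brightness proportional to the cosine of the angle between the surface normal and the light direction. A minimal sketch (the names here are my own, not from any particular engine):

```python
def lambert(normal, light_dir):
    # Lambertian diffuse: brightness is the cosine of the angle between the
    # surface normal and the direction toward the light, clamped at zero.
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

# A surface facing the light head-on is fully lit; one facing away is dark.
head_on = lambert((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))      # → 1.0
turned_away = lambert((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))  # → 0.0
```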
There’s no single surface point to approximate; in fact, if you zoomed in on this puffy little
cloud, what you’d find is lots and lots of tiny droplets of ice and water. Light is bouncing
around in there, and when taken as a whole, the way they reflect and transmit light inside
the cloud is vastly different from how a solid object behaves. It begins with an equation called
Beer’s Law, or the Beer-Lambert Law, and it relates to these kinds of semi-transparent
materials. It basically tells you how much light gets absorbed, given the distance the light
travels into a medium. So if our scene here is mountainous, we can fog up the scene by applying this
exact equation: exp(-ABSORPTION * distance). And what happens is, as I crank
that absorption slider up, this whole thing fogs up nicely: the
fog that started out in the distance starts to creep closer in a somewhat natural-looking
way. And it’s no coincidence that this basic calculation is one of the
classic fog calculations in graphics. And in this case, there’s not much you need
to know beyond how far away the point is, in order to do the fog calculation. Say we bring an
object onto the screen instead, and we want it to be a somewhat cloud-like
material; this is a bit different. If you look at a side view of the
cube, here’s your eye over here, and you’re looking in this direction, you
need to figure out how far through the object the ray has travelled,
in order to apply Beer’s law. That means figuring out where the ray enters, and
then figuring out where it leaves. Using those two points, you can calculate the distance in between and
apply Beer’s law with the equation above, and voila, that kind of gives you the impression
of a semi-transparent interior. The next problem is that we’ve made an
assumption here: that the density along this distance was completely constant, the
same the entire time. What if it isn’t? Then you can’t just blindly take this
distance; you have to do a bit more work. The next step is to divide this vector
into a bunch of segments; at each of these in sequence, you sample your density, multiply
it by the segment length, and add it all up. Then you can apply Beer’s law, and now
you’ve got kinda swirls and other stuff happening inside this sort of cloudy interior,
so we’re making some very cool progress here. It’s very flat, though; there’s no depth to
it. If we introduce a light to the scene, well, nothing happens. We’re ray marching
through our little density field here, but we don’t really have a
way to react to lighting. We already know that the further into the medium
the light travels, the more it gets absorbed; that’s what we use Beer’s Law for.
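Putting the pieces so far together — marching the view ray in segments, sampling density, and applying Beer’s law to the accumulated optical depth — a minimal sketch might look like this. The blob-shaped density field is just a stand-in; a real implementation would sample a 3D noise texture instead:

```python
import math

def density_at(p):
    # Stand-in density field: a soft spherical blob centred at the origin.
    # A real cloud would sample 3D noise here instead.
    x, y, z = p
    r = math.sqrt(x * x + y * y + z * z)
    return max(0.0, 1.0 - r)  # density falls off towards the blob's edge

def transmittance(ray_origin, ray_dir, absorption=1.0, steps=64, max_dist=4.0):
    # March the ray in fixed-length segments, summing density * segment length
    # (the optical depth), then apply Beer's law: T = exp(-absorption * depth).
    step = max_dist / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * step  # sample at the middle of each segment
        p = tuple(o + d * t for o, d in zip(ray_origin, ray_dir))
        optical_depth += density_at(p) * step
    return math.exp(-absorption * optical_depth)

# A ray straight through the blob is dimmed far more than one that misses it.
through = transmittance((-2.0, 0.0, 0.0), (1.0, 0.0, 0.0))
miss = transmittance((-2.0, 2.0, 0.0), (1.0, 0.0, 0.0))
```

With a constant density this collapses back to the simple exp(-absorption * distance) fog from earlier; the loop only earns its keep once the density varies.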
But what we haven’t figured out is how much light actually gets to any given point. So if you have a light source up here, at each of
these points along this path that you’re sampling, you need to figure out how much light energy
there is by the time you get to that point. And similar to before, since this medium isn’t
uniformly dense, you need to march again: starting from each point along the view ray, you take a
second set of samples towards the light source. Once that code is in place, we can
see the lighting starting to take effect on our cloudy object;
it’s looking better already. Jiggle the light around a bit and the
cloudy interior seemingly gets lit properly. But if you swing the light source behind
our cloudy object, you don’t get that distinctive silver lining that real clouds are famous for.
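For reference, that second set of samples toward the light might be sketched like this — a march from each view-ray sample point toward the light, attenuated by Beer’s law. This is an illustration under my own assumptions (a stand-in blob density, a directional light), not Guerrilla’s actual code:

```python
import math

def density_at(p):
    # Stand-in density: a soft spherical blob; a real cloud samples 3D noise.
    r = math.sqrt(sum(c * c for c in p))
    return max(0.0, 1.0 - r)

def light_energy(sample_point, light_dir, absorption=1.0, steps=16, max_dist=2.0):
    # March from the sample point toward the light, accumulating optical depth;
    # Beer's law then gives how much light survives the trip to this point.
    step = max_dist / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * step
        p = tuple(s + l * t for s, l in zip(sample_point, light_dir))
        optical_depth += density_at(p) * step
    return math.exp(-absorption * optical_depth)

# A point on the lit side of the blob receives much more light than a point
# buried on the far side of it.
lit = light_energy((0.0, 0.9, 0.0), (0.0, 1.0, 0.0))
shadowed = light_energy((0.0, -0.9, 0.0), (0.0, 1.0, 0.0))
```

The light march usually uses far fewer steps than the view march, since it runs once per view sample and the result gets smoothed out anyway.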
You’re getting a bit of light bleeding through, but it’s not the same. This is more
smoke than clouds. Why is that? So if we look at a ray of light passing
through the cloud, in reality it’s going to get scattered by the water droplets inside.
Until now, we’ve actually been assuming that light coming into the cloud is getting
distributed pretty evenly in all directions. This is called isotropic scattering. But
in real clouds, as light enters, it doesn’t scatter so evenly; it has
a higher probability of scattering forward, and this is called anisotropic scattering. So you use something called a
phase function to simulate that, and a common one is the Henyey-Greenstein
model, whose phase function is p(theta) = (1 - g^2) / (4 * pi * (1 + g^2 - 2g * cos(theta))^(3/2)),
where g controls how strongly the scattering favours the forward direction. If we plug this directly into
our code and multiply the light energy at any given point by this new phase function,
what we end up seeing is better directional scattering as we move the light
around, which is really cool. One of the interesting insights that the
original engineers at Guerrilla Games had was that clouds also have sort of
darker edges and lighter creases. As light enters a cloud, it begins to scatter, and light arriving
near the surface scatters onward, deeper into the cloud. Points a little further inside have therefore
received more scattered light than points right at the surface, with the end result that points
in cracks and crevices may have gathered more light than points out on the surface.
The Guerrilla Games engineers called this the powdered sugar effect, because it looks
like powdered sugar, and their simple fix was to modify Beer’s law
a bit and drop off the energy at the start. So you’ve got exp(-d), which is Beer’s law. And you’ve got this new
powder term, 1 - exp(-2d). The combined equation
might look something like exp(-d) * (1 - exp(-2d)), which they called the Beer’s-Powder approximation. And in the following years, other papers came
out improving the look even further. The folks working on the Frostbite engine for EA released
a bunch of improvements to the original clouds. By blending a forward and a backward
Henyey-Greenstein lobe, they were able to simulate a back-scattering component
for clouds, improving the look quite a bit. There was also a neat paper from some
engineers at Sony Pictures working on Oz: The Great and Powerful, who noted that if
they took multiple samples of the scattering, lowering the extinction for each, and summed
them, it would allow more light through. It’s basically a sum of Beer’s-law terms, something like
sum_i(b^i * exp(-a^i * d)) with a and b below 1, which you
drop in as a replacement for the single Beer’s law calculation when summing up the lighting. So we’ve got our little object, and we’ve walked
through most of the calculations necessary, all that’s left is actually shaping this
thing to be more cloud-like in appearance. And that’s where noise comes in. Remember
noise? One of my favourite YouTube channels, the Bob’s Burgers guy, just
covered noise in detail. The original paper says they used
something called Perlin-Worley noise, which is a combination of the two noises, although it
doesn’t specify exactly how they’re combined. So for Worley, you start with this basic
grid, and at every cell in the grid, you place a point at the centre.
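Jumping slightly ahead of the explanation, the whole 2D Worley construction — a hashed offset point per cell, a min over the neighbouring cells, and the modulo wrap for tiling that comes up below — might look like this sketch (the hash constants are arbitrary choices of mine, not from any paper):

```python
import math

def cell_point(cx, cy, period):
    # Hash the (wrapped) cell coords into a deterministic offset in [0, 1)^2.
    # Taking cx, cy modulo the period is what makes the noise tile.
    cx, cy = cx % period, cy % period
    h = (cx * 374761393 + cy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    ox = (h & 0xFFFF) / 0x10000
    oy = ((h >> 16) & 0xFFFF) / 0x10000
    return ox, oy

def worley(x, y, period=8):
    # Distance from (x, y) to the nearest offset point among the 3x3
    # neighbourhood of cells; inverting (1 - d) gives the puffy blobs.
    cx, cy = math.floor(x), math.floor(y)
    best = float("inf")
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ox, oy = cell_point(cx + dx, cy + dy, period)
            px, py = cx + dx + ox, cy + dy + oy
            best = min(best, math.hypot(x - px, y - py))
    return best

# Because cell coords wrap at `period`, samples a full period apart match
# (up to float rounding): the pattern repeats every 8 units.
a = worley(0.3, 0.7)
b = worley(8.3, 8.7)
```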
Then, using a noise or hash function, you offset that point, moving
it away from the centre; how far is up to you. Then, to calculate the noise for any given
pixel, you figure out which cell you’re in, and which cells are adjacent. You loop through
each of them, figuring out the distance between your sample point and the neighbouring cell’s
offset point, and take the min of all of those. You end up with something that looks like this,
kinda organic looking. By inverting the colour, you get puffy, billowy shapes. The downside is
that this doesn’t tile, at least not at first, but that’s actually a really simple fix: you modulo
the cell offsets so that they repeat. I was able to add this single line to the Worley calculation,
and voila, repetition in all directions. Perlin has a similar tiling problem; you can
see that an FBM texture doesn’t tile well here. And unsurprisingly, it has a similar solution:
a mod on the positions in the hash function, scaled while generating each octave of
the FBM, and you’ve got some nice tiling going on. Easy peasy lemon squeezy. This is where things got pretty
hand-wavy in the original paper; luckily, the Frostbite team goes into more
detail. They used multiple octaves of Worley noise to create what’s basically a version
of Worley FBM, or fractional Brownian motion. Then they used that to remap Perlin noise
to get this slightly cauliflower appearance. Stuffing this into a 3D texture, and
then using that to carve away pieces, and you start to see the
cloud-like features showing up. So now the really neat thing is that you can make
pretty much any shape you want. If I use SDFs, I can represent cubes, donuts, or even more
complex objects. The next issue is that, right now, I’ve had to keep the
bounds of my cloudy area pretty small; in fact, the bounds of the area that I’m sampling
just barely cover what’s being shown on screen. If I move the cloud, you quickly see that
it runs into the edges of the bounds, but if I expand the bounds out too far,
let’s say 20x the size of the area, it’s quickly apparent that
the approach falls apart. Massively upping the number of samples
improves the look, but tanks the frame rate. Once the clouds themselves are implemented properly,
it’s mostly an optimization game from this point on. This is where the authors talk about using
various techniques to speed things up. They describe using a cheaper sampling
approach with larger steps, and if and when they hit a cloud surface, they backtrack
and then start using high-quality sampling. I took this a bit further by using signed
distance fields to describe the clouds’ surfaces with sphere tracing (check out my
video on ray marching for more details), and then I switch to a high-quality sampler
when I’m within a certain threshold of a cloud. I also experimented with running
this at half resolution or lower, and just compositing the result. I
mean, it’s a cloud, so the details aren’t super well defined anyway, or at
least that’s my hope. It looks alright. There’s wayyyyy more you can do here. I know the Frostbite paper describes amortising
across multiple frames using temporal scattering integration and reprojection to speed things up,
but I think I’m going to throw in the towel here. This was a fun little project, and I’m
certain that I got a bunch of things wrong, but it looks cool, so I can’t complain.
It’s also an amazing illustration of the sheer amount of hard work that goes into
just background details in modern games. Code will be up on GitHub at some point. Cheers