Low Complexity, High Fidelity: The Rendering of INSIDE

Video Statistics and Information

Reddit Comments

INSIDE is highly regarded by the professional gamedev community. It is mentioned by 4 people in Gamasutra's "Developers describe their most memorable game moments of 2016".

It's also highly regarded by graphics programmers specifically, because of everything they cover in this video and the material in the team's Banding in Games (pdf) paper.

👍︎︎ 7 👤︎︎ u/corysama 📅︎︎ Jan 19 2017 🗫︎ replies

Great video!

Here are two tech papers on INSIDE's rendering internals, also very good: The Rendering of INSIDE and Anti-Aliasing in INSIDE.

👍︎︎ 4 👤︎︎ u/pturecki 📅︎︎ Jan 20 2017 🗫︎ replies

Pretty sick what they pull off.

👍︎︎ 1 👤︎︎ u/[deleted] 📅︎︎ Jan 20 2017 🗫︎ replies
Captions
Hello, and welcome. My name is Mikkel, I'm a graphics programmer, and this is Mikkel, who will do the second half of the presentation; he's a very technical artist, as you'll see. We both work at a company called Playdead that once upon a time made a game called Limbo. Neither of us worked on Limbo, but we have worked on the follow-up title, INSIDE, which shipped in June on Xbox and in July on PC, and which ships next week on PlayStation 4. We'll be talking about the rendering of INSIDE, and we picked some topics that we thought would be relevant to other games, things you can steal and implement in your own games, the same way we've stolen tons of stuff from other presentations. I'll briefly show a trailer so we know the flavor of the game we'll be talking about. [Music]

All right, now that the game is actually out, just out of personal curiosity: how many of you actually played it? Excellent, about half, a live fan base. As you can see, and as many of you have already experienced, it's a 2.5D puzzle platformer made with a fixed perspective; the camera never moves. This means that when our artists create the art, they know that when the game is shown on a player's screen it will look exactly like it looks on their screen, so they can tweak every pixel for perfection and rely on really subtle details surviving onto the monitor the game is played on. That in turn means we can't have too many distracting artifacts, because those very subtle details need to remain on screen: no banding, no flickering, no aliasing. That's something this talk will return to a couple of times.

From a technical point of view, just to get it out of the way: we shipped at 60 fps at 1080p on all targets, and we're using the Unity engine, for which we have a source license, so we've made some modifications. From a rendering point of view we're using light prepass rendering, which looks something like this: a first pass over the entire scene, the base pass, outputs depth and normals; a light pass then goes through all lights, samples those depths and normals, and outputs the lighting; a final pass then applies the materials and samples the lighting; and a translucency pass and a post-effects pass wrap the whole thing up.

One thing that turned out to be important rather early on was fog, to create the atmosphere and mood of the game. Initially quite a lot of scenes were literally just geometry and depth fog, which, in line with Limbo, relied a lot on silhouettes to create the mood. To show you how much mileage we get out of that: this is a scene without any fog, and by literally just adding a linear depth fog we get something like this, which is already rather moody. The only interesting bit we're doing here, rendering-wise, is that we cap the intensity of the fog at a maximum level so that really bright light sources still shine through. Were we using exponential fog, it would converge towards one and bright light sources would not shine through. On top of that we have a sort of fake atmospheric-scattering pass. The reason I'm being very unspecific about it is that it's really just a really wide glow: we blur the entire screen and add that back on top. This was something our artists adopted rather early, and I think they used it to great effect, but it's really rather easy to do.
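The capped linear fog described above can be sketched as follows. This is a minimal CPU-side illustration, not Playdead's shader; the parameter names (`fog_start`, `fog_end`, `max_fog`) are assumptions for the sake of the example:

```python
def apply_fog(color, fog_color, depth, fog_start, fog_end, max_fog=0.9):
    """Blend a pixel toward the fog color with a factor linear in depth,
    capped at max_fog so very bright sources still shine through (the
    exponential alternative converges to 1 and would occlude them)."""
    t = (depth - fog_start) / (fog_end - fog_start)
    t = min(max(t, 0.0), 1.0)   # standard linear fog factor
    t = min(t, max_fog)         # the cap: fog never fully wins
    return [c + (f - c) * t for c, f in zip(color, fog_color)]
```

With `max_fog = 0.9`, even a pixel infinitely far away keeps 10% of its original intensity, so an HDR-bright light source remains visibly brighter than the fog around it.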
So now that we're using a glow to do this fake atmospheric scattering, what do we do about the actual glow? We have a second pass that does the really narrow, high-intensity glows. The way we do that, like many other games, is that we write out a mask from emissive materials, remap that mask to an HDR value between one and seven, and calculate the glow from that. One thing that's obvious in hindsight, but wasn't while we did it: bloom is an indication that you have a high-intensity pixel, so if you then don't render that pixel with high intensity, it looks odd, like you have nothing glowing yet something is giving off this massive bloom. So it was really important to actually write back the HDR values for it to look natural.

That means our post-effects path looks something like this: we have a temporal anti-aliasing pass first, and then the two glow passes; they're really separate, but interleaved for performance reasons. It's important that the temporal anti-aliasing comes before the HDR bloom, because if we did the temporal anti-aliasing with the high intensities used in the HDR glow, any aliasing left over would make it flicker quite a lot. We then have a combined post-effects pass that applies the glow and does the HDR resolve, followed by some lens distortion, chromatic aberration, and color grading; a little more about that now.

For the chromatic aberration, the lens-like red/green/blue offsets, most of the time we do it like most other games: we just sample red, green, and blue separately with a little bit of radial offset. But in situations like this one, on the walls, we use rather large offsets, which gives this sort of triple-image effect, whereas what we really wanted was a rainbow-like effect. A trick to fix that, which we found in the demoscene, is to use a red-green-blue gradient texture: we do a radial blur, and for each sample of the radial blur we also sample the red-green-blue texture to get the weight of that sample. That's how we get the coloring, and the reason it works, of course, is that when sampling the texture we get bilinear filtering for free. A nice trick that we didn't come up with ourselves.

One other thing we do in the final post-effects pass is brightness adjustment, which we keep as simple as possible: we have sort of a Photoshop levels filter where we can adjust the black level and the white level. The issue is that it's very easy to cut off the whites and get burned-out white areas. So what we did was use what's called a smooth minimum function: we take the smooth minimum between the brightness-adjustment curve and one, which gives a smooth fall-off. The code is in the slides, so you can see the implementation. This worked out really well for us, and it was also really easy for the artists to tweak, because they had just one parameter: how smooth the kink between the two curves should be.

Right, I'll leave the post effects and get back to the atmosphere we were talking about before, the fog. We figured out rather early that we needed more than just the global fog to get the effects we wanted, so we made a local fog implementation; this scene is an example, where the flashlight is rendered using the effect. For every pixel we ray march all the way to the depth buffer, and at each step we sample the lighting function: the projected texture, the shadow map, and the fog texture. If you do this in a naive way, you end up with something very slow.
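The smooth-minimum levels trick can be sketched like this. The talk doesn't give the exact formula (it's on the slides), so this uses the common polynomial smooth minimum from the demoscene as a stand-in; `adjust_white_level` and its `softness` parameter are illustrative names:

```python
def smooth_min(a, b, k):
    """Polynomial smooth minimum: behaves like min(a, b) away from the
    crossover, but rounds off the kink over a width controlled by k."""
    h = min(max(0.5 + 0.5 * (b - a) / k, 0.0), 1.0)
    return b + (a - b) * h - k * h * (1.0 - h)

def adjust_white_level(value, white, softness=0.1):
    """Levels-style white adjustment, soft-capped against 1.0 so bright
    areas roll off smoothly instead of burning out."""
    return smooth_min(value / white, 1.0, softness)
```

Far below the white point the curve is the plain levels adjustment; far above it the output is exactly 1.0; near the crossover, the single `softness` parameter controls how smooth the kink is, matching the one artist-facing parameter described above.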
In this example it takes 128 samples to make it look just okay, and that costs roughly more than a frame and a half. If you try to make it faster just by reducing the number of samples, you get something like this, with quite a lot of artifacts, and quite jarring ones too: these really sharp lines. What you can do, of course, is add a little bit of randomness to your ray offsets. Do that and you get something that looks a lot better. It's a lot slower now, though, because with random jitter in your samples the texture cache works a lot worse, but it does look a lot better, and the reason is that the eye is actually quite forgiving towards noise. That's something we chose to look into a bit further. This is the same example with three samples rather than 24, and that's a lot of noise; you can only just make out the actual light and shadows. What we can do then is use a different pattern for the dithering: rather than white noise, as here, we can use a Bayer matrix, and already that looks a lot better. The reason it looks better is that within a small region a Bayer matrix is guaranteed to contain all values, and that's really good. But while the eye is very forgiving towards noise, it really isn't forgiving towards patterns. So we were looking for something that has this locality property yet isn't a pattern, and what we ended up finding is what's called blue noise: essentially white noise that has been high-pass filtered. Blue noise can still be uniformly distributed, so within a small region, because it changes rapidly and is still uniformly distributed, we get much the same property as the Bayer matrix.
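The locality property of the Bayer matrix mentioned above is easy to see from its recursive construction. This is a minimal sketch of the classic ordered-dither matrix, not code from the talk:

```python
def bayer(n):
    """Build a 2**n x 2**n Bayer (ordered dither) matrix.  Every
    threshold 0 .. 4**n - 1 appears exactly once, so any tile of the
    matrix covers the value range evenly: the locality property that
    makes it sample so much better than white noise."""
    m = [[0]]
    for _ in range(n):
        s = len(m)
        # quadrant offsets of the recursion: [[4M, 4M+2], [4M+3, 4M+1]]
        m = [[4 * m[y % s][x % s] +
              (0 if y < s and x < s else
               2 if y < s else
               3 if x < s else 1)
              for x in range(2 * s)]
             for y in range(2 * s)]
    return m
```

Blue noise keeps a statistical version of this guarantee (nearby values are very likely to differ a lot) without the visible tiling pattern.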
We're not guaranteed that every value is represented, but it's very likely that most values are, so we still get really good sampling without it being a pattern. That worked really well, and I think it meant we reduced the number of samples by about half.

All right, I'll leave the sampling for a bit and talk about how we actually distribute the samples. The way we set up these local fog volumes in the game is that we insert boxes, and within those boxes we have fog. Of course, we don't need to sample outside those boxes, because there's no fog there, and we also don't need to sample outside the lights. So we do a geometric intersection between those boxes and the light frustum, which boils down to just clipping the light frustum with each plane and then patching up the holes. That gives us a two-pass algorithm: first we write out the front faces of the clipped geometry, then we render the back faces while sampling the front-face depth, so we have both the front and back points and sample only in between the two. That gives us the optimal range we need to sample.

This thing about using boxes is something we also use for effects. In this scene, above water, we're using two boxes, one above and one below: the one above just samples the light as-is, whereas below we use an animated texture to fake caustics, which works really well. Something you may have noticed, and that many others have noticed before us, is that this is quite a smooth effect, so sampling it per pixel is rather overkill. Of course we don't do that; we sample at half resolution instead. We could probably get away with an even lower resolution, but this was sufficient.

That means we now have a three-pass algorithm. The first two passes are the same as before, only at half resolution: we write out the front-face depth, then do the ray marching. The third pass then upsamples from half resolution to full resolution, and during the upsampling we also add a little bit of a noisy blur. We do this to get rid of the half-resolution structure that would otherwise remain. An additional reason it's a good idea is that, as you may remember, we have a temporal anti-aliasing pass, and at least our implementation uses what's called neighborhood clipping. The way that works is that for each pixel it looks at the immediate neighborhood, and any value within that neighborhood is accepted. That makes it very good at cleaning up per-pixel noise, but not very good at cleaning up half-resolution noise. So converting the half-resolution pattern into full-resolution noise played right into the hands of the temporal anti-aliasing; a trick from DICE, I think.

All right, this is the final result, shown with six samples at half resolution. It would take just less than a millisecond if we sampled every pixel on screen, which of course we don't; we only sample within the actual light cone. Also, most of the time we actually use three samples per pixel at half resolution, not six, which means we do just less than one sample per full-resolution pixel to get this effect. The reason we've been obsessing so much about the number of samples is that the effect is bandwidth-bound: it's bound by the number of samples we take. So we also do everything else we can to reduce the bandwidth.
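The neighborhood-based history rejection mentioned above can be sketched as follows. The talk says INSIDE uses neighborhood clipping; this sketch shows the simpler clamping cousin (clipping moves the history color along a line toward the neighborhood box rather than clamping per channel), which is enough to see why per-pixel noise survives TAA while out-of-range history does not:

```python
def taa_resolve(history, current, neighborhood, blend=0.1):
    """Clamp the history color into the per-channel min/max box of the
    current frame's 3x3 neighborhood, then blend toward the current
    sample.  Any value inside the local neighborhood is accepted, so
    full-resolution noise is integrated away over time."""
    lo = [min(p[c] for p in neighborhood) for c in range(3)]
    hi = [max(p[c] for p in neighborhood) for c in range(3)]
    clamped = [min(max(history[c], lo[c]), hi[c]) for c in range(3)]
    return [clamped[c] * (1.0 - blend) + current[c] * blend
            for c in range(3)]
```

Half-resolution noise defeats this scheme because a 2x2 block of identical noisy values narrows the "immediate neighborhood" test, which is why the noisy blur during upsampling converts the pattern to per-pixel noise first.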
We lower the resolution of the projected texture, and we lower the resolution of the shadow maps. We can get away with this because, again, it's a rather smooth effect, and there's quite a bit of blurring involved. The way this fits into the pipeline is as follows: during the lighting pass we save off the shadow maps we know we're going to need; then we run the first two passes of the algorithm and save off the ray-marched texture; and then in the translucency pass we re-render the clipped geometry and sort it with the other translucencies.

Okay, I'll leave fog for now and move on to a different thing that was very important to the look of INSIDE, namely banding. This is a scene from the game, and I've upped the brightness so we can see what's going on. If we look at it, you can see some of the things I talked about at the beginning: very smooth transitions, and a lot of subtle details you can only just notice. This image actually has quite a lot of noise in it, which hopefully you can't tell. If I remove that noise, it looks like this, with quite a lot of artifacts: sharp lines all the way through the image, rainbow-like effects, and it animates too, so it's really rather distracting in relation to an art style where we want soft, clean scenes. Of course, what's going on is that we're rendering into 8-bit color buffers, and eight bits per channel really isn't sufficient; the human eye can perceive something along the lines of 14 bits. What we could do, of course, is render to higher-precision buffers, but on some platforms that would be slower, even too slow to work. Something else we could do is use sRGB render targets, but again, on some platforms that has interesting ramifications, to say the least. So we chose to explore dithering to solve this issue.

The way that works: this is an example where the orange line is the signal we're trying to represent. Say the value is 0.75 bits; we then add one bit's worth of uniform noise, which means that when quantizing, 75% of the time we end up at the value above the signal and 25% of the time at the value below, because the noise is uniform. So on average we get exactly the signal we started with. Just so you can tell this isn't complete black magic, this is how you would actually implement it: it's a pixel shader, and on output you just add a random number. You can calculate the number or fetch it from a texture; it doesn't really matter. I just want you to keep in mind that this is spectacularly easy to add, and rather cheap as well, so there's really no reason why any game should ship with banding, including indie games.

To explore this a bit further: on top we again have the signal we're trying to represent, then the rounded, quantized output in the second row, and then the error in the third row. If we add our noise, it looks like this; that's really good, a lot better than before. But if we then animate it, move it about, you can see these bands of no noise in the second row. What's going on there is called noise modulation, and it turns out to be a property of the type of noise we're using. Because we're using uniform noise, the error of the output image relative to the signal depends on the signal, and that's a property we don't want, because it looks weird.
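The "add a random number on output" idea from the slide can be sketched numerically. This is an illustrative CPU model of the shader, not Playdead's code:

```python
import random

def quantize_dithered(value, bits=8, rng=random.random):
    """Quantize value in [0, 1] to the given bit depth, adding one
    quantization step of uniform noise first.  Each individual output
    is still quantized, but the *average* output equals the input,
    exactly as in the 75% / 25% example above."""
    levels = (1 << bits) - 1
    noise = rng() - 0.5  # uniform, one LSB wide, centered on zero
    return round(value * levels + noise) / levels
```

For a signal sitting 75% of the way between two 8-bit levels, the dithered quantizer lands on the upper level 75% of the time and the lower level 25% of the time, so the mean over many frames (or nearby pixels) reconstructs the signal.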
Fortunately, we can just change the distribution of the noise. If we use triangularly distributed noise instead, we get this, and now we have no noise modulation: the final error is completely independent of the signal. Going back and applying that, we get the same smooth gradient as before, and when we animate it we no longer have the bands we noticed earlier. But now we have quite a lot of noise, because we've added two bits' worth of noise rather than one, to compensate for the triangular shape of the distribution. What can we do about that? Well, we can go back to the blue noise we already had for the sampling. We don't have a triangular distribution of it, but if we use it instead, we get something that's visibly less noisy yet also free of those bands of no noise. The way we implemented that was to precalculate blue noise into a texture and sample it everywhere on screen; that gives complete texture-cache coherency between pixels, so sampling the texture is really fast. In the few cases where we're actually bandwidth-bound, we instead use an ALU version where we calculate two white-noise values and add them together.

All right, now that we know how to dither the output, we should think about what to dither. This is an example of a single spotlight shining a very bandy light onto a surface. The first thing we do, of course, is dither the lighting pass. That already looks a lot better, but we still have these waves through the light. That's not due to the type of noise, as you might think, but because the final pass is reading the lighting buffer, applying the materials, and writing the result into an 8-bit buffer.
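The triangular noise described above (and the ALU fallback of summing two white-noise values) can be sketched as:

```python
import random

def triangular_noise(rng=random.random):
    """Noise with a triangular distribution on (-1, 1), built by summing
    two independent uniform values.  Two bits' worth of noise instead of
    one, but the quantization error becomes independent of the signal,
    removing the noise modulation that plain uniform dither exhibits."""
    return (rng() - 0.5) + (rng() - 0.5)
```

The distribution peaks at zero and tapers linearly toward plus and minus one, which is why, despite the wider support, most samples stay small.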
So of course we need to dither the final pass as well, and once we do, we have an entirely banding-free output in this scene. After that comes the translucency pass, which might be the most important pass to dither, because during blending you tend to read and write the same values over and over again into the render target, so you end up quantizing to 8 bits multiple times. Of course we need to dither that too. But this pass, the post-effects pass, is also surprisingly important to dither: it tends to read and write 8-bit render targets quite a lot, because bandwidth tends to be the limiting factor there as well. As an example, for our wide glow pass we actually ended up sort of power-of-2 compressing the render target, plus dithering, to get something that was entirely banding-free and at the same time didn't have too much noise.

So far we've been talking about banding as an artifact of colors, but it really doesn't have anything to do with colors; it's all about the quantization to 8 bits. In our base pass, when writing out normals, we're writing those to 8-bit-per-channel render targets as well, so of course we can dither the normals too and exchange the banding for noise. We do this only in the few cases where it's actually needed, which are the very few cases in the game with really intense specular highlights and normals varying across really large surfaces.

Okay, so now we know how to dither everything; anything else to keep in mind? Well, it's a really good idea to animate the noise. If you don't, then when the camera moves you still have this dusty-lens effect where the noise moves with the camera. Also, because we're using temporal anti-aliasing, if you animate the noise, the temporal anti-aliasing will integrate it over time and soak it up, so it more or less disappears, which is nice. You should also, of course, dither your UI, because that tends to involve a lot of transparencies and a lot of different fades and animations; that's really important too. And finally, when you're outputting to a monitor or television: if that television, for example, is set to use a limited-range signal, there may be a small chip that converts your output to match the TV. So if you can, make sure to output the correct format, so that you can properly dither your signal and output both, rather than have the hardware do the conversion for you, because it's quite unlikely to do it properly. Right, and with that I'll leave you in the very cool, very capable hands of the other Mikkel.

Hello. So, from a lot of lovely post effects and a lot of dithering, we're going to dive into some lighting now. As was mentioned, we're using prepass lighting for our rendering pipeline. That means that during the second pass, with our normals and our depth buffer available, we render various objects into a lighting buffer: spot lights, point lights, or directional lights. But we can actually render whatever we like in there, so we've got a few types of custom lights and whatnot.

Let's take a look at the first one, the bounce light. The bounce light is probably the simplest of all of these: it's basically a thing we use to simulate pseudo global illumination in a very hand-held way. If we take a look at this screenshot, there's kind of a point light down there by the box, but a sharp point light is not what we want; this is the lighting pattern we get around the sphere. What makes a bounce light different is that we just wrap the lighting around to the other side. This is something Valve has been doing for character lighting for a while; it's called half Lambert, or something like that. We've got a parameter for it, called hardness, that we can use to fade off the dot product around the object. It makes it less obvious where the light is coming from; it kind of looks like it's coming from an area rather than a point. In this case we're wrapping it from front to back, but we can actually fade it off completely if we want, so that we get a straight-up ambient light that's nothing but fall-off. We use this mostly for doors opening, or just for spreading light around when a very sharp fixed light hits something, but we can also attach it to spotlights so it lights the ground as the spotlight moves around. Mostly, though, it's for windows and doors opening and lights like that.

Now, a small bonus that's kind of a no-brainer but gives a lot of freedom: we don't constrain it to being a point with a radius, because that would mean that to fill this room with light we would need to place four lights, with all sorts of overlapping overdraw. Instead we let artists scale the lights any which way they want, non-uniformly, replacing these points with a pill shape instead. It's a little more efficient, and it actually looks a little smoother, because you don't have overlapping things. So yeah, that's our GI fakery.

Next up: because we have a pass just for lighting, we can not only add lights into it, we can also subtract or multiply things onto this buffer. We don't need to stick to additive blending; we can use whatever blend mode we want here. So we've got a few ways of doing ambient occlusion, or shadows, in a very custom, hand-placed way, as opposed to baked or screen-space ambient occlusion or effects like that.
We've got three types of ambient occlusion casters. Let's look at the first, and actually the most used, one: the point. The point is mostly used for characters; the point of it is to try to ground them to the ground, to themselves, and to each other, and to make this large boy band kind of look like they're huddling together a little closer. We've got about one per limb, and about 16 dudes, so there are over a hundred of those little bastards in there. How it works is very simple: if a bounce light adds wrapped lighting from the back of the geometry with an additive blend mode, the point AO decal is simply that flipped around to a multiplicative blend mode, and that's basically it. It's non-uniformly scalable too. One thing, though: because we place so many of them, we don't want many settings on them, so we just have an intensity setting, plus whatever scaling you apply. There's no wrap parameter for this one; they just wrap from front to back, very simple. That's AO decal number one.

The second one we use a little more sparsely, because it's more for bigger occluders, like this submarine. The difference between the point and the sphere occlusion is that with the point you can just intensify it as it gets closer, whereas with the sphere you want it to get sharper as it gets closer to the occludee, or the ground. Comparing the two: for the point, the occlusion is just based on the angle to the occluder, but what we really want to know is the sum of occluded angles as seen from the ground point. The way to implement the difference is in the normalizer that normalizes the direction to the occluder: we just change the dot product from using this inverse square root to this expression here, and that gives us exactly the product we need, a nice sharp occlusion, basically for cheap.

Third up on the AO scale is the box occlusion. We've got rather a lot of crates in our game, I think a third of the screen time or so there's a crate, so for all these draggable crates and boxes we needed something specific to them. What we really want is sharp corners around the sides, to make it look grounded. The way we do that is we start with what you could call an unsigned distance function to the box: we take the position minus the size, which gives the distance to the nearest edge of the box, and then we take the angle around that, i.e. which point around the faces we're at. That gives the very sharp result up there at the top, which is not nice; you get these curvy shapes. So instead of taking just the unsigned distance, we take, let's call it, the squared distance, which also happens to be unsigned, because it's squared, and that smooths out the first gradient, as squaring does. It's a totally empirical, non-physically-based way to do it; it just happened to look kind of neat. One side of this angle-around is going to look like that side up there, and if you compose all four together you get those sharp edges around the sides of the box, so it kind of looks like the occlusion of a box, and that's nice.

So those were the ambient occlusion casters that cast in every direction. We've also got one specifically for directional shadows. We use these in places where we have a lot of ambient, very non-directional, kind of wide lighting, and we just need to cast some very soft, very placed-down shadows that might not even come from the right direction.
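The squared-distance-to-box idea can be sketched in isolation. The falloff mapping here is purely illustrative (the talk only says the squared distance is used empirically, not how it is mapped to an occlusion weight), with `p` expressed in the box's local space:

```python
def box_occlusion_falloff(p, half_size, radius):
    """Squared distance from p to the surface of an axis-aligned box
    (zero inside the box), mapped to an occlusion weight fading out at
    'radius'.  Squaring removes the gradient kink at the surface, which
    is what trades the curvy shapes for sharp box-like edges, per the
    talk's empirical, non-physical reasoning."""
    d2 = sum(max(abs(p[i]) - half_size[i], 0.0) ** 2 for i in range(3))
    return max(1.0 - d2 / (radius * radius), 0.0)
```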
shadows that might not even be from the right direction so this is the scene without these and if we add them can looks like this instead these things and so there are a lot of these placed around in fact this is like the the the places they've been put and all they are are a what's called a via a projected decal with a texture and just multiplied onto the to the I to the lighting buffer that's that's all there is to it and we use these both for these static scenes where we need soft shadows but also for like of course dragging objects around and that shield there and we do these kind of ocarina of time period shadows around the points feet when he's got a torch and yeah it's nice way of faking some really really smooth smooth shadows so like I said we would do a lot of these so we also need the projection laughs to go fast because projections could be expensive if they are not fast and the way that you would nearly implement like a projected the piece of geometry is in your vertex shader would get out of you ray and in your fragment shader you would multiply that by the deficit of your position and then just transform it with the full matrix into your object space or your decal space now that's that's not that nice because we got a full matrix multiplication and there's gonna I got a large roll of math instructions in there so we would like to minimize that luckily there's a way to do that if you know how to do weld space reconstruction from death buffer in which case you would get a view ring in your educator and you were to rotate the raid to world space rotation and then in your your fragment shader you just need to multiply this world rate instead of your ring by the depth and add a world space offset and yet that so you probably guess where that's going we still need to use a transformation matrix in order to get this world position to decal space so that's obviously I won't want but you know same technique works on object space race so take a view array x 
the object-space matrix, the rotation and scale, giving an object-space ray, and then in the fragment shader it's just times depth plus an offset. So we've gone from a full 4x4 matrix multiplication down to a multiply-add on a vector-3, which is much nicer.

Another place we use these decals is for reflections. We've got screen-space reflections in the game, but rather than a full-screen post effect they are decals, which is what we need in the places where we want a puddle here and a puddle there. We like screen-space reflections because they have a very simple, very flexible setup. Rather than doing planar reflections, where you need to tag which objects get re-rendered, or render a cubemap, or anything heavy like that, this just reflects what's on screen without any heavy-duty work. Setup-wise it's very simple: the color you want the reflection to be (at least for us), a background color, because if a ray doesn't hit anything we need to display something, a texture for the projection, a Fresnel power, and how long the trace distance needs to be. Very minimal setup on the material.

The advantage is that what you see is what we reflect, which is a very nice property. But it also means that what you don't see is not reflected. At the edges of the screen, the standard thing with screen-space reflections is to either fade out or clamp and stretch; if you fade out, you get these bright edges where you reflect the sky color, or whatever, close to the edges of the screen, and it just doesn't look nice. The fade exists in the first place because you can't anticipate which way you're reflecting: the normals could point any which way, and you could be reflecting out of the screen.
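The projected-decal shortcut described a moment ago (object-space ray times depth plus offset, instead of a full matrix per fragment) can be sanity-checked numerically. Everything here is illustrative: the matrix, the offset, and the numbers are made up; the point is that the two forms agree because the transform is linear.

```python
# Sketch of the decal-projection optimization described above. Naive:
# reconstruct a position from depth, then push it through the full
# transform per fragment. Fast: rotate the view ray into decal space once
# (vertex shader), then do a single multiply-add per fragment.

def mat3_mul_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Hypothetical world-to-decal transform: 3x3 rotation/scale + translation.
rot = ((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 2.0))
offset = (1.0, -2.0, 0.5)

view_ray = (0.3, -0.1, 1.0)  # per-vertex ray through the pixel
depth = 7.5                  # sampled from the depth buffer

# Naive: reconstruct the position, then transform it per fragment.
world_pos = tuple(c * depth for c in view_ray)
naive = tuple(a + b for a, b in zip(mat3_mul_vec(rot, world_pos), offset))

# Fast: rotate the ray once, then multiply-add per fragment.
decal_ray = mat3_mul_vec(rot, view_ray)
fast = tuple(r * depth + o for r, o in zip(decal_ray, offset))

assert all(abs(a - b) < 1e-9 for a, b in zip(naive, fast))
```

The equivalence holds for any rotation/scale because matrix multiplication distributes over the scalar depth; only the translation has to be applied after the multiply.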
We'd need to fade out before that happens, so we don't get clamped, smeared textures. Luckily (and this is not a universal solution, but it works for us) most of the time we're placing a puddle on the ground, pointing upward, with the decal mostly looking level into the world. So we can get away with forcing all normals to point straight up, with no deviation across the whole scene, and that takes care of the problem: now we don't need to fade at all, because there's no way we're going to reflect off-screen.

That's the first way you can fail to see what you want to reflect, and our fix for it. The other case is a bit harder to handle: something in front of something else. Say our boy is jumping in front of a haystack, and we've got a camera and need to reflect that somehow. To the camera, or a reflection ray, anything behind the boy is technically invisible; it looks like this, and we don't know what's back there. Of course we're not going to reflect him as if he's infinitely deep; in fact we can pretend he's quite shallow and cut him off right here, saying there's a back edge. But in this case we're still missing everything behind him, so we'd just reflect the sky, and it would look awful even when checking against that depth parameter. And of course we're not tracing at full resolution: we take very quantized steps, only about 16 to 24 of them through the whole reflection trace, so what we're actually tracing through looks something like this for each step. What we want to find, in order to fix this, are these two edges, because they're going to be useful: we want to take a mix of the samples just above and just below them, and assume we can stitch the reflection together behind him, like that. So we need to find those edges. How do we do that?
Well, let's step through the trace. We need a few parameters to keep in our pocket while tracing: the scene depth from the last step, the difference between that depth and the ray at the last step, and the old ray position from the last step. Whenever we're not behind anything, meaning that as of the last step we were successfully aware of our surroundings, we save those, and we always save the last depth. So we step through and, oh, that was a hard edge there: we're no longer in front, so we stop updating the old difference and position and keep them for later, because we'll want to sample at that old position. We keep stepping and, oh, there's the other edge: we compare the last depth with the current depth, and if they're very far apart, there was probably a split somewhere. Using that information, we take the old position from the last time we were down at the red line and the position now, and just sample a couple of points on the line between them and lerp between them. That covers about 90% of the scenes we'd otherwise break, with the boy or any other intruder in the frame.

Finally, the projection math for these. Unlike the projected decals, we don't need to take a view ray into object space; we need to take it into a projected space, this skewed frustum space here. The naive way would be to take your reflected position, push it through a full matrix, and then subtract your screen-space position. I don't like that so much, and it also doesn't handle the near clip plane: if anything goes behind it, you just start heading off toward negative infinity. So our solution is to cheat a little and precompute a small variable on the CPU.
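The stitching trace described above might look something like this as a sketch. The depth buffer is a 1D list, colours stand in for the framebuffer, and the `thickness` and `gap` thresholds are stand-ins; the talk doesn't give concrete values, only the bookkeeping (remember the last step we were in front; on a big depth jump, blend the samples bracketing the intruder).

```python
# Sketch of the "stitch behind the occluder" SSR trick described above.
# We march a reflection ray through a 1D depth buffer. While the ray is
# in front of the scene we remember the last good step; when a large
# depth discontinuity says we passed behind an intruder and came back
# out, we blend the colours from just before and just after him.

def trace_stitched(depths, ray_depths, colours, thickness=0.5, gap=3.0):
    last_front = 0       # last step where the ray was in front of the scene
    saved = None         # position saved at the intruder's first edge
    prev_scene = depths[0]
    for i in range(1, len(depths)):
        scene, ray = depths[i], ray_depths[i]
        jumped = abs(scene - prev_scene) > gap
        if ray < scene:
            if jumped and saved is not None:
                # Second edge: stitch across the intruder.
                return 0.5 * (colours[saved] + colours[i])
            last_front = i
        elif ray - scene < thickness:
            return colours[i]          # ordinary hit
        elif jumped and saved is None:
            saved = last_front         # first edge: remember last good step
        prev_scene = scene
    return None  # ray left the buffer: fall back to the background colour

# Shallow intruder (depth 2) in front of a back wall (depth 10): the ray
# passes behind him, and the result blends the samples around him.
print(trace_stitched([10, 10, 2, 2, 2, 10, 10],
                     [3, 4, 5, 6, 7, 8, 9],
                     [0, 1, 2, 3, 4, 5, 6]))
```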
That variable holds the size of the viewport in projected space: on x and y it's built from the resolution of the screen, the aspect ratio and the field of view, and for the z value it's the depth range, which looks like that. Then transforming a view ray into a projected ray just looks like this instead, and it handles anything behind the camera just fine. So that's our rays.

Now, like I said, we only take a few steps, and just like the volume lighting we need to do something about the stepping artifacts, so obviously you add some noise to your first step. To reiterate: white noise is nice and easy but very noisy; Bayer-style ordered dithering is nice and structured, but structured in the bad way too; and blue noise is our general savior, and you too should use it. With temporal reprojection on top it gets really, really nice.

Finally, about the wall thickness in the screen-space reflections. Like I said, we can't assume the depth buffer is perfect information about what's on screen; it's just the outer shell, the first thing we can see. So we step through it assuming some wall thickness, and we want to get away with the minimum thickness tweaking we can, so as not to get any stretchy objects, or, as our artist called it, the boy with MC Hammer pants. The simple way we started simulating this thickness was to say: how much is our screen-space ray moving inward per step? Let's use that as our thickness, because it means that if between this step and the last we've passed through some depth, we're good. But that has one problem, at a wall that's at 45 degrees to the viewer: the view ray is moving straight across the screen, not inward at all, so it won't reflect a thing, and that sucks. Well, look at what that view ray
was actually made of. All you really have to do is take out the reflection direction itself and use that; it's the total potential movement the ray could have per step, and you've got, well, my favorite wall thickness at least. It basically catches anything on its way.

Another thing we use reflections for, besides puddles, is of course water. We've got some water rendering, which those of you who have played the game will know from the long water sections, so we need a bunch of water. And we need a few layers for it: some fogginess, or murkiness, to sell the depth; a bunch of transparencies, like the volume lighting underneath the surface, to sell the thickness; refraction, to make it watery; and finally reflections. These are not the screen-space reflections; they're planar reflections, because on a big water surface that spans most of the screen all of the time, we just can't live with the artifacts, so there we do use a planar reflection camera. We render these layers as individual objects, because the abstraction we use to think about water turns out to be useful for rendering as well. This is what happens when we're above the surface; when we're below it, we actually flip the rendering order. We render the reflection first, because it's the farthest-away thing (back to front, like any transparency), then the murkiness on top of that, because we want it to affect the reflection, then the refraction, and finally the transparencies, so they don't get fogged or anything when we're down under.

For each of these layers, in order to get a consistent surface, we actually render each layer three times. First, a displacement mesh attached
to the camera, which looks something like this with wireframe turned on. From the camera's perspective it's just a mesh that only exists inside the frustum, making waves, and it makes sure that when you pass through the surface you never get that "oh, the water was a pancake, I guess" polygon effect; it just makes the water look deeper. It only extends about six to eight meters into the screen, but that's all the parallax you need to sell the depth of the water. Then we render the outside of a box, which takes care of the rest of the surface, kind of like the outer faces of a box encasing this water volume, and when we're done with that, we render the inside faces of the box. You'll notice it doesn't render on top of what we've already drawn. That's because (this is what it looks like, by the way) for each of these layers we write into a stencil bit, the "I have rendered water" stencil bit, and if any geometry reads that the bit is already set, it just discards the fragment. So we render these surfaces front to back, rather than back to front, with stencil rejection, so we only ever get one layer; it's kind of like a pseudo depth buffer, in a way. And that's it for water for now.

Let's change gears and go into a bit more of the VFX area, with some smoky smoke. That very simple smokestack there has a bunch of effects on it; let me stop it and take all the effects off. Now it looks very plain; in fact it looks like it's blending into the background, and you can barely see it. That's because its base color is just the ambient color of the room and nothing else, so it will blend in in most cases. The first effect we put back on is a fake point light that we attach to the smoke, to make it look like it's hit by the light from the window and to ground it in
the scene. Up next, we light the thing a little more, just to sell some gradient: we brighten it from the top and darken it from below, which also helps ground it. It's very hard to see, but if you pay close attention: there it is, and there it isn't. And finally we wobble the texture itself around a little, which gives a bit of soft particle motion where it's hard to tell where one particle ends and the next begins, because the texture itself is never fully comprehensible. We wobble it around using what we call swirl noise, which is basically a UV map with a swirl effect applied to it a bunch of times, over and over, and it swirls downward in this case, to give the effect that the steam is sinking; it could just as well go upward, and either way it's just scrolled one direction in world space, basically.

Next up, some vapor of the hotter kind. I'm throwing this box around just to illustrate that it handles movement somewhat well and remains fiery throughout that movement. We've got the smoke effect going up in the background here as well; that's the same as before, with a light. The fire uses all the same effects, the distortion and all that, even a bit of gradient lighting, but what makes it special is consistent coloring. When we first tried fire, we just had it going from red to orange to yellow and white in the middle, with blue at the edges, but when a lot of blue layers stack on top of each other you can end up with a very bright blue, and that looked really crappy. So we have, essentially, a fire buffer. That's a lie, but we have a fire buffer, which is really just the alpha channel of what we're rendering into. The
alpha channel holds the HDR bloom value we'll use later, which makes perfect sense here, because the fire is going to bloom anyway, so why not use it? We render a bunch of black-and-white sprites of fire into this hotness buffer with additive blending, into one consistent thing, and then we apply a single gradient to it before reading it back into RGB. That makes every individual sprite look like this, and composited, like so; it just makes the fire more consistent color-wise.

For the fire's motion, the distortions alone were not enough, so we've also got this sheet of frames we cycle through, called a flipbook, and we had a couple of issues with it. We're animating these very few, cartoony frames of fire, and we first tried stepping through them sequentially, but then these nine frames would loop every second or so, and it looked very loopy. Then we tried picking frames at random, but there's an eleven-point-something percent chance of hitting the same frame twice in a row, which reads to the eye as a lag, and that was horrible. So we use something in between: we choose columns sequentially and rows randomly, which means we don't loop that often and we definitely never show the same frame twice in a row, and that worked out really well for us. To animate between the frames (if I made each flip a random solid color, it would look like this), of course we don't cut between them hard; we fade between them, and we don't just fade uniformly either, we fade using a vertical gradient moving upward with the fire, because why not. And that's fire.

Next up, the lens flares. Unlike the bloom, these are placed very deliberately: you put a lens-flare entity onto a flashlight or something like that, and it flares up.
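The flipbook selection described above (sequential columns, random rows) can be sketched like this. The 3x3 layout matches the talk's nine frames; the function and its indexing scheme are my own stand-ins. Note that with nine equally likely frames, fully random picking repeats a frame back to back with probability 1/9, about 11%, which matches the "eleven-point-something percent" in the talk.

```python
import random

# Sketch of the flipbook frame picker described above. Stepping through a
# 3x3 fire flipbook sequentially loops too obviously, and picking frames
# fully at random repeats the same frame back to back ~11% of the time.
# Compromise: walk the columns sequentially, pick the row at random.

def next_frame(step, cols=3, rows=3, rng=random):
    col = step % cols          # sequential: the column always advances
    row = rng.randrange(rows)  # random: breaks up the obvious loop
    return row * cols + col    # frame index into the sheet

frames = [next_frame(i) for i in range(1000)]
# Adjacent frames always differ, because the column never repeats.
assert all(a != b for a, b in zip(frames, frames[1:]))
```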
We do it this way because we like to keep our HDR bloom settings different from the flashlight's settings, and besides, the flashlight might be smaller than one pixel, so bloom wouldn't pick it up at all. The flare itself is just a quad, a two-by-two sprite there. To get occlusion from the world, the standard thing would be to raytrace toward the screen and check collision a bunch of times, but (a) that would be expensive and (b) we don't want to set up collision trees that the boy, or anyone else, is never actually going to collide with. So what we do instead is sample the depth buffer per vertex, a few times stochastically; that's only four samples rather than per pixel, and we pass the result on to the fragment shader, multiplied with the flare texture. And of course we don't sample at the corners; we sample toward the middle, or rather slightly between the corners and the middle, to get a gradient going across, making the flare look like it has some volume. Since we're sampling at all four vertices anyway, we might as well give them a little variation. And that's basically that.

Next up, some more water, and this time we're not talking about the water rendering itself but specifically the effects on top of the water. Here we've got this patch of foam, with this oily-film look going on, and it's got a bunch of effects, almost the same effects... no, sorry, exactly the same effects as the smokestack. If I take them all off, it looks like this; if I take just the motion off, it's still; if I take the lighting off, it's all flat; and putting the gradient lighting and the motion back in, it looks like this. It blends a bunch of sprites together really well.

Over here we've got this flashlight, and, spoiler alert if you haven't played it, if you go into that flashlight's view, you suffer. So we need to
emphasize this flashlight with a bunch of effects. First of all, the rain. We have a bunch of rain, and we tried post-effect scrolling things first, but that didn't have any good parallax in it; then we tried sprites with scrolling rain textures, but that was a lot of overdraw. What we ended up with was a mesh with individual raindrops in it. That has much less overdraw, but the animation could be more costly, so we do it in a vertex shader: for the raindrops going down, as soon as a vertex hits the bottom of the volume it wraps back to the top and falls again, and that's it. The splashes on the ground, all they do is expand with a random rotation, fade out, and as soon as they're done they jump to a new position and do it again. We drive the animation with time: the fractional part of time loops the falling and expanding, and the integer part of time moves each splash to a new position with a new seed. That's basically it for the rain.

Next up we've got the volume lighting above the water, just to show where the flashlight is, and then this plane on the water showing where the beam hits the surface itself, which is basically Phong specular lighting with a little diffuse lighting underneath. Under the water we have a bunch of the same effects, plus a new one: dust motes underneath the surface, lit up by the spotlight. They move the same way the raindrops do; they scroll across a volume, and when they reach the outer edge they loop back to the start and keep going, again using the modulus, the frac of time. And we've got the volume light underneath as well.
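The wrap-around raindrop animation described above can be sketched like this. In the talk it runs in a vertex shader; here it's plain Python, and the heights, speeds and phases are made-up numbers.

```python
# Sketch of the raindrop vertex animation described above: every drop
# falls with time, and the fractional part of time wraps it back to the
# top of the volume, so the mesh never needs per-particle CPU updates.

def raindrop_y(top, height, time, phase):
    """Y position of one drop: falls from `top`, wraps every time unit."""
    t = (time + phase) % 1.0  # fractional part of time: endless loop
    return top - t * height

# Per-drop `phase` offsets keep the drops from falling in lockstep.
y_now = raindrop_y(top=10.0, height=4.0, time=0.25, phase=0.0)
y_one_cycle_later = raindrop_y(top=10.0, height=4.0, time=1.25, phase=0.0)
assert y_now == y_one_cycle_later  # same place, one time unit later
```

The splashes would use the complementary half of the same trick: the fractional part drives the expand-and-fade, and the integer part of time seeds a new random position each cycle.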
Unlike the one above, of course, this is a different volume: the one underneath has a texture on it, animated like caustics, to make it look more underwater-y. And finally the foam reflection, which is now a refraction instead, just to show exactly where the light is. That's it for underwater.

Above the water, we've got these waves coming off the boy. I could have chosen a better place to show these, because they're very subtle here. The waves just resample the reflection and refraction with some wavy distortion around him, and they're done with particle-based rings, little ring meshes that travel outward. The way we get the distortion normals is very simple: we have the direction around the ring, and the phase from the ring's inner edge to its outer edge, and we take the direction around the ring as the normal and multiply it by a sine sampled over that phase. So it's zero at the start, in the middle, and at the outer edge, strongly negative through the first part and strongly positive through the second, and that gives you a wave normal, essentially.

Now for a different type of water: the whirlpool. This one has much different motion on it, spiraling outward rather than just scrolling across, and the way we do it is with this mesh here, a high-poly mesh. The reason it's high-poly is that rather than calculating these texture coordinates per fragment, we raised the poly count so we can compute them per vertex instead. Now, we could have just scrolled expanding ring textures outward, but we chose this spiraling outward scroll instead, and that makes the texturing somewhat more difficult. The texture we scroll outward looks like this, and the criteria were that it had to look wavy, and it had to tile really well, but in a non-obvious way.
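The ring-wave distortion normal described a moment ago can be sketched like this. The sign convention and the exact period of the sine are my guesses; the talk only specifies zeros at the inner edge, the middle, and the outer edge, with opposite signs on either side of the middle.

```python
import math

# Sketch of the expanding-ring distortion normal described above: take
# the outward direction around the ring and scale it by a sine over the
# ring's phase (0 at the inner edge, 1 at the outer edge), so the
# distortion is zero at both edges and pushes in, then out, across it.

def ring_normal(dir_x, dir_y, phase):
    s = -math.sin(phase * 2.0 * math.pi)  # sign convention is a guess
    return (dir_x * s, dir_y * s)

# Zero distortion at the inner edge, the middle, and the outer edge:
for p in (0.0, 0.5, 1.0):
    nx, ny = ring_normal(1.0, 0.0, p)
    assert abs(nx) < 1e-9
```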
Luckily, there is a good way to tile these things, which you might have seen outside: cobblestone bricks tile perfectly with a wave shape, like water. So if I take a picture of one, start tracing lines over the wave shapes, layer some stock images of waves underneath, and then remove the lines, suddenly you've got a texture that looks like it doesn't tile, but it totally tiles. That makes for a pretty good tiling water texture, you guys.

Finally, we've got this flood, which is a lot more dramatic with sound. It's got a bunch of elements, all stuff we've definitely already talked about. First up, this water volume coming outward, which uses our full water-rendering setup. It's animated using three morph targets: one for before the breach, one for after, and one for in between; it looks like this if you look at just the mesh. The morph targets are exported from 3ds Max with these three water shapes, and we animate between them; that's the animation, and we scroll textures over it the same way we did on the whirlpool. Then there's this thing on the ground, a big decal of screen-space reflections with the same wave texture as before, scrolling the same way as before. What was important about its texture coordinates is that they move fast at the beginning and then ease out toward the far end of the decal, and we do that just by applying a power to the y component of the texture coordinate, so it starts very fast and slows down over time. And then we've got this carpet of particles, lit from behind, just to sell that these light shafts are lighting it, and the same thing for the impact down below.
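The ease-out on the flood decal's texture coordinates (a power applied to the scrolling component) can be sketched like this. The exponent is made up; the talk only says "applying a power" so that the scroll is fast at the start and slows toward the end.

```python
# Sketch of the flood-decal texture easing described above: raising the
# scrolling coordinate to a fractional power makes the wave texture race
# at the start of the decal and ease out toward its far end.

def eased_v(t, k=2.0):
    """Scroll coordinate in [0,1]: fast early, slow late for k > 1."""
    return t ** (1.0 / k)  # derivative is large near 0, small near 1

early = eased_v(0.1) - eased_v(0.0)  # distance covered in the first tenth
late = eased_v(1.0) - eased_v(0.9)   # distance covered in the last tenth
assert early > late                  # fast in, ease out
```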
Finally, we've got some splashes that make it more splashy in the beginning and make the start of it feel more powerful, and that's basically all the components of the flood.

So, in conclusion, if it isn't apparent already: for sampling, we really like blue noise, and you should like it too. We really like temporal reprojection, because it lets us do all these stochastic effects that we otherwise would not be allowed to do. You should dither all your things with a triangular distribution function, and you should expose customizable shaders to artists. If you're using any sort of deferred pipeline, or any pipeline whatsoever, you get cool decals out of it; screen-space reflections are cool and useful, and non-screen-space ambient occlusion is cool and useful as well. Thanks to all these people: our colleagues at Playdead, the people on the Microsoft graphics team, Unity, Double Eleven for making our code fast, and the Twitterverse for helping with this talk. And that's it. [Applause]
Info
Channel: GDC
Views: 131,840
Rating: 4.978488 out of 5
Keywords: gdc, talk, panel, game, games, gaming, development, hd, design
Id: RdN06E6Xn9E
Length: 65min 34sec (3934 seconds)
Published: Wed Dec 21 2016