Intro to Graphics 21 - Sampling

Captions
Thank you all for joining another lecture of Introduction to Computer Graphics. Today's topic is a huge and very important one in computer graphics, especially in rendering, but in other parts of graphics as well: sampling. I would like to go over it in the context of rendering, because it really is one of the fundamental things in rendering, so I'm going to start where we left off with ray tracing.

This image is the iconic image of ray tracing. It was first shown at SIGGRAPH in 1979, and when it was shown, minds were blown. People couldn't believe what they saw: how can a computer generate something like this? I can see that not all of you really appreciate the value of this, so let me give you some context. I'm talking about 1979. The PC of 1979 was this thing that had just come out, the Atari 800, with a whopping extensible memory that you could extend all the way up to 48 kilobytes. If that doesn't mean much to you: if you're watching this video at full resolution, 1080p, with 8 bits per RGB channel, one frame is about six million bytes, so you would need something like 128 of these computers just to store one frame of this video. Those are the times I'm talking about. As for video games of the time: Pac-Man came out in 1980, so it hadn't even come out yet, and people were not playing games on their computers or mobile phones; they were playing them on arcade machines like this.

Still not impressed? Two years earlier, in 1977, Star Wars used computer-generated visual effects, and this is the computer-generated shot from that movie. Pretty much everything else you've seen in the first Star Wars film was physical models and camera effects, nothing computer generated; this was the one computer-generated scene, and it was the state of the art in computer-generated imagery in 1977. So when people saw this ray-traced image, with all of its reflections and refractions, minds were blown.

Today we call this type of ray tracing Whitted-style ray tracing. Why Whitted-style? Because Turner Whitted was the first person to present it. Not much later, though, researchers came up with a better way of using ray tracing to generate more realistic effects than just reflections and refractions, and they managed to produce something like this: Cook-style ray tracing. Can you guess when this image was first presented? It's sort of hidden in the image itself:
1984, that's right. This was "Distributed Ray Tracing" by Robert Cook, from 1984, and it sort of opened up the whole space of possibilities for ray tracing. Even by today's standards this is a fairly impressive image: it has a lot of complex components. There are shadows with soft edges, there are imperfect, glossy reflections, and there's obviously motion blur. All of that they could generate using this new type of ray tracing, which was called distributed ray tracing.

After that came the rendering equation, a very fundamental work in rendering that explained what we had been doing with rendering all along, and it also included a form of ray tracing for computing realistic images. That paper introduced a very fundamental algorithm for generating realistic computer images called path tracing. Now, maybe this image doesn't impress you much; it's a bunch of spheres and boxes, not a very impressive scene. But the algorithm itself, path tracing, is still the algorithm we use in computer graphics for generating all sorts of highly realistic imagery. Here are examples from two relatively recent movies, images rendered using that same path tracing algorithm. All sorts of more complicated algorithms have appeared in the meantime, and path tracing has been tweaked and improved in many ways over the years, but it is still the fundamental technique we use for rendering highly realistic images.

And that is the topic of today: how do we start with the simple Whitted-style ray tracing that we discussed in our previous lectures and get to something like path tracing, so that we can generate super realistic images? That is going to hinge on sampling.

Before we get there, I would like to talk about what is actually happening here. When Whitted's work came out in 1979 we didn't have the rendering equation yet, so people didn't explain it in those terms, but I'd like to step back and put it in the context of the rendering equation, because that is something that exists today, we understand it, and we talked about it in our previous lectures. So here is the rendering equation. If you remember, it's an integral over a hemisphere placed over the point we are shading: over this hemisphere we look at light coming from all directions, multiplied by the geometry term and by the surface BRDF, and the whole integral sums all of the reflected light along the view direction, giving us the total light reflected towards the camera. We talked about this extensively, so I presume you have a decent understanding of what this equation does.
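For reference, a common way of writing the rendering equation he is describing is the following (a sketch in my notation; the exact symbols on the slide may differ):

$$ L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} L_i(\mathbf{x}, \omega_i)\, f_r(\mathbf{x}, \omega_i, \omega_o)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i $$

Here L_e is emitted light, L_i is the light arriving from direction ω_i, f_r is the surface BRDF, and the cosine factor (n·ω_i) is the geometry term he mentions.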
Now, when it comes to computing this, we need to do things a little differently. The light that arrives at this point from all directions actually includes two types of light: light that is coming directly from light sources, things that emit light, and light that is coming from other objects in the scene, reflected light. This concept is very important; when I talk about realistic lighting, this is what I'm talking about: computing this integral properly. The one integral explains very well what's going on and what needs to be computed, but when it comes to actually computing it, in any algorithm we typically do a bit of a separation: for this incoming light term we use different techniques for light coming directly from the light sources versus light reflected off of other surfaces.

You have already implemented the light coming directly from light sources: we implemented Blinn shading, and with Blinn shading we had a light source in our scene and we were given the light direction. So we have been handling light coming directly from light sources; we call that direct illumination. But light coming from other objects in the scene is not something we've implemented, and that is the important bit. If you can do that part properly, which is a lot harder than computing direct illumination, if you can get this indirect illumination right, then you can generate super realistic images.

So what we typically do when computing this integral is split the incoming light term into two pieces: one term is the direct illumination, light coming directly from the light sources, and the other is the indirect illumination, light not coming directly from light sources. I take the incoming light term, split it into a sum of direct light and indirect light, and then separate the integral into two integrals; they're just two copies of the same integral. One part covers the direct illumination component, which is the part we've been using, and the other covers the indirect illumination coming from all the other objects. The indirect part is the part that's difficult to compute. The direct part is easy, because if we have just a bunch of point lights or directional lights in our scene, that integral turns into a very simple thing, a simple sum: as we discussed, if I have a number of light sources, I just add up the contributions from all of them and I'm done. So that part is simple. Now I'm left with the indirect integral, and that one is hard.
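Written out in the notation above, the split he describes looks roughly like this (again a sketch):

$$ L_i = L_i^{\text{direct}} + L_i^{\text{indirect}} $$

$$ L_o = L_e + \int_{\Omega} L_i^{\text{direct}}\, f_r\, (\mathbf{n}\cdot\omega_i)\, d\omega_i + \int_{\Omega} L_i^{\text{indirect}}\, f_r\, (\mathbf{n}\cdot\omega_i)\, d\omega_i $$

With only point or directional lights, the first integral collapses to a sum over the lights of intensity times BRDF times cosine, which is exactly the shading loop already implemented in the earlier assignments; the second integral is the hard part.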
So here is, in my understanding, what Whitted-style ray tracing does with that hard part. Here's the trick. First, I'm going to assume that the BRDF is something like a Blinn or Phong formulation; actually it doesn't matter which one you pick, you end up in exactly the same place. So it's a Blinn/Phong-like formulation with a diffuse term and a specular term. That's the first simplification.

Then I'm going to say: for the indirect illumination component, there is no diffuse term. Remember we talked about surfaces with perfect reflections; for a surface with perfect reflections, diffuse reflection of indirect light is not going to happen. Note that I might still have a diffuse term in the direct part; I'm not modifying that BRDF at all. This is only about the indirect illumination component. For the indirect part, the diffuse term is just black, so diffuse reflection simply doesn't happen for indirect illumination, and that term disappears; it's gone.

Next, I'm going to assume a perfectly flat, perfect mirror-like surface. A perfectly flat surface has no roughness; it's perfectly smooth. If that's the case, the alpha parameter, the shininess exponent, needs to be infinity: remember that as I increase alpha, the surface becomes smoother and smoother, so a perfectly smooth surface corresponds to alpha going to infinity. And if I set alpha to infinity, something funny happens: the cosine-to-the-alpha term becomes zero unless that cosine is exactly one. There is only one direction that keeps the term non-zero, the one where the angle is zero, meaning the incoming light is exactly in the perfect reflection direction of our view direction. So when I send alpha to infinity, the specular lobe turns into this: if the incoming light is in the perfect reflection direction, the term is non-zero, just k_s over the cosine; otherwise it's zero.

If you put that back into the indirect integral, the integral essentially disappears: I'm looking at all possible directions, but only one of them counts, the perfect reflection direction; all other directions contribute nothing. So I can drop the integral, and since the incoming direction is now always the perfect reflection direction, the cosine terms cancel each other and I'm left with the specular coefficient times whatever the reflected illumination is. Remember, this is exactly what we came up with last time: look along the reflection direction, compute the incoming light, and multiply it by the specular term. So you can think of Whitted-style ray tracing as a simplification of the BRDF when computing the indirect illumination term. We didn't call it k_s last time, we called it k_r, but I said that if k_r is not given to you, you should use k_s, and from this explanation I hope you have some understanding of why that makes sense: it's supposed to be the same material, the same BRDF, except that we simplified it quite a bit. That is Whitted-style ray tracing, and if you did this in 1979, minds were blown. Today, not so much. Today we actually want to go back and compute this integral properly.
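To summarize the simplification before moving on: with a Blinn/Phong-style specular lobe whose (cos φ)^α factor, as α → ∞, is non-zero only when the incoming direction equals the perfect reflection r of the view direction, the indirect integral collapses to

$$ \int_{\Omega} L_i^{\text{indirect}}(\omega_i)\, f_r\, (\mathbf{n}\cdot\omega_i)\, d\omega_i \;\xrightarrow{\ \alpha \to \infty\ }\; k_s\, L_i^{\text{indirect}}(\mathbf{r}) $$

The cosine left in the limiting lobe cancels against the geometry term, which is why only k_s survives: this is the k_r times reflection-ray-color term used in the Whitted-style ray tracer (a reconstruction of the step he sketches on the slide, not the slide's exact derivation).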
The rest of this lecture is about how we compute this integral properly, and the answer is going to be sampling.

When I talk about sampling, though, I'd like to start with something a little simpler than the whole indirect illumination computation. Let's say I have two triangles like this, and I'm trying to render a very low resolution image of them, something like an eight-by-eight image. Not a very impressive image, and probably the most distracting thing about it is that it's aliased: it has these jaggedy edges, because the resolution is so low. This is an aliased image; it has what we call staircase artifacts, steps the size of a pixel forming the diagonal edges. A much better image could be generated at the same resolution if I use some sort of anti-aliased rendering. This is the same resolution; maybe when the pixels are this big it doesn't look that impressive, but from a distance the first one will look jaggedy and this one will look pretty smooth and nice. That is the kind of image we want to generate, and the reason we get these soft-looking edges is that we picked each pixel's color based on what percentage of the pixel each triangle covers.

So let's look at that. Let's pick one of these pixels, this one over here, and blow it up a bit so we can examine it. As you can see, part of this pixel is covered by one of our triangles and the rest is the background color, but I need to pick a single color to represent it, because I have one color per pixel. A representative color for this pixel would be something like this blend.

The whole point is: how do we compute that? Let's say I have a magical function f — I could call it something else, I'm just calling it f. f(x), where x is a position: given any position, this function tells me the color at that position. It's a continuous function defined over this pixel. So f(x) at a point up here gives me the background color, white, and the same function at a point down there gives me the color of the triangle, blue. Now, if I want to know the perfect color for this pixel, what do I need to do? My definition of perfect here is that the triangle's color should contribute to the pixel's color in proportion to the percentage of the pixel that the triangle covers. Mathematically, what I'm saying is that if I take the integral of this function over the pixel's area, the result of that integral gives me the color of the pixel.

But here's the problem: I said I have this magical function f, but I don't actually have a good, precise definition of f. While I'm rendering I'm getting one triangle at a time, and so forth, so computing this continuous integral is hard.
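In symbols (my notation, not the slide's): if A is the pixel's area, the "perfect" pixel color he describes is the area-weighted average of f over the pixel,

$$ C_{\text{pixel}} = \frac{1}{A}\int_{\text{pixel}} f(\mathbf{x})\, dA $$

so, for example, a triangle covering 30% of the pixel contributes 30% of the pixel's color.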
Even if we did have this function, computing the integral could still be challenging, because f could be a very complex function, and computing such an integral analytically can be very, very difficult. A lot of the problems we deal with in computer graphics involve an integral like this, and computing it analytically can be extremely difficult, sometimes flat out impossible. So what can we do to approximate the result of an integral we can't compute? We do it numerically, and the way we do it numerically is sampling; that's why the title has stayed up there all this time. We convert the continuous integral into a discrete sum over a large number of points; call N the number of samples. I take N samples, and the average of those samples approximates the result of the integral. So I just pick a bunch of points here, a nice regular grid of them, 64 in this case, and this should be fairly intuitive: I can evaluate f at any point in space; my problem was only computing the integral. So I evaluate f at each sample point, count the samples that land on the triangle and the samples that land on the background, and the average of those colors gives me the expected color for this pixel. As N goes to infinity, as I increase my number of samples, I get a more and more accurate approximation of what the color is supposed to be. That is the concept of sampling: I take a continuous integral and convert it into a sum over a finite number of samples. The number N determines how much computation you do, and it also determines how accurate the final result will be.
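Written out, the discrete sum he describes is the Monte Carlo estimate of the pixel integral above:

$$ C_{\text{pixel}} \approx \frac{1}{N}\sum_{k=1}^{N} f(\mathbf{x}_k) $$

where x_1 … x_N are the sample positions inside the pixel; as N grows, the average converges to the integral.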
But this nice grid-based sampling pattern, which may look fine at first glance if you don't think about it too much, is actually a terrible idea, and let me tell you why. Suppose I have a horizontal edge like this; when you're rendering images, horizontal edges happen quite often, this is not an improbable case. As the triangle with this edge moves up, it covers more and more of the pixel. Watch what happens: while the edge is down here the triangle covers none of the samples; then all of a sudden it covers eight samples, and the pixel color jumps from white to this light blue; it moves a little more and suddenly covers eight more samples, and the color jumps again; then eight more, and eight more, until it finally covers everything. You see where I'm going with this: at a few discrete positions the pixel color suddenly jumps from one value to the next.

That is a problem, because as my triangle moves up I only get nine different shades of this blue, which is unfortunate: I'm using 64 samples here, so ideally I should be able to get 65 different shades. Of course, what I'd really like is something continuous, which I'm not going to get, but I should at least get 65 different values, not just nine. Very regular patterns are pretty bad because of this.

We can actually fix this fairly easily: take the sampling pattern and rotate it just a little. You see what happens now: as the edge moves up it hits the samples one by one, so I do get 65 different shades of blue. Perfect, right? Wonderful; I can't possibly do better; this will never cause any problems — unless my edge is tilted just a little, so that it's aligned with my rotated sampling pattern, and then everything falls apart again. So rotating didn't really solve my problems. What can I do? Ideally, I don't want any recognizable pattern at all, because if there is a recognizable pattern, there will be some edge direction for which that pattern fails.

This brings us to a very common technique used in computer graphics for sampling: simply using randomly positioned points, random sampling, otherwise known as Monte Carlo sampling. It is a very fundamental concept, and it is used for solving all sorts of sampling problems. Now, a big topic that I'm not planning to get into here is how you pick this random pattern. Not all random patterns are created equal: some give relatively accurate estimates, others are more prone to inaccuracies with the same number of samples, and how to generate the sample distribution so that on average you get a better estimate of this perfect pixel color is a big topic in itself. There is a whole body of work in computer graphics, still ongoing, on generating sampling patterns that give a relatively accurate estimate of the integral on average. I say on average because whatever sampling pattern you come up with, there will be some case where it's inaccurate; but on average you'd like it to be as accurate as possible, and by picking the pattern carefully you can improve your odds. That's an important, very big topic, and I'm skipping it here, just acknowledging that it exists. The concept, nonetheless, is the same: Monte Carlo sampling. We somehow randomly generate a bunch of samples, and we estimate an integral by averaging them.
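Here is a minimal sketch of the idea in Python (my own illustration, not code from the lecture). The inside-the-triangle test stands in for the "magical" function f; the grid estimator can only change in jumps as the edge moves, while the random Monte Carlo estimator is noisy but tracks the true coverage on average.

```python
import random

TRIANGLE_BLUE = (0.2, 0.4, 1.0)
BACKGROUND_WHITE = (1.0, 1.0, 1.0)

def f(x, y):
    """The 'magical' function: the color at position (x, y) inside the pixel.
    As a toy example, the triangle covers everything below the line y = 0.3."""
    return TRIANGLE_BLUE if y < 0.3 else BACKGROUND_WHITE

def average(colors):
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))

def grid_estimate(n_per_axis=8):
    """Regular grid: one sample at the center of each of n x n sub-cells."""
    samples = [f((i + 0.5) / n_per_axis, (j + 0.5) / n_per_axis)
               for i in range(n_per_axis) for j in range(n_per_axis)]
    return average(samples)

def monte_carlo_estimate(n=64):
    """Monte Carlo: average f at uniformly random positions inside the pixel."""
    samples = [f(random.random(), random.random()) for _ in range(n)]
    return average(samples)

# As the edge y = 0.3 slides up or down, the grid estimate changes in jumps of
# 1/8 (nine possible shades), while the random estimate varies continuously on
# average and gets more accurate as n grows.
print(grid_estimate())
print(monte_carlo_estimate())
```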
Now let's get back to where we were. Our goal was to compute the rendering equation properly, and what we're going to do to compute the indirect part of the integral properly is replace this really complicated, impossible-to-solve integral with Monte Carlo sampling. That brings us to Monte Carlo ray tracing. This was originally called distributed ray tracing, and people came up with other names as well; Monte Carlo ray tracing is the name I'm using here, and when you say Monte Carlo ray tracing, people will understand what you're talking about.

Here's how it works in this context. My problem is computing the incoming light for any given direction. I'm trying to shade this point in space, so I pick N randomly generated directions over the hemisphere: here's one, here's another, and another, and another. I then estimate the integral using the average of the values I get along these randomly generated directions. That's the whole idea. One correction: I think there is a missing pi term in this equation, so don't take it as the exact expression you get when you convert the integral; the area of the hemisphere should also be factored in, which I did not do here. Treat it as a representative equation rather than the exact one you would use to compute this integral correctly.

And just as it matters how you distribute samples over a pixel, it matters a lot how you generate these random directions. You don't necessarily want to generate them completely uniformly at random; you probably want to factor in what the BRDF looks like. That's another big topic in computer graphics: how to generate these random directions so that, on average, you get a more accurate estimate of this integral. Again, I'm not going to go into it, but I'd like you to appreciate the complexity here. You could solve this integral by generating directions completely at random, and that would be fine; it's just that your estimate would not be very accurate and you would need a lot of samples. With a better way of generating the random directions you get, on average, a better estimate with relatively fewer samples. That's another big topic and another important piece of what made realistic rendering possible today; people have looked into this problem quite a bit.
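For completeness, the usual form of the Monte Carlo estimator for the indirect term, including the normalization he points out is missing from the slide, is the following (a sketch in the earlier notation):

$$ \int_{\Omega} L_i^{\text{indirect}}\, f_r\, (\mathbf{n}\cdot\omega_i)\, d\omega_i \;\approx\; \frac{1}{N}\sum_{k=1}^{N} \frac{L_i^{\text{indirect}}(\omega_k)\, f_r(\omega_k, \omega_o)\, (\mathbf{n}\cdot\omega_k)}{p(\omega_k)} $$

Here p is the probability density used to pick the directions; with uniform hemisphere sampling p(ω) = 1/(2π), which is where a factor like the missing pi he mentions comes from. Importance sampling means shaping p to roughly follow the BRDF times the cosine, so the same N gives a lower-variance estimate.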
So what happens when I do Monte Carlo ray tracing like this? With Whitted-style ray tracing I could only do perfect reflections; with Monte Carlo ray tracing I can actually compute imperfect, glossy reflections like this. In this case the surface is not perfectly smooth, so the visible reflections can come from a whole bunch of different directions, and we estimate that integral over directions using Monte Carlo sampling. Well, to be honest, this particular image was not generated using Monte Carlo sampling; it's just a representative image, so don't pay too much attention to it; I'm only trying to show that we can do things like this. This one, though, is an image generated using Monte Carlo ray tracing, or distributed ray tracing as it was called at the time, or Cook-style ray tracing.

Now, what can we do with this? A whole bunch of very interesting things beyond what you see here. For example, if you don't include the incoming light from other objects and just use Whitted-style ray tracing, you get an image that looks like this: we have reflections and refractions, yes, but no other form of indirect illumination; it's just perfect reflections and perfect refractions, and that's about it. This is an image that looks very much like CG; you look at it and you immediately know it's a CG image.

One thing we can do, for example, is fix these shadows a little. You see that these shadows are very sharp; they have very crisp edges, but in reality we don't really get shadows like this. In reality, shadow edges are soft, and that softness depends on the distance from the shadow-casting object to the shadow-receiving object. The reason we get soft shadow edges in reality is that in reality we don't have point lights: we have light sources like the one up here, light sources that have some size. A point light cannot exist in reality, because anything real has a size, and only a point light would generate shadows as crisp as these. Light sources with some area produce shadows that look more like this, with softer edges, and the softness is really a function of the light source and how big it is.

So how do we compute this with Monte Carlo ray tracing? If I have a point light, I send a shadow ray toward it and I'm done; but how do I send a shadow ray to an area light? What do I do? The answer is simple: Monte Carlo sampling. We pick a bunch of random sample points on the light source, send shadow rays to them, and average the results; that gives us an approximation of the illumination coming from the area light. This is the standard technique used today for computing soft shadows, and it is probably the only method we have for accurately computing soft shadows from area light sources. Well, it's not the only way: there are other approximate techniques for softening shadow edges and approximating what the softness would look like. But this is the way to compute soft shadow edges properly, and that is Monte Carlo ray tracing.
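A minimal sketch of that idea in Python (my own illustration; the rectangular light layout and the occluded callback are hypothetical, not from the lecture). The occluded function stands in for tracing a shadow ray against your scene.

```python
import random

def sample_rect_light(corner, edge_u, edge_v):
    """Pick a uniformly random point on a rectangular area light."""
    u, v = random.random(), random.random()
    return tuple(corner[i] + u * edge_u[i] + v * edge_v[i] for i in range(3))

def soft_shadow(point, light, occluded, n_samples=16):
    """Estimate the visible fraction of an area light as seen from 'point'.

    'occluded(p, q)' is a stand-in for a shadow ray: it should return True
    if something in the scene blocks the segment from p to q.
    """
    visible = 0
    for _ in range(n_samples):
        light_point = sample_rect_light(light["corner"], light["edge_u"], light["edge_v"])
        if not occluded(point, light_point):
            visible += 1
    return visible / n_samples  # 0 = fully in shadow, 1 = fully lit

# Usage sketch: scale the light's direct contribution by this fraction.
# With an empty scene nothing is occluded, so the estimate is 1.0:
light = {"corner": (-0.5, 5.0, -0.5), "edge_u": (1.0, 0.0, 0.0), "edge_v": (0.0, 0.0, 1.0)}
print(soft_shadow((0.0, 0.0, 0.0), light, occluded=lambda p, q: False))
```

Points inside the penumbra see only part of the light, so their visible fraction falls between 0 and 1, which is exactly what produces the smooth shadow edge; more samples per shading point means a less noisy penumbra.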
Okay, so I did that: my shadows are soft and so on, but this still doesn't look very impressive. It's nice and all, but it still looks very CG. The real part, the important part that would make an image like this much more realistic, is handling the rest of the indirect illumination. So far we have been making the direct illumination nicer by using Monte Carlo sampling for direct illumination; if we now do Monte Carlo sampling for the indirect illumination, we can get a much better looking image. What's missing when shading these pixels is light that is reflected off of other objects, for example light reflected off the walls of this funny little room — this is actually what's called the Cornell box. If we include the light that is reflected off all the walls, then we get something more like — are you ready? — this.

This image includes a bunch of effects, one of them being light bouncing around the scene: light leaves the light source, hits one of the walls, bounces off, hits another wall, maybe one of the objects in the box, bounces again, hits something else, and finally reaches our camera. This is an image that actually computes that really massive integral of light bouncing around the scene, and as you can imagine it is a very complex, very computationally expensive problem to solve. But if you do this, and do it right, and you have a nicely modeled scene, things start looking super realistic. This is where the secret of realism in computer graphics is hidden: it's the difference between generating an image that looks totally CG and something that could be real. I don't know if anyone would actually build a room that looks like this, but if you did, this could be a photograph. And if you have a more complicated model in your scene, all of a sudden things start looking a lot more realistic. There are other effects here too; for example, over here — can you see my finger? — this is what we call caustics, light that passes through the sphere and gets focused. Effects like this can be computed with Monte Carlo ray tracing; more specifically, this image was generated using the algorithm called path tracing. Remember path tracing: I mentioned it when I showed you the rendering equation paper, because that is where it comes from, and it is the algorithm used today for generating all sorts of realistic images.

Path tracing can be thought of as a special case of Monte Carlo ray tracing. This is the Monte Carlo ray tracing setup: indirect illumination is approximated as a sum over a finite number of samples. What you need to do to get path tracing is very, very simple, but before I get there I would like you to appreciate the complexity of what's going on here. At this point I want to know the amount of indirect light coming from all directions, so I pick a bunch of sample directions and figure out how much light is coming from each of them. Here's the thing: pick one of those directions; how am I going to find the light coming from it? I do what we said we'd do for regular reflections: trace a ray, find what it hits, and shade that point; that gives me the light coming from that direction. But at that point up there I'm doing another shading computation. Am I going to include indirect illumination there too? If I am, then I generate a whole bunch of random directions from that point as well, and for each of those I trace another ray — and do I include indirect illumination there too? You see what happens: I started with one ray from my camera, it hit here, and let's say I pick a small number of directions, say 10. Ten is probably not a big number, right? I want to know the light coming from all possible directions, so 10 is a very small number. So I pick 10 different directions.
Each of those 10 rays goes and hits something, and from each of those hit points let's say I pick another 10 directions. So what happened? I started with one ray from my camera; that one ray turned into 10 rays, and the 10 rays turned into 100 rays. For each of those I find another hit point, where I also compute indirect illumination, so the 100 rays become 1,000, and one more bounce makes it 10,000. With every bounce I add, I get exponentially more rays — with 10 directions per bounce, d bounces means 10^d rays — and at some point this becomes impossible to compute. So with this sort of distributed ray tracing we can only handle a very small number of bounces.

With path tracing, Kajiya's brilliant conclusion for computing this integral was: how about I pick N equal to one? What? Well, okay, computationally, if N is one, I start with one ray, I get one indirect illumination ray, and then another one, and another one, so it's not exploding anymore; there is no exponential growth. Computationally it's manageable. But one? Come on. I say I'm going to approximate the light coming from all possible directions, and my plan is to look at one direction and call it done? How is that approximating anything? If you don't think about it much, this seems like a very dumb idea: how can you possibly approximate the integral with a single random sample? But here's the brilliance: we're not ending there. It's going to be a bit more involved, but by picking one sample I end up with something like this, and I'm going to show you how something this simple can actually give us the result we want.

So let's take a look at how path tracing works. Here's my camera; I'm generating an image of a scene, and I'm going to pick a pixel. Well, that's a giant pixel — okay, this was a very low resolution monitor, so pixels look like this; I have to pick a giant pixel, otherwise I can't show you what's going on inside it. This is one pixel, not an area of the image, just one pixel; I will have a single color value for this entire rectangle. Now I pick a sample position on that pixel, generate a ray through it, and traverse the ray to find what it hits. At that hit point I do shading and all that, but I also approximate the indirect illumination, and since I'm doing path tracing I pick just one random direction — there it is, randomly chosen — and whatever I find along it is my indirect illumination estimate. Let's go on with it: I trace that ray and find another point in the scene. I shade that point, and while shading it I approximate the indirect illumination there as well, so again I pick just one random direction and trace it; and there I pick another random direction and trace that.

What this whole thing is doing, eventually, is generating a path for the light: from the light source there exists a path along which light bounces around like this and arrives at the camera. So by doing this I have explored one possible light path from the light source all the way to the camera. To be accurate, I actually discovered multiple light paths, because while doing shading at every point along the way I'm explicitly looking at the light sources; so I discovered many light paths, not just one. But that's the idea: we picked one sample on the pixel, and we discovered a bunch of light paths corresponding to that one sample.
Now, one sample like this is not going to give me a very good estimate of the indirect illumination — big surprise, right? One sample for indirect illumination is not a good estimate. So what do I do? That was one sample; I just pick another sample position on my pixel, trace it, find another point, pick another random direction there, trace that, and explore those paths. When I'm done, I start with yet another random sample on my pixel, which leads to another point, and another, and another. You see what I'm getting at: even though these are all different samples starting from different positions on the pixel, they all end up in roughly similar areas of my scene. So if you look at all of these randomly generated rays collectively, it is as if I am sampling a bunch of directions over a hemisphere, not for a single point but for a small area. That is the whole idea: I don't have to accurately compute this integral for every one of my pixel samples. Each individual pixel sample will be a very inaccurate estimate, but collectively, once I have enough pixel samples, I will have explored the hemisphere well enough to get a better and better estimate of the illumination. That is the idea of path tracing.

Path tracing is a very versatile algorithm. We have more complicated versions of it, and more complicated algorithms exist in computer graphics, but path tracing is still very commonly used, because it pretty much works in all cases: if you wait long enough, it will converge to the correct result, and in comparison it is a relatively easy algorithm to implement correctly.
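Here is a heavily simplified structural sketch of that loop in Python (my own illustration, not the lecture's code; trace, direct_lighting, sample_hemisphere, make_ray, dot, and the material fields are hypothetical stand-ins for whatever your existing ray tracer provides). The two things it is meant to show are the single random bounce per hit and the averaging over many samples per pixel.

```python
import random

MAX_DEPTH = 5  # cut the single-bounce recursion off after a few bounces

def path_trace(ray, scene, depth=0):
    """One-sample-per-bounce estimate of the light arriving along 'ray'."""
    if depth >= MAX_DEPTH:
        return scene.background
    hit = trace(ray, scene)                      # hypothetical: closest intersection, or None
    if hit is None:
        return scene.background
    # Direct illumination: sample the light sources explicitly at every hit,
    # which is why one camera sample still discovers several light paths.
    color = direct_lighting(hit, scene)
    # Indirect illumination, Kajiya style: N = 1 random direction per bounce,
    # so the ray count stays constant instead of growing exponentially.
    bounce_dir, pdf = sample_hemisphere(hit.normal)
    incoming = path_trace(make_ray(hit.position, bounce_dir), scene, depth + 1)
    cosine = max(0.0, dot(hit.normal, bounce_dir))
    color += hit.material.brdf(bounce_dir, ray.direction) * incoming * cosine / pdf
    return color

def render_pixel(px, py, camera, scene, spp=256):
    """Average many one-sample paths; each is a poor estimate, the average is not."""
    total = scene.background * 0.0               # a zero color of the same type
    for _ in range(spp):
        # jitter the sample position inside the pixel (the pixel-area integral again)
        ray = camera.generate_ray(px + random.random(), py + random.random())
        total += path_trace(ray, scene)
    return total / spp
```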
Of course, for this to work I need a lot of samples from each pixel. What happens if I don't have enough? If I use very few samples, my estimate for each pixel will be inaccurate, which means the image I generate will be very noisy. Here's an example showing how an image converges as I increase the samples per pixel; spp means samples per pixel. It starts with a small number of samples per pixel, which I believe is 100, and it's super noisy. To get a good estimate of the indirect illumination I need a very large number of samples, and as I add more and more of them the image converges toward what it's supposed to be. One problem with path tracing is how this convergence behaves: at first it's pretty good, but then, as you can see, the sample counts go up roughly exponentially while the noise goes down only roughly linearly, so I need exponentially more samples to reduce the noise by the same amount. (This is the usual Monte Carlo behavior: the error falls off like one over the square root of the number of samples, so halving the noise takes about four times as many samples.) So you have to wait a very long time to generate an image that doesn't look noisy.

What made path tracing so popular today is advancement in a topic called denoising, which is an image-space operation. Say I generated an image like this using path tracing; this is what I got. I'm not sure you can see it on your monitors, but it's a fairly noisy image, with dark spots all over the place. With a denoising algorithm that looks at this image, and also at the components that went into it, such as the surface normals and the like, combining and filtering them in different ways, we can get rid of this noise after rendering. We finish our rendering, we're not doing any more rendering, we take this image and process it with a denoising filter, and we get something like this: an estimate of what the fully converged image would look like if we had the time to wait for path tracing to finish. This is a very important and relatively recent development in computer graphics, and it made path tracing very popular, because we no longer have to wait for it to converge; we just render enough to get a good enough estimate of what the scene is supposed to look like. When you look at the noisy image you already have a pretty good idea of what it will look like when it converges, and these denoising algorithms generate exactly that kind of estimate. We have pretty good denoisers today; this is just one example, from Intel, and other companies have denoisers as well; there is quite a good collection of denoising algorithms out there, and it's still an active research topic, but we can already generate images that look fully converged. There it is.

This works using what we call image-space filters, and image-space filtering, along with all sorts of other image-space operations, will be the topic of our next lecture. Next time we'll talk about image-space operations in general, not specifically in the context of denoising, so I'm not going to go into the details of a particular denoising algorithm here. I'll just end by saying that I'd like you to know this exists, and that it is the magic that makes path tracing work in practice even with relatively few rays per pixel; we don't have to go to millions of samples, and in some cases we can produce images that look pretty decent using just a few samples per pixel.

That has been the topic of this lecture; that's what I planned to talk about today, so I'll end it here. Thank you all for joining, and I'll see you all next time. Bye.
Info
Channel: Cem Yuksel
Views: 753
Id: qgdDu-K0pZ4
Length: 54min 59sec (3299 seconds)
Published: Wed Nov 10 2021