Intro to Graphics 18 - Rendering Algorithms

Captions
Okay, so I have to do this formal introduction. I don't know why I'm doing this, but I kind of feel like I need to. Okay, so welcome to another lecture for Introduction to Computer Graphics. If you remember, last time we talked about the rendering equation, and today we're going to continue that topic: today we're going to talk about rendering algorithms. Now, this is not going to be "take the rendering equation and come up with an algorithm to solve that equation." Actually, you will find that the discussion of rendering algorithms is going to be almost completely unrelated to the rendering equation that we talked about. We're going to tie them together later on, much later on. When we talk about rendering algorithms, we're talking about something much simpler than what the rendering equation addresses. The rendering equation talked about shading: light comes to a surface and bounces off. Rendering algorithms deal with more fundamental things. So what do I mean by that? What we want to do with a rendering algorithm is generate a raster image. Let's say we're going to display it on a monitor like this, and the image we're going to generate will be something like some background and some object, hopefully some 3D object. We're going to generate this raster image of whatever you want. But the part that rendering algorithms are mostly concerned about, at least concerned about first, is figuring out which parts of this object correspond to which pixels of the raster image. That's the part that needs to be solved first: figuring out where my object is on this raster image, and which triangle of this model corresponds to which pixel. That's the main problem that needs to be solved by the rendering algorithm.

And then, once we have that, and once we have the information about the surface, we know how to do shading, right? Once we know which triangle corresponds to which pixel, well, we already implemented shading; we know how to do the rest. So rendering algorithms are concerned with the phase right before that.

Talking about the popular rendering algorithms out there today, we can classify them into two main groups: rasterization and ray tracing. In the context of rasterization, I'm going to talk about a number of algorithms: the painter's algorithm, z-buffer rasterization, A-buffer rasterization, and the Reyes algorithm. The other group is formed by ray tracing. You can think of different sorts of rendering algorithms in the context of ray tracing as well, but they're a little more closely tied together. I've listed here ray casting and path tracing; you could list more, but most ray-tracing-related algorithms are more about solving the rendering equation itself than just solving the basic problem. The basic problem is solved by the ray casting component; the rest is concerned with the rendering equation and how to solve the realistic shading and realistic illumination problems. There is a whole bunch of algorithms based on ray tracing, so I'm just going to talk about ray tracing as one thing, and I'm going to go over these other rendering algorithms in the context of rasterization.

I find this sort of discussion very helpful, actually. I think it's important to understand rasterization well so that we can understand ray tracing, and we need to understand ray tracing well too. This whole topic, I think, is a fundamentally important topic. But this lecture is
going to be relatively light: not much math, more about general concepts, so just sit back, relax, and try to pay attention.

Okay, so let's start talking about rasterization first. What is rasterization? We've actually been implementing rasterization, although the rasterization was done for us by the GPU. In broad strokes, what rasterization does is take some vector definition of our scene in the canonical view volume and convert it to a raster image. In this simple example I'm showing one triangle in the canonical view volume. We know how to get there: we've done all these transformations; we take triangles in object space and transform them all the way to view space and then to the canonical view volume. After that, rasterization takes over, and it rasterizes that scene definition, those triangles or primitives, into a raster image very much like this. Now, this is looking okay, but it's a little low-resolution; it barely looks like a triangle. If you don't have that many pixels, of course this is not going to look great; you need a lot more pixels if you want nicer-looking triangle edges. But even with this many pixels we could do a better job: we could do what we call anti-aliasing, in which case we wouldn't have just red and white pixels here; we would have in-between color values as well, and this looks a bit more like a triangle with a softer edge. The softness, of course, is a function of how big our pixels are. We're going to talk about the concept of anti-aliasing, but basically you can think of this as coloring these pixels based on what percentage of each pixel is covered by the triangle. If I draw the triangle over here, you will see that the triangle completely covers some pixels and partially covers others, and if you adjust the color of these pixels based on how much of each pixel is covered by the triangle, you get pretty good anti-aliasing. The term anti-aliasing actually comes from signal processing, but you can think of it as doing a better approximation of what percentage of a pixel is covered by our objects; that's a good way of thinking about anti-aliasing in the context of computer graphics. So in the end we get a nice-looking anti-aliased image like this; we can do that with rasterization.

Now, one of the biggest problems that rasterization-based renderers deal with is what we call visibility. That is, if I have more than one triangle here, if I have maybe another triangle, I now need to figure out: is this triangle in front of the other one or behind it? Figuring that out is going to be one of the important difficulties for rasterization-based renderers. In this case, is the blue triangle in front, or the red triangle? Which one is in front, and what should I do based on which triangle is in front?

So this is the general concept of rasterization. Now let's talk about some specific algorithms using this concept. The first one I'm going to talk about is what is called the painter's algorithm. What could a painter's algorithm be? What do you imagine when I tell you "painter's algorithm"? Let's go through it. Let's say I have this canvas and I want to paint. I'm putting some paint on it: I started with something like this, then I'm adding this piece on top, then some more, maybe some landscape, and then another landscape. Do you see what's happening here? Do you see the pattern?
There's actually a pattern here. What I did is I started from the back, from all the way back, and I kept painting things that are closer and closer to the viewer, to the camera (I don't know if it makes sense to call it a camera here, but whatever). We're painting closer and closer parts over the parts that are at the back. That's the idea: we're painting from back to front. Another piece that's closer, another portion that's closer, and one more that's closer, and one more. So what comes next? Can you guess what comes next? Well, at least one of you did; very good, yes, of course: Bob Ross. If you don't know who Bob Ross is, check it out; I'm not going to tell you. But don't do it now; we're talking about rendering algorithms, don't get distracted. Do check it out later.

Okay, moving on. We have our scene defined with some 3D primitives, and we're looking through this monitor, and we want to form an image on this monitor screen, a raster image, and we're going to do that using the painter's algorithm. The way the painter's algorithm works, as you might have guessed, is that it starts by sorting our triangles from back to front, because we're going to draw them in that order. After I've completed the sorting, I can just draw the triangle at the very back, and on top of that I draw the next triangle, and the next triangle, and so forth. This whole problem of visibility, which triangle is in front of which triangle, is sort of solved by the sorting operation done at the very beginning, and it all works out just fine, except that it's not as easy as you might think to sort the triangles. Of course there is some computational cost to sorting, but the real problem goes beyond cost.
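The back-to-front idea can be sketched in a few lines. In this sketch, triangles are plain vertex lists in view space with the camera looking down negative z, and sorting by centroid depth is one common (and, as we are about to see, imperfect) choice of sort key; all of the names here are illustrative, not from the lecture.

```python
# Hypothetical minimal painter's algorithm. Each triangle is a list of
# (x, y, z) vertices in view space; camera looks down -z, so larger z = closer.

def centroid_depth(tri):
    # Average z of the three vertices: a crude "how far is this triangle" key.
    return sum(v[2] for v in tri) / 3.0

def painters_order(triangles):
    # Draw order: most negative z (farthest) first, closest last.
    return sorted(triangles, key=centroid_depth)

# Two triangles: `far_tri` sits behind `near_tri`.
far_tri  = [(0, 0, -10), (1, 0, -10), (0, 1, -10)]
near_tri = [(0, 0, -2),  (1, 0, -2),  (0, 1, -2)]

order = painters_order([near_tri, far_tri])
# far_tri comes first, so near_tri gets painted over it.
```

The choice of a single per-triangle sort key is exactly where the algorithm gets into trouble: for intersecting or mutually overlapping triangles, no such key produces a correct order.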
The ordering is not always well defined, because I could have geometry like this: my triangles could be sort of overlapping, and even if they're not going through each other, they could still be overlapping in terms of how far they are from the camera. So it's not always easy to tell which triangle is in front of which other triangle, and in that case the painter's algorithm is going to struggle. If it picks one triangle first, I'm going to get this image; if it orders them the other way, I'm going to get this image; but I am not going to get this, the correct result. It's not capable of producing what I'm supposed to see here; it's just going to pick one triangle or the other, because I'm drawing them in order with the painter's algorithm. So it's not giving me a solution to that problem.

Now, you might think that we really shouldn't have triangles that overlap with each other, that it's some weird edge case that shouldn't happen. It doesn't happen... or does it? Can you think of a model that has triangles overlapping with each other? Yes, one of you did. That's right: the Utah teapot actually has triangles that overlap with each other. So if I were to render the Utah teapot using the painter's algorithm, I'm probably not going to get the correct triangle for each pixel, and that's going to be a bit of a problem. So that's the thing with the painter's algorithm; just to reiterate: it needs sorting, and it cannot handle intersecting geometry. That's a severe limitation, and there isn't much we can do about it, except replace it with a slightly, or maybe significantly, better rasterization-based rendering algorithm: z-buffer rasterization.

Now, this is probably the most popular rendering algorithm on Earth, the most frequently used rendering algorithm on Earth. Way more images are
generated using this algorithm, by orders of magnitude, than anything else on Earth, because all of our GPUs use this algorithm. All of our GPUs are designed to run rasterization, and when you're looking at your computer monitor, everything you see on the screen is drawn using z-buffer rasterization; even your 2D windows and everything else are drawn using z-buffer rasterization. So yeah, it's kind of hard to compete with the number of images produced every second on Earth using z-buffer rasterization. This is what we use on GPUs, and not just the GPUs on your desktops or laptops, but also mobile phones; everything that has some graphical interface to it will be handling its rendering operations using z-buffer rasterization. So this is an important algorithm, and it's actually the algorithm you used for implementing our previous projects. We used z-buffer rasterization because we used the GPU rendering pipeline: remember, we had the vertex shader that passed its data to the rasterizer; that rasterizer does z-buffer rasterization, and that's how we've been getting our images.

So let's find out about z-buffer rasterization. Starting from the concept of rasterization, what we're doing with z-buffer rasterization is adding what we call a depth buffer. That is, for each pixel here, I'm going to have the RGB, or RGBA, color value (A being the alpha value; you may or may not have it, it doesn't matter), and I'm also going to store a depth value. The depth value for each pixel tells me how far the triangle that corresponds to this pixel is from my view, from the camera. It will not store a range, of course; it stores that depth value for a particular point, and that point is the center of each pixel. So at the center of each pixel I'm going to have a depth value.

Now, we call that value the z value. Why is that? It should be sort of obvious to you all, I think, because in image space this is x and this is y, so z is going to be towards you: x, y, z. Negative z is the direction that we're looking when we look at an image like this, so all of the z values will be negative. The z-buffer is the buffer that stores all these depth values, or z values, and using those z values we can determine which triangle is in front of which other triangle. For example, if I have another triangle like this, another blue triangle, I can draw that triangle onto this raster image, and by comparing z values I can determine which triangle is in front. If the red triangle is in front of the blue triangle, then the blue triangle only gets the pixels that are not covered by the red triangle. That's what z-buffer rasterization does, and to do it I don't need to sort my triangles at all. I can render them in any order I want, and the z-buffer will help me figure out which triangle is actually in front at the center of each pixel. And with this, it's perfectly okay for my triangles to be overlapping, because I'm doing this comparison for each pixel, separately and independently. Which triangle is in front can be different for each pixel: for some pixels the blue triangle is in front, for some pixels the red triangle is in front, and that's perfectly fine.
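A minimal sketch of that per-pixel depth test, assuming fragments arrive in arbitrary order and z is negative with larger z meaning closer to the camera; the buffer layout and names are illustrative, not from the lecture.

```python
# Minimal z-buffer: one depth and one color slot per pixel. Fragments may
# arrive in any order; the comparison resolves visibility per pixel.

W, H = 4, 4
depth = [[float("-inf")] * W for _ in range(H)]   # initialized to "farthest"
color = [[None] * W for _ in range(H)]

def raster_fragment(x, y, z, c):
    # Keep the fragment only if it is closer than what the pixel already holds.
    if z > depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

# Draw three fragments at the same pixel, deliberately out of depth order.
raster_fragment(1, 1, -5.0, "red")
raster_fragment(1, 1, -2.0, "blue")   # closer: wins
raster_fragment(1, 1, -7.0, "green")  # farther: rejected
```

Note that no sorting happens anywhere; the comparison alone resolves visibility independently at every pixel, which is why intersecting triangles are no problem here.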
So z-buffer rasterization can definitely handle this significant difficulty of the painter's algorithm: you will not have any problems determining which triangle is in front, and overlapping triangles are perfectly fine. Now, there are some nice things about this algorithm and some not-so-great things as well. Just a side note: one thing that happens is that, based on the order in which you draw these triangles, for some of the pixels with overlapping triangles I need to shade multiple times. If I draw the red triangle first, I shade the red triangle, and then I draw the blue triangle and sort of paint over it. So it's very much like the painter's algorithm in that sense, but I don't need to pre-sort anything; I can do the sorting on the fly using the z-buffer. That's the advantage of z-buffer rasterization.

I can also do anti-aliasing here. I could actually figure out what percentage of my pixels is covered by which triangle; this is like the hand-drawn version of anti-aliasing, and it should look something like this. It is possible to do, although it is quite a bit harder; there are some difficulties associated with it. The simplest thing you can do to get an anti-aliased image like this with z-buffer rasterization is to have multiple samples per pixel. It's a very simple, very widely used idea, and it's called super-sample anti-aliasing. In the particular case I'm showing here, I have four samples per pixel. I could be using four or more, any number basically, but the GPUs that implement z-buffer rasterization will only support a few values; I believe they support four, eight, sixteen. I don't know if they go beyond that, I don't believe so, but anyhow it doesn't really
matter whether your GPU supports all of these modes; this is a typical thing to do. Super-sample anti-aliasing is very much like rendering a much higher resolution image and then downsampling it to a smaller image. In this case I am storing four color-plus-depth values per pixel, because I'm using 4x super-sample anti-aliasing; if I use 8x or 16x, I'm going to use that much more storage per pixel. So this is kind of expensive, as you can imagine, because what we call the frame buffer, which contains the depth values and the color values, is going to take up a lot of storage, and that makes the whole rendering process a lot more expensive. I am also going to be doing a lot more shading operations: each one of these samples per pixel has to be shaded, so in this case I am doing four times the shading operations I would otherwise do to compute the colors of these pixels. If I had only one sample, without any anti-aliasing, I would shade each pixel only once (okay, some pixels get shaded multiple times because of overlapping triangles, but beyond that, each triangle invokes at most one shading operation per pixel). With multiple samples per pixel, I'm shading those pixels multiple times, but I can get very, very good anti-aliasing.

Since I'm talking about super-sample anti-aliasing, I must also mention multi-sample anti-aliasing, MSAA (a funny name, actually). It's a mix between not doing anti-aliasing and doing super-sample anti-aliasing. The idea behind multi-sample anti-aliasing is that we can compute the color values just fine using one sample per pixel, but the depth needs to be sampled more densely, so that I can figure out which triangle is in front more accurately.
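Super-sampling, as described above, amounts to rendering at k times the resolution in each direction and box-filtering down. A minimal sketch of that downsampling step, using an illustrative grid of grayscale coverage values:

```python
# Super-sampling sketch: the "high-res image" is a grid of grayscale floats;
# downsample averages each k x k block of sub-samples into one pixel.

def downsample(hi, k):
    h, w = len(hi) // k, len(hi[0]) // k
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average the k*k sub-samples that belong to this pixel (box filter).
            s = sum(hi[y * k + j][x * k + i] for j in range(k) for i in range(k))
            out[y][x] = s / (k * k)
    return out

# 2x2 samples for a single pixel, half covered by the "triangle" (value 1.0):
hi = [[1.0, 1.0],
      [0.0, 0.0]]
lo = downsample(hi, 2)   # the pixel gets an in-between edge color
```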
With multi-sample anti-aliasing, our GPUs will store multiple depth samples, multiple z values, per pixel, but a collection of z values will be associated with one fragment, and that one fragment will be shaded only once; I store a single color value that is associated with several of these sub-pixel samples. So multi-sample anti-aliasing is significantly cheaper, because I don't have to store a separate color value per sub-pixel sample.

All right, moving on; forget about anti-aliasing for a second. One of the reasons why anti-aliasing becomes difficult to do, and why we need these super-sampling or multi-sampling kinds of things, is that with z-buffer rasterization what is really difficult is handling transparency. If I have opaque triangles like this, it's fine; it works just okay. But if I have a semi-transparent triangle, like in this case, things get a bit more difficult. Pure z-buffer rasterization will require some ordering of my triangles: if I order my triangles from back to front, I'm going to be fine. If I just draw the blue triangle first, because the blue triangle is at the back, and then I draw the red triangle on top, I can generate this image using alpha blending without any problems. We covered alpha blending, you know the concept, so we can do the blending here and we're going to be okay: every time a new fragment comes in, I just blend it with the previous fragment, and I can produce this image, no problem whatsoever.
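The back-to-front blending just described is the standard "over" operator applied per pixel. A sketch for a single pixel, with illustrative colors and alpha values:

```python
# Back-to-front "over" compositing for one pixel. Each source layer is
# (r, g, b, alpha); drawing back to front, the new fragment goes over dst.

def over(src, dst):
    r, g, b, a = src
    dr, dg, db = dst
    # Standard alpha compositing: src over whatever is already in the pixel.
    return (r * a + dr * (1 - a),
            g * a + dg * (1 - a),
            b * a + db * (1 - a))

pixel = (1.0, 1.0, 1.0)                    # white background
pixel = over((0.0, 0.0, 1.0, 1.0), pixel)  # opaque blue triangle (back)
pixel = over((1.0, 0.0, 0.0, 0.5), pixel)  # semi-transparent red (front)
# pixel is now half red, half blue
```

Drawing the red layer before the blue one would give a different, wrong result, which is exactly the ordering problem with transparency.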
But if I render these triangles in the reverse order, if I render the semi-transparent red triangle first and then try to render the blue triangle, then I'm going to be in trouble: for the pixels where the blue and red triangles overlap, the depth buffer will already store the depth values of my red triangle. So z-buffer rasterization will say: hey, blue triangle, you're behind the data that already exists at this pixel, so I'm not going to draw you; you're thrown away. And I'm not getting the correct image now, because the supposedly semi-transparent red triangle is completely covering the blue triangle for some pixels. I am not getting the image that I should get. That is a problem with z-buffer rasterization that you have to find different ways of dealing with.

So, to summarize: z-buffer rasterization can handle intersecting geometry just fine; we don't have the problems the painter's algorithm was suffering from. But it has trouble figuring out which triangle is visible, what we call the visibility problem, when it comes to semi-transparent primitives, semi-transparent triangles. It needs sorting for transparency, and in that case, just like with the painter's algorithm, if you sort your semi-transparent triangles and draw them from back to front, you're going to be okay; but it requires that sorting. There are methods out there that work on the GPU, you might have heard of some of them, that do what we call order-independent transparency. They sort of get around z-buffer rasterization; they don't actually do z-buffer rasterization, because that's the only way to get around this. What they do is closer to another rasterization-based rendering algorithm: A-buffer rasterization. A-buffer rasterization is very closely related to z-buffer rasterization, but the important thing here is that it can handle order-independent transparency; the cost is that it requires more memory, and dynamic memory allocation. A-buffer rasterization is an algorithm that is used a lot for offline
software rendering, because you can get very high-quality images with A-buffer rasterization: it can give you super high-quality anti-aliasing, and you can do that very cheaply, but it has this extra cost of handling dynamic frame-buffer memory. The way it works is actually very similar to the z-buffer, except that for each one of my pixels I'm not just storing one color-plus-depth value. I'm going to store depth plus color plus the coverage, that is, what part of the pixel my primitive is covering, and I'm going to store a linked list of the different fragments that correspond to that pixel. For each pixel I have a linked list of fragments, and I keep that linked list around until the rendering is done. That's the reason why it requires a lot more memory, and dynamic memory, while you're rendering: it needs to maintain this linked list. And because it maintains this linked list, I can now draw things in arbitrary order. Let's say I have two fragments for this particular pixel (I'm not showing the background; I can have as many fragments as I want, it doesn't matter), and I'm drawing my next triangle, which is sort of behind the red triangle. With the A-buffer I can easily put the data for my new triangle in between the existing fragments. I'm not losing any information, I can get perfect anti-aliasing, and it works beautifully, without creating any artifacts, and I do not need to do any kind of super-sampling or even multi-sampling. But I need to manage this dynamic memory.
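A single pixel's fragment list can be sketched with a sorted list standing in for the linked list; the fragment structure and names are illustrative, not the A-buffer's actual layout.

```python
# A-buffer sketch for one pixel: keep fragments sorted by depth, resolve later.
import bisect

fragments = []  # per-pixel list of (depth, alpha, color); larger depth = closer

def insert_fragment(z, alpha, color):
    # Insert in depth order: arbitrary draw order still ends up sorted.
    bisect.insort(fragments, (z, alpha, color))

insert_fragment(-2.0, 0.5, "red")    # front, semi-transparent
insert_fragment(-9.0, 1.0, "white")  # background
insert_fragment(-5.0, 1.0, "blue")   # drawn last, lands in between

# Once all primitives are drawn, resolve the pixel front to back (or back to
# front with "over" blending): closest fragment first.
front_to_back = [c for _, _, c in reversed(fragments)]
```

Because the list stays sorted no matter when a fragment arrives, transparency works in any draw order; the price is exactly the per-pixel dynamic storage discussed above.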
And this sort of thing actually happens a lot. You may think it only happens when two triangles overlap, but it actually happens whenever you're using anti-aliasing and you have two triangles next to each other, sharing an edge: along that edge I am going to get multiple triangles corresponding to the same pixel, and you need to handle those cases really well. They happen all over the place when you're rendering, so this is not a rarity; you're going to have a lot of pixels with multiple fragments, especially if you're rendering high-quality scenes with lots and lots of very small triangles. This sort of thing can be very expensive, and that's why it's used for offline rendering, not for GPU rendering. Although, you know, there are variants of this idea implemented on GPUs today to give some limited form of order-independent transparency, so it's possible to implement parts of this with modern GPUs, although you're then sort of disabling the z-buffer component of your GPU rasterization and replacing it with a software-handled, A-buffer-like algorithm. So: not supported by GPUs, at least natively, although you can do a lot of things in software on the GPU. All right, so this was the A-buffer.

Another very popular rendering algorithm is called Reyes; that is short for "Renders Everything You Ever Saw." This is actually a very, very old image, from the 1980s, rendered at Pixar back then using the Reyes rendering algorithm. The idea is actually quite similar; it's still rasterization. Back in the day, in the 80s, people were using all sorts of different data structures for representing 3D models; we were not all using polygonal meshes everywhere. People were using other types of objects as well, for example Bézier patches or NURBS surfaces; they were a lot more popular than they are today. The idea with Reyes is that you take a surface defined in whatever representation you
like, and then you dice it into smaller and smaller and smaller pieces, all the way down to something very small, smaller than a pixel; those are called micropolygons. So with the Reyes algorithm, you take a primitive and dice it all the way down to these super-small micropolygons, and then you can figure out the visibility of the micropolygons; you work with sub-pixel-level operations to figure out visibility. So it's still the same idea; the important part is this aggressive subdivision. Instead of taking a giant triangle and figuring out all of the pixels it intersects, the Reyes algorithm takes that primitive and dices it up into very tiny primitives and then figures out where those are relative to each other. It's still the same idea, but algorithm-wise it's very different.

As you can guess, this is actually quite expensive, but it allows you to add things like displacement mapping: you can take each one of these micropolygons and move it based on some texture-space value, so your primitives don't have to be flat primitives, or even simple curved primitives; they can have some really complex shapes. This allows you to relatively easily define very complex geometry. And in the end, yes, this is an expensive rendering algorithm; it's not suitable for rendering on the GPU, so it's not implemented on GPUs today, but it is used for rendering even today. Here are some more recent images rendered using RenderMan. RenderMan is Pixar's rendering software that they use for rendering their movies, and RenderMan is a Reyes renderer. There are other Reyes renderers out there as well, but RenderMan is the one that sort of started this whole idea, and all of its data structures and everything are defined
around this concept of Reyes rendering. So it's a very, very popular rendering algorithm, even used today; but today it's often used in conjunction with ray tracing, because ray tracing allows us to do a lot more than just figuring out what triangle corresponds to what pixel of the screen. That's what makes ray tracing really valuable. RenderMan was actually one of the last renderers to adopt ray tracing, because it doesn't really fit well with Reyes-style rendering, but eventually they did, and so they're using it as well. So I'm going to use that as the segue to talk about ray tracing: let's see what this thing is and why it's important.

Now, I'd like to talk about ray tracing in the context of rasterization; I find that quite helpful. So I'm going to start by comparing rasterization and ray tracing. Rasterization is what we've been talking about, and if you think about the algorithm of rasterization, for any of the algorithms we discussed, it sort of looks like a for loop: we start with the primitives, there's a for loop going through all of our primitives, and for each primitive I figure out which pixels of my raster image that primitive corresponds to. Different rasterization-based algorithms handle that somewhat differently, but nonetheless they all do something like this: they all have something like a for loop that goes over all the triangles and figures out which pixels correspond to each triangle. Ray tracing does the opposite: it sort of flips the rendering process, and it starts with the pixel samples. For each pixel sample, ray tracing will find the closest primitive. You can think of this as almost the inverse way of doing rendering, the inverse operation that actually produces the same result.
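The two loop structures can be put side by side. Coverage and depth are faked with plain data here; everything in this sketch (primitive tuples, sample indices) is illustrative.

```python
# Rasterization vs. ray tracing as loop structures. A "primitive" is
# (name, depth, set-of-covered-samples); larger (less negative) z = closer.

prims = [("red", -2.0, {0, 1}), ("blue", -5.0, {1, 2})]
samples = [0, 1, 2, 3]

# Rasterization: outer loop over primitives, find the pixels each one covers.
raster = {}
for name, z, covered in prims:
    for s in covered:
        if s not in raster or z > raster[s][1]:   # per-pixel depth test
            raster[s] = (name, z)

# Ray tracing: outer loop over pixel samples, find the closest primitive.
traced = {}
for s in samples:
    hits = [(z, name) for name, z, covered in prims if s in covered]
    if hits:
        z, name = max(hits)                        # closest hit = largest z
        traced[s] = (name, z)
```

Either way the same raster image comes out; only the nesting of the loops, and therefore where the visibility question is asked, is flipped.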
It's the inverse operation that actually does the same thing: they're both producing a raster image, the same output, so I don't want to call ray tracing the inverse of rasterization, that's not quite true, but it performs the backwards operation of what rasterization does. Rasterization starts with the vector image, and from the vector image it goes to the pixels; ray tracing starts with the pixel samples, and from the pixel samples it tries to figure out which primitive each pixel sample corresponds to. So I'm going to show you this slide that we looked at earlier, when we were talking about transformations. With rasterization I have the perspective transformation here, and the camera set up over here, and we know what rasterization does: I take each triangle, figure out where it falls on the raster image, and draw it on the image; then I take the next triangle and draw that one, and so on. Whatever rasterization algorithm we use, that's the procedure we follow. With ray tracing, we start with all the triangles at once, the entire scene. Actually, we don't even need to define this perspective projection volume; we don't need this linear perspective at all, so I'm just going to throw it away. It doesn't matter. What's important is where my screen is, where my pixels are in 3D space. So this is my pixel sample, and if I know the position of my pixel sample in this camera space, I can generate a ray from where my camera is through that pixel. The next thing I do is follow this ray: I trace this ray and figure out where it would intersect with my first primitive. In this case it intersected with the red triangle, so I found the closest point, the closest point in the scene that I will be seeing
through this pixel sample. Then I can do my shading operation, figure out what the color is, put that color in for that sample, and I'm pretty much done. One of the beauties of this sort of rendering is that handling semi-transparent objects becomes quite trivial. If my red triangle is not opaque, so it's semi-transparent and I'm seeing through it, then all I've got to do is continue tracing that ray: I find the next intersection, shade that next intersection, and do the alpha blending with this pixel over here. Just to be clear, we typically do alpha blending from back to front, but you don't have to; you can do alpha blending from front to back as well, as long as you know the order. You can go either front to back or back to front, but you can't mix and match: you can't take the fragments in random order and try to do alpha blending. Either ordered direction is fine, and with ray tracing we can easily do it from front to back, and we can continue until the pixel is completely covered and there's no see-through object left. So that's the idea of ray tracing. To form the image, I have to do the same thing for all of the pixel samples that form my raster image. And I don't have to use one sample per pixel: if I want nice anti-aliasing, I can have multiple samples per pixel, just like with z-buffer rasterization, or I can have just one sample per pixel; it doesn't matter. For each one of my pixel samples, I do the same operation: I send a ray through it, figure out where the ray intersects with my objects, and shade those points, and eventually I get my final image, like this. As you can see, it handles the semi-transparency very nicely as well, so it all works out and everything is great. But this is not what makes ray tracing super important. Yes, it can handle transparency, and yes, that is definitely an advantage, but it actually gives us things that are much more important than that. One of the biggest advantages of ray tracing is that it allows us to do very realistic shading, because we will be able to use the same concept, the same mechanism, for very realistic shading operations. This is not something that rasterization-based renderers provide, but ray-tracing-based renderers give us this ability: using the same idea, just sending rays into the scene to figure out where things are, we can do all sorts of interesting operations to get realistic shading. That's how we produce realistic-looking images, and that's why ray tracing is the standard for offline, high-quality rendering in computer graphics. Any of the super realistic, lifelike images that you see, I can guarantee you that they're rendered using ray tracing; it's very, very hard to get things looking realistic using rasterization-based renderers. With ray tracing, there are all sorts of algorithms that use the concept of ray tracing to get realistic images. Let me give you a very simple example. Here's my scene: in my scene I have a teapot and a flat plane underneath, and I'm rendering with ray tracing. So here's one of my pixel samples, and I'm going to generate the ray that goes through that pixel sample. I'm going to call this ray a primary ray, because it's generated from the camera; all rays generated from the camera that go through pixel samples, we call primary rays. I trace this primary ray, and it's going to hit something in my scene; let's say it's hitting this plane over here.
At that point, when I'm doing shading, I can generate another ray, for example a ray like this. This is what we call a secondary ray. There's no tertiary or anything; we only have primary and secondary rays. Secondary rays are all rays that are generated while doing shading: any ray we generate that is not generated from the camera, we call a secondary ray. In this case I'm using the secondary ray to figure out reflections. For example, if this plane were a mirror-like material, then I would see the reflection of my teapot, or whatever else is in the scene. So I'm going to use the same mechanics of ray tracing to figure out the reflections I will be seeing on this object: I generate a new ray with the reflected direction, traverse that ray, find where it hits, and shade that point. This way I can get very nice-looking, proper reflections on my reflective surfaces. This is where the power of ray tracing is hidden, because rasterization-based renderers will not give us any sort of help to do things like this: a rasterization-based renderer takes a triangle at a time, puts it on the screen, and is done with it. Anything else you want to do, you're on your own; your rasterization-based renderer will not help you at all. But a ray-tracing-based renderer allows you to generate rays and do all sorts of different queries by generating secondary rays during shading, and that's where the power of ray tracing is hidden. Those secondary rays are used for all sorts of purposes. They can be used for reflections, or for refractions: if there are semi-transparent objects, rays can refract through refractive objects, again something we can't really do with rasterization. They can handle shadows, very nice-looking realistic shadows, and even soft shadows we can get with ray tracing.
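Generating that reflection ray uses the standard mirror formula, r = d - 2(d.n)n, where d is the incoming ray direction and n is the unit surface normal. A minimal sketch:

```python
# Secondary-ray direction for a perfect mirror: r = d - 2 (d.n) n,
# with d the incoming direction and n the unit surface normal.

def reflect(d, n):
    """Reflect direction d about unit normal n (3-vectors as tuples)."""
    dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2]
    return tuple(d[i] - 2.0 * dot * n[i] for i in range(3))

# a ray coming down at 45 degrees onto a floor with normal +y
# bounces back up at 45 degrees
r = reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))
# r == (1.0, 1.0, 0.0)
```

The reflected ray's origin is the hit point itself, typically nudged slightly along the normal in practice so the new ray doesn't immediately re-intersect the surface it started from.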
With ray tracing we can also get realistic illumination: I'm talking about global illumination here, the idea that light bounces around in the scene. It's a very, very important concept for generating realistic images. So we can get realistic illumination, and we can solve our rendering equation properly: ray tracing gives us the mechanism to solve our rendering equation, numerically, but properly, so we can get very realistic-looking images. That's where the power of ray tracing is. So, going back and comparing rasterization and ray tracing again, you could say, well, if ray tracing is so powerful, why are we using rasterization, z-buffer rasterization, on all our GPUs, and not ray tracing? What's the deal? Now, you might have heard that some GPUs today give us some limited form of ray tracing, but they're still designed for rasterization; they still do rasterization, and the graphics APIs still work with rasterization, even though they have some ray tracing capabilities. I'll tell you more about that in a little bit. The reason is that rasterization is fast and ray tracing is slow, and here's why. "For each primitive, find the pixel samples": that's a very fast operation. My pixel samples are on a grid; give me a triangle, and I can very efficiently tell you where the pixel samples are. But what's the operation on the other side? It says, for each pixel sample, find the closest primitive in my entire scene. Well, I can have millions of primitives, tens of millions, maybe hundreds of millions of primitives in my scene. Finding that one primitive that corresponds to that one pixel is inherently expensive, and I need to do that for each one of my pixel samples. That's why this part is a lot slower. But the nice thing with rasterization is not just the
difference in speed between these two operations. Think about what's most expensive in today's computers: the most expensive operation is not the computation itself; it's data movement, how you access your data. We have large memories in our computers today; our DRAMs are giant, especially in comparison to how big they were in the past, but DRAMs are designed to optimize data accesses that are contiguous. If you're accessing your memory linearly, you get super high performance out of your DRAM, but if you're accessing your memory randomly, performance suffers. And this search operation, "find the closest primitive", is going to access memory sort of randomly, randomly meaning in an unpredictable manner from the perspective of the memory system, so the performance of our memory system is hindered here as well. Also, I'm doing this search for each sample, this crazy random search, instead of going through all of my scene once in one contiguous pass. So yes, there's a speed difference between these two loops, one fast and one slow, but when you compare how they access the scene data, the difference is really drastic: rasterization accesses the scene in a much more efficient way, at least for the computers we have today. But there's a limit to that. With ray tracing, we have to improve this search operation, because if I'm going to look at each and every primitive every time I have a ray, this is going to be ridiculously slow; we can't do any ray tracing if I'm going to look at each and every primitive for every ray.
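A key building block for avoiding that exhaustive per-ray test is a cheap ray-versus-bounding-box check, the classic "slab" method: if a ray misses a box enclosing a whole group of primitives, everything inside the box can be skipped at once. Here is a sketch, assuming for simplicity a ray direction with no zero components (a real implementation handles those edge cases):

```python
# Ray vs. axis-aligned bounding box: the "slab" test that spatial data
# structures run at each node to prune whole groups of primitives.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """inv_dir is 1/direction per axis (precomputed once per ray).
    Only counts hits with t >= 0, i.e. in front of the ray origin."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        t1 = (box_min[i] - origin[i]) * inv_dir[i]
        t2 = (box_max[i] - origin[i]) * inv_dir[i]
        if t1 > t2:
            t1, t2 = t2, t1          # order the slab entry/exit distances
        tmin, tmax = max(tmin, t1), min(tmax, t2)
    return tmin <= tmax              # the three slab intervals overlap

o = (-5.0, -5.0, -5.0)
inv = (1.0, 1.0, 1.0)                # ray direction (1, 1, 1)
hit  = ray_hits_aabb(o, inv, (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))   # True
miss = ray_hits_aabb(o, inv, (10.0, -1.0, -1.0), (12.0, 1.0, 1.0))  # False
```

In a tree of such boxes, every failed node test discards all the primitives below it, which is where the roughly logarithmic behavior discussed next comes from.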
So what we typically do with ray tracing is build spatial data structures, spatial partitioning structures. You can think of these as something like a binary search tree: we build tree structures that accelerate this "find the closest primitive" search, so that we don't have to touch all of our scene data. But then we end up accessing our scene data randomly; this random access mostly comes from the fact that we're using a tree structure to accelerate the search. Over on the rasterization side there's no such thing; I'm linearly going through my entire scene. So if my scene contains a lot of triangles, at some point rasterization is going to start suffering, because it has linear complexity in the number of primitives in my scene: I rasterize each of my primitives one by one, so it's linear. But theoretically speaking, ray tracing has sort of logarithmic complexity, because the search trees we build to accelerate the "find the closest primitive" operation give us logarithmic complexity in the number of primitives in our scene. So as we render scenes with more and more complexity, really a lot of triangles, a lot of primitives, at some point ray tracing is going to start catching up. People have been making this argument for a long time, but it turns out you really need a lot of triangles for this to actually be effective, so for most practical applications rasterization still gives us faster rendering performance. Nonetheless, as I mentioned earlier, with ray tracing I can do a lot of things that help me generate realistic-looking images; with rasterization I have no such thing. Rasterization is just doing
what the primary rays can do. And it turns out, when you're doing rendering and you have these secondary rays to handle all sorts of reflections and whatnot, the cost of those secondary rays dominates, so the extra cost of handling with ray tracing what rasterization could have done with primary rays becomes negligible. That's partially why ray tracing is pretty much the standard for offline, high-quality rendering today. Here are some examples. This one is generated using the Arnold renderer; the background, I believe, in this case is the rendered part. You look at these images, rendered using a lot of rays per pixel, and in the end you get something that is really, really difficult to differentiate from reality. Here's another example, again using the Arnold renderer. It's not possible for me to look at this image and tell you which things are rendered and which things were obviously captured by a camera; it's impossible to tell. I don't know if this is actually 100% rendered; maybe there are things here that were captured with a camera, probably there are, but I don't know. This one, I believe, is rendered using V-Ray, another great renderer, also a ray-tracing-based renderer; I believe this is a 100% rendered image, but maybe there are a couple of photographs in the background, I don't know. The whole point is that you can get really super realistic-looking images. I'm showing you examples from two renderers here, but there are a whole bunch of ray-tracing-based renderers, because that is the standard for offline, high-quality rendering in computer graphics today. But one of the things people do is combine rasterization and ray tracing. So I talked about Reyes and RenderMan,
and I told you that RenderMan now has ray tracing support, because you kind of need ray tracing support to be able to figure out a solution to the rendering equation: for all these secondary rays, rasterization is not going to help us at all; we need ray tracing. This is a very typical setup, actually. Back in the day, when ray tracing was too slow to handle and all of the renderers were using some sort of rasterization, this setup was quite common: you would handle all of the primary visibility using rasterization, some sort of rasterization, could be z-buffer, could be A-buffer or Reyes, and then on top of that you would add ray tracing support for handling secondary rays, which gave you reflections, refractions, shadows, and realistic illumination. All these secondary effects were handled using ray tracing. This is a very common setup; even today it's used. But in most cases, when you're doing high-quality rendering with a really expensive scene, with lots and lots of triangles, these secondary rays are where you spend most of your render time, almost all of it. So there's no point in handling primary visibility with rasterization: if you're going to spend all of your render time on secondary rays anyway, you might as well handle that part using ray tracing as well. That's one of the reasons why ray tracing is used for rendering everything in today's high-quality offline rendering; there's really no point in dealing with rasterization there. Also remember that rasterization has a different kind of complexity, different kinds of bottlenecks, and when you're rendering really expensive scenes it may actually be less efficient, and the way it uses memory is also less efficient. It also requires linear projection: for example, you cannot do fisheye lenses with rasterization. You can reproject the image, you can
generate an image using rasterization and then mess with it to get some sort of fisheye effect, but that's not what I'm talking about: you cannot directly render a fisheye-lens image using rasterization. But with ray tracing, if you handle primary visibility using ray tracing, you can do anything you want. So it's a lot more powerful, and that's why high-quality offline rendering is done using ray tracing. Now, I want to clarify one thing about ray tracing, because this is a part where people can get a little confused; if you were previously confused, don't worry about it, it's quite normal. You can implement ray tracing in software, or you can implement ray tracing in hardware. This is pretty much true for any algorithm: you can implement it by writing software for it, or you can design a custom chip that implements the algorithm for you. For example, the z-buffer rasterization that exists on our GPUs is implemented in hardware; we have physical hardware units that do z-buffer rasterization for us. But you could do z-buffer rasterization in software as well, and a lot of software renderers, for example ones that do the A-buffer, handle rasterization in software. The same goes for ray tracing: you can implement it in software or in hardware. Now, our software can be designed to run on the CPU, which is quite typical, but I can also write software that runs on the GPU. So when I talk about ray tracing on the GPU, that might mean that I'm writing software that runs on the GPU, or it might mean that my GPU has specific hardware support for ray tracing, in which case ray tracing runs on specific physical hardware units that accelerate it. Those are two different concepts; I just wanted to clarify this. For our upcoming project about ray tracing, we're going to do ray tracing on
the GPU, but we're going to do it in software: we're actually going to do ray tracing within a fragment shader. It's going to be kind of a funny way of implementing ray tracing; I'll talk more about that later. It's going to be a very cool project, and I'm actually quite excited about it. So that is what we're going to do: we're not going to be doing GPU ray tracing using the hardware units that exist on some modern GPUs; that's a very different thing. I can talk about that as well, but that's not what we will be doing. That would be using physical hardware units that do the ray tracing for us; instead, we will be implementing ray tracing ourselves in software, but our software will be running on the GPU. Okay, so that's the distinction. Now, I also put "in hardware" here: GPU ray tracing is something that exists today, and it is ray tracing support on GPUs, but the amount of ray tracing we can do on the GPU is somewhat limited. Again, GPUs are designed to do rasterization. They can do ray tracing, but this sort of ray tracing is more in line with handling secondary effects: we're expected to handle primary visibility using rasterization and add the secondary effects using ray tracing. We don't want to handle the primary visibility using ray tracing just yet, because ray tracing is still a little too expensive, and we get better performance when we use it together with rasterization on the GPU; that's what people prefer to do here. Now, at the University of Utah, in the Hardware Ray Tracing group, we are working on a new type of GPU that is designed for ray tracing and ray tracing only. I'm a proud member of that group; we've been working on this project for quite a few years now, and we have some really successful projects that manage to achieve higher performance
than existing GPUs. So yeah, I had to mention that. This is not a product that exists; this is just a research project where we're figuring out different ways of designing GPUs: throw away all the rasterization-related stuff that we don't need, and design a GPU around ray tracing only. Actually, our GPUs run C++ code, so that's one of the nice things about it. I just wanted to mention it because I've been working on that project for quite some time, and people who previously worked on this project at the University of Utah went on to industry and spearheaded the efforts that gave us the GPU ray tracing hardware we have today. So it's yet another graphics thing that sort of spun out of the University of Utah; I just had to mention that. All right, so that's what I planned to talk about today, and this is where I'm going to end it. Thank you all for joining, and I'll see you all next time.
Info
Channel: Cem Yuksel
Views: 918
Id: 0WrzyD8nBlk
Length: 64min 13sec (3853 seconds)
Published: Sat Oct 30 2021