How Real-Time Computer Graphics and Rasterization Work

Captions
Hello everyone! Today we will be discussing the graphics pipeline. This is going to be the least mathematical video in my series on computer graphics, but it will be the one that links everything together.

The graphics pipeline takes in some resources and outputs a render target. The resources are the triangles that make up a 3D model, as well as textures. The output is the final rendered model. So the question is: what goes on inside the graphics pipeline to transform these triangles and textures into a rendered model? There are multiple stages, and we're going to go over all of them, but for now I want to focus on the fact that some of them are blue and some of them are green. The blue ones are fixed-function stages, which means that we can set a few parameters to define their behavior, but that's all the control we have over them. The green ones all end in the word "shader", and a shader is a little program which we can write ourselves and which runs on the graphics card. Since we can write code to define these stages, we have a lot of control over what they do.

Let's get started with the input assembler. Its input is a vertex buffer, which contains all the vertex attributes for every triangle that makes up a 3D model. The problem is that, since all of this data sits in one big chunk of memory, we can't distinguish the individual attributes. Therefore we have to tell the input assembler which attributes make up a vertex. For example, for this vertex buffer we have a position, a UV coordinate, and a normal. This is not enough, though: we also have to specify how many components make up the vector that defines each attribute. For example, the position is a three-dimensional vector, the UV coordinate a two-dimensional vector, and the normal a three-dimensional vector. You might be thinking that it is redundant to specify the number of components that make up a single attribute; however, in the case of a position you could for example also have a two-dimensional position, and in the case of a normal you could use polar coordinates to pack it into two components.

With this information we are now able to distinguish the individual attributes in the vertex buffer. A computer needs a bit more information, though, since it won't store the numbers in decimal form like I've written them here. Instead it will store them as binary numbers, which means it has to know which format to use to convert them back to decimal numbers. In this case, all the components could be stored as 32-bit floating-point numbers. Once again, that's a little implementation detail that is not really of importance to us. The input assembler can also assign extra vertex attributes, for example a vertex ID, which starts at 0 and increments up to however many vertices there are. The ID takes up one component and gets stored as an unsigned integer, probably of 16 bits.

Every row in our vertex buffer now represents a single vertex, and every vertex in the vertex buffer gets passed on to the next stage, which is the vertex shader. The vertex shader takes in all the attributes of a single vertex and outputs a new set of attributes. To do that, it can use some external data, for example a matrix. We'll discuss those in detail in a future video, but for now all you have to know is that we can multiply the position with a matrix to transform it. So when the position goes into the vertex shader, it comes out as a different vector: a vector that is transformed by the matrix. The vertex shader can also just pass attributes through without modifying them; for example, the UV coordinate usually just gets passed along. It could also introduce new attributes, for example this color attribute. Every vertex that the vertex shader outputs should have a position in the range between negative one and one, which means that every vertex that exits the vertex shader ends up in a cube of two by two by two. For the programmers who are watching these videos, here is the code of the vertex shader I just described.
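(The on-screen code itself isn't captured in this transcript. As a stand-in, here is a minimal C++ sketch of the vertex layout and vertex shader described above; the struct and function names are illustrative assumptions, and a real implementation would be a GPU shader, for example in HLSL.)

```cpp
#include <array>

// One row of the vertex buffer, matching the layout described above:
// a 3-component position, a 2-component UV coordinate and a
// 3-component normal, all stored as 32-bit floats.
struct Vertex {
    std::array<float, 3> position;
    std::array<float, 2> uv;
    std::array<float, 3> normal;
};

// The attributes leaving the vertex shader: a transformed position
// (in homogeneous coordinates) plus a newly introduced color.
struct VertexOutput {
    std::array<float, 4> position;
    std::array<float, 2> uv;
    std::array<float, 4> color;
};

// A 4x4 transformation matrix, stored row-major.
using Mat4 = std::array<std::array<float, 4>, 4>;

// The "vertex shader": transform the position with the matrix,
// pass the UV coordinate through unchanged, and add a color.
VertexOutput vertexShader(const Vertex& in, const Mat4& transform) {
    // Promote the 3D position to homogeneous coordinates (w = 1)
    // so it can be multiplied with the 4x4 matrix.
    const std::array<float, 4> p = {
        in.position[0], in.position[1], in.position[2], 1.0f
    };

    VertexOutput out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out.position[row] += transform[row][col] * p[col];

    out.uv = in.uv;                          // passed through unmodified
    out.color = { 1.0f, 1.0f, 1.0f, 1.0f };  // newly introduced attribute
    return out;
}
```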
The next stage in the pipeline is the tessellation stage, which is a collective name for the hull shader, the tessellator, and the domain shader. The tessellation stage takes in a primitive, for example this square, and it outputs a more detailed version of its input; in other words, the output contains more triangles. Tessellation is a rather advanced topic, so I won't cover it in depth here. In short, though, the way it works is that the hull shader defines the pattern for the output triangles, the tessellator then creates those triangles based on that pattern, and the domain shader then positions the triangles based on some formulas. A potential use case for tessellation is taking in a low-poly model and outputting a high-poly model by introducing more triangles.

The next stage in the graphics pipeline is the geometry shader. It takes in an entire primitive, which is a triangle, and additionally it can also take in the vertices adjacent to that triangle. It outputs a modified version of its input. For example, it could just output its input unchanged, but it could also introduce a new vertex, which turns this triangle into a pyramid. The primitives that enter the geometry shader don't have to be triangles; they could also be lines. The geometry shader can take a line and expand it into two triangles. This is useful when we want to model hair, for example: we can define the hair as a line, but since a line doesn't have a surface, we need a geometry shader to create that surface. One last primitive we can feed into a geometry shader is a single point, and the geometry shader can turn that point into a quad, which is useful for particle systems, for example.

By now our vertices have reached their final positions, and it's time to draw them. That gets done by the rasterizer. The rasterizer outputs a grid of pixels. It takes in every single triangle, for example this blue triangle, and it has to convert that triangle into a bunch of pixels, which would look like this. The more pixels you have, the better the approximation of the triangle, of course.

Let's have a look at the scene we just rendered. If this is the camera, then the camera obviously saw a blue triangle. Now what if there was also a red triangle behind the blue triangle? The problem is that we already rasterized the blue triangle, so if we now also rasterize the red triangle, it will appear in front of the blue triangle, which is of course wrong. Unfortunately we can't solve this problem by changing the order in which we draw the triangles, since that is not possible on a graphics card. The solution to our problem is to also use a second image, which we call the depth buffer or z-buffer. The way this image gets made is by converting the z-coordinate of a triangle's surface into a color: the closer the surface of the triangle is to the camera, the darker the color will be. In other words, according to this z-buffer, the bottom-left corner of the triangle is the closest to the camera. If we now rasterize the red triangle, then only the parts that are not behind the blue triangle will become visible. Notice that we also write the depth values of the red triangle into the depth buffer; we'll come back to how and why this works in just a moment.
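(The video shows this visually; as a rough illustration, here is a small software-rasterizer sketch in C++. It is a simplification under assumed names, not how a GPU actually implements this stage: it tests every pixel for coverage using barycentric coordinates, interpolates the depth across the triangle's surface, and only writes a pixel when it is closer than what the depth buffer already holds. For simplicity the depth comparison happens right here, while in the real pipeline that comparison is performed by the output merger, described below.)

```cpp
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); its sign tells us on
// which side of the edge (a, b) the point c lies.
float edgeFunction(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Rasterize one triangle with screen-space vertices v0..v2 and
// per-vertex depths z0..z2 into a color buffer and a depth buffer.
void rasterizeTriangle(Vec2 v0, Vec2 v1, Vec2 v2,
                       float z0, float z1, float z2,
                       std::uint32_t color, int width, int height,
                       std::vector<std::uint32_t>& colorBuffer,
                       std::vector<float>& depthBuffer) {
    const float area = edgeFunction(v0, v1, v2);
    if (area == 0.0f) return;  // degenerate triangle, nothing to draw

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const Vec2 p = { x + 0.5f, y + 0.5f };  // pixel center

            // Barycentric weights: the pixel lies inside the triangle
            // when all three weights are non-negative.
            const float w0 = edgeFunction(v1, v2, p) / area;
            const float w1 = edgeFunction(v2, v0, p) / area;
            const float w2 = edgeFunction(v0, v1, p) / area;
            if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) continue;

            // Interpolate the depth across the triangle's surface.
            const float z = w0 * z0 + w1 * z1 + w2 * z2;

            // Depth test: only draw if this surface is closer than
            // whatever was already drawn at this pixel.
            if (z < depthBuffer[y * width + x]) {
                depthBuffer[y * width + x] = z;
                colorBuffer[y * width + x] = color;
            }
        }
    }
}
```

With this depth test in place, it no longer matters that the blue triangle was rasterized first: the red triangle's pixels are simply discarded wherever the blue triangle is closer to the camera.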
For now, let's focus on the fact that our triangles only have a single color. What if we want to apply a texture to a triangle, for example? Well, the rasterizer is a stage that interpolates vertex attributes across a triangle's surface, and for every pixel of a triangle it generates, it passes the interpolated attributes on to the pixel shader, which is the stage that calculates the color.

As I just said, the pixel shader takes in all the attributes of a single pixel and outputs a color. It can do that based on a texture and a sampler; we discussed those in the last video, which I highly recommend you go watch. For the programmers, here's code that assigns a color based on the UV coordinate, a texture, and a sampler.

The final stage in our graphics pipeline is the output merger. It takes in the color and depth information of a single pixel; additionally, it also takes in the depth buffer value and the render target color of that same pixel. The output of the output merger is usually just the same color as its input; however, it can also perform blending, which blends between its input color and the color that is already in the render target, which could for example give this pink color. If the depth value of the input pixel is smaller than the value that's already in the depth buffer, the output merger writes the color into the render target and updates the value in the depth buffer. However, if the depth value of the output merger's input is larger than the value that's already in the depth buffer, the output merger won't write that color into the render target, leaving the color unchanged. This principle is what made the red triangle appear behind the blue triangle. And that is it for the graphics pipeline!
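(Again, the on-screen pixel shader code isn't captured in the transcript. Here is a C++ stand-in under assumed names, not the author's actual code: a pixel shader that looks its color up in a texture using the interpolated UV coordinate, and an output merger that performs the depth test and, optionally, blending.)

```cpp
#include <algorithm>
#include <vector>

struct Color { float r, g, b, a; };

// A tiny stand-in for a texture plus a point sampler: UV coordinates
// in [0, 1] are mapped to the nearest texel.
struct Texture {
    int width, height;
    std::vector<Color> texels;

    Color sample(float u, float v) const {
        const int x = std::min(static_cast<int>(u * width),  width  - 1);
        const int y = std::min(static_cast<int>(v * height), height - 1);
        return texels[y * width + x];
    }
};

// The "pixel shader": given the interpolated UV coordinate of one
// pixel, look up its color in the texture.
Color pixelShader(float u, float v, const Texture& texture) {
    return texture.sample(u, v);
}

// The "output merger": compare the incoming depth against the depth
// buffer, optionally blend with the color already in the render
// target, and write the results back.
void outputMerger(Color src, float srcDepth, int pixelIndex,
                  bool blendingEnabled,
                  std::vector<Color>& renderTarget,
                  std::vector<float>& depthBuffer) {
    // Depth test: a larger depth means this pixel is farther away
    // than what was already drawn, so it gets discarded.
    if (srcDepth >= depthBuffer[pixelIndex]) return;

    if (blendingEnabled) {
        // Classic alpha blending: mix the incoming color with the
        // color that is already in the render target.
        const Color dst = renderTarget[pixelIndex];
        src.r = src.r * src.a + dst.r * (1.0f - src.a);
        src.g = src.g * src.a + dst.g * (1.0f - src.a);
        src.b = src.b * src.a + dst.b * (1.0f - src.a);
    }

    renderTarget[pixelIndex] = src;
    depthBuffer[pixelIndex] = srcDepth;
}
```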
If you enjoyed the series, consider becoming a patron on patreon.com/FloatyMonkey. And with that being said, I'll see you all next time. Goodbye!

Info
Channel: FloatyMonkey
Views: 19,712
Keywords: triangles, textures, rasterization, shaders, graphics pipeline, computer graphics, opengl
Id: brDJVEPOeY8
Length: 10min 51sec (651 seconds)
Published: Fri Jan 10 2020