Buffers in OpenGL | How to Code Minecraft Ep. 2

Captions
3D games encompass a massive range of topics, from physically based rendering, to calculating the physics of a simulated world, to just figuring out whether the player is touching the ground. One of the fundamental structures in OpenGL that helps build these immersive 3D worlds is the buffer. At the end of the day, 3D games are simply a transformation of some input data into some output data, and today we will learn how to represent that data on a graphics card with the help of OpenGL. Like I said before, buffers are a fundamental concept in OpenGL, so if you understand buffers conceptually and how they work, you already know most of what there is to learn in OpenGL. With that said, what is a buffer, and how can we use one?

If you think about a 3D game, you see the world represented on a 2D screen. The world is actually represented in your program as a series of mathematical calculations that end up creating some sort of game world space. So how do we take the 3D game world and convert it to the 2D screen to display it to the user? We can use some linear algebra to transform all these objects into a 2D image. This process is called rasterization, and the GPU is very good at it. In OpenGL, the way we transform this data into a 2D image is by going through the graphics pipeline. There are a few stages to this: passing data to the vertex shader, optional tessellation, optional geometry shader processing, primitive assembly, rasterization, and finally fragment shading and the final tests. Let's talk about each of these stages briefly.

The very first thing OpenGL needs is our vertex data. Vertex data is simply the data about the 3D points that make up our 3D objects. These points can carry metadata, such as which texture is applied at this point, what color the point should be, and whatever other arbitrary data you want to attach. One common feature among vertex data is a position.
You don't have to send position data if you don't want to, though. This vertex data is then processed in a required shader stage called the vertex shader. The vertex shader is where we perform the mathematical calculations that transform vertices from world space into 2D screen space. If you're familiar with linear algebra this should be fairly intuitive, but if you aren't, you don't need to understand the details to understand the overall process.

The way we transform vertices from world space to screen space is typically through a series of matrix multiplications. The programmer will typically pass in the object's transformation matrix, a projection matrix, and a view matrix, and then we multiply all of these together with the object's model data to transform it into 2D screen space. This is great, but it's absolute nonsense if you don't know what any of these terms mean. What's a transformation matrix? A projection matrix? And what the heck is a view matrix?

I like to think of the projection matrix as a description of how you would like to project the world onto a 2D screen. Think of a projector: it projects a film onto a 2D wall, which is almost the same thing as a projection matrix. The projection matrix typically contains details such as the field of view, which describes how wide our camera angle is. It also contains the near plane and the far plane, which describe how close or how far away an object can get from the camera before it's clipped out of view. The projection matrix can also be defined differently for 2D games versus 3D games: for 2D games you'll usually use an orthographic projection matrix, which removes any sense of perspective from the scene, while a perspective matrix adds that perspective back in. Check out an article I wrote a while ago, linked in the description, for more details about how cameras work in OpenGL.

What about the view matrix? Simply put, it is what the camera is viewing (I know, programmers come up with very creative names for this stuff).
You can usually define the view matrix by specifying an eye, which is basically the position of the camera's lens, and a direction that the camera is pointing in. Using these two pieces, we can create a view matrix that describes where our camera is located in world space and where it's looking.

Finally we have the transformation matrix. This literally transforms our object from local coordinates to world coordinates. You see, it's much easier to work with an object by pretending it's located at the origin of the world; when you use 3D software you do this all the time. Then, after we finish working on the object as if it were located at the origin, we'd like to place it somewhere in the world. That is transforming the object from the origin to its actual position in the world. The transformation matrix simply contains the details of where the object should be in the world, how it should be rotated, and what its scale should be.

Using linear algebra, we can multiply these matrices together to transform all of our object's vertex data, which is in local space, into normalized device coordinates: normalized 2D positions on our screen. In OpenGL, normalized device coordinates range from -1 to 1 in the x and y directions. This makes it very easy for the GPU to scale these values to whatever the user's screen resolution is and get the pixels into the correct locations. This all sounds very complicated, but in practice it usually ends up being just a handful of lines of code if you're using a math library to handle the details for you.

Once the vertex shader transforms our data into normalized device coordinates, we move on to the next stage of the pipeline, which is optional tessellation (it runs if you've written a tessellation shader). Now, I'll admit that I'm not well versed in how tessellation works, but you can read some great articles linked in the description for more details.
The basic premise is that after your vertex shader processes the vertex data, a tessellation shader can break that data up into even more vertices. The catch is that these extra vertices are always interpolated within the current primitive: if the current primitive is a triangle, tessellation will only add vertices within that triangle; if the primitive is a line, it will only add vertices within the line.

After the optional tessellation stage is the optional geometry shader stage. The geometry shader is where you can dynamically add or remove primitives and emit completely new geometry on the GPU. I'm not that familiar with the geometry processing stage either, so I won't discuss it much further; as far as I know, this stage has shown detrimental performance and is not typically used in real-time applications like games.

Next, OpenGL performs primitive assembly. This process is completely controlled by OpenGL, so you don't have to worry too much about the details. Basically, OpenGL has a few different types of primitives, the simplest objects the GPU can rasterize. The three main primitives (and I'm not aware of any others) are points, lines, and triangles. Why these three shapes? It turns out we can use them to represent any other shape imaginable, or at least make something look very close to it. Another reason GPUs operate primarily on triangles is that certain mathematical properties make triangles very easy to work with; check out the discussion by Jonathan Blow, linked in the description, for a great in-depth look at the processing that goes on behind the scenes. So OpenGL assembles the vertex data you've passed in into one of these three primitives. At this point it also does a few other things, like clipping the geometry to the view volume and culling geometry that's not visible.

After this, the GPU performs rasterization. During this stage the GPU converts all the clipped primitives into fragments. A fragment is basically a pixel.
The reason there's a distinction is that a fragment can technically cover more than one pixel, or less than a single pixel, on the screen, depending on a lot of factors. OpenGL performs some magic to render these fragments as the correct color on every pixel when the time comes to draw to the actual window.

Finally, OpenGL performs the required fragment shading and any final tests. Fragment shading is the process of coloring in every fragment on the screen. The fragment shader is a required step, supplied by the programmer, that tells OpenGL how to color in each pixel. The GPU takes whatever fragment shader the programmer has written and runs it on millions of pixels in parallel. Every one of these pixels will interpolate any data passed into it (unless specified otherwise) and then perform the instructions in the shader to output a color or some other programmer-specified data.

I briefly mentioned that a math library is helpful for a lot of these processes, so very quickly: if you're coding in C++, I recommend GLM (OpenGL Mathematics) as a math library; of course, you can use any library you prefer. If you're in Java, I recommend JOML (Java OpenGL Math Library), and if you're in C#, I recommend OpenTK's built-in math library. Right now would be a good time to add your desired math library to your project. Alright, let's continue.

Okay, I just went off on a huge tangent there. What does any of this have to do with buffers? Well, remember what I said about the first stage, vertex processing: how do we get our vertex data to the GPU and OpenGL? Using buffers, of course. So what is a buffer, and how do we use it in OpenGL? A buffer can represent pretty much any data you can think of; it is literally just a block of memory on the GPU that's used for different things, like storing vertex data or textures. In OpenGL there are specific buffer types that represent these different uses.
Three very commonly used buffer targets in OpenGL are GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER, and GL_TEXTURE_BUFFER. These are ways to represent an array of vertex data, an array of elements (which we'll discuss shortly), or a texture, which is usually just an image like a PNG or JPEG. There are many other types of buffers, though, and we'll look at a couple more in time. Some of the other commonly used buffer targets are GL_DRAW_INDIRECT_BUFFER, a buffer that stores rendering commands; GL_SHADER_STORAGE_BUFFER, a special buffer for passing data to shaders; GL_TRANSFORM_FEEDBACK_BUFFER, a special buffer commonly used to do complex calculations on the GPU, like particle simulations; and GL_UNIFORM_BUFFER, another buffer used to pass data to shaders. You can find documentation for all the different buffer types linked in the description. Let's talk about two of the most common buffers: we'll cover array buffers and element buffers in this video, and we'll talk about texture buffers once we get to the episode on textures.

Remember, the task we want to accomplish is sending vertex data to the GPU. How do we do that? We can use array buffers and element buffers. Array buffers can be created in OpenGL by using three commands in sequence. This is a good time to mention that OpenGL basically acts like a giant state machine. What I mean is that many different commands in OpenGL mutate some global state; if you're familiar with OOP, this is essentially like having a member function on an object that modifies some private member data. Because of this, we often have to set up that global state before we perform certain operations, and that can become a very big headache if you're not careful.

Alright, let's continue talking about array buffers. There are a couple of key things to notice in this sequence of commands. The first is that we have to refer to the buffer by its ID.
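The three commands in question (shown on screen in the video) are presumably the standard generate/bind/upload sequence. A sketch, assuming a current OpenGL context with function pointers already loaded (e.g. via GLAD), and a hypothetical `vertices` array:

```cpp
// `vertices` is assumed to be a plain array of floats or vertex structs.
GLuint vboId;
glGenBuffers(1, &vboId);               // 1. reserve one buffer ID
glBindBuffer(GL_ARRAY_BUFFER, vboId);  // 2. bind it as the current array buffer
glBufferData(GL_ARRAY_BUFFER,          // 3. upload data to the bound buffer
             sizeof(vertices), vertices, GL_STATIC_DRAW);
```

This snippet is not standalone; it only makes sense inside a program that has created a window and GL context first.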
In OpenGL, all buffer objects are tracked by a unique ID that identifies the buffer. We need to hold on to this ID so that if we ever want to modify that specific buffer, we have a way to access it. There is also a set of commands, similar to the ones we just saw, where we refer to the object directly by its name (which is its ID), so we could achieve the same thing that way. You'll notice that with this version we don't have to specify the buffer type: OpenGL can implicitly infer the type, since it has the buffer's ID. This method of buffering data is a bit safer, since you aren't relying on any global state being set, but it's only available since OpenGL 4.5, which means you can't target hardware that doesn't support OpenGL 4.5 with this technique. I'll talk about how you can buffer data using these named functions in a moment.

Now, I kind of glossed over a detail there: there's a parameter at the end of these calls named usage. What is usage? We can help OpenGL out a bit by letting it know what we intend to do with this data. If we intend to send the data to the GPU and never touch it again, we can use GL_STATIC_DRAW. If we intend to update the data frequently and never read it back, we can use GL_DYNAMIC_DRAW. There are a few other usage types that you can find and learn more about in the documentation.

Okay, so we've created a buffer on the GPU and possibly uploaded data to it as well, but how do we use that data on the GPU? We can use it from a vertex shader and a fragment shader. We'll talk more about shaders in the next episode, but for now let's assume we'd like to send specific pieces of data to the shader. Say, for instance, that we want to send the position and color of each vertex: we can send the position as three floats and the color as four floats. In the vertex shader, we could write something like this.
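The shader shown on screen presumably resembles the following GLSL sketch; the exact variable names are assumptions, following the a/f naming convention described next:

```glsl
#version 330 core

layout (location = 0) in vec3 aPos;    // 3 floats: position attribute
layout (location = 1) in vec4 aColor;  // 4 floats: color attribute

out vec4 fColor;  // passed along to the fragment shader

void main() {
    fColor = aColor;
    gl_Position = vec4(aPos, 1.0);
}
```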
The main thing to focus on is the lines that start with layout. We are basically saying that we expect a vertex attribute to exist at each specific location given in the shader; so we expect a vertex to contain two attributes, a position and a color. I like to prefix my input variables with an "a" to denote that they are attribute variables, and prefix my output variables with an "f" to denote that they are being sent to the fragment shader. Once again, we'll talk about this in more detail in the next episode.

So how do we tell the GPU that our buffer is going to contain vertex data laid out as three floats followed by four floats, representing a position followed by a color? We can use a function called glVertexAttribPointer. Let's talk about its parameters. The function takes an index first: this is the layout location in your vertex shader that you would like to use. So if we had specified that our aColor attribute was at layout location 5, we would use 5 for the index in this call. Next up we have size. This is the number of components the attribute has; for instance, if we want to pass a vec3, we use 3 for the size. The documentation specifies that size must be 1, 2, 3, or 4, for a basic data type, a vec2, a vec3, or a vec4, respectively. After that we have the type, the type of data being passed in: if you're passing a vec3, you pass GL_FLOAT; if you're passing an ivec3, which is an integer vector, you pass GL_INT. Then we get a boolean called normalized. This flag specifies whether you'd like integer data that is converted to a float to be normalized based on the size of the integer type. So if you pass in an 8-bit unsigned integer and ask for it to be normalized, the value is divided by 2^8 - 1, which is 255, to map it into a range from 0 to 1.
This is useful for implicitly converting color data to a normalized range. The last two parameters are the stride and the pointer. The stride is how many bytes each vertex occupies: if a vertex contains a vec4 for the color and a vec3 for the position, and both consist of 4-byte floats, then the appropriate stride is sizeof(float) * 4 + sizeof(float) * 3. If your data is tightly packed, meaning there are no gaps between vertices, you can just set the stride parameter to 0 and OpenGL will implicitly assume the data is tightly packed. Finally, we have the pointer parameter. This simply asks for the byte offset of the attribute within the vertex data. So if you have a struct representing a vertex, you can use offsetof(Vertex, color) to obtain the appropriate offset value; if you're coding in Java or C#, you can use something like Float.BYTES * 3 to get the appropriate offset.

After you call glVertexAttribPointer with the correct parameters, we need to make sure to call glEnableVertexAttribArray(attributeLocation), where attributeLocation is the layout location in our shader. This makes sure that OpenGL enables this vertex attribute as part of the vertex array object state, which we'll talk about in just one second.

Alright, we have two pieces of information: we can create a buffer, and we know how to specify different attributes for a buffer. But how do we tell OpenGL to put these two pieces together? This requires one more OpenGL object, called a vertex array object, or VAO for short. Now, what's a VAO? It's very similar to an OpenGL buffer in the sense that it has a unique ID which we use to set it up. The difference is that it doesn't store any data we explicitly send to it; rather, it stores specification data about how we would like our vertices to work. We can create and bind a vertex array object by using these functions.
We're telling OpenGL that we'd like to create one VAO and store the handle in our variable myVaoId. Now remember, OpenGL is a big state machine, so we have to make sure to bind the VAO after we create it. With the VAO bound, any subsequent calls that create buffers and vertex specifications will be attached to this VAO. If you'd like to unbind the VAO, you can call the same function with 0 as the VAO ID.

So let's put all of this information together and come up with a vertex specification for the hypothetical vertex structure we defined earlier; we can do something like the following. Cool, so we have this data uploaded to the GPU and we'd like to draw it, but there's one more catch: we can't draw this data until we have a working shader. Unfortunately, creating and compiling shaders deserves its own video, so we won't discuss how that works here. However, I'll have a minimal reproducible example for compiling a shader linked in the description; you can copy that code for now and play around with the attribute locations, and we'll explore how it works in the next episode.

Alright, once we have the shaders compiled and linked, how do we draw this data? Since we stored all the relevant information about the data in a vertex array object, we can simply tell OpenGL that we'd like to draw the data associated with that VAO. First we bind our vertex array, because (remember) OpenGL is a big state machine and we need to ensure the correct global state is set up. Next we bind our shader program. You don't have to worry about this too much for this episode, because we'll go over it in more depth next time, but you can copy the code I have linked in the description, call compile at the beginning of your program, and call shader.bind whenever you'd like to bind the shader. Next we call a new function: glDrawArrays.
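Put together, the whole setup-and-draw flow from this episode might look roughly like the sketch below. This is not standalone code: it assumes a current GL context, a compiled shader program, and the hypothetical Vertex struct and `vertices` array from earlier; `shader.bind()` stands in for whatever helper the linked shader code provides:

```cpp
// One-time setup: create the VAO first, so the buffer and
// attribute specification that follow attach to it.
glGenVertexArrays(1, &myVaoId);
glBindVertexArray(myVaoId);

glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Attribute 0: position, 3 floats at offset 0 within each vertex.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, position));
glEnableVertexAttribArray(0);
// Attribute 1: color, 4 floats following the position.
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, color));
glEnableVertexAttribArray(1);

// Every frame: bind the VAO and shader, then draw 3 vertices as one triangle.
glBindVertexArray(myVaoId);
shader.bind();  // hypothetical helper from the linked shader example
glDrawArrays(GL_TRIANGLES, 0, 3);
```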
glDrawArrays takes the primitive type as its first parameter, the start index as its second parameter, and the vertex count as its last parameter. So if our vertex array contained more data, say 20 vertices, and we specified 14 as the start index and 6 as the number of vertices, OpenGL would draw vertices 14 through 19 of our array; and if we had GL_TRIANGLES set as the primitive mode, it would interpret those vertices as two triangles, since each triangle has three vertices. Finally, we have all the knowledge we need to draw our first triangle.

Now, you'll run into an issue pretty soon: duplication of vertices. Oftentimes you'll want to draw a shape that consists of several vertices, with many of those vertices shared among the triangles that compose the shape. Take a star, for instance. A star has 10 unique vertices, but if we were to draw that star using eight triangles, we would need 24 vertices, and a lot of those vertices would share the same data, so we're duplicating a lot of data for no reason. If only there were a simple way to remove this duplication... wait, OpenGL already thought of how to solve this. Remember how I mentioned we would be using GL_ELEMENT_ARRAY_BUFFER? Now is the time to introduce that functionality. An element array buffer is a way of telling OpenGL that we'd like to reuse some vertex data we've already specified. So we can tell OpenGL we have 10 unique vertices, buffer that data as part of our VAO for the hypothetical star, and then use a set of commands to create an element buffer to help us out. The syntax should be pretty familiar at this point: an element buffer is just another buffer, so we use the same calls to create it and bind it, and finally we buffer the data the same way we did with our vertex data. The only question is how we specify what the elements should look like. The OpenGL documentation specifies that elements can only be unsigned shorts, unsigned ints, or unsigned bytes.
But where do we declare what type of data we're using? I'll be honest, the OpenGL documentation is not very helpful in making this connection, but you specify the type of data you're using when you draw your elements. So say we had elements specified for our hypothetical star as unsigned ints, using the type uint32_t. We upload this data using the function calls we talked about a moment ago, and as long as we created this buffer while the VAO was bound, the buffer will be associated with that VAO, so we can draw all this data using a different OpenGL function: glDrawElements. As long as you have a VAO bound that contains an element buffer when you call this function, OpenGL will draw the vertices according to the elements specified. In this case we draw triangle primitives; we draw 24 of them, since we have 24 elements; we specify that they are of type GL_UNSIGNED_INT; and lastly we provide an offset into our index buffer. Note that this offset is in bytes, so to start drawing from the third index of a uint32_t element array, you would pass 3 * sizeof(uint32_t), not 3. The reason the documentation says this parameter is a pointer is that you can pass your element data directly to OpenGL here; however, that isn't a good idea, because the data would need to be uploaded every time you call the function. It's best to think of this parameter as a byte offset into your element array, marking where you would like to start drawing from.

We've covered a lot in this tutorial. Unfortunately, in order to draw anything to the screen in OpenGL, you have to know all this prerequisite knowledge. Here are some challenges that should reinforce your understanding of how all these concepts work; after that, I'll talk about a more modern technique for accomplishing the same process, called named buffers, and I'll give you another challenge to try with it. Here are your challenges:
1. Draw a square on the screen using glDrawArrays. Remember to store your vertex positions in normalized device coordinates, which range from -1 to 1 in the x and y directions.
2. Draw a square on the screen using glDrawElements.
3. Draw a star using glDrawElements.
4. Draw the outline of a square using the GL_LINES primitive.

Complete these challenges to solidify your understanding of these techniques. You can find code to set up shaders by going to the link in the description, and if you have any trouble completing the challenges, you can look at my code for reference.

Now let's talk about named buffers, a more modern technique for setting up buffers. You can achieve the same result we did above using named buffers, which are a bit safer to work with since you don't rely on any global state. Let's look at this piece of code. It probably looks pretty familiar, but there are a few key differences. First, when we buffer our data we can't specify what kind of buffer we're dealing with, so we must make one of the glVertexArray* calls. As an example, you can see that I buffer the element buffer object using glNamedBufferData and then call glVertexArrayElementBuffer with the VAO and EBO IDs; this lets OpenGL know that the buffer with the EBO's ID is in fact an element array buffer. You'll notice that below that I use glVertexArrayVertexBuffer for the VBO, which lets OpenGL know that the VBO's ID is a vertex array buffer. There are a few extra parameters in this call as well. We have to specify the vertex buffer binding index, which is basically a slot that we tell OpenGL this buffer will be attached to; this is useful if you want to specify multiple buffers for different attributes, and you'll notice that further down we use the same binding point when we call glVertexArrayAttribBinding. The last two parameters of glVertexArrayVertexBuffer are the offset and the stride, respectively.
The offset is useful if you place multiple buffers' attributes in the same block of memory; you can usually leave it as 0 and move on, unless you have a specific need to change it. Finally, when we set up our vertex attributes, we use the function glVertexArrayAttribFormat, which lets us specify which VAO we're dealing with. After we set up the attribute format, we must call glVertexArrayAttribBinding to specify which buffer slot the attribute coincides with, and we can finally call glEnableVertexArrayAttrib to finalize the vertex attribute. This is all a bit complicated, so don't worry if it's difficult to grasp at first. If you prefer the first method of setting up buffers and attributes, that's fine as well; I just wanted you to be aware that there is a more modern technique for this whole process.

With that said, here's one more challenge for this episode. It encompasses the named buffer technique I just described. Feel free to give it a shot, since this is the more modern technique for OpenGL, but don't feel obligated to use it if you prefer the non-named-buffer technique.

5. Repeat the star challenge, but use the named buffer technique we just talked about.

I've tried to make these challenges utilize most of the knowledge you should have gained from this tutorial. As always, if you have any trouble completing a challenge, check out the description for a link to my solution to see how I did it. Of course, there are many ways to accomplish this, so my solution is not the only correct one; if you have one that you'd like to add to the description, reach out to me on our Discord server and I'll review your code and consider adding it to the list. As always, thank you guys for watching. If you enjoyed this, please like and subscribe, and stay tuned for the next episode, where I'll be talking about shaders in OpenGL.
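For reference, the named-buffer (direct state access) sequence described in this episode might look roughly like the following sketch. It assumes OpenGL 4.5+, a current context, the hypothetical Vertex struct plus `vertices` and `elements` arrays from earlier, and an arbitrary choice of binding slot 0; it is an illustration, not standalone code:

```cpp
glCreateVertexArrays(1, &vaoId);
glCreateBuffers(1, &vboId);
glCreateBuffers(1, &eboId);

// Upload data by name: no binding, so no reliance on global state.
glNamedBufferData(vboId, sizeof(vertices), vertices, GL_STATIC_DRAW);
glNamedBufferData(eboId, sizeof(elements), elements, GL_STATIC_DRAW);

// Tell OpenGL which buffer is this VAO's element buffer...
glVertexArrayElementBuffer(vaoId, eboId);
// ...and attach the vertex buffer to binding slot 0 (offset 0, stride = one Vertex).
glVertexArrayVertexBuffer(vaoId, 0, vboId, 0, sizeof(Vertex));

// Attribute 0: position, 3 floats at offset 0 within a vertex.
glVertexArrayAttribFormat(vaoId, 0, 3, GL_FLOAT, GL_FALSE,
                          offsetof(Vertex, position));
glVertexArrayAttribBinding(vaoId, 0, 0);  // attribute 0 reads from slot 0
glEnableVertexArrayAttrib(vaoId, 0);

// Attribute 1: color, 4 floats following the position.
glVertexArrayAttribFormat(vaoId, 1, 4, GL_FLOAT, GL_FALSE,
                          offsetof(Vertex, color));
glVertexArrayAttribBinding(vaoId, 1, 0);
glEnableVertexArrayAttrib(vaoId, 1);
```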
Info
Channel: GamesWithGabe
Views: 3,762
Keywords: gameswithgabe, games with gabe, minecraft, opengl buffers, vertex array buffers, element buffers, index buffers, how do buffers work, c++, java, c#, glfw, glad, opengl, creating a window, how to make a window, coding minecraft, how to code a minecraft clone, how minecraft works, coding
Id: N7kyXkK2E5s
Length: 26min 55sec (1615 seconds)
Published: Mon Dec 20 2021