Handmade Hero Day 237 - Displaying an Image with OpenGL

Captions
[Start recording] Hello everyone and welcome to Handmade Hero, the show where we code a complete game live on stream — or, as was the case yesterday, try to figure out some way to get OpenGL to stream over OBS, which we maybe did, maybe didn't, I don't know. So we're going to go ahead and try to do today's stream, which is how to get the game displaying through OpenGL. Hopefully folks will see something, but if they don't, that's just the way it goes sometimes with live streaming. So here we go on day 237. If you have preordered the game on handmadehero.org and you would like to follow along at home, you will want to unpack day 236's source code — is that correct? We had to reboot the machine, so we lost the day counter as well, but yes, day 236 is the last day. Unpack day 236's source code and you will be where I am right now, and you can follow along.

Okay, the first thing we have to do is get the streaming working. Like I said, we spent all of yesterday trying to figure out how to do that, and here is what we decided: it seemed like if we turned off double buffering, then we would be able to display. So if we take this here, we need some additional flag, something like HANDMADE_STREAMING, and we just say: in the case where we are streaming, we don't include the double-buffer flag, with a note that PFD_DOUBLEBUFFER appears to prevent OBS from reliably streaming the window. Then we just hope that what we discovered yesterday is actually some generally true thing about streaming OpenGL. So let's give it a shot... and there's our pink window. The question is: can you guys see the pink window or not? Do you see black, do you see pink, or do you just see a Windows Visual Studio screen? Hopefully the answer is that you see pink. Good — it sounds like people do see pink.
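As a rough sketch, the toggle described above might look something like this in the Win32 platform layer. HANDMADE_STREAMING is a hypothetical compile-time flag for this example; PIXELFORMATDESCRIPTOR and the PFD_* flags are the real Win32 API, but the surrounding window setup is elided:

```c
// Hypothetical sketch: when streaming, leave out PFD_DOUBLEBUFFER,
// since OBS appeared unable to capture the double-buffered GL window.
PIXELFORMATDESCRIPTOR DesiredPixelFormat = {0};
DesiredPixelFormat.nSize = sizeof(DesiredPixelFormat);
DesiredPixelFormat.nVersion = 1;
DesiredPixelFormat.iPixelType = PFD_TYPE_RGBA;
DesiredPixelFormat.cColorBits = 32;
DesiredPixelFormat.dwFlags = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW;
#if !HANDMADE_STREAMING
// NOTE: PFD_DOUBLEBUFFER appears to prevent OBS from reliably
// streaming the window.
DesiredPixelFormat.dwFlags |= PFD_DOUBLEBUFFER;
#endif
```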
Okay, so we're good. On day 236 we had an explanation of what GPUs basically are and how to think about them in general, and as I said on that day, because we already have a software renderer at this point, we don't actually need to get our whole game rendering through OpenGL. All we really need to do as a first step is find a way to move the image that we've created of our game down to the graphics card, so that you can see the image rendering through OpenGL. Right now, what we do is just call glClearColor and then glClear, and that's why we're seeing pink: the only instruction we've given to OpenGL, as far as what to put in the command buffer for our graphics card, is that we want to clear the screen to pink. That's it. This call gave the extents of where the clear should happen, this gave what the color should be, this gave what buffer to clear, and then we said go ahead and display that. What we'd like to do is put something on the screen that actually comes from our memory: we want something that will take this buffer that we've got here and get it down to the graphics card so we can display it. The way this works is very, very similar to the way we wrote our software renderer — and that's why we wrote the software renderer that way, so you could see how a graphics card works. If you don't remember, go back and re-watch how we implemented it, because this is exactly how the card works. What we need to do is set up an image for it to read from — which is called a texture in this case; that's what they're generally called on the card — and then we need to draw some primitives.
The word "primitive" is just used for anything like a triangle, a line, or a point — something the graphics card understands how to draw. In our case, the only primitive in our renderer is really a rectangle; that's the one we implemented. On a graphics card, the triangle is the most typical primitive, and the reasons for that are sort of a separate thing we could go into at some point. For right now, suffice to say that if we want to draw something like a rectangle, we need to draw it with triangles. Let me sketch that here on day 237: we can either draw what we want by creating two triangles that encompass the rectangle — that basically say, here's what it is — or we can draw a single triangle and try to clip that triangle to a rectangle, because OpenGL actually has ways of clipping things. Obviously it has to, because just like we had to clip things to the screen boundary, OpenGL has to clip things too. And just like we made our clipping generic so we could do things like tiled rendering, OpenGL's clipping is generic as well: you can choose to clip to some region that's actually smaller than the screen if you wish. This is called scissoring — we could use a call named glScissor to do it. I'm not going to do that one right now; I'm going to do the two-triangle one, because it's the more generally interesting case. It would allow you to draw rectangles anywhere you want without having to reset any kind of clipping mode, which, depending on the card, can potentially be an expensive operation, and that sort of thing can't always be thought of in the same way. So we're going to talk about it in the sense of how you would generally draw quads, not the special case of how you might draw one when you're only blitting to the screen.
This is the case we will actually want when we move the rendering over to OpenGL — so that you can choose to render everything through OpenGL rather than running the software rasterizer — because we will need to be able to do this for each of our sprites; each of our sprites would be a quad like this. So we're going to literally do exactly this thing. The process for doing it is exactly the same as what we did in software: we need to construct this quad, and then we need those UV coordinates. Remember UV coordinates? We've got this sort of shape we're filling on the screen, and we've got some image in memory somewhere, and the UV coordinates are what tell us where to go look to find a piece of the image — exactly the same thing here as in our software rasterizer. So we need to set up a situation that looks like this: we set the UV coordinates of the rectangle to be 0 to 1, 0 to 1, just like they were in our rectangle code; we load this texture into memory; and once we have all of that working, we should be able to draw two triangles that display our image on the screen. Okay, so we're going to go ahead and do that. The first thing we're going to try is drawing our quad — our rectangle made out of two triangles — without actually filling it with a texture, because the texture is obviously the harder step, so we're going to start with something simpler. We're still going to keep the clear-to-pink, because that'll give us a good sense of whether or not we're actually drawing over it, and then we're going to make some calls that allow us to specify our triangles. We're going to do this the old-school way first.
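For reference, the eventual textured version of the quad just described — UVs running 0 to 1 across the image, two triangles covering the rectangle — might look roughly like this in old-school immediate mode. This is a sketch only: the texture creation and binding aren't shown, and the vertex coordinates assume clip space for simplicity:

```c
// Sketch: draw a quad as two triangles, with UVs mapping the full texture.
glBegin(GL_TRIANGLES);
// Lower triangle
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
// Upper triangle
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
```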
This is the way the original OpenGL did it. Then — probably not today, but next week — we'll talk about why this is not the way we will end up doing it and why we will replace it. But it's the easiest way to learn; it's the easiest to get your head around, because it involves less maneuvering. The way OpenGL works in the old-school way is you call glBegin and glEnd to bracket a series of operations that construct some primitives. Like I said, "primitives" here refers to things the graphics card understands, like triangles — primitives are the smallest building blocks. So the primitives we're trying to draw here are going to be triangles. Again, there are things we could do that would be a little more efficient than plain triangles even in this circumstance, but I want to start with the most basic stuff. So yes, everyone on the stream: yes, you could use a triangle strip, yes, you could do whatever — please don't focus on optimization stuff when I'm just trying to give the idea of how something works, because that's not the right mental piece here. You want to make sure you understand everything from the ground up and not concern yourself with whether there's some weird optimization thing, especially because in this case no optimization is relevant for putting something on the screen; it literally does not matter how you do this. So in this case I'm just going to start by drawing some triangles, and the glBegin/glEnd pair tells it that everything in between is information about which triangles to draw. Then there's a command called glVertex, which specifies points in space. The naming scheme they chose is: any time you see "gl" on a command, it can then be suffixed according to the various types it takes.
For example, if I wanted to specify a 2D point in floating-point format, it would be glVertex2f, and then I would pass the X and Y coordinates as floats. If I wanted to pass them as integers, I would say 2i, and if I wanted to pass a three-dimensional point, I might say 3f. Only a certain subset of these functions exists — these are actual separate functions, so they did not supply every possible combination — but that is the naming scheme: for the variants they implemented there's 3f, 2i, ub for unsigned byte, stuff like that. Okay, so if we want to produce triangles here that cover the screen, where are the triangles going to be? Well, the viewport — this GL call we set up to say where we're trying to render — says the extent of the screen is zero to the window width in X and zero to the window height in Y. So if I want to draw some triangles, I'm going to specify a triangle that's here, here, here, and a triangle that's here, here, here — and we know the coordinates of all these points; they're very easy to see. Take this triangle first: it's (0, 0) to (width, 0) to (width, height). That's a very easy triangle to draw: (0, 0), (window width, 0), (window width, window height). The other triangle would be almost the same thing, except this time it goes the other way around: from (0, 0) up to (width, height) and then over to (0, height). Those are our two triangles, and I can label them: this is the lower triangle and this is the upper triangle.
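The two triangles just described would be issued like this in immediate mode. A sketch only: WindowWidth and WindowHeight stand in for whatever names the platform layer actually uses, and, as discussed next, pixel coordinates here will not land where you'd expect:

```c
// Sketch: two triangles intended to cover the window, in pixel coordinates.
glBegin(GL_TRIANGLES);
// Lower triangle: (0,0) -> (W,0) -> (W,H)
glVertex2i(0, 0);
glVertex2i(WindowWidth, 0);
glVertex2i(WindowWidth, WindowHeight);
// Upper triangle: (0,0) -> (W,H) -> (0,H)
glVertex2i(0, 0);
glVertex2i(WindowWidth, WindowHeight);
glVertex2i(0, WindowHeight);
glEnd();
```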
So when I do this, in theory, if we ran it now, maybe we would actually get triangles — there are a bunch of other things involved here, but let's just see what happens. All right, there we go: there are our lovely two triangles being drawn. But as you'll notice, they're not where we expected them to be. If the coordinate system were actually such that (0, 0) was here, (0, height) was here, (width, height) was here, and (width, 0) was here, we would expect this thing we drew to fill the screen — but it's not filling the screen. So why not? What's going on? The answer is that, whether you want it to or not, when you use the old-school way of doing things in OpenGL — meaning you're not going through a shader — what you're going through is called the fixed-function pipeline. There are two schools here: fixed-function and programmable. The fixed-function pipeline is the old-school way of doing things, from when GPUs were not programmable; they only did one thing, which was a certain way of processing vertices and a certain way of filling pixels. In the new-school way, everything is programmable, so we always have these shaders and things like that that we write. In the old fixed-function way, what you would today typically write as a very basic vertex shader was built into the hardware directly, and what it did was a transform on the vertices, followed by clipping the triangles, followed by what's called a window-space transform. In programmable land we actually have some of the same stuff: the clipping, for example, is still usually done in a fixed-function way, and the transform-on-the-vertices part and the window-space transform come right across conceptually. The part that was filling pixels, and the vertex transform — these two parts used to be very rigid in how they operated.
Now they all happen in shaders — this part happens in a shader, that part happens in a shader — and you can customize them to do basically whatever you want. So what's happening, and the reason we don't see this where we expected based on what we set up for the screen, is that the transform happening on the vertices is whatever the default was — we haven't set it up — and the default in this case doesn't happen to be something that takes input vertices and maps them directly to the screen. That's not at all what actually happens. As a result, we get something that looks random, that we don't necessarily even understand, because we haven't actually looked at what that transform does. The filling-pixels part is probably fine for us right now, but the transform-on-the-vertices part isn't. Now, one thing we could do is step right up to shaders and implement it ourselves, putting things exactly where we want. We could do that, but what I'd rather do is show you a little bit about the fixed-function pipeline, because it's good to understand how it used to work — the shader you would implement is pretty much exactly the same as what the fixed-function pipeline is doing if you set some of the parameters. All right, so: OpenGL specifies two different kinds of matrices. Now, we've talked about matrices before. Matrices are things that record a series of math operations that you're going to do on vectors. I don't know if you remember, but I showed how to do vector-matrix multiplication, where we have something like this and something like this, and I taught you how to do the multiplication where you use vertical and horizontal here to line up which terms go with which terms.
You always read down vertically the vector you're trying to multiply — this is the incoming point; in our case it's this thing here at the glVertex2i call, although I've shown it with three components, and I'll tell you why in a second — and here is the matrix. When you do this, you essentially get ax + by + cz: first times first, second times second, third times third. Then you move down to the next row and do the same thing again: ex + fy + gz. And then you move down again: ix + jy + kz. That's how matrix multiplication works for vectors — it's also how it works for matrices, but we don't have to care about that too much right now. This is fairly crucial to understand if you want to start working with OpenGL, especially the fixed-function pipeline, but to a certain extent also the programmable pipeline, because matrices and vectors are kind of native to it, and you have to start thinking in those terms. We haven't really had to do too much of this — I've explained it before on the stream — but now you're going to have to get a little more comfortable with it. And the important thing to note is that there's nothing mysterious about a matrix. If you can understand this procedure right here, you can see exactly the math that happens when you transform something by a matrix. So it's very easy to see what you want to have happen when you pass in an (x, y, z) and transform it by a matrix: you've got one parameter that multiplies the x, one that multiplies the y, and one that multiplies the z to produce the final x. Basically you have three dials you can turn, for the incoming values, that produce the resulting x′ — the corresponding component of the new vector that comes out.
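The row-by-row procedure above can be written out directly. A minimal sketch in C — the type names m3 and v3 are invented for this example, not taken from the Handmade Hero source:

```c
#include <assert.h>

// A 3x3 matrix in row-major order, and a 3-component vector.
typedef struct { float e[3][3]; } m3;
typedef struct { float x, y, z; } v3;

// Row-by-row multiply: R.x = a*x + b*y + c*z, and so on for each row.
static v3 Transform3(m3 M, v3 V)
{
    v3 R;
    R.x = M.e[0][0]*V.x + M.e[0][1]*V.y + M.e[0][2]*V.z;
    R.y = M.e[1][0]*V.x + M.e[1][1]*V.y + M.e[1][2]*V.z;
    R.z = M.e[2][0]*V.x + M.e[2][1]*V.y + M.e[2][2]*V.z;
    return R;
}
```

Each output component gets its own row of three "dials," which is exactly the freedom described above.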
So you can arbitrarily create anything you want, as long as it's a straightforward linear transform — as long as it can be represented by three coefficients and two additions on the input values. You can do anything of that form, because you can set these values to anything you want, and as you can see, you get three of them per output value, so you have complete freedom to adjust each individual output component separately. Does that make sense? Now, OpenGL takes this one step further, and what you can hopefully see here is that there's something you might want to do that is not captured by this. What I've drawn is called a linear transform, and it's linear because it takes lines to lines. When I do this multiplication, there's no way for me to nonlinearly warp anything: I can't square any terms; I can't produce an x² from the input. Whatever this x value is, there's no way for me to square it — the only way would be to know its value beforehand and stick it in the matrix, but that doesn't count if we're talking about arbitrary input values; nothing I could ever put here would lead to an x². So it's always lines to lines: there's no way to take a line and turn it into a parabola, no way to take in something linear and produce something nonlinear on the output. But "linear", specifically, also does not allow offsets. You'll notice there's no way to add a fixed value here — and you'll notice I was sly: I left out values. I left out d, I left out h, I left out l; they weren't in there. There's no way to add a value that isn't multiplied by the input. If I passed in the vector (0, 0, 0), it would just come out as (0, 0, 0). There's no way to move things around.
I can only sort of reorient things: I can skew them, I can scale them, I can even rotate them, but what I can't do is displace them, because everything is a function of the input. So what OpenGL does is use what are called homogeneous coordinates: it artificially pretends that the vector that comes in actually has a 1 in a fourth slot, so it's (x, y, z, 1). Now you'll notice that this column of the matrix — d, h, l — just comes through as an offset on the output. So now I have a way to positionally offset my points. That is called an affine transform: when you add the ability to displace, that is an affine transform. Down here in the bottom row, we usually pretend the matrix just has 0 0 0 1. Why 0 0 0 1, per se? Well, that's if you want to leave that 1 a 1 — because then when I multiply, I always get a 1 in the w coordinate, which is what this synthetic coordinate is called. Now, there's an interesting aspect of this: technically, OpenGL allows you to pass full four-dimensional parameters, and you actually may want to play with that w value. Why? Well, if you set it to 0, you get no displacement, because all of these offset terms drop out — they'd be multiplied by 0. So in some sense there's now a way to distinguish between a point and a vector, which is a direction: a point would be something that is (x, y, z, 1), and a direction would be something that is (x, y, z, 0), because the 0 knocks out the translation component, leaving only the directional change, whereas the 1 picks up that translation. Points are things that can move; directions are things that do not move. That's a little trickiness — just a bit of understanding of how the math works out that allows you to do some things you might want to do, so you want to be aware of it. Okay, so that's the general idea.
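To make the point-versus-direction trick concrete, here's a small sketch in C (types invented for the example): a 4x4 translation matrix moves a point with w = 1 but leaves a direction with w = 0 untouched.

```c
#include <assert.h>

// A 4x4 matrix in row-major order, and a homogeneous 4-vector.
typedef struct { float e[4][4]; } m4;
typedef struct { float x, y, z, w; } v4;

static v4 Transform4(m4 M, v4 V)
{
    v4 R;
    R.x = M.e[0][0]*V.x + M.e[0][1]*V.y + M.e[0][2]*V.z + M.e[0][3]*V.w;
    R.y = M.e[1][0]*V.x + M.e[1][1]*V.y + M.e[1][2]*V.z + M.e[1][3]*V.w;
    R.z = M.e[2][0]*V.x + M.e[2][1]*V.y + M.e[2][2]*V.z + M.e[2][3]*V.w;
    R.w = M.e[3][0]*V.x + M.e[3][1]*V.y + M.e[3][2]*V.z + M.e[3][3]*V.w;
    return R;
}

// Translation by (tx, ty, tz): the identity plus the d/h/l offset column.
static m4 Translation(float tx, float ty, float tz)
{
    m4 R = {{{1,0,0,tx},
             {0,1,0,ty},
             {0,0,1,tz},
             {0,0,0,1}}};
    return R;
}
```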
So again, these are called homogeneous coordinates — if you're in the 3D graphics literature and want to look them up, that's what they're called — and it's called an affine transform once it allows that displacement, as opposed to a linear transform, which does not generally have displacement in it. Okay. So OpenGL defines two matrices — there are other matrices, but they don't come in here — two matrices that you can use: the modelview and the projection. In the fixed-function pipeline we have both of these, and it allows you to set them separately. There used to be a reason, for built-in lighting — which pretty much nobody uses, and didn't really use much even back then — why these two things had to be separate. I'm not even going to go into it, because it's completely irrelevant; there's really no reason for them to be separate, because what OpenGL basically does is concatenate the two matrices together: it multiplies the modelview matrix by the projection matrix, and that combined matrix is what it actually uses to transform all your inputs. All the input points that come in are multiplied by the combination of these two matrices — matrix multiplication literally is just a way of combining two matrix operations together. The order is essentially reversed from how you might read it: if you start from the vector being transformed, you read in this order to figure out what's going to happen to it — first it gets transformed by the modelview, then by the projection, and that gives the new point. Why matrix multiplication combines things this way is probably better saved for a later stream, when we talk about the math a little more.
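The claim that concatenation is just multiplication can be checked directly. A sketch in C (again with types invented for the example): transforming by B and then by A gives the same result as transforming once by the product A*B.

```c
#include <assert.h>
#include <math.h>

typedef struct { float e[4][4]; } m4;
typedef struct { float x, y, z, w; } v4;

static v4 Transform4(m4 M, v4 V)
{
    v4 R;
    R.x = M.e[0][0]*V.x + M.e[0][1]*V.y + M.e[0][2]*V.z + M.e[0][3]*V.w;
    R.y = M.e[1][0]*V.x + M.e[1][1]*V.y + M.e[1][2]*V.z + M.e[1][3]*V.w;
    R.z = M.e[2][0]*V.x + M.e[2][1]*V.y + M.e[2][2]*V.z + M.e[2][3]*V.w;
    R.w = M.e[3][0]*V.x + M.e[3][1]*V.y + M.e[3][2]*V.z + M.e[3][3]*V.w;
    return R;
}

// Matrix product C = A*B, so that
// Transform4(C, V) == Transform4(A, Transform4(B, V)):
// B is applied first, reading right to left.
static m4 Multiply(m4 A, m4 B)
{
    m4 C = {{{0}}};
    for(int r = 0; r < 4; ++r)
        for(int c = 0; c < 4; ++c)
            for(int i = 0; i < 4; ++i)
                C.e[r][c] += A.e[r][i]*B.e[i][c];
    return C;
}
```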
We're not actually going to use all of this anyway — we're going to pretend the modelview matrix doesn't even exist, because we only need one matrix to do the operations we're going to do. And as we'll find out a little later on, when we maybe look at matrices in a little more detail, any number of matrices can be smooshed down into a single matrix just by multiplying them all together, so there really never was a need for two matrices; they should never have really had two, but because of some aspects of built-in lighting, there was a reason why they chose to separate them, and really that's not necessary. So we're going to pretend the modelview matrix doesn't exist, and we're going to set it to always just be the identity matrix — we'll pretend it doesn't do anything. The way you do that is: GL has something called glMatrixMode, and you pass which matrix you're trying to operate on at a given time — again, it basically just stores the two matrices, the modelview matrix and the projection matrix — and then I believe you can just call glIdentity, which clears out that matrix and makes it the identity... at least I think that's true. I guess not — glLoadIdentity, there it is, glLoadIdentity. All right. An identity matrix is literally one that just passes everything through exactly; it's this matrix. What's interesting about this matrix is that no matter what I stick in — (x, y, z, w) — you can see that each row just picks out the component it's trying to generate and leaves everything else alone: out comes (x, y, z, w). The identity matrix does nothing; it is a no-op, the no-op matrix — the thing that just picks out the component that came in, in that slot. That's it. This is as opposed to, say, a permutation matrix, which is one that picks out a different component.
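A quick sketch of what "the identity is a no-op" means, plus the x/y swap permutation about to be described (types invented for the example):

```c
#include <assert.h>

typedef struct { float e[4][4]; } m4;
typedef struct { float x, y, z, w; } v4;

static v4 Transform4(m4 M, v4 V)
{
    v4 R;
    R.x = M.e[0][0]*V.x + M.e[0][1]*V.y + M.e[0][2]*V.z + M.e[0][3]*V.w;
    R.y = M.e[1][0]*V.x + M.e[1][1]*V.y + M.e[1][2]*V.z + M.e[1][3]*V.w;
    R.z = M.e[2][0]*V.x + M.e[2][1]*V.y + M.e[2][2]*V.z + M.e[2][3]*V.w;
    R.w = M.e[3][0]*V.x + M.e[3][1]*V.y + M.e[3][2]*V.z + M.e[3][3]*V.w;
    return R;
}

// The identity: each row picks out exactly one component, so nothing changes.
static const m4 Identity = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};

// A permutation: the first two rows are exchanged, so x and y swap.
static const m4 SwapXY   = {{{0,1,0,0},{1,0,0,0},{0,0,1,0},{0,0,0,1}}};
```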
Let's say I want to swap x and y — I could do this, and then x and y would swap. So permutation and identity matrices are very similar: they just take the terms and shove them out directly as they came in; a permutation just maybe rearranges them a little. Okay, so anyway, glLoadIdentity just says: replace whatever is in the current matrix slot. Remember, I have two matrix slots, modelview and projection — there's actually another one for textures, which we'll get to later, but for the points we're sending down there's the modelview matrix and the projection matrix. So if I do this for both, both of our matrices will be set to the identity matrix, which I believe is what they're set to at the beginning anyway. So if I run it, I should get the exact same thing I started with — and I do. And so now we're in a position to start thinking about what we would actually need to put into the projection matrix in order to get what we expected to see here. To understand this, we have to go back to our diagram. If this is starting to sound overly complicated — this is why GPUs are awful: you have to learn every last little thing about how they work internally before you can actually do stuff, instead of just writing code that you know how to write. But what can you do? So to answer that, we have to go back to this thing: it is this part right here that is confounding us. I just said our points are now getting multiplied by the identity matrix, so we're passing the points through exactly as we gave them — so why aren't we seeing them where we should on the screen? The answer is: because of the definition of what happens here with the clipping and the window-space transform. So let's talk a little bit about that. For clipping, this is just the process where OpenGL is doing exactly what we wanted to do.
It's exactly what we had to do ourselves: triangles are going to come in to OpenGL, and they're going to look like this, and here is the screen. Obviously OpenGL can't just start drawing off the screen — although I'll throw in a little mention here: actually, it can. There's a concept typically called guard-band clipping, so when I say it's going to clip to the screen, it often doesn't clip to the screen exactly; it clips to a guard band, which is a region bigger than the screen. That's GPU-implementation-specific stuff, but just be aware that can happen. Point being, it has to clip to something, and that something has to be a way of producing a set of triangles that actually can be drawn — so something like this, for example, might be a way to clip to some triangles that actually fall within the region it's trying to draw. We had an easier time of it than OpenGL, because we were mostly just clipping rectangles that were aligned with the screen. I don't remember exactly what we did for clipping when we rotated stuff — to be honest, I don't remember — but I think we still just clipped to the rectangle of the rotated thing, so we were always able to do rectangle clipping. OpenGL might not necessarily be able to do that — or it could do the exact same thing; implementations are very close to how we implemented ours, so they may do exactly the same thing — but either way, they're going to do something to perform this clip before they actually start to rasterize. And the way they do clipping — to be honest with you, I don't really remember the details of why they chose to do it this way — is in something they call the unit cube.
So instead of the projection matrix transforming things the way we were thinking of it — transforming them to where they should be on the screen — what the projection matrix actually does is project things into the unit cube. Basically it says: whatever my screen was — this 16-by-9 TV thing or whatever — whatever is going to fit on the screen, I'm actually going to fit into a cube which, depending on whether you're in Direct3D or OpenGL and exactly how they want to do things, is in general something like negative one to one: 1 is here, negative 1 is here, negative 1 to 1 on Y. What it says is: I'm going to pretend that everything always falls into this cube — which is actually two wide on a side, but has a unit radius — and then, after I do all my clipping in unit-cube space and figure out exactly what I'm going to keep, I blow it out — I scale it back out — to fit the actual screen. First we do this, then we do this. So you can imagine what happens here. After projection — I've got my projection matrix, maybe I'll call it M_projection — I take an input point P, I multiply it by the projection matrix, and I get a P′ out. What is this P′? It is P in clip space — something in this unit-cube space here. And what that means is the projection matrix does not move things to screen space; it moves them to unit space. That explains exactly why having nothing in the matrix does not work if we're passing 0-to-width and 0-to-height of the screen as our values: when we do that, we're effectively producing something that starts in the middle — (0, 0) is the center of the unit cube — and goes way the heck out there, because (width, height) is way outside the cube, off in that direction because the width is wider than the height.
than the height. So then when this thing gets blown up to fit the screen, we see exactly what we just saw, which is it only fills that upper rectangle, because we basically passed it screen-space coordinates and it wants clip-space coordinates. So the trivial fix for this, if we wanted it, would just be to actually pass it things in its coordinate space, and that would look like this: just pass it the unit cube, and we wouldn't have to do anything else if we don't want to. Okay, oops, that's 1,1. That is now directly in clip space, which will not get transformed by anything during projection, and then after the clip-space part of things it'll blow it out to the screen size. So there we go, now we're filling the screen. Similarly, if we wanted to verify that we weren't thinking of it wrong, we could change these to P's; let's just make P something like 0.9, and then we'll change all of these to P's, and then we can see that we can draw a smaller thing that's inside the bounds of the screen. Alright, and so there you can see we've shrunk in by 10% of the width and the height, and as you can see, 10% is relative to width and height, so there's less of a margin on the top and bottom than on the sides, because in clip space we're in a unit cube and then we're blowing it out to a rectangle, and that scale to the rectangle is going to scale X more than Y. Okay, hopefully that makes sense to everyone. I know this is a lot of information fast, but like I said, we'll go over it multiple times, we'll get it, so hopefully everyone can be on the same page. Okay, so that's just what's defined to happen, that's why that's happening, because that's how OpenGL is defined to work: you move into clip space first and then you move out to the screen. So clipping happens here, then you move out to the screen. What does "move out to the screen" mean? Well, okay, we have P in clip space, and it's very simple
to think about what happens. All we're trying to do here to move into screen space is: negative one needs to move to zero. So first we center: we could take P clip and add (1, 1) to it, so that would basically say take the bottom corner here and move it up to (0, 0), so that anchors it here. And then we would have what is now a 2-by-2 cube, that cube is 2 units tall, 2 units wide, and we would have to multiply it by half the width and half the height to blow it up. So I take this and then I multiply it, this would be a Hadamard product here, multiply straight through, by (width / 2, height / 2). So I move it to the corner, I blow it out to the whole screen, that's it. So when I say it moves it out to the screen, we're literally just talking about multiplying the points by that to figure out where they are in screen space. That's it, that's all there is. It's nothing mystical, it's not magic, it's just a straight-up trivial transformation. If you want to see it written out in scalar form, it's literally: the new x-coordinate equals the clip x-coordinate plus one, times the width over two, and the y-prime would be the same with the height. But we don't care about any of that; it suffices to say mentally we know it's doing this, we never touch it anyway. The only way that we have an effect on it is this glViewport call: that's the thing that gives it that piece of information, it uses these values to know how it should blow out. So when we say width over 2, where's the width coming from? It's coming from right here. Where's the 0,0 coming from, so it knew where to put the
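The clip-to-screen step described above is simple enough to write down directly. Here's a minimal sketch in C; the `ClipToScreen` name and `v2` type are made up for illustration, not from the actual codebase:

```c
// Hypothetical sketch of the clip-space -> screen-space mapping:
// center by adding (1, 1), then scale by (Width/2, Height/2).
typedef struct { float x, y; } v2;

static v2 ClipToScreen(v2 PClip, float Width, float Height)
{
    v2 Result;
    Result.x = (PClip.x + 1.0f) * 0.5f * Width;   // -1 -> 0, +1 -> Width
    Result.y = (PClip.y + 1.0f) * 0.5f * Height;  // -1 -> 0, +1 -> Height
    return Result;
}
```

This is exactly the scalar form from above: x' = (x_clip + 1) * (width / 2), and the same for y with the height.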
thing? It's right here. So if you wanted to put the viewport somewhere else on the screen, you could, because these could be anywhere. The transform, I should say, goes from clip space to the viewport, which might not actually be aligned with the window; we aligned it with the screen because that's what we want, but we could have made something else, a different shape for it, if we'd wanted to. We don't, but we could have. Alright, so that's the first step of things, drawing an actual rectangle on screen. We did it, yay, congratulations, everyone wins. So now the question is: how do we get a texture down there, how do we get something drawn onto the screen? Well, what we need to do now is draw something that actually has some UV coordinates, because we need to tell it to pull from our texture. We need to do the thing that I was saying we did ourselves before: we need to set up these UV coordinates, 0 to 1, 0 to 1, so it knows how to grab from the image. This is exactly like what we implemented, so you all should be very familiar with the concept of thinking about an image as having a UV coordinate where (0, 0) is the bottom corner, (1, 1) is the top corner, and anywhere between 0 and 1 is picking out pixels in the thing. Okay, well, the way OpenGL works in this old-school fashion, like I said, is that the glVertex call is considered the finalization call for that vertex, and anything that happens before it is considered as associating an attribute with that vertex. So for example, if I say glColor3f and I pass a color here, for example magenta, and then down here I pass another color that is white, you will note that the magenta color applies to these vertices and the white color applies to these. And I don't know if I have modulate set up as a blend mode here; I don't think so, so there you go
right. Oh, why did I say magenta? I put in yellow, yeah. And similarly I can associate a different one with every single vertex if I want; I can do an R, a G, and a B here, for example, and OpenGL will automatically interpolate between those colors, just like we did our interpolation, just like we were doing interpolation of UV coordinates; it'll interpolate the colors. So all we have to do if we want to associate UV coordinates is call the OpenGL function that specifies texture coordinates, and specify the texture coordinates for each of these vertices that logically go with that particular vertex. So we're going to have to figure out what those are; we'll do that on the blackboard in just one second, let me first save our thing here. There we go. So what are the texture coordinates that we actually need to fill in? Well, if we're drawing this right here, this is what we chose to draw, obviously we want the bottom of our image, the (0, 0) UV, to be here. So the first one would be (0, 0), this one would be (1, 0), and this one would be (1, 1); those would be the UV coordinates, because we want to overlay the image exactly on the triangles. Okay, so we've got (0, 0), then (1, 0), then (1, 1), and that associates thusly; hopefully that makes sense. And then similarly for the other guys down here we've got the same thing: (0, 0) for the (-P, -P) case, and then this guy is (1, 1) because he's all the way over, and this guy here would be (0, 1) because he's up and over. You can almost just read off the negatives on the P's, which kind of tells you where those are. So now if we run it again, we won't actually see anything different, because we've associated texture coordinates but there is no texture being specified. But we did specify the coordinates, and similarly those coordinates get multiplied by a texture matrix. I don't know what the actual, let me see here, you know, matrix mode, I
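Put together, the per-vertex association described above looks something like this in the old immediate-mode API. This is an illustrative sketch only; it assumes a live GL context and the two-triangle ±P quad from the stream:

```c
// Sketch: per-vertex texture coordinates in the old immediate-mode API.
// glTexCoord2f sets the current texcoord attribute; glVertex2f then
// finalizes the vertex with whatever attributes are current.
float P = 1.0f;
glBegin(GL_TRIANGLES);
// Lower triangle
glTexCoord2f(0.0f, 0.0f); glVertex2f(-P, -P);
glTexCoord2f(1.0f, 0.0f); glVertex2f( P, -P);
glTexCoord2f(1.0f, 1.0f); glVertex2f( P,  P);
// Upper triangle
glTexCoord2f(0.0f, 0.0f); glVertex2f(-P, -P);
glTexCoord2f(1.0f, 1.0f); glVertex2f( P,  P);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-P,  P);
glEnd();
```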
don't remember, it's been a long time since I used the fixed-function pipeline. I don't remember if it's GL_TEXTURE... it is GL_TEXTURE. So GL colors and GL texture coordinates are also treated essentially the same way. I'll just put this here for your sort of amusement: we can, if we want, transform our texture coordinates by a matrix as well, if we want to move the texture coordinates separately from the thing. We are not going to, but I'll just clearly state there that we're trying to transform everything not at all, basically. Okay, so there's our two triangles, there's our texture coordinates, so the only thing we actually have to do now is get a texture down to the card, and that's actually not that difficult in theory, it's just really difficult in practice, because OpenGL does not make it easy, and I don't think we'll have time to do it today, we only have 11 minutes left. I will basically put in the calls and talk about them, and they won't work, and then we'll have to talk about why, and that will be our job for Monday. Okay, so the way you specify a texture is with the call glTexImage2D, and it's saying exactly what it sounds like: I want to specify the image for a texture. Okay, and the parameters of glTexImage2D are some of the worst parameters ever devised for submitting a texture by mankind. They are awful, one of the all-time worst APIs in OpenGL, because OpenGL was actually fairly reasonably designed back in the day; these are awful. So let's talk about this. A little bit of glTexImage2D should look familiar to you. You can see there's a GLvoid* pixels as the last parameter; remember, that is the pointer to the buffer of pixels, just like we have created, so that part you understand. Width and height, pretty clear. Border is sort of a separate thing that we won't go into now; it's saying whether you want it to consider an extra ring around the pixels, which gets into mipmapping and a whole ball of wax. 3D cards don't do mipmapping properly, they
never have; it was probably one of the worst-designed aspects of them, and I have a gigantic rant about how broken mipmapping is, but we will save that for some other day when we actually care about mipmapping, which we may never on Handmade Hero. So separating that part, we have the width and the height, which you understand, and then we've got some things here like internal format and format. What does that even mean? Well, this format here, the second format, the one that comes just before the pixels and the type, is saying how we're submitting it. And we've got RGBA... RGBA, RGBA, RGBA, that's what we've got in our texture, only we're actually backwards. I don't know if they've got BGR right here... they do, GL_BGRA, there we go. We are actually backwards because we use the Windows format, since we wrote this on Windows originally. So remember, byte order in memory and in registers are different on little-endian; hopefully you remember this from way, way back. If you take a look at how Intel architectures, which are little-endian, store things: here is a register, and here's how you typically would write a register, the least significant bits go here, the most significant bits go here, but they're read in memory in the opposite order, so this is actually the zeroth location in memory, this is the first, and the second, and the third. So what that means is, when you look at a register and you see ARGB, if that's the way you actually put them in your register, when you write that out to memory the order is BGRA, because this is the byte that comes first. On a PowerPC that wouldn't be true; it would have actually written it out ARGB. And so Windows bitmaps are often BGRA, and the reason they are BGRA is that the graphics programmers of Windows wanted to see them in registers in the ARGB order. So that's actually the format that we are in.
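The little-endian point above is easy to check directly. Here's a small sketch (the `PixelBytesAreBGRA` helper name is made up) confirming that a pixel packed as 0xAARRGGBB in a register comes out B, G, R, A when you walk it byte-by-byte in memory on a little-endian machine:

```c
#include <stdint.h>

// Pack a pixel as 0xAARRGGBB (the "ARGB in a register" view) and check
// that, in memory on a little-endian machine, the bytes read B, G, R, A.
static int PixelBytesAreBGRA(void)
{
    uint32_t Pixel = 0xFF112233u; // A=0xFF, R=0x11, G=0x22, B=0x33
    uint8_t *Bytes = (uint8_t *)&Pixel;
    return (Bytes[0] == 0x33) &&  // B comes first in memory
           (Bytes[1] == 0x22) &&  // then G
           (Bytes[2] == 0x11) &&  // then R
           (Bytes[3] == 0xFF);    // then A
}
```

On a big-endian machine like the PowerPC mentioned above, the same walk would read A, R, G, B instead.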
For amusement purposes, maybe at some point we'll pass it not that way, so we can see the color reversal if you want. The point being, it's okay, because there is actually a format for that, GL_BGRA_EXT, so it's totally fine, we can just tell it that's what we've got. And then we have of course our pixels, which come from the buffer memory. Okay, and then we've got some other information here. What were the other things? Okay, so type: we know that since we specified the order of the components with the format field, we now have to specify the actual type of them. They're unsigned bytes, because it's talking about the size of each component, not the size of the whole thing. We're 32-bit colors in Handmade Hero: 8-bit R, 8-bit G, 8-bit B, 8-bit A, so we're unsigned bytes for each one of them, 0 to 255, and we're BGRA. So that's specifying the image that's coming in, and then we've got some other things besides that here. We've got the target, and the target in OpenGL is just specifying essentially which texture slot you're talking about. This is a very confusing thing, but what it essentially means is there's GL_TEXTURE_1D, GL_TEXTURE_2D, and so on, talking about a 1D texture, a 2D texture, a 3D texture, and the functions often take which one you're trying to talk about: the currently loaded 1D texture, the currently loaded 2D texture, the currently loaded 3D texture. It's kind of stupid that glTexImage2D takes one, because as you can see it has the dimensionality in its name, but I think they were trying to leave room for extensions that wanted to target something else, and I think they actually do use that in some cases. For our purposes, we're just trying to specify a 2D texture that we're going to use. We then have a level; the level is for specifying what are called mipmaps, which we're not going to talk about right now, so we only need to
specify zero, because we're not trying to address anything more complicated than that. And then we have internal format, and internal format is how we want OpenGL to store it, because down here we were talking about how we were passing it in, and here we're talking about how we want OpenGL to handle it on its side, which is really just a suggestion; it doesn't have to store it this way. In our case we probably want GL_RGBA8, because we want it to store 8 bits per channel, RGBA, which is what we're passing in, and that's that. We also then have to specify the width and the height, which come after it, and the border value, which again is 0. So we have our buffer width and our buffer height and our 0 border, and that's how you specify a texture... oops... that's how you specify a texture in OpenGL. But as you can see, when we run it, it's not actually going to draw anything. So why isn't it drawing anything? Well, there are a lot of reasons, a bunch of things we still have to do; like I said, I don't think we'll get to them in five minutes. When we submit a texture to OpenGL, and hopefully I hammered this home on Wednesday when I did the overview, all of these calls are basically just telling the OpenGL driver layer what we're trying to get the card to do. So what this actually turns into is a request to transfer our buffer memory over. And how does that work? Well, what's actually going to happen when we call glTexImage2D is the driver is going to snap a copy of this buffer memory into its own memory, and then later, possibly at that very moment but possibly later, it'll kick off a transfer that starts sending it down to the card. Then all of these calls will build up a command buffer saying how to draw these primitives, and that'll get kicked off to the card, and then the SwapBuffers is just saying, hey, we're ready to present now, so it knows that
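Spelled out with the parameters discussed above, the upload call looks something like this. This is a sketch, not runnable standalone: it assumes a live GL context, and `Buffer` stands in for the offscreen back buffer with its width, height, and pixel memory:

```c
// Sketch of the texture upload described above. Buffer->Memory is assumed
// to be a Width x Height array of 32-bit BGRA pixels.
glTexImage2D(GL_TEXTURE_2D,      // target: the currently bound 2D texture
             0,                  // level: base image only, no mipmaps
             GL_RGBA8,           // internal format: how GL should store it
             Buffer->Width,
             Buffer->Height,
             0,                  // border: always 0 for us
             GL_BGRA_EXT,        // format of the data we are passing in
             GL_UNSIGNED_BYTE,   // type of each component (0 to 255)
             Buffer->Memory);    // pointer to the pixels
```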
all of this stuff goes with a single frame, and it can do it all, finish it, and present it to the screen. So again, none of this stuff is necessarily happening exactly when we call it; it's all telling the driver what we want, and the driver does whatever it wants to make that happen. Okay, so why aren't we seeing a texture? Well, first of all, we have to enable texturing. The OpenGL fixed-function pipeline was set up so that you could turn features on and off as necessary, and there's a function called glEnable which allows you to say whether you want something like texturing to happen. So the first thing we have to do is actually enable texturing, so it knows we want it to pull from the texture. Okay, but just doing that, unless there's some weird thing that happens here, I don't think will be enough for us to actually see our texture on the screen. Okay, so why are we still not seeing it? We set the texture image, why isn't a texture on the screen? Well, there's more to it than that. The next thing is that we have to look at how the texturing API works, which is that we can have more than one texture; obviously, if you look at games, they have lots of different textures, and one of the things OpenGL wants to be able to do is store multiple textures on the card, so that as we draw things we can select which texture we want to use. We can just say: draw this triangle with that texture, draw this triangle with that texture, that sort of thing. And so although this submits the texture to the card, there's something you're supposed to do first. And again, as I'm giving you this introduction to basic OpenGL operations, you can see it's very stateful. There are a lot of things like: we turn on texturing, and it's just assumed that everything afterwards uses that texture; we want to clear the screen, and it's just assumed that it knows what the color is,
because we set that color at some earlier time; we could have set that color way up here. So it's very stateful, it stores a lot of state, and the same is true of texturing. What happens with texturing is there's a concept of which texture is bound; there's a call, I think it's glBindTexture, and what happens with binding is you need to essentially create names for the textures you will submit, so that you can refer to them. Now this may seem superfluous in our case, because we only have one texture we ever want to draw, which is the thing that came out of our own renderer, but as you might imagine, if we convert over to rendering with OpenGL itself, we might want to have multiple textures that are different images we're putting on things, and that's what this part is for. So what we need to do is bind the texture; that target is again the thing I was talking about where it says which texture slot you're talking about, the 1D, 2D, 3D texture, and then the texture name is something we need to create. So how do we do that? Well, we want to do this at startup, but I'll just do a static init here as a quick thing: a static init = false, if(!init)... So what we need here is one of these texture handles, and the way we get a texture handle is we just ask OpenGL for one. We say glGenTextures, which is a request to get textures; we say we would like one texture, there's a count, and then we say where to put it, and if we made this an array we could generate multiple textures at a time. Once we've done that, we can then bind that texture. And really, literally the only thing glGenTextures does is give us a name; it's basically getting us a pointer, saying: give us something we can use to refer to this image in the future when we need it. And all we're ever going to do is just bind it
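The quick-and-dirty setup described above looks something like this. A sketch only, assuming a live GL context; as discussed on the stream, the static-init hack is temporary and this really belongs in OpenGL initialization:

```c
// Sketch of the one-time handle setup: ask GL for one texture name,
// then bind it so subsequent texture calls refer to it.
static GLuint BlitTextureHandle;
static bool Initialized;
if(!Initialized)
{
    glGenTextures(1, &BlitTextureHandle); // count = 1, result written here
    Initialized = true;
}
glBindTexture(GL_TEXTURE_2D, BlitTextureHandle); // make it the current 2D texture
```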
and submit an image, that's it. Okay, so we're getting closer now, but I think we're still not quite there, because there's one more thing I believe we have to do before seeing something on the screen. So there's our glBindTexture... what is the problem... oh, the types, okay, fine, there you go. I don't think we'll see the texture still, because there's one more thing, and yes, we're still white. And that is: we have to set up the rules for how the texture is sampled and applied, because remember, our texture is just one of the things that might contribute to the color of a point in this fixed-function pipeline. We saw before that I could set colors; I was able to set, for example, a yellow here, and the points came up yellow. So what would it mean if I had the color set to yellow and the texture was something else? What does that mean? You have to set up what's called the texture environment, which says what to do with the texture. And so you can see there's a function, glTexEnv, and there are both i and fv variants, that say how to sort of apply the texture. I'm not sure what they've got in here... here we go. So glTexEnvi, for example: we can say what we're trying to do with our texture, and we can say things like replace, blend, decal, modulate, and set the mode. What this allows us to do is say what we want our texture to do, how we want our texture to affect the color of the pixels we're drawing. And this is pretty straightforward in our case: modulate means multiply, and that's what we're going to want, because it means that when we set colors, whatever our texture actually has in it will just get multiplied by those colors. So I can do glTexEnvi here: GL_TEXTURE_ENV, and I want to set GL_TEXTURE_ENV_MODE to GL_MODULATE, and that says whenever
you sample a texture and you have an incoming color, multiply them together. We have another set of things here, which are glTexParameter... and look at the search there, they couldn't even... that was great, nice job MSDN, really fantastic, good job. Alright, so let's take a look here, where are our OpenGL texture docs... well, this is the 4.0 documentation, that's probably not the best thing, let's try this, there we go. So here's glTexParameter, and you can see that this is just kind of a multi-function which allows you to set a whole bunch of things, tons and tons of settings that we might want to set for our texture, and we'll kind of see how to set those. So let's go ahead and put them in here: glTexParameter... and then there's one more thing. Even after all this I don't think we will actually see anything, because there's one more thing I'm pretty sure we have wrong that we'll have to fix, but that'll be left as an exercise for the reader. Okay, so what are all these things? A lot of these we probably don't ever want to set, but I'm just going to show them here so you can see how much randomness there is in here. Alright, we'll get to those a little bit later, and all of those are going to be (target, pname, param); all of these are going to have GL_TEXTURE_2D in front of them, and then some parameter, and we'll talk about these on Monday, I guess. And then the final thing that I think we have wrong here is that in this glTexImage2D, you'll notice what we did not specify: we've got the width and the height, we have the fact that it's BGRA, we have the GL_UNSIGNED_BYTE, but what we still don't have is any specification of what the stride or packing is on this pointer. And if you remember, this buffer is actually, you know, it's like a
bottom-up thing, and it maybe has a certain stride; we haven't specified any of that, so we have to go verify that we're passing it something whose stride is correct. And there are a bunch of functions you actually call for OpenGL to set up how it's expecting this, pixel transfer settings and all this other stuff. Alright, so let's go through these really quick; we've got maybe a couple minutes to spare, let me just say what a couple of these are. So there's GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER. Some of these others are based on mipmapping, and I don't really want to get into what mipmapping is; these GL_TEXTURE_MIN_LOD, MAX_LOD things are all basically mipmapping things, and I don't really want to deal with any of those, so I'm just going to essentially set mipmapping off; I don't want any mipmapping of any kind. And similarly, I don't even want any texture filtering; you know, we implemented bilinear filtering, and I don't want any of that on right now. Min filter says: when you are shrinking the image down, when you're taking a texture that's bigger than how it shows up on screen, what do you want to do? And GL_NEAREST says just pick the nearest texel and show it, so don't do bilinear filtering like we were doing. Remember, we did nearest when we first made our renderer, we just grabbed the pixel, and it looks kind of sparkly and not that good; we just want to stick with that for now, because we're just blitting to the screen, we should be able to get that fine anyway. Mag filter is the opposite: it's what to do when the texture is smaller than how it's appearing on screen, how do you want to blow it up; again, I'm just going to use nearest. MIN_LOD and MAX_LOD are for if you start using mipmapping, which like I said we're not doing, so we're going to
ignore those for now. Base level and max level, I don't actually remember exactly how those are used; those are again all mipmapping things. Texture wrap S, T, and R: although we use the terms U and V, OpenGL uses the terms S, T, and R to talk, I believe, about texture coordinates after they've been transformed; I don't quite remember, but I believe that's true. So if you have UVW coordinates on your textures, like we were talking about UV coordinates, and then I said there's a texture matrix you can use to transform them if you want, well, what comes out the other side of that I believe they refer to as S and T, and the third one, R, again is for if you have 3D texturing or something like that involved. And so the wrap here allows you to set how you want your textures to wrap: do you want them to clamp to the border color, do you want them to mirror, do you want them to repeat? In this case we want clamping, because I don't actually want anything that mirrors or repeats or anything like that. These guys here, texture priority and so on, we don't even want to set any of these; texture priority can say, when we're putting textures in memory, if we fill up texture memory and have to discard one, which one should take priority. We don't really want to talk about any of those, they're all kind of non-issues for us. Alright... so I guess these are not spec'd in our version of OpenGL; it looks like they're not, so I guess this is too early for OpenGL to even have that, the OpenGL we're using doesn't have it apparently, which is kind of terrifying, but okay. Alright, so we're getting closer, we're not quite there yet... okay, we're not quite there yet, but we're almost there, I think... or we just are there. Alright, well, when I said we weren't there yet, what I meant is we are there, and please don't listen to
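Gathered up, the sampling state described in the last few passages looks something like this. A sketch only, assuming a live GL context with the blit texture bound; `GL_CLAMP` is the old-GL wrap mode discussed above (later versions prefer `GL_CLAMP_TO_EDGE`):

```c
// Sketch of the sampling setup: no mipmapping, nearest-neighbor filtering,
// clamped wrap, modulate as the texture environment, texturing enabled.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // shrink: nearest texel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // blow up: nearest texel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);       // no repeat/mirror in S
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);       // no repeat/mirror in T
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);       // texture * incoming color
glEnable(GL_TEXTURE_2D);                                           // turn texturing on
```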
what I have to say, because it's all lies. So we are there: there is the game running through OpenGL with all the parameters set, and I guess our buffer just happens to be exactly packed the way OpenGL was going to read it, so we got lucky. Which means if we just go ahead and set this thing to 1.0, and we could also set our buffer size back up if we want to here, we could set it to 1920 by 1080, we should have a full-screen OpenGL thing running. And there it is, and here we go into the game, and there's our little dude, and he's going around. So okay, good, we did it. A lot of ground covered there, and like I said, we're going to want to go through this stuff a little more slowly; that was the first pass, and we'll want to go through each part of it a little more slowly as we sort of expand our usage of OpenGL to encompass actual rendering of the game instead of just blitting the image. But that was it, and hopefully you guys could actually see that; like I said, I don't actually know if you can, but hopefully you can, I'm not sure. So yeah, let's see here, let's go ahead and go to some Q&A. If you guys have any questions, now would be a great time to put a Q: up on the screen for me to look at. Yeah, WretchedFreak: you used static instead of local_persist for init? Yeah, that's because this is not actually staying, right, in fact it's so not staying we should probably just get rid of it right now, like we can just do this for the moment. So yeah, we just have to do that at some point, you know, this has to happen, I just didn't feel like bothering at that particular moment. So actually, when we do our OpenGL init, as soon as we actually make our context current, we can go ahead and gen our texture there; as soon as we have a context we can gen our texture, that's really all that has to happen. Where is that blit texture... yeah, I just didn't want to do that at that time, but that's not gonna
you know, that's not gonna stay in there. Nixxy: can we verify the vsync somehow? Oh, we have not actually requested vsync yet, to be clear, so there's nothing to verify; we have not asked OpenGL to vsync us, that's a little bit later. Let's see... SoySauceTheKid: are there any stretching issues when going from UV coordinates to screen coordinates, and if so, how would you fix it? There are; like I said, it maps the unit cube directly to the viewport there, so in this case I believe we are one-to-one, but you do have to make sure that those line up, you do have to make sure the points you're passing in line up properly. So I'm not sure what you mean by how would you fix it; you have to make sure the math is correct to line it up. But let's see here... the texture name you're binding is always zero after the first iteration? No, no it's not, why do you say that? Messy: oh, that I disagree with, insofar as... I think in old OpenGL you don't ever need glGenTextures, you can just pick your own arbitrary ints? I don't think that's true, I'm pretty sure you have to call glGenTextures, but we can test it, we can certainly test it. So let's see here... yeah, so GlobalBlitTextureHandle, let's see what happens if I just set it to 1. Alright, maybe that just works. Are you sure that works? I'm not sure that actually works, though, still, because that might just be setting the default texture always, that might just be setting texture zero.
And I mean, I'm not sure that's a fair test; you'd have to be able to set two textures and pick between them, if that makes sense. Actually, I do know how to test that: if that were true, I should then be able to set a different texture here after submitting this one; I should be able to set, like, texture ten and get white again, because that texture doesn't have anything in it. Alright... I literally never knew that, that is totally news to me. So wait, are you telling me there was never any reason to call glGenTextures, ever? That makes no sense. How did I never know that? I've never heard anyone mention that. I have nothing, I'm speechless. How is that possible? Why do they even have glGenTextures? That's very strange. Would texture filtering with GL_NEAREST even occur when the image and texture size are the same, as they are now? Yes, filtering always occurs. What happens is, if you set something other than nearest, if you set GL_LINEAR for example to do bilinear filtering, it will still apply the bilinear filtering, and it will just so happen that the coefficients of the bilinear filtering end up being zeros and ones, like 0, 0, 0, 1, and so you'd still get the correct sampling in theory, assuming the card does it right and there are no weird bugs or anything, but the filtering is still applied, it just doesn't actually look like anything happened. KerryJohansson: in your professional career, what GPU library have you preferred, just OpenGL, DirectX, GLSL, etc.? I tend to use OpenGL because it's more cross-platform, but I don't really like any of them, I think they're all pretty bad; they're definitely not how I would design it, I can tell you that. CriminalDragon: later on, after we upgrade to more modern OpenGL, will the game render into a framebuffer and then onto the same triangles, or will you just
have it render directly to the main window buffer? Well, I guess that depends on whether you have any post-pass effects. If you do, then you render the game into a buffer and do some effects on that buffer sequentially, usually using a buffer chain, ping-ponging or not, and then the last thing you do is resolve the buffer to actually display it. So you will often render your game into something that's not the final thing you're going to flip. Let's see here... I need to check whether anyone even saw the chat client's thing; I can't see anything. Oh, computers, they're so great. Next: "before you removed the init variable and inlined the texture setup, you were always passing zero as the name, by the way." Oh, I see what you're saying. Because I forgot to put a static in front of the thing, is that what you're saying? I maybe did forget to say static, which could be true. MessyElda asks: so there is no way to blit directly to the back buffer anymore? Well, it depends what you mean. Technically the back buffer, quote-unquote, is on a separate card, so there's obviously no way for the CPU to directly write to it; it has to go across the PCI bus. Now, in the DirectDraw days, what happened is they would map the memory so that as you wrote to it in main memory, it would actually be mapped to write out to the card. It would write through the PCI bus, and it would appear that the CPU was writing directly to the graphics card, but the writes were really still going over the PCI bus. Nowadays, since things are so much more asynchronous, that would be a very slow way to do it. What you'd rather do is get it into memory somewhere and then tell the card to come get it. So it changed from being a direct CPU write to being more of a gather from the card
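The buffer-chain idea above can be sketched as index bookkeeping. This is a hypothetical sketch, not Handmade Hero code; the buffer indices and the pass structure are made up for illustration:

```c
/* Ping-pong post-processing sketch: the scene is rendered into
   buffer 0, then each effect pass reads one buffer and writes the
   other. Whatever buffer holds the last result is what you resolve
   to the window at the end of the frame. */
int pingpong_result_buffer(int effect_pass_count)
{
    int read_index = 0;   /* scene was rendered here      */
    int write_index = 1;  /* first effect will write here */
    for (int pass = 0; pass < effect_pass_count; ++pass) {
        /* apply_effect(buffers[read_index] -> buffers[write_index]); */
        int tmp = read_index;
        read_index = write_index;  /* last write becomes next read */
        write_index = tmp;
    }
    return read_index;  /* buffer holding the final image */
}
```

Zero passes resolve buffer 0 directly; one pass resolves buffer 1; two passes land back on buffer 0, and so on.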
because it's just way more efficient. I don't know that there's literally no way to blit directly to it, though. Presumably you could still write a driver that allowed you to map some GPU memory and write directly to it; it might still work, it just wouldn't be a very efficient way to do it compared to what they currently do. Is there a difference in speed now that we use OpenGL to move the buffer to the GPU? No, I mean, there shouldn't really be. We weren't really measuring it, but I think it's basically the same. It might even be a little slower than it was before, because the way it works now is a little less predictable. It's not the greatest way to get something on the screen, to render it yourself and then hand it off to a graphics library, but you can see right here that it kind of oscillates, and the frame time is something like between 50 and 60 milliseconds. If we want, we can get rid of the init and just do the blit ourselves with StretchDIBits... yeah, it doesn't look particularly different. The other thing you have to remember is that double buffering is turned off so you guys can see it on the stream. I don't know if allowing double buffering would also improve the speed of things; let me just check. Yeah, it might. So the concession we're making for streaming, to allow the streaming capture, may also be hurting us a bit. Kerry Johansson asks: is there a way you know of to optimize PCI transfers between the CPU and GPU using OpenGL? I don't think there's actually going to be anything particularly optimized here; I don't think there's very much you could do to optimize
this particular transfer, and the reason for that is basically you're just sending down a texture and drawing it once, and that's all that's happening. I think the driver will probably do a pretty good job of that. The only thing I can think of that you might be able to do is get rid of the extra copy. Okay, so let's say you cared about how long this call right here took, the copy into driver memory. You could possibly overlap that by doing it on a separate thread while you started rendering the rest of your stuff. I don't think you could optimize the time the transfer itself takes in any big way, because, and this might be a little bit out of date, at least previously it was actually the slower way to go to lock memory on the GPU side: to lock a region of memory, write into it, let go, and allow it to transfer that out. That was actually slower, because it caused what's called a synchronization point in the driver, which basically means the driver has to coordinate with the GPU at that point to find the bytes of memory it's going to lock down. So this was actually the faster way to do those transfers. But while the driver is doing the copy inside the glTexImage2D call, what you could do is run all of our OpenGL on a thread, so that while we were transferring the image we could be doing actual work, like running the game simulation or something like that. You're not really speeding it up; you're overlapping it more. That could potentially be right, but again, this isn't a huge part of our time anyway, so I'm not sure it's a really big deal. All right, I think we are done... Oh, one more, from someone who just got here: I don't know if this is applicable to the last question either, but could glTexSubImage2D be faster than glTexImage2D? I seriously doubt that, because you're replacing; in
fact, I think it could be slower in this case, because the card can't tell that you're actually replacing the whole texture. So, Shawn, just to give you why I think that would be a bad idea for this, although I don't know for sure: you already know this, but I'm going to say it for other people. glTexImage2D just stomps over the entire thing; it replaces everything in the image. glTexSubImage2D replaces just a piece, and the rest of the texture stays the same. So why would I think that the first one might be faster than the second? The reason is, look at what's actually going to happen in terms of the sequence of events. You've got the CPU and you've got the GPU. On the CPU I call glTexImage2D to specify an image. That's going to do the copy, then it's going to kick off a transfer to the GPU, and then it's going to put a fence, so that it knows when that texture is done, because the texture is actually getting used. Well, rather than literally a fence, what it's probably going to do is create a dependency chain; I don't even know why I drew this diagram, so let's just talk about it that way. The dependency chain looks like this: you've got SwapBuffers at the bottom, the SwapBuffers call depends directly on the primitives from glBegin/glEnd, and that glBegin/glEnd depends on the texture. Say this is the SwapBuffers on frame zero, and this is the texture on frame zero. When I call glTexImage2D, the driver knows you're replacing the image entirely, so on the next frame the dependency chain is totally separate. It doesn't have to worry; it can
issue these in complete parallel. What happens if I do glTexSubImage2D is that the dependency chain actually says: oh, I can't use a separate texture here, it's this same one, and I have to wait for the previous frame to finish, because it's using this texture, before I can do the update. Let me draw this a little better: everything depends on this swap, including the texture update, because it can't update the texture until this guy's done. So in a sense it ends up pointing back here, which serializes all of these into one big chain. Now I have a dependency chain that's six deep instead of two or three deep. So glTexSubImage2D is usually a bad idea if you're just replacing the whole image; you usually don't want to do that, because you're keeping the graphics card from fixing it for you. So yes, Shawn, I don't think that's the case; I could be wrong, but I think most drivers would prefer it the other way around. And at the very least, if glTexSubImage2D actually was faster for some reason, you would probably still want to break the dependency chain, so you'd probably still want to create two textures and alternate between them, even/odd parity. One thing I can tell you for sure, for example, is that you definitely don't want to do that with vertex buffers. You definitely don't want to do sub-range updates of an index buffer or a vertex buffer; if you're replacing all of it, it's way faster to just use a separate buffer, because you just don't want to create those dependency chains. Chat says glTexImage2D and glTexSubImage2D don't have to do anything different, that they could be implemented exactly the same in the driver. But saying it has to allocate storage for the texture, well, that's exactly what allows you to break the dependency chain. The driver probably understands that, and
will know that it can make a second slot the second time it sees glTexImage2D for that texture, and then it won't have to wait on that texture, which is just as good. So yeah, I'd be real surprised, but like I said, we'd have to actually time it, or ask an actual driver person to tell us. In this case it's not taking enough time as it is for us to even notice, I think; meaning I'm pretty sure we could switch between them either way and we'd never know. We'd have to put in some better timing to even tell. The frame time is probably too variable right now, especially with the streaming, and I'm not going to bother doing it right now, but it's too variable to have any real idea which one is faster just by switching. QuarterTron asks: at what point in the chain of GL commands does the card actually get involved? Command dependent, driver dependent; entirely driver dependent as far as I know, and card dependent too. It's entirely dependent on how they chose to optimize things. All right, I'm going to wrap it up. So, glTexImage2D versus glTexSubImage2D: if you guys want to play with it, you could probably create a pretty easy test case for it. I mean, this is an okay test case as it is; you'd just have to put in some better timing and run it on a machine that's not streaming. All right, let's go ahead and close this down. That's not quite closed... all right. Everybody, thank you for joining me for another episode of Handmade Hero. It's been a pleasure coding with you as always. If you would like to follow along at home, you can always preorder the game on handmadehero.org, and it comes with the source code, so you can follow along; for example, if you'd like to play with timing glTexImage2D versus glTexSubImage2D, for whatever that's worth, you could do that. There is a forum site you can go to to ask questions, and a Patreon page if you want to support the
video series, and a tweet bot if you would like to follow the series live; it tells you when we're going to be live, so you can plan your whole life around Handmade Hero. We will be back next week, and I will post the schedule to that tweet bot, so check it during the weekend or Monday morning to see when we're going to be live. Until then, have a good weekend of programming, and I will see you guys on the Internet. Take it easy, everyone.
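For anyone who wants to build the glTexImage2D-versus-glTexSubImage2D test case mentioned in the wrap-up, the dependency-chain argument from earlier can be written down as a toy model. This is illustrative arithmetic, not driver code, and real drivers are free to behave differently:

```c
/* Toy model of the dependency-chain argument: each frame issues
   upload -> draw -> swap (three links). A full glTexImage2D replace
   (or alternating between two textures on even/odd frames) lets the
   driver hand each frame fresh storage, so every frame's chain stays
   three links deep. A glTexSubImage2D update must wait on the
   previous frame's draw, so each frame's three links append to the
   previous chain instead. */
int longest_dependency_chain(int frame_count, int full_replace)
{
    if (frame_count <= 0) {
        return 0;
    }
    return full_replace ? 3 : 3 * frame_count;
}
```

After two frames, the sub-image path is six links deep versus three for the full replace, which is the "six deep instead of two or three deep" picture from the stream.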
Info
Channel: Molly Rocket
Views: 14,586
Keywords: Handmade Hero
Id: YIOpZ9M5pc4
Length: 91min 59sec (5519 seconds)
Published: Fri Jan 15 2016