Input Vectors - Shader Graph Basics - Episode 9

Captions
Today we're going to talk about input vectors. Let's go!

Before we jump into using input vectors in Unreal 5 and Unity, I want to step back, talk about the core concepts, and illustrate the principles a little so you'll understand what's going on once we start using these things. First of all, it's important to understand what a vector is. A vector is basically a line between two points: it has a start point and an end point, or a direction. In computer graphics, we use vectors to find out important information about the scene, like how far an object is from the camera, what direction a surface is facing, and whether or not an object should be lit by a light source.

Today we're going to talk about three main vectors. The first is the surface normal. A surface normal, often just called a normal, is a vector that's stored at every vertex of the model. The direction the normal is pointing is the direction the surface is facing, so the normal is perpendicular to the surface. I know there's only one of them shown here, but it's important to know that every point on the surface of the model has its own normal. Surface normals are automatically created by the 3D software you use to create your models. The normals are always one unit long, which is important because many math operations, like dot products, require that the vectors be the same length; I'll give more details on that later. In the shader, you can use normals to judge the relationship between the model's surface and other objects in the scene, like the camera or the lights.

Next, let's talk about the camera vector. This is also sometimes called the view vector or the eye vector. It's a line that starts at the position of the camera and extends into the scene to each point that's being rendered. If we measure the length of the camera vector, that tells us how far away from the camera each point in the scene is, so it gives us the scene's depth. If we compare the camera vector with the surface normals on the model, it tells us what parts of the model are facing toward the camera and what parts are facing away. To do that, we need to normalize the camera vector, which means making it one unit in length, just like the surface normals. Once we have a normalized camera vector, we can do a dot product between the camera vector and the surface normal, which results in 1, a white value, if the surface is facing the camera, and 0, a black value, if it's perpendicular to or facing away from the camera. I'll show you a better example of that once we jump into Unreal and Unity.

Lastly, we have the light vector. For a directional light, the light vector is a constant value, but for a point light, the light vector starts at the position of the light source and goes to the point that's currently being rendered. Just like with the camera vector, we can measure the length of the light vector to see how far away the light is, and we can compare the angle between the light vector and the surface normal to see how much illumination the surface should receive from the light. Because most modern rendering engines, including Unreal and Unity, use deferred rendering, the lighting calculations are generally done at a later stage in the rendering process, so you won't need to use a light vector much at all in the materials you make with Shader Graph or the material editor. But it's still good to understand what they are.

All right, now that we've done all the explaining, let's take a look at some example shaders and see what you can do with the surface normal, the camera vector, and the light vector.

So here we are in Unreal, and the first example we're going to look at is something you can do with just the surface normal. Here I have the vertex normal in world space; I can get to that just by typing "vertex" and picking VertexNormalWS. Now, it's really important when you do operations using two vectors that you have both of them in the same space.
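To make the normalize-then-dot-product idea concrete outside of a node graph, here's a minimal Python sketch of the same math; the vectors are made-up example values, not anything taken from the video's scene:

```python
import math

def normalize(v):
    # Scale a vector to unit length so the dot product compares direction only.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    # Sum of component-wise products; for unit vectors this is the
    # cosine of the angle between them.
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    # Clamp to the 0-1 range, like the Saturate node.
    return max(0.0, min(1.0, x))

normal = normalize((0.0, 0.0, 1.0))     # surface facing straight up
to_camera = normalize((0.0, 0.0, 5.0))  # camera directly above the surface

print(saturate(dot(normal, to_camera)))  # parallel vectors -> 1.0 (white)

side = normalize((1.0, 0.0, 0.0))        # direction perpendicular to the normal
print(saturate(dot(normal, side)))       # perpendicular -> 0.0 (black)
```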
So I have my vertex normal in world space right now. Don't worry, we'll get into another video later on where we talk about spaces (world space, object space, tangent space, that sort of thing), so just trust me right now when I say that we need to have our vectors in the same space when we use them together.

For example, here I have the vertex normal in world space, and I also have this other vector that points straight up in world space: (0, 0, 1). This points straight up because in Unreal, up is the Z component, so I have 0 for X, 0 for Y, and 1 for Z. I'm doing a dot product between my surface normal and this up vector, which compares the up vector in world space with the surface normal of my model. Where the two are parallel, pointed in the same direction, it gives me white, and where they're perpendicular, it gives me black. Then I saturate the result, and we'll pass this into Base Color and Emissive to see what happens.

Here you can see I've dotted the surface normal with the up vector, and what this gives me is a mask where the parts of my model that face up are white and the parts that face down are black, with a nice smooth falloff. I could adjust this with some additional math if I wanted to change how sharp the falloff is, but this is really useful: I can use the surface normal to tell whether a surface faces up or not, and I could use this mask for all kinds of things. If I wanted to apply sand or moss to the top of my model, I could use this as my mask. In fact, I have another video where I do just that: I use this technique to apply an environment material like moss to the top of a model. I'll put a link to that right here; go take a look if you want to expand on this technique and use the normal to create a mask that applies materials to the tops of your models.

Okay, there's an easier and slightly more efficient way of doing this. Look at what's happening with our math here. In a previous video, we talked about the dot product: you multiply the X components of the two vectors together, the Y components together, and the Z components together, then add the results. If we look at the up vector's values, we have those zeros; when we multiply them with the data coming from our world-space normal, we just end up with 0 for X and Y, and then Z multiplied by 1, which leaves us the Z value of the world-space normal. So if I want to do this same operation and get the same result without actually computing the dot product, all I have to do is add a ComponentMask node and select just the Z component, because I already know I'll get 0 for X and Y and whatever the Z component of the world-space normal is. Grabbing the Z component of the world-space normal gives me the same result as doing the dot product with the up vector. So I wire this into the Saturate, and you can see I've got exactly the same thing, and I didn't even have to do a dot product. Pretty cool.

That's one use for the vertex normal, and there are all kinds of other things I could do. Instead of the up vector's component, I could mask out the Y, and now I've got a mask coming from the side; or I could use the X, and now I've got a mask coming from the front.

All right, that's our first example, a use of the vertex normal. In the next example, one we've seen before on the channel, we're going to use the vertex normal and the camera vector together.
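Here's a small Python sketch confirming the ComponentMask shortcut described above: dotting any vector with (0, 0, 1) is the same as reading its Z component. The normal value is made up for illustration:

```python
def dot(a, b):
    # Component-wise multiply, then sum.
    return sum(x * y for x, y in zip(a, b))

up = (0.0, 0.0, 1.0)  # world-space up in a Z-up engine like Unreal

# A made-up world-space normal for a surface tilted partway between
# facing up and facing sideways.
normal = (0.6, 0.0, 0.8)

# The full dot product and the component-mask shortcut agree:
print(dot(normal, up))  # 0.8
print(normal[2])        # 0.8
```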
Let's take a look at this example. Here we have our camera vector, which goes from the camera to the surface, and here we have the vertex normal, and we're doing the dot product between the two. That dot product will be white when the two vectors are parallel and black when they're perpendicular, so when the surface points toward the camera we get a white value, and when it points away we get a black value. Let's plug this in and see the result. You can see that our model is white in the middle and black around the edges. Let me change the field of view a little so it's more obvious what's going on. If I zoom in, you can see the black showing up around the edges, but you'd have to rotate outside of the camera to really see the black, because the surface there is pointing right at the camera, and we're measuring whether the surface normal is parallel or perpendicular to the camera.

One thing I should point out about how Unreal works: whenever you do a dot product between two vectors, they need to be the same length, and that's why we have these Normalize nodes. We take the vertex normal and make it a length of one, and the camera vector and make it a length of one, and then when we do the dot product we get the results we're expecting. Now, the thing about this camera vector is that it's already normalized, so if I skip the Normalize and plug it straight into the dot product, you see we get the same result. I told you before that the camera vector goes from the position of the camera to the position on the model that's currently being rendered, and that's true, but in Unreal this CameraVector node has already been normalized. It's not the full length from the camera to the point being rendered; it's just the normalized version. So we can get away with operations like this, dotting the vertex normal with the camera vector without normalizing, because normalization has already been applied to both of these nodes.

Now, if we want to adjust our mask, I can use a Power node, like raising the result to the power of 3. If we plug this in, you can see I'm getting a little more black around the edges, and the higher I make this value, the more toward the middle I can push the effect. Now that I've raised it to a power of 8, I'm actually getting more black than white. Using this Power node, I can tune the result. I could also use the One Minus node if I wanted the inverse: passing the result into One Minus flips it around so I get white on the edges and black in the middle, and if I do that, I'll probably want to lower that power back down.

So this second example does a dot product between the camera vector and the vertex normal, and we get an effect that a lot of people call a Fresnel effect. I have another video where I show a really cool use of the Fresnel effect to make cloth. I think I've pointed this out before, but if you haven't seen my cloth shader video, where I use the camera vector dotted with the surface normal, you can check it out right here; I'll put the link here and also down in the description. So: the camera vector dotted with the surface normal gives us a mask that tells us whether the surface faces toward or away from the camera.

All right, let's take a look at our third example, and this is one where we want the camera vector not normalized. I talked about the Normalize node making a vector unit length, and how the CameraVector node is already unit length. Well, how do we create a camera vector that actually goes from the camera's position to the point on the surface being rendered?
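Before moving on, here's a Python sketch of the Fresnel-style mask just described, combining the dot product with the Power and One Minus adjustments. The function name and example vectors are my own for illustration:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    return max(0.0, min(1.0, x))

def fresnel_mask(normal, to_camera, power=3.0, invert=False):
    # Dot the unit surface normal with the unit camera vector, clamp,
    # then sharpen the falloff with a Power-node-style exponent.
    facing = saturate(dot(normalize(normal), normalize(to_camera))) ** power
    # One Minus flips the mask: white edges, black center.
    return 1.0 - facing if invert else facing

# Surface looking straight at the camera -> fully white (1.0).
print(fresnel_mask((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))               # 1.0
# Surface edge-on to the camera -> black; inverted -> white (1.0).
print(fresnel_mask((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), invert=True))  # 1.0
```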
Let's take a look at that. I'll scroll down here, and this is what I've done: I've created a camera vector by taking the camera's position and subtracting the absolute world position. That gives me a vector between the position of the camera and the point I'm currently rendering. Then, if I want to know how far that is, I can use this node called Length, which takes the vector I've created and measures how long it is.

What I'm trying to do here is create a mask that's black when the object is near the camera and white when it's farther away. There are tons of uses for this kind of effect. For example, if I wanted to apply raindrops when the model was close to the camera but fade them out as the model got farther away, I could do that with this technique. Or if I wanted to add a detail texture to my surface but fade that texture out as the model got farther from the camera, this is how I'd do it.

So I take the camera position, subtract the absolute world position to get the camera vector, and measure the length of that vector. Then I have two nodes that adjust the result. The first subtracts a hard-coded value (I could also expose this as a parameter), and it controls how far from the camera the effect starts. If I gave it a value of 0, my falloff mask would start right at the camera and begin falling off there. But I've given it a value of 500, which means my mask will be perfectly black from the camera out to five meters, and the falloff starts at that distance. The next node divides the result by 5000, which means that from the point where the falloff starts at 5 meters, it takes 50 meters to go from black to white.
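As a plain-code sketch of that node chain (a Python stand-in for the graph; distances are in Unreal units, where 1 unit = 1 cm, and the positions are made-up examples):

```python
import math

def length(v):
    # Length of a 3D vector, like the Length node.
    return math.sqrt(sum(c * c for c in v))

def saturate(x):
    return max(0.0, min(1.0, x))

def distance_mask(camera_pos, world_pos, offset=500.0, falloff=5000.0):
    # Vector between the camera and the point being rendered.
    to_point = tuple(w - c for w, c in zip(world_pos, camera_pos))
    dist = length(to_point)
    # Subtract the offset (mask stays black out to 5 m), divide by the
    # falloff length (black-to-white over the next 50 m), then clamp.
    return saturate((dist - offset) / falloff)

camera = (0.0, 0.0, 0.0)
print(distance_mask(camera, (0.0, 400.0, 0.0)))   # closer than 5 m -> 0.0
print(distance_mask(camera, (0.0, 3000.0, 0.0)))  # mid-falloff -> 0.5
print(distance_mask(camera, (0.0, 9000.0, 0.0)))  # beyond 55 m -> 1.0
```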
So this divide value determines how far into the distance the mask extends: the subtract is my offset, and the divide is my mask length. Let's move these nodes up and take a look at the result. I'll plug this into Base Color and also into Emissive Color. Here you can see my sphere is black; as I zoom out to about five meters, it starts turning white, and over the next 50 meters or so it goes from black to white, so at about here it's solid white. As I zoom in again, you can see that the closer it gets to the camera, the darker it gets. Again, I'm doing this by measuring the length of the vector between the camera position and the model's position, then using one value to set where the falloff starts and another for how long the falloff lasts. By measuring the length of the camera vector, I can create this falloff mask and use it for all kinds of things I want to fade in when they're close to the camera and fade out as they get farther away.

All right, let's take a look at one last example. In the diagrams at the beginning of the video, we had the light vector, and I wanted to show you that there is a node in Unreal called LightVector. Here I've done a dot product between the LightVector node and the vertex normal, and what this would do is show how parallel or perpendicular the surface of the model is to the light. But the problem is, if I take this node and plug it into my shader, I get an error, and if I mouse over the area, it says "LightVector can only be used in a LightFunction or DeferredDecal material." What that means is that Unreal is telling me: we're using a deferred renderer here, and we don't want you doing lighting calculations in your material.
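The dot product of a light vector with the surface normal is the classic Lambert diffuse term; here's a minimal Python sketch of what that computes (the vectors are made-up examples, and in practice the engine does this for you during the deferred lighting pass):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light):
    # White (1.0) where the surface faces the light, black (0.0) where it
    # faces away or is perpendicular to it.
    return max(0.0, dot(normalize(normal), normalize(to_light)))

surface_up = (0.0, 0.0, 1.0)
print(lambert(surface_up, (0.0, 0.0, 1.0)))   # light overhead -> 1.0
print(lambert(surface_up, (0.0, 0.0, -1.0)))  # light below -> 0.0
```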
Basically, what this graph is doing is diffuse lighting, and I don't really need to do diffuse lighting, because the engine does that for me. If I disconnect these nodes and switch to Lit mode, you can see I've already got diffuse lighting going on, and that happens via the G-buffer, later in the rendering than what I'm doing here when defining my materials. So even though there is a LightVector node available, you can only use it in certain conditions where you're not rendering into the G-buffer. If I were creating a decal, or some other kind of material that doesn't render into the G-buffer but where I wanted lighting, I could do this operation, a dot product between the light vector and the vertex normal. Similar to the camera-vector dot product, this would give me a gradient that's white when the surface faces the light and black when it faces away, which tells me how much the surface should be lit by that particular light source. So it's a cool technique, but it's only really applicable if you're creating some kind of exotic material that isn't rendered into the G-buffer.

All right, those are our examples from Unreal; now let's switch over to Unity and look at the examples there. So here we are in Unity, in Shader Graph, and we're going to look at those same three examples, but I want to show you a couple of key differences where Unity and Unreal are slightly different.

In our first example, we take our surface normal, use the Split node to isolate the G (the Y component) of the surface normal, and then saturate it. If we plug this into our shader, we see the same up-facing mask we got in Unreal: we take the Y component of the normal, pass it in, and it gives us the up-facing mask. The part I want you to notice is that if I open up my Split node, you can see I'm taking the G component, which is the same as Y, whereas in Unreal I was taking Z. That's because in Unreal we were using an up vector along Z, while Unity is Y-up: we've flipped the thing on its head and use the Y axis as our up vector instead of the Z axis. They're two different ways of doing the same thing, just different in the two engines. So when you're in Unity, if you want up, make sure you're using Y. That's our use of the normal vector to create masks, and just like in Unreal, you could use the other components to create a front/back mask or a left/right mask.

For our next example, we've got the Fresnel term we created in Unreal, and the one difference I want to point out is that I'm doing the dot product between the normal and the Camera node's Direction output. Here I've got my Camera node with all of its parameters, and Direction coming out of it. The difference between Unreal and Unity is that in Unity this camera direction is inverted, so if I want to dot the surface normal with the camera vector and get the Fresnel term, I actually have to add a Negate node to reverse the direction of the camera vector before performing the operation. So here's my surface normal, my camera vector negated, and my dot product. If I pass this result into my master stack, you can see white in the middle, where the model looks right at the camera, falling off to black around the edges. I know that isn't super obvious, which is why I've added a Power node to increase the contrast; now you can see the black on the edges and the white where the model faces the camera.
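The Unity difference above amounts to a sign flip before the dot product. A tiny Python sketch, with made-up vectors, of why the Negate node is needed:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def negate(v):
    # Unity's camera Direction points the opposite way from Unreal's
    # CameraVector, so we flip it before dotting with the normal.
    return tuple(-c for c in v)

normal = (0.0, 0.0, 1.0)             # surface facing the camera
camera_direction = (0.0, 0.0, -1.0)  # Unity-style direction, pointing away

# Without the negate, the facing value comes out inverted (-1 here);
# with it, we get the expected value of 1.
print(dot(normalize(normal), normalize(camera_direction)))          # -1.0
print(dot(normalize(normal), normalize(negate(camera_direction))))  # 1.0
```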
Again, just like we did in Unreal, I can use the One Minus node to invert the results and get white on the edges and black in the middle, just like that. So that's the dot product of the surface normal with the camera vector.

For our final example, I've done the same thing we did in Unreal, where I'm casting a ray, the vector from the camera to the position on the model. I take the camera position minus the model's position, then measure the length of that ray with this operation, then do a subtract to offset from the camera position and a divide to determine how far the mask extends. This gives me a mask that starts out black close to the camera and fades to white the farther from the camera I get. This value here is in meters, so what this says is: I'll have black from the camera out to 9.5 meters, and then over the next 0.7 meters I'll fade from black to white. If I plug this in, you can see that this point on my sphere is the spot that's about 9.5 meters from the camera, and then we go about 0.7 meters around the side of the sphere to the point that's about 0.7 meters farther into the scene. If I zoom out a little, you can see that as my sphere gets farther away, it goes completely white, and if I zoom in, it becomes black closer to the camera. So this set of nodes, subtracting the camera position from the world-space position and measuring the length, gives us how far from the camera we are, and then we can adjust the mask to be exactly what we want: the 9.5 value sets the distance from the camera where the falloff starts, and the divide sets the length of the falloff, so that the mask goes from the spot in the scene where we want it to start to the spot where we want it to end.

So, pretty cool: we've got an up-vector mask using the surface normal, a Fresnel mask using the surface normal and the camera vector dotted together, and a camera-distance mask where we build the camera vector and measure its length. Just like in Unreal, we've got the same effects in Unity, and this is a pretty good basic summary of using both the normal vector and the camera vector.

All right, that'll about wrap it up for today's video. I hope you enjoyed this one and learned something new about the input vectors: the normal vector, the camera vector, and maybe a little about the light vector too, although we don't use that one very often. Be sure to come back next week, and we'll talk about more shadery goodness. In the meantime, have a great week, everybody!
Info
Channel: Ben Cloward
Views: 3,525
Keywords: UE4, UE5, Unreal, Unreal Engine, Unity, shader, material, material editor, game development, real-time, tutorial, training, graphics, 3d, GPU, tech art, computer graphics, fundamentals, basics, beginning, learning, shader graph, getting started, normal, normalize, vertex normal, surface normal, camera vector, eye vector, view vector
Id: lrc-j7ub28U
Length: 27min 29sec (1649 seconds)
Published: Thu Aug 05 2021