Understanding Texture Coordinates - Getting Started with Blender Nodes

Captions
Welcome to another procedural texturing tutorial. In this one we're going to be talking about coordinate systems: how we can use them and how we can make them. Let's get stuck in.

There are three main nodes that contain coordinate information: the UV Map node, the Texture Coordinate node, and the Geometry node. Generally we use the Texture Coordinate node, or sometimes the UV Map node if we just want a UV map, but all three of these contain information we can use. These can be a little bit confusing and I've had quite a few people ask about them, so I'm going to run through all of the options, show what they do, and show which ones we can use as coordinate systems.

Now, what is a texture coordinate? Texture coordinates tell Blender where to position colors on the mesh. Coordinates are position information and therefore vectors: they contain X, Y and Z information, with one exception. To make sure we can manage this information without different axes getting mixed up, we store it in the R, G and B color channels. If we look at the axes in Blender, you can see that the X axis is red, the Y axis is green and the Z axis is blue, and this corresponds exactly with how we store our texture coordinates. Each point in the coordinate system has its own color, made up of the RGB values, and we can use RGB and XYZ interchangeably.

So if we have a point in the bottom left of this Generated vector (I'll explain each of these outputs properly in a moment), that is point (0, 0, 0) in terms of our vector position. If we store this as color information (I'll add an RGB input node and set all the values to zero), you can see that we're outputting black, and there is no value in that bottom corner. Directly above it, where we have only increased the Z value (on this cube, using Generated coordinates, Z reaches one at the top), our vector is (0, 0, 1).
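The position-as-color idea above can be sketched outside Blender. This is my own minimal illustration in plain Python, not Blender's node system: a coordinate in 0..1 space is read directly as an RGB color, which reproduces the cube-corner colors described in the video.

```python
# A coordinate triple (x, y, z) is stored directly as a color (r, g, b).
# Corners of a unit cube therefore map to the familiar corner colors.

def coord_to_rgb(x, y, z):
    """Interpret a position in 0..1 coordinate space as an RGB color."""
    return (x, y, z)  # X -> red, Y -> green, Z -> blue

corners = {
    (0.0, 0.0, 0.0): "black",    # bottom-front-left corner
    (0.0, 0.0, 1.0): "blue",     # straight above it (only Z increased)
    (1.0, 0.0, 1.0): "magenta",  # moved along X as well (red + blue)
    (1.0, 1.0, 1.0): "white",    # all three channels at 1
}

for pos, name in corners.items():
    r, g, b = coord_to_rgb(*pos)
    print(f"{pos} -> RGB({r}, {g}, {b})  ({name})")
```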
If I turn the blue channel up, you can see we've got that blue. If I then move along this edge in the X direction, we change to (1, 0, 1); we haven't gone anywhere in Y yet. If I increase the red channel, we get a magenta color, and you can see that this corner is magenta. Now if we go back in space in the Y direction, that corner is (1, 1, 1), which is pure white.

Obviously coordinate space can go higher than one and lower than zero, but our screens are not going to express that range properly. This is with the Filmic view transform, which is Blender's default; you can set it under Render Properties > Color Management > View Transform. Set it to Standard and you can see that the white is exactly one: Standard has a proper range of zero to one and clips anything outside that range, whereas Filmic goes much higher in the white range. So this is why, when you look at a coordinate system, you see all these colors; once you know they're stored as RGB values, it makes sense.

Now let's look at the UV map. I've got a plane up here, and I'm using a plane because a plane fills up our UV space, and UV space has a range in X and Y from zero to one. UV space is the exception to the rule that every space contains X, Y and Z information: UV space only contains X and Y. If I add a Converter > Separate XYZ node, you can see that the X gradient increases horizontally and the Y gradient increases vertically across the UV space. As I mentioned before, because these are stored as values in color channels, we end up with a gradient: down here the value is about 0.05, which is why it's very dark, whereas right up at the top, at about 0.9, it is much brighter. The gradient runs linearly all the way across because it changes in only one direction, with no change in any other axis. And because we're storing all of this information as color channels, we can treat it like color.
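The clipping behaviour of the Standard view transform described above can be sketched as a simple per-channel clamp. This is my own illustration of the effect, not Blender's actual color pipeline:

```python
def standard_clip(rgb):
    """Clamp each channel to the displayable 0..1 range, which is
    effectively what the Standard view transform does with
    out-of-range coordinate values shown as colors."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

# Coordinates above 1 or below 0 still exist in the data,
# but the display clips them, so they all look the same on screen.
print(standard_clip((1.7, -0.2, 0.5)))
```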
That means we can affect the information as if it were color, using things like the MixRGB node and mixing channels that way.

So let's run through the different options to explain things more clearly. I'm using a Principled Volume node so that we can see the space three-dimensionally. If I plug a UV map into this it goes black, because a UV map only describes surface information and there's no way to express that in a volume.

The UV Map node, by default, contains the same information as the UV output of the Texture Coordinate node; however, it also lets us use multiple UV maps. If I add a second one, we have 'UVMap' and 'UVMap.001', and I can select either here. Looking at the UV space, 'UVMap' has this cross-shaped layout while 'UVMap.001' has a different one. This helps when you do things like baking, if you need different textures from different UV maps.

The Texture Coordinate node has a few different options. First, Generated. Generated uses what we call the bounding box of the object. If I add a monkey, Suzanne takes up a certain amount of space: a certain width, height and depth. To view her bounding box, check the Bounds box under Viewport Display in the object properties. The bounds is the smallest box that can fit the mesh inside it, and it is this box that Generated coordinates are measured against: the bottom-front-left corner is (0, 0, 0), then the values go up in the Z direction, across in the X direction, and front to back in the Y direction.

The important thing to remember with Generated is that we have a zero-to-one range in each of these axes, so our texture coordinates have a different scale in each axis. You can see this if I feed a Voronoi texture through a Color Ramp: on the cube, where X and Y cover the same distance, we get circles.
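The Generated mapping described here can be sketched as a per-axis normalisation against the bounding box. This is a plain-Python sketch of the idea, not Blender's implementation:

```python
def generated_coords(point, bbox_min, bbox_max):
    """Remap a point inside an object's bounding box to 0..1 per axis.
    Because each axis is normalised independently, an object that is
    shorter along one axis gets a compressed coordinate space there,
    which is why round features become ellipses on Suzanne."""
    return tuple(
        (p - lo) / (hi - lo)
        for p, lo, hi in zip(point, bbox_min, bbox_max)
    )

# A Suzanne-like bounding box: wider in X/Y than it is tall in Z.
bbox_min, bbox_max = (-1.0, -1.0, -0.5), (1.0, 1.0, 0.5)
print(generated_coords((0.0, 0.0, 0.0), bbox_min, bbox_max))  # (0.5, 0.5, 0.5)
print(generated_coords((1.0, 1.0, 0.5), bbox_min, bbox_max))  # (1.0, 1.0, 1.0)
```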
On the monkey, however, which is shorter vertically than it is wide, we get ellipses, because Generated space is compressed along the shorter axis. That is worth taking into account. Also note that when we leave a texture node's Vector input unconnected, it defaults to Generated, except for the Image Texture node, which defaults to UV.

The next output on our list is Normal. The normal is the vector perpendicular to a surface: if you look down at a surface from above, the normal is the vector pointing straight out at 90 degrees. The top face of the cube points straight up, so it is completely blue. Negative values show as black, but that doesn't mean there's no information; it just means we're dealing with negative vectors, which our screens can't represent visually. The information is still there. On the monkey, where faces sit at slightly different angles, we get slightly more complex vectors, but these are just vectors saved as RGB information, which is why different faces show different colors.

Next is UV. This uses the default UV map: whichever one has the camera icon next to it in the UV map list is what comes out of the Texture Coordinate node. You cannot choose between multiple UV maps with the Texture Coordinate node, only with the UV Map node.

Following UV we have Object. Object coordinates place their (0, 0, 0) point exactly at the origin of the object, and they are influenced by object scale and rotation. If I rotate the object, you can see the axes rotate with it; if I increase the scale in the Z axis, the maximum value doesn't change. However, if I apply rotation and scale, those things do change: if I scale it right down and then apply rotation and scale, the maximum value drops accordingly. So if you are using Object coordinates and your texture looks strangely deformed, it's likely that you just need to apply the scale of your object.
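The apply-scale behaviour can be sketched numerically. This is my own illustration of the effect: Object coordinates are measured in the object's local space, so an unapplied scale divides world positions back down, and applying the scale bakes it into the mesh so that divisor becomes one.

```python
def object_coords(world_point, origin, scale):
    """Object coordinates: position relative to the object's origin,
    divided back through the object's (unapplied) scale."""
    return tuple((p - o) / s for p, o, s in zip(world_point, origin, scale))

origin = (0.0, 0.0, 0.0)
point = (0.0, 0.0, 2.0)   # top of a cube scaled 2x in Z

# Unapplied Z scale of 2: the coordinate still tops out at 1,
# so the texture stretches with the object.
print(object_coords(point, origin, (1.0, 1.0, 2.0)))  # (0.0, 0.0, 1.0)

# After applying the scale, the scale resets to 1 and the mesh data
# absorbs it, so the same world point now reads as coordinate 2.
print(object_coords(point, origin, (1.0, 1.0, 1.0)))  # (0.0, 0.0, 2.0)
```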
Next up we've got Camera coordinates, which are quite easy to understand: the origin of the coordinate space is centered where your camera is, where you're viewing from, and it is oriented so that the direction you are looking in is exactly the Z axis. One way to demonstrate this: I'll add a plane, scale it up a little, and give it the same material. If I use a Converter > Vector Math node set to Length, it gives me a gradient out from the origin, and if I then add a Converter > Math node set to Modulo, you can see concentric spheres radiating away from the point of view. Going back to the raw coordinates, the center of my view is blue, so we're looking in the positive Z direction; X increases to the right and Y increases upwards. So that's Camera coordinates.

After Camera we have Window. There is no Z coordinate here: all it uses is the 2D space of the window you're looking into the 3D scene through. If you have a camera (add one now if you don't), the camera defines what is or isn't in the view. There is a (0, 0) point at the bottom-left corner of the view, and anything down and left of that appears black.

After Window we have Reflection, which is slightly more complex. If you have a surface and you are looking at it along some direction, the reflection ray comes off on the other side of the surface normal: the angle between your view direction and the normal equals the angle between the normal and the reflected ray. That outgoing ray is your reflection vector. Where we are looking down here, we're looking in negative Z, so the vector that comes off the surface points in positive X and negative Z, and that's why we see a red value there. If we look up slightly, positive Z starts to be included, and that is why we get a magenta color.
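The equal-angles rule described above is the standard reflection formula, r = d − 2(d·n)n, where d is the unit view direction and n the unit surface normal. A minimal sketch of it in plain Python (my own illustration):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

# Looking straight down (negative Z) at a surface whose normal is
# tilted 45 degrees toward +X: the ray bounces off in positive X,
# matching the red region described in the video.
n = (math.sqrt(0.5), 0.0, math.sqrt(0.5))  # 45-degree unit normal
d = (0.0, 0.0, -1.0)                       # view direction: straight down
print(reflect(d, n))                       # approximately (1.0, 0.0, 0.0)
```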
So that's Reflection. Now, moving down to the second node, the Geometry node, we have Position. This looks the same as Object, at least on our cube, but you can see that Suzanne up here does not give the same result: Object is centered on the object's origin, while Position is centered on the world origin. It also doesn't matter if your object has a different scale, because Position ignores object scale entirely, and if you move the object, it takes on the coordinates of its new position. I find Position really useful for things like spreading a texture across multiple objects in different places while keeping it somewhat continuous: because Position uses world coordinates, the texture carries across the objects.

Normal on the Geometry node (I'll right-click and Shade Smooth first) is the same as Normal on the Texture Coordinate node. However, we do have two more options here. Tangent is at 90 degrees to the normal, which is to say it lies in line with the surface: on a circle, the normal points straight out from a point, while the tangent runs dead in line with the surface at that point, heading in the positive direction, and that is what we see happening here.

After Tangent we have True Normal. Even though we have shaded Suzanne smooth, Normal shows the smoothed normals, while True Normal shows the raw mesh information: nothing to do with smooth shading or weighted normals, just what the mesh surface actually looks like.

Incoming is another really useful one, and it works similarly to Reflection: where the reflection ray bounces away on the far side of the normal, the incoming ray is the one coming straight back at us.
If we look straight on at this surface, the vector coming out of the surface towards our camera is positive Z. If we move around to the side, we get positive Z and positive X; if we get behind, all three are positive; and underneath, we have negative Z. So that is Incoming.

We also have Parametric. Parametric splits every face into triangles, which is what happens behind the scenes anyway: the renderer always triangulates our mesh, we just don't see that on the editable mesh. The issue with using these as coordinates is that while they do give you X and Y, on some faces the axes are at 90 degrees to one another and on others at 45 degrees, which is not going to give you a good result. There are reasons to use Parametric, but it's a very specific use case.

Then we have the three grey sockets. Backfacing is basically a black-and-white mask: when we look inside the mesh at faces pointing away from us, we get a one; it simply flags back-facing geometry. Pointiness refers to how angled the surface is: internal corners get a lower value and external corners a higher value. And Random Per Island gives you a random value per mesh island, so even within one object you can see that Suzanne's eyes are separate mesh islands and each gets its own random value. Note that Random Per Island only works in Cycles, not in Eevee.

So that's been a run-through of these different coordinate systems. Generated gives us a zero-to-one range in X, Y and Z across the bounding box. UV considers our UV map. Object considers the origin of our mesh to be the (0, 0, 0) point of our texture coordinates, scaled by our object scale. Camera, Window and Reflection are all to do with how we look at the mesh. Position is just world coordinates. Normal, Tangent and True Normal are the directions of the surface: you can't really use these as coordinate
systems, but they're very useful for things like masking. Incoming, again, is about how we look at the mesh, and finally Parametric gives us a UV per triangulated face.

One of the really great things about our coordinate information being stored as gradients in RGB channels, aside from how easy it is for us to edit them, is that it makes it very easy to generate UV spaces however we want. If you find later on in a material that you need to generate a UV space so you can put an image on it (we did this in the herringbone tutorial), then as long as you can generate a gradient and feed it through a Combine XYZ node, you can map an image onto it. It just becomes a case of how you can modify gradients to do what you want them to do, to look how you want them to look, and how you can manipulate that information. That's really all we do when we make procedural materials.

I hope this makes sense. It can take a little bit of time to get your head around if it's new to you, so do make sure that you understand what the different outputs of the Texture Coordinate and Geometry nodes do and when you can use them. The key takeaway from this today is that everything we do with procedural textures, generating shapes, generating interesting displacement, all comes from modifying gradients that we take from texture coordinates. As long as you understand that the gradients, the values, are inherent to the color, it becomes much easier to start changing how your gradients are being mapped. Manipulating coordinates is the cornerstone of procedural texturing. Hope this has been useful, hope you've enjoyed it, and I'll catch you next time.
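The closing point, that any pair of gradients can serve as a UV space, can be sketched as follows. These are plain-Python stand-ins for the Combine XYZ workflow; the node names come from the video, but the functions and the rotation example are my own illustration:

```python
import math

def combine_xyz(x, y, z=0.0):
    """Stand-in for Blender's Combine XYZ node: pack gradients into a vector."""
    return (x, y, z)

def rotated_uv(x, y, angle_deg=45.0):
    """Build a custom UV by rotating the two input gradients: the same
    kind of gradient manipulation used to lay an image at an angle,
    then tiled back into the 0..1 UV range with modulo."""
    a = math.radians(angle_deg)
    u = x * math.cos(a) - y * math.sin(a)
    v = x * math.sin(a) + y * math.cos(a)
    return combine_xyz(u % 1.0, v % 1.0)

# Any (x, y) gradient pair becomes a usable UV coordinate.
print(rotated_uv(0.25, 0.75))
```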
Info
Channel: Erindale
Views: 23,746
Keywords: b3d, blender3d, procedural
Id: 8od3pGdiRG8
Length: 14min 28sec (868 seconds)
Published: Wed Jun 17 2020