Shader Basics, Blending & Textures • Shaders for Game Devs [Part 1]

Captions
Hello, hello! Greetings to everyone joining the chat, glad to have you all in here.

Okay. As I mentioned last time, when we did the math course, I'm going to try to make this a very practical and visual approach to getting into shaders. As before, feel free to ask questions at any moment; I'll primarily be reading the Future Games private chat.

One of the questions people ask a lot is "what is a shader?", and some people ask "do I need shaders in my project?" or "should I use shaders?". The easy answer is that you kind of don't have a choice: everything in a game is rendered with shaders. There are some exceptions, but these days pretty much every game you play is rendered with shaders, absolutely everywhere.

So what is a shader? You can think of a shader as code running on your GPU. That's the most generalized definition I can think of, but in this case we're going to focus on shaders that specifically render graphics onto your screen. Quite often you define a bunch of parameters, like maybe the position of an object, and you have data such as a mesh you want to render. Say you want to render a cat in a game: you have a mesh of that cat, and then you have something that defines how that surface should look, and sometimes even how it moves. That code is shader code. Because of the way they work, shaders are absolutely ubiquitous; everywhere in games you're going to find many different types of shaders doing all sorts of different effects. So I was thinking we could start by showing some examples of what that can look like in games, because I feel like it's not very tangible without a visual reference.

"So shaders are more like normal maps rather than diffuse textures?" Shaders are completely unrelated to textures. Textures are input data that shaders can use if you want, but you don't have to. Normal maps and diffuse textures are examples of data we often use in shaders, but shaders are more about taking the information from textures, or sometimes just other parameters or values or colors, running a bunch of math on it, and outputting it to your screen. A diffuse texture, for instance, is usually something that contains the color of the surface. If you want to render an object with different colors across its surface, and you want to define that using a texture, you can, but you don't have to. In fact, most of the work I've done in the past four years has been almost entirely textureless, because I like math, I like doing things procedurally, and most of the games I work on have art styles that don't require many textures.

So, last night I logged on to Final Fantasy and took a bunch of screenshots, because I thought it might make a good case study: let's open an actual game, take a bunch of screenshots, and look at what's happening on the screen and how it relates to shaders. I've been playing Final Fantasy all holiday, so I haven't been doing much else; of course I'm going to continue doing things related to that.

Okay, let's open up some examples. There we go: this is a screenshot from Final Fantasy. It's an MMO, and MMOs usually have a lot of VFX and particle effects.
As you can see in this case, my character is doing an attack, so there's this big ring effect happening. There are impact effects on the dog, there are some wisps up here, and you can see they're arranged in a circular pattern here as well; there's another one here and another one there. Usually these kinds of things are a combination of particle effects and meshes with specialized shaders written for the specific effect you're trying to achieve.

So let's try to break this down and figure out how the whole thing is constructed. This disc here could be a single quad, as in a mesh with two triangles, and on top of that you render some sort of circular, swirly VFX. Then there are some glowing vertical stripes here. This could be a particle effect, or very small individual light strips, or it could be a cylinder mesh that simply has a texture running along the edges, which gives it all of those glowy stripes. Same thing goes for the dog: all the rings here could be a single quad that encapsulates the whole effect, with some sort of texture on it. The flare here is very likely a particle, probably also just a single quad with a flare texture on it, same with this one right here, and so forth. You can try to break VFX down this way and figure out how it's all constructed, and what shaders allow you to do is customize this in almost any way you want. Rather than just having textures, you can animate these, you can blend them with other things, you can tell them to render additively, or they can multiply into the scene. There are many, many different ways of doing this.

Here's another example where the different particles use different blending modes. A blending mode is just a way of saying how something should combine with the background. I can show an example in Photoshop: you can see that this one kind of adds a glow whenever I draw with this blending mode. I can set the blending mode to multiply instead, and if we do that it's going to multiply in, which in this case reduces the color values, so it usually darkens. And then of course you have the regular blending mode, which is usually just alpha blending, or alpha compositing, where you're rendering something on top of something else. So these are three very distinct ways of compositing something over the background.

Let's try to break this one down. We can see there's sort of an orange halo here that seems to darken the background, so we can presume it's a multiplicative effect. Then, looking at some of the other ones: the black ones here might not actually modify the background that much, so they're probably alpha blended, like the blue one up there. The glowy white ones here are almost certainly additive, which means they're going to brighten the background. So those are different blend modes.
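To make that concrete: in Unity's ShaderLab syntax, each pass declares its blend mode with a Blend directive. These are real directives, but which one a given effect in the screenshot uses is only my guess, as described above. A pass picks one; the two factors are multiplied with the shader's output color and the existing background color respectively, and the results are added together.

// inside a Pass, before the shader code:
Blend SrcAlpha OneMinusSrcAlpha // traditional alpha blending / compositing
Blend One One                   // additive: only ever brightens the background
Blend DstColor Zero             // multiplicative: only ever darkens the background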
"When you say particle in this context, you're talking about engine stuff, right? The GPU shader code just sees a quad; it doesn't know what a particle is, it's just a mesh." Pretty much, yeah. If we talk about individual shaders, a shader is generally what deals with each individual component of all of these things. So if we want to count how many shaders we have here, it's possible this only has three. It's hard to tell without actually having the game at hand, but there could be one shader that deals with the multiply blend mode, another that deals with additive, and another that does alpha blending. What can happen then is that you use those three shaders in different contexts and with different inputs. We'll get into the structure of shaders versus materials later; I don't want to go into too much detail here, since I'm supposed to just quickly go through these.

Okay, what else. Here's another interesting example: this one has refraction in it. Refraction is usually when you take the background and distort it. If you want heat waves above a road or something, you can use refraction to distort the image behind them; underwater effects also use a lot of distortion. You can see the character here, and the distorted version there. It's a little hard to tell how it works when it's not in motion, and I didn't have time to record this one in motion.
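As a rough sketch of how an effect like this can be built in Unity's built-in render pipeline: a GrabPass copies what has already been rendered behind the object, and the fragment shader then samples that copy with offset coordinates. The property names here (_DistortionTex, _Strength) are made up for illustration, and this is a generic technique, not necessarily what this game does.

GrabPass { "_BackgroundTexture" } // copies the current frame buffer into a texture

// in the pass that follows:
sampler2D _BackgroundTexture;
sampler2D _DistortionTex; // hypothetical: red/green channels encode an offset direction
float _Strength;          // hypothetical: how strong the distortion is

// in the vertex shader: o.grabPos = ComputeGrabScreenPos(o.vertex);

fixed4 frag (v2f i) : SV_Target {
    float2 offset = (tex2D(_DistortionTex, i.uv).rg * 2 - 1) * _Strength;
    i.grabPos.xy += offset * i.grabPos.w;            // perspective-correct nudge
    return tex2Dproj(_BackgroundTexture, i.grabPos); // sample the distorted background
}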
Here's another example where you mash all of these together. This is more about VFX than shaders, I suppose, but you can do a lot of fancy effects if you combine a lot of things. In this case you can trace out all the different discs you have here: there's one going here and a circular one going there, and you can see that with the combination of all of this you can achieve a lot of effects. If we try to break down the shaders in this one, we're again probably looking at one additive shader for anything that's glowing, and then this looks like an alpha-blended one for the stuff that shouldn't make things brighter; this should just darken things, blending toward the color defined in, probably, the particle effect.

Another example with a lot of different planes. This also goes a bit outside of what you usually do with particle effects. With Unity's particle system, doing something like this is generally pretty hard, because particle effects naturally deal with lots of particles, usually scattered in many different places. But you often have cases like this where you have a very defined structure, where you want to draw these strips on the ground with all of these symbols. Each of those is likely two triangles (I cannot draw a straight line, but yeah, that's probably just a quad), and then you have another mesh, sort of like the edges of a cube, that goes along here; you can follow the shape, I hope. And of course everything is triangles, so the mesh is going to be triangulated like this. So the shader has this mesh, an inverted cube with only the walls, and what you can do in the shader is combine lots of textures in different ways. You can see there are these pink smoke-like effects, so that is very likely a texture mapped along the whole shape. There are some multiply effects here that darken the background; that could be part of another render pass of this cube, I'm not sure. And so forth. Usually this kind of thing goes outside of what you can do with particle effects; this is more "you have a mesh and you want it to look a very, very specific magical way," and that is something you can do with shaders.

Okay, should I go through more examples, or is this boring you at this point? I probably took too many screenshots. Same thing here: a big brown texture, a lot of additive glowy effects, and these little daggers pointing into the ground, which are probably also just simple quads. This became a VFX course now; it's not about shaders anymore, it's about VFX.

Oh, here's a good one: you can even use shaders in UI. This is an example of a progress bar that has this mask on top of it to get all of these symbols, but what's happening underneath the symbols is that there's a progress bar rendering inside of them. You can see it's halfway filled here, going up to this point. This is a very good use case for shaders: you can make a single shader that renders this thing, and then you have input data that says what the fill percentage of the bar is, and using that you can define exactly how you want it to look depending on how much you've stored in the bar. Same thing goes for the markings a lot of MMOs have on their icons: these little dashes around the edge here are animated and move around in a circle. You could do that with a shader, and there are circular progress bars you could also define in a shader. Or you can do it with sprite sheets, which usually makes things easier than writing custom shaders for everything.
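A sketch of that fill-bar idea, assuming the unlit shader structure we build later in this lesson; every name here is invented for illustration. Gameplay code sets _FillAmount on the material, and the fragment shader compares it against the horizontal UV coordinate:

float _FillAmount; // 0 to 1, set from gameplay code
fixed4 _FillColor;
fixed4 _EmptyColor;

fixed4 frag (v2f i) : SV_Target {
    // i.uv.x runs from 0 at the left end of the bar to 1 at the right end
    return i.uv.x < _FillAmount ? _FillColor : _EmptyColor;
}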
Here's my render-pipeline-related one: shadows. If you want shadows in a game, that also requires a lot of processing on the GPU, because quite often your GPU needs information about everything in the world in order to cast shadows like this. If we look at the scene, most of the background is pretty well lit. There are some shadows cast here in the background, but there's a building here casting this pretty complex shadow, because it follows the shape of the edge of the building's roof, and it also falls realistically across all of these objects, so it continues there. And then there are dynamic shadows for the character: shadows going underneath there, across here, all the way down there. This is another process you tend to do on the GPU, and there are many different ways of doing shadows. It's kind of interesting: one of the techniques for casting shadows is to render the scene from the point of view of the light source, recording only how far away the geometry is, and then you can use that information in the usual frame you're rendering in the game to know whether light or shadow is going to hit a certain point.

Here's another example: skin is a pretty difficult thing to render, it turns out. Skin has a lot of properties that most solid objects in the world generally don't have. One thing you might notice when skin is rendered in a game is that there's usually a highlight where the light source is hitting; this is the highlight on the cheek, and you can see it fades to shadow in this direction, and then you have a shadow line here. One thing skin has, which is a very specialized thing, is what's usually called subsurface scattering. Subsurface scattering means that light enters the surface: you have light rays coming in, and while a perfect mirror would just bounce them off, in some materials light actually enters, scatters around inside the surface, and hops out somewhere else. What happens is that some wavelengths are usually absorbed along the way. So you might have white light coming in, but once it starts scattering inside the skin, some wavelengths are absorbed, and you get a more red-tinted light coming out, because the greens and the blues have been absorbed and mostly red remains. That's why with skin you often get this red-tinted shade in the transition zone where there's less light, while the scattered light keeps spreading out of it. You don't need to know how to do all of these things; I'm just going through what you can do with shaders, with some examples from, in this case, Final Fantasy.
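Proper subsurface scattering is expensive, but a classic cheap approximation of that red transition zone is wrap lighting: let the diffuse term wrap a bit past the shadow line and tint the transition. This is a well-known trick, not necessarily what this game does, and _Wrap and _ScatterColor are made-up parameters:

float _Wrap;          // 0 = standard diffuse falloff, higher = light wraps further around
fixed4 _ScatterColor; // reddish tint for the shadow transition

// in the fragment shader, given world space normal and light direction:
float ndl = dot(normalWS, lightDirWS);                           // standard lambertian term
float wrapped = saturate((ndl + _Wrap) / (1 + _Wrap));           // shifted so it reaches past 90 degrees
float3 tint = lerp(_ScatterColor.rgb, float3(1, 1, 1), wrapped); // only tint the half-lit zone
float3 diffuse = albedo * _LightColor0.rgb * wrapped * tint;     // _LightColor0 comes from Unity's lighting includes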
Then there are all sorts of other surface properties. For instance, the lower lip has a very glossy highlight here, but the cheeks don't; there's no glossy highlight here, or on the nose, nothing like that. They're pretty matte. All of these things are things you define in the shader: where do we want things to be glossy, where do we want things to be matte, where do we want subsurface scattering, and so forth. Let's see, what else. Oh, hair rendering is surprisingly difficult, but I don't know if we should get into all the details of that.

Here's another good thing to talk about. Someone mentioned normal maps before: normal maps are a way to add what looks like geometry detail without actually adding geometry. Quite often, you know, you're making meshes out of triangles, so we can try to work out what the mesh looks like here. This is a pretty low-poly mesh; you can see it's very jagged. But the interior looks pretty detailed, like a pretty smooth curved surface, and all of these intricate details look really nice. What the geometry likely looks like, if I were to guess, is actually a very simple structure, probably something like this: just a very simple low-poly cone. But with certain textures you can make a normal map, which encodes data about the direction the surface is pointing, and with that data we can shade it as if the geometry were there. Same thing here: this detail doesn't actually exist in the triangle data, but if we have it encoded into a texture, we can use that texture to get more detail out of the mesh instead of just adding geometry all the time. Especially in an MMO, where you're going to have a lot of characters in one place, adding super-high-poly geometry detail is not going to work out well, so you need shortcuts to add detail where there is none. We're going to do normal maps probably on Thursday, I think, depending on how far we get.

All right, let's switch game. Overwatch, what a game. We can see the same concepts here: regardless of what game we're looking at, we can almost always take a screenshot and break down what's happening in the frame. Overwatch is interesting in many ways. One thing Blizzard tends to do in their games is have a lot of specialized shaders in the UI. This glow, for instance, is sort of like an additive particle effect, but it's in your UI instead of in the world. And then obviously in the world there are tons of different particle effects: you have Roadhog's gun, you've got an additive thing here (and I can't draw for some reason; oh, I'm drawing underneath, great), and then you have Reinhardt's barrier. This one looks very glowy, so it could be additively rendered, and there's some refraction here again; there's a distortion bubble (it might be hard to tell on stream), but you can see it distorts this path here, and there's a smoke trail coming through that also gets a little distorted.

What else do we have? Tracer just did a blink, or maybe Recall, I don't know which effect this is; it's probably the teleportation thing. You can see that Tracer is rendered completely differently here. Very likely Tracer's shader has been completely swapped to some other shader: everybody else is rendered with their normal textures and whatnot, but Tracer in this case has an entirely different one. This particular type of shader is usually called a fresnel shader. Fresnel (f-r-e-s-n-e-l, yay French names) is an absolutely ubiquitous term that you're going to hear about a lot. What fresnel basically means is that as a surface starts to face away from you, it gets a stronger light. You can see the edges are lit up here, but the middle is not, so it's almost an outline effect, but it's not actually an outline effect, and it's important to know the difference between the two.

Here's another example: Roadhog, in this case, is being frozen, and it's probably a bit more visible here how it works. If you look at his hand, for instance, the parts that directly face the camera, like this part of the thumb, don't have the glowy effect. It only starts happening on surfaces that are facing away from you. Roadhog's hook here, for instance, faces away from you more than this part of the thumb does, and it's therefore brighter, and this part here is at a very steep angle, so this whole thing is very bright. So this is a very useful, simple way of getting a sort of outline effect for highlighting objects, and so forth.
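The fresnel term itself is only a few lines of fragment shader code. A minimal sketch, assuming the vertex shader passes along a world space normal and position; _FresnelColor and _FresnelPower are made-up property names:

float4 _FresnelColor;
float _FresnelPower; // higher = the glow hugs the silhouette more tightly

fixed4 frag (v2f i) : SV_Target {
    float3 viewDir = normalize(_WorldSpaceCameraPos - i.worldPos);
    // 0 where the surface faces the camera, approaching 1 at grazing angles
    float fresnel = pow(1.0 - saturate(dot(normalize(i.normal), viewDir)), _FresnelPower);
    return _FresnelColor * fresnel; // rendered additively, this reads as a glowing rim
}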
Another one: Winston's shield. Similar thing, but this shader also has some extra features, because this is a spherical bubble that can intersect the world in different places. It intersects the ground here, and there's a bit of a light going around that edge, following the whole bubble around. It also intersects the world up here, so even though it's intersecting some pretty complicated geometry, it still manages to do this whole outline thing. Usually that's done with some depth hackery and whatnot, but that's a different topic.

"Is fresnel related to rim light, or am I confusing them with each other?" They're usually related, but they're not the same thing. There are all sorts of things called fresnel: if you start reading up on physically based shading, you're going to find the word fresnel in terms of physical light propagation, which is slightly different from the artistic usage of just "a fresnel shader." But the idea is that when the angle between some surface and the camera is very shallow, something happens. We're going to make a fresnel shader later, so we'll see it in action.

Here's a screenshot from Demon's Souls, and you can probably tell there's a bit of a highlight effect on the character. This, too, is fresnel. Everything is fresnel; no matter what game you're looking at, there's going to be fresnel somewhere. It's exactly the same effect: you get a glow around the edges, and as the surfaces point toward you, that glow disappears and fades to darkness. Exact same principle. It is important to note, though, that fresnel works best on relatively smooth surfaces. As soon as things get sharp, you don't really get the same outline effect: you can see the shield here doesn't really have that type of highlight, not even close compared to around the knees here. So they're a bit separate, but it's a very easy hack to add to your stuff, and it also looks a little different from an actual outline effect.

This is the game Return of the Obra Dinn: a very, very different usage of shaders. We've been looking at VFX and more realistic stuff, and this is another game that heavily uses shaders, but for an entirely different artistic purpose. Return of the Obra Dinn only uses two colors (well, it's not exactly black, but it's pretty much just black and white), and in order to express shading, they use dithering. So this is something you can do with shaders too, and it's perfect for shaders, because shaders deal with exactly these kinds of graphical things, and you can do all sorts of artistic effects like this.
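The simplest textbook version of that idea is ordered dithering: compare each fragment's brightness against a repeating 4x4 Bayer threshold pattern and output pure black or white. This is a generic sketch, not the game's actual (far more elaborate) shader, and shadeScene() stands in for however you compute the underlying lighting:

static const float BAYER_4x4[16] = {
     0,  8,  2, 10,
    12,  4, 14,  6,
     3, 11,  1,  9,
    15,  7, 13,  5
};

fixed4 frag (v2f i) : SV_Target {
    float lum = dot(shadeScene(i).rgb, float3(0.299, 0.587, 0.114)); // brightness of the shaded result
    uint2 px = uint2(i.vertex.xy) % 4;  // i.vertex is the SV_POSITION pixel coordinate here
    float threshold = (BAYER_4x4[px.y * 4 + px.x] + 0.5) / 16.0;
    return lum > threshold ? 1 : 0;     // pure white or pure black, dithered
}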
Another less realistic example is Kentucky Route Zero. This is one of my favorite point-and-click games; it's super nice and atmospheric. Kentucky Route Zero has a very minimalistic art style, but even so, there are lots of places where they use shaders for artistic effect. In this case you can see they're playing a lot with depth: if you look at the trees here, the bird is in front of these trees, but the fog implies that these parts are in the background. It's a trippy effect (it's a trippy game) that employs all sorts of graphical tricks to achieve its artistic vision, and you wouldn't be able to do this if you were just using Unity's built-in shaders. Really, what shaders are about is letting you expand your visual horizons.

"I thought that was sorting of the sprites." All of this is dynamic, so if you move the camera and the geometry, it's difficult to do this without coding it into the shaders. Maybe you can do it with just sorting as well, but regardless, setting the sorting on something is dealing with the graphical aspects of this, which means dealing with shaders.

That was a very long walkthrough of random games. Oh, I actually had a few separate examples. Here's one from a game I worked on, made at my studio, Neat Corp: this is the portal locomotion system in Budget Cuts. This one has a very specialized shader for dealing with opening up this portal. You use the portal to move between locations and rooms; that's pretty much what it does. You can see that you open up a portal into a different room, that room renders into the portal sitting in your hand, and you can still see everything outside of it (and yes, this is a VR game). When you teleport, this bubble expands and encapsulates your entire field of view, and once it has, you're standing at the new location. We wanted this to be a completely seamless experience: we didn't want any jarring flashes, and we didn't want to approximate it either. We wanted it to look exactly like you're smoothly moving to the new location.

"Is it a second camera with a render texture?" It used to be, but not anymore. It turns out that when rendering things in VR, you generally have to render at very high resolution, because you have a very wide field of view when wearing a VR headset and you need a lot of pixels for it to look good, usually more than 1080p or whatever. And if you want another render texture for the portal, that usually means another full-screen render texture, which is a lot of memory, especially if you want to target lower-end platforms.

Here's a very simple example from another game I'm working on, called Flowstorm, with this little rocket. This is another VFX shader: the rocket flame is a single quad, two triangles, and everything else is done in the shader. You can see it bends when the rocket turns, and it changes color when you stop or start accelerating. The little flakes are a particle effect, but everything else is done in the shader itself.

And here's a water shader I made for another game at Neat Corp. That game is very stylized, so I wasn't aiming for realistic water whatsoever; I wanted it to look really colorful, very lush and pleasant to look at. For that you tend to want a little foam ridge along the edges, some cute specular highlights, and a gradient that goes from a tropical cyan down to the deeper blues.
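A sketch of how that gradient and foam ridge can be driven by the camera's depth texture in the built-in pipeline: compare the scene depth behind the water with the depth of the water surface itself. The colors and falloff parameters are invented; this is the general technique rather than that exact shader.

sampler2D _CameraDepthTexture; // supplied by Unity when the camera renders a depth texture
fixed4 _ShallowColor, _DeepColor;
float _DepthFalloff, _FoamWidth;

// in the vertex shader: o.screenPos = ComputeScreenPos(o.vertex);

fixed4 frag (v2f i) : SV_Target {
    float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
    float waterZ = i.screenPos.w;       // eye depth of the water surface itself
    float below = sceneZ - waterZ;      // how much water sits between the surface and the ground
    fixed4 col = lerp(_ShallowColor, _DeepColor, saturate(below / _DepthFalloff));
    float foam = 1 - saturate(below / _FoamWidth); // bright ridge where water meets geometry
    return col + foam;
}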
This one is not my shader; this is from Uncharted. The team working on Uncharted has some of the best technical artists out there, so if you want to look at really high-end AAA shaders, they do really good work. A lot of things are happening here, of course, but the surface of the water is going to be one shader, you have particle effects for all the smoke, there's even a rainbow; there are lots of things happening, but again, this is stuff you can do with shaders. Oh, and the rocks as well: we haven't looked at boring objects, but all the boring objects have shaders too. I'm mostly talking about the more exceptional cases, the things you can't really do using only the built-in shaders.

Okay, any questions so far? I guess there's nothing to ask; I've mostly just shown that shaders exist in games. "Does the portal shader have a field for the position of the current portal, and does it just render stuff from that position in a circular viewport?" The way it works right now is a little complicated to explain, but the easiest way to put it: you have a player position, and you have the cameras rendering where the head of the player is, because it's VR. So you have the camera for your head, and once you place the teleportation beacon somewhere, we create another set of cameras that mirror where your head would be if you were to teleport. So then we have two sets of cameras, and what we do is render the first set, then do some depth-buffer hackery, and then render the second set on top of that. There's no render texture; it's all just the frame buffer, but it ends up working out. It has a lot of weird side effects, though, because depth-buffer hackery breaks a lot of things, but it worked out.

"How often does someone need to code a shader, compared to an artist making a shader using nodes?" That's a good question. It's hard to distinguish between the two, because quite often you have coders making things with nodes as well. Right now, whether you make things with a node-based editor or write them by hand, in Unity at least, is determined more by which render pipeline you're using. If you use the built-in render pipeline, you're most likely going to write shaders by hand, or use a third-party shader editor like my old Shader Forge or Amplify Shader Editor. If you're using Unity's newer URP or HDRP render pipelines, you're probably going to use Shader Graph. But it depends. If you have a team with a lot of artists who can make shaders in node-based editors, that might be a really good advantage, because shaders live in this weird territory where it's very technical and yet also very artistic and visual. It feels like the most low-level front-end code you'll ever write: you're doing extremely low-level operations, and yet the result is directly visible to players. Usually, not always, but usually.

"Are there tools for making maps like normal or displacement maps, or do you draw them by hand?" There are many different ways to make them. You can generate them, or you can sculpt them: if you're using something like ZBrush, you usually sculpt a high-poly model, export a normal map from it, and then have a low-poly version of the mesh that uses the high-poly model's normal map. That's a very common workflow for AAA-style realistic graphics. You can also do 3D scanning: you can literally do photogrammetry scans of different surfaces, or even entire models, and export texture maps from that.
"What can you use to generate them?" You know, I haven't generated them in a very long time, so I don't know what's current. Generally, if you take a color texture and generate a normal map from it, you usually get very bad normal maps. I'm sure you can find tools online for this; there used to be one a really long time ago called CrazyBump. I think xNormal is something artists use a lot, even though xNormal looks like it's from... I don't even know how many decades ago; it has a very Winamp aesthetic. Anyway, xNormal is very commonly used for baking normals: taking a high-poly model, generating a normal map, and using it on a low-poly model. Substance Painter is a pretty good program for stuff like that too. And Substance Designer is a really good tool for generating textures: you can do all sorts of procedurally generated stuff and then export all the maps you want from it, like height maps, normal maps, specular maps, gloss maps, metalness maps, albedo maps, whatever. There are all sorts of procedural tools like that. As for Substance Painter, which you mentioned: I haven't used it that much, but I'm pretty sure it's a tool where you can draw directly into the texture on the mesh itself rather than painting a flat texture, and it has all sorts of procedural things. If you want to add weathering or rust to some object, it's very easy to do with the tools Substance has.

"Artists often can't or don't make shaders unless they're technical artists, right?" It's a matter of definition, I suppose, but usually the capital-A artists who just do 3D modeling or 2D art generally don't make shaders. Quite often there's a specific role in game development, the technical artist, who is responsible for creating those shaders. And then you need to make sure communication is good, because not all technical artists are good at art; sometimes you need an art director or other artists to guide them toward exactly what they want to achieve.

"How do you learn about the various types of shaders? Where does one even encounter the fresnel thing in the first place?" Good question. I guess a lot of it is looking at existing games and googling how to achieve certain effects. I don't really know how I learned about it the first time; probably from my teacher a really long time ago. Or: listen to me! There you go, that's how you encounter these effects. You're in it right now; this is the way. But you can probably find some website with a repository of different shader effects and whatnot.

"Fresnel happens all around you IRL, constantly." Yes, although it depends on what type of fresnel you're talking about; but the light effect of the fresnel effect, yes, that happens a lot. I don't know if it's possible to show it on stream... okay, maybe this thing, maybe. You might be able to see that the top of the outer edge of this thing, right there, has a bit of a glow to it. The idea is basically that almost any surface, when you get to a low enough angle... actually, you can't see my fingers reflected on this object right now.
But if you go really low, you can see some reflections off of it. The idea is that if you look at a surface from a very, very steep grazing angle, it becomes more reflective. That's the more scientific, physical-light version of the fresnel effect, which you might want to separate from the more unrealistic glowy fresnel effect that people usually add to highlight objects.

"Reflections on a water surface might be a good example of fresnel." Yeah, water is a pretty good example. It's usually pretty hard to find good pictures that show you both the transparent part and the reflective part; I had one in a presentation at one point, but you mostly find fake ones and renders that aren't super good. Here's a pretty good example. It's probably difficult to see on stream, but the water is much more transparent here, where you can sort of see the stuff underwater, and then as you go to a more grazing angle, a very shallow angle, you get very mirror-like reflections. That's the fresnel effect in action. In this case there's a trade-off between how much of the light you see is reflected off the surface versus how much is coming up from underneath it.

So far we've mostly talked about what you can do with shaders: some vague effects, what you can do in different types of games, and what some effects are called. But we haven't actually gone into how to make shaders; this has all been very theoretical. So let's get closer to the practical reality of making shaders in Unity.

There's a bit of a terminology mess when it comes to shaders, because when someone says "a shader," that can unfortunately mean many different things, so I'm going to try to break down how the structure of a shader works. We can start with the .shader file. So we have a shader file; what does it contain? A shader has properties. Properties are basically a set of input data: these could be colors, or plain values (maybe a number for the amount of health you have, so you can render a health bar with it), or textures. And then, indirectly (you don't pass this as a property), the mesh you're going to render is also passed in, along with the matrix for that mesh. Not always, but usually. The matrix contains the transform data: where the object is, how it's rotated, how it's scaled, all that stuff. So these are the actual properties you pass in, plus data that's implicitly passed in from whatever object you have in the game.

Right, so we have a shader file, and the shader file contains properties. What else? Properties are just the input data; the shader itself isn't here yet. The next level is a sub shader. You could just call a sub shader a shader (it's kind of confusing), but the way things are set up in Unity, you can have multiple sub shaders in a single shader file. One use case is having a sub shader that's a bit more optimized, to run on low-end platforms; if you have multiple sub shaders, Unity can pick whichever one is appropriate for your current situation.
But for our purposes we're not going to make multiple sub shaders, so you might as well just consider the sub shader to be the shader. The sub shader itself contains something called a pass. This is a render pass, or a draw; I don't really know what you want to call it, but we can call it a pass. Sub shaders can contain multiple passes, so you can have several in here. Lots of shaders don't use multiple passes; you usually only do that for certain types of lighting or very specialized VFX. Usually you just have one pass. So it's a relatively straightforward path: shader, sub shader, pass, and then you can start writing your shader code.

Inside the pass is where the actual shader code happens. You can break a pass down into many different types of shaders, but usually what you're going to use is a vertex shader and a fragment shader. The fragment shader is sometimes called a pixel shader, but that's a bit of a misnomer; "fragment shader" is the technically correct term, though if you search around you'll find people calling it the pixel shader.

So this is the basic structure of a shader file in Unity. If you're wondering where the shader code is in all of this: the vertex shader and the fragment shader are the parts written in the shading language HLSL (I think it stands for High-Level Shading Language, or something like that). Everything outside of that, as soon as you get into the passes, the sub shader, the properties, is Unity's own syntax for defining things like where this should happen in the render pipeline, and what properties and input data it should have. All of that is called ShaderLab. So sometimes it's useful to distinguish between ShaderLab and HLSL. Sorry about my bad handwriting, but all of this, the part in white, is pretty much ShaderLab. The actual shader code you're going to write is a very specific part of the shader: the red part here is the fun part, and the white part is the boring part, because the boring part is mostly about figuring out how the heck shaders work in Unity, how to access this and that, and there's usually a lot of boilerplate involved. But this is pretty much the basic structure of a shader.
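Written out in actual ShaderLab, the nesting looks roughly like this. It mirrors Unity's default unlit template, with the HLSL part stripped to the bare minimum:

Shader "Unlit/Example" {
    Properties {                       // input data, configurable per material
        _Color ("Color", Color) = (1, 1, 1, 1)
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        Pass {                         // one render pass
            CGPROGRAM                  // from here to ENDCG is shader code (HLSL)
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 _Color;

            struct appdata { float4 vertex : POSITION; };    // per-vertex mesh data
            struct v2f     { float4 vertex : SV_POSITION; }; // data passed to the fragment shader

            v2f vert (appdata v) {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex); // local space to clip space
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                return _Color; // every fragment gets the same flat color
            }
            ENDCG
        }
    }
}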
Okay, so what do all of these things do? What's the difference between the vertex shader and the fragment shader, and what are they used for? Let's see if we can break it down. We have all of these properties: the colors, the values, the textures, the mesh, and the positioning data. All of this is input to the shader code that's about to run.

Let's start with the vertex shader. The vertex shader basically takes all the vertices of your mesh. When you're rendering something in a game, you usually do it with a mesh, and meshes are built out of vertices and triangles. So you have all of this data coming into the shader: some mesh, maybe a cube, maybe an entire character, it could be many different things, but the point is that whatever you render is going to be composed of lots of vertices. What the vertex shader does is essentially run a foreach loop over all the vertices you have, and you can access data from each of those vertices. So you can think of the vertex shader as "for each vertex," and all of your vertex shader code lives inside that, defining operations that run on every vertex.

In most cases, what you want to do in the vertex shader is put those vertices at some place in the world. Say you have an object, I don't know, a cube (this is not a very good cube), and you want to put it somewhere in the world. One problem is that the vertex shader doesn't actually want you to put things in world space. Not in local space either, and not even in view space. The vertex shader wants you to say where the vertices are going to be in what's called clip space, and clip space is kind of like a normalized space from negative one to one inside your current view or render target. It's kind of a weird transformation where you jump between three different matrices, but in practice there's a very simple way to do it: you take the local-space coordinate of each vertex and transform it using what's called an MVP (model-view-projection) matrix, which converts it to clip space, and then you're pretty much done. You can also modify the vertices in here if you want to; the usual case is that you don't, but you can.

That's the vertex shader. "What was tangent space again?" We're going to talk more about that later; I could go into it now if you want, but I first want to cover the general structure of the shader.

Someone is mentioning that the vertex shader is often used for things like animating water, or leaves and grass that sway in the wind. Right: what you usually do there is define some math in the vertex shader that makes the vertices move in a certain way, because you can set the position to anything; it doesn't even have to be where the object is. Or the acid shader, yes: if you want to do the acid Minecraft one, you put the vertices in some wild locations that make everything look trippy.

"Should you manipulate vertex UV coordinates in the vertex shader?" Generally, yes. You can do it in both the vertex and the fragment shader, but if you can do it in the vertex shader, then almost always do it there; only if you can't should you do it in the fragment shader.
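For example, the swaying-foliage idea boils down to offsetting positions before the clip space transform. A minimal sketch; the wave constants are arbitrary, and _Time.y (seconds since startup) is supplied by Unity:

v2f vert (appdata v) {
    v2f o;
    float3 pos = v.vertex.xyz;
    // sway sideways over time, scaled by height so the base stays planted
    pos.x += sin(_Time.y * 2.0 + pos.y * 3.0) * 0.1 * pos.y;
    o.vertex = UnityObjectToClipPos(float4(pos, 1.0));
    return o;
}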
So we've got the vertex shader, where you iterate over all of the vertices and can do whatever you want, as long as it's a per-vertex thing, and now we move on to the next stage. There's some internal magic that happens here on the GPU: a lot of stuff related to the depth buffer, and a rasterization step, where the GPU knows the positions of all the vertices and translates that into which pixels go where, according to where you placed them. So there's a lot of internal work happening between these stages, but after that's done, we're back in code we define ourselves, and we get to the fragment shader.

Just like the vertex shader was a foreach loop over every vertex, the fragment shader is a foreach loop over every fragment. The reason it's called a fragment and not a pixel is that "pixel" usually implies a one-to-one correspondence with pixels on your screen, and in shaders that's not always the case, so we usually use the word fragment. But in most cases it's effectively the same thing as a pixel. So it's like: for each pixel inside all the geometry we're rendering, run some code.

In the fragment shader, basically all you do is set the color of every fragment. What color is it going to be? That's up to the fragment shader: maybe red, maybe green, maybe something else entirely. The fragment shader iterates through all of the fragments (or pixels) that are going to be rendered, and you write code to define exactly how they should look. This is where everything really happens, where you can make things look exactly the way you want.

This structure is also the set of limitations you're working under. It's a relatively rigid pipeline where things happen in order: the vertex shader always runs before the fragment shader, and there's no two-way communication between them. It's one-way only: you can pass information from the vertex shader to the fragment shader, but not vice versa.

"So in the fragment shader you specify color, gloss, specular, fresnel, etc., and the vertex shader is only the shape of the object?" Yeah. In the vertex shader, all you do is either set the positions of vertices or pass data along to the fragment shader; that's pretty much it. Sometimes you want very specific data that you've pre-processed in some way before passing it to the fragment shader (a good example is manipulating UV coordinates, but we'll get into that later). And the fragment shader is basically setting the pixel color, and that's it; it literally returns a color.

Oh, I didn't distinguish between shaders and materials. That's also important, so let's do that. Everything we've been talking about, the code you define in the vertex shader and the fragment shader, lives in a pass, which lives in a sub shader, which lives in a shader file; there are a lot of nested things in a shader, so it's good to keep track of all of it. You can think of all of these things as input data for rendering your object. The green ones up here are usually called properties: things you can configure before rendering your object. Then there's some extra data for the mesh you're going to render and the transform of that mesh, which are usually supplied automatically by your Mesh Renderer component. But the properties you have to define yourself, and usually the way you define them is with a material.
The material contains explicit values for all of the different parameters we can input to our shader, and the material also has a reference to the shader itself. So you can never, quote-unquote, "add a shader" to an object in Unity: you apply a material to an object, and that material has a reference to the shader. You can think of materials as pre-configured parameter sets to use when rendering something with a given shader. To clarify, you can have multiple materials that all use the same shader but have different input data, which is very common. Say you have a shader with a single color property: you might have two materials that both use that shader, but one has the color red and the other has the color blue.

So, let's look at shader code; let's get a little more practical. There are many different types of shaders you can create in Unity, and since this is a course for coders, I want to start with the simplest, most low-level thing, because I think it's good to get a grasp of how shaders are structured, and the best way to do that is Unity's unlit shader. There are many other types, especially if you go into the different render pipelines and Shader Graph and so forth, but for now we're just going to make the simplest unlit shader. I don't know what to call it... "Shader1", there we go, a perfect name for a shader file.

Okay, let's see. Here's the default shader, and I'm going to fix all of these brackets because they annoy me. You can see we've got the shader itself, and then we have the properties. Again, the properties are your own defined input data, excluding the mesh and the lighting information and everything else Unity supplies automatically. Then you've got the sub shader, the sub shader contains the pass, and the pass contains your actual shader code: anything between CGPROGRAM and ENDCG is shader code. It says CG, but what Unity uses is practically HLSL, though you can call it Cg as well if you want.

Let's go through this line by line. Both the sub shader and the pass have a bunch of tags and settings you can specify to define how this object should render. On the sub shader you set things related to sorting: is this object opaque, is it transparent, do I want to change its queue so that it renders before or after some other shader. In the pass you set the more explicit rendering state for this specific render pass, such as the blending mode, the stencil properties, and so forth. So the sub shader is usually more render-pipeline-related, and the pass is more about the graphics state of that specific draw. "LOD 100" is a specific thing where, I think, you can set a LOD level on an object and have it pick different sub shaders based on that; I have personally never used it, so we're going to delete it. No more LOD.

CGPROGRAM: this is where the shader code starts. Then we have "#pragma vertex vert" and "#pragma fragment frag". This is just a way of telling the compiler which function is the vertex shader and which function is the fragment shader.
Basically, we're saying that we want our vertex shader to be the function called "vert", which is this one down here, and the function called "frag" to be our fragment shader. It's just pointing at the names of the functions we're going to use for those two shader stages. We're going to ignore fog, so I'm deleting everything related to fog. Gone.

Then: #include "UnityCG.cginc". This literally takes the code from a different file and pastes it into your shader. UnityCG.cginc is a file containing a lot of Unity-specific things you might want to use, with lots of useful built-in functions, so it's a very useful one to include; you pretty much always have it there. You can also include your own files, if you have a math library for shaders or something. I don't have a math library for shaders to pitch, goddammit.

Okay, now we're on to the more specific shader code. First, "appdata". I hate this name; I think appdata is a terrible name for this structure, so we're going to rename it "MeshData", with a capital M. (You don't have to follow along, by the way; you don't have to write the same things I'm writing, just a heads up, so I might go a little quickly.) This is a structure we're going to come back to later; the default file has all of this in a bit of a weird order, so I might move things around a little.

Then you can define variables for your properties; I think they're sometimes called uniforms. If you have some property, in this case a texture called _MainTex, you also need to define a variable to go along with it. It's a bit boilerplate-y and annoying. Textures are a little more complicated, so we're actually going to delete that; we're not going to use textures yet. Instead, let's say we want a single float value. Let's see, I always forget the syntax... So we give it a name for the inspector, usually something readable and user-friendly, an internal variable name, and we say it's a Float with some default value. So now we have defined a single float property, and if we make materials, we can specify a different value for it on each material.
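So the pairing looks like this: a property in the ShaderLab Properties block, and a matching variable inside the HLSL, linked by name:

Properties {
    _Value ("Value", Float) = 1.0 // internal name, inspector name, type, default
}
SubShader {
    Pass {
        CGPROGRAM
        // ...
        float _Value; // must match the property name to receive the material's value
        // ...
        ENDCG
    }
}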
If you're doing a lot of procedural generation, quite often you just have float4s and write float4 data into them — but we're getting ahead of ourselves; we're not there yet. Okay, I'm removing the texture stuff because we're not getting into textures yet. So: we defined a property, and in order to use that property in our shader we also need a variable to go along with it. I'm just going to do float _Value, and that's automatically going to get its value from the matching property. And yes, autocomplete and syntax highlighting for shaders is absolutely garbage — it's only very recently starting to get better, which is why I recommend using Rider for this, because Rider does have good syntax highlighting. Okay, now for the MeshData structure. The data you're getting here is always per vertex — this is the per-vertex mesh data. This is the vertex position, if I can spell, and in this case these are the UV coordinates. UV coordinates are incredibly general — you can use them for almost anything — it just so happens that quite often they're used for mapping textures onto objects. If you have a cube, for instance, and you open a UV editor — let's see if I can remember where to open it, there we go — you usually get a 2D coordinate system where you define how a 2D texture maps onto the 3D object. Which is kind of complicated, because you need to unfold the whole thing, define where all the pieces go, and usually pack them in smart ways. Doing UV mapping well is a lot of work. So what's happening here: we have float4 vertex — these are just variable names, so we can call them whatever we want. The colon here is called a semantic, and that's the thing that tells the compiler we want the position data passed into this field. TEXCOORD0 refers to UV channel 0, usually the first one, and you can keep going if you want — TEXCOORD1 would be UV1 instead of UV0, and so forth. There's only a limited set of things you can get out of this; quite often you'll use UV coordinates, the position always has to be there, and then usually normals. If we create a plane — this one has a bunch of vertices — and, it's probably up here, Display > Polygons > Normals, vertex normals, there we go. If we take a vertex and move it away a little, you can see these green lines: that's the normal direction of the surface. If we change the direction of the surface, the normals change along with it — it's the direction the vertex is pointing, which is usually used for shading, to make things look smooth or not smooth and so forth. You can see that if we make a little bump like this, the shading gives this soft hump, because it's interpolating the normals across the faces — but the normals themselves are defined on a per-vertex basis. So when we refer to a normal, that's the normal direction of a vertex. And there are all sorts of other things you can get here.
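Before moving on to the rest of the fields — as a sketch, the struct so far might look like this; the field names are our choice, the semantics are what bind the data:

    struct MeshData {
        float4 vertex : POSITION;  // object-space vertex position
        float3 normal : NORMAL;    // per-vertex normal direction
        float2 uv0 : TEXCOORD0;    // UV channel 0
        // float2 uv1 : TEXCOORD1; // UV channel 1, and so on
    };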
For instance, you can get the vertex color, which I believe is just a color — I could be wrong, I haven't used vertex colors in a long time. And there's another one for the tangent direction — tangents are, crucially, float4s, and the fourth component actually contains important information. All of these are useful whenever you need data from the mesh. But if you just want to do a simple apply-texture-to-mesh, pretty much all you need is your vertex position and the coordinates you want to apply the texture with. You don't need UV coordinates in order to use textures, though — UV coordinates are just one way of getting a coordinate system for your textures. How do we know whether we need a float2, float3 or float4? That depends on the data. A normal direction is a three-dimensional vector, so it's naturally a float3. For a tangent — in Unity specifically, though this might be more universal — the first three components are the direction of the tangent, but the fourth component contains sign information, as in whether or not it's flipped, whether or not your UVs are mirrored. It's kind of a hack where the w component just carries the sign. The vertex color is always a float4 — colors are always RGBA. What's the difference between UV channels? Nothing — the difference is what you define them to be. A common thing in games is that UV0 might be the coordinates for your diffuse and normal map textures — all the textures you apply to the surface — while UV1 might be the lightmap coordinates. Sometimes you have different coordinates for different types of textures, and quite often you want a different layout for color textures versus baked data like lightmaps. For instance, when you UV map something, you sometimes have a lot of overlapping shells because you want to reuse the same part of the texture in different places on the mesh — but if you want to encode lightmap data, you can't have overlapping UVs. Can meshes include multiple UV channels? Yes, you can have many UV channels, and if you want you can make them float4s — so if you're doing procedural generation you can just shove data in there. Quite often the UV channels are just data, and how you use that data is up to you; all it really is is a float4 associated with that vertex. So it's like having a UV map placement for each map you're using? Yeah, you can use it that way if you want. So that's the mesh data — and we haven't even gotten to the vertex shader; I feel like I'm doing this way too slowly. Okay, v2f: this is Unity's default name for the data that gets passed from the vertex shader to the fragment shader. I usually like naming it something else — something like 'fragment input', or we could name it Interpolators, which would be pretty accurate. Let's go for Interpolators. Fog — we're ignoring fog. This looks very similar to MeshData, but it's pretty different in terms of what it's used for: Interpolators is the structure we use to pass data from the vertex shader to the fragment shader.
Everything we pass from the vertex shader to the fragment shader has to exist inside this struct — you can see the vertex shader returns Interpolators. In this case we're passing UV coordinates and the position. Again there are semantics here: SV_POSITION is the clip-space position of this vertex — it's a little weird to say 'of this vertex', for reasons I'll get into later, but that's the clip-space position of each vertex. And then the UV channel — this could be any data you want. Confusingly, TEXCOORD here does not actually refer to UV channels: in this struct, if you want to pass data, you can just jam a bunch of TEXCOORDs in, and what you write to those channels is entirely up to you. They don't have to be UVs — they could be absolutely anything, as long as they're floats; whatever data type you have, the maximum is a float4 per interpolator. So it's important to note that the semantics here don't refer to specific UV channels; the semantic is just a way of saying this is a different slot. It's kind of boilerplate — I kind of wish you didn't have to define all these TEXCOORDs, because it feels unnecessary, but anyway. For a very basic shader we might just pass a single set of UV coordinates, and we might as well name this variable uv.
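A quick sketch of the renamed struct — remember, TEXCOORD0 here is just a data slot, not a mesh UV channel:

    struct Interpolators {
        float4 vertex : SV_POSITION; // clip-space position, always required
        float2 uv : TEXCOORD0;       // any per-vertex data we want interpolated
    };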
So why do we call these interpolators? Let's say you have a mesh, viewed side-on, so each triangle is just a line here, and we have some data on the vertices. That data could be a normal direction — the direction each vertex is facing — and the normal, like I mentioned before, is part of the vertex: there's no normal data between vertices. This vertex has a normal, the next one has a normal, and so on down the line. Let's make this a little clearer — this one points in the same direction; it's a little confusing, but that's okay. Now, what happens is that the vertex shader — again, effectively a for-each loop over every vertex — passes data to the fragment shader, which renders for each pixel. And you might ask yourself: the pixels aren't just on the vertices — you're going to have a pixel rendering in the middle, on a triangle face somewhere. So if we pass the normal from the vertex shader to the fragment shader, the way that data looks in the fragment shader is interpolated: a blend between this normal and that normal. The normal smoothly blends over to point more and more in this direction, and going the other way it tilts down to point in that direction. This goes for any data. Say you have vertex colors — red here and blue there. If you pass the vertex colors to the fragment shader, the color you get at this location is a blend between the two, because again it's interpolating — lerping, if you like — between those two states, so you get a smooth blend, and halfway along you end up with the halfway color. That's why they're sometimes called interpolators: any data you set in the vertex shader that gets passed to the fragment shader is interpolated exactly that way, whatever the data is. In the fragment shader you don't have access to individual vertices or anything like that — all you have is the interpolated data for whatever fragment you're rendering. And obviously there's more than a single pixel in the center; there are pixels all across the surface. The 3D analogy is exactly the same: look at a triangle like this, and say we define vertex colors for it too — red, blue and green — and we make a shader whose fragment shader just returns the vertex color we passed along from the vertex shader. The triangle blends those colors together and looks something like this: the classic GL triangle. If you have that one friend who's writing their own game engine for some reason, they'll go 'look, I made a triangle!' The reason it blends between the colors is that you supplied color data at each corner, and the fragment stage blends that data — that's the blending you see in the center. If you want to get technical, blending between three points like this is called barycentric interpolation, but it's basically a lerp across three points in 2D space. You don't have to do this yourself — it happens on its own, whether you like it or not; it interpolates all the data you have, and sometimes you don't want it to. Let's see, there's a question: 'I feel like I missed a step where vertex data was passed into mesh data — I'm stuck thinking in OpenGL, where you define a vertex array object for the vertex data layout.' You don't pass vertex data into MeshData yourself — that happens automatically; Unity passes all the mesh data into the struct for you, so you don't need to deal with it. It's filled out automatically by Unity.
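For reference, a sketch of that vertex-color triangle setup, assuming the mesh actually has per-vertex colors (the COLOR semantic):

    // in MeshData:      float4 color : COLOR;
    // in Interpolators: float4 color : TEXCOORD0;
    Interpolators vert (MeshData v) {
        Interpolators o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.color = v.color; // pass the corner colors straight through
        return o;
    }
    float4 frag (Interpolators i) : SV_Target {
        return i.color; // already barycentrically blended by the rasterizer
    }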
Okay, so we've just talked about what interpolators are, but we haven't actually looked at the vertex shader — and here is the actual vertex shader. It's a function just like any other function. It has input parameters — the mesh data comes in here, that's the struct — so if we want to access the normals of the mesh, or the UV coordinates, we can do that through the v variable, because it's already populated with data. We're going to ignore fog, so I'm deleting everything fog-related, and we're going to ignore textures as well. All the vertex shader returns is these Interpolators — data you specify per vertex — so what you specify in Interpolators is the data you want interpolated across the whole surface. Because, again, all the data each fragment has access to is the interpolated version: when you're in the fragment shader, the only color you have for a given fragment is the blended value at that point — the fragment has no idea what the colors at the individual corners were. That's why you can call them interpolators: you get the interpolated value of whatever you define in the vertex shader. So, this is the vertex shader, and right now all we do is take the mesh data and set the vertex interpolator — the specialized one that always holds the clip-space position. Unity has a built-in function called UnityObjectToClipPos. If you want to get technical, it multiplies by the MVP matrix — the model-view-projection matrix — but essentially what it does is convert local space to clip space, that's all, and then it writes the result to the interpolator called vertex. (Usually o is used for the output, in case you're wondering why it's called o.) That's it — that's all we're doing right now. We can wait with the UV coordinates — actually, let's not do UV coordinates yet.
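If you wanted to spell that transform out yourself, it's roughly this — just a sketch; Unity's built-in function also handles per-platform details for you:

    // UnityObjectToClipPos(v.vertex) is essentially:
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex); // model-view-projection: local space -> clip space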
Okay, fixed4 — what is fixed4, what is happening, what is this? You're probably familiar with float4: in Unity C# that's the same thing as Vector4, and the same goes for float3, float2 and float. But in shaders you have some extra data types with lower precision than full floating point. float is a 32-bit float; half is a 16-bit float; and then there's another one called fixed, which is a little weird — it's very legacy at this point, and almost no platform you'll ever target still has fixed precision. It's very, very low precision — around 12 bits or something weird like that, and I think it also depends on the platform, so sometimes it's not even 12. Anyway: lower precision. fixed is pretty much only useful within the negative-one-to-one range; outside of that the precision gets so terrible it stops being useful. half is pretty good for most things — you very rarely actually need float; you can get away with a lot at half precision as long as you work in the correct space. float works well for anything in world space and the like, but half is usually the one you'd reach for. In practice, though, you can just use float everywhere, and some platforms don't even support half and fixed — I think some PC platforms never use half or fixed, they do everything in floats. If you're on mobile, though, half precision is going to be important. The vector naming extends across all of these: alongside float4 there's also half4 and fixed4 — the Vector4 versions — and the convention continues to matrices: a 4x4 matrix is called float4x4, and again you can do half4x4 for a half-precision 4x4 matrix, and so forth. In C# the equivalent would be Matrix4x4, and you can make 3x3 and 3x4 matrices as well. Bool data types kind of exist, but generally they're equivalent to values of zero and one — which will be familiar if you've used C, where a bool is effectively a 0 or 1 you can multiply by. So you could multiply something by a boolean value and it will compile: false is equivalent to zero, true to one. You can have integer values too, though in many cases they're just converted to floating point anyway. But generally, what you'll be using 99% of the time is floats everywhere, because it's not going to matter for a very long time — not until you start optimizing specifically. Oh, and a fun thing: this vector format translates to everything else, so you can use an int4, and you can probably do a bool4 — in shaders you just append a number for how many components you want in that thing. I never actually use those, so I can't swear it's possible, but I'm pretty sure it is. What's the value of lower precision — does it use less RAM, or is it faster? It's generally faster, and in some cases uses less memory too, depending on what you're doing. With something like GPU instancing, the number of bits your properties occupy partly determines how many instances fit in a single rendering batch — not to get too deep into instancing and what that is. But generally, on lower-end platforms you want lower-precision types because the calculations run faster — you don't pay for a full 32-bit float. And now we're in the fragment shader: we're going to delete fog, and we're going to delete textures. Should you always use lower precision if you can? Pretty much, yeah — that would be a good rule of thumb.
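A quick sketch of the naming scheme for scalars, vectors and matrices:

    float a;     // 32-bit float
    half b;      // 16-bit float
    fixed c;     // legacy, very low precision, mainly useful in the -1 to 1 range
    float4 v;    // like Vector4 in C#; also half4, fixed4, int4...
    float4x4 m;  // like Matrix4x4 in C#; also half4x4, float3x3, and so on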
Though I would generally caution you not to care about it until you have to, because you can run into really weird, hard-to-debug issues when the precision is too low for what you're doing. The way I approach it: use float everywhere until you have to optimize, or unless you really, really know what you're doing and you're past the experimenting stage. Cool — so now we have our actual fragment shader. The fragment shader has a semantic here too, and this one tells us the fragment shader should output to the frame buffer — in most cases, that's its target. If you're doing deferred rendering you can write to multiple targets, but we're not doing that; here it's a single target, so we just use the semantic SV_Target. Again, it's mostly boilerplate — it's basically always there. So now we have one of the simplest possible skeletons of a shader. You usually don't modify much beyond adding code inside the vertex shader and the fragment shader — although sometimes the render pipeline tags, and of course the properties. Okay, let's do the hello world of shaders: let's output a color. Let's output a float4 and give it a nice color — give it a red, there we go. (Oh geez, I don't want it to autoformat.) So this is just a red color, and like I mentioned before in the math class, a very good rule of thumb to remember: RGBA directly corresponds to XYZW — these are the components of a vector, the x component, the y component, the z component and so forth; for colors it's r, g, b, a. And of course you can also index them: 0, 1, 2, 3. Shaders make no distinction between colors and vectors — they're all the same; all of them are float4 or float3 or float2 or whatever. Everything you do in shaders is structured this way: there's no color type and no vector type, it's all the same, because they have exactly the same structure — floating-point (or other) values in up to four components. What's the A in RGBA? A is the alpha channel. The alpha channel can be used for many different things, but the most common use case is to represent transparency or coverage — though you can also use it just to pass some other data along, the same way you sometimes use a vector's w component just to carry data, which is more common than genuinely four-dimensional vectors. So now our fragment shader returns a new float4 that's just red. Let's try this shader, see if it works — 'output value vert not completely initialized.' It's complaining because I haven't set the value of the uv interpolator, so I'm just going to comment that out. There we go — we've got our first shader. But we need a material: we can't apply a shader to an object directly. Even though we have an object, we can't just drag and drop the shader onto it; we need a material to define what the properties are going to be for this object. So I'm just going to create a material — there we go, it's now using our shader — and then we can apply that to our object.
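The hello-world fragment shader, as a sketch:

    float4 frag (Interpolators i) : SV_Target {
        return float4(1, 0, 0, 1); // solid red; rgba maps onto xyzw
    }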
Yeah, and there you go — this is our shader. We made a shader, and it does indeed output red. Something I love about shaders: swizzling. Swizzling is a funny name for a pretty basic but very useful thing. If you have a float4 called myValue, you can access its components — or cast it down to a two-component vector — by writing myValue.xy, which gives you a float2. You can pick whatever components you want from that set, and you can swap x and y for r and g, red and green. You can even flip them: myValue.gr assigns the green channel to the red slot and the red channel to the green slot. This is called swizzling, and it's incredibly useful. If you just want to output a grayscale of some color, you can do .xxx — that takes the first component and splats it out across all the components. Super useful to have — and yes, it is frustrating that Unity's vector types don't have it. So, we made a color — we output red — and we can make it output green if we want; recompile, now it's green. Can you do .rrr1 to automatically put a one in the alpha channel? No — but this is something I'm pushing hard for; I really, really want it to happen. My girlfriend Ashley is working on rust-gpu, which is basically a way to write shader code in Rust, and I'm trying to get her to push for that change. There's some discussion about it on the internet as well, but it gets complicated: if you want the component to be zero, '.0' isn't a valid field name. But anyway. So now we have a way of outputting a color to the screen, and we can move the object around and it follows the transform of this object — which seems like a very basic thing, but the reason it happens is that we told it to convert local space to clip space here. It's not always the case: the default space is clip space. If we just assign v.vertex directly, the mesh is going to be stuck to the camera — it might not even render — no wait, okay, we got it. So now we took this mesh, but we're rendering it directly in clip space, and the position of the object doesn't matter whatsoever, because we're not using that matrix — it's just slapped onto the screen. This can be useful for things like post-processing shaders, because for post-processing you usually want something that covers the entire screen: it doesn't have a position, it's a global thing for the frame. Oh — the difference between normalized device coordinates and clip space? Yeah, it seems really messy, and it's really hard to find good information on it. It seems to be mostly a difference in axis orientation and whether depth should go from zero to one or negative one to one — in clip space I believe depth extends further, whereas in normalized device coordinates it's squished into either zero-to-one or negative-one-to-one.
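For reference, a few swizzling examples as a sketch (myValue is just a stand-in name):

    float4 myValue = float4(0.1, 0.2, 0.3, 1.0);
    float2 a = myValue.xy;  // first two components as a float2
    float2 b = myValue.gr;  // same components by color name, order swapped
    float3 c = myValue.xxx; // splat x out across three components (grayscale trick)
    // .rrr1 sadly isn't valid; you'd write float4(myValue.rrr, 1) instead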
You don't have to know that in order to write shaders, though, which is kind of nice — unless you're writing your own engine, but that's not why we're here. How compatible are the shaders you write in ShaderLab with OpenGL, Vulkan, DirectX? Unity cross-compiles to all sorts of rendering APIs — I don't even know what to call them all. If you compile something for iOS, it's going to compile to Metal. You don't have to care about the actual backend: you only write stuff in HLSL, and Unity handles the translation to all the different APIs. Is it all compiled to SPIR-V nowadays? I don't know — I'd guess not, but you can probably look it up; I don't really know how the internals work there. Like I often mention, I pretty much only learn the things I have to, so if information is mostly useless to me I tend not to pick it up: to make things look pretty in a game and write the shaders I want, I don't need to know that — I just need to know it compiles for the target platform. Can you have multiple shaders on one object for different purposes — say, an outline shader on hover, but another shader on the object at the same time? Usually you can do things like render the mesh twice with different shaders, yes. Outlines in particular are notoriously complicated — sometimes outlines are even done as a post-process effect rather than something in the shader of the surface itself — so it depends on what you're doing. Rust-gpu looks like it uses SPIR-V? Yes, actually — rust-gpu uses SPIR-V, thumbs up. Is it possible to replace VFX with shaders, for someone from a non-art background? Well, it depends on what you mean by effects and what you mean by shaders, because all the VFX you use and draw already use shaders — shaders are kind of inevitable; everything you see in a game is pretty much always rendered with them. But if you're asking whether you can replace artist-authored texture data with shaders: sometimes — but sometimes it's also pretty computationally expensive to do that in real time when you could just use a texture someone authored. It's usually a different style too, and it really depends on what you're trying to achieve artistically and what your capabilities as a team are. So, let's continue. We made a very simple shader with a hard-coded color: a vertex shader that positions the vertices in the world — in this case matching the transform of this object — and a fragment shader whose only job is to return a green color. That's it. Let's do something a little more useful and interesting. First off, let's make a property, because maybe we want separate materials for this — say one that's green and another that's red — and we want to be able to configure that. So we want a property for the color. What we had was a float property, but in our case we want a Color: let's call it _Color, give it the display name 'Color'... color color color. A color has four components, so we can default it to some value — actually, we can probably just do ones across the board: a white color.
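As a sketch, the color property and — jumping slightly ahead — its matching variable:

    Properties {
        _Color ("Color", Color) = (1, 1, 1, 1) // defaults to white
    }
    // ...inside the CGPROGRAM block:
    float4 _Color; // readable from both the vertex and fragment shaders
    float4 frag (Interpolators i) : SV_Target {
        return _Color;
    }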
Then we want to be able to access this property somewhere. The properties you define up here usually need a matching variable — a field — and again, colors have four components, so you could do float4, fixed4 or half4; we're just going to use float4. So now we have the _Color property, we can set it on the different materials we have, and we can access it in both the vertex shader and the fragment shader — you can access properties from anywhere, which is really, really useful. In this case, instead of outputting a hard-coded color value, we just output _Color, save, and go back to Unity. Let's see — this is supposedly the red one; let's copy it and assign the green one. Looking at the material, we now have a Color property on it, so we can modify the color: this was the green one, so set it to green; and the red one, set it to red. There we go — a green one and a red one, configurable per material because we have this property up here, whose field _Color we then access in the fragment shader. Which, again, does only one thing: return a color for the current fragment — or pixel, if you like — that we want to render. So far, though, we haven't actually passed any data from the vertex shader to the fragment shader — all we're doing is outputting a color. (I guess that's all you ever do in a fragment shader, but in this case we want to get a little fancier.) What if we want to pass something from the vertex shader to the fragment shader? Okay, let's talk about the concept of UV coordinates — oh, actually, we're not going to do UVs; we're going to do normals and skip UVs for now. To pass something from the vertex shader to the fragment shader, we need to put it in the Interpolators structure. We need a float3 value, because a normal is a three-dimensional vector — a direction — so let's call it normal, and the semantic we use is TEXCOORD0. The name doesn't matter — in the Interpolators structure it does not correspond to UV coordinates; it's just one of the data streams going from the vertex shader to the fragment shader. All right, now we've got the normal in the Interpolators, and we can access it — since the fragment shader takes the Interpolators as input — with i.normal, which gives us whatever we set the interpolator to. Now, i.normal is a float3, but the fragment shader returns a float4, so we need to specify what the fourth component is going to be. We can write float4 and pass i.normal in — that automatically populates the xyz components — and then we just set the w component to one. Usually that slot is alpha if you're using alpha blending or transparency, but in this case I guess it doesn't matter; we could just make this a float3, but whatever. So we're making a float4 out of a normal direction, which is three components, plus one extra component. Okay — now we're outputting the value we have in the interpolators, but we haven't set that value
yet — the vertex shader needs to assign something to the interpolator called normal. In this case we'd write o.normal = some value, and like I mentioned before, this can be anything — it doesn't have to be a normal. We could even pass the color in here: say we pass _Color.rgb into o.normal. Back in Unity, we're still just looking at the color — but instead of outputting it directly, we're now passing it through an interpolator. All we're doing is piping data to the fragment shader. But we don't want the color — we want to show the normals of this object, just visualize the normal directions. The way we do that is through the mesh data, because the mesh data contains the normals: again, MeshData has the normal field, and we used the NORMAL semantic to make sure it gets populated with the normals of the mesh. So we can write o.normal = v.normal — and this is very common; you will very often just pass data straight through the vertex shader to the fragment shader. This one isn't doing any transformation or anything yet — we should, and we'll get to that later. All right, so we're passing the normals from the mesh data into the interpolators, carried across all the triangles to the fragment shader, where we can now access the normal. Back in Unity, we have these very colorful mango spheres. What this is showing is the direction of the normal for every pixel we render. A way to visualize it: consider what normals are — the direction out from the surface — and again, since shaders make no distinction between vectors and colors, what we're doing is basically outputting the vectors as colors. On top of the mesh it's green: as a vector that means (0, 1, 0) — the middle component, the y component, is one, meaning it points directly upward; that's why the top is green. Along the x axis the normal points directly along x, which is why it's red, and along the z axis it points along z, which is why it's blue. So it's basically just showing the direction of the normal. Why is the normal's semantic in Interpolators TEXCOORD — it's TEXCOORD0 in this case? TEXCOORD here is kind of just a name to specify a slot where you want to pass data through. Like I mentioned: in MeshData, TEXCOORD specifically refers to UV coordinates, but in the Interpolators it's just an index — you don't really need to think of it as related to anything. In this case TEXCOORD is something completely different: it's just the semantic you use to separate all the different values you have. If you want to add something more, you name the next one TEXCOORD1, the next TEXCOORD2, and so forth — and it could be anything: the tangent, just some values, whatever you want. It's just a way of saying you want a channel to pass some data through.
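Putting the pieces together, the normal-visualizing pass-through looks roughly like this:

    struct MeshData {
        float4 vertex : POSITION;
        float3 normal : NORMAL;
    };
    struct Interpolators {
        float4 vertex : SV_POSITION;
        float3 normal : TEXCOORD0; // just a data slot, nothing to do with UVs here
    };
    Interpolators vert (MeshData v) {
        Interpolators o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.normal = v.normal; // straight pass-through, still in object space
        return o;
    }
    float4 frag (Interpolators i) : SV_Target {
        return float4(i.normal, 1); // visualize the direction as a color
    }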
'I'm seeing the black side render correctly — does that mean you can have negative color values output by the fragment shader and nothing will break?' Yes. Inside the vertex shader and the fragment shader, shaders are kind of designed to never crash: if it compiles, it will output something. The downside is that you can literally divide by zero, and you can get NaN propagation throughout your rendering pipeline unless you're careful — shaders won't crash for that. It kind of depends on what you're rendering to, but yes: you can have negative color outputs, and values above one — it's not restricted to zero-to-one — and depending on where you're outputting it to, it has different effects. It's possible to render to a texture, and it's also possible to render to the frame buffer; in this case we render to the frame buffer, but you can render to a floating-point texture if you want, in which case values below zero stay below zero. What does NaN display as? I don't remember — I don't run into it that often, and sometimes there are protections against it to prevent NaN propagation as well; I think the newer render pipelines in Unity have protection for that. Okay, cool — so we're just passing the normals through, which means we're showing the normals of this object in color, and this applies to any object: we can replace this mesh — it doesn't matter which mesh we use, a capsule or a cylinder — and everything behaves as we'd expect. So that's the normals. All right, let's look at something else — oh, actually, one more thing. If we rotate these, you can see that the normal we're outputting is not in world space — it corresponds to the local space axes, sometimes called model space or mesh space. These normals are not converted to world space, and we might want to do that. This is where we start figuring out where we want to do this math, because technically we could do it either in the vertex shader or in the fragment shader. To do it, we'd use Unity's — is it model, or local? probably object — UnityObjectToWorldNormal, there we go; object space is what Unity calls it. This is a Unity macro that basically converts from object space to world space. It's a dedicated 'normal' version because the macro does some extra work depending on how you configure Unity — there are shenanigans if you scale your object non-uniformly, because then you want to make sure the normal stays normalized. So there are things to be careful with depending on how your object is set up — but if you don't have any scaling, it's pretty much just a matrix multiplication. As you can see here, it's a mul — a matrix multiply — of the normal vector with a 3x3 matrix of unity_WorldToObject. And yes, this is from Unity's include file; you can do it manually if you want. To do it manually, you'd write mul, then v.normal, and then a matrix — which one do they use? this one, there we go. unity_WorldToObject is a 4x4 matrix, but when you're doing a matrix multiplication of a direction you can just discard the fourth row and column. And this is a little confusing, because it says world-to-object when what we're actually doing is
object-to-world. The reason is that the argument order of the mul operation effectively changes the matrix: depending on the order, the matrix acts transposed or not. If you want to make this more readable — and I would never structure it the transposed way — use unity_ObjectToWorld instead and flip the arguments, so it's a straightforward multiplication of a 3x3 matrix with the object-space normal, transforming it into world space. And we're not using any scaling, so we don't need to normalize — we can leave it at that. You could also use UNITY_MATRIX_M, the model matrix — if you've heard of the MVP matrix, the model-view-projection matrix, this is the M, equivalent to unity_ObjectToWorld. Unity themselves recommend using their names, because they define those names to work correctly across platforms — it gets really complicated once you get into VR and the like — so it's usually best to use Unity's built-in stuff for this; I just wanted to show you how to do a space transformation manually. So now this is no longer in local space: if we rotate the object and recompile, the green changes direction — no matter how we rotate it, the colors always correspond to world-space normals, because we transformed those vectors. Same thing for this one: it changes to green, to red, to blue. Now, if we wanted to, we could do this in the fragment shader instead — pass the local-space normals through and do the operation there. If we recompile, nothing changes; it works exactly the same as before. But this is where one of the most common ways to optimize shaders comes in. If you want to think about how to make a shader faster, think about how many vertices you have versus how many pixels — fragments. In most cases (not all, but most) you have more pixels than vertices, so you usually want to do as much as possible in the vertex shader and as little as possible in the fragment shader. It could be that you have a really high vertex count on something very far away — maybe it only renders to 32 pixels but has 16,000 vertices — in which case the vertex shader is the expensive part; but in most cases you're running more fragments than vertices for a mesh. So if something like this can go in the vertex shader, it should — presuming you don't need access to the local-space normals in the fragment shader.
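For reference, the conversion in the vertex shader — the macro route and the manual route, as a sketch:

    // Unity's macro (also handles non-uniform scaling correctly):
    o.normal = UnityObjectToWorldNormal(v.normal);
    // manual version, fine when there's no non-uniform scaling:
    o.normal = mul((float3x3)unity_ObjectToWorld, v.normal);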
All right — normals are just one type of data we have in our mesh, so let's move on to UV coordinates. We have UV channel 0 as our mesh data input, and in the Interpolators we want to send the interpolated UV values to the fragment shader, so we need a channel to do that on — we'll just call it uv. Then in the vertex shader we do o.uv = v.uv0 — again, just a simple pass-through; we're not modifying anything here. So what do the UV coordinates look like? Let's just output them: float4(i.uv, 0, 1). UV coordinates — at least the traditional ones you author in your mesh-editing programs — are usually 2D coordinates, so in this case it's a 2D vector we pass into the float4, which combines it into a full float4 where z is zero and w is one. What does that look like? We get even more mango shapes. The UV coordinates depend on how the object was UV mapped when the artist authored it, and it's probably most helpful to look at a single quad. So here we have a single quad. The colors we're seeing directly correspond to two-dimensional coordinates — because colors are vectors and vectors are colors in shaders, which is a very useful way to debug. The color is zero, fully black, in the bottom-left corner; along the bottom edge it increases in red, and red is the x component, so the x coordinate is increasing along there. On the other axis it increases in green, the y coordinate, and diagonally it increases across both, which makes yellow. So what we're visualizing is 2D coordinates on this mesh. If you want to visualize just one of the components, we can do i.uv.xx, for instance — now we take the x component and splat it across RGB, and we see only the x coordinate of the UVs, which makes a horizontal gradient. We can do the same for the y coordinate. It's just that when we visualize both at the same time we get this mango vibe, because it's red and green being added together into a single color. So these are the coordinates you get in a 3D application when you're modeling something: if you have a cube like this, the colors differ per face depending on where it is — because that's literally what the UV coordinates are, they specify a 2D coordinate on your mesh. Not always, but usually. Here's a good thing to talk about: let's say we want to modify this with a property — say we want to scale these UVs. Let's call it 'UV Scale', a single float value, and it's going to start at 1,
which is a good default for a scale. Then we need a variable for it, which we'll just call _Scale, and now we want to scale these UVs based on the parameter. If we want to scale, consider what scaling means: we want to multiply these vectors so they get either longer or shorter — you know your vector math at this point — so if you multiply by _Scale, the coordinates themselves change depending on what that value is. Again, we could do this in the fragment shader as well, but since it's faster in the vertex shader, we'll do it there. So now we scale the UVs by some value, and if I go back to Unity and tweak it, you can see the coordinates themselves change. If we set the scale to 2, for instance, it maxes out at yellow here, and this whole quadrant is completely yellow, because all of those values have gone above one. The color is clamped, but the underlying data is not — you still have values outside zero-to-one; it's just that your monitor can't show negative colors or colors above one, so that's the range we can see things in. You can still use those values in the shader itself, though. So this is how easily you can modify data like this, and the part I really love about working with shaders is how closely you can visualize the math — the things you'd do on paper. In this case we multiplied by a scale, so what about some other stuff? Say we add something called an offset — call it _Offset, defaulting to zero, the additive identity. There we go. If we have a scale and an offset, offsetting usually means you add instead of multiply. Whether you want to offset before or after scaling depends on your use case — in this case we'll do it before... actually, you do want to do it after; I don't know, things always feel a little reversed. All right, now we have an offset parameter, so let's set the scale back to 1.
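As a sketch, the two properties and the vertex-shader line that applies them:

    Properties {
        _Scale ("UV Scale", Float) = 1   // multiplicative identity
        _Offset ("UV Offset", Float) = 0 // additive identity
    }
    // ...
    float _Scale;
    float _Offset;
    // in the vertex shader:
    o.uv = v.uv0 * _Scale + _Offset; // scale first, then offset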
Now we can offset these UVs diagonally: put in a negative value and the coordinates start going negative, or a positive value and they're all positive. This is where things start becoming really powerful, because if you start thinking about fading something from one color to another over some specific range, this is where we can start doing that. Let's say our goal is to make a gradient from one color to another across the UV coordinates. What are we going to need? We can start with the two colors — _ColorA and _ColorB, our start and end colors — and let's say we just want to blend across the x axis. So let's ignore scale and offset for now, just pass the UV coordinates through, and blend between two colors. (Oh, we also need the variables for those properties — _ColorA and _ColorB — and we don't use the old ones anymore.) You probably remember we talked a lot about the function called lerp: lerp is basically a way to blend between two values based on a third value, usually in the zero-to-one range. The UV x coordinate is exactly that — a value from zero to one, zero on the left side, one on the right — so we can use it as the t parameter of a lerp between two colors. Let's make a variable, call it the output color, and in shader code you just type lerp: the from-value is _ColorA, the to-value is _ColorB, and the interpolant is the x coordinate of the UVs. Then we output that color. So: blend between two colors based on the x UV coordinate. Back in Unity, we can set one to some color and the other to some other color — and now we've made a very, very simple blend between two colors; we constructed a gradient with very few instructions. It's a simple lerp between two colors, based only on the UV coordinates of this object. Then we can start modifying it however we want. Say you want to change where the gradient starts and ends — that could be a useful parameter to tweak. In that case it might be useful to have two ranges: let's call them _ColorStart and _ColorEnd. And let's use a Range this time — Range is just a quick way to tell Unity that we want the inspector to show a slider between two values, in this case zero to one. So we have _ColorStart and _ColorEnd, starting at zero and ending at one by default, and then we add them as variables. As with the other ones, it would be float4 — no wait, sorry, these are not float4s; since they're single values, _ColorStart and _ColorEnd are just floats. Cool. What we want to do now is remap the x coordinate of the UVs to some other range. So for now, let's stop looking at colors and go back to looking at the actual values we want to use as the t parameter.
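The gradient so far, as a sketch:

    float4 _ColorA;
    float4 _ColorB;
    float4 frag (Interpolators i) : SV_Target {
        float4 outColor = lerp(_ColorA, _ColorB, i.uv.x); // blend left to right
        return outColor;
    }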
This is really, really useful, because in shaders you can't really log anything — you usually just output some color, and that's how you see what's happening in your shader. If you're writing shaders, you're going to have a lot of commented-out lines that just return some intermediate value, because that's how you debug. In this case we want to see i.uv.x — that's the x coordinate we're dealing with — and now we want these two sliders to change where the gradient starts and where it ends. We talked about this during the math class: a very, very useful function for this is called, surprisingly, inverse lerp. Oddly enough, inverse lerp is generally not built into shader languages — there's almost an inverse lerp, called smoothstep, but that one also does smoothing, and it's different. So sometimes you have to define your own functions; let's define our own. Call it InverseLerp: it takes a start value a, an end value b, and the input value v. The inverse lerp function is really straightforward — in case you don't care about division by zero — you just return (v - a) / (b - a); I'm pretty sure that's the one. So now we have an inverse lerp function, and we can use it to change where the gradient starts and ends. Let's call the result the t value — that's usually the name of the third parameter of the lerp function — so we do InverseLerp(_ColorStart, _ColorEnd, ...) with the actual UV x coordinate as the gradient input, get the t value, and just return it for now. Let's see if this works: pulling the start slider does seem to change the start, and pulling the end slider changes the end point. You can also reverse it — inverse lerp doesn't really care what order the endpoints go in. So this is now a very simple setup where you can change where the gradient goes. Next we want it to blend between two colors, not just black and white, and the data is pretty much all there: you might think all you need is to bring the lerp back and change the last argument to t — there's one more thing we need to do — oh sorry, I still need to return it, great. So now we're using that t value — the zero-to-one value we were just looking at — as the t parameter of the lerp between _ColorA and _ColorB. Back in Unity, recompile: we now have a blend between two colors... kinda. It looks like we have more than just two colors here, right? Can you guess why? Why is there yellow? Does anyone have any idea — and then I sit here awkwardly — hell yeah, you got it. The problem is that shaders don't natively clamp everything between 0 and 1, so you need to make sure you do that wherever you want to stay within that range. What's actually happening is that the t value from the inverse lerp has values outside zero-to-one — and even though we return t, we can't immediately tell whether those values are above one or below zero. There are ways of checking this, though.
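The function, as a sketch — note there's no guard against division by zero when a equals b:

    float InverseLerp (float a, float b, float v) {
        return (v - a) / (b - a);
    }
    // in the fragment shader:
    float t = InverseLerp(_ColorStart, _ColorEnd, i.uv.x);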
One way I like to check whether something is overshooting is a function called frac. Frac is very simple: frac(v) is the same thing as v - floor(v), which in human language means values repeat within the 0-to-1 interval. So if we have values going above one here, we'd see this gradient repeat multiple times in both directions — whereas if the value were clamped, there'd be no repeating pattern, because all those samples would be the same value. All right, let's use frac: t = frac(t), go back to Unity — and they do indeed repeat. We don't want that; we want to clamp between zero and one, not repeat the pattern (you could if you wanted to, but that's not what we're going for). So we've now confirmed the value isn't clamped — let's clamp it and see what happens. The way you clamp things in shaders has quite possibly the worst function name ever to exist in the universe: it's called saturate. Saturate means: if it's less than zero, make it zero; if it's greater than one, make it one. It's the same as the Clamp01 function in Unity's C#. That's all saturate does — it clamps into that range. Back in Unity, the pattern no longer repeats below or above: we clamp, and then do the frac, which would have shown us repeating gradients if the value were still increasing past one or decreasing below zero. So that means we've successfully clamped — these values are pinned at one, those are pinned at zero. Now, going back to the gradient, we have a single gradient and it doesn't overshoot into other colors. 'This might be a topic for later, in which case don't answer' — oh, I don't know, I haven't read it yet — 'what's the difference between what we're doing now, plus a clamp, and smoothstep?' Smoothstep is a little different: smoothstep is an inverse lerp, a clamp, and a cubic function — three different things bundled together, with operations you might not always want. So yes, smoothstep is a combination of those three things. It's not a lerp; it's a cubic smoothing, which in and of itself can be useful. Oh — for context, the reason I think saturate is one of the shittiest names for a function: you're literally dealing with colors in shaders, and 'saturate' means something completely different in the context of art and color, which is really frustrating to me. Does that make sense? Any questions so far? Is saturate also part of Unity's includes? Nope — saturate is a built-in function of the language itself. The page I usually go to is the (deprecated) Cg documentation — here you go: the standard library in Cg lists a bunch of functions, and if we search for frac, you can see frac and all the different overloads for the different types and input values.
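Putting the clamp and the frac debug trick together, as a sketch:

    float t = InverseLerp(_ColorStart, _ColorEnd, i.uv.x);
    t = saturate(t); // clamp to [0,1] — really just Clamp01 with a bad name
    // debug: frac(t) is t - floor(t); if the output gradient repeats,
    // t still has values outside the 0-to-1 range
    // return float4(frac(t).xxx, 1);
    return lerp(_ColorA, _ColorB, t);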
that's the thing i was just mentioning, which is funny, because then we can look at smoothstep and see that it's doing a hell of a lot more than just an inverse lerp. if we look at smoothstep's listing: this part is a linear inverse lerp, then it's clamping between zero and one - that's the part we were doing manually - and then it's doing a cubic smoothing of that function. so it has a lot of extra math that you don't always want. just to show you what that looks like: there you go, this is the cubic smoothing function, whereas the one we were doing is the green line here - just a linear blend - and smoothstep does the smoothing curve on top of that (i guess this graph should just be between 0 and 1). so that's what smoothstep is doing. "isn't saturation basically rgb times x?" it's not - if you multiply by a value, it can still shoot beyond zero to one. and if you're talking about artistic saturation, multiplying is not going to change the saturation, it's going to change the overall brightness, which is a separate thing. if you want to increase the saturation of something, you first need to figure out the grayscale value of the color and then do the opposite of that - usually you can interpolate away from the grayscale version of that color. that's a separate operation from the thing shaders call saturate, which is just a clamp between zero and one - which, again, sucks. it should just be called clamp01 or whatever. "in that case the shader saturate makes no goddamn sense" - no, that's why i hate the name. "it should be called clamp01, and that's why it's called clamp01 in shaderforge" - literally yes, that is why it's called that.
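for reference, this is roughly what smoothstep does internally per the cg docs - an inverse lerp, a clamp, and a cubic smoothing, in that order (the function name here is just for the sketch):

```hlsl
float SmoothstepSketch(float a, float b, float v)
{
    float t = saturate((v - a) / (b - a)); // linear inverse lerp, then clamp
    return t * t * (3.0 - 2.0 * t);        // cubic smoothing on top
}
```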
let's see - we don't have that much time left, so i can juggle a few math things to show you what you can do with this type of stuff. let's give it some more pleasing colors - i didn't like these colors, this is better. okay, so one thing that is really neat... actually, let's keep that for later. oh, by the way: things in shaders automatically cast from a single float to a float4. we're returning just a single float here, but it's basically going to take that component and swizzle it out to all four channels of the float4 - an implicit cast. so right now we're just showing the basic uv coordinates we looked at before, zero to one from left to right. and one thing that's really nice once you get into making shaders this way, once you can harness how the math functions work, is that you can get a lot of effects that might otherwise be really hard - especially when you get into animating things and doing procedural stuff. so let's say i want to make a triangle wave out of this - you can deploy a bunch of math and just sort of make it work. how can we go about that? well, we do have the function abs, which we talked about in the math class, which basically means "if it's negative, make it positive" - the absolute value of something. given that, we can start thinking: if i want a triangle wave that repeats over some range, what if i take this value, multiply it by two, subtract one - now it's going from negative one to one over this interval - and then do abs of that? you can see it's now bouncing from one to zero to one, so we have a triangle wave inside the zero to one range. it's a very simple mathematical function, and obviously we can repeat it: if we go back to our coordinates, first multiply by some value - let's say five - and then do frac of that, we now have five repeating sections of zero to one. then we turn each of those into a triangle wave the same way: the absolute value of x times two minus one. take this, multiply by two, subtract one, take the absolute value of the whole thing, and if we go back to unity we now have a triangle wave going from zero to one, down to zero, up to one again, and so forth. you can do this in many different ways - you also have the trigonometric functions, so you don't have to build waves manually like this. let's say we do the cosine of this value instead: as you know, trigonometric functions repeat themselves, but if we do cosine of x times five it's not going to repeat that many times, because the circle constant is involved. we can just throw some constant in there - say 25 - go back to unity, and we have a repeating pattern, although this one goes from negative one to one, because that's what cosine outputs. should we involve tau? let's do it - let's juggle in the tau value: #define TAU, then that number, there we go. this is a preprocessor define - it basically makes any instance of this name get directly replaced by the value. you can define names for all sorts of stuff, you can make macros, you can go kind of crazy with preprocessor defines - in this case i'm just defining the constant. so now, if we do this coordinate times tau and then an integer value, it's guaranteed to repeat perfectly and seamlessly: it starts and ends on the same value, because we're going through whole periods. if we go back, it starts at 1, goes down to negative one, back up to one, and so forth. and this thing of remapping from negative one to one into zero to one, or vice versa, is something i end up doing a lot, because it's very common to juggle those two ranges - in this case we multiply by 0.5 and add 0.5, which shifts the range, so now it's going from 1 to 0 to 1 to 0.
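collected as fragment shader snippets, the patterns we just built look something like this - i.uv is the interpolated uv coordinate as before:

```hlsl
#define TAU 6.28318530718

// triangle wave in the 0..1 range, repeated 5 times across the surface:
float t = frac(i.uv.x * 5);      // 5 repeating 0-to-1 sections
float triWave = abs(t * 2 - 1);  // remap to -1..1, then mirror with abs

// or with trig - multiplying by TAU and an integer guarantees whole
// periods, so the wave starts and ends on the same value:
float cosWave = cos(i.uv.x * TAU * 5); // outputs -1 to 1
cosWave = cosWave * 0.5 + 0.5;         // shift back into the 0..1 range
```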
"freya's just having fun now" - yes, i had to kill ten minutes before lunch, but i also wanted to show that as soon as you start harnessing this type of math you can do a lot of fun things with it. you can animate it, you can do it in different spaces - it doesn't have to use uv coordinates, and so forth. there's a lot you can do, but i feel like we should explore that after the break, so we're doing a lunch break. alright - is there anything you'd like me to recap, or something we've already talked about that i should clarify? if so, let me know. "can we do a quick one minute recap?" okay, i can try - i tend to make them way too long - but let's recap the shader we wrote, since i'm guessing that's the most information-dense one. so, the structure of a shader: the shader itself is the outermost scope here, and this string is the path to the shader. when you have a material you can select between different shaders, and where the shader lies in that whole menu depends on the path at the top of your shader file - which can sometimes be useful for categorization and stuff. then we have the properties: the properties are all the stuff that individual materials can modify and feed in as input data for this shader. so generally the properties are what you can tweak - pretty much everything else is hard coded. there are some exceptions, some things that are populated by unity: the mesh data is populated by unity, some of the positional stuff like the transform matrices - the mvp matrix - and light information is also supplied by unity automatically. but basically all the custom stuff you want to input to your shader is defined by the properties. then we have the sub shader. most shaders you write are only ever going to have one sub shader, but you can have several sub shaders in a single shader file, which you can then select between depending on performance characteristics or whatever, using the lod thingy - i don't know how that works, i never use it, it's probably not relevant. then you've got the tags: these are the configuration for how you want this sub shader to act. there are sometimes also tags inside the pass - those are usually called pass tags - and then you have sub shader tags. it's important to know the difference between the two, because it's easy to get them mixed up and try to enter sub shader tags among the pass tags. you can easily google "unity shader pass tags", and you'll see what the valid tags in a pass are, and so forth - you can google any of these things and find it. "is there a way to see all shader paths, including hidden shaders?" with reflection i think there is, but i don't know if there's a way to do it in unity with just the vanilla editor. "will we go over transparent and opaque, and the order in which things render, and such?" yes.
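to tie the recap together, here's a bare-bones but compilable sketch of that structure - the names are placeholders, not the exact shader from the stream:

```hlsl
Shader "Unlit/RecapSketch"       // the path shown in the material's shader menu
{
    Properties                   // per-material input data
    {
        _ColorA ("Color A", Color) = (1,1,1,1)
    }
    SubShader                    // usually just one of these
    {
        Tags { "RenderType"="Opaque" } // sub shader tags
        Pass
        {
            // pass tags and render state would go here
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 _ColorA; // redeclare the property to access it in cg

            struct MeshData     { float4 vertex : POSITION; };
            struct Interpolators { float4 vertex : SV_POSITION; };

            Interpolators vert (MeshData v)
            {
                Interpolators o;
                o.vertex = UnityObjectToClipPos(v.vertex); // local to clip space
                return o;
            }

            float4 frag (Interpolators i) : SV_Target
            {
                return _ColorA; // just output the property
            }
            ENDCG
        }
    }
}
```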
okay, then we've got the cg program, which is the same thing as hlsl code. what's within the cgprogram scope is shader code; everything outside of that is called shaderlab, unity's wrapper language to encapsulate all the data we need to supply to and interact with our shaders. the pass is where we define our shader code: in this case we have a vertex shader, we have a fragment shader, we include some of unity's built-in variables and stuff, we define tau because tau is useful, and then we've got our properties defined as variables - we need those in order to be able to access the properties. this is a little messy, i don't know if i should remove the comments... actually, let's leave them in. so this is the tangent direction in xyz and the tangent sign in w - and it might be useful to say that all of this is in local space. this is the data you get from the mesh: the mesh data is the input to the vertex shader, so this is where we have all the information that is encoded in the vertices. there are only a limited number of things you can get here - i've already listed the things i'll use 99% of the time. sometimes i'll use further uv coordinates, but generally the things listed here are pretty much what you're going to use. i think there's some extra stuff you can get - especially when it comes to skinning, i think you can get bind poses and whatnot - but that gets into stuff i haven't touched. so, in the mesh data struct we define which inputs from the mesh we care about in this shader: in this case the local space vertex position, the local space normal, and the first uv channel, uv0, because most meshes have some form of uv coordinates defined. the mesh data then goes into the vertex shader, and the vertex shader's job is to supply data to the fragment shader and set the clip space position of each vertex. we set the clip space position here, and then we use another structure, called interpolators, for the data we want to pass to the fragment shader: we set up the interpolator struct, we supply data to the struct, we return the struct, and then we're done with the vertex shader. in this struct, the only thing you always have to have is this one: you always need to set the SV_POSITION, which is the clip space position of the vertex. everything else that follows can be whatever you want - you can store any data in these, maybe one interpolator stores some length property or something, it could even be separate things in x, y, z and w - and they always use the semantics TEXCOORD0, 1, 2, 3 and so forth. those are not to be confused with the texcoords that explicitly refer to uv coordinates - in the interpolator struct they're basically just channels you send data through.
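and the two structs from that recap, as a sketch - the field names are just the ones we've been using and can be anything; only the semantics after the colons are fixed:

```hlsl
struct MeshData           // per-vertex input, populated by unity from the mesh
{
    float4 vertex : POSITION;  // local space vertex position
    float3 normal : NORMAL;    // local space normal
    float2 uv0    : TEXCOORD0; // first uv channel of the mesh
};

struct Interpolators      // data passed from the vertex to the fragment shader
{
    float4 vertex : SV_POSITION; // clip space position - this one is mandatory
    float3 normal : TEXCOORD0;   // just a data channel here, not a uv
    float2 uv     : TEXCOORD1;   // the uvs we want interpolated per fragment
};
```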
okay, so after you set the interpolators, you have now set data for each vertex, and the job of the fragment shader is to render the triangles - now we're rendering actual pixels to the screen, and all the data you have in each corner of a triangle is going to be blended, or interpolated, across that triangle. it's not only colors: it's normals, and all the other data you pass in there. so if you pass a value of one into an interpolator on one vertex, and a value of two on another, the pixel that renders halfway between them is going to have a value of 1.5, because it's a halfway blend between one and two. "why do you need TEXCOORD0 etc. in the interpolator, if you set the interpolator struct variables in the vertex shader?" what do you mean by why you need it... oh - i think it's just the way things work, i don't think there's a specific reason, apart from being able to distinguish between which one is the clip space position and which ones are just data you want interpolated - or at least that's how i interpret it. it's just a way to enumerate all the data you want to pass, and the fact that they're called texcoord is just happenstance. i wish you didn't have to specify it, because it feels kind of superfluous - i wish you could just write the variables and everything would work - but that's not how shaders are structured. it's annoying boilerplate, pretty much. so, the thing we did in the vertex shader, apart from getting the clip space position, was that we also transformed the local space normal to world space. this was so we could output it in the fragment shader - oops, where did i go, i fumbled my case too much - so if we output that, it would be i.normal, plus whatever value in the alpha channel. now we're displaying the normal direction, and since it's world space, the orientation of this object is not going to affect the normals we're seeing. the fragment shader itself is relatively straightforward - it's usually the most straightforward part, because you're not passing anything on to anything: you just have data, and you want to output a color, and that's kind of all you're doing. before we went for break we were looking at generating patterns and waves in a shader. we were generating this over the uv coordinates, and we limited it to the x component of that space - but we could do it over both components as well: we could make this a float2, because all the built-in functions just work with any number of components, and for the most part they do the same operation on every component. so if we make this a float2, it's going to do this on both axes, and we can visualize that with red and green: now we have a cosine wave on the x-axis as well as on the y-axis, visualized separately in the red channel and the green channel. you can juggle these channels back and forth very easily. anyway, let's go back to using one. okay, let's repeat this a few more times - let's do five, there we go.
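for reference, the float2 detour from a moment ago would look something like this as a sketch, assuming the TAU define from earlier:

```hlsl
// built-in functions run per component, so this waves both axes at once:
float2 wave2 = cos(i.uv * TAU * 5) * 0.5 + 0.5; // cosine on x and y separately
return float4(wave2.x, wave2.y, 0, 1);          // red shows x, green shows y
```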
so, one thing that is very nice about shaders - about the way you're kind of just piping math in different directions - is that it's very easy to change the domain in which something happens. right now we have this value changing by the x coordinate of the uv space, so the orientation and position of this object don't matter, because it's all in uv space - but we could do this in world space if we wanted to, there's nothing stopping us. and likewise you can change it to the y coordinate, and everything is going to quote-unquote "just work". the only input we have right now is a single value going from zero to one, and then the math formula works its magic and makes a wave along it. this has all sorts of implications - you can very easily get pretty funky effects with very simple modifications. for instance, let's say you want to make this wave wiggle - you want it distorted in one direction. distortion means offsetting it in a specific direction, so let's make an offset - call it xOffset or something - and maybe we want that offset to vary based on where we are vertically, so we use i.uv.y. we could just leave it at that, and then offset the coordinate: plus xOffset, there we go. if we go back, we now have a diagonal wave, because we're adding more and more in a specific direction as we go higher on the y axis - it basically became a diagonal thing, and it looks like some sort of drill now. so if you want to make a drill, you've got the shader code right there. but what if we build the offset with some trigonometry instead? what if we do the cosine of this, times tau, times eight for instance, then multiply the whole thing by 0.1, probably... go back, and now we have a zigzag pattern. we can modify it a bit further, make it a little less intense - there you go, now we've got this wobbly pattern. isn't that neat? the thing i really like about this is that you get to directly visualize the math you're doing, and then you can start making effects out of it. let's see, what else could we do... let's say we want to animate this. if we want to animate something, we can very easily access a time variable: unity has a built-in time value called _Time, which is again automatically supplied by unity - i'd say it's a global variable in shaders - and it has x, y, z, w components. it's a little esoteric, but basically the four components are the same clock at different scales: the y component happens to be seconds, while the x component is seconds divided by 20.
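for reference, this is unity's actual layout of that variable - _Time is a float4 of (t/20, t, t*2, t*3), with t in seconds:

```hlsl
float slow   = _Time.x; // t / 20
float secs   = _Time.y; // t, in seconds - the one used below
float fast   = _Time.z; // t * 2
float faster = _Time.w; // t * 3
```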
so if you want different speeds you can use different components of it, or you can just multiply it yourself. so, we have the x offset here - let's add time to it. if we do _Time.y, we're adding the current time in seconds, so if we go back to unity and make sure that "animated materials" is enabled... there we go, that looks very trippy. let's slow it down a little... there you go - now you can see we've got this little pattern animating over time. isn't that neat? i have to keep rotating the camera, because for some reason the scene view is frame rate limited to a really gross frame rate, i don't know why. anyway - these are some of the things you can do relatively easily in shaders, and like we talked about before, this is the kind of thing you do if you want to make vfx: maybe you want a ring around something on the ground... actually, we can just do that. i don't know what the uv coordinates are on this thing, but let's make some sort of vfx thing. first i'm going to flip this so we're on the other axis, and i want to reverse the direction of the movement - all we need to do is negate time. we could hit play, but i don't want to mess up the play state. so if we negate the time, it starts moving in the other direction. alright, let's scale this a little... so maybe this can be something like a ring around a player, an effect or something. and then let's make it fade out, because we don't want it fully visible at the top - we want a ring along the ground that fades out the higher we go. for that we need a value that changes along that axis, and again, the way you debug things in shaders is that you usually just output some value. the y coordinate of the uvs - yep, that one goes from zero to one - we can use that to tweak how the pattern looks across that space. one easy way of fading something out to black is to multiply by it: if you multiply by a low value, it gets more and more suppressed toward black, and if you multiply by one, it doesn't change at all. so if we take our t value and multiply it by the y coordinate... now it fades, although in the wrong direction - so we reverse it with 1 minus the uv coordinate, and now we have this fading effect from the ground up. we haven't done anything related to blending yet, though - we probably want this to be additive or something. we could go into that... i don't really know if we should move on to lighting, or if you want to learn about blending already. "is it cheaper to make a thing's selected color change with a shader, rather than switching materials?" it depends, but you can do either - i don't think it's going to matter that much.
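before moving on, a rough sketch of roughly where the ring effect stands at this point - the exact axes, wave counts and scales here are assumptions, not the exact values from the stream:

```hlsl
float xOffset = cos(i.uv.x * TAU * 8) * 0.01;                // the wobble
float t = cos((i.uv.y + xOffset - _Time.y * 0.1) * TAU * 5); // scrolling wave
t = t * 0.5 + 0.5;                                           // remap to 0..1
t *= 1 - i.uv.y;                                             // fade out toward the top
return float4(t.xxx, 1);                                     // grayscale output
```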
alright, let's do blending in that case - i'm just going to ignore the caps lock on that message, because i don't really care about the caps. so, blending. let's say we want this to actually look more like a vfx type of thing. right now there are a lot of things wrong with it: we can't see anything behind it - if we have something sitting behind this object, it gets fully obscured - because right now we're rendering fully opaque. there's no way we can have anything close to transparency at all. so one of the first things we need to do is think about blending, and blending is a little weird in the way it's set up. quick little explainer: when you have a shader like this, you get a color out of the fragment shader, and in terms of blending, that color is called the source color - usually shortened to src. then you have the background - the destination you're rendering onto, whatever is already behind the object - usually shortened to dst. and the way blending is set up in shaders - the very cheap version of blending that doesn't require extra render targets; there's a term for this that i just forgot... back buffer blending? anyway - the way it works is: you take the source times some value - i don't know what to call it, let's call it A - and then it's usually plus, but it can be minus, and it can be a reversed minus, the destination color times B. so when you define blending - when you want to change the mathematical formula that determines how your output blends with the background - the things you can modify are A, B, and that operator. "how can we use these alpha colors or the shader to apply textures or colors on top of them - for example replacing the white with a texture and the black with another?" we're going to get to textures later, but not right now. "is there time to talk about pre-multiplied alpha?" i don't know if i want to go into that - it's so frustrating, every time i talk about it on twitter i get some extremely frustrating person who has dedicated their life to reading about pre-multiplied alpha and takes the time to nitpick about how i used some term incorrectly and pretends i don't know what i'm talking about, and it's just really annoying. anyway, i shouldn't let that affect y'all - i don't know if it's going to matter that much in this case, it might be worth mentioning. okay - so the way blending works in shaders is that you need to set A, the operator, and B in order to get the effect you want. that's what you have to work with. with this in mind you can do different things. let's say you want additive blending - that's the one i was showing earlier that kind of makes things brighter. it basically takes the background and just adds to it, straight up - mathematically equivalent to just adding. you can see that additive blending just adds light; it doesn't darken anything - in fact, it's completely incapable of darkening, unless you have negative values, i guess. additive rendering is very useful for flashy effects, light effects and whatnot - it's very commonly used for fiery things, because you get these nice orange gradients out of it that people usually want when doing flame particles. so this is additive blending - and i just totally drew that on top of everything else.
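written out, the formula being described here is: final = src × A (+ or −) dst × B - where src is your fragment shader's output, dst is whatever is already on screen, and A, B and the operator are the three knobs you get to turn.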
alright, so, additive blending: if we look at this formula, what do we do to make it just be source plus destination? well - we shouldn't be in additive mode anymore, there we go - all we need to do is set A to one, the operator to plus, and B to one. and that's it, that's how you get additive blending: the source color - the output of this shader - multiplied by 1 is just the source, plus the destination multiplied by one, which is just the destination. so it becomes source plus destination. very straightforward. okay, let's see what else we have - there's multiplicative blending, usually just called multiply. if we want to multiply these colors, we want source times destination, and in this case we set A to the destination color and B to zero. what happens then is that it shoves the destination color into A, giving us source times destination, plus destination times 0, which removes that term entirely. sometimes when you look at blending modes it's important to read them in this structure, because when you look at a shader you're not going to see anything that says "additive" or "multiply" or "alpha blended" - all you're going to see is the words One and One, because plus is usually the default. so, back in the shader: the blending mode is defined in the pass, before the actual cg program happens, because it's not shader code - it's shaderlab. the way you define the blending mode is that you type Blend and then enter your A and your B factors, and in this case we want One and One. they're always words - you cannot enter custom formulas here, there's a very specific set of things you can supply - but this basically means additive. we can add the other one too, just for completeness: multiply would be Blend DstColor Zero - the destination color, and then zero. but we're going to do additive in this case.
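in shaderlab, the two modes we just derived look like this - just the A and B factors from the formula, spelled out in words inside the pass:

```
Blend One One          // additive:  src * 1 + dst * 1
// Blend DstColor Zero // multiply: src * dst + dst * 0
```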
okay, let's see what this looks like. it looks like it's sort of additive, but we now have a separate problem: as soon as you get into things that are transparent, you run into the issues of the depth buffer, and whether or not you should write to the depth buffer. if we add a regular unity sphere, you can see it sort of kind of works, but it's also got some sorting issues. the problem is that our effect is currently writing to the depth buffer. there's a lot to go into with the depth buffer, but it's basically a big screen space texture where some shaders write a depth value between 0 and 1, and when other shaders want to render, they check this depth buffer to see whether their fragment is behind or in front of what's already there - and if it's behind, it won't render. so picture the camera, and an object writing to the depth buffer: that write basically makes the depth buffer go from the far clip of the camera everywhere else, to values really close to the camera where we wrote the object - it carves this little cone into the depth buffer. then, if we want to render something behind that object, it gets skipped: the renderer detects that something is already in front, so it doesn't render it. that works fine for opaque objects - things that are not partially transparent - but whenever you draw something transparent, something that can still show the destination color behind it, we can't write to the depth buffer like that. so in this case we want our effect to not write to the depth buffer; then the transparent thing renders, and any opaque thing behind it writes to the depth buffer instead, so it doesn't clobber our additive object. did that make sense? it's a little technical, but you generally don't have to care that much about how the depth buffer works, beyond the fact that it's there to optimize away things that are behind other things. "so it's not a draw that is skipped, right?" no, the draw will still happen, but the fragments are discarded very early - even before your fragment shader runs. so it's a very good optimization in that sense, because you skip the fragment shader entirely, but it has nothing to do with occlusion culling - it's only a rendering feature. there are many ways you can play with the depth buffer, and basically two knobs: you can change how a shader reads from the depth buffer, and how it writes to it. "would that mean it's kind of heavy to render multiple transparent objects in front of each other at the same time?" yes. the thing about rendering a stack of transparent objects is: say this is the camera again, and you have a bunch of planes that all have some additive texture on them, because it looks cool. the gpu has to render every single one of them - it can't skip any of the fragments. so if you're not careful, you can run into situations where you start lagging. in some games this is actually noticeable - it's pretty common with particle effects. if you make large smoke particles, you usually have this big nest of quads, all with large cloud textures on them, and in older games especially, if you walk into a particle effect like that, your frame rate can just tank completely, because those quads cover a large portion of your screen and the gpu essentially has to do full-screen renders of all of them. if you're far away it's totally fine - it's a tiny number of pixels - but if you get really close, you have many, many layers to render. the technical term for this concept is fill rate: you want to be careful not to have too high of a fill rate.
so, what's happening here is that this object is writing to the depth buffer, but we don't want it to. (there's also... i don't know, maybe it's going to be an issue, we'll see.) the setting for whether we write to the depth buffer is ZWrite, and we want it off. let's recompile - and now it seems fine. you might notice, though, that we have a separate problem: this one is being rendered before the sphere, and we want it rendered afterwards, not before. what's happening is that the sphere is writing to the depth buffer and just overriding everything we're drawing here. in unity's built-in render pipeline there's an order in which different types of geometry tend to render: the first thing is the skybox, the second is opaque - it's called "geometry" in the render queue - then you have transparent, where you usually group all the additive and transparent stuff, and after that you have overlays, like lens flares, which tend to go above everything else. that's usually the render order. so when you're dealing with transparent stuff, you usually want to set the RenderType to transparent, and you also want to set the render queue to transparent. both of these relate to the draw order - well, actually it's mostly the queue that does; RenderType is mostly there for tagging purposes, for post-processing effects, while the queue is the actual order things are drawn in. alright, let's go back... that did not work... let's see, unity render... not that one... oh, it's called "Queue" - rip. there we go, Queue = Transparent, recompile - there we go. so now we're drawing this additively, and it seems like everything works: if we copy it a few times, you can see they all render, they don't obscure each other, and they add together into this very glowy thing. one more thing we might want: what if we want this to be double-sided? right now we don't see the other side of it - the back faces are just gone. this is called backface culling, and it's the default behavior when rendering shaders in unity. you can tell your shader to render back faces, front faces, or both: Cull Back is the default value, meaning backface culling is on - culling usually means removing something. you can set it to Cull Front, and now it flips: it's not rendering the front side of the triangles, only the back side. and if you want to render both, you say Cull Off - now it renders both sides of the whole thing, so we even have the front side of it again. okay - "why do you have Queue and RenderType together, both set to transparent?" so, RenderType is basically a tag to inform the render pipeline of what type of thing this is - it's usually used for post-processing reasons, and it does not actually change the sorting, but it's a useful tag for post-process effects. the Queue is what changes the render order. unity has a bunch of preset categories - skybox, opaque, transparent, overlay, and so forth - and they're rendered in that specific order. we want all of our transparent objects to render after everything opaque has rendered, because we don't want the depth buffer picking up depth values that would destroy all of our transparent and additive effects - everything transparent, additive, multiplicative, all of that we want rendered after everything opaque. that's why we set the queue to transparent.
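collected in one place, a sketch of where the pass settings have landed for this additive effect - the tags go on the sub shader, the rest goes inside the pass:

```
Tags { "RenderType"="Transparent" "Queue"="Transparent" }

// inside the pass, before CGPROGRAM:
Blend One One // additive blending
ZWrite Off    // don't write to the depth buffer
Cull Off      // draw both front and back faces
```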
okay, so this looks kind of neat. i don't like that the upper and lower parts are rendering, though - we can sort of hack that away by multiplying t by... what's the thing... abs of i.normal.y is greater than zero point something... there we go... no, less than. now i've hacked away those two surfaces: basically i'm using the normal of the surface, and if that vector points either almost entirely up or almost entirely down, i multiply by zero to remove those parts. it's a little hacky, but you know what - that's how shaders do. okay, what else do we want to do with this? i think this illustrates my point that you can use all sorts of patterns to create these types of effects, where you want some highlight circle around an object - in fact, let's put an object there. there we go: this is now marking the holy sphere. i feel like there was something else i was going to talk about with blending... oh - there are other things you can do with the depth buffer. while this effect shader no longer writes to the depth buffer, it still reads the depth buffer: if we make it intersect the sphere, you can see those pixels are not rendering. that's not something you always want - sometimes you want to disable it. so on top of ZWrite, we also have the test for how the depth testing should work when you're presented with a depth buffer that already has some value in it. the default value is called LEqual - a little esoteric, but it means: if the depth of this fragment is less than or equal to the depth already written in the depth buffer, show it; otherwise don't. if you want to always draw, you can set the ZTest to Always - now even if this is behind the sphere it still draws, it doesn't care about the depth buffer whatsoever. you can also reverse it: set the ZTest to GEqual - greater than or equal - and now it only draws when it's behind something, and doesn't draw when it's in front. that one is super useful for things like a character effect where you want to show a ghostly version of a character when they're behind a wall: you have two shaders, one that renders when you're behind something and one when you're in front, so even when you're partially obscured you can see both versions. you can do all sorts of hackery with the depth buffer, but usually you want the default, LEqual. "is there a NotEqual too?" i don't know if Equal or NotEqual are ever going to be useful, because there's only a sliver of infinitesimally small intersections where that would matter - it would only apply exactly where the depths are identical, which happens at very, very few locations. there might be one - if you search for it you can probably find it, i just never use it myself. the ones that do exist and matter are things like LEqual versus Less - the difference between the two can be useful if you want to prevent z-fighting - but other than that i don't really know what the use of Equal or NotEqual would be.
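for reference, the depth test options we just went through, as they'd be set inside the pass:

```
ZTest LEqual   // default: draw if we're at or in front of what's in the depth buffer
// ZTest Always // ignore the depth buffer entirely, always draw
// ZTest GEqual // only draw when behind something - handy for "ghost behind walls" effects
```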
um, okay - do we still have the colors? we do. now that we have this whole thing, we can still use all of it with our colors. let's make a new variable just for readability - let's call it waves - and a float called topBottomRemover, which is the normal-based hack from before. okay, let's say we now want to bring back our colors: a color at the top of this and a color at the bottom. let's call this one gradient, and base it on the y component of the uv coordinates - we can just return the gradient to show what it looks like. so this is the gradient: color a at the bottom, color b at the top. i set it to fade out to black, because black doesn't change anything with additive rendering. so, maybe color a at the bottom is some sort of cyan, and the top one something more blue - there we go. and then we multiply the gradient by waves, and now we've got waves with colors: cyan at the bottom going up into a darker bluish tone. pretty much everything you're doing here is manipulating math and colors and vectors to get the effects you want - and again, you can range-remap everything if you want to change the contrast or the falloff of this; all of that is really easy and straightforward to change once you know the math behind it. you do kind of need to know a lot of math to do this stuff, but once you harness it you can make almost anything - it's wild how much you can do if you know how to write shaders. i know it's empowered me a ton in all the projects i've worked on, so it's super, super useful.
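as a sketch of how the final color comes together - assuming _ColorA and _ColorB are the two color properties, and t is the wave pattern from before:

```hlsl
float topBottomRemover = abs(i.normal.y) < 0.99 ? 1 : 0; // kill the up/down-facing caps
float waves = t * topBottomRemover;
float4 gradient = lerp(_ColorA, _ColorB, i.uv.y); // cyan at the bottom, blue at the top
return gradient * waves; // black multiplies to nothing under additive blending
```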
okay, let's see - which things should be covered during the next hour? "emission would be cool" - emission depends on what you mean. in some sense, everything we've been doing is just emission: none of this is affected by lighting, none of it is shaded, we have no shadows on anything. usually when you talk about emissive shaders, that's what you mean - shaders unaffected by shadows. but if you want a bloom effect or something, that's usually a post-process effect, not something you make localized on objects in the world. so it kind of depends on what you mean by emission. "can we get some wavy effects perhaps?" isn't that exactly what we've been doing? oh - distortion - that's easier once we have textures, and we haven't gone through textures yet. oh, vertex offset! that is something we should definitely talk about - i was going to cover it earlier and forgot, so i'm really glad you brought it up. alright: vertex offset. paste the existing code, call it VertexOffset, and let's create a tessellated plane. what a beautiful plane - if you're working with shaders in unity, you're going to see this color a lot. so i have a plane here, and it's tessellated - i've subdivided it into a lot of different triangles, which means we can deform it, because we have a lot of geometry to deform with. let's make a material out of our vertex offset shader... there we go... and it's completely invisible, so let's modify the shader. first, let's just output the vertex color - sorry, the uv coordinates. and i don't want this one to be additive, so let's set all of that back to the defaults, and set the RenderType to opaque - the default render queue is called "geometry" instead of "opaque" for some reason, which is a kind of annoying thing to have to remember. alright, look at this mango square. let's bring back our wave along one of the axes, and remove the other part, so we just have the pure wave. okay - here's a neat thing you can do. so far we've mostly been talking about the fragment shader, and we kind of just passed by the vertex shader, which you quite often do - you don't always touch the vertex shader, you often just pass data through it. but a really powerful thing you can do in the vertex shader is change the positions of the vertices. it can sound a little trivial, but consider the number of vertices in this mesh - there are loads of them; in fact there are 8192 triangles in this plane, which is quite a lot. doing this on the cpu - making a for-loop iterating over all of these vertices - would be expensive, because the cpu is designed to be very specialized and very good at doing advanced things, not at chewing through massive amounts of data. the gpu has an entirely different architecture, with massive numbers of shader processors that can do a lot of work in parallel, and that's exactly what we're using: a single shader program - the same code - running for every one of these vertices. with that parallel processing power, doing this on the gpu is almost free compared to the cpu, because this kind of work is so rudimentary and basic for a gpu. this is also why gpus are sometimes used for computing and processing data completely unrelated to graphics - they're just that good at processing parallel data and doing that type of math. "how do you offload processing to the gpu?" you write shaders - instead of writing cpu code for whatever you're doing, you do it in shader code or compute shaders, and then you get the data back out, usually by rendering to some buffer or texture. okay - so let's take this wave and straight up copy it into the vertex shader. it's not happy with the uv name, because we have a different word for it there: it's v.uv0.y, since now we're reading directly from the mesh data instead of the interpolators - we're on a per-vertex level now, not per-pixel or per-fragment. so now we have the same cosine wave, and we can use it to modify the y coordinate of the vertex: v.vertex.y = wave. and in this case we can actually leave the negative values in, so we can remove the remapping.
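a sketch of the vertex shader doing the displacement - this runs once per vertex, reading straight from the mesh data. the struct names are the ones from earlier, and _WaveAmplitude is the property we add right after this; whether the wave includes the time scroll at this exact point is an assumption:

```hlsl
Interpolators vert (MeshData v)
{
    Interpolators o;
    float wave = cos((v.uv0.y + _Time.y * 0.1) * TAU * 5); // -1 to 1, left unremapped
    v.vertex.y = wave * _WaveAmplitude;        // displace the local space y coordinate
    o.vertex = UnityObjectToClipPos(v.vertex); // then transform to clip space
    o.normal = UnityObjectToWorldNormal(v.normal);
    o.uv = v.uv0;
    return o;
}
```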
so now we have this wave that we're setting the local space vertex y coordinate to, before we transform it to clip space. if we go back to unity... we get this insanity, because the waves are huge right now - we haven't set the amplitude, so right now it's one meter, and that's why we're getting this interesting pattern. let's add a parameter for that: wave amplitude, probably ranging from 0 to 0.2 or something, plus a variable for it, and then all we need to do is multiply the wave by the amplitude. there we go - now we've got this wavy-looking thing, and we can tweak the wave amplitude; i want it a little softer. you can also press play to make unity not run at like 24 fps. so now we've got a very, very rudimentary, basic water shader - and again, this is basically free. if you think about the math we have to do, it's very simple - it's not that much math - and we get this whole animation out of it. "do we have perlin noise next?" nope. well - we might do noise with textures, but we're not going to generate perlin noise in the shader, because that's generally a bad idea - it's very heavy, so it's not very useful in games. it's mostly useful if you're doing shadertoy stuff and want to impress your friends with math, but this is focused on game dev. and obviously you can keep changing these things - if you want to combine multiple waves, maybe you add a wave2 that runs on the x axis instead and plays along with the first one, and now you have a multiple-wave system interacting. although the colors are a little misleading right now: we didn't modify the fragment shader, we're only doing this in the vertex shader. "how do you ripple out from the center?" so, if you want a ripple from the center, consider the space we're doing this in - actually, let's go back to the fragment shader for this, it's a little easier to visualize there. right now the domain of the animation is a linear coordinate of the uvs of this mesh. if we want to make it radial - based on the radius, the distance from the center - then that distance is the input data we need, and right now we're feeding it a linear coordinate instead. so instead of passing that in, we pass in the distance to the center. to get that value, we first take the uv coordinates and remap them to a negative-one-to-one range, in order to center them. if we look at the uv coordinates again - there we go (oh, this should be based on the x coordinate) - we've got zero here, increasing on the y axis, increasing on the x axis, all the way from zero to one. but we want zero in the center, so we do the exact remapping thing we did before, from the zero-to-one range to the negative-one-to-one range - a relatively straightforward transformation: multiply by two and subtract one. now we've centered these coordinates, where 1 is at one edge and negative 1 is at the other.
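as a sketch, the recentering step is the same remap trick as before, just applied to both uv components:

```hlsl
float2 uvsCentered = i.uv * 2 - 1;       // 0..1 becomes -1..1, so (0,0) is now the middle
return float4(uvsCentered, 0, 1);        // debug-visualize the centered coordinates
```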
you could also use an inverse lerp for this if you want - it's a little more descriptive that way, but it kind of depends on how you want to structure your code and what reads best to you. let's make a variable: float2 uvsCentered. okay - now, to turn this into a radial coordinate, remember that these centered uvs are vectors: vectors going from zero in the center out to some point in this space. so we can calculate a radial distance as the length of that vector - is it magnitude? no, length - and then output it. go back to unity, and we have a circular pattern, because every pixel is now showing its distance to the center. so far so good, i hope. so now that we have this radial distance, instead of using a linear coordinate we can feed the radial one into the same wave function we had before: where we had i.uv.x, we use the radial distance - and now we have this very hypnotizing shader. there you go. and then you can do all the things you might want on top of that: maybe you want it to fade out at some range, so the ripples don't go on forever - then you want to multiply by something that ramps down toward the edges. we can multiply the wave by the radial distance, although that has the opposite effect, because the distance is zero in the center, so it fades out the center instead. so we invert it - one minus - and now the ripples fade out the further out you go. okay - we did this in the fragment shader, but obviously we can do it in the vertex shader too. let's make a function out of it: GetWave. there we go - let's make sure it still works... yep, cool. now we can use the same function in the vertex shader - although, because this is a language like c, and c is absolute garbage, you need to put the function above the place where you use it, and i hate it. so make sure GetWave is above anywhere you want to call it. then, in the vertex shader: v.vertex.y = GetWave(v.uv0), recompile... okay, so now this looks like some sort of tower of babylon, the way it's usually depicted - or a birthday cake. wait... a birthday cake? this is a huge cake. we want to multiply by the amplitude that we had before - "fountain of wedding cake" - yeah, one of those. there you go: now you have this goopy three-dimensional wave generator thingy. although this one only goes upward - i feel like the waves should have negative values too, so let's remove the remap from GetWave... oh, that's going to get complicated if we want to share the same function... you know what, it's okay, whatever - remind me later.
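a sketch of the GetWave refactor - in cg/hlsl, like in c, the function has to be defined above wherever it's called. the remap question from a moment ago is left unresolved here, just like on stream:

```hlsl
float GetWave(float2 uv)
{
    float2 uvsCentered = uv * 2 - 1;            // recenter to -1..1
    float radialDistance = length(uvsCentered); // distance from the center
    float wave = cos((radialDistance - _Time.y * 0.1) * TAU * 5);
    wave *= 1 - radialDistance;                 // fade the ripples out toward the edges
    return wave; // note: whether to remap this to 0..1 was left open
}

// fragment shader: return GetWave(i.uv);
// vertex shader:   v.vertex.y = GetWave(v.uv0) * _WaveAmplitude;
```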
um - i think it's better to go through textures now; that will also open up more possibilities for the assignments i'm going to give you. so let's do textures. let's create a new shader - unlit shader - and call it Textured; this is just going to be a gallery of very trippy shaders. let's switch back to another tessellated plane, put it at zero, move away the other groups, and supply our new material - good, it's called textured, there we go. okay: textures, let's jump into textures. first i'm just going to murder these shitty brackets - we don't care about that pass-through stuff, and we don't care about fog - no fog, no fog, fog clutters up my shader. okay, cool. the default state of unity's unlit shader actually already has a texture, so we can just look at that to see how it works. you've got a property, just like any other property - like the colors and values and whatnot - but it's set to 2D, and that specifies that this should be a 2d texture. there are 3d textures, and there are cubemaps, and they all work in slightly different ways, but in our case all we care about is 2d textures. _MainTex is kind of the default name for unity's main texture property; usually that contains the color information of the surface. alright - pretty standard stuff for the rest: appdata is the mesh data, and v2f is our interpolators. the thing that is new here is the sampling. you can see we have the texture here, referring to the _MainTex variable, which you need to define in order to be able to sample from the texture - it's declared as sampler2D. good to remember - i always forget, and i always have to create a new shader and check what it says in there. then there's _MainTex_ST, which is optional - you don't have to define it - but as you might have seen in unity, textures have this tiling and offset thing in the inspector, where you set how the texture should be mapped. that's what this is, and it's sort of a magic variable name: if you name it exactly the same as your sampler plus _ST, it will contain the scale and offset values. unity then has a function called TRANSFORM_TEX, which basically takes those tiling and offset values and applies them to some input uv coordinates - all it's doing is scaling and offsetting the uvs, so we get scaled uv coordinates out. but it's optional: you can skip it entirely and just use v.uv, in which case it won't scale and those inspector values won't matter. i almost never use the tiling and offset values, so usually i remove this, but let's leave it in this time in case we want to tweak it. what we've got here is basically the same setup as before - the vertex shader and the fragment shader - but there's a special function called tex2D. tex2D means we're going to pick a color from the texture, and its inputs are: which sampler - which texture - we want to use when sampling, and where in the texture we want to get the color from. uv coordinates are used for exactly this purpose, and the space we're working in for textures goes from zero to one, also sometimes known as normalized coordinates. so we're getting a color from the texture, and returning it - that's basically it. so in our textured material we can pick a texture - let's use this grass texture, or moss texture, there we go. so now we are sampling from a texture.
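the texture-related parts of that default shader, as a sketch - the sampler name, the _ST convention and TRANSFORM_TEX are unity's own, from the default unlit shader and its includes:

```hlsl
sampler2D _MainTex; // matches the _MainTex property; needed to sample it
float4 _MainTex_ST; // "magic" name: tiling in xy, offset in zw

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    // TRANSFORM_TEX is just: v.uv * _MainTex_ST.xy + _MainTex_ST.zw
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    return tex2D(_MainTex, i.uv); // pick a color from the texture at these uvs
}
```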
Tiling would be used for scaling or stretching the texture if you want to, and so forth; that's what the tiling and offset values are for. But again, they're optional, you don't have to use them, and you can also do this manually the way we did before, where we directly modify the UVs. So if we do something like o.uv.x += _Time.y * 0.1, there we go: if we now go back, you can see that it's scrolling, because we're literally adding to the UV coordinates, which in shaders means offsetting them. So we've now got this scrolling texture, useful for stuff like rivers; usually you want panning UVs for that. There are many, many different ways you can modify UV coordinates to do stuff like this. You can even use textures to modify the UV coordinates that you then use to sample another texture. It's kind of expensive to do that, but it has its uses. I know Valve used this in Portal 2 and in Left 4 Dead 2, where they set up a texture to be used as a flow map, meaning a texture that tells the shader which direction things should scroll in, and then they had all sorts of shader tricks to make sure you don't get infinite stretching and whatnot; they were crossfading between different states as the texture flowed along some pattern.

Anyway, you can do lots of stuff by modifying UV coordinates, and you can map textures in almost any way you want; you don't have to use UV coordinates. This is a really, really important point, and I think it's a shame that the semantic is called TEXCOORD, which almost makes it feel like it's only meant for textures, since UVs are "texture coordinates". That is an association you should make sure you don't internalize, because textures can be sampled with coordinates from any space, and UV coordinates are not only for mapping textures. In fact, in most of the work I've done in the past two years, the UV coordinates have had nothing to do with textures; I use them for passing data and other information. So it's important to know that UVs can be used just for passing data that is completely unrelated to textures, and that textures don't have to be sampled with UV coordinates, even though that is very common, of course.

Just to show an example of this: "Can you name these things anything you like?" Yes. Well, not the semantics, the semantics have to stay the same, but yes, these are variable names. So let's say that instead of mapping this in UV space, we map it in world space. We have world coordinates, and we want to be able to send the world coordinates from the vertex shader to the fragment shader, so we need them in our interpolators. Okay, so how do we get the world space coordinates? Well, we already have the local space ones, which is v.vertex. I usually call the interpolator worldPos, because it's literally the world position of this pixel after interpolation. What we can then do is a matrix multiplication: we use mul, which is the matrix multiply function, we pass in a matrix, which is unity_ObjectToWorld, and then we multiply that by the local space vertex position.
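As a sketch, the scrolling tweak and the extra interpolator might look like this (worldPos is my own name for it, not anything Unity mandates):

struct v2f
{
    float4 vertex : SV_POSITION;
    float2 uv : TEXCOORD0;
    float3 worldPos : TEXCOORD1; // extra interpolator carrying the world space position
};

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    o.uv.x += _Time.y * 0.1; // _Time.y is time in seconds, so the UVs pan over time
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz; // local space to world space
    return o;
}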
You can also use the model matrix, if you want to be a little more technical about it; that would be the same thing. All right, so now we're transforming this from local space to world space by multiplying it by the model matrix, and then we can use it in the fragment shader. And as always, if you want to debug stuff, output it as a color. So let's return i.worldPos. Oh, the world position is a float3, so we might want to make sure the types line up. Actually, it's v.vertex, and I'm not sure the fourth component of the vertex is 1 here, so I'm just going to make sure that it is. "Possibly dumb question: does multiplying by a matrix have to be done with mul, or can you just use matrix times vector?" I am not sure. I don't think you can do that, but it might be possible; I just always use the mul function. mul also has the advantage that you can flip the arguments to effectively transpose the matrix. But maybe you can do it with just the multiply operator as well.

I don't know if you remember when we talked about this during our math class, but when you do a matrix multiplication that transforms a three-dimensional vector, you usually pass in a float4, a vector4, where the fourth component matters: if it's 0, the matrix transforms it as a vector, a direction, so the offset (the translation) is not included, only the orientation and scale; but if it's 1, it transforms it as a position, meaning the translation is taken into account. In this case it's a position. Okay, so we want to output worldPos.xyz; the alpha channel doesn't matter in this shader, so we can give it whatever value. There we go. And now we can sanity-check it: we're outputting the world space position, so green increases when we move the object up, and we can see that it gets more red as we move along the x axis, so it very much looks like we do have the actual world coordinates here. If we scale it up we can see the unit cube, the zero-to-one range, over there, so that seems valid. It's especially visible if you put it on a cube; I always love the color gradients you get when visualizing world position like this. There we go, we've got an RGB cube. So this is showing the world position of the current fragment we're rendering. Cool, we've got world position. And world coordinates are in 3D, along all three axes, and we can slice them however we want; the rotation doesn't matter, we just get a slice of that unit cube. So we know this value works. Actually, I'm curious whether 1 is the default value of the fourth component here, so I'm just going to try. Seems like it is. Cool, that makes the formula a little easier. So this is object-to-world.
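Putting that debug view together, a minimal sketch (using the worldPos interpolator from before):

fixed4 frag (v2f i) : SV_Target
{
    // debug: visualize world position as a color; alpha is irrelevant here
    return float4(i.worldPos, 0);
}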
So what do we want to do with this? Right, we wanted to map this texture with world space coordinates instead of using UV coordinates. So if we think about it: which axes do we want to map the texture to? It looks like it would be the x axis and the z axis, the blue one. It wouldn't be the x and y axes, because that would be a projection along the z axis, a side-on projection; we want a top-down projection for our 2D texture. So to get 2D coordinates for our texture, we can build a top-down projection: we take i.worldPos.xz, which gives us a vector2 built from the x component and the z component of the world position. We can output this as well, and we can see that we now indeed have a two-dimensional coordinate here: the y position doesn't matter at all, and it's projected top-down. So if we rotate the object, we kind of lose data; it's not really a planar projection along that axis anymore.

Okay, we're going to use this as our coordinates for the texture. So we don't actually need the UVs; instead we can just pass in our top-down projection as the texture coordinates, and if we then go back to Unity, our texture is now mapped in world space. If we move this plane, it doesn't matter where we put it or even how big it is; this texture is always going to repeat in world space, so it doesn't really matter how we move the thing around. And since it's a top-down projection, vertical surfaces get all stretched and funky; if we start rotating it you can see stretching along the edges, which looks kind of gross, but it works pretty well for relatively flat stuff. This is a very common technique if you're doing terrain: with terrain you quite often have vast stretches of land where you blend between different textures, and sometimes it's just easier to use world space coordinates for that instead of having to update the UV coordinates on your mesh every time you change its size or whatever. So it can be really useful to have in your toolbox. Yeah, we do not have time to talk about normal maps; but there's lots of stuff we can do now that we have textures.

Okay, so something I mentioned before is that a lot of what we do is all colors in the end, all float4s in the end, so we can replace almost anything with anything and it'll just work. Just to show an example of this, let's go to good old Photoshop and make a texture. It's going to be a beautiful texture; we fill the background with black, then draw some patterns of some sort. Cool. What a pattern. Actually, let's do a little bit more of the islandy bits. Perfect. Rate my pattern. Why did I name it pattern.png? What a weird name. Okay, we dump that into Unity. Cool. Now let's go back to our shader; we want another texture input, and we're just going to call it _Pattern. There we go. Oh, I didn't really explain what this last part of the property is: it's what gets used if you have nothing assigned to the property. So if you want the fallback to be white when no texture is assigned, you type "white". I think there's white, there's black, there's gray, and then there's "bump", which is basically a flat normal map color, a normal that points directly upward. Okay, so pattern. Let's bring in our pattern. We don't want scale and offset for our pattern, so I'm just going to have the sampler there. "Is there such a thing as a single-channel texture, or do you always get RGBA?" There is such a thing as a single-channel texture. Not all of them are supported on all platforms, though, and some of them are only available as render textures rather than as texture assets, but yes, they exist.
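As a sketch, the property block and variables might now look like this (the display names are my own labels):

Properties
{
    _MainTex ("Moss", 2D) = "white" {}
    _Pattern ("Pattern", 2D) = "white" {} // "white" is the fallback when nothing is assigned
}

// ...and in the CGPROGRAM block:
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _Pattern; // no _ST pair here, since we skip tiling and offset for the pattern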
There are tons of different texture formats you can use. Okay, so I don't really want the scale and transform stuff anymore; we don't need it, so let's remove it, let's nuke it. Okay, and then we have our pattern. What we can do now is remove the UV animation. So we have the color of the grass; or the moss, let's call it moss, just to be a little descriptive. And then maybe we can have another value now that we have the pattern: let's write pattern = and sample that texture just like we did before, but for this one we just use the UV coordinates. So we've got our pattern; let's just make sure it works. Assign the pattern texture, and then also make sure we actually use it. There we go: this is the thing we just drew in Photoshop, the very beautiful ten-out-of-ten pattern.

So like I mentioned before: it doesn't matter whether a value comes from sampling a texture, whether it's UV coordinates, whether it's world coordinates. The things we've been doing before, we can do just as well using a texture sample as anything else. When we were looking at this earlier, we made that wave thing, the getWave function; even stuff like that can be driven by a texture. Let's change getWave to take a coordinate parameter, since it isn't radial anymore. Okay, so we can do this with textures as well: let's say we want a wave that's based on this pattern we just drew. That is completely legitimate and a thing we can totally do. So let's try it: getWave of the pattern... and we get an "undeclared identifier" error. Undeclared identifiers happen when you don't have TAU defined in your shader. There we go. So now this thing is based on the texture instead of being based on some math. It's a little compressed, looks a bit garbage if you zoom in; you can of course change the compression settings, and you can set compression to None, which turns compression off, but then it's obviously going to occupy more memory on your GPU. Might also want to do that so it's not flashing all the time. But yeah, like I mentioned, you can keep doing this over and over, swapping out values and spaces and coordinates to achieve all the funky effects you might want. Anyway, that's just an example of how you can swap these values out.

Okay, otherwise we now have this pattern; I should specify that we're using its x component, the red channel. So now we can do stuff like: what if we want grass wherever this thing is white? Well, we can use a lerp for that. Let's say float4 finalColor = lerp(...), blending from a constant in this case; let's make it red. So we blend between red and the grass, or moss, based on the pattern we just sampled (that's a float4), and then we return that color. So now if we go back, we're blending between the color red and our grass texture. And these are sampled in different spaces: one of them is sampled in UV space, so if I scale the object the pattern will also scale, but the grass will not. We can move this around and they move independently of each other.
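A sketch of that blend, assuming the v2f from before (with uv and worldPos) and using the pattern's red channel as the blend factor:

sampler2D _MainTex; // moss, mapped in world space
sampler2D _Pattern; // mask, mapped in UV space

fixed4 frag (v2f i) : SV_Target
{
    fixed4 moss = tex2D(_MainTex, i.worldPos.xz);
    float pattern = tex2D(_Pattern, i.uv).x; // red channel of the mask
    // red where the mask is black, moss where the mask is white
    float4 finalColor = lerp(float4(1, 0, 0, 1), moss, pattern);
    return finalColor;
}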
Which is kind of neat, right? So you can mix up these spaces as well; not everything has to use the same space. And then obviously you can also blend between different textures. So let's add another texture, let's call it rock; copy the variable. Then where we have the red constant, we can replace it with the rock, there we go, use the same top-down projection for it, and blend between rock and moss. Okay, let's go back to Unity. It's white, because white is the default and we haven't assigned this yet, so let's assign the rock texture, and there you go: now we're blending between these two textures, and they're also both in world space here. So this pattern is only changing where they blend; it's not actually changing the mapping of the textures themselves, because those are sampled in world space. And obviously this blend looks kind of horrible; it's very obvious that it's crossfading between two textures, so quite often you would use other noise textures to modulate the blend. But I don't know, we probably don't have that much time to look into that now, and I also need some time to create some exercises for y'all.
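With the second texture added, the blend might look like this, with _Rock as the hypothetical new property name:

sampler2D _Rock; // second ground texture, also mapped in world space

fixed4 frag (v2f i) : SV_Target
{
    float2 topDown = i.worldPos.xz;
    fixed4 moss = tex2D(_MainTex, topDown);
    fixed4 rock = tex2D(_Rock, topDown);
    float pattern = tex2D(_Pattern, i.uv).x; // the mask only decides where they crossfade
    return lerp(rock, moss, pattern);
}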
Okay, I'm going to make a Unity package out of all of these things and send it to you, so you have the code in case you want it. If you're watching this on YouTube in the future, it's going to be linked below in the description, same with the assignments; but we're still stuck in the past, so there's no description here. My students are saying the future people should like, comment, subscribe, and hit the bell for notifications. There you go. Any more questions before we're done? So, we did textures; that's really good, lots of fun assignments.

Oh, one final thing I should talk about, which is a very neat topic that some people might not expect to be a shader thing. Here's a fun fact; actually, let's use this texture as an example, it's probably a good one. When you think about a texture, it obviously needs to be loaded into your GPU's memory; the whole texture has to be resident so it's ready to be read from as soon as you want to draw something with it. Something really useful to know about textures is that what lives in memory is actually not just the image you're seeing in front of you right now. You can make it so that it is, but that's actually pretty bad, and it relates to something called mipmaps (or mip maps; usually just mipmaps). Mipmaps are basically copies of the texture, downsampled to smaller and smaller sizes, and the texture carries all of them, so it takes up more memory than the base image alone: a full chain of halved copies adds roughly a third on top (the series 1 + 1/4 + 1/16 + ... sums to about 4/3 of the original), and anisotropic variants can add more. Let me mock this up in Photoshop. Oh god, let me turn off snapping; View, Snap, there we go. So what happens is that your actual texture in video memory carries a downsized version of itself quite a number of times, usually halved in size at every step. Something like this; I'm not aligning these perfectly, so this couldn't actually be used in a real-world scenario, but you get the idea: all the way down to a very, very low resolution where the last level is just a handful of pixels.

So you might wonder: what is the point of having mipmaps? Why do we even do this if it just takes up extra memory? The reason is how shaders sample: if a texture is far away, then in a very naive setup each screen pixel just takes the color of whatever single texel its UV coordinate happens to land on, and that's what you get without something like mips. So let's actually look at what that looks like if we turn off mips; I'm just going to make both of these moss. This might be difficult to see on stream. "Is that actually how mips are packed?" It depends on a lot of factors. I think some layouts put one level on the side and then stack the rest in a sort of spiral-like manner, which is a little more space efficient. And if you have anisotropic mips it's not going to look like this at all, because then you need anisotropic data and not just isotropic data; so this is a simplification. Okay, so if we want to turn off mipping we can do that: on the moss we set the aniso level to zero and uncheck Generate Mip Maps, and now, it might be difficult to tell, but there should be a lot of noise happening. "Is aniso for anisotropic filtering?" Yes. You might be able to tell that this is kind of flickering and very noisy, and this is because we no longer have the mip information, the pre-downsampled versions of the texture. We could do the same for the pattern; it might be more obvious since there's more contrast in it... no, my bad, that example is too soft. But yeah, lots of noise here.

So the reason for having these mipmaps is that they are pre-downsampled: when the shader samples the texture, the GPU can sort of guess how far away the texture is. It actually uses partial derivatives to figure this out: based on the rate of change of the UVs you're sampling with, it picks a lower mip that matches the distance, or rather the sampling density, at which you're reading the texture.
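As a sketch of what that derivative-based pick looks like, here is roughly the isotropic calculation, written as shader code for illustration (the helper name is mine; real GPUs do this in fixed-function hardware, per 2x2 pixel quad):

float CalcMipLevel(float2 uv, float2 texSizeInTexels)
{
    // how fast the UVs change from one screen pixel to the next, in texel units
    float2 dx = ddx(uv * texSizeInTexels);
    float2 dy = ddy(uv * texSizeInTexels);
    float maxChangeSq = max(dot(dx, dx), dot(dy, dy));
    // log2 of the largest texel footprint gives the mip level; 0 = full resolution
    return 0.5 * log2(maxChangeSq);
}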
Then there's all sorts of stuff you can add on top of this. The very basic type of mips I showed you is purely distance based; it cares about angle only to a limited extent, because it's isotropic. You can enable anisotropic mips, which additionally store squished versions, versions scaled down along one specific axis, so on top of the isotropic chain you also have anisotropic versions, which is useful when you're viewing a surface at a very shallow angle. So if you've ever heard of anisotropic filtering, that's what it is: it stores some number of mips that are squished along a specific axis, pre-filtered so they look really good at low angles, which is really useful for games where the camera is close to the ground, like FPS games; not as useful for RTS games. Anyway, that's anisotropic filtering.

So if we enable mipmaps again but don't have anisotropic filtering, and we view the surface at a grazing angle, you might be able to tell that it's kind of blurry here, because now it's picking from one of the lower mip levels (or higher; I never remember which way the numbering goes). It looks pretty blurry along the horizontal axis. But if you enable the anisotropic mips, the blur disappears immediately, because now we have pre-filtered versions of what an ideally squished texture would look like along the cardinal directions. "Games tend to say it will induce a performance hit, but I rarely notice any; is it mainly just a memory issue?" I think it's mainly a memory issue, but yes, most games say it affects performance, so I would guess it actually does. Anisotropic filtering generally does take more memory, and it could be that the partial derivatives need to be separated per axis to calculate the right level, which might be slightly more expensive. But I've never had performance issues with it.

Oh, and then there's a third concept that's also really useful: filter modes. Bilinear is the usual filter mode, and you can also have point filtering. Point filtering is what gives you the Minecraft look: it doesn't blend between the colors of individual texels, it just picks the nearest neighbor, so that's completely unfiltered texture sampling. Bilinear means it blends between neighboring texels; bilinear is pretty much always the default at this point, and point sampling has a lot of aliasing issues unless you make a very specialized shader for it. And then you have trilinear. Trilinear means that not only does it blend between the texels smoothly, it also blends between the different mip levels. It used to be that without trilinear filtering you could quite often see a wandering ridge as you moved along a surface, where it switches between mip levels; I can see one on my screen right here, probably not visible on stream. But if you enable trilinear filtering it blends between the mip levels, and that seam goes away. Again, this is mostly useful for games where you have a lot of low grazing angles on terrain, like FPS games, in which case it's really good to enable all of these things; for RTS games it's pretty much just a waste of memory. Okay, cool, we did it, we talked about textures.

"How is the filtering applied? Is it a kind of post effect?" This is very low-level GPU stuff; it's not a post effect, it happens when you sample the texture. I don't know exactly where in the GPU it happens. But if you want, you can explicitly sample a texture at a specific mip level; in some cases you actually have to. So for instance, let's just return the moss, and let's add a mip sample level property. The mip level is an integer; well, sometimes you can sample at a float level, I've seen weird behavior between platforms with that. Zero is the highest level of detail, and higher numbers step down into the lower levels of detail. All right, so we've got the mip sample level; let's make it a float.
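A sketch of that property, with _MipLevel as my own illustrative name for it:

Properties
{
    _MainTex ("Moss", 2D) = "white" {}
    _MipLevel ("Mip Sample Level", Float) = 0 // 0 = highest detail, larger = lower detail
}

// ...and the matching variable in the CGPROGRAM block:
float _MipLevel;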
If you want to sample a specific mip level, you can use tex2Dlod, LOD as in level of detail. The coordinates you pass it are a float4, where the mip level lives in the last component. I think that's how it works; I don't know if you need both of the middle components. I think there's a variant where you have a float2 for separate values on the u and v coordinates, for anisotropic sampling, but in this case we just have a single value. So now if we go back, we can change the mip level right here: if we set it to zero we get the highest level of detail, and we're going to get that noise issue again when we're far away; if we raise the mip level, it looks blurry up close but looks just fine from far enough away.

So this is something you can use. Mip levels are sometimes exploited for specific effects; they're quite often used in lighting, like image-based lighting. Sometimes you store the convolution of light (convolution as in the scientific term) in a cube map, for instance, in which case the mip level can be picked depending on whether you want a blurry reflection or a sharp reflection; you can read off of textures at different mip levels. So sometimes you do want to get a specific mip level, for technical reasons, for artistic reasons, and so forth; usually you just let the GPU calculate the mip level automatically. Another caveat is that if you sample textures in the vertex shader, the mip level cannot be figured out automatically, because the vertex shader has no screen space derivatives to work with. So you can't use the tex2D function inside a vertex shader, but you can use tex2Dlod there. Just a little thing to keep in mind: if you ever want to sample textures in the vertex shader, always use tex2Dlod, because it takes an explicitly supplied mip level.
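A sketch of the explicit sample, continuing with the names from above; note the float4, with the mip level in the last component:

fixed4 frag (v2f i) : SV_Target
{
    // xy = uv, w = explicit mip level; tex2Dlod is also the way to sample
    // in a vertex shader, where tex2D can't pick a mip level automatically
    return tex2Dlod(_MainTex, float4(i.uv, 0, _MipLevel));
}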
Okay, that was a lot; sorry for rushing mips and textures, but I wanted to get through textures before we end for today. I don't know when I'm going to have the exercises ready. Any questions before we're done for today? "If the texture is rendered at a big distance and only a small mip is used, is the whole texture atlas still loaded into memory?" That depends on how you set up your texture pipeline. There are games that can dynamically stream mip levels in and out, which is why games sometimes prioritize the lower mip levels and load the higher-detail ones later. I know Unreal Engine 3, I think, had a pretty aggressive feature like this, where they prioritized load times really heavily, so you would sometimes spawn into some area and everything would be a smeary blur, and then it would get more and more detailed as time went on. So sometimes game engines do that, yeah.

"This is very informative, looking forward to the assignments in the next lecture, thanks." "For the sake of efficiency, can you prepare the assignments ahead of time, like before the course starts or something? That's also more likely to guarantee their quality." I don't like doing that, because I don't know how far we're going to get in the course. I feel like if I commit to specific things we're going to cover, I'm either going to rush some of them, or I'm going to have to draw some of them out in order not to get into things I want to cover in the next one. So I tend to prefer to do this on the fly, because then I know exactly what I've taught you and what I haven't taught yet, to make sure that the assignments only cover things you already know. But yeah, if you don't like the assignments, feel free to tell me if they're not good; I try to make them good regardless, even though I do all of this basically improvised. "So can you get these scripts as well?" Yes, I will send you all the things. Okay. "Also, thanks!" "Oliver: can you swap out the stone texture with a water one and make the waves again?" I don't have a water texture to hand, so not really.

Okay, cool, I guess we don't have any more questions. Feel free to play around with shaders until you get the assignments; just experiment, there's a lot of fun to be had experimenting with shaders. And I will send you all the code from today as well, so don't worry about that, including the textures. I should also fix all of the mip settings I destroyed. Oh, actually, if you want these assignments as well, feel free to head over to my Discord and I'll post them there, probably in the dev channel, in case you want the package as well, for those of you on Twitch. But yeah, Thor is screaming that I should wrap up the stream. I hope this was informative, or interesting, or fun, or whatever. Also, a huge thank you to Future Games for even allowing me to share this publicly. It really helps incentivize it for me, because I want to be able to reach a lot of people, and it's kind of a shame to me if I do a whole thing and it only reaches like twenty people. I think it's really nice that even people who aren't part of Future Games get to take part in this, so I'm really, really grateful to Future Games for allowing me to do this; it's really nice of them. Yeah, I think that's it. Thank you all so much for joining!
Info
Channel: Freya Holmér
Views: 302,358
Keywords: Acegikmo, Freya Holmér, Freya, Holmér, Twitch, Unity, Unity3d
Id: kfM-yu0iQBk
Length: 233min 10sec (13990 seconds)
Published: Fri Feb 26 2021