Best practices for Shader Graph - Unite LA

Captions
Alright, let's go ahead and get started. I do see a few familiar faces here, but for those of you that don't know me, my name is Charles Sanglimsuwan. I've been working in games for some time now across many different platforms, and today I'm part of Unity's enterprise support team, based in sunny Seattle. As a developer relations engineer I have one of the coolest jobs in the world, because I get to meet developers and help them unblock and accelerate their game development. Over the last two years I've seen our developer community do some really amazing things, though often at the cost of performance. So when I first learned about Shader Graph, I got really excited by the new possibilities that it opened up, and I wanted to make absolutely sure that our most important audience, you, our users, had the right knowledge to use Shader Graph in the best way possible.

We'll be covering a broad range of topics that benefit anyone interested in using Shader Graph, especially if you're in one of these roles. There is a lot of material to cover, and in case you miss anything, don't worry: we will post the slides online and the talk on YouTube shortly after Unite is over. In the next 40 minutes or so we'll discuss how Shader Graph works, how to choose the right master node, how to optimize your shader graphs, go over some workflow tips, and briefly share a roadmap for 2019.1.

So, how many of you here have used Shader Graph before? That's a lot of you, that's great. If this is the first time you've heard of Shader Graph, you may have a few questions already, like: what is Shader Graph, and why? At Unity we make no secret about our mission to democratize game development by solving hard problems to enable your success. Every game has a different look and feel and shouldn't be bound by Unity's standard shaders, but we're also acutely aware that writing shaders in their native languages has a very high learning curve. We decided to tackle this problem by creating Shader Graph.

Shader Graph is a tool that allows developers and artists to easily create complex and beautiful shaders that may have been out of reach previously, irrespective of whether they have an extensive background in graphics or not. By connecting individual nodes that represent the different functions, values, and inputs commonly used in regular shaders, you can create surface shaders to represent any kind of material, and thanks to the meticulous and granular control that users have, these output materials can range from simple effects to realistic organic matter. Out of the box, Shader Graph includes over a hundred and fifty nodes, with an extensive API that even allows you to create your own nodes. And even if you're a hardcore graphics programmer, Shader Graph can still be a really valuable tool to help quickly prototype your visual target, or even give you a starting point to avoid writing boilerplate shader code.

Official work on Shader Graph started in about May 2017, and it was released as a preview package earlier this year with Unity 2018.1. Since then it's been downloaded almost a hundred thousand times, so major kudos to our preview users. With the high volume of great feedback that we received, we've been able to evolve Shader Graph fairly quickly, releasing nearly two dozen updates since then. Using the latest build of Unity 2018.3, one of our top technical artists, Natalie, was able to create this absolutely stunning scene. She put it together using the HD render pipeline to show off what you can really do using the latest preview package of Shader Graph.
Every game object that you see in this scene, including the butterfly, is using a shader that was authored entirely in Shader Graph. The butterfly is actually a perfect example: if you look at it closely, you may notice how you can see through the wings a bit, and observe how the color gradually changes based on the illumination angle. This effect is called iridescence, and it's present in many natural material surfaces. Iridescence is one of many features built right into Shader Graph.

So clearly Shader Graph is very powerful and extends the reach of artists and developers. But alas, as we're all aware, in the game development world everything comes at a cost, and in the words of Benjamin Parker, with great power comes great responsibility. To understand the costs associated with using Shader Graph, and how to extract the most performance, we need to understand the foundations of how Shader Graph works.

For Shader Graph to work as well as it possibly could, we needed to rely on a more modern and extensible approach to rendering. For that reason, Shader Graph requires that your project use a scriptable render pipeline. There are two main render pipelines today: the High Definition Render Pipeline, aimed at gaming PCs and high-end consoles, and the Lightweight Render Pipeline, aimed at everything else. You may have noticed another template called the VR Lightweight Render Pipeline, but that's really just the Lightweight Render Pipeline with VR settings enabled.

When you create a shader graph, the first node that you'll see is called the master node. The master node defines the core surface parameters and is the entry point from which Shader Graph will create your shader. There are actually a few different master nodes that you can choose from, which we'll talk about in a little bit. That said, at any given time you're only allowed to have one master node per shader; if by chance you happen to have more than one master node in your graph, only the first one that was serialized will be used.

While master nodes contain a great deal of information defining how things should look, logically they are just a fraction of the process. To actually generate a shader that we can use in materials, we need to pass the shader graph to the render pipeline; more specifically, the graph needs to be passed into something called a shader backend, which is unique per render pipeline. Shader backends include sub-shader generators for the supported master nodes. This does carry a strong implication, namely that not all master nodes are supported by every render pipeline. Furthermore, because the HD and Lightweight render pipelines differ in the way they render, the same master node won't look exactly the same between these two render pipelines. That's because each master node actually has a different template per render pipeline, which is used by the sub-shader generator. Each template contains numerous injection points, which vary based on the master node and the render pipeline settings. When the sub-shader generator is kicked off, your shader graph is traversed beginning with the master node, and the injection points are then replaced with actual code based on the node functions and values. The process may sound a bit heavy-handed, but it's actually executed fairly quickly. Once the sub-shader generator completes, the outcome is a shader written in the ShaderLab language, which is functionally very similar to HLSL or CG. At this point the generated shader can be used in your materials. It's important to note that this process only occurs when you're using the Unity editor, and not at runtime.
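To give a rough sense of what that output looks like, here is a heavily simplified, hand-written sketch of a generated shader's skeleton. The names and injection-point comments are illustrative only, not the generator's exact output:

    Shader "graphs/My Graph"                    // hypothetical graph name
    {
        Properties
        {
            // blackboard properties are emitted here
            _Scale("Scale", Float) = 0.5
        }
        SubShader
        {
            // tags and material options come from the master node settings
            Tags { "RenderPipeline" = "LightweightPipeline" }
            Pass
            {
                HLSLPROGRAM
                // <injected: functions for the nodes your graph uses>
                // <injected: input/output data structures>
                // <injected: vertex and surface functions>
                ENDHLSL
            }
        }
    }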
To better illustrate, let's go over a quick example. If you've written shaders in Unity before, this will look pretty familiar: shaders that you write by hand and shaders generated from Shader Graph both follow the same language constructs, but there are a few key differences I wanted to highlight.

First, properties are defined in the Shader Graph blackboard, and most inputs can be converted to properties. Properties are exposed in the material's inspector and can be modified per material instance. Next, the tags and material options are defined in the settings and various properties of the master node itself; they can be located by clicking on the gear icon on the master node UI.

The next set of differences is perhaps the most critical. First, we need to include the various functions used in your shader graph; for example, if you use the Multiply or Noise function, that code will be included in your shader. Next, we need to provide the input and output data structures for the surface shader; Unity uses these to tell the GPU exactly how data will be transferred in and out of your shader functions. The surface function itself ties all of your shader graph operations together.

Let's look at a graph and see what kind of code it generates. In this shader graph I'm simply multiplying a tangent vector with a scalar value, then setting that as the input for the albedo color. The graph function I'm using here is the Multiply function, so we need to include that in our output shader. For performance reasons, Shader Graph will only include the functions and overloads that are referenced in your graph and that ultimately connect to your master node. The next thing we need to do is define the input and output data structures for the surface shader. The input structure typically contains mesh data; in this particular case the world-space tangent is a requirement, and accordingly it's present in the description input data structure. The output data structure has the albedo defined, because it corresponds to one of the master node slots; all the other master node slots would also normally appear here, which I've omitted for the purpose of simplicity. And finally, we've reached the actual surface shader function. This is where we call the graph functions in the way that you set them up in your shader graph, using data supplied from the description input data structure. At the end of the function we simply return the calculated values in the output data structure. In this particular example the code is doing exactly what the graph is defining: multiplying the tangent vector with the scalar, then setting that as the surface color.

The final difference that I want to briefly mention is that we need to inject code into the vertex input and output structures, as well as the vertex and fragment shaders themselves. This is done because, in order to perform operations using the mesh data, we need to pass that data along from the vertex to the fragment pipeline; this includes space transformations and various other properties, depending on what the shader graph actually needs. I'll also briefly discuss hand-editing shaders a little later in my talk, but if that's something you're interested in, the functions that I just described are good candidate areas to look at.
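Pulling the pieces of that multiply example together, here is a simplified sketch of the generated code. The struct and function names approximate what the generator emits, and the other master node slots are omitted:

    // Included because the graph references the Multiply node:
    void Unity_Multiply_float3(float3 A, float3 B, out float3 Out)
    {
        Out = A * B;
    }

    // Input structure: the world-space tangent is required by this graph.
    struct SurfaceDescriptionInputs
    {
        float3 WorldSpaceTangent;
    };

    // Output structure: Albedo corresponds to a master node slot
    // (the other slots are omitted here for simplicity).
    struct SurfaceDescription
    {
        float3 Albedo;
    };

    // The surface function ties the graph operations together.
    SurfaceDescription SurfaceDescriptionFunction(SurfaceDescriptionInputs IN)
    {
        SurfaceDescription surface = (SurfaceDescription)0;
        float3 scaled;
        Unity_Multiply_float3(IN.WorldSpaceTangent, float3(0.5, 0.5, 0.5), scaled);
        surface.Albedo = scaled;   // return the calculated value via the output structure
        return surface;
    }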
So now that we understand how Shader Graph works, let's use that information to our advantage. While choosing a master node may seem like a trivial decision, it's actually an important one that has profound implications on overall performance. Today you have the choice to select from among three different master nodes: HD Lit, PBR (short for physically based rendering), and Unlit. Let's go over each one of these.

Starting with the PBR master node: this master node works across all render pipelines, and for that reason alone it's a really great node to use if you just want to prototype. Furthermore, the feature set includes properties like occlusion and emission, and it's broad enough for most rendering scenarios.

If you lock your render pipeline down to the high definition one, then you can also consider the HD Lit master node. This master node will only work on the High Definition Render Pipeline, so while it does have a much larger feature set, it's not exactly a good choice for prototyping, since you may end up using features that aren't present in other render pipelines. Furthermore, the larger feature set can become a double-edged sword, as it may add needless complexity to your output shader code, potentially resulting in slower performance. In fact, if I just go ahead and expand the master node, you can start to imagine how complex the surface shader can really become. Using the HD Lit master node for everything isn't a situation you want to put yourself in, because it becomes far too easy to, for example, enable translucency for objects that don't really benefit from that setting. Translucency and subsurface scattering, by the way, are both very expensive computationally. Later in this presentation I'll talk about what Unity does to help in these scenarios and how it relates back to your master node selection.

The last master node that you can choose from is the Unlit master node. Like the PBR node, the Unlit master node will also work across all render pipelines. It's tempting to overlook this master node because it contains a pretty bare-bones set of features, but for that very reason it's pretty lightweight and certainly has its uses. This is the master node that you want to use for effects or UI, and even though it's called unlit, you can actually add lighting to it using a clever trick: you use the provided math nodes to calculate lighting for your target shading model, which may be, for instance, Blinn-Phong. All the math nodes that you need to do this are supplied in the provided node library, and once you've done the math, all you need to do is connect the output color to the Color port on the Unlit master node. This is a pretty neat and lightweight way to do some simple lighting.

To summarize the master nodes available today: we suggest using PBR for prototyping, due to its versatility, or if you're looking to leverage different render pipelines to target a broad spectrum of devices. If you've locked down to the HD render pipeline, you can switch over to using the HD Lit master node, but only if you need it. And last but certainly not least, don't overlook the utility of the Unlit master node, which can be useful in the right scenarios.
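As a hedged illustration of that Unlit lighting trick, here is roughly the math a node graph would reproduce for a Blinn-Phong model. The function and parameter names are just examples; the inputs would come from the corresponding vector, light, and property nodes in your graph:

    float3 BlinnPhong(float3 normalWS, float3 lightDirWS, float3 viewDirWS,
                      float3 lightColor, float3 albedo, float shininess)
    {
        float3 halfDir = normalize(lightDirWS + viewDirWS);    // Add + Normalize nodes
        float  diffuse = saturate(dot(normalWS, lightDirWS));  // Dot Product + Saturate nodes
        float  spec    = pow(saturate(dot(normalWS, halfDir)), shininess);  // Power node
        return (albedo * diffuse + spec) * lightColor;         // wire this into the Color port
    }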
The master node that you select, and how you use it, affects how Unity will try to optimize your shader's performance. When I look at how Unity optimizes the output shader generated from Shader Graph, I like to break it down into two categories.

The first category is that any feature in your master node that you don't use won't be added to your output shader. While there are a few exceptions to this, like the albedo color, this is generally true for most features, and it will also depend on the render pipeline. Now, if I were to change the default value for a particular feature, that feature will get marked as active, and code for it will get added to your output shader. Moreover, connecting any node to its port will also mark that feature as active, even if the node's value is identical to the default value. Let's look at the coat mask feature as an example. Because I left the coat mask value at 0, which happens to be the default, and there's no node connected, no coat mask code will be present in your output shader code. Now, if I were to change that to 1, or even 0.001, coat mask code would appear in your surface shader code. Tying this back to my earlier discussion point, this is precisely why choosing master nodes is so important: when you use the HD Lit master node, you have to be very careful, as you risk enabling unintended features and preventing Unity from optimizing your shader.

The second category of optimization is what the shader compiler will do for you. What the shader compiler does is take your high-level ShaderLab code, then compile it down to the low-level bytecode that gets sent over to the GPU. This doesn't apply just to Shader Graph output, but to all shaders that go through Unity's shader compiler. As part of the bytecode generation process, the shader compiler will also attempt to optimize your shader code, typically by removing no-op code, and as it turns out, a lot of the rules from the first category carry over completely. For example, let's look at the metallic feature, which is present on both the PBR and HD Lit master nodes. If we leave the default value at zero and have no additional nodes connected to its port, that code block will be compiled out completely by the shader compiler, which you can independently verify by looking at the compiled bytecode. Another example is emission, which is a little more nuanced. In the case of emission you have three requirements: first, the color must be left at the default value of black; next, it must be left unconnected to any other node; and third, the emission property must be disabled on the material. If all three requirements are met, emission code will be compiled out of your shader. To reiterate: taking full advantage of the Unity shader generation process requires that you don't change default values unless you need to, and likewise, don't connect nodes unless they're required.

The categories of optimization that I've discussed thus far have been confined to the shader code itself, but what's more critical is how these shaders are used within the context of your draw calls. Ideally you want to reduce the number of draw calls that your game submits, to reduce the frequency of costly GPU state changes and improve overall rendering performance. One of the best ways to do this is by leveraging GPU instancing, which is supported on most modern GPU hardware. Compared to static batching, GPU instancing has the advantage of not requiring geometry replication. Now, typically when you write shaders on your own, you have to add specific code to enable GPU instancing in your shaders. All shaders created from Shader Graph will support GPU instancing out of the box; you do need to explicitly enable it on your material, and it does increase the size of your materials by a tiny bit. Something to also note is that, like static batching, GPU instancing does not work on your skinned mesh renderers. Moreover, dynamic batching is disabled in the scriptable render pipelines, and even if it did work, dynamic batching limits you to no more than 900 vertex attributes per mesh, which wouldn't really work for most modern 3D games today anyway.
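Backing up to the instancing point for a moment: here is a hedged sketch of the boilerplate a hand-written shader (built-in render pipeline style) needs for GPU instancing, using Unity's instancing macros. The _Color property is just an example; Shader Graph emits the equivalent for you:

    #pragma multi_compile_instancing
    #include "UnityCG.cginc"

    struct appdata
    {
        float4 vertex : POSITION;
        UNITY_VERTEX_INPUT_INSTANCE_ID            // carries the per-instance ID
    };

    struct v2f
    {
        float4 pos : SV_POSITION;
        UNITY_VERTEX_INPUT_INSTANCE_ID
    };

    UNITY_INSTANCING_BUFFER_START(Props)
        UNITY_DEFINE_INSTANCED_PROP(float4, _Color)   // varies per instance
    UNITY_INSTANCING_BUFFER_END(Props)

    v2f vert(appdata v)
    {
        v2f o;
        UNITY_SETUP_INSTANCE_ID(v);               // resolve this instance's data
        UNITY_TRANSFER_INSTANCE_ID(v, o);         // pass the ID to the fragment stage
        o.pos = UnityObjectToClipPos(v.vertex);
        return o;
    }

    fixed4 frag(v2f i) : SV_Target
    {
        UNITY_SETUP_INSTANCE_ID(i);
        return UNITY_ACCESS_INSTANCED_PROP(Props, _Color);
    }

On the material side, the explicit opt-in mentioned above is the Enable GPU Instancing checkbox in the material inspector.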
Beginning in Unity 2018.3, your skinned mesh renderers can instead make use of a new feature called the SRP Batcher. The SRP Batcher effectively replaces dynamic batching in the scriptable render pipelines, and it works on both mesh renderers and skinned mesh renderers. If you're writing shaders by hand and you want your meshes to be picked up and batched by the SRP Batcher, it does require that your shader write out its per-material parameters in a dedicated constant buffer. It's not exactly a trivial amount of work, but the results can be quite drastic: using the SRP Batcher, we've seen rendering speeds increase anywhere from 27% to over 400% in our internal tests, as noted in this chart comparing rendering times, which includes the cost of shadow passes. To clarify, since this chart is measuring time, lower is better. Where the SRP Batcher really shines is when you have a single shader, like an uber shader, used across a lot of materials. You'll still see performance benefits if you only have one shader per material, though they won't be as pronounced. Every shader created using Shader Graph will automatically support the SRP Batcher, so you can take full advantage of these performance benefits.

So up to this very moment we've talked a lot about master nodes and what Unity does to help your game run as fast as possible. What we'll focus on next is what you can do to make your shaders perform even faster. The absolute first step is to understand what your performance is bound by. If you're bound by the CPU, then optimizing your shaders won't increase your frame rate, though it might improve your battery performance if you're on mobile. If you know that you're bound by the GPU, the next step is to understand which specific part of the graphics pipeline you're bound by. To do this you'll need to use the graphics profilers native to your target platform, because the Unity frame debugger doesn't include timing information. This is critical because it affects the way we need to approach optimization: optimizing for memory-bound situations is going to be different than optimizing for vertex-bound scenarios.

Earlier I shared how Shader Graph works, by traversing the nodes in your graph and injecting shader code where needed. For that very reason, you typically want to reduce the number of nodes in your graph as much as you possibly can; doing so will reduce your shader complexity. This node decimation can be achieved in a few different ways. They include eliminating no-op nodes, like multiplying values by one. And if you know that you're not bound by texture fetches, another clever way to reduce nodes is by baking values back into the textures themselves; this is a particularly useful alternative to, say, expensive noise functions. Take this example, where I'm sampling a texture, then multiplying the color channels by a scalar to increase its brightness. This is absolutely fine for prototyping, but when you're ready to start optimizing, you can simply increase the brightness of the base texture itself, then remove the Multiply node altogether to reduce your math operations per frame. Keep in mind that this optimization only helps your performance if you're ALU-bound. And while there are a few cases where having more nodes is actually preferable, for most scenarios combining nodes is a great strategy to aim for.
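Coming back to the SRP Batcher requirement for a moment: for hand-written SRP shaders, compatibility hinges on declaring per-material parameters in a single constant buffer named UnityPerMaterial. A minimal sketch, with example properties (Shader Graph declares this for you):

    // All per-material uniforms must live in one constant buffer named
    // UnityPerMaterial for the shader to be SRP Batcher compatible.
    CBUFFER_START(UnityPerMaterial)
        float4 _BaseColor;
        float  _Smoothness;
    CBUFFER_END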
Let's go ahead and look at another detailed example. The light falloff effect that you see here on certain glass-like surfaces can be achieved in a few different ways. A really common way to do it is by calling the Power function multiple times in succession. Here's what the shader graph for that shader looks like; it might be a bit hard to see, but try to draw your attention to these adjacent Power nodes. Let's zoom in a bit. To alleviate this, we can use the laws of mathematics to combine the four Power nodes into a single one, and the visual look and feel is completely identical. And again, if you know that you're not bound by texture fetches, an even better approach would be to bake the falloff values into a texture, then sample the texture when you need it. If you are bound by texture fetches, then sampling a texture in this case would end up hurting your performance, and this is precisely why profiling is so important.

In addition to reducing the number of nodes, you'll also want to look at each node in itself. Basically, use just what you need: if you only need a Vector2, don't use a Vector4, and you'll save 8 bytes of shader memory. Each node does have a theoretical memory cost, and since you can see all the shader code that's generated, it becomes fairly easy to calculate the memory cost of the nodes that you're using. Overall, reducing your shader memory cost will help in GPU-bandwidth-bound situations. For those of you wishing to take this even further, you can switch to a less precise format for improved shader performance; this is especially helpful if you're targeting mobile devices. It does require some code modification; thankfully, Shader Graph is fully open source on GitHub, and there are a few different ways to approach this, each with varying degrees of involvement. In the near future we'll add half-precision support, but if you want to get a head start right now, this is the class and enumeration that you want to start exploring.

If you've been writing shaders for some time now, this isn't new to you: all the other shader best practices that you've learned about carry over completely to Shader Graph. Such best practices include reducing your math operations, which really ties back to node decimation. First, look to combine your scalar values before applying them to vector values; doing so will reduce the math that your shader needs to perform. Next, instead of branching, which is actually quite expensive on all GPUs, prefer to blend the results in your graph instead. Additionally, you'll want to spend some time researching the impact of certain math operations on your target platform. For example, if you look at the Reciprocal node, there are actually two different modes, default and fast. Using fast requires shader model 5, which isn't universally supported on all hardware, so you only stand to benefit from switching to fast mode if you know that all of your target platforms properly support shader model 5. Another general example is color masks, which tend to perform inefficiently on PowerVR and other mobile chipsets.

The last point that I want to make here is to prefer constants whenever possible; in the context of Shader Graph, this refers to inline nodes. As I said earlier, many input nodes can be converted to a property in Shader Graph, exposing them in the material's inspector. Properties make it really convenient to prototype how your shader will look on actual game objects, but do be aware that converting a node to a property eliminates the possibility for the shader compiler to optimize your shader function. So while it's perfectly fine to use properties for prototyping, we strongly recommend converting them back to inline nodes when you're out of the prototyping phase, allowing the shader compiler to generate better optimized code.
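Expressed in plain HLSL, a few of the micro-optimizations above look like this (variable names hypothetical):

    // Four chained Power nodes collapse into one, since (((x^2)^2)^2)^2 = x^16:
    float falloff = pow(fresnel, 16.0);           // one pow instead of four

    // Combine scalar values before applying them to a vector:
    float3 tinted = baseColor * (brightness * exposure);   // one vector multiply, not two

    // Prefer blending to branching:
    float3 result = lerp(colorA, colorB, mask);   // instead of: if (mask > 0.5) ...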
If you're a seasoned graphics veteran, you may be interested in optimizing the generated shaders by hand. You'll have to be aware that this is a one-way path: there's no way to carry your manual optimizations back, because they would likely interfere with the injection-based, templated approach that Shader Graph uses. But if you're interested in doing this anyway, it's a straightforward process: all you need to do is right-click the master node, select Copy Shader Code, then paste that into a new shader file. Once you've made your desired changes, don't forget to change your material to use the new shader. And because this is a one-way path, we strongly recommend reserving this manual optimization step for the end of your development lifecycle.

The last tip that I want to give isn't so much for runtime performance as it is for your own sanity. As you're building your shader graph, if it starts to get large, you may start to notice UI slowdowns. That's because every node that eventually connects to the master node is continually evaluated. Conversely, nodes that are disconnected from the master node aren't actually evaluated, and they won't be included in your generated shader, so it's completely safe to leave those in for prototyping. Now, if you want to quickly edit a lot of nodes in succession, you can instead connect them to a Preview node that itself isn't attached to the master node, to help save you some time.

To summarize the entire end-to-end workflow of using Shader Graph: from the onset, we suggest prototyping with the PBR master node, and only moving to HD Lit if necessary and appropriate. You'll also want to think about your approach to shaders, whether you want to go with uber shaders or not, to take full advantage of the SRP Batcher. When you no longer need to prototype and you're ready to start optimizing, begin by aggressively reducing your nodes, and if properties are no longer needed, don't forget to convert them back to inline values, allowing them to be potentially optimized by the shader compiler. And finally, when you're nearing the end of your project, that's a good point to start hand-optimizing your shaders.

As we approach the end of my talk, I want to conclude with a few exploratory ideas that you can pursue using Shader Graph. The first of these is the node API, which allows you to create your own custom nodes. This is particularly useful for a number of things, like, say, custom lighting, perhaps half-precision support, or even a specialized noise function that we don't yet offer. We've gotten lots of excellent feedback from our preview users, so expect this to quickly evolve toward Unity 2019.1.

As most Unity games are bound by CPU performance, I'd also encourage developers to start exploring greater inclusion of vertex animation in their titles. Compared to animators, vertex animations have a relatively low impact on your CPU performance, as most of the work is done on the GPU. Good candidates for this include the subtle movements of foliage, water animations, or perhaps cloth effects. Let's look at a few examples that we've crafted internally. The movement of this tree follows an algorithm pioneered by Tiago Sousa, and if you observe carefully, you may notice different phases for the trunk, branches, and leaves. All of this is completely driven by vertex animation, timed by varied sine wave values, and also completely authored in Shader Graph.
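A minimal sketch of the idea, with hypothetical parameters; in Shader Graph you'd build the equivalent from Time, Sine, and Position nodes feeding the master node's Position input:

    float3 SwayVertex(float3 positionOS, float time, float amplitude, float frequency)
    {
        // offset the phase by position so the mesh doesn't move in lockstep
        float phase = positionOS.x + positionOS.z;
        positionOS.x += sin(time * frequency + phase) * amplitude;
        return positionOS;
    }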
This next example, which we posted to our blog about two weeks ago, showcases using Shader Graph to create vertex displacement effects that simulate the movement of water. The flag animations in the middle of the island are also driven by vertex animation, and were also authored in Shader Graph.

Shader Graph has certainly come a long way from its inception, but we're not stopping there. Beginning in Unity 2019.1, we will launch the first official verified release of Shader Graph. Coming out of preview and becoming a verified release means Shader Graph will undergo a highly rigorous testing process to ensure a high-quality, stable feature release, which we understand is very important for everyone here. Based on your feedback, the node API will receive an extensive upgrade, allowing you to do even more, and of course you'll also find key quality-of-life improvements across the board. When you go and download Shader Graph tonight, remember that we have a lot of resources out there to teach you how to use Shader Graph to create amazing effects. You can also check out the impressive samples from our very own evangelism team to get a better idea of what you can really do. And with that, we've arrived at the end of my talk today, and now I'd like to open up the floor to take any questions. [Applause]

Audience: Hey, yeah, so we're releasing a game by the end of this year. We've got an Amplify workflow at the moment for our shaders, and we're looking at using the Lightweight scriptable render pipeline. Should we do that, or should we switch to Shader Graph? You said there are a lot of optimizations that are going to be happening in 2019.1. What do you suggest?

So the question was: I'm using the Lightweight Render Pipeline today, I'm currently using Amplify, should we use Shader Graph? Correct? The answer is a bit nuanced; it really depends on your release date. If you're looking to release soon, I would suggest continuing to use Amplify, it's a great tool. If you're looking to release next year, I think that's a great time to go ahead and start exploring Shader Graph. I do think it's worth trying out right now, though, since you're already familiar with Amplify: it's a great opportunity for you to try Shader Graph, give us feedback on what you think is missing, and we'll continue to evolve Shader Graph to meet your needs.

Audience: I had a question about the foliage and water animations that you were talking about, with the vertex animations. Are those examples going to be available somewhere, built in or someplace online where we can download them and see how they work?

The question was: will the foliage and water animation samples be posted online? The answer is yes. If you go to our blog, we've posted samples for both the foliage and water examples, which you can download. It's completely open source, so feel free to take a look and give it a shot.

Audience: Hi, I was wondering if shaders made in Shader Graph can be used in older versions of Unity.

The question was: will shaders made in Shader Graph work in other versions of Unity? It actually depends on the render pipeline. In most cases it will work, but there are a few cases where it may not, depending on the render pipeline. Beginning with Unity 2019.1, you can expect a lot more stability in that regard. Historically, because we've been iterating very quickly, there may be some breakage between these different versions of the render pipelines and the shaders that you create in Shader Graph.
Thankfully, because of the template approach that we use, in most cases all you need to do is just load up your graph in Shader Graph with the target render pipeline, and if you simply resave the shader, it should generally work. You can also copy the code and paste it into another file. If you open up the graph file, it's actually a JSON file, but you can double-click it and it should just open up and show your graph.

Audience: Having used other node-based shader tools, I assume we'll be able to write custom input nodes, but will we also be able to write custom master nodes, for example if we want to have a custom lighting solution?

The question was: will we be able to write a custom master node? At the moment, no, but it's something we're really exploring for a future release.

Audience: Hi, will you grow the example library, so we can see more powerful shader graph templates and examples?

The question was: will we provide a library of examples or templates? If you go to this link over here and check out Andy Touch's blog (you can just search Google for "Andy Touch shader graph"), he has a ton of amazing examples out there that teach you how to do things like rim lighting, cel shading, and displacement effects. Our blog is also a great source for that. We will continue to add more samples, because we really do believe in the possibilities Shader Graph opens up, and we also want to illustrate good examples of how to use Shader Graph and what you can really do. So yes, you can expect that library to grow.

Audience: Hey, I was wondering if you could go into why non-default values can't be optimized away.

The question was: why can't non-default values be optimized away? A lot of that has to do with the compiler that we actually use. We use the D3D shader compiler, and we then cross-compile to other platforms using an open source project called HLSLcc. So we don't actually control the shader compiler itself. That being said, we are definitely exploring how we can better optimize your shaders.

Audience: You were saying that in certain instances it would make sense to create a texture instead of performing math operations, and I was wondering if it would be possible for you to integrate some system that would let you, let's say, bake out that texture.

So the question was: will we add a feature that lets you bake out a texture based on your node values, which you can then reference later, either as a lookup table or directly as a replacement texture? What's funny is that as I was giving this presentation internally to our technical artists, they asked for the exact same feature. So it's certainly something that we're looking at, and we will put it on our roadmap.

That being said, we're actually out of time, so what I'll do is stand outside and answer questions there. Thanks, everyone. [Applause]
Info
Channel: Unity
Views: 43,650
Keywords: Unity3d, Unity, Unity Technologies, Games, Game Development, Game Dev, Game Engine
Id: Y6WfgFI5H90
Length: 45min 45sec (2745 seconds)
Published: Mon Dec 10 2018