*sighs* Much like Icarus flew too close to the sun,
I too have flown so far and high, that the wax of my wings has melted
and I am plummeting down to earth. As the seas and mountains around me rise,
I can no longer see the line where the sky meets the sea. For a moment, I was free
and I could see the beautiful fields of Crete in front of me but now I return to the Earth where an endless
slumber awaits me. Yup, that's right, in the end I couldn't perfect
this effect, but I'll show you where I got to and then explain what needs to be done for it to be complete. Quick shout out to everyone who's been watching
my videos so far and leaving such great comments. This started out as a way to keep me sane
but the feedback has been so positive, I'll definitely consider continuing. But enough personal rubbish, let's get into
the nitty gritty. We've two cool versions of this effect but
they're not efficient. They've got the beat and the rhythm but they
lack the grace and poise. I'm sorry to tell you this, friends, but the
only way forwards is to get… TECHNICAL TECHNICAL TECHNICAL TECHNICAL Unity's Scriptable Render Pipeline is a barely
supported, undocumented, infuriating bucket of shhh…
fine… it's fine. It's fine. Writing this video was incredibly stressful
for me because I had to be sure that the version
I presented to you was as efficient as possible and guess what
I know it's NOT! Nothing infuriates me more than watching someone
write bad code for 5 minutes, but I've lived long enough to see myself become
the villain so here we are. I especially hate it when they dance around
the topic. My new favourite window in Unity is the Frame Debugger window. I don't know if you've ever read that article about how Metal Gear Solid V's FOX Engine breaks down a scene, but it's essentially that. It separates out the steps of how the scene is actually rendered, with neat visual feedback on screen. And it's the perfect tool to show you how
we're going to do this effect. See, we're going to try and insert our own
stuff riiiiiiiight here… or here. Urgh… Quick history lesson. So Unity had its own Built-in rendering pipeline
for years until, naturally, developers started crying for more. Since the source code for the engine is very
controlled, your average farm-grown developer didn't have
access to any of the lower-level rendering. Therefore, the Scriptable Render Pipeline
was created. Then, to demonstrate how powerful this is, they
used this new system to create the Lightweight Render Pipeline and the High Definition Render
Pipeline. (And then they changed the name to Universal
Render Pipeline because Lightweight had negative connotations
I think) Now, this shift to Scriptable Rendering is
not a unanimously supported decision, with several outspoken members of the Dev
community accusing Unity of making new core systems in the interest of
pleasing shareholders and never spending the time to focus on optimising
the systems they have. Other people like the new systems and think
they're absolutely necessary if Unity is to survive in an ever more competitive
market. I'm of the group of people that don't have
opinions on things they know very little about. An increasingly small portion of the internet. Anyway, the important takeaway here is that
we can take one of those prebuilt Render Pipelines, in this case URP, and use Unity's SRP to add
and change things. Specifically… POST PROCESSING Well, kinda. Unity had its own scriptable post-processing stack back when it was Built-In, which was later replaced with Post Processing v2, which used CommandBuffers and is the subject of discussion today. URP currently uses its own newer Post Processing
stack but it is currently uncustomizable and therefore WORTHLESS. But our effect isn't technically Post-Processing
as we're not doing anything to the final image once it's been rendered, but rather changing the rendered image of
a completely separate layer and integrating it into our scene. So it's sort of peri-processing. That sounds fun. Specifically, I'm going to get our Renderer
that has been generated for us and add a Render Feature. These are experimental, mind you, as is essentially
all of SRP. It's all still in active development, subject
to change and, as we will discover later, missing key features. Our URP Render Pipeline is made up of several
passes, each one dedicated to a different kind of thing to render. Let's quickly go through them now.
SHADOWS. Gotta figure out where those shadows go. Get our lightmaps and projections done. First things first.
PRE-PASSES. Let's gather up any information we need before getting into the nitty gritty. Depth stuff. LUT stuff. Maybe clear the screen. I don't know.
OPAQUES. These are our big boys. All our walls, pots and lootboxes. They're totally there and you cannot see through them. They check and write to the Depth Buffer that I mentioned in the last episode, allowing us to figure out what cuts out what.
SKYBOX. Let's look at our depth texture. Huh, that must be really far away, guess we should just replace it with some Sky. Yeah, that looks right.
TRANSPARENTS. But sometimes we gotta see through stuff. So we render it last and put it on top of what we want to see on the other side. These guys check the Depth Buffer so they can cull the right bits, but they don't write to it, as they don't block anything's view. They are also THE BANE OF MY EXISTENCE.
POST-PROCESSING. This is the cool stuff. Everything's rendered out now, so we can add some Bloom or Colour Grading or whatever. That's real neat.
So adding a Render Feature will execute its own pass into the Pipeline wherever we want it.
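For reference, "wherever we want it" is chosen with URP's RenderPassEvent enum, whose values line up with the stages above. A tiny sketch, where the wrapper class is purely illustrative and the enum member names are the ones from the URP version I'm using, so double-check them against your package:

```csharp
using UnityEngine.Rendering.Universal;

// The stages above map onto values of RenderPassEvent, for example:
//   BeforeRenderingShadows / AfterRenderingShadows
//   BeforeRenderingOpaques / AfterRenderingOpaques
//   AfterRenderingSkybox
//   BeforeRenderingTransparents        (spoiler: the slot our effect will need)
//   BeforeRenderingPostProcessing
public class PassTimingExample
{
    // Example: ask for a slot just before the Transparents are drawn.
    public RenderPassEvent injectionPoint = RenderPassEvent.BeforeRenderingTransparents;
}
```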
There's only one type of Render Feature currently available, and it's called Render Objects. It allows you to render specific objects at
any point in the Pipeline, change their material or even the camera settings that they're rendered
with. Very useful for FPS weapons. The knowledge I'm presenting today is based on digging through the scripts behind this feature, reading up on the documentation, as well as some really stellar tutorials by people like CatLikeCoding, link in the description. There's honestly not a lot out there on this
stuff, so I hope that my basic explanation will allow you to take it further and do your
own stuff with it. Here's the basic outline of a Render Feature
that I've coded. The first thing you have to do is inherit
from the ScriptableRendererFeature class. If you don't understand what inheriting is
then um, that sounds like a YOU problem *laughs*. No but seriously, look up Object Oriented Programming; essentially we're taking all the delicious goodness of a class and we're gonna add and
change some things. You've got your serializable settings. These can be changed in the inspector which
is always going to be useful when you want to test out the effect. You can change things like what Materials
are used and more importantly when the pass is supposed to occur in the pipeline. We initialise these settings and our BasicFeaturePass. This is another class that we'll write in
a separate file, but for now we initialise it in the overridden Create function. Then you need to add this pass to the queue in the overridden AddRenderPasses function, with these parameters. Very simple, skeletal class, which as you can see is a lot more complicated in the original RenderObjects version.
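Since I can't paste the file itself into a transcript, here's a minimal sketch of that skeleton. The names BasicFeature and BasicFeatureSettings, and the exact settings fields, are my own placeholders, and the pass class it creates is sketched further down:

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

// Inspector-facing settings for the feature.
[System.Serializable]
public class BasicFeatureSettings
{
    public Material material;          // material with our special blit shader
    public LayerMask layerMask = ~0;   // which layers this pass renders (set to your Pixel layer)
    public RenderPassEvent renderPassEvent = RenderPassEvent.BeforeRenderingTransparents;
}

// The Render Feature itself: holds the settings, creates the pass, enqueues it.
public class BasicFeature : ScriptableRendererFeature
{
    public BasicFeatureSettings settings = new BasicFeatureSettings();

    private BasicFeaturePass _pass;

    // Called when the feature is created or its settings change in the inspector.
    public override void Create()
    {
        _pass = new BasicFeaturePass(settings);
    }

    // Called once per camera: add our pass to the renderer's queue.
    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(_pass);
    }
}
```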
Then, in our BasicFeaturePass class, we inherit from ScriptableRenderPass. The ScriptableRendererFeature and the ScriptableRenderPass
classes are the backbone of the Scriptable Rendering Pipeline and while it's annoying to have to say Scriptable
so much, it's even more annoying that there are 3 different forms of the verb “to Render” in there. Here we declare some variables. We've got our customisable parameters for the pass, as well as a bunch of things that are 100% necessary in order to render our objects. The ProfilingSampler lets us see this pass in the Frame Debugger, the RenderStateBlock lets us override the render state used when rendering, the ShaderTagId list allows us to specify which passes to call in shaders, and the FilteringSettings allow us to control which objects and layers we're planning on rendering. Finally, we've got some static ids that we'll
use during the pass itself. Just easier to set them here than calculate
them every time we execute. Then, we set up our constructor, exactly how
we called it in the BasicFeature script. An important thing to note is this renderPassEvent. Here, we're assigning the variable from the
inherited class that tells the Pass when to execute. Everything else is just setting up basic values
and using the settings from the inspector.
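Again, a rough sketch rather than the exact file, with my own placeholder names for the texture ids and shader tags:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class BasicFeaturePass : ScriptableRenderPass
{
    private readonly BasicFeatureSettings _settings;

    // Shows this pass as a named entry in the Frame Debugger (usually used with a ProfilingScope).
    private readonly ProfilingSampler _profilingSampler = new ProfilingSampler("Basic Feature Pass");

    // Render state overrides; Nothing means "use whatever the objects already use".
    private RenderStateBlock _renderStateBlock = new RenderStateBlock(RenderStateMask.Nothing);

    // Which shader passes ("LightMode" tags) DrawRenderers is allowed to use.
    private readonly List<ShaderTagId> _shaderTagIds = new List<ShaderTagId>
    {
        new ShaderTagId("UniversalForward"),
        new ShaderTagId("SRPDefaultUnlit"),
    };

    // Which objects and layers we're planning on rendering.
    private FilteringSettings _filteringSettings;

    // Static ids, looked up once instead of every frame. The texture names are placeholders.
    private static readonly int PixelTexId = Shader.PropertyToID("_PixelTexture");
    private static readonly int PixelDepthId = Shader.PropertyToID("_PixelDepthTexture");
    private static readonly int CameraColorTexId = Shader.PropertyToID("_CameraColorTexture");

    public BasicFeaturePass(BasicFeatureSettings settings)
    {
        _settings = settings;

        // Inherited from ScriptableRenderPass: tells URP when to call Execute().
        renderPassEvent = settings.renderPassEvent;

        // Only objects in the opaque queue, on the layers we asked for in the inspector.
        _filteringSettings = new FilteringSettings(RenderQueueRange.opaque, settings.layerMask);
    }

    // The overridden Execute(), where the commands actually get recorded,
    // is sketched in the next two snippets.
```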
The final part, and holy grail, of this class is the overridden Execute function. This function is what is executed when it's
time for this Pass to actually do something. Here we've got our CommandBuffer. We're going to give this baby a series of
instructions, or “Commands” and then jam it into our Rendering context and tell it
to execute. At any given time, the CommandBuffer must have a RenderTarget,
where it's going to send its information. It can also create and release Temporary Render
Textures, that only exist in memory during the Pass. I've got this set up to essentially recreate
the Pixelart effect from the previous video. Recall that we used two RenderTextures, one
for the rendered out pixelated colour and one for its depth. We use the CommandBuffer to create these as
temporary Render Textures with this code, use this code to make them the CommandBuffer's
Render Target and then make sure they're wiped so that they
contain only transparent data. We execute that on the rendering context, then clear the command buffer
for further instructions. Next we need to render what's on the screen
onto the RenderTextures. Since they're still the RenderTarget, and
we've already given our FilteringSettings a LayerMask so that only the Pixel Layer gets rendered, we just have
to call the DrawRenderers command with the information already given to us for this to
work.
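Assembled into code, the first half of Execute looks something like this. The texture formats, the downscale factor and the sorting criteria are my guesses at reasonable values rather than the exact ones from the project, and this continues the BasicFeaturePass class sketched above:

```csharp
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("Basic Feature Pass");

        // Size the temporary textures off the camera target, divided down for the pixel look
        // (the factor of 4 is an arbitrary placeholder).
        RenderTextureDescriptor camDesc = renderingData.cameraData.cameraTargetDescriptor;
        int width = camDesc.width / 4;
        int height = camDesc.height / 4;

        // Create temporary colour and depth RenderTextures that only exist for this pass.
        cmd.GetTemporaryRT(PixelTexId, width, height, 0, FilterMode.Point, RenderTextureFormat.ARGB32);
        cmd.GetTemporaryRT(PixelDepthId, width, height, 24, FilterMode.Point, RenderTextureFormat.Depth);

        // Make them the CommandBuffer's RenderTarget and wipe them to fully transparent.
        cmd.SetRenderTarget(new RenderTargetIdentifier(PixelTexId), new RenderTargetIdentifier(PixelDepthId));
        cmd.ClearRenderTarget(true, true, Color.clear);

        // Execute what we have so far on the context, then clear the buffer for further instructions.
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();

        // Draw only what the FilteringSettings allow (the Pixel layer) into those targets.
        DrawingSettings drawingSettings =
            CreateDrawingSettings(_shaderTagIds, ref renderingData, SortingCriteria.CommonOpaque);
        context.DrawRenderers(renderingData.cullResults, ref drawingSettings,
            ref _filteringSettings, ref _renderStateBlock);

        // ...continued in the next snippet: getting the result onto the screen.
```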
Now we've filled our RenderTextures with pixelly goodness, but we need to actually send this to the screen. Since this is a new RenderTarget, we can carry
on adding commands and don't need to execute yet. This is more efficient, probably. We've got to set our new RenderTarget to the
Pipeline's temporary RenderTexture, _CameraColorTexture, something that can be accessed by every shader
and is directly drawn onto the screen at the end. Then we've got to do what every PostProcessing
stack has had to do since the dawn of time. We have to Blit which is a word that stands
for something. Blitting essentially means we're taking images
and we're either copying one to the other or blending them together by essentially rendering
a square with a special shader that takes the first image as an input. (Anyone who's anyone in Computer Graphics will tell you that squares don't exist and are just two triangles, so it's technically more efficient to use
a big triangle and ignore the tips but that's too complicated for this tutorial) So we give the Blit command the identifier
for our Pixelated Render Texture, tell it that the RenderTarget is the one we already
set and give it our special material with the
special shader for rendering. Then we release our temporary render textures
and do our final execute to wrap it all up.
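And here's the second half of Execute, carrying straight on from the snippet above. Again, this is my reconstruction of what the video describes rather than the exact code:

```csharp
        // ...continuing Execute() from the previous snippet.

        // Point the CommandBuffer at URP's intermediate colour target, _CameraColorTexture,
        // which is what ends up on the screen at the end of the frame.
        cmd.SetRenderTarget(new RenderTargetIdentifier(CameraColorTexId));

        // Blit our pixelated texture onto the target we just set, through our special material.
        cmd.Blit(new RenderTargetIdentifier(PixelTexId),
                 new RenderTargetIdentifier(BuiltinRenderTextureType.CurrentActive),
                 _settings.material);

        // Release the temporary RenderTextures...
        cmd.ReleaseTemporaryRT(PixelTexId);
        cmd.ReleaseTemporaryRT(PixelDepthId);

        // ...and do the final execute to wrap it all up.
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```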
But we're not finished, because we still have to actually write the goddamn shader, and at this point we're 3 levels deep and
you can see why this is the goddamn advanced tutorial. Thankfully, this shader is essentially just
a coded version of the ShaderGraph I made in the last video. Here we're sampling the depth, here we're
making a Sobel filter and there's some bonus stuff to make sure we're getting the right
depth for the outline. You could probably use a Shader Graph for
this, but even if it did work it'd be hella inefficient. Even though it's scary to learn HLSL and ShaderLab, it's definitely worth it for that low-level functionality and there are some great tutorials
out there. An important thing to note is these two tags. The Blend tag allows us to dictate how the rendered image is blended together with the destination image. In this specific case it's using the alpha values to essentially paste it on top of our destination image, like so. The ZWrite tag allows us to stop it writing to the depth buffer, because if it did, it would write the square's depth and the depth buffer would just be a useless plain colour, losing all of our information.
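For reference, in the ShaderLab source those two settings are single lines near the top of the Pass, something like this (standard alpha blending plus depth writes turned off, which matches what's described above):

```shaderlab
// Inside the shader's Pass block:
Blend SrcAlpha OneMinusSrcAlpha   // use the source alpha to paste our image over the destination
ZWrite Off                        // don't write the fullscreen square's depth into the depth buffer
```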
So we need our custom render pass to execute before we've rendered our Transparents. Good stuff, lemme just set it there aaaaand BOOM! We did it! We've completely reproduced the effect in
SRP! We are Gods! The laws of Graphics are ours! Nothing can stop us! We shall fly higher and higher until… …
So this effect and the one in the previous video sorta break down when you add Transparents. Since Transparents don't write to the Depth
Buffer, there's no way to account for them after the fact, but because they cull by reading from the
Depth Buffer, they won't account for our pixelated objects. So, the solution is simple. We're rendering out our Pixelly bois before
the Transparents so we just update the Depth Buffer with our new values! Well… we can't. Or at least I can't. Hello and welcome to 8 weeks in the future. Everything is worse now. Links in the goddamn description. I'm sorry, this video has been a giant slog
to get through, so I'm going to cut to the point. While taking further screenshots for the final part of this video, I came across a 6-year-old Reddit thread that led me to the exact answer I was looking
for. A way in HLSL to modify the depth buffer. I've changed this method and only this method
and now it works like a dream. Enjoy this insurmountable trilogy of videos. If I ever make more they'll be less tutorial-y,
I think. Who knows. Would you believe that this solution is not
only so well-known that it's baffling it took me two months to
get here, but people actually advocate not using it, as
changing the depth of pixels after they've been Z-Tested can get a little
nuts.
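I won't reproduce the exact fix from that thread, but for context: the standard HLSL mechanism for writing depth from a fragment shader is the SV_Depth output semantic, which is presumably what it pointed to. A heavily simplified sketch, assuming the usual URP shader-library includes, with placeholder texture names:

```hlsl
// _MainTex is bound by the Blit; _PixelDepthTexture is assumed to have been bound
// by the render pass (e.g. via SetGlobalTexture). Both names are illustrative.
TEXTURE2D(_MainTex);             SAMPLER(sampler_MainTex);
TEXTURE2D(_PixelDepthTexture);   SAMPLER(sampler_PixelDepthTexture);

struct Varyings
{
    float4 positionCS : SV_POSITION;
    float2 uv         : TEXCOORD0;
};

// Writing to SV_Depth is what lets this blit update the depth buffer as well as the colour.
// The catch: outputting depth from the fragment stage disables early depth testing for the
// draw, which is the "can get a little nuts" part mentioned above.
float4 frag(Varyings input, out float outDepth : SV_Depth) : SV_Target
{
    float4 colour = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, input.uv);
    outDepth = SAMPLE_TEXTURE2D(_PixelDepthTexture, sampler_PixelDepthTexture, input.uv).r;
    return colour;
}
```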
Luckily, I know what I'm doing and this is fine. Probably. I'm working on my first commercial game now. I love my girlfriend. See ya!