Clay John: The Future of Rendering in Godot #GodotCon2023

Captions
[Music] With Godot 4, the engine took a major leap forward in its visual capabilities. Like most projects of that scale, the overhaul of Godot's render pipeline forced thousands of little decisions about the future of the pipeline. Today Clay John is here, the leader of the rendering team of the Godot engine, and he will tell us about the future of the render pipeline in Godot and the many great visuals that will come from it. So, applause for Clay!

Thanks, everyone, and thanks for the nice introduction. I'm feeling very welcome to be here, and I hope everybody feels welcome here. As you said, we've been really busy. We've been working super hard these last couple of years to make Godot 4 everything we wanted it to be, and it's good; I'm very happy with it. But I'm always looking forward to what's next, and unfortunately that means I always feel like things can be better. So today I'm going to talk about what we've discussed and what we think would make things better, and hopefully you agree. There will be a Q&A period afterwards, so feel free to ask questions about things you don't understand, and feel free to ask about things you think we might have missed. If you have a lot of questions, you can always find me at lunch; I'm more than happy to chat about this sort of stuff.

For a little context: I started contributing to Godot in 2017. At that point we had released Godot 2.1, and Godot 3.0 had not yet come out; it was the big thing that was going to change everything and be really advanced and really nice. Things have changed so much since then. Godot 3.5, and the 3.6 we'll be releasing soon, are just totally different from Godot 3.0. Now we have 4.0, we're about to release 4.2, and I think 4.6 and 4.12 are going to be totally different again. I'm really hoping to see that same sort of progression we've seen over the last six years or so.
So, I'm going to talk a little bit about what I'm going to talk about; that's fun. First, I want to cover the current state of things as of 4.2. We've been releasing betas and talking about what we've done so far, but I know a lot of people won't know exactly what's coming or what the state of things is, so I want to introduce that. Then I want to talk about our priorities: how we decide what we're going to work on next, and how we deal with conflicting needs and desires from the community. Then we'll talk about short-term plans. These are things we've discussed and are all fairly certain we want in the engine, and quite soon, where quite soon means maybe six months, maybe a year: we'll get working on them soon and hopefully have something ready and merged into the engine on a shorter, more reasonable timeline. Finally, I want to talk about some long-term ideas. You'll notice I don't call these plans, I call them ideas, because they're things we're talking about but don't know if we'll actually do, and if we do them, they may not look anything like what we have in mind now. But they're the sorts of things we're thinking about because we want to be ready for the next generation of things, and we want to be thinking about our future now so we don't get into a position where we have to go back and rewrite everything again. That's essentially what we already did for Godot 4, and we don't want to do it again.

An important disclaimer: I'm talking about new features today, but that doesn't mean new features are our focus going forward.
We are still totally dedicated to fixing bugs, stabilizing things, making things run faster, and making things easier. It's just that for a talk it's more fun to talk about new features, so that's what I'm going to do. Don't take any of this to mean we're not totally dedicated to fixing the things that are currently broken, or to smoothing out the things that are a little bit bumpy right now.

Okay, the current state of things. As most of you will know, we've got a few different rendering backends. First there's what we call the RD (RenderingDevice) renderer. It's our Vulkan-based renderer for modern systems: it works well on newer phones and on desktop devices, but it doesn't run on the web and it doesn't run on ten-year-old devices. For those we have the compatibility backend. The compatibility backend is very similar to the old Godot GLES2 renderer; it's actually GLES3, but the architecture is very similar to GLES2. The state of that is: it's mostly done. The 2D renderer is totally done. It's very fast, it works really well, and for 2D games targeting mobile, low-end desktop, or the web, it's working quite well and we're very happy with it. For 3D there's still more we want to do. Some of the more advanced 3D features, the ones you might not use all the time, haven't been implemented yet; I'll talk more about that and our plans for it later. For now, just know it's not 100% where we want it to be in terms of features, but it's getting there and should be there fairly soon.

Then there are the RD backends. This is our high-end stuff, the exciting stuff, where the new features go, because it targets higher-end, more recent devices. (Okay, we're having microphone problems.) So, that's the RD backends, and we're quite happy with where they are.
Things are performing okay, things are looking really good, and I know you're going to see some demos later this weekend that look really good, so be excited for that. But we know performance can be better, and we're working on it. For some scenes it's great; for other scenes we need to do a bit more work. Certain effects are also not up to the quality we want them to be. This is a time-constraint kind of thing: we get things working so that 90% of the time they look good, and if your game falls into that other 10%, it looks terrible and doesn't work at all, and we need to do a lot of work there. We're getting there.

Finally, and this is a bit more technical: right now we use a very rigid rendering pipeline. That means we make certain assumptions about how your game should be rendered: what order the steps happen in, what sorts of things are going to be done. This is nice for us as developers, because we just say okay, A, B, C, D, that's the order things happen in, and we're good; things are fast and we can optimize for that. But if your game doesn't fit what we have in mind, all of a sudden you run into barriers. You say "I want to do this", and we smile and say "no, you can't, sorry". Or you say "I'm doing this thing that seems so simple, why is it slow?", and we go "oh yeah, that is going to be slow, but this other thing will be fast", and you say "but I don't want to do that, I want the thing I'm already doing". No, it's going to be slow, sorry, good luck. So we want to provide more flexibility. We want to provide tools so that users who are really pushing Godot can tweak or really control the things they want to control. That will let people get really high performance when they need it, and really flexible, alternative styles when they want them.
All of this is subject to the constraint that regular users should not have to worry about any of that stuff; things should just work, which is the state now for most people. You throw your stuff in, it works, you don't think about it, you move on.

Okay, so, talking about priorities. The number one priority is ease of use. It's something that makes Godot really unique: you don't need years of experience as a game dev to start using it. You just start, and things work, and when you get more experience you learn new things and are able to push it further and further. That's what we want to keep. We don't want to sacrifice that early core experience in order to let people make really big games or really different things. We want that core experience to stay the same, while also making it possible to do something more advanced, or something really big, once you know more and know how to take advantage of the tools available.

Next up are stability and performance. I touched on those before, but for a game engine these are of the utmost importance, and of course we find them extremely important too. Your game needs to run, it needs to run as fast as possible so you can do as much as you want, and it needs to actually work on all the devices you want to ship on. We care about that too, because we ship Godot to every Godot user: the Godot editor is Godot. Our users need Godot to work well, and our users need their games to work well, and those two things are really closely aligned, so we tend to spend a lot of time stabilizing and making things work well.

The next priorities are ones we're thinking more about going into the future. They don't reflect how we've been making decisions so far; they reflect
how we're making decisions about what we do next. I've already touched on this quite a bit, but we want the engine to be more flexible. We want more advanced users to be able to do more diverse things with the engine, and we really want to empower those users to do the things they want to do. For that, the engine needs to be more flexible. In terms of rendering, that means things like custom rendering passes; if you don't know what that is, I'll talk about it later, so I won't spend time on it now. We also want users to have more powerful shader authoring. Right now we have what I think is a very approachable way of authoring shaders, a lot easier than getting started with raw GLSL or another shading language, but it's limiting, and we want to provide something less limiting that gives power users more control.

The other thing is scale. When you're making a 3D game in Godot, you can only make the game so big; there are technical limitations that I'll talk about later, along with our solutions, and in particular how we'll lift them without making small games extremely hard to make. Right now, making a small game is super easy and making a huge game is extremely hard. We want to keep small games easy to make while making large games possible too. That's primarily about larger scenes.

Okay, now the process piece. We have a lot of ideas. A lot of us have diverse experience in the industry, we come from different places, and we think we know what should be prioritized. So we meet a lot, we talk about things, we write blog posts and proposals, we go on social media and talk about what we think we should work on, and then we listen to people. That's easier to say than to do, but we really try
to stay engaged with people. We have public weekly meetings that people can come to and talk. We get a lot of feedback on our blog posts, so we try to write lots of blog posts to get more feedback; we get a lot of feedback on our chat platform and on social media. We really try to engage with people, understand what their needs are, and figure out how to solve those needs in a way that makes sense within Godot. In doing that we have to prioritize, because we get conflicting requests. Some users say, "I want to make Assassin's Creed in Godot, please make that possible for me", and we say okay, we'll try, we'll do what we can. But ultimately, the needs we spend the most time on are when people say, "I'm making this game, I'm partway through my project, and I'm running into this limitation; I cannot finish my game unless it's lifted. Everything else works great in Godot, but I'm hitting this one thing. What can we do?" That gets super high priority, because we want to focus on the known needs of our current users. We're really trying to avoid the speculative case: "Hey, I'd love to use Godot, but it has to support this, or else I won't touch it." Then we implement that thing and they don't come anyway, and now we're just maintaining a feature nobody uses. We really want to implement things we know people are going to use, things the people already using the engine know they're going to use. For us, that's ideal, and that's why those three priorities are up top: to me, they are the way of serving our current users. We make things easy to use, we make sure our current users are empowered and that things perform well so they can make the things they want to make, and we make sure everything we have works well.
The flexibility and scale priorities are about expanding things for our current users and for future users: hopefully you'll be able to do things that weren't possible before and make the game you wanted to make but kept on the back burner because it just wasn't going to work in Godot. Ultimately, though, we have to deal with conflicting demands, and it can feel a bit like a tug of war. On one hand, you have users with incredible systems, because they're software engineers with the best computers, who use Godot and want all the fancy effects. On the other hand, we have a lot of people working on laptops or older systems, especially people in developing countries, who use Godot; they don't have NVIDIA graphics cards, and they certainly don't have an RTX 4090. For them, we could implement ray tracing, but it would do nothing for them and take a lot of our time. So we try to find a balance. Ultimately we do our best to support both sides, because next-gen features today are totally standard features next year. We want to support those things, and in a decent amount of time, but within the constraint of making sure we're helping everybody.

This is a slide from our 2023 poll, specifically about GPUs, and what you can see is that high-end devices only make up about a third of our current user base. We had about 8,000 responses, 7,671 to be exact, which I think is a pretty good representation of the community. So if we focus on low-end or medium-end features, we're serving two-thirds of the community, and if we focus on high-end features, we're serving one-third. That's a significant amount either way, so we have to focus on everything a little bit, but it does help us decide what we're doing.
It also goes without saying that we have a lot of mobile users. We can debate high-end versus medium-end features, but the current generation on desktop is going to be the current generation on mobile in ten years. To properly support mobile, we need to be thinking about next gen and last gen, and we need to make sure everything works well; we can't just abandon a huge part of our user base. Again from this year's poll: 35% of the community is targeting web, 32% Android, 9% iOS, and everybody targets Windows on top of that. This is an important part of our community. We don't want to leave them behind, and we don't want to focus only on things that just a subset of our users can benefit from.

With the process out of the way, here are a few things we think we'll start working on very soon. Some of this we've already started and expect to finish soon; some of it we're hoping to start soon, and we'll see when it gets finished. I promised to come back to the compatibility backend, which is near and dear to my heart. We're working on some of what we consider to be the more advanced effects. In particular, things we'd like to work on soon are SSAO, screen-space ambient occlusion, which is responsible for making corners a little bit dark. It adds a lot of realism to your games, I promise, if you don't know what it is; if you do know what it is, I trust you're excited. Then volumetric fog. We have that in the RD renderers, and we're going to implement it for the compatibility backend, but in a different way that won't look quite as nice. Right now the volumetric fog in Godot is very crisp, extremely fast, and looks amazing. On the compatibility backend it won't look as good or be as physically accurate, but it will run on low-end
devices, and it will run in the compatibility backend, so we're quite excited about that. And then glow; of course, we're all familiar with glow and we love it. On top of that, things like reflection probes and lightmaps, which you just need for 3D games, aren't supported yet, so the types of 3D games you can make in the compatibility renderer are still quite limited. These are things we'll implement as soon as possible, and from the user-facing perspective we'll make them as similar as we can to the RD renderers. Ultimately, though, they'll use less accurate, more optimized versions of these effects that run much better on that hardware. They won't look quite as good, but they'll be fast, and that's our priority for the compatibility backend.

Now, this is my favorite slide, and I'm going to go through it really fast because it's all technical stuff; I have a lot of notes written here and I'm not going to cover it all. The acyclic graph. This is extremely exciting for me. It's a tool that solves a very specific problem we have in the Vulkan backend, a problem I hope nobody here knows we have, but it's a problem for me, so this is something that's going to make the lives of rendering engineers better. What it does is run behind the scenes, keeping track of all the GPU commands and GPU resources we use, and then reorder all the commands to make sure that everything that can run in parallel does run in parallel, and that things that need to wait on other things properly wait on them. What that means for us, and for users who use the RenderingDevice directly, is that you don't have to worry about parallelizing your GPU code or putting barriers between different commands. You just tell Godot what you want to do, and it will reorder things so that everything is as fast as possible and just works.
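As a rough illustration of the dependency rule such a command graph relies on, here is a sketch in plain GDScript. This is not engine code, and all the names are invented for the example: a later command must wait on an earlier one only when their resource accesses conflict; everything else is free to run in parallel.

```gdscript
# Sketch only: hazard detection between recorded GPU commands.
# Not a Godot API; names are invented for illustration.
class_name CommandGraphSketch

var commands: Array = []  # each entry: { "reads": Array, "writes": Array }

func record(reads: Array, writes: Array) -> void:
	commands.append({"reads": reads, "writes": writes})

# `later` must wait on `earlier` only on a conflict: read-after-write,
# write-after-write, or write-after-read on any shared resource.
func must_wait(earlier: Dictionary, later: Dictionary) -> bool:
	for res in later["reads"] + later["writes"]:
		if res in earlier["writes"]:
			return true
	for res in later["writes"]:
		if res in earlier["reads"]:
			return true
	return false
```

Two commands for which `must_wait` is false never need a barrier between them, which is exactly the parallelism the graph can recover automatically.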
So it's going to make the experience for rendering engineers a lot more comfortable, and it should come with a lot of performance benefits too. Naturally, it will improve GPU performance, but it will cost a few CPU cycles, so we'll be tweaking it to find a good balance. It's something we think will make a lot of things just work very easily, and the cost should be quite minimal.

Now, this is a problem, a really big problem. Vulkan and the modern APIs introduced a concept called pipelines. A pipeline bundles together everything you need to know about rendering before you render something: it's got your shader, your material info, your blend mode, things like that. A problem we see in pretty much every game that releases these days is that the first time an object comes on screen, you get a bit of a stutter. AAA studios are struggling with this; we're struggling with this. Back in the old days of OpenGL there was no good solution for it. We came close to a solution, but in the end it only worked on some GPUs with certain drivers, and really only on newer stuff, and it's still a problem today with Vulkan. We've got some ideas for reducing it. It's quite complex, but it's similar to what we did in Godot 3: using what we call an ubershader approach, and then compiling pipelines in the background. If you want to talk more about that, come find me later; it's a very technical topic, very exciting for me, but not well suited for today.

And last, we will hopefully soon start work on a Metal renderer. Metal is the equivalent of Vulkan, but it runs on Apple devices. We have a translation layer that translates Vulkan to Metal, and it works; that's about it. It works, but it's not as fast as we want it to be, it comes with a lot of problems, and it's super hard to debug. So we want to write a Metal backend. We have the infrastructure there; we just need to do it.
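The ubershader idea mentioned above can be sketched like this in GDScript. This is a hypothetical illustration, not engine code: draw with a slow but always-available ubershader pipeline immediately, and swap to the specialized pipeline once a background thread finishes compiling it, so nothing stutters on first appearance.

```gdscript
# Hypothetical sketch of background pipeline compilation; not a Godot API.
var specialized_ready := false
var compile_thread := Thread.new()

func request_specialized_pipeline() -> void:
	compile_thread.start(_compile_in_background)

func _compile_in_background() -> void:
	OS.delay_msec(100)  # stand-in for an expensive pipeline compile
	set_deferred("specialized_ready", true)

func pipeline_for_draw() -> String:
	# Never stall the frame: fall back to the ubershader until ready.
	return "specialized" if specialized_ready else "ubershader"
```

The key design point is that `pipeline_for_draw` never blocks; the worst case is a few frames rendered with the less efficient ubershader instead of a visible hitch.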
Okay, and now something more exciting: the compositor. The compositor is our solution to customizing rendering. A compositor is something that organizes rendering passes, and what that means in practice is that it lets users control the order that rendering happens in. We've got some tools in Godot to control the order of things, but they're not perfect, and I'll explain what I mean. Right now, if you want to render something in Godot, you start from a viewport. The viewport renders the camera, then we do a step called culling: anything the camera can't see is thrown out, and if you have occlusion culling turned on, things hidden behind other surfaces are also thrown out. Then we go to render the scene, and that looks like this: we do a depth prepass, then a TAA pass (objects that are moving write out to special buffers for TAA), then an opaque pass, then we draw the sky, and then we draw transparent objects on top. The TAA pass and the opaque pass are where all your opaque objects are drawn, so that's most of your objects. The trouble is that for certain effects you want something to be rendered first, or rendered last. Right now you can kind of do that: you can use what we call the render priority setting and give it a high priority. But if you want something rendered first and it's in the opaque pass, it's still going to be rendered after everything in the TAA pass, so it actually ends up rendering somewhere in the middle. And vice versa: if you want something rendered last, but it's dynamic, the best you can do is the middle; you can't make it render last.
The solution we've come up with, using the compositor, is to allow users to create their own custom passes. That looks like this: you do your depth prepass, then TAA pass one and opaque pass one, then TAA pass two and opaque pass two. If I want something rendered first, I put it in pass one, it renders, everything else can go in the other passes, and then rendering continues as normal. This is extremely important for using stencils, for custom depth testing, and for implementing terrain blending, which is an important feature if you want a modern terrain system; it's also used for geometry decals and other custom effects.

I'll give an example. A common problem users have: you want to render a boat, you've got a water plane, and water fills your boat, because of course the GPU doesn't understand that water should not be inside a boat. So users have to come up with all kinds of ways of making sure the water is not drawn where the boat is, and that can be really annoying. GPUs have a tool for this called stencils. Stencils let you write to a special buffer, the stencil buffer, while drawing the boat, and the boat gets an ID, maybe 1. Then when you go to draw the water, you just say: don't draw where the boat is. And you get something like this. This screenshot is actually from a Unity project, because we don't support stencils yet, which is why I'm talking about this; it's actually a pretty cool project, a totally open-source, community-driven demo project. It was unfortunately cancelled, but it's still pretty cool, and I'm happy it exists.

Okay, back to this. That's our solution for that problem, but the compositor does more, because there are other effects users want, and you sometimes want to do more than just add another opaque pass; obviously there are more things in your scene than opaque objects.
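The boat-and-water technique described above can be mimicked in plain GDScript to show the logic of a stencil buffer. This is a conceptual sketch, not a rendering API (Godot does not expose stencils yet, as noted above); pixels are just array indices here.

```gdscript
# Conceptual sketch of stencil write + stencil test; not a Godot API.
const BOAT_ID := 1

# Drawing the boat also writes its ID into the stencil buffer.
func draw_boat(stencil: Array, boat_pixels: Array) -> void:
	for p in boat_pixels:
		stencil[p] = BOAT_ID  # stencil write: "the boat is here"

# Drawing the water skips any pixel the boat already claimed.
func draw_water(stencil: Array, water_pixels: Array) -> Array:
	var drawn: Array = []
	for p in water_pixels:
		if stencil[p] != BOAT_ID:  # stencil test: don't draw over the boat
			drawn.append(p)
	return drawn
```

On a real GPU, both steps happen in fixed-function hardware alongside the depth test, which is why this is so much cheaper than any mesh-trimming workaround.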
For example, sometimes you want to render portals or mirrors, where you render the scene again from a different angle, but you want it to render to the same viewport; you don't want to render to another viewport and then copy everything over, because that's needlessly expensive. Again, at a high level rendering looks like this: render camera, culling, render scene. We want users to be able to do this: render your camera, call render scene, and then, before post-processing and the copy to the 2D renderer, just do it all again with a different camera. You can do really fancy effects like portals combined with stenciling, where you only render the area that's inside the portal, and then you can render a portal really efficiently and really easily, and it should work well. And look at all these little boxes: this is an advanced thing, but those are advanced effects. This is an example of a situation where we're trying to keep the normal process the same for regular users while adding something advanced, so users who really want to do those things can. It's going to take a bit of knowledge and be a little more complex, but you'll be able to do it, and ideally we'll have some really good tutorials in the future. What you end up seeing here, if you're rendering with multiple cameras, is that you do this full pass twice, which allows you to do a lot of things. And of course, if the second pass isn't rendering a sky, it doesn't render the sky, and if you don't have transparent objects, it doesn't render transparent objects either.

Then, something I won't go into deeply, because I want to get through the presentation within my time slot: we're also going to allow inserting callbacks between these passes. That will let you write GDScript code, or C# code, or
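A between-pass callback might look something like the following from script. This is purely a hypothetical sketch: no such API exists in Godot today, and the function and parameter names are invented for illustration.

```gdscript
# Hypothetical sketch only: the between-pass callback described above
# does not exist yet; names are invented for illustration.
func _before_transparent_pass(render_state) -> void:
	# Code inserted here would run after the opaque pass and sky, but
	# before transparent objects are drawn, e.g. to issue extra draw
	# commands for a portal camera into the same viewport.
	pass
```

The point is simply that user script runs at a fixed spot inside the frame, between two passes the engine would otherwise run back to back.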
whatever scripting language you use, and insert code between the passes to add new rendering commands that we wouldn't otherwise be putting in there. That lets you script the pipeline a little bit. It's not a scriptable rendering pipeline, but it allows you to do some things you wouldn't be able to do otherwise.

On to shader templates. This is what a shader looks like in Godot; like I said before, we keep things simple. You write to some specified outputs, you have some specified inputs, and all of those come from our big shader. We maintain shaders that are thousands of lines of code so that your shaders can be tens of lines of code. But ultimately what happens is we take your shader code and just shove it into ours: that whole shader I showed before ends up sitting within our shader. What that means is you're kind of stuck with the decisions we make about the shader. If we make the shader slower to make it more correct, or to make it look a little bit better, you just have to live with that decision, because your code gets inserted into ours. What we want is for users to not be stuck with the trade-offs we're making; we want users to be able to make their own trade-offs and implement their own shaders so they can do whatever they want. The big question is what to do about all that surrounding stuff, which until now has been necessary. That's the idea behind shader templates: they're the everything else around your shader. Our idea is to add a new resource type, ShaderTemplate for lack of a better word, and when you assign it to your material, instead of using the Godot template you'll use your own. Ideally, that lets you implement totally different rendering styles, like a toon shader where you accumulate all the lights and then apply a stepping function to the accumulated lighting.
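For context, this is the kind of shader being described: a complete Godot shader today can be just a few lines, because the engine wraps it in thousands of lines of its own shader code. A minimal spatial shader:

```gdshader
// A complete Godot spatial shader: you only fill in outputs like ALBEDO
// and ROUGHNESS; everything else (lighting, fog, tonemapping) comes from
// the engine's surrounding "template" code that your code is pasted into.
shader_type spatial;

void fragment() {
	ALBEDO = vec3(0.2, 0.5, 1.0);
	ROUGHNESS = 0.4;
}
```

Shader templates would let you replace that surrounding engine code rather than only the body of `fragment()`.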
That's a heavily requested feature, by the way, and you'd be able to do it, while other users who don't care about it don't have any extra weight added to their shaders to support it. You can do it yourself. The idea is that this is something you could specify in a WorldEnvironment, on a material, or on a shader itself, and it would just work. And of course, if you want, you can just delete 90% of our shader code and have super fast shaders, because you know exactly what you want, and what you want is to render a lot of things even faster than you normally can.

Now, this is a fun topic; you can already see what's going to happen here. We've worked really hard on improving our TAA in the run-up to 4.2, and now we want to leverage that really good TAA to optimize a bunch of effects, like GI, AO, and shadows. This is an example of the difference using TAA makes on shadows. It actually didn't make it into 4.2, but it should be merged for 4.3. Up top, you can see the shadows look a little bit blocky close up, and with the TAA optimization, that's what it looks like instead. We're hoping to apply the same idea to SSAO and to our GI. What that means is that if you're okay with the TAA-smoothed result, you can turn your quality settings to the lowest, make shadows super fast, and get a free performance boost; if you're not okay with it, you can keep the higher-quality version. It also allows us to move to some more modern techniques. This is SSAO, which adds the little dark spots in corners and cracks. On the left is what we currently use in Godot (this isn't a screenshot from Godot, by the way, but it's the same technique), and on the right is a more modern version of the same technique. It looks a little bit nicer, a little crisper, and it takes about 70% of the time to render.
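Coming back to the toon-shader request for a moment: today you can approximate a stepped look inside the per-light `light()` function, but you can only quantize each light's contribution separately; stepping the accumulated lighting, as described above, is exactly what shader templates would make possible. A minimal per-light version:

```gdshader
shader_type spatial;

// Per-light toon approximation possible today: each light is quantized
// on its own. A true ramp over the *accumulated* lighting would need a
// shader template, since accumulation happens in engine code.
void light() {
	float ndotl = clamp(dot(NORMAL, LIGHT), 0.0, 1.0);
	float band = step(0.5, ndotl); // hard two-band ramp
	DIFFUSE_LIGHT += band * ATTENUATION * LIGHT_COLOR / PI;
}
```

With several overlapping lights, the per-light bands sum together and soften, which is why this is only an approximation of the requested effect.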
So it's faster and it looks better; we want it.

Okay, now this is a fun buzzword: a lot of people will be familiar with deferred rendering. Godot uses a forward renderer, which means that we render our lights at the same time that we render an object; I'm not going to go any deeper into it than that. What a deferred renderer does is render the information about your objects to the screen first, and then you take all of that and render your lights after; you combine those slices and then render your lights. Realistically, this is a performance optimization. It's something you do when you want to have a lot of lights in the scene and really complex shaders. Forward renderers struggle with something called VGPR usage, and the solution, unfortunately, is to write a different type of renderer. So we'd like to have an option that you can toggle to turn on deferred rendering. It's less flexible, and in certain cases it's more difficult to use, but for people who know what they're doing, turning it on should just improve performance in a lot of cases. So it's an option we want to create, and we've set up our architecture so that it shouldn't be too onerous to add this. We'd also be getting rid of the depth prepass, which is another performance-saving tool.

We'd provide access to these buffers to VFX artists, too. There are a lot of cool techniques that people are familiar with from Unity and Unreal, and they want to implement them in Godot, and we have to say: sorry, you can't, we're a fundamentally different type of renderer. By adding this, we would unblock a lot of those people, and we'd also unblock some other rendering techniques that just don't work in Godot right now. One of those is something called a deferred decal (also sometimes called a mesh decal). It's a very cheap way of adding detail onto things; a lot of times this is used to, say, add screws onto sheet metal without rendering a
100,000-polygon screw. You can just throw it on top, and it'll look quite good. Okay, and this is kind of what the process looks like when you use a deferred renderer: you start with some basic information, you calculate lighting (which is the bottom left there), and then you put it all together, and it ends up looking the same as it would in our forward renderer.

Okay, so another kind of technical thing, maybe something people are familiar with: streamable resources. This is mesh streaming and texture streaming. The idea there is that instead of loading the entire model and your 4K textures at once, you start with the low-res textures and the low-polygon version of the model, and then you scale up in accordance with how close things are to you, with time, and with how much VRAM you have. This is an optimization for larger game worlds, but it's an optimization we're increasingly finding that our users need, because people are adding more detail: they're using nicer meshes and larger textures, and they're bumping up against limits. Importantly, this decreases load time, so if you're switching between different scenes it makes that quite a bit faster, and it allows you to manage VRAM effectively. For those of you making bigger games, you know that in Godot, once you put too many things into your 3D world at once and you go over the card's VRAM limit, things get really slow, and your only option is to start removing stuff as fast as you can; we don't have an automatic process for that. This would allow you to set a limit that says: this card has 4 GB of VRAM, do not go over that limit. Then the engine can manage memory for you and make sure you stay under the limit. And that's important for scenes like this one. It doesn't look too complex, it's kind of stylized, it's really nice, but when you actually look at the detail required for something like
this, you can see there's a lot going on. These are high-resolution textures that look amazing up close and from far away, and this is a really high-polygon model, and if you want to render this efficiently, you have to have some form of streaming. And of course, we're all familiar with streaming bugs, so we're looking forward to introducing those into Godot as well. This is the downside of streaming: if you've got a slow hard drive, or if you've got limited VRAM, then suddenly your cutscenes might look like this for a second or two. Here's another famous example; I think this is on a PS4, and that's an important character, as far as I know.

Okay, moving on. We're always working on polishing our VFX pipeline. These days we're getting really good, really detailed feedback, and there are a ton of paper cuts for VFX artists; there's a ton of room for growth in this area, and people are already creating super cool things. This is a screenshot I pulled from a PR, coming in Godot 4.2, from QB, that really improves a lot of our particle systems. So we're working on improving this stuff. We want VFX artists to have more control and, obviously, more flexibility, but this is an iterative process: we get feedback, we fix a paper cut, we get more feedback, and as we resolve things we have a much clearer direction for where to go in the future. So this is kind of a baby-steps situation, where we're finding our way by moving slowly towards our end goal.

Okay, pretty stuff: SDFGI. SDFGI stands for signed distance field global illumination. It's our dynamic solution for bouncing lighting off of things and into other places. Here is what a scene looks like without SDFGI turned on, and then when you turn it on, it looks like this. Notice how you get the darkness inside the castle, because the only light getting in there should be what comes in through the teeny tiny windows and
the door, and that's it, so it gets much, much darker. This is how SDFGI sees the world: this is a visualization of the signed distance field. You take a simplified version of the scene and use it to calculate lighting (this is just the lighting contribution from SDFGI), and then you put that together, and that's what you get. This is the perfect case for SDFGI: I think it looks amazing, it runs super fast, and it works really well. But that scene is an ideal candidate for SDFGI: it's got a big light source, it's got nice chunky geometry, and the performance is good, but we know we can make it better. Where SDFGI particularly struggles is when you're inside a closed room and you have a small light source; then things look a little bit wonky, and we know we can improve that, so we are going to do that. We're going to improve SDFGI to have better performance, and we're going to add more accuracy. This is a weird visualization of how we do that: it's by separating our data into a few different formats, and then we're going to do some trickery with the uint64 format to get more accuracy. We're hoping this will eliminate light leaks.

On top of that, we want to add dynamic object support. Before, I called it a dynamic system, and it's dynamic in the sense that when you move the sun, all the lighting changes, but it doesn't capture lighting from, for example, your character who's carrying a lantern around. The contribution from the person won't be there; you'll just have the lighting from the lantern, and that can be a bit problematic.

Okay, I'm running short on time, so I'm going to blast through these and then open the floor for questions. These are some of our long-term ideas: we've been talking about ray tracing, we've been talking about a GPU-driven renderer, and we've been talking about writing a
pluggable renderer through GDExtension. If you're not familiar with ray tracing, it's a new technology that's not going anywhere, and it's the future of rendering. We'd like to introduce some effects using ray tracing, starting with shadows, GI, and reflections, and we're talking about eventually having a ray-tracing-based renderer. This just allows us to get closer to ground truth, which is, you know, realistic-looking stuff. But it's a very long-term plan for us, and it has to fit in with our plans for a GPU-driven renderer. Now, that's another modern technique; it's been around for ten years or so, and it's used in all the big games. It allows you to draw things faster, it allows you to stress your GPU a little bit more, and it works really well combined with ray tracing. So we're talking about how we can make something like this in a way that doesn't take too many resources away from other development, and as such, it's on a very long timeline for us.

And then finally, we want to have a pluggable renderer through GDExtension. The idea there is that you would write your own renderer, compile it into a GDExtension, and then you could ship that out; you could put it on the asset library, and people could just plug in a totally different renderer: maybe something that's optimized for non-photorealistic rendering, or something that's optimized for a specific type of asset or a specific style. You would just skip everything that we've written, all the work that we've done, and use your own thing. That's something we want to support, but we know it's going to be a lot of work and it's going to be hard to do right, so it's a long-term thing for us. Okay, any questions?

Q: Yes, I have two questions, if that's okay. First of all, you mentioned having sets of passes where one set could end and lead into another set
starting. Could that lead to a situation where a frame essentially never finishes rendering, because it's stuck in an infinite loop of rendering sets?

A: No. You specify how many passes you want to do in advance, and you specify what goes in each pass, so there's a limited number that you can have, and you choose what that limit is.

Q: Hello, I'm curious why vertex lighting wasn't implemented in 4.0. Was it because of priority, or why? Because there are many mobile devs, and also some devs that like to do retro-styled graphics, like me, or like the PS1.

A: Yeah, that's a good question, and the answer is unfortunately a bit long, so I'll give you the short answer: it's limited resources, and the difference between our hired resources and community resources. For certain things that are more medium priority and that take less technical skill to implement, we kind of wait until somebody from the community does it, because we want to be able to engage those resources. If we ourselves, the hired team, do everything that's easy, then there's nothing left for the community to do. So that's something that just, unfortunately, took a long time, but it should be coming in 4.3; someone has implemented it, and there's a PR.

Q: And could it be added to 4.2, or is it too late for that?

A: It's too late, unfortunately. Our goal is to release 4.2 in the next couple of weeks, so it came in after we had frozen features and we were in late betas. It's something that's coming soon, but not next week, sorry.

Q: Me again. The RD renderer: is that targeted specifically towards PC hardware, or are there considerations, or do there need to be considerations, for it to be suitable for consoles, the Xbox, and particularly the Switch?

A: So the RD renderers actually run on almost all of our supported platforms; the only exception right now is web. They run over Vulkan, which is supported on some consoles, and it's
supported on desktop and mobile. Soon, browsers are going to ship WebGPU, which will allow us to run the RD renderer on the web, and the RD renderer also abstracts over D3D12, Metal, and secret console APIs that we can't talk about. So the RD renderer will run everywhere, and the Compatibility renderer will only run in places that run OpenGL, so it's slightly more limited, but right now it's what supports mobile and web the best.

Moderator: Three more questions, and I have one from this side, I guess.

Q: So you mentioned being able to write custom shader templates. Are those going to be written in the usual shader language that Godot uses, or are they going to be, like, GLSL?

A: They'll be raw GLSL. Our shader language is raw GLSL with a few things added on top of it, so we're hoping it's not too difficult for users, but it is a very advanced feature, and it's intended for advanced users, so it's not going to be quite as easy as what we currently offer with Godot shaders.

Q: Thank you. You didn't talk about Direct3D, and there is a PR for it; what is the plan with that?

A: The plan for Direct3D was to merge it for 4.2, but we decided not to in the end, as there were some tweaks we wanted to make, and we want to make sure that once it lands upstream, it's exactly how we want it to be. Unfortunately, being such a large feature that comes with hundreds of thousands of lines of code, we need to prioritize making sure that Godot itself remains lean and mean and easy to build and easy to work with, and merging something that's hundreds of thousands of lines of code actually takes a lot of process for us. Technically it works: you can merge the PR into your own branch, and it works great. But we want to figure out some things on the Git side and on the process-management side to make it actually feasible for us to ship builds with D3D enabled, and that just took a bit more time than we expected.

Moderator: Can you
repeat what he said, please?

A: Peter said this is a fun building to talk about D3D12 issues in, because there are, unfortunately, process issues around merging Microsoft code into an open-source project. Not that we're not thankful; we don't have to go into that here.

Moderator: We have time for one last question, if somebody wants to know something. Yep, I'll go to you.

Q: Thank you. If you allow people to plug into the rendering pipeline at various points, will that also mean that you are less flexible in the future about changing how that pipeline works, because now people are dependent on its particular shape and the order things happen in?

A: Yeah, that's right, and that's one of the reasons why we've held off for so long on doing something like this. Ultimately, the callback the user receives is going to have a set of inputs and a set of outputs, and then we can't change those: we can add more inputs and we can add more outputs, but we can't take anything away. So if we decided that, for example, for the performance of our default rendering pipeline we wanted to totally remove some buffer, we'd have to figure out some way of maintaining it for the users who are using it. It's something that takes a long time to add, because we need to be confident that we'll still be happy with the way things are in five years, and that's a hard thing to do. But yeah, that's a great question: we are quite a bit more limited now in the changes we can make.

Moderator: Thank you very much, Clay!

[Music]
Info
Channel: Godot Engine
Views: 30,707
Keywords: 2023, Clay John, Day 1, Keynote, Redmond, godot, godotcon2023, godotcon2023 eng, godotcon2023 ov
Id: MW3IFMvDTCY
Length: 53min 50sec (3230 seconds)
Published: Tue Nov 07 2023