Houdini 17.5 Launch Event

Captions
[Music] Welcome, everybody, to the launch of Houdini 17.5 and PDG. My name is Chris Aver, and I'm the director of marketing at SideFX. We are thrilled to be back here in Montreal. Every time we're here, we feel to the bottom of our hearts the passion and the enthusiasm of the community in the CG fields of video game development, film, TV, advertising, and motion graphics here in Montreal. And a big warm welcome to the people watching from around the world on the web: hello. SideFX is nearly 32 years old, and the really nice thing about that is that we have a big, solid foundation on which to build new features and enhancements into Houdini, and we have a deep roster of developers who love doing exactly that. So it gives me great honor to introduce the leader of those developers, Cristin Barghiel, the VP of product development, and senior product designer Scott Keating. [Applause]

Hello everyone, and welcome. This is the launch of Houdini 17.5, which is a dot release only in name, because in reality it packs a very big punch in what it has to offer. And indeed, it's great to be back in Montreal after only five months since releasing Houdini 17. That's a very short time between two pretty big releases from SideFX, and it's not because we're speeding up development; our pace is just as steady as ever. The reason we're meeting sooner is something completely different: a new technology that we're introducing, one we've been working on not just for five months but for three years and five months, almost to the dot. That technology is ready for its reveal, we're extremely proud to show it to you tonight, and we chose 17.5 as the moment to shine the spotlight on it. That project, of course, is project PDG.

This also makes the 17.5 release unlike any other from SideFX. Usually we try to aim for a nice balance between the features; not this time. We've channeled virtually all our energies towards making this a big night for one thing only, and that's PDG. This isn't to say that we won't have a couple of really nice non-PDG features to share with you, and Scott will do that towards the end of our presentation. But first, let's focus on PDG, which stands for procedural dependency graphs, and it's all about extending the visual procedural paradigm to the big picture.

Before we go into exactly what it is and what it does, let me give you some background on the motivation for it. It comes from two challenges that our industry, you, our users, are facing very frequently in your work. One of them is in VFX in film, and it has to do with ROPs. Any time you try to do something with ROPs that goes beyond stringing together a couple of sims and a render, it quickly degenerates into an awkward scenario where you have a very obtuse system of connections between the nodes: very hard to visualize, very hard to debug, very hard to extend. And that's simply because ROPs are not designed that way, and good luck trying to build a VFX pipeline on ROPs; that would be even harder. And then there's the efficiency problem: even if you're doing a sim and a render, which are very simple, very quickly you hit another roadblock with ROPs, due to the way they cook. They make it very hard for you to get to that very important first pixel, and Scott will show you what that means in a few minutes. Now, the other inspiration comes from games, and just to pick two things, open worlds and photogrammetry.
The challenge with procedural open-world creation is that there is such a huge number of dependencies to deal with. You have SOPs that help with geometric dependencies; you have Houdini Engine, which does a really good job dealing with the individual asset and making local changes within its context; but there's still a missing overarching dependency system, in Houdini and elsewhere as well. If you want to build that, it's a lot of work. And then, again, the efficiency question: how do you design a system, how finely grained do the dependencies have to be, so that you know exactly what to update when something moves, when a road moves a bit, or a pond, or a building? Something like that does not exist, or does not come easily, right now; you really have to work for it. So here's the theme: there's something to do with dependencies and something to do with efficiencies, and that's what PDG tries to solve.

To understand how we came to the architecture for PDG, I need to do one more thing and explain the step we had to take mentally, the abstraction, basically, that we had to implement, and it's pretty straightforward. If you look up close, every context in Houdini, objects and ROPs and whatnot, does something pretty well, but each is very specific and different from the others. Take just one step back, though, and suddenly you realize they're all about doing some tasks. This is the abstraction. It's just one abstraction away from finding that common denominator, that unifying language that lets you express everything in terms of one thing: tasks. And that extends to operations that have nothing to do with Houdini, operations that you encounter and have to deal with every day in your work, Houdini-agnostic stuff, and the minute you bring those on board, you're really talking about pipeline tasks. Just to play with the mental model a little bit more, imagine that we take away the Houdini component entirely. The model still stands, and in fact what we're dealing with is generic tasks, or a generic pipeline.

So, going back to everything displayed: what is PDG? PDG is a system that handles all tasks and all dependencies, and does so with very strong efficiencies built in. And since everything in Houdini is represented as a node, we went about doing that too: we introduced a new node context in Houdini called TOPs, task operators, and just to have fun, the icon we picked for it is a top hat; that's why that's there. What TOPs are, basically, are wrappers around PDG. PDG is an API, and TOPs are the nodes that you're very familiar with as Houdini users. That means that, together, PDG and TOP nodes represent a visual task management system with finely grained dependencies and very strong efficiencies built in, and that's quite the mouthful. Depending on who you are and how you relate to PDG, it could simply be ROPs on steroids, ROPs 2.0; it could be a fast sim and render; it could be your generalized task management system; or it could be the heart and soul of your whole pipeline. It's that whole gamut, and all of it is available through nodes, which means that hardly any scripting is required. That was part of the point: not having to be a programmer to use all of this. You'll see that in a minute. So what we're introducing tonight is this tightly coupled technology between an API, the PDG API, and the node system.
It's fully available to you in Houdini 17.5, that's Houdini Core and Houdini FX, and it is also custom-tailored for pipeline and middleware in a new product that we're launching tonight called Pilot PDG. We'll give you all the details about Pilot PDG when we release the software, but for today we'll just focus on the technology. So, to give us an initial tour through PDG and TOPs and highlight some of their key uses in VFX, here is Scott Keating.

Thank you, Cristin. OK, so let's dive in a little deeper so you can get maybe a slightly better understanding of why we even did this, and what we actually did. Think of the most basic set of tasks that you might want to do in Houdini: simulate something, and then render the result. This is what it would have looked like in ROPs: you would literally do this first, and then you would do this. The problem is that there are actually more complex things happening here. All ROPs really did was describe when things should happen: simulate first, and then render. But if you want to do anything more sophisticated, you need to start thinking about the things inside of these jobs. You can't think of these huge, coarse objects; you need to think about what each one of them actually does. If we zoom in a little bit, you can start thinking about the things inside of these jobs as the frames: there are a certain number of frames for a simulation, and a certain number of frames for a render. So that's great, we're thinking about frames now, these smaller units, instead of the big-picture things. And if we look a little closer at how you actually work with frames: if you're working with a simulation and you have a bunch of frames like this in a row, and then a render job, there's not really any connection between frames, generically. A frame is just the thing you're going to use for some result; it's a slice of time. But a simulation actually has an implied dependency, which is to say that frame one comes first, and frame two needs frame one to exist before frame two can start. It's a natural dependency: a simulation requires the previous results to continue. If you look at a render in isolation, though, a render can happen in any order; once you have all the stuff you need to render, it doesn't matter. But when you have the two together, a simulation followed by a render, there is another implicit dependency, and that dependency is that frame 1 of the render relies on frame 1 of the simulation. So simulations rely on the previous frames, but renders rely on any frame of the simulation being complete. And as soon as we started thinking about this, we realized that what we're really describing here, instead of frames, is dependencies. We're not describing what the work is; we're describing the way those things rely on each other.
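To make that dependency idea concrete, here is a minimal, hypothetical Python sketch, not the PDG API itself: sim frame N depends on sim frame N-1, render frame N depends only on sim frame N, and anything whose dependencies are satisfied is ready to run.

```python
# Toy model of the sim/render dependency pattern described above.
from collections import defaultdict

FRAMES = range(1, 101)
tasks = [("sim", f) for f in FRAMES] + [("render", f) for f in FRAMES]

deps = defaultdict(set)
for f in FRAMES:
    if f > 1:
        deps[("sim", f)].add(("sim", f - 1))  # a sim frame needs the previous sim frame
    deps[("render", f)].add(("sim", f))       # a render frame needs only its own sim frame

done = set()
waves = 0
while len(done) < len(tasks):
    # Everything whose dependencies are met is "ready"; in a real scheduler
    # all ready tasks would run in parallel, so each render frame can start
    # the moment its matching sim frame lands, not when the whole sim ends.
    ready = [t for t in tasks if t not in done and deps[t] <= done]
    done.update(ready)
    waves += 1

print(waves, "scheduling waves for", len(tasks), "tasks")
```

The point of the sketch is only that expressing the graph as dependencies, rather than as two monolithic jobs, lets renders interleave with the simulation instead of queueing behind it.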
So let's jump into Houdini and take a look at how this actually works. I'm just going to run this Pyro sim so you can see how quickly it renders in Houdini generically, and then we're going to put down a TOP network, which is this new node context. We drop that in the network, we dive inside of it, there's the little top hat icon, and inside here we put down a node that you're pretty much already familiar with: the ROP Geometry node. You literally point it at the geometry you want to write to disk, in this case a simulation. We're going to render out 100 frames of this simulation, and we can use this new task graph to do it. Right away you see a huge difference in the UI. Immediately you see the number of tasks in this operator: looking at the numbers on the left, you can see how many are going to be cooked; the bright green ones are the currently cooking tasks, and the darker green ones are finished. You can see we're cooking along here, we get a nice live preview, and a nice little spinner telling you what's working. But you can see that it's weirdly slow. You just saw how fast we cooked that other simulation; why is this so slow? Let's turn on the performance graph, and you can see that further and further down this graph we get these darker red nodes, meaning it's getting slower and slower over time. That's odd; why is that happening? Essentially, it's because we haven't told it there's a dependency between these frames. So we turn on "all frames in one batch," re-cook the graph, and you'll see that we get speeds approximating what we did in the viewport. Except you'll notice that the timeline is not playing back while I'm doing this: we're not forcing this session to cook that stuff, which is what would have happened in ROPs. Instead we're doing this in the background, so you can be working in a session of Houdini while, in the background, it's generating results to disk. So here we are: we've got a nice, complete simulation.

The next task we want to do is render, so let's add a render node. We can literally put down a Mantra TOP, and that just points to a Mantra node. We're going to just generate the node, meaning just see how many tasks it's going to do, and we already know there are a hundred. You can also see that I can middle-mouse-click to get some PDG-specific information here, and everywhere I click, we draw a line of dependency back to the previous work. Then, as you would normally, I put down a File SOP and point it at a file, but I'm actually pointing at something called the PDG input, which is the result of the simulation. Now I can click around and preview in the viewport which frame is going to be rendered at what time, and then I'll cook this node. As it's cooking, you'll see the render going, but something interesting: if you look at the top, there's a little green checkmark under ROP Geometry, and that means we're not re-cooking it. In the ROPs workflow, you would have re-cooked those results before you started your render, but TOPs is intelligent enough to check on disk to see whether the results already exist, and if they do, it won't do the work again unless you ask it to. So you get this nice way of working where, if you're just prototyping a setup, you're reducing the amount of work you have to do with each step. But let's clean up those files on disk, which we can do through a menu, because each TOP knows the geometry it created, and rerun it from scratch. You'll see now that, as soon as a work item is ready in the ROP Geometry, we can preview it directly from the middle-mouse-button information, and also that as soon as a frame is ready to be rendered, it starts rendering; it doesn't wait for the whole simulation to finish. And then we can pop open a frame and preview it directly inside of Houdini, without leaving the context that you're working in.
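For reference, a network like the one in this demo can also be assembled through Houdini's Python module (HOM). This is only a rough sketch: the TOP node type name "ropfetch" and the parameter name "roppath" are assumptions that may differ between Houdini builds, and the ROP paths are placeholders.

```python
# A rough HOM sketch of the sim-then-render TOP graph from the demo.
# Node type and parameter names here are assumptions; verify them
# against your Houdini version before relying on this.
import hou

topnet = hou.node("/obj").createNode("topnet", "pdg_demo")

# Fetch an existing geometry ROP that writes the simulation to disk.
sim = topnet.createNode("ropfetch", "write_sim")
sim.parm("roppath").set("/out/write_sim_geo")   # assumed parameter name

# Fetch a Mantra ROP; one work item per frame, depending on the sim above.
render = topnet.createNode("ropfetch", "render_frames")
render.parm("roppath").set("/out/mantra1")      # assumed parameter name
render.setInput(0, sim)

topnet.layoutChildren()
```

As the presentation stresses, though, none of this scripting is required; the same graph is built entirely by wiring stock nodes.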
So again, this is really a booster for a workflow that already mostly existed in Houdini, but it's now a much more efficient use of your time and your render resources, and much easier from a setup point of view, because you can directly see the results in the network itself, whether you're working locally or over a network. Typically, in a pipeline like this, the next step would be to create a little preview video of your render, and we have a bunch of utility nodes in TOPs that let you do that. This is an FFmpeg encoder, but you can see that we're drawing a whole bunch of work items in here, a hundred of them, and that means we're going to generate a hundred videos, one for each frame. We don't actually want that in this case; in some cases you might, but probably not. So what we need to do now is start thinking beyond the concept of a frame, because a frame is not enough information as soon as you get outside of things that are time-dependent. This node that I just placed down, called Wait for All, is a special new kind of node in TOPs; we call them partitioners, and essentially they're a way of describing a dependency. It's a new type of thing: instead of working on geometry, the way SOPs would, it works on a dependency. There are a bunch of these in TOPs, and we definitely don't have time to cover them all, but there are nodes that partition things by an attribute, or even spatially: break things up across space and bring them back together. In this case, the Wait for All does something very simple: it literally says, wait for all of the incoming work to be finished before you send work downstream. So as we cook this network again from scratch, you see a little spinner and one partition, this little rectangular block, waiting for all the work above it to be done. As soon as it is, you get a nice little checkmark, we get an FFmpeg-encoded video, and we can immediately look at it. So again, we're taking these very common tasks that you would normally do in these kinds of scenarios and letting you construct them with a network. All stock nodes: no coding to do this, no scripting, it's all built in.
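Under the hood, once all the frames exist on disk, a preview-encode step like this one amounts to a single FFmpeg invocation over the rendered sequence. A hedged sketch of what such a task might run, with placeholder paths and frame rate (these are standard ffmpeg options, not anything specific to the TOP node):

```python
# Roughly what a "make a preview video" task boils down to once every
# frame is on disk: one ffmpeg call over the sequence. Paths and the
# frame rate are placeholders for illustration.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "24",                 # playback rate of the preview
    "-i", "render/frame.%04d.exr",      # rendered frame sequence (placeholder path)
    "-c:v", "libx264",                  # common H.264 preview codec
    "-pix_fmt", "yuv420p",              # widest player compatibility
    "preview.mp4",
], check=True)
```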
Let's move on now to something slightly more complicated. Another thing people did with ROPs that was really useful is this idea of creating a wedge, and a wedge was essentially a way of doing variations on something: you would set up some parameters and say, I want five random values for this parameter, and I'm going to override some other parameter with them. That was possible in ROPs, but in TOPs it's a much more fluid way of working. Here we've got this very simple asset, just a SOP asset; it builds a brick wall and has a couple of parameters, and we want to create variations of this geometry. So we put down a Wedge TOP, we say we want five variations by changing the wedge count to five, and I can immediately cook this node and see the jobs, even though it's not doing anything yet; it's just generating that work ahead of time. Then I can say, OK, I want to create two variables. The first one is going to be height, and I'm going to create a range of values from two to ten, so we'll get five values between two and ten. Then we can target a parameter on that node, using this convenient little dropdown here; this chooser lets you pick a parameter. And then you can see that I can click on a work item and see that height has been added as an attribute on it. So you can embed data on a work item to be interrogated by TOPs later, and as I click these work items, you see the wedge variable being evaluated in SOPs. I can go ahead and add a width, set a different range, and copy the other paths up. What I'm doing here is just setting up the values I want to use in my wedge: essentially five different walls, each a different height and width. You can see now that, as I go through the work items by clicking on them, I get an update in my viewport of exactly what I'm going to get. So again, you're not working blind, which is what used to happen in the old workflow: you would set a range of values, render it, come back, and only then see the results.

Now, the next thing you'd want is to make more variations based on this, and that was technically possible in ROPs previously, but very difficult to set up. Because we have these very clear dependencies in TOPs, we can just add another Wedge node after the first one, and I'll set its wedge count to ten, so for every incoming work item we're going to create ten more. Again I'm going to use, in this case, the number of bricks, set some range of values, and cook this to generate the items, and you can see now that I can click on each work item above and see that we're generating ten tasks for every one above it. Then, in SOPs, I'll do something a little different, just as an example: I can reference the wedge attribute with an @-style expression in a parameter, and PDG will substitute that value at runtime with whatever you've set. So you can see now that, as I click the work items, that attribute is being evaluated by TOPs to give us different walls with different numbers of bricks.
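A back-of-the-envelope model of how these two stacked wedges expand, as plain Python: the first wedge emits five work items, each carrying a paired height and width, and the second wedge emits ten bricks-count variants per incoming item, for fifty in total. The width and bricks ranges below are placeholders, since the demo doesn't state them.

```python
# How two stacked wedges multiply out: 5 items x 10 variants = 50.
from itertools import product

def wedge(lo, hi, count):
    """Evenly spaced values across a range, like a float wedge variable."""
    step = (hi - lo) / (count - 1)
    return [lo + i * step for i in range(count)]

walls = [
    {"height": h, "width": w, "bricks": b}
    for (h, w), b in product(
        zip(wedge(2, 10, 5), wedge(3, 8, 5)),  # first wedge: 5 items, paired variables
        range(10, 20),                         # second wedge: 10 per item (placeholder range)
    )
]
print(len(walls))  # 50 work items, each carrying its own attribute values
```

Note that two variables on one wedge are paired (five items, not twenty-five); it's the second wedge node that multiplies the count.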
So we've taken a pretty mundane task and more or less automated it here, but let's take this a step further and say, OK, now I want to render each one of these, essentially redoing the step we did earlier. We're going to vary the geometry, which is what this first block of nodes is doing, the ones I just set up; then, in the bottom right, we're going to render each one of those wedges, each one of those variations; then we wait for all the frames to be rendered; and then we use another utility node, which actually uses ImageMagick, to create a contact sheet of all of these images. So again, we're taking very common pipelining tasks, tasks that artists go through every single day, and automating them. We create that montage, and this is the result: we created all the wedges, and then we built this image. This isn't just for this slide; this is the actual image rendered out at the end of this network. So now you can not only generate all this stuff but compare it in context, possibly even in the scene you're rendering it for; in this very simple setup, you can see five versions of the green wall, five versions of the orange, the blue, and so on.

Now we'll take this another step forward and make a unique simulation for each one of those. Here we are, just in Houdini; we've set up a standard rigid body simulation. None of this is TOP-specific; in fact, it was all set up with the rigid body shelf tool. The only real difference is that the geometry being read in, the wall, is coming again from the PDG node: we're channel-referencing a value generated by those wedges, so that we get the right geometry at the right time, and then we just have a standard simulation. There's nothing in here that's TOP-specific. We're just going to run this basic simulation, but we want to do it for every one of our walls. So here we are, we've set up a ROP Geometry just like in the first example, and if we cook that, you can see the simulations being generated very rapidly. These can be run in parallel: they can be split into separate tasks in the background and run simultaneously, so you're not waiting for one simulation to finish before starting the next; you're doing them all at once. As Cristin mentioned, the time to first pixel is decreased as much as possible. So now let's hook up exactly the same network we created before, except for rendering these simulations out. Again, you can see multiple frames being rendered at once, because there's no dependency between them; in parallel, we're grabbing available work items and rendering them to disk, and you can see them showing up over here on the side. Then we use some of those dependency nodes I described, the partitioners, to say: once we have all of the frames, gather the matching frames together. Every frame one of a simulation, get all of those; every frame two of a render, get all of those; and put them together to make a sequence of these montages. The result is something like this: not just a static image that shows you all of the results, but all of the frames together. Because we can describe more complicated dependencies, we can say: I want all of the frames, in order, one at a time, put into a sequence, and then render the entire sequence. And the reason this is important, apart from being kind of neat-looking, is that when you're doing this kind of wedging, the goal is to find the sim that's the best, the one you like the most, and to do that you really want to see them in context like this; you want to compare them directly. If you're sitting with a director or a supervisor, having something like this is an excellent way to get to the result as quickly as possible.
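The "gather matching frames" partitioning can be pictured with a few lines of plain Python. This is only a stand-in for the idea (in real TOPs it's expressed with partitioner nodes): group every frame 1 across the wedged sims, every frame 2, and so on, so that each group becomes one montage in the final sequence.

```python
# Grouping rendered frames by frame number across five wedged sims,
# so each group can become one montage frame. Paths are placeholders.
from collections import defaultdict

renders = [
    {"wedge": w, "frame": f, "path": f"render/w{w}/frame.{f:04d}.exr"}
    for w in range(5)        # five wedged simulations
    for f in range(1, 101)   # a hundred frames each
]

by_frame = defaultdict(list)
for item in renders:
    by_frame[item["frame"]].append(item["path"])

# A partition is ready (and its montage can start) as soon as all five
# wedges have delivered that frame, independently of every other frame.
print(len(by_frame), "partitions of", len(by_frame[1]), "frames each")
```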
Now let's add a smoke wedge into this example as well. Here we're rendering everything at once, and if you look at each column, you can see that as soon as a wedge is ready, we begin rendering a rigid body sim, and as soon as enough of that rigid body simulation is finished, we start rendering a smoke wedge. What that means is that all of them are happening at the same time: as much of your compute power is being used simultaneously as possible. As soon as one rigid body sim is ready, a smoke simulation starts, so you're maximizing your compute power. And then, as I said before, we go all the way down to creating a montage, so the result of this is a static image, a rigid body sequence, and a smoke simulation, all rendered to disk, and the final result is this. So again: a really powerful way of setting up dependencies that was very difficult to do previously, and it frees you up to worry about setting up the simulations and letting everything else happen more or less automatically once you've built these graphs. And of course this can be applied to anything; it doesn't have to be wedging, it can be any model variation you're feeding into the system.

And again, this is really important from an artist's perspective. It's not just about producing as much stuff as possible; it's genuinely useful from the perspective of, say, dealing with a client. The client says: I need a motion graphics video, I need something that looks a little bit like a seashell and a little bit like a sail; can you please show me a bunch of variations? That happens all the time, and so, yes, you can do that, and it would be great to give them not just five variations but to say, look, here are thousands of variations of this thing, all together, so you can test which one you want, without waiting on potentially tens of thousands of these. The other thing about this particular example is that it's not just an example: Entagma, who are awesome, are doing a set of tutorials explaining exactly how this was done, and it will be released very quickly after the release of Houdini 17.5, so you'll be able to jump in and start learning how this stuff really works as quickly as possible.

But it's not just about variations you want to compare to each other; sometimes you literally want multiple versions of a thing. So here's a more complicated example, producing a more complex asset than the rigid body simulation, where we want to create different versions of these jersey barriers, with different texture maps; and not only that, we want to take the high-res model, bake it out into a set of texture maps, and then apply those materials back on. Here's just an example of the wedging; we don't have to go through this every time, I think you're getting it by now, but you can see that we can click through the variations, and in this case we have three connected wedges, so wedges of wedges, variations on variations. Again, there's some nice information in the middle-mouse-button menu to tell you which wedge number this is, how many of them exist, and which parameters I'm overriding. The interesting thing about this setup is that it goes outside of SOPs and into COPs: we're baking Mantra textures, so we're actually taking the high-res geometry, automatically generating low-res geometry for it, baking each pair together, outputting the maps, doing some extra compositing on them, and then outputting the result as a final render with the new materials applied. So again, this isn't just about saying, oh, I want five versions of something; it's about being able to tie all the tasks you need to do with this thing together in one holistic place, where you can say: render something, give me the result, send it to compositing, add the texture maps, apply a material, and then render. And again, the result, all together like this, shows you the different variations possible in this particular setup.

Here's another example of that exact same setup, except it takes advantage of something else that's really interesting about the technology, on the PDG side: the API is very simple, and without a lot of work you can generate your own nodes that operate on things outside of Houdini. In this case it's the same setup we just showed you baking with Mantra, except now we're baking with Substance, and not only that, the final montage is made with Photoshop.
These two nodes were put together by third-party developers pretty quickly, as prototypes, to see: can we plug into this system, is it useful for us? And they were able to do that without a lot of effort, in really just a couple of days, plugging into something that has nothing to do with Houdini; this is completely outside of Houdini. And of course you get a result as you would expect, with some nice information applied to it, like which version it is, embedded in the filename. So again, this really frees you up to connect parts of your pipeline that don't necessarily have anything to do with Houdini, all controlled by this one mechanism.

So far, this has all been about: I want to generate a whole bunch of stuff, tons of stuff, and dump it onto disk somewhere, and that's great and really useful. But there's the other side: why do I need that stuff? I want to do something with it, and you want to be able to do that as an artist; you don't want to just take everything and load it into Houdini. So let's look at how you can use SOPs integrated with TOPs directly. First of all, we've got this little tool here, and it just generates a bunch of plants. We can click through these work items and see all these plants, and we want to use them as a library for, essentially, a layout tool. I'm just going to create that asset library, which really just means writing those things out to disk. And here we are in a TOP network that does something really interesting: it looks at a folder on disk, and it uses wildcards to say, give me everything in that folder, and, not only that, give me information about it, like the file name. I can then use that in SOPs as a tool that goes into that library, grabs any item in there, and randomly picks results. Then I can just add more of these and have the library update automatically: it goes onto disk, it says, oh, there are more files here, I'll get you those as well, and now you can work in SOPs, pulling information from TOPs: again, an interactive workflow.

Now, the issue here is that these models are actually quite high-res; we hit a 20-million-polygon limit pretty quickly, and when we cross that threshold, we start drawing these bounding boxes as placeholders. That's just how the viewport works in Houdini 17.5, and 17 in fact, and it's usually what you want: you want to keep your viewport nice and interactive, culling geometry if you need to. However, if you're in a layout circumstance and you really want to get a feeling for which plant that is, you can't tell by looking at a bounding box. So let's see how we can use, again, a hybrid workflow to make this easier. Here we have that same wedging setup, except now we've embedded a SOP network in the middle of it, and this SOP network takes advantage of compiled SOPs to do a very simple operation: it takes the high-res plant at the top and creates an extremely low-res version at the bottom, and we just add that into our asset library. Now, in SOPs, I can ask which file name this is and, if I'm rendering the proxies, get me the one that's .proxy. Now I can be an artist: I can take my tool, and you see it stays very interactive; I can paint more and more and more of these, and because they've been poly-reduced so much, I get a good idea of where these plants are going. And if I want to, I can now switch to the high-res ones, because PDG has managed which filename I've got and which one you're asking for, and it can hand off the correct information as you do it.
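A hypothetical sketch of this library lookup in plain Python: scan a folder with a wildcard, pair each high-res plant with a ".proxy" companion file, and let the caller ask for either one. The folder, file extensions, and the ".proxy" naming convention are all assumptions chosen for illustration, not what the demo's network literally uses.

```python
# A hand-rolled stand-in for the wildcard-driven asset library described
# above. Paths and the ".proxy" naming convention are illustrative only.
import glob
import os
import random

LIBRARY = "assets/plants"

def plant_library():
    """Map each high-res plant file to its low-res proxy companion."""
    files = glob.glob(os.path.join(LIBRARY, "*.bgeo.sc"))
    highres = [f for f in files if ".proxy." not in f]
    return {f: f.replace(".bgeo.sc", ".proxy.bgeo.sc") for f in highres}

def pick_plant(use_proxy=True):
    """Randomly pick a plant, returning the low-res stand-in while painting."""
    hi, proxy = random.choice(sorted(plant_library().items()))
    return proxy if use_proxy else hi

# Re-scanning the folder on every call means newly added plants show up
# automatically, matching the "library updates itself" behaviour in the demo.
```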
So it's very easy to say: show me the high-res, because I want to make sure the lighting looks right, but while I'm working, show me the low-res. And again, this is a truly hybrid approach: TOPs existing inside of SOPs as an artist tool.

But let's take a step back and think about the whole pipeline. Here we have Crag. Normally, when we do these videos with Crag, he's being pummeled by things, or he's smashing something; here we said, you know what, it's just a peaceful, quiet walk. This is his off-time. Now, if you were actually generating a scene like this, let's say this is a shot in your film about Crag, and this is something that needs to get done. Depending on who you are in the pipeline, when you think about a shot like this, you actually think about completely different things; each person has a different perspective on it. For instance, if you're an animator, when you look at this shot you probably see something like this: you see a rig, maybe it has textures, maybe it doesn't, and that's what you're dealing with, plus maybe the background, because you want a feel for where he's walking around. But you don't need the trees, you don't need the lights, you don't need any of that information; you're truly just working with the rig. From this perspective, Crag is not really a character at all: he's a rig. If you're an effects artist, you probably don't care about the lights, you may not care about the texture maps, you don't care about the rig; you want a geometry cache. You just want the output from the animator, and you're going to add effects onto it: maybe dust when he hits the ground, maybe fireflies in the air, whatever it is. You only want specific pieces of information, and from your point of view, those are the only things that matter. And finally, maybe you're doing look development, maybe you're a lighter: you want to pull in as much detail as possible, the texture maps, the full background, the full character as a geometry cache, so you can do your job in the most efficient way.

There are a lot of ways to approach this from a pipeline standpoint. You can say: I'm an animator, I'll save a hip file with my rig, and if you need to see it, I can give you my hip file, and you can make sure you have the asset, load it, and look at it. Or, as an effects artist, I can localize some of these files and try to embed them. But it becomes complicated: you're working with files that you really shouldn't be touching; somebody else has a file that's in progress, and you definitely should not be modifying it in any way. So really, in a scene like this, you don't want to think in terms of hip files; you don't want to think, I need the thing that built this. Instead, you want to think: I need Crag, and I'm an animator, so I need his rig. PDG can actually be used to do that. You can think of it as a purely pipeline tool, something that happens to use Houdini in this case, but it's really about grabbing the correct piece of information.
So imagine that at the very top of this network we need to ingest some sort of shot description, whatever that happens to be. Maybe it's just a CSV file describing the assets for each shot; maybe it's a folder structure somewhere on a network saying shot one is in this folder, with the required assets in nested folders; or perhaps you're going to Shotgun. All of the things I just described are possible with PDG, including the Shotgun integration, so you can directly connect to Shotgun and grab this information. Once we have that, once we know what's in a shot, we just need to start doing some work. PDG isn't just about pushing things blindly through a pipeline; it can interrogate the information. It can ask: what is the file name, what's the asset type; if it's a database and you've applied metadata, let's pair that up so we know what we're actually dealing with. The reason this network looks large is not that it's very complicated; it's just that we keep splitting the data up more and more to get more specific information about each asset. Once we've done that, we can do something really interesting: we can build a hip file. No hip file exists with these items in it, and we can create one using something called a Houdini command chain; there are generic versions of this as well, but we're focusing on the Houdini ones. What this allows you to do is send commands to a Houdini session using HOM, the Houdini Object Model. So PDG can say: you've given me a whole list of assets; I want to instantiate those assets in a hip file and save it to disk in a local folder so you have access to it. And it literally just does that, with some very basic HOM code, some Python, and if you glance at it, it's really just: am I an HDA? Then install the HDA. Am I keyframes? Then apply those keyframes to the rig, and so on.

Obviously you don't want to deal with that network day to day, mostly because it's not necessary to interact with it once it's built. Once somebody has constructed it, you can create an asset out of it, and if you're used to Houdini, the idea of making an asset is no surprise; of course you can. Where this becomes really interesting is, first, that you can just drop this asset into a hip file and start running it, which is great; but it also means that parts of your pipeline can now be asset-ized. They become modular things that can be connected with other parts of your pipeline, and because they exist as assets, they can be updated, replaced with other assets, or checked into some sort of asset management system. Suddenly your pipeline is not a static, rigid thing of four or five thousand lines of Python code; instead, it's something where you can say, I just need this one piece, the hip-file-builder part, and put that in my scene. So you can tie these things together and get really good, interactive pipeline actions.
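As a hedged sketch, the "build a hip file on demand" step might look something like this in HOM. The asset-list format, HDA paths, and keyframe data below are placeholders invented for illustration; the HOM calls themselves (hou.hda.installFile, parm.setKeyframe, hou.hipFile.save) are real API.

```python
# A sketch of an on-demand hip-file builder driven by an asset list.
# The asset dictionary format here is hypothetical.
import hou

def build_shot_hip(assets, out_path):
    hou.hipFile.clear(suppress_save_prompt=True)
    obj = hou.node("/obj")
    for asset in assets:
        if asset["type"] == "hda":
            hou.hda.installFile(asset["file"])        # make the HDA available
            obj.createNode(asset["operator"])         # then instance it
        elif asset["type"] == "keyframes":
            node = hou.node(asset["target"])
            for parm_name, keys in asset["channels"].items():
                parm = node.parm(parm_name)
                for frame, value in keys:
                    key = hou.Keyframe()
                    key.setFrame(frame)
                    key.setValue(value)
                    parm.setKeyframe(key)             # apply animation to the rig
    hou.hipFile.save(out_path)                        # the on-demand hip file
```

The design point the talk makes is that once this logic is wrapped in an asset, the hip file it produces is disposable: it gets rebuilt whenever someone asks for that combination of rig, animation, cameras, or lights.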
Here's just an example of saying: OK, I want shot one, I want the cameras, the lights, and the geometry. When I cook this (and these nodes are inside that asset, by the way), we're going to generate this look-dev scene, and you can see that essentially what's happening is that we're looping over everything we requested, lights, cameras, and so on, and sending a command to Houdini saying: load that piece of data. You can see in the middle-mouse-button information here that the output is a hip file, so you can access it directly, and the result looks like this: we've pulled in the shot, the camera asset, the lights, the set, and you can immediately look through the camera and start seeing what the scene actually looks like. And again, this is on demand: this hip file does not exist until you need it. So now, things in your scene, things in your shot, are no longer rigid; you can just say, I need these things, please give them to me.

On that note, let's say you're an animator and you need to modify something that exists; maybe you're starting a second shot and want to start from something that already exists. In this example we've split the rig and the keyframes into separate files, so they exist separately from each other: the rig has no animation, and the animation is its own asset. Now I can say: I've asked for the rig, I've asked for the animation; there's no camera, but you can see here's Crag, here's his rig, and I can play the animation because we've applied it to that rig. But this is a live rig: I can go in and start editing keyframes now, and because the keyframes are separate, they don't live on the rig, they live individually, it's perfectly safe for me to do this. And that's what you want: if you just want to see a rig, it should never be dangerous to do that. You should be able to say: please give me the rig, give me the keyframes, I'm going to make some edits; especially if you imagine layout tasks earlier in the pipeline, where maybe you do have a rig and you're doing basic setup, or you're doing look development. Very, very important.

OK, so we've got all these pipeline tools, it all works together, and that's awesome. But so far, everything has been based on this idea that I have some amount of work and I want to break it up into a known number of things: there are a hundred frames, or I want a rig, so that's one thing. And that's mostly what you want: to know how much work you're going to do. But there are concepts possible in PDG that were absolutely impossible to do previously in Houdini. So let's look at a scene like this: we've got a bunch of crowd agents, they run through this field, and every time they hit one of these jelly beans, they explode and go flying. In a typical workflow, even with PDG, if we used the exact same setups we've described so far, we would basically say: first of all, I need to find out where these things are going to happen. And as soon as you think "I need to find that out," consider multiple simulations, and this isn't even a wedging example; imagine you don't know what you want yet. Every time you run a crowd sim and change the number of landmines or the number of agents, there's no way to know the outcome until you run that simulation. There's absolutely no way to know it in advance. So if we treat what we've described so far as a static network, we'd have to do something like this: run the full simulation first, just like we did with our rigid body simulations in the first example, and once that whole simulation has run, we have a static number of things, and we can run those Pyro sims in parallel. Essentially we're just accumulating the points, saying this one was triggered, that one was triggered, and when we're done, we run all of those simulations at once, because they don't really need to talk to each other.
And that's great; that's actually very useful, and in fact a very optimal way of running this. But, as a thought experiment: what if we could start simulating a Pyro sim as soon as an agent touches one of those objects? What you would get instead is something like this: as the simulation plays, you immediately start rendering Pyro simulations as soon as you know one of those hits exists. This is something called a dynamic network in PDG. It means we don't know in advance how many work items there will be; tasks are generating new tasks as they come across them, and that means you can say: every time an agent touches one of those things, please start generating a Pyro sim with these settings at this point in space. This is essentially what that network looks like; this is literally the entire network. First, we have our crowd playing, and you see these little dots start filling up. Once we get to around frame 12, we find mine hits; you can see we've got three, then five, filling up in the middle, but as soon as we have a mine hit, we start rendering a simulation on the far right. So we're getting results as fast as possible; there's no way to do this any faster. As soon as a sim is available, you start rendering it, and it does this based purely on what the crowd sim is doing. Then, at the bottom, we have another one of those gather dependencies, which says: we have to be sure that all of the sims for a frame are ready before we start rendering. It waits, gathers them as they come in, and starts rendering as soon as possible. You'll have noticed that it starts rendering final frames even while the simulations are still running, because it can guarantee there are no more frames coming before that one, so it's safe to render. Suddenly you have this really powerful way of working where, if your goal is to get this result as fast as possible, dynamic networks let you do it.

Now, having said all that, the reality is that in most film VFX and television productions, these simulations could literally be a day long; they're not a short amount of time, and they could be days long, depending on the type of simulation. So in most cases you actually don't want a dynamic network, because you don't know how many jobs are going to be created. Imagine you set up a crowd sim, clicked simulate, went home, and generated 7,000 high-res Pyro simulations; that would be possible in this case. In most cases that's not what you want; you want things statically known, you want to be able to say: I need to know there will be 300 jobs when I'm done, and all the stuff we talked about before this lets you do that, which is awesome. But start to imagine other scenarios, driven not by a simulation but by a human, a person interacting with this, something where you want to minimize how much work you need to do to get a result, because someone is going to change the inputs and you don't know how. That has some application in film and TV and effects, but where it's really important is a games workflow, where an artist is sitting in a level editor building a level, they make a change, and you want to do as little computation as possible to get a result. Being able to track these dynamic dependencies is what would allow you to do that.
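A toy model of that dynamic behaviour, in plain Python rather than PDG: as crowd frames arrive, every mine hit discovered in a frame spawns a brand-new pyro task, so the total task count is unknowable until the sim has played out. The frame range and the random hit generator are stand-ins for the actual crowd simulation.

```python
# Tasks generating tasks at cook time: the defining trait of a dynamic
# network. cook_crowd_frame is a placeholder for the real crowd sim.
import random

def cook_crowd_frame(frame):
    """Stand-in for the crowd sim: returns any mine hits found this frame."""
    return [(frame, hit) for hit in range(random.randint(0, 2))]

pending = []                 # pyro sims spawned so far
for frame in range(1, 241):
    for start_frame, hit_id in cook_crowd_frame(frame):
        # In TOPs, each discovered hit becomes a new work item the moment
        # it appears; here we just queue a placeholder pyro job per hit.
        pending.append({"start_frame": start_frame, "hit_id": hit_id})

print(len(pending), "pyro sims were scheduled, none of them known up front")
```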
So, to give you some better examples of that, I'm going to welcome Cristin back to the stage to introduce our games section.

Thank you very much, Scott. The next chapter belongs to Ken, and I want to say a few words about him before he takes the stage. Ken is a senior software architect at SideFX; he's also the creator, lead architect, and lead developer of PDG. Pretty much, PDG is his baby. Ken has been holding his breath, in his own words, for over three years now, so hopefully he'll get to exhale at the end of the day today. He has iterated the architecture many, many times to make sure it's strong enough, stable enough, scalable and flexible enough, and also user-friendly enough that we can put it in your hands starting today. And he achieved this with the help of an amazing team of developers who have been working tirelessly ever since we started. Today is their day, so please join me in giving Ken and his team a big hand.

Thank you, Cristin, for the very flattering introduction. I just want to take a moment to give a quick shout-out to the team in Toronto, to Taylor, who is in the audience with us, and to Christy, if you're out there watching: we've all worked extremely hard to get to this point, and I'm just so proud to be up here representing all of us and our collective work. OK, back to the presentation. I'm here to tell you a little bit about PDG's fit with Houdini Engine and game development; you've already seen how PDG fits into effects workflows and film. Now, Houdini Engine probably doesn't require any introduction today. It's been out there for a couple of years now, making a pretty big impact in both games and film. It's capable of a great many things, and among them is generating big-scale procedural environments like this one. This environment is completely procedurally generated, and we see many elements in it: terrain, vegetation, a tower perched up there, a road in the distance, an airport, and even things like bridges and tunnels and so forth. Houdini Engine is certainly capable of generating all of these things, and that's great, but today it still has one significant drawback: it considers all of these elements as individual assets that are totally independent. By that I mean: if you want a different terrain tile, you select it, the interface pops up, you move the sliders, and you get a different terrain. If you want a different version of that road, you grab the road curve, move it, and you get a different road. That's all well and good, and way better than modeling the stuff manually, but in reality it still doesn't capture the fact that these things have relationships with each other. If I move the road and it's now cutting into the terrain, shouldn't the terrain deform in response? If I move that road and it's now floating in the air, maybe we should put a bridge support there. And for vegetation and things like that, I shouldn't be scattering onto the terrain until after the deformation is complete. Today, these dependencies aren't really captured by Houdini Engine itself. To capture them, you'd need to create a custom system, and that comes at great time and expense.
Or, if you don't have the budget and the time to do that, you end up not doing it at all, in which case you're left with a really messy workflow problem: you have a couple of hundred of these things scattered in your level, and you have to pick out each one and work on them one by one. That is really not efficient from a workflow standpoint or a computation standpoint, and it's really, really error-prone. So, with the advent of PDG, what we can now do, for the first time, is create visual procedural systems that take all these elements together, consider them holistically, and capture their relationships. And we do it, as you might guess, with a PDG graph. Here is one that describes the relationships between all of these elements, the terrain and roads and bridges and tunnels and rocks and plants, and how they all interrelate. We don't have time right now to get into the specifics, but don't worry: Kenny Lammers from Indie Pixel is going to be creating a full-fledged set of tutorials showing you, step by step, how we put something like this together. I'll also quickly mention that all the nodes here are stock nodes, so everything you're about to see in the next slide you can achieve completely without writing any code at all.

So let's take a look at what this actually looks like once we've built these relationships. I'm going to grab this road and move it up, and right away you'll notice that only the two terrain tiles where that move actually happened were impacted; the rest of the environment wasn't touched at all. Notice also the sequencing: the road was computed first, followed by the terrain tile, followed by the scattering. And notice that there's a bridge there now, where there was none before. So we get the automation, we get the right sequencing, and we also get the scale: those two terrain tiles are being computed at the same time, in parallel, so we're getting parallelism as well. We're not doing anything else here, just for clarity, so you can see what's happening; but in fact, while this is happening, I could grab another piece of road and do all kinds of other things, completely asynchronously, and the system would just keep on delivering. One last point: you might have noticed that bridge that just popped up. To Scott's earlier point about dynamic versus static, this is a very dynamic situation. When we first looked at the scene, there was no bridge, and when the curve was moved, suddenly there was one. Whether there's a bridge or not depends on what the user actually did, and we can't know that until they've finished placing the road. The other really nice thing is that everything you've seen here just happens to use Houdini Engine; it's not limited to Houdini Engine. In that graph, I could have put in any kind of custom nodes, and it would work just the same. Anything that involves asynchronous compute, anything that involves automation and scaling, you can achieve in the same way with PDG and the Houdini Engine API.
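A compact sketch of the dependency bookkeeping that makes that road edit cheap: when something changes, only the nodes downstream of it are recooked. The graph below is a hand-rolled stand-in with invented node names, not the Houdini Engine or PDG API.

```python
# Dirty propagation over a hypothetical environment dependency graph:
# moving the road recooks two tiles, their scatters, and the bridge,
# and leaves the rest of the environment untouched.
downstream = {
    "road_curve":     ["terrain_tile_3", "terrain_tile_4", "bridge_builder"],
    "terrain_tile_3": ["scatter_tile_3"],
    "terrain_tile_4": ["scatter_tile_4"],
    "bridge_builder": [],
    "scatter_tile_3": [],
    "scatter_tile_4": [],
}

def dirty(node, out=None):
    """Collect the node that changed plus everything that depends on it."""
    out = out if out is not None else []
    if node not in out:
        out.append(node)
        for child in downstream[node]:
            dirty(child, out)
    return out

print(dirty("road_curve"))
```

The ordering in the demo (road first, then terrain, then scattering) falls out of the same graph: a node cooks only after everything it depends on has finished.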
OK, so that bit was about Houdini Engine and the fit of PDG with games. I have one more important thing to talk to you about, and that is machine learning. Machine learning is one of these hot-button topics, it's already having a significant impact in both games and film, and it turns out that PDG fits really nicely with this important, upcoming field. That's because machine learning requires a ton of data; it needs a lot of processing to crunch through all that data as it does its training; and, finally, it needs a lot of trial and error to get right. These are all things PDG happens to be really, really good at. So let's take a look at how PDG can be applied to machine learning, and we'll show you this through a couple of examples. Keep in mind that these are just examples around terrain; they're here so we can clearly understand the relationship between PDG and machine learning, and see it in context, but they are not themselves ready for production yet.

With that in mind, let's look at the first example: fast terrain erosion. The idea is that we're going to take a piece of terrain and erode it, and we have lots of tools in Houdini to do that. It all works great, except that it takes a long time. So the question is: can we create a machine learning system that gives us pretty similar results, but hopefully does it a lot, lot faster? For those in the audience who aren't familiar with the process of machine learning, let me quickly lay out the steps. In general, machine learning can't learn anything unless you give it lots and lots of examples. So the first thing we do is create lots and lots of data: examples of an uneroded terrain versus an eroded terrain, show them to the system, and hopefully it learns. Each of these pairs gets flattened to a pair of images, because our machine learning algorithm happens to be image-based. On the left you see the original, uneroded height map, and on the right a color image, because after erosion we actually get a couple more channels: the debris and the water fields, which we store in the green and blue channels; that's why it's a color image. Once the data is ready, we create an appropriate machine learning model, by which I mean something with the right configuration to really capture the information we're showing it. With the model created, we run the machine learning training, meaning we show the model all the data we've gathered so it can learn. At the end, we look at how the model is doing. Hopefully it's doing well, but if not, we go back up, tweak the model, and keep doing this over and over until we've got something producing good results. At that point, hopefully, if we show it something it hasn't seen before, it makes a really good guess and gives us a good result.

So, with that in mind, let's look at what this looks like with PDG. Just as Scott showed you before, PDG is capable of crunching out tons and tons of variations, and that doesn't stop with effects. Here we create lots of different variations of the terrain, different roughness, different offsets, and so on, and create all these versions of the terrain that we're going to erode to produce pairs like this: on the left is the uneroded version and on the right the eroded version, and once again you see the coloration, because the blue and green channels represent the debris and water. Next, as described, we flatten this: on the left is the uneroded height map, in black and white, and on the right the eroded version with the water and debris, and we montage these together to form the image pairs; we can get lots and lots of those. At this point our data set is ready for machine learning, and now we need to figure out the right model to learn with. Generally, these days, we have a pretty good intuition about what kind of model to use, but there are no hard guidelines, and there are so many ways to configure these things: how many layers do I use? If I'm doing convolution, what's the filter size? What's the learning rate? These different configurations, in machine learning speak, are called hyperparameters, and because we don't really know what will work well, people end up trying lots of different things. Traditionally they tried them in serial, but now that we have PDG, instead of one at a time, we can say: let's try all of these things together, at the same time. What we're really doing here is hyperparameter space searching with PDG. So I'm going to wedge out a couple of different variations. At the top, you see I'm trying two different settings for the number of layers, and in the third node down, three different filter sizes; two times three is six total variations to try. Then we create the machine learning model and run the training. This is the only step that actually requires any machine learning expertise: you need somebody who knows the machine learning side to alter the training code a little bit based on the upstream wedge settings. But once we do that, we can run all six of these in parallel, and as the last step, we analyze all the results and produce ourselves a little report.
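The hyperparameter wedge amounts to a small grid search run in parallel. A sketch of the same idea in plain Python, where train_model is a placeholder for the per-wedge training script rather than the actual code used in the demo:

```python
# Two layer counts crossed with three filter sizes: six configurations,
# trained side by side. train_model is a placeholder for the real
# training job; the grid values are illustrative.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def train_model(config):
    """Placeholder: run one training job and return its final loss."""
    layers, filter_size = config
    return {"layers": layers, "filter": filter_size, "loss": 0.0}

grid = list(product([4, 8], [3, 5, 7]))   # 2 layer settings x 3 filter sizes

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        report = list(pool.map(train_model, grid))   # all six runs in parallel
    for row in report:
        print(row)
```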
Just like Scott showed you before, PDG is capable of crunching out tons and tons of variations, and that doesn't stop with effects. So here we're going to create lots of different variations of the terrain, different roughness, different offsets, and so on, and create all these versions of the terrain that we then erode to make pairs like this: on the left is the un-eroded version and on the right is the eroded version, and once again you see the coloration because we're using the blue and green channels to represent the debris and water. Next, like we described, we flatten this, so on the left you've got the un-eroded height map, which is black and white, and on the right the eroded version with the water and debris, and we montage these together to form image pairs, and we can get lots and lots of those. So at this point our data set is ready for machine learning.

Now we need to figure out the right model to learn with. Generally these days we have a pretty good intuition about what kind of model to use, but there are no hard guidelines, and there are so many different ways you can configure these things: how many layers do I use, if I'm doing convolution what's the filter size, what's the learning rate? These different configurations, in machine learning speak, are called hyperparameters, and because we don't really know what's going to work well, people end up trying lots of different things, usually in serial. Now that we have PDG, instead of doing it one at a time, we can just say: let's try all of these things at the same time. What we're really doing here is hyperparameter space searching with PDG.

So I'm going to wedge out a couple of different variations here. On top you see I'm trying two different settings for the number of layers, and in the third node down I'm trying three different filter sizes, so two times three is six total variations we're going to try. Then we create the machine learning model and run the training. This is the only step that really requires any machine learning expertise: you need somebody who knows the machine learning side to alter the training code a little bit based on the upstream wedge settings. Once we do that, we can run all six of these in parallel, and the last step is to analyze all the results and produce ourselves a little report.
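(As a rough illustration of that wedged sweep, and not of the actual TOP network shown on stage, here is a plain-Python sketch of the same two-by-three hyperparameter search run in parallel; train_model and the option values are hypothetical stand-ins.)

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

def train_model(num_layers, filter_size):
    """Hypothetical training entry point; stands in for the real
    training code that would build and fit one model variant."""
    loss = 0.0  # placeholder; a real run would train and evaluate
    return {"layers": num_layers, "filter": filter_size, "loss": loss}

if __name__ == "__main__":
    # Two layer counts times three filter sizes = the six wedged
    # variations described in the presentation.
    layer_options = [4, 8]        # assumed values, for illustration
    filter_options = [3, 5, 7]    # assumed values, for illustration
    configs = list(itertools.product(layer_options, filter_options))

    # Run all six variations at the same time instead of in serial,
    # then gather the results for the report step.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(train_model,
                                [c[0] for c in configs],
                                [c[1] for c in configs]))
```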
Once we've done this, we can take a look at the report, and it looks something like this. You can see the one in the upper left is clearly not doing so well; you can sense it by the way the lines are jumping all over the place, and you can intuitively see that in the bottom right corner things are converging a lot better. Indeed, once we look at the final results, the one in the upper left has pockmarks all over the place and is just a mess, whereas in the bottom right corner things are looking a lot better. So we pick out the best of these and show it something it hasn't seen before. One of these two images here is produced by machine learning; can you tell which? It turns out the top one is the one created with the actual simulation, while the bottom one is the one machine learning produced. The amazing thing is that it hasn't seen this terrain at all; it's a totally new sample, and it just guessed the result, and it got it. It looks very, very similar, only it's something like 50,000 times faster, which is pretty crazy. What's also really cool is that it learned the debris and water fields as well, so you can use this as a preview, or use it as a starting point and carry on the work from there.

So, crazy speed from machine learning. But you might argue that sometimes the speed doesn't matter: what's the difference between something that produces a result in one second versus 0.1 seconds? You might be right about that, unless, of course, that blinding extra speed gives us different ways of creating stuff, different ways of interacting, and then it becomes really exciting. That's what I'm going to show you in the second demo: how we can take ordinary sketches and turn them into realistic-looking terrain.

In order for machine learning to produce realistic-looking terrain, we need to show it realistic-looking data; there's no magic here. And what better place to get realistic-looking terrain than the USGS? They have satellite scans of effectively the entire planet, and, us being Canadians, we're partial, so we thought: let's take a chunk of the Rockies, train on that, and see if we can produce something sensible. That's what we're going to show you next.

The process, just like before: the first step, as always, is getting our hands on the data, and in this case we're going to download it from the USGS website. Then we generate height map tiles based on that data, and once we have the height map tiles we create these peak and valley masks, quote-unquote sketches, so we can get image pairs between the sketches we generated and the actual terrain, so that machine learning can learn a mapping between them. Unlike the first example, where we made up what the terrain was going to look like and generated it totally from scratch, which was an example of data synthesis, in this case we're getting our hands on existing data, so this is an example of data augmentation: modifying that data a bit and preparing it so we can hand it off to machine learning.

So once again, let's take a closer look at how this is actually done. Step one, downloading the terrain data. The way the USGS website works is that you enter the coordinates of the chunk of terrain you want, and it gives you back a text file listing the zip files you're supposed to download. I'm just showing you four here for the sake of clarity, but in reality this text file could be hundreds and hundreds of lines long. That is just not something you want to do by hand, so what we're going to do is leverage the power of the stock nodes we have in Pilot PDG.
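(For a sense of what those stock nodes save you from writing by hand, here is a minimal plain-Python sketch of the same ingest-download-decompress chain, assuming a hypothetical urls.txt with one zip URL per line; the real USGS listing and the TOP nodes involved differ in the details.)

```python
import urllib.request
import zipfile
from pathlib import Path

# Hypothetical listing file: one zip-file URL per line, standing in
# for the text file the USGS site returns.
urls = Path("urls.txt").read_text().splitlines()

out_dir = Path("terrain_data")
out_dir.mkdir(exist_ok=True)

for url in urls:
    # One "work item" per line: download the archive...
    archive = out_dir / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, archive)
    # ...then decompress it to get at the height data inside.
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out_dir / archive.stem)
```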
You can see, for instance, the third node down is a Text to CSV node that ingests that text file, and what we're doing is creating a work item for every line in it. That gives us a URL, which we can hand off to another node; near the bottom, the third one from the bottom, it says Download Files, and it takes a URL and downloads that file. Further down, the last node there is running a File Decompress to unzip the zip file so we can get at the actual data inside. So here you can see how the stock nodes really amplify your productivity, because without them you'd end up learning about some zip API and parsing all the text yourself, which ends up being a large glob of code, and not everybody is even comfortable doing that. At some point you might get forced into asking your programming team, hey, can you write me something that does this? With these stock nodes, everything just works, and it kind of democratizes the pipeline, so more people can do stuff they couldn't do before. One last thing: you might have noticed that in Scott's earlier demos those nodes were often pointing to other networks, whereas here what you see is what you get; it's really simple, literally do this, followed by this, followed by this.

So that's downloading the terrain data. Once we have it, we bring it into Houdini. We didn't have to use Houdini, we could have used anything that can handle the terrain data; we just happen to use Houdini because we have a pretty nice set of terrain tools inside. Now, we can't just use the data by itself, because each one of these terrain tiles is huge, about 100 kilometres on the longer side. That's way too big to present to machine learning in one piece, so we're going to chop it up into more bite-sized chunks and use each one of those as a sample. Then for each of these little tiles we run an HDA on it to mark the peaks and valleys: the peaks we mark in red and the valleys we mark in blue (there's a small sketch of this idea below). Just like before, we flatten that to form image pairs: on the left is the quote-unquote sketch that we've generated, and on the right is the original height map that we downloaded. That gives us tons and tons of pairs, so we now have our data, and we run it through the machine learning algorithm again, and it produces this result.

Once the system has learned this, you can just freehand draw and say: I want rocky terrain that looks something like this. And this is incredibly powerful, because the problem with any kind of DCC is that it has so many knobs and levers that you just don't know what to do, but practically anybody can produce a sketch like this, maybe even me. You just sketch, and it produces Rocky-Mountain-like terrain.
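(Here is a minimal sketch of that peak-and-valley masking step, using local maxima and minima over a window; the window size is an assumption, and the on-stage HDA may well work differently.)

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def peak_valley_sketch(height, window=15):
    """Turn one height-map tile into a rough "sketch" image:
    peaks marked in the red channel, valleys in the blue channel.

    `height` is a 2D float array; `window` is an assumed
    neighborhood size in pixels.
    """
    peaks = height == maximum_filter(height, size=window)
    valleys = height == minimum_filter(height, size=window)

    sketch = np.zeros(height.shape + (3,), dtype=np.uint8)
    sketch[peaks, 0] = 255    # red  = peaks
    sketch[valleys, 2] = 255  # blue = valleys
    return sketch
```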
Of course, you don't have to sketch it from scratch if you don't want to: you can simply hand it an image and it will do something sensible. And of course, we are Side Effects, so we had to do Squab Island.

So to summarize this section, we learned three things. Number one, how PDG is really powerful for data synthesis for machine learning; secondly, how it can also do data augmentation for machine learning; and lastly, how we can use the compute power that PDG provides to try a lot of configurations all at the same time, or hyperparameter space searching, and get all that automation and report generation along with it.

By this time I hope you're getting a sense of the breadth of this technology. We started with effects, which Scott showed you, but it didn't stop there; I showed you the applications in games and environment generation, but it didn't stop there either; and then I showed you how this can apply to machine learning, and as you might guess, it's probably not going to stop there as well. This is an extremely broad and general technology, and that is why we are launching Pilot PDG as its own product.

With that in mind, one last bit to summarize and tie up the power of the stock nodes you've seen throughout this presentation. You've seen that we've been using them to montage images, manipulate videos, download stuff from the internet, unzip things, but there's a whole bunch of stuff we didn't get a chance to talk about. We have a whole set of nodes that help you calculate dependencies: partitioners, as Scott has talked about, and also mappers, which let you calculate dependencies in all kinds of different ways, through indices, spatially, you name it. Nodes to help you manipulate data: SQL, CSV, JSON, XML, all those nodes are there. Schedulers, and an important point here: PDG is not itself a scheduler, but it works with practically all the schedulers out in the market. Our own HQueue, Tractor, and Deadline come working out of the box, but if your scheduler is not on that list, don't worry, it's just a couple hundred lines of Python code, very easy to bind to it (a rough sketch of the role such a binding plays follows below). There's source control through Shotgun and Perforce, so all that stuff is there, and it's really easy to add your own custom nodes, as Scott has mentioned.
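(On that scheduler point: the real binding implements a specific Houdini interface that we can't reproduce here, so this is only a generic, hypothetical sketch of the adapter role such a binding plays between a task graph and a farm; the class, the submission command, and its flags are all invented.)

```python
import subprocess

class FarmSchedulerSketch:
    """Hypothetical adapter between a task graph and a render farm.

    A real PDG scheduler binding implements Houdini's scheduler
    interface; this only illustrates the responsibilities involved.
    """

    def submit(self, work_item_command):
        # Hand one work item's command line to the farm's own
        # submission tool (tool name and flags are made up here).
        return subprocess.run(
            ["my_farm_submit", "--", *work_item_command],
            capture_output=True, text=True,
        )

    def poll(self, job_id):
        # Translate the farm's job states back into the graph's
        # notion of waiting / cooking / cooked / failed.
        raise NotImplementedError("farm-specific status query")
```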
We really want to create an ecosystem around this, so that we don't just end up reinventing the wheel over and over again, and so the community can really help each other out. Unfortunately we didn't quite have enough time to put all that together for the launch, but it's coming in the not-too-distant future; look for an announcement from us. And with that, I'm pretty much done, and I'm going to hand it back to Kristin.

Thank you, Ken, that was fantastic. What do you think of those ML demos? Looking good? Yeah, they show a lot of promise. It's really just the early days, and ML will be in our sights for years to come, I think, both in terms of establishing the framework, quite possibly based on PDG, we have yet to decide fully on that, and in terms of solving domain-specific problems like what you've seen here: terrain generation, deep posing, up-resing of fluids, and whatnot.

All right, I think we're coming close to the end of our PDG presentation tonight. We've covered a lot of ground, and there's a lot for you to take in, we understand, but we hope we've made a good case for how you can use it, the very many ways in which you can use it, and we've given you a sense of the size of this technology. It's really gigantic, and it's also big in terms of its transformative potential, and what I mean by that is that it stands to make a big difference in the way you think. This is not the first time we've done that. If you think back, the first time we caused a shift in the way people approach their CG content creation jobs was back in the 80s, when we introduced proceduralism, and that basically defined a new mindset. Then we did it again in the early 2000s by adding Houdini Digital Assets to the mix, and that introduced a new currency, if you will, in the procedural pipeline, and also a new abstraction. Now, finally, with task graphs we're looking at the big picture. This is the next shift: before, you had full control over the inner workings of your procedural systems; now you have control over the overarching shell as well, so you have control over everything. These are still early days in some ways for PDG, and we expect it to grow, and who knows, maybe one day it will get to fight off Space Invaders.

This is just to go a little lighter after all the theory and practice we've introduced tonight: this is what our PDG developers did for fun. It's an actual TOP node, zoomed in a lot, and it's actually cooking a PDG network. This is not scripted, and it's all driven by the keyboard; the little shooter at the bottom is driven by the keyboard. So this is another example of PDG; of course we don't mean it that way, we just wanted to show it to you for fun.

All right, just before we leave PDG territory for the day, it is my great pleasure to introduce a very special guest. Bill Polson, global head of pipeline and workflow at MPC Film, is with us tonight. Bill has dedicated most of his career to designing and implementing successful pipelines for VFX and film; he has written about pipelines, he has proposed a model for pipeline architecture based on design patterns, and he has kindly accepted our invitation tonight for a short Q&A. Welcome, Bill, and thank you so much for coming.

I'm new to this, I'm new to this job, so I just want to say there's a tremendous amount of amazing work being done at MPC; I'm responsible for none of it at this point, and my main goal is to not screw it up. Well, it's a very opportune moment to have you here: we're about to release PDG, which has something to do with pipelines, and you spent 19 years at Pixar overseeing several generations of pipeline, you've written about pipelines, and now you're at MPC and you've taken a really thorough look at PDG. How do you think PDG can contribute to the pipeline at MPC? Well, I actually wrote out answers to this question, and the opening of this presentation had all my points exactly. The first thing is, it's going to be a much better Houdini, and we're already incorporating it into workflows in effects and environments and so forth; that work has well begun. A big vision for us is to assetize our workflows and start building these graphs that the VFX supes and the CG supes can snap together; that's exactly what was shown here. This is what we want to do, and we think PDG is the right kind of way of thinking about it. We've been experimenting with this and it's very promising; this is our vision, to do exactly what was demonstrated.
Thank you, Bill. One more question, if I may. The second question is about USD, something you know a lot about: you saw it in its infancy at Pixar, you saw it grow out of its shell, and you were one of the first people to promote it externally, I remember, and now we're seeing USD being adopted by more and more studios, with many others talking about adopting it. Do you see any synergies between USD and PDG? I do. At one level you can think of USD as describing data and PDG as describing processing; that's kind of obvious, and I think there are a lot of systems that decompose in that way. But what is so compelling about PDG is that, if you really watched carefully, you saw that the dependencies are kind of independent of the shape of the data; they're very reactive to the data. So we see this as a way of decoupling these problems: the processing dependencies are not tied to the actual data description, which lets the two evolve independently and splits the problem space up very nicely. That is why I think this is so compelling, and I think PDG and USD will go together very well. Thank you, Bill; we look forward to working with you and talking some more in the future.

All right, welcome back, Scott, and now to something completely familiar. Let's go to the second part of our presentation. This is where we want to cover some of the main features in Houdini 17.5 that are not PDG-related. It's not something we set out to do; like I said at the beginning, we wanted to put our heads down and just get PDG out the door. But we've been developing some features for Houdini 18 that happen to be ready now, so rather than holding them back from you for another couple of months, we thought we'd include them in this release. So here is Scott again to give us a quick tour. Scott?

All right, thank you, Kristen. Let's dive right in. In Houdini 17 we introduced a new whitewater solver, giving better whitewater effects, more fidelity, more control. Of course, when you're generating whitewater you tend to generate enormous numbers of particles, depending on what's happening in your scene, so it's really helpful to be able to distribute these simulations. For Houdini 17.5 we've created two versions of this. There's a sliced distributed simulation, which is the one you're probably imagining, where data is shared between the slices so they keep in sync with each other, which is great, you can go very large, but there is some overhead because data is pushed between the slices. And then there's what we're calling a clustered distribution, where the slices don't talk to each other at all and can be run completely independently, in parallel. The downside of the clustered simulation is that there is no communication, so it's possible you'll get slightly different results on either side of the slice; however, practically, in our experiments, and as you can see here, that turns out not always to be the case, especially in situations like this where the underlying sim is very stable across the slice anyway. And here, just to drive that home, is a render of the clustered simulation, where you can see there are no real discontinuities. This is sort of a best-case scenario, because the ocean is known very well on both sides, but it's still really nice to be able to get these kinds of results.

So, back in Houdini 16.5 we introduced narrowband fluid simulations, and essentially, if you're not aware, that means we simulate the particles only at the very surface of the fluid, while in the deeper areas we deal only with the volumetric data.
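(To picture the narrowband idea, here's a toy NumPy sketch, assuming we already have a signed distance field of the liquid surface; this is only the gist, not the solver's actual data layout.)

```python
import numpy as np

def narrowband_split(sdf, bandwidth=3.0):
    """Split a liquid volume into a particle band and a deep region.

    `sdf` holds signed distances to the surface (negative inside);
    `bandwidth` is the band half-width in voxels -- both assumed.
    """
    inside = sdf < 0.0
    # Particles are only kept within `bandwidth` of the surface...
    particle_band = inside & (np.abs(sdf) < bandwidth)
    # ...while the deep interior stays as volumetric data only,
    # which is where the memory savings come from.
    deep_region = inside & ~particle_band
    return particle_band, deep_region
```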
There's a gain there, a little bit in performance but a lot in memory, because you're actually simulating fewer particles. In 16.5 this was mostly confined to tanks, deep things, and then in Houdini 17 we made some improvements so we could handle things like rivers; basically any kind of fluid simulation can get some gain from this narrowband approach. But of course, as soon as we say, hey, we've saved you all these gigs of memory, isn't that great, you immediately say, okay, well, now I'm going to use all my memory anyway, and you scale the number of particles up to huge numbers, in which case you still want to be able to distribute. Maybe you're rendering a billion particles, five billion particles. So in 17.5 we've also given you the ability to distribute these narrowband simulations as well, so you can scale up as large as your render farm, essentially.

It's not just about making the underlying simulation technology better; we want you to be able to visualize it better as well. This is a viewport flipbook of the new GPU-based smoke shader, which is designed to give you as high fidelity as possible in the viewport, so that you can actually set up lights and understand what your sim looks like without having to pay a huge cost. You can see that it's not just faster but also much higher fidelity: you can see the self-shadowing within the smoke itself, and generally a higher throughput. And the interesting thing is that because it's faster, you can interact with very high-resolution volumes in Houdini that in the past were kind of painful to work with; you had to pay a really big cost to move a light. Now, in real time, you can take this very high-resolution simulation and interactively drag these lights around. You can use spotlights, which we could not do previously, so you can use the lights you're used to working with as a lighter and get really nice results; you can see how the shadows cross the surface of the volumes, and again, this is a very high-resolution volume. When we pull this final light out, you see all the detail that's in there. If you're going to spend the time lighting this, you want to be able to see that in the viewport; you don't always want to have to go to the render, you want to get as close as possible in the viewport before you pay that extra cost.

And that's not just for volumetrics; we've also made some big changes to the Houdini viewport to try to get parity with rendering. The Principled Shader now has a much better representation in the viewport of what the final result will look like, compared to something like Mantra. On the left is the viewport, on the right is Mantra. Now, Mantra has fancier technology behind it, it's doing very detailed shadows, all sorts of reflections and so on, but you still get a very close representation of what you're going to get by looking at the viewport. In fact, you can put them side by side and in a lot of cases it's almost indiscernible which one is which, which again means that as you work with these assets you don't have to worry as much about whether you're really seeing what you think you're seeing, or whether you have to send it to a render to check.
But it's not just about matching Mantra; in fact, a lot of this work was done to make sure we also match other software in the industry. A lot of times, the reality is that you're pulling data in from somewhere else, or pushing it out into another piece of software, and if you're doing that you want them to match as closely as possible, so you can feel confident that what you're working on in Houdini will match whatever your end product is going to be, whether that's Unreal or Substance and so on. There's also a new game dev shader with some extra features to make sure you can match things like MikkT tangent space normals; but it's the default Principled Shader you're looking at here.

Again, since this is 17.5, sort of the middle of a schedule, we also built some foundational tools. The Measure SOP is not the most exciting SOP in the world, but it's fundamental to a lot of workflows, and it's been rewritten from the ground up. Not only has it been rewritten internally to do the calculations in a better, faster way, we've also added a lot of visualization features that were not possible before. In the upper left you can see a histogram embedded in the viewport to help you understand the data you're looking at, and the little arrows you see are all handled using visualizers, which is a much faster, more efficient way of representing this data. That data can then be baked down into attributes, so it's not just for visualization; it's a way of improving how you find the values, the exact pieces you're looking for. And like I said, it's been basically rewritten from scratch, so we also have a lot of new ways of doing the measuring. You can see here again how important it is to have this live visualization in the viewport, and how it's nice and fast, so you can narrow down to exactly the information you're looking for and then apply it to other effects further down the pipeline. The Measure SOP is a foundational tool, but very important for work to come in Houdini 18, and of course we wanted to let you get your hands on it sooner.

Another one of these tools that expands an existing toolset is the terrain alpha cutout. Essentially, this means we can apply what is effectively an alpha mask to a terrain, so that when you generate the geometry you get something like this, or you can render it this way. This is a way of extending the functionality of height fields, which by their nature were square, with hard edges on the ends. It allows you to cut out a chunk of terrain, whether you just want an effect like this, an interesting floating-island kind of effect, or something that's going to be processed later, maybe hexagonal tiles for a game or something like that. It lets you create shapes that were difficult to do previously in Houdini.
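(A toy sketch of the cutout idea, with names and the no-geometry convention invented for illustration; Houdini's height field implementation works on its own terms.)

```python
import numpy as np

def cutout_heightfield(height, alpha, threshold=0.5):
    """Apply an alpha-mask cutout to a height field.

    Cells whose mask value falls below `threshold` produce no
    geometry at all, instead of a squared-off border; NaN marks
    "no geometry here" in this sketch.
    """
    cut = height.copy()
    cut[alpha < threshold] = np.nan
    return cut

# e.g. a hexagonal or island-shaped `alpha` mask yields game-tile
# or floating-island outlines when the surface is meshed later.
```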
We've also changed the way you interact with some things in the viewport. We can now create patterns of selections: you select the initial pattern and then expand that pattern in the selection. These are tools that were available in things like the Group SOP, but they really lived outside the viewport; you had to change parameters on the group node. This is an interactive tool, so if you just need to select these things and want a fast way of doing it, and it doesn't have to be a procedural effect, this is a super fast way of doing it, and it also expands really nicely in other directions: you can follow a path, or expand perpendicular to that path, and get a reasonably obvious result.

Continuing in that line, we also have selection by face normal. In this first example we have a 60-degree spread angle, so even though this is a connected piece of geometry, I can very quickly make selections, because it's only going to select the connected pieces of geometry that match the normal of the face you clicked on. I can very quickly select parts of this model without having to pick them individually, and when you're working on a modeling case like this, that's really useful; you can rapidly isolate parts of your model. Now we've changed the angle to 90 degrees, and you can see it will wrap around the corners of objects, but it won't go too far; it stops at that 90-degree limit. So again, it allows you to really rapidly make selections on geometry for work down the road, which is especially great when you've just ingested a model and need to isolate something quickly in the viewport (there's a small sketch of the underlying idea below).

But it's not all about selecting things; we want more interactivity with our base tools too. Here's a neat-looking rigid body simulation using a lot of the constraints we developed previously in Houdini. They've been around for a while, but constraints are generally something you generate procedurally; there's a system of tools for saying, connect these points together if they're close together, and so on. That's really useful, especially when you want to do something large-scale where you definitely don't want to hand-place these, but sometimes you do want a specific thing in a specific area. So in this case we're going to look at a viewport state, built with the Python states introduced in Houdini 17, that makes an interactive tool for building constraints. You can see I can just draw in the viewport and it will connect those two pieces of geometry together with a stroke, using the new constraint-from-line tool. I can preview the result, in this case they're pins, and it's not just placing them: I can edit them, draw them quickly, visualize which pieces are being connected, and edit ones that already exist. So it's taking some of that interactivity that you want, moving it away from a pure SOPs workflow and into the viewport, so you can very quickly create these constraints in a very artist-friendly way, but still with the backing of the procedural network; the result is still the procedural network you would expect.

There's also a new tool for building constraints by curves. In this case we want to connect these pieces together across this gap, so we can just draw in the viewport and it starts connecting them automatically, and if we increase the radius you can see these stitch lines jumping across the gap. You can very quickly paint in where you want things to be connected and immediately get results; again, not taking away the procedural tool, but augmenting it with this artist-driven approach. Once we have the constraints set up, we can preview the results on these pieces in isolation, then make some changes, maybe make it stiffer, and see the results again.
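(The spread-angle selection from a moment ago boils down to a flood fill over face adjacency, gated by a normal comparison. Here's a minimal sketch with assumed inputs, unit face normals and an adjacency list, neither of which is how Houdini stores them internally.)

```python
import numpy as np

def select_by_normal(seed, normals, adjacency, spread_deg=60.0):
    """Grow a selection from face `seed` across connected faces
    whose normals stay within `spread_deg` of the clicked face.

    `normals`: (n, 3) unit vectors; `adjacency`: dict mapping a
    face to its neighboring faces. Both are assumed inputs.
    """
    cos_limit = np.cos(np.radians(spread_deg))
    selected = {seed}
    frontier = [seed]
    while frontier:
        face = frontier.pop()
        for nbr in adjacency[face]:
            # Only cross to neighbors whose normal is close enough
            # to the seed normal, so the fill stops at hard turns.
            if nbr not in selected and normals[nbr] @ normals[seed] >= cos_limit:
                selected.add(nbr)
                frontier.append(nbr)
    return selected
```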
It's really about allowing you to do work you can already do in Houdini, but more interactively, in a less hands-on way. And since you've seen lots of these montages throughout this event, we wanted to show you a couple more, with some extra pieces of information, showing different types of information all at once. This was of course generated via PDG, and we're wedging out these variations of these explosions, but you can see we can also embed other information in here: we're showing the final geometry, the constraint network, as well as all the stats, all the values that have been manipulated to get these results. So if you're trying to find a good value, it's not just a random search; you can actually see, oh, the ones in the middle are doing pretty well, let's continue in that vein, maybe more so than the one in the far upper left. So again, just a callback to PDG and how useful it is even in a case like this. And then, just because it looks cool, I'm going to show this again, because it looks cool.

So, Vellum. We introduced Vellum in Houdini 17, and we're going to show you an example here of per-point friction, something that did not exist before. You can see that these little guys, where they've been painted with this sticky substance, slide more or less. You could vary friction before, but it was on a per-object basis; now different parts of the material can react in different ways, so it's really per-point friction on both the collider and the Vellum object itself (there's a small attribute-painting sketch below). And it's not just cloth, it's the whole Vellum solver; here's a Vellum grain simulation where we've increased the friction on the glass, and the particles tend to stick to that area. So again, it increases your options for these types of simulations and gives you more ways to interact with them.

Continuing in that vein, we've also added new constraint types to the Vellum solver. If you're not familiar with what a constraint type is in Vellum, it's essentially a way of defining the physical properties of something. In this case we're creating fiber constraints, which are basically implied fibers inside the three-dimensional space of this object that can then be flexed along the fiber direction. Here we're flexing them along an axis, essentially, but it doesn't have to be an axis: you can define these fiber directions using attributes, using VEX, and here we're pulling toward the center of the object altogether, so you can see it sort of stressing and pulling together, with some volumetric constraints helping preserve the volume when that happens. These are just provided as interesting tools to use, but you can imagine that in the future some of this technology could be applied to things like skin and muscles for muscle simulation, taking some of that Vellum power and applying it to other cases.

So here's an example on an actual character, this frog, and we're going to generate these fiber directions interactively, taking advantage of, essentially, our fur grooming tools to set the fiber direction on this creature. You can easily set them like this; obviously this can be driven, as I said, by an expression or other procedural effects, but sometimes it's useful to be able to do it by hand. Now, when we play this simulation, you're going to see this really interesting result where our frog sort of undulates in a very suggestive way.
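(On the per-point friction idea: here is a minimal sketch of authoring such an attribute from a Python SOP using Houdini's hou module; the attribute name "friction" and the gradient are assumptions for illustration, so check what your Vellum setup actually reads before relying on them.)

```python
# Runs inside a Python SOP, where `hou` is already available.
node = hou.pwd()
geo = node.geometry()

# Create a per-point friction attribute with a default of 1.0.
attrib = geo.addAttrib(hou.attribType.Point, "friction", 1.0)

for point in geo.points():
    # Stickier toward +X, just as an arbitrary example gradient;
    # in practice you might paint or texture-drive this instead.
    x = point.position()[0]
    point.setAttribValue(attrib, max(0.0, min(1.0, 0.5 + 0.5 * x)))
```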
This is just using the fiber constraints we set up, but of course, once we saw this and saw how interesting these frog characters are, we thought, well, let's put the per-point friction and this new way of animating together to create an interesting effect. So basically we increased the friction on their feet and let them flail around to see who would win this race. My money was on the one in the middle, but it turns out the guy at the bottom wins. All the frogs that lost were eviscerated; we didn't need those, we only want the winning frogs.

To go along with this: a while back we started putting nodes in Houdini that have this three-input, three-output design, and the problem with that is that you could only ever view one output; you would basically only see the first output. So something we've added, purely as a UI feature, is the ability to inspect each of the outputs individually using the middle-mouse-button information, so you can actually choose which output you're looking at. This is really useful, because previously you would have to put down a null connected to the output and examine the geometry there. Now, not only can you use the middle-mouse-button information, you can also tell the node itself to show the data from each output, not just in the middle-mouse-button info but updating the viewport, and, as you would expect, this is also extended to the geometry spreadsheet, so you can select which output you're viewing and get the results there. This is just a huge improvement to your workflow with these multi-output nodes, where you really want to be sure what data is being passed along which wire; it really elevates the overall workflow for these new types of nodes.

And that is finally the end of this presentation; we've spilled all the frogs, they're all on the table. So just to wrap up, I'm going to once again bring Kristin back on stage. I hope you enjoyed the show, and thank you very much. Thank you, Scott, that was fantastic. I think one thing we need in that frog race is some machine learning.

This concludes our presentation of Houdini 17.5. We hope you've enjoyed it and that we've given you a lot to think about. Clearly the big entrance into the Houdini ecosystem here is PDG, but the list, if you don't mind playing it, Scott, is actually not that small, and we hope you'll find a couple of good gems in there that you can use right away. Some of the features you've seen speak directly to bigger projects that are underway. The fiber constraints are all about getting to muscle and skin simulation; we're looking at both classic FEM and Vellum now to see how they might coexist, or something along those lines. The other really big one is the lighting and the quality and performance of the viewport, and that is all about a project I hinted at when we launched Houdini 17: Project Solaris, which is about lighting, rendering, USD, everything put together, and that's coming in Houdini 18. Also, if I may, I invite you to take a look at some of the great new tools our game dev team has put together; there's some really amazing stuff in there. Even the texture baker, I believe, is way, way faster in 17.5, and then there's support for AliceVision, which is photogrammetry support, in the game dev suite. So lots to look for and play with. We hope you enjoy Houdini 17.5, and we look forward to seeing you again soon for Houdini 18. Thank you.
And just before we go, I'd love to give a big shout-out to all the people who have made Houdini 17.5 possible. First, Houdini 17.5 is coming in just a couple of days, by mid-March. And then the team: first, our amazing team of developers, goodness gracious, the PDG developers and everybody else; our technical directors, a growing team of people with really production-savvy skills; our QA and documentation department, all three of them; and then of course our Waterloo co-op students. And in the end, a big thank you to all of you, our alpha and beta testers, and all of you who use Houdini and give us good reasons, amazing reasons, to continue to try to better ourselves and our product. So thank you again, everyone. [Applause] [Music]
Info
Channel: Houdini
Views: 37,308
Rating: 4.9703703 out of 5
Keywords: houdini, character, characters, rig, rigging, rigs, character rig, HDA, procedural, 3D, SideFX, Houdini 17.5, Houdini17.5, H17.5, PilotPDG, Pilot, PDG, procedural dependency graph, dependency, graph, TOP, TOPs, task operators, task, operator, operators, launch presentation, launch, launch event, event, montreal, FX, animation, VFX, CG
Id: w-8qrehON8Q
Length: 103min 7sec (6187 seconds)
Published: Fri Mar 08 2019