Houdini 19 Launch Reveal

Captions
Welcome to the launch of Houdini 19. SideFX's R&D team has been very busy, with new features and improvements across the spectrum: from modeling to KineFX, character effects, Solaris lighting, layout, lookdev, Pyro, Karma rendering, and more. Along with today's walkthrough we'll be hosting a Houdini 19 HIVE featuring TD talks from some of the artists who worked on the H19 demos, and we'll also be hosting a Solaris workshop featuring educational material for everyone from beginners to experts. All this material will be available at the VIEW Conference this week and via sidefx.com next week. So without further ado, let's jump in.

[Music]

Thank you, Chris, and welcome everyone. Houdini turns 19 today, but is hardly a teenager; it actually came to life 25 years ago. And although lots has changed since then, the two defining goals for us as an R&D team and as a company have stayed the same. The first is to always genuinely listen, to feel those pain points while envisioning better ways. The second is to connect as many dots along the CG pipeline as possible. For us, the biggest dot on the pipeline has always been VFX and CFX, and we have some great news for you today on those fronts. In recent years we've added other dots: KineFX, which is about motion processing, rigging, and animation; and Solaris, which is about lighting, rendering, and lookdev, and now layout as well, as you will see. Houdini 19 is a major stepping stone for all of the above. In all of this we've put a lot more effort into having those new artist workflows be reflected in every feature we design. We've also tried to inject more real-time tools, whether they are individual workflows or real-time physics, into the full fabric of Houdini, all with a view to virtual production, large-scale cloud compute, microservices, and overall Houdini proceduralism everywhere.

So let's begin: Solaris. Solaris continues to grow very fast, with its adoption accelerating in the past year. Our focus for
Houdini 19 was on the feature set and on the lighting artist experience, especially if this is your first time with Solaris and USD. USD does help immensely under the hood, but you don't have to always look it straight in the eye to be productive. You also now have access to our in-house renderer, Karma CPU, which is finally ready for production, with as much parity with Mantra as those two very different render architectures can allow. For lookdev we now offer support for MaterialX, an upcoming standard for our industry. And to top it off, Solaris comes with a new brush-based environment for large-scale set dressing. Basically, all three of Solaris's L's (lighting, lookdev, and layout) get heavy representation in this release. Let me now pass the floor to Scott for the walkthrough.

All right, thanks Cristin. So, let's dive right in. As Cristin was saying, when we introduced Solaris in Houdini 18 it was really about taking this technology, Universal Scene Description, embedding it directly inside of Houdini, making a renderer that understands it, and then presenting it to users as almost a one-to-one representation. With Houdini 18.5 we moved that forward, smoothing out some of the edges and making things a little easier to use and understand. And now with 19 we're ready to really start working at the artist level: wrapping all of this up and presenting it in a way that artists can use without having to consider the underlying architecture as much. Of course all that power is still there, but we're presenting it in a way that makes things a bit easier.

So let's look at the first example of that. We've talked about Scene Import previously; Scene Import is basically a way of taking setups from object level or SOP level and bringing them into Solaris in a seamless way, and we've now captured more of the edge cases, more of the possibilities, when bringing those in. We've also created a
special Karma ROP that will go straight from object level to the renderer. Behind the scenes it is actually using Solaris to do it, but because Scene Import can capture most of that information, you're able to go seamlessly from objects to rendering with Karma. Here's an example of using Scene Import to bring in this prop: you can see it comes in seamlessly, takes all the texture maps, all the shader assignments, and the geometry, and imports them without any issue. That's where we want to go, especially as people transition their workflows from object level or SOP level into this new stage, the Solaris level.

But more than that, we've added a whole suite of nodes that we're calling the Component Builder, and the idea here is to create nodes that let you set up a well-constructed USD asset. The way this works is that you just go into SOPs and import your geometry the way you might normally do it. I like to do a little processing and put the geometry on the ground plane; that's useful for layout tasks. You can poly-reduce the geometry and plug it into the second, optional output, and that becomes your proxy geometry, so when you're looking in the viewport and dealing with large amounts of geometry you get this low-poly representation. We'll do a little housekeeping here just to rename things and keep everything nice and tidy, and then we'll put down a material. You'll notice that we're not doing material assignments directly; it's all being handled by the Component Builder framework. So we're just going to fill out some texture maps, make sure everything is working correctly, and we've basically done it: we've created a nice USD asset. Here's our low-res with our material applied, here's our full-res rendering in Karma, and back to GL. Now that we've done this, we can do a couple of interesting things. One is to create a thumbnail of this asset for use later; we'll come to why you want thumbnails
in a minute, but for now you can see there's a built-in thumbnail camera, and then we can just save the asset to disk. Just to drive this home, we'll put down a Reference LOP, bring that asset back in, and take a look at it in the scene graph tree in the lower left corner. You can see we've got a nicely constructed asset: the geo, the material, and the sim proxies, all together, ready to be used by the artist, but without you having to build this hierarchy yourself or do a lot of USD work to get there.

Now, in the past we've talked a lot about variants, and we wanted to capture the ability to use variants with the Component Builder as well, because variants are an important part of USD and of working in Karma. I'm just going to copy the nodes (the easiest way to do it in this case), do a little renaming, and change this File SOP to point to a different piece of geometry; pretty much everything else stays the same. Then we go to our material library and duplicate that material, changing the texture maps and updating them for this new piece of geometry, because these are completely distinct objects with their own shaders. And again, we're pretty much ready to go. You can see we can pick the variant, we're still getting all the sim proxy and proxy geometry, and we can export these as variant layers as well. That isn't necessary, but it's useful down the road for a tool we're about to discuss. Just to drive this home, we'll use the Set Variant node to show that you can now access these geometry and material variants, and then we'll use Explore Variants to duplicate our variants into unique pieces. Then we can use the Edit LOP to grab one of our frogs and enable simulation on it, which basically lets us grab it and move it around like a physical object. Previously you would have had to set up all these relationships yourself, but now with the
Component Builder, things are smoothly integrated into the workflow.

So let's take a look at how we're going to use these assets. There's a new tool, a new palette in Houdini called the Asset Gallery, and it lets you take these components, these assets we just created, along with the thumbnails you see, and import them into a gallery. We'll just make the icons a little bigger, and now we'll bring in our second frog. Obviously this is useful, but bringing things in one at a time could get tedious if you're dealing with, say, 50 different assets, so you can also just point it at a folder and import all the assets inside that folder, which is a really nice way of bringing in lots of things at once. It takes a little while to process, and then it pops up with all the assets; in this case I think we've got about 30-ish assets here.

When you start getting a lot of assets, you immediately want to start categorizing them, and there are several methods for doing that. You can use color tags for simple organization, and then use a filter to show just the things tagged red, just the things tagged blue, combinations of them, and so on. But we can also edit metadata here. For instance, if we have objects that are meant to be used for scatter (placed randomly around your scene), you can tag them with "scatter" and some other tags. These aren't specific, built-in tags; they can be anything you like, and they basically allow you to filter down to certain types of assets easily just by typing into the filter here. So I type "scatter", and even the assets that I did not directly tag but that have scatter in their names will show up. There are lots of different ways to do it, on the file name or on this metadata. Finally, you can use this star system in case you want to favorite specific things; assets you find yourself using over and over again can be quickly found using this star system.
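The tag, name, color, and favorite filtering behaviour described above can be sketched in plain Python. This is only an illustrative model of the idea, not Houdini's actual implementation; the `Asset` class and its field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    tags: set = field(default_factory=set)  # free-form metadata tags, e.g. {"scatter"}
    color: str = ""                         # simple color label for coarse grouping
    starred: bool = False                   # "favorite" flag

def filter_assets(assets, query="", colors=None, starred_only=False):
    """Return assets whose name or tags contain `query`, optionally
    restricted to certain color labels and/or to favorites."""
    q = query.lower()
    out = []
    for a in assets:
        # match either the file/asset name or any metadata tag
        if q and q not in a.name.lower() and not any(q in t.lower() for t in a.tags):
            continue
        if colors and a.color not in colors:
            continue
        if starred_only and not a.starred:
            continue
        out.append(a)
    return out

gallery = [
    Asset("scatter_rock_01", {"scatter"}, color="red"),
    Asset("mushroom_small", {"scatter", "organic"}, color="blue"),
    Asset("crate_hero", {"hero"}, color="red", starred=True),
]

# "scatter" matches both name substrings and metadata tags
print([a.name for a in filter_assets(gallery, "scatter")])
```

Matching on both the name and the metadata is what makes untagged assets with "scatter" in their file names show up alongside explicitly tagged ones.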
There are other ways to generate these things too: you can use a PDG network, so you can process potentially thousands of assets into this asset format and bring them into the Asset Gallery. Here we've actually used the Animal Logic ALab that was recently released: we've processed all of their USD assets and brought them into the Asset Gallery using PDG. This is also part of the effort of taking PDG and bringing it closer to your standard workflow, so that it doesn't feel like something that sits apart from Houdini but rather comes together with it. We'll talk a little more about that later, but I think it's an important step in making PDG just part of your workflow.

So what can we do with this gallery? There's a new node called the Layout LOP, and the Layout LOP can ingest these assets from the Asset Gallery. I've got a little floating window here, and I'm going to drag assets into what we call the working set; you can think of it like a palette for your brushes. Now you can use this brush-based interface to place assets in your scene. But it's not just brushes: you can of course switch to the edit mode and manipulate these things directly, which is useful for architectural pieces that you want to snap together in specific ways. Using the edit tool you can duplicate things easily just by holding down a hotkey, and switch back and forth between the edit state and the brush-based state. You can pick assets and align them, change their rotations by clicking and dragging, and you can also select multiple assets and easily switch between them so you're not constantly going back to the working set; instead you can randomize them, or cycle them using the middle mouse button, which lets you quickly rotate through the assets you want to jump between as you're filling out a scene. So we're going to use this tool now just to place a couple more objects in the scene and fill it out a little bit, and now
we're going to place some rocks. Again you can see how quickly you can bring in assets from the Asset Gallery, fill out your palette, and start working. These scatter rocks are being placed with the fill brush, which basically means you can draw a shape in screen space and it will project the assets into your scene. You'll notice they're not landing on top of the crates and so on; that's because I've changed the filtering so that they don't intersect with other instances, only with the input. That means you can quickly drop things down without worrying too much about your rocks sitting on top of your frogs, for instance. There's a nudge brush to adjust placement, because you always want to be able to slide things around after the fact. You can use the scale brush to change the scale after you've placed things, or even add some rotation to objects to tweak their alignment in case things are looking too repetitive in your scene. Let's bring in a couple more assets, in this case these mushrooms. Now I've enabled the ability for these instances to collide with previous instances, so I'm able to put things on top of existing objects using the fill brush. Finally we'll just fill this out with a couple of candles, and we're ready to go. So very quickly you can put together a small scene like this just by jumping back and forth between these tools within this single node.

Now, people who are long-time Houdini users are probably wondering about these brushes and how they can make their own. We thought about that, and this is really a framework for a brush-based interface: it's not just this one node, it's that node plus a framework for building brushes. What you're seeing here is a SOP-level template for building brushes. You're not required to write a Python state or a Python handle; all of that is handled for you, abstracted away. Instead you just need
to deal with these SOP-level templates: what are the points you're dealing with, what are the available assets, and what will you do to them in SOPs. Essentially these are HDAs that you can install into the layout tool and use as your own brushes, so people familiar with SOP-level tools can very quickly translate those into these LOP-level, brush-based tools.

Let's take a look at a larger scene, just to drive home the point that this doesn't only work on simple setups. You can have something with a lot of existing geometry already in your scene and still fill things out very quickly. This is a standard way people often approach dressing a scene like this: you start with your hero objects, the ones that are immovable and placed very specifically in the scene (things like the house, this wharf, the stairs), and then you use instancing to fill out and dress the scene, making it look more detailed and interesting, working downward from large objects to smaller objects and eventually to tiny things like stones and grass. You can see here, using the exact same tools I was using in the smaller demo, that on this larger scale things remain very interactive. Partially that's thanks to the Component Builder letting you have those OpenGL proxies, so you're not overwhelming the viewport with tens of millions of polygons; you're working with medium-resolution assets that allow you to keep going without feeling the pain of massive amounts of geometry. Finally we'll just fill in some smaller details like these candles and mushrooms, and we're essentially ready to go. We've filled out our scene, we've added a lot of detail, and things are looking pretty nice. When you're getting to this point, you're going to
be wanting to take snapshots. We've talked about the snapshot gallery in the past, which is basically a way of taking a snapshot of your viewport and using it to save the current state, but also just so you have a record of your work. The problem with snapshots is that you have to wait for your viewport to resolve, and on complicated scenes that can take a while. So instead we've introduced something called a background render. I can click Background Render, and this actually spawns Karma in the background so you can continue to work; in this example I'll just change some light colors while the render is happening in the background, so you're not waiting for it. But this is a live render: you can still bring it up, and you can use click-to-focus to render the pixels in a certain area if you want to see what's happening in one place or another. This lets you keep working while spawning these jobs in the background. Of course, this is pretty resource-intensive, because you are still doing a full Karma render, so a nice extra feature is that you can stop that render and send it to a render farm instead. If you have access to a render farm, by default we have it set up for HQueue, but it's very easy to modify the scripts behind the scenes to work with whatever render farm you'd like to use. So you can offload that work and keep working quickly without eating up all the resources of the machine you're on with a Karma render.

To help with all this, a big request was a first-person camera that would let you move around, because in large scenes it can be easier to walk around the scene using a first-person view than with a traditional tumbling camera, especially when you're dealing with a large amount of space. And to go with that there's also a click-to-jump feature, so I can just click on
a space (a double-click, really) and I'll hop to that space, so it's easy to jump around your scene and get different views really quickly. You can certainly do that with a traditional tumbling camera, but in these scenes it can be really useful to use first-person navigation. And just to drive home that you can work with large-scale assets, here's a scene using very detailed assets, working with the same toolset I just showed you, placing hundreds of high-resolution trees, again without overwhelming the viewport. So it is possible to work on very large scenes using this toolset, with all sorts of ways to optimize what you're working on. And just to show what we're doing here, we're going to turn on all of the various instances that have been placed in this scene and move the camera to see what we're looking at: tons of trees, tons of detail, lots and lots of geometry. The viewport is able to handle enormous data sets.

We've also added some commonly used visual effects features. Here's a pretty typical visual effects scene where you've got a tracked background plate and you want to do some integration. Of course, if you render with Karma right now, it doesn't look very good: there's no lighting, for one thing, but there's also nothing happening on the ground, no interaction with the ground. So we have this new Background Plate node, which basically lets you put down a piece of geometry and, more or less, project the background plate onto that geometry; it can then actually use that to light your scene, so light is bouncing from the background plate geometry onto the creature. It's a little subtle here, but you'll be able to see some reflections and a little bit of orange light on the underside of the creature. And of course once you add in an HDRI and a directional light, it really starts to
become integrated. This is essentially a shadow catcher, and it also catches diffuse lighting from the creature onto the ground, in case there's bounce light from the object you're trying to integrate into the scene. This gives you a very nice way to work as an effects artist in the viewport and see roughly what you're going to get in your final render before it's handed off to a compositor. This isn't necessarily how you would want to render your final scene (although you could); it's a way of generating the passes for compositing down the road. Just as an example, here's the same scene rendered out with some compositing and color correction to integrate Crag here into the scene. Things like this really help because they smooth out the workflow and let you avoid setting up complicated light path expressions or other complex shaders to get this result; it's all integrated into Solaris in a simplified workflow.

So, Karma. Let's talk about Karma. Karma CPU has been in beta since the release of Solaris, and we're very happy to announce that it is no longer in beta: we consider it production-ready. What does that mean? Does it mean it has every feature that Mantra had? No, it doesn't. But it means we're confident that you can use Karma in a production scenario and get the results you need. It's very flexible, and it's a very good rendering option for you. Just to drive that home, let's take a look at a scene that was rendered with Karma, this enormous volumetric simulation, more than a billion voxels for the whole sim, looking fantastic. In terms of performance, in terms of memory, all those things, we feel confident now that Karma is ready for production.

An interesting side effect of Karma being able to ingest USD and work directly with USD (it doesn't do any translating under the hood; it actually
operates on USD data) is that it can be removed from Houdini entirely and run inside external Hydra delegates. This is the usdview utility that comes along with the USD build, and you can see, again using the Animal Logic ALab scene, that we're able to run Karma interactively outside of Houdini, with no Houdini running in the background, inside usdview. So it is a true USD Hydra delegate.

Next, MaterialX. MaterialX is growing in popularity; if you're not familiar with it, think of it like an open-source shading language. It basically lets renderers use the same underlying shader language to render their scenes, so it opens up the ability to share data between renderers even more. Here are a bunch of examples of shaders created using MaterialX. The interesting thing is that our implementation is node-based. We call these VOPs, even though the V stands for VEX, but really it's just a way of constructing a shader in a node-based workflow. Here we are with a MaterialX shader network, including something we've added: the ability to use ramps, which is vitally important for most shader work but is not a feature of MaterialX yet. We've built this node to help people get up to speed using MaterialX. MaterialX works with Karma alongside VEX, and in certain circumstances MaterialX-based and VEX-based shaders can even be mixed and matched.

Now, Karma XPU. We've shown this in a few tech previews in earlier presentations, and it is now included with Houdini 19 in Alpha. What does that mean? Alpha means that it is actually useful: you can do things with it, and it supports texture mapping and many of the features you would expect, but there are still a lot of holes in the implementation, things that you cannot do. So it's definitely not at beta level yet, but it is absolutely workable. Just to give you an idea of what we mean by that, here is Karma XPU rendering this scene, looking
really nice, with lots of different types of shaders: diffuse, metallic, and so on. So again, it is a usable system at this point, and it's definitely worth trying. You can get some really nice results; just understand that there are some missing pieces to the overall picture, and we're pushing hard to get useful things in there. Here is XPU rendering that same volumetric effect you saw earlier, again more than a billion voxels, using a slightly different shader. It doesn't use the full Pyro shader; we're calling it a Pyro preview shader. But you can see how interactive it is and how quickly you can render these volumes to an acceptable level of detail in a very short period of time. A really nice way to use XPU at the moment is to use it to quickly light your scene and get some results, almost as a preview renderer before going to Karma CPU for your final render. But of course you can get fully acceptable results directly from XPU, like this.

Thank you, Scott. As you can see, we're committed to making Solaris your first choice for lighting, rendering, lookdev, and now layout as well. Now, it may take CG pipelines a while to germinate before USD becomes a real currency across the board, and that is okay; we believe that time will come. What's important for us, and for you to know, is that we want Solaris to be ready for you when you are. As for Karma XPU, we're doing something we've never done before: we're giving you access to Alpha software, and that's okay. It's so much fun to play with, and we're having fun with it internally as well, to be honest. You may want to know when it will be ready for production, and while we can't commit to any dates, we believe we'll be safely into Beta territory sometime in 2022 and ready for a gold release in early to mid 2023.

Now let's talk about creature R&D, which is one of our biggest R&D investments as a company. I'm talking about KineFX and CFX together. KineFX is a total re-envisioning of how one produces
and manipulates motion. When fully complete, it will also include a framework for dynamic rigging and a new animation environment to take advantage of it; we are busy at work on all of that under the hood right now. In this second iteration of KineFX in Houdini 19, our user-facing focus has been on real-time ingestion of motion, robust motion editing tools, and native support for physics. At this point we believe you can begin to look at KineFX as the destination for most, if not all, of your motion processing needs.

And then there's CFX, which sees a significant new iteration of the hair grooming toolset, now more than an order of magnitude faster and endowed with Vellum physics brushes for fast, realistic combing. For muscles, too, we're happy to announce the beta release of our unified, completely redesigned muscle, tissue, and skin simulation system, which has been in the works for quite some time. The system leverages the considerable power and speed of the Vellum solver and combines it with interactive SOP-level tools to prepare the system for simulation. Okay, on to you!

All right, thanks Cristin, let's just dive right in, starting with mocap streaming. KineFX, in its earlier implementation, is really about motion processing, about pulling in data and processing it for other purposes, and obviously mocap is one of the key things you would be importing, so being able to stream it directly from external sources into Houdini is a vital part of the workflow. Here we have Fiona putting on a motion capture suit from Xsens and doing some leaping in a forest, and you can see it streaming directly from the Xsens software into Houdini, live. That means you can monitor what's happening in Houdini and start making modifications live as your mocap performer is doing their work. It can also be streamed from a recording in Xsens, so either live or semi-live via recorded footage. This really opens up a lot of
possibilities.

Another sort of streaming we have here is streaming from Houdini, on the right, into Unreal Engine, on the left. Again, you're able to work live in Houdini and see the results in whatever your final target is. You probably saw this demo in a previous release, where it was handled by custom scripts and a lot of extra work; now we're using a more concrete way of handling it, via a web server, which gives you a much smoother way of working with the Unreal Engine Live Link. There's also improved glTF support, mostly around animation, but also smoothing some of the rough edges in the export process. What this is all leading towards is the idea that Houdini becomes a hub for animation: whether you're creating the animation in Houdini or pulling it in from somewhere else and then exporting it to another package, Houdini becomes the centralized point where all animation and animation processing happens. And that includes facial motion capture: here we have Edward, kind enough to put his face online, driving this character using Faceware Studio. Again, this is live streaming from a webcam into Houdini and driving the blend shapes on this character. This also goes with some improvements to our Blend Shapes SOP, as well as other animation blending tools, in Houdini 19.
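As a rough illustration of the kind of per-frame pose packets that streaming pipelines like these exchange, here is a minimal encode/decode sketch. The JSON layout here is entirely hypothetical; real protocols (Xsens network streaming, Unreal's Live Link, and so on) each define their own packet formats:

```python
import json

def encode_pose(frame, joints):
    """Serialize one frame of skeleton data as a newline-delimited
    JSON packet (a common pattern for simple streaming over TCP)."""
    return (json.dumps({"frame": frame, "joints": joints}) + "\n").encode("utf-8")

def decode_pose(data):
    """Parse a packet back into a frame number and joint dict."""
    msg = json.loads(data.decode("utf-8"))
    return msg["frame"], msg["joints"]

# hypothetical per-joint data: translate (x, y, z) and rotate (degrees)
pose = {"hips":  {"t": [0.0, 0.9, 0.0], "r": [0.0, 45.0, 0.0]},
        "spine": {"t": [0.0, 1.2, 0.0], "r": [5.0, 0.0, 0.0]}}
packet = encode_pose(101, pose)
frame, joints = decode_pose(packet)
```

The point of a line-delimited format like this is that a receiver can apply each frame to the rig as it arrives, which is what enables live monitoring and live modification while the performer is still moving.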
So let's talk about some of the things you can do with that data in Houdini. In this case it's me in the motion capture suit, pretending to be this dinosaur, or whatever this thing is that we've retargeted to, using all the standard motion retargeting tools we talked about in previous releases. You can see we've taken my human skeleton and retargeted it to this creature, which has wildly different proportions and a wildly different skeleton, and we get a nice result. But it's what you would imagine: it looks very much like a rigid shell, a mascot costume or something that I'm wearing, and we want to add some more life to it. That's what secondary motion and ragdolls can do.

But before we get there: motion capture can often give you enormous amounts of detail. You can capture motion at 120 frames per second, for instance, and while Houdini can happily work with that data, it's often a good idea to reduce it. So we're going to create a motion clip here, and you can see the dense data that's actually in it, basically a pose on every frame. Then we'll use Extract Key Poses to reduce the amount of information while keeping the animation looking more or less the same. Let's reduce it to just 11 key poses and see how that looks. Okay, not so great: 11 key poses is not nearly enough to capture all the movement from that animation. But let's try around 100.
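Key-pose extraction of this sort can be sketched as a greedy, error-based reduction of a dense channel: keep the endpoints, then repeatedly keep whichever sample deviates most from linear interpolation of the keys chosen so far. This plain-Python version illustrates the general technique only; it is not SideFX's actual algorithm:

```python
import math

def extract_key_poses(samples, num_keys):
    """Reduce a densely sampled channel to `num_keys` keyframes,
    greedily keeping the sample with the largest deviation from
    linear interpolation of the current key set."""
    n = len(samples)
    keys = {0, n - 1}  # always keep the first and last pose

    def lerp_at(i):
        lo = max(k for k in keys if k <= i)
        hi = min(k for k in keys if k >= i)
        if lo == hi:
            return samples[i]
        t = (i - lo) / (hi - lo)
        return samples[lo] + t * (samples[hi] - samples[lo])

    while len(keys) < min(num_keys, n):
        worst = max((i for i in range(n) if i not in keys),
                    key=lambda i: abs(samples[i] - lerp_at(i)), default=None)
        if worst is None:
            break
        keys.add(worst)
    return sorted(keys)

dense = [math.sin(i * 0.1) for i in range(100)]  # 100 samples, like a 100-frame channel
keys = extract_key_poses(dense, 11)              # keep just 11 key poses
```

Too few keys gives exactly the artifact described in the demo (obvious linear blending between poses), while a larger budget becomes nearly indistinguishable from the source data.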
And suddenly, even though we've drastically reduced the number of keyframes, we're getting pretty nice fidelity. You can also do it by percentage; in this case we're reducing by 50%, and you can see it's almost indistinguishable from the original motion capture data. Just to drive that home, here are some drastic reductions, all the way up to 90%. Even at 90%, with just 10% of the frames remaining, you're still capturing most of the important information; it just has this unfortunate linear blending between the poses. But it can still be quite useful if you have background characters, or characters where you want a dramatically reduced data set; this is a really nice way to do it without ruining your animation.

Okay, so we've extracted the key poses and reduced the animation; let's add a little more life to this character. We're going to use a node called Secondary Motion. Secondary Motion works on the animation data coming from the motion clip, and basically what we're going to do is add some lag. I like to do it in a pose where I can see how much lag is being introduced. You can see we've immediately added a lot of extra life to that tail just by adding, essentially, a lag on the animation channels that are there; it includes overshoot and other things like that. Some of you may be familiar with CHOPs, or channel operators. This is not CHOP-based; it is fully integrated into KineFX, so we're circumventing CHOPs entirely and doing something similar by operating on motion clips instead. But you can see that just by adding secondary motion to the frill and the tail, you suddenly go from a very static, uninteresting character to something that feels like it has a lot more life to it. In this example we've done exactly what I just showed you, but we've also added a little lag to the jaw as well, so you sort of get the
All of this happens without any animator intervention at all. But none of it is physical, so let's talk about adding ragdolls. Here the character is running forward, and at a certain point we transition to a ragdoll and he collapses to the ground. Not the most exciting example, but it proves you can turn this character into a physical simulation. And this is not using ragdolls from our crowd simulation tools; this is a KineFX tool, so you're not going out to a separate agent definition, it happens on the rig itself. But probably what you really want is for only part of the character to be a ragdoll. Probably not like this, but just to prove the point: only the legs here retain the full animation, everything else is a stiff ragdoll, so our poor critter flops over as if something terrible has happened to its spine and then sadly walks away. We probably don't want that, but it proves the point. However, you can use a partial ragdoll with motors, which uses the underlying animation to drive the ragdoll to some degree: a loose binding between the animation and the physics. You can see it working really nicely on the arms; the head is probably overly exaggerated, but the arms suddenly have a nice loose feeling, flopping around under their own weight. So you can use all of these things, the secondary motion, the ragdolls, the partial animation, to sculpt a brand new animation from whatever motion capture you're delivered. Here's another, probably more reasonable example, where a motorcycle is driving over rough terrain and the character has no animation at all. This is driven entirely by the physical simulation under the hood; the jiggle, the leaning at certain points, all comes from the simulation data, and you could exaggerate that a bit more
to make him looser, so that when the motion happens he moves around more; maybe he's tired, maybe he's sick, whatever it is. But you still get almost lifelike movement, again without any actual animation occurring. And just to drive it home, this is happening on the actual KineFX skeleton, the joints you used for your motion capture or however you've set up your character. Again we can export this, driving home the idea of Houdini as a hub for animation data: we've exported the skinned character and the rig to glTF, shown here in a web-based viewer, and here it is in Unity. These clips can now be used in your game engine as different motion clips that can be transitioned between using whatever gameplay logic you like, so you can generate animation clips using physics, for background characters or for specific motions in your gameplay sequence. We've also made some improvements to skeleton blending, which lets you transition easily between two pieces of motion capture data. Interestingly, in this case the walking pose is actually just a modification of the quadruped walk; it's not a separate clip, it's a modified version of the original, which is really nice because it keeps a lot of that motion without requiring an entirely new piece of animation. And again, this is all happening on the rig: the skeleton blend is blending between these two clips of animation on the rig itself. Of course, we can combine all these elements: secondary motion for the tail and ears, skeleton blending for the animation clips, a ragdoll for the fall, and then a transition back up into animation after the physics simulation. And that's actually a key piece of information for animators: Houdini and KineFX absolutely have the ability to do hand animation.
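The skeleton blending just described is, at its core, a per-joint interpolation between two clips driven by a ramped weight. Here is a deliberately tiny sketch that blends joint angles linearly; a real system like Houdini's Skeleton Blend SOP works on full joint transforms (and rotations would be interpolated as quaternions, not raw angles):

```python
# A toy sketch of transitioning between two animation clips by ramping
# a blend weight over a few frames. Poses here are just dicts of
# joint -> angle; real rigs blend full transforms.

def blend_pose(pose_a, pose_b, w):
    """Blend two poses with weight w in [0, 1]."""
    return {j: (1.0 - w) * pose_a[j] + w * pose_b[j] for j in pose_a}

def transition(clip_a, clip_b, start, length):
    """Ramp from clip_a into clip_b beginning at frame `start`."""
    frames = []
    for f in range(len(clip_a)):
        w = min(max((f - start) / length, 0.0), 1.0)
        frames.append(blend_pose(clip_a[f], clip_b[f], w))
    return frames

walk = [{"hip": 10.0, "knee": 20.0}] * 10   # stand-in clips
run  = [{"hip": 40.0, "knee": 60.0}] * 10
out = transition(walk, run, start=2, length=4)
```

Before the ramp the output matches the first clip exactly, after it the second, with a smooth mix in between.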
That's been true for a long time, and we've brought a lot of the tools from the object level into SOPs to help with it, but this combination of KineFX and hand animation has some really interesting consequences for rigs. First, let's look at building rig controls for an animator to use. Here's a really nice tool for creating these control objects and binding them to a rig. If you see this and start thinking, okay, Houdini is ready for full character animation, the answer is yes and no. If you want to do that, you absolutely can: the tools exist for full animation inside Houdini using KineFX and these control objects. However, at this point it is still a rather technical approach. The goal we're rapidly barreling towards is full animation directed at animators, not at TDs and riggers, but you can get in there now, start experimenting, and see how this workflow might look down the road. Here's a really interesting example: this character brought into SOPs using KineFX, using the motion trail tool. We've adapted the object-level pose tool into SOPs, so you can use things like motion trails to do your animation. What's slightly more interesting here is that we're mixing between basically three pieces of animation: the balancing animation, the ragdoll simulation, and then another MotionClip that we pick up again, with some hand animation to help with the transitions. Think about what this actually means: you can take a rig with some animation, put it into a physics simulation, and then have an animator, or just another MotionClip, pick up the rig from that point and continue animating. So this is layering: motion capture, secondary motion, ragdolls, hand animation, all layered together, and then also layering the
rig itself. In some sense you have a rig which controls the animation, a simulation which modifies the animation, and then the result of that modifying the rig, so you can pick up the animation directly from that point. Currently this is a fairly complicated setup, and you would definitely need a TD or a rigger to handle it, but you can see where we're trying to go with our animation toolset, hopefully coming very soon in Houdini. Character effects, then. This is the extra motion and finalization of characters using CFX tools; beyond animation, we're now getting into heavier simulation and things like grooming. Muscles first. We've had a muscle system for a couple of versions of Houdini now, and it could produce really nice results, but it was quite complex. It used object-level rigs, you had to rig the muscles in specific ways to do the flexing and so on, and while it could produce nice results, it was cumbersome: it required a lot of tweaking and a lot of TD skill to set up the muscles, and it was difficult to ingest animation from other places, because you had to build those object-level rigs out of whatever you were pulling in. With our new muscle system, we've pulled all of that into SOPs. Everything now happens in SOPs and can be driven from there, which simplifies the process dramatically. We've also completely redone the solver and the underlying architecture, and it gives the result you're seeing here: animated bones, muscles that fire automatically or are animated, a tissue layer, and then a final skin layer. Of course, in this example we've exaggerated everything so you can see the motion, especially the skin sliding; this might be a droopier creature than you'd expect of a T. rex, but it drives
the point home, and you can see how these layers build on top of each other to create the result. These are actually the stages of the simulation, and looking at it you probably wonder: can I do this all at once, or does it have to be staged? The answer is either, but we're really pushing the idea of doing it in layers, for a couple of reasons. One is that in practice we found the setup easier this way, because you can sign off on each stage as you go rather than trying to do everything at once. It also means you can iterate much faster: the muscle sim runs at a relatively fast pace, around four seconds per frame in the stats here, so you can tweak the results and try things over and over at that stage without paying the cost of the full simulation. And in a practical sense, when you're trying to tune what a muscle shape looks like while it's being affected by the tissue around it, it's just very difficult for an artist to tune those results. So we've landed on this staged approach, which we think works really nicely and gives you the most control over your final product. To give you an idea of what's going on inside, there are all these layers: the muscle, the tissue, and the skin, with the skin pass really there for fine detail and wrinkling while the tissue provides the overall bulk motion. If you look inside, there's a blue tissue volume in the middle, and in a lot of ways it's not even really part of the simulation itself; it's there to pull things together and give an anchor point to all the various pieces. And here's a final render of the dinosaur to show how it all comes together, with some nice dirt kick-up effects in there just for fun. Okay, so what's actually going into this? Let's take a look at this character. This is
also motion capture data being pulled in, once again from Xsens, and this character is deformed in the traditional way. There's no simulation happening here: the bones are deformed by the rig, as are the muscles and the skin, so in some sense this is just a standard character setup, a rig moving the points around and deforming them, and this is what the simulation ingests. We've circumvented the whole object-level rigging step, so now we can take animated geometry and start building muscles from it. Here's an example of how the simulation can be modified: on the left you see the muscles animating and firing, and then three versions of the same character with different levels of tissue and skin tightness. In the first, the skin stays very close to the muscles; on the far right the tissue is much looser, adding a lot more bounce and character to this character. So you have a lot of control over how these things come together. Let's look at what the pipeline actually looks like, starting with the muscle workflow. You import your muscle geometry, which is just geometry and can be animated or not, and convert it to tet meshes: under the hood this is all driven by Vellum, and it uses tet meshes to do the simulation. If you're curious about FEM, we're also working on an FEM solution for this whole system. FEM, the king of simulation in a way, will always give you the most physically plausible result, but it does have a cost, so we focused on Vellum for this release, both for speed and for the look; it actually comes out looking quite good, so it's a nice balance of those things. Anyway, you convert your muscles to tets, then you groom your muscle fibers. A lot of these stages are optional, and we do calculate this automatically, but it's a useful step, as you'll see in a moment. Then you import your
animated bone geometry. To be clear, when we say bone geometry we're not talking about the rig; we mean literally the bones, whatever the collision objects inside the creature are, in this case the actual physical bones. Then we have muscle tension lines, another optional step, which are a way of automatically firing muscles based on the animation: rather than hand-animating it, you set up lines that define when a muscle should tense. Then you can animate the muscle tension, either automatically or by hand, and finally you run the muscle simulation. Let's look at a couple of those steps. Here's the muscle grooming. Most muscles are actually fine as-is; something like a bicep is basically a linear motor between two attachments, and we can derive the fiber direction pretty easily. But muscles like your latissimus have strange shapes, or your trapezius, I think, so it's nice to have that kind of control, especially if you're working on something beyond a purely human character, maybe some crazy creature you've invented, but you still want the muscles to flex in a specific way. This lets you really hand-edit how the simulation will interpret how a muscle should flex. Then we have the muscle tension lines. These are essentially an indicator of how we evaluate when a muscle should flex or relax: the lines are not used directly in the simulation, they're just used by the muscle tension step to understand that when a line decreases in length it's probably flexing a muscle, and when it increases in length it's probably relaxing it. There's a really nice viewport state for connecting muscles to these action lines, to set up when things should flex, which gives you an automated way of doing it. That's really handy if you're processing multiple characters.
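The tension-line idea above can be sketched very simply: infer a muscle's activation from how much its action line has shortened relative to its rest length. The mapping below (and the `sharpness` gain) is a hypothetical formula for illustration, not SideFX's exact math:

```python
# Sketch: shorter-than-rest action line => flexing; longer => relaxing.
# The shortening fraction is scaled and clamped into a 0..1 activation.
# (Hypothetical mapping, for illustration only.)

import math

def line_length(points):
    """Total length of a polyline given as (x, y, z) tuples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def activation(points, rest_length, sharpness=4.0):
    """Map line shortening to a 0..1 activation value."""
    shortening = (rest_length - line_length(points)) / rest_length
    return max(0.0, min(1.0, shortening * sharpness))

rest = [(0, 0, 0), (1, 0, 0)]              # straight arm: rest pose
flexed = [(0, 0, 0), (0.8, 0, 0)]          # elbow bends, line shortens
rest_len = line_length(rest)
```

Evaluating this per frame over the animated bone geometry is what lets the muscles fire automatically, with no hand-keyed tension at all.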
When you can't do hand setups for every single character, this automatic flexing will do it for you, whereas if this is a hero character in close-up, you'll probably want to hand-animate the flexing very closely, and possibly even sculpt the shape of the muscle itself into a specific form. Here's another view of the bruiser, this time from the back, just to show off those muscles as well. You can see that with the auto flexing you get a really nice result without doing any of the hand animation, but even if you do the hand animation, you still get this level of control over each step of the simulation. On to grooming. A lot of these tools have existed for a long time, and for procedural grooming, short hair, things like that, we have very robust, really good tools. However, when it came to longer hair, or very custom grooms where you really want to get in there and shape every part of the groom, it was challenging. So we've done a wholesale rewrite and rethink of that direct, almost sculpting-level approach to grooming. Here's an example of something we're going to build, not the most complex groom in the world, but we're going to build it more or less live so you can see how these tools work. Here we are in the guide groom tool, which is essentially a complete rewrite of this tool. We start by directly drawing some hairs; really we're just indicating the length, the direction, and the overall shape of the groom. Then we use the plant tool, which looks at the guides around it and interpolates them, so you can start dropping hairs quickly. This is really useful for eyebrows, eyelashes, and facial hair, where you want to place them exactly where you want.
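Conceptually, "planting" a guide means interpolating the guides around the new root. A minimal 2-D sketch of that idea, weighting neighboring guide directions by inverse distance to their roots (the real tool interpolates full guide curves on the skin surface, so this is an assumption-laden simplification):

```python
# Sketch: a planted guide's direction is blended from nearby guides,
# weighted by inverse distance from the new root to each guide's root.
# (Illustrative; the actual tool interpolates whole guide curves.)

def plant_guide(root, guides, eps=1e-6):
    """guides: list of (root_xy, direction_xy); returns blended direction."""
    weights = []
    for g_root, _ in guides:
        d = ((root[0] - g_root[0]) ** 2 + (root[1] - g_root[1]) ** 2) ** 0.5
        weights.append(1.0 / (d + eps))          # closer guides dominate
    total = sum(weights)
    dir_x = sum(w * g[1][0] for w, g in zip(weights, guides)) / total
    dir_y = sum(w * g[1][1] for w, g in zip(weights, guides)) / total
    return (dir_x, dir_y)

# two hand-drawn guides pointing different ways; plant one midway
guides = [((0.0, 0.0), (1.0, 0.0)), ((2.0, 0.0), (0.0, 1.0))]
planted = plant_guide((1.0, 0.0), guides)
```

A guide planted midway inherits an even mix of its neighbors; one planted next to an existing guide essentially copies it, which is why planting feels predictable for eyebrows and lashes.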
So we're going to do the same thing for the rest of the head: just draw some guides, very sparse, leaving a lot of gaps, to get the overall shape and length of the hair we're going for. Then we'll go back into the plant tool. Again, this works really nicely for things like facial hair, but you can imagine filling out a whole character this way would be pretty tedious, so there's also a scatter mode, which creates hairs at a certain density while still interpolating those original guides. That lets you map out a hairstyle really quickly, just by adjusting how dense the hairs are in certain places as you go. We also have move brushes, which let you slide the hair along the surface to shape your groom; you can see I'm pushing in here, sculpting the hair a little, trying to clean up the edges and fix things like the eyebrows. If we need to do some tweaking, we can delete some of the hairs, give him a bit of a receding hairline. There's also something called cull, which is almost the opposite of the scattering we did: it reduces hair to a certain density, so you can thin out areas, add a bit of a bald spot on the back of the head, and it's really useful for cleaning up edges and feathering out from very dense to very sparse hair. There's physics in the new guide groom tool too, so you can use physics to sculpt things like long hair. Here I'm isolating a piece of the groom and brushing it into position, essentially sculpting the hair; then we add a live simulation using Vellum. This addresses a really common task, getting hair to rest on the surface, and we're using actual physics to do it. I can use it to puff up the hair a little, bring it out from the skull so it doesn't lie so flat, and switch back to our
sculpt tool. This is really just another way of sculpting; don't think of it as the final result, it's a way of getting things basically into the shape you want before you go back to the traditional brushing and sculpting toolset. This tool actually has a ton more features that we don't have time to go into today, but take a look: you can see in the HUD in the upper left a lot of the options that exist. And of course you want to be able to render this. This is Karma, with a new Karma hair shader that works on both the CPU and XPU, and you can get pretty quick results even in this case. This video is sped up a little, but you can see the time in the upper right corner if you're curious: it takes around two-ish minutes to get a more or less fully realized, clean render, and in this case we have SSS as well. Switching to XPU, you'll notice it renders almost immediately. Granted, there is no SSS available in XPU yet, but almost immediately the hair resolves to a point where it can be evaluated. This gives you a lot of options as a grooming artist to quickly check what the final render looks like: is the light catching this in a way that's pleasing, or is it incorrect? Having the option of XPU or CPU means you can iterate really quickly and be confident in what the final result will look like without waiting a long time to see it. And to give you an idea of what that looks like: this is the same hair we created a moment ago, done in around ten minutes. It's not a spectacular groom, but it shows you can go from nothing to a full render in a short period of time, which means you can evaluate it and find things like, oh, the back of the hair is a little patchy and I should fix that; I
couldn't see that in the OpenGL representation, so let's make some tweaks, or maybe the eyebrows are too thick, whatever it might be. Now let's look at what a groom can look like when somebody takes longer than ten minutes: a grooming artist helped us out and created this really awesome hairstyle. You can see again the great ability to feather out the grooms at the edges, and to deal with specific shapes, like the curls at the ends of the mustache, the locks of hair in the front, the frizz and so on. All these tools come together to let an artist modify grooms procedurally, but also by hand, almost hair by hair if they want, to create and sculpt a character hairstyle. And of course the hair shader works with various styles and colors of hair; here are some examples of different hair colors and how they look, and here's an example of Karma XPU and CPU side by side. They match very closely; there will of course be some differences, as the architectures are different, but they match very closely, barring the fact that there isn't a subsurface-scattering skin shader for Karma XPU yet, so we've put an almost clay-shaded skin in there. All right, thank you, Scott. We want to take the new muscle system out of beta as soon as possible. Currently we recommend it more for the technical user than for the non-technical artist, but even so, please do give it a shot: it's a lot of fun to use, and we can make it better, faster, if we get your feedback. As for grooming, our next project on the roadmap is feathers, which will include KineFX and Vellum support natively; we're working on that already. For modeling, our primary focus is on raising the bar in computational geometry, as we did with the Boolean SOP a couple of years back. What we want is to design resilient algorithms, if I can call them that: algorithms that do well
under procedural stress, not just when used interactively, but repeatedly, procedurally, in parallel. We're working on making viewport interaction better for all of you modelers, yes, of course, but also for animators, lighters, and VFX artists: a better interactive experience for everyone. Tool-wise, in Houdini 19 we're paying particular attention to meshing, especially meshing that adds value to character workflows, such as the improved topo transfer. We're also delighted by all the modeling tools our Labs team keeps delivering at a crazy rate. The Labs tools are such a powerful complement to the tools we build in Houdini main that every year we try to harvest the best of Labs for induction into Houdini as default tools. Scott, would you like to run us through the latest batch of tools? All right, thanks, Cristin. Let's start with polygon and UV tools. Speaking of Labs, here's an awesome SideFX Labs tool for slicing up geometry: the slicer tool. It's interesting because this is the type of thing you can of course do in Houdini by itself, but it's complicated, you have to build the tool yourself. SideFX Labs gives us this great place to experiment with features like this, and it's produced a really amazing result: not only is it fast, it can do these incredible twists and turns in the slicing, going well beyond what you might typically build yourself, while remaining a very practical tool. That's what SideFX Labs is all about: extending, reaching for something more, while staying grounded in something that's genuinely useful for an artist in their day-to-day workflow. I also just really like the sliced-bread icon. As you can see, it's a useful tool for slicing things up, but I can also imagine lots of motion-graphics-style setups using this, or even complex set-dressing tasks. As an example of that,
here's the slicer tool being used on, let's call it a space station, a spaceship of some kind, almost as a greebling tool, adding more and more detail in smaller and smaller increments, and it looks really awesome. I see a lot of potential uses for this tool in the future, so take a look and play around with it. UV flattening has continued to see incremental updates. The underlying algorithm was very solid to begin with, so a lot of what we've been doing release to release is making it easier for the artist to interact with. We've made some adjustments to the UV flatten state: it now gives you the ability to switch between different path-finding modes, either shortest path or fewest turns, and the ability to add a mirror plane. This is actually really interesting, and probably too much to get into here, but the mirror plane is not simply a plane, it's topological: you can pick a mirrored edge even if the geometry is in a different position on either side of the mirroring plane, and technically it can even be topologically different on the other side. The point is that it saves you from working on both sides of the model; you can work faster by working on one side and mirroring to the other. We've also made interactive cutting and sewing easier by collapsing them into a single state, we automatically color the islands, we let you hide parts of the mesh automatically when you're not working on them, and we automatically pop open the UV view when you're inside this toolset, so you don't have to manage different layouts for different tasks; it switches the views for you. We've also made some incremental improvements to topo transfer, which, if you're not familiar, lets you take one nice clean mesh and transfer it onto a new target shape. Here
again is Fiona, kind enough to let herself be photo-scanned, with topo transfer applied. The key difference from the previous release is support for non-manifold meshes, which means things outside the single contiguous mesh, in this case the head, can move along with the topo transfer. That includes interior parts: eyeballs, mouth bags, gums, the things people typically put inside heads for animation, are now correctly pulled along with the topo transfer of the skin, so you don't have to do that manually as a second step. Again, these are incremental steps in this technology, with an eye toward making it a fully featured toolset for retargeting skin, or more general meshes, onto other shapes. Now here's a little tool that's been a long time in the making. First we'll use Find Shortest Path to generate a curve on the surface of the toy and resample it, typical Houdini modeling steps, and then we'll use the new curve tool to modify it with Bezier handles. If you're looking at this and thinking, finally, Bezier handles, we definitely agree with you. We really should have had Bezier handles a long time ago; it's almost embarrassing that we didn't. But this is more than Bezier handles, and that's the key point. What you're looking at is the culmination of several things: the underlying Python API that lets you build tools with Python, the ability to build handles with Python (these are Python-based handles), and the user interface you see in the upper left corner. All of these come together in the new curve tool, so think of it as much more than just Bezier handles. Actually, let me return to that slide for a moment. Also more than just Bezier handles is the fact that the new curve tool can inherit incoming geometry.
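The handles being dragged here are the four control points of a cubic Bezier segment, and evaluating a point on the curve is just repeated linear interpolation (de Casteljau's algorithm). A minimal sketch, independent of Houdini:

```python
# Evaluating a cubic Bezier segment by de Casteljau's algorithm:
# three rounds of linear interpolation between the control points.

def lerp(a, b, t):
    return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))

def cubic_bezier(p0, p1, p2, p3, t):
    """Point at parameter t on the segment defined by 4 control points."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# anchor points at the ends, tangent handles in between
p = cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.5)
```

The curve always passes through the two end anchors, while the two middle handles set the tangents, which is exactly the manipulation behavior you see in the viewport.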
You can see here that I've built this shortest path using standard modeling tools, but the curve tool is able to pull that data in, so it's no longer just for generating curves, even though you can draw multiple curves inside a single curve SOP. It can also pull in data from other places and take control of it, almost as if you've rigged the curve with the curve SOP, and that opens up a lot of possibilities for tools down the road. There are also features we've added that again go beyond Bezier handles, like rounded corners. We'll use the radial menu to convert these points into rounded corners, and now the curve tool can set a specific radius on each of those points. You can modify them individually by hand, and you can also grab the curve in different places and manipulate it, and it will attempt to maintain those radii as you do. So if I grab this corner and pull it down, you'll see it tries to maintain the radius there. All of this is manipulated live in the viewport, and then you can use the radial menu again to bake those back down to handles, so now I can use standard Bezier handles to manipulate the geometry. It's this flow back and forth between semi-procedural tools and direct modeling tools, all inside the single curve state. Speaking of the curve state, the new curve tool can also be embedded inside other tools. Previously, when you wanted to build an asset like this chain HDA, you would typically have an input for the curve, so the curve sat outside and the HDA did the work. Now it's possible to embed the curve inside the asset itself, so the tool can do both the chain processing and the drawing of the curve at the same time. And more than that, the state itself can be embedded inside the HDA, so your curve state, the
Python structure and the API behind it, can be used in a hybrid state between this new chain tool and the original curve state; they work hand in hand. This opens up a lot of interesting possibilities for tools, because you no longer have to work with the assumption that there must be an input: the tool can inherit a curve or generate new ones. Next, volume deformation. Obviously Houdini has a lot of tools for working with volumes, generating them, manipulating them in different ways, but it was difficult to deform a volume the way you might deform geometry. A lot of people have rolled their own tools for this, usually by rasterizing the volume to points, manipulating those points, and turning them back into a volume. We've done something similar, but the key piece is how we interpolate those volumes, because it's very easy to lose information going from volume to points and back if you don't pay close attention to how the volumes are interpolated, especially once you start applying extreme deformations. You can see here that we're using a cubic sampling approach to give the smoothest possible result while retaining the maximum amount of detail. If you look closely, you'll see it's the bend handle doing this: this is not a specific tool for volumetric deformation. Again, we've built a framework for it. Essentially there are two nodes involved, Lattice from Volume and Volume Deform, and in between you can use basically any modeling tool that modifies points. You can use the Bend SOP, as in this case, or something like Path Deform: here we have a simulation of a fire tornado being deformed along a path, and you'll remember that Path Deform was added a couple of versions ago.
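The cubic sampling mentioned above is the reason the deformed volume stays crisp. A 1-D sketch of the idea, using a Catmull-Rom cubic through four neighboring voxel values (a standard cubic kernel; I'm assuming it here for illustration, since the exact kernel Houdini uses isn't stated):

```python
# Why interpolation quality matters when resampling a deformed volume:
# a cubic (Catmull-Rom) fit through four neighboring voxel samples
# reconstructs values between p1 and p2 smoothly, instead of the
# faceted result plain linear sampling gives.

def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between p1 and p2, t in [0, 1]."""
    return (
        p1
        + 0.5 * t * (p2 - p0)
        + t * t * (p0 - 2.5 * p1 + 2 * p2 - 0.5 * p3)
        + t * t * t * (1.5 * (p1 - p2) + 0.5 * (p3 - p0))
    )

voxels = [0.0, 0.0, 1.0, 1.0]     # a density edge in the volume
mid = catmull_rom(*voxels, 0.5)   # smooth value halfway across the edge
```

In 3-D the same kernel is applied along each axis (tricubic sampling), so detail survives repeated deformation instead of smearing out.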
It knows nothing about volumes and hasn't been modified in any way, but this new system allows it to be integrated into a post-simulation workflow. The simulation was run essentially straight, with some twisting applied, and then the path deformer comes in post-sim and deforms it, which opens up a lot of possibilities for working with volumes this way. For instance, you can run a simulation, deform it, and then run another simulation on top of that; if you need smoke to follow a specific path, you can do that, or, as in this case, you can add animation onto the simulation afterwards, which would be difficult to do in the simulation itself, almost putting it in the hands of an effects artist as an animator. And just to show the fidelity you get, here's the viewport versus Karma CPU versus XPU: all of these are starting to converge, getting closer together, so you can see what you're doing as you're doing it, in real time, with the confidence that down the road it will look very close to the final result, just with some extra features like scattering that are difficult to do in real time. And just because it looks awesome, here's a close-up of the fire tornado being deformed. Again, the key is that the voxel interpolation is very good, so you don't lose detail; things don't become too soft over time, they stay nice and crisp even as you're deforming. This is another really cool example of the same idea: we simulated the fire burning inside the Crag geometry in a static pose, then used a standard point deformer to apply the Crag animation to the underlying sim. So again, the sim is statically computed and deformed as a post operation, giving this really cool effect of a fiery, internalized character. And once again, just showing CPU, XPU, and the
viewport all together, to see how closely we can bring [Music] visuals to the artist. well, lots of features there, and i bet you weren't expecting volume deformation to be part of modeling, but it is, and it's such a valuable feature, we think, because it doesn't even require simulation, and it comes with all the capability that sops give you, that whole arsenal of tools. we're also looking forward to seeing what interactive tools you'll create with the new python api, now that you have access both to the interactive state and the 3d handles. we've been having a lot of fun with that, and i can tell you that a lot of the tools you've seen, and hopefully many that we'll be building ourselves, will use the python api rather than the c++ api. as far as other meshing projects in the future, well, we have two very good ones: the first is about blend shape support for topo transfer, which will take it to the next level and will use optical flow, and the other is what we believe will be a game changer for quad remeshing. but all of that for another day. and finally, the effects. we say it every time we talk about houdini on stage: there cannot be a houdini release without some solid love given to dynamics and big-time visual effects [Music] with all our solvers maturing nicely, we're allowing our r&d focus on dynamics to grow in scope slightly. for example, we're dedicating increased attention to real-time solvers, whether for game dev or vfx. we are obsessing over the ux, which often means shifting workflows away from dops and into sops, but it also means designing specialized 3d handles and viewport experiences. and we're building more complex high-end tool sets to get you going faster. this doesn't mean that we're building black boxes, or that you'll always use them the way we design them; the point is to help you with your productivity right out of the gate. speaking of game dev, our labs team have once more outdone themselves in supporting the needs of the
gaming community, with a major new vat release and more; you'll hear about that from scott in a minute. last but not least, a special mention goes to vellum in houdini 19, not only because of the key role it plays in the new muscle system but also because it is now a complete, unified multi-solver. scott will show us everything. now, scott, on to you please. all right, thanks chris. so yeah, visual effects, obviously a key piece of the houdini architecture; let's dig in. vertex animation textures, or vat: if you're not familiar, it's essentially a way of encoding motion into a texture map, which can then be very efficiently rendered in a real-time engine by doing it all on the gpu, on the graphics card, using this texture to drive the animation. so here are just some examples of what you can do in a real-time engine by doing the simulation in houdini and then using these vertex animation textures to transfer that animation over. it allows you to have these very high-fidelity effects that would be very challenging to do in a real-time engine, but still have them play in real time; things like this very complicated rigid body dynamics. my favorites here are these fluid dynamics with the lava and the pig head; they look amazing and they play back in real time. so it's a really nice way of getting information out of houdini and into your real-time environment without losing the fidelity. of course, that's what vat 3.0 is all about: keeping as much of the original information there as possible, and that's helped partially by having better interpolation of the data. so in vat 2.1, and this again is sort of an exaggerated example just so you can really understand what's happening here, you're kind of losing a lot of information between the baked samples in the texture, whereas in vat 3.0 you get a really nice smooth interpolation between them, and that way you can really keep more of the original
animation. in a lot of ways it's similar to the keyframe reduction we showed in kinefx, except in this case it's for simulations. there's also support for lod, level of detail, for both rigid body and soft body. this is just an example where we're changing the color to show when the level of detail is changing; of course that's very important for real-time environments, where you're optimizing for milliseconds, so you want to be able to change that level of detail, especially when you can't see the information that you're losing, which is kind of the point here. so that's a really nice addition; it makes this much more flexible for your real-time environments, so you can have more of these effects without worrying about hitting your performance limits [Music] conditional vat: obviously this is essentially just playing back an animation, but in games the whole point is that they're interactive, and this basically allows you to trigger these animations conditionally and have them behave in different ways depending on what's happening. so in this case we're obviously shooting and breaking up this wall, and playing different chunks of the animation back depending on how and when it's hit. that gives you something that feels like a real-time simulation, because it feels very interactive, even though under the hood you are baking this out from another simulation. so, vellum: the vellum solver has slowly become a workhorse for specific types of effects, cloth obviously being one of the key ones, and we've continued to develop the solver with an eye to creating effects that weren't possible in the past. here's an example of fluids and plasticity, cloth, rigid bodies. looking at this, you're probably thinking, oh, this is cool, cloth and things are interacting with each other; the key piece here that may not be immediately
obvious is that this fluid simulation is also being done inside the vellum solver, as are those rigid bodies. the rigid bodies are using shape matching to create a kind of rigid object in vellum, and all these things are interacting together at the same time; these are not baked-out simulations, and this is not using feedback forces. the vellum solver is handling all of this in a single simulation. it's also rendered with xpu, if you're curious, so you can see some more of how you can actually use that today, and what level the renderer is at. so the rigid elements are basically using shape matching, which is something we actually added in a production build of 18.5 but have continued to support, and we've added plasticity; so in this case you see vellum handling essentially rigid objects, which you could sort of do before, but not to this level. it's also important to note that when we talk about fluids and rigid bodies here, don't imagine a replacement for the bullet solver or a replacement for flip; those are designed for extremely large-scale simulations, which would be very difficult to do inside of vellum, and that's not really the purpose of these tools. this is for smaller-scale things where you really need to see that live interaction between pieces. just to give you an example of what i mean about interaction, here is a simulation that uses fluid, shape-match rigid bodies, cloth, grains and wires all together in a single simulation, all interacting with each other. this simulation would just not be possible using feedback forces. if you look closely you'll see the fluid pushes on the rigid body wheels, which turn, which pulls the wire, which lifts the cloth, which lifts the grain; when the cloth stops, it forces the wheels to stop, and the fluid is pushed back. these are all things that you simply could not have done without having this unified multi-solver, or i should say, you would have to do it in stages, or try and use feedback forces for
partial elements, bake out certain other elements, and try to get the timing to all work together. here, instead, you can see the benefit of having a truly multi-solver context, where cloth, fluid, grains, soft bodies, rigid bodies and wires all work and communicate together using the same solver. but again, if you're looking at this and thinking, well, i'm going to make an ocean simulation out of these fluids, that's really not the goal; this is for smaller-scale effects. you can see you can still get a nice high-fidelity result, you just don't want to create an entire ocean of vellum particles. so obviously we've updated the vellum solver to work with fluids, and part of getting that to work is having more of the solve happen on the graphics card using opencl. there's a fast neighbor lookup used in opencl that allowed us to do fluids; performance has actually increased since the recording of this video, but you can see that even here it's quite interactive, so you can actually use the vellum brush to push fluids around. and once we had this fast neighbor lookup, we also wanted to apply it directly to grains, so grains see a speed improvement as well, just as a byproduct of the work on the fluid simulation. here you can see using the vellum brush to manipulate these grains in sort of semi-real time, and i see a lot of potential here for set dressing, using these tools to actually build assets that would otherwise need to be sculpted. just to give you an idea of the actual speed improvements using vellum grains, here's an example of 18.5 on the left and houdini 19 on the right, with some statistics to go along with it. you'll see there are slightly fewer points in h19, just because of sourcing differences between the two systems, but you can see that it's about two to three times faster depending on the circumstance, which is a really nice speed improvement overall. now, just a render to show again the fidelity of the
grain solver and what you can actually get out of it, in this beautiful kind of sand trap scenario. and finally, here's an example of how these tools might be used in an actual production environment: a sort of faux commercial using all the elements we just talked about, coming together to create something that, obviously, you could have done in other ways in the past without using the vellum solver, but having all these tools together in one place, using one system, one way of configuring things, one way of creating constraints, really frees up the artist to create these types of effects in an easier way. i think there's a lot of potential here, especially for things like motion graphics and commercial work, where often you are focused on a small-scale simulation rather than, you know, destroying the whole earth or drowning it in an ocean. destruction, since we were talking about destroying the whole earth: here's an update to another fundamental tool in houdini, debris source. debris source has been used for a very long time in houdini; it's basically a way of identifying parts of the simulation where things are breaking apart or colliding together, with the goal of using those areas to produce secondary things like debris, whether it's just particles or other rigid bodies. here you can see a visualization of what it's doing, and again there's not enough time to go into detail on all the pieces of what the new debris source does, but some key highlights are that things honestly just work a little better, and also that you can use packed primitives to generate the sources. previously you would have to unpack your rigid body simulation, and if you had a very complicated one that could take a long time and be slower to evaluate; this helps get around that problem. it also does a better job of identifying the edges and the interior faces, and scatters the points in a more reasonable way, and that way
you can get a better overall result. anyone who's done visual effects will definitely tell you how important secondary debris is, because it's really what makes things look realistic; it takes a rigid body simulation, which can look very good, and suddenly brings it to life by making it feel like it has almost unlimited pieces, smaller and smaller and smaller pieces bringing all that motion to life. and just to drive that home, here's this awesome final example with different effects elements all brought together: sourcing smoke, sourcing pebbles, sourcing other rigid bodies, all together to give this really full-feeling effect when all these elements come together, creating all this extra debris. and pyro: of course we can't have a houdini presentation without blowing things up or lighting things on fire, so let's talk about pyro. before we get into the big explosions, let's talk about some fundamental workflows that are required when you're working with this kind of data, which is generally huge; pyro effects can be gigantic, and you saw earlier more than a billion voxels, so being able to cache out the results, wedge those results, and see them is critical to your workflow. we have a brand new file cache node in houdini 19 which has a lot of really interesting features. it makes versioning assets, i should say versioning sequences, much easier, and it makes handling multiple versions of things a lot easier. it also uses pdg under the hood to bake things out, so again taking pdg and tops and bringing them closer to your standard workflow; in this case you can actually see that we're now able to bring the tops interface into sops, so you can see the work items being displayed there on the node itself. and then there's a sidefx labs extension of this which adds wedging support, so in this case you're actually seeing that we're able to generate multiple versions of this sequence
and then bring them in using pdg and the work item display in sops to actually compare and look at the results all at once, which is again extremely valuable when you're trying to iterate on something and you want to be able to compare results or bake out multiple simulations at the same time. this is going to be a really nice stepping stone in your workflow, to help you get over reading and writing huge amounts of data. we've also made some updates to the minimal solver, which, if you're not familiar, is a sort of all-on-the-gpu solver. previously you had to bake out your sources, and you still do to a degree, but now you can instance those sources, which means you can manipulate their transform and orientation over time. in this case, for instance, we have this torch that's moving around and changing its angle, and what we're actually doing is instancing a little pyro source onto the end of the torch, which allows it to move around, reorient itself, and still update in this sort of semi-real-time solver. this is also just another really nice example of how nice the viewport can look; seeing this, you can truly evaluate what you're going to get in your final render. we've also added a new gas axis force tool; essentially this is a new tool that allows you to add forces into your simulation, using a new handle as well. the types of forces are things like an axis force, where you push things around an axis; there's also the ability to push things toward an axis, and we can push things along the axis, so in this case we're pushing things upward as we expand the handle. all this is using the minimal solver, but of course this handle can be used in the standard pyro solver as well. but let's take a look at it in a slightly more practical sense. here we've got a sort of noisy torus source, and we're going to shape our handles to fit over it. we'll hit play without any
of the forces involved, and you see it's just a noisy expansion of the smoke; but as we increase the axis force we start pushing the smoke along that tube that you saw, and now we're creating this kind of tunnel of smoke. then we use the suction force to pull it in along its length, which means we get this kind of cone effect as the smoke is pulled in as it travels along the length of the curve, sucking in more and more; and then finally we add some of the orbit force to create a kind of vortexy, whirling look. so now we've gone from basically just a randomly expanding torus to something that looks like maybe some kind of portal, and as we jump up out of the forces subnet into sops, you can see how it looks with some basic lighting; it's starting to come together and look quite nice. now here's someone taking a little more time to massage the effect and get something interesting looking, using the pyro shader in the viewport to create a completely magical effect. this is no longer smoke or fire, it's some sort of magical force; so again, the viewport isn't just showing you purely fire and smoke, it can be used to show these types of effects as well, and then eventually you render out on the cpu to get the full effect of light scattering and all the things you would expect from a full production renderer. throughout the presentation we've talked about improvements to the viewport representation of pyro, and i think this is a really good example: if you did not have lighting in your scene previously, you would basically see nothing, you would just have this flat volume. so we've added essentially an ambient occlusion ability to the volumetrics in the viewport, which suddenly shows you just how much detail is really in there, pulling out tons of detail that used to get washed out in the viewport. it would of course show up in full production rendering, but while you were working it was difficult to see exactly
what you were getting. so here's a comparison between 18.5 and 19 with all of our new effects: not only the ambient occlusion, but also the ability to bloom bright areas to create that kind of haze around brighter areas, as well as new interpolation for the voxels themselves in the viewport. previously, especially at lower resolutions, you would see these sort of blocky voxel artifacts, so we added new interpolation to smooth out those voxels and make things look more like they would at higher resolutions. one of the nice side effects of these viewport renders gaining all these new abilities and higher fidelity is that you can actually use the viewport to bake out texture sheets to be used outside of houdini, traditionally in things like game engines. sidefx labs has put together a tool set to take advantage of this workflow, because we no longer have to bake the lighting in; we can actually use the information to relight in other packages. so here's just an example. we're using a standard sort of tool here, just putting down an aerial explosion to generate an effect; then we're going to use some tools to create a normal pass and a motion vector pass, motion vectors being sort of the velocity behind the scenes, and those can be used in game engines to do interesting interpolation between frames, because you don't necessarily want to bake out a thousand frames; instead you want to bake out a much smaller number of frames and use the motion vectors to interpolate between them. then we use the flipbook textures node here to bake out a flipbook of the explosion, and this is a hybrid tool: there's the sidefx labs part of building the data that you need for the game engines to use, but there's also the native ability now of the flipbook tool to generate these texture sheets, so it's a combination of sidefx labs tools and the new built-in capabilities of the
flipbook tool itself. just to be clear, we're not writing things to disk and then using a mosaic tool or something like that; the flipbook tool itself is creating the texture sheets. then of course you can bring this into unreal or other game engines; in this case we're using unreal 5, and this is a material instance here on the right, which basically provides you a lot of ways of manipulating the data that we've exported. and again, because we can now use non-lit volumes, that means we don't have to bake the lighting into the volume when we render; instead you can use the game engine to modify the volume and change properties about it sort of in real time, and that includes the lighting as well, so you can change the light direction to match your scene. this is really important for integrating effects into your unreal real-time playback. here's just another example, using fire integrating with particle systems in this other environment, and you can see how lighting, particles and the texture sheet export are being used, in unreal 4 in this example, to really create this integrated feeling of flame. there's also a new velocity scale tool for simulation. velocity scale is basically a way of manipulating the velocity over time using different fields, so you can use thresholds to say, i want things to slow down when the temperature reaches this point, or i want them to speed up when, let's say, the density is a high value, and you can set target speeds and limits so that you have a lot of control over the shape. this is really important when you're doing large-scale effects like this, where smoke plumes are not just 20 feet tall, they're potentially three kilometers tall, and you want the ability to have them slow down in different parts of the atmosphere or react differently to various environmental effects, especially things like temperature. so a while back we started introducing these sort of higher-level wrapper tools that allow you to
deal with very common effects tasks in a simpler way, and a new one is this particle trail, or spark trail, tool. sparks are extremely common in visual effects for all sorts of reasons, and there are a couple of issues that come up. one is the splitting, which you can see here, which happens when a spark gets to a certain point and, rather than simply cooling off and dying, essentially explodes a second time and sends sparks off in this particular pattern; this tool wraps up that functionality. it also allows you to stretch out the particle, in a sense, over a specific time range. sampling motion blur for very small but very bright things like sparks is actually very difficult and requires a lot of sampling to get it to look right, so a really nice production solution is instead to actually stretch out the particle using geometry and then render it without motion blur; you get the same effect without paying the cost of having to sample it. let's take a quick look at the setup. this is just a very standard particle simulation, some particles spraying out and hitting the ground. we'll put down the spark trail node, and you see immediately we get these new trails, similar to what you would get with the trail sop but with much more fully featured results, especially this trail method being set to shutter, which means you can match exactly the motion blur you would get in karma for the other elements in your scene to these sparks that you're going to render without motion blur; you can directly match shutter speed to shutter speed. now in this example we've exaggerated things, we've made them very big and obvious just so you can see it, but let's go ahead and add splitting, and if you look closely you'll see at certain points, as the particles die off, some of them split and break apart into these extra trails. just to make it really obvious, we're going to
increase the split frame duration so you can see them more clearly, and then we're going to switch between ballistic and straight splits, ballistic meaning the split is going to follow a velocity path as if it's being affected by gravity. all this together gives you the ability to generate sparks and other types of effects that require these kinds of trails. a really good example here are these rain effects, using basically the same idea, but rather than sparks exploding and breaking, it's water droplets hitting the ground and splitting apart. so again, like a lot of houdini tools, we're not building tools that only work in a single circumstance; they're really designed to be more open than that, so even though the target is particle trails and things like sparks, of course it can be used for a myriad of effects. here we have the muzzle flash tool, which is a combination of a new pyro burst preset and the spark trail tool that we just showed; again, the idea is giving you tools to create effects very quickly using very common production approaches, which here essentially means using particles both as the sources of the flame and as the sources for these sparks: a common effect with a new pyro burst preset. and another one is a shockwave; any sort of large-scale explosion will probably feature some sort of shock wave effect. here again we're showing it in the viewport, just to show the fidelity we can get out of the viewport; it looks incredible, but underlying this is another new preset for pyro burst called shockwave. of course, now we're showing it in this beautiful karma cpu render, just to show you all the detail and information that's in there, as well as combining some effects that we've shown in previous launches, like these ballistic paths and these sort of glowing embers on the ends of the trails; they're all coming together to create this really awesome explosion at a massive scale. and just to give you a
little idea of how this works, you can see that we're basically using the pyro burst source, the same tool we introduced previously, but now with new options for shaping these expanding rings or shock waves in various ways; you can adjust the timing, their positioning, and the amount that they spread, and combining multiple of these pyro burst sources together gives you a really awesome tool set for adding this into existing pyro simulations, or even doing it all simultaneously. and just to end off, let's show the result of using this shockwave tool at a sort of massive, world-ending scale, to close out our presentation of pyro and visual effects in general. again, you can see how all these effects come together to produce this result: the pyro tools, the ability to render this with karma, the ability to have these different trails all combined together, shock waves and so on, giving this incredible-looking final result. and i think we're done now with our quick tour through houdini 19's features, so i'm going to pass things back over to kristin. and that's it, folks: houdini 19 is coming out in just a couple of days, on october 27th, together with lots of learning materials and master classes on the way. we are super stoked to share this major houdini release with you all, and with this our houdini 19 presentation comes to an end. the list of features is much longer, though, as you can see, and if this list doesn't scroll down too fast you might catch another small gem or two in there. for example, if you do pipeline work, check out the many pdg enhancements in 19, including the steps we've taken to enable massive-scale compute on the cloud; that work is ongoing. if you work on crowds, take a look at the new workflow for geometry layering for crowd agents, and if you're a cloth artist, check out the new constraint browser for ways to easily identify and inspect your scenes, the same constraint browser, by the way, that lets a
destruction artist figure out the constraints evolving between breaking fragments. houdini 19 ships with python 3.7, which is production ready and enabled by default for all your houdini downloads, and we will continue to also support python 2.7 for another release or two. finally, i want to thank everyone in r&d for once again lifting such a big mountain in such a short time, and for not skipping a beat with everyone working remotely. and r&d is not only the developers: it is tds, artists, docs, qa, and an increasing number of ux specialists. yes, you heard it, people dedicated to doing ux for houdini, led by our very own scott keating. as always, special credit goes to you, our alpha and beta testers, and all our users, for reaching out in so many ways, answering our questions, and most importantly allowing us to think and act in lockstep with you; this is what always brings us the most joy. and finally, kudos to everyone who helped build the beautiful content you saw today, and our marketing team who made today's event possible. hope you all enjoy houdini 19. goodbye [Music]
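Editor's sketch: the volume deformation section above stresses that cubic sampling of the voxels (rather than plain linear sampling) is what keeps a deformed volume crisp. Here is a minimal, self-contained illustration of that difference along a single voxel row; this is not Houdini's implementation, just the standard Catmull-Rom cubic interpolant next to linear interpolation.

```python
def linear_sample(v, x):
    """Linearly interpolate a 1-d voxel row v at fractional index x."""
    i = min(int(x), len(v) - 2)
    t = x - i
    return v[i] * (1.0 - t) + v[i + 1] * t

def cubic_sample(v, x):
    """Catmull-Rom cubic interpolation of the same row; smoother, keeps
    more of the underlying signal when the volume is resampled."""
    i = min(int(x), len(v) - 2)
    t = x - i
    p0 = v[max(i - 1, 0)]            # neighbours, clamped at the borders
    p1 = v[i]
    p2 = v[i + 1]
    p3 = v[min(i + 2, len(v) - 1)]
    return 0.5 * ((2.0 * p1) + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)
```

A real volume deform would apply this per axis (tricubic) at every warped sample position; the one-dimensional form is enough to see why repeated resampling loses less detail with the cubic kernel.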
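Editor's sketch: the VAT section describes baking per-frame vertex positions into a texture and, in VAT 3.0, interpolating between baked samples instead of stepping. A toy version of that encode/decode cycle, with a plain list standing in for the texture (the real tool packs positions into image texels and decodes on the GPU):

```python
def bake_vat(frames):
    """'Bake' per-frame vertex positions into a texture-like 2-d table:
    one row per frame, one (x, y, z) texel per vertex."""
    return [list(frame) for frame in frames]

def sample_vat(texture, vertex, time):
    """Decode a vertex position at a fractional frame time, blending
    between the two nearest baked rows (the smooth-interpolation idea)."""
    last = len(texture) - 1
    time = max(0.0, min(time, float(last)))
    f0 = min(int(time), last - 1)
    t = time - f0
    a = texture[f0][vertex]
    b = texture[f0 + 1][vertex]
    return tuple(a[k] * (1.0 - t) + b[k] * t for k in range(3))
```

Sampling at a stored frame returns the baked position exactly; sampling between frames gives the smooth in-between instead of the stair-stepping described for VAT 2.1.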
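Editor's sketch: the rigid pieces in the unified vellum simulation are said to use shape matching. The core of shape matching is recovering the best-fit rigid transform that carries a group's rest shape onto its current particle positions; a minimal 2-d version of that rotation fit (the production solver works in 3-d with a polar decomposition, and the names here are illustrative):

```python
import math

def shape_match_rotation(rest, current):
    """Best-fit rigid rotation (about the centroid, 2-d case) carrying the
    rest shape onto the current point positions."""
    n = len(rest)
    rcx = sum(p[0] for p in rest) / n
    rcy = sum(p[1] for p in rest) / n
    ccx = sum(p[0] for p in current) / n
    ccy = sum(p[1] for p in current) / n
    dot = cross = 0.0
    for (rx, ry), (cx, cy) in zip(rest, current):
        qx, qy = rx - rcx, ry - rcy      # rest offset from centroid
        px, py = cx - ccx, cy - ccy      # deformed offset from centroid
        dot += qx * px + qy * py
        cross += qx * py - qy * px
    return math.atan2(cross, dot)        # optimal rotation angle in radians
```

In a solver, particles would then be pulled toward their rotated-and-translated rest positions each substep, which is what makes the group behave rigidly while still living inside the particle-based multi-solver.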
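Editor's sketch: the vellum fluid and grain speedups are attributed to a fast neighbor lookup. The standard technique behind such lookups is a spatial hash grid: bucket points into cells the size of the search radius so a query only inspects the 27 surrounding cells instead of every point. A minimal CPU version (the real one runs in OpenCL):

```python
from collections import defaultdict

def build_hash_grid(points, radius):
    """Bucket point indices into cells of size `radius`."""
    grid = defaultdict(list)
    for idx, p in enumerate(points):
        cell = tuple(int(c // radius) for c in p)   # floor to cell coords
        grid[cell].append(idx)
    return grid

def neighbors(points, grid, radius, q):
    """Indices of all points within `radius` of query position q."""
    cx, cy, cz = (int(c // radius) for c in q)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for idx in grid.get((cx + dx, cy + dy, cz + dz), []):
                    p = points[idx]
                    if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius * radius:
                        out.append(idx)
    return out
```

Against brute force this turns each neighbor query from O(n) into roughly O(local density), which is why both fluids and grains benefit from the same lookup.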
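Editor's sketch: the new file cache node plus the labs wedging extension generate multiple versioned variants of a cached sequence. A tiny path-generation sketch of that idea; the directory layout and naming here are purely illustrative, not the actual node's scheme:

```python
import itertools
import os

def wedge_cache_paths(base_dir, name, version, wedge_values, frames):
    """Per-wedge, per-frame cache file paths for one version of a sequence
    (layout is illustrative: name/vNNN/wedge_W/name.FFFF.bgeo.sc)."""
    paths = []
    for w, f in itertools.product(range(len(wedge_values)), frames):
        paths.append(os.path.join(
            base_dir, name, "v{:03d}".format(version),
            "wedge_{}".format(w), "{}.{:04d}.bgeo.sc".format(name, f)))
    return paths
```

Keeping version and wedge index in the path is what lets the PDG work items be baked in parallel and then brought back in side by side for comparison.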
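Editor's sketch: the gas axis force demo combines three force modes around an axis: orbit (swirl around it), suction (pull toward it), and an axial push along it. The vector math for one point, written out plainly (function and parameter names are illustrative, not the tool's):

```python
def axis_force(p, origin, axis, orbit=0.0, suction=0.0, axial=0.0):
    """Force on point p from an axis through `origin` along unit vector `axis`."""
    rel = [p[i] - origin[i] for i in range(3)]
    along = sum(rel[i] * axis[i] for i in range(3))           # height on axis
    radial = [rel[i] - along * axis[i] for i in range(3)]     # offset from axis
    rlen = max(sum(c * c for c in radial) ** 0.5, 1e-12)
    rhat = [c / rlen for c in radial]
    # tangent = axis x rhat: the swirl direction around the axis
    tangent = [axis[1] * rhat[2] - axis[2] * rhat[1],
               axis[2] * rhat[0] - axis[0] * rhat[2],
               axis[0] * rhat[1] - axis[1] * rhat[0]]
    return tuple(orbit * tangent[i] - suction * rhat[i] + axial * axis[i]
                 for i in range(3))
```

In the portal demo above, the axial term makes the tunnel of smoke, the suction term tightens it into a cone along the curve, and the orbit term adds the vortex.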
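Editor's sketch: the flipbook export bakes a motion vector pass so a game engine can interpolate between sparse baked frames instead of cross-fading them. A one-dimensional pixel-row sketch of that warp-and-blend idea, with the warp simplified to a nearest-pixel lookup (real engines do this per texel in a shader):

```python
def mv_interpolate(frame_a, frame_b, motion, t):
    """Blend two baked flipbook frames at fraction t, warping each along the
    per-pixel motion vector instead of cross-fading (1-d pixel row sketch)."""
    n = len(frame_a)
    out = []
    for x in range(n):
        # frame A advected forward by t, frame B pulled back by (1 - t)
        xa = min(max(int(round(x - motion[x] * t)), 0), n - 1)
        xb = min(max(int(round(x + motion[x] * (1.0 - t))), 0), n - 1)
        out.append(frame_a[xa] * (1.0 - t) + frame_b[xb] * t)
    return out
```

With a feature moving two pixels between baked frames, the warped blend keeps it solid at the halfway point, where a plain cross-fade would show two ghosted half-strength copies.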
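Editor's sketch: the velocity scale tool is described as remapping speed via field thresholds, target speeds, and limits. A minimal per-point version of that logic (parameter names are illustrative):

```python
def scale_velocity(v, field_value, threshold, target_speed,
                   blend=0.5, max_speed=None):
    """Where the control field (e.g. temperature) exceeds the threshold,
    ease the velocity's magnitude toward a target speed, then clamp it.
    Direction is preserved; only the magnitude is remapped."""
    speed = sum(c * c for c in v) ** 0.5
    if speed < 1e-12:
        return tuple(v)
    new_speed = speed
    if field_value >= threshold:
        new_speed = speed + (target_speed - speed) * blend
    if max_speed is not None:
        new_speed = min(new_speed, max_speed)
    return tuple(c * new_speed / speed for c in v)
```

Run over every voxel each timestep with temperature as the control field, this is the kind of rule that lets a kilometers-tall plume slow down as it climbs into colder layers of the atmosphere.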
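Editor's sketch: the spark trail section explains the production trick of stretching each bright particle into a segment covering its travel during the shutter interval, then rendering without motion blur. The geometry of that trick in a few lines (a 1/48 s shutter matching a 24 fps, 180-degree shutter is an assumed default here):

```python
def shutter_trail(position, velocity, shutter=1.0 / 48.0):
    """Stretch a spark into a line segment covering the distance it travels
    during the shutter interval, so it can be rendered without motion blur."""
    head = tuple(position)
    tail = tuple(position[i] - velocity[i] * shutter for i in range(3))
    return head, tail

def trail_length(head, tail):
    return sum((a - b) ** 2 for a, b in zip(head, tail)) ** 0.5
```

Because the segment length is exactly speed times shutter time, the streaks line up with the motion blur Karma computes for everything else in the frame, which is what the "shutter" trail method above is matching.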
Info
Channel: Houdini
Views: 80,594
Id: hTfmtiz9qSI
Length: 105min 8sec (6308 seconds)
Published: Mon Oct 18 2021