RBD Tools Update | H17 Masterclass

Captions
Hi. In this masterclass I'll be covering some of the new RBD-related features in Houdini 17. As an outline, I'll be starting out with constraints, looking at the new soft constraint along with some useful new options on the glue constraint. Then we'll be moving on to SOPs, taking a look at the new Convex Decomposition SOP as well as some useful new utility SOPs like Extract Centroid and Extract Transform, and some handy new options on the Attribute Promote SOP for working with RBD objects. Finally, we'll be looking at the new fracturing tools, including version 2.0 of Voronoi Fracture, a new Boolean Fracture SOP, and the higher-level material-based fracturing toolset that we've introduced in Houdini 17.

We'll start off by taking a look at the soft constraint, which is a new constraint type that we've added for Bullet in Houdini 17. The soft constraint acts like a spring, so it's applying a force that's proportional to the distance between the two anchor points, and you can also have some damping, but it has a lot of differences in its behavior when you compare it to the normal spring constraint, and we'll be taking a look at some of those differences. For advanced users who might have been familiar with doing this in previous versions, it's actually kind of similar to the effect you might get by configuring some of the more advanced parameters of the hard constraint in order to make it behave in a softer manner, but you'll find that the soft constraint is a lot more straightforward to deal with and easier to tune than trying to tweak the hard constraint to give you a softer-looking behavior.

In this setup here I've just taken a bunch of spheres scattered through a box, used Connect Adjacent Pieces to create a whole bunch of interconnected constraints between them, and then configured those constraints to use the new soft constraint and also affect both position and rotation. So here I've just got a normal constraint network set up,
using the soft constraint at its default values for now, and if we press play we'll see that we get kind of a nice bendy, springy-looking effect for our object.

On the parameters for the soft constraint, there are a few of the usual parameters that you're probably familiar with from other constraint types: there's the rest length parameter, the option to override the number of constraint solver iterations if you want to use something different than the default value on the Bullet solver for these constraints, and the option to disable collisions between pairs of directly connected objects. The other two parameters are really the important ones for controlling the look of this spring: we have a stiffness parameter and a damping ratio parameter. If you compare this to the spring constraint, that has a strength parameter and a damping parameter, and these have some subtle differences between them. If you're familiar with physics, the strength and damping parameters on the spring constraint correspond to the spring constant and the damping coefficient that you would know from a damped harmonic oscillator. In the case of the soft constraint, the stiffness is the frequency of the spring, so the number of oscillations per second, and the damping ratio is the fraction of critical damping.

The important thing here is that the stiffness and damping ratio parameters are mass independent, so they're really just specifying the look of the spring. If I change the mass of my objects around, or I have a bunch of objects with varying masses, I'm still going to be able to get the same look to my soft constraint, which isn't the case with the spring constraint, where those spring coefficients and so on are very dependent on the masses of the objects. So in this case, if I take my objects, which just had the default density, lower that quite a bit, and then re-simulate, you'll see that we actually get a very similar look to the objects, and so this makes it a lot
easier to deal with for destruction-type simulations: you've got a bunch of fractured objects with totally different sizes, some really tiny objects in there, and you want to be able to get a consistent look to your springs throughout the simulation. And then if you go back and re-fracture your object a little bit differently, the look isn't going to change very much on you, so you can spend a lot less time tuning your constraints.

Just as a point of comparison, if we were to wire in the spring constraint instead, update the data name to "soft", and give it a moderate strength, it's going to look very different as we change the mass of the objects around. In this case it's somewhat stiff, and then if I go back and reset my density to the default, it's going to look quite a bit different: now it's acting a lot softer, just because the mass of my objects increased quite a bit. So I'll switch that back to the soft constraint for now.

Getting back to the parameters, the stiffness parameter on the constraint is fairly straightforward: a larger value just gives you a stiffer effect. If I make this really small and then simulate, we'll see that the constraint is allowed to stretch quite a bit and doesn't recover very quickly; as I increase it further, to maybe 100, it starts to get a much stiffer look, and as I increase it even further it just gets stiffer and stiffer. As this value goes towards infinity, it'll essentially end up approximating what a pin constraint would give you. And just like the pin constraint, one thing to keep in mind is that if you're increasing the stiffness further and further to make it almost like a rigid constraint and you're still seeing a bit of give in there, like in this case when that impact first happens, we'll see that it does bend just a little bit around the middle.
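To make the mass-independence concrete, here's a small sketch in plain Python, outside Houdini. It converts a (frequency, damping ratio) pair into the classic spring constant and damping coefficient for a given mass, and shows that the resulting motion is the same regardless of that mass. The function names and the simple integrator are my own illustration, not anything from the solver:

```python
import math

def simulate(mass, frequency, damping_ratio, x0=1.0, dt=1e-3, steps=2000):
    """Damped oscillator driven by soft-constraint style parameters.

    (frequency, damping ratio) are converted to the classic spring
    constant k and damping coefficient c for this particular mass.
    """
    omega = 2.0 * math.pi * frequency
    k = mass * omega ** 2                           # spring constant
    c = 2.0 * damping_ratio * math.sqrt(k * mass)   # damping coefficient
    x, v = x0, 0.0
    out = []
    for _ in range(steps):
        a = (-k * x - c * v) / mass   # mass cancels out of the dynamics
        v += a * dt                   # semi-implicit Euler step
        x += v * dt
        out.append(x)
    return out

# Same frequency and damping ratio, wildly different masses:
light = simulate(mass=1.0, frequency=2.0, damping_ratio=0.3)
heavy = simulate(mass=50.0, frequency=2.0, damping_ratio=0.3)
```

Because k and c are derived from the mass, the mass divides back out of the acceleration, so `light` and `heavy` trace the same curve. With the spring constraint's raw strength/damping parameters, that cancellation doesn't happen, which is exactly why its look shifts as masses change.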
If you're seeing that even though you think the constraint should be a lot stiffer, you might need to increase the number of constraint solver iterations. If I crank that up in this case, we'll see that we get a somewhat stiffer-looking effect and a lot less bending. So one thing to keep in mind is that you might need to also increase the number of constraint solver iterations if you're trying to get stiffer-looking constraints.

The damping ratio is the other parameter, and this has a bit more of an interesting meaning to its scale. A value of zero corresponds to no damping, so you'll see a lot of oscillations showing up in this case: once it bounces back, we'll see some oscillations happening. As we increase that towards one, we reach what's called critical damping when the value is exactly one, and this is where you have just enough damping to avoid any oscillation as it's coming back to its rest position after being stretched. So it's just enough to give you a springy look, but without any bouncing afterwards; you'll see here that once it gets back to its rest position, we're not getting any more oscillations after that point. As you increase it past one, you just end up with an over-damped spring, and increasing it further gives you a progressively more damped effect.

One thing to keep in mind is that this is really just like any of the other constraint types that Bullet supports, so during the simulation the solver is also going to output the usual attributes that we're familiar with, like the force, torque, distance, and angle attributes, which give you some information about the state of the constraint in the simulation, such as how much force the solver was applying in order to enforce that constraint. And of course we can use this to break constraints or do whatever we want during the simulation.
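The damping ratio scale can be sketched the same way. The oscillator below is an illustration, not Bullet's solver, but it shows the three regimes described above: zero damping oscillates indefinitely, a ratio of one (critical damping) settles with no overshoot, and values past one just settle more sluggishly:

```python
import math

def oscillator(damping_ratio, frequency=1.0, x0=1.0, dt=1e-3, steps=4000):
    """Unit-mass damped oscillator parameterized by (frequency, damping ratio)."""
    omega = 2.0 * math.pi * frequency
    x, v = x0, 0.0
    xs = []
    for _ in range(steps):
        a = -omega * omega * x - 2.0 * damping_ratio * omega * v
        v += a * dt          # semi-implicit Euler step
        x += v * dt
        xs.append(x)
    return xs

def zero_crossings(xs):
    """How many times the spring passed through its rest position."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)

undamped   = oscillator(0.0)  # keeps oscillating
critical   = oscillator(1.0)  # settles, no overshoot
overdamped = oscillator(2.0)  # settles even more slowly, no overshoot
```

Counting rest-position crossings makes the difference obvious: the undamped spring crosses many times over the four simulated seconds, while the critically and over-damped springs never do.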
In this case I'll just make the stiffness a little bit higher for this example and reset the damping back to one, so that if I turn on this SOP solver here, which is just operating on the constraint geometry subdata for the constraint, we have a simple wrangle that checks the torque value that the solver output, and if it's greater than our tolerance, which in this case is 70, we just delete that constraint. And so we can get a nice breaking effect as the simulation goes on and some impacts occur.

As I mentioned before, there are also some other differences in how the soft constraint is solved by Bullet compared to the spring constraint, aside from the differences we were just talking about with the stiffness and damping ratio versus the parameters on the spring constraint. We'll take a look at this here in a more isolated case to make it a bit more clear what's going on. In this case we have something that's kind of a bad scenario for the old spring constraint: objects that have a very small mass, relatively few substeps on the solver, and a spring that's relatively strong but doesn't have any damping whatsoever, so we're going to get a fairly highly oscillating spring. If I press play, what we'll see is that there's actually a fair bit of energy loss, even though we don't have any damping on, and this is something that you'll see with the soft constraint if you have relatively few substeps. If I decrease this to just one substep, we'll see that the energy loss is fairly rapid, but as I increase the number of substeps, it behaves a bit more like what we'd expect with no damping enabled. And this is really the trade-off with the soft constraint: the method that the Bullet solver uses to solve these soft constraint springs is different from the normal spring constraint, and it's guaranteed to be stable.
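As a rough illustration of that stability trade-off (this is generic explicit-versus-implicit integration, not Bullet's actual constraint solver): stepping a stiff, undamped spring explicitly gains energy and blows up at large time steps, while an implicit (backward Euler) step stays bounded but bleeds energy, which shows up as exactly the kind of artificial damping being described here:

```python
# Undamped spring, m * x'' = -k * x, stepped two ways at a deliberately
# coarse time step (small mass, strong spring, few substeps).
def explicit_euler(k, m, dt, steps, x0=1.0):
    x, v = x0, 0.0
    for _ in range(steps):
        # new x uses old v, new v uses old x (tuple RHS evaluated first)
        x, v = x + dt * v, v + dt * (-k * x / m)
    return x

def backward_euler(k, m, dt, steps, x0=1.0):
    x, v = x0, 0.0
    denom = 1.0 + dt * dt * k / m
    for _ in range(steps):
        v = (v - dt * k * x / m) / denom   # closed-form implicit solve
        x = x + dt * v
    return x

k, m, dt = 1000.0, 0.1, 0.01
ex = explicit_euler(k, m, dt, 200)   # energy grows without bound
im = backward_euler(k, m, dt, 200)   # bounded, but decays with no damping set
```

The implicit scheme never blows up no matter how stiff the spring is, at the cost of numerical energy loss that shrinks as the time step (substep count) goes up, mirroring the behavior described for the soft constraint.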
The soft constraint will never blow up on you, unlike the spring constraint, which could very easily blow up if you had small masses for your objects or didn't have enough substeps. But the trade-off is that you may see some artificial damping being introduced if the number of substeps isn't very high for the soft constraint, and that's what we were seeing here: as we increased the number of substeps, we saw less and less of this artificial, unexpected damping being introduced by the solver.

What that really means is that if you have cases where you want springs that oscillate quite a bit and have very little damping, you might still want to use the traditional spring constraint, because it'll let you get that look without having to crank the number of substeps up as high. But otherwise, in most cases, especially for destruction-type simulations where you want some bending and then breaking, you're already going to be using some damping anyway, and so if there's a little bit of extra artificial damping that gets introduced by the solver, that's a perfectly good trade-off, because you're going to get much more stable results with the soft constraint. It's also quite a bit easier to tune, especially when you have objects with varying masses. So the recommendation would be to use the soft constraint in almost every situation, unless you find yourself needing very highly oscillating springs and you want to avoid having to increase the number of substeps.

In this setup here I've got a pretty straightforward glue setup: I've fractured this wall into a bunch of pieces and set up glue constraints between all of them, and then I've got a collider coming through and breaking some of those pieces off. But what we'll see here is that the impact doesn't end up propagating very far; you just get a few pieces around the impact location breaking off, but it's not really travelling much farther in the glue constraint network.
It's not breaking off pieces that are further away, which is what we might want if that's the effect we're looking for. So let's start off with a quick review of how the glue parameters work and are handled by the solver, before talking about the new changes.

As the simulation goes on, what the solver will do is take the impacts that occur, take the impulse from the collision response, and record that as a primitive attribute on the glue constraints. In this case, if I look at the constraints as the simulation goes on, this impact attribute will get some values added to it by the solver based on how strong the collisions were. If that impact value gets to be larger than the glue strength, the solver will break the constraint and the piece will end up moving on its own.

If the impact value wasn't high enough to break the glue, that's where the half-life starts to come in, which just decays the impact value over time. What this is really for is so that if you have a bunch of fairly small impacts repeatedly occurring over time, it won't necessarily cause the impact value to accumulate high enough to break the glue, in case you don't want those small, repeated collisions to break any glue bonds.

The other thing that happens is that the glue impulse needs to spread throughout the glue network, so that other pieces farther away from the impact can be broken off, and that's where the propagation rate comes in. What the solver will do is a number of iterations of diffusing, or spreading, that glue impulse throughout the network, and it does that according to this propagation rate. If the value is zero, an impact against one of the objects won't spread at all to any of its neighbors, and if it's set to one, it'll be spread evenly to all of its neighbors, so it'll diffuse smoothly throughout the glue network while still conserving the total amount of impulse that was added.
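The per-bond bookkeeping described above can be sketched like this. The function name and signature are mine for illustration; the real update happens inside the Bullet solver against the glue network's attributes:

```python
def step_glue_bond(impact, new_impulse, strength, half_life, dt):
    """Advance one glue constraint's impact value by one timestep.

    Returns (new_impact, broken): the half-life exponentially decays the
    accumulated impact, new impulses add to it, and the bond breaks once
    the total exceeds the glue strength.
    """
    if half_life > 0.0:
        impact *= 0.5 ** (dt / half_life)   # exponential decay
    impact += new_impulse
    return impact, impact > strength

# Small repeated impulses, decayed each frame, never reach the strength:
impact, broken = 0.0, False
for _ in range(100):
    impact, broken = step_glue_bond(impact, 5.0, strength=100.0,
                                    half_life=0.1, dt=1.0 / 24.0)

# ...but a single big hit exceeds the strength immediately:
_, big_broken = step_glue_bond(0.0, 150.0, 100.0, 0.1, 1.0 / 24.0)
```

With the decay in place, the repeated small impulses settle at an equilibrium well below the strength, which is exactly the "don't break on small repeated collisions" behavior the half-life exists for.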
The solver will then do a certain number of rounds of this propagation, and this is really what controls how far and how rapidly the impacts spread throughout the glue network.

Previously, in Houdini 16.5, the number of rounds of propagation was a global property of the entire constraint network: it just did a fixed number of iterations for all of the constraints in the glue network, and if you wanted to control this, you had to do it with a detail attribute on the constraint network, because it applied to all of the constraints globally. We can see that effect here: if I turn this on in my Attribute Create SOP, setting up this detail attribute called propagation iterations, and set it to ten, say, instead of the default of one, then if I look at the simulation, we'll see that the impact travels a lot farther throughout the glue network and more pieces are broken off.

There are a few changes to this that we've made in Houdini 17. One, which is a little bit harder to demonstrate quickly here, is that there were a lot of performance improvements to the glue propagation stage in the solver, so there's quite a bit less overhead now to increasing the number of propagation iterations, and you can increase it a lot higher without as much of a performance penalty. More importantly, it's been changed to be a primitive property rather than a property of the entire constraint network, so now it shows up on the glue constraint as the number of propagation iterations. This is kind of like what you'd see on the other constraint types that Bullet supports, which have an option to override the number of constraint solver iterations (the constraint types that aren't glue and are more like traditional constraints, like springs), and this behaves the same way: a value of negative one means just use the default.
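The propagation stage can be sketched as a conserving diffusion over the constraint graph. This is again an illustration of the behavior described above, not the solver's actual code; the dict-based graph layout is a stand-in for the glue network:

```python
def propagate(impulse, neighbors, rate, iterations):
    """Diffuse impact impulse through a glue network.

    impulse:   dict piece -> accumulated impulse
    neighbors: dict piece -> list of glued neighbor pieces
    rate:      0 keeps impulse in place, 1 shares it evenly with neighbors
    """
    for _ in range(iterations):
        nxt = {p: 0.0 for p in impulse}
        for p, val in impulse.items():
            share = val * rate
            nxt[p] += val - share
            if neighbors[p]:
                for q in neighbors[p]:
                    nxt[q] += share / len(neighbors[p])
            else:
                nxt[p] += share   # isolated piece keeps everything
        impulse = nxt
    return impulse

# A chain of five glued pieces, hit hard on piece 0:
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
start = {0: 100.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}
one_iter = propagate(start, nbrs, rate=0.5, iterations=1)
many_iters = propagate(start, nbrs, rate=0.5, iterations=100)
```

One iteration only reaches the immediate neighbor, while many iterations carry a substantial share of the impulse all the way down the chain, and the total is conserved in both cases, which is why raising the iteration count lets distant pieces break off.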
In this case, that means use the detail attribute if it exists, for backwards compatibility, or just use the default value of one. But if we want to override that ourselves, we can set it explicitly. If I change it back to one, we'll see it's like what we were seeing before, where it doesn't propagate very far, and then if I increase it to, say, one hundred, the impact will propagate quite a bit further and break off a lot more pieces.

The other neat thing this lets us do, aside from making this option a lot more discoverable and easier to set up, is that we can now vary it per constraint. So I'll reset this back to 1, and then if I go back to SOPs and look at my constraint network, I've got this Attribute Create set up to create the propagation iterations primitive attribute, which overrides the propagation iterations parameter on the glue constraint. If I, for example, select a group here and set the propagation iterations attribute to a value of one hundred for the constraints on the right side, and just the default of one on the left side, we can actually get some pretty interesting effects from doing that. If I press play, we'll see that we've now got kind of an interesting shape to how the impacts propagate through the glue network, so this can be a handy tool, aside from just changing the strength values in certain parts of your glue network, for getting some interesting breaking effects.

Let's take a look now at some options for setting up proxy geometry for concave objects in Bullet, which is a problem that tends to show up a lot in RBD simulations. There are a few different ways to go about this. One option that you always have with concave objects is just to pick the concave geometry representation on the Bullet data tab, but there are a few downsides to this. Probably the main one is performance: it works pretty well if you have maybe a couple of colliders that you're trying to
bring in, but if you're trying to do this for a lot of fractured pieces that have concave bits to them, it might not scale very well; with Bullet, you're generally much better off using things like convex hulls, spheres, boxes, and so on, which it's able to deal with much more efficiently. The other problem that can come up with the concave representation is that Bullet basically just treats it as a whole bunch of polygons: there isn't really any notion of inside or outside. This can cause some trouble if you have fast-moving objects. Let's say that from one substep to the next an object goes from outside to inside the collider; because Bullet lets you collide against both sides of the triangles, it kind of treats it as though it was just an open surface that you can collide with on either side, so you can end up having objects getting stuck on the wrong side and not getting pushed out where you intended them to be. So it tends to be a lot better to use things like convex hulls or spheres, where Bullet has a very well-defined inside and outside, so that it can resolve any penetrations that happen from one substep to the next.

One approach you have for setting up proxy geometry, which I wanted to show quickly and which is something you could already do in 16.5, is the VDB to Spheres approach. In this case I'm just going to take this torus and shatter it into a few pieces; each of these pieces is still somewhat concave. Then we have this option on the Rigid Bodies shelf to set up an RBD sphere proxy, so I'll just select my geometry. What this did for us, if we take a look back at the setup, is that after our original source geometry that had been fractured into a few pieces, it set up a for-each loop going through each of those pieces: it uses VDB from Polygons to convert the geometry into a volume, and then it uses the VDB to Spheres SOP to fill up that volume with a whole bunch of possibly overlapping spheres. And so this is
what we're going to use in Bullet as our proxy geometry. The nice thing here is that spheres are really efficient for Bullet to deal with: it's very easy to do collision intersection for spheres, they're very lightweight, and so this can work pretty well as a proxy representation; it can be a lot faster than concave objects or even convex hulls. Then we have the rest of our for-each loop that sets up the proxy geometry for the rest of the pieces.

If we look at the geometry here, we still have a name attribute just like before, so we have piece0 through piece9, but instead of each piece containing a bunch of polygons, each piece now just contains a whole bunch of spheres. If you look at our exploded view, we've still got the same number of pieces that we had before; it's just that each piece now consists of a bunch of spheres.

So then, if we send that into DOPs, the only thing we really need to be careful about setting up on the DOP side of things is our collision shapes. Using the default settings, so just the default convex hull, what Bullet will do is take the geometry of the entire piece, in this case a whole bunch of spheres, and compute the convex hull of all of that. We'll see here that this kind of undoes all of the clever work we just did to set up a nice proxy representation, and it's still giving us an ugly, overly large collision shape. What we can do instead is turn on the option to create a convex hull per set of connected primitives. What this will do is, for each piece, look at all of the sets of connected primitives; in this case, because we just have a whole bunch of spheres, none of them are connected to each other, but in the case of polygons it would be each set of connected polygons. So it goes through each set of connected primitives, in this case every single sphere, creates a collision shape for each of those, so just a bunch of sphere collision shapes, and then assembles all of those into a compound collision shape that it'll use for the entire piece.
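The "per set of connected primitives" grouping boils down to finding connected components among primitives that share points. A minimal sketch with a union-find; the tuple-of-point-numbers layout is a stand-in for real Houdini geometry, which does this internally:

```python
def connected_prim_sets(prims):
    """Group primitives (tuples of point numbers) that share any point."""
    parent = list(range(len(prims)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    # Primitives sharing a point get unioned into one component.
    point_to_prim = {}
    for i, prim in enumerate(prims):
        for pt in prim:
            if pt in point_to_prim:
                parent[find(i)] = find(point_to_prim[pt])
            else:
                point_to_prim[pt] = i

    sets = {}
    for i in range(len(prims)):
        sets.setdefault(find(i), []).append(i)
    return list(sets.values())

# Two triangles sharing an edge, plus one isolated triangle:
prims = [(0, 1, 2), (1, 2, 3), (10, 11, 12)]
groups = connected_prim_sets(prims)
```

Here the two touching triangles collapse into one group and the isolated triangle stays on its own, so a compound shape would get one hull for the pair and one for the loner, rather than a single hull over everything.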
So if I turn that on and turn off the display geometry, we'll see that it's now basically directly using our spheres as a compound collision shape for each of those pieces. I'll just add in a ground plane to make this a little bit more interesting, go back into DOPs (I was looking at the wrong geometry), and hit play, and we'll see our simulation working nicely with the proxy geometry.

Afterwards, what we need to be able to do, of course, is transfer the motion of those proxy pieces back onto the original pieces that we built our sphere representation from. What we can do here is the typical workflow of using the DOP Import SOP, set to create points to represent objects. What this does is, for each of the pieces in our simulation, give us a point that has its position, orientation, and so on, its transform, from the simulation, and then we can use the Transform Pieces SOP to take our original pieces and apply the transform from the simulation to each of those pieces. So now we can just apply that motion onto our high-res geometry.

This works pretty well: using spheres is very efficient, so this can be a well-performing way of dealing with concave objects, but it does have a couple of downsides. One is that if your original geometry had flat surfaces and hard edges, that kind of gets lost if you're using a proxy representation made up of a bunch of spheres, because it rounds out the shape of the object; you can see that in this example, where some objects roll a bit more than you might expect them to. So it doesn't necessarily hold up as well for foreground pieces, but it can be a pretty effective technique for objects that are more in the background. The other thing to be aware of is that there are a couple of things you always need to tweak with this process.
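That DOP Import / Transform Pieces step amounts to rotating each piece's rest-pose points by its proxy point's orientation quaternion and translating them to its position. Here's a minimal sketch of just that transform, ignoring the pivot and scale handling that the real SOP also does, and assuming an (x, y, z, w) quaternion component order:

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    qx, qy, qz, qw = q
    # t = 2 * cross(q.xyz, v)
    tx = 2.0 * (qy * v[2] - qz * v[1])
    ty = 2.0 * (qz * v[0] - qx * v[2])
    tz = 2.0 * (qx * v[1] - qy * v[0])
    # v' = v + w * t + cross(q.xyz, t)
    return (
        v[0] + qw * tx + (qy * tz - qz * ty),
        v[1] + qw * ty + (qz * tx - qx * tz),
        v[2] + qw * tz + (qx * ty - qy * tx),
    )

def transform_piece(rest_points, orient, position):
    """Apply a simulated rigid transform to a piece's rest-pose points."""
    return [tuple(c + p for c, p in zip(quat_rotate(orient, pt), position))
            for pt in rest_points]

# 90-degree rotation about Z, then a translation of (5, 0, 0):
q = (0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
moved = transform_piece([(1.0, 0.0, 0.0)], q, (5.0, 0.0, 0.0))
```

The rest point (1, 0, 0) rotates onto the Y axis and then shifts by the piece's simulated position, which is all the per-piece motion transfer really is: one rigid transform per name match.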
One is that you're converting the geometry into a volume, so if you have more complex geometry with small features, you might need to tweak the voxel size so that you don't lose any parts of the geometry you care about, if there are really thin parts, for example the space between fingers on a character, things like that. You also need to tweak some of the VDB to Spheres settings to control how many spheres you want, what the minimum and maximum radius is, and so on. So there's a little bit of tweaking that you have to do, but this can be a fairly effective technique.

As an alternative to the sphere packing approach, we've added a new SOP in Houdini 17 that gives you another option for setting up the proxy geometry for concave shapes, and that's the Convex Decomposition SOP. The idea with this SOP is to take the geometry and cut it up into a bunch of different segments, so that each of those segments is convex, or close enough to convex given some tolerance, and then we can use that as our proxy geometry as a compound shape. So instead of having our proxy geometry made up of a bunch of spheres that approximate the shape of the original concave piece, we'll instead take, say, a dozen convex hulls that together, as a compound shape, give us a good proxy representation.

Let's take a look at what this looks like on this blob, which is a nice example to work with since it's pretty concave. The main parameter here is the max concavity, which is basically our tolerance for how close each of these segments has to be to being perfectly convex. You probably want to leave this somewhat high so that you don't get too many pieces; if you have a really bumpy piece of geometry, you might end up needing a ton of convex hulls to make a perfect representation. In this case, if I turn on the visualizer button, we'll see each of the pieces that got produced, and if I set the tolerance really high, nothing really happens, because we're
allowing there to be a fairly coarse representation, and this is basically the same result as just taking the convex hull of the entire geometry. But as we reduce it, we'll start seeing more convex hulls being produced that are a better fit to the original shape of the geometry. So this is really the main parameter you need to deal with: controlling how accurately you want the proxy geometry to match the original geometry, without producing way too many pieces and making the representation inefficient.

Then there are a couple of different options for what geometry gets produced by this SOP. The default behavior is to output the actual convex hulls, so you're basically seeing exactly what Bullet would be using as its representation in DOPs, and there's an attribute, called segment by default, that gets output to identify each of these pieces. The other option you have is to output the original segmented geometry. We can see here, if we do an exploded view, that this is just the original geometry but with all of the cuts that have been made by the Convex Decomposition SOP, so it's been sliced up into a bunch of different pieces. Just like before, we have a segment attribute, but we also have a group that's added for the interior faces produced by any of those cuts. This option can be useful for visualizing what happened on the original geometry, or for using this as sort of a generic cutting tool, but in most cases you probably want to output the actual convex hulls, because you can send those directly into Bullet; it's already the same geometry representation that the solver would be computing anyway.

There are a couple of other useful options here. One is the piece attribute option: this SOP actually has the option to run over a whole bunch of existing pieces. Over here I've taken the squab and pre-fractured it, so I've already got a bunch of different pieces, some of which are concave and should be decomposed a bit further.
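To give "max concavity" a concrete meaning, here's a 2D analogue (the actual SOP works on 3D geometry, and its exact metric may differ): measure how much area the shape's convex hull encloses that the shape itself doesn't. A segment whose score is below the tolerance counts as "convex enough" and isn't cut further:

```python
def shoelace_area(poly):
    """Area of a simple 2D polygon given as a list of (x, y) vertices."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def concavity(poly):
    """0 for a convex shape; grows as the hull encloses more empty space."""
    hull_area = shoelace_area(convex_hull(poly))
    return (hull_area - shoelace_area(poly)) / hull_area

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```

A square scores zero, since it is its own convex hull, while an L-shape scores noticeably higher because the hull bridges across the notch, so a low enough tolerance would trigger a cut through the L but leave the square alone.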
If I then turn on the piece attribute option, the SOP will run over each of those pieces individually and decompose them if it needs to, based on the concavity threshold. Here I've basically got the convex hulls of my original pieces, and as I reduce the tolerance, some of the pieces that were concave get divided further into a few different convex segments. If I look at the exploded view here, exploding by name, we'll see that just like before with VDB to Spheres, we still have the same number of pieces: with the name attribute, before we had 24 different pieces, and afterwards we still have 24 different pieces; it's just that each of those pieces now consists of a few different disconnected segments, one for each of those convex hulls. The segment attribute lets us identify each of those segments individually, so if I switch my exploded view to pull them apart by the segment attribute, we can see each of the segments individually, but we'd still see 24 pieces just like before, with each piece made up of a bunch of different disconnected convex hulls.

There are a couple of other, more advanced options to quickly mention. One is the treat-as-solid option. To demonstrate this, I'll make my geometry open: I'll take the squab here and delete one of these pieces, with this option off (it's easier to see with the segmented geometry). If I have some hollow, open geometry like this, what the SOP will do is basically treat it as though it was a hollow shell: you'll see here that it didn't fill in the interior when it made any of its cuts, and if I were to decrease the concavity, it would continue cutting this as a hollow shell, so it keeps trying to decompose further until these little shells are close enough to being convex. But what I might want to
do is have the option to ignore any small holes in the geometry, and that's what this option will do: it implicitly does a poly cap on any holes in the geometry. So if I turn that on, we'll see that it basically behaves as though the hole wasn't there: we're still filling in the interior, and it's not trying to further decompose any of these solid chunks.

And then finally, the merge nearby segments option lets you deal with the case where your original geometry already consists of a couple of different disconnected pieces. I'll show that here by setting up a somewhat contrived example. In this case I have my original squab, and then I also have, say, a box that's adjacent to it; I'll just make this box a little bit smaller and bring it up like that. If I were to send this into the Convex Decomposition SOP, what it normally does is just look at all of the individual segments that already existed in the geometry; if any of those are concave, it'll start to decompose them further, but if they're already convex within the threshold, it won't really do anything. So we'll see here that even if I set my tolerance super high, it didn't actually merge those two segments together or anything like that; it just decided both of them were already convex, so it doesn't need to do anything further. What the merge nearby segments option does is allow the SOP to do more searching to figure out whether it could come up with a larger convex hull that encloses both of those segments and merges them together, while still satisfying the concavity threshold. If I turn that on, now we just have a single convex hull that wraps around both of the segments that we had in our original geometry, and as I decrease the tolerance, it may or may not decide to merge those together, depending on what it thinks it's able to do given the concavity tolerance. So this can be a handy option if you have your input geometry
already consisting of a bunch of disconnected pieces, but you want to give the SOP the option to give you a larger proxy shape that represents the geometry as a whole. And then finally, just like what we had with the torus, we also have a shelf tool that will set all of this up for us, so I'll take a quick look at that. Going back to the original squab, if I use the RBD Convex Proxy shelf tool, this does the equivalent of what the RBD Sphere Proxy shelf tool did for us with the sphere packing options. If we jump back to SOPs, we'll see that what it did was take our original source geometry, run it through the Convex Decomposition SOP with the Piece Attribute option turned on - so you could have already pre-fractured your geometry - and then send that into DOPs. Just like before, we turn on the option to create a compound collision shape, so we're using the convex hulls that we've already computed with the Convex Decomposition SOP instead of just wrapping it all in one big convex hull. And then back in SOPs after the simulation, we're doing the same DOP import as points and Transform Pieces setup as before. So the Convex Decomposition SOP can be a pretty handy tool if you don't want to use the sphere packing approach and you want to very efficiently deal with any pieces that happen to be concave. With the Piece Attribute mode it's really easy to just run it over all of your fractured geometry afterwards, since it's very fast at detecting pieces that are already convex. So if you just happen to have a relatively small number of pieces that end up being concave after your fracturing, it's very efficient at identifying those pieces and dividing them a bit further if it needs to. So next up, we'll start looking at a few SOPs that are new or improved and are useful utilities for common things that you end up doing a lot with rigid bodies. First up is Attribute Promote.
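To make the concavity threshold concrete, here's a minimal 2-D sketch of the kind of test the SOP performs per piece. This is just an illustration in plain Python, not Houdini's actual algorithm (which works on 3-D meshes), and the 5% tolerance is an arbitrary example value:

```python
# Conceptual 2-D sketch of the per-piece convexity test in the
# Convex Decomposition SOP (illustrative only - Houdini works on 3-D meshes).
# A piece is "convex enough" if its area is close to its convex hull's area.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula."""
    return 0.5 * abs(sum(poly[i][0]*poly[(i+1) % len(poly)][1]
                         - poly[(i+1) % len(poly)][0]*poly[i][1]
                         for i in range(len(poly))))

def is_convex_enough(poly, tolerance=0.05):
    """True if the piece's area is within `tolerance` of its hull's area."""
    hull = convex_hull(poly)
    return polygon_area(hull) - polygon_area(poly) <= tolerance * polygon_area(hull)

# A square is exactly convex; an L-shape is not, so it would be cut further.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(is_convex_enough(square))   # True
print(is_convex_enough(l_shape))  # False
```

Raising the tolerance in this sketch plays the same role as loosening the concavity threshold on the SOP: more pieces pass as "convex enough" and don't get decomposed further.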
Of course it's not a new SOP, but it's got a pretty useful new feature that's very handy for a lot of things when you're dealing with fractured geometry. As an example, here I've got some geometry that I've fractured, with some small pieces and some large pieces, and one thing that you might want to do is find pieces that have a small or a large surface area, so that you can maybe delete tiny pieces or something like that. What I've got so far is just a Measure SOP to compute an area primitive attribute, and then a simple color visualization for the area, so we can see the pieces with a small surface area being colored a little bit differently. If I want to sum up the total surface area of every piece, what I probably would have done before was a for-each loop: a for-each named primitive loop running over each of our pieces, and inside that loop an Attribute Promote to sum up the area attribute from all the primitives in that piece, promoting it up to detail - changing this to Sum - so now we've got a detail attribute with the total area for the piece, and then I'd have to promote it back down from detail to primitive (the promotion method doesn't matter in that direction, but I could just make it First Match). So now if I look at that, as I increase the range on the visualization, we've summed up the area for each piece, and we can see the pieces that have a large surface area and a small surface area. This is okay, but it does take a fair bit of work - we had to set up a loop and use a few different Attribute Promotes to do something that's a fairly common task. So there's a very handy new option on Attribute Promote in Houdini 17, which is the option to use a piece attribute. What this will do is, for each of the pieces - say I'm promoting from point to primitive based on a piece attribute - it'll take a look at all of the
primitives that share the same value of the name attribute, grab all of their referenced points, do the promotion operation - summing up the attribute, averaging it, or whatever - and then write the result back out to all of the elements of the piece at the primitive level. So let's take a look at this here. In our case we're actually trying to sum up a primitive attribute over the entire piece, so we'll set the original class and the new class to both be primitive, turn on Piece Attribute (we already have a name attribute set up on our geometry), use the area attribute, and just sum it up. Now you'll see that we've got an area attribute that has been summed up over all the pieces, and if we change our visualization over to that, we get the exact same result. So this is a pretty handy option: it saved us quite a bit of work compared to before, and it ends up being quite a bit faster in practice as well. I'll take a look at a couple of other examples. In this case we'll look at promoting attributes between different classes. Here I've got some fractured geometry with a bunch of polygons on it, and maybe I want to visualize how far away each of the polygons is from the center of its piece. What I can do for that is an Attribute Promote that sums up, or rather averages, the point positions for each of the pieces, and then visualize how far away each primitive is from the average point position in its piece. So I'm going to take the point position - my original class is point - and promote that up to primitive, but per piece, based on my name attribute, with Average as the method, and then I'll change the output attribute to, say, average_P. Then with a simple wrangle running over primitives and looking at this average position, I'll set up an attribute called distance, which is just the distance between @P - which for primitives is the average position within that polygon - and the average position of the entire piece.
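The piece-attribute promotion described above can be sketched in a few lines of plain Python. This is a conceptual model only, not the Houdini API, and the piece names and area values are made up:

```python
# Conceptual sketch of Attribute Promote's piece-attribute mode:
# group elements by a piece key, reduce an attribute over each group,
# then write the reduced value back onto every element of the group.
# (Illustrative only - not the actual Houdini API; data is made up.)

from collections import defaultdict

def promote_by_piece(names, values, method="sum"):
    """names[i] is the piece of element i, values[i] its attribute value.
    Returns a list where each element carries its piece's reduced value."""
    groups = defaultdict(list)
    for name, value in zip(names, values):
        groups[name].append(value)
    reduce_fn = {"sum": sum, "average": lambda v: sum(v) / len(v),
                 "max": max, "min": min}[method]
    reduced = {name: reduce_fn(vals) for name, vals in groups.items()}
    return [reduced[name] for name in names]

# Four primitives in two pieces, with per-primitive areas:
names = ["piece0", "piece0", "piece1", "piece1"]
areas = [1.0, 2.0, 0.5, 0.25]
print(promote_by_piece(names, areas, "sum"))      # [3.0, 3.0, 0.75, 0.75]
print(promote_by_piece(names, areas, "average"))  # [1.5, 1.5, 0.375, 0.375]
```

The key point is the single group-reduce-scatter pass: that's what replaces the old for-each loop with two Attribute Promotes, and it's also why the new mode ends up faster in practice.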
So now I've got this distance attribute, and I'll just set up a quick visualizer. This is actually a handy new feature in Houdini 17: from the info window of a SOP, if you click on one of these attributes, it'll set up a visualizer right there for you, so you can avoid a lot of extra work setting that up. So now you'll see we're visualizing primitives that are very close to the center of their piece versus primitives that are farther away from the average point position. Finally, let's do one more example, working at the point level. Here I've taken a whole bunch of points and used the Cluster Points SOP to divide them into a bunch of different clusters, and let's say I want to do something interesting based on how far each point is from the center of its cluster. In this case I can just promote the point position per cluster to figure out the average position of that cluster - so I'm promoting from point to point, but based on the piece attribute cluster, averaging out those point positions, and writing that out to a new attribute called cluster_center. And then with a simple wrangle I can scale the points based on their distance from the center, and if I adjust this we can do some interesting effects, like moving points towards the center of their cluster and further out, and so on. So this new option ends up being pretty handy. It's nothing you couldn't do before, but it saves you quite a bit of work in a lot of common scenarios, so it's good to keep it in mind to simplify your work in the future. Related to what we've been talking about with Attribute Promote, one thing that comes up quite a bit is finding the location of the center of a piece. This happens a lot with constraint networks in particular, where you're often trying to generate a point at the center of each piece and then join them all together with polygons to represent the constraints.
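The cluster wrangle step can be sketched like this, with plain Python standing in for VEX. The cluster_center values are assumed to have already been produced by the promote, as in the video:

```python
# Sketch of the "move points towards their cluster center" wrangle:
# each point is linearly interpolated between its own position and the
# precomputed cluster_center attribute. Plain Python standing in for VEX.

def move_towards_center(positions, centers, amount):
    """amount = 0 leaves points alone, amount = 1 collapses each point
    onto its cluster_center; values in between blend linearly."""
    result = []
    for p, c in zip(positions, centers):
        result.append(tuple(pi + amount * (ci - pi) for pi, ci in zip(p, c)))
    return result

# Two points sharing one cluster whose center is the origin:
positions = [(2.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
centers = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(move_towards_center(positions, centers, 0.5))
# [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
```

A negative amount would push points away from their cluster center instead, which is the "further out" variation mentioned above.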
There were a few approaches to do this before. You could definitely use a for-each loop or something like that if you wanted to, but the most common shortcut was taking advantage of the Pack SOP, which has an option to create a packed fragment primitive for each piece, with one point attached to it located at the centroid of the geometry. If you then did a delete-geometry-but-keep-points operation, you'd end up with one point per piece at a fairly reasonable location. The way the Pack SOP computed the centroid, though, was just based on the bounding box of the geometry, so depending on the shape of your object, that might give you something kind of far away from where you'd expect it to be. But generally this was a pretty fast and easy-to-use approach, so it was done quite a bit. The other option, of course, is using what we just showed with Attribute Promote - computing the average point position of the piece - which can be a bit closer to the center. The one downside with that approach is that it's very sensitive to how points are distributed on your original geometry. So if I take this Divide SOP here, which is set to just do a bricker operation on one of the faces, and turn it on, we'll end up with our average being very biased towards that one side of the object, just because there are so many more points there. So this approach is a little bit problematic sometimes. In Houdini 17 we've added a dedicated SOP for this with a lot of useful options, to make this very easy to do and also a lot more accurate: the Extract Centroid SOP. By default it's set to run over each piece, so we'll get a point at the center of each of those pieces; it has a couple of different options for what method to use to compute the center, which I'll look at in a second, and by default it'll output one point per piece. So
one option it has for the method is Bounding Box Center, which matches the Pack SOP exactly, so you can use this to replace that method if you want to. But the better option it has is to compute the actual center of mass of the geometry, and this matches exactly what the Bullet solver does when it computes the center of mass of a piece. This approach is a lot more accurate: it doesn't get thrown off by having a lot more points on one side of the object or the other, since it's computing the actual center of mass, where you'd expect it to be. So this generally gives you a point that's closer to where you'd want it for things like constraints, and it's a little more reliable than the bounding box if you're expecting the point to actually be towards the center of the geometry, even if you have kind of an odd shape. There are a couple of other options to mention here. By default you can output points, and you have the option to transfer attributes from the source pieces onto those points. The other option is to output the original geometry but add an attribute with a particular name - centroid is the default - which is handy for just computing the attribute and then letting you do stuff with it on the original geometry. You also have the option, instead of running over pieces, to run over the entire geometry, in which case the attribute mode will add a detail attribute with the location of the center, and if you're outputting points, it'll give you one single point with the position of the center of the geometry. So this is a pretty handy utility for reliably getting points at the center of each piece; it also simplifies things, since it handles attribute transfer and the like pretty nicely, and you can generate the center of mass location on your original geometry if you don't need points to be created.
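Here's a small sketch of the bias problem discussed above: a bounding-box center ignores point distribution, while a plain point average gets dragged towards densely sampled areas. The data is made up, and the Extract Centroid SOP's center-of-mass method avoids both problems:

```python
# Sketch of why the average point position is biased by point distribution,
# while a bounding-box center is not (and a true center of mass, which the
# Extract Centroid SOP can compute, would not be either). Data is made up.

def bbox_center(points):
    xs, ys, zs = zip(*points)
    return tuple((min(c) + max(c)) / 2 for c in (xs, ys, zs))

def point_average(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

# A unit cube sampled by its 8 corners, plus 100 extra points piled onto
# one face (like the bricker operation in the video):
corners = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
dense_face = [(1.0, i / 100.0, 0.5) for i in range(101)]
points = corners + dense_face

print(bbox_center(points))  # (0.5, 0.5, 0.5) - unaffected by the pile-up
avg = point_average(points)
print(avg[0])               # ~0.96: heavily biased towards the x = 1 face
```

The bounding-box center has the opposite failure mode, though: a thin spike sticking out of a piece drags the whole box, which is why the center-of-mass method is the more reliable default for constraint anchor points.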
Next up we'll look at the Extract Transform SOP. This is actually a SOP-based version of the Extract Transform object, which you may or may not have used before. The idea with this SOP is to make it easy to extract a rigid transform from geometry that just has changing positions and appears to be deforming, and we'll find that it ends up being a pretty handy utility for setting up animated colliders for Bullet and things like that. In this example I've taken some fractured geometry and used a couple of Transform SOPs to apply some rigid transforms to the geometry - something like this - and we'll notice that all we have is a bunch of changing point positions. This is like getting a geometry sequence from some external application and trying to bring it in as your collision geometry: you just have a bunch of changing point positions over time, even though those positions are actually just describing rigid motion. If we wanted to bring this into Bullet directly as is, we'd basically have to use deforming static objects rather than animated static objects. If we tried to use animated static objects, we'd see that the transform intrinsic on these packed primitives doesn't change over time. This is because the packed primitive's transform isn't changing; it's just the input geometry that's getting packed that changes, so there's deforming geometry inside of the packed primitive. So if we sent this to Bullet set up as animated static objects, we'd see that nothing really happens, because Bullet can't pick up any motion - the objects just activate when I tell them to activate, but they don't get any animation. If instead I changed this to deforming, that would work, but the downside is that Bullet would be recomputing collision geometry on every frame, which isn't very optimal if I was dealing with some heavier geometry. So what we want to
be able to do is extract that rigid motion from the geometry that just has changing point positions, and apply it to our packed primitives so that they have an animated transform. So let's take a look at that here. In this case we're going to use our new SOP, Extract Transform, which takes two inputs - one is the reference geometry and one is the target geometry, which need to have the same number of points - and it tries to figure out the best-fit rigid transform that maps the points from one onto the points of the other. So I'll take the geometry from the first frame, just time-shifted, as my first input, and use the current geometry at my current frame as the second input. Extract Transform has an option to run per piece or over the entire geometry, and it'll find the best-fit translation and rotation to transform the points from the reference geometry to the target geometry. The output from this is actually just a bunch of points, and the format will probably look familiar from earlier, when we were using the DOP Import SOP's Create Points To Represent Objects mode, which gives the same sort of output: points with P, orient, and pivot attributes describing the transformation, along with a name attribute, in this case, because we were extracting per piece. What we can do with this is feed it into Transform Pieces and apply that transform to any geometry we want. So I'm going to take the geometry from the first frame, run it through a Pack SOP so I now have ten packed fragment primitives, and then if I apply the transform from those points through Transform Pieces, we'll see we've got the motion we want. In particular, the packed primitives' contents aren't changing, because we're just building this from the first frame's geometry, but if you look at the transform intrinsic, we're now getting the actual rotation in the packed primitive's transform.
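Conceptually, Transform Pieces applies something like the following per piece, using the P, orient, and pivot point attributes. This is a simplified model in plain Python; consult the Houdini documentation for the exact instancing transform order:

```python
# Sketch of what Transform Pieces does with Extract Transform's output:
# each piece's points are rotated about the pivot by the orient quaternion
# (x, y, z, w convention) and then moved to P. Simplified model only.

import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (x, y, z, w)."""
    x, y, z, w = q
    u = (x, y, z)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    # v' = v + 2 * u x (u x v + w * v)
    c1 = cross(u, v)
    c1 = tuple(c + w * vi for c, vi in zip(c1, v))
    c2 = cross(u, c1)
    return tuple(vi + 2 * ci for vi, ci in zip(v, c2))

def transform_piece(points, P, orient, pivot):
    out = []
    for pt in points:
        local = tuple(p - pv for p, pv in zip(pt, pivot))
        rotated = quat_rotate(orient, local)
        out.append(tuple(r + t for r, t in zip(rotated, P)))
    return out

# A 90-degree rotation about Y as a quaternion, pivot at the origin,
# then a translation to P = (1, 0, 0):
s = math.sin(math.pi / 4)
orient = (0.0, s, 0.0, math.cos(math.pi / 4))
moved = transform_piece([(1.0, 0.0, 0.0)], P=(1.0, 0.0, 0.0),
                        orient=orient, pivot=(0.0, 0.0, 0.0))
print([round(c, 6) for c in moved[0]])  # [1.0, 0.0, -1.0]
```

The important design point is that the per-piece geometry itself never changes; only the (P, orient, pivot) triple animates, which is exactly what lets Bullet treat the pieces as animated static objects.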
So if I were to send this into Bullet now, we can just use our animated setup with animated static objects, and Bullet will pick up those transforms, apply them to the static objects, and then we can do our transition successfully. So this makes it a lot easier, when you're working with external geometry sources and things like that, to extract rigid transforms and apply them to packed primitives, so that you can take advantage of animated static objects in Bullet rather than deforming static objects, which ends up being a lot more efficient. I'll show another example just to demonstrate the behavior of a few of the other options. I'll use the pig head here, turn off the point display, and then use a Transform SOP to set up the target geometry - I'll move it over a little bit, rotate it, and so on - and then put down an Extract Transform SOP and a Transform Pieces, so we can see the effect when we transform the original reference geometry. In this case, because I just did a rigid transform - translation and rotation - with the Transform SOP, the original geometry, when I apply that extracted transform to it, will exactly match what my target geometry was. But let's say, to make things interesting, I had some different scaling or some distortion on the geometry. Now if I use Extract Transform to extract the best-fit translation and rotation, we'll see that the reference geometry, when I apply that transform, doesn't match up with the target geometry - and it can't in this case, because there's no way to just translate and rotate this geometry to match geometry that was scaled. It does do the best job it can to make things line up as closely as possible. There's one option on this SOP that lets you get a measurement of how successful this was, and that's the Distortion Attribute option: what this will do is give you a measurement of how closely the geometry matches up with the target geometry - how much distortion there was.
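The best-fit problem Extract Transform solves, including the distortion measurement, can be sketched in 2-D, where the least-squares rotation has a closed form. Houdini's 3-D version solves the same least-squares problem (commonly done via something like the Kabsch algorithm); this is only an illustration:

```python
# 2-D sketch of the best-fit rigid transform Extract Transform solves for:
# center both point sets, find the rotation angle in closed form, and
# measure the leftover error as a "distortion" value. (Simplified model.)

import math

def best_fit_rigid(ref, tgt):
    n = len(ref)
    rc = (sum(p[0] for p in ref) / n, sum(p[1] for p in ref) / n)
    tc = (sum(p[0] for p in tgt) / n, sum(p[1] for p in tgt) / n)
    # Closed-form least-squares rotation angle in 2-D:
    num = sum((r[0]-rc[0])*(t[1]-tc[1]) - (r[1]-rc[1])*(t[0]-tc[0])
              for r, t in zip(ref, tgt))
    den = sum((r[0]-rc[0])*(t[0]-tc[0]) + (r[1]-rc[1])*(t[1]-tc[1])
              for r, t in zip(ref, tgt))
    angle = math.atan2(num, den)
    def apply(p):
        x, y = p[0] - rc[0], p[1] - rc[1]
        return (x*math.cos(angle) - y*math.sin(angle) + tc[0],
                x*math.sin(angle) + y*math.cos(angle) + tc[1])
    # Distortion: mean squared distance between fitted and target points.
    distortion = sum((apply(r)[0]-t[0])**2 + (apply(r)[1]-t[1])**2
                     for r, t in zip(ref, tgt)) / n
    return angle, apply, distortion

ref = [(0, 0), (1, 0), (1, 1), (0, 1)]
# Target: the same square rotated 90 degrees and translated - a rigid move,
# so the distortion should come out essentially zero:
tgt = [(2, 0), (2, 1), (1, 1), (1, 0)]
angle, apply, distortion = best_fit_rigid(ref, tgt)
print(round(math.degrees(angle)))  # 90
print(distortion < 1e-9)           # True
```

If you scale or shear the target instead, the fit can no longer be exact and the distortion value grows, which mirrors what the Distortion Attribute option reports on the SOP.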
Larger values mean it wasn't as good of a fit. In this case we have a somewhat small distortion value, because some of these points match up, but some of the ones near the ends don't quite. If I get rid of that scaling, we'll see that the distortion attribute is essentially zero - like 1e-6 - so this can give you a useful measurement of how successful the fit was, or how rigid the deformation actually was. The other option you have is, instead of just extracting the best translation and rotation, to also extract the best-fit translation, rotation, and uniform scale. This will add a pscale attribute to the output points, and if I did have some scaling, this allows the geometry itself to be scaled further to match up. So in this case, where I'm uniformly scaling the target geometry, it's able to find a best-fit uniform scale that also helps the reference geometry line up with it. So now we'll move on to talking about the fracturing tools, starting with Voronoi. In Houdini 17 we've introduced a new 2.0 version of the Voronoi Fracture SOP, which has a number of changes and improvements from the 1.0 version that we'll be going through. First thing to mention up front: I find it really handy to turn on the asset bar in the Asset Manager, under the Configuration tab, setting it to Display Menu of All Definitions. By default this is off; if you turn it on, you can very easily see which version of a node you're using, so you can identify whether you're working with Voronoi Fracture 2.0 or 1.0 and so on. One thing to mention up front that's a little harder to directly demo is that one goal with the new version was to modernize the internals quite a bit. The inside of Voronoi Fracture has been redone substantially, and it's also been made compilable, so now you can use Voronoi Fracture in your compiled blocks, and we've also made a number of other HDAs
like Connect Adjacent Pieces and so on compilable, so a lot more of the destruction tools can be wrapped up in compiled blocks for efficiency. There have been a number of changes in Voronoi 2.0 to the parameter interface that we'll get into shortly. Generally, these changes are either for simplifying some common workflows - we'll see a few useful options for recursive fracturing - or for removing some deprecated options that weren't really a good idea anymore, like the option to create a separate primitive group for every single piece, which is a lot less efficient than the standard modern approach of creating a name attribute to describe the pieces. One very notable change you'll notice is that some of the options for clustering and interior detail that were present on the old version of Voronoi Fracture aren't there in the 2.0 version; they've been moved into separate SOPs. So there's the RBD Interior Detail SOP and the RBD Cluster SOP, which you can see in the tab menu, and we'll get to those shortly. The idea with this change was to make the fracturing process more modular. For example, the interior detail can be applied later on in the SOP chain, after you've already cached out the results of your fracturing, and you can separately handle your low-resolution and high-resolution pieces; you can also use it with other fracturing methods, like boolean, which we'll talk about later on. The same goes for RBD Cluster - the clustering process is now done post-fracturing, so you're not necessarily having to refracture your geometry just because the clustering settings have changed. We'll take a look at those new nodes afterwards, after we've gone through the new settings in Voronoi 2.0. So here I've got the two versions side by side, the 1.0 and 2.0 versions, and probably the first thing you'll notice is that the number of inputs and outputs is different. The first two inputs are exactly the same as before,
so you input the geometry that you're going to fracture, and the second input is the cell points for the Voronoi cells. Just like before, you can create a volume, scatter some points in it, or something like that to generate the points that define the centers of the Voronoi cells, and we get the same fracturing results with 1.0 and 2.0. The third input, though, has disappeared: in the old version of Voronoi Fracture it let you provide a custom depth volume, which was just used for the interior detail settings, and because that's now in the RBD Interior Detail SOP, that option is gone from Voronoi Fracture. The output, of course, is the same as before - you get your fractured geometry - but you'll see that in Voronoi 2.0 we also have this second output, and what this gives us is a constraint network: the connections between all of the adjacent fractured pieces that were produced. This was actually something you could sort of do in the old version - that SOP was built before SOPs were able to have multiple outputs, but if you object-merged the constraint network geometry from inside of Voronoi Fracture, you could get some form of constraints from it. The downside with that old approach was that it was basically just doing a tetrahedralize between all of the centers of the pieces, so if you had concave geometry, it didn't really give you the kind of results you would expect. That method has been improved quite a bit in the new version, so you just get connections between the directly adjacent pieces that were created by the fracturing process. It's also done very efficiently, by using information from the fracturing process itself, so it's not just doing the blind Connect Adjacent Pieces approach of scattering a bunch of points on the geometry to find the connections. So it's a very efficient way to get either a starting point for your constraint geometry, or just some handy information about which pieces are adjacent to each other.
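Conceptually, the constraint output boils down to something like this: one point per piece, plus one two-point primitive per adjacent pair. These are hypothetical data structures for illustration, not Houdini's actual representation:

```python
# Sketch of turning piece-adjacency information into constraint-network
# style geometry: one point per piece (at its centroid) and one two-point
# polygon per adjacent pair. Illustrative structures only.

def build_constraint_network(centroids, adjacency):
    """centroids: {piece_name: (x, y, z)}; adjacency: iterable of
    (name_a, name_b) pairs. Returns (points, edges) where edges index
    into points, mimicking polyline constraint primitives."""
    names = sorted(centroids)
    index = {name: i for i, name in enumerate(names)}
    points = [centroids[name] for name in names]
    # Normalize pair order and deduplicate, so piece0-piece1 and
    # piece1-piece0 become a single constraint:
    edges = sorted({tuple(sorted((index[a], index[b]))) for a, b in adjacency})
    return points, edges

centroids = {"piece0": (0.0, 0.0, 0.0),
             "piece1": (1.0, 0.0, 0.0),
             "piece2": (0.5, 1.0, 0.0)}
adjacency = [("piece0", "piece1"), ("piece1", "piece2"), ("piece1", "piece0")]
points, edges = build_constraint_network(centroids, adjacency)
print(edges)  # [(0, 1), (1, 2)] - duplicate pairs collapse to one constraint
```

Because Voronoi Fracture already knows which cells clipped against which during cutting, it can fill in the adjacency list directly, without the scatter-and-search pass that Connect Adjacent Pieces has to do.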
If we wanted to use these constraints in a sim, all we'd have to do is set up the usual constraint network attributes, like the constraint_name and constraint_type primitive attributes; out of the box it already has a restlength attribute set up for us, and it also has the name attribute and point positions for the pieces that it's attached to. So now let's take a look at some of the options on the new version of Voronoi Fracture. In the Pieces tab we've got a bunch of options for how the pieces are named, and some settings for how their geometry is set up. Just like before, there's the Create Interior Surfaces option, which lets you control whether the inside faces are created during the fracturing process - we'll see that if we turn it off, we just get a hollow shell, so this is a good option if you're trying to fracture 2D geometry or if you want to produce a hollow shell from the fracturing process. A few options were removed from Voronoi 1.0, like the options to fuse the input points or disconnect the inside surface from the outside surface, because these weren't typically used very often, and they can very easily be done with a separate SOP - like a Fuse SOP or a primitive split - before or after the fracturing process, so the interface has been streamlined a little bit there. One thing you'll notice is that the Visualize Pieces option isn't available anymore; instead you have this visualize action button, which we've seen before on the Convex Decomposition SOP and which also appears on a few other SOPs, like Attribute Noise. What this does is add a visualizer for the name attribute, so you can just turn that on, and instead of having to recook your SOP and potentially invalidate downstream geometry by adding a color attribute, you just turn on a visualizer, which doesn't force the node to recook and can potentially avoid re-running some heavy fracturing processes. So by default, this is
what you should probably be using to visualize your pieces, but if you want to, you always have the option of just dropping down a Color SOP afterwards, setting it to primitive and Random From Attribute, to generate random colors based on the name attribute. Speaking of the name attribute, one other useful option: by default, of course, it will just set the name attribute to piece0, piece1, and so on, but you also have the option to append to the existing name attribute if there was one. In this example, at the top I've added an optional switch that I'll turn on, so I've already done one level of fracturing, and then if I send that fractured geometry into another Voronoi Fracture, I have the option to append to the name attribute instead of just blindly overriding it. In this case my input geometry already had a few pieces, like piece0, piece1, and so on, and by default, if I were to feed this into Voronoi Fracture again, it would just overwrite the names, so I'd have piece0 through piece308. But instead, there's now an option to make it easy to append to the name attribute - I could use a dash, for example, as my name prefix, and then I get names like piece1-308 and so on - and this makes it a lot easier to track the hierarchy if you're doing multiple levels of fracturing. There have also been a couple of changes to the normals options; it's now a little more explicit about what's going to happen. By default, computing interior vertex normals is turned on, and then for the exterior normals there are a few options for what happens. By default it will preserve any existing normals - so if you already have normals on your geometry, it won't recompute them - but if you don't have any normals on your geometry, it'll add normals using this cusp angle. In this case we've got vertex normals that have been added to our geometry; actually, the input here did have normals because I had pre-fractured it, but my original input didn't.
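The append-to-name behavior amounts to simple string building. Here's a sketch - the function and separator parameter are made up for illustration, mimicking the piece1-308 style names from the video:

```python
# Sketch of the "append to existing name" behavior when re-fracturing:
# each new sub-piece's name is the parent's name plus a separator and the
# new piece number, so the fracture hierarchy stays readable.
# (Illustrative logic only, not the actual SOP implementation.)

def refracture_names(parent_name, num_subpieces, append=True, sep="-"):
    if append and parent_name:
        return [f"{parent_name}{sep}{i}" for i in range(num_subpieces)]
    # Default behavior: the old name is simply overwritten.
    return [f"piece{i}" for i in range(num_subpieces)]

print(refracture_names("piece1", 3))                # ['piece1-0', 'piece1-1', 'piece1-2']
print(refracture_names("piece1", 3, append=False))  # ['piece0', 'piece1', 'piece2']
```

With appending on, a name like piece1-308 immediately tells you that this fragment came from piece1 of the first fracture level, which is exactly the hierarchy-tracking benefit described above.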
You also still have the option to recompute the normals, if you want to overwrite any existing normals that were being fed in, or to turn off normals entirely - so the options are a little more explicit than before about what will happen if you have pre-existing normals on your geometry. Then in the output attributes section there are a couple of new options. One is this attribute name prefix: for any of the attributes and groups that are being created here, it lets you easily add a custom prefix to any of those names. In this case I've added a prefix of 'custom' to my inside and outside groups, so now I can easily get unique names for all of these groups if I want to do multiple levels of fracturing and get separate groups from each of them. Just like before, you can turn on an attribute to get an integer ID with the piece number - this piece attribute here - as well as an attribute with the point ID of the original cell point that produced the piece. And there's one change here: in the old version of Voronoi Fracture you could get a primitive clip point attribute, which, for any of the interior faces, recorded which of the cell points it was clipped against in order to produce that face. This has been changed a little - now it's the primitive clip piece, so instead of the Voronoi cell point from the piece on the other side, it's actually the piece number. This ends up being kind of important if you have concave shapes and situations where one Voronoi cell point can actually produce multiple different pieces, so this is a little more precise and more reliable to work with. Just like before, there are still the options to generate the interior and exterior groups for the fractured pieces, and we can see that in the spreadsheet here - if we're looking at the exploded view, we just remove the asterisk there, and now you can see our inside and outside groups just like before. And there's one option to make
the behavior a little more explicit about what's going to happen if you're doing multiple levels of fracturing, which is this Merge With Existing Groups option. The default behavior is that if the input geometry already has an inside and outside group, any of the new primitives that are created for the inside surfaces will be added into the existing inside group, without overwriting it, and the outside group will just be left as is. So if you do a sequence of fractures, you'll end up with an outside group that contains the original outside primitives, and your inside group accumulates all of the inside primitives that were created by all of the fractures. This is the default behavior, but if you turn it off, it'll behave as though those groups didn't even exist on the input geometry: the outside group will have all of the input primitives, and the inside group will have all of the new primitives that were generated by the fracturing process. Then, just like before, you have an option to copy any point attributes from the original cell points onto the points or primitives of the fractured pieces, and there's also a new option, now that we have a SOP that outputs constraints, to transfer attributes onto the constraint network geometry as well. And then finally, at the bottom, there are just a few advanced options for the fracturing process, which are basically the same as what you had on the original Voronoi Fracture: either auto-detecting or using an existing triangulation for the Voronoi split (the auto-detect option is usually what you want), a couple of options for how to maintain piece numbering if you wanted to maintain some correspondence with the point numbers on the original cell points, and an offset parameter for the cutting plane.
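The Merge With Existing Groups behavior over two fracture levels can be modeled with Python sets standing in for primitive groups (illustrative only):

```python
# Sketch of the Merge With Existing Groups behavior across two levels of
# fracturing, using Python sets of primitive ids to stand in for Houdini
# primitive groups. (Illustrative only.)

def fracture_groups(inside, outside, new_inside, merge=True):
    """inside/outside: existing groups; new_inside: primitives created by
    this fracture level. Returns the (inside, outside) groups afterwards."""
    if merge:
        # New interior faces accumulate into the existing inside group;
        # the outside group is left as is.
        return inside | new_inside, outside
    # Merging off: behave as if the input groups didn't exist - everything
    # that came in is "outside", everything newly created is "inside".
    return set(new_inside), inside | outside

# Level 1: primitives 0-3 are the original surface; the fracture adds
# interior primitives 4-5.
inside, outside = fracture_groups(set(), {0, 1, 2, 3}, {4, 5})
# Level 2: a second fracture adds interior primitives 6-7.
inside, outside = fracture_groups(inside, outside, {6, 7})
print(sorted(inside), sorted(outside))  # [4, 5, 6, 7] [0, 1, 2, 3]
```

With merging on, the outside group keeps meaning "the original surface" no matter how many fracture levels you stack, which is usually what you want for shading interior versus exterior faces.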
Alright, let's take a look now at the RBD Interior Detail SOP. One of the first things you'll notice is that it has three outputs, which is a little bit different from what we just saw with Voronoi Fracture. Just like before, the geometry output and the constraint geometry output are outputs one and two; the third output provides proxy geometry, and this lets you carry along a low-res or proxy representation of the geometry alongside the fractured geometry and the constraints. This is a similar workflow to what the Vellum tools do, if you've already used those in Houdini 17, but instead of the third output being the collision geometry for Vellum, it's being used for the proxy geometry for the RBD nodes. In the case of RBD Interior Detail, the main operation of adding detail to the interior faces is just applied to the first geometry - the high-res geometry in this case - so we're taking the original geometry from Voronoi Fracture, and RBD Interior Detail is going through the first input's geometry and adding more resolution and noise to the interior faces there. The constraint output is just left as is - it's acting as a pass-through in this case - since adding interior detail doesn't affect any of the constraints. The proxy geometry input is a little more interesting. If I had already had some proxy geometry and passed it into the third input, it would just be directly passed through to the output, left as is, with the interior detail applied only to the high-res geometry. But if I didn't actually provide any proxy geometry, the SOP will take the original input geometry and provide it as the third output, as my new proxy geometry. So this immediately gives you a separate split in your network for the high-res and low-res geometry, if you need to do different things with them. So let's take a quick look now at how the SOP works. Since it's now a separate SOP and not part of Voronoi Fracture anymore, you do need to specify which polygons or primitives you want to apply the
interior detail to and so this is where the interior group parameter comes in here so it defaults to inside which matches the name of the groups that Voronoi Fracture creates by default so we'll just take a quick look at those in the network editor here I'll just disable the interior detail and go back to the exploded view and going through the group picker so Voronoi Fracture by default will create a group called outside which has the original faces before the fracturing happened and then an inside group which has all the new interior faces that got added as part of the fracturing process and so if you've left these parameters as default then RBD Interior Detail because it uses inside by default will just work as is but if you have changed around or disabled these groups on Voronoi Fracture you might need to take that into account when you're setting up the interior detail and provide your own group there and then next up in the geometry tab there's a few options for what you want to do with the polygons themselves so you have an option for whether you want to add any detail to those polygons by dividing them up so if you turn this off the geometry is just left as is which in this case isn't a high enough resolution to do anything interesting but if you already knew that the polygons that you were trying to apply the detail to had already been divided enough then you can turn that off and then otherwise you can control how finely or coarsely those polygons will be divided and then next up you have an option to recompute the interior normals or you can just leave them as is and then there's also an option where if you had any quads or non triangular polygons in your output you can triangulate them if they ended up being non planar after applying the displacement so then in the next tab you've got a whole bunch of settings for the noise that's going to be applied and this basically mirrors a lot of the parameters you're probably familiar with from the unified
noise VOP and the Mountain SOP and so on so the main control here is the noise amplitude which is the amount of displacement that's being applied and then underneath that are all the usual noise controls to switch between different noise types change the frequency of the noise the offset and so on and then some of the more advanced noise controls as well so you can use all of that to adjust the shape and style of the noise and then at the bottom you have all the interesting controls for how the noise is applied to the interior faces so first we'll turn on the visualize noise scale option which gives us a nice visualization of how much displacement is being applied to each point so by default what the SOP will do is compute an SDF or signed distance field from the input geometry and use that for sampling the depth of each of these interior points so that it can then scale the amount of displacement so that you don't have points that are you know right near the surface being displaced outside and things like that and also to make sure you get a smooth transition from the outside faces to the noisy interior so by default it'll build that signed distance field using this resolution but if you want to supply your own volume you can also do that using the fourth input so just like on Voronoi Fracture 1.0 there was that extra third input for providing your own depth volume and so now RBD Interior Detail has this extra input so you could for example use an IsoOffset SOP to build it from maybe the original unfractured geometry switch this over to SDF volume and then adjust your resolution and some of the more advanced construction parameters and then pass that in as the fourth input here and then that will disable the normal depth volume that's being computed by default and it'll be used instead for the depth sampling just connect that ok so then next up there's an option to clamp the amount of displacement based on the depth of the point from the surface and the effect of
this is a bit easier to see if I crank the noise amplitude up really high so I'll just turn this up to kind of a ridiculous amount that looks pretty silly in this case but it makes it a little easier to see what's happening so if I turn this option off you'll see that points that are near the surface because the noise amplitude is so high they can end up being pushed outside and if I turn off the exploded view you'll see that it's kind of messed up the silhouette of our geometry so what this will do is clamp the amount of displacement to be some percentage of the depth from the surface so that it can't go outside so if it's set to one then it's clamping it just enough to not be pushed outside but if you have a really low resolution depth volume or something like that that might not be perfect so by default it's just clamped to ninety percent and then as you drag this down the points can't be displaced as much and at zero nothing really interesting is happening and then I'll just change my noise amplitude back to something that looks a little nicer and then the bottom controls let you control how the depth from the surface is mapped on to the noise amplitude so the default is using a bias curve and so if this is set to point five you end up getting basically just a linear interpolation so the points that are right at the surface end up getting a noise amplitude of zero and then the points that are the deepest from the surface are gonna get the maximum noise amplitude here and then in between is just a linear interpolation so then as you drag this value to either side we'll get a curve that's biased in either direction and then as an alternative to that you can also use the depth to noise ramp instead so you just provide your own custom ramp so you can control the shape of this curve in any way that you want so with these parameters you can get a bit more control over how the points that are kind of near the surface get displaced to avoid having any artifacts that are being introduced by having a lot of noise being added to the interior surfaces next we'll look at the RBD Cluster SOP which is used for clustering together fractured pieces into larger chunks the workflow is a bit different than the clustering options in the old Voronoi Fracture SOP partly because it's now clearly separated into a post fracture step it's also been made into a tool that's more useful in other situations you can now use it for clustering together fractured pieces from other fracturing methods like boolean and you also have some more control over how exactly you want to create those clusters whether it's by actually merging together pieces or by just setting up constraints differently to create the look of clusters so let's start by looking at the old Voronoi Fracture SOP just for a refresher so in Voronoi Fracture if you turned on the cluster pieces option that would cluster the pieces into a few larger chunks which we can see a bit more easily with the exploded view as well and so that was controlled by the cluster noise option which gives you some control over the size offset and jitter of the cluster noise and then you also have some options for randomly detaching pieces or setting up a constraint network that you would have to object merge out but if you wanted to do some more custom clustering the way that you do that is by setting up a cluster attribute on the input points so the input cell points for Voronoi Fracture so in this case if I had just used this wrangle here all it does is divide the points into two different clusters based on their position along X and so if I send that into Voronoi Fracture now and turn off the default cluster noise we'll see that we end up just getting two clusters that are split along the x axis and then as I change this around we see my custom clustering taking effect so the one downside with this approach was that it was kind of intertwined into the fracturing process right so if
I wanted to set up my own cluster attribute on the points I'm having to do that by adjusting the input points to Voronoi Fracture which means that that's also causing the fracture to happen again so you'll notice that it's kind of laggy as I'm trying to change around my clustering settings because it's actually modifying the inputs to Voronoi Fracture and forcing me to redo some of the fracturing work so that's not really ideal so let's take a look now at RBD Cluster over here so you'll see that it's a three input three output node sort of like what we saw with RBD Interior Detail so we have the geometry constraints and proxy geometry and it still allows you to do a similar point based clustering approach as before but now instead of doing it on the cell points for Voronoi Fracture you can just do it on the constraint network's points so we'll take a look at that here so over here I've got my constraint geometry and then if I do the same sort of clustering along the x axis approach as before and then send that into RBD Cluster we can turn on the visualizer for the clusters and we'll see that we're getting split into two separate chunks as before so the nice thing with this approach is that it's done after the fracturing process you could cache out the results of your fracturing and do this clustering at any point later on and that also makes it very easy to iterate on because now it's super fast since all it has to do is just change around the name attribute and so if I change around my clustering settings it just updates in real time and I can test this out very easily so let's take a look at the RBD Cluster interface so we have kind of similar looking options as before but now it's a little bit more explicit about how the cluster noise settings that are provided by default are combined with the cluster attributes that you might have set up on your input geometry so first you can specify what attribute you want to use for the clustering
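as a standalone illustration of the kind of logic that wrangle performs (a hedged sketch only, not the RBD Cluster SOP's implementation — inside Houdini you'd write something like `i@cluster = @P.x < 0 ? 0 : 1;` in a point wrangle):

```python
# Sketch of the clustering logic described above: give each point an
# integer cluster id based on which side of a split plane it falls on,
# mimicking writing an i@cluster attribute in a point wrangle.
def assign_clusters(points, split_x=0.0):
    """points is a list of (x, y, z) tuples; returns one cluster id per point."""
    return [0 if x < split_x else 1 for (x, y, z) in points]

pts = [(-0.5, 0.0, 0.0), (0.3, 1.0, 0.0), (-1.2, 0.2, 0.4)]
print(assign_clusters(pts))  # [0, 1, 0]
```

moving `split_x` around corresponds to dragging the split plane in the viewport and watching the clusters update in real time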
it defaults to the cluster attribute and then you have the same offset jitter and size parameters that we had on the old Voronoi Fracture SOP but then here we have an option to be explicit about how we want to deal with the incoming cluster attribute so you can preserve existing clusters which will just ignore all the default clustering parameters if you have a cluster attribute already set up on your geometry and then otherwise you also have options to overwrite or not do anything and then on top of that there's also the random detach option which will randomly split out some pieces from their cluster and just leave them as isolated pieces on their own and the way that this works is just by setting the cluster value to negative one so cluster values that are less than zero are interpreted as being just pieces that should be left on their own so we can see that here let's just add something into our wrangle so if point zero let's say we want to leave on its own we can just set its cluster value to negative 1 and then we probably need to turn on the exploded view yeah we'll see we have one piece that's been left on its own and hasn't been added into any clusters so with this you can get a bit more control over how exactly you want the clustering to happen very easily and that's how your own detachment can be implemented but you can also just use the random detach option and control the ratio of how many detached pieces you want and also adjust their random seed and then there are a couple different modes for how you want to actually apply the clustering and have it take effect so the default that we've been using here is the combine pieces mode and so the way that this works is it just does the clustering by adjusting the name attribute so merging together pieces into one single larger piece and so this is a little bit similar in concept to how the proxy geometry stuff we looked at earlier with sphere packing and convex decomposition worked
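the renaming idea behind the combine pieces mode can be sketched like this (a hedged standalone illustration of the concept, not the SOP's actual code): pieces sharing a cluster id get one merged name while negative ids stay detached and keep their original piece names

```python
# Sketch of the 'combine pieces' idea: adjust the name attribute so
# pieces with the same cluster id merge into one larger piece, while
# cluster ids below zero are treated as detached pieces left on their own.
def cluster_piece_names(piece_names, cluster_ids, prefix="cluster"):
    return [name if cid < 0 else prefix + str(cid)
            for name, cid in zip(piece_names, cluster_ids)]

names = ["piece0", "piece1", "piece2", "piece3"]
print(cluster_piece_names(names, [0, 0, 1, -1]))
# ['cluster0', 'cluster0', 'cluster1', 'piece3']
```

since the solver groups geometry by the name attribute, giving several pieces one name is all it takes to make them behave as a single rigid body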
so in this case we started out with 502 pieces and then we now just have 70 pieces I'll turn off the random detach maybe to make it even easier to look at so now we just have five pieces if we looked at the spreadsheet we went from having names like piece zero piece one piece 500 and now we just have pieces named cluster zero cluster 1 and so on according to this name prefix we have here so all that it's done is just set up a new set of names to combine some pieces together into larger clusters and so just like we were doing with the proxy geometry before in DOPs you could just turn on the compound shape option and have a compound shape be built up out of all the original Voronoi pieces in that chunk so you can very efficiently set up collision shapes for these concave clustered pieces and so this approach is really good if you have pieces that you never want to break apart so if there aren't going to be any constraints between these objects they're just going to be combined together into one object for the simulation so it's a very efficient way of setting up compound shapes but it has the downside that you can't easily break apart any of these chunks unless you go down the dynamic fracturing approach or something like that so the other mode you have is group constraints and so this works actually in kind of a similar way as the old Glue Cluster SOP which was made for operating on glue constraint networks but it's been made a little bit more generic so that you can use it with other constraint types and kind of do your own thing with it so the idea here is that in this mode it won't actually rename your pieces you'll still have the same number of input pieces as before so we'll see we have 500 pieces and we still have our usual names of piece0 piece1 and so on but instead it just adjusts our constraint network and creates some primitive groups so we'll take a look at that here I'm just going to turn on the random detach option as well so that we have a few detached
pieces to show the effect of that and so it creates four different primitive groups so one for intra cluster cluster to cluster cluster to piece and piece to piece it's a little easier to look at that if we turn off the visualization and go into the group visualizer so we have these four groups so one is going to be cluster to cluster which is the constraints from pieces in one cluster to pieces that are in a different cluster and then you also have the intra cluster which are all of the constraints that are inside of one cluster of pieces and so this lets you do things like setting up say stronger glue within a cluster and then weaker glue in between clusters and so that will give you the effect of having a chunk that can then break apart and then even break apart further if there's really strong impacts so this is useful for pre-shaping the clusters and letting them dynamically break apart during a simulation and then on top of that if you are using the detachment option or you did your own custom clustering and not every piece was added into a cluster we'll also have the groups for constraints that are in between separate individual pieces and then also the constraints from pieces that belong to a cluster to pieces that are not part of any clusters so you end up with these four different primitive groups and then it's up to you to do whatever you want with them afterwards so you could set up different constraint properties and things like that so with this approach it won't be as fast as doing the combine pieces approach because you aren't actually changing the number of pieces in your simulation right in this case we still have 500 pieces so it's not as efficient as combining everything into say five giant clustered pieces but it gives you the flexibility of being able to set up your own constraints and do kind of custom stuff to let these chunks break apart further on so it's a more flexible approach if you want to do stuff with clusters that
are going to be breaking apart and not necessarily staying together as one solid object for the entire simulation so related to what we were just looking at with RBD Cluster where we had the option of setting up different primitive groups on our constraints another new SOP we've added is the RBD Constraint Properties SOP and so the idea with this SOP is it's just a simple higher-level tool for setting up the common constraint properties and primitive attributes for constraint networks that bullet expects so this is just a nice way to avoid having lots of wrangles and attribute creates for setting up the usual bullet attributes so first you can specify your constraint groups so the primitive groups within your constraint network so here we could use our cluster to cluster groups for example that we had set up from RBD Cluster and just set up the constraint properties for those constraints which we can see with the guide geometry here and so then you have a bunch of parameters for setting up the constraint properties you can pick between different constraint types currently it's glue and soft constraints but this will be expanded in the future for more constraint types and then set up the constraint name so this matches the name of the constraint in DOPs so you can change this if you're setting up multiple instances of the glue constraint relationship node in DOPs but by default it just matches the name of glue or soft and then you have all of the usual parameters that you'd expect for glue constraints like strength half-life and propagation rate and so on and there's also a built in option to add a bit of variance to the strength so if you looked at the constraint geometry we'll see that it's just added all the standard primitive attributes we're used to just for the primitives that are in the group that we were operating on so we have the constraint name attribute here the constraint type attribute is just left out for glue because
it doesn't actually matter the glue constraint just creates compound shapes so it doesn't matter whether you're affecting position or rotation and then we have the impulse half-life and all of the usual attributes we might set up for the glue constraint so these all default to one so they'll act as primitive attribute multipliers for the values that you have in DOPs but if in DOPs you're just kind of leaving them as one then you can set these to the true values and do all of your setup in SOPs and then for soft constraints in addition to the stiffness damping ratio and all of its standard parameters there's also the degrees of freedom option which defaults to position and rotation so this corresponds to setting constraint type to all and then setting it to position only or rotation only corresponds to setting constraint type to position or rotation so this is just kind of a nice way of quickly setting up the constraint properties without a lot of custom wrangles and things like that and then for glue constraints this SOP also exposes the next constraint option so I'll just turn that on which is this switch constraint type when broken and so this sets up these two attributes next constraint name and next constraint type which are attributes that the bullet solver itself will recognize so if the bullet solver decides to break the constraint which in the case of glue constraints is going to happen if the impact value gets to be larger than the strength threshold the solver will then immediately switch the constraint type and constraint name attributes over to these new values so it'll give you an immediate transition from glue constraints into soft constraints or whatever other constraint type you were using so this can just be a handy way of avoiding having to write SOP solvers in kind of common scenarios where you just want to immediately switch to some other constraint type when the glue constraint gets broken so then in RBD Constraint Properties you can
also set up the stiffness and damping ratio parameters and so on for the soft constraint that you're going to be switching into so to wrap up our section on the lower-level fracturing tools we'll take a look at the Boolean Fracture SOP which is another new SOP in Houdini 17 so the Boolean Fracture SOP is kind of just the boolean equivalent of Voronoi Fracture but based around Boolean instead of Voronoi Split so if you look at the inside of Voronoi Fracture it's really based around the Voronoi Split SOP that handles the actual cutting of the geometry and then the rest of Voronoi Fracture is just based around making it convenient to use Voronoi Split so things like computing the name attribute recomputing normals transferring attributes from the input points setting up the standard groups and attributes that give you some information about what happened during the fracture and so on and Boolean Fracture is very similar except it's based around a boolean shatter operation and it still does all of the standard things like overwriting or appending to the name attribute and setting up normals and so on and so this makes it a lot more convenient to do boolean based fracturing both by eliminating a lot of the busy work but also making sure that it's consistent with the output you would get from Voronoi so you still have a name attribute vertex normals being computed inside and outside groups and so on so it's a bit easier to have higher-level fracturing tools as well that can use either boolean or Voronoi and still produce consistent attributes on the output so in this example here I'll just take a look at what I did so this is just the standard pig geometry and then I scattered a few points inside its volume giving them random orientations with Attribute Randomize and then instanced a few grids on to them and then added some interesting noise to them with the Mountain SOP so with Boolean Fracture the second input is different from Voronoi Fracture's so instead of being
the cell points for doing a Voronoi split it's going to be the input cutting planes for doing a shatter operation with the Boolean SOP so then if I feed those cutting planes in we'll see that it's sliced up our geometry into some nice and interesting looking shapes and then as with Voronoi the outputs will be the same so the first output of course is the fractured geometry and then the second output is the constraints between all of the adjacent pieces that were produced by the Boolean SOP so just like with Voronoi this constraint network can be really handy for either using as a starting point for your constraint network in the sim or just for getting some interesting information about which pieces ended up being adjacent to each other after the fracturing process and then the parameter interface for Boolean Fracture should look pretty familiar after looking at Voronoi Fracture's so just like with Voronoi we have the usual option for setting up the name attribute along with a visualizer that we can turn on to visualize pieces and then we can also append to the name attribute instead of overwriting it if we're doing a recursive fracturing process the interior exterior normal settings are exactly the same as Voronoi so we can turn on or off computing interior normals as well as preserving or recomputing the incoming vertex normals and then the output attributes section is also fairly similar but there's a few differences between the different fracturing methods so just like with Voronoi we can use the attribute prefix if we want to add our own prefix to the inside and outside groups and any other attributes that we're creating and then we can also turn on the option to set up a primitive piece attribute which is just an integer ID for the pieces that were created unlike Voronoi there are a couple fewer options because we don't have the input cell points so there's no option to of course set up an attribute for the cell point that produced that piece because we don't have the same
concept with the cutting surfaces for boolean and boolean already handles some attribute transfer for transferring attributes from the interior surfaces onto those pieces and then just like with Voronoi we can still set up the interior and exterior groups which default to inside and outside and we have the same option for whether we want to accumulate those interior and exterior groups over multiple fracture operations or if we want to just overwrite them and so then finally at the bottom there's also a few boolean specific settings that are just promoted up from the Boolean SOP so you have the detriangulation settings and then a few of the more advanced options for the boolean operation so you can just refer to the Boolean SOP's help for any of those if you find yourself needing to tweak them so on top of all of these core fracturing tools that we just looked at we've also added a new node RBD Material Fracture that provides some higher-level material based fracturing options so the idea here is to provide some very artist friendly tools to fracture things like glass concrete and wood and handle a lot of the tedious tasks like setting up constraints and proxy geometry but still provide enough control to be able to art direct the look of the fracture so let's take a look at this now so this is the RBD Material Fracture SOP and let's take a look at some of the top-level parameters so at the top here we've got the standard group parameter which is still worth mentioning because there are a lot of interesting things we can do with this as we'll see later on so you can use the three inputs and three outputs to chain together multiple RBD Material Fracture SOPs and then use the group field to isolate different parts of the geometry to fracture and we'll see that as we go ahead to doing a full example later on next up is the material type which lets you switch between concrete glass panels and wood and then the fracture namespace option lets you easily add an extra
prefix to all of the fractured pieces that are created by this node so that you can identify them easily later on so we'll see that the default names here for example are concrete0 concrete1 and so on for the pieces but if I turn this on then we'll get this extra name being added into the beginning of all of the piece names so that we can identify those later on if we need to the fracture per piece option we'll take a look at later on it's a little bit more interesting to talk about once we've actually seen the parameters of the different fracturing methods and then each of the different fracturing methods has its own guide geometry options but there are a few standard ones that all of them have so I'll just go through that now so the default is to not show any guide geometry but then you can also turn on the fractured geometry guide which will just highlight the pieces that were produced by this node and so this is mainly useful when you're also using the group field and it just gives you a quick visualization of what the SOP itself was operating on and there's also the constraint network visualization which lets you see the constraint geometry from the second output there are also a couple interesting visualization options we'll see here once we're doing multiple levels of fracturing on pieces because we can see the constraints that were just being added or changed around by the particular fracturing operation when we're just operating on a particular group of pieces so first up we'll look at the concrete fracturing and so this is really a primarily Voronoi based approach so the workflow should seem pretty familiar but it adds in a lot of nice options for getting interesting size variations doing sub fracturing chipping off the corners of certain pieces getting edge noise and so on so we have less Voronoi looking pieces being produced so let's start by turning on a single fracturing level and also turn on the primary volume visualization here so the idea
here is pretty similar to what you would get from the standard shatter shelf tool we're taking the input geometry building a fog volume from it and then scattering some points in that volume to use as the seed points for Voronoi Fracture so down here at the fog volume section we have a volume resolution for the volume that we're producing and there's also some noise controls here and so this is just adding noise into the density values of the volume so if I set this to zero for example we just have a uniform density volume and if I increase the number of scattered points we'll see that we basically just get our points being scattered uniformly throughout the volume and we just have a bunch of evenly sized Voronoi chunks and then if we start adding some interesting settings to this noise we'll see that these scattered points are getting concentrated in certain areas of the volume and so now we've got some more interesting shaped fractured pieces being produced so we can use the noise frequency option to control the shape of the noise and then we can also use the noise offset to move that pattern around and there's also a cut-off option if you want to cut off any low-density areas of the volume to avoid having any points scattered there so then in the cell points section we can control how many points you want to be scattered which roughly will correspond to the number of pieces being produced and then we can also change around the seed that's being used for the random scattering there's also a useful option to be able to provide your own input points and so that uses this extra fourth input for driving the results of the fracture so I'll just do a really quick example of that so I'm going to create a box and just position this up near the corner let's say and then I'll use an IsoOffset to just create a fog volume and then we'll scatter a few points in here and maybe dial this down to a hundred and then we'll wire these points into the fourth input I'm
just gonna scale this down a little bit so that they're a little more concentrated and so then if I turn on the use input points toggle we'll see that those are being added into the points that were being generated from the volume and so you can control a bit more exactly where the fracturing is going to occur so then one thing to mention when you're using the input points is that you can then also specify a group field if you want to only use certain points from the fourth input for this particular stage here and the other mode you have to work with instead of scattering in a volume is to scatter based on an attribute and so in this case I'm just going to turn off the volume visualization it errors out right away because the attribute it was looking for on the points named density by default doesn't exist so one way of creating that is with the Attribute Paint tool which is a handy new tool that we added so I'm gonna drop this down and switch over to that and make sure I have the attribute visualization on which I do and so this lets you paint attributes easily on your fracture geometry and it defaults to creating a density attribute which matches what RBD Material Fracture was looking for so one thing to mention here is that my input geometry is very low resolution so I can use the option on here to divide the geometry so I have some more points to paint on and then I'll just turn on my visualizer again and paint in kind of these areas here and if I switch back to RBD Material Fracture we'll see that the points are being scattered in these high density regions so we can control a little bit more precisely where exactly the fracturing happens this is very useful for kind of art directing the look of the fracture and so you can switch between these volume based or attribute based scattering approaches depending on what you need I'm just gonna switch back to the default settings and maybe reduce this down to about 20 points and I'll turn off the paint operation as well so
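the attribute based scattering just described can be sketched with simple rejection sampling (a hedged standalone illustration of the idea, not what the SOP literally does): each candidate seed point survives with probability given by the painted density value, so high-density regions end up with more seed points and therefore smaller fractured pieces

```python
import random

# Sketch of density-driven scattering: keep each candidate point with
# probability equal to its density value in [0, 1], so painted
# high-density regions receive more seed points.
def scatter_by_density(candidates, density, seed=0):
    rng = random.Random(seed)
    return [p for p, d in zip(candidates, density) if rng.random() < d]

pts = [(0, 0), (1, 0), (2, 0), (3, 0)]
# density 1.0 always keeps a point, density 0.0 never does
print(scatter_by_density(pts, [0.0, 1.0, 0.0, 1.0]))  # [(1, 0), (3, 0)]
```

fixing the seed mirrors the SOP's seed parameter so the same settings always reproduce the same scatter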
now that we've done our first level of fracturing we have the option to do an arbitrary number of levels of fracturing so we can continue sub fracturing some of these pieces that were being produced so I'll add on an extra level of fracturing here and now this fracture ratio parameter that I kind of skipped over before becomes a lot more important just to see the results of this a little more easily I'm going to turn off the noise so we just get uniformly scattered pieces and then maybe scatter 50 points in each of them so then the fracture ratio controls how many of the pieces from the first level of fracturing are going to be sub fractured again so if it's set to 1 then everything gets sub fractured and then as I change this around maybe only certain pieces get selected for sub fracturing so this lets you keep some larger chunks around while subdividing some of the other pieces to get some smaller chunks in your result so then the fracture ratio is just controlling which percentage of the pieces get selected for sub fracturing and then the fracture seed just lets you control that random selection and then just like before you have the same controls over scattering so you can scatter from volume or from attribute differently at each step and also use certain input points if you want to control the results of the second fracturing level differently now one thing to mention is that you might want to turn down the volume resolution for the additional levels of fracturing it tends to not be as important once you've got a lot of smaller pieces that are each being individually sub fractured so you often don't need to have as high of a volume resolution so turning this down to something like maybe 40 or 30 will probably still give you very good results then on top of that there's also an option to turn on chipping which basically just acts as an additional level of fracturing on top of the primary fracture and so all that this does is it just chips
off the corners of certain pieces, given the overall ratio parameter and also a seed. So it's just acting like an additional level of fracturing if you want to have a bit of chipping near the corners of certain pieces. And then on the detail tab we have a few options to control edge detail and interior detail. I'll switch back to the exploded view so we can see what's going on, and if I turn on the interior detail we'll see that it's kind of similar to the RBD Interior Detail SOP; in fact there is just an RBD Interior Detail SOP inside of RBD Material Fracture. So the interior faces are going to be divided according to the detail size, so this controls how detailed or how small the details are on the interior surfaces. Then we have a noise amplitude for controlling how much displacement we have on the interior faces, a few noise parameters that are promoted up, and there's also a volume voxel size we can control as well. The other option that's a bit more interesting is the edge detail option. I'll just temporarily turn off the interior detail. The edge detail option will switch over to a boolean-based approach for cutting the geometry, and it adds some interesting noise along the edges. You'll see that instead of getting the straight fracture lines from Voronoi fracture, you get some interesting curvature along the lines and a bit more noise. So the noise size parameter controls how much noise is being added to the edges; in this case maybe 0.2 or something like that looks okay. And then the noise element size just controls the shape of the noise and how high frequency or low frequency it is. Along with that, because it's a boolean-based operation, the detriangulate options have also been promoted up, so you can control whether you want to do detriangulation or not. And then as before, the detail size controls how high-resolution the edge noise is. If we go back to the exploded view, you'll see that we do get a bit of interior detail as a
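The interior detail idea, dividing the interior faces and then pushing the points around by amplitude-scaled noise, can be sketched in Python. This is a toy stand-in for illustration only; Houdini uses a real 3D noise function, and all names here are invented:

```python
import math

def toy_noise(pos, element_size):
    # Cheap periodic stand-in for a real 3D noise function, returning a
    # value in roughly [-1, 1]; the element size stretches or shrinks
    # the noise features.
    x, y, z = (c / element_size for c in pos)
    return math.sin(12.9898 * x + 78.233 * y + 37.719 * z)

def displace_point(pos, normal, amplitude, element_size):
    # Push an interior point along its normal by amplitude-scaled noise,
    # analogous to what the noise amplitude parameter controls.
    n = amplitude * toy_noise(pos, element_size)
    return tuple(p + n * c for p, c in zip(pos, normal))
```

With amplitude zero the surface is untouched, and the displacement magnitude is always bounded by the amplitude, which is why cranking the amplitude up directly exaggerates the interior relief.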
result of the cutting planes from the edge noise, which you can also turn on with the geometry option, but you still have the option of using the normal interior detail as well if you want a bit more control over the interior noise separately from the noise that's being added along the edges. So next up we'll take a look at the glass fracturing. The glass fracturing is primarily driven by impact points, so these will be the points where the cracks originate from. The default behavior here is to just scatter a number of points, in this case it defaults to one, but I can scatter multiple points and change around the random seed to get different behavior and so on. But similar to the concrete fracturing, we also have the option to provide our own impact points using the fourth input. So in this case, for a simple example, I could just scatter a few points on the input geometry like this and then feed those into the fourth input here, and then either turn off the original scattering or combine both together, and I can use my custom impact points and change those around if I want a bit more control over the fracturing look myself. I'll just go back to the default settings for now, turn off use input points, and go back to scattering my own points. To start, I'll turn off a few of the advanced options so that we can turn things on one by one and see the effect of each of these settings. So I'll turn off the edge noise and also crank up the minimum width of the concentric cracks to turn them off for now, and I'll also bring it down to having one impact point just so it's a little easier to see. So the first thing here is creating the radial cracks, which are emanating in each direction from the source impact point. This is controlled by the radial crack number, so this is how many radial cracks you're going to get, and then you also have some variance controls so that if you have, let's say, multiple impact points, you can have a bit more variance in terms of
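The radial crack setup, an even fan of crack directions around the impact point with per-crack jitter, can be sketched like this. A conceptual illustration under invented names, not Houdini's implementation:

```python
import math
import random

def radial_crack_angles(num_cracks, variance=0.0, seed=0):
    # Spread the radial cracks evenly around the impact point, jittering
    # each direction by up to a quarter step scaled by `variance` in
    # [0, 1], so repeated impact points don't all look identical.
    rng = random.Random(seed)
    step = 2.0 * math.pi / num_cracks
    return [i * step + variance * rng.uniform(-0.5, 0.5) * step
            for i in range(num_cracks)]
```

With variance at zero the cracks are perfectly even; increasing it perturbs each direction while keeping the overall count fixed, mirroring the radial crack number plus variance and seed controls described above.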
each of them not having the exact same number of radial cracks, so you have a variance and seed control for that. I'll just go back to having one point. Then on top of that, it performs another level of fracturing: it takes all of these separate pieces that were produced by the radial cracks and fractures them further to produce concentric cracks in a circular pattern. So if I first turn off the noise settings here and then start bringing the minimum width of the concentric cracks down to a more reasonable level, as this decreases we'll see that we get a bunch of concentric cracks with this minimum width. Then there's also this impact spread, which controls how close or how far away from the impact point the fracturing occurs. If we crank this up high, the impact spreads farther out and we get a lot more fracturing happening further away from the impact point. Then, to make the pattern more interesting instead of just getting a spider web pattern, we also have some noise settings for adding some discontinuity to these fractures. If we turn on the concentric noise we'll see that we can visualize this noise pattern, and the impact spread is scaling this noise pattern based on how far away we are from the original impact point. On top of that, we can adjust the noise settings to add some more discontinuity, reducing the concentric cracks and cutting them into some more interesting looking shapes. We'll see that it's basically acting in sort of a Voronoi-based manner, where we have cell points for each of these concentric cracks: if they're in an area of high noise they're allowed to stay around, so in these red areas the cell points are allowed to stay, and if they're in areas of low noise density then those points are going to be removed. So we can play around with the discontinuity settings to get higher frequency or lower frequency noise and get a bit more control over the exact look of how these
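The keep-or-remove logic for those concentric cell points can be sketched very simply. This is a conceptual illustration of the thresholding idea, with an invented name and a caller-supplied noise function, not Houdini's actual code:

```python
def cull_concentric_cells(cell_points, noise_fn, threshold):
    # Keep the Voronoi cell points that sit in high-noise areas and drop
    # the ones in low-noise areas, breaking up the regular spider-web
    # pattern of concentric cracks into more interesting shapes.
    return [p for p in cell_points if noise_fn(p) >= threshold]

# Example: with noise that simply increases along x, only the points in
# the "high noise" region survive.
pts = [(0.1, 0.0), (0.6, 0.0), (0.9, 0.0)]
kept = cull_concentric_cells(pts, lambda p: p[0], 0.5)
```

Raising the threshold removes more cell points, so fewer concentric cuts survive, which is effectively what tuning the discontinuity settings does to the pattern.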
concentric cracks are spreading out from the origin. Then on top of that we can also add chipping, sort of like what we saw in the concrete fracturing, where we can add some chipping to the corners of the pieces. We have the same overall ratio and seed parameters as before, and there's also a parameter for how many corners will get chipped off per piece that gets selected for having chipping applied to it. On top of that, we also have some options to add some higher detail to our render geometry separately from our proxy geometry. If we turn on edge noise we'll get some interesting warping and noise happening for our high resolution geometry, but the low resolution geometry will still remain fairly coarse. So looking at the edge noise, we have a couple of controls for fading away the amount of distortion that's happening from the origin and from the border. If we decrease this, we'll see that the noise pattern can end up kind of distorting the shape around the original impact point, so you have the option to have a smooth fade away from the impact point so that you maintain some of the original look, and likewise near the border you can avoid having too much distortion being applied. The default parameters are generally pretty good for this, so you get kind of an interesting shape on the interior of the glass, but without affecting the edges and also the impact point's area. Then we have similar parameters to what you had on the concrete fracturing for the edge noise, so you have an amplitude for the amount of displacement in either direction, and also some control over how high frequency or low frequency the noise that's being applied is. Now for the proxy geometry, what the SOP tries to do is deform it a bit to match up with the geometry that had the edge noise applied to it. I'll just turn on the exploded view to kind of visualize that. So what it'll try to do is, if there was a
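The fade controls described above amount to a smooth falloff of the distortion near the impact point and near the border. Here's one plausible way to express that in Python, using smoothstep; the function and parameter names are invented for illustration and this is not Houdini's actual formula:

```python
def edge_noise_falloff(dist_from_impact, dist_from_border,
                       impact_fade, border_fade):
    # Scale the edge-noise distortion smoothly to zero both near the
    # impact point and near the glass border, so neither region gets
    # visibly warped.  A fade distance of zero disables that fade.
    def smooth(d, fade):
        if fade <= 0.0:
            return 1.0
        t = min(d / fade, 1.0)
        return t * t * (3.0 - 2.0 * t)  # smoothstep from 0 to 1
    return smooth(dist_from_impact, impact_fade) * smooth(dist_from_border, border_fade)
```

At the impact point the multiplier is zero (no distortion), and far from both the impact point and the border it ramps up to one, which matches the behavior the defaults are tuned to give.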
fair amount of displacement happening for these pieces, it'll try to deform the proxy geometry so that it kind of lines up with the distorted shape from the high resolution geometry. But that obviously can introduce some concavity if there's a lot of bending around the middle of a piece and so on, so you do have the option to turn on a convex decomposition. You can enable that, adjust the maximum concavity, and possibly divide some of these proxy pieces into smaller sizes to get a more accurate proxy representation. So you can use that if you find yourself getting a lot of concave pieces being produced by the glass fracturing. And then, as with the other fracturing methods, there are some built-in options if you want to set up the constraint properties for the constraints that are produced. In this case, for glass fracturing, we've got a few different primitive groups that are being produced: we have constraints for the pieces that were produced by the radial fracturing, constraints for the cuts that were made for the concentric fractures, and also some constraints that were produced for the pieces generated by the chipping. So if you want a basic constraint setup here, you can just use some glue settings here, but if you want to do a more custom setup, and we'll see this later on, you can use RBD Constraint Properties afterwards and target some of these different primitive groups that were created by the fracturing process. Next we'll take a look at the wood fracturing. The wood fracturing is entirely boolean based, so it's going to work by setting up cutting planes to carve out the grains of the wood, as well as creating splinters and cuts in the other direction. So let's take a look at how that works. Under the guide geometry options we have some options to visualize the grains, cuts, and splinters. We'll start by looking at grains and also turn off the cuts for now, so we can just see one piece at a time. So the idea
with the grains is to make cuts along the longest direction of the wood to create the grain lines, and by default this will auto-compute the direction based on the bounding box, so the cutting planes are aligned along the longest axis. You can also provide your own direction vector if you want to. Then the grain spacing and offset just control where the cutting planes are placed: as the spacing gets increased we have fewer cutting planes and they're also farther apart, and the grain offset controls how much randomness is applied to the position of each of the cutting planes, so as we increase this, and also maybe change up the seed, you can have cutting planes that are less evenly spaced if you don't want as much of a regular look. Then on top of that, you can control the noise of the cutting planes. We have the grain detail size, which is how finely the cutting planes are remeshed: as we increase this they're very low resolution cutting planes, so we get fairly low resolution pieces being produced, and as we decrease this we get much higher resolution cuts being made. And then the noise size, I'll switch back to the guide geometry, will control how noisy the cutting planes are; in this case a smaller value probably looks pretty good, so you can decide whether you want perfectly straight grain lines or some slightly curved grain lines. And the element size controls the shape of the noise. I'll just revert these back to a kind of a good setting to continue on with. So the next part is making cuts in the other direction to the grains, to both fracture the wood and also create splinters. Let's switch over to the guide geometry and also turn that back on. The default behavior here is to also auto-compute the direction for the cuts, and this will be in a direction perpendicular to the grain lines that we created before, but again you can also specify your own direction vector if you want to. And then just like we did for the grains, we
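The grain-plane placement, longest bounding-box axis, regular spacing, and a seeded random offset per plane, can be sketched like this. Conceptual illustration only, with invented names, not the SOP's internals:

```python
import random

def grain_plane_positions(bbox_min, bbox_max, spacing, offset, seed=0):
    # Place cutting planes along the longest bounding-box axis at
    # `spacing` intervals, jittering each plane by up to
    # offset * spacing, like the grain spacing / offset / seed controls.
    rng = random.Random(seed)
    sizes = [mx - mn for mn, mx in zip(bbox_min, bbox_max)]
    axis = sizes.index(max(sizes))
    lo, hi = bbox_min[axis], bbox_max[axis]
    positions = []
    x = lo + spacing
    while x < hi:
        positions.append(x + rng.uniform(-offset, offset) * spacing)
        x += spacing
    return axis, positions
```

With zero offset the planes are perfectly regular; increasing the offset (or changing the seed) staggers them, which is exactly the less-uniform look described above.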
can control the spacing and random offset between the planes, so we can have them be a little bit closer together, change what our random offset is, and add some different seeds for changing around the look of that. Similarly, we can also adjust the same sort of noise settings to control the curvature of these cutting planes, so the cuts that they're going to be making through the surface of the geometry, and in this case I kind of like the shape. If we look at the exploded view we can see we have something like that, and I'm actually gonna increase the spacing so that we have fewer cutting planes and some longer chunks to work with. The next part of this is adding splinters, and this is done by adding an additional level of noise to the cutting planes so that they carve out some nice splinters from the geometry. We can see that if we switch over to the splinters guide. Right now I had this turned off, so the splinter length was set to zero; normally it's on by default so that you get some interesting looks. As we increase the splinter length, we'll see that it's adding this very sharp noise in both directions to the cutting planes, so that it's carving out some splinter shapes for us. So we'll see something like that, and then as we also adjust the splinter density, whether the splinters are very closely packed together or very wide apart, we can change around the shape of these splinters that are being produced. We can either have a very low density of splinters, which is not looking too good in this case, or if we crank up the density we're getting some sharper looking splinters being produced. And then finally, the next option you have is clustering together the different pieces, and this is also on by default because it generally looks a little bit better for wood. So if you want to, you can also turn on clustering and then adjust the size
parameters, which basically correspond to what you would get from the RBD Cluster SOP; it's just built in for convenience. This will combine together some of the pieces that were produced by the splinter cuts, so you get some more solid looking chunks of wood that still have some interesting detail from the splinters. You can do this afterwards as well, since it's really just doing the RBD Cluster SOP, but it's built in here by default so that you can just get a good result right out of the box. On the detail tab there are just a couple of options to be aware of. One is the detriangulate option: since this is boolean based, you can select whether you want to detriangulate everything, detriangulate only unchanged polygons, and so on. And then for the proxy geometry, since this is boolean based it's rather easy to get very concave shapes, so there's an option built in to turn on a convex decomposition, and you can control the maximum concavity that you're using for generating the proxy geometry from the third output versus the high resolution geometry. Then, just looking at the constraints quickly, I'll turn off clustering for a second and also turn back off the guide geometry. If we look at the constraints, we're also going to get a few primitive groups that we can use for targeting with RBD Constraint Properties and things like that: we'll get some constraints between the cuts that were made for the cuts step, as well as for the grains, between our fractured pieces. Now that we've taken a look at some of the fracturing settings, we can also get back to that fracture per piece option that I mentioned before. So instead of just having a single input piece of wood, let's say that we actually have a door frame that's made up of a few different pieces already, so in this case we have 12 input pieces. If I were to send this into RBD Material Fracture, it would by default just treat it as one giant object; it's okay with the object consisting of different connected components, but in
this case it doesn't really produce the look we're looking for, because if we look at the exploded view, some of the grains are in the wrong direction for what we'd expect. What we really want is to individually fracture each of those pieces and have, let's say, the grains be aligned in the right direction. Obviously you could do this by just dropping down a for-each loop and doing it yourself, but this option is built into RBD Material Fracture both to simplify the workflow and also to make it a bit easier to control some of the random offsets and things like that, so that you can very easily get a bit of variation between each of the pieces being produced. So if we turn on this fracture per piece option, we'll see that we get a few new options that pop up. We have the option to specify what our piece attribute is, of course, and then we have a single piece option, which just lets you easily isolate a single piece and tweak the fracturing settings on one piece at a time instead of forcing you to continually refresh the entire geometry, and we also get this random seed option, which we'll see in a second. On top of these parameters that have appeared, we'll also get a few new parameters like this randomness setting for the grains, which lets us vary up the settings for each of the input pieces. So I'll turn back off the single piece, and then if we look at the exploded view, we're now individually fracturing each piece, with the grain directions aligned the way we'd expect. But then if we change around this randomness setting, we can get a bit of variation in the spacing of the cutting planes from one piece to the other, so they're not perfectly uniform. If it were set to zero then we'd get the same spacing between the cutting planes, and as this increases we'll get some pieces that have their cutting planes spread fairly far apart and others that are pretty closely packed together. So this will appear for
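The per-piece randomness idea, each named piece deterministically gets its own variation of a setting, driven by one global seed, can be sketched like this. The hashing scheme and names here are invented for illustration; Houdini's actual derivation isn't documented in this masterclass:

```python
import random
import zlib

def per_piece_value(base_value, randomness, piece_name, global_seed):
    # Deterministically vary a fracture setting per named piece: each
    # piece hashes to its own offset in [-1, 1], scaled by `randomness`.
    # randomness = 0 gives every piece exactly the base value.
    key = zlib.crc32(("%s/%d" % (piece_name, global_seed)).encode())
    rng = random.Random(key)
    return base_value * (1.0 + randomness * rng.uniform(-1.0, 1.0))
```

Because the variation is keyed on the piece name plus the global seed, re-cooking is stable, and changing just the seed re-rolls the variation across all pieces at once, matching the behavior described above.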
a bunch of the different fracturing settings. On the cutting planes we'll also see an extra randomness parameter showing up here, and all of those are controlled by this random seed, so you can get a bit of variation very easily across all the different fractured pieces. The same goes for glass and for wood and for concrete, where you'll get similar randomness settings for any of the things that already have random seeds, like, in the case of glass, the number of radial cracks being produced. Before we get into building a full example, there are just a couple of tips and tricks I want to mention for the workflow with these multiple output nodes. There are a couple of really handy shortcuts in the network editor that I use, and I've been doing a couple of these all along, actually. If you already have a node selected and you're in the tab menu and you pick the node that you want, if you do Shift+Enter instead of just pressing Enter, it'll wire up all of the outputs of the currently selected node to the inputs of the node you're putting down. This can be really handy for these multiple input and output nodes, to make it really simple to wire them up in a sequence. Along with that, if you have a node that you've already put down and you want to wire them up, instead of just dragging each wire one by one and so on, which can be a little bit tedious, you can shift-click on the outputs of one node: if I click on the first output, shift-click on the next, and shift-click on the third, they get bundled up together, and then if I wire them to the first input, it'll distribute those wires in a row. So this can be really handy if you've already placed down a node and you just want to very easily wire it up to another node. Then there are also a couple of useful utility SOPs called RBD Pack and RBD Unpack, and the idea with these is pretty similar to the Vellum Pack and Vellum Unpack SOPs, if you've seen
those already in the Vellum workflow. So if we take a look at this, RBD Pack takes three inputs, so I'll just wire that up, and what it produces is just a single output, one piece of geometry, but it contains three packed geometry primitives. What it's done for us is it's taken the geometry, the constraint geometry, and the proxy geometry, converted each of those into a packed primitive, and then merged them all together, and it's also tagged them with this RBD type attribute, named either geometry, constraints, or proxy geometry. So we've merged all of the geometry together, but doing it with packed geometry primitives instead of just directly merging all of the geometry together. This is really handy because the constraint geometry usually has a totally different set of attributes from the fractured geometry, and so we avoid having to merge together all of those attribute tables; instead, we can just put each into a packed geometry primitive, leave it on its own, and then merge them all together. So now we just have one piece of geometry with three packed primitives in it. The nice thing with this is that a lot of things like switches and merges can be a lot simpler. If I have two different fracturing setups that I want to switch between, let's say I had some different stuff going on in this branch, then I can just use a normal Switch SOP and switch between them on demand, and then use RBD Unpack to do the inverse operation, which basically takes the incoming packed primitives, splits them out based on the RBD type attribute that was added here, and then gives us back our geometry, constraint geometry, and proxy geometry again. So this lets you avoid having to duplicate switches and merges and so on for each of the three streams, and it's also nice for things like for-each loops, where you want to process different things in parallel and then merge the results later on, so that can be a handy workflow. The other nice thing about this is
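The pack/unpack round trip, three streams bundled into one tagged container and split apart again by type, can be sketched in plain Python. The dict-of-streams representation and tag values here are purely illustrative, not the SOP's on-disk format:

```python
def rbd_pack(geometry, constraints, proxy):
    # Bundle the three streams into a single list of "packed" entries,
    # each tagged with a type so they can be split apart again later.
    return [{"rbd_type": "geometry", "data": geometry},
            {"rbd_type": "constraints", "data": constraints},
            {"rbd_type": "proxygeometry", "data": proxy}]

def rbd_unpack(packed):
    # Inverse operation: split the packed entries back out by type.
    streams = {entry["rbd_type"]: entry["data"] for entry in packed}
    return (streams["geometry"], streams["constraints"],
            streams["proxygeometry"])
```

The payoff is the same as in the SOP workflow: anything in the middle (a switch, a merge, a cache) only ever has to deal with one object, and the tag is what makes the split at the far end unambiguous.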
that if you want to cache out the geometry to disk for the geometry, constraint geometry, and proxy geometry all together, this can also be really handy: you just drop down an RBD Pack SOP, and then you can use a File Cache SOP to cache out everything together if you want to do that. So now let's take a look at doing a full example, and also a little bit more about setting up constraints between the different material types. In this setup here I've got the full version of the wall geometry that I was using for the earlier demos, so I've got a name attribute already for the different concrete, glass, and wood input pieces, and then I've just got a sequence of RBD Material Fracture SOPs to fracture each of those different parts. For the glass, I'm using the group field here to just operate on the three input glass panels, and then I've also got the fracture per piece setting turned on to separately fracture each of those panels, with three scattered impact points each. The settings here for the glass fracturing are pretty much just left at the default, and I've left off the default constraint properties, as we'll set those up later on. If we look at the guide geometry here for the constraint network, we'll see that in red it's just highlighting the constraints that were produced from this fracturing operation, and the fractured geometry option is a little more useful now that we're using the group field, because it just highlights the pieces that we're producing from this operation. And then the wood fracturing is the same idea: I'm just using the group field to select the wood pieces, fracture per piece turned on, and then I've got clustering turned off, since I'm gonna be using constraints instead, and I've got the detriangulate settings adjusted a little bit, but otherwise it's pretty much at the default settings. So again, if we look at the constraint geometry view, we'll see that in white we have the constraints
that already existed before between the glass pieces, and then in red we've got all of the new constraints that were produced between the new pieces of wood. And then the constraints for the concrete geometry are the same idea, so we're just operating on the single piece of concrete, and I've also got the edge detail and interior detail turned on. So there's a useful option on RBD Interior Detail that's very handy to mention, so we'll take a look at that now, or sorry, not on RBD Interior Detail, on RBD Material Fracture. I'm gonna start by just turning off the fracturing operations temporarily, and then I'll wire this up. If we select one piece to operate on, so let's say concrete zero zero, and then I turn back on one level of fracturing and just create a bunch of pieces so it's obvious what's happening, we'll just turn on the visualizer so we can see which piece it was. So here we've got this one concrete piece, which we're just going to be doing some additional fracturing on by isolating it with a group field. If we look at the constraint geometry visualizer, what we'll see is that in red we have all of the constraints that were produced between the fractured pieces that got generated here, so we've just produced a bunch of fractured pieces from the original input piece, and we've got these constraints in between all of those pieces. But then in yellow we've also got constraints that were generated between the pieces that it was a neighbor with. So if you'll notice, on the original concrete geometry we had some constraints between this piece and a few of its neighbors, and as this piece got sub-fractured into more and more pieces, the SOP will also update all of the constraints to the neighbors to point to all of the boundary pieces that got produced. What this means is that as you continue to fracture certain pieces more, the SOP is able to update all of the constraints with its neighbors to reference the new pieces that got produced along the
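The constraint-rewiring behavior described here can be sketched as follows: when a piece is sub-fractured, any constraint that referenced the old piece gets re-pointed at the nearest of the new pieces. This is a deliberate simplification with an invented data layout (name pairs plus an anchor position, centroids standing in for real boundary geometry), not the SOP's actual algorithm:

```python
import math

def rewire_constraints(constraints, old_piece, new_pieces):
    # constraints: list of (name_a, name_b, anchor_position) tuples.
    # new_pieces: {new_piece_name: centroid} produced by sub-fracturing
    # old_piece.  Each constraint that referenced old_piece is
    # re-pointed at whichever new piece is nearest its anchor.
    def nearest(pos):
        return min(new_pieces, key=lambda n: math.dist(pos, new_pieces[n]))
    rewired = []
    for name_a, name_b, pos in constraints:
        if name_a == old_piece:
            name_a = nearest(pos)
        if name_b == old_piece:
            name_b = nearest(pos)
        rewired.append((name_a, name_b, pos))
    return rewired
```

The essential property is the one the masterclass relies on: constraints to neighbors survive sub-fracturing, and each one ends up attached to the new piece sitting at that part of the boundary.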
boundaries there, so this can be a really handy feature to take advantage of, and in fact you can also use this for setting up inter-material constraints. So let's take a look at that. If we first inspect the constraint geometry that got produced at the end of this fracturing and look in the group visualizer, we'll see that we do have a bunch of constraints between, say, the wood pieces that got produced, the glass pieces, and the concrete pieces, but we don't have any constraints between the glass and the wood, or the wood and the concrete pieces that they're neighbors with. So if we were to send it to a sim, it would just kind of fall apart; the glass might fall out of its window frame, for example. One idea is to take advantage of what we were just doing with that single piece and use it to set up the constraints between the different material types for us. If we go back to our original wall, what we could do is set up constraints between the original unfractured glass piece and the four wood pieces around it, as well as between the wood pieces and the concrete that they're originally neighbors with. So this is what I'm doing here: I've just dropped down a Blast SOP to isolate the wood and the glass and used Connect Adjacent Pieces, and then I'm also setting up a primitive group called wood glass for those wood-to-glass constraints so that we can track them and easily identify them later on, as they're subdivided and rewired through the new pieces that get created, and the same goes for the concrete-to-wood constraints. And then if I merge those together, we now have some initial constraints between our unfractured pieces, so we have a few constraints between the concrete and wood, and the wood and glass. So if I go back to the glass fracturing and take a look at the constraint network, if I wire these initial constraints into the second input, we'll see that those also get updated for us: as we fracture these glass panels, it's also taken those original constraints between the glass
piece and the wood that it's adjacent with, and rewired them to all of the pieces of glass that got produced along the boundaries here. So we have both those boundary constraints being updated for us as the fracturing happens, as well as all the constraints we get by default just between the glass pieces. So we can re-cook the rest of the fractures and then take a look at what we get at the end. Now if we go back to looking at our constraint geometry, in addition to the wood grain and wood cut groups we had before, now we've got all of these wood-to-glass constraints that got created for us along the boundaries, just based on those initial constraints that we had. As pieces got fractured, it keeps updating all the constraints along the boundaries of those fractured pieces, and then we also have our concrete-to-wood constraints too. So now we have enough to fully constrain together all of these different material types. This is one way of working if you want to set up your relationships between the different material types ahead of time, but you also of course always have the option of doing custom stuff afterwards, since the output geometry from these constraints is just the usual constraint network format for bullet: you have your name attribute on the points and your standard attributes, like restlength and constraint_name and so on, on the primitives. So you could just take the fractured pieces that got produced, drop down Connect Adjacent Pieces, do some stuff to set up those boundary constraints yourself, and then just merge it in with the constraints that you already got from RBD Material Fracture, so that's always an option. And so once we've finished doing our fracturing setup, we can then use RBD Constraint Properties to set up the properties for the different constraints we're gonna have. In this case I'm just setting up, for the first level of concrete fracturing, which the guide geometry will show here, some weak glue
constraints, and then for the second level of concrete constraints I'm setting up some stronger constraints, so we have kind of a clustering look. Then for the glass I've got some extremely weak glue constraints so that it breaks very easily, and for the wood constraints I've got fairly strong glue constraints, so it takes a little bit more force to break them apart, and then also some soft constraints that take effect afterwards. And then finally, for the concrete-to-wood and wood-to-glass constraints, I've got sort of a moderate strength glue constraint. So now at this point we could take the geometry, the constraints, and the proxy geometry and send that into a simulation. So now that we've got our fracturing and constraint geometry set up, let's take a look at a simulation to see how this all works. A few RBD shelf tools, like the RBD Objects shelf tool, were updated to understand these new three inputs and three outputs, so they'll be able to set up the constraints and proxy geometry and so on for us. We're gonna use the shelf tool, take a look at what it set up, inspect it a bit to see what was going on, and then make a few modifications for the simulation. So I'll use the RBD Objects shelf tool and select our geometry object, and then if we head back into SOPs, we'll see that it dropped down a few nodes for us. First up, it dropped down a null for our constraints that are coming out of the second output, and we've already got our attributes and everything set up on them from RBD Constraint Properties. Then from the third output, for the proxy geometry, it's also putting down an Assemble SOP to create packed fragment primitives for us, and those are going to be sent into the simulation as our collision objects. And then finally, from the high-res geometry, it's putting down a standard Transform Pieces setup, so it's using the DOP Import SOP in the Create Points to Represent Objects mode to get a bunch of points representing the transforms
from our proxy pieces, and then sending that into Transform Pieces to update the high-res geometry. So let's go back into the simulation and see what it created for us. It's a pretty standard bullet setup: we have an RBD Packed Object node, which is fetching our proxy geometry pieces, and then on the bullet data tab, the one option that's turned on is the Create Convex Hull per Set of Connected Primitives option, and this will be creating compound shapes from any pieces that had multiple parts to them, so this would be if we had, like, a convex decomposition that we were doing for our proxy geometry or something like that. Then it's putting down a Constraint Network DOP to fetch the constraints that we had set up, and it's created two constraint nodes for us: we have a glue constraint and a soft constraint, and the data names on these are set to glue and soft, matching what we were doing with the RBD Constraint Properties SOP. So here we had been setting our constraint name to glue, for example, and those need to match up in order for those constraints to be picked up. The other thing that is set up is that all of the values for the constraint properties, like rest length and stiffness and so on, are set to one, and this ensures that there isn't any additional scaling being done from multiplying these values by the primitive attributes that we set up in SOPs. So we were setting up all of our properties, like how stiff our soft constraints were, for example, in SOPs, and this lets us do a completely SOP-based setup. Finally, the solver settings are all just left at their defaults, and then we have our usual gravity DOP. So if we press play, we're gonna see that the objects just kind of fall down and aren't really anything too interesting, so we'll need to set up some colliders. Let's set up a ground plane first of all, but we'll see that once we press play it just kind of collapses, so we need to make our structure
a little bit stiffer. One way we're going to do that is by setting up some inactive pieces at the bottom, so that the whole object stays together nicely. If we go back to SOPs, what we want to do is take the points that are near the bottom and set up the active attribute to mark those pieces as initially inactive. So we're going to drop down a Group Create SOP and group the points by a bounding box, and then we'll adjust this to grab some of the pieces that are near the bottom. Once we've done that, we'll drop down an Attribute Create SOP to set up our active attribute: we'll change the attribute name to active, set it to use the point group that we just set up, and then set the value for the points in the group, which are inside our bounding box, to 0 so that they're inactive, and leave the default at 1 so that all the other pieces are active. Now if we go back to the simulation and take a look at it, we'll see that our wall is holding up pretty nicely. So now we need to set up some actual colliders to crash into it and make it do some interesting things. I've set up this wrecking ball geometry here; it's just a sphere that I'm going to crash into the wall, so I used the RBD Object shelf tool to bring that into the simulation, and then we're going to set up some initial velocity to make it move quickly towards the wall, and we'll also set the mass to be a little bit higher so that it's a heavier object. Now if we press play, we'll see that we're getting some more interesting collisions and the wall is breaking apart pretty nicely, except that the wood is staying together a little too strongly; we want it to break a little bit more noticeably. So we'll go back to the wood constraint properties, and first we'll drop down the strength for the glue so that it breaks a little bit more easily, and then we'll also drop the
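Outside of Houdini, the logic of that Group Create plus Attribute Create pair (group the points whose positions fall inside a bounding box, then write active = 0 for the group and 1 for everything else) can be sketched like this; the point positions and box values here are invented for illustration:

```python
# Sketch of the bounding-box grouping + active attribute setup.
# Points inside the box become inactive (0); all others stay active (1).

def compute_active(points, bbox_min, bbox_max):
    """Return a per-point 'active' value: 0 inside the box, 1 outside."""
    def inside(p):
        return all(lo <= c <= hi for c, lo, hi in zip(p, bbox_min, bbox_max))
    return [0 if inside(p) else 1 for p in points]

# Piece centroids near the bottom of the wall fall inside the box and
# become anchored; everything above it stays active.
pts = [(0.0, 0.1, 0.0), (0.0, 2.5, 0.0), (1.0, 0.3, 0.0)]
print(compute_active(pts, bbox_min=(-5, 0, -5), bbox_max=(5, 0.5, 5)))
# [0, 1, 0]
```

In the actual Bullet simulation, pieces with active = 0 are treated as static at the start, which is what keeps the base of the wall from sliding apart under gravity.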
stiffness a little bit so that it's softer after it switches to a soft constraint. Now if we press play and take a look at the result, we'll see that the wood panels are bending much more nicely, and we're getting some much better behavior overall. That just about covers what I wanted to show in this masterclass, so thanks for watching.
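The glue-then-soft behavior described above amounts to: while a piece's glue bond is intact it is held rigidly, and once the impact on the bond exceeds the glue strength the bond breaks and the softer, spring-like constraint takes over. A hedged sketch of that decision logic (this is not Houdini's actual solver code, and the single-threshold test is a simplification of how glue accumulates impacts):

```python
# Simplified sketch of a glue bond breaking and handing over to a
# soft constraint. Names and the threshold test are illustrative only.

def update_constraint(constraint_name, impact, glue_strength):
    """Break glue once impact exceeds its strength, switching to 'Soft'."""
    if constraint_name == "Glue" and impact > glue_strength:
        return "Soft"  # glue broken: the spring-like constraint takes over
    return constraint_name

print(update_constraint("Glue", impact=5.0, glue_strength=10.0))   # Glue
print(update_constraint("Glue", impact=50.0, glue_strength=10.0))  # Soft
```

This also shows why lowering the glue strength makes the wall break more noticeably: more of the impacts from the wrecking ball clear the threshold, so more bonds hand over to the soft, bendable constraint.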
Info
Channel: Houdini
Views: 26,196
Rating: 4.98 out of 5
Keywords: procedural, gamedev, game development, video game development
Id: lG0iZCv-sPQ
Length: 114min 15sec (6855 seconds)
Published: Tue Apr 23 2019