What's new in Maya: The next chapter for Bifrost

Captions
All right, I think we can get started. Hi everyone, thanks for joining us for the Maya 2020.4 launch stream. I'm Jennifer Waters, industry marketing for film and TV at Autodesk, and today I'm joined by the Maya and Bifrost product teams, who are going to give you a first look at some exciting new updates in Bifrost. We will be recording the stream, and it'll be posted on YouTube shortly after the webinar is finished. Like many of you, our entire team is working from home, and as much as we did troubleshoot for today's webinar, we ask that you kindly bear with us if there are any technical difficulties or connection issues.

Let's run through the agenda. First up, our senior product line manager for Maya and the M&E Collection, TJ Galda, will give you a quick overview of this Maya release. He'll then hand the floor over to Bifrost product owner Jonah Friedman, who will go through the new scattering, instancing, and visual programming in this update. Then Faye Agbaje, senior user experience designer, and Ian Hooper, user experience architect, will walk through the new updates to the user experience. Last but not least, Kostas Stamatelos, Bifrost product designer, will talk about the exciting new cloth and FX features in Bifrost. We'll then wrap up with a live Q&A at the end. Throughout the presentation, feel free to submit any questions you have in the Q&A panel at the bottom of the screen; we'll either save your question for the live Q&A or answer it right in the chat. And with that, I'll hand it over to TJ to get things started.

Awesome, thanks Jen. Welcome, everybody. We're super excited; there's a bunch of really cool stuff to get through, so without further ado: the big release and news today is our major update to Bifrost, and there's a lot to dig into, which is why most of you are here, so I'll just give you a couple of quick highlights first. 2020.4 ships with Arnold 6.1, and there's a bunch of really good stuff there that I'll show you in a moment, but I wanted to take a minute to mention that, despite the pandemic and working from home, our team continues to push on stability and performance. I was looking at the recent metrics, and it looks like three out of four people have not crashed since Halloween, which is awesome to hear, and we'll continue to work on that, so please keep submitting reports. There are a bunch more updates, as you can see here; we'll cover a couple of them today, but I just don't have time to go through all of it, so a good spot to check out is our What's New page.

So, Arnold 6.1 ships, and some highlights: there are a number of post effects I'll cover, but again I don't have time to do everything; the release notes are quite thorough, with really good examples worth checking out. You can see here in the post effects that exposure is now there, and we've got color correction, so you can change the redness of the car, for instance, or the white balance. These are all renders, which is pretty neat to see, and you can tune all of these post effects live. We've got vignetting as well; this one is fairly subtle, but there's a bunch of really good stuff to dig into there.

The other big thing that we've shipped is something called nested dielectrics. That's a fancy way of saying that the ice cubes, the glass, and the whiskey in this case all have different properties for internal refraction, in their indices of reflection and refraction, and getting that just right is tricky. As you can see here, the results are pretty good, and there's a bunch of really cool stuff you can do with that: not just drinks, but things like this glass marble. It's a really neat effect, getting those refractions and reflections right, so hopefully we'll see a bunch of really cool glass bowling ball films coming out from everyone soon.

On top of all that, here's a popular thing people have been asking for: we call it on-demand loading of textures. Essentially what this means is that we're really improving memory usage. On the GPU, a popular request was, "Can you make it friendlier for GPUs? I only have so much memory." In this particular example it's saving over 10 gigabytes of peak RAM usage on the GPU, because it's loading textures as it needs them rather than all at once. Not only does that help with memory, it also helps with your time to first pixel, so you'll get renders going faster as well. Definitely worth checking out; 2020.4 has a bunch of really cool stuff.

As I mentioned, there are a number of improvements to our plugins. Here we're talking about Adobe, our friends who make Substance: there's a bunch of really nice things from Substance Designer that now send properly into Maya directly. You can see here that this new Manhattan distance mode in Designer comes through and works as you'd expect, and they made it nice and simple: as you're working in Substance and you want to send that source over to Maya, there's a little icon, you hit the go button, and it literally just does that. In this case we've picked a nice skeleton-graveyard kind of scene, and it shows up in the viewport as you would expect. What's really nice is that you're working with the source, so you can change the different presets, "dirty old" or whatever, and it all just works. That's all bundled up and ready for you to try out before the holiday, so give it a shot; some pretty cool stuff there for sure.

For those of you who haven't grabbed 2020 yet, you're missing out; there's a lot of really great stuff. We have videos on YouTube that you can check out, but we put a lot of time into it: as you can see on the slide, there are over 60 features for animators, like better ghosting, or onion skinning, and the new bookmarks feature, which is really nice. It's billed as an animation feature, but literally anyone can use it: you can grab a range and leave yourself a note, which is especially useful if you're working with other people; you can say, "Frame 12 to 14 is when my explosion happens." That's a really useful thing. There's a lot of other good stuff; like I said, check out YouTube, there are a bunch of good links.

The last thing I'll say is that the theme we've really been trying to hit is that, rather than just shoving in a bunch of stuff, we want to make sure you're able to work faster: not just stability, but making the things you use every day faster, whether it's file load times, which we've constantly been pushing forward, editing your UVs, or snapping stuff in the viewport. With cached playback we've been doing some metrics, and we see that it's saving people almost two hours a day in a lot of cases, which is fantastic. So again, if you haven't tried 2020 yet, it's a good time to do that. But I realize most of you are here, and it looks like we've got over 100 people, to talk about what's coming next in our major update to Bifrost. With that, I'm super excited to hand it over to Jonah, who's going to dig into exactly that. I'll stop sharing, and Mr. Friedman can show you the cool stuff. Go ahead, Jonah.

Thank you, TJ. We'll just get set up here. Okay, so welcome to this webinar, everybody. This is going to be "What's new in Bifrost 2.2," but first, a small amount of housekeeping. What is Bifrost? Bifrost is visual programming for 3D graphics, and
that means programming in a node graph. Our node graph is JIT-compiled with LLVM, which is the same compiler used for a lot of C++ code, so Bifrost is fast. We have state-of-the-art simulation solvers, which Kostas will show you lots of; I'm not going to talk about those too much, but this is one simulation done with Bifrost. There's also general-purpose proceduralism and an Arnold integration, and this is of course rendered with Arnold.

Just a word about our releases and distribution: Bifrost is included with a Maya subscription, so this new version, Bifrost 2.2, ships with Maya 2020.4, which we're all very excited about; there's tons of good stuff in that release. But it's also available for 2018, 2019, and all the rest of 2020, and you can download new versions on AREA. Our release schedule is flexible, much like the Arnold releases, and so is our versioning, much like the Arnold versions; this is our 11th release.

When we left off, in the Vision Series presentation, I talked about how this image here was produced with scattering, instancing, blue-noise and other point distributions, and fields, and we're going to pick up from there. If you didn't see that one, maybe watch it later, but don't worry, it's okay.

The first thing I want to talk about is the scatter pack. The scatter pack is a collection of new nodes for creating and manipulating point distributions. This includes things like creating point clouds with scattering, aligning them to surfaces, and controlling what gets instanced where, but there's also a new graph-programming concept that we've added, called the interpreted auto port.

First, a super quick tour of the scatter pack. This is the graph from the slide that I just showed, aligning a bunch of instances to a curve, and there are two nodes here that come from the scatter pack: this node, which is scatter_points, and this node here, which is called create_instances. This is a pretty simple graph, and these are the first two nodes I've listed here in the scatter pack. But we also have a bunch of other nodes for manipulating points: nodes dealing with point transforms, like randomized point scale, randomized point rotation, and point translation; nodes to manipulate the selection of instance shapes, so you can control what gets instanced where when you have a bunch of different variations or assets; and then some nitty-gritty stuff as well, like orienting points to a surface, baking instance geometry into a mesh, getting the transforms from your point clouds, and nodes to deal with the thing I've alluded to but haven't explained yet, the interpreted auto port.

With that, I'm going to go to Maya here. You all see my Maya, right? Just wanted to make sure; I don't see anybody saying they're not seeing my Maya, so I'm going to assume this is okay. So here's the node graph, and I've got this Maya curve, which, by the way, you can now just drag and drop into Bifrost, and it comes in just like that. I'm doing a distance check to this curve and then using that to orient all these points. The reason I'm showing this as the first example is that this is a very simple graph: these are two nodes that the scatter pack, or Bifrost, ships with, but this node here is something that I've built with visual programming as a companion. All of these things are intended to be extended using visual programming, and that's what I'm doing here. I built this node pretty quickly: you plug in a curve here (I could also bring in multiple curves if I wanted to); it's got a distance, so I can make the distance a little bit bigger; it's got a further area of effect; and I've got this f-curve here, so I can adjust the falloff.

On the inside of this node, if we go in here, there's a little bit of a mess; ignore all of this stuff, it's just diagnostic drawing, and I'll show it in a second. This is the part that actually does the work: it's taking the distance to all these points, and I can display that using these diagnostics. This is another feature, by the way, called terminals: I can make all these nodes capable of explaining themselves. I'm going to unplug the main outputs here, and this is what it's doing: it takes the mesh that I'm scattering on, finds the closest point on the curve, and takes the tangent (let me turn off this view here), so it's taking the tangents from this curve and sampling them onto the mesh, which is then used to drive my instances.

That's the intent behind this concept of interpreted auto ports. Auto ports, by the way, are signified in the graph by this little halo. Right here we have this tangent override, and I just added my own component to control the tangent. By doing this, even as a user of a really small, simple graph, I can add a small component and use that simple graph to express an artistic intent. One other thing here: I've also added another output to my sample-closest-tangent node, which is these weights, and by plugging that in, I can use the same data to adjust where I'm scattering as well. These are really the same idea: this is where you plug in your own visual programming components, or ones that we ship that do things like create weights or create vectors, in order to control all these nodes. Okay, there we go.
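Before moving on, the work that companion node does (find the closest point on the curve, take that segment's tangent, and turn the distance into a falloff weight) can be sketched in ordinary code. This is an illustrative plain-Python sketch with hypothetical names, not Bifrost's implementation:

```python
# Sketch of a "sample closest tangent" style compound: for each scatter point,
# find the closest point on a piecewise-linear curve, take that segment's
# tangent, and remap the distance through a falloff into a 0..1 weight.
# Hypothetical names; not Bifrost code.

import math

def closest_on_segment(p, a, b):
    """Closest point to p on segment a-b; returns (point, unit tangent)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab) or 1e-12          # guard degenerate segments
    t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    q = [a[i] + t * ab[i] for i in range(3)]
    length = math.sqrt(denom)
    return q, [c / length for c in ab]

def sample_closest_tangent(p, curve_pts, max_dist, falloff=lambda x: 1.0 - x):
    """Return (tangent, weight) for point p against a polyline curve."""
    best_d, best_tangent = float("inf"), None
    for a, b in zip(curve_pts, curve_pts[1:]):
        q, tangent = closest_on_segment(p, a, b)
        d = math.dist(p, q)
        if d < best_d:
            best_d, best_tangent = d, tangent
    x = min(best_d / max_dist, 1.0)    # normalized distance: 0 on the curve
    return best_tangent, falloff(x)    # weight fades to 0 at max_dist
```

The `falloff` argument plays the role of the f-curve in the demo: swap in any remapping of normalized distance to change how the effect fades out.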
On the scatter pack nodes I just talked about, you'll notice there are these weights ports everywhere, and these weights ports are what makes all of these nodes really expressive. In this case we have a node that is randomizing the selection between flowers and foliage, so it's mostly foliage, but in some places there are flowers, and I'm using weights to control where we're randomizing the selection. Let me go and look at that. I've got this scene called "trash cans one graph"; let me just adjust the exposure here a little bit... there we go, it just looks nicer.

This graph is pretty similar to the other one: we've got a scatter_points, and we've got this little display of some of the values. It's scattering by radius and using blue noise as the distribution, and you can read that right from the top level of the graph. Scattering by radius means that if I change the size of this grid, I don't get a different density of points per area, because I'm telling it what radius to keep clear around each point. Then there's create_instances, where I'm plugging in my greenery and my misnamed "red flower" shape; it doesn't know what colors things are, it just knows what I named things, and if I name things wrong, well, what are you going to do. Then we have randomize selection by probabilities. If I unplug my weights, which are flagging off the areas around the trash cans, I've got these probabilities: I can set 0.1 on probability 1 here, which says there's one part greenery to 0.1 parts flowers, and I can go smaller as well. If I do this, I'm now randomizing the selection with these settings everywhere, and if I plug the weights back in, the randomization is only applied here, in this one place. Once again, this create-weights-by-proximity node is something that I built with visual programming, to show the flexibility of this stuff: we don't necessarily know how you want to flag things off, out of all the possibilities in the world, and all of these nodes are meant to be extended in this way.

The other thing is, if I increase the size of this plane, this gets a little bit slow, and that's because the blue-noise distribution takes a little bit of time. Let me turn on my heads-up display and show my frame rate... there it is. If I move one of these trash cans around, we're getting maybe two frames a second, which is not terribly great, and I happen to know that almost all the work in this graph is happening right here, in scatter_points. So what I'm going to do is copy that, create a new graph, and paste it in; I don't need the input there; connect that and that, and now I have two graphs.

By the way, we've changed our graph representation: we now have this graph shape, which you can put into the Outliner, and it can be parented and hidden and so forth. Previously we had one node that was just the graph and another node for drawing; now these are combined, which makes a lot of this stuff much nicer.

So I'm going to hide my instances, and now I have a graph that's only doing scattering. I go back to my instances graph, which is now also doing scattering, so we're redundant here; I delete its scattering, and then I bring in the other graph, the one that's outputting points, and plug that in. Now I have a two-stage process represented in the Outliner: I have bifrostGraph1, which I'll call "scattering," and the bifrostGraph instance one. I'll change the order of these just so it's clear: this one creates points.
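The randomize-selection step above can be sketched in ordinary code. This is a plain-Python illustration with hypothetical names, not Bifrost's implementation: each point draws an instance index from a probability table, but only where its weight allows the randomization.

```python
# Sketch of probability-based instance selection gated by per-point weights.
# Hypothetical stand-in for the randomize-selection-by-probabilities node;
# index 0 = greenery, index 1 = flowers, as in the trash-can demo.

import random

def randomize_selection(num_points, probabilities, weights, seed=0):
    """Pick an instance index per point.

    probabilities: relative parts per shape, e.g. [1.0, 0.1]
    weights:       per-point 0..1 mask; 0 keeps the default shape (index 0)
    """
    rng = random.Random(seed)
    total = sum(probabilities)
    selection = []
    for i in range(num_points):
        if rng.random() < weights[i]:      # only randomize where masked in
            r = rng.random() * total       # draw from the probability table
            acc = 0.0
            for idx, p in enumerate(probabilities):
                acc += p
                if r <= acc:
                    selection.append(idx)
                    break
        else:
            selection.append(0)            # outside the mask: default shape
    return selection

# 1 part greenery to 0.1 parts flowers, randomized only where the weight is 1;
# the first four points (weight 0) always stay greenery.
sel = randomize_selection(8, [1.0, 0.1], [0, 0, 0, 0, 1, 1, 1, 1], seed=42)
```

Plugging a proximity-based weights array in for `weights` is exactly the "only around the trash cans" effect from the demo.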
And then this one instances onto them, and now this is much, much faster; I'm getting a much better frame rate, because the scattering part is now cached, and I have this two-stage operation that I was able to express in Maya. The ability to drag and drop graphs into graphs makes it really easy to set this up. If we look in the Node Editor, we can see what we've actually built: our scattering graph here outputs points, and that goes into the Bifrost graph instance shape, which takes the points as an input.

So, the interpreted auto ports: a little more detail, because this is a super important concept. I'm going to go to this scene that has these train tracks, and let me go back to my presentation. As a graph author, or a compound author, I'm creating this compound that rotates strands, and the question, the tension in my mind, is: how do my users want to use this compound? Do they want to control it using weights, or fields, which I'll talk about in detail in a little bit? Do they want to use a property that already exists, like a color set on the mesh, and control it with that? Or do they want to flag off certain points, and so forth? The answer to this tension is the interpreted auto port.

Here's my scene to demonstrate this. If I zoom in on some of these train tracks, we can see that we're visualizing the basis vectors of the strands: blue is the tangent (it's a little easier to see), green is the normal, and red is the binormal. What this rotate-strands node is going to do is rotate around the tangent. Just to demonstrate, I'm going to plug a time node into radians here, which is how many radians it will rotate by, and if I hit play, we can see that it's all just rotating.

In this graph there's more going on, and I'm focusing on this one node: earlier in the graph we're bringing in some curves as strands (I guess I don't need that node), and afterward there's a create-train-tracks node, which creates a bunch of instances along the strands, and so forth. But I'm going to focus on the boring part of this graph, the rotate-strands node. All this node knows how to do is take the tangent and make a quaternion, which is a rotation, out of it, using axis-and-angle-to-quaternion; it says rotate around the tangent by the amount the user specifies.

Let's ignore part of it, which we can build back up in a second: delete, and delete this port, so we can rebuild it. This is the functionality I've shown so far: take the tangents and rotate around them by the number of radians the user specified. If I now add interpret-auto-port-as-scalar, plug in my geometry, which is my strands, and plug in my interpreted scalar port, which I'll name "radians weight," this gives me data out. I'll set this up with a multiply and multiply my radians by the data; this is now auto-looping, and we can see right in the graph that this is essentially a loop over the data. Plug this in, and now we have the same weights port that we see on all of the scatter pack nodes.

What that means is I can plug in, first off, arbitrary data. In this case I'm plugging in my length: this is a node in the Rebel Pack that measures the length along a strand, so it counts from zero up to some amount, and this creates the effect of twisting along the strand. I can also do some very light visual programming right inside the graph on this data. I have this f-curve, so let me pop it open: it's tiled but trending upward, so along the length of the curve the value goes up and down, like a sine wave, but trends upward, and if I move this point up, we can see it trend upward more. The way this looks in the graph is kind of interesting, and I can add a bunch of stuff here to make it more broken up. Let me hide my axes so we can see the intent a little more clearly. So that's interesting.

But let's say we want to do the same thing with point_length, a property coming from somewhere else, and all of this is down inside of a compound (let me delete that; this is all in a compound). I can add a watchpoint and see that these strands have this property, point_length, and rather than going through all the boilerplate of getting that out and plugging it in, I can just give it a string, "point_length." I get exactly the same result as if I'd plugged in length directly, because this node interprets the string into data for me. As the programmer of this node, I'm still just dealing with raw data; the port interprets the user's intent for me.

And then another thing we can do (this is actually not the last thing) is plug in a field. This is a noise field; we can see that it has a frequency to it, and it's now controlling my rotate-strands. I have a number of frequencies, three, and an amount of frequency here; if I decrease the frequency, the effect is a little different: now I have a noise that's on a larger scale.
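Since the graph itself is the implementation here, a small plain-Python sketch may help convey the dispatch idea behind an interpreted auto port. All names here are hypothetical, and this is not Bifrost's API: the point is that the compound author only ever sees per-point floats, while the port decides how to turn whatever the artist plugged in into that data.

```python
# Sketch of the "interpreted auto port" idea (hypothetical plain-Python names,
# not Bifrost's API). The port resolves the artist's input into per-point data:
#   - a constant        -> same value for every point
#   - a string          -> looked up as a named property on the geometry
#   - a callable        -> treated as a field, sampled at each point position
#   - a per-point list  -> used as-is

def interpret_as_scalar(value, geometry):
    positions = geometry["positions"]
    if isinstance(value, str):                  # property name, e.g. "point_length"
        return list(geometry["properties"][value])
    if callable(value):                         # field: sample at each position
        return [value(p) for p in positions]
    if isinstance(value, (int, float)):         # constant broadcast
        return [float(value)] * len(positions)
    return list(value)                          # already per-point data

def rotate_strands(geometry, radians, radians_weight=1.0):
    """The 'boring' node: per-point rotation = radians * interpreted weight."""
    return [radians * w for w in interpret_as_scalar(radians_weight, geometry)]

geo = {
    "positions": [(0, 0, 0), (1, 0, 0), (2, 0, 0)],
    "properties": {"point_length": [0.0, 0.5, 1.0]},
}
by_property = rotate_strands(geo, 2.0, "point_length")     # -> [0.0, 1.0, 2.0]
by_field = rotate_strands(geo, 2.0, lambda p: p[0] * 0.5)  # noise-field stand-in
```

The node's own logic never changes: only the interpretation of what was plugged into the port does, which is exactly why the same weights port can accept raw data, a property name, or a field.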
I can also put my noise through an f-curve here to adjust exactly how it looks. In this case the curve says "mostly outliers": mostly minus one or one, and very little around zero, and we see that here, where it's mostly flat and then suddenly switches. I can do the opposite as well, mostly flat with a few outliers; there are a few little outliers there, probably easier to see if I increase the frequency.

So what this interpreted auto port has done is take this rotate-strands node, which was the most boring node in this graph, and make it a really expressive thing that artists can use to express their intent. The node only knows how to rotate a strand, but the artist gets to say how to rotate it, where, and by how much. This is how we envision these things fitting together.

Here's another example, from the Vision Series. What I'm doing here is taking raw data: I have a per-point property on my mesh, or color per vertex, or a color set, whatever you want to call it, that represents drainage data, and I'm just telling my scatter_points node to use that. I'm able to express the intent of only scattering in the areas where there's more runoff in the drainage data. So that's my introduction to the scatter pack and the interpreted auto ports.

The second thing I want to introduce is another huge topic, which is fields. Fields are resolution-independent volumes; this is based on work that Jerry Tessendorf did, and what they allow you to do is express a spatially varying function in 3D, something that varies over 3D space. We saw a glimpse of that when I was plugging in a scalar field and adjusting it with an f-curve. Fields are interesting because you can author your own fields by combining fields with other fields, and in this case I'm showing an example that I call "hurricane noise," where I've built a field that does spirals: everything spirals outward, and it's flowing along the surface here, so we get spirals that also flow along the surface. Once these fields are authored, they can be shared, of course, and you can use them to control simulations, and anything that has one of these interpreted auto ports. I could use this field to control the scattering of my instances; if you imagine the instances are cars, all the cars would be aligned to this field if I used it the same way I used the curve sampling earlier.

Now we're going to walk through exactly how to build the hurricane noise, to hopefully give you a better idea of what it means to have implicit volumetric tools that let you build things up by combining several of them. To make my hurricane noise, first I need to prepare my ingredients, so let me open up that scene. Here are my ingredients. The first one is a fractal noise field, which I visualize here: it's a black-and-white noise field where there are lower values and higher values, and they change over space. Then we can take the gradient of that noise, and the gradient is what you see in these lines over here: it gives you the direction in which the field's values increase. If you think of this as hilly topography, where the higher values are the higher elevations, then the gradient gives you the uphill direction: as you keep stepping along the gradient, you go up the hill until you end up at the highest point of the hill, at which point there's nowhere left to go.

Then we need an up vector. This is just an arbitrary vector; in this case, imagine it pointing directly at the camera, because we're looking straight down. We're using a constant up direction, which I've not shown. And then we have the cross product of the gradient (that's this one) and the up vector, and what that gives you is a field that's 90 degrees to both of those, which sketches out these topography lines; so I can transform this into this by taking the cross product of this field with another one.

Let's take a look at that. Here's the scene, and if I look around and move my plane around, where I'm sampling things, you can see that everything changes: the noise is changing, and the gradients I'm visualizing are also changing as I move through the noise field. If I move it sideways, we get something a little different, where the points we're growing all these lines from change, but the values don't seem to, because we're just moving through this 3D space; this is a function in 3D space.

So let's look at what we've done. We have a vector field scope; let's talk about this one first. Its job is to visualize fields, so we can see all of these lines here, which are being created by it. I'm going to turn off my grid so we just see the lines, and they go in the uphill direction. Next, I have this flatten-field node, which, if we go inside, is really just multiplying my field by something that will zero out the Y component, so that it's flat; it's easier to understand that way. But actually (let me show my grid again), the uphill direction of this field is not confined to 2D: the value can increase upward and downward as well, so that is the actual gradient of the hill; for the purposes of this, though, I'm just going to flatten it for simplicity.
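The gradient being visualized here can be approximated numerically. A minimal plain-Python sketch (not Bifrost code), using central differences over a simple stand-in scalar field:

```python
# Central-difference gradient of a scalar field: the "uphill direction."
# Plain-Python sketch, not Bifrost code; the field is a simple stand-in
# for the fractal noise field in the demo.

def field(p):
    """Scalar 'elevation' over 3D space (stand-in for fractal noise)."""
    x, y, z = p
    return x * x + z * z   # a bowl: uphill points away from the origin

def gradient(f, p, h=1e-5):
    """Approximate the gradient of f at p with central differences."""
    g = []
    for axis in range(3):
        lo, hi = list(p), list(p)
        lo[axis] -= h
        hi[axis] += h
        g.append((f(hi) - f(lo)) / (2.0 * h))
    return g

g = gradient(field, (1.0, 0.0, 2.0))   # approximately [2.0, 0.0, 4.0]
```

Because the field is implicit, the gradient exists at any point you care to sample, at any resolution, which is the property the demo keeps leaning on.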
Then I have my up-vector field, which is a very simple field: it's just a constant field, and what you get is basically an arrow, a flow line, that goes upward from every point; that's the visualization of that field. Now take the gradient field and cross it with the up-vector field. This is a linear-algebra-type operation; to explain super quickly for people who may not be familiar (hopefully you can see my drawing here): if I have one vector and another vector in 3D space, then there's a third vector that is 90 degrees to both of them, and that's what a cross product gives you. So if you take the uphill direction and cross it with an up vector, you get something that resembles topography contour lines. What we end up with is one field that represents the uphill direction in this noise field, and one that represents the around-the-hill direction: if we follow the latter, we just stay at the same elevation and go around the hill.

Those are our ingredients, and by putting them together we can build our hurricane noise. The way to do this is actually pretty simple: we just need to blend the flows. We have these two flows, around the hill and up the hill (the topology lines and the gradient), and if we blend the two, we can have a formula like "go 25% up the hill but 75% around the hill," which amounts to spiraling up the hill, and now we have a noise field that produces these really interesting spiral shapes.

Let me open up that next one. Here it is: a very similar graph as before, except now we have this linear-interpolate node, driven by time, so at zero we have only topology lines, just going around the hills, and at the end here, at frame 24, that's going to be a 1.0, because
it's seconds um and at 24 frames a second this is the uphill direction and somewhere in between we have these spirals and we can kind of pick like what value that we want and if i move if i move this around we get this uh this we can kind of sample this in different spaces and see this kind of swim around in really cool ways as we move through this noise field so what we've built here is is by combining all these fields together we built a field that is a uh like like it has this this uh spirally noise at infinite resolution with no bounds i can move this over you know way over here and it's still there there's no like uh it's all implicit and doesn't exist until we actually go and uh sample it like i can go and make this grid here smaller which that's maybe too small so now we're getting like a ton of these values yeah and the other thing that i've done is i've taken the flatten and i've replaced it with the project vector so this is another kind of linear algebra operation and what we're using this for is we're using it to kind of flatten the field onto an arbitrary normal and we need to do this because we uh because we want this to be in 3d we don't just want our factor to be one vector we want it to kind of like represent a flow around our turtle if you remember our original intent and one kind of cool thing is uh we have this this icon on project vector which is a visualization of what project vector does so we're taking this vector here this this first white one and then projecting it onto this other white one and the result is this blue one right here which is kind of the same uh length as this one and really that's the parallel vector that's being shown in the blue here um we also get an orthogonal vector which uh if i can draw that really quick it's not represented in the icon but uh here we go it's it's this one right so this this node kind of takes takes a vector projects out to another one and then gives you both of the kind of components and what what 
this uh if you don't want to think in linear algebra which i don't blame you i find this a little bit oblique too i can take this project vector i can collapse to a compound and kind of give it a name that's more expressive for my purpose here so i can say like flatten onto so i can say call the spectre field flattened field so flatten onto we flattened this vector this this field here are our uh taking this right it's not flat and we're flattening it onto this vector and the result is a perfectly flat field here so now we have our our ingredients all mixed and we're ready to apply them to the turtle so the next thing that we need is we need the up vector um and the up vector is a field two so in this case i want the up vector to always be flowing away from the turtle and this looks like normals it's actually not for this case it doesn't actually matter that's not normals but if you look carefully here you can see that these lines here are flowing so where they kind of collide with each other they kind of instead flow away from the turtle so this is another kind of ingredient that we can use with fields and the way that we do this is we convert our turtle to a volume specifically a level set or sign distance field volume and the gradient of that is the direction that flows away from the turtle and let me show you guys that so here's the application of this um almost like a cooking show where i put this in the oven earlier so here we go there there's our turtle with our hurricane noise applied i think that's a pretty cool look but uh if i go and kind of undo a little bit of this graph and i plug in my up vector again uh now this is what we get we end up with the same swirly noise but it's it's kind of flat in the wrong way it hasn't been flattened to the surface it's just flattened to a to a y vector here and then the other thing is where i'm sampling it so once again i i kind of built a second graph here so i built this graph and what this graph does is it scatters 
a bunch of points on the turtle with blue noise; this is where all of our flow lines will start from. This one converts our turtle into a volume, specifically a level set volume, and I'm outputting that as well. Then in this other graph I've dragged and dropped that in, and I have this node called voxel_field: I'm converting my volume into a field, so I can use my explicit volume data in the field system. Then I can take the gradient of that, so this is the gradient of the voxel field, and if I use that to visualize (let me hide the turtle and show the flow-line graph again), we get this right here, and if we look carefully we can see that this is indeed the direction that flows away from the turtle. Then I use that as my up vector, with all the things we talked about before, and we have our flow noise around the turtle. I hope it's clear that there are infinite variations to this. If I change my frequency, it's pretty obvious what that does: a higher frequency gives you smaller hurricanes. If I change the number of frequencies to something lower, we end up getting a smoother result; if it goes higher, we get a more extreme result. I can also increase my resolution here. We're at 15 samples per second, if we consider this a velocity, because this is a flow; this is saying how many samples we have. If I increase it to 50, we get this sort of thing, and I can just change it back. I can change my ratios; these are just parameters of the noise. I can even plug time into some of these, like the frequency ratio... I think I might have turned my settings up a little too high here, this is a bit too slow. Sorry, one second, let's go back to 15.
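To make the blending idea concrete: the "around the hill" direction is the cross product of the noise gradient with an up vector, and the spiral comes from linearly interpolating between that direction and the uphill (gradient) direction. Here is a small plain-Python sketch of just that math; it is a conceptual illustration, not Bifrost's actual node API, and the function names are made up for this example.

```python
import math

def cross(a, b):
    # cross product: a vector perpendicular to both inputs
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/n, v[1]/n, v[2]/n)

def lerp(a, b, t):
    # blend two flow directions: t=0 -> all of a, t=1 -> all of b
    return tuple((1 - t)*x + t*y for x, y in zip(a, b))

def flow_direction(gradient, up, t):
    uphill = normalize(gradient)                 # straight up the hill
    around = normalize(cross(uphill, up))        # "topology line" direction
    return normalize(lerp(around, uphill, t))    # e.g. t=0.25 spirals up

# sample usage: gradient along +x, up along +y
d = flow_direction((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.25)
```

Driving `t` from time, as in the demo, sweeps the flow from pure contour lines (t=0) to pure uphill (t=1), with spirals in between.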
That's good enough for this purpose. So we can see that this is changing in interesting ways, all along the crazy parameter space of this. I can also scale this noise in other ways, like in just one axis. I guess what I'm trying to say is that this is a very rich vein: this operation that I demoed has infinite variations to it, and when you shake this tree, cool things fall out of it. I didn't really set out to make hurricane noise; I just kind of found it. So anyway, back to the presentation. Here's a rendered result: we can render strands, so this is a cool way to make things like nice little sculptures. I especially like the way the lines flow in and then form these hurricanes and so forth. Then in the vision series I showed this one, the ragged clouds example, and it's really the same stuff: I'm authoring a field that has exactly the characteristics I want and then deforming a volume with it, and the result was this mist down here. And here's another result from the same technique; I call this the T-Rex nebula. If you look really carefully, the head is over here, the leg is here, the little arms are here. This is actually the T-Rex model from the Content Browser with the ragged clouds applied to it, which I used to create this nebula instead. Then I made another, more diffuse one, which is the other volume here, and put some lights in it. Before I end, just a couple of words about changes to our instancing again. We have render-archive instancing, and that's not new: we can instance Arnold .ass files with it. Once we have other renderers on board, this should be a concept that's applicable to a lot of different renderers, not just Arnold, but right now it's the Arnold .ass files. The nice thing about this is that I can take my million-polygon car (maybe it's 100,000, I don't remember exactly, and that's kind of the point: I don't need to think about how many polygons are in the car), instance the entire car asset, and it supports the entire feature set of Arnold. So it's great for instancing complete assets. I can do time offsets with this, so I can take an animated character who's cheering or something like that, and then instance a whole bunch of them to create a crowd. But there are a couple of limitations: motion blur is baked in, and only the exported frames are usable. So if I'm doing 100 frames of a sequence, I have frame 50 and I have frame 51, but I don't have frame 50.5, and on frame 50 the motion blur is baked in, so I can't control it after the fact; I'd need to go and rebake my .ass files to get that to work. So we have a new thing called Arnold Alembic instances. Arnold has an Alembic procedural, and this is that procedural controlled from the instancing system. The way you do it is you export an Arnold Alembic archive, and you can export your materials using MaterialX, and then we can bring them back in later with the MaterialX operator. This supports subframes and it supports arbitrary motion blur: if you look carefully, these dogs are all motion-blurred and offset from each other. If I want to slow this animation down by a factor of, like, a hundred, and just have them in super slow motion, moving between two frames of their animation, I can do that, because it's just going to sample the Alembic file at the subframes, and that's fine. So this is also usable for things like stadium crowds and retimed walk cycles and so forth, and there's a little bit of a how-to there for how exactly you do that.
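Because the Alembic procedural can sample at arbitrary subframe times, the retiming described above boils down to simple arithmetic on frame, playback speed, and per-instance offset. Here's a minimal Python sketch of that arithmetic; the function name and parameters are illustrative, not part of Bifrost's or Arnold's API.

```python
def alembic_sample_time(frame, fps=24.0, speed=1.0, offset=0.0):
    # time (in seconds) at which to sample the cached animation:
    # 'speed' retimes playback (0.01 -> 100x slow motion),
    # 'offset' shifts each instance so a crowd isn't in lockstep
    return (frame * speed + offset) / fps

# frame 50 at normal speed samples the cache at 50/24 seconds
t_normal = alembic_sample_time(50)

# slowed down 100x, frames 50 and 50.5 sample almost the same moment,
# and the subframe in between is still perfectly well-defined
t_a = alembic_sample_time(50, speed=0.01)
t_b = alembic_sample_time(50.5, speed=0.01)
```

This is exactly why baked .ass archives can't do this: they only contain whole exported frames with motion blur frozen in, while the Alembic archive can be evaluated at any fractional time.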
In the Arnold menu there's the Arnold scene export to Alembic, and then under Arnold utilities, export selection to MaterialX. Feel free to take a screenshot there, and hopefully we'll be able to post some examples about that soon as well. So that's all for my section. Let me see if there are any questions to answer really quick, because I'm running three minutes fast right now. There's one asking whether the lines are dependent on UVs at all. We're talking about these strands right here: no, but they could be. In our instancing, let me bring up the curve one again. Here's the align-by-curve: if you don't go and control the tangent yourself, the tangent is that flow along the curve. Let me grab a torus to scatter on instead, bring that in, and plug it in. There we go, we still have our tangents here; let me unplug that, so now we're using the standard built-in ones. This is actually building a tangent space out of UVs, so this flow that we see around the torus is based on UVs. What you could do is create tangents like that, change that to a field (you'd have to go through a volume), and then you could have a field that's flowing along the tangents created by the UVs. A bit of a roundabout way of doing it, but yes, you could do this today if you know how. I see Phil has a fields question: with the hurricane example on the turtle, can we create peaks in the highest parts of the hurricane patterns, to then use this as a workflow for volumetric cloud environments? Yes, you can. You would end up grabbing another field and putting it into there. If I go to my final result here on the hurricane noise... there we are. This is the original field, where we had the peaks: you could put it through an f-curve and then move it along the normal that we're generating, based on the value of this field. I don't have a test for that right now, but I did actually try it. It was not quite as good as I was hoping, though, because these flow lines all trace along this path, so it was going up and down based on the length along the line. I think what you'd want to do is maybe a second process that moves these along the normal after the fact, based on this field. It shouldn't be too hard; you should be able to do that with just a displace_points node. Okay, and I think I'll hand it off to Ian and Faye. Can everyone see my screen? Is it working? Can you hear me? Yep. All right, let me just pull things around. So hi everyone, thank you for joining us to celebrate what's new in Bifrost. My name is Faye and I'm an experience designer on the Bifrost team. Ian, who is our user experience architect, will join me as we explore Bifrost 2.2 from a user experience perspective. Let's start with quality-of-life improvements in Bifrost 2.2. Usability is not all about flashy UI: performance and stability are critical parts of the user experience for a new product. With this in mind, we have made significant performance improvements for Aero simulations, and this includes better memory allocation and consumption as well as improved handling of loops. We've also fixed a number of bugs, improving overall stability and code quality: we fixed over 80 bugs in Bifrost 2.2, and we have fixed more than 350 bugs since our first release last year. Now let's talk about improvements that impact your day-to-day activities in Bifrost. Improvements to the typical graph-authoring workflows can be subtle, but they make a huge impact on ease of use. With escape termination you can now use the Escape
key to stop graph computation instantly. This allows you to abort long simulations and resume your work almost immediately. If you've ever been nervous about trying a new simulation that you're not quite sure of, then you understand the struggle of having to wait for the wrong simulation to finish; with escape termination you have the flexibility to confidently try new simulations quickly. Also, when creating your graph, if you wire it in a way that's not very efficient and your system starts to slow down, you have the option to quickly stop it and make changes. We've also integrated some drag-and-drop enhancements which improve workflows across Maya and Bifrost. Just like you can drag a mesh into Bifrost from the Maya Outliner, you can also quickly and easily drag a curve from the Outliner into the Bifrost graph. Similarly, you can drag a Bifrost graph from the Outliner into other Bifrost graphs. This allows you to quickly hook graphs to one another and unlocks other workflows and frameworks for graph creation: for example, you could choose to create multiple smaller graphs instead of one large graph, and since the output of your graph is linked to Maya's caching system, you can take advantage of this by compartmentalizing your graphs in a more efficient manner. As Jonah demonstrated earlier, there are two new features in Bifrost 2.2 which have a transformative effect on how you work. The first feature is terminals. Terminals are flags on compounds, with nested terminals, which allow you to toggle the outputs at any level of the graph. With terminals we've introduced a new UI concept which doesn't exist anywhere else in the graph: you can easily enable and disable your outputs without having to break connections. Previously, if you wanted to view and compare the outputs of two or three smaller graphs in the graph editor, you would have had to constantly make and break connections to the output, and with terminals you can quickly control that with one
click. So you're able to control what's being computed and displayed easily, with just one click. Next up is fields. Fields are a new system for creating spatially varying functions without bounds or a resolution. Fields allow you to expand your workflow beyond fixed influences: you can now create resolution-independent custom influences from scratch, using simple mathematical networks at the top level of graphs. We have also added many new nodes and made improvements to existing nodes; here's a brief overview of some of these changes. For our volume tools, we've made several enhancements to various volume manipulation nodes, such as the ability for all of our volume nodes to support adaptivity. With this multi-resolution support you no longer have to pay the performance cost of having high-resolution voxels everywhere: you can either choose where to apply high resolution or let the system decide automatically, speeding up your workflow. With MPM, we've massively improved collisions across all of MPM and modified the MPM solver settings to include new properties for quicker setup. This means you now have a plug-and-play solution which allows for easy and quick creation of high-quality MPM collisions: collisions with surfaces for materials such as sand and snow, self-collisions such as a cloth folding over itself, and overall collisions between everything, say sand interacting with snow which is also interacting with cloth. And as Jonah showed earlier, we've introduced the scatter pack, a collection of specialized nodes for scattering and instancing, which have been integrated into Bifrost's core node library. This allows you to easily find and use these nodes, and unlocks even more workflows and capabilities for you. Now I'll hand off to Ian, who's going to review some of the other usability improvements we recently made. Great, thanks Faye, I'll just share my screen. Okay, so all of those new nodes have
added some great new capabilities to Bifrost 2.2, and of course we've been doing this all along throughout the past year. Each enhancement might individually seem small, but when you add it all up it can make a big difference. For example, we recently added auto-looping icons and other identifiers on ports and wires to make it easier to understand the data flow. We also added type and value information to the nodes, so you can better understand what is happening on each node. Node suggestions now work anywhere there's an ecosystem of nodes that work together in their own networks. We added quick-create shortcuts to let you create relevant nodes in just a couple of clicks. You can also now wrangle the type of a port directly, without needing to create a value node. We introduced an f-curve editor into Bifrost and have continued to make small improvements to it. We added a file browser widget to let you select a file directly from the parameter editor, and of course over the course of the year we've added many new example graphs to the Bifrost Browser. With all the new nodes that have been added, we've also created many new icons to help visually distinguish them. So here's an example of how a couple of the enhancements I just mentioned can make a more comprehensible graph (Jonah talked about this quite a bit as well). The auto port here is indicated with this bar, to create a more readable graph, and auto-looping icons on the ports here indicate when the data will be cycled in a loop: in this example we can tell that the node will loop over an array of vectors to produce an array of lengths. Here you can see how critical values in a graph can now be read directly on the node: in this node the property being set is "data dump", and on this node we can see that the value is 100.
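The auto-looping behavior described above, a node looping over an array of vectors to produce an array of lengths, is essentially a map operation. As a rough conceptual sketch in plain Python (not Bifrost's node API):

```python
import math

def length(v):
    # length of a single 3D vector (x, y, z)
    return math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)

# An "auto-looped" port behaves like mapping the node over the array:
vectors = [(3.0, 4.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 2.0)]
lengths = [length(v) for v in vectors]  # -> [5.0, 1.0, 2.0]
```

The node itself only knows how to compute one length; feeding it an array and letting the loop icon appear is what turns it into the per-element version.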
The result of this is a highly readable graph, where a simple screenshot of a graph can be shared with someone and they can step through it to understand what is happening. You can easily see how the value of 200 for this long integer goes into this random-value array along with the float3, and you can see it looping through the normalize node, and so on. In addition to a more readable graph, we're also working on day-to-day usability through features like the node suggestion shortcuts: they help users learn while doing, by offering node recommendations to plug into various ports. Another way that we're reducing friction is through type wrangling. Bifrost has a type system, and type wrangling allows manipulating it with one click, as well as providing hints and discoverability as to what nodes are capable of. And now I'm going to let Kosta share with you some of the exciting effects improvements that are in Bifrost 2.2. Thank you Ian, thank you Faye, thank you Jonah. Let me just share my screen; is everything good on the screen-sharing part? Yup. All right, thank you. So, I'm Kosta, the user-friendly version of Constantinos, which is my real name, and today I'll be taking you through all of the updates that involve effects and cloth. Of course, Faye already mentioned some of that, and I'm going to break it down into more detail for you. The first thing I'd like to show is the set of features that are overarching across all of effects. For those of you who are not yet familiar, we have several simulation systems in the Bifrost graph right now: the Aero and combustion simulation systems, a generic particle system, and the MPM solver, which is responsible for things like sand and snow and cloth. We have implemented some features that affect all of those, so let's start with that. So the first thing,
which has already been mentioned by Faye, is escape termination, and I really want to emphasize this point a little bit more. Anybody who's used the Bifrost graph and has done a very heavy combustion simulation, or anybody who's used the older Bifrost liquid, knows that it's pretty painful to have to sit there and wait with your hand holding the Escape key in order to cancel a simulation. So I'm super excited to announce that we finally have instant escape termination, so that you can abort any simulation as it's computing, mid-frame. And if you do wish to resume the simulation after having escaped, we offer the option to save the last computed frame into memory, so that's available as an option if you want to resume. That's the first big overarching feature I wanted to discuss. The second one is field influences. We've already spoken a lot about fields, and we've already seen how powerful and versatile they are for Bifrost in general when used as a visual programming tool, and I'm super happy to announce that we've also leveraged fields to reinvent our entire influences system from the ground up, for all of the simulations available in Bifrost. This doesn't mean that everything that existed before is gone; as a matter of fact, all of the pre-existing influence compounds have been rebuilt, so they're still there, they're still the same, and they still work the way they used to. But if you dive into them, you'll see that they're built out of fields, and they're fully editable and customizable right there at the top level of the graph, so you never have to dive into the Aero solver, or any solver, to build new influences. And for anybody who's still new to the concept of fields, you can take all of the influences that we ship with and study them: you can explode them, see all the fields that build them up, and then you'll
have that as a learning opportunity, to be able to start building your own. And because everything is built out of fields now, and Jonah already showed how all the fields have scopes associated with them so you can see what's going on in the viewport as a diagnostic tool, all of that is now available for influences as well. Before, we didn't have a way to visualize the influences that were affecting your simulation; now, because they're built out of fields, we have that possibility too. I'm just going to show you a couple of very simple examples. Here I have a generic particle system, and I'm going to add a turbulence influence to it, tweak it a little bit, and plug it in. If we play that back, you can see that it still does everything it used to do. Also, instead of connecting my particles to an output port, I'm playing around with the terminals: in this case I'm feeding the output directly to the viewport (the P stands for proxy). And now if I dive into my turbulence influence, I can actually see all of the fields and all of the mathematical operations that were used to construct it, whereas before, with the old influences, if you looked inside the compound it was just a bunch of abstract set-property nodes that were used to send instructions down to the solver. So it's really cool that you now have a concrete network of visual programming that shows you how this influence has been constructed, and obviously you can explode that, have everything at the top level of the graph, and start to either study it to learn how it works, or customize and alter it in any way. So let's have a really quick look at how you would go about creating an influence for a simulation out of fields. Let's say I wanted to have something that's kind of like a wind influence. I would start with a vector field and give
it a direction along the x-axis, and now I want to use that as a wind. The thing is, you can't actually plug the output of a field directly into the influence port of the simulation (in this case simulate_particles, but it could be simulate_aero or MPM or whatever); you actually need to tell the solver how it's going to interpret this field. I have a vector field and I want it to be used as a wind, so in this case I want to use this vector field as a force for my simulation. Rather than feeding it directly in there, I'm going to take this other new node called set_influence_force. This accepts any kind of field as an input for the force, and now this is packaged up as an influence for the simulation. If I look at this, I have a very slight, very weak wind in the x direction, so obviously I want to make it more powerful. As Jonah already showed, you can apply any kind of mathematical operation to fields, so in this case I'm going to take a scalar field, use it as a kind of magnitude for my wind, multiply the two, and feed that result into my set_influence_force; now I have a much more powerful wind. Another really cool thing you can do is query, and directly set, the value of solver properties. When we're simulating, we have a whole bunch of properties being solved for, like the point velocity and the direction and all that stuff, and sometimes you want to look at one of them, hijack it, do something to it, and then send it back to the simulation. This is why we created the node called property_proxy_field: it allows you to query the value of any property in your simulation, do things to it, and set it back. So let's just see how that works. In this case, let's say I want to take the point velocity and do something to it. So I just specify that the
property I'm looking for is the point velocity, and I set this node to be a vector, because obviously velocity is a vector. Then let's say I just want to slow it down, some kind of very primitive drag or damping. So I'm looking at my velocities here and multiplying them by a scalar field, in this case 0.9, which means that every step, the velocity of the point is going to be multiplied by 0.9. Now, we already saw how the first vector field was being used as a force for the simulation, using set_influence_force; but here I want to take the velocities, do something to them, and then set them back, so instead of the set_influence_force node I have to use an influence set-property node. Again, I just specify which property I'm setting, in this case the point velocity, and now I can feed that directly into the solver. If we play that back, we'll see that we still have our wind acting, but it's much more damped and slowed down. All of this is to say that, in order to use fields as influences for a simulation, you have a whole bunch of other nodes that tell the simulation how it's going to interpret the field: is it a force, is it setting a property value? There are other ones if you want to use fields as masks; for example, if I have a wind and I want to use a noise field as a mask on that wind, there's a set-mask influence node, and so on and so forth. So you have all the tools available to use fields in any way you want to affect your simulation, and obviously this is just the tip of the iceberg: there are endless possibilities, as Jonah has already mentioned, and all kinds of field networks that can be built to create custom influences. So let's have a look at some of the fields we've made internally. In this case I have an align-with-velocity type influence, and I'm
driving the color using the orientation of the instanced particles. In this case we have a pulse type of influence applied to cloth, which is pretty weird but kind of cool at the same time. We have a spin influence affecting Aero, and the last one here is kind of a boiling-sand or popcorn-sand type influence applied to MPM. With that, we end the overview of the general simulation updates, and we can now dive into the work that's specific to the individual simulation systems, starting with MPM cloth. My slogan for this section is "when cloths collide", because, as Faye quickly mentioned, we've actually spent a lot of time and effort improving the quality of the collisions for MPM in general, and even more so for cloth. If we do a quick recap of what we did in version 2.1 of Bifrost: we put significant effort into improving the calculation for colliding with surfaces, with much more accuracy at lower resolutions and more robustness overall. This was for all the systems, be it the granular particles or snow, or be it for cloth. We had seen that we weren't getting what we wanted as far as accuracy with non-simulated collision surfaces, like the ground or some other projectile or whatever, so we spent a lot of time making that better. If we look at this montage really quick, it demonstrates some of the results from the work done in version 2.1: it improves the collisions of cloth and shells with non-simulated surfaces such as the animated body of a character, other fast-moving objects, other passive collision surfaces like the ground, or any irregular type of collision surface. The issues resolved were really related to how difficult it was to get accurate collisions without requiring a very small solver detail size, and all the various scenarios that were leading to penetrations were also worked on. And so in this
example with the mushy sand here, we can see that the resolution of the grain of the MPM granular solve is actually pretty coarse, to the point that my inner collision surface, which is moving quickly, is much smaller than that, but we're still getting a very accurate collision at lower resolution. So that was our focus leading up to version 2.2. Once we started working on version 2.2, we turned our attention specifically to self-collision: how those simulated bodies interact with each other, specifically cloth on cloth, though the benefits that were introduced also help with multi-solves, when it comes to things like cloth on sand. What we did is introduce a new self-collision method which detects and solves small discontinuities at coarser resolutions. So what does that mean? Let's say you have a garment, say a button-up shirt or a jacket which is open, and the two sides are really close together: you actually want those to move independently, and you don't want to have to use a very high resolution for that to be solved. To quickly mention Nucleus cloth, it never had this problem; this was something we were seeing in MPM that we wanted to fix. And by fixing these discontinuities we also took care of a lot of other problems we were seeing, such as cloth sticking to itself: if you had a simulation where a flap of cloth folded over and landed on top of itself, sometimes it would stick there, depending on your resolution. We were also still seeing some interpenetrations when the cloth was colliding with itself; we never want cloth to pass through itself, so this affects that as well. And as I already mentioned, it also facilitates coupled solves, where you actually want everything to collide properly and not stick and not penetrate. And so if we quickly look at
this video here on the left side we have the old behavior so we have this piece of cloth here that has a whole bunch of cuts and as it moves around at this low resolution you can see that it's not really detecting the cuts and it's actually treating everything like one continuous piece whereas with the new method we are detecting the discontinuity and solving for it correctly and the same thing with these post-it notes with the old method on the left here all of the post-its were kind of being treated as one big blob whereas with the new method they are being individually isolated and the behavior acts accordingly and again this works at low resolution you don't have to use a very high resolution for that to work and so if i take into account the entire body of work that we've done in mpm for versions 2.1 and 2.2 to improve all aspects of collision be it with surfaces or with self-interaction we can now do some pretty impressive simulations with mpm and everything is a very quick plug-and-play type setup faye also mentioned that a little bit we've done some tweaks to the way we present all the properties in the solver settings and everything that you need is turned on by default so that you can really get started very very fast and you don't have to break your head trying to figure out oh my god what do i have to change to fix this interpenetration or why is my cloth sticking to itself all of that stuff is really taken care of for you by default and i particularly like this simulation right here because i'm actually using mpm shells to kind of mimic a sort of semi-soft body simulation so the four objects that you see here are all mpm shells and these ones are tweaked so that they're a bit softer than these two which are much harder and again nothing is sticking to anything else and everything is behaving the way it should so we're really happy with these results and then finally obviously one of
the most important uses for cloth is for our characters we want all the collisions with the body to act properly we want all the self-collisions to act properly we want to be able to have several layers we don't want anything sticking to each other and all of this works really well right now and this mesh is actually a really terrible mesh it's a mesh that you'll probably never use in production for a cloth simulation and yet the mpm solver still does an amazing job at making this work properly it actually even starts with interpenetration like at the armpit and the groin area and yet it doesn't really create any problems for the solve so that's really cool and so that's it for all of the mpm work that i wanted to talk about oh actually no sorry i have one more the christmas tree how could i forget and so this is just a little fun example that shows all of the different elements working together and so what we have here is obviously a snow simulation the branches of the tree are mpm shells the garland is an mpm cloth and everything is colliding with the trunk of the tree which is also a constraint for the branches and there's collisions with the ground and then all the inter-collisions which are all affecting one another everything works as expected and again this took me literally three to five minutes to set up because everything is just ready to go and the simulation took let's say maybe an hour at this resolution so it's pretty awesome all right so that's really it for mpm now i'm gonna move on to aero and for aero my slogan is it's all in the details now why is that the slogan well because in version 2.2 we really turned our attention to the ability to create awesome amounts of detail and behavior for aero and also making sure that that runs as fast as possible and so we've actually done five separate things for version 2.2 the first
one is we implemented a physical viscosity model the second thing we did is a special b-spline render sampler for arnold we have in-sim detail refinement which is a form of up-resing with feedback back into the simulation we've developed a complete uvw texture coordinates advection workflow and a lot of important performance fixes so let's have a look at each one of these in more detail so the first one which i'm really excited about is the new kinematic physical viscosity option that exists on the aero solver global node so simply by changing this from zero to something else depending on the scale and size of your scene you can mimic some really specific physical phenomena such as this which is usually referred to as kármán vortices or vortex shedding it's a phenomenon that you'll sometimes see when you have an airplane flying through a big cloud or when you have a big cloud passing by a mountaintop and so this is a very specific physical phenomenon that requires physical viscosity to simulate and so just by turning this on you get the results right away you don't have to create any specific custom influences or use any tricks or hacks it just does the thing you want it to do and so that's for the bigger scale and then for the smaller scale for things like cigarette smoke again there are various ways to get the effect but having the physical viscosity really does like 99% of the work for you and it looks like real cigarette smoke so that's just a little bit there about the physical viscosity so the second thing i want to talk about is the new b-spline interpolation for rendering so here we see this very common issue that a lot of fluid solvers have especially when we're talking about combustion it's this common issue of orthogonal artifacts that you see usually in the viewport and that you hope won't show up in the render and so our goal here was to create a new sampler at render time that
will kind of smooth these out without losing all of the crisp detail that you want and so here we see the problem in a combustion simulation and also in the fog density even though it's more subtle in the fog density it's still there if you look close enough and so there are many ways to deal with these artifacts in the simulation or in the graph the primary thing you would probably do is increase the resolution but you'll always encounter this to some degree if you zoom in really close other options you could use are like sharpening filters or the smoothing of properties like temperature and so on but luckily for us our resident researcher robert bridson cooked up a b-spline interpolation scheme for aero which can be sampled at render time using arnold's ai standard volume material by setting the interpolation to tricubic mode and so if we take a look now here on the left we see what it used to be and on the right what it looks like with the b-spline option on so again the tricubic option on the shader and so this simulation is pretty low resolution to begin with and so even at that we get some pretty good results with the new b-spline filter and if we look at the combustion simulation you know it's not the holy grail it's not going to get rid of all the artifacts but we can see that it's already much much better and we haven't really lost any of the detail it doesn't look blurry or anything like that and usually an explosion like this would have a lot more noise a lot more vorticity in it which kind of masks some of the orthogonal artifacts a little bit but using that with the b-spline interpolator is gonna make things a lot better okay now for the next thing i want to speak about we're going to look at this simulation i did which was just kind of inspired by ink and water let's say and i am using both of the things i just mentioned so this has some physical viscosity enabled and it's been rendered using arnold and the b-spline
interpolators already it looks pretty good but let's say that i wanted to add more details inside of these bodies of smoke or ink but i didn't want to really affect the rest of the simulation you know i want everything to look the same but i just want to inject more detail in there and so we've implemented a couple of ways to do this and the first one is by doing a form of up-resing inside the simulation so ian already mentioned how we have some new right-click menus that allow you to quickly find a new compound and add it wherever it's relevant and so my aero solver settings node is there for any aero simulation that i'm doing and it's going to give you all of the basic fundamental settings you need which are common to any type of simulation and then if you right click on this additional settings part you're going to have some opportunities to add more settings for more functionality and one of the things that you can add is this compound called the aero refinement settings node now this is newish it used to exist under a different name it used to be called aero sharpening settings and it used to contain only a sharpening filter and another method which emits points to sharpen the fog density but now we've added a third option and actually this is the one that's on by default it's called enable refinement and so what this does is it's essentially an up-resing of the temperature and fog density properties while keeping the velocity computation at the base resolution so this is not only a faster way to get more detail since we're not refining the entire solve but it also ensures that the velocity is not changing and so the overall behavior of the simulation will stay the same moreover the reason why this version of up-resing is not a post process is because the refined fog density and temperature are actually fed back into the simulation and they're taken into account for the computation of the next frame and so this makes for a more
dynamic and realistic way of producing more detail so all you have to do is add this node it's on by default and then you can choose by how many levels you want to refine so i'm just going to leave it at its default and see what that gives me so if i play my simulation back now i can see that it's actually much more crisp and if we compare it to what it was before so this is what it was before and this is what it is now you can see i have much much cleaner edges i have a lot more detail and action happening inside of the body of these jets of ink and it just overall looks a lot better and so this is one of the things that we've added to help boost details for any given simulation so it is a little bit slower than the default but it's not as slow as it would have been had i increased the general resolution of my solver to obtain this amount of small detail and so another method for adding more detail is through the use of texture coordinates now similarly to how i added the aero refinement settings node to my aero solver settings i could also add an aero uvw settings node the aero uvw settings node was first introduced as a prototype in version 2.1 but it's been polished into a complete end-to-end workflow in version 2.2 so this allows you to emit any number of texture coordinates to be advected along with the aero simulation which can later be used in post-simulation or at render time to add fields for fog density let's say or if you want to use a texture at render time then you have all of the required uvw information and the idea behind the multiple sets is to minimize the effect of texture stretching by having two or more sets simultaneously emitted with an offset which fade in and out according to this weight curve and so this swapping of texture coordinates ensures that texturing is more seamless and so if we take a look at the final version of the simulation and
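[editor's aside: the staggered weight-curve idea just described can be sketched in a few lines of python. this is an illustration of the general technique, not bifrost's actual implementation; the half-period stagger and the triangle-wave fade shape are assumptions made for the example.]

```python
def uvw_weights(t, period):
    """blend weights for two texture-coordinate sets that reset on a
    staggered cycle: each set's weight is a triangle wave that peaks
    mid-life and falls to zero exactly when that set resets, so the
    texture lookup cross-fades seamlessly instead of stretching."""
    phase_a = (t % period) / period                   # set a age, 0..1
    phase_b = ((t + period / 2.0) % period) / period  # set b offset by half
    w_a = 1.0 - abs(2.0 * phase_a - 1.0)              # 0 at reset, 1 mid-life
    w_b = 1.0 - abs(2.0 * phase_b - 1.0)
    s = w_a + w_b
    return w_a / s, w_b / s                           # normalize to sum to 1
```

at the instant set a resets (its coordinates snap back to rest positions), its weight is exactly zero and set b carries the whole lookup, which is why the swap is invisible in the render.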
i do apologize i later realized that i made a mistake as far as the speed of the simulation but it's the same simulation just a little bit faster so now i see that i've been able to break up the blobs a lot more and add a lot more fine tendrils and details in the simulation by having emitted texture coordinates and then using them as a post process to apply a random noise field to the fog density and the random noise field is sampled at the positions that are being created by the texture coordinates that are advected in the aero simulation and so if we oh i hit two buttons at once and so if we take a look at the three versions of my simulation i have the original kind of low-res simulation and i have the one where i used the in-sim up-resing process and then on top of that i added the texture coordinates and used those to give even more noise and break up the fog density as a post process and so that's it for that part now for the final part concerning aero so having these indirect methods for injecting finer detail without resorting to incredibly high resolutions is extremely useful and powerful but sometimes some brute force is required where you actually want all that detail to be part of the full resolution solve and not some form of up-res or texture post process so when i think about these kinds of huge dense slow-moving pyroclastic flows like volcano ash or thick smoke from large scale oil fires although these effects can definitely benefit from the two features that we just looked at having the ability to actually solve for that amount of detail in the first place will produce the highest quality and the most realistic results so for us up until this point one of the biggest challenges has been a lack of artistic tools to create the specific types of forces required to produce the pyroclastic behavior and look for aero simulations as we already saw at the beginning of this presentation the
incorporation of generic field simulation influences has opened up a realm of infinite possibilities for building influences and that most certainly includes influences for pyroclastic flows the second biggest challenge has been the performance hurdles involved with producing very very large highly detailed simulations which in the case of this image consists of 410 million voxels for this frame alone and so thankfully we've invested in some extensive performance improvements in aero which now make this a little less painful so things like multiple improvements for parallel scaling several checks and fixes aimed at avoiding the copying of data where it shouldn't be better handling of memory allocation and the amount of memory consumed along with other optimizations such as flattening of loops and so this image that you're seeing here is one of the influences that i made and i called it the aero einstein influence and i hope the reasons for that are obvious but let's take a look at another version of this which was created by michael nielsen one of our developers on aero so this is a version of a pyroclastic influence for aero that he created we're looking at it a little bit more slowly and so all of these can be applied to combustion simulations as well to give some very very highly detailed and breathtaking results and so with that i wrap up the aero portion of the presentation and so we've talked about the updates that are available for all of our simulation systems but now let's also look at what's up with generic volumes the slogan for this is that we're adapting and we really are so we're very happy to announce that we have achieved a state-of-the-art adaptive volume tool set this is because we've implemented the ability when you convert into a volume to make those volumes adaptive on the fly so this is whether you're converting a mesh or points everything is fully adaptive with
several options we also have a new merge volumes compound which is also adaptive we have a new field to volume compound and again all of this adaptive generic volume manipulation is only available in bifrost so we're very happy about that we often get asked why we're not using vdb we could obviously consume vdb we could write out to vdb but the reason why we have our own type of volumes is because we wanted to leverage our adaptive tile tree structure which is currently not yet supported by vdb and so let's have a look at some examples here so here i have a mesh which i want to convert into a volume and so the convert to volume node still exists the way it used to from the beginning however it's been beefed up with a whole bunch of adaptivity functionality and i quickly want to mention why all of this functionality is on this node that's because we actually don't want to have a process where you produce a high resolution volume and then you start to coarsen it the thinking behind this is that the user should never pay for the high resolution everywhere if that's not what they want and so we've added the functionality sorry i knocked over my bottle we added that functionality directly into the convert to volume so that the adaptive volume is created on the fly and you never ever have to pay for the performance cost of having a narrow band level set everywhere that's homogeneous so here i'm just converting this mesh into a volume and by default at the resolution that i've set it's going to create around 53 million voxels and it's going to take around 9000 milliseconds to compute and if we use the volume scope to take a look at the tile tree we can see that the narrow band has been voxelized homogeneously and so all of the orange voxels are the same size and then you have the resolution falling off as you get further away from the level set and it's falling off over powers of five
out into infinity and so now let's say i wanted to do this with adaptivity i have several options as to how to approach that so these three things that you see here are new on the convert to volume node and they allow you to control various aspects of the adaptivity and so this drop down here will give you options such as is adaptivity even on so by default it is off because we just want to make sure that it works properly and send it out into the wild before we decide to turn it on by default but the second option you have is this mode of adaptivity called optimize and so this is kind of like an automatic magical form of adaptivity which is going to look at the mesh and decide on the fly where it needs to have small voxels and where it could afford to have bigger ones and so if i look at the result now i can see that with optimal adaptivity on i've actually reduced the number of voxels down from 53 million to 28 million and i've decreased the time it takes to make this volume by 40.32 percent so down from 9000 milliseconds to 5500 milliseconds and if i look at the volume scope now because i have an adaptive volume structure where before the different levels of the tile tree were falling off in powers of five they are now falling off in powers of two and at the finest resolution where before it was only being represented by the orange voxels which were the finest voxels we now have a mix of orange and green which is the next level up so this is really really cool and we're very excited about this and if i actually just look at the level set itself you can't really notice any difference so on the left is the default one that had 53 million voxels and on the right is the adaptive one which has 28 million voxels and computes way faster because it's allowing itself to coarsen in certain areas where it sees fit and so you don't actually have to do anything
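[editor's aside: the "only pay for high resolution where it matters" idea can be made concrete with a toy 2d sketch. this is my own illustration of adaptive refinement against a signed distance field, not bifrost's tile-tree code; the refinement criterion (subdivide only while the surface could cross the cell) is an assumption chosen for clarity.]

```python
def adaptive_cells(sdf, x0, y0, size, depth, max_depth, cells):
    """subdivide a square cell only while the signed distance field
    could cross zero inside it; cells that are clearly inside or
    outside stay coarse, mimicking an adaptive narrow-band structure."""
    half = size / 2.0
    d = sdf(x0 + half, y0 + half)      # distance at the cell center
    # |d| beyond the cell's half-diagonal means the surface cannot
    # intersect this cell, so keep it as one coarse cell
    if depth == max_depth or abs(d) > half * 2 ** 0.5:
        cells.append((x0, y0, size))
        return
    for dx in (0.0, half):
        for dy in (0.0, half):
            adaptive_cells(sdf, x0 + dx, y0 + dy, half,
                           depth + 1, max_depth, cells)

# signed distance to a circle of radius 0.3 centered in the unit square
circle = lambda x, y: ((x - 0.5) ** 2 + (y - 0.5) ** 2) ** 0.5 - 0.3

cells = []
adaptive_cells(circle, 0.0, 0.0, 1.0, 0, 6, cells)
uniform = 4 ** 6   # cell count of a uniform grid at the same finest level
```

the adaptive list holds far fewer cells than the uniform grid while still reaching the finest cell size right at the surface, which is the same trade the optimize mode is making in three dimensions.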
you just have to turn it on and it magically works and it looks exactly the same as before and so another thing that we've added to the convert to volume is the ability to manually control where you want to have high resolution and where you want to have low resolution and for anybody who's familiar at all with our aero system if you've used the adaptivity for aero you'll already be familiar with this concept of placing bounding boxes in space and assigning certain resolutions to those bounding boxes and so i'm still going to leave the optimized adaptivity on but i'm also going to enable the resolution bounds and so on my compound itself i have these four extra ports now which allow me to plug in bounding boxes that i brought in from maya or that i've created in the graph for that matter and then i could assign all of these boxes to have base resolution inside them or half the resolution or quarter resolution and so the way i'm doing this is that let's say i only want super mega high resolution in the area of the eyes and the nose and then the forehead and the mouth can get one level of resolution lower and then the top of the head and the neck and chin can get yet another level of resolution lower and then everything outside of that can be at the lowest resolution and so we can already see from the level set itself that down here it's actually pretty chunky and then it gets more and more detailed as we zoom in on the eyes and if we again look at that through the scope we can see again in the area of the eyes we have these orange boxes which represent the finest resolution level in the voxel tree and then it falls off to less and less and so combining both the optimal adaptivity and the resolution boxes takes down our number of total voxels to only 7 million and the time it took to voxelize that was only 1800 milliseconds that's almost an 80% speed up
versus having the entire thing voxelized at the finest resolution and so now let's have a look at another example so earlier when i was showing you the fields i briefly showed this video of my popcorn sand and so if we take just one frame of this point cache so this is an mpm cache with my points where some of them are resting and some of them are flying in the air now let's say i wanted to turn these into a volume and then turn them into a mesh i could use the optimal resolution or i could use the resolution bounds or i could use both as we just saw but i also have a third option and this is that i can query any property that exists on these points and use that to drive the resolution so for this scenario let's imagine that we want to create a volume which only has high resolution wherever you have points that are moving fast so this would be mostly the points that are flying up in the air and then the points that are at rest you don't care for them you actually want to have a little bit of a lower resolution there so what i'll do now is i'll switch my adaptivity from optimal to a second mode called value from property and then i'll tell it okay we'll use the point velocity to decide where we have high resolution so that we have higher resolution if point velocity is high and lower resolution if point velocity is low and if we look at the level set now we can see that down here where the points are kind of static we have chunkier resolution and then it gets more and more refined as we look at the faster moving particles and again if we look at the volume scope for this we have these very coarse voxels here the ones that are in green are representing the kind of medium moving points and the ones that are in red are the faster moving points and they're the ones that are getting voxelized with the smallest detail size and what's super awesome about that is that i can then convert this into a mesh and because the mesh is
set to automatically mimic any type of adaptivity that already exists on the volume so this volume is already adaptive and the mesh is set to mimic that right away you're going to get a mesh which has more triangles where the liquid in this case the sand is moving fast and less triangles where it's moving slow and so that's the major update that we've done for convert to volume and all of the adaptivity options involved there it doesn't end there however so we've already seen a whole lot about fields so we also have a new node that's called field to volume which allows you to pipe in a field and create a volume out of it now this node itself is not yet adaptive but it's definitely something we're exploring and then we also have a merge volumes compound which we did not have before which allows you to feed any number of volumes into it whether it be a signed distance field or a fog density and you have all kinds of common boolean options to combine either the signed distance field or the fog density and this node also has optimal adaptivity enabled so we'll see what that means in just a second and so let's say i have this very simple network of fields that creates this random mathematical field so i could do all that and pipe the result directly into a field to volume and as we've already mentioned fields are infinite so you'll want to give the field to volume node some kind of bounds to define where you actually want to voxelize so in this case i just used a cube as my bounds and i fed in this network of fields and it created this kind of tube-looking type of field and now if i bring back my bust that we voxelized earlier and which has adaptivity enabled again remember i had all those boxes that were kind of defining the highest level of detail around the eyes and it was the lowest level of detail down here where it looks pretty chunky so let's say i want to now
combine these two into one volume so i could essentially take my convert to volume from earlier which has all the adaptive bounds and all that stuff and pipe that into a merge volumes and then i could take my sequence of fields that have been converted to a volume and pipe that into the merge volumes as well and since both of these are using signed distance fields i don't really have to worry about the fog density down here i just set my level set mode to difference and so what this does is it kind of gives him a locutus type of star trek look where you have one side that's kind of textured and detailed and the other side that remains the same as it was before and if we take a look at that through the scope we can see that the portion of him that is not interacting with the field volume is still adaptive and so we still have the high resolution where the eyes are and it falls off but the field volume itself as i already mentioned that compound is not yet adaptive so that part of the tile tree is going to be at a uniform resolution but the two still work together as one volume where one side of it is adaptive and the other side is not however what's really awesome is that the merge volumes compound itself allows you to apply optimal adaptivity to any of the resulting volumes that you get by feeding individual volumes into it so in this case just by clicking one checkbox i click on optimal adaptivity it's then gonna look at this and analyze it and decide where it's going to apply high resolution and where it isn't and what i especially love is if we look at the top of his head right here we know that this side of the head which is intersecting with the noisy field has a lot more detail and so you can see that i have these finer voxels here in orange but the other side of the head did not intersect with the field noise and so it doesn't need high resolution and so just by turning on the optimal adaptivity it's able to
figure all that out for you and it makes for much much more efficient volumes even if the incoming volumes for the merge volumes were not adaptive it still gives you the opportunity to pump out the result as an adaptive volume again saving you on voxel numbers and performance and if we just look at a very simple yet effective example of this if you voxelize a cube into a very fine detailed narrow band level set you can get 7.8 million voxels if you combine two cubes that are intersecting or in this case they're being added together and if we just turn on the adaptive checkbox on the merge volumes it's going to take that down to 1.5 million because it knows that you only need smaller fine detail where the faces are intersecting and you have these right angles whereas the planar faces don't actually require any high detail at all so it's able to figure that out for you and even if these were animated it's going to adapt on the fly and with that i've reached the end of my presentation so thank you guys very much for being here and i hope you have a lot of fun exploring all of this new stuff awesome thank you very much so you guys packed in a ton of stuff as we went through and so we're going to take a bit of time here to do some q a so by all means the simplest thing to do is there's a little q a button it might be at the top or the bottom of your screen if you hit that you can add questions to the box and then the team here will read through them and away we go there was one question in the regular chat i'd just like to answer really quick which was about whether bifrost will be the future of rigging so that is something that we're very interested in and jason lab who is actually here right now and enrique caballero did an amazing presentation the other week so i'm just going to plug that and that's in the chat okay so i'm going to try to answer some of the questions
that are aimed at convert to volume and mesh to volume and all that stuff so convert to volume is a high level compound which is actually built up of smaller compounds such as mesh to volume points to volume and all that stuff the idea is that for those that don't know exactly what you need you just have this one place where you can plug in any kind of geometry and it's going to convert it to a volume ideally when you are converting into a volume you do want to have a closed mesh so something like a tube or a sphere or anything that doesn't have a face missing if you do have a face missing you can still convert it to a volume but it's going to be more like a shell and we're currently exploring ways of actually still creating valid solid volumes even if you have faces missing however it's very important to note that all of the adaptivity stuff for converting volumes that i showed actually only works if you have a solid volume so if you do end up in the case where you're using a shell voxelization so let's say you just have a plane in maya and you convert that to a very very thin volume then you can't use the adaptivity features at least not yet and again i'm seeing a lot of questions related to this so yeah if you have just a mesh let's say you have a sphere you want to convert it to a volume you can use mesh to volume if you want but you could also use convert to volume again convert to volume is just a high level compound aimed at anyone who's not technical and it's kind of like a swiss army knife type of node it's a one stop shop for all your volume needs let's just say that there's a question about any update to procedural modeling in bifrost the answer is sort of the volume tools will produce geometry and some of the nodes in the scatter pack like displace points will do certain kinds of modeling-type operations so there are things that produce meshes and manipulate meshes
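[editor's aside on the closed-mesh requirement mentioned above: solid voxelization needs an inside/outside test to assign the sign of the distance field, and the classic ray-crossing parity test only works on watertight geometry. a 2d sketch of that idea for illustration only — not necessarily how bifrost implements it:]

```python
def inside_by_parity(point, segments):
    """cast a +x ray from `point` and count segment crossings;
    an odd count means inside. only meaningful when the segments
    form a closed loop (a watertight 'mesh' in 2d)."""
    px, py = point
    crossings = 0
    for (x1, y1), (x2, y2) in segments:
        if (y1 > py) != (y2 > py):                     # spans the ray's height
            x_at = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_at > px:
                crossings += 1
    return crossings % 2 == 1

# closed square: the parity test classifies the center correctly
square = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
          ((1, 1), (0, 1)), ((0, 1), (0, 0))]

# drop one edge and the same interior point is misclassified,
# which is why an open shell can't get a valid solid sign
open_square = [s for s in square if s != ((1, 0), (1, 1))]
```

in 3d the same parity argument applies to triangles, so a mesh with a missing face leaks sign information and you only get a thin shell level set.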
for sure but i wouldn't say that we've really attacked procedural modeling yet but of course being able to convert fields into volumes and the volumes into meshes like that kind of stuff that costa was showing with his locutus i mean that could be considered a procedural model there's a question about any future plans to add example and starter files to the bifrost library yes we're always looking at that it's always a question of time but we see that it's very important for people to have good examples to be able to dig in and mess with stuff maxime asks are there any plans for tighter support for cached playback especially for graphs that make use of feedback ports like simulations yes we have a mostly working internal version with cached playback but there are still details to iron out but we do intend to support it i can't say when though so somebody also asked if there have been any updates to the basic combustion so yes and no so nothing about the actual combustion system specifically has changed but obviously all the stuff i mentioned about performance and the way we can now manipulate the simulations with field influences affects combustion as it affects everything else and i see that come on mystery is here and he says hi so i'm saying hi back nice to see you okay me again so there's questions about gpu i do invite you to take a look at the reddit ask me anything that we did yesterday because there was a lot of discussion about gpu and so the short answer is that we're definitely looking at this very hard and we're exploring and it's to be determined what will come out of that i see a question in the q a panel about rigid bodies and flip so the short answer is once again it's definitely something that we're looking at you could already kind of do a pseudo rigid body simulation with mpm but i know it's nowhere near the real thing so yes we do intend to definitely explore that option
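[editor's aside, circling back to the merge volumes boolean options costa demonstrated earlier: union, intersection, and difference on signed distance fields conventionally reduce to simple min/max operations on the distance values. a hedged sketch of that standard csg-on-sdf trick, not bifrost's exact code:]

```python
# csg on signed distance fields (negative = inside):
#   union        -> min(a, b)
#   intersection -> max(a, b)
#   difference   -> max(a, -b)   (a with b carved out of it)
def sphere(cx, cy, cz, r):
    """signed distance to a sphere centered at (cx, cy, cz)."""
    return lambda x, y, z: ((x - cx) ** 2 + (y - cy) ** 2
                            + (z - cz) ** 2) ** 0.5 - r

a = sphere(0.0, 0.0, 0.0, 1.0)   # the "bust" stand-in
b = sphere(0.8, 0.0, 0.0, 0.6)   # the cutting volume

def union(x, y, z):
    return min(a(x, y, z), b(x, y, z))

def intersection(x, y, z):
    return max(a(x, y, z), b(x, y, z))

def difference(x, y, z):
    return max(a(x, y, z), -b(x, y, z))
```

setting the level set mode to difference in the demo corresponds to the last operation: everywhere the cutting field is inside, the result is pushed outside, which is what produced the carved, textured half of the bust.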
uh the flip solver that we had for the old bifrost liquids is also something that we've done some extensive research on to see what it could mean to bring it into bifrost and even whether we could make it better than it was already and the answer i gave for rigid bodies also applies to dynamic shattering effects so we're looking at all that as one solution so having the ability to shatter stuff and then simulate it as rigid bodies is something that we want to explore as one continuous workflow so we're just gonna be looking at all of that at some point uh there is a question now asking whether bifrost volumes can be converted to vdbs uh yes they can uh we can write out vdb files and we can read them in as well uh with properties including vdb points like vdb is a nice efficient representation of point clouds i don't know exactly how it works i think they store the points in the voxels to kind of accelerate it that way but we support that as well yeah and when you read in a vdb volume um vdb uses powers of five in their structure but you can keep that or you can change it to powers of two if you wanna leverage some of the adaptivity stuff that i showed cool maybe we got time for one more and then we'll hand it over to jen and by all means if something comes to mind tomorrow or another day the team is happy to get questions from you there's a variety of ways to do that there's the bifrost forum of course which is probably the best place to start where people can see answers and post files and stuff but you can hit us up on twitter or social or whatever works for you so uh graham clark asks hi graham uh are there any plans to add more data nodes like table i o with data manipulation nodes for geometry and volumes like for databases um yeah we're always looking at things that allow more ways in and out of bifrost because the power of the tool set really scales with kind of multiplying it by
all of the things you can bring in and all the things you can send out so that is something that we are looking at but again i can't say when we'll do things cool well like i said it's a major update there's a lot to dig into and we only really scratched the surface here today so uh with that i'll hand it over to jen to wrap this up thanks tj so before we wrap up i'd like to introduce you to our revamped bifrost hub we have tons of tutorials master classes and blog posts we'll be updating the site quite frequently so be sure to check it out at makeanything.autodesk.com i also have a special message for all you new grads and freelancers out there this past august we launched maya indie a more affordable option for those who are working on a tighter budget there are some requirements to be eligible for this offer you can find all the details at the link on screen thank you all for taking the time out of your busy day to geek out with us we hope you enjoyed the webinar and invite you to follow us on facebook and twitter where we'll post the webinar shortly again we know these are challenging times and we appreciate that you were able to tune in from our team to yours we hope you and your families are safe and healthy and find some joy this holiday season thank you thanks everyone thank you very much bye bye
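[editor's note: the vdb points answer earlier mentions storing points "in the voxels" to accelerate spatial access. this plain-python sketch, which is illustrative only and not the openvdb or bifrost api, shows the core idea: bucket points into sparse voxel cells so queries only touch occupied cells. a real vdb tree layers a fixed-depth node hierarchy on top of this.]

```python
import math
from collections import defaultdict

# Illustrative sketch (plain Python, not OpenVDB/Bifrost): a sparse grid
# that stores points per voxel. Only voxels that actually contain points
# exist, which is what makes the representation efficient for point clouds.

def voxel_key(point, voxel_size=1.0):
    """Map a 3D point to the integer coordinates of its containing voxel."""
    return tuple(math.floor(c / voxel_size) for c in point)

def build_grid(points, voxel_size=1.0):
    """Bucket points into voxels; empty voxels are never stored."""
    grid = defaultdict(list)
    for p in points:
        grid[voxel_key(p, voxel_size)].append(p)
    return grid

points = [(0.2, 0.3, 0.1), (0.9, 0.5, 0.4), (5.1, 0.0, 0.0)]
grid = build_grid(points)
# The two nearby points share voxel (0, 0, 0); the far point sits alone
# in voxel (5, 0, 0), and no empty voxels in between are allocated.
```

a neighborhood query then inspects only the handful of voxel keys around a location instead of every point, which is the acceleration the speaker is gesturing at.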
Info
Channel: Autodesk
Views: 13,084
Id: ucnyMu3ohTw
Length: 113min 41sec (6821 seconds)
Published: Fri Dec 11 2020