Intelligent Setups for Automating Character FX | Adam Katz | SIGGRAPH Asia 2019

Captions
[Music] As introduced, my name is Adam Katz. I'm one of the lead character effects artists at Animal Logic, and today I'm going to be covering how we automate character effects. We used this method throughout Peter Rabbit, and we're also using it on some current and future projects.

First things first, a bit about Animal Logic: we were founded in 1991, and we've got creative studios in Sydney and Vancouver and development offices in LA and Sydney. We worked predominantly in commercials at first, then moved on to effects features. We did The Matrix and Happy Feet, which was the first animated feature to be made in Australia and went on to win an Oscar, and more recently we did The LEGO Movie franchise and Peter Rabbit. Peter Rabbit brings us back to the automated character effects pipeline.

I want to start by showing you some examples of what our setups can do, and the main thing to remember as you're watching is that it's essentially a hundred percent automated after animators have checked in keyframes of what they want to see. So let's start with a clip. [clip plays] All right, that was Peter Rabbit. For today's talk we're not going to use Peter Rabbit; we're going to use a test character, so let me introduce you to it. This is the character we use while testing that everything's working — you can see it running around having a pretty good time. The setup for a character would in general be created before the character effects team starts working on a shot, which means all the lighting, the shading and the rendering settings are set up and ready to go for us. What we're going to look at is the automatic process we have to create the character effects for this and any other character, and that includes working on the geometry of the body, the clothing and the fur.

As an overview, we want to look at how we build our pipeline for automation while also accommodating the specific shot work that artists can do on the characters. We automate all aspects of this within Houdini, and when those processes are complete they move on to the rest of Animal Logic's pipeline, which picks everything up, lights it and renders it so we can get feedback on our work. Specifically, in this talk I want to look at how we set up our pipeline for automation — that includes geometry preparation, de-intersection, cloth simulation, hair simulation, and importing and exporting of data — and at some of the technical details, some of the challenges we've found, and the consequences of using this type of setup. I'd also like to give a big shout out to Roger Frandsen, who, amongst other people, really helped create the initial Houdini setup on the first Peter Rabbit that this was all built on.

The purpose of what we want to do is to create engaging, photorealistic characters within a live-action world, and that means we want high quality, dynamic cloth and fur that can be used in a range of different shots. That might be slow emotional performances or frantic action scenes; the characters are going to be interacting with each other and also with live-action actors and actresses, and there's going to be a range of different weather conditions, like wind and rain.

The first thing to know is that Houdini is essential to our process: the entire character effects pipeline is contained within Houdini, end to end.
What that means is that we import USD data from the animation department, take that data into Houdini and process it: we import it into SOPs, split it into different geometry streams like the cloth and the fur, complete all the simulations within Houdini, and then at the end export that data as USD overs and defines. The artists never need to leave Houdini — they can stay within that one context of working — and we found that this amount of automation is really only possible with Houdini's procedural nature combined with the amount of Python we can write and some fairly sophisticated ROP networks that we're going to look at a little bit later.

What are some challenges we've found with this? Well, first, we haven't come across many studios that do character FX end to end within Houdini; some will have parts in Houdini, some in Maya or other software, and that means that when we're looking to hire people it's hard to find people who have done everything within it. Some will have some experience, but not for the full pipeline — for instance they may have only done simulation, or maybe post-processing, or some data wrangling, but not everything.

So really, the question to start with is: why did we want to automate character FX? First, there's a pretty long history of automation at Animal Logic. Automation of renders was started on Guardians of Ga'Hoole in 2010, and on subsequent projects we've automated character rigging, building, lighting and compositing. Obviously not everything within those departments is automated, but we have a very well-connected pipeline, especially for getting data between different departments. So when Peter Rabbit came around, our goal was to have a really efficient way to get shots as close to final as possible. The director was very used to live-action work and having rushes, and we wanted a way to replicate that with our animated characters as much as we could. With all those other processes already automated, we wanted to expand into character effects. We also wanted very quick, dynamic feedback, as soon as possible, to aid in animation reviews for both the animation department and the director — and essentially, showing just basic point-deformed geometry won't cut it.

The rough overview of what we do is: we get an update from the animation department, that goes in and gets processed by Houdini, and we get a rendered shot at the end of it. We want to be able to process very efficiently, because the number of animation updates we get per shot and per character would just overwhelm our relatively small character FX team, so we built this process to let us actually get everything done. We'd also like our artists to be able to concentrate on hero work, which means a shot might be split up into some hero work that comes to the char FX team, and a lot of other work that goes through a process we call anim FX — which I'll get to in a second — and completely bypasses our team, with all of it coming together for a final render. We also have the advantage of a very large, scalable farm with a lot of resources, which means we're able to push a lot of processing onto that farm concurrently. The pipeline is driven by triggers, which basically means that as soon as certain processes finish, they trigger the next ones to start, and so on; that can handle a lot of the steps without needing user interaction, and it goes throughout the entire process, from assets all the way to final render.
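To make the trigger-driven idea concrete, here is a small, illustrative Python sketch — not Animal Logic's actual farm code — of the shape of it: an animation check-in kicks off one processing chain per character, the chains run concurrently, and a final render is only triggered once every character has finished. All function names here are hypothetical stand-ins for real farm jobs.

```python
# Illustrative only: per-character chains run concurrently, then a render is triggered.
from concurrent.futures import ThreadPoolExecutor

def process_character(shot, character):
    # stand-in for the real chain: geo prep -> de-intersect -> cloth -> fur -> USD export
    print(f"{shot}/{character}: caching and exporting USD")
    return f"{shot}/{character}.usd"

def trigger_final_render(shot, usd_files):
    # hypothetical: hands the per-character USD overrides to lighting/rendering
    print(f"rendering {shot} with {len(usd_files)} character overrides")

def on_animation_checkin(shot, characters):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda c: process_character(shot, c), characters))
    trigger_final_render(shot, results)

on_animation_checkin("sh010", ["peter", "benjamin", "flopsy"])
```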
OK, so all of this comes together within our automation process. We basically have two different, related pipelines around this. The first is called anim FX, and anim FX just means it's the animation department's character effects; then we also have what we call char FX, and that's the character FX department itself and what we actually do. Both of these systems are used to create shots and they consist of the exact same Houdini setup. Anim FX runs automatically whenever the animation department checks in keyframes, whereas char FX is run manually whenever shot-specific changes are needed — things like a simulation that needs to be art directed, maybe a complicated collision in a shot, or possibly simulation errors like jitters or pops that come up.

The anim FX process consists of the following: an animator checks in keyframes; that creates a hip file which gets passed to Houdini; that process prepares the geometry of the body, simulates the cloth, simulates the fur and then exports USD for the body, the cloth and the fur; that triggers the final render, which of course gets approved on the first try and we all go home. That's anim FX. Char FX is very closely related — it's essentially the same. When we have hero work or simulation errors, the animation gets updated and makes a hip file; a char FX artist then opens that hip file and makes any changes that are needed to the body, cloth or fur sections; they send that file off to the farm and it gets rendered like normal. Once those changes are approved we have one more step: we take all of those changes and feed them back to the animation department, so when anim FX runs again it picks up this updated setup that has been made specifically for that shot. So as they continually update the animation, we'll have all these shot-specific changes ready for the processing.

All of that is inserted into a chain of jobs, and that chain of jobs triggers on new character updates from animation, which might happen multiple times per shot because of multiple characters in that shot. All of these characters run concurrently, and at the end of it all they trigger a final render. So that's the general process.

Now, how do we actually automate, given a few troubling things? First, the input geometry we get is not always finalized, and it may — and will — change point count. We have multiple characters to automate with different needs: some may have cloth, some may not; some may have fur, some may not. The animation is going to be updated frequently, we might get geometry that's intersecting, and basically there are just way too many shots and characters to finish with brute force. We do all this within something we call a character effects rig. It's not like a rig you'd have in Maya for animating something; it's just the term we use for our setup within Houdini.

I want to give a high-level overview of what this Houdini process does. We first get the input from animation: that's a cached USD file, and it includes body geometry, cloth and fur. We do geometry processing on that data — we de-intersect it and do a few other things — that leads into the cloth simulation, which leads into the fur simulation, and then everything gets packaged up and re-exported as USD.
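Since the results are written back out as USD overs and defines on top of the animation department's data, here is a minimal, hedged sketch of what that kind of export can look like with the USD Python API. This is not Animal Logic's actual exporter; the prim path, attribute choice and frame handling are assumptions for illustration.

```python
# A minimal sketch: author simulated cloth points as time-sampled "overs"
# in a new USD layer that composes over the animation department's layer.
from pxr import Usd, UsdGeom

def export_cloth_over(out_usd, prim_path, points_per_frame, frames):
    stage = Usd.Stage.CreateNew(out_usd)
    over = stage.OverridePrim(prim_path)        # an "over", not a "def"
    mesh = UsdGeom.Mesh(over)
    pts_attr = mesh.CreatePointsAttr()
    for frame, pts in zip(frames, points_per_frame):
        pts_attr.Set(pts, Usd.TimeCode(frame))  # time-sampled simulated points
    stage.SetStartTimeCode(frames[0])
    stage.SetEndTimeCode(frames[-1])
    stage.GetRootLayer().Save()

# hypothetical usage: two frames of a three-point "cloth"
export_cloth_over("cloth_sim.usda", "/char/testChar/cloth",
                  [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                   [(0, 0.1, 0), (1, 0.1, 0), (0, 1.1, 0)]],
                  frames=[1001, 1002])
```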
To break this down into roughly the steps we work with: we get our input USD, and the first two parts of it that we use are something we call the animated skin mesh and a hair volume. The animated skin mesh is just the basic part of the body that the fur is eventually going to attach to; the volume input roughly estimates about 60 to 70 percent of the length of the fur once it's attached to the smaller skin geo, and this geometry is what the cloth is going to collide against. If we zoom in for a close-up, this is the skin geo, that's the volume geo, and you can see them overlaid on top of each other. When we put the fur in, you can see the fur essentially just sticks out a little bit beyond that volume geo. We actually run our cloth simulation first, and only after it's finished do we run the fur simulation — we found that a bidirectional simulation between the cloth and the fur was a little unstable and a lot slower, so we fake the presence of the fur under the cloth by having the cloth lie against this volume geo. We'll come back to the fur in more detail later.

For now we work on our volume input, and we basically needed to ensure that our geometric inputs were good and valid, which was the cause of two major concerns. The first, and most important, is that good inputs equal good outputs. What I mean is that if the incoming geometry self-intersects, or collapses, or moves suddenly within a frame, or it fails to leave gaps for the hair or cloth — if the arm clamps down against the body, say — then nearly all solvers are going to fail or give bad results. The animation itself could also end up with flipping joints or intersecting limbs, especially when animators are working to camera. Since all of our inputs come from the animation department, I really can't stress enough how important that relationship is in the process. We made sure to talk with them extensively so they'd have a really good understanding of what a good bind pose is, what good pre-roll is, and how self-intersections are going to cause issues in their renders — especially because it's automated on their side as well: when they check something in, they want to see a valid render come back.

Even with all those issues improved, we still had a problem: the volume mesh itself was still self-intersecting in areas of high flexibility, like the elbows, knees or thighs. It was basically an unavoidable situation, because the volume geo is only estimating where the hair would be, and since the hair is compressible, the volume geo is going to intersect — for instance, we might get something like this where two sections come together. What we did to fix it was to take the mesh and project it onto an SDF, or signed distance field, of itself, and that helped find the intersection areas; we then combine that with some smoothing and relaxing operations on the areas of crossover, which leaves space for the cloth to move. So for example, this is the raw input, and after we de-intersect it we get something like that, and here they are roughly overlaid on top of each other. A more extreme example would be something where the leg of a character might be intersecting the hip a lot; when we de-intersect it we get something like that, which is a lot better for simulating cloth.
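As a rough illustration of the push-out idea behind that de-intersection step, here is a hedged sketch of a Python SOP that samples a signed distance field and moves points that have sunk inside back out along the SDF gradient, leaving a small gap. The production setup described above also does smoothing and relaxing, which is omitted here; the input layout, padding value and use of a single SDF volume are assumptions.

```python
# Hedged sketch for a Python SOP inside Houdini (illustrative, not the production tool).
# Input 0: the volume geo to de-intersect.  Input 1: a single SDF volume primitive.
import hou  # already available inside Houdini; shown for clarity

node = hou.pwd()
geo = node.geometry()                          # writable copy of input 0
sdf = node.inputs()[1].geometry().prims()[0]   # assumed: one SDF volume prim

PADDING = 0.02  # hypothetical gap to leave for cloth/fur

for point in geo.points():
    pos = point.position()
    d = sdf.sample(pos)                        # signed distance at this point
    if d < PADDING:                            # inside, or too close to, the surface
        direction = sdf.gradient(pos).normalized()
        point.setPosition(pos + direction * (PADDING - d))
```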
With the method we had, it wasn't that fast, but it was able to de-intersect intensely deformed sections, and since it wasn't a sim — each frame just depended on itself — it could all be chunked up, so it went really fast. I should say we did test out using the Vellum detangle mode within a solver for de-intersecting, and it kind of worked all right, but we didn't want to have to rely on a solver that could only run in sequence on the CPU; we wanted to be able to split it up, because we had the advantage of our farm. From there we go ahead and remove body parts that aren't going to be involved in the simulation, like toes and fingers, and that's essentially the end of the body processing.

From there we move on to the cloth. The cloth itself was built in Marvelous Designer, and we receive two things: the cloth conformed to the body, like this, and a flat input, as if all the panels were actually cut out of fabric. That flat input gives us a reference for how long all the edges would be if the garment was laid out, and also the relative angles that the cloth should try to relax to. We also receive a thicker, modelled, renderable garment, which is the thing that's actually going to be rendered at the end.

Once we have that cloth, the first step is to remesh it to make it a lot better for simulation. We basically take the very nicely modelled asset from the modelling department and remesh it into triangles that let us simulate creases in arbitrary directions. The nice thing about our remesher is that we get the same point and primitive count for both the flat panels and the conformed panels. We did that by taking the position attribute of the conformed panels, storing it as an attribute on the flat panels, remeshing the flat panels, and then re-interpolating those attributes back onto the conformed panels — and that gives us a one-to-one correspondence in points. Once we have that, we move on to the cloth simulation itself, which we've done in Vellum or Carbon depending on the project, and it's important to note that carving away that volume geo really let us get a good simulation with a wide variety of inputs, as long as we got rid of those intersections.

After the simulation we move on to some post-cloth-sim steps, and these are used for a couple of different things. First, since we simulated the cloth against that de-intersected volume geo, there was a chance it could end up inside the original skin geo. For instance, this green geometry is the smaller skin geo and this red one is the volume geo: we take the volume geo and carve it away, but that might leave sections where the skin geo is still popping through, and when we simulate the cloth on there it's possible that the skin geo still sticks out of the cloth. So as one of our post-sim steps we turn the skin geo into an SDF and push the cloth back out — you can see a more extreme example here, where you might see the body showing through, and we can push everything back out. The post-cloth-sim step is also a really important area where artists can make arbitrary adjustments to their sims, such as smoothing out any pops or jitters, or maybe the VFX supervisor wants a specific flap to open at a specific time — we can do some of that in post-sim steps. We realized, as we went further along in projects, that more and more of our fixes would be post-sim, because it was a lot faster — especially in a long shot — than re-simulating everything, and it can end up being more art-directable as well.
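Going back to the remeshing trick mentioned above, here is a minimal, hedged sketch of the correspondence idea as a Python SOP, assuming the flat panels were given a stored point attribute (called "conformP" here, a hypothetical name) holding the conformed position before remeshing. After the remesh interpolates that attribute, the conformed panels can be rebuilt with exactly the same point and primitive count by swapping positions.

```python
# Hedged sketch for a Python SOP (illustrative): rebuild conformed panels from
# remeshed flat panels that carry an interpolated "conformP" point attribute.
import hou  # available inside Houdini

node = hou.pwd()
geo = node.geometry()   # remeshed flat panels, with "conformP" interpolated by the remesh

# keep the flat layout as a rest attribute, then move each point to its conformed position
rest = geo.addAttrib(hou.attribType.Point, "rest", (0.0, 0.0, 0.0))
for point in geo.points():
    pos = point.position()
    point.setAttribValue(rest, (pos[0], pos[1], pos[2]))
    point.setPosition(hou.Vector3(point.attribValue("conformP")))

# point i of the conformed mesh now corresponds exactly to point i of the flat mesh
```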
After we've finished the cloth, we move on to taking that renderable cloth input and point-deforming it onto the simulated cloth. Essentially, we have the renderable cloth at the bind pose and we have the simulated, conformed cloth, and in order to point-deform we have some special UVs that are exactly the same between the thick cloth and the thin cloth; that lets us transfer the animation over, and we also use the normals and the connectivity of the different panels to figure it out. Once we have that data, it gets ingested into Weave, the Animal Logic toolset for making clothing, and that makes all of our work look very, very pretty. So that's essentially what we do with the cloth.

From there we move on to the fur. The first thing we do is get the cloth into a format that can collide with the fur, and we use VDBs for that: we take the cloth and turn it into a VDB. Here you can see the VDB itself as we use it — the signed distance field — and the original skin geo that the fur is going to grow from, which is animated of course. The way we make this is we scatter a whole bunch of points onto the surface of the geometry and do a VDB-from-particles, and that gives us a pretty accurate estimation of that geometry regardless of how many holes it has or how bad the collisions may have ended up. We don't have to do poly caps or anything like that; we just get a good SDF.

Now, for the hair itself, we start by bringing in a very dense set of curves — in a production asset this would be something like five to ten million curves per character, so obviously way too many to simulate efficiently. We take all these hairs and cull them down to roughly 10 percent of the original curves. When we cut them down, we make sure our new guides are based on the longest hairs in the incoming groom cache; that's because when we simulate, those long hairs will stay under the surface, so when we eventually interpolate back to the full hairs we're more guaranteed that most of the full cache will stay under the surface of the cloth as well. You can see how much we cut that down in an actual asset.

Once we have those guide hairs, we take them, along with the animated skin geo from USD — the skin mesh — and do a bind of the hair to the body. This starts with just a basic point deform, but our method has some pseudo-collisions built in. The way we do that is we take that guide cache, after it's been point-deformed, and make a velocity field out of it from the roots to the tips; we take that velocity field and smooth it a bit. We also make a velocity field from the normals of the body, and this helps keep the hair from intersecting the body. We add these two velocity fields together, take the resultant field, and advect our hair through it from the root points, so it more or less goes where the hair started, but because of the nature of velocity fields it's going to try to de-intersect itself — or rather, it just won't get intersected. The final step is to cut that back down to the original hair length. You can see a bit of the difference this makes, especially around joints and similar areas; it works pretty well and it's fairly robust no matter what the input geometry is. After that, we take the bound hair and do a basic fur simulation on it.
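For the guide-culling step described above — keeping roughly the longest 10 percent of hairs as guides — here is a hedged sketch of how that could look as a Python SOP. The 10 percent ratio comes from the talk; everything else (running it as a Python SOP, measuring length by summing segments) is an assumption for illustration.

```python
# Hedged sketch for a Python SOP (illustrative): keep only the longest ~10% of
# hair curves as simulation guides.
import hou  # available inside Houdini

node = hou.pwd()
geo = node.geometry()

KEEP_RATIO = 0.10  # ratio mentioned in the talk

def curve_length(prim):
    pts = [v.point().position() for v in prim.vertices()]
    return sum(pts[i].distanceTo(pts[i + 1]) for i in range(len(pts) - 1))

ordered = sorted(geo.prims(), key=curve_length, reverse=True)  # longest first
keep_count = max(1, int(len(ordered) * KEEP_RATIO))
geo.deletePrims(ordered[keep_count:], keep_points=False)       # drop the rest
```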
So what does this all look like? Here's an example of our guide groom bound at the bind pose — it's just the 10 percent of the full groom — and if we just put the geometry of the cloth on there, the fur is obviously going to be sticking through. So on the first frame of our simulation we flatten the fur to the body, and after it's flattened we use that bound hair as a goal, so the hair relaxes back towards it while also making sure it collides with that SDF, and that guarantees our fur is always starting underneath the cloth. On a later frame, for instance, the bound hair might look like this, but our simulated hair looks like that, and you can see it in motion here.

Now, 95 percent of the time, across all characters and shots, this was all we really needed to do for the fur: it's a really basic and very fast solution that's just built in a Houdini geometry solver. We did have a slightly more advanced method as well. This higher quality method uses Vellum; it has hair-to-hair collisions and hair inertia, and it handles wind — you can plug in any sort of velocity field you want and get a little puff of hair coming up. This was originally done with a custom solver on Peter Rabbit, but we're transitioning over to Vellum for future projects. We found the speed got a lot better, Vellum let us make artist-friendly tools a lot quicker, and it really helped that newer people coming in would have a much easier learning curve — there's a lot more learning material for Vellum out there than if they had to come in and learn an entire custom solver.

Anyway, after we do that basic or advanced fur sim, depending on what the shot calls for, we need to re-interpolate back to the full render cache. Our general method for this is to take the full groom along with our guide hairs. We look at the guide hairs first and see where their root points touch the surface; then, for any specific patch on the full groom, we see which guide root points are closest to it, and whichever guides those are, they're the ones that deform that section of the full groom. That's how we interpolate back, and this is it in motion. We found it works really well — relatively quick and accurate for short-haired characters — and even with the five to ten million hairs it only takes roughly 30 seconds to a minute per frame; and of course it's not a solver, so we can chunk it up. After all that, we export the USD for the cloth and for the hair, and that goes on to trigger a render.
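To make the guide-to-groom interpolation concrete, here is a small illustrative numpy/scipy sketch — not the production tool — of the nearest-guide-root idea. Real production interpolation would blend several guides and preserve hair lengths; this version simply applies the closest guide's per-segment offsets to each full-groom hair, and all array shapes are assumptions.

```python
# Illustrative only: deform the full groom by the closest simulated guide.
import numpy as np
from scipy.spatial import cKDTree

def interpolate_groom(guide_roots, guide_offsets, full_roots, full_rest):
    """
    guide_roots  : (G, 3)    rest-pose root positions of the simulated guides
    guide_offsets: (G, S, 3) per-guide, per-segment offsets (simulated - rest)
    full_roots   : (H, 3)    rest-pose root positions of the full groom
    full_rest    : (H, S, 3) rest-pose points of every full-groom hair
    returns      : (H, S, 3) deformed full groom
    """
    tree = cKDTree(guide_roots)
    _, nearest = tree.query(full_roots)        # closest guide root per hair
    return full_rest + guide_offsets[nearest]  # apply that guide's offsets

# tiny usage example with random data
rng = np.random.default_rng(0)
G, H, S = 4, 20, 8
deformed = interpolate_groom(rng.random((G, 3)), rng.random((G, S, 3)) * 0.1,
                             rng.random((H, 3)), rng.random((H, S, 3)))
print(deformed.shape)  # (20, 8, 3)
```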
All right, so how do we actually do all this? I want to give you a rough overview — I'm not going to go into every single node, just roughly. To start off, we have a master template for each character, and this template holds a lot of different Houdini digital assets. Within the template we have a section for the specific, large HDAs — digital assets — that are used in every single character, and the main parts of this are a template char FX rig HDA and a ROP network that helps us cache and export all the data. It's really important to remember that those digital assets are the same on every single character, so any updates to them are automatically folded into every single character on the show. We also have sections for character-specific or shot-specific setups: a section for skin deformers, one for cloth simulation, one for fur, and any optional shot setups, like wiggling or jiggling geometry, or wind.

The main focus here is the template HDA, and it holds all the connections between the different sections of the rig as well as the data flow from input to output. I'm not going to delve into all the details, because it's essentially just moving data around, but roughly: we bring in USD data at the top, that gets split into different streams like the body, the cloth, the fur and the whiskers, everything gets processed and moved around, the user can make changes, and then at the end it all gets combined back again and we write it out as USD overs and defines.

A really important aspect of this setup — and it's one of the big things we learned going from Peter Rabbit onto current and future projects — is that we have these locked digital assets so we can update all the characters at the same time, but we also needed the ability for artists to make shot-specific changes, and we didn't want them going in and unlocking the digital assets, because once they're unlocked we can't update them. So while the template rig is locked, it works concurrently with shot-specific sections. For instance, when we process the volume geo data for de-intersection, inside the HDA — inside our template rig — there's a volume geo stream, and we have a wrapped-up HDA to help transfer data back and forth. Outside the rig we have an equivalent section where the user is able to make changes, and I want to jump inside both of these nodes. On the left we have the HDA; on the right is the user section. The first thing that happens is that the HDA brings in a stream of data with the body geometry; the user section object-merges that in, the user paints a colour map and sets some de-intersection properties, and then that data is object-merged back into the HDA, which is then exported and goes back into the template rig. The result is that it added a little abstraction on the user's side, but it meant the HDA could be updated much more easily, and because it's used for every single character we can push out changes very quickly. We can also run some error checks here, just to make sure all the data is in the right format before it moves on to the next section.

That was the template HDA; we also have a ROP network. The ROP network we have is fairly sophisticated, and it really lets us take advantage of our render farm and split all these caches between different machines as much as possible. Throughout it we have a ton of caches for every part of the process, and a lot of switches and logic, and that lets us be sure this one setup can handle many different characters: whether the character has cloth or not, whether it has fur or not, whether there's wind, whether there are collisions, whether we're using the basic or the advanced fur sim — we want this one setup to cover as much as possible. To distribute things across the farm, we basically take any stage throughout the entire process that takes more than a few minutes to complete — and I should say we're definitely taking advantage of having a large render farm — and any non-simulated stages are chunked up even further, to put them out on the farm more efficiently. This is really just a high-level view of the tasks; the real thing is about 15 to 22 different caching nodes, depending on the setup for that character and that shot. We found that for about a hundred frames of animation this would maybe take two or three hours to complete, but it was all happening automatically in the background.
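As a tiny illustration of the chunking idea for the non-simulated stages, here is a hedged Python sketch of splitting a frame range into chunks that could each be submitted as a separate farm task. The chunk size and the submission call are hypothetical; only the overall idea comes from the talk.

```python
# Illustrative only: split a frame range into chunks for concurrent farm caching.
def frame_chunks(start, end, chunk_size=10):
    """Yield inclusive (first, last) frame ranges covering start..end."""
    frame = start
    while frame <= end:
        yield frame, min(frame + chunk_size - 1, end)
        frame += chunk_size

for first, last in frame_chunks(1001, 1100, chunk_size=25):
    # submit_cache_task(node_path, first, last)   # hypothetical farm submission
    print(f"cache chunk {first}-{last}")
```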
Now, although there are differences between the rigs for our characters, we did try to standardize them a bit with some common parameters — things like what kind of wind is being used, whether collisions are happening, what type of hair simulation to run — and many of those controls would be set from our production database, which means for some shot changes we never even needed to open Houdini at all: we could set the switches, let the ROP network do its thing, and that would work well.

That basically covers the automated pipeline — but how well does it actually work? Let's start with the good bits. We got dynamic results to the animation department and the director within a few hours of animation checking in their keyframes. On Peter Rabbit, at least, about 80 percent of the character-shots were automated without needing any char FX department input. If shot work was needed, it was easy enough for an artist to open something up, make a change and feed it back into the pipeline — any shot changes fed back pretty easily. It also let the char FX artists really focus on the hero or necessary parts of the sims. The process was so successful we even managed to get crowd shots working with it, and some of those would have up to 70 characters or so; even in a shot like this, if we had to do specific work on one or two characters, it still meant 95 percent of it went through automated. As our process matured we saw hundreds of cloth and fur shots being automated and finaled straight from animation, including some fairly complex shots like this.

All right, so the bad bits — not everything always goes smoothly. Early rigs especially were very prone to failure, although that vastly improved as we went through production; part of the problem was that the rigs were quite complex and they took a lot of time to learn. There are a few scenes where characters are interacting with live-action actors and actresses — getting petted, or having their jackets touched, or something like that — and we really struggled to make those work automated; there was just way too much specific setup per shot, depending on exactly how the live-action actor was filmed. We did have some automated collisions, but they really only worked on basic setups: if there's a desk and a character is leaning on it, we'd catch that. We'd figure out where those collision objects were by looking at the proximity around the character, and it kind of worked, but only in basic scenarios. Sights like this were not uncommon until we had some quality control steps in place — but we did figure out how to automate our blooper reel, so that was kind of cool.

Having said all that, we of course still get problems in the rig, either from unexpected input or from bugs in the setup, so we started trying to find ways to fail smarter. If we had a whole bunch of processes happening and one of them failed, we made sure that it stopped that chain from calculating, but let other processes on side branches continue. We also wanted to make sure we failed with very specific errors, so that artists could find the problem as quickly as possible. That was especially important because in some of these shots the logs would be checked by people outside our department — maybe a technical assistant in the animation department who wasn't that familiar with Houdini — so we wanted to be able to point out exactly where problems occurred.
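As an illustration of the "fail with a specific error" idea, here is a hedged sketch of the kind of check node that could live in the rig, written as a Python SOP: if two inputs no longer have matching point counts, it stops the chain with a message that someone outside the char FX department can act on. The particular check and wording are examples, not the actual production checks.

```python
# Hedged sketch for a Python SOP (illustrative): fail early with a specific,
# human-readable error instead of letting a solver produce garbage downstream.
import hou  # available inside Houdini

node = hou.pwd()
rest = node.inputs()[0].geometry()
anim = node.inputs()[1].geometry()

if len(rest.points()) != len(anim.points()):
    raise hou.NodeError(
        "Point count mismatch between rest and animated body "
        f"({len(rest.points())} vs {len(anim.points())}). "
        "The character asset has likely changed; re-run geometry prep before simulating."
    )
```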
We'd also point out, inside our digital asset, where those errors were — you can see how many error checks we have within it — and all of that tried to guarantee that processes that had already completed were still valid, so if we had to send something to the farm again we could just pick up where we left off.

Our biggest issue, and the thing we especially want to improve, is finding a balance between rigs that are easy to update and rigs that are easy for the artists to understand. On Peter Rabbit we did not have these massive digital assets, and that made it extremely difficult to update characters: whenever we found a bug — which, depending on where we were in the process or the show, could be once every week or two, or multiple times a week — it would take one to two days to re-implement that fix into every single character we had. The implementation of our massive digital asset fixed that and made it very easy to update, but it left a node graph that wasn't as easily followed by the artists, and if they needed to do something inside the digital asset it became unlocked and we could no longer update it. So part of our future work is to find an even better balance between those conflicting ideas.

That's more or less what we've done so far. A little bit about some recent and future improvements we're looking at: first, as I said, we've updated our fur simulation to Vellum — we get much better results a lot quicker, and it's a lot more artist-friendly to interact with. We've added some basic jiggle using CHOPs to help fake muscles, and we've added a lot more error check nodes — if anything goes wrong, we want to know about it right away and stop the processing of everything else on the farm.

What's our process for making sure we're actually improving things? Basically, we treat our character rig as a program, and what I mean by that is that we are constantly improving it to handle more and more edge cases as we get them from shots. We fix bugs as we find them. We make sure to get feedback from the artists, so if artists have found common issues, or found solutions to problems, we put those back into the rig and update it for all characters. And finally, we create presets: for instance, if we're doing wind shots and we find settings that look good, we make presets that people can apply right away, and we bake those presets into some of the digital assets so they're available on all characters immediately.

Of course, it's not really all automated — a lot of it is, but ultimately we are not going for a realistic simulation; we want something that looks best on screen to the VFX supervisor or the director. So character effects artists are vital: hero shots and others need art-directed tweaking, and that could include character interaction, collisions, wind setups, pops, jitters or any general misbehaviour of the sim. While the artists are always very important, the automation process we have improves our efficiency throughout the pipeline. And even with all the automation within character effects, without the rest of the Animal Logic pipeline we basically wouldn't get an excellent-looking render: that includes any update to assets automatically delivering to downstream departments; an automatic first pass of lighting based on lidar and plate projection; ALFro, our toolset for grooming and rendering hair; Weave, the toolset that makes all the cloth look nice; and Glimpse, the Animal Logic renderer that makes everything look really pretty in the end.
So that's essentially it: most of our shots go through automatically, and the real benefit is that we get these rich, dynamic renders within hours of animation checking something in, which really helps feedback and iteration happen much, much quicker. Of course, the automated rigs don't build themselves, so I'd really like to thank the char FX team, the team leaders and the CG supervisors, the FX TDs for building the anim FX process, the pipeline TDs for managing it all, the R&D team for the ALFro, Weave and Glimpse brilliance, and of course the animation team for really putting up with us and putting their faith in us, because it really was a good team effort. If you're interested in becoming part of the team, please check the website for the positions we're currently looking for. There are also a few other talks from Animal Logic people today: Zara Albanian is talking at 11:15 about empowering storytellers through technology, and Aidan Sarsfield is talking at 4:15 about efficient pipeline management. Also, if you're interested in how exactly we used USD, come back here at 1:00 p.m. — Luke is going to be talking about forging a new animation pipeline with USD, and he'll go into a lot of the details. That essentially brings me to the end of the talk; I hope it was helpful, and I'm happy to take any questions.

Thanks, Adam. Any questions?

Q: Just a quick question about the post-sim fixing of the cloth — do you do it every sub-step, or do you do it after the simulation?

A: We don't really need to bother with sub-steps for the post-simulation fixes, because the cloth simulation is the thing that takes care of any quick motions and makes sure everything interpolates correctly between frames. Usually the post setup is just to smooth out jitters, or to add specific things that are wanted after the fact. Maybe something stayed stuck underneath the body somewhere for some reason, so we could bring it back out, Delta Mush it smooth and make it look like it was fine. There might even be sections where the sim looks great everywhere, but around the elbow it's just doing something odd; we might freeze it at a frame and point-deform it to the elbow just for that section, and then blend back to the simulation.

Q: With the cloth being built in Marvelous Designer — was that just a design choice?

A: I can't speak to specifically why it was done in Marvelous, that was a little outside our realm, but I believe it was a design choice — they were getting good results from what Marvelous was giving us. It also had the advantage that it gave us those flat panels, and the flat panels are very helpful in the cloth simulations themselves, relaxing back to the state they should be in.

Q: Those velocity fields that you created — were they just linear by distance, or was there more to how they were created?

A: You mean for our binding step, when we do our pseudo-collision? We basically take each hair, go from the root through to the tip, and make the velocity run along that curve; then, by smoothing that and adding in the normal from the body, the resultant field almost by definition is not going to be intersecting — because those VDB velocity fields are what we advect back through, we're pretty sure our hair won't be intersecting anything. When there are really extreme poses there's still going to be some intersection, and that's when we have to do some of the advanced Vellum stuff, just to make sure nothing is intersecting the body. Does that make sense?
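On the first question above — post-sim fixes rather than re-simulating — here is a tiny illustrative numpy sketch of the simplest kind of fix mentioned, smoothing jitter by blending each cached frame's points with its temporal neighbours. Production fixes are far more targeted (masked regions, Delta Mush, freezing and blending), so this is only a sketch of the principle, with assumed array shapes.

```python
# Illustrative only: temporal smoothing of cached cloth points to calm jitter.
import numpy as np

def smooth_jitter(points_per_frame, strength=0.5):
    """points_per_frame: (F, N, 3) cached positions; returns a smoothed copy."""
    p = np.asarray(points_per_frame, dtype=float)
    out = p.copy()
    # blend each interior frame towards the average of its neighbours in time
    out[1:-1] = (1 - strength) * p[1:-1] + strength * 0.5 * (p[:-2] + p[2:])
    return out

frames = np.cumsum(np.random.default_rng(1).normal(size=(24, 100, 3)), axis=0)
smoothed = smooth_jitter(frames)
print(frames.shape, smoothed.shape)  # (24, 100, 3) (24, 100, 3)
```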
Any other questions? Going once... okay — one more.

Q: For interactions where two characters are touching each other, how did you handle that?

A: Interaction between characters — we've had a couple of tries at automating that. The basic idea was: we bring in our main character, we have all the other characters as well from USD, and every 10 or 20 frames we check whether that character is intersecting another character. If it is, we do the equivalent of our volume geo de-intersection fix, but we de-intersect it away from the other character at something like half the amount we normally would, and the other character is also de-intersected by about half the amount, and that tries to leave space in between them to let both of their clothing sit. Practically, though, while that worked on the couple of shots we kept trying it on, once we got to actual production shots there were just a lot of notes and feedback — because it's two hero characters, they get scrutinized a lot. So the automated process kind of worked; more or less it might give us something to start with, but we still needed to do a lot of manual work. At least once that manual work was done, the rest of the system runs automatically after it, so we can pass those cloth changes or geometry changes back out and on they go.

Q: Does the setup work well with multiple layers of clothing?

A: Yeah, it really did. I didn't quite go into the details of this, but in that user setup section there's a cloth simulation section that's different for every single character. It has the same inputs and outputs — it object-merges the cloth inputs from the template rig and exports them back there — but the setup itself is done per character, because obviously, depending on exactly what that character is, the setup is going to be different. So I didn't show exactly this, but there is still a lot of manual setup per character; it's just that we have this system to hold it all together and to automate all the parts in between.

Thanks, Adam.
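As a footnote to the two-character answer above, here is a hedged numpy illustration of the "half each way" de-intersection idea: each character's points get pushed out of the other character's SDF by roughly half of their penetration depth, leaving room for both sets of clothing. The SDF here is faked with an analytic sphere so the sketch stays self-contained; the real setup samples SDFs built from the actual character geometry.

```python
# Illustrative only: push points out of another character's SDF by half the penetration.
import numpy as np

def sphere_sdf(points, centre, radius):
    return np.linalg.norm(points - centre, axis=-1) - radius

def push_out(points, sdf, centre, amount=0.5):
    """Move points that are inside the SDF outwards by `amount` of their penetration."""
    d = sdf(points)
    inside = d < 0.0
    normal = points - centre
    normal /= np.linalg.norm(normal, axis=-1, keepdims=True)
    points = points.copy()
    points[inside] -= normal[inside] * d[inside, None] * amount  # d is negative inside
    return points

# character A pushed half-way out of character B's (spherical) SDF;
# B would then be pushed half-way out of A's SDF in the same manner
a_points = np.array([[0.2, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(push_out(a_points, lambda p: sphere_sdf(p, np.zeros(3), 1.0), np.zeros(3)))
# -> [[0.6 0.  0. ] [2.  0.  0. ]]
```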
Info
Channel: Houdini
Views: 15,961
Rating: 4.9571047 out of 5
Keywords: vfx, visual effects, visual effects software, vfx software, cg, cg software, siggraph
Id: NgOxluYHb54
Length: 46min 8sec (2768 seconds)
Published: Tue Dec 03 2019