GDC 2016 - Motion Matching, The Future of Games Animation... Today

Video Statistics and Information

Captions
Oh, there we go. I'm super excited to talk about this, you have no idea. OK, I'm going to talk about motion matching, the future of games animation... today. But before we get to that: my name is Kristjan Zadziuk, and I'm the animation director at Ubisoft Toronto. As you can see, I've been in the industry since 2000, with various different styles of game under my belt. Then in 2004 I decided to lean more towards animation-driven games such as Assassin's Creed, which led me through a couple of studio moves before Ubisoft Toronto tempted me back and gave me the animation director position on Splinter Cell: Blacklist, which shipped in 2013. I've since been working on various co-dev projects, a couple of unannounced titles, and some of the cool tech that you'll have seen here. You'll also notice, if you're keen, that Dan and I have worked together as well.

OK, a quick disclaimer before we get started. This presentation demos a new animation prototype. This is tech that the team is very, very passionate about, and because of the potential applications, Ubisoft is keen to invest in it and push our games forward. The tech is being considered for future games, but the assets seen here do not represent any game currently in development. Is that clear? Good. As long as everyone's clear, and we've all signed NDAs, we're fine.

OK, so here's an overview of what's to come. I'm going to give you a quick explanation of what motion matching is, then I'm going to talk about the process of how we work with motion matching, then I'll describe ways that we can manipulate the system, then I'm going to show you a couple of cool tests that we've tried out just because we can, and then I'm going to wrap up with a conclusion covering successes, failures, and things we want to do in the future.

So, motion matching is a concept that's been around for a while, and it's been known by a few different names, such as motion graphs or motion fields, but it's only really this generation of consoles that has made it practical to use. To understand motion matching, we first need to understand the current standard process. I'm massively oversimplifying this, but once we know what we want to do, we'll shoot mocap: we'll capture loads of individual cycles and movements, making sure that we carefully capture as many actions as we need for our systems. Then, once we get the data back, the animators will painstakingly cut up the clips, tidy up the poses, create the loops, and polish the animation (made a ton easier by Dan's tech). For years, games animation has been using things like state machines as standard, with node-based setups becoming the option favored by most projects. If you're here, chances are you know what one of these is, and you know how difficult they can be to set up correctly: we have to add nodes, we have to set up rules, we have to add blend times, loop points, etc., etc. And then, if you get all of that working, you should hopefully get something that goes into the game, if you're very, very lucky.
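(To make that pain concrete, here is a minimal C++ sketch of one node in such a setup; all names are illustrative and not from any Ubisoft engine.)

// Illustrative sketch of one node in a traditional locomotion state
// machine. Every edge needs a hand-authored rule and blend time, and the
// graph grows quickly as moves are added.
#include <functional>
#include <string>
#include <vector>

struct Transition {
    std::string toState;             // e.g. "Jog" or "PlantTurnLeft"
    float blendTimeSeconds;          // hand-tuned cross-fade length
    std::function<bool()> rule;      // hand-written condition, e.g. speed above a threshold
};

struct AnimState {
    std::string loopClip;                 // e.g. "walk_fwd_loop", with loop points set
    std::vector<Transition> transitions;  // one authored edge per reachable state
};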
Of course, that's oversimplifying the process, but what if there's another way? The idea of motion matching is really to find the best possible way to go from A to B with as little fuss as possible, allowing animators more time to focus on what the actions actually are, instead of having to worry about losing the fidelity of their work through manipulation during implementation.

To highlight one of the issues we wanted to address, here is an example of weight shifting due to a desired change of direction. Let's see if this works... there we go. This seems like a simple enough move, but it is incredibly hard to recreate in a state machine while remaining responsive. If we break the move down in slow motion, you can see the moment I decide to change direction, where the entire body bends and the momentum shifts towards the desired direction. Solving this in a state machine would be difficult, as we would likely need a transition animation to replicate the actual move; the problem is that we would lose responsiveness, we would probably lose quite a lot of fidelity, and it wouldn't look grounded either, because we'd probably have to do it with blends and it would look a bit weird. This is the core of what we wanted to fix, but it's also a good example of the sort of quality we were trying to achieve.

None of this would be possible unless we assembled a team of animation experts. Great gameplay animation starts with the animation programmer: Michael Buttner, whose idea motion matching had been in the first place, along with a colleague who's here as well. He was just looking for the right opportunity to push things further, and coming to Ubisoft Toronto seemed like the perfect time. We then added a gameplay programmer, Mike, and technical art director Alexander Bereznyak, who I think is also in the room somewhere. Then we added Patina Marquees, and she's the perfect example of the kind of animator you want on a crazy project like this. And of course, holding it all together: me, with a nice beard, there we go.

OK, so we set a clear mandate of four goals that we wanted to improve upon: realism, control, simplify, and variety. With many Ubisoft games leaning more towards realism, we wanted to improve core locomotion and aim for biomechanically correct human movement. Control is ultimately the feel of the character: retaining animation quality whilst allowing the player to always be in control. Simplify is about the way we implement data: allowing animators to focus on animation creation and rapid iteration, giving them the ability to test or change the style of their character quickly. And variety: we needed to be able to add high-quality variety quickly and easily, and it isn't just about replacing a walk cycle; we wanted to look at replacing entire core locomotion sets.

Which brings me to what motion matching actually is; this is a very, very high-level description of it. We describe a small number of characteristics of what we want the character to do over a certain amount of time, let's say one second. We take into account things like root position and velocity, the past and present trajectory, joint positions such as feet and hands and their velocities, as well as any tags that we may add. We take all of this and find an appropriate matching section in what is effectively an unstructured library of poses, meaning we could jump anywhere in a piece of data if it matches our current pose at the time of input. OK, stay with me.
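(A deliberately brute-force C++ sketch of that matching step. The feature choice, the names, and the one-second window are illustrative; a real implementation would use acceleration structures rather than a linear scan.)

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct PoseFeatures {
    std::vector<Vec3> trajectory;    // past + predicted root positions over ~1s
    Vec3 rootVelocity;
    Vec3 leftFootPos, rightFootPos;  // key joint positions...
    Vec3 leftFootVel, rightFootVel;  // ...and their velocities
    int  tags = 0;                   // optional bitmask: idle, garbage, etc.
};

static float Dist2(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// How badly a library pose fits what we want the character to do.
// Assumes candidate and query use the same trajectory sample count.
static float Cost(const PoseFeatures& candidate, const PoseFeatures& query) {
    float c = 0.0f;
    for (std::size_t i = 0; i < query.trajectory.size(); ++i)
        c += Dist2(candidate.trajectory[i], query.trajectory[i]);
    c += Dist2(candidate.rootVelocity, query.rootVelocity);
    c += Dist2(candidate.leftFootPos, query.leftFootPos)
       + Dist2(candidate.rightFootPos, query.rightFootPos);
    c += Dist2(candidate.leftFootVel, query.leftFootVel)
       + Dist2(candidate.rightFootVel, query.rightFootVel);
    return c;
}

// Every frame of every file is a candidate: we can jump anywhere in the
// data if it matches best. Assumes a non-empty library.
static std::size_t FindBestFrame(const std::vector<PoseFeatures>& library,
                                 const PoseFeatures& query) {
    std::size_t best = 0;
    float bestCost = Cost(library[0], query);
    for (std::size_t i = 1; i < library.size(); ++i) {
        const float c = Cost(library[i], query);
        if (c < bestCost) { bestCost = c; best = i; }
    }
    return best;
}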
Which brings me to our first test. I'm going to skip forward a few months, because I wanted to share the very first playable test that proved to us that this was something special. As Dan said in his presentation, it's quite hard to demo this live, so I've pre-recorded a video for you.

So here's the first test that Michael and I worked on for motion fields. This involved me getting into a motion capture suit and running around our studio: walking, jogging, running, going through various different movements, plant turns, stopping, running around in circles, and sneaking. We felt this would give us a good, widespread base for everything that we needed. This is about as real a simulation of a human as you can get, and it's very weird for me sometimes, because I can see that this is me. The arrow at the bottom is the player input, and as you can see it's relatively responsive, with, I think, maybe ten minutes' worth of data at the time. We added the ability to jog on the left button, so you can see there's a very smooth transition where the camera very easily moves behind you, and then you can see there'll be some quick changes of direction in there if it fitted. Sometimes the plant turns would work, sometimes they wouldn't, and that was going to be worked on a little bit better. All of this was in one file; we didn't separate anything out, as we felt that if everything was in one file we would get a better combination of all the different moves. If I let go of the left shoulder button, this slows down to a nice walk, and conversely, there you go. And you can see that I rub my nose, which became kind of a running joke on the team. If I hold the right shoulder button, I quickly move into a sprint, and the cool thing about this is that it completely recreates the weight shifting from the sneaking that we put in there. There were also quick changes of direction: you could see the quick change from left and right, I could go right very quickly if I wanted, and I could double back on myself. Something we found that was really nice: if I was running around in a circle, I could take my finger off the button and it would actually slow down and decelerate in a circle. I could also start sprinting again, completely let go of everything, and you see we come down to a nice quick stop. This works really well in an open gym with nothing in the way, obviously, but it would probably cause more problems if there were obstacles in there. As a first test, though, we felt this was really successful.

After seeing this first test, it immediately raised a few questions with the team, a few fears, and a lot of assumptions. We understood that no one had tried to ship a game with this before, and we would need to lead the way to convince everyone not only that it was worth the investment, but that we were worth following. As an animator, it absolutely terrified me. What does this mean for our animators? How will animators work with this? How would we implement it? What if we can't get access to the mocap facility; how do we prototype our new ideas if it only works with mocap? Exactly how much data will we need? And would producers absolutely hate me because of how much it's going to cost them? With our initial success, it appeared that all we'd need to do would be to plug in the data and leave it at that. So would animators even be needed anymore? You can see why I didn't want to tell anyone about this.

OK, but this brings me to the process. We needed to find solutions to some of these problems, so I'm going to break it down.
The first step for us was to create a routine, which we called dance cards. These were set out in order so we could capture data more efficiently, and they had everything we needed to create a core locomotion set; we would repeat them for walk, jog, and run. But it could be really tiring doing everything in one shot: one shot could last upwards of four minutes, you needed to remember a lot of moves, and you needed to be extremely fit, which basically took the actor out of character. So we split the dance card into smaller files to make them a little more manageable. We broke them into starts and stops, plant turns, circles, and acceleration/deceleration; in the acceleration/deceleration file we'd have longer walk, jog, and run cycles, and we'd also have the transitions between all speeds of locomotion. The key one for me was sneaking.

The goal of these dance cards was to find a way to capture the minimal number of moves needed to create the maximum amount of coverage for a basic locomotion set. We would start off with a start forward and stop, then we'd move back, go over the left shoulder, go over the right shoulder, go 45 degrees right, 45 degrees left; we'd try to do everything we could for that core locomotion set, trying to cover it basically like scribbling on a piece of paper. We would then do large and small circles to cover different types of banking, longer circles to give more consistent loop coverage for walk, jog, and run, and then sneaking to handle smooth changes of direction (back to the thing I was talking about before).

Once we had all this data back, the files would look a little something like this example; here you've got an example of circling. We would then export these FBX files straight into the engine, without any touch-up, just to see what happened. Ready? OK, so here is the result: completely untouched, in engine, 100% player controlled and responsive. Sure, it was an old open gym with nothing in the way, but we had just created, in four hours, a better looking and better feeling locomotion system than the best teams in Ubisoft could build in at least three months. Awesome and terrifying in one video. As you can see, we'd have things like the sneaking in there, and you can see the biomechanically correct human movement. There's no IK on this, so there is still minimal foot sliding at this point, but we get proper weight transfer, acceleration, deceleration, the whole nine yards. This was also using the split dance card method: we couldn't always be sure it would pick the move that we wanted, and the system tended to pick the move that was the best match for the pose at the current time of input, providing some happy accidents but equally some unpredictable results.

So we made a few changes to the routine and adapted it to work with what we called second-person movement. We wanted to know how well the system would handle more complicated locomotion such as strafing. This was much the same dance card as before, except we would face the same direction during the capture, as if the camera was always behind us, which resulted in complex foot crossovers and torso twists. Again we processed the data and exported it, giving us this: the player's movement was connected directly to the camera placement controlled by the player. We had eliminated the need for complex foot-crossover transitions and added biomechanically correct weight transfer when changing direction, and on top of that it connected to the previous dance card data to give us a really interesting transition between second-person and third-person movement. Just let it run for a bit.
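(To make the dance card idea concrete, here is a hypothetical catalogue of the split files, including the strafe variant; the file names and types are invented for illustration, but the routines follow the talk.)

#include <string>
#include <vector>

enum class Gait { Walk, Jog, Run, Sneak };

struct DanceCard {
    std::string file;      // raw FBX, exported straight into the engine
    Gait gait;
    std::string routine;   // what the actor performs in this take
};

const std::vector<DanceCard> kCoreLocomotionSet = {
    {"walk_starts_stops.fbx", Gait::Walk,  "starts/stops: forward, back, 45s, over each shoulder"},
    {"walk_plant_turns.fbx",  Gait::Walk,  "left and right plant turns"},
    {"walk_circles.fbx",      Gait::Walk,  "large and small circles for banking, long loops"},
    {"walk_accel_decel.fbx",  Gait::Walk,  "speed transitions between all gaits"},
    {"sneak_directions.fbx",  Gait::Sneak, "smooth changes of direction"},
    {"strafe_walk.fbx",       Gait::Walk,  "same routine facing one direction (crossovers)"},
    // ...repeated for jog and run to cover the full set
};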
OK. So we'd proven that adding raw data could get really decent results, but for us, moving forward, we needed to learn how to manipulate this data, which brings me to some of the most frequently asked questions by animators about motion matching. How can animators control quality versus responsiveness? Can this be integrated into our existing systems? And how can animators work with this system? We wanted to push the boundaries of what was possible, so we decided to try and add different types of locomotion using the human data as a base, with varying degrees of success. Let me explain.

So here we have what we called Ape. We figured if it was good enough for the circus, it was good enough for us. The interesting thing about this particular type of locomotion is that we actually attempted to motion-capture it, but it was so complicated it was basically useless; it turns out Andy Serkis does actually know what he's doing. So we used this as an opportunity to deconstruct how the system worked. We brute-force keyframed the dance card, stitching together keyframed clips that would match the dance card format. Then we exposed the debug settings, so we could see which frames and which files the system was using and adapt accordingly. It was a bit long-winded, but it was like taking something apart to understand how it works and then putting it back together again, which gave us invaluable insight. (It's a bit slow to catch up there.)

For each move to be read by the system, we would need to add about a second either side of the move; that way, the trajectories of each move could be accurately seen by the prediction model. This is what a start forward and back would look like. By adding a longer pause at the end of the move, it would highlight that this was a stop, even though we wouldn't see that part of the animation, and removing this pause highlighted the left and right plant turns. So far nothing too crazy, this is kind of standard stuff, but once we knew the basics of how the system was reading the data, we could add more natural, complicated motions. This was the key to knowing where to polish a character, and it also meant that we could start adding all manner of interesting motions, depending on how we read the prediction model and filled in the blanks. This gave us a good insight into how the system worked.

Another way to control this was simple tagging of areas of interest or waste. It meant that we could highlight areas we wanted to favor in a file, such as idles, garbage, or interesting walks, depending on our needs. This meant we could get more use out of a single file, and we could add extra elements of character. It added an element of predictability to a system that can at times be largely unpredictable. To tidy this up we would need to implement a simple IK solution to remove the foot sliding, but the results were a step in the right direction. (I don't know how many videos I have in one slide, OK.)

So, I owe thanks to Michael Buttner for the next two technical slides; you'll have to excuse me if I get this wrong. Motion shaders were created as a node-graph system to allow animators and designers more control over how the moves would be chosen. They were a way to define the cost functions: how strongly we wanted to match the current, future, and past trajectories of limbs or root nodes when looking for a match. Essentially, we could say that a position was 10% important, the velocity was 30% important, and the hands or feet were 80% important, meaning we could change the look and feel of the locomotion, as it would use the available poses differently.
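(A sketch of a motion shader as a plain weight set, reusing PoseFeatures and Dist2 from the matching sketch above. The percentages are the ones quoted here; the structure itself is illustrative.)

struct MotionShader {
    float positionWeight    = 0.1f;  // "a position was 10% important"
    float velocityWeight    = 0.3f;  // "the velocity was 30% important"
    float endEffectorWeight = 0.8f;  // "hands or feet were 80% important"
    float inputFavor        = 0.5f;  // 0..1 slider: respect animation vs. player input
    float stickDamping      = 0.2f;  // more damping = longer accel/decel, car-like feel
};

// inputFavor and stickDamping would shape the predicted query trajectory,
// not the cost itself; only the matching weights are applied here.
static float ShadedCost(const PoseFeatures& candidate, const PoseFeatures& query,
                        const MotionShader& s) {
    float c = 0.0f;
    for (std::size_t i = 0; i < query.trajectory.size(); ++i)
        c += s.positionWeight * Dist2(candidate.trajectory[i], query.trajectory[i]);
    c += s.velocityWeight * Dist2(candidate.rootVelocity, query.rootVelocity);
    c += s.endEffectorWeight * (Dist2(candidate.leftFootPos, query.leftFootPos)
                              + Dist2(candidate.rightFootPos, query.rightFootPos));
    return c;
}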
These weights could be in the form of sliders between 0 and 1 that would be used to say what was more important: respecting the positions in the animation, or favoring the player's input, both created using the same animation pool as a base. We also looked at adjusting the amount of damping on the stick input, resulting in longer or shorter acceleration or deceleration. It means we could make something feel like a car if we respected the animation more, or we could make it respond immediately if we respected the input more.

OK, so to bridge the gap between this system and existing systems, we added the ability to use motion matching as a traditional animation node, which could sit inside a state machine. When a motion matching node becomes active, it understands what your current pose is and finds a relevant match to connect to, meaning it would be great for things like transitions in and out of existing systems: you wouldn't need to set up any complicated rules, you could just use a motion matching node instead. It could also be used as a straight-up replacement for your existing movement system. We've only really started to scratch the surface of what this is capable of, but we really wanted to push it a little bit further.
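(A rough sketch of that node integration, reusing PoseFeatures and FindBestFrame from the matching sketch; the AnimNode interface is hypothetical, not from any shipping engine.)

class AnimNode {
public:
    virtual ~AnimNode() = default;
    // Called by the state machine when this node becomes active.
    virtual void OnActivate(const PoseFeatures& currentPose) = 0;
    // Returns the library frame to play this tick.
    virtual std::size_t Update(const PoseFeatures& query) = 0;
};

class MotionMatchingNode : public AnimNode {
public:
    explicit MotionMatchingNode(const std::vector<PoseFeatures>& library)
        : library_(library) {}

    // No authored transition rules: whatever pose we arrive with, we simply
    // find the closest match in the data and continue from there.
    void OnActivate(const PoseFeatures& currentPose) override {
        frame_ = FindBestFrame(library_, currentPose);
    }

    std::size_t Update(const PoseFeatures& query) override {
        frame_ = FindBestFrame(library_, query);  // re-match as input changes
        return frame_;
    }

private:
    const std::vector<PoseFeatures>& library_;
    std::size_t frame_ = 0;
};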
So what else can be done with motion matching? Using the dance card, we thought it would be fun to try out a few different types of locomotion, so we spent some time capturing off-balance movements. When we added this to the system, it gave the appearance of a pretty successful off-balance stumble, while remaining completely player controlled: we could push the character around everywhere and it would stay completely responsive. You can imagine this being used for things like persistent injury locomotion, where we could manipulate the inputs to define just how responsive a player was. One second... there we go.

Using the dance card (sorry), Michael and I thought it would be fun to try out another type of locomotion, so we captured a bunch of impact motions, involving being pushed in all sorts of directions with all different amounts of force. As we didn't know how well this was going to work, we attempted a few different methods to see what kind of results we might get. Here you can see some of the team being just a little bit overzealous with me; I think I might have pushed a couple of people off that thing, but I got it back at the end, it's fine. With the system being pose-based, if we could create an impact pose using a combination of physics momentum and motion matching, we could use that pose to connect to a sort of impact animation. This would give us a seamless transition from locomotion to impact locomotion that could actually be based on physics impulses, and as this would also be a part of motion matching, we could use the poses in this range of motion to connect directly back to locomotion. This would effectively close the loop from animation to physics and back to animation, whilst retaining complete player control of the character. This is something we were kind of proud of; kind of cool. As you can see here, the results were pretty successful, and again it's something we plan on exploring a lot further.

OK, which brings me to some of our early conclusions. Remember our initial fears and assumptions. How will animators work with this? Well, for me, this is the mentality shift: this is potentially the biggest transition since the introduction of motion capture to the pipeline in the first place. Will this only work with motion capture? Well, you can keyframe within the structure: small amounts of data to set up the basics, and then fill it out from there. So you can use keyframing with this system; you've just got to use a lot of keyframing. Can you just plug in the data for the finished result? Well, you can plug in data for a finished result, and originally I felt that this would yield the best results, just throwing all the data at it, but it seemed to muddy the waters a little bit.

So I have a little analogy. Take Chris Pratt when he was in Parks and Recreation: all he wants is burgers, he clearly loves burgers, and why not, In-N-Out burgers are particularly delicious. (Oh, come on, timing... there we go, perfect.) So he needs to work out, he needs to start eating right; he becomes nice and lean, and he starts getting roles in films like Guardians of the Galaxy and Jurassic World. But what does this have to do with motion matching, except for me wanting to show a bunch of pictures of Chris Pratt with his top off, which is weird? Well, basically, we found the system performed better if we fed it the right data as opposed to all the data. If we were efficient with the type of data that we fed into it, it became a lot more manageable, it would perform a lot better, and it would actually look a lot better as a result. There you go.

Which brings us to some of our successes. Minimal setup is required: once the initial setup is created, it's possible to just throw an entire locomotion set at the system to see what happens. We get higher-quality motion, and we get to see the actual character of the actor; you can imagine it being used for things like sports games, or for someone who has particular nuances. You can see from the demo earlier on, if anyone knows me: I walk like that, it looks like me, I scratch my nose... then, yes, weird. Anyway, by using the idea of dance cards, we were able to easily break down all the moves we needed in the smallest amount of time and space, and it was super simple to add a wide variety of locomotion styles to the system.

But it wasn't all successes. There was a steep learning curve to working with motion matching, so it's only fair that I highlight some of the areas that we feel we can improve upon. Editing data can be tough: the system gets you about 70% of the way really, really quickly, but the problem is that last 30%, which is kind of where Dan's tool comes in, which is cool. It can be tough to manage: we found that sometimes removing data from a file, even if we felt it wasn't being used, would actually cause more problems. The system was very data-heavy; we would basically put large amounts of wasted data into the system, and then of course there's the cost and availability of motion capture facilities. Currently the system is restricted to human-like rigs: everything we did with this system was on the same rig, even when we did different types of locomotion, and all the animation in there was full-body, so we didn't yet have things like layers. It was also originally quite difficult for designers and animators to work together using motion matching, as design would alter the data to their needs for precision and alter the numbers that we used to actually choose the poses and the animation; this would lead to really undesirable results and would actually break the animation.
For some reason, mirroring didn't work. We thought we'd try that, but again it seemed to muddy the waters more than it fixed, so we felt it needed a clearer choice. And currently, as I said, we've had a few tests, but our main focus at the moment is locomotion. Which brings me to the next steps, at least as we see them. So far we've only really used this as a base for locomotion; we've touched on a few other systems such as traversal, cover, and AI, but there's plenty more work for us to do here. We would also like to make improvements towards a hybrid system, to kind of stop animators from freaking out and to help bridge the gap. Layering is also something we've been looking at, so the system wouldn't be completely reliant on full-body motions, but this is at an early stage. With a greater understanding of what the system is reading, we can make improvements to the dance cards: splitting files up, making them easier to manage, and allowing us to get cleaner, more readable data. We've also looked at improving the routine so we get better data from that as well. And to get more from the data, we need to combine it with other tech being developed at Ubisoft and in its engines, things like Alexander Bereznyak's IK Rig, which I talked about a second ago, and of course Dan's automation tool. For more information on some of the really cool animation innovation that is happening at Ubisoft as a group, I suggest you check out these two talks in particular, on Wednesday and Thursday. Ubisoft has been really good about letting us talk about stuff like this so early on, and who knows, after today they may not let me do it again, so get it while you can. For us it's super exciting, and it really is just the beginning. Thank you.

Have I left any time for questions or anything? Three minutes? Oh yeah, got it under, wow. OK, anyone have a question? If you do, please approach the mic over there and I'll do my best to answer.

Q: You mentioned mixing it with state machines. It's all full-body; can you get it so you can do upper-body stuff, like shooter stuff?
A: That is something we could definitely look at, and something that I think would work very well for a system like this, but we haven't done much on that just yet.

Q: I know it's really early on, but do you think this would ever make it into third-party software, where everybody else could use it someday?
A: Certainly, if Ubisoft feels it's successful, then who knows what they might do with it, but it's so early on I don't think we'd know anything like that. Thanks. Anyone else? Please.

Q (repeated for the room): How long was the whole R&D process, in overall time, from start to the current stage?
A: One of the programmers working on this had had it as a seed in the back of his mind for a while, and when he came to Ubisoft it was an opportunity: we gave some time for R&D, and he came up with this idea.
To get the initial demo that you first saw working took, I think, about two months; that was the stick-figure demo. Then obviously it was just iteration, getting it into the various different engines and trying it in there. So it was actually quick to get to a point where we had the data playable, but working out how to work with it was the hard thing after that. We had to work out the process and what we were going to do with it, to get it into a state that would be shippable. Cool.

Q: You said you had problems with removing data you thought was useless. Could you go into what happened?
A: The thing is, when you do the dance card, if you're walking you'll repeat the same walking action in between each transition, because what the system is really looking for is those transitions, and when it comes out of one of those transitions it will go into a walk. So what was happening is: unless I changed my walk every single time, it would be the same walk. We found that if we had eight transitions in there, we were going to have eight or nine similar walks that we didn't really need. So when we added the tagging, we found ways to not use those, which meant that at runtime the system wouldn't see that data, and it would perform a lot better.

[The closing exchange is unintelligible in the recording.]
Info
Channel: Kristjan Zadziuk
Views: 105,344
Rating: 4.9749346 out of 5
Keywords: Motion Matching, Animation, Animation Bootcamp, GDC, GDC 2016, Video Games (Development), Motion Fields, Kristjan Zadziuk, Michael Buttner, Ubisoft Toronto, Technology, Robot, video game (Culture), inspiration, Ubisoft Toronto (Video Game Developer), Animation Tech, Future of Gameplay Animation, Mocap, Motion Capture, Talk, Character Control, Character Animation, Human Motion
Id: KSTn3ePDt50
Length: 30min 38sec (1838 seconds)
Published: Sun Dec 25 2016