Easy AI Motion Capture Animation & Lipsync

Video Statistics and Information

Captions
Hey, in today's video I'm going to explore a new, pretty easy way to create AI animation using motion capture: you get these short Stable Diffusion animations as well as a 3D FBX file of an animated character that you can take into your preferred 3D package and apply the animation to your own character. I'm also going to show an exciting new AI tool that lets you add lip syncing to your character animation, regardless of how you created the original animation. Let's jump into it.

The main AI tool I'll be using today is Mootion (motion with two o's), and I'll include a link in the description below. I know they're working on a website behind the scenes, but that's not live yet, so for now you create in Discord. In their Discord server you can click on Getting Started to find details about their new motion capture command. You can head over to their different creation rooms and use the /capture command there, or you can do it privately via a chat: head to direct messages, click "Find or start a conversation", search for Mootion and choose the Mootion bot. In my chat you'll see I've already created various generations. Go to the bottom, click to message the bot, press forward slash, choose capture, and then upload the source video file that will drive the animation via motion capture; I've got this slightly awkward clip of me gesturing, because I'm not a TikTok dancer. You can then click to choose your character (I'm going to choose the bot), and for "in place" you can choose true or false. At the time of recording, false isn't working as expected (the character should move around in 3D space), but I'm aware they're trying to fix this, so for now I'm going to leave it on true and perhaps revisit it in a future video. Then press enter, and it says "Mootion bot is thinking". Time for a cup of tea.

OK, after a minute or so we have the 3D animation of the bot gesturing, following my motions. While there are no finger animations yet, the head, body, arms and legs all move really well. There is some floating around, but hopefully that will be fixed once the in-place false setting is working.

With this first step completed, you can download a 3D FBX file of that character animation, and you can also create a Stable Diffusion video generation by clicking Video Gen and writing a quick text prompt. I've written "female hip-hop dancer in white puffer jacket, pink neon baseball cap, blue eyes, skinny pink jeans, white Converse trainers, in enchanted woodland, photorealism, cinematic". You can then choose between two art styles, digital art or anime; I'm going to leave it on digital art and press submit. While we wait for that, I'm going to trigger another generation with exactly the same prompt (type in number one) and choose anime as well. And I've got those two generations back, and both look really cool. I've got some seriously tight leggings on, and there is some distortion; if you were generating this via something like ComfyUI using Stable Diffusion, you could try different settings, separate backgrounds, use masks and so on, but as an easy approach this is fantastic, and the anime one turned out really nicely as well.

So in terms of a workflow for creating AI animations, this process is really quick, and you do have that slight level of control, since it's using motion capture to drive the animation. Yes, the output has that Stable Diffusion feel, but it is really easy and a lot of fun. Quick shout-out to my mom: thanks very much for letting me film this clip earlier and for playing around with the tool. Cheers, mom.
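As mentioned above, if you want more control than the one-click Video Gen gives you, you can run Stable Diffusion yourself on exported frames and experiment with settings, masks and backgrounds. Below is a minimal, hypothetical img2img sketch using the Hugging Face diffusers library (this is not Mootion's own pipeline); the model ID, file names, strength and guidance values are placeholders to adapt to your own setup.

```python
# Minimal img2img sketch with Hugging Face diffusers (not Mootion's pipeline):
# restyle a single exported frame with your own prompt and denoising strength.
# Model ID, file names and parameter values are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any SD 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_frame = Image.open("frame_0001.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt=("female hip-hop dancer in white puffer jacket, pink neon baseball cap, "
            "skinny pink jeans, white Converse trainers, enchanted woodland, cinematic"),
    negative_prompt="low resolution, low quality, blurry hands",
    image=init_frame,
    strength=0.55,       # lower = closer to the source frame
    guidance_scale=7.5,
).images[0]

result.save("frame_0001_stylised.png")
```

Lower strength values keep the output closer to the source frame, which helps reduce flicker if you stylise a whole sequence of frames this way.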
If you wanted to create a 16:9 version of the output, you could use Pika Labs, which has recently moved to a paid tier, though there is still a free trial. Upload one of your video files (I'm going to use the anime one), press Expand Canvas, choose the frame size you're after (I'm going for 16:9), scale the clip down a little, and add a quick text prompt; I'm going to put "enchanted woodland, anime style", plus a few negative prompts such as low resolution, low quality and blurry hands. Then press generate. While the output from Pika Labs is far from perfect, it does show how you can expand the canvas, and hopefully it gives you some other ideas on how you could develop this process further.

I also want to quickly show how you could use that FBX file in a 3D package. Back on the initial generation, I choose Motion FBX and download the 3D file. I'm using Blender, the free open-source 3D package. In a brand-new scene, I highlight everything and press delete to get an empty scene, then go File > Import > FBX and choose the downloaded file, and we have the bot character with the animation applied.

I've also got this stock 3D character, which I found on TurboSquid, that I want to apply that motion to. I purchased and downloaded the model, repositioned them into a T-pose, and I want to re-rig them with a Mixamo rig, which I know works very well with this process and with the Mootion FBX character file. So I've uploaded the character to Mixamo.com, which comes as part of the Adobe Creative Suite, and I'll quickly go through the process of getting them rigged there. With the file uploaded I press next, position the various markers, and once I'm happy with those positions I press next again and wait for the auto-rigging process to finish. With that completed, I press next, next again, and download; we want an FBX file in a T-pose. Press download again, then jump back into Blender, go File > Import > FBX once more, import that new character file, select the move tool and move them to the side.

I'm then going to use Auto-Rig Pro, which is a character rigging plug-in for Blender. It is a paid plugin, but at $40 I think it's worth it if you're doing this kind of work and can afford the purchase. With Auto-Rig Pro installed and running, I click the button to open the side panel, go down to ARP (which stands for Auto-Rig Pro), and then to the Auto-Rig Pro remapping section. We need to set our source armature, which is the Mootion character, so I drop down the Mootion bot, select its rig and choose it as the source armature; then I click on our new character's bones and choose that as the target armature. Press Build Bones List, and you need to set one bone as the root: drag the panel out a little so you can see the names, find the one called hips, click it, click Set as Root, and then press Retarget. It will ask how many frames you want to retarget the animation for; just press OK, and in a couple of seconds it has completed the work. As we scrub through, we can see both of our characters animated. From here you could animate your camera, render out scenes, composite them in After Effects, and keep developing it, adding lip syncing, nice shadows, good lighting and better texturing until you get the final output.
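For reference, the Blender steps above (clearing the default scene and importing the two FBX files) can also be scripted with Blender's built-in Python API, which is handy if you want to repeat the retargeting setup for several characters. This is just a minimal sketch with placeholder file paths, run from Blender's Scripting workspace; the Auto-Rig Pro remapping itself is still done through its panel as described above.

```python
# Minimal Blender (bpy) sketch: clear the default scene and import both FBX files.
# File paths are placeholders.
import bpy

# Delete everything in the default scene
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Import the Mootion capture (animated bot) and the Mixamo-rigged character
bpy.ops.import_scene.fbx(filepath="/path/to/mootion_bot.fbx")
bpy.ops.import_scene.fbx(filepath="/path/to/mixamo_character.fbx")

# Move the most recent import to the side so the two rigs don't overlap
for obj in bpy.context.selected_objects:  # the last import's objects stay selected
    obj.location.x += 2.0

# List the armatures so you can confirm the source / target names for remapping
for obj in bpy.data.objects:
    if obj.type == 'ARMATURE':
        print("Armature:", obj.name)
```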
You can add lip syncing to your animations using the new Sync Labs website, where there's currently a free trial available. I press Launch App; I already have access to the demo and have been trying out various tests and generations. To create the lip-synced animation, you upload your video file and an audio file and press submit. While the output is not always perfect and the results can vary depending on the character design, lighting and so on, I'm very impressed by the ease of use and the quality Sync Labs is already providing, and I expect to see more from them in the near future.

With any of your AI animation or video generations, regardless of which platform or approach you used to create the output, you can use Topaz Video AI to upscale the clip. For example, with our lip-sync clip here I can choose the output resolution (4K), explore the different options, choose an AI model for the upscaling (Proteus works fine, or you could use one with a face enhancer), and press export. While it's not a cheap piece of software, it does run locally and it's a one-time investment, so if you're a video professional or you're doing lots of AI animation generations it's one worth looking at, and I'll include an affiliate link in the video description below.

And that brings me to the end of the video, where we've used Mootion and their new motion capture ability to drive our character animation, giving us access to that FBX 3D character file as well as creating those short Stable Diffusion animations from Mootion and a short text prompt; expanding those generations to 16:9 using Pika Labs; taking the animation in the FBX file from Mootion and retargeting it onto a different character; then looking at ways of adding lip syncing; and finally upscaling using Topaz Video AI. As always, if you're an AI creative using these various tools and you want to build up a presence online, head over to AIAnimation.com, where I'll be adding more soon. And lastly, please press like, subscribe and leave any comments. All right, thanks very much. [Music] Cheers.
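One practical note on the Sync Labs step above: it expects the video and the audio as separate files. If your voice-over is embedded in another recording, or your render already carries a scratch audio track, ffmpeg can split them out quickly. This is a minimal sketch assuming ffmpeg is installed and on your PATH; the file names are placeholders.

```python
# Minimal sketch: prepare separate video and audio files for a lip-sync upload.
# Assumes ffmpeg is installed and on PATH; file names are placeholders.
import subprocess

def strip_audio(src: str, dst: str) -> None:
    """Copy the video stream only (no audio track)."""
    subprocess.run(["ffmpeg", "-y", "-i", src, "-an", "-c:v", "copy", dst], check=True)

def extract_audio(src: str, dst: str) -> None:
    """Extract the audio track as 16-bit WAV."""
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vn", "-acodec", "pcm_s16le", dst], check=True)

strip_audio("animation_render.mp4", "animation_silent.mp4")
extract_audio("voiceover_recording.mp4", "voiceover.wav")
```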
Info
Channel: AIAnimation
Views: 33,648
Keywords: ai animation, 3D ai animation, animate ai image, ai character animation, ai animation workflow, ai image animation, ai animation tutorial, midjourney animation tutorial, ai characters, generative ai animation, 3D AI, 3D Ai animation, ai animation 3D, deepmotion, 3D midjourney, AI 3D, AI motion capture, motion capture, lipsync, lip sinc, lip syncing animation, character lips, mouth animation, mootion, blender, sync labs, stable diffusion, stable video, animated mouths, cgi, 3D
Id: BabkaTF-kw0
Length: 8min 59sec (539 seconds)
Published: Fri Jan 19 2024