MagicAnimate Makes Images Dance - Install Locally or Remote Run!

Video Statistics and Information

Captions
AI threatens to destroy TikTok dancing videos: a single image plus a driving video is all it takes, or at least that's what drove over 40 million views to this tweet (do we still call them that? I'm not sure). Anyway, the first one here, not so much, but if we scroll down a little bit we'll see it: "RIP TikTokers", 42 million. There's also a whole big tweet about the aftermath too, as people seemed very worried about it.

I guess we're trying to run this one locally then? Hmm, yes, one tiny problem: we don't yet have the code for that one. However, what we can run straight away is something very similar called MagicAnimate. As you can see, this does look to be very similar to Animate Anyone, but as we all know, looks can be very deceiving, and one should never count one's rodents before they've hatched. Oh, look at all the cute little rodent eggs! Anyway, let's have a look at this. As you can see, it isn't perfect (whoops, there is some glitching on the hand there), but we're sort of used to that with Stable Diffusion by now, aren't we? They've given us a whole bunch of other examples, with comparisons between their method and a whole variety of others, all looking very good for theirs. Under applications there's unseen domain animation, so we've got the Mona Lisa running, Girl with a Pearl Earring, and Wonder Woman as well, plus multiple-person animation too. Excellent. And we've got the pipeline: basically, a reference image goes in along with a DensePose sequence, and you get your dance video out.

All right then. Being free and open source (there it is, a BSD 3-Clause license), we can of course run this locally and have a look at all the stuff. First up, it's worth making sure you've got a decent computer for running anything locally: you'll need upwards of 12 GB of VRAM and, ideally, an Nvidia card (I haven't got an AMD card to test it on, unfortunately). There are other ways to run it though; you don't have to run it
locally, so hold on for those if you're in that camp.

All right, so for the Linux penguins among us, this will give you the best support running locally, as it's the same operating system the developers use, so everything will work really nicely. It's four commands in all. First of all, you'll have to download the repository; once you've downloaded it, just change directory into it and run the commands here. So, conda env create -f environment.yaml, which will create exactly the same environment on your local PC as the developers have, and then, once it's done, just activate that environment. You'll likely already have git and ffmpeg installed, but if you don't, a simple apt install git ffmpeg will get those packages for you. If you don't have Anaconda, you can download that for free from their website.

If you want a bit more of a challenge, you could try Microsoft Windows. In that case, you'll definitely want to take a look at the "MagicAnimate for Windows" version of the repository, which is very much the same thing. You still have to install git, Python, and ffmpeg, but they give you an install.ps1 script that will basically do everything for you. And talking of things which will do everything for you, as another option the amazing cocktail peanut has a one-click installer for Pinokio. Finally, for those without the necessary equipment (and I know that can be an issue), there is both a Colab notebook and a Hugging Face demo as well. I know the Gradio interface doesn't look amazing thanks to my Dark Reader, but it's much the same as the one you'll get when we run it locally. So, as you can see, you've got lots of options to test this out for yourself, whether you've got a computer or not.

But I always like to run things here, and if you're like me in this regard, you'll want to download the models too. Interestingly, two of them are just standard Stable Diffusion ones: a 1.5 checkpoint and a VAE.
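Those Linux steps can be sketched roughly as follows. Note this is only a sketch: the repository URL and the conda environment name are assumptions on my part, so check the MagicAnimate README and the environment.yaml file itself for the exact values.

```shell
# Grab the MagicAnimate repository and move into it
# (URL assumed from the project name; check the project page)
git clone https://github.com/magic-research/magic-animate
cd magic-animate

# Recreate the developers' environment, then activate it;
# the environment's actual name is defined inside environment.yaml
conda env create -f environment.yaml
conda activate manimate

# On Debian/Ubuntu, if git or ffmpeg are missing:
sudo apt install git ffmpeg
```

These commands need network access, conda, and (on the last line) apt, so run them on your own machine rather than copy-pasting blindly.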
They also have a different set of models in the MagicAnimate repository. To quickly grab them all, you can run the commands on screen at the moment from within the MagicAnimate directory. Basically, you're just making this directory structure here, so the magic_animate and pretrained_models directories, and then downloading each of those links into there. Once you've got everything downloaded, it should look something like that: there we've got MagicAnimate's appearance encoder, DensePose ControlNet and temporal attention models, plus all the standard Stable Diffusion files as well.

Editing the config file, just like they suggest here, is a great way to point to your downloads, which you may already have somewhere else as a Stable Diffusion user, so you could even try different checkpoints for fun. The file you're looking for is the one they mention there: configs/prompts/animation.yaml. Let's have a quick look at this file. There it is; that's my one, pointing to everything locally. This file also gives us a glimpse into what's going on: alongside that standard Stable Diffusion model we've also got a ControlNet, that appearance encoder, and a motion module. Does this sound sort of familiar in any way? Interesting. This is going to come in handy later on in the video. Can you guess where I'm going? You might already have seen the hint.

All righty then, let's see this thing in action. As you can see, it's a fairly simple interface, so let's just pick one of these examples. There we go: now you've got a reference image input, there's the motion sequence, and those are the animation results. Rather than their reference, let's put our own one in and see what happens. As you can see in the background, it's chugging along at roughly 10.5 seconds per iteration, meaning this is going to take around 4.5 minutes to generate. Now it's finally completed, and yes, around 4.5 minutes. What does he look like? Let's have a look at the video replay. Oh, oh yeah, get on down! Yeah, that is definitely dancing.
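As a rough sketch of that layout: the folder names below follow the video's description (the repository README gives the canonical layout, and the actual download URLs are the ones shown on screen, omitted here).

```shell
# Create the model directory structure described in the video
mkdir -p magic_animate/pretrained_models/MagicAnimate/appearance_encoder
mkdir -p magic_animate/pretrained_models/MagicAnimate/densepose_controlnet
mkdir -p magic_animate/pretrained_models/MagicAnimate/temporal_attention
mkdir -p magic_animate/pretrained_models/stable-diffusion-v1-5
mkdir -p magic_animate/pretrained_models/sd-vae-ft-mse

# Then download each linked model file into its matching directory
# (e.g. with wget or curl), and point configs/prompts/animation.yaml
# at these paths.
ls magic_animate/pretrained_models
```

The last line just lists what you've made so you can eyeball it against the screenshot in the video before editing the config.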
Obviously the wings are a bit weird, but then I like being mean to AI. Okay, all right, let's pick another one; we'll pick a slightly quicker sequence. We've got the Mona Lisa running there, and instead of that let's just drop a face in and see what happens. There he is. Oh, we've got some slightly interesting hands going on there, but let's have a look at the run. That's not too bad; I mean, we completely lost his beard, but all right, it's a guy running. How about if we try a non-square aspect ratio? All right, well, that squishes the reference image there, but how does the animation come out? Hmm, okay, that does not look too bad despite the squashed image input. There we go, let's keep playing it; there she is running. All right, that's quite nice. Colour me impressed.

Right, I'm ready to do my own little dance. Uh oh, hold on: I can drop a video there to upload the sequence, but how do I make a driving video like that, with a purple background and everything segmented? Okay, well, one way would be to use vid2densepose. This is great, as after you've installed it using the installation steps there (just another git clone and a pip install of the requirements.txt, along with a clone of the Detectron2 repository), you can just run a Gradio version with python app.py. All right, let's run that one. This will give you another URL; crack the link open, and you just have to drop your video there. I happen to have a video ready of a dancing Viking, so let's submit that and get the DensePose out. Okay, there we go, it's got a video; let's have a quick look at that. And yeah, he's doing the little dance, doing the little dance. Okay, let's save this one off and give it a go.

There it is: "Drop video here or click to upload". Okay, let's click, and we'll have that video there. So now we've got our own one, and let's try it with another picture as well. All right, ready to animate; ready to see the painting dance? Yeah, there he goes, he's really going for it! That did use a little bit more VRAM this time, all the way up to 15 GB for that six-second clip. One of the things I quite like about it is the fairly static background: it's like it's taken that character and inpainted a whole bunch of stuff, and the clouds are all the same, so where he was it's just imagined the scenery. I think that bit is really, really nice. Of course, the face is slightly mushy and so are the hands, but overall it's pretty good, even that little hand-onto-the-knee movement that I thought would be quite difficult for it to handle.

You know what would be really handy, though? Well, if you thought "can't I just do all of this in ComfyUI?" then well done: the answer is probably. If you thought "can't I just do this in Automatic1111?", however, the answer is "your search did not match any repositories". Okay, so I don't know; there might be a DensePose option for Automatic, but I can't find one, so I'm going to do it in Comfy. Yes, hello and welcome to more Nerdy Rodent geekery, where today I try my little paw at, well, basically copying that whole MagicAnimate thing, but in ComfyUI. Because look: here we've got this node, DensePose Estimation. Doesn't it look wonderful? We've got a couple of models, R-
50 and 101, which I'm guessing are ResNet-50 and ResNet-101, plus a couple of colour maps. Oh, MagicAnimate and Civitai? Well, yes, because it turns out Civitai does indeed have a ControlNet model for DensePose that was released way back in August. It does use a slightly different colour scheme though; as you can see, the background there is black, unlike that original app's purple.

Comfy did leave me with a few issues, mind you. While that DensePose ControlNet does work, the temporal attention model and appearance encoder were another matter altogether: while it looks like they used something similar to AnimateDiff, I couldn't load either of those modules, so I can't tell. On top of that, this DensePose estimator node does have a little bit of an issue, certainly while I'm making this video, in that it always outputs the same image, which isn't very good for videos. So my input video there is actually the same one I used before, the DensePose generated from vid2densepose. Once that node is working, of course, you won't need vid2densepose at all and can just do all your DensePosing in Comfy.

For those that guessed earlier where this was going: it was "can I use that DensePose in ComfyUI as a ControlNet?", and the answer is yes, you can. There it is: the MagicAnimate DensePose is the one I'm using to control this with, via that DensePose video. But how well does it work? I've just got a strength of 1.0 there and an end percentage of 1.0, so fairly standard settings for a ControlNet. As an attempt at having an input image, I'm using IP-Adapter (IP-Adapter Plus there), now with a really low weight, quite a lot of noise, and ending quite early, as one of the problems with IP-Adapter is that it's quite strong and will stop your images from animating. I'm also helping it along a little with the prompt "a man in a gray suit", and I'm trying for a simple background. Okay, so what does that turn out like? There he is, there he is! That's not actually too bad. I've also got an
interpolation node here as well, just to try and smooth things out, if you think he looks any better there now. There's a little bit of strangeness going on with the pants, but I'm sure many of you are currently sitting there wondering: well, what about that other, OpenPose model, and those other ControlNets? I bet those would work. There are so many different things you could do to make this better, and you'd be completely right; if you were thinking that, then pat yourself on the back.

All right, so a few minor little tweaks and adjustments here. I've added the more recognisable OpenPose skeleton as a ControlNet, and also a bit of depth with the Zoe depth-map preprocessor, just to stabilise things even more. I'm actually throwing each video frame in through the IP-Adapter as well, alongside our standard AnimateDiff settings there. The IP-Adapter in this case I could actually turn up a little bit, so there I've got the weight all the way up to 0.85 and a fairly reasonable noise of 0.5. And then what do we get? There we go, look at him dancing! He's even got the little hand-on-the-foot thing as well. If we scroll over to this one and have a look at the interpolated version, just to see it slightly smoothed out: now, of course, he's got the cape from the original, and those horns are actually wings on the ears of his helmet, so it's sort of taken every single little detail from that original video and, I think, copied it quite nicely. So there we go; that's the ComfyUI way of doing MagicAnimate.

What's that? You want the workflow? You don't want to have to do all that by yourself? Well, as a special present for watching, I'll tell you now that you can grab it from the A Very Comfy Nerd website for free. Remember to check the troubleshooting tips at the top if you have any problems (well, not with life in general, just with the ComfyUI side of things).

Anyway, time for a quick recap so we know where we are. Animate Anyone is coming out soon, but MagicAnimate is here now, and
you can use it online or locally. Using their app is cool and all, but you may prefer ComfyUI for the extra features, such as that frame interpolation and various sizes beyond 512x512. DensePose is making a ControlNet comeback, and there are many things that are going to get quite spicy in the new year; I can't wait to see what open source provides. Also, if you're interested in other cool ComfyUI workflows, I've got a whole bunch of them on my Patreon, and if you just want the free ones, then maybe check out this next video.
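For reference, the vid2densepose setup walked through mid-video goes roughly like this. It's a sketch only: the repository URLs are my assumptions based on the names mentioned, so check the project's own README for the exact steps.

```shell
# Clone the video-to-DensePose converter (repo URL assumed)
git clone https://github.com/Flode-Labs/vid2densepose
cd vid2densepose

# Install its Python requirements
pip install -r requirements.txt

# It also needs a clone of the Detectron2 repository alongside it
git clone https://github.com/facebookresearch/detectron2

# Launch the Gradio app, open the printed URL, drop your video in,
# and save the generated DensePose sequence back out
python app.py
```

The resulting purple-on-segmented video is then usable as the motion sequence input in MagicAnimate's interface, exactly as shown with the dancing Viking.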
Info
Channel: Nerdy Rodent
Views: 19,782
Keywords: MagicAnimate, Magic Animate, MagicAnimate Windows, MagicAnimate Linux, MagicAnimate Local Install, Magic Animate tutorial, Make images dance, Animate Anyone, ComfyUI, guide, howto, tutorial, generative ai, DensePose, OpenPose, MagicAnimate ComfyUI, AnimateDiff, ControlNet
Id: td27SyA9M80
Length: 16min 25sec (985 seconds)
Published: Fri Dec 08 2023