AI Text to Video | ComfyUI Animation | ONE CLICK OPENSOURCE INSTALL!

Captions
OpenAI's new tool Sora has stretched the limits of what's possible with AI video animation. While we wait for that amazing tool to be publicly released, we can build simpler animations with open-source tools, all on our own computer, straight from text to video. I built a one-click install to get us up and running with our own AI animations using open-source, publicly available models. It works on both Linux and Windows, so let's dive into how you get it running on your own computer.

To get started, you should have Git installed on your system, and you should also have CUDA drivers for your GPU. You can either do a git clone from the command-line terminal, or use Download ZIP, which is what I'll do here. Once the file is downloaded, unzip it (I use 7-Zip's "Extract Here") and it's good to go. Now just run the Windows install .bat, and you can see everything downloading and getting set up. Once everything is installed, the folder will look like the one shown here.

To run everything, you'd normally hit the "run ComfyUI Windows" .bat, but since I'm in WSL right now, I'll open a terminal and run the Linux version, which works exactly the same as the Windows one. If a web page doesn't pop up on its own, go to the localhost link shown in the terminal, and that's your ComfyUI interface.

This is the default interface, though, not the one we want to use, so go back to the folder, open workflows, and grab the one we want: the animate workflow. Click and drag it in, and all of a sudden we have a new workflow with everything we need. If we want to start off with some quick experiments, all we have to do is go to the top group here, where I have the prompts. The positive prompt is what you normally see in Stable Diffusion: it describes what you want your image to look like.
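If the browser doesn't open by itself, the UI is served at a local address (ComfyUI listens on port 8188 by default, though the launch script may print a different port). As a minimal sketch, you could check whether the local server is up before opening the link; the URL and timeout values here are assumptions:

```python
import urllib.request

# ComfyUI's default listen address; your launch script may print a different port.
COMFY_URL = "http://127.0.0.1:8188"

def server_is_up(url: str = COMFY_URL, timeout: float = 2.0) -> bool:
    """Return True if the local ComfyUI web UI answers an HTTP request."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, bad address, ...
        return False
```

If this returns False, the server is still starting (or the port differs); check the terminal output for the actual address.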
We also have the motion prompt, which isn't usually there. As we can see further down, we use ControlNet for our text-to-image, so the words we put in here feed into that step: we have a depth map as well as OpenPose to help guide our animation, so it follows the mapping we make.

Then we have the basic options. When you're first making an image you can just start right here: image width, image height, number of animation frames, and hit Queue, and you'll have an animation coming out. You can monitor the progress of your animation through these little windows.

If you want to get more detailed and improve how your image looks, there are additional options. If you know what a sampler is, you can change the sampler used for all the sampling steps. You can choose not to apply the AnimateDiff module, but that will make the picture look inconsistent; the option is there nonetheless. We have ControlNet, and we have these options which affect our initial image generation: if you change the zoom value here, for example, the image will be more zoomed out or zoomed in depending on which way you go. The way this model works, we have three different passes: the first pass is okay-ish, the second pass is better, and the third pass is better still. If you'd like to upscale for an even better image, that's an option; you can do a little face detailing as well, and you can also save the image. This image right here is a little experiment I was doing; it's not my final result, just what ComfyUI decided to leave over for me.

Now that we have our prompt going, you can see our progress down below. The motion prompt is "a person swinging a single sword", and what we have here to form the basis of our animation is in fact a person swinging a sword in a pretty funny way; we'll see how it turns out. The image generated here isn't moving: it's the IP adapter image, which helps ensure our animation stays consistent. Up here you can adjust how much we want the animation to conform to this image: if we turn up the IP adapter weights, the result will look more like this image and be a little less creative. Depending on how unique your animation is, or how much it goes off the rails, you may want to lower or raise this to conform less or more to the image.

The upscale does consume a lot of memory; I'm able to run it on my 8 GB graphics card, but it does take a little while. What I like to do is set 1.5 here and set "this is my final pass" to true. Once the progress is complete, all you have to do to see your animations is go over to the ComfyUI output folder, where you'll find a date folder and your animation being processed. As you can see, right now we've done the sword sweep, we've done the IP adapter image, and we've done the depth map, so we're just waiting for the rest of the animation. It's automated, though, so we can come back later to see the results.

And that's pretty much it: that's all you need to do to start making your own animations through Stable Diffusion, all on your own computer, all open source. You can make anything you like, and by adjusting some of these parameters up here you can get a better image for your particular prompt. I hope you have fun, and please do leave a comment down below. Thank you for watching.
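The knobs in the walkthrough above (positive prompt, width/height, frame count, IP adapter weight, 1.5x upscale) all end up as fields in the workflow's JSON. As a hedged sketch, here is how you might tweak those fields on an exported workflow dict offline; the node ids, class names, and input names below are illustrative assumptions, so inspect your own exported JSON for the real ones:

```python
# A hypothetical slice of an exported ComfyUI workflow (API format).
# Real node ids and input names depend on the specific workflow file.
workflow = {
    "3": {"class_type": "CLIPTextEncode",    # positive-prompt node (assumed id)
          "inputs": {"text": "a knight in a forest"}},
    "7": {"class_type": "EmptyLatentImage",  # size / frame-count node (assumed id)
          "inputs": {"width": 512, "height": 512, "batch_size": 16}},
    "9": {"class_type": "IPAdapter",         # IP adapter node (assumed id)
          "inputs": {"weight": 1.0}},
}

def set_prompt(wf: dict, node_id: str, text: str) -> None:
    """Overwrite the text of a prompt node."""
    wf[node_id]["inputs"]["text"] = text

def set_size(wf: dict, node_id: str, width: int, height: int, frames: int) -> None:
    """Set image size and the number of animation frames (batch size)."""
    wf[node_id]["inputs"].update(width=width, height=height, batch_size=frames)

def set_ipadapter_weight(wf: dict, node_id: str, weight: float) -> None:
    """Raise to conform more to the reference image, lower to be more creative.

    Clamping to [0, 1] is an assumption for this sketch.
    """
    wf[node_id]["inputs"]["weight"] = max(0.0, min(1.0, weight))

def upscale_dims(width: int, height: int, factor: float = 1.5, multiple: int = 8):
    """Upscaled size, rounded down to a latent-friendly multiple of 8."""
    return (int(width * factor) // multiple * multiple,
            int(height * factor) // multiple * multiple)

set_prompt(workflow, "3", "a person swinging a single sword")
set_size(workflow, "7", 768, 432, 24)
set_ipadapter_weight(workflow, "9", 0.8)
```

For example, a 1.5x final pass on a 768x432 render would come out at `upscale_dims(768, 432)`, i.e. 1152x648, which is where the extra memory use comes from.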
Info
Channel: ProtoPomp
Views: 427
Id: NMuykcE9aXI
Length: 8min 10sec (490 seconds)
Published: Wed Mar 06 2024