create AI Animations from any Stable Diffusion Model using FREE AI - AnimateDiff Automatic1111 ...

Video Statistics and Information

Captions
AI animations. Kind of, anyway. As always, since they are free and open source, they use Stable Diffusion based models. AnimateDiff is a way to generate animated GIFs using your favorite Stable Diffusion model or LoRA, as long as it is based on Stable Diffusion 1.5. The original authors have published quite a lot of demos using different models, from ToonYou for more anime-inspired outputs to Realistic Vision for more realism. The animations are currently very simple and, depending on your specific model or LoRA, consume quite a lot of VRAM as well as time. It is slightly non-trivial to set up, but thanks to this extension you can just plug it into your existing Automatic1111 setup and play around with it. I'll show you a quick demo, plus some errors you might run into and how to fix them, towards the end of this video.
But there's another interesting semi-animation Stable Diffusion based project that I want to talk about: Text2Cinemagraph. It's probably not very useful for most people, or even that interesting, but I absolutely adore stuff like this. At the very least, with enough experimentation, you can probably create some great-looking live wallpapers. I had no idea what cinemagraphs were, but they're basically still photographs in which minor, repeated movements occur. From all the listed examples and demos, it seems the model was mostly trained on flowing water (rivers, waterfalls, etc.) as well as clouds. The code is available on their GitHub and is slightly non-trivial to set up.
Okay, getting back to AnimateDiff: all you need to do is copy this git repository's link, paste it into the "URL for extension's git repository" field in the "Install from URL" tab under the Extensions tab, and click Install. After it has been installed, you need to restart Automatic1111. Let's try generating something. If your generation ends up like this, which is just a bunch of different outputs compiled together, you probably have an error in your terminal, and the most likely cause is that it wasn't able to download the required motion module. You can manually download it from Google Drive or Hugging Face, then go to stable-diffusion-webui/extensions/sd-webui-animatediff, create a folder called model, and paste the downloaded checkpoint there. I downloaded the 1.5 checkpoint since I'm using SD 1.5 and other models based on it.
Now let's try generating again: more consistent, but pretty "ehhh". I'm bad at prompting, and the generic model is generic. Let's try again with ReV Animated and a prompt similar to the example. Comparatively much better, but it also took a very long time. According to the original GitHub repo, it requires around 12 GB of VRAM and is currently very unoptimized. I have a lot more feasible and unfeasible projects to show, so please do subscribe. Also, SDXL 1.0 seems to have been delayed, but stay tuned. Bye.
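The manual fix from the captions (download the motion module yourself and drop it into the extension's model folder) can also be scripted. Below is a minimal sketch, not taken from the video: the Hugging Face repo id guoyww/animatediff, the checkpoint filename mm_sd_v15.ckpt, and the webui path are all assumptions you should adjust to your own Automatic1111 install.

```python
# Minimal sketch: fetch the SD 1.5 motion module and place it where
# sd-webui-animatediff expects it (the "model" folder mentioned in the video).
# ASSUMPTIONS: repo id, filename, and WEBUI_ROOT may differ on your setup.
from pathlib import Path
import shutil

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

WEBUI_ROOT = Path("stable-diffusion-webui")  # adjust to your Automatic1111 install
MODEL_DIR = WEBUI_ROOT / "extensions" / "sd-webui-animatediff" / "model"


def fetch_motion_module(repo_id: str = "guoyww/animatediff",
                        filename: str = "mm_sd_v15.ckpt") -> Path:
    """Download the motion-module checkpoint and copy it into the extension folder."""
    MODEL_DIR.mkdir(parents=True, exist_ok=True)
    cached_path = hf_hub_download(repo_id=repo_id, filename=filename)
    target = MODEL_DIR / filename
    shutil.copy(cached_path, target)
    return target


if __name__ == "__main__":
    print(f"Motion module saved to: {fetch_motion_module()}")
```

After the checkpoint is in place, restart Automatic1111 so the extension picks it up; if generations still come out as unrelated frames stitched together, check the terminal for download or path errors as described above.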
Info
Channel: CoderX
Views: 18,284
Keywords: ai animation, ai animation generator, ai generated animation, ai talking avatar, ai animation video, ai video, ai video animation, ai character animation, ai animation video generator, animation ai, how to make ai animation, how to make ai animation video free, text to animation ai, animatediff, stable diffusion animation, stable diffusion, stable diffusion deforum, stable diffusion models, animatediff automatic1111, animatediff stable diffusion, animatediff tutorial
Id: LcHAZaJjA5k
Length: 3min 25sec (205 seconds)
Published: Wed Jul 19 2023