AnimateDiff ControlNet Tutorial - How to make AI animations Stable Diffusion

Video Statistics and Information

Captions
This animation was created with AnimateDiff, and with the addition of ControlNet we can improve our animations by guiding the generation with any reference video. I have sourced some reference files that I would like to use, and I will explain how I intend to use them shortly. It took me a few days of research and watching other videos to find this solution, and I tried a few things along the way that did not work out well.

The process involves installing both the AnimateDiff and ControlNet extensions. To install them, go to the Extensions tab, click Load from, and search for AnimateDiff; click Install once AnimateDiff shows up. Then search for ControlNet as well and, when it shows up, click Install. Apply and restart, check for updates for both the AnimateDiff and ControlNet extensions, then apply and restart again. Once applied, go to the Settings tab and, under ControlNet, make sure these settings are applied and the boxes are checked. You can also change the directory path so that ControlNet's rendered outputs are saved where you want them. Apply the settings once more and reload the UI.

After installation, both of these extensions require models before they can be used in AUTOMATIC1111. Visit the Hugging Face page for AnimateDiff to download the motion modules, and place them into this directory after downloading. For ControlNet we will use only the OpenPose model from Hugging Face, but you can install other models through the same process. Download the safetensors version, since these are smaller in size and safe to use, and place the downloaded models in this directory. Once all of this is done, restart AUTOMATIC1111.

First, I'll generate our prompt, which I have already prepared. The checkpoint I'll be using is from Civitai, Hello 2D Young; once it is downloaded, place it into your checkpoint folder in Stable Diffusion. Back in AUTOMATIC1111 I'll change a few settings for my prompt. I will go ahead and add a LoRA, the More Details LoRA. Under Generation I'll change my sampling method to Euler a and set the sampling steps to 40. I'll include the hires fix as well, choose the R-ESRGAN 4x+ Anime6B upscaler, upscale by 1.3, push the hires steps to 20, and bring the denoising strength down to 0.3. I'll make this a vertical ratio of 512 by 768. With these settings I'll hit Generate to see the outcome.

All right, this is not too bad. This is great, but the legs are looking kind of funny. I would like our character to be sitting with his legs crossed and holding a guitar, and to achieve this I'll use ControlNet to guide the generation. I'll use this reference image from my resources to get the pose. The original image was larger, but I resized the final image into a smaller aspect ratio using After Effects, so it is now 512 by 916. Back in AUTOMATIC1111, under the ControlNet extension, I'll drag the image here, enable ControlNet and Pixel Perfect, and allow the preview. I will hit this explosion icon, almost like a fire button, to see the preview first before moving ahead with the generation. From here I'll scroll up and hit Generate to see what comes up. Including ControlNet in the generation, we can see the pose of our reference from ControlNet as well as the stick-figure pose of the image. I will edit the prompt a bit more to have a waterfall in the background and musical notes filling the air from playing the guitar. I will also include ADetailer for a perfect face in the generation; in my previous video I explained how to install and use the ADetailer extension. All right, so I finally got an image which I am quite happy to use and move forward with.
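For anyone who would rather script these settings than click through the UI, here is a minimal sketch that reproduces the same txt2img setup (Euler a, 40 steps, hires fix with R-ESRGAN 4x+ Anime6B at 1.3x, denoising 0.3, 512x768, plus an OpenPose ControlNet unit) through AUTOMATIC1111's built-in API. It assumes the webui was launched with --api; the prompt, reference file name, and ControlNet model name are placeholders, and exact payload keys can vary between webui and extension versions.

```python
# Minimal sketch: reproduce the video's txt2img settings via the AUTOMATIC1111
# web API (launch the webui with the --api flag). Prompt, file names, and the
# ControlNet model name below are example values -- swap in your own.
import base64
import requests

WEBUI = "http://127.0.0.1:7860"  # default local address; adjust if different

# Reference pose image for ControlNet, sent as base64
with open("reference_pose_512x916.png", "rb") as f:
    pose_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "1boy sitting cross-legged, playing guitar, <lora:more_details:0.7>",  # example prompt
    "negative_prompt": "lowres, bad anatomy",
    "sampler_name": "Euler a",
    "steps": 40,
    "width": 512,
    "height": 768,
    "enable_hr": True,                        # hires fix
    "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
    "hr_scale": 1.3,
    "hr_second_pass_steps": 20,
    "denoising_strength": 0.3,
    "alwayson_scripts": {
        "controlnet": {                       # arg keys depend on the extension version
            "args": [{
                "enabled": True,
                "input_image": pose_b64,
                "module": "openpose_full",    # preprocessor; dw_openpose_full also works if installed
                "model": "control_v11p_sd15_openpose",  # must match an installed model name
                "pixel_perfect": True,
            }]
        }
    },
}

r = requests.post(f"{WEBUI}/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
images = r.json()["images"]  # list of base64-encoded PNGs
```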
From here I'll close the ControlNet extension and we can start animating using the AnimateDiff extension. My motion module will be v14, I will enable AnimateDiff, the save format will be GIF, the number of frames (duration) will be 32, and the FPS will be 12 for a bit faster and smoother animation. I'll hit Generate from here to see what we get. All right, so I am quite impressed by this generation from AnimateDiff, just from a single, ordinary prompt, but I would like to take it a step further: what if we had control over the hands of the character playing the guitar? This is where we can include ControlNet to improve the animation.

To start this process, I'll go back to the PNG Info tab and drag and drop our earlier generation to make sure we have the same prompt settings, then send it to txt2img. This ensures we have the same settings from the generation data of that PNG image. I found a reference video of someone playing a guitar; when sourcing references, try as much as possible for the pose to match, since this will be helpful for the control. Because the video is large, I resized it in After Effects to keep everything in the same aspect ratio of 512 by 768, and I cut it down to the 3 seconds I actually want to use, because the entire video would take forever to generate. I will export this as two assets: a resized video and a PNG sequence of batch frames (if you don't have After Effects, see the ffmpeg sketch after these captions). The resized video will be used in AnimateDiff, and the PNG sequence batch frames will be used in ControlNet for more control.

Back inside AUTOMATIC1111, I will scroll down to the AnimateDiff extension. My motion module will be v15 v2, or any one you may prefer, and I'll enable AnimateDiff. Under the Video source tab I'll double-click to load the reference video file into AnimateDiff, making sure I select the resized one, which is 512 by 768. The number of frames was automatically updated by AnimateDiff, as well as the frames per second. I'll leave everything else at default and scroll down to the ControlNet extension. Down here I'll enable ControlNet, check the box for Pixel Perfect, select OpenPose, and set the preprocessor to dw_openpose_full; the model will be OpenPose to match the preprocessor. I'll also select "ControlNet is more important". For the next important setting, I'll select the Batch tab, copy the directory path of the PNG sequence batch frames, and paste it into ControlNet under the Batch tab. I am using an RTX 3060, and due to the long rendering time, which took over 8 hours, I'll push down some of the settings to speed up the generation process. From here I'll hit Generate to see how adding AnimateDiff and ControlNet to the animation improves it.

Okay, we can see him now playing the guitar, thanks to the extra guidance from ControlNet, compared to our first generation. I hope you guys can put this technique to use for a variety of creative ideas. If you enjoyed this video, consider smashing the thumbs up and subscribing for more tutorials, and don't forget to leave a comment if I missed anything. I'll see you guys in the next one.
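As a side note to the reference-video preparation step above: if you don't have After Effects, the same two assets (the resized 3-second clip for AnimateDiff's Video source and the PNG frame sequence for ControlNet's Batch tab) can be produced with ffmpeg. The sketch below assumes ffmpeg is on your PATH and uses placeholder file names.

```python
# Minimal sketch: prepare the two reference assets with ffmpeg instead of
# After Effects. Assumes ffmpeg is installed; guitar_reference.mp4 and the
# output names are examples only.
import pathlib
import subprocess

SRC = "guitar_reference.mp4"
CLIP = "guitar_512x768.mp4"          # load this in AnimateDiff's Video source
FRAMES_DIR = pathlib.Path("frames")  # paste this path into ControlNet's Batch tab
FRAMES_DIR.mkdir(exist_ok=True)

# 1) Trim to the 3 seconds we actually want and resize to 512x768
subprocess.run([
    "ffmpeg", "-y", "-ss", "0", "-t", "3", "-i", SRC,
    "-vf", "scale=512:768", CLIP,
], check=True)

# 2) Export the same clip as a PNG sequence for the ControlNet batch folder
subprocess.run([
    "ffmpeg", "-y", "-i", CLIP,
    str(FRAMES_DIR / "frame_%04d.png"),
], check=True)

print("ControlNet batch directory:", FRAMES_DIR.resolve())
```

Note that scale=512:768 simply stretches to the target size, so crop or pad the source first if its aspect ratio differs from 2:3.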
Info
Channel: goshnii AI
Views: 7,024
Keywords: stable diffusion, midjourney, ai image generation, ai video generation, stable diffusion tutorial, gif creation, stable diffusion prompt travel, a1111 prompt travel, ai video creation, ai gif creation, animatediff, animatediff extension, machine learning, a1111gif, ai gif generation, ai animation, ai animation video, ai animation tutorial, animatediff automatic1111, animatediff tutorial, controlnet automatic 1111, controlnet automatic 1111 tutorial, controlnet video, open pose
Id: xO3WsbBm29U
Length: 8min 46sec (526 seconds)
Published: Sat Jan 06 2024