Easy Image to Video with AnimateDiff (in ComfyUI) #stablediffusion #comfyui #animatediff

Video Statistics and Information

Captions
Welcome to Koala Nation. In this tutorial we are going to bring images to life with ComfyUI and AnimateDiff. We will build an easy image-to-video workflow. Wait until the end to learn a small trick to use random images and create the most surprising animations. This is a very easy example to create and to use; you can further elaborate on it and add your own twist. I hope it can get you enthusiastic about ComfyUI and AnimateDiff.

We are going to need the following custom nodes, and we will also need to have the following models installed. As always, we can access most nodes and models through the Manager. For AnimateLCM, download the model and the LoRA from the link in the description; DreamShaper can be downloaded from Civitai. Once everything is installed, remember that it is convenient to update ComfyUI and restart the program.

Let's start. The checkpoint that we are going to use for the tutorial is DreamShaper 8. Since we are going to use AnimateLCM, we first have to add a LoRA loader and select the AnimateLCM LoRA. We leave the strength at 1; if we used an LCM checkpoint, such as the distilled version of DreamShaper 8, we would lower the strength to 0.3. Later we will use SparseCtrl, so we need to add the AnimateDiff v3 adapter: we connect it to the LoRA loader and select the adapter.

Now we create our AnimateDiff module. We add Use Evolved Sampling and connect it. To apply the AnimateDiff motion model, we first add the Apply AnimateDiff Model node and connect it to the motion model loader, where we select the AnimateLCM model. To be able to run animations longer than 32 frames, we must also set Context Options: we select Looped Uniform and leave the configuration as it is.

The next thing to do is to use an IPAdapter with a reference image. We place an IPAdapter Tiled node; in the new version of IPAdapter we can use the IPAdapter Unified Loader, through which we connect the model. We select the PLUS model, but this workflow also works well with ViT-G. Of course, we need to add an image. With this configuration, we now connect it to the KSampler.

We continue with the prompt. The animation will be quite influenced by the IPAdapter; nevertheless, I recommend using a prompt that describes the reference image. For this example: "woman on water, magic fire ring".

Let's add two ControlNets. The first is ControlNet Tile: we use a strength of 0.25 and an end percent of 0.8, and we connect the reference image directly to this ControlNet. In the second we are going to use SparseCtrl Scribble: instead of a regular ControlNet loader we use the Load SparseCtrl Model node, connect it, and select the SparseCtrl Scribble model. We set the strength to 1 and the end percent to 0.4. For this ControlNet we need a scribble image, so we add a Fake Scribble Lines node and connect the images.

We want to generate 72 frames in our animation, but because we will use interpolation later, we define the batch size as 36 frames. We change the dimensions of the latent to 512 by 768. We are also going to change some settings of the KSampler: we fix the seed, and reduce the number of steps to 8 and the CFG scale to 1.2 because we use LCM; the sampler has to be LCM too, and as scheduler, sgm_uniform. We remove the Save Image node and connect a Video Combine node: we increase the frame rate to 12, use the MP4 format, and deactivate save_output. Let's run the workflow and see how it turns out. Here we have a very interesting draft.
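The sampler and latent settings above are the heart of the AnimateLCM setup. As a rough sketch (not something shown in the video), this is roughly how those values could be applied to a workflow exported in ComfyUI's API format and queued over the local /prompt endpoint; the file name is a placeholder and the node ids depend on your own export:

    # Sketch: override the tutorial's sampler/latent settings in an exported
    # API-format workflow and queue it on a locally running ComfyUI instance.
    import json
    import urllib.request

    COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address

    with open("animatelcm_img2vid_api.json") as f:  # hypothetical exported workflow
        workflow = json.load(f)

    for node in workflow.values():
        if node["class_type"] == "KSampler":
            node["inputs"].update({
                "steps": 8,                  # few steps, since we use AnimateLCM
                "cfg": 1.2,                  # low CFG scale for LCM
                "sampler_name": "lcm",       # LCM sampler
                "scheduler": "sgm_uniform",  # sgm_uniform scheduler
            })
        elif node["class_type"] == "EmptyLatentImage":
            node["inputs"].update({
                "width": 512,
                "height": 768,
                "batch_size": 36,            # 36 latent frames, interpolated later
            })

    req = urllib.request.Request(
        COMFY_URL,
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode("utf-8"))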
Now let's add a new KSampler to refine the animation. Copy the KSampler and disconnect the latent; we scale the image by 1.5 and use it as a reference for the new KSampler. Copy and connect the Video Combine node to the new sampler, reduce the denoise to 0.7, and test it. With this configuration we can fine-tune the parameters to see how it looks best. For example, if we want to add some extra movement, we can use an effect Multival: we return to the AnimateDiff module, connect a Multival Dynamic node, increase it to 1.2, and see how it turns out. Not bad. Finally, we will upscale and interpolate to improve the resolution of the animation. We run the workflow again, and these are the results.

To finish the tutorial, let's have fun with this trick. We go to the site picsum.photos, which generates random images, copy the first address, and return to ComfyUI. Put a Load Video (Path) node and convert the video widget to an input. Now put a Primitive string (multiline) node and paste the link twice, changing 200 to 512 and 300 to 768. Connect this node to a Text Random Line node, and that one to the Load Video (Path) node. Reconnect the image with the IPAdapter and the ControlNet. We can add a display node if we want, or leave it out if we want to be surprised. And this is all. I hope you like this simple tutorial. Stay tuned!
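The same picsum.photos URL pattern can also be tested outside ComfyUI. A minimal sketch, assuming a standard Python install (the output file name is just an example):

    import urllib.request

    # picsum.photos serves a random image; width and height go in the path,
    # here matched to the 512x768 latent used in the workflow.
    url = "https://picsum.photos/512/768"
    with urllib.request.urlopen(url) as resp, open("random_reference.jpg", "wb") as out:
        out.write(resp.read())
    print("saved random_reference.jpg")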
Info
Channel: Koala Nation
Views: 5,504
Keywords: stable diffusion animation, ComfyUI, Comfyui animation, Animatediff, comfyui Animatediff, comfyui animation, comfyui video, comfyui vid2vid, Animatediff comfyui, Animatediff controlnet, Animatediff IP adapter, lora, Animatediff evolved, controlnet animation, comfyui Animatediff controlnet, KSampler Advanced, motion lora, stable diffusion, AD, Animatediff keyframe, beta scheduler, comfyui tutorial, animatediff tutorial, animate diff, comfyui video to animation
Id: yHMcsRZGMEo
Length: 8min 3sec (483 seconds)
Published: Sat May 18 2024