AnimateDiff + Instant Lora: ultimate method for video animations ComfyUI (img2img, vid2vid, txt2vid)

Video Statistics and Information

Captions
Create great animations with Stable Diffusion and AnimateDiff, and even better, use Instant LoRA for stunning results. You will see how in this tutorial. You will need ComfyUI with the custom nodes and models manager installed; the rest of the basics are in the list and can be found in the description. For the Instant LoRA method you need the IPAdapter nodes and models; the easiest way is to use ComfyUI Manager, and I will show you how. Finally, for the animation you need AnimateDiff Evolved; you will see how to install it using the Manager. AnimateDiff allows you to create animations in Stable Diffusion, and Aloe Vera's Instant LoRA method allows you to have a LoRA without any training. These two methods combine perfectly to create endless possibilities. This video is just a simple example of what you can do.

Start the preparation by downloading the poses from the Civitai link in the description. You can use other poses or create your own using ControlNet OpenPose or DWPose. Copy the poses into a folder inside the input folder of ComfyUI; you will load this directory later in the ComfyUI workflow. Save your Instant LoRA image in the input folder as well (for this example, the link is also in the description). It is also important to use the same model as was used for the LoRA image; in this video I will use the Geminix Mix model. Download the model and copy it into the checkpoints folder within ComfyUI.

Now we will install all the requirements for AnimateDiff and for the Instant LoRA. Click on the Manager and then on Install Custom Nodes to start. Search for AnimateDiff and install AnimateDiff Evolved. AnimateDiff also needs additional models to be downloaded; we will install them later. Next we continue installing the nodes for the IPAdapter. The IPAdapter also requires the IPAdapter and CLIP Vision models; for the moment do not install them, you can do that later. Install the rest of the custom nodes used for this workflow. First, the Advanced ControlNet nodes. If you are going to generate your own poses, depth maps, line art or other ControlNet inputs, you also need to install the ControlNet preprocessors. Install the Video Helper Suite custom nodes, which you will need to load the poses and generate the GIF images. Install also the Impact Pack and Inspire Pack custom nodes. Finally, for a general set of tools, you can also install the WAS Node Suite package.

With all custom nodes installed, start downloading all the required models for the animation. Start with the ControlNet model: in our case we only need the OpenPose model to run the poses. Next, download the motion model for AnimateDiff. There are several options, and different models will give you different results; it is up to you to test them. We will use the stabilized-high model, which gave me the best results for this video animation. Motion LoRAs for AnimateDiff are optional; they introduce camera effects such as zoom and pan. However, those models need to be downloaded manually; I leave the link in the description if you want to use them. To use the IPAdapter in the Instant LoRA method, you will need the IPAdapter model. Depending on the checkpoint you use, you have several options; for our video, use the ip-adapter-plus_sd15.bin model. Finally, install the CLIP Vision model for SD 1.5. When everything is installed, restart ComfyUI and refresh the workspace.

We begin with the fun part. The best way to start is by using the template with OpenPose from the AnimateDiff GitHub page. Scroll down until you find the workflow for text-to-image with initial ControlNet input using OpenPose images. Drag the workflow and drop it over your ComfyUI workspace; the workflow should load automatically. Start by checking that the Load Images (Upload) node is pointing to the right directory, openpose_full in our case. Check that the AnimateDiff model in the loader is also the one we want to use. Select the checkpoint we want to use for our animation; in our case, as we only have one, it is loaded automatically. Use the same VAE as the checkpoint loader and connect it directly to the VAE decoder. Check that the OpenPose ControlNet model is available in the Load ControlNet Model (Advanced) node. We will not upscale the complete images, so you can delete the nodes related to that.

Set four frames in image_load_cap to do a short test of the workflow. Change the prompts from the template; I recommend you start with the prompt from the LoRA image, and later you can adjust it as you need. Start with the same sampler settings as in the reference image: use 30 steps and a CFG of 13 with a DPM++ Karras sampler. Run a first prompt to check that everything works: four poses are used, all models are loaded, and the sampler runs them. The workflow works, but you can see that the video is still far from what we want. Add the FreeU node to improve the general definition of the animation: connect the output of the AnimateDiff loader to the input of the FreeU node, and the output of FreeU to the KSampler. Add the Context Options node and connect it to the AnimateDiff loader; we start with a context length of 8 and a context overlap of 2. Add also a Motion LoRA, in this case to slightly zoom out the image. Run the prompt and check that everything works.

Now start using the Instant LoRA method. Add a Load Image node to load your reference image; refresh to make sure that the images in the input folder can be used. Now add the IPAdapter and connect your reference image. Connect the model from the checkpoint loader to the model input of the IPAdapter loader. Connect the CLIP Vision input to a Load CLIP Vision node. Connect the model output of the IPAdapter to the model input of the AnimateDiff loader. Connect the CLIP Vision output from the IPAdapter to a new unCLIP Conditioning node. Now connect the positive prompt to the conditioning input of the unCLIP Conditioning node, and reconnect the output of the unCLIP Conditioning node to the positive input of the Apply ControlNet node. Run the prompt again to check that everything is well set.

The new four frames are generated correctly and look similar to our LoRA. Let's generate a new animation, now with 16 frames. It looks even better, but the face details can still be improved, so we will use FaceDetailer. We first need to convert the batch of images to a list of images, using the Image Batch to Image List node; without this node, FaceDetailer will fail. After connecting the images to the node, use FaceDetailer; check out my previous tutorial to see how to connect FaceDetailer to the different input nodes. If you want to use the Video Combine node, you will need to convert the image list from FaceDetailer back to an image batch: use the Image List to Image Batch node and connect the images to Video Combine. Change also the name of the GIF or video that is generated with AnimateDiff. Change the frame rate to 12, as the original video had 25 frames per second and we extracted the poses from every second frame; you can later use frame interpolation to come back to the original 25 frames per second. Test again to see that FaceDetailer works. Now you get a more detailed face for our runner.

We are ready to process all the poses. Go back to the Load Images node and set image_load_cap to zero. Run the prompt; all the poses are processed, which will take a few minutes. And here it is: you have converted the original runner into a whole new character using AnimateDiff and the Instant LoRA method. Now you can just use your imagination to unleash the power of these two methods. Of course, you can still post-process the video afterwards to fine-tune it and get even more amazing results. That's all for the AnimateDiff and Instant LoRA animation method. I hope you have liked it; check the description for all the information on the method, and see you soon.
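The Context Options settings mentioned in the tutorial (context length 8, overlap 2) slide a fixed-size window over the animation frames so the motion model never sees more than a few frames at once. As a rough illustration of how such overlapping windows could be computed (a simplified sketch, not AnimateDiff Evolved's actual scheduling code):

```python
def context_windows(num_frames, context_length=8, overlap=2):
    """Split frame indices into overlapping windows (simplified sketch)."""
    if num_frames <= context_length:
        return [list(range(num_frames))]
    windows = []
    stride = context_length - overlap  # each window advances by 6 frames
    start = 0
    while start < num_frames:
        end = min(start + context_length, num_frames)
        windows.append(list(range(start, end)))
        if end == num_frames:
            break
        start += stride
    return windows

# 16 frames, length 8, overlap 2 -> three windows sharing boundary frames
for w in context_windows(16):
    print(w)
```

A larger overlap makes transitions between windows smoother at the cost of more sampling work, which is why the tutorial starts with a small overlap of 2.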
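The frame-rate choice in the tutorial follows from simple arithmetic: taking every second frame of a 25 fps source leaves roughly half the frames, so playing them back at about 12 fps preserves the original timing, and 2x frame interpolation afterwards brings the result back close to the source rate. A quick sanity check (plain arithmetic, not part of the workflow itself):

```python
source_fps = 25
frame_step = 2                             # poses extracted from every second frame
extracted_fps = source_fps / frame_step    # 12.5 frames per second of poses
playback_fps = int(extracted_fps)          # the frame_rate set in Video Combine: 12
interpolated_fps = playback_fps * 2        # 2x interpolation: 24, close to 25

print(extracted_fps, playback_fps, interpolated_fps)  # 12.5 12 24
```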
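The batch-to-list conversion the tutorial inserts before FaceDetailer exists because FaceDetailer processes images one at a time, while Video Combine needs them stacked back into a single batch. Conceptually, with frames as NumPy arrays standing in for ComfyUI's image tensors (a simplified sketch of the idea, not the Impact Pack's actual node code):

```python
import numpy as np

def batch_to_list(batch):
    """Split an (N, H, W, C) batch into a list of N (1, H, W, C) images."""
    return [batch[i:i + 1] for i in range(batch.shape[0])]

def list_to_batch(images):
    """Stack a list of (1, H, W, C) images back into one (N, H, W, C) batch."""
    return np.concatenate(images, axis=0)

batch = np.zeros((16, 512, 512, 3), dtype=np.float32)  # 16 animation frames
frames = batch_to_list(batch)        # processed one by one (as FaceDetailer does)
restored = list_to_batch(frames)     # re-batched for the video/GIF output

print(len(frames), restored.shape)   # 16 (16, 512, 512, 3)
```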
Info
Channel: Koala Nation
Views: 19,743
Keywords: ComfyUI, Stable Diffusion, Animation, AI, Automatic1111, Midjourney, Video editing, RunwayML, Video Animation, Artificial Intelligence, comfyui, stable diffusion, midjourney, tutorial, controlnet, comfyui stable diffusion, runpod, automatic1111, jupyter notebook, jupyter lab, google colab, colab stable diffusion, confyui animation, comfyui nodes, comfyui tutorial, SD, SDXL, SD tutorial, SDXL tutorial, TrackAnything, Segment Anything, SAM, Vast.ai, Vast, Animatediff, Lora, Instant Lora, IP Adapter
Id: Ka4ENd63VBo
Length: 11min 3sec (663 seconds)
Published: Tue Oct 24 2023