LCM + AnimateDiff High Definition (ComfyUI) - Turbo generation with high quality

Video Statistics and Information

Captions
AnimateDiff is great, but generation times are long. With LCM we can create animations ten times faster, although details are lost. With a second pass of your video through the KSampler, however, you can keep a great level of detail and still generate the animation three times faster. Stay tuned and learn how you can speed up creating your animations.

To use LCM, you first need to download the LCM LoRA model from Hugging Face; the link is in the description. Follow the instructions on the page to upgrade the diffusers and PEFT libraries so the model can run. In Files and versions you can find the safetensors file with the model; click to download it. If you are going to use Stable Diffusion XL, you need a different LoRA model from a different page; I also put that link in the description below.

You need to copy this file into the loras folder of your ComfyUI installation: go to the models folder, then open the loras folder and copy the file there. Rename the file to something like lcm_lora_sd15 so you can easily identify it later.

For this video I am using the AnimateDiff workflow from my previous video. You can download the input files and the workflow from the Civitai page in the description; check out that video tutorial for details on how to install the custom nodes and models. Once you have everything installed, download the Background LoRA AnimateDiff template from the Civitai page and drag and drop the template onto the ComfyUI canvas.

Now we are going to make some changes to the starting template. First, deactivate the saving nodes and the upscaling nodes; you do not need them now and can reactivate them later. Copy your KSampler to keep its connections and paste it with Ctrl+Shift+V. Do the same for the VAE Decode and Video Combine nodes, and connect the latent output of your new KSampler to the VAE Decode. Add a LoRA Loader node in front of the new KSampler and select the LCM LoRA model we just downloaded and copied into ComfyUI. As input for the LCM LoRA, use the AnimateDiff Instant LoRA model output that feeds the template's KSampler; we also need to connect the CLIP.

Before connecting the LoRA model to the KSampler, we need to include a ModelSamplingDiscrete node. Use LCM as the sampling method and connect its model output to the KSampler. LCM runs with fewer steps and a lower CFG than other samplers, so some settings need to change in the KSampler: change the steps to 8 and the CFG scale to 1.5, use lcm as the sampler, and change the scheduler to exponential.

Rendering will be very fast, but for testing we will first use only the first 12 frames. For this example I will use another character, but I will keep the snowy town background, as it has quite a lot of detail and will be useful for the comparisons later in the video. The original KSampler has 30 steps and a CFG scale of 10. I will also run this sampler so we can compare the original animation, without the LCM LoRA, against the one with this new model.

As you can see on the left, the LCM LoRA has been able to generate a good first 12-frame animation; however, the level of detail is not as good as in the original template. The number of steps in the LCM LoRA's KSampler matters, but beyond about six steps any increase does not lead to more detail in the animation. The recommended CFG level is between 1 and 2 for optimal results: too low will create blurred images, while above 2 we get CFG burn.

Changing the sampler parameters is not enough to increase the level of detail; you need an upscale or a second KSampler pass to achieve it. Copy the KSampler and connect the latent output of the LCM sampler to the second-pass sampler. Use a ModelSamplingDiscrete node here as well, but change the sampling to eps. Because our LCM base animation has already been rendered, we only add eight steps to the second-pass KSampler. This way the overall animation only takes 16 steps, almost half of the 30 steps of the original animation. The CFG level is 10, the sampler is dpmpp_sde, and the scheduler is karras; we decrease the denoise to 0.7.

There are some differences between the new animation and the original one, but it is difficult to say which one has more and better details. If you are happy with the new workflow, you can now make stunning AnimateDiff animations two or three times faster. You can, of course, later use the face detailer and interpolation for even better results.

Some optimization of the second KSampler is possible. Too few steps may not yield the results you want, but increasing them will not improve the quality much either. The same happens with the denoise: with a small value the image will be very similar to the LCM animation, while with a value close to one the AI may create some hallucinations.

That is all for today. I hope you enjoyed it. Check out my other Stable Diffusion tutorials, and see you soon.
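The copy-and-rename step described above can be sketched in Python. Both paths and the source filename are assumptions; adjust them to your own download location and ComfyUI installation.

```python
# Hedged sketch of the LoRA install step: copy the downloaded LCM LoRA
# safetensors file into ComfyUI's models/loras folder under a name that is
# easy to identify later. All paths and the source filename are assumptions.
import shutil
from pathlib import Path

downloads = Path.home() / "Downloads"                 # assumed download folder
loras_dir = Path("ComfyUI") / "models" / "loras"      # assumed install path

src = downloads / "pytorch_lora_weights.safetensors"  # hypothetical filename
dst = loras_dir / "lcm_lora_sd15.safetensors"         # renamed for easy lookup

loras_dir.mkdir(parents=True, exist_ok=True)
if src.exists():                                      # only copy if downloaded
    shutil.copy2(src, dst)
```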
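The first-pass KSampler settings quoted in the transcript can be collected in one place. This is a reference sketch, not part of the actual workflow graph: the dictionary keys mirror ComfyUI's KSampler widget names, and the small helper merely encodes the CFG guidance (between 1 and 2 is the sweet spot).

```python
# First-pass (LCM) KSampler settings as given in the video. Keys follow
# ComfyUI's KSampler widget names; values are the ones quoted above.
lcm_pass = {
    "steps": 8,                # beyond ~6 steps, more adds little detail
    "cfg": 1.5,                # recommended range is 1 to 2
    "sampler_name": "lcm",
    "scheduler": "exponential",
}

def classify_lcm_cfg(cfg: float) -> str:
    """Encode the video's CFG guidance for LCM sampling."""
    if cfg < 1.0:
        return "too low: blurred images"
    if cfg > 2.0:
        return "too high: CFG burn"
    return "ok"
```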
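The two-pass step budget works out as follows. Assuming render time scales roughly with total sampler steps, 8 + 8 = 16 steps against the original 30 lands close to the "two or three times faster" figure; the speedup here is back-of-the-envelope arithmetic, not a measurement.

```python
# Second-pass KSampler settings from the transcript, plus a rough step-count
# comparison. The speedup is a naive steps-only estimate, not a benchmark.
second_pass = {
    "steps": 8,
    "cfg": 10,
    "sampler_name": "dpmpp_sde",
    "scheduler": "karras",
    "denoise": 0.7,            # low values stay close to the LCM result;
}                              # values near 1.0 may hallucinate

original_steps = 30            # single-pass template: 30 steps, CFG 10
lcm_steps = 8                  # first (LCM) pass
total_two_pass = lcm_steps + second_pass["steps"]  # 16 steps overall

speedup_estimate = original_steps / total_two_pass  # 30 / 16 = 1.875
```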
Info
Channel: Koala Nation
Views: 4,218
Keywords: ComfyUI, Stable Diffusion, Animation, AI, Automatic1111, Video Animation, comfyui, stable diffusion, midjourney, tutorial, controlnet, comfyui stable diffusion, runpod, automatic1111, colab stable diffusion, confyui animation, comfyui nodes, comfyui tutorial, SD, SDXL, SD tutorial, SDXL tutorial, Vast.ai, Vast, Animatediff, Lora, Instant Lora, IP Adapter, colab comfyui, AI Animation, comfyui animatediff, upscaling, face detailer, LCM, Latent consitency model, LCM lora, LCM animation
Id: QdQANF3YLuI
Length: 5min 53sec (353 seconds)
Published: Mon Nov 27 2023