Stable Diffusion Animation Create Tiktok Dance AI Video (Tutorial Guide)

Video Statistics and Information

Captions
Hey everyone, I've been creating short dance videos using Stable Diffusion's AnimateDiff framework and ControlNet to make flicker-free animation videos, and these kinds of videos also work well on TikTok. So I'm going to walk you through the process and the steps of how I did it. This time we are also including the LCM LoRA model to speed up the processing, and using the IP-Adapter to retain the quality of the image frames. Pay attention in this tutorial, because you can also make a TikTok dancing girl video that goes viral.

I have been using the workflow that we previously discussed, the AnimateDiff flicker-free workflow. Now I am building upon this foundation and adding new custom nodes to it. At this point we load our source videos; here we are using boxing action stock video footage. As we scroll down a bit, we can see that we have the LCM model connected to our checkpoint loader. The checkpoint loader is using a normal checkpoint model, in this case Realistic Vision 5.1 with VAE. I don't need to use the Load VAE node this time, because the checkpoint model already includes a VAE.

Moving on, we have the LoRA loader, a custom node that loads the LCM LoRA model. The LCM LoRA model is chosen based on your checkpoint model version: if you are using an SDXL model, choose the LCM SDXL LoRA; if you are using an SD 1.5 checkpoint model, select the LCM SD 1.5 LoRA. Below this node we have the ModelSamplingDiscrete node, where we select LCM for sampling.

Now let's see how to download and install these LCM LoRA models. We will visit the official Hugging Face page for the LCM models and download the SD 1.5 and SDXL LCM LoRA models. We will save them in the ComfyUI models folder, specifically in the loras subfolder. Next, we will rename each LCM LoRA model file according to its version; for example, we can name them LCM_SDXL_lora.safetensors and LCM_SD1.5_lora.safetensors. Once we have all these files saved, ComfyUI will be able to utilize the LCM LoRA models.

Let's go back to the workflow. With LCM, the processing speed is fast, but the downside is that the image quality is not high. To overcome this problem I have tried two methods, and both of them work for Stable Diffusion AnimateDiff animation videos. First, let's take an example without using an upscaler; we'll queue the prompt for this example, and then we'll use an upscaler for another example. I'll fast forward this part because it mainly involves waiting and loading. As you can see, the results are displayed in the Video Combine custom node. However, the details of the two characters in the boxing ring are not very clear, so how can we improve that?

The first method is to use an upscaler to enhance the image quality. This is not too complex for anyone to work with. I have created upscale groups linked after the VAE Decode output. We'll connect the VAE Decode output image to the upscaler and use an upscaler model to enhance each frame of the animation; you can use 4x-UltraSharp or any other upscaling model. Let's compare the results of the upscaled and non-upscaled animated videos. The first video is the normal-size output, and the second one is the upscaled output. As you can see, not only are the video width and height upscaled, but the color and sharpness are also improved. This is one method to improve animation videos generated with LCM.

Now let's move on to the second method: we will integrate the IP-Adapter and adjust the sampling settings. We'll tweak the sampling steps to achieve better results. In this example I'm using 8 steps and CFG 2 with the LCM sampler; however, for denoising I'm using 0.7. Let's run this animation. Here we are running the LCM model with the IP-Adapter, without an upscaler. As you can see, I'm using the same checkpoint model, Realistic Vision 5.1, but I have made some adjustments to the sampling settings, and this is what the
results look like. Let's also try with denoise 1.0 and see the results. By adjusting the settings, the animation gets more special effects coming from the prompt travel custom node. In this example I'm using a Christmas tree and Christmas styles for the background, creating a festive atmosphere for the entire animation. Increasing the denoise setting brings more of the effects from the travel prompts into the final result. So this is the setting I'm using: running the LCM LoRA model with the IP-Adapter, without an upscaler, at sampling steps 8, CFG 2, and denoise 1.0. By using denoise 1.0, I got more effect from the text prompt. The setting depends on you: if you want your animated dance videos to take more effect from the text prompt, choose a higher denoise; if you want your animation videos to look more realistic, like a real person, lower the denoise. It's a give-and-take between the settings here. This is the same concept as the previous mov2mov animation video, but this workflow, applying AnimateDiff, is a big improvement.

For the next example, I am using the same workflow that I created, but I bypass the IP-Adapter group, purely using the LCM LoRA models without the IP-Adapter. Of course this speeds up the processing, but you will see the outcome of it. In the text prompt area I have defined the clothing of the characters and also the atmosphere of the backgrounds, and here I got the character face from AI girl Nancy. I don't want to introduce too much of this face tool, because the group of authors behind its open-source backend libraries spam copyright complaints all over YouTube, but you can check out my previous tutorials about this face tool. As you can see in the upper part of this workflow, we have two ControlNet groups: one is LineArt and the second one is OpenPose. We are going to bypass the IP-Adapter,
passing all the image frames into these two ControlNet groups to process, and then passing all frames into the AnimateDiff modules. Then we wait for the result of this dance animation video. Now let's take a look at the outcome. We are using the LCM LoRA model without the IP-Adapter and without the upscaler. As you can see, the hands and face appear blurry, but the movement is quite smooth, thanks to the two ControlNets and AnimateDiff working on it. So how can we improve the blurriness and overcome the weaknesses of using the LCM LoRA model? Let's move on to our next example.

I enable the IP-Adapter group, and the model data goes through here: we take each image frame and pass it to the LineArt ControlNet group, then the OpenPose group. Once ControlNet is done, it goes to AnimateDiff, and then the KSampler. As we are using the LCM LoRA, the KSampler does not take too long; actually, the OpenPose ControlNet preprocessor spends the most time in this generation. After OpenPose has generated each frame to the OpenPose preview, everything goes really fast.

Okay, let's check out the final example of this tutorial. As you can see, the result this time is better: the characters' clothing and skin are clear, without blurriness. This improvement is achieved by enabling the IP-Adapter and using an image as a reference prompt to enhance each frame of the animation. The example shown here is just a few seconds long; now let's generate the full length of this dance animation, and you will see the complete picture. For this dance animation, the source has 700 frames, so we'll set the frame cap and the max frames in the travel prompts to 700. We confirm all the settings, keeping them the same as in the previous example, and hit generate to create the full-length animation video. Let's fast forward through this part and see the result. As you can see, the outcome is the same as the previous few-second sample we
generated earlier, and the quality remains consistent. We can also change the color of the hair and clothing: in this case the dress is black because that's defined in my travel prompt, but there are also pink sleeves because I specified those in the travel prompt as well. Let's compare the generated video side by side with the source video, and you will notice the differences: the hair color in the source video is red, while in the generated video it appears brownish black. Here's the full view of the animation video, running with the LCM LoRA and the IP-Adapter at sampling steps 8, CFG 2, and denoise 0.7.

That concludes today's tutorial. We have used AnimateDiff with the IP-Adapter to create high-quality animations. Compared with previous tutorials, where we used mov2mov for video-to-video animations in Stable Diffusion Automatic1111, this time we used AnimateDiff, and the smoothness of the characters' movements is significantly improved. The flickering problems have been resolved, and we can also change the character's clothing, hairstyle, and the background of the animations; the final result looks totally different from the source video. Many people have requested these features in Stable Diffusion, and based on the comments I've seen, this workflow provides a solution. Feel free to play around with this workflow, create your own TikTok accounts, make dance videos, and create your own AI virtual characters. I hope you like it. Don't forget to give this video a like and subscribe to this channel. I'll see you in the next video, and have a great day! Oh, and before I finish, I want to thank my Patreon supporters; you guys are awesome. This workflow is available in our Patreon community, so you can download it and explore it further. Bye!
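As a side note, the LCM LoRA renaming step from the tutorial can be sketched in a few lines of Python. The folder path and file names below are my own illustrative assumptions (ComfyUI just needs the files under its loras model folder); this is a sketch of the convention, not part of the official workflow.

```python
from pathlib import Path

# Assumed ComfyUI install location; adjust to your setup.
COMFYUI_LORAS = Path("ComfyUI/models/loras")

def lcm_lora_target(base_model: str) -> Path:
    """Return a version-specific destination path for a downloaded LCM LoRA,
    matching the checkpoint family (SD 1.5 vs SDXL) it will be used with."""
    names = {
        "sd1.5": "LCM_SD1.5_lora.safetensors",
        "sdxl": "LCM_SDXL_lora.safetensors",
    }
    if base_model not in names:
        raise ValueError(f"unknown base model: {base_model}")
    return COMFYUI_LORAS / names[base_model]

print(lcm_lora_target("sd1.5"))  # e.g. ComfyUI/models/loras/LCM_SD1.5_lora.safetensors
```

Renaming matters because both Hugging Face downloads ship under generic file names, so without a version suffix you cannot tell the two LoRAs apart in ComfyUI's LoRA loader dropdown.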
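The denoise trade-off discussed in the tutorial (0.7 for fidelity to the source frames, 1.0 for maximum prompt effect) can be illustrated with the common img2img convention, where the denoise value scales how much of the sampling schedule is actually run. This is a simplified sketch of the general principle, not ComfyUI's exact scheduler code.

```python
def effective_steps(steps: int, denoise: float) -> int:
    """Approximate number of sampler steps actually executed: with denoise < 1,
    sampling starts partway into the noise schedule, so the source frame
    survives partially; with denoise = 1 the frame is fully re-generated."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return round(steps * denoise)

# Settings from the tutorial: 8 steps, CFG 2, LCM sampler.
print(effective_steps(8, 0.7))  # roughly 6 of 8 steps -> more of the source kept
print(effective_steps(8, 1.0))  # all 8 steps -> strongest text-prompt effect
```

This is why a higher denoise pulls in more of the travel-prompt effects while a lower denoise keeps the animation closer to the real dancer.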
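The prompt travel behaviour used in the tutorial (different prompts taking effect at different frames, up to the 700-frame cap) can be approximated as a keyframe lookup. Real prompt travel nodes also interpolate between keyframes in conditioning space; this hypothetical sketch just holds each prompt until the next keyframe, and the schedule contents are made-up examples, not the tutorial's actual prompts.

```python
def prompt_at(schedule: dict[int, str], frame: int) -> str:
    """Return the prompt active at `frame`, given {start_frame: prompt} keyframes."""
    active = None
    for start in sorted(schedule):
        if start <= frame:
            active = schedule[start]
        else:
            break
    if active is None:
        raise ValueError("no keyframe at or before this frame")
    return active

# Example schedule in the spirit of the tutorial (700-frame animation).
schedule = {
    0: "girl dancing, black dress, pink sleeves",
    350: "girl dancing, christmas tree background, festive lights",
}
print(prompt_at(schedule, 100))  # first keyframe's prompt is still active
print(prompt_at(schedule, 500))  # second keyframe has taken over
```

Setting the frame cap and the travel prompt's max frames to the same value (700 here) keeps the schedule aligned with the source video's length.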
Info
Channel: Future Thinker @Benji
Views: 52,255
Keywords: stable diffusion, stable diffusion tutorial, ai animation, AI-generated animations, denoising settings, realistic animations, stable diffusion img2img, ai art, install stable diffusion tutorial, AI animations, Stable Diffusions, virtual characters, YouTube shorts, dance videos, Roop Face Swap, tiktok dance video, animatediff comfyui, animatediff controlnet, animatediff prompt travel, animatediff stable, stable diffusion animation
Id: wFahkr-b7HI
Length: 11min 20sec (680 seconds)
Published: Wed Dec 06 2023