Faster SDXL video conversion! AnimateDiff + SDXL Lightning! (follow along in ComfyUI)

Video Statistics and Information

Captions
Hello, this is Neural Ninja. In this video we will make a video using AnimateDiff and SDXL Lightning. SDXL is especially good at creating videos with unique effects like these. It used to be hard to work with because of its speed, but with the release of SDXL Lightning it became possible to generate at high speed. The workflow is almost identical to the existing SD 1.5 one, so you will be able to follow along easily. If you are using Colab, we recommend turning on high-capacity RAM.

First, let's load the previous workflow. This is a workflow using SD 1.5 and LCM. I will delete the Detailer-related nodes. There is no major change; we just swap the models for SDXL versions. First, select an SDXL checkpoint. I will also change the ControlNet model to the XL model, and switch AnimateDiff to an XL motion model. There are two kinds of AnimateDiff motion models for SDXL: the SDXL one and HotShot-XL. As of now, only HotShot-XL seems to work properly for video conversion, so select HotShot-XL for both the motion model and the schedule. Unlike the existing AnimateDiff, HotShot-XL was trained on 8-frame clips, so Context Length should be set to 8, not 16.

Now we add the SDXL Lightning LoRA in place of the LCM LoRA. I will add the 8-step LoRA and connect the model and the clip. Before generating, let's change the Depth preprocessor to Depth Anything and set up the sampler accordingly. The basic workflow is now in place. We will also add an image generation node so you can preview what the result will look like.

I will load the video I prepared and reset the size to match it. It may also help to set the resolution on the Depth or LineArt ControlNet preprocessors. Now, let's convert only 15 frames to check the result. I'll enter the prompt I prepared and the LoRA; this is a prompt with a Disney princess LoRA. I'll reconnect the model and clip. Let's disable the AnimateDiff node and generate a still image first. It came out well. Now let's convert the 15 frames, and I'll adjust the FPS as well.

FreeU, which has been added to ComfyUI, is also helpful when converting videos. I hear it improves image quality by internally suppressing noise without consuming additional resources. It may feel like an extra step when generating a single image, but I think it's a good idea to always include it when converting a video; the result is more stable. I'll generate again, adjusting the LoRA and ControlNet strengths a little. It came out well. Now I will generate about 100 frames for a reasonably long clip. Please refer to the previous video for how to create a longer video. It came out well.

Let's try other prompts as well. This time we will apply a statue look using only the prompt. I'll remove the LoRA connection and hook the model and clip back up. Let's change the prompt and generate again. The result is a bit so-so, so I'll shorten it and raise the CFG a little. Since Denoise is set to 1, the seed value also has a significant impact; for now, let's leave the seed as it is and generate. I think it has gotten a little better. Now let's increase the frame count to 100 and generate again. Let's apply an origami effect, and try other prompts as well. I think this is a good way to create special effects.

Above, I converted a video using AnimateDiff and SDXL Lightning. I think it can give the video a different feel from SD 1.5, and unlike LCM there is little loss of quality, so it may be the better option considering the speed. I hope the video helps. Thank you.
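For reference, here is a minimal sketch (not the video's exact graph) of how the model side of this workflow can be written in ComfyUI's API prompt format. CheckpointLoaderSimple and LoraLoader are stock ComfyUI nodes; the ADE_* class names, their input names, and the model filenames are assumptions about the AnimateDiff-Evolved extension and may differ in your install.

```python
# Sketch only: SDXL checkpoint + 8-step SDXL Lightning LoRA + HotShot-XL motion
# module, expressed as a ComfyUI API-format prompt. The ADE_* node class names,
# their inputs, and the model filenames are assumptions -- verify them against
# the AnimateDiff-Evolved nodes installed in your own ComfyUI.
import json

workflow = {
    # SDXL checkpoint (any SDXL base model or fine-tune)
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    # 8-step SDXL Lightning LoRA applied to both the model and the clip
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "sdxl_lightning_8step_lora.safetensors",
                     "strength_model": 1.0, "strength_clip": 1.0}},
    # HotShot-XL was trained on 8-frame clips, so context length is 8, not 16
    "3": {"class_type": "ADE_AnimateDiffUniformContextOptions",  # assumed name
          "inputs": {"context_length": 8, "context_stride": 1,
                     "context_overlap": 4, "closed_loop": False}},
    "4": {"class_type": "ADE_AnimateDiffLoaderWithContext",      # assumed name
          "inputs": {"model": ["2", 0], "context_options": ["3", 0],
                     "model_name": "hotshotxl_mm_v1.pth",
                     "beta_schedule": "sqrt_linear (AnimateDiff)"}},
    # ControlNet (depth / lineart), CLIPTextEncode, KSampler, VAEDecode and a
    # video-combine node would complete the graph before queuing it.
}

print(json.dumps(workflow, indent=2))
```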
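The sampler settings that pair with the 8-step Lightning LoRA, and the point about denoise and the seed, can be summarized roughly as below. The euler / sgm_uniform combination and the low CFG follow the SDXL Lightning release notes as I recall them; treat the values as a starting point rather than the video's exact numbers, and the upstream node IDs as hypothetical.

```python
# Rough KSampler settings for the 8-step SDXL Lightning LoRA (a starting
# point, not the video's exact values). KSampler is a stock ComfyUI node;
# the node IDs "4", "5", "6", "7" refer to hypothetical upstream nodes.
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["4", 0],         # AnimateDiff-patched model from above
        "positive": ["5", 0],      # CLIPTextEncode (prompt + LoRA trigger)
        "negative": ["6", 0],      # CLIPTextEncode (negative prompt)
        "latent_image": ["7", 0],  # batched latents, one per video frame
        "seed": 123456789,         # with denoise 1.0 the seed matters a lot
        "steps": 8,                # matches the 8-step Lightning LoRA
        "cfg": 1.5,                # Lightning expects a low CFG (roughly 1-2)
        "sampler_name": "euler",
        "scheduler": "sgm_uniform",
        "denoise": 1.0,            # full denoise; the input video guides via ControlNet
    },
}
```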
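The FreeU step mentioned above is available as a built-in ComfyUI node. A minimal sketch, assuming the FreeU_V2 node and SDXL-oriented b/s values as published by the FreeU authors; your install may suggest different defaults.

```python
# FreeU re-weights the UNet's backbone and skip-connection features at
# sampling time, so it adds no extra model weights and little overhead.
# The b/s values are the SDXL-oriented ones from the FreeU README as I
# recall them -- treat them as a starting point.
freeu = {
    "class_type": "FreeU_V2",
    "inputs": {
        "model": ["4", 0],  # patch the AnimateDiff-patched model before the sampler
        "b1": 1.1, "b2": 1.2,
        "s1": 0.6, "s2": 0.4,
    },
}
```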
Info
Channel: 뉴럴닌자 - AI공부 (Neural Ninja - AI Study)
Views: 2,925
Id: 2mh_lQWkFvk
Length: 11min 34sec (694 seconds)
Published: Sat Mar 09 2024