ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

Video Statistics and Information

Captions
Hi, I'm Abe, and today I'm going to show you how to make these mesmerizing morphing videos using ComfyUI. These are really cool animations, and you can create some hypnotic loops where one image morphs into another. I get really excited about how creative you can get with these, but ComfyUI workflows can sometimes be a little intimidating, so today I'll keep it simple: I'll share a plug-and-play workflow that takes four pictures and seamlessly blends them together in a captivating loop. Imagine using this for your artwork videos, reels, intros, or just for fun. There's a special workflow involved, but it's nothing crazy, and I'll break it down step by step so you can follow along easily. I'll show you where to get the workflow and how to install all the models and checkpoints you're going to need. We'll generate a basic morphing video, and then I'll show you how to supercharge that to generate cool video concepts from basic text prompts. By the end of this video you'll be able to generate your own mind-bending loops and build on top of the workflow I share, plus you'll pick up some tips and tricks along the way.

Alright, enough chit-chat, let's get down to business and start generating some morphing masterpieces. First, go to Civitai and download the JSON file for the workflow; a huge shout-out to ipiv, who created and shared this workflow. Extracting the download gives us the JSON file, so head over to ComfyUI, drag and drop the JSON file, and the workflow is loaded. Chances are that as soon as you load it you'll see a whole bunch of missing nodes. If you run into that, just go into the Manager, choose "Install Missing Custom Nodes", install whatever is missing, and restart ComfyUI; that will fix the missing-node issues.

Once you've fixed the missing nodes, you'll also need to download all the models. This workflow is great because it has all the links built in; I'll also add the links in the description to make things easier. Once you have all your models downloaded, open your ComfyUI folder and start putting the models in the right place; for example, the motion model goes into the AnimateDiff models folder (there's a folder-layout sketch at the end of this section).

Before we do anything else, let's take a quick look at what's going on here. There's a settings module at the bottom left where we pull in a LoRA for AnimateLCM, load a checkpoint (you can use any Stable Diffusion 1.5 checkpoint here), and load a VAE. There are prompt fields for the checkpoint, but in this workflow you won't really need prompts. Then there's the latent image; you can change it, of course, but keep in mind that because this is Stable Diffusion 1.5, you want to limit your maximum resolution to 512. Our batch size is 96, which means 96 frames will be generated. On the left is where we load the AnimateDiff model. You can change the motion scale here: a higher number gives you more motion, but it can get a little finicky. There are also some context options. Then we have our IP adapters, which require a couple of CLIP Vision models, and at the bottom our ControlNet: we're using the QR Code ControlNet and loading in a video mask. The ControlNet mask we're feeding in looks like this, and you'll see that the final video it generates follows this pattern as it morphs. You can also experiment with different masks, which I'll share later in the video.
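For reference, here's a sketch of where those model downloads typically go inside the ComfyUI install. This assumes the standard folder layout; the animatediff_models and ipadapter folders come from the AnimateDiff-Evolved and IPAdapter Plus custom nodes, and the exact filenames depend on the links in the workflow:

```
ComfyUI/models/
├── checkpoints/          # your Stable Diffusion 1.5 checkpoint
├── vae/                  # the VAE
├── loras/                # the AnimateLCM LoRA
├── animatediff_models/   # the AnimateLCM motion model
├── clip_vision/          # the CLIP Vision models for the IP adapters
├── ipadapter/            # the IP-Adapter model(s)
└── controlnet/           # the QR Code ControlNet
```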
For input, we're going to load four images. The workflow takes the ControlNet and the IP adapters and feeds them into the KSampler, which generates a video that we then combine; that's our original preview. That KSampler feeds into a second sampler, which upscales the video by a factor of 1.5, and after that we do some more upscaling with an upscale model plus frame interpolation.

So we have our workflow set up and all the models loaded; now let's take it for a spin. I'm going to upload four images. As you can see, they have a similar theme, which makes sense for a morphing video. I should also mention that we're feeding a video mask into the ControlNet, so I've tried to give it the best chance of success here; I'll show you how to use different motion animations and masks that might suit your pattern better later in the video.

I've loaded the images and I want to generate a preview, but before we run this, I'm going to do a couple of things. As I mentioned earlier, there's a bunch of upscale nodes here that will take a lot of time to run, so I'll disable all of my upscale nodes by selecting them and pressing Ctrl+M, which mutes them. I'll do the same with my second sampler, decode, and upscale nodes. Now I can hit run, and I'll walk you through what it's doing as it runs: it loads the checkpoints for AnimateDiff as well as for the base model, generates the ControlNet pass (you can see it moving along left to right here), creates the mask, and loads and processes all four IP adapters. All of this feeds into a sampler that uses LCM with 11 steps; each step takes roughly 40 seconds, so I'm expecting it to finish in around seven minutes. The upscale stages take a lot longer to run, which is why it's important to have something I like before upscaling. The sampler output feeds into a decode module that generates our frames, which we combine with the Video Combine node. The frame rate is 12, half of what you'd see in a typical television show or movie; you can increase it to 24, just note that you'll get a shorter animation (see the quick math after this section). Once we have a preview we like, we upscale it, either directly or through an upscale model, and run frame interpolation; at the end of that we'll have both an upscaled and an interpolated video.

Okay, we can see that it's done, and we have a preview saved here; you can see how the mask comes into play. If I like the result, I can enable the nodes we disabled earlier, upscale the video, and get something with frame interpolation. It's working as expected, but it's also not quite what I wanted, and getting to a first preview is the tough part.
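Here's the arithmetic behind the clip length, as a minimal sketch assuming the workflow defaults mentioned above (96-frame batch, 12 fps preview, 2x frame interpolation):

```python
frames = 96   # batch size in the latent image node = frames generated
fps = 12      # frame rate set on the Video Combine node

print(frames / fps)  # 8.0 -> the preview is an 8-second loop at 12 fps
print(frames / 24)   # 4.0 -> raising the frame rate to 24 halves the duration

# Frame interpolation goes the other way: it doubles the frame count
# (96 -> 192), so playback at 24 fps keeps the full 8-second loop
# while looking much smoother.
print(frames * 2 / 24)  # 8.0
```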
So how do we supercharge this so that we don't have to manually feed in each image and wait for the result? Let's work on changing the flow to do that. What I want is to generate four images from text, feed them into this flow, and generate a video preview based on those four images.

Okay, let's get started. The first thing I want to do is load a new checkpoint, which gives me control over the image model; for example, I could use an SDXL checkpoint to generate the images, followed by a Stable Diffusion 1.5 checkpoint to generate the animation. In my case I'm going to stick with Stable Diffusion 1.5, but you can choose a different type of model if you want. I'll add a couple of text prompt nodes, and here's a trick for the latent image: I'll keep my 9:16 ratio but generate a batch of four images. I'm using an advanced sampler here; we decode its output and save the images, and I'll call this "Morpheus". Now I'll give it a prompt, just to show you how we'll work with those batches, and of course you want to keep this family-friendly. We'll disable the animation flow just by muting the checkpoint over here. If we run that, we'll get a whole bunch of errors, but that's fine.

Let's say I'm happy with these and I want a morphing flow built from these four images. Instead of feeding in each image manually, I'll feed in my own generated images. To do that, I first break this batch up using the Image From Batch node, which takes a batch index (starting at 0) and lets me extract an individual image or multiple sets of images. I'll reroute this to keep things a little cleaner, make four instances of it, extract the first image (index 0), then the second, third, and fourth, and feed them into the IP adapters.

We now have our text-to-image stage, which generates a set of images. As you can see, all four are fairly similar; that's because the batch uses the same seed. To mix it up a little, we'll change the seed behaviour for the latent batch to random; if I run this again, I get a different set of images. Okay, yeah, this is pretty cool; I think this would make a pretty good morph.

Maybe I also want a different type of pattern for something like this: a circular pattern may or may not make the most sense, so I can look at some more patterns here, and I think I like this one. You might run into an issue with the URL here; the best way to work around it is to copy the video address and paste it into the Load Video node, so I'll go ahead and change the video path. I've left a link to these video masks in the description below. Enable our checkpoint again and run the flow. I'll also share this modified workflow in the description so you can use it to generate animations from just text prompts. This is really powerful, because I can load all of my prompts from an external text file and generate a small video for each of them (there's a rough sketch of how you might script that below); if you want to see how to generate a batch of images from external text prompts, I have a separate tutorial on that. And oh my god, this is looking amazing already.

Okay, so I can start to upscale this. To do that, I've unmuted my second KSampler node: it takes the images from the first model, upscales them by 1.5, and feeds them into a second KSampler, which eventually feeds into an upscale model. Now we can see it's finished generating an upscaled version; if we compare the sizes, you'll see the preview was the smaller one and we now have a much larger animation. With that done, I'll queue up the upscale-with-model step, and after that I could feed it into the frame interpolation models, which help smooth out the video by giving you 24 frames per second instead of just 12. For the purposes of this video I won't run the interpolation; we should still be able to get an upscaled video.
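As a rough sketch of that external-prompts idea (my own illustration, not the method shown on screen): ComfyUI exposes a small HTTP API, so you can export the workflow with "Save (API Format)" and queue one run per line of a prompts file. The file names and the prompt node id below are assumptions you'd replace with the values from your own export:

```python
import json
import urllib.request

WORKFLOW_FILE = "morph_workflow_api.json"  # hypothetical: your API-format export
PROMPTS_FILE = "prompts.txt"               # hypothetical: one prompt per line
PROMPT_NODE_ID = "6"                       # hypothetical: id of your positive CLIP Text Encode node

with open(WORKFLOW_FILE) as f:
    workflow = json.load(f)

with open(PROMPTS_FILE) as f:
    prompts = [line.strip() for line in f if line.strip()]

for text in prompts:
    # Swap in the new prompt, then queue the whole graph for execution.
    workflow[PROMPT_NODE_ID]["inputs"]["text"] = text
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",  # default local ComfyUI address/port
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(f"queued {text!r}: {resp.read().decode()}")
```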
Okay, so there you have it: you should now be able to generate your own morphing masterpieces. If this video helped you, I'd really appreciate it if you hit the like button and subscribe for more ComfyUI tips and tricks. Thanks for watching, see you in the next one. This is... this is pretty cool. Abe out.
Info
Channel: Abe aTech
Views: 7,427
Keywords: comfyui, animatediff, ipadapter, stable diffusion, img2vid, txt2img2vid, comfyui morphing video, Stable Video Diffusion, SORA, comfyui tutorial for beginners, Morphing Video, AI Animation, Text-to-Video, AI Video Generation, Free AI Software, Video Editing, Motion Graphics, After Effects Alternative, Creative Content, content creation, ai tools, artificial intelligence, creative software, comfy tricks, comfyui tips, morphing techniques, ipiv's morph, ipiv, aivideo, ai video
Id: mecA9feCihs
Length: 11min 41sec (701 seconds)
Published: Wed Apr 17 2024