ComfyUI Animatediff #ipadapter #controlnet #reactor

Video Statistics and Information

Captions
Hi! Today we will see how to make an animation like this one. The animation is based on this video; we are not doing video-to-video, we are only using its movement.

Let's start with a very basic workflow (you will find it linked in the video description, so you can use it). We have a checkpoint, a LoRA for LCM, the v3 adapter, AnimateDiff, and Model Sampling: in short, everything we need to create the animation. The first thing I want to show you is how we can reuse this group of nodes in the future, by working with templates in ComfyUI, which is a very convenient and efficient way to work. To create a template, all you need to do is select the nodes you want to turn into a template and right-click. Note that you must right-click on an empty area of the canvas, not over one of the nodes, because the menu there is slightly different. In that menu you will also see a list of templates; these are templates I have made before. To create a new one, just click "Save Selected as Template", give it a name, and save. From now on, whenever you want to use this group of nodes, right-click on an empty part of the canvas and choose the template by name.

So let's load the video that will serve as the basis for our animation. I use 16 frames just for testing; later we will extend the animation as much as we want. To extract the model's movement we will use ControlNet, and here too I have a convenient ready-made template with everything already connected. We will of course work with OpenPose, because we want the model's pose, and we can disable the Load Image node, since the video reference will be the starting point for the movement we want to generate. We connect a positive prompt and a negative prompt, go back to the KSampler as we already know, and tidy up a bit.

In the Preview Image node we can now see the movement as it is extracted from the video: you actually see the skeleton of the model, and on this skeleton we will dress the character that interests us. Let's write a positive prompt, "photo of fat man dancing in the jungle"; that is what we want to get, and we press Queue.

As you can see, we got a jungle, but we got a woman rather than a man, and she is very thin and not fat, so we need to fix that. I also made a mistake: our frame is portrait rather than landscape. I'll just fix it by swapping height and width, and now it will be fine.

To get the fat man we will use IPAdapter. Here I use a template I made in advance for IPAdapter: it contains CLIP Vision and the model. We will change the model to a Plus model, which is more suitable for animation and more powerful than the other models. We also have Prepare Image For ClipVision, and here we load the character we want to get, so I added a picture of a fat man. Let's connect the model to the IPAdapter and back to the KSampler. All that's left is to play a bit with the weights and the added noise. I also change the weight type to "channel penalty", which gives the image more influence than the prompt. Now we press Queue again and see what we get.

We probably won't get a result close to what we want, because our picture has a white background that affects the final image, as we can see. To fix it we have to use a mask. I use the SAM Detector, a tool that lets me create a mask by marking points on the image; as soon as I click Detect, you can see the mask.
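Before wiring the mask in, it can help to sanity-check what it actually keeps. Here is a small, illustrative Python sketch (outside ComfyUI entirely) that previews the masked region of a reference image; the file names are placeholders, not files from the video.

```python
# Illustrative only: preview which part of a reference image a mask keeps.
# "fat_man.png" and "fat_man_mask.png" (white = keep) are placeholder names.
from PIL import Image

ref = Image.open("fat_man.png").convert("RGBA")
mask = Image.open("fat_man_mask.png").convert("L")

# Composite the reference over a neutral gray background using the mask,
# so only the masked (subject) pixels remain visible in the preview.
neutral = Image.new("RGBA", ref.size, (127, 127, 127, 255))
preview = Image.composite(ref, neutral, mask)
preview.save("masked_reference_preview.png")
```

If the preview shows only the character on a neutral background, the IPAdapter should likewise draw its reference from the character alone rather than from the white backdrop.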
Now all we have to do is connect this mask to the mask input of the IPAdapter and press Queue. We have just told the IPAdapter which part of the image to refer to and which part to ignore; only the masked part is actually fed to the model. As you can see, we got the jungle back in the background, together with our character, combined with the movement from the video.

What remains now is to produce the animation at full length, or whatever length we want. To make the process a little more efficient, I link the number of frames of the latent and of the video: I right-click the video loader and convert the frame limit to an input, do the same in the Empty Latent Image node and convert the batch size to an input, and then create a Primitive node that defines both the number of frames we take from the video and the number of frames used for the animation. In this case I write 60, which is the length of animation I want, and we press Queue.

This is the animation we got. We have the movement of the character and the jungle in the background, but if we look at the face, it looks pretty bad. To fix this we will use ReActor, another node available in ComfyUI. I load the video we just created and also define the image of the face we want in our animation (you can of course use ReActor on a single image as well). What remains is to add a new Video Combine node after the ReActor, which recombines all the frames into the final video. Again, I test on 16 frames; as soon as I see the result is good, I run the same action on the entire animation. I can also turn the ReActor group into a template so I can reuse it quickly next time.

This way we can work with ComfyUI both to create an animation based on motion from another video and to correct distorted faces caused by low resolution. I hope you learned something and that we will meet in the next lessons. You are of course welcome to ask questions, comment, and like, and most importantly: have fun. Bye!
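A closing note on the frame-count trick above: once the frame limit and the batch size are exposed as inputs, the same workflow can also be driven from a script via ComfyUI's HTTP API. Below is a minimal sketch, assuming a local ComfyUI server on its default port and a workflow exported with "Save (API Format)"; the node ids ("12", "15") and the exact input names are placeholders that depend on your own export.

```python
# Minimal sketch: queue an API-format ComfyUI workflow with a custom frame count.
# Node ids and input names below are placeholders; check your own exported JSON.
import json
import urllib.request

with open("animatediff_workflow_api.json") as f:
    workflow = json.load(f)

# Because frame limit and batch size were converted to inputs, only two
# values need to change to lengthen or shorten the animation.
FRAME_COUNT = 60
workflow["12"]["inputs"]["frame_load_cap"] = FRAME_COUNT  # video loader
workflow["15"]["inputs"]["batch_size"] = FRAME_COUNT      # empty latent

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```

Queued this way, changing FRAME_COUNT is the scripted equivalent of editing the Primitive node in the UI.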
Info
Channel: PixelEasel
Views: 1,303
Id: j9LN8KLfqPs
Length: 7min 4sec (424 seconds)
Published: Sun Feb 11 2024