AnimateDiff ControlNet Animation v1.0 [ComfyUI]

Video Statistics and Information

Captions
This animation was made using AnimateDiff with ComfyUI and Automatic1111. Download the JSON files from the description below; simply drag and drop the downloaded files into the ComfyUI workspace when you want to use them. You need to have these ComfyUI extensions installed before using this workflow; links are in the description below.

For this tutorial I am using this dance video by Helen Ping as reference. Drag and drop your reference video into After Effects, make a new composition with the video, downscale the video to a smaller resolution between 480p and 720p, and export the video as a JPEG image sequence. We will need these images for making our initial ControlNet passes in ComfyUI.

Import the images using the Load Images From Directory node: copy the images directory and paste it into the node input. For this reference video two passes are needed, SoftEdge and OpenPose. We will save the passes and name the filename prefixes accordingly for better organization. First we will cap the images at 10 to test whether the images render in sequence or not; if yours do not, watch part two of this tutorial for fixing common issues when animating with AnimateDiff. My test passes came out fine, so I will render all the frames: delete the test renders, remove the cap in the Load Images node, and hit render. Now all the ControlNet images are rendered. Make two new folders with the respective names: put the SoftEdge images in the HED folder and the OpenPose images in the OpenPose folder. Double-check that all the images rendered correctly (a small helper sketch for this check appears at the end of this transcript). To make your life easy, I've included this ControlNet passes JSON file; simply drag and drop it into your workspace. That easy.

Drag and drop the second workflow file, which is the main animation workflow. Let's break down the process. All green nodes are input nodes. This is the model loader node: choose the style of the animation, realistic, anime, or cartoon. These are the resolution nodes: set the same width-to-height ratio as the reference video. This is the skip frames and batch range node. These are the usual positive and negative prompt nodes. These are the ControlNet units for applying the ControlNet passes. This is the KSampler node. The purple nodes are the ControlNet pass input nodes; no extra processing is needed for ControlNet since its images are already rendered, which cuts the rendering time and lets us test animations much faster. Load the ControlNet images into the purple nodes.

So let's start. Choose the model for the style you want, realistic, anime, or cartoon, and select the appropriate SD model; for this example I am using an anime model, Mamix. I have provided notes all over the node network for you to read if you need to change any settings of the other nodes. Now we will set the dimensions in the same ratio as our original video; for me it was already set to 544 by 960 pixels.

Suppose you have 100 images as input and your PC can handle only 50 at a time (more than 50 images can cause some minor issues), so we have two batches of 50 images each. On the first round we set the batch range to 50 and skip frames to 0. On the second round we keep the batch range at 50 and set skip frames to 50, so it skips the previously rendered frames. We will apply this same idea when we do the final rendering. For testing our animation we will use 10 frames in the batch range (see the sketch below).
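As a quick illustration of the skip-frames / batch-range arithmetic described above, here is a minimal Python sketch (not part of the video's workflow) that lists the (skip_frames, batch_range) pairs for a given frame count and per-batch GPU limit; the 100/50 split and the 340-frame total below simply mirror the examples discussed in the tutorial and are otherwise hypothetical numbers.

```python
def batch_schedule(total_frames: int, max_per_batch: int):
    """Yield (skip_frames, batch_range) pairs for rendering in chunks.

    skip_frames -> how many already-rendered frames to skip this round
    batch_range -> how many frames to render this round (the last batch may be smaller)
    """
    skip = 0
    while skip < total_frames:
        batch = min(max_per_batch, total_frames - skip)
        yield skip, batch
        skip += batch

# Example from the walkthrough: 100 input frames, the PC handles ~50 at a time.
print(list(batch_schedule(100, 50)))   # [(0, 50), (50, 50)]

# Hypothetical final render: 340 frames at ~150 per batch (RTX 3070 Ti laptop GPU,
# 544x960 output), leaving a last batch of 40 as mentioned in the video.
print(list(batch_schedule(340, 150)))  # [(0, 150), (150, 150), (300, 40)]
```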
Next, type in your prompts. Use simple positive prompts as usual; the negative prompt is already set up with the negative embeddings.

Now we move to ControlNet. Copy and paste the SoftEdge pass images directory into the ControlNet 1 node, and copy and paste the OpenPose pass images directory into the ControlNet 2 node. Everything is in place and we are now ready to test our animation. Open the output folder in the ComfyUI directory. My test frames rendered correctly, so it's now ready to render the final animation. If the faces are not looking good, there is no need to worry; that will be fixed later.

I have an RTX 3070 Ti laptop GPU; it can handle a maximum of about 150 frames at this resolution, so I will enter my laptop's maximum capacity. Yours will be different: it depends on the render resolution and the GPU's VRAM. Mine is around 150 frames for 544 by 960 pixel images. For round two I will set skip frames to 150 so it renders the next 150 frames, and so on until the rest of the frames are rendered. One thing to keep in mind: suppose I have 40 images left in the last batch, then I also change the batch range to 40. The batch range node also provides the empty latent images for the sampler, and the sampler doesn't know what to do with the extra images, so it just repeats the rendered images.

All the images have now been rendered, but we still have to fix the faces. In the Automatic1111 img2img tab, drag and drop one image for testing; pick the image with the most visible face. Click the auto-detect button for the image dimensions, choose the model used during the animation, and select the negative embedding for better results. You should have the ADetailer (After Detailer) extension installed. Enable ADetailer, check "Skip img2img", enter a prompt for ADetailer, then generate and check the results. I will also try different models. I like this SD model's face, so I'll test it with another image with a side view. Lowering the inpaint denoising strength reduces the flicker, so you have to experiment with this value. Copy the directory of the input images and paste it into the Batch tab, make a new folder for the output images, copy its path, and paste it into the output directory path. After all the images are rendered, sequence your batches one by one in After Effects and render the video.

Likewise, I rendered the same frames using the epiCRealism model and fixed the faces in Automatic1111 using ADetailer. With a larger inpaint denoising strength, this kind of disproportionate face happened; lowering the inpaint denoising strength solved the problem. Then I upscaled all the images using Topaz Gigapixel AI. Using the original reference for audio, I sequenced all the batches, added some color correction, zoomed the composition, and rendered out the video.

This is just the start; we can create many artworks like this, and the possibilities are endless. If you are facing bugs, you can see the issues listed in the notes inside the main JSON file; part two of this video will also be uploaded soon to explain how to fix those issues. I would really love to see the work you create with this workflow. I would appreciate it if you forward your work to me on Discord or mention it down in the comments; my Discord username is Jerry Davos. I feel very happy replying to you. With your love and support, I will keep making more tutorials just like this.
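As a side note on the earlier folder-organization step (sorting the SoftEdge and OpenPose passes into their own folders and double-checking that every frame rendered), here is a small hypothetical Python helper. The folder paths and filename prefixes are assumptions rather than names from the video, so adjust them to match your own save prefixes.

```python
import os
import re

def frame_numbers(folder: str, prefix: str):
    """Return the sorted frame indices found in `folder` for files named like `prefix`_00001.jpg."""
    pattern = re.compile(rf"{re.escape(prefix)}_(\d+)\.(?:jpg|jpeg|png)$", re.IGNORECASE)
    return sorted(int(m.group(1)) for f in os.listdir(folder) if (m := pattern.match(f)))

# Hypothetical folder layout and prefixes; change these to your own paths.
softedge = frame_numbers("controlnet_passes/softedge", "softedge")
openpose = frame_numbers("controlnet_passes/openpose", "openpose")

print(f"SoftEdge frames: {len(softedge)}, OpenPose frames: {len(openpose)}")
if len(softedge) != len(openpose):
    print("Warning: the two passes have different frame counts.")

# Report any gaps in the numbering, which would break the image sequence.
for name, nums in (("softedge", softedge), ("openpose", openpose)):
    missing = set(range(nums[0], nums[-1] + 1)) - set(nums) if nums else set()
    if missing:
        print(f"Missing {name} frames: {sorted(missing)}")
```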
Info
Channel: Jerry Davos AI
Views: 111,971
Keywords: animatediff, controlnet, comfyui, animation, ai video, stable diffusion
Id: HbfDjAMFi6w
Length: 16min 3sec (963 seconds)
Published: Sun Nov 05 2023