Animatediff perfect scenes. Any background with conditional masking. ComfyUI Animation

Video Statistics and Information

Captions
Take your hero to any place: to the city or to the forest, where it can be hot or it can be cold. Let your hero be a different person, or something else entirely. Learn how. First you will create the base LoRA scene: blend your main character with any background you want using a combination of ControlNets for the foreground and background. By using conditional masking, you will be able to perfectly blend your hero with any background. Then, using AnimateDiff, you can create amazing animations.

For this tutorial I assume you have ComfyUI and ComfyUI Manager installed. The materials and workflows can be downloaded from the Civitai article; the link is in the description. Go to the Civitai page and you will see a detailed step-by-step guide for this tutorial. In the attachments on the top right of the page, download the input zip file. Also download the Instant LoRA AnimateDiff base workflow JSON to use in this example; you can also download the final workflow of the tutorial. Open the input zip file, extract the contents, and copy the unzipped files into the input folder of ComfyUI.

Now, in ComfyUI, drag and drop the Instant LoRA AnimateDiff workflow onto the canvas. If the custom nodes of the workflow are not installed, an error will pop up. Do not worry, you can easily install them: go to the Manager, click on "Install Missing Custom Nodes", select all the items on the list, and install them. When the installation is complete, open the Manager again and update ComfyUI; this way you make sure the nodes work with the latest version. Restart ComfyUI to apply the update, and refresh your browser as well. As you see, all the red nodes and errors have disappeared.

The next step is to install all the models in their corresponding folders; the list can also be found in the description. I will not explain here how and where to install them. If you want to know how to build the Instant LoRA and AnimateDiff base workflow, I invite you to watch my other tutorial about it. After installing the models, click on the refresh button,
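The unzip-into-the-input-folder step can be sketched in a few lines of Python. This is only a convenience sketch; the paths are assumptions, so point them at your own ComfyUI install:

```python
import zipfile
from pathlib import Path

def extract_inputs(archive: Path, comfyui_input: Path) -> list[str]:
    """Unzip the tutorial's input archive into ComfyUI's input folder
    and return the list of extracted file names."""
    comfyui_input.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(comfyui_input)
        return zf.namelist()

# Example (hypothetical paths -- adjust to your setup):
# extract_inputs(Path("input.zip"), Path("ComfyUI/input"))
```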
then check that the correct models are loaded in each of the different model loaders. Start with a short test of four frames to verify that the workflow works; use any image you want to test the LoRA.

Once we know that the base workflow works correctly, let's start creating the LoRA scene. Add two Load Image nodes: one for the main character and the other for the background. It is convenient to resize both images to the same size as the composition you are going to make; in this case we use a width of 512 and a height of 768. Now we are going to rotoscope our hero. At the Impact Simple Detector (SEGS) node, add the UltralyticsDetectorProvider, select the body model, and connect it to the bbox input of the Simple Detector. Connect also the SAM loader. Convert your segments to masks using the SEGS to Mask node. Now combine your foreground and background using the ImageCompositeMasked node: connect your background to the destination input and the foreground to the source input, and connect also the mask that you have created. Add a Preview Image node and queue your workflow to see how it looks.

With these detector settings we see that more than one mask has been created: in the original image there were some human figures in the background that were detected by the node. In this case we will increase the threshold to 0.85 to make sure the people in the back are not detected. Now only one person is in our LoRA base image. To avoid artifacts, we want the reference image to cover a similar area as our OpenPose images. To do that, we can manipulate the size and position of our main character: change the width of the resizing to 448 pixels and the height to 624, and change the position of the character by setting X to 35 and Y to 250. That looks about right. Let's continue: connect your newly created image to the reroute node that is used for the LoRA test, and queue the workflow again to see the effect. We see that the hero has now changed scenes and is placed in a beautiful snowy town,
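The ImageCompositeMasked step above is, at its core, an alpha blend of two images through a mask. A minimal NumPy sketch of the idea (not ComfyUI's actual implementation):

```python
import numpy as np

def composite_masked(background: np.ndarray, foreground: np.ndarray,
                     mask: np.ndarray) -> np.ndarray:
    """Place the foreground over the background where the mask is white.

    background, foreground: (H, W, 3) uint8 images of the same size.
    mask: (H, W) float array in [0, 1]; 1.0 keeps the foreground pixel.
    """
    m = mask[..., None]  # broadcast the mask over the colour channels
    blended = (foreground.astype(np.float32) * m
               + background.astype(np.float32) * (1.0 - m))
    return blended.astype(np.uint8)
```

A soft-edged (blurred) mask gives fractional values at the boundary, which is exactly what later hides the seam between hero and scene.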
and now the ControlNet sequences for the background. Copy the Load Images nodes and make sure they point to the MLSD and Zoe Depth maps directories. Connect the INT output from the OpenPose Load Images node to the image load cap input of the other two Load Images nodes. If you are using frames from a video, you may want to use the corresponding preprocessors. We will add one ControlNet for the Zoe Depth maps and one for the MLSD. Note that we do not connect the positive and negative conditionings of these ControlNets with the OpenPose ControlNet of the foreground. Select the correct ControlNet models and check that the right frames are loaded.

Now you will create a mask that determines the area where your hero will be placed in the animation. Add two Color To Mask nodes and connect your OpenPose frames to both. In the first Color To Mask node, set the values of red, green, and blue to 255 and the threshold to 0. In the second one, set the color values to 100 and the threshold to 168. Connect the two mask outputs to a MaskComposite node and set the operation to "add". Connect the masks to a Grow Mask With Blur node; set the expand value to 100, the blur radius to 20, and the sigma to 1.7. This group of nodes creates a mask for the foreground, and the inverted one for the background.

We are going to combine these masks with the ControlNets using the Conditioning (Set Mask) nodes. Add two Conditioning (Set Mask) nodes; we will use them for the positive conditionings of the two ControlNet sequences. In the first one, connect the positive conditioning of the OpenPose ControlNet to the input of the node, and connect the mask to the corresponding input of the Conditioning (Set Mask) node. In the second, connect the positive conditioning of the MLSD and depth-map ControlNets, and then connect the inverted mask. Done this way, the conditionings are applied to the foreground and background independently;
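The two Color To Mask nodes plus the MaskComposite "add" can be approximated in NumPy as below. This is a rough stand-in using the thresholds from the tutorial; the node's exact color-distance metric is an assumption here (sum of per-channel differences), and the Grow Mask With Blur step is omitted for brevity:

```python
import numpy as np

def color_to_mask(img: np.ndarray, color: int, threshold: int) -> np.ndarray:
    """Select pixels whose summed per-channel distance to a grey `color`
    is within `threshold` (assumed stand-in for the Color To Mask node)."""
    diff = np.abs(img.astype(np.int32) - color).sum(axis=-1)
    return (diff <= threshold).astype(np.float32)

def foreground_background_masks(pose_frame: np.ndarray):
    """Foreground mask from the OpenPose frame: pure white pixels
    (color 255, threshold 0) added to mid-grey pixels (color 100,
    threshold 168), per the tutorial's settings."""
    white = color_to_mask(pose_frame, 255, 0)
    grey = color_to_mask(pose_frame, 100, 168)
    fg = np.clip(white + grey, 0.0, 1.0)  # MaskComposite, operation "add"
    return fg, 1.0 - fg                   # background uses the inverted mask
```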
however, the blurriness of the mask edges keeps the foreground and background from showing as really separated elements. Now we combine both using a Conditioning (Combine) node, which is then connected to the positive conditioning input of the KSampler. Finally, change the set cond area parameter of the background to "mask bounds".

For the negative conditionings we follow the same procedure: we connect the negative conditionings to two different Conditioning (Set Mask) nodes, using the mask for the foreground node and the inverted mask for the background, and then we combine them. However, in my experience it is sometimes better to directly combine the negative conditionings without any masking; try and test what works in your case. Don't forget also to adjust the ControlNet parameters so the animation is consistent for both the foreground and the background.

For a final test of the workflow, set your number of frames to 12, so that AnimateDiff can show a more accurate example of what to expect after rendering. Once we check that the animation looks as we want, we can adjust the different parameters of the ControlNets or the LoRAs, depending on what we want to achieve. If we are satisfied, we can run the complete animation. To render it, set the total number of frames to 0: this will process all the OpenPose frames, which determine the length of the animation.

In addition to the animation workflow, you have a couple of additional groups: one to detail the face and one to interpolate the frames. You do not have to, but activate them if you want even more detailed and smooth animations. You have created your animation: a beautiful warrior running in the snow. By changing your character and the background, you can create different but equally amazing animations. That's all for this tutorial; I hope you have liked it. Check the description for all the information about the method, and see you soon.
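The "0 means all frames" convention from the final render step can be made explicit with a tiny sketch (a hypothetical helper, not a ComfyUI function):

```python
def frames_to_render(available_pose_frames: int, frame_cap: int) -> int:
    """A frame cap of 0 means 'no cap': every OpenPose frame is processed,
    so the length of the pose sequence sets the animation length.
    Any positive cap limits rendering to that many frames."""
    if frame_cap == 0:
        return available_pose_frames
    return min(frame_cap, available_pose_frames)
```

This matches how the tutorial first tests with a cap of 4 (then 12) and finally renders everything by setting the cap to 0.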
Info
Channel: Koala Nation
Views: 3,532
Keywords: ComfyUI, Stable Diffusion, Animation, AI, Automatic1111, Video editing, RunwayML, Video Animation, comfyui, stable diffusion, midjourney, tutorial, controlnet, comfyui stable diffusion, runpod, automatic1111, google colab, colab stable diffusion, confyui animation, comfyui nodes, comfyui tutorial, SD, SDXL, SD tutorial, SDXL tutorial, SAM, Vast.ai, Vast, Animatediff, Lora, Instant Lora, IP Adapter, colab comfyui, AI Animation, comfyui animatediff, upscaling, face detailer, segmentation
Id: gDUeqCErjt4
Length: 10min 38sec (638 seconds)
Published: Wed Nov 22 2023