Stable Diffusion Animation Create AI Video Using AnimateDiff With IPA FaceID

Video Statistics and Information

Captions
Hello everyone, here's another update of the AnimateDiff flicker-free workflow in ComfyUI for animation videos. I have adopted the new IP-Adapter FaceID and integrated it into the workflow, and I want to try out a few examples with you so we get a better understanding of how to use it.

Some people ask what is so special about this, since it is "just video-to-video." Picture a CG movie crew setting up a green-screen backdrop in a studio and sticking motion sensors all over an actor's body to capture movement. With AI we can now do that easily and at very low cost, which is a big saving for production studios. If someone doesn't understand the purpose of this, think about that before posting a comment. I mostly demo with dance videos just because they are a general-purpose example that makes it easy for most people to get interested and see how far generative AI can go.

So let's start our session. We have the new IP-Adapter FaceID, and I have integrated it into the ComfyUI AnimateDiff workflow that I created; this is the first version of the FaceID SD 1.5 custom node group here. As you can see, I am running a queue I tested previously, and this run is without the ReActor face-swap custom node; I just set a small number of frames for demo purposes. The result has come up, so let's resume the preview and take a look. Pretty smooth, right? The body movement doesn't flicker like it used to, and the face is totally blended into the animated character. Also, note that it took only about two minutes to generate this animation without using face swap.

Now let's use ReActor and see how much time it takes; again, I have not changed any settings in the other custom nodes. This is the new run using ReActor: it takes two minutes to generate the animation, plus another 205 seconds for the ReActor face swap to process. So the processing time roughly doubles when we use ReActor nodes to process the face like this.

Let me try one more time without ReActor: it took just 157 seconds. But as you can see, the face is a bit blurry without any enhancement or detailer; the nose, the eyes, everything is influenced by the IP-Adapter FaceID. It does look more natural, though. With IP-Adapter FaceID the face blends into the animation; it doesn't feel like ReActor, where a picture is stuck onto the character's head and some angles look awkward as it moves. So IP-Adapter FaceID is pretty good if you want fast generation without doubling the time for a second face pass, but of course the quality is lower if you run it without any enhancement. That's how the world is: nothing is perfect, there are pros and cons, some give and take, and good quality takes time.

Let me show you another example. I am running this section with ReActor again, and you can see the comparison here: the face in the ReActor output looks sharper because it replaces the face frame by frame, but at some angles it is not natural; it looks like a picture pasted on top of the character, as I mentioned. On the left side, where we use IP-Adapter FaceID, the face is generated together with the body and blends into the character itself, because the IP image prompt produces the character from the start inside the KSampler. So that is the difference between using IP-Adapter FaceID and ReActor: as always there is give and take, and a faster processing method means lower quality.
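To make that difference concrete: FaceID does not copy pixels at all; it conditions the diffusion sampler on an identity embedding extracted once from the reference photo (the ComfyUI FaceID nodes use InsightFace for this), while ReActor swaps the rendered face frame by frame afterwards. Here is a minimal sketch of that embedding extraction, assuming insightface and onnxruntime are installed; the file name reference_face.jpg is a placeholder:

```python
# pip install insightface onnxruntime opencv-python
import cv2
from insightface.app import FaceAnalysis

# Load InsightFace's detection + recognition stack
# (model weights are downloaded on first run).
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0 -> first GPU, -1 -> CPU

img = cv2.imread("reference_face.jpg")      # placeholder reference photo
faces = app.get(img)
if not faces:
    raise RuntimeError("no face detected in the reference image")

# FaceID-style adapters feed this 512-dim identity vector into the model's
# conditioning, so the face is generated together with the body in one
# KSampler pass instead of being pasted on per frame.
identity = faces[0].normed_embedding
print(identity.shape)  # (512,)
```

That one-pass conditioning is why the FaceID result blends naturally but needs a detailer for sharpness, and why ReActor's per-frame swap is sharper but can look pasted on.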
Now let's try the newer FaceID model, FaceID Plus version 2, and integrate this node group into the workflow. We connect the IP-Adapter to this one, disconnect the previous FaceID node group, delete its KSampler, and connect the model to the KSampler in this workflow. Let's generate one more time and see what we get.

Here is another output example: I am generating with FaceID Plus version 2 and also with the ReActor face swap, so you can compare them side by side. The FaceID Plus version 2 result is looking better, but it is still a little blurry on the face. First of all, this is not a close-up view of the character, so it is not going to be as clear when generated in one KSampler pass compared with pasting a face onto the image frame by frame like ReActor does. But we can do another enhancement on the FaceID Plus version 2 output by enabling the detailer group here. We have the detailers for the animation, and as you can see I am using the face detector provider so it focuses only on enhancing the face. Let's click generate one more time. This is a very good thing about ComfyUI: when the data in the workflow is exactly the same, it is cached in the browser and the system, so once you enable a new custom node at the end of the flow, it only runs the newly added nodes.

Okay, I did something wrong here: it is still quite blurry, which shouldn't happen after the detailer. This is my bad, because I had not changed the sampling method; it was still in LCM mode, and I should select DPM++ 2M and set the sampling steps and the CFG back to normal values. Let's try it again. Yes, that was a bad example I showed, but it's good trial and error, and you should remember to set the right parameters in the workflow. You can see the eyes have cleared up, and again, using FaceID feels more natural to me: the character's face is actually influenced by the reference image that I have. Here is the side-by-side with the reference image.

Another question I received is: how do you change the background and the outfit of a character when you do video-to-video like this? Let me show you another example. I have a normal IP-Adapter in this workflow, and in my example I am using a fireworks picture as the background reference and an AI girl face image for the FaceID Plus V2 IP-Adapter group, with the GPU selected in the provider loader. That is mostly how background influencing is done with the IP-Adapter. Then we enable the after-detailer for face enhancement; I set it to 0.35 sharpening. Other than that, I have not changed any settings here. I am also using DPM++ 2M, I disabled the LCM LoRAs, and I am purely using the Realistic Vision checkpoint model; I don't like to use too many LoRAs in my generations. And here is the text prompt: you can influence the character's outfit and the video's background in the text prompt as well. As you can see, I am telling the AI there are fireworks in the sky, that the character is walking on a snowy ice lake, and what color the character's outfit is, what the hair color is, and so on; the face is influenced by the FaceID node groups.
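The LCM mix-up above is worth pinning down, since it is an easy mistake to repeat: the KSampler settings have to match whether an LCM LoRA is loaded. As a rough reference, here is a sketch of the two configurations; the exact steps and CFG values are typical starting points I am assuming, not this workflow's exact numbers:

```python
# LCM is a distilled mode: it only works with the LCM LoRA loaded, and it
# expects very few steps and a very low CFG. A plain checkpoint such as
# Realistic Vision needs a normal sampler like DPM++ 2M instead.

LCM_SETTINGS = {
    "sampler_name": "lcm",          # ComfyUI sampler id for LCM
    "scheduler": "sgm_uniform",
    "steps": 8,                     # distilled for very few steps
    "cfg": 1.5,                     # high CFG burns the image in LCM mode
}

NORMAL_SETTINGS = {
    "sampler_name": "dpmpp_2m",     # "DPM++ 2M" in the ComfyUI dropdown
    "scheduler": "karras",
    "steps": 25,
    "cfg": 7.0,
}
```

Running one set of sampler settings with the other setup's LoRA state, as happened above, is exactly what produces blurry frames that no detailer can fully rescue.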
Let's try it with a shorter frame count, just for demo purposes so it loads faster, and click run and see. Oh, before running, I want to set the denoising of the KSampler a little higher, to 0.75, which basically allows the AI to be more creative. I also set the FaceID weight in this group as high as I can, and I like to set the FaceID LoRA loader to about 0.65, a little higher to give more influence on the face. Everything is good, so let's click run and see.

We have the output already: the left is the normal KSampler output and the right is after the face detailer. This is not a great example for face detailing because the character is too far from the camera, but you can see that the look, the background, and the character's outfit are influenced a lot by the IP-Adapter and also by the text prompt. So you can do this with this method: the better the image you put in the normal IP-Adapter (an image without characters), the more easily the AI understands what kind of background to use for the output video. You can also use the text prompt to describe what kind of outfit the video's character is wearing, like the hair color and the black shirt here; I have defined that in the text prompt. The IP-Adapter here is purely using a background image to describe the fireworks, and the text prompt describes the color of the clothing, the hairstyle, and so on. As you can see in the controller preview here, the fewer objects there are behind the subject in the background, the better the output result.

Okay, here I load up another video for a better close-up character example, so we can see a clearer face effect in the result. I have not changed any settings here, just the source video, and let's click run one more time and see something different. Here is the result: as you can see, the background is influenced heavily by the fireworks image I used, and the detailer doesn't affect it much because I set the denoising and the refiner ratio very low; I just let it look more naturally like an animation video. Sometimes the text prompt is a little off and the model ignores your instructions about the clothing colors or the kind of character you want in the output, but you can generate again with a different text prompt and keep trying. That is mostly how this workflow's method works and how you can influence things. One more pretty cool thing here: we got the ice and snow in the background, and the output picked that up.

So basically this is it: this is how we can use this new workflow to get a unique face for a character without ReActor, another new way, I would say, to create characters in animation videos and to change the backgrounds and the characters' outfits. You can try it; it's not only for dancing videos, you can try other action videos, like what people do in movies and so on. I will see you in the next video; I hope you enjoy this new update of the AnimateDiff workflow, and have a nice day.
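One last practical note for anyone who wants to script this kind of trial and error instead of clicking run in the browser: ComfyUI exposes a local HTTP endpoint that accepts a workflow exported in API format. A minimal sketch, assuming a default local server; the file name and the node ids here are placeholders you would replace with the ones from your own exported graph:

```python
import json
import urllib.request

# Workflow saved via "Save (API Format)" in ComfyUI -- placeholder name.
with open("animatediff_faceid_workflow.json") as f:
    workflow = json.load(f)

# Override the values discussed above. The node ids "3", "14" and "21"
# are hypothetical; look up the real ids in your exported JSON.
workflow["3"]["inputs"]["denoise"] = 0.75          # main KSampler
workflow["14"]["inputs"]["strength_model"] = 0.65  # FaceID LoRA loader
workflow["21"]["inputs"]["weight"] = 1.0           # IP-Adapter FaceID weight

# Queue the job on the default local ComfyUI server.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # returns a prompt id
```

Re-queuing with different text prompts or denoise values then becomes a small loop rather than repeated manual runs.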
Info
Channel: Future Thinker @Benji
Views: 6,763
Keywords: stable diffusion, ai art, AnimateDiff, Flicker-free workflow, ComfyUI, Animation video, IPadapter Face ID, Generative AI, CG movie production, Dance video, Realistic vision, Denoising, Text prompts, Ksampler, Detailers, stable diffusion tutorial, stable diffusion ai, stable diffusion img2img, install stable diffusion tutorial
Id: aCbjIYQSuos
Length: 13min 16sec (796 seconds)
Published: Thu Jan 04 2024