3D+ AI (Part 2) - Using ComfyUI and AnimateDiff

Captions
What's up everyone, welcome to part two of my 3D + AI tutorial series. If you missed part one, where I covered how to easily render 3D animations in Blender using Mixamo, you can find the link to that video right above. Watch that first, then come back to this one. Today I'm going to explain how to run your 3D renders through ComfyUI and AnimateDiff. If you're not familiar with ComfyUI, I'll put links in the description to my tutorials on how to install it on your PC.

I was originally going to do a tutorial on AnimateDiff in Automatic1111. I tried it and got to a point where I had some really good results, but when I came back a few days later, after making an update, I unfortunately couldn't get it to work. If it's not going to work consistently, I'd rather not cover it. If I can get it working in the future, I'll make another video for it. Thank you so much for understanding. All right, without further ado, let's get started.

We're going to start with ComfyUI. My workflow is based on another workflow created by a guy who goes by Akumetsu971, and funny enough, he based his workflow off of my consistent vid2vid workflow. According to him, he improved it to be better, faster and stronger. How dare you! No, no, it's all good, because I also tweaked what he created, so we're all inspiring each other to create setups that suit our preferences. I also want to recommend a workflow created by a guy who goes by Jbugs. His workflow is great for getting very surreal-looking visuals that look clean even at low resolution. Jbugs is a content creator for civitai.com, and he does live streams on Twitch where he explains how ComfyUI works and answers people's questions. You can find his workflow on the Civitai website, and he has an in-depth tutorial on how to use it on his YouTube channel. I'll put links to all of these in the description.

Okay, don't get scared and run away yet. I know this looks complicated, and technically it is, but don't worry, I'm going to help you through it. I got you. I'm going to go over each section and explain what you need. You don't need to know what everything does in detail; let's just try to run it with the current parameters to make sure we get something to generate. Then, when you're comfortable, you can play with the parameters on your own. I will be including two versions of this workflow: one is a heavy-duty setup for those with a strong enough GPU, and the other uses LCM, which makes the render time faster, though the quality might suffer.

The first thing we're going to look at is the width and height of your video. The default numbers work well for 9:16 videos, which are great for TikTok, Reels and Shorts. If you want a wide shot instead, you just flip the numbers around, and of course you can increase them if your computer can handle it, as long as they stay in the same aspect ratio as your original video. Keep in mind that these dimensions do influence the final image: for example, 512 x 512 will look quite different than 720 x 720. But don't be afraid to stay at low resolutions, because you can always upscale, and I'll talk about that a little later.
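As a quick aside, here's a minimal sketch of my own (not part of the workflow) for picking width and height values that match an arbitrary source video: it keeps the aspect ratio and rounds both sides to multiples of 8, which SD 1.5 latents expect.

```python
# Minimal helper (an assumption, not a workflow node): scale a source
# video's dimensions so the short side hits a target, keeping aspect
# ratio, and round each side to the nearest multiple of 8.
def render_size(src_w: int, src_h: int, short_side: int = 512) -> tuple[int, int]:
    scale = short_side / min(src_w, src_h)
    w = int(round(src_w * scale / 8)) * 8
    h = int(round(src_h * scale / 8)) * 8
    return w, h

print(render_size(1080, 1920))  # portrait 9:16 source -> (512, 912)
```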
Then there's your video loader; load your video here. While you're experimenting, I recommend keeping your frame load cap at around 10 to 15, meaning it will only generate 10 to 15 frames. This is just to get an idea of what your render will look like without having to wait several minutes for the full video to generate. Once you're getting results you like, set the cap back to zero, which will render the full video. For "select every nth frame," I have it at 2, but you can always put it at 1. At 2 it uses every other frame, which reduces the render time, but this all depends on preference and on what kind of footage you're using. If your footage has a lot of fast motion, you may want every frame, because at every second frame you're going to miss some of the action when your subject moves very fast.

I do have to mention this, and it's very important: check the models on all of these nodes, because the default names here may not match the models you actually have on your drive. If you placed models into your folders while ComfyUI was running, click Refresh in the Manager window to make the newly added models visible, or they won't appear. If you run everything and there's an error, there's usually a red or purple outline over the node causing the issue; check that node and make sure the right model is loaded there.

When it comes to checkpoints, some animate better than others. The model I'm using here is Hello Young; I like this one quite a bit, but go ahead and play around with different models and see what comes out. You can find so many models at civitai.com, so go check that out. Just remember this workflow is set up for SD 1.5 models, so don't use SDXL models; that requires reworking the setup, and ain't nobody got time for that.

Here I'm using the v3_sd15_adapter model as a LoRA, and of course you would save this model in your LoRA folder. Here's what the animation looks like with it and without it. You can lower its strength, or bypass it by pressing Ctrl+B, if you're not liking the results. Make sure you load the VAE you have saved already; again, VAEs are easy to find on civitai.com.

Since we will be using an IPAdapter, prompts don't have to be super long, because the IPAdapter will essentially act as our prompt. I'll explain that in a bit, but for now just put in something briefly describing what you want, and throw in some words like "masterpiece," "high quality," "cinematic," etc. The negative prompt is, of course, what you don't want in your image, so put whatever you don't want there.

Make sure you download the CLIP Vision model I link in the description; some people have struggled to find this model, so just download it from the link and place it in the ComfyUI/models/clip_vision folder. For the Load IPAdapter Model node, the links to download the IPAdapter models are in the description; place them in the folder you see on screen. I like to use either ip-adapter-plus_sd15 or ip-adapter_sd15_light. Play around with these and see which gives you results you're satisfied with.

Over here is what allows the magic to happen: you place images in these sections to influence your final output. You can use any image you like, from Midjourney, DALL·E or a Google search; civitai.com also has an image search you can pull from. Weight affects how strongly the image influences the final output: the higher the weight, the more influence the image has on the final render; the lower it is, the more the text prompt takes over. So just play around with this; this tool is extremely useful.
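If you ever want to sweep a setting like the IPAdapter weight without clicking through the graph each time, ComfyUI's built-in HTTP API accepts a workflow exported in API (JSON) format. This is a rough sketch under assumptions: the node id "12" and the input name "weight" are hypothetical placeholders; check your own exported JSON for the real ones.

```python
import json
import urllib.request

# Load a workflow previously exported from ComfyUI in API format.
with open("workflow_api.json") as f:
    workflow = json.load(f)

for weight in (0.5, 0.75, 1.0):
    # Hypothetical node id and input name; inspect your own JSON.
    workflow["12"]["inputs"]["weight"] = weight
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",                 # default ComfyUI address
        data=json.dumps({"prompt": workflow}).encode(), # queue one render per weight
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Each POST queues one render, so this loop leaves three jobs in the queue, one per weight value, which makes side-by-side comparisons easy.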
AnimateDiff: I'm currently using the newly released v3_sd15 motion model, which looks great and is my favorite right now. There are other models you can use, though, so test those out and see what works best for you. Links to these models are in the description; save them in your custom_nodes/ComfyUI-AnimateDiff-Evolved/models folder.

ControlNets: I'm not going to go into detail on how ControlNets work, so please either watch my ControlNet video (the link should be above) or someone else's video explaining what they do. Basically, they help shape the final output. I set up several ControlNets so that you can enable or bypass each one however you like. My favorites are Depth, SoftEdge and OpenPose. OpenPose is not always ideal, though; it depends on the camera angles of your subject, since it's always trying to find body parts, and if your video has, for example, a bird's-eye view, it can misidentify limbs, head or torso. The active ones right now are SoftEdge, OpenPose and the ControlNet checkpoint, also known as ControlGIF. ControlGIF works differently than the other ControlNets, but it helps improve the quality of the animation, so let's leave it in. You can use the Ctrl+B shortcut to bypass or enable each section, and each ControlNet setup is labeled, so hopefully that helps. Click on the models to confirm each node is using the exact model saved on your drive; again, some of these might be named differently, so check them. The higher the strength, the more closely the render follows the original video; lowering the strength allows the AI some liberties, which could be exactly what you want. Of course, you don't have to stick to these; you can play around with other ControlNets later.
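To build some intuition for that strength setting: conceptually, a ControlNet contributes guidance residuals that get blended into the diffusion model's features in proportion to strength. The toy snippet below is my own illustration of that scaling idea only; it is not ComfyUI's actual implementation.

```python
import numpy as np

def apply_control(features: np.ndarray, residual: np.ndarray,
                  strength: float) -> np.ndarray:
    """Toy illustration: at strength 0.0 the control signal is ignored;
    at 1.0 it fully pushes the render to follow the source video."""
    return features + strength * residual

base = np.zeros((2, 2))      # stand-in for model features
guidance = np.ones((2, 2))   # stand-in for ControlNet output
print(apply_control(base, guidance, 0.5))  # half strength -> values of 0.5
```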
Once you've made sure you have everything you need set up, click Queue Prompt. You can obviously play with the parameters however you like, but I just want to make sure you get it running properly first. The first render's quality will depend on your resolution settings, but even if your image looks kind of bad and lacks detail, you can run it through the upscaler to make it look way better.

Like I just said, if you get something low quality, we have this iterative upscaler, which really improves your image; however, each step takes a long time to run. If you don't want to wait, you can do a single step, but if possible I would add a few more steps, because it does make a difference. Here I have some examples of how each step changes the render; these are without the face detailer and without interpolation. You can also skip this part by bypassing the nodes in this section, and it will go straight to the face detailer.

Another option for upscaling is Topaz Video AI. You simply load the video into Topaz, go to the output resolution (I would do 4x to make it as clean and sharp as possible) and run it. This is great if you're happy with your first render but just want it cleaner and higher quality. The great thing about Topaz is that it's a one-time purchase, not subscription based, and it has other cool features like motion blur, frame interpolation and stabilization. I am an affiliate for Topaz, so if you click on the link in the description you'd be supporting me, which I would greatly appreciate.

Face detailer: the face detailer re-renders only the face and adds a lot more detail to it. There's a prompt section here meant strictly for the face, so you can put in something like "male," "female," or a celebrity like Will Smith or Margot Robbie, whatever you're going for. I also added an IPAdapter where you can supply a specific face; just make sure it's a close-up of the head. Here's an example of what it can do. You might run into issues if your subject turns around a lot, because it's going to be constantly looking for a face. This might not always be something you want, so feel free to bypass this whole section if it's causing issues.

Finally, we have interpolation. This section is great for bringing some smooth fluidity to your animation, especially if you're generating every other frame. Sometimes running every second frame is a good idea, since you're generating half the frames of your video (see the little arithmetic sketch after this transcript).

The last thing I'll mention is the workflow with the LCM setup. This workflow is exactly the same as the one we went over, except now you have two extra nodes and some changed parameters. This setup lets you run the first render quicker, because LCM allows you to stay at a low step count; however, quality may vary depending on the style you're going for and what kind of footage you have. I suggest you test it and see if it works for you. Save the LCM model in the folder where you keep all your LoRAs.

Now, how about we run the 3D animation we rendered in part one? I loaded the video, and since I want to preview the animation before committing to several minutes of waiting, let's do 10 frames on the frame load cap. How about we make this guy a sea monster? I'll put in a few prompts describing that, and for the IPAdapter I'm going to use an image of a sea creature and splashing water. For ControlNets, I'm not going to use OpenPose, since I'm moving the camera in ways that will likely confuse it; for now I'll just go with SoftEdge and Depth. Now let's queue the prompt. Okay, cool, I like this. Let's run the whole thing by putting zero on the frame load cap. All right, here's the full animation. It's not too bad; I would personally do a bit more tweaking on the settings, but it's fine for this example. Here's another one that I did with fire; this one is a little cooler, in my opinion.

All right everyone, that's it for today, but I will be continuing this 3D + AI series with some other cool ways you can use Blender for AI generations, so make sure you subscribe and hit that bell icon to get notified when I upload. Thank you so much for watching, and as always: take care, God bless, and peace.
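A quick arithmetic footnote to the interpolation section above. This is my own illustration; the clip length and frame rate are assumed numbers, not from the video.

```python
# Rendering every 2nd frame halves the diffusion work; 2x frame
# interpolation afterwards restores the original frame count.
# The 10-second, 24 fps clip here is an assumed example.
total_frames = 10 * 24                        # 240 source frames
select_every_nth = 2
rendered = total_frames // select_every_nth   # 120 frames to diffuse
interpolated = rendered * 2                   # back to 240 for playback
print(f"diffused {rendered} frames, played back {interpolated}")
```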
Info
Channel: enigmatic_e
Views: 12,833
Keywords: comfyui, animatediff, stable diffusion, ai art, blender, 3d animation, ai animation
Id: hy9TLp-xIBo
Length: 11min 10sec (670 seconds)
Published: Mon Jan 29 2024