ComfyUI AnimateDiff Prompt Travel: Getting Started.

Captions
This is going to be a simple install video for ComfyUI, specifically catered to getting you set up with AnimateDiff. First I want to review the ComfyUI repo page so you can see what it's about: it's a node-based workflow for Stable Diffusion, and there is actually support for AMD GPUs as well as Apple silicon. This video will be specifically catered to NVIDIA and Windows users, but the Linux install is pretty much the same; just make sure you adjust the venv commands so they work appropriately for your system. If you have any questions on that, comment down below.

I have a folder set up for the tutorial, and inside it I have a resources folder, which holds the checkpoints, motion LoRAs, motion modules, VAE, and workflow. All of that will make sense in a moment as we start working, but let's get started. We head into the tutorial folder (you can name your directory whatever you want) and type CMD in the address bar, which opens a command line inside that directory. The first thing we do is create a virtual environment: python -m venv venv. Now that that's set up, we activate it with venv\Scripts\activate (I pressed Tab there to autocomplete). With the environment activated, we git clone the ComfyUI repository. I have the link in my notes, but you can find it by going to the ComfyUI page and copying it from there. So: git clone, paste the link, and press Enter. That copies ComfyUI into our folder.

Now that we have ComfyUI, we want to start moving some things into their appropriate places, specifically the models and the VAE, meaning our Stable Diffusion checkpoints and our VAEs. I'm going to open my resources folder on the side to make it clear what's being done, starting with the checkpoints folder. In ComfyUI we navigate to models, then checkpoints, and drop our checkpoints in. The checkpoints just need to be any Stable Diffusion 1.5 checkpoint. I have two here; you can download any checkpoint you want from Civitai, get one from Hugging Face, or just use the base Stable Diffusion 1.5 checkpoint. A checkpoint is essentially a style: it lets you generate images in a specific look, like anime-style images, realistic images, or 3D-render-looking images, all depending on the checkpoint you install. Next we move our VAE into the appropriate folder, which is under models and vae. A VAE handles the color composition of the image, so it changes how the colors look: one VAE might be more saturated, while another might give everything a red tint. It all depends on how it was trained.

Next up, we want to pip install torch, so we'll just copy the command. If you're looking for it in the ComfyUI repo, it's under the NVIDIA section, this line right here. Copy, paste, and install. This will take a bit of time depending on your internet connection and speed, so just be patient and let it do its thing.
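As a minimal sketch, the steps so far look roughly like this in a Windows command prompt. The folder name tutorial_comfy is just an example, and the torch line shown here mirrors the NVIDIA section of the ComfyUI README at the time; the cu118 index URL changes as new CUDA builds come out, so copy the current line from the README rather than this one:

    cd tutorial_comfy
    python -m venv venv
    venv\Scripts\activate
    git clone https://github.com/comfyanonymous/ComfyUI.git
    pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu118

On Linux or macOS, the activation line becomes source venv/bin/activate instead.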
Once torch is finished installing, the next thing we want to do is cd into ComfyUI, because remember we are currently inside the tutorial folder, which holds the ComfyUI folder. So we head in there: cd, change directory, into ComfyUI, as the name implies. Inside there is a file called requirements.txt, which simply lists the things ComfyUI needs to work properly, and the way we install them is to run pip install -r requirements.txt. That installs the remaining pieces we're missing to get ComfyUI started.

All right, now that that's done, a very fast recap: we created a virtual environment and activated it, we git cloned ComfyUI, we placed our checkpoints and VAEs in the models folder in their respective directories, we installed torch, and we handled the pip install for the requirements file.

The next thing we want to do before starting ComfyUI is install the ComfyUI Manager. Why? Because it's going to make things a lot easier for beginners, and it makes it really easy to just copy-paste workflows into ComfyUI: it will tell you "hey, you're missing this, and this is where you can install it," which you'll see in a moment. So I head to the ComfyUI Manager repo page and copy the link to clone it, and instead of cloning it in the root directory, we cd into custom_nodes (you can see that directory mentioned in the ComfyUI Manager repo, or just find the folder inside ComfyUI). All we do is git clone and paste, and you'll notice the Manager is now inside our custom_nodes.

Next we actually run ComfyUI. We cd .. , which says we want to change directory upward, out of the directory we're in and into its parent, which is ComfyUI. Then we run python main.py, and this finally starts ComfyUI for us. Notice that it's installing ComfyUI Manager: although we cloned the repo, we never installed it, and that gets handled for us on startup. We copy the link from the terminal, paste it into a new tab, and we are officially inside ComfyUI.
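For reference, that stretch of the install looks roughly like this; the ComfyUI-Manager clone URL is the one from its GitHub page as of this video, so check the page itself if it has moved:

    cd ComfyUI
    pip install -r requirements.txt
    cd custom_nodes
    git clone https://github.com/ltdrdata/ComfyUI-Manager.git
    cd ..
    python main.py

When main.py starts, it prints a local address (usually http://127.0.0.1:8188) that you open in your browser.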
Now, I have a workflow file that I will be supplying with the tutorial; it's inside my resources folder, under workflow, and it comes from the AnimateDiff Evolved repo. Big shout-out to Kosinkadink: he is awesome (or she, I'm not sure of their gender, sorry), but they are incredibly smart, they're in the Banodoco server on Discord, and the work they do is amazing. We take our workflow file and literally drag and drop it into ComfyUI, and whoops, big red nodes. Nothing to worry about; this is why we installed ComfyUI Manager. The Manager lets us install missing nodes in a very simple fashion. Here it's telling us it can't find this node, this node, this node, and this node. All you have to do is click Manager on the right, and inside the Manager there is an option that says Install Missing Custom Nodes. Look at that: it brings up every single repo that has the nodes we're missing, and we can even click these blue links, which take us to the actual node pages, so we know what it is we're installing, how it works, and maybe even find some workflow examples. And lo and behold, one of them is Kosinkadink's AnimateDiff Evolved repo, which is what we'll be using. So all we do is click Install on the right for each individual node pack.

Notice that when installing, at the bottom it tells us that in order to apply these custom nodes we have to restart ComfyUI. So we head to our terminal, press Ctrl+C, and run python main.py again, which just starts ComfyUI back up. Notice that it installs the nodes we just added to the system, and for clarity, if you look in our custom_nodes folder you'll see that we now have these repos. With everything installed we can restart, and the nodes are no longer giving us any kind of error.

But there is still something missing: we haven't actually placed our motion modules and our motion LoRAs into their proper folders. Motion modules and motion LoRAs have to do with AnimateDiff, and the repo mentions how to use them and where to download them; you have your Google Drive, Hugging Face, and Civitai links and whatnot, but I'm just going to show you very quickly where these items go. In my resources folder I have motion LoRA and motion modules. Inside the custom_nodes folder, enter the AnimateDiff Evolved folder and head to models; that's where the motion modules go. I have two here, and I will link down below where to find them: a new model that was recently made (the creator's name has left me, I'm very sorry, I will make sure to credit them in the description), and the V2 motion module from the original creators of AnimateDiff. The V2 model is a great starting place and has great results; this new and improved model is really good too, but I've only started testing it as of last night, just a few runs. Next we move our motion LoRAs into their respective folder, which is the motion_lora folder inside the AnimateDiff Evolved directory, and paste them in. Now we refresh the ComfyUI page, and you'll notice that when we click these nodes, our models are present.
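To recap where files go, the layout at this point looks roughly like this. Folder names are taken from the repos as of this video, and the file names are only examples, so check the AnimateDiff Evolved README if anything has moved:

    ComfyUI/
      models/
        checkpoints/      <- Stable Diffusion 1.5 checkpoints
        vae/              <- VAE files
      custom_nodes/
        ComfyUI-Manager/
        ComfyUI-AnimateDiff-Evolved/
          models/         <- motion modules (e.g. mm_sd_v15_v2.ckpt)
          motion_lora/    <- motion LoRAs (e.g. v2_lora_PanLeft.ckpt)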
So how does this work, this specific node layout? There's a lot to ComfyUI, and as far as I'm concerned, as a beginner you shouldn't worry about all of it yet. First you should worry about getting comfortable with this node setup, because this is AnimateDiff Evolved, the thing you see people creating these amazing animations with.

First and foremost, we want to load our checkpoint. Loading a checkpoint, as I said, is loading our style, and we'll start off with the HelloYoung model that I supplied. Next is clip skip: usually a good clip skip is -2, which works really well with AnimateDiff (sorry, not ComfyUI, AnimateDiff), but usually the Civitai page for the checkpoint you're downloading will note what the clip skip value should be. Then we have our seed. The seed is what controls our generation, just like in the CLI and just like in AUTOMATIC1111: if your seed is random, it produces a new, different generation every single time you generate, whereas if it's fixed, it stays on the same generation unless you change some kind of parameter. So we've covered the checkpoint, the seed, and the clip skip. Next we have our VAE; we're using the Berrysmix VAE from my download, but you can use any VAE you want. As I said before, the VAE just controls the color composition of your output. Next we have our motion LoRA, or rather our motion module. The motion module controls how AnimateDiff animates our images, how it interprets the next frames.

The next thing we want is our generation size. If you watched my AnimateDiff CLI prompt travel tutorials, you'll remember I was usually generating at a width of 512 and a height of 768, so I'm going to use that size here for clarity.

Now, we have two prompts here, and notice that they are keyframed: we have 0 and we have 15. Zero means that at the very first frame of our animation, this is what ComfyUI is going to attempt to generate: one girl, solo, with cherry blossoms, hanami, pink flowers, white flowers, the spring season, etc. Then at frame 15, it's going to attempt to generate this second prompt. In short, this box is connected to our positive prompt, and then we have a negative prompt box, which is connected to our negative prompt. Again, don't bog yourself down with "how does this work, and why is this where it is?" Just accept that this node network has already been designed to function properly. You aren't thinking about how to change the nodes; you're thinking about how to change the data stored within them.
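As a rough sketch, the keyframed prompt text looks something like the lines below. The exact syntax belongs to the prompt-scheduling node used in this workflow, and the frame-15 prompt is whatever the workflow ships with, so treat this as illustrative only:

    "0": "1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season",
    "15": "(second prompt from the workflow)"

Each line pairs a frame number with a prompt; entries are separated by commas, and the last entry takes no trailing comma.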
Okay, so after we've done our prompts, set our image size, and set our motion module, the next thing to focus on is the generation itself, the actual diffusion part. Notice that it has steps and CFG, and we can change the sampler name, just the same as we were doing with Stable Diffusion in AUTOMATIC1111's web UI, and the same as in the prompt travel CLI repo we were using. It's all the same thing, so don't get bogged down in details; just focus on making art. I'm going to put the sampler on Euler Ancestral, but as you can see there are a lot of options here, so feel free to experiment; it's up to you, honestly. We have our denoise level: since we're using an Empty Latent Image, I'm going to leave it at 1, the max value, but we could decrease it if we were using reference images, and I'll make tutorials on that as well. We have our steps and CFG again; I'm going to lower the CFG because I think 8 is pretty high, so let's make it 6, and I'll put the steps at 25.

The last node to discuss is the one that controls your outputs. You can have it do a frame rate of 8, and you can increase or decrease that if you want; I'm going to leave it at 8. There's also your format: personally I like to generate MP4, but you can generate a GIF, WebP, or WebM, with H.264 or H.265 codecs. And there's Save Image, which just saves an image alongside your video that you can use to load your workflow; you'll see that in a moment. So I'm going to hit Queue Prompt and let it do its thing. If you look at the terminal, you'll notice the ComfyUI terminal shows what's happening and how much time is left, and it even shows a progress bar of sorts. You'll see the generation at the end; just give it time. Very cool: as you can see, our generation is very clean.

Now, to show you how ComfyUI works: if I click Queue Prompt again, nothing happens. Why did nothing happen? Because our seed hasn't changed. ComfyUI is very smart and very aware of the data inside it, and since nothing has changed in our node network, it's not going to generate again; in ComfyUI's eyes nothing changed, so there's no reason to waste performance or resources. But say I change a word in our prompt, or change the CFG value to 5, and click Queue Prompt: notice that it picks up that a change happened in that node and generates from that point forward. It doesn't generate everything over again or go through all the nodes again; it only starts where a change was present. That changed the way the hand moves a little, but notice that we did not change our seed. So let's say we wanted to change the seed value: we could do it manually, or we can tell ComfyUI to randomize the seed after every generation. If I press Queue Prompt now, nothing changes yet, because technically it still used the same seed, but if I click it again, now it's a new seed, and it generates from that random seed. Notice how the seed changed the generation of the image.

The next thing I want to discuss is the length of the animation. Currently we're doing a batch size of 16, but we can generate longer animations. If we wanted a longer animation, we would change the batch size here to, say, 30, and even go as far as including a new prompt. For consistency I'm going to copy and paste, but remember there should be a comma here, because we are separating these prompts: this is one prompt, this is one prompt, this is one prompt, and the last prompt does not need a comma. I'm almost entirely sure it would throw an error otherwise, because it's really just JSON, but I might be wrong. So I'm going to change this to frame 30 and add a prompt in; let's do "solo," and we'll generate again.
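Under the same assumptions as the earlier sketch, the prompt text after this change would look roughly like this, with the batch size set to 30 to match:

    "0": "1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season",
    "15": "(second prompt from the workflow)",
    "30": "solo"

Note the comma after every entry except the last.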
Notice that a green square popped up around the prompt section: it's because ComfyUI was aware that there was a change there (sorry, not a prompt, a change), and then it moves on to our sampler, so it can start generating again.

That was a very quick overview of ComfyUI and its installation. I just want to go over one or two last things. First, where does it actually output your files? If you go to ComfyUI and look inside the output folder (let me set this to thumbnails, one moment), you'll notice we have our files, and we can play them to see what they look like. As I mentioned before, you can also drag a PNG image in, and when you do, it loads that workflow.

The last thing I'd like to discuss is motion LoRAs, and this will be the only moment where you add a node into the scene. To add a node, notice I clicked this dot here and dragged off, and there is an option that says AnimateDiff LoRA Loader. That is the only node you'll be adding to this node network for now; there will be more tutorials to explain how all of this works. For now just get your feet wet, get comfortable using what's there, and don't get bogged down in the details. So what are motion LoRAs? I'm going to go to the original AnimateDiff repo to discuss this. Motion LoRAs give us control over how generations perceive motion, sort of like camera movements. I will note, though, that it's not perfect; that's the best I can say. We're just going to do one quick test: I'm going to select the pan left LoRA so we can see what it does. There's a strength factor here; I'm going to leave it at 1, but you can increase or decrease it at your whim. It's all about trial and error; that's the artistic process. So all we do is Queue Prompt, and it came out damn good in my opinion. Notice that at some moments there can be some artifacting from text that pops in when you're using motion LoRAs; currently I don't know exactly what causes that, so that'll be more of a trial-and-error kind of thing. Actually, I'll do one more test just to see: let's do rolling anticlockwise and see what we get, just out of interest.

So yeah, that was a very quick overview, or not quick, but faster than usual, of getting ComfyUI set up, getting the ComfyUI Manager set up, and getting AnimateDiff Evolved set up with the FizzleDorf nodes and all that. But remember, I'm keeping it simple; I don't want to get too far into the details, because beginners get very confused and intimidated. Again, all you should be doing is messing with the numbers in this node network. Don't stress about the nodes; just mess with the parameters. Break it: see what happens when you put the strength to 10, see what happens when you increase the image size and the batch size, see what happens if you switch your clip settings. Just change things and see what happens; that's the best way to learn. So blessings, as always, thank you guys for the support, and let's keep AnimateDiff evolving, because the better AnimateDiff gets, the better the AI art gets, and the better the tooling gets, the better we make AnimateDiff, and the more we force companies like Runway and Pika to innovate, because essentially we are showing that we are surpassing the technology they are making. So stay blessed.
Info
Channel: c0nsumption
Views: 8,946
Id: SGivydaBj2w
Length: 24min 23sec (1463 seconds)
Published: Sun Oct 08 2023