How to install and use ComfyUI - Stable Diffusion.

Reddit Comments

I am such a sucker for node based workflows.

1 point · u/fisj · Jul 14 2023
Captions
Hello friends! In this video I'm going to show you how to install ComfyUI and get it running. ComfyUI is an extremely powerful user interface because you have total freedom, total control, to create your own workflows. I'm going to give you some advanced workflows you can work with, show you some basics, and also set you up with an advanced extension that you're going to thank me for. By the way, do you know why chickens are so funny? Because... yeah. This is ComfyUI, and this is what we will be installing today. Let's delve into this and go over the user interface together. All the links are in the description below.

First of all, we're going to go to GitHub and find ComfyUI. If you scroll down you'll see Installing, or you can scroll down even further to the Windows section. Nowadays you don't need to git clone; there's a direct download link. After you download it you'll have a 7-Zip archive, and you can use WinRAR or 7-Zip to extract it; I personally prefer 7-Zip. Extract it wherever you want, or just into the current location. It's a rather big file, so it will take some time.

Go into the folder and you'll find a readme. This is very important: it basically says that if you have an Nvidia GPU you use the Nvidia GPU .bat file, but if you don't, you run on CPU, which will be noticeably slower. There's also some troubleshooting here, like: if you get a red error in the UI, make sure you have a model checkpoint — we're going to download one as well. Updating ComfyUI is covered in the readme too, but you can also just go into the update folder and run the update ComfyUI script.

Now, if you are using Automatic1111 or have another Stable Diffusion installation where you already have models, you can go into the ComfyUI directory, rename the extra_model_paths.yaml.example file to remove the .example, and then you should be able to edit it with Notepad and set the base path to where your Automatic1111 is. If you don't have that installed, you can just skip this. I do have it installed, so I can set it; after I save, all the models I'm using in Automatic1111 will be available in ComfyUI as well.

If you don't have that, or any models at all, I recommend downloading a model from Civitai. Let's take Deliberate here, for example, and download that. Inside your models folder are all the subfolders where you'll be placing your various models. If you have no idea what goes where, just go into checkpoints and put your regular .safetensors or .ckpt files there. I just dropped my Deliberate model in here.

Now you're all set to start ComfyUI. Like we talked about previously, you just run the Nvidia GPU script, and after a few seconds ComfyUI launches and you'll see the default interface. These are nodes. Compared to other user interfaces like Automatic1111, the features are the same, but everything is connected by nodes, and the real power of nodes is that you can take all the features and connect them together almost however you want, add multiple nodes, and create your own workflows.

Here we have our Load Checkpoint (load models) node, where you select the model you want. I have a lot of models, but if you just followed this tutorial and don't have anything else installed, you should have Deliberate here — the model we downloaded from Civitai. We have a default prompt set here — beautiful scenery nature glass bottle landscape purple galaxy bubble — and we have a negative prompt. You can't immediately tell it's the negative prompt, because it just says CLIP Text Encode (Prompt), but nodes can be renamed, so let's rename this one to "negative prompt" so you understand what it's doing. You don't have to do this; it's purely cosmetic.

As you can see on the KSampler, there are connections: a line going from the positive prompt into the "positive" input, and a line from the negative going into "negative". That is how the sampler knows which text is the positive prompt and which is the negative prompt. You can switch these around — detach them, and say you want this one to be the negative instead: just drag it to "negative", and then connect the other to "positive" if you want. You can switch things up almost however you like. Let's put them back into positive and negative. The model output from Load Checkpoint goes straight into the KSampler, and the VAE — you can't quite see it because the line goes behind the nodes, but this red connector goes to the red input on the VAE Decode down here.

If I render this — or Queue Prompt, as it says here — it starts rendering, similar to Automatic1111, and an image comes into Save Image here, which we can open. This is just 512x512, so it's a small image, but you get the point. In the Empty Latent Image node down here you have the width, the height, and the batch size, which is how many images you will create.

In the KSampler you have the seed, which basically determines the base noise before the image is generated. It can be set to fixed, to increment (increasing by one each time), or to decrement (decreasing by one each time). You also have the number of steps your image goes through while rendering, and the CFG scale, which weights how strongly the output follows your positive and negative prompts. Simply put, a lower CFG means the AI listens less to your prompt. A value somewhere between 3 and 7 is usually good; I usually do 4 or 5.
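The wiring just described — checkpoint into KSampler, positive/negative prompts into their inputs, latent into VAE Decode — is exactly what ComfyUI stores when you export a workflow in API format: each node input that comes from another node is a [node_id, output_index] pair. Below is a minimal sketch of such a graph. The node IDs and the deliberate_v2.safetensors filename are placeholders of my choosing, not something from the video; a real graph exported via Save (API Format) would be the authoritative version.

```python
import json

# A minimal ComfyUI API-format graph mirroring the default workflow.
# Inputs that reference another node are [node_id, output_index] pairs;
# CheckpointLoaderSimple outputs MODEL (0), CLIP (1), and VAE (2).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "deliberate_v2.safetensors"}},  # placeholder name
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "beautiful scenery nature glass bottle landscape "
                             "purple galaxy bubble",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "text, watermark", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",  # width/height/batch size live here
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 5.0,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

# Swapping which encoder the sampler's positive/negative inputs point at is
# the JSON equivalent of re-dragging the noodles in the UI.
payload = json.dumps({"prompt": workflow})
```

Posting that payload to the local server (by default at 127.0.0.1:8188, endpoint /prompt) queues the graph just like pressing Queue Prompt; here it is only shown as data to make the node connections explicit.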
The sampler is what actually renders your image. For Stable Diffusion 1.5 and 2.0 models I recommend DPM++ 2M, with the scheduler changed to Karras. But that's the super simple version of ComfyUI and its nodes.

Now, you can add more nodes — there are a lot available. Just right-click, choose Add Node, and there are plenty to pick from. For example, the KSampler holds some of the settings used to render an image; if we add a KSampler (Advanced) instead, you can see we get some more settings, like a start step and an end step, which means we could run multiple KSamplers — one running from step 0 to 15 and another from 15 to 30. This is very popular with SDXL, where you run the SDXL base model for 25 steps and then the SDXL refiner for the last five steps. But again, that's a little more advanced, so that's for another video.

Another fantastic feature in ComfyUI is that you can take an image — this is one I got from a user on my Discord — and drag it straight into ComfyUI. First of all it's going to give us some errors, but that's not the point: what you actually get is the workflow — all the nodes used to create that image, all the settings, everything. And that is really powerful, because you can take other images, find their workflows, learn from them, and continue iterating.

You might be getting errors like I am, and that's because you are missing custom nodes — in this example we're missing some custom SDXL sampler nodes. There's actually something called the ComfyUI-Manager, so we're going to download that, and it's going to help us install custom nodes. We have the zip here, and this is our ComfyUI folder: go into ComfyUI, then into custom_nodes, and basically extract everything — I'm just going to drag and drop it in there. After that, you need to restart ComfyUI.

So we're running it again, and we're still getting the errors, but don't mind that, because now you have the Manager button down here, and it lets you — as it says — install custom nodes, or install missing custom nodes. Let's press the Install Missing Custom Nodes button. It installs the missing SDXL sampler nodes and then says that to apply the installed custom nodes, you should restart ComfyUI, so we do that again. And as you can see, we've now got our new workflow working — we just loaded another user's image. Let's queue the prompt and see if we get a similar result. The green border shows which node is currently active, so it's actually rendering here now. And here we have it: exactly the same image that I got from the user on Discord. Pretty cool, eh?

You get the idea of how powerful this node-based system is: you get a lot of control, a lot of freedom, and a lot of help, since you can just load another user's workflow. Just remember to download that Manager so you can install the required custom nodes. There are also a lot of other nodes you can check out yourself — there's a whole list of available custom nodes, similar to the Extensions tab in Automatic1111. I'd recommend looking at the base UI, trying to see which node does what, watching the green border while you're generating an image, and then finding another image, checking that particular workflow, and seeing how other users create their fantastic images.

If you want to go more in depth, ComfyUI has a lot of examples on its GitHub. For instance, you can see an inpainting node setup right here — an example of how you would set up nodes to inpaint a picture. In the example the image is a cat, and we have first loaded the checkpoint (the model), we have the positive and negative prompts, and the sampler, and we've also added a VAE Encode (for Inpainting) node. That takes the VAE from the checkpoint, while a Load Image node feeds its pixels into the image input and its mask into the mask input — you paint on top of the mask — and then the encode-for-inpaint output runs into the latent image input of the KSampler. So you're basically adding on top of the current workflow — very similar to the default we saw at the start of the video.

Another example is LoRA. We have the usual workflow — the checkpoint (model), the positive and negative prompts, the sampler, and the VAE Decode — and they've added a separate LoRA Loader. Instead of going from the model straight into the KSampler, it goes from the model to the LoRA Loader and then continues on to the KSampler, and the same goes for the CLIP. There are a lot of good examples like this on the ComfyUI GitHub.

So this was a quick guide to get you started with ComfyUI. If you want to learn more, check out the SDXL workflow in this video right over here. As always, have a good one. See ya!
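The SDXL base-plus-refiner split mentioned in the video (25 base steps, then 5 refiner steps) is just arithmetic over a shared step count. The helper below is a hypothetical illustration of mine, not ComfyUI code — it only computes the start/end step pairs you would type into the two KSampler (Advanced) nodes.

```python
def split_steps(total_steps: int, refiner_steps: int):
    """Return (start, end) step ranges for a base + refiner two-stage render.

    The base KSampler (Advanced) runs from step 0 up to the handoff point,
    and the refiner picks up from the handoff point to the final step.
    """
    handoff = total_steps - refiner_steps
    base = (0, handoff)               # e.g. steps 0..25 on the SDXL base model
    refiner = (handoff, total_steps)  # e.g. steps 25..30 on the refiner
    return base, refiner

base, refiner = split_steps(30, 5)
# base == (0, 25), refiner == (25, 30) — matching the SDXL example in the video
```

Because both stages share one total step count, the refiner continues denoising the base model's latent rather than starting over, which is what the start/end step inputs on KSampler (Advanced) make possible.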
Info
Channel: Sebastian Kamph
Views: 71,314
Id: KTPLOqAMR0s
Length: 12min 44sec (764 seconds)
Published: Fri Jul 14 2023