Installing ComfyUI, Node-based Stable Diffusion

Captions
Hello there. In this video we go over how to install ComfyUI, how to use it, and why you might want to. You may be familiar with online services like Leonardo AI, or you may run a local installation of Stable Diffusion like this one, the Automatic1111 web UI. Those are very powerful applications that can work very well for us, but ComfyUI offers something different: a node-based working environment. If you have worked with applications like Cinema 4D or DaVinci Resolve, many of them use a node system, because it is easy to follow, nice to configure, and very understandable. It does come with a price, though: the learning curve when you first start working with it. So we will go through the installation step by step, then look at how to work with it and how ComfyUI can actually teach us about Stable Diffusion and how it works. Links to all the resources used in this video are in the description below.

First we need to download the application itself. It is hosted on GitHub, and while you can clone the whole repository, it is now much easier to install: scroll down to the installation section and click the direct download link. That downloads one big file in 7z format. By default Windows may not recognize this extension, so you can install the 7-Zip extractor, or, as I personally prefer, WinRAR, which is free to use and opens most archive formats. After you open the archive, create a directory (I created one on my G: drive, for example) and extract the files there; you will notice it creates some subdirectories.

Next, go ahead and open the file written in very big, screaming letters: the README. It is very important. As you open it you will see fairly simple instructions, so let me go over them. If you run on an NVIDIA GPU, you want to start with run_nvidia_gpu.bat; if you only want to run on the CPU, or you have an AMD video card, use run_cpu.bat instead. So there are two different modes you can run, which is quite nice. Below that, also very important: if you want to update, you can run the updater first, which I actually recommend doing before your first launch so it pulls the latest code, in case the download was older or not fully up to date. In that case, just run the update_comfyui.bat file located in the update folder. If any problems occur during installation, you may also want to run the script that updates ComfyUI and the Python dependencies, but only do that if you actually hit errors.

You also need a checkpoint, the model you are going to use, but don't hurry to download one, because we can reuse what we already have. If you have used Stable Diffusion before, we can simply link to those models, and as you can see further down, the README actually tells you which file to edit for that; we will open and edit it in a moment.

Let's look a little at the directory structure that was created. Inside there is a directory called models, and the one that matters most to us is checkpoints. Notice I have already copied some models directly in here from Stable Diffusion; that is one way to do it. The other way is to reference the models that already exist. For example, here is my Automatic1111 installation on the same drive; if I go into its models folder and then into stable-diffusion, the models are already there. You can always copy them if you prefer, but it is much easier not to use extra space and just point to them.

To do that, go back to the ComfyUI portable folder, go into the ComfyUI subfolder, and open the file called extra_model_paths.yaml.example; the real config file has exactly the same name with ".example" removed. Open it in whatever text editor you like and modify it. base_path is where your Stable Diffusion installation is located. The entries below it are directories relative to that base path; by default checkpoints points to models/Stable-diffusion, which, if you remember, is exactly where the web UI keeps its models. After that come the entries for configs, VAEs, LoRA models, upscalers, and so on. The one difference I found is ControlNet: in many cases, if you use Stable Diffusion with Automatic1111, you installed ControlNet as an extension, and when you do it that way its models do not end up in the models/ControlNet folder; that folder stays empty. Instead they live under extensions, inside the ControlNet extension's own models folder. So copy that path, go back to your yaml file, and enter it starting from extensions; remember, the base_path at the top already leads up to that point, so the entry only needs the part from extensions onward, followed by the ControlNet extension's folder. I also changed the backslashes to forward slashes. When you are done, be sure to save the file as extra_model_paths.yaml, simply removing ".example". This way all the models you already downloaded are available and you don't need to download them again. If you like to experiment, you can of course just copy models into those directories instead; either way works.
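For reference, here is a minimal sketch of what the edited file can look like. The key names follow the extra_model_paths.yaml.example file that ships with ComfyUI, but the base path and the ControlNet extension folder below are my assumptions about a typical Automatic1111 layout, so check them against your own install.

```yaml
# extra_model_paths.yaml -- sketch only, adapt the paths to your own setup
a111:
    base_path: G:/stable-diffusion-webui/      # assumed Automatic1111 location

    checkpoints: models/Stable-diffusion       # relative to base_path
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
    embeddings: embeddings
    # ControlNet installed as an A1111 extension keeps its models here,
    # not in models/ControlNet (extension folder name assumed):
    controlnet: extensions/sd-webui-controlnet/models
```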
When you are done modifying, go back and run whichever launcher you want, CPU or NVIDIA. In my case I have an RTX 3090, so I run the GPU batch file. The first time you run it, it may take a little while before the local server address even pops up; for me it sat at that point for about fifteen minutes, and watching my network activity it was apparently downloading some missing components or updating. After it completes you should see the message with your local address; you can Ctrl-click it to open ComfyUI in a new browser tab, and you can open multiple windows if you need to, which gets handy if you like working with different interfaces side by side.

Let's look at what we have here. This is the basic default workflow, and you can see roughly where things start: the Load Checkpoint node is on the left, it goes through all the components, and the Save Image node is at the end. It looks like the workflow flows from left to right, and in some cases it does, but the layout does not directly represent every connection; for example, between the variational autoencoder (VAE) and the sampler there are multiple connections, not just one. Still, it is broadly representative, and when processing starts you will notice the nodes highlight green as the work moves forward, so it is a very good way to learn.

So let's check what we have. First is the checkpoint loader, where we select which checkpoint, or model, we are going to use. Next are the CLIP text encoders, one for the positive prompt and one for the negative; you can always replace this text if you need to, but for now we leave the defaults. Next is the empty latent image, which holds the width, the height, and also the batch size. Then we have our sampler, and finally the VAE decoder.

So what do the sampler and the variational autoencoder actually do? The sampler takes the latent image and handles the denoising. By the way, I will leave links to these resources down below, so if you are interested you can go and learn in more detail how it all works, but for our purposes I will simplify. Think of the sampler as creating noise, predicting noise, and denoising step by step to produce our image; it is guided by the model we supply and by the prompt, so the result stays close to both. The variational autoencoder then takes what the sampler produced in latent space and decodes it into the actual pixel image we asked for.
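To make that a little more concrete, here is a tiny self-contained toy in Python (NumPy only; everything in it is made up for illustration, it is not Stable Diffusion or ComfyUI code). It starts from random "latent" noise and blends toward a predicted clean latent over several steps, which is the basic shape of what the sampler does before the VAE decodes the result into pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((4, 96, 64))   # toy "noisy latent" (channels, height, width)
prediction = np.zeros_like(latent)          # stand-in for the model's denoised prediction

steps = 20
for i in range(steps):
    t = (i + 1) / steps                          # toy linear denoising schedule
    latent = (1 - t) * latent + t * prediction   # remove a bit of noise each step

# In the real pipeline the VAE decoder would now turn `latent` into pixel data.
print("mean absolute residual noise:", float(np.abs(latent).mean()))
```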
And if we look at our ComfyUI graph, you can see quite easily how this works. The checkpoint connects out and carries its information to the text encoders and on toward the decoder. Let's trace what is going on: from the CLIP model we feed in our positive and negative prompts; the sampler creates that initial digital noise, based on the model we supplied, so the result will be close to that model; the denoising happens next; and the VAE decoder produces the final image that should look like what we asked for. Conceptually there is a small feedback loop in this process, and it is fine that it doesn't show up here, because all we care about in this case is how the system flows, and we get our end result. This is a very simple setup.

The beauty of this is that we can modify it and add additional nodes. For that, just right-click to open the menu, and you will notice there are many different types of nodes. For example, we can go to the latent upscaling nodes and add one that upscales our latent image. Placing a node is easy: just click, drag, and drop it between the existing connections. In this case the image being processed is currently 512 by 768, and we can increase it, for example to a width of 768 and a height of 1024. Then, as you can see, our image will be upscaled to the resolution we specified. It is that easy, and of course there is much more: you can do inpainting, you can use ControlNet, you can even use SDXL models here as well.

Of course, you will want to understand how the other nodes work, and the best place for that is the ComfyUI examples page. If you scroll down you can see examples for image-to-image, inpainting, LoRA, and more. For instance, open the image-to-image example: it gives some explanation and, most importantly, the node layout, so you can see which nodes it uses and how they are connected. Experimenting is very pleasant here, because all you need to do is try connecting these different nodes, see how they work together, and play with them.

Another interesting ability built into ComfyUI is that it stores information, metadata, in the images it creates. That means when I create an image inside ComfyUI, I can just drag and drop it back in (or click Load), and it preloads all the nodes and settings it was created with. The image carries all the information about how it was built, which is excellent if you want to share with your friends, or your enemies, exactly how an image was made. This is great, and I think it should be in almost every image.
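As a quick illustration, here is a small Python sketch that prints that embedded metadata. It assumes you have Pillow installed (ComfyUI itself is not needed for this), and the file name is just a placeholder for one of your own generated images.

```python
from PIL import Image  # pip install pillow

# Placeholder path -- point this at an image ComfyUI saved for you.
img = Image.open("ComfyUI_00001_.png")

# ComfyUI writes the node graph as JSON into the PNG text chunks,
# typically under keys like "prompt" and "workflow".
for key, value in img.info.items():
    print(f"{key}: {str(value)[:120]}...")
```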
In fact, Stable Diffusion does something similar: in the Automatic1111 web UI, under PNG Info (next to image-to-image), you can drop an image and it will show some information. Here, for example, it reads the metadata, and you can see this metadata came from ComfyUI; the whole workflow, all the information, is stored inside the image. The same goes for images made with the web UI itself; they carry some information inside about how they were created.

Okay, with all of that covered, let's create a couple of things. We put in our prompt (I will just leave the defaults, and the same for the negative), maybe raise the width and height a little, and add the small upscaler as we did before. Then all you need to do is click Queue Prompt, and you will notice the nodes turn green one after another as it works. And it's done: we have our image, we can open the image preview, and it is at the 768 by 1024 resolution we specified.

In the end, I want to say I like how this system works. I like the node approach, because I am used to working with nodes in other applications, and I think it makes it visually very clear how the system works. It allows very interesting experimentation in ways you cannot easily do elsewhere, where you only get preset checkboxes and drop-down boxes; here you can literally wire up several different samplers, or take your image and run it through the pipeline twice, or build something more complex, which we are definitely going to play around with. For me, the biggest minus right now is the ecosystem of extensions that Automatic1111 currently has: a huge library and big community support. But I am sure that with time ComfyUI will develop a very big community and strong support as well.

Thank you for watching this video, please subscribe to the channel, your support is greatly appreciated, and have a great day.
Info
Channel: Vladimir Chopine [GeekatPlay]
Views: 3,509
Keywords: Geekatplay Studio, Vladimir Chopine, Digital art, AI art, MidJourney, Stable Diffusion, Dreambooth, Dall-e, Free resources, Free learning, Digital art for begginers, Free tutorials, artificial intelligence, stable diffusion controlnet, stable diffusion inpainting, ai artist, ai artificial intelligence, local comfyui, guide comfyui, nodes comfyui, stable diffusion tutorial install, comfyui image to image, comfy ui vs automatic1111
Id: bu3c2opsVmI
Length: 16min 29sec (989 seconds)
Published: Fri Jul 21 2023