Introduction to ComfyUI for Architecture | The Node Based Alternative to Automatic1111

Captions
Hello my friends, welcome to another tutorial. My name is Matt, and today we'll cover the installation of ComfyUI, the node-based alternative to Stable Diffusion Automatic1111. We'll start by going to the ComfyUI page on GitHub — you'll find the link below; I'm posting this on YouTube so you'll see the links. Scroll down and you'll see the "Installing on Windows" section with a direct download link; if you're on another operating system, follow the instructions for that. It's just a zip file that you download and extract — in my case I put it directly on my C drive.

The first thing you want to do is configure the paths. Assuming you're coming here from Automatic1111, you already have your checkpoints and you don't want to copy them over. Go into the ComfyUI directory and you'll see the extra_model_paths.yaml file — it also has the extension .example, which we have to delete (yes, we do want it to change). From here all you do is put in the base path to the drive where you keep your other Stable Diffusion install — in my case it's just my SD 1.5 folder — and the checkpoint path will follow from that. The only thing that won't work automatically is ControlNet; you should set that path manually, because Automatic1111 typically stores the ControlNet models in the extensions/sd-webui-controlnet/models directory. Now we can save that, remembering that the .example extension has been removed.
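A minimal sketch of what the edited file can end up looking like — the keys come from the .example template that ships with ComfyUI, the base_path here is only a placeholder for your own Automatic1111 folder, and the controlnet line is the one I point at the extension's models folder by hand:

```yaml
# extra_model_paths.yaml  (renamed from extra_model_paths.yaml.example)
a111:
    base_path: C:/stable-diffusion-webui/        # placeholder: your Automatic1111 install
    checkpoints: models/Stable-diffusion         # checkpoints follow from the base path
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    # ControlNet won't resolve automatically; point it at the extension's models folder:
    controlnet: extensions/sd-webui-controlnet/models
```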
Now we can load ComfyUI for the first time using the GPU .bat file — it loads pretty quickly. The first time you open it, it shows the default layout to get you started. You'll want to check that your checkpoints are loading correctly from the directory you just configured; in my case I have a lot of them that I'm constantly testing and never want to throw away. For Stable Diffusion 1.5 I'm currently enjoying epiCPhotoGasm and Natural Sin — these are good checkpoints.

Of course, this is an architecture channel, so we're going to do "modern house in a forest with fog". I really want it to be foggy, so I put fog in brackets — (fog) — followed by "photo, detail, realism", and to make sure we're working in photos I type "cartoon, fake, rendering, 3D, painting" into the negative prompt. These are the positive and negative prompt nodes, as you can see here; the easiest way to remember which is which is to color the positive one green — positive because it goes into the sampler's positive input — and the negative one red.

Everything happens in the sampler; this is where the magic happens. The latent image is, in this case, a blank image with our width and height set to 1024 by 768. In the sampling settings we have the seed, which is displayed, and each time we generate we want it randomized. For steps I typically like about 30 to 40. The sampler name is often confusing for people, but we'll just stick to DPM++ for photorealism — any of these are good options — and I've been testing the GPU variant, so we'll just use that one. The two schedulers to use with DPM++ are Karras and exponential; I've found exponential is good if you have a lot of smooth surfaces in your scene, so for now we'll stick to Karras. Denoise won't matter here because we're creating from an empty latent image.

To cover this briefly: we have our checkpoint loaded, which drives everything — the CLIP output is the text encoder that turns prompts into something the model can use to identify objects, and the VAE is the encoder that converts a pixel bitmap into a latent image for the denoising process. That latent image then has to be decoded back into an image, which is why the VAE Decode node is here. Now we can hit Ctrl+Enter to generate our first image, which gets saved into the ComfyUI output directory. And here we are — the fog is definitely happening because we put it in brackets, so that's good.

If we want to generate image-to-image, we delete the empty latent and replace it with a loaded image. There are three ways to bring up a new node: the first is to right-click, choose Add Node, and pick from the menu — here we'd do image > Load Image; the second is to double-click on an empty spot on the canvas to get a search — I know I want "load", so I type that in; the third is to drag a connection out from a node and pick from the suggestions it gives you. In this case we want a Load Image node. However, you can't feed the bitmap in directly — it has to be encoded first, so the connection goes into a VAE Encode node, which is the inverse of the decode. Unlike starting from an empty latent, when we're loading a bitmap it has to be encoded into a latent, and you can see the node has two connectors: one for the VAE, which comes from our checkpoint, and one for the image. Now we only need one of these, so I'll load a file, and what we'll do is try to convert this image into something foggy. You can see I've reduced it — this is a raw 3K render from 3ds Max and FStorm that I brought into Photoshop and reduced to 768 pixels for testing, just for speed.

From here everything should be working, but the first thing you'll notice is that the saved result looks nothing like our original. That's because of the denoise: right now it's at 1.0 — call it 100% — so the output is 100% generated from our prompts, both positive and negative. If we want a combination of the two, we have to bring the denoise down to a value where it mixes the prompts with our base image. At 0.5 we get some sort of combination — it certainly has the fog, but of course it doesn't have our architecture, which is important — so we can lower it further, Ctrl+Enter to generate, and that's pretty close, and definitely foggier.

Next I'll show you how to install ControlNet. We want to install a really cool add-on to ComfyUI called ComfyUI Manager — you can simply search for it, or look below for the link. It requires git (there are some other methods, but this is definitely the easiest one), so you should have Git for Windows installed; download it and it lets you run command prompts that download and install software directly. The instructions say to go into your custom_nodes directory, so navigate to ComfyUI's custom_nodes folder. Here's a quick little trick: click into the path bar in Explorer, type cmd, and press Enter — that opens a command prompt right in the directory you need to be in. Then, as the instructions say, copy the git clone command with the repository address and press Enter.
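For reference, the whole install is one command run from inside custom_nodes — the folder path below is just a placeholder for wherever you extracted ComfyUI, and the address is the ComfyUI-Manager repository (ltdrdata/ComfyUI-Manager) linked from its GitHub page:

```
:: run from a command prompt opened in ComfyUI's custom_nodes folder
cd C:\ComfyUI_windows_portable\ComfyUI\custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
```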
Now you have it installed — it's that easy. We can close the command window, but we do have to restart the ComfyUI backend, so I'm going to close it and run it again. What's really cool about ComfyUI is that every step is saved automatically, so when we close it and reboot it, nothing is lost — it comes back exactly where we left off. (You can see it also loaded a second browser window here, which I'll close.) We now have a Manager button down here, which can install everything from additional models to custom nodes, and it also lets us quickly check for updates.

Right now we want ControlNet installed, so we go to Install Custom Nodes. These are ranked by creation date, and the first one is of course ComfyUI Manager itself, which is a custom node, as it says. Here we want to install the ControlNet features: we need the preprocessors — the ComfyUI ControlNet preprocessor custom nodes — and we also want the Advanced ControlNet one, so we install all three of those and then have to restart. No big deal: close the window, close the backend, and run it again. It takes some time, but again, because everything was saved, it pops back up right where we left off.

Now when we go looking for our ControlNet features, they should be loaded. There are a few things we need: a Load ControlNet Model node, an Apply ControlNet (Advanced) node, and our preprocessors — normal, depth, and lines — which we can also add the other way, by dragging out from a node and picking from the ControlNet preprocessors. We'll put the lines one up here; those are the three things we need.

When you look at this for the first time, you might wonder how you're going to connect it all, but you get a couple of clues from the fact that everything is nicely color-coded. Our CLIP outputs and prompts go into the positive and negative inputs, and we re-route them through the ControlNet node: positive to positive, negative to negative. Then we have to connect the ControlNet model itself. Which ControlNet are we using? We're going to do depth, so we'll be loading a depth model. These are the ones it comes with, but I didn't add them to the path configuration, so here's a cool feature for automatically bringing in missing models: go into the Manager's Install Models, search for "depth", and pick the ControlNet v1.1 depth model for Stable Diffusion 1.5 — it has to match the version of our checkpoint, and this is a 1.5 checkpoint. I hit refresh, close the dialog, and now the model is there — it downloads automatically to the right directory, which is super convenient. We load that, and the ControlNet model output goes into the Apply ControlNet node.

The last thing missing is the image. We can't connect our bitmap image directly to it; we have to run it through the preprocessor first — it doesn't go through the VAE Encode, and we know that because this input is blue (an image, not a latent). So we connect the preprocessor's output here and load our image into the Load Image node. In Automatic1111 we were used to seeing a preview of the preprocessed image, so let's generate one: drag the preprocessor's output out and click Preview Image, so we can make sure it's working. I believe everything is here — though you won't really notice that ControlNet is working unless you increase the denoise.
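As a rough map — not literal code, just a sketch of the connections described above, using the node names as they appear in my graph — the depth ControlNet chain ends up wired roughly like this:

```
Load Image ─► depth preprocessor ─► Apply ControlNet (Advanced) [image]
                                 └► Preview Image                       (optional check)
Load ControlNet Model (SD 1.5 v1.1 depth) ─► Apply ControlNet (Advanced) [control_net]
CLIP Text Encode (positive, green) ─► Apply ControlNet (Advanced) [positive]
CLIP Text Encode (negative, red)  ─► Apply ControlNet (Advanced) [negative]
Apply ControlNet (Advanced) ─► KSampler [positive / negative]
Load Image ─► VAE Encode ─► KSampler [latent_image]                     (image-to-image path)
```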
Remember, a denoise closer to 100% — especially values above 0.65 — gives a more heavily altered image that relies on the prompts. Now Ctrl+Enter, and it's generating an image. I always find that the first time you run after installing new nodes it takes a really long time, even with a small image to process; after that it should go faster. Okay, it worked: without ControlNet this would have been something completely different, but it still maintains a lot of the composition of the source image. We could change the denoise, of course, but if we keep it at 0.7 and bypass the ControlNet, you'll see the result becomes something completely different — and that's how you know it's working. If we do another one, you'll see it's just something completely random.

I think that's it. The one thing you're probably asking is how to load more ControlNets. For that you use a ControlNet stack — this is it here. If you want to do multiples you can skip all of these separate nodes; you still have to use the preprocessors, but you select from all the models you have downloaded, turn them on and off, and set their strength values here — it's basically all of this stacked together. You load a preprocessor to match each ControlNet model, feed the stack out into an apply node, bring your prompts in there, and from there it goes out into your positive and negative conditioning. One thing that's important to know is that you can also take already-processed images and load them directly, and it'll bring up all the nodes.

All right, that's it — I think that covered it. I'm going to do another video on Fooocus, which I've been playing around with today; love it, really cool, a good alternative to Automatic1111 and very fast. I think that covers it. Thanks for watching — I hope you found it useful and that I covered everything you need to get started. You can like and subscribe if you want, I don't care, and if you like the way I explain things you can check out my website: I sell five hours of tutorials there, and I'm going to be putting them on a more robust online teaching system for learning how to incorporate AI into your everyday architecture workflow. You can check that out in the link below. Thanks for watching, and have a good day.
Info
Channel: Matthew Hallett
Views: 2,298
Id: 6VIuXb-be0o
Length: 16min 23sec (983 seconds)
Published: Tue Oct 24 2023