ControlNet for SDXL 1.0! Master Your Stable Diffusion XL 1.0 Outputs with ComfyUI: A Tutorial

Video Statistics and Information

Captions
Hello friends! Finally we have the ControlNet Canny for SDXL. You can download it from Hugging Face; I will place the link in the video description. On the Hugging Face page, go to "Files and versions" and you will see two different versions. You can download diffusion_pytorch_model.fp16.safetensors, or, if you prefer, the bigger full-precision one, but I don't think there is much difference between them. Please also download config.json and place it next to the model file. Once you have downloaded the files, go to the models folder, then into the controlnet folder, and place diffusion_pytorch_model.fp16.safetensors there, with config.json next to it. About config.json, I am not sure it is really necessary, but I did it anyway. (There is a small download sketch at the end of this part.)

After that we can go to ComfyUI and load the ControlNet. If you have joined my Patreon, you can drag and drop the workflow into ComfyUI and you will see the whole setup. If you are not on my Patreon, please watch the video, because I will explain all the parts and show you how to use and activate the ControlNet in ComfyUI. This is the result: in this sample you can see that I used this image as input; after preprocessing, it extracts the Canny edge lines, and this is the render. And here is my prompt: "a cinematic photo of a contemporary living room, designed in a gold and luxurious style".

So let me show you how to load the ControlNet step by step. For now I am focusing only on loading the ControlNet; I assume you already know how to set up the SDXL workflow. If you don't, please check my YouTube channel, where you can find a lot of useful information, or watch this video until the end, because I will explain everything step by step. Also watch my video about installing ComfyUI and its extensions; after that you will have all the information you need.

Here I have the Efficient Loader (again, if you don't know what the Efficient Loader is, check my YouTube channel), and these are my positive and negative prompts. Here you can see a ControlNet node: Apply ControlNet (Advanced). I suggest using the advanced version. To add it, double click on the canvas, type "controlnet", and pick Apply ControlNet (Advanced). Then connect the conditioning: positive to positive and negative to negative on the input side, and again positive to positive and negative to negative on the output side.

For loading the ControlNet model you have two different nodes: Load ControlNet Model, and Load ControlNet Model (diff). So what is "diff"? If you check this ControlNet, you can figure out that it is a diffusers-type ControlNet model, so there is a small difference between the model types, and maybe it's better to match them. I tested both loaders and didn't find much difference between the results, but maybe it's still better to use Load ControlNet Model (diff). Here is how: once you have the Apply ControlNet (Advanced) node, click on the control_net input circle, release the mouse, and you can choose between the two loaders; for example, pick the diff ControlNet loader. If you are using the diff ControlNet model loader, there is one extra connection to make, which I will show next.
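As promised above, here is a minimal sketch of the download step, using the huggingface_hub library. The repo id diffusers/controlnet-canny-sdxl-1.0 and the ComfyUI install path are assumptions on my part; check them against the link in the video description and your own setup.

from huggingface_hub import hf_hub_download

repo = "diffusers/controlnet-canny-sdxl-1.0"  # assumed repo id -- verify against the description link
target = "ComfyUI/models/controlnet"          # ComfyUI's ControlNet folder (adjust to your install)

# Grab the fp16 weights and config.json and drop them side by side, as described above.
for filename in ("diffusion_pytorch_model.fp16.safetensors", "config.json"):
    path = hf_hub_download(repo_id=repo, filename=filename, local_dir=target)
    print("saved", path)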
Back to the graph: with the diff loader you should also connect the model output from your checkpoint loader to the ControlNet loader's model input. So that's the setup. After that, load the ControlNet file here: diffusion_pytorch_model.fp16.safetensors. You also need a Canny image, which you send into the Apply ControlNet node.

How can you make the Canny image? We have two different ways; let me right click and show you. If you have installed the preprocessors extension for ComfyUI, you will see this menu: go to the line extractors and you will find the Canny Edge Preprocessor. (If you want to know how to install the ControlNet preprocessors, again, please watch that video.) If you don't want to install the ControlNet preprocessors, right click, go to the image nodes, then preprocessors, and there you will find a Canny preprocessor. This is a default node that comes with ComfyUI, so no extension is necessary, but it has a drawback: it runs on the CPU, so it is a little slow. The Canny Edge Preprocessor from the extension works fast. Here I have both of them, and you can see that both of them work.

If your image is very large -- this one is, its dimensions are about 1800 x 1350 -- then after loading the image, send it to an upscale node and change its size. Upscale Image By takes a factor for upscaling or downscaling the image; I used it before sending the image to the preprocessors for extracting the edges. This part is very important, because a very large input image will use all of your VRAM. Ideally, resize the image in Photoshop first, but if you don't have time, do it here.

So we have two nodes, and I loaded both so you can test them: type "canny" to find the Canny Edge Preprocessor, and "canny" again for the default node (be careful, there is a similarly named node that is for something else). Connect the image to one Canny node and the same image to the other, then add a Preview Image node after each one to compare the Canny preprocessors. I think it's best to select all your other nodes and press Ctrl+M to mute them; then, when you press Queue Prompt, only the preprocessors run. This is my image. Next we have to adjust the thresholds: the low threshold and the high threshold should be spread far enough apart to pick up more lines. You can tune these two parameters here; a low setting picks up more details, but they are not always useful. You can see this preprocessor is very fast, but both are usable. In this case I like the result of this node, so let me unmute everything with Ctrl+M again and send its output to the Apply ControlNet node.

In Apply ControlNet (Advanced) you can adjust strength, start_percent and end_percent. I suggest setting the strength to something around 0.6, 0.5 or 0.4, because a strength of 1.0 can destroy your image. For end_percent, I suggest not using 1.0, because then the Canny constraint is applied until the very end of the sampling; if you lower end_percent a little, the result may be better.
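Both Canny nodes implement the same classic edge detector, so for reference here is a hedged OpenCV sketch of the resize-then-extract step. The file names and the 100/200 thresholds are illustrative assumptions; the 0.7 scale factor mirrors the video.

import cv2
import numpy as np
from PIL import Image

# Load and downscale the input so the edge map (and the render) stays VRAM-friendly.
img = Image.open("living_room.jpg")  # hypothetical input file
img = img.resize((int(img.width * 0.7), int(img.height * 0.7)))

# Classic Canny edge detection; the low/high thresholds (100/200) are illustrative --
# move them apart or together to reveal more or fewer lines.
edges = cv2.Canny(np.array(img.convert("L")), threshold1=100, threshold2=200)
Image.fromarray(edges).save("living_room_canny.png")  # feed this into Apply ControlNet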
So now you can press Queue Prompt. Here you can see it processing: it detects all the lines, and it even picks up information about the counter of the kitchen. This is the image, and it's not bad.

Now let me simplify my prompt. For example, this time I want a contemporary living room with a concrete floor combined with red stone. Let's queue the prompt one more time. Try to keep your prompts simple, because if you add many different things to your prompt it can't analyze all of them. You can see that when I ask for the red stone I get it, but I cannot mix the red stone with gold and other things, because it still doesn't understand how to divide my prompt. Still, I got the red stone, and if I increase the strength, the result becomes more similar to my input image. You can see here I have this door, I have the opening, I have the framing around the window, and everything is fine. Then this is the refiner stage, and we have the result.

So that's all for loading the ControlNet. I suggest working this way, but as I told you, it also works with the normal Load ControlNet Model, without the diff. Let me show you the result with that node one more time: you can see everything is the same, very similar; really, I can't see much difference. So both of them work here; the diff loader was just something new for me, so I decided to test it. What is very important is to adjust your strength and end_percent.

Now, if you want to see how to create this whole workflow from the beginning, keep watching; I want to build it exactly from scratch, so let me select all the nodes and delete them. First I add the efficiency nodes: a KSampler (Efficient); I am using the normal sampler, not the advanced one. Then the Efficient Loader: in the loader I load my checkpoint, starting with the SDXL base model, and here are the positive and negative prompt fields. For the positive and negative prompts I want to use an external reference, so right click on the node and click "convert positive to input" and "convert negative to input". Then right click on the canvas, go to utils, and create a Primitive node; make a copy, connect the first one to positive and the second one to negative, and change one color to green and the other to red. Connect the model to model, connect the positive through a reroute, connect the negative through a reroute (again, changing the colors is very helpful), connect the latent to latent, connect the VAE to VAE, and adjust the resolution. For the sampling you can go with the default values. Then click on image and add a Save Image node.

Now I have to load the ControlNet, so I add Apply ControlNet (Advanced) again; it should be connected positive to positive and negative to negative. Then I load my ControlNet file; I can use the normal ControlNet loader, because we figured out the result is not different, and load diffusion_pytorch_model.fp16.safetensors. I also need an image, so I add a Load Image node and load one, for example this image. If I open it in a new tab, its width is about 1800 pixels, so it's a big image. I use Image Scale By: send the image to this node and resize it by a factor of 0.7, and then I can send the image to a preprocessor to extract the Canny edges.
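As an aside before finishing the graph: if you ever want the same strength / start / end behaviour from a script instead of ComfyUI, the diffusers library exposes analogous knobs. A minimal sketch, assuming the diffusers/controlnet-canny-sdxl-1.0 and stabilityai/stable-diffusion-xl-base-1.0 model ids; the 0.6 values follow the video's suggestion, and the mapping to the ComfyUI node parameters is approximate.

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny = load_image("living_room_canny.png")  # edge map from the earlier sketch
image = pipe(
    prompt="a cinematic photo of a contemporary living room, gold and luxurious style",
    image=canny,
    controlnet_conditioning_scale=0.6,  # roughly the node's strength
    control_guidance_start=0.0,         # roughly start_percent
    control_guidance_end=0.6,           # roughly end_percent; < 1.0 frees the sampler late in denoising
).images[0]
image.save("base_render.png")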
Here I type "canny"; this is the default node. If you also have the ControlNet preprocessors extension, you can send the image to both nodes, create a preview from each one, and adjust the thresholds; I think the default values work here. Connect the image to image. Now write your positive and negative prompts; let me copy and paste my text from here to here, negative to negative, and press Queue Prompt.

You can see that first it extracts the Canny edge lines. The first edge image generates on the fly, but the default node is a little slower. It's processing, the ControlNet is loading, and we have the image. Now compare them: it keeps exactly the position of the opening, of this door and this door, and we can see some tiles here which are still visible in the generated image. That's very good, but if you look at the quality of the image, it isn't very interesting, and we can't see much creativity in the interior design. But if you change the end_percent, for example to 0.6, and the strength to 0.6 as well, you can see that the AI can make some decisions about this space: it can now change things like the walls, the floors, the door positions and other elements. It moved some of the doors and other things, but it's interesting. For the resolution, I think the generation should be rectangular, not square. We can see some creativity in designing this wall, it completed the counter, and we can still see the window frame.

Now I want to send the image to the refiner. What should I do? First I make a copy of the KSampler (Efficient) and change the denoise value to about 0.2 to 0.23. I also need another loader: connect the positive and the negative to this loader, and this time load the refiner checkpoint. Connect model to model, positive to positive, negative to negative; this time I am not using the ControlNet between the loader and the sampler. Connect VAE to VAE, and this time the latent from the first sampler should go into this second one. If you press Queue Prompt, after processing with the base model it will send the result to the second sampler for processing with the refiner.

Let's compare the results: the first image comes from the base model and the second image comes from the refiner, and the details, the texture on the walls, the lighting and everything else are much better. So this is the workflow: you start with the Efficient Loader, and that's it. If you have any questions, please ask me. You can join my Patreon to support me and download the workflows, and I can also help you if you need any face-to-face tutorial or conversation; I am available and would be happy to. Hope to see you, bye!
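One last sketch, recapping the refiner stage described above: a second sampler with denoise around 0.2 over the base output is essentially a light img2img pass. In diffusers terms (the refiner checkpoint id is the standard SDXL one; strength 0.2 mirrors the video's denoise setting):

import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

base = load_image("base_render.png")  # output of the base-model pass
refined = refiner(
    prompt="a cinematic photo of a contemporary living room, gold and luxurious style",
    image=base,
    strength=0.2,  # ~ denoise 0.2-0.23 in the second KSampler (Efficient)
).images[0]
refined.save("refined_render.png")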
Info
Channel: Arch_AI_3D
Views: 6,371
Keywords: AI, stable diffusion, AI architecture, ComfyUI, ControlNet, architecture, artificialintelligence, Tutorial, ai news, ai tools, ai revolution, sdxl controlnet, sdxl 1.0, sdxl automatic1111, sdx bass
Id: _f0qrHQs0jk
Length: 27min 1sec (1621 seconds)
Published: Sun Aug 13 2023