Stable Diffusion Automatic 1111 v1.6 (SDXL Refiner, ControlNet)

Video Statistics and Information

Captions
Hi, I am Seth, and welcome to the channel. This image was generated using SDXL and passed through the refiner. Using a new tool called Revision, I was able to take this image and this one to make this. I also managed to color this black-and-white image accurately. Well, it's not a hundred percent accurate, but it's pretty close, and I did all of this using my own process and Automatic1111. Let me show you how.

Automatic1111 had a significant update in version 1.6. With this version we get support for the SDXL refiner, new ControlNet models, and new SDXL features like Revision and Reference, which do not require any ControlNet models. In this video I will quickly cover how to get started with version 1.6, then focus on showing you the Revision function and some ControlNet models like Depth, Recolor, and Canny. Skip this part of the tutorial if you already have version 1.6, but watch the Revision and Recolor sections, as I will show you my own process and methodology to achieve the desired results.

If you have followed my previous video tutorial on installing Automatic1111, then you should already have version 1.6. If you don't, you can update it by going into your Stable Diffusion folder, right-clicking, and opening a terminal. Once the terminal is open, just type git pull and press Enter. Close the terminal and launch Automatic1111. If you don't want to go through the manual process, or if you cannot install Python and Git, then just go to the Automatic1111 GitHub page, scroll down, and you should see a v1.0.0-pre release link. Click that and download the sd.webui.zip source file. This is only for NVIDIA GPUs. After downloading, extract the folder contents, then click on update.bat and let it run. After it finishes the installation, click on run.bat. All links will be in the description.

You need to update ControlNet as well. Go to Extensions and click on Check for updates. If you see new commits on Control
Net, click Apply and restart UI. This will update ControlNet. If you have the LyCORIS extension installed, you should disable and then delete the extension manually: in version 1.6 you no longer need an extension to use LyCORIS, you can just put LyCORIS models in the LoRA folder.

Before you start with ControlNet on SDXL: some of the models are very heavy on the system. If you have 8 GB of VRAM, then right-click and edit the webui-user.bat file and add the following. If you are on a 6 GB GPU, then edit and change the following to low VRAM. Save and close. All the SDXL ControlNet models are available via Hugging Face. If you are downloading the diffusers models, which are official from Stability AI, download the full .safetensors file. For the other models, choose the rank-256 LoRA .safetensors over the rank-128 ones for best results.

Now go to Settings and you will see a new option called Stable Diffusion XL. Click on it. A higher aesthetic value here makes the AI more opinionated, meaning it does not follow the exact prompt; hence the positive value is set higher than the negative value. I have not played around with these values, so I suggest leaving them as they are for now.

Let's try out the SDXL refiner. I am putting in a simple prompt. When using an SDXL checkpoint, always keep the resolution at 1024 by 1024, otherwise you might see weird results. Now I select the refiner and generate the image for comparison. Here, the "Switch at" value means the step at which you want the model to change. Say the sampling steps are at 100.
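The hand-off arithmetic can be sketched as a tiny helper. This is only an illustration of the ratio described in the video; the function and parameter names are local to this sketch, not Automatic1111 API identifiers:

```python
def refiner_split(total_steps: int, switch_at: float) -> tuple[int, int]:
    """Return (base_steps, refiner_steps) for a given 'Switch at' ratio."""
    base_steps = round(total_steps * switch_at)
    return base_steps, total_steps - base_steps

# With 100 sampling steps:
print(refiner_split(100, 0.8))  # base model for 80 steps, refiner for the last 20
print(refiner_split(100, 1.0))  # never switches: the refiner is not used
print(refiner_split(100, 0.5))  # hands off to the refiner at step 50
```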
A 0.8 value means it will use the base model for 80 steps and switch to the refiner model for the last 20 steps. A value of 1 means it won't switch at all, and a value of 0.5 would mean it switches at 50 steps. These types of distortions are prevalent in SDXL. To fix them, click on Hires. fix and expand it. Use an upscaler if you want to upscale the image as well; I am using the 4x foolhardy Remacri upscaler here. Let's compare them.

Now let's check out ControlNet. I am just adjusting the settings. For the first example I am showing the Depth model. Here you still have to tick Enable to enable ControlNet. We got a cute dog from the prompt. Let's try and change the dog to a cat. Nice.

ControlNet has a new Revision function that does not require any models. This allows you to take subjects from a source image and generate a completely new image based on the prompt. It sounds simple, but it's not; I had to play around a lot to get the desired results. I am selecting this image because I want the mountain as the main subject, and in ControlNet unit 1 the second image, whose main subject would be a dog. Here, the revision_clipvision preprocessor will basically combine the two images while giving weight to your prompt; revision_ignore_prompt, however, will completely ignore anything in the prompt. So what it has done here is give too much weight to the second image and somehow blended it into a hybrid wolf rather than an actual dog. Also, there is no mountain anywhere. By the way, I left the prompt empty. What I will do here is reduce the control weight of the second image to 0.6 and see what happens.

Now it is more like a cat. Let me explain what is happening here. The AI understands that there is some animal in the image, and since the prompt is empty, it is filling in the dog with an animal, any animal for that matter. If I reduce the control weight further, I might get a completely different animal. Still,
there is no mountain, so I should reduce the control weight further. Reducing the control weight worked: it gave me the mountain, but I have an eagle instead of a dog. I am putting in a simple prompt: Mount Fuji, dog, and field. Now it's correct and exactly what I wanted. Just to show you that the ControlNet Revision is working, I won't change the prompt but will disable ControlNet and regenerate the image. You can see the mountain in the back, but the focus is again more on the dog.

Recolor is an impressive tool, but it took me hours to get it right. What I will do here is show you two examples. Both images have intricate details, and using my method you can get the recoloring accuracy spot on. I have two black-and-white images; we'll select the bird one first. I will keep the settings as they are and put in a prompt: an ultra-realistic photo of a bluebird. For Recolor you need a prompt. The prompt can be specific or generic; what I mean is you can just put in a generic prompt like "a color photograph, highly detailed". I found it best to use specific prompts rather than generic ones.

This is not what I expected. To fix this, I will change the control mode from "Balanced" to "My prompt is more important". This is much better. What I just did was basically tell the AI to focus on my prompt rather than the ControlNet image. The AI generates a near replication of a bluebird and tries to overlay its colors over the base image. Now this is better, but most of the feathers towards the right side of the image are not colored. This is because the AI does not have the exact intricate details, since it gives my prompt more importance than the ControlNet image.

To get more coloring accuracy, I will use the same image twice in two separate ControlNet units instead of one. I will select Canny first, tick Pixel Perfect, and change the control mode back to "Balanced". Canny will help define the feathers of the bird. Here I will choose Recolor. Don't tick Pixel
Perfect, as it takes the whole image. Also, I want the AI to start recoloring from 30% of the total steps, so the starting control step value will be 0.3, and the control mode should give more importance to my prompt. You can see how this method is way more accurate. This method will also work with a human and map hair details correctly. The prompt here is "portrait of female, close-up, candid street photo, high quality, detailed". The coloring accuracy is excellent. I am sure you can get better results using this method with more tweaking.

I hope this video tutorial was helpful. To conclude, Automatic1111 is still rocking, and as I said in the previous ComfyUI tutorial, I like ComfyUI, but I would not replace Automatic1111. To be honest, I use both, and in my intensive testing I can produce the same results in both workflows after the version 1.6 update. I will continue to upload tutorials on both user interfaces. Your likes and subs are extremely helpful for the growth of the channel; they also keep me motivated to roll out tutorials on AI. Thank you for all the support given so far. Until next time!
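As an aside for readers who script rather than click: the two-unit Canny + Recolor setup described in the transcript is driven entirely through the WebUI's interface, but Automatic1111 also exposes an HTTP API that the ControlNet extension hooks into via an "alwayson_scripts" block. The sketch below only builds such a request payload; the field names (module, model, pixel_perfect, guidance_start, control_mode), the "recolor_luminance" preprocessor name, and the placeholder model filenames are assumptions to verify against your installed sd-webui-controlnet version, not values confirmed by the video:

```python
import json

def recolor_payload(prompt: str, image_b64: str, steps: int = 30) -> dict:
    """Build a txt2img payload with two ControlNet units (Canny + Recolor).

    Field names follow the sd-webui-controlnet API as commonly documented;
    check them against your installed extension, and replace the placeholder
    model names with the SDXL ControlNet files you actually downloaded.
    """
    canny_unit = {
        "input_image": image_b64,
        "module": "canny",
        "model": "<your canny sdxl model>",   # placeholder, not a real filename
        "pixel_perfect": True,                # as in the video's first unit
        "control_mode": "Balanced",
    }
    recolor_unit = {
        "input_image": image_b64,             # same source image, second unit
        "module": "recolor_luminance",        # assumed preprocessor name
        "model": "<your recolor sdxl model>", # placeholder
        "pixel_perfect": False,               # video: takes the whole image
        "guidance_start": 0.3,                # start recoloring at 30% of steps
        "control_mode": "My prompt is more important",
    }
    return {
        "prompt": prompt,
        "steps": steps,
        "width": 1024,                        # SDXL-native resolution
        "height": 1024,
        "alwayson_scripts": {"controlnet": {"args": [canny_unit, recolor_unit]}},
    }

payload = recolor_payload("an ultra realistic photo of a bluebird", "<base64 image>")
print(json.dumps(payload, indent=2))  # POST this to /sdapi/v1/txt2img
```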
Info
Channel: ControlAltAI
Views: 5,591
Keywords: stable diffusion, stable diffusion tutorial, automatic1111, automatic 1111, a1111 1.6, a1111, sdxl, a1111 16, a1111 install, controlnet, sdxl automatic1111, sdxl refiner, stable diffusion controlnet, stable diffusion xl, stable diffusion webui, a1111 refiner, controlnet stable diffusion, sdxl stable diffusion, sdxl 1.0, sdxl controlnet, sdxl controlnet automatic1111, sdxl controlnet depth, sdxl controlnet recolor, sdxl controlnet revision, sdxl controlnet canny
Id: wPi1Lbz2Q0E
Length: 17min 37sec (1057 seconds)
Published: Fri Sep 08 2023