STOP Using Automatic1111 and ComfyUI for Stable Diffusion SDXL. New best alternative SwarmUI! +Colab

Video Statistics and Information

Captions
Hi, everyone! If you're watching this video, you've probably run into the same situation as I did. You wanted to use SDXL, but your GPU couldn't handle it, Automatic1111 didn't work, and the only way to use SDXL was ComfyUI, which I find very ironic, because despite its name, ComfyUI is anything but comfy. I would even say it's the most inconvenient interface out there. And it's not about complexity, it's just inconvenient. If I want to use a LoRA, I have to build something like this: SDXL with refiner, something like that. I don't want to build anything, I just want to flip one switch and have everything work as it should. If you think the same way I do, there's good news for you, because in this video I'll tell you about the best alternative to ComfyUI, with which you can use SDXL or any Stable Diffusion model with just one click. LoRA, one click; embedding, one click; refiner, one click. Some of you have already guessed it: I'm talking about SwarmUI, a user interface under active development by Stability AI. It combines the speed of ComfyUI, since it's based on ComfyUI, with the user-friendliness of Automatic1111. I created two Google Colab notebooks that let you run SwarmUI without a powerful graphics card, without a long installation, for free and very quickly. It sounds too good to be true, but it's true. I'll explain how to use them a bit later, so be sure to watch this video until the end. And now let's take a look at what SwarmUI is and why it's the most user-friendly solution for SDXL. First things first: the installation process. Press Agree, then choose a theme; let it be Dark Dreams, in my opinion the best one. Choose "Just yourself on this PC", then pick a backend. In the case of Google Colab, we should choose ComfyUI. Then comes one of my favorite things in SwarmUI: you can pick a model that SwarmUI will download for you. So let's download the Stable Diffusion XL base model and the Refiner model as well.
And then press "Yes, I'm sure". That's it, here we go: there's our interface. It looks quite similar to Automatic1111. There's a dialog window for the prompt and a negative prompt. For example, let's try "a cat", and we should choose our model in this section; in our case it should be the SDXL base model. Press Generate and let's see the results. There are also core parameters similar to Automatic1111, where you can choose the number of images (it's like batch size and batch count, but a bit more obvious), and then the seed, steps, and CFG scale, almost the same as in any Stable Diffusion interface. Here we go, we've got our cat, and evidently it's SDXL. Under the picture you can see all the parameters, but unlike Automatic1111 there is also information about the generation time; it's specified here. You can upscale it 2x. The next section is resolution. Currently it's 1024x1024, but you can choose a specific aspect ratio and SwarmUI will change the resolution according to the chosen ratio, which is very convenient. For example, let's try 21:9. Here we go, we've got our ultrawide image. This looks very nice. Next one is the init image. The init image is something like img2img in Automatic1111, but it works a bit worse: there aren't many options, and unfortunately there is only one parameter, "Init Image Creativity", which is basically something like denoising strength. There are also refining parameters, which let you turn on the refiner by pressing just one button, which I do like. You can also adjust some specific parameters like the refiner control percentage, choose the type of refining (step swap or post-apply), and refiner upscale. So let's try to generate our image with the refiner model. Alright, we've got our image, and according to the metadata we have a refiner and the refiner method is step swap. Perfect. The next parameter is ControlNet.
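To make the aspect-ratio behaviour concrete, here is a small sketch (my own illustration, not SwarmUI's actual code) of how a UI can turn an aspect-ratio choice into a resolution: keep the pixel count near SDXL's native 1024x1024 and snap each side to a multiple of 64, a common Stable Diffusion constraint.

```python
import math

def resolution_for_aspect(aspect_w, aspect_h,
                          target_pixels=1024 * 1024, multiple=64):
    """Pick a width/height with the given aspect ratio whose pixel
    count stays close to target_pixels, snapped to multiples of 64."""
    ratio = aspect_w / aspect_h
    height = math.sqrt(target_pixels / ratio)
    width = height * ratio
    snap = lambda v: max(multiple, multiple * round(v / multiple))
    return snap(width), snap(height)

# 1:1 stays at SDXL's native resolution; 21:9 becomes ultrawide.
resolution_for_aspect(1, 1)    # (1024, 1024)
resolution_for_aspect(21, 9)   # (1536, 640)
```

Under these assumptions, 21:9 comes out as 1536x640, a standard SDXL ultrawide resolution, without you ever typing a width or height by hand.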
Here you can choose your ControlNet image and ControlNet model, but to be honest I haven't tested it yet, so I think I'll make an additional video about ControlNet; let me know if you want me to make that video. Then there are the scoring and ComfyUI tabs. There is also a model section where you can choose your model. Then image history, which is exceptionally convenient, way more convenient than in Automatic1111: all generated images are saved here, which I do like, because this was a problem with Google Colab when you work with a model in the cloud. Next up is presets. In presets you can, evidently, create or import new presets. For example, let's create a preset called "cat description"; you can specify the prompt and other parameters used to create your image, which is very convenient for mass production. Let's save our preset. You can activate a preset by clicking on this icon, and you can see the active presets here. Next up is models. Currently we have only two models, Stable Diffusion XL and the refiner model, but you can add any model you like, including 1.5 and 2.1. A very nice thing is that you can create folders for your models in your directory, and you can also edit and add metadata for your models, which is good to have in case you have a lot of them. Next up is VAE. Currently we have only one VAE, the built-in VAE for SDXL. Next up is the LoRA tab. Here you can choose your LoRA model. In this case I have only one, Helen Parr. It's my own LoRA model and it looks very, very ugly, but it's good for tests. How do you activate this LoRA? Click it. That's it. Here you can adjust the strength of your LoRA model, and let's try to generate Helen Parr: "a woman". Here we go, we've got our Helen Parr, or something close to Helen Parr, I suppose, but it's awesome; you get the idea. You can add several LoRA models by clicking on them, and they will appear here in this section.
As I said before, you can also edit metadata here and specify all the needed information about your model. Next up is embeddings; it works the same way. Next up is ControlNet; we will discuss it a bit later in the next video, so if you're interested, let me know. And Tools. Unfortunately, there is only one tool, the grid generator, and here we can create our grid, something similar to Automatic1111. For example, let's try a prompt, let it be "a cat", and CFG scales of 10, 15 and 20. Now we generate several images with the same prompt and different CFG scales, and we can compare how the CFG scale affects the final result. Here we go, we've got our three images with different CFG scales: 10, as you can see in the metadata, 15 and 20. You can compare different parameters, different prompts, seeds, steps, with and without the refiner, different models. So this is a really helpful tool and I'm glad that it exists here, but I hope there will be more tools in the near future. I also like the feature that lets you hide the whole dialog window to make the interface cleaner. There is also a tab with the Comfy workflow editor. If you know what ComfyUI is, you can create your own workflow here in a familiar interface, because it's just ComfyUI underneath. The next tab is User. These are the settings for users; not too many settings unfortunately, but at least you can choose the theme and image format. In the server parameters you can choose your backend; in the case of Google Colab there is only one possible option, ComfyUI self-starting, but if you run it locally, you can also choose Automatic1111 without much trouble. Then there's the server configuration tab. Here you can see all the paths for your LoRA models, Stable Diffusion models, VAE models, embeddings, and so on. That's it. So that's all you need to know about SwarmUI. It's not too difficult, and there aren't too many features.
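Conceptually, the grid generator just takes the cross product of every axis you define and runs one generation per combination. A minimal sketch of that expansion (my own illustration, not SwarmUI's code):

```python
from itertools import product

def parameter_grid(axes):
    """Expand {axis_name: [values, ...]} into one settings dict per
    grid cell -- a grid tool renders one image for each cell."""
    names = list(axes)
    return [dict(zip(names, combo))
            for combo in product(*(axes[name] for name in names))]

# The example from the video: one prompt, three CFG scales -> 3 cells.
cells = parameter_grid({"prompt": ["a cat"], "cfgscale": [10, 15, 20]})
```

Adding a second axis, say three seeds, would multiply this to nine images, which is why grids get expensive quickly.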
Unfortunately it's definitely not Automatic1111, but it's a very good option for those who were forced to use ComfyUI and who suffer from it, because ComfyUI, as I said before, is definitely not comfy. So it's really nice to have this option, and a big thank you to Stability AI for creating such a nice user interface. I wish you luck. I decided to create two versions of Google Colab notebooks. The first one works in the cloud and looks very simple: you just need to press one button here, and then you go through the installation process which I mentioned at the beginning of this video. You just need to follow the link in this dashed rectangle. That's it. The next one also works in the cloud, but all data is saved not in a temporary session but in your Google Drive account. It's way more convenient because you don't have to go through the installation every time, and I also added an additional section for downloading LoRA models, base models, VAE models, embeddings, ControlNet, almost everything you need for using SwarmUI. But in this case, I decided to make that version paid, still affordable, just about $5, because thanks to that you can support my existence on this planet and support the channel as well. At the same time, the first version of the notebook is still free, and you can still use SDXL in SwarmUI with it without much trouble. And how do you use this notebook? Once again, very easy. Step one: install the requirements and connect your Google Drive account to this notebook ("Connect to Google Drive"). In step two, you can download the models you need: LoRA models, base models, and so on. But I'll say it again, you only need to download each model once, because next time it will be loaded from your Google Drive account. In this case, I would like to download Realistic Vision V5.1 to my Google Drive account, so I run the base model cell, step 2.2. Then we run step 3 in order to start Stable Diffusion. Just press Start. We've just installed the requirements.
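The "download each model only once" logic from step two can be sketched like this. The folder names and Drive path here are my own assumptions based on typical Stable Diffusion UI layouts, not SwarmUI's guaranteed paths (the real ones are listed in the Server Configuration tab mentioned above):

```python
import os
import urllib.request

# Assumed layout of persistent model folders inside a Drive copy of SwarmUI.
MODEL_DIRS = {
    "base": "Models/Stable-Diffusion",
    "lora": "Models/Lora",
    "vae": "Models/VAE",
    "embedding": "Models/Embeddings",
}

def model_path(drive_root, kind, filename):
    """Where a model of the given kind should live so it persists in Drive."""
    return os.path.join(drive_root, MODEL_DIRS[kind], filename)

def download_once(url, dest):
    """Fetch the file only if it is not already in Drive, so each model
    is downloaded a single time and reused in later sessions."""
    if os.path.exists(dest):
        return dest  # already saved in a previous session
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(url, dest)
    return dest
```

In a Colab cell you would mount Drive first with `google.colab.drive.mount("/content/drive")`, then call something like `download_once(model_url, model_path("/content/drive/MyDrive/SwarmUI", "base", "realisticVision.safetensors"))`, where the URL and filename are placeholders for whichever model you actually want.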
Now we're downloading our model and running Stable Diffusion. Here we go, we've got our URL in that dashed rectangle. Follow the link, and here we go, we're in the SwarmUI web interface. As you can see, there is no need for an installation, because I already installed SwarmUI to my Google Drive account and it just loaded from Google Drive. That's it. I decided to choose Realistic Vision V5.1, so let's try to generate a cat once again using it. We've got our cat and everything works fine. So thanks to that notebook, you can build your own library with your models and your presets and save a lot of time on setup and on collecting all the data, which is very relevant if you're using SwarmUI regularly. I hope this video was helpful for you. Press the like button if you agree with me. Bye-bye!
Info
Channel: marat_ai
Views: 18,537
Keywords: ComfyUI, Automatic1111, Automatic SDXL, SwarmUI, Stable Diffusion, SDXL1.0, How install stable diffusion, How install SDXL, Stable Diffusion on weak pc, Automatic1111 alternatives, SDXL, stable diffusion 1.5, Stable Diffusion tutorial, SwarmUI tutorial, SwarmUI install, SwarmUI Google colab, SwarmUI colab notebook, ComfyUI colab notebook, SDXL colab notebook, marat_ai, stable swarm, stable swarm install, stable swarm ui, comfyui tutorial, Secret Project Stability AI
Id: b8GWehytDKE
Length: 12min 8sec (728 seconds)
Published: Tue Aug 29 2023