SDXL 1.0 is finally out. You can use it to generate all of these amazing images and more. It can do various styles, types, subjects, backgrounds and so much more, all for free. Here are some more realistic examples, some more anime-style examples, and some random stuff that I found interesting.

So how do you use it? I'll show you three methods. The first is running it locally using Automatic1111, the most popular UI for Stable Diffusion. I'll also show you how to run it locally with ComfyUI, which has had the best support for SDXL since launch and also natively supports the SDXL refiner. And finally, if you aren't able to run it locally for whatever reason, I'll show you how to use it online for free.

To run it locally, you first need to grab the SDXL 1.0 base model, which should be good enough for most people. Optionally, you can also download the refiner model, but this is not required for Auto1111. To do so, just go to the Hugging Face repository by Stability AI that I'll link in the comments and click the download button on the .safetensors files of the base model as well as the refiner model.

To use it in Automatic1111, first go to the official GitHub repository, with over 90,000 stars, that I'll link below. Unlike my previous tutorial, you don't need to install Python or Git this time, since we'll be using the pre-built release. Scroll down to "Installing and Running" and, under the installation instructions for Windows 10/11 with NVIDIA GPUs, click the v1.0.0-pre hyperlink. Now click on sd.webui.zip to download the compressed file. After the download has finished, extract it using whatever tool you'd like.

Now copy the SDXL base model you downloaded earlier and paste it into sd.webui, then webui/models, and finally the Stable-diffusion folder. Next, run the update.bat script by double-clicking it. This just updates the code to the latest version, which supports SDXL. Similarly, double-click run.bat. This step might take some time, as it will download and install all the required dependencies, such as PyTorch. After it has finished installing, it will start the server on localhost:7860. You can Ctrl+Click the link in the terminal to open it in your default browser. You can also close everything; from now on, double-clicking run.bat will start the web server right away, since everything is already installed.

Let's try generating an image by entering a positive prompt, "God of flowers, fantasy setting", and a negative prompt, "worst quality". It generates a cool-looking artwork, but that's not exactly what we are looking for. Since SDXL was primarily trained on 1024x1024 images, it generates much better results if you just change the width and height to that. This does double the time taken to generate, but now the images actually match our prompt. You don't have to stick to 1024x1024 either; for example, if you want a landscape image, you can use 1366x768 or any of the other resolutions that are also natively supported. Again, the time taken to generate will probably increase, but it does a pretty good job.

Now let's try a photorealistic prompt as well. Honestly, much better face and details than 0.9. Let's do some landscape generation too, and we get super interesting results. I've only ever generated square images before, so this was nice.
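If you'd rather script generations than click through the UI, Automatic1111 also exposes an HTTP API when you launch it with the --api flag (for example by adding it to COMMANDLINE_ARGS in webui-user.bat). The sketch below is not something shown in the video; the endpoint and field names are assumptions based on that API, so treat it as a starting point rather than the canonical method.

```python
# Minimal sketch: generate one SDXL image through Automatic1111's HTTP API.
# Assumes the web UI is running locally with the --api flag enabled.
import base64
import requests

payload = {
    "prompt": "God of flowers, fantasy setting",
    "negative_prompt": "worst quality",
    "width": 1024,   # SDXL was trained mostly on 1024x1024 images
    "height": 1024,
    "steps": 20,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns generated images as base64-encoded PNG strings.
image_b64 = resp.json()["images"][0]
with open("sdxl_output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```

The same payload can be tweaked for landscape generations by changing width and height to one of the other supported resolutions.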
Now, if Auto1111 doesn't work, or if it's too slow, or if you want to use the refiner model as well, ComfyUI might be a better choice for you. All you need to do is go to the official GitHub repo of ComfyUI that I'll link below and click on "Installing ComfyUI", or just scroll down and click the direct download link. Alternatively, you can go to the releases page, where the top download link gives you the latest stable build and the bottom link gives you the latest unstable build. We'll just use the latest stable build for now.

After downloading, extract the 7-Zip file using your favorite utility. Then go into the ComfyUI folder, then the models folder, then the checkpoints folder, and paste the SDXL 1.0 base and refiner models that you downloaded earlier. Now all you need to do is double-click the run_nvidia_gpu.bat file if you have an NVIDIA GPU, or the run_cpu.bat file if you don't. This will launch ComfyUI in a new browser window on localhost:8188. If you pasted the model into the correct folder, it will automatically be loaded here. If you have other models as well, you can use the dropdown to select the SDXL base model and reload.

Now let's enter the same positive prompt and a similar negative prompt. Let's change the sampler to eulerA and set the width and height of the empty latent image to 1024x1024. Let's do a landscape generation as well. Wow. This is the first time I've gotten this kind of color and design.

Now let's add the refiner model. We could add nodes and create a new workflow, but for now, click on Load and load this refiner.json file, which was made for SDXL 0.9 but should work fine with 1.0 as well. This is courtesy of camenduru. Make sure to load the actual refiner model in the refiner's Load Checkpoint node and the base model in the base's Load Checkpoint node.

After loading the refiner, let's generate and compare. It might take slightly longer since we're using a different sampler, but feel free to experiment. It first generates a base image and then feeds that image, along with the prompt, back into the refiner. As you can see, the refiner model added a lot more detail to the face and the background, plus small touches to the variety and kind of flowers, textures, shades and lighting. Let's generate one more image, this time a landscape one. Interesting results: a slightly more detailed face and hands (at least one of them), the crown changed from gold to plants, and some minor details and touches here and there. Again, let's try the same photorealistic prompt that we used in Auto1111 earlier, and another one with a landscape resolution.

Okay, if neither of these was able to run due to hardware limitations or software errors, for now you can try it on clipdrop.co, which is a product by Stability AI, the company behind SDXL. There is a limit (400) to the number of free generations, but this should give you an idea of its capabilities. I'll make a video on how to run it on Colab properly, since there are some resources available, but they aren't that fast or optimized for SDXL yet.

Do subscribe for more stuff with SDXL, including LoRA training, DreamBooth, ControlNet, running on low-VRAM devices, etc. I'll see you soon, hopefully after I've gotten over this cold. Bye.
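For anyone who prefers scripting the base-to-refiner handoff shown in the ComfyUI workflow above, here is a minimal sketch using Hugging Face diffusers instead of a UI. This is not what the video uses; the model IDs, the 0.8 handoff point, and the fp16 settings are assumptions based on the documented diffusers SDXL usage.

```python
# Minimal sketch: SDXL base generates latents, the refiner finishes them.
# Assumes a CUDA GPU and the Hugging Face model IDs below.
import torch
from diffusers import DiffusionPipeline

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "God of flowers, fantasy setting"
negative = "worst quality"

# The base model handles roughly the first 80% of the denoising steps
# and hands its latents to the refiner, which finishes the remaining 20%.
latents = base(
    prompt=prompt, negative_prompt=negative,
    width=1024, height=1024,
    denoising_end=0.8, output_type="latent",
).images

image = refiner(
    prompt=prompt, negative_prompt=negative,
    denoising_start=0.8, image=latents,
).images[0]

image.save("sdxl_refined.png")
```

Like the refiner.json workflow in ComfyUI, this runs the prompt through the base model first and then refines that result, so you should see the same kind of extra detail in faces, textures and lighting.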