Greetings everyone. In this tutorial, I will show you how to install the Kohya GUI trainer from scratch, how to do SDXL (Stable Diffusion X-Large) Kohya LoRA training from scratch, and how to generate amazing quality images after
doing LoRA training, as you are seeing right now. Here are two showcases for you. On the left is my real image; on the right we see a raw output of Kohya LoRA on SDXL. And here is another one: on the left is my real image, on the right we see an image generated by the Kohya LoRA trained model; this is also a raw output. Moreover, I will show you how to do LoRA checkpoint
comparison to find the best checkpoint after training. I will also show how to do inpainting to improve
generated faces from your trained LoRA model. Moreover, I will show how you can generate
stylized images like this even with a realistic workflow. Don't worry if you don't have a strong GPU; I will show you how to do the training even without one. But this is not all. I will also show you how you can sort your
generated images based on the similarity to your training images. That will make your job much much easier to
find high quality images. There isn't any established workflow for SDXL LoRA training yet. I have done over 15 trainings to find some
optimal parameters. This is the first full tutorial that you will
find for SDXL training. So I have prepared a very detailed GitHub readme file. All of the instructions, links, and commands that we are going to use will be shared there, and I will update this file as necessary. So to be able to use Kohya GUI and follow
this tutorial, you need to have Python and Git installed on your computer. If you don't know how to install them, I have an excellent tutorial; the link is here, and this is the download link. Watch this tutorial video and you will learn how to install and use Python and Git. To be able to use SDXL LoRAs, currently we
need ComfyUI. I have prepared an amazing tutorial for that
as well. Currently it is not published but when you
are watching this video, the video link will be here. Its readme file is already prepared. I also have an amazing tutorial for the Automatic1111
web UI, on how to install and use it. Automatic1111 is also working on an SDXL implementation, so when you are watching this video, it will probably also have SDXL Stable Diffusion X-Large support. So you can also watch that video and learn
it. One more thing that you need to do is clone the Kohya GUI repository. So this is the Kohya GUI repo link. Right click, copy the link address, then enter the folder where you want to clone and install it. I will clone it into my F drive. So I opened a new cmd window here and ran git clone with this URL. It is cloned. This is a local installation on your computer. You see, now we have our cloned folder here.
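For reference, the clone command looks roughly like this (a sketch; F: is just my example drive, and take the actual repo URL from the link in the readme):

```
F:
git clone https://github.com/bmaltais/kohya_ss.git
```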
All you need to do is find the setup.bat file and double click it. It will create a new virtual environment
so it won't affect your other installations, such as Stable Diffusion or other setups. Ignore this message. Wait until you get this screen, then select option 1, hit enter, and select option 2. Now we are using Torch version 2, and using Torch version 2 is really important. Now here, you won't see the progress, but
you will see a message whenever the currently installing package has completed. So you need to wait patiently. This process may take a lot of time depending
on your internet speed. Currently my internet speed is 100 megabits
per second. So you see it is downloading the necessary
files. Just patiently wait. The installation is continuing. You will see the messages as it progresses, like
this. Once all of the requirements have been installed,
you will be asked several questions, as you are seeing right now. Select this machine, select no distributed training, and select no for the next option: do you want to run your training on CPU only? Do you wish to optimize your script with Torch dynamo? This is not available for Windows yet, so select no. Do you want to use DeepSpeed? Select no. And which GPU do you want to use? We will select all. Then select BF16 here if you have an RTX 3000 or 4000 series card; if you have an older card (such as an RTX 2000 or GTX 1000 series) that doesn't support BF16, select FP16. So I will select BF16.
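To recap, the answers given to the accelerate configuration questions are roughly these (just a summary of the choices above, not the exact on-screen wording):

```
compute environment        -> This machine
distributed training       -> No
training on CPU only?      -> No
optimize with torch dynamo -> No (not available on Windows yet)
use DeepSpeed?             -> No
GPU(s) to use              -> all
mixed precision            -> bf16 (fp16 for older cards without BF16 support)
```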
You will also see several other options here, such as installing bitsandbytes-windows or manually configuring accelerate. You don't need to do them. Then our installation has been completed. Just close this cmd window. After the installation has been completed, all you need to do is double click the gui.bat file and it will start Kohya GUI, as you are seeing right now. It will install the necessary requirements if
there are any missing ones and it has started. So we need to open this URL, copy it and paste
it into your browser like this. So this is the Kohya GUI screen. In this tutorial, I will show how to do training
on SDXL. Between SDXL and the SD 1.5 version, only the parameters change; the rest of the usage is the same. So let's begin. First we will begin by selecting our
model. Click here, select custom. We need to select the model. When you click here, it will allow you to select a model. My model is already downloaded here: the SDXL base 0.9 version, like this. This is an SDXL model, so I am selecting this option. If you don't know where or how
to download the SDXL base version, this is the link for it. When you open this link, you need to be logged in to a Hugging Face account, and it will ask you to accept the research agreement. Just fill it in and accept it. I have explained this process in more detail in the ComfyUI for SDXL tutorial, so you should watch it as well. Okay, we have set our base training model. Then we will go to Tools and prepare our training data set. There is a deprecated tab here. This is the tab where we will prepare our training folders. The instance prompt will be ohwx. The class prompt will be man. Then I need to set my training images. I have prepared my training images like this. Each one of them is 1024 by 1024 pixels. This is not a very good training image data
set. Why? I have repeating backgrounds and repeating clothing; therefore, this is not optimal. But for now I will use it, and hopefully I will make a tutorial about preparing a very good training data set. So I will copy the path from here and paste it
here. Now the repeating parameter. This parameter is not very well known. I have asked the developer of Kohya to explain it in more detail; however, I still haven't gotten an answer. From my understanding, in 1 epoch it will train each training image 20 times, each time paired with another single class image. It is really hard to explain this way. This is much simpler and easier with the DreamBooth extension of the Automatic1111 web UI. But for now, I will set this to 20. Then we will set our classification / regularization
images. I have got an amazing classification images directory. It is prepared from Unsplash. How did I prepare these classification images? I shared that in this YouTube tutorial video. I also shared these classification images in this Patreon post. This Patreon post contains classification images previously prepared for you in all of these aspect ratios. So let's copy its path. These classification / regularization images
will significantly improve our realism. Because with these images, we will further
fine-tune the Stable Diffusion X-Large model. However, you don't have to use these images. You can use ComfyUI to generate "photo of man" classification images and use them if you wish. Destination training directory. This is important. This is where all of the training files and LoRA
outputs will be saved. So click this icon, select the folder where
you want your training to be saved. I will make a new folder, Kohya SDXL tutorial
files, and I will open this folder and select it. You see, now it is written here. Now click Prepare training data. What will happen when I click this? First of all, every time you do something, you need to look at the CMD window and see what is happening. You see, it has copied my training images into this folder. It has copied my classification images into
this folder, and everything is set. So it is inside the F drive, inside my new folder. So this is the folder structure. Inside img you will see a folder named like this. This naming is extremely important; without this naming convention, the Kohya script will not work. Okay, this is really important. And also, in the reg folder, we have regularization images like this.
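For reference, the prepared folder structure looks roughly like this with my settings (a sketch based on the instance prompt ohwx, class prompt man, and 20 repeats; your drive and folder names will differ):

```
F:\Kohya SDXL tutorial files\
    img\20_ohwx man\    <- training images: 20 repeats, instance "ohwx", class "man"
    reg\1_man\          <- regularization / classification images
    log\
    model\              <- trained LoRA checkpoints are saved here
```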
You may be wondering: what is this ohwx instance prompt? What is this class prompt? I agree with you; these are like alien terms if you are not familiar with this kind of training. If you want to learn more about this rare token and classification images, I have an amazing tutorial. This is a master tutorial. It is 100 minutes long. Watch this tutorial. It also has chapters and English subtitles manually fixed by me. Some people are also asking me why I am using
real images as regularization images. Because if you are aiming for realism, then using real images will make your model even more realistic, as long as the images you use for classification
are better than the model itself. Since these are very high quality real images,
it will further fine tune SDXL for realism and it will improve my realism training. However, if you want to do training with the
style of the base model, then you should use images generated from the model itself. Okay, finally, don't forget to click Copy info to folders tab, because when we go back to the training tab and open the Folders tab, we will see all of the parameters like this. Model output name. Now this is important. Whatever name you give will be used when
saving the checkpoints of the LoRA training. So I will name this tutorial_video. This will be the output name of the Kohya generated LoRA checkpoints. Then the most important part: the parameters. I have done over 15 trainings to find optimal
parameters for SDXL. Let me show you some of the tests that I have
made until finding some optimal parameters. I have done so many tests, and each one of them takes a huge amount of time, as you can imagine. So finding optimal parameters with Kohya LoRA is extremely hard. There is so much conflicting information on the internet, and there isn't any proper information. So with SDXL, you have to do a lot of research. There are some presets for SDXL, but they weren't any good, so I am not going to use any preset. Select LoRA type Standard. Do not change anything in these two sections. Train batch size 1. Now this is very important. If you are teaching a subject, using the minimum
training batch size is better. Because as you increase your training batch
size, the generalization of the model will decrease. So when do you need a bigger batch size? You need a bigger training batch size if you have too many images to train, for example if you are doing an overall fine-tuning of a model. Let's say you have 100,000 images to train; then you have to increase your batch size, because otherwise it will take forever to train. Number of epochs. Now this is important. Since we set the number of repeats to 20, 1 of our epochs will actually be equal to 20 epochs, because in each epoch each one of the training images will be trained 20 times. Therefore, I will do training for up to 10 epochs. This way, we will have a checkpoint
every 20 epochs. Like I am showing in Automatic1111 web UI
DreamBooth training. So we will save a checkpoint every 1 epoch. Select mixed precision BF16, and select BF16 here as well. If you get an error, or if you have an older card that doesn't support BF16, then use FP16. Number of CPU threads per core: this is two; I don't change it. Cache latents: this will improve your training speed. Cache latents to disk: this will also improve your training speed. Now the learning rate. I have tested so many learning rates, and for
SDXL with the standard LoRA type, the learning rate I found best is 4e-4. So this is equal to this one; you see, 0.0004. Optimizer. This is really important. With SDXL, we are using the Adafactor optimizer. These optimizers are explained on this Hugging Face page. You see AdamW, and there is Adafactor. Adafactor uses much less VRAM; I think it is the optimizer algorithm that uses the least VRAM. You can read this page and learn more if you wish. Okay, so the optimizer is Adafactor. Actually, I just checked again and verified
that my learning rate scheduler is constant and learning rate warmup steps is 0. So don't forget this: learning rate scheduler constant, learning rate warmup steps 0. Optimizer extra arguments. Now for SDXL, we need to use optimizer extra arguments. I have shared the optimizer extra arguments in our GitHub readme file. Copy them and paste them here. So these are the optimizer extra arguments.
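For reference, the Adafactor extra arguments typically used with a fixed learning rate look like this (this is my assumption of the values; copy the exact arguments from the GitHub readme file):

```
scale_parameter=False relative_step=False warmup_init=False
```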
Max resolution: this is really important; you need to set this to 1024,1024. Also, I set the Text Encoder learning rate and the UNet learning rate to the same value. So all of these learning rates are the same as
you are seeing right now. Do not check this box. I think when you check this box, it will not work, because in my experimentation it wasn't working. Check No half VAE for SDXL. Network rank (dimension). Now some people are wondering what this network rank dimension is. LoRA training is actually DreamBooth training,
but the difference is that we are not training the entire model itself; we are training only some part of the model. As you increase this network rank, you are training a larger part of the model, and thus you are able to learn more information and more details. However, as you increase the network rank, it will use more VRAM and the checkpoint files will take up more space on your disk. So this is the trade-off of network rank. I am using a network rank of 256 for realism. Network alpha. There isn't very clear information regarding
this, but as you increase it, the weight changes during training are stronger. So as you increase this, it is actually amplifying the learning rate, and the right network alpha changes according to the LoRA type you are using. For the standard LoRA type, use network alpha 1 with these learning rates. Do not increase it. I have tested it: if you increase it, the model will get over-trained very quickly. So network alpha is 1. Network alpha will not change the size of your checkpoints; it is just a parameter that is used during training when applying and changing the weights. So these are all the parameters.
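To summarize, these are the training settings used in this tutorial (just a recap of the values given above):

```
Base model              : SDXL base 0.9
LoRA type               : Standard
Train batch size        : 1
Epochs / repeats        : 10 / 20 (save a checkpoint every epoch)
Mixed precision         : BF16 (FP16 on older cards)
Learning rate           : 4e-4 (same for Text Encoder and UNet)
LR scheduler / warmup   : constant / 0 steps
Optimizer               : Adafactor (with the extra arguments above)
Max resolution          : 1024,1024
No half VAE             : checked
Network rank / alpha    : 256 / 1
Cache latents (+ disk)  : enabled
Gradient checkpointing  : enabled later, after an out-of-memory error
```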
Before starting training, you should save your configuration. How? Click Save as and select the folder where you want to save. Currently, this is the last folder. I will save it as test video now. Save. You see, this is the file. And whenever you click Save, all of the configuration will be saved here. After that, you can open and load it, and all of the settings you have will be loaded. When you also open the cmd window, you will see the
save as and save commands here as well. So before doing training, you can click Print training command. This is very useful because it will show you how many images are found in your set folders, like this, and how many steps they will take. All of the information and all of the commands will be printed here. I will also copy this command and then hit Train model. Then it will execute this command. You will also get a message that a matching Triton is
not available. Some optimizations will not be enabled. This Triton library is not available on Windows
yet. Therefore, you will also get this message
if you are on Windows. On Linux, it is working. Also, you will see "using DreamBooth method". Why? Because LoRA is also DreamBooth, but it is an optimized version of DreamBooth training; therefore, it uses fewer resources, and the trade-off is lower quality. So it is going to start training. Let me zoom in a little bit more. Okay, it is loading the checkpoint, using xFormers,
caching images. Since we have selected caching latents to disk as well, this caching is taking some time. VRAM usage is a little bit increased at the moment. And meanwhile, I am recording a video; since I am recording a video at the same time, my VRAM usage and GPU usage are higher than normal. I did get an out-of-VRAM error because I didn't enable gradient checkpointing, and it looks like even 24 GB of VRAM is not sufficient without gradient checkpointing. So after making this change, I click Save, and the settings are saved. This is the current VRAM usage: 1.2 GB. Let's do the training again. You don't need to do anything else. Just go to the bottom and click Train model
again. And it will start training again. And let's see the VRAM usage this time. I think even when I am recording video now,
it should work. I also closed some of the applications that
were open. Okay, you see this is the VRAM usage this
time. So it went from 1.2 GB to 19.1 GB, then 20.7 GB. Currently, it is using about 19 GB of VRAM. The training has started. The training speed is 1.5 seconds per it right now; "it" means iteration, and in each iteration it will train the number of images that you set as your training batch size. So if my training batch size were 10, it would be training 10 images in each iteration. But since it is 1, it is processing 1 image
in 1.5 seconds right now. And how many iterations does it need? It needs 5200 iterations to be completed. Why? Because we are doing 10 epochs, we have 13 images, and we are using classification images. When we do the calculation, it looks like this: 13 base training images, multiplied by 2 because we are using classification images, multiplied by 10 because we are doing 10 epochs, multiplied by 20 because we have 20 repeats. Therefore, the total number of steps is 5200. This calculation is also printed at the beginning when you hit Print training command. As you see, it shows you the calculation
of the total number of steps. If my batch size were 13, how many steps / iterations would it take? We just divide it by 13, and it will take only 400 steps, because in each step, in each iteration, it would be processing 13 images instead of 1 image.
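Written out as a formula with the numbers above, the step count is:

```
total steps = images x 2 (reg images) x repeats x epochs / batch size
            = 13 x 2 x 20 x 10 / 1 = 5200
with batch size 13: 5200 / 13 = 400
```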
With these settings, the training takes about 100 minutes on my computer. I have already done it, so I will now open it. But before doing that, I will show you another very cool trick. I shared my used training command here: copy it, open Notepad++ or any text editor, paste it, and you will see all of my training commands. You can change the folder names according to yours, and then you can copy this command. And how are you going to execute it? Enter your Kohya installation folder, enter the virtual environment, enter the scripts folder, open a new CMD window, then type activate while you are inside scripts, then move back to the main Kohya folder like this, and copy and paste the command and hit enter.
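As a rough sketch of those steps in a cmd window (assuming the repo was cloned to F:\kohya_ss; adjust the paths to your own installation):

```
cd /d F:\kohya_ss\venv\Scripts
activate
cd /d F:\kohya_ss
rem now paste the training command you copied and press Enter
```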
This will be exactly the same as training from the GUI. So it will start training from here with the
same settings that we used in the GUI. So you can also follow this strategy if you
wish. This will also work. In here, you will also see all of the settings that are used in my training. This is extremely convenient to check out
and use. You'll see it is starting the training exactly
as it was in the GUI version. So now I will open my Comfy UI to start using
the LoRAs. Okay, it has started. You need to have workflows to start using SDXL with ComfyUI. Everything you need with ComfyUI is explained in this tutorial video. It is recorded but not posted on YouTube yet. When I click here, I will get to the GitHub
readme file that I have prepared for it. And in here we have the workflows. I will start with SDXL with LoRA workflows. Save link as. By the way, I have explained everything in
the upcoming video. So you should watch it before watching this
one. Let's download the PNG file into our folder
as base, open it like this, then return to our ComfyUI and drag and drop it. And the workflow is loaded with SDXL base. Here are my LoRAs. Test9 LoRA is the LoRA that I generated with the same settings that I just showed you. So you see I have several checkpoints, and now I need to test the checkpoints. By the way, in test9 I had used 25 repeats, not 20, so I have two fewer checkpoints, with 8 epochs,
but it doesn't matter much. So first of all, you need to test different
checkpoints to find the best output. And how are you going to do that? Define your prompt. So I have shared the prompt in the GitHub
readme file. Copy the positive prompt. This is a prompt that I came up with, like this. Then copy the negative prompt from here and
paste it here. This is for generating your image in a suit
in a very realistic way. Of course, I can't say this is the best way, but it is a decent way. Then we will test each one of the checkpoints. Before doing this test, let's make this a fixed seed. Let's make the batch size 6; I think I can even do a batch size of 6. Then give a file name prefix, let's say tutorial video. Okay. Actually, if I want to track them by file name, and since I begin from the test9 first checkpoint like this, I can give it the name tutorial video 1, so I will know this is the first checkpoint
of testing. Okay, fixed seed. These are the parameters. Everything you see here is explained in the other video, which will be published by the time you are watching this video. Okay, we are ready; just hit Queue. So it will start processing, generating
images with the first checkpoint. Then let's pick the second checkpoint like this. The first checkpoint actually corresponds to 25 epochs, because the repeating was 25. Okay, this one will be tutorial video 2. Queue. This is the second checkpoint; now it will generate with the second checkpoint. And let's do the third checkpoint. Let's name it like this. Queue. Okay, the fourth checkpoint. This is much easier in the Auto1111 web UI. However, in ComfyUI, I couldn't find any
better way. I also asked the ComfyUI developers, and there isn't any x/y/z checkpoint comparison. So the fifth checkpoint; let's make this prefix fifth. Queue, and then the sixth checkpoint. Oh, by the way, we didn't set the previous one. Yeah, let's delete the previous queue. So in the view queue, delete the last queue, let's return to five and make this five. Add to the queue, then select checkpoint six, make the prefix six, hit Queue, then select seven, make the prefix seven, hit Queue. And the last one, you see, like this. It is the final checkpoint after the training
has been completed. And let's make it 8. So this is actually the eighth epoch, and the repeating was 25, so this is equal to 200 epochs in Auto1111 training. Hit Queue. Now I just need to wait patiently. Let's look at the results. The results will be saved inside ComfyUI, inside output. These are all of the images that I generated yesterday; I will also show them to you. Let's sort by date modified. Okay, we start seeing them here. Tutorial video 1: these are the images generated by the first checkpoint. So the checkpoint images are generated. It is now time to compare them and find the
best looking checkpoint. This part is totally subjective and totally
depends on you. You have to compare each checkpoint and find
the best looking picture. Unfortunately, this is not as easy as using
the Automatic1111 web UI X/Y/Z checkpoint comparison, but this is what we have got right now. So you need to look at each one of the checkpoints and decide which one looks best. Even the last checkpoint is not over-trained; I can see that. So look at each checkpoint and decide which one looks best. After looking at each checkpoint, I think checkpoint 6 looks very good. I will go with this one, checkpoint 6. So now what we are going to do is we will
select our best checkpoint from here. We will decide our prompts and generate hundreds
of images to find the best ones. How are we going to do that? You see, I have selected my checkpoint, checkpoint 6, then click extra options and increase the batch count. Let's generate 100 white suits. Queue prompt. Then let's generate 100 blue suits. Wait until you see the queue size reach 100, then hit Queue prompt, and the 100 blue suit prompt is also queued. Add to the queue the prompts that you like, that you want to generate. After all images are generated, what you are
going to do is extremely important. You could look at all of the images and find the good ones; however, this is very tiresome and very hard to do. So what else can you do? I have an amazing tutorial on how to find the best Stable Diffusion generated images by using deepface AI. This is the tutorial link. The script used is shown in the video. I also shared the script in this Patreon post. So go to this Patreon post and download the findbestimages.py file. So the file is here. Copy all of the generated images and put them into any folder, "Sorted images tutorial" like this. Paste them there. Okay, all images are here. They are not yet sorted by similarity. Then go to your training image data set and
select one image that you would like them to be sorted by. The full logic of this script is explained in this video. So I will do two sortings. The first one will be according to the similarity to this picture. I copy this picture, then let's name the folder org image 1. Okay, this will be the folder. Copy its path and edit the findBestImages.py
file. So I will set the original image to be compared like this, give the path of the images to sort like here, and give any name for the detected images. Then open a command line and run python findBestImages.py.
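As a rough sketch in a cmd window (assuming Python is on your PATH; the folder name is just my example, use wherever you put the script and images):

```
cd /d "F:\Sorted images tutorial"
python findBestImages.py
```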
It will sort all of the images according to the picture you picked. Let's see the sorting progress. For this to work, all you need is a Python installation
and several other dependencies. Everything is explained in this video. It is so simple. You see currently it is calculating the similarity
between this picture and the pictures that I am using here. So what this script will do is sort the images according to similarity. Therefore, it will be very easy for me to compare them and find the best looking ones. I have already sorted images according to two different reference images: one with this image and one with this image. So let's first see the images that are similar
to this one. The sorted images are here. So these images are sorted according to the
similarity to this image. So you see the similarities are above 85%, up to 90%. Not all images will be perfect, because the script is not able to consider, for example, the beauty of the eyes; it looks at overall general similarity. For example, this one: this one is a very decent image. Okay, now let me show you some examples. On the left: our real image, the real training image; on the right: our raw generated image, as you are seeing. Here is another one. Okay, here is another image that, according to the similarity, is really, really good. And this is with the SDXL 0.9 version, not even
the official 1.0 release. We are not using the refiner. I am pretty sure there will also be refiner training, and we will get amazing quality with SDXL; I am pretty sure of it. Let me also show you the other sorted images
folder. So with my script it will be very easy for
you to find the good looking images without looking through thousands of images. It will help you tremendously. So in this one, you see, I am looking to the side. Why? Because when sorting them I used this training image, where you see I am looking to the side. Therefore, now it is finding all of the images where I am looking in the same direction. The quality is really, really good. It is finding the images really, really well. They can be further improved with inpainting; I will also show that. So if you have a specific pose, you can use that specific pose to sort the images and find images with that specific pose. For example, this really good one, or this
one. And how can you further improve the quality and the similarity? For example, let's take this picture and inpaint it. For inpainting, I will refer to this tutorial. I have the inpainting workflow here. Right click SDXL LoRA inpaint, Save link as, and download it anywhere you wish. Open the folder, then go to ComfyUI and load the inpainting workflow like this. Also, let's open the sorted image so we will
have the seed value and the prompt. So this is the prompt; okay, I copy and paste the prompt. Let's also find the seed value. It is here; this is the seed value. I copy and paste the seed value, and I also need to select this safetensors file. The safetensors file was test9, sixth epoch. Okay, I have it. Let's make the batch size 1. We also need to choose and upload the image file to inpaint, so let's copy this file here; it will be easier to find. Choose upload, let's go here, this is the file I have. Right click and go to Open in MaskEditor, change the thickness, and select the face, like this. This is not as easy as doing it in Auto1111, unfortunately. Okay, save to node. Okay, grow mask by 64 pixels. The denoise is 80%. Okay, let's add it to the queue. By the way, this mask may not be very good,
so let's see the result. You need to play with the denoise strength and the mask to get good results, better results. We will see the generated image here in a moment. Okay, with inpainting it is currently running at 1.6 iterations per second; I think Auto1111 will be much faster than this. And we have the inpainted image. It is named as base output. Let's save the image. Okay, it is saved. And here is the inpainted image: this was the original image, and this is the inpainted image. Of course, we need to do more inpainting to get a better result, but I already see some serious improvements. Yeah, since we have made the mask bigger, I think it increased the face size, so it may not look very natural. But this is the strategy for obtaining very
high quality images with SDXL inpainting. So what if you don't have a 24 GB GPU, but an 8 GB one? You can do the training on RunPod. I already have a tutorial on how to install and use it on RunPod, and I already have an automatic installer script for RunPod. Do the training on RunPod; everything is the same. Watch this video and you will learn. Then download the generated checkpoints and use them on your own computer. What if you don't even have any GPU? You can use RunPod both for ComfyUI and for training. Also, in my referred tutorial I show how to use ComfyUI on a Google Colab free account as well, so it will be very easy for you to use. Just watch the other tutorial that will be
linked here when you are watching, and then watch this tutorial; it will help you tremendously. So we got amazing realistic quality. But what about styling? Realism and styling are actually conflicting, because for realism you need more details, which means the model is less generalized. However, let's try this: portrait photo of ohwx man in gta 5 style. Let's queue the prompt. Let's drag and drop the image output here so we can test it. Currently it is going to generate 2 images. We can see the progress here. It is still loading. Yes, the model is loaded. It is generating 2 images at 1.15 seconds per iteration. By the way, these are not the best iterations per second, because I am recording video. Moreover, I think there will be more optimizations
soon. Okay, we got the result. You see, it is still not stylized enough. So what you need to do is reduce the strength of the ohwx token. So let's try like this, with 90 percent strength instead of 100 percent for this ohwx token.
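In ComfyUI's prompt weighting syntax, reducing the token strength looks roughly like this (a hypothetical example based on the short style prompt above):

```
portrait photo of (ohwx:0.9) man in gta 5 style
```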
And here are the results. We can see that it is starting to look more like GTA 5, as you are seeing right now. When I double click on the images, it shows
them. You see, it is becoming more like the GTA 5 style. We can add more prompts, like a game character, like this, perhaps digital drawing, digital artwork. You see, when you add more prompts it will become more like that. Let's try like this; certainly there is some stylizing as GTA 5. Okay, now we are more like a game character, a digital drawing, you see. Let's reduce the strength of the ohwx token and try one more time. We are certainly getting closer to a game character style, as you are seeing right now, and it is still perfectly keeping my face. This is a thing that I discovered while
doing experimentation. There will be a whole new area of SDXL prompting, because it is different from the SD 1.5 version; that is for sure. We need to figure out how to do the prompting. And here now we are seeing almost completely a GTA 5 character based on my picture. This is another one. This one is not as stylized as that, but this one certainly is. So this is the way of getting stylized images. If your image is too realistic, reduce the
weight here, and hopefully I will make a new tutorial on how to get amazing stylized images. It requires a different workflow, that is for sure. I have got some other previous tests as
well. You see, this is another stylized image. Let's also load this one to see the prompt: photo of ohwx man as a pixar character. This was using my previous safetensors file, with the strength of the LoRA model reduced. So I will copy this prompt and open another tab here. Okay, load the latest generated image, copy the prompt like this, copy the negatives like this, and try again with our best checkpoint. Then I will reduce the token strength to see. Okay, here we got the result. Now let's reduce the weight of the token like this and try again. Okay, now we got a very well stylized image with 90 percent strength. You see, it is completely stylized as Pixar, as me, and this is not a cherry pick. This is the first try I made, so this is
amazing. I added these prompts to our readme file as
well. So as I said, this readme file will be your
number one source for this tutorial. You see, the prompts are here. This is all for today. I hope you have enjoyed it. Please watch the other video that you will see here; it will explain to you how to use ComfyUI, and it will be tremendously important for following this tutorial. Please also join me on YouTube and support me on YouTube; I would appreciate it very much. Please subscribe, like, and leave a comment. Leave a comment and tell me the prompts and prompting ideas that you have discovered; I will also add them to the readme file, and it will be very useful for others. Please also support me on Patreon. Your Patreon support is extremely important for me. Without your support, I won't be able to continue producing these high quality tutorials and resources for you, because my YouTube views are not good, so I need your help with this one. Without YouTube views I generate a very small amount of revenue, so your Patreon support is what keeps me going. Hopefully I will see you in another amazing tutorial.