In this ComfyUI tutorial for beginners, I'll
walk you through the entire process of creating your own AI images with ComfyUI,
covering all the essential steps. From easy installation to the
exciting part of image generation, we will explore everything you need to know. I recommend that you follow along as you watch; this will help you understand
all the details more clearly. For those who watch the video till
the end, I have a special bonus: my workflow from today's
video and my ComfyUI notes. With that said, let's get started! First, you install Pinokio so that you can easily install ComfyUI. Pinokio, you should know, is a piece of software that lets us install scripts with a single click and also makes them easier to use. So, to install Pinokio now, you click here on
Download and then select the operating system you are installing it on; in my case, I am choosing Windows. After that, you click on Download Pinokio for Windows,
and a ZIP file will be downloaded. You can then unzip this ZIP file with WinRAR; if you don't have WinRAR yet, you can find a download link in the description. Once it has been unzipped, you run the Pinokio setup. Windows might initially block the installation because
it doesn't recognize Pinokio. In this case, you'll need to click 'Run anyway' to proceed
with the installation. Once this is complete, you get this display here and click on
Allow and then first select the color of the interface. After that, you should select
the folder where the AI applications are to be saved. I recommend you have this folder on
a second drive with 200-300 GB of free space. Do not forget to append "Pinokio" to the folder path; otherwise, you will get an error. Once all
this is done, you click on Discover Page. Here you then see all the scripts that you
can install and run with the help of Pinokio. At the bottom left, you simply select ComfyUI and click Download to first download the requirements needed to run the script at all. This process takes about 5-10 minutes. Once it is finished, you confirm it
with OK and can then rename the folder where ComfyUI is stored if you wish. But I leave it as it is and just press Download. After that, you can already see ComfyUI here and click on it.
The very last thing you have to do is install ComfyUI by clicking on Install. This process takes 1-2 minutes. Once that is done, you have finally made it and can start generating images. But before I explain how that works, I want to introduce you to three functions. Here on the
left, you have Update to update ComfyUI; below that, you can reinstall it if you encounter errors; and at the bottom, we have Reset if you want to reset ComfyUI completely.
I will go into everything else in a moment. I start ComfyUI here at the top by pressing Start and letting it run on the GPU. Once ComfyUI is loaded, you then go here to WebUI. What you see here may seem a little complicated
at first glance, but I will go into detail in a moment and you will understand how it all
works, because it is actually quite simple. First of all, you need to think of ComfyUI as a construction kit whose components you assemble to build the process for your image creation. What you see here are nodes that take care of all the image-creation tasks. For example, if I click on "Queue Prompt" here
on the right, this structure of nodes will create the image, and you can even track which nodes
are performing this creation process. When this process is complete, you can now also see here
on the right that an image has been created. I will explain the best way to create images
in a moment. First, I will start by creating a structure with nodes so that you understand
how the whole thing works. I will press "Clear" to start from the beginning.
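For orientation, here is a sketch of the node structure we will build in this section (each arrow is a thread connecting one node's output to another node's input):

```
Load Checkpoint ──MODEL──────────────────────────► KSampler
       ├─CLIP──► CLIP Text Encode (positive) ────► KSampler
       ├─CLIP──► CLIP Text Encode (negative) ────► KSampler
       └─VAE─────────────────────────────────────► VAE Decode
Empty Latent Image ──LATENT──────────────────────► KSampler
KSampler ──LATENT──► VAE Decode ──IMAGE──► Preview Image
```

Don't worry if this looks abstract now; we will add each of these nodes one by one.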
If you double-click on the empty area, you can search for nodes. I will
start with the "Load Checkpoint" node. With this node, you can select models with which
you can create your images. As you can see here, I can move this node freely and enlarge or shrink it as I wish, which gives me full creative control over how I design my workspace. I'll show
you later what other cool things you can do. In the beginning, you should have a model that you can play around with a bit, but I recommend you get another one, because there are many better ones out there. To download models, you can get some here on the left directly under Download Models and try them out, but I recommend you download them from CivitAI instead, as it is much clearer there: you can see what each model can do, and much more. Make sure that the model
you want is labeled "Checkpoint". I recommend you start with these two here because they are really good; I will start with the Juggernaut XL model. To download it now, I will show you the path where you have
to save your models for image generation. To do this, go to Files here, then to View Folder, open the ComfyUI.git folder, go into app, and under models, in checkpoints, is where you have to save your models. Copy the path above, then go back to the page and press Download. Then paste your copied path at the top and press Save, and the model will be installed there. Since I have already done this before, I do not need to do it again. Once you have it installed, you will need to press Refresh to see it, and then you can select it here. Now that you have your model, you can continue
to build your node structure. To do this, click on CLIP and drag out a thread. When you release it, you can add a node. Now select the recommended "CLIP Text Encode" node. Then create two of them, because these are your prompt nodes, where you describe your image and say what you want to see in it. You can change
the color of the nodes to keep a better overview. I now change the color of the top node to green, because this will be our positive prompt node, and the color of the bottom node to red, because this will be our negative prompt node. Once this is done, connect these
two prompt nodes to the KSampler. The KSampler is the heart of the image
generation node structure, as this is where the actual generation process takes place.
That's why you can also set many things here. I won't explain everything in this video, but
it should be enough for you to understand it. The seed here is like a key that determines how and where the random noise appears on the image. Below it, you can decide whether the seed should be generated randomly on each run or be fixed and always remain at the same number; if you click on it here, you can set even more. "Steps" means how often the image
is cleaned of noise. The more steps, the more refined the image. CFG determines how closely the KSampler follows your prompt. Bear in mind that higher values make the image adhere more to the prompt, but can degrade the quality if they are too high. "sampler_name" is the name of the method or
algorithm used by the KSampler to generate the images. Different samplers can
process images in different ways, which can lead to different styles,
details or qualities in the final result. The scheduler controls how much the noise is
changed in each step. Changing the scheduler affects how quickly or slowly the noise
in the image is reduced over the steps. This can make the final result either finer or
coarser, depending on how the changes are made. Changing denoise changes the amount of noise applied to the image, which affects the look of the final result. I recommend you play around with these settings a bit to better understand their effects on your image; I'll come back to this later in the video. But now I will move on to the next
node. It is called "Empty Latent Image" and you have to connect it to the KSampler; with it, you can set the height and width of the image and also how many images should be generated at once. Next, you need to connect Load Checkpoint to the KSampler so that it can access the model. Then you have to connect the VAE Decode to the
KSampler. The "VAE Decode" converts the abstract data of the KSampler into visible images so
that you can actually see the generated image. After that, you have to connect the VAE to the Load Checkpoint, because we simply take the VAE from the model. You can add your own VAE if you want, but you don't need one now; the model's VAE is usually enough. Now you just need to add one more node to the VAE Decode and you're finally done. You can choose between two options: the Save Image node, where
all images are saved directly, or Preview Image, where the images are only displayed and you
have to right-click on the image to save it. I recommend Preview Image, so that you only save the good images and don't clog up your hard disk. Next, I'll explain how to create an
image with these nodes and give you a brief introduction to prompting, which I'll show you using someone's image from CivitAI as an example. To do this, let's
go back to the website and go to Images. There you will see images from the community; you can use these to get inspiration for your own images, which is also very important for your learning process. If you click on an image, you will be taken to that image's detail page. There you will find the exact settings for the image and how it was generated. You can take these settings and prompts into your
workflow and change them to suit your needs, which is very helpful because you can quickly
learn how to refine your image generation process. The first thing I do is to transfer the
prompts for this image to my prompt nodes. I'm not going to dive deep into the topic of
prompts now, I'll do that in another video, but I will explain the basics. As you
can already see, all the instructions for creating the image are separated by commas,
with specific instructions enclosed in brackets. These brackets signal to the AI which
elements of the prompt are particularly important. The syntax (keyword:weight)
can be used to strengthen or weaken the importance of individual keywords in
the prompt. A higher weight, e.g. 1.4, increases the importance of a keyword, which means that the AI takes it more into account. A lower weight, e.g. 0.8, signals lower relevance. This method allows for finer control of image generation by specifying which
aspects of the image should be emphasized more in order to achieve more accurate and targeted
results. Another thing you should know, though it is probably already obvious to you, is that the negative prompt is where you write things you don't want in the image, and the positive prompt is where you write what you do want to see in the image.
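To make the weighting syntax concrete, here is a small toy Python sketch — this is not ComfyUI's actual parser, just an illustration — that reads a comma-separated prompt and pulls out the (keyword:weight) pairs, with unweighted keywords defaulting to 1.0:

```python
import re

def parse_weights(prompt):
    """Return (keyword, weight) pairs from a comma-separated prompt.

    Terms written as (keyword:weight) get their explicit weight;
    everything else defaults to a weight of 1.0.
    """
    pairs = []
    for part in prompt.split(","):
        part = part.strip()
        m = re.fullmatch(r"\((.+?):([\d.]+)\)", part)
        if m:
            pairs.append((m.group(1), float(m.group(2))))
        elif part:
            pairs.append((part, 1.0))
    return pairs

positive = "portrait of a woman, (detailed skin:1.4), soft lighting, (bokeh:1.1)"
print(parse_weights(positive))
# → [('portrait of a woman', 1.0), ('detailed skin', 1.4), ('soft lighting', 1.0), ('bokeh', 1.1)]
```

The example prompt here is made up; the point is just to see how each keyword carries its own importance.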
the prompts too much in this video, but it's really quite simple.
Don't overcomplicate the subject; try things out and take inspiration from others. It's really not rocket science. So now I'm going to transfer the settings of the image to the KSampler. I won't go into it in detail in this video, but the same applies here: it's really easy. You just have to play with the values a bit and see which ones give you the best result; you really can't go wrong here. Remember to use the information about the
KSampler that I gave you earlier. Now everything has been transferred and
I can generate the image. As you can see, I have generated a really good image
with the node structure and the right settings. I have also shown you
here how the image would look with different samplers and schedulers; you can pause the video to have a closer look. So now you know how to create images,
what nodes are and how they work. Now I am going to show you how you can upscale your
images to get a better resolution. To do this, pull a thread from the VAE Decode and connect
it to the Upscale Image By node. Here you can set the upscaling method and the factor by which the image should be upscaled. I won't go into detail now, but in
the next video I'll explain exactly how everything works and show you a
few other better upscaling methods. Now I'll set the image to be upscaled by a
factor of 2. But you can also enter higher values here if you want to upscale it even
more. Then you have to connect your upscaler to a Preview Image node so that the image is also displayed, and you're finished. So I will now generate another image so that you can see the difference. As you can see, the image has been upscaled, and this one has a much better resolution than the previous one. Now that you have learned how to upscale, I
will show you a setting that I think is very important if you want a tidier layout. Here in the settings, you can choose how the threads are displayed; I recommend you choose Straight, as this creates a much tidier look. Another cool thing is that you can create groups, which you can drag underneath your nodes to organize them; this is a very good way to make your workspace clearer. You can then fit them to your nodes, change their color, and even change their name. You can also select and copy nodes. This
allows you to add more creation processes, giving you much more flexibility when
testing settings and models. If you want to move several nodes, you can select them
and move them by holding down the Shift key. I will now create two groups here: in one, I will generate images with the model we used at the beginning, and in the second group with the other model I recommended to you. As you can see, I have now generated images in both
groups again, each with different prompts. Both of these models have done a good job
here. It is really important that you always try out several models because some produce
better results than others. You should also know that you can bypass nodes and groups. For example, if you set the top group to Bypass, it will be ignored and only the bottom group will generate images; this can also be set back to Always, and then the group will be active again. This is a feature I use very often because
I always have several processes in a workflow. Now we come to workflows. The workflow that you see here can be saved on the right and reused over and over again. This is a very useful function, because you can create several workflows for different purposes and switch between them. You can even use mine; I will make it available on my Discord and Patreon if you want to work with it a bit. I'll also show you how to load it into your ComfyUI soon. But first I want
to show you something in the Manager. Here you can download nodes, models, and much more. The most important thing in this Manager, though, is that you can install missing nodes: if, for example, you download a workflow and cannot use it because some nodes are missing, this function lets you download all of them at once. You can load workflows on the right.
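By the way, a saved workflow is just a JSON file describing your nodes and the links between them. Heavily simplified — the real file contains more fields, and all the values below are made up — it looks roughly like this:

```json
{
  "nodes": [
    { "id": 4, "type": "CheckpointLoaderSimple", "widgets_values": ["juggernautXL.safetensors"] },
    { "id": 6, "type": "CLIPTextEncode", "widgets_values": ["your positive prompt here"] },
    { "id": 3, "type": "KSampler", "widgets_values": [42, "fixed", 20, 8, "euler", "normal", 1] }
  ],
  "links": []
}
```

This is why sharing workflows is so easy: you are just passing around a small text file.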
You can either load them by selecting the JSON file or by using an image that was generated with that workflow. If you want to download other people's workflows to get some inspiration and ideas for optimizing yours, you can do so here on the left side of this page. That's it for this video. You can
find the document I created about ComfyUI and my workflow on my Discord
and on Patreon. If you liked the video and want to see more like this, hit
subscribe! See you in the next video.