First, you'll need the "Essentials" and
"Impact Pack" nodes, which we'll be using in this video. You can find links to these in
the description below. You can also download the ComfyUI Manager, which will install the
nodes faster than installing them manually, so I recommend first downloading it if you don't
already have it. For all of my Patreon supporters, you can download my updated workflow and use the
"missing nodes" feature to download the nodes. Once you have downloaded everything, I will show
you how to set up your inpainting workflow. First we need the basic nodes. Just create and connect
them as I am doing now and follow my steps. I also want to point out that from now on I will only
focus on the nodes that are new to you in my ComfyUI video series. If you don't understand
what the basic nodes are and how they work, it's best to watch my previous videos. So the
first node that's important for inpainting is the Gaussian Blur Mask node, which will blur
your mask and help us create better transitions between the original image and our inpainted
image. I'm also going to create a Mask Preview so we can see a preview of our mask. You don't
have to create this, but you can if you want to. Then we need the Inpaint Model Conditioning node,
which acts as a link to the KSampler. And then, of course, we need Differential Diffusion, which
is where all the magic happens, allowing us to do this new and improved inpainting method. Then
you connect the rest of the base nodes. Oh, and don't forget to connect the VAE from the
Load Checkpoint node; I forgot to do that at the beginning and corrected it later. Once you've
done all that, you're done and ready to mask. To demonstrate masking and inpainting,
I'm going to work with this image. Let's start with the SAM Detector. You can
use this tool to mask specific areas, which is very convenient as it eliminates
the need for manual editing with the Mask Editor. Simply left-click on the part of
the image you want to mask. For example, I want to change the colour of the dress, so
I will mask the dress. Then click 'Detect' and the dress will be automatically masked. Make
sure the confidence level is set high so that only the dress is selected. A higher confidence
value means the recognition is more likely to be accurate; a lower value could, for example,
result in the whole person being selected. Then click 'Save to Node' and the mask is saved.
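To make the confidence setting concrete, here's a minimal Python sketch of the idea: each candidate region a detector finds comes with a score, and only regions above the threshold end up in the mask. The function name and the data are placeholders of mine for illustration, not the real SAM Detector output format:

```python
def filter_detections(detections, confidence):
    """Keep only candidate regions whose score meets the threshold.

    `detections` is a list of (label, score) pairs -- toy placeholder
    data, not the actual SAM output.
    """
    return [label for label, score in detections if score >= confidence]

candidates = [("dress", 0.92), ("person", 0.55), ("background", 0.30)]

# A high threshold keeps only the dress...
print(filter_detections(candidates, 0.9))   # ['dress']
# ...while a low one would also sweep in the whole person.
print(filter_detections(candidates, 0.5))   # ['dress', 'person']
```

This is why raising the confidence value makes it more likely that only the object you clicked on is selected.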
Another tool you can use to create masks is the Mask Editor. I use it, for example, to
correct any errors in the mask made by the SAM Detector. The only thing you need
to know about the Mask Editor is that you can adjust the radius of the masking
brush using "Thickness". I've explained everything in detail in my
ComfyUI document to keep you informed. There, you'll also find other information about nodes,
models, and much more. The document is available for free on my Patreon and Discord. Once the mask
is complete, you just need to click "Save to Node." Now that we've completed the masking step, I'll
show you how to change the colour of objects. But first we need to make a few adjustments. First
and foremost, I recommend setting both values in the Gaussian Blur Mask node to 50. Next, copy the prompt
you used to create the image and modify it to describe the new colour of the dress, or whatever
object you want to recolour. Of course, you'll also need to select the same model used to generate the
image and transfer the same settings to KSampler. It's worth noting that with this inpainting
method, you don't necessarily need a specially trained inpainting model. You can use the same
model that you used to create the image. As for the prompt, it's good to know that you
don't have to stick to your original prompt. You can write something completely different
or just focus on the object that needs to be changed. You can do a lot of experimenting to
see what works best for your desired output. The same goes for the KSampler settings;
you can also play around with the values. Afterward, all you need to do is
generate the image, and as you can see, the dress has now been changed to blue.
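As an aside, the intuition behind the blurred mask can be sketched in a few lines of plain Python: the softened mask acts as a per-pixel weight between the original and the inpainted image, which is what gives the smooth transitions Differential Diffusion takes advantage of. The function names and toy 1-D values below are my own illustration, not ComfyUI code:

```python
def box_blur(mask, radius=1):
    """Soften a hard 0/1 mask by averaging each value with its
    neighbours (a crude stand-in for the Gaussian Blur Mask node)."""
    n = len(mask)
    out = []
    for i in range(n):
        window = mask[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def blend(original, inpainted, mask):
    """Per-pixel weighted blend: a mask value of 1.0 takes the
    inpainted pixel, 0.0 keeps the original, and fractional values
    at the blurred edges give a gradual transition."""
    return [o * (1 - m) + p * m for o, p, m in zip(original, inpainted, mask)]

hard_mask = [0, 0, 1, 1, 0]
soft_mask = box_blur(hard_mask)          # edges become fractional weights
original  = [10, 10, 10, 10, 10]
inpainted = [50, 50, 50, 50, 50]
print(blend(original, inpainted, soft_mask))
```

With a hard mask, the blend would jump straight from 10 to 50 at the edge; the blurred mask ramps between them instead, which is why the transition between the original image and the inpainted region looks natural.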
You can get very creative here and do a lot of cool things with inpainting. Let me
show you another example where I changed the dress to black. You can change your
object or item to many different colours, but sometimes it might not work right
away. You may need to tweak the CFG, denoise and prompt to get the result you want.
Therefore I would like to emphasise: play with the KSampler settings and the prompt. This will help
you learn how these settings affect your image. Now let me demonstrate this with hair. As you can
see, I've taken the woman in the black dress and now I want to change her hair to black as well. I
have adapted the mask to her hair and the prompt now includes "black hair". When generating,
you may notice that it doesn't work correctly the first and second time. So I increased the CFG
to give the prompt more influence on the result, and as you can see, it worked right away.
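For anyone curious why raising the CFG strengthens the prompt's influence, here's the standard classifier-free guidance formula in a few lines of Python. The scalar toy numbers stand in for the model's noise predictions; this is the general formula, not ComfyUI's exact implementation:

```python
def cfg_guidance(uncond, cond, cfg):
    """Classifier-free guidance: push the prediction away from the
    unconditional result and toward the prompt-conditioned one.
    A larger cfg value means a stronger pull toward the prompt."""
    return uncond + cfg * (cond - uncond)

uncond, cond = 0.2, 1.0
print(cfg_guidance(uncond, cond, 1.0))  # cfg = 1 reduces to the conditioned prediction
print(cfg_guidance(uncond, cond, 7.0))  # higher CFG pushes further toward the prompt
```

So when a prompt like "black hair" isn't taking effect, increasing CFG amplifies the difference the prompt makes at every sampling step.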
I will also show you how to correct errors, such as when not all of the hair is placed correctly. But
first, here's another example with blue hair. If you have any questions about
ComfyUI, feel free to ask them in the comments or better yet on Discord
where I can help you more effectively. Now I'll show you how to fix inpainting
errors. You can do this, for example, by masking the error areas, like in my
case where the hair seems to be floating in the air. Then I simply regenerate
the image and the errors are gone. I could reposition the hair in those
areas, but I will leave it as it is. If I did want to have the hair
there, I would have to mask the entire hairstyle and make some adjustments in
the prompt, but for now this is sufficient. Next, I'm going to show you how to inpaint
objects. I'm going to demonstrate this using my workflow, which is provided to all my
Patreon supporters. Of course you don't need this and can continue to use your own
workflow, but for those who support my work on Patreon, I am now making this workflow
available to them as a special thank you. I'll demonstrate this topic by inpainting
sunglasses on the woman's face. First, I draw a mask and add "sunglasses" to my prompt, then I
apply my settings. As you can see, it doesn't work right away after the first generation, but after
generating and changing the CFG several times, I finally get the right result. Always
experiment with the values and learn—don't forget that. Afterward, I upscale the image,
and as you can see, we achieve a good result. Now that you've understood how it works,
let's move on to changing the background. It's nothing special; you already know the
basics, but I'll demonstrate it anyway. To change the background, you simply need to
mask it and specify in the prompt how you want it to look, for example, "white wall." Then,
just generate the image, and as you can see, the background is now white. Even if there
is an error, you already know how to fix it. And just like that, we're done with the topic
of inpainting. If you have video suggestions, please write them in the comments
and don't forget to check out my ComfyUI video series if you haven't
already. See you in the next video!