Hello, this is Neural Ninja. In this video, I'll show you how to erase a character from an image and fill in the background naturally. I'll introduce the Rembg module, which makes it easy to separate the character from the background, then erase the character quickly with Lama, and finally show you how to clean up the erased area naturally using ControlNet Inpaint. If you're using Colab, please check that the additional ControlNet models and the IPAdapter model have been downloaded.

Let me load an image. We'll start with the Rembg module. As the name suggests, Rembg is a module that removes backgrounds easily, and there are several custom nodes that implement it. Let's connect the image and check: the background is erased and only the character remains. The node also outputs a mask, so we can use that mask to erase the character instead.

Next, the Lama module. It quickly erases selected objects, and again there are several custom nodes that implement it. I'll feed in the image and the mask here. Lama was trained on 256-pixel images, so it doesn't work well on images that are too large; I'll resize to a suitable size first. This node resizes the image to a maximum size while keeping its aspect ratio. I'll set it to 768 to match the SD 1.5 model we'll use later. Connect the resized image, then reconnect the Rembg node and connect the resized mask. This lets you erase objects quickly.

Now let's correct this image with SD, starting from the basic workflow. Add a checkpoint and a prompt, and feed in the image processed with Lama. I'll use a Set Latent Noise Mask so that only the white part of the mask is changed, set the sampler settings appropriately, and set the denoise to around 0.65. Let's generate and check. It came out well. When Lama erases precisely, as it did here, the result is fine, but when the erasure isn't handled cleanly, the quality can drop significantly.
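The resize step above can be sketched in Python. This is a minimal sketch of what an aspect-ratio "resize to max size" node does, assuming it scales the longer side to the target (768 here, to match SD 1.5) and snaps each side to a multiple of 8, which SD latents require; the function name is mine, not the node's.

```python
def resize_to_max(width, height, max_side=768, multiple=8):
    """Scale (width, height) so the longer side becomes max_side,
    keeping the aspect ratio and snapping each side to a multiple
    of 8 (SD latent dimensions must be divisible by 8)."""
    scale = max_side / max(width, height)
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h

# A 1024x1536 portrait becomes 512x768 for the SD 1.5 pass.
print(resize_to_max(1024, 1536))  # (512, 768)
```

Lama then runs on this smaller image, and the Rembg mask is resized the same way so image and mask stay aligned.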
Let's give it a clear prompt for what to draw. I'll enter the negative prompt by hand and only change the positive prompt text, which I'll fill in automatically as tags. This node extracts a prompt from an image. I'll change the model to ConvNeXt, raise the threshold so that only the broad, major features are entered, and set the character threshold to the maximum value to block character tags. Now let's generate. It came out well: even where Lama left a blur, random images no longer appear.

To make the result more natural, let's add ControlNet Inpaint. Add an Apply ControlNet node and choose the Inpaint model. This is inpainting through ControlNet, and I think it's a bit better than the default inpaint. I'll connect the mask and the image, then connect the ControlNet between the prompt and the sampler node. It came out well. Let me expand the mask range a little. It came out well again; it blends in, but the color is a bit disappointing.

This time we'll fix that by adding an IPAdapter. Simply put, IPAdapter is an image prompt: it lets you use an image as a prompt. Add an IPAdapter node, choose the basic model, connect the ClipVision model as well, and connect the image. Let's hook everything up and generate. It came out well.

Since the image is still too small, let's upscale it. I'll quickly add a node and connect it, this time leaving out ControlNet Inpaint and connecting the prompt directly. It came out well.

Lastly, we'll composite only the generated part back onto the original image, so that everything outside the generated region matches the original exactly. Set the original image first, then resize the generated image to the original's size with an Upscale Image node. Add a GetImageSize node to read the original image's size, and set up the image.
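The tagger thresholds above can be sketched as a simple filter. This assumes, as WD14-style taggers do, that each tag comes back with a confidence score and that general and character tags are scored separately; the function and variable names here are mine, not the node's.

```python
def filter_tags(general, character, general_threshold=0.65, character_threshold=1.0):
    """Keep general tags whose score clears the (raised) threshold,
    so only broad, major features remain; a character threshold of
    1.0 blocks character-name tags entirely, as in the video."""
    kept = [tag for tag, score in general.items() if score >= general_threshold]
    kept += [tag for tag, score in character.items() if score >= character_threshold]
    return ", ".join(kept)

# Hypothetical scores: only high-confidence general tags survive.
prompt = filter_tags(
    {"1girl": 0.99, "outdoors": 0.80, "hair_ribbon": 0.40},
    {"some_character": 0.90},
)
print(prompt)  # 1girl, outdoors
```

Raising the general threshold this way keeps the positive prompt to the scene's major features, which is what stops Lama's blurry leftovers from being re-interpreted as random objects.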
I'll connect the mask and set it so only the white part is covered, and blur the mask to soften the edges. It came out well; let's compare it with the original image.

Now let's check other images. This time we'll compare real photos, and I'll check a few more images as well. In most cases Rembg works well, but it occasionally misrecognizes things slightly. For those cases, we'll add a manual correction to the selection mask. Add a MaskComposite node so we can combine two masks. You could connect it directly, but the automatically generated mask was created after resizing, so the sizes need to be matched first. Since there's no node that resizes a mask directly, I'll convert the mask to an image, resize it, and convert it back to a mask. As before, I'll match the size using Upscale Image and GetImageSize. Select the OR operation to combine the two masks, and move all the nodes that were connected to the existing mask over to the combined one. Now let's select the missed areas in the mask editor. It came out clean.

The workflow is now complete, so I'll test a few more images. It came out well; let's compare it with the original. Next, an image with a wide generation area; let me generate it again. Then an illustration-style image; I think it came out well. Lastly, let's check a more complex image. Adjusting the Lama blur value when regenerating can also help, so I'll try generating it again.

I hope this video helps. I'll be back with another good video next time. Thank you.
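The mask fix above, resizing the automatic mask and then OR-ing it with the hand-drawn correction, can be sketched in plain Python. Masks here are lists of 0/255 rows; the nearest-neighbor resize stands in for the image-convert/resize/convert-back detour, and both function names are mine.

```python
def resize_mask(mask, new_w, new_h):
    """Nearest-neighbor resize of a binary mask (rows of 0/255)."""
    h, w = len(mask), len(mask[0])
    return [[mask[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

def composite_or(mask_a, mask_b):
    """Combine two same-sized masks with OR, like a MaskComposite
    node set to the 'or' operation: a pixel is selected if either
    mask selects it."""
    return [[max(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]

auto = [[255, 0], [0, 0]]               # tiny 2x2 stand-in for the Rembg mask
manual = resize_mask([[0, 255]], 2, 2)  # hand-drawn correction, resized to match
print(composite_or(auto, manual))
```

OR is the right operation here because the manual mask only adds the regions Rembg missed; it never needs to un-select anything.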