[AI Tech] How to Use StableDiffusion ControlNet (en sub)

Video Statistics and Information

Captions
Hello, this is PD Jo. Today we are going to learn about Stable Diffusion's ControlNet. ControlNet is a neural network structure that adds extra conditions to Stable Diffusion and thereby controls the diffusion model.

After launching Stable Diffusion, copy the address I left in the description box, paste it into "Install from URL" on the Extensions tab, and install the extension. Once installation is complete, click "Check for updates" on the Installed tab, then click "Apply and restart UI".

You also need to download the models ControlNet will use. The models are available on both Hugging Face and Civitai; the addresses are in the description. The Hugging Face models are the originals, so the files are large, while the Civitai versions are pruned and take up much less space. There is no real difference in use, so download from whichever source suits your disk space. There are eight models in total, and you need to download every model you intend to use. Put the downloaded files in the folder where Stable Diffusion is installed, under extensions/sd-webui-controlnet/models (a scripted download example appears after this part of the transcript).

When the Stable Diffusion UI restarts, you will see a ControlNet panel added below the txt2img settings. Click it to expand, and there is a slot for an image: drag in the image you want to reference, then check "Enable" to activate it. If you have less than 8 GB of VRAM, I also recommend checking "Low VRAM".

You can select the model you want from the Control Type options below. First, select Canny. The preprocessor underneath is responsible for analyzing the reference image, and the model next to it reconstructs a new image from that analysis. The models we downloaded earlier appear here; if a model is shown as "None", click the refresh button and select it again. Confirm that the selected type is applied to both the preprocessor and the model before you proceed. I will not cover the other options in detail here; if you have any questions, please ask and I will explain. We will leave the remaining options as they are.

Canny sketches the outlines of the reference image and then creates a new image based on that sketch. The composition stays similar, so use it when you want to keep the layout but change the atmosphere. Enter only simple quality keywords in the prompt, keep the rest of the settings simple, and click the Generate button. The following image was created: it follows the sketch, so the composition is the same, but the atmosphere is completely different.
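If you prefer fetching the model files from a script rather than a browser, a minimal sketch with the huggingface_hub client is below. The repo id and file name are assumptions based on the public ControlNet 1.1 release, not the specific links from the video description; adjust them to whichever source you chose.

```python
from huggingface_hub import hf_hub_download

# Assumed repo/file names from the public ControlNet 1.1 release on Hugging
# Face; swap in the links from the video description if they differ.
# local_dir is the folder the video names inside the WebUI installation.
hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11p_sd15_canny.pth",
    local_dir="extensions/sd-webui-controlnet/models",
)
```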
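For readers who want to script the Canny workflow itself, here is a minimal sketch using the diffusers library instead of the WebUI. The checkpoint ids and the file name reference.png are illustrative assumptions; the video itself works entirely inside the UI.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# "Preprocessor" step: edge-detect the reference image, as the WebUI's
# canny preprocessor does. reference.png is a placeholder file name.
ref = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(ref, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# "Model" step: attach a Canny-trained ControlNet (assumed checkpoint) to a
# Stable Diffusion 1.5 pipeline, mirroring the preprocessor/model pairing.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # rough script-side analogue of "Low VRAM"

# Simple quality-only prompt, matching the settings used in the video.
image = pipe("best quality, highly detailed", image=control_image).images[0]
image.save("canny_result.png")
```

Depth works the same way: replace the edge detector with a depth estimator and load the matching depth checkpoint.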
Next, let's use Depth. Depth is, literally, a function that uses depth: where Canny analyzes the image through an outline sketch, Depth analyzes it through a depth map. It is useful for reference images with a clear sense of depth, like this one. Confirm that depth is applied to both the preprocessor and the model, click the Generate button, and check the result: the sense of depth is maintained, but the image has a completely different atmosphere.

Next, let's use OpenPose. OpenPose references only the posture of the human body in the reference image, and it is the feature I personally use the most. When the AI cannot render a specific pose accurately, bring a photo of a model in a similar pose and generate the image with that pose as the reference. The pose we will refer to this time is this one. After confirming that openpose is applied to both the preprocessor and the model, click the Generate button: it is a completely different place with a different person, but an image with the same pose was created (scripted equivalents for OpenPose and MLSD appear after this transcript).

Next, let's use MLSD. MLSD detects straight lines and extracts only those lines, so it is mainly used for linear subjects such as interior or building design. I will select MLSD and use a bedroom photo as the reference image. Confirm that mlsd is applied to both the preprocessor and the model and click the Generate button. If you check the analyzed image, you can see that only the straight lines were extracted, and a new image was created by reconstructing the composition from those lines.

Next, let's use Segmentation. Segmentation separates the reference image into colored regions and analyzes them, so it is mainly used for landscape images with many distinct areas. Add a reference image, confirm that segmentation is applied to both the preprocessor and the model, and click the Generate button. The colors of the reference image are separated and extracted, and a new image is created from that segmented map.

Today we looked at the most used functions. If you have additional questions or need more information, please leave a comment. When new features are added, I will be back with more information. That's all for today's video; I hope you found it useful and fun. Please subscribe, and see you soon with more advanced videos. Thank you.
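To round out the OpenPose section, a comparable sketch outside the WebUI is below; the controlnet_aux package bundles the same kind of pose detector the extension uses. The repo ids and pose.png are again illustrative assumptions.

```python
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Pose "preprocessor": extract only the body posture from the reference
# photo. pose.png and the Annotators repo id are assumptions.
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = detector(Image.open("pose.png"))

# Pair the pose map with an OpenPose-trained ControlNet (assumed checkpoint),
# just as the WebUI pairs preprocessor and model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # helpful below 8 GB of VRAM

image = pipe("a person in a park, best quality", image=pose_map).images[0]
image.save("openpose_result.png")
```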
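The MLSD step follows the same pattern as the Canny sketch earlier, only with a straight-line detector. This is a hedged sketch with assumed checkpoint ids and a placeholder bedroom.png.

```python
import torch
from PIL import Image
from controlnet_aux import MLSDdetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Line "preprocessor": keep only the straight lines of the reference photo.
detector = MLSDdetector.from_pretrained("lllyasviel/Annotators")
line_map = detector(Image.open("bedroom.png"))  # placeholder file name

# Assumed MLSD-trained ControlNet checkpoint paired with SD 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_mlsd", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()

image = pipe("a cozy bedroom interior, best quality", image=line_map).images[0]
image.save("mlsd_result.png")
```

Segmentation is analogous but needs a semantic-segmentation model to produce the colored region map.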
Info
Channel: 조피디 연구소 JoPD LAB
Views: 593
Keywords: ai, ai tech, ai generate, jopd, stable diffusion, controlnet, sd controlnet
Id: ZWoHflz2mNU
Length: 5min 31sec (331 seconds)
Published: Fri Sep 08 2023