Stable Diffusion ControlNet Canny, Depth supplementary tips / optimal settings 7

Video Statistics and Information

Captions
At the end of yesterday's video I said I would shoot a supplementary one, and I wanted to upload it right away. We looked at ControlNet's Canny, HED, and other models, and I'd like to give you one more tip on how to use them :) Let's take a look together.

I'm going to transform this image and play with it in img2img. I've loaded the earlier picture into the input window here, copied over its seed value, and now I want to change the eye color or hair color while changing everything else as little as possible. Next I'll enable the ControlNet panel and put the same image there as well, and we'll proceed from this state. Today we'll use the Canny and Depth models together. If a single model works well, that's great, but I tend to combine them to get a more accurate image. When you run the preview like this, you can see how the Canny model analyzes the image.

What I actually want to change is the eye color or the hair color. To do that, I'll load another unit: using Multi-ControlNet, I'll add the Depth model here. A little different from yesterday, the number of options has increased. I reinstalled the extension before recording, and it has been updated with more features; if yours doesn't look like this, reinstall it and you will see all these new items too. With so many options appearing, I feel like I have more content to cover again :) Anyway, if there is any additional good information, I'll let you know. Today, as planned, we'll only look at the Canny and Depth models. I'll select Depth here and attach the image; when you run the preprocessor, yes, it comes out like this. I won't touch the numbers for now; let's see what comes out first and then adjust the values as needed.

If you change the size ratio and generate again, the image will not come out the same. In img2img that number becomes another variable, so you don't get an identical image, but because the pose is fixed by Canny and Depth, the pose is maintained while the feel of the image changes a lot. The drawing style has changed completely; it has basically become an anime style. So, to reduce that fluctuation, let's set the denoising strength to 0.2 and proceed. Now the image changes only a little. And what happens if I set the denoising strength to 0? There is still a slight change, but the degree of change has decreased significantly. That's why I showed this process: keep the denoising strength as low as you can. We've now confirmed how much the result changes depending on the denoising strength. With it this low, no matter what prompts I put in here, not much will change, so in image-to-image it seems more useful to bring over the original prompt and raise the denoising strength slightly.

I'll bring the whole prompt over as it is. With everything carried over it came out with brown hair at first, so as an example I'll change it to blue hair. Change it to blue hair and give it some weight; I'll increase the weight like this. In this state I'll proceed with the denoising strength at about 0.3. It is also useful to increase the sampling steps when using ControlNet. I generated it like this, and the image came out. Let me zoom in. You can see the brown hair has picked up a little of the blue color.
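For anyone who would rather script this step than click through the WebUI, here is a minimal sketch of the same idea using the Hugging Face diffusers library: img2img with Canny and Depth ControlNets stacked, a fixed seed, and a low denoising strength. The model IDs, file names, and seed are illustrative assumptions rather than the exact settings from the video, and diffusers does not parse the WebUI's (blue hair:1.4) weighting syntax, so the prompt below is plain text.

# Minimal sketch (assumption): multi-ControlNet img2img with diffusers,
# mirroring the WebUI workflow described above. Model IDs and file names are examples.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

canny = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
depth = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",          # any SD 1.5 checkpoint can stand in here
    controlnet=[canny, depth],                 # Multi-ControlNet: Canny + Depth together
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("portrait.png")        # the original picture (hypothetical file name)
canny_map  = load_image("portrait_canny.png")  # precomputed Canny edge map
depth_map  = load_image("portrait_depth.png")  # precomputed depth map

generator = torch.Generator("cuda").manual_seed(1234)  # fixed seed, like reusing the seed in img2img

image = pipe(
    prompt="portrait of a woman, blue hair",   # plain prompt; use Compel if you need weighting
    image=init_image,
    control_image=[canny_map, depth_map],      # one conditioning image per ControlNet
    strength=0.3,                              # denoising strength ~0.3, as in the video
    num_inference_steps=40,                    # a few more steps tends to help with ControlNet
    generator=generator,
).images[0]
image.save("result.png")

Raising the CFG scale, as in the next step, corresponds to the guidance_scale argument of the same call; Dynamic Thresholding itself is a separate WebUI extension with no built-in diffusers equivalent.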
The blue did not cover all of the hair, so to increase the influence of the prompt we need to raise the CFG scale. I'll raise it to about 15, and since that value is now quite high, I'll activate the Dynamic Thresholding function at the same time. It's a little confusing because there are so many settings; I'll leave the rest alone and generate in the same way. The image came out, and you can see that more of the hair is dyed than before. In fact, changing the hair to blue entirely is easier and more useful with the inpaint function than with this method, but as I said, we're using this approach to change individual elements while the pose is held fixed by Canny and Depth.

Now let's compare how much the image has changed after everything we did. This is the final finished picture. The color has taken better than before: the hair has changed while the overall image hasn't changed much, and the eyes have changed a bit. Earlier, when I used a high denoising value, the face itself changed too much, like this. And this is the original.

Send the image to inpaint and mask only the hair here. I'll explain this part in detail later in the outpainting and inpainting videos; for now I just want you to try it once. Since I don't have a lot of time, I'll just roughly paint over the hair like this. We only want the hair color to change, so it's fine to leave the denoising strength as it is and keep the seed number as it is; these values are still stored from earlier, so I'll let them be. I don't need the whole prompt here, so I'll just write "blue colored hair" and generate. Yes, there it is. The inpaint function makes this so much more convenient. I'll cover the rest next time.

Today I covered tips on image transformation using Canny and Depth. This is not the only answer for these kinds of transformations; there are many ways, and I think you can get better results with your own know-how. What I've shown is one methodology, and I hope it serves as a small tip. If it was helpful, please like and subscribe. Have a nice day. Thank you.
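The inpainting step at the end can be sketched the same way. This is only an illustrative approximation of the WebUI workflow, assuming an SD 1.5 inpainting checkpoint and a hand-painted mask image (white over the hair, black elsewhere); the file names and seed are placeholders, and in the WebUI the denoising strength would simply stay at its previous value.

# Minimal sketch (assumption): masked hair recolor with a diffusers inpainting pipeline.
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",    # example SD 1.5 inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("original.png")        # the untouched original (hypothetical file name)
mask_image = load_image("hair_mask.png")       # rough mask painted over the hair only

generator = torch.Generator("cuda").manual_seed(1234)  # keep the same seed as before

image = pipe(
    prompt="blue colored hair",                # only the prompt that matters for the masked region
    image=init_image,
    mask_image=mask_image,                     # only the masked area is regenerated
    num_inference_steps=30,
    generator=generator,
).images[0]
image.save("inpainted.png")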
Info
Channel: AI 오프너
Views: 3,147
Id: LsVp6C9bYD4
Length: 7min 52sec (472 seconds)
Published: Mon Apr 17 2023