Wow, I heard today's feature introduction is a great one. Please tell us what it is. Today I've brought a very useful extension. It's fairly difficult to use, but the results are so good that paying attention to this video will really pay off. It's an extension called DDSD. I've installed it in the PC version of Stable Diffusion WebUI, like this. When you click it, the options appear, and if you're seeing them for the first time, you'll probably be very confused. First of all, I leave this unchecked. I'm going to try it out on image generation. To use the extension you need to install it, and installing it is no different from any other extension. Since it isn't listed in the built-in extension index, you get the URL and install from that; I'll put it in the description ("See More") below. When it first came out, you also had to install a separate program, as you can see in this folder, but the developer seems to have changed that, so now you can run it just by installing the extension. In my case it runs without problems on this version, so use that as a reference; on some versions you can install it but it won't run. I'll put the usage notes for that in the description as well, so you can refer to them, and if it still doesn't work, let's figure it out together. Honestly, I don't know all the installation details, but I'll answer what I know. I'll create the prompt by importing one I made earlier. To keep things as simple as possible, I'll generate with only face restoration and no high-res fix. Yes, the image came out like this. Let's try a cowboy shot. I'm not sure I can use this image, so I'll run it again and dress her up. Yes, I'm going to work with this one, and fortunately the hands didn't come out properly. What I'm showing today includes hand correction, so it's actually nice to have a badly generated one to reuse as a sample.
Then we'll use this image as it is and fix the seed number so we can compare. Now I'm going to use the function. DDSD combines the existing ddetailer and SD upscale functions, and looking at it, you can also use a random ControlNet function. One thing to note: if you check an option here, you can no longer use it; think of the checked state as inactive. I won't use the upscale either, since it takes a long time, so I'll proceed without it. The detailer is unchecked, so that's the function I'm actually using. You can see a Detailer Sampler option: you can keep it the same as the main sampler or set it separately, and I'll set it to "same". Then there's the detailer SAM model and the detailer DINO model. I was really confused by this part, and I can't claim to understand the concepts clearly. There's only one model to choose for each, so you can use the defaults as they are, but I'll explain briefly. I've mentioned this in a video before: SAM is the Segment Anything Model announced by Meta AI. You can think of it as classifying and analyzing every object in an image. The detailer DINO model builds on GroundingDINO, a technology that identifies objects from text prompts. For example, in our image, if you type "hair" it detects this part, if you type "eyes" it detects the eyes, and the same goes for hands and clothes. With the two combined, a much more precise detection is possible, and the function can then modify exactly that part. ddetailer was very specialized for face correction; this means you can now produce all kinds of detailed fixes with a single feature. Have you followed so far? I've tried to explain it as simply as possible, but I'm not sure how clear it was.
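To picture the two-stage idea described above, here is a toy sketch of my own (not the extension's actual code, and the boxes are made up): a text-prompted detector plays GroundingDINO's role of finding a box per term, and a segmenter plays SAM's role of turning that box into a pixel mask that is then inpainted.

```python
import numpy as np

def detect(term, boxes):
    """Stand-in for GroundingDINO: look up a box (x0, y0, x1, y1) for a text term."""
    return boxes.get(term)

def segment(box, height, width):
    """Stand-in for SAM: produce a binary mask from a box.
    (The real SAM traces the object's outline inside the box.)"""
    mask = np.zeros((height, width), dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def masks_for_prompt(detect_terms, boxes, height, width):
    """One mask per term in a detect prompt like 'hair;eyes'."""
    return {t: segment(detect(t, boxes), height, width)
            for t in detect_terms.split(";") if detect(t, boxes)}

# Hypothetical detections on a 512x512 portrait
boxes = {"hair": (100, 0, 400, 180), "eyes": (180, 150, 330, 200)}
masks = masks_for_prompt("hair;eyes", boxes, 512, 512)
```

Each mask then gets its own inpainting pass, which is why one feature can fix hair, eyes, and hands in a single run.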
Yes, but now comes the most difficult part: the detect prompt, along with its positive and negative prompts. It's a little different from the prompts we already know. If you take it step by step you'll get used to it, but it can be very intimidating at first, so let me switch to another screen for the explanation. Now, this is the part I called the tips for writing a detect prompt. If you just start reading it, your head starts spinning, so let's go through it one piece at a time. If we're talking about a portrait, we can subdivide it into the face, the hands, the eyes, the mouth, and so on. The detect prompt has a fixed format: you write "face", then a colon, then the SAM level, then the threshold, inserting a colon between each field. The dilation factor comes next, followed by a dollar sign. There are a lot of separators and it can look complicated, but put simply: each separator partitions off one field, and each field is a specific factor for that detection. When I want to tweak the image for the face, I adjust the numerical values inside those fields. And if you want to change not only the face but also another part, like the hands, the fields within one part are separated by colons, while a whole new body part is separated by a semicolon. Once you know what each separator does, I think you'll get used to it quickly. Of course, I don't think this is easy; I felt the same way. Also, if you don't want to supply a value, say the denoising value, you still have to keep its separator in place, so the remaining fields stay in the right positions. Rather than explaining what each value means in the abstract, I'll explain while actually generating images, because if you're learning this for the first time, none of it means anything yet.
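The separator rules above can be made concrete with a toy parser. Note this grammar is my reconstruction from the explanation in this video (roughly `name:sam_level:threshold:dilation$denoising:cfg:steps`, parts joined by `;`, empty fields keeping their separators); check the extension's README for the authoritative syntax.

```python
# Assumed field order, reconstructed from the video's walkthrough
FIELDS = ("sam_level", "threshold", "dilation", "denoising", "cfg", "steps")

def parse_part(part):
    """Split one part like 'face:0:0.4:64$0.3:9:28' into a name plus fields."""
    detect_side, _, inpaint_side = part.partition("$")
    tokens = detect_side.split(":") + (inpaint_side.split(":") if inpaint_side else [])
    name, values = tokens[0], tokens[1:]
    # An empty field (kept separator) means "use the default".
    fields = {k: (float(v) if v else None) for k, v in zip(FIELDS, values)}
    return name.strip(), fields

def parse_detect_prompt(prompt):
    """Parse a full detect prompt; ';' separates body parts."""
    return dict(parse_part(p) for p in prompt.split(";") if p.strip())

parsed = parse_detect_prompt("face:0:0.4:64$0.3:9:28;hand:2::64$0.4:9:28")
```

The `hand` part above shows the empty-field case: the threshold is omitted but its colon stays, so dilation still lands in the right slot.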
Based on what the developer posted, I wrote my own version with small modifications; I hope it isn't too far off. Now, how do I change things? Shall we change the hair first? For hair, you put the part you want to change into the detect prompt; this is the phrase the developer gave as an example. I want to change it to red hair, so in the positive prompt you put a normal prompt, the kind we already know, and each part's prompt is separated by a semicolon. I ran it once and the image came out, but I really should show you the intermediate steps, because it displays the regions as they're being detected; I'll capture that on the next run. Yes, let's look at the results as images: first the original, then the second one after it. It changed things quite subtly and added more natural lines to the torso, which I didn't ask for; the hands are still a bit lacking. The hair color changed as requested. Next, I want to change the eyes. Then I need to put the detection factors in again: I put a semicolon in the detect prompt and add "eyes", and here, too, I now add a prompt about the eyes after a semicolon. Let's copy the hair prompt and change the color; let's make these blue. I'll change it to blue, and the first entry is still the hair. Now, let's see what happens when I change it like this to suit today's topic. (Please watch the detection process with me.) There, the hair region was detected, and now the eye region is being searched. How were the results reflected? I also left the factor section for the eyes empty; I'll show what happens when you don't describe it at all. Looking at the image, I asked for blue eyes, and the AI seems to have reacted to some extent while remembering the existing information. And something really remarkable happened here: you couldn't see the eyes well before, and they were cross-eyed, but now that's been corrected. I think this is a really good correction technique.
Now, let's go back: we'll leave the eyes the way they were before, and this time we'll change the hands. In the case of hands, there's nothing to recolor, so it's better to try something like this: I've written "normal hand" as the prompt. The detect term here should likewise be "hand". So we'll detect these three parts and modify the image; presumably, with this many entries and more conditions added, the graphics card will start to struggle. And since we're doing the hands this time, I'll also add a negative prompt. I'll use this one; it's fine to put in things you'd normally use as embedding files, for example embeddings like bad-hands-5, loaded from the same place as LoRA files. Let's see how it turns out together. As for the options lower down, I'll leave "inpaint at full resolution" as it is; if you enable it, the image comes out very abnormal, so just leave it alone. And in the case of the detection blur, you can adjust the value if you want finer control for each image. (You'll see the process again.) The image is out; let's take a look. As for the hands, the right hand is fine, and the left one is still a little off. But honestly, I think this much is fixable in Photoshop; it's just missing one finger. Since one finger is missing, do you remember the prompt technique we covered a little while ago? I'll try applying it: this is where the SAM level value goes. Entering 0 widens the scope of application, so let's set it to 0 and take a look at what the result will be.
(It detects the hand again.) Yes, I'm still not satisfied with the image. For comparison, this time I'll change the value to 2. The image has now come out, and looking at it, the number of fingers has shrunk even further; it's down to three, and the image is actually more distorted. So I think increasing this number makes it worse, because the coverage is reduced. I'll leave it as it was. Also, in my case, if you pile up too many of these negative prompts, the results come out weird again, so I'll just remove the one I added earlier and run it again in this state. Now this image came out, and it's back to mostly normal; I think I can be fairly satisfied with this one. At some point she's even wearing earrings again. Now, let's analyze the images again. This is the first image so far. After that came the hair, then the eyes with nothing described, and then the runs where I changed the values for the hands. You can see the fingers gradually disappearing and distorting as the level value changes, and then the last one. The more I worked on it... right, as the clothes changed, the outfit turned into a crop top like this. I can definitely see that the hands have improved a little in detail. The face is now almost the same, so let's change the face once more here. Coming back to the field, I'll have to add another detect entry: I'll put a semicolon and add "face". Yes, I'll just copy the prompt over here and put it in like this; I'll try it with "pretty face". You can put whatever words you like or have planned for each part; this isn't a fixed rule, so let's proceed this way. However, the reason I wanted to do the face is that I wanted to show you the factor part from earlier. Let me change the factors here; after changing it, it looks too complicated, so let's zoom in. Now, this is the SAM level: let's set it to 0. Next, this is the threshold.
Since this part has to be tuned by trial and error, I'm not really aware of how the threshold affects things in particular, so let's move on. Next is the part called dilation; it's a pixel value. So when a region is detected, this command expands it, for example by 64 pixels, before processing. And then this part is the denoising value: when inpainting, the higher the intensity, the more creative (and different) the result. So let's use either 0.3 or 0.2. Then this part is the CFG scale; you know the CFG scale well, so let's try around 9. Next is the sampling steps value, with a usable range of 0 to 150. However, the higher you push it, the more overworked the image gets and the more grotesque it becomes. I usually use something around 20 to 30, so I'll set it to that. Now I think you've somewhat understood what each field in the prompt means. But seeing it once isn't enough; you have to keep trying it. I haven't used it that much myself, so it's something that needs some practice. So let's run it and change the face part. (The sound of the graphics card heating up... I won't survive the summer, haha.) The image came out, and looking at it, the face has indeed become prettier. Ah, but both hands have changed strangely again; let me redo that part. Now, let's do a comparison through the images. This was just before, and then it changed like this: the face has changed dramatically. As I changed the prompt values earlier, the degree of change really varied, and the background also changed a lot depending on the factor values. If you keep in mind that they have that much influence, this makes a good sample. And now I'm going to fix the hand part. Yes, in this state I'll send the image over to inpaint; when you do, the information carries over like this, and we'll use DDSD here as well, but I need to change the conditions a little.
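To make the dilation value concrete: when a part is detected, the region is expanded by that many pixels before inpainting, so the fix blends into its surroundings. A minimal sketch of that expansion applied to a detected bounding box, clamped to the image edges (my own illustration, not the extension's code):

```python
def dilate_box(box, pixels, width, height):
    """Expand a detected box (x0, y0, x1, y1) by `pixels` on every side,
    clamped so it never leaves the image."""
    x0, y0, x1, y1 = box
    return (max(0, x0 - pixels), max(0, y0 - pixels),
            min(width, x1 + pixels), min(height, y1 + pixels))

# A hand detected on a 512x512 image, grown by the 64-pixel dilation
# value used in the video:
grown = dilate_box((100, 120, 200, 220), 64, 512, 512)
```

A larger dilation gives the inpainter more context around the hand, at the cost of repainting more of the surrounding image.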
I'm not going to use the upscale here, so I'll check it (remember, checked means off). The img2img random ControlNet isn't available in this space anyway, so I'll disable that one too and leave only the detailer active; leaving it unchecked activates it. Yes, I'll use the models as they are; there's nothing to change. I've already entered just the hand part: I brought the prompts over from text-to-image and kept only the hand entry. By typing this, you can edit the hand area. You have to paint the mask here; I'm changing the hand, so I'll mask it like this, and I don't think there's anything else that needs changing in this part. Let's keep the whole image and its proportions and see how it changes. (Whirrrr.) The image came out, but I'm not that satisfied. Let's expand the masked range a little more and generate once again. Yes, the denoising intensity is a bit high here; yes, I think that's the issue. Let's try 0.5. I'll check the image again. I guess this isn't too bad; this kind of finger proportion seems like something you can adjust in Photoshop. If you're not satisfied, I think you can just keep rerolling until a good image comes out. So, we've looked at the functions. You can see how much this really extends the old ddetailer function, with things like changing hands or changing faces. It seems like a really groundbreaking extension. If you look here, this is the developer; this is the person who made the program, so I'd like to thank them again. If anything I explained is wrong, please ask in the comments. This program is really good; I've been looking for something like it for a long time. Yes, it seems it can cover those shortcomings we haven't been able to address much until now, so I'm really looking forward to it, and I think I'll keep using it. If today's video was useful, please subscribe. Have a nice day.
I will bring you good information again. Thank you.