Stable Diffusion ADetailer! A helper for mangled faces and mangled hands!

Captions
Hello! The weather is very hot, and sitting in front of the computer I can feel the heat and I'm sweating a little. It's been a while since I talked about Stable Diffusion; looking back, my last video was well over three weeks ago. It's a scary habit: if you skip one upload, you stop uploading altogether. So today I'm going to start by explaining ADetailer.

In some ways it can be difficult and in some ways it can be easy, and both of those perspectives are part of what I want to cover today. But I'll keep it as easy as possible, so it should feel less burdensome. Let's start.

I've mentioned ADetailer briefly in earlier videos. It stands for After Detailer, and it's an extension created by Bing-su. If you look on GitHub there is a much more detailed explanation of how each option works, so you can refer to it as you follow along. Many features have been updated compared to the initial version, and it now includes mask preprocessing, an inpainting function, and ControlNet model support. So, as I said, it looks complicated, but we don't have to rack our brains over everything. I'll show you how to use it today, and I think you can use only the parts that are actually useful.

Rather than building an image together from scratch, I'll explain through images I've already made. Before that: ADetailer units appear as tabs, like ControlNet. At first there will be only one tab; I made two. If you go to Settings you will see ADetailer on the left, and the top item there lets you set the maximum number of models. Five is the limit; I use about two.

The first option there is Save mask previews. If you check it, you can see what the extension detected in the image you generated and what it modified, so it serves as a way to verify the result. Of course, with this enabled, two or three extra images are saved per generation depending on the model, which can complicate file management later, so if you don't need it you can leave it unchecked.

Before going on, let me mention the version I use and so on. Check the version page: across versions the UI may differ, or things may behave differently if the extension hasn't been updated. It changes a lot over time, so don't panic; if you come back in a few weeks it may be different again. Please keep that in mind while watching.

I used the ReV Animated checkpoint model with a LoRA from CivitAI, applied with the same prompt, so please use that as a reference. You can adjust this part to your personal taste. Today's focus is ADetailer, so I'll tick Enable ADetailer and pick a model. There are a lot of models in the list. They are installed together with the extension, so there is no need to download them separately. Through these models, ADetailer can re-render parts of a person, such as the face or hands, in much finer detail. The second tab works the same way: remember the settings we just went through? You can adjust the same items for a second model, independently.
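For reference, here is a minimal sketch of driving this same setup from a script, through the AUTOMATIC1111 WebUI API instead of the UI. It assumes a local WebUI launched with the --api flag and the ADetailer extension installed; the args layout follows the extension's documented API format, but treat the exact key names and endpoint as assumptions to verify against your installed versions.

```python
# Minimal sketch: txt2img with one ADetailer face unit, via the local
# AUTOMATIC1111 API. Assumes the WebUI runs with --api and ADetailer is
# installed; verify key names against your ADetailer version.
import base64
import requests

payload = {
    "prompt": "portrait of a woman, detailed face",
    "negative_prompt": "lowres, bad anatomy",
    "steps": 25,
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                True,   # enable ADetailer
                False,  # don't skip the first generation pass
                {
                    "ad_model": "face_yolov8n.pt",  # face detection model
                    "ad_confidence": 0.3,           # detection threshold (default)
                    "ad_denoising_strength": 0.4,   # inpaint denoising (default)
                },
            ]
        }
    },
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()
with open("adetailer_out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```

The 0.3 confidence and 0.4 denoising strength here are the defaults discussed next in the video.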
To briefly explain this area: if, for example, I want to fix the face, I can write a positive prompt and a negative prompt just for the face. They don't need to be long; you only need to cover the main points. The numbers below them are default values, so there is no need to touch them. When the result differs from what you intended, you may have to fine-tune them a little, but you can usually leave them as they are; the defaults were chosen well, so you can use them as-is.

I'll close the Detection section. After that comes Mask preprocessing. There isn't much to change here either, but since it defines a detection range, you can widen or shrink it a little if you want. It's mainly used for faces, so adjust the numbers with the size of the face in mind. There is also a Mask merge mode, which I covered in the inpainting video; roughly, it determines whether to keep the original or paint over it again. I won't go into it specifically here, so I'll close this section too.

Then Inpainting. This is also a part I used a lot in the inpainting video, and as you can see, the same functions are imported and used here. The most important one is the denoising strength. It is set to 0.4, and there is no need to change it significantly. These settings look complicated, but please remember that we don't have to change them drastically.

The next part is where you can select a ControlNet model to use inside ADetailer. I honestly don't know exactly how it works, so I'll skip it; if anyone knows how to use this part, please let me know.

Now let's look at the slide. In the ADetailer model list earlier there were several names, about eight. Here are the characteristics of each model. The face models detect the face, and their numbers (8n and 8s) differ slightly. The person models detect the whole body, including the people behind the subject. Here is a result that used one of them: the face was a bit strange before, and the badly rendered part came out complete through ADetailer. But in this case, because it corrects the entire body and not just the face, you can see the shape of the clothes change a little, the belly button change, and other details change too. If you only want to change the face, a face model is the better choice.

That leaves three models I haven't explained yet: the mediapipe_face models. The list was long, so I couldn't fit them all in at once. The fact that there are three is the important part. Let me remind you of the earlier explanation: what does a detection value of 0.3 mean? It's the default. Let's start from 0.3 in Detection; based on that, I'll show how each of the three models detects and what results they produce. With mediapipe_face_full, the face was corrected well. With mediapipe_face_short, the face came out distorted, and likewise with mediapipe_face_mesh the face was distorted. Then let's change the detection value little by little, starting with a very low value of 0.05.
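As a side note on what that detection value actually does: the detector proposes candidate boxes with confidence scores, and ADetailer only inpaints the boxes that clear the threshold. A toy sketch with made-up boxes and scores:

```python
# Toy illustration of the detection threshold (boxes and scores invented for
# illustration): only detections at or above the threshold get inpainted.
candidates = [
    {"box": (120, 40, 260, 190), "score": 0.91},  # clear foreground face
    {"box": (300, 80, 340, 120), "score": 0.22},  # small background face
    {"box": (50, 200, 90, 240), "score": 0.07},   # spurious detection
]

def kept(detections, threshold=0.3):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d["score"] >= threshold]

print(len(kept(candidates, 0.3)))   # 1 -> only the clear face is inpainted
print(len(kept(candidates, 0.05)))  # 3 -> even the spurious box is inpainted
```

This is why lowering the value to 0.05, as tried next, can only add detections, never remove them.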
In that case, the first model still works well, while the second and third still produce a distorted face. Then let's raise the number a bit: 0.3 is the baseline, and I raised it to 0.5. Looking at the results, the first model was still fine, and the second and third were still not. It can be confusing at first, so I'll come back and review it.

So, the three models. I showed the face YOLO and person models earlier, and there is also a hand model that I'll explain separately. Coming back to the point, what I wanted to tell you in the end is this: sweeping the detection value from 0 to 1 revealed no notable outliers. I've only shown a few examples here, but I tried applying each value, and no different results came out. So I think you can simply apply the default value as your baseline.

Then when can mediapipe_face_short and mediapipe_face_mesh actually be used? You're probably curious about that, because clearly they aren't fixing the face. Just in case, I also tried changing another set of numbers: the mask preprocessing values. I wondered whether those models would work properly if I adjusted them, but the conclusion was that no matter how much I changed them, nothing changed. I'm omitting the one-by-one examples; the same result was obtained across dozens of photos. The first model still works, and the second and third are not recognized well. In the end, it's my view that changing these numbers doesn't uncover any big singularity either. It might change if some other variable changes, but please note that this is just my observation.

That said, there was a point where the other two models did work, and that's what I'll tell you now. Can you see the difference between the previous picture and this one? The field of view is a bit wider than a cowboy shot, and with that framing all three models detected the face. You can see the mesh output; the second is short, the first is full. All the mediapipe models worked, and the result was a corrected, pretty face. So the conclusion is that it depends on the size of the face in the image: changing the default settings carries no significant meaning, but when the face occupies a certain proportion of the image, detection becomes possible. That's what I can tell you.

There is a common note that the mediapipe_face models suit photorealistic images well, so let's apply them to a realistic image and check whether all three models work there too. I changed the mask preprocessing value here as well. Starting from the default figures, the first model comes out well and the second and third are a bit strange; even if I lower the number, I still get the same result. But even with photorealistic images, detection works well when the face has the proportions of a cowboy shot; it came out equally well. I used the Henmix model here.

This explanation got a bit long, but what I want to say in the end is this: if you use these three face models, whether for photorealism or anime, you won't have much difficulty. Some cases may require fine adjustment, but I'm saying this in the hope that you'll get the image you want from the default values.
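Since ADetailer's mediapipe models wrap Google's MediaPipe face detector, you can poke at this behavior directly, outside the WebUI. My assumption is that mediapipe_face_short and mediapipe_face_full correspond to MediaPipe's short-range and full-range detectors (model_selection 0 and 1), which would explain why the short model misses faces that are small in a wide shot; treat that mapping as an assumption.

```python
# Sketch: compare MediaPipe's short-range and full-range face detectors on
# one image. Replace "test_image.png" with any image of yours.
import cv2
import mediapipe as mp

image = cv2.imread("test_image.png")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input

for selection, name in [(0, "short-range"), (1, "full-range")]:
    with mp.solutions.face_detection.FaceDetection(
        model_selection=selection,     # 0: close-up faces, 1: faces further away
        min_detection_confidence=0.3,  # same default threshold as in the video
    ) as detector:
        results = detector.process(rgb)
        count = len(results.detections) if results.detections else 0
        print(f"{name}: {count} face(s) detected")
```

This matches the video's finding: the framing, meaning how large the face is in the image, matters far more than the threshold values.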
If that were everything, this would be over quickly, but some people will ask more, and there is a reason I keep walking through this sequentially. (I'll check one of those questions in the next video.) If you use the models like this, what is the actual difference between them? It's not simply that one model suits anime and another suits photorealism. Clearly, when I used the YOLO face models, 8n and 8s behave similarly; their detection range differs only slightly.

When you use a YOLO face model on an image full of people, even the small faces of the crowd are detected. As a result, something half-corrected can end up floating in the background, a thing that is supposed to be a face but remains in a disparate, unrecognizable form. Is there a way to deal with this? I'll tell you in a moment.

By contrast, if you use the mediapipe_face_full model, no matter how large the crowd is, it doesn't go after the crowd. Even if a background face is distorted, it doesn't detect and correct that part; it catches just the one main person. Those are the differences worth comparing.

Now, the hand part that I said I would explain. The hand model comes as an addition: as you saw earlier, we activated two models. The first means you use a model for the face, and the second means you use a model for the hand, each with its own positive prompt. The second unit's prompt means: detect the hand and inpaint it properly again. I'm just letting you know that the two can work simultaneously like this, and I'll explain the situation where I worked this way.

Coming back: what are the characteristics of the mediapipe_face_full model? It doesn't detect the crowd. Right, it doesn't detect the crowd's faces, but the hand model will still catch the hand. Using both models at the same time, these were the results after correction. Honestly, the hand is the hardest part in Stable Diffusion. This isn't a perfect solution, but it can fix hands to some extent.

So here is how to control the crowd. I'll go back into Stable Diffusion: in the settings there is an option called Mask min area ratio. It is set to 0 by default, but if you change it to 0.01, the crowd behind the subject is no longer selected, and likewise small hands are no longer selected. Just change it to 0.01, like this. This is a tip from the GitHub page, so I think you should know it; logically, I can't tell you why this exact number. With it set to 0.01, as shown here, the small background faces and hands are not recognized: it still works on the subject, but it doesn't recognize the crowd behind, so you can create a picture without changing the people in the background. A sketch of this two-unit setup follows below.

By the way, as I said before, the face YOLO model does recognize the crowd behind; I told you that this is the difference between the two model families. But when you use this setting with YOLO, only the hands stop being recognized while the faces still are; with the minimum at 0.01 there are some differences. It can be a bit confusing, but please pay attention to the fact that the results differ slightly because of the characteristics of the two models. In other words, if you use mediapipe, you can think of it as having no awareness of the crowd at all.
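Here is how the two-unit setup from this section might look on the API side, a sketch under the same assumptions as before. The ad_mask_min_ratio key is my guess at the API-side counterpart of the Mask min area ratio setting; verify the key name against your ADetailer version.

```python
# Sketch: two ADetailer units in one generation, mirroring the video's setup:
# unit 1 fixes the main face, unit 2 fixes hands. The min-ratio filter drops
# detections whose mask area divided by image area falls below the threshold,
# which is why tiny background faces and hands stop being selected.
adetailer_args = [
    True,   # enable ADetailer
    False,  # don't skip the first generation pass
    {
        "ad_model": "mediapipe_face_full",        # unit 1: main subject's face
        "ad_prompt": "detailed face, clear eyes",
        "ad_denoising_strength": 0.4,
    },
    {
        "ad_model": "hand_yolov8n.pt",            # unit 2: hands
        "ad_prompt": "detailed hand, five fingers",
        "ad_mask_min_ratio": 0.01,                # ignore tiny background detections
    },
]
```

This list would go in the "args" slot of the API payload shown earlier.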
When using YOLO, the crowd's faces are still recognized, just not their hands; you can think of it like that. And that's the end of that part.

ControlNet (inside ADetailer): for those curious whether this can be used together with the ControlNet function, I put in a ControlNet skeleton, and after adding it, the result followed the pose accordingly. I checked that it works; see the sketch after this transcript.

That's everything I wanted to explain. As I said, in some ways it's complicated and in some ways it's simple, and you probably feel the same way. I kept it as simple as possible: the purpose of this video was to make it easy to understand, and to introduce ADetailer as a useful tool you can use without complex settings. There will be other ways to use it for other purposes, but I'd like you to see it this way, because it's better to use more tools comfortably in order to create good pictures.

I'll finish today's video here. Thank you for watching this far; I hope it helps. Have a warm day today with a warm coffee, and I'll bring you good news next time. Subscriptions and likes are a great help to me. Thank you!
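To close the loop on the ControlNet test above: my understanding is that ADetailer exposes ControlNet per unit through keys like the ones below. The key names and the openpose model name are assumptions based on my reading of the extension's options, not something confirmed in the video, so check them against your installed ControlNet models.

```python
# Sketch (unverified key names): an ADetailer unit that constrains its
# inpainting pass with a ControlNet openpose model, so the corrected region
# follows the skeleton's pose.
adetailer_unit = {
    "ad_model": "face_yolov8n.pt",
    "ad_controlnet_model": "control_v11p_sd15_openpose",  # example model name
    "ad_controlnet_weight": 1.0,  # how strongly the pose constrains the result
}
```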
Info
Channel: AI 오프너
Views: 9,606
Id: NTPj5fZby4w
Length: 16min 36sec (996 seconds)
Published: Tue Jun 13 2023