The Magic of Stealing an Image's Sense of Depth in Stable Diffusion! | ControlNet Depth

Captions
Today I will explain Depth, one of the ControlNet functions in Stable Diffusion. Depth lets the generator pick up the depth information of a reference image when creating a new one. For example, if you generate an image with ControlNet's OpenPose, the pose is followed well, but sometimes the depth you want doesn't come through: the subject ends up too close, or the sense of distance is vague. In that case, if you capture the reference with ControlNet's Depth and generate from it, you can easily reproduce the feel of the original image. Here's how to use it. Depth works the same way as the other ControlNets: run Stable Diffusion as I do, scroll down, and you'll see the ControlNet tab. Click to open it, then click the Depth button next to Control Type. Below that are the Preprocessor and Model dropdowns; the model will probably be selected automatically, and if you click it and see a depth model chosen, you're set. The preprocessor is a little different: there are several versions (Depth MiDaS, Depth Zoe, Depth LeReS, Depth LeReS++), and you can check the differences by trying each one. They're almost the same, but I mainly use MiDaS and Zoe. These days I use Depth Zoe, so I'll select it and preview. What image shall we test today? I'll bring in an image I made before. Drag it into the image upload window and let's see how it comes out through the depth preprocessor. After uploading, check that the Enable box is on, then go down and press the explosion (preview) icon. The first time you press this button or generate an image, it will take time to download the model; I've downloaded it in advance, so I can check right away. Here's the result. Unlike the Canny or OpenPose maps we've seen so far, Depth renders the image in shades from black to white: the closer to black, the farther away it is, i.e. the background.
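The UI steps above (enable a ControlNet unit, pick the Depth Zoe preprocessor and a depth model, attach a reference image) can also be driven through the AUTOMATIC1111 WebUI's txt2img API when the ControlNet extension is installed. The sketch below only builds the request body; field names such as `module`, `weight`, and the model filename `control_v11f1p_sd15_depth` can differ between extension versions, so treat it as a sketch of the payload shape, not a definitive schema.

```python
import base64

def build_depth_txt2img_payload(ref_image_path, prompt, negative_prompt=""):
    """Build a txt2img request body enabling one ControlNet Depth unit.

    Field names follow the AUTOMATIC1111 WebUI API with the ControlNet
    extension; they vary between versions, so this is a sketch only.
    """
    with open(ref_image_path, "rb") as f:
        ref_b64 = base64.b64encode(f.read()).decode("ascii")

    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "module": "depth_zoe",                 # preprocessor used in the video
                    "model": "control_v11f1p_sd15_depth",  # assumed model filename
                    "image": ref_b64,                      # reference image, base64-encoded
                    "weight": 1.0,                         # Control Weight of 1, as in the video
                }]
            }
        },
    }
```

You would POST this dictionary as JSON to the WebUI's `/sdapi/v1/txt2img` endpoint (by default on `http://127.0.0.1:7860`); the UI route described in the transcript does the same thing interactively.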
The closer to white, the closer the object is to the camera. Because the generator follows this basic frame as-is, combining the frame with the distance information lets you create a much more three-dimensional image. Now let's generate with this depth map. Add a negative prompt and set the sampling method to taste. And here's a quick tip for when you want the output to match the reference image's size: right next to the ControlNet image there's a button with an upward arrow next to the camera icon. Click it and the width and height are set to match the reference image. Lastly, the positive prompt. This time I'll make the subject a man: sitting, 1boy, chair, smile, looking at viewer. Let's see how it comes out. Generate. Now that the image is done, let's put the original beside it and compare. On the left is the original; on the right is the image created with Depth. I prompted for a man, but the feel extracted by the depth map itself is a bit feminine, and if you look at the face you can see the result ends up shaped like a woman even while following the "man" prompt. The shoes follow the prompt properly and harmoniously. In any case, since the Control Weight was 1, note that the output couldn't stray from the reference's outline. Even without examining the details, the left and right feel almost identical: the angle and the three-dimensional effect match closely. You might expect the same feeling to appear if you used a different ControlNet function with a different image, but the result can be completely different. Because Depth is about preserving distance, it helps the output follow the depth and three-dimensionality of the existing reference image.
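The black-to-white convention described above (white = near, black = far) is easy to see in code. This is a minimal pure-Python sketch, not the preprocessor's actual implementation: it takes rows of raw metric depth (where a small number means near) and normalizes them into 0–255 grayscale with the nearest point mapped to white.

```python
def depth_to_controlnet_map(depth_rows):
    """Convert rows of raw depth values (small = near the camera) into
    0-255 grayscale rows using ControlNet's convention:
    white (255) = nearest, black (0) = farthest.
    A sketch of the idea, not the actual preprocessor code."""
    flat = [v for row in depth_rows for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat scene
    # Normalize to [0, 1], then invert so near -> 1.0 -> 255 (white).
    return [[round((1.0 - (v - lo) / span) * 255) for v in row]
            for row in depth_rows]

# A subject 1 m away becomes white, the 5 m background becomes black:
print(depth_to_controlnet_map([[1.0, 3.0, 5.0]]))  # [[255, 128, 0]]
```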
But if you use OpenPose or plain Canny without it, the output follows the image, yet the exact sense of distance won't come through. Let's check whether the image comes out the same as when Depth is applied. Take an image previously generated with ControlNet OpenPose and compare it with the original: the pose follows well and the front foot is reproduced much more easily, but you don't get much of the sense of distance that Depth gives. Looking at the top, the ceiling lighting seems vague in terms of distance, and the overall structure of the room and office in the background is vague too. If you apply Depth as before, however, the generator can add distance based on the frame while following the original image almost exactly. On the other hand, it's not as flexible as OpenPose: the frame of the image is fixed, so if you want to change a woman's image into a man's, Depth can be hard to apply, and if the original has a distinctly feminine silhouette it's harder still. In that case, you can combine OpenPose with Depth to adjust the distance and make a new image. Coming back to the screen: when the preprocessor is set to Depth Zoe there are no adjustable values, which is convenient, but in some cases you'll want to tune the distance between the background and the subject. Then you can change the preprocessor: with Depth LeReS you can adjust the Remove Near % and Remove Background % sliders. If you raise the background value and press the preview icon, the depth map is computed differently. Note that every preprocessor has to be downloaded separately; since my PC's storage is limited, I'll only cover Depth Zoe this time. If you don't like the depth map it produces, you can use Depth LeReS, LeReS++, or Depth MiDaS according to your taste.
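The Remove Near % / Remove Background % sliders mentioned above trim the near and far ends of the depth range. The sketch below mimics that idea on a grayscale depth map (white = near): it is a rough approximation under my own assumptions, not the LeReS extension's actual algorithm.

```python
def remove_near_background(depth_map, remove_near_pct=0.0, remove_background_pct=0.0):
    """Roughly mimic the LeReS preprocessor's Remove Near % and
    Remove Background % sliders on a grayscale depth map (white = near).
    A sketch of the concept only, not the extension's real algorithm.

    - remove_near_pct: clip this % of the range off the near (white) end.
    - remove_background_pct: push this % of the far (dark) end to black.
    """
    flat = [v for row in depth_map for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    near_cut = hi - span * remove_near_pct / 100.0      # cap for the nearest values
    far_cut = lo + span * remove_background_pct / 100.0  # below this -> background (0)
    return [[0 if v < far_cut else min(v, near_cut) for v in row]
            for row in depth_map]

# With 20% trimmed from each end, the far pixel drops to black and the
# nearest pixel is capped, flattening the extremes of the depth range:
print(remove_near_background([[0, 100, 255]], 20, 20))  # [[0, 100, 204]]
```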
Try it once, and if you don't like Depth Zoe's result, choose Depth LeReS or LeReS++ and try that instead. Today I explained the Depth function, which expresses distance and depth in Stable Diffusion. Any single ControlNet in Stable Diffusion may not seem that powerful on its own, but if you stack extension functions and ControlNets one after another, you can create really cool, beautiful images. So if you try the functions I've shown and feel something is lacking, I recommend adding various extensions and using them together. That's it for today's lecture. I'll prepare more useful content next time. Thank you.
Info
Channel: 트로메들로아
Views: 1,595
Keywords: 스테이블디퓨전, 스테이블 디퓨전, stable diffusion, 스테이블 디퓨전 영상, 스테이블 디퓨전 로라, 스테이블디퓨전 설치, 스테이블 디퓨전 사용법, 스테이블 디퓨전 Depth, 생성형ai, 생성형 ai, 생성형 인공지능, ai, stable diffusion depth, depth
Id: J-JGS6sG41o
Length: 8min 52sec (532 seconds)
Published: Mon Jan 01 2024