Why even the video industry is on edge just six months after image AI arrived (feat. ControlNet) / [오목교 전자상가 EP.127]

Video Statistics and Information

Captions
Hello, everyone. These days new things come out so fast that I can barely keep up with them all on YouTube. [Music] Hi, I'm the host of this channel. Recently I made a face image with Stable Diffusion and then used AI software to animate the facial movements. Image-generating AI, which only emerged in the second half of last year, has become this much more realistic in a matter of months. Nevertheless, there has been one crucial reason it hasn't yet replaced humans in many fields: it generates a random image every time. But a crack has now appeared in that last bastion, thanks to a new technology: ControlNet, an extension for Stable Diffusion.

First, let me briefly introduce the basic usage and the models. For now, ControlNet can only be used through the web UI. If you run the web UI and look toward the bottom, there is a ControlNet tab.

First there is Scribble, a model that creates an image from a rough sketch. There is a button called "Create Blank Canvas"; press it and you get a canvas you can draw on. Since the interface is right there, I'll draw a rabbit. Okay, NewJeans comes to mind. It turned out really well. [Music]

Then there are the Canny and HED models, which work with edges. Canny extracts the edges from the original image, so it's good to use when you want to keep the original's shape. For example, I applied it to these dog photos, with Canny as the preprocessor and Canny as the model, and asked for a white puppy. You can see every single strand of fur preserved; it looks as if I had bleached my own dog. (A minimal code sketch of this Canny workflow follows at the end of this section.) HED is similar to Canny, but the ControlNet developers say it's good for extracting the contours of people and for changing things like style and color. So, as a little surprise, I brought a reference photo of a celebrity eating a peach and asked, using the prompt "teenager," what he would look like with dyed hair. It would be convenient to take a reference like this and use it to study variations, but I'm also worried about copyright, because this feels like a high-risk feature for imitation or plagiarism.

Next, the OpenPose model extracts poses. I put in images of runners: first it extracts each runner's pose, so you can see their heads, arms, and legs as a stick figure, and then it generates new people in exactly the same pose. I'd like to add one more thing here: an extension called OpenPose Editor lets you adjust a pose in real time. If you shrink the skeleton, you can have a small person sitting here, and you can drag the joints around to manipulate the pose. I've already extracted one this way. (A code sketch of pose extraction follows below as well.) There are many other useful models too, such as the Segmentation model, which semantically separates an image into regions; the MLSD model, which extracts straight-line information and suits architectural and interior design; and the Depth model, which extracts depth information.
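The video demonstrates the Canny workflow through the web UI, but the same idea can be reproduced in a few lines of Python with Hugging Face's diffusers library. This is a minimal sketch, not the setup from the video: the "dog.jpg" filename and the 100/200 edge thresholds are placeholder assumptions, and it uses the publicly released lllyasviel/sd-controlnet-canny checkpoint on top of Stable Diffusion 1.5.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Canny-conditioned ControlNet attached to Stable Diffusion 1.5
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Preprocessor step: extract edges from the reference photo
# ("dog.jpg" and the thresholds are placeholder choices)
image = np.array(Image.open("dog.jpg").convert("RGB"))
edges = cv2.Canny(image, 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# The prompt changes the content; the edge map pins down the shape
result = pipe("a white puppy", image=canny_image, num_inference_steps=20).images[0]
result.save("white_puppy.png")
```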
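Likewise, the pose-extraction step that OpenPose performs inside the web UI has a scriptable counterpart in the controlnet_aux package. A sketch under the assumption of a local "runner.jpg" photo (a placeholder name):

```python
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Preprocessor: turn a photo into a stick-figure pose map
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = detector(load_image("runner.jpg"))  # "runner.jpg" is assumed

# Generate a new person locked to the extracted pose
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
result = pipe("an athlete sprinting on a track", image=pose_map).images[0]
result.save("same_pose_new_person.png")
```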
And finally, let me introduce the long-awaited Multi-ControlNet, a feature that lets you apply several models at once. If you go from Settings to the ControlNet tab, you can set the number of ControlNet units, up to a maximum of 10. I'll demonstrate with two. First I take a background picture and lower its weight, which is the degree to which that model is reflected; if one model's weight is too high, the other model gets buried. So I extract the depth of this mountain and canyon, and into the second unit I put a pre-extracted pose. When I generate, there is a moon in the back, exactly where I placed it, and the figure in front. If you were making an animation, you would normally draw the character in the foreground and the background separately; here it all happens in one pass. (A code sketch of this multi-model setup follows below.) The more I see it, the more I think about how it could be used in the field, and how image-generating AI could let a single person do the work of a whole team. [Music]

It has been about half a year since image-generating AI was released as open source, and seeing Multi-ControlNet, I felt there really is something special here. At this rate, I think dedicated software for operating AI very precisely will appear in the near future. And without our noticing, image-generating AI has already begun to penetrate content production. In Netflix Japan's "Dog and Boy," released in January, the backgrounds of every scene were AI-generated images. For reference, the background you see behind me is also an AI-made image. As ControlNet develops further and more sophisticated control becomes possible, the day will come when not just the background but every element, including people and their movements, is made with AI. Video-generating AI isn't really here yet, but if images can be manipulated this precisely, I expect AI that produces sophisticated video in the near future; after all, video is just a sequence of image frames. The big changes we used to feel once a year will be felt every month, and before long we'll live in a world where new technologies come out every week. Which technologies, and which jobs, will survive these changes? Can we adapt? That's it for this video. No matter how I look at it, we're next. I don't think we have much time left.
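For the Multi-ControlNet setup described above, diffusers accepts a list of ControlNets, one conditioning image per model, and per-model weights via controlnet_conditioning_scale, which mirrors the web UI's weight slider. A minimal sketch, assuming pre-extracted depth and pose maps saved as "canyon_depth.png" and "pose.png" (placeholder names), with the background weight lowered so it doesn't bury the pose:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Two conditioning branches: depth for the background, pose for the figure
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

depth_map = load_image("canyon_depth.png")  # pre-extracted depth map (assumed file)
pose_map = load_image("pose.png")           # pre-extracted pose skeleton (assumed file)

# One image and one weight per ControlNet; the lower depth weight keeps
# the background from overpowering the pose, as described in the video
result = pipe(
    "a character standing before mountains and a canyon, full moon in the sky",
    image=[depth_map, pose_map],
    controlnet_conditioning_scale=[0.5, 1.0],
    num_inference_steps=20,
).images[0]
result.save("composed.png")
```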
Info
Channel: 오목교 전자상가
Views: 73,546
Keywords: 오목교전자상가, 오목교 전자상가, SBS, 스브스뉴스, IT, tech, ControlNet, Stable Diffusion, image, generation, image-generating AI, ai, designer, film director, YouTuber, creator, content, media, broadcaster, stable diffusion, stablediffusion, Controlnet, control, openpose, pose, model, animation, anime, MS Paint, Photoshop, Illustrator, illustration, design
Id: 8SQyx9wBZA8
Length: 7min 34sec (454 seconds)
Published: Wed Mar 01 2023