Holy **** we are living in the future! It came out naturally, right? You can almost create any image you imagine. Hello, this is Chocoding. In this video, I prepared a complete guide to the Stable Diffusion Web UI.

To outline the agenda: first, I will introduce the background and principle of the Stable Diffusion Web UI. Then we will go through the setup in detail for each environment, I will introduce the Web UI's menus and functions, and finally I will show you how to use it in practice. Please press subscribe and like, and watch to the end.

Since some of you may be hearing about this for the first time, let me introduce it briefly. You have probably seen AI that draws pictures or cartoons at least once. Most of these AIs use what is called a diffusion model, so let's look at its basic principle. During training, the AI is given an original image and a prompt describing it. The image is gradually blurred by adding noise step by step; the reverse process of decoding the noise back into the original image is called denoising, and it is this denoising process, returning from noise to the original image, that the AI learns. Once it has learned this, you can give it just a prompt and pure noise, with no original image at all, and ask it to "make the original image again." Through the same denoising process, it creates a brand-new image that never existed in the data.

Text-to-image diffusion models, where you put in text and get out an image, turned out to work very well, so many companies jumped in and built their own. Among them, a company called Stability AI made a diffusion model and released it for free, uploading it to a site called Hugging Face. If you go there, anyone can download the Stable Diffusion model made by Stability AI at no cost.
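To make the principle a bit more concrete, here is a minimal sketch in Python. This is only an illustration of the idea under stated assumptions, not the real Stable Diffusion code: `model` is a placeholder for the trained network, and real noise schedules and samplers are far more sophisticated.

```python
import numpy as np

# Forward process: mix the original image with more and more Gaussian noise.
def add_noise(image, t, num_steps=1000):
    alpha = 1.0 - t / num_steps               # how much of the image survives at step t
    noise = np.random.randn(*image.shape)
    noisy = np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise
    return noisy, noise                        # the network trains to predict `noise`

# Reverse process (denoising): start from pure noise, no original image needed,
# and repeatedly remove the noise the network predicts for the given prompt.
def sample(model, prompt, shape, num_steps=50):
    x = np.random.randn(*shape)                # begin with pure Gaussian noise
    for t in reversed(range(num_steps)):
        predicted_noise = model(x, t, prompt)  # placeholder call to the trained network
        x = x - predicted_noise / num_steps    # crude denoising step
    return x
```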
So people like you and me download Stable Diffusion and do what is called fine-tuning: additional training that adapts the model to a particular purpose. Models that draw cartoons, models that generate furniture images, models that produce 3D-animation-style images, and so on: a huge variety of models were born this way.

However, no matter how many such models exist, if you want to run one yourself and you are a developer, you can write code that wraps the model, with text going in and images coming out. But if you don't know how to code, that is not easy. So, to let anyone use these models easily, people built UIs: a text input box and a button, and when you press the button the code inside runs. Many websites and applications were born that let anyone use AI this way.

The most widely used of these UIs today is the tool called Stable Diffusion Web UI, made by AUTOMATIC1111. It is released as free open source, so anyone can download it and use it for free. With a screen like this wrapped around an AI model, even someone who doesn't know any coding can create AI-generated images just by typing text and pressing a few buttons.
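As it happens, the Web UI is built on the Gradio library, which is exactly this "text box plus button that runs code" idea. Here is a toy sketch of my own to show the concept; it is not the Web UI's actual code, and the `generate` function is a stand-in for where the diffusion model would be called.

```python
import gradio as gr

def generate(prompt):
    # Placeholder: a real app would run the diffusion model here.
    return f"(pretend this is an image generated from: {prompt!r})"

demo = gr.Interface(fn=generate, inputs="text", outputs="text")
demo.launch()  # prints a local URL, just like the Web UI's "Running on local URL"
```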
The name was probably chosen because it was first made for the Stable Diffusion model, but even models other than Stable Diffusion can be used through this UI by loading them in. So, in this video, we will learn how to use this UI in detail.

Next, let's see how to set the UI up in each environment; the procedure differs slightly depending on where you run it. First, regardless of your environment, I will show you how to run it with nothing but a browser, and then how to install it on Windows and on Mac, in that order.

First, open your internet browser and go to this address; I will leave the link in the comments. When you connect, you will see this page. For reference, this page is updated very often (it says it was updated 9 hours ago), so it may look slightly different from what you see in this video. If you scroll down, you will find the section called Installation, with installation methods for each environment. At the top, there is a link called Use Online Services for using it online without installing anything. Go to that link.
There you will find Google Colab and Paperspace notebooks that can be used online. It doesn't really matter which one you use; I will use the one maintained by TheLastBen. If that one is ever blocked, you can use another; the overall process is much the same. Clicking the link connects you to the Google Colab service.

Google Colab is a service where Google lends you a computer so you can execute Python code in an online environment. Because the code runs on Google's machine rather than your own, you can run this even from a smartphone, without a high-performance computer. To use the service, you need to log in with a Google account.

Before we start, one caution: there is a cell that connects to Google Drive, and it grants access to your Drive. In case your account is ever compromised, it is safest to use a Google ID that has no important data on its Drive. After logging in, click Copy to Drive to copy the notebook; a copy of the file is saved to your own Google Drive, and you can execute it from there. Then go to Runtime, click Change runtime type, and select GPU as the hardware accelerator, so the Colab environment is connected to a GPU that can run this AI.

Now execute the cells by clicking the play button on the left of each one. The first run takes some time to connect. The first cell connects to Google Drive, so it shows a message: Connect to Google Drive. A login screen pops up; log in, scroll down, and click Allow. Now it is connected, and if you click the file icon on the left, you can see that your Google Drive is attached.
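Under the hood, that first cell is doing essentially the standard Colab Drive mount, which looks like this:

```python
# Mount your Google Drive so the notebook can read models from it
# and save generated images back to it.
from google.colab import drive

drive.mount('/content/drive')  # opens the login/permission flow described above
```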
Next, execute the cells one by one. Some cells show only a title, such as "Install AUTOMATIC1111", with the code hidden; clicking the code button reveals it, but you can run it just by pressing the play button next to it. When it's done, you'll see a check mark.

Next, you download or load the model, which can be done in one of three ways. First, if you have no model downloaded yet, you can download one right here; if you're new to this, just fill this in and run it. Second, if you already have a model you want to use, put it on your Google Drive and enter the file's path. There's an example path at the bottom; copy it into the field at the top, and if the file sits at the top level of your Drive, just swap in your own file name. The third way is for a model stored on another account's Google Drive: right-click the model file there, click Share, change access to anyone with the link, copy that link, and paste it here to pull the model in from the other Drive.

Since many of you are doing this for the first time, I'll leave those two fields blank and go with the first option. Select the model version; I'll select version 1.5. You also have to enter a token, which you get from the Hugging Face website I mentioned earlier. You have to sign up there and create a token; I'll leave the Hugging Face address in the description and comments. Click the Sign Up button at the top right. After signing up, go to Settings under your profile on the right, click the Access Tokens menu at the bottom, then New Token. Name it anything, set the Role to Read, and click Generate a token. A token string appears; click the Copy button on the right and paste the token into the token input in Google Colab.

And before you start, you have to accept the terms for the model you want to download. If you search for Stable Diffusion in the Hugging Face search bar, you'll see the 1.4, 1.5, and inpainting versions. I'll use version 1.5, so I click the RunwayML 1.5 model and accept the usage terms on its page.
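For reference, what that token-based download boils down to is roughly the following sketch. The repo and filename are the 1.5 example mentioned above, the token string is a placeholder, and older versions of huggingface_hub take `use_auth_token=` instead of `token=`; I don't know the notebook's exact internals, so treat this as an equivalent, not the notebook's code.

```python
from huggingface_hub import hf_hub_download

checkpoint = hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    filename="v1-5-pruned-emaonly.ckpt",
    token="hf_...",  # paste the Access Token you created in settings
)
print(checkpoint)    # local path of the downloaded .ckpt file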
When you're done, run the last cell without checking anything; the first checkbox is only for when the Colab local server doesn't connect properly, and the Hypernetwork option, which I'll cover in a later lecture, should be checked only if you use it. Scroll down and you'll see "Running on local URL" and an address. Open that address and you'll see this site; click "Click to Continue" and the Web UI appears. You can select the model .ckpt here to use a specific model. Any image you create is saved to your Google Drive: go to My Drive > sd > stable-diffusion > outputs and you can see the images you made.

The disadvantage of Colab is that you depend on Google Colab, so the connection may drop in the middle. If you've already downloaded the model you can skip that step, but you have to run the rest of the cells again to bring up a new server. For those who don't like an environment that keeps shutting off like Colab, I'll show you how to install it directly on a Windows PC.

Likewise, I'll follow the Windows installation section of the GitHub page step by step. As I said, this page is updated often, so if anything changes, you can follow the updated procedure there. For reference, it says you need a video card with at least 4 GB of VRAM, though 2 GB can work. The VRAM capacity is the gigabyte figure in the graphics card's product name, so check whether yours meets the recommended spec. Of course, you can also use the mode that only uses the CPU; it's quite slow, but you can pass that option when launching, or use one of the other repositories built for CPU only.

Now, the Windows installation process. Step 1, Python: it says to install Python version 3.10.6. Clicking this link takes you to the Python download page; press Ctrl+F and search for 3.10.6. In the results, click the Windows installer (64-bit) link for version 3.10.6, and the .exe installer downloads. Run it and you'll see the Python installation screen; be sure to check "Add Python 3.10 to PATH" at the bottom. Then go to Customize installation, check every option, click Next, check everything on the next screen as well, and click Install. Installation complete.

Step 2, Git: click the Git link, click "64-bit Git for Windows Setup" to download the installer, run it, and click through the install button. Installation complete.

Step 3, download the repository using the Git you just installed. Go to the C drive in Windows Explorer, type cmd in the address bar, and the command window opens. Copy the command shown in gray, paste it into the cmd window with Ctrl+V, and press Enter. The clone runs, and you can see a folder called stable-diffusion-webui downloaded under the C drive.
Step 4 asks you to put the model file in the models directory. Click the Dependencies link on the right and you'll see three ways to get the model file: the official download, file storage, or a torrent. I'll use the official download, which currently links to the Stable Diffusion 1.4 model on Hugging Face; click the "download the weights" link to get it. The difference between the plain 1.4 file and the full-EMA version is that the full-EMA one is bigger but better suited to further training. So download the smaller one if you just want to generate images, and the full-EMA version if you want to train on top of it. If you want a version other than 1.4, search for Stable Diffusion in the Hugging Face search bar; you'll see the 1.4 version and the 1.5 version made by another company. I'll download the Stable Diffusion 1.5 version made by RunwayML. If you haven't accepted the terms yet, accept them, scroll down, and download through the link there. There are two files here as well: the smaller one is fine for just generating, and the larger one is used for additional training through fine-tuning. I'll download the larger one so I can show you additional training later. Click the link to download the model file, then paste the downloaded file into the Models folder inside the stable-diffusion-webui folder you cloned in step 3.

Step 5 is optional: GFPGAN, used for face restoration, goes in the base directory. Search for GFPGAN and you'll find its repository; you can download the recommended version 1.4 there. The 1.4 version is more detailed, while 1.3 gives more natural results, so take whichever of the two you prefer; I'll take the recommended 1.4. Paste this file into the base directory, the stable-diffusion-webui folder where webui.py is located.

Once this is done, you're asked to run the web UI, so double-click the web UI launch file (webui-user.bat) in the folder. A CMD window opens, installs everything automatically, and starts running. After all the setup finishes, "Running on local URL" and an address appear. Drag over the address, right-click to copy it, paste it into your browser's address bar with Ctrl+V, and press Enter. Now the Stable Diffusion Web UI runs on your own computer, and you can select the model file we placed in Models and use it. To run it again after shutting it down, go back to the folder and launch it the same way; since the first-time installation is done, the URL appears much faster, and once it shows up you can use it at the same address.

Next, how to install on a MacBook with Apple Silicon. Likewise, go to the same repository and click the "Installation on Apple Silicon" link; it describes how to install on a Mac with Apple Silicon, and we follow that procedure. First, you need to install Homebrew. Go to the link, copy the command in the middle of the Homebrew page, open the Terminal, paste it, and press Enter. Enter your password and it runs; press Enter again to install Homebrew. Homebrew installation is complete.
Now that Homebrew is installed, you install the required packages using the next command: copy it, open the Terminal again, paste it, and press Enter, and the necessary things install. Installation complete. Then run the remaining commands one by one: copy each, paste it in the Terminal, run it, and repeat down the list. When a "Proceed?" prompt appears in the middle, press y and Enter to continue.

You can download the Stable Diffusion model yourself, or have it downloaded from Hugging Face automatically. If you downloaded it yourself, open Finder, go to stable-diffusion-webui under Documents, put the model in the Stable-diffusion folder inside Models, then press y. If you don't have the model, press n and Enter to download it automatically. You'll then be asked for your Hugging Face token: the same method I described in the Colab section; create a token, paste it here, and press Enter, and the model downloads automatically. By default it connects to the Stable Diffusion 1.4 model on Hugging Face, so for the automatic download to work you have to accept the terms on the 1.4 original model page. Wait a little and the installation completes.

Uh, but there was an error: it said a module was missing, which from the message appears to be torchsde. So I opened the run web UI script with Visual Studio Code. It was composed of commands that enter the conda virtual environment and ultimately run the webui Python file, and the torchsde module was missing from that environment. So I added a "pip install torchsde" line to the script, saved, and ran it again, and then I could confirm it was running normally. Since that install only needs to happen once, I can erase the added line next time and save again. Likewise, copy the local URL address and open it in your browser: the Stable Diffusion Web UI installed directly on an M1 Mac, working well.
Now that installation is complete, let me introduce the menus and functions of the Web UI, explaining the main features one by one.

First, the checkpoint selector at the top of the menu: this is where you click and choose among the .ckpt model files in the Stable-diffusion folder inside the Models folder of the stable-diffusion-webui directory. Besides the basic Stable Diffusion model, if you put various fine-tuned models into that folder before starting, you can switch between them here.

Next, the overall tabs. The first tab, txt2img, covers the process of entering text and generating images from it. The second tab, img2img, takes an existing image and transforms it to create a new one. The Extras tab takes an image and upscales it, resizes it to a given size, or applies other image processing. PNG Info is where you drop a generated image to check, from its embedded metadata, what settings were used to create it. Next, the Checkpoint Merger is for mixing models: besides the standard Stable Diffusion model there are many modified models, and you can select some and blend them in a suitable proportion to make a new model. In the Train tab you can train embeddings and hypernetworks to adapt the model to your purpose. You can adjust all kinds of configuration in Settings. And in the Extensions tab you can install various additional features beyond what you have now: if you load the Available list, you'll see the extensions that can currently be installed. For example, Dreambooth, which is very good for fine-tuning, can be added as an extension so you can fine-tune easily inside the Web UI. Beyond that there are many different extensions, new features are constantly being developed, and the structure makes them easy to install and use, so later I'll cover how to install and work with the main ones.
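About that Checkpoint Merger for a moment: its simplest mode is a plain weighted sum of the two models' parameters. A rough sketch of the idea, my own illustration rather than the Web UI's exact code:

```python
# Weighted-sum merge: blend two models' parameters in a chosen proportion.
# model_a / model_b are state dicts mapping parameter names to tensors.
def weighted_sum_merge(model_a, model_b, multiplier=0.5):
    return {name: model_a[name] * (1 - multiplier) + model_b[name] * multiplier
            for name in model_a}
```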
Now let's start with txt2img and see what each control does. The prompt is where you enter the sentence describing what you want to create. The negative prompt is for what you don't want: if you list things to avoid, such as "ugly" or "bad fingers", they act as a negative influence.

Next, Sampling Steps is the number of steps taken from noise to the denoised image; the default is 20. The lower it is, the more the picture breaks down; the higher, the more detailed it becomes. The Sampling Method decides which sampler to use; different samplers don't change the result dramatically, but they do change it a little.

Width and Height set the dimensions of each result. Since the model was basically trained on 512-pixel images, a 512 square works best. And here are some checkboxes. Restore Faces retouches faces that came out strange in a generated image. The Tiling option makes images look right when laid out as repeating tiles: if you create an image with Tiling on, you get the kind of image that looks natural when placed side by side in multiple tiles. Next, Hi-res Fix: when creating an image larger than 512x512, it prevents duplicated heads and other strange artifacts from appearing.

Next, Batch Count is the number of times to run generation, and Batch Size determines how many images are created in each run; the total number of results is the product of these two values. If you set Batch Count to 5 and Batch Size to 2, you create 5 batches of 2 images each, 10 images in total.

Next, CFG Scale controls how exactly the image follows the prompt. The lower the value, the less it relates to the prompt, though the image can be more creative; the higher the value, the more strictly it follows the prompt.

Next, the Seed value. When you create an image you like, the generation information appears at the bottom, including its seed. The default of -1 picks a random seed each time, but if you copy the seed of a specific image, you can recreate results similar to it; with the same seed and settings you get the same image as before. So if you want multiple images in a particular style, set the seed and increase the batch size to create several images similar to that specific seed.

Next, there's the Script dropdown at the bottom, with three entries. First, Prompt Matrix: when you want to experiment with several prompt variations, separate them with the | symbol, and the results come out as a table comparing, for example, the cases with and without the word "illustration", and with and without "cinematic". Next, Prompts from File or Textbox: you can write one prompt per line in a memo, drag the file itself in here, or type the prompts into the box, and it generates from each. And there's a great option for testing these various settings: X/Y Plot. You choose a variable for the X axis and one for the Y axis and try the combinations. For example, to see how sampling steps and CFG scale interact, set Steps as the X axis with values 20, 30, 40, and CFG Scale as the Y axis with 7, 8, 9; generating then gives you a table showing how the image changes as steps vary across the X axis and CFG scale down the Y axis. By putting different variables on the X and Y axes, you can experiment with how the images change with each setting.
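To pin down the batch arithmetic and the seed behavior in one place, a plain illustration:

```python
batch_count = 5          # how many batches run one after another
batch_size = 2           # how many images are generated per batch
total_images = batch_count * batch_size   # 5 x 2 = 10 images per click

seed = 1234              # any fixed seed reproduces the same image
                         # with the same settings; -1 means "random each time"
```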
To the right of the prompt there are a few small icons. If you click the art-palette icon, a random artist is added, so use it when you want to throw in a random artist's style. The second icon is for recalling the parameters of a generated image, pulling its prompt back in. Save Style works like this: enter a set of keywords you like, click Save Style here, and give it a name; later, select that saved style from the Styles dropdown and click the apply button below, and the whole saved set is inserted at once. It's convenient to save the words you use often as a style and call them up.

Next, let's learn how to use this Web UI to actually create images. This part is edited from a live stream, with the newly updated parts added.

Let's start with the most basic function, creating images from text: enter the tab labeled txt2img. There's the prompt, where we need to write a sentence in English describing the image we're going to create. Many people find English difficult; if you use Google Translate, you can write in Korean and have it converted to English. Let's try it: "a picture of PSY performing" comes out in Korean and English as "A photo of PSY performing a concert". Copy that. The result won't be great at first; I'll show you how to improve it. After entering it, click the Generate button next to it. Ta-da, the picture came out. The created image is stored in the outputs folder inside the stable-diffusion-webui folder.
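For reference, what pressing Generate amounts to can be sketched with the diffusers library. This is just the library equivalent under the 1.5 checkpoint mentioned earlier, not the Web UI's internal code, and the output filename is my own example.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of PSY performing a concert",
    negative_prompt="ugly, bad fingers",  # things to steer away from
    num_inference_steps=20,               # sampling steps
    guidance_scale=7.5,                   # CFG scale
).images[0]
image.save("psy.png")
```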
It's a little crooked and weird, right? But doesn't it look decent? There's still a lot to improve. Anyway, write your own sentence; however you write it, the result will come out different from mine, because it changes every time you generate. So getting a good image involves a bit of luck.

To give you some tips, there are ways to make this more detailed. People have studied these sentences and even published books about them. There's the DALL·E Prompt Book, a guide written with a lot of thought about how to use these commands more effectively, including how to phrase sentences to get a realistic image. The author was clearly very excited while writing it, "Holy F, we are living in the future", and you really can get that feeling of living in the future; I'm genuinely surprised looking through it. A prompt is just a sentence, and the book shows sentences written in many styles and introduces which are more effective. In addition, OpenArt recently released a prompt book dedicated to Stable Diffusion, so you can learn how to write prompts that fit Stable Diffusion even better. I'll leave the links in the description and comments; both are free, so read them slowly yourself. It's too long to cover fully, so I'll just go over it roughly.

Take emotional prompts: "light", "peaceful", and so on; if you don't know these words, Google Translate helps. Anyway, do you see the difference between these four pictures? Put in positive-mood words and the image gets brighter; negative words make it darker; there's smiling high energy, and there's dark. You can express all of this just by adding these words. If you copy in all the bright words, you get a very bright, exciting image; copy in all the dark ones and you get a very dark image. Shall we try? ...It looks a bit like Kim Jong-un. Anyway, that's how changing words works. If you want a bright image, piling in every bright word will brighten it, but there's a specific feeling you want to make: "calm", for instance, doesn't suit a concert, so you'd choose to remove it; "quiet" is in the list too, but it doesn't fit either. Rather than putting all of it in, it's better to select carefully and put in only the words that really match the image you want to make.

There are also style keywords, not mood but style: small, big, structured, and so on, and it's important to put in a bit of art direction. Think about what kind of picture you want to get; if you want a face shot, use "extreme close-up". Anyway, if you want a specific look in a specific area, use the specific command for it. Overall, the more precise keywords you use, the more accurate the image. And as I said before, there's some luck, good rolls and bad rolls, so to raise the probability it's better to roll the dice many times: that's what batch size and batch count are for, how many are created at once and how many times, with the total being the product. Increase the batch size and the number of results goes up.

Now, what should we do to make this look realistic? Cloud K wrote in the chat that if you include the name of a camera or lens, the image comes out more like a real photograph. The book shows this too: there are camera angles, but also camera settings and lenses. For an image that has to be captured at the moment water splashes, you'd use shutter speed, because a photo captures light in a blink; with a very fast shutter speed you can get that frozen-splash image. Anyway, there are camera names, lenses, and so on, and the more of these you use, the more precise the image. "Sony a7, 35mm lens": I'll add that with a comma here, increase the batch size, and press Generate. It's being created; it takes a little longer because the count increased. Oh, it's out. With this sentence in it, the composition now has the audience clapping and cheering, though there's still one that looks like Kim Jong-un.

Oh, keywords I use often: "realistic image", "hyper realistic", "8K", with 8K standing for high quality. "Hyper realistic photo of ...": phrase it like that and it gets better. Of course, if you want to be creative, you can refer to keywords like TV series or writers' names, which is said to help. Anyway, if you read the book all the way through there are a lot of tips, so I recommend reading it to the end. Art has its own keywords too, artists' names, "renaissance", and so on, so referring to those changes things a little more: "highly detailed", "super realism", "trending on Twitter". Oh right, I hear that's used a lot: "trending on Twitter", "trending on Pinterest", keywords like that.

So how do you get these keywords, since they're not all in the books? There are a lot of places that explain this, so it's good to refer to various people's opinions. It's been only about a month since this model came out, so there aren't that many people yet, but some are already sharing on YouTube and blogs: "I use this keyword and it comes out more realistic", "this one comes out more art-like." You can look at those blogs, copy the keywords, and try them yourself; I think it would be nice if you shared yours too.

Let me introduce another good site: lexica.art. This is very useful: it shows images created with Stable Diffusion along with exactly what prompt made them. For example, click on one of these images, a piece of concept art, and you'll see keywords like "trending on ArtStation" that produce higher-quality output. So if you want high quality, refer to these: choose an image in the direction you want to make, click it, and you'll find soft lighting, digital art, 4K render, concept art, and you can see that those keywords created that image. "PSY concert"? No PSY images, it seems. Oh, this one is very realistic, with just a slightly weird part. "Viral on Twitter", I think that's a good keyword. A selfie? Anyway, this is how you do it; you get the feel of it, right? Search and browse, click what looks good, see if there's anything to borrow, there's "8K" again, and copy and paste the keywords. Among the results, something good comes out. This is partly luck, so I've looked through several rounds; it takes a while to find an image I really like. But once you get something good out of this, there's another way to improve it further, and I'll show you that too.

Since the training data is largely from overseas, famous international faces work well, so let's put in a famous overseas star: Elon Musk. It's sort of him, but not that realistic. So here's one fix: check Restore Faces to get a somewhat cleaner face, and then I'll do further work to improve the face. Once this image is out, here's what you can do with it: there's a button called Send to img2img. What happens when you press it? The image moves from txt2img over to img2img.
So what is img2img? It creates a new image based on an existing image. Among the things we've made, if you think one is not bad, you can make something from it again. There's a slider called Denoising Strength: the stronger it is, the more the result differs from the original image; the weaker, the more similar it stays. Pull it up and new images come out; pull it down and similar images come out; the middle is the default.

Say you like this image but want to fix a small detail. Bring the image in and lower the strength: at 0.2 it changes very little. I'll change it to a smiling face... "laughing"? I'll go with "laughing". It didn't change much; can you see it? The corners of the mouth are slightly raised. With strength at 0.2 it barely changed, so I'll raise it a little, to 0.3, and generate again, increasing the batch size, since several results make it easier to compare. Anyway, when you want to adjust details little by little, if you like a picture but want to change just a bit of it, use img2img with a low strength: it doesn't change much, but now it's a slightly smiling face, created reflecting the new sentence.

Now that it's improved, let me show you another interesting thing. We've done txt2img and img2img; next there's a function called inpainting. To explain it: press Send to inpaint on this picture and it should go to the Inpaint tab. There was an error; I'll fix it and put it in. You can also just drag an image in from your computer. Now you can do other, more detailed work here. For example, I want to make an Elon Musk wearing sunglasses, based on this image: what do I do if I just want to add sunglasses? Inpaint regenerates only a specified part of the image. So, to control how much the sunglasses affect, I specify the sunglasses area, painting it like this. Likewise, you can adjust the strength here: the stronger, the more it changes, and I think this needs to change quite a lot. I do that and write the sentence, and since luck is involved, one result alone might not work, so I'll raise the batch size and try up to eight. Oh, yes, they've been generated in the meantime. This one is a little unnatural, this one is not bad, this one is a little weird, this one is fairly natural. If you're specific, what kind of sunglasses, what color, what shape, it comes out more accurately. I like these sunglasses, but I don't like the blur here, so it's the same procedure: send this result to inpaint, fix that part there, and leave the rest as it is. Do you get the feel of how the modifying works?
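In library form, inpainting looks roughly like the sketch below: white areas of the mask get regenerated from the prompt, black areas are kept as-is. The model repo is the real RunwayML inpainting checkpoint, but the file names are hypothetical, and the Web UI's own pipeline differs in detail.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init = Image.open("portrait.png").convert("RGB").resize((512, 512))
mask = Image.open("sunglasses_mask.png").convert("L").resize((512, 512))

out = pipe(prompt="a man wearing black sunglasses",
           image=init, mask_image=mask).images[0]
out.save("with_sunglasses.png")
```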
That's how you modify: take the parts you don't like and have the AI change them one at a time. Let me turn back and compare: this is the original image and this is the changed one. Looking at the changed set, oh, there's one element missing, and this one is a little more natural, so I'd feed that one back in and try several more rounds like this.

So far we've covered txt2img, img2img, and inpainting, selecting a specific part and changing it. You can also use outpainting, the opposite of inpaint. In a demo: take an image with nothing beyond its border, and the AI creates the surroundings outward; each time you extend it, more background is generated. That's outpainting: it continues the existing image outward. If you combine inpaint and outpaint, you can do most kinds of modification: touch up a part of an image, replace an object with inpainting, extend the scene with outpainting; all of it is possible.
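Conceptually, outpainting is just inpainting applied to a border region you add. A minimal sketch of that trick with PIL, with hypothetical file names:

```python
from PIL import Image

# Enlarge the canvas, then mask only the new empty strip so an
# inpainting model fills it in: "inpainting the border".
src = Image.open("scene.png")
canvas = Image.new("RGB", (src.width + 256, src.height))
canvas.paste(src, (0, 0))                    # original image on the left

mask = Image.new("L", canvas.size, 0)        # black = keep as-is
mask.paste(255, (src.width, 0, canvas.width, src.height))  # white = generate
# Feed (canvas, mask) to an inpainting pipeline, as in the previous sketch,
# to extend the scene 256 pixels to the right.
```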
Next, the image-to-image demo many people have been waiting for. I took Waddu, the representative character of Woowakgood, and turned him into a human. It came out very naturally; you can make things like this. I also tried it with the Chocoding logo, and a very handsome person came out, like a startup profile photo with a bit of a Nam Joo-hyuk feel. Or put in your own picture: I put mine in and asked for a woman's face, and it came out like this. Anyway, this is the image-to-image function of Stable Diffusion.
If you increase the strength, it changes by that much more; that's the principle. Once you understand that, plus img2img, inpaint, and outpaint, you can create the image you imagine. I'll demonstrate with the picture of Woowakgood's Waddu, which got a great reaction. Woowakgood is a very fun YouTuber and Twitch streamer with lots of entertaining content. I put this Waddu image into img2img.

Now, how do you turn it into a person with a prompt? You write the prompt to match the subject. How did I do it when I first made this? Likewise, starting from this photo, I tested keywords, and I found one that makes Korean faces come out well: "idol" gives you a handsome face, so I started with that. As I said before, you add specific keywords one at a time. Pull this one up... it's going to be off: it comes out as a woman. It's not a woman, so I put "male idol" in front; it should really be "boy idol", since "boy" makes it come out younger. In this way, you first set the big picture with keywords. It's getting the right feel now, but it's not completely realistic, so it would be good to add realism keywords. I also raised the batch size to about 5, since it's good to choose from several pictures.

Let me show you the actual process. On the site I mentioned, you can find keywords like "high resolution", "high quality", "highly detailed", "gothic", "artstation", which make things come out better; I told you that you can copy these and paste them here, choosing among them of course. "Beautiful face"... that could be a bit of a problem here, so I'll take out "pale skin" and put in "cinematic lighting". The keywords are not bad so far, so I'll save it like this and press the Generate button. Let's see how it feels. Looking at the result, the details are missing: the headphones, the hamster ears, and so on. After seeing what was missing, I knew I had to add these details, so I added them afterward. If you translate "has hamster ears", the English sentence comes out, so I copied that and pasted it here: "wearing headphones and has hamster ears", so it will carry more of the character's features. Make it more detailed and generate again: that's how you fix it. Spray out a batch in the large frame first; if something is missing, add it; if something is still missing, add it again; keep feeding back and fixing like this. Now the headphones are good, and it's getting quite similar.

Here's another tip: there's a way to emphasize things. If you wrap a keyword in parentheses, it's emphasized, and there's also a way to weaken one. So for the hamster ears, if you want to turn up the intensity, you can wrap them in parentheses.
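The Web UI's attention syntax, shown here as plain strings for illustration (the weighting factors are the documented defaults of AUTOMATIC1111's syntax):

```python
# ( ) emphasizes a phrase (~1.1x attention), [ ] de-emphasizes (~1/1.1x),
# and (phrase:weight) sets an explicit weight.
stronger = "boy idol, (hamster ears), wearing headphones"
weaker   = "boy idol, [hamster ears], wearing headphones"
precise  = "boy idol, (hamster ears:1.3), wearing headphones"
```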
Looking at what's still missing here: there are no ears. You can fix that with inpaint, so let me show you how. Press Send to inpaint, go to the Inpaint tab here, and draw where the ears should be, defining the ear area on both sides. "Realistic hamster ears", a realistic hamster... oh, good. Copy this and generate. Hmm, it's a little weird. Another tip: I think the masked area was a little narrow, so widen it. Then, as I said before, there's the strength: the higher it is, the more it changes; the lower, the less. I think it should change less here; it's probably off because it changed too much, so do that. It would also be nice to raise the batch count; a big batch size takes too long, so I'll do five and pull them out. Generate and you get about five. The ears aren't bad, right? This one changed a lot because the strength was high; wouldn't this one be a little better? Repeat this several times, from here to there, changing the text once in a while; if you put in "brown and pink", it comes out more accurately. Anyway, that's how you can do it.

Oh, and Mutal Musk said it will be in Photoshop soon. It's already there; people are really fast. A Stable Diffusion Photoshop plugin already exists, so Photoshop can now do these transformations. That's from seven days ago, so you can feel how current all of this really is. It's been a month since the model came out, and its data and the images made with it are exploding. So you can join in too, and I hope you can produce exactly the image you want.