Consistency in Stable Diffusion | ControlNet Tutorial

Video Statistics and Information

Captions
Hello there, and welcome. In this tutorial we are going to learn how to create consistent characters in Stable Diffusion. This is something many people have been waiting for since the beginning, because creating some kind of character is nice and all, but what if you want to create different images with the same exact character? What if you want them to wear different clothes, or appear in different poses or with different facial expressions? It was technically possible before with image-to-image and inpainting, but it would take a long time, and even then the result would be unimpressive. All of this has changed with ControlNet, a groundbreaking extension for Stable Diffusion that can actually do much more than just create consistent characters. Maybe in the future I will do other tutorials about the various capabilities of ControlNet, but in this episode we're going to use it to generate different images with the same exact character.

I assume you already have the Automatic1111 web UI; this is what we use to run Stable Diffusion. If you don't have this web UI, you can find a link in the description to an installation tutorial, and if you're new to Stable Diffusion you can also check out my beginner's tutorial. Either way, this web UI is required to use ControlNet. Now we want to install the ControlNet extension, so we go to Extensions, then Available, then Load from. When the list loads, you need to find the ControlNet extension, and to make it easier you can press Ctrl+F and type in "controlnet". We need the one called sd-webui-controlnet, so click Install (I already have it installed), then go back to the Installed tab and click Apply and restart UI. You're going to see that the ControlNet extension is installed. To be sure it's actually installed, go to the txt2img tab and you will see a ControlNet section with the version number. So how do we actually use it? Let's expand this section. This is where we select the specific ControlNet
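If you prefer scripting over clicking through the Extensions tab, installing the extension comes down to cloning its repository into the web UI's extensions folder and restarting. A minimal sketch; the folder layout assumes a standard Automatic1111 install:

```python
import os

# Repository of the extension installed through the UI in this tutorial.
CONTROLNET_REPO = "https://github.com/Mikubill/sd-webui-controlnet"

def clone_command(webui_dir):
    """Build the git command that installs the ControlNet extension into
    the web UI's extensions folder (same result as the Install button)."""
    dest = os.path.join(webui_dir, "extensions", "sd-webui-controlnet")
    return ["git", "clone", CONTROLNET_REPO, dest]
```

You could run it with `subprocess.run(clone_command("stable-diffusion-webui"), check=True)` and then restart the web UI so the extension loads.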
capability that we are interested in and provide the reference image. As I told you, ControlNet can do a lot of things, but in order to create consistent characters we're going to use the reference-only tool. So we click on Preprocessor and select reference_only. With reference_only we basically tell ControlNet: I'm going to provide you a reference image, so use this image as an example for all the other images that I create.

First we need the reference image of the character you want to keep consistent, the character you want to create over and over again. Because I don't want to waste time creating a completely new character, and spend even more time making it look good, I will use this image that I already created in Stable Diffusion. After all, this is not a tutorial about how to create good images of characters, but about how to make your character look the same in other images. So you should already have a nice-looking image of your desired character that you spent time creating and perfecting; this will be your reference image. The cool thing is that you don't necessarily have to use an image you generated with Stable Diffusion. You can just open Google, find a character, download the image, and provide it as the reference. ControlNet's reference doesn't need to be an image generated with Stable Diffusion; it can be a photo you took with your phone or even a screenshot. You may not get results quite as good as with an image generated by Stable Diffusion, but it will still create a very similar character.

So let's say we want to see our character wearing something else. First we need to get this image into the ControlNet slot, so one way to do it is to just drag the image inside, or click over here and select the image. Next, in order for ControlNet to actually work, we need to enable it over here. So these are the three things we need to do: we need to add an image, we need to set the preprocessor to reference_only, and we need to enable
ControlNet. There are other settings as well, but we're going to talk about them a bit later.

Now we scroll to our prompt and add some kind of description for the new image we want to create. We'll say something like "a photograph of a woman in a red dress", because we want to see our character in a red dress. Then of course we still need to add some style modifiers and set all the usual settings, the resolution, the steps; all of this is just to make sure our output image looks good. So I'm going to change the steps to 25, increase the resolution, and set some kind of style. Now that all of this is done, we generate a few images. We can see this one, and this one, and we can see that it really resembles our reference image. Of course, like with any other image, we can keep working on it: we can send it to inpaint or maybe img2img, we can upscale it. But for now let's just create a few more examples, so let's say something like... We got a bunch of results, and we can see that the same woman appears in all of them. Of course the pictures are not perfect; there are a lot of strange things going on, but that's just Stable Diffusion and the way it works, and you always need to keep perfecting the image. You can just run it through inpaint or img2img. The point is that we do get the same face, the same body, and even the same hair. And here are a few other examples using this same exact reference image but getting different clothing styles and different colors, and we can definitely see the resemblance.

We have a few other options, and the important one to know about is the Control Mode. Usually we use Balanced, but we can also use "My prompt is more important", which will basically pay more attention to the prompt and less to our reference image, and then we have "ControlNet is more important", so it will pay more
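The same generation can also be requested through the web UI's built-in API instead of the browser: the ControlNet extension accepts its unit settings under `alwayson_scripts`. A minimal sketch, assuming the default local address and this payload shape (field names such as `input_image` have varied between extension versions):

```python
# Default address of a locally running Automatic1111 web UI (assumption).
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_payload(prompt, ref_image_b64, steps=25, width=512, height=768):
    """Mirror the three UI steps: add the reference image,
    set the preprocessor to reference_only, and enable the unit."""
    return {
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,               # step 3: enable ControlNet
                    "module": "reference_only",    # step 2: the preprocessor
                    "input_image": ref_image_b64,  # step 1: base64-encoded reference
                }]
            }
        },
    }
```

You would then send it with something like `requests.post(API_URL, json=build_payload("a photograph of a woman in a red dress", img_b64))` and decode the base64 images in the response.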
attention to the image and less to the prompt. The second option is Style Fidelity, which is only available for Balanced mode, and it is just a way to keep some of the style of our reference image. If we don't want this style to be kept, we can lower it to something like 0.2, and then ControlNet will create its own style, or take the style from the prompt, and apply it to the output. You can see that we got a very similar style in our output images to the one we had in the reference image. Let's see an example: I'll lower it to, say, 0.1 and generate two images. Now, with Style Fidelity set to a very low value, we are no longer using the reference style for these images. We are still using the same model, but we can see that the background has changed, the light has changed, and even the character has changed a bit. It still resembles the reference, but because we decreased this value, it also affects the way she looks. So usually you want to keep it at 0.5; there's a reason it's the default value. You can also push it all the way to 1 and see what happens. Now that we set the Style Fidelity to 1, our output images look more like the reference image: the style, the light, the gray background, everything looks very much like the reference image. The important part is to understand what Style Fidelity does. Usually I just keep it at 0.5, use Balanced mode, and don't touch the rest.

Then there are these options over here, the starting and ending control steps, which set when ControlNet actually starts intervening in the generation of the image. For example, with 0 and 1 it intervenes the entire time. If we set the start to 0.5, it means that only halfway through will it start using the reference image, and then it will continue using it; so for the first half of the generation it will just create some kind of random woman, and then it's going to take
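In API terms, the options discussed above are just extra fields on the ControlNet unit. A hedged sketch: the field names below follow the extension's API conventions, and in particular mapping Style Fidelity onto `threshold_a` for reference preprocessors is an assumption that may differ between versions:

```python
def reference_unit(ref_image_b64,
                   style_fidelity=0.5,       # 0.5 is the UI default; near 0 drops the
                                             # reference style, 1.0 copies it closely
                   control_mode="Balanced",  # or "My prompt is more important" /
                                             # "ControlNet is more important"
                   guidance_start=0.0,       # 0.5 would let ControlNet intervene
                   guidance_end=1.0):        # only from halfway through generation
    """One ControlNet unit for the 'args' list, with the tutorial's settings exposed."""
    return {
        "enabled": True,
        "module": "reference_only",
        "input_image": ref_image_b64,
        "threshold_a": style_fidelity,   # Style Fidelity slot for reference_only (assumption)
        "control_mode": control_mode,
        "guidance_start": guidance_start,
        "guidance_end": guidance_end,
    }
```

For the low-fidelity experiment from the tutorial you would pass `reference_unit(img_b64, style_fidelity=0.1)`; the defaults reproduce the "don't touch anything" setup of Balanced mode, 0.5 fidelity, and control over the whole 0-to-1 range.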
over. So you can play around with this and see if it actually helps you achieve something; I don't touch these settings, they just stay at 0 and 1. There's a lot more to say about ControlNet, and we have yet to see all the different capabilities it has, but for now we can actually create consistent characters. Again, it still needs some work: even after you generate these images you need to work on them, because that's the workflow in Stable Diffusion. That's all for this tutorial. Thank you for watching, please subscribe if you want to see similar content in the future, and to support me you can also leave a like; that will help the algorithm. See you next time!
Info
Channel: Mike's Code
Views: 6,941
Keywords: consistent characters stable diffusion, create consistent characters stable diffusion, deforum animation, midjourney, midjourney animation, ai art, generate ai art, how to deform, how to create ai animation, animation in stable diffusion, automatic1111, japanese, ai generated art, ai generated, ai video, ai animation, artificial intelligence, digital art, stable diffusion, deforum tutorial, deforum stable diffusion tutorial, stable diffusion tutorial
Id: Y_Yb5EkS8Aw
Length: 10min 4sec (604 seconds)
Published: Tue Jun 13 2023