ControlNet Canny Explained - Full Tutorial // stable diffusion ai

Video Statistics and Information

Captions
ControlNet is one of the most powerful Stable Diffusion tools. There are many preprocessors and models to choose from, but today let's talk about Canny. With Canny you can change information within an image while retaining the overall composition or even facial likeness, change colors and vibes, add additional subjects to an existing composition (even if it includes hands), and so much more. Let's break it down.

If you have no idea what I'm talking about when I say ControlNet, or how to install it, check out my ControlNet full guide; the link is in the description. It's an article that covers what ControlNet is, how it works, how to install it, update it, and add more ControlNet models, and what all of the interface settings mean, so check it out if you need to.

Now, what is Canny? ControlNet Canny is a preprocessor and model for ControlNet. There is a more complicated explanation, but basically it detects edges and extracts outlines from your reference image so that you can then use them for your new images. The Canny preprocessor analyzes the entire reference image and extracts its main outlines, which are usually the result of sharp edges and changes between dark and light tones. If we zoom in a bit closer here, you'll see exactly what I'm talking about: look at the nose right there. There's a light there, and Canny extracts that edge to indicate a change of plane or lighting. Canny will create outlines for the character, the background, the foreground, and basically anything else in the image, like the buildings in the background or the clouds in this example. By the way, there's also an article for ControlNet Canny based on the video you're watching right now, so you can read and follow along if you'd like.

Now, the quick Canny process. This is the simplest way to use ControlNet Canny, and more exciting ways are coming soon, so stay tuned. For a very simple workflow, here are the basics. Pick your model; I used ReV Animated. Open up the ControlNet tab (if you don't have it, check out the article I mentioned previously), drop in your reference image, select Enable, and choose Canny. If you want to see Canny in action, check Allow Preview and run the preprocessor (the exploding icon). Write a prompt like "English lady, blonde hair, jewelry, period piece, 1600s" and so on, add a negative prompt, and choose your settings; mine are shown here on the screen. Leave the other settings as they are for now and hit Generate.

When Stable Diffusion generates an image, it now has a guide to follow. Our Canny preprocessor created the guide we just saw, and Stable Diffusion has to follow that guide while also looking at the prompt, so we get a completely new image. Notice how the overall composition and the facial features remain the same, or very similar to the original, yet they're still being changed based on our prompt. This is just a quick and easy way to use Canny so you understand how it works.

Now you might be wondering, can you use it with SDXL models? The answer is yes, but you have to download a different Canny model. I'm using diffusers XL Canny mid, but any of the models from the link in the description will work. After you download it, click the refresh button if it doesn't show up right away. When you select the Canny preprocessor, make sure to pick the diffusers XL model instead of the control_sd15 one, and then you'll be able to use ControlNet with SDXL checkpoints for some very lovely results.
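The video demonstrates all of this through the AUTOMATIC1111 web UI. As a rough code equivalent of the quick Canny workflow above, here is a minimal sketch using the Hugging Face diffusers library with OpenCV for the Canny preprocessing step; the file name, prompt, and threshold values are placeholders, and it assumes the standard SD 1.5 Canny ControlNet checkpoint rather than the exact setup shown on screen.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler

# Canny preprocessor: extract an edge map (the "guide") from the reference image.
reference = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(reference, 100, 200)  # low / high thresholds
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Load the Canny ControlNet and attach it to an SD 1.5 checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Generate: Stable Diffusion follows both the prompt and the Canny edge guide.
image = pipe(
    "English lady, blonde hair, jewelry, period piece, 1600s",
    negative_prompt="blurry, low quality",
    image=control_image,
    num_inference_steps=25,
).images[0]
image.save("canny_txt2img.png")
```

For SDXL checkpoints the same pattern applies with StableDiffusionXLControlNetPipeline and an SDXL-trained Canny ControlNet (for example diffusers/controlnet-canny-sdxl-1.0), which mirrors swapping the model to diffusers XL Canny in the web UI.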
Maybe you've also noticed that Canny has two additional settings that are not present for the other preprocessors: the Canny low and high thresholds. The low threshold means that basically any edge below that number gets discarded, and the high threshold means that anything above the number you select is always kept. Values in between the low and high thresholds are either kept or discarded based on other mathematical factors that I won't be getting into (essentially, whether they connect to a strong edge). Notice how different the Canny edge outputs are and how much information is retained in each.

But is it always necessarily better to have a higher amount of edges detected? Well, I think it depends. If you're using Canny to retain a specific composition, silhouette, or body pose but want to create a unique image, then I'd go with the defaults or with Pixel Perfect enabled. But if you're looking to retain a specific structure, for example facial likeness, then more detail in the preprocessor output will probably produce better results. From this example, notice how little resemblance to the original image is recreated with the thresholds set to low 200 and high 250, because the guide you're giving Stable Diffusion is very limited in its information. In comparison, thresholds of low 10 and high 100 created the most detailed output and retained the most facial likeness, and we could have guessed that would be the case from the preprocessor result alone. Paying attention to these little details gives you even more control over your final generations.

Now let's actually test some of these features and see how they work. Let's go to the img2img tab and check out what Canny can do for us. I found this image of a person holding a glass, and I'm just going to quickly select a few settings for the size and so on. In the ControlNet Canny tab, you'll notice there's a new button that says Upload independent control image. We'll get back to this, but for now we're just going to use the glass. Select Canny, change the denoising strength to something higher so it can really affect the image, and write your prompt. For me it's "a hand holding a glass, flower growing inside a glass, blooming, photorealistic, digital painting, masterpiece, high res". I want there to be a flower, but I also want there to be a hand, and to retain this composition; we'll see how well it does. I generated the first one, and the result is something super tiny inside a glass, so that's really not helping us much. I'm going to reuse the seed here so that when I make changes I can actually tell whether they affected the image in the way I want. Now let's run one without ControlNet enabled: we get a glass with a flower, but the hand is all messed up, so that's not working for us either. Now let's enable it again with the Canny control weight at 0.5 and the img2img denoising strength at 0.75. This time I got an interesting result: we retain the composition and we have a flower inside the glass. It's a much better result than the one without ControlNet. So if you have an image that you want to replicate closely, especially when it comes to hands, ControlNet Canny in img2img works pretty well.
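Again, this was all done in the web UI. A comparable sketch with diffusers would use the ControlNet img2img pipeline, where strength plays the role of the denoising strength slider and controlnet_conditioning_scale the role of the control weight; the file names, prompt, and seed below are placeholders.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

# Here the init image both seeds the generation (img2img) and supplies the Canny guide.
init_image = Image.open("hand_with_glass.png").convert("RGB")
edges = cv2.Canny(np.array(init_image), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    "a hand holding a glass, flower growing inside a glass, blooming, photorealistic",
    image=init_image,                                      # img2img source
    control_image=control_image,                           # Canny edge guide
    strength=0.75,                                         # denoising strength
    controlnet_conditioning_scale=0.5,                     # ControlNet control weight
    generator=torch.Generator("cuda").manual_seed(1234),   # reuse a seed to compare runs
).images[0]
result.save("flower_in_glass.png")
```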
For the second example I wanted to show you, I first generated an AI image in Stable Diffusion, picked the one I liked, and sent all of its parameters to the img2img tab. The reason we do this is to retain all the settings: the prompt, the seed, all of it. Then I enable ControlNet Canny and add the color of her hair to the prompt, and it retains basically the same image. A few things are different, of course: some facial features, the colors of the dress and the surrounding areas. But mostly it is very, very close to the original, and if you play around more you could probably get an even better result.

Now, if we really want to get creative and make something interesting, we're going to change the image in img2img. Here I dropped in the glass, but what's important to notice is that when we go down to Canny, we need to enable it and use Upload independent control image; that is the important part. Then we throw in our witch. Now, from img2img with a denoising strength of 0.85, we have the hand with a glass, but underneath we have the outline of a girl, and we have the prompt of a girl. So when we begin generating, the image's colors, values, and whatever else Stable Diffusion retains get applied on top of our generation. Here I started throwing in different kinds of images, lowered the denoising value a little to 0.6, and started generating again, and now we're getting some really interesting results. We still get the same witch and the same composition, but the colors, the vibe, and a lot of other details are very different. This is totally something fun that you can play around with.

Now let me show you inpainting real quick. We're going to throw the same image into the inpaint tab, and let's say we want to change the color of the dress to red while keeping the overall design and shape of the dress. To make such a big color change we probably need a high denoising strength; I used 0.9. When I run this generation with Masked Only enabled but without ControlNet, the color of the dress changes, but it no longer matches the rest of the body and it's a new dress entirely. So instead I enabled ControlNet Canny and got the dress to be more red but with the same design, which is exactly what we wanted (see the code sketch at the end of this transcript). That's a pretty fun example, isn't it?

Now, here are a few more things. First, you can play with the control weight. In this case I dropped in this image, kept all the same settings, but changed the prompt to a girl having silver hair and wearing a dress. If we change the control weight of Canny to 0.1, we basically get an entirely new image. At 0.4 we have more of a resemblance to the first image, though still with quite a lot of changes. And at 1 we keep exactly the same composition, just changing some other things. That's something to play around with for your own purposes.

I think Canny is great for many creative uses, but my favorite is probably composition. I used a new prompt and a new checkpoint every single time, and it created a totally unique image while retaining the overall composition from the original Canny image. Even if the clouds become something else entirely, the shape of something there remains, thanks to Canny.

I hope you enjoyed this video. I'm planning to make a series on ControlNet, so stay tuned for that. And hey, if you're still watching, check out this one next. Cheers.
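As promised, here is the code sketch for the inpainting example: a rough diffusers approximation of the masked-only web-UI workflow, assuming a placeholder dress mask, placeholder file names, and the same Canny ControlNet checkpoint as before; it is not the exact setup from the video.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

init_image = Image.open("witch.png").convert("RGB")
mask_image = Image.open("dress_mask.png").convert("L")   # white = area to repaint

# Canny edges of the original keep the dress design while the color changes.
edges = cv2.Canny(np.array(init_image), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    "a girl wearing a red dress, silver hair",
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
    strength=0.9,                       # high denoising for a big color change
    controlnet_conditioning_scale=1.0,  # lower values (e.g. 0.1-0.4) loosen adherence to the edges
).images[0]
result.save("red_dress.png")
```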
Info
Channel: CreatixAi
Views: 2,764
Keywords: controlnet, stable diffusion, canny controlnet, creatixai, ai, artificial intelligence, canny, preprocessor, model, composition, colors, edges, outlines, tutorial, explained, examples, controlnet canny, stable diffusion tutorial, controlnet stable diffusion, stable diffusion ai, controlnet guide, ai art, controlnet automatic1111, canny model, controlnet canny tutorial, controlnet canny black white, controlnet canny explained, how to use canny controlnet, what to use controlnet canny for
Id: e6qz5QKVecc
Length: 11min 19sec (679 seconds)
Published: Fri Nov 17 2023