Mastering ComfyUI: Creating Stunning Human Poses with ControlNet! - TUTORIAL

Video Statistics and Information

Captions
Hello everyone and welcome back to Dreaming AI. My name is Nuked, and today we are going to learn about ControlNet and its various models. ControlNet is a Stable Diffusion model that allows you to replicate object compositions or human poses from a reference image. This is extremely useful when you want to depict a humanoid subject in a specific pose, but also when you want to change the style of an object while keeping its main layout. Before ControlNet, you had to spend a significant amount of time trying various prompts to achieve the desired position.

Personally, I use ControlNet in two ways: either with a reference image that already contains a complete subject, such as a photo or an image of a particular scene, or with posemy.art, which lets me create custom poses through a simple interface. How a human composition or pose is extracted from the image depends on the model used, and ControlNet has a variety of models designed to handle different situations, such as inpainting or extracting from manga or anime. The most commonly used models are undoubtedly three: Scribble, which turns our scribbles into an image; Canny, which extracts general edge features from an image; and OpenPose, which uses a well-known library for human pose detection and representation. You can download the models you're interested in, along with their YAML files, from the repository on the Hugging Face Hub (the link is in the description). Then place them in the appropriate folder in ComfyUI, under models/controlnet.

Now let's try a few examples. First, create a simple workflow with the usual essential components for image generation by loading the default graph, then add a ControlNet Loader node, which loads your model, and a ControlNet Apply node, which applies the control to your conditioning. For this first example we'll use OpenPose. ControlNet works through the prompt (conditioning) fields: the simple ControlNet Apply gives you access to only one conditioning input, usually used as the positive prompt, while the advanced version takes both positive and negative and also lets you decide exactly when the control is applied during sampling. For now, let's proceed with the positive prompt only and connect everything like this. Perfect.
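If you prefer scripting this graph instead of wiring nodes by hand, ComfyUI also accepts workflows in its JSON "API format" over HTTP. The sketch below is a rough, hedged outline, not the workflow from the video: node class names follow recent ComfyUI builds, and the checkpoint, ControlNet model, and pose-image filenames are placeholders you would replace with your own files. It builds the same graph described above, the default text-to-image pipeline plus a ControlNet Loader and a simple ControlNet Apply inserted on the positive conditioning.

```python
# Minimal sketch of the workflow above in ComfyUI's "API format",
# submitted to a locally running ComfyUI server (default port 8188).
# File names (checkpoint, ControlNet model, pose image) are placeholders.
import json
import urllib.request

workflow = {
    # Default text-to-image components
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},                  # placeholder checkpoint
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a dancer on stage, detailed photo",
                     "clip": ["1", 1]}},                                 # positive prompt
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},  # negative prompt
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    # ControlNet: loader + simple apply on the positive conditioning only
    "5": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_v11p_sd15_openpose.pth"}},
    "6": {"class_type": "LoadImage",
          "inputs": {"image": "pose_from_posemy_art.png"}},              # exported pose image
    "7": {"class_type": "ControlNetApply",
          "inputs": {"conditioning": ["2", 0], "control_net": ["5", 0],
                     "image": ["6", 0], "strength": 1.0}},
    # Sampling and decoding
    "8": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["7", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "9": {"class_type": "VAEDecode",
          "inputs": {"samples": ["8", 0], "vae": ["1", 2]}},
    "10": {"class_type": "SaveImage",
           "inputs": {"images": ["9", 0], "filename_prefix": "controlnet_openpose"}},
}

# Queue the workflow on the local ComfyUI server.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```

Swapping the strength value or the pose image here corresponds to playing with the same parameters in the node editor.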
Now, for simplicity, I'll go to posemy.art and place a mannequin in one of the many poses the site offers. Then I'll press "export frame" on the frame I want to replicate and choose "export OpenPose with hands", since hands are important in this pose and I want them in the final image. Let's load the just-exported image into our workflow and set it as the image input of ControlNet Apply. Now let's try generating the image; feel free to play with the parameters until you achieve the desired result.

For the Scribble and Canny models the process is the same: you need inputs like sketches or images with well-defined lines, and to simplify your life, preprocessors are here to help. You can download them with the comfyui_controlnet_aux suite of auxiliary preprocessors. What are preprocessors? They are nodes capable of transforming a regular image into something that can be more accurately interpreted by our ControlNet models. To help you understand, there is a workflow you can download directly from the repository I mentioned earlier: load a test image and start the queue. As you can see, the preprocessors modify the image in different ways. Their output is then passed to the usual ControlNet Apply node, which uses the selected ControlNet model to guide Stable Diffusion toward a coherent image.

So let's go back to the previous workflow and try using an image like this to generate a similar one. I would like to use the advanced apply node this time. For poses only, I recommend using OpenPose, so that you have complete freedom over the look of your subject. As the preprocessor, however, I will use the DWPose preprocessor, which greatly improves body-position detection compared to the normal OpenPose preprocessor. As always, adjust the parameters to find the best result for you.

And that's all for today. I hope this tutorial helped you understand the basics of ControlNet. Please consider liking and subscribing if you found it useful, and if you have any questions, let me know in the comments below; I'll be happy to help you out as much as I can. Until next time, keep dreaming!
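For this last variant (reference photo, DWPose preprocessor, advanced apply), only a few nodes change relative to the sketch earlier. The fragment below is again a hedged outline: ControlNetApplyAdvanced is a built-in ComfyUI node, while the DWPreprocessor node comes from the comfyui_controlnet_aux pack, and its exact input fields are assumptions that may differ between versions of that pack.

```python
# Replacement nodes for the advanced variant: a reference photo is run through
# the DWPose preprocessor, and the result drives ControlNetApplyAdvanced, which
# conditions both prompts and lets you limit when the control is applied.
# Field names for DWPreprocessor are assumptions based on comfyui_controlnet_aux
# and may need adjusting to the installed version of that pack.
advanced_nodes = {
    "6": {"class_type": "LoadImage",
          "inputs": {"image": "reference_photo.png"}},           # photo of a person
    "11": {"class_type": "DWPreprocessor",                       # from comfyui_controlnet_aux
           "inputs": {"image": ["6", 0],
                      "detect_hand": "enable",
                      "detect_body": "enable",
                      "detect_face": "enable",
                      "resolution": 512}},
    "7": {"class_type": "ControlNetApplyAdvanced",
          "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                     "control_net": ["5", 0], "image": ["11", 0],
                     "strength": 1.0,
                     "start_percent": 0.0,   # apply the control from the first step...
                     "end_percent": 0.8}},   # ...and release it for the last 20% of steps
}
# ControlNetApplyAdvanced has two conditioning outputs, positive (0) and negative (1),
# so the KSampler would then take "positive": ["7", 0] and "negative": ["7", 1].
```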
Info
Channel: DreamingAI
Views: 15,062
Keywords: generate images, Controlnet, basic, canny, openpose, texttoimage, AI, stable diffusion, artificial intelligence, dreamingai, ai news, best free ai, best ai model, dreamingai tutorials
Id: w9fc3pIkl0w
Length: 7min 53sec (473 seconds)
Published: Sat Oct 21 2023