ComfyUI for Stable Diffusion Tutorial (Basics, SDXL & Refiner Workflows)

Video Statistics and Information

Captions
Hi. In the previous SDXL video I told you I would be shifting to ComfyUI, and I have had plenty of time to play with it. This is how it looks when you load it up, and this is me after playing around (oh wait, not the blank one, but this one), and after playing further, things get really complicated. So let me explain how to install it, run an image through multiple samplers including the SDXL refiner, and complicate your workflow in the simplest possible way. [Music]

The installation of the ComfyUI interface is effortless. Go to this URL (the link will be in the description below), scroll right down, and there is a direct link to download the 7-Zip file. You can install it manually, but I highly recommend the direct install; it's a straightforward method unless you want to go in for an AMD GPU on Linux. The direct link will let you run ComfyUI on your processor or on an Nvidia GPU, so let's go ahead and download that.

After you download it, you need 7-Zip or WinRAR to extract the files. 7-Zip is a free application and you can use that; however, I have WinRAR installed and will use it to extract the folder. Before you extract the folder, there might be a problem with your antivirus. I am using Norton, and as soon as I extracted it, Norton falsely flagged all the exe files in the python folder and quarantined them. So what I did was disable Norton, extract the folder, and add the entire folder as an exclusion from Norton's live scan. I already have Python and Git installed, but this download has Python embedded. I still recommend you download Git, and if ComfyUI does not launch, download Python as well, and remember to tick "Add to PATH" while installing it.

To launch ComfyUI using the CPU, click on the run_cpu.bat file; if you want to use your Nvidia GPU, click on the run_nvidia_gpu.bat file. If you go into the update folder, you can update ComfyUI by clicking on the update_comfy_ui.bat file, then launch it via CPU or GPU.

Inside the main folder you will find a models folder. All your checkpoints, VAEs, LoRAs, hypernetworks and upscale models go into their respective folders. If you are doing a fresh install, I recommend you download your models and put them in these folders before starting ComfyUI. If you are like me and are switching from Automatic1111, there is a way to reuse all the models from that interface: in the main folder you will find a yaml.example file. Right-click and rename this file to remove the .example part, then right-click again and open it with Notepad. In the base_path entry, put the path to your existing stable diffusion folder and keep everything else the same; ComfyUI will then pick up all the extra models you downloaded into that folder.

For a fresh install, you can download the SDXL models directly from Hugging Face (I will put the links in the description). Download the SDXL base 1.0 model, and the LoRA version if you want, although I won't use any LoRAs for this tutorial. Then go to the second link and download the refiner model. In this tutorial I will teach you how to run an image from SDXL through to the refiner in one go. I, however, had downloaded the models from Civitai for my Automatic1111 install, and they were the VAE-fix versions: just search for SDXL, open the first result with the green thumbnail, and download both the base VAE-fix and refiner VAE-fix versions. I will use these for the tutorial as I already have them, so to follow along it's better to download them from there; by the way, these are official uploads as well.
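As a quick way to confirm your model files ended up in the right place before launching, a few lines of Python run against the ComfyUI folder will list what each subfolder contains. This is only a convenience sketch: the root path is a placeholder for wherever you extracted the portable build, and the subfolder names are the stock ComfyUI models layout referred to above.

```python
from pathlib import Path

# Placeholder: adjust to wherever you extracted the ComfyUI portable package.
comfy_root = Path(r"C:\ComfyUI_windows_portable\ComfyUI")

# Stock subfolders under ComfyUI/models where each model type goes.
for sub in ["checkpoints", "vae", "loras", "hypernetworks", "upscale_models"]:
    folder = comfy_root / "models" / sub
    files = sorted(p.name for p in folder.glob("*")) if folder.exists() else []
    print(f"{sub:16s} -> {len(files)} file(s): {files}")
```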
Before we start, there is one more thing you need to do, and it's essential for the tutorial: we will be downloading the ComfyUI Manager. It's a custom node and is very useful. Go to the link in the description, scroll down to point 2, and copy that line. Now go back to the ComfyUI folder, open custom_nodes, right-click and open a terminal there, paste the point 2 line that we copied earlier, and hit Enter. I have already installed it, but for you, after it finishes the installation, you will see a ComfyUI-Manager folder. Once you are done, open one of the bat files and let's get to the basics of ComfyUI.

Let's start with the basics before proceeding, so you can understand things better. When you load up ComfyUI, this is the primary interface you get. I will be deleting all of this and creating my own workflow from scratch; that's the fundamental point of using ComfyUI. These boxes you see are called nodes. Each node has inputs and outputs that you connect, and you can create a complex workflow depending on your needs.

It all starts with the Load Checkpoint node. This is your basic checkpoint model; it can be a custom model from Civitai or the official SDXL model. Whatever safetensors files you have put in the models folder will appear here, and you can choose whichever trained model you want to use as a source. The Load Checkpoint node has three outputs: MODEL, CLIP and VAE. The model has to be connected to a sampler, like the KSampler shown here. The CLIP connects to two CLIP Text Encode nodes, which are your prompts; one will be positive and one negative. These two nodes have only one output, called conditioning, which you connect to the positive and negative inputs on the KSampler node. Now in the KSampler you can see the model is connected to the checkpoint node and the positive and negative inputs are connected to the positive and negative prompts.

We also need a latent image node, which in this case means an Empty Latent Image node, which you can see here. In this node we set the base resolution; for XL models we will go with 1024 by 1024. What exactly is this Empty Latent Image node? It's a node that creates a new set of empty latent images. These latents can then be used, in this case in a text-to-image workflow, by noising and denoising them with a sampler node, which is the KSampler in this workflow. You also set the batch size here, meaning how many images you want to create in one run.

Before I talk about the VAE in the checkpoint node, let's explain the sampler. Here you can choose your seed. If you keep this on randomize, you should first copy the seed, because after the image is generated it will randomize the seed for the next image, in case you like the image it generated. You can keep this at fixed, increment, decrement, or randomize: fixed will keep the seed fixed, while increment or decrement will increase or decrease the seed number by one after queuing the prompt. You can set the steps and CFG scale and also choose the sampler. Now, the scheduler here is a bit different: schedulers define the time steps, the points at which the sampler samples. In simple English, it's like a mathematical formulation; for example, if you select Karras here, the sampler spends more time sampling smaller time steps than normal. You can experiment with these, as the results vary. We want the denoise here to be 1, because the empty latent image is one hundred percent noise.

From the Load Checkpoint node, the VAE has to be connected to the VAE Decode node. The latent output from the KSampler connects to the samples input on that same node. Then we have an image output that can connect to a Preview Image or Save Image node.
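For readers who prefer text to node graphs, the same basic text-to-image graph can be written down in ComfyUI's API ("prompt") format, which ComfyUI also accepts over its local HTTP endpoint. This is a minimal sketch rather than the workflow from the video: the checkpoint filename, prompt text, seed and sampler settings are placeholders, and it assumes a ComfyUI server already running at the default address 127.0.0.1:8188.

```python
import json
from urllib import request

# Minimal text-to-image graph in ComfyUI's API ("prompt") format.
# Keys are node ids; links are [source_node_id, output_index].
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},   # placeholder filename
    "2": {"class_type": "CLIPTextEncode",                            # positive prompt
          "inputs": {"text": "a scenic mountain lake at sunrise", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                            # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "basic_workflow"}},
}

# Queue the prompt on a locally running ComfyUI instance (default address assumed).
req = request.Request("http://127.0.0.1:8188/prompt",
                      data=json.dumps({"prompt": graph}).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
print(request.urlopen(req).read().decode("utf-8"))
```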
The Save Image node will save the image to the output folder, and the Preview Image node will just let you preview it.

On the extreme right we have a Queue Prompt button to start the workflow process. Then we have a save option to save the workflow to a JSON file, and in the same way you can load a workflow; this allows you to create multiple workflows and then load and save them. I am not getting into clipspace now, as I won't use masking in this tutorial; I will cover that in future videos with other workflows. Clear will clear the board and Default will load the default layout with nodes. The Manager is what we installed via the terminal in the custom_nodes folder.

In the Manager, the first thing we need to do is install some custom nodes; you will understand why I am installing these later in the video. You need to search for and install only two custom extensions for now: the Nested Node Builder and the undo-redo node. The Nested Node Builder lets you combine multiple nodes into one, which helps with complicated workflows. ComfyUI doesn't come with undo or redo functionality; with this extension you can press Ctrl+Z to undo and Ctrl+Y to redo. Pretty useful stuff. Close this window and go back to the Manager. In a later video, when I cover custom Civitai models, the "Install Missing Custom Nodes" option will be very useful: when you download any workflow as JSON and load it up, this option will search for any nodes missing from that workflow and install them. You can also install models directly from the Manager instead of downloading them manually; these are the models I have installed. The models you install here go into the models folder. However, I have configured the yaml file to load all my models from my stable diffusion folder, so this is more useful for a fresh install than for reusing the models from your Automatic1111. You can update ComfyUI from here instead of from the folder as well. You can also fetch updates: Fetch Updates tells you if there are updates to any extensions, which are called custom nodes. If it notifies you that an update is available for an extension, you need to go to Install Custom Nodes again and filter by updates. After updating, you restart ComfyUI manually by closing and reopening the terminal.

Let's clear this and create everything from scratch with a basic workflow. Let me explain some shortcuts in the user interface before I proceed; it's better to understand these beforehand, as it's less confusing while I demonstrate the workflow creation. To add a node, you right-click and go to Add Node. Let's say you want to add the prompt text node for the CLIP; there are two ways to go about it. The first way is to right-click and add a node like before. To delete a node, click on it and press the Delete key. The second and easier way to add the node is to click on the CLIP output and drag the connection out; it gives you a list of all the compatible nodes for the CLIP output, and you simply select a CLIP Text Encode from there. Now, instead of doing this twice, there is a better method: select the node, then press Ctrl+C to copy it. If you want the new node to maintain the connections, don't press Ctrl+V; instead press Ctrl+Shift+V. If you don't want the connections, there is a better method: press and hold Alt, then select the node with the mouse and drag it to the position you want. You can also right-click and clone it, but I prefer the previous method.

Let's say you have two nodes (oops, I pressed Ctrl+V twice; let me delete the third one).
Now, if you want to combine these nodes, press Shift and select both of them, right-click, and then click on Nest Selected Nodes. This will combine the nodes into one node. This is useful for prompts, where you can have the positive and negative prompts in one node instead of two separate nodes. To separate them again, right-click and un-nest. Yeah, yeah, I know, I have an OCD issue. Let me show you a simple method to collapse these: you just left-click on the circle to collapse and expand them. Let's say you want to take a node output and connect it somewhere very far away; instead of doing a stretching exercise, you can add a Reroute node in the middle, then from its output drag and add the VAE Decode node. This helps keep things neat and clean. Now, if you want to deactivate any node without deleting it, right-click it and choose Bypass; this is useful in many scenarios, and you can right-click and bypass it again to reactivate it. I forgot to mention that there is one more shortcut to add a node: you can also double left-click, search, and add it.

We can now proceed with the workflow. To start a workflow from scratch, we first add a Load Checkpoint node. For the first workflow I am not using the SDXL base model; I will be using a model called PSI Animated XL from Civitai. I am organizing the workflow by colors and renaming the nodes; this helps you understand better what is going on in the workflow. Let's create positive and negative prompt nodes. I am adding the KSampler Advanced node here; I will explain later in the video why I used the advanced one. I will use a Preview Image node for now and add the Save Image node a bit later. Let's add a simple positive prompt and test whether the workflow works. ComfyUI shows the entire workflow process when I click Queue Prompt: the nodes get highlighted as they run, check it out. Let's try a batch of four.

I will nest the prompts into one node and add another KSampler. I am duplicating the checkpoint, prompt and VAE nodes and processing the same image through a duplicate sampler with some changes. Since the prompts and the Load Checkpoint are the same, the same image will be processed twice; just watch, and I will explain in a bit why I am doing this. I will also use the same method for running an image through SDXL and the refiner. Here we should connect the first sampler's latent output to the second sampler's latent image input. I made a mistake here: in the first sampler, return_with_leftover_noise should be enabled, not disabled. We want the noisy latent image to pass through the first sampler, then be handed to the second sampler and fully denoised there; I do this correctly in the next workflow. The second checkpoint's VAE should be connected to a separate VAE Decode node, and the second KSampler's latent output should also be connected to that same VAE Decode node. I am just selecting the nodes to show you how the workflow will flow once you queue the prompt.

Let me zoom in and show you the point of using two samplers here. You can use as many as you want, but let's start with just two. What I am doing here is changing the start step and end step for the samplers, so sampler 1 will end at step 50 and sampler 2 will start at step 50 and end at step 100.
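The step hand-off described above maps onto two KSamplerAdvanced nodes. Below is a minimal sketch of just those two nodes in the same API format as before; the surrounding checkpoint, prompt and latent nodes are assumed to exist with the ids shown, and the settings are placeholders. The key points are that the first sampler keeps its leftover noise and the second one does not add fresh noise.

```python
# Two chained KSamplerAdvanced nodes splitting 100 steps at step 50.
# Nodes "1"-"4" (checkpoint, positive/negative prompts, empty latent) are assumed to exist.
two_pass = {
    "10": {"class_type": "KSamplerAdvanced",
           "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                      "latent_image": ["4", 0],
                      "add_noise": "enable",             # first pass starts from pure noise
                      "noise_seed": 42, "steps": 100, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "start_at_step": 0, "end_at_step": 50,
                      "return_with_leftover_noise": "enable"}},   # hand off a still-noisy latent
    "11": {"class_type": "KSamplerAdvanced",
           "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                      "latent_image": ["10", 0],          # latent handed over from the first sampler
                      "add_noise": "disable",             # do not re-noise the handed-over latent
                      "noise_seed": 42, "steps": 100, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "start_at_step": 50, "end_at_step": 100,
                      "return_with_leftover_noise": "disable"}},  # finish denoising completely
}
```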
I recommend keeping the same sampler name in both, but you can change the scheduler to get some excellent results. The other thing you can do is use different checkpoints: you can mix and match two checkpoints and create some interesting results. Keep the end_at_step at 10,000 and be careful here; playing with the steps might produce bizarre results, so experiment with it. You can also change the prompts while using the same checkpoint, so you run one prompt up to a certain step and then use a different prompt from that step until the end step you set. These are some examples of how you can play around with two samplers, mixing prompts and mixing checkpoints. However, the best way to go about it is to use the same checkpoint model with different prompts, and you can weight each prompt by defining the steps. Once you get the desired image from the second sampler, you can add a Save Image node along with the Preview Image node on the second sampler's VAE image output, then run the prompt once more; this time it will save the image you want into the output folder inside the main ComfyUI folder.

Let's proceed to the SDXL and SDXL refiner workflow, as that's quite different. For SDXL and the refiner there are some differences. I will fast-forward most of the video, as I have covered the workflow in the previous segment; however, whenever there is something new, I will pause and explain. Create two checkpoint nodes: load one with SDXL and the other with the SDXL refiner. The prompt input for SDXL is different; the best way to go about it is to double left-click, search for "sdxl", and add the two SDXL CLIP Text Encode nodes. In the SDXL prompt node you can see a CLIP-G and a CLIP-L text input. What we want to do here is right-click and convert text_g and text_l to inputs, then drag from text_g and create a new node under utils > Primitive. The string output of the primitive node we created should also be connected to text_l. Now duplicate this and change the values from 1024 to 4096. Do the same procedure as before; this will be our negative prompt. Connect the SDXL positive and negative nodes.

In the same way, add the SDXL refiner CLIP Text Encode. You will notice that it's a bit different: the ascore here is the aesthetic score weight for the refiner prompt. For the positive prompt I am leaving it at 6 and for the negative prompt at 4; just play around with these, as different values give different results. Here too you should convert text to an input, then link the positive prompt string to the text input on the refiner node, and connect the refiner CLIP to the positive and negative text encode nodes. Now I am adding two KSamplers, one for the refiner and one for the base. I am nesting the nodes to simplify the workflow and make it a bit neater. For the latent image we can change the resolution to 1024.
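For reference, the two SDXL-specific prompt encoders mentioned above look roughly like this in the same API format. This is only a sketch: nodes "20" and "21" are assumed to be the base and refiner checkpoint loaders, the prompt text is a placeholder, and exactly which size widgets get bumped from 1024 to 4096 follows the video's on-screen tweak, so treat that assignment as an assumption.

```python
# SDXL base prompt encoder and refiner prompt encoder (positive side only).
# Nodes "20" (base CheckpointLoaderSimple) and "21" (refiner CheckpointLoaderSimple) are assumed.
sdxl_prompts = {
    "30": {"class_type": "CLIPTextEncodeSDXL",
           "inputs": {"clip": ["20", 1],
                      "text_g": "a scenic mountain lake at sunrise",  # same string fed to both
                      "text_l": "a scenic mountain lake at sunrise",  # text_g and text_l inputs
                      "width": 4096, "height": 4096,                  # bumped from 1024 per the video
                      "crop_w": 0, "crop_h": 0,
                      "target_width": 1024, "target_height": 1024}},
    "31": {"class_type": "CLIPTextEncodeSDXLRefiner",
           "inputs": {"clip": ["21", 1],
                      "text": "a scenic mountain lake at sunrise",
                      "ascore": 6.0,                                  # aesthetic score, positive prompt
                      "width": 1024, "height": 1024}},
}
```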
As I did previously, add two separate VAE Decode nodes, one for the base and the other for the refiner, then connect the latent output from the first KSampler to the latent image input on the second KSampler. Add the negative and positive prompts. Let's do a quick test. You can see the refined image is a bit overdone, so I will play around a little until the preview is perfect, then add the Save Image node. Let's settle on these images. I will now add an upscaler called 4x-UltraSharp, and lastly add the Preview and Save Image nodes for it; the upscaler must be connected to the second VAE Decode node. Queue the prompt and check out the images: the first image is the base, the second is refined, and the third is upscaled.

So this is a simple workflow that you can build with ComfyUI. I did some further experimenting with this: for the first checkpoint I used a custom model and then refined it via the SDXL refiner, which worked well. To conclude, I know this video was long, but I covered the basics along with two workflows. ComfyUI is highly flexible: you can use multiple checkpoints via multiple samplers, or a single checkpoint via multiple prompts and samplers. The combinations are endless. I will make many more videos and workflows about ComfyUI in the coming weeks, covering things like how to upscale any image using ComfyUI, how to use Civitai workflows, and most importantly a complete tutorial on ControlNet for ComfyUI. Comparing ComfyUI to Automatic1111: honestly, I like it, but I would not replace Automatic1111 with it. Some things are better and more flexible here, but there are other things I prefer to do in Automatic1111. It basically boils down to your workflow, and for me, using both is essential. Thank you for watching, and I hope the tutorial was simple and easy to understand. Until next time, thank you. [Music]
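As a reference for the upscale stage described in the captions above, a model-based upscaler plugs in after the refiner's VAE Decode roughly like this. This is a sketch only: node "40" is assumed to be the refiner-side VAEDecode node, and the .pth filename is a placeholder for whatever upscale model you placed in models/upscale_models.

```python
# Model-based upscale stage appended after the refiner's decoded image.
# Node "40" is assumed to be the refiner-side VAEDecode node.
upscale_stage = {
    "50": {"class_type": "UpscaleModelLoader",
           "inputs": {"model_name": "4x-UltraSharp.pth"}},            # file in models/upscale_models
    "51": {"class_type": "ImageUpscaleWithModel",
           "inputs": {"upscale_model": ["50", 0], "image": ["40", 0]}},
    "52": {"class_type": "SaveImage",
           "inputs": {"images": ["51", 0], "filename_prefix": "sdxl_refined_upscaled"}},
}
```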
Info
Channel: Control+Alt+AI
Views: 3,044
Keywords: comfyui, stable diffusion, stable diffusion tutorial, comfyui nodes, comfyui tutorial, comfyui guide, local comfyui, nodes comfyui, comfyui basics explained, comfyui explained, comfyui sdxl, comfyui local install, comfyui installation, comfy ui, ai comfyui, sdxl, sdxl 1.0, sdxl workflow, guide comfyui, comfyui workflow, comfyui local, sdxl comfyui, comfyui workflow with sdxl nodes, comfyui node tutorial, comfy ui tutorial, comfyui node workflow, comfyui custom nodes
Id: mUqzA5D0k9E
Length: 53min 24sec (3204 seconds)
Published: Sat Aug 26 2023