Stable Cascade: Official ComfyUI Workflow is Already Here

Video Statistics and Information

Captions
It has not even been one week since Stable Cascade was released, and the official ComfyUI workflow is already here. No, it is not the limited-functionality workflow that you can access via Hugging Face or the Pinokio app; rather, it is a complete workflow where you can control every aspect of the image generation, including even adding custom nodes. After I show you how you can run this one on your own PC, I shall be doing a speed test against SDXL, and that should clear your doubts about whether you really need 20 GB of VRAM or not.

To get started, you should first go to this GitHub link, where they have given proper instructions about where you can download the models from and which folder each model should go into; the workflow is also included. Just click on this button to download. I'm going to put the downloaded file into a folder named "stable cascade" and extract it there. It comes with two files: the instructions and the workflow.

Now there are two ways you can download the necessary models. The first one is the plain old method: go to this link and download the safetensors files. You are going to need four files for this model to run. The first one is the stage A model; it is quite tiny, 73 MB. For each of the B and C stages there are four variants of the model available: the normal model, the bf16 model, the lite model, and the lite bf16 model. They come with significant differences in size, and the VRAM requirement gets lower as well. The fourth model that you need to download can be found in this text_encoder folder, this model.safetensors file; you can see it is mentioned in here. Once you download the models, the stage B and stage C models should go inside the unet folder under models inside the ComfyUI folder, the stage A model goes into the vae folder, and the text encoder model goes inside the clip folder.

Now, if you don't want to go through all that hassle, launch ComfyUI and we can download the models directly inside it. First you have to update ComfyUI if you haven't already; you can update it by clicking on this button here. Click on Install Models, and in the search bar type "stable cascade"; all the available Stable Cascade models are already here. Download this stage A VAE model first and this text encoder model. Then you can select from the different variants of the stage B and stage C files; just make sure to download the same variant for both of them. I have installed the bf16 models for stage B and stage C because they are a little less hungry for VRAM, and we are going to see if I can run those models on my 12 GB RTX 3060. Once the installation is complete, click on this refresh button to get them ready.

I'm going to load the Stable Cascade workflow now. In the Load VAE node we load the Stable Cascade stage A file, and for the Load CLIP node there is the Stable Cascade model.safetensors file.
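If you prefer to script that manual download route instead of clicking through Hugging Face, here is a minimal sketch using the huggingface_hub library. The repo id "stabilityai/stable-cascade", the bf16 filenames, and the local ComfyUI folder layout are assumptions based on the steps described above, so adjust them to whatever the instructions file actually lists.

```python
# Minimal sketch: fetch the four Stable Cascade files into the ComfyUI model
# folders mentioned above. Repo id, filenames, and paths are assumptions.
from pathlib import Path
from huggingface_hub import hf_hub_download

REPO = "stabilityai/stable-cascade"          # assumed Hugging Face repo id
COMFY_MODELS = Path("ComfyUI/models")        # adjust to your ComfyUI install

# (filename inside the repo, ComfyUI sub-folder it should land in)
FILES = [
    ("stage_a.safetensors", "vae"),               # stage A (VAE)
    ("stage_b_bf16.safetensors", "unet"),         # stage B, bf16 variant
    ("stage_c_bf16.safetensors", "unet"),         # stage C, bf16 variant
    ("text_encoder/model.safetensors", "clip"),   # CLIP text encoder
]

for filename, subfolder in FILES:
    target_dir = COMFY_MODELS / subfolder
    target_dir.mkdir(parents=True, exist_ok=True)
    # local_dir keeps the repo's sub-path, e.g. clip/text_encoder/model.safetensors
    path = hf_hub_download(repo_id=REPO, filename=filename, local_dir=target_dir)
    print("downloaded", path)
```

Because local_dir preserves the repo's sub-path, the text encoder ends up under models/clip/text_encoder/; you may want to move or rename it to something more recognisable directly inside models/clip so it is easier to pick in the Load CLIP node.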
In the case of those UNet loaders, the top one takes the stage B model and the bottom one takes the stage C model. How do I know? I actually tested it by alternating the models, and this error showed up.

First I'm going to use the bf16 models. Let's go with the given prompt and hit Queue Prompt. Given that this model is supposed to be an upgrade over SDXL, this is really fast, and the image quality from the model looks really promising considering this is just the base model. And as their research paper shows, training is going to be much faster for this model, which means we should be getting much better trained community models from this one compared to SDXL.

Let's change the models and use the original large-size models. In my experience these bf16 models perform just as well as, if not better than, the originals, but let's try it out anyway. It does not seem much different from the bf16 result. This prompt got executed in about 29 seconds, and this is a 1024 x 1024 image. Let me show you my graphics card: an NVIDIA GeForce RTX 3060, which is a 12 GB model, and it takes only about 30 seconds for Stable Cascade to create a 1024 x 1024 image. I've been seeing this comment in all the forums that you need at least 20 GB of VRAM to successfully run this model, which is not at all true. If my 12 GB card can run the largest models, then my guess is that even with 8 GB, or maybe even with 6 GB of VRAM, you can run the smallest models, and those are going to perform pretty well. If you have an 8 GB or 6 GB graphics card, let me know in the comments how Stable Cascade performs on your PC.

Now it's time for the speed test. After I do the speed test, I'm going to show you how you can use this style selector node to take advantage of all the styles available in Fooocus right inside ComfyUI with Stable Cascade. We have seen Stable Cascade takes around 30 seconds for a 1024 x 1024 image. Now I'm going to load the SDXL workflow, which includes both the base and refiner models along with the refiner loader node; I shall be putting the workflow link in the description. Let's run this model with the same settings of 1024 x 1024: the prompt executed in 32.22 seconds. With 12 GB of VRAM, Stable Cascade beat SDXL by about 3 seconds. Let's try another prompt; this time we are going to use the same prompt for both models. Keep in mind that every time you load a model for the first time, it is going to take a little more time than it usually does. With this prompt SDXL again took around 32 seconds, whereas Stable Cascade took around 29 to 30 seconds. There you have it: even the largest Stable Cascade models run faster than SDXL on a 12 GB graphics card.

Let us put the Stable Cascade and SDXL images side by side to find out if there really is any quality difference. On the left is the SDXL generation and on the right is the Stable Cascade generation. Let's zoom into both images: when zoomed in, Stable Cascade shows noticeably more clarity, or sharpness you might call it, compared to the SDXL image. In these next two images as well, you can see the quality of Stable Cascade is visibly superior to SDXL. Again, similar generations from Stable Cascade show much more minute detailing compared to SDXL.

Now, this workflow is very kindly brought to the community by a YouTuber named John Artist Studio; you should visit his channel, he has some really good videos there.
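As an aside on the timing numbers above: if you want to measure them without a stopwatch, here is a rough sketch that queues a workflow through ComfyUI's HTTP API (the default server at 127.0.0.1:8188) and times it. The workflow_api.json file is a hypothetical export of the workflow in API format; this script is not part of anything shown in the video.

```python
# Rough sketch: submit a workflow to a running ComfyUI instance and time how
# long it takes to finish. Assumes the default API at http://127.0.0.1:8188
# and a workflow exported in API format as "workflow_api.json".
import json
import time
import urllib.request

SERVER = "http://127.0.0.1:8188"

def queue_prompt(workflow: dict) -> str:
    """Submit a workflow (API format) and return its prompt id."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{SERVER}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

def wait_for(prompt_id: str, poll_seconds: float = 0.5) -> None:
    """Poll /history until the prompt shows up there, i.e. execution finished."""
    while True:
        with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:
            return
        time.sleep(poll_seconds)

with open("workflow_api.json") as f:
    workflow = json.load(f)

start = time.perf_counter()
wait_for(queue_prompt(workflow))
print(f"Prompt executed in {time.perf_counter() - start:.2f} seconds")
```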
First, you download the workflow from the given link and load it into ComfyUI. If some nodes don't show up and instead turn into red blocks like this one, then you have to install them first: go to the ComfyUI Manager, click on Install Missing Custom Nodes, and all the missing nodes will show up there. Just install them one by one, then refresh, close ComfyUI, and run it again; now it should start looking something like this. The first UNet loader takes the stage C model and the second one takes the stage B model; I shall change the titles of the nodes in the given workflow. From this style selector node you can select any style that you want. Let's try something like SAI Fantasy, and SAI Anime with this one; it automatically adds a second layer of prompt to apply the style to the image. The new image does not look much different from the previous one, so let's try changing the style again. This one has made a more prominent change compared to the previous one. And if this is just the base model, I can't really wait, not only for community-trained models but also for the other features of Stable Cascade, like face identity and super-resolution.
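The style selector behaviour described above, where a second layer of prompt is wrapped around your own prompt, is typically implemented as a simple template substitution. Below is a minimal sketch of that idea; the apply_style helper and the sample sai-anime entry are illustrative and are not the actual node's code or the real style text.

```python
# Minimal sketch of SDXL/Fooocus-style prompt styling: each style is a
# template containing a "{prompt}" placeholder plus an optional negative
# prompt. The style text here is made up for illustration.
def apply_style(style: dict, positive: str, negative: str = "") -> tuple[str, str]:
    """Substitute the user's prompt into the style template."""
    styled_positive = style["prompt"].replace("{prompt}", positive)
    parts = [p for p in (style.get("negative_prompt", ""), negative) if p]
    return styled_positive, ", ".join(parts)

styles = {
    "sai-anime": {  # hypothetical entry; real styles ship as JSON files
        "prompt": "anime artwork of {prompt} . anime style, key visual, vibrant",
        "negative_prompt": "photo, deformed, black and white, realism",
    }
}

pos, neg = apply_style(styles["sai-anime"], "a knight guarding a neon city gate")
print(pos)
print(neg)
```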
Info
Channel: Debargha Bhattacharjee
Views: 3,652
Keywords: generative ai, a.i. art, Stable diffusion, stable diffusion tutorial, ai, generative art, ai image, Stable cascade, stable cascade install, comfyui, comfyui workflow, stable cascade workflow, stable cascade comfyui, stable cascade comfyui workflow, latest stable diffusion update
Id: XuwJfNuyHTE
Length: 7min 23sec (443 seconds)
Published: Sat Feb 24 2024