Style Transfer Using ComfyUI - No Training Required!

Video Statistics and Information

Captions
Ever wanted more control over the style of your Stable Diffusion generations? If only you could just show it an image and say, "Hey, do it like this." Wouldn't that be easier than messing with text prompts? I guess it would sort of be like visual style prompting. Oh right, that's what this is. Yes, we've seen things like this a few times before, but check out how it compares to the likes of IP-Adapter, StyleDrop, StyleAligned and DreamBooth LoRA. Those clouds just knock the socks off the competition; I don't think the others look like cloud formations at all. The fire and painting style examples all look great too.

So how can you test this for yourself? Well, for those of you without the required computing power at home, they have provided two Hugging Face Spaces: one they call "default" and another with ControlNet. You can also run them locally, which is what I'm doing here. Let's take a look at the default one first. They've got some examples down the bottom, so I'm going to go with the cloud one for speed, and also because I like clouds. All right, let's just submit that and see what comes out. Awesome, we've got a dog made of clouds. Okay, I can see where the error is: that needs to be a rodent. There we go; it came out as a dog because the prompt said "dog". With that fixed, let's have a look at another. That's much better: now I've got a rodent made out of clouds. Their ControlNet version works just the same, only this time it's guided by the shape of another image via its depth map. Let's do the same thing here, so we've got the clouds plus a robot to guide what it looks like. How does this come out? Whoa, yeah, sky robots! Interestingly, in this one they default to not having a prompt when using that ControlNet.

If you don't fancy using the original repository, there's also an extension available for ComfyUI, and being ComfyUI, you can just integrate it into the workflow of your choice. It's worth noting, however, that this is a work in progress at the time of making this video, so do expect things to change in the future. For me at least, it works pretty well as it is right now, so let's take a look at it in action. Installation is just the same as any other ComfyUI extension: via git clone or the ComfyUI Manager. Once downloaded, restart and you'll have your new visual style prompting node available. If you'd like the exact same workflows I'm using, they're already available to patrons, or of course you could just make them yourself by looking at the video.

Okay, a quick overview. Everything here is just the standard stuff, with the exception of our new node: Apply Visual Style Prompting. Starting in the top left, I'm just loading Stable Diffusion models, and underneath that I've got the prompt and the sizes. Up here I'm doing a little bit of basic image captioning. That's mostly because each time I changed the image I didn't want to have to type in a description of it, so I just got BLIP to do it; that way I could flick through quickly and see how styles came out. It works a lot better if you type your own captions in, so I've got a little toggle to choose between automatic captions and your own. For the most part you can leave that empty as well; you don't necessarily even have to type a prompt in. Just underneath that is the style loader for the reference image, and also the Apply Visual Style Prompting node, the main star of this event. Finally, over on the right is just a default generation, showing how the render would look without the visual style prompting applied, and I think you should be able to see quite the difference. I'm prompting for a colorful paper cut art style cyberpunk space woman's face, and it should be pretty obvious just how different the visual style prompted generations down in the middle are, because, well, they look like the style image I've put up here: all colorful and paper-cutty. So have a little zoom in there. I think that's quite nice; it's done very well on the style.

You can of course provide any style image you like, so if I change that one to a different style and render again, we have the new style applied. It's a lot darker, with lots more blue, and yeah, I think that's really awesome again. Excellent, so now you can style your generations just by providing an image. But what about other nodes? Does it play well with those? It sure does. Here's an IP-Adapter example where I'm using the full face model and this new input image, along with the same style as before. Once again it's sort of merged the two, and it looks like a great mix to me: she's got those ears and green hair as well as that colorful paper cut style and white background. ControlNet also appears to work okay, like they show in this example.

However, this is where I started to feel some strangeness sneaking in. So far everything looked to be working fine with Stable Diffusion 1.5, but when I scroll down a little, look at these cloud rodents. Can you see something slightly different about these? Yes, they're really colorful, whereas the reference clouds are white. Just to zoom out a little, I've changed the prompt so it simply says "a rodent", like we did in the original application earlier, but they're all colorful, so something's not quite working there. But the original did use SDXL and I'm using SD 1.5, so is it different if I use SDXL? Let's take a look and find out. This is the same workflow, only this time of course I'm using SDXL models. The default cyberpunk space woman is what I've prompted for once again, and she looks pretty cool over there in the default group, and over here in the style group it looks to have applied nicely as well. Further down, once again the IP-Adapter SDXL version is doing its thing. But what about our cloud rodent? How has he come out? Oh, oh yes, that's much better: certainly much more cloud-like, though not quite like in the original app. My guess as to why I got all those colors in Stable Diffusion 1.5 but not in SDXL is, uh, I guess it's something to do with 1.5? Not really sure, as that's pretty much all that changed.

If all of that looks cool but you're not sure how to install ComfyUI in order to use these workflows, it's all explained in this next video.
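For the curious, here is a minimal sketch of the idea behind visual style prompting. The video doesn't cover the mechanism, but the project's paper describes swapping self-attention keys and values: during generation, later self-attention layers keep the generation's own queries (content) while attending over keys/values computed from the style reference image, so texture and colour statistics come from the reference. All names and shapes below are illustrative, not the repository's actual API:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(q, k, v):
    # Standard scaled dot-product attention over token features.
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def style_swapped_attention(q_gen, k_style, v_style):
    # The swap: the generation's queries attend over the style image's
    # keys/values instead of its own, injecting the reference style.
    return self_attention(q_gen, k_style, v_style)

rng = np.random.default_rng(0)
q = rng.normal(size=(16, 8))                  # queries from the generation
k_s = rng.normal(size=(32, 8))                # keys from the style image
v_s = rng.normal(size=(32, 8))                # values from the style image
out = style_swapped_attention(q, k_s, v_s)
print(out.shape)  # (16, 8): one style-mixed feature per generation token
```

In the real method this swap is applied only in the later (upsampling) self-attention blocks of the UNet, which is why content layout still follows the prompt while the look follows the reference image.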
Info
Channel: Nerdy Rodent
Views: 13,949
Keywords: stable diffusion, Visual Style Prompting, visual-style-prompting, python, ai
Id: Lbk2uoCVTQs
Length: 7min 15sec (435 seconds)
Published: Sun Mar 17 2024