Lesson 4: Img2Img Painting in ComfyUI - Comfy Academy

Video Statistics and Information

Captions
Next we're going to look at one of my favorite things to do with AI, and that is using an image as an input. To follow along with my workshop, you can download my workflow on OpenArt. You'll see a download button there, and above it a green button that says "Launch workflow", which lets you run my workflow in the cloud for free. When you click that button it takes a little while, and then it opens this window. Everything I've built for you is in here; you don't have to download or install anything, it's all prepared for you. Again, to activate this part of the workflow, right-click on the bar at the top of the group and set the group nodes to "Always". Now it's active.

Let's take a quick look at what I'm doing here. I have my checkpoint, my two prompts, and my KSampler, but instead of an empty latent image I'm using an actual image: there's a Load Image node feeding into a VAE Encode node. The VAE Encode turns my pixel image into a latent image, so the latent carries the information of what I painted in a very simple manner.

Let's talk for a second about why I'm doing this. This is basically image-to-image rendering. It lets you start from a very simple drawing, but through it you can define the colors and the composition that should be in your image; you can think like an artist and work with different elements. The cool thing is that you can get super detailed with your drawing if you want, so you can put a lot of information in there. If you're good at digital painting, this is really nice for you, but even if you're not, you can still paint a very crude image and give the AI an idea of what you want. Another really cool thing is that you can go beyond what the model itself can actually do.
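The node setup just described (Load Image → VAE Encode → KSampler → VAE Decode) can be written out in ComfyUI's API ("prompt") JSON format, roughly as below. This is a sketch, not the downloadable workflow itself: the node class names match stock ComfyUI nodes, but the checkpoint filename, image name, negative prompt, seed, and sampler settings are placeholders I've filled in for illustration.

```python
# Minimal sketch of the img2img node graph in ComfyUI's API (prompt) format.
# Each key is a node id; list values like ["1", 2] mean "output slot 2 of node 1".
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},           # placeholder name
    "2": {"class_type": "CLIPTextEncode",                          # positive prompt
          "inputs": {"text": "beautiful female angel with feathery white wings",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                          # negative prompt (placeholder)
          "inputs": {"text": "worst quality", "clip": ["1", 1]}},
    "4": {"class_type": "LoadImage",                               # the crude painted input
          "inputs": {"image": "crude_drawing.png"}},               # placeholder name
    "5": {"class_type": "VAEEncode",                               # pixel image -> latent
          "inputs": {"pixels": ["4", 0], "vae": ["1", 2]}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["5", 0],                     # not an empty latent
                     "seed": 0, "steps": 20, "cfg": 7.0,           # placeholder settings
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.78}},                            # the value from the video
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "img2img"}},
}
```

The only structural difference from a plain text-to-image graph is nodes 4 and 5: an EmptyLatentImage node is replaced by Load Image feeding a VAE Encode, whose latent output goes into the KSampler's latent_image input.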
That's especially true for compositions, because models are often trained in a way that is kind of static: they tend to create the same centered, similar-looking composition over and over. With this technique you take a lot more control, and it doesn't even need ControlNet or anything like that; you're just using image-to-image rendering.

Let's go back into our workflow and look at what's actually happening. You can see I made a crude drawing: something with wings, and a color for the head so it's easier for the AI to understand that this is the head. If you want, you can even paint a little bit of a face on there so it knows the head is facing you. Then there's the color of the clothing I want, and a little bit of structure at the bottom. In the prompt I describe the scene I want from this. This is important: because the image is so basic and crude, you can do a lot of different things with it depending on what you put into the prompt, since the AI makes its own interpretation of the crude drawing. In this case my prompt reads: "beautiful female angel with long blonde hair with feathery white wings wearing a red toga, city in the background, masterpiece, best quality". If you want feathery white wings, it's a good idea to draw a few squiggly lines into the wings to give them more of a feathery structure; this can help make them even more feathery.

This image goes through our VAE Encode into the KSampler as a latent image. It's important to notice that the denoise here is high, but it is not 1.0; in this case I'm using 0.78. Play around with different denoise values to see what you get from them. The closer the denoise value is to 1.0, the more freedom you give to the AI.
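As a rough conceptual sketch of what the denoise value does, you can think of it as controlling how much of the input latent survives in the sampler's starting point. This is a simplification I'm adding for intuition, not ComfyUI's actual scheduler math (which noises the latent according to the sampler's sigma schedule):

```python
import numpy as np

def noise_latent(latent: np.ndarray, denoise: float,
                 rng: np.random.Generator) -> np.ndarray:
    """Conceptual img2img starting point: mix the encoded input latent with
    Gaussian noise. denoise = 1.0 discards the input entirely (pure
    text-to-image); lower values keep more of the input's structure."""
    noise = rng.standard_normal(latent.shape).astype(np.float32)
    return (1.0 - denoise) * latent + denoise * noise

rng = np.random.default_rng(0)
# Toy 4-channel latent for a 768x512 image (latents are 1/8 pixel resolution).
latent = np.ones((4, 96, 64), dtype=np.float32)

almost_free = noise_latent(latent, 0.78, rng)  # the video's setting: mostly noise
faithful    = noise_latent(latent, 0.20, rng)  # sticks close to the input drawing

# Lower denoise -> starting point stays closer to the input latent.
print(np.abs(faithful - latent).mean() < np.abs(almost_free - latent).mean())  # True
```

At 0.78 the sampler still "sees" the colors and composition of the drawing, but has enough freedom to reinterpret the crude shapes as wings, clothing, and a face.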
The lower it is, the closer the result will stick to the image we used as input. After that, of course, we do a VAE Decode, and then we have our output image. You can see the output is pretty amazing, but it might not be perfect for your purposes; admittedly, one issue is that because we don't use ControlNet, we don't have a specific pose for the body. You can of course draw the input image in more detail, but most of the time you just want to squiggle and doodle around a little, then render multiple images. We're still rendering at a low resolution, 512 by 768, so the render process itself should go pretty quickly. So let's hit the Queue button and see what we get next.

Here we have a different image based on the exact same input. It looks completely different, but it's still very beautiful. And the last one is again very different, but also beautiful and impressive. Even though there's a lot of variation in the output, I love this, because it gives me images I usually don't see the models create on their own, and at the same time I have control over the input for these images. Now I've changed my prompt to ask for a female cyborg with feathery wings and red armor, and as you can see this has changed the image, and we still get an amazing output. Here's another variation of that: again pretty beautiful, very impressive. And here's another output from the same prompt; the two flying feathers in the sky are a little strange, but the rest is pretty amazing. I love the pose and the dynamic, everything about it. Because there's so much variety, I can play with it artistically, find things that interest me, and go deeper into them. For example, you can take one of these outputs and use it as the input for further outputs, so this can go deeper and deeper; it's like a rabbit
hole that you can play with, and it can be super creative and very inspirational.

But this isn't the only thing you can do with an image input. I want to show you something with an even simpler input that lets you do even more amazing things: using a gradient. Here I'm also adding a little extra tweak to the workflow to give it a bit more freedom and make the output a bit more interesting. We have the exact same start: our model and our text prompts, and then we load an image, but in this case the image is a gradient going from red to blue (and of course, in between red and blue is violet). I encode this into a latent image, but on its own this latent is a little too strong for what we want: the color would take over the image, and we don't want that. So I also create an empty latent image at the same resolution as the gradient, 512 by 768, and feed both latents into a Latent Blend node, which simply blends the two together. As I told you before, I'm doing a little bit of probing here: the Latent Blend output also goes into a VAE Decode to give me a preview. It looks pretty much the same, just a little less colorful, because we're mixing in some of the empty latent on top. We then put the blend of the two latents into the latent input of our KSampler, and the rest renders as before: VAE Decode, then our image. Look at the amazing output you get from just a gradient. The cool thing is that we have control over the colors: even though the result isn't literally orange or red or violet or blue, the gradient is still in there. You can see that the top part of the image is warmer, and on the lower part the
image starts to become cooler. So we do have a sunset, but we also have a bit of blue hour in there. Now, instead of a gradient from orange to blue, let's use a different gradient. Click on the Load Image node, choose my other gradient, which goes from a reddish orange to a more yellowish orange, hit the Queue button, and see what happens. This is the output image we've created: looking at it, you can see the sky up top has slightly more reddish values, and further down it gets warmer with that yellowish orange, so you create a different ambience. This is what makes the technique so artful and expressive, and it gives you more control. Another important takeaway is that with an image input, the prompt is only a small part of how you create an AI image; you can achieve a lot more by shaping the output through your own base latent image with these kinds of techniques. I hope you enjoyed this part of my workshop. In the next video I want to show you something really cool that you can do with the latent image input. Oh, you're still here? So, uh, this is the end screen; there's other stuff you can watch, like this or that, really cool. I hope I see you soon. Leave a like if you haven't yet, and, well, um, yeah.
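The gradient-input trick from the captions can be sketched in plain NumPy. This is a hedged illustration, not the workflow itself: it assumes the Latent Blend node performs a simple linear interpolation (a * factor + b * (1 - factor)), and it blends toy arrays directly, whereas the real workflow encodes the gradient image through the VAE before blending.

```python
import numpy as np

def vertical_gradient(width: int, height: int, top_rgb, bottom_rgb) -> np.ndarray:
    """Build an HxWx3 uint8 image fading linearly from top_rgb to bottom_rgb."""
    t = np.linspace(0.0, 1.0, height)[:, None]                       # 0 at top, 1 at bottom
    colors = (1 - t) * np.array(top_rgb) + t * np.array(bottom_rgb)  # height x 3
    return np.repeat(colors[:, None, :], width, axis=1).astype(np.uint8)

def latent_blend(a: np.ndarray, b: np.ndarray, blend_factor: float) -> np.ndarray:
    """Linear blend of two latents (assumed Latent Blend semantics:
    blend_factor = 1.0 returns the first latent unchanged)."""
    return a * blend_factor + b * (1.0 - blend_factor)

# Red-to-blue gradient at the video's working resolution (512x768 portrait).
img = vertical_gradient(512, 768, top_rgb=(255, 0, 0), bottom_rgb=(0, 0, 255))
print(img[0, 0].tolist(), img[-1, 0].tolist())  # [255, 0, 0] [0, 0, 255]

# Toy stand-ins for the VAE-encoded gradient and the empty latent (all zeros),
# at 1/8 pixel resolution with 4 channels, as Stable Diffusion latents are.
gradient_latent = np.full((1, 4, 96, 64), 2.0, dtype=np.float32)
empty_latent = np.zeros_like(gradient_latent)
blended = latent_blend(gradient_latent, empty_latent, 0.5)
print(float(blended.mean()))  # 1.0 -- the gradient's influence is halved
```

Saved as a PNG, an image like `img` can be dropped straight into the Load Image node; blending with the empty latent is what keeps the colors from taking over while preserving the warm-to-cool structure.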
Info
Channel: Olivio Sarikas
Views: 29,797
Keywords: oliviosarikas, olivio sarikas, olivio tutorials, openart, comfyui, learn comfyui, free online course ai, free ai course, comfyui course, comfyui workshop, stable diffusion workshop, learn stable diffusion, comfy academy, comfyui tutorial, stable diffusion, comfyui workflow, stable diffusion tutorial, ai art, comfyui nodes, comfyui manager, comfy tutorial, ai art tutorial, comfyui explained, comfyui nodes explaied, img2img comfyui
Id: 179OUihyihk
Length: 10min 41sec (641 seconds)
Published: Fri Jan 19 2024