Hello and welcome to this video, in which I'd like to trade some lifetime for knowledge again. You requested InstructPix2Pix, and I've been looking at it over the last few days. Coincidentally, an updated version of the model has just been released, which fits the timing perfectly, so we'll start with that. It works a bit differently from the other models, but it's not difficult, and it's a lot of fun to use.

This is the new model from Stability AI. Of course, I'll put the CosXL link in the description. On the page, go to Files and Versions and download cosxl_edit.safetensors into your checkpoints folder. The plain CosXL file here is a normal checkpoint; you can use it too, but honestly I don't see a reason to unless you want to experiment. So cosxl_edit.safetensors goes into your models folder, under checkpoints.

The workflow structure is a bit different from what you're used to, because this is a dual-CFG model. Support for that is already natively integrated into ComfyUI, and I'll show you how it works. We start with a Load Checkpoint node and select the CosXL model, of course. Then we need a positive prompt and a negative prompt. And now the new part starts: if we search for pix2pix, there it is, the InstructPix2Pix Conditioning node. It's a fairly new node, but it's already native to ComfyUI. Here we have to connect the first things, namely the positive and the negative conditioning, as well as our VAE. For the pixels input we can already use a Load Image node; we'll change the picture later.

Okay, let's go on. We need a special sampler for this, and you'll find it best under sampling: the SamplerCustomAdvanced. It looks a bit complicated at first, but it isn't, because we can now build it up very comfortably. From the noise input we drag out and take a Random Noise node. From the guider input we drag out and take the one I just mentioned, the DualCFGGuider; it's also relatively new. From the sampler input we take a KSamplerSelect, and there we can see our usual sampler settings. From the sigmas input we take a Basic Scheduler. That's familiar too; it holds the options for the scheduler. And as the latent image we simply take the latent output from the conditioning node up here, so that's already connected.

Now we can connect the familiar things. That means the model goes over here, and we need the model again down here in the scheduler. Let's take a quick look; that looks pretty good. So how do we connect the DualCFGGuider? Like this: the conditioning node's positive output goes into cond1 and its negative output into cond2. You can guess that from the widget below, which says cfg_cond2_negative. That leaves the guider's negative input, and into that we simply plug the output of our negative prompt directly.

That's almost it. We tidy the whole thing up a bit, and of course we need an output at the back. We take a VAE Decode, pull the VAE over here, and then I add a Save Image node, hang it in, and give it IP2P as a folder name. Sounds weird when you read it out loud, right? Let's make the preview big so you can see a little better, and then we're actually ready to go.

So what does the whole thing do? In principle, it takes a picture that we send in and changes it based on instructions, which we give it as a prompt. You'll probably understand that best if we just run it once. For example, I'll take this picture of our blonde woman.
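Before we try it out, here is what that wiring looks like as a minimal sketch in ComfyUI's API (JSON) format, written as a Python dict. The node class names are the ones current ComfyUI ships with; the exact input names, especially the two CFG widgets on the DualCFGGuider, are my assumptions and worth checking against your version:

```python
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "cosxl_edit.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive: the edit instruction
          "inputs": {"text": "turn her hair red", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "LoadImage", "inputs": {"image": "woman.png"}},
    "5": {"class_type": "InstructPixToPixConditioning",
          "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                     "vae": ["1", 2], "pixels": ["4", 0]}},
    "6": {"class_type": "RandomNoise", "inputs": {"noise_seed": 42}},
    "7": {"class_type": "DualCFGGuider",
          "inputs": {"model": ["1", 0],
                     "cond1": ["5", 0],      # IP2P positive -> cond1
                     "cond2": ["5", 1],      # IP2P negative -> cond2
                     "negative": ["3", 0],   # plain negative prompt
                     "cfg_conds": 5.0,             # prompt CFG (assumed name)
                     "cfg_cond2_negative": 1.5}},  # image CFG (assumed name)
    "8": {"class_type": "KSamplerSelect",
          "inputs": {"sampler_name": "euler"}},
    "9": {"class_type": "BasicScheduler",
          "inputs": {"model": ["1", 0], "scheduler": "normal",
                     "steps": 20, "denoise": 1.0}},
    "10": {"class_type": "SamplerCustomAdvanced",
           "inputs": {"noise": ["6", 0], "guider": ["7", 0],
                      "sampler": ["8", 0], "sigmas": ["9", 0],
                      "latent_image": ["5", 2]}},  # latent from the conditioning node
    "11": {"class_type": "VAEDecode",
           "inputs": {"samples": ["10", 0], "vae": ["1", 2]}},
    "12": {"class_type": "SaveImage",
           "inputs": {"images": ["11", 0], "filename_prefix": "IP2P/edit"}},
}
```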
I'll scale it down a bit first. A Scale Image to Side node goes up there, so the calculation doesn't take so long: I want 500 pixels on the smallest side, and for the interpolation I'll take lanczos. Now we can take this picture and say: turn her hair red. If I hit Queue Prompt now, the whole thing gets going and the sampler runs. I should have given myself a little more room. And then the result looks like this. That's garbage, you're right about that. The reason is that our CFG values here are too high.

The CFGs, if I've understood correctly, mean the following: up here is the CFG for our prompt, down here is the CFG for the input picture. Good starting values are 5 up here and 1.5 down here. If we run the whole thing again, we see that the instruction we sent in, turning her hair red, has been applied to the picture we sent in. Unfortunately you can also see a little bleeding: her lips weren't this red before, the color bleeds over a bit. The models aren't perfect yet, but you can have a lot of fun with them. I could just say pink now, let the whole thing run again, and suddenly her hair is pink. Again it bleeds a little into the rest of the picture.

Up here, the first CFG: if we reduce it a bit, you can think of it as saying, okay, we want a little less creativity from the AI, that is, from Stable Diffusion. If we turn up the negative here in cond2, that pushes a little in the direction of keeping more of the original picture; turn it up a bit and the original comes through a little more strongly again. It has changed a lot from here; we see the blonde hair coming through a little more now. You have to fine-tune. This is generally a technique where you have to play with the dials a bit. Let's take 4 here. And I could also say: turn her hair blue. It would do that too, no problem at all. That is InstructPix2Pix.

We can also take the Mona Lisa and say: turn her into a zombie. Then it gets recalculated and we get a zombie Mona Lisa. Here, too, you can say that we want to keep a little more of the original picture. Of course, we haven't fixed the seed up here either. She looks a bit scared; something seems to have surprised her. Here I'd even turn that up a bit and the creativity down a bit.

In any case, let's look at the second model: the original InstructPix2Pix, the predecessor. I'll also put the link in the description if you want to try it out. Here you only have to download the instruct-pix2pix safetensors file; it also goes into your checkpoints folder. The good thing is that we can just swap it in on the fly. So let's take the other, older model now. The workflow works for it as well; that's not a problem at all. Of course it behaves a little differently: we have to go up with the CFG again and then down a bit elsewhere, because it doesn't respond as readily as the other one. Let's go up a bit here and turn the negative down here. Yes, that's it. Now we can fine-tune a bit again; let's go back to 5. So InstructPix2Pix needs a little less on cond2. We can see that it works.

One more example before we get to the third option. I have a picture of a house here, and what works quite well is: add fireworks in the background. Let's run that once.
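As an aside, if you'd rather drive these CFG experiments from a script, ComfyUI exposes an HTTP endpoint for queuing prompts. A minimal sketch, reusing the workflow dict from above (the two CFG input names are again assumptions to verify):

```python
import json
import urllib.request

def queue(workflow, instruction, prompt_cfg, image_cfg):
    """Queue one edit with the given instruction and CFG pair."""
    workflow["2"]["inputs"]["text"] = instruction
    workflow["7"]["inputs"]["cfg_conds"] = prompt_cfg          # creativity
    workflow["7"]["inputs"]["cfg_cond2_negative"] = image_cfg  # keep the original
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",  # default local ComfyUI address
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

queue(workflow, "turn her hair red", 5.0, 1.5)  # good starting values
queue(workflow, "turn her hair red", 4.0, 2.0)  # less creative, closer to the input
```

But back to our fireworks example.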
Then we get pictures with fireworks in the background. That was a bit too high, but yes, you can do things like that. A completely different scene, on the other hand, doesn't work so well. If I now say, for example, turn it into a space station on Mars, we get something like this out of it; the model is still a bit weak there. Let me just select the first model again, "old" only in the order of this video, since technically it's the newer one. You can see it makes slightly better pictures, but you're still very limited here. You can have fun with it: you see the result here was somehow completely random, and now I don't see anything of Mars at all anymore. That's because the models are, of course, trained in a certain way.

To decouple from that, I'll open a new ComfyUI instance, because the third variant is the ControlNet. For that, too, I'll put the link in the description, straight to Files and Versions. Here you download control_v11e_sd15_ip2p.pth, and this one goes into your controlnet folder. So here, under controlnet; I packed it in here, it should be in there somewhere. It goes in there because it isn't a checkpoint but a ControlNet, and with that we decouple the generation from the model a bit, because we can now choose whatever base model we want. What's important here is the "e" in the filename: it marks this as an experimental version, and it's also quite old, about 12 months. Still, it's a variant you can use very well.

For this I'll load the default workflow and just swap the image output for a Save Image node. We don't have to pay much attention to the sampler; we can leave everything as it is. Let's make a little room so we can work. Up front I just select a model, and I'll take Realistic Vision, which has done very well; this is a Stable Diffusion 1.5 ControlNet, after all. We take an Apply ControlNet (Advanced) node and, of course, load the ControlNet into it. Where is it, where is the ip2p? It sounds so weird when you say it out loud; I hope YouTube doesn't flag me for that. Then, of course, connect positive and negative, route them through here, and we need a Load Image node for what will be our control image. Here we just take our house again. I also need a rescale here, Scale Image to Side, again with 500 on the smallest side. We set the seed to fixed.

We'll have to change a few more things, but I'll just let it start and say, here too: turn it into a space station on Mars. We can leave the rest as it is and queue it. The sampler runs, and we have too much CFG now, and the latent here has the wrong size. I'd like to correct that: I take the width and height here and connect them, and then we get another picture, a slightly better-fitting one. And now the CFG is still too high; let's start with 2. The higher we turn the CFG at this point, the more change we get in the picture. We can already see it here: it's a bit brownish-red, which does say Mars, and a few planets are coming up, but our house is also changing pretty drastically.

And here I had thought of the following: let's take an IPAdapter here. Advanced... no, an IPAdapter Tiled, that's the one I want, switched in between. We take a Unified Loader, hang our model in at the front, route it all the way through to the sampler, and give it this picture as a reference. Then we're done. With that we can reinforce the description of our original picture a bit, because the IP adapter strengthens the image description, or rather creates one. We take the PLUS preset here.
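Before we tune the IPAdapter, here is the ControlNet branch we just wired, in the same sketch format. Only the nodes that differ from the default SD1.5 graph are shown, and the ids "2" and "3" stand in for your two CLIPTextEncode nodes:

```python
controlnet_patch = {
    "20": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "control_v11e_sd15_ip2p.pth"}},
    "21": {"class_type": "LoadImage", "inputs": {"image": "house.png"}},
    "22": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                      "control_net": ["20", 0], "image": ["21", 0],
                      "strength": 1.0,
                      "start_percent": 0.0, "end_percent": 1.0}},
}
# Feed ["22", 0] and ["22", 1] into the KSampler's positive/negative
# inputs and keep the CFG low (start around 2): the higher the CFG,
# the more the picture changes.
```

Now back to the IPAdapter.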
And with that we can enhance things a bit. In my tests I found it quite good if you specify a weight of 0.25, a start_at of 0.2, and an end_at of 0.6. Let's run the whole thing. This is also great for, say, keeping a bit of the Mona Lisa's painted style; maybe I can show that again later. We can already see it: the InstructPix2Pix ControlNet pushes in one direction, but the IP adapter slows it down a bit. That means we can turn the CFG up a bit here and now get combinations like this: we get the description from the IP adapter, but we also get the instruction that we pass via the positive prompt to the ControlNet Pix2Pix.

What works great with the Tiled IP adapter is this: we go to the image, choose Open in Mask Editor, and paint a mask. I'll enlarge the brush here and say I want a description of only this house. Click Save to node and connect the mask to the IPAdapter. Then I have to turn the weight up a bit; the rest can stay as it is. Let's see what happens. You can see quite well how the fight between the two, the ControlNet and the IPA, plays out here, but we also see that our house comes through a little better. So we have to turn the CFG down again to get a little closer to the IP adapter, and maybe try another seed. That's how you nudge things and watch until you get the desired result. The CFG is a bit too low for me now. Or rather too high; I adjusted it in the wrong direction, that happens too. Let's take 3.5. The seed still isn't so nice, but that looks a bit more like a house on Mars.

Of course we can also go in and say: okay, but actually I'd like to have this meadow, and this path, and a little vegetation pushed through as well. I take that into the mask too, and now we've also got a little more of our grass down here, still nicely mixed with the Mars atmosphere. I liked that a lot. We can also say: turn it into a space, not station, but scene. Let that run, and then the whole thing looks like this. Also very nice. And if you want to reduce the effect, we just take a little weight away from the IP adapter again; you can steer it a bit like that.

So, attention mask out again, and we set the weight back to 0.25. That would be InstructPix2Pix with a little IP adapter support. Taking the IP adapter out of the whole generation entirely, the result looks like this. I think it's worth taking the IP adapter along in the chain, at least with the ControlNet. Incidentally, this also works with the Pix2Pix model, but it doesn't work with the CosXL model, because the architecture is different.

Take the mask out again, clear, save. And as I just said: if you take the Mona Lisa again and say turn her into a zombie, let's take a quick look at what the whole thing makes of it. Yes, you can also see here that it works quite well with the ControlNet, especially because we're a bit decoupled from the model here. If we now take the IP adapter out, we see how much it was steering, and that, I think, is actually a very good control option, a way to interact a bit. We can also say, for example, that we only want a description of the Mona Lisa's face. Incidentally, there's also an IPAdapter FaceID; that could also be hung into the chain.
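For reference, the IPAdapter insert looks roughly like this in the same sketch format. The node and input names come from the ComfyUI_IPAdapter_plus custom node pack and should be checked against your install; anything not listed keeps its default:

```python
ipadapter_patch = {
    "30": {"class_type": "IPAdapterUnifiedLoader",
           "inputs": {"model": ["1", 0],   # checkpoint MODEL output
                      "preset": "PLUS (high strength)"}},
    "31": {"class_type": "IPAdapterTiled",
           "inputs": {"model": ["30", 0], "ipadapter": ["30", 1],
                      "image": ["21", 0],  # the original picture as reference
                      "weight": 0.25,      # raise to ~0.4 when using a mask
                      "start_at": 0.2, "end_at": 0.6}},
}
# Route ["31", 0] into the KSampler's model input instead of the raw
# checkpoint model. To restrict the reference to a painted region,
# connect a mask to the node's attn_mask input.
```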
Back to the FaceID idea: we give it a bit more weight and see what the whole thing does. Now her face should be pushed through a bit more. Yes, now we have a little more of the face. I think that's a very good control instance down here. We just always have to regulate the weights here, the pull up here, and always nudge the CFG a little. And we have, of course, decoupled ourselves from the model.

There's one more example we should touch on briefly. When I say turn her into a cyborg here, the result is pretty far off now. But that doesn't matter, I just want to demonstrate it again; it no longer has much to do with the Mona Lisa. But if we go back to the old workflow, take the Mona Lisa there, and say turn her into a cyborg, then the whole thing looks like this: there's the cyborg. And if I turn the first CFG up a bit as a test and the second CFG down a bit, something like this comes out. That's just what I meant: there we're at the mercy of that model's training, while here, with our ControlNet workflow, we can also switch models. I could now, for example, take Absolute Reality and see what comes out of it. It starts, and yes, we see it has changed a bit. So we really are somewhat decoupled from the model training here. It doesn't have much to do with the Mona Lisa anymore, but I think you know what I'm getting at.

And of course the other things work too, for example: turn her hair red. Everything set correctly here. Yes, that still works. Here, too, one possible application would be to say, okay, let's take a mask and describe a little of the outer area, like so, and also a bit of her face. We're not working with a face model or anything here; I just want the description of the skin, the eyes, the mouth, the nose and so on. Pack this into the attention mask, set the weight to 0.9, and let the whole thing run again. That way we can counteract the red cast that the ControlNet Pix2Pix created a little.

In any case, I had a lot of fun trying out all these different things; there were a few laughs along the way. It's really fun, the whole thing. Just try it out. I've shown you three options. Whether you still really need the old InstructPix2Pix model, I'll leave up to you; you can still use it. The new model is simply the big CosXL here. And a completely different variant is the ControlNet, where you can also chain a lot of things, as I said: a FaceID adapter underneath to push the face a little more, or to steer other parts of the image. Everything is possible. I hope to see you again in the next video. I'll put the workflows in the description. Take care until then, and bye.