Hello and welcome to this video, in which I'd like to trade a bit of lifetime for knowledge again. This time I want to show you Multi-Area Conditioning. It's a pretty cool technique that I haven't used for a while; I tested it again to see whether it still works as well as it did back then, and yes, it does. In fact, the possibility already exists in vanilla ComfyUI, under Conditioning, Conditioning Set Area. That gives you this node here, and with it you can specify how big a picture should be and where on that picture the conditioning, i.e. the motif you want to create with it, should end up. But this node isn't very comfortable to use; you have to guess the positions a bit, which is why I want to introduce another node collection at this point. You'll find it the way I always find it, and hopefully you will too: in the Manager I go to Install Custom Nodes and search for "Dave". There is the author Davemane42, who made the Visual Area Conditioning / Latent Composition nodes; they're on GitHub, and you can download and install them from there. Feel free to look at my other videos on this channel on how to install custom nodes, it's always described there: you either need Git for it, or you simply do it via the ComfyUI Manager. There is also a dedicated video about this custom node on this channel, where you can learn a bit more about how it works.

Once you have installed it, you get this Davemane42 folder, and inside there is the Multi-Area Conditioning node, which is really pretty cool. I'll show it in action in a moment; it's a fairly special node, though, so we'll just build it into a small workflow. I'll take a loader from the TinyTerra Nodes again, put it up here, add a sampler, and connect the pipe over here. We'll probably need a bit more room. The Multi-Area Conditioning node, or Visual Area Conditioning as it's called here, takes several conditionings. What we already know at this point is that we'll work with the same negative throughout, so I'll take... no, not the Bad Dream embedding, I'll take EasyNegative. We also already have to know how big our image should be. I'm going to target a 16:9 format with a width of 1024 and a height of... one moment, let me check my notebook so that we get a roughly 16:9 resolution. At a width of 1024 we need a height of 576; that's approximately 16:9, and 576 is even divisible by 8, so that works. We won't need the positive prompt in the loader here. For this tutorial we'll work with the RevAnimated model and the Stable Diffusion VAE from Stability AI, because that combination actually always delivers very good results; this technique works with some models and not with others, so you have to experiment a bit.

So, on to the node itself. If you right-click on it, you'll notice that unfortunately none of these widgets can be converted to an input; at least I haven't managed it, it simply isn't offered. But you do have several conditionings you can specify. As the base resolution we enter our desired values, 1024 by 576. Now we can select index 0, which is supposed to be our background, and give it the full width and height down here, or we can also drag it directly in the preview.
You can see right away that we get graphical feedback showing which area of the image this conditioning will cover. Now we add a normal CLIP Text Encode in between; a little more space, we need the clip at this point, and as the background we say we want a forest. Next we need a second conditioning, so another CLIP Text Encode, which also takes the clip. Why is this so tangled up here? Are those wires on top of each other? Yes, they are. Now, down in the node, we switch to index 1, which corresponds to conditioning 1, and here you can see very nicely that when you change the index, the graphical feedback shows you which conditioning it belongs to. For this one we write "beautiful woman standing in a forest". I've gotten used to repeating the setting here even though we already have it up in the background prompt, simply so the areas blend together better. And down here I'd add a few more negatives just to be safe; we're on YouTube, everything has to be proper. With conditioning 1 selected you can see that our beautiful woman is conditioning 1, and now we can say that she should only occupy roughly this area of the picture. We can move the box around a little; let's put it on the right side, set the width to 512 and nudge it over in x until it sits where we want it.

Now we can feed the conditioning into the optional positive of the sampler. I'll quickly change the sampler to my favorite settings, tell it to save the pictures for us, and we can start a first attempt to see whether I've wired everything correctly. I'll check again; that actually looks pretty good. We hit Queue Prompt; of course it has to load the model first, but that should work, we don't have any errors. So, here we go, and our sampler builds the first picture. We can already see that we got a forest in the background, and in the marked area our woman is rendered. Let's refine the prompt to "close up photography of a beautiful woman standing in a forest" and just try it. Yes, that's exactly what it should be: it rendered the different areas the way we wanted, a forest as the background and our woman at the spot in the picture where we wanted her to appear. That is basically Multi-Area Conditioning.

But let's go through it a bit more. I'll build an upscaler in here again, say a sharpen with a strength of 0.3, which actually always works quite well. Then I want an upscale model loader; we take the ArtStation 1000 Sharp model again and an Upscale Image (using Model) node, and we can already connect those. Let's make it look a bit nice; the eye renders along, as I always say. Then we add an Image Scale node and say I want a width of 2560 and a height of 1440 pixels at this point, and so that it really ends up exactly there, we tell it to crop to the center, cutting off any excess at the edges. We also add a VAE Encode; for that we need the image coming out of our upscaling chain and the VAE from up front. Then we push the whole thing through a second sampler, again a TinyTerra node sampler: the encoded latent goes into its optional latent input, and we pull the pipe through once. I'd rather have the pipe running in a straight line up here, like that, and I'll set this sampler's output to preview.
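To make it a bit more tangible what such an area actually is, here is a minimal Python sketch. It is not the node's real source code, just the idea, assuming ComfyUI's convention of storing areas in 8-pixel latent units as (height, width, y, x) together with a strength; the Multi-Area Conditioning node essentially does this bookkeeping for every input and shows it to you graphically.

```python
# Minimal sketch (no ComfyUI import): how an "area" gets attached to a conditioning.
# Assumption: areas are stored in latent units (pixels // 8) as (height, width, y, x),
# plus a strength value, mirroring what ConditioningSetArea / the Davemane42 node attach.

def set_area(conditioning, width, height, x, y, strength=1.0):
    """Return a copy of the conditioning list with an area restriction attached."""
    out = []
    for cond, opts in conditioning:
        opts = dict(opts)  # don't mutate the original entry
        opts["area"] = (height // 8, width // 8, y // 8, x // 8)
        opts["strength"] = strength
        out.append([cond, opts])
    return out

# Placeholder conditionings standing in for CLIPTextEncode outputs (normally tensors).
background = [["<forest embedding>", {}]]
woman      = [["<beautiful woman embedding>", {}]]

# 1024x576 canvas: the background covers everything, the woman only the right half.
combined = set_area(background, 1024, 576, 0, 0) + set_area(woman, 512, 576, 512, 0)

for cond, opts in combined:
    print(cond, opts)
```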
I set my sampler settings here. Since we're in the image-to-image stage now, we need a lower denoise; 0.5 is actually always quite good, sometimes 0.35, sometimes 0.6. The lower you set the denoise here, the closer the second sampler stays to the original image, which is why one half is usually a good value. I'll set the steps a bit lower for the video, and back here I'd like a Save Image. We'll make it a bit bigger so we can see something, and as a filename prefix I'll enter Multi-Area Conditioning directly, simply so it's easier for me later to fish out the YouTube thumbnails. At this point I'd like to set the seed in the first sampler to fixed; we just have to run a few random images until we get a nice picture. No, I don't like that one. I'll break the chain again and generate a new image; I also want it to take the close-up into account. What is it doing back there? I said cancel. Oh, of course, we now have to touch things up here little by little. That's pretty good, we can leave that alone. Then we let it run through the upscaling chain once, which will of course take a bit of time now; I'll probably always cut until the sampler is finished.

Have I forgotten anything? Yes, I have: I'd actually like another sharpen at the end. For the first picture I'll leave it out, for the further runs I'd like a sharpen in between. What is it doing? Something seems to be having problems with my shift key. Interesting, my shift key doesn't work anymore, and this node can only be moved sluggishly. Is the machine just that busy? That can of course be; let's see if it's done. Ah yes, okay, it's pretty overloaded. I think it's better anyway if I just cut; I don't know whether OBS causes problems while recording. So, now let's take a look at the picture: this is our upscaled picture, it has become nice and sharp, our woman has been generated here, everything is great.

Well, that's basically how it works. We can also say we want to push the lady a little further over here and make the width a little smaller; let's see what it generates for us. Yes, we can cancel that, we need another seed for this, and it should take the close-up into account. Now it has rendered her on the left, and we can let the upscaling run through again. I'd actually like to have a ControlNet in the upscaling; where is it, here's the part. I discussed this in detail in the upscaling video; if you're interested, feel free to take a look. We still need an Apply ControlNet node, that makes the pictures a little better. This goes into the optional positive, and the conditioning goes through the ControlNet. Yes, it doesn't take kindly to the fact that I have such an upscaled render running here while the recording software is running at the same time. So, it's done. Now the VAE decoding runs, as the image we take the downscaled picture from here again, then we push this whole group of nodes a little to the right, and as the conditioning we pull out the positive from the front, and with that we have it. Let's look at the picture: it has also turned out very beautiful, and above all you can see that our lady is on the left in the picture.

And you can simply vary that with other content. For example, we can now say we just want "inside a space station" as the background and "close-up photography of a beautiful woman standing in a space station" here. Let's see. Yes, very nice, it worked. I'll let the upscaler run again and then we'll continue.
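As a rough mental model of that denoise value (an assumption about how the schedule is truncated, not a quote of the ComfyUI sampler code): with a lower denoise, the second sampler only re-runs the tail end of the noise schedule, so it starts from weaker noise and stays closer to the input image. A tiny sketch with a made-up geometric sigma schedule:

```python
# Toy illustration of img2img denoise (assumption, not the real ComfyUI scheduler):
# the sampler builds a full schedule and then only walks its last `denoise` fraction.

def effective_schedule(steps, denoise, sigma_max=14.6, sigma_min=0.03):
    """Return the sigma levels actually traversed for a simple geometric schedule."""
    full_steps = max(steps, int(round(steps / denoise)))   # schedule length as if denoise were 1.0
    ratio = (sigma_min / sigma_max) ** (1 / (full_steps - 1))
    sigmas = [sigma_max * ratio ** i for i in range(full_steps)]
    return sigmas[-steps:]                                  # only the low-noise tail is run

for d in (0.35, 0.5, 0.6, 1.0):
    start = effective_schedule(20, d)[0]
    print(f"denoise {d:<4} -> starting noise level {start:6.2f}")
```

The printout shows the starting noise level rising with denoise: at 0.35 the sampler barely perturbs the upscaled image, at 1.0 it starts from pure noise and ignores it.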
Now let's take a look at the picture. So, there it is; it has turned out very nicely again. You could still tweak a bit, but above all we see that we have the motif we wanted on the left side, while the background is also what we wanted. For the next runs I'll add the Add Detail LoRA again; I made a separate video about that too, it adds great details at the end, and I'll use it with a strength of 1.2. By the way, if I went through the upscaling part a bit quickly, don't worry; as I said, take a look at my upscaling videos, I explain it again there and we also compared the different upscaling methods that exist.

Okay, now let's see whether we can reproduce this, because there is a problem. With this node we can add inputs above or below an existing one; I'd like to add an input below at this point, that's the button with the arrow pointing down, and then we have conditioning 2 here. What you can also do is select an input down here and swap it upwards or downwards: if I wanted to move conditioning 2 up, I'd go to index 2 here, we can see it's already highlighted, and then I could say I want to swap it up or down, i.e. change the order. I can also remove the selected input, or remove all inputs that aren't connected, so you can manage the inputs a bit here. But what I want at this point is to add a third input, and there I'll say "a cat in a space station"; I want a cat in the picture, you can never have enough cats in a picture. So we move the area for this cat over here and say we want the cat in this region of the image. Now of course we have to pull the clip back in at the front, and then we let the whole thing run through. Of course something will change because of the seed. A quick look to see whether the picture works: it works pretty well, we got a similar picture, but now there's a cat in it. What is this thing here? I don't know. No idea. Good.

Now let's check whether we can reproduce this. It might not look like it yet, but there are cases where this cat, with the space-station theme in the background, ends up with an astronaut helmet or something like that on. And then, during upscaling, Stable Diffusion can get the different concepts mixed up a bit. In the image-to-image pass it already recognizes that this is about a woman and that this is the space station, but with the cat it has a bit of a problem: if you imagine it wearing a space helmet, it might generate a woman's face inside that helmet again, because we said up front in the prompt that we want a woman in the picture. There are different ways to fix that. We'll let it run through again in a moment, look at the picture, and try to find an example. Yes, that's actually not a bad example at all, because here you can see quite well on the cat that it got a little confused, and above all you can see up here that it has rendered a cat's head into the woman's hair. So the example is actually quite useful: here another cat has emerged, and down here it has quite a bit of a problem. Now I'm going to show you a few tricks for dealing with that. On the one hand, we are already creating this Multi-Area Conditioning here.
That means the conditionings carry information about where the respective objects should be in the picture, but that only applies within this frame, i.e. within this 1024x576 frame. As soon as we scale the image up, that gets out of hand, and the sampler can't cope with it at all. What we can do at this point is take a Conditioning Combine node and detach the individual prompts from their areas by simply combining them all with each other. So, and back here another Combine. No, something went wrong here; what do we have here? This one is supposed to go in here and this one in here. That was right. It's getting a bit tight on space again. Now we add one more Conditioning Combine and join the two results together. And instead of the positive output from the first sampler, we now feed this Conditioning Combine into the ControlNet, which then goes back into the positive of the second sampler. We let that run again and look at the result at the end. I'll keep the old picture open so it isn't gone, and we wait a moment for the sampler to rattle through. So, it's ready, and if we look at the picture now, we can see that it handled things a bit better: the cat up here has disappeared, this cat is more cat-like than before, here it's wearing some kind of strange leather space helmet, no idea, and here it's much more cat-like. There's still a cat up here, but overall this was already an improvement, because we've removed these area conditionings from the conditioning itself and can now upscale more cleanly.

This Davemane42 collection also offers other nodes; I'll zoom in here. There's the MultiLatentComposite. I would keep my hands off it; it has never worked for me and reliably makes ComfyUI hang, so you have to press F5 to refresh everything in the browser. Feel free to try it, nothing really bad happens, but I don't want to do that in this video. Then there's a ConditioningUpscale node. The ConditioningUpscale is actually meant to tell an upscaling step: watch out, the areas we specified up front change by a certain scaling factor. I've played around with it, but it doesn't quite fit my use case. I'd like to end up with a WQHD image, for example, so 2560 x 1440 pixels. If we quickly pull out a calculator: we want 2560 pixels in width and our base image has 1024 pixels, so we arrive at a scaling factor of 2.5. Unfortunately, I can only specify whole numbers here, so times 2, times 3, times 4, times 5 and so on, and 2.5 doesn't fit in there. I'd like to be able to enter 2.5, but this node doesn't accept it; even if I go in and type 2.5, it turns it into a 3, because it's an integer, a whole-number value. And even if I wanted a different target size, for example Full HD, then I take 1920 and divide it by 1024 and get a factor of 1.875, which I can't use here either. That's why it wasn't useful for my purpose. If you just want an image twice as big, then of course you can simply hang this node back here into the conditioning, tell it your image is twice as big, and it will calculate that for you. The last node is a ConditioningStretch, with which you can stretch the conditionings to different sizes by entering the original resolution and the target resolution. But I don't think the stretching works that well either, so I don't use it.
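Just to write down the calculator exercise from above: a few lines of Python showing why an integer-only scale factor doesn't get you from a 1024-pixel-wide base image to WQHD or Full HD.

```python
# Scale factors from the video: only x2048 lands on a whole number,
# so an integer-only upscale factor can't hit WQHD (2560) or Full HD (1920).

base_width = 1024
for target_width in (2048, 2560, 1920):
    factor = target_width / base_width
    note = "whole number, fine" if factor.is_integer() else "fractional, doesn't fit an integer-only node"
    print(f"{base_width} -> {target_width}: factor {factor:5.3f}  ({note})")
```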
What I've noticed is that it works best if you unfortunately put in a bit of manual effort at this point and add a second node of the same kind, another Multi-Area Conditioning, and again create three inputs on it. So I'll add another one below, insert input below, and then we translate the whole layout: we now want a resolution of 2560 x 1440 pixels at this point, and of course the background down here at index 0 also has to be 2560 x 1440 pixels; that's our background image, and we can simply drag our background conditioning in here. Then we go to index 1 and place it so that it roughly matches what we have in the first picture, so on the left here; the width goes about to here, and we drag the height down as well. That's roughly the same, and that's our woman's conditioning. For index 2 we do the same: we drag the box over here, place it so it roughly fits, set the width and the height, I think it can sit a little higher. You probably don't have to hit it this precisely, I'm being a little picky right now, but it's actually done quickly, and that's our cat's conditioning. This one is a bit bigger now, so we make it a little narrower in width; now it fits, roughly. (There's a small helper sketch just below that does this box arithmetic, if you'd rather calculate than eyeball it.) Now we have our three conditionings on here, we pull the node up so everything fits, and we take the conditioning out of here and plug it in there. With that we could actually throw out the Conditioning Combine nodes down here; I'll leave them in for when we save the workflow. And we run the whole thing again with this setup. I'll let it rattle through now, let's see whether there are problems or not. No, the sampler is sampling, very nice, so we'll see the result when it's done.

So, the sampler is ready, and we look at the picture again. What's the comparison? Yes, we can see that it simply worked better. This is the picture we created with the second Multi-Area Conditioning; even the second cat has gotten better. It has its tail pinched in here, but those are typical Stable Diffusion problems. Compared to what already worked before it's not quite as clean everywhere; it works too, but a little of that is of course also down to the randomized seed in the upscaling sampler, so it's a question of what it makes of it. Theoretically you could also take the resolution X and Y out of the node, although the resolution there only moves in certain step sizes, so you can't do everything. And theoretically you could expose the width and height as inputs here and reuse them up there, but since I really want these WQHD wallpaper dimensions in this example, I'll just leave it as it is, and that works best. Of course you always have to adjust things a little here, and adjusting it again by hand is a bit annoying and cumbersome; unfortunately you can't automate it any further, because the option to create all the inputs automatically is missing. But considering how well it actually works, this Multi-Area Conditioning is great for placing objects in wide images. If you simply generate a woman inside a space station at a wide resolution, it can often happen that you have the same woman in the picture two or three times, because the SD 1.5 models are only trained at a certain width and height, 512 x 512 pixels. That means the model repeats itself and you get cloned faces and so on, and that's not very nice.
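If you'd rather not eyeball the boxes on the second node, here is the hypothetical helper mentioned above: it maps each area rectangle from the 1024x576 base canvas onto the 2560x1440 canvas and snaps the values to multiples of 8, so you can type them into the second Multi-Area Conditioning node. The example box values are made up; use your own layout.

```python
# Hypothetical helper for the manual step above: rescale the area boxes from the
# base canvas onto the upscale canvas, snapped to multiples of 8. It only prints
# values to type into the second Multi-Area Conditioning node; it is not a ComfyUI node.

def snap8(v):
    return int(round(v / 8)) * 8

def rescale_area(box, base, target):
    """box = (x, y, w, h) on the base canvas; returns the equivalent box on the target canvas."""
    sx = target[0] / base[0]
    sy = target[1] / base[1]
    x, y, w, h = box
    return (snap8(x * sx), snap8(y * sy), snap8(w * sx), snap8(h * sy))

base, target = (1024, 576), (2560, 1440)
areas = {
    "background": (0, 0, 1024, 576),
    "woman":      (512, 0, 512, 576),   # right half in the base layout
    "cat":        (64, 320, 320, 256),  # example values, adjust to your layout
}
for name, box in areas.items():
    print(f"{name:>10}: {box} -> {rescale_area(box, base, target)}")
```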
Multi-Area Conditioning lets you avoid exactly that. For comparison: as I mentioned at the beginning, if you wanted to do this with Conditioning Set Area, you'd have to recreate all the inputs that this one node combines, three times over, and then roughly guess the positions, and if you wanted to apply the fix up here you'd have to do all of that again. That's why this node is actually quite good and definitely a worthwhile reason to install this node collection if you want to do something like this, even if the other nodes in the collection aren't so useful at this point.

A little trick at the end: I said that it only works well with some models. Some don't handle Multi-Area Conditioning so well; RevAnimated works great, which is why I used it here. If you want a more realistic picture at the end of your chain, you can of course also use tricks like swapping the model later on. Here, for example, we load another checkpoint, say a more realistic model; we take Epic Realism again and just pack it up here into the optional model input of the second sampler. If we let that run now, the model loads first, I'll skip over the sampling, and then I'll show you the final result. So, it's done, let's take a look. Now we've already gotten a bit more realism; the cat down here was probably really just the seed, but at least the objects themselves are no longer so mixed up. In the example before, you can clearly see the 2.5D style, and here it goes a bit more towards realism. You can of course mix that with other techniques, you can also put a model merge in front and all sorts of things. Just pay attention up front to which models work well with this. If we now load Epic Realism directly instead of RevAnimated, for example, with its baked-in VAE, and let the whole thing run again — I deactivated the part back here, we only need to load it once — let's see how that works. Yes, we can cancel it once and look again, maybe a nicer picture will come out. Oh, the sampler seed is fixed; I'll set it to randomize, let's see whether we can get a picture out of it that's a bit nicer. That could work, but from the style you can see pretty well, I think, that this model isn't so suitable for the technique. That's why it's good to check which model makes a good base model to load in first, and then to switch to other models later with these tricks back here in the second sampler. Or you build a chain with some kind of transition samplers, where you say, okay, take the small base image, maybe convert it to a different model, and then do the upscaling with that other model. With these workflows you can combine the techniques however you like.

Okay, well, I'm just thinking about whether I've forgotten something while the sampler is still running. I think I'll upload the workflow for you, then you can play around with it a bit; I'll save it with the RevAnimated model and the correct VAE. For RevAnimated, by the way, a clip skip of -2 is also recommended; I left that out now, but I'll leave a note about it up here. I'll leave the fixed seed in. Then let's take a look at the end result, and with that I think we've covered the topic reasonably well. As I said, a few months ago I used this very often to create WQHD images; in the meantime I do it without this technique and just hope for the best, but then you sometimes get this cloning where you have the same woman in the picture two or three times.
That's why this is actually a very good tactic at this point. By the way, you could also add a whole series of additional conditionings and really compose a picture with it. So, let's take a look at how it turned out. Yes, there's still room for improvement, but you have to work out the prompts a bit and so on; it's always a bit of tweaking, that's usually just the standard. Exactly. One moment, I'll save that. So, it's saved, because now I want to change things a bit and do another quick test. Namely: let's flip the resolution around to 576 by 1024, and for index 0 we say the same... no, wait a moment, I want to do something else. Index 0, we say less height, like so, and here we enter "beautiful landscape scenery". Then conditioning 1, that's the green one; we move it further down first, and there, also a bit less height, so we have to move it down again. That's the second area, and here we say "beautiful night sky". I don't know whether it'll work, we'll just test it live. And for the third conditioning we say "a galaxy in space, seen from a distance". And so that we can save ourselves the work up there, I'd say we take the positive again from here; just for this test I'll switch off the part up there. I saved the workflow beforehand precisely because I have to do a bit of destructive work here. Now let's see what happens; I'm curious myself, I haven't tried this particular test before.

What is it doing? Oh, it's loading RevAnimated. When do we arrive at the sampler? There it comes. Aha: "Kernel size can't be greater than actual input size", whatever that means. Very nice. Why doesn't that work? Let me have a look. So, error found: you simply have to enter the same values down here as up here. The latent of course comes from the loader, and that was still set to the wide image; now I've set it to the portrait size and it works. I've also changed the values a bit here so that these areas overlap, because if you separate them with hard edges, you can see that clearly in the picture. We can already see the picture that's being created right now; I think it got a little confused here. I hope the picture turns into something in the end. That was just the preview that's sloshing around here; well, let it finish. No, it broke off there, but we can still look at it. Let's take a look at the small picture, and here we can actually see that it worked very well: down here we have our landscape scenery, here we have — I said night sky, but it rendered this instead — and up there it turns into a galaxy. Of course that would certainly be nicer with a proper night sky, and I don't know why it got confused here; maybe because the areas are connected, or something like that, but actually it shouldn't do that. Strange. In principle, though, something like this works. Maybe I should restart ComfyUI or something. Here we can see how it composes everything together again. A quick look at whether it cleans things up down here; no, it doesn't, because I'm an idiot, and yes. So, let me get one more running example going here; as I said, this is unfortunately a bit of a manual process. This one is running. What did it do there? That's interesting; ah, it's a bit of an optical illusion, doesn't matter. Technology check. It's doing something strange here too; okay, that's a different problem. So, a little cut in between: I got it working, and it's actually not that difficult.
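A quick aside on that "kernel size" error before the rest of the fix: here is a minimal sketch (my reading of the situation, not the actual error path in the code) of the mismatch that happens when the empty latent from the loader is still 1024x576 while the conditioning canvas was flipped to 576x1024. The latent the sampler works on is the canvas at one-eighth resolution, so the two have to agree.

```python
# The sampler's latent is (batch, 4, height // 8, width // 8). If the loader's empty
# latent still uses the old landscape values while the Multi-Area canvas is portrait,
# the area boxes no longer fit the tensor -- hence an error like the one above
# (my assumption about the cause; the fix in the video is simply matching the values).

def latent_shape(width, height, batch=1, channels=4):
    return (batch, channels, height // 8, width // 8)

canvas = (576, 1024)   # width, height set in the Multi-Area Conditioning node
loader = (1024, 576)   # width, height still set in the loader's empty latent

print("canvas expects latent:", latent_shape(*canvas))
print("loader provides      :", latent_shape(*loader))
if latent_shape(*canvas) != latent_shape(*loader):
    print("mismatch -> set the loader's empty-latent size to the same values as the canvas")
```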
Back to the fix: I had pulled the positive over here before, and of course all the area conditionings are contained in it, and that messed up the sampler at this point. So I went back over the Conditioning Combine down here, reactivated the connection up here into the ControlNet and from the ControlNet into the second sampler, and with that the build worked again. I didn't use the second Multi-Area Conditioning node up here for this; it works perfectly fine that way, and if you take a look, something like this comes out. And I think that's presentable, right? I think so. That's a nice result, although it doesn't recognize "night sky"; it renders a normal sky here, but that's down to the model. So, I just wanted to briefly mention that the fix for this was actually relatively easy. Well, I hope you could see what this node is good for; it's especially useful for landscapes. Try it out for yourself and experiment a bit. I'll probably still look into why this part doesn't work so well here; there must be something in the upscaling area back here, but nothing serious. Have fun experimenting and trying things out. Until the next video, all the best. Bye!