ComfyUI: In my mind | Stable Diffusion | German | English subtitles

Captions
Hello and welcome to this video, in which I'd like to trade a bit of lifetime for knowledge. I've built a small workflow here that generates images in this style; I'll let it render the full picture while we talk about it. First, a little disclaimer: this video is not meant as me presenting things and saying you have to build them exactly this way, that this is how you should do it. It's really about generating these pictures, thinking a workflow through for myself, and playing and experimenting a bit with what you can achieve when you mix different techniques. This is what came out of it, and that's why the video is titled the way it is. It's simply a little insight into my world of thought: I tell you that I built this and that, and I did it because I thought such-and-such might come out, or that the image generation might behave in a certain way. So please see the video as inspiration for linking different techniques together, possibly for reaching your own goals with them, or simply for picking up ideas about what you could try here and there. Just look a bit outside the box, maybe to get a little more feeling and understanding for ComfyUI. That doesn't necessarily mean the techniques I used here all make sense, which is why I explain what I was thinking. There may well be a spot or two that doesn't make that much sense but in the end helped me get the desired result, namely pictures like these. Maybe I'll turn this into a series: every time I've tinkered something together for myself and think it could work this way or be built that way, I briefly present it, not with the attitude of "this is how you use this technique", but rather "here is what I thought about", and then you can have a look. Maybe you have other ideas or would do it differently yourself. At one point or another you might say that what the guy behind the microphone is building makes no sense and you would do it differently. In the end, what matters is that you get the pictures you want. And of course I know not everyone has the strong hardware you need to create such pictures locally, so I also looked at where you can save performance and still get very good pictures. Well, let's jump back to the workflow. I'll let it generate another picture while we talk, then we can look at the next one right away. A little preview is popping in down there; I'll show you that in a moment. We have three areas in the workflow. The first area is this blue one here: the generation of the base picture, which also includes this preview down here, even though it's a bit far away. Behind the base picture there is a post-processing step that serves as pre-processing for the upscaled picture; I'll show you that in a moment. The next area is the green one, here, where I do the upscaling. I'm scaling up to 1440p right now; we can also try 2160p. And by trying I mean it works, but during the recording of the Patch Model Add Downscale video I saw that apparently a lot of performance was being eaten up in the background.
That upset my NVIDIA Broadcast a bit, which I run to dampen the room on the microphone track, and a few little crackles appeared. I hope that doesn't happen this time; I've set up a fallback track in OBS. Let's see, we're going full risk today. In any case, the green area here is the upscaling area, and in it we get this picture here; we'll take another look at it in a moment. And last but not least we have the yellow area, or golden, I think it's golden, but I'll say yellow. This is the face detailing, and it corresponds to this picture here. So here is a face detailer, mainly with a few things I came up with for myself; we'll discuss them in a moment. And at the very end we have this block here again, which is another post-processing pass on the picture after the face detailer, just to get a little more realism into it, at least to my eye. So now I'm going to press the button again, then we disappear again and take a look. Let's take a quick look at the monitoring area down here. As we can see, the base pictures I create here are pretty ugly, there's no arguing about that, especially the faces. But the upscaler pulls that back around quite dramatically and makes a pretty nice picture out of it, and the face detailer then smooths the face out some more. And here we have the post-processing again. But let's start at the front of the workflow, with the generation of the base picture, beginning with a pipe loader. For the prompting I entered the following. Here we have random choices: the curly braces always mean "dear ComfyUI, please pick one of these terms separated by pipe signs", and we have several of those places in the prompt. So we randomly get one of portrait, close up, full body shot, or over the shoulder shot, of a cute and beautiful woman; then again randomly standing, sitting, kneeling, or running; in an old, and again randomly, hospital, apartment, asylum, or storage area. Below that I describe the lost place: skin covered with dirt, torn, tight, dirty trousers, torn dirty shirt, taken from an action movie, overgrown. The parentheses automatically give a weight of 1.1, so I can save myself writing that out at the back. Then barefoot: I wanted to pay attention to that because it's a sore point in Stable Diffusion. Hands work depending on the model, faces too, arms and legs as well, but I always find it a challenge to get realistic feet. Finally post apocalypse, looking past the camera, and backpack. You may recognize this: it's the structure I worked with in the "save your prompts" video, just extended a bit. We now have another Text Concatenate node here so that we can load the file once. But for these images I've only entered negative prompts there, so we don't load anything at this point. They say nipples, warts, birthmarks, shoes, scars, clean skin and big nose; sometimes big noses come out, which is why I added that. Back here we have the whole output, and the positive prompt additionally gets absurdres, masterpiece, high detail, intricate, 8K, UHD, HDR, cinematic quality, cinematic lighting and depth of field with a strength of 0.8, and that comes from the text file loader down here. Take a look at the "save your prompts" video, where I explain how it's built up.
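Just to illustrate that random-choice syntax, here is a minimal Python sketch of a resolver for {a|b|c} templates. It is not the node's actual code, only an illustration; the parenthesis weighting of 1.1 is a ComfyUI prompt convention and is not handled here.

    import random
    import re

    def resolve_random_choices(prompt: str) -> str:
        # Replace every {a|b|c} group with one randomly picked option.
        pattern = re.compile(r"\{([^{}]+)\}")
        while True:
            match = pattern.search(prompt)
            if match is None:
                return prompt
            choice = random.choice([opt.strip() for opt in match.group(1).split("|")])
            prompt = prompt[:match.start()] + choice + prompt[match.end():]

    template = ("{portrait|close up|full body shot|over the shoulder shot} of a cute and beautiful woman, "
                "{standing|sitting|kneeling|running} in an old {hospital|apartment|asylum|storage area}, "
                "(torn, tight, dirty trousers), barefoot, post apocalypse, looking past the camera, backpack")
    print(resolve_random_choices(template))

Every queued run then draws a fresh combination, which is exactly what happens with the prompt above.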
The prompt file can be loaded very nicely as a template and then fed into the stored positives and negatives of the pipe loader up here. I made a note up here with the prompt, and that's because I noticed the following: when you save a picture (it may also apply to the workflow, but in any case to the picture) and then load that picture back in, the text input of the positive prompt no longer contains the random-choice template, but the concrete prompt that was generated at the end. So that all these random values don't get lost, I stored the template again here as a note. It was new to me too, and I was a little annoyed that I got to type everything in again. I don't know whether the normal prompting node in ComfyUI behaves the same way; I don't think so. But these text nodes, at least the ones from the TinyTerra nodes, seem to take over the resolved prompt. I haven't tested other text nodes, but I ran into the problem, so I'm telling you about it so you don't run into it too. That's why the prompt is saved here again as a small note that remains untouched. For the size I took the aspect ratio node from the Comfyroll extensions again; we're only in the Stable Diffusion 1.5 range here, and it lets you control the image sizes very nicely. What I noticed is that we enter a size of 512x682 here, but when rendering the base image, only 512x680 comes out. I don't think that's dramatic, but it was an interesting insight: apparently the size set up here is not transferred exactly to the rendering. Now for the model. As a model I use this construct here, which is a mixture. I already presented the model merge node in a video; with it we can mix several models, chain the merges, and create further mixes. I've taken Epic Realism here because I wanted realistic images, but I mix it at a ratio of 0.75: the closer the ratio is to 1, the closer you stay to model 1 up here, that's how I memorized it. So we have 75% Epic Realism and 25% Absolute Reality. Absolute Reality is trained on other aspects, which is why I mix it in a bit. I always find it challenging to get feet represented correctly, and I think Absolute Reality is stronger there than Epic Realism, so I mixed that in, and also to get other styles in there. Speaking of styles: for the generation of the base image I load RevAnimated at 30% behind this mix. The thought was that these two models are trained on real photos, while RevAnimated is stronger in the fantasy and science-fiction area. Since I wanted to generate a post-apocalyptic scene, I mix in 30% RevAnimated for the base image. And then these pictures come out at the end, plus another post-processing step on the way to the upscaler. Because of the mix, everything is a bit messed up, especially the faces, but we can see that we get a post-apocalyptic background quite reliably, which I think is partly thanks to RevAnimated. And down here the feet actually come out quite well, which I think is pushed a bit by Absolute Reality. So let me take another quick look at the base generation area.
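As a rough sketch of what those merge ratios mean, assuming a simple per-key linear blend (model merging is essentially a weighted interpolation of the checkpoints), with stand-in values instead of real model weights:

    def merge(model_a, model_b, ratio_a):
        # Linear blend per key: ratio_a of model_a plus (1 - ratio_a) of model_b.
        return {k: ratio_a * model_a[k] + (1.0 - ratio_a) * model_b[k] for k in model_a}

    # Stand-in single-value "checkpoints", just to show the arithmetic.
    epic_realism, absolute_reality, rev_animated = {"w": 1.0}, {"w": 2.0}, {"w": 3.0}

    # 75% Epic Realism, 25% Absolute Reality (the ratio refers to model 1).
    realistic_mix = merge(epic_realism, absolute_reality, 0.75)

    # Only for the base image: blend 30% RevAnimated on top of that mix.
    base_image_model = merge(realistic_mix, rev_animated, 0.70)
    print(base_image_model["w"])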
What I can tell you here is that we only use RevAnimated to generate the base image. You can see that here: if I follow the later course, at the reroute and at the Model Sampling Discrete node for the upscaler, I tap the model before this RevAnimated mix. RevAnimated is really only meant to support the base image. For the upscaling, you can see that I go into the Latent Consistency Model area; there's a video about that on this channel too. It lets us generate pictures with only about 5 steps, and we skip ahead to the sampler: in this case I even use just 4 steps. I did that because the upscaling is actually what takes the most time in such an image generation process; the base pictures render relatively quickly, but at higher resolutions we lose time and performance. That's why I do the upscaling with LCM. But let's see what happens in between before we get to the sampler. For one thing, as I just said, I do a little post-processing here. That's because I noticed that in the upscaling stage the pictures get pretty bright, so I go through a filter adjustment node from the WAS Node Suite again: lower the brightness a bit, increase the contrast and saturation a bit, add some sharpness for the upscaling process, and set Detail Enhance to true, also for the upscaling process, simply so the rest of the chain can process the image a little better. Next, we have an extra prompt for the upscaling area up here. Here I give a little more... wait a moment, my cell phone. Yes, an annoying cell phone, but I'm in the middle of recording, so let's keep going. Here I wanted to emphasize more strongly that the lady who comes out at the end should be dirty, or rather that her skin should be dirty. Most models are trained so that you get clean skin and the like, and I wanted to be able to counteract that here. That's why I mix these prompts in again, and I decided on a Conditioning Concat. It takes the prompt from up here: you can see there is a CLIP Text Encode node with the text from above, which we also send in here, and we concatenate our new prompt behind it. I experimented a bit with Conditioning Combine and Concat, and I think Concat is better here. You have to imagine that these texts are converted into tokens, and with Concat the tokens from the first prompt and the second prompt are really appended to each other, while Combine takes both and effectively forms an average out of them. It's complicated, and I haven't read up on it in detail myself; that's roughly how it's described. It's also different from Conditioning Average, which exists as well. I really wanted the new prompt to be appended and not somehow averaged with the other prompts or tokens coming in: an addition, not a blend. That continues all the way to the back here. Here we have another ControlNet part, which is fed from the base image and runs into the upscaling sampler at 85%. The ControlNet makes sure that the pixels from the base image are carried over more faithfully into the upscaling; that's why I built it in. What we also have here is an Add Detail LoRA, which adds a little more detail. I always like to use it.
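Coming back briefly to the concat-versus-combine question: think of each encoded prompt as a matrix of token embeddings. A small NumPy sketch of the difference as I described it; the real nodes operate on ComfyUI conditioning objects rather than raw arrays, so treat this purely as a mental model, not the exact mechanism:

    import numpy as np

    # Two encoded prompts: (number_of_tokens, embedding_dim); values are placeholders.
    base_prompt  = np.random.rand(77, 768)
    extra_prompt = np.random.rand(77, 768)

    # Concat: the token sequences are appended, both prompts stay fully intact.
    concatenated = np.concatenate([base_prompt, extra_prompt], axis=0)   # shape (154, 768)

    # Combine, as described above: the two are blended into something like an average.
    combined = 0.5 * (base_prompt + extra_prompt)                        # shape (77, 768)

    print(concatenated.shape, combined.shape)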
I've used the Add Detail LoRA in previous videos. What still needs to be covered here is the IP Adapter, and here I took an IP Adapter Plus. We've already seen the IP Adapter in a few videos: it is able to describe the images it receives and pass that description directly into the model. What I've done here, and I'm still unsure whether it makes sense (I haven't asked Matteo yet, although I actually wanted to), is the following. The IP Adapter works on 224 x 224 sections of an image; these are described with CLIP Vision, so the image information goes in, CLIP Vision describes what it sees, and the resulting tokens are passed on to the model. I noticed that if I only use one crop that takes just the upper section of the image (that's what you do with the Prepare Image for Clip Vision node, where you can say "take the top area"), the model tends to produce more or less wild growth down in the lower part of the picture. So I thought: I'll take the same picture again and crop out the bottom as well, so that I have a description of the upper area of the picture and also a description of the lower area, with an overlap between the two. That alone won't do much, because, as has been said, if you throw pictures into the IP Adapter that are more or less the same, not much happens; you can't get better results with five pencil drawings than with one pencil drawing. It can help a bit if there are other motifs, so that other motif descriptions come in, but near-identical pencil drawings won't improve anything. However, we do get different descriptions of the two areas: in the upper one we have her face and upper body plus the ceiling and so on, and in the lower one we have the lower body, the floor, and all the dirt lying around. My hope was that this would reinforce things. As an example: since the lady is barefoot, I noticed that if I only describe the upper area with the IP Adapter, it looked as if the model would try to render high heels onto her at some point, and that's what I wanted to prevent. So the description from the IP Adapter also goes into the model. That means we have the additional prompting here, the description from the IP Adapter flowing into the upscaling, and the ControlNet part back here. The LoRA here, well, you can leave it out if you like; it just adds details to the picture. As I just said, we're in LCM territory, so the sampler is set to lcm and for the scheduler I use sgm_uniform. We can also go in with a denoise of 0.6, which is partly made possible by the IP Adapter: if you use the IP Adapter for upscaling, you can use a little more denoise than without it. Normally you would use 0.5 or, to stay closer to the picture, even below that; here we can go in with 0.6. And up here I have the LCM LoRA; check out the Latent Consistency LoRA video. Another thing I do here, or rather up here: this is a new node that I haven't introduced yet, the NN Latent Upscale, the Neural Network Latent Upscale. You can find it if you search for "neural" in your ComfyUI Manager; it's the ComfyUI Neural Network Latent Upscale custom node. Its advantage is that it is 20 to 50 times faster than the usual VAE decode, upscale, encode round trip.
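Before we continue with the latent upscaling, here is a minimal sketch of the two-crop idea for the IP Adapter from above, using Pillow; in the workflow the Prepare Image for Clip Vision node does the actual cropping and resizing, and the file name here is only a placeholder:

    from PIL import Image

    def square_view(img, position, size=224):
        # Take a square crop from the top or bottom of a portrait image, resized for CLIP Vision.
        w, h = img.size
        side = min(w, h)
        top = 0 if position == "top" else h - side   # the two views overlap in the middle
        return img.crop((0, top, side, top + side)).resize((size, size))

    image = Image.open("base_image.png")           # placeholder file name
    top_view = square_view(image, "top")           # face, upper body, ceiling
    bottom_view = square_view(image, "bottom")     # lower body, floor, dirt, bare feet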
Normally we decode the latent image into a real image, do the upscaling on the image, and encode it back into latent space. We don't need that here; we can stay in latent space. On the other hand, it is a little worse in terms of quality. Let's wait a moment for the comparison image to load. On the left we see the VAE upscale, which is the sharpest variant. In the middle we see the NN upscale, which is an okay variant. And on the right is the latent upscale as you know it from ComfyUI, and I don't think that's an option; you can see in the final pictures that this latent upscale is not nice. I think we had that as an example in the upscaling video. I've almost never used it; I usually use the VAE upscale. But now I've used the NN upscale, because 20 to 50 times faster is a good argument for using it in the upscaling area. However, the NN Latent Upscale works with an upscale factor. I've exposed that factor as an input here, and I calculate it myself. I do that down here by saying: give me the image size of the base image we created and take its height. Here I have to convert the data type once, because we change from integer to float. Then the factor is simply our target height, the 1440 pixels, divided by the image height that comes in (see the little sketch after this paragraph). With that we have the factor we need up here for the upscaling, so we can just say "I want 1080 or 2160" or whatever, the factor is calculated automatically, and the image is upscaled to the target height given below. And then, once we're through there, we're in this area here: this is the upscaled image, and I think it's very, very good. The face is not so beautiful, which is why I put a face detailer behind it, with a neat trick built in. But the hands are okay, the feet are okay, the background is okay, the RevAnimated support is okay. So in the next step we have to fine-tune the face a bit, and that brings us to the face detailer, which is also the heart of the whole thing. It's a normal face detailer; we've used it a lot. I just have to look... I save the pictures here; you can ignore that note, it's just for me, because of YouTube thumbnails and so on. So we get our picture from the upscaling. And what I did here: I have an extra prompt just for the face, so we no longer have a mix of the old prompting and the new one, because we only concentrate on the face. I entered "dirty face" with a strength of 1.3, which is already very strong. I want a slightly serious facial expression, because she's out there in the ruins and it's supposed to have a cinematic style, hence a somewhat tense situation. And I emphasize "looking past the camera" again, because I don't want what the models are usually trained on, namely photographed people looking into the camera, as is so common. I want a cinematic style here, so never look into the camera, always past it, except as a stylistic device. As a little hint, watch The Shining with Jack Nicholson and notice how often he looks into the camera; there are discussions about whether you, as the viewer of the film, are the evil one, or the ghost in the hotel, because he looks at it, or rather at the camera and at us, the audience, so often. But in general the rule is not to look into the camera. So: an extra prompt for the face detailer.
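The upscale factor calculation mentioned a moment ago is just a division with an int-to-float conversion; a tiny sketch, using the 512x680 base size from earlier as the example:

    def upscale_factor(base_height, target_height=1440):
        # NN Latent Upscale expects a float factor, so the integer height is converted first.
        return float(target_height) / float(base_height)

    print(upscale_factor(680))         # 512x680 base image -> factor of about 2.12 for 1440p
    print(upscale_factor(680, 2160))   # same idea for a 2160p target -> about 3.18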
For the face detailer you can see that I also used an IP Adapter, and when preparing the image I make an educated guess: I assume the face is always somewhere in the upper part of the picture. That can't always work, because the face is already pretty small at this point; with a close-up it's much better, of course. So it's hard to say whether the description from the IP Adapter really fits here, but I noticed that it works within a certain framework if I do it this way. And why did I do it? I'll show you in a moment how I generate the faces in a supporting way. But as for the base image: for example, we have "running" in the prompt, and when the women run, they usually have their mouths open; they are exerting themselves, running, breathing heavily, and I want to support that. So if you get a picture like that and I want the IP Adapter to describe the facial features, the IP Adapter does not simply take the picture and transfer it; it describes it. It describes the eyebrows, the eyes a bit, the face shape, but also the mouth and so on. My hope was that if we give it a face with an open mouth that looks like it's breathing hard, it puts that into its description and supports things a bit here. I only use a weight of 0.5 for it, just as support, and I use the weight type "original". Further up I have, where are we, no, I left "original" in there too. I switched to "channel penalty" in between, which gives slightly sharper results, but "original" is enough; you can try it out. As I said, it's a tinkering workflow. Now, to how I generate the faces. What bothered me, and maybe you know the effect, is that models are always trained in a certain way. I talked about that in the consistent faces video, and it has come up in the comments too: models often spit out the same or similar faces, and that's because of how these models are trained. I wanted to decouple from that, so that regardless of which model you use, you can control how the faces come out in the end. And I did it as follows. I took a LoRA stack from the Comfyroll extensions here. You can simply plug it into the model, and you can see it goes out into a LoRA stack. If you want several of them, you can simply add more and chain them; you can stack as many LoRAs as you like. But I did the whole thing this way so that you become independent of the model for the face. In this LoRA stack I load three LoRAs of the faces of prominent people for now: Emma Watson, Sigourney Weaver and, you can see it a bit poorly here, Liv Tyler. I will probably load several more, not to generate these people, but simply to have a certain basic portfolio of different characteristic faces. So I'm looking for actors or actresses with characteristic facial features. What I'm doing here is: these weights in the LoRA stack are of course input parameters, and I've exposed them as inputs. Then I go in with a Random Number node from the WAS Node Suite and say: please give me a random number with a minimum of zero and a maximum of 0.5, and I do that for every LoRA in this stack. They are then randomly weighted between not at all and half.
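As a small Python sketch of that random weighting (the LoRA file names are just stand-ins for the three mentioned above; in the workflow the random numbers come from the WAS Node Suite and are wired into the stack's weight inputs):

    import random

    face_loras = ["emma_watson.safetensors", "sigourney_weaver.safetensors", "liv_tyler.safetensors"]

    # One random weight per LoRA, between "not at all" (0.0) and "half" (0.5).
    face_mix = {name: round(random.uniform(0.0, 0.5), 2) for name in face_loras}
    print(face_mix)   # a different characteristic blend of faces on every run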
And so, independent of the model, we get a mixture of faces that is different every time. And as I said, you can expand the construct: you can hang another LoRA stack behind it. You can also load only two LoRAs here, or one, although if you want to achieve what I wanted to achieve, one probably makes little sense. But of course you can use it to steer the facial features specifically again, within a certain framework. On top of that we still have the IP Adapter as a description, and a denoise of 0.55, I think, for the face. And I have to say, that works pretty well. Before I built in the Liv Tyler parts I had Uma Thurman in there, and she sometimes came through quite strongly. What you see here more often is a little Emma Watson flashing through, but otherwise it works pretty well. So into the face detailer goes a mixture of its own prompt, a mixture of faces with characteristic features, and the description from the IP Adapter itself. And with that we end up with this picture here; that usually works pretty well. Nothing has changed in the rest of the picture; here it was only about the face. Which brings us to the end. What I do at the very end, to increase the closeness to reality, I think I can put it that way, so that the photo looks even more realistic, is one more round of post-processing. I adjust the brightness a little, put a bit of contrast, saturation and sharpness on it, and set Detail Enhance to true. You can see what Detail Enhance does, for example here at the top of the hair: before, it's good, it's nice, but a little blurred; afterwards the hair gets a little more detail and a little more shine. In addition, I put a decent amount of film grain on top, because I think a photo from a movie, or something like that, always looks a bit grainy; there is often a post-processing step that adds graininess, but only very subtly. And I raise the temperature of the picture, which means I shift it a little more into the reddish, warm colors; negative values would be cold colors, the bluish range. I want to give the whole thing a bit more of a post-processed, LUT-like feeling. And you can see it if we compare the picture before and after: back here at the walls you can see how the film grain works a little in the background, and the foreground is also a bit affected. This is the smooth picture, which looks good, but this is the post-processed picture, and I think it's just a little more crisp. That's certainly a matter of taste, but I always like it. Nevertheless, the recommendation: if you attach such a post-processing stage to your pictures, make sure that the picture before the post-processing is at least still worth saving on its own. Trust me, otherwise you can get upset. Not every post-processing setting works with every picture, and if you change the model or a LoRA or something, it can get in the way, and then it's annoying if a picture is spoiled and you can no longer save it.
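A rough sketch of that final post-processing chain with Pillow and NumPy, with made-up strengths; the workflow itself uses the WAS filter adjustment node, so this only illustrates what the contrast, saturation and sharpness tweaks, the subtle film grain and the warmer temperature amount to:

    import numpy as np
    from PIL import Image, ImageEnhance

    def post_process(img):
        img = ImageEnhance.Contrast(img).enhance(1.10)    # a bit more contrast
        img = ImageEnhance.Color(img).enhance(1.10)       # a bit more saturation
        img = ImageEnhance.Sharpness(img).enhance(1.20)   # a bit more sharpness

        arr = np.asarray(img.convert("RGB")).astype(np.float32)
        arr += np.random.normal(0.0, 4.0, arr.shape)      # subtle film grain
        arr[..., 0] += 6.0                                # warmer temperature: a little more red...
        arr[..., 2] -= 6.0                                # ...and a little less blue
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    result = post_process(Image.open("face_detailed.png"))   # placeholder file name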
Yes, and that's the workflow. We'll let it run again and see what pictures come out; maybe a little action picture. No, that's pretty nice too. It has picked "standing" again, and the storage area. Here is the base image, without the face detailer. You can see that this face is characteristic of the model, although we've decoupled that a bit by mixing, and you can clearly see that the face changes when we run it through our LoRA stack. And this is the picture again after the post-processing. I think it turned out very well again. Now I'd like one with a bit more action. Let's keep generating and see if I can trigger one spontaneously; Murphy's Law will probably say no. Exactly, I don't want her sitting, I want a running picture, but without forcing it in the prompt. No, she's standing there too. I'll try a little longer; I'll be right back. So, we got a running picture. Let's look at the upscaling. Here you can see again that the brightness increases quite a bit, which is why I already darkened it beforehand, but not too much, so that not too many details get lost. And here we see the step to the face detailer: because of the prompting, a little dirt was added, and we see that the open mouth has been carried over, my guess is by the IP Adapter with the face image, but of course also by the denoise value of the face detailer itself. And here we have the whole thing post-processed again, with a little film grain and so on, and next to it the smooth version. As I just said, we can also adjust the resolution up here. I'll do that for the second run, and I hope the computer doesn't overload on me and I get a crackle in the audio; but it could also be that that only happened during the Patch Model Add Downscale recording. The audio is stuttering a little again. This picture here is really better, but you could save that one too, no matter. So now we make a 2160 picture. That will take a while, but I wanted to keep it manageable with this LCM setup. Wait a moment before it goes in there; we also want to see the face detailer and so on in action. Yes, let's take the running picture. The base pictures are, well, really something. But the background is very nice: overgrown corridors, and up here light comes in through these skylights, with vines hanging through. Very nice. So, I just saw it hang for a moment; I hope I'm back now. Otherwise I might have to check whether the fallback audio track captured anything. This is a bit of a test; otherwise, in the future, whenever I render at such resolutions, I'll just take a short break. We'll let it run to the end. Let's see, where is our last picture? Yes, and that's now the resolution: 2160 pixels in height. Here, too, a little dirt was added by the prompting. I think the film grain is very nice, maybe even a little too strong, but when I tried it lower it wasn't enough for me, so I left it that way. And yes, I think it adds detail; it works very well. By the way, the upscaling didn't take much time. Maybe I had to cut, maybe I just let it run, I'll have to check in the video edit, but it wasn't very much time, thanks to the whole LCM construct. It was just resource-hungry: 2160 pixels in height is quite something, but it works, and very good pictures come out. And we have the division here: the base picture is relatively fast because of the low resolution, and the upscaling we do with the Latent Consistency Model,
or rather with the LCM LoRA, plus the face detailer on top. I'll do another 1440 picture and think about whether I forgot to tell you anything, but I believe that was everything. As I said at the start, and I'll emphasize it again at the end: I made this video more to explain the way I arrived at this, nothing more. So, here is the picture that was just created. Very cool again, beautiful details, cool clothes. This time the face, I think, shows one of the faces from the mixture a little strongly; you can see a bit of Emma Watson flashing through. But it isn't her, and I'm generally not a fan of using LoRAs trained on specific people, but in this case it's fine by me, because I only want the characteristics of the faces here. You can play around and experiment with it yourself, as usual. Let me know whether you like it when I just share a bit of what I was thinking here and there, whether it helps you or even inspires you to try things, or whether you have feedback along the lines of "you could also do it this way or that way". This is also meant as an exchange. Many roads lead to Rome, I always think; you just have to arrive at the motif you want. And I think such videos may also be interesting if you are new to Stable Diffusion or new to ComfyUI, so that you can see what is possible, how you can combine the techniques, and how you can build up a workflow. You know by now how it's meant. In any case, I hope you could take one or two pieces of information with you. I wish you a lot of fun making your own workflows, or analyzing and building on this one, and creating pictures. See you in the next video. Bye.
Info
Channel: A Latent Place
Views: 1,598
Keywords: ComfyUI, Stable Diffusion, AI, Artificial Intelligence, KI, Künstliche Intelligenz, Image Generation, Bildgenerierung, LoRA, Textual Inversion, Control Net, Upscaling, Custom Nodes, Tutorial, How to, Prompting
Id: I5wHf2r-g4Y
Length: 44min 53sec (2693 seconds)
Published: Fri Nov 24 2023