Hello and welcome to this video, in which I'd like to share some hard-earned knowledge. I've built a workflow for myself that I actually use, and which strictly speaking has little to do with generative AI. There is a neural network involved, but this workflow is only meant to scale up photos or existing images. By that I don't mean the kind of upscaling generally known from the Stable Diffusion or ComfyUI world, but upscaling for real photos. At this point we can no longer use a sampler, because even at a denoise of 0.1 it would distort the image. Some of you may know open-source tools like Cupscale, which are also built around upscaling models. I'll show you in a moment: those models are freely available, and we also use them for the upscaling stage in Stable Diffusion workflows here in ComfyUI. You can use them very well on smaller or JPEG-compressed images and often get a little extra quality out of them. As I said, I like to use this workflow myself when I have images that don't match my monitor resolution. I run 1440p on my monitor, WQHD, and that's what I use it for. If you download images from the web or from a Google search that you like and want to upscale a bit, this workflow is for that. Unfortunately I won't build it with you from scratch, that would be a bit too complex, so we'll just walk through the finished workflow. But it also shows what else you can do with ComfyUI beyond classic image generation: it's very well suited for batch processing of images. If that interests you, take a look at the post-processing video; you can also use ComfyUI to push existing images through post-processing, add effects, and so on. That works wonderfully. Well, let's get started. First I'll jump into this node here. There's an upscale model in it; I use the LSDIRplus.
It's trained for real photos and JPEG compression, and that's what I meant: we do have a neural network in the workflow, but no generative AI at this point. Then again, that's what every area of AI is, neural networks trained to process and enlarge certain image information. This is a 4x model, and you can get the models from the website openmodeldb.info. It grew out of the old Upscale Wiki; there's even a link to the old wiki that leads here. Up at the top you can see it's still in alpha and being worked on, but you'll find all the models here, and you can filter by what you need the upscaling model for. For example, I went to the photo category; here you have models trained for real photos, and here you'll also find LSDIR. There are different variants; I took the standard one, and it pretty much always works well. I also use it when upscaling Stable Diffusion images generated by samplers. But as you can see up here, if we take the filter out again, no, we have to press 'all', there are also, for example, where did I see it, here, text: models specially trained for text, or for faces, manga, deblurring, everything is there. You can click through and download the right model depending on your use case. I think it's a bit of a shame that not all entries have example images; I find it easier to get oriented when you can see examples, but oh well, go browse the page yourself, the link is in the description, of course. So, that was the upscaling model. Now let's get into the workflow itself. The heart of it is this Load Image Batch node from the WAS Node Suite, which lets us load several files one after another, and I've built a somewhat ugly construct around it, but I couldn't do it any other way.
The problem is that this node doesn't really reset itself. We can select single_image here, in which case it loads just one image from the folder we enter here. As you can see, you can also just navigate to the folder, copy the path up here, and paste it in with Ctrl+A, Ctrl+V. That's an absolute path; relative paths work too, but you can easily get them wrong, and then the images end up somewhere other than where you expect. For convenience I just put the absolute path in here. We can switch between single_image and incremental_image up here, and with incremental_image it walks through a folder, so I've already prepared the images here; I'll say more about that in a moment. It then loads one image after another, and they get upscaled. The node itself only loads the image; what we do with it afterwards is of course up to us. With the index we can also say which image we want from the collection. With incremental_image, the node counts up by itself, so every time we hit Queue Prompt, the index is incremented. And the problem with this node, the last time I looked it was even in the WAS Node Suite description, is that these are experimental features. Unfortunately you notice that here and there, because the index doesn't reset. We can't just set it back to 0 and have it start over; internally, the node keeps the index where it was. So if we've processed five images, it's sitting at 4, and even if we enter 0, we still get index 4, image 5, and we can't reset it. According to the documentation, the only thing that resets this node is a change to the path or the pattern. I'll load it again fresh, Load Image Batch, that's what it normally looks like. I've set the pattern here; a single star is enough, meaning: take everything.
So, to really reset the index effectively, I've built something here: if we set this to 1, an empty pattern is passed in instead, and that makes the node crash. Once it has crashed, I can run it again, it does its batch, and the internal index is reset; then we set it back to 0, it takes the star as the pattern again, and everything works. We'll get to that in a moment. There are other options I've seen, for example storing the index yourself with the Number Counter from the WAS Node Suite. Let me take a quick look; it was the Number Counter, which basically does the same thing. We can say start at index 0, stop at, in this case, 0, meaning never, the step is always 1, and output an integer. It should increment every time. In theory you could also wire a reset bool into it, so that when it's set to 1 the counter goes back to 0, and when it's 0 it keeps counting. But that doesn't work with this node either; I don't know why, but the internal counter just keeps climbing, and that's why I decided to simply let the node crash once at the start. If you only use it once, it works; as soon as you have a second node, another image batch or something, it gets annoying, because you have to crash both of them, and that doesn't work with this construct either. But for our case it's good enough. One more thing to watch out for: we can specify here with a label what the naming of the images should look like, and for that we unfortunately have to rename them once. As you can see, I've already named my images image 001, image 002 and so on; I took a few out in between, those are the bigger ones. So we have to specify the naming scheme at this point. Or at least I assume so; I haven't actually tried it without, but I think it has to be that way.
Why else would I have to specify it? It's a bit of a shame, because we lose the original file names at this point, but never mind. Down here I output the file name that's currently being processed. Unfortunately that only works in the console again, as I just saw while testing; I don't know whether this Text Debug node from the TinyTerra nodes is broken again. The pythongosssss extensions also have a text debug like that, but I didn't want to install them yet, because they clash a bit with other plugins I have. Doesn't matter, we'll look at the console in a moment. Good. So what does this workflow actually do? First of all, it loads our prepared images in batches from a folder. We can also specify here what the target height of the image should be; as I said, with WQHD I have 1440 in here. And down here we can set a boolean, 0 or 1, true or false, that controls whether images larger than our target height should also be pushed through this process. Small disclaimer: they run through it either way. I didn't manage to build a real branch, 'if A take this path, if B take that path', so they are always computed; the boolean only decides which result gets saved at the end. So if we want to force images through this process, then the image that comes out of the upscaling is saved, and it's not purely an upscaling process, it's also a bit of a denoising process. If you have large images that show JPEG compression, you can say at this point: even though the image I'm sending in is larger than my target height, I still want whatever the upscaling model did to it, JPEG compression removed, artifacts cleaned up, whatever. And from here we go further into the process: up here we always take the image out and push it through the upscaler.
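The sizing rule can be written down compactly. Here is a small sketch, under the assumption stated in the video: images shorter than the target height are scaled up to it, taller images keep their original height, and the aspect ratio is always preserved. The function name and signature are my own, not a node from the workflow.

```python
def final_size(width: int, height: int, target_height: int = 1440) -> tuple:
    """Compute the output resolution of the workflow's sizing step.

    Shorter images are scaled up to target_height; taller images keep
    their original height (the workflow never downscales).
    """
    out_h = target_height if height <= target_height else height
    out_w = round(width * out_h / height)  # preserve aspect ratio
    return out_w, out_h

print(final_size(480, 588))     # the small Pexels test image -> (1176, 1440)
print(final_size(5760, 3840))   # the large image keeps its size -> (5760, 3840)
```

So the 480 x 588 test images from later in the video land at 1176 x 1440, while the 5760 x 3840 image passes through at its original resolution.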
Back here we look at the side length: if the image height is smaller than our target height, the image is scaled up to the target height. If it exceeds the target height, the original height is kept instead; it is not scaled down at this point. You could build that in too, but I didn't want to for this case. We achieve that by reading the height and width of our image down here. We're only interested in the height, because we calculate with the height. There are a few debug nodes in between that tell us how the workflow has decided. Then we use a Number Operation to check whether the input image is taller than the height we want to reach. Here we get another debug; the console tells us whether the original height will be kept or not. And down here I've put a Python statement from the Quality of Life suite that checks whether x is 0 or y is 1. So what are x and y? x is our Number Operation here: it checks whether what we feed in is greater than what we want. If it isn't, the decision here is: take the target height. If we send larger images in, there's a 1 in here instead; then it says yes, the input image is taller than the height we want. And at this point the y comes in: if y is 1, the enforce flag, then it still takes what the upscaling process produced. That means at the end we get back the original height of the image we sent in, which in my case is larger than 1440, but the processing still runs over it. If neither is the case, it simply takes the image from up front, passes it through, and throws it in at the back, so the image we sent in is saved again unchanged. That's what this workflow does at this point.
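The x/y decision described above can be sketched as plain Python. This is a minimal re-statement of the logic, not the actual node graph; the names `save_upscaled`, `enforce`, and `choose_output` are my own.

```python
def choose_output(input_height: int, target_height: int, enforce: bool) -> tuple:
    """Mirror the workflow's decision step.

    Returns (save_upscaled, output_height):
    - x is the Number Operation: is the input taller than the target?
    - y is the enforce boolean from the UI
    - the Python-statement node checks 'x is 0 or y is 1', i.e. the
      upscaled result is saved when the image was shorter than the
      target, or when enforcing is switched on.
    """
    x = 1 if input_height > target_height else 0
    y = 1 if enforce else 0
    save_upscaled = (x == 0) or (y == 1)
    output_height = target_height if x == 0 else input_height
    return save_upscaled, output_height

# A small image is upscaled to 1440:
print(choose_output(588, 1440, enforce=False))   # (True, 1440)
# A large image passes through untouched unless we enforce:
print(choose_output(3840, 1440, enforce=False))  # (False, 3840)
print(choose_output(3840, 1440, enforce=True))   # (True, 3840)
```

This also explains the console output later in the video: for the 3840-pixel image without enforcing, `save upscaled` reads 0, and with enforcing it reads 1.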
Back here is another output for us where we can read what the workflow has decided, the selected height. You can follow that in the console and get a feel for how the system behaves. And here are switches that select, based on the booleans down here, which of the incoming images should be taken. Up here we have the save node. I've taken the one from the TinyTerra nodes, because it lets us say that the workflow should not be embedded in the file, which we don't really need if we're just upscaling real photos. I don't think 'overwrite existing' quite works at this point; I have it on false here, but it still cheerfully overwrites. And I've stored the output path here simply because it annoyed me to always have such a short input field; of course you could just drag the node wider, but if the images somehow don't end up where they should, I found it easier to read here where my image actually went. The save prefix is taken from the file name text up here, which means the image saved to the other folder has the same name as the image we loaded. So, let's try it out. I downloaded images from Pexels in advance and made them smaller, so they're below the target height of 1440; 480 x 588 pixels, to be exact. And I created two variants: the first one, if you zoom in, has strong JPEG compression, while the second was simply scaled down by height. That way we can see a bit of how the system behaves. Okay, we can already see I have 20 elements in this folder, and here is my upscaling output folder that I entered up here. Now let's run the whole thing. Here is the input folder, which I've entered here. We set the node to incremental_image. The batch count here is easy to set with the mouse wheel, or in the controls, or just typed in by hand.
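Taken together, the loader, the upscale pass, and the save node amount to a simple batch loop: walk the input folder, process each image, and write the result under the original name (as lossless PNG) to the output folder. Here is a hedged sketch of that idea; the `process` callback stands in for the whole upscale pipeline and is not a real node.

```python
from pathlib import Path

def batch_process(input_dir: str, output_dir: str, process) -> None:
    """Process every file in input_dir and save results under the same
    base name (with a .png extension) in output_dir.

    `process` is a placeholder for the upscale/denoise pass: it takes the
    raw file bytes and returns the processed image bytes.
    """
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in sorted(Path(input_dir).glob("*")):  # pattern '*' = take everything
        result = process(src.read_bytes())
        # keep the original name, but save losslessly as PNG
        (out / src.with_suffix(".png").name).write_bytes(result)
```

This mirrors what the workflow does with its 20 test images: the output folder ends up with files named exactly like the inputs, just as PNGs.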
We want 20 runs, because we had 20 elements. On the left we see the image being loaded, on the right the image being saved. I hit Queue Prompt, and now the whole thing rattles through and saves our images. This works relatively quickly, at least for small images; it takes noticeably longer if you throw in images that are extremely large. I also have the enforcing of larger images disabled down here; we don't need it for the images we're looking at now, they're all smaller than the target size. Let's see how much it still has to do; there are 8 left in the queue. Each image was either shrunk with stronger JPEG compression or simply shrunk, so we can observe the effect. And then I'll show you the enforcing afterwards. Only two images in the queue now, one more, and then we can look at the results. So, now it's done. It's best to set this back to 1 here and let the node crash once. That sounds a bit sad, but now the index has definitely been reset. Let's take a look. Here in the upscaling folder we now have our upscaled images, with exactly the same names as the input images, as you can see. I told it to save them as PNG rather than JPEG. Now let's compare. What we have here is the image with the strong JPEG compression, now raised to 1440p, and this is the image that was only downscaled, without JPEG compression. The latter is better, of course. These models don't work miracles, and they're not comparable to commercial products like the photo upscalers from Topaz Labs. Still, for zero euros on your home computer, you get pretty good results. Let's look further: JPEG compression, no JPEG compression. And here you can actually see quite well... let me get the original image again. That's this one; this is the original image, and you can see how small it is.
And we've now scaled it up to this size, and I think that's a win. True to the motto 'garbage in, garbage out', you can't expect magic from images that are completely pixelated. It gets better, we get the right size, but with JPEG compression everything stays a bit mushy. Here you can see the difference quite well; I think the second picture is already a pretty good upscaling result. JPEG compression, no JPEG compression; JPEG compression, no JPEG compression. And the same with this picture here. Let's compare it to the small version again: that was this one, our original picture, and this is the upscaled picture. I think they're good enough to use as wallpaper or whatever. Lots of examples; details come out nicely; it works pretty well, and I like using the workflow myself. What else do we have? I'll go into the image input and take this picture, for example. Dimensions: 5760. We'll copy that in here, and I'll rename it IMG000 so it sorts to the beginning. And now we can try the enforcing. Oops, I did it wrong; I put the picture in my output folder, it has to go into my input folder. So, now we can empty the output folder once, and we'll set this to single_image. Oh, the console output: here we can already see the crashing of the node that we just forced. And here we see, for example... I have to find where the run begins. If it says 'prompt executed in' here, that was the previous run. But we see here, this one is always wrong, that's not the image name; the image name is here, but the index it reports is a bit strange. In any case, we see the name of the picture here, and the original height of the picture here. We see how it decided whether to keep the original height or not; that's zero in this case, because the image was smaller than our 1440.
That's why it decided on our input of 1440 and then ran the process. We see this information for every picture we push through. Now I have single_image and index 0, which means it takes the 5000-something picture, the big one. I have the enforcing down here set to 0. Let's run that now. There it is, the picture we just copied in. This will take a while to upscale, because, as I said, it already has a size of 5760 x 3840, and the model blows that up 4x first. That takes a while. What did I want to look at? I forgot; then it wasn't important. I'll just let it rattle on. So, it's done now; it also has to save, which takes a little longer too, and now that's done as well. Let's look at the console again. It processed image 000, correct; it recognized that the picture has a height of 3840 and decided to keep that original height, then did the calculation. Let me look at 'save upscaled'; this part is a bit confusing. That's the calculation down here with the little Python formula: there it says 'save upscaled: 0'. So even though it ran through once, and I said that's the slightly negative part, that it always runs through, in this case it didn't touch the picture; it just turned a JPEG into a PNG. We could change that up here if we wanted to, but PNG is lossless. Now, if we want to enforce it down here, that is, make sure the upscaled result is saved even though the workflow decided against it, we set a 1 here, and then it runs again. Now we wait again, or not; I'll just cut it. There was the plop, and now it's through. It saves again; let's go in here, it's just being written. So, with this picture it has now done the upscaling, or rather the denoising. We can see it still has the original size. There's not much gain in this case, because our original picture was already very good quality.
But it can happen that you have a big picture that still has JPEG compression on it, and then you can do this. There was one picture where it didn't work well, and that was this one. The picture is already very sharp, you can even see the pores here; it was probably sharpened in post-processing before it was uploaded. So far it's the only picture I've found where it didn't work; the output really got pretty nasty. That's why I added this enforcing switch down here, so you don't mess things up with images like that. But usually, 99% of the time or so, it works. Let me think for a moment. We can take a quick look at the console: 'save upscaled: 1', because we just set the enforcing, with 3840, and it's still the variant that went through the upscaling process. As I said, I like to use it, and I've expanded it a bit just before the video. Down here, by the way, is a pretty ugly story. I couldn't find another way to drive the image switch up here, so I had to calculate here, or rather ask, whether we either have a smaller picture or we're enforcing, in which case the upscaled version should be used. I had to cast a float into a text and the text back into a number so I could feed it into the number input up here. If you're wondering why there's a float output here and I don't use it directly: this query does not return 0 or 1 at this point in a form we can work with up here. The boolean-to-number input always wants a 0 or a 1, but this node actually returns true or false as text; the integer output says 0 or 1, while the float returns true or false. Unfortunately we can't throw true or false in up here, so don't be surprised. It's a bit ugly and clumsy to cast back and forth between data types, but it works; you have to work with what you have. Play around with it a bit. I'll upload this workflow for you, of course. Feed in a few old photos that have smaller resolutions.
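The type juggling described above boils down to one small conversion: an output that yields the text 'True'/'False' has to become the plain 0 or 1 the image switch expects. Here is a minimal sketch of that cast; the function name is my own, and this is an illustration of the idea, not the actual node chain.

```python
def text_bool_to_number(value: str) -> int:
    """Cast a 'True'/'False' text output into the 0/1 an image switch expects.

    Anything other than 'true' (case-insensitive, whitespace-tolerant)
    is treated as false, mirroring a defensive text-to-number cast.
    """
    return 1 if value.strip().lower() == "true" else 0

print(text_bool_to_number("True"))    # 1
print(text_bool_to_number("False"))   # 0
```

In the workflow this is done by chaining conversion nodes rather than code, but the effect is the same: text in, 0 or 1 out.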
Scale them, and play around a bit with the models, also the ones from OpenModelDB. The same models are also very useful in normal workflows that generate actual AI images, and depending on whether a model is made for anime, real photos, or text, you can learn a bit about how these models behave. Otherwise, I hope you find a use for this. As I said, I have one: when I collect images that I like and at some point think, oh, shall I blow these up? Then you can just throw them into the Load Image Batch, let it run, have a coffee, and say down here: yes, definitely do the upscaling or denoising, even if the picture is bigger. As a rule, apart from that one picture I showed you, the results were always good, which is also why I hadn't built that switch in before. And with that: have fun playing around and experimenting, and I'll see you in the next video. Take care and goodbye.