Hello and welcome to this video, in which I'd like to share some knowledge with you again. A lot has happened around the IP adapter: new models have been added, the FaceID models, there are FaceID and FaceID Plus, and I'd like to introduce them to you. I probably won't be able to show everything in this video, because you can of course combine all these models with each other and chain them one after another, and we won't be able to cover all of that here. Nevertheless, I'd like to briefly show you how to build a basic setup and how the whole thing works. The most complicated part of the whole story is probably the installation, because it is not as easy as you know it from custom nodes or LoRAs, where you just download something. It's a bit tricky at that point, but actually not that bad. But Matteo and I have seen on our Discord that it causes problems and maybe misunderstandings, so I'd like to try to explain it as well as possible here, so that you can get into the topic as quickly as possible.

So, we are now on the GitHub page of the IPAdapter Plus, and if we scroll down there is a very important section, and that is the installation. The whole thing is of course written in English; if you have problems with English, you can have the page translated by Google. Nevertheless, I'll try to explain as best I can what we actually have to do. To install the IP adapter you can use your ComfyUI Manager as usual: just search for "IP Adapter", install the ComfyUI IPAdapter Plus, and you have the whole thing. Alternatively, you can of course also pull it into the custom_nodes folder by hand via git clone. Now there are a few things we have to keep in mind. Here are all the IPAdapter models. You should download them, and they go into the folder, I'll show you, ComfyUI/models. There you need an ipadapter folder; if it doesn't exist, create it and save the models in there. It's perhaps quite interesting to know that the models used to be stored elsewhere, namely in the custom_nodes folder: there is a folder for the IPAdapter Plus, and it also has a models folder. So you don't have to download everything again, which would of course be annoying if you have a slow connection; just take the models you already have out of that folder and copy them into the folder I just showed you, or that you have now created: ipadapter inside the models folder. Matteo marks the old one as the legacy model location. It still works, and he also told me that this step isn't strictly necessary, but it is cleaner to simply have the models in the models folder, and who knows, maybe at some point this gets cleaned up and the old folder disappears, and then that would cause trouble.

Well, these are all the models, and Matteo has done very good work here showing which Stable Diffusion version each one is for. The ViT-H and ViT-bigG information refers to the CLIP Vision encoders, which you also have to download. They go into the ComfyUI folder under models/clip_vision. You probably already have them on your drive if you use the IP adapter, because it's these two here. I named them "Stable Diffusion 1.5" and "SDXL" back when I downloaded them; in the meantime I would rather name them ViT-H and ViT-bigG, because then they line up with this table. But I still have to do that myself.
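To keep all these locations straight, here is the installation and folder layout in one place. Treat this as a sketch: the repository URL and the portable-build paths are what I believe them to be, so double-check them against the installation section on the GitHub page:

```
:: manual install into the custom nodes folder (alternative to the ComfyUI Manager)
cd ComfyUI\custom_nodes
git clone https://github.com/cubiq/ComfyUI_IPAdapter_plus

:: where the downloaded files go (create the folders if they don't exist):
::   ComfyUI\models\ipadapter    <- all IPAdapter models, including the FaceID ones
::   ComfyUI\models\clip_vision  <- the two CLIP Vision encoders (ViT-H and ViT-bigG)
::   ComfyUI\models\loras        <- the FaceID and FaceID Plus LoRAs
:: legacy location (still works, but it is cleaner to move the files out):
::   ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\models
```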
In any case, you can see nicely in the table which IPAdapter model needs which CLIP Vision encoder. Further down are the two new FaceID models. These IPAdapter models also go into the ipadapter folder inside the models folder, and there are two additional LoRAs, the FaceID LoRA and the FaceID Plus LoRA. Those two LoRAs go into the loras folder inside your models folder. I also separated mine into 1.5 and SDXL again; you don't need that, FaceID currently only works for Stable Diffusion 1.5. Roughly speaking, they go into your loras folder, and that's it.

To be able to use the new FaceID models, we have to do something in advance: we have to install InsightFace into our Python environment. And that's the point where many people despair, I think, even though it's actually not difficult at all. Unfortunately, I can't explain it for Linux; I roughly know how it works, but I have a Windows system. I went through the installation this morning and ran into an error, I got it fixed, and I'll explain to you right now how it works. I think it's even easier under Linux, though. What you have to do is go to your ComfyUI directory; there is a python_embeded directory, click into its address bar and type cmd. So now I have a console open there. You have to install three things, and you can install all of them with pip. First we need InsightFace, so you type pip install --upgrade insightface and let it run. For me it went through quickly because I already installed it this morning. For Linux users this should probably just work; for Windows users an error can happen at this point, I'll show you a screenshot. If that's the case, so if you get a message that says something like "Failed building wheel for insightface", then you need something extra. I've already opened it back here: you go to visualstudio.microsoft.com, to the Visual C++ Build Tools, I'll link it below in the description, and download the build tools. When you have downloaded it, I think it was an exe, not an MSI, I'm not sure, it definitely starts, and then you get a screen that shows you which installations you can do with it. I'll show you a screenshot of that too, because you select "Desktop development with C++" at the top. You install the whole thing, it takes a while, but once it's installed, you go back to your console, that was this one, exactly, the other one, ComfyUI, and run pip install --upgrade insightface again. After I had installed the build tools, it worked for me too. It may be that it works for you immediately under Windows, depending on whether you already have these build tools or even Visual Studio installed; in any case, that's how you get it running. Next we need the ONNX runtime, and that works the same way. We type, wait, I'll clear the console for a moment, we type pip install --upgrade onnxruntime and press enter once. For me, all requirements are already satisfied because I have done it before. And third, we need pip install --upgrade onnxruntime-gpu. Of course everything is already fulfilled for me, but these are the three commands that you have to enter in the python_embeded directory to install the requirements so you can use the whole thing.
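For reference, here are the three commands again in one place, run from a console opened inside the python_embeded directory. This assumes the portable Windows build; if plain pip is not available there, calling it through the embedded interpreter should do the same thing:

```
pip install --upgrade insightface
pip install --upgrade onnxruntime
pip install --upgrade onnxruntime-gpu

:: if pip itself is not found, try calling it via the embedded interpreter instead:
::   python.exe -m pip install --upgrade insightface
:: if insightface fails with "Failed building wheel for insightface",
:: install the Visual C++ Build Tools ("Desktop development with C++") and retry
```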
When you have done that, you should actually be ready to start. Make sure that ComfyUI is updated via the update.bat, that your extensions are updated, and that you have downloaded everything: the models, the CLIP Vision models, the two LoRAs. If that's the case, you can jump into ComfyUI, and now I'll show you what is possible with the whole thing.

At this point I'd like to interrupt the video for a short disclaimer. In this video I use the faces of prominent people who are in the public eye to demonstrate the technology. These faces are familiar to us and therefore offer good points of comparison. This happens in the context of researching and learning this technology. In general, never use the faces of people without their consent, and do not produce or spread harmful images on the internet. Be aware that your hard drives or online storage accounts will never be safe from hacking, and that there is also the danger that false images can get into circulation. Every person has a right to privacy, whether they are in the public eye or not. So do not use these techniques for illegal and/or harmful purposes, but only for research purposes and with the consent of the people involved.

So, we hit Load Default once and get a default setup, which I'll rearrange a bit first; we'll probably still have to push things back and forth a little. For the VAE I always add a reroute right away, because with VAE Encode and Decode it gets messy at some point; it's very convenient to have a reroute directly up here if you want to swap in a loaded VAE later. I've planned a video for little tricks like that too. We'll keep the Save Image, although I probably can't use a picture from this demonstration, say for the thumbnail. No matter. So now we have a bit of space, and now we load an IP adapter. We take the IPAdapter Encoder, no, that's wrong, we need Apply IPAdapter, there it is. And we're going to wire that up now. The very first thing we need is of course our model, and we know that we have to use a LoRA with it. That's why we take the Lora Loader Model Only node from the vanilla ComfyUI nodes and drag it into our sampler; since we don't need to load a CLIP with it, that's pretty handy. And here I put in the FaceID Plus LoRA. So that I don't forget, I'll set the strength to 0.5 right away; 0.5 to 0.6 is actually quite good, but to recreate faces you have to play with the weights anyway, depending on the reference image and so on. Then we need an IP adapter model, so we take an IPAdapter Model Loader and select the FaceID Plus model. You always have to keep an eye on the layout, otherwise you quickly run out of space; let's try to stay a bit tidy here. We need CLIP Vision, so we take the CLIP Vision Loader; mine is still called SD1.5, but remember, for FaceID Plus, if we look at the table, we need ViT-H, and that's this one here, the smaller of the two. Then, of course, we need an image, I'll pull that over here, we say Load Image, and we'll set that up in a moment. And we need the InsightFace model; there is also the new InsightFace Loader node. You can just leave it on CPU, but you can also try CUDA. It may be that your CUDA version in Python is not compatible with it, but that doesn't matter, just leave it on CPU, it works flawlessly too.
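Before we feed in an actual image, here is the wiring so far in one place. This is just a sketch of the graph as I described it; the node names are from the IPAdapter Plus suite, but the exact socket labels may differ slightly in your version:

```
Load Checkpoint --model--> Apply IPAdapter --model--> Lora Loader Model Only --model--> KSampler
                                                      (FaceID Plus LoRA, strength 0.5)
Apply IPAdapter also receives:
  ipadapter:   IPAdapter Model Loader  (the FaceID Plus model)
  clip_vision: CLIP Vision Loader      (ViT-H, the smaller of the two encoders)
  insightface: InsightFace Loader      (provider: CPU)
  image:       Load Image              (the reference face)
```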
So, good. To start the whole thing now, I hope I have already played the disclaimer, if not, it will probably come now, because I'm going to use pictures of prominent people for comparison and wanted to record a disclaimer for that. We'll take Meryl Streep, because it's simply a well-known face, and that way we can better judge whether it works. To prepare, I just pulled the image in here, but that's not the right way, because for the preparation we now need the Prepare Image For InsightFace node from the IPAdapter nodes. We pull the whole thing through, so it prepares the picture for the InsightFace model. And what I would recommend before that is installing a crop node, namely the Image Crop+ from the ComfyUI Essentials, also a suite from Matteo. That way we can prepare the picture a bit better: I'd say we take width and height and store them first, put a primitive node on each, set the whole thing to center, and then we can actually prepare our picture quite well. I'll put a Preview Image under it, because that's what we're doing now, and I'll mute the sampler up here with Ctrl+M, and we'll see that it works. Exactly, this is now the prepared image for the InsightFace model. And what I always do is go to the right, Extra Options, and enable Auto Queue, press Queue once, and now we can very conveniently start cropping towards the face: we can set the height and width up here, and down here we can adjust the offset a bit, like this. But we need a complete face for InsightFace; it can happen that InsightFace does not recognize a face at all, but with something like this it should always work well. Now we have positioned it relatively well, so we can unmute our sampler up here and set it up directly. I'll take dpmpp_2m with karras. We should go down a bit with the denoise, 0.6 is actually quite good, and the IP adapter generally wants a few more steps, so we go up to 30. I take seed 0, set it to fixed, and then we can actually try it out.

First, the prompting; of course we have to adjust that a bit. I put in horror, big nose, big mouth, big eyes, just things experience has taught me. I haven't dealt with these for very long, they are completely new models, you have to say that, so you have to play around and experiment a bit, but this works pretty well. And now let's say "70 year old woman", and no, I'm not doing Meryl Streep any injustice, I'm even being very nice, because she is already 73 years old. "In a space station wearing a space suit". I think she has goggles on, glasses. Let's leave that and add a few more descriptors: absurdres, masterpiece, high detail, intricate, 4k, UHD, HDR, cinematic quality. And we need a reasonable model up here, let me make it a little bigger, I'll take the Epic Realism, and we'll just see what happens, whether we get an error message or not. That looks good, it has loaded, our image is generated, and we have a pretty nice image of Meryl Streep in a space suit. That already works very well. That one too. Number three. That works pretty well too. So we stay with seed 1. What you can do now: the FaceID model is already pretty good on its own, but it gets even better if you combine it with other IP adapter models. For this we copy the Apply IPAdapter node. We need the Image Crop again, the same variant, also with the same width and height.
And here we don't need the Prepare Image For InsightFace but the Prepare Image For Clip Vision, because we no longer take the FaceID model here, but, wait a minute, where is it, the Plus Face model for Stable Diffusion 1.5. We load that in here. We can actually reuse the CLIP Vision from up here. We now drag the model in here, and from here we go into the LoRA loader, and we don't need InsightFace or an attention mask right now. I'll mute the sampler again so that we can adjust the image here too. Let's take a look. What does it say here? Of course, we also need a picture to adjust the picture. There you go. So that's actually pretty good; we might be able to zoom in a little further by turning this down a bit, and push this up a bit again. So. Auto Queue off. And now let's see what the whole thing does with our picture. Yes, and now the tweaking of the values begins. What has worked best for me so far is turning the Plus Face model down a bit, let's take 0.7, and setting the FaceID weight type to channel penalty. That has given me the best results so far, although, as always, it depends on the reference image you feed in at the front; you probably always have to adjust the weights and the like a bit. But you can see it works pretty well.

And now we just go back here, pull it over a bit, copy the sampler once with Ctrl+C and Ctrl+Shift+V so that the connections stay. But here we put a hi-res fix in between for the sake of convenience, namely simply the one from the TinyTerra nodes. We set it to width and height here, which is actually quite practical because we're working in a square format anyway. The crop option was wrong, quickly fixed. So now we need a VAE Decode here again, we push the latent in, and from the front we need our VAE. You can see there are more and more VAE connections here; that's why it's very practical to have the reroute at the front. So, a simple hi-res fix upscale; the denoise, let's set it to 0.4, and we can probably also turn down the steps a bit here. A quick look, I forgot the VAE, very nice. What happened? I just have to switch it around, it didn't have an output. And now we have a pretty good picture of Meryl Streep in a space suit. That's an amazing technique. We can now simply go ahead and choose another actor at the front. For example, I'll take, where is he, where is he, where is he? Johnny Depp. Of course we have to adjust our prompt up here. Man, how old is Johnny Depp? I don't know, let's say 50. We just check whether the pictures still fit here. It may be that down here we could make it a little bigger. So I would now go there and probably make the picture a little bigger again for the CLIP Vision here at the front in the crop; CLIP Vision can handle a lot more of the surroundings. But we can also see here that we are getting pretty close to Johnny Depp. I'll enable Auto Queue and mute the sampler again, then we do that down here: zoom in a little further and push him up a bit. So, Auto Queue off again, sampler on again. And that works pretty well too. Of course we can also add something here that we are used to from the IP adapter; it all still works the same. It's also fun that we can go and say, for example, what's it called, Image Batch, Batch Image, Batch Images, that's what I was looking for. We can also go and batch this whole area here. For that I'll push the lower area out of the way for a moment; we don't need it right now.
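Before we try the batching, here is a rough sketch of the combined two-adapter chain we just built, with the weights that worked for me. The exact order of the model connections is how I read my own graph, so take it as a sketch; the important part is that both Apply IPAdapter nodes sit in series on the model path:

```
Load Checkpoint
  -> Apply IPAdapter #1  (FaceID Plus model, reference via Prepare Image For InsightFace,
                          weight type: channel penalty)
  -> Apply IPAdapter #2  (Plus Face SD1.5 model, reference via Prepare Image For Clip Vision,
                          weight 0.7, same CLIP Vision, no InsightFace needed)
  -> Lora Loader Model Only (FaceID Plus LoRA, strength 0.5)
  -> KSampler (30 steps, dpmpp_2m / karras, denoise 0.6)
  -> hi-res fix (TinyTerra) with a second sampler pass at denoise 0.4
  -> VAE Decode -> Save Image
```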
So, we take an image batch here, load Meryl Streep here for example, batch the whole thing and send it into the IP adapter. But we also go directly from here into the model. We can turn off everything down here. We just have to take a quick look to see whether Meryl Streep is captured well at this point; I think that works quite well. The prompt doesn't quite fit anymore, of course; we just say "a person in the space station", let it run to the end, and we get a mixture of Johnny Depp and Meryl Streep. So you can also batch like this. That only works since this morning, by the way; I think Matteo pushed the change this morning, and now it is possible. Yes, that's basically it. As I said at the beginning of the video, you can link all these IP adapter options together, just like we did here with two of them. That's why we can't cover and compare all the combinations in this video. I'll leave that to you: play around with it, be responsible with the faces you use, don't do anything unethical, remember the disclaimer, and let's see what the future brings. I hope the installation works out for you; if not, jump onto the Discord, by now I think enough people there know what can go wrong with the IP adapter. Otherwise, I just wish you a lot of fun experimenting with this, recreating it and playing around with it. Always be responsible when it comes to creating faces. Remember: do it with fictional characters if you want consistent people for stories, for books, for comics and so on; do it to put your own face on cool motifs, or, with the consent of your friends and family, to simply create beautiful pictures. But otherwise, always be responsible with this technique. I wish you a lot of fun. Until the next video, take care and bye.