Only 5 photos needed, and you can generate so many different AI photos. There are lots of templates for you to play with, and you can also change the prompt yourself to get your own unique, beautiful photos. It's free, needs no installation, has no privacy issues, and comes with a repair-and-modification workflow. Are you ready to create your super beautiful photos?

Hello everyone, welcome to "An IT-a", I'm Anita. The first episode of "AI my face" was very popular, thanks a lot! I also found that a library in the Train LoRA project had broken; I've fixed it already, so friends who couldn't use it can now clone it and try again.

The last tutorial was based on the Train LoRA method for generating your AI photos. This time I'll teach you another method: DeepFake. There are many ways to do DeepFake; what we'll use this time is IP-Adapter. IP-Adapter has Face ID and Face ID Plus, but I wasn't very satisfied with their results, so I kept teaching everyone the Train LoRA method. Now, though, there is a new model called Face ID Portrait, and its results are very good, so I made a Colab version for everyone to play with. However, images generated by Stable Diffusion will still have weird hands and fingers, so I prepared a second Colab project that can do partial repairs and even modify the picture. Are you ready? We'll start right now.

First, let's talk about what IP-Adapter is, and what's so special about the Face ID Portrait model. IP-Adapter is an add-on model for Stable Diffusion. It comes in different variants, and the most important ones are the Face ID series. The Face ID series is basically face-swapping, in other words DeepFake. It has gone through several versions, each basically tweaked to look better. The Face ID Portrait version, however, is a big change: previous versions all used one photo from a single angle, so the results were unsatisfying, while Face ID Portrait takes five photos, letting it instantly understand what you look like from different angles. Besides, this model was made specifically for portraits, so its results are much better than the old versions.

You may ask: do I really have to use five photos? Let's look at the results if we only give it four photos... three... two... one... Wow, who are you? The conclusion is: you must give it five. But while the number is five, do they have to be five different photos? Let's try repeating photos and see what happens. I think the results are actually OK, but if you compare them side by side, five different photos are clearly better. Still, using the same photo five times gives an acceptable result. So if you can produce five different photos, that's of course better; if you really can't, duplicates are still OK. Also, if a photo is too blurry, too bright, or too dark, I suggest not using it; I'd rather repeat a clearer one, because an unclear photo greatly affects the generated result.

Now we officially start. First, open the first link in the description box. Same as before, if you haven't logged in to your Google Account yet, log in first, and you'll see my Colab project. Press the down arrow in the upper right corner, press Change Runtime Type, make sure Python 3 and T4 GPU are selected, then press Save. Then press Run on Step 1; you can press Run on Step 2 right away, and Step 2 will run automatically afterwards. Step 3 is choosing the model. I have prepared 8 for you to choose from, and I'll show you the differences later.
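A side note for the curious: the Face ID models never look at your photos directly. Each photo is first turned into a compact face-identity embedding using the insightface library, and Face ID Portrait conditions on several of these embeddings at once, which is why it wants five angles. The Colab does all of this for you, but here is a minimal sketch of that extraction step, assuming the hypothetical filenames photo1.jpg to photo5.jpg:

```python
# pip install insightface onnxruntime opencv-python
import cv2
import numpy as np
from insightface.app import FaceAnalysis

# buffalo_l is the standard detection + recognition model pack used with the FaceID adapters.
app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

# Placeholder filenames: your five photos, ideally taken from different angles.
photo_files = ["photo1.jpg", "photo2.jpg", "photo3.jpg", "photo4.jpg", "photo5.jpg"]

embeds = []
for path in photo_files:
    img = cv2.imread(path)
    faces = app.get(img)                      # detect faces in the photo
    if not faces:
        raise ValueError(f"No face found in {path}, use a clearer photo")
    embeds.append(faces[0].normed_embedding)  # 512-d identity embedding of the first face

face_embeds = np.stack(embeds)                # shape (5, 512): what Face ID Portrait conditions on
print(face_embeds.shape)
```

This is also why blurry or badly lit photos hurt the result: if the face detector can't find a clean face, the identity embedding is poor and the generated picture stops looking like you.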
Let's choose Beautiful Realistic Asian. There is also a Custom Model option; if you haven't seen my earlier videos and don't know what that is, I'll explain it later. Select the model and press Run.

Next step: press this button to open the file system and drag in your five photos. Wait for the upload to finish, then fill in the filenames of your five photos in these five boxes. For the Prompt and Negative Prompt, you can use the templates from my last video, or if you already know how to write prompts, write your own; add a Negative Prompt if necessary. Then press Run. The first time you use a model you have to wait for it to download, and after that it's done. If you're not satisfied with the result, run Step 4 again, and keep running it until you are satisfied. Once you're happy, run Step 5 to enlarge the image, then right-click to save it.

Let's look at the results from different models. Beautiful Realistic Asian: this has always been my favorite model, and the results are very beautiful. Kawaii Realistic Asian Mix: I've recently started to like this one; if you feel BRA is a bit too glossy, try this Japanese-style beauty. Moving on to 3D styles, RCNZ Cartoon 3D: um? I think it looks like me, as if I'd walked into the world of Pixar. 3D Animation Diffusion: this one doesn't look as much like me as the previous one, but the style is more 3D. 3D Mixed Characters: this is my first time trying this model, and I think it's very beautiful. Then the anime styles: as expected they don't look all that much like me, but I think some people will want them, so I picked a few whose results I like. My samples are grouped by style, but you can actually mix and match prompts however you like; the same prompt used with different models gives different results.

If there's a favorite model you want to use, choose Custom Model, then fill in the Custom Model section below. First, on CivitAI, choose an SD 1.5 checkpoint, then right-click Download and copy the download link. If there is more than one file, make sure you pick SafeTensor and FP16. If there is both a Full model and a Pruned model, I would choose the Pruned one; there is no difference in the result, but the file size is smaller, so you don't wait as long. Once you've copied the link, go back to the Colab project and paste it into Custom Model Repo. Next, if the model's name doesn't contain words like NoVAE or BakedVAE, choose the VAE based on whether the model is anime or non-anime; remember, 3D counts as non-anime. Then, if the model's page suggests a sampler, clip skip, number of steps, embeddings and so on, set them up as it says; if not, just leave them alone, since my presets are usually fine. The steps after pressing Run are no different from using one of my preset models.

Now, what if you generate a picture you're mostly satisfied with, and only one or two parts need to be regenerated? I've prepared a second Colab project for that. Don't enlarge this photo yet; save it first for later use. Open the second link in the description box, which is our Inpaint project. First, prepare a masked photo. What is a masked photo?
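For anyone wondering what the Custom Model fields actually do, it roughly comes down to downloading the single .safetensors checkpoint from that CivitAI link, loading it, and swapping in a VAE when one isn't baked in. Here is a rough sketch of that idea using the diffusers library; the download URL, filenames, and prompts are placeholders, and my notebook's real code may differ:

```python
# pip install diffusers transformers accelerate safetensors
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# Placeholder CivitAI link: copy the real one from the model page's Download button.
# !wget -O custom_model.safetensors "https://civitai.com/api/download/models/<VERSION_ID>"

# Load the single-file SD 1.5 checkpoint you downloaded.
pipe = StableDiffusionPipeline.from_single_file(
    "custom_model.safetensors", torch_dtype=torch.float16
).to("cuda")

# If the checkpoint has no baked-in VAE, swap one in; this is a common choice for
# non-anime / realistic models (anime models usually recommend their own VAE).
pipe.vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a portrait photo of a woman, best quality",  # placeholder prompt
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=30,
).images[0]
image.save("result.png")
```

Pruned versus Full makes no difference here because the pruned file keeps the same weights used for inference; it just drops training-only data, which is why I suggest the smaller download.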
For example, if what you want to repair is your hands, then you paint over your hands to let Stable Diffusion know you only want to regenerate that part. If you have Photoshop, just use Photoshop; if not, you can use Photopea, a free website. Go to photopea.com and drag in your photo, then press this button to open a new layer, and make sure the new layer is selected. Then choose the pen, making sure you use the Brush Tool. Of these two colours, pick white for the top one, the purest white, that is ffffff. Then paint over wherever you want to regenerate; you can set the brush size in the upper left corner. Only mask one kind of thing at a time, such as hands or eyes. You can mask both hands or both eyes, but it must be the same thing, so it's easier to write your prompt. If you really need to regenerate both your eyes and your hands, you'll have to do it twice. Then select the Paint Bucket Tool, change the colour to black, the 000000 black, and fill the remaining area with it. Then press File, Export As, JPG, give it a file name and press Save.

Then return to the Colab project. Same as before: click the down arrow in the upper right corner, press Change Runtime Type, make sure Python 3 and T4 GPU are selected, then press Save. Press Run on Step 1 and Step 2. Then open the file system, drag in your original image and the masked image, and fill in the filenames of the two images in Step 4. Choose whether your photo is anime or non-anime; once again, 3D counts as non-anime. Then write the prompt: if it's a hand, just write "hand" or "hands". A Negative Prompt isn't necessary, but if you know what to write, go ahead. For embeddings, I recommend ticking Easy Negative, and if it's a hand, also tick Bad Hands. Strength controls how far the result varies from the original: the bigger the value, the bigger the difference. If you only need to fine-tune a little, choose a smaller value; if, like mine, the whole hand is flipped the wrong way, don't set it too small. But if it's set too large, the result goes out of control, so in my case I use around 0.7 to 0.8. Then press Run. (If you're curious what this step roughly does under the hood, there's a short code sketch at the very end.)

Usually, when something needs to be regenerated, it isn't easy to get a good result right away. Not good enough? Run this step again, and keep running it until you're satisfied, then enlarge at Step 5 below. The right Strength can also differ between models: this picture was generated with BRA, and I think 0.6 works better here. So for Strength, you have to experiment a few times to find the right value.

This Colab project isn't only for repairing defects; it can also be used to modify pictures. For example, I want to add a top hat. First prepare a mask over the head, then change the prompt to "hat"; for embeddings, Easy Negative alone is enough. Let's see the result.

If your results are not satisfying, first take a look at the photos you are using. Are they unclear? Is the head too small? Are they too bright or too dark? I recommend using photos with a clear face. There's no need to pick the ones where you look your best, because most of the models will beautify you anyway. There's also no need to use full-body photos, because the IP-Adapter Face models only use your face and ignore the body.

OK, today really counts as a big gift: there's a DeepFake project and a modification project. Hurry up and give me a like first! Besides AI photos, are you interested in any other AI tutorials? Tell me in the comments below. Haven't subscribed yet? Subscribe now!
Otherwise you'll miss the next wonderful tutorial. If you're interested in generating AI photos to a higher standard, feel the results this time aren't satisfying enough, and haven't watched my Train LoRA tutorial yet, then click here now to watch it. Then we'll see you next time. Bye-bye!
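For readers following along in text rather than in the notebook: here is roughly what the Inpaint project's Step 4 amounts to. It takes your original image, the black-and-white mask, a short prompt describing only the masked thing, and a Strength value, and runs an inpainting pass. This is only a hedged sketch using diffusers' standard inpainting pipeline with placeholder filenames and embedding paths; the notebook's actual pipeline and model loading may differ:

```python
# pip install diffusers transformers accelerate safetensors
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Placeholder filenames: your generated picture and the black/white mask made in Photopea.
init_image = Image.open("original.png").convert("RGB")
mask_image = Image.open("mask.jpg").convert("L")  # white (ffffff) = regenerate, black (000000) = keep

# "custom_model.safetensors" is a placeholder for whichever SD 1.5 checkpoint you chose.
# A regular (non-inpaint) checkpoint can usually be loaded here via the single-file loader.
pipe = StableDiffusionInpaintPipeline.from_single_file(
    "custom_model.safetensors", torch_dtype=torch.float16
).to("cuda")

# Optional: textual-inversion embeddings such as Easy Negative / Bad Hands (paths are placeholders).
pipe.load_textual_inversion("embeddings/EasyNegative.safetensors", token="easynegative")
pipe.load_textual_inversion("embeddings/badhandv4.pt", token="badhandv4")

result = pipe(
    prompt="hands",                              # describe only the masked thing: "hands", "hat", ...
    negative_prompt="easynegative, badhandv4",
    image=init_image,
    mask_image=mask_image,
    strength=0.75,                               # bigger = bigger change; around 0.6-0.8 worked for me
    num_inference_steps=30,
).images[0]
result.save("repaired.png")
```

Because only the white area of the mask is regenerated, you can rerun this with different seeds or Strength values until the hands (or the hat) look right, without touching the rest of the picture.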