ComfyUI Style Model: A Comprehensive Step-by-Step Guide, From Installation to Utilization of T2I Models

Captions
Hello friends. This is the second time I am recording this video; the previous take had an audio problem. Now I want to explain how to use the style adapter in ComfyUI.

To use the style adapter, you have to download some models and place them in specific folders. Go to the GitHub page for the T2I-Adapter that I have linked, click the download link on that page, and it will take you to Hugging Face. On the Hugging Face page, go to "Files and versions", open the models folder, and download all the files in it. After downloading, place only the t2iadapter_style and coadapter-style files in the style_models folder of your ComfyUI models directory; this is the folder for your style models. Then place all the other files in the controlnet folder. Besides these models, you also have to download the CLIP Vision model from the link in the description. After downloading it, open the clip_vision folder and place the pytorch_model.bin file there. After that, you can start.

Open ComfyUI. You can see this workflow on my computer; I will explain everything from zero, but this is a sample first. This is the style image, this is the depth map, and this one is the sketch. The workflow is trying to make a character dressed in this pattern. Now it is processing, and afterwards we can see the result. I have a batch of eight in my latent, so it can take about one or two minutes. Okay, it is ready, and you can see the result here. It generated all these images: it extracted the character from the depth map and the sketch, and it generated these characters with special clothing, and the clothing comes from the pattern that we defined. If you look at my prompts, you can see that I did not write any prompt. Why? Because we have CLIP Vision.

Let me first explain the data flow. CLIP Vision analyzes the style image and sends the result to the CLIPVisionEncode node. CLIPVisionEncode sends its output to the Apply Style Model node, which uses the style adapter. With this information the system understands that this is a style with a description, and that we want to use this style in our image. And where does our image come from? It comes from the two ControlNets: one for the sketch and one for the depth map. So the model understands that this is a human and this is the pattern, and because it is a fabric pattern, it can be used as a covering, for clothing. That is the whole flow: CLIP Vision analyzes the input, the style adapter transfers it to the model, and with the two ControlNets we tell Stable Diffusion that we have a human with these dimensions. On the T2I-Adapter website you can see exactly this sample, and they also did a test with something else: they used a sketch of a motorcycle and only one style image, and it tried to generate a motorcycle in that style.
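To recap the file placement, here is a minimal sketch that checks whether everything landed in the right folders. It assumes a default ComfyUI install in the current directory; the filenames are the ones used in the TencentARC Hugging Face repos and may differ slightly between versions:

```python
from pathlib import Path

# Assumed default ComfyUI install location; adjust if yours lives elsewhere.
MODELS = Path("ComfyUI/models")

# Expected destinations for the files described above (filenames assumed
# from the TencentARC T2I-Adapter / CoAdapter Hugging Face repos).
expected = {
    MODELS / "style_models": ["t2iadapter_style_sd14v1.pth",
                              "coadapter-style-sd15v1.pth"],
    MODELS / "controlnet":   ["coadapter-sketch-sd15v1.pth",
                              "coadapter-depth-sd15v1.pth"],  # plus the other adapter files
    MODELS / "clip_vision":  ["pytorch_model.bin"],
}

for folder, files in expected.items():
    for name in files:
        path = folder / name
        print(f"{'OK     ' if path.exists() else 'MISSING'} {path}")
```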
Let me do a test. But before that, I want to clear my screen and start from zero. Let me open a ComfyUI tab and load the default workflow. I am loading the default workflow and deleting everything in the prompts. I am selecting these nodes; to select nodes you press Ctrl and drag, and you hold Shift to move them. I delete these nodes and set the resolution to 768; excuse me, let me adjust my resolution.

Now I want to load the ControlNet for defining the character. How can I load the ControlNet? Double-click on the canvas and type "controlnet"; you can see the ControlNet loader here. From its menu, first select the sketch model. Click on the node's output, release the mouse, and load an Apply ControlNet node, then load the image, the image that we need. It also needs a conditioning input, and that conditioning comes from the CLIPTextEncode node. After that we need to add another ControlNet, so copy this one, change it this time to the CoAdapter depth model, and connect conditioning to conditioning. When you want to use the style adapter, it is very important to also use the CoAdapters for analyzing the sketch or the depth map. You can use other ControlNets from this list, but it will be much harder for the software to analyze the data, because coadapter-sketch, coadapter-depth, and coadapter-style are made to work together. So let me do that. It needs an image, so let me load an image for the depth map. Okay, here I have the depth map connected.

Now I want to load the style model. I double-click and type "style": Apply Style Model. It has one conditioning input, so the conditioning from the ControlNet chain goes in here, and its output goes to the positive input of the sampler. It has two more inputs, a style model and a CLIP Vision output. Release the mouse and load the style model; you can see two style models here, and I prefer to use the CoAdapter style. For the CLIP Vision output input you have to add another node, CLIPVisionEncode. For CLIPVisionEncode you need to add a Load CLIP Vision node and load the PyTorch model, and it also needs an image, so add a Load Image node here and load your pattern, your style, into CLIPVisionEncode.

Everything is ready, so press Queue Prompt. It is loading the checkpoint; actually, I forgot to change my checkpoint, and I prefer Realistic Vision. After that it loads CLIP Vision, analyzes the image, and goes to the sampler. Here you can see it generates a human with some clothing, and the pattern on the clothing is very similar to the style. Now I want to change my checkpoint to Realistic Vision; I am using Realistic Vision V4.0 with VAE. For the sampler I prefer dpmpp_2m_sde_gpu with the karras scheduler. Press Queue Prompt; the model is loaded and it generates the image. Okay, here is the result; it is working very well. Let me increase the batch size. Yes, here you can see that it used that pattern on the human body. I did not use any prompt; everything is empty, and all the information comes from this image. For example, if I change the pattern to something like this, I am not sure what will happen, but I want to test it. This time it generated a human with something like that pattern, and everything is very symmetrical, very nice, and very artistic.
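If you prefer to drive ComfyUI from a script, the finished graph can be expressed in the API's "prompt" JSON format and queued over HTTP. This is a minimal sketch, not the exact workflow file from the video: the checkpoint, adapter, and image filenames are assumptions you would replace with your own, and it assumes ComfyUI is running locally on the default port 8188.

```python
import json
import urllib.request

# Sketch of the node graph built in the video, in ComfyUI's API prompt format.
# Each entry is a node; list values like ["6", 0] wire in another node's output.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "realisticVisionV40_v40VAE.safetensors"}},  # assumed name
    "2": {"class_type": "CLIPTextEncode",  # positive prompt, intentionally empty
          "inputs": {"text": "", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt, intentionally empty
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "coadapter-sketch-sd15v1.pth"}},
    "5": {"class_type": "LoadImage", "inputs": {"image": "sketch.png"}},
    "6": {"class_type": "ControlNetApply",  # sketch CoAdapter
          "inputs": {"conditioning": ["2", 0], "control_net": ["4", 0],
                     "image": ["5", 0], "strength": 1.0}},
    "7": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "coadapter-depth-sd15v1.pth"}},
    "8": {"class_type": "LoadImage", "inputs": {"image": "depth.png"}},
    "9": {"class_type": "ControlNetApply",  # depth CoAdapter, chained after sketch
          "inputs": {"conditioning": ["6", 0], "control_net": ["7", 0],
                     "image": ["8", 0], "strength": 1.0}},
    "10": {"class_type": "CLIPVisionLoader",
           "inputs": {"clip_name": "pytorch_model.bin"}},
    "11": {"class_type": "LoadImage", "inputs": {"image": "pattern.png"}},
    "12": {"class_type": "CLIPVisionEncode",  # analyzes the style image
           "inputs": {"clip_vision": ["10", 0], "image": ["11", 0]}},
    "13": {"class_type": "StyleModelLoader",
           "inputs": {"style_model_name": "coadapter-style-sd15v1.pth"}},
    "14": {"class_type": "StyleModelApply",  # injects the style into the conditioning
           "inputs": {"conditioning": ["9", 0], "style_model": ["13", 0],
                      "clip_vision_output": ["12", 0]}},
    "15": {"class_type": "EmptyLatentImage",
           "inputs": {"width": 768, "height": 768, "batch_size": 8}},
    "16": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "seed": 0, "steps": 20, "cfg": 7.0,
                      "sampler_name": "dpmpp_2m_sde_gpu", "scheduler": "karras",
                      "positive": ["14", 0], "negative": ["3", 0],
                      "latent_image": ["15", 0], "denoise": 1.0}},
    "17": {"class_type": "VAEDecode",
           "inputs": {"samples": ["16", 0], "vae": ["1", 2]}},
    "18": {"class_type": "SaveImage",
           "inputs": {"images": ["17", 0], "filename_prefix": "style_test"}},
}

# Queue the graph on a locally running ComfyUI instance.
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": prompt}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```

The LoadImage nodes expect the referenced files to already exist in ComfyUI's input folder.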
So maybe you can change that pattern for this style and get a different result. After doing these tests on the human, let me do one more test in this video. For example, let me try to use this pattern as the style. You can see that it understood that the background is something like blue and that the clothing has many small patterns; it is very nice.

This time I want to test on the motorcycle. First I have to change my resolution. For the motorcycle I do not have a depth map, so I can delete the ControlNet for the depth map, or I can change its strength to zero. Let me load the sketch of the motorcycle, and let me close this group so the screen is not too busy for you; maybe it is better if I just delete it. So now I only have one ControlNet, for the CoAdapter sketch, plus the style model. Press Queue Prompt; again I did not use any prompt. From the motorcycle I can only see the two wheels; the other parts are not visible here. So now I increase the strength and queue another run. This time you can see some parts of the motorcycle that are very similar; yes, you can see them, and it tried to make a cake-like motorcycle. If I increase the strength, it will be even more similar to the motorcycle.

So it was working well. Let me do one test with a salad image and see what happens. Great, we have the motorcycle here, and it is made of some foods. If I change the strength, I think we can get a different result. Yes, this time I think it is better and more artistic. You can play with these parameters and then decide which one is better for you. This time, instead of using the salad, I want to use this pattern; let's queue the prompt. Oh, it is something new and completely different; by the way, I find it interesting. You can run any test you like and decide what works for you.

Do not forget to use the CoAdapters together with the style model: if you want to use the style and get the best result, I suggest using the CoAdapters. For example, here I used the CoAdapter sketch. I have many other adapters, but working with the CoAdapter sketch is easier for the ControlNet and the T2I adapter, and they match much better. In the next video I want to show you some tricks, because I made some artwork with text effects and I will share them with you. See you in the next video. Bye.
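Scripted against the prompt dict from the earlier sketch, the motorcycle experiments boil down to two small edits: dropping (or zeroing) the depth branch, and sweeping the sketch strength. This is a hypothetical continuation of that sketch, not code from the video:

```python
# Option 1: keep the depth branch in the graph but neutralize it.
prompt["9"]["inputs"]["strength"] = 0.0

# Option 2: remove the depth branch entirely and wire the sketch
# conditioning straight into the Apply Style Model node.
for node_id in ("7", "8", "9"):
    prompt.pop(node_id, None)
prompt["14"]["inputs"]["conditioning"] = ["6", 0]

# Point the sketch branch at the motorcycle drawing and sweep the strength
# to see how closely the output follows the sketch.
prompt["5"]["inputs"]["image"] = "motorcycle_sketch.png"  # assumed filename
for strength in (0.6, 0.8, 1.0, 1.2):
    prompt["6"]["inputs"]["strength"] = strength
    # ...queue the prompt as shown earlier and compare the results.
```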
Info
Channel: Arch_AI_3D
Views: 14,612
Keywords: AI, stable diffusion, AI architecture, ComfyUI, ControlNet, architecture, artificialintelligence, Tutorial, ai robots, artificial intelligence, ai video, t2i, t2i adapter, style, style models, style transfer, tutorial
Id: tVGqI2H7kOw
Length: 20min 4sec (1204 seconds)
Published: Mon Jul 24 2023