Using SD-WebUI (A1111)-style prompts and LoRA in ComfyUI! (Follow along step by step, comfyui)

Video Statistics and Information

Captions
Hello, I'm Neural Ninja. In this video, I'll show you how to use prompts in the A1111 format, along with LoRA, in ComfyUI. The problem with moving prompts between A1111 and ComfyUI directly is that their prompt weighting methods differ, which can make the images look broken. We'll correct that, and also look at how to use LoRA and embeddings directly in prompts, just as in A1111. If you're using Colab, please run the installation of the additional custom nodes.

We'll start with the basic workflow. First, I'll set up the VAE Loader. I've prepared a prompt that works well in A1111. I'll set the size, fix the seed, and set up the sampler. Now let's generate and check. As you can see, the prompt weighting is applied too strongly, so the image looks broken. I'll bring in a node that supports A1111-style weighting and change the weighting method. Apart from how the weighting is applied, it works the same as the original node, so let's replace it directly. I'll replace the negative prompt in the same way. Now let's generate and check. It came out well.

This time, I'll also change the KSampler. A1111 uses the GPU for noise generation, and using the Inspire KSampler matches this behavior. I'll set it up the same as before and generate. It came out well. Even if the result isn't completely identical, it's very close.

Now I'll set things up to use LoRA in the prompts. With the Power Prompt node, you can use LoRA directly inside a prompt. I'll transfer the prompt. Since we still need the existing Advanced CLIP Text Encode, I'll only modify its input. I'll connect the model and CLIP, so they now pass through Power Prompt; any LoRA written in the prompt is loaded into the model and passed on. I'll connect the model and reconnect the CLIP. Selecting a LoRA like this adds it to the prompt, and you can also type it in directly. Now I'll generate. It's applied well. I'll add another LoRA; in the console window you can see the LoRA being loaded like this. Now I'll generate with a different prompt. A LoRA that doesn't exist is reported as skipped, like this.

This time, I'll apply an embedding. In ComfyUI, you can't use an embedding directly like this, so I'll modify the workflow so it can be. Please add the Parse A1111 Embedding node. I'll add a text node to carry the negative prompt, then route it through the newly added Parse A1111 Embedding node. Passing through it automatically attaches the embedding prefix. I'll reconnect the CLIP and also add a Show Text node for verification. Now I'll generate and check. You can see that "embedding" is automatically added in front of the embedding name.

I'll quickly apply upscaling and test LoRA and embedding together. I'll generate the upscaled image, then apply another prompt with LoRA. It's applied well. I'll quickly add FaceDetailer, connect the nodes, and generate. It came out well. I'll apply another prompt. That's all for trying out the A1111 prompt format in ComfyUI. Since prompts are often shared in the A1111 format, knowing this method can be helpful. I hope the video was useful. I'll see you next time with another good video. Thank you.
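The weighting mismatch described at the start of the transcript comes from the two programs interpreting (word:1.4)-style emphasis differently. The snippet below is only a rough, simplified sketch under assumptions (random stand-in tensors, an all-zero "empty prompt" encoding); it is not the actual code of A1111 or of the Advanced CLIP Text Encode node, just an illustration of why the same weighted prompt can land much harder in one program than the other.

```python
import torch

# Stand-in for CLIP per-token encodings of a 77-token prompt (hypothetical values).
tokens = torch.randn(1, 77, 768)
weights = torch.ones(1, 77, 1)
weights[:, 5] = 1.4                      # one emphasized token, e.g. "(word:1.4)"

# A1111-style ("original") idea: scale the encodings by the weights, then rescale
# so the overall mean matches the unweighted encoding.
weighted = tokens * weights
a1111_cond = weighted * (tokens.mean() / weighted.mean())

# ComfyUI-default-style idea: move each token away from the empty-prompt encoding
# in proportion to its weight (here the empty encoding is faked as zeros).
empty = torch.zeros_like(tokens)
comfy_cond = empty + (tokens - empty) * weights

# The two conditionings differ, which is why the same prompt renders differently.
print((a1111_cond - comfy_cond).abs().mean())
```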
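The Inspire KSampler step is about where the initial latent noise is drawn. A minimal check, assuming PyTorch and a CUDA device are available, shows that the CPU and GPU random generators do not produce the same noise for the same seed, which is why matching A1111 requires GPU-side noise:

```python
import torch

seed = 12345
shape = (1, 4, 64, 64)  # latent shape for a 512x512 image

# CPU noise, as stock ComfyUI generates it.
cpu_gen = torch.Generator(device="cpu").manual_seed(seed)
cpu_noise = torch.randn(shape, generator=cpu_gen, device="cpu")

# GPU noise, as A1111 generates it.
if torch.cuda.is_available():
    gpu_gen = torch.Generator(device="cuda").manual_seed(seed)
    gpu_noise = torch.randn(shape, generator=gpu_gen, device="cuda")
    # Different RNG streams: the tensors differ even with an identical seed.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # expected: False
```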
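For the LoRA-in-prompt step, A1111 prompts embed LoRAs as "<lora:name:weight>" tags. The sketch below is a hypothetical illustration (not the Power Prompt node's actual code) of the general idea: pull the tags out so the LoRAs can be loaded onto the model and CLIP, and strip them from the text before it is encoded.

```python
import re

# Matches "<lora:name>" or "<lora:name:strength>" tags in an A1111-style prompt.
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([0-9.]+))?>")

def split_lora_tags(prompt: str):
    """Return (clean_prompt, [(lora_name, strength), ...])."""
    loras = []
    for name, strength in LORA_TAG.findall(prompt):
        loras.append((name, float(strength) if strength else 1.0))
    clean = LORA_TAG.sub("", prompt).strip()
    return clean, loras

clean, loras = split_lora_tags("1girl, masterpiece <lora:detail_tweaker:0.8>")
print(clean)   # "1girl, masterpiece"
print(loras)   # [("detail_tweaker", 0.8)]
```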
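For the embedding step, the difference is syntax: A1111 accepts a bare embedding filename in the prompt, while ComfyUI expects an "embedding:name" prefix, which is what the video shows being added automatically. A small sketch of that rewrite, assuming a hypothetical hard-coded list of embedding names (in ComfyUI these would come from the embeddings folder):

```python
import re

# Hypothetical list of installed textual-inversion embeddings.
KNOWN_EMBEDDINGS = ["easynegative", "bad-hands-5"]

def add_embedding_prefix(prompt: str) -> str:
    """Rewrite bare embedding names to ComfyUI's 'embedding:name' syntax."""
    for name in KNOWN_EMBEDDINGS:
        # Word-boundary match; skip names that are already prefixed.
        pattern = re.compile(rf"(?<!embedding:)\b{re.escape(name)}\b", re.IGNORECASE)
        prompt = pattern.sub(f"embedding:{name}", prompt)
    return prompt

print(add_embedding_prefix("easynegative, worst quality, bad-hands-5"))
# -> "embedding:easynegative, worst quality, embedding:bad-hands-5"
```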
Info
Channel: 뉴럴닌자 - AI공부
Views: 1,813
Id: cbG9hw1gd3g
Length: 11min 37sec (697 seconds)
Published: Wed Feb 14 2024