Unlocking the Potential of Large Language Models with ComfyUI | Advanced Tutorial

Video Statistics and Information

Captions
Hello everyone, and welcome back to DreamingAI. My name is Noob, and today we're going to learn how to use large language models with ComfyUI. Yes, you heard that right: thanks to its near-infinite expandability through plugins, ComfyUI also lets us load models that have nothing to do with Stable Diffusion, enabling us to combine multiple things simultaneously. The custom nodes I've used are ComfyUI N-Nodes, developed by me, and ComfyUI-Custom-Scripts, one of my favorite suites of custom nodes, from which I've drawn a lot of inspiration. The links are in the description.

Let's dive right into the installation of these two libraries. First, head over to the ComfyUI-Custom-Scripts GitHub repository, copy the repository link, and clone it with the git clone command inside the custom_nodes directory of the ComfyUI folder. Next, go to the ComfyUI N-Nodes GitHub repository and do the same.

Now, for the GPT node you need to run the install_dependency.bat file, and since llama-cpp-python needs to be compiled from source to enable GPU usage, you'll need CUDA and a compiler installed. For the CUDA installation, simply follow the link in the description, which will take you to the NVIDIA website, where you can choose the CUDA version that suits you; for instance, I've installed version 11.6 for Windows. As for the compiler, there are many options: personally, since I already had Visual Studio 2022 installed with the C++ compiler active, I didn't need to do much. As explained in the official llama.cpp documentation, you can also use a compiler like w64devkit; its link is also in the description. Alternatively, if you're not interested in using the GPU, you can simply run this command from the python_embeded folder to install the latest CPU-only version of llama-cpp-python, at the cost of slower prompt generation. Be aware, however, that I haven't personally tested this version of llama-cpp-python, so I can't guarantee there won't be any issues. If you
already have models in quantized format, you can modify the extra_model_paths.yaml file, adding a section that points to the main folder containing your various models. If you don't have models yet, go ahead and create a folder called GPTcheckpoints inside the models folder, where you can put them after downloading. A well-known place to download this type of model is TheBloke's repository on Hugging Face.

With that done, let's launch ComfyUI; we're ready to start our first workflow with a large language model. For this example we'll create a node that generates a horror story, and another node that extracts tags from the story, which we'll then use to generate an image related to it. Our main prompt will be: "You are a bot that likes to generate horror stories. Please generate me an amazing one."

Let's proceed to create the GPT Loader Simple node that will load our model: select the checkpoint and the number of layers to be loaded onto the GPU (more layers use more VRAM but also result in faster processing), then adjust the number of threads and the maximum context used by our model. Until recently the standard was 2K; now models are coming out with context sizes like 16K or 32K.

Now let's create the second node, called GPT Sampler, where we'll have various parameters to experiment with our models; the default settings I've provided are the ones I usually use. The prefix and suffix are also crucial: the defaults are generally suitable for most models, but I recommend checking the specifications of the model you're using for more details. Connect the loader's model output to the sampler's model input, and the string output to the model_path input. I suggest setting cache to NO if you intend to generate a different output each time.

For convenience, we'll use the Show Text node to display the generated output, then connect it to a String Function node that will combine the story generated by our model with our next instruction. Since we want to extract tags from it, turn off tidy_tags, and in
this case the prompt that we'll put in the first part will be: "You are a bot that likes to extract tags from a given text. From this text, extract some crucial tags, comma separated." Now let's create another GPT Sampler node, connecting the same model but with the prompt we just generated. Great!

Finally, let's move on to the last part, where we'll use another String Function node to combine the newly generated tags with some fixed tags that we want: this will be our positive prompt. Let's also create a negative prompt for the occasion, and set up the usual workflow to quickly generate an image. Done! Now let's try running our little project. Amazing!

And that wraps up today's session. I hope this tutorial helped you learn how to start using large language models in ComfyUI. If it has, would you consider subscribing and leaving a like? It would be a great help to the channel, thank you. If you have any questions, please let me know in the comments below; I'll be happy to help you out as much as I can. And as always, be dreaming!
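For reference, the extra_model_paths.yaml addition mentioned in the tutorial might look like the sketch below. This is an illustrative assumption, not the exact file from the video: the section name and base path are placeholders you should adapt to wherever your quantized models actually live.

```yaml
# Hypothetical extra_model_paths.yaml fragment (example paths, not the video's).
# The top-level key names a search-path group; GPTcheckpoints maps the
# model type to the subfolder ComfyUI should scan for LLM checkpoints.
my_llm_models:
    base_path: D:/AI/models
    GPTcheckpoints: GPTcheckpoints
```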
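The two-stage flow described above (wrap an instruction in a prefix/suffix, feed the generated story into a tag-extraction prompt, then merge the resulting tags with fixed tags) can be sketched in plain Python, outside ComfyUI. This is a minimal illustration, not the nodes' actual implementation: the helper functions are hypothetical, and the Alpaca-style prefix/suffix is an assumed template, so check your model's card for the correct format, as the video advises.

```python
# Assumed Alpaca-style template; many quantized models expect a
# different wrapper, so treat these as placeholders.
PREFIX = "### Instruction: "
SUFFIX = "### Response: "

def build_prompt(instruction: str, prefix: str = PREFIX, suffix: str = SUFFIX) -> str:
    """Wrap an instruction the way the GPT Sampler's prefix/suffix do."""
    return f"{prefix}{instruction}\n{suffix}"

def tag_extraction_prompt(story: str) -> str:
    """Mimic the String Function step: splice the generated story into
    the tag-extraction instruction before sending it to the sampler."""
    instruction = (
        "You are a bot that likes to extract tags from a given text. "
        f"From this text, extract some crucial tags, comma separated: {story}"
    )
    return build_prompt(instruction)

def combine_tags(generated_tags: str, fixed_tags: str) -> str:
    """Join model-generated tags with fixed tags for the positive prompt."""
    parts = [t.strip() for t in f"{generated_tags},{fixed_tags}".split(",") if t.strip()]
    return ", ".join(parts)

positive = combine_tags("haunted house, fog, night", "masterpiece, best quality")
print(positive)  # haunted house, fog, night, masterpiece, best quality
```

In the actual workflow the two sampler nodes do the generation; here the point is only how the strings are assembled and recombined between them.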
Info
Channel: DreamingAI
Views: 4,135
Keywords: generate text, llama, gpt, chatgpt, ComfyUI, advanced, text generation, AI, stable diffusion, artificial intelligence, dreamingai, ai news, best free ai, best ai model, dreamingai tutorials
Id: asMgkwTDAQQ
Length: 7min 20sec (440 seconds)
Published: Fri Aug 25 2023