AutoGEN + MemGPT + Local LLM (Complete Tutorial) 😍

Video Statistics and Information

Captions
All right guys, the much-awaited video is here. I've worked something like four hours to get to this solution; at one point I thought of moving on to the next project, but hope kept me going. In this video I'm going to show you how to connect MemGPT, AutoGen, and a local large language model using RunPod, so your local LLM powers both AutoGen and MemGPT. Let's see how this is done: I have the code here, and I'll share it in the description or attach a link to the GitHub repo so you can run it. Thanks for tuning in, bye-bye, have a nice day. I'm just kidding: in this video we're going to go through every detail of how I got to the solution and the thought process behind it.

This has been a great problem. If you watch my videos, you'll see the first one, "AutoGen with Local LLMs: Get Rid of API Keys", which is an interesting one to watch, and then "MemGPT with Local LLMs" and "AutoGen with Local LLMs". In those videos I powered MemGPT and AutoGen separately with local LLMs, and I received quite a number of comments from people who wanted to see everything integrated together: local large language models, AutoGen, and MemGPT all working at once. AutoGen is a multi-agent framework with different agents, and if you look at the examples, we want something like this: the user proxy agent stays a normal AutoGen agent, while the assistant agent becomes a MemGPT agent. That would be interesting, and it's what people asked for, with every API call served by local models, for example dolphin-2.0-mistral. You can also use airoboros, but not all models are ready to be used with MemGPT yet; right now I think we only
have two models that work with MemGPT, dolphin 2.0 and 2.1, plus airoboros. You can check the documentation to confirm: go to the MemGPT GitHub and open the page about running MemGPT with local LLMs; that's where I went while looking for a solution. So we have airoboros, we have Zephyr, and we have dolphin as well. I'll try Zephyr later; in this video we'll look at dolphin 2.0, because comparing the two, I found dolphin 2.0 better than dolphin 2.1.

Having said that, and with the basic picture in place, let me summarize again: AutoGen has these different agents, and now we want to replace one of them with a MemGPT agent, which effectively has unbounded memory. For more detail on what MemGPT is and how it functions, and if you want to look at the paper, check out the videos I've published; there's an extensive one explaining MemGPT. Over the last few days I've been publishing videos on AutoGen and MemGPT; it's crazy technology, and today we've found a way to do the integration.

If you're a beginner and want everything from the start, I'll begin with the very basics. You need to install Python, and you need an editor; VS Code is my favorite, so download that. For Python, don't download the most recent version; go for something like Python 3.11. After that, create an account on RunPod. I have a link for this; please sign up through my link (I do get an affiliate commission, though). Create an account there and add some credits. You do need to put in credits, but if your system is
very strong, you don't need a cloud GPU at all; you can run everything on your own machine using the oobabooga text-generation-webui. Since my machine is a little weak, I've decided to run it on RunPod.

Now let's get started. The first thing to do is open a new folder in VS Code: File, Open Folder. Let me go to my projects; this is my 167th project, so let's call it "MemGPT AutoGen and LLM" and select that folder. Next, as always, we make a virtual environment: View, Command Palette, "Python: Create Environment", choose venv, and use Python 3.11. Once the environment is created, we can write our code. I'll click "New File" and name it app.py, then open a terminal with Terminal, New Terminal. With the environment selected, go to venv, Scripts, activate.bat and run it; after activation you can see the (venv) prefix. If you already know this, great; if not, don't worry. Then cd .. back to the project folder so we can reach app.py.

With the setup done, let me paste in the code, because I'm not going to write it live; it took me quite a few hours to reach this solution. You can see squiggly lines in the editor, which means we still need to install some packages. Starting from the bottom: we have openai, so run pip install openai and wait for the installation to complete. After that we need AutoGen, so clear the screen (cls) and run pip install pyautogen, which installs the pyautogen package.
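Collected in one place, the environment setup described above looks roughly like this (Windows commands, since the video uses activate.bat; package names as shown on screen):

```
python -m venv venv
venv\Scripts\activate.bat      # on Linux/macOS: source venv/bin/activate
pip install openai
pip install pyautogen
pip install pymemgpt
```

Note that pymemgpt is the package name for the MemGPT library on PyPI, while pyautogen is the package name for AutoGen.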
Where do these packages get installed? They land in venv, Lib, site-packages; you can see the autogen folder in there. With that done, clear the screen and install the MemGPT package: run pip install pymemgpt and press Enter, which downloads MemGPT as well. Once it finishes, clear the screen again (cls). Everything is installed, and all the squiggly lines are gone.

Now let's look at the code, and then at the API integration. First we import os and autogen, and then quite a few modules from the MemGPT library: memgpt.autogen.memgpt_agent as memgpt_autogen, plus the interface, agent, system, utils, presets, constants, personas, humans, and persistence manager modules; and then we import openai. Of course we're not actually going to use the OpenAI API; the import is just a decoy. The code will think it's talking to OpenAI, but we'll point it at a different endpoint, our local LLM endpoint. That's the trick we're playing with this code.

Just like in AutoGen, we define the configuration here: api_type is open_ai; api_base (we'll look at where this comes from shortly) is pulled from the RunPod instance hosting our local LLM; and api_key doesn't need a real value. Next there's a flag, use_memgpt, which we can set to True or False. When it's False, we get a normal AutoGen setup with a user proxy and an assistant agent (we'll look at that, don't worry); when it's True, MemGPT is used. Then I paste in the llm_config, which uses the configuration list.
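As a sketch, the configuration described above might look like the following. The RunPod URL is a placeholder for your own pod's address, and the key names follow the AutoGen convention of late 2023 (newer releases renamed some fields, for example seed became cache_seed):

```python
# Configuration sketch: one OpenAI-style entry that actually points at the
# local LLM endpoint hosted on RunPod. The api_key is a dummy value because
# the local endpoint does not check it.
config_list = [
    {
        "api_type": "open_ai",
        "api_base": "https://YOUR-POD-ID-5001.proxy.runpod.net/v1",  # local LLM endpoint
        "api_key": "NULL",  # decoy; the real OpenAI API is never called
    }
]

# False -> plain AutoGen assistant; True -> MemGPT-backed agent
USE_MEMGPT = True

# llm_config is what gets handed to the AutoGen agents
llm_config = {"config_list": config_list, "seed": 42}
print(llm_config["config_list"][0]["api_base"])
```

The same config_list is reused later for the MemGPT agent, so both frameworks end up talking to the same local endpoint.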
This llm_config is used by AutoGen. Next we need the configuration, the API base and key, for MemGPT itself; that's where the full configuration lives. We'll take a deep dive into it, but let's look at the rest of the code first and come back.

Just like in AutoGen, we define the user proxy: its name is "User_proxy", it's a human admin, and it only looks back at the last two messages, because if you pass in, say, five messages it takes a lot of tokens, and the dolphin model we're using has only a 2,000-token context, which makes a very large context difficult. human_input_mode is TERMINATE (we type "exit" to exit), and the default reply is along the lines of "You are going to figure it out on your own. Work by yourself; the user won't reply until you output TERMINATE to end the conversation." So the assistant agent is asked to emit TERMINATE, and the user proxy agent, which stands in for us human beings, acts on it. That part is standard.

Next comes the new piece: the interface, built with the AutoGenInterface module from the MemGPT library. This is what the MemGPT agent will talk through. We also instantiate the persistence manager, define the persona ("I am a 10x engineer, trained in Python") and the human ("I am a team manager at this company"), and then create the MemGPT agent itself, the main agent we'll use. For that we use the presets: the preset is the default preset, the model is gpt-4 (though we won't call the real GPT-4 API), persona is the persona, human is the human, interface is the interface, persistence manager is the persistence manager, and agent_config is the llm_config we placed above.

At this step, I hope you can follow: it's an if/else statement. If not use_memgpt, that is, if the flag is False, we use the normal AutoGen example, a plain assistant agent that's a 10x engineer trained in Python. When the flag is True, the MemGPT branch runs instead: we print "MemGPT agent at work", and the coder becomes a memgpt_autogen MemGPTAgent named "MemGPT_coder" wrapping the MemGPT agent. Then, just as in plain AutoGen, we initiate the chat by asking the coder to write a function to print the numbers 1 to 10.

I hope that's clear, but let me summarize: install the libraries, import them, set up the configuration list, flip the flag to choose between the normal AutoGen agent and the MemGPT one (the user proxy always stays the same), and initiate the chat. What remains is how I integrated the API endpoints: how to get them and where to place them. You could simply use GPT-4 here, and that's easy because GPT-4 integrates well, but look at the cost I've already shown in a previous video.
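Putting the pieces together, the agent wiring described above can be sketched as below. This is an outline modeled on the pymemgpt AutoGen example from this period; module paths and function signatures changed between pymemgpt releases, so treat it as a map of the structure rather than a drop-in script (it also needs the packages installed and a live endpoint behind llm_config to actually run):

```python
# Outline of the wiring: a standard AutoGen user proxy, and a coder that is
# either a plain AutoGen assistant or a MemGPT-backed agent, depending on the flag.
import autogen
import memgpt.autogen.memgpt_agent as memgpt_autogen
import memgpt.autogen.interface as autogen_interface
import memgpt.presets as presets
from memgpt.persistence_manager import InMemoryStateManager

USE_MEMGPT = True
llm_config = {"config_list": config_list, "seed": 42}  # config_list from earlier

user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    human_input_mode="TERMINATE",
    default_auto_reply="You are going to figure all out by your own. "
                       "Work by yourself, the user won't reply until you output "
                       "TERMINATE to end the conversation.",
)

if not USE_MEMGPT:
    # Plain AutoGen assistant, served by the local endpoint
    coder = autogen.AssistantAgent(
        name="Coder",
        system_message="I am a 10x engineer, trained in Python.",
        llm_config=llm_config,
    )
else:
    print("MemGPT agent at work")
    interface = autogen_interface.AutoGenInterface()
    persistence_manager = InMemoryStateManager()
    persona = "I am a 10x engineer, trained in Python."
    human = "I am a team manager at this company."
    memgpt_agent = presets.use_preset(
        presets.DEFAULT_PRESET, "gpt-4", persona, human,
        interface, persistence_manager,
    )
    coder = memgpt_autogen.MemGPTAgent(name="MemGPT_coder", agent=memgpt_agent)

user_proxy.initiate_chat(coder, message="Write a function to print numbers 1 to 10")
```

The "gpt-4" model name here is only a label for the preset; the requests themselves go to whatever endpoint the configuration points at.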
If you go to platform.openai.com and open the Usage page, you'll see it cost about $5 from just running it a couple of times, which is crazy. If you don't want to spend like that, this is the solution we've been looking for.

So let's start setting this up on RunPod: create an account, add some credits, and go to Templates. Select the "TheBloke LLMs" template shown here and deploy it. For the GPU, my favorite is the RTX A6000, keeping cost, availability, and VRAM in mind. Port 7860 is the oobabooga text-generation-webui (you can watch my other videos for a deeper explanation), and there's an HTTP port 5000 already listed; what you're going to do is add one more port, 5001, which will be the port through which MemGPT and AutoGen talk to the model. Click Set Overrides, click Continue, click Deploy. This spins up the pod; click My Pods and you can see the pod getting ready. Let's wait for it.

Once it's up, click Connect. You'll see our extra port isn't ready yet; we'll make it ready, but first open port 7860, which brings up the oobabooga text-generation-webui (tell me how you're feeling about this technology!). Go to the Model tab; we're going to use dolphin-2.0-mistral-7b. Click copy on the model name, paste it into the download box, and click Download; this fetches the model files. When the download is done, hit Refresh so the downloaded model appears, select it, pick the loader, and click Load to start the loading sequence. Wait for "successfully loaded". Once the model is loaded, we need to open that 5001 port.
To open it, go to the Session tab, tick the openai extension, and click "Apply flags/extensions and restart". This restarts the web UI, and if you go back to My Pods you'll see the port is now open. This is the endpoint you'll be using for our project: RunPod has downloaded the local large language model (the dolphin 2.0 model), and we've opened port 5001, which serves an OpenAI-compatible API, so it behaves like the OpenAI endpoint.

Now copy the text-generation-webui URL (Ctrl+C), head over to the code, and paste it in as our api_base: change the port to 5001 and append /v1. Copy the same value into the MemGPT configuration too. That's the entire configuration we've been looking for: paste the web UI link, switch the port to 5001, add /v1.

Now for the moment of truth. I'll clear some space so we can see the results better, and first change the flag to False, so initially we're running plain AutoGen with a user proxy and a coder (an AutoGen coder). How do you run it? python app.py. We're using only AutoGen: the chat starts with "write a function to print numbers 1 to 10", we get the code here, and it executes the code to produce the output. So that works. Now change the flag to True and run python app.py again to see if it works.
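The api_base edit described above is pure string surgery on the pod URL. A tiny illustration, using a made-up pod ID (abc123) in RunPod's proxy naming scheme:

```python
# Take the pod's web-UI URL, swap port 7860 for 5001 (the OpenAI-compatible
# extension), and append /v1. "abc123" is a hypothetical pod ID.
webui_url = "https://abc123-7860.proxy.runpod.net"
api_base = webui_url.replace("-7860", "-5001") + "/v1"
print(api_base)  # https://abc123-5001.proxy.runpod.net/v1
```

The resulting string goes into both the AutoGen config_list and the MemGPT configuration.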
When we set the flag to True, we get the output "MemGPT agent at work", and again the user proxy says "write a function to print numbers 1 to 10", and we get this beautiful output. It's pretty great; you can see it running. Combined with this knowledge and this power, you can set up smarter prompts, a better default reply, or a well-crafted human admin persona for whatever functions you want this agent to serve.

I think I should end the video here, because it's already quite long, but let me summarize everything we did today. We wanted to use a MemGPT agent, and for that we started the text-generation-webui on a RunPod pod, downloaded a model, took the pod's URL, changed the port to 5001 and the path to /v1, and set that as the API base for both AutoGen and MemGPT. With the APIs set, we could flip the flag: False runs plain AutoGen, True runs AutoGen and MemGPT together. I'll attach a link to the code.

This is just a starting project. Now that we can connect AutoGen and MemGPT, we should be able to take on bigger projects, especially combined with the power of local LLMs, because the cost of GPT-4 is huge. So now you have this power; please try it, and mention in the comments if you face any difficulties. If you don't have RunPod, or if you have a very strong GPU machine, you can use your own hardware, or you can use RunPod through
this link, which will help me a lot. Thank you for watching this video till the end. I'll keep coming up with new videos on different technologies that I think are worth sharing; this was a deep-dive video on getting everything installed. Please say in the comments whether you find this sort of video interesting, or whether you'd like me to change the style of presentation; I'm very open to suggestions, because this is a new channel and I'm ready to adapt to what you want. Also tell me what you'd like to see next, because I have so many videos and topics lined up that I get hung up on what to present. Having said that, this is your host; I hope you enjoyed the video, so please press the like button, subscribe to my channel, and watch the other videos I mentioned at the start of the video, because they'll give you a deep understanding of everything happening in this space. I'll be back with a new video; until then, thank you and have a nice day.
Info
Channel: Prompt Engineer
Views: 30,785
Keywords: chatgpt, openai, autogpt, prompt, promptengineer, mojo
Id: bMWXXPoDnDs
Length: 26min 15sec (1575 seconds)
Published: Tue Oct 31 2023