AutoGen Tutorial | ANY Open-Source LLM using LMStudio with MemGPT

Video Statistics and Information

Captions
Hey, and welcome back to another video. Today we'll be using LM Studio to connect a local open-source LLM to AutoGen, and then we'll interweave MemGPT in as well. I'll go over how to start using LM Studio, how to connect it to AutoGen, and how to use MemGPT in the code. We'll review the code, then go through an example, so let's get started.

First off, that's Simba. But secondly, this is LM Studio. Go to lmstudio.ai, click to download LM Studio for whatever system you have, go through the install process, and start the application. You'll be greeted with the home screen, the main page of LM Studio. Everything down at the bottom is just news and information about some of the models; all we care about for this purpose is the search bar in the top middle. Type in an open-source LLM (I'll type in "mistral") and click Go, and after a second it gives you a list of all the Mistral models. Right now it's sorted by most recent, so whatever was updated most recently is at the top, but you can also sort by most likes, and you can see this one has the most likes for Mistral so far. I've already downloaded a couple here. Click on whichever model you want, then look over on the right: these are all of the model sizes. I've already downloaded the Q3 and Q5 variants, so just choose one; it doesn't matter. I'll take this top one, Q2. Click the download button, it shows up down here, you can see it's starting to download, and we'll be back when it's done.

Okay, now it's finished. Next, come over to the left to the little double-sided arrow and click it. At the top it says "Select a model to load," so click the dropdown and pick the Mistral Q2 model we just downloaded. It will load the model, and when it's done there's just one more step. You can see there's a server port and a couple of options here; don't worry about changing those. We're going to start the server and take this HTTP localhost address and put it into the code for AutoGen. Instead of connecting to the OpenAI API directly, we don't need to do that, and you don't need an API key, because this is a local open-source LLM served by LM Studio. We just connect to the localhost address, which lets us chat with this model. Click Start Server and boom, we're ready to go.

Now I'll move to the next section, getting AutoGen ready, and I'll go through the code. We're going to take this HTTP localhost address, which is the API base for connecting to the local server, so we can use this model. Let's see how that works.

Okay, now that the LM Studio side is taken care of, the next thing is to get AutoGen and MemGPT ready. I'll create a new project, create the files, review them, and then we'll run it and see how it works. Now that the files are created, the first one I'll go over is the .env file, where I want to hold all of my environment properties. A good practice is
to not hardcode the properties, for instance in our app.py file, which I'll show you in a second. You don't want to hardcode things like the API base, your key, your model, or other general properties; it's better to keep them in environment variables, because if I need to change the key or the base, I can just come here, and that's much easier to read and handle than going all the way through your Python code looking for it and changing it there. So: we have the API key. Again, because it's a local LLM you don't need a real API key, but you do need to put some dummy value here. The API base is the URL we use to connect to LM Studio. The API type is still open_ai. And then there's a value for whether or not we want to use MemGPT agents, which I set to true by default.

Now on to the main Python file. The first thing is the imports. We need pyautogen, pymemgpt, python-dotenv, and openai, so run pip install for those and we'll get started. The first function here, load_dotenv(), loads the environment variables from the .env file so we can use them throughout the code. Next we have the config list, where we retrieve the API type, base, and key from the environment file we just created by calling os.getenv with the name of the key that holds each value. So when we get the API key, which is just a dummy value, it returns "sk-12345" or whatever you have it set to. We also set openai.
api_key and openai.api_base the same way, from our environment variables. This is where separating things out is nice: if we change the base or the key in the environment variables, it changes here automatically, so we never have to worry about it again.

Next is the LLM configuration. The first parameter is the config list we just created, so we put that in. Then the seed. With seed 44, the first run creates a directory under the project called .cache, and under that a folder named 44. Every run after the first gives us the same responses back from the model, because it stores and caches the responses it already got; a rerun just gives everything back in a couple of seconds. If you ever want different responses, just change the number and you'll get something new. The request timeout is set to 600 seconds, which is 10 minutes, and I set the temperature to 0.7; it doesn't really matter for this example, I'm just trying it out.

Now we get to creating the agents. The first is the user proxy agent, and that's you and me. I named it user_proxy, and the system message defines the persona and description of the agent: we're the human admin. Under the code_execution_config parameter, the main thing is the working directory. When the code is finished, it's supposed to put the Python file (or whatever type of file it is) in another directory under our project called groupchat and store the files there. For human_input_mode I put
"NEVER" here. By default it's "ALWAYS", so if I just deleted this it would be ALWAYS, but I'm not worried about chatting with the model for this example.

Next is the assistant agent, our first AI agent, and this one is a product manager. The system message is simple, and for the third parameter we just give it the LLM configuration, so it will go to LM Studio using the API base, connect to the server we already started, and chat with the model we loaded.

This next little bit of code isn't necessary; it just lets us debug MemGPT whenever you want to use it. It lets you see more of the inner workings: how it talks back and what's going on in the memory management system. I set this variable from the environment variable we set to true, called USE_MEMGPT. It comes back as a string, so I have to convert it to a bool. From there we either don't use MemGPT and create a regular agent as the coder, or we do use it and call a special function MemGPT gives us for connecting with AutoGen, creating this MemGPT coder. It's a little different, because here you use a persona description and a user description: the persona description is basically "what am I?", and the user description is more like "what is my role in this group?". The interface keyword arguments parameter is where we put all the debug flags set to True. Again, you don't need this; it's just so you can see what's going on
under the hood.

If we're not going to use MemGPT, we simply have a regular assistant agent: it's a coder, and its system message is the same text as below, just put together in one parameter, and we give it the llm_config.

Next we have to create a group chat of all the agents so they can chat together, so we create a GroupChat object. The first parameter, agents, is where we define the array of agents: myself (the user proxy), the product manager, and the coder, the three agents we'll be using. The messages parameter doesn't matter, and I set max rounds to two because this is just an example and I want it to finish. Then we create a GroupChatManager object. Its first parameter, groupchat, is where we give it all of the agents that will be talking with each other, and finally we give it the llm_config.

Then we initiate the chat: you and me, the user proxy, call initiate_chat with the manager as the first parameter and the message as the second. We'll say "create a simple random number generator in Python." Something simple, just to show you all the connections. I'll run this offline and we'll be back with the results.

Okay, we're finished. Here in LM Studio we can see the interaction it had from the Python file: it went ahead and created the implementation in Python, and here is the code. It doesn't look quite like the code in the editor, because it isn't formatted to be easy to read, which I'll show you in PyCharm in a second, but this is the interaction it had with the model we loaded in LM Studio, and that's what we wanted.

Now back in PyCharm we can see this code and the conversation a little better, since it's formatted so it's easier for us to see. We can see that we used MemGPT as the coder, and here's the task I gave: create a simple random number generator in Python. Because I didn't explicitly say that the product manager shouldn't code at all and that only the coder should write the code (my prompting was a bit off, but that's not really the point here), the product manager decided to write the code for us. It's a simple file that prints out a random number between 1 and 100.

So that's it: that's how we connect LM Studio to AutoGen and MemGPT. Awesome, guys. We took an open-source LLM and connected it to AutoGen and MemGPT, and one of the best things is that because it's an open-source LLM we didn't need an API key, which means we didn't have to spend money, and it's always nice to not spend money, especially when you're just testing things out. My next video will go over AutoGen in detail, taking a deeper dive into the agents, how they work, and all the parameters and functions, so we'll all understand how AutoGen works better. Have a great day, guys, and I'll see you in the next video!
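The .env values and config list described in the captions might be sketched like this. This is a minimal, self-contained sketch: the variable names (OPENAI_API_KEY, OPENAI_API_BASE, OPENAI_API_TYPE), the dummy key, and LM Studio's usual default port 1234 are illustrative assumptions rather than values shown verbatim in the video, and os.environ.setdefault stands in for python-dotenv's load_dotenv() so the snippet runs on its own.

```python
import os

# In the video these values come from a .env file via load_dotenv();
# setdefault stands in for that here so the sketch is self-contained.
# The variable names and port are assumptions for illustration.
os.environ.setdefault("OPENAI_API_KEY", "sk-12345")  # dummy value; the local server ignores it
os.environ.setdefault("OPENAI_API_BASE", "http://localhost:1234/v1")
os.environ.setdefault("OPENAI_API_TYPE", "open_ai")

# One endpoint entry pointing AutoGen at the LM Studio local server
config_list = [
    {
        "api_key": os.environ["OPENAI_API_KEY"],
        "api_base": os.environ["OPENAI_API_BASE"],
        "api_type": os.environ["OPENAI_API_TYPE"],
    }
]
```

Keeping these in the environment, as the video argues, means swapping the key or base later touches only the .env file, not the Python code.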
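The LLM configuration walked through in the captions (the config list, seed 44, the 600-second timeout, temperature 0.7) is just a dictionary. A sketch, with an inline config_list standing in for the one built from the environment variables:

```python
# Sketch of the llm_config described in the video; the inline
# config_list stands in for the one built from environment variables.
config_list = [
    {"api_key": "sk-12345", "api_base": "http://localhost:1234/v1", "api_type": "open_ai"}
]

llm_config = {
    "config_list": config_list,
    "seed": 44,              # responses are cached under .cache/44; change the number for fresh replies
    "request_timeout": 600,  # seconds, i.e. 10 minutes
    "temperature": 0.7,
}
```

This dictionary is what gets passed to the assistant agents and the group chat manager, so they all talk to the same local endpoint with the same caching behavior.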
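As the captions note, the use-MemGPT flag comes back from the environment as a string, and casting it with bool() is a trap, because bool("False") is True in Python. A small sketch of a safe conversion; the helper name env_flag is mine, not from the video:

```python
import os

def env_flag(name, default="True"):
    """Read a boolean flag from an environment variable.

    os.getenv returns a string, and bool("False") is True, so compare
    the text instead of casting with bool().
    """
    return os.getenv(name, default).lower() == "true"

# e.g. the USE_MEMGPT toggle the video sets to true in the .env file
USE_MEMGPT = env_flag("USE_MEMGPT")
```

The code then branches on this flag: a regular AssistantAgent coder when it is False, the MemGPT-backed coder when it is True.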
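The generated script itself isn't shown in the transcript; a minimal version matching its description (print a random number between 1 and 100) might look like:

```python
import random

def random_number(low=1, high=100):
    """Return a random integer between low and high, inclusive."""
    return random.randint(low, high)

if __name__ == "__main__":
    print(random_number())
```

In the video's run, this is the kind of file the agents write into the groupchat working directory.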
Info
Channel: Tyler Programming
Views: 10,508
Keywords: ai, chatgpt, artificial intelligence, chatdev tutorial, ai agent, ai agents, autonomous ai agents, autogpt, build autonomous agent with python, chat gpt, gpt 4, tutorial, step by step, python ai chatbot tutorial, ai automation agency, how to setup autonomous ai, your first software ai team, ai tools, artificial intelligence and machine learning, microsoft autogen, autogen, auto gen, ai tutorial, memgpt, mem gpt, lmstudio, lm studio, memgpt tutorial, lmstudio tutorial
Id: 8RtxvXIx61Y
Length: 12min 34sec (754 seconds)
Published: Sat Nov 18 2023