AutoGen Introduction in 5 files step by step including how to enable Docker

Captions
hey everyone, in this video we're going to talk all about AutoGen in five files. We'll start very simple, then move on to how to set the config list, how to configure the GPT parameters, how to use the human input mode and its different alternatives, and how to use Docker, which we'll cover in the fourth file. You'll also learn how to set up a loop if you want one, and how to save the log of the conversations automatically. We'll go over the requirements, including how to install WSL so you can run AutoGen's code execution in an isolated environment, look at an example that communicates with the web, and lastly take a look at a function-calling agent setup, which is really cool. AutoGen is very cool in general.

AutoGen, in a nutshell, is an agent-building framework. It's like LangChain, but for building LLM agents much more easily: it takes care of a lot of stuff in the background and lets you get going really quickly. You install it with pip install pyautogen; it's updating pretty quickly, so note the version I'm using. From autogen we import AssistantAgent and UserProxyAgent. The user proxy stands in for the user, and the assistant is the LLM agent; with AutoGen you can actually create multiple agents that talk to one another, too.

Getting going is very easy, especially if you have your OpenAI API key set in your environment variables: AutoGen auto-detects it, so as far as I can tell you don't even have to worry about the config list, but take that with a grain of salt. We initiate the chat by sending the assistant a message: what day is today, and which big tech stock gained the most this year? When we run this, you can see it writing some Python code, and then it asks us to provide feedback. Normally, if you just press Enter without typing anything, it prints "no user input received" and auto-executes the block of code, and I believe it's currently running it in a Docker container, because if you don't specify anything it runs with Docker by default; that's also why you might get some error messages right away, but we'll talk about all that. It tried to run the code, there was a mistake, so it got the error back as feedback and is correcting it. I'll let it run the code again... here we go: it actually executed it and found it was Google that gained the most, around 10%, it says. After that it gives us an explanation and gets back to us; at this point we can type exit to leave the chat. So that's how it works, and by default it enables Docker; we'll dig into that by the time we get to the fourth file.
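For reference, here's a minimal sketch of what that first file boils down to, assuming the pyautogen 0.1.x API from around the time of this video; the exact prompt wording is my paraphrase:

from autogen import AssistantAgent, UserProxyAgent

# The assistant writes replies and code; the user proxy executes the code
# and relays our feedback. With no llm_config given, autogen picks up the
# OPENAI_API_KEY environment variable automatically.
assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")

# Press Enter at a feedback prompt to auto-execute the generated code
# (in Docker by default), or type "exit" to end the chat.
user_proxy.initiate_chat(
    assistant,
    message="What day is today? Which big tech stock gained the most this year?",
)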
So let's move on to the second file. This one actually builds a config list. When you configure the config list, AutoGen looks either at an environment variable with a particular name or at a JSON file with that name (OAI_CONFIG_LIST), where you essentially just set the models you want to use and your API keys; there's a sample one. I set mine up with just OpenAI, but as a matter of fact, even if you don't set this up at all, AutoGen detects the key from your environment variables, which is what I've done. That's why I was mentioning that you should set your environment variable to that particular key; I believe it's the easiest way to deal with OpenAI in all cases.

Code files for this project will be available at my Patreon as a public post, so you don't have to become a member, but while you're there, please check out all the other Patreon-only exclusive projects; I have over 120 of them, and they're all very interesting and dear to my heart. If you want to know what projects are there, you can visit my website, www.echohive.live, browse them, read their descriptions, find the code download links, and search them in real time there as well.

Anyway, that's how you set the config list, and then we filter it with a dictionary down to the models we want to use. I believe the point is that the list gives AutoGen alternatives to go over, so if one of them fails it switches to the other, though I'm not 100% sure; I'll be making more videos on this, and as I find out more I'll let you know. Also, all the files here will be available at Patreon for free; the link will be in the description, and it's a public post. All these examples come from AutoGen's GitHub, but there they're set up as notebooks, so in some instances I had to change things to make them run in Visual Studio Code; these are just some of the examples they had there.

So this is the second file. We import autogen and instantiate an assistant as an autogen AssistantAgent, giving it a name, which I believe can be any name, and then there's this llm_config dictionary. The seed, apparently, is for caching and reproducibility: AutoGen automatically keeps a cache of everything you run, so with seed 42, I believe responses are saved in some kind of local database, and when the same request comes in again it can pull the response from a previous one instead of making an API call; that's the idea behind the caching, I'm pretty sure. You can apparently disable this caching too, I'm just not sure exactly how yet. Then you set config_list to the config list we built, and in the same dictionary you can set all the parameters of the API, such as temperature, max_tokens, and so on; just remember that if you want to set those parameters, llm_config is where they go.

After that we define a user proxy the same way: give it a name, then set human_input_mode. If you set it to NEVER, it will never come back to take input from you. If you set it to ALWAYS, then every time a message is received — each iteration — it will ask for user feedback, just like in the original example. If you set it to TERMINATE, the agents will run on their own for however many turns, and only when the assistant thinks it's done and sends a TERMINATE signal do we get control back. Those are the differences between NEVER, ALWAYS, and TERMINATE. Even with NEVER or TERMINATE set, max_consecutive_auto_reply caps the loop at any number you like, here 10, so after 10 auto-replies it terminates on its own. The is_termination_msg function just looks at the content of each message and checks for TERMINATE at the very end of it. Then we have this code_execution_config, where you can specify a directory for the generated code; in this case the directory's name is going to be "coding". We also set use_docker to False; if you set it to True you'll be using Docker, and if you don't set it at all, I believe it defaults to True. We'll get to how to install Docker and run everything safely inside a container.
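Putting that together, here's roughly what the second file looks like — a sketch assuming the pyautogen 0.1.x API, with the filter models as my own example values:

import autogen

# Load model/key entries from the OAI_CONFIG_LIST env variable or JSON file.
# The file itself is just a JSON list, e.g.: [{"model": "gpt-4", "api_key": "sk-..."}]
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4", "gpt-3.5-turbo"]},  # keep only these models
)

llm_config = {
    "seed": 42,                  # cache key: identical requests are answered from disk
    "config_list": config_list,
    "temperature": 0,            # other API parameters (max_tokens, ...) go here too
}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",        # never stop to ask us for feedback
    max_consecutive_auto_reply=10,   # hard cap on the auto-reply loop
    # stop once a message's content ends with TERMINATE
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="What date is today? Compare the year-to-date gain for META and TESLA.",
)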
Then we just initiate the chat, asking: what day is today? Compare the year-to-date gain for META and TESLA. We can also queue a follow-up question, but let's comment that out for now, run it as is, and see what happens. Here we go: first it writes some code to get the current date, and because we set human input mode to NEVER, it executes the code without asking us. Then it hit an error because it couldn't find yfinance, so it pip-installs it and executes the code again; it installed requests and related packages too, I believe because yfinance needs them to make its web requests. It keeps going, then reports "today is not defined," gets that error back automatically as feedback — it's behaving like a code interpreter right now — runs again, corrects its mistake, and gets the year-to-date gains for both META and TESLA. Then it gives us a summary, terminates, and the script exits.

We can also send a secondary, follow-up message programmatically, which is pretty useful. If I run this again now, it will first do all of the above and terminate, I believe, and then I'll send the follow-up message, which asks it to plot a chart of the stock prices year to date and save it. And if you actually look in the coding directory, it had already done that without us asking: there's the year-to-date stock price plot. The thing to note here is that it used the directory we specified; the other thing to notice is that we are not using Docker, so everything it executes runs directly on our system. I believe it's all done now — it can run pretty quickly when it isn't hitting any issues — and it wrote both the code that gathered the requested information and the code that created and saved the chart.

Alternatively — I'll comment those lines out — we can run this as a loop, as shown in the sketch below. Pay attention to the first-message flag: if it's the first message, we initiate the chat with the user's input; otherwise we just send a message. If we run this, we can keep telling it to do things — for example, I asked for a joke and it actually wrote some Python code for one — and then it gets back to us and we can keep talking to it. So you can set this up as a loop as well; I'll leave the loop code commented out, but you can enable it when you're experimenting.
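The loop itself is something like this minimal sketch; the first_message flag is my own bookkeeping, not an autogen API:

# Reuses the assistant and user_proxy defined above.
first_message = True
while True:
    text = input("You: ")
    if text.strip().lower() == "exit":
        break
    if first_message:
        # the first user input starts the conversation
        user_proxy.initiate_chat(assistant, message=text)
        first_message = False
    else:
        # later inputs are follow-up messages in the same conversation
        user_proxy.send(message=text, recipient=assistant)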
Now let's take a look at the third file. Again we get our config list from OAI_CONFIG_LIST, and it still works even though my API key isn't in the file, because I have it set as an environment variable — I just want to make that clear. But you can also enter the key into OAI_CONFIG_LIST.json directly; there's a sample file, and it's a list, so I believe it should work if you have more than one entry.

Again we define an assistant with an llm_config, setting a seed — for caching purposes, as we discussed — and giving it the config list. Then we create the user proxy agent, this time with human_input_mode set to ALWAYS, so it will ask us for input at each iteration. We're still looking for TERMINATE, and we set the working directory; it was "coding", but we could change it to anything, so let's actually change it to "test", with use_docker still False.

The purpose of the next line is to log the conversation history: you can start logging your conversation with ChatCompletion.start_logging(), and at the end save the logged history to conversations.json. You do need to import json, either at the top or right there, and then you just dump the logged history to a JSON file. That's how you keep track of the conversations in a file; see the sketch after this walkthrough.

We give it a math problem and initiate the chat with that message, sending it to the assistant we defined above. I'll go ahead and delete conversations.json first so we can see what happens at the end, and let it run. The first time around, GPT got into the mood of saying "I'm just an AI, I can't do that," but the second time it took the question, rearranged it, and decided to use the library SymPy to actually evaluate the expression. So it wrote some code and got back to us immediately, because we set human_input_mode to ALWAYS. Remember, you can provide feedback to it here, but if you want it to continue uninterrupted and execute the code, you just input nothing — it mentions "press enter to skip and use auto-reply," and the auto-reply auto-executes the code. I believe it tried to execute, got some error, and kept going; you have to enter user input each time, which is literally just pressing Enter. Then it was able to run the code: the output was 0, so it tells us the evaluation of that mathematical statement was 0 and asks us to verify and check. After that it received TERMINATE, which is why this conversation was terminated. And because we kept logging with ChatCompletion.start_logging() and saved everything to a JSON file, we can see what happened in our conversation, along with our costs; I'm not sure exactly what the cost figures are based on, but they're there and you can take a look.
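The logging itself is a short pattern like this, assuming the pyautogen 0.1.x ChatCompletion logging API (later releases replaced it); the math problem shown is a placeholder, since the exact one from the video isn't in the captions:

import json
import autogen

# Record every ChatCompletion request/response from this point on.
autogen.ChatCompletion.start_logging()

user_proxy.initiate_chat(
    assistant,
    message="Evaluate this expression: (1423 - 123) / 100 - 13",  # placeholder; evaluates to 0
)

# Dump the logged history, including cost information, to a file.
with open("conversations.json", "w") as f:
    json.dump(autogen.ChatCompletion.logged_history, f, indent=2)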
Now let's take a look at our fourth file. In this one we're actually going to be using Docker, and like I said, it's surprisingly easy: you just set use_docker to True in code_execution_config, where we also set the code directory. (I do want to mention that the "test" directory really was created when we specified it for file number three.) And I believe use_docker is set to True by default anyway.

On Windows, do a quick search and install WSL 2, because you need the Windows Subsystem for Linux. The first link, which I'll put in the description, says simply to run wsl --install in PowerShell; you can do that by starting PowerShell from your terminal. I already have it installed, so I'm not going to run it, and I've also put it in the requirements. After you install it, you'll probably need to restart your computer. Then go to Docker's site, find Docker Desktop, download it — for Windows, in my case — run it to install, and restart your system again. After that you just start Docker, meaning the Docker Desktop app, and AutoGen will automatically create and delete Docker containers on the fly, so you don't have to worry about that.

Looking at the file: we again set the config list, and in this case the models specified include gpt-4-32k, even though I don't have access to it; I believe it just uses the first one it can, gpt-4. In llm_config, like I said, you can set request_timeout, an OpenAI parameter, so that if the API is unresponsive for 600 seconds it will break away. We set the seed for caching purposes, config_list to our config list, and temperature; you can set other parameters here too, such as max_tokens. Then we define the assistant with this llm_config. We set human_input_mode to TERMINATE, so once the agent sends a TERMINATE message, AutoGen gets back to us; we set max_consecutive_auto_reply to 10, so it will auto-terminate and return to us after 10 iterations; and the usual check detects whether TERMINATE is at the end of the content of the assistant's messages. We set the code execution working directory to "web" this time, because we're going to be doing some web access, and we set use_docker to True. And pay attention: apparently you can set a system message for the user proxy too; here it says "Reply TERMINATE if the task has been solved at full satisfaction. Otherwise, reply CONTINUE, or the reason why the task is not solved yet."

Then we just initiate the chat by asking who should read a certain paper, giving the paper's URL. Now, if you run this, look what happens — I'll keep Docker open on the side. It automatically started a Docker container and terminated it immediately after the code execution, and it just keeps doing that: each time it starts a code execution, it spins up another container. So once you've installed the Windows Subsystem for Linux (if you're on Windows) and it's executing its code through Docker, you can to a great extent consider your overall system safe, because each Docker container is isolated from the rest of your system.
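Here's roughly what that fourth file looks like, again sketched against the pyautogen 0.1.x API; the arXiv URL is a placeholder, since the video doesn't show which paper was used:

llm_config = {
    "request_timeout": 600,   # seconds to wait on an unresponsive API call
    "seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",    # only return control to us on TERMINATE
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    # each generated code block runs inside a fresh, isolated Docker container
    code_execution_config={"work_dir": "web", "use_docker": True},
    system_message="Reply TERMINATE if the task has been solved at full satisfaction. "
    "Otherwise, reply CONTINUE, or the reason why the task is not solved yet.",
)

user_proxy.initiate_chat(
    assistant,
    message="Who should read this paper: https://arxiv.org/abs/2308.08155",  # placeholder URL
)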
Like I said, if you simply run wsl --install in PowerShell, you'll have installed the Windows Subsystem for Linux along with Ubuntu; then you install Docker Desktop, start it, and that's all you need to do. Meanwhile, as we can see in the background, AutoGen is doing all kinds of stuff: it's actually trying to run code that imports BeautifulSoup to fetch the abstract from that URL. Let's see if it succeeds... it's still working on it... and yes, eventually it was able to get the abstract of the article, all the while executing the code in Docker containers. Then it gave us our answer, based on the abstract, about who this paper is for: researchers and developers, professionals in coding, and so on. This is really lovely. I'll be experimenting more with AutoGen with my own ideas, and I'll work through some of the other examples from their repository as well; all these files will be available on my Patreon, like I said, for free as a public post, with the link in the description.

Now, finally, let's take a look at a function-calling agent configuration. We're again setting the config list, but this time using config_list_from_models; before, we were using config_list_from_json, looking for OAI_CONFIG_LIST, but in this case we define it from a model list: gpt-4 and gpt-3.5-turbo. Nowhere are we specifying the API key, and this will still run because my OpenAI API key is set in my Windows environment variables; it's auto-detected from there. This code was particularly set up for running in a notebook, which is why I modified it to run in Visual Studio Code as a regular script, and why we're using subprocess here.

Again we're using an llm_config, like before, because we need to define all the parameters of the API, such as temperature, max_tokens, and request_timeout — and since function definitions are part of the API parameters, we define them here too, just like we would normally define OpenAI function definitions. We stick to the usual convention: a "python" function to run code in Python and return the execution results, and an "sh" function for running shell scripts. Then we set config_list to the config list and request_timeout to 120 seconds, and define the assistant with the name "chatbot" — this can be anything you like — and the system message "For coding tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done." Its llm_config is the llm_config, because it needs those function definitions. The user proxy again looks for TERMINATE, and its human_input_mode is NEVER — it never wants user input, so with this setup we won't be able to give any feedback. We're still limiting the loop to 10 iterations, and the code_execution_config uses the working directory "coding".

Then we have the functions themselves, the implementations behind those definitions: one to execute Python code and one to execute shell scripts, matching the names "python" and "sh". After defining them, we register them with user_proxy.register_function, mapping the functions: the "python" definition maps to the exec_python function, and "sh" maps to the exec_sh function.
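A condensed sketch of the whole function-calling setup, assuming the pyautogen 0.1.x API; the exec_python/exec_sh bodies are my own plain-script stand-ins built on subprocess, as the video describes, rather than the notebook's IPython version:

import subprocess
import sys

import autogen

config_list = autogen.config_list_from_models(model_list=["gpt-4", "gpt-3.5-turbo"])

llm_config = {
    "functions": [
        {
            "name": "python",
            "description": "run code in Python and return the execution results",
            "parameters": {
                "type": "object",
                "properties": {"cell": {"type": "string", "description": "Valid Python code to execute."}},
                "required": ["cell"],
            },
        },
        {
            "name": "sh",
            "description": "run a shell script and return the execution results",
            "parameters": {
                "type": "object",
                "properties": {"script": {"type": "string", "description": "Valid shell script to execute."}},
                "required": ["script"],
            },
        },
    ],
    "config_list": config_list,
    "request_timeout": 120,
}

chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For coding tasks, only use the functions you have been provided with. "
    "Reply TERMINATE when the task is done.",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "coding"},
)

def exec_python(cell):
    # run the generated code in a fresh interpreter and return its output
    result = subprocess.run([sys.executable, "-c", cell], capture_output=True, text=True)
    return result.stdout + result.stderr

def exec_sh(script):
    result = subprocess.run(script, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

# map the declared function names to the callables that will execute them
user_proxy.register_function(function_map={"python": exec_python, "sh": exec_sh})

user_proxy.initiate_chat(
    chatbot,
    message="Draw two agents chatting with each other with an example dialog. Save the plot to a file.",
)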
So that's how you register the functions, and we do it on the user proxy for a reason. If you think about it, the assistant gets the function descriptions and returns the name and arguments for each call, and then the user side is the one responsible for actually executing it. That's how we would do it the pure-OpenAI way, too: we would receive the function call, and when it was the user's turn to respond, we would execute the function, get its return value, and feed that back in. I guess that's why AutoGen simplifies it like that.

Then we just start a conversation: we initiate it from the user proxy and address the message to "chatbot", the assistant agent we initialized up there. The message is "Draw two agents chatting with each other with an example dialog. Save the plot to a file." It's a kind of obscure task, but essentially it should still work; let's see what happens. It was having some issues, so I rephrased it to "use matplotlib to draw two agents chatting"; let's see what it does. This was actually one of their examples — a bit of a weird one, but interesting nevertheless. In theory it should write some matplotlib code to simulate two agents talking to one another, I believe that's the main idea, and here we go: it did, somewhat. It ran the code, and that's it. One thing I don't know is why, in this case, it's not using the coding directory; in my previous tries it didn't use it either. Although we set the working directory to "coding", it didn't save the PNG to that folder; it saved it to the main directory instead.

But take everything about AutoGen with a grain of salt: like I said, this is a pretty new library, and I'm sure it's going to improve quite a lot. It's very exciting, very promising, and I hope you found this useful and that it gets you started and going quickly. If you enjoy the content, please feel free to subscribe, let me know what you think in the comments, or join our Discord server and chat with us there about all this stuff; the link will be in the description. Thank you for watching. Code files for this project will be available at my Patreon as a public post, so you don't have to become a member, but while you're there, please check out the other 120-plus Patreon-only exclusive projects, or browse them with descriptions and code download links at www.echohive.live.
Info
Channel: echohive
Views: 8,812
Id: WnBCPG-ZdLk
Length: 26min 8sec (1568 seconds)
Published: Tue Oct 03 2023