AutoGen FULL Tutorial with Python (Step-By-Step) 🤯 Build AI Agent Teams!

Captions
I am absolutely obsessed with AutoGen. It is the coolest piece of AI technology that I've seen since I first saw ChatGPT. I already created a beginner's tutorial video and overview of AutoGen and its capabilities, but now I'm going to give you a more advanced tutorial and we're going to set it up on our computer. I'm going to walk you through it step by step, and not only that, I have plans for a series of videos about AutoGen, so if there are any AutoGen-related topics you want to see, let me know in the comments. And on that note, let's go.

As a reminder, AutoGen allows you to set up multiple artificial intelligence agents that work together to accomplish any task you give them. They can use tools, they can write code, they can execute code. It is phenomenal. Think of it like ChatGPT plus Code Interpreter plus plugins, but fully customizable, and you can drop it into the application you're building. So I'm going to show you how to actually use the code locally, and this is going to be a slightly slower-paced video because I am going to walk you through each line of code step by step.

You're going to need just a few things to get this to work. I'm going to be using Visual Studio Code, but you can use any code editor you want. You're going to need Anaconda installed, and you're going to need an OpenAI account. I'm still working on getting an open-source model set up, so we're going to continue using OpenAI for now.

Okay, with Visual Studio Code open, we're going to create a new file. Once you have that new file created, go ahead and click save. I already created a folder called autogen, so if you don't have a folder specific to this project, go ahead and create one. I'm going to rename the file to app.py and save it. Now that it's all named, let's click the little button in the top right of Visual Studio Code to open up the terminal.

Next, we're going to create a new conda environment to help manage all of our Python versions and modules. You should already have Anaconda installed; if you don't, go ahead and install it (just Google how if you're not sure). The command we're going to type is conda create -n autogen python=3.11.4, then hit enter, and hit enter again to proceed. One thing to keep an eye on is which version of Python conda is using, which version Python itself is using, and which version of Python Visual Studio Code has referenced, which you can find in the bottom right-hand corner. We're going to be using 3.11.4 across the board. Now we activate the environment by pasting conda activate autogen (autogen is the name of my environment), and I can tell it's working because the environment name shows up in the prompt.

Next we need to install AutoGen, and the command to do that is pip install pyautogen, then hit enter. I have already installed it, so it says "requirement already satisfied." I'm also checking the version: I type python --version and it's using 3.11.4.

Okay, now back to app.py. We're just going to import autogen at the top. The reason you're seeing this little predictive text is that I use GitHub Copilot, so just ignore that; in fact, I'll turn it off for now. The next thing we're going to do is set up the configuration JSON for AutoGen, so we type config_list, an open square bracket, then an open curly bracket, and we define the model first. The model we're going to use today is GPT-3.5-turbo-16k. I would definitely recommend using GPT-4, but for the purposes of just showing you the code working, we don't need it.
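To make it easier to follow along, here's a rough sketch of the setup so far: the terminal commands as comments, plus the config_list as described. The API key value is just a placeholder for your own key.

```python
# Terminal setup (run once, outside of app.py):
#   conda create -n autogen python=3.11.4
#   conda activate autogen
#   pip install pyautogen

# app.py
import autogen

config_list = [
    {
        "model": "gpt-3.5-turbo-16k",      # switch to "gpt-4" for more reliable coding results
        "api_key": "YOUR_OPENAI_API_KEY",  # placeholder; paste your own key and keep it out of version control
    }
]
```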
So we're using GPT-3.5-turbo-16k, and the next thing we need to do is put in our API key. If you don't already have an OpenAI account, go ahead and sign up for one, and then grab your API key. I'm on OpenAI right now: I click create new key, name it autogen, and then create key (and don't worry, I will revoke this key before publishing this video). Go ahead and copy it, switch back to Visual Studio Code, paste it right in there with quotes, and then add a comma.

Now we need to set up the llm_config object, so we type llm_config equals, then open curly brackets. First we set the request timeout. These are all pretty standard settings, so you can just use what I'm using here; the request timeout just kills the request after a certain amount of time if OpenAI's API isn't responding. Next we need seed, and we'll use 42. Seed is for caching, and this is actually an amazing feature: once you run one of these tasks with these agents, it caches the response, so if you run the same task again it uses the cached version, which saves you money and time. If you change the seed number or delete the cache, it will redo the request; otherwise, with the same prompt and the same seed, it uses the cached version. Next we pass the config_list into the llm_config, which is what we created up above. Then we define the temperature, a value between 0 and 1: the lower the temperature, the less creative and less unique the AI's responses will be, and the higher the temperature, the more creative and unique they will be. For coding tasks we want fairly non-creative responses, so we're going to keep it at zero, but feel free to play around with this value.

Next we're going to create our first assistant, so we write assistant = autogen.AssistantAgent, open parentheses, name it "assistant", and pass in the llm_config, just like that. Now, you can create as many of these assistant agents as you want and give them different names, so if you want a team of AI agents, this is how you would do it: you could say this one is my CTO, copy it, here's my CEO, and put an entire team together, or whatever makes sense for your use case. For this, I'm just going to use a single agent. I forgot to mention this, but I wanted to add it in after the fact: if you're going to create more than one assistant agent, you definitely want to provide a system message to each of them so that you define the roles you want them to take on.
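In code, that part of app.py looks roughly like this. The video doesn't show the exact request_timeout number, so 600 seconds below is an assumed placeholder; everything else mirrors the settings described above.

```python
llm_config = {
    "request_timeout": 600,      # assumed value; aborts the call if OpenAI's API doesn't respond in time
    "seed": 42,                  # enables caching: same prompt + same seed = reuse the cached response
    "config_list": config_list,
    "temperature": 0,            # low temperature for less creative, more deterministic coding answers
}

# A single assistant agent; you could create several (e.g. a "CTO" and a "CEO")
# and give each its own system message to define its role.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)
```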
Next we're going to create our user proxy, and as a reminder, that is an agent that acts on behalf of the user, in other words, yourself. It can do things automatically on your behalf, like executing code and responding to the assistant agent, or it can ask you for approval at each step. So we type user_proxy = autogen.UserProxyAgent, open parentheses, and name it "user_proxy". You can have multiple user proxies just like you can have multiple assistant agents, so go ahead and play around with that as well.

Next we define the human input mode, which is where you decide how much manual input you want to give. Opening up the documentation for AutoGen, there are three options for human input mode: ALWAYS, where at every single step it asks you to either approve or respond; TERMINATE, where it asks you for feedback or next steps only when the task is completed; and NEVER, where it never asks. For our use case we're going to use TERMINATE, so switching back to VS Code, that's what we put here: human_input_mode equals "TERMINATE".

Next we set max_consecutive_auto_reply, which is the maximum number of times the agents can go back and forth with each other, and we set it to 10. If you set this too high, there's a risk the agents get into an infinite loop and keep going back and forth, which can get quite costly, so we leave it at 10. The next thing we need is is_termination_msg: essentially it looks for a certain keyword that ends the task, so when it sees TERMINATE it knows the task is over, and since our human input mode is TERMINATE, that's when it will ask us for input. Next is code_execution_config, which lets us set a couple of options for when code actually gets executed. We set the working directory to "web", which means that inside whatever folder you're using for this application, it creates another folder called web, and any files it creates or any code it writes goes into that folder. Then a comma, and we pass in the llm_config as usual. Last is the system message, which is essentially the instruction telling the user proxy how to determine whether the task has been completed; we're just going to use the one that AutoGen came with, and I'll paste that in right now, so it looks just like this. Then we close the parentheses.

Next we create a variable to store the task we want the agents to complete. I'm just going to call it task, equals, then three quotes to open it up, and this is where we can put any task we want. I'll say "Give me a summary of this article" and then paste a random article in here. Obviously you can extend this code to be much better, so you can enter any URL rather than hardcoding it into the task, but for now I'll leave it at this. Then we need to actually initiate the chat, so we write user_proxy (which we already created up here, and the user proxy always starts the chat) dot initiate_chat, open parentheses, pass it the assistant we created above, and then pass in a message, which is the prompt, or what we're calling the task. I'm going to go ahead and click save, and technically we're done. I'm going to extend it a little bit more, but let's make sure it all works first.
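Putting this section together, here's a sketch of the rest of app.py as described. I haven't reproduced the on-screen code verbatim: the termination check and the system message below follow the pattern from AutoGen's example notebooks, which is what the video says it borrows, and the article text is left as a placeholder.

```python
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",       # only ask the human for input once the task is finished
    max_consecutive_auto_reply=10,      # cap the automatic back-and-forth between the agents
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},  # generated files and code are written to ./web
    llm_config=llm_config,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)

task = """
Give me a summary of this article: <paste the article text here>
"""

# The user proxy always starts the conversation with the assistant.
user_proxy.initiate_chat(
    assistant,
    message=task,
)
```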
Okay, I'm just going to click play up in the top right. There it goes, it's starting. Unfortunately, GPT-3.5 really does not work well with this: sometimes I'll get it to actually write code and complete the task, but often I get things like this, where the agents just go back and forth saying "thank you," "I can't do it," "thank you," "I can't do it." So it's a good thing we set the max auto reply to 10, because this would have just kept going indefinitely. But we know it works, and that's the point. One other thing I'll point out is that we now have this cache folder right here, storing the cache under 42, so if we were to run this again, it wouldn't actually hit the API endpoint.

Now I want to show you this actually working, so let's exit out of here and clear the screen. Rather than "Give me a summary of this article," I'm going to give it an easier task: "write python code to output numbers 1 to 100 and then store it in a file." I click save, scroll up, and change the model to GPT-4, which should work much better. Before I click play, I'm going to clear the cache by deleting that folder. Here we go: user_proxy to assistant, "write python code to output numbers 1 to 100 then store it in a file." The assistant says sure, here's the Python script to do so, then we can run the script, and it goes back and forth. I think it misunderstood what I wanted: it actually output the numbers 1 to 100 into a file, which is not quite what I wanted, so I can improve the prompt. It now says "please give feedback to assistant," but before I do that, let's just verify. It says the script has successfully generated the numbers 1 to 100 and stored them in the file numbers.txt, and if I click the web folder up here, there's numbers.txt with the numbers 1 to 100 in it. But that's not what we wanted, was it?

I accidentally quit out of here before giving it feedback, so let's just run it again. I delete the web folder, delete the cache, scroll down, and change the prompt to end with "and then store the code in a file," then save, clear, and play. Okay, here we go: we have the code right there, and it says that to save it in a file you can create a new file named print_numbers.py and paste in the code above. It executed it, and then it says the Python script has successfully printed and the task is now complete. If we look at the web folder, there's print_numbers.py. Beautiful, there it is, so it wrote code locally.

Now I want to extend the code a little bit. Obviously I could do this in a different order so I'm not rerunning it from scratch every time, but I'll show it to you anyway. For task two, we create a new task, and I'm going to say "change the code in the file you just created to instead output numbers 1 to 200." So the first thing it will do is output 1 to 100 and store the code, and then it will change that code to do 1 to 200. We do need to call user_proxy.initiate_chat again, and last, instead of human input mode TERMINATE, I'm going to say NEVER. I just want it to execute all this code, which isn't risky at all.
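A quick sketch of that extension as described, with the follow-up task passed to a second initiate_chat call. The variable name task2 is just my own label, and the NEVER setting goes in the UserProxyAgent constructor shown earlier.

```python
# In the UserProxyAgent above, human_input_mode is switched from "TERMINATE" to "NEVER"
# so the agents run the whole exchange without pausing for approval.

task2 = """
Change the code in the file you just created to instead output numbers 1 to 200
"""

user_proxy.initiate_chat(
    assistant,
    message=task2,
)
```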
Okay, so now we've got it all saved. I delete the cache, delete the web folder, make sure it's using task two, save, and play. It outputs the numbers 1 to 100, and it looks like it already created a numbers.py file. Now it says "change the code in the file," which is the second task, so it should adjust that file. Hopefully it does. It looks like it did something, and let's see: "great, the script has successfully executed, you should have a file numbers.txt in the same directory as your script." Interesting, so it didn't follow the directions exactly, but honestly I didn't give it a very explicit prompt, so that's fine. What it did was keep numbers.py, create another file called numbers.txt with the numbers 1 to 200 in it, and then actually write the code to update numbers.txt with those numbers. So, not quite what I asked for, but that's my fault for not writing a great prompt.

There are a lot of things you could do to make this code better. For example, you might want to put the model and the API key in environment variables in a separate .env file, because you don't want to commit an API key (there's a quick sketch of that at the end of this transcript). You could probably also put the llm_config in a separate file, keep all the assistant agent and user proxy agent code in one file, and just organize the code; that's really just a matter of refactoring it well. But that's it. Now you can extend it, you can add more agents, you can give it different prompts, you can play around with the caching techniques. It's truly incredible. I'm still working on getting an open-source model set up with this, so as soon as I do, I'll put out another video. If you like this series, let me know; I want to create more stuff with AutoGen, I'm so excited about it. If you liked this video, please consider giving it a like and subscribing, and I'll see you in the next one.
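As a footnote to the refactoring suggestion above, here's one way that environment-variable approach could look. This is my own sketch, not code from the video: it assumes the python-dotenv package and the variable names OPENAI_MODEL and OPENAI_API_KEY, which are arbitrary choices rather than anything AutoGen requires.

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads key=value pairs from a local .env file into the environment

config_list = [
    {
        "model": os.getenv("OPENAI_MODEL", "gpt-4"),  # assumed variable name, with a default
        "api_key": os.environ["OPENAI_API_KEY"],      # assumed variable name; keep the .env file out of version control
    }
]
```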
Info
Channel: Matthew Berman
Views: 146,877
Keywords: autogen, autogen local, autogen tutorial, chatgpt, ai agents, artificial intelligence, coding, python, ai, openai
Id: V2qZ_lgxTzg
Length: 15min 3sec (903 seconds)
Published: Tue Oct 10 2023