AutoGen Studio Tutorial - NO CODE AI Agent Builder (100% Local)

Video Statistics and Information

Captions
AutoGen Studio is here. The Microsoft Research team behind AutoGen, the revolutionary AI agent project, has finally released AutoGen Studio, which lets you create sophisticated AI agent teams with ease. It's a fully open-source project: you can run it locally, you can power it with ChatGPT, and you can also power it with local models. It handles everything from plotting stock charts to planning trips to writing code; this is what ChatGPT's custom GPTs were supposed to be. In this video I'm going to show you how to install it, how to set it up, and how to use it, both with GPT-4 and with local models. Let's go.

First, the only thing you need to get this working with ChatGPT as the powering model is Conda. If you don't already have Conda installed, go ahead and install it now; it's a super easy way to manage Python environments, which is otherwise always a headache. The first command we run creates a new Conda environment: conda create -n ag python=3.11 ("ag" for AutoGen). Hit enter, and when it asks whether you want to proceed, hit enter again and it installs all the packages we need. Once that's done, activate the new environment with conda activate ag; you can tell it's activated because the prompt now shows "ag". Next, and this really couldn't be easier, we install AutoGen Studio, which gives us everything in AutoGen plus the user interface: pip install autogenstudio. Remember, this installs into the environment we just created, so if you deactivate it or switch environments, AutoGen Studio won't be available there. Hit enter and it installs everything we need. Next, open up your OpenAI account and go to the API keys section.
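The environment setup just described can be recapped as a short command sequence (the environment name "ag" is simply the one chosen above):

```shell
# Create an isolated Python 3.11 environment named "ag" and activate it.
conda create -n ag python=3.11
conda activate ag

# Install AutoGen Studio (pulls in the AutoGen framework plus the web UI).
# Note: this lands only in the "ag" environment, not system-wide.
pip install autogenstudio
```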
We're going to create a new key. I'm going to call it "ag" for AutoGen, then click "Create secret key" (I'll be revoking this key before publishing the video). Click copy, switch back to your terminal, and export the OpenAI key, setting it in our environment so AutoGen can access it: type export OPENAI_API_KEY= (all capitalized), paste in your newly created key, and hit enter. Next we spin up AutoGen Studio; we're pretty much done. Just type autogenstudio ui --port 8081 and hit enter. It spins up AutoGen Studio and provides a URL: localhost:8081. Copy that URL, switch over to your browser, and here is AutoGen Studio. It is absolutely gorgeous, it's super easy to use, and I'm going to show you how to do all of it. That's really all you need to get AutoGen Studio working with ChatGPT. A little later in this video I'll show you how to set it up with local models, including powering individual agents with different models; it's pretty amazing.

The first tab we'll start on is Build, just so I can walk you through the terminology. First, let's talk about skills. Skills are tools you can give your AI agents and agent teams. They can be anything, but they're usually written in code, and three come by default. The first is "generate images": if I click into it, we can see it's just the code for generating images. It defines a function that hits the OpenAI DALL-E endpoint, generates an image, and returns it. That's it, and now any agent can use this generate-images tool. We also have "find papers on arXiv," which is exactly what it sounds like: it accepts a query and returns papers found on the arXiv website.
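As an illustration of what a skill like this does under the hood, the image-generation skill essentially boils down to a request against OpenAI's image endpoint. This is a hedged sketch, not the skill's actual code, and the prompt is made up:

```shell
# Sketch: POST a prompt to OpenAI's image-generation (DALL-E) endpoint,
# which is roughly what the default "generate images" skill's code does.
curl -s https://api.openai.com/v1/images/generations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "dall-e-3", "prompt": "a watercolor fox", "n": 1, "size": "1024x1024"}'
```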
You can probably imagine how amazingly powerful this is. Any API you can connect to, you can wrap as a tool, and not only APIs: you can give it instructions to accomplish pretty much any task. Where my mind starts going is connecting it to a service like Zapier; all of a sudden you'd have integrations into so many different applications, and you could mix and match them to accomplish incredibly sophisticated tasks by handing Zapier integrations to your agents as tools. To make one, click "New Skill," give it a name, and write out the code for your skill. We're not going to do that here, but that's how you'd accomplish it.

Next are agents, and this is the most obvious concept: an agent is just an individual AI that has a role, has tools, and can perform tasks. By default there are two, a primary assistant and a user proxy, which mirrors the AutoGen framework from before it had a user interface. Think of the user proxy as you, the user; you can jump in and give input, or give no input at all and let the AutoGen team accomplish the task completely autonomously. The primary assistant is exactly what it sounds like: another AI agent, one that doesn't represent the actual user and is completely autonomous. It can write and run code, it can use tools, and it takes on a role, a description, and so on; I'll show you how to create a new agent in a little bit. By the way, this is also where you specify which model the agent uses, whether that's GPT-4 or a local model.

Next is workflows. A workflow puts everything together, including the team and the task you want to accomplish. Here's a travel agent group workflow; let's click it. It's named as a group chat workflow, and we have some options. The summary method just defines how the conversation gets summarized, and we can choose
"last," "none," or "llm." Then we have the sender and the receiver, and this is really important. The sender is usually going to be the user proxy, although that can change in more complex teams, and the receiver is going to be the group chat manager. Whenever you have more than two agents, more than just a user and an assistant agent, that's when you start using a group chat manager. I can click into the group chat manager, and this is where I add all of the agents to the group chat; here I have the primary assistant, the local assistant, and the language assistant. I give the group chat manager a name, and I can give it a description and a maximum number of consecutive auto-replies. If a lot of this seems foreign to you, check out my video where I break down AutoGen in detail and define and demonstrate all of these settings, because they are important; I'll drop the link in the description below. For human input mode we have "never," "only on terminate," or "always" on every step. Here we have the system message, where we can just say "group chat manager" or define something more elaborate to steer the agent's behavior. Then here is where we define the models. We can add multiple models and it will daisy-chain them: it starts with GPT-4 here, and if I added another one it would fall back to that if GPT-4 didn't work for whatever reason. So remember, whichever model is first in the list is your default model. Unfortunately, I couldn't find a way to drag and reorder the models, so you have to delete them and re-add them in the order you want. Down here is where you add skills, and remember, skills are pieces of code the agents can run. Click that, pick for example "generate images," and add the skill just like that; now this agent, the group chat manager, has the generate-images skill. Down here it says "or replace with an existing agent," so we could choose one and it would fill everything out for us. When we're done we'd click OK, but I'm not going to, because I don't want to save this.

Next we have the Playground, which is where you actually test out the different agent teams. You can think of a session as a fixed span in which an agent team goes off to accomplish a task, and the cool thing is that I believe this runs asynchronously. Let's create a new session. I'll show you the Mistral workflow in a bit, since that's a local model, but for now let's pick the visualization agent workflow (we can view all agent workflows here if we want). Click create, and there we go. From here we can publish it to the web, which is really cool, we can delete it, and this is where we give it the task to complete. Let's say: plot a chart of Nvidia and Tesla stock prices for 2023, and save the result to a file named nvidia_tesla.png. Now it's pinging GPT-4 to do that, and you can see it's working from the little waiting icon. One thing I would have liked is for the results to stream into this window, but it seems to wait until it's completely done before showing anything. And there it is. Again, one thing I'd like to see done differently: I want each step streamed as output, because the only way to really tell anything is happening is to switch over to my terminal and watch the output there. You can see all the output here; this is what AutoGen typically looks like, and the UI just wraps it in a really pretty interface. Let's scroll to the top: "Sure, here's the result of your request." We can see the different agent messages going back and forth. The user proxy, the agent representing me, says to plot the chart of Nvidia and Tesla. Then the visualization assistant creates the plan to do that and writes the code ("here's the code that I just wrote"), asking to run the script to fetch the stock data and save it to stock_data.csv. The user proxy does that and reports it's done and saved. The visualization assistant says great, now run the visualization, and the user proxy runs that code and saves the chart to nvidia_tesla.png. Then here are the results: the stock_data.csv file, the PNG with the actual visualization of the stock prices over time, the plot_stock_chart.py file with the code that drew it, and fetch_stock_data.py as well. The nice thing is you can easily turn these into tools so they don't have to be recreated next time. Currently there doesn't seem to be a one-click way to do that; you'd go back to Build, go to Skills, create a new skill, and essentially copy and paste what's in here into a skill on that page. But that's it, and it's incredible. I find AutoGen Studio makes it a lot easier to manage your tools most of all; I always found tool usage from raw AutoGen code a little difficult.

Let's try one more thing. I'll create a new session, choose the travel agent group workflow, click create, and ask it to paint, just to see what happens: paint a picture of a glass of Ethiopian coffee, freshly brewed, in a tall glass cup. Obviously this should use DALL-E. I'll switch over to the terminal and watch it work. Here's the user proxy agent stating what it's going to do, and then: it says it's unable to physically paint a picture. That tells me my agent team doesn't have the right tool for the job, so let's give it one. To fix the problem, we're going to use a different agent team that actually has the paint skill. Back in Build, look at the general agent workflow: it has the user proxy as sender and a primary assistant as receiver with two skills, and clicking in, one of those skills is "generate images," so that should be able to produce a picture. We can also see the daisy chain of models it uses, starting with GPT-4. So let's try it: back in the Playground, I create a new session with the general agent workflow and ask it to paint again. Hopefully this works. Switching over to the terminal, it does look like it generated the image, and let's see what happened. There it is: perfect, exactly what I asked for. So you can see it's important to think about which tools are assigned to which agents and agent teams when you're asking for specific things that need those tools. We can also look at the .py file and see the code it wrote to generate the image. I like that one, so I'll click publish; it says the session was successfully published, and over in the Gallery I can now find that session, open it up, and see exactly what happened.

Now I want to show you how to use this completely locally, and for that you'll need two things: Ollama and LiteLLM. Ollama is a wonderful tool that makes it super easy to run models locally, and LiteLLM is a wrapper that exposes an API around them. Even if that doesn't mean anything to you, it doesn't matter; I'm going to show you how to use it, and it's dead simple. Switch back to the terminal, create a new tab, and use the same Conda environment: conda activate ag. The first thing to do is install Ollama, and it really couldn't be easier: go to the Ollama website, click download, and go through the installation process. I've already done it, so I won't repeat it now, but when you're done you should see a little llama icon in your task tray, and that's it. To download a model, type ollama run mistral, which pulls down the Mistral model; I already have it, so it won't download again for me, but the first run downloads about 4 gigabytes. I'll hit enter just to make sure it's working, and there it is; I can test it with "tell me a joke." Perfect. Now we know Mistral is running completely locally.

Next, open another tab, again in the same Conda environment (by the way, you don't need to keep that Ollama session open anymore, but it doesn't matter either way): conda activate ag. Now install LiteLLM: pip install litellm --upgrade, with --upgrade just in case you already have it. One issue I ran into, which I've since fixed and probably won't hit again, is an error about a missing module called gunicorn; I don't know why it isn't installed as part of the LiteLLM package, but all I had to do was pip install gunicorn. So if you get an error saying gunicorn can't be found, that's how to fix it. Now, to stand up a server running Mistral powered by Ollama, this is all you do: litellm --model ollama/mistral, then hit enter. We're ready to go, and we can see the server spun up at localhost:8000, so copy that URL. Now switch back to AutoGen Studio, go to the Build tab, then to Agents. I already have a Mistral assistant there, so I'm going to delete it.
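The local-model plumbing above condenses to the following commands. The model names are as published in Ollama's library; the second-model lines and the alternate port at the end are my hypothetical additions for running two proxies side by side:

```shell
# Pull and test-run Mistral locally (the first run downloads ~4 GB).
ollama run mistral

# Install the LiteLLM proxy (plus gunicorn, if LiteLLM complains it's missing).
pip install litellm --upgrade
pip install gunicorn

# Expose Mistral behind an OpenAI-compatible API at http://localhost:8000.
litellm --model ollama/mistral

# (Hypothetical) serve a second local model for a different agent, in another
# tab, on its own port so the two proxies don't collide:
#   ollama run llama2
#   litellm --model ollama/llama2 --port 8001
```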
I'm going to create a new agent, a Mistral agent. For the name I'll say "mistral assistant," and for the agent description, "helpful assistant powered by Mistral locally." Max consecutive auto-replies I'll leave as is, human input mode is "never," and for the system message I'll keep it simple: "You are a helpful assistant." Right here you can see it defaults to GPT-4, so get rid of that and click add; this is where we tell it to be powered by the local Mistral model. I'll call the model "mistral"; you don't need an API key; for the base URL, click paste to use the local URL we just copied; everything else we don't need. Then click "Add model." We won't give it any skills for now, but feel free to when you're testing. Click OK, and now we have a Mistral assistant powered by Mistral. The cool thing is I could also have a Mixtral assistant and a Nous Hermes assistant, and they could all run at the same time; it's truly incredible.

Now let's go to Workflows and create a new workflow. We'll call it "Mistral workflow"; the workflow description we'll leave alone, the summary method is fine, and the user proxy is fine, though it's GPT-4 powered, so if you did want the user proxy locally powered, you'd open the user proxy agent and swap GPT-4 for Mistral. As the receiver, we change it to the Mistral assistant: for the agent name, "mistral assistant"; the agent description I'll leave blank for now (again, feel free to customize this as much as you want); human input mode is "never"; and the system message here is the one AutoGen uses, so I won't touch it. Then delete GPT-4 here, add a new model, again name it "mistral" with the same localhost:8000 base URL, and then click "Add model."
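If you want to sanity-check the proxy before pointing an agent at it, the base URL you pasted should answer an OpenAI-style chat completion request. This is a sketch with a made-up prompt; as far as I know LiteLLM also accepts the request with a /v1/ prefix:

```shell
# Ask the local Mistral model a question through the LiteLLM proxy.
curl -s http://localhost:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ollama/mistral",
        "messages": [{"role": "user", "content": "Tell me a joke."}]
      }'
```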
We won't give it any skills, and we'll just click OK, then OK again. Now we have a Mistral agent and a Mistral workflow, and we should be able to use it powered by Mistral. Go to the Playground, click new, select the Mistral workflow, click create, and say "tell me a joke" just to see if it works. Hit enter, and we can see it worked: there's the POST to the chat completions endpoint in the LiteLLM logs, so it went through, although what it told me wasn't anything good; that's fine. Let's see it accomplish something a little more difficult: write code to output the numbers 1 to 100. There it is, and it was extremely fast. Switching back to the terminal, we can see it did POST to the chat completion endpoint, so it worked; there's the code, and there are the termination messages. The user proxy says to write code to output the numbers 1 to 100; the Mistral assistant writes the code and sends TERMINATE, which ends the run. And that's it: now you know how to power AutoGen Studio with a local model.

What if you wanted different models for different agents? Come back over to Ollama, exit out of the chat, and run ollama run llama2, which initiates the download. Once it's done downloading, leave the LiteLLM instance that's serving Mistral running, create a new tab, conda activate ag, and run litellm --model ollama/llama2. Hit enter and it gives you a new URL, and you do the same exact thing: come into Build, set up a new agent as a Llama assistant, input the URL as normal, set up the workflow as normal, and you're done. Now you have different assistants powered by different local models, and you can plug and play as you see fit; the best part is you can find the right fine-tuned model for the right task. One last thing I want to mention: it actually has sign-out functionality, but when you click it, it says
"Please implement your own logout logic," which means you can set up your own authentication within AutoGen Studio; if you wanted to share this among your team, you could set it up to do that. I am so impressed by AutoGen Studio. Let me know what you think in the comments, and if you want any kind of follow-up or deeper dive into AutoGen Studio, let me know what you'd like to see. If you liked this video, please consider giving it a like and subscribing, and I'll see you in the next one.
Info
Channel: Matthew Berman
Views: 190,845
Keywords: ai, ai agent, ai agents, autogen, autogen studio, llm, artificial intelligence, large language model, openai, chatgpt, gpt4, open source
Id: mUEFwUU0IfE
Length: 18min 33sec (1113 seconds)
Published: Mon Jan 15 2024