Use crewAI and add a custom tool to store notes in Obsidian

Captions
In this tutorial we create a custom tool for crewAI that adds search results as a note in Obsidian. We use it with ChatGPT and multiple local LLMs and compare the results. Watch till the end, as there are some surprises along the way.

crewAI is a multi-agent framework where AI agents work together to solve complex problems. Compared to other multi-agent frameworks, crewAI gives you more control over the flow by assigning tasks to agents. Some core concepts of crewAI are: defining agents as blueprints, then defining well-defined tasks and assigning them to those agents. The agents work together and process the tasks to solve the problem. In this process they can use tools from LangChain or other custom tools, which is the main subject of this tutorial.

A tool in crewAI is a function that an agent uses to perform some action. These tools are actually LangChain tools, so we can use many ready-to-use LangChain tools for different tasks, like DuckDuckGo search, the file system, or Wolfram Alpha. But we can also create our own custom tools and extend the possibilities. When we navigate to the LangChain docs on defining custom tools, we see there are multiple ways to define one, but the easiest is to use the @tool decorator. A tool has different components, like a name, a description, and an args schema. Besides that, it is important to consider how many inputs or parameters the tool has, as many agents only work with functions that require a single input. This gets even more important when you work with local LLMs, so as a rule of thumb: whenever possible, create tools with only one input. When defining tools with the @tool decorator, the decorator uses the function name as the tool name and the function docstring as the tool description, so we must provide a docstring for the function.

For this tutorial we use ChatGPT and some local LLMs. To work with local LLMs we use Ollama. Ollama can be easily installed on Mac or Linux, but in this tutorial we use it on Windows.
To see how to install Ollama on Windows using WSL2, you can check our YouTube channel, and if you are interested in AI development, consider subscribing to the channel and supporting it with some likes and comments.

Back to our tutorial. We create a new directory and switch into it, and from inside the folder we start Visual Studio Code. For this tutorial we only need two packages. We can install them directly or create a requirements.txt file and list the packages. Before installing the packages we create a virtual environment and activate it. After the virtual environment is activated and we see its name before our prompt, we can install our packages. After the installation is completed we can use pip freeze to see which packages are installed; at the time of this recording, langchain is at version 0.1.1 and crewai at 0.1.32.

We can close the files and make some room for our tools package. Now the main part begins. We create a directory and give it a name like tools, and place an __init__.py inside it to turn it into a package; as we do not have any initialization steps, we leave it empty. Inside the tools directory we create a new Python file, custom_tools.py, and paste the script. In the script we import tool from langchain.tools and some other basic packages. Next we define the CustomTools class and use the @tool decorator. We define our function and use a docstring as the tool description. For simplicity we hardcode the path to our Obsidian vault here, but you can improve that with an environment variable. Next we create a file name containing "crewai-note" and the current time, format it, and finally write the content to the file. If an error happens, we raise an exception.

To test our tool we create a main.py file and import CustomTools from our package. Next we assign our function to a tool and run it with a simple Markdown text of "hello **world**", where "world" should be bold. When we open Obsidian we see our old notes from the last tutorial, created by AutoGen, and our just-created note using our tool.
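A minimal sketch of such a note-storing tool might look like the following. The vault path, file-name pattern, and function name are assumptions of ours, not the exact values from the video, and in the tutorial the function is additionally wrapped with LangChain's @tool decorator so the agents can call it:

```python
# Hypothetical sketch of the Obsidian note-storing tool described above.
# VAULT_PATH and the file-name pattern are assumptions; the video hardcodes
# its own vault path (an environment variable would be an improvement).
from datetime import datetime
from pathlib import Path

VAULT_PATH = Path("obsidian-vault")          # stand-in for the real vault path
VAULT_PATH.mkdir(parents=True, exist_ok=True)  # the real vault already exists


def store_note_in_obsidian(content: str) -> str:
    """Store the given Markdown content as a new note in the Obsidian vault."""
    try:
        timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        note_file = VAULT_PATH / f"crewai-note_{timestamp}.md"
        note_file.write_text(content, encoding="utf-8")
        return f"Note stored in {note_file}"
    except OSError as err:
        # surface the failure to the calling agent instead of failing silently
        raise RuntimeError(f"Could not store note: {err}") from err
```

Note that the tool takes a single input, matching the one-input rule that matters for local LLMs.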
As you can see, the text is rendered as Markdown and we see "world" in bold. Now that we have implemented and tested our tools package, we can extend crewAI so that the agents can use the function too. crewAI uses OpenAI's GPT-4 by default, so we create a file to test crewAI in the default configuration. We import some packages from crewAI and import the custom tools from our tools package. Then we assign our OpenAI API key. I will revoke this key before uploading the video, so please use your own API key. To be able to search, we use DuckDuckGoSearchRun from the LangChain tools; we already installed the package. You can use other search tools if you like.

Now that we have imported our tools, it's time to set up our agents. For this tutorial we need three agents. The first agent is the researcher agent. We give the agent a role, a goal, and a backstory that acts like the system prompt. To get some logs we set verbose to true, and we set allow_delegation to false, as this agent has a simple role: to search, and it shall do this on its own with the tool we provide to it. In tools we can give an agent a list of tools, but in this case we simply assign the search tool to our researcher. The next agent is our note-taker agent. Again we assign a role, a goal, and a backstory to tell the agent what to do and how to behave; here we insist that it should store notes as Markdown. Again we set verbose to true and allow_delegation to false, and we assign the store-note-to-Obsidian function from our CustomTools class to its tools. The next agent is the editor agent; its goal is to summarize content. Please notice that for this agent we set allow_delegation to true.

After defining the agents, it's time to define our tasks. In brief, we want to search for school holidays in Hamburg in 2024, then summarize and structure the search results, and finally store the information as a Markdown note in Obsidian. So we split this complex job into three well-defined, simpler tasks, and here comes the advantage of crewAI: we assign each task to a specific agent.
The next step is to define the crew: we assign the list of agents and tasks and set the logging level. After everything is set up, we run crewAI with crew.kickoff() and print the result. When we run the Python script, we see that the search tool is used and the result is summarized and saved as a note. When we open Obsidian we see the note created by crewAI. Unfortunately the information is not structured as a table, and all of it is formatted as a header. We can remove the header, but we still see some problems in the format of the note.

When you work with multi-agent frameworks like AutoGen, TaskWeaver, or crewAI, you soon realize that you spend a lot of tokens. To reduce the cost and still use OpenAI, we could use a cheaper OpenAI model like gpt-3.5-turbo-1106. Let's find out if the cheaper model can use our custom tool too and save a note in Obsidian. We copy and paste our GPT-4 script to a new file to adjust it for GPT-3.5. Here we need to import ChatOpenAI from langchain.chat_models and configure it to use gpt-3.5-turbo-1106 with a low temperature, and assign it to llm_gpt35. Now we can tell the agents to use llm_gpt35 as their LLM. You can use different LLMs for different agents, but here we simply want to use one LLM for all agents to compare the results.

We run the script and see that a note is created, and a Markdown table is created too. As mentioned in our last video about Obsidian, we can use the new table tool in Obsidian to improve and expand the Markdown table and continue with our research. To our surprise, the result of GPT-3.5 is better than that of GPT-4, and we use far fewer tokens. But it is still not complete, as we see on the official site of the city. Let's see if we can further improve the result with a free, open-source LLM.

To test the script with local LLMs we use a tool called Ollama. You can run a lot of models locally with Ollama, from the featured Mistral to the newer OpenHermes. When we search for "function" in the LLM names, we find no match.
But when we search for "function" in the whole page, we find Nexus Raven, so we will give this model a try too. First, though, we need to load the LLMs onto our machine. To run Mistral locally on Windows using Ollama, we start Ubuntu on WSL2 and type `ollama run mistral`. If it is the first time we use Mistral, it will first download the model. After the model is downloaded, we type "tell me a joke"; this is something like "hello world", and we use it to test the LLM. When everything is set up correctly, we get a joke from the local Mistral model. After we are done, we quit with /bye.

Back in Visual Studio Code, we duplicate our last script and give it a name like local_mistral.py. This time we import Ollama from langchain.llms and use the Mistral model in the local LLM variable. Next we use this local LLM as the llm for all agents. When we run the script, a note is created. So the free, local, open-source, private Mistral LLM was able to use our tool and create a note. The content of the note is not complete and not well formatted, but just the fact that the note is created is a victory.

Let's see if we can improve it with Nexus Raven, which is tuned for function calling. As it is the first time we use this LLM, Ollama downloads the model to our machine. We test it with "tell me a joke", and here we see the difference between Mistral and Nexus Raven and how the latter is tuned for function calling. Back in Visual Studio Code, we copy and paste local_mistral.py, rename it for Nexus Raven, and the only change we need to make is to replace Mistral with Nexus Raven; the rest of the code is the same. The expectation is high, but when we run the script we get surprised again: it goes into an endless loop and we have to terminate the script manually. No note is created. That doesn't mean the model is not good; it means that with a simple model replacement we could not use it with crewAI. Maybe we need a Modelfile with some adjustments in Ollama. So let's move on and test more LLMs. The next LLM is OpenHermes, and we follow the same process for it.
Indeed, a new note is created. The Markdown format problems are similar to GPT-4's, so let's try another LLM. This time we test OpenChat, following the same procedure. If we had a UI, we would make a dropdown to switch between the LLMs, but we keep it simple here. With OpenChat a new note is created and the content is in Markdown, but it is not complete. Maybe we can improve the answer with some prompt engineering.

Back in Visual Studio Code, we first remove the OpenAI key, as it was not needed for any of the open-source models. We do some prompt engineering, improving our backstory and the task regarding the note-saving process, and run our script again. This time we get a Markdown table, but there is another surprise when we look at the content: the answer is now about the United States. And no, it's not a hallucination, as there is a city Hamburg in Germany and a town Hamburg in the United States, in New York State. But why the OpenChat LLM decided, after our prompt engineering, to use the Hamburg in the United States is a secret that only OpenChat can answer.

To wrap it up: you can extend crewAI with your own custom tools based on LangChain. But here is my takeaway. If you're a no-code developer, the bad news is that the LLMs are not yet ready for reliable, production-grade use cases; the good news is that development is so fast that it's just a matter of time. And if you're a programmer, just use your programming skills and make decisions based on programming logic. That way you save money on tokens and time on local LLMs. Use LLMs only where they shine, like semantic search, market sentiment, or summarization. So good luck with AI development to all of you!
Info
Channel: business24_ai
Views: 4,357
Keywords: crewai, crewai langchain, ollama, ollama windows, mistral, langchain, langchain custom tool, nexus raven, nexus raven 13b, openhermes, openchat, openai, chatgpt, chatgpt 4, ai, ai agents
Id: Iqp6mE1xHOw
Length: 16min 42sec (1002 seconds)
Published: Tue Jan 23 2024