Introduction to LangGraph: A Quick Dive into Core Concepts

Captions
Hi everyone, welcome to this video, the start of a small series on agents with LangGraph. LangGraph is built on top of LangChain and allows you to build agents or agentic workflows. You can also use it for other purposes, but agents are its main focus. The core concept of agents is to use an LLM to decide on a sequence of actions: the LLM functions as a reasoning engine that determines which actions to take and in what order. LangGraph also enables the combination of multiple agents into a so-called agent swarm, facilitating data sharing, performing various tasks, and allowing large language models to interact with each other.

You might be familiar with my video about CrewAI, or might have heard about AutoGen, so what's the difference? AutoGen and CrewAI offer high-level frameworks with their own pros and cons. LangGraph has a steeper learning curve, but it provides a deeper understanding of how agents operate, which enables you to fine-tune and customize your workflows at a much more granular level. This lower-level framework allows for greater flexibility and control over the behavior and interaction of agents to meet specific and complex requirements. But no worries, it's not that difficult once you understand the most important concepts of LangGraph: nodes, edges, and state.

So what's a node? A node is a function or runnable that performs a specific task within the graph. Each node processes input and updates the state based on its operation. An edge defines the connection and flow between nodes, determining the order of execution; conditional edges can route execution based on the current state. State represents the data passed between nodes during execution: each node updates this internal state with its return value, allowing for persistent and dynamic workflows. In the end you will have a structure of nodes that change the state and route it on to the next node, which updates the state again. You can also define a special end node to indicate that the workflow execution is complete.

Okay, enough theory for now, let's start with some very basic examples that don't even use an LLM. We will first learn the concepts through hands-on practice and then move to a more complex, real-world example involving an LLM. I'm in VS Code, and as you can see we've got multiple notebooks here; we are going to use this langgraph.ipynb notebook, the beginner notebook. The first step before we can use LangGraph is to install it: this is done with pip install langgraph, which installs the library into your environment. The next step is to work with an OpenAI API key: I've got a .env file, and there is my OpenAI API key. For the very basic examples this is not needed, but later, when we move to a real-world example with an LLM, you need an OpenAI API key, or you can use a different model. To load that .env file we also have to install python-dotenv, and then we can just execute this line of code; if we see a True, it means that our environment variables were imported into our environment.
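As a rough sketch of that setup (the package names and file layout are assumptions based on the description above, not the exact notebook contents):

```python
# Install the libraries once in your environment:
#   pip install langgraph langchain langchain-openai python-dotenv

from dotenv import load_dotenv

# Loads OPENAI_API_KEY (and any other variables) from a local .env file.
# Returns True when the file was found and loaded.
load_dotenv()
```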
Okay, let's now work with LangGraph. We import two classes from langgraph: END and MessageGraph. END is a special value that indicates that a workflow execution is over, and MessageGraph is one of the predefined graphs provided by LangGraph, which makes it very easy to interact with an LLM. As you can see, we've got a function here which takes an input, which is a list of messages. That is the normal workflow when you work with LLMs: you have a list of messages that you pass to, let's say, OpenAI. In this case we keep it very simple: we take the input, access the first element, and append a single "a" to its content. Nothing really complex here.

The first step is to create an instance of that MessageGraph. Now we can add nodes and edges. To add a node we call the add_node method and provide a key and a function or runnable; both normal functions and LangChain runnables are acceptable. After executing this add_one function in branch_a we want an edge which routes to another node, and this can be done with add_edge: we pass in the start key, branch_a, and the key of the target node. That target node is branch_b, where we perform the add_one function again, so that's the link between branch_a and branch_b. We can also add multiple edges from one node: branch_a also routes to branch_c, where we again perform add_one. Then both branch_b and branch_c route to the same final node, which adds one more "a", and the edge from this final node is END, which indicates that the workflow is over.

A very important last step is to set an entry point, because otherwise it's not clear where our LangGraph workflow should start execution. This is done with the set_entry_point method, and we set the entry point of this graph to branch_a. After setting all the nodes and edges and the entry point, we have to compile the graph. We call the compile method, which creates a runnable that can be used with the invoke method; that's the standard interface of LangChain, so if you're familiar with LangChain this is very easy to use.

What's very helpful, in my opinion, is to also visualize our graph or runnable. After compiling it we can use the get_graph method, and in combination with the Image class and the display function from IPython we can visualize the graph. I think this is very helpful to actually see what's going on in the workflow, because it is much easier to understand than the code alone, especially when your graphs become more complex. We start at our entry point branch_a, then branch off to branch_b and branch_c, come together again at the final node, and that leads to the END node. Now let's execute it, and maybe think ahead about what actually happens: we start with a single "a", in branch_a one "a" is added, then we branch to branch_b and branch_c, which each add one "a", and in the final node we also add one "a", so we should end up with five a's. Let's see if that actually happens. Yes, you can see we've got five a's here.
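A minimal sketch of that first graph, reconstructed from the description above (node names are approximations, and the drawing helper may differ between LangGraph versions):

```python
from IPython.display import Image, display
from langchain_core.messages import HumanMessage
from langgraph.graph import END, MessageGraph

def add_one(input: list):
    # The state of a MessageGraph is a list of messages;
    # append a single "a" to the content of the first one.
    input[0].content = input[0].content + "a"
    return input

graph = MessageGraph()

# Nodes: a key plus a function or runnable.
graph.add_node("branch_a", add_one)
graph.add_node("branch_b", add_one)
graph.add_node("branch_c", add_one)
graph.add_node("final_node", add_one)

# Edges: branch_a fans out to branch_b and branch_c,
# both of which join again at final_node, which routes to END.
graph.add_edge("branch_a", "branch_b")
graph.add_edge("branch_a", "branch_c")
graph.add_edge("branch_b", "final_node")
graph.add_edge("branch_c", "final_node")
graph.add_edge("final_node", END)

graph.set_entry_point("branch_a")
runnable = graph.compile()

# Visualize the compiled graph in a notebook.
display(Image(runnable.get_graph().draw_mermaid_png()))

# "a" picks up one more "a" in each of the four nodes -> "aaaaa".
runnable.invoke(HumanMessage(content="a"))
```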
Okay, this was straightforward, but often you want to perform a different action based on the input. This can be achieved with conditional edges, so let's create another very simple example. We use HumanMessage, MessageGraph, and END again. As input for our state we've got a list of human messages, and in this entry function we do nothing, we only return the same input. Then, in our next function, we take some input and print "using branch b", and another function, called work_with_c, just prints "using branch c"; neither does anything with the state. Then we've got a special router function, which is required for conditional edges: instead of returning the state, it returns a string, and depending on the input it returns either "branch_b" or "branch_c". This router function is then used in a conditional edge.

Let's build the graph. We create our MessageGraph again and add a node branch_a, which performs the entry function, so nothing really happens there. Then we've got our nodes branch_b and branch_c: branch_b performs the first function and branch_c the second one. Depending on the input we want to route between them, and this is done with the conditional edge. The edge on branch_a is set here: we route based on the output of the router, and where we route to is given by the third argument of add_conditional_edges, a path mapping. If the output of the router function is "branch_b", that string is used as the key and we route to the node given as the value, so we route to branch_b; if the output string is "branch_c", it matches that key and we route to the node branch_c. For the nodes branch_b and branch_c we only add an edge to END. Our entry point is branch_a again, then we compile the graph and visualize it. As you can see, conditional edges are visualized with a dashed line instead of a solid line, which indicates that we've got a conditional edge.

Okay, now let's run the invoke method and pass in "hello". Think again about what happens: if the content of the input message contains "use_b", we route to branch_b, otherwise we route to branch_c. What do you expect to happen? Think about it for a second. Since we don't have "use_b" inside the string, we use branch_c, and if we run it the printed output confirms that branch C was used. So that's a very simple example of how conditional edges work with LangGraph.
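A sketch of that conditional-edge example, with function and node names approximated from the audio:

```python
from langchain_core.messages import HumanMessage
from langgraph.graph import END, MessageGraph

def entry(input: list):
    # Do nothing, just pass the state through.
    return input

def work_with_b(input: list):
    print("using branch b")
    return input

def work_with_c(input: list):
    print("using branch c")
    return input

def router(input: list) -> str:
    # A router for a conditional edge returns a string, not the state.
    if "use_b" in input[0].content:
        return "branch_b"
    return "branch_c"

graph = MessageGraph()
graph.add_node("branch_a", entry)
graph.add_node("branch_b", work_with_b)
graph.add_node("branch_c", work_with_c)

# The router's return value is looked up in the path mapping (the key),
# and execution continues at the node named by the corresponding value.
graph.add_conditional_edges(
    "branch_a", router, {"branch_b": "branch_b", "branch_c": "branch_c"}
)

graph.add_edge("branch_b", END)
graph.add_edge("branch_c", END)

graph.set_entry_point("branch_a")
runnable = graph.compile()

runnable.invoke(HumanMessage(content="hello"))  # prints "using branch c"
```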
After these examples you may think, "I could do all of this with the LangChain Expression Language", and if you think that way, you're actually right. The main difference is that LangGraph allows you to run cycles, which LCEL does not allow. So let's now have a look at cycles. In this example we again work with some very simple functions: we've got our entry-point function, which takes an input and just returns the same output, so it doesn't do anything. Then we perform an action: we look at the input messages, and if the length of the list is larger than five we add a human message with the content "end", otherwise we add a human message with the content "continue". We use that content to evaluate whether we should continue with our workflow or not: we extract the last message and check whether "end" is in its content; if it is, we return "__end__", otherwise we return "action". That conditional edge decides whether to perform the action again or not.

So where is the difference now? The difference is in how we create our graph. We start with a node called agent, where we perform our entry function, which does nothing. Then we add another node called action, where we perform our action. Then we add a conditional edge: it is set on the agent node and runs the should_continue function. If the output of should_continue is "action", we route to the action node; otherwise we route to END, and that is our criterion for ending the workflow. Then we add an edge, and the edge for the action node goes back to the agent node, and this is where it gets cyclic: as long as the should_continue function does not return "__end__", the agent node routes to the action node and the action node routes back to the agent node.

Let's see how this actually looks: we set our entry point, compile the graph again, and visualize it. We can see that the agent node routes conditionally either to the action node or to the end node, and action routes back to agent, so this is where it gets cyclic, and this is why you might use LangGraph over the LangChain Expression Language if you ever need cycles. Let's invoke this now with "hello": as you can see, we add "continue", we add "continue" again, and again, until we hit a length of five; then we add a message with "end", and since "end" is in the last message, we route to the END node. As we can see, once the list is longer than five messages the workflow ends.
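And a sketch of the cyclic graph, again reconstructed from the description (the exact length threshold and names are as stated in the video, everything else is an approximation):

```python
from langchain_core.messages import HumanMessage
from langgraph.graph import END, MessageGraph

def entry(input: list):
    # Pass the state through unchanged.
    return input

def action(input: list):
    # Append "continue" until the message list is longer than five,
    # then append "end" so the router can stop the workflow.
    if len(input) > 5:
        input.append(HumanMessage(content="end"))
    else:
        input.append(HumanMessage(content="continue"))
    return input

def should_continue(input: list) -> str:
    last_message = input[-1]
    if "end" in last_message.content:
        return "__end__"
    return "action"

graph = MessageGraph()
graph.add_node("agent", entry)
graph.add_node("action", action)

graph.add_conditional_edges(
    "agent", should_continue, {"action": "action", "__end__": END}
)
graph.add_edge("action", "agent")  # this edge closes the cycle

graph.set_entry_point("agent")
runnable = graph.compile()

runnable.invoke(HumanMessage(content="hello"))
```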
Okay, after learning the very basic concepts with some toy examples, we can now create a real agent which interacts with a fake API, in this case a fake weather API. To do this we are of course going to use a chat model now, and we instantiate the ChatOpenAI class from LangChain. We will also create a custom state: instead of the built-in one, we use our own. What we want in our agent state is a list of messages, like before, but we also want an api_call_count, which is initially zero; with this we measure how often we tried to get the weather from the API, and after three tries we just say that the service is currently not available.

The next step is to create a tool, or function, for our LLM. We use the tool decorator from LangChain and decorate the fake_weather_api function; here is the docstring, "Check the weather in a specified city". We've got a single argument, the city, and we randomize the output: if the random value is one we return "Sunny, 22°C", otherwise we return an error message, "Service temporarily not available". Let's check it out with New York, Berlin, and London: if we run it, we see "Service not available", then "Sunny", and so on, so this is randomized. We now bind this tool to the ChatOpenAI instance, so the LLM is able to decide whether it needs the fake_weather_api tool to answer a question or not.

Let's run this, and now let's actually call this LLM with tools. First we have to provide some boilerplate code: a tool mapping, where the key is the tool name and the value is the function we want to call. Then we create a list of messages and provide a single message, "How will the weather be in Munich today? I would like to eat outside if possible". We pass that to the invoke method and append the result to the messages list. If we have a look at the messages, here is our initial message, and if you scroll to the right we can see an AIMessage with an empty string as content; what's important here is the tool_calls attribute, where we can see that the LLM wants to use the fake_weather_api and provides the name Munich as the argument. This was all done by the LLM, and now we actually want to call this function. If you're not familiar with tool calling, I've got a video about tool calling with LangChain. Here is the boilerplate code for this: we perform the tool call, and the tool output is added to the messages list as the content of a tool message; we also have to provide the special ID for this tool call. Now we can invoke the model again and get the final output: "The weather in Munich is sunny today, with a temperature of 22°C". This is what the LLM does with the output of our fake weather API.

Okay, now let's use this logic to create an agentic workflow with LangGraph. Again we've got our router function; this time we extract the messages and check whether the last message has this tool_calls attribute. If that's not the case, we return "end", otherwise we return "continue", and we use that logic to route to the call_tool function. What happens there is that we extract the messages, take the last one, read its tool_calls attribute, get the tool name we want to use, invoke the tool, get the output, and increase the api_call_count by one. Then we create a new ToolMessage whose content is the tool output, and we return the state. Since we use operator.add for the messages in our state, whatever a node returns always gets added to the messages sequence; this is very nicely handled by LangGraph, so we only return the tool message and it gets appended to the whole message list. And, almost forgotten, here we've got our call_model function, which also takes the state: we extract the messages, pass them to the invoke method of the LLM with tools, get a response, and again just return the response, because operator.add appends it to the messages.

Okay, let's run this and create our graph. We use a different graph class now, the StateGraph, which is able to work with our custom agent state: we pass the AgentState to StateGraph, and this creates our workflow, or runnable, however you want to call it. Then we add nodes again: we add the key "agent", where we perform call_model, so here we just run the LLM, and another node, "action", which makes use of the call_tool function. So we've got two nodes now, our entry point is "agent", and we need our conditional edge: the conditional edge for the agent is determined by the should_continue function. Again, if we've got the tool_calls attribute we continue, and if we don't have it we return "end". So if the LLM decides that it needs a tool, we go to the tool-calling node, otherwise we just end. From the action node we then route back to the agent node, because the agent is the node which creates the final answer. As long as we've got the tool_calls attribute we will always add another tool call, but once that's not the case, the final answer has been created and the user should be able to see it.
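Putting it all together, here is a consolidated sketch of the weather agent as described above. The model name, import paths, and some helper details are my own assumptions (the video does not spell all of them out), so treat it as an illustration of the idea rather than the exact notebook code:

```python
import operator
import random
from typing import Annotated, Sequence, TypedDict

from dotenv import load_dotenv
from langchain_core.messages import (BaseMessage, HumanMessage,
                                     SystemMessage, ToolMessage)
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import END, StateGraph

load_dotenv()

class AgentState(TypedDict):
    # operator.add means everything a node returns under "messages"
    # is appended to the existing list instead of replacing it.
    messages: Annotated[Sequence[BaseMessage], operator.add]
    api_call_count: int

@tool
def fake_weather_api(city: str) -> str:
    """Check the weather in a specified city."""
    if random.randint(0, 1) == 1:
        return "Sunny, 22°C"
    return "Service temporarily not available"

llm = ChatOpenAI(model="gpt-4o-mini")  # model choice is an assumption
llm_with_tools = llm.bind_tools([fake_weather_api])

def call_model(state: AgentState):
    # Let the LLM decide whether to answer directly or request the tool.
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def call_tool(state: AgentState):
    # Execute the tool call(s) requested by the last AI message
    # and feed the results back as ToolMessages.
    last_message = state["messages"][-1]
    tool_messages = [
        ToolMessage(
            content=str(fake_weather_api.invoke(tool_call["args"])),
            tool_call_id=tool_call["id"],
        )
        for tool_call in last_message.tool_calls
    ]
    return {"messages": tool_messages,
            "api_call_count": state["api_call_count"] + 1}

def should_continue(state: AgentState) -> str:
    last_message = state["messages"][-1]
    return "continue" if getattr(last_message, "tool_calls", None) else "end"

workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent", should_continue, {"continue": "action", "end": END}
)
workflow.add_edge("action", "agent")  # the cycle: tool results go back to the LLM

app = workflow.compile()

result = app.invoke({
    "messages": [
        SystemMessage(content="You are responsible for answering user questions. "
                              "You use tools for that, like the weather API."),
        HumanMessage(content="How will the weather be in Munich today? "
                             "I would like to eat outside if possible."),
    ],
    "api_call_count": 0,
})
print(result["messages"][-1].content)
print("API call count:", result["api_call_count"])
```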
Okay, let's compile it and create our image again. As you can see, we've got our agent: if it continues, it performs an action and routes back to the agent, and if we don't have the tool_calls attribute, we just end. Let's try it. We create a system message, "You are responsible for answering user questions and you use tools for that", like our weather API, and then invoke the graph. As you can see, the result is "Service temporarily not available". Let's maybe try it again, because we exceeded the maximum number of iterations. Now we can see that the first tool output was "temporarily not available", the LLM decided it had to use the tool again to answer the question, and on the second try it got back "Sunny, 22°C", which becomes our final answer. If we have a look at it, we see "The weather in Munich today is sunny, with a temperature of 22°C", and our api_call_count was two. So if our API is unstable, we can use an agentic workflow to make sure that the agent tries again and again until it is able to answer the question.

Great, that's it for the video. I hope you now understand the idea behind LangGraph and its most important concepts. In the upcoming videos we can work on more complex multi-agent workflows. Thank you very much for watching, see you, bye-bye.
Info
Channel: Coding Crashcourses
Views: 6,169
Keywords: langchain, langgraph, agents
Id: J5d1l6xgQBc
Length: 21min 53sec (1313 seconds)
Published: Mon May 27 2024