Okay. So in this video, I want to have a look at LangGraph. I'm going to talk a little bit about what it is, and then I'll go through some coding examples. So if you're interested in building LLM agents, you'll want to learn this, and then maybe over the next few videos we can look at going more in depth, building some different agents and some different use cases.

So first off, what actually is LangGraph? You can think of this as sort of the new way to run agents with LangChain. It's fully compatible with the LangChain ecosystem, and it can make especially good use of the new custom chains built with the LangChain Expression Language. But this is built for running agents.

So they talk about the idea of it being a graph, and what they mean by a graph here is that you've basically got nodes joined by edges, and those edges aren't always going to be directed. So this is not a DAG or a fixed directed graph in any way. This is a graph where nodes can make decisions about which node to go to next.

Another way of thinking about this is that it's like a giant state machine that you're building, where the graph is the state machine that decides: okay, what state are you in now? What state will you go to next, to run a particular chain or a particular tool? How do you get back? And how do you know when to complete, or end the graph, or end the sequence?

So LangGraph is built on these ideas: trying to make it easier for you to build custom agents, and to build things that are more than just simple chains with LangChain.
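To make that state-machine picture concrete, here's a minimal pure-Python sketch of the idea (this is not LangGraph's actual API; the node names and state keys are made up for illustration): each node updates a shared state and names the next node, and the loop runs until a node returns an END marker.

```python
# A tiny state-machine sketch of the idea behind LangGraph (not its real API).
# Each node takes the state, updates it, and says which node runs next.

END = "__end__"  # sentinel meaning "stop the graph"

def agent(state):
    # Decide: if we don't have a result yet, go use the tool; otherwise finish.
    if state.get("result") is None:
        return state, "tool"
    return state, END

def tool(state):
    # Pretend to run a tool and store its output, then go back to the agent.
    state["result"] = state["input"].lower()
    return state, "agent"

def run_graph(nodes, entry, state):
    current = entry
    while current != END:
        state, current = nodes[current](state)
    return state

final = run_graph({"agent": agent, "tool": tool}, "agent", {"input": "HELLO"})
print(final["result"])  # hello
```

The point is just that control flow lives in the nodes' return values, not in a fixed pipeline.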
There are a number of key parts to this. The first is the state graph. This is where your state is persisted in some way throughout the agent's life cycle. You can think of it as a way of passing a dictionary around, from chain to chain or from chain to tool, and being able to update certain things as you go. And you can update things in two ways: you can just overwrite them, or you can add to them. So if you've got a list of things like intermediate steps, you can keep appending to that list as the agent runs through the various parts of the graph.
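The overwrite-versus-add distinction can be sketched in plain Python like this (a stand-in for what the graph runtime does, assuming a made-up `REDUCERS` registry; LangGraph itself reads this from the state's type annotations):

```python
import operator

# Sketch of the two update styles: plain keys get overwritten,
# while keys registered with a combine function (like operator.add) accumulate.
REDUCERS = {"intermediate_steps": operator.add}  # list + list appends

def apply_update(state, update):
    new_state = dict(state)
    for key, value in update.items():
        if key in REDUCERS and key in new_state:
            new_state[key] = REDUCERS[key](new_state[key], value)
        else:
            new_state[key] = value  # overwrite
    return new_state

state = {"agent_outcome": None, "intermediate_steps": []}
state = apply_update(state, {"agent_outcome": "call tool",
                             "intermediate_steps": [("tool", "4")]})
state = apply_update(state, {"intermediate_steps": [("tool", "four")]})
print(state["intermediate_steps"])  # [('tool', '4'), ('tool', 'four')]
```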
The next part, which is key to this, is the whole idea of nodes. As you build the graph, you add nodes to it, and you can think of these nodes as being like chains. Actually, they're runnables, so a node could be a tool, it could be a chain, and you can have a variety of different nodes. Think of them as the components of your agent that you need to wire together somehow.

So while the nodes are the actual components, the edges are what wires everything together, and the edges come in different forms as well. You can set a plain edge that always goes from one node to another. So if you have a tool's return value going back to the main node, you're probably going to want to hardwire that in. But you can also set conditional edges. A conditional edge allows a function, often backed by the LLM, to decide which node to go to next. You can imagine this being useful for deciding whether you're going to go to a tool, and which tool you're going to go to, or whether you're going to go to a different persona in the agent. Say your agent has multiple personas and you want to go from one to the other, or you want a supervisor that delegates to different personas: all of those decisions are handled by conditional edges.
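A conditional edge boils down to something like this sketch (plain Python, with a made-up `wants_tool` flag standing in for the LLM's decision):

```python
# Sketch of a conditional edge: a routing function inspects the state and
# returns the NAME of the next node, and a mapping ties names to nodes.
def should_continue(state):
    # In LangGraph this decision is often driven by the LLM's last output;
    # here we fake it with a simple flag.
    return "continue" if state["wants_tool"] else "end"

route_map = {"continue": "action", "end": "__end__"}

def next_node(state):
    return route_map[should_continue(state)]

print(next_node({"wants_tool": True}))   # action
print(next_node({"wants_tool": False}))  # __end__
```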
Now, once we've set up these nodes and these conditional edges, you basically want to compile the graph, and the compiled graph acts just like a standard LangChain runnable. So you can run invoke on it, you can run stream, et cetera. It will run the state of the agent for you: you define an entry point as an entry node, it handles the end point, and it takes care of the whole business of wiring these things together.

Now, I can see there being a lot of use for making reusable agents that you then wire together on a graph. So you might have lots of little pre-made components for using tools, that kind of thing. And you could imagine agents that use certain kinds of prompts based on the inputs that come before them here.
What I want to do now is go through some of the code. We'll look at some of the examples they've given. I've gone through and changed them a bit just to highlight what's going on, and then we'll also look at it in LangSmith, so we can actually see what happens at each step and what gets sent out to the large language model, et cetera.

You'll find that I'm using the OpenAI models in here. There's no reason why we can't use other models. The only challenge, I guess, is that a lot of those models need to support function calling, if you're going to be using function calling in these graphs. If you're just running standard chains, or something where you're not using function calling, you could use any sort of model. But if you want the parts where function calling makes the decisions, then you're probably looking at models like the OpenAI models or the Gemini models, and now we're starting to see some open-source models that can do function calling as well.

All right, let's jump in and have a look at the code. Let's start off with the simplest example they give, which is probably not that simple in some ways: the agent executor. This has been around in LangChain for quite a while, and you can think of it as a way of building an agent where you can then use function calling to get a bunch of responses. So what I've done is taken their notebook and changed it a bit. I'm going to make some things a little simpler, and I'm going to add some more things to it, so we can get a sense of what's actually going on in here.
So first off is basically setting up the state. I've left a lot of their comments in here. There are a number of key things that you want to persist across the agent while it's running. In this case, they're persisting the input, and they're persisting a chat history. So this is the more traditional way of adding memory and doing that kind of thing; you'll see in the second notebook that we move to just a list of messages going back and forth. But this one uses the more traditional approach of having a chat history, and then having things like intermediate steps.

And you'll see that some of these things can be overwritten. This agent outcome gives us the outcome from something the agent did, or gives us the agent finish that tells us when the agent should stop. So in this case, this can be overwritten as a value. Whereas the intermediate steps are a list of agent actions, along with the results of those actions, and you can see that in this case it's annotated with operator.add. So this just appends to the list as we go through. This is the state that we start out with; we're going to pass it in to make the graph later on.
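From memory, the state definition looks roughly like this; treat the exact field names as illustrative rather than authoritative:

```python
import operator
from typing import Annotated, TypedDict, Union

class AgentState(TypedDict):
    input: str                          # the user's request, overwritten as a value
    chat_history: list                  # traditional chat-history memory
    agent_outcome: Union[object, None]  # last AgentAction / AgentFinish, overwritten
    # Annotated with operator.add: new (action, result) tuples get APPENDED,
    # not overwritten, each time a node returns this key.
    intermediate_steps: Annotated[list, operator.add]

# The annotation is just metadata; the graph runtime reads it to pick the reducer.
print(AgentState.__annotations__["intermediate_steps"].__metadata__)
```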
Now what I wanted to do is set up some custom tools. Many of you have seen custom tools before; I did some videos about them a long time back, and I probably should have done more as things in LangChain changed. But the idea is that you can pick from a bunch of pre-made tools in LangChain, and there are a lot of those already, or you can write custom tools.

So here I've made two silly little custom tools. One is basically just going to give us a random number between zero and a hundred. And the other one is just going to take whatever input we give it and turn it into lowercase. So these are very simple functions. You can see we're using the tool decorator to convert these into tools, and when we do that, we get the name of the tool and the description of the tool. So it's a nice way of quickly making tools.

And you can see that when I want to run these tools, I basically just take the tool or function and call .run on it. In the case of the random number, I have to pass something in, so I'm just passing in a string. Really, the string can be anything here; it doesn't matter. You'll see that the agent likes to pass in "random", so I've used that as the example, but it could be an empty string, or a string with anything in it. So in that case, the input is not important. In the lowercase case, the input is important: whatever string gets passed in will be converted to lowercase and passed back out.

Now, these are simple tools, and you could swap in a bunch of different things, like a DuckDuckGo search, or Tavily, which is what they originally used here. But I feel these are nice simple tools where you can see very clearly what's going on, rather than getting some long JSON back from a search or something like that.
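The two tools look something like the sketch below. I'm mimicking the behavior of the tool decorator in plain Python (a stand-in, not LangChain's real implementation) so you can see where the name and description come from; in the notebook you'd import the real decorator from LangChain instead.

```python
import random

class SimpleTool:
    """Stand-in for a LangChain tool: name and description come from the function."""
    def __init__(self, fn):
        self.fn = fn
        self.name = fn.__name__
        self.description = fn.__doc__ or ""
    def run(self, tool_input):
        return self.fn(tool_input)

def tool(fn):
    return SimpleTool(fn)

@tool
def random_number(text: str) -> str:
    """Returns a random number between 0 and 100."""
    return str(random.randint(0, 100))

@tool
def lower_case(text: str) -> str:
    """Returns the input text in all lowercase characters."""
    return text.lower()

print(random_number.name)          # random_number
print(lower_case.run("MERLION"))   # merlion
```

Note that the string passed to `random_number.run(...)` is ignored, which is exactly why the agent can pass in anything there.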
All right, next up is the way we make an agent. Now, remember a graph can have multiple agents; it can have multiple parts of agents, and multiple chains in there. In this case, this is the standard sort of agent that uses OpenAI functions; you can think of it as an OpenAI functions agent. So here we're pulling in a prompt. This is what they had originally, where they pull the prompt from the hub, and if we go and have a look at that prompt, we can see there's really nothing special in there. It's basically just got a system message saying you are a helpful assistant, a placeholder for chat history, a human message which is going to be the input, and a placeholder for the agent scratchpad. So that's what we get back when we pull it down from the hub.

We set up our LLM, and then we've got this create_openai_functions_agent, which gives us an agent runnable. I've passed in the LLM, the tools, and the prompt that we got back. And if we actually look at the resulting prompt, it looks quite complicated, because it's got a bunch of different parts going on in there. We can look at it two ways: we can just look at it like this, or we can actually get the prompts out and inspect them.

You'll see that now that I've got that agent, I can pass an input into it, and I need to pass in a dictionary. So I've got a dictionary with my input text, a chat history, and intermediate steps. Neither of the latter two has anything in it yet.
All right, so we've got these inputs; we pass them in, and you can see that the outcome we get back is an agent action with a message log. This is for the input "give me a random number and then write in words and make it lowercase". So what's it doing? It's basically deciding what tool to select via a function call.

If we come over to LangSmith, we can see that when we actually passed this into the LLM, we were passing in these functions and tools as well. So we can see the details for the lowercase tool, and the details for the random number tool, and they've been converted to the OpenAI functions format for us. We've then got our system input, and then the human input. And we can see the output that came back was a function call saying we need to call random number. Looking back at the notebook, it's passing back that we're going to call random number with the input being "random", and we've got a message log there as well.

So this was basically one step of the agent. It hasn't called the tool for us; it's just told us what tool to call. That's what the initial part does. Now we're going to use that as a node on our graph, and we're going to be able to go back and forth between that particular node and the tools node as we go through this.
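As a preview, the tool-execution step we're about to set up amounts to a dispatch like this (a plain-Python stand-in, with the random tool faked to a fixed value so it's deterministic):

```python
# Stand-in for a tool executor: given an "agent action" naming a tool and its
# input, look the tool up and run it.
TOOLS = {
    "random_number": lambda text: "4",        # faked here for determinism
    "lower_case": lambda text: text.lower(),
}

def execute_tool(agent_action):
    name, tool_input = agent_action["tool"], agent_action["tool_input"]
    return TOOLS[name](tool_input)

action = {"tool": "lower_case", "tool_input": "FOUR"}
print(execute_tool(action))  # four
```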
First off, we want to set up the ability to execute the tools. So we've got this tool executor here, and we pass in the list of tools that we had. Remember, we've got two tools: one is a random number generator, and one converts things to lowercase.

If we look at the first thing we're going to put on this graph, it's the agent, and to run it we've got a function that calls the agent runnable's invoke with the data we pass in. You can see that it returns the agent outcome. So in this first case, that agent outcome is going to be telling it what tool to use, given the same inputs that we had before.

We've then got a second function for actually running the tools. You can see here that this gets the agent outcome, which is going to be our agent action: what tool to run, et cetera. And then we can invoke the tool executor, telling it what tool to run and what input to pass in. Now, I've added some print statements here just so we can look at what the agent action actually is, and also the output that we get back. Finally, when we get that output back, we add it to the intermediate steps.

The next function determines, based on the last agent outcome, whether we end or continue. Remember, each of these can be called at different stages, even though I'm going through the agent and tools in order; they're separate things at the moment. If the outcome is an agent finish, then we end; we're not going to do anything more. So if it's coming back giving us the final answer, we don't need to call tools again, and we don't need another language model call. We just finish there.
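That decision function has roughly this shape, sketched here with stand-in AgentAction/AgentFinish classes (the real ones come from LangChain):

```python
# Stand-ins for LangChain's agent outcome types.
class AgentFinish:
    def __init__(self, output):
        self.output = output

class AgentAction:
    def __init__(self, tool, tool_input):
        self.tool, self.tool_input = tool, tool_input

def should_continue(state):
    # If the last outcome is an AgentFinish, stop; otherwise keep going.
    if isinstance(state["agent_outcome"], AgentFinish):
        return "end"
    return "continue"

print(should_continue({"agent_outcome": AgentFinish("done")}))              # end
print(should_continue({"agent_outcome": AgentAction("lower_case", "HI")}))  # continue
```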
So these functions are what we're going to add to the graph. First off, we've got our workflow, which is going to be the state graph, and we pass in the agent state that we defined earlier on. We then add a node for the agent, which runs the agent. We add a node for action, and we could have called this one tools, or tool action, or something like that; it's the function we've got for actually running the tool, getting the response back, and sticking it on the intermediate steps. So those are the two main nodes we've got.

We set the entry node. We're going to start with the agent, because we're going to take the inputs and run them straight in, just like we did above. And then you can see we need to put in the conditional edges. The conditional edges use that should_continue function; we're basically saying that after the agent, the decision is: should we continue or not? You'll see down here, and I'll come back to this in a second, that we've also got a fixed edge, where we always go from action back to agent, meaning we take the output of the tool and use it as the input for calling the agent again. But then the agent can decide: do I need to use another tool, or can I just finish? And that's what the conditional edge is. So after the agent, it will decide. If I ask it something that doesn't use any of those tools, it's just going to give me a normal answer back from the OpenAI language model. But if it comes back as an action, then it continues, and it goes on to use the tools. So this is the conditional-edge part, and you'll see in one of the other notebooks that this can get a lot more complicated if you've got multiple agents on the same graph.

All right, we then compile the graph. I've tried to stick with their terminology of workflow and so on, but really this is the graph, and we're compiling it to be like an app. If we look at it, we can see the branches. We can also see the nodes and the edges, so we can see what goes from what, and we can see the intermediate steps and how they're being persisted on that graph.
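Putting the pieces together, here's a self-contained toy version of the whole loop in plain Python, standing in for StateGraph, add_node, add_conditional_edges, and compile, with a scripted "LLM" so it runs deterministically:

```python
# Toy end-to-end version of the agent graph. The "LLM" is scripted:
# first it asks for a random number, then for lowercase, then finishes.
END = "__end__"

def agent_node(state):
    steps = state["intermediate_steps"]
    if len(steps) == 0:
        state["agent_outcome"] = ("random_number", "random")      # AgentAction
    elif len(steps) == 1:
        state["agent_outcome"] = ("lower_case", "FOUR")           # AgentAction
    else:
        state["agent_outcome"] = "The random number is 4: four."  # AgentFinish
    return state

def action_node(state):
    tools = {"random_number": lambda x: "4", "lower_case": lambda x: x.lower()}
    tool, tool_input = state["agent_outcome"]
    state["intermediate_steps"].append((tool, tools[tool](tool_input)))
    return state

def should_continue(state):
    # A string outcome plays the role of AgentFinish in this sketch.
    return "end" if isinstance(state["agent_outcome"], str) else "continue"

# Edges: agent --conditional--> action or END; action --always--> agent.
node, state = "agent", {"intermediate_steps": [], "agent_outcome": None}
while node != END:
    if node == "agent":
        state = agent_node(state)
        node = "action" if should_continue(state) == "continue" else END
    else:
        state = action_node(state)
        node = "agent"

print(state["agent_outcome"])        # The random number is 4: four.
print(state["intermediate_steps"])   # [('random_number', '4'), ('lower_case', 'four')]
```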
Okay, so now we're going to stream the response out so we can see it going through. I'm basically just going to take this app (remember I can do .invoke or .stream) and pass in the inputs. The inputs here have an empty chat history, and the input text is "give me a random number and then write in words and make it lowercase" (that should really be "write it in words", but anyway).

So what happens here? We start off and it decides: ah, okay, I need a tool, and the tool is going to be random number, with the input being "random" in this case. It then gets that response back. Now, I've printed these out, so each of these printouts is where we go from one node to the next. So we go from the agent node, with the outcome being that I need to run a tool, to the one where we've now run the tool and it's given us back a random number. The random number is 4 in this case, and so now it sticks that on the intermediate steps.

So that gets passed back to our original agent node, and now it basically says: okay, this was my initial task, I've got this number back, I need to write it in words, and then I need to make it lowercase. So to make it lowercase, I need to use a tool, and the tool is lowercase in this case. The input is going to be "FOUR" in all capitals, and somewhere here we'll see the lowercase result come back: yes, here, the tool result is "four" in lowercase.

So again: this is the agent, this is the tool, the agent again, the tool again, and then finally we go back to the agent, and now it says, okay, I can do an agent finish, because I've done everything I was asked to do. I've got the random number, I've got it in words, I've got it in lowercase. So we can see the output here: the random number is 4, and when written in words and converted to lowercase, it is "four". It's a bit of a silly task, but it shows you how it breaks things down. And if we look at the intermediate steps we get out, we've got the steps for each of the different stages, with the message logs as we're going through this.
All right, if we want to do it without streaming, we can just say invoke. I'm not going to see each step broken out; I'll just see that first it picks the random number tool and gets 60, takes that as a word in uppercase, puts it into the lowercase tool, and we get the result out. Now, I've saved the output to a variable here, and remember, the per-step lines are the print statements I put in; that's why we're seeing them. Then from the agent outcome's return values we can get the output: the random number is 60, and in words it is written out, all in lowercase. And if we look at the intermediate steps, we can see them all there.

Just to finish this off: if we put in something that doesn't need a tool, like "does it get cold in San Francisco in January?", it comes back with: yes, San Francisco can experience cold weather in January. So notice it didn't use any tools. It came straight back with a finish, and there are no intermediate steps; we've just got that one call.

If we come over to LangSmith, we can see this going on. We started out with that call, and it gave us back a function call to random number; we got 4 out of that. From there (remember, the action node is like our tools) we went back to the agent, and if we look at the OpenAI call, we can see what was getting passed in: the input is "FOUR" in capital letters. We go into the lowercase tool, which transforms it and delivers back "four" in lowercase. And then finally we pass all of that in, with the record of what we've actually done so far, and now it can say the output: the random number is four, and when written in words and converted to lowercase, it's "four". You can see the type we got back was agent finish, and that's what tells it not to continue.
All right, let's jump in and have a look at the second example. The second example is very similar. The big difference is that it's using a chat model and a list of messages, rather than the separate chat history that we had before. So we've got the tools here again. Now, one of the things with doing it this way is that we're not using create_openai_functions_agent. We are still using an OpenAI model, but we need to bind the functions to the model ourselves. So we've got the model up here, and we just go through each tool we've got, run it through the format-tool-to-OpenAI-functions converter, and then bind the result to the model. That means the model can then use those and call back a function just like it did before.
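Roughly, binding means every request to the model carries JSON descriptions of the tools. Here's a sketch of the kind of schema that conversion produces, following the OpenAI functions format (the actual converter lives in LangChain, and I've assumed a single string parameter for simplicity):

```python
# Sketch: turn a simple tool into the OpenAI functions JSON schema that gets
# bound to (i.e. sent along with) every chat model call.
def format_tool_as_openai_function(name, description):
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }

functions = [
    format_tool_as_openai_function("random_number",
                                   "Returns a random number between 0-100."),
    format_tool_as_openai_function("lower_case",
                                   "Returns the input text in lowercase."),
]
print([f["name"] for f in functions])  # ['random_number', 'lower_case']
```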
All right, we've got an AgentState again, like we had before, for the state graph. In this case, though, the only thing we're going to keep is messages. We don't need the intermediate steps, since we're not doing any of that, and because the input is already in the messages, we can get at it there, so we don't need to persist it separately either.

Now our nodes. We've got the should_continue function again, which decides whether we go back to the original agent node or on to the tools node. In this case, what it's actually doing is getting the last message that came back and checking whether there's a function call in it or not. If there's no function call, we know it's not using a tool, so we can just end. If there is, we continue.

We've then got the function for calling the model: it takes our messages, invokes the model, gets back a response, and puts that response back on the list.

And we've got a function for calling the tools. Here again, we get the last message, because that's what holds the actual function, or tool, that we need to call. You can see we get the tool name by taking the last message, looking at its function call, and reading the name, and then the same kind of thing for getting the tool input. Then I'm printing out the agent action again, so we can actually see what's going on, along with the response, same as in the previous notebook. And then we use that response to create a function message, which gets appended to the list of messages, so it can be the last message that's taken off and passed back in to the agent again.
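Sketched in plain Python, that tool-calling step looks something like this (message dicts and a stand-in tool table; the real code works with LangChain message objects):

```python
import json

# Sketch of the call_tool step: read the function_call off the last message,
# parse its arguments, run the matching tool, and append a "function" message.
TOOLS = {"lower_case": lambda text: text.lower()}

def call_tool(messages):
    last = messages[-1]
    name = last["function_call"]["name"]
    args = json.loads(last["function_call"]["arguments"])
    result = TOOLS[name](args["text"])
    messages.append({"role": "function", "name": name, "content": result})
    return messages

msgs = [{"role": "assistant", "content": None,
         "function_call": {"name": "lower_case",
                           "arguments": '{"text": "MERLION"}'}}]
print(call_tool(msgs)[-1])  # {'role': 'function', 'name': 'lower_case', 'content': 'merlion'}
```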
All right, the graph: same kind of thing. We add two nodes, one being the initial agent and one being the tool, or the action, that gets called. We set the entry point to agent again. We've got our conditional edge, the same as before, and we've also got the hardwired edge, where from action we always go back to agent. Compile it, and now we can just run it.

So you can see here I'm just going to invoke these as we go through. I've got "give me a random number and then write the words and make it lowercase", and we get the same thing as before. So this is replicating the same functionality, but in some ways this version lets you do a lot more. If you come in here, you can see how we're popping off the last message; we're also able to summarize messages, play with the messages, or limit it so that we've only got the last 10 messages in memory, so we're not making calls 35 messages long or something like that. Even with GPT-4 Turbo we can go really long, but we probably don't want to waste money on really long calls that use up huge amounts of tokens in the context window.
So we can run that through, and you can see we've asked for the random number. Sure enough, it's done the same thing: it's picked the tool (remember, these lines are coming from the print statements I put in), and then we get the output. That output is the whole list of messages that came back: everything from the system message, to the human message, to an AI message, to a function message, to an AI message again, to a function message, and back to a final AI message.

If we want to try it where it just uses one tool: if I put in "please write Merlion in lowercase", you can see it now just uses the lowercase tool, goes through, and does that. And if we try it with no tools, asking "what is a Merlion?": a Merlion is a mythical creature with the head of a lion and the body of a fish. So this shows you that it can handle both using the tools and not using the tools. And it also shows that each time, we're getting this list of messages back, which is how we can see what's going on and persist the conversation as we go through.
Okay. In this third notebook, we're going to look at the idea of building an agent supervisor. This is where the user passes something in, the supervisor decides which agent to route it to, and then it gets the responses back. Some of these agents can be using tools; some of them can be other agents that aren't using a tool at all, just a large language model, et cetera.

So let's jump in. We've got the same imports that we had before, I'm setting up LangSmith, and I'm bringing in the model. The model I'm going to use for this one is GPT-4. And then we've got a number of tools: my custom tools that we used in the first two notebooks, the lowercase and the random number ones, but we've also got the PythonREPL tool. Remember, a REPL is a read-evaluate-print loop, so this tool can run Python code. You always want to be a bit careful about what prompts you let go into that, because it can be used maliciously. Obviously, if it can run anything Python can run, it can do a lot of damage.
All right. So in the example notebook, they've got these helper utilities. The first is basically for making a general agent: you pass in the LLM, the tools the agent can use, and a system prompt, and it assembles them with the messages, the scratchpad, et cetera, calls create_openai_tools_agent, similar to what we had in the first notebook, and returns the executor. So this is just a way to instantiate multiple agents based on their prompts and so on.

The second helper function is this agent node function, which converts the agent we created into an actual node that can run on the graph. It also takes the agent's message and converts it into a human message, because with multiple agents producing LLM responses, we're often going to want to convert those into human messages to keep the sequence of responses sensible as we go through this.
the agent supervisors. So, this case, is where you're
going to determine your. multiple agent personalities
and stuff like that. So the personas I've got here. I've changed their example. So I've got the lotto manager,
which is obviously going to use the tools that we had before of
the random number, et cetera. And we've got a coder. So I've stuck to the original example
that they had of having a coder that will make a plot out of this. but what we're gonna do is plot
out the lotto numbers for this. So we can see here that this
supervisor has got a very sort of unique prompt, right? It's basically that, you're a supervisor
tasked with managing a conversation between the following workers. And then we're passing in the members. So the members is this
lotto manager and coder. given the following user request
respond with the worker to, act next. So each worker will perform a task and
respond with their results and status. When finished respond with finished. So this is what's guiding the supervisor
to decide the delegation and to basically decide when it should finish. for these. So for doing that, delegation, it's
going to use an OpenAI function, and this is basically setting that up. So this is setting up the router for deciding the next role, i.e. who should act next, and then passing these things through, passing in this enum of the members plus finish, so it can decide: do I finish? Do I go to this member? Do I go to this other member, as we go through this? And you can see that because that
is its own call in here, we've got a system prompt there that says: given the conversation above, who should act next? And then we basically give it the options of finish, lotto manager, or coder. And then we're basically just putting these things together, making this supervisor chain, where we're going to have this prompt, we're going to bind these functions (that function above that we've just made) to the LLM, and then we're going to pass that out and get that back.
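Putting that together, the supervisor's prompt and routing function look roughly like this. The exact member names and wording here are my reconstruction of what's on screen, and the schema shape follows OpenAI-style function calling:

```python
members = ["Lotto_Manager", "Coder"]
options = ["FINISH"] + members

# The supervisor's system prompt, guiding delegation and termination.
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers: {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status."
    " When finished, respond with FINISH."
).format(members=", ".join(members))

# An OpenAI-style function definition that forces the model to answer
# with exactly one of the allowed options, acting as the router.
function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "type": "object",
        "properties": {
            "next": {"enum": options},
        },
        "required": ["next"],
    },
}
```

Because the enum only contains the member names plus FINISH, the model can't route to a node that doesn't exist.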
So hopefully it's obvious that that will then become a node on the actual graph as well. So now we're going to look at actually creating the graph. So we've got this agent state going on here, and this is our graph state. Again, we're going to have the messages that we're going to be passing in. So we're sticking to that sort of chat executor like we did in the second notebook. And you can see here that we're going
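The graph state being described is roughly a typed dictionary like this. This is a sketch of what the notebook defines; the `operator.add` annotation is what tells LangGraph to append new messages to the list rather than overwrite it:

```python
import operator
from typing import Annotated, Sequence, TypedDict

class AgentState(TypedDict):
    # Each node returns a partial update; the operator.add annotation
    # tells LangGraph to append new messages instead of replacing the list.
    messages: Annotated[Sequence, operator.add]
    # The supervisor's routing decision: a worker name or FINISH.
    next: str

state: AgentState = {
    "messages": ["get 10 random lotto numbers"],
    "next": "Lotto_Manager",
}
```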
to basically have the lotto agent. So I'm just going to instantiate these with that helper function for creating an agent. And so here, I've got the lotto agent: it's going to take in our GPT-4 Turbo model, and it's going to take in the tools. And then the prompt for this is: you are a senior lotto manager, you run the lotto and get random numbers. It's telling it, hey, this is the agent to do that, and it's telling it that it's going to have to use the tools to do that. So that's the lotto agent. And then the second agent
is this coder agent. So this coder agent is just using one tool. I passed in all the tools here for tools, by the way, but this particular agent is just going to use the PythonREPL tool. And the prompt is basically saying: you may generate safe Python code to analyze data and generate charts using matplotlib. So it's just setting it up
to do the charting in there. So if you look carefully, you'll actually see that I think I accidentally passed the PythonREPL in with those tools as well. That's not ideal, in that we'd want to limit the number of tools we pass into an agent to as few as possible: one, it saves on tokens, and two, it just makes it easier for the model to make the decision. But anyway, we've got those, and then we've got this basically setting up the nodes here. So we've got our lotto node and we've got our code node, and we can then basically pass these in as we go through this. We need some edges. So for the edges, we've actually
got a lot more edges because we've got a lot more nodes now, and you can see that they're just using a for loop to make these edges. So from every agent or persona, whether it's the lotto manager or the coder, it always goes back to the supervisor. So even if we had 10 different agents (as you can see, here we've only got two: the lotto manager and the coder), each one will go back to the supervisor at the end. And then we've got conditional edges, where it will determine, well, this is
sort of setting up a conditional map for the conditional edge: the supervisor going to what? So this conditional map, in fact, in a future example I would maybe just hard-code out so people can see what's going on here. But basically it's just making a dictionary, and it's adding in the finish node as well, which it can use as a condition. And we can see that we can go from the supervisor to any of those on the conditional map, which is going to be our members plus finish. Finally we set up the entry point. So the entry point is
going to be the supervisor. We compile the graph, and then we can use the graph. So you can see now what I've asked
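Before looking at the run, it may help to see the whole thing as the state machine it compiles down to. This is a plain-Python simulation of the control flow, not LangGraph's API: the workers and supervisor are fake stand-ins, and the hard-coded conditional map mirrors the dictionary the for loop builds:

```python
END = "__end__"  # stand-in for LangGraph's END sentinel

# Stand-in workers: in the real graph these are LLM agents with tools.
def lotto_manager(state):
    state["messages"].append("Lotto_Manager: 7 22 41 3 18 29 44 9 31 12")

def coder(state):
    state["messages"].append("Coder: histogram plotted")

# Stand-in supervisor: the real one is an LLM call bound to the router
# function; here we just route based on who has already responded.
def supervisor(state):
    done = {m.split(":")[0] for m in state["messages"]}
    if "Lotto_Manager" not in done:
        return "Lotto_Manager"
    if "Coder" not in done:
        return "Coder"
    return "FINISH"

nodes = {"Lotto_Manager": lotto_manager, "Coder": coder}
conditional_map = {"Lotto_Manager": "Lotto_Manager", "Coder": "Coder", "FINISH": END}

# Entry point is the supervisor; every worker edge leads back to it.
state = {"messages": []}
while True:
    nxt = conditional_map[supervisor(state)]
    if nxt == END:
        break
    nodes[nxt](state)
```

Each pass through the loop is one hop through the graph: supervisor decides, a worker runs, and control returns to the supervisor until it says FINISH.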
it to do: the human message in is, get 10 random lotto numbers and plot them on a histogram in 10 bins, and tell me what the 10 numbers are at the end. So this runs through and it does the plot for us. We don't really see that much here, but let's jump over to LangSmith
and see what's going on here. If we look at the LangSmith trace for this, we can see how it starts out, and we've got the router as the actual function-calling thing at the start, right? Not the tools. This is the router that is basically deciding: do I go to the lotto manager? Do I go to the coder? Do I finish? We pass in our prompt there, and you can see it's got the workers being lotto manager and coder, which got, you know, substituted in there. And then we've got: when finished, respond with finish. And then we passed in the actual human prompt. And you can see that it's decided, okay, from this, select one of these, and it's decided, okay, it needs to go to the lotto manager. So that's where we get to the lotto manager. Now, the lotto manager basically looks at this, and
now it's getting the tools in there. So remember I said I accidentally passed in the PythonREPL here; I probably shouldn't have done that. But anyway, we've got: you're a senior lotto manager, get 10 random lottery numbers. We're passing that in there. And you can see it's worked out that, okay, it needs to do this random thing and it needs to do it 10 times. So it goes through and runs the random number tool 10 times. So we get 10 separate
random numbers back from that. It can then take those and decide, okay, there are our 10 numbers, and it can decide, okay, now it needs to go to the coder. Now, in this case, actually, because it had the PythonREPL in here, it just did the plotting itself. But you'll see on some runs, it will actually go back to the coder. And then finally, we've got the supervisor output, which is giving the lotto numbers out, telling us the numbers; we can't see the plot or pass the plot back because it's already been plotted out. Here is our plot, and if we go along, we can see that here are the numbers that correspond
to the plot that we've got there. Anyway, this is just running it a couple of times. If we look at the final response out, we can see that this is what we've got. If we want to actually just give the human response back out, we can get this out. So we've got: the histogram has been plotted for the following numbers, passing in the numbers with newline characters, et cetera, as we go through it. Okay. So this shows you the sort of basics
of building a supervisor agent that can direct multiple agents. So in some future videos, I think we'll look at how to go through this more in depth and actually do some more real-world agent things with this. And then from this, you could basically take it and deploy it with LangServe; you could do a variety of different things with it to make a nice UI or something. But hopefully this gives you a sort of crash course in what LangGraph actually does and what some of the key components are for it. If you just think of it as
being a state machine, that is fundamentally how I think about it. If you've ever done any sort of programming for games and stuff, you've often used state machines there; a lot of coding will often have some kind of state machine, and the state machine is basically just directing things around. So don't be intimidated by it. It's pretty powerful, and you can do a lot of different stuff with it. I would say it can get confusing at times when you're first getting your head around it. But once you work out how you're setting up the different nodes, what the actual nodes are, where you're going to have conditional edges between the nodes, and what should be hardwired edges to basically bring things back, that's another way of thinking it through. So for me, I'm really curious to
see what kind of agents people want to learn to actually build. Agents are something that I've been interested in with LangChain for over a year or so, and I'm really curious to see, okay, what kind of agents do you want? And we can make some different examples of those. In the description, I'm going to put a Google form, just basically asking you a little bit about what agents you're interested to see and stuff like that. If you are interested to find out more about this, fill out the form, and that will help work out what things to go with going forward. Anyway, as always, if you've got
comments, put them in the comments below. I always try to read the comments for the first 24 or 48 hours after the video is published and reply to people. So if you do have any questions, put them in there, and as always, I will see you in the next video.