Building LLM Agents in 3 Levels of Complexity: From Scratch, OpenAI Functions & LangChain

Video Statistics and Information

Captions
What's up folks, welcome back to the channel. In this video we're going to be doing something interesting. Agents have been hyped for a while now because of the powerful capabilities and the potential they carry to automate all sorts of tasks for us on a computer, as well as to perform things like mathematical reasoning, scientific discovery, research and reports, browsing the internet, all sorts of stuff. I find agents quite fascinating, and the thing is, when you have a bunch of frameworks to choose from to build these agents, it can be hard to understand which one to use. I found that one cool way to approach this is to try to implement agents yourself first, using just a large language model and nothing else, and see how difficult it is to give that model the ability to perform actions in the real world. That's what we're going to try today. Then we're going to see what kinds of problems are fixed by using something like OpenAI function calling, which, if you don't know what it is, there's a link in the description. Essentially it's an API from OpenAI that lets you connect models like ChatGPT to functions and tools that do stuff in the real world: all you have to do is set up a little JSON schema that explains to the model how a function should be called, and the model figures out how to prepare the inputs for that function given some problem stated in the prompt. Don't worry if you don't understand all of that, because we're going to go through examples in this notebook. Finally, I want to build simple agents using LangChain, which is a framework for developing complex and interesting LLM-based applications. I think LangChain is cool because it spans the whole spectrum: you can do simple stuff, but you can also do very complicated stuff, with powerful features for connecting different models, using local models, using closed-source models, all sorts of things. So I want to do these three things, and I think they do a good job of showing the whole complexity spectrum of building agents.

Okay, I already set up this notebook, so I'm just going to walk through the code line by line. First things first: I load my .env file here so that we have access to the OpenAI API key, and then I set up some imports and the first call to the OpenAI API. As you can see, I'm just testing the call, using the gpt-3.5-turbo-16k model with a very simple system prompt, and then I say: create a simple list of three tasks I can do on the terminal. That's going to be relevant for the next cells. If I run this, we confirm that the call actually works: I formatted the output as markdown, and the model did return three tasks you can do on the terminal, which is awesome. That should give us a segue into the types of actions I would like to perform. What I want to do is connect a large language model, in this case ChatGPT through the OpenAI API, with some Python functions that I wrote myself. They're actually already written, so we're just going to walk through them.
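As a point of reference, a minimal sketch of that first test call might look like the following, assuming the openai>=1.0 Python SDK; the system prompt wording and the use of python-dotenv are assumptions based on what is described in the video.

```python
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # loads OPENAI_API_KEY from a local .env file
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # assumed wording
        {"role": "user", "content": "Create a simple list of three tasks I can do on the terminal."},
    ],
)
print(response.choices[0].message.content)
```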
We're going to try to connect these two in a way that lets the model go from making a decision about calling a function all the way to actually calling that function, and we'll see what the path from one to the other looks like and what the challenges are. So I'm going to create three functions: one that creates a directory in the current directory (a silly version that always creates the same directory, called test), one called create_file that just creates a test.txt file in the current directory, and one called list_files that lists the files in the current directory. Now that we have these three, I run the cell and we can test them out: I run ls using the exclamation point so we can see how the ls command normally behaves, then I create a directory and run ls again, and as you can see a folder called test was created that didn't exist before (there's also a test_dir folder which I created a bit earlier when practicing for this presentation). Then I create a file, which produces test.txt, and finally I list the files. This is all just to test the functions I wrote.

Now, this idea of connecting models with tools is obviously not new, and the earliest paper I know of on this is the Toolformer paper, which I definitely recommend you check out. It's one of the papers that introduced this idea; it came out at the beginning of this year and it's about how language models can teach themselves to use tools. Definitely an awesome paper.

What I'm going to do next is write a class that puts everything together, just to keep things a little more neat. You don't have to do it like this; it's just a simple way to group these different functionalities. It has an __init__ (I probably should structure this a bit better, but I don't care), a function to call the OpenAI API, and the functions we want to give the model. That's pretty much it. Now that we have these, we can put everything together so that when the model is given a task, it can plan the task, execute actions to complete it, know when to call a function, and actually call that function. What I'm doing here is making the functions actually useful: create_directory now creates a directory with a name provided by the user, create_file creates a file whose name is provided by the user, and list_files lists the files in the current folder. Now I can set up the model and give it a task like: create a folder called "lucas the agent master" (because my sense of humor is amazing), and inside the folder create a file called "the 10 master rules", as a markdown file. Then we call the model with get_response, passing the task description, but pay attention to the prompt I'm using: I'm saying, given this task, consider that you have access to the following functions, and then I show the model the functions it has access to.
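The three helpers are only described verbally in the captions, so here is a rough, assumed sketch of what they might look like; the notebook's actual implementations and argument names may differ.

```python
import os
import subprocess


def create_directory(directory_name: str = "test") -> None:
    """Create a directory with the given name in the current folder."""
    subprocess.run(["mkdir", "-p", directory_name], check=True)


def create_file(file_name: str = "test.txt") -> None:
    """Create an empty file with the given name in the current folder."""
    subprocess.run(["touch", file_name], check=True)


def list_files() -> str:
    """Return the names of the entries in the current folder."""
    return "\n".join(os.listdir("."))
```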
These are the functions in this class here that we just discussed, and I say: your output should be the first function to be executed to complete the task, containing the necessary arguments, and the output should only be the Python function call and nothing else. Now, I know what you're thinking: if we have a task that involves multiple function calls, this is not going to work; this is just the first call. But for now I just want to see if we can get the model to output a function call correctly. Let's see if that's possible. I run this, and there we go: the first action would be to create a directory called "lucas the agent master" (I am the agent master, after all). Awesome. Now what we want is to take this output and somehow give the model the ability to execute it as well. How? We can use this amazing little Python built-in called exec. Since I created a variable called model that instantiates the class with the tools (you don't have to do it like this; I just wanted everything inside a class, even though I'm also pasting the functions into the prompt, so it's not like I did amazing planning here), all we have to do is prepend "model." to the output. If we got it right, that forms the call to the function, and exec takes care of executing it for us. So we execute this, and now we can check: there we go, inside the current folder there's a folder called "lucas the agent master". Now that I've said that out loud many times I feel quite ridiculous putting this on YouTube, but whatever. So we used Python's built-in exec, connected to the function call produced by the large language model, and we didn't have to do anything fancy; it was just calls to the model, which I think is pretty neat, because you can see that this stuff is possible, and the frameworks that are coming out build on top of capabilities like this.

Now, what are we doing next? I actually have to remember, because I prepared this notebook yesterday. Oh yes: now I've improved the prompt engineering; don't worry, we'll walk through everything. This is all the stuff we had before: all the functions that match the capabilities we want the model to have (listing files, creating files, creating directories, and so on), plus get_response and the OpenAI API call, all inside a class so everything is combined. However, I added a function called execute_function_call, which takes a function-call string, just like the create_directory call we produced, and executes it. Then I describe a very simple, silly task: create a folder called "lucas the agent master"... actually, let's change that to "lucas the unoriginal joker", a much more accurate depiction of my qualities. Then I write a little prompt saying: given a task that will be fed as input, consider that you have access to these functions, and your output should be the first function to be executed to complete the task, containing the necessary arguments.
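Read as one piece, the pattern described above amounts to roughly the following. This is a minimal, self-contained sketch under assumptions: the wrapper class name (ModelWithTools) is hypothetical, the prompt is paraphrased from the video, and it uses the openai>=1.0 SDK.

```python
import inspect
import subprocess
from openai import OpenAI


class ModelWithTools:
    """Hypothetical wrapper: holds the terminal helper and a get_response() call."""

    def __init__(self, model: str = "gpt-3.5-turbo-16k"):
        self.client = OpenAI()  # expects OPENAI_API_KEY in the environment
        self.model = model

    def get_response(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def create_directory(self, directory_name: str) -> None:
        """Create a directory with the given name in the current folder."""
        subprocess.run(["mkdir", "-p", directory_name], check=True)


model = ModelWithTools()
task = "Create a folder called 'lucas-the-agent-master'."
prompt = (
    f"Given the task: {task}\n"
    "Consider you have access to the following function:\n\n"
    f"{inspect.getsource(ModelWithTools.create_directory)}\n"
    "Your output should be the first function to be executed to complete the task, "
    "containing the necessary arguments.\n"
    "The output should only be the Python function call and nothing else."
)

output = model.get_response(prompt)  # e.g. "create_directory('lucas-the-agent-master')"
exec("model." + output)              # naive: prepend "model." and execute whatever came back
```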
For example: task: create a folder called "lucas the agent master"; output: model.create_directory('lucas the agent master'). So I'm just teaching the model to add the "model." prefix before the call. It's not exactly the most intelligent way to do this, but don't worry, we'll evolve it as we go. I give a few examples like that, so I'm doing a bit of something called few-shot learning: few-shot learning is nothing more than giving the model examples of input/output pairs so it learns how the user wants the output structured. At the end I say the output should only be the Python function call and nothing else, I give the task, and then I prime the model to answer in exactly the format I want by ending the prompt with "Output:" and a new line. If I run this, there we go, we get the correct function call, and now I can call model.execute_function_call on that output. If I come here and list the files using that capability, there we go: "lucas the unoriginal joker", right there. Now, this is a somewhat convoluted way of doing it, and I could have done it more simply (you don't really need the "model." prefix), and it's supposed to feel a bit fuzzy right now because we're going to evolve it; this is just the first iteration of how I tried to solve this particular problem of putting function calls inside a prompt. I think it's cool because it shows that you can have this ability essentially for free: you don't need any frameworks to start doing function calling. But if you want to do more complex and interesting things, with multiple tools or more complex tools, it becomes interesting to add some structure, and we'll see what to add and how.

So we've connected the model to the tools, which is awesome, but we want the ability to perform multiple actions, not just the first function call. We've shown that the model can produce a single function call and structure it properly, because it understands how the function works. So now we change the prompt so that the output is not just one function call but a list with all the function calls that, executed step by step, would solve the problem. We're assuming here that the problem can be solved just by stringing together function calls, which is not necessarily true, but we're going step by step. What are we setting up here? Let's go line by line. I set up the model again and a task description, and again I'm going to change the task description, because the name is ridiculous, to: create a folder called "lucas the very unoriginal joker", and inside the folder create a file called "the 10 unoriginal rules of comedy". This task is very easy: create a folder, then create a file inside that folder. Then we ask the model: given a task, you have access to these functions, just like before; your output should be a list of function calls to be executed to complete the task, containing the necessary arguments. We want to see if the model can organize the list with all the proper function calls.
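As a rough, hypothetical illustration of the few-shot "list of calls" prompt and its naive execution: this is not the notebook's exact code, it assumes the model wrapper and the bare create_directory/create_file helpers from the earlier sketches are in scope, and the example task and names are made up.

```python
task = (
    "Create a folder called 'lucas-the-very-unoriginal-joker' and inside it "
    "create a file called 'the-10-unoriginal-rules-of-comedy.md'."
)

prompt = f"""Given a task that will be fed as input, consider that you have access to
the following functions: create_directory(directory_name), create_file(file_name).

Your output should be a Python list of the function calls to be executed, in order,
to complete the task, containing the necessary arguments, and nothing else.

Example:
task: create a folder called 'project' and inside it create a file called 'notes.txt'
output: [create_directory('project'), create_file('project/notes.txt')]

task: {task}
output:"""

llm_output = model.get_response(prompt)
# e.g. "[create_directory('lucas-the-very-unoriginal-joker'),
#        create_file('lucas-the-very-unoriginal-joker/the-10-unoriginal-rules-of-comedy.md')]"

# The notebook routes this through an execute_function_call helper; the naive
# equivalent is to evaluate the string, which runs each call as the list is built.
# Unsafe for untrusted output, but fine for a local demo.
eval(llm_output)
```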
When executed one after the other, those calls should solve our problem; of course we're using very simple tasks here, but we'll get there. Then I do a little more few-shot learning: I give it a couple of examples, create a folder and create a file, plus an example with multiple calls (create a folder and then create a file inside it) together with the expected outputs, and I say the output should only be a Python list with the function calls inside and nothing else. The task is this, the output is this, and when I run it, there we go, we get the correct function calls inside the list. That's nice, because I'm running this right now with different input than yesterday, so it's good to see it still works; it could have failed, even though I'm rerunning essentially the same script with slight changes. Again, I call execute_function_call on the model's output, and we can test it: I change the name to the folder I actually created, "lucas the very unoriginal joker", and there we go, the file was created in there. It's working, I love it; all we had to do was run this.

Hopefully you're starting to identify some issues, which I've outlined in this notebook (and don't worry, I'll make the notebook available to anyone who wants it). The issues are fairly obvious: this works for very simple examples, but as we scale up the complexity of the functions and the tasks, there's uncertainty about the model's outputs, which affects our ability to reliably call the functions. If the model gives us anything beyond just the function call, we fail, and we have no guarantee the model will format the call correctly. These were simple toy examples; for more complex problems we would hit a ceiling in complexity and difficulty quite fast. We need a structured way to prepare inputs for the function calls, and better ways to put everything together, because pasting entire functions into the prompt makes for a clunky, non-scalable setup, not to mention that we have limited context length. Putting a bunch of Python functions or arbitrary code inside the prompt is probably suboptimal.

That's where OpenAI functions come in. They don't try to solve all of these problems, but they do try to solve the problem of connecting tools to LLMs. If we go to the function calling API documentation to learn how it works, we quickly learn that you can describe functions and have the model intelligently choose to output a JSON object containing the arguments for calling them. You can use that for a bunch of things: creating assistants that answer questions by calling external APIs, converting natural language into API calls (essentially what we were doing before), extracting structured data from text, all sorts of stuff. All you have to do is call the model with a query and a set of functions defined in a functions (now tools) parameter, and the model can choose to call one or more of them.
If it does, the content will be a stringified JSON object adhering to your custom schema (the model may hallucinate parameters, but hopefully it won't if we take some precautions), and you parse that string into JSON. This is the jump we're making: before, we were working only with strings; now we're working with JSON. Then you call the model again, appending the function response as a new message, and let the model summarize the result back to the user. The essential idea of function calling is to give the model the ability to call functions, with JSON as a structured way to describe the inputs to a function that we've explained to the model, so that we have a way to control the uncertainty that comes with working with functions inside a prompt string. If that's confusing, don't worry, let's look at an example.

This is the official example from the documentation; I'll go through it quickly, because afterwards I have my own example that follows along with the create-directory stuff we were doing. Essentially, there's a function that gets the current weather. It's a toy function: depending on the location input (just a string), it checks whether it's Tokyo, San Francisco, or Paris and returns a JSON dump of a dictionary with three keys: location, temperature, and unit. It's just a toy example to demonstrate how OpenAI function calling works. Then you write a function called run_conversation. There's a list of messages from the user, in this case just one: "What's the weather like in San Francisco, Tokyo, and Paris?" As you can see, this involves multiple calls to that same function. Then you create a list called tools, and in it you put a dictionary with information about the function you want the model to learn to call: the function is called get_current_weather, you describe what it does (it gets the current weather in a given location), and you describe the parameters: the type is object, the properties are location (a string, with a description of what it means) and unit (same idea). These are the explanations for the function's parameters, so you should describe them. Then you set the required key of the dictionary, which says that the only required input is the location; you don't necessarily have to pass the unit. Finally you make the OpenAI call: below the messages you add the tools, and you set tool_choice to "auto", which is the default, but we include it so you can see it.
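For reference, here is a sketch of that toy function and its tool schema, paraphrased from the official documentation example described above (the canned temperatures are placeholders):

```python
import json


def get_current_weather(location: str, unit: str = "fahrenheit") -> str:
    """Toy function: returns canned weather data for a few known cities."""
    if "tokyo" in location.lower():
        return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
    if "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
    if "paris" in location.lower():
        return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
    return json.dumps({"location": location, "temperature": "unknown"})


tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]
```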
You get the output from the model with response.choices[0].message, which accesses the model's reply, and then you check whether there are tool calls inside that output, i.e. whether the model decided to call any tool. If there are, you look at the available functions: in this case the only one is get_current_weather, stored in a dictionary whose key is the function name and whose value is the actual function we defined outside. Then you append the response message (the entire reply from the model) to the messages list and loop over the tool calls, which come as a list. For each tool call you get the function name, look up the function to call in the available_functions dictionary by that name, and get the arguments by calling json.loads on the arguments attribute of tool_call.function. Then you get a response from that function call by calling the function with those arguments, and you append the information about that call to the messages list. And there you have it: you can keep going, extend the conversation, do all sorts of things; this is just a template structure you can evolve, but it's essentially how you call functions using the OpenAI API. Here I ran the example so you can see it, and in the contents we can see that it did call the function: the temperature is this in Tokyo, that in San Francisco, and so on. So we were able to call the function effectively thanks to this little setup.

Now, this was all great, wonderful, marvelous. We can inspect the output we got and see the string, and now we can take the approach from before, where we were pasting all the functions into a prompt string, and redo it with OpenAI function calling. Now that we know more or less how function calling works, let's see how our previous agent would look with this approach. What I'm doing here is setting things up properly for OpenAI function calling: I have the create_directory function, and all I had to do was give it something to return when it's called; before, it just created the directory, but now it returns json.dumps of the directory name.
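Continuing the sketch above (same assumed tools list and toy function), the handling loop described here looks roughly like this with the openai>=1.0 SDK; the model name is the one the documentation example used at the time and is an assumption here.

```python
import json
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=messages,
    tools=tools,          # the schema list defined in the previous sketch
    tool_choice="auto",   # "auto" is the default; shown explicitly as in the docs
)
response_message = response.choices[0].message

if response_message.tool_calls:
    available_functions = {"get_current_weather": get_current_weather}
    messages.append(response_message)  # keep the assistant turn in the history
    for tool_call in response_message.tool_calls:
        function_name = tool_call.function.name
        function_to_call = available_functions[function_name]
        function_args = json.loads(tool_call.function.arguments)
        function_response = function_to_call(**function_args)
        messages.append(
            {
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": function_response,
            }
        )
    # A second call lets the model summarize the tool results back to the user.
    second_response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106", messages=messages
    )
    print(second_response.choices[0].message.content)
```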
I'm going to do this with just one function, create_directory, to simplify things a little, or maybe because I got bored and lazy yesterday; it could be one or the other. So: the type is "function", and inside this dictionary (which is the tool) there's a "function" key where I give the name of the function, a description of what it does, and the parameters: a dictionary with the type, the properties (inside of which I describe the inputs to the function, in this case just directory_name, its data type, which is string, and a description saying it's the name of the directory we're going to create), and what's required, which here is the directory name. Then we put that inside a list, and that's our tools.

Now that we have that, we set up a run function just like run_conversation from the docs; I'm calling this one run_terminal_task. There's a messages variable, and in it I give the role, which is user, and the content, which is the message; this replaces the task-description prompt we were writing before. Now all we have to do is give the task; we no longer have to write all that extra prompt machinery hinting to the model that it should call functions, because that's handled for us. I wrote: create a folder called "lucas the agent master"; let's change that again to "lucas the super unoriginal joker". That's our simple task, and we'll test whether we can successfully call the function. I pass the tools, which is just a list with the create_directory tool; if you don't remember, that variable is simply the dictionary we defined above (I actually defined it twice, which was silly, so I can delete one). We get the response from the model, passing the tools, and we set tool_choice to "auto" even though auto is the default, just because the OpenAI documentation does the same. Then I take the output, check whether there are tool calls, and if there are, I set up the available_functions dictionary, append the response message to the messages list, and loop over the tool calls; in this case there will be just one, because the task is only to create a directory. I get the function name, the function to call, and the arguments, and then I actually call the function. I append the result to the messages list, since that stores the history of our conversation with the model, and there's a second response variable, which is just an indication that we could extend this and continue the conversation; there are easier ways to set up this kind of loop, and we'll see a bit of that in a second with LangChain. Then we run it and see what happens. We get an output saying the folder has been created, and if I come here and run ls, we can see that "lucas the super unoriginal joker" was indeed created. So we successfully called that function using OpenAI function calling, and that's great.
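Along the lines of what's described above, a sketch of the single create_directory tool and its schema might look like this; the descriptions are paraphrased and the notebook's exact wording may differ.

```python
import json
import subprocess


def create_directory(directory_name: str) -> str:
    """Create the directory and return a JSON string so the model gets a result back."""
    subprocess.run(["mkdir", "-p", directory_name], check=True)
    return json.dumps({"directory_name": directory_name})


tool_create_directory = {
    "type": "function",
    "function": {
        "name": "create_directory",
        "description": "Create a directory given a directory name.",
        "parameters": {
            "type": "object",
            "properties": {
                "directory_name": {
                    "type": "string",
                    "description": "The name of the directory to create.",
                }
            },
            "required": ["directory_name"],
        },
    },
}

tools = [tool_create_directory]
```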
That's awesome, that's amazing, that's beautiful: OpenAI function calling is just great. Now, if you know a bit about this, you're probably thinking "but what about Pydantic?", and yes, that's the next level of control we could add, but I find it educational to present it this way, because we can organize this discussion about agents in terms of layers of complexity and the problems each layer solves. At this point we can ask ourselves: that was still quite a lot of setup just to give a model access to a tool, right? We had to set up a loop and organize how the tool calling is done, and at least to me it feels like we're juggling a lot of things. This is where I think LangChain gets interesting. What I'm going to do is set up a similar approach with LangChain and see what benefits we get.

First, the imports. From langchain.tools I import tool, which I'll use as a decorator to turn a regular Python function into a tool that an agent can call. Then I set up access to an OpenAI model using the ChatOpenAI class from LangChain. Then I import AgentExecutor, which takes care of the agent's runtime (we'll see what that means in a second), and some prompt templates, which abstract away parts of the prompt you write to the model. Then there's format_tool_to_openai_function, which turns a LangChain tool into an OpenAI function; that's useful because LangChain makes these interchangeable: you can define tools in LangChain, define tools as OpenAI functions, and use this helper to go from one to the other, and OpenAI functions are great, as we just saw. There's a similar helper for formatting the agent's messages into OpenAI function messages. Finally, we set up an output parser; remember, an output parser is something that cleans up the output of an LLM, and here we parse it as OpenAI function output. We'll see what that means.

Now I create a tools variable and put our function inside a list; as you can see, I refer to the actual function name, because the decorator takes care of transforming that function into a tool. I initialize my model with temperature zero, because I want a model that's very precise. I create a prompt template: there's a system message that says something fairly general, like "you are a powerful assistant that helps users perform tasks in the terminal" (so I made it a little specific to our use case), and the template takes the user's input, which goes into the "input" variable. Then there's a MessagesPlaceholder for the agent_scratchpad: remember how before we were appending things to a messages list? This takes care of that appending as we go; if the agent goes through a loop of actions, the intermediate steps get stored here.
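A sketch of that setup is below. Import paths follow the LangChain releases from around the time of the video (late 2023) and may have moved in newer versions; the system prompt wording is paraphrased from the captions.

```python
import json
import subprocess

from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import tool
from langchain.tools.render import format_tool_to_openai_function


@tool
def create_directory(directory_name: str) -> str:
    """Create a directory given a directory name."""
    subprocess.run(["mkdir", "-p", directory_name], check=True)
    return json.dumps({"directory_name": directory_name})


tools = [create_directory]

llm = ChatOpenAI(temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a very powerful assistant that helps users perform tasks in the terminal."),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
```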
Now we bind the model to the tools by calling the bind method on the LLM we initialized, passing functions equal to a list where we call format_tool_to_openai_function on each tool. That's nice because it's a simple way to define a tool and then just call one function to turn it into an OpenAI function; we could stay entirely within LangChain, but we can also go from LangChain to OpenAI functions, which I like because OpenAI functions are powerful. Then we create the agent. The agent starts with a dictionary that describes what happens to the inputs as they come in: the input (remember, it's called "input" because that's the variable defined in the ChatPromptTemplate) is passed along by a little lambda function; since the input to the agent is a dictionary, we access its "input" key. The agent_scratchpad (the variable in the prompt template's MessagesPlaceholder) is built by formatting the intermediate_steps into OpenAI function messages; intermediate steps are nothing more than the steps between the moment the agent receives an input and the moment it produces an output. Then we use the pipe symbol, which is LangChain's LCEL, the LangChain Expression Language: a powerful way to combine agents, prompts, models, and output parsing into pipelines in a simple way, because you just use the pipe to connect things. So I'm saying: I have an agent that does this to the input, then I connect that to the prompt, then to the LLM that has the tools bound to it, and then I parse the result with the OpenAIFunctionsAgentOutputParser, which is one of many output parsers in LangChain.

Then I put everything inside something called an AgentExecutor. In LangChain you can set up and control the agent's runtime yourself, but that can be a bit tricky and annoying (you have to write a while loop), so AgentExecutor takes care of error handling and a bunch of other things so your agent just runs smoothly. In the AgentExecutor we pass the agent and the tools, and we set verbose to True so we can see what's going on while the agent solves the task. Now we give the agent an input, which is going to be our task description; this time I'll say "lucas the random joker", and we invoke the executor, passing that as the input so it flows through the pipeline. If I run this, there we go: create directory, beautiful. If I come back here and run ls, we can see the folder was created successfully. And we could add many more functions just by writing another function, adding the @tool decorator, and appending it to the tools list, and that's it.
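Putting the pieces described above together, the LCEL pipeline and executor might look roughly like this; it continues the previous sketch, and the task name in the invocation is made up.

```python
llm_with_tools = llm.bind(
    functions=[format_tool_to_openai_function(t) for t in tools]
)

agent = (
    {
        # pass the user's task straight through to the prompt's {input} variable
        "input": lambda x: x["input"],
        # turn intermediate (action, observation) steps into OpenAI function messages
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke(
    {"input": "Create a folder called 'lucas-the-random-joker'."}
)
```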
We've just added another capability to our agent that we can use for other things, which I think is awesome. Actually, let's do something a bit unscripted right now: I'm going to write create_file, which takes a file path and uses subprocess. We do have to add a docstring, a little description of what the function does, for the tool to work: "a function that creates a file given a file path". Perfect. And let's also return a JSON dump of the file path. Great. Then I add create_file to the tools list. Now let's change the task so that instead of something very simple like "create a folder called this", we say: create a folder called "lucas the random joker true" and, inside this folder, create a file called "the 10 random rules of comedy". Let's see if that works; we're testing it live. All right, perfect: the folder was created, and if we run ls on "lucas the random joker true", the file was created too. Awesome.

Okay, cool. All right folks, that's it. In this video we worked through a very silly and naive way to define an agent by just putting functions inside a prompt string. I didn't do it in the most optimized way possible, but the point was to explain a concept: if you put functions inside a prompt and show them to a model, the model can understand how to call them, and it can structure a problem as a list of steps where each step is a potential function call. I used these terminal actions as examples because they're simple to understand as tools for an agent. Then we saw how OpenAI functions are a way to make the model better at calling functions and structuring their inputs: you just give it a JSON schema with an explanation of what the function does, its properties and parameters, and the model takes care of building the proper input to that function for whatever problem you give it. We showed how to organize that for our simple create_directory tool, but we could have added whatever we wanted, and we set up a similar loop around OpenAI function calling. Finally, we used LangChain to create tools and an agent, leveraging the tool decorator together with format_tool_to_openai_function, and we put everything into what's called a runnable: if we check type(agent), it's a RunnableSequence. Runnables are the building blocks in LangChain that let you build components, chains, and so on; this one is a runnable because it's built with the LCEL interface, which, as we learned, is LangChain's way of composing building blocks (prompts, large language models, function output parsing, agents, and so on) into interesting applications. We organized it so that we bound the model to the tools, and when we ask the agent to perform one or more actions, it properly calls the functions, just like we wanted.
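A sketch of that extra tool, assuming the same subprocess-based approach as before; the docstring matters because LangChain uses it as the tool description sent to the model.

```python
import json
import subprocess

from langchain.tools import tool


@tool
def create_file(file_path: str) -> str:
    """A function that creates a file given a file path."""
    subprocess.run(["touch", file_path], check=True)
    return json.dumps({"file_path": file_path})


# Adding it to the agent is then just: tools = [create_directory, create_file]
```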
And if we want to add more functions and functionality here, we don't have to write any JSON schemas or anything like that; LangChain takes care of that for us. We can just add the @tool decorator to some other function, append it to the list of tools, and use it. So I like LangChain a lot. There are many other frameworks we could have used, and maybe we'll explore some of those in the future. I definitely want to do more videos about how to build complex research assistants, learning assistants, personal assistants, reporting agents, all sorts of stuff, either using the OpenAI Assistants API, LangChain, AutoGen, or maybe I'll take a look at TaskWeaver. Let me know in the comments what kinds of frameworks you're interested in. Thanks for watching, don't forget to like and subscribe, and see you next time. Cheers!
Info
Channel: Automata Learning Lab
Views: 3,397
Id: v1tyQtncsE4
Length: 46min 17sec (2777 seconds)
Published: Tue Dec 26 2023