Autonomous AI Agent Swarms | COMPLETE Tutorial

Captions
Hello everybody! Are you interested in autonomous AI agents and swarm intelligence? Interested in seeing an architecture that could potentially get that done, where we discuss swarms and look at some code to make it happen? Then you're in the right place, so stick around. My name is Alex, I'm from Aspn AI, and let's dive right in.

Recently we had a pretty big announcement from OpenAI regarding their new feature drops. On their Dev Day they announced GPT-4 Turbo, increasing the prompt context window to 128k tokens. That's amazing: roughly 300 pages of text, just huge, per prompt. They came out with the new Assistants API, which they described as something like their own private GPTs. GPT-4 Turbo with Vision came out, along with additional features for DALL-E 3 in the API, and a lot more. So yes, a lot of really cool updates. One of them, of course, is the Assistants API, which allows developers to build AI assistants within their own applications. That's really cool for certain tools tailored toward a market that doesn't mind having a lot of its data integrated with OpenAI, and it's a fantastic capability that's been added to their package.

Along with that, on the more consumer-facing side, they came out with GPTs, where you can create custom versions of ChatGPT. On the consumer side (chat.openai.com) you're able to use these GPTs, and from the developer perspective you're able to use the API to build custom assistants that use your documentation as an external source of knowledge and come pre-prompted with various skills. In combination with Zapier, you can create really cool automated workflows for small-business and no-code applications, where you can now link these assistants and GPTs with a bunch of different Zaps to get a lot of useful workflows completed.

Now, what is this resting upon, and how are we able to manage and get these things done? It seems like there is definitely a push toward having these agents rely on some type of cognitive architecture, their own brain, to get things done as they encounter different challenges out in the real world. The folks at Princeton University released, this past September, Cognitive Architectures for Language Agents, which they call CoALA. I'm not going to go too deep into it, but it's a really interesting and fascinating take on how to add cognitive abilities, reasoning abilities, and memory to our agent frameworks, so that agents can not just interact with a set of documents and an environment, but also update memory via reasoning loops that come to certain conclusions and constantly refresh information throughout the loop.

The CoALA architecture relies heavily on earlier work determining the various ways memory can interact within an agent framework: procedural, semantic, and episodic memory working in conjunction with a symbolic working memory. The agents figure out different paths and solutions along the way using code or a predetermined set of documents and information, plus the history of all their interactions on the episodic side. On top of that, the symbolic working memory uses the longer-term information stores along with feedback from the environment, spatial and visual systems, and other perception, in order to do what resembles cognitive ability.
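To make that memory split concrete, here is a minimal sketch of how those stores might be represented in code. The class and field names are my own illustration, not anything from the CoALA paper:

```python
from dataclasses import dataclass, field

# Illustrative only: a CoALA-style memory layout. Names are assumptions,
# not taken from the paper or any framework.
@dataclass
class AgentMemory:
    procedural: list = field(default_factory=list)  # skills, routines, code the agent can run
    semantic: list = field(default_factory=list)    # facts and world knowledge
    episodic: list = field(default_factory=list)    # history of past interactions
    working: list = field(default_factory=list)     # symbolic scratchpad for the current loop

    def end_of_loop(self, observation: str, conclusion: str) -> None:
        # A reasoning loop writes its conclusions back into the long-term
        # stores, then clears the scratchpad for the next cycle.
        self.episodic.append(observation)
        self.semantic.append(conclusion)
        self.working.clear()
```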
So you can see that we've progressed from the way we originally interacted with models, where we controlled them purely via prompts. The flow itself has been significantly improved from where we started, when we thought the LLMs would be doing all of our reasoning, meaning one large GPT-4 would handle absolutely everything and then give us an output in fully completed form. Well, that's not the case, and this is the reason we need chains and agents: so the system can interact with itself and run an internal loop that optimizes the output before the user actually sees anything. You can see an application of that where procedural, semantic, and episodic memory feed into a working memory; the working memory has workflows that improve the reasoning along the way; the same can be said for each of the memory modules, which also have self-improving workflows; and all of that then interacts with the user in the physical world or digital world, typically through a chat session.

All right, a little bit more on what exactly memory is, and this very simple chart shows it. We have an initial question, and that question goes into what would be a traditional LLM workflow: we have a prompt, we send it to the LLM, and we clean it up through the output parser, which gives us an answer. That answer is loaded and stored in memory, and memory maintains a past-messages database. When the user asks the next question, it is combined with some representation of the past messages, and the loop continues by pushing both the past messages and the new question further on.
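In LangChain terms (a framework we'll use hands-on later in this video), that loop can be reproduced in a few lines. This is a minimal sketch of the pattern, not code from the video:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(model="gpt-4", temperature=0)

# Each turn: prompt + past messages go to the LLM, the parsed answer is
# returned to the user and written back into the past-messages store.
conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory())

conversation.predict(input="My name is Alex and I work on agent swarms.")
conversation.predict(input="What is my name?")  # answered from memory, not the prompt
```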
A really clever, fantastic implementation of this is MemGPT, where you can create perpetual chatbots with self-editing memory. It's an advanced, very nice version of this very simple kind of memory management. With MemGPT you're able to create perpetual chatbots precisely because MemGPT is able to create, store, and manage the long-term memory of the LLM, so you can have an almost autonomous chatbot thanks to that.

Another very good implementation of this idea is from Plastic Labs with their tutor GPT. They combine human, machine, and learning all together, and they open-sourced their Bloom bot. Bloom handles this type of cognitive reasoning by taking the user's prompt, putting it into a chain that determines and improves a thought about the user's input, and, based on its predictions, updating the prompt body to give a much better cognitive response. Every time this loop is completed, the response is updated with additional predictive reasoning, in order to optimize for and better anticipate what the user might say next.

You can see how they implemented this in their Bloom bot. From the diagram: in Discord there's the initial user input, which starts with the first chain and gives, in essence, a conversation starter. The next chain gets an additional thought added to the input, and through processing in the background and using conversation summary memory, you're able to combine various reasoning workflows into the actual response. So by the time you're sending the response over to the human at the end of your chain, it includes various thought improvements and optimizations: what the bot thought would be a better way to ask the question, an analysis of certain parts of the question, then an answer, then an analysis of whether that answer is good enough. You can run through various chains before you give output to the user in order to optimize that output.

You can do that very well with the conversation summary buffer that was mentioned in the list for Bloom bot. This is a LangChain implementation: it keeps a buffer of recent interactions in memory, but rather than completely flushing older interactions, it compiles them into a summary and uses both. A very clever mechanism.
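A minimal usage sketch of that LangChain class, with placeholder conversation content:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

llm = ChatOpenAI(temperature=0)

# Recent turns are kept verbatim up to the token limit; older turns are
# folded into a running summary instead of being flushed outright.
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=200)

memory.save_context(
    {"input": "Explain swarm intelligence."},  # placeholder turn
    {"output": "It studies decentralized, self-organized systems."},
)
print(memory.load_memory_variables({}))  # running summary + recent buffer
```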
Then there's MetaGPT, another fascinating layer on top of it all: a multi-agent framework. The software assigns different roles to GPTs to form a collaborative software entity for complex tasks. So we now have the various GPTs that we can make, and we can combine all the different roles to create a collaboration between them. In MetaGPT's original use case, we're creating a software company: the code itself produces a mimicry of a software company, realizes the company needs a boss and a product manager, and deduces all the different roles and what each of those positions would do. After rolling that out, you can see that this virtual company is able to create virtual content, gather feedback, and run a recommender mechanism and a user mechanism. So we have the ability to have MetaGPT roll out multiple GPTs that utilize advanced memory modules to come up with all of these different schemas, which can then be used throughout the agent swarm or the agent process.

The way we would handle that is through this architecture. We have a data store with a memory layer: our traditional SQL and vector databases, plus documents and links. The memory modules interact with that data, and at the bottom we have procedural, semantic, episodic, and predictive memory, plus higher forms of memory where we do collaborative memory modules, memory consolidation, and forgetting. Forgetting is going to be important, almost like simulating the brain's sleep cycle: we're going to have to forget certain things as well, with algorithms that remove information from the SQL and vector stores that isn't really relevant or necessary.

From the memory layer we move into the abstraction layer that I call the agencies. The agencies contain various forms of agents that, combined together, form a logical set or logical group that would handle data collection, user acquisition, cross-platform synergy, and governance, and also build toolkits. An agency, in essence, contains all the various agents that could be deployed to build tools, and even other agents, themselves. So we have various agents within the agencies that can be put together to do news aggregation, content optimization, and user interaction; we have LLM fine-tuning agents and self-replicating agents. The agencies are able to organize and deploy various functional agents, which then use various tools that help the agent complete some task: search tools, SQL, code. Tools that self-replicate, and agents that can create additional tools, would also be an important part. The agents and the tools also use prompts, and we can have tools create prompts as well.

On top of the prompts we have the various prompts creating the actual content; then we deploy the content to some platform; and then we have a UI/UX layer where we use a web app, a chatbot, or a mobile app to interact with the system. All of this is handled by core controllers that manage each of these layers and their combinations. And then we have what I call nano auto-tuned LLMs: micro LLMs that handle each of the different steps and processes. Each step and each workflow calls a very specific LLM designed to handle a very specific set of functions: a database-query-optimized LLM, an LLM specifically optimized for vector indexing, memory module management, selections (agency selection, tool selection), and so on, all of them nano auto-tuned. I'll show you what I mean by auto-tuned shortly.
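As a sketch of that routing idea, here is one way a controller might dispatch each workflow step to a task-specific model. Everything here (registry keys, model names) is hypothetical, invented for illustration:

```python
# Hypothetical dispatcher for "nano" task-specific LLMs. The model names
# are invented; nothing like this ships in any library.
NANO_LLMS = {
    "db_query": "nano-sql-optimizer",
    "vector_indexing": "nano-vector-indexer",
    "memory_management": "nano-memory-manager",
    "agency_selection": "nano-agency-router",
    "tool_selection": "nano-tool-router",
}

def pick_model(task_kind: str) -> str:
    # Fall back to a general-purpose model when no specialist exists yet.
    return NANO_LLMS.get(task_kind, "gpt-4")

print(pick_model("db_query"))        # nano-sql-optimizer
print(pick_model("summarization"))   # gpt-4 (no specialist tuned yet)
```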
Now, how do we get all of this working together and under control? You're going to need some kind of an intelligence core. What is the core? The core is going to be the command-and-control structure of our swarm. We need a main intelligent core to deal with all of the data distributed among the agents, and the intelligence core is going to need to apply a process called stigmergy. That's a very interesting concept that relates heavily to nature (we'll go a little deeper shortly), where creatures and organisms leave certain traces or signals in an environment for others to pick up, oftentimes pheromones. You can see how bees, and even certain fungi, are able to communicate with each other by leaving traces in their environment. This is going to be a very important and interesting concept to apply, I feel. We're also going to need self-organization, and we'll have to handle adaptive clustering and consensus decision-making among all the different agents, and all of this together is going to be done with assistance from stigmergy.

So stigmergy, again: what is that? It's communication through the digital environment. Agents leave markers and respond to informational traces. It's a mechanism of indirect coordination, where one action stimulates the performance of a subsequent action. The way we can manifest this is through the modification of a shared environment that informs the behavior of other agents: when one agent updates the environment, that acts as a stimulus for another agent to take some type of action. These are digital markers, and we can give them different intensities and different lifespans; as some decay, agents are able to monitor the changes in the environment. Agents will be able to see and sense what is happening based on these signals, and that can lead to all kinds of feedback loops, allowing the agents to select tasks dynamically. This environment itself is going to need some kind of a signal space where all the agents can be comfortable and confident that the information there is real, which could potentially introduce some blockchain tech. So there are various ways to implement all of this stigmergy: different types of pheromone-like systems for the agents to understand what they're doing and where they are.

How can we implement this? One framework that I've been working on and conceptualizing is NanoGPT. NanoGPT is a framework with a swarm intelligence core that handles all of our different controllers and provides the central signal space. That core is able to deploy various agencies, which interact with or are controlled in some way through user interfaces, and those agencies then deploy agent swarms: self-autonomous, self-controlled agent swarms that are able to interact with the internet, intranets, the dark web, and all the different content hubs out there, understand what's going on, and start to produce content and results based on their discoveries.

Let's take a look at some of the fundamental principles of the core and how that might play out. We have different core controllers that interact with a single, central shared signal space, as well as with each other. We'll have cores that manage the different agencies; an Oracle that provides various types of supporting analysis; the various signals or pheromones, with a controller for them; a controller for all of the nano LLMs, able to auto-tune and fine-tune them; the memory core controllers; the executives that specify actions, to-dos, and priorities; and the optimization core that optimizes based on various templates like SEO, code, or A/B testing.

Now, what's going on here? We have an agency manager, and the agency manager will contain various agent types. What kinds of agent types do we have? We can have a scout agent, a creator agent, an optimizer, a distributor, an analyst, an executive, an oracle. And the agents will have a certain set of properties: a template ID, an agent type, initial behaviors; we can set the search depth and the content scope. So there are various data structures for the various agent types, and this agency core will control all of that.
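Those properties map naturally onto a small data structure. This is my own illustrative sketch of such an agent template; the field names follow the list above, the types and defaults are assumptions:

```python
from dataclasses import dataclass, field

# Illustrative agent template; fields follow the properties named above.
@dataclass
class AgentTemplate:
    template_id: str
    agent_type: str                          # "scout", "creator", "optimizer", ...
    initial_behaviors: list = field(default_factory=list)
    search_depth: int = 3                    # mainly meaningful for scouts
    content_scope: str = "general"

scout = AgentTemplate(
    template_id="tmpl-scout-001",
    agent_type="scout",
    initial_behaviors=["scan", "identify", "report"],
    search_depth=5,
)
```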
The Oracle will have the various protocols: a constitution, a contradictor, a rogue agent that tries to break things and thereby lets the system self-heal in some way. We have the optimization core, which goes through various workflow loops to update our prompts and update our knowledge. You can see that for a task or a chat we'll have a prediction, a comparison, and a learning chain, each with its own nano LLM that can be applied, plus another workflow on top that assembles datasets to normalize and auto-tune these nano LLMs as the need arises, as certain agents determine that there is now a significant need to improve functionality, or a potential to improve certain cognitive abilities at various stages, by updating the LLM with new information. The optimization core handles all of that. And of course we have the memory core, which deals with all of our various memory forms.

Then there's the signal space, the very interesting central signal space. All of these different elements, and the agents themselves, leave artifacts in the shared signal space, and the signals have various characteristics: a type, an intensity, a decay rate, a timestamp, and metadata. Each signal carries that data scheme. To use an analogy: picture the thickness of a border and the intensity of a color; as the signal diminishes, the color lessens and the border reduces in thickness, until the signal itself starts to disappear. The bigger the signal, and the stronger its various elements, the more it drives how our agents interact and discover where the focus needs to be. The signals themselves can carry all kinds of qualities and descriptions, and we can apply different types of indexes to improve the interaction with all of the signals.
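Here is a small sketch of such a signal following the scheme above (type, intensity, decay rate, timestamp, metadata). The exponential decay model and the expiry threshold are my assumptions:

```python
import time
from dataclasses import dataclass, field

# Sketch of a stigmergic signal per the scheme above. The decay model
# (simple exponential) and the threshold are assumptions for illustration.
@dataclass
class Signal:
    signal_type: str                    # e.g. "content_opportunity"
    intensity: float                    # initial strength when deposited
    decay_rate: float                   # fraction of strength lost per second
    timestamp: float = field(default_factory=time.time)
    metadata: dict = field(default_factory=dict)

    def current_intensity(self) -> float:
        elapsed = time.time() - self.timestamp
        return self.intensity * (1.0 - self.decay_rate) ** elapsed

    def expired(self, threshold: float = 0.05) -> bool:
        # Faded signals are ignored by agents and can be garbage-collected
        # from the shared signal space.
        return self.current_intensity() < threshold
```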
So we have the central core, and the central core deploys all kinds of different agency templates: agent replicators, LLM auto-tuning, the content agencies, the toolkits. All of this is managed in some way by a user interface of various forms, and there's actually huge room for innovation in how we interact with these different swarms. As we saw, there are different agent types: the scout, the content creator, the engagement optimizer, the oracle, the executive. From the agency template we then get the deployment of a swarm. What does a swarm consist of? A swarm will have an executive that delegates and an oracle that consults. Then we have the engagement modules; the scout up front; the content creator; the analysis modules; the various distributor agents that figure out the most optimal time and platform to put the content out; and the engagement module that interacts with all of the users. All of that is handled by the different auto-tuned nano LLMs, with a process that normalizes the data and feeds it back.

What would that look like? There's a delegation to the scout up front to scan and identify something. Once the scouts identify it, they report back on, say, what kind of content would be really good and optimal; the creation modules then go to work and start to create the content; and the analysis runs consistently on all of the interactions and functions to see if anything can be improved, and so on. You can see how that swarm then gets deployed to a particular asset. If we take YouTube, for example: a swarm has already sent out a bunch of scouts to find and discover what needs to happen; the scouts call over the content creators; the content creators call their executives; and the executives direct what to do with the help of the Oracle's analysis. Almost like a cell, they start to combine various forms and methods, and the way they interact with each other is going to be very interesting, because I have a feeling there will be some emergent technology, or emergent effect, coming out of this that we'll have a hard time predicting. These swarms deploy out to the various content hubs, figure out what the most viral or most interesting things are, and then start to activate different workflows for the additional sub-agents and sub-agencies that can interact with that content or publish content of their own.

So, as you can see, we do have the groundwork and all of the fundamentals necessary to come up with self-autonomous agent swarms. When you think of that, I'm pretty sure your mind, like mine, conjures something scary that could resemble something like this, or even this. But is it really that scary? Is this really frightening? I think it's going to be even worse, even more frightening: it's going to be something like this, and like this.

OK, so let's see how we can get this done in the practical sense. Let's look at LangChain. Quick reminder of what LangChain is: it's a really awesome framework consisting of a bunch of wrappers around functionality to make calls to LLMs, initiate tools, create agents, and do some really advanced stuff on top. You can see all the different layers they have in LangChain, and what we're really trying to do is create flexible conversation patterns with various bots interacting with one another. We can actually implement that. You could implement it, of course, with the Assistants API, where you give OpenAI all your data and all of the processing and they just do it all for you. That's awesome, and then you'll get cool replies like "apologies for the inconvenience, but the files are not accessible." Brilliant. But if you really want to get it done and make it work, we have to take a look at agents in LangChain.

LangChain agents are awesome. What are they? There are various different types of agents we can implement, and there's a lot of cool stuff we can do with them: use toolkits, use OpenAI functions, access intermediate steps, create custom agents, build a custom agent with tool retrieval. Very important, very interesting. So how can we do this? Let's take a look at an agent swarm in LangChain.

First we install all the necessary packages and set up our API keys. Then we load the different LangChain modules: the chat models, the agents, the various tools, and utilities like the DALL-E image generator; we import prompts as a prompt template, and then our chains. We designate our LLM as GPT-4 and set the temperature. Now we specify the tools. One way to load tools is to specify the various tools in our toolkit directly, where each tool has a name, a function, and a description. The descriptions are very important: the LLM uses them to figure out which tools to use along the way. Then there are the functions themselves, which is how I actually get them to run. Another, newer way to do it is to specify a toolkit and just load the tools into it; in our case we're going to be using the search API and the DALL-E image generator, and we pass in the LLM we defined earlier. Then we define an agent by calling the initialize_agent function, specifying the tools we defined, the LLM, and the agent type.
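A minimal, self-contained version of that setup might look like the following. The search function here is a placeholder stand-in, not the actual search API wrapper used in the video:

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4", temperature=0)

# Placeholder tool: a real setup would wrap an actual search API here.
def web_search(query: str) -> str:
    return f"Top search results for: {query}"

tools = [
    Tool(
        name="search",
        func=web_search,
        # The agent reads this description to decide when to call the tool.
        description="Useful for finding current information on the web.",
    )
]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    early_stopping_method="generate",
)

print(agent.run("What is the price of Bitcoin?"))
```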
As I showed you, there are various agent types we can use; here we specify the agent type, set verbose=True, and set the early stopping method to "generate". What that really means is that, when stopping early, the agent does one final pass through the LLM to generate the output; you can also use "force", which just returns a fixed string. We then run our agent, pipe it into an output, and I asked it: what is the price of Bitcoin? Summarize any recent news about Romania. Give me a photo of Bitcoin with a rocket flying by on the way to the moon. And what's the temperature in Switzerland?

If we take a look at the actual chain: we enter the new executor chain, and first it needs to gather information about the price of Bitcoin. Its action is to use the search API, and its input, which it determined on its own, is the price of Bitcoin. The observation is quite complete and very well done. Having finished that step, it says, OK, I've gathered that information, let's continue on and use search to find recent news about Romania. There are our observations: NATO member Romania finds more drone fragments; Ukraine's Zelensky visits neighboring Romania; and the worst news of all, Tonga ends the Rugby World Cup with a win against Romania. Dang it. Then come additional thoughts: it has gathered the information, it moves on, and says, here's the image of the rocket flying by the moon. Sure, you can do better, DALL-E, but there it is. Then it moves on to the temperature in Switzerland and gives us that. So the final answer at the end of the chain looks like this: the price of Bitcoin, recent news, a photo of Bitcoin, and the temperature in Switzerland. It finishes the chain, fantastic, and gives us this output, which we can then process any way we wish, or push into a second agent as its input. All kinds of cool things could be done there.

Now let's try this another way. We import ChatOpenAI, ChatPromptTemplate, and a string output parser to clean things up. We designate two prompts and create two prompt templates: "what city is {person} from?" and "what country is that city in? Please respond in {language}". Prompt one determines the city for whatever person we specify; prompt two determines the country based on the city from prompt one, and responds in the requested language. You can see how we can chain a bunch of different things together. We designate our model, ChatOpenAI, and create our first chain, which is easily done with a prompt piped into a model and then piped (that's what the | character means) into the standard output parser. Then we have chain two, which uses chain one to determine the city and an itemgetter function to pick out the language, piping all of that into prompt two, the model, and the output parser again. Now we run the second chain; we specify that our person is Trump and our language is French, and whatever this output says in French, it does look like it calls him the 45th president, plus some other funny words.
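For reference, a self-contained version of that two-step chain in the LangChain Expression Language of the time (structure as described in the video; exact prompt wording is approximate):

```python
from operator import itemgetter

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

model = ChatOpenAI()

prompt1 = ChatPromptTemplate.from_template(
    "What city is {person} from? Respond with only the city name."
)
prompt2 = ChatPromptTemplate.from_template(
    "What country is the city {city} in? Respond in {language}."
)

# Chain 1: person -> city
chain1 = prompt1 | model | StrOutputParser()

# Chain 2: feeds chain 1's answer in as {city}, passes {language} through.
chain2 = (
    {"city": chain1, "language": itemgetter("language")}
    | prompt2
    | model
    | StrOutputParser()
)

print(chain2.invoke({"person": "Trump", "language": "French"}))
```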
Now let's try something more interesting: a setup with one, two, three, four prompts. The first prompt designs a color palette for digital artwork and returns hex codes for the colors; we give it an attribute and it gives us a color palette. An attribute could be "warm", "cold", or anything else, even the color itself. The second prompt asks: what is a character or mascot that could represent a brand whose primary color is the one we just returned? It returns a brief character description and nothing else. The third prompt creates a concept for that logo, and the fourth drafts a brief for a digital advertisement campaign featuring the character built from everything prior. We create a model-plus-parser helper, which is where we plug our model into the standard output parser. Then we build the color palette generator chain, which creates and defines the different colors; then a character generator chain; then the logo concept generator; and you can see it pipes through all of the different prompts and chains to arrive at the campaign brief generator.

I gave it an input of "blue", we invoke the model, and look at the AI's reply: "Dive into Fun with Splash", a digital advertisement campaign. The objective of this campaign is to promote water-related activities through the friendly, energetic character of Splash, a sleek and vibrant blue dolphin. It gives us what the campaign is supposed to do, the target audience, and all kinds of other cool elements of what the campaign should be: different shades of blue will evoke feelings of depth and tranquility. A pretty interesting analysis.

Now, merging multiple streams: things are getting spicy. What have we got here? We have a planner, an arguments-for bot, and an arguments-against bot. The planner generates a base argument about a certain subject and creates a response. We then take that response and generate the arguments for the base response and also the arguments against it, and then pipe them both back in, combined, to generate a final response given the criticism. Really cool. So we create a chain with the planner, results one, results two, and the original response, and we get a final response. In this case we invoke the chain with my input, "Bitcoin", and here's our summary: it takes the pros and the cons and combines them all together. If you pause and take a look, it's actually really good, because it says that while Bitcoin has the potential to revolutionize the financial industry and become the future of currency, it's also crucial to acknowledge the cons and potential challenges: volatility, lack of regulation, scalability, energy consumption, limited acceptance. Very interesting how it did that. You can see how it combined everything into a really interesting summary based on what the various bots compiled on their own.
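That branching-and-merging pattern can be written as a compact LCEL sketch along these lines (prompt wording approximate, structure as described above):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

model = ChatOpenAI()
parser = StrOutputParser()

# Planner: produce the base argument.
planner = (
    ChatPromptTemplate.from_template("Generate an argument about: {input}")
    | model
    | parser
    | {"base_response": lambda x: x}
)

# Two parallel critics: one for, one against.
arguments_for = (
    ChatPromptTemplate.from_template("List the pros of {base_response}")
    | model
    | parser
)
arguments_against = (
    ChatPromptTemplate.from_template("List the cons of {base_response}")
    | model
    | parser
)

# Merge both streams and the original answer into a final response.
final_responder = (
    ChatPromptTemplate.from_messages([
        ("ai", "{original_response}"),
        ("human", "Pros:\n{results_1}\n\nCons:\n{results_2}"),
        ("system", "Generate a final response given the criticism."),
    ])
    | model
    | parser
)

chain = (
    planner
    | {
        "results_1": arguments_for,
        "results_2": arguments_against,
        "original_response": lambda x: x["base_response"],
    }
    | final_responder
)

print(chain.invoke({"input": "Bitcoin"}))
```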
All right, goodness. One quick last way to look at this, and an even more fascinating one: we go into AutoGen. With AutoGen we can create a group chat from various different agents. We import autogen and set up our configuration, which requires a config list file to be created here in the file section. Now let's get a swarm going. We designate a user proxy agent that acts as the admin and task coordinator, and now we're starting to get the actual swarm defined: I came up with a scout, a creator, an optimizer, an analyst, an executive boss, and an oracle. You can see the different system messages they have: "review and optimize the content for the boss", "evaluate the Oracle agent's suggestions and all agent discussions", "make strategic decisions". All combined, we put them into a group chat and then run the chat across the different agents. What's our input? "Create and optimize a clever and unique marketing campaign for my AI development business called Aspn AI." Simple enough, and here we go.

What does that look like? Let's scroll all the way to the top, just to save time. We set up the initial prompt, which is "create and optimize a clever marketing campaign", and the scout goes, OK, the initial name is "Aspn AI: Unleashing the Future Today", and gives us a style, a voice, and a form. The content agent takes the campaign name "Unleashing the Future Today" and creates a blog post, some social media content, a webinar marketing message, and various other messages. Then the optimizer, and this is where it gets interesting: campaign one said, fine, I'll just create the campaign "Unleashing the Future Today", but the optimizer decided on "Aspn AI: Pioneering Tomorrow's AI Today" and then optimized each piece of content that was created earlier. The analyst then provides actionable insights on what the optimizer produced, giving each area some insight. The Oracle takes that from the analyst and suggests various courses of action, and the boss says, OK, here are the strategic decisions for all of you, scout, content, optimizer, analyst: start doing these things; remember, communication is key; keep things updated. And they all go, "Understood, boss, I'll start researching." All right, good stuff, everybody's clear, and it gives them an update.

Now, I got into this fascinating loop that I might have to change the prompts to get out of: they keep patting each other on the back. One says, "Excellent, you guys are going to do all this", they answer, "Understood, we'll do all this", the boss says, "Excellent, I'm glad to see everyone's on the same page", and then there's more of the same, and they all go "understood" again. So I'm curious whether they're just going to stay in a loop of "good job", "understood", "good job", "understood".
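A trimmed-down sketch of that AutoGen group chat, with a reduced agent roster and shortened system messages (the full version in the video has more agents and longer prompts):

```python
import autogen

# Reads model endpoints and API keys from a local config list file.
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list, "seed": 42}

boss = autogen.UserProxyAgent(
    name="Boss",
    system_message="Admin and task coordinator. Make strategic decisions.",
    human_input_mode="NEVER",
    code_execution_config=False,
)
scout = autogen.AssistantAgent(
    name="Scout",
    system_message="Research the market and report campaign opportunities.",
    llm_config=llm_config,
)
creator = autogen.AssistantAgent(
    name="Creator",
    system_message="Create campaign content based on the Scout's findings.",
    llm_config=llm_config,
)
optimizer = autogen.AssistantAgent(
    name="Optimizer",
    system_message="Review and optimize the content for the Boss.",
    llm_config=llm_config,
)

groupchat = autogen.GroupChat(
    agents=[boss, scout, creator, optimizer], messages=[], max_round=12
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

boss.initiate_chat(
    manager,
    message="Create and optimize a clever and unique marketing campaign "
            "for my AI development business called Aspn AI.",
)
```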
Let's see another way to apply this: a research swarm. What would a research swarm look like? We set up our configuration here; we can specify a seed, and changing the seed changes the output from run to run. Then we designate an admin, who interacts with a planner to discuss the plan when plan execution is needed. We have an engineer, who follows the approved plan and writes Python and shell code (interesting!); a scientist, who follows the approved plan; a planner, who suggests the plan; and an executor and a critic. What we're trying to do is find papers on generative AI from arXiv from the last week and create a markdown table of the different domains. What that looks like is the agents start finding papers: the planner gives the engineer descriptions of what to do; the scientist and engineer write and review the markdown; and, as you see, the critic then provides feedback back to the engineer. Together they're actually able to generate some very interesting output: they come up with the code necessary to do this, and after running it they retrieve all the different titles from arXiv, like "Fully Quantum Auto-Encoding of 3D Point Clouds". Fantastic.

One last way to do this: we initiate another group chat, and this is an interesting approach where multiple agents handle communication among themselves to get a job done with a limited amount of information available to each. Let's take a look at this last case. We have agents in group A and agents in group B. Group A has a leader and team members: team member A2 has secret knowledge of X but not Y, and team member A3 has knowledge of Y but not X. Team B needs to find the values of X and Y and compute their product, but team A doesn't know that. The leader of B can communicate with the leader of A but cannot communicate with any of A's members, so A's leader has to communicate with its own members to get both values and pass them on to the other team, which is super fascinating. The group chat instructions are: everybody cooperate and help agent B in this task; team A has A1, A2, and A3, and team B has its own members. We designate the teams, specify what needs to be done, give it 30 rounds to work with, and say: we need to find the product of X and Y, and other agents have the information. So what does that look like? B1 sends the initial request to the chat: "Can you please provide the values of X and Y that your team members A2 and A3 know? We need them to find the product." A1 replies, "Sure, let me check with my team members. A2, A3, can you please provide the values?" A2 gives X, A3 gives Y, the two are multiplied, and B says, "Thank you, we got it," and it's done. How fascinating, how interesting.

We're going to dive a lot more into all of this; I just wanted to show you this fundamental, baseline way of getting these things done. In the next set of videos we're going to come up with custom agents that build custom tools and start to discover and run their environment and their world more autonomously. I'll end it on that. If you have any questions about LLMs, or have any interesting projects you want to discuss, reach out to me at Aspn AI. Thank you guys very much, and we'll see you in the next video!
Info
Channel: AspnAI
Views: 11,790
Keywords: ArtificialIntelligence, AIethics, AGIrisks, FutureofAI, ThoughtExperiment, TechnologyDebate, EthicalAI, AIfuture, langchain, flowise, openai, chatbot, pinecone, langflow, machine learning
Id: geLX30qax8Q
Length: 47min 1sec (2821 seconds)
Published: Sun Nov 12 2023