Stop using ChatGPT, build Agents instead - Maya Akim

Captions
AI agents are so polarizing: people are either afraid of them, they believe that they're going to be replaced by them, or they believe that they're going to become rich. ChatGPT I cannot even use anymore, that's how bad it has got. What's the biggest problem with not just agents but large language models? People don't trust them. Sam Altman is trying to regulate open source? It's not the best look when you start a company named OpenAI and then somehow it becomes for-profit and closed source. I believe that AI is too important to be left just to researchers and people sitting in high positions; we all need to be involved and we all need to give feedback.

This is an interview with Maya Akim, AI content creator and agent builder. Maya is completely self-taught: a year ago she was a beginner when it comes to Python and AI agents, and now she's building complex agent teams and even inventing her own workflows. So if you are new to agents and want to learn more about them, make sure to watch until the end.

How did you get into building agents?

I guess it was the hype around the time AutoGPT was published. Everyone on Twitter was talking about how they just built a website with it, and of course I also wanted to automate some of my stuff, and I tried and I failed. All of my videos before the AutoGPT video were getting like 100 views, and those were mostly family members I forced to watch, but this was the first video people really wanted to watch, and I was shocked. I remember having like 10,000 views at the time and thinking, oh my God. It was a video about how I tried to build a website with AutoGPT and it kept looping. So that's how I got into AI agents, and for a really long time I had a negative opinion about them because of AutoGPT.

One important detail is that the first agent team you built actually failed. So many people have this experience, and the next step is that they give up and don't build anything else. You didn't do that, you kept building agents. So what was the first team that actually clicked for you?

To be honest, my first question was: what can I use this for, what exactly can I automate? A lot of the examples on Twitter were somehow useless for me. I realized very quickly that I can process a lot of information with agents, because I guess that's what LLMs are for. I think the first team of agents I built that was successful was with CrewAI. I made a video about it, and I managed to extract some information about AI trends and what people are talking about, thanks to CrewAI.

So if someone wants to get into agents, would you recommend CrewAI or a different framework? By the way, if you want to get serious about building agents and be part of the future, definitely check out my workshop on how to build and deploy AI agents. It's available inside my community, where you'll also get access to people on the cutting edge of AI. If that sounds interesting to you, make sure to join; it's the first link in the description.

Ooh, that's a good one. I think it depends. I know that CrewAI currently doesn't have any interface, so if you don't know your way around a terminal, maybe AutoGen Studio would be better. And yesterday I was actually testing Devika, I think that's how it's pronounced, which is supposed to be a copy of Devin and has a graphical interface. So if you're a non-programmer I would recommend AutoGen Studio and Devika, and if you are a programmer, then I would probably recommend CrewAI.
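For readers who want to try the CrewAI route she recommends for programmers, here is a minimal sketch of a one-agent crew in the spirit of the AI-trends team she describes. It assumes `pip install crewai` and an `OPENAI_API_KEY` in the environment (CrewAI defaults to an OpenAI model), and CrewAI's exact argument names have shifted between versions, so treat it as a sketch rather than a definitive example.

```python
from crewai import Agent, Task, Crew

# A single researcher agent, loosely modeled on the AI-trends crew mentioned above.
researcher = Agent(
    role="AI trends researcher",
    goal="Summarize what people are currently talking about in AI",
    backstory="You follow AI news and social media discussions closely.",
)

overview = Task(
    description="Write a short overview of this week's most discussed AI topics.",
    expected_output="Five bullet points, one sentence each.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[overview])
print(crew.kickoff())  # runs the task and prints the agent's final answer
```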
Yeah, personally I'm not afraid of writing some code, but I know that for some people even basic programming can be intimidating. What I found, though, is that the AutoGen Studio setup is almost more difficult than the code.

True, I agree. That's another problem: a lot of these AI agent projects are built by people who say they're for non-programmers, and then I make a video calling it a no-code agent framework and people in the comments say, "but you still need to do programming." I'm like, no, it's the terminal, it's not programming, but I understand why they might think so.

Yeah, any time I mention that even non-programmers can do this, I get so many comments. What I've tried is telling people: if you get stuck, just please use ChatGPT, you will get unstuck. But so many people still underestimate the power of current LLMs; I get so many comments where, if I just copy-paste them into ChatGPT, I get the answer. It feels like it's not natural for people to ask LLMs for the answer.

Yeah, because I think we still haven't developed that habit. I myself, for example, don't use Google anymore, ever. I don't know about you, but I feel so inefficient when I try to find answers on Google. So when I have some technical problem, I used to try ChatGPT, but now I prefer Claude; I actually have a subscription, and I just prefer the way that chatbot sounds. And if I have some research to do, I use Perplexity, because it has a really nice internet-browsing feature.

Yeah, I use Google for images, because I think that's still the best way, but if I need to do research I usually use a combination of Perplexity, where I have the LLM set to Claude 3, and Gemini from Google, which is also pretty good.
And I'm in Europe, so claude.ai is not available. What I have to do is go to the Anthropic console for developers and use it there. It's fine, but it's still a bit more friction. At least I don't pay a subscription; I pay for the API usage, for the tokens I consume, so that's good.

Oh, that's good. You can also, I'm pretty sure, because Mistral is a French company, use their chatbot, Le Chat, I don't know how to pronounce it, but I think that's also a good chatbot you can use in Europe.

Do you think Mistral Large should be considered state-of-the-art? Do you think it's up there with GPT-4 Turbo and Claude 3 Opus?

Yeah, I think so. One thing I have to say is that so far Mistral has the best open-source 7-billion-parameter model, and there's this giant hype around model blending, and people seem to improve Mistral by blending it with other models. I think we're going to see great things from that company; I really love everything they're doing. I'm a huge fan, what can I say.

Yeah, it's crazy. They're what, nine months old or something? It's such a new company and they've already released models that are coming for OpenAI and Google. If you compare the amount of resources they have to these tech giants, it's crazy, right?

Exactly. Mistral 7B is just so much better than any other open-source model that we have, even bigger models like Llama with 70 billion parameters. When I use Mistral 7B I just see that it's a better model somehow; I enjoy it more.

Yeah, although Llama 3 is in development, so we'll see, maybe Zuck can cook something up. If Llama 3 is better than GPT-4, that would be insane.

Yeah, but you see, Google has almost unlimited resources and they still really struggle, so it's not a guarantee. But we'll see; I guess you're right, I'm also excited about Llama 3.

All right, do you want to go into the presentation?

Sounds good.

I made this tiny presentation just to cover some of the basics, because I feel like a lot of people don't really understand on a deeper level what an agent is. I also didn't understand for a long time, which made me quite confused and made me either overestimate or underestimate agents.

In the broadest sense, an agent is an entity with the capacity to act. It's as simple as that. A human, we can say, is an agent; so is a single software program, or an animal that acts autonomously, like a cat asking for a treat, or, as we now see with these AI agents, a machine.

Now, what does it mean to act? Because we do it every day, we don't really think consciously about it or care what it implies. This question of what it means to act obsessed people for thousands of years. Aristotle, around 350 BC, said that we deliberate not about ends but about means. What Aristotle is trying to say is that a goal on its own is a fixed thing: you are here and you want to become a Hollywood star; you have this goal and you don't necessarily have a lot of say in it,
the heart wants what it wants, right? So the road between where you are and where you want to be is a road of actions. It consists of smaller tasks, smaller goals, and this is pretty much what Aristotle was obsessed with. He also had some weird beliefs, like believing that men have more teeth than women and that whoever has more teeth will live longer, but at least on this point he was a pioneer: he was the first person we know of who thought about what it means to act.

Then you have Ramon Llull in the 13th century, and his wheels are something you can see even on YouTube, which surprised me. A lot of people consider Llull a father of computer science and theory, because those wheels were the first time humanity had some sort of logical operations. He was trying to find the absolute truth about life, and he was building these tiny algorithms that were supposed to bring him to the truth.

Then you have Blaise Pascal, who made the mechanical calculator that a lot of people claim is the first computer. It could only do additions and subtractions; I think Pascal's father used it for taxes and that was pretty much it, and it was forgotten afterwards. I think they struggled to produce it in larger quantities.

Then jump to the 19th century and Ada Lovelace, who wrote Note G, which many consider the first algorithm ever, and she even described the machine as a thinking or reasoning machine. So as you can see, it's 2,000 years of this idea slowly building.

Then you get to the 1950s and Alan Turing, whom people consider the father of computer science. He wrote the research paper "Computing Machinery and Intelligence," and one of the first things it asks is: can machines think? Turing had a lot of skeptics who believed it was impossible at the time, and that's why he devised the imitation game, which was supposed to convince people that machines can indeed sound like they're thinking, or can think, and it eventually became the Turing test.

Then six years later you have the Dartmouth conference, where the term AI gets used for the first time, and these are the founding fathers of AI. I thought this was really funny: they wrote a tiny manifesto saying "we propose a 2-month, 10-man study of artificial intelligence" in order to try to make machines use language, form abstractions, and solve problems that only humans can. So they believed that in two months they could achieve what AI is essentially still trying to do, which I think is a hilariously giant amount of optimism.

John McCarthy was one of the founding fathers, and thanks to him, not just him, there were other people, we got knowledge-based systems, programs like MYCIN, a type of knowledge-based system, or symbolic AI, that had its moment in the '50s, '60s, and '70s. It was the culmination of everything I talked about before, from Aristotle onwards, because at the time people believed that AI had to arise from rational thinking. For example, if you want to get to the airport, with symbolic AI you would have a bunch of
rules, like: you have to get into the elevator, then you have to call an Uber, then you have to drive for 15 minutes, and then that means you got to the airport. So it's a set of if statements: if this is true, then this; if that's true, then that. And that's pretty much what the first AI was about, symbolic AI.

People discovered that it's a problematic approach. It worked fine for the program I mentioned, MYCIN, which doctors at the time used to diagnose patients. It had around 600 rules, and it really could diagnose and interpret lab results and so on. But the problem is that if you ask a chatbot built on top of symbolic AI, "how long will it take you to get to Cerin?", and Cerin is a galaxy that I invented, it doesn't exist, it's just going to start from the beginning: "first statement, are you in the car?", or however you built it.

The problems with symbolic AI are, first of all, uncertainty: there's a lot of uncertainty in the world, and if you build an AI system with fixed rules to help you get to the airport, how is it going to account for a traffic jam somewhere, or a strike, or things like that? Then there's ignorance: the galaxy I just invented doesn't exist, but the symbolic AI wouldn't have a single clue that it doesn't exist; it would just start all over again. Then complexity: the world is incredibly messy and complicated, and it just doesn't work well for these rational, symbolic AI systems.

So what is the solution? Around the time people were building these symbolic AI systems, the first AI winter started, because the systems were really limited, and it lasted for a few decades. But while the funding dried up and the research seemed to die, in the background the field of AI went in a totally unexpected direction: it went into probability and statistics, and this is where we are right now. We have these neural networks, we have next-token prediction; without going into details, we can say that this is pretty much the main approach in AI right now. What seems to be a thinking machine, a thinking ChatGPT, is basically just constantly predicting the next token that makes the most sense.

Then there was a change of paradigm: the rise of deep learning, with its many layers, and the rise of reinforcement learning, where for the first time we see this idea that there is an agent, there is an environment in which the agent exists, and based on what it does it gets rewarded or punished. It's quite a shift from those first, let's call them AI agents, that were just following a bunch of rules. OpenAI pioneered the Gym Atari environments in 2016, I believe, where AI agents can learn from these cute little environments; I don't think it's available anymore, you cannot really use it.

So here's what I noticed. Before the '80s, you could say the idea was that an agent is an entity with the capacity to act, and it's intelligent to the extent that its actions can be expected to achieve its goals.
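To make the symbolic-AI point above concrete, here is a toy rule-based planner in the spirit of the airport example: hand-written if/then rules, nothing learned. The rules and state names are invented for illustration; note how it simply gets stuck on anything outside its rules, like the made-up Cerin galaxy.

```python
# A toy "symbolic AI" route planner: hand-written if/then rules, nothing learned.
RULES = [
    ("at home",       "take the elevator down", "at the street"),
    ("at the street", "call an Uber",           "in the car"),
    ("in the car",    "drive for 15 minutes",   "at the airport"),
]

def plan(start, goal):
    state, steps = start, []
    while state != goal:
        for (precondition, action, result) in RULES:
            if precondition == state:
                steps.append(action)
                state = result
                break
        else:
            # Ignorance: no rule matches the current state, so the system is stuck.
            return None
    return steps

print(plan("at home", "at the airport"))   # ['take the elevator down', 'call an Uber', 'drive for 15 minutes']
print(plan("at home", "the Cerin galaxy"))  # None -- it has no clue the place doesn't exist
```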
Here I added "our goals" in brackets, because ideally we would put our goals into machines and not the other way around. But after the '80s, with this paradigm shift, agents became more complicated, and you could say that they're intelligent to the extent that their actions can be expected to achieve goals given what they have perceived in the environment. And this is roughly where we are right now.

If you think about an AI agent, it should be able to act autonomously, ideally with a human in the loop as a way to control and guide it. Ideally it should have memory, because memory allows agents to display personalized behavior, so you can have your own personal agent that's tailor-made for you. It should be reactive: it should perceive things in its environment, whether that's text, audio, or visual input. It should be proactive, and we see this very often: agents have tools now, they can make API calls, they can do function calling, which makes them smarter. And they should have social ability, which is the ability to keep the flow going, to delegate, and so on.

Now, this is how I started the presentation, and this is the question I had in my mind as I was thinking about agents: what is it that an agent can do that a human cannot do? I think Stuart Russell, the computer scientist, answered this question perfectly in his book Human Compatible. He said that speech and text understanding also enable machines to do things that no human can do, not because of the depth of understanding, but because of its scale. I believe that this is the core of what agents can do that humans cannot: the scale. For example, you can have an agent scrape a bunch of information from Reddit, from YouTube, from your Gmail, whatever you want, and then ask it to summarize it and give you an output, like a newsletter, or a summary, or a drafted email. These things would take you forever: if the average reading speed is 200 to 300 words per minute and an average YouTube video lasts 20 minutes, an agent could analyze 10 YouTube videos and write a summary within a minute, whereas it would take you hours (ten 20-minute videos is roughly 30,000 words of transcript at a typical speaking pace, which is about two hours of reading at 250 words per minute).

Another great use case: imagine your life in the next five years. If you ask yourself this, you're probably going to imagine a few possible scenarios with a tiny bit of detail: I guess I want to graduate, I want to move to another country, maybe go to university, maybe get married, whatever. A few details and a few possible paths. Whereas an agent could brainstorm something like this for you without a problem, and it could give you a lot of details and a lot of interesting ideas.

So this is like chess, basically, where a human grandmaster can only think maybe 20 moves ahead, and you have Stockfish that can brainstorm thousands of different moves.

Exactly, and this is what agents, because an agent is built on top of an LLM, can do that humans just cannot. We cannot compete with them there. Now the next question: what is it that an agent can do that a single large language model cannot do?
I believe one of the main things is that accuracy is improved, because the agent goes through a couple of iterations, it can fact-check, and that brings fewer hallucinations and leads to improved accuracy. Actually, in that video you published about AI agents, Andrew Ng, I believe, talks about this: he showed that GPT-3.5 wrapped in an agent workflow is accurate something like 96% of the time, way better than zero-shot GPT-4.

Yeah, that's crazy. People don't realize what that means: you can take a worse LLM and, with the power of agents, make it better than a much more advanced LLM.

Exactly, that's the thing. Zero-shot prompting is what we all do; I do it all the time. I know I ideally should use a crew of agents, but it's still easier for me to type something into a chatbot. But if you think about it, GPT-3.5 is not impressive anymore, and yet look how accurate it can be; that's the quote I got from that video you shared.

Another thing I think agents can do that a single LLM cannot is offload decision-making and prioritization. If you have some request like "create an AI trend overview," you don't have to think about how you're going to accomplish this goal: the agents will write out smaller tasks, the steps that need to be taken. We even see it as we play with AI agents, whether in the terminal or wherever, that they brainstorm about your request, think about what needs to be done, and then decide which task has the highest priority. This is a very important step. I personally hate making these lists of what needs to be done, I get decision fatigue, and the fact that I have agents that can collaborate and do this for me is a huge help. And of course AI agents are not AGI; people shouldn't expect that of them.

I have to say, I did not expect it to start with Aristotle, but I love it.

Yeah, sorry, I might have gotten carried away, because I wanted to build up this idea of rationality and how it led to everything else.

It's fascinating, though. People thought AGI was just a few months away even 70 years ago. And another thing people don't realize is that the field of AI is much older than the average person thinks; just because people didn't hear about AI before ChatGPT doesn't mean nobody was working on it. The history goes back to Alan Turing and even before Turing. That's also the argument for why some people think AGI is still decades away: 70 years ago people were thinking it was right around the corner. Though obviously right now we have the best claim yet that it's actually closer.

Yeah, I looked at the timeline of all the smart people who were wrong about AGI. You have all these scientists and researchers claiming we're going to have AGI in two years or whatever, and the problem is that there are still a lot of scientific breakthroughs that need to happen for this to really happen, and it's impossible to predict when a certain breakthrough is going to happen.
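A minimal sketch of the iterate-and-critique loop mentioned above, the idea behind the GPT-3.5-with-agents versus zero-shot GPT-4 comparison: the same model drafts an answer, critiques it, and revises. The `llm` function here is only a stand-in for whatever chat-completion call you use, not a real API, so treat this as a sketch of the pattern rather than a specific framework's implementation.

```python
def llm(prompt: str) -> str:
    # Stand-in for any chat model (OpenAI, Claude, a local model, ...).
    raise NotImplementedError("plug in your preferred model here")

def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """Draft, critique, and revise instead of answering in a single zero-shot pass."""
    draft = llm(f"Answer this as well as you can:\n{question}")
    for _ in range(rounds):
        critique = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List factual errors, missing steps, or weak reasoning in the draft."
        )
        draft = llm(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the answer, fixing every issue raised in the critique."
        )
    return draft
```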
So that's why it's so hard.

Yeah. And one more thing I would add to your list of why agents are better than plain LLMs: tool usage. People don't realize that, sure, ChatGPT has some tools, like Code Interpreter or GPT-4V, but with agents you can have a dedicated agent that has access to only one tool, and then a manager can call it, or you can use far more tools; I've seen your video where you built like 20 different tools in AutoGen. With agents you can give them access to every single API that OpenAI will probably never implement into ChatGPT.

Yeah, that's the thing: you can always build your own agent depending on what you need, you can add custom tools, and that's great. I believe that eventually everything that is an app right now is going to become a sort of tool for an agent. If you think about it, this is what happened with the ChatGPT plugins; they were essentially apps. Unfortunately it didn't take off, I think it was still a little too soon, but I strongly believe that's the direction.

Yeah, there cannot be any friction. If somebody has to go manually search for a GPT, that's too much. It has to be that somebody is chatting inside ChatGPT and the LLM automatically sees, oh, there's this top-rated GPT that does exactly this, and it pulls it in, and the user doesn't have to do anything, because the average user will not go to the GPT Store and search for something specific.

For example, if I'm looking for some Python library to do something, I honestly would rather let ChatGPT recommend something to me than Google what kinds of Python libraries exist and scroll through lists of libraries that all look similar. I want ChatGPT to recommend one, and even if it's the wrong library, if it's not the best one, I'd rather have that than the perfect library that takes me two hours of searching on Google. Of course, as you keep working you need good tools, and eventually you may have to Google and find the right library, but at least initially, that's how my lazy brain thinks.

And this will get really crazy once we have LLMs with essentially endless context windows, where they can take like five million tokens, and something that's observing your screen at all times so it knows all the context as you work. Say you're doing some complicated coding project: you don't have to describe everything to the LLM, it just knows. You have one agent watching your screen, one agent that's an expert in libraries or whatever, and all you have to do is ask for the library, or just ask it to solve the problem right away, and it does, because it's been watching your screen, it knows the entire codebase, it knows everything about Python and all the latest updates. I think that's coming very soon.
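Here is a framework-free sketch of the delegation pattern described earlier in this exchange, where each worker agent owns exactly one tool and a manager decides which one to call. The tools are placeholders invented for illustration; in a real setup an LLM (or a framework like CrewAI or AutoGen) would do the routing and the tools would be real API calls.

```python
def search_web(query: str) -> str:
    # Placeholder tool: pretend this hits a real search API.
    return f"(pretend search results for '{query}')"

def run_python(code: str) -> str:
    # Placeholder tool: pretend this executes code in a sandbox.
    return "(pretend execution output)"

# Each worker agent has access to exactly one tool.
WORKERS = {
    "research": search_web,
    "code":     run_python,
}

def manager(request: str) -> str:
    # In a real agent team an LLM would pick the worker; a keyword check stands in for it here.
    worker = "code" if "python" in request.lower() else "research"
    return WORKERS[worker](request)

print(manager("Find the latest AI agent frameworks"))
```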
Yeah, that's the thing. The person I mentioned, Stuart Russell, says in Human Compatible that an agent is a stream: it's literally a process that constantly takes input and outputs something based on whatever you give it. So it's a process, and I can't wait to see how agents become personalized, because right now a giant obstacle I have is that sometimes when I use CrewAI I kind of have to start from the beginning, or I have to tweak things a little, and I feel like it's a waste of time. Ideally I would have a personal CrewAI. I saw that some sort of short-term or long-term memory was added; I still haven't had time to check it out, but I feel really excited about it. I can't wait to see how personalized agents can get.

Yeah, and this will also be huge for medicine, right? Even your doctor doesn't know everything, all your childhood injuries or whatever. You could just have that in a text document, and have an agent that knows everything and either solves the issue right away or sends a brief to a human doctor: okay, your elbow hurts, here is everything that might be relevant, and it leaves out the stuff that's not relevant.

Exactly. Or think about the decisions you make: you want an agent to make the best decision for you. Am I always great at making decisions for myself? No. I think I've had a lot of chocolate recently; ideally I would stop, but I need someone else to tell me that it's bad for me. An agent would be able to build up this memory, and this offloading of decision-making is such a giant help, and the fact that they can do it autonomously is just amazing.

A lot of people would instantly make the argument that you lose your freedom. Obviously you can still choose what the agent will do and what it optimizes for, but to a lot of people this is kind of troubling: outsourcing part of your thinking, or your decision-making, to an agent.

Yeah, it is deeply problematic, and it seems like nobody has a solution right now; a lot of people work on this as alignment. What was the name of that king, Midas, right? The ancient Greek story where a king wants everything he touches to turn to gold, and then it happens, and he slowly starves to death, because even when he tries to eat, the apple turns to gold, and whoever he touches dies because they turn to gold too. That's the other problem: when you get what you want. We're also not that great at understanding our own intentions and desires, and once you outsource that to someone else, and that someone else is a machine that doesn't know a lot about humans, that's kind of problematic.

Yeah, I think this comes down to understanding, because people who fear this usually have no understanding of LLMs or agents and how they work. That's what I've seen with Claude: it was the people with no knowledge of the technical side who went, oh my God, it's sentient, it can think, and I'm like, chill out. Personally, I would definitely pay for an agent that could sort my tasks and know all of my goals. But the thing is, people don't realize how much context is necessary, because for us it's natural: we live our
entire life and it's all stored in our head, but we don't realize how many points of context that is, how many visual images, how much text and audio we've heard. All of that is in our head, and we don't consciously see it when we make a decision, but it all plays a role. With an agent, you have to give it that information. That's why context windows are huge: you can have an agent that literally knows everything about you, your short-term goals, your long-term goals, the current troubles you're facing, whatever, and then you tell it, okay, create a priority list for today, and it already knows your goals, it knows what you have to do next week, it knows your calendar, everything. Then you just have the priority list and you just work. I think that would be huge, that would unlock so much productivity, because all of us have been there: we make a daily task list, then we drift off it and never return. But if we had an agent that constantly updates it, especially with some VR technology where you always see the tasks you have to do, I think that would be a massive productivity boost.

Yeah, that's true, I agree. My only concern is this: context windows, as you mentioned, are a problem right now, but that's slowly being resolved. But then, you know how our memories are not perfect and not everything is equally important to us, so what is an agent going to pay attention to? It might misjudge what actually matters to us, because our memory is so flawed that we usually remember just the highs and the lows; we don't remember the ordinary grind, so to say. That's also a concern. But I agree, I would love to have my own personal agent that can remind me of things I may have forgotten, or can even decide for me and then I say yes or no, so in the end I'm the one who's in control. That's such a great idea.

Yeah, I think people jump to the conclusion of, oh my God, I don't want the agent sending emails or paying bills. Okay, but that's maybe the far future; for the present, the next step is just an agent being capable of that, and the step after that is that you still approve the most significant actions. It's not going to be an agent that just deletes programs from your PC or removes your Google account; that's not going to happen. There will be checks for high-risk actions that a human has to approve, so I'm not worried about that. And for the worry you raised about what the agent might optimize for, I think that could be solved if, every day, the agent prompts you with five different questions. Say it creates a task list and you click one item: why is this ranked so high? It should be lower. And the agent asks: why do you think working on your business is a higher priority than going to the gym? And you explain: well, I'm already in great shape, but my business is struggling. You just give it the extra context, and if you do that enough days in a row, where it asks you five questions relevant to
your priorities and what you want, and you answer, and you do that for a few months, I think it will be just about perfectly aligned.

You know, that's the thing: it's going to learn your preferences, and as you go through life your preferences change, and it's going to be able to adapt to that as well. What mattered to you 20 years ago is not the same as now. So yeah, I agree: if it knows our preferences, it's going to be so useful to us.

And you mentioned something interesting: that people are afraid of agents, afraid that they're going to delete the wrong thing, things like that. I think fear is a giant part of AI agents. I don't know if you've noticed, but when a company like OpenAI publishes a new model, it's giant news, but it's not that polarizing; people are mostly excited about it, they use it, they test it, and that's it. But AI agents are so polarizing: people are either afraid of them, they believe they're going to be replaced by them, or they believe they're going to become rich thanks to agents. It's incredible how different these two groups are. Fear is a giant problem, and I think it comes from misunderstanding more than anything, but also a lot of problems are unresolved: how do you teach an agent what your preferences are? I believe AI researchers still have a lot to figure out.

Yeah, and for something like that, where it makes decisions and suggests what you should do, I think it absolutely needs to be open source, because there cannot be any bias from a closed-source company; you don't know what it's pushing you towards. Obviously, if it's extreme, like Gemini refusing to generate images of white people, people will notice. But if it's subtle, and over years it's pushing you to, I don't know, go vegan and not eat meat, or pushing you slowly toward a certain political party, you don't notice it, and it's not in your best interest, it's in the interest of whoever made the AI. So for an agent that makes your decisions, builds your priority list, and does things that are that intimate and personal, I think it has to be open source.

Exactly. That's the reason I started my AI channel, even though I'm not a machine learning expert: I believe that AI is too important to be left just to researchers and people sitting in high positions. We all need to be involved and we all need to give feedback. And I wanted to ask you, since you mentioned open source: do you believe in all these, I don't know if they're conspiracy theories or not, that Sam Altman is trying to regulate open source, that he's lobbying for OpenAI and trying to reduce the importance of open source? I feel like that theory is being thrown around a lot lately.

I think it started when he went to Congress, I don't know if you've seen that, where he was pushing for regulation. And it's not the best look when you start a company named OpenAI as a nonprofit, and then somehow it becomes for-profit and closed source, and then you go to Congress and start telling them how dangerous it is if everybody has access to this technology.
It's like, okay, but why you, Sam? Why should you and a handful of other people be controlling the single most important technology of all time? I don't know, I feel like that's so sketchy, and I wouldn't say they are conspiracy theories. The reason those theories exist is the actions that OpenAI and Sam Altman take. If you read the emails, they tried to switch the meaning of "open": it obviously started as open source, and then they said, well, "open" doesn't actually stand for open source, it means that AGI benefits everybody equally. What are you saying? "Open" is definitely referring to open source, not to AGI benefiting all people equally. It's so weird.

Yeah, I agree. It's like saying "open" means it's an open technology that everyone can use. Okay, and what technology isn't? Does that mean Facebook is also open source? It just makes no sense, and I feel like they're kind of gaslighting us sometimes. But I also don't follow a lot of that news, because there's always a lot of drama going on with OpenAI. I fell in love with their products like two years ago, and I'm not that much in love anymore.

I feel like a lot of people feel this way. When you look at the companies, basically all of them except Meta and Mistral are closed source, the vast majority of them. And then you look at content creators, or any audience poll: the vast majority of people are for open source. So there's a massive disconnect: the people and the content creators want open source, and the researchers and the companies actually releasing the very best, state-of-the-art models are almost all closed source. It's an insane disconnect.

Yeah. I was introduced to machine learning in, I think, 2017, when my brother was doing some machine learning projects; he was training his own neural network. And I remember the landscape in 2017: machine learning was something researchers were doing, there weren't that many companies in it yet. It was obvious that it was the future, but it was mostly in labs, university and state labs, and now suddenly it's everywhere. But I have a question for you: do people around you use chatbots in real life, all the time, like your family? I have a feeling it's a small bubble that cares about these things.

I try to show everybody, obviously, and I even help them set up the account, because sometimes even that is too much friction. But I don't know, it's definitely growing. You bring up a great point, though: it's crazy, because when we're in the AI field we feel like everybody knows about this, everybody's using it, but that's not the case at all. So many people think they're advanced just because they have a ChatGPT account; they think they're on the cutting edge. I'm so mind-blown by how many people there are like that. It's
crazy, because there's one group that is completely oblivious, and fine, you can say they didn't watch the right video or didn't get the chance to see it. But there's also a big second group that knows about AI, knows about ChatGPT, knows it's going to change the world, and still doesn't do anything about it. I don't know, it's fascinating.

Yeah, or they just ignore it, and I don't know why. For example, my brother, who worked in the machine learning field for a while, isn't interested in any of this. He's kind of curious about these text-to-song tools, he's interested in the creativity, but he doesn't really care that much about chatbots. I don't know why it's not part of his life, and most people I know don't care about it either. So it makes me wonder: what needs to happen, or how capable does the AI need to be, for people to start caring?

You know, I suspect that eventually it's just going to become part of everyone's life silently. If a technology works well, nobody cares about it, right? You don't think about how your car works; you're not excited about it, because it just works. I feel like that's eventually what's going to happen to AI. And I was wondering, what do you think about AI agents specifically: how do they make your life better, do you use them every day?

Obviously I use them way less than ChatGPT, Perplexity, and just normal AI tools. I mostly use them for building and learning about new frameworks, which is kind of weird, because I'm a content creator, so I mostly use them when I make content about them. To be honest, I should be focusing more on automation, but I feel like there's still missing stuff. I feel like we need a really solid UI: you mentioned AutoGen Studio has one, and CrewAI is working on it but still doesn't have it. Once it's as effortless as ChatGPT, where you just type "build a team of agents that does this," maybe in combination with the next generation of LLMs... because right now, to build a solid team of agents you still have to sit down and spend a couple of hours; it's not 20 minutes and it's working. Sure, maybe for simple tasks, but to really fine-tune it, prompt-engineer every agent correctly, and give it all the API keys it needs, it's still a multi-hour endeavor for almost everything. So maybe I can ask you: what would you recommend I build? What do you think would teach me the most as a team of agents?

Well, I think you're right in the sense that it's a turnoff when you think about how you have to set it up and write all the prompts. I think the prompting is the problem, because it's so much work. In my video about AutoGen I was actually trying to say that maybe you can build agents that write prompts for other agents, because then you can outsource some of that work. But even I don't use agents as often as I'd like to. As a content creator, though, I think you're at a
disadvantage, simply because large language models are not that good yet when it comes to sounding natural. When I look at a text, it's so obvious that it's written by some ChatGPT-style chatbot. So if that's the type of work you're doing, it's very unfortunate. But if you're a programmer, I have a feeling software developers can really benefit from a team of agents: one agent writes this type of code, another agent checks that code. In that sense I believe software developers can benefit from this iterative thinking, so to say.

But in the end, what's the biggest problem with not just agents but large language models in general? People don't trust them, and they don't trust them for a reason: they really do hallucinate. Actually, everything they write is a hallucination, it's just that sometimes it's the right hallucination. So you have to fact-check everything. When I was writing this presentation, I asked Claude to write more about these knowledge-based systems in the '70s, and it invented so many weird things; I was like, where is this coming from? So in the end, as you said, first you have to set everything up, which takes hours, then you have to fact-check everything, which again takes a lot of time.

So I think the point is: programmers can use these agents to maybe start writing code, though if you need something more advanced you cannot really rely on agents. If you're creating content, I would recommend brainstorming: they can gather a lot of information and summarize it for you, and then you can start whatever you're working on, whether it's a script or an article. And as a sort of personal therapy, I'd recommend some local open-source model that you can run yourself; maybe you have a problem you need to resolve and you want to see it from different angles, and you can chat with a model and get ideas and insights you wouldn't get otherwise. That's how I see it, at least.

Yeah. One team of agents I did build was a research team for AI news. When I built it, it was I think February or March, so I set it for that month, and still some of the news articles it pulled were from June of last year; that was one big issue. The second big issue was the rating. I wanted it to rate, from 1 to 10, how interesting or important a piece of AI news is, but what I found is that it's always somewhere near the middle. It's kind of scared to give you a one or a ten. Well, probably closer to a ten, it might give you a nine, but I think it will never give you a two or a one, because it feels like it would offend the people who made the news. It's so scared, and overall everything ended up being basically a six. It would be "Mark Zuckerberg talks to some Korean leaders," completely irrelevant and uninteresting, and it's a six. Why is that a six?

Oh yeah, that's hilarious. You're right. One time I asked Claude to rate a text, a paragraph that I wrote, and it said, "I refuse to read this
highly suspicious text; it's highly speculative, questionable, suspicious text." And I was like, I wrote it. "Apologies, but I still feel uncomfortable." You don't feel anything, just give me my rating!

Yeah, well, they have to align them, and unfortunately I think the first GPT-4 that came out was really awesome, and then it became dumber and dumber as they were trying to regulate it, and now it's kind of useless. You're right about ratings, that's hilarious. I think it's afraid to give you a really low or a really high grade. And I don't know how many times I tried to trick ChatGPT: I would ask it to rate something once, and then a few hours later I would ask it again in another tab, and it would always give me different ratings. So I figured it's just making things up as it goes.

Yeah, I think part of that is obviously the bias, like Claude refusing your "shocking" text, and part of it is just how LLMs work: they're predicting the next token, so even if you change one letter it will not give you the same answer, especially if the temperature is not set to zero. If the temperature is zero, then yes, it will give you the same answer for the exact same prompt, but for ChatGPT it's set to around 0.7, so every time you get a different sample from the probability distribution over tokens. For rating and things like that, I think something open source that's fine-tuned on rating, where you give it thousands of examples of "this is boring, this is interesting" and you're super exact, would perform much better than GPT-4 or Claude 3.

Exactly. I'm actually working on fine-tuning my own model for YouTube titles, because every chatbot is just so bad at them. I wanted it to give me not just the title but an explanation of why it's good, and there are a few patterns; you know this as a content creator: you want a title that makes people curious and interested, but not in a negative way, things like that. They're just so bad at it, so I'm trying to fine-tune a model. And I understand why uncensored open-source models are so popular. It sounds super shady, it sounds like some sort of sex-chatbot thing, but really people just want a conversation that doesn't start with "as an AI chatbot I don't have feelings, blah blah." You just want a normal conversation. So I'm working on creating a dataset, which is taking forever.

I did exactly that, in like October. I built a database of videos, a vector database where I have the video description and then the title, and I did that for 200 or 300 videos. Then I used GPT-4: I just give it an idea, it uses the vector database to search up the 10 or 15 most similar videos, and then it uses the same writing format of those titles to write one for the new video idea.

Yeah, exactly. That's one interesting way: a lot of people fine-tune, or they have a database of titles. What you did is really interesting.
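A sketch of the retrieval-plus-few-shot workflow he describes: embed past video descriptions, pull the most similar ones, and let the model imitate their title style, with temperature set to zero so the output is repeatable, as discussed above. The model names, the tiny in-memory "database," and the example rows are assumptions for illustration; it uses the OpenAI Python client and assumes `OPENAI_API_KEY` is set.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# (description, title) pairs you have collected -- in practice a few hundred rows, not two.
VIDEOS = [
    ("I tried to build a website with AutoGPT and it kept looping", "AutoGPT Built My Website (It Went Wrong)"),
    ("Extracting AI trends with a CrewAI agent team",               "I Let AI Agents Research AI For Me"),
]
DB = [(embed(desc), desc, title) for desc, title in VIDEOS]

def suggest_title(idea: str, k: int = 2) -> str:
    q = embed(idea)
    # Rank stored videos by cosine similarity to the new idea.
    scored = sorted(
        DB, key=lambda row: -float(q @ row[0] / (np.linalg.norm(q) * np.linalg.norm(row[0])))
    )
    examples = "\n".join(f"Description: {d}\nTitle: {t}" for _, d, t in scored[:k])
    prompt = (
        f"Here are titles in the style I like:\n{examples}\n\n"
        f"Write one title in the same style for this video idea:\n{idea}"
    )
    chat = client.chat.completions.create(
        model="gpt-4o-mini",   # any chat model works; this name is an assumption
        temperature=0,         # temperature 0 keeps the rating/title repeatable for the same prompt
        messages=[{"role": "user", "content": prompt}],
    )
    return chat.choices[0].message.content
```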
I kind of lost my train of thought. Anyway, I have a feeling that's something CrewAI or agents could do with few-shot prompting: maybe you can have one agent that has access to a lot of good examples. I haven't tried it, but it might be an interesting idea.

Think about it: you and I have probably seen tens of thousands of YouTube titles just by watching YouTube over the years, scrolling and seeing videos. All of that is stored in our memory; even though we don't consciously remember every single one, we know the vibes, we know the titles that caught our attention. The LLMs don't have that. If you just say "write a YouTube title for this," it will be beyond cringe, it will be so bad, if you don't give it examples or anything. So I think it's either the context window with tens of thousands of examples, or fine-tuning, or a database it can call, probably something like that. But I agree with you on open source; with the unrestricted models, people are always so sketched out: "oh my God, what do you want to do, do you want to steal something, do something illegal?" No, I just want a useful chatbot with no bias that does exactly what I ask.

Yeah, sometimes they're hilarious, because the dataset usually contains a lot of real conversations from the internet, so they can sound super unhinged, or kind of evil and crazy, but that's, I guess, what a normal conversation sounds like. And I really prefer those to ChatGPT, which I cannot even use anymore, that's how bad it has got. For example, I was using it for scripts, and I would prompt it to write an analogy, so it has learned that I like analogies, and now when I ask it, "hey, give me a Python library for something," it goes, "oh, a Python library, it's like a tool in a toolset for..." and I'm like, no, just answer the question, please. It's completely unusable.

Yeah, sometimes I even have to clarify, "the user is a competent individual with a good amount of knowledge about AI," because if you don't say that, it starts with "well, artificial intelligence is this clever computer..." Come on, man, be useful.

That's related to what we were talking about, agents with memory, because ChatGPT has memory now; it's learned from your chats, but unfortunately it's overdoing it a little. It doesn't really know enough to actually be useful; it just learned one thing about me and now it constantly tries to feed me the same stuff.

Wait, so you have the memory feature?

I had it. I don't have it anymore because I canceled my subscription, but I had it for a while.

I never even got it.

Oh really? Maybe it's because of Europe, I don't know.

Yeah, I don't know. I feel like Europe is being treated as this second-class world. Anthropic just didn't release Claude in Europe at all, and the features come late: custom instructions, when they first came out six or eight months ago or whatever, were released
for a month or two in the US only, and the only way I could get them was through a VPN. So maybe it's because Europe is so strict when it comes to regulation that the companies just play it safe, but it's annoying.

Yeah, of course, I can imagine. I mean, maybe it's a good thing that the European Union is trying to regulate some things. I always thought the USA is more like: they go for it, people are ambitious, and when they have some tragic incidents, then they say, oh, we need to regulate. But first they go for it. If you look at videos from the beginning of the 20th century, it's just a bunch of cars driving everywhere; it was "I guess we'll figure it out," and only later did they regulate it. Europe is different, and maybe it's a good thing, I don't know, I can't decide for myself. But it's sad that you cannot have access to Claude, because it's really good; it's my favorite for now.

I mean, I'm using it through the console, which is extra friction, but in Perplexity you can set Claude 3 Opus or Sonnet as the default LLM in the settings. So even without the console, people can use it just through Perplexity, changing the mode from web search to writing.

Ah yes, you're right, Perplexity, I remember that. Good.

You seem very passionate about using open-source models to run your agents locally. What makes you so excited about this?

Well, privacy. I just don't want anyone to train on my conversations; who wants that? I don't trust companies at all. I'm pretty sure they're just using our data, that we're building datasets for them for free, and it's a lot of work and they get it for free, and on top of that we have to pay them to use their chatbots. So, privacy. Having said that, open-source models are not great agents, so to say. They're really good for simple conversations, but whenever you try to plug one into some agent framework, you realize there are only a few open-source models that are fine-tuned for function calling, and those are the ones that work. I dream of the day when they're smarter, because even when they do know how to do function calling, they still mess things up at some later step; it never goes smoothly, and that's unfortunately how it works right now. I hate that, because, as I said, I don't want to give these companies training data for free. Not to mention that, as we said, with a lot of these API calls you're making to GPT-4 or whatever, you again get a model that's kind of lobotomized; it's never really that great. I prefer the way open-source models sound and I want to use them, but unfortunately they're not there yet.

Yeah, this is actually one of my predictions: this year, open-source models, even though they'll be less intelligent on paper, will become more useful, just because, as you said, they're not lobotomized, super restricted, super limited. I think this is a pretty safe prediction that I'm confident in: we will use dumber models that are just not restricted.
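For the locally run open-source models she keeps coming back to, here is a minimal sketch of a chat call through Ollama, assuming the Ollama daemon is running and the model has already been pulled (for example with `ollama pull mistral`). Whether such a model can also handle function calling reliably inside an agent framework depends on the specific model and the framework on top of it, as she notes.

```python
import ollama  # pip install ollama; talks to a locally running Ollama server

reply = ollama.chat(
    model="mistral",  # an assumed locally pulled open-source model
    messages=[{"role": "user", "content": "Give me three reasons to run models locally."}],
)
print(reply["message"]["content"])  # the conversation never leaves your machine
```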
in: that we will use dumber models that just aren't restricted. And I love that you brought up the privacy argument, because nobody realizes this, nobody cares. People put stuff into ChatGPT that's super personal and super private and figure "oh, it's never going to come up." Eventually there will be some sort of leak, or the government will take it over or force them to hand over data; there will be some situation where private data from these companies gets out. And if there's a breach and it ends up on the dark web, do you want all of your deepest, darkest secrets to be on the dark web for a couple of dollars? People just don't think about what they're putting into these chatbots, and they don't care at all. The dumbest argument I hear is "why would I care about privacy, I'm not a criminal," and people don't realize that you don't have to be a criminal to get in trouble. Just look at what happened to Edward Snowden and Julian Assange: they did things everybody now considers heroic, but the government went after them because they went against the government. So even if you don't do anything illegal or criminal, one day it can be used against you; people don't realize that. You're right, because regimes can become repressive and go after people who criticize them for whatever reason; it can be something ridiculous, it doesn't have to be something big. You don't have to be Snowden to be persecuted. And it reminds me of that leak 23andMe had recently, the DNA testing company, where a bunch of private data about Jewish customers got out. Can you imagine? Now you can target a specific ethnic group, you know all about them. It's just so creepy when you think about it. And people are so excited about technology like the Rabbit AI assistant: it has a camera, it has a microphone, it can hear you. Yes, they say they respect privacy, but one day we're all going to have some sort of AI assistant with a camera and microphone, and we'll be sending a lot of data, and even if you never do anything that might put you in trouble, you're still giving your data away very freely to these companies, and they're going to do something with it to maximize their profits. That's the thing that makes me so angry: we give so much of our data to Facebook, to Instagram, to TikTok, and it's not okay. They become more profitable thanks to us; we're working for them for free, and that pisses me off. People always jump to some extreme scenario, when all you have to realize is that the agent, or whatever AI it is, will act in the company's interest, not in yours. It doesn't mean you have to end up in jail or that it will destroy your bank account; it's just a small misalignment, that the agent doesn't work for you, it works for Facebook or Google or whoever. Over time that will have a massive influence: if you use that agent for years to come, instead of improving your life it improves Facebook's bank
balance. They make more profit because now they know more about their customers. And we have been doing this very freely forever. There's an interesting shift here: in the beginning people had this idea that companies were adapting to us, trying to create content for us to keep us on their platforms, but actually no, what's happening is that the algorithms shape our opinions, which is what you mentioned at the beginning of our conversation: our opinions and beliefs are changed so that we stay longer on the platform. And that's going to get boosted even more with AI. So trust is absolutely essential, and as you said, when a company like Rabbit says "everything is stored locally on the machine, it's not being sent anywhere," okay, amazing, but there is no way to trust that, because the only way to really trust anything is to see the source code. That's why open source is so essential. Imagine if Bitcoin were not open source and it was just "trust us, it's all good, guys"; nobody would buy it. It's the same with any proprietary software; it's why you want Linux to win over Windows: because it's open source, everyone can see exactly what's happening, you can see if there's a back door or whatever. If it's proprietary and closed to us, you have to go by the word of the CEO, "yeah, trust us, it's safe, we don't send it to our servers," and no, there's just no way. Exactly, and I think we're going to see more of that. I feel like we've already adjusted to it a little bit with social media platforms; we already don't care that much about our privacy, people share all kinds of stuff there. And I'm not perfect at all. I remember when ChatGPT came out I wasn't thinking about any of these things, and I would be like "I just had a fight with my dad, he is like this, I am like that, what do I do?" I just started talking about my private life and thought ChatGPT would help me; it didn't cross my mind to ask myself what the hell I was doing. So we'll see. I think that if something is convenient enough, a lot of people are willing to trade their privacy for the convenience. I have very low expectations of people; I have a feeling that as long as it's comfortable, people will give up everything when it comes to privacy, for the convenience. But then I'm a pessimist. Yeah, I definitely agree, but on the other hand, over the last few years I feel like more and more companies and products are specifically privacy-first.
DuckDuckGo is a search engine focused on privacy, then you have Brave, a browser focused on privacy, you have uBlock Origin, you have VPNs, all of that stuff. If you go back five or ten years, none of that was really used, right? I don't know, it has to be as convenient as the more mainstream option, that's definitely true, because if something has more friction nobody will use it. But I have some hope in developers and people building stuff that's privacy-centric and open source, because if you have an agent that has such a massive impact on your life and you cannot see the weights and you cannot see the datasets... and that's another thing, a lot of these open source LLMs don't actually show the data they trained on, so they're not fully open source. It's so risky to outsource your thinking to that, right? Yeah, it reminds me of that meme with the CTO, Mira Murati or whatever, when she was asked about the training data and she just made that face. Who believes that the CTO of a company doesn't know what training data they used? What is this trick? But it's actually easy to prove that they used all of YouTube: if you go to the mobile app and use the voice chat, press it, don't say anything and stop it, it says "thanks for watching," and the reason is that if there's a long pause in a YouTube video and then nothing happens, the most likely continuation is "thanks for watching," end of video. Oh my god. Well, of course, they're going to have so many lawsuits; they already have a bunch of them. Actually, I almost fell into that trap when I was starting to mess with AI and making GPT-4 API calls. I was wondering if I could make a small startup that creates personalized newsletters, but then I realized I would be scraping a bunch of giant news platforms; what are the odds they would be okay with that? And if you think about open data, it's usually not that great quality, and the model is only as good as the dataset. For example, if you scrape YouTube titles to fine-tune a model to write good YouTube titles, is it legal? Probably not, but it would be a really helpful model. I guess it depends whether you use it for yourself or for commercial purposes: if you just had it for yourself nobody would really care, but if you started using Google's data to build your own startup, I don't think they would like that. I think they wouldn't like it either way, but if you just use it for yourself it goes under the radar, nobody cares; you're right that you cannot turn it into a commercial product. So many unresolved issues in the AI industry; I think the next decade is going to be really exciting. I mean, have you seen Emad Mostaque leaving Stability AI? Yeah, I saw that. I always thought it was a little strange that he... I have a feeling that CEOs usually tend to be a little less unhinged on Twitter, maybe, I don't know how to put it, but then I
remember Elon Musk. Yeah, I saw it too. I think the main reason he left, and I watched the podcast where he explained it, is basically to make sure that AI stays open and decentralized, because none of the competent founders, none of the people who actually have influence, none of these big tech figures are working on that. Elon only open-sourced Grok after pressure from the public, right? So I think Emad really is probably the main guy who genuinely cares about AI being open source, and we need someone like that who actually has experience building state-of-the-art models, because the disconnect between the public and what's actually happening in these elite circles is insane, it's literally night and day. I agree. And if you think about Stability, I feel like a lot of people became interested in AI thanks to them. There's a giant number of people into illustration who use Stable Diffusion, or even 3D artists, people doing visual work for a living; I have a feeling they were introduced to AI thanks to Stability's open source models, and maybe that's how I started as well. So you're right. But honestly I'm becoming more and more pessimistic, because it's been a while since we had a release of a really good open source model, and model blending is an interesting technique that I'm really interested in, but I haven't seen much going on in the open source sphere. So do you think there's a chance of open LLMs ever catching up to the closed ones? I want to say yes, but I'm afraid the answer is no. I could imagine that if enough people knew about these problems and we all gathered around the idea that we should have better open source models, and we all invested even $10 into some fund, then we could have very competent people train and align these models. But what are the odds? I think the problem is just the funding, the money, that's it. And I think it comes back to what we discussed before: not enough people actually care about this or are paying attention, so it's a problem of education, lack of education, and people willingly staying blind. You need people to understand that AI is the single most important technology in the world, that's number one, and number two, you need them to understand just how dangerous it can be if it's controlled by a handful of people. Once people realize those two things, I think they're immediately on board with open source models, but getting them to those two fundamental beliefs is already a lot of work. Yeah, and well, you create content on YouTube too: when I make a video about, let's say, some open source tool or model, my revenue is significantly lower than for one about OpenAI. So it's more beneficial for me to talk about OpenAI, yet it's more beneficial for everyone to get introduced to open source, so there's this tradeoff. Yeah,
RPM is lower, at least in my case, for open source stuff, which is crazy. I've never tried comparing that, but I guess it makes sense: it's based on the advertisers, and I guess advertisers are much more likely to target ChatGPT or OpenAI than open source. Yeah. Also, I actually get a lot of offers from sponsors to promote some sort of closed source agent framework or something like that, and I don't want to do that, because, maybe I'm naive, but I have a feeling these things don't work well enough yet. You'd be paying someone to do something you could do on your own; maybe it's less convenient, but if anything, education is the most important thing right now, and I feel like it's somehow wrong to promote these closed source tools. Again, maybe I'm just being naive, I don't know. No, it's just being selective about what brands you work with. I've gotten hundreds of offers since I started the channel and I only did one sponsorship. I understand you have to be selective about which companies you promote, because there are so many scandals, like the people who promoted FTX, the crypto exchange: it seemed legit until it was not, and then how do you explain that to your fans who invested millions of dollars? All of these finance YouTubers did that, and you literally cannot repay that money, because as you said, if every fan invests a bit, it adds up really quickly. So being careful about which sponsorships you take, I think that's smart. Yeah, because it's a little hypocritical if you make these videos about AI where you openly show that the tools can only achieve so much, and then you promote some closed source tool; it just feels wrong to me, something about it. And also, as a YouTuber you have everything to lose, because your reputation is everything, people trust you and that's why they watch your videos, but what do the companies have to lose? Hundreds or maybe a few thousand dollars, that's nothing for them, it's pennies. And I don't know about you, but the deals I'm offered are just so bad: "here, maybe you can buy a microphone from us," or "tell your audience about these acoustic panels," and then these closed source tools, usually some personalized AI chatbot or something, we have millions of those, how am I going to sell this? Yeah, it has to align, it has to be an actually useful product, not something you could just do with ChatGPT, and then obviously the offer has to be good; it cannot be "we give you a 15% affiliate commission," come on, we need something that's fair and actually usable. Yeah. And I don't know if you've noticed, though you probably did because you make content on YouTube, this AAA, AI automation agency, kind of moment. I was thinking a lot about it, like how many chatbots do we need eventually? Maybe I didn't get the idea, but it seemed to me like it was mostly about making chatbots. Yeah, it's basically, do you know the SMMA model, social media marketing agency? It's basically that, but instead of selling social
media marketing to a company, you try to sell them a chatbot. But as you said, in a lot of cases the company would just be better off using ChatGPT, because the people who do that aren't focused on making the best product possible; they're focused on selling to companies that want to get into the AI hype and don't have the technical expertise to know what's good. And most of the people who actually run these AAA agencies aren't expert chatbot builders; often they can't even build something better than ChatGPT, even if they have a proprietary database or something like that. So I think it's only a matter of time before it gets a bad reputation, because the people interested in it just want some business to pay them $2,000 to build a custom chatbot, but if that chatbot isn't actually more useful than Claude 3 or GPT-4, that's not a long-term business model, right? Yeah, sometimes I wonder about that. The reason I brought it up is that a lot of the brand sponsorships I get offered are chatbots, and I don't know how I would sell that; even if I had my own chatbot, how would I promote it? It seems to me like it's maybe oversaturated, and that's why I really like what you're doing with AI agents. As I said, I'm not a machine learning expert, I don't have formal education in that area, but I am an engineer, I studied civil engineering and architecture, and I just love the technology, I love knowing how things work. To me this is the most exciting, and obviously the most polarizing, thing happening, and I can't wait to see what's next. I definitely appreciate that in every video you make you try to use local and open source models, even when the results are worse than using an API. But one thing I actually realized is that even if GPT-4 were open source tomorrow, we couldn't run it; the hardware is so far behind, especially consumer hardware. I don't mean Nvidia's data-center AI GPUs, I'm talking about consumer hardware, whether that's consumer Nvidia cards or a Mac, it doesn't matter: its scaling is much slower than that of the AI models, the software side. That's crazy, because even if GPT-4 or Claude became open source tomorrow, we couldn't run it. GPT-4 is, what, something like 1.7 trillion parameters? There's no way you could even run it, and even if you could, the inference would be so insanely slow as to be completely unusable. So that's one thing I worry about: if hardware keeps improving more slowly than the software, the AI models, then even if AGI were completely open source, we wouldn't be able to run it, right?
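Rough numbers behind that worry, treating the often-quoted 1.7-trillion-parameter figure for GPT-4 as an unconfirmed public estimate rather than a spec; the arithmetic covers only the weights and ignores the KV cache, activations, and runtime overhead.

```python
# Back-of-the-envelope memory needed just to hold model weights.
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1e9

GPT4_RUMORED = 1.7e12  # unconfirmed estimate, used only for illustration
SEVEN_B = 7e9

print(f"GPT-4 (rumored) fp16:  {weight_memory_gb(GPT4_RUMORED, 2.0):,.0f} GB")  # ~3,400 GB
print(f"GPT-4 (rumored) 4-bit: {weight_memory_gb(GPT4_RUMORED, 0.5):,.0f} GB")  # ~850 GB
print(f"7B model 4-bit:        {weight_memory_gb(SEVEN_B, 0.5):,.1f} GB")       # ~3.5 GB
```

Even at 4 bits per weight, a model of that rumored size needs hundreds of gigabytes just to sit in memory, while a 4-bit 7B model fits in a few gigabytes of laptop RAM, which is exactly the asymmetry being described.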
Yeah, it's true, it's true. Well, that's an interesting point: for example, the only sponsorship I have is an affiliate link for renting hardware, and it's because I actually use it, so I feel free to recommend it to other people. I can't forget when I started merging some models and was so excited about them; they would actually pass the evaluation on the Open LLM Leaderboard and get scored and everything. Then I rented a GPU to run them, because I couldn't run any of the models I merged locally, and I tried chatting with them and they were horrible. I was like, what have I done? I had this false belief that I had done something good. So sure, you can push things in some direction, try different approaches and get excited about them, but ultimately, if you don't have the hardware to run anything, it's nothing. In that sense I think AI is a little unfair, and you really can't do a lot of the stuff; I'm just limited by my hardware. People in my comments say "well, you should have run Llama 70 billion parameters or something," and I wish; of course I would have better results, but I have a regular laptop. I have a feeling this is a giant problem for now, and a lot of people can't do anything because they're limited by hardware. But don't you think the cloud is kind of a trap? In the short term it's cheaper and better than local hardware, but it's the same argument we had about privacy, because if the companies control all the computing, somebody can decide "oh, you didn't do this, Maya, suddenly your cloud access is denied." Right now it might seem like who cares, but once we have GPT-7 or whatever super advanced AI model, if you don't have access to it you literally cannot compete; it's like not having access to the internet today, you cannot compete with people who do. So once it becomes that essential, I think we have to be really careful about giving all the power to the big cloud computing companies: AWS, Google, Oracle, it doesn't matter. I feel like the cloud is a big trap; what do you think? Yeah, well, you certainly lose control, and you're right about that. Ideally, I hope we get models of seven or thirteen billion parameters with better common-sense reasoning, so that even if they don't have all the knowledge, they can at least reason things out rationally; that would make them more helpful. Because right now, with a test like "how many sisters does Sally have," the answer is that Sally has one sister, but every open source model will say Sally has six sisters or something ridiculous like that. They can't work it out unless you train them on that specific riddle, which shows they lack common-sense reasoning. So I don't need the most knowledgeable small model, I just need a model with better common-sense reasoning, and I think I speak for everyone when I say that, because that type of model, in quantized versions, you can run on your laptop without a problem and it's going to be very useful, so you don't need to rely on the cloud. That's one solution.
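For the laptop-friendly side of that argument, here is a minimal sketch of running a quantized small model locally and asking it the kind of common-sense riddle Maya mentions; it assumes llama-cpp-python and a 4-bit GGUF file you have already downloaded (the path and model name are placeholders, not a recommendation).

```python
# Local inference with a quantized model via llama-cpp-python (no cloud involved).
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # placeholder: any 4-bit GGUF on disk
    n_ctx=4096,
)

riddle = (
    "Sally has 3 brothers. Each brother has 2 sisters. "
    "How many sisters does Sally have? Answer with a number and one sentence of reasoning."
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": riddle}],
    max_tokens=128,
)
# Expected answer is 1: each brother's two sisters are Sally plus one other girl.
print(out["choices"][0]["message"]["content"])
```

Whether a given small model gets this right is exactly the common-sense gap she describes; the point of the sketch is only that testing it costs nothing beyond local RAM.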
Yeah, and also, people probably need to place more importance on owning hardware, even though it's tricky because everything improves so fast; a computer you buy is obsolete in two or three years. But once we have those models, it's not just about the privacy we discussed but also about the freedom of decentralized computing, so that if you piss off some company, Google or Facebook or whoever, they can't just turn off your cloud computing and shut down all of your running agents and all of your LLMs. So I think hardware will have much more value as things become decentralized. Yeah, I absolutely agree. So, I'm sorry, I'm slowly running out of time. Yeah, we already have a good 90 minutes of content, probably even more, so it's perfect, it's great. Thanks for coming on the podcast, Maya, it was great. Thank you for inviting me.
Info
Channel: David Ondrej
Views: 39,105
Keywords: David Ondrej, david ondrej, AI, ChatGPT, artificial intelligence, ai, Artificial Intelligence, OpenAI, chatgpt, chat gpt, Chat GPT, Elon Musk, Sam Altman, sam altman, AGI, midjourney, david ondrej podcast, Emad Mostaque, GPT-4, GPT, AI revolution, new society, david ondrej new society, david ondrej community, AI youtubers, AI youtube, AI videos, AI shorts, Agents framework, AI agent, Maya Akim, Crew AI, AutoGen, AutoGen Studio, Devika, Devin, CrewAI, AutoDev, MultiOn, AI agents
Id: fsIipBuM4Nc
Length: 88min 46sec (5326 seconds)
Published: Sun Apr 07 2024