How To Connect Llama3 to CrewAI [Groq + Ollama]

Captions
Hey guys, in today's video I'm going to teach you everything you need to know about using Llama 3 with CrewAI so that you can run your crews completely for free. We're covering three major parts. First, you'll learn what Llama 3 is, how it compares to other LLMs, and you'll see a live demo so you can judge just how smart this new model is. After that, we'll actually run a crew using Llama 3 — all locally, on your own computer, using Ollama. Here's a quick high-level overview of the crew you'll be running: it generates Instagram posts for whatever company you want to advertise — in our case, a smart thermos that keeps your coffee hot all day. The crew writes copy you can use in your Instagram posts, and it's actually pretty smart: it calls out exactly the problem your customers have (lukewarm coffee), comes up with catchy taglines, and — the part I think you'll really like — writes Midjourney descriptions you can paste straight into Midjourney to generate some great-looking pictures. It came up with these futuristic-looking thermoses you could upscale, download, and post to Instagram to start getting traction for your new business. Credit where it's due: this crew was originally built by João, the creator of CrewAI — I just reformatted it to work with Llama 3 for this tutorial.

Once we have it running locally, we'll update the crew to work with Groq, so our crews run much faster — and once we're on Groq, we can access the bigger version of Llama 3, the 70-billion-parameter model. More on that in a bit, but I'm excited for you to reach that part, because Groq plus Llama 3 — especially the 70B model — will blow your mind. Before we dive in: all the source code you're about to see is completely free. Check the description below to download it, skip all the setup, and start playing around. One more thing: if you run into any issues while watching, check out the free Skool community I created. You can post the problems you're having with your code, share screenshots, and myself or other developers in the community will make sure you get unstuck and keep coding your crews. The link is in the description — I'd love to see you there.

All right, let's quickly cover what Llama 3 is, how it stacks up against other large language models, and then look at it in action before using it with our crew. Llama 3 is the third generation of Meta's open-source large language model family, Llama, and this generation is honestly impressive — I've really enjoyed using it. Here are the key changes from Llama 2. One of the main ones is that the context window doubled.
It's now 8,192 tokens, so it's getting comparable to ChatGPT-4 in that respect. Another change: the new model is a lot more cooperative. If you used Llama 2 much, you'll remember it would sometimes just refuse — "I can't do that, I'm a large language model." Llama 3 does that far less, which I really like. There are also two versions of Llama 3 out now: 8 billion parameters and 70 billion parameters. The 8B one is probably the one you'll want to run locally — it's much smaller and genuinely fast, as you'll see in a second. For more complex tasks, though, the 70B model is really smart, and over time these Llama models keep closing the gap with what ChatGPT-4 can do. We're not quite there, but we're getting close. So how does this LLM compare with the others? For the 8B model, you can see that against Mistral and Gemma the new Llama 3 wins on almost every benchmark — these are just different standard ways to evaluate models against one another. The 70B model is right up there too, winning on nearly every front. Like I said, this is probably my new favorite LLM for testing and troubleshooting my production apps, especially when I want a free, fast alternative to ChatGPT. The remaining comparisons tell the same story: Llama 3 stacks up well against Claude, Mistral, and pretty much everything else. Let's keep going.
The other thing that matters to you as a developer is how big these models are. We're on the Ollama website here, which is what will let us run Llama 3 locally — this is where you download it for Mac, this is where you download it for Windows, but more on that shortly. The key numbers: the Llama 3 8B model is about 4.7 GB — fairly beefy, and that's the quick, fast one you can realistically run on your machine. If you want the one that's starting to get comparable to ChatGPT-4, you'll need the 70B model, which is about 40 GB. Huge, I know — I've downloaded it on my computer so you can see it in action in a bit. Those are the two models, and I definitely recommend starting with the 8B one before moving up to 70B.

Next, we're over in GroqCloud, where you can test different large language models running on Groq. If you haven't heard of Groq, I have a video on it you'll want to watch after this one, but in short: Groq runs large language models on chips designed specifically for LLM inference, so they're extremely fast — and it's free to use. Let me show you how fast Llama 3 is on Groq. I'll ask, "Hey, explain the importance of large language models," run it, and it spits out roughly 900 tokens per second — which is absurd, because with ChatGPT-4 you'd expect around 40, so this is on the order of 20 times faster. Now let's save this, refresh, and hop over to the 70B model — remember, this one is bigger and slower, but smarter. Trying the same prompt, it's still very fast, gives a bigger and more insightful answer, and holds about 300 tokens per second — still roughly 8 times faster than ChatGPT-4. Honestly crazy.

Now that we've covered what Llama 3 is, how it compares, and how fast it is, let's get it up and running on our local computers using Ollama — and after that, we'll use Llama 3 with Groq. Let's hop over to our terminal and get things stood up. First, a quick walkthrough of downloading and setting up Ollama locally, so you can run models like Llama 3 completely for free and keep all your data private on your own machine. Head over to ollama.com, click the download button, and install it. I have a full video walking through the entire setup — I'll put a card in the corner so you can watch it after this if you have questions. Once Ollama is downloaded and installed, we can start using the models it makes available — there are a bunch of options, but today we're using Llama 3, so click Llama 3 to see the commands you need to get started. Right there is the important command you'll run once Ollama is set up: it pulls down the Llama 3 model and then runs it. And in case you haven't used this page before, on the left-hand side you can see there are some different tags.
The "latest" tag, selected by default, is the 8-billion-parameter option at roughly 5 GB, but if you want, you can click over to the 70B tag — that's the 40 GB option. My computer honestly isn't beefy enough to run that one; I tried it, it wasn't strong enough, and I need an upgrade, so we'll mostly stick with the 8B model. To be explicit, click the 8B tag and copy the command shown. Next, head to your terminal — this is where we'll actually interact with Ollama. Once it's installed, typing `ollama` should show you all the available commands. In my case, I'll type `ollama list` so you can see which models I have installed — and you can see I already have the Llama 3 8B model, so I can run it. Paste in the command we grabbed from the site: the first time you run it, it downloads the model (usually 20–30 minutes or less, depending on your internet speed), and once it's saved locally, it starts instantly. Then `ollama run` lets me chat with it just like we normally would with ChatGPT: "Tell me about the importance of large language models." You'll notice it's not generating at the 800–900 tokens per second we saw on Groq, but the nice part is that this is all private, local, and free — and it's clearly working.

Now that Ollama is set up, the next step is to use a model file so we can create a custom large language model specialized to work with CrewAI. Here's how. First, if you haven't already, download the free code repo linked in the description. In it you'll find the Modelfile. What the heck is that? A Modelfile defines specific properties we want baked into a custom model, and the main takeaway is that we set a stop parameter: any time the model emits the keyword "Result", generation stops — that's my plain-English interpretation of it. There's a full page about this on the CrewAI website, and I go into much more detail in the specialized Ollama tutorial I referenced a few minutes ago. So we'll use the create command to build a brand-new model made just for our crew. Hop back over to the code: typing `ls` shows the Modelfile, and then you run `ollama create` followed by the name you want to reference it by — in our case `crewai-llama3-8b`, since the 8B model is the one we're using — then `-f` to point it at the Modelfile. It takes a few seconds to configure and customize the model, and afterwards `ollama list` shows it.
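For reference, here's a minimal sketch of what a Modelfile like the one described above can look like. Treat the exact values as illustrative — the real file ships in the repo, and the parameter set comes from the CrewAI/Ollama integration docs:

```
# Base the custom model on the local Llama 3 image (8B by default).
FROM llama3

# Stop generating as soon as the model emits "Result", so agent output
# stays parseable by CrewAI. Temperature here is an illustrative value.
PARAMETER stop Result
PARAMETER temperature 0.6
```

You'd then build and verify it with the commands from the video: `ollama create crewai-llama3-8b -f ./Modelfile`, followed by `ollama list` to confirm the new entry.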
There it is in the list: `crewai-llama3-8b` — exactly the name we just typed. So now we have Ollama set up and a Llama 3 model specialized for working with CrewAI. Next, let's hop over to Visual Studio Code to look at the crew you'll be building and start wiring it up to run with Ollama. Before we dive into the code, though, I think it's important to give you an overview of what we're about to do. The goal is to create all the content we need to advertise a new product — images and copy for Instagram. We've set up two crews that work together. Crew one is responsible for writing the copy: all the nice text and descriptions of the product we're creating. Crew two is the image creator — really, it generates the descriptions we'll pass over to Midjourney. Concretely: the first crew has three agents that research what's happening in the market and come up with a strategy, plus a creative agent who writes the polished copy that gets passed to crew two. Crew two takes that copy and, for each of the three pieces produced, writes a Midjourney prompt — so we end up with nice images and nice text to post on Instagram. With that picture in mind, let's hop over to the code, set up the environment, and build these crews.

All right, welcome to the fun stuff — we're going to start coding up our crew. To speed things up, I'll quickly run through environment setup, then briefly cover our agents, tasks, and crew (I won't spend a ton of time there, since this is a Llama 3 tutorial), and finally we'll run the crew locally and talk about my experience using Llama 3 locally. The first thing you'll notice is a file called pyproject.toml. If you haven't used one before, check out my CrewAI crash course, but basically this is our project dependency file: it defines the tool we're creating and helps us create a Python environment for this crew and install its dependencies. So let's do that. Head to the terminal and, with Poetry installed (so that typing `poetry` shows its commands), run `poetry install --no-root`. That downloads all the dependencies — it takes about 5 to 30 seconds, and I've already done it on my machine, so I'll skip it. Once it's installed, run `poetry shell`: that spins up an interactive shell — you can see it created our marketing crew environment — and builds a Python virtual environment that encapsulates all the dependencies we'll use while building and running the project. And here's one of my favorite tricks.
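As an aside, here's a trimmed sketch of what a pyproject.toml for a project like this might contain. Package names beyond `crewai` and `langchain-groq` and all version pins are my illustrative assumptions, not the repo's actual file:

```toml
[tool.poetry]
name = "marketing-crew"
version = "0.1.0"
description = "Instagram copy + Midjourney prompt crew"
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = ">=3.10,<3.13"
crewai = "*"
langchain-groq = "*"
python-dotenv = "*"
```

With a file like this in place, `poetry install --no-root` resolves and installs the dependencies, and `poetry shell` activates the resulting virtual environment.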
Copy the environment path that `poetry shell` prints, then in Visual Studio Code click the interpreter selector at the bottom, choose "Enter interpreter path," and paste it in. That points VS Code at the virtual environment you just created, so you don't end up with squiggly "this dependency is missing" warnings all over your Python code. With that out of the way, let's talk about main.py, our agents, and our tasks — and like I said, we'll move quickly, because we mostly want to get to running our crews with Llama 3. Inside main.py we set up the crew, the agents, and the tasks. At a high level, you can see the copy crew we discussed earlier, and below it the image crew — the Midjourney crew. Let's take a quick dive into each so you can see what's going on. The copy crew has three agents — a product competitor agent, a strategy planner, and a creative agent — focused on analyzing the market. It's important to note that each of these agents has access to tools for searching the internet and searching Instagram, and you can dig into exactly what those tools do. By the way, this is João's Instagram-post example — João being the creator of CrewAI — which I pulled down and repurposed so it works nicely with Llama 3 for you guys. All credit goes to him; I just tweaked it. He set up a nice tool called search internet, and whenever he wants to search Instagram, he simply adds an extra scoping phrase in front of the query before passing it to that same search. So our agents will search the internet, search Instagram, and collaborate on writing copy for our Instagram ads, with the creative agent producing the polished narratives. That's the agent side — what about the tasks? This is where each agent gets its work: the product competitor agent does the website analysis and the market analysis, the strategy planner does the strategy, and the creative agent gets called to write the copy. The tasks themselves are standard CrewAI: we pass in the agent that should perform the task, plus the product website and product details so the crew knows what to write about, and we do the same for every task. Like I said, we're speeding through this to focus on Llama 3. So how do we actually start using Llama 3? Great question: back in agents.py, at the top of the marketing analysis agents class, we define a property called `llm` — our large language model — and there's a little trick to it.
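Quick aside: the Instagram search trick described above — reusing the generic web-search tool by prefixing the query with a scoping phrase — can be sketched as a small pure function. The `site:instagram.com` prefix is my assumption of what such a phrase might look like; the repo's actual tool may word it differently:

```python
def scope_query_to_instagram(query: str) -> str:
    """Prefix a web-search query so results come from Instagram.

    Illustrative only: "site:instagram.com" is one common way to scope a
    query with a search API; the repo's tool uses its own phrasing.
    """
    return f"site:instagram.com {query}"


def search_instagram(query: str, search_internet) -> str:
    # Reuse the existing generic search tool instead of writing a second
    # one — exactly the pattern described above.
    return search_internet(scope_query_to_instagram(query))
```

The nice design property is that Instagram search stays a one-line wrapper: any improvement to the generic search tool benefits both.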
The trick for referencing Ollama is this: Ollama is actually running at all times — look up at your menu bar while it's running and you'll see it there — and what's cool is that it's a local server constantly hosting all of your downloaded models, which you can make API requests against. So we wrap our local model in the ChatOpenAI class, which lets us interact with local LLMs and make them behave exactly as if we were talking to ChatGPT — super convenient. The other important thing to mention is that you define which LLM you want to talk to: in our case, the 8-billion-parameter Llama 3 model, so we set the model name to our 8B one. To double-check the exact name, clear the terminal and run `ollama list` — there's `crewai-llama3-8b`, which matches exactly what we have in the code. Good. And the URL is the address of that local Ollama server with access to all our LLMs. That's the setup, and now we can use this LLM in all of our agents — that's what you see on each one: `llm` set to that property. The reason it's so important to add it to every agent in your agents.py file is that if you don't, it defaults to OpenAI: command-click the Agent class, scroll down to the `llm` field, and you'll see the default is ChatOpenAI with a GPT-4 model. If you don't specify one, CrewAI picks GPT-4 for you — and that starts costing you money. One other thing you might notice in this file: I set the API key to "NA". You have to put something there — it just can't be blank — and "NA" works. That's how you access Ollama.

Now that it's set up, let's actually run it, and then I'll share my feedback on using the 8-billion-parameter Llama 3 model. Head back to the terminal — I'll clear things out and open a fresh one so you can see it from scratch. One more time: `poetry shell`, so we have the marketing crew environment, and then `python main.py` to start running the crew with that 8-billion-parameter Llama 3 model. A few important observations: this run takes an insanely long time. You can see it kick off the crew and start using the tools, but what I've noticed is that while the 8B model is phenomenal at smaller tasks, once the crew is conversing, making decisions, searching the internet, and pulling back a ton of information, the model kind of falls apart.
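The local wiring described above can be sketched like this. The helper only builds the constructor arguments; with LangChain installed you would unpack them into `ChatOpenAI` as the commented lines show. The model name is the custom one created earlier, and the URL is Ollama's default local endpoint — both per this tutorial. Parameter names can vary across LangChain versions, so treat this as a sketch rather than the repo's exact code:

```python
# Default address of the local Ollama server (OpenAI-compatible API).
OLLAMA_URL = "http://localhost:11434/v1"


def local_llm_kwargs(model: str = "crewai-llama3-8b") -> dict:
    """Arguments for pointing an OpenAI-style chat client at local Ollama."""
    return {
        "model": model,
        "base_url": OLLAMA_URL,
        # Ollama ignores the key, but the client rejects a blank one,
        # so any placeholder such as "NA" works.
        "api_key": "NA",
    }


# With langchain-openai installed, this becomes (not executed here):
#   from langchain_openai import ChatOpenAI
#   self.llm = ChatOpenAI(**local_llm_kwargs())
```

Setting that one `self.llm` and passing it to every agent is what prevents the silent fallback to OpenAI's paid GPT-4 default.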
So I'd definitely recommend using Llama 3 8B for smaller tasks that are quick and dirty — things like "hey, convert this file from Markdown to HTML." Simple input/output tasks that don't require a lot of creativity are where I've seen Llama 3 work best. Still, I wanted to show you that running the full crew locally is feasible; it just takes a very long time, and you can see how slow it is, so I'll stop it here and switch to the next part, where we'll speed things up. Next, we'll connect to Groq, use the same model so you can see how much faster it runs, and then switch to the 70-billion-parameter Llama 3 so you can see how much smarter and better the results are. Let's cut to working with Groq.

All right, let's set up Groq inside our crew. This is going to be the simplest change you've ever seen — and it's why I love working with CrewAI, because it makes it super easy to substitute the LLMs you're working with. Up in the constructor of our agents.py file, we remove the ChatOpenAI llm property and replace it with ChatGroq — ChatGroq is how you access Groq. How are we able to do that in the first place? Back in pyproject.toml we had a dependency called langchain-groq, which is what provides the ChatGroq package. On a Mac you can hit Command+. to update your package imports, and now `from langchain_groq import ChatGroq` brings in the class that gives our crew access to Groq. Now let's dig into this, because it's a little different. The first thing you'll notice is that we now pass in an API key — specifically, our Groq API key. Where did that come from? If you look in our .env file, you'll see a Groq API key, which I got from GroqCloud: head to console.groq.com, which takes you to the cloud area, and in there you can create your own API keys — click "Create API Key," give it a name, and it generates the key. It's important to save keys like this in your environment-variables file so nobody else has access to them — and make sure your .env file is gitignored so you don't accidentally push your Groq keys to the web. With that set up, the next step is picking which model to use. Remember, with Llama 3 you have the 70-billion- or 8-billion-parameter options — so how do you find the official model names? Back in the Groq playground, what I like to do is open the model sidebar and click through it to see the different names. You can see the Llama 3 entries there — this one is the 70-billion version, and the 8192 at the end is the context window: how many tokens you can pass in.
And as we talked about earlier, the context window doubled — this is where you can see it: we went from 4,096 to 8,192, which is huge. Those are the two Llama 3 models you can use today; eventually you'll also be able to use the 400-billion-parameter Llama 3 that's still in training — nobody has access to it yet, but hopefully we all will soon. In our case, we'll start with the 8-billion-parameter model, so set the model name to the Llama 3 8B entry. Great — that's all working, so we can head to the terminal and run our crew using the updated Groq configuration. And like I said, because we set this one llm everywhere across our agents, we only had to make the substitution in a single place and it just works everywhere, completely seamlessly. Back in the terminal, I need to reopen `poetry shell` first — you can see the prompt switch from "base" to the proper environment — and then `python main.py` starts our new crew. You can see it's going so much faster than when we ran it locally; the local run didn't exactly time out, it just got stuck for a very long time. It'll go off and start pinging the web, back and forth, so I'll cancel it, since it still takes a while to run — and like I said earlier, the 8-billion-parameter model is smart, but it's built for quick tasks; for a large, creative, complex crew like this, it's not the best. So let's update to the 70-billion-parameter model so you can see that in action. Super simple change: update the model name to the 70B one, and that swaps which LLM we're using. But we also need one more quick change, and so you can see exactly what I'm talking about, we're going to get rate limited — I'll run it, show you the rate limit, and then show you how to fix it afterwards. So let's rerun the crew, now using the 70-billion-parameter model. The results it gets, its grasp of the conversation history, and its sense of where it's going are ten times better than what we were just seeing. You can see it's already doing research to find competitors for our temperature-controlled coffee mugs — that, by the way, is what our Instagram posts are about: smart mugs — and it's actually finding mugs on the internet.
awesome. But if you keep letting it run, what you're going to see in a little bit is that it gets rate limited, so I'm going to pause and come back once it actually hits a rate limit and show you how we're going to fix it. So, after a few more minutes of running, we actually got rate limited, like I was just talking about, and this comes down to the fact that when you use Groq, you're only allowed a certain number of tokens per minute. You can see right here we hit a rate limit: a 429 status code means rate limited, and specifically we hit the tokens-per-minute limit. The limit is 3,500, we had used 1,400, and we were requesting roughly another 2,200, which all in all would have exceeded it. That's why we got rate limited and it stopped. So here's what we can do to fix this issue so we don't have to keep worrying about it. Now, this is a temporary solution; CrewAI is improving, and fingers crossed they'll add a tokens-per-minute setting to our crews as well so we no longer have to worry about it. But as a current workaround, what you have to do is come down here and set the crew's max RPM. As you can see right here, that's the maximum number of requests per minute for the crew to execute. I'm going to set it to 2, which is kind of slow, but this is the current workaround to get access to Llama 3, have that smarter LLM working, and not get rate limited. For a large creative task like this, where we're doing a ton of research on the internet, 2 has seemed to be the sweet spot. So now that I have it saved, we're going to hop back over to our crew and rerun it, and once it's done I'll show you the results. So we're going to do python
main.py to kick it off, and once it's finished running I'll tag you guys back in so you can see the final copy and Midjourney pictures it generates for us. So let's give it a second and dive back in. All right guys, now that we've fixed the rate limit issue, it took just a few more minutes to run, but it finally worked, and I want to show you the outputs it created. Here are the three different pieces of copy it wrote, the captions we'd put on our Instagram posts: you know, "wake up to the perfect cup of coffee," and "say goodbye to lukewarm coffee," which is crazy, because we're making a temperature-controlled coffee mug and it figured out exactly what to say. This is the problem, and here's the solution: our temperature-controlled coffee mug. So it actually produced some pretty amazing results, and it came up with some other options too, like "here's our temperature-controlled mug," and the same thing for the other option. But what's really cool is the three Midjourney descriptions it created for us, and here are the images they generated. Not going to lie, the first option is just a regular coffee mug, but option number two properly called out the fact that we're using lighting and smart LEDs on the display to show that this is a smart mug, so I really like this mug and this mug. And you know, this is just a start: we gave the AI the initial prompt, it got us 80% of the way to the solution, and now it's up to us to go tweak what we want. Option three, eh, not a big fan of two of them, but I think it got started in the right direction with some smart thermoses. So this is exactly what we needed. All around, I'm super impressed with the results, and this is super cool because you build this crew once and then you can run it as much as you want, and the AI does all the work for you. And that's a wrap for this video, guys. I hope you enjoyed learning about Llama 3, CrewAI, Groq, and literally everything else, and if you did, I have a ton of other AI content right here on this channel that I'm sure you're going to love: a bunch of full-stack tutorials and a bunch of other CrewAI material. So definitely check that out after this video, and if you need any help, drop a comment down below or check out that free Skool community I created for you guys. But enough of that, can't wait to see you in the next video. Have a great day, see you!
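The two knobs the walkthrough above turns, which Llama 3 the agents talk to (local via Ollama, or hosted on Groq at 8B or 70B) and how fast the crew is allowed to call Groq, can be sketched in plain Python. This is an illustrative sketch, not CrewAI's or LangChain's actual API: the `llm_settings` helper and `RpmLimiter` class are hypothetical names invented for this example, though the Groq model identifiers match the models discussed above, and the throttle mirrors the idea behind the crew-level requests-per-minute cap.

```python
import time

# Hypothetical sketch (not CrewAI's real API): choose which Llama 3 the
# crew should use, and throttle how often it may call the hosted model.

def llm_settings(provider: str, size: str = "8b") -> dict:
    """Map the provider/size choice onto the model name each backend expects."""
    if provider == "ollama":
        # Local model served by Ollama (after `ollama pull llama3`)
        return {"backend": "ollama", "model": "llama3"}
    if provider == "groq":
        # Groq-hosted Llama 3; the "-8192" suffix is the context window
        models = {"8b": "llama3-8b-8192", "70b": "llama3-70b-8192"}
        return {"backend": "groq", "model": models[size]}
    raise ValueError(f"unknown provider: {provider}")


class RpmLimiter:
    """Client-side requests-per-minute throttle, the idea behind the
    max-RPM workaround: block whenever the last `rpm` calls all
    happened within the past 60 seconds."""

    def __init__(self, rpm, clock=time.monotonic, sleep=time.sleep):
        self.rpm = rpm
        self._clock = clock   # injectable for testing
        self._sleep = sleep
        self._stamps = []     # times of recent calls

    def wait(self):
        """Call before each LLM request; sleeps if we are over the cap."""
        now = self._clock()
        self._stamps = [t for t in self._stamps if now - t < 60.0]
        if len(self._stamps) >= self.rpm:
            # Sleep until the oldest recorded call leaves the window
            self._sleep(60.0 - (now - self._stamps[0]))
            now = self._clock()
            self._stamps = [t for t in self._stamps if now - t < 60.0]
        self._stamps.append(now)
```

With a cap of 2 requests per minute, as in the video, runs are slow but the 70B model stays under Groq's tokens-per-minute limit instead of dying on 429s.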
Info
Channel: codewithbrandon
Views: 27,454
Keywords: ai agents, crew ai, crew ai tutorial, crewai langchain, crew agents, autonomous agents, autonomous ai agents, auto gen, autogen tutorial, autogen create ai agents, ai agent, autogen step by step guide, chatgpt prompts, llm tutorial, llama ai, ollama langchain, ollama tutorial, ollama api, ollama rag, ollama mistral, llama 2 tutorial, llama 2 local, llama 2 langchain, local llm python, llama 3, llama3 local, llama3 rag, llama 70b, llama-7b, llama 70b vs chatgpt
Id: 02cdCd43Ccc
Length: 31min 42sec (1902 seconds)
Published: Thu Apr 25 2024