Llama3, a powerful AI for QA

Video Statistics and Information

Captions
In today's video we're going to introduce Meta's new Llama 3, and we're going to talk specifically about how to use Llama 3 for QA (quality assurance) testing.

First, let me introduce what Llama 3 is. On April 18th, 2024, Meta announced Meta Llama 3, the next generation of their state-of-the-art open source large language model. Meta AI, built with Llama 3 technology, is now one of the world's leading AI assistants that can boost your intelligence and lighten your load, helping you learn, get things done, create content, and connect to make the most out of every moment. What this means is that it has state-of-the-art performance, and we're going to look at a few figures they published, which can be found on the blog on their website.

This first slide shows the models' performance. There are two versions of the model, 8 billion and 70 billion parameters, which differ in size. The 8 billion model, the lighter of the two, is compared against Gemma and Mistral, and the 70 billion model is compared against Gemini and Claude 3. I won't go into each benchmark, but the larger the number the better, and Meta highlights that Llama 3 performs better than the other models shown on these metrics.

They also did a human evaluation, not just benchmarking. In that comparison, green indicates where Llama 3 performs better, red where it performs worse, and gray where the two models perform the same. Compared to Claude Sonnet, Llama 3 wins 52% of the time; compared to Mistral, 59% of the time; compared to GPT-3.5, 63.2% of the time; and compared to Llama 2, the previous version of Llama, 63.7% of the time.

There is one more comparison, looking specifically at the pre-trained models against the other models on another set of benchmarks. Again, the higher the better, and Llama 3 comes out ahead of all of the models shown.

Here is the overview they provided of the model. The main thing I took away from the blog is that Llama 3 is an open source system, which lets developers fine-tune the models and work with them directly, unlike many other GPT-style models that don't give developers that kind of freedom.

Later in this video I'll show you how we can use Llama 3 for our purposes: how to download and install it with Ollama, and how to run Llama 3 from Visual Studio Code, the IDE that I use, through the CodeGPT extension.

To access Llama 3 in the browser, you just go to meta.ai. I'm going to ask it a few questions; for example, "Tell me about yourself."
You can see that the interface is very similar in format to the other assistants; ChatGPT, for example, looks very similar to this. First I'll just ask it something about itself and see what it says. Basically it says it's an assistant, it can do all these different tasks, it has some limitations, and it knows many languages.

Now let me ask some other questions. For example, let me ask about Llama 3 itself; I'll just type "Llama 3" and see what it says. It says that Meta AI is a large language model built on Meta's Llama 3.

Next, let me ask a question about what we're interested in: what is software QA? It takes a bit longer, but it answers that software quality assurance is the practice of monitoring all software engineering processes, activities, and methods used in a project to ensure the quality of the software. For the most part this response is factual and correct.

Let me ask another question and see if it can write something for us: write a test plan for a web application based on a database. This is the test plan it produces for our specific case. So that's Llama 3 in the browser; go ahead and play around with it. It's easy to use and intuitive, and very similar to ChatGPT and the other models.

Okay, now I'm going to show you how to get Llama 3 onto your own computer; this is the second part of the instructions. Go to the Ollama website (ollama.com) and download the version for your platform. We're on Windows, so click Download for Windows, and once the download finishes it will look like this; just follow the setup instructions. Once Ollama is running, you can check that your setup is correct by opening a command prompt and typing ollama --version, which prints the Ollama version you have installed.

I'm not going to use Ollama through the command prompt, though; I'm going to use Visual Studio Code. So I'll open Visual Studio Code, give it some time to start up, click on Extensions, search for CodeGPT, and look for this second result, then install it. I already have it installed. Once you finish installing it, make sure you restart Visual Studio Code, otherwise it won't show up properly. Once CodeGPT is installed, you can find it down here and click it.

For the purposes of this video, the first thing I'm going to do is download Llama 3. I'll open the terminal inside Visual Studio Code and type ollama pull llama3:8b, which downloads the language model onto your own computer. In our case it's pretty big, 4.7 GB. The reason I didn't go for the 70 billion parameter version is that I wanted to keep things lightweight. Once it finishes, it will tell you that the model has been downloaded and pulled.
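To recap, these are the terminal commands used in this setup. This is a minimal sketch: llama3:8b is the 8-billion-parameter model tag on Ollama, and the last line is an optional sanity check from the terminal that isn't shown in the video.

    ollama --version                              # confirm Ollama is installed and on your PATH
    ollama pull llama3:8b                         # download the 8B Llama 3 model (about 4.7 GB)
    ollama run llama3:8b "What is software QA?"   # optional: ask a quick question directly from the terminal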
Now you can go ahead and open the CodeGPT extension right here. I've already selected it, but you go to the providers, select Ollama, then go to the model and select the 8 billion parameter version we just pulled. Now we're ready to use CodeGPT with our own code.

So what are some things we can do? First, you can ask questions directly here, like we did online, but obviously that's not what we're most interested in. A more useful feature is that you can highlight any of your code, right-click, and choose from a set of commands: Explain CodeGPT, Document CodeGPT, Unit Test, Find Problems, Refactor, and so on. One thing I did find is that this Llama 3 model runs very slowly compared to the GPT models we used previously; I'm not sure whether that's a tuning issue, but it is slow. So I'm going to ask it to explain a shorter bit of code. I'll highlight this code right here and ask it to explain it. I'm going to pause the video now, because it does take a long time, and come back later.

Okay, I let it run for about four or five minutes and it has finally given output, so let's see what it says. This is the code we highlighted earlier and asked it to explain, and it goes line by line and tells us exactly what the code does. For example, here it says this line navigates to the specified URL, which is correct, then this one gets the title of the web page and asserts on the title, and so on (a rough sketch of this kind of test is included after the captions). As you can see, this is very helpful for explaining code, and for the most part it's accurate. Does it really justify the long run times at the moment? I'm not so sure, but maybe in the future we can find ways to make it run faster.

So that's how you can use Meta's Llama 3 in your own coding environment in Visual Studio Code, and how you can use it to enhance your QA testing workflow. If you found this video helpful, please give it a like and subscribe to our channel. Thank you for listening, and we look forward to seeing you next time.
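For reference, here is a minimal sketch of the kind of title-check test described in the captions (navigate to a URL, read the page title, assert on it). The video doesn't show the exact code, so this assumes Python with Selenium, and the URL and expected title are placeholders.

    from selenium import webdriver

    def test_home_page_title():
        # Launch a Chrome session (Selenium 4+ can manage the driver automatically)
        driver = webdriver.Chrome()
        try:
            # Navigate to the specified URL (placeholder address)
            driver.get("https://example.com")
            # Read the page title and assert it matches the expected value
            assert "Example Domain" in driver.title
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_home_page_title()
        print("Title check passed")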
Info
Channel: Test Automation 101
Views: 624
Keywords: Llama 3, Open AI, Visual Studio Code, Llama 3 integration, coding productivity
Id: QSLqAML1jsU
Length: 11min 23sec (683 seconds)
Published: Mon May 06 2024