Create your own CUSTOMIZED Llama 3 model using Ollama

Captions
Okay, let's jump right into customizing Llama 3, which was just updated about 16 minutes ago on the Ollama website. If you haven't already, go ahead and search for Llama 3 on the Ollama site. If you don't have Ollama installed yet, check out the two videos linked in the description; I have videos on how to install and set up Ollama on both Mac and Windows, so watch those before we get going. This is Llama 3, which just dropped today, and we're going to show how you can customize it with a system prompt and the different parameters that are available. Go ahead and navigate to the page and read some of the documentation. We're going to use the Llama 3 8 billion parameter model, and for those who aren't familiar, you can find more information about the model on its model card.

The first thing I'm going to do is jump over to the terminal so we can start running some Ollama commands to pull Llama 3 down onto our computers. With my terminal open, the first command I'll run is ollama list, which lists all the models I have installed; as you can see, I don't have any yet. Next I'll run ollama pull for the brand new Llama 3 model, specifically the 8 billion parameter version. I'll let that download for a few minutes and come back when it's done.

Okay, I'm back at the terminal, and we can see that Llama 3 has downloaded onto my laptop. The next command I'll run is ollama run llama3:8b, just to make sure everything is functioning properly, and I'll ask it a very simple question: write me some sample Python code. We can see it printing out some sample Python code, so we know it's working. I'll stop it, type /bye to exit the model, and clear the screen.
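If you're following along, the terminal session up to this point looks roughly like this (a sketch; the llama3:8b tag is the one shown on the Ollama model page, so check the site if the tag has changed):

    ollama list              # show models already installed locally
    ollama pull llama3:8b    # download the Llama 3 8B model
    ollama run llama3:8b     # start an interactive session to confirm it works
    # inside the session, type /bye to exit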
The next step is to customize this model's system prompt and some of its parameters so it better fits our particular needs; maybe we want the system prompt to say certain things, and that's why you would create your own version of the model. So I'm going to jump into VS Code and walk through how to create a custom version of Llama 3.

I have VS Code open with a file called custom-llama-3 pulled up on my laptop. You can call the file whatever you want, but make sure you type the correct name later, when we run the command that references the Modelfile. You could also write this in a plain text editor; I'm using VS Code because it has an integrated terminal. We'll briefly walk through each line of what it takes to create your own Modelfile.

The first line is FROM llama3:8b. This sets our base model: we're using the base Llama 3 8 billion parameter model as the starting point for our custom model. One thing to call out: FROM is not case sensitive, so you could type it in lowercase, but the convention in Modelfiles is to write it in all caps.

Next we set up some custom parameters. You have to set these correctly for each model, or your custom model may produce incorrect or strange output. The first parameter I'm going to set is temperature. By default the underlying temperature is around 0.8 for most models; we're going to set it to 1, which makes the model as creative as it can possibly be. If you've played with OpenAI and other large language models, this parameter shouldn't be foreign to you at all.

Next I'm going to paste in the stop parameters. These are specific to each model, and you can get the values from the model card on Ollama, so let's jump back to the browser so you can see how I came up with them rather than just making them up. We're back at ollama.com on the Llama 3 model page. To find the stop parameters, go to the model card, click on "params", and you'll see the set of stop values. Take those values and add them one by one to your Modelfile, like I did. Each model is different, so if we were customizing Mistral, we would have to use its stop parameters instead.

Back in the Modelfile in VS Code, the next thing we need is the template this model uses to produce its output. I'm going to paste that in; this is the template for the Llama 3 model. Again, let's jump back to the browser so you can see where it comes from. On the Llama 3 model card, the template is under the "template" tab. All I've done is copy that value and paste it into my Modelfile. This process is the same for any model you want to customize from ollama.com.

Back in VS Code, we'll set up our last entry, the SYSTEM instruction, which is basically our system prompt. I'm going to use: "You are a helpful AI assistant named Llama 3 Droid." You can set it to whatever you want and even make it longer if you like, but for this example I'm going to keep it pretty simple.
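Put together, the finished Modelfile looks something like this. This is a sketch: the stop tokens and template shown here are the ones the Llama 3 model card listed at the time, so copy the current values from the card's "params" and "template" tabs rather than trusting this reproduction.

    # custom-llama-3 — a Modelfile built on top of the base Llama 3 8B model
    FROM llama3:8b

    # sampling settings
    PARAMETER temperature 1

    # stop tokens copied from the "params" tab of the Llama 3 model card
    PARAMETER stop "<|start_header_id|>"
    PARAMETER stop "<|end_header_id|>"
    PARAMETER stop "<|eot_id|>"

    # prompt template copied from the "template" tab of the model card
    TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

    {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

    {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

    {{ .Response }}<|eot_id|>"""

    # custom system prompt
    SYSTEM You are a helpful AI assistant named Llama 3 Droid.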
So that's everything we need in our Modelfile. Now we need to create the new model from it, so I'll save the file and open up a terminal. The command is ollama create followed by the name of the new model; I'm just going to call it my-llama-3, but you can name it whatever you want. Then we add the -f flag, which tells Ollama where the Modelfile is located. Mine is in the current directory, so I can just reference custom-llama-3, because that's what I named the file, and hit enter. It runs pretty quickly: it transfers the model data, reads the model metadata, and creates the model layers. That's all it takes to create a custom model with the create command.

Now let's see if our model shows up. I'll type ollama list, and we can see the Llama 3 8 billion parameter model, llama3 with the latest tag, and the last model, my-llama-3, which we created a few seconds ago. Let's test it; I'll open a brand new terminal window so you have a better view.

Back at the terminal, we're going to test our models. The first one is the baseline Llama 3 model. I'll ask it "what is your name", and it says it doesn't have a personal name, which is what we expect, so let's exit out of it. Now let's use our new model, my-llama-3: we type ollama run followed by the model name. As a heads up, you don't have to add a tag at the end; you only need a tag when you're calling out a specific parameter count, so if I had both the Llama 3 8 billion and 70 billion parameter models, I would use tags to pick between them. Remember we set the system prompt to name the assistant Llama 3 Droid, so let's ask our custom model "what is your name", and it answers: "Hello there, my name is Llama 3 Droid, but you can call me Llama for short." That's an example of how you can customize settings like your system prompt and your temperature with a custom Modelfile. Let's ask one more question, "can you write me a simple Java program", and it writes one, so it still functions as it did before; we've just added our own settings on top of the base model. That's what we did to make this model our own. I'm going to exit by typing /bye.
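The create-and-test flow looks roughly like this (a sketch; my-llama-3 and custom-llama-3 are just the names used in the video, so substitute your own):

    ollama create my-llama-3 -f custom-llama-3   # build the custom model from the Modelfile
    ollama list                                  # confirm my-llama-3 now shows up
    ollama run llama3:8b                         # baseline model: says it has no personal name
    ollama run my-llama-3                        # custom model: introduces itself as Llama 3 Droid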
The next thing I want to show you is where you can find the other parameters we could have set in our Modelfile. I'm at the Ollama Modelfile documentation on GitHub, and we can see the different options available when creating a Modelfile; some of these we've already covered, but I wanted to show you where the other information lives. If you go to the parameters section, there's a whole host of other parameters I could have set. For example, I could have made the context window larger if the model supports a longer token context, or set things like top_k or top_p; you can also see the stop parameter and the temperature parameter there, which is where a lot of these values came from (there's a short sketch of what those extra options look like at the end of these captions). I'll put the link to the documentation in the description, and the link to the Modelfile will be in a GitHub repo if you want to pull it down and alter it to create your own Modelfiles; you'll have to create a Modelfile for any model you want to customize on Ollama anyway.

You're probably wondering: okay, I've got a custom model, how can I use it in an application? Check out the two videos on screen about building your very own Ollama chatbot. If you like this video and the content, hit like and subscribe; I try to put out content like this on a weekly basis. I appreciate you hanging around to the end of the video, hope you liked it, and have a great day.
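As a sketch of what those extra Modelfile options look like (the values here are arbitrary examples, not recommendations, and num_ctx only helps if the model actually supports the larger context):

    PARAMETER num_ctx 8192   # context window size in tokens
    PARAMETER top_k 40       # sample only from the 40 most likely next tokens
    PARAMETER top_p 0.9      # nucleus sampling threshold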
Info
Channel: AI DevBytes
Views: 12,917
Keywords: DevTechBytes, ai news, how to install llama 2, how to install ollama, llama 2, llama 3, llms locally, machine learning, ollama, ollama api, onlinelearning, techtutorials, streamlit, streamlit tutorial, streamlit python, python streamlit, how to use streamlit, learn streamlit, streamlit tips
Id: k39a--Tu4h0
Length: 12min 55sec (775 seconds)
Published: Fri Apr 19 2024