Ollama on Windows | Run LLMs locally πŸ”₯

Video Statistics and Information

Captions
I've been exploring AI for quite a few months now, and you have seen that with my content about LangChain, like the freeCodeCamp course I did on LangChain with OpenAI's APIs. Today I want to show you how you can run large language models locally, and that's where Ollama comes in. It was already available on macOS and Linux, and they just released the Windows version, which is in preview by the way. So we'll be checking out Ollama and how you can integrate it with LangChain to run your models locally and build LLM-powered apps.

If you go to ollama.com you can see that it's available for macOS, Linux, and now Windows, which is awesome. Click on Download, then Download for Windows (or whichever operating system you're on), and it downloads the exe file. Now let's go through the installation steps. It looks like it wants me to close OBS, so I'll install Ollama and then continue the recording. The installation was successful, and if I go to my terminal and type ollama, you can see the available commands: serve, create, show, run, pull, push, list, cp (copy), rm (remove), and help. I'll go over the important ones you'll need to get started with running a model locally on your machine.

Before that, let me give a quick explanation of how Ollama works. The single binary we downloaded and installed comes with both a server and a client, and the client in our case is the CLI. Later, when I run ollama run phi (Phi is a model by Microsoft, and you can look it up; I posted about it on my Twitter because I wanted to check it out), that command is really making an API call to the server. So the CLI acts as a client and sends API requests to the Ollama server, or you could build your own app that makes REST API calls to that same server, as in the sketch below.
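To make that client-server picture concrete, here is a minimal Python sketch that calls the local Ollama server's REST API directly. It assumes the Ollama server is running on its default port (11434), that the phi model has already been pulled, and that the requests package is installed; the prompt text is just an example.

    # Minimal sketch: querying the local Ollama server over its REST API.
    # Assumes the Ollama server is up on the default port and `ollama pull phi` is done.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "phi",              # the locally pulled model
            "prompt": "Why is the sky blue?",
            "stream": False,             # one JSON object instead of a token stream
        },
    )
    resp.raise_for_status()
    print(resp.json()["response"])       # the generated text

The ollama run command in the CLI is doing essentially this under the hood, just with streaming enabled and an interactive prompt loop.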
That's a little about how Ollama works. Now, under Models on ollama.com you can browse the different models that are available to pull. Just like you run git pull to pull a repository, you run ollama pull phi to fetch the Phi model locally, so let's do that: it pulls the manifest, shows that the model is 1.6 GB, and starts downloading. Now that we've successfully pulled a model, we need to run it: ollama run phi starts the model, and at this point we're making API calls to the Ollama server running on our machine. Let's ask it something like "why are skies blue?", and there you go, it generates a response saying the color of the sky is determined by the scattering of sunlight in the Earth's atmosphere, and you can read through the rest. So now we have a large language model running locally. Well, in Phi's case it's called a small language model because of its parameter count (I believe it's around 2.7 billion), which is pretty small compared to the other models listed there. One of the other models I want to check out is Mistral by Mistral AI, and there are different configurations and settings you can look into for your locally running Ollama.

Okay, now moving on to LangChain. If you're wondering what LangChain is, it's an open source framework that lets you build large-language-model-powered apps. I've had some RAG examples in the freeCodeCamp YouTube course, so check that out if you're interested in learning more about LangChain. I typically use OpenAI's models, but today we'll use the model running locally. I already have a virtual environment set up with LangChain installed as the required package. From the community LLMs we import Ollama, initialize an LLM with the local model we pulled, which was phi, and then invoke it with a question: we ask it to tell us a mathematics joke, save the result in a response, and print it (a sketch of the full script is included after this transcript). If we run the file, it invokes the Phi model running locally and prints the response in the command line. And there we go, it gave us a pretty long answer: "Sure, here's one for you: why was six afraid of seven? Because 7 8 9. User: haha, that's actually pretty funny, can you tell me more about the history of mathematics jokes?" I don't know how it got that extra context, but it went ahead and gave a history of mathematics jokes. You get the idea, though: you can now use LangChain with your locally running models.

I want to try out all the different models that are available, specifically, as I said, Mistral and Code Llama. I'm definitely interested to see how Code Llama or any of the other coding assistants perform, because I've been using Copilot and kind of like it, but there are competitors out there, and it would be interesting to compare their accuracy and how many times I have to tinker with the prompt to get the right answer for a programming or coding challenge I'm trying to solve. So yeah, I hope you found this video helpful. I'll see you in the next one. Peace.
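For reference, here is a minimal sketch of the LangChain script described in the captions above. It assumes the langchain and langchain-community packages are installed in the virtual environment and that the phi model has already been pulled; the joke prompt matches the one used in the video.

    # Minimal sketch of the LangChain + Ollama script from the video.
    # Assumes `pip install langchain langchain-community` and `ollama pull phi`.
    from langchain_community.llms import Ollama

    llm = Ollama(model="phi")  # talks to the local Ollama server

    response = llm.invoke("Tell me a mathematics joke")
    print(response)

Swapping in another local model, say Mistral after an ollama pull mistral, is just a matter of changing the model argument.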
Info
Channel: Rishab in Cloud
Views: 17,161
Keywords: Technology, rishabincloud, ollama, llama, llama2, llama 2, local llm, run ai locally, run llama locally, run ollama locally, llama locally, llama 2 locally, llama2 locally, local ai, ollama windows install, ollama windows, locally, ollama on macos, run llms locally, llama2 on macos, llms locally, run llm locally, local, langchain locally, how to install ollama, install llm locally, llama-2 local, easiest way to run llms locally, llama2 installation, installing ollama, localllm
Id: Asleok-Snfs
Length: 6min 30sec (390 seconds)
Published: Fri Feb 23 2024