"Private GPT" on iPhone/iPad (Ollama-powered from Local PC)

Video Statistics and Information

Captions
This is going to be a fun, hacky video where we learn how to use Ollama-powered chat on your iPad. Ollama runs on your Mac or PC, and if you have an iOS device, say an iPhone or an iPad, you get an application powered by a private model served from your own computer. I found it really exciting and fascinating. Honestly, I don't know how many of you will find it fascinating, but when I got it to work I loved it: ultimately I made a large language model run privately on my computer, exposed it to the iPad as a server, and got it all working. So I decided to make a complete tutorial about it. This might be a little too technical for some people, but if you're familiar with the tools it should be really straightforward. I'm doing this on a Mac, so certain things I do might be different on Windows or Linux; my apologies if that part isn't very clear for you. The second thing is that you need either an iPhone or an iPad, because we're literally going to deal with an iOS application.

To start with, I recently came across an application called Enchanted LLM, an iOS app for Ollama that needs Ollama models. I don't know when it was launched, but I could see people already using it; this is just the first time I came across it. It has a lot of features. The main thing is that it takes an Ollama server as its input, as the powering engine, and everything else works on top of that: you've got conversation history, dark mode, light mode, and so on. Let me quickly pull up my iPad and show you: go to the App Store, search for "Enchanted LLM", and you'll find it. I have it installed on my iPad, and I've already started chatting with it with my typical questions.
I could chat, but right now Ollama is unreachable, because I'm about to show you how to connect it. First things first: you need to install the application on your iPad or iPhone. Second, you need Ollama installed. I've already covered Ollama in a video, which I'll link in the YouTube description, but it's very straightforward: go to ollama.ai, download Ollama, install it, and pull a default model. At this point you should have the Ollama setup done and the iOS application installed.

Once you have those two, the next thing you need is something called localtunnel on your computer. The problem I could not solve is sending the Ollama server directly to my iPad; if you have a better server setup, let me know in the comment section. What I ended up doing is tunneling it. You could use ngrok instead, but with ngrok you have to create an account and add an authentication key, and I didn't want to go through that pain. If you're a fan of ngrok, definitely go with ngrok, but I'm going to show you how to do it with localtunnel. To install localtunnel, copy the install command, go to whatever terminal you use, and run it. In my case localtunnel is already installed, so I don't have to install it again.

Once that is done, our basic setup is complete: Ollama is installed (let's say you've downloaded a model and you're ready; I've linked my Ollama video tutorial), Enchanted LLM is on the iPad, and localtunnel is installed. With that assumption, the first thing we'll do is start the Ollama server.
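As a quick reference, the localtunnel install step mentioned above is a single npm command; this is a sketch assuming you already have Node.js and npm on your machine (the npm package is `localtunnel`, and it provides a CLI named `lt`):

```shell
# Install the localtunnel CLI globally (requires Node.js/npm).
npm install -g localtunnel

# Sanity check: the `lt` command should now be on your PATH.
lt --help
```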
We run `ollama serve`, and on localhost port 11434 the model is now running and being served as an API endpoint. The easiest way to verify that is to run `ollama serve` again: it immediately says `Error: listen tcp ... address already in use`, which means the port is already taken, presumably by the Ollama model (mistral or llama in my case, I don't remember which) being served through it. That assumption is most likely right. So on localhost 11434 we now have an API endpoint server running, where anybody can hit the endpoint and get a response back. That's where we are.

Now that we know the Ollama server is running, let's start tunneling. It's quite simple: once localtunnel is successfully installed, all you have to do is run `lt --port` with the port you want to tunnel. This is the only part that is a little shady, because you are now sending your data through somebody else. The good thing is that localtunnel is open source; if you want something more secure, use ngrok. And if you don't like sharing anything over the internet at all, this setup may not work for you; you'd need a completely local server the iPad can reach. My iPad and my Mac are on the same network, but I still could not get the direct link to work, so at this point I'm tunneling through localtunnel. With localtunnel you can do this from anywhere: you can have a computer at home where Ollama is running, take the localhost URL, tunnel it to the internet, put the resulting URL on the iPad, and use it anywhere. So now, from localhost and your local network, you're going out onto the internet; that's what's happening.
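The serve-and-verify steps above look roughly like this in a terminal, assuming a default Ollama install that listens on 127.0.0.1:11434:

```shell
# Terminal 1: start the Ollama server (listens on 127.0.0.1:11434 by default).
ollama serve

# Terminal 2: sanity-check the API endpoint; a healthy server replies
# with the plain text "Ollama is running".
curl http://127.0.0.1:11434

# The check from the video: a second `ollama serve` fails immediately with
# an "address already in use" error, confirming port 11434 is taken.
ollama serve
```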
So I'm going to use lt here. Let me go to a new tab and run `lt --port 11434`. Once we do that, we get a tunnel link: https://modern-wings-attack.loca.lt. Note that localtunnel has recently introduced a password. This is the link you have to put in the app on your iPad, but before you do that, first open the link. Once you open it, it asks for a password, and getting it is quite simple and straightforward: copy the command it shows, go back to your terminal, and run it. That gives you an IP address; copy it, paste it here (without the percent sign, of course), and click submit. Once you do that, it confirms Ollama is running: the same Ollama running at 127.0.0.1:11434 is now available on the internet.

So here is what we have done so far: we have a computer on which we downloaded, installed, and ran Ollama with a large language model being served, but served only locally; now we have taken that serving endpoint and tunneled it out to the internet, and that is where it now lives. The next step is to take that URL and add it to the app. On the iPad, I tap the hamburger menu and open Settings, and here I add the link: https://modern-wings-attack.loca.lt (what a name). Tap Save, come back, and wait a moment to see whether Ollama becomes reachable. You can see the error is gone, which means Ollama is reachable. Tap the hamburger menu to see the history, or start a new chat, and you can also see which model you are using.
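The tunnel and the password lookup above can be sketched as follows; the exact password URL is an assumption based on how localtunnel currently works (the tunnel password is simply your public IP address):

```shell
# Terminal 1: expose local port 11434 at a public https URL.
# localtunnel prints something like: "your url is https://modern-wings-attack.loca.lt"
lt --port 11434

# Terminal 2: fetch the tunnel password that the loca.lt landing page asks for;
# it is just your public IP address.
curl https://loca.lt/mytunnelpassword
```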
Now you can start asking whatever you want. For example: "help me with the factorial" (I've put a pencil emoji in there; it doesn't matter much). What you might notice at this point is that this is actually running on your local computer: it loads the model, does everything it is supposed to do, and then shows you the result. You can also talk by voice here; the option is available. So I say "tell me a joke about Elon Musk", and it gives the same joke as before. Maybe I should ask something else: "create a small tweet about why YouTube is completely messed up" (I hope the algorithm doesn't punish me for this). And there's the tweet; it says something like "just spent ages trying to find a video, only to realize the algorithm suggested videos I've already watched". Every time something happens, you can actually see it displayed here on my local machine as it executes: when I speak, you can see the request arriving and running locally. Let me ask a few more questions. "What is your name?" "I don't have a name." "Are you created by OpenAI?" "No, I'm not created by OpenAI." That's okay. "Who is Sam Altman?" It just repeats the question, which is bad. "Is Sam Altman a good person or a bad person?" Now it gives me a philosophical note about what is good and what is bad.

But we have successfully done it: we ran a local large language model on our computer (in my case, my Mac), sent it out to the internet, and that internet API endpoint is being hit from the iPad using the tunnel link, and it works. I know this is not the most polished of tutorials; I decided to create it because I found it fun.
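Under the hood, the app is just calling Ollama's HTTP API through the tunnel, so you can hit the same endpoint yourself with curl. In this sketch, the subdomain is a placeholder for whatever URL localtunnel gave you, `mistral` stands in for whichever model you pulled, and the bypass header is localtunnel's documented way of skipping its reminder page for non-browser clients:

```shell
# Ask the tunneled Ollama server for a completion, just as the iPad app does.
curl https://your-subdomain.loca.lt/api/generate \
  -H "Content-Type: application/json" \
  -H "bypass-tunnel-reminder: true" \
  -d '{"model": "mistral", "prompt": "Tell me a joke", "stream": false}'
```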
But let me know in the comment section what you feel. Before you go, there are things you need to do. First, shut down the localtunnel tunnel; once you shut it down, the public link stops working, and the app stops working too. This is very important: don't leave it running. Second, shut down the Ollama server as well. Once you've shut down both, you're not running any server, and the app will not work.

What I'm yet to figure out is how to bypass this localtunnel step and use the local link directly. I tried a bunch of options to take the link directly from the Mac and give it to the app; maybe on Windows it would work, there's a good possibility. Let me know in the comment section what you think about that. Otherwise, I found this quite fascinating and really fun, and I'm happy to have put this tutorial together and to see a local model running on my computer powering an iPad application. Thanks to the developer for making it open source, and for building the iPad application itself so that we don't have to build it ourselves. See you in another video. Happy prompting!
Info
Channel: 1littlecoder
Views: 4,059
Keywords: ai, machine learning, artificial intelligence
Id: EGQSKMaN30o
Length: 13min 5sec (785 seconds)
Published: Tue Jan 30 2024