Obsidian AI and GPT4All - Run AI Locally Against Your Obsidian Vault

Captions
Hey, Mike here. Today I saw this Twitter post, or X post, whatever you want to call it now. It says: what if ChatGPT had knowledge from your notes and documents, while keeping it all private? You can run AI locally; no data leaves your laptop, for custom, secure answers from your second brain. When you scroll down, it's available on both Windows and Mac, and further down you see the word "Obsidian." When I saw that word, I thought to myself: I have to try this out. So that's what we're going to do. Let's head over to the Nomic AI X profile and then to the gpt4all.io website. This is what it looks like. There's also an Ubuntu installer for Linux-based devs (specifically Ubuntu), but in our case we'll be using the OS X installer. Let's download that. We get a setup window; we click Next, it fetches the latest versions, and it installs right into our Applications folder. There's a components picker with only one option, so we just select GPT4All and click Next. I read through the license and there doesn't seem to be any gotcha in it, so I accept, click Next again, then Install, and it downloads everything it needs to run the actual application. Click Finish, search for GPT4All, and there it is.

It's very similar to LM Studio. Let me actually put them side by side: here we have the GPT4All window, and next to it the LM Studio window. I've been using LM Studio for the past few months; they update very frequently and make use of all the Mac M1 (or M-series in general) features, utilizing the GPU, CPU, and so on. I don't want to get too deep into the weeds on how things work here, but I wonder whether the same underlying software technologies are being used in GPT4All.

Before we continue, I want to explain in a little detail what these opt-ins really mean. A lot of the time we see words like "anonymous usage analytics" or "anonymous sharing" and assume our data is all good, that only analytics are being sent over. While things like your IP address or your name aren't sent, any data you query within your chat, especially data from your Obsidian vault, will be sent over if you opt in and select Yes for both of these. If you have sensitive information in your notes or vault, that sensitive information will be sent in full, nothing redacted. From the opt-in text itself: "You should have no expectation of chat privacy when this feature is enabled. You should, however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models." Those two sentences are the most important: no expectation of chat privacy, and your chat data openly available for anyone, as long as you select Yes on both. That's why I personally select No.

The next screen shows the models available for download right away. I'm running this on an Apple M1 with 16 GB of memory, from 2020, so three to four years old. I know from personal experience that the largest model I can run is a 13-billion-parameter model, so anything at or below 13B I can use comfortably. Apart from downloading models for offline use, you can also use ChatGPT straight from the GPT4All program. Of course, that requires an internet connection and an API key, but it's nice to be able to switch between local and online. We won't be playing with that right now, though. I do want to try the Mistral OpenOrca model, so let's download that.

Here you have your settings icon, and the feature we want GPT4All for is LocalDocs. When we click LocalDocs, we're shown the local document collections screen, and you have to download their SBert model first. That download is much quicker because it's smaller, only about 43 MB. Now, in the LocalDocs settings, you can add your document paths, so let's add our Obsidian vault: I type in "Obsidian Vault," select my base vault, and open it. Meanwhile, we're still waiting on the Mistral OpenOrca model to finish downloading. Unfortunately, the download seems to have stalled for some reason, so we cancel and resume, and, well, that's a really nice feature: it doesn't restart the entire download, it just picks up from where it left off, or froze, in this case. Once it finishes, it calculates the MD5 checksum to make sure everything is as it should be.

Let's exit out of here. It says "loading model" at the top. Let's start by typing a message: "Hi, how are you?" The response time is very, very fast, which is great; this is the 7-billion-parameter model. Let's try a question like "What's 5 + 5?" The answer: "The sum of five and five is equal to 10." Cool. Now let's actually turn off the Wi-Fi and see if it still works: "Who was the 41st president of the USA?" And there's the answer: George H. W. Bush. It even gives his serving years, that he succeeded Ronald Reagan and was followed by Bill Clinton. It did tack something odd onto the end, predicting what I would ask next, and the syntax looked a little off, but those are issues you'd expect from a 7B model.

Let's reconnect to the internet and look at downloading different models. I read that I could point GPT4All at the models already downloaded by LM Studio, so the two programs can share models instead of downloading two separate sets of the same files. The only problem is I wasn't sure where LM Studio puts its models. Let's take a look. Okay, the local models folder is inside the cache's LM Studio models directory, so let's take this models folder and place it right here. Head back to GPT4All, go into the models folder, find TheBloke, and just add the entire folder. Now I'd hope it shows all of the models, and there we go, perfect. Let's try a 13-billion-parameter model; Orca 2 13B is loaded. "I have two colors, blue and red. What will happen if I mix them together?" Let's see how fast this response is. Okay, it's certainly not as fast as the 7B, but still very speedy.

As you can see, I have my entire Obsidian vault connected to LocalDocs here. If you click on this little database-looking icon, you'll see it indexing my entire Obsidian vault. Okay, it's done indexing, so now we can enable it and exit out of here. Let's switch over to the chat: "What are the best YouTube video titling practices?" As you can see, it does a pretty good job of understanding which files within your vault are appropriate for the question you're asking. That's why titling the notes in your Zettelkasten well is very important, in my opinion, and it's certainly an easy thing to automate with something like Text Generator's generate-title functionality. Let's click on one of these sources and see what happens. Ah, so it only sends a small portion of the note. What about this one? And this one? Again, it only takes in very small snippets of your documentation.

Here are my closing thoughts on the matter. Look, it's a cool tool, but it's definitely in its infancy. I don't see any reason to use it as of right now, other than wanting to keep up with its latest updates. If you're going to use something like ChatGPT to chat with your own notes, I would use the Obsidian plugin Smart Connections for that. If you're looking for a local method of chatting with your Obsidian vault, as in you don't want to use ChatGPT or any other online version but instead want to chat with your files locally on your machine, then yes, it's a good decision. But keep in mind that, again, it's in its very early stages; you're going to see a lot of updates and changes throughout the next few months, and as of right now the quality is nowhere near its potential. So if you're the experimenting type, go ahead and use it; I truly hope you find more use from it than I have. For now, I'm personally sticking to chatting with my notes through Smart Connections and using Text Generator; I've found absolutely nothing better than that so far. I really hope that changes, because I think competition is always a healthy thing, and if you're already the best plugin or program in your field, there's really no incentive to get better. As always, thank you all so much for watching, a special thank you to our patrons, and I'll see you in the next video tomorrow.
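The "13B is the most my 16 GB M1 can handle" observation in the video follows from a rough rule of thumb: a locally run model quantized to around 4-5 bits per weight needs roughly params x bits/8 bytes for the weights, plus some headroom for the runtime and context. A minimal sketch of that estimate (the 4.5 bits-per-weight figure and the 1 GB overhead are assumptions for illustration, not measured values):

```python
def approx_ram_gb(params_billion: float,
                  bits_per_weight: float = 4.5,
                  overhead_gb: float = 1.0) -> float:
    """Rough memory footprint of a quantized model: weight bytes
    plus a fixed allowance for the runtime and KV cache."""
    weight_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
    return weight_gb + overhead_gb

for b in (7, 13, 34):
    print(f"{b:>2}B ~ {approx_ram_gb(b):.1f} GB")
```

By this estimate a 13B model lands around 8-9 GB, comfortably inside 16 GB of unified memory, while a 34B model would not fit, which matches the video's experience.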
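The video copies LM Studio's models folder into GPT4All's, which duplicates multi-gigabyte files. A lighter alternative, not shown in the video, is to symlink the model files instead, so both programs read one copy on disk. This is a sketch; the `.gguf` pattern and the commented-out paths are assumptions about where your installs keep models, so check your own machine before using them:

```python
from pathlib import Path

def link_models(src_dir: Path, dest_dir: Path,
                pattern: str = "**/*.gguf") -> list[Path]:
    """Symlink every matching model file under src_dir into dest_dir
    (flattened), skipping any file name that already exists there."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    linked = []
    for model in sorted(src_dir.glob(pattern)):
        target = dest_dir / model.name
        if not target.exists():
            target.symlink_to(model.resolve())
            linked.append(target)
    return linked

# Hypothetical locations -- adjust to wherever your installs actually are:
# link_models(Path.home() / ".cache/lm-studio/models",
#             Path.home() / "Library/Application Support/nomic.ai/GPT4All")
```

Because these are symlinks, deleting the original file in LM Studio also breaks the model in GPT4All, so it trades disk space for a shared point of failure.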
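The video's observation that LocalDocs only passes small snippets of each note to the model is characteristic of retrieval-augmented generation: note chunks are embedded (that is what the small SBert download is for), the question is scored against them, and only the top few chunks go into the prompt. The toy below illustrates the scoring step with simple word-overlap cosine similarity standing in for real embeddings; the note titles and text are made up for the example:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_snippets(question: str, notes: dict[str, str], k: int = 2) -> list[str]:
    """Rank note chunks against the question; only the top k would be
    spliced into the model's prompt."""
    q = Counter(question.lower().split())
    ranked = sorted(notes,
                    key=lambda t: cosine(q, Counter(notes[t].lower().split())),
                    reverse=True)
    return ranked[:k]

notes = {
    "YouTube Titles": "best practices for titling youtube videos hooks keywords",
    "Sourdough": "feed the starter daily and keep it warm",
    "Thumbnails": "thumbnail design contrast faces bold text youtube",
}
print(top_snippets("what are the best youtube video titling practices", notes))
# -> ['YouTube Titles', 'Thumbnails']
```

This also explains why descriptive note titles and opening lines matter so much for tools like this: chunks that share vocabulary with the question rank higher, so well-named notes surface first.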
Info
Channel: SystemSculpt
Views: 4,045
Keywords: Obsidian AI, GPT-4, local AI, private notes AI, GPT4All, AI chat with notes, Obsidian Vault, AI local processing, secure AI, AI for note-taking, Obsidian plugins, smart connections, text generators, AI offline mode, local document AI, AI personal assistant, productivity tools, AI integration, local AI models, AI chatbots, AI writing assistant, local AI setup, GPT-4 local use, Obsidian AI chat, private AI tools, obsidian app, obsidian ai integration, obsidian ai prompts
Id: MndgTphJdRc
Length: 9min 48sec (588 seconds)
Published: Tue Jan 09 2024