Local LLMs in Neovim: gen.nvim

Captions
Hello, I'm David, your developer on duty, and in this video we're going to have a look at fully open-source, fully local large language models and their integration into Neovim.

A few days ago, Mistral AI released a large language model. It's open source under the Apache 2.0 license, and it's small: it has only 7 billion parameters, which, depending on your machine, probably allows you to run it locally. Despite its small size, they claim it performs even better than Llama 2 13B on all benchmarks. When they released their model, they just posted a tweet with a magnet link: no text, no marketing. I like that.

Now let's see how we can run it locally. There's a nice little tool called Ollama that lets you download various LLMs and run them locally on your machine. It's that simple. It has a lot of different models, for example Llama 2, Llama 2 Uncensored, Code Llama, and also Mistral, and Mistral comes in a text as well as an instruct variant. After installing it, you can run "ollama serve", which starts a server running Ollama on your machine. I already did that. Then you can run "ollama run", provide the name of the model and an instruction, for example "write a haiku poem about cats". If you press Enter, it will download and install the Mistral instruct model if it's not yet there. I already have it; that's why I immediately get my result.

Since it's a bit tedious to use LLMs as a command-line tool, I created a Neovim extension called gen.nvim to invoke it from my text editor. I just built that today. The usage is relatively simple: you have a custom command called Gen, and then you can perform various tasks. I also ship some predefined prompts, for example to enhance the grammar of some selected text, and so on and so forth.

So let's see it in action. Let's open some sample text. Here you can see there are some mistakes; for example, "sentence" is wrongly written, and the grammar is not perfect. So let's just highlight everything and invoke my tool, and here we can enhance the grammar and spelling (this is one of those predefined example prompts). If I do this now, the text is replaced, and you can see there are no more errors. And everything runs locally on your machine: there's no communication with some external system, and it's completely free and open source.

Here's another example. Sometimes you don't want to replace text; you want more information, for example a summary of some large text. So you can highlight it all (this is some text about quantum mechanics), and now I can just say "summarize", and the LLM will summarize it. You can see it's really fast. I'm using it on my Mac, an M1 Mac, and the inference speed is really good. Sometimes it's easier for me to parse information in the form of a list, so I can highlight all the text and say "make list", and now it renders it as a list, which is a lot easier to parse. But it's still too much information, so let's make it more concise. This is a lot better. Let's undo that so we have the original text. I can also ask questions: for example, I can say "ask" and then "what is a wave function?", and based on the information I provided, it tells me what a wave function is.

Here's another example. Let's say I have some data in a semi-structured format and I want to render it as a Markdown table. I can just highlight it and say "make table", and now I have a Markdown table.

The coding capabilities of the Mistral model are not great, but acceptable. For example, I can give it this function here and say "review my code", and it correctly says that this console.log statement can be simplified to this one, which is correct, and that the function name should be changed to something more descriptive, such as "greet", which is also fine. We get different answers for each review, so let's review it again. Now it says the function can be simplified to this one, which is definitely correct, and it also gives some explanation: it uses a template literal (this one here), which allows for string concatenation, and indeed the code is more concise and easier to read. I can also change the code: for example, I can say "change code, make it an arrow function", and now it's an arrow function.

Here you can see some example prompts I ship, for example "enhance grammar and spelling", which you saw before. It has a prompt that says "modify the following text to improve grammar and spelling", and you can see I can use placeholder variables: this $text variable is later replaced with the selected text, and the option replace = true means that once the output is generated, I want to replace the selection with it. We can also easily add new prompts. For example, we can write require('gen').prompts, the name of the prompt will be "make style", and it should have a prompt in the form of "transform the following text into the style of", where I can use the $input placeholder to request a value from the user, followed by $text, and I set replace to true. So let's try it out. I open some sample text, take this sentence, say "make style", and type "pirate", and now the sentence is transformed into the style of a pirate.

I'm sure there are a lot more useful prompts we can create, so please give it a shot. I think this small 7-billion-parameter model is great for some simple tasks, and it can definitely improve some of your workflows. Please let me know what you think in the comments. I hope you enjoyed this video. Thanks for watching, and stay tuned.
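The Ollama workflow described above boils down to two commands. A minimal sketch (the exact model tag, e.g. mistral:instruct, follows Ollama's model library naming):

```shell
# Start the Ollama server in one terminal; it keeps running in the background
ollama serve

# In another terminal, run a model with a one-off instruction.
# If the model isn't downloaded yet, Ollama pulls it first.
ollama run mistral:instruct "Write a haiku poem about cats"
```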
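Inside Neovim, the interactions shown in the video all go through the Gen command on a visual selection. A sketch of such a session; the prompt names are taken from gen.nvim's default prompt table and may differ between plugin versions:

```vim
" Visually select some text, then:
:'<,'>Gen Enhance_Grammar_Spelling  " replaces the selection with corrected text
:'<,'>Gen Summarize                 " shows a summary in a result window
:'<,'>Gen Make_List                 " renders the selection as a list
:'<,'>Gen Ask                       " asks a question about the selection
:'<,'>Gen                           " free-form: type any instruction yourself
```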
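The custom "make style" prompt at the end can be written down as a Lua config fragment. This is a sketch assuming gen.nvim's prompt table API, where $text stands for the visual selection and $input asks the user for a value:

```lua
-- e.g. in init.lua, after gen.nvim is loaded
require('gen').prompts['Make_Style'] = {
  prompt = "Transform the following text into the style of $input:\n$text",
  replace = true,  -- write the model's output over the selection
}
```

Invoking :'<,'>Gen Make_Style on a selection then prompts for the style (for example "pirate") and replaces the selected text with the transformed version.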
Info
Channel: DevOnDuty
Views: 12,749
Keywords: neovim, LLM, large language models, mistral, mistral7b, plugin, local, vim, ai, artificial intelligence, copilot
Id: FIZt7MinpMY
Length: 6min 46sec (406 seconds)
Published: Sun Oct 01 2023