Hello, in this video, I'll show you how you can
use two open-source tools, Ollama and the Continue VS Code extension, to install your own GitHub
Copilot replacement for free on your computer. This is Vincent Codes Finance, a channel about
coding for finance research. If that's something that interests you, consider subscribing so
that you get notified of my future videos. In order to set up our GitHub Copilot
replacement, we'll need two things: Ollama to serve the large language models,
and the Continue VS Code extension to serve as our copilot inside VS Code. To
install Ollama, all you have to do is go to their website at ollama.com and click download. If you're on Mac and using Homebrew, you can also run `brew install ollama`; in that case, you'll need to pull your models manually from the command line.
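If you go the Homebrew route, the terminal steps look roughly like this (a rough sketch; the app you download from ollama.com starts the server for you automatically):

```bash
# Install the Ollama CLI with Homebrew (macOS)
brew install ollama

# Start the Ollama server so it can serve models locally
ollama serve
```

By the way, if you haven't seen it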
already, I have a video on using Ollama with Open WebUI to serve as a ChatGPT
replacement that runs on your own machine. In order to find which model you should
install, you can browse the models page on the Ollama website and search for "code"; that will list all the coding-focused models. The most popular one is codellama, and deepseek-coder and wizardcoder are also quite popular. To install one, open a terminal and run `ollama pull` followed by the name of your model. I already have codellama installed, so this will be really quick for me. Otherwise, you'll have to wait for the model to fully download.
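For example, this is roughly what pulling a couple of coding models and checking your local setup looks like (the model names come from the Ollama library and may change over time; the curl check assumes Ollama's default port, 11434):

```bash
# Download coding models from the Ollama library
ollama pull codellama
ollama pull deepseek-coder

# List the models already available on this machine
ollama list

# Optional: confirm the Ollama server is reachable on its default port
curl http://localhost:11434/api/tags
```

Ollama will serve the large language models on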
your computer, and then you'll also need Continue, which is the Visual Studio Code extension
that will serve as your coding assistant. Continue is not specific to Ollama; it
can work with any LLM as they state, so you can also use cloud-based
LLMs if you want. But in this video, I'll show you how to install it so that
everything runs for free on your computer. What you'll need to do is go into Visual Studio Code, open the Extensions view, and search for Continue. It should be the first result, the one with this icon here. Click install.
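If you prefer the command line, you can probably install it with the VS Code CLI as well (a sketch; it assumes the `code` command is on your PATH, and you should verify the extension ID on the marketplace listing):

```bash
# Install the Continue extension from the terminal
# (extension ID taken from the marketplace listing; double-check it there)
code --install-extension Continue.continue
```

Once you have it installed, it will show up in your left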
sidebar here. Continue recommends that you move it to the right-hand sidebar so that it stays visible and doesn't overlay anything else. And now we've got it installed. I do have a
chat here because I've used it before. But if I click new here, I'll have a new session,
so I'll start from scratch here. I see that codellama is already set up; that's because I had configured it on my computer before. If you want to edit your settings, you can either click the plus here to see all the available models, or you can click on the gear here, which will open the config file. In order to use Ollama, you have to define your model here. I'll name this one codellama because that's the model I'm using. I've got the model here, the provider is Ollama, and then you can also set a few options. Here, I've got the completion options, with num_thread set to four so that it's allowed to use four threads to generate my result.
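To give you an idea, the relevant entry in my config.json looks roughly like this (a sketch: I'm spelling the thread option numThreads, which I believe is how Continue writes Ollama's num_thread, so double-check the exact key name against Continue's configuration reference):

```json
{
  "models": [
    {
      "title": "codellama",
      "provider": "ollama",
      "model": "codellama",
      "completionOptions": {
        "numThreads": 4
      }
    }
  ]
}
```

If I wanted more models, even if they're all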
provided by Ollama, I would just add them here. If I look at my computer, I have a few models installed. So, for example, if I wanted to use deepseek-coder, I could just come here, add a second model, do the same thing, and copy the rest. And that would be it.
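In other words, the "models" array would simply grow to something like this (again just a sketch; use whatever model names you've actually pulled with Ollama):

```json
{
  "models": [
    {
      "title": "codellama",
      "provider": "ollama",
      "model": "codellama"
    },
    {
      "title": "deepseek-coder",
      "provider": "ollama",
      "model": "deepseek-coder"
    }
  ]
}
```

And now here, I do have a choice. I can use either one, and I can even try both: if one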
doesn't work for a task, I can ask the other one. How do we use Continue as a coding assistant?
Well, here, I'm using three examples that are provided by the Continue team. The first
one: here, I've got a file that has a function called "mysterious function," and I'm not quite sure what it does. What I can do is highlight that function, press command-M to add the selected code to my chat here, and then ask a question, such as "What does
this function do?" and then it will provide me the answer. If I'm not happy with codellama, I
can change the model and ask another one. So, that's how you get information about your code and start a chat session about it. The second way to use Continue is directly in
the editor. So, for example, here, I've got a large function; maybe I want to refactor it or
do something else. I would do command-shift-L, and then that would ask me what to do with
that code. Maybe I want to, say, refactor this; we'll see what it does. So now, it did change something; it used a while loop instead of a for loop, which might be better, maybe not; we'll see. In this case, I can either accept, reject, or even retry. If I accept, all I have to
do is click accept, and it will swap in the code. And finally, I can also use it for debugging. For
example, here, I've got a function; if I run it, well, it doesn't work. But what I can do is highlight it and press command-shift-R, and it will give me a description of my error and tell me how I could modify the print_sum() function so that it works. In this case, the suggestion is to only add the integers. If I'm happy with that, I can just click "apply to current file," and it will propose the edit in the editor. I can accept this change, and there we go, everything is done. Finally, we also have a few options available
in the contextual menu. So, for example, here, I could add the code to the context of the chat, or I could do "fix," "optimize," or, for example, "write a docstring for this code." And here, I can accept this change, and then this one here; I can accept this removal as well.
And here, I've got my function all documented! So that's it for today. I hope you enjoyed
this video. If you did, please like, and also consider subscribing to the channel
so that you are notified of my future videos.