Goodbye GitHub Copilot, Hello Free and Local Alternatives | Open-Source & Private Coding Assistants

Video Statistics and Information

Captions
In the ever-evolving landscape of software development, developers are constantly looking for tools that let them write code more efficiently and effectively. One tool that has revolutionized the coding process is GitHub Copilot, an AI-powered coding assistant that integrates seamlessly with your editor to provide code completion, suggestions, and refactoring help. Copilot's ability to anticipate developer needs and offer contextual suggestions has earned it widespread praise for its efficiency and impact on code quality. But while GitHub Copilot is a powerful tool, it's not the only option. Following its success, plenty of other players jumped into the market: from the big names we have Duet AI from Google and CodeWhisperer from Amazon, plus smaller players like Codeium, Blackbox AI, Tabnine, and others. Most of these, if not all, are proprietary and closed source, have a paid tier, or require internet access to reach their servers, so they cannot be used offline.

So in this video we'll look at some open-source GitHub Copilot alternatives that run completely free and private, with no subscription and no internet access required. I'm focusing on three things: first, ease of use; second, no requirement for an extra server (though you can have that option); and third, quick to run. If I'm running a Copilot alternative on my machine, I don't want the machine dying under all the processing needed to serve the open-source language model. The goal is simple: we'll use the least powerful M1 MacBook Air to serve a large language model, and use a generic editor, Visual Studio Code, with an extension that talks to that model for our coding needs. That's why we won't look at open-source options like FauxPilot or OpenCopilot here; if you want a video about those two, about using external servers like Gitpod, or about running these open-source alternatives on Windows or Linux, let me know in the comments and I'll make one. Instead, we'll look at two options: the first is Tabby and the second is Llama Coder.

Before we begin, a quick look at my MacBook specifications: a 2020 MacBook Air with the M1 chip and 8 GB of RAM.

The first step is to search for "TabbyML"; you'll find the official website, tabbyml.com, and they've done a pretty good job of documenting the entire VS Code extension, how to use it, and how to install it, so let's go step by step. Checking the docs first: Tabby is an open-source, self-hosted AI coding assistant. Let's go ahead with the installation. I'm on an Apple M1 MacBook, so I'll use the Homebrew route and simply copy the install formula. Note that if you don't have Homebrew, you can install it from the official Homebrew page; it's straightforward enough that no tutorial is needed, just copy and paste the command from the website and you'll have Homebrew installed. So I open my terminal (I'll use iTerm), run the install command, and a few moments later Tabby is installed. On to the next step: the docs say you start the server with tabby serve, and they recommend the StarCoder-1B model, an open-source model known for its coding capabilities. The recommendation is because the power of an M1/M2 is limited, and the 1B model is sufficient for individual use. So let's do that: I copy the serve command from the docs, paste it, and it starts downloading StarCoder-1B from Hugging Face. You can of course swap in any other open-source model, but here we'll use StarCoder-1B. A few moments later the download finishes and Tabby is running. It's worth noting that the model is only downloaded once: if I stop the server and rerun the same command, it starts immediately with no downloads. So our Tabby server is now running and serving the StarCoder-1B model.
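A quick side note of my own before we point the editor at the server: you can confirm something is actually listening on port 8080 with a generic TCP probe. This is nothing Tabby-specific, just a minimal sketch assuming the default port from the docs:

```python
import socket

# Generic TCP probe: succeeds only if something is listening on the
# Tabby server's default address (localhost:8080, per the docs).
try:
    with socket.create_connection(("127.0.0.1", 8080), timeout=2):
        print("Tabby server is up on port 8080")
except OSError as err:
    print(f"Nothing listening on port 8080 yet: {err}")
```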
With the server running, let's move on to the IDE. Installation is done (and if you don't want to run Tabby on an M1 MacBook, the docs list other methods), so under the IDE section you can see there are add-ons for Visual Studio Code via the Tabby VS Code extension, a Tabby plugin if you're using IntelliJ, and a Vim plugin if you're on Neovim. In the interest of time, we'll just cover Visual Studio Code.

I open Visual Studio Code, create a new Python file on my desktop called dummy.py, go to Extensions, search for Tabby, and install it. To check that I'm connected to the Tabby server, I open Settings and then Extension Settings. We don't need to touch the Tabby agent or its API endpoint directly, but under the Tabby agent settings we can verify the URL: the endpoint is localhost on port 8080, exactly where our server is running, so Tabby should be enabled. You can also adjust values here for prefix lines, suffix lines, and so on.

Back in the file, with Tabby installed and the server running, let's try writing some Python. I type "def get" and it's already trying to complete my function; that first attempt isn't correct, so let me give it some context and name it get_max_from_array. Its next suggestion still isn't quite right, but as I keep going it completes the function, prompts me with arguments when I call get_max_from_array at the bottom of the file, and even suggests the print statement for the variable I store the result in. I run the file and, as you can clearly see, it gives me the output 5.
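The typing in the video is hard to follow on screen, so here is a cleaned-up sketch of roughly what the completed demo ended up looking like. The exact names and the sample array are my reconstruction, not Tabby's verbatim output:

```python
def get_max_from_array(arr):
    """Return the largest element of a non-empty list."""
    max_value = arr[0]
    for value in arr[1:]:
        if value > max_value:
            max_value = value
    return max_value


if __name__ == "__main__":
    # Same shape as the demo: store the result in a variable, then print it.
    result = get_max_from_array([2, 5, 1, 3])
    print(result)  # prints 5
```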
All right, so that's how it typically works. Going back to the TabbyML website, the demo shows a pretty neat way of using this: you describe what you want in a comment and it generates the code for you. Let's try a few more things. A function to get the sum of three numbers: I write the comment, type def, and it completes correctly. The other way around, a function to get the multiplication of three numbers: also correct. Now let's use the comment format for something a bit more complex: "implement binary sort tree". I type def binary_sort_tree, give it an array, and when I press Tab it does attempt to generate a tree-sorting algorithm. I can clearly see it's not correct, there are some obvious issues, but the idea is that you can always clean it up. It works much like GitHub Copilot, maybe not quite as good, but everything runs locally, it's free, and it's quite fast in my opinion, given that I'm on a 2020 M1 MacBook with just 8 GB of RAM.

Back on the site there's a demo and examples you can check out. If you use Neovim, you can install the plugin via Packer or vim-plug, whichever you prefer. You can also see all the models Tabby supports; they also offer DeepSeek Coder 1.3B and 6.7B, but in my opinion StarCoder is quite good and covers most use cases. There's a list of supported programming languages as well. In my opinion Python gives the best results, because that's what most language models are trained on most heavily; Rust and Go are also worth trying if you're developing in those languages. In general, use it as a good helper, but don't rely on it too much. That's the first option.
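For reference, since the generated binary tree code in the demo had issues, this is a minimal correct binary search tree sketch of my own; it's the kind of shape I would clean the suggestion up toward, not Tabby's actual output:

```python
class Node:
    """One node of a binary search tree."""

    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


def insert(root, value):
    """Insert a value, returning the (possibly new) root."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root


def in_order(root):
    """Yield the stored values in sorted order."""
    if root is not None:
        yield from in_order(root.left)
        yield root.value
        yield from in_order(root.right)


root = None
for v in [5, 2, 8, 1, 3]:
    root = insert(root, v)
print(list(in_order(root)))  # [1, 2, 3, 5, 8]
```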
Next on the list is Ollama. Just as we used tabby serve to host a self-hosted model and the Tabby extension to autocomplete against it, we'll use Ollama to serve our language model and the Llama Coder extension to complete the code. Searching the Visual Studio Code marketplace for "Llama Coder", the extension is by ex3ndr; I'll share the link in the description. It's quite simple: for good performance you should really have something like an RTX 4090, but as you can see it also works on M1/M2/M3 Macs by running Ollama. The README lists the models you can use, marking which ones are slow on macOS and which are slow on Nvidia cards; for us, CodeLlama 7B should be perfect on an M1 MacBook.

Before installing Llama Coder we need Ollama, so first let me shut down the Tabby server, since it's taking a lot of resources, disable the Tabby extension for now, and reload VS Code when it asks. Then I search for Llama Coder, and to verify it's the right one: the project page points at this same VS Code plugin, so I install it. While Llama Coder is installing, let's install Ollama. That's pretty simple: you can click Download on the website (macOS in my case) and install it from there, but since we already have Homebrew we can also install it that way; there's a formula for it, so in the interest of copy and pasting, that's what I'll do.

While that installs, back in VS Code under Settings and Extension Settings we can select which model we want. It currently shows DeepSeek Coder, but let's change it to the one we saw is good for an M1 MacBook: CodeLlama 7B, the 4-bit quantized q4_K_M version, which needs about 5 GB of RAM. Be careful to pick the 7B, not the 13B right next to it. You can also check the inference settings: maximum tokens, how many lines to keep, and temperature. We want the temperature as low as possible so there's less creativity, though you can change it. If your endpoint isn't local, you can point the extension at wherever you're serving the model from, ideally via Ollama, though any other way of serving it works; since mine is a local instance, I'll leave it as is. The extension also expects Ollama's default port, so let me check what that is. Yes, 11434 is Ollama's default port. And Ollama has just finished installing, so let's start it with ollama serve: it's now serving on port 11434, with CodeLlama as our chosen model.
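One more sanity check of my own: once the model has been pulled (Llama Coder will offer to do that in a moment), you can exercise the served model outside the editor. This sketch assumes Ollama's standard REST endpoint on port 11434 and the model tag we just selected; adjust the tag if your installed model differs:

```python
import json
import urllib.request

# Ask the locally served model for a completion over Ollama's REST API.
# Assumes Ollama's default endpoint (localhost:11434) and that the
# codellama:7b-code-q4_K_M tag picked in the video has been pulled.
payload = json.dumps({
    "model": "codellama:7b-code-q4_K_M",
    "prompt": "def add_three_numbers(a, b, c):",
    "stream": False,
}).encode()

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```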
Back in dummy.py, we now have Llama Coder installed and Tabby disabled, so all autocompletion should go through Llama Coder. I clear out the file, and it tells me the CodeLlama q4_K_M model is not downloaded and asks if I want to download it. I do: just as tabby serve downloaded the StarCoder model earlier, as soon as I click yes, Llama Coder will download CodeLlama 7B and then use Ollama to serve it. Back in the terminal you can actually see the model downloading, so let's wait. A few moments later the download finishes and our LLM is ready to use.

Let's try the same things as before. I write the add-three-numbers prompt; it takes a while to generate anything, then produces an entire file's worth of code. Pressing Tab accepts it, but it has also repeated the add-three-numbers definition, so I delete the duplicates. Asking for the max value from an array works too, and it even suggests inputs by default, but you can clearly see it's repeating the code we wrote earlier; it picks up the context but echoes it back. I add an array, save the result to a variable, and run it: the performance of Llama Coder on a MacBook Air with the model we selected is not that good, but the code does run and gives me 4, which is indeed the maximum value. Not as handy as TabbyML, but still pretty good. With a better processor this would presumably be faster, and even though it was getting the context, it kept repeating it; if you really want to use Llama Coder with Ollama, I suspect you could make it work by fine-tuning the extension settings we saw earlier. In the Llama Coder extension settings you can also change the endpoint, and if you point it at your own server I'm pretty sure it would be a bit faster than what we're seeing here. But at least on an M1 MacBook Air, Tabby with StarCoder was faster and more effective than Llama Coder with CodeLlama 7B.

In conclusion, these open-source alternatives to GitHub Copilot are a game changer for developers. They offer a wealth of features that can significantly improve your coding experience while being free and running locally on your computer. With their ever-growing capabilities, these alternatives are poised to become the future of development tools. I highly encourage you to explore these options and see how they can elevate your coding journey. If you have any questions or need guidance, feel free to reach out. Stay tuned for more informative content like this in the future. Thank you for watching, and I'll see you in the next one.
Info
Channel: nigamelastic
Views: 16,390
Id: HBGicNhDC1g
Length: 26min 56sec (1616 seconds)
Published: Wed Dec 27 2023