"Embrace the might of tools to unleash prodigious triumphs and labour" — Lucius Maxiness. This ancient quote teaches us three lessons. First, work is as honourable as battle, something to be prized. Second, tools are the enablers of the triumph. And third, double-check every source, because this quote and its author, while sounding wise, were made up by an AI model. Within Neovim, of course. Today, we're looking at the fanciest available AI tooling that integrates with Neovim and can help us build software and think. We'll be comparing these models against each other using the same criteria: installation and interface, which I'll call UX; quality and accuracy, meaning how much extra work the code needs on my behalf; and cost — is it worth my time? To test the models, I'm going to give them the task of scraping the web. Not too complicated, but it requires some extra skills beyond raw code. It's not about writing a Fibonacci function; I think the world is tired of simple examples. Now, because the result may surprise you, I want to explain how I got there. So stick till the end, and without further ado, let's go.

ChatGPT is the holy grail of AI and code generation — specifically GPT-4 and GPT-4 Turbo, which are almost unmatched. It works great via the WebUI, so I'll start by asking it to create a Go scraper and use the output for comparison. There's a thorough explanation of the output and a nice piece of documented code that actually runs. Is it perfect? Probably not. But it works. And to be honest, my prompt isn't great either, so I'll take what I get here.

When it comes to Neovim, ChatGPT is powered by a fantastic community plugin. To get it, it needs to be installed with three simple dependencies that handle its UI and results. Once lazy.nvim has it, we need to handle authentication. The key for OpenAI can either be kept in the environment or, more preferably, fetched from a secure vault. This is an elegant feature added by the author, so I'll be using the 1Password CLI to fetch my key. This is a nice solution, but it can get quite annoying to re-approve 1Password every time Neovim starts, so keep that in mind. Lazy installs the plugin, and the access approval immediately pops up.

The first command is simply :ChatGPT, which opens a UI with the prompt and results. I can start chatting with the system, and this method is pretty straightforward. While not going through the WebUI, it's still not what I'm aiming for, and it's mainly used for general questions. Speaking of general questions, it may be wiser to ask a professional, which can be done using the many profiles bundled into this plugin via the :ChatGPTActAs command. There are too many options to count, from a tour guide through a marketing director to even a password generator. Let's use the generator to create a 20-character password with symbols and numbers to show how it works. Once it's generated, I can copy the result with Ctrl-Y and use it later on wherever I want. Another interesting profile is the Linux terminal bash command line. This serves as a prompt to test shell snippets, which can come in handy if you don't run Linux and want something quick. I can test all kinds of things like whoami, and even use sudo for commands, and it responds pretty nicely. Let's get serious and do some real code generation, starting from a simple comment with the same request made to the WebUI. This can be handled using the code completion feature of the plugin.
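By the way, the installation described above boils down to a few lines. Here's a minimal sketch of the lazy.nvim spec, assuming the jackMort/ChatGPT.nvim community plugin; the 1Password item path passed to api_key_cmd is a hypothetical placeholder for wherever you keep your key:

```lua
-- Minimal lazy.nvim spec for ChatGPT.nvim (assumed: jackMort/ChatGPT.nvim).
-- api_key_cmd shells out for the OpenAI key instead of reading an env var;
-- the 1Password vault path below is a placeholder.
{
  "jackMort/ChatGPT.nvim",
  event = "VeryLazy",
  dependencies = {
    "MunifTanjim/nui.nvim",          -- UI components
    "nvim-lua/plenary.nvim",         -- utility functions
    "nvim-telescope/telescope.nvim", -- pickers (e.g. for the Act-As profiles)
  },
  config = function()
    require("chatgpt").setup({
      api_key_cmd = "op read op://Personal/OpenAI/api-key --no-newline",
    })
  end,
}
```

Dropping this table into the plugin list is all it takes; the 1Password approval will pop on the first start, as mentioned.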
Now, the result of that first completion is not quite what I want — or anyone, actually — so I'll let it off the hook with the fact that it's using GPT-3 right now, and my prompts, again, are not top-notch here. So, with a little bit of guidance, the output gets real. Note that the generated code is presented as optional to begin with, and can either be accepted or sent back for further iterations. I can hit Enter to accept, and voila — a GPT-generated scraping script.

Next on the list is the Edit With Instructions UI, which at first glance might seem irrelevant, but it packs some powerful options and lots of key bindings that help edit code on the fly by selecting different chunks and asking the model to act on them — for example, asking it to add comments explaining the code for future maintainers.

Lastly, the cherry on top of this plugin, and hands down the one I use 99% of the time: ChatGPTRun. It adds another set of subcommands, starting from test generation through code completion like we've seen earlier, fixing bugs, correcting grammar, code optimization, and summarizing text. It's really the Swiss-army-knife section of the plugin that does it all. Let's ask it again to explain the code. Now, it isn't hard to explain what adding two numbers is doing, so let's test it on something more complicated. I'll use the output from the code generated earlier for the eBay scraper. I'll select the function — by the way, this is done with text objects; catch the video above if this is the first time you hear about these. With the range as input, I'll send it to the explain utility, and after thinking for a while, it shoots out a passage that I can scroll and read, explaining in simple language what the code does. Imagine starting to work with an existing code base and someone else's spaghetti code — this can be a lifesaver. Now, just to show off, let's use the test feature. This works beautifully, adding a few test scenarios, well-thought-out edge cases, and comments to go with them.

I must say, ChatGPT's plugin is as powerful as it gets. As you'll see, some other plugins give it a fair fight, especially when it comes to UX, but at least in code skills, it's hard to compete with. That said, if you want to use the GPT-4 model, it has a subscription fee, which, if used, is totally worth it in my opinion. But we're here to explore our options, so let's move on and just give it a rating. In terms of installation and UI, I give it a four out of five — it works perfectly, but the UI is not as polished as others you'll see. The quality of code is five out of five. And for cost, it's three out of five, because it's not a cheap service, but again, in my opinion, worth every penny.

As for its ability to write a scraping tool: it's not perfect, but it works. The problem starts when I try to run the code multiple times, not to mention constantly and automated. It starts getting error responses, which at some point even include threats from the scraped site, saying they don't allow such automation and will ban my IP if this persists. This is exactly where the sponsor of today's video comes in. Bright Data is perfect for the task. Among many other features — like templated scraping methods with ready-to-go code, and even offline datasets to run analysis on — Web Unlocker is essentially a smart proxy chain that, when activated, streams the local process through Bright Data's systems. Not only that, the IP will keep rotating, and if the site has CAPTCHAs integrated, Bright Data has got you covered with that as well.
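Before we get back to the scraper: since ChatGPTRun is the feature I use 99% of the time, it's worth binding its actions to short keymaps so a visually selected range goes straight to the model. A sketch with hypothetical <leader> bindings of my own, assuming the plugin's built-in action names:

```lua
-- Hypothetical keymaps for ChatGPTRun actions (names per ChatGPT.nvim's docs).
-- Visual mode lets you grab a function with a text object first, then send
-- just that range to the model.
local map = vim.keymap.set
map({ "n", "v" }, "<leader>ce", "<cmd>ChatGPTRun explain_code<CR>",       { desc = "Explain code" })
map({ "n", "v" }, "<leader>ct", "<cmd>ChatGPTRun add_tests<CR>",          { desc = "Generate tests" })
map({ "n", "v" }, "<leader>cf", "<cmd>ChatGPTRun fix_bugs<CR>",           { desc = "Fix bugs" })
map({ "n", "v" }, "<leader>co", "<cmd>ChatGPTRun optimize_code<CR>",      { desc = "Optimize code" })
map({ "n", "v" }, "<leader>cg", "<cmd>ChatGPTRun grammar_correction<CR>", { desc = "Correct grammar" })
```

Now, back to the scraping problem.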
Web Unlocker lets me run the automation as much as I want, with no issues. And by that, I also mean scaling it to dozens or hundreds of instances running simultaneously, collecting data for me. To use GPT for the task, I'll open the Edit With Instructions panel and ask it to use a proxy with credentials. Once accepted, I'm getting AI recommendations for the newly changed function. I'll just add the Bright Data credentials from the UI, and just like that, the code runs yet again. To check out Bright Data, follow the link in the description, where you'll also get some free credits to get started with.

The next gem is not so much an alternative as a great sidekick to really anyone. Codeium is a free-for-individuals plugin that officially supports Neovim. The plugin offers completions through CMP, so installing it requires adding it to the package manager and adding a source line to CMP. With this out of the way, there is one command only, called Codeium Auth. The process is pretty simple: out of the few options, I selected the easiest method — opening a web page with a token, copying it, and pasting it back into the prompted line. Once the key is saved, we're ready to go.

And you'll see Codeium pops up even while writing comments. This helps with ideation, language, anything it can, really — including code. Now, mind you, this is not a full-on generative model like the other plugins here, but again, more of a sidekick. So if I want numbers added and I've got a function signature, it'll help me fill it in and add comments, and anything else it understands I'm trying to do. If I create a simple struct, it'll even offer fields with their types. Trying to fill in the struct comes with suggestions for the values, which, funny enough, always have a tendency toward Hispanic names — which I love; it shows some character. Codeium is also aware of changes I'm making to my struct. So as you can see, I now add last name and age, and if I try to create that, I now have Manuel Lopez as the new person I'm creating. If my comments are specific enough, Codeium jumps on the opportunity to fill them in exactly as described, while filling in the missing data on its own. Now, this works as great with larger code bases and lots of context as it does here. Just remember, it requires a paid license if you're using it professionally.

Summing it up, Codeium gets five out of five for the installation process, which is flawless. Quality: three out of five — it's a good junior, not a full-blown generation model, but then it's not meant to be. The cost is zero, so it gets five out of five. And here's the thing: it's not my primary source of code generation, but it definitely stays with me for sure. It's small, elegant, free, and, to be honest, sometimes exactly what I need.

Okay, the big guns are out now. Copilot is GitHub's famous AI system to help you build applications. The plugin is also provided by GitHub themselves, but maintained by the one and only tpope — which means this is a plugin you can trust. Copilot is by far the most popular choice with developers, and that was also obvious in a poll I ran recently. Let's see why. As expected with tpope's plugins, the installation can't get any easier: it's one line added to lazy.nvim, and that's basically all you need. I also went ahead and added some tweaks to change the key bindings and some mappings, but that's really not necessary.
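For the curious, those tweaks look roughly like this — a sketch based on copilot.vim's documented settings, plus the nvim-cmp ghost-text flag I'll get to in a second; the <C-j> binding is just my pick:

```lua
-- Optional copilot.vim tweaks (assumed: github/copilot.vim).
-- Move acceptance off <Tab> so it doesn't clash with other completion plugins:
vim.g.copilot_no_tab_map = true
vim.keymap.set("i", "<C-j>", 'copilot#Accept("\\<CR>")', {
  expr = true,
  replace_keycodes = false, -- copilot#Accept() returns raw key codes
})

-- In the existing nvim-cmp setup, render the selected entry as inline
-- ghost text instead of only in the popup menu:
require("cmp").setup({
  experimental = { ghost_text = true },
})
```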
One important note is that Copilot uses ghost text for suggestions, which is a nice way, in my opinion, to show results without you having to interact with them beyond accepting or ignoring. For that, you'd want to have ghost text enabled within CMP. I prefer it over menu suggestions because it doesn't pop over existing code, and it makes more sense in my workflow. Once installed, run :Copilot setup. I'm already signed in, but if you aren't, you'll be sent to GitHub for a quick authorization with a six-digit approval code, and that's basically it. The plugin adds a bunch of commands, like disabling and enabling it on the go, and a few other options that we'll take a look at later, but this is mainly for management and not for daily use.

So if I go ahead and start running with it — if I just start typing a comment, it starts laying out suggestions that I can accept or ignore. If I accept, it'll keep going with further code generated on the fly, based on the context. You'll see Codeium and Copilot running alongside each other here; both use CMP for completions, one with suggestions and the other with ghost text. For a while, this seemed like a nice way to enjoy both systems. I'm not saying it isn't, but in many cases it's just overkill and can actually get quite confusing, if I'm honest. Just like with other models, the code can seem dumb to begin with if the prompt doesn't give any directions. But if the prompt gives instructions — which can be written with Copilot's help, by the way — it can build stuff on its own, like a nice guessing game, for example. I'll let it do its thing on its own, and what do you know, it works.

I want to stretch it a little and let it build its own game, so I'll start it with a simple comment and see how it does. I'm still fighting with Codeium for suggestions in the beginning, but to really set Copilot free, let it run on its own — and here's what happens when I only hit Tab for a few minutes. At the end, I've got a pretty robust structure with a game that works. Now, it did generate a couple of logical methods that aren't being called from anywhere and do have a few bugs. The game needs them, but Copilot messed them up at the end. I'm actually happy it did, because that's a good indication of the state of AI at the moment, in my opinion. It can do a lot of things and do the heavy lifting for you, but you'll have to do the cleaning up after it — and if you ask me, that's so worth it.

So, to sum it up, the Copilot plugin for Neovim is, in my eyes, exactly what I had in my dreams when I imagined AI code completion. This is it. So installation and UX: 5 out of 5, no doubt. In terms of quality, it's trained on GitHub's open source code — the largest open source dataset in the world — so that's both good and bad. I must say it's 4 out of 5, because it's not always perfect. In terms of cost, yeah, it costs a little bit, just like ChatGPT: 4 out of 5 overall for Copilot.

If you're new to this channel, I've recently covered Ollama and its models, and specifically a community plugin named gen.nvim, which is an excellent interface for Neovim. Since local models deserve a fair chance in the AI battles, we cover Ollama again with a small twist: with gen.nvim, Ollama had to run in the background, while with nvim-llama, the backend part is handled for the user using Docker. This is a very elegant solution that takes away the friction of managing Ollama, and the model can be defined via the plugin's config.
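In practice, that config is tiny. A sketch, assuming the jpmcb/nvim-llama plugin; the `model` option name is my reading of the plugin's README, so verify it against the current docs, and the model tag itself is just an example:

```lua
-- Sketch of a lazy.nvim spec for nvim-llama (assumed: jpmcb/nvim-llama).
-- The plugin spins up the Ollama backend in Docker itself, so Docker must
-- be installed and running. The `model` option name is an assumption.
{
  "jpmcb/nvim-llama",
  config = function()
    require("nvim-llama").setup({
      model = "llama2", -- any Ollama model tag, e.g. a DeepSeek Coder size
    })
  end,
}
```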
Any model from Hugging Face can be integrated here — so not only Llama and Mistral, but also less famous ones like the uncensored models, and flavors of niche models like the excellent DeepSeek Coder, which comes in many sizes based on the dataset it was trained on. Back to the plugin: after a quick installation and firing its command, a new pane opens with an interactive shell that talks to the model. Now, don't get me wrong, this works, and it works beautifully. A few issues I do have, however: one, terminal mode is not the best when it comes to yanking and manipulating text — I would appreciate a few key bindings like the ones in ChatGPT, for example. Two — and this one is not the plugin's fault — local models tend to be slightly slower than hosted services. I'm running on a Mac M1, and a more powerful machine may perform differently, especially outside of Docker, so your mileage may vary. But you can see it kind of lags with its answers, something that was nowhere near the experience with GPT, Codeium, or Copilot.

Ollama gets two out of five for overall UX — I wanted to grade it three, but I changed it to two, mainly because of the slowness. It gets a four out of five for quality, which, to be honest, is hard to give, because there are so many models and it can range between one and five. And obviously five out of five for cost, because you can run it for free on your machine, especially with Docker, which is pretty easy. No trouble there.

Summing the points, as expected, Copilot and GPT are going head-to-head even when it comes to score. Codeium takes the lead with its pricing advantage, but numbers don't tell the entire story. These are the common options when it comes to AI coding support. There are additional commercial solutions in the form of GitHub bots, and also legally protected models that train on the organization's own IP data. There's also a huge list of open source models to check out and play with — just browse Hugging Face, and you're guaranteed lots of hours of fun and exploration.

As for the tools tested here, though: personally, I love Codeium for its simplicity, easy completions, and help with both text and code. It's a free service from a commercial company, which means no local resources are ever eaten up for the sake of using AI, and it also runs fairly quickly — I haven't seen a noticeable difference from Copilot, for example. Now, speaking of Copilot, I think it goes head-to-head in its solutions with ChatGPT, especially GPT-4. While GPT is slightly better with robust solutions, Copilot has the edge when it comes to most cases — in my opinion, because it's trained on GitHub's code, and if you let it use that for your solutions, most of your daily easy tasks are pretty much covered. So I guess it comes down to the use case, or more precisely, the majority of use cases you're going to run into. If you're looking for a solution to write most of the code for you, GPT is definitely the way to go — however, you need to expect a lot of reviewing and problem solving. Whereas if you use it more as a sidekick, Copilot is better. That's also how it's branded — Copilot — but I guess that's just a way to make you feel it ain't going to take your job anytime soon. If you want to keep costs down, local models are definitely a good option; for my taste, they're too slow, though. One solution is to run the model on dedicated hardware.
However, if you do, and the reason is not experimenting with models, it may be wiser to just pull the trigger on a full-blown service and pay for that. If I had to pick, given the existing tooling, I think Copilot is the winner. When I combine features, quality, maintenance of the tooling, and everything around it, that's the winner for me. I also keep Codeium pretty close. To be honest, Copilot is disabled and Codeium is on most of the time, unless I feel I need some extra juice, and then I pop Copilot back on.

That's pretty much it. You've got your AI all set up, and the tools you like are in place. But as you've noticed, working with multiple tools, configuring your editor, and quickly switching between environments takes energy. If there's no structure or organization, you quickly get lost, and the friction of trying to get stuff working will be discouraging and annoying. To improve the process and truly master the terminal, I believe that tmux is unmatched and unrivaled in both managing the environment and providing an enjoyable experience. To master tmux and take it from absolute scratch to beast mode in just a few minutes, here's your next video to watch, right here. Thank you for watching.