AI Knowing My Entire Codebase Resulted in a 20x Productivity Increase

Video Statistics and Information

Captions
This is huge: what if AI knows your entire code base? What if you could chat with your entire code base? As a result you're going to code 20 times faster, and you can code in natural language. That's what we get with PraisonAI Code.

It's easy to get started: just navigate to the folder of your choice, type "praisonai code" and press Enter. This automatically opens a user interface which shows the list of files included in the context along with the token count. Based on this context I can ask a question, "What are my project requirements?", and press Enter. Now we get an answer based on the entire code base. You can see here: "Based on the provided code snippets, your project seems to be a Python package named graphrag", followed by the project requirements derived from the code: core functionality, technical requirements and additional considerations.

I can even ask questions scoped to specific files in this list. Let's take the models folder and ask "How are models used in this?". Here you can see a holistic response: the code demonstrates how models are used in the GraphRAG project for various tasks such as fine-tuning prompts, indexing and querying, and it lists the models being used: OpenAI models including GPT-4 and text-embedding-3-small, plus OpenAIChatLLM, OpenAICompletionLLM and OpenAIEmbeddingsLLM. I can also be specific and say "Based on this file, improve the code"; press Enter, and it automatically gives me an improved version of the code. This increases our productivity; based on my experience, I'm coding 20 times faster than I used to. That's exactly what we're going to see today. Let's get started.

Hi everyone, I'm really excited to show you PraisonAI Code. Without PraisonAI Code you need to manually copy context from your files, paste it into a chatbot such as ChatGPT or Claude, and then ask questions. The questions we ask are all over the place, there are no real-time context updates and no context customization, the model doesn't know the entire code base, and it is time consuming. With PraisonAI Code the AI knows the entire code base, you can chat with the whole code base, you can code 20 times faster, you get real-time context updates, and you can use natural language to code. By the end of this video you will know how to set up PraisonAI Code, how to feed it the entire code base, how to integrate it with Ollama, Groq and Gemini, and finally how to customize it. Before that: I regularly create videos about artificial intelligence on my YouTube channel, so do subscribe and click the bell icon to stay tuned, and make sure you click the like button so this video can be helpful for many others like you.

So, this is one of my main projects, which I have been working on for some time; it improves my coding speed, so I thought of open-sourcing it. The main issue came when I had to paste whole files into ChatGPT or Claude and ask how I can improve the code. The model gives me a response with improved code, but I then need to implement that code in my actual file, and by that time the context I provided to ChatGPT is already outdated. Even if I make changes to the code, ChatGPT doesn't know; it doesn't know whether I copied the code correctly and pasted it in the correct place, so all its suggestions are assumptions, not definite. This resulted in me spending a lot of time fixing the code compared to before.

So I came to a situation where I needed to feed the entire code base to the large language model to get better responses. Considering we now have Google Gemini with a 2-million-token context length, ChatGPT with 128,000 tokens, and many large language models available through Ollama with large contexts, we can use that to our advantage. This simplifies copying code across, and it also gives real-time context updates.

First we are going to see how you can set this up. As a quick example, let's take the GraphRAG repo. I'm just copying the repo URL; this is a big code base. On my machine I run git clone with the full repo URL and press Enter, then navigate into the graphrag folder. To install, run pip install for PraisonAI Code and press Enter; this installs the main PraisonAI Code package. I'm going to show you how to integrate ChatGPT, Google Gemini, Groq and Ollama: first export the OpenAI API key and press Enter, then export the Gemini API key and press Enter, then export the Groq API key and press Enter. You don't need to export every single key if you're not planning to use one provider or another; if you're planning to use only Groq, export only the Groq key. The whole setup is sketched below.
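Here is a minimal shell sketch of the setup steps described above, under some assumptions: the GraphRAG repo URL, the pip package extra and the environment variable names are not spelled out in the video, so treat them as placeholders and check the PraisonAI documentation.

    # Clone the example code base (repo URL assumed to be Microsoft's GraphRAG)
    git clone https://github.com/microsoft/graphrag
    cd graphrag

    # Install PraisonAI Code (package name and "code" extra assumed from the video)
    pip install "praisonai[code]"

    # Export only the keys for the providers you plan to use
    # (variable names assumed; they follow common conventions)
    export OPENAI_API_KEY=your-openai-key
    export GEMINI_API_KEY=your-gemini-key
    export GROQ_API_KEY=your-groq-key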
Next, just type "praisonai code" and press Enter in the same folder where you cloned the code base. This automatically opens the user interface, where you can see the token count and the list of files that will be used as context. But I don't need all of these folders, because I'm not going to work through all of them, and I might have other requirements, such as keeping the total number of tokens low so that it costs less. So if you want to remove a folder such as docsite, come back to the terminal and create a file called settings.yaml. In this file we set the list of files to ignore: anything which starts with a dot, plus a few more entries I have added. I will put this configuration in the description below so you can just copy and paste it. After this, if you want to ignore docsite, you just add a hyphen and type docsite; that's it. Now when you come back to the user interface, you can see the docsite folder is no longer there. Similarly, I'm going to remove a few more folders, such as examples and examples_notebooks, because these are not required for the context, so I added those to the ignore list as well. In the same way you can exclude any files that are not required. Now you can see the token count is 41,000, which is good. A sketch of the ignore file follows.
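A sketch of the ignore configuration, written as a shell heredoc so it can be pasted into the terminal in one go. The file name settings.yaml, the ignore_files key and the exact patterns are my reconstruction from the video, not a confirmed schema; check the PraisonAI documentation for the exact format.

    # Create the ignore configuration in the project root
    # (file name, key and patterns reconstructed from the video)
    cat > settings.yaml <<'EOF'
    ignore_files:
      - ".*"                    # anything starting with a dot
      - "docsite"
      - "examples"
      - "examples_notebooks"
    EOF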
Now I'm going to set the large language model by clicking Settings in the interface, choosing GPT-3.5 Turbo, and confirming. Then I ask "Give me an overview of the code". In the terminal I can see the maximum context length is 16,000 tokens, but our current token count is 41,000, so considering this code base is large we can't use that ChatGPT model. Let's use Gemini instead: go to Settings again, type gemini/gemini-1.5-flash, and click Confirm. Now I ask again, "Give me an overview of the code", and you can see it automatically responding. Here is the response: it gives me the core functionality, the GraphRAG model, indexing, and search and retrieval. Next I ask "Check if there are any issues in this file" and press Enter. It checks, and you can see the response: "I've reviewed the code", along with some suggestions: inconsistent handling of Azure API parameters, a potential typo here, and other observations and recommendations. You can keep on improving the code from here. This is super exciting; it is going to increase your coding productivity, and you can use natural language to modify these files.

Next we'll see how you can use Groq with this. We have already exported the Groq API key in the terminal, so it's just a matter of changing the model name in Settings to the Groq Mixtral model. I'm choosing Mixtral because it has a larger context length than Llama 3. Now I ask "Give me an overview of the content" as a quick demo, and in the terminal I see "Request Entity Too Large": our 41,000-token context is still too large for Mixtral. If you look at the Groq documentation, the maximum number of tokens allowed for Mixtral is 32,000, so you would need to exclude a few more files and folders to bring the context below 32,000 to get this working.

Finally, we're going to see how you can integrate Ollama. Make sure you download Ollama from ollama.com, run "ollama pull llama3" and press Enter. Next, in Settings, same as before, set the model to ollama/llama3 and click Confirm. The context length for Llama 3 is far lower than the others, so you might need to include only a small folder. There are variants, such as a Llama 3 8B model with an extended one-million-token context, but the quality of the responses is going to be lower; the better the model, the better the quality of the responses and the error fixing. The model identifiers and the Ollama commands are sketched after this transcript.

I'm really excited about this. Please let me know in the comments below what you think about it and what extra features you would like to see. I'm going to create more videos similar to this, so stay tuned. I hope you liked this video; do like, share and subscribe, and thanks for watching.
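For reference, here are the model strings used at each step, collected in one place. PraisonAI appears to accept LiteLLM-style provider/model identifiers; the exact Groq Mixtral id below is my assumption, while the others are taken from the video.

    # Model strings typed into the PraisonAI Code settings screen
    # (LiteLLM-style identifiers assumed; verify against your provider's docs)
    #   gpt-3.5-turbo              OpenAI, 16k context: too small for this 41k-token repo
    #   gemini/gemini-1.5-flash    Google Gemini: handled the full context
    #   groq/mixtral-8x7b-32768    Groq Mixtral, 32k max tokens (id assumed)
    #   ollama/llama3              local Llama 3 via Ollama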
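And a minimal sketch of the local Ollama setup described above; the model tag llama3 is taken from the video, and the rest is standard Ollama usage.

    # Install Ollama from https://ollama.com, then pull the model locally
    ollama pull llama3

    # Start PraisonAI Code in the project folder and select ollama/llama3 in Settings
    praisonai code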
Info
Channel: Mervin Praison
Views: 10,282
Id: _5jQayO-MQY
Length: 9min 33sec (573 seconds)
Published: Sat Jul 13 2024