Building a Generative AI-Powered App with Gorilla LLM: The API Store for LLMs

Captions
Hello everyone, welcome to the AI Anytime channel. In today's video we are going to develop an application that utilizes Gorilla LLM. Nowadays in the generative AI community, developers and researchers are talking about Gorilla LLM, an "API store" for large language models, and the community is saying that Gorilla writes the right, appropriate API calls. There have even been comparisons with commercial models like GPT-X, where X can be anything: 3.5, 4, whatever. So we are going to see how we can build something with Gorilla: we'll use a couple of the models available within the Gorilla ecosystem, develop an application, and perform some tasks within a Python subprocess. Without further delay, let's start developing this application.

You can see I am currently on their GitHub repository, by Shishir Patil. If I'm not wrong, the entire Gorilla infrastructure is hosted at Berkeley; they have Berkeley's backing, and you can see the .edu domain as well. From the documentation you can go to gorilla.cs.berkeley.edu; it's pretty straightforward. They also have a neat and clean Colab notebook ("Try Gorilla in 60 seconds"), from which I have taken a few code snippets, and a CLI if you prefer a command-line tool. The repository says "Gorilla enables LLMs to use tools by invoking APIs", and that looks promising to me, to be honest, because in real-world applications it helps you make the appropriate API calls. Most of the time when you use LLMs, you cannot use the response or output exactly as-is, because by default most LLMs have no validation in place.

Before development, look at how it works. Gorilla has curated a dataset of more than 1,000 API calls covering Torch Hub, Hugging Face Hub, and TensorFlow Hub, all the major hubs so far, and I think they are extending it to other hubs as well. They then use self-instruct with in-context examples to generate instruction-API pairs, so the model learns the instruction behind each API call, and those pairs are used to train Gorilla 7B. There are other 7B base models within the ecosystem too, like the Falcon and MPT variants, and Gorilla responds both zero-shot and with a retriever. That's how it works, but I'll explain more while writing the code for the application we're going to develop.

Now let's develop the application. I'm going to my home directory, then Desktop, then my projects folder, and inside gorilla-demo. Let me open a terminal, activate my langchain environment, and open the folder in VS Code with `code .`. The only hard requirement for now is openai, but there are other packages you should install, because I'm also going to execute the task within a Python subprocess, and the generated code needs its own dependencies. For example, suppose you have an Excel sheet or a CSV file of English sentences and you want to translate all of them together with Gorilla's help; if Gorilla responds with code that uses torch, you need torch in that environment to execute the generated code in a subprocess. So we also need transformers and torch, and let's install pandas as well, since we'll need data frames; numpy comes along automatically when you install torch and transformers.

Let's create an app.py and start writing the code. The first thing I need is `import openai`. I'm going to build a Streamlit application, so `import streamlit as st`, and I need `import subprocess` to run the generated code from a Python file. Next, `openai.api_key`: just for your information, you don't need an OpenAI API key here. As their GitHub shows, you can set it to "EMPTY", because the inference is served from the hosted infrastructure at Berkeley (whoever the exact sponsor for these API calls is). Then `openai.api_base`, which you can find in their GitHub repository (I can also put it in the description): http://zanino.millennium.berkeley.edu:8000/v1. I'm not sure what that hostname is, probably somebody's name or a machine's name; the port is 8000 and v1 is the API version.

Now let's query the Gorilla server. Let's write a function called get_gorilla_response. It takes two things. The first is the prompt we take from the end user; this prompt is nothing but the task you are trying to execute, for example "I want to translate English to Chinese" or "I want to download 30 days of Microsoft stock data using Yahoo Finance". You give Gorilla the instruction, it generates the code for you, and we let that code run within a subprocess: basically an agent kind of thing. The second is the model; we're going to offer a couple of models in a dropdown. Inside a try/except, I store the chat completion in a variable: `completion = openai.ChatCompletion.create(...)` with two inference parameters, the model and the messages. messages takes a list containing a dictionary, where the role is "user" (remember, the training data was instruction paired) and the content is the prompt. I print the whole completion first, because it returns a lot of other metadata as well and we'll filter out the code later, and then return `completion.choices[0].message.content`. In the except block, let's just print a simple "Something went wrong". That's our function done.

Now let's write at least some Streamlit code to get started. Define main() as we always do, add the `if __name__ == "__main__": main()` guard, and inside main the first thing, as always, is a title: st.title("Gorilla LLM API Call Demo"). You can give it any name you want; let's also grab a gorilla emoji from GitHub and paste it into the title.
I'll remove the stray pasted character; we're okay with the title. Before the input, I forgot one thing: let's use the entire width, so st.set_page_config(layout="wide"). I love Streamlit; both Streamlit and Gradio are pretty much self-explanatory. set_page_config sets the configuration of the page, and layout="wide" gives me a wide layout instead of the default centered container. Now the input: input_prompt = st.text_area("Enter your prompt below"), the task that you want to perform. Simple.

Next, an option, because we're going to use a couple of models here; I want to show you how you can utilize other models inside the Gorilla ecosystem for different tasks. So option = st.selectbox("Select a model option from the list", ...), and let me look up the model names. By the way, it's fascinating how generative AI draws on nature: Falcon, LLaMA, Gorilla, all different animal species. The first model is gorilla-7b-hf-v1 (they also have a v0 you can look at), and my next model is gorilla-mpt-7b-hf-v0. Now that I know these two model names, I can build my entire application on top of them.

Let's have a button: if st.button("Gorilla Magic"), I love that name, and within it a validation check: if len(input_prompt) > 0, so there's actually a value inside. (You could also check for None, but that's more recommended for files.) If there is a value, let's divide the layout into two columns with equal weight: col1, col2 = st.columns([1, 1]); it's better to pass the weights inside a list. Everything for the first model goes in column one: with col1, if option == "gorilla-7b-hf-v1", then result = get_gorilla_response(prompt=input_prompt, model=option). It needs those two input parameters to complete: the prompt is the input_prompt variable from the text area above, and the model is option, since option contains the selected model. Then I simply st.write the result.
Now let's run what we have so far and see whether we get any errors; I'm excited to see this. `streamlit run app.py`, and you can see we didn't get any error: there's our Gorilla LLM API Call Demo. I always say that learning a technology just for the sake of theoretical knowledge will not take you anywhere, to be honest. I work in the industry; I see what clients require, what kinds of problems and challenges they face, and you have to learn how to develop something on top of your learning. It depends what you want to do: if you are more inclined towards development, I will ask you to write code, and if you are more of a techno-functional person, a consulting or architect kind of role, then even without writing a lot of code that understanding will help you get there. But it's very important now in generative AI to learn how to develop, and it's easy to develop, as long as you have a good enough understanding of the technology you're working with.

So: "Enter your prompt below", and we have the two models, gorilla-7b-hf-v1 and gorilla-mpt-7b-hf-v0. Let me write something: "I want to translate a sentence from Chinese to English." That's my task. Here I'm just writing a single sentence, but you could also pass an input file; you'd have to keep the file in your folder if you want to execute against it in the subprocess, and of course you'd have to extend this app further.

Let's hit Gorilla Magic and see what it does. It says None, and it keeps saying None, because I made a mistake in the endpoint hostname: I typed "jenino", but it's "zanino". Sorry about that; it's probably somebody's name, or a machine's name. Let's click Gorilla Magic again. It might take up to 30 seconds or so to generate the response, so I may pause the video, but I don't think it will take that long; you could also print the elapsed time here if you want.

And we got our response. Right now we're printing the raw response with st.write, which is why we see everything; we'll clean it up and put the code in st.code shortly. You can see: natural language processing, text-to-text generation. That is the task, the API call is to the model facebook/m2m100_418M, the provider is Hugging Face Transformers, and there are instructions on how to run it: load the pretrained M2M100 model and its tokenizer from Hugging Face (through a pipeline or otherwise), set the source language, and so on, and then the code itself. Let's make it a little more beautiful and filter out the extra content we don't need, but at least we've seen that it works.

Now let's go back to the code and write the same thing for the other model. I'll copy the block, change the if to an elif, and replace the model name: it was gorilla-7b-hf-v1, and this one becomes gorilla-mpt-7b-hf-v0. That's it; we're good. We've written the code for column one, so now column two. With col2, I'll mirror the same if/elif structure for the two options and just put `pass` in each branch for now; we'll write the logic later. In column two we're going to extract only the code and execute it within a subprocess, right there on the Streamlit page.

If you look at the response we got, it's raw text; you cannot run that text as-is. We have to filter it out and create a file. So after get_gorilla_response, let's write one more function, a self-explanatory one: extract_code(output). It extracts the code from the output, and it takes only the raw output from Gorilla as its input parameter. What I do here is split the output on the triple-backtick characters that fence the code block (you can see those characters around the code in the response), take the code part, and return it. Simple: this gives me just the code.

Now in column two, for option gorilla-7b-hf-v1, I use that function: code_result = extract_code(result), passing in the result we already have, so we keep only the code part. Let's add a subheader to make it clear from the UX standpoint, st.subheader("Generated Code"), and below it st.code(code_result, language="python"). Streamlit provides a code interface where you can present all of your code nicely, and you can define which language the code is in; for me it's Python. Let me quickly run it again and see the change. It will take a little time again, and I don't want to pause the video, so I request that you spend this time on understanding the concepts and stay with me. And there it is: Generated Code. It looks beautiful, doesn't it?
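The extraction helper just described can be sketched as below. It assumes the Gorilla reply fences its code with triple backticks, which is what the demo's responses showed; the fallback branch is my own addition for replies without a fence:

```python
def extract_code(output):
    """Return only the code portion of Gorilla's raw reply."""
    if "```" in output:
        code = output.split("```")[1]
        # Drop an optional language tag such as "python" right after the fence.
        if code.startswith("python"):
            code = code[len("python"):]
        return code.strip()
    return output.strip()  # no fence found: return the reply as-is
```

Usage in column two is then simply `st.code(extract_code(result), language="python")`.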
You can see: from transformers import T5Tokenizer, T5ForConditionalGeneration, for this particular task, which is a translation task from Chinese to English. It defines load_model, where the tokenizer comes from T5Tokenizer.from_pretrained with t5-base by Google, a very underrated family of models (T5, FLAN-T5, and so on), returns the tokenizer and model, then processes the input data and decodes wherever required. Fantastic: we are generating clean code with gorilla-7b-hf-v1.

Now let's write a function that helps us run the generated code. We have generated the code; we have to run it. Let's call it run_generated_code(file_path); I'll pass it the file path of the saved code (we'll handle the saving itself in a moment). The command is nothing but python plus that file path. Let's use try/except so we can handle the error if it doesn't execute. The result is subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True). Then let's check whether the subprocess ran successfully: if result.returncode == 0, print a success message with st.success("Generated code executed successfully"), and if there is any output, display it with st.code(result.stdout, language="python"). Otherwise, in the else branch, show st.error("Generated code failed") and display the error as well; for the error output let's set language="bash", since it will be a command-line or terminal error, so bash is fine there. Finally, add an exception handler: except Exception as e, st.error("Something went wrong") along with e itself, so we can see what we're getting.

So we have written the function to run the generated code within a subprocess. Just imagine any kind of task: you have to crawl or scrape some data, you can write a custom prompt with that logic and simply ask Gorilla for the task. It will generate the code (of course you should validate it), you pass the input file, and it performs the task for you automatically.
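The runner can be sketched like this. One deliberate variation on the video's version: I split the subprocess call from the Streamlit display so the function is easy to test, and use sys.executable rather than a bare "python" string:

```python
import subprocess
import sys

def run_generated_code(file_path):
    """Run a saved .py file in a subprocess; return (returncode, stdout, stderr)."""
    command = [sys.executable, file_path]  # sys.executable avoids PATH surprises
    result = subprocess.run(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    return result.returncode, result.stdout, result.stderr
```

In the app, the Streamlit layer then branches on the return code: st.success plus st.code(stdout, language="python") on success, st.error plus st.code(stderr, language="bash") on failure, all wrapped in the try/except described above.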
It's that powerful. Now, one more thing: after st.code(code_result, language="python"), we have to save that code to a file in order to run it. So let's define a file_path; I'll name it after the model we're using, something like "generated_code_gorilla_7b_hf_v1.py". It will be saved in the current folder and run within a subprocess. To actually write it: with open(file_path, "w") as file, then file.write(code_result). That saves the file in the current directory, and we're done with the first model, gorilla-7b-hf-v1.

Now the code for the second model. MPT doesn't give you the code the same way; it gives you the code in one line, so we have to parse the output a bit. I'll reuse code_result as-is, copy it over, and add a couple of extra lines to parse the output: lines = code_result.split("\n") on the line break, then for i in range(len(lines) - 1): st.code(lines[i], language="python"). I've already performed this task, so I know why I'm writing it this way, but I'll show the output once we get it. By the way, I'm using Tabnine as a coding assistant; you can use it too. I don't want to pay ten dollars for GitHub Copilot, although it's worth paying for.

We also have to set the file path for this branch, so I'll copy the earlier one and change 7B to MPT and v1 to v0. Fantastic. And the saving needs some changes here as well; I'll replace it from my notes so we write it out with proper line breaks in the Python file: with open(file_path, "w") as file, then for i in range(len(lines) - 1), the same range we used above, file.write(lines[i].strip().replace("\\n", "\n") + "\n"). Note the quoting: you have to write the literal backslash-n as "\\n" in double quotes and replace it with a real line break "\n", then append a newline for each line. Now this makes sense, and we're done.

Finally, let's actually run the generated code. I'll come out of the if/elif options, since this applies to both models, and call run_generated_code(file_path). You can see we're utilizing that function and just passing it the file path. Fantastic, we are done with our functions.
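The MPT-specific save step can be sketched as follows. The assumption, taken from the demo's output, is that gorilla-mpt-7b-hf-v0 returns its code as one physical line in which the line breaks are literal backslash-n sequences; the function name is mine, the video writes this inline:

```python
def save_mpt_code(code_result, file_path):
    """Write MPT-style single-line code out as a normal multi-line .py file."""
    lines = code_result.split("\n")
    with open(file_path, "w") as file:
        for i in range(len(lines) - 1):  # the demo drops the trailing element
            # Turn literal "\n" escape sequences back into real line breaks.
            file.write(lines[i].strip().replace("\\n", "\n") + "\n")
```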
we'll go and run it couple of times and then I will you know uh end the video there right it might be a little big but it's okay now we are first importing all the libraries we have to hit the end point that gorilla has deployed somewhere you know so we are using Berkeley dot edu 8000 V1 and then we are writing a function get Gorilla response which is available on their GitHub repository I made couple of changes here whether input parameters and then we have written a code function code to extract the code out of it you can see it over here the this was the complete output but we need the generated code and that's what we are doing here okay with this particular function now we have something called run generated code because we want to save this generated code in a python file and run that automatically within a sub process to perform your task now you can deploy this anywhere you can write a chrome job a scheduler or something you know depending on what kind of task you're performing it can work as an agent right you can do that now here we are doing different generated code you know we are putting a command some sub process some conditionals to check if that get executed successfully and here we start the streamlined thingy okay the title the input area text area select box couple of models you can add more models go through their GitHub repository and see they are also working with llama too that's what I heard okay so you can also integrate lamba 2 here and then I we have a button and couple of columns in column one we are getting the complete output in column two we are only getting the code saving the file and using the Run generated code to execute that within a sub process this is what we have done so far guys right so now let's do one thing now let's run this and see if we get any error okay uh so the app is already running so let's come here and just do a refresh and now we can use it to perform multiple tasks now here is the catch when you want to run 
One catch: when you run this within a subprocess, make sure you have all the required dependencies and libraries in that environment, because the generated code will mostly be based on Transformers, TensorFlow, and so on, and those have to be installed for the subprocess to run it. Let's try something. Write an article — for example: "Write an article on explainable AI, keep a professional tone, and print the response" — and see if it can do it. Let's try the MPT-7B model first; I'll click "Gorilla Magic". Here we're using Gorilla as a content-creation tool. And you can see it says "invalid syntax" — something went wrong. "Write an article on explainable AI" — I'm not sure why it failed; it was a quotation-mark error, so let's remove that part for now (we could also handle that from the Streamlit side if needed). Now you can see it's using the GPT-2 model from Hugging Face through the pipeline API, and it says "generated code executed successfully" — it generated the code and executed it. Let me show you what I mean: the MPT-7B file has been saved here, with from transformers import pipeline and a generator = pipeline(...) call. We asked it to print the response and it ignored that, so it doesn't print the article — but that's okay; at least it shows where it saved and executed the file. You can ignore the exception about bitsandbytes. Now, in that saved file, if you add a print(article) at the end, executing it will also print the article. You can see it's using the GPT-2 text-generation model: from transformers import pipeline, generator = pipeline("text-generation", model="gpt2"), and then the article generation. Let me minimize this so you can see the code we got from MPT-7B — I missed putting it back into a single code snippet, it's on a separate line, but you can see all of the code, and it's working. Now let's use it for a different purpose: I want to generate embeddings for a simple Python function. For this, let's select the 7B HF v1 model, and you can see it's running. Earlier we generated content and it used GPT-2; this time we're asking it to create a code embedding for some sample code. And you can see it's using CodeBERT — I loved that choice. CodeBERT, by Microsoft, is very powerful for coding-related tasks: code translation, code similarity, code embeddings, and so on. The generated script loads codebert-base, processes the data, defines a sample code snippet, and then creates the embeddings from it. This will take some time, because it's a bigger model, and the script also has to print the embedding.
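The CodeBERT call that Gorilla generates follows the standard Transformers pattern. Here is a minimal sketch of that pattern (my own, not Gorilla's exact output — it assumes transformers and torch are installed, and microsoft/codebert-base downloads on first use):

```python
def embed_code(snippet: str):
    """Return a single embedding vector for a code snippet using CodeBERT."""
    # Lazy imports: transformers/torch are heavy and only needed at call time
    from transformers import AutoModel, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModel.from_pretrained("microsoft/codebert-base")
    inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the token vectors into one fixed-size (768-dim) embedding
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)
```

Calling it on a hello-world function yields a 768-dimensional vector, which is what shows up as the printed embedding in the subprocess output.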
So you're executing this, and you can see it says it's running within the subprocess — it might throw an error if you don't have the dependencies, and the first run might take a minute or two because codebert-base has to be downloaded into your cache; that's why it's taking time. I just gave a natural-language query — "I want to generate embeddings for a simple Python code" — and this is my simple Python code, a hello-world function. I'm expecting CodeBERT to give me the embedding for this code; later I could also use a decoder model on it. So you can perform different types of tasks, and of course you can extend this application to a lot of other use cases — you just have to write custom functions in your Python file and add the logic for how you're extending it. You can also deploy it on some infrastructure, if you have one, and make it an Auto-GPT-like thing where you define tasks and put a cron job or scheduler on it. Let's see what it does — I'm very curious; it will probably throw some error, but let's see. Meanwhile, let me show you what I mean about CodeBERT: if you go to codebert-base on Hugging Face, this is the model we're using, by Microsoft, and you can try your own sentence right on the model page — let's try a zip function or something with the feature-extraction widget. The model is loading, so let's come back over here; it will take a little time, so let it complete.
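That widget on the model page is just the feature-extraction pipeline. Running the same thing locally would look roughly like this (a sketch, again assuming transformers and torch are installed):

```python
def extract_features(text: str):
    """Token-level features for a snippet, like the Hugging Face widget shows."""
    from transformers import pipeline  # lazy import; heavy dependency

    extractor = pipeline("feature-extraction", model="microsoft/codebert-base")
    # Returns a nested list: [batch][token][768 features per token]
    return extractor(text)
```

This is the per-token view; the mean-pooled version above collapses it into one vector per snippet.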
And there it is — I got my features, so it's able to extract the features from that function. It performs on Hugging Face; now let's see if it performs here too. Fantastic — we got the embeddings here as well; you can see the vectors. So if you have a lot of proprietary code bases and want to create embeddings from them, you can store those in a vector database or vector store and then query them. And that's the app we've built in this video: a Gorilla LLM API-call app where you enter a prompt for different types of tasks, using a couple of models like the HF v1 and MPT-7B ones (they're also working with Llama 2, so you should be able to use that soon). In the left-hand column we have the raw output — the exact response — and next to it the filtered output, the generated code (maybe I should rename that label to "generated code"). You can copy the code, and it also gets stored in a separate file — the app creates the file and stores it accordingly, and you can run it yourself, but we also have the subprocess that runs it for you. You can also add logic to that subprocess so that whatever embeddings you create get saved to a database or a .txt file — it's up to you; use your imagination and creativity to extend it further. That's what I wanted to do in this video: talk about Gorilla. To be honest it's not fully there yet, but it looks really promising, and we can do a hell of a lot of things with it. It only took me about 40 minutes to build this
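The "save to a .txt file" idea is only a few lines — a toy stand-in for a real vector store (the file name and format here are my own choices):

```python
def save_embedding(vector, path: str = "embeddings.txt") -> None:
    """Append one embedding per line as comma-separated floats."""
    with open(path, "a") as f:
        f.write(",".join(f"{float(x):.6f}" for x in vector) + "\n")
```

A real deployment would swap this for inserts into a vector database, but the hook point in the subprocess logic is the same.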
application, and you can see the output we're getting: it's able to execute tasks, generate the right code, and it's working fine. You can do a lot of other things with it, so let me know what you build — if you're watching this and get some inspiration, take the code base from the GitHub repository, build something on top of it, and tell me in the comment box. That's all for this video. The code will be available, and if you like the content, please hit the like icon. I also have an LLM playlist with more than 25 videos on large language models — please go through it and share your thoughts and feedback on the content I'm creating. Thank you so much for watching; see you in the next one.
Info
Channel: AI Anytime
Views: 9,840
Keywords: gorilla, llm, python, generative ai, langchain, agi
Id: alDArqcxSvw
Length: 47min 51sec (2871 seconds)
Published: Mon Aug 07 2023