OpenAI Function Calling: Node.js Integration

Video Statistics and Information

Captions
In this video I'm going to show you how to get set up with the new function calling feature in OpenAI's API in Node.js. The first thing to do is go to the OpenAI website and grab an API key. Once you have it, put it in a .env file in your root directory. Then run npm init -y, which gives us a package.json, and npm install two packages: axios and dotenv. We'll use axios to make our requests and dotenv to load our environment variable. Save your .env and you can close it out.

From there, create an index.js, either with touch index.js in the terminal or directly in VS Code. The first thing we'll do is import the axios module. If you're not familiar with axios, it's very similar to fetch; you could also use fetch if you'd rather not add an extra dependency, but be mindful that you'll need a newer version of Node for that. dotenv is simply how we reach into our environment variables.

Next I'm going to establish a couple of example functions. The way I'm structuring this, we'll have two functions, and one is dependent on the other; you'll see that in just a second. Our first function gets the weather. This simulates an API request, so if you'd like to swap it out for a real weather API, you could very well do that. It takes in a location and a unit, defaulting the unit to Fahrenheit if one isn't specified, and the response is the location plus a hard-coded value. The thing to keep in mind is that you have to think of it as
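A minimal sketch of the simulated weather function described above. The function name, the hard-coded temperature of 76, and the forecast values follow the run shown later in the video, but this is a paraphrase rather than the video's literal code:

```javascript
// Simulated weather lookup; stands in for a real weather API call.
// The unit defaults to "fahrenheit" when none is specified.
function getCurrentWeather(location, unit = "fahrenheit") {
  const weather = {
    location,
    temperature: 76, // hard-coded for the demo
    unit,
    forecast: ["sunny", "windy"],
  };
  // Return a JSON string, since the result gets sent back to the API as text.
  return JSON.stringify(weather);
}

console.log(getCurrentWeather("Boston, MA"));
```

Returning a string (rather than an object) matters here, because the result will later be appended to the conversation as message content.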
if it were an API: in this example it's just hard-coded, but the purpose of this video is really to show you how to chain these things together and get going with the functions API in general.

The next one is the function that's dependent on our first function. It takes in the temperature as an argument, and that temperature isn't generated until the first function is called, so I'll be showing you how to have successive function calls and chain those commands within your OpenAI requests. In here we just log out that we're calling getClothingRecommendations, show the temperature, and then have a simple switch, almost like a boolean: if the temperature is below 60, we say to wear warm, colorful clothing; otherwise, light clothing, tie-dye. You could put anything in here; again, this is really just to demonstrate the new functions feature. Then we simply return the recommendation.

Next is our main conversation loop. You can think of the earlier functions as utility functions, so if you have core logic of your own, say you're fetching news and then want to do something with that news, you can swap out the variable names and the logic inside them. Within runConversation, the first thing I do is specify the base URL. The reason I'm not using the official OpenAI wrapper is that, as of starting this tutorial yesterday, the new function calling features weren't yet available in the Node.js wrapper. That could change by the time you're viewing this, but when I tried it, it didn't work, so I'm going straight to their API directly. Next we specify our headers; this is where we pass in our API key. Then comes the crucial part: this is the context through which the model knows what the functions
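The dependent function might look like this. The 60-degree threshold comes from the video; the exact recommendation strings are my paraphrase of what's described, not the video's literal code:

```javascript
// Clothing recommendation driven by the temperature that the weather
// function returns. Below 60°F: warm, colorful clothing; otherwise light.
function getClothingRecommendations(temperature) {
  console.log(`Calling getClothingRecommendations with temperature: ${temperature}`);
  const recommendation =
    temperature < 60
      ? "Wear warm, colorful clothing."
      : "Wear light clothing, like tie-dye.";
  // Again, return a JSON string so it can go straight into the message list.
  return JSON.stringify({ recommendation });
}

console.log(getClothingRecommendations(76));
```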
are and what their arguments are, and you can write their descriptions in natural language. To dive into it: this is your message as you'd usually send it if you've used GPT-3, GPT-3.5, or GPT-4. The new parts are the model name (make sure you're on the new function-calling model) and, more importantly, the function structure itself. This is the example OpenAI gave in their Python docs; I went ahead and converted it, since I thought it was a good example to illustrate the new functionality. It's good because it shows a function that takes a number of different parameters: it takes the location as a string, whose description is the city and state, and similarly the unit, an enum of Celsius or Fahrenheit. You can also specify that the location is required for this function. After that we have our getClothingRecommendations, and you can be as verbose as you want in these descriptions; again, you're just declaring the arguments. You can also have functions that take no arguments, if, say, you just want to invoke a function based on something happening in your code.

That's the base structure of what's new in what you pass to the API. One thing I've noticed is that these definitions don't actually seem to be factored into your token count, which is great; you can imagine having a ton of different functions. I haven't pushed the limits on how many you can pass in, but I'd be curious to confirm that they really aren't taking up token space, so if you know, leave a comment below and let the rest of us know. We also specify the function call in the "auto" scenario, so functions will continually be invoked until there are none left in the queue, essentially; we'll get into that. The
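A sketch of the request the video sends straight to the Chat Completions endpoint, bypassing the Node wrapper. The function definitions use JSON Schema, mirroring OpenAI's Python example as described above; "gpt-3.5-turbo-0613" was the function-calling model available when the video was published:

```javascript
// Endpoint and headers for calling the API directly (no wrapper).
const BASE_URL = "https://api.openai.com/v1/chat/completions";
const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
};

const data = {
  model: "gpt-3.5-turbo-0613",
  messages: [
    {
      role: "user",
      content:
        "What is the weather like in Boston in Fahrenheit? Based on the temperature, what should I wear?",
    },
  ],
  functions: [
    {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
    {
      name: "get_clothing_recommendations",
      description: "Recommend clothing for a given temperature",
      parameters: {
        type: "object",
        properties: {
          temperature: {
            type: "number",
            description: "The current temperature in degrees",
          },
        },
        required: ["temperature"],
      },
    },
  ],
  // "auto" lets the model decide whether, and which, functions to call.
  function_call: "auto",
};
```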
first thing we're going to do is set up our try/catch and send our initial request to OpenAI, with the data and headers we just went through. Let me revisit our message: "What is the weather like in Boston in Fahrenheit? Based on the temperature, what should I wear?" It leads the model: it needs to know the temperature before it can say what to wear, and you'll see how the calls are successive as we go through this.

Next, I set up a map, essentially a cache, to prevent unnecessary invocations of the functions. I noticed when I first built this that OpenAI sometimes invoked a function more than once, so if there's a function you don't want repeated, you can cache it for the conversation. One thing to note: if you have a function that should legitimately be called multiple times within a conversation, you'll have to edit this logic, but for demonstration's sake it works in this example and similar ones.

Then we establish a couple of conditions: within the message, we check whether there is a function_call and whether the response's finish_reason is not "stop". One thing I encourage as you go through this: console.log the requests, the payloads being appended to the conversation, and the responses. I've omitted that here, but whether you write the responses locally, save them in a database, or just print them to your terminal, I found it helpful while developing this. From there we get our message, and we're
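The loop's guard logic might be sketched like this: only execute a function when the model actually requested one, the conversation isn't finished, and that function hasn't already run. The names here are my own, not the video's code:

```javascript
// Cache of function names that have already run, so the model can't invoke
// the same one twice in this conversation. Remove or adjust this if a
// function legitimately needs to run multiple times.
const executedFunctions = new Set();

function shouldExecuteFunction(message, finishReason) {
  if (!message.function_call) return false;  // model didn't request a call
  if (finishReason === "stop") return false; // conversation is complete
  if (executedFunctions.has(message.function_call.name)) return false; // cached
  return true;
}
```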
going to store the function name, since we'll reference it a couple of times. We break the loop if the function has already executed, again so we don't get multiple invocations; like I said, if you do need multiple invocations of a function, you'll just have to tweak this one little piece.

From here we have a switch case keyed on the function name. If you scale this out to three, four, five functions, you can add to the switch case; you could also use if statements, or some sort of map that you look the function up in. There are multiple ways to do this, but this is how I chose to handle the logic. The first thing we do is parse the arguments for getCurrentWeather. One interesting thing: the arguments property is a string containing an object, which I found curious when I first got down here and saw it, so we format that string into something parsable for our application. Then we pass the location and unit into getCurrentWeather. Similarly for getClothingRecommendations, we parse the arguments and pass in the temperature. And if for whatever reason you've defined a function in the request but forgot to add it to the switch case, the default branch will log an error.

Here we add the function to the executed-functions list; again, this is the piece of logic that prevents multiple invocations, so tweak it as you need. Next we append the function response to
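The dispatch step might look like the sketch below. Note that the model returns `function_call.arguments` as a JSON *string*, so it has to be parsed before the local function can be called. The two helpers are trimmed-down stubs of the functions defined earlier, repeated here only so the snippet stands alone:

```javascript
// Stubs of the helpers defined earlier in the tutorial.
const getCurrentWeather = (location, unit = "fahrenheit") =>
  JSON.stringify({ location, temperature: 76, unit });
const getClothingRecommendations = (temperature) =>
  JSON.stringify({ recommendation: temperature < 60 ? "warm" : "light" });

function callFunction(functionCall) {
  // `arguments` arrives as a string, e.g. '{"location":"Boston, MA"}'.
  const args = JSON.parse(functionCall.arguments);
  switch (functionCall.name) {
    case "get_current_weather":
      return getCurrentWeather(args.location, args.unit);
    case "get_clothing_recommendations":
      return getClothingRecommendations(args.temperature);
    default:
      // A function was advertised to the model but never wired up here.
      throw new Error(`Unknown function: ${functionCall.name}`);
  }
}
```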
the message list. This is another new thing: there is now a role of "function", alongside user, assistant, and system. We start with our question, and as the functions get answered, their results go into the message list that gets sent back, so OpenAI's API has the context of the function answers in the proper format and isn't continually asking for a function to be called and answered.

From there we make another API request with the updated message list, logging things out along the way so you'll be able to see how it all runs once we're done. At the end of the loop we log the final response, with some simple error handling; this will all be on Git if you want to take a look. We run the conversation, log out the final answer, and if there are any errors, we log those too.

One thing I do want to note: this example has successive calls, but in a lot of cases you don't need functions that depend on one another. If you just want to specify a number of functions and prompt something like "get a quote, get the news, get the temperature," you can do that all within one message-and-answer series instead of this piggybacking, agent-like behavior. That's something to play around with; I didn't do it here because I figured I'd dive in with a slightly more advanced example, and you can dial it back as needed.

From there we can go ahead and run our index, and we see: we send the initial request to OpenAI, then call getCurrentWeather with the location Boston, MA. And if I just go back, we actually have it right
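A sketch of the append step described above: the function's result goes back into the message list under the new "function" role before the conversation is re-sent. This shows the message shape only; the re-send itself would be the same POST as the initial request:

```javascript
// Append a function's result to the conversation under the "function" role,
// so the next request carries the answer back to the model.
function appendFunctionResult(messages, name, content) {
  messages.push({ role: "function", name, content });
  return messages;
}

const messages = [
  {
    role: "user",
    content:
      "What is the weather like in Boston in Fahrenheit? Based on the temperature, what should I wear?",
  },
];

appendFunctionResult(
  messages,
  "get_current_weather",
  JSON.stringify({ location: "Boston, MA", temperature: 76, unit: "fahrenheit" })
);
```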
here. So again, this is our question, and we're specifying Fahrenheit. The model grabs those, recognizing them as the arguments, and sends that response back to OpenAI; once it has that knowledge, it recognizes that it now needs to get the clothing recommendations, because it has the temperature, and it can proceed from there. Finally it sends back that clothing recommendation response and gives you a summary: the weather in Boston is 76 degrees, forecast to be sunny and windy, and based on the temperature it recommends wearing light clothing. Just to illustrate, if we go back to our logic, change the hard-coded temperature to 30, and call this again, we see the temperature is 30, and based on those 30 degrees it recommends the colorful clothing instead. You can play around with this; obviously it will be more useful depending on your application. I just wanted to do a simple example to get you up and running in Node.js. If you found this video helpful, please like, comment, share, and subscribe, and otherwise, until the next one.
Info
Channel: Developers Digest
Views: 3,137
Keywords: OpenAI, GPT-4, GPT-3.5-Turbo, Function Calling, Chat Completions API, Node.js, AI Development, AI Applications, API Integration, Chatbots, AI Models, Language Processing, Natural Language API Calls, Cost Reduction, Function Signature, ChatGPT Plugins, JSON Schema, Text Extraction, AI Security, API Endpoint, External Tools, Language Models, AI Tools, Developers, Coding Tutorial, Node.js Tutorial, API Documentation, GPT Models Update, OpenAI Updates, OpenAI API
Id: u1Ks5PgSZmE
Length: 15min 30sec (930 seconds)
Published: Thu Jun 15 2023