Beginner's Guide to the GPT-4 API & ChatGPT 3.5 Turbo API Tutorial

Video Statistics and Information

Captions
This is a crash course showing how to integrate GPT-4 and GPT-3.5 and their latest chat API into your next website, application, or software. It allows you to perform and customize your own chat directly with OpenAI and its models. I'll show you how to build and deploy an application coded from scratch using serverless technology on Microsoft Azure, and it'll interface directly with OpenAI and its APIs. As this is beginner friendly, I'll make no assumptions and cover everything that you'll need. Let's begin. I'm happy to say that this video was sponsored by none other than Microsoft; more on that later.

Let's begin with OpenAI. Head to Google and search "OpenAI", or just go to openai.com. First you'll need an account. Select Products and Overview from the top menu, then select Get Started from the call to action. Here you can sign in with an existing account, sign in with Google, or create a brand new account. You'll be taken to the OpenAI dashboard, and there are a few things you can do here. Firstly, there's the documentation, which is great for understanding how OpenAI and its models work. There's also the API reference, which we'll be using as part of this video, and examples where you can see how OpenAI have implemented their own features. You can also go to the Playground, where you can interact with the chat version of GPT as well as GPT-4 and test how it works, to make sure your idea can work before you build it out. While I don't have access to GPT-4 just yet, the documentation is exactly the same for 3.5 as it is for 4.

Now we'll need some software before we get started. I'll download Node.js, a JavaScript runtime that will allow me to create a simple REST server. I'll be using version 18.15.0 LTS, and I'm just going to go through the prompts to install it. I'm also going to be using VS Code from Microsoft. It's my favorite code editor and it comes with lots of great plugins that make the coding experience easier. Once you've installed and launched it, go to File > Open Folder and create a new folder for this project. I'm going to call it gpt4-chatgpt-3.5, maybe with azure at the end, since I'll have this hosted on a serverless function later.

Next I'm going to initialize the project. To do this, I'll run npm init in the terminal at the bottom right. This lets me select a name for the project, which I'll call gpt4-api; the rest I'll keep blank and just hit Enter to step through. This creates a package.json file for me. Then I'm going to install a few packages: I'll run npm install and install express, openai, body-parser, and cors. With these installed we can create the very first file for the project, which will just be called index.js.

Here I'm going to import some of the modules we've installed, starting with openai. I'll write an import with curly brackets for Configuration as well as OpenAIApi from "openai". Next I want to set up the configuration: const configuration = new Configuration(), a class in which I set two values, the organization and the API key. Finally I'll initialize the client by calling const openai = new OpenAIApi(configuration), passing in the configuration we've just set. I'm going to head over to the OpenAI website, go to my account, and view API keys. I'll create a new one and copy and paste it into the apiKey field in VS Code. Then I'll also grab my organization ID.
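Here's a minimal sketch of that setup, assuming the openai v3.x SDK shown in the video. The placeholder organization ID and the environment-variable name are assumptions for illustration; in the video the key is pasted in directly.

```javascript
// Terminal: initialize the project and install the packages mentioned above
//   npm init
//   npm install express openai body-parser cors

// index.js — initialize the OpenAI client (openai v3.x SDK)
import { Configuration, OpenAIApi } from "openai";

const configuration = new Configuration({
  organization: "org-xxxxxxxx",           // placeholder: your organization ID from Settings
  apiKey: process.env.OPENAI_API_KEY,     // placeholder: your secret key from "View API keys"
});

const openai = new OpenAIApi(configuration);
```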
The organization ID is under Settings, just under Organization ID. These are the two main things you need to get the new openai module initialized.

It's time to query OpenAI. I'm going to query the chat model. To do this, I'll set a const called completion, and using async/await it will equal await openai.createChatCompletion(). This is one of the new options now available to interface with ChatGPT 3.5 as well as GPT-4. I'm going to set the model first, and I'll use gpt-3.5-turbo, which is actually the cheaper model currently available. If you want to see more models, they're in the documentation under Models; there's GPT-3.5 Turbo as well as GPT-4, and if you want to use any of these, simply copy the model name and place it in the request.

Next I'm going to set messages. This is an array, and the array holds an object with two main values: role, which will start off as "user", and content, which is the message itself, set to "Hello world" to start with. And that's pretty much it. I can now console.log the response, which for the chat completion models is completion.data.choices[0].message; this is a little different from other models.

Now I can open up the terminal and test it out by running node index.js. This won't work because I'm currently using an import statement, so I'll need to jump into package.json, add the value "type": "module", and save. This allows me to use import statements, and I should now be able to run the same command, node index.js. Now I get a response back from OpenAI which says "Hello! How can I assist you today?"
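A sketch of that first query, continuing from the client set up above; the exact reply text will vary, and the "type": "module" entry goes into package.json as described so the import statements (and top-level await) work under Node 18.

```javascript
// index.js — query the chat model once and print the reply
// (requires "type": "module" in package.json so the ES import syntax works)
const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",                      // or "gpt-4" if you have access
  messages: [
    { role: "user", content: "Hello world" },  // a single user message to start with
  ],
});

// The chat completion models return the reply under data.choices[0].message
console.log(completion.data.choices[0].message);
```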
Now it's time to add this to an actual web server so that we can access it in a browser. I'm going to import express from "express" as well as a few other libraries, including body-parser and cors. Then I'll initialize Express with const app = express(), and I'll also set a port of 3000.

Here I'm going to use bodyParser.json() as well as cors(), and then call app.get(), so this will be a GET request triggered when the browser accesses that port. I'll make the handler an async function so we can copy and paste our completion into it, and I'll call res.json() to send back the completion as a JSON object. Finally, I'll have the app listen on the port we set earlier and log to the console once it is listening. Now I can run this in my terminal with node index.js, access localhost:3000, and here is the response.

Now I think it's time to make this interactive. I'm going to change this from a GET request to a POST request and listen for a message that gets sent as part of that POST request. This message can be sent directly to the API inside the array, and if you want, you could even collect messages as an array so that you have a backlog and history.

Next I'm going to create a front end so that we can actually push those messages across. I'll create a file called index.html, then head to Google and search for a basic HTML starter. There's one from freeCodeCamp, so I'll scroll down, use that boilerplate, and paste it straight into my own index.html file. I'll rename the title to "GPT-4 Chat API" and relabel the H1 block to "ChatGPT-4". I'll remove the script and start writing my own, but as part of that I'll want a form that the user submits, with an input and a submit button. The input will just be a text field with the name "message" and the ID "message", and the button will just be a submit button. Finally, I'll have one additional div, the chat log, which is where all the messages will go.

Now it's time to write the JavaScript for this so we can start interacting with the web server we created earlier. I'll set const chatLog to reference the chat log's ID, const message to reference the message ID, and const form to reference the form we're submitting. Then I'll add an event listener on that form for any time it is submitted, running a function that takes e and calls e.preventDefault() so it doesn't reload the page. Next I'll pull the text out of the input with message.value, and then reset the input so it's empty and looks like it's been submitted, by setting message.value to blank. I'll create a new div element, the message element: it's a div with the classes "message" and "message-sent", and its innerHTML is the message text I'm sending to the server. Finally, I'll append that message to the chat log and scroll the chat log so that you can see it.

Now I'm going to do a fetch request, but this will be a POST request to the web server on localhost port 3000. A few things I'll configure here: the method will be POST, the headers will set the content type to application/json, and the body will be the message the user just typed in the input. The response will be in JSON format, so I'll call response.json() to be able to read it, and then I'll use that data to create a new message element and append it to the chat log.
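A sketch of the Express server at this stage, assuming the openai client from the earlier snippet is defined in the same file. The root route path and the { completion } response shape are assumptions based on the description above, not confirmed details from the video.

```javascript
// index.js — Express server wrapping the chat completion
import express from "express";
import bodyParser from "body-parser";
import cors from "cors";

const app = express();
const port = 3000;

app.use(bodyParser.json());
app.use(cors());

// POST endpoint: the front end sends the user's message in the request body
app.post("/", async (req, res) => {
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: req.body.message }],
  });

  // Send the assistant's reply back to the browser as JSON
  res.json({ completion: completion.data.choices[0].message });
});

app.listen(port, () => console.log(`Listening on port ${port}`));
```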
It's not just data.message, so I'm going to update the syntax: it's actually data.completion.content. And that's all, it's ready to go. I can now browse this file. I could open index.html manually, but I also have Live Server running as a plugin, so here it is. I can type in a message like before, "hello world", click send, that queries the web server, and I get the same response back.

Unlike traditional OpenAI models, the chat completion allows you to have history, so let's actually add that in. The history is saved as an array of messages between the user and the assistant, with a system message at the start to give context to how the chatbot should behave. What I'm going to do is add these in. Let's start with a system message to describe what this assistant should be. In this case I'm going to call it DesignGPT, a helpful assistant for graphic design as a chatbot. Next, let's look at the syntax for the assistant and the user. We want the front end to use the same sort of messages and pass them off to the back end so that we have a message history. This means, firstly, we need to get rid of the individual message we had before, so I'm going to comment it out. I'm also going to receive messages instead of a single message, and I'm going to spread that array out into the messages section, so it provides the history of everything that's happened on the front end.

So let's jump into the front end to actually pass that across. In order to do that, let's look at the schema, which needs each message to have a role as well as content, and add these as part of our messages. I'm going to create a new value called messages and paste in the schema from OpenAI's documentation to get an idea of how it looks; I can probably get rid of it later, it's more or less there as a placeholder for reference, to know how the messages should look. With that, once I create a new message from the form, I should create an object with that schema. So here I'm going to create a new const called newMessage with that schema; I'll get rid of the second object and just keep the user object, and instead of static content I'll put in the content I get from the text input, messageText. Finally, I want this added to the messages array, so I'll do a simple messages.push with the new message.

Next I'll update the message I'm sending to the server. Before, it was the single message with the text; this time it'll be the messages array directly. The only thing left is to update the message history when a response comes back from the server. So here I'm going to create a newAssistantMessage, paste in the schema with the role being "assistant" this time, set the content to data.completion.content, since that's what we received back from the server, and push that to the messages array. And that's it, message history has been enabled.

Let's restart the web server with node index.js and reload the front end, which is just a basic HTML file, and give it a test. I'll ask the system "hi, who are you?", and the response is accurate: it's come back as DesignGPT, with the context we provided at the start. I now have message history.
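A sketch of the front-end script with message history, assuming element IDs of "chat-log", "message", and "form", and the { completion } response shape used above; the CSS class names are placeholders.

```javascript
// script in index.html — send the chat history to the server and render both sides
const chatLog = document.getElementById("chat-log");
const message = document.getElementById("message");
const form = document.getElementById("form");

// Running history of { role, content } objects, following the chat completion schema
const messages = [];

form.addEventListener("submit", async (e) => {
  e.preventDefault();               // don't reload the page

  const messageText = message.value;
  message.value = "";               // clear the input so it looks submitted

  // Render the outgoing message
  const messageElement = document.createElement("div");
  messageElement.classList.add("message", "message-sent");
  messageElement.innerText = messageText;
  chatLog.appendChild(messageElement);

  // Add it to the history and send the whole array to the server
  messages.push({ role: "user", content: messageText });

  const response = await fetch("http://localhost:3000/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  const data = await response.json();

  // Record the assistant's reply in the history and render it
  messages.push({ role: "assistant", content: data.completion.content });

  const replyElement = document.createElement("div");
  replyElement.classList.add("message", "message-received");
  replyElement.innerText = data.completion.content;
  chatLog.appendChild(replyElement);
});
```

On the server side, the route now receives the whole array and spreads it in after the system message, along the lines of messages: [{ role: "system", content: "You are DesignGPT, a helpful assistant for graphic design." }, ...req.body.messages].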
So if I follow up with "what can you do?", it actually gives me a good answer based on the chat so far. All of this, however, is on localhost, meaning you have to run it on your own computer. So next I want to put this up online so that anyone can access it; this also means that if you're running any type of website, platform, or mobile app, you'll be able to access this API. What I'm going to do is put this up in the cloud, on the Microsoft Azure cloud to be specific. This only takes a few minutes, and I'm going to run you through the process. I've added an official Microsoft link to the description where you can access the files that I've built so far, as well as the Azure cloud sign-up and some additional training resources if you need them.

Head to the link that says "Get started with Azure now", where you'll be able to start with a free account. I'm going to select Start Free and go through the sign-up process. Once complete, I'm taken to the landing page and can head straight to the Azure portal. What I want to do is create a function that runs without any servers. While we could do this through the user interface, I'm going to do it inside VS Code. I'll head to Extensions on the left menu and search for Azure Functions. There are a few different extensions here, but this one is the best for creating a serverless app. Once it's installed, you'll have a new Azure option on the left menu. The first thing I need to do is sign in to Azure; this is quite simple because there's single sign-on, so I can select the account I just created and use it to sign in to the plugin. Once signed in, I have all the options available, including the one to create a Functions App. I'll be prompted to give it a unique name, so I'll call it Adrian Azure GPT. I'll need to select a runtime, so I'll put it on Node.js version 18, and a resource region, where I'll just select the default, East US. This goes on to deploy my app in Azure, with my URL available just below. On the left-hand menu, the Functions App will now show Adrian Azure GPT.

Now, to create the workspace for this project, I'm going to go to Workspaces and select the little lightning icon, which creates a new project. You can select a folder for the project, or the one we're already working in. You can also select a language, and I'll be using JavaScript, and a programming model, where I'll be using the latest one, version 4.
Finally, you select a trigger. I'm going to do an HTTP trigger, but you can also do timer triggers and others. And lastly, I'll give it a name: GPT function. What's cool is that the code and the files are created for you. If I jump into the file explorer in VS Code, I can see my package.json, including the name of the project, as well as the function itself, GPTFunction. It includes a handler that does a simple hello world, but I'm going to make it say "hello Adrian" as a test.

I want to test this locally, so I'll go back to the Azure panel, expand the workspace, and select Start Debugging. I'll connect this to my Azure storage, which was automatically created here as Adrian Azure with a random number at the end, and this starts up a local instance of the project. In the terminal I get the local address to connect to, which is localhost:7071/api/GPTFunction. Let me try this out in the web browser: if I go to it, I can see my "hello Adrian" is there. Now I'll close this off by selecting Disconnect. Since Azure connects straight into debugging inside VS Code, it makes this whole process nice and easy.

I now want to deploy this to the cloud. Next to the lightning icon there's a cloud with an up arrow to deploy the workspace to the cloud. I'm going to deploy it to the Adrian Azure GPT Functions App I created; I'll be prompted that I'll be overriding any existing deployments, and now the deployment of the function runs. Once done, I can jump into Azure to take a look at it. Here is the Azure dashboard: I'll head over to Function App, get a list of all my function apps, and select the Adrian Azure GPT one. This opens a side menu, and down here I get the option for Functions. I'll select that to view the one I just deployed, which should be called GPTFunction. I can select it to browse into it and grab the URL. Let me open it in a Chrome window to make sure it loads properly, and success, we've got "hello Adrian".

So now I can add OpenAI to this function. I'll install it first by running npm install openai, and check that it's in package.json; it's the only library we need in this case, since we're not using Express, just this function inside Azure. I can jump back into my other file, the Express server, and copy over the OpenAI configuration and initialization, pasting it at the top. I'm not using import statements here, I'm using require statements, so I'll change it to a const and, instead of "from", set it equal to require, adding brackets around "openai". Next I'll get rid of the other syntax inside the function: we don't need the context.log, which is kind of like console.log, and we don't need the name that was queried from the URL. The response right now is just a string, so I'll change it to a JSON body and get rid of the current content. Then I'll jump back into my Express file and copy over all the internal content of the POST request. I'll get rid of the res.json, but I'll copy over the completion I want as well as the response from OpenAI, and put that in a JSON object that gets sent back to the client.
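A minimal sketch of that import-to-require conversion, using the same placeholder organization ID and environment-variable name assumed earlier; the Azure Functions project here uses CommonJS, so the ES import becomes a require.

```javascript
// Top of the Azure function file — same client setup as before, but with require instead of import
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  organization: "org-xxxxxxxx",           // placeholder: your organization ID
  apiKey: process.env.OPENAI_API_KEY,     // placeholder: your secret key
});

const openai = new OpenAIApi(configuration);
```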
Finally, POST requests work a little differently on Azure: instead of req.body, I'm going to call await request.json(), which does pretty much the same thing as reading the body and the POST data from there. Azure also uses context.log instead of console.log, so I'll update that too. With all this done, I can head to the debugger and test it out. I'll restart it and start up a new instance, and this starts it on the same local address I had before. The one change I need to make is to point at this address, so jumping back into my other VS Code window for the front end, I'll change the fetch request from localhost port 3000 to this new address. Now let me test whether it's working with my Azure function. I can ask "who are you?" and I get a response back saying it's an AI chatbot, which means we're now querying the Azure function.

But it's still not in the cloud, so I'm going to head over to Azure, and this is the easiest part: I select the up arrow to deploy this function to the cloud, overriding the previous one I had. This uploads the entire project, including the new openai module I added as well as the POST request. To test this, I'll grab the online URL for the function, go to my front end, and replace the localhost version with the online version. This means I can run this pretty much anywhere. Let me test it: if I ask "can you access the internet?", I unfortunately don't get a response, because CORS isn't yet enabled on Azure. So let me fix that right now. I'll jump back to Azure, head to the Function App, select the function application, and search for CORS. Here I select API > CORS on the left-hand side and add the asterisk (*) so it can be accessed from anywhere, then hit Save. Let me try accessing the function once more: I ask "can you access the internet?", and this time GPT comes back with yes, it can. That's cool.

I hope you guys enjoyed today's video. If you liked it, you know what to do. Otherwise, I'd like to thank Microsoft for sponsoring this video; it was great to work with them, and if you want to learn more about them, the Azure cloud, or this project in general, check out the link below.
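Putting it together, here's a sketch of what the finished function might look like under the Azure Functions v4 Node.js programming model described above (await request.json() for the POST body, context.log for logging, and a JSON body in the response). The function name, system prompt, and response shape follow the earlier steps; everything else is an assumption rather than the exact code from the video.

```javascript
// GPT function — Azure Functions v4 Node.js programming model
const { app } = require("@azure/functions");
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  organization: "org-xxxxxxxx",           // placeholder: your organization ID
  apiKey: process.env.OPENAI_API_KEY,     // placeholder: your secret key
});
const openai = new OpenAIApi(configuration);

app.http("GPTFunction", {
  methods: ["POST"],
  authLevel: "anonymous",
  handler: async (request, context) => {
    // On Azure the POST body is read with request.json() instead of req.body
    const { messages } = await request.json();

    const completion = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: "You are DesignGPT, a helpful assistant for graphic design." },
        ...messages,
      ],
    });

    context.log("Returning chat completion");   // context.log instead of console.log

    // Same shape the front end already expects: { completion: { role, content } }
    return { jsonBody: { completion: completion.data.choices[0].message } };
  },
});
```

The front end stays the same; only the fetch URL changes from localhost to the deployed function's URL, and CORS must be enabled on the Function App as described above.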
Info
Channel: Adrian Twarog
Views: 112,174
Keywords: gpt4, chatgpt, gpt4 api, chatgpt api, chat gpt api, chatgpt 3.5 api, chatgpt3.5 api, chat gpt 3 api, chatgpt 3 api, gpt 4 api, openai api, openai, ai, api chatgpt, api openai, gpt3.5 turbo, gpt3.5 turbo api, gpt 3.5 turbo, api gpt 3.5, api gpt3.5 turbo, gpt turbo api, gpt turbo 3.5, gpt 4 turbo, open ai, open ai api, open ai gpt, open ai gpt4, openai gpt4, openai api tutorial, openai chatbot, openai chat tutoriial, openai gpt4 tutorial, gpt-3.5-turbo, gpt-3.5, gpt-4, gpt
Id: LX_DXLlaymg
Length: 21min 32sec (1292 seconds)
Published: Tue Mar 28 2023