Build and Deploy a GPT-4 Chatbot in Next.js 13 With Streaming (Vercel AI SDK)

Video Statistics and Information

Captions
What's going on guys, it's Cooper Codes, and in this video we are going to build this GPT-4 chat application that streams GPT-4 responses and is fully deployed with Next.js 13. As you can see, if we type a user message here and send it in, we get a response from GPT-4 that is streamed to our client from a server-side function. To achieve this we are going to use the Vercel AI SDK, which lets us build a full chat application like this with only a handful of lines of code. It was recently announced by Vercel and ships with a bunch of amazing features. Just to show you, we can keep asking questions as if we were talking to ChatGPT: for example, I can ask "can you explain algorithms more", send that in, and I get a consistent back-and-forth chat log that remembers the context. This is also a Next.js 13 application fully deployed to Vercel, which means we will handle things like running your AI functionality at scale and keeping your API keys secret when you ship this to a bunch of users. Alright, let's get into the code.

To get started, create the Next.js 13 application by opening a command line in an empty folder and typing npx create-next-app@latest, adding the options --ts for TypeScript, --tailwind for Tailwind, and --eslint. Press enter to get started, and if it asks you to install the latest version of create-next-app, say yes. I'm going to name the project gpt-chat, but you can name it whatever you want. We are not going to use the src directory (just press enter), we are going to use the App Router (say yes, or just press enter), and we don't need to change the import alias (say no). It will then fully build our Next.js 13 application. There are two important packages to install, so cd gpt-chat to get into the project folder and run npm install ai. The ai package is the Vercel AI SDK, which makes chat applications and chat streaming way easier than doing it yourself. We also need npm install openai-edge, which lets us run OpenAI functionality inside an Edge function, so we can keep our API keys safe and use OpenAI entirely on the server side. After that, your package.json should list both packages, which is great news.

Now we want to set up the actual API key to talk to OpenAI; I'm getting this out of the way early so we have the key when we need it. Go to platform.openai.com and create an account, whatever that process looks like, and eventually you land on the dashboard. From there, go to the top right of the screen and choose View API keys. The list might be empty if you just created your account, but press Create new secret key; you can name it, and I'm calling mine "video test key". Create the key and make sure to copy the entire string it gives you, because it only ever shows the secret key once: once that screen goes away, it's gone forever, so copy it and press Done.
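For reference, here is a minimal sketch of those terminal commands, assuming the project name gpt-chat used in the video (passing the name up front instead of answering the prompt):

```
npx create-next-app@latest gpt-chat --ts --tailwind --eslint
cd gpt-chat
npm install ai
npm install openai-edge
```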
Now, to use this API key inside our Next.js 13 application, go to the project root folder and create a file called .env.local. This is a file where we can store local environment variables; for example, we can set OPENAI_API_KEY equal to a string, which is the key we just copied. To understand what we just did: we made an environment variable called OPENAI_API_KEY, and anywhere in our Next.js application we can reference its value with process.env followed by the variable name, so process.env.OPENAI_API_KEY gives us that string. Make sure to save the file and that it looks right, then go into the app folder and start editing page.tsx to make a pretty basic user interface.

If you run npm run dev to start the local server and open the local link, you'll see a bunch of boilerplate from Next.js. Our interface is going to be super simple, so we honestly don't need any of it: delete everything inside the outer main tag (scroll down and make sure you don't delete main itself), and if you look at the application now you have a completely blank screen, which is exactly what we want. Next I'm going to make a div in the middle of the screen to hold the chat application. You can style it however you want, but for this tutorial I'm using a very basic Tailwind CSS layout: give the div a className with a background of bg-slate-800, a padding of p-3 so there is a little padding on every side, a width of 800 pixels to keep things simple, rounded-md for rounded corners, and text-white so any text inside it is white. Then let's create a basic title: an h2 at the top that says "GPT-4 Streaming Chat Application" with a className of text-2xl, one of the Tailwind font-size options, to make the text larger. Save that and you have a basic card to hold all of our chat functionality.

The next thing I'm about to do is make a new component called ChatComponent, and you might wonder why. The chat component needs access to the client because it has text inputs with onChange events, which means it has to be a client-side component, and right now the entirety of Home in page.tsx is a server-side component. So make a folder inside gpt-chat called components, and inside it create the component file, ChatComponent.tsx.
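As a rough sketch, and assuming the default @/ import alias from create-next-app plus the ChatComponent file we are about to create (the exact class names are approximations of what's described above), app/page.tsx ends up looking something like this:

```tsx
// app/page.tsx (a server component by default)
import ChatComponent from "@/components/ChatComponent";

export default function Home() {
  return (
    <main>
      {/* Card that holds the whole chat UI */}
      <div className="bg-slate-800 p-3 w-[800px] rounded-md text-white">
        <h2 className="text-2xl">GPT-4 Streaming Chat Application</h2>
        <ChatComponent />
      </div>
    </main>
  );
}
```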
Inside ChatComponent.tsx, write a very basic React component: export default function ChatComponent, which, like any React component, returns some JSX; for now it's just a div with an h1 saying "hello". Then go back to page.tsx, import the ChatComponent we just made, and put it right under the h2. Importantly, go back over to ChatComponent and add "use client" at the top so it can use all the client functionality we'll need when we actually build the chat. You'll see a little "hello" showing, which means our chat component is being rendered on the client, which is great.

Now let's create a pretty basic form for the user to input some text into. In ChatComponent I'll add a form at the bottom of the div with a className of mt-12; that margin separates it from the text messages we'll eventually render above it. Inside the form I'll put a little paragraph that says "User Message", then a textarea for the user's text. For styling, give the textarea a className of mt-2 so it separates from the text, w-full, a background of bg-slate-800 (we can change that if it doesn't look great), and p-2 for padding. The placeholder is the initial hint text shown to the user as an example of what to say; since I'm making something like a software-engineering robot, I'll use "What are data structures and algorithms?". The background color looks good enough for this application, so next we need a button so the user can actually send messages in: add a button that says "Send Message" with a className of rounded-md for rounded corners, bg-blue-600, which is a nice blue from Tailwind, p-2, and mt-2. This is a pretty basic user interface, but it's clean enough to hold all the other functionality.

We are going to load in messages as they come back from our API and show every message we get, of course, but for now I'll hard-code a quick example message so we can make them dynamic later. Make a little div for the example message with an h3 showing the role of whoever is talking in that message; give it a className of text-lg and font-semibold, which is good for making something stand out, plus mt-2 (you'll see I use mt-2 a lot to keep the spacing consistent). Let's say GPT-4 is talking in this one, although later we'll make it say GPT-4 or User based on the message, and the actual content is just a simple paragraph saying "I am a robot with GPT-4". Save it and take a look: we now have a specific way to interpret messages and show them to the user, so once we get data from the API we're about to build, we can show who is talking and then also the message.
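Here is a rough sketch of the static version of components/ChatComponent.tsx at this point; the exact class names and file name are approximations of what's described above:

```tsx
"use client";

export default function ChatComponent() {
  return (
    <div>
      {/* Hard-coded example message; replaced with real messages later */}
      <h3 className="text-lg font-semibold mt-2">GPT-4</h3>
      <p>I am a robot with GPT-4</p>

      <form className="mt-12">
        <p>User Message</p>
        <textarea
          className="mt-2 w-full bg-slate-800 p-2"
          placeholder="What are data structures and algorithms?"
        />
        <button className="rounded-md bg-blue-600 p-2 mt-2">Send Message</button>
      </form>
    </div>
  );
}
```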
Just as one more example, which you don't have to copy: if the user speaks next (usually the user asks the first question and ChatGPT answers, but I'll put the user second here), add another block with an h3 of "User" and a paragraph saying "I am a user". Save that and you can see we have a pretty basic chat layout, which is great.

Now, the Vercel AI SDK, the ai package we installed, has an incredibly important React hook called useChat. useChat gives us access to a bunch of things: it handles the messages for us, so we never have to push messages onto an array or remove them ourselves, and it also handles the user input, user submits, and a bunch of other things relevant to creating a chat application in Next.js. At the top of ChatComponent, import useChat from 'ai/react' and initialize the hook by calling useChat(). Using object destructuring on what it returns, we can pull out several values Vercel has created for us: input, the current value of the user's input; handleInputChange, which updates that input when the input changes (surprise, right?); and handleSubmit, which runs once we actually send a message to our backend. There's also an isLoading property that we're not going to use in this tutorial, but it's good to know it exists. Most importantly there is messages. messages is an array representing the conversation: user asks a question, GPT-4 responds, user asks again, GPT-4 responds. That's a pseudocode way of looking at it, but the messages array is essentially a bunch of objects representing that back-and-forth. Just to show you, we can console.log(messages), and because this is on the client (remember the "use client" at the top) we'll see it in the browser console; right now it's an array with zero messages.

Back in the component, we wire these values up to our inputs. For the textarea, which is what the user is typing into, set value={input}, the chat input coming from the useChat hook; set onChange={handleInputChange}, which updates that input to whatever the user currently has typed; and on the form, set onSubmit={handleSubmit}, so pressing the button submits the form through the hook. Just for example's sake, if I console.log the input and type "hello world?", you can see the current input updating to the value in the text box. Here's the interesting part: if we press Send Message, it throws an error, which we actually expect. The useChat hook expects us to have an API route built at /api/chat, and it expects that route to handle a POST request. The cool thing about this API route is that it can be one hundred percent server side, which means you never call OpenAI directly from your client. So in order for this POST request to be fulfilled and return a good response, we need to build the functionality that fetches data from OpenAI and sends a stream back to the front end; that's also how we get the streaming data.
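Before moving to the server side, here is roughly what ChatComponent looks like with the hook wired in (the message list is still the hard-coded example area for now):

```tsx
"use client";

import { useChat } from "ai/react";

export default function ChatComponent() {
  // useChat manages the messages array, the input value, and the submit
  // handler; by default it POSTs to /api/chat.
  const { input, handleInputChange, handleSubmit, messages } = useChat();

  console.log(messages); // starts out as an empty array

  return (
    <div>
      {/* Messages will be rendered dynamically here later */}

      <form className="mt-12" onSubmit={handleSubmit}>
        <p>User Message</p>
        <textarea
          className="mt-2 w-full bg-slate-800 p-2"
          placeholder="What are data structures and algorithms?"
          value={input}
          onChange={handleInputChange}
        />
        <button className="rounded-md bg-blue-600 p-2 mt-2">Send Message</button>
      </form>
    </div>
  );
}
```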
So let's get started building this API route. As the error showed, useChat expects it at /api/chat, and routing in Next.js 13 is folder based, so to get /api/chat we literally go to the app folder, which is the root of our routes, create an api folder, and then create a chat folder inside it. Inside chat we make a file called route.ts. route.ts is a type of file in Next.js 13 that lets us define route handlers; if you're wondering what a route handler is, don't worry, it just allows us to handle certain requests to this specific route. Right now the front end is calling localhost:3000/api/chat, and based on that call we want to handle a POST request to this route. To handle that request, inside route.ts we create a function specifically for POST. These functions are recognized by their name, so it has to be exactly POST: export async function POST, taking in a request from our front end, which we can type as the built-in Request. Inside that request we get access to specific things; for example, the array of messages lives in the request body, which we can read with await request.json(). Since request.json() resolves to an object of data that contains a messages array, one way to get messages out of it is object destructuring: const { messages } = await request.json(), listing whatever properties we want to pull out. If the body were something like { cooper: "codes" } and we wanted the cooper value, we would write const { cooper } the same way; that's just an example of a value that isn't really in our body, so we can get rid of it.

This function is what resolves the POST request, and I'll write out what we're going to do as comments. First, we create a chat completion, which is just a fancy way of saying get a response from GPT-4. Then we create a stream of data from OpenAI, which lets us stream data to the front end just like ChatGPT does. Finally, we send that stream as the response to our client/front end. Sending the stream back is what resolves the POST: the client asks for something, and we resolve the request by eventually returning a stream. First things first, though, we have to initialize everything with OpenAI. At the top, import Configuration and OpenAIApi from openai-edge, and also import OpenAIStream and StreamingTextResponse from ai, which I'll talk about more as we use them. To make this server-side function (and it's incredibly important to realize this is a server-side function) run as an Edge function, export const runtime = 'edge'. Edge functions are part of Vercel and provide optimal infrastructure for our API route; if you want to look more into what that is, go to edge-runtime.vercel.app, which talks all about it.
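At this point, app/api/chat/route.ts is roughly the following scaffold, with the three steps left as placeholder comments (the imports aren't used yet, so expect lint warnings until the next step fills them in):

```ts
// app/api/chat/route.ts
import { Configuration, OpenAIApi } from "openai-edge";
import { OpenAIStream, StreamingTextResponse } from "ai";

// Run this route handler on the Edge runtime.
export const runtime = "edge";

export async function POST(request: Request) {
  // The messages array sent by the useChat hook lives in the request body.
  const { messages } = await request.json();

  // 1. Create a chat completion (get a response from GPT-4)
  // 2. Create a stream of data from OpenAI
  // 3. Send the stream as a response to the client / front end
}
```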
Now that we're set up with the Edge runtime, we can use the openai-edge package to set up our configuration. Say const config = new Configuration(...), which comes from the import up top. It takes some parameters, and the only one we're giving it, and the most important, is apiKey, set to process.env.OPENAI_API_KEY; we get access to OPENAI_API_KEY because it's inside our .env.local. Then, to set up a way to communicate with OpenAI, say const openai = new OpenAIApi(config), passing in the configuration object we just made. Now we're good to go with using OpenAI on the server side, and we can use the OpenAIApi instance to create a chat completion: const response = await openai.createChatCompletion(...), which is what gets asked the questions when we send in a message. Inside the options object we say what we want from the completion: the model, which is gpt-4; stream set to true, since in this example we want streamed data; and, importantly, the messages of the conversation so far, so GPT-4 knows what to respond to and has all the context, so messages is set from the messages we pulled out of the body. It's important to recognize that messages will immediately contain the user's message; the message object is pretty simple, essentially the user saying "hello there" (a more pseudocode way of looking at it), but just know there will be a bunch of objects in there, and you can always console.log(messages) to see them.

With this array of messages, it's also important to understand that GPT-4 chat completions support a very important message at the beginning called the system message. The system message tells GPT-4 how to act, in the simplest terms, and it should always be at the front of your array. A simple way to add a system message to the already existing array is to create a new array and spread in ...messages, which takes all of the messages from the original array and puts them in this new one, and then on top of all of those we add a new message that is always first: its role is "system" (so, the system message), and its content is however you want GPT-4 to act. For example: "You are a helpful assistant. You explain software concepts simply to intermediate programmers." Now GPT-4 knows how to act. This is also where you can do a bunch of fun stuff: if you tell it to talk like a pirate, it will always talk like a pirate, and that behavior comes from the system message. When this chat completion finishes, it gives us a response, and we then need to get the stream out of that response. That would normally be a complicated process, but not here: we can just say const stream = await OpenAIStream(response). OpenAIStream is one of the helpers we imported above; it takes the response from the chat completion and turns it into a stream we can send to the front end. Remember that the response isn't the stream itself, so this does the logic of actually pulling the stream of data out of our response, which is super helpful.
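Putting those pieces together (including the return statement that's covered next), the finished route handler looks roughly like this; the system prompt is just the example wording from the video:

```ts
// app/api/chat/route.ts
import { Configuration, OpenAIApi } from "openai-edge";
import { OpenAIStream, StreamingTextResponse } from "ai";

export const runtime = "edge";

// openai-edge client configured with the key from .env.local
// (or from the Vercel dashboard in production).
const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

export async function POST(request: Request) {
  const { messages } = await request.json();

  // Ask GPT-4 for a streamed chat completion, with a system message
  // prepended so the model knows how to behave.
  const response = await openai.createChatCompletion({
    model: "gpt-4",
    stream: true,
    messages: [
      {
        role: "system",
        content:
          "You are a helpful assistant. You explain software concepts simply to intermediate programmers.",
      },
      ...messages,
    ],
  });

  // Pull the token stream out of the raw completion response...
  const stream = OpenAIStream(response);

  // ...and return it; the useChat hook on the client consumes this format.
  return new StreamingTextResponse(stream);
}
```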
Once we have this stream of data, we return it to the front end by saying return new StreamingTextResponse(stream), passing in the stream we want to send back. It's common to return a response at the end of anything inside route.ts, so you always return a response, and StreamingTextResponse gives us a format for returning the stream where we don't have to worry about it: as long as you hand it the stream, it returns it to the front end in a way our chat component can understand, because when this response comes back, all the logic for how it gets loaded into our application is handled by the useChat hook. It can seem like magic, but that's because so much of the functionality is handled by useChat.

So right now, assuming our backend call works, we should be able to see a bunch of messages in the console, even though they won't be displayed to the user quite yet. Save all the files and go back over to the application; make sure to refresh, and you should start with an empty messages array (sometimes it caches old messages, so be careful of that). If we type a user message (remember the input is being listened to), say "what does hello world mean in programming", and send it in, you'll see a flood of responses: the messages object is updated on every single token, which is how you get the streaming responses; on every single word, you can think of it as sending in a completely refreshed version of the array. By the end, the content of the message looks something like: "Hello, World!" is the first program that beginner programmers write, with an example in Python and an example in Java. Pretty cool, and it shows we're getting the full response back.

Now we need a way to actually show these messages on the front end. The important thing about each message is that it has a content and a role: when the user is talking the role is "user", and when the robot is talking it's "assistant". Because messages is an array and we're in React, we can map over it to create a unique component for each message. In ChatComponent.tsx, write messages.map with the current message object pointing to an arrow function (it won't like the old comment in there anymore, so remove that), and return some JSX, which you can do with return and some parentheses: a div with a key of message.id. To make our lives easier, we can also type this parameter with the Message type. Writing : Message doesn't resolve at first (which is hilarious), because we want the Message type specifically from ai/react, so go up to the import and add Message alongside useChat. Now we get a proper type showing all the different information on a message, which makes the next part way easier.
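Simplified, the objects in that messages array look roughly like this; the ids and wording are illustrative, and the Message type from ai/react has the full shape:

```ts
// Rough sketch of what useChat's messages array holds mid-conversation.
const messages = [
  { id: "1", role: "user", content: "What does hello world mean in programming?" },
  { id: "2", role: "assistant", content: '"Hello, World!" is usually the first program...' },
];
```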
One quick thing to structure this: I'll add two comments, one for the name of the person talking, which is the first thing we render, and one for the message itself, which needs some formatting as well. To get the name of the person talking we can do some ternary logic: if message.role is equal to "assistant", show the GPT-4 h3 we already built down in the example markup (I'll just paste that h3 in as the component to show for the assistant), then a colon, which pretty much means "else", and in that case show an h3 that instead just says "User". That's a simple way to do an if/else in JSX, and it looks nicer if you format it across a few lines so you can see what you're looking at.

Under the speaker's name we want the message itself. We get the user's actual content from message.content, and inside another set of curly braces (more JavaScript) I'm going to call message.content.split on the newline character. Why split by newline? Look at the output on the site: when ChatGPT talks, it creates new lines on purpose, which show up as \n in the content. To make things look nice on our front end, we have to recognize those new lines and create matching gaps in the UI. So we split the content by "\n" and then map over the resulting array of what I'll call text blocks, which are just strings, and because we're splitting on new lines, some of them will be empty strings. To reuse that comment to explain (hopefully you don't mind): say the text is "Cooper Codes is a YouTuber", then a blank line, then "He makes software content", then another blank line, then "You should subscribe" (a bit biased coming from GPT-4, but understandable). Splitting that whole thing by newline turns it into an array: the first string, then an empty string for the blank line, then "He makes software content", another empty string, then "You should subscribe". So when we map over the strings from the split, we can render an empty one differently on the front end. Point each current text block to an arrow function; it complains for the moment because there's no return statement, so just add a blank return for now and we're good.
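Written out as a quick sketch (the strings are just the example wording from above), that split looks like this:

```ts
const content =
  "Cooper Codes is a YouTuber\n\nHe makes software content\n\nYou should subscribe";

content.split("\n");
// => ["Cooper Codes is a YouTuber", "", "He makes software content", "", "You should subscribe"]
```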
If the current text block is an empty string, like we just discussed, I'm going to return a paragraph that is literally just a space, using &nbsp;. That's just the syntax for showing a space, because if you put an actual blank space in the JSX it won't render anything, which is the problem; &nbsp; is a way of manually putting a space in there. If it is not an empty string, we actually have a message line to show, so return a paragraph with whatever is in the current text block. Hopefully that logic makes sense; it takes a fair amount of JavaScript to show this nicely on the front end. React also gets mad if these paragraphs don't have a unique key, so grab the index in the map (and type it as a number, since we're in TypeScript) and give each paragraph a key of message.id plus the index, so we know exactly which index we're on and every paragraph key is unique; do the same thing for the empty-line paragraph as well. I know this is a somewhat involved way of doing it, but to recap: we map over every single message, render a little tag showing the name of the person talking, and then format the message itself (I'll leave a comment marking that part). The old hard-coded examples were just examples, so you can delete them if you're following along.

Now we can go into the project and actually see this working. I still have those example messages from before that I forgot about, so go back into the component and get rid of them; you should have no messages initially. Let's go: I'll ask GPT-4 "what are data structures", and there we go. As the messages array gets updated, it shows new data to our client, which is amazing; it's telling us about every single data structure (this is an expensive API call, oh well). Just to show you, we can also keep asking and it recognizes our context, because all of those previous messages are saved: I'll ask "can you explain hash tables in 50 words or less", send the message in, and it does it. So this lets you talk to GPT-4 in your applications, which is amazing, and one really cool thing about this project is that the API calls run server side, so we're not exposing our API key anywhere. The thing about running ChatGPT at scale is that if you ever expose your API key it can get very expensive, so you have to be careful about that.
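For reference, here is roughly what the completed ChatComponent looks like once the hard-coded examples are removed; the markup matches the steps described above as closely as I can reconstruct them:

```tsx
// components/ChatComponent.tsx
"use client";

import { useChat, Message } from "ai/react";

export default function ChatComponent() {
  const { input, handleInputChange, handleSubmit, messages } = useChat();

  return (
    <div>
      {messages.map((message: Message) => (
        <div key={message.id}>
          {/* Name of the person talking */}
          {message.role === "assistant" ? (
            <h3 className="text-lg font-semibold mt-2">GPT-4</h3>
          ) : (
            <h3 className="text-lg font-semibold mt-2">User</h3>
          )}

          {/* Formatting the message: blank lines become &nbsp; paragraphs */}
          {message.content.split("\n").map((currentTextBlock: string, index: number) => {
            if (currentTextBlock === "") {
              return <p key={message.id + index}>&nbsp;</p>;
            }
            return <p key={message.id + index}>{currentTextBlock}</p>;
          })}
        </div>
      ))}

      <form className="mt-12" onSubmit={handleSubmit}>
        <p>User Message</p>
        <textarea
          className="mt-2 w-full bg-slate-800 p-2"
          placeholder="What are data structures and algorithms?"
          value={input}
          onChange={handleInputChange}
        />
        <button className="rounded-md bg-blue-600 p-2 mt-2">Send Message</button>
      </form>
    </div>
  );
}
```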
Alright, now we can actually deploy this project to Vercel. I'll go over to my terminal, clear it out, and run npx vercel logout; you don't have to do this, I'm only logging out of my own Vercel account so I can show you the full process and be right alongside you, assuming you're not logged in at all. Then run npx vercel, which is the command that deploys to Vercel (if you're unsure whether it's "vercel deploy" or anything like that, no, it's just npx vercel), and press enter. I have a GitHub account connected to Vercel, so I press enter on Continue with GitHub; use the arrow keys to select whichever provider you use. That sends me to a website and logs me in automatically because I was already logged in on vercel.com; if you don't have an account, I believe there will be a setup flow, and eventually it tells you you're good to go. If you ever get lost in the process (say you accidentally closed your command line), you can always just run npx vercel again. Back in the terminal, my authentication is good and it asks whether I want to deploy this specific project: I say yes, then yes to my specific scope, which is my GitHub account. I don't want to link it to an existing project, and since I think I already have a project named gpt-chat, I'll name this one gpt-chat-video. The code is located right where we ran npx vercel, so just press enter for the directory, and say no to modifying the settings, because we don't need to. It then builds our project on Vercel; this takes a second, and I'd be careful not to touch anything.

Eventually we get our project, and now we need to set up the actual environment variables. To provide environment variables such as our OpenAI API key in a real production environment, Vercel can't just look at .env.local; you have to go onto Vercel and add them manually, so I'll show you how. If you open up the production deployment, it loads, but one thing is going to trip us up: I'll ask "what are programmers" (kind of a philosophical question), press Send Message, and it doesn't work. You might wonder what happened: our backend doesn't have the API key in production yet. So go to your Vercel dashboard at vercel.com and log into the same account you used to deploy your application; for example, I'm on my GitHub account, and I go to gpt-chat-video, where you can see it's serving my current website, which is great news. The one thing we need to do is go into Settings, then Environment Variables; these are all the environment variables available to your different environments, most importantly for us, production. Vercel actually makes this really easy: open your .env.local file and copy everything (if you have more than just this one variable, you can copy those too), paste it into the form, and it fills everything in for you. Make sure Production is checked; really, make sure everything there is checked (I'm not sure it all matters, but check it to be safe), then press Save. Our environments now have an OPENAI_API_KEY, but importantly, for this to take effect we need to deploy to production again. So I'll just open route.ts and add a new line (that's all I'm going to do), then go back to the command line and run the deploy command again, this time targeting production.
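For reference, the handful of Vercel CLI commands used in this section:

```
npx vercel logout   # optional: only done here to show the full login flow
npx vercel          # first deploy: walks through login and project setup
npx vercel --prod   # redeploy straight to the production environment
```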
That command is npx vercel --prod (sorry, it's hard to see down at the bottom of the screen), with --prod making sure it deploys to production; press enter, and because all the login steps from before are cached, it just deploys right away. If you see the check mark next to Production, it deployed to the production environment successfully, so we can open up the website. This is a pretty serious application you've made here: all the data is being managed correctly on the client side, and we also have that server-side GPT-4 function that handles the streaming of data. We can ask the classic question, "what are data structures and algorithms", send the message in, and it streams a response back to us, which is really cool. So now you have a way to interact with GPT-4 in any deployed Next.js 13 application, which is amazing. If you made it to the end of the video, I'd recommend checking out my newsletter, Code Letter; it's all about giving you everything you need to know about in software engineering in three minutes or less. This isn't a sponsorship or anything, it's one hundred percent my own product, so I'm genuinely trying to give you a useful service here; if you're interested in something like this, feel free to go over to thecodelater.com and subscribe. And if you made it this far in the video, I just want to say thank you so much for watching.
Info
Channel: Cooper Codes
Views: 1,474
Keywords: nextjs 13, nextjs 13 react, gpt-4, gpt4, gpt4 app, gpt4 nextjs 13, nextjs 13 gpt4, gpt4 chat, gpt-4 chat, openai streaming, streaming openai api, stream openai, stream chatgpt response, nextjs 13 openai, nextjs 13 gpt-4, gpt4 chat nextjs, nextjs gpt-4, nextjs openai, gpt4 tutorial, gpt4 api tutorial, openai api tutorial, openai gpt4, openai gpt-4, gpt-4 chat application, gpt4 chat app, gpt4 react, vercel ai sdk streaming, vercel ai sdk, ai package, ai package npm
Id: 0qyKl73RMtc
Length: 33min 19sec (1999 seconds)
Published: Fri Jul 21 2023