Vercel AI SDK - Chat GPT Clone with Next.js & more!

Captions
An open-source library for building AI-powered user interfaces, or in simpler terms, a bunch of utilities that make building apps with your favorite LLM APIs (like the OpenAI API, for example) a lot easier: that's the new Vercel AI SDK. This thing is getting a lot of hype, and for good reason. I want to show you how quickly you can create a ChatGPT clone using Next.js that streams all the messages as they come in. Streaming is actually one of those things that takes a bit of work to set up properly, but with the SDK it's already handled for us. We'll be using the OpenAI chat completions API, and we'll also take a look at the gpt-4-vision-preview model, which lets us pass images as input and have the model answer questions about them. At the end, I'll also share a way I found to easily log all the requests we make, which can be really helpful not only for cost and usage tracking but also for debugging.

In terms of setup, first we just need to create a base Next.js app with pnpm dlx create-next-app, followed by whatever we want to name the app. We'll then open up the project and install the new dependencies we want, which are the ai and openai packages. Before we start to code, we also need to generate a new OpenAI API key and add it to our .env file.

To get started, we can create our first route handler to handle the requests coming from our client. This will live in the app/api/chat/route.ts file and accept a POST request. Here we import OpenAI from openai, and import the SDK utils OpenAIStream and StreamingTextResponse from ai. We then create the OpenAI client, and next we set the runtime to edge so that we can cut down a bit on request latency. For the route itself, we grab the messages from the request body; this is the history of all the messages in the conversation, including the one that was just submitted. Now we can make our request to the API by awaiting openai.chat.completions.create, passing in the gpt-3.5-turbo model, making sure streaming is enabled, and including the messages we were given. Then we have to convert the response into a friendly text stream using the OpenAIStream util that we imported; this will decode and extract the text tokens in the response and re-encode them properly for simple consumption by the other utils. Finally, we return a new StreamingTextResponse, passing in the stream we just created. This extends the normal Response class by adding the necessary headers for us.
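Putting that together, a minimal sketch of what the route handler could look like, following the imports and options described above:

```ts
// app/api/chat/route.ts
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create the OpenAI client using the key from our .env file
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Run on the Edge runtime to cut down on request latency
export const runtime = 'edge';

export async function POST(req: Request) {
  // `messages` is the full conversation history, including the newest prompt
  const { messages } = await req.json();

  // Ask for a streaming chat completion
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Decode the token stream and re-encode it for the other utils to consume
  const stream = OpenAIStream(response);

  // StreamingTextResponse extends Response with the right streaming headers
  return new StreamingTextResponse(stream);
}
```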
Before we can test this, we'll quickly set up the UI for it. We'll create a chat folder in our app folder and then a page.tsx inside it; this is so we can create separate pages for each UI we're building. To get started, we'll have 'use client' at the top, then import the useChat hook from the SDK. This hook provides us with some useful things: the history of our messages, the current value of our input, and handlers for the input change and the form submit. For the return, we'll simply have a block where we loop through all the messages and, based on the role, determine whether each one was sent by us or by the AI; this is what creates the chat bubbles. I also made a very simple chat bubble component just to format them nicely. The form is what holds our input box and is where we use the submit handler; the input box itself is simply where we type our prompt.
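A rough sketch of that chat page might look like the following (I'm rendering plain labeled divs where the video uses its small chat bubble component):

```tsx
// app/chat/page.tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  // useChat tracks the message history and input value for us,
  // and by default submits to the /api/chat route we just wrote
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {/* Render one bubble per message; the role tells us who sent it */}
      {messages.map((m) => (
        <div key={m.id}>
          {m.role === 'user' ? 'You: ' : 'AI: '}
          {m.content}
        </div>
      ))}

      {/* Submitting the form posts the prompt to our route handler */}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
      </form>
    </div>
  );
}
```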
We're now finally ready to try out the application, which we can do by running npm run dev and going to localhost:3000/chat. The UI is pretty basic, nothing fancy here, but we can try it out by typing something like "what is a POST request for", and the response will be streamed in, as opposed to waiting for the whole thing to process and then returning the full response. That was pretty simple to set up, and since we're keeping track of messages, it should have context for what we're asking, so we can ask something like "what was my previous question", which will actually return the question we asked before.

With the GPT-4 Vision model, there aren't many differences in code between this and the chat example we just did, so we'll simply copy our code into its own API route and page for vision. On the front end, not much changes; the difference is that the submit handler will now pass in an extra data property, which holds the URL of the image we're providing. For our example, this is just a simple image of a sparrow. With the useChat hook, you can also provide the specific API route you want to hit, which in our case is /api/vision. The back end requires a change to the model we use: we'll now use gpt-4-vision-preview. We'll also set a max tokens amount to 15, since we don't want to spend a crazy amount. The messages, though, are where we have to craft things a little. From the messages that come with the request, we can slice the array, grab all the messages up to the new one, and spread those into a new array. For the current message, which is the last one in the list, we need to extract the content and then pass in the extra information regarding the image, which is just the image URL of the actual image. The rest is all the same. To test this, we can ask something like "what does this image show", and it will describe the sparrow image to us.

Another thing the guide on their website touches on is handling errors. To make our app a bit more robust, we can just wrap our POST route in a try/catch. When an error is caught, we check if it's an OpenAI API error, in which case we extract the values from the error and return a new response; otherwise, we just throw the error again, since that most likely means our own app is broken.

I won't go into the next part of the tutorial, because I want to show you a faster and easier way, but the OpenAIStream util also provides us with some very useful hooks. We can have code trigger whenever the stream starts (the example shows you could use this to save your prompt to the database), another trigger for each token being streamed (useful for something like debugging), and lastly a trigger on completion for when the stream finishes, if you're looking to save the completion to your database. All of these are useful hooks that offer you flexibility, which is exactly what you want out of an SDK like this.

And now to the fun part. I could have added a database to track my requests to the OpenAI API, including tokens, prompts, and responses, but I came across a platform that promises to track everything for me all in one place, which is pretty nice. It seems like it's still pretty early days, and I think there are other, more complete competitors out there like LangSmith, but for what I need here, I was surprised by how easy Montelo is to set up. Their free tier is also pretty generous, and they're open source, which I found a big positive in this space. To get started, all I had to do was pnpm install montelo, sign up on their website, and create a new project. They have a section for API keys, which I just took and added to my .env file. From reading through the docs, it looks like as long as you have both your Montelo key and your OpenAI key in your .env, you can just instantiate their client and it will also set up your OpenAI client. Where I was doing openai.chat.completions, I just need to prefix that with montelo., so now it'll be montelo.openai.chat.completions. It also needs a name property for the log that will be created. Now, if I go back to the chat UI and have a conversation with a few back-and-forths about Vercel and Next.js, I can see that all these conversations were successfully logged. This was pretty nice, because it didn't take me long to set up, and now I can see all this information nicely formatted. It's good to see the cost breakdown, the tokens, and how long my request took, but also all the input and output; all of this is really good information for debugging as well.
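Based on that description, here's a hedged sketch of what the Montelo version of the route might look like. The montelo. call prefix and the name property come straight from the video; the Montelo constructor and the MONTELO_API_KEY variable name are my assumptions, so check their docs for the exact API:

```ts
// app/api/chat/route.ts, Montelo variant (a sketch, not verified against their docs)
import { Montelo } from 'montelo'; // assumed export name
import { OpenAIStream, StreamingTextResponse } from 'ai';

// With both MONTELO_API_KEY (assumed variable name) and OPENAI_API_KEY in .env,
// instantiating the Montelo client also sets up the OpenAI client for us
const montelo = new Montelo();

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Same call as before, just prefixed with `montelo.` and given a `name`
  // so the log created for this request is easy to identify
  const response = await montelo.openai.chat.completions.create({
    name: 'chat-completion',
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  return new StreamingTextResponse(OpenAIStream(response));
}
```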
If you're interested in checking out the documentation for either the Vercel AI SDK or Montelo, links are in the description. And yeah, make sure to like the video and subscribe for content like this, and until next time!

Info
Channel: CodeBrew
Views: 1,831
Keywords: vercel ai sdk, vercel ai sdk chatbot, ai sdk, vercel ai sdk streaming, next js chatbot, next js chat gpt clone, vercel ai sdk docs, vercel ai, montelo, monteloai, langchain, langchain chatbot, chatgpt api streaming, vercel ai sdk 3.0, vercel ai sdk usechat, nextjs chatgpt, nextjs streaming, api response streaming, ai sdk vercel, vercel ai sdk tutorial, langchain chatbot tutorial, langchain tracing, langsmith, next js chatgpt, vercel sdk, vercel ai tutorial, vercel ai chatbot
Id: EGNikYrS87k
Length: 7min 2sec (422 seconds)
Published: Mon Mar 11 2024