Hands on with the Vercel AI SDK 3.1

Captions
The Vercel AI SDK is a TypeScript toolkit for building AI applications. In this video we're going to build a few applications to understand how it works. We'll start by building a few terminal programs to understand AI SDK Core, then we'll build a chatbot with AI SDK UI, and then we'll go beyond text, streaming React components from the server to the client with AI SDK RSC. Let's get started.

To start, we're going to create a TypeScript file and define a main function. Within that main function we're going to call generateText. To use generateText, or any AI SDK function, we first have to provide a model. Let's import the OpenAI provider and then specify the exact model we want to use, in this case GPT-4o. AI SDK Core has been designed to make changing models as easy as changing two lines of code, so let's see how we can switch from GPT-4o to Gemini Pro: we first import the Google provider and then specify the model we want to use. Let's go back to GPT-4o for this example. Now we need to specify our prompt; we're going to ask GPT to tell us a joke. Finally, we log the generated text to the console. Let's run the script in the terminal and see what happens. The model responded: "Sure, here's a lighthearted joke for you: why don't skeletons fight each other? They don't have the guts." That's pretty good, but did you notice there was a bit of a delay between when we ran the script and when the model returned a response? We can solve this with streaming. Streaming allows us to send the model's response incrementally as it's being generated. Let's update our example to use streaming. All we need to do is change the generateText function to the streamText function and then handle the streaming result: we use an asynchronous for loop to iterate over the resulting text stream and log it to the console. If we run this now in the terminal, let's see what happens. And just like that, our
joke is streamed, ChatGPT typewriter style. How cool! We now know how to generate and stream text with a large language model, but notice that the model's response doesn't contain just a joke. Wouldn't it be nice if we could return our joke in a structured format? With the help of Zod, a schema validation library, we can do just that. We're going to go back to generating rather than streaming, so we've got our generateText example. To force the model to return a structured object, we first change generateText to generateObject, then import Zod and define a Zod schema for our joke. Our joke is going to have two keys: setup, which is a string, and punchline, which is also a string. We can also optionally describe each of our keys to ensure the model has the appropriate context to give us a great generation. Finally, we log the resulting object to the console. Let's run this script and see what happens. Now we have our joke, but in a structured format. Let's see how it did. Setup: "Why don't scientists trust atoms?" Punchline: "Because they make up everything." Again, not too bad.

Just like in our generateText example, you may have noticed a bit of a delay between when we ran the script and when the model returned a response. We can again fix that using streaming. Let's update our example to use streamObject. First we change the function from generateObject to streamObject, and then, to handle the streaming response, we use an asynchronous for loop to iterate over the partial object stream and log each partial object to the console. Let's run this in the terminal and see what happens. Awesome, our structured joke is now streamed directly to the console. As you can see, AI SDK Core makes it simple to call any large language model. But while LLMs are powerful, they're also known for hallucinating, that is, making stuff up. We can solve this, and allow the model to interact with the outside world, using tools. Tools are
like programs that you can provide to the model, and the model can decide as and when to use them. Let's expand our joke example to allow the model to get the user's location and weather and then incorporate that into a new joke. We're going to start with a simple generateText example, except this time we're passing dynamic information into the prompt using template strings, in this case the user's location from a local variable. Now let's define a tools object and create our first tool, which we'll call weather. First we need to give the tool a description. This is super important, because this is what the model uses to decide whether or not to use the tool. Next, we provide a Zod schema for the parameters necessary to run the tool. Finally, we define an asynchronous function that will run if the model decides to use the tool. You can run any asynchronous code here, for example calling an external API to get the weather for the user's location, but in this case we're just going to compute a random number and return it as the temperature.

Now we can check whether the model decided to use the tool, and if so, pass the result to another large language model call to generate our joke. We want this joke streamed to the user, so we import and call streamText, and just like before we pass in a model and a prompt: GPT-4o, and a prompt that now incorporates both the user's location and the result of the weather tool call. Finally, to handle the streaming response, we use an asynchronous for loop, iterating over the text stream and writing it out to the console. Let's run this in the terminal and see what happens. "Sure, here's a joke for you: why did the Londoner bring a fan to the Thames? Because with the temperature at 27°, even the river needed a cool breeze." Not GPT's best joke, but how cool: we gave the model access to the
external world. Great, now we've covered the fundamentals behind AI SDK Core, a unified API for calling any large language model. Let's see how we can use another AI SDK library, AI SDK UI, to build a simple chatbot. We're going to use Next.js and the App Router. First, let's create a route handler; this is where we'll call the model from. We'll start by defining a POST request handler; this function is asynchronous. Next, we pull the messages from the request body. Then we import and call streamText, passing in a model, in this case OpenAI's GPT-4o, and the messages from the body. Finally, we return the streaming response by calling toAIStreamResponse on the result. Great, now let's create our page. First we add the "use client" directive, because we'll be using hooks and interactivity on this page. Next we import the useChat hook from ai/react. We destructure messages and iterate over them in the UI, and then we destructure input, handleInputChange, and handleSubmit, which manage everything we need to interact with our API route. And that's it, that's all we need. Let's run the dev server, head to the browser, and see what we've got. First we'll say hello, and then we'll ask for a joke. Awesome: in just 40 lines of code we built a chatbot just like ChatGPT. This is the power of AI SDK UI, which provides framework-agnostic hooks for quickly building chat and completion interfaces.

But what if we wanted to go beyond text? With Next.js 14 and the AI SDK RSC library, we can stream React components directly from the server. Let's build an application that incorporates everything we've learned so far: a chatbot with streaming, access to a tool, and, as an added bonus thanks to the AI SDK RSC library, the ability to stream React Server Components. Let's dive in. Because this application is a little more complex, we're going to cover everything at a
higher level. Let's start with actions.tsx. Unlike our previous example, we're going to use Server Actions instead of route handlers. If you haven't come across Server Actions before, don't worry: they're just server-side functions that you can call directly from the client. This action is called continueConversation; it takes an input, which is the user's message, and returns a client message. Every client message has an ID, a role, and, finally, the React component it displays. First, because this is a Server Action, we use the "use server" directive to ensure it only runs on the server. Next, we pull in the history with the getMutableAIState function from ai/rsc.

Now to where the magic happens: the streamUI function. First, just like every other AI SDK Core function, we need to pass a model; in this case we'll use OpenAI's GPT-4o. Next we pass our message history, appending the most recent message. Now comes the important part: the text function. This matters because it's the default response callback if the model decides not to use any of the tools available to it (we'll be defining ours shortly). This function must return a React component. So what's happening here? The model exposes content, which is the content of the model's response, and done, a boolean telling us whether the model's response is finished. First we check if the response is done, and if so, we append the assistant message, the model's response, to the history. Finally, we return the model's response in a plain div. Now let's define a tool. Remember, we want a tool that incorporates the user's location into a new joke. The first two parts of this tool should look familiar: we first define a description (remember, this is super important because it's what the model uses to decide whether or not to use the tool), and then we pass a Zod schema describing the parameters necessary for the tool to run. But
now, unlike our previous tool example, we don't pass an execute function; we pass a generate function. This function, like the text function, needs to return a React component, and importantly, we can perform any asynchronous code we want here. So what's happening? First we yield a loading component. This is sent back to the client before we perform any of the asynchronous work, so it's a nice way to give the user some feedback about what's going on. Next, because we want to generate a joke and return a component that displays it, we need a structured object, so we use the generateObject function. We first pass in a model, GPT-4o, and then a schema. Our schema is defined in another file, but it's identical to the previous joke schemas we used: it has a setup and a punchline, both strings and both described inline. After the schema, we pass a prompt; in this case, ours asks the model to generate a joke that incorporates the user's location. Finally, we pass the resulting joke object to a Joke component. This, like the joke schema, is defined in another file: it's a simple client component that takes our structured joke, shows the setup, and then, with the click of a button, reveals the punchline.

Wow, that was a lot, but we're almost there. Now on to the front end. First we use the "use client" directive, because we'll be using interactivity and hooks. We use the useUIState hook from ai/rsc to manage the conversation history, and the useActions hook to pull in the action we defined in the previous step. We first render the conversation on the page, then create a form that, on submit, updates the conversation state and calls our action, passing in the message as input. Finally, we have the input and the submit button, and that's it. Let's run the dev server, head to the browser, and see what we've got. First we'll say hello, then we'll ask
for a joke that incorporates London. We'll see our loading state and then our joke: "Why did Big Ben break up with London Bridge? Because he couldn't stand her constant late-night notifications about the time difference in London." A pretty bad joke, GPT, but how cool: we just streamed a React component from the server to the client.

And that's the Vercel AI SDK. We learned how to call any large language model with AI SDK Core, how to quickly build a chatbot interface with AI SDK UI, and how to go beyond text, streaming React components directly from the server to the client with AI SDK RSC. With our most recent release, 3.1, we've also launched brand-new docs that you can find at sdk.vercel.ai/docs. We're really excited about this release and feel we're one step closer to becoming the complete TypeScript framework for building AI applications. We can't wait to see what you build, and if you have any questions, reach out to us on X or open an issue on GitHub. Thanks so much!
Info
Channel: Vercel
Views: 27,887
Id: UDm-hvwpzBI
Length: 13min 4sec (784 seconds)
Published: Tue May 21 2024