We recently launched nine new AI integrations
into Vercel, and I want to show you what they look like in this video. So, we've partnered
with some amazing companies like Modal, Pinecone, Replicate, Anyscale, ElevenLabs,
Perplexity, Fal, Lmnt, and Together.ai, and a bunch more coming in the future. We want to
make it as easy as possible for you to get set up with these companies, integrate with them, and start building AI applications. This builds on other products we've made to help AI developers and engineers, like v0, where you can really quickly create UI, or our AI SDK, which we'll demo in a minute. But let's jump right into it. Let's take
a look at the new AI tab in the Vercel dashboard, where you can explore using all of these
different models. And that's not only chat and completion models; it's image generation, code generation, and audio generation. I've picked out a couple that I personally like
just to show you what they look like, but you can browse for whichever one you want to use. Tons of
these are open source, just great models to choose from. I'm going to pick Perplexity's 7-billion-parameter online model and ask, 'How many stars are there in our galaxy?' Okay, let's hit run, and we'll see the output get streamed in using our AI SDK over here on the right. So you can very quickly try out and see which models you like and which you want to integrate into your application. Then, if I do want to pick this model, I can just visit the Perplexity integration, install it, and get set up with my API key. We're off to the races.
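For reference, here is a rough sketch of what a playground run like this looks like as a direct API call. As we'll see later in the video, Perplexity's API is OpenAI-compatible, so a plain fetch with the familiar chat-completions request shape works; the endpoint URL and model id below are my assumptions, so confirm them in the integration's docs.

```typescript
// Sketch (assumptions flagged): Perplexity's chat API is OpenAI-compatible,
// so a plain fetch with the familiar chat-completions body works. The
// endpoint URL and model id are assumed -- confirm them in the docs.
async function askPerplexity(question: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "pplx-7b-online", // the 7B "online" model from the demo (assumed id)
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Usage: askPerplexity("How many stars are there in our galaxy?").then(console.log);
```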
Now, let's look at an image generation model. We're going to check out Stable Diffusion XL and prompt it with 'one horse-size duck fighting 100 duck-size horses'. Classic question, right? Who knows what the model is going to generate here, but it's probably going to be entertaining. Image models usually take a little longer to generate, but the nice thing about this playground is that you have a bunch of models to work from. This image is slightly frightening, not exactly what I would have expected, but you could go with this and tweak from here.
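If you want to run an image model like SDXL from code rather than the playground, one route is through a partner such as Replicate; this sketch follows Replicate's public HTTP API rather than anything shown in the video, and the model version hash is a placeholder you'd copy from the model's page.

```typescript
// Sketch (nothing here comes from the video): running SDXL through
// Replicate's public HTTP API. The version hash is a placeholder.
async function generateImage(prompt: string) {
  const res = await fetch("https://api.replicate.com/v1/predictions", {
    method: "POST",
    headers: {
      Authorization: `Token ${process.env.REPLICATE_API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      version: "<sdxl-version-hash>", // placeholder: copy from the model page
      input: { prompt },
    }),
  });
  // Predictions run asynchronously: poll the returned prediction until its
  // status is "succeeded", then read the output image URL(s).
  return res.json();
}

// Usage: generateImage("one horse-size duck fighting 100 duck-size horses");
```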
Let's look at another one: the English V1 model from ElevenLabs, where you can pick a voice ID and what you want to say. So let's say 'Hello, YouTube,' and we'll hit run, and we get our audio generated here. We'll listen for it: 'Hello, YouTube.' Pretty fast, honestly, and you can obviously do a lot more with their platform, like providing your own voice as well. So, worth checking out.
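As a rough sketch, generating speech like this from your own code might look like the following, based on ElevenLabs' public text-to-speech API; the voice id is a placeholder and the model id is my assumption for the "English V1" model shown in the playground.

```typescript
// Sketch based on ElevenLabs' public text-to-speech API. The voice id is a
// placeholder; the model id is an assumed mapping for "English V1".
async function speak(text: string, voiceId: string): Promise<ArrayBuffer> {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVENLABS_API_KEY ?? "",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        text,
        model_id: "eleven_monolingual_v1", // assumed id for "English V1"
      }),
    },
  );
  return res.arrayBuffer(); // raw audio bytes (MP3) to save or stream
}

// Usage: speak("Hello, YouTube", "<voice-id>");
```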
But let's pop back over to the tab, and I'm going to click on the Perplexity integration that I've already installed. I've already got an API key connected; I went to my Perplexity account and added a couple of dollars to demo with. Now it prompts me with the steps: connect to a project, pull the environment variables locally, install some packages, and then copy-paste the code in. It also gives me a couple of different options for the frameworks I want to use. I'm going to click copy here, and I've already got an application running locally. If you want to see how I generated this UI and connected it, we have another video on our channel. What I've done is pull up this API chat route.ts file, so I'll copy all of this and paste in the code that we've generated,
and we can see that we're still using the OpenAI Node.js SDK. The Perplexity API is actually compatible with it, which is really nice. So not a lot of the code changes, other than the model we're calling, which is the 7-billion-parameter chat model; we could use the 70B if we wanted to. We stream the response back and pass it our messages. If I hit save, it updates the code, and then if I go over here and say, 'What is Next.js?' it gets back a response really, really quickly.
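The pasted route ends up looking something like this sketch: a Next.js App Router route handler using the OpenAI Node.js SDK pointed at Perplexity's compatible endpoint, streamed back with the Vercel AI SDK. The base URL and model ids here are assumptions; the snippet copied from the integration page is the source of truth.

```typescript
// app/api/chat/route.ts -- a sketch of the pasted snippet: the OpenAI
// Node.js SDK pointed at Perplexity's OpenAI-compatible endpoint, with the
// token stream converted to an HTTP response by the Vercel AI SDK.
import OpenAI from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";

const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY ?? "",
  baseURL: "https://api.perplexity.ai", // assumed base URL
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  // 7B chat model from the demo (assumed id); swap in the 70B variant if you like.
  const response = await perplexity.chat.completions.create({
    model: "pplx-7b-chat",
    stream: true,
    messages,
  });

  // Convert the OpenAI-format token stream into a streaming HTTP response.
  return new StreamingTextResponse(OpenAIStream(response));
}
```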
Now, I've already added the environment variable for Perplexity to my local .env file; of course, I'm not going to show that here, but that's where you would drop in the value you got from the Perplexity integration. And that's really all it takes to connect to one of these large language models and start building your first AI-powered application. If you want to see us talk about the other models, or walk through image or audio generation, let me know in the comments and we can make some videos on that. Peace!