- And we're live. Welcome, and good morning, afternoon, or night, depending on where you're joining us from. We're very excited to launch this series of livestreams. Our goal is to explore the vast landscape of use cases for Prisma. We plan on hosting these livestreams regularly, so be sure to tune in and follow us on Twitter, where we'll be sharing more streams. Today, Ryan Dsouza will talk about deploying a Prisma Client application to AWS with the Serverless Framework. Some of you may know Ryan from
his developer success work on Slack and GitHub. Ryan is a developer
success engineer at Prisma, and more broadly, Ryan is a full stack developer focused on the JavaScript ecosystem. He loves contributing
to open source projects and exploring the intersection between application development
and the cloud with AWS. So now, without any further delay, I bring you Ryan. Ryan, you're live.
- Thank you very much, Daniel. So let's get started. As Daniel has mentioned, the topic today is deploying a Prisma Client application with the Serverless Framework, and since he has already given a perfect introduction of me, I'll continue with the next slide. Let's go over the topics for today. First, we will give a brief introduction
of what is serverless. Then we will see how Prisma
Client works with serverless: what do we have to do, and what steps have to be taken? And finally, we will do a hands-on demo of deploying Prisma Client to AWS Lambda. So, what is serverless, and how is it different from traditional servers? Suppose we have an EC2 instance or a DigitalOcean Droplet. Serverless is not like that: you do not have to manage any servers yourself, and you do not have to define any auto-scaling rules or other configuration. The provider does that for you automatically, so whenever you have lots of requests coming in, it scales based on the load. That is the main difference: less configuration. So where does serverless really shine? There are lots of use
cases for serverless. The easiest use case would be a one-off job — a cron job, an email notification, anything of that fire-and-forget sort. Serverless is great for such use cases. It is also great when your workloads are predictable, because you can save a lot of costs in that case. There are also cases where long-running servers would be preferable over serverless. Suppose you have lots of spikes all the time; serverless would then become a bit more costly than a long-running server, and in AWS you have reserved instances, which help you save costs, so in that case a long-running server would be preferable. There are also cases where you require things like WebSockets or sticky sessions, where long-running servers are easier. WebSocket support has been introduced in serverless with API Gateway, but I think it would be
quite easy to configure in a long-running server. So with the introduction out of the way, let's move on to what a Lambda handler is. This is where we define a simple Lambda handler: the function is named handler, it takes two parameters, event and context, and here it returns a simple "hello world" string. So what do these two parameters do? The event parameter carries the request data — for example, the body of a POST request or a parameter you want to read from the URL — while the context contains information about the environment you're executing in, such as which operating system it is running on. CloudWatch will also collect the logs that your Lambda handler produces. So what happens when we call this function? When it is invoked, AWS creates an environment for us in which the function will run, whatever is inside the function gets executed, and then that environment is destroyed. But there's one thing to note here: what if we keep some objects outside the handler? What will AWS do with those? AWS actually keeps objects defined outside the handler alive, so that subsequent requests hitting the same environment reuse the very same objects. This is managed by AWS itself — we cannot configure it ourselves. So whatever is defined outside the handler will be maintained across subsequent requests.
Do keep this point in mind, as we will see how it helps us when we get to the demo. Another point to note is cold starts. Lambda suffers from cold starts in the sense that whenever a request comes in and no environment exists yet, AWS first has to create the context — the environment — for you, and only then execute the function, so there is a bit of latency involved. So if you have a mission-critical application that cannot tolerate cold starts, then in that case too
a long-running server would be helpful. Moving on to our next point: how to make Prisma Client work with serverless. I will have to show you a bit of code for this, so let's check out our schema.prisma. This is a simple schema with a single model named User, which will create a User table in our database. Our database is PostgreSQL, and I have provided the database URL via an environment variable, which is configured in my .env file. You can configure this in any way you want; an example .env is in the repository, and I have created two .env files, one for development and one for production. The final and most important part of making Prisma Client work
with AWS Lambda is this part: the binary targets. So what are binary targets? To explain, let me describe how Prisma's query engine works — this is straight from the Prisma documentation. Prisma Client actually consists of three parts. The first is the JavaScript client library, the one you use to write your queries against the database. The second is the TypeScript type definitions, which help us write safer code and give us autocompletion and IntelliSense for all the fields present in our database. Finally, the most important part is the query engine, which is a binary — an executable file. This binary is generated whenever you run the prisma generate command, which places it in a folder, and it is specific to your platform. So if I'm using Windows, it will generate a binary for the Windows platform; similarly, if I'm using macOS, it will generate one for macOS. So what does this binary actually do in Prisma? We have a diagram just for that. Whenever I call the connect function to connect Prisma Client, the query engine creates a connection pool to the database. After connecting, I execute my queries, like the findOne or findMany methods that Prisma gives us. These are converted by the query engine into equivalent SQL statements, executed on the database, and the response is returned to us as a JavaScript object. Finally, when I call disconnect, the connection is closed by the query engine. So this is basically how the query engine works; the main responsibilities of
the query engine lie here. So back to our schema: we have two binary targets, one native and one rhel-openssl. I can show you the two binaries that were downloaded, inside node_modules in the Prisma folder itself. The Darwin one is for my local system — that's the native target. Whenever you specify native, Prisma detects your operating system and downloads the binary for that platform. The other target is the important one, though. We will be testing our environment locally, but eventually we need to deploy to Lambda, and AWS Lambda uses an environment and operating system called Amazon Linux 2, which needs the RHEL OpenSSL binary. It's a different, Linux-based environment, so we cannot use our own
Windows or macOS binaries there; we need to specify a dedicated binary target called rhel-openssl. When we deploy our application to AWS Lambda, this binary will be used to talk to the database — without it, our queries would fail. So this is how we integrate Prisma with AWS Lambda; this is the main part, and without it the configuration wouldn't work.
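Putting that together, the schema.prisma being described might look roughly like this — the model fields and the exact version suffix on the RHEL target are illustrative, so check the Prisma docs for the target that matches your Lambda runtime:

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
  // "native" resolves to the binary for the local OS;
  // the RHEL target is the one AWS Lambda (Amazon Linux 2) needs.
  binaryTargets = ["native", "rhel-openssl-1.0.x"]
}

model User {
  id    Int     @id @default(autoincrement())
  email String  @unique
  name  String?
}
```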
So moving on, let's dive into the code: we will now demo deploying Prisma Client to AWS Lambda. I will be using the Serverless Framework here to make things easy — it's a very good framework and helps me deploy Lambda functions without much configuration. Let me walk you
through the configuration. This is our serverless.yml file. I've given it a simple service name, which you can change to anything — you can just clone this repo and configure it according to your needs. The other thing is the provider: I'm using AWS with the nodejs12.x runtime, and my region is us-east-1. One thing to note here: you should always create the database in the same region you are deploying your serverless application to, the reason being that there is minimal latency and your queries execute faster. My database is deployed in the same region; it is a Postgres database on AWS RDS. We will talk about the functions part later, but let's look at the plugins first. First, serverless-webpack, a very handy plugin that bundles all your dependencies for deployment to AWS Lambda. We also have serverless-offline, another great plugin, which we use to mimic the Lambda environment locally. When developing locally, I don't want to deploy on every change — I need to see and check my changes locally — and serverless-offline is great for that; it automatically reloads whenever you make changes. Finally, the serverless-dotenv-plugin reads the environment files where I have defined all my variables and adds them to the AWS Lambda environment variables, where they will be accessible.
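A serverless.yml along these lines might look roughly like this — the service name, paths, and event layout are illustrative, not copied verbatim from the repo:

```yaml
service: prisma-lambda-example   # illustrative name

provider:
  name: aws
  runtime: nodejs12.x
  region: us-east-1              # keep the database in this same region

functions:
  graphql:
    handler: src/server.handler
    events:
      - http: { path: /, method: post, cors: true }  # GraphQL API
      - http: { path: /, method: get,  cors: true }  # GraphQL Playground

plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-dotenv-plugin
```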
My next file is the server file — the main server, which we have right here. I'm using apollo-server-lambda, because I will be demoing with a simple GraphQL schema, but you can also use a plain REST API or anything else that suits your needs. I've created a simple server using Apollo Server, created my handler with the server.createHandler method, and exported that handler. Now you must be wondering: I've exported this, but where am I using it? That's where the functions section of serverless.yml comes in — it defines the functions we will be using. Here we have a single exported function named handler, and we point to it via its path: the folder is src, the file name is server, and the export name is handler. There are two events to note: a POST event and a GET event. The GET one is for the GraphQL Playground, where we will demo our queries and mutations and where I will show you how we can get our data from the database; the POST one is for interacting with the GraphQL API itself, which you would use in your front end, for example with Apollo Client. Now, remember the point I told you to keep in mind: we can define things outside the handler, and they will be maintained by AWS across subsequent requests. I will show you what that is about. I have created a new Prisma Client instance here, outside the handler. Why have I done that? Mainly for performance reasons: whenever Prisma Client is initialized, it creates a pool of connections via the query engine, and on subsequent calls to our API, the same client is reused again and again because it is defined outside the handler. This is also a performance improvement, because we do not have to instantiate Prisma Client on every invocation inside the handler. This is the way you should always go: define your client outside the handler. And that holds for any configuration — whatever is reusable should be defined outside the handler. So I have defined it here, and I have created my
endpoint and my context. So this is a simple server configuration. Now let's look at the schema. Our schema is a simple Nexus schema — you can use an SDL-based approach as well, but I have used Nexus because it is easier to work with. I've created a simple User type here: this is our User object, which we expose to the client through the main endpoint, with three fields exposed. I have also created a query and a mutation — the query lists the users and the mutation creates a user, which we will see. Finally, I create the schema, which automatically generates my schema.graphql file, and this is what is linked to the server. So this is the basic setup, and now let's see how we can run it locally and deploy it as well. Moving on to our package.json: you can see there are
lots of scripts here, which we will go through, and some dependencies: Nexus, the Prisma CLI and Prisma Client, and apollo-server-lambda for the Lambda part. There are also development dependencies specific to TypeScript and webpack.
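For reference, the two scripts discussed in this section might look like this in package.json — the exact names and flags are my reading of the talk, not copied from the repo:

```json
{
  "scripts": {
    "offline": "sls offline",
    "deploy": "NODE_ENV=production sls deploy"
  }
}
```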
Now let's see how we can run this offline and mimic the AWS Lambda environment on our local system. There is a command just for that: sls offline — sls being shorthand for the serverless CLI. Running sls offline spins up the environment we've created and gives us a local endpoint we can use. So let's run that. I just run yarn offline, which runs serverless offline: it bundles the dependencies with webpack and loads the environment variables, and here we have the endpoint, running on the default port of 3000. Let's check that out. For simplicity's sake, I have already created a query. This will fetch us a list
of users with their email and their name. So let's hit that endpoint. As you can see, it fetches the data, and here we have it: I have some prefilled data, and I get the list of users. Amazing.
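The Playground operations demoed here would look roughly like this — the operation and argument names are illustrative; the real ones are whatever the Nexus schema defines:

```graphql
# Query: list all users with their email and name
query Users {
  users {
    email
    name
  }
}

# Mutation: create a new user
mutation CreateUser {
  createUser(name: "User 3", email: "user3@example.com") {
    email
    name
  }
}
```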
will work after deploying, being deployed to AWS. So whenever I make any changes
to my server or to my schema, this will automatically reload and you will get exactly what you want. It's like a library load functionality you must have used with
other servers before. So this is how we can run
it in our local environment. Let me show you how we can
deploy this to AWS Lambda. I have created a script for that as well, named deploy. What it does is set NODE_ENV to production, so that the production environment file is used, and then run sls deploy. This command is again given to us by the serverless CLI: it bundles all our dependencies with webpack, creates an archive — a zip file — and deploys it to AWS Lambda. It's just a single command, so let's run it and see what happens. Here I have run deploy, and it's running sls deploy. As you can see, it has taken my variables from the production environment: it's using the database URL and the node environment from our production environment file. Webpack is bundling all the dependencies specific to my setup — Prisma, Apollo Server, and everything else required to run the application. Finally, serverless packages the service and creates a zip file to upload. For simplicity's sake, I already deployed this earlier to save time, so nothing has changed, there are no changed files, and it skips the deployment. But after a deployment, you will see something like this: the service name, the stage (dev in our case), the region, and in the end, you'll
get a list of endpoints: a GET endpoint, which is for our Playground, and a POST endpoint, which we will be using in our client application. So this is done — let's check out the final endpoint. Here it is. This is the endpoint we get from the serverless deployment, and whenever you redeploy, the URL remains the same; it just gets updated. So let's try the query again. As you can see, I get the same users back, because I've used the same database for development and production. Now, you may have noticed that the response I got locally was slower than the one I got after deploying. This is exactly why you need to keep the database in the same region as your serverless deployment. It is quite fast, as you can see — after deploying, it brings me the list almost immediately. Let's try a mutation as well and add a user. This is my data; I already have User 1 and User 2, so let's add User 3. And it's done. Let's query the list of users again. There we have it. As you can see, this was quite fast — there was hardly any latency
involved, so to say, because both are in the same region, and I'm getting the data as expected. So this is how easily we can deploy our application with the Serverless Framework and Prisma Client. The main point to note was the binary targets: you need to specify the binary target for your AWS Lambda environment, otherwise it won't work. The rest of the setup is simple because we have used the
serverless framework. Going back to the slides: what should we take away from this talk? First, always instantiate Prisma Client outside the handler. This always helps when you have subsequent requests, since the same client is reused again and again, which is more performant. Another point I cannot stress enough is keeping the DB in the same region as your Lambda function to reduce latency — you saw the difference between the local and the deployed version. Finally, always try to measure the costs before implementation. There are lots of
ways to measure whether the serverless environment is right for your needs. There are websites like servers.lol — a great name and a great website — where you can describe your environment and see whether serverless is really useful for it or whether a long-running instance would be more effective. You also have sites like Serverless Cost Calculator, where you can calculate the cost of your serverless function by providing some basic values; it has costs for all the vendors, including AWS Lambda. So in the end, always measure whenever you're deploying something: is my application right for serverless? Now, my application was a very simple one, and I think at most 10 to 20 users would be interacting with it. Serverless is great there because AWS offers a free tier for serverless — a certain number of requests is free — so you would be paying practically nothing. So serverless would be a great choice in that case. In the end, it's always good to measure and to see which environment is right for your needs. Do keep the points I mentioned in mind. And that was all for my talk, so thank you very much. I hope you have learned something from this, and hopefully we will be
hosting another livestream where we can show how to enhance this deployment using AWS RDS Proxy, which is like a load balancer for databases — that would make your experience even better and faster. So thank you, everyone. - Ryan, thank you so much for that. - Yeah, thank you, Daniel. - That was really insightful. And we already had a
couple of questions come in from the audience. Let me...
- Yeah, sure. - I think the first one
was from Rasselio Diack. I'll display it on the screen. And he asked, "Is it possible to develop
locally with Apollo Server from apollo-server-lambda?" - Sorry, I didn't catch that. - So Rasselio asked if it's possible to develop locally with apollo-server-lambda. - Yes, it's quite possible. I'm using apollo-server-lambda itself, and you can mimic the environment exactly on your local machine using the serverless-offline plugin. If you check out the repo, which we will be sharing, you'll see we used serverless-offline, which does exactly what Rasselio is asking for. - Okay, so essentially, is that part of the serverless plugin that simulates the Lambda environment
locally on your machine? - Exactly. - Okay, got it. And then he added another question, which I tried to answer: if we're using AWS API Gateway, is that another event, or is the serverless.yml configuration file also responsible for creating the API Gateway? - Yep, you're absolutely right, Daniel. The serverless.yml file is responsible for creating the API Gateway endpoints. As we saw, I defined a functions section, and those functions contained the handlers for the GET and POST events. Those endpoints are actually created in API Gateway when we run serverless deploy, and our Lambda function is called through them — that is how API Gateway is connected to our Lambda function. - I see. Okay. And then Ryan Catalogna
asked another question, which isn't directly related to your talk, but I'll take a stab at this. So he asked if the new
distinct API is type-safe. And so I wanted to clarify something: this is a new feature in the newly released version of Prisma Client, and the way distinct works is that it simply acts as a filter. The return type when you call the distinct API is exactly the same as for findMany, and in fact you can use it together with include or select if you want to also fetch relations of whatever you're getting the distinct values of.
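To build intuition for "distinct acts as a filter" — this is an illustration of the semantics, not Prisma's actual implementation — you can think of it as deduplicating the rows findMany would return, keyed on the chosen fields:

```javascript
// Illustrative only: keep the first row per distinct combination of `fields`.
function distinctBy(rows, fields) {
  const seen = new Set();
  return rows.filter((row) => {
    const key = JSON.stringify(fields.map((f) => row[f]));
    if (seen.has(key)) return false;
    seen.add(key);
    return true; // row shape is unchanged, so the "return type" matches the input
  });
}

const users = [
  { name: 'Alice', role: 'admin' },
  { name: 'Bob', role: 'user' },
  { name: 'Carol', role: 'admin' },
];
const byRole = distinctBy(users, ['role']);
// Rows keep their full shape, which is why distinct stays type-safe.
```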
And we already have more questions coming in — okay, do you wanna take this one, Ryan? - Yeah. So, nexus-plugin-prisma: you can absolutely use that, but note that it comes with the Prisma CLI and Prisma Client bundled. I have yet to try it in a deployment, but I think it should work fine out of the box. If you are using nexus-plugin-prisma, Prisma Client and the Prisma CLI are bundled for you, so you can go ahead and use that as well; it won't make any difference to your development experience. - Thank you, Ryan. And we have more questions coming in — another one from Rasselio. Rasselio's really active today, thank you. - Yeah. - So can you see the question? You recommended earlier using a long-running
server for subscriptions — do you have any experience with API Gateway? - With API Gateway, I have created a basic WebSocket subscription, but it was just a demo; I don't have experience with that in production. I usually prefer long-running servers for subscriptions specifically, because they are easier for me to set up and they keep the context and everything intact, but I think API Gateway would be a great option, too. In the end, you have to measure the costs — how long your subscriptions are going to last, how many hits you're going to get — take the cost factor into account, and then decide whether you want to go with API Gateway. But it's perfectly fine to use it as well; no issues. - I see, and by subscriptions here, do you mean GraphQL subscriptions? - Yes, and GraphQL subscriptions in turn use WebSockets
itself under the hood, so I would say yes, that would work well. - Okay, now we have a
question from Dixita. Hello, Dixita. Thank you for the question. "Can the same be followed
for Google Cloud functions or other providers?" - Yes, exactly. The serverless.yml file comes from the Serverless Framework, and the framework supports Google Cloud as well as Azure — I don't know about other environments, but those two are the most popular along with AWS. So yes, you can follow the same approach with other providers. You just have to note which operating system they use for serverless under the hood, and generate the Prisma Client binary for that platform. That is the only thing you need to take note of. - Okay, so: checking which runtime and operating system they're using, and picking the binary accordingly. Okay, and we have a question
specific to that from Rushi, I believe the name is. "Does anyone have problems with deployment when using rhel-openssl? My terminal just hangs." Using Windows. So do you have any thoughts on that, Ryan? - Unfortunately, I haven't
faced such an issue. It might be Windows-specific. But I would have to
reproduce it and check, because I haven't faced that issue on other systems. - Okay, so Rushi, please open an issue with as many details as possible so that we can reproduce and test this. We obviously have a strong focus on supporting Windows; this is part of what we do. - Yeah. - Okay, and we also had another question. - Yeah, okay. So Sachin said that he
would love to see a talk that covers the use of aws-cdk that abstracts away setting
up things manually in AWS. So thank you for that feedback. We'll take that in and
see what we can do there. And I also had another
question with regards to deploying using the Serverless Framework. In general, what are some of the benefits of using the serverless
framework over, say, a vanilla AWS approach, and do you have any experience with other approaches to deploying to AWS? - Yep, I have experience with both the vanilla deployment approach and the Serverless Framework. With the vanilla approach, you have to create your setup — what I've built here with Prisma Client — bundle it yourself, create a zip file, and upload it to AWS Lambda manually. That is not a good experience at all: it gives you no local development experience, you cannot debug locally, and you always have to upload before checking the output. And you always have to upload a zip file, because whenever a deployment grows beyond a certain size threshold, you must create a zip file and deploy that to Lambda — and I'm sure that's true for other serverless environments as well. What the Serverless Framework does is abstract all of this for us using AWS's own service called CloudFormation. It creates a CloudFormation template — also known as infrastructure as code, if you've heard the term; it's quite the buzzword nowadays. It creates an entire template of your application: what the Lambda function will be, what API Gateway will access that function, where you're deploying it, what the connections and function names will be. You just specify that in a simple YAML file and it does the rest for you. It also has third-party plugins, as we explored today — serverless-offline, and serverless-webpack for packaging our backend application. All in all, it provides such a great experience that you wouldn't want to go back to manual deployments again. If you're a beginner and just want to learn how it works, you can deploy manually, but for production I would recommend using
the serverless framework. - I see. Okay. Well, thank you so much for that, Ryan. Do you have any other
things you wanna share, any notes before we wrap up? - Nothing from my end — I hope you enjoyed the talk, and I hope we will come back to the livestream soon with other serverless practices like I mentioned, especially AWS RDS Proxy, which has been introduced recently. That will be a great thing to explore as well. - And this AWS feature you described, RDS Proxy — is that essentially a connection pooler? - Yep. You might be familiar with other connection poolers like PgBouncer. What they do is create a connection pool for your database and reuse that same pool, which helps a lot — it's like a load balancer for your database. It uses the pool efficiently and doesn't have a big memory impact, so your connections to the database don't get exhausted. It's a great way to handle database connections in production. - Great. Well, thank you so much, Ryan. And I've got some wrap-up
notes that I wanna share. We have some more upcoming streams — Ryan just mentioned the follow-up stream he will give, covering more aspects of serverless and AWS. We also announced version 2.3.0 of Prisma Client today, and some of the exciting features in that release are: first, a new version of the VS Code extension, which really helps when you're working with the Prisma schema — there's a new rename helper that makes changing model names a lot easier. Besides that, there's the new distinct API, which essentially maps to SQL's SELECT DISTINCT, so it allows you to select distinct values. And lastly, we have the middlewares API, which allows you to hook into the control flow of Prisma Client and do all sorts of things, such as query benchmarking; it's heavily inspired by the way Koa does this, Koa being the Node.js framework. And then lastly, we already have another stream scheduled for next week, which I will be hosting. It will be the first in a series of streams covering building modern backends with Node.js, and next week's stream will focus on data modeling with the Prisma schema and doing CRUD operations and aggregations with Prisma. We're actually going to look at a real-world problem for this — I'm sure some of you are already a bit tired of seeing the examples with a user and a blog, as we often do, so this is a great opportunity to look at some of the challenges that arise when you're actually building a real-world application. So that's it from us. We thank you so much for joining us today, and we hope you enjoyed it. If you have any feedback, feel free to reach out. We also have the public Slack, which you can join to engage a bit with the Prisma community — the address is slack.prisma.io, in case you're not already there. And having said that, I wish you a nice evening and a nice rest of the day, and thank you again, Ryan. - Yep. Thank you, Daniel.