[MUSIC PLAYING] JASON POLITES: My name's Jason. I'm a product manager on
Google Cloud Platform. And as you can see,
today we're going to be talking about building
serverless applications with Google Cloud functions. Before we do that, we
might need to just describe what this word, this weird
word "serverless" that we see popping up all over
the place actually means. And you might think that
it means no servers, right? Well, kind of. It could mean any, one,
or all of these things, so no need to manage
servers, no need to even think about servers. There's a provisioning argument. Maybe it's about
what you pay for. Maybe it's not
about billing, maybe it's about developer experience
or the unit of deployment, et cetera. For the purpose
of today, I'm just going to narrow that scope. So we're going to talk
about these four things. Serverless, by this definition, means you don't even need to think about servers, you pay only for what you use, it's event-oriented, and you deploy functions and not apps. OK, what does that really mean? In order to answer
that question, I'm going to tell
a little story. This is Tony, and
Tony has an app. And he's going to run his
app on a virtual machine. And so he launches the app,
and the app gets some traffic, and his virtual machine
is humming along nicely. And then his app
gets more popular and he gets more traffic. And now, he can't fit on
a single virtual machine, but that's cool, he just gets
a second virtual machine. And the extra traffic that
couldn't get on the first one flows over to the second one. Now, if he's got some sort of
load balancing ahead of this, it probably looks more something
like this, and that's fine. Sweet. But there's a problem with
this, and it's these gray boxes. This means he's
over-provisioned, so he's bought and paid for
capacity that he's not using, and that's costing him money. Let's see if we can do better. We're going to start
with a virtual machine, but this time, we're not
going to deploy an application into that virtual
machine, we're just going to deploy a container. And then when that container
exhausts its resources, I deploy another container,
and so on and so on and so on. And then if I'm load
balancing, maybe it looks something like that. So that looks good. But then what if
I need one more? Well, I need a whole
new virtual machine. So we're kind of
back to square one. We still have an
over-provisioning problem. I'm still provisioning
capacity that I'm not using, and that's costing me money. OK, let's try again. What if we just
remove the server? Now when he wants to
add a new container, he just adds a new container. Sweet. We still do have some
waste here, though. You can see there's still a
bunch of gray boxes there. So that's still somewhat
over-provisioned and still costing money. What if we were to shrink
this unit of compute? Rather than deploying a
VM image or a container, what if we made it
as small as possible? So each little unit
is going to have its own amount of capacity. And so the amount that we're
over-provisioning is even less. You're still going
to have a little bit. You're not going to be able
to get rid of it entirely, but this is a much
better place to be. And so you end up
in this world where you have independent units of
compute, independent functions, that scale independently. And the amount of
over-provisioning is as low as it can be. Because they scale independently, they're isolated from one another, they're stateless, and they more closely
match the traffic pattern. So some functions will execute
frequently, some functions not so much. Sweet. So that's, broadly
speaking, a view of computing that's
serverless, and what that means and why it's important. But how do you use this, and
why does this matter to you? So just hold that thought. I'm going to switch gears and
talk about some use cases. Imagine you're building
a booking system. And I apologize that this
is a somewhat dry and boring example, but it'll
serve the purpose. You have some
application, some bit of compute, that's
accepting data. Maybe it's accepting orders. And you want to write those
orders to some back end storage system. In this case, I'm choosing
Cloud Storage as the example. And so that's really the
guts of your application-- just take orders and
write them to storage. Now you have a new requirement. Maybe you want to
do some indexing or some processing of this
order before it goes to storage. And so you implement, in code,
some indexing or some library that does this work for you. But maybe that library or
that code that you've written depends on a database,
so it needs to write out to a database or an index. And maybe it needs some special
file or rules to do that. And probably, it needs some
sort of special authentication. So that's OK. You put all of that into your
application, and it looks good. Meanwhile, elsewhere, you have
another application, completely separate, that's
taking some other data, but it needs to do
something similar. This time, maybe
it's taking invoices and it's writing
them to storage. And then maybe you also need
to index that data as well, and you want to reuse
some of this code that you've written
over on the order side. So now, again, you need
the same database, network, and authentication configuration
that you had previously. And that's fine. That will work, and
that happens routinely. But this might be a problem. You've duplicated
this configuration and these settings and
this authentication in two different places, not to
mention duplicating the code. So what does that mean
for things like security, things like deployment? If you change this
library, do you have to deploy both
versions of your application simultaneously? Let's see if we
can improve this. So same application. Two different sides of
it-- invoices and orders. This time, we're going to
implement a separate API. And the applications are
going to call that API rather than writing
directly to storage. And that API is going to
write the file to storage, and it's going to
do all the indexing and have those dependencies. This is classically
what we might term, in a very simplistic way,
a microservices approach, where you separate logic into
services rather than baking it into the application. And this is totally fine-- great architecture, works
well, Tony loves it. There is one small
challenge with this, and that is that
previously, you recall, we were writing from the
compute to storage directly. And the storage, let's
say, was a managed service like Cloud Storage. So it had SLAs and it had all
sorts of uptime guarantees. And now we've
created our own API that we have to scale ourselves
and have to manage ourselves. And so we've introduced
a point of failure here. It's not terrible, and there's
many ways to mitigate that, but you now do have to
think about how do I deal with that failure? How does the application deal
with the failure to index the data? What does it do? So we might be able
to do one step better. Same application. This time, I'm going to go
back to what I had before. I'm writing directly to storage. What if the storage
system itself was aware of that write, and it
could emit an event? So when the write occurred,
when the mutation in storage occurred, it would
emit an event, and then I could attach
code to that event, and that code would
have the dependencies. So in this model, I've solved,
sort of, two problems in one. I've solved the original
problem with duplication of configuration in code. And I've also solved the
problem of not depending on a managed service and the
reliability challenges that might present. So looks good, right? Well, there is one
more issue here. Those two applications
were pretty heavyweight, and they required virtual
machines under a lot of load. But this one little piece
that I've pulled out doesn't get called all
the time, or maybe it doesn't need many resources. So now I'm back to this
over-provisioning situation. If only I had a smaller
unit of compute that I could use in place of that. And the good news is that we do. Sweet. So Google Cloud Functions is
that lightweight, small unit of compute. It's a serverless environment
to build and connect cloud services with code. The characteristics of
Cloud Functions are this. You don't need to even think about servers. You pay only for what you use. It is event-oriented, and you
deploy in units of functions, not apps. And you'll recall
that this is where we started in our definition. OK, pretty simple
diagram, but just to give a sense
of what that looks like, your cloud environment
will emit events, as I mentioned. Mutation on the storage
bucket might be an event, mutation on a database
might be an event. Also, external services
will emit events. Typically, this is materialized
as a webhook callback or something like that. So a commit to GitHub. And you can configure
that to call you back with a webhook over HTTP. Your function will come to
life, respond to that event, and then from
within the function, you can do anything that you
can do in a normal computing environment. You can call an API. You can write back
to the database. You can call an external API. In Cloud Functions within
Google, we have two flavors. An HTTP function is a function
you deploy with an HTTP trigger, and we'll give you back,
immediately, a URL with a TLS certificate so it's secure, and
you can curl that straightaway. That would be for
synchronous use cases. The webhook callback will
work out-of-the-box for this. The second flavor is
the background function. This is where it
happens asynchronously. There's an EventBus in between,
typically, Cloud Pub/Sub. This is where you don't
need to know the outcome of the function in real time. You're happy for it to be
processed in the background. And typically, this will be
as a result of an event being emitted from a cloud service. So let's have a look
at each of these in a little bit more detail. HTTP-- this is the
simplest hello world HTTP function you can imagine. We give you an HTTP request
and response object. This is in Node. Anyone familiar with Node-- the request and response is
literally an Express.js request and response. And so if you've
ever used Express.js, then you know how to do this. Couple of things to point out-- I just mentioned it's just
a regular request response, and then you have the full
access to the response object. You don't need to do any
sort of crazy translations on the way in or the way out. It's just regular, old HTTP. This is what a deploy might look
like for that from the command line. We also have a
web UI, which I'll show you in a little while. You just deploy the function. You give it a
stage bucket, which is where we store your
deployment while we're building the function. And then you just say
it's a trigger HTTP, and we'll give you back a URL. Background functions. Pretty similar. The only difference
here is instead of a request and
response, we give you an event, which is the event
that was emitted from the cloud service-- database mutation, storage
mutation, et cetera-- and we give you a callback. The callback is because in
runtime environments like Node, everything is asynchronous. You might be doing some
work asynchronously, and you need to tell us
when your function is done, and so you call the callback. If you don't like using
callbacks, that's cool. You can return a promise. In fact, you can just
return a discrete value and we'll treat it as a promise. Again, for the people in the
audience familiar with Node and JavaScript, this
is particularly useful if you're using a third
party library that's going to return a promise. You don't have to do
any kind of black magic, you just return
what they gave you. And a couple of examples of
deploying background functions. So one here is triggering
off a Cloud Storage bucket, the other is triggering
of a Cloud Pub/Sub topic. Dependencies. It would be unusual,
I think, for you to write a bit of code
that has no dependencies on any other modules
or any other libraries. Typically, you will depend on
numerous libraries of your own, or typically, third
party libraries. In a Node.js environment, they
are expressed in a manifest file called a package.json. You declare your
dependencies in there, and then there's a tool called
NPM, Node Package Manager, that will resolve those dependencies,
download all the files, and package up your app. That means that you can
do something like this. You can require this external
dependency in your code. And typically, you
would declare it like this in your package.json. So when you deploy
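For instance, a manifest declaring one third-party dependency might look like this sketch (the module name and version are placeholders, not from the talk):

```javascript
// package.json (shown here as a comment for reference):
//
//   {
//     "name": "my-function",
//     "version": "0.0.1",
//     "dependencies": {
//       "uuid": "^3.0.0"
//     }
//   }
//
// With that entry in place, the function source can simply do
//   const uuid = require('uuid');
// and npm install resolves it in the cloud at deploy time.
```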
your Cloud Function, we will actually run
NPM install for you. So you don't have to
resolve those dependencies and zip them up into a big
fat ZIP and send them to us. You just send us your source
code in your manifest, and we'll handle the rest. And this is what the deploy
command looks like if you're using a manifest file. It's the same as it was
before, so nothing changes. We just suck that up and run
NPM install in the cloud. Logging and monitoring. So from your Cloud
Function, you just write a console.log command
as you would do normally, as you would do anywhere else. We will capture that and write
it to Stackdriver Logging. From Stackdriver
Logging, you can do a bunch of interesting things. You can pipe those
logs to BigQuery, you can set up a filter to pipe
those logs to a Cloud Pub/Sub topic and have another function
execute in response to that. If you have an uncaught exception in your code, that will get sent to Stackdriver Error Reporting. From there, you
can set up alerts to tell you that something's
gone unusually wrong that we didn't expect with the function. And all of this is available
through CLI, UI, and API. Monitoring. We report the essential
telemetry that you need-- how many times your
function is being invoked, how long it ran for,
and how much memory it used. OK, what does the environment
look like that we run in? This comes up
particularly for those who might be running
any native libraries. And by native, I
mean code that needs to be compiled against the
underlying operating system. So Cloud Functions run
in an open environment, meaning that we just have a
vanilla flavor of Debian Linux. We have a vanilla
flavor of Node.js. We pre-install ImageMagick
just because that's a very commonly
used native library. We give you a local disk at /tmp. Within the environment,
you're automatically authenticated against
other Google services, so you don't have to bundle
in any authentication tokens or consider how you
authenticate your function against other services
in your project. It's automatically done for you. And the last point is
about the compilation. Some node modules will
have a native component. And when you run
NPM install, it's going to compile
that native code. If you run NPM install and
it compiles native code on your Mac laptop
and then you deploy to a non-Mac environment,
you might have a bad day. So the advantage of
having NPM install run for you is that you just
have to name the dependency. We will run NPM install, we
will run the compilation step, and by definition, it's going
to run on the same environment that we're going to
execute on, so you don't run into these
crazy incompatibilities. If the node module you're
using doesn't prepackage a compilation step, and it
depends on some library that is not part of the node
module, 9 times out of 10, you can go and download
the precompiled Debian version of that library. And because it's just a
standard Debian environment, that should work. If you really,
really, really have to compile your own library onto
the image, onto the same image, then you don't get
this image from us, you just go get it
from the internet. Local development. So this is great. I've got this environment,
I'm writing my functions, I'm deploying them,
I'm executing them, but it's a little
slow in the sense that I have to go through
the whole deploy process every time I want
to make a change. It'd be great if we gave you a
local emulation environment so that you can build and
test and iterate and debug your functions locally
before you deploy them. So this is available today. Just NPM install the emulator. It runs a simulated environment
of the cloud environment for your Cloud Functions. It has exactly the
same API surface area. So if you wanted to
write tooling around how to deploy
functions using an API, that tooling will work
with the emulator. So you can easily switch
between emulator and live code. And you can debug with this. So it has a command to
enable the debugger. And then you can attach
your favorite IDE to that and step through
your JavaScript code. Deployment I already
touched on a little bit, but I want to walk you
through how this works. So you can deploy inline. We have an inline
code editor in the UI. Or you can deploy via the
command line or via an API. If you're deploying directly
from your local file system, we'll zip it up for you
without the dependencies, or with the
dependencies if you want to send them to us for
whichever reason you like. And we'll send it
to the API and we'll run the build
process from there. If you don't want to deploy
directly from your local file system, you can also deploy
from your GitHub or BitBucket repository. This is done using a tool called
Cloud Source Repositories. We'll automatically keep in
sync your Cloud repository with your GitHub repository. And so you're actually deploying
from Cloud Source repos, but it is kept in
sync with GitHub, so you don't really have
to interact with it. You just commit to GitHub,
and then trigger a deploy, and your Cloud Source
repo will sync. So I want to talk a little
bit about what actually happens at deploy time. So you've written your code. Let's say you're using the CLI. You've run gcloud beta functions deploy, and then what happens? So you hit Deploy. As I said, we zip it
up if it's in the CLI. When it gets to the
server, we run NPM install. If there are already
packages there, then we might just
run NPM rebuild to capture any native
compilation that needs to happen for the platform. We have, internally, a
thing called create version. So what this is doing
is saying, OK, we have a new version
of this function. I'm going to create a new
version so the existing version continues to exist. And before we do anything, I'm
going to health check that. I'm going to make sure
that that thing is OK. That means that the
environment's spun up correctly, the image is
being created correctly, the runtime has
started correctly. We can't check that your
function is going to work, because your function, of
course, may have side effects. So we can't invoke
your function, but we can do everything
but that to make sure that everything's OK. And if everything's OK,
we'll mark the deployment as complete, and
then we'll gradually move traffic from the old
version to the new version. OK, so that all sounds great. Now what do I do? One of these cases that
pops up quite a lot, and I've touched on it a few
times, is lightweight ETL. So a file comes in, hits
storage or comes into the cloud, and you want to do some
transformation of it before it goes somewhere else. So one solution to that is
have the file hit storage immediately, have storage
call a Cloud Function, and maybe from there, you want
to write it to a database. Or maybe you want to
send it to BigQuery. Or maybe you just want
to write back to storage with a mutated
version of that file. This comes up in things
like image processing, video processing, indexing, like
in my earlier example, entity extraction. You might want to send the
image to the Vision API to pull the entities out of it. Content sanitization
and filtering-- those sorts of things. Semios is one of the
customers we have using Cloud Functions for this. They're in here because
I thought they're a really interesting use case. They have around 150,000
sensors in orchards tracking things like temperature
and pressure, humidity, those sorts of things. And they aggregate all
that data in the orchard and send it back to the cloud. And it passes through a Cloud
Function on its way through, and through that Cloud
Function, they're doing some filtering before
it gets to the database. Another use case we see a
lot, and I also alluded to, is microservices and webhooks. So there's a few different
flavors of how this might work. One is that the client is
calling, typically via HTTP, the Cloud Function directly. So the client is
outside of the cloud, the Cloud Function is
inside of the cloud, and they just call it directly. You can think of this
as a microservice. The client may also be inside
the cloud in this case. API. So similar to a
microservice, but it's where the client is
calling the Cloud Function and treating it as an API. In the same way that
you would if you were to build a website
with exposed endpoints, the mobile client is calling
directly to the Cloud Function. Webhook. This is the example
I gave before where somebody like a GitHub
might be calling you back, or maybe you're sending
an SMS using Twilio, and they're going
to call you back with a webhook when it's done. So microservices,
data ingestion APIs, callbacks from
external services. A good example here is Vroom. Vroom are the largest
used car seller on eBay. And they accept
data from partners. They have a network of partners. They want to accept data. They're using Cloud Functions
as a data ingestion mechanism, as an API for their
partners to call. And the last one I want
to talk about today is more of a fun use case. So bots and actions. So this is where some messaging
client, some chat client, some voice-enabled device is
going to use a Cloud Function to create a custom command-- something to extend the
behavior of this tool. So they'll use
the Cloud Function for doing this-- typically,
again, enabled over HTTP. Messaging bots-- things
like adding custom actions to Google Assistant
and home automation. So the example here is Meetup. They created a
custom Slack command. And so when you type this
Slack command into their Slack channel, it will fire a function that will create a ticket in JIRA. It's part of their way to
automate their development processes. This is my own artistic
talent being shown. It's supposed to be a drum. And then my wife said that it
looked like a bowl of rice. It's supposed to
be a drum, and it's supposed to remind me that
this is the drum roll. You may have already heard, but
we also now support Firebase. So you can cause
a Cloud Function to be invoked from a mutation in
the Firebase Realtime Database, from life cycle events
on authentication, and also from analytics events
from your mobile client. But I don't want to steal
their thunder too much. They have a session
today at 4 o'clock. I encourage you to go
and have a look at that. And you can see here
just those examples in the context of Firebase. So processing payments
through an external provider, sending an email or an SMS,
backend service integration, so connecting to the
rest of the cloud. And then funnel optimization
is leveraging things like analytics to know what
the end users are doing, and to react to that with code. So a great example might be,
depending on what the end user is doing, you might want to send
them a push notification when they enter or exit a
particular analytics cohort. So Cloud Functions
is in beta today. It's available for
everyone to use. Just go to that URL and
you can find out more. That's all I have
in terms of content. Does anyone want to see a demo? [APPLAUSE] Let me rephrase
that-- does anyone want to see me try
to do a demo and have it fail catastrophically
and be embarrassed? Yeah, more people. Great. OK, switch to demo, please. OK, this looks confusing. I'm zoomed out a little bit. That's why the boxes
are [INAUDIBLE]. Let me see if I can make
that a little bit better. That's worse. OK, we'll live with it. OK, what are you
looking at here? All these gray
boxes at the top-- you can think of those as
users calling your function. Now, unfortunately, when
we were creating this demo, we thought it would be easy
just to run it in a browser. It turns out the browser
is not great at generating many, many concurrent
HTTP requests. So even though there's, I think,
about 80 boxes on the screen there, we're not talking
80 concurrent requests. They're just going
to fire off requests as quickly as they can. Each little box
represents a request. When the Cloud
Function executes, it's just going to do one thing. It's just going
to return a color. And the box is going to render
the color that it returns, and that's it. Once we kick this off,
we'll see these boxes switching color
to whatever color the function is returning. It's returning the same
color for every one. Then I'm going to show
you the deploy process. Live, I'm going to
redeploy a function, I'm going to change the
color, and visually, you should see all these boxes
flick over to the new color. And down below, what
you'll see is the latency-- the median response time
from executing this function. So let's see if this works. OK, these are all my
boxes changing color. And below, you can
see the latency. Now, I'm going to switch
over to my Cloud console. Here's my send color function. I'm just going to edit. So you can see here, this is
where I'm sending the response. It's just an HTTP function. I've got some CORS stuff there just to make it callable from the browser. Just ignore that. So I'm going to change this to
send green rather than blue. Hit Save. And recalling what
I mentioned before, that's going to send that
deployment to the cloud. It's going to run NPM install. It's going to build an image. It's going to create a version-- I better switch across
so you can see it. It's going to create a version. It's going to health
check that version. And assuming the
health check passes and everything's OK, that will
become the current version. And right on cue-- right on cue-- it's
going to change. [APPLAUSE] So that's just showing how easy
it is to deploy a function. And if you had some
error, some problem in it, the existing function
keeps ticking. And we will health
check everything, and then we'll switch
across to the new version. And you'll notice
that the colors didn't flick immediately. That's more a
browser issue, but it may be that there are requests currently being served that will complete before they
get to the new version. So you might get some
timing issue in that, and you saw that
reflected there. OK, so that's all I have today. [MUSIC PLAYING]