[MUSIC PLAYING] [APPLAUSE] MYLES BORINS: Come on. Thank you, everyone,
for coming out today. I'm Myles Borins. I'm a developer advocate
on Cloud Platform. Also one of the maintainers
of Node.js and do a lot of work with product
managers, like Jason, who's doing really amazing
stuff and will tell you all about it right now. JASON POLITES: I'm Jason. I'm the product
manager on Cloud, specifically in the
serverless space. Before we dive in, I
just want to make sure that we make it
clear that serverless can mean many, many
things in Cloud Platform. It's not just compute. There's many services that
we have that are serverless, in the sense that you
don't manage infrastructure and you pay per usage. For the purpose of this talk,
we're really narrowly scoping that just to compute. And concretely,
in Cloud Platform, that means two products-- Cloud Functions and App Engine. For those unfamiliar,
Cloud Functions is our event-driven
serverless compute platform. You deploy functions and
they execute in the cloud. And App Engine is the OG
serverless from Cloud Platform. Web applications, again, totally serverless, scaling to zero. MYLES BORINS: Yeah, so
quick poll of the audience. How many people in the audience
have used App Engine before? Awesome. And how many people
have been using it for more than five years? That's really amazing. App Engine celebrated its 10
year anniversary this year. It's pretty exciting that we've
had it as long as we have. And just a quick recap
of all the things. It was launched at I/O in May of 2008. Originally a Python runtime
with a Memcache API. Introduced Java
runtime in April 2009. And App Engine moved
out of preview in 2011. We introduced a Go runtime in
2012, a PHP runtime in 2013. Java 8, last year in
June, and most recently in I/O, with Steren
over here, we introduced the
world to Node.js 8 on App Engine standard on our
new second generation runtime. So our new second generation runtimes offer an open source, idiomatic experience. You write code the
same way for App Engine that you would to just run
on any version of the runtime you're writing. If you know how to write Python
to run it on your computer, you know how to write Python
to run it on App Engine. The new second
generation runtime can use any extension,
or binary, or framework. If you're writing Node,
pretty much any Node module that you want to
use, you can use. Any Java library or Python library will run on the second generation runtimes. And this is a big difference
from the first generation runtime. And as I mentioned, Node
8, we announced at I/O. But today, we wanted to let you
know about two new runtimes, which are Python 3.7
and PHP 7.2, which you can use on App Engine
today, which is really exciting. [APPLAUSE] JASON POLITES: So on the
Cloud Functions side, again a quick recap. We went to beta at this
event last year in March. And we launched to beta
with a Node.js runtime. We had some cool features like
HTTP invocation in the box, so you could just
deploy a function and call it directly via HTTP
without any other services required. We did things, like Stackdriver
integration, and, of course, it's a serverless
product, so you only pay when your code runs. And in the time
between then and now, we've been doing a lot of
work on adding capabilities to the platform,
improving the performance across the board. So I want to take you
through some of the things that we've been working on. So probably the most
important and upfront is that we are now
generally available. Cloud Functions is out of beta. Thanks. [APPLAUSE] It's open to all. It's ready for production use. And we even have an SLA. So we're sort of standing
by the availability. We're rolling out
to more regions. In beta, we were only available
in the US in one region-- US Central. We're now available
in four regions-- two in the US, one in
Europe, and one in Asia. We'll be rolling out more
regions progressively. But those are available today. An interesting feature
of Cloud Functions is that with these new
regions, what may not be obvious is that
you can actually deploy different functions
in the same project that are in different regions. This allows you to have
different capabilities that are in the same project,
so all the same permission models, the same IAM
roles, and so on, but addressing customers that
may be in particular regions. You can also do things,
like have functions with the same name in different
regions in the same project.
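For example, the deploy commands might look roughly like this; the function name and regions are just examples, and flag spellings may vary by gcloud version:

```bash
# Deploy the same function name into two regions in one project.
gcloud functions deploy helloWorld \
  --runtime nodejs8 --trigger-http --region us-central1

gcloud functions deploy helloWorld \
  --runtime nodejs8 --trigger-http --region europe-west1
```

MYLES BORINS: So we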
mentioned the new runtimes for App Engine. But there is also some
runtime announcements for us to make for Cloud Functions. First of all, I know I'm
really excited about Node 8 for Cloud Functions. So you can now, as of
today, write Cloud Functions using Node 8.11, which brings
in support for async await. For myself, that
was a major change to the way in which
I wrote JavaScript, thought about asynchronicity. It also has a new function
signature, which we'll be showing you in a minute. If you want to deploy, you
use the runtime nodejs8. Or if you're doing
it through the UI, it's available through
the dropdown menu.
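For example, the deploy command looks roughly like this (the function name is a placeholder):

```bash
# Deploy an HTTP function on the Node 8 runtime.
gcloud functions deploy helloWorld --runtime nodejs8 --trigger-http
```

So if you're writing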
functions today with Node 6, you may recognize this. This is what you would have for
a background function, where the top one is a function
wrote with callback. And the second one
is a function that's able to return a promise. The API has changed a
little bit for Node 8, where you could see we now
have a data and a context, as opposed to an event. Previously, you would
have to dig into the event to get all this information. This is just available for you. It's a lot faster. You can provide async
functions as the function itself, which allows
you to then use the await keyword
to await the results of asynchronous operations. So as we can see, in
this async function that we have here at the
top for hello_pubsub, we can do a whole bunch
of asynchronous stuff. And if we had multiple of
these asynchronous operations in a row, we would
actually be able to do those in a linear
pattern, which is, for myself, a much
better mental model. Below, we can see the
function signature for HTTP functions,
which remains unchanged. The main difference
being, again, you can provide async functions
and use the await keyword within it.
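Concretely, a minimal sketch of that new background signature might look like this; the function name and payload handling are illustrative:

```javascript
// Node 8 background function: (data, context) instead of a single event.
exports.helloPubsub = async (data, context) => {
  // For Pub/Sub triggers, the payload arrives base64-encoded in data.data.
  const message = data.data
    ? Buffer.from(data.data, 'base64').toString()
    : 'empty message';
  console.log(`Received ${message} (event ${context.eventId})`);
  // With async/await, sequential asynchronous steps read linearly:
  await new Promise((resolve) => setTimeout(resolve, 100)); // stand-in async work
};
```

So Python 3.7 is really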
exciting as a new runtime. You can write your Cloud
Functions using Python. You deploy using
the runtime python37. Here's an example of an HTTP and a Pub/Sub function. You get a data and a context
for a Pub/Sub function and a request for hello_http. This is based on Flask. A lot of people
in here use Flask? Cool. Yeah, really nice minimal
idiomatic Python way of writing HTTP servers. You can do GET, PUT,
POST, DELETE, OPTIONS. The requests themselves are
based on Flask requests. So if you know the
API signature of that, you already know how
to get up and running. And if you don't,
the docs are there. And the responses just need to be compatible with Flask's make_response. And you'll be able to
start getting functions up and running with Python today.
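As a rough sketch of that shape (the name and the greeting are illustrative):

```python
def hello_http(request):
    # request is a Flask request object.
    name = request.args.get('name', 'World')
    # Anything flask.make_response accepts can be returned.
    return 'Hello, {}!'.format(name)
```

Python background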
functions-- so as with Node, you get the data, which,
in Python, is a dict. And then you get a context,
which is a Google Cloud Functions context. And that context has a whole bunch of information that you can dig into. To signal that the function
is successful in Python, all you need to do is
return from your function. And if there's any problems,
you raise an exception. And Stackdriver Error Reporting will get automatically notified. And you'll be able
to do whatever you want to set
up in the process to handle error recovery.
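A minimal sketch of that background shape, assuming a Pub/Sub trigger:

```python
import base64

def hello_pubsub(data, context):
    # data is a dict; Pub/Sub payloads arrive base64-encoded.
    message = base64.b64decode(data['data']).decode('utf-8') if 'data' in data else ''
    print('Received {} (event {})'.format(message, context.event_id))
    # Returning normally signals success; raising an exception
    # reports the error to Stackdriver automatically.
```

So there are common features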
between both of these. You have logs. So if you log out using the
standard ways of logging in both libraries,
those are going to be sucked up by Stackdriver
and ready for you to dig in. If you've ever used the UI
for deploying functions, if you've done it with Node,
where you can just kind of click immediately
after running a test and see the logs to
see if it worked, you can do it the exact same way with Python. Uncaught exceptions,
as I mentioned before, are automatically handed
off to Stackdriver Logging and Stackdriver Error Reporting. And similarly to
Node, Python also will automatically
do the installation of all of your dependencies. For Node, you have
a package.json. For Python, you have
a requirements.txt. So just list your requirements
in the requirements.txt. And we are going to install
all your dependencies for you in the cloud.
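For example, a requirements.txt might just be a few pinned lines (packages and versions here are hypothetical):

```text
flask==1.0.2
requests==2.19.1
```

The context object that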
I mentioned before, this is how it breaks down. You have the event ID, the
timestamp, the event type, and the resource. These are only for
background functions. You don't get a context
object with HTTP functions. For those, you get a request
and a response in Node. And you get the
request for Python. But these are the options
that you would have from a background function. And you can use this in
your code to determine, was it Pub/Sub
that triggered me, or am I being triggered
by another event? But everything is
in there that you need to understand the
context of the event and where it's running. This is an example of
what that may look like. This would be for a publish
event on a Pub/Sub queue. And you can get all the way down
to even the version of Pub/Sub if you're breaking down
that string in the type. So a lot of
information and context that you can get from this.
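In Python, a hedged sketch of reading those attributes, with made-up values in the comments:

```python
def hello_pubsub(data, context):
    print(context.event_id)    # e.g. '123451234512345'
    print(context.timestamp)   # e.g. '2018-07-24T20:00:00.000Z'
    print(context.event_type)  # e.g. 'google.pubsub.topic.publish'
    print(context.resource)    # the Pub/Sub topic that triggered us
```

For those of you who are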
using Firebase functions, Cloud Functions for
Firebase is now also in GA. And it has full
support for Node 8, so you can write
your ECMAScript 2017. You can use async await. The language features that are supported by GCF are also supported by Firebase functions. There's new runtime
configuration options that allow you to control your
region and your memory and timeout. These are really great granular controls and a great thing to dig into for the productivity of your applications.
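A sketch of those options with the Firebase SDK; the region, memory, and timeout values are just examples:

```javascript
const functions = require('firebase-functions');

// Pin the region and set memory/timeout for one function.
exports.resize = functions
  .region('europe-west1')
  .runWith({ memory: '1GB', timeoutSeconds: 120 })
  .https.onRequest((req, res) => {
    res.send('ok');
  });
```

One of the really cool things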
that's coming with this launch, too, is that Firebase
events for Analytics, Firestore, Realtime Database, and Authentication are now available directly
in Cloud Functions. So you can use your
Firebase events to trigger your Cloud Functions. Both of these different products
have different great ways of using them. And now, you can,
depending on what you're doing with your
stack, make decisions based on what works
best for your team. JASON POLITES: Thanks. One of the things that
we talked about before in the new runtimes
for App Engine, this applies to that as well. We're rebasing the underlying
operating system on Ubuntu. And one of the main
reasons for doing this is our ability to provide
system libraries and native binaries in the image. In Cloud Functions,
historically, we've really only just allowed
one, and that's ImageMagick. With this switch
to Ubuntu, we're really broadening the scope. I've listed a couple there-- ImageMagick, FFmpeg. This is something a lot of people have asked for, for video processing. And we've also made
sure that we've bundled in all the
system libraries required to run Headless Chrome. So you can take
screenshots of a web page from within a Cloud Function.
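As a hedged sketch, assuming a puppeteer dependency in package.json and an HTTP-triggered Node 8 function:

```javascript
const puppeteer = require('puppeteer');

exports.screenshot = async (req, res) => {
  const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
  const page = await browser.newPage();
  await page.goto(req.query.url || 'https://example.com');
  const png = await page.screenshot();
  await browser.close();
  res.set('Content-Type', 'image/png').send(png);
};
```

And if you want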
the big list, here is the list as of
a few weeks ago. It may have been actually
added to since then. I won't spend too
much time on that. Another feature that a lot
of customers have asked for is environment variables. We're announcing
today that you can now specify key value pairs that
are bound to a single function. But they don't exist
inside your source code. You set them at deploy
time, just any arbitrary number of key value pairs. And then those will be
injected as literal environment variables at execution time. So at deploy time,
we'll save those. And then at
execution time, we'll inject those into
the environment. This is available in the
web UI, the cloud console. You can just set them in there. It's also available
in the API and the CLI you saw just before.
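A sketch of that from the CLI; the variable name and value are placeholders:

```bash
# Bind a key-value pair to one function at deploy time.
gcloud functions deploy helloWorld \
  --runtime nodejs8 --trigger-http \
  --set-env-vars BOT_URL=https://example.com/bot
# At execution time it's a literal environment variable,
# e.g. process.env.BOT_URL in Node or os.environ in Python.
```

OK, Myles, demo. MYLES BORINS: Yes. So all the things we just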
talked about, I decided, hey, let's make something silly that
uses all of them all at once, so I can show you
some things on stage. I've got a demo
here on my machine that I'm going to show you. We got here the Math Bot. It's really friendly,
nice person. I'm going to start a new
conversation with the Math Bot. And I'll just be, like, hey,
Math Bot, what's 1 plus 1? And Math Bot needs to think. It's been sleeping. So if you're not familiar
with scale to zero, when you're not using your
functions, they're not running. So you're not paying for it. It also means that you have
what's known as a cold boot. The first time that
you do something, it's going to take
a second for it to spin up, and figure
everything out, and get running. Hey, Math Bot. Maybe I shouldn't have
picked something so hard. Oh, I know why. Because I didn't add Math Bot. Didn't know I was talking to it. And this is actually
a really great feature for Hangouts Chat. So if you're making
bots, they're not just going to be responding
to anything that you're saying inside of the chat. They're only going to actually
respond when you talk to them. We've got tutorials
available online. So you can follow and get
your own Hangouts bots going. So we could see Math Bot got
back to me pretty quickly. I could just be like, what's pi? It's not going to
say anything smarmy. It's going to be
pretty quick on that. And while I'm waiting for that-- JASON POLITES: It's not
going to say anything if you don't address it. MYLES BORINS: Oh, yeah. I keep doing that. So you know what? I'm just going to go
to Math Bot right here. I'll be like, what's pi? And it'll say, hopefully, 3.14. I can be like, what's
the meaning of life? And, of course, it's
going to come back-- JASON POLITES: It's
navigating the Wi-Fi. MYLES BORINS: It's getting
really esoteric with me. So what we've got going on
here are two Cloud Functions. We've got a Node.js Cloud
Function and a Python Cloud Function doing two
separate things. So I have my own expertise. And one of them is
not machine learning. I know Node really well. So when I want it to start
instrumenting with the Hangouts Chat API and I need it to do
things I haven't done before, it was way more idiomatic
and straightforward for me to get started with Node. So I used Node to create
a bridge between Hangouts Chat and Cloud Functions. But I don't know anything
about writing chat bots. But I was Googling
around and found this thing called
ChatterBot, which is available on PIP,
which had a demo that worked right out of the box. So I was able to get a chat
bot spun up with Python. So if you take a
look here, we could see here's the Hangouts bot. Here's the source code for
the Node side of things. And we're using node-fetch. I'm a big fan of fetch. Fetch is an API for making network requests that returns promises. We've got this function,
which is an async function to ask the bot, which makes
the body, sets up the headers. It does a try-catch,
where it goes and waits for the response
from the bot URL. This is, as we can see up
here, an environment variable that we've injected
inside of the function. And we wait for
the response text, which is resolving
the result of that.
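Reconstructing it loosely, that bridge function looks something like this; the payload shape and names are guesses:

```javascript
const fetch = require('node-fetch');

async function askBot(message) {
  const body = JSON.stringify({ message });
  const headers = { 'Content-Type': 'application/json' };
  try {
    // BOT_URL is the injected environment variable.
    const response = await fetch(process.env.BOT_URL, { method: 'POST', headers, body });
    return await response.text();
  } catch (err) {
    console.error(err);
    return 'Math Bot is stumped.';
  }
}
```

And the rest of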
this is the function that we're handing
out to Hangouts Chat. It's just saying, hey, grab
the message and ask the bot. And then asking the bot
is heading over here to the Python code,
which we can see, which is about 41 lines of code. And it's mostly just
instantiation code, where I've set up
two logic adapters. One of them is
custom, so I can make that joke about the meaning
of life that failed. And then the other one is
the mathematics evaluation in there. And it was pretty
straightforward to tie the two together. If we take a look here back
in the code and we click Edit, we can quickly take a look
at the environment variables that we've got down here. And we can see that I've
got the bot URL right here. And that's just the URL
of the other function that I'm going to call. So I didn't need to
embed that right in. If I change the
background function there, I can quickly get
that going later. But what we've seen
here is a combination of using Node for
what Node's good for, using Python for what Python's good for, and using environment
variables to make things a little bit more flexible. And then all of this
plugs into Hangouts Chat, so that I don't
need to calculate how many grams of coffee to
use when I'm doing my 1:13 ratio of pourover. I can just ask Math Bot, and
it can get snarky with me. But so that's the
end of this demo. We can head back to
the slide deck now. And I think it's
back to you, Jason. JASON POLITES: I think so. Thanks. So we'll just keep on
rolling with the new things that we're adding. For those people out there
who have used Cloud SQL, you may have bumped
into this problem that we had with
Cloud Functions, where it was pretty difficult
to talk to Cloud SQL without doing a U-turn out
to the internet and back. So we fixed that. You now have a direct
connection to Cloud SQL. You just use this socket path that you can see there-- /cloudsql. /foo is just the name of my database there. This would just be
in your Node code. In this case, it's JavaScript. There's a little thing there
you may have picked up on. This is creating a
connection pool of size 1. Why do I need a
pool if it's only 1? And that's really a
side effect of the fact that it's a serverless
environment. It's going to scale up and down
from 0 and up to some number and then back to 0. And so if you're
scaling up rapidly-- and I'll talk a little bit
more about this in a minute-- you might overwhelm
your database. If each function is creating
10 database connections, and we do the magic of
scaling it up for you, and suddenly, you [INAUDIBLE]
your own database. So we recommend a
connection pool size of 1. But we recommend you
use a pool, because it has some nice features,
like auto re-establishing your connection if it fails.
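A sketch of that pool of one, with placeholder credentials and instance name:

```javascript
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 1, // one connection per function instance
  socketPath: '/cloudsql/my-project:us-central1:my-instance',
  user: 'dbuser',
  password: 'dbpass',
  database: 'foo',
});
```

So this was the previous world. We would ask customers to do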
this horrendous U-turn out to the internet and use
SSL to secure it and so on. Well, no more of that. Now, you can just
go straight across. And that's a secure
connection managed by us. As I was just talking before,
scaling controls is a new thing that we're adding. This is exactly the
scenario I just described. You have some nice
evenly-shaped traffic, notwithstanding the fact
that my diagram is not even. Traffic's coming in, the
database is happy, everyone's happy. You get some more traffic. And we do a great job
of scaling up for you. But then, of course, your
database is overloaded. And you can't deal with any more traffic. So what do you do in this situation? Well, we're introducing
scaling controls. So you can actually limit
the number of instances that your function
is going to use. Why would you want to do that? Why would you want
to limit the traffic? Well, the simple
canonical example is, let's say you have one
database and two functions. And one function is your really
important production function that you don't want to go down. And maybe it's what your
mobile clients are calling. And that's the important one. And then maybe you have some
other background function or some other analytics
or reporting thing that it doesn't really
matter if it doesn't succeed. And maybe it has a bug,
and it gets into a loop. And it causes us to
scale up many instances of this non-important function. And it causes the
database to go down, which causes your really
important function to cease operating. So in this case, you
would put scaling controls over that less
important function to preserve the
database to make sure that the traffic is prioritized
for the more important one. AUDIENCE: [INAUDIBLE] JASON POLITES: I'm sorry, what? AUDIENCE: [INAUDIBLE] JASON POLITES: Yes. Or you could use API throttling. The comment was, or if there's
a throttling limit in the API, yes, that would
also be appropriate. The limits are per function,
default limit of up to 1,000. And then you can change
it on a per-function basis.
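This shipped as a deploy-time option; a hedged sketch, since the exact flag spelling may have differed in early access:

```bash
# Cap the less important function at 10 instances.
gcloud functions deploy reportingFunction \
  --runtime nodejs8 --trigger-topic reports \
  --max-instances 10
```

So this is what it looks like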
with the scaling control, same sort of situation. Regular traffic coming
in, but now, new traffic. The gray vertical line is
representing the boundary of your scaling limit
that you've said, well, I don't want to scale
beyond this limit. So those boxes with the dotted
line will not be created and your database stays happy. MYLES BORINS: For myself,
I've wanted scheduling for a really long time. I don't know about all of you. But we're really excited to talk
today about Cloud Scheduler. Cloud Scheduler allows you
to schedule HTTP or Pub/Sub tasks at intervals down to one minute. It can invoke Cloud Functions
over HTTPS or over Pub/Sub. And it can also invoke App
Engine on a relative URL. So this means that we can start
setting up timers for events that we want to happen,
like setting up cron jobs. If you have tasks that need to
happen at a certain interval, this is the way in
which you can set it up. First, we're going
to take a look at how that looks for Cloud Functions. You create a Scheduler job. You name that job. You have a message body, which
is what the body of the message will be when it's triggered,
and you schedule it. You can do that using just
an English-like grammar. So you say every one minute. You specify the
URL that it's going to trigger and the
HTTP method that it's going to use to trigger it. So now, this endpoint is
going to get triggered every single minute with POST. So if you have things
that you need to do, like checking an endpoint or
doing anything via a batch job, you can schedule it this way.
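A hedged sketch of creating that job from the CLI; the job name, URL, and body are examples, and early-access flags may have differed:

```bash
gcloud scheduler jobs create http my-job \
  --schedule="every 1 minutes" \
  --uri=https://us-central1-my-project.cloudfunctions.net/helloWorld \
  --http-method=POST \
  --message-body="{}"
```

For App Engine, it's a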
little bit different. You'll notice the two
biggest differences are the relative URL and the service. With App Engine, you can specify
the specific relative URL to the service that you're
going to be triggering. And you can specify the service,
because with App Engine, you can deploy multiple
services to the same project. So you don't necessarily want
to be triggering the default service.
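And a comparable hedged sketch for an App Engine target, calling out the relative URL and the service:

```bash
gcloud scheduler jobs create app-engine my-task \
  --schedule="every 1 minutes" \
  --relative-url=/tasks/cleanup \
  --service=worker
```

I believe that the schedule's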
probably the only thing that's mandatory. But you can go through
the docs, and take a look, and see exactly how you
got to spin this up. But it's really great. It's intuitive. And I really love
this capability, and I think it expands a lot of the things that we can build with our
serverless offerings right now. We're going to bring Dima
up on stage-- a quick round of applause for Dima. [APPLAUSE] And just don't worry
about the fact that it says product marketing on there. Everyone at Google writes code. DMYTRO MELNYK: Thanks, Jason. Thanks, Myles. I'll stick to this mic here. Hey, everyone. I'm Dima and I lead product marketing for Cloud Functions. And let me show
you a demo, which I built using some
of those new features that Jason and Myles
have just discussed. Actually, I need some slides. Can we just please go back to
the slide deck for a moment? Thank you. So sometimes you
have to rely on-- thanks, Jason--
an external system as a part of your serverless
application architecture. I chose a MySQL database in this case. But that could be any
third party API, really. So imagine a scenario when
your database goes down. It could be due to an
outage or a serverless part of your system scaling
out of proportion and overwhelming your database. So you try to write something
into the database and it fails. What's next? You could actually
implement some retry logic. And you could retry
things immediately. And this could help you
when you are dealing with a small database delay
or a small interruption in connectivity. But what if your database
is totally slammed and you have thousands
of messages that failed? You might not want
to overwhelm it with additional retries
every couple of seconds. In that case, you can save your failed messages to something like Pub/Sub, for example. But there is a
problem with Pub/Sub. It's really fast. And your retry
logic will actually trigger right away,
possibly still too fast, if the database is down for
an extended period of time. And obviously, if you
relied on Pub/Sub, you actually would have
to re-queue those messages if they failed again. So when you deal with an
unknown amount of downtime, you might want to persist
those messages into something like Cloud storage, so that you
can replay them later, and take your time processing them.
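An illustrative sketch of that persist step; the bucket and object names are made up:

```python
from google.cloud import storage

def save_to_dead_letter(message_id, payload):
    # Persist the failed message so a scheduled job can replay it later.
    bucket = storage.Client().bucket('my-dead-letter-bucket')
    blob = bucket.blob('failed/{}.json'.format(message_id))
    blob.upload_from_string(payload, content_type='application/json')
```

So to trigger this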
retry logic on schedule, you can actually stand up a VM. And you can create
a virtual machine, and things like Linux crontab
have been around for, like, 50 years. But we're doing serverless here. And what if your cron
server goes down? Do you have another
one watching? This is where Cloud
Scheduler can really help. So let's see it in action now. If we can please go
back to the demo slides. So what you're looking at
here is a front end part of this demo. It's built on Firebase. And all it does,
it just visualizes messages moving between
different components of the back end
architecture in real time. So what we're
going to do first, we will send some load to the HTTP endpoint, which is sitting
in front of the database. And all it does, it writes some
messages to the MySQL database. And I just click
the task button, and it sends a few
requests to the database. It's a live demo, so there
is some latency involved. OK, there they are. We see some blue and
green boxes appear here. So the blue boxes,
they represent messages that were sent to the database. And the green ones are the
corresponding successful writes. So when everything is
working as expected, we would expect to see the equal
number of those boxes, which is exactly what we're seeing here. Now let me actually go
to my project settings and take the database down
to simulate an outage. I'm using Cloud SQL just for
the purpose of this demo. So what I'll do, I'll just
go ahead and restart it, and that usually
takes a few moments. And while the database
is restarting, I'm going back to
the visualizer app and sending a few more
requests to the database. So now, we see some of
the red boxes pop up here. Those represent messages
that failed as expected, because the database is down. So those messages are
saved to the dead letter queue, which is built with
Pub/Sub and Cloud Storage. And I, of course, have
another Cloud Function there, which I'm not showing
for simplicity. All it does, it just
saves the messages as they pop up on
the Pub/Sub topic. Now what we can do is we
can go ahead and take a look at the dead letter
queue and also create a Cloud Scheduler job
to rerun those failed messages. So this is my dead letter
queue cloud storage bucket. And sometimes it takes a moment
for the UI to refresh here. So I'm just going to
click Refresh button. And we can see some of
our failed messages. They started to pop up here
with the appropriate time stamps from a minute or so ago. After taking a look at
that, let's go ahead and create a Cloud
Scheduler job. I actually pre-populated all
the values just to save time. And as you can see, all you need
to do here, it's really simple. You just have to name it. And I'm using Hello Next
18 here as the name. Set up frequency, such
as every Friday afternoon or, as in our
case, every minute. And also select target. In our case, the target is
the HTTP endpoint, which replays the failed messages. And I'll also go ahead
and start a timer here to track approximately
how much time we have left until that Cloud
Scheduler job fires off. And as you can see here in the
console, it hasn't run yet. It will very soon. I just want to reiterate
it was super-easy to create a scheduled job with
Cloud Scheduler. So we actually accomplished
something very powerful here with just a few clicks. We now have this code that
will run every minute, go to the dead letter queue, and
try to replay failed messages, if any. OK, so I guess the Cloud
Scheduler job already triggered. From my initial test, I think it
happens on top of every minute. And so that was an approximate
timer for dramatic effect. But it already triggered and
actually successfully replayed all of those failed messages. And just to double check,
I'll go back to the bucket and refresh it. And it should be empty. Yes, and it is
empty as expected. So just switching back
to the visualizer app. As you could see, it
replayed successfully. And in the real life
scenario, obviously, if you're building
this for real, and you had a lot
of failed messages, like, thousands of them, there
might be a situation that when your dead letter queue holds
way more messages than you can actually replay
within the minute. So that could be solved by
either increasing the time interval or introducing
another cloud storage bucket to hold messages
that are being replayed. And to recap, I just
talked about this idea of how you can build a
reliable system using some of the primitives that we
have available to you on GCP today. And this pattern can
be especially useful when you are relying
on external systems as a part of your
implementation. Thank you and back
to Jason and Myles. [APPLAUSE] JASON POLITES: Thanks, Dima. If we can go back
to slides, please. Oh, is this slides? It looks the same. OK, let's just keep rolling. More on Cloud Functions-- Access and IAM. We're also very excited
to announce VPC access. Again, if anyone's used
Cloud Functions before, you would have plausibly
bumped into this limitation. Consider you have some number
of virtual machines, GCE instances in Cloud Platform. Those instances, let's say,
are all on the same network. That's great. They can talk to each other. And then you have a Cloud
Function out here in the wild, but it cannot talk to those
virtual machines for very irrational and complicated
network security reasons. Good news is we fixed that bug. So you can now connect the
Cloud Function via a connection service to that
network A and, thereby, grant access from the function
to any virtual machine on that network. So you just simply add the
Cloud Function to the network, and it allows you to
egress, to transmit bytes from the function to
the virtual machine. Very simple command line
at deploy time, connected VPC with the name of the
network, and you're done.
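Going by the flag named on stage, the command is roughly this; the early-access spelling may differ:

```bash
gcloud functions deploy myFunction \
  --runtime nodejs8 --trigger-http \
  --connected-vpc my-network
```

On the side of security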
controls, something that we get asked for a lot. When we started with Cloud
Functions, when we announced the beta last year,
I mentioned before that one of the features
that we wanted to promote was this ability to
deploy a function and give it an HTTP URL
and curl it straight away. And it's very, very
good for simplicity and terrible for security. Because this HTTP
function is invokable, effectively, globally. So what we're introducing
now is IAM controls on the data plane itself,
on the invoke of a function. We have a new role called
Cloud Functions invoker, and you can assign it
to a particular user. In the example above, I'm saying
Alice can invoke my function called helloworld. And in the example below, that's
effectively making it public. We have a special string
in Cloud Platform called all users, which effectively
says that all users have invoke permission on my function.
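A hedged sketch of both bindings from the CLI; the user and function names are examples:

```bash
# Let one user invoke the function.
gcloud functions add-iam-policy-binding helloworld \
  --member=user:alice@example.com \
  --role=roles/cloudfunctions.invoker

# Effectively public: grant invoke to the allUsers principal.
gcloud functions add-iam-policy-binding helloworld \
  --member=allUsers \
  --role=roles/cloudfunctions.invoker
```

And this means you can now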
deploy a Cloud Function that cannot be invoked by anyone
in the outside world. And this is perfect for
function-to-function invocation use cases. It looks like this. Traffic from the outside world
comes into our serving stack. There's an IAM check, which
checks does this request have the appropriate credentials? Are those credentials authentic? And if they are, the function
receives the request as normal. The same situation
applies if one function is calling another function. Its identity will be verified
by this IAM check and its role verified to make sure that
it has the appropriate role on that function. OK, that is the bulk of
the regular announcements. For those who are
paying attention, the N is just
representing the number. It doesn't really have a number. And that's because this
next bit is not ready yet. This is a sneak peek. If anyone was in the
serverless spotlight earlier, you may have seen something
in relation to this. And those who weren't,
you get to see it now. Coming very soon is
serverless containers. We talked a little bit before
about these next generation runtimes in App Engine. We talked about
some commonalities in the infrastructure. Underneath all of that-- and this has been true
actually since day one of Cloud Functions-- we actually accept as an
input into the infrastructure a Docker image. And so what we're
going to be doing is exposing that to customers. So you can bring along with
you any pre-built Docker image. You can use any
base image you like. I mentioned before
we default to Ubuntu. If you want alpine Linux,
bring your alpine Linux. You can arbitrary system
libraries, arbitrary language runtime. But everything else is the same. You get the same serverless
execution environment. You don't manage any servers. You pay only when
code is executed. All the other bells
and whistles that you saw before-- the scheduling, the
environment variables-- well, environment variable, you
can probably do yourself. But all the surrounding
things are the same. And you just bring
your container image. And to prove that
this is true, I want to invite Steren
onstage to give you a demo. [APPLAUSE] STEREN GIANNINI: Thanks. Can you hear me? Yes. So we were
brainstorming this demo. And why would you use
containers when this here provides you Node and Python? Well, maybe you have your own
favorite programming language, like Go or Rust. We tried them. It works. But is that [INAUDIBLE]? No. Well, what if we
could actually write a Cloud Function
that renders a 3D image based on a URL parameter? How cool was that? And to do that-- if you can
switch to the demo screen-- to do that, actually, I wrote
a very small HTTP function. Let me show you the code. It's a Python function,
but simply, it will execute a 3D software
to render an image out of a predefined 3D model. And so this software is not in
the Cloud Function base image, the one that you get when you
use the Python version of Cloud Function. To use this software,
I had to put it into the image of my function. And to do that, I simply wrote
these 10 lines of code, which are literally a Dockerfile,
so that we describe how to build that container image. So first thing to
note is that it starts from the official
Python base image. Nothing fancy here. Then what I'm doing is I'm
installing a 3D software, which is a C++ module
available on Ubuntu. And then I'm starting my app. I'm exposing the right port. And I'm starting the
app as it should be.
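A hedged reconstruction of roughly what those ten lines look like; the renderer package and entrypoint are guesses:

```dockerfile
FROM python:3.7

# Install the 3D renderer (a C++ package from the Ubuntu/Debian archive).
RUN apt-get update && apt-get install -y povray && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt

# Listen on the expected port and start the app.
EXPOSE 8080
CMD ["python", "main.py"]
```

OK, let's deploy that. So the first step would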
be to build this-- the Dockerfile and the
code-- into a container image. For the sake of time,
I already did that. If I wanted to do it, I
would have used docker build. Or I could have used Cloud
Build that builds that for you in the cloud. But this image has
already been built. Now, I'm just going to
deploy it to Cloud Functions. gcloud functions deploy. You know that part. The new thing is --image. And then I point to
my container image that I stored in
Google Cloud registry. Looks good. Here we are. We are deploying the
image to Cloud Function, so it should take
around 30 minutes. [INTERPOSING VOICES] As you see, I could have
used any programming language I want, as long as I can build
it into a container image. I could have used any
OS package I want, as long as it is
available in my image. Or even I could
bring my own binary. We have customers who
have special binaries that they need to import
into their Cloud Function. That's what they do. They use containers to do that. Here we have it. So let's click on it. So as I told you, this
function takes a URL parameter, which is the location. Because as you know, we
have many Cloud Next coming. So today, we are
in San Francisco. And what's happening here
is that the 3D software is rendering an image
based on the input I give. So the next one is
in Tokyo, I think. We have London, so quite nice. Of course, I also-- [APPLAUSE] Thank you. I did all the three
models for fun. It's OK. Here we have it. MYLES BORINS: You
promised me you weren't going to share that one. STEREN GIANNINI: I'm
not showing this one. Right, thanks. JASON POLITES: That's a 3D
server error right there. STEREN GIANNINI:
That's containers on Google Cloud Functions. You can sign up on
g.co/serverlesscontainers to get access to it. Thank you. [APPLAUSE] JASON POLITES: OK,
that's basically the end. Just to recap, we've talked
about a lot of things. Many of them are
available today. I'm sure many of you want
to go out there immediately and start writing
Python Cloud Functions. Python is available today. Node 8 is available today. Some of these other
features are rolling out over the next couple of weeks. Just check the
Cloud Platform blog. We'll be having blog
posts coming out over the next few
weeks with instructions on how to get them. And if they're in early access
mode, then how you can sign up. Thank you very much. [APPLAUSE] [MUSIC PLAYING]