[MUSIC PLAYING] SPEAKER: Good afternoon. Please welcome Oren Teich. [APPLAUSE] OREN TEICH: Good afternoon, and
thank you for joining us here. It is an incredible
conference so far. I hope everyone's
really enjoying it. If you haven't had a
chance, I will strongly encourage you to make sure you
find some time to check out all of the different expo spaces. I happen to be a little bit
partial to the one downstairs. Directly below us
on the first floor there's a really cool
air cannon you can fire. So my name is Oren Teich. I'm director of product
management at Google. I'm responsible for serverless. So obviously today we're going
to talk about serverless. I hope you guys like that stuff. I've only been at Google myself
for a little over a year. Been in the industry
for a long time, and really my whole time
been focused on what we can do for developers. And part of why I'm so
excited to be here at Google is the opportunity that it
gives us to not just offer you a single point
solution, but offer you the whole comprehensive thing. Today I'm going to talk about
a lot of different components. And what I want you to
know is it is a lot. It is complicated. There's a lot of pieces. And what we're trying to do
is offer a very comprehensive solution for our customers. And also keep in mind that
this is a point in time that, you know, today is July
24, or whatever, of 2018. And there's going to be
August and September and July 24 of 2019. And we're constantly
iterating on this. And part of why I
bring this up is I want to hear from you as well. I want to know the
feedback you have. Because we're trying to build
the best product for you. And so please, by all means,
come up, send me an email-- I'm just my last name--
at Twitter, or at Google, or whatever. But let us know how we
can make serverless as successful for you as possible. So you know, I think
it's always important, because we're never
going to get over it, to make fun of the word first. If I could choose any
other word in the world, I wouldn't have the
word serverless. Clearly there are many, many
servers that are behind things. And in fact, no
surprise, you know, Google is one of the biggest
server companies in the world, right? We make our own hardware. We have incredible data centers. And I think this is
really important, because it's not just about the
physical machines, of course. It's about how you hook
them all up, what's involved in getting them
together, and maintaining and operating. And it's also about the people
that are behind it, right, and how we actually
keep everything running. And in and of itself that's
important and exciting. But there's another piece. And I don't know if you
know this company Veolia. They're a huge French
multinational company. They used to be part of, I
think, the Vivendi group. Veolia has 350,000 employees. They take care of
trash, electricity, a lot of municipal
services across Europe-- actually in 48 countries,
last I checked. And I was meeting with
their CIO earlier. And he said something that I
hadn't even considered before. And it's really struck me, and
making me think quite a bit. He was talking about
the importance of we live in a scarce resource world. And the reality is that
as we move more and more into computing, as we move and
take advantage of our compute more and more, the
resource costs can go up. And his argument is our
moral responsibility to use these efficiently,
that the only way that we can be really
looking toward the future, and if software is eating
the world, having 10, 100, 1,000 times
more use of software is if we don't use 10, 100,
1,000 times more resources in the way. And their argument
is that serverless is the way that they're
looking to do that. And I think that was
really insightful. I don't have any stats. I only heard this today. But it's something I'm
going to think about a lot, and go back and look into. So at the end of the day we
have these serverless products. And I think it's
worth just recapping: why do we see people choosing it? And what we hear all
the time, right-- I mean, this is motherhood
and apple pie, right? What we hear all the
time is serverless enhances dev productivity. It's about the
operational model, right? And that last one-- pay
only for usage-- is often the way it gets reduced down to. Of course, there's fully
managed security, right? We, of course, take care of
security patches and updates for the system. Of course, you're not having
to think about servers. That is in the title. But this is just one piece
of the overall puzzle. And in fact, it's
one that I want to expand upon a
little bit more, because this is not sufficient
when you talk about it. Haha. I switched slides
a few seconds ago. So we're going to talk
about why that's not sufficient in a second. But before we do
that, I actually want to get Deep up here. Deep is the executive director
from the "New York Times." He's responsible for a lot of
the technical architecture. Deep, come on up and
join me on stage. [APPLAUSE] DEEP KAPADIA: Hey, guys. OREN TEICH: So you know, I'm
not sure if everyone is aware. But "New York Times" has been
longtime customers of ours. They've been using App Engine. They're looking at a lot
of different products. Deep, maybe you could
give us a quick overview of how you use our products. DEEP KAPADIA: Absolutely. So we decided to migrate
from our data centers off to the public cloud
a couple of years ago. And Google seemed to have a very
compelling offering, especially when it comes to
abstractions that are available to developers. One of the big things was
the serverless platform, or what we knew as the
App Engine at the time. And, of course, the offering
has expanded to other things at this point. But we started looking at
App Engine for workloads that we wanted to
scale pretty fast. A lot of the traffic to the
"New York Times" is very spiky. The first thing that we looked
at was our crosswords product. Our crosswords is our
highest-grossing product at this point. So it was a bit of a
chance that we took. But we looked at how
the scaling needs worked for our crosswords app. And what we found was when
we published a crossword out, people just have
at it right away. They want to solve the
crossword then and there. And they're trying to
download the crossword, and solve the puzzles
online, et cetera. And we just could not keep
up with the scaling needs. So we would just
over-provision infrastructure for the crosswords app. And when we started looking
at App Engine, and its scaling abilities, we found that
it was a perfect fit for these spiky
workloads that we have for the crosswords app-- and maybe for other things, too. Oren and I were talking about
using it for our breaking news alerts at some point, where we
could send out a breaking news alert, and quickly ramp up
to the number of requests that we would need to
serve at any given time. So that was one thing
that we looked at. So App Engine was
our first major foray into application
development on GCP. Before that, we'd
been using BigQuery for a little bit, which I've
always said was our gateway drug into Google. But, yeah. OREN TEICH: And on
that, are you only using App Engine and BigQuery? How do you look at the
overall platform today? DEEP KAPADIA: So, no, we don't. We, in fact, use a
lot of Kubernetes. So we use GKE for workloads
that may not fit the App Engine model, where we need to do something very specific. So we doubled down on
both App Engine and GKE during the cloud migration. At this point, all of
newyorktimes.com runs on GKE. So that's another thing. But on the App Engine side,
we have about 30 services and applications that
run on App Engine. OREN TEICH: And it's not that
you just developed for it. You've also created some
incredible open-source repositories around
it as well, right? DEEP KAPADIA: Absolutely. And if you go to
github.com/nytimes, there are a lot of open-source frameworks that are available for people to go look at. And some of them are also built
around App Engine as well. OREN TEICH: Cool. All right, well thank you Deep. I really appreciate it. DEEP KAPADIA: Well,
thank you so much, Oren. OREN TEICH: All right. Thanks. [APPLAUSE] So obviously we have
other customers. "New York Times" is
just one of them. Here's some others
that I really enjoyed hearing some quotes from. I'm not going to
read you the quotes. You can do that on your own. I will call out, in that upper
left corner, the Smart Parking one. There's a fantastic session
that's coming up later held by Morgan Hallmon. He's going to be
talking about doing more with less in serverless events. And he's featuring
the architecture of what Smart Parking has done. It's really, really
interesting to see not just how people
are using serverless for the operational
benefits, but also for programming benefits. Because obviously, as you
decompose your application into small pieces, as you move
into event-driven workloads, you can start to
think about how you do your computing differently. And Smart Parking is a
company that's built entirely around this concept. And it's really remarkable
to see this collection of IoT devices that are sending signals
out across an entire city, and how they aggregate
that for parking needs. So I'd strongly encourage you
to take a look at that session. And that's really a good segue
into even without understanding the operational benefits
that we get, let's not be fooling ourselves. Writing software is
still very, very hard. And something we hear
from our customers all the time is
they come to Google because they don't want to be
in the infrastructure business. In fact, Nick, who is speaking
right now at the same time-- Nick Rockwell, the CTO
of the "New York Times"-- we were having a meeting a
few months back in New York. And he just said flat out, "New
York Times" is a news business, Google is the
infrastructure business. And that's the core
value prop, of course, of why people come
to us, and to GCP. But there's another
part of this too, right? How can we help you not just
solve the infrastructure, but solve the application
development problem? And I think this, to me, is
one of the most exciting pieces about serverless, is it
gives us an opportunity to revisit a lot of the core
things that were done before. Hey, maybe we don't need to have
IP-based security, for example. Maybe we can start to re-imagine
and re-architect things for a more modern world. And so part of it
is how can we take all of these different pieces,
which historically you've had to think about, right? Oh, what [INAUDIBLE]
solution do I use? How am I going to
configure my networking? All the pieces. And frankly, this slide
is on the lower end of what the boxes look like. I've seen versions of this slide
which literally have 10, 20, 30 more boxes than this. And we want you to
just be able to focus on building that application. And so I showed
this slide earlier. And we talked about
the operational model. And I would actually
say that serverless is made up of two things. And I talk about this
all the time right now. There's the operational
model, which I think we all understand. But equally important is
the programming model. And by the programming
model, I mean, of course, that it's going to
be service-based. We can argue about the value of
monoliths versus microservices. And I'd be happy to do that. By the way, short answer. There is no one right answer. And chances are you should
start with the monolith. But anyway, you're still going to have
a services-based ecosystem, right? You're going to be using
things most likely, like BigQuery or a
Redis, or a Memcache. You're going to have a whole
set of services around. Event-driven is incredibly
important in this, right? And what I love about
event-driven architecture is you're shifting who has to
do a lot of the hard lifting. Instead of you having to
wire up all the components, you let the infrastructure
provider-- us-- do it. And of course it has to be open. And we're going to talk
about this some more. You may have seen some of the
announcements that came out. But one of the
catches historically has been if you buy into
your programming model, you're stuck with that
programming model. And we want to
make it possible so that you can take advantage
of all these characteristics, and do it anywhere you want. We think you're going to
do it on GCP, because we think we have the best
operational model. But we're not going to
force you to be there. So we're going to talk about
that in quite some detail. Now, I talked about
services, right? And someone-- I don't
remember-- a year ago I saw on Twitter, someone
said that maybe we shouldn't have called it serverless. We should have called
it "serviceful." And I agree with that. Because of course
there's compute, right, the ability to run some cycles. But the reality is it's all
of the services around it that really make it useful. And just as a thought
exercise, how useful is something if you
can't store any data or have a cache, right? It's awfully hard,
sometimes, to do anything of any kind of scale there. So all of these pieces, Google
has been in this business for a long time. And we do have all
of these pieces. Given that I only
have 37 minutes left, I am not going to talk
about all these pieces, as much as I'd love to. And there's been some
amazing announcements that we've had out there
today that we're not even going to talk about. So, for example, in that lower
left corner of Cloud Datastore, there's a whole new product
called Cloud Firestore. It's incredibly scalable. It's built on a
high-scalability back-end. It has the same API as
Datastore. And now you get just a
bigger, better product. So that's something we announced
today, available for GCP. The list of things I'm
not going to talk about is longer than what I
am going to talk about. But I want you to know
that they're there, and encourage you to
do some more research. What we're going
to talk about today is the middle two
sections, the compute side. So first, App Engine. App Engine's been
around for a long time. I don't know if you know, it's
been around for over 10 years. It's a really
remarkable product. It predates Google Cloud. App Engine is the OG serverless. And I think if you're not
familiar with it, just a quick recap. It lets you take an app
source code, deploy it. And it's kind of
exactly what you want. There's no configuration. You just type gcloud app
deploy, and away you go. It scales.
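[Editor's note: a minimal sketch of what that "gcloud app deploy" flow assumes-- a Python App Engine Standard app. The file names and contents here are illustrative, not from the talk.]

    # main.py -- the whole app
    from flask import Flask  # listed in requirements.txt

    app = Flask(__name__)

    @app.route("/")
    def index():
        # App Engine routes requests here and scales instances automatically.
        return "Hello from App Engine!"

    # app.yaml is the only config needed, for example:
    #   runtime: python37
    # Then:
    #   gcloud app deploy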
Snapchat, famously, is on it. But of course Best Buy, Pocket Gems. You can see some other
customers using it. And you're only paying for
your usage more or less. There's always small
caveats in these things. It is old. But that's not to say it's without its imperfections. So one of the biggest complaints that we had about App Engine
for years is it had a lot of proprietary pieces. So for example, if you
use the Java language, there was class white-listing. There were only certain
allowed classes you could run. If
you were in Python, you couldn't just take
open-source software from out on the web and run it. And I don't know
if you've noticed, open source is kind of a thing. It's worth noting 10 years
ago it wasn't, right? When this came out, the
whole point of App Engine is you're standing on
the shoulders of Google. But what's shifted
is now we need you to be able to stand on the
shoulders of the community. And so we've been
thinking about this. And we've been limited because
of the security needs, right? Because App Engine and the way
it runs in our data center, we're letting anyone run
any arbitrary software. That's a huge
security risk for you. It's a huge security
risk for us if we don't manage that correctly. And so that's why
historically it's been very carefully managed. But one thing Google definitely
has is some smart engineers. And so for many years now
we've been working quietly in the background on a
project that we announced at KubeCon called gVisor. And gVisor was
specifically designed to address this problem. And gVisor's an
open-source project as well that you can go check out. But gVisor is
designed to give you the security you'd expect
from a virtual machine, but with the performance
characteristics you'd expect from a container. And so yay, it's cool. It's implemented in user
space, so it's highly secure. It's written in Go, because Go. But it's really been remarkable. And what we do is we actually
implement the system calls, intercept them, and
then, because we're intercepting them,
we can inspect everything that's going on. We know exactly what's going on. We can make sure there's
no security issues. And because it's Go, we have
memory and typesafe constructs in place. So this is now what's
underpinning many products at Google, especially all of
our serverless products moving forward. And this is what's enabling
us to do everything that I'm going to talk about
for the rest of this talk. So the first thing it's enabled is our second-generation runtimes. The second-gen App Engine runtimes are all gVisor-based. And so what we're
doing here is we're giving you an idiomatic
dev experience, right? Historically you've had
to learn our YAML files. We want to bring it to
you, and have you do it your way. You want to install a package? Do it however your language says to. No more API restrictions,
like I mentioned. And frankly, it's just faster
if you just do benchmarking. And so I'm really excited
to announce that, rolling out in
the next 30 days, we will have for App Engine
Standard Node.js 8, Python 3.7, and PHP 7 all available. [APPLAUSE] Thank you.
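[Editor's note: a sketch of the idiomatic dependency story the second-generation runtimes enable, assuming the Python 3.7 runtime just announced; the package choice is illustrative.]

    # requirements.txt -- plain pip dependencies, no class whitelists:
    #   numpy
    # main.py
    import numpy as np  # arbitrary open-source code, running under gVisor

    def compute():
        # Pulling in native packages like this was previously off-limits
        # on App Engine Standard.
        return float(np.arange(100).mean())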
This has been the number one request we've had literally for years. In fact, we, of course, have
our bug tracking system. And the top of it has
been these languages. And people always ask,
why has it taken so long? And the answer is we needed to
overhaul our entire security infrastructure, this runtime
execution environment. But now that we've
done it, you can see we're able to get a number
of languages in at once. And it means that we
can be more committed to being up to
date in the future. So next I'd like to talk
about Cloud Functions. Hopefully you've all heard of
Cloud Functions already, right? Cloud Functions lets you have
very small snippets of code. Again, it's going to
autoscale with usage. Really nicely, it
lets you pair them to events-- a GCS storage bucket upload or change. There are many, many different event sources you can hook it up to. And, of course, you have
even finer-grained control of where you're only
paying where code runs.
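[Editor's note: a minimal HTTP-triggered Cloud Function, assuming the Python 3.7 runtime announced below; the function name is illustrative.]

    # main.py
    def hello(request):
        # `request` is a Flask request object; the return value
        # becomes the HTTP response.
        name = request.args.get("name", "world")
        return "Hello, {}!".format(name)

    # Deploy, and pay only while it runs:
    #   gcloud functions deploy hello --runtime python37 --trigger-http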
So this has been a long time coming. We've been working
on Cloud Functions. Quite famously it's been
in beta now for too long. So, thankfully, as of
today, it is fully GA. So I'm really excited. Cloud Functions is
now GA, and it's available as of today
in four regions. You can get it in
East and Central, as well as in Europe and Asia. And uniquely in
the market, this is covered by the same SLA as
the rest of Google products. People always ask, why
is this taking so long? What was going on? Not only did we need to
create this entirely new infrastructure for
security sandboxing, but we also needed
to make sure that it was going to work in the
reliability and quality that our customers
are expecting. And that-- I am embarrassed. It should be something we
were able to do quicker. But that investment takes time. And it's something that
we can now stand behind and we're very proud
to make available. So today, just go
sign up, use it. It's available. There's no sign-up list. There's nothing you need to do. Now, that's, of-- [APPLAUSE] Thanks. That's, of course, not all. Having additional
languages, maybe we don't want the world
to be in a Node world. I don't, but some might. So obviously, having additional
languages is critical, too. So this is coming
back to how we can make the experience idiomatic
to what you and your company wants. So GCF came out with Node 6. Again, rolling out
over the next 30 days, we're going to have Node
8, as well as Python 3.7. And these are nice changes. One of my favorite books
is a book by George Gamow. He's one of the fathers of the Big Bang theory. And it's a book on math. And the title is just
"One Two Three Infinity." All right. And the point is you can
look at a sequence, right? Once you go from
one to two to three, it's infinity and beyond. So this would make
language two with Node 8 and language three with Python. So I look forward to
infinity coming soon as well. But these are the
ones that we have available right now and today. Getting the infrastructure
in place to support these is great. Like I said, they're rolling
out in the next 30 days. They'll be available to everyone
once the deploys are finished. Now that's not all, right? Because, of course,
GCF is a new product. It has a lot of new
capabilities that are necessary. So the first one, VPC and VPN. Something we hear all the time--
and Deep talked about this-- is we are not an island. No product inside of
Google is an island. What I hear time and
time again from customers is customers come to us
for the platform of GCP. And what's important is
that we work well together. And we can't do
everything from day one. We have to iterate
our way there. But VPC and VPN are key
issues there, right? So if you're not
familiar with VPC-- Virtual Private Cloud-- and VPN-- Virtual Private Network-- what these let you do is define a network where, for example, you bridge your on-prem into your cloud. You can control who
has access to it. This is super important. And we've heard, for
example, from people who might be running in
on-prem Cassandra cluster, and they want to trigger
a function execution. With this new feature, you
will be able to do that. This ties into the
next one as well, which is security controls. Right now, if you deploy a
Cloud Function, it's public. It is on the internet. Someone guesses that
URL, they can execute it. With security controls,
what we're doing is we're putting in
place IAM controls using the exact
same GCP IAM that lets you restrict, with new roles like a Cloud Functions invoker role, who is actually going to be
able to execute that. Cloud SQL Direct
Connect, huge use case that we hear from
people, of course, is I need to store data. So you've been able to do
Direct Connect with Cloud SQL. This is a fully supported path.
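[Editor's note: a sketch of the Cloud SQL direct-connect pattern from a function. The /cloudsql unix-socket path is the documented convention; the database, variables, and driver choice here are hypothetical.]

    # main.py -- query Cloud SQL (MySQL flavor) from a Cloud Function
    import os
    import pymysql  # add pymysql to requirements.txt

    def handler(request):
        conn = pymysql.connect(
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASS"],
            db=os.environ["DB_NAME"],
            # Cloud SQL is exposed as a local unix socket in the sandbox:
            unix_socket="/cloudsql/" + os.environ["CLOUD_SQL_INSTANCE"],
        )
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT NOW()")
                (now,) = cur.fetchone()
            return str(now)
        finally:
            conn.close()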
And then finally, near and dear to my heart, is how do you store those little
bits of metadata sometimes that are associated with
your app, configuration, other pieces? So we've introduced
environment variables.
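[Editor's note: a minimal sketch of the new environment-variables feature; the variable name and values are made up for illustration.]

    # main.py -- read configuration instead of hard-coding it
    import os

    def handler(request):
        greeting = os.environ.get("GREETING", "hello")  # set at deploy time
        return greeting

    # Deploy with, for example:
    #   gcloud functions deploy handler --runtime python37 \
    #       --trigger-http --set-env-vars GREETING=bonjour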
And our take is with this, with the new languages, with the GA, with these core capabilities--
and just for clarity, these core capabilities
are in various states. Many of them are
coming out in alpha. So there's a URL where you can
sign up to get access to these. All of them are
scheduled for this year. With this set, we really
feel like we've now brought GCF to the place
where it was meant to be. We have a great GA
product that I really am excited for everyone in this
room to give a try and use. Now I talked about,
of course, events. And for the sake
of time, we're not going to dive into this today. There's other sessions. But one of the things
that I'm most excited about with serverless is
the way that you can rethink your application architecture. And if you're not familiar
with what we're saying here, really briefly, if you just
think about the canonical case of a photo upload. You get a photo. You need to upload it,
and you need to store it. You want to resize it. Does that upload come
through your app? Does your app then
decide how to store it? And then do you then
trigger some resize? The nice thing about
an event trigger system is you just have the client--
whatever that client is-- directly put it in GCS bucket. That GCS bucket
generates a trigger. And then your code
gets executed. And so you can go from
100, 1,000 lines of code to 2, 5, 10 lines of code.
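[Editor's note: a sketch of that photo-upload flow as a background function; the resize itself is stubbed out, and the bucket name is hypothetical.]

    # main.py -- fires on every object finalized in the bucket
    def on_upload(event, context):
        # The client uploaded straight to GCS; no app server in the path.
        bucket = event["bucket"]
        name = event["name"]
        # Fetch the object, resize it, write the thumbnail back (stubbed):
        print("Would resize gs://{}/{}".format(bucket, name))

    # Deploy with:
    #   gcloud functions deploy on_upload --runtime python37 \
    #       --trigger-bucket my-photo-uploads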
And of course GCS is a great example. But maybe sometimes
we get a little bit bored of hearing about GCS. So one of the things to note is
we have this incredible Pub/Sub integration. And so much of GCP is
integrated with Pub/Sub. There's actually over
20 different services that you can take
advantage of today. So for example, BigQuery,
ML Engine, Stackdriver. Actually, one of my
favorite use cases is you can create an
alert in Stackdriver that triggers a Pub/Sub event
that calls a Cloud Function. So for example, if
you want to, hey, I've noticed we have a
failure condition where, when a machine in my
data center crashes, we want to trigger something. You can do that. And you can do that
very, very simply, and fully managed: create the trigger in Stackdriver, push it through Pub/Sub.
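[Editor's note: a sketch of the Stackdriver-to-Pub/Sub-to-function chain just described; the topic name and the shape of the alert handling are hypothetical.]

    # main.py -- runs when Stackdriver publishes an alert to the topic
    import base64
    import json

    def on_alert(event, context):
        # Pub/Sub delivers the message payload base64-encoded.
        payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        print("Alert received:", payload)
        # React here: restart a job, open a ticket, page someone...

    # Deploy with:
    #   gcloud functions deploy on_alert --runtime python37 \
    #       --trigger-topic machine-crash-alerts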
So these are all available with GCF. They're working great. And we see, in fact, that a
huge amount of adoption comes through these
products today. Now even with everything
that I've outlined, that's not enough, right? And when we are out and
talking to customers, what we consistently hear
are two additional problems. So one is dependencies. It's great that we
now support Node 8. It's great that we now
support Python 3.7. But what happens if
I have some random-- or not so random-- actually,
ImageMagick or FFmpeg are the canonical examples
that come up every single day. I want to resize an image. I want to transcode some video. And I don't like the libraries
that you've included. Now historically on a
serverless platform, the answer has been tough. Or you do some crazy hoops,
custom, compile, binaries, upload it into the blob, things
I don't think anyone really wants to do. And of course, the other one
I alluded to earlier is people say, hey, this is great. But how do I run these
workloads elsewhere? How do I make sure that I
can have the portability I'm looking for? And so to address that, I want
to introduce a new concept here. And we're going to build
off of this concept to some new products. So the new concept is
the serverless container. And what's a
serverless container? It's just a name, a
descoping of container. If you're all familiar
with containers, containers do many, many things. But one of the key
things they are is they're just [INAUDIBLE]
file of stuff, right? And if we define
a little bit more what has to be in
that container, it gets us wonderful things. So in our case, with a
serverless container, what we're defining is we're
going to say they're stateless. Don't write to local disk
and expect it to persist. We're going to say it's a
request-response, right? So this isn't for long-running,
6-hour, 12-hour, 5-day jobs, but based on an event response. And let's be explicit. HTTP is, of course,
the canonical example. But events are always
an example as well. They're going to auto-scale. They're going to have
health checks. So there's actually a
spec that we've published. And I'll show you the URL later.
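[Editor's note: a minimal sketch of a workload that satisfies that contract-- stateless, request-response, listening on the port the platform hands it. Reading the port from $PORT is an assumption here, matching common serverless-container conventions.]

    # server.py -- the entire "serverless container" workload
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Stateless request/response; nothing written to local disk.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok\n")

    if __name__ == "__main__":
        # The platform injects the port and handles health checks
        # and autoscaling outside the container.
        port = int(os.environ.get("PORT", "8080"))
        HTTPServer(("", port), Handler).serve_forever()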
So if we define and scope these containers down, one of the benefits
it gets us is we realize that this is actually
what we're doing today with App Engine, and what
we're doing with Cloud Functions under the hood. This is when we talked about
gVisor, how we're actually running things. And we realized, well, why don't
we expose that to our users? And so what I'm really
excited to announce is serverless containers on GCF. This is something that's
coming out later this year. So it's EAP now. We do have customers using it. And you can see
there's a sign-up link down at the bottom. Please, please sign up. And the idea here
is we are going to take a container that matches
that serverless container spec. It's going to be fully managed. And we're just going
to run it for you. And so what you can now
do with this is maybe you just have one
small open source package you want to install. Maybe you have some
proprietary binaries you don't have the
source to give us. That's fine. Maybe you want to integrate
this with your workflows, right? People have CI/CD. What we've realized is that
containers are not just the packaging format, but
they're the interchange format as well. All of those things
are now possible. And so what I want to do
next is give you a demo. And so Steren Giannini has
joined me up here on stage. And we're going to switch
over to the computer he has. So what you can see
is pretty easy, right? What we have here is
just a Dockerfile. There's literally
nothing special about it. This is as boring
a Dockerfile as one could ever hope to
have in their life. And you can see, though,
that in the Dockerfile we just installed with apt-get
one additional package. This is an open source
3D rendering package. This is not
something you've ever been able to do with
serverless before. You're like, oh, I want
to 3D render some image, and have it be
horizontally scaled. I want it fully managed. And you've been out of luck. Well, now you just
specify it in here. And then you build it. You can build it locally. You can build it with
Google Cloud Build. We don't care. It's just a Docker image. We've already built it, right? So we're taking advantage of
the fact that it's already up. It's running on our
Container Registry-- Google Container Registry. Don't care where it
runs, particularly. And what Steren's
now going to do is he's just going to hit Enter,
and he's going to deploy it. And what I want you to note is
he's using the same command-- gcloud functions deploy. He's just added the --image. If he left off the
--image, it would just be expecting
source code and building it. But in this case,
we're like, no, no. Let's give you
that Docker image. And again, like I said,
there's nothing special about this Docker image. This takes about 30
to 45 seconds to run. It is in EAP. I promise it'll get faster. But now what you can see
is this is running live. I mean, if anyone is able
to get that URL and copy it, you'll see it yourself. And you'll notice
that what we have here is we're actually
using this open source package to 3D render
in not quite real time, but pretty darn fast. If it's not cached, it usually
takes about half a second. And it just renders some text. Now we're not the best
designers in the world. This doesn't look like
the most amazing thing. But what we have here
is literally something I don't think we've ever
been able to do before, which is we've been able
to take a Cloud Function, have a custom piece of
software installed into it, and have that run with
all of the experience that you'd be used to-- the
same commands and everything. Steren's going to stay
up here and join me on stage for a little bit more. But we're going to
switch back to the slides for just a few seconds. Because this is now
point number two. So remember, I said
point number one, the blocker that
we hear from people is that they're concerned
about the ability to run arbitrary
additional dependencies. So we just showed you that. Point number two is the ask about consistency. And one of the things
we hear is, hey, you know, for whatever reason-- good, bad, ugly-- we want
to run things on-prem. And there are some really
good use cases, by the way. One of my favorites is
one of our big customers-- an oil services company. And they have giant oil rigs
in the middle of the ocean. And it turns out that oil rigs
don't have good bandwidth. And they generate terabytes
of data off of these oil rigs. And they want to be able
to process this data. And so they've installed--
or are in the process of installing-- Kubernetes clusters
onto oil rigs, because you just can't get
a terabyte of data a day off of a-- actually, I'm sorry. They're in petabytes a week. So it's literally impossible. I guess you'd send
helicopters, right? So in their case, they can do
all of this processing on site. But what they keep
on saying is, yeah, but we don't want to have to
write different code. For all of our-- 90% of the time when we're
running it in the cloud, this is great. What do we do when we
have our on-prem needs? And as you may have seen
actually in this room, just before-- an hour ago-- [INAUDIBLE] were talking
about our Kubernetes story. And they announced
our GKE on-prem. So GKE on-prem is a
package distribution that allows you to get a
fully managed installation GKE running in your
own data center. And this directly addresses
one of the biggest concerns that our customers have-- how can we have the
same environment that runs in the cloud
and in our data center? And so we're taking that
one step further as well in collaboration with them. And what we're doing is we're
introducing a new add-on. So the GKE serverless add-on
takes the exact same API that we expressed and
showed before, and now lets you deploy these same
exact workloads onto GKE. Again, there's a sign-up link. Take note of that. We can share this later. But if you're interested
in participating in this, please do take a look
and sign up for it. And the idea here is-- actually, let's
show and not tell. So the idea here is exactly
what we showed before. So let's actually do a demo. So if we're going to
go back to Steren. So here's what we're
going to do now. We're going to make no changes-- except to the command line. So Steren is going to deploy
the exact same function, right, the same image. Instead of using
gcloud functions deploy, we do have a new namespace. So it's gcloud
serverless deploy. And we need to add one
more additional piece of information, of
course, because now we're targeting a cluster. It's not just this
world-spanning Google Cloud. We need to say which specific
cluster we want to go to. So he's going to deploy
it to his very aptly named my-cluster. And away we'll go. I want you to note that this is
using the same command, right? It's the gcloud command. It's the same experience
we're used to. And away, this is
going to deploy. This one actually
takes about 15 seconds. It's actually faster in
general to deploy to this. And so right now
what's going on? We are spinning up a
new service in GKE. So this is running in
a Kubernetes cluster. And if you've managed
Kubernetes before, you know that getting a service
running, there's a lot of YAML, there's a lot of configuration. And that's the
power of Kubernetes. That's why you want
it, oftentimes. But as a developer
sometimes you don't, right? Sometimes you don't
need all that. And in fact, all of those
sharp edges can cut you. And so what we want
to give you here is the experience of
just simply deploying. And so you'll see
the URL is different. This is now a live demo. This is actually running
on a Kubernetes cluster. And it's the exact same code. We have had to make absolutely
no changes of any kind. It's actually even worth
noting that this cluster is provisioned with really
beefy big instances. So we happened to be
doing this 3D computation. It runs even faster. And so this points to one of
the benefits that you have. You are, back to that
model I talked about-- this is the programming
and the operational model-- you are, of course, trading off
some of the operational model. No, this isn't pay for usage. It's running in your cluster. You have to manage your cluster. But what you do get is the
control that comes with it. If you want to run 64 core
VMs, you go for it, right? You can do anything
you want in this case. So it gives you a
lot more flexibility. Steren, thank you very much. [APPLAUSE] Now it's not sufficient to
simply give you the software and have it managed. One of the things
we hear all the time is just how important
freedom of choice is, right? I was actually talking
to this chief architect of a huge financial
services company, who, being a financial
services company, won't let me use
their name on stage. I was talking to him
a few months ago. And they have a giant
Kubernetes cluster. Nothing to do with GKE. They have a giant
Kubernetes cluster that they use for
financial analysis right now. They have over 10,000 people
who they consider developers inside of their company. And they're saying, how
do I enable this developer productivity in the clusters
that we already have? And we hear this all the time,
people saying, this is great, but we live in a
heterogeneous world. It's not sufficient
that it's great that you have the
best thing on GCP. How can we use
everything at once? And so I'm really excited
to bring all of this together by introducing
to you a new project. It's called Knative. AUDIENCE: Whoo! OREN TEICH: And
someone's excited. This is a new,
open-source project. It's created by Google,
but done in conjunction with a huge number of partners. And the whole point
of Knative was to bring the building blocks
of serverless together, and to give you this
workload portability. So Knative is made
up of, like I said, it's built on top of Kubernetes. It's about the portability. And it's key primitives. So specifically today,
Knative has three primitives built into it. It has build. So what you can do with
build is take source code and actually get a container. Because by the
way, even though it was easy to build a
container, I don't want to deal with containers myself. So we build that. Serving, right? That piece that
Steren just showed you where he types gcloud
serverless deploy? That's what serving does. There's nothing you need
to worry about, right? There's none of the
configuration pieces. And then we're not even
demoing this today, but event binding, right? How do you have a standard
mechanism of binding to events, and making this available? Now if we just did
this by ourselves, I think that'd be interesting. But ultimately our goal is
to bring the serverless world together around this. And so from that
perspective we've been working with a
large number of partners for many, many months on this-- SAP, IBM Pivotal, Red Hat,
and many, many others. And in fact, if you search
Twitter for Knative today, you'll see a pretty remarkable
stream of blog posts of announcements, of companies
announcing the strategies behind this. Our hope, our
aspiration, our goal here is to enable a whole
flourishing ecosystem of serverless
products to come out, but to enable those in a
compatible, portable way so that as a developer
you can take advantage of the great
serverless concepts, but also take advantage
of the specific pieces that each company
is going to provide. So to that point, I'd love to
bring up one of our partners. So Michael, if you
wouldn't mind joining me, Michael Wintergerst is from SAP. Come on up. And we've been
working with SAP now for quite some time on this. In fact, the whole front
row is SAP, by the way, and I suspect there's
more of them out there. So we're all very excited. Michael, thank you. MICHAEL WINTERGERST: Hi. [APPLAUSE] OREN TEICH: So Michael,
tell me a little bit about why SAP was
interested in Knative, and what got you up here? MICHAEL WINTERGERST: Yeah. So, Knative. And I'm responsible for
the SAP Cloud Platform. So that is our enterprise
platform as a service offering. So it's all around
open source frameworks. We have, for
example, Kubernetes. We have Cloud Foundry in. And Knative and
serverless computing brings us the
possibility to ease the whole development of our
business services and business applications. For instance, our developers
are telling us, hey, it's great that you have all
the nice frameworks. But here's my code. Just run it, right? So that is a paradigm our
developers have in mind. And therefore,
serverless, with the idea to hide all the complexity
of server infrastructure, networking storage, pay per
use auto-scaling, that's really great, and brings
a lot of capabilities to our application folks. And with Knative, you're
adding a lot more capabilities. As you said already
before, here's my code. Create a Docker image out of it. Register that in
a Docker registry. Deploy it on Kubernetes. And also bringing then the
eventing stuff inside. That is key for our
customers and partners in order to create
business services and enterprise-grade
applications. OREN TEICH: And you're
not just looking at using it as a
integrational piece, but you're actually building
your own product on top of it as well, right? MICHAEL WINTERGERST: Exactly. And I'm super excited
today that we announce in the morning, the first
open source Lighthouse project on Knative, our project Kyma. So kudos to our C4/HANA guys
sitting here in the front. So they made that happen. [APPLAUSE] So what is Kyma all about? And please have a look. It's on GitHub--
github.com/kyma-project. So it's an extension factory
for our digital core. So maybe you know at SAP
we're adopting a [INAUDIBLE].. So that's nothing
what we have invented, but it's coming from
Gartner and IDC. They shaped theirs
two years ago. So we have our digital core. So our big enterprise resource
planning systems we have. That's our mode one environment. So where the
customers are running their day-to-day business. But on the other
hand, our customers would also like to see, hey, I
would like to get innovations, like IoT, blockchain,
machine learning, all of those capabilities also
in the mode one environment. And therefore, we
said at SAP, it makes sense to have an
innovation platform, our so-called SAP
Cloud Platform where we are running the complete
innovation stack on top. And Kyma is sitting on top
of the SAP Cloud Platform, and it's the glue code behind
the mode one and the mode two environment, allowing you to
create extensions to your mode one environment, so that you
can run all the nice machine learning features next
to your mode one system without disrupting them. So that is our approach. OREN TEICH: And then
I hear from customers all the time that
one of the, I think, the confusions that comes
up around serverless is they think it's just
about greenfield, right? And they're just going,
but I can't rip and replace everything. I have an SAP system. MICHAEL WINTERGERST: Exactly. OREN TEICH: And of
course we're not going to replace your
SAP existing ERP. There's no world in which
anyone would want to do that. But how do I extend it? How do I do the new
development, right? And I think that's
what Kyma's all about. MICHAEL WINTERGERST: Exactly. So a nice example is when you
have, for example, a commerce system coming from SAP,
and an order gets created, or a new shopping
cart gets created, you would like to
create an extension. So you would like to
get functions, which tightly connect to this event. All-in-all it gets created
in our mode one environment. But behind, you have the
machine-learning capabilities. And you can bring in
those capabilities without changing your
mode one environment. And that is the key
benefit we are getting out of Knative and Kyma. OREN TEICH: Yeah. I got to say, it's literally
what every customer I know asks for. And I'm just so excited that
you were able to join us, and that you're
able to participate. Thank you. MICHAEL WINTERGERST: Thank you. Thanks, Oren. [APPLAUSE] OREN TEICH: So I would strongly
encourage you to try Knative. It's early days. There's lots of
people building on it. In case you're
wondering, Knative is not designed for the end user. It's designed to be
the infrastructure pieces that enable products
to be built on top of it. Kyma was built on top of it. We've built our
Kubernetes add-on-- serverless add-on--
on top of it. But if you'd like
to get involved, if you'd like to see what
the open source is looking like, if you want to see
how we're doing scale to zero on a Kubernetes
cluster, go take a look and see what we're doing. So this is the last slide today. And what I wanted to do is
give you a quick overview of what we talked about-- including some of the
things we didn't talk about. So when we talk about
serverless right now, of course there's
App Engine, right? And in a recap, with
App Engine it's faster. It's open. We have the popular languages
with more coming constantly. And I didn't even have
a chance to talk about, by the way, it's
HIPAA compliant. It's been HIPAA compliant now
just as of a few months ago. And so we're signing BAAs. We're seeing more
and more customers do incredible things with it. On Cloud Functions, we
have, of course, GA-- new regions, the new languages,
and these capabilities. I didn't even talk about
scaling controls, by the way. One of the things
that comes up quite often is you can have
a Cloud Function, and you're doing
resource pooling. You don't want to have
one function starve the database if it gets
scaled up really high. So how do you actually
set some limits around what this looks like? And what you can
see is we're moving from the basic
enablement capabilities to what does it
really look like when I have 500 functions,
50 functions, when I have complex systems? We talked about serverless
containers, and the ways that we're expressing that
through GCF coming soon through the serverless
add-on, and through Knative. And I didn't even have a chance
to talk about the green gear. But I'm going to give
it a few brief moments. I just want to call
out that we're also taking a lot of the services
that have been built into App Engine historically,
and we're making them first-class citizens
within all of cloud. So scheduling,
[INAUDIBLE] in the cloud. It's like, how boring is that? It turns out it's actually
really, really important. I was just talking to a customer
two days ago who was just like, I just want to run something
once every 24 hours. Help? It's standard use case. Maybe it's every five
minutes, every 24 hours. So Cloud Scheduler
enables you to do that. And that's coming
out later this year. Cloud Tasks. Task Queues was a built-in
system into App Engine. Cloud Tasks is a
mechanism of doing inter-process communication. So if you want to fire
off one thing to another, and you need a place to
store them in between, that's what Cloud Tasks does.
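[Editor's note: a sketch of enqueuing work with Cloud Tasks, using the Python client library that exists today-- which postdates this talk. Project, queue, and handler names are hypothetical.]

    # enqueue.py -- fire one thing at another, with durable storage in between
    from google.cloud import tasks_v2  # pip install google-cloud-tasks

    client = tasks_v2.CloudTasksClient()
    parent = client.queue_path("my-project", "us-central1", "work-queue")

    task = {
        "app_engine_http_request": {
            # The queue stores the task, then POSTs it to this handler
            # at its own pace, with retries.
            "relative_uri": "/work",
            "body": b"payload",
        }
    }
    client.create_task(parent=parent, task=task)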
And I already mentioned Cloud Firestore. So all of these make up what
we think of as serverless. And they're just the first step. And we're really excited about
what you might be building, where you might be going. And I hope that you're
excited about this, that you're going to go
to some of the other sessions, and that you're going
to try our products out. So thank you so much. [APPLAUSE] [MUSIC PLAYING]