[MUSIC PLAYING] ANKUR KOTWAL: Hi, everyone. My name is Ankur Kotwal. I'm a developer advocate
on the Google Cloud team. So something about me-- I was actually born in India. But I have lived the last 30
years in Sydney, Australia. I get to come to India,
mostly for work actually, every few years. And one of the things that
constantly brings me back to India is you
all, the community. There's always such enthusiasm,
excitement about whatever it is that Google has to share. And it really touches
my heart that there is such passion towards
everything that we do there. So I'm going to talk
to you about Anthos. Now, Anthos means different
things to different people. It's a pretty broad
product suite. And so we're going
to go in pretty deep to start the morning. Before we go too
far into Anthos, I want to take a
step back actually. I want to take you on a journey of how we got here. There have been a handful
of fundamental shifts in technology over the
last seven or so years. These shifts have made
a difference in the way that we build our applications
and the way that we operate them in the cloud as well. So the first one is Docker. So actually, Google has
been running our software in containers for over 15 years. And it gave us a real
competitive advantage, because we were able to
use containers and put more containers on our infrastructure
than perhaps some others were able to. So we open-sourced our
implementation of containers to the Linux kernel
in about 2006-2007. If you've ever worked
with that implementation, it's called cgroups, which
stands for control groups. So we open-sourced that
based on our experience. Docker came along in about 2012
and put some really awesome tooling around containers. So it made it a lot
easier for people to manage and run containers. And so if any of you
have used containers, I'm sure you've at
least heard of Docker or are familiar with just
how great that technology is. But of course, when it became
so easy to run containers and manage them, it turns
out that people started using lots and lots of containers. We started to adopt things
like these microservices architectures, where our
monolithic applications got split up into little pieces,
and then run simultaneously. Now, when you have lots
and lots of containers, you need to find a way
to orchestrate them. And so Google saw
that, in the industry, there was such a big
movement towards containers. And we thought, let's
take the experience that we have running
containers at scale, and let's make it just as easy
for everybody else to do it. And so 5 and 1/2 years ago,
we released Kubernetes. And when I say we released it,
we actually open-sourced it-- the whole code base, the
trademark, everything. And today, in just
that 5 and 1/2 years, Kubernetes has become the standard way to run containers at scale. Every cloud vendor supports a hosted Kubernetes platform. But we didn't stop there. In more recent times, we've been
working on things like Istio, which gives you the ability to
manage your services the way Google does, so that
you can apply things like site-reliability
engineering practices. We still continue to listen
to our customers, though. And what our customers are
telling us is that almost 3/4 of organizations already
use the cloud in some way-- maybe in a big way,
maybe in a tiny way. But actually, not as big as
you'd think, because only 10% of workloads have moved. That leaves 90% of existing workloads still on-premises, or in a local data center. The other thing
customers are telling us is that they've made big
investments in their data centers. They've got big investments in
their on-prem infrastructure. And moving to the
cloud, they want to be able to adopt the cloud. But they don't want to just
turn away those investments. They want to be able to continue
to leverage what they have and move incrementally
to the cloud. Additionally, customers
are telling us that they don't want to tie
themselves to a single cloud vendor. They want to be able
to support multi-cloud. So hybrid cloud is where
we've got something on-premise and in the cloud. Multi-cloud is when we're
supporting multiple cloud vendors. So our customers
are telling us this. Now, we saw that
only 10% of workloads have actually
moved to the cloud. And actually, out of
that 10%, some of them actually didn't succeed
in moving to the cloud. They ended up rolling
back to being on-premise. And if you think about
why that might be, it's because a lot of customers
tend to try and modernize their application. They pull apart these
monolithic services and try to make
microservices out of them. But they try and do
that reengineering work at the same time that they
are moving to the cloud. Now, that's fraught with risk. You're reengineering important parts of your application. You're suddenly familiarizing yourself with new processes and new tools you're not familiar with. Everything feels a bit new. So it's not surprising
that some organizations may struggle to do that. Wouldn't it be awesome
if you could somehow modernize where you are in
place, on your on-prem setup, in your data center before
you move to the cloud, so that when you wanted
to move to the cloud, things were already ready--
you'd done the modernization work locally. But to do something
like that, you need a platform that
works in the cloud as well as in the data center. And you want the benefits
of an open ecosystem, like Kubernetes. The platform itself
also needs to be consistent across those
different environments. You don't want to have
a separate set of tools to manage your cloud
environments versus your on-prem or your
data center environment. You want those things
to be consistent, because you want to
manage that entire estate of infrastructure that you have
with a consistent set of tools and policies. You want your people
to be able to interact with a single platform. So I want you to come
on a journey with me. I want you to imagine a world
where you can seamlessly move workloads
between public clouds and on-premise infrastructure. We built that. And we call it Anthos. Now, Anthos, as I said,
is different things to different people. At a high level, Anthos is
an application deployment and management tool for
on-premise and multi-cloud setups. But the thing that
makes Anthos different is that it is a 100%
software solution. There is no hardware lock-in. For you, your infrastructure
is abstracted away, so that you can focus
on building apps, not on managing infrastructure. And because Anthos is built
on a set of open technologies, you can avoid vendor lock-in. Now, I'm not going to go through
all of the milestones that got us to Anthos. But I want you to realize
that Anthos is not something that we just built overnight, or in a year or two. It's the culmination of decades' worth of work at Google-- from us starting to run containers all the way back in 2003,
to making that open source with Kubernetes, and now
making it available to you to migrate your workloads. Anthos has different sets
of tools for different types of people in your organization. So we have a great
developer experience for the developers that are
building cloud-native applications. I'm going to talk about
all of these things. So if some of these are
unfamiliar, don't worry. We make it easy for you to
run your services like Google, with things like Istio. And we enable you to
deploy your containers at scale with Kubernetes. Now, these are all open
platforms, all open source software on the left-hand side. And on the right-hand
side, we offer first-class hosted versions
of each of these services. Now, when you work on-premise,
we bring these experiences to you with VMware. And we're going to
look at how that works. So these are all the
components that make up Anthos. Now, if you are like me, when
I first saw that diagram, I was overwhelmed. I was like, woo, there's
so much happening here. How am I supposed to make
sense of all of this? Well, trust me when I say that, over
the rest of this presentation, I'm going to cover all
of these components. And by the end of this talk,
I'm quite confident that you will understand exactly how
these pieces come together, because we're going
to build this diagram from the bottom up. So let's go back to Kubernetes. When we open sourced Kubernetes
and released it to the world, we identified cloud-native
applications in three ways. The first is that they
are container packaged. They are portable. They're predictable. They have isolated resources. The second is that they
are dynamically scheduled. So a scheduler allocates containers across machines. And the third is that they
are microservices-oriented. So each of these components
are somewhat loosely coupled and support
independent upgrades. Now, when we
released Kubernetes, we also followed up with our
hosted version of Kubernetes called Google
Kubernetes Engine, GKE. What this meant is that
we made it much easier for you to run your
Kubernetes clusters on GCP. The open source
version of Kubernetes was available and
still continues to be. So you can take it and deploy
it in your own infrastructure, if you like. But Google's Kubernetes
Engine makes it so much easier to run Kubernetes
than the open source version. So for example, if you are
an expert in Kubernetes, it could still take you
hours to bring up a cluster, because of the work
that's involved. With GKE, we have a
friendly user interface on the cloud console. You click a few buttons. And a few minutes later,
you have a running cluster. The other thing that we
do is more of the day-2 or maintenance-type tasks. So when there's a new version
of Kubernetes that's been released-- either a major
version or something like a security update-- we offer those updates to
you with a single click, where you just have to
accept that, yep, I want to upgrade to that version. And we handle that for you. We bring your cluster down,
provision the new cluster on a new version of Kubernetes, and you're on your way. That type of work is really involved-- we've had customers tell us that it can take days of their week to do those sorts of upgrades. And we do those for you
within seconds or minutes. And we offer great integration
with our other cloud services. Now, what's new with Anthos is
this on-prem version of GKE. So on your own
infrastructure, you can now run GKE, not
just the open source version of Kubernetes. And what that means
is we bring you those great features from GKE
on your on-prem infrastructure. We do your upgrades for you. We maintain things like
Automatic Node Repair and so on. And this part sits on
top of VMware's vSphere in your environment.
difference between Kubernetes, GKE, and GKE On-Prem. So if you've ever
used Kubernetes, you'll be familiar with
this kubectl command. That's how you
administer your clusters, create clusters, and so on. kubectl communicates with the master there and basically tells it, hey, go ahead and deploy these containers, and this is the kind of scaling I want you to have, and so on. So kubectl talks to the master with a set of APIs. When you use GKE you
still can use kubectl. And it will still work
with your GKE clusters. But you also have this
interface in the cloud console, where you can administer it that
way, again using the same APIs.
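As a rough sketch of what that looks like from the command line (the cluster name, zone, and node count here are just placeholders), you could create a GKE cluster with gcloud and then point kubectl at it:

```
# Create a GKE cluster from the command line instead of clicking through the console.
gcloud container clusters create demo-cluster --zone asia-south1-a --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster's master.
gcloud container clusters get-credentials demo-cluster --zone asia-south1-a

# The same kubectl commands work whether the cluster is open source Kubernetes, GKE, or GKE On-Prem.
kubectl get nodes
```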
Now, things change when we go to on-prem. So when you go to GKE On-Prem,
we have an admin workstation. And you use the
gkectl command to talk to this new type of cluster
called an admin cluster. Now, this admin cluster is
responsible for creating your clusters for you
in your environment. So kubectl is still used
to administer your own user clusters. And gkectl is there to help
set up your environment. And you're not limited to
a single cluster either. You can have as
many clusters as you like, only limited by what
your infrastructure supports, your on-prem infrastructure.
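As a hedged sketch of that flow from the admin workstation (the file names are placeholders, and the exact flags depend on your GKE On-Prem version):

```
# Create a user cluster via the admin cluster, driven by a declarative config file.
gkectl create cluster --config user-cluster.yaml --kubeconfig admin-kubeconfig

# Day-to-day administration of the user cluster is still plain kubectl.
kubectl --kubeconfig user-cluster-kubeconfig get namespaces
```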
So with things like kubectl, we give you that consistent experience, regardless of whether you are on open source Kubernetes, GKE, or GKE On-Prem. But of course,
wouldn't it be easy if we could just use our
Google Cloud Console? We make that easy for you, too. So when you have
GKE On-Prem set up on your on-prem infrastructure
or in your data centers, the Cloud Console
shows you a single view of all of your clusters
and all of your estate. We don't discriminate
between, oh, you need to go to this view
for your particular data center here in Delhi, and a different
view for Mumbai, and so on. We can show you that entire
view between GCP and on-prem collapsed as if it was just one. We have a column
that basically allows you to see whether it's actually
in GCP or whether it's on-prem. So we make that easy for you. And the way that we
do that, by the way, is that, when your GKE On-Prem cluster starts up, we make a single outbound connection to GCP that's TLS-secured. Now, that connection is what
registers your GKE clusters so that they are visible
up on the Cloud Console. Now, the cool thing about this
being an outbound connection means that you
don't need to expose your on-prem infrastructure
with an external or a static IP address. It can be NATed all
the way through. But it's just that it's
an outbound connection. So it's able to do things
like, in the Cloud Console, you can provision more
clusters, you can push out Kubernetes upgrades, and so on. So that's it. This is the start
of our diagram. We have GCP. We have the GKE dashboard,
which is the Cloud Console I keep talking about. We have Google Kubernetes
Engine on GCP. On the on-prem side,
we have GKE On-Prem. But effectively, we give
you the same experience, whether you're sitting in GCP,
or whether you have things running on your data center. And we have Cloud
Interconnect that allows them to communicate
with each other. So this is the start
of our diagram. It's nice and simple. Let's add to it. I've mentioned a service
mesh a couple of times. The way we define
a service mesh is that a service mesh
provides a transparent and language-independent
way to flexibly and easily automate
application network functions. That's a mouthful. So why don't we talk
about what that actually translates to in real terms? So a service mesh
is a network that's designed for services, not just for bits of data. A traditional layer 3 network doesn't know what application or service the traffic actually belongs to. It doesn't make routing decisions based on your application settings. And it doesn't know that these packets are related to those packets in your network. So by automatically deploying
a mesh of smart proxy servers alongside your application, you
can get uniform observability into your workloads at
a service level, which allows you to make some
smart decisions about routing traffic, about enforcing
security, and encryption policies. So you get things
like observability. You can look at the state of
your application throughout. You can get things like the
ability to split and route traffic based on
your application. And I'm going to show
you an example of this, which will, I hope, make
things a lot clearer. And then you can
apply some policies as well to make sure that your
application is working exactly the way you expect. So let's have a look at
a simple application. We have a frontend component
that all our end users connect to. Once they connect to the frontend, frontend will authenticate those users
using a Google Cloud SQL table. frontend will also
be able to fetch some pictures for this
particular service that we're running. It's a photo service where
people can purchase photos. And then we have a
payment processor. So your users can go ahead and
purchase those actual photos. Now, the first thing
you notice here is that these lines that connect
each of these components-- frontend is the only component
that talks to all of these, right? So right now, I mean,
it's a nice clean diagram in that sense. Typically though, we
don't have a nice way to enforce security
policies to say, hey, there's no way that pictures
should ever talk to auth. Well, Istio can enable
things like that. The way we turn on Istio-- enable Istio on a service-- we're working in the
Kubernetes world here. So we've deployed this frontend
image as part of our pod. Instead of that, what we do is we add a second image there, which is the Istio proxy, which makes the pod Istio-enabled. What that does is that it
injects a network proxy close to the workload, sitting right
alongside it, within that pod. And that smart proxy captures
all inbound and outbound network connectivity. So everything will
automatically go through that. The fact that this thing is injected in and is a transparent proxy is really important, because how much change did we make to our code to make this happen? Nothing. We got Kubernetes to do it for us, right? We deployed the proxy.
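As a minimal sketch of how that injection is typically switched on (the namespace and deployment names are assumptions based on the example above):

```
# Tell Istio to automatically inject its sidecar proxy into pods in this namespace.
kubectl label namespace default istio-injection=enabled

# Recreate the frontend pods so each one comes back with the proxy container alongside it.
kubectl rollout restart deployment/frontend
kubectl get pods -l app=frontend
```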
So now, we're able to capture all inbound and outbound traffic. And we can make
some smart decisions about how we route that traffic. So eventually, we
end up with something like this, where we put proxies
next to each of our components. And now that we're
capturing that traffic, we can do things like
encrypt that traffic. We can enforce policies so that the pictures component can't talk to any component other than frontend. And we did all of this without making changes to our application code.
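Here's a sketch of the kind of policy being described, written as an Istio AuthorizationPolicy (the namespace and the frontend service account are assumptions, and this resource needs a reasonably recent Istio release):

```
# Only the frontend's identity may call the pictures workload; everything else is denied.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: pictures-allow-frontend-only
  namespace: default
spec:
  selector:
    matchLabels:
      app: pictures
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/frontend"]
EOF
```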
We have a nice, clear separation between the code our developers write and the decisions our operations team makes. So we can deploy it in the way
that we want to without, again, having to make
these code changes. Istio is really
awesome like that. Let's talk a little
bit about how Istio works. Istio has a control plane
to make this all happen. And that control plane has
three conceptual components. The first one is the pilot. You see it on the
bottom left there. The pilot talks to all of
the proxies and deploys configuration and routing rules. The mixer comes along and
enforces the policies. The mixer says, hey, which components are allowed to talk to each other? Is frontend allowed to talk to pictures? OK. Is pictures allowed
to talk to auth? No. Mixer is the one that's
going ahead and enforcing those things. Mixer does actually some
other really cool things. It captures telemetry. And because it's got a
flexible plugin model, it can take that telemetry
and send it to a destination. So what's an example
of telemetry? Logs. If you've ever had to diagnose
problems in a Kubernetes environment, you
know how frustrating it can be to collect and
centralize all the logging. It's a simple feature
of any application. Well, with Istio, we can get the
logs from all of our workloads and send them to a
single destination. Finally, we have
the Citadel, which deals with security
inside the cluster. Think of the Citadel as
acting like a certificate authority for your cluster. And it issues certificates to
all the proxies in the mesh. So that's really, really interesting as well, because what that means is, if one of your components gets compromised, the rest of your system doesn't have to be compromised. And because we are acting as a
certificate authority, someone else, something
nefarious on your network can't just hop in and connect
to any of these things, because they're not being
secured through our Istio network. Again, we're not doing this with
any changes to the application. So Istio is really, really
forward-looking in that sense. So far, we've talked about
this traffic control piece. We've seen that communication
is transparent-- is secured transparently using mTLS (mutual TLS). So besides controlling
the routing of traffic, what else can we do? On the testing
side, you can also do things like inject errors into your network to see how resilient your services are to them. You can put things like rate limiting in place to limit how often the frontend service can call the auth service, for example.
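As an illustrative sketch of fault injection (the service names come from the earlier example; the numbers are arbitrary):

```
# Delay half of all requests to the auth service by 5 seconds
# to see how the frontend copes with a slow dependency.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: auth-fault-injection
spec:
  hosts:
  - auth
  http:
  - fault:
      delay:
        percentage:
          value: 50
        fixedDelay: 5s
    route:
    - destination:
        host: auth
EOF
```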
And last but not least, a mesh is able to extend the cluster. So you can wrap service calls
and enable service discovery across underlying platforms. So it doesn't matter if those
platforms are Kubernetes, virtual machines, on-prem,
or any multi-cloud provider. So Istio is really changing how
we're deploying applications in big ways. So to continue, what
I call what I'll tell you is that, whilst it's
already doing some cool things, you're going to continue to see
some really great improvements in months and years to come. Now, just like with Kubernetes--
how Kubernetes is open source and we have a managed, hosted version of Kubernetes-- we have Istio on GKE, which is our hosted version of Istio. Now, Istio and Kubernetes are
independently released, right? So they have
independent versions. So by using Istio on
GKE, what you're doing is you're putting
your trust-- you're letting Google go and
do the certification for version compatibility. So when there's a new
version of Kubernetes, we'll have matching
versions of Istio that are tested and
certified and paired up. So Istio on GKE gives you that. When you upgrade your
Kubernetes environment-- and again, we make
that so easy for you-- we'll do the same thing
for Istio as well. Now, with Istio on GKE, we also
add some additional adapters to plug into other GCP products. So the first one is
the Apigee adapter. But the one I want to
call out is Stackdriver. So remember a couple
of slides back, I said the mixer component
captures all the telemetry and sends it to a destination. On GCP, that destination,
by default, is Stackdriver. Stackdriver is awesome. I'm going to talk about it
a little bit in an upcoming slide. And then Yoshi is going to
talk about it in the next talk as well. But in this context,
Stackdriver is going to grab all your logs and traces and make it really
easy for you to see what's happening
inside your system, to give you observability. So now, when we look at our
diagram, we have Istio on GKE on the GCP side of things. And we have the
open source version of Istio running on your
on-prem data center. So our diagram is starting
to fill up a little bit. Now, what I've been
able to show you so far is that it's pretty easy
to create clusters, right? So what happens in IT when
you make things too easy? People deviate from
the plan, right? So you have this beautiful
plan that everything's going to be nice and consistent,
like on the left-hand side. But ultimately, because
things are easy, people just do what they
need to get the job done. And you get a
scenario a bit more to the right-hand
side like that, where each of those things
independently look fine. But together, it's
not consistent. Now, Kubernetes solves this
by having this pattern, where you specify your
desired state with your specs. And then a controller
continually monitors the environment to make
sure that your actual system is meeting that desired state.
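As a minimal sketch of that desired-state pattern (the image name is a placeholder): you declare that you want three replicas of the frontend, and the Deployment controller keeps the actual count at three, recreating pods if any disappear.

```
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3          # the desired state
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: gcr.io/my-project/frontend:v1   # placeholder image
EOF
```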
So we introduced a new component called Anthos Config Management
to apply your policies between this heterogeneous
environment of multi-cloud and on-prem. So what Anthos Config
Management does is it lets you
enforce guardrails for centralized IT governance. So you can manage
configurations for the tools in all of your clusters
in one single place. You don't need a separate
repository for on-prem and a separate repository for
any of your cloud providers. It's a single auditable
source of truth. And again, if you
use Kubernetes, you've probably heard of the
term infrastructure as code. Well, with Anthos
Config Management, you now have policy
as code, where we use a Git repository that your policy gets checked into. And then you get all the
benefits of Git, right? You've got traceability. You've got things like
pre-commit validation. You can label things so that
you've got well-known working configurations. So if something goes wrong,
you can always roll back.
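As a rough sketch of what policy as code can look like here (the directory layout and names are illustrative, not a prescribed structure), the repo holds ordinary Kubernetes resources, and every cluster that syncs the repo enforces them:

```
# config-repo/
# ├── system/           # Anthos Config Management operator configuration
# ├── clusterregistry/  # which clusters these policies apply to
# ├── cluster/          # cluster-scoped resources, e.g. RBAC
# └── namespaces/
#     └── payments/
#         ├── namespace.yaml
#         └── quota.yaml
#
# namespaces/payments/quota.yaml caps the payments namespace on every synced cluster.
cat <<EOF > namespaces/payments/quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: payments-quota
spec:
  hard:
    pods: "20"
EOF
git add namespaces/payments/quota.yaml
git commit -m "Cap the payments namespace at 20 pods"
```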
Now, when we look at our diagram, we've added config management
on the left-hand side here, on GCP. And then we've got
on-prem set up. We have a copy of the policy
repository stored locally. That config management
is constantly applying. And that policy
repository is always being synced between
the GCP environment and your on-prem setup. Now, so far we've
talked about Kubernetes, about deploying
containers at scale. We've talked about
Istio, about making sure that we can secure
those services, capture our telemetry. We've talked about
config management to make sure that we can
synchronize our application and tool settings between
on-prem and the cloud. One important area we
haven't talked about is actually our applications. How do we run applications
in this modern world? Well, there's multiple ways that
you can run your applications. Many customers choose
to use VMs today, right? Some customers tend to
use containers, and so on. One of the things
that cloud is changing is how people think about
designing their applications. So you may have heard the
term serverless in the past. Serverless is this
reasonably recent term that
refers to applications that don't have to be concerned
with the infrastructure. The infrastructure is
abstracted away from you. So when you're thinking
of your applications, you just think of
app development, not about the
deployment scenarios. Serverless also changes
the billing model, so that you pay
for what you use, rather than pre-allocating a VM of a certain size and paying a fixed cost. So depending on the busyness
of your application, the cost of your application
will go up and down. With Kubernetes, people
have been asking, how do we bring serverless applications to Kubernetes? So we took a look at this. And we built an open source
framework called Knative, which enables exactly this. And Knative, just like
Istio, just like Kubernetes, we didn't build it in isolation. We originally open-sourced it. And now, we're working with
vendors across the industry to make sure that
this is the best serverless infrastructure--
serverless framework that you can get for Kubernetes. So once you have
Knative, though, that's the open source implementation. Google has taken
Knative and provided a managed version of the
product called Cloud Run. And in this talk today,
I covered lots of things. And I can tell you unashamedly
Cloud Run is the thing that has me the most excited. Cloud Run is amazing for
application developers, because, if you've got
any stateless components in your application, you can
package them up in a container, and just give it to Cloud Run. Cloud Run is going to deploy it for you and give you a
secure URL that you can map to your own domain. And it will scale up for you. You don't need to think
about any of the rest. You just focus on your code. And the contract with the
Cloud Run container is simple. One is that it's stateless. So for all your stateless
components, Cloud Run is perfect. And the second is
that we give you the port that you should run
on as an environment variable. So when you start up, you look
at that environment variable, and you start your
application on that. That's it.
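As a minimal sketch of that contract (the project ID, service name, and region are placeholders, and the Python web server is purely an illustration -- any language or runtime that listens on the given port works):

```
# A container that honours the contract: stateless, and it serves HTTP on $PORT.
cat <<'EOF' > Dockerfile
FROM python:3-slim
CMD ["sh", "-c", "python -m http.server ${PORT:-8080}"]
EOF

# Build the image and hand it to the fully managed Cloud Run.
gcloud builds submit --tag gcr.io/my-project/hello
gcloud run deploy hello --image gcr.io/my-project/hello --platform managed --region asia-south1
```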
Now, when we think about-- remember at the start of this, I said, oh, people
their applications whilst they're moving to the
cloud and things like that? Cloud Run makes that
journey so much better, because, at the end
of the day, this is just a container,
which means you can write in whichever language you want. You can use any tools you want. You can have any dependencies
you want, put them into that container. And we handle the rest. So in terms of bringing across
legacy applications that are not working on the latest
versions of Java or Python and so on, we
support all of this. You could be a COBOL programmer. You might write Haskell
or Pascal, right? All of that is just supported,
because, at the end of the day, on Cloud Run, it's
just a binary. So the reason I'm
excited about this is that Cloud Run allows
me to bring and deploy my applications to this
modern cloud-native world. But it allows me to continue
to operate in an environment that I'm comfortable
in, that I already have an existing skill set. So Cloud Run is really great. And I really encourage
you to check it out. Now, I did tell you
that, with Cloud Run, you give it the container
image, and it will deploy it, and scale up for you. Sometimes, you don't want-- you
want a little bit more control than that. You don't want it to
be fully automated. So let me give you an example. Let's say you're doing some
machine learning, right? You have big data
sets that you need to train your application on. Now, typically with
machine learning, CPUs aren't enough to train
our data against our data set in a quick enough manner. So sometimes, we use GPUs. Google also offers DPUs. Now, if you want to
deploy your containers against specific
hardware like that, typically you'd do it with
Google Kubernetes Engine. And you can target your
clusters based on the hardware that they provide.
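As a hedged sketch of targeting that kind of hardware on GKE (the accelerator type, zone, and cluster name are placeholders):

```
# Add a GPU node pool to an existing cluster.
gcloud container node-pools create gpu-pool \
  --cluster demo-cluster --zone asia-south1-a \
  --accelerator type=nvidia-tesla-t4,count=1 \
  --num-nodes 1

# Pods then request a GPU in their spec, and the scheduler places them on that pool:
#   resources:
#     limits:
#       nvidia.com/gpu: 1
```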
So we have a version of Cloud Run called Cloud Run for Anthos that allows you to
deploy your Cloud Run applications on your clusters. So that gives you freedom
to run Cloud Run any way. You can run the
fully managed version of Cloud Run, where you just
give it a container image and we'll deploy it for you. Or you can use Cloud
Run for Anthos, where you can deploy it on
your Kubernetes clusters. And when we've been talking
about Anthos this whole talk so far, what we've been
saying is Anthos is allowing you to bridge
on-prem world with the cloud world. So if you want to
deploy your Cloud Run applications on your
on-prem environment, Cloud Run enables that. You can continue to leverage
your existing infrastructure. You can move it to the
cloud whenever you want. And you may choose not to. You may be happy with
that specific application. If at some point
you feel like, oh, I want to be vendor-neutral, the peace of mind you have is that Cloud Run is a managed version of Knative. So with that application,
you're not locked into Google's
proprietary platform, because Knative is open. That contract that I
told you about Cloud Run, it's just Knative. So you can take that
and deploy it anywhere that Knative runs. And I can tell you now,
Knative runs everywhere, across every major
cloud provider. So it's great.
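As an illustrative sketch of that portability (assuming a cluster that already has Knative Serving installed, and reusing the placeholder image from the Cloud Run sketch above), the same container becomes a Knative Service with nothing more than this:

```
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/hello   # placeholder image
EOF
```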
When we build our applications, we want to be able to test them. We want to be able to
have insight into them. So the second thing
that I'm super pumped about talking to
you about is Stackdriver. Stackdriver is our
tool for observability. And we referred to
Stackdriver earlier, when I talked about Istio. Istio, when you
enable Istio on GKE, it takes all the
logs and telemetry from your Kubernetes clusters
and sends them to Stackdriver. So you have this one tool,
where you can see the logs across your whole estate. It's super powerful
and super useful. Stackdriver also has things like application debugging built in. So you can link Stackdriver to your source code and set up breakpoints. And when that breakpoint is
hit, your production system will continue. Your users don't notice. But Stackdriver
will take a snapshot of the state of your
application and send it to you. Well, it'll store it for you in a queue. And you can debug
it at a later stage. So it allows you to get in
and identify those problems that you might be having
in production that you can't reproduce anywhere else. But it doesn't bring your
production system to a halt in order to do it. Stackdriver is awesome. So again, this is a
tool that's very worth your while checking out. And finally, we have
the marketplace, because ultimately there
are times when we don't need to build everything ourselves. You may just say,
hey, you know what? I have a specific job to do. And Redis does a
fantastic job of that. So I just want to deploy Redis
in my Kubernetes clusters. So we have this GCP
marketplace, where we've worked with a whole
bunch of third party vendors to certify software
running on GCP. And you can deploy
those applications on your Kubernetes clusters. And thanks to Anthos, those
clusters could be on-prem, or they could be in the cloud. The other nice thing
that the Marketplace does is that it gives you all of the
billing for those third party tools that you use on
your Google Cloud Bill. You don't need to
create and write up a new contract or purchase
order with another vendor. It's all just simplified
on your GCP bill. There'll be a section
there for whichever marketplace apps you use. And there you have it. We've gotten to the
end of our diagram. We've added Marketplace in
the middle and Stackdriver in the middle. So now, we can
explain all of this. We have Kubernetes
Engine and GKE On-Prem for orchestrating
our containers. We have Istio to help
us make sure that we can put security policies. We can do traffic routing across
the services in our estate. We've got Anthos
Config Management and the local policy repository, as well as Anthos Config Management on-prem, to make sure that we can have a centralized place for all of our governance, application settings, and policies so that they're consistent between our environments. We don't have a separation
between those things. And then we have the
Marketplace and Stackdriver to help us have a much better
application experience as well. So up until now, everything
I've talked about has been predicated
on one thing. And that's containers. A lot of customers out there
today deploy their applications with virtual machines. And for those people
that have been looking to migrate to containers,
it hasn't always been an easy journey. So to make that
easier, we've got a product called
Migrate for Anthos, which literally takes
your VMs and converts them into containers. What that does is that
your container images are much smaller,
because you don't have a whole copy of
an operating system. We also reduce the
burden on you so that, when you deploy a
container, you've only got your application to
package in that container. You don't have to apply
security updates and so on to the operating system, right? Because we do that on our
hosted infrastructure. Ultimately, rewriting every application to fit into this cloud-native world is not realistic. There's lots of legacy
applications out there. Things like Cloud Run and Migrate for Anthos make it easy for you to take
your existing applications and bring them onto GCP,
whether it's on the cloud or on your on-prem
setup thanks to Anthos. So if you want to get started
today, check out GKE-- cloud.google.com/gke. Check out Cloud Run. Really take the chance
to look at Cloud Run, if you're a developer,
because I really think it's a game changer
in terms of bringing legacy applications across and just
also having a great developer experience. And finally, have
a look at Istio, because it's really reshaping
the way we deploy applications. So that's Anthos. We're very proud of it. And we're delighted to see what
you're going to make of it. So thank you very
much for having me. [MUSIC PLAYING]