KAVITHA RADHAKRISHNAN: So
welcome to the advanced session on Building a Cloud Native
Application from Scratch. I am Kavitha Radhakrishnan. I'm a senior PM on
Google Cloud, working on the GCP Marketplace, as well
as Kubernetes applications. And I'll be speaking with
Kamil Hajduczenia, who is a partner engineer
also on Google Cloud, and he helps us get
our partners on board into the GCP Marketplace. And, yes, I know our names
are hard to pronounce. That's how advanced
this session is. [LAUGHTER] Thank you. Today, we will cover the ways
Kubernetes applications greatly improve the developer experience
for cloud native projects. We'll be doing a
very detailed demo of building an app
from scratch on Anthos, our new hybrid and
multi-cloud platform, and how GCP marketplace offers
open-source and commercial solutions that accelerate
your development. When we think
about cloud native, we think about cloud
native technologies that help enterprises build
and run scalable and secure applications in the cloud. Containers, service
meshes, microservices, declarative APIs, these all
exemplify this approach. Many of our customers,
including many of you here, actually think
about the way they modernized their applications
using containers. And so what we
found is that they would start with
a few containers, and then they start to
scale those containers. And then when these
containers scale, they turn to Kubernetes
as an orchestration engine due to its ease of use,
portability, and manageability. Let's start with a
quick show of hands. How many of you are using
container images today? OK. Fantastic. And how many of you are using
Kubernetes in production right now? It could be either on
prem or in the cloud. Great. That's a great number of you. So then I'm going to
assume that many of you are familiar with core resources
like pods, persistent volumes, and services that help you get
your workloads onto Kubernetes. And you're familiar with
the workloads resources like deployments, ReplicaSets,
StatefulSets, and DaemonSets for you to manage your stateless
and stateful workloads. But then we asked our
customers for feedback as they were writing
these applications. And they told us
that it was really hard to manage all of
these different concepts and stitch them together. And if you do manage to
stitch them together, how do you know that it will work? And will it continue working? Especially as the ecosystem
is scaling and changing so rapidly? So we asked ourselves,
what if there was a way for us to standardize
how we interact with these apps without all of this complexity? And that's how Kubernetes
apps were born. What we did is we worked with
SIG Apps and the open-source community on standardizing on
an Application resource, which is a Custom Resource Definition,
or a CRD, that helps define the core resources and
the workload resources in your Kubernetes applications. This helps us package up these
resources as a single app. Packaging up as a single
app has multiple advantages. We're able to provide you,
then, with lifecycle management, debugging, monitoring,
and even building out an ecosystem of apps that
we know will work well together. Thereby, making these
enterprise-ready containerized solutions. We then went one step further. We saw that customers
were using Helm charts for the deployments
of these applications. And so we went ahead and added
a deployer in the Marketplace, in the GCP Marketplace,
that really helps you deploy these apps anywhere
on prem or in the cloud. And so we've now created
these Kubernetes applications. We announced the beta last year. And these are available
in the GCP marketplace, enterprise-ready
containerized solutions with pre-built
deployment templates. We also made history
with this beta launch. We were the first
public cloud that could support commercial
Kubernetes apps in the GCP marketplace. But today is actually
a really exciting day, because I'm announcing
the general availability of Kubernetes applications
in the GCP Marketplace. [APPLAUSE] The GCP marketplace, for
those who may not know, is an online curated
catalog of solutions that we offer to enterprises. They can now just find
and buy solutions, just like any other
marketplace, but it also offers support for
management and deployment of these solutions. We offer multiple features
within the GCP marketplace. One is we're one of
the few public clouds that offer really popular open
source solutions packaged up in the Marketplace. So, for instance, RabbitMQ,
which we will show in the demo later on, or PostgreSQL,
that is also in the demo. One Google bill-- so it doesn't
matter what kind of solution you're using in the
Marketplace, whether it's SaaS or datasets or
Kubernetes apps or VMs, any enterprise that is working
with the GCP Marketplace gets one single bill
at the end of the day. And if you're a US developer
using these solutions, you can just start to use these
products within the Marketplace as well. Flexible pricing-- we
know that developers want to be able to have
the flexibility of being able to pay as you go. And so we give you
that flexibility in your usage, but we
also offer subscriptions for more predictable usage. So if you want,
you can use these products under a monthly or
annual subscription, which is going to
be out in beta soon. But in addition, I also want
to announce some new features as part of the
general availability. The first is that these
apps work with Anthos, and we'll cover a little bit
more detail what this means. But it really opens up the
world for these applications to be available-- build
once, run anywhere-- and to be supported on
hybrid as well as multi-cloud. Cloud-agnostic metering--
what this means is that we actually
follow the app regardless of where the app is deployed. So let's say you are deploying
onto GCP or into another cloud. You will still get metering,
so you can price and know that your application is
being priced correctly. We also offer custom
price metrics. So, for instance, if you're
building a CI/CD application, then maybe you want to price
on, say, the number of builds or the number of deployments. But if you are now building
a database-based solution, then you can customize your
pricing just on storage. And so we're very flexible about
offering you those features. And the last-- managed
updates, which is out in alpha now, one of the top asks. And so I'm going to dig into
that just a little bit more. We're introducing this
because our customers told us that it was really
painful for partners to create
these applications, push them
out to customers, and then not have a great way
to push feature updates or fix
security vulnerabilities. But instead of just
doing upgrades, we've also added health checks. So we do automatic rollbacks: there are
triggers that can
automatically push a rollback to a version that works. And so your customers are really
protected from any changes that you may not
be able to control. We've also added UI and CLI
support in the Marketplace, so you can push these updates
at any cadence that you want to. You've all seen a
version of this slide at one of your
sessions by now, but I just wanted
to call out why this is such an important
part of the GCP Marketplace strategy. So Anthos really opens
up the world for us to be building apps that you
build once and run anywhere. And so for us to
be able to support this in the Marketplace,
we have third-party apps in the Marketplace
today that work with Anthos. And you can start to
use them, and you'll see that in the demo. This vision has attracted
many enterprise partners to our ecosystem. Here's just a quick look
at some of those partners. You will recognize some
really familiar names, you will also see some
really exciting startups, but you will also see some
really popular open source solutions that Google
has packaged up and is making available on
the marketplace, as well. These are also in
multiple fields. And if you take a
quick look, you'll see they're across security,
database, storage and machine learning, logging, monitoring,
networking, and much more. So, really, expanding
to a base that meets the needs of any enterprise. And also excited to
say that many of these are available on-prem
today with Anthos. A huge thank you
to these partners. They've been working very
closely and really hard over the last few weeks to
make this moment possible. You will see some of these
particular apps in the demo. Speaking of which, let me please
invite Kamil up on the stage. [APPLAUSE] KAMIL HAJDUCZENIA: Hi, everyone. I'm Kamil Hajduczenia. I'm a partner engineer at
Google, which practically means that I work with
our awesome partners to build some cool products
for GCP Marketplace. And today, I'm going to show
you a few tips and tricks that I use in everyday work. Kavitha, how can I
actually help you today? KAVITHA RADHAKRISHNAN:
That's a great question. We could have built
any application or used any use case, but we've
picked a broad enterprise use case that covers product
and finance and marketing. But you can use this to
extrapolate to any use cases you might have. So before I start, let
me transform myself with these glasses into
a business analyst. And I'm now speaking
to my app developer. So, hey, Kamil, we need to build
a custom metrics collection app that collects metric
streams from all over the world. And this data cannot fail. This is absolutely critical
data for our business. And also remember that
the US team is on GCP, but the India team
is on another cloud. KAMIL HAJDUCZENIA: OK. So very interesting use case. So let's start with those
multiple systems that want to connect to our application. We don't have full
control over them. Probably some of them are
going to use some special data format or custom
authentication mechanisms. So we could actually
start with the REST API. The REST API can be managed by
an API gateway, like Apigee, and it will allow us
to easily integrate multiple systems together. You also mentioned that you
want to run our application in different environments,
like different clouds, so we want our application
to be portable and have a scalable ecosystem. So let's start with Kubernetes. Kubernetes sounds
like a great fit. It can run anywhere, on-premises
and on different clouds. And, actually, with
Anthos that you mentioned, we can take
advantage of managing those different Kubernetes
clusters from a single point, which is GCP. So tell me something more
about those spikes in traffic. KAVITHA RADHAKRISHNAN: Yes. So see, the marketing team
runs campaigns and events constantly. And so sometimes they
collect new customers, which can really make the
data spike dramatically. KAMIL HAJDUCZENIA: OK. So it means that our
application needs to be able to be scalable, to
dynamically react to increasing demands that we may see. At the same time, we
want to save money, and whenever the
traffic goes down, we want to have an
ability to scale down. You also mentioned that we
don't want to lose any data. KAVITHA RADHAKRISHNAN:
That's right. KAMIL HAJDUCZENIA: So whenever
we are scaling or facing any spikes, we probably want
to use a message broker. A message broker will actually
persist coming requests and let the back end handle
them whenever it's possible. KAVITHA RADHAKRISHNAN: A message
broker sounds like a great fit, but don't forget the
CIO is really concerned about the data and the
security of that data. KAMIL HAJDUCZENIA: OK. I wouldn't be that much
worried about security. We can use Istio. Istio will be used for
increasing the security thanks to the different
configurations that we can use. At the same time,
we can actually take advantage of
different monitoring capabilities of Istio. KAVITHA RADHAKRISHNAN:
This all sounds great, but I needed this yesterday. KAMIL HAJDUCZENIA: OK. As always. So let's try and see what we
can achieve in 25 minutes. KAVITHA RADHAKRISHNAN:
25 minutes. That's insane. Kamil, please take it away. KAMIL HAJDUCZENIA: OK. Let's do it. So we have a bunch of
technical requirements that you mentioned. So let's do a small
system design. So we started with the REST API. So we have different
clients that are connecting to our system
through a REST API service. It must be backed up by
some component that is able to handle some logic here. So we'll have a
deployment for our API. By its nature, it's
extremely scalable because it's stateless. So let's take advantage of that
and attach a horizontal pod autoscalar that can take care of
automated scaling up and down. At the same time, we know
that some of the data need to be persisted. KAVITHA RADHAKRISHNAN: Correct. KAMIL HAJDUCZENIA: So we have
a separate deployment that will serve as our back end. And it has it's own
horizontal pod autoscalar because by its nature
it scales differently than the stateless API. So we want to take independent
configuration here. At the same time, we want
to have a database where we'll persist the information. We don't want any
vendor lock in. We want to have solutions that
will be able to run anywhere. So in this particular
case, I will take advantage of
GCP Marketplace and install an application
from their open source version of Postgres
is already available. At the same time,
we need to connect those two, the front end,
the back end together. And we'll do it with a
message broker, with RabbitMQ. RabbitMQ is also available
on GCP Marketplace, and let's take
advantage of that. At the same time,
we need to have some configuration that will
be shared across those two components. So it will be something like
pointing to a specific end point for Rabbit, for Postgres. We'll have some secrets
for authentication to those systems, and
this will be short. At the same time,
we want to have Istio for additional security. And this is our system. OK. So let's take a demo. It's going to be live, so
we'll see how it works. But in the demo,
we're going to show the full experience, the full
journey that at a developer needs to take. So we'll start with actually
creating new clusters or connecting existing
ones because this is a very common situation. At the same time,
we want to deploy some applications
from GCP marketplace to GK on prem or
external cluster. We'll take a look at the
code and we'll transform it to container images at
first, and after that, we'll run a Kubernetes application
on our external cluster and then we will
release a new build of that application, which
is also a common situation. And at the end, we'll
take a look at Istio. KAVITHA RADHAKRISHNAN:
And you're saying you're going to do
all this in 25 minutes? KAMIL HAJDUCZENIA: Yeah. KAVITHA RADHAKRISHNAN:
I'm a bit skeptical. KAMIL HAJDUCZENIA: We'll see. OK. So let's kick off the demo. Let's kick off. [LAUGHS] OK. I think we are already switched
to my computer, which actually shows exactly the same screen. So, yeah, great. It's true. So we're back. So as I promised, we'll
start from scratch. So whenever you
start from scratch, you might not have any
Kubernetes cluster. So what we can do, we can create
a new cluster on the cloud. And GKE UI, which is the user
interface for Kubernetes engine on Google Cloud console is a
great way to actually start. So probably some
of you have already tried it, so I'm not going
to spend here too much time and will not wait for the
cluster to be created. But let's take a look
at the few options that might be useful for our
production workloads. So first of all, I would think
about using a regional cluster instead of zonal one. KAVITHA RADHAKRISHNAN: OK. KAMIL HAJDUCZENIA:
Thanks to that, I will get high availability. And I will get multiple
masters running for my cluster, and all of them
managed by Google. Here, I can also select to have
multiple node pools perfectly adapted to whatever my
application might need. So I might have a few pools,
actually, in the same cluster with different sizing. But there are also
advanced options that I may actually use. So let's start with such,
like Enable Master Authorized Networks, which will
not allow external and points to connect
to our master instances. Then we could take a look at
Stackdriver Integration, which is awesome because all the logs
from our Kubernetes cluster and all the metrics may
ultimately land in Stackdriver. And by the way, in
GCP Marketplace, our application-- or actually
most of the applications-- already exposed some
app-specific metrics that can be collected
by Stackdriver or your own Prometheus
metric system. So if you are already
familiar with Prometheus, you can take advantage of that
and simply integrate our apps with your server. In addition to that, we have
such add-ons, like Istio. With a single click, we can
install Istio on the cluster. And node auto-provisioning
is also very interesting because we wanted our
system to be auto scalable. So we start with the
workloads, and we use horizontal pod
autoscalers to actually scale up and down dynamically. But at the same time, we might
think about scaling the nodes on which our cluster operates. So additional VM
instances might be created automatically, or even
right sizing might happen. So the size of the individual
VM instances might change. KAVITHA RADHAKRISHNAN:
So you can really design your entire cluster
right here from within the CY. KAMIL HAJDUCZENIA: Yep. It's so powerful. KAVITHA RADHAKRISHNAN: Nice. KAMIL HAJDUCZENIA: OK. But this use case is not
that very interesting. And I think everyone is
waiting for connecting external clusters. And by the way, I
already have one. So let's take advantage of that. I switched to my
terminal because here I have some connectivity
with an external cluster. Let's see if it actually works. So I will maybe get the list
of nodes from that cluster and see if we have connection. Yeah. Houston, it works. KAVITHA RADHAKRISHNAN: Nice. KAMIL HAJDUCZENIA: Great. So we have two nodes that
are created on another cloud. We would like to connect
that cluster to GKE UI and start managing
that cluster here. So I have a magic
command that is actually coming to be available
publicly very soon. And connect. Where is it? It's hidden somewhere. OK. Connect. Yeah. I have it. It's right here-- no, not yet. KAVITHA RADHAKRISHNAN:
So we know this is real. KAMIL HAJDUCZENIA:
Yeah, definitely. As always. Connect. And-- almost there,
almost there. KAVITHA RADHAKRISHNAN:
So what you've done here is you've created a
cluster on another cloud and which many of you
may already have, right? You're already running
Kubernetes on clusters that are not on GCP. And what Kamil is
going to show us here is you can have this running
cluster that you can then use to connect either from the
cluster using your terminal, connect it to GKE
or from GKE itself. KAMIL HAJDUCZENIA: Exactly. This is what we are going to do. As always, network
works so slowly. So once again I will take
advantage of that setup that I've already prepared. I need a bunch of-- most probably here. OK, great. And now, maybe. Connect, once again. Connect. gcloud at first. Alpha. Which I called alpha. Yes. I got it. It's back. [APPLAUSE] Live demos. Always happening on stage. KAVITHA RADHAKRISHNAN:
Can you tell us what was happening there? KAMIL HAJDUCZENIA: Yes. Sure. So, actually, this time I'm
installing an additional agent to our Kubernetes cluster. And it will allow GKE to handle
that cluster from GKE UI. So we'll have live
connection between those two. Let's see. It looks like our job succeeded. So let's come back to GKE UI and
refresh the list of clusters. And, yeah. It's there. Awesome. KAVITHA RADHAKRISHNAN: Awesome. So what you did here
was using the terminal to run commands that will help
you connect the cluster to GKE. Now, you're able to see from
within GKE not just clusters that are running on
GCP, but clusters that are running on other clouds. KAMIL HAJDUCZENIA: Exactly. KAVITHA RADHAKRISHNAN: Awesome. KAMIL HAJDUCZENIA: And as you
can see, the list of nodes is exactly the same as I had in
my terminal just a minute ago. So let's move farther
with that configuration. Once we have this external
cluster connected here, maybe we could actually try
to deploy an application here. KAVITHA RADHAKRISHNAN: Yes. KAMIL HAJDUCZENIA: OK. So I moved to the
application screen. This is a nice list
of applications that are installed to my cluster. Right now, I don't have any. So let's start with
deploying the very first one. I clicked on the Deploy
from Marketplace button, which actually opened up
the catalog of applications available for
Kubernetes clusters. KAVITHA RADHAKRISHNAN:
So I just want to call out here is what we
we're pretty deeply integrated from a Marketplace
perspective into GCP services. So you saw where
Kamil didn't have to leave the GKE UI to
go find the marketplace app that he wants to deploy. It happens right
there in context. KAMIL HAJDUCZENIA: Exactly. So now I'm in a configuration
screen, and as you can see, I can select a cluster to which
my deployment will happen. And my external cluster is
already here, so, great. Now, I may select a namespace
to which I will deploy that app. So I can also
customize the name, it will also become a
prefix for all the resources that will be created. I just changed the size
to just two replicas to show that it is actually a
cluster, but at the same time, not to spend too
much time in waiting. And let's hit the Deploy button. What is happening? We move to the application
installation screen and we see some progress. I think we are going to see
a more detailed information. OK. Yeah. It's here. KAVITHA RADHAKRISHNAN: Awesome. So you just deployed
RabbitMQ, which he procured from the marketplace
onto that external cluster that he's now
managing through GKE. KAMIL HAJDUCZENIA:
So you mentioned Kubernetes applications. Kubernetes applications
are a great way of managing multiple
resources that form a single
installation, like RabbitMQ in this particular case. We don't only get the
list of all the resources that formed my RabbitMQ
cluster, but we can also see some additional
information like here, probably a password-- I wouldn't check
this right now-- and a username. I could use it for logging in. So in the meantime, I would
just take a quick look at the service because
by default it was private. And we can see one more
feature that is currently offered; let's click
on the Edit button. And I actually want to
change the cluster IP type of the service
to load balancer. I want to make the service
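Editing the Service in the UI amounts to changing its type, roughly like the following sketch; the service name and port here are hypothetical placeholders, not taken from the demo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-rabbitmq-svc   # hypothetical name of the RabbitMQ service
spec:
  type: LoadBalancer            # was ClusterIP; now provisions an external load balancer
  ports:
  - port: 15672                 # RabbitMQ management UI port
  selector:
    app: rabbitmq
```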
available publicly. So let's hit Save. KAVITHA RADHAKRISHNAN:
The other advantage with having RabbitMQ
be in Kubernetes app is that UI you just saw, where we
were able to pull information from the configs and be able
to show you the secrets right there from within the
UI, rather than you having to dig through a bunch
of config files, that was made possible because of the work
we've done in packaging these into a single app. KAMIL HAJDUCZENIA: OK. And what you can see right
now, we automatically integrated with load balancer
working on another cloud, and we get a public endpoint. KAVITHA RADHAKRISHNAN:
That's amazing. KAMIL HAJDUCZENIA: So let's
give Rabbit a few minutes to actually finish
the installation. We can see that one of the pods
is already being scheduled. We want to have two. And in the meantime, we can
switch the environment to on prem. So on-prem, GKE On-Prem was
announced a few months ago. And probably most
of you have already heard about GKE On-Prem. Have you heard
about GKE On-Prem? Who heard about GKE On-Prem? Great. Great to see that. So not many of you have
probably had a chance to try it out because this is still
in alpha, but we already got a lot of feedback from
our customers and partners, and it's great. People love it. I love it, too. So actually here, I
have another project. They have two clusters--
one of them is cloud and the second one is on prem. And as you can see, this
presented as an external one. I can also see all the nodes. If I'd like, I can
click on any of them and check some
details, like metrics that are related to that
particular node or all the resources that
were deployed here. So this is fully
managed from GKE UI, but it is working on a
separate data center on-prem. So right here, we could
also try to run a deployment of another application. Maybe this time something
from one of our partners. So Jenkins seems to
be one of the most popular CI/CD pipelines
already used in the market. Like its service show that
it's the most popular solution for open source projects. And we actually
have a partner who prepared a curated enterprise
version of Jenkins, like this CloudBees. So let's see if we can deploy
CloudBees solution to an on prem cluster. So once again, I navigated
to the application screen, I selected CloudBees, I
click the Configure button, and once again, I can
select a cluster to which I will deploy this application. And as you can see, user cluster
1 is exactly the on-prem 1, it's marked as external. I can create a new namespace
this time, let's say. So this will be CloudBee's demo. And I can also name my
application somehow, like CloudBees demo once again. Now we have the two most
complicated, I think, fields in this
configuration form. So in this on prem
cluster that we have, we reserved a few
IP addresses that can be used for exposing
a public endpoint. And by the way, I already know
those IP addresses, otherwise, I would have to check them out. KAVITHA RADHAKRISHNAN:
This is needed so we can talk to those on
prem servers or clusters. KAMIL HAJDUCZENIA: Yep. Exactly. This is what we want to have. And since I'm not entirely sure
if those are the correct ones, I have prepared a small
note here just for myself. OK. I've got it. So here I will use
that IP address. And it's exactly the same. Yeah, I remember it well. So I would select
the storage class name. And here I'm running
the deployment. Let's see what happens. Once again, we see the
progress of the installation. And something is happening. Let's see the result.
Oh, so stressful. Those live demos. KAVITHA RADHAKRISHNAN:
This is happening live. KAMIL HAJDUCZENIA: Yep. KAVITHA RADHAKRISHNAN:
And so just to talk about, RabbitMQ is an open
source application that you deploy to a
cloud that is not GCP. And now you're taking a
third-party app like CloudBees and deploying it to
an on prem cluster. KAMIL HAJDUCZENIA: Exactly. This is what is happening here. We can see some progress,
like logs showed. So maybe let's
take a look what's in the detailed information. So, moment of truth. And this is CloudBees. This is a detailed information
about the application. And we can see that all
the resources got created. So let's take a quick
look at the ingress here. Maybe we'll also be able to
connect to that installation and see that beautiful
wizard of Jenkins, if it's already running. Maybe it's not yet, so then we
will need to give it a minute. I think so. It is still being installed. So in the meantime, we can
take a look at RabbitMQ. And let's refresh the
page and see if we already have our pods running. OK. Let's take a quick look here. So we navigate to this table
set and see the Rabbit. OK. That's always live demos. So it gracefully failed. But still, gracefully failed
in an external cluster. So let's take a look if
CloudBees is actually here. OK. Almost there, I think so. Let's take a look at this pod. And the pod is
already running, so we should be able to see Jenkins
in just a few seconds. We're expecting a beautiful
wizard that should actually start asking me for
a password that I should take from a container. And I think something
is working because I got redirected to the login page. And-- yes. Great. I can see that. KAVITHA RADHAKRISHNAN: Awesome. Great. [APPLAUSE] KAMIL HAJDUCZENIA:
CloudBees running on prem. Great. So at this point, we have
already fulfilled first two steps that I promised to show. So what we did, we connected
an external cluster running on another cloud to GKE UI. And in addition to
that, we also showed deploying the
application to on prem and to an external cluster. So at this point, we should
actually switch to the code and transfer it into
a running application. So I know that Java is a bit
old-fashioned way of building applications. But I really like it. This is why I use Java. The problem with Java is that
you always get a lot of code for doing simple stuff, but
I actually used Spring Boot. And Spring Boot is
pretty simple to build. KAVITHA RADHAKRISHNAN:
What is Spring Boot? KAMIL HAJDUCZENIA: It's
application development framework that's
running in Java. It's extremely popular. It allows you to take advantage
of many built-in configurations and integrations. So just to take a quick
look at what did we do here. We had this architecture
diagram that was using a
deployment for an API and another deployment
for the back end. So let's take a look at
what it looks like here. So I have two endpoints
in my application. This is the REST API. So I have one of them
to update the usage-- and this is actually what our
external systems will use, they will send out data here-- and another one to
prepare a report of usage. At the same time, we
also have a component that is handling
connections with RabbitMQ. And here we have listeners
that are actually taking the requests
coming through RabbitMQ and handling them gracefully. So what we want to do is
transform those source code components into a
running application. So what we will do first is
we're going to run Cloud Build. Who has heard about Cloud Build? OK. Some of you have. So
what we are going to do here is use the
CI/CD pipeline that's built into GCP. And you can easily
define a few steps that should be executed
by Cloud Build. Each of them is actually
running a single container. You can pass properties
to that Cloud Build. Let's just take a look. Yes, it's running. So this is a running
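A Cloud Build pipeline like the one on screen is defined by a cloudbuild.yaml, where each step runs in its own container. A minimal sketch; the image names and source paths are hypothetical:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'   # each step runs a single container
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/metrics-api', './api']         # hypothetical path
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/metrics-backend', './backend'] # hypothetical path
images:                                   # push the results to Container Registry
- 'gcr.io/$PROJECT_ID/metrics-api'
- 'gcr.io/$PROJECT_ID/metrics-backend'
```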
Cloud Build instance, job. In the first one, I'm
building the API component, and the second is building
actually the back end one. After that, my
container images will be published to
container registry, that is also a managed service
offered on Google Cloud. So I don't only have the
way to build a container, I also have a way
to store them here. And by the way, I also took
advantage of another service offered on Google Cloud. This is Cloud
Source repositories. So I didn't have to care
about building my own Git repository anywhere, I already
had one in Google Cloud. So this is where my code lives. So let's move farther. What we can see here is that
our containers are already being built. Let's show
the newest logs first. I think it will take
another minute or two. So in the meantime,
we could take a look about the way of
transforming those container images into an actual
application running on Kubernetes. So let's start with
the application CRD that you
mentioned previously. I already have something
like this here. So this is a CRD. This is a custom
resource definition that was created together
with a special interest group of applications in Kubernetes. And here I define
the application name, I defined some descriptions
versions and so on, but most important,
I define the rules for collecting different types
of workloads and resources already running in Kubernetes
to say that they form together an application. So this is what I have here. And it's, of course, accompanied
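An Application resource along those lines might look like this. It's a sketch based on the app.k8s.io Application CRD from SIG Apps; the name, label, and descriptor values are hypothetical placeholders:

```yaml
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: metrics-app            # hypothetical application name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: metrics-app   # collects all resources carrying this label
  componentKinds:              # the workload and resource types that form the app
  - group: apps
    kind: Deployment
  - group: ""
    kind: Service
  - group: ""
    kind: ConfigMap
  descriptor:
    type: metrics-collector    # hypothetical metadata shown in the UI
    version: "1.0"
```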
by other manifests that I have. So the yaml for
API, I have it here, I start with the
dedicated service account and follow it with the
deployment pointing to my container image,
and actually exposing a service with an ingress. All of those happening in
regular Kubernetes manifests. But after that, collected
by the application. So let's run this installation. In the meantime, we could
just take a very quick look at the builds. Yep. That's done. So our container image
is already built. So we're going to run
the application here. KAVITHA RADHAKRISHNAN:
As you do that, I just want to clarify, so that the
audience is following along: we created a container
image and now you are transforming it
into a Kubernetes app by packaging it with an
application resource CRD. KAMIL HAJDUCZENIA: Yep. Exactly. We'll start with actually
running the kubectl command to deploy everything. So all of our manifests will be automatically installed to our Kubernetes cluster. I'm using a proxy machine because the on-prem cluster is actually located in an external data center. And that data center does not have public connectivity by default, only through those three IP addresses, and only for egress. So what I'm going to do right now is combine two steps: I will enrich our manifests with sidecar proxies for Istio, and at the same time, I will apply the manifests to the Kubernetes cluster that we have. KAVITHA RADHAKRISHNAN:
Kamil, could you say more about why we're enriching the manifests with the Istio sidecar? KAMIL HAJDUCZENIA: Sure. So we wanted to use Istio for additional security, and Istio easily takes care of all the network traffic management. It's actually built on top of Envoy: we put an additional Envoy container as a sidecar into each pod running our application, and thanks to that, all the network traffic goes through those proxies inside the pods. This is one of the most important Istio features, and it means Istio is going to take care of all the traffic here.
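The two combined steps (sidecar enrichment plus apply) are commonly run as a single pipeline with istioctl; a sketch, where the manifest file name is illustrative:

```shell
# Inject Envoy sidecar containers into every pod spec, then apply the result.
istioctl kube-inject -f app-manifests.yaml | kubectl apply -f -
```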
So let's take a look and see whether our application is already deployed. So I navigate to the list of applications and, yes, it's here. It's a Kubernetes app, and it says that some components are still being installed. But we can take a look at what is already here. OK. We have two deployments; we have services, actually one service and an ingress; and we have a config map and a secret. That's great. So let's open that ingress. It says that it's still being configured, but I think it might already be available. So I click on it and I see a beautiful Swagger UI. By the way, are you already using Swagger? Could you please raise your hand? Yeah, almost everyone uses Swagger. So Swagger is a way of defining the documentation, or actually the standard, of your API, and it can easily be used by an API gateway like Apigee to transform your Swagger definition into a running API. So I have two endpoints here. Let's see if they work.
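For reference, a minimal Swagger (OpenAPI 2.0) definition with a read and an update endpoint might look like the sketch below; the paths and fields are invented for illustration, not taken from the demo:

```yaml
swagger: "2.0"
info:
  title: Reports API
  version: "1.0"
paths:
  /reports/{year}:
    get:
      summary: Fetch a report for a given year
      parameters:
        - name: year
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: The requested report
    put:
      summary: Submit an update for the same resource
      parameters:
        - name: year
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: Update accepted and scheduled onto the broker
```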
So I will start by sending a query for a report. Let's say that I want a report for the whole year. This nice Swagger UI actually allows me to send REST API calls to my endpoints. Yes, I have a report. So let's see if it also works for updates. This is what our external systems will do. So I will send an update for the same resource, and I will specify some date in the last year; let's say the 1st of January. KAVITHA RADHAKRISHNAN:
We're getting very close to your 25 minutes. KAMIL HAJDUCZENIA: OK. I know, I know. But we're almost done. KAVITHA RADHAKRISHNAN: OK. KAMIL HAJDUCZENIA: And we're actually confirming that everything is working. KAVITHA RADHAKRISHNAN: Which is a big part of the demo. KAMIL HAJDUCZENIA: Don't complain that much. Come on. [LAUGHTER] Let's take a look. Here, the previous report had some huge number, but we wanted to actually increase it. It ends with an eight; it should end with a zero after the update. Let's try it out. OK. Everything went fine, so the message was scheduled onto the broker. Let's see if it was actually handled. Yeah. It was. KAVITHA RADHAKRISHNAN: Yes. Awesome. [APPLAUSE] KAMIL HAJDUCZENIA: OK. So we have our
application working. So there's just one small thing that we could do. In this report, you can see that the overview says "to do." "To do" suggests that we are not done yet, so let's make it done. Let's take a look at where it might be. It might be in the broker controller, if I remember correctly, because this is what I did. So we want that to be done; let's change it to "done." And now let the magic happen. We'll commit that change: so git add, git commit-- and git commit once again. That commit message will say as much, and we send it out. OK. So I pushed a new commit to my repository. I also added a tag here.
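The commit-and-tag flow just described can be sketched as follows; the repository, edited file, commit message, and tag name are all illustrative stand-ins, not the demo's real ones:

```shell
# Illustrative repo: commit a change and tag it so CI/CD picks it up.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email dev@example.com
git config user.name Dev

# The edited file and its content are hypothetical stand-ins.
echo "overview: done" > broker_controller.yaml
git add -A
git commit -qm "Mark overview as done"
git tag v1.0.1   # a new tag is what fires the Cloud Build triggers

git log --oneline -1
git tag --list
```

In a real setup the final step would be a `git push` of the branch and tags to the Cloud Source Repositories remote.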
And now, a piece of magic that I wanted to show you. So there is Cloud Build. And in Cloud Build, two new builds were already triggered thanks to that commit. Why? Because I have a trigger that is saying-- actually, I have two different triggers, both of which say that whenever a new change happens in my repository and there is a new tag, container images are automatically built. And if I'd like, I can add an additional step to actually run a rolling update on my deployments. And this is a great approach, I think, to releasing new builds. Everything you should care about is the source code, and whenever you deliver an update to your code, it should be automatically picked up by your CI/CD pipeline and delivered to your different environments. There is no point in doing any manual work here. OK.
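A tag-triggered build along those lines could use a `cloudbuild.yaml` like this sketch; the image and deployment names are illustrative, and one such config per component would match the two triggers described:

```yaml
steps:
  # Build the container image, tagged with the Git tag that fired the trigger.
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/api:$TAG_NAME", "."]
  # Optional extra step: roll the new image out to the existing deployment
  # (a real pipeline would also need cluster and zone settings here).
  - name: gcr.io/cloud-builders/kubectl
    args: ["set", "image", "deployment/api", "api=gcr.io/$PROJECT_ID/api:$TAG_NAME"]
images:
  - gcr.io/$PROJECT_ID/api:$TAG_NAME
```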
You say that we are running out of time, so maybe let's not wait for the update to happen. It will build new container images, and this new release can be used in our deployments. But there is one more thing
that I wanted to show you. So we wanted to have Istio here. So let's see if my Istio
connection is still working. No? So I'll just run a port forward to make it happen: a port forward for Kiali on port 8082. KAVITHA RADHAKRISHNAN: So what are you doing here? KAMIL HAJDUCZENIA: I'm just running a port forward: I want to forward the port of my Istio visualization tool, Kiali, from the on-prem cluster to my local computer. And let's see if it's working. So, almost there, or not entirely there. That can also happen. I love those live demos. Let's just establish the connection once again. Here, I am connecting to the first proxy. After that, a second one. Then the security. So just a second. And another connection with port forwarding. And after that, just one more port forward. Port forward. And, OK, there is already something listening. So ps aux | grep forward, and kill. Kill the previous forward. And let's do it again. Port forward. Yep. Something is running. Let's see. Well, yeah. It worked. I love it. [APPLAUSE]
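The port-forward wrangling above boils down to commands like these; the Kiali service location assumes a default Istio install, and 8082 is the local port chosen in the demo:

```shell
# Forward the Kiali UI from the cluster to localhost:8082.
kubectl -n istio-system port-forward svc/kiali 8082:20001

# If something is already listening locally, find and kill the stale forward.
ps aux | grep port-forward
kill <old-forward-pid>   # placeholder PID
```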
So, one thing that I wanted to show you is the service mesh that was automatically created by Istio. KAVITHA RADHAKRISHNAN: That's pretty cool. KAMIL HAJDUCZENIA: So thanks to the proxy sidecars that we had in our deployments, Istio was able to build the whole service mesh visualization. And we also have a
small addition to that: the Postgres and Istio configuration. What I wanted to show here is basically a way of securing your infrastructure with Istio, just by defining additional service roles for accessing specific services in our Kubernetes cluster. By binding those roles to specific service accounts or users, we can define who can connect to a particular endpoint; in this case, we define who is able to connect to our Postgres instance.
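With the Istio 1.0-era RBAC API, such a policy might look like the sketch below; the role, namespace, and service account names are illustrative, and newer Istio releases express the same idea with AuthorizationPolicy instead:

```yaml
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: postgres-client
  namespace: default
spec:
  rules:
    - services: ["postgres.default.svc.cluster.local"]
      methods: ["*"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-postgres-client
  namespace: default
spec:
  subjects:
    # Only workloads running as this service account may reach Postgres.
    - user: "cluster.local/ns/default/sa/broker-controller"
  roleRef:
    kind: ServiceRole
    name: postgres-client
```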
KAVITHA RADHAKRISHNAN: So you got through everything? KAMIL HAJDUCZENIA: Yeah. That's all. Are you satisfied? KAVITHA RADHAKRISHNAN: I'm more than satisfied. What would have taken us days just took us minutes. So that's amazing. It's awesome. [APPLAUSE] KAMIL HAJDUCZENIA: Thank you. KAVITHA RADHAKRISHNAN: So
what we saw in the demo is the ability to connect,
deploy, create, improve, and secure. And you've seen how we could
use the GKE UI and the GCP Marketplace to
really light that up. So thanks, Kamil, for walking
us through those very, very detailed steps. We also wanted to wrap up here by showing you how the strategy of going after hybrid and on-prem is really attracting large enterprises to the GCP Marketplace. And we're hoping
that you will also be one of them very soon, if
you're not already using it. We're also announcing
these open source service partners, which you may have
seen at the keynote as well. These are going to help us deliver popular open source solutions to our customers through the Marketplace,
and we believe that this will show our deepened
commitment towards open source as well. Finally, I just want to announce our newest Kubernetes app solution from Palo Alto Networks: their just-announced service mesh security adapter, built on Istio, is available in the Marketplace today. And we're very
excited because this is an example of a partner
that started with VMs and has been expanding their portfolio to cover Kubernetes apps and other
types of solutions within the Marketplace. We're also welcoming a number
of new and updated listings. We call out the updated ones because they show just how engaged our partners are with the Marketplace: you can find new apps there every day, as well as new features added to
existing apps that you can use. You can see a list here of brand
new exciting startups, as well as very established companies. And you've seen the power
of the GCP Marketplace. If you're a partner or a
developer that wants to sell on the marketplace and
really increase your reach, please go to cloud.google.com/marketplace/sell. And we can talk you through it-- there's an entire process for how we can get you onto the Marketplace and really help you expand your reach. So today, just to
recap, we covered the ways Kubernetes
apps can greatly improve the developer experience
for cloud native projects. We did a demo with
some real-time anxiety in it of building
an app from scratch on Anthos, our new hybrid
and multi-cloud platform. But Kamil showed
you just how easy it is to go from nothing at all to a fully deployed app, not just in hybrid but in multi-cloud scenarios. And we showed you how the
GCP Marketplace is integrated into your experience
and offers open source, as well as
commercial solutions that accelerate your development. Two calls to action. One: if you haven't
visited our GCP Marketplace showcase yet, please
go see the demo there. And you can visit us online
at cloud.google.com/kubernetes applications. Thank you. [MUSIC PLAYING]