[MUSIC PLAYING] CHEN GOLDBERG: Just
last year, at Next 2018, we'd been here introducing
Cloud Services Platform, now with its new name, Anthos. Anthos is a true hybrid
and multi-cloud platform that lets you write once
and deploy anywhere. In the eight months between then
and now, we've been quite busy. And we're excited to show some
really awesome technologies as we move this platform to GA. But for me, what was
the most exciting thing was talking with our
customers, the people we've met in this journey. Over 1,000 companies
expressed interest in signing up for CSP Anthos
when we announced. We saw incredible interest
from banks, financial services, retail and public sector. We're going to
introduce you today to some of the customers
that partner with us in bringing Anthos to GA. They will share
their stories, how they're leading change
in their industries and how this technology is
unlocking business success. APARNA SINHA: Unlocking
business success-- what do we mean by that? As Chen and I were building
this platform in the early days, we were asking
ourselves the question-- what separates the leaders from
the rest when it comes to IT? And we were hearing
from CIOs, over and over again, that speed, reliability,
and access to information is critical to their business. It's what allows them to win
and to turn around their sales. When we look at
leading companies, like Target and eBay
in the space of IT, we're amazed by
what they are doing. If you think about
it, they need to be able to predict what you
want to buy even before you know what you want to buy. They need to be able to stock
it in a location near you, and they need to be able to get
it to you faster than anybody else. What does that require from IT? Well, that means
they need systems, all the way from their mobile
app to their back office inventory system to their
point-of-sale systems to be fast, to be secure,
and to be intelligent. Is this only true in retail? For those of you that
work in enterprises, you know this is not
just true in retail. This type of IT agility is
expected in every industry, in your enterprise. And yet, there are so
many enterprises that are not able to make this leap. And we think there are
good reasons for that-- actually, three big reasons
that we want to highlight. Number one-- risk. Every enterprise needs
to minimize risk. Maybe your company is
doing that by restricting the services and the data
that you can put in the cloud. Or maybe they're restricting
even which cloud you can use. Users of GKE, BigQuery, and
Google's Cloud ML Services, they tell us the
speed at which they're able to move by using these
services is phenomenal. But we know that there are
many users out there who don't have access
to these services, because they have to
wait for long periods for their central IT to
approve the services. And that's because the IT
is trying to reduce risk. But actually, the
irony of it is, they're increasing the
risk of being left behind. So risk is one. The other challenge enterprises have is how do you innovate
with what you have? We all know there's lots
of existing and some legacy systems in the enterprise. So maybe your need
for speed is being stalled by that monolith
that just doesn't scale. But does it have to be that way? Can we innovate with monoliths? That's the question we
were asking ourselves. And then lastly-- and I think
this is top-of-mind for all of us, especially all of you-- is talent. Who's going to write
those modern applications? Who's going to transform IT? It's really hard to hire and
retain top technical talent. And it's even harder if you're in a non-technical industry. We started CSP because we knew these challenges are actually eminently solvable, and
we had already solved them to an extent with Kubernetes. But what we hadn't done is
solve them for you on-prem and across clouds. And we really hadn't solved
them for your monoliths. CHEN GOLDBERG: So
what's the solution? Let me give you a hint. It's about putting
the user first. We hear our customers tell
us they want everything baked into the platform-- agility,
security, and reliability. Anthos is a Google [INAUDIBLE]
solution, [INAUDIBLE] solution, building
upon open source technologies like Kubernetes,
Istio, and Knative. This provides the benefits of workload portability, ensuring no lock-in to
a specific environment. Starting from the
bottom, we shipped an on-prem
distribution, bringing our years of experience managing
Kubernetes clusters to you. On top of that, we have built
a multi-cluster multi-cloud control plane hosted on GCP
that allows you to connect and manage all of
your environments, including non-GKE clusters, in a single pane of glass. You can also apply the
same policies and controls from a centralized place. But what about workloads
which are not containerized? For that, we've added our
service management layer that is compute agnostic and that lets you connect all
of your services, both monolithic services as
well as Greenfield services. And on top of everything, we
have an awesome marketplace, introducing Google's first-party
and third-party applications ready to be deployed
wherever the platform is. Anthos removes risk
and gives you choice. None of this is like what's out there today. Why? Because most tools are not
built with user choice in mind. Most cloud providers
are not building to maximize your innovation. In fact, they may
increase your risk and slow you down by fragmenting
your talent and locking you in. Anthos gives you choice. Let's start with the first
challenge-- managing risk. Risk management is already
the number one concern that keeps CIOs awake at night. Being unprepared can have a
large impact on your business. Change, new technologies,
new environments increase your risk. Let me tell you about
a conversation I just had yesterday-- changed
the script last night. I was talking with
a group of leaders and told them what we keep
hearing from our customers. When you want to use a new
service-- let's say, BigQuery-- before you can let
your developers use it, you have to approve it,
make sure it meets the bar. But there are
hundreds of services, so you cannot do it manually. So you build tools and
processes to reduce the toil. Unfortunately, this is
not a one-time effort. It's an ongoing effort that
has to continue and run as the platform, the
services, and the requirements keep changing. Then a person jumps in. And he says, this is why we're
going to start with one cloud. And we're going to streamline
the process and governance. Immediately, another
person engaged. And she says, by
excluding other clouds, you'll be blocking
your own innovation. And this may be a bigger
risk to your business. Now, the amount of work
just got multiplied by the number of environments you want to run your workloads in. This is just one example of how mitigating risk can increase your workload
and slow you down. It means you are not
spending time innovating. Anthos solves that. In Anthos, we are introducing Cloud Run to provide an automated developer experience that takes you directly from code to an internet-facing app in minutes without any platform expertise. Our large customers,
like HSBC and Scotiabank, have tens of thousands
of developers. They don't need to be platform experts. They want to make sure that they can focus on the application services and innovate where
it impacts their business. But you need a platform
that keeps you safe and removes risk of malicious
actors and fat fingers. This is hard, especially
across clouds. Anthos is integrated with
your existing identity stores. It has built-in security
for a zero-trust environment. It provides greater access control with policy enforcement
across clouds. We keep your platform up-to-date
with all the security features that are needed to make
your environment safe. And the most important thing-- we do it wherever you need
to run your workloads. Now, let's see a demo,
first demo of the day. APARNA SINHA: Whoohoo! CHEN GOLDBERG:
OK, so here I have a GKE highly-available,
regional cluster. It's really easy to create
a cluster from the UI. I will just choose the Highly Available template I need. And here, I will also configure Anthos components into this cluster, starting with Istio; Stackdriver, for monitoring; and of course, the Developer Experience, Cloud Run.
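For reference, the same kind of highly available, regional GKE cluster can also be created from the command line. This is a minimal sketch; the cluster name and region are illustrative, not the demo's actual values, and the Anthos components are the add-ons shown being enabled in the console.

```bash
# Minimal sketch: create a regional (highly available) GKE cluster from the CLI.
# Cluster name and region are illustrative.
gcloud container clusters create demo-cluster \
  --region us-central1 \
  --num-nodes 1
```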
We will use a predeployed cluster. And you can see, if I go
through the workloads, I can see that, already, I
have some components installed within my cluster. But those are also open
source components, like Knative and Istio, which means that whatever is running here can run in any Kubernetes cluster. But in this case, GKE actually manages those solutions for you. And when your cluster is upgraded, those components will also be upgraded with it. But as a developer, I don't
care about any of that. All I want to do is see
my application running. So I can go to Cloud Run, and I can simply create a new service. Let's put in the image. Sorry. And as easy as that, I can simply choose my GKE cluster. And voila! That's it. Under the hood, what is happening is that Cloud Run is already creating a controller that, for example, scales my deployment to zero. It also connects to other GCP capabilities. And it is working. As a developer, I'm done.
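Since Cloud Run on GKE is built on Knative, what the console created under the hood is essentially a Knative Service object. Here is a minimal sketch of one, applied with kubectl; the service name and image are made up for illustration, and the exact API version depends on the Knative release installed on the cluster.

```bash
# Minimal sketch: a Knative Service like the one Cloud Run on GKE manages.
# Name and image are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-app
spec:
  template:
    spec:
      containers:
      - image: gcr.io/my-project/hello-app:latest
EOF

# The Knative controller scales the underlying deployment down to zero
# when the service receives no traffic, which is the behavior shown next.
kubectl get ksvc hello-app
```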
But now, as a service operator, I want to make sure I can manage all of my workloads the same way. So we've got an internet-facing application
or, maybe a stateful workload, like Redis. I can see all of
them running here. So what you can see here are
two of my applications running. And what's also interesting to see is that one of them has zero pods, because there is no traffic into that service, and the other one does. So this is really great. And I'm very happy. I'm happy with the
reliability and availability that GKE provides. But I also have some of
my own unique policies that I want to build
within my environment. For that, we also installed
Anthos Config Management into this cluster. Let's see what kind of
namespaces exist here. You got it? Sorry. OK. This policy is managed by central IT or by cloud teams, and the single source of truth lives in GitHub. You can see, I have multiple namespaces here, like orders-dev and orders-prod, that exist in my cluster.
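As a rough sketch of what that Git-backed source of truth can look like: in an Anthos Config Management repo, each managed namespace is declared as a plain Kubernetes manifest checked into the repository. The directory layout and names below are illustrative, not the exact demo repo.

```bash
# Illustrative entry in the config repo; layout and names are examples.
mkdir -p namespaces/orders-prod
cat > namespaces/orders-prod/namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: orders-prod
EOF
git add namespaces/orders-prod/namespace.yaml
git commit -m "Declare the orders-prod namespace"
```

The agent in each cluster continuously reconciles against this repo, which is why, as the demo shows next, a deleted namespace reappears on its own within seconds.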
But let's say, I'm not a malicious actor, but I do have fat fingers. And accidentally, I
delete one of them. Of course, let's
make it really bad. So we'll do it for prod-- OK, it's deleted. [GASP] But actually, it's
already back here. So automatically,
the system found out that I've deleted this
namespace and created-- three seconds ago-- a
new namespace for me. APARNA SINHA: That was awesome,
so back to the slides. That's removing risk
so you can move fast. What Chen showed is
the developer workflow, which is super fast in
seconds, going from source code to an application that's
live on the internet. And then the other piece is
you can enable that agility while controlling,
from a central place, through your admins, all of
the security and compliance policies that you
want to enforce. And this is across clouds. OK, so challenge number two was
innovating with what you have. Well, not all of
your applications are going to be
written by developers. The truth is that,
when you're a startup, yes, you can afford to write
everything from scratch. But as an enterprise, this
is not at all the case. Most of the applications
in an enterprise are packaged applications. And you might not even have
access to the source code. So what do you do then? How do you innovate
with what you have? Well, rewriting is really long. And it's very hard,
and it's quite risky. So is it possible for
enterprises to innovate at all? On GCP, we've found
that actually it's the longstanding
enterprises that have the most
potential to innovate, because they have the
greatest data assets. They have the customer
relationships, and they have existing
systems that are working. What they need to do, though,
is to build on what they have and innovate on top of it. And in fact, with some
of the technologies that we will show you,
it's possible to actually out-innovate most startups. And not only that, as
a large enterprise, you can change the game for
how your industry works. Before we do the
demo though, I want to introduce you to someone
who's actually doing that at a large enterprise. And so I'd like to
invite our first guest speaker to the stage. Please, welcome Dinesh Keswani. He is the CTO of HSBC. [MUSIC PLAYING] Hi, Dinesh. DINESH KESWANI: Hi. Hello, everyone. APARNA SINHA: Thanks
for joining us. So Dinesh and I have known
each other now almost two years and at least-- DINESH KESWANI: No, it's
just been 11 months. APARNA SINHA: Has it
only been 11 months? DINESH KESWANI: Yeah. APARNA SINHA: OK,
well, he has been-- it feels longer, because-- DINESH KESWANI: Yeah, yeah. APARNA SINHA: --we've been
very close working together. And he has been a guiding light
for the development of Anthos from its very start. So I'd like to ask
Dinesh to tell us a little bit about
HSBC, your background, and what excites you about the
change that you are leading. DINESH KESWANI: Sure. Thank you, Aparna and Chen. I have to keep up with
the Greek names here-- Anthos-- another
one to remember. So HSBC is a very unique,
diverse, very global organization. I joined HSBC 11 months ago. As a way of introduction--
besides the numbers on the slide here-- we're a bank that has banked
many generations of customers and organizations. We're 150 years old. We have 39 million customers. We do about $1.5 trillion
in payments a day. That's a lot of zeros. And we manage about $2.5
trillion worth of assets. It's a very unique organization. It gave me an opportunity to
move from California to the UK about 11 months ago. And then two weeks
after I started, the first meeting I had
with Google was with you. APARNA SINHA: Mm-hm. It seems like a
really long time ago. DINESH KESWANI: It does
seem like that, yep. APARNA SINHA: So what's
the mission of the bank and of your group? DINESH KESWANI: We have
a very interesting vision with the bank. We want to be able to
transform the world's banking experience as a whole. And I'm not talking with HSBC. We're talking about the
world's banking experience. HSBC is committed to
invest about $17 billion in technology investments
over the next three years. And as techies, we love solving
big, hairy, audacious problems. You've heard that over
the last 20 years. And HSBC, in that
phase of change right now, attracted me a lot. And we're leading
with the cloud. APARNA SINHA: Wow! DINESH KESWANI: Yeah. APARNA SINHA: Transforming the
banking experience for all. DINESH KESWANI: For everyone. APARNA SINHA: That
is super inspiring. And I keep it top-of-mind as
we're doing our developments. But I want to ask you this
question for our audience. Enterprise IT, as we understand
it, is pretty complex. And I was talking about
existing apps versus new apps. How do you think
about the trade-offs? Do you build everything new? What do you do
with what you have? DINESH KESWANI:
No, absolutely not. We're committed to being, from
day one, a hybrid company. The reason being-- in broadly
classifying these applications, we have four generations
of applications-- Some of you haven't heard of
these-- mainframe, iSeries, G Series-- and then client server tech,
web applications, and then mobile-first cloud applications. So when we look across the
estate, a lot of our core data that we consider
data as an asset is locked up in mainframes and
client server applications. And maybe for
regulatory reasons, maybe for reasons where Google
doesn't have data centers, we sometimes have to keep
the data in our data centers. And the strategy we are
taking is to unlock the data, democratize the data,
make them available, as services for consumption. And that's where
a hybrid becomes really important for us. It is absolutely a priority
for us to go hybrid. APARNA SINHA: Services to
access untapped, on-prem data. DINESH KESWANI: Yeah. APARNA SINHA: So tell
us a little bit more. Why did you choose GKE and
Anthos for this project? DINESH KESWANI: We have
three priorities at the bank. First, we're a bank. So we have to protect
our customers data. Security is of
absolute importance. We can't compromise that. So I keep repeating this to my team. When we have to
make hard decisions, we always choose customer
privacy, customer data. Security is first. And two-- improving engineering
and developer productivity. The more time I spend on infrastructure and issues that don't matter to the customer, the less I focus on solving customer issues, and the less productive I am. So improving developer
productivity-- and we have 40,000 engineers. We're hiring 5,000 more guys,
so if you're interested. Big, big problems to have,
but just the core focus is improving engineering
productivity. And then three is--
how do we constantly innovate within the
boundaries that we've set? So having the flexibility
to develop in one place and deploy in many is absolutely
a game changer for us, in terms of productivity. This might sound academic. But I'm a big believer in
value stream mapping, something that Toyota started early on. And we did some of that
exercise early on in saying, what are the activities
our engineers perform that actually add value
to the customer's outcome? And we found that
most of it is related to us building the right
set of code to impact dev. So that infrastructure
automation, the SRE work needs to be centralized and,
hopefully given out to Google so we can focus on serving
our customers best. So GKE was spot on. We became early design partners. We're glad about that. I could go on and on. APARNA SINHA: OK, great. Well, thank you. So everything as a
service, I find that vision extremely compelling,
really changing how banking works altogether. Anything else that you'd like
to share with the audience? DINESH KESWANI: Yeah, the
last 11 months-- or two years, as you put it-- have been very revealing for me. I was a Silicon Valley guy. I moved to UK. What I did realize is
the single biggest factor that makes a difference is
the people that you work with, the productivity, and the
impact you can have with them. So focus on developer
productivity. If you're managing a
team, get them unstuck. Everybody faces obstacles. Get them out of the way. And then-- setting
up communities around the big
problems you have. So we have set up
communities around data, security, containerization. Some of it is hybrid. How do we get our data centers
ready for the hybrid mode? So we have teams of devs that
focus on this as a community. They have a purpose. So that's very enlightening. And then finally--
this is debatable but I believe in innovating
with what we have. Setting boundaries, setting
goals, and saying, go innovate-- it's the opposite of, hey, you have
limitless resources. Go innovate. But I think necessity is
the mother of invention. So when you put constraints
around a problem set and then you try to focus
on the customer outcome, you innovate faster. APARNA SINHA: Thank you, Dinesh. DINESH KESWANI: Thank
you so much, Aparna. APARNA SINHA: Thank you. DINESH KESWANI:
Congrats on the launch. CHEN GOLDBERG:
Thank you, Dinesh. DINESH KESWANI: All right. [APPLAUSE] APARNA SINHA: I love
that-- necessity is the mother of invention. We are so inspired by HSBC
and our design partners that we're wearing the
HSBC colors, in fact. So our platform is really
built for leaders like Dinesh. It allows you to
innovate with what you have without creating
a lot of friction with your existing environment. And how does it do that? It's really three simple things. First of all, GKE
On-Prem, it fits right in. It blends into your existing
environment and runs on vSphere and really brings the cloud
to you, where you are. Number two-- managing everything
as a service, as Dinesh said. That's made possible by
our service mesh, which is available in the cloud
and also as open source Istio On-Prem. And then lastly, we
have the new capability called Anthos Migrate
that automatically moves your applications from
virtual machines on-prem or in other clouds to GKE. So these three modes can
actually be used together, or they can be
used individually. And there are customers
that are using them in all different forms. These are three examples. Kohl's is using GKE On-Prem to
blend their on-prem environment with their cloud environment,
functioning as hybrid, much like HSBC. AutoTrader are running
everything as a service and managing it
in a service mesh. And CardinalHealth is migrating
automatically from VMs to GKE. And this last one, of
course, is rather unique. And so I'd like to give you a
demo of this last piece, which is Anthos Migrate. Actually, if we go back to
the slide for a second-- Anthos Migrate-- it's
a brand new capability. It's possibly a first of
its kind in the industry, especially for the speed and
simplicity of what we do here. And so moving to
the demo, I'm going to switch over and
try to show you what I would call an on-prem system. Many of you might be
familiar with this. Here, I have this vSphere. And vSphere is running
in a typical data center environment. And it is running a typical
packaged application. This is not an application for
which I have the source code. In this case, it's a CRM system. It happens to be SugarCRM. And it has two VMs. So there is a database, which
is a MySQL database, that is running in a VM. That's running CentOS,
as you can see. And then there's the front
end of the application, which is running in a separate VM. And the operating system on that
VM happens to be SUSE Linux. So this is quite common. And the application is actually
exposed at 192.168.11.160. And we can go there. I have that open. And I'll refresh, just to
see that it is actually a live application that
is running on-prem. So this is pretty common. And you probably have a
set of VM administrators and OS administrators
that are patching it. And then you have
a separate admin that is taking care
of the application. And while this is expensive,
it's actually pretty stable. And so if it's not
broken, why fix it? So what if I told
you that we have a streamlined way of
taking this application and moving it to GKE,
moving it into containers, so that you don't have to
have the separate OS and VM dependency and maintenance? Well, that would
be a game changer. So what I'm going
to do is demo that. We have actually built a vSphere
plug-in that's running here. That's going to show you over
here all of the changes that will take place. So we'll look at recent tasks
to see what's happening. And this plugin is also
connecting to our cloud over a WAN-optimized connection, so that all I need to
do is to go to GKE. And from there I can
initiate this migration. So let me do that. I'm going to go to my cluster. This is a cluster
running in Europe. And I'm going to go ahead and,
using our handy dandy Cloud Shell, start a command
line environment, and establish a connection
to that cluster. This is just standard. So now, I'm connected
to that cluster. And in that cluster, I'm
going to begin this migration. So that's as simple as kubectl apply from this file, which is a migrate YAML file. And what this file is going to do is initiate a shutdown of the VMs on-prem. It's going to gracefully shut those down, take a snapshot, and then move that VM to this cluster over here.
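In CLI terms, the whole sequence on the GKE side boils down to two familiar commands. This is a sketch with made-up cluster and file names; the migration manifest itself uses Anthos Migrate's own custom resources describing the source VMs, so its contents aren't reproduced here.

```bash
# Connect Cloud Shell to the target cluster (name and region are illustrative).
gcloud container clusters get-credentials demo-cluster --region europe-west1

# Kick off the migration by applying the migration manifest.
kubectl apply -f migrate-sugarcrm.yaml

# Watch the stateful pods that will host the migrated VMs come up.
kubectl get pods --watch
```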
And so let's go back to the cluster for a second and minimize this. If I look at
workloads, you'll start to see that there are two
new pods that are starting. These are actually
stateful pods that are going to house the
application as it migrates over. And that's going to take
a little bit of time. So let's go back
and actually see what's happening in
the vSphere screen. I told you that we could
monitor what's going on here. And so you see here-- hopefully, you can see it
still even from the back. We have initiated
Guest OS shutdown. And we're reconfiguring
these virtual machines. And you can see that each of
these VMs is now powered off, and it is being migrated
over to the cloud. So as this migration happens--
and it takes about three minutes, which is
exceedingly fast-- you might ask, why
would I want to do this? I had an application
that was running OK. Well, there's a few
benefits of doing this. Number one, it frees you from
the VM and operating systems management. All of that is actually
done automatically in GKE, so the security
that comes along with patching the OS under
the covers, the movement of the container itself, the
portability that you get. Secondly, GKE binpacks
the application. And you know that, right? It is a container. And there could be
other containers running in the same environment. And it will essentially
increase your utilization by binpacking that application. And that reduces your cost,
increases your resource effectiveness. And then thirdly-- and this
is perhaps the most important piece-- is that it brings that
legacy monolith into a DevOps environment and
automatically, for example, installs Istio
service management. So you can view that service as a service, see the traffic that's coming to that service, and set up policies, security policies, as well as control that traffic.
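As a rough illustration of what controlling that traffic can look like once the migrated workload sits behind Istio, traffic policy becomes declarative configuration; the service name below is hypothetical.

```bash
# Hypothetical sketch: route all HTTP traffic for a "crm" service through
# Istio, so weights, retries, or security policies can be layered on later.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: crm
spec:
  hosts:
  - crm
  http:
  - route:
    - destination:
        host: crm
EOF
```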
So these are some of the benefits of doing this migration. It does look like
we're doing well. So let's see. We're going to go
back and see what's happening here, whether some
of these pods have begun. They're still pending. So we'll wait a little bit more. We want to make sure
that we do this live. I think, one of
the biggest impacts that this technology
has is that you didn't have to change
anything in the monolith. You didn't have to
break the monolith. You didn't have to rewrite or
even adjust the application. And the system is essentially
pulling the application bits out, extracting them
from the underlying OS, bringing the relevant
dependencies, and putting them inside a
system container in GKE, and then booting from there
and running the application as it was with the
storage and the database. OK, it looks like the
database is actually up and the application
is still pending. OK, we'll take maybe a
little bit more time. And let's see if we
actually have a change. So these two pods are here. And then when you
look at the services, we've actually exposed this
application now externally on this IP address 10.10.2.197. And we're going to
cross our fingers and see if it does come up. So I have this IP address. It's not yet available. At the same time,
the application-- I can show you-- is not
running on-prem any longer. So this should stop working. And it has. I'm going to hope that
this comes up soon. All right, well, we
might have to proceed with the rest of the talk
and come back and see when these pods start running. So we're just still
waiting for the last pod. But we're going to
go back to the talk, and we'll come back at the end
and see the migration complete. So back to you, Chen. CHEN GOLDBERG: Thank
you very much, Apana. What I love about
Anthos Migrate, that people always tell me,
so how do I start with Anthos? How do we onboard to it? And this is to me the
best and easiest ways to onboard to Anthos
in an automated way. Now, let's talk about the third
problem that we talked about-- talent. So actually, I was
happy to hear-- so Dinesh was talking about
innovating with what you have. But he spent time talking
about the people and building communities. And we've been talking about
technology and business constraints and requirements. But what all our
customers tell us-- that any digital transformation
starts with the people. As a leader, my
goal is to make sure that my team is
set up for success. And for me, it
means many things. First of all, it
means that I want to make sure that they
can focus on where they can have the most impact. Secondly, I want to make
sure that they have the best tools for doing the job. And last but not least,
I want to make sure that they have the peers
they want to work with and they can learn from. I really want to
give them the best. But the reality is
that, in an enterprise, you have tremendous
fragmentation and complexity, already causing your
talent to be fragmented and spreading too
thin, leaving no time to learn new technologies
and build new capabilities. Clouds can add
additional strain. I am super excited to invite
Hesham Fahmy, VP of Technology at Loblaw Digital, the
biggest grocer in Canada. He will share with us how
he puts his team first and how the technology and
Anthos impact his team. Thank you. [APPLAUSE & CHEERS] HESHAM FAHMY: Thank you, Chen. CHEN GOLDBERG:
Thank you, Hesham. HESHAM FAHMY: Thank you, Chen. I'm really excited
to be here today. Good morning, everyone. I have to admit, I
was kept completely in the dark about the
announcement of Anthos today. But I was really happy to see
that what we were working on for the past 12 months
was really coming to life and being fully
GA'd and supported. And my Slack messenger has
been going like wildfire, with my team reaching out
with a lot of comments about, we really bet on the
right horse here. So this is great. So let me start by sharing a
little bit of Loblaw's story. Loblaw is Canada's
largest retailer, and our purpose is to help
Canadians live life well. We're actually turning
100-years-old this year. And throughout this journey, we
have become a trusted partner for all Canadians. We operate 17 different grocery
banners, a very large pharmacy chain, and a major
apparel brand. And our 2,400 locations
are within 10 minutes of 75% of Canadians. So when you think about
the landmass of Canada, that's pretty huge. And PC Optimum is Canada's
largest loyalty program, with 80 million members. And so that equates
to about 2/3 of the adult Canadian population. We also run Canada's largest
online grocery platform. But Canadians are demanding
more and more from us. And what we've
realized as a company is that we need to
transform from becoming a traditional
retailer to becoming a technology-first company. And it's our mission at Loblaw
Digital to make this happen. So we all know that to
become a technology leader, you really need to build a
very strong engineering team. Your talent becomes your
most valuable asset. And this is easier
said than done. And at Loblaw, we have
three main challenges. The first one was how do we
attract the best engineering talent that's out there? And then how do we make sure
that this talent is actually focused on delivering and
adding customer value? And then lastly, how do
we retain this talent by making sure that
they're always engaged and feeling challenged? By moving our platform
platforms over to GCP, they've really helped a lot
with all three of these. So when we talk about attracting
talent, I mean, let's face it, a 100-year-old grocer
is not the first place that engineers are going for
looking for opportunities. But engineers are
looking for places that allow them to experiment
with the latest and greatest technology, grow
their skills, and have impact at a very large scale. Now, we were lucky at Loblaw
that we could actually definitely offer that very
large scale with our reach of millions of customers. But we actually
needed that platform that can offer our engineers
the latest tools and speed and scale. And so GCP has actually
given us that toolset. And then CSP-- or
now, Anthos-- has re-allowed us to deploy at
massive scale very fast. And as a result of all
of this, Loblaw Digital has now become a hot
destination for some of the best engineering talent in Toronto. And we're actually
very proud of this. So now the risk or the problem
comes that this talent now can get easily
wasted if they aren't laser-focused on creating
delightful customer experiences. Any time that we waste
on managing deployments and infrastructure is
really a waste of time that's adding no
value to our business. So with GKE, we are really
able to abstract out the deployment of
our applications and simplify our monitoring and
running of our infrastructure. And this has freed up countless
hours of our teams to innovate. So an example, like
our data science team now, they're able to run
a lot of very-- train some complex
machine-learning models without needing to worry
about managing the underlying compute infrastructure
needed to do that. Our online grocery
development team now has been able to transform
our platform into a secure set of microservices that
are running on top of GKE and an Istio service mesh. And the lastly, when we
talk about retaining talent, we all know that
engineers take a lot of deep pride in their
problem-solving skills. And any constraints that you put
on them in terms of what tech they can use is
really frustrating. But then the problem becomes
in very large enterprises like ours, you end up having
some very rigid enterprise architectures, and these create
these frustrating constraints for those engineers. But those architectures
are driven out of the need of the enterprise
to be able to secure and operate the complex web of systems that
you have in your enterprise. And it becomes really impossible
in that kind of environment to have a true polyglot system. But now, with CSP-- or Anthos-- we've been able to have the
container as the atomic unit of deployment and packaging. And it's become really easy
to manage and secure these. And so at Loblaw
now we're actually able to package an
application once and run it everywhere and run it anywhere. And we no longer have to
deal with all the pain of the inconsistencies between
all our test environments. It also now has
allowed our developers to essentially put whatever
they want inside the container. And that has unlocked a lot
of opportunities for us. Another example we
have now is we've got some development
teams that are actually working on building some
services in languages like Elixir. And that would have never been
possible about 12 months ago. So in closing, I do want
to say that, if any of you know of any great
engineers out there that want to work on the
latest and greatest tools and want to make a huge impact,
please send them my way. Because we're hiring like
crazy still in Toronto. Thank you very much. CHEN GOLDBERG: Thank you. APARNA SINHA: Thank you, Hesham. [APPLAUSE] CHEN GOLDBERG: Thank you. HESHAM FAHMY: Thank you. CHEN GOLDBERG: Perfect. APARNA SINHA: OK, so the
demo is actually complete. If we can-- CHEN GOLDBERG: In
a second, yeah. APARNA SINHA: OK. CHEN GOLDBERG:
Thank you, Hesham, for sharing your experience. When thinking about
talent, our focus is not about portability of workloads
but portability of skills and making sure the
developers can be productive regardless of the environment
they want to work on. And they can have the
freedom of choice, which service they want to use. And central IT or cloud teams
can support that choice, because they have tools to
manage those environments wherever they need to. Now, let's see this freedom
of choice in action. APARNA SINHA: OK, over to
the demo screen, please. And first, I'm going to show
you that the migration finished. It actually finished just a
minute after we switched over. And so I haven't
done anything here. But if I refresh, you
can see that, in fact, both StatefulSets are up. The app is up. The database is up. And so now, let's go
back to vSphere and see if this thing is working. So we're going to refresh it. It's where we left it. And we're going to
go ahead and proceed to the unsafe connection and
see if our application is up. And yes, SugarCRM is up. [CLAPPING] CHEN GOLDBERG: Whoo! APARNA SINHA: And
I'm going to log in. And I want you to
see the same console that we were seeing before. So let's log in. Yeah, and there's my dashboard. And now, it's
running in the cloud. CHEN GOLDBERG: Very nice. [APPLAUSE] APARNA SINHA: And we did
that migration live-- zero code changes--
from on-prem vSphere to in-the-cloud into
a container ecosystem. That's pretty impressive. All right, so the next
demo that we want to-- so this is the on-prem
one that isn't running. And this is the one that's
in the cloud that is running. So let's move over. We talked about your existing
applications, monoliths and taking care of those. We talked about writing
Greenfield applications whether you're writing
them in the cloud or you're writing them on-prem. That covers a lot of things. So what haven't we covered? Hm, well, let's see. When we say freedom of
choice, we really mean it. And we mean that you can
run your services wherever is best for you. With Anthos, you can
connect your clusters, whether they're running
in GCP or they're running in other clouds. And you could develop once. And you can deploy anywhere. You heard that in
the keynote today. And we're going to show
you how that works here. So I'm going to move over
to another GKE cluster. And what I have
here is I've already registered a number
of different clusters. And you can register
new clusters just by using this
register cluster flow. And you can register clusters
whether they're running in GCP, they're running in Azure,
or they're running on AWS. And so I've already registered
a cluster that's running in GCE. Another one that's
running, using kops in AWS. And of course, I've got my
handy, dandy, trusted GKE cluster. And what I get here is
a workload view that's consistent, that shows me
all of the workloads that are running on all of
the different clusters. So here you see all
of the workloads that are running
on my AWS cluster alongside my GCE cluster. So this is nice-- one place to visualize
all your applications.
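If you prefer a command line view, the registered clusters are still ordinary Kubernetes clusters, so the same picture can be approximated with plain kubectl contexts; this is just an illustrative sketch with made-up context names, not the Anthos console itself.

```bash
# Illustrative only: list workloads per registered cluster using kubectl
# contexts (context names are made up).
kubectl --context gke-cluster get deployments --all-namespaces
kubectl --context gce-cluster get deployments --all-namespaces
kubectl --context aws-kops-cluster get deployments --all-namespaces
```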
And the other thing you can do-- and I've done one here already and one I'm going to do live-- is you can deploy applications
to any of these clusters. So if you think about
a multi-cloud workflow, you can develop and then
deploy your applications to any cloud that
makes sense for you. And so here I had
pre-installed WordPress. And just to show you that
this is not demo magic-- this is live-- here, I have my AWS cluster. And I can see what
services are exposed. And so I'll click on
this load balancer. And here, you see that, yeah,
WordPress is actually running. And here we are. We're looking at it. It's running on AWS. And now, let's do
another live deployment. We have our Kubernetes
marketplace, which has also moved to GA as of today. And so there you
can go shopping. There's lots of applications-- CloudBees and Aqua Security,
Cassandra-- any kind of application that you
can think of that you'd like to deploy to your
Kubernetes clusters, again, across clouds. Let's go ahead and deploy one. So we're going to do Prometheus. Did I spell that right? Prometheus, Prometheus
and Grafana. We're going to search for it. And we've got Prometheus. And I'm going to go
ahead and configure it. So this is something
that's available as a prepackaged application. And this is something that's
supported through our partners as well as through Google Cloud. And with a dropdown,
you can choose which cluster you
want to deploy it to. And so you might have
a workflow that says, hey, I want a test
application in GCP. And then I want a production
application in AWS or I want a production
application on-prem, and then go ahead
and deploy it there. So we're going to deploy it
in the Marketplace namespace in this case and hit Deploy. And this is going to take not
that long, actually much less time than the previous demo. And we'll see live
that Prometheus deploys into your AWS cluster. OK-- all right, almost done. Looks like it's done. That's fantastic. And so now, [CLAPS]
I can go to my-- CHEN GOLDBERG: Yes! APARNA SINHA: --Applications tab. And I see, in
addition to WordPress, I'm running Prometheus
on kops on AWS. And again, all of the same
management capabilities that I saw with
my GKE cluster are available to these
workloads in AWS. So there you have it--
build once, deploy anywhere. CHEN GOLDBERG: Thank you. [APPLAUSE] Thank you, Aparna. And every environment
is in your reach with a consistent experience. Google Cloud is the only cloud
provider that gives you choice. Here on stage, we heard
from both Dinesh and Hesham how important
choice is for them, how much they want to have
the choice where to innovate. They want to have a
choice what to modernize and what they can
leave as it is. They know that they can't
take advantage of choice if it increases their risk. Now, they can mitigate security
risk, talent fragmentation, and lock in. This is exactly what's
different about Anthos. It removes risk, while
giving you choice. Talent likes choice. IT likes choice. Leaders like choice. Business success
requires choice. APARNA SINHA: So
now we've done it-- CHEN GOLDBERG: Yes. APARNA SINHA: --three
live demos, demonstrating all the capabilities
of Anthos to you. If you want to learn more,
there's a lot of deep dives. There's over 50
talks on this topic. But these are the ones that I
would recommend to learn more about GKE On-Prem and how to
manage Kubernetes in your data center. There's a couple of talks
going on later today, directly after this one. And then also,
connecting your clusters across multiple clouds--
highly recommend those two talks-- one
today and one tomorrow. Service Management,
the Easy Way-- and then we will be available
for open Q&A both later today as well as tomorrow morning. And then lastly, of
course, the developer experience that we
showed with serverless that you can use with security,
a number of different talks on that. So thanks again. Thanks to-- CHEN GOLDBERG:
Thank you very much. APARNA SINHA: --our speakers. CHEN GOLDBERG: Thank you. APARNA SINHA: Enjoy the
rest of the conference. [APPLAUSE] [MUSIC PLAYING]