[INSTRUMENTAL MUSIC] STEPHANIE WONG: Hey, everyone. Welcome to Eyes on
Enterprise, where I'll be bringing on
my lovely colleagues to talk about the
tech, strategies, and best practices that are
helping to shape enterprises today. My name is Stephanie
Wong, developer advocate. And today I have Sam Stoelinga
on the show, customer engineer. And we're going to be talking
about hybrid and multi-cloud and how to approach that. So Sam, thanks for
being on the show today. SAM STOELINGA: Yeah,
thank you for having me. Happy to talk about hybrid
and multi-cloud today. And yeah, let's get started. STEPHANIE WONG: First off, no
large enterprise, no matter how well prepared, can simply
move to the cloud in one fell swoop, even
if their goal is to migrate completely
to the cloud, hence multi-cloud and hybrid. So can you help us
define both of those in the context of an enterprise? SAM STOELINGA:
Workloads and processes are unique across
different enterprises. And that's why you also see the terms hybrid and multi-cloud used inconsistently. So let's start with hybrid cloud. Hybrid cloud is really where you have a private, on-prem data center and you're using a public cloud, and your workloads are interconnected and deployed across these two different environments. Multi-cloud, on the other hand, is really focused on having several public cloud providers, and possibly also a private cloud or on-prem data center, and using them all together for your workloads. Multi-cloud can also mean you have only one cloud provider, but you want to make sure
that your workloads are ready to be ported when needed. STEPHANIE WONG: Yeah. So what's interesting
here is it's more about the architectural
portability and readiness of your workloads. And one other thing
you mentioned was it's more about the extension of your
private network into the cloud or from cloud to cloud. SAM STOELINGA: Yes,
that's one technology to enable a hybrid cloud. So we have two
technologies that really enable you to extend
your private data center to the cloud,
where you can even treat the cloud as an additional rack, if you'd like to think of it that way. So there's Dedicated Interconnect, and there's Cloud VPN. And both allow you to take a private network that's running on-prem and connect it to the private network that's running in Google Cloud, which we call a VPC network. And that allows you to advertise the on-prem network to the cloud and the cloud network to on-prem. So any users on-prem can also directly access any cloud service that's exposed only over the private VPC network.
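To make that concrete, here is a minimal sketch, assuming a hypothetical service exposed only on a private VPC address (10.128.0.12 here), of an on-prem machine reaching it once the networks are bridged over Cloud VPN or Dedicated Interconnect:

    import urllib.request

    # Hypothetical internal-only service on a VM in the VPC. The RFC 1918
    # address is reachable from on-prem only because routes are exchanged
    # over Cloud VPN or Dedicated Interconnect.
    INTERNAL_ENDPOINT = "http://10.128.0.12:8080/healthz"

    def check_private_service(url: str, timeout: int = 5) -> bool:
        """Return True if the private cloud service answers from on-prem."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError as err:
            print(f"Could not reach {url} over the private connection: {err}")
            return False

    if __name__ == "__main__":
        print("reachable:", check_private_service(INTERNAL_ENDPOINT))

STEPHANIE WONG: Now, it's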
clear that, at this point, there are a lot
of misconceptions about hybrid and multi-cloud. So I wanted to talk about
the top three misconceptions that I've heard, and I want to
hear what you think of them. So the first one is that
there is a hybrid product that makes my workloads hybrid. SAM STOELINGA: Yeah,
so I've had questions where I would get
asked, like, what is your hybrid cloud product? And then they would
assume that there is a product that
takes their workload and makes it automatically
run across their private data center and on the public cloud. And they would expect to get benefits from it, like lower cost, better reliability, et cetera. However, I don't think that sort of product exists. Actually, hybrid cloud is more about a set of technologies that may or may not be adopted by different kinds of workloads. For example, let's say we have a batch processing workload. That workload needs to run every night, and it needs to be finished within four hours, before the next day begins. The enterprise has a hybrid strategy: they want to use their existing on-premises investments as much as possible. So every workload owner is encouraged to use that capacity first, but for any excess capacity, they need to go to the cloud, because they don't want to invest more in on-premises hardware, but they do want to keep the existing investments and make the most out of them. So what they do is they have an interconnect, and the batch processing workload bursts by spinning up more VMs in the cloud, does some processing there, and sends the results back to on-prem. So the data stays on-prem. Some capacity is being used from on-prem. But the majority of the work needs to run in the cloud, because otherwise they couldn't finish within four hours. So it's a really good example of one hybrid architecture that's useful for this workload. However, for other workloads you might see a different kind of hybrid architecture.
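As a rough sketch of that bursting pattern, with hypothetical numbers, names, and throughput figures, the nightly job might decide how many extra cloud workers it needs and create them with private IPs only, so they stay reachable over the interconnect:

    import subprocess

    # Hypothetical numbers for the nightly batch run.
    PENDING_CHUNKS = 120        # units of work tonight
    ONPREM_CAPACITY = 40        # chunks on-prem can finish within four hours
    CHUNKS_PER_CLOUD_VM = 10    # throughput of one cloud worker VM

    def burst_workers_needed() -> int:
        """How many extra cloud VMs are needed to still finish on time."""
        overflow = max(0, PENDING_CHUNKS - ONPREM_CAPACITY)
        return -(-overflow // CHUNKS_PER_CLOUD_VM)  # ceiling division

    def create_burst_vms(count: int) -> None:
        """Spin up short-lived worker VMs in the VPC that is bridged to on-prem."""
        for i in range(count):
            subprocess.run(
                [
                    "gcloud", "compute", "instances", "create", f"batch-worker-{i}",
                    "--zone", "us-central1-a",
                    "--machine-type", "n1-standard-8",
                    "--no-address",  # private IPs only, reachable over the interconnect
                ],
                check=True,
            )

    if __name__ == "__main__":
        create_burst_vms(burst_workers_needed())

STEPHANIE WONG: Right. I think that's really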
interesting, because again, it's more about the
feature and technology that enables you to burst or
have better redundancy by moving to the cloud. So I want to talk about my
second misconception, which is that by moving
to the public cloud, that inherently makes
your workloads publicly facing to the internet. SAM STOELINGA: Yeah,
and, I mean, it comes from the name-- public cloud sounds like everything has to be public, right? So I would say it's not unheard of that people think that way. However, that's not true. You can simply expose your workloads over a VPC network only, and then bridge that VPC network with your on-prem network using the Interconnect or Cloud VPN that we talked about previously. STEPHANIE WONG: Now for
the third misconception-- moving to hybrid and
multi-cloud means moving everything at once. SAM STOELINGA: Yeah. I think that really
comes from the fact that people have
workloads that always have many other dependencies. And these dependencies often
reside on-prem as well, so they assume that if they move the workload, they also need to move all
the dependencies with it. However, that's not the case. Like we explained earlier, we
have hybrid cloud connectivity, which allows you to
move the workload but still keep the
dependencies on-prem. STEPHANIE WONG: Do
you mean like running your legacy backend systems
on-premise or your database on-premise and then
your Kubernetes Engine front-end in the cloud? SAM STOELINGA: Yeah,
I think that would be a great example of how you could migrate a workload without having to move all of its dependencies along with it. Yeah. STEPHANIE WONG: Let's
move on to deciding on a strategy, because I know an
estimated 88% of organizations are adopting multi-cloud today. And that can be for
many reasons, one being risk reduction--
they don't want to put all their eggs in one basket. Another being
architectural similarity, like for like between clouds,
or maybe even best of breed or feature availability
in each cloud. Have you seen any other
motivating factors for companies? SAM STOELINGA: So I think
you mostly captured it. Most customers, they want
to reduce the lock-in they have to any vendor. They want to have the leverage
over the vendors versus losing that altogether. And they want to be able to
choose the cloud provider that provides them the best service. STEPHANIE WONG: What about
maintaining consistency across environments? You need simple, consistent tooling that allows you to manage things like access policies, and you might have dependencies between applications and needs for performance and
latency requirements. SAM STOELINGA: Yeah,
I know, exactly. And maintaining consistency
can be a challenge. That's also why Google has
several tools available, such as containers,
Kubernetes, and Anthos that allow you to
maintain consistency across different environments. STEPHANIE WONG: Now
that we've talked about the foundations of
hybrid and multi-cloud, I do want to talk about
a specific case study. What are some typical use cases
in which performance, storage cost, or modernization
of existing apps were key reasons to migrate? SAM STOELINGA: So yeah, I'm
working with this customer today. They're trying to modernize
their [INAUDIBLE] on-prem IT stack. And they're running
a CI/CD workload, Jenkins, currently on-prem. And their technical
requirement is that they need to be able to
access the on-prem artifact storage, and the
on-prem users need to be able to access the
Jenkins instance as well. So with these requirements,
they came to us and said, we want GKE on-prem. STEPHANIE WONG: So
it was clear to them that they needed to
continue running Jenkins but on GKE on-premise. SAM STOELINGA: Yeah, well,
that's what they told us, but then we started
talking more and more. And we found out
that, hey, we actually can run all these things,
matching these requirements, while still running
it on the cloud. So we introduced this
customer to several features, one of them being
GKE private masters and private nodes, which
allows you to run GKE clusters in a fully private fashion. So none of the GKE workloads
are exposed publicly. The GKE nodes themselves do not
even have a public IP address. Everything is directly available
over the private network, which is connected to on-prem. And at the same
time, this allows them to run Jenkins
on top of GKE while still being able to
access the artifact storage. So in this way, we had a hybrid
solution for this customer by moving part of
their CI/CD stack to the cloud, while still being
able to connect to on-prem.
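As a hedged sketch, with hypothetical cluster, network, and CIDR values, creating that kind of fully private GKE cluster could look something like this:

    import subprocess

    # Hypothetical names and ranges; the VPC is the one already bridged to
    # on-prem. --enable-private-nodes keeps node IPs private, and
    # --enable-private-endpoint keeps the control plane off the public internet.
    subprocess.run(
        [
            "gcloud", "container", "clusters", "create", "jenkins-cluster",
            "--zone", "us-central1-a",
            "--network", "corp-vpc",
            "--subnetwork", "corp-subnet",
            "--enable-ip-alias",
            "--enable-private-nodes",
            "--enable-private-endpoint",
            "--master-ipv4-cidr", "172.16.0.32/28",
        ],
        check=True,
    )

STEPHANIE WONG: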
That's interesting. And I think it
clarifies that you do need to have a
product and networking understanding to know that you
can actually restrict access to public endpoints for
a private GKE cluster. SAM STOELINGA: Yeah, yeah. Networking experience
is always needed. People sometimes think that when they move to the cloud, they don't need network engineers anymore. But networking knowledge is
required, even in the cloud. STEPHANIE WONG: So let's
talk about migrations now. In most cases, a migration is a one-time, irreversible effort. But with hybrid and multi-cloud, you can now move in phases. So I think portability
here is key. But let's say I'm
an organization, I want to make my
workloads portable, what would be my
first step to do that? SAM STOELINGA: Yeah,
so the first step is to make sure that
your workload is able to shift from one computing
platform to another platform. In order to do that, you need to separate the application's deployment and management from the artifact that delivers the software itself. So there are many tools
that help you do that today. And based on these tools,
we can form a tool chain that helps you do this for any
VM or container-based workload that you might have. So I think key here
is understanding that making your
workload portable doesn't mean that you have
to make it cloud native. Making it portable is often not a significant effort. And people can start doing that
today with the tools available. STEPHANIE WONG: OK,
let's get specific. Can you give me an
example of a tool chain that you can use to build VM
images and deploy to any cloud then? SAM STOELINGA: There are a few
tools critical to a tool chain: I would say there's a CI system, there's an image builder, and there's a source version control system. And all these together
form a tool chain that allows you to
build images that can be deployed across any platform. So let me give an
example of how you would establish, like,
what tools would you use for this tool chain. So for the Image
Builder, there's a great open source
tool by HashiCorp, which is called Packer, which
takes in the application deployment logic
and a base image like, for example, Ubuntu or
another Linux system like RHEL, and then builds an image
that can be deployed across different platforms. But it's always built using
the same deployment script. So this really allows you
to build deployable images for any cloud platform
or on-premise system. The other system
is the CI system, whose job is really taking the
latest commit from the source version control
system and building an image using the latest
application software. And tying these together
allows you to build a pipeline that builds images automatically for any platform whenever
a new commit gets pushed to the source
version control system. So that's really--
that's a simple example of how you would do it. And most customers,
they should choose what they're already familiar with. So if you're using Jenkins for CI today, you should continue using that. However, if you're using another tool, you can easily plug and play different tools to build such a pipeline.
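As an illustration of the image-builder piece, here is a minimal, hedged sketch; the project, zone, image names, and deployment script are hypothetical, and the CI system would run this step on every new commit:

    import json
    import subprocess

    # Minimal Packer template: one builder per target platform, all sharing the
    # same deployment script so every image is built the same way.
    template = {
        "builders": [
            {
                "type": "googlecompute",
                "project_id": "my-project",          # hypothetical project
                "source_image_family": "ubuntu-2004-lts",
                "zone": "us-central1-a",
                "ssh_username": "packer",
                "image_name": "my-app-{{timestamp}}",
            }
            # Builders for other clouds or on-prem hypervisors would be added
            # here, reusing the same provisioners below.
        ],
        "provisioners": [
            {"type": "shell", "script": "deploy_app.sh"}  # hypothetical script
        ],
    }

    with open("image.json", "w") as f:
        json.dump(template, f, indent=2)

    # The CI job runs the build after each commit to source control.
    subprocess.run(["packer", "build", "image.json"], check=True)

STEPHANIE WONG: That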
tool chain sounds really great for workloads
that have an automated way to deploy applications in VMs. But what if I have an
application in a VM already and I wanted to deploy
that to the cloud? SAM STOELINGA: Yeah,
so that's actually a pretty common
scenario, especially for large enterprises that buy
their software from a vendor, and that vendor gave
them a VMware image, and that VMware image doesn't
have the applications separated from the image itself. So they have no way of taking
the application and deploying it on a base image
that can be deployed to another cloud. So the only solution is to
take that VM as is and somehow get it to the cloud. So there are two solutions
that Google Cloud offers today to customers. There's Migrate for Compute
Engine, which was previously known as Velostrata, which
allows you to take a VM as is and migrate it to the cloud. And it has a lot
of cool features. You can even migrate a VM,
[INAUDIBLE] in the original VM, and test whether the
migration was successful. So you can do a low risk
VM migration to the cloud. The other solution
that I mentioned is Anthos Migrate, which
allows you to take a VM and convert it to
a container and run that container in the cloud. So both solutions allow
you to move workloads from on-prem or other cloud
providers to Google Cloud. STEPHANIE WONG: So when would
you use one over the other then? SAM STOELINGA: The
two solutions are different in what kinds of use cases they support. So Anthos Migrate is a solution that takes a VM and converts it to a container. And because it's a more sophisticated technology, it's more limited in what kinds of source workloads it can support. Migrate for Compute Engine, on the other hand, allows you to migrate a much bigger range of workloads. I would say it supports most
workloads that you can imagine. STEPHANIE WONG: In
these cases, we're talking about migrating
existing VM workloads. But what if I want
to create a standard for creating portable
applications that we can move to any cloud? SAM STOELINGA: So that's
really where containers fit in. Containers really force everyone within the company to have a standard that allows you to deploy the container the same way in any environment. However, if you start looking at how to manage a multi-host environment, you get into a more sophisticated area. And that's really where Kubernetes comes in. So Kubernetes solves the challenge of how to schedule containers across a set of hosts in a cluster. How can I do load balancing across these containers? How do I ensure that I can scale out when needed?
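As a small, hedged sketch with a hypothetical image name, this is the kind of manifest you hand to Kubernetes to keep several copies of a container scheduled across the hosts in a cluster; a Service object would then load balance across them:

    import subprocess
    import textwrap

    # A minimal Deployment (hypothetical image name) that asks Kubernetes to
    # keep three replicas of the same container running across the cluster.
    MANIFEST = textwrap.dedent("""\
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: web
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
              - name: web
                image: gcr.io/my-project/web:1.0.0
                ports:
                - containerPort: 8080
        """)

    # The same manifest works against GKE, GKE on-prem, or any other
    # conformant Kubernetes cluster.
    subprocess.run(["kubectl", "apply", "-f", "-"], input=MANIFEST, text=True, check=True)

STEPHANIE WONG: Kubernetes has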
become the de facto standard, but is there any difference
in developer experience? SAM STOELINGA: Yeah. Definitely. And that's a good thing. I think improvement in
developer experience is always a good thing to have. So developers can
really focus on writing their code versus worrying about whether their code works in production and in development, because the same container that they build gets used in development and also gets used in production. So they're able to improve
their production reliability by having this kind of
developer experience. STEPHANIE WONG: I think it
would be helpful to jump into another scenario. Let's say that I am
a company, and I've containerized some
applications, but I'm still primarily using monolithic
applications on-premise. So how can I use Google tools
to help modernize and port to the cloud? SAM STOELINGA: We have a
modernization platform, which we call
Anthos, which allows you to modernize your
applications either on-prem or in the cloud. It's based on open platform. So it says Kubernetes,
which we call Anthos GKE; STO, which we call
Anthos Service Mesh; and Knative, which
we call Cloud Run. And these technologies really
allow you to modernize in place and move to the cloud
when you're ready. STEPHANIE WONG: What does
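As one hedged example of the Cloud Run piece, deploying the same container image you already build for other environments might look like this (the project, image, and service names are hypothetical):

    import subprocess

    # Deploy an existing container image as a managed Cloud Run service.
    # The same image could also run on GKE or on Anthos clusters on-prem.
    subprocess.run(
        [
            "gcloud", "run", "deploy", "web",
            "--image", "gcr.io/my-project/web:1.0.0",
            "--region", "us-central1",
            "--platform", "managed",
        ],
        check=True,
    )

STEPHANIE WONG: What does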
Anthos look like for IT ops versus developers? What's the user journey like,
when it comes to migrating? SAM STOELINGA: So I've been
involved in a few Anthos deployments. And all of the cases
that I've been involved, both developers and the
IT team have been engaged. Actually, not just IT-- the
networking team, security team, which you could
consider part of IT, they've all been super
engaged, especially during the initial
installation, but also afterwards for making
sure that the Kubernetes platform, the Anthos platform, stays up and running, and making sure the developers have a proper experience. IT operations are
always going to be involved in those scenarios. And developers,
on the other hand, are going to be focused on
making sure their workloads get up and running
successfully on Kubernetes and make sure they
stay up and running. So they have different
roles, but they are still both fully engaged. Yeah. STEPHANIE WONG: To
wrap up, what are some of your views on the trends
for hybrid and multi-cloud? SAM STOELINGA: Yeah,
so I have seen, no matter which kind of
customer I work with, you often see them
standardizing on Kubernetes as the platform for future workloads to run on, but also for current workloads to be migrated to. And even industries that have traditionally been slower to adopt newer technologies are also starting to adopt Kubernetes as the standard for their internal and external applications. Yeah. STEPHANIE WONG: So for
those people in that bucket, what can they do to get started
and get some value out of cloud today? SAM STOELINGA: I think that the
biggest thing they should do is just get started
with a smaller pilot, like find a workload that they
want to modernize right now, get it running on Kubernetes,
see what benefits they gain from doing that, and repeat
it for other workloads as well. But really, sometimes it's more
of, just get it done and get started. STEPHANIE WONG: Sam, thank you
so much for being on the show. SAM STOELINGA: No,
thank you for having me. It was fun. STEPHANIE WONG:
Everyone, check out the articles on hybrid and
multi-cloud at the links below. And comment with your own
experience with it as well. Join us next time for
Eyes on Enterprise. [INSTRUMENTAL MUSIC]