ALEXIS MOUSSINE-POUCHKINE:
Containers have drastically changed the workflow
of many developers. And so you might wonder,
how do I run containers on Google Cloud Platform? In this video, we'll
review three ways you can run your containers on GCP. [MUSIC PLAYING] While the concepts underlying
containers have been around for many years,
Docker, Kubernetes, and an entire ecosystem of
products and best practices have emerged in
the last few years, enabling many different kinds of applications to be containerized. As a developer, containers give you a lot of freedom by letting you package an app with all of its dependencies into a single, easy-to-move unit. The solutions for
running containers in GCP vary essentially in how much of
the underlying infrastructure is exposed. The first way you can
run a container on GCP is to use Google
Kubernetes Engine, or GKE. As the inventor of
Kubernetes, Google quite naturally offers a fully
managed Kubernetes service, taking care of scheduling
and scaling your containers while monitoring their
health and state. Getting your code to production on GKE can be as simple as creating a container deployment, with the cluster provisioned on the fly.
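In the Cloud Console that is a couple of clicks; from the command line, a sketch of the same flow might look like this with the gcloud and kubectl tools (the cluster name, project ID, image, and zone below are placeholders, not from the video):

    # Create a small GKE cluster (gcloud also updates your kubectl credentials)
    gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
    # Deploy a container image and expose it behind a load balancer
    kubectl create deployment hello --image=gcr.io/my-project/hello-app:v1
    kubectl expose deployment hello --type=LoadBalancer --port=80 --target-port=8080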
Once running, GKE clusters are secure by default, highly available, monitored, and they run on Google Cloud's high-speed network. They can also be fine-tuned for
zonal and regional locations and can use specific
machine types with optional GPUs or TPUs. GKE clusters also offer hassle-free operations, with autoscaling, auto-repair of failing nodes, and auto-upgrade to the latest stable version of Kubernetes.
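Those behaviors map to flags at cluster-creation time; here is a minimal sketch, with illustrative names and limits:

    # Opt in to node autoscaling, auto-repair, and auto-upgrade
    gcloud container clusters create demo-cluster \
        --zone us-central1-a \
        --enable-autoscaling --min-nodes 1 --max-nodes 5 \
        --enable-autorepair --enable-autoupgrade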
GKE is also a key component of Anthos, Google Cloud's enterprise
hybrid and multi-cloud platform. With Anthos, you can also
migrate existing VMs directly into containers and move
your workloads freely between on-premises and cloud
environments, such as GCP. What if you could focus on
building your stateless app, not on writing YAML
files, and still deliver code packaged in a container? The second way you can deploy
your containers on Google Cloud is with Cloud Run. This gives you the benefits of
both containers and serverless. There is no cluster or
infrastructure to provision or manage. And Cloud Run
automatically scales any of your stateless containers. Creating a Cloud Run service with your container only requires selecting a location, giving it a name, and setting authentication requirements.
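In gcloud terms, that can be a single command; the service name, image, and region here are placeholders:

    # Deploy a container image as a fully managed Cloud Run service
    gcloud run deploy my-service \
        --image gcr.io/my-project/my-app:v1 \
        --region us-central1 \
        --platform managed \
        --allow-unauthenticated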
Cloud Run supports multiple concurrent requests per container. And it works with any language,
any library, any binary, and even any base Docker image. The result is a service
with true pay-for-usage, the ability to scale to
zero, and full out-of-the-box monitoring, logging,
and error reporting. Because Cloud Run is built
using the Knative open source project, which offers a
serverless abstraction on top of Kubernetes, you can have your
own private hosting environment and deploy the
exact same container workload on Cloud Run for
Anthos in GCP or on-premises.
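As a hedged sketch, the same deploy command can target a GKE cluster instead of the managed platform (the cluster name and location here are hypothetical):

    # Deploy the same image to Cloud Run for Anthos on a GKE cluster
    gcloud run deploy my-service \
        --image gcr.io/my-project/my-app:v1 \
        --platform gke \
        --cluster demo-cluster \
        --cluster-location us-central1-a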
The third option for deploying your containers is straight to Google
Compute Engine, or GCE. That's right, you can leverage
your familiar virtual machine environment to run
your containers. This means using your
existing workflow and tools without requiring your team
to ramp up on all things cloud native. When creating a GCE virtual machine, the container section lets you specify the image you'd like to use, as well as a few important options.
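From the command line, a roughly equivalent sketch (the VM name, image, and zone are placeholders):

    # Create a VM that boots Container-Optimized OS and runs the given container
    gcloud compute instances create-with-container my-container-vm \
        --zone us-central1-a \
        --container-image gcr.io/my-project/my-app:v1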
When you get to the boot disk section, the suggested virtual machine OS is something called Container-Optimized OS, an operating system optimized for running Docker containers and maintained by Google. This operating system image
comes with a Docker runtime preinstalled, thus enabling
you to bring up your Docker container at the same time you
create your virtual machine. But it also lacks
most of what you expect to find in a
typical Linux distribution, such as a package manager
and many other binaries. This means a locked-down environment that ensures a smaller attack surface, keeping your container runtime as safe as possible. The great thing about running your containers on Compute Engine is that you can still create scalable services using managed instance groups, as they offer autoscaling, autohealing, rolling updates, multi-zone deployment, and load balancing for the compute instances.
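A minimal sketch of that pattern, with all names illustrative:

    # Create an instance template that runs the container, then a managed group from it
    gcloud compute instance-templates create-with-container my-app-template \
        --container-image gcr.io/my-project/my-app:v1
    gcloud compute instance-groups managed create my-app-group \
        --template my-app-template --size 3 --zone us-central1-a
    # Optionally enable autoscaling for the group
    gcloud compute instance-groups managed set-autoscaling my-app-group \
        --zone us-central1-a --max-num-replicas 10 --target-cpu-utilization 0.6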
Where do these container images come from? Where do I store them? How do I version them? And how do I restrict access to them? The answer lies in Google Container Registry, or GCR, a private-by-default container registry that runs on GCP with consistent uptime across multiple regions.
You can push, pull, and manage images in GCR from any system, VM instance, or your own hardware, and maintain control over who can access, view, and download those images.
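For example, pushing a locally built image with Docker typically looks like this (the project ID and image names are placeholders; gcr.io also has regional hosts such as eu.gcr.io):

    # One-time setup: let Docker authenticate to GCR using your gcloud credentials
    gcloud auth configure-docker
    # Tag a local image for your project's registry and push it
    docker tag my-app:v1 gcr.io/my-project/my-app:v1
    docker push gcr.io/my-project/my-app:v1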
Note, also, how you can conveniently deploy to all three runtimes we've discussed straight from Container Registry: deploy to Cloud Run, to Kubernetes Engine, and to Compute Engine.
Container Registry works with popular continuous delivery systems, such as Cloud Build, Spinnaker, or Jenkins, to automatically build containers on code or tag changes to a repository.
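A one-off build-and-push with Cloud Build can be as simple as this (the image name is a placeholder):

    # Build the current directory with Cloud Build and push the image to GCR
    gcloud builds submit --tag gcr.io/my-project/my-app:v1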
Finally, Container Analysis scans container images stored in the registry for known vulnerabilities, and keeps you informed so that you can review and address issues before deployment.
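If you want to see scan results from the command line, here is one hedged sketch (this command has lived under gcloud's beta track, and the image name is a placeholder):

    # List known vulnerabilities recorded for an image in GCR
    gcloud beta container images describe gcr.io/my-project/my-app:v1 \
        --show-package-vulnerability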
Google Cloud offers you three solid ways
from a fully managed Kubernetes environment to a truly
serverless platform. Pick the solution that
works best for you, and start deploying your
containerized workloads today. Consider trying some free codelabs linked in the description below to explore these products,
and look forward to upcoming episodes and more overviews. If you like this video, please
like, subscribe, comment, share, and look forward to
more GCP Essentials videos. [MUSIC PLAYING]