(upbeat music) - Hi! Thanks for joining us. Today, we'll take a look at deploying containerized
applications faster and more reliably with
Azure Red Hat OpenShift. My name is Oren Kashi. I am a product marketing
manager at Red Hat, and I'm joined today by
my colleague Ahmed Sabbour, who is a product marketing
manager at Microsoft. So, let's get into it. First question: why
Red Hat and Microsoft? As Satya Nadella mentioned
at Red Hat Summit last year, Microsoft has
embraced open source, and Red Hat is known
as the open source company, so it's a natural fit for
both of our companies to come together and
really deliver what's best for our customers. Over 95% of the Fortune
500 use Microsoft Azure, and 100% of the global
Fortune 500 companies in these industries use Red Hat. It is a natural fit
for us to come together and to utilize our synergies
to bring a complete solution, like Azure Red Hat OpenShift, to our customers. So, what do we give you? I like to think about this as coming down to three areas of "more": more than just Kubernetes, more than software, and more than managed. First, more than just Kubernetes: while downloading
and installing Kubernetes isn't all that hard, getting it to a
point where it is ready for an enterprise to run
production-grade workloads is another story. There are a number of things
that need to be pieced together, and that need to be tested, hardened,
and set up for operations. And this is in addition
to other software projects that one may want to add, things
like Calico or Istio or Prometheus, and each one of
those has its own skill set, its own community, and its own release cadence, to further complicate it a bit. Second, more than software: we deliver a
standardized implementation of OpenShift Container
Platform, using best practices, that includes the additional
infrastructure services and operational processes
that go along with that. This makes the platform easier to maintain, and also makes it quick and easy to get an enterprise-grade
cluster up and running. And lastly, more than managed: this is a bit
of a differentiation here in that the level of management differs with Azure Red Hat OpenShift. The word managed is used
a lot in the industry, but we actually manage the
entire stack of your cluster, everything from the infrastructure all the way up through the application, including any upgrades or patching that may have to go along with it. So if one were to go down
the DIY route of Kubernetes, it is not necessarily
a simple thing to do. We see a lot of bullet
points up on the slide here, and each one of these is an activity that somebody is going to have
to go through at some point. Some of these, particularly
in the operate column, are going to be more
repetitive and cyclical, kind of like day-two operations. And that's just taking upstream Kubernetes and getting it up and running,
but on top of that, you're going to add
other things too, right? You're going to want to add
some other open source projects. Maybe you want to do something
with cluster networking, so you might want to add Calico in there. Or maybe you want
a service mesh like Istio, or monitoring with Prometheus. And each one of these is
its own open source project, it has its own skillset
that goes along with it, its own tooling, its
own risk that goes along with that as well. So, you really end up in
a position where you have multiple plates that
you're spinning in the air, and each one of these
plates is different. By the same token, when we look at using public cloud Kubernetes services, really what you're getting is
the green box in the middle, which is Kubernetes, but you're still going to
want to add other services to that. You're going to want
to add things like logging for example, or a container
registry, or maybe monitoring or CI/CD. And generally, if
we're going to be using one of the public cloud Kubernetes providers, you're going to be probably using one of their public services as well. So, those usually come at additional costs that need to be considered. But with OpenShift, what we do is we take all those
services, all of those tools including Kubernetes, and we
go through defect remediation, we do performance fixes, we harden it, we validate the integrations
that go along with that, and we package that
nicely into one product, an OpenShift release. So, it makes it easier for you to know that you have a working platform, and not have to invest cycles to patch the platform and plumb the platform together. So, why would someone want
a managed OpenShift service? And I think this kind of comes down to a couple of different areas. Firstly, we're talking
about quick time to value. So, in traditional IT organizations, the amount of time that
it takes to get resources is probably something in
the days-to-weeks timeframe. I've even had one customer
where it was months. Think about what that means: if it takes days to weeks just to get resources, that's before you even start putting the platform together. So, think about how much time is invested before you get hands on
keyboard to start innovating. By the same token, it also
increases your efficiency in operations. What
I mean by this is that since you don't
have to invest resources to stand up the platform and to operate the
platform, you're better able to invest those resources
into things that matter to the business, things that matter to your underlying value. Instead of focusing on the plumbing, focus on the innovation. For most of our customers,
really building a platform is not the end goal, right? That is not the business that they're in, but rather the platform
functions as a means to an end. It functions as a means
to make their applications more scalable or more maintainable. So, the value is more in the
applications that run on it, and that's where most of them
would really like to focus. This is enhanced further
by the 24x7 support that comes along with it. So, we do offer a financially
backed 99.95% uptime SLA. We take care of all the upgrades and updates with zero downtime. And we also do proactive
monitoring on the clusters. So, we'll start to notice if there's any sort of
service degradation or high resource consumption,
to stave off potential issues. But nonetheless,
should anything come up, any questions or any issues,
you can always open a support ticket, 24x7. And lastly, this is all
enhanced by the flexibility that you have in how
you want to consume it. So, whether you need
a cluster for one hour or whether you want to
commit to three years and leverage the maximum discount, the choice is yours, with the same experience throughout. Okay. Comparing just Kubernetes
or public cloud Kubernetes
with Azure Red Hat OpenShift: Azure Red Hat OpenShift is purposefully engineered together. As we see in the stack on the left, all the
components have been tested, have been integrated, and,
most importantly, are supported. So that top layer over there,
the support and operations, flows through the entire stack. So, whether there is an
issue with the infrastructure itself, whether it is an
issue on the platform, whether there's an issue at some point with one of the services,
maybe service mesh or monitoring or your logging, or even at the application layer, if you're using one of the UBI (Universal Base Images), then you'd get support for that as well. And all of that is included in the cost. When you're looking at just Kubernetes or a Kubernetes-as-a-service
offering, you end up getting more of a managed control plane,
you get the infrastructure, but then you have to add
the services on top of that. So, those top two layers start to become more of an à la carte offering. So, which public service
are you going to consume from the cloud provider? Are you going to use the logging service that comes along with it? Are you going to do your own logging? And then the support and
operations is something that you're going to have
to keep in mind as well 'cause most likely you'll
have to keep some resources to maintain the cluster,
to monitor the cluster, to make sure that the
integrations are done correctly. In addition, each of these services is supported independently. It's its own service,
it's its own offering from the cloud provider. So, moving on, let's
talk a little bit more about Azure Red Hat OpenShift. As I was
mentioning, Azure Red Hat OpenShift
is a jointly engineered, jointly operated and
jointly supported offering by both Microsoft and Red Hat. This allows you to interact
with whichever company you'd like in terms of seeking support. You are able to scale as you need: you could pay as
you go for your cluster, for additional resources, or you can commit for the longer term. You're also able to leverage
your Azure monetary commits. And as I mentioned,
this is further enhanced by just the level of support that you get. So, you do get that SLA, you
do get the 24x7 support. And we are compliant with
many of the leading industry certifications as well. So, how easy is it to get
a cluster up and running? Pretty much as simple as this, right? You'd run your command in the CLI, it'll go off and do its thing,
and it's going to spit back a JSON object
that contains
all of the information related to your cluster. Now, granted, it's not as
fast as in the image here. Maybe we can work towards that, but you'll really get a
full cluster up and running in about 30 minutes.
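As a rough sketch of that command (the resource group, cluster, and network names here are placeholders, not values from the session):

    az aro create \
      --resource-group myResourceGroup \
      --name myAROCluster \
      --vnet aro-vnet \
      --master-subnet master-subnet \
      --worker-subnet worker-subnet

The JSON it returns includes details such as the console URL and the API server endpoint.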
Some of the features that are inherent to Azure Red Hat OpenShift with OpenShift 4: you do get full cluster-admin
out of the box, and we do support private clusters. So, if you want your
cluster to not be accessible over the public internet, and only reachable through internal means, that is an option that is available, and we can configure a cluster like that (see the sketch after this section). We can deploy into existing VNets. We do support cluster autoscaling, so you can ensure that your
cluster has the resources it needs when it needs them. And we also support
multi-availability-zone clusters as well. Being that it is OpenShift 4, we do support Operators. It does come with all the
great productivity tools that OpenShift comes with. And as I mentioned, it is compliant with many of the industry-leading certifications.
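For the private cluster option mentioned above, a minimal sketch, reusing the same placeholder names, is to set the visibility flags at creation time:

    az aro create \
      --resource-group myResourceGroup \
      --name myAROCluster \
      --vnet aro-vnet \
      --master-subnet master-subnet \
      --worker-subnet worker-subnet \
      --apiserver-visibility Private \
      --ingress-visibility Private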
So, now we're going to go through a demo. My colleague Ahmed is going to take you through deploying a multi-tiered application to an existing Azure
Red Hat OpenShift cluster. The front end is deployed
from a pre-built Docker image, and the backend is
deployed from source code that is hosted on GitHub, and it talks to a MongoDB
database on the backend. Ahmed, please get started. - In this demo, I'll take you through a quick rundown of the features that make Azure Red Hat
OpenShift a great platform for container-based and
cloud-native applications. The first thing you would want to do is to log into the web console, using the username and
password that were generated when the cluster was first created. You'll be presented with a console. This console has two distinct views, one for the administrator and one for the developer. Since we're deploying an application, switch to the developer view
and create a new project. The project is a Kubernetes namespace that you can use to host
multiple deployments together and apply certain permissions to them. The first component we're going to deploy is going to be a simple
pre-built Docker container that's hosted on a public repository. I'm also going to uncheck the option to automatically create routes, because I want to show
you that option later. Finally, I'm going to apply a
few labels to this deployment. Labels help identify and
categorize which components belong to which aspect of the application. This is all you need to do to get the container image
deployed to OpenShift. This should work with any container image that follows best practices,
such as defining an exposed port, not needing to run as the root user, and having a single non-exiting command to execute on start.
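For reference, a minimal CLI sketch of these steps; the project name, image reference, and label values are illustrative placeholders rather than the demo's exact values:

    # Create the project (a Kubernetes namespace)
    oc new-project workshop
    # Deploy a pre-built container image and label the deployment
    oc new-app --docker-image=quay.io/example/parksmap:latest \
      --name=parksmap -l "app=workshop,component=parksmap,role=frontend"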
Notice that OpenShift automatically picked up the exposed port and created a service for us. You can manually scale
the number of replicas, and you can also set up
horizontal pod autoscaling, accompanied by the machine
autoscaler and the cluster autoscaler, to make sure that the cluster
always has the resources to run your application.
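A hedged sketch of that scaling setup, reusing the placeholder deployment name from above:

    # Scale manually to three replicas
    oc scale deployment/parksmap --replicas=3
    # Or let a horizontal pod autoscaler adjust replicas based on CPU load
    oc autoscale deployment/parksmap --min=1 --max=10 --cpu-percent=80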
Because this Docker container exposed a port, OpenShift automatically created a service. You will see this is only
accessible within the cluster. While services provide
internal abstraction and load balancing within
an OpenShift environment, sometimes clients outside of OpenShift need to access an application. The way that external clients are able to access applications running in OpenShift is through
the OpenShift routing layer and the data object
behind this is a Route. The default OpenShift
router, HAProxy, uses the HTTP header of the incoming request to determine where to
proxy the connection. Now that the route has been created, you should be able to
access the published service on the internet.
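The CLI equivalent, again assuming the placeholder service name:

    # Expose the service through the OpenShift routing layer
    oc expose service parksmap
    # Print the hostname assigned to the new route
    oc get route parksmap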
Now, let's review logging. OpenShift is constructed in such a way that it expects containers
to log all information to standard out. In this way, both regular
output and error output are captured by standardized
Docker mechanisms. When exploring a pod's logs directly, you're essentially going
through the Docker daemon to access the container's logs
through the OpenShift API. When your application
consists of only one pod and it never fails, restarts,
or has other issues, these ways to view logs may not be so bad. However, in a scaled-out application where pods may have been restarted
and scaled up or down, or if you just want to
get historical information, these mechanisms may not be sufficient. Fortunately, OpenShift
provides an optional system for log aggregation that
uses Elasticsearch, Fluentd, and Kibana (EFK). This logging stack can be installed as an OpenShift Operator. OpenShift provides a number of operators that are community provided
or provided by Red Hat or provided by other partners that enable you to add
functionality to your cluster. These operators are installed
through the OperatorHub. The OperatorHub lists a
number of these operators that are published in the marketplace, published by the community, or certified. And they span
multiple categories, from AI and machine learning
to application runtimes, databases, and more.
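Operators can also be installed without the console by creating a Subscription object. A sketch for the logging stack mentioned above, assuming the Red Hat catalog entry (the channel and namespace can vary by version):

    oc apply -f - <<'EOF'
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: cluster-logging          # assumed Operator package name
      namespace: openshift-logging
    spec:
      channel: stable                # channel names vary by version
      name: cluster-logging
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF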
Now that we have the front end deployed, we want to deploy the backend service, which is developed in Java and exposes a REST endpoint to the visualizer application. The application will query
for national park information, including coordinates. The data is stored in a MongoDB database. Earlier, we deployed a pre-existing image. Now we will expand on that by learning how OpenShift
builds container images using source code from
an existing repository. This is accomplished
through Source-to-Image (S2I). The application is hosted
on a GitHub repo. As soon as I provide the
URL of the GitHub repo, OpenShift detects this
as a Java application and chooses the Java builder image. Similarly, I'm going to deploy this using a deployment config, and I'm going to leave the
automatic creation of the route to the application turned on. Like we've done before, apply labels to identify this deployment configuration. This time this is going to be the national parks component, and the role of this
component is the backend. As soon as I hit create, OpenShift is going to kick off a build. You'll see that OpenShift is
cloning the repo locally. Then it's going to
use the Java builder image to build a Docker container
of that application, inject the source code, and then push it to the integrated Docker
registry that is running on the cluster. After the build is done, OpenShift will also trigger a deployment, and you can see that
the pod is already running.
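A minimal sketch of the same S2I flow from the CLI, with a hypothetical repository URL standing in for the demo's:

    # Build and deploy from source; OpenShift detects Java and picks the builder image
    oc new-app https://github.com/example/nationalparks.git \
      --name=nationalparks -l "component=nationalparks,role=backend"
    # Follow the S2I build as it clones the repo, builds, and pushes the image
    oc logs -f bc/nationalparks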
We're now going to add both readiness and liveness probes to the existing national parks deployment. This ensures that OpenShift does not add any instances to the service until they
pass the readiness checks, and
that unhealthy instances are restarted if they
fail the liveness checks.
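With oc, the same probes could be added roughly like this; the health endpoint path and delay are assumptions, not the demo's exact values:

    # Readiness: don't route traffic to a pod until this URL answers
    oc set probe dc/nationalparks --readiness \
      --get-url=http://:8080/ws/healthz/ --initial-delay-seconds=20
    # Liveness: restart the container if this URL stops answering
    oc set probe dc/nationalparks --liveness \
      --get-url=http://:8080/ws/healthz/ --initial-delay-seconds=20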
Most useful applications are stateful or dynamic in some way, and this is usually
achieved with a database or other data storage. In this part, we are
going to add a MongoDB to our national parks application, and then rewire it to talk to the database using environment variables via a secret. We're going to use the MongoDB image that is included with OpenShift. In a real application, you would
likely use a managed database such as Azure Cosmos
DB to store your data.
credentials of that database to the deployment. And the way to do so is that it can assign the secret that was created to the national parks deployment. This would copy the credentials and the host name of the
MongoDB that was created into the national parks deployment ports, allowing the application to query them and connect to the database. Now let's load up the backend service. This service has an endpoint
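A sketch of that wiring from the CLI, with an assumed secret name:

    # Inject every key of the secret as environment variables into the pods
    oc set env dc/nationalparks --from=secret/mongodb-credentials
    # Verify which variables the deployment configuration now carries
    oc set env dc/nationalparks --list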
Now let's load up the backend service. This service has an endpoint to load data into the database. Let's do that, and then verify that
the data has been loaded by
calling the /all endpoint. You'll see that we get back
a large JSON document with all of the national
parks that have been inserted. But if we go back and
refresh the front end, no data is coming back. So, how does the front end
know which backend to call? It does so through service discovery, by querying the OpenShift API. The front end application is designed to query the OpenShift
API for routes that match the label type=parksmap-backend. In doing so, service
discovery is automated, and the front end is automatically wired to connect to the backend. So, let's go and apply the proper label to the national parks route. We're going to add the
type=parksmap-backend label to the national parks route.
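The equivalent single command, using the route and label from this demo:

    # Tag the backend route so the front end's service discovery can find it
    oc label route nationalparks type=parksmap-backend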
Now that we've added the proper label, in theory the app should
be able to find it. However, you see that it can't, because it doesn't have permissions to query the OpenShift
API in this namespace. In order for this to work, the default service account that runs the pods must have the proper permissions. To configure them, we're
going to create a role binding. Let's just give it a
name here, and the role will be view. I'm going to grant this role to the service account that is running inside the workshop namespace, and that default service
account is called default. All of the pods that
run within the namespace run with this default
service account's permissions. To trigger this change, I'm
going to start a rollout so that this new permission is applied to all of the pods that are running in the parksmap deployment configuration.
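A CLI sketch of the grant and the rollout, assuming the workshop namespace and deployment config names used here:

    # Grant the view role to the namespace's default service account
    oc policy add-role-to-user view -z default -n workshop
    # Redeploy so running pods pick up the new permission
    oc rollout latest dc/parksmap -n workshop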
Once we've done that, you will see that the application immediately was able to pick up the data and
load the national parks back from the database through
the backend service. So far, you've seen how
to deploy an application through S2I and from a pre-built Docker image. Most Git repository services, however, support the concept of webhooks: calling an external source via HTTPS when a change happens in the
code repository. OpenShift provides an API endpoint that supports receiving webhook calls from these systems in
order to trigger builds. By pointing the code repository's
webhook at the OpenShift API, automated build-and-deploy
pipelines can be achieved.
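The webhook URL that GitHub needs can be read from the build configuration; a sketch, assuming the build config name from this demo:

    # Show the GitHub webhook URL for this build config
    oc describe bc/nationalparks | grep -A 1 "Webhook GitHub"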
So, in this case, I'm going to configure the GitHub repository to call the webhook of this national parks deployment for every code change. Now, let's trigger a
source code change and see what happens. I'm going to pick one of the models in one of the controllers
of my application, and I'm going to change the
data that it returns. Once I commit, you'll see that immediately OpenShift picked up this trigger
and triggered a new build. This build is going to be
using the new source code that was pushed to create a
new version of the application. After the build completes,
a new image is pushed to the container registry which
triggers a new deployment. This deployment is deployed
in a rolling configuration, which means there will be no downtime for the live production application. - Thank you, Ahmed, for that wonderful demo. It was really interesting to watch, and to see how easy it was
to deploy an application to Azure Red Hat OpenShift, and really see all of the
components come together. And I hope that our viewers
found that interesting as well. So, in wrapping up, I'd like
to just call your attention to a couple of links and
resources that we have. If you'd like to learn more
or if you'd like to try on a hands-on workshop in doing it yourself, or the documentation or any
feedback about the service, we do have the relevant
links up on the slide here. So, I do encourage you to
take a couple of minutes, come visit us, interact with
us, let us know your thoughts and hopefully this will help you along your container journeys as well. So, thank you for joining us today. (upbeat music)