Alright, looks like we've got
a few attendees who have joined us already. We'll just wait a few more minutes before we start, to allow some of the other attendees to join the session, so just bear with us for a moment. I am just repeating my earlier message for attendees who have recently joined: we're going to wait a few more minutes to allow more attendees to join this session before we kick off. All right, that looks like quite enough attendees to kick off, so hi everyone, and
welcome to this Azure Red Hat OpenShift 4 live webinar. You'll notice that there's a Q&A window in your web page,
so feel free to ask any questions during this webinar.
Just type your question in there, and one of the speakers
will try to answer that during the session. So today we've got
folks from both Microsoft and Red Hat presenting, so we'll briefly introduce ourselves. I'm Clarence Bakirtzidiz, an Azure Technical Specialist working at Microsoft, based out of Melbourne, Australia, and I focus on cloud native solutions. That includes things like container platforms, serverless, DevOps and of course open source software.
[Greeting in te reo Māori.] For those of you who don't speak te reo Māori: my name is Joel Pauling. I'm originally from Taranaki but I live in
Wellington. I am a senior solution architect with Red Hat New Zealand. Prior to that, I worked on large telco cloud deployments in North America, South America and Asia Pacific for the last five to ten years. I am currently a platform and cloud specialist with Red Hat New Zealand. So I do infrastructure: that's Red Hat Enterprise Linux, OpenStack, and OpenShift for the platform side of things, and Ansible for automation. Yeah, and my name is Michael Calizo, and I am a senior solutions architect
based in Wellington as well. Originally from the Philippines, I started with Red Hat three years ago as a senior TAM, managed to make my way up to principal TAM, and now I am working as a senior solutions architect with a focus on OpenShift. Alright, well, thanks for the
intros. Let's move to the agenda for today. This is what we've got planned for you. We're doing the introduction at the moment, then we'll focus a bit on the partnership between Microsoft and Red Hat, and then we'll move on to the part that you're probably most interested in today, which is a product overview: a look at the features and some of the architectural aspects of ARO. From now on I'll just say ARO instead of Azure Red Hat OpenShift. Then we'll look at how to get started: how do you create your first cluster, how do you spin up and deploy your first app? And then
we'll head over to Mike, and Mike's going to run through some demonstrations using the OpenShift web console around the administration and development perspectives. Lastly, we'll close out with some of the Azure native integrations that complement the OpenShift Container Platform, and at the end we'd ask if you could also provide any feedback so that we can improve future sessions. So before we go into ARO, let's have a bit of a look at the partnership between
Microsoft and Red Hat. So here we've got Satya Nadella, the current CEO of Microsoft, and we want to highlight the strategic alliance that we have between Microsoft and Red Hat, primarily centered around open source collaboration. I'm going to read this quote from
Satya, who attended in person at Red Hat Summit last year. He said that Microsoft has embraced open source primarily because it's driven by what he believes customers expect, which is to do what's best for both companies' customers: "Microsoft has a heritage here. We were a developer tools company first, and now of course we are all in on open source." And so what's interesting here,
you might not associate Microsoft with open source.
You know, it has a long history with proprietary products like Windows and Office, whereas Red Hat has its roots in open source, starting with Linux. But this has all changed, driven by customer needs and the prevalence of open source in the enterprise. Customers' expectation nowadays is a consistent hybrid cloud environment, and this is where OpenShift squarely fits. So Microsoft's been around for a
long time: it was founded 45 years ago in the PC era, and we've come a long way. Now more than 95% of Fortune 500 companies trust their business to Microsoft Azure, which is a leading hyperscale cloud, and we keep strengthening, adapting, listening to and enabling customers such as yourselves. Next slide, please. So I guess I'll talk a little
bit about why Red Hat's at the table here. This slide kind of speaks to our pedigree; we like to roll this particular
stat out quite often, but we're quite proud of it: 100% of Fortune 500 and Global Fortune 500 leaders are using Red Hat products and services. We're represented in every industry, including aviation, telecommunications, healthcare, and banking and financial services. They all choose Red Hat because we have a broad portfolio and a history of delivering business value from open source solutions that meet
the needs of the enterprise. It would be fair to say that historically, Microsoft and Red Hat's relationship hasn't always been warm. However, that changed sometime in the last decade, and in fact the Red Hat and Microsoft partnership goes back to 2015. The strength of this partnership has been built on the foundation of our work to develop joint support for Red Hat Enterprise Linux, including our operating systems on each other's respective VM hypervisors. Our learning and growth from
that work has demonstrated the unique customer value that we can deliver to enterprises when we work together. Since then, we've collaborated on bringing a lot of Red Hat solutions to Azure, as well as bringing Microsoft technology to Red Hat customers. A notable example of this is that Red Hat Enterprise Linux is now the preferred platform for running Microsoft's flagship database product, SQL Server. At Red Hat Summit last year, we announced the general availability of Azure Red Hat OpenShift, and this was the first first-party, fully managed Red Hat OpenShift service on a public cloud. Last month, Azure Red Hat OpenShift 4 became generally available. OpenShift 4, launched last year, offers a raft of exciting enhancements which are now available on ARO. So: Kubernetes has become synonymous with container platforms, and today many customers are building these platforms as
they move towards cloud native development practices. But using Kubernetes directly has a steep learning curve, and Kubernetes in an enterprise setting is hard to get right. Compliance, security and reliability are all considerations you need to plan for in any well-planned enterprise project, and these considerations are exacerbated by the fact that the upstream Kubernetes project is a rapidly moving target, constantly evolving. A stat that we liked to talk about last year is that 95% of the Kubernetes codebase changed over the preceding 12 months. So any ecosystem which uses that upstream Kubernetes must also rapidly change to match. So if we look at the type
of things that we see you need to do as an organization to deploy Kubernetes, we can see that there is an installation phase, with templating and validation steps that you need to carry out. You've got a deployment phase where you have to integrate with your existing IDM and security access management, your monitoring systems, and all your egress and ingress points into your Kubernetes cluster. Then you have a hardening phase, where you have to do the computer security compliance steps, and the security validation and certification steps to make your network teams happy. And then you have to operate it, and day-two operations have traditionally been a hard and difficult process with Kubernetes. So Red Hat was keen to ensure we
learned from our nine-year journey producing and supporting a container platform, and with OpenShift 4 we concentrated on what customers were telling us as guiding principles for features in the platform itself. The key reasons for choosing OpenShift are reflected in the results of our research. Every time we went out to survey what issues and challenges companies faced, we heard about container security and compliance, data management, install and upgrade, and management challenges. On the other hand, prospects are
looking for a vendor with demonstrated leadership and contributions to open source communities, platform security, and long-term commercial support. That's what Red Hat has done for 26 years; that's our business model. So, reflecting on these anecdotes, think about where your organization is, where your development methodology is at today. Where are you in your container adoption journey? What technologies are you looking at currently, and what do you consider critical capabilities in your selection process, in terms of both product and vendor capability? Here is how Red Hat has addressed container adoption challenges with OpenShift 4, and how the
value provided by Red Hat OpenShift contributes to addressing each of these challenges. We take container security very seriously and take a holistic approach to deliver what we call a trusted enterprise Kubernetes platform. As you know, containers are Linux: they were introduced as Linux kernel features before spreading to other operating systems, and so is Kubernetes at its heart. Building on a 26-year
history of providing supported, security-hardened Linux, which is trusted by thousands of enterprises worldwide, we deliver the same secure platform across the container stack and stand behind it with long-term support. We recognize that keeping up with the frequent upstream releases can be challenging, and we've invested in automation capabilities, via technologies like the Operator Framework and over-the-air updates, to make it easier to keep up to date with the latest innovations coming out of the communities and to be able to deploy containers at scale. And it's all about
the developers that we're investing in. We've invested in automation capabilities within OpenShift, i.e. continuous integration and continuous delivery pipelines, A/B testing, and the way that your images are deployed in the platform. We've invested in developer tools like CodeReady Workspaces. And we've invested in cloud native frameworks including Spring, serverless and Windows containers, to inspire developers to create the best code, no matter what applications they're building. Chances are you can probably improve your developer productivity with OpenShift. So there are several key reasons
why customers have been choosing OpenShift. OpenShift delivers a trusted enterprise Kubernetes platform. Containers are Linux, and you know that Red Hat Enterprise Linux is the best enterprise Linux distribution in the industry. Obviously I work for Red Hat so I would say that, but the numbers back us up here. RHEL is also the foundation for Kubernetes, with features built into the operating system like SELinux and CRI-O runtime support, and Podman, Buildah and Skopeo tooling to replace proprietary or specific implementations of a container platform. With those tools in place, we were able to put CoreOS, which we acquired several years ago, at the heart of an immutable platform for running the container orchestration engine. Red Hat CoreOS is an immutable, container-optimized RHEL image. It's designed from the ground up to run containers, and it features things like TLS encryption, namespacing and authentication for everything that's provided in the platform itself, all the way up the stack. This trusted enterprise
container platform also provides a consistent layer across every
cloud environment that enables developers and operations teams
to work together seamlessly. Customers also choose OpenShift because we support a very broad range of applications, from traditional J2EE to modern application frameworks like Spring and Django, and analytics platforms like Spark and TensorFlow. Developers are more productive when they build applications on OpenShift. Last but not least, open source leadership and contributions are among the top reasons customers cite when choosing Kubernetes, and Red Hat is known for its open source leadership. As open
source is becoming a standard in the enterprise, we also help customers participate and contribute in these communities, and OpenShift Commons is a great opportunity for organizations to engage with them. Sorry, I got caught by the mute
button. Yeah, great, thanks Joel. I really like this logo here, and I want to highlight that now you understand the strengths of Microsoft and Red Hat, and this partnership brings out the best of both worlds. ARO is really the result of a jointly engineered and supported effort, leveraging talent, innovation and open source leadership from both companies. So you can take advantage of the security and robustness Joel was talking about around OpenShift, and also the flexibility and elasticity of Azure and its broader ecosystem
and services. So if you look at what it takes to run Red Hat OpenShift yourself using infrastructure as a service, here we can see what that would look like running on Azure. There's a lot of moving parts; you'd end up managing a lot of stuff yourself, including the supporting infrastructure. You've got to create the cluster, manage the networking, monitoring, logging, patching, high availability, etc. It takes a lot of time and effort. You can offload some of these responsibilities if, for example, you're working with a managed service provider, but that eats into your IT budget and takes away some of the self-service benefits of having a managed cloud platform in the first place. And if you look on the next slide, we can see what ARO provides. Essentially, you're
presented with the control plane. You have a CLI, there's an API, there's the web console, but everything underneath is really a black box from your perspective. Microsoft and Red Hat take care of operating the infrastructure, applying security best practices, monitoring and operating the VMs, so we're really simplifying the cluster operation experience, and you can focus on building, deploying and scaling apps with confidence. So Clarence has given you a
brief architectural view of some of those Azure components. Let's have a look at the OpenShift Container Platform at a glance. The power of the platform is the seamlessly integrated capabilities that are available to you out of the box. Beyond the certified Kubernetes orchestrator, OpenShift incorporates all the infrastructure components required in a properly designed container platform that you might not get with a standard Kubernetes distribution. If we look at it top-down, some of those additional infrastructure services that need to be built in and supported in a container platform, and that are included with OpenShift, are the cluster services. These provide things like the registry, logging, showback and monitoring, all built in and available by design inside of OpenShift. Then we have things like the service mesh components, by way of Istio and Jaeger, and the Knative serverless framework, and all of these allow us to do serverless, function-based programming, which is so hot right now. These are all provided alongside certified
middleware runtimes as well. So not only have you got the Red Hat certified middleware stack, but we also make available third-party independent software vendor stacks through the Operator Marketplace. Within OpenShift's dashboard you get all of these features and functionality straight up. Mike's going to go over some of these later on in the demos, in particular the Knative serverless functions, but it's important to note that this is all capability which you would have to build yourself if you went with an upstream-based distribution. If we look at the developer
services that are included with OpenShift: we've already touched on some of these, but we have a continuous integration and continuous delivery pipeline using a lightweight CI/CD platform called Tekton, and that is built into OpenShift itself. These components are specifically designed around a container-native development practice, and this allows you to have a full source-to-image capability within the platform, delivered via a browser-based developer IDE environment inside of OpenShift; we call this CodeReady Workspaces. All of this is built on Red Hat CoreOS, which is delivered as a zero-touch platform layer
managed from within OpenShift itself, like any other container workload. This immutable OS technology enables the seamless roll-forward, rollback and upgrade of the platform, delivered on the same finely tuned enterprise Linux kernel and hardened userspace tools as Red Hat Enterprise Linux. These are the same capabilities that are available anywhere OpenShift is deployed, but meshed into Azure-specific strengths and capabilities by Red Hat and Microsoft in the ARO offering. So, going into some of those
features a little more deeply on this slide, let's examine the new capabilities in OpenShift 4.3 and ARO. I'd like to highlight that OpenShift from 4.2 onwards is 100% FIPS compliant, and this is a big thing, especially in the financial services and government sectors: things like encryption of data at rest and in flight are now certified and guaranteed in the OpenShift platform. Three years ago, Red Hat acquired CoreOS, and as part of the acquisition, technology like the container registry scanner Clair and the Container Linux OS itself have been incorporated into OpenShift 4. This makes key infrastructure components and a smaller management and security
surface possible. Leveraging Red Hat Enterprise Linux's hardened and quality-assured kernel and userspace libraries, in combination with the roll-forward and rollback capabilities of CoreOS, OpenShift 4 allows for full operation of the entire cluster platform from within the Kubernetes dashboard, and until you've seen that in action, it's really hard to appreciate just how cool that is. This is all delivered through the operator technology that Mike is going to talk a little bit about later on. Effectively, an operator is
a way of codifying and automating the standard operating intelligence that you might use as a system administrator into a discrete package that you can install and use within the Kubernetes platform. The installation of OpenShift 4 itself is a single Go operator binary, which installs itself into the bootstrap cluster and allows for seamless day-two operations, to enable as close to a zero-ops experience as we can deliver. And on security: Red Hat has worked on all the components of the Linux userspace to build up to the RHEL 8 release last May. This was to ensure that things like TLS 1.3, stringent FIPS, and other cryptographic security
compliance certifications can be met with the RHEL 8 release. Once we had that, we've been able to take it to all our other platforms, and these enhancements are incorporated into the OpenShift 4 platform, meaning you can rest assured that regardless of where you choose to deploy OpenShift, it will include capabilities such as encryption at rest and in flight and Fernet token authentication. Beyond this, OpenShift 4 comes ready for serverless functions, which we've touched on, and this supports microservice architectures and massive resilience, with scale up and down to zero. These are enabled through the service mesh capabilities and the Knative support. And the best news of all is that it's available today. So here you can
see a map of the world, and what we want to highlight here is that Azure has a presence in many countries: we've got 58 cloud regions, which is more than any other cloud provider. If you zoom in on Asia, you can see there are seven regions there. The ones in green are where ARO 4 went GA; that was actually just on the 28th of April, so very recently, about two and a half weeks ago. The other regions you can see there are still on ARO 3, and we will progressively be rolling out version 4 to those regions in the coming months. And some good news: soon that number up there will be eight, because about a week ago Microsoft announced it will be opening a region in New Zealand. So Joel and Michael will be happy, because they reside in New Zealand. Yeah, the Prime Minister even
gave you guys a plug the other day. Yeah, that's right. Unexpected, but welcome. If we move to the next slide: here we just want to highlight some of the changes that have occurred between ARO 3.11 and 4.3. Today we would direct you to ARO 4, as there are a lot of new benefits that you get. Joel has already touched on the OS differences there, and you'll see that the underlying Kubernetes platform is also more modern. It's moved
up to 1.16. An interesting thing is the minimum cluster: if you create an ARO cluster with all the defaults, you would get three master nodes and three app nodes, and the minimum requirement in terms of node sizes is detailed there, so an 8-core VM for masters and a 4-core standard VM for apps. You can change those to larger sizes, but that's the minimum footprint. One of the most
requested features was cluster admin, so you can see that that's now provided for. That enables a lot of things like installing operators, Helm charts, CRDs and so forth, and you can also turn on privileged container access if required, so you have a lot of control. There is a policy we published on our website that says what you're allowed to do, so if you go messing around with some of the system components, you'd violate that policy. But in general you have the ability to administer the cluster, assign user roles, and so forth. If we go to the next one: we'll just touch on some of
the new features here before we go into some demos. When you create a new ARO cluster, you're going to have to assign an identity provider. Initially you get a special user called kubeadmin, and you can retrieve a password for that user; it's meant to be just a temporary user, and then you choose your favorite identity provider. You can see the extensive list there, and you can also use Azure Active Directory for single sign-on; we'll cover that a bit further on in the webinar. Another key feature is the ability
to have a private cluster. By default you have a public API server, so in terms of working with the CLI, the oc command-line interface, that's a public endpoint that you can communicate with. A lot of customers don't want that; they want to have it all private, so you can turn that into a private endpoint. Of course, if you do that, you need a way to access your cluster to administer it, so you'd either need a bastion host, or some way to route to that private network, potentially via a VPN or ExpressRoute. You can also have a private ingress for end-user workloads, so APIs and web apps that you're running in the cluster, for example. If you don't intend to have any of your applications publicly exposed, you can set that to a private ingress, and then you can access those endpoints from within your on-premises network or other VNets. Azure is continually upgrading regions to support availability zones, so you need to check the documentation to see which regions currently have availability zones and when they're on the roadmap to be upgraded. If you choose a region with availability zones, the good news is that ARO will automatically distribute the nodes you create across all those zones. For example, in the first region where ARO 4 was enabled, you would get the three master nodes spread across three AZs, and the same for the worker nodes. So I guess this is just
reiterating the FIPS compliance, and I guess this has come up since the announcement of the New Zealand Azure region: Microsoft is a US-based company and could potentially be compelled, and so having a platform on top of Azure that also works with Azure's security features allows you to meet your compliance obligations, and OpenShift is able to deliver that. So from a support and operations
perspective, you can see that Microsoft and Red Hat work closely together, so we have unified support. We have essentially a single virtual team of site reliability engineers, and whenever you have a problem with your ARO cluster, you just use one channel to raise that concern. You could raise it through the Red Hat portal, or you could raise it through the Microsoft Azure support channel. It would be triaged, with communication both ways between both teams, and it will end up going to the right team. So, for example, if it was more of an underlying platform issue, that would get routed to Microsoft; if it was more about OpenShift internals, the base OS, or the middleware and other components in OpenShift, then that would go to Red Hat. We have access to each other's support systems through single sign-on. Yeah, and I guess, you know,
together you have the experience of the second-largest contributing organization behind Kubernetes, which is Red Hat, by the way, which I have to highlight because people often don't realize that. So you've got 26 years of experience from a company that is used to delivering rapidly moving, innovative open source as stable, secure and supported, business-value-oriented products. And Microsoft is the partner that lets you use that pedigree today, in a familiar and scalable fashion, with a cloud partner who we are working with intimately in the back end to make sure that you are successful. So on the next slide, here we can see, at a high level, what
the underlying architecture for ARO looks like. Now, as I said, as an end user you don't really need to be concerned about this; it's just for your understanding of what's being managed under the hood, and it can change over time, but currently this is what you would get. You get your master virtual machines, and then your worker virtual machines, protected by a security group. If you look on the right-hand side: by default you get an internal OpenShift container registry. If you're already using Azure, you might want to use the Azure Container Registry. You would set up a service endpoint from your cluster to the Azure Container Registry, which allows for private routing over the Microsoft backbone to that PaaS service, rather than going over the Internet. You can also peer to other virtual networks within Azure, and you can have ExpressRoute set up to your on-premises infrastructure. If you look over on the left side of the diagram, you will get a mix of public or internal load
balancers depending on your options. For example, if you have set private ingress, you'll have just an internal load balancer; if you haven't set a private API server, you're going to have a public load balancer with a public IP exposing the API server. And then you'll also have Azure DNS, with either public or private zones depending on your options during setup. And the last
thing I want to highlight there is the management plane. Microsoft and Red Hat administer your cluster through what's called a Private Link service. So even if you have set your cluster to be private, we need to be able to access your VNet to manage your VMs as part of the SLA, so we have that capability. That was a high-level introduction to ARO, its main features, and what's new. What we'll look at now is how to get started with ARO. Both Microsoft and Red Hat
have extensive documentation on ARO, and you can see what that looks like here in the URLs. Microsoft's docs primarily pertain to setting up the cluster in Azure, integrations with Azure, and so forth, and what you'll find is that Red Hat's documentation is more comprehensive on OpenShift itself: the configuration of OpenShift and its features. So you can use both of these sets of documentation to get you going. So from a cluster creation
perspective, what you'll find is that it's a single command to create a cluster. If you're familiar with Azure, you'll know the Azure CLI is the command called az, so you can do az aro create, passing in parameters, and after a while you have a cluster up and running. We'll go through what that looks like in a demo shortly. We have a very easy-to-follow
tutorial; I'm actually going to look at this in a minute. It walks you through, zero to hero so to speak, creating a cluster from scratch. A summary of those steps is highlighted here, and we'll go through this in a moment. You have to install an extension to your Azure CLI that understands aro commands, and register your subscription to have access to the OpenShift resource provider. Then you create a VNet with two subnets; that's really the only bit you have to provide, the network. After that, it's a single command to create your cluster. You can choose whether you want private or public visibility on your API server and ingress endpoint, and then you retrieve your credentials to log in with that initial user we talked about, kubeadmin.
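As a rough sketch, the create and credential steps just summarized look like this; the resource group, VNet and cluster names are placeholders for illustration, not defaults, and the networking prerequisites are covered in the demo:

```shell
# Create the cluster (assumes the resource group, VNet and the two
# empty subnets already exist). The visibility flags control whether
# the API server and ingress endpoints are public or private.
az aro create \
  --resource-group aro-rg \
  --name mycluster \
  --vnet aro-vnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --apiserver-visibility Public \
  --ingress-visibility Public \
  --pull-secret @pull-secret.txt

# Once the cluster is up, retrieve the temporary kubeadmin credentials
az aro list-credentials --resource-group aro-rg --name mycluster
```

The `--pull-secret` argument is optional but recommended, as discussed later in the session.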
We're going to run through a demo of creating a cluster now. Just for transparency, this is pre-recorded, as that was easier with multiple people, to record it and walk you through the process. Let's play that video. Alright, so here we are at the tutorial page that I was talking about, and it's very
straightforward to follow. You will need an Azure subscription to get started. So the first thing, in terms of prerequisites: you need to create a virtual network and two subnets, and once you've done that, you can go ahead and deploy your cluster. Currently the experience is CLI driven. If you're automating your cluster creation, it's most likely that you would be using some sort of CLI tool or script, potentially ARM templates, and probably in the future we'll have an updated Terraform plugin as well. For this demo, we'll just use the CLI; we are also working on an integrated UI wizard experience in the portal, which is currently in development. So here I'm just showing you that I've got the
Azure CLI installed in my terminal. Once you've got that installed, you need to install an extension; the Azure CLI supports extensions for newer services and components like that. So here I'm adding the ARO extension, and you can see I've already got it, as I installed it previously, so it was pretty quick. Now you'll see I can type az aro and I get all these subcommands: I can create a cluster, delete, list, and then retrieve my credentials and so forth.
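For reference, the two commands shown on screen here are:

```shell
# Install the ARO extension for the Azure CLI (a no-op if it's
# already present)
az extension add --name aro

# List the available aro subcommands: create, delete, list,
# list-credentials, show, update, and so on
az aro --help
```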
The next part is a one-off step you need to do. Everything in Azure is represented by a resource provider; for ARO it's the Red Hat OpenShift resource provider, and you need to register it for your subscription to have access. Registering the provider essentially enables the API in Azure for your subscription. So that's now registered.
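That registration step, as a command sketch:

```shell
# One-off per subscription: register the ARO resource provider
# and wait for registration to complete
az provider register --namespace Microsoft.RedHatOpenShift --wait

# Optionally confirm the registration state
az provider show --namespace Microsoft.RedHatOpenShift \
  --query registrationState --output tsv
```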
Now I'll just do one final step: verify that my CLI has the ARO extension installed. Under extensions, you can see the ARO extension is installed, and you can run an update on that periodically to check
for changes at this step, here is optional, but highly
recommended. It's essentially downloading what's gotta pull
secret. So if you work with registries, typically you need
to have a full sequel to access them to be authenticated and
this will give you access to red hats for sure. Container
registry is an additional content, such as officially
supported operators that will cover more later. You can use a free account. We
just sign up um redheads website and then click on that link and
you can download this secret as a text file. I'm just listing on
my file system here that I actually have the post secret.
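The setup steps just described can be sketched as the following commands, following the public ARO tutorial; treat the file name as a placeholder for wherever you saved your download:

```shell
# Add the ARO extension to the Azure CLI (one-off)
az extension add --name aro

# Register the resource provider that enables the ARO API
# for this subscription (one-off per subscription)
az provider register --namespace Microsoft.RedHatOpenShift --wait

# Confirm the extension is present, and update it periodically
az extension list --output table
az extension update --name aro

# The pull secret downloaded from Red Hat is just a text file;
# keep it somewhere safe
ls pull-secret.txt
```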
I'm not going to display it, because it's actually a secret. And now we can proceed to creating the resources we need. Because we're working from the command line, to save typing we define some environment variables. I'm just using the defaults here — you could use any location where ARO is available, for example Southeast Asia or East Asia. I'm going to call my cluster "cluster", and the resource group name is there too. The first step is to create a resource group. If you're new to Azure, a resource group is essentially just a container that houses one or more resources — you deploy resources within a resource group — and it's also a security boundary, so you can control who has access to that resource group. Next we create the vnet, the virtual network. It's very straightforward; the hardest part is deciding what network address range you want to use. Why that's important: if you've got an isolated cluster that doesn't integrate with other networks, it's fine. But if you're doing vnet peering, for example, or peering with an on-premises network, you need to ensure the address range you choose doesn't overlap with any other networks. Once the vnet is created, we can go ahead and create the two subnets in the vnet: one for the master nodes and one for the worker nodes. You'll see the service endpoint we're enabling — if we use Azure Container Registry, that sets things up for us to talk to the container registry securely. Lastly, we disable network policies for the private link service. The private link service I was talking about earlier is how Microsoft and Red Hat access your vnet via our management plane to administer your cluster; if you don't do this step, you might potentially lock us out from having access to your vnet. So we're just saying: for the private link service, disable any network policies defined, so that we can have access to your vnet. (Sorry, that's my kids over there — be quiet, boys, there are a lot of people on this call!) So now we've created the vnet and the subnets — sorry, it looks like I accidentally forgot to run the worker subnet creation step, so I'll just run that final one. So now we're up to the interesting part, which is creating the cluster. You can see there's a single command: essentially it's saying what resource group, what name, what vnet, and then mapping the subnets across. There are other options you can pass that aren't listed here — for example, if you want to change the number of worker nodes or the size of those VMs, you can override the defaults.
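That whole sequence — resource group, vnet, subnets, and the cluster itself — can be sketched like this, following the public ARO quickstart; the names, region and address ranges are illustrative placeholders:

```shell
LOCATION=australiaeast        # any region where ARO is available
RESOURCEGROUP=aro-rg
CLUSTER=cluster

# Resource group to hold everything
az group create --name $RESOURCEGROUP --location $LOCATION

# Virtual network; make sure this range doesn't overlap with
# any network you plan to peer with
az network vnet create \
  --resource-group $RESOURCEGROUP --name aro-vnet \
  --address-prefixes 10.0.0.0/22

# One subnet for masters, one for workers; the service endpoint
# enables secure access to Azure Container Registry
az network vnet subnet create \
  --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
  --name master-subnet --address-prefixes 10.0.0.0/23 \
  --service-endpoints Microsoft.ContainerRegistry
az network vnet subnet create \
  --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
  --name worker-subnet --address-prefixes 10.0.2.0/23 \
  --service-endpoints Microsoft.ContainerRegistry

# Allow the managed private link service into the master subnet
az network vnet subnet update \
  --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
  --name master-subnet \
  --disable-private-link-service-network-policies true

# Create the cluster itself (roughly 30-35 minutes);
# the pull secret is optional but recommended
az aro create \
  --resource-group $RESOURCEGROUP --name $CLUSTER \
  --vnet aro-vnet \
  --master-subnet master-subnet --worker-subnet worker-subnet \
  --pull-secret @pull-secret.txt
```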
So what I'm going to do is pass in that pull secret I downloaded previously — I'm just going to add one more parameter here called pull-secret. Shortly it will say "Running"; it takes about 30 to 35 minutes to fully create your cluster, as it provisions the VMs, sets up the load balancers and everything else that's necessary. That's the high-level overview you saw in the architecture diagram earlier. We're not going to wait here for 30 minutes — that's one of the benefits of having this recorded. It says running now.
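While it provisions, you can poll the state and then fetch the credentials from the CLI — a sketch, with placeholder resource names:

```shell
# ProvisioningState moves from Creating to Succeeded
az aro list --output table

# Once it's up: retrieve the kubeadmin credentials
az aro list-credentials --resource-group aro-rg --name cluster

# And the console URL
az aro show --resource-group aro-rg --name cluster \
  --query consoleProfile.url --output tsv
```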
So we'll come back to that. What I'll show you here is that if we list the clusters I have defined, it will say that one of these clusters is now in the Creating state. There it is, creating, and you can see it has three worker nodes. You can scale those worker nodes up further, and you can scale back down, but you shouldn't go below three nodes, because that would violate the support policy. Some time passes, and the cluster is now created. We're presented with a URL, and that URL is to gain access to the OpenShift console, which is the web interface. If you've used OpenShift Container Platform before — other than it saying Azure in the title there and the URL looking different — this looks and feels just like any other OpenShift Container Platform. So now we're going to log in with
the default user, the kubeadmin administrator user, and there's a command that you have to type to retrieve the password for this user. Normally what you'd do is log in with this user, define and set up your identity provider, and then you can disable this user. I'm not going to execute the command on screen, as that would show the admin password — I still have this cluster running — so I'm doing it off screen: executing that, retrieving the password, and now I can paste it into the login. And in a moment, there we go, we're logged in, and you can see a message that says I'm logged in as a temporary administrative user, prompting me to set up an identity provider. You can see everything is healthy — the state of the control plane, everything looks good. I'll just quickly show you here: this is where you can add the other identity providers for that drop-down. It's very straightforward, and there's comprehensive documentation on both the Microsoft website and Red Hat's OpenShift documentation. So here, if you want to log
into the CLI, you can click that menu button to get a token. You can actually log in with a username and password as well, or you can do it this way and get a temporary token to log in. So now I'm logged in and I can use the oc command line tool to check the status of my cluster. This next doc — which is actually the second tutorial, following on from the one we just looked at — explains how to install the CLI tool: you just go to the help menu in your cluster, choose Command Line Tools, and you can find the appropriate CLI tool for your operating system of choice. So that was a demo of creating a cluster; as you can see, it's really straightforward to do. And now that we've got our cluster, I want to do something with it. So let's imagine that we're a developer who's got this new cluster and wants to deploy an application. The application we're going to deploy is one that presents a web interface allowing you to vote for your favourite smoothie. Unfortunately there's no mango in there — Joel's complaining because we don't have mango in there. From an architectural perspective, the way this is implemented is as a microservices application: the web app uses Vue.js hosted on a Node.js runtime; the API is a RESTful JSON API that runs on Node as well; and the back-end state is stored in MongoDB. Here MongoDB is running inside the cluster, using persistent storage mapped to an Azure disk, or you could keep your state outside of the cluster if you prefer to do that. So let's go into the demo.
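The demo that follows can be summarized in a few oc commands — a sketch only; the repository URLs, names and parameters here are placeholders standing in for the ones shown in the session:

```shell
# Everything lives in a project (a Kubernetes namespace)
oc new-project my-demo

# Deploy MongoDB from the built-in persistent template,
# overriding a few parameters
oc process openshift//mongodb-persistent \
  -p MONGODB_USER=ratingsuser -p MONGODB_PASSWORD=ratingspassword \
  -p MONGODB_DATABASE=ratingsdb \
  | oc create -f -

# Deploy the API straight from source (source-to-image);
# the env var tells it where MongoDB lives
oc new-app https://github.com/example/ratings-api.git \
  -e MONGODB_URI=mongodb://ratingsuser:ratingspassword@mongodb:27017/ratingsdb

# Deploy the front end the same way, pointing it at the API
# by its short in-cluster DNS name
oc new-app https://github.com/example/ratings-web.git \
  -e API=http://ratings-api:8080

# Expose the front end externally via a route
oc expose service ratings-web
oc get route ratings-web
```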
Now for deploying our first application. Excuse me. So we're back at our OpenShift cluster that we were looking at earlier. I've got some instructions that you can follow yourself once you've got your own cluster — on GitHub we've put instructions on how to deploy this microservices application. Anything you do on OpenShift will require a project: if you want to deploy something, you put it within a project. A project, in Kubernetes terms, maps essentially to a namespace — a Kubernetes isolation boundary. It's also a folder that houses your artifacts — your services, deployments, pods, etc. — and it's a security boundary as well. So now that we've got the project, my-demo, I want to deploy MongoDB. Now, say I'm not a MongoDB expert: I'd rather use a recipe that tells me how to install it, one that's already been codified with expert knowledge. That's why I've just listed the built-in templates (and you can create your own templates too) — there's a bunch of them for databases and middleware. Here I'm listing what the mongodb-persistent template would do; if you're familiar with Kubernetes, this is just a Kubernetes manifest. Don't be scared by me mentioning Kubernetes — that's just for your knowledge, and you really don't need to know a lot about Kubernetes to work with OpenShift; it simplifies things greatly. So now that I've found the template I want to use, I'm just overriding some of the parameters — username, password, etc. — and lastly I'm passing it to the oc create command, which will execute and create all the underlying objects for our database instance. So if you jump back into the
console, you can switch to the developer perspective. This is what's called the topology view, which shows you a visual representation of your project assets. You can see MongoDB showing up there, and it's currently deploying — it's pulling down a container. You can check the logs at any point in time to see what's going on, for example if you encounter an error. Now that we have the database, let's deploy our API. You can see that the API is residing on GitHub, so you can look at the source code if you want to check that out later. The way we deploy an app is using oc new-app, and you can choose different deployment strategies. Here I am using source-to-image, where I don't even need to worry about creating a Dockerfile or building a Docker image — I just say, hey, please deploy my app from this source code repository. What will happen is that a build will be created inside the cluster for me automatically, and then my container will be deployed from that built image. Because the API needs to talk to MongoDB, I'm setting an environment variable and passing it through to the API. You can do that from the command line, or you could also do it in the UI. Now the DB is running — you can see the blue circle around it, which is a visual indicator. And for the API, I just want to jump into its deployment config to show you that the environment variable I set from the command line now shows up here. Of course, I could do that through the UI as well. So now we've got the API. The
last step is to deploy the front end. Again, I'm going to take advantage of the source-to-image build strategy, so I'm executing another oc new-app within the same project, pointing at where the source code resides. It also needs an environment variable so that the web app can talk to the API, and you'll notice here that "api" is a short domain name — it's not an FQDN, a fully qualified domain name. That's because within the cluster you can resolve services within the same project using a short name; you don't need a fully qualified name, as it's effectively an internal DNS reference. Now, because this is the front-end web app, I want to expose a route. A route is kind of like an ingress endpoint where you get a URL that allows external traffic to then route to that web app or service within the cluster. Here I'm just listing the route that was generated; that route is publicly accessible. If we go to the topology view in the developer perspective, we can see the API is up and running, MongoDB is up and running, the web app is currently building for us, and that little icon on the top right is the route endpoint — if you click on it, the URL will open up in another browser tab. As you can see, the experience of deploying apps with OpenShift is very easy; it's a great developer experience. If we jump into the logs — this is the build log for the web app — you can see that it's currently installing some npm packages, because it's a Node app — yes, Node.js — and this is the front end. Then here we can see where we built the image and pushed it to the internal registry, which is the default registry unless you specify a different one. If you jump back here, it's
currently light blue — light blue means it's creating the container for the underlying pod. That's Kubernetes terminology, but essentially a container runs in a pod. And lastly, everything is blue, so we should be able to access the web app — and there it is. Very straightforward: there's no messing around with Dockerfiles or understanding container images; it just used the source code to get this up and running. So I'm voting on my preferences for smoothies and submitting that. And I just want to show you — if we jump into the logs for the API, you'll be able to see, when I vote, the JSON ratings come through. I jump in here and you can see there all the ratings that were submitted and received. That's quite a nice developer experience: interact with your app and then check the logs. You can deploy this application yourself on your own cluster — follow the tutorial and then jump to that GitHub link I showed earlier. So that wraps up the second demo, which was — sorry — deploying your first application. I hope you found that useful and got a feel for working with OpenShift. Now, Mike's been sitting there patiently and very quietly, so here's his chance to tell you all about the administration and development experience in OpenShift. Cool, thanks Clarence.
So now that you hopefully understand how easily you can deploy OpenShift with ARO, what I'm going to show are the more advanced features of ARO 4.3 — and not only advanced, but exciting features that we bring to this platform. The things I'll be talking about and demoing are Operators and OperatorHub and how you can use them, and I'll be doing demos as well for Service Mesh and Serverless. So, Operators. Well, how did we arrive at Operators? I'll just give you a little bit of background. OperatorHub was launched by Red Hat and partners like Microsoft, AWS and Google earlier this year. It provides a single place on the Internet for developers to publish Operators, and for customers to find and consume curated Operators. These Operators meet certain quality standards, so you can have confidence they will deliver what they promise. You can deploy these Operators on any Kubernetes distribution, by the way, not just OpenShift. But we also integrated OperatorHub into OpenShift — that's what I'm going to show in a bit — where you can also find fully certified Operators from our partners that meet a very high standard of quality. In OpenShift we have two kinds of Operators: certified Operators, which are supported by software vendors like Red Hat and other partners, and community Operators, where you need to rely on the community for support. Currently in OpenShift we have about 100-plus certified Operators available. Next slide, please. So an Operator is a way
of managing a Kubernetes-native application: an application that is deployed on Kubernetes and also managed by Kubernetes, through a cohesive set of APIs that serve and manage your application. An Operator automates actions usually performed manually. A simple Operator could just do a simple application deployment; an advanced Operator could give you day-2 operations such as automated backup and recovery, as well as updates. The reality is that maintaining containerized applications requires manual intervention, and this becomes difficult at scale. Imagine you have many services or containers running in OpenShift: the worst thing that can happen in your production environment is that a manual process creates a misconfiguration that leaves systems vulnerable. Kubernetes Operators change this. Operators extend Kubernetes to streamline and automate the installation, updates and management of container-based services. With Operators, Kubernetes runs and manages containerized software according to best practice, and this is possible through the Operator control loop, which continually checks your cluster to ensure your described ideal state. Next slide, please. So now what I'm going to do
— since I've introduced Operators at a high level — is actually use an Operator, by deploying a Couchbase database cluster, and also show some of the functionality of the OperatorHub that's integrated into the OpenShift platform. Can you play the video? So here I am, logged in as admin. I went to the OperatorHub, which is available in ARO 4.3. Here it shows all the available Operators that I can use straight away, brought in as part of the installation. The Operators are categorized depending on their usage, and I can even see which Operators I have installed already and am using, and of course the Operators that are not installed yet. I can also see from the OperatorHub the Operator providers, like Red Hat — here showing 33 certified Operators, which are basically Operators from our partners — as well as community Operators that are available for you to use.
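Everything the OperatorHub page shows is also queryable from the CLI — a sketch; the couchbase package name here is an assumption and may differ by catalog:

```shell
# List every operator the marketplace catalogs expose
oc get packagemanifests -n openshift-marketplace

# Dig into one entry: provider, channels, install modes
oc describe packagemanifest couchbase-enterprise \
  -n openshift-marketplace
```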
I can filter by provider if I want to dig deeper into specific Operators — an example is Anchore, the security Operator, for scanning. I can also see the capability level, meaning whether it's a basic Operator or an advanced Operator. Now let's try to use one of these Operators. What I'm going to do now is create — I've created — a couchbase-demo project, which is what you always need to do in OpenShift: you need a namespace to deploy an application into, as explained by Clarence earlier. Now I'm going to look for the Couchbase Operator, the Operator I'll be using to deploy a three-node Couchbase cluster. When I click Couchbase, I'm presented with what the Operator can do, who created it and who supports it, as well as what the required parameters are before I can use this Operator. Now that I understand that, I'll click Install. This opens a new window where I can select which namespace I'll be using and whether I'll use a specific channel, and after clicking Subscribe, that will install the Operator. So now that I have the Operator installed, what I'm going to do
is actually deploy a cluster, because so far we've only deployed the Operator itself — the Operator is what lets me deploy a Couchbase cluster. When I click to create one, I get a window where I can configure which features of Couchbase will be enabled, and of course I need to change the auth secret, because that's one of the requirements; earlier I already created an auth secret in the namespace we created. For the sake of this demo, to shorten the installation, I'll be removing Analytics, and as you can see, I'll be deploying a three-node Couchbase cluster. If I go to my command line and do an oc get pods with watch, you can see that the Operator is now deploying containers. Now that the containers are created, I'm going back to Installed Operators — while still in the couchbase-demo namespace — to check and see whether my deployment is done. Even after deployment there are still parameters you can change in this Operator, and I can do that through the UI or from the command line by editing the CRDs. Now, because I'm confident the install is done, I want to make sure the services created by this Operator are available — but there is no route, so how can I access it? So let me create that route, using the UI service we saw earlier: on the command line you just do an oc expose on the service, and this will create the route. Now I'll click that URL, and this presents me with the Couchbase Server admin login, so I'll use the secret that I created earlier. And this now gives me a Couchbase cluster. I can check the logs, I can check some security updates there, and I'm actually going to check the servers: I can see the servers are not yet 100% healthy, because the deployment is still ongoing, but after a while it's actually showing me a green cluster. So that basically
concludes the Operator demo, showing you how you can utilize the Operators that are already available in OperatorHub within OpenShift. Now I'm going to talk about Service Mesh. Next slide, please. So Service Mesh is another OpenShift 4.3 feature delivered as part of ARO that is very interesting and exciting, in my opinion. How you use a service mesh basically depends on your organization: as organizations move to building applications using microservices, they quickly face managing complex interactions between instances in a highly distributed microservices environment. And that's natural, because as you mature in your container adoption, developers deploy more and more applications — they can move fast now — but day-2 operations become difficult, because traditional service communication in virtualized environments doesn't align with the requirements of the microservices world, and some rethinking is needed to architect the solution properly. Managing a complex environment becomes a DevOps problem: developers are forced to build communication logic into their services, and often they also need to add a configuration server to manage that logic. Red Hat OpenShift Service Mesh brings behavioural insights and operational control to service communication and traffic. It provides a consistent framework for connecting, securing and monitoring containerized applications inside OpenShift. Built on the open source Istio project, Service Mesh provides a control plane and infrastructure that transparently enables this traffic-management capability without requiring changes to developer code. Red Hat Service Mesh also augments Istio's capabilities by adding tracing and observability to the deployment. Next slide, please. So this is what the Istio service mesh
ecosystem looks like. You have Istio, which is the centre of the universe — it manages the control plane. In OpenShift, Service Mesh also adds Kiali, Jaeger, Grafana and Prometheus for observability. From a security perspective, Istio handles mutual TLS authentication transparently to the service, and for connection control Istio can do rate limiting that attempts to protect a service against large volumes of requests — an example of that is mTLS. Next slide, please. It's important to understand as well how Istio does this. In OpenShift there is a concept called a sidecar container. A sidecar container is a pattern where two or more containers are deployed in the same pod: the containers share the same namespace, network and other resources, and all containers in the pod share the same lifecycle. In service mesh, the sidecar pattern is often used to enhance the application container — it's used to add the Envoy proxy container alongside it. So you have a control plane, which has Pilot, Mixer and Auth, the native Istio components, and you add an Envoy sidecar to your pod, your container, to actually apply security, route rules and policies, and report traffic telemetry at the pod level. Next slide, please. Here, next,
I'm going to do a demo of how you can use the service mesh features, using an application called Bookinfo from the Istio project. We'll be controlling a Java microservice where there are multiple versions of the reviews service, managed by the service mesh. So here I'm logged in again as an admin. First I'll create a project where I'll be deploying my service mesh — let's call it istio-system. Once I'm done with that — and I have a GitHub repository for this specific demo, as well as the service mesh setup, which I'll share with you later — to deploy Service Mesh on this OpenShift platform I'll need four Operators. First we'll deploy Elasticsearch: using the OperatorHub, I'll search for the Elasticsearch Operator and install it. For this demo I'll be deploying a cluster-wide service mesh, though of course you can deploy a namespace-specific service mesh instead. After deploying Elasticsearch, I'll deploy the second Operator — this time Kiali, and the Kiali Operator, as discussed earlier, is used for observability. Again, I'll be deploying this Operator for a cluster-wide service mesh deployment, so I install it into the default project. Next I'll be deploying Jaeger. Jaeger is
useful for tracing, and you'll notice I deploy the Red Hat supported Operator for this purpose, because I want to make sure that if I'm deploying a service mesh I get Red Hat support as well — and of course, if you deploy it in an ARO environment, you'll get support from the ARO team. Now I'll deploy the Service Mesh Operator itself, and after a while all four Operators will be installed. Now that the Operators are installed, I'm going to actually deploy the control plane. Here I want to monitor what's going on during this deployment, so I'll do an oc get pods and watch the progress while I deploy the control plane. Because this is deployed using an Operator, I'll just use the default configuration, and as you can see in the command line, it creates multiple pods right away that will be part of the OpenShift Service Mesh control plane. What I'm going to show here is the developer perspective, so that while waiting for this deployment to finish I can see and monitor what's going on in the namespace I'm currently in. Now that the deployment is done, I go back to my administrator login in the UI and go to the Operator, and the next step for me is to create a service mesh member roll. The ServiceMeshMemberRoll controls which namespaces, or projects, I want to apply the service mesh to. Imagine I'm running a production environment: I'll have multiple namespaces, but I want the mesh to control only the bookinfo namespace — so that's what I did. After that, I'm now ready to deploy the application that the service mesh will control, so I'll create a new project called bookinfo. Going to the project UI, you can see that there is a new
project called bookinfo. Now I'm going to change the view to the bookinfo namespace, so that when I deploy the application it comes from the Bookinfo constructs. As you can see, the deployment is very quick, because I've done this a second time already — that means the images are already available in my local internal registry. Now that I have an application running, I'm going to apply the constructs inherited from upstream Istio to control the application. After that I'll create a gateway, so I can actually access the Bookinfo application, and then I'll create the destination rules so that I can control all the destinations that are going to be hit in my application. Now that I'm satisfied with all the created resources, I want to go and check the bookinfo project to show you the services that are running. In Istio I also see the same services, and then I can see the routes as well: the routes for my application, and the routes for Jaeger.
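Those routes, and the route control applied through the rest of this demo, can be sketched from the CLI like this — the VirtualService follows the upstream Bookinfo samples, and the namespace and user names are assumptions:

```shell
# Routes in the application namespace and the mesh consoles
# (Kiali, Jaeger, Grafana)
oc get routes -n bookinfo
oc get routes -n istio-system

# Route control used later in the demo: reviews traffic goes
# to v1 (no stars), except the test user "jason", who is
# routed to v2 (black stars)
oc apply -n bookinfo -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
EOF
```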
So let us open the Jaeger UI as well as the Kiali UI. Because this is the first time we're opening the Jaeger application, it will ask for a login. Now, this is the application I just deployed, as you can see here. I'm refreshing it to show that the book reviews service is actually doing a round robin, because I haven't applied any control on the routes of the application yet. So here I'm logging into Jaeger, and then Kiali. Kiali gives me observability, and here I can view the graphs of a specific namespace where my application is running. In Jaeger I can use traceability to understand what's going on with my application, and use it for troubleshooting purposes as well. What I'm doing here is sending traffic to my application in OpenShift so that I can see the traffic flowing, almost in real time, in Kiali, and observe what's going on — here I'm changing the refresh interval to 10 seconds. Now I'm clicking through the application to send more traffic, and as you can see, it's round robin: all the app versions running in my pods are being hit. Now, what I just did actually created a route control for my application, to allow only version 1 of my application to be accessed. Version 1 means no stars, and if you go to the Kiali UI you can see that version 1 has 100% of the traffic, while version 2 and version 3 have no traffic. Next I'm going to change the logic and control the traffic so that if the user jason is logged in, it will only show two black stars, and if I go to Kiali I can see that the traffic actually goes to version 2 as well. Now, if I log out, with that route control I applied I'll be seeing red stars only.
As you can see, that means version 3 of my Java microservice application. What I'm going to do next is control my traffic and tell Istio — the service mesh that controls the routes — to only use version 2, which is black stars only. So you'll see in Kiali here that all the traffic, in green, is basically going through version 2 of my microservice application. Now, one piece of functionality you can use is fault injection, or chaos monkey. I just applied a traffic control to my application, and as you can see, there are now errors on the front end of my application. It also shows in my Kiali UI, because this is showing almost real time — you can see there are orange and red applications there. If I go through to Jaeger, I can see that there are errors, and if I want to dig deeper into what an error is about, I can do that in Jaeger too — this is really useful for troubleshooting purposes. And that's basically the end of my demo for service mesh. Next slide, please. So next I'm going to talk about
Serverless in ARO 4.3. This feature is actually Tech Preview in 4.3, but in the current release of OpenShift, which is 4.4, it's already GA. So what is OpenShift Serverless? OpenShift Serverless is based on an open source project called Knative, one of the fastest growing serverless projects in the market. This ensures that you don't suffer lock-in concerns, and you still get the innovation from the growing open source community.
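A Knative Service — the serving piece shown in the demo later — looks roughly like this; the image, names and namespace are placeholders:

```shell
# A minimal Knative Service; Knative Serving scales it with
# demand, including down to zero when idle
oc apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: serverless-demo
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/hello-world:latest
        env:
        - name: TARGET
          value: "world"
EOF

# Knative reports the URL it serves the revision on
oc get ksvc hello -n serverless-demo
```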
Next slide, please. So when I talk
about serverless, to different people that I met. Many of them. Well, think of several lists
about a AWS Lambda or function as a service, but in reality,
or at least in red hats perspective as well as CNCF
serverless concept is actually broader than function as a
service. If you compare serverless too. Function as a
service is like saying function as a services server, less the
same way that you say square us rectangle. Next slide please. So. Open shift serverless is
basically based on three Knative features, which are build, eventing, and serving. For build, in OpenShift 4.3 we use Tekton. For eventing, basically, you can use sources such as Kafka messages, file uploads to storage, timers for recurring jobs, and sources like Salesforce or ServiceNow, and even email; this is powered by Camel K and AMQ. And the third feature is serving, now in OpenShift 4.3. The only feature that you can use for now is serving, and that's what I'm going to show and talk to you about. Serving is a module that receives the output of a build and is responsible for running the container in the event-driven world. This allows for auto-scaling on demand, but also scale to zero, which is, you know, the whole point of the serverless concept; it's critical for serverless workloads and can expand the density of worker nodes in a cluster tremendously. So, next slide, please.
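As a rough sketch, a scale-to-zero Knative Service like the hello-world one deployed in the demo could be declared as follows. The project, service name, and image here are illustrative assumptions, not the exact ones from the recording.

```shell
# Hypothetical sketch of a Knative (serving.knative.dev) Service.
# The project, name, and image are placeholders; scale-to-zero is the default.
oc apply -n serverless-demo -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale to zero when idle
        autoscaling.knative.dev/maxScale: "5"   # cap on-demand autoscaling
    spec:
      containers:
      - image: quay.io/example/hello-world:latest  # placeholder image
EOF
```

With minScale at 0, the revision scales to zero when idle and spins a pod back up on the first request, which is the behaviour shown in the demo.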
Now, what I'm going to do is a demo of Serverless. Can you play the video? Right, so here I am. I've already created a namespace, or project, called knative-serving; that's where I'm going to deploy the Serverless Operator. So, going back to OperatorHub again, I'm going to look for the Serverless Operator and then subscribe to it. I've chosen version 4.3 for this specific deployment. Now that I have the OpenShift Serverless Operator deployed, what I need to do is create a KnativeServing instance. The KnativeServing instance is what we're going to use for the demo to show the functionality of Serverless in OpenShift. Now that I've clicked create, it will actually show you the pods, or containers, that are needed for me to use Knative. So it's currently being created. Here I'm just showing you guys the resources that are being created as part of this deployment. These are the pods that are needed for me to use the features of Serverless. Now, what I'm going to do is I'm
going to create a serverless-demo project, where I'm going to add, or create, an application. So in my GitHub, which I'm going to show, I have a small Golang application, and this application will create a Hello World service. When I did this screen recording, I needed to update some of the serverless YAML files, and after that I successfully created the application. Here, I'm actually trying to show the Serverless tab that is created, in addition to the normal tabs, in the OpenShift admin cluster UI. I go to my developer UI: this is now the application that I just deployed on the command line. This is the Golang application. So now I trigger an access, which triggers the creation of the container. It shows me that I have the Hello World service working, and the application actually scaled up from zero to one. Now, what I want to do is to show you
another functionality of Serverless, which is that I can control traffic, of course, using the UI or the command line, on the application or service that is running. What I'm doing here is updating the deployment config. Now you can see there are two revisions; with two revisions available, I can now set traffic, say, for example, a 50/50 split of traffic going to the application. Let's call them split one and split two. I'll just choose the right container that's available for me, and using the command line I can show that there will be two endpoints behind the same URL now that it's split 50/50. I need to click one more time, and it actually creates, or spins up, the second container, and a while after that it will scale down to zero as well.
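The 50/50 split shown here can also be done from the command line with the kn CLI. The service and revision names below are illustrative; real revision names come from `kn revision list`.

```shell
# Hypothetical sketch of a 50/50 traffic split across two Knative revisions.
# Replace the revision names with the ones listed by `kn revision list`.
kn service update hello-world \
  --traffic hello-world-00001=50 \
  --traffic hello-world-00002=50
```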
So what I'm trying to show here is how you can access Serverless using the UI. I used my Spring Boot GitHub repository, and I can see that there is actually Knative functionality right away in the build config; I can update what's called the scaling, and I can even use labels. Of course, labels are important if you are using the service mesh functionality, so that you are aware of, you know, application utilization within the specific project. So this is just me showing that it's possible for you to do that as well. That other application shows that it scaled down to zero as well, and here I'm just showing what capabilities, or what features, you can get and access from the developer UI. Here I am actually showing the populated Serverless features, which show the available routes as well as the revisions and the service. And that concludes my demo, and I'll hand back to
Clarence for the next topic. Thanks, Mike, all great demos. So now we understand that ARO at its core is an OpenShift Container Platform cluster, but it's a managed service, and it has the familiar Azure tooling surrounding it to create, monitor, and scale the cluster. Let's look at some of the other native integrations for ARO from Azure. So, next slide. So, we've talked already about
single sign-on. One of the first things you'd normally do is set up an identity provider, and that's usually going to be Azure Active Directory. Now, you can set up multiple providers: for example, maybe for internal users you use Azure Active Directory, and for contractors or external users you want to manage them in a different identity provider. You can have more than one set up. Next slide, please.
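For reference, adding Azure AD as an OpenID Connect identity provider is done through the cluster's OAuth resource. This is a minimal sketch with placeholder tenant and client IDs, assuming the client secret has already been stored as a secret named openid-client-secret in the openshift-config namespace.

```shell
# Hypothetical sketch of an Azure AD (OpenID Connect) identity provider.
# <azure-ad-application-id> and <tenant-id> are placeholders.
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: <azure-ad-application-id>
      clientSecret:
        name: openid-client-secret   # secret in openshift-config
      claims:
        preferredUsername:
        - upn
        email:
        - email
      issuer: https://login.microsoftonline.com/<tenant-id>
EOF
```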
Another interesting one is the Azure Monitor integration. By default, you will get Prometheus and Grafana set up in your ARO cluster for observability into the performance, metrics, and logs of your applications and cluster. You can also use Azure Monitor to externally monitor the cluster, and it is useful if you're already managing other resources in Azure and your subscription. For example, maybe you've got multiple clusters: your on-premises clusters, or other types of clusters like AKS, and you want a unified single pane of glass for overall cluster health; you can use this integration. It's currently in preview, and you install it from the command line; a future release will have this more integrated, essentially like a checkbox in the portal, or a command-line argument when you create your cluster, to activate the integration. You can see here a screenshot of what you get in terms of clusters and nodes, and you can drill down to individual containers and see their logs as well. Next slide, please. So the other aspect is
scaling your nodes. We saw how you can scale pods, for example using Knative Serving, but sometimes we need to scale the underlying infrastructure: the underlying VMs, or nodes. Now, OpenShift supports autoscaling through what's called the cluster autoscaler, which is essentially based on the upstream Kubernetes autoscaler component. Another concept that OpenShift has is machine sets: you define a machine set for each type of node, and you can set up a machine autoscaler to set lower and upper bounds. For example, on this slide you can see we've set min workers to one and max to two. We can then have the cluster autoscaler drive the scaling of this machine set automatically: if we start launching a lot of pods, for example, new pods will hit a Pending state once there's no more capacity, and that's the indicator for the cluster autoscaler to start creating more nodes, or instances. So then we'll start getting VMs created on Azure through the cloud controller that's integrated with the Azure platform. Next slide, please.
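The min-one/max-two bounds described on the slide can be sketched as a ClusterAutoscaler plus a MachineAutoscaler targeting a worker machine set. The machine set name below is a placeholder.

```shell
# Hypothetical sketch of the min=1 / max=2 worker bounds from the slide.
# List your machine sets with: oc get machinesets -n openshift-machine-api
oc apply -f - <<'EOF'
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec: {}                       # enable the cluster-wide autoscaler with defaults
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1               # lower bound from the slide
  maxReplicas: 2               # upper bound from the slide
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <worker-machineset>  # placeholder machine set name
EOF
```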
There are quite a few other integrations; I'll highlight a couple here. For managing secrets: at rest, secrets are already encrypted in ARO and OpenShift. If you want to manage your secrets externally, you can use Azure Key Vault, which is Azure's secret store, and we have a driver for it. Currently, this is a component you install yourself, via a Helm chart. It will then allow transparent access to those secrets, which means you don't need to change your apps to know how to talk to Key Vault; they just receive their secrets through a mounted file, for example, or environment variables, and the driver will then retrieve those secrets from Key Vault and make them available to your pods and workloads.
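A minimal sketch of what a Secrets Store CSI configuration for Key Vault might look like, assuming the driver and its Azure provider are already installed; the vault, tenant, secret, and namespace names are placeholders.

```shell
# Hypothetical SecretProviderClass exposing one Key Vault secret as a file.
# keyvaultName, tenantId, objectName, and the namespace are placeholders.
oc apply -n my-app -f - <<'EOF'
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv
spec:
  provider: azure
  parameters:
    keyvaultName: "<my-keyvault>"
    tenantId: "<tenant-id>"
    objects: |
      array:
        - |
          objectName: db-password   # name of the secret in Key Vault
          objectType: secret
EOF
```

A pod then mounts this via a csi volume referencing the SecretProviderClass, and the secret shows up as a file without the app ever talking to Key Vault directly.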
And lastly, there's the Azure Container Registry. In ARO, you have the OpenShift Container Registry, which is built into the cluster. Red Hat also has Quay, another enterprise-ready registry that you can either self-host or use as a SaaS offering. And if you're already using Azure, and using containers elsewhere in Azure, you can use the Azure Container Registry with OpenShift as well. To make use of this, you just need to update what's called the service principal, which is like a service account, if you like, that is used when you create the cluster: you just need to assign it a role called AcrPull to authorize the cluster to talk to the Azure Container Registry.
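Granting that pull permission is a one-line Azure CLI role assignment; the registry name and service principal ID below are placeholders.

```shell
# Hypothetical sketch: give the cluster's service principal pull access to ACR.
# <myregistry> and <cluster-service-principal-app-id> are placeholders.
ACR_ID=$(az acr show --name <myregistry> --query id --output tsv)
az role assignment create \
  --assignee <cluster-service-principal-app-id> \
  --role AcrPull \
  --scope "$ACR_ID"
```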
Now, before we move into the closing, I wanted to show you a couple of things very quickly. Right, so you should be able to see my screen now; can I just get confirmation of that from one of the speakers? It's showing up, yeah. So, just quickly before we go into the closing, I want to show a couple of things. We have documentation on setting up the Azure AD integration we were talking about. So, for example, here you can see that on my cluster I've actually got an additional identity provider, which is Azure AD, and you can have multiple identity providers. Because I'm already signed in to the Azure portal from earlier, notice that when I click on that link, I'm seamlessly signed in using my Azure Active Directory identity. And the last
thing I want to show you is this: ARO has the built-in Prometheus and Grafana components for monitoring, which is great. This is the Azure Monitor extension that I was talking about: if you're already managing other clusters, for example here I've got some AKS clusters, and then I've got my ARO cluster showing up, I can see at a glance what's going on, and you can drill down into that cluster to see its overall health. Here's my cluster utilization; I can go down into specific health views to see the components that are running in there, and the nodes. You can drill down further into the individual nodes, to the pods that are running on each node, and eventually you can get down to the containers and see their logs. OK, so, next slide, please. So, we hope you enjoyed this
webinar, gained a good understanding of ARO 4, and are keen to try it out. As I said, it's a jointly managed offering by Microsoft and Red Hat. And if you haven't had your questions answered, or you've got questions that you want to ask, feel free to type those into the Q&A box now. We value feedback from all the attendees today, so there will be a feedback form link posted in the Q&A box; just have a look for that now. There's also an optional opt-in if you want any follow-up engagement from either Microsoft or Red Hat, so you can opt into that if you're interested. The next slide. Uh, one more slide. So there are a
few resources to point out here. If you want to learn more about Azure Red Hat OpenShift, you can see the link at the top left; the top-right link shows you how to get started creating a cluster, like we went through earlier. There's also an FAQ: if you want to understand some of the limitations, the roadmap, et cetera, have a look at that link. And lastly, if you've got feedback for the product team, like if you want to provide feedback to improve the product or potentially influence the roadmap, then you can submit feedback via the link in the bottom right. So, once again, thanks for watching. We hope to see you in future webinars. We'll stick around to answer any questions that we haven't answered yet, and please submit any feedback you've got via the link in the Q&A box. Thank you. So, we'll stick around for five minutes or so, yeah? Yeah, we'll hang around for a few minutes. Should we read out some of
these questions? I think that could be useful; it's a good way to spend these five minutes. There have been some good questions coming through: I've answered some, Mike's answered some, and Clarence has answered some. So there's one here that at the moment we haven't answered: at a high level, what are the typical tasks of operators, or admins, versus the tasks of devs? I guess at a high level, an operator is really something that you would deal with as an administrator. Devs don't necessarily need to even see or deal with the operators, but you can set up, say, an unprivileged level of access for developers to a subset of an operator, which they might find useful for something like: I need to deploy, you know, a cluster for my project. You might make it available as an operator, and so it's more about consumption versus administration, depending on what your role is. Yeah, in my experience
based on, you know, my engagement with our customers, a developer will often hand-create the operators and give them to the day-two, day-three type of people, because they want to make sure that, you know, all their deployments are based on their standards. So, as was just explained, operators handle, you know, administration tasks. And if you want to look at the different kinds of languages that you can use in creating operators: there's Go, of course; you can use Ansible, and you can use Helm; other customers of ours use Java to create an operator. So yeah, there is also an Operator SDK that you can use, which provides frameworks for how you are supposed to create operators for OpenShift.
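As a rough sketch, scaffolding a new Go-based operator with a recent Operator SDK looks like this; the domain, repository, group, and kind names are illustrative.

```shell
# Hypothetical sketch of Operator SDK scaffolding (recent SDK releases).
# Domain, repo, group, and kind are placeholders for your own project.
operator-sdk init --domain example.com \
  --repo github.com/example/memcached-operator
operator-sdk create api --group cache --version v1alpha1 --kind Memcached \
  --resource --controller
```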
The other answer in there is that, you know, an operator is not a Red Hat-specific or OpenShift-specific thing; it comes from the upstream Kubernetes project. What we're doing is working with various vendors, and it could be a security appliance vendor, it could be a middleware vendor, to make sure that if they're delivering a container-ready application stack, they do it via an operator. So effectively, at its most basic, an operator can just be the deployment method, like an MSI install on Windows or an RPM on Linux. At its most complex, some of the operators include things like machine-learning pipelines to auto-tune themselves: they might do the scaling of a database platform that's deployed as an operator, automatically scale up the number of worker nodes for that database platform, and do all the replication required, all the sorts of syncing operations, to automate that in a way that's completely seamless to the end user.
Operators are admin-type people, right? Yes, and you wouldn't expect a dev, versus an operator, to actually be installing or managing OpenShift from scratch; that's not a level they need to go into, they just work in their space inside it, using it. Yeah, I mean, I think OpenShift, in my perspective, enables DevOps, so that devs and ops can work cohesively, and this is a perfect platform to do that, because it enables a developer to deploy, or develop, applications on a stable environment, where operators manage the infrastructure-type tasks, to, you know, water and feed the platform and provide it for developers to continue working on their development, really. Yeah, I mean, that's
something I talk about right from the start, right? One of the big differences between Red Hat and the community is that the community has goals and aspirations that their enterprise customers may not necessarily have. Even when you go with an upstream-first approach, there's always going to be this disconnect between what you need to deliver to actually drive value for your business and what you actually have to deal with in terms of the project. And so the point of OpenShift is to make Kubernetes into something that's able to deliver your business value, which, in digital-transformation terms, is the applications, what your developers' applications interact with; that's effectively what businesses do to digitally transform. Alright, looks like we've
answered all the questions. Thanks again, everyone, for attending, and we hope that you enjoyed this webinar; we certainly did. We hope to see you again soon, so please get that feedback in, and, yeah, go ahead and try out Azure Red Hat OpenShift. Thanks, guys. Yep, and so, thank you all. Thank you.