Container Orchestration Explained

Captions
Hi everyone, my name is Sai Vennam, and I'm with the IBM Cloud team. Today, we want to talk about container orchestration. In the past, we've talked about containerization technology, as well as dived into Kubernetes as an orchestration platform. But let's take a step back and talk about why container orchestration was necessary in the first place.

We'll start with an example. Let's say that we've got three different microservices that have already been containerized: the frontend, the backend, and a database access service. These three services work together and are exposed to end users so they can access the application.

The developer has a very focused view of this layout. They're thinking about the end user accessing the frontend application; the frontend, which relies on the backend; and the backend, which may in turn store things using the database access service. The developer is focused entirely on this layer. Underneath it, we've got an orchestration layer - I'm thinking about Kubernetes right now, where you would have something like a master node that manages the various applications running on your compute resources. But again, the developer has a very singular view and is really only looking at this one stack: the specific containers and what's happening within them. Within those containers there are a few key things: the application itself, things like the operating system and dependencies, and a number of other things that you define - but all of those things are contained within the containers themselves.

An operations team has a much larger view of the world - they're looking at the entire stack. There are a number of things they need to focus on, so let's use this side to explain how they work with deploying an application that is made up of multiple services.

First, deploying. Taking a look here, it's very similar to the developer's view, but the key difference is that these are no longer containers - they're the actual compute resources. These can be things like VMs (virtual machines) or, in the Kubernetes world, what we call "worker nodes". Each one of these would be an actual compute node - for example, 4 vCPUs (virtual CPUs) with 8 GB of RAM for each of the boxes we have laid out here.

The first thing you would use an orchestration platform for is something simple: just deploying an application. Let's say that we start with a single worker node managed by the master. On that node, we'll deploy one instance of each of the three microservices: the frontend, the backend, and the database access service. Let's assume that this already consumes a good bit of the compute resources available on that worker node, so we add additional worker nodes to our master and start scheduling out and scaling our application.

That's the next piece of the puzzle: the next thing an orchestration platform cares about is scaling an application out. Let's say that we want to scale out the frontend twice, the backend three times, and the database access service three times as well.
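As a concrete illustration of the deploy-and-scale step described above, here is a minimal sketch using the official Kubernetes Python client. The video doesn't prescribe any particular tooling, and the names, image tag, and resource requests below are assumptions made purely for this example.

```python
# Minimal sketch: deploy the "backend" microservice and scale it to 3 replicas.
# Assumes a reachable cluster and kubeconfig; names and image are hypothetical.
from kubernetes import client, config

config.load_kube_config()          # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="backend"),
    spec=client.V1DeploymentSpec(
        replicas=3,                # "scale the backend out three times"
        selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "backend"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="backend",
                        image="example/backend:1.0",   # hypothetical image
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "500m", "memory": "512Mi"}
                        ),
                    )
                ]
            ),
        ),
    ),
)

# The control plane (master) schedules the replicas across the available worker nodes.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The same pattern, with different replica counts, would cover the frontend (2 replicas) and the database access service (3 replicas).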
An orchestration platform will schedule our different microservices and containers to make sure that we utilize the compute resources in the best possible way. Scheduling is one of the key things an orchestration platform does.

Next, we need to talk about networking and how we enable other people to access those services. That's the third thing that we can do with an orchestration platform. It includes creating things like services that represent each of our individual sets of containers. The problem is, without something like an orchestration platform taking care of this for you, you would have to create your own load balancers, and manage your own services and service discovery as well. By that I mean: if these services need to talk to one another, they shouldn't have to find the IP addresses of each different container, resolve them, and check whether they're running - handling that is something the orchestration platform does for you. With this, we have the ability to expose a single point of access for each of those services. An end user might access the frontend application, so the orchestration platform would expose that service to the world while keeping the other services internal - where the frontend can access the backend, and the backend can access the database (a sketch of what those service definitions might look like follows below). Let's say that's the third thing an orchestration platform will do for you.

The last thing I want to highlight here is insight. Insight is very important when working with an application in production. Developers are focused on the applications themselves, but let's say that one of these pods accidentally goes down. What the orchestration platform will do is rapidly bring up another one and bring it within the purview of that service - and it does that for you automatically (there's a small sketch of checking pod status below as well). In addition, an orchestration platform has a number of pluggable points where you can use key open source technologies - things like Prometheus and Istio - to plug directly into the platform and expose capabilities that let you do things like logging and analytics. There's even a cool one that I want to sketch out here: the ability to see the entire service mesh.

Many times, you might want to lay out all of the different microservices that you have and see how they communicate with one another. In this example, it's fairly straightforward, but let's go through the exercise anyway. We've got our end user, who would likely be accessing the frontend application, and we've got the two other services as well: the backend and the database. In this particular example, I'll admit, we have a very simple service mesh - we've only got three services - but seeing how they communicate with one another can still be very valuable. The user accesses the frontend, the frontend accesses the backend, and we expect the backend to access the database. But let's say the operations team finds that, actually, sometimes the frontend is directly accessing the database service - and they can see how often, as well. With things like a service mesh, you get insight into things like operations per second. Let's say there are 5 operations per second hitting the frontend, maybe 8 that go to the backend, maybe 3 per second that go to the database service, but then 0.5 requests per second going from the frontend straight to the database service.
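Here is the first of those sketches: a hypothetical take on the networking piece with the Kubernetes Python client, exposing the frontend to the outside world while keeping the backend reachable only inside the cluster. Port numbers and names are assumptions for illustration, not details from the video.

```python
# Minimal sketch: one internal service (backend) and one externally exposed
# service (frontend). Assumes the Deployments from the previous sketch exist.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Internal service: other containers reach the backend by its name ("backend"),
# so nobody has to track individual container IPs -- the platform handles
# service discovery and load-balances across the backend pods.
backend_svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="backend"),
    spec=client.V1ServiceSpec(
        selector={"app": "backend"},
        ports=[client.V1ServicePort(port=8080, target_port=8080)],
        type="ClusterIP",          # reachable only inside the cluster
    ),
)

# External service: a single point of access for end users hitting the frontend.
frontend_svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="frontend"),
    spec=client.V1ServiceSpec(
        selector={"app": "frontend"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="LoadBalancer",       # exposed to the outside world
    ),
)

core.create_namespaced_service(namespace="default", body=backend_svc)
core.create_namespaced_service(namespace="default", body=frontend_svc)
```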
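And the second sketch, for the insight piece: a tiny, again hypothetical, way for an operations team to check what the platform is doing with the backend pods. If one goes down, the Deployment controller replaces it automatically and the replacement shows up in this list behind the same service.

```python
# Minimal sketch: list the backend pods and their current phase.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod(namespace="default", label_selector="app=backend")
for pod in pods.items:
    # Phase is typically Running or Pending; a crashed pod is replaced
    # automatically so the desired replica count stays satisfied.
    print(pod.metadata.name, pod.status.phase)
```

Deeper insight - request rates, tracing, the service mesh view - is where tools like Prometheus, Istio, and Kiali plug in, as described in the transcript.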
The operations team has identified, by taking a look at the requests and tracing them through the different services, exactly where the issue is. This is a simple example of how you can use something like Istio and Kiali (a service mesh visualization capability) to gain insight into running services.

Orchestration platforms have a number of capabilities that they need to support, and this is why we're seeing the growth of roles like SREs (Site Reliability Engineers): there are a lot of things they need to concern themselves with when running an application in production. Developers, on the other hand, see a very singular view of the world, where they're focusing on the things within the containers themselves.

Thanks for joining me for this quick overview of container orchestration technology. If you liked this video, please be sure to drop a comment below or leave us any feedback and we'll get back to you. Be sure to subscribe, and stay tuned for more videos in the future. Thank you.
Info
Channel: IBM Technology
Views: 102,618
Keywords: containers, containerization, vendor lock in, kubernetes, appmodernization, cloudapps, ibmcloud, container orchestration, what is container orchestration?, Istio, Knative, open source, microservices, database, orchestration layer, master node, worker node, applications, operating system, deploying an application, deploying, VMs, networking, load balancers, pods, clusters, service mesh, prometheus, Kiali
Id: kBF6Bvth0zw
Length: 9min 0sec (540 seconds)
Published: Wed Apr 17 2019