Red Hat OpenShift Architecture and Strategy

Transcript
Hi, my name is Derek and I'm on the OpenShift engineering team; I'm excited to talk to you about the OpenShift architecture and strategy. To introduce myself a little bit: I'm a long-standing member of the Kubernetes community — I've been working in the Kubernetes space for well over five years. As Chris talked about, Red Hat was one of the earliest believers in the Kubernetes technology; I think it's fair to say I'm probably the first person outside of Google to ever install Kubernetes, back when it was a much smaller project than it is today. Since then I've had the luxury of helping grow the community, the ecosystem, and our OpenShift product to support enterprise customer needs. Today I'm privileged enough to serve as a Kubernetes steering committee member, as well as to co-chair two of the upstream special interest groups, architecture and node. I'm excited to have a chance to speak to the panel today — feel free to pause me and ask a question and I'll do my best to answer — but let's dive right in.

So as Chris talked about, when we think about OpenShift as a container platform, it really comes in multiple layers. We take the upstream innovation that's happening in the Kubernetes community, marry it with what we view as our best-in-breed Linux distribution in RHEL, and augment it with a set of core cluster services that are needed to facilitate any viable Kubernetes environment in production — that might be a networking choice, your monitoring stack, your ingress choice — and that forms the basis of what we call our OpenShift Kubernetes Engine. Then on top of that there are a lot of ecosystem projects flourishing around Kubernetes that people are probably familiar with, which I'll touch on briefly, that augment the platform with additional platform capabilities, application development capabilities, or higher-order developer services. So while many folks can be productive with Kubernetes at the lowest-level primitives, we recognize there's a broad ecosystem of developers
who would like to engage at higher levels of the infrastructure, above the stack, across the set of platforms we make available. Our goal, of course, is to make OpenShift a pleasant experience across physical, virtual, private, public, and managed cloud environments, and I look forward to demonstrating some of that today. Finally, afterwards Michael Elder, a colleague of mine, is going to be talking about what we're doing around multi-cluster management and adding policy across all the clusters that might be in your environment. So there's a lot to cover.

OpenShift 4 is the current version of our Kubernetes distribution. When we look to build our distribution, obviously, as Chris talked about, Kubernetes is the heart of that orchestration tier across multiple hosts, and we marry that with an immutable container operating system, which we call Red Hat Enterprise Linux CoreOS — we'll talk about that in more detail. Then we look across the broader set of open source projects and innovations and ask which things we can stitch together to provide a stable platform for enterprises: to reduce their risk, reduce their costs, and give them a platform to accelerate their innovation. There's a sampling of icons here, and obviously innovations come and go across the ecosystem, so what we may or may not choose to ship on top of the platform could change, but at its heart Linux and Kubernetes are foundational to OpenShift.

When you think about Red Hat OpenShift, you can think of it as one platform available in many consumption models, and I'm going to try to give a quick survey of them in the demo. We offer OpenShift as a managed service across all of the major public clouds — Amazon, Microsoft, Google, and IBM — as well as a platform that you can run in your data center and manage yourself directly, which we refer to as the self-managed OpenShift Container Platform. Now, no matter how you consume OpenShift, whether through a managed
service model or by managing it directly yourself, the end-user and cluster-admin experience is the same, and the set of components that come in that distribution is the same.

Early in the days of Kubernetes and our own journey with our user community, it amazed many — and still humbles me — that Kubernetes has become as successful a project as it is. We spent the first few years of my own personal experience in the OpenShift community trying to demonstrate that Kubernetes was the next platform for the industry. I think, as Chris alluded, it's clear that that has been established as the future direction. But when I look at the next wave of innovation happening around Kubernetes, it's more and more about how we can help customers be successful on the platform — no matter where you run it: in a managed setting, in your data center, in the cloud, or at the edge — to make sure that your app is actually successful in production. There's no sense in having all these computers and this rich orchestration engine if at the end of the day it's not a stable platform you can trust your business-critical apps to run on. So over the last few years Red Hat has made a deep investment in some back-end services, which I'll show, that allow our clusters to have a connected relationship back to Red Hat, providing feedback loops, image content distribution, and a very rich policy-driven update engine. All of this is aggregated together in what we call the OpenShift Cluster Manager, a SaaS front end that allows you to install, register, and manage a set of connected OpenShift clusters.

With that, I think we've had a lot of talking; I'd like to switch to a demo and start showing how OpenShift runs everywhere. What you see right here is the entryway to what we call cloud.redhat.com, and I'm going to dive into a couple of the services to give you an idea of this back-end service control plane that I've talked
about and how it intersects with the end-user cluster. Right here in the middle is our Red Hat OpenShift Cluster Manager. This is a service we introduced over a year ago when we first rolled out our version 4 distribution, and like all our back-end services it continues to get updated and enhanced monthly. I'll drill in here into the Cluster Manager, and you can see my Red Hat account has a number of clusters associated with it.

[Audience] Sorry, I didn't understand — cloud.redhat.com, is it a SaaS portal for management of your products, or what is it?

[Derek] cloud.redhat.com is a SaaS portal that provides a connected experience back to Red Hat to amplify and improve your management experience of Red Hat products, including Enterprise Linux and OpenShift. It's a front-end portal, and you'll see many new capabilities get added to it over time. What I want to zoom in on right now is the OpenShift Cluster Manager itself: if you choose to connect your clusters back to Red Hat, or if you consume OpenShift through a managed service from Red Hat, you get an integrated experience here. In the subsequent presentation from my colleague Michael, you'll hear about how we take some of these innovations from the SaaS offering and bring them down for you to run elsewhere.

What you're seeing right here is a list of OpenShift clusters that I have connected back to Red Hat. Some of these clusters are running in a managed offering we call OpenShift Dedicated — you'll see the type here is OSD — and some of these clusters I've spun up myself and am running directly in various parts of Amazon. For each of those clusters I can see when they were created, the version of the distribution they're running, and the cloud they operate on, as well as a general overview of the health of that cluster. I'll drill down into a couple of these clusters — first into an OpenShift
Dedicated cluster running on Google Cloud Platform. I can get an overview of the usage of that cluster — how much CPU and memory is being consumed — and general details about the cluster: how it's deployed (multi-AZ or single-zone), how many load balancers it might be consuming, and so on. Since this is an OpenShift Dedicated cluster, I want to be able to see the actions Red Hat SREs take when they operate on that cluster on my behalf, so I get a very rich history of the changes made to that cluster. In addition, I can drill down in here and see if there are any issues with this cluster. One of the capabilities I'm excited to talk about is how deeply we've invested in cluster monitoring, and how we engage in upstream projects like Prometheus and Thanos to drive this back-end monitoring platform — you'll see more of that shortly. In general I can see that there are no alerts firing on this cluster, and all the platform operators that enrich Kubernetes with core additional capabilities are running and healthy. And I don't just get this read-only view — even though it's an OpenShift Dedicated cluster, I can also drill in and go into the cluster-admin experience. So now I can log into this cluster and see the actual native administrative console, which is the same admin console I would see if I were running OpenShift in any of the other environments or managed service environments. I'm now operating in Kubernetes and experiencing Kubernetes in its fullness.

If I drill back out to the Cluster Manager, it shows us some other clusters. Two of the clusters I want to zoom in on show what happens if you run OpenShift yourself, rather than consuming it through one of our managed offerings. I have a prod cluster, which is running 4.2.29, and a developer cluster I created for this demo, which is actually running one of our CoreOS
nightlies. As Chris talked about, everything at Red Hat is open source — you can download everything we build in the open — and a lot of our ISV partners, and actually clients too, like to engage in that model and give us feedback, which is really exciting. I'll drill into this prod cluster, and just like the dedicated cluster where Red Hat was operating on my behalf, this cluster — whether it's running on bare metal, in virtualization, or in another cloud that Red Hat isn't directly engaged with — can send data back to Red Hat to tell me the health of that cluster in a singular view. Just like in the dedicated cluster I showed you earlier, you can see consumption details and the health of the cluster, as well as drill into monitoring and see if alerts are firing. The idea here is that we want to make it very visible to admins how to be successful on the platform.

One of the alerts firing here is interesting: it's telling me that Alertmanager is not actually configured to send alerts to my operations team when the cluster has a problem. That's important — if I look at the dedicated cluster, Red Hat SREs are configured to know when that cluster is down, but if you're running the cluster yourself, you want to know right away when there's an issue in your environment, whether by sending alerts to Slack or email or however your paging system works. So this is telling me, as a user, that I might want to drill in and fix something here, and in one click I can go from that experience into this cluster — my prod environment — get detail about Alertmanager, and say: oh, I really should configure that to send alerts to my operations team. To do that, it's as simple as going into the administration panel, choosing Cluster Settings, and then configuring Alertmanager in the global configuration for the cluster. When I drill into these cluster settings, there's some other information I can see, like a history of this cluster.
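To make the Alertmanager step concrete, here is a minimal sketch of a routing configuration that pages an operations team via Slack — the webhook URL and channel are placeholders, and in OpenShift this YAML is typically stored in the `alertmanager-main` secret in the `openshift-monitoring` namespace:

```yaml
# alertmanager.yaml — minimal sketch; webhook URL and channel are placeholders
global:
  resolve_timeout: 5m
route:
  receiver: ops-team          # default receiver for all alerts
  group_by: [alertname]
receivers:
- name: ops-team
  slack_configs:
  - api_url: https://hooks.slack.com/services/REPLACE/ME
    channel: '#cluster-alerts'
```

Email or PagerDuty receivers follow the same shape; the point of the demo is simply that this receiver block is empty by default on a self-managed cluster, which is what the firing alert is warning about.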
I installed this cluster on the twenty-seventh in preparation, and because all good demos upgrade in production, one of the things I hope to drill into deeply today is how this relationship between Red Hat and our users helps us improve not just OpenShift but all the community projects — to find trends and anomalies, ultimately improve the open source, improve our customers' success, and make upgrades safe. One of the capabilities you have in OpenShift now is that when you choose to upgrade your cluster, you subscribe to a channel, and that expresses your desired risk and change cadence. In this case my cluster is subscribed to the stable-4.2 channel and is currently running 4.2.29. We aspire to send updates out every week with the latest security fixes and patches for all the various upstream components we ship, and make those available to our users so they can be successful and secure in production even when they operate the platform themselves.

What I want to do here is upgrade this cluster to our 4.3 release, and I do that by changing the channel to stable-4.3. I'm very quickly notified that there's an update available: Red Hat can see that you're running 4.2.29 if you're connected, and it suggests a new upgrade target for me. This upgrade-target suggestion is informed by the telemetry data we collect and aggregate, both from our own managed and dedicated services and from what our customers choose to provide to us via remote health monitoring. So I'm going to upgrade to 4.3.13, and in one click the platform will start to progressively roll out the update to the control plane and my hosts. If I go back into the OpenShift Cluster Manager and refresh — during the course of this demo it might lag thirty seconds to two minutes — you'll see that this prod cluster will show in the central dashboard that it's currently updating between 4.2.29 and 4.3.13.

[Audience] Excuse me, I have a couple of questions. One is: okay, so you can drill down and eventually set up a policy to upgrade the cluster, and it does everything automatically. What happens if something goes wrong and I need to roll back — some sort of incompatibility somewhere, or whatever? Does it make any check before doing this? How does it know that my application runs perfectly on the new version, for example?

[Derek] That's an excellent question; I'm going to try to answer it in two parts. First, when you upgrade Kubernetes from one minor level to a new minor level — that's 1.13 to 1.14, or 1.14 to 1.15 — Kubernetes traditionally does not support rolling back to the previous minor level. So what we might advise our customers to do is subscribe a new cluster to a fast-4.3 or fast-4.4 channel and do some smoke testing to make sure things will work for your environment, so that option isn't shut off from you. Now, if there is an issue, one of the things our upgrade intelligence is able to do is safely stop the rollout when it detects a problem — I'll talk about that a little later in the presentation — in order to reduce damage. For example, if I'm upgrading from 4.2.29 to 4.2.30, you should be able to safely go back to the prior z-stream release. If you're upgrading from a 4.2 to a 4.3 and you're revving the Kubernetes level, we do automated testing to validate that that can work, but inherently that is a situation where we would encourage a customer who encountered a problem to reach out to support. What I'm hoping to talk about a little later is how we can more proactively identify those issues and make users successful.

The one thing I do want to call out, though, is that when you upgrade from 4.2 to 4.3, you're upgrading the entire platform.
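The channel switch and upgrade-target selection in the demo are driven declaratively through the cluster's ClusterVersion resource. A rough sketch of the desired state after the change (versions match the demo; field names per the config.openshift.io API):

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  # the channel expresses desired risk and change cadence
  channel: stable-4.3
  desiredUpdate:
    version: 4.3.13   # the suggested upgrade target from the update service
```

The cluster version operator observes this desired state and progressively rolls the control plane and hosts toward it, which is what the dashboard reports as "updating between 4.2.29 and 4.3.13".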
Your logging stack, your monitoring stack, your operating system, the Kubernetes core — everything upgrades as an atomic unit. So back at Red Hat, when we build this distribution, we can certify and know: okay, you're running 4.3.13, and this is the version of RHEL, the kubelet, and the container runtime that are all working together. In my past experience with earlier iterations of OpenShift, we often let customers update the operating system at a different cadence than the rest of the surrounding platform, or update the container runtime at a different cadence than the rest of the platform. As an engineer working on the product and trying to make customers successful, that's a nightmare: trying to gather enough information to know, oh, you're running this version of the kernel with this version of the container runtime, and now a kubelet lifecycle action is causing a leak — that's a really, really hard thing to diagnose.

[Audience] So the other reason you consider it an atomic unit is that you're testing not just Kubernetes but the operating system, all the way down?

[Derek] Exactly.

[Audience] Can I manage all these clusters as a federation of clusters — apply operations in bulk to many of them? If I have a hundred clusters, how can I manage my fleet?

[Derek] What I'm showing you right here is what you can do on a per-cluster basis. Obviously, when we operate clusters at scale, we have the technology and capability to do that across a fleet. I'm going to let Michael Elder, who's giving a presentation afterwards, show and discuss some of that, and I'll touch on it when we get to the technical details of how you can drive configuration change across a set of clusters.

[Audience] Quick question for you: where all can you create clusters right now?

[Derek] That's an excellent question. First, from this experience, when you choose to create a cluster, you can create a cluster procured from Red Hat, running as a managed
service on both Amazon and Google. If I drill in here, I get a nice experience asking how I want to be billed, picking the fleet of machines I want to run and how much storage I want; I click create cluster, and a few minutes later I get a cluster. That experience is the same whether you're on Google or on Amazon. If you choose to create a cluster and operate it yourself, you can run our OpenShift Container Platform, and it's available on a wide set of infrastructure providers — all the major clouds, all the major virtualization platforms, as well as bare metal. So I think we've done a good job in OpenShift of making Kubernetes as accessible as possible to the widest set of infrastructures that we see our customers running today.

[Audience] Can you add vCloud Director, instead of just vSphere?

[Derek] Right now when we deploy on vSphere, we target vSphere directly — we deploy to vSphere 6.5 and 6.7, and we've done work to ensure that nothing is broken when running on 7. As for future product plans and what set of VMware prerequisites we'll support, we're iterating and engaging on that.

[Audience] I think the interest is this: the vCenter APIs are probably there in the enterprise, but as cloud providers we're utilizing vCloud Director, and you really can't interact with the vCenter APIs without breaking vCloud Director. So without going up the stack, it's a bit of a challenge for us to deploy this product in our cloud environments.

[Derek] Okay — I'd be interested in following up with more details afterwards, but today we have a number of customers successfully running OpenShift on vSphere, and we continue to watch the evolution of that platform and see how we can best fit.

[Audience] Derek, where is IBM in this picture?

[Derek] Perfect — so you see IBM right here, where we support running OpenShift on IBM Z and LinuxONE. What I
want to show you here is IBM Cloud. When you want to procure OpenShift from IBM, it's actually natively integrated into the IBM Cloud experience. Right here I'm logged into cloud.ibm.com, I have a dashboard for OpenShift clusters, and you can see I'm running a 4.3.12 OpenShift cluster. I can drill in, very quickly get information about this cluster, and drop into the web console — that same cluster administration experience. When you procure OpenShift through IBM Cloud, you're getting that procurement through native billing from IBM Cloud. This is one of the nice things we have at Red Hat with our deep relationships with various managed cloud providers: we first explored that relationship model with Microsoft, prior to the acquisition, and subsequent to being included in the broader IBM family we've been working together to make sure OpenShift has an awesome experience on the IBM Cloud platform. So right here you can see OpenShift running just fine today at 4.3.12 on IBM Cloud.

[Audience] Hey, Derek — can I ask one question real quick? We don't need to talk about it now, that's fine, and if my question is ridiculous, that's fine as well. What I want to be clear on is this: when we're talking about this OpenShift cluster and all this management — are we talking about the management plane of Kubernetes? That is, the developers would not be using this Kubernetes cluster to do application deployments; this is just a management plane, right?

[Derek] Yes — let me address your question directly. What I'm trying to show here is that while OpenShift is one platform, you can procure it in many modalities, and each modality might have its own unique management plane for procurement. So I'll
talk in a little bit of detail about how the OpenShift Dedicated management plane works, and how we're going to make that available outside of Red Hat. But what I hope to highlight here is that when you saw me getting a cluster through the IBM portal — or, in this case, through OpenShift natively integrated with Azure (you might have seen an announcement yesterday that OpenShift 4.3 is now available on Azure) — I can procure OpenShift with native integration into Azure billing services, jointly managed and jointly engineered between Red Hat and Microsoft, get into that cluster, and as a developer be successful working on it. So no matter how you procure the platform, no matter which service plane you used to procure OpenShift, the end-user experience of actually touching and operating the cluster is the same.

The last point I'll touch on is the native CLI experience. If you're procuring OpenShift from Azure, we have deep integration with the az CLI — there's an aro command that plugs into the az CLI, and it tells me: you've installed OpenShift on Azure, and this is how you can access it. In the IBM Cloud experience I have a similar command to list my clusters. The service control plane that users use to provision and deprovision clusters is an important part of OpenShift, and that's an area we continue to invest in and will make available. But what I hope to highlight here — and then I'll get to your next question as we drill into more clusters in detail — is that OpenShift is generally consumable everywhere, and whether you choose to get it from Red Hat or work with a managed provider like Azure or IBM, you get some benefits like natively integrated billing, but the end experience — the distribution of what is in that cluster and how it works — is the same.

[Audience] I have the
question, Derek. What you showed was all these clusters being managed — can you also manage the clusters provisioned in Azure and the IBM cluster through that OpenShift Cluster Manager? Or, because they were provisioned through a different service, do you have to use the tooling that exists in IBM Cloud or Azure?

[Derek] If you choose to deploy a cluster on Azure yourself, not through the managed service — which is a perfectly fine thing to do — it's your choice to connect that cluster back to cloud.redhat.com, and you'll see it integrated in that OpenShift Cluster Manager. If you procure that cluster through the managed service in ARO, then for some of the things I'll talk about around compliance and certification, not all of those clusters will connect back to cloud.redhat.com today — we can talk about that in more detail afterwards. But in general, if you want to run a cluster on Azure, on IBM, or on AWS and manage it yourself, it's always your choice whether to connect it back to Red Hat. If you're procuring the cluster through a managed service partner that might have sovereign cloud constraints, where data can't leave that cloud, then we try to run that same capability natively in the Azure environment, as an example. And with the new capabilities you'll hear about after my session, you can choose to stitch all those clusters together in a service plane of your choice by running an agent in each cluster to make visibility available, whether or not you got it from a managed service. Hopefully that addresses your question.

[Audience] Yes, thank you.

[Derek] A little bit of detail here: OpenShift has been run as a service for many years at Red Hat, so I want to talk about the OpenShift Dedicated service control plane and the basic pattern it uses. It's really built around the core Kubernetes operator pattern: declarative state describing what you want your cluster to be, and
a reconciling controller that says: make it so. The OpenShift Cluster Manager I showed you, when you procure a cluster, is at the end of the day interfacing with what's called a ClusterDeployment and a set of MachinePools. It hands those resources to a project we call Hive, a Kubernetes operator that says: okay, go create a cluster of the OpenShift distribution as specified. Then, if you want to do bulk actions across a set of those clusters, the Hive project includes a capability called SyncSets, which is basically a bag-of-YAML delivery mechanism: you can say, deliver this configuration to these clusters, either by name or by label selector, and at the end that configuration is read by the local cluster operators and drives the change on each cluster. So I think Enrico asked earlier whether you can do management across many clusters — yes, you definitely can, and that's supported in the API model we expose. What I'm excited to share here is that this Hive project — the backbone of the OpenShift service control plane for our dedicated offering, which continues to get enriched every day — is going to be made available for our users to run themselves, in the advanced Kubernetes management solution you'll hear about from Michael afterwards.

Now, when you run a managed service with a cloud partner like Azure and you deeply integrate with that partner, that same pattern of declarative state and reconciliation applies everywhere; but when we have a chance to work with a partner like Azure, we can work very deeply in the back end of that cloud, so we can ensure that all the relevant security and compliance standards apply to ARO clusters as they would for any other Azure service. So if you want HIPAA or PCI-DSS or any of these appropriate standards, you can get that from a service control plane we host on Azure, which is also open source.
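A SyncSet like the one just described might look roughly like this sketch — the cluster, namespace, and ConfigMap names here are hypothetical, and targeting by label selector uses the companion SelectorSyncSet resource instead:

```yaml
apiVersion: hive.openshift.io/v1
kind: SyncSet
metadata:
  name: ops-config            # hypothetical name
  namespace: prod-cluster     # the namespace holding the target ClusterDeployment
spec:
  clusterDeploymentRefs:
  - name: prod-cluster        # explicit target cluster(s)
  resources:                  # the "bag of YAML" to deliver
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: team-settings
      namespace: openshift-config
    data:
      example.key: example-value
```

Hive reconciles these resources onto each referenced cluster, which is what makes bulk configuration across a fleet possible from a single control plane.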
But at the end of the day, just to reiterate: at that point, as a user, you get your cluster and you're on your way to actually working with your app on that cluster — you have a common experience afterwards.

So let me talk a little about why OpenShift. As I said, I've been involved in the Kubernetes project and community for many, many years, and Kubernetes has a defined beginning and end point: it's a standard container orchestrator with a rich set of primitives to build orchestration concepts on top of. But the Kubernetes project by itself is not enough to be successful in production. You have to have an operating system, a container runtime, DNS, a load balancer, an ingress solution — at the end of the day you have to make choices, stitch these things together, and test them together as a distribution. While many of our users can make choices at any one of these tiers, we provide an opinionated choice in our out-of-the-box distribution on what we recommend for each item. That doesn't lock a user into that choice forever; it just means that where users choose to deviate, they take on the burden of doing that integration testing. In the end, when you procure OpenShift, you're getting a distribution that is deeply integrated with the upstream Kubernetes community — we know and understand what's going on in that community.

Some interesting stats here, from a recent architecture survey: people who just procure Kubernetes from the open source community often make mistakes — no one's perfect. One of the things that concerns me is that many users report running alpha APIs in production; a lot of beta APIs in Kubernetes, independent of their maturity, are on by default; and there isn't a lot of control over which feature gates users may or may not choose to turn on. So at Red Hat we look at the state of that community and ask what we can do to provide
guardrails — to provide that stable platform and reduce the customer's risk, but not block innovation. As a matter of policy, we disable all alpha APIs in OpenShift, because they have an unclear future. Select beta APIs we may also choose to disable, because certain things are stuck in a permanent beta status and may never actually promote to GA; where we understand those changes are happening, we might choose to do that. But we don't want to stop users from seeing the newest and greatest stuff, so we make such things available as what we call a tech preview — and when you enable that, your cluster's upgrade capability is disabled. This understanding of the state of Kubernetes — and of all the other projects you have to assemble into a distribution — is magnified when you start layering in things like a service mesh, a monitoring platform, or a functions platform. One of the things I'm proud of at Red Hat is that we engage deeply in all of the open source upstream communities to make that same analysis for every project we choose to bring into the distribution. At the end of the day we make choices on the products we ship and how we make them available to our customers, and those choices are informed by our deep engagement in those communities.

How do we do all this successfully? When Red Hat acquired CoreOS, and the CoreOS and Red Hat teams were getting together, there was an emerging pattern being codified in the Kubernetes ecosystem around the concept of an operator, and it's really core to how Kubernetes works: you have a configuration where the user says what they'd like to happen, and an operator reads that configuration, sees a resource it should manage, reconciles it, observes its state, and reports that information back. So when you install OpenShift, you get a core operator called the cluster version operator, which installs a set of secondary platform operators.
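As one concrete example of that declarative configuration surface, the ingress operator watches an IngressController resource; a minimal sketch (the replica count here is illustrative, not a recommendation):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3   # desired state; the operator reconciles the actual router pods to match
```

The admin edits only this small desired-state document; the platform operator owns the details of rolling out, monitoring, and repairing the underlying components.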
and feeding and nurturing individual pillars within a Kubernetes distribution, making sure they're successful and providing a configuration surface for them. So when you want to interact with ingress, or with monitoring, or you want to enrich the platform by adding a service mesh, we provide a very clear, prescriptive configuration surface for doing that. And when you drill into that cluster and want to see the actual health and state of all those operators, we provide a single dashboard that shows their status, what version they're running, and how things are going.

As we were building on this pattern, we saw an emerging trend in Kubernetes: more and more innovation was happening outside the core project. DNS solutions, networking solutions, CSI solutions, ingress: a lot of these innovations are happening in and around Kubernetes, and in order to be successful you have to bring them into the platform and enrich it with them. Our platform operator pattern is the pattern we follow, both at Red Hat and as we engage our ISV community, to make those innovations accessible to OpenShift. If, for example, we bring on support for a new CSI driver, you'll see an operator that knows how to manage and lifecycle that CSI driver.

One of the things I want to zoom in on here is our cluster monitoring stack. To me this is super critical: we build these platforms for apps to be successful, and apps are only successful if you know they're successful, by watching them. Every OpenShift cluster by default has a rich cluster monitoring distribution, built on Prometheus, Alertmanager, and another project called Thanos, to give you a rich, queryable surface over all the metrics in the cluster. If you choose as a customer to connect that cluster back to Red Hat, we have a component called the telemetry client which sends that data back. When I looked at the OpenShift Cluster Manager to
know if alerts were firing and to know the levels and states of those platform operators, the view you saw in OpenShift Cluster Manager was the view you see as an end user. Now, of course, this data comes back anonymized, but Red Hat is able to see trends and observe issues. For example, if we see customers deploying a particular operator, maybe a service mesh operator or a database operator, at a particular level of Kubernetes, and we see issues or anomalies, our reliability engineers are able to observe this in rather rich dashboarding and start to do more analysis. There might be an issue that appears when three or four different components are integrated together on a cluster, one that none of those disparate communities understood or could have realized on their own, and our goal at Red Hat is ultimately to go improve all those communities by finding these issues and making the world better for our users.

In this case this is a happy cluster: it has high availability, and we know its etcd object count is good. Sometimes you get unhappy clusters, so what do we do then? Oftentimes you see a weird thing and ask: what is going on? What has been configured on that cluster? How can I do a deeper analysis? One of the reasons we've pivoted so hard toward this operator pattern is that it provides a standard mechanism for all these higher-order services to expose their configuration, which we can build higher-order insights around. The other component we pair our monitoring solution with is the Insights Operator, which can send anonymized configuration data to supplement that telemetry data. When we see trends where errors might be occurring, we can often proactively help our customers address a problem before they ever knew they had a problem affecting their production app.

Now, in the case mentioned earlier where a customer might be unsuccessful in an upgrade, what do you do? Oftentimes in the last year Red Hat has been able to identify that a customer had a problem before the customer identified it themselves. I'm super proud of our teams for being able to do that. And where we did need help from that customer, we enriched our capability here with a must-gather tool that can do deeper log analysis, so support can ultimately converge on a fix as quickly as possible.

When I had that cluster and I drove an upgrade, it was phoning home to our OpenShift policy engine and asking: given what we see fleet-wide, whether you're running Kubernetes at the edge, in Amazon, in GCP, or on your virtualized infrastructure, how do I recommend a safe update for you as the end customer? This is where the cluster phones home to our connected services, which can make more intelligent choices about how you can safely navigate updates. We exposed an update protocol called Cincinnati: as I showed you earlier, you can subscribe a cluster to a channel, which expresses your desired risk and cadence, and then you can choose when to actively roll that upgrade out. At the end of the day, the idea is that by taking this information and working closely with customers to make them successful with their apps in production, we hope to make that one-click, cluster-wide update experience as safe as possible.

As I talked about earlier, and I know I'm running late on time here, when you update the cluster you are also updating the operating system. RHEL CoreOS is an immutable host that's versioned with OpenShift. It's derived from RHEL content, so when you run RHEL CoreOS you're running RHEL 8, just immutable, and it has the kubelet and the container runtime for that level of OpenShift within it. When you update that operating system, it knows how to update itself in line with that version of OpenShift. So if there's a problem between RHEL, cgroups, and the
kubelet, we can roll out a patch that updates the whole platform as an atomic unit. What's really nice about this update system is that it works great no matter where you run it. You might be running in the cloud or on some virtualization infrastructure, where the ability to update in place is nice; but you might also be running on bare metal, or in an edge environment where you can't just throw away a computer and bring back a new one, and RHEL CoreOS's ability to upgrade in place is really awesome in those environments.

We layer that immutable host with additional security controls, something we've always been proud of in OpenShift. In the early days of Kubernetes and containers, people just downloaded random software and ran it as root. We added security features on top of Kubernetes, which we call security context constraints, that basically let you very strictly control the permissions any user has when running a container on the platform. This is important for a lot of our financial services customers, telco customers, and so on, who want to control and reduce risk in their environment without prohibiting innovation. Coupled with SELinux on that operating system, we think we provide a relatively strong security boundary layered between the app and the host.

Finally, and I know I'm over time here, we touched briefly on the Operator Framework, which is our mechanism for ISVs to bring new content into the cluster. Over 260 ISV partners are integrating with this in OperatorHub, with solutions both from the community as well as certified offerings; right before joining here I saw it was over 280, so it's awesome to see this continue to grow. Not all innovation around Kubernetes comes from Red Hat; we recognize that and want to allow innovation from partners to happen. One thing that's really cool when I work with clients is that a lot of clients are going and writing
their own operators. A particular client might choose to write a Redis operator, or an operator that meets their own operational needs, and they can make that operator available in their own catalogue and control which software their consumers can use, just like we can. It's been really positively received. And just as we offer upgrade channels that let you control risk and your desired rate of change, you can do the same on any individual ISV-supplied or customer-supplied operator as well. What you're seeing here is the Red Hat Service Mesh providing a stable update channel, and you can choose whether and how updates happen, either automatically or manually.

I didn't get the chance to talk about this too deeply, but just to touch on the developer front: we offer a breadth of developer tools, whether that's Helm, operators, or traditional OpenShift templates, to let users be successful on that cluster, which at the end of the day is what's most important. In addition, there's a lot of exciting innovation happening around serverless development patterns on top of Kubernetes, and OpenShift Serverless provides that capability as well, integrated natively into our console.

So just to recap: here at OpenShift we want to provide a platform that gives our customers a stable base across multiple infrastructures. We work hard to support that full stack, from the operating system to Kubernetes to the app-dev tiers. At the end of the day we really want our customers' apps to be successful in production, so we work in partnership with each customer, building a rather rich telemetry and insights pipeline to proactively find problems before our customers do, and ultimately keep apps running healthy and stable.
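The API-maturity policy described in the talk (alpha APIs off, stuck-in-beta APIs selectively off, GA APIs on) can be sketched as follows. The version-string convention (`v1alpha1`, `v2beta3`, `v1`) is real Kubernetes practice, but the policy logic here is a simplified illustration, not OpenShift's actual implementation.

```python
import re

def maturity(api_version: str) -> str:
    """Classify a Kubernetes-style API version string by maturity level."""
    if re.fullmatch(r"v\d+alpha\d+", api_version):
        return "alpha"
    if re.fullmatch(r"v\d+beta\d+", api_version):
        return "beta"
    return "ga"

def enabled(api_version: str, beta_denylist: set = frozenset()) -> bool:
    """Apply an OpenShift-like policy: alpha always off, beta off only if
    known to be stuck, GA always on."""
    level = maturity(api_version)
    if level == "alpha":
        return False                              # unclear future: always disabled
    if level == "beta":
        return api_version not in beta_denylist   # disabled only if permanently stuck
    return True                                   # GA: always enabled

print(enabled("v1alpha1"), enabled("v1beta1"), enabled("v1"))  # → False True True
```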
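The operator pattern the talk keeps returning to (read desired configuration, observe actual state, act to converge) reduces to a reconcile loop. This is a minimal simulation in Python; the resource names and dict-based "API" are invented for illustration and bear no relation to the real controller machinery.

```python
def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")      # desired but missing
        elif observed[name] != spec:
            actions.append(f"update {name}")      # present but drifted
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")      # present but no longer desired
    return actions

desired = {"ingress-controller": {"replicas": 2}, "dns": {"replicas": 3}}
observed = {"ingress-controller": {"replicas": 1}, "stale-operand": {"replicas": 1}}
print(reconcile(desired, observed))
```

A real operator runs this logic continuously, re-observing state after every pass, which is why a converged cluster reports an empty action list.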
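The talk stresses that telemetry and Insights data come back anonymized. One common way to achieve that while keeping records correlatable is to one-way hash identifying fields before they leave the cluster. The sketch below illustrates only the idea; the field names and the choice of what counts as identifying are invented, and this is not the actual telemetry client or Insights Operator implementation.

```python
import hashlib

# Fields treated as identifying in this toy example.
IDENTIFYING_FIELDS = {"cluster_name", "base_domain"}

def anonymize(record: dict) -> dict:
    """Hash identifying fields so trends stay visible without exposing values."""
    out = {}
    for key, value in record.items():
        if key in IDENTIFYING_FIELDS:
            # A stable one-way hash lets the backend correlate reports
            # from the same cluster without learning the original value.
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

sample = {"cluster_name": "prod-east", "version": "4.4.0", "alerts_firing": 2}
print(anonymize(sample))
```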
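The Cincinnati update protocol mentioned in the talk models updates as a directed graph of versions, where edges are tested, recommended update paths, and the cluster asks which targets are reachable from its current version. A toy rendering of that idea, with a made-up graph (the real protocol serves per-channel JSON graphs):

```python
from collections import deque

# Invented example graph: version -> list of recommended next versions.
GRAPH = {
    "4.3.0": ["4.3.1"],
    "4.3.1": ["4.3.2", "4.4.0"],
    "4.3.2": ["4.4.0"],
    "4.4.0": [],
}

def reachable_updates(current: str) -> set:
    """All versions reachable from `current` by following recommended edges."""
    seen, queue = set(), deque([current])
    while queue:
        version = queue.popleft()
        for nxt in GRAPH.get(version, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable_updates("4.3.0")))
```

Subscribing to a channel effectively swaps in a different graph, which is how the channel expresses the risk and cadence the cluster administrator is willing to accept.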
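Security context constraints, as described in the talk, strictly control the permissions a container may request. As a rough illustration only (not the actual SecurityContextConstraints schema or admission logic, and with simplified stand-in field names), a "restricted"-style policy check might look like:

```python
# Toy policy resembling a restricted SCC: no privileged containers,
# no root user, no extra Linux capabilities.
RESTRICTED_POLICY = {
    "allow_privileged": False,
    "allow_run_as_root": False,
    "allowed_capabilities": set(),
}

def admit(container: dict, policy: dict = RESTRICTED_POLICY):
    """Return (admitted, reason) for a simplified container spec."""
    if container.get("privileged") and not policy["allow_privileged"]:
        return False, "privileged containers are not allowed"
    if container.get("run_as_user") == 0 and not policy["allow_run_as_root"]:
        return False, "running as root (UID 0) is not allowed"
    extra = set(container.get("capabilities", [])) - policy["allowed_capabilities"]
    if extra:
        return False, f"capabilities not allowed: {sorted(extra)}"
    return True, "admitted"

print(admit({"run_as_user": 1001}))
print(admit({"privileged": True}))
```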
Info
Channel: Tech Field Day
Views: 8,854
Rating: 4.9595962 out of 5
Id: WsvKhagxoPc
Length: 43min 2sec (2582 seconds)
Published: Thu Apr 30 2020