Red Hat Virtualization with OpenShift


Captions
So, my name is Andrew Sullivan, I'm a technical marketing manager with the Red Hat cloud platforms business unit, and joining me in a few minutes will be Fabian Deutsch, an engineering manager who focuses on the Red Hat virtualization portfolio. I want to take a few minutes to first thank our delegates. I really do appreciate all of the interactivity you've had throughout the day so far, and I definitely want that to continue. Not that I need to ask you to participate or interrupt with questions, but I want to make sure we keep that going, because I hope that, even though it's not DevOps theory and so on, this will be an interesting session for you.

You're probably thinking to yourself: Andrew, why are you here to talk about virtualization with OpenShift? You're probably thinking, we know this story: we deploy Kubernetes, we deploy OpenShift, into virtual machines of some shape or form, whether it's on premises or in the cloud, and there's not a lot more to it. What we want to talk about in this session is that there is a lot more to that story, and there are some interesting things changing in order to unify containers and virtual machines.

It starts, as any good story does, at the beginning. My career over the last 15 years has spanned a lot of virtualization. I started back before the recession caused such a great surge in adoption, when virtualization was a niche; it wasn't really in the data center, or it sat in very small corners of it and wasn't widely adopted. Then, through various forces, both economic and technological, we saw this great surge happen. This is when I, as an administrator, went from being just a Unix/Linux administrator to being a virtualization administrator to being a storage administrator, because I had to grow my knowledge set to understand, administer, troubleshoot, and architect the infrastructures supporting all of my virtual machines. Pre-virtualization, each server had basically one purpose: its section of the application, whether the application as a whole or, to use the newer term, a microservice, whatever component it was responsible for. Storage was the same way: it was responsible only for the I/O related to the application, not for the entire server. Then, seemingly overnight, all of that changed. Organizationally our businesses had to adapt as well. The infrastructure teams in many cases merged, or shared a management team, so that storage and network and compute and operating systems groups could work more closely together to deliver a platform capable of meeting the business's needs.

What came out of this, along with some additional technological advances (think things like hyper-converged infrastructure), was that virtualization became much more accessible to everybody, not just enterprises with the talent and the money to build a bespoke virtualization infrastructure. It became accessible, it became easy, and our applications adapted as well: they got used to requesting and receiving resources in a matter of hours or days instead of the weeks to months it took pre-virtualization. But all of this is old news. I looked at the Tech Field Day website, and I know that this team of delegates is extremely well versed in virtualization; I'm preaching to the choir.
So why is this relevant here? I think it comes down to two things. One, we need to understand how to modernize those virtual machines. You've heard from several presenters today how containers are the new default for applications: new applications are defaulting to containers, defaulting to being deployed and managed with Kubernetes. But there is still a large majority of applications and application components that live in virtual machines. It would be nice to think that virtual machines are just going to go away in favor of containers and that we'll be able to move all of those applications into containers. It's 2020; just recently we heard in the news that some states here in the US are having issues with things like unemployment systems because they need COBOL programmers. These workloads aren't going away, and the fact that something is a virtual machine doesn't change that.

As we migrate and modernize, we're seeing the emergence of two infrastructures, two architectures, for hosting these: one dedicated to virtual machines and one to containers. Sometimes they overlap, because my Kubernetes infrastructure is also deployed onto those virtual machines, but at the end of the day I still have two separate things that I'm deploying applications to. As a system administrator, as the person often responsible for troubleshooting, how do I rectify that, how do I simplify it? I like to pick on networking in this instance, and not just because networking is the long-time picked-on group (if it's broken it's probably the network, and DNS is a subset of that). What happens when I deploy an application to containers that needs a high-throughput, low-latency connection to an application component in virtual machines? If something goes wrong, I need a good understanding of the physical, the virtual, and the software-defined networking happening at each of those layers, and that's not even counting the actual path the data takes. Do I need to understand the ingress and egress, the double encapsulation of the SDN, all of the other things in there? So how do we make it easier for our system administrators to manage, monitor, and control these modern applications? Developers, even though we highlight two separate APIs here, really have to understand two separate infrastructures in the same way: two sets of deployment automation, two sets of code, depending on whether I'm interacting with a virtual machine or with a Kubernetes platform. That adds complexity, which ultimately affects the broader organization, and complexity introduces risk. So we want to simplify as much as possible.

So I am happy to be here to talk about OpenShift Virtualization, a feature of Red Hat OpenShift that allows for the deployment of virtual machines inside of containers. I'll pause for a moment, not just for drama, but to let that digest. What we're talking about here is leveraging the KVM hypervisor inside the Linux kernel, and container technology, to encapsulate those virtual machine processes. Fabian is going to cover the technical aspects, so I'm just going to scratch the surface. When we think of a virtual machine, irrespective of which hypervisor we're talking about, at the very lowest level there is almost always a process, and with KVM it happens to be a QEMU process, that's running all of the tasks associated with that virtual machine.
And what is a container? A container is nothing more than an isolation unit, a unit of isolation for one or more processes. So it's a natural fit that my virtual machine process can be put into a container. When we move from containers to Kubernetes, we introduce things like pods; a pod is nothing more than one or more containers. In Kubernetes I can associate the things that virtual machines need, storage, network, and compute resources, and define all of those using native Kubernetes objects. Through a pod definition I can define the CPU and RAM requirements; through a network attachment I can define which networks it should attach to; through persistent volume claims I can request storage, just like any other virtual machine. These two paradigms fit really well together, and at the end of the day our goal is to bring the management of VMs and containers closer together, to the benefit of applications: application teams can consume the resources they need, and the infrastructure team can simplify and provide those resources with a minimum of confusion or overhead.

From an application standpoint this brings a number of benefits. First and foremost, these are standard, regular virtual machines. We're not doing anything to change the VM: it's still the same KVM hypervisor, still the same KVM virtual machine running inside that container. That means I can pick up a virtual machine that's running on Red Hat Enterprise Linux, Red Hat Virtualization, or Red Hat OpenStack, the same way it has for the better part of a decade, and move it directly into my OpenShift Virtualization environment. This includes not just Linux virtual machines but Windows-based virtual machines as well. If you were watching the news closely this morning, we announced with OpenShift 4.4 a tech preview of Windows containers on Windows Server 2019. But many Windows-based applications aren't running on Windows Server 2019, never mind being in a container or in a state that's ready to be containerized. With KVM and OpenShift Virtualization, we can take any VM running Windows Server 2008 R2 or later and bring it into this container management paradigm. Again, they're native Kubernetes objects; they are deployed, managed, interacted with, and accessed in the same way as any other Kubernetes object on the platform.

For application developers there's a bit of overlap here. On the bottom left-hand side we see Windows apps on OpenShift, so again the same principle of bringing in the existing application without having to modify it, which plays into the bottom right-hand side: I don't have to be under a deadline, whether imposed by administration, by bureaucracy, or by technology, to refactor my VM. I can do it at my own pace, and importantly it means I can do it with care, so I can work on the quality of my application without rushing to get to the container world. My VM is consumed just like any other container; it's just a VM in a container. As a developer it means I can unify all of my tools; I no longer have to understand two sets of APIs or maintain two sets of tooling and automation. I can simplify my CI/CD pipeline and everything else I'm using so that it's one set of APIs: I create a Kubernetes object associated with my VM definition, and the platform handles the rest for me.
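To make that concrete, here is a minimal sketch of what such a Kubernetes object can look like: a KubeVirt/OpenShift Virtualization VirtualMachine manifest that declares CPU and memory, a NIC on the pod network, and a disk backed by a persistent volume claim. The names (fedora-vm, fedora-rootdisk) are illustrative, and the exact API version varies by release:

```yaml
apiVersion: kubevirt.io/v1          # older releases use kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: fedora-vm                   # hypothetical VM name
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: fedora-vm
    spec:
      domain:
        cpu:
          cores: 2                  # CPU requirements, expressed like any pod's
        resources:
          requests:
            memory: 4Gi             # RAM requirements
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}        # NIC attached to the cluster's pod network
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: fedora-rootdisk   # storage requested through a PVC
```

Everything here is ordinary Kubernetes YAML, which is the point: the VM is defined, versioned, and applied the same way as any other object on the platform.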
And finally, for our infrastructure owners. As a long-time infrastructure person, with a background in virtualization, storage, and many other things, I am an old-school, on-prem hardware administrator; that is my background. I need to learn Kubernetes. It's not going away, it's growing, and its pace of adoption keeps increasing. We see this in survey after survey, from all of the various data points, including internal to Red Hat and IBM: Kubernetes is growing and it's being used more and more in production. Which means that if it hasn't affected me yet, I'm going to be told that I need to start supporting Kubernetes deployments. By providing a consistent management experience for both containers and virtual machines, I can, again, simplify. Yes, there is going to be a learning curve, but it's one learning curve for both technologies, and then I can consolidate.

Now, you've heard earlier today... sorry, Andrew, I might be missing something here. As I understand it, there are a couple of ways I can consume virtual machines inside of Kubernetes, and thus OpenShift. I can go the native-VM route, where Kubernetes and OpenShift are just orchestrating virtual machines, or I can have virtual machines running inside of containers with KubeVirt. One, can you clarify which way you're describing here? And two, Red Hat has a very robust virtualization platform today; do I lose any of the management capabilities I have in the Red Hat Virtualization platform when I move to this being managed natively by OpenShift? I'm not extremely familiar with Red Hat's virtualization platform, so from a high level, tell me if I lose functionality from a management perspective, or if it's a one-to-one mapping.

I will try to answer all of those; if I skip or miss one, keep me honest. And Keith, I'm horribly offended and disappointed that you don't have a deep understanding of every virtualization technology out there, so we'll work on that. To address your first question, about the two types of VMs inside of OpenShift: yes, and you could almost argue there's a third type. The first is that I can deploy OpenShift to a virtual infrastructure and it simply runs inside virtual machines; there is no visibility, no knowledge, no integration between OpenShift and that underlying virtualization platform. Quite simply, this would be with hypervisors other than Red Hat Virtualization and VMware vSphere. The second is something we call full-stack automation, or sometimes installer-provisioned infrastructure, where we leverage a cloud provider inside of OpenShift to give us visibility into that cluster. The virtual machines are created by the machine API leveraging the cloud provider, so we have some visibility into what's going on; this is how we implement things like cluster autoscaling, where, based on the metrics defined in the cluster autoscaler, it will automatically create new nodes in the cluster. Today that primarily works with the hyperscale providers; on-prem, when OpenShift 4.4 goes generally available, we'll support that with Red Hat Virtualization. The third way, which is the way OpenShift Virtualization and KubeVirt support, is deploying user-accessible, application-accessible virtual machines as pods into the cluster.
With the second way, the machine API and so on, those virtual machines are used strictly as worker nodes; they are only used for container applications. It is technically possible, although, as you would expect with any nested virtualization experience, we would of course not recommend it for various reasons, to deploy a VM on a VM.

The second part of your question was around the manageability experience. Red Hat Virtualization is our long-standing technology; I think the first release was in 2010, so roughly a decade, and earlier than that the Qumranet acquisition happened in 2008. Red Hat Virtualization and its management plane are what you're familiar with from a traditional virtualization experience: data centers, clusters, nodes, and all of those things. The good news, and this is being discussed quite literally right now in one of the Red Hat Summit sessions, is that Red Hat Virtualization 4.4, when it is made available later this year, will be able to talk to and interact with OpenShift Virtualization clusters and provide the same management interface a traditional virtualization administrator is familiar with. So regardless of whether I'm a developer who is entirely Kubernetes-centric, or a traditional virtualization administrator in the process of learning new skills to adapt to this new world, I can use the interface I'm familiar with. I think I covered both questions there.

Sorry, Andrew, I'm another consumer. When we talk about containers, we are used to stateless things that can scale out; they are designed to work on Kubernetes, designed to work in this fashion. When we think about a VM it's a totally different story: everything is stateful, you have to live with the machine itself, which is probably way bigger than a container, and so all the math is totally different. It's not that, if something fails, you just decide to spin up another VM on another node, because it can take a lot to make that happen; there was state that had to be maintained. How do you manage all these things? They are very, very VM-specific. I'm not getting the point: even with the same API and everything, are you consistent with the fully virtualized environment we are used to?

So, the difference between a good question and a truly great question is one we have a demo for. Fabian will cover some of that when we get to the demo, as well as in his slides in the next section. I will quickly note that you can choose how your virtual machine behaves inside the OpenShift Virtualization environment. What do I mean by that? When I create that virtual machine, I can tell it to behave like a pod always does: when I cordon and drain a node inside OpenShift, it terminates that pod, reschedules it, reattaches the persistent storage on the new node, and turns the virtual machine back on. Or I can have it do a live migration, where cordoning and draining the node causes a live migration of those virtual machines to other nodes in the cluster. I can also, at any point in time, just as you would expect from a traditional hypervisor or virtualization experience, trigger a live migration of those VMs to other nodes in the cluster.
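As a rough sketch of how that choice is expressed, assuming the KubeVirt API that underpins OpenShift Virtualization: the per-VM eviction behavior is a field in the VM template, and an on-demand migration is itself just another Kubernetes object (roughly what the virtctl migrate convenience command creates). Names here are illustrative:

```yaml
# In the VirtualMachine spec: migrate, rather than terminate and reschedule,
# this VM when its node is cordoned and drained.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm
spec:
  template:
    spec:
      evictionStrategy: LiveMigrate   # omit for the pod-like terminate/reschedule behavior
      # ... domain, networks, volumes as in the earlier sketch ...
---
# Triggering a live migration by hand is also just an object:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-fedora-vm
spec:
  vmiName: fedora-vm                  # the running VM instance to move
```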
So I still have the same management paradigms I'm familiar with for providing high availability, or zero-downtime operations during infrastructure maintenance.

Just to follow up on this: do you mean that, over time, you expect everything virtualization at Red Hat will become OpenShift, and traditional Red Hat Virtualization will, practically, die?

There is no intention to stop developing or supporting things like Red Hat Virtualization or Red Hat OpenStack. The support policy extends all the way out to, I think, 2025 or 2026 for Red Hat Virtualization; it's not going anywhere. That being said, we are seeing some of the newer features start to appear in OpenShift Virtualization, so we can really take advantage of this common, simplified, unified platform for both containers and virtual machines from an application perspective.

Yeah, this is Keith again. From a high level I get KubeVirt, I get why you would do it, and hopefully the demo will clarify some of this. But as I think about how virtual machine networking is quite different from container networking, and how other solutions integrate into those, whether it's backup applications, configuration management, and so on: all of those constructs change once I put a VM inside a container orchestrated by Kubernetes. So hopefully we'll touch on some of that.

You're absolutely right, and again I'll rely on Fabian for the deep technical details, so just scratching the surface: networking is provided through the Multus plugin. Through OpenShift networking, through the network operator inside OpenShift, I can define additional Multus networks, things like "on each of my worker nodes with this specific label, create a new Linux bridge with this VLAN identifier," and then my virtual machine can be attached to that layer-2 network. That network will follow the VM wherever it happens to be across the cluster, because it is automatically applied across all of those nodes for me.
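As a rough illustration of what that can look like, a sketch rather than the exact objects being described: the node-side bridge can be declared with a kubernetes-nmstate NodeNetworkConfigurationPolicy targeted at labeled workers, and the layer-2 network the VM attaches to is a Multus NetworkAttachmentDefinition. Interface names, the VLAN ID, and the nmstate API version are assumptions:

```yaml
# Create the bridge on every node carrying a given label
apiVersion: nmstate.io/v1beta1           # API version differs across releases
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # only nodes with this label get the bridge
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: ens3                 # hypothetical physical NIC backing the bridge
---
# Define the layer-2 network that VMs (and pods) can attach to
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan100-net",
      "type": "bridge",
      "bridge": "br1",
      "vlan": 100,
      "ipam": {}
    }
```

A VM then references this from its template with an extra interface (bridge binding) and a network entry pointing at the definition; pods can attach to the same definition, which is what makes the network shared across both worlds.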
Hey, I heard somebody else pipe in, so I'll let you finish. ... Actually, I've lost my train of thought; I think I've touched on at least one of those things anyway.

Andrew, all I was going to say, and I didn't mean to cut you off, is that what we're talking about is immutable VMs. We're talking about VMs taking on the same character as a container: a container being immutable, we're talking about VMs being immutable, and obviously, as part of that, the whole application deployment and configuration, the state, and all of that management becomes much different than with a traditional VM. I think that's where Keith was going: you have to look at these things a little bit differently.

Not necessarily. I think there's some validity in the argument that if I adopt containers as a method of packaging and shipping applications, some of those applications are going to stay in VMs because they can't be containerized; they just don't fit inside a container. Network function virtualization is an example: there's no container equivalent to that yet, they're still working on it. So if I take that ISV appliance, a load balancer or a firewall, that virtual appliance, and stick it in KubeVirt, I can now ship that whole thing as I would a container, and I can move it and orchestrate it the way I want, as if I've embraced Kubernetes. So it's not just for immutable infrastructure or immutable workloads; it's also for these things that just don't fit well in containers, where I have to keep running VMs.

So what my disconnect is, is that with that comes all of the cruft of running virtual machines. I still need all of the management capability from my previous world, including persistence, which isn't really the point of containers in Kubernetes.

Yes. Persistence is, ironically, one of the easier ones. We solve it in the Kubernetes-native way, using persistent volume claims. Think of it like any other virtual machine definition: how much CPU, how much RAM, which network connections, which disk connections and how they are connected into my VM; that's the pod definition, and just like any other container, any other pod, it has that definition inside Kubernetes. When I need to persist data, whether it's a database with files on a file system or a virtual machine with a VM disk, that qcow2 or raw image, it gets put into a persistent volume. When I go to instantiate my virtual machine, I connect my persistent volume to the host, and then inside the pod we rely on QEMU and libvirt to connect that disk image and instantiate the virtual machine, just as they would with any other hypervisor, whether it's RHV or OpenStack or RHEL directly. I'll let Fabian answer those questions in the best way possible, which is through the demo, and hopefully that will help; if not, I'm certainly happy to continue. That is the last slide I have, so I will stop sharing and hand over.

All right, hello, my name is Fabian Deutsch, I'm an engineering manager here at Red Hat, and I think Andrew already gave a good overview of what OpenShift Virtualization is. I'll now try to fill some of the gaps and answer some of the additional questions, some of the great questions, we already heard. I wanted to start with mastering the complexity that comes from having two different stacks. The important thing I would add here is that stacks can be stacked on top of each other or they can live side by side, but the complexity exists either way, because you have two different infrastructures that you need to manage. Our answer, to provide one modern platform based on Kubernetes, is OpenShift. We want to use OpenShift, which can already run on bare metal, to also run VMs, because OpenShift already has a strong focus on application developers, whose lives we want to make easier, and it has all the features Derek went into earlier today about simplifying the operational burden of maintaining such a trusted platform. But because OpenShift and Kubernetes are not natively capable of running VMs, that's where KubeVirt comes in. KubeVirt is by now a Cloud Native Computing Foundation (CNCF) sandbox project, and it provides a virtualization API and runtime for Kubernetes in order to run and manage virtual machines. Here we're talking about the traditional, classical virtual machines that do have persistent needs.
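To illustrate the persistence point, here is a hedged sketch of how a VM disk typically lands in a persistent volume with the Containerized Data Importer (CDI) that ships alongside KubeVirt: a DataVolume pulls a qcow2/raw image from a source (a URL in this example) into a PVC that the VM then uses as its disk. The URL, size, and names are invented, and the API version varies by release:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-rootdisk              # also becomes the name of the backing PVC
spec:
  source:
    http:
      url: "https://example.com/images/fedora.qcow2"   # hypothetical image location
  pvc:
    accessModes:
      - ReadWriteMany                # RWX storage allows live migration; RWO also works
    resources:
      requests:
        storage: 20Gi
```

The resulting PVC behaves like storage for any other pod; inside the launcher pod, libvirt and QEMU attach that volume to the guest as its disk.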
I would also like to highlight that it's about an API and a runtime, because the VMs themselves run inside the cluster. Now, KubeVirt is not that young: it started around the time OpenShift version 3 was being rebased onto Kubernetes, in 2016, when the first ideas emerged about how to avoid ending up with two schedulers, one for VMs and one for containers, and how to converge them onto one platform. Last year KubeVirt joined the CNCF with the support of a quickly growing community; we have a lot of early adopters who are using KubeVirt in production today. And just yesterday it was announced that OpenShift Virtualization will be generally available later this year.

Before we get to the KubeVirt architecture: KubeVirt is at the heart of OpenShift Virtualization, providing the compute components, and OpenShift Virtualization contains some additional components to fill the gaps that are simply there, like converting existing VM images into a format used by OpenShift Virtualization.

Now, the guiding principles. The first is that we want to align on containers for a unified resource model; thus VMs have to live in pods. VMs consume resources, whether compute, networking, or storage, and they should consume them from where Kubernetes provides them, and that place is pods. The benefit of this approach is that it makes the VMs transparent to Kubernetes, and not only to Kubernetes but to its whole ecosystem. The next guiding principle is to have a dedicated API for the virtualization workloads, even though we are running in pods. Would it have been an option to use the pod API directly to bring up those pods and run VMs? The answer is no, because virtualization has other requirements than containers when it comes, for example, to the workload definition: for VMs you need to specify virtual devices and sometimes BIOS information, and on the other side there are distinct virtualization operations, like live migration or restart, which simply don't exist for containers. Last but not least, to get to a really consistent and usable extension of Kubernetes, we want to focus on the usability of the virtualization features in Kubernetes. That means we are working to enhance Kubernetes, and we want to bring all the necessary virtualization features to Kubernetes, or to KubeVirt, but the features we bring into KubeVirt need to be exposed in the Kubernetes-native way, so that usability stays consistent for a user regardless of whether they are working with containers or virtual machines.

With these guiding principles in mind, let's take a look at the KubeVirt architecture. Again, KubeVirt is the compute component of OpenShift Virtualization, and there are some others. The components shown in gray, the virt-controller, the virt-handler, and some others, are containers running as pods on the Kubernetes cluster, which means that KubeVirt itself, the virtualization infrastructure layer, is effectively a cloud-native application that we deploy and run on top of OpenShift. Now let's look at a few of these containers, starting with the virt-launcher pod.
The virt-launcher pod is the pod in which the VM is ultimately running. The VM, a KVM VM, is actually a QEMU process accessing and leveraging the kernel KVM module, and as Andrew alluded to before, processes can run nicely in containers. Looking at the diagram on the left-hand side, you see resources that Kubernetes manages at the cluster level, plumbs at the node level, and provides to pods. All of the resources going into the pod are handled by Kubernetes; the remaining gap, the responsibility of KubeVirt, is to connect the resources provided to the pod to that QEMU process, to the VM process. KubeVirt does that automatically, and we'll see how it does that in a second. One good thing about this, because the KubeVirt-managed process runs inside a container, is that whenever Kubernetes gains enhancements at the cluster level or in the node-level plumbing layer, those benefits are directly usable by KubeVirt as well. We might need to do some tuning to leverage new resources, but in general we benefit from them; for example, once affinity handling landed in Kubernetes, it was directly usable by KubeVirt.

A note on delivery: the QEMU components, libvirt and QEMU, are taken from RHEL AV, which is Red Hat Enterprise Linux Advanced Virtualization, and that part is also used in our Red Hat Virtualization and Red Hat OpenStack products. So the bits underneath are really the same, shared between our products. To sum it up: a VM in OpenShift Virtualization is, in the end, just a rather special process inside a regular Kubernetes pod, and to Kubernetes this VM pod does not look any different from any other pod running a completely different process.

Now let me go one step back, or two in this case. We had the question about persistence earlier, and I would like to answer it here. The storage going into the pod, as Andrew said, comes as persistent volumes. At the node level that's usually provided by CSI, and CSI is also used at the cluster level, so whatever persistent volume gets attached to the pod, the VM can use that resource. That means, implicitly, that whether the virtual disk image sits on a block device or on a filesystem, we can leverage that persistence. A small note on the networking side: the network is provided by Kubernetes as well, and here we can also build on the CNI plugins used for regular pods.

All right, let's continue. Real quick, and I assume you're going to get into this, but since we're talking about networking: is the networking from a VM perspective going to be the same as for a container? Is it going to be isolated, and do I expose it through services? Are you going to get into that level of detail?

We can do it here, I think that's a good point, and we can take a look at it in the demo later on. We took a divide-and-conquer approach. On the one hand, we want VMs running on OpenShift Virtualization to be useful in the container context, the pod context, the native Kubernetes context, and that is why we tie into the pod network. Every VM by default gets a NIC attached to the Kubernetes network, the OpenShift network, with whatever networking plug-in is used beneath.
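Because the VM's default NIC sits on the pod network and the labels on the VM template are carried through to its launcher pod, exposing a VM through a Service looks exactly like exposing a pod. A minimal sketch, assuming a VM whose template carries the label kubevirt.io/domain: fedora-vm and a guest listening on port 22 (roughly what the virtctl expose helper generates):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fedora-vm-ssh
spec:
  selector:
    kubevirt.io/domain: fedora-vm   # label set on the VM template, propagated to its pod
  ports:
    - name: ssh
      port: 22
      targetPort: 22
      protocol: TCP
  type: ClusterIP
```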
So from a functional perspective, when a VM comes up, it has the same network connection as a pod does. But we also know, because we have Red Hat Virtualization and OpenStack, that there are more needs around networking, and that is why we have the ability to specify, using Multus, additional networks that can be attached to a VM. You can attach an arbitrary number of them, where "arbitrary" is limited by whatever Multus supports, and we've got a couple of ways in which those networks can be connected to the VM.

So we can actually do SR-IOV passthrough today? And, you know, you've got all that layer-2 connectivity, which you need for PXE booting and multicast and that kind of thing?

You can do that, because we can bridge it, and we can do passthrough and SR-IOV passthrough. Multus gives us the flexibility; sometimes a dedicated CNI plug-in is needed, but they are not that hard to write, and they contain the complexity in that one place.

Okay, thank you.

Sure. I think that gives us a lot of flexibility, and one note here: the networks we connect in the end, the logical network entities, are shared across the whole platform. We can have one way of attaching the VM to a network, but a pod can also be attached to the very same network; we'll see a hint of that later on.

All right. Now, nobody creates those pods by hand; that is important to know, and it is why we provide that aforementioned dedicated API. We use custom resource definitions (CRDs) and aggregated API servers to extend the Kubernetes API with these standard mechanisms; they aren't anything special these days. The virt-controller is the component that watches the objects created through the CRDs and aggregated API servers and creates the pods we were looking at before. That is just the controller pattern we know from Kubernetes, as Derek showed earlier today: you define the system state you want, you define the VM state, and the controller crafts the pod you need in order to run that VM. The API itself, the dedicated API for virtualization, is quite fine-grained. On the one hand it's declarative and domain-specific, but it covers a lot of the common virtualization functionality that's needed to define the virtual machines: I mentioned that you can specify SMBIOS serial numbers for licensing reasons, or the buses for the connected storage and network devices. We acknowledge that and give you the API to do it. On the other side, which is not shown on this slide, we also provide the entry points for virtualization-specific operations, for example triggering a live migration or restarting a VM.

To wrap up this part of my slides, a few more things. KubeVirt is totally open source, everything is developed upstream, and as I mentioned before, we really try to be Kubernetes-native and friendly, to make development feel seamless for the developer or operator who works with Kubernetes, so that there's no real tension between KubeVirt objects and the native Kubernetes objects.
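As a small, hedged illustration of that domain-specific detail (SMBIOS serial numbers, device buses), this is the kind of fragment that can sit inside a VirtualMachine's spec.template.spec.domain; the field names follow the KubeVirt API, the values are invented:

```yaml
domain:
  firmware:
    serial: "4C4C4544-0042"        # SMBIOS system serial, e.g. for guest licensing checks
  devices:
    disks:
      - name: rootdisk
        disk:
          bus: sata                # virtual disk bus presented to the guest (virtio, sata, scsi)
    interfaces:
      - name: default
        model: e1000               # NIC model the guest sees (virtio, e1000, ...)
        masquerade: {}
```

None of this has a pod equivalent, which is exactly why a dedicated, VM-shaped API exists alongside the pod API.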
I have a question in regard to that; this slide is perfect for my question, actually. Let's say I am interested in running my VMs in containers: how locked in am I to the OpenShift implementation of KubeVirt? Would it be possible for me to take these virtual machines and move them to a different distribution of Kubernetes if I wanted to, or am I pretty locked into the way you've gone about creating these virtual machines inside the container?

First, one note, not for the sake of the discussion but for the fundamental picture, which I think is important: it's technically correct that we run VMs in containers, but we really want to establish the view that we are effectively running VMs on OpenShift. Maybe that's too nitpicky. Now, to the compatibility question: there is nothing specific to OpenShift. OpenShift Virtualization is the production-ready build of KubeVirt that Red Hat provides to you, but it's the very same bits; we take all of it from upstream. I'm aware of one downstream-specific patch, which is for licensing reasons, but otherwise it's all the same, and if we speak about the data formats, it's all the same too. So you can take the VM definitions you have on an OpenShift cluster, your VM there, your PV with your Windows image, copy them to a different cluster, a plain Kubernetes cluster with KubeVirt, and run them there as well. You have the freedom of choice. However, one important addition: OpenShift provides infrastructure and network tooling, for example Multus with certain CNI plugins and the SR-IOV device plug-in, and you need to take into account that these preconditions must be met on the destination cluster as well. That is what we package with OpenShift Virtualization; you can certainly rebuild it elsewhere, but those preconditions need to be met on the destination side.

Sorry, Fabian, but you are implying that it's a huge lock-in. If I want a mixed environment with OpenShift, GKE, and, I don't know, AKS, one of the major providers, they do not provide the same network, they do not provide KubeVirt on top of their infrastructure, because they don't have KVM, so I don't have access to it. So it's no longer portable, right? You can obviously take a bare metal instance from a public cloud provider, you can put it on IBM Cloud or on AWS on a bare metal instance, but then you don't have the ecosystem. It's not only Kubernetes: there are several services, and maybe I want to use some of those services, and similar services are available elsewhere. So you are saying that if you're working with OpenShift, Kubernetes plus VMs works everywhere, and I totally get it, I can buy bare metal from almost everybody, but actually I need OpenShift to make sure that everything works, or I have to rebuild something compatible, like KubeVirt on top of Kubernetes, your software-defined network stack, and everything else, to make it work. So my problem is: can I take my application as-is? As a developer, an independent software vendor, I develop a new application that is made, for whatever reason, of containers plus VMs. Can I move it across different Kubernetes clouds from different vendors, as I would with a container-only application?
As I said before, you can install KubeVirt on different Kubernetes clusters, and that will give you the same runtime we have in OpenShift; but then it depends on what requirements your workload has. And one note on the public cloud providers: the biggest issue there will be that the managed Kubernetes services of the cloud providers almost always run in VMs, and then you have a nesting problem, which is something we cannot recommend. So let me separate the question into two parts. I would not say there is any kind of lock-in when it comes to the workload, because you can easily take it out and run it on a different Kubernetes cluster. The second part is that KubeVirt itself, for running VMs, requires bare metal beneath it, and that is usually not available in the cloud, except if you take bare metal instances. I understand your question about the services; I would defer that one in order to stick to the time and be able to show the demo, because we only have a few minutes left, if that's fine with you.

One more note: KubeVirt itself is driven by operators. It is itself a cloud-native application, and we've written an operator to deploy it and to take care of all the lifecycle-management tasks you need for that application itself, not for the VMs, but for KubeVirt as an application. And with that, let me quickly close and go to the demo.

This is a regular Red Hat OpenShift cluster running on bare metal. As said before, OpenShift itself cannot run VMs natively, so OpenShift Virtualization is delivered as an operator. We see that OpenShift's OperatorHub has 283 operators available today, and because the installation takes some time, a few minutes, I've pre-installed OpenShift Virtualization on this cluster (it appears here as container-native virtualization, the name we used until recently). If we open that operator, we see that it looks like any other operator; any other operator would look very much the same. The important part I would like to highlight is that, just like with any other operator, you can opt into a fully automatic update approval mechanism. That's important, because with this, updating the virtualization infrastructure in OpenShift becomes automatic: on the Red Hat side we test the upgrades along our supported upgrade paths, and once we've tested them we publish these updates, and your cluster can automatically update the infrastructure. That is the basic enablement of OpenShift to run virtual machines; nothing else needs to be done, you only install the operator.
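For reference, a hedged sketch of what that opt-in looks like underneath the console: the operator is installed through an OLM Subscription, and the approval mode is a single field. The package, channel, and namespace names below are assumptions and vary between releases:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged          # assumed package name for OpenShift Virtualization
  namespace: openshift-cnv
spec:
  name: kubevirt-hyperconverged
  channel: stable                        # channel names differ across releases
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic         # the "automatic update approval" toggle
```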
Once you've installed the operator, we see that virtual machines, and virtual machine templates in this case, are shown alongside the other workloads, because after all, virtual machines are just a different form factor: pods are smaller, virtual machines are usually a little bit larger. Let's take a look at the virtual machine section itself. It's empty now, but if I switch to the correct namespace, default, we see there are a couple of VMs. So the question we had before is: how do we get VMs into Red Hat OpenShift? There are a couple of ways. One way is to create them net new, from scratch, in this dialog. You can select the template, which we'll get to in a second, and you can select the source from which you want to install the virtual machine: PXE-booted, which, as mentioned before, requires layer-2 networking, and a couple of other options; I'll use a container, just as an example. Then we go to the next pages of the wizard. You can select the operating system; we've got a pre-selection of operating systems which internally sets the right defaults for the operating system you want to launch. You've got a set of flavors to choose from in order to get the right sizing, and you can specify a workload profile to tune the defaults a little bit. If we continue, we see that by default we attach to the pod network, and you have the ability to add more network interfaces. As I mentioned before, the networks shown here are, again, defined by OpenShift as a platform (we'll see where the networks are defined), and they are usable by the pods as well, so here we could also choose to use a very fast network. We've got different ways of attaching them to the VM; obviously the requirement is that the network provider supports that binding mode.

All right, storage. We've got a couple of ways storage can be provided to the VM: it can be pulled from a URL, it can be provided in a container (so we can ship a VM image inside a container, which is especially interesting for stateless workloads), and then we can do what is better known from traditional virtualization, which is to clone a regular disk and run the VM off that cloned disk. Then there's the interface it's connected with; I think that is all pretty normal stuff, so let me continue. If the guest operating system supports it, you can specify a cloud-init custom script here and provide some details, we can attach some additional hardware, you can review it, and in the end you create the virtual machine. I'm not doing that here, because I just provided some dummy defaults.

The other way to get a virtual machine into OpenShift is to import an existing one, and that is a completely different session; I think there was a Red Hat Summit session about it. You select the provider and specify the vCenter instance from which you want to import a VM, and you will have to provide infrastructure mappings: which source storage locations and networks map to which OpenShift networks and storage locations or storage classes. This is all done in this wizard, which I'm not going through right now, but I think it's an important point to understand how workloads can be migrated over to OpenShift Virtualization, and again, we can run stateful virtual machines. The last option, which I'm showing not just because it's fancy but because we also want to address future needs and tie in with what you expect from Kubernetes, is defining virtual machines in a GitOps pattern. You can do that too; it uses the same API underneath as the other two flows I showed before.
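Because the same declarative API sits underneath all of these flows, the choices made in the wizard end up as ordinary fields on the stored VirtualMachine object. As a hedged sketch (names invented), the disk and cloud-init selections map to entries like these under spec.template.spec:

```yaml
domain:
  devices:
    disks:
      - name: rootdisk
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
volumes:
  - name: rootdisk
    dataVolume:
      name: fedora-rootdisk          # disk imported or cloned into a PVC, as sketched earlier
  - name: cloudinitdisk
    cloudInitNoCloud:                # the "cloud-init custom script" step in the wizard
      userData: |
        #cloud-config
        user: fedora
        password: changeme           # illustrative only
        chpasswd: { expire: False }
```

So a VM created through the wizard, an imported VM, and a VM committed to a Git repository all end up as the same kind of object, which is what makes the GitOps flow possible.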
All right, let's look into one VM. This is the overview: we see a Fedora VM which is running, and we see the node and the IP address it's currently running on. If we dive into that page we see a few more details and some metrics for that VM (I'm not sure why the memory is not shown here, but usually you see it), and you get some additional details: labels, namespaces, and annotations. I think that's an important highlight, because they are effectively applied to the pod as well, which allows the VMs, if they use the pod network, to be leveraged by routes, by services, by ingress, just like pods can be. That's important, because then you can really break up the boundaries between how you're using pods and how you're using VMs, because to Kubernetes it's really the same. Annotations and labels are always helpful for keeping metadata, and here at the bottom, for your convenience, you see all the services tied to this VM. Also interesting, since we spoke about the dedicated virtualization features: having a graphical console is unique to VMs, so here, for your convenience, we provide graphical access to that VM. Events are a feature of Kubernetes, usually used for pods and other workloads; we provide them for VMs as well. It says there are currently no VM events for this specific VM.

Now, this was a Linux VM, but because these are just regular VMs, the ones we know from other platforms, we can obviously run other guest operating systems as well. This is actually a Windows Server 2019 machine running here, and, yes, here it is. This graphical console is intended for administrative access, not for a VDI use case, but you can administer your VM here as you know it from other solutions, OpenStack for example.

So that's it for the virtual machines; we saw how to get them in and how to operate them. Virtual machine templates are exactly what the name says: you can predefine virtual machines with a certain set of parameters so that it's much easier to instantiate them in the future.

One thing I want to note, and this closes the loop to what we discussed about how the VMs are run (they run in pods), is what comes out when we look at the monitoring and alerting side. If we look at the Kubernetes dashboard, the OpenShift dashboard, we see that the resource consumption of the VMs is shown here as well. For the default namespace we see (it's a little bit tiny) that these resource consumptions map to the pod and the two VMs running in this namespace. If we switch to a different, arbitrary namespace, we see, in the very same manner, and internally it's the same APIs being used, the resource consumption in that namespace. The alerting in OpenShift is built on metrics, which means you can use or write the same alerting rules and they will apply to virtual machines or pods.
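As a rough example of what "the same alerting rules" can mean in practice (a sketch, not something shown in the session): a standard PrometheusRule written against ordinary container metrics also covers the virt-launcher pods that carry the VMs. The namespace, threshold, and pod-name pattern are assumptions:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: vm-pod-memory
  namespace: default
spec:
  groups:
    - name: vm-pods.rules
      rules:
        - alert: VirtualMachinePodHighMemory
          expr: |
            sum by (pod) (
              container_memory_working_set_bytes{namespace="default", pod=~"virt-launcher-.*"}
            ) > 4 * 1024 * 1024 * 1024
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "VM pod {{ $labels.pod }} has used more than 4 GiB for 10 minutes"
```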
The last thing, because I'm looking at the time: I mentioned the persistent volumes. All of the PVs here are used for the pods, but the one I'm using for Windows is shown down here, and the reason I'm pointing that out is that it's literally the same: it's the very same storage subsystem that provides storage, as well as the networks, to VMs and to pods. And for networking, network attachment definitions are the way to define networks; they can be defined here, and internally it's really Multus and network attachment definitions that are used.

You know, I think there's a lot of debate about whether this is the right direction for a lot of organizations, because this has a real appeal to a very niche focus. I like it, but I think this starts showing the future of how VMs eventually blend in, to where you never care about the difference, if that's fair to say. What you're really enforcing is a common deployment methodology that is consistent across multiple platforms. Does that make sense? There's a lot of good stuff here, and the other thing I would say is that, being a former KVM user and having dealt with some of the not-so-fun parts of it, I think this actually helps solve some of those challenges. So I'm going to end with that; thank you.

Yeah, thank you for the feedback. I think the important point is that we want to align on one platform to provide a consistent user experience, because in the end it's really about simplifying the lives of operations and application developers, and OpenShift is that platform for us. Kubernetes, really; the discussion of whether we focus on Kubernetes or OpenShift happened before, so let's not dive into that, but with OpenShift Virtualization we are saying this is what we see as the convergence point. It's definitely not a replacement for all virtualization workloads; we are not there today, and we are aware of that fact, which is why Andrew mentioned the Red Hat Virtualization integration announced just yesterday, because this is not a change that happens overnight. We're looking at a perspective for companies over the next five to ten years: where do we want to be, do we want to go down that container road, and then they will have to ask themselves, what do we do about the remaining VMs? Do we want to continue maintaining a virtualization stack just for the remaining n VMs, or can we converge them onto this other platform for containers as well, in order to simplify operations? We're not expecting people to tear down their stacks today and move to OpenShift Virtualization, and that's not what it's intended for, because we know that changes are hard, people need to be involved, and it will take time. We just want to outline how a converged solution can look.

I'd even take it a step further: eventually, at some point, you only allow declarative definitions of VMs being set up on this platform.

Good that you mention that, because one thing I wanted to highlight, which I missed in the example, is that it's not just declarative: there are certain imperative actions which are specific to VMs, like restarting or migrating. So that's already there, and with the dedicated API we are able to meet the current needs we have implemented; obviously we're also confident that we can deliver more of the features needed to address more of the workloads out there today. Thank you.

So, sorry, from my point of view: as you said, if this is the proposition for the future, then yes, I am going to deploy applications that are based on containers, but I still have a couple of VMs around that I can't manage to refactor or re-engineer, and I totally get it, that part is easy. But if you have a complex environment, like most enterprises have today, then virtualizing, or moving from one hypervisor to the other,
to build an application made of VMs plus containers becomes really complicated at the moment. There are so many pieces, and again, it somehow undermines the portability of the application; at least there could be a dispute about that. But I do get the vision for the long term.

Organizations already face the challenge of figuring out how to deliver services and applications efficiently in the future, so they need to look anyway at how they write new applications and what they do about the existing ones. They already face a change, and they need to make a call: do we stay with our environment and do the best we can, or do we try to work more efficiently? My point, and I don't need to go into the details, is that organizations have to make that decision anyway, regardless of the solution. They need to decide where they go from here: are containers a way to write our applications more efficiently? If they say yes and go down that road, then at some point they need to think about the VMs. Other organizations might say containers are nothing for us, we want to stay with whatever solution we have; that is another option, and then maybe OpenShift Virtualization is not the first choice. But again, we see that the industry is changing, and we want to provide an option to converge, because after all it's all about workloads. One thing that has not yet been explicitly mentioned is that we believe everybody needs to deliver applications and everybody needs to continue solving problems, so the requirements on the underlying infrastructure will be very similar regardless of whether the applications are delivered in virtual machines or in containers. I've lost my path; so, these don't change. What I'm saying is that in the future we will still need to write applications and provide those services, and to do that efficiently we rather expect that, over time, other workload form factors will be used to meet these new requirements, to meet them more efficiently than they can be met with VMs, if you stick to those. I think the bottom line is that we expect the change will continue, and that we need to provide an option to our customers.

Hi Fabian, this is Pietro. I have a question that is more related to architecture and infrastructure requirements. Would you recommend, or mandate, that the OpenShift hosts that run VMs in containers be physical, or can these also be worker nodes that are themselves virtual machines?

Today we only support running virtual machines on physical hosts, so the worker nodes need to be bare metal. I think that's a full stop: VMs running on OpenShift Virtualization need to run on workers which are physical machines.
Info
Channel: Tech Field Day
Views: 3,976
Rating: 5 out of 5
Id: vHAjvX8QfhE
Length: 63min 25sec (3805 seconds)
Published: Thu Apr 30 2020