Ask an OpenShift Admin (Ep 25): Installation methods redux

Captions
[Music] Going live. Good morning, good afternoon, good evening, wherever you're hailing from, and welcome to another edition of Ask an OpenShift Admin. I am Chris Short, executive producer of OpenShift.tv, and I am joined by the one and only Andrew Sullivan. Andrew, we have a special guest on today. Please introduce yourself and our guest.

We do. Thank you, Chris, and I'm looking forward to today's show. Fortunately our guest, Katherine Dubay, who is a product manager with OpenShift, was very gracious and very accommodating, because I asked her very, very last minute yesterday afternoon if she was able to join. I think you DMed me at like six o'clock last night. Yes, something like that. So, Katherine, thank you in advance, before I even introduce you.

This is the Ask an OpenShift Admin office hour, one of the office hours series of live streams here on OpenShift.tv. That means we are here to answer whatever is on your mind, whatever is bothering you, whatever questions you happen to have. You've got three experts here. Katherine is somebody I rely on pretty heavily to help answer questions, and to reassure me that I am either right or wrong, as the case may be. If we don't know the answer, we are more than happy to go find it, whether that means reaching back into the rest of the product management team or into engineering to find the relevant people and get those answers for you.

That said, in the absence of questions from you all, we are very happy to have today's guest, Katherine Dubay, and today's topic, which is revisiting the installer and the install process. If you recall, some of the first episodes we did of the Ask an OpenShift Admin live stream last year, back when it was called the OpenShift Admin office hours, covered the installers and the install process.
We did a bit of a deep dive into all the different things going on inside of there. Of course, things change: there are some new things, and some old things that have changed, if you will. So we wanted to revisit that, because it's important to keep up with what's going on and make sure you're aware of all the options available out there. And quite literally, I could not think of a better subject matter expert, or a better person to come on and talk about this, than Katherine. So Katherine, if you will, please introduce yourself.

Hi everyone, I'm Katherine Dubay. I am part of the OpenShift product management team, and my focus is on installation and updating of OpenShift.

All right, we'll get to that in just a moment. We'll talk about the install process, and a day in the life of Katherine, which I'm sure is even more chaotic than Chris's and mine. But first, in long-standing tradition, the things that are top of mind, things that have come up recently that I want to highlight and make sure you all are aware of.

This was a relatively quiet week. Red Hat had a holiday over the weekend, so it was just quiet. I'm going to treat that as a good thing. As administrators, if it's quiet, that means everything is running smoothly. I think it was Futurama where Bender said you know you've done it right if nobody thinks you've done anything at all. That's a good administrator: if everybody goes "what's that guy doing here," nothing's broken, and that's a good thing.

So, a couple of things. I got asked, and this comes up periodically, about the in-tree VMware storage provisioner. When we deploy a cluster that is vSphere integrated, so vSphere IPI or vSphere UPI, we configure the in-tree storage provisioner. If you look in the upstream docs, that provisioner supports, or is capable of, some things that OpenShift doesn't support or is not capable of: things like multiple DRS clusters, multiple vCenters, and importantly, storage DRS clusters. I saw at least one example this week where a customer, I think they did a UPI install, was able to successfully deploy using a datastore cluster. But then storage DRS did what storage DRS does: it moved one of the disks for a virtual machine, and that caused a ripple of problems, not the least of which was a bunch of PVs that wouldn't connect because the cluster lost track of where they were. So please be aware: if you have a storage DRS cluster, you don't want to use it for OpenShift deployments. You want to use just a standard datastore.

The second thing I had: one of our IBM compatriots sent me an email asking for examples of deploying the registry using non-default storage, or other storage that has been configured. We were chatting about this before the show started, and Katherine very kindly corrected me. After the cluster is deployed, the registry operator detects what infrastructure it's on and then configures the storage appropriate for that platform. And Katherine, help keep me honest here: basically, if it sees "hey, I'm on Azure," it does things like talk to the credential minter and say "please give me Azure object storage credentials," or whatever storage credentials it needs, so it can request its PVC or connect to that object storage, and then it goes through and configures all of that. But what if I want to do something different? How can I do that? I've got some background things going on, and hopefully next week I'll be able to show some of those examples. I'm also going to work on a blog post for openshift.com where we can talk about that.
Yeah, really quickly, the way it works, not to spend too much time on it: when any of the OpenShift components require cloud API access, they file a credentials request, which essentially is what gives them enough credentials to go create the resources they need. In the case of Azure, since you brought that up, that would be its own storage account. From there, once it has the permissions, it can request what it needs for the registry, and that gets created as part of the operator itself. Hopefully that provides the right level of detail you're looking for. That's great, thank you.

The last thing I have is something that just came out this morning. It was shared by Chris's and my teammate Erik Jacobs, who also has a live stream or two, or five, on here. Erik has been working in conjunction with Kirsten Newcomer and a handful of other folks on updating what we call the Red Hat OpenShift Container Platform architecture design guide for PCI DSS, which is a huge mouthful. Effectively, if you are a customer who is interested in, or has, PCI DSS requirements, this is meant to help you with that architecture process: how do I design, deploy, and manage my OpenShift deployment within those constraints? I posted the link to that in the chat, so be sure to check it out. If you have any questions, feel free to reach out to me or Chris: andrew.sullivan@redhat.com, or cshort@redhat.com. It's chrisshort on Twitter but cshort on email. I mean, if I went and created a consistent handle across all platforms, people wouldn't know where I am anymore. We're only at episode 25, so maybe in another 25 episodes I'll get it. Eventually.

All right, I see there are a couple of questions already, so I think we should go ahead and address those.
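Katherine's description of the credentials flow can be made a little more concrete. Components file a `CredentialsRequest` custom resource, and the Cloud Credential Operator mints scoped cloud credentials into a secret for them. The sketch below is abridged and hypothetical: the resource names, secret name, and role binding are illustrative placeholders, not copied from a real cluster, and exact fields vary by release and platform.

```shell
# Abridged sketch of a CredentialsRequest like the one the image registry
# operator files on Azure. All names and values here are illustrative.
cat > registry-credentials-request.yaml <<'EOF'
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: openshift-image-registry-azure
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
      - role: Contributor
EOF
# The Cloud Credential Operator reconciles this into a secret that the
# registry operator then uses to create its storage (an Azure storage
# account, in this example). On a live cluster you can inspect the real
# ones with:
#   oc get credentialsrequests -n openshift-cloud-credential-operator
```

The heredoc only writes the manifest locally; the `oc` command in the comment requires a running cluster.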
Then we can go from there. And again, at any time, feel free to ask any questions you have, whether or not they're related to today's topic. The installer is today's topic, but don't let that inhibit you.

The first one, from Rahul: I have to migrate from OpenShift 3 to 4; could you please share any reference links? Chris, I see you shared the migration topic link there; I think that's a good one. Yeah, we do have a number of migration tools, not the least of which is the Migration Toolkit for Containers; that's the one I was trying to remember the name of. Basically, you stand up the new cluster, the toolkit gets deployed to the new cluster, I believe, and then you point it at the old cluster and say "hey, migrate all this stuff for me." We just did a stream on that with storage, so you can do this with storage, you can do this with just about anything. I dropped the link in chat there... actually, that's the wrong link. I spent a little bit of time yesterday planning out episodes for the rest of this quarter, and I think I'm going to tack the migration topic onto the end, so hopefully sometime in June we'll be able to cover it here on this show.

Let's see, another question, from Dean: if I use the Assisted Installer with bare metal, how can I add a new node post-installation? Katherine, I'll kick this one over to you, even though I know the Assisted Installer is kind of parallel to what you're doing; if you want to elaborate on that, please do. Let me just ask the question back one more time so I understand: you want to add a node to an assisted-installed cluster? Yep. So it can actually be done a couple of different ways. This isn't something I'm directly involved with, but I'll fumble through it as best I can. I believe you can still use the bootable assets the Assisted Installer gives you.
The boot media, I guess, is the right term, the ISO they give you. So you can still stand up nodes and add them in that way, but you actually don't have to. It's an OpenShift cluster at the end of the day; all you really need is to boot the operating system and know where to go get the Ignition config, and the Ignition config is always hosted on the master nodes. There's an Ignition-serving endpoint you can leverage. So it's no different whether you do it on day one or on day 27; it doesn't make a difference. And that's the typical method, and it works across any of the installation types, not just the Assisted Installer, not just UPI or IPI. You can manually add nodes to an IPI cluster, or you can use machine sets, but essentially all you're doing is passing a user-data field to that node containing the location to go get its Ignition config, depending on the node's role, whether it's a worker or, in the case where you had to replace a control plane node, a master. You boot with that Ignition config, the node works on joining the cluster, and then you'll need to approve the CSRs for that node, which essentially are its certificates, the client and server certificates it needs to communicate securely.

So, to rewind that back: it sounds almost as easy as using the Assisted Installer, minus the fact that you have to manually approve the CSRs yourself. Yeah, especially in the bare metal case, because you don't really know that the node itself belongs in the cluster. We get a little smarter with some of the cloud providers, where we can say, well, we can kind of guess you belong, because we know you're on this cloud infrastructure and the network you're running on is a network you should be on. There are things it looks at to do the auto-approval, whereas bare metal is a little more challenging; there's not quite as much information there.

Dean says that with the Assisted Installer you don't have to deal with Ignition files, right? They're still there; they're just invisible to you, which is one of the nice things about it. I think the original name of the Assisted Installer, maybe to paint a different picture of it, was "UPI plus plus." That was the original name, and then they called it the Assisted Installer. The way I describe UPI to people: user-provisioned infrastructure is essentially like no installation. There's no installer; you're just getting all the necessary artifacts you need to bring up a cluster, but from there you're responsible for provisioning everything. With IPI, obviously, that's full-stack automation, meaning it provisions everything for you; it automatically just comes up and you have a running cluster. The Assisted Installer is really the middle of the road: you have the notion of boot media, or ISOs; those nodes boot up, establish identity by connecting back to the service, and then you form a cluster based on where you're running those images. It makes it a lot easier than manually going to each node and provisioning it.

So I noticed you answered there, probably out of habit at this point, what I can only imagine is your most frequently asked and answered question, which is the difference between IPI and UPI. I have, I guess, two questions related to that. First, from your perspective, are there advantages or disadvantages to one install method versus the other? And the second part, the follow-on: is there a difference in the cluster that gets provisioned at the end? Yeah, that's a great question.
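The day-2 node addition Katherine describes boils down to a small pointer-style Ignition config in the node's user data, which fetches the full worker config from the machine-config server on the control plane. The sketch below is a minimal illustration: the cluster domain is a placeholder, and `<root-ca>` stands in for the cluster's real base64-encoded root CA.

```shell
# Minimal pointer-style Ignition config for a new worker. The node boots
# with this and merges in its full config from the machine-config server
# on the control plane (served on port 22623). The domain and <root-ca>
# are placeholders for your cluster's values.
cat > worker-pointer.ign <<'EOF'
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        { "source": "https://api-int.example-cluster.example.com:22623/config/worker" }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          { "source": "data:text/plain;charset=utf-8;base64,<root-ca>" }
        ]
      }
    }
  }
}
EOF
# After the node boots and requests to join, approve its client and
# server CSRs (requires cluster access):
#   oc get csr
#   oc adm certificate approve <csr-name>
```

This is the same mechanism whether the node is added on day 1 or day 27, as noted above; only the CSR approval step is manual on bare metal.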
You know, I think there's a large perception that UPI is inferior to IPI. I would argue the other direction: IPI is in some ways the more limited of the two, in that it's a bit more prescriptive. I think the best way to describe it is the 80% use case: the options and settings that 80% of customers intend to use for their deployment, we cover as part of that automated installation process. UPI really opens the door to customizing the infrastructure, and this is where people sometimes get confused: they equate it to customizing the install of OpenShift, when it's actually customizing the infrastructure that OpenShift runs on. The resulting cluster itself doesn't necessarily have to be different. The docs explain it at maybe the far extreme, where you manually provision everything, including your compute nodes, your control plane nodes, and everything else, but it doesn't have to be that way. You can still use some of the IPI functionality in a UPI cluster, and you can actually do it on day one. For example, say you wanted to do machine set creation: we generate those as part of the installation, as OpenShift manifests, even on a UPI cluster, and it's up to the admin to apply and use them. So instead of creating a physical node like you would in a normal UPI process, you could apply the machine sets, and the cluster would go off and provision, wherever there's a machine API provider available for the platform you're deploying on: VMware, AWS, GCP, Azure, and so on. So you can make those clusters look nearly identical. There are probably some really tiny differences, but not enough to matter from an operational perspective; these clusters will perform identically. You can have elastic and dynamic compute capacity the same way you can with IPI, and you get the advantage of being able to customize the infrastructure in cases where you couldn't with IPI. There are a lot of cases where it may be more beneficial to use UPI and script it for your organization, as opposed to trying to make IPI fit a model it's really not intended for.

Yeah, and one of the things I want to follow up with you on in a couple of minutes: you and I tend to answer a lot of questions with the field and with customers around the installation process, and a lot of times that bleeds over into architecture, so I definitely want to talk about that a little bit. Dean has a question here which you kind of partially answered: why do we consider UPI to be superior, when an IPI environment allows automatically scaling nodes using, effectively, an oc command, whereas with UPI, he's saying, you have to do your own automation? I see Christian and Chris chatting down below saying that's not true, and you said the same thing. And then the second one, which is tangentially related: ACM requires IPI-deployed clusters.

Yeah, the reason for that has to do with the provisioning technology under the covers. ACM doesn't require IPI across the board: it requires IPI to be able to provision a cluster, but it doesn't require IPI to adopt a cluster. So you could still do a UPI deployment and adopt that deployment for management through ACM. It's only when you want ACM to provision the underlying cluster that you would need IPI, and the reason has to do with the integration of how it's provisioned. It integrates through a service called OpenShift Hive, which is an API for provisioning clusters. For that service to work, it needs to know how to provision the underlying infrastructure, and this is where I think people sometimes struggle a little bit.
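Katherine's point about applying machine sets on day one of a UPI install can be sketched concretely: generate the manifests, then drop an extra MachineSet into the `openshift/` directory before creating Ignition configs. Everything below is a skeleton with placeholder names (`mycluster`, the `abc12` infrastructure ID, the file name), and the platform-specific `providerSpec` is deliberately left empty since it differs per provider.

```shell
# UPI day-1 machine set sketch. First (on a machine with the installer):
#   openshift-install create manifests --dir=mycluster
# Then add a MachineSet manifest alongside the generated ones:
mkdir -p mycluster/openshift
cat > mycluster/openshift/99_my_worker_machineset.yaml <<'EOF'
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-abc12-worker-extra
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: mycluster-abc12
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-abc12-worker-extra
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: mycluster-abc12
        machine.openshift.io/cluster-api-machineset: mycluster-abc12-worker-extra
    spec:
      providerSpec: {}   # platform-specific (vSphere, AWS, ...); omitted here
EOF
# Then continue the normal UPI flow:
#   openshift-install create ignition-configs --dir=mycluster
```

On platforms with a machine API provider, the cluster then provisions those workers itself, which is the "IPI functionality in a UPI cluster" being described.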
As to why one is better than the other: in cases where you need additional customization of the infrastructure, Hive, or anything else, isn't going to know how to do it. You're going to be your own admin, your own controller of how you want that to look: what shape, what size, whatever. And that's where you could still look at automating the provisioning. A good example: say you're on AWS. We provide CloudFormation templates that, if you follow them one by one, there are six stacks, will get you a running, functioning cluster. But do you really need six CloudFormation stacks? No, actually not. You could tie it all into one script with a couple of questions, and it would look just like IPI at the end of the day; you'd have an identical cluster. You can absolutely do that. So that's the case where you really need that customization, and that's why it won't work with ACM provisioning: ACM doesn't know about your special infrastructure customization. It only works with the 80% use case of IPI: if you can adhere to this, then we can go provision it, because we know how to do that as part of the regular installation process.

So I'm going to take some questions a little out of order here. Ricky, I see your question; I'll get to that in just a moment. And JP, I see you chatting about some things as well, so we'll address those too. Just to round out the thought process here with Dean: first, I'm going to expose my lack of knowledge around ACM. ACM is Advanced Cluster Management, I think, is the name. Yes. Advanced Cluster Management is Red Hat's multi-cluster management tool. Effectively, it's a management plane where you either create and deploy clusters, as Katherine was saying, or join existing OpenShift clusters into that management plane, and then you can do things like apply security policies, apply RBAC, et cetera, across all of those member clusters.
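The adopt-an-existing-cluster path mentioned above is driven from the ACM hub by a ManagedCluster resource. The sketch below is a simplified, hypothetical illustration: the cluster name and labels are placeholders, and the real import flow also involves applying a generated import manifest on the managed cluster itself, which is omitted here.

```shell
# Hub-side sketch of adopting an existing (e.g. UPI-deployed) cluster
# into ACM. Names and labels are placeholders.
cat > managed-cluster.yaml <<'EOF'
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: upi-cluster-1
  labels:
    cloud: auto-detect
    vendor: auto-detect
spec:
  hubAcceptsClient: true
EOF
# On the hub (requires a running ACM hub):
#   oc apply -f managed-cluster.yaml
```

This is why IPI is only required when ACM provisions clusters through Hive; adoption works regardless of how the cluster was installed.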
So, walking through with that set: Dean here is saying things like cluster pooling are coming out that will require IPI. I'm not familiar with ACM's roadmap. I don't think ACM as a whole requires IPI; the ACM hub, as they call it, has to be installed in an IPI fashion, I believe, but after that you can add additional clusters that are not IPI. I'm assuming what you're saying is that ACM has a feature called cluster pooling that will use IPI, and what Dean's building here is a case against Katherine's statement that UPI is the superior method. I will say that I agree with Katherine, but it's like a 51-to-49 split. I agree because of the flexibility, the scalability, and particularly, I should say, the simplicity around UPI, which I find better, particularly the load balancer aspect. Yes, there's an integrated load balancer that comes with IPI, but it is fairly limited, both in scale and in configuration, whereas with UPI you can configure that however you like, in whatever manner you want, and still add machine sets on day two to get cluster auto-scaling. So I'll take a step back now. Katherine, do you have any thoughts, anything to add?

Yeah, you started hitting the nail on the head with that one. The reason I still stand by that comment is that IPI is good for a certain prescriptive use case, and I would say we're trying to get better and expand that out. In the metrics we've seen on who's using what, UPI versus IPI, we're definitely starting to see an uptick in IPI. Don't get me wrong, we want people to use IPI; it's a lot easier a process, so I'm not trying to argue against it. But if you look at what you can do with one versus the other, UPI is actually the more capable one: you can do pretty much anything you want. A good example is not just the built-in internal load balancer and DNS functionality we have with IPI, which actually applies to all the on-premise platforms; even in the cloud you run into the same scenario. Say, for instance, a customer wants to extend an on-premise networking service, like DNS. The way we do IPI today on AWS, it's Route 53 or nothing, so you don't have that option, whereas with UPI on AWS you'd be able to do that. I think that's where a lot of the differences are. With one, what's the term, we give you the bullets, we give you the gun, and we let you use it any way you want, whereas with IPI we put one in the chamber and we point the gun, so we make sure you don't get hurt with it. That's a big difference. One gives you the flexibility to do anything you want, with the risk that it could be a lot more complex, or as custom as you want; customization adds complexity, as a sort of fundamental statement. Whereas we look at IPI as the most robust, reliable, near-perfect way to get a cluster 100% of the time, but we limit what you can do as part of that initial bring-up.

And this is probably another majorly important distinction: installation in OpenShift 4 has really gotten a lot simpler. We purposely keep a lot of options out of the installer on day one and push them to day two. The only real guidelines we have for a day-one option are: the feature is prolific, like everyone wants it; it's unsafe to change on day two; or it's required for installation on day one. We're very strict on that, but as such you don't get the infrastructure customization flexibility you get with UPI. Hopefully that wasn't too much.

So I want to poke on that just a little bit. We've discussed here on the Ask an Admin live stream before that with the change from OpenShift 3 to OpenShift 4, we went from Ansible playbooks that had, I think somebody told me, 1,200-plus options you could set in the values or preferences file or whatever it was, to openshift-install in version 4, where you can customize the install config to some degree. "openshift-install explain" is your friend; I love that command, because you can go through and see all the different options and all the things you can configure. But it might be a hundred options total, across all of the infrastructures and all the different things you can do. And I think you really highlighted it: the result of openshift-install is the cluster being up and running and ready to do all of those other things, which were probably 1,100 of those 1,200 options in version 3, things like configuring all the little cluster add-ons and minutiae and all that other stuff. And with 3, unless all of it succeeded, none of it succeeded, whereas with 4, if the cluster install succeeds, great, you've got a running cluster, and now you can go through that hopefully-not-but-potentially iterative process of deploying all those add-on services.

Where I'm going with all of that: we sometimes get asked, and Katherine, you and I have had these conversations before, partners, and customers as well, will ask, "hey, I want to add configuring my external load balancer as part of the installer. I want openshift-install to have a stanza for my F5, so it'll go out and configure the F5. Can we add that? Why can't we do that?" That type of stuff. So Katherine, I'd appreciate your thoughts and perspective on that.
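To make the contrast with 1,200 Ansible options concrete: the day-one API surface really is just the install config, which `openshift-install explain` documents. A minimal install-config is correspondingly small. All values below are placeholders (domain, cluster name, region, secrets), and the `explain` commands are shown as comments since they require the installer binary.

```shell
# Explore the supported install-time API (requires openshift-install):
#   openshift-install explain installconfig
#   openshift-install explain installconfig.platform.aws
#
# A minimal install-config for an AWS IPI install; every value here is a
# placeholder to be replaced with your own.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: us-east-1
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
EOF
# Then: openshift-install create cluster --dir=.
```

Roughly everything else, the other "1,100 options," is day-two configuration applied to the running cluster.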
Yeah, that's a great way to put it. I don't know that it was 1,200, but if it was, that's awesome; I know it was many, many hundreds, put it that way. That was always the thing. I sort of inherited the installer midway through the 3.x days, so I can't necessarily blame anyone, and I don't want to shift the blame to anyone, but it was always "just one more option." Just one more option and we'd have the cluster the way we wanted it for this customer. What we did was a lot of snowflakey things that were unique and special and really not generally applicable, but that one customer got their option. It was all written in Ansible, everyone could change it, anyone could submit a PR, so it ended up getting to a point where reliability suffered a lot because of the permutations of different options, and which ones, put in different combinations, would cause different results. That was a pretty tricky problem.

As we got to 4, we've been strict. One of the things we don't allow is flags; there are probably about two flags in the whole installer. There's --dir to set the directory, and I want to say there are one or two others, like log level, but there's very little. And we've had the factions saying "oh, we could just parameterize it, we'll just put in a bunch of CLI flags, and we'll be able to run it without changing anything," and we're like, but that's not the API, right? I think we've been very strict on this, and it has shown, from a reliability perspective, in the CI we've got in place: when you run this command and fill out all the options, you're getting a cluster, unless something broke or you missed permissions, and we're trying to do more and more validation on the environment. It's not a hundred percent perfect, but it's significantly more reliable.

So we've shifted the problem from day one to understanding how you configure Kubernetes on day two, and that's always been a struggle, right? We used to look at a knob or a bell and whistle and say it makes things easier on day two, but we broke the installer by making it more complex. So it's been a bit of a struggle getting people to think more Kubernetes-like: more config maps, more manifests. You can still do a lot of that on day one with "create manifests"; we do have that option. Anything you can "oc apply," think of it that way, we can make happen. If you wanted more than one machine set, if you wanted infra nodes on day one, everyone argues it can't be done, and I'm like, yeah, actually it can be done; you've just got to create the manifests. So we've kept the approach, but we've moved it from a flag or a field to a Kubernetes way of doing it, and I think that aligns a lot more with how customers are doing config management, with ACM or GitOps types of setups for policy enforcement. These are the places where I think we just need to shift our mindset from having a bell and whistle or a knob or a flag to really thinking, in Kubernetes terms, how do we do that? That's been, I think, the biggest challenge, but people are starting to come around to it. And as we see more and more environments roll out, we see the GitOps types of deployments, where they want to be able to just check in a PR and boom, push out a config, and make their cluster declaratively conform to what they've defined it to be.

Yeah, and I think the popularity of Christian's live stream, the GitOps Happy Hour, which is happening tomorrow, by the way, says a lot.
It shows that we as a Kubernetes-using industry are maturing, adapting, and adopting many of these new philosophies. Like any new technology, it takes a little while.

Okay, so I'm going to go back and revisit some of those questions from before. I'm so lost now, so apologies to anybody who has chatted anything in about the last five minutes, because I've been holding the chat on my screen right where Ricky asked his question, which is: I would like to host multiple nodes for students to access remotely; is there a high-level roadmap for how I could accomplish this with OpenShift? And I think "roadmap" here is not roadmap-as-in-futures, but "how can I do this" type of thing. So Ricky, first, a couple of things. When you say nodes, are you referring to OpenShift clusters? In which case the simplest thing to do would be to use the IPI installer against Azure or AWS or Google or one of the public clouds and just spin up clusters; at the end of that install process it'll spit out the connection endpoints and credentials, and you can just hand those over to your students and let them do what they do. Alternatively, if it's a shared environment: spin up a cluster somewhere that's publicly accessible, again AWS, Azure, et cetera, deploy that cluster, connect in, and then use something simple like htpasswd authentication. Give each of those users a set of credentials they can access the cluster with, entitle them, or give them permissions to whatever it is they need permissions to, and go from there. The other scenario that might be possible, or that I might be thinking of here: maybe you want to give them something lighter weight than a full five- or six-node cluster, in which case CodeReady Workspaces, excuse me, CodeReady Containers, would be the answer. So you could either help them deploy that locally.
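The shared-cluster-with-htpasswd approach mentioned above looks roughly like the sketch below. The OAuth resource shape is standard OpenShift configuration, but the provider name, secret name, user, and project are placeholders, and the commands that need a live cluster or the `htpasswd` tool are left as comments.

```shell
# Sketch: htpasswd identity provider with one credential per student.
# Commands requiring a cluster or the htpasswd tool are comments.
#   htpasswd -c -B -b users.htpasswd student1 <password>
#   oc create secret generic htpass-secret \
#     --from-file=htpasswd=users.htpasswd -n openshift-config
cat > oauth-cluster.yaml <<'EOF'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: students
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret
EOF
#   oc apply -f oauth-cluster.yaml
# Then grant each student access to their own project, for example:
#   oc adm policy add-role-to-user edit student1 -n student1-project
```

Each student then logs in with their own credentials against the shared cluster endpoint.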
whatever resources they have, or potentially deploying it onto something they can publicly access. So maybe, since Packet is no longer Packet, it's now Equinix Metal, renting a server from them, deploying a number of instances inside of there, and then handing over credentials for those is another way to potentially do it. Catherine, Chris, anything to add there? Yeah, the other option is, if it's just a student case, you could use Hive. You could leverage a hub-and-spoke model: deploy Hive in your hub, and then provision additional clusters through ClusterDeployments as they're needed. Again, I'm not sure if the question was clusters or nodes, but that is another option, and it's fairly minimal to do in terms of using Hive. Good to know; I need to learn more about Hive. It's an API, yeah. So JP Dade asks: can we do UPI for vSphere with Windows worker nodes and OVN-Kubernetes networking? I don't think this is an installer limitation; it's a WMCO, the Windows Machine Config Operator (I always think "Windows Media" for some reason, which is not the same thing), limitation. I thought they're pretty close on that. I know the BYO Windows piece isn't quite there yet, though I think that's coming soon. But the VMware support, sort of a machine-set type of deployment where you're spinning up your own Windows nodes, I thought that was just about available or already available. IPI works; that's GA, I think. The BYO piece is still a release away, if I recall. Yeah, Christian says it's in version 2.0.next, so thank you for spending some time in chat during your workout, Christian. Let's see,
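Going back to the htpasswd suggestion for the student scenario above, a minimal sketch of what that looks like on the cluster (the secret name, provider name, and user names here are illustrative, not from the show):

```yaml
# Assumes an htpasswd file and secret created beforehand, e.g.:
#   htpasswd -c -B -b users.htpasswd student1 <password>
#   oc create secret generic htpass-secret \
#     --from-file=htpasswd=users.htpasswd -n openshift-config
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster            # the cluster-scoped OAuth resource is always named "cluster"
spec:
  identityProviders:
  - name: students         # illustrative identity provider name
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret  # the secret in the openshift-config namespace
```

Once the authentication operator rolls this out, each student can be entitled with something like `oc adm policy add-role-to-user edit student1 -n <their-project>`.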
scrolling down here. I know there were some others, so I'm going to take just a second to read through the chat. Rapscallion Reeves asks: does the installer allow for mixed IPI and UPI deployments? If not, is there an easy-ish way to add manual UPI nodes to an IPI cluster? So, I'm just trying to figure out which way we're asking this, whether we're trying to automate a UPI install or go the other way around, doing an IPI and then adding... I think what he's asking is: can I deploy with IPI and then add manually provisioned nodes on day two? Right, you can. The biggest caveat is they must be on the same platform you've deployed the cluster on. Let me take that one step further: if you're on AWS and you want a bare metal node manually joining the cluster, that won't work. But if you did a platform-agnostic install, where you didn't pick a cloud provider (think `platform: none` in the install-config), that would essentially allow you to mix any which way you want, right? The downside is you wouldn't enable any of the platform integration, so you wouldn't be able to do auto-scaling, you wouldn't be able to do dynamic storage, you wouldn't be able to provision the underlying infrastructure, all of that; pretty much any IPI method wouldn't work in that situation. But you could do it within the same platform: a manual machine addition in the same platform. What you would end up doing is this: there is a URL for the Ignition config for a worker on the masters, and I'd have to look up exactly where it is, but I think it's even covered in the docs somewhere. You would pass it in through user data, depending on the platform, whether it's hosted on a web server somewhere or, again, served off the cluster, which is probably the easiest way to do it. You would then boot that node, and when it's joined it would be a
single worker node on its own. So essentially, and this is an interesting one, not one that off the top of my head I would have said no to, it's effectively the same as UPI and then adding machine sets on day two, just in reverse: you're adding nodes that wouldn't be members of a machine set but would be part of a machine config pool, and you're just following the exact same process. Yep. So that's good to know. I'll follow on that question, and it might be asked later on, I just haven't gotten there yet: can you convert between IPI and UPI? Like, can I deploy IPI and then change that to a UPI cluster, or vice versa? I think it's one of those trick questions, because there's really not any notion of IPI or UPI within the environment. Think of UPI as user-provisioned infrastructure: if I, the user, provision the infrastructure, I can obviously do that, right, you can set up all the resources you need to perform a successful deployment of OpenShift. Vice versa, if you use IPI to have the installer provision that on your behalf, you can do that as well. The trick is, what are you trying to manage on day two? What are you trying to go from installer-provisioned to user-provisioned? Because the cluster on day two is nearly identical; I want to say 99 and 44/100 percent, Ivory-soap identical. You're still going to use a lot of the same operators that manage resources. A good example is the ingress operator on AWS: it would still be managing the network load balancer for *.apps ingress on day two. You could disable that, right; as part of that operation, in the operator, you can disable it. Likewise you can go to the internal registry and say I want to use different storage. You could also stop using the machine API, so machine sets, and do manual nodes. So I think it just depends on what you're trying to change; I don't think there's an "I'm going to convert a
UPI to an IPI" or "an IPI to a UPI"; there's not really a notion of that. It just depends on what secondary-level services you want to enable or disable. Got it. Yeah, and I see here, so, Usama (apologies for butchering anybody's name) asks: is it possible to use IPI with an external load balancer at the same time? And I think the inverse is also true; what I was thinking as you were saying that is, we see people ask: can I use the integrated load balancer of IPI with a UPI deployment on premises? And I think what you just said was more or less along the lines of: deploy IPI and then just don't use the machine sets to scale nodes. It's a little trickier in their question, though. I think what they're probably asking for, and maybe I'm mistaken or reading between the lines here, my guess is, for instance, they did a VMware install where we would have keepalived and HAProxy doing the load balancing for the cluster, and I'm sort of wondering if they're saying: no, I'd like to really use an F5 after I deploy the cluster. That's definitely what they're asking; I asked a separate, or I added on to that, so yes, you are answering their question. So that's the trick. You probably have the most experience with this, but I'm going to sort of flub through it: there's no good way to manage that, that I'm aware of. The internal keepalived/HAProxy setup, I believe it's just you going through the MCO to basically tell it not to work anymore, but I don't recall what other problems that digs up as you do that. I think it's technically possible, and I'm sure someone's probably done it; you probably have better insights than I do. So to answer your question directly: yes, but no. What Catherine is saying is true: technically you could go in and, using the MCO, basically remove or disable the keepalived functionality that's associated with the ingress endpoint, and then basically remove
that virtual IP address, or the DNS name associated with it, and point it to an F5 or a Citrix or whatever external load balancer you're using. But that goes back to the whole thing where now you're breaking, or deliberately modifying, that opinionated IPI installation process, and should you really be using UPI in that instance? And if you want to continue to do things like automatically scale machine sets, well, great, you can add that too. So the real answer here, the way that doesn't deliberately break core IPI functionality, is to simply add a second domain and then point that domain to the external load balancer. So if the default ingress is *.apps, maybe you have a *.prod or a *.something, and that's hosted on, I'm going to pick on F5, right, an F5 load balancer. When you create your routes, you simply say they're managed by that second set of route instances, or excuse me, ingress instances. And then, particularly with our partners who have certified operators (words are hard at the moment), they'll do things like automatically update that external load balancer configuration for additional worker nodes as they get provisioned, and all the things they normally need to do. So it is possible; it's just a little bit different than you might be expecting, maybe. Yeah, and maybe to take that one step deeper, because I think sometimes there's a misunderstanding of why we even use this in the first place. The reason we have these services as part of the IPI deployment, and not in the UPI deployment, is that with UPI we're sort of assuming you have control of your own infrastructure, so you're probably going to bring your own DNS, you're probably going to bring your own load balancer; chances are you have an F5 or something equivalent, since that's what we keep talking about here.
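The "second domain" pattern just described can be sketched as an additional IngressController that only admits routes carrying a matching label; the name, domain, label, and publishing strategy below are illustrative assumptions, not from the show:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external-lb              # illustrative name for the second ingress
  namespace: openshift-ingress-operator
spec:
  domain: apps2.example.com      # second wildcard domain, pointed at the external load balancer
  routeSelector:
    matchLabels:
      type: external-lb          # only routes labeled type=external-lb are admitted here
  endpointPublishingStrategy:
    type: HostNetwork            # assumption: on-prem; a cloud IPI cluster would typically use LoadBalancerService
```

Routes opt in with something like `oc label route myroute type=external-lb`; to keep the default router from also serving them, the default IngressController can be given a complementary `routeSelector`.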
But in the case of IPI, we still need a service to automate the bring-up of the cluster. The reason for this has to do with the whole inception problem of how you bring Kubernetes under management if you don't have Kubernetes running yet, right? So we have this notion of a bootstrap node, and what that is is our temporary control plane. Now, during the pivot from the temporary control plane to the permanent control plane, which is the three control plane nodes you end up with on a running cluster, we need to be able to perform tasks against the API server, and at that time you really don't know where the API server is running. It's running somewhere, it's got an api.<cluster name>.<domain name> address, but you don't really know where it is. So what we use is a load balancer, and it could be done a couple of different ways: you can have a load balancer with health checks fronting a bunch of DNS names, or you can just use round-robin DNS; it really doesn't matter, per se, for the bootstrapping operation. But what you need is that when you resolve api, or api-int, which is actually the right one here, you have to be able to get to a running control plane. The way we do that is we have the three masters and the bootstrap node behind it, and depending on where we are in the cycle, one of them is going to be responding. Without an on-cluster service, we have no way to provision an external service to do that as part of the bring-up. So that's our workaround to bringing Kubernetes under the management of Kubernetes: it's the whole bootstrap process, and that's why we're leveraging on-cluster services for internal communication. Yeah. So, jumping back to some questions, and I'm falling way behind on chat here, my apologies: Dean has another question that I think is an important one. Other than not having access to DHCP, what are some of the main
oppositions we see customers have against using IPI? And I see some of our other audience members chiming in with reasons: Walid saying not knowing the names of nodes, right, for compliance and DNS approvals; let's see, keepalived plus HAProxy floating IP failover time; and we have somebody asking in chat about having a pool of MAC addresses, the only pool of MAC addresses they're allowed to use. I mean, that's a reason, right? So, Catherine, being the product manager, what other things do you see or hear about in that respect? Yeah, so definitely DHCP is one of them, and this is an argument we get into all the time. The trick with this is that it would require a significant retooling of the platform, even if you handed me a bunch of MAC addresses and said, just go fill them out, right? And everyone's definition of it isn't even just "here's a bunch of MAC addresses"; some people say, well, here's a bunch of IP addresses, right, so it's even more specific. But it's really more of the cattle-versus-pets mentality. In some environments, where you're very strict and you can't have DHCP, it's definitely going to be a lot more restrictive in how you assign things and what you let on the network, so it sort of breaks this paradigm of dynamic compute capacity, this whole thing we have with IPI. So that's one of them, and it's a big one. I would say the other one is that credentialing can also be challenging sometimes, where they don't want to automate a lot of stuff; they want to make sure credentials are locked down as hard as they can be, and then give just enough to get a cluster up and running. And the way we do it today, I wouldn't say good or bad, I'm not going to defend it, but we require admin credentials for provisioning a lot of things. So sometimes that's a little bit of a rub for folks, and I
think it's understandable; we're not trying to say otherwise. But we are trying to improve upon that, which will help a number of customers who are using UPI move to IPI, so I think there's definitely some work there. And the other one I've also seen is that sometimes the architectures customers need to work in are very restrictive. A good example: just throwing GCP out there for instance, since I don't feel like it's gotten enough face time today, customers typically use something called cross-project networking. It sounds great on paper: you create a shared VPC with all your networks in it and share it out to the other projects, and OpenShift lives in a different project. The issue is that you lock the account that's provisioning OpenShift down from doing anything beyond reading the networks in the shared VPC, so it can't make any changes. You can't update any firewall rules, IAM, whatever; you pretty much can't create anything. And IPI operates under the assumption "I'm going to create everything you need to be successful in having a perfectly running cluster out of the box." Well, if you can't create things like firewall rules or IAM, you've pretty much broken the model. So that's another reason: sometimes the restrictive nature of the architecture, by locking things down, prevents automation, and there's really no good way around it; you just don't have the right permissions as the account installing OpenShift. So, we've only got about eight minutes left, and I think we have a hard stop today. We have a very hard stop, yes. Okay, so I want to do a bit of rapid fire with questions here. For any questions that we don't answer on the stream, or that we answer incompletely, I'll make sure to put all of those into the blog post. So Friday morning on the openshift.com blog, just look for the blog post that summarizes this particular episode,
and we'll have all of those inside of there. Let's see: any option to use IPI plus an external load balancer, other than router sharding? Unfortunately, no. Although I think the Assisted Installer folks are working on the ability to do something like that: today when they deploy, they use that integrated keepalived load balancer functionality, and I think I've heard they're working on adding the ability to specify an external load balancer as part of that, and Catherine, please feel free to jump in and add anything if needed. So: if I have nodes on oVirt/RHV, I can't add bare metal nodes? I've kind of answered this in chat, and this is one, Catherine, I see you answer all the time, and I answer probably just as frequently, which is, and you've already said this, we can't mix infrastructure types. Let me be more specific: if there is a cloud provider or infrastructure provider integration configured, then you can't mix infrastructure types. So if you deployed, say, vSphere IPI or UPI, or RHV IPI or UPI, then you can't add a physical server to that, because it would not have the same cloud provider integrations available to it, and therefore Kubernetes (not OpenShift, Kubernetes) won't allow it to join the cluster. Yeah, and maybe taking that one step further: technically you may be able to get away with it on RHV, because I don't think they implement a Kubernetes cloud provider yet. This is the one I hate, because I hate saying you can, and then if they ever do down the road and it breaks things, everyone's going to be mad and come after me. But the reality of why it doesn't work has to do with the Kubernetes cloud provider: anyone that implements a node lifecycle controller will essentially think those nodes aren't supposed to be part of the cluster. It'll basically say, hey, what's this foreign node here, I don't know what this is, and it
actually removes it, right? It doesn't know how to deal with the integrations on that node; it knows it's different and unique and special, and it thinks it should never be part of the cluster. So until Kubernetes itself, as you mentioned, has the notion of ignoring node types that are external to the cloud provider you've deployed upon, it will never allow those nodes into the cluster. It's a fundamental limitation of Kubernetes, and we get that question all the time: "I just want to bring a metal node in alongside VMware, I just want to," you know. So anyway, Rapscallion Reeves, I see your comments in here about RHV going away in favor of OpenShift Virtualization and all of that; please reach out to me and we'll have a conversation about what the future holds and how we can help address it. So, andrew.sullivan at redhat.com, or DM me on Twitter, Practical Andrew, and we'll set that up and have that conversation. I think I saw the PM for that in the chat here, so I'm sure he's aware, and we'll loop him in as well. Let's see, I'm scrolling through chat quickly; just a minute, I'm answering the question from Ricky about his students' needs. Okay, so José: OpenShift delivers a ready cluster; day-two configuration is another level of automation where the client needs to map business logic, kind of a difference. So I think José and Christian have been having a conversation around GitOps here; we're trying to keep up with the chat, Argo CD, good use case. Yusef asks: for an IPI installation on vSphere, is there no option to distribute control plane virtual machines on different ESXi hosts? So that's a question that's come up quite a bit recently, and Catherine, again, please keep me honest here and feel free to add on: Red Hat created the machine API provider
for vSphere. So if you look in the machine API operator repo, and I'll dig up a link if I can before the end of the show and post it, basically all the functionality is included there. Today it has no awareness of the underlying cluster topology or settings or features or anything like that. Basically it says: what cluster, what path in the vSphere infrastructure, do you want me to deploy these VMs to, and that's what it does. It doesn't reach in and ask, hey, what's the DRS configuration for this cluster, or anything like that. I don't know whether or not that's on the roadmap; honestly, I'd be a little surprised if it is. I think there's an RFE for it, but it's pretty complex, and different people have different preferences or desires there, and we don't want to presume what that should be. But I'll let Catherine speak to that. Sure, sure, take the good ones. Yeah, so there actually is an RFE on this. I will tell you there are some technical limitations that need to be fixed right now in the cloud provider, and I don't want to blame anyone, but I know there's a bug there, to be able to use something like multi-cluster, which is what a lot of folks would like to do, and I think it's a good one; the good reason is scalability, so you don't hit those limits. There is also the idea of multi-vCenter; let's just call that a fantasy right now, I think that one's a little too far out. But the notion of multi-cluster is definitely one we want to look at. We did do sort of a PoC to figure out if we could do it with UPI, but even UPI isn't going to work, because you have to hard-code the username and password of vCenter in the cloud provider, and who in their right mind would do that? So that's a bug that needs to be fixed in the upstream.
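To make concrete what "it only knows a path, not the cluster topology" means, here is a trimmed sketch of a vSphere MachineSet; every name, the vCenter details, and the provider API version are illustrative placeholders, and a real manifest carries more fields (disk size, full cluster labels, and so on):

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-worker                      # placeholder machine set name
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: demo-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: demo-worker
    spec:
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          kind: VSphereMachineProviderSpec
          template: demo-rhcos           # RHCOS template to clone
          numCPUs: 4
          memoryMiB: 16384
          network:
            devices:
            - networkName: "VM Network"
          workspace:                     # placement is just a path: no DRS or per-host awareness
            server: vcenter.example.com
            datacenter: dc1
            datastore: ds1
            folder: /dc1/vm/demo
            resourcePool: /dc1/host/cluster1/Resources
          credentialsSecret:
            name: vsphere-cloud-credentials
          userDataSecret:
            name: worker-user-data
```

Notice that the `workspace` block is the full extent of the placement knowledge: a server, datacenter, datastore, folder, and resource pool path, which is why anti-affinity across ESXi hosts has to come from vSphere itself today.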
Before we could even bless that as a PoC, that bug has to be fixed. So, I hear the question, I hear the ask, it's come up with a number of customers, and we want to try to do something, but right now we need to fix some fundamental implementation issues before we can even look at how we'd offer that as a deployment method. Okay, and I'll see if I can dig up the RFE and include that in the summary blog post as well. Yeah, I want to say there are like three RFEs on it, and they all have different desires, so they're not even common among people in what they want. All right, well, as Christian just reminded us in chat, we've only got, oh, well, now less than a minute left. Thank you so much, Catherine, for coming on today; I really appreciate you accommodating us on short notice. This has been a fantastic episode having you here and answering all these questions. To our audience, thank you so much for all of your questions; I know we missed a few in there. Again, I'll go through, pull out all those questions, and make sure to address them in the blog post, so keep an eye on the openshift.com blog. Alex has been getting those published at like 6 a.m. Eastern time or something, so if you're an early riser, you'll be good to go. So I will make a plug real quick: I will be guest hosting for Chris on tomorrow's In the Clouds with Marco Bill-Peter, yes, thank you, who is SVP of CX&O, Customer Experience and Operations. So if you have time tomorrow, please feel free to join; we'd appreciate you being there. Otherwise, thank you so much, everybody; we will see you next week at the same time. Bye, all. Thank you, Catherine. Thank you, everyone.
Info
Channel: OpenShift
Views: 972
Rating: 5 out of 5
Keywords: OpenShift, open source, containers, container platform, Kubernetes, K8s, Red Hat, RHEL, Red Hat Enterprise Linux, Linux, OpenShift Online, OpenShift Dedicated, OpenShift Origin, installation platforms, Kubernetes clusters
Id: BYaFd7KKdtU
Length: 62min 25sec (3745 seconds)
Published: Wed Apr 07 2021