OpenShift Administrator’s Office Hour (Ep 3): Assisted Installer with Special Guests

Captions
[Music] Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another episode of OpenShift Administrator Office Hours on OpenShift TV. I'm Chris Short, executive producer of OpenShift TV, and I'm joined today by two of my fellow Red Hatters: the one and only "something of GitOps," Christian Hernandez. I'm not sure what to call you yet, I'm not sure what you want to be called. It's still pending. Yeah, it's still pending. And then, as I referred to him in an internal email, the role of Andrew Sullivan today will be played by... was it an "otter-sounding but smoother" carbon-based life form? That's exactly what you used. I'm not sure whether I'm offended or not. Better sounding, different sounding, but smoother, right? To be honest, Andrew's off helping one of our customers right now, and we are talking about something that Rhys... well, Rhys, introduce yourself and then we'll get to what we're talking about.

Yeah, thanks Chris. Good morning, good afternoon, good evening, everyone. My name is Rhys Oxenham. I am probably as surprised as you are to be invited back again to deliver another session. We're going to be going through the Assisted Installer, which is something I've been working on considerably, and we're going to go into it in a lot more detail. I'm going to try to do a live demonstration, and all being well, do it on real bare metal systems, so there will be a little bit of delay in a few different areas. But yeah, some exciting stuff.

Yeah, I can't wait. And if everything hits the fan, I have a nice clean server across the office here that we can use. Sounds good. I've got three bare metal machines that we're going to try to deploy OpenShift on today using the Assisted Installer. I just wiped them, so let's see how we get on. All right, that sounds like a plan, let's jump all over it.
So, what is the goal of the Assisted Installer? Rhys or Christian, either one of you can answer that question.

Yeah, I'm happy to give a stab at it as I would explain it. We have tried for a very long time to make the consumption of OpenShift a very good experience. We know that customers are looking to deploy OpenShift across a wide variety of footprints, be that on top of virtual platforms, public clouds, private clouds, whatever they are, but we also know that a lot of customers are looking at deploying on top of bare metal. There are lots of reasons why customers want bare metal: they may have particular requirements for their applications, be that performance, or they might want to exploit underlying hardware like GPUs or FPGAs, whatever it might be. So we have to make OpenShift really, really consumable on top of those platforms as well. We have a number of investments around things like the bare metal IPI implementation, automating the infrastructure right from the ground up, but we want to go that little bit further and make it even easier to deploy an OpenShift cluster. We're doing that in the form of a web-based utility, which I'm going to show you live in action, for driving all of that: right from bootstrapping the machines, deploying OpenShift, sharing everything you need, hopefully with DNS records, as Carlos Santana points out. Yeah, we'll go into that a little bit.

So my quick, open-ended question here: how would you categorize bare metal IPI versus the Assisted Installer? Because there's some overlap in terms of the technologies we use in both situations. When would you use one over the other? What are the major differences, the nuances here?

Yeah, so actually the resulting cluster at the end is almost identical. A cluster that you deploy with the Assisted Installer looks and behaves in the exact same way as a bare metal IPI cluster would. The main difference, for me, is just the ease of getting set up and running in the first place. With bare metal IPI you have to find a system to run the OpenShift installer on, just like you would when deploying on top of any infrastructure: public cloud, VMware, RHV, OpenStack, whatever it might be. You then have to tell it, "these are all the nodes I want you to use"; you have to fill out the IPMI out-of-band management configuration, usernames and passwords; and you need to make sure your OpenShift installer machine has network access to those machines so it can provision them, run the bootstrap VM on top of them, and so on. Whereas the Assisted Installer requires nothing but the ability to attach an ISO to the machines. You power them on manually through the out-of-band management platform, they provision, they create a cluster, and away you go. There are very minimal requirements to getting this up and running, and the beauty of it is that you can do it all via the web UI. You don't have to download and run the OpenShift installer or generate your own install-config; everything is done inside a web UI.

So it literally assists you each step of the way, versus the regular IPI method, where you have to download the CLI and pretty much prep everything beforehand. Exactly, and a lot of the benefits will just become apparent as I show it. Perhaps I should just start sharing my screen and show you it live. Yeah, let's kick the tires on this puppy.
There's an expression here in the US, I don't know if you've heard it over there in the UK. I think Chris Short's in the Show-Me State, right? No, I am not in the Show-Me State, that's Missouri. Oh, Missouri, Michigan. What is Michigan's thing, the glove or something? No, Michigan is "Pure Michigan," I think, but there's a longer version of that somewhere. Cool. I've worked for Red Hat for coming up on 12 years now and have had the pleasure of working with a lot of different Red Hatters from all over the world, and I still cannot keep up with all the various Americanisms. Yeah, there are a lot of expressions. We're not shy. No, no.

All right. So the beauty of the Assisted Installer is that it's now directly integrated within cloud.redhat.com. This is hosted by Red Hat; you don't have to download and install anything on premise. It's all SaaS-based. I'm logged in with my employee account, and it's got a whole list of various clusters deployed across the world. You're seeing lots of clusters here just because I'm part of an employee group; we can ignore those for now. All I'm going to do is say "Create Cluster," and at this point it's agnostic. It's not going down the Assisted Installer route yet, but I'll show you how we get there. I just want to deploy OpenShift Container Platform; I'm not going to deploy OpenShift Dedicated here, this is just an employee environment. Like normal, it's going to ask where I want to deploy: onto the public cloud platforms that we support, inside my data center, or indeed on top of my laptop. The entryway, the path into the Assisted Installer, is to say "Run on Bare Metal." I'll hit that, and this is where we get the two options. You're probably expecting to see three options here, because we've talked about that third option, the bare metal IPI infrastructure. That's not a fully supported mechanism right now; it's still in the dev preview or technology preview realm. It can be used, but it's not yet supported, and that's why you only see the user-provisioned infrastructure path here plus the assisted bare metal installer. This is how you get into the pipeline of deploying a bare metal cluster with the Assisted Installer.

All right, so this is where it asks us a number of questions about what we're actually deploying, and this is where we can start talking about DNS and the requirements from an external perspective. The Assisted Installer has very, very minimal requirements when you're deploying an OpenShift cluster. You can deploy into an environment that has DNS already set up, or it can work in a mode where it hosts all of the various DNS internally and you point your /etc/hosts directly at the cluster. I do have partial DNS configured inside my environment, so when it comes to setting up my cluster name, I want it to match my DNS domain name inside the bare metal infrastructure. If I jump over to my jump host, all of my machines, I'm not sure whether you can see it, it's an internal address, but pemlab.rdu2.red... I want to make my cluster basically match this DNS domain name. So when I go to cl... sorry, question?

Yeah, quick question. You said you can deploy this against pre-existing DNS, as in, I have my own DNS server, or you can use /etc/hosts. Where's that sliding scale? Can I have something in the middle, like DNS hosted somewhere else? What's allowed?

Sure. Right now, because we follow the bare metal IPI path, the cluster itself holds and manages internal cluster DNS for everything it needs. It also manages the ingress and API VIPs itself, so we have no requirement for external infrastructure, a helper node for example, to manage the DNS, the load balancing, or the VIPs; we self-manage all of that inside the cluster itself. That solves for the cluster, but obviously I as an administrator, or my clients, need to get access to it as well. So you have a number of choices: you can either update your corporate DNS afterwards to point at the IPs the Assisted Installer used, or you can update your /etc/hosts, and that's what I'll do at the end of this, just for convenience and to show you how it works. Right now there isn't really a sliding scale where you can say, "no, I don't want you to do that." That's the way it works, and after deployment you can choose to make it easier for your clients to access the cluster by using the IPs the Assisted Installer used. You can define them yourself if you want to, and I'll show you how, but it's not really a choice. Gotcha, gotcha.

So this uses the same mechanism as, say, vSphere IPI or OpenStack IPI, where mDNS kind of just takes care of the cluster itself, and then as a day-two task, or even as a separate day-one task, you can point your DNS at this cluster. Exactly right.

Carlos Santana is asking for a drawing or diagram, like, is this inside a VPC, you know, whatever; maybe you can talk about that infra layout. But then he mentions that infrastructure teams will never let you edit corporate DNS, so this is kind of a good thing, right? Yeah. We're not suggesting the installer will update corporate DNS dynamically. What we're saying is that the Assisted Installer will allow you to choose the IP addresses you want to use, and it will host basic DNS in the environment. So if you point at it with an /etc/hosts override, it will all work out of the box. But if you have the ability to, for example, ask corporate IT, "I want two DNS records, one for my API and one for my ingress, and you tell me what IPs I can use," you can set those inside the Assisted Installer, and the cluster then uses those addresses. Nice, so you don't even have to worry about DNS for the first little bit. No, you don't, not at all. As long as the cluster has IP addresses it can use, you're good to go. Nice.

Yeah. Before we proceed, let me briefly talk about the environment I'm using, and I appreciate it's a little bit small. I did my best to enlarge it; that's why we're running a little behind with the show today, trying to figure out how to make this screen bigger. Indeed. So the Assisted Installer, as I mentioned, is really designed for bare metal clusters. You can use it for virtual clusters if you want, it really doesn't matter, but it's designed for the bare metal path because that's typically the more difficult configuration to bring up, for obvious reasons. Inside my lab I just have three bare metal machines, so we're really looking at deploying a converged worker-and-master configuration, where the nodes are simultaneously all roles. This is not a typical production configuration, but it does suit certain configurations that I know we and our customers are looking at; think small-footprint edge configurations where we really want to minimize that footprint. It just proves there's a bit of flexibility in how we deploy OpenShift, and I'll go into some of that flexibility later. The idea here is that I just want to show you how to use the Assisted Installer to get these machines provisioned.
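The /etc/hosts override Rhys describes can be sketched concretely. This is a minimal sketch with a hypothetical cluster name ("pemlab"), base domain ("example.com"), and made-up VIP addresses; substitute whatever names and IPs you set in the installer. Note that /etc/hosts has no wildcard support, so each *.apps hostname you need (console, OAuth, your own routes) gets its own line:

```shell
# Sketch: print /etc/hosts entries for reaching an Assisted Installer
# cluster from a client machine. Cluster name, domain, and VIPs below
# are placeholders; use the values you chose in the installer UI.
make_hosts_entries() {
  local cluster="$1" domain="$2" api_vip="$3" ingress_vip="$4"
  # The API endpoint resolves to the API VIP.
  printf '%s api.%s.%s\n' "$api_vip" "$cluster" "$domain"
  # No wildcards in /etc/hosts, so each *.apps route needs its own
  # line, all pointing at the ingress VIP.
  for app in console-openshift-console oauth-openshift; do
    printf '%s %s.apps.%s.%s\n' "$ingress_vip" "$app" "$cluster" "$domain"
  done
}

# Example with hypothetical addresses; append the output to /etc/hosts:
make_hosts_entries pemlab example.com 192.168.1.200 192.168.1.201
```

If corporate IT can give you real records instead, the same two names (api and *.apps) are all you need to ask for.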
Because even though we try to make it as easy as possible, there is a little bit of manual work involved in bringing these machines up. We'll jump between this window and the next, and don't worry so much about the fact that you can't see some of this text; it's more of an implementation detail, and I'll tell you exactly what I'm doing.

Yeah, so even though it's sort of automated, you still need to have your out-of-band management set up beforehand. It doesn't magically have your out-of-band all set up for you. Exactly. And all I've got here is a jump box that I'm using to get access to these machines and, as we'll explain, to provision them with the ISO. These machines are currently in the Raleigh area and I'm in the UK; if I were to attach a virtual media ISO to all of them from my own machine, it would take a while. Yeah, those bits will take a while to go over the pond. Exactly.

All right, let's get into it. Because I want to somewhat match the DNS environment I currently have, I'm going to call this cluster "pemlab"; it's just the name of the environment. For the OpenShift version we can only select 4.6 nightly at the moment, so this is a nightly build; it's obviously pre-release bits. And the really cool thing, now that this is integrated within cloud.redhat.com, is that it already knows my pull secret. Yeah, no more copy-pasta. Exactly, it's already in there because it's associated with my account, which is really convenient. So go ahead and click "Save & Continue," and what we now have is an additional pane with lots of different options in it, which we're going to go through. The first thing it tells us to do is generate a discovery ISO. Now, this is the clever part. When I want to add my machines to this potential deployment, I simply need to provision them using a discovery ISO. We attach the ISO over the virtual console, the virtual media interface, whatever it might be, or, as simple as it sounds, you could burn the ISO and attach it using a real CD-ROM if you really wanted to. The fact is, these machines only need to come up with this discovery ISO. They then connect back in to cloud.redhat.com using this ISO, which is dynamically generated just for this cluster; they appear here; and then we can configure everything and kick off the installation directly from here.

Yeah, someone just mentioned in the chat that out-of-band management is technically optional, right? Because you can literally, as I joked, burn this to a DVD or CD and have the intern go boot it off the drive. Yeah, and there's also no reason why you couldn't create a USB key with the same ISO, or a spinning disk, whatever you wanted. All we need is to get the machine to boot this ISO. A series of floppies, whatever you want. Yeah, 673 floppies, whatever floats your boat. [Laughter]

All right, so there are a few additional things it's going to ask us now. First of all, the SSH public key; I'm just going to put my key in. The benefit of this is that if these machines have any problems booting up, or they can't access this console, perhaps there's a networking issue somewhere, perhaps a DNS issue, I can get in and debug. Because these machines still need to be able to connect out to the internet: they need to reach cloud.redhat.com, and they need to pull down container images. So there needs to be basic DNS inside the environment, but we don't have to worry about full DNS being pre-established; as I said, the OpenShift cluster will manage the core DNS of the cluster itself. The stated requirements are: hosts must be connected to the internet to form a cluster using this installer; each host wants a valid IP address assigned by DHCP, so obviously it needs an IP, a route to cloud.redhat.com, and DNS capable of resolving it; and crucially, all machines need to be on the same layer 2 network. This is, again, because the cluster self-manages all of its virtual IPs, and those virtual IPs have to be on the same subnet, the same layer 2 network. So, OK, SSH public key, I'll just pull this in and... oopsie daisies. "Oopsie daisies," you say that? My son says that, and he says it because it's a Midwestern thing. But you're not Midwestern. Maybe it's an English-speaker thing. Anyone from Australia? Where's August? He's still sleeping, probably; 23-hour time difference or whatever it is.

All right, so this discovery ISO is now ready for me. What's really cool is they give you the wget command, so you can just go grab it, or the S3 bucket where it lives; you don't have to click around. It's pretty slick. Exactly, it throws it into an S3 bucket, it's ready to go, and I can copy it from there. The one bit I did just gloss over accidentally: there's an option where, if I have an HTTP proxy inside my environment, I can set it before it generates the ISO, so that when my machines come up and need to use the proxy to access cloud.redhat.com or any of the container image repositories, they're good to go. So I'm going to copy that URL. Again, I'm not going to use my browser here to attach to the machines; I'm going to dive straight into my jump host and download it there. Is this the jump host in the terrible box with the bad font? Can we increase the font here? Oh yeah, we can increase this. Thank you. All I'm doing is downloading the ISO with curl.
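The download step itself is a one-liner. A minimal sketch using curl, which is what the jump host ends up using; the URL below is a placeholder, since the real per-cluster link (or ready-made wget command) is copied from the "Generate Discovery ISO" screen:

```shell
# Sketch: fetch the discovery ISO onto the jump host. The URL is a
# placeholder; copy the real, per-cluster URL from the Assisted
# Installer UI.
download_iso() {
  local url="$1" dest="$2"
  # -f fail on server errors, -s silent, -S still show errors,
  # -L follow redirects (the link points into an S3 bucket),
  # -o write to a named file instead of stdout.
  curl -fsSL -o "$dest" "$url"
}

# Example:
# download_iso 'https://s3.example.com/path/discovery.iso' discovery.iso
```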
This machine for some reason doesn't have wget, and I could have installed it, but I'm used to just using curl with an output file, and libcurl is a thing. This shouldn't take long at all, and what I'll do then is attach the ISO to all three of the physical machines. Again, you could do this in a number of different ways. My three machines are just Dell blades, essentially, little FC430 nodes, and I'll use the virtual media interface to attach it. Nice. And there we are, that's done. So I'll start off with my first machine: Connect Virtual Media, and again I apologize that this is likely a little small on your screen, but I'll tell you exactly what I'm doing. I'm mapping a CD/DVD, browsing for the file discovery.iso, and mapping that device. I'll close that, return to the iDRAC console, go to Setup, and change the first boot device, because by default these machines will want to PXE boot inside this environment. I'll say Virtual CD/DVD and apply that, then go to Power/Thermal and power on the system. And yes, I definitely want to power on the system. What you'll see is that this machine should turn on in a second... there we go, this machine is now turning on. It takes a little while, of course; remember, it's still a real bare metal machine, and it's going to get to the point where it tries to boot the virtual CD ISO, pulling it across the network. This jump host I'm using is a VM in the same rack, so it shouldn't be too bad, certainly a lot quicker than pulling it over the Atlantic, but it still takes a little while. "Over the Atlantic" sounds like a police term. So I'll leave this one going, and what I'm going to do pretty quickly is the exact same thing on the other two machines: Connect Virtual Media, and I'll just run through it. I remember when out-of-band management first came out.
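For BMCs that speak standard IPMI, the clicking Rhys does in the iDRAC (first boot device to virtual CD, then power on) can also be scripted with ipmitool. This is a sketch with a placeholder BMC address and credentials; note that attaching the virtual-media ISO itself is vendor-specific (Dell's racadm, for example) and is not part of plain IPMI, so it isn't shown here:

```shell
# Sketch: set the next boot device to the (already attached) virtual
# CD/DVD and power the node on, over IPMI instead of the iDRAC web UI.
# The BMC address and credentials are placeholders.
boot_from_virtual_cd() {
  local bmc="$1" user="$2" pass="$3"
  # One-shot boot-device override: boot from CD/DVD on next power-up.
  ipmitool -I lanplus -H "$bmc" -U "$user" -P "$pass" chassis bootdev cdrom
  # Power the node on.
  ipmitool -I lanplus -H "$bmc" -U "$user" -P "$pass" chassis power on
}

# Example (hypothetical BMC):
# boot_from_virtual_cd 10.0.0.10 root changeme
```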
When I got that, I was like, this is a game changer, I don't have to run down to the data center every time. Yeah, exactly, I don't have to take the golf cart over. Yeah, I remember those days. "I'll take that one, I have to go to the data center anyway." "I'm off to the backup data center, I'm off to the primary data center, does anybody need anything?" Exactly, while I'm there, because I'm only going once. Right, I'm not coming back.

All right, now just this last one. You can see "IPMI boot to virtual CD requested," which is expected, and on the last one: boot the virtual CD/DVD, power on. So, when it generated the ISO, it essentially embedded all the options it needed for the install. It's actually kind of basic; it doesn't embed the install-config, because there are still additional configuration options we need to go through, but it's enough so that when the ISO comes up, and you'll see it's just a CoreOS image, it connects out to the Assisted Installer pane, which then lets you set the rest of the configuration. Then we can say "install a cluster now" and away it goes; we'll see that happen in a few minutes. All right, so now it's loading the kernel; it's going to do the ramdisk and the Ignition config as well, but this is unfortunately going to take a little while, again because these are bare metal machines, and indeed this is RHEL CoreOS, and because we're pulling the same ISO to all three machines at the same time. That might be a bit intensive on the network, or on the hardware, I would imagine. Yeah, and it's also pushing all of this through a Java interface. Oh, that's right, I forgot there was a time when everything was written in Java. Well, even vSphere now, they just deprecated the Flash client; they're all going HTML5. I remember when the HTML5 stuff came out, like, five or six years ago; barely everything is starting to move over. I actually think this is HTML5. Is it? Maybe I'm wrong. Anything's possible, indeed.

So you're going to see these three machines eventually boot up into the CoreOS image, and what we're going to see, if I just quickly revert back to this window (I can close that now), is that this window pane is just going to sit there waiting for these hosts to appear. They're going to boot up and look at what is essentially just an Ignition config, which defines exactly what the machine needs to do: it starts a basic container that connects into this service, which then starts gathering information about all of the machines being provisioned. We can see all the specifications, make sure they're the right nodes, that they have the right CPU, memory, disk, and networking configuration, and start doing some additional tasks, which I'll show you shortly.

So there's an implicit prerequisite here. Since the hosts are essentially phoning home, connecting to our SaaS at cloud.redhat.com, this needs access to the internet, right? Correct, each host needs to be able to reach out and communicate with cloud.redhat.com. The connection doesn't have to be direct, though; it can be via a proxy, and that's why, when we generated the discovery ISO (unfortunately I skipped over it, sorry), there's an option to provide your HTTP proxy so the machines can get out. But obviously, when these machines come up, the first thing they're going to do is DHCP, so you need a DHCP server in the environment so that each machine gets an IP address. Once it has an IP address, it needs a gateway to reach this service, and you also need at least basic DNS in the environment: the name server that DHCP hands out needs to be able to resolve, not all of the node names you're going to use inside the cluster, not the API or ingress VIPs, just cloud.redhat.com and the various image registries we need to pull the OpenShift images from during the installation.

Yeah, there's a question, and I think I know the answer, but I'll let you take it: can you run this on premise, or is it Red Hat-hosted only? Is there a plan to bring this to, I assume they're asking, disconnected environments? So right now this is only on cloud.redhat.com. Of course, like everything we do, it's all open source, so you can go out on GitHub and see how to deploy instances of this yourself. But in terms of a supported configuration in a disconnected environment, or something you just deploy inside your own infrastructure, I don't know, I'm afraid; I would need to check with the product management teams, as that's their decision to make. Yeah, sure, gotcha. Oh, and someone just mentioned that you can actually run this as a container; I guess that's the open source version, as you mentioned. Yeah.

OK, so you can see these machines are coming up. This one's just finished, so we should see it pop up right about now, all being well. Live demos and all that. Yeah, and all that. What's funny to me is that you're doing a live demo and my computer is the one struggling right now.
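Those requirements (DHCP, a route out, and DNS that can resolve cloud.redhat.com and the image registries, but none of the cluster's own names) lend themselves to a quick preflight check from any machine on the same network. A sketch; the registry list here is illustrative:

```shell
# Sketch: preflight the discovery hosts' DNS requirement. They only
# need to resolve the SaaS endpoint and the image registries; the
# cluster's own api/ingress names are self-managed and NOT required.
can_resolve() {
  # getent consults the system resolver, the same path the hosts use.
  getent hosts "$1" > /dev/null
}

preflight_dns() {
  local rc=0 host
  for host in cloud.redhat.com quay.io registry.redhat.io; do
    if can_resolve "$host"; then
      echo "OK:   $host resolves"
    else
      echo "FAIL: $host does not resolve"
      rc=1
    fi
  done
  return "$rc"
}
```

Run `preflight_dns` on a box that gets its DHCP and DNS from the same environment as the bare metal nodes; if it fails there, the discovery ISO will fail to phone home too.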
super struggling right now gotta put a fan on it by big it is 66 degrees in my office right now there's not much more i can do [Laughter] the laptop you're using uh it's like the new 16-inch macbook pro that red hat's giving out yeah so it's it's it's got a lot of horsepower and everything i'm just doing a lot with it right now that's fair it's time to start closing things yeah let's see what's going oh as i clicked on this it appeared so here you can see our first node has uh just come up and what we'll see is you know if we we dive into you can see it's some additional information about it so yes a dell machine it's an fc430 blade you know cpu memory bmc some additional information about the disk disk sizes the networking that we have inside of this machine and detected ips and subnets and you'll see that all three of these machines will will all start to come in so this second one node 12 should be our next one and then what do we have here oh failed to mount yeah i might have might have pushed this a little bit hard doing all three at the same time let's just reboot this guy and get him uh oh there you go see before this would have been like oh now i have to drive back with the golf cart to the data center yeah so i may need to ask this to cd boot again because i don't know no that's windows uh yeah safe mode i think it's f11 on these yeah i'll just make sure it selects the cd there was also a question while um um i know you're multi-threaded right now but um there was a question about if you can take the the contents of the iso and just boot the uh the bare metal servers off of pixie there's no reason why you can't do that yeah okay that's not something that we provide the instructions to do at least not in the context of the assisted installer yeah but there's no reason why you couldn't um you know deliver the iso in that format um it's it don't it only runs that iso temporarily just to get it up and running and then we you know we we redeploy core os onto 
the root disk anyway so yeah because i mean you know just in general you can you can unpack the iso you can just mount it read only wherever you want to and my pixie off of it so this is funny this this system i did actually have a little bit of uh trouble with the other day um is some kind of it was complaining about some kind of networking issue between oh is that the thing you were talking about in slack the other day maybe because this is taking its time it doesn't usually take that long time for remote hands yes they're going to pull the wrong blade anyway so yeah because that never happens right what's bad is when they pull the wrong disc you want them to uh yeah it's flashing flashing oh damn it that now we have two servers down yeah [Laughter] yeah that's always a problem yeah no it's funny my my first job out of the air force uh my it was right before we moved up here uh or right after we moved into our new home in wake forest uh that we bought while we were down there uh i became the person that lived closest to the data center the primary data center so it was always like all right if anybody's got a list of primary data center stuff just write it on my whiteboard here outside my cubicle i'll hit it on the way home you know yeah that's right i'll stop by and then i'll just check them off one by one yep so this one should go a little quicker because it's going to be the only one actually pulling that iso um but whilst that is doing that i can revert to this and i can show you some of the additional things we have um in here so um we've been through you know this shows you all about the information about the machine that is there um you'll also see that it's brought through a host name now those host names match the names of the machines you would have seen in the idrac now the only reason why that has happened is because my dhcp server allocates hostnames so it provides that you know through an option but your dhcp server when you're using this doesn't 
have to provide hostnames if your dhcp server isn't issuing host names you'll simply have a random uuid displayed here and to identify the machine you can of course use lots of different things perhaps serial numbers bmc addresses so you know exactly which machine is which but you can then set a host name in here edit host and it's got a discovered hostname and you can override it so if you want to make sure that this has a known host name something you obviously want to use you can set it here and then all of the nodes inside of the cluster will be able to contact this machine through the host name you set here no requirements yeah using the mdns yeah using mdns but also coredns so every machine has a coredns implementation and so yeah it again it follows the bare metal ipi path for all of that so just waiting for this guy still and yeah there's a few additional things you can do here as well you can disable it you can view host events or you can delete it so viewing host events um this is not full right now but when you are actually deploying and you're actually running through a deployment this will be filled with really useful information about what is going on inside of that machine so you know for troubleshooting purposes you can absolutely dive into this and see exactly what's going on at what stage you know deploying coreos onto the root disk what the progress is on that it's incredibly useful um and someone asked um and i don't know if you're gonna get to it when the third node is added um they asked early on like how do you tell the iso whether something's a master whether it's a bootstrap um but yeah that's something that you can actually do directly with the assisted installer correct you can and i'll absolutely talk about that um so the role what we'll cover now so here you can see role it can either be automatic master or worker now if you've only got three machines you're
by default all three of them are going to be both workers and masters but you could you know i could be provisioning six seven eight you know however many nodes that i would like to and i can address which ones are which there's also some logic in here you could leave them all on automatic and then the assisted installer would based on you know best fit what resources do i have or specification of the machines decide which ones would make the best masters and which ones would make the best workers and so it will automatically assign them but it makes no difference here i can just leave this on automatic because i only have three machines so what happens to so anytime i do an install right um there is an implicit for like even if you're doing a compact cluster there's an implicit fourth node right yeah what role um or where is the bootstrap right is that in the platform itself is that in the install where is what's actually really cool with this is that you don't need an additional node for the bootstrap process it still has a bootstrap and you'll absolutely see this shortly but what happens is one of the nodes will get chosen to be the bootstrap machine temporarily so what happens is and you will watch this happen in real time shortly but the two nodes that aren't the bootstrap machine they will get provisioned into coreos you know of course rhel coreos will be written to the root disk they'll be rebooted the openshift cluster will start to come up on those two nodes in a temporary two node cluster configuration it will start etcd bring up the openshift control plane and once that has happened that third node that was the temporary bootstrap will then pivot redeploy into coreos and become that third cluster node so that bootstrap is just running temporarily like it would normally but the real beauty of this is that it doesn't need that additional node to do it it just runs on one of the
main nodes that's awesome that is so cool so that was actually early on i asked i'm like hey uh can we make the bootstrap like a container can i run it like on my laptop it'd be cool like i just do a podman run but i think this is a lot cooler implementation um especially if you're like low on resources or if you're doing like a bare metal um install or if you really only have three nodes it'll pivot that uh the bootstrap into uh into a master so that's really cool exactly and you know you think about some of the configurations that you know customers and partners are looking to do your minimal footprint right at the edge you don't want to have that fourth node there if you can help it right so just having three nodes that's incredibly useful all right how's this getting on all right that's finally up and running so we should see that additional node coming in here in just a couple of minutes um and then we should be able to proceed with the rest of the installation you'll see it starting some additional containers and it's going to do a little bit of discovery of the configuration of the machine you can see the classic mount invalid yeah i think there's an error then they told me don't worry about that error it was never explained to me what it was but okay yeah all right we now have our three masters or our three nodes reporting into the assisted installer ui now again i'm just going to leave this role automatic i could force it to be a master i couldn't force all of them to be workers because that would mean there were no masters and the install would definitely fail um so i'm just going to leave this on automatic for now but as i said there's logic built into the assisted installer ui to try and do that best fit placement of the roles should i have more than three nodes okay so now we can move on um assuming there's no additional questions on what's
uh what i've shown so far now we have to provide the base domain now again just like any install config you have to provide a cluster name and a base domain the cluster name we've already defined as pemlab because i wanted to somewhat match the dns environment that it's going to be going into so for me this is just rdu2.redhat.com so it's just an internal co-location that we have so we get the full cluster address pemlab.rdu2.redhat.com now then um it's going to ask me some additional config questions what does my network configuration look like now basic is very very simple we use the out of the box subnet allocations so um you know the cluster ips that would be used and the various other subnets that we use inside of an openshift cluster you know for the sdn type networking and the pod networks i can override those by selecting advanced and you can select things here service network cluster network and various different things but i'll just leave it with defaults and just go with the basic allocations we select which subnet we want to use now um i only have one network configured inside of this environment but if you had multiple you could say well i want to use this network which is where i want to put all of my apis and the ingress so i'm just going to select this default network it'll verify that all of those are available on all three nodes now this is important because as i said earlier the cluster is responsible for doing all of the load balancing keeping those vips up and running and managing internal dns so it has to make sure that the network that you select for where those ips listen is common across all three of your masters now then the last question is around what are the ip addresses i want to use for the api and the ingress now again if you want to use known ip addresses for example you've already communicated with your it team
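(for comparison these are the same fields in a regular install-config.yaml, the cluster name and base domain are the ones from the demo and the subnets are the stock defaults being described, shown here as an illustrative fragment only:)

```yaml
# illustrative install-config.yaml fragment covering what the
# assisted installer ui asks for: name, base domain, and the
# default cluster/service networks unless you pick "advanced"
apiVersion: v1
baseDomain: rdu2.redhat.com
metadata:
  name: pemlab
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
```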
and you've said well i need dns entries for api and ingress and i just need to know what ip addresses you've set up for those you can plug them in here and the cluster will bring those ip addresses up and you can rely on your external dns to point all the traffic to those ips that we put in here nice um but what's really cool one of the newest features of this is i can allocate these virtual ips by my dhcp server so if you're unsure of what ips you want to use and remember that i just want to put this into an environment where i have very minimal external requirements and i'm going to update my /etc/hosts to point to this environment when i'm done this is really cool so if you don't know what ips you can use well just hit allocate virtual ips by the dhcp server and it will find ips that you can use now inside of my environment my dhcp server will happily allocate ips to any device that wants an ip right that may not be uh you know the same for all of our customers i completely get that but in environments that will this is handy but of course it has the option where you can fix them should you want to um what i'm going to do at this stage i'm going to say validate and save changes because i want it to try and grab ips you'll see that you have this spinning wheel here where it'll try and grab some ips from my dhcp server in the lab that should only take a couple of seconds to find ips if my dhcp server is being friendly if someone and i guess you alluded to the answer is no but if someone does have a dhcp server but they do like mac address filtering is there any way to um have them automatically assigned or do they have to go the static route at that point i believe they'd have to go down the static route first because i think that the mac addresses that we use here are randomly generated gotcha so we wouldn't know although i'm sure we could find out from some of the
engineers i don't think that we know what they are up front um but they are if i recall just sort of the libvirt type mac address range okay yeah um so yeah now we've got these ips so 10.11.173.189 for the api and 187 for the ingress um traffic yeah allocated by a dhcp server but it's the cluster's responsibility to keep renewing this lease and again those ips will simply listen uh using keepalived right last question do i wanna use the same host discovery ssh key so the same ssh key for the resulting cluster yep i absolutely do what you'll then see is the cluster is ready to be installed and you'll also see that these three have turned to known so these machines are ready to go so i'll go ahead and click install cluster and what you'll see is that in a second one of these will turn to be the bootstrap machine you'll see it come up there in brackets bootstrap nice and take a couple of seconds and you'll see it you know lists a few different things about the environment um that have been decided for me because i selected the defaults so standard cluster network standard service network and so on and whilst it's doing that you'll see that nothing is happening on these machines just yet because the back end environment is still working on setting all of this up for me there we go that third node node 11 is our bootstrap machine and you see it's starting to progress at 14 percent we can go in and view cluster events and this is fantastic this will show you all the information about what's going on as and when you want it so you can see you know node 11 is writing image to disk so that's actually writing down the coreos image directly to the root disk and if i was to go on to the environment here node 11 you'll see in a few seconds this machine essentially rebooting so it's no longer reliant on that discovery iso it'll be running off its root disk and again takes a little
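(the vips just mentioned are held on the masters by keepalived, conceptually the cluster manages something shaped like the vrrp instance below for the api vip. this is a sketch only, the operator renders its own configuration, and the interface name and router id here are invented for illustration:)

```conf
# sketch of a keepalived vrrp instance for the demo's api vip
vrrp_instance pemlab_API {
  state BACKUP
  interface ens3          # assumed nic name, yours will differ
  virtual_router_id 51    # arbitrary example id
  virtual_ipaddress {
    10.11.173.189         # the dhcp-allocated api vip from the demo
  }
}
```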
while for it to do it but this information here is really uh really useful to have i'm about to type one of my uh idioms in or is it idiomisms i don't know what it's called but like you know technology exists to improve human existence this is the evolution here folks yeah yeah the chrisisms right there you go shortisms i don't know which shortism it is whatever yeah evolution at scale that's right so i mean there's not really too much to see what's going on now node 11 that's our temporary bootstrap machine so you're going to see this one you know keep provisioning additional um container images because that's you know this is the one that's going to help orchestrate the deployment of the openshift cluster the temporary two node openshift cluster on node 10 and node 12. so we're going to see these two reboot first into the uh into their disk there you go this guy's off this guy's going someone mentioned the fonts are small yeah so in the beginning if you didn't catch that this is a remote desktop into another system so it's kind of hard to scale multi layers of remote desktop yeah yeah so reese's remote desktopping and then from there remote desktopping into somewhere else so it's yeah sorry it's not fun yeah what's written here is not really too important um i'm essentially just saying that these machines will have their root disk provisioned first by the assisted installer and they will come up as the two node cluster so i just wanted you to see that these two machines will reboot first so you can see node 10 is currently in its bios node 12 has just been rebooted so those two will come up with the openshift cluster first a temporary two node cluster you know again two node clusters are not supported but it's only temporary because we're using this third node or in our case node 11 as you can see here node 11 as the bootstrap
machine and then once the openshift installation is complete on the two nodes node 10 and node 12 it will then pivot and become that third cluster node so just a bit of a technical question when this node when it's a temporary bootstrap is that running off the iso or okay okay and then the last part of it is to actually write the master configuration to the disk and then it'll reboot so it doesn't yeah it doesn't reboot as bootstrap and it does some magic for it to reboot as master it's all running off the live iso yeah exactly and you know you'll see that this one hasn't rebooted it's only node 10 and 12. i'm sorry for jumping back and forth 10 and 12 were our masters and node 11 was the bootstrap machine you'll see this node 11 it won't reboot until much later on in the process so you can see there's as well some additional steps that will show exactly where it is so node 10 it knows it's in the rebooting step which is step four of seven node 12 will also be rebooting node 11 which is our bootstrap you see waiting for control plane so it wants to make sure that the openshift control plane has been established on those two nodes before it will actually proceed um any further and again all of the cluster events they will continue to you know put out information about exactly what's going on all the various different stages you know node 12 which was one of the last masters provisioned its root disk and now it's rebooting and now it's reached the configuring stage so you'll see if i go back this node 12 is now in the fifth stage which is configuring so it's actually bringing up that openshift control plane on that machine so whilst we're waiting for some of those things you know you can abort installations at this stage you know if things go wrong and there are options for retrying installation it's actually really powerful um there's also the ability to download installation logs so um you know we'll be downloading
halfway through so it's not really a good um indication of what's going on and in fact you've only got two of the machines which will be our two masters so far um you can go in here and you can gather additional information so installer logs and agent logs so you can see exactly what's going on um again just in case you really love small font yeah sorry about that um i could probably open this in i thought i could change it in here um it's probably on the text editor menu on the top left right is there oh no just right there use system font so yeah make it like 32 or something ridiculous yeah yeah so this is just um output out of one of my master machines so you'll see for example you know it's all about the root disks writing out the coreos metal image downloading it from essentially openshift ci writing that out to the file system and then essentially rebooting that machine so we downloaded these logs a little bit early in the process um but you know you can continue to download these files and get more insight into what's going on or if for example the install failed you know you can see exactly what the openshift installer has tried to do you know where did that fail you know is it timing out because you can't get access to an http proxy or does it not have uh dns you know it can't access the container registries because of perhaps a dns problem or something like that or did it just time out there's lots of different reasons as to why this can fail and having access to those install logs is incredibly um incredibly useful especially being able to download it directly from this ui yeah yeah well even though we endeavored to try to make things as automated and as self-contained as possible it'll still inevitably fail somewhere right because we can't account for every environmental right um issue one of the issues that people actually uh run into as um even as someone that um i
wrote the helper node right so a lot of people know about that i actually get a lot of github issues and it turns out and actually chris who had to step away for a second actually had a similar issue um where the disks were just too slow for etcd right so speak of the devil and he shall appear [Music] so you know there's a lot of environmental issues outside you know essentially our control right one of which exactly make sure your disks are fast enough to run etcd there's actually a script you can run to make sure it's a super quick and easy script too i'll drop the link to it because i actually yeah had the issue i helped him debug something yeah like while you were moving it was like dude what is going on with my like i had like the three master nodes were just like one would reboot every once in a while i was like i have never seen this before what could possibly be going on and he was like what kind of disks are these and i'm like no they're hdd 10ks yeah how many do you have um like four and he's like that might not be fast enough to run this thing and yeah sure no it was definitely not fast enough to run etcd let alone the rest of openshift or kubernetes in general right so if you can't run etcd you're not gonna be able to run anything else so yeah it now has shiny new ssds in it fantastic so first of all it's good to be here and apologies for being here late i hear you but i don't i hear you can't see you yeah i hear your face is the voice of god i see him i see him on the twitch why don't you all see him oh do you have him in a gallery view but i don't see him in the gallery view yeah you got to change yourself to gallery view gotcha i don't know how to do that so i'm not going to do that no i um i had a previous commitment that somehow i missed on my calendar so not that i had any worry that you all actually needed me for any reason um but i need you you know uh that's not
for the stream that went wrong quickly yeah so i did want to comment on the etcd performance thing um i don't know if you all noticed i sent a long diatribe to our internal openshift sme mailing list around that the other day of you know etcd and the way that it functions is extremely latency sensitive and not just for disks but also for the network and the lower that latency is the better an experience you'll have and the higher it goes the worse things get you really want to get that um i think our unofficial recommendations are uh what 10 milliseconds of disk latency and i think five milliseconds of network latency and i have now way less latency and way more speed than what i need which is just and now that the assisted installer is here like i'm kicking the tires on it right now i just downloaded the iso yeah the assisted installer is uh it's nice i'm excited about it i'm excited to see it in action and see it really getting the tires kicked so to speak someone asked is the etcd benchmark part of the assisted installer would it warn us if our systems were shoddy that is a very good question if it isn't it should yeah like that's a very simple command to run in there folks like it's not that hard yeah so you know essentially though when the um the iso boots up in theory when we are doing the discovery there's no reason why we couldn't actually run a disk benchmark or something like that just to check and put a warning here i mean that sounds like a great rfe so i could see you know yeah i could go into my node and say okay this machine you know it has like a little badge or something yeah exactly you know disks are good to go at least at this stage you know right um and if they're not you can still force it through and you know your mileage may vary but yeah someone says uh faster disks for masters and also i'll actually
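(the quick disk check being discussed can be approximated in a few lines. this is a sketch of the idea, timing small writes each followed by fdatasync, which is the pattern etcd's write-ahead log uses, and it is not the actual script linked in chat. the 10 ms threshold matches the unofficial recommendation mentioned above:)

```python
import os
import tempfile
import time


def fdatasync_p99(path, writes=200, size=2300):
    """p99 latency in ms of write+fdatasync pairs, etcd's wal pattern."""
    buf = b"\0" * size
    latencies = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(writes):
            start = time.perf_counter()
            os.write(fd, buf)
            os.fdatasync(fd)  # force the write to stable storage
            latencies.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
    latencies.sort()
    return latencies[int(len(latencies) * 0.99) - 1]


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        p99 = fdatasync_p99(os.path.join(d, "wal_probe"))
        print(f"p99 fdatasync latency: {p99:.2f} ms")
        print("looks ok for etcd" if p99 < 10.0 else "likely too slow for etcd")
```

(run it against the disk that will hold /var/lib/etcd; a tmpfs or page-cache-backed path will give unrealistically good numbers.)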
reiterate what andrew said also network latency right because the um the raft algorithm has to run over the network so yeah the way it commits data is it traverses the network i think three times to do the write etcd is a very chatty store yeah so effectively the data is received by the lead node the lead node then sends it to the follower nodes the follower nodes then ack back so now everybody has the data then the lead node then says okay now everybody commit the data and at that point it is actually stored which is why there's network traversal in there which is why a stretch cluster won't work right for masters at least right if you have a master in la and one in the uk that's just or why it's highly highly uh complex right well it's highly complex that's definitely true so the way i've seen people do quote multi-cloud and there's a lot of talk about multi-cloud and how it's bad right now but the way i've seen people do multi-cloud is they actually have openshift instances in two different clouds and there's independent workloads that happen in each cloud kind of thing right but those independent workloads could in theory be the same and you can use bgp anycast and make it look like you have two different data centers at the same time kind of thing so um that's entirely possible right like multi-cloud doing it that way is the multi-cloud that i think of right not necessarily like i'm using multiple clouds at the same time just because i can kind of thing right like well yeah and not to go too far off topic right um in the end um multi-cloud is really dependent on the application right people are trying to solve it with infrastructure and you can right it's all in the app right so if your app can't run in multiple clouds then it doesn't matter how you set up your underlying infrastructure so yeah i agree exactly all right sorry to interject so both of my master nodes are now installed right so that's you know those two are
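(a toy model of the traversal just described, the leader replicates an entry and waits for a quorum of acks before committing, which also shows why a stretched master hurts. assumptions: symmetric round-trip times, zero disk time, a simplification of raft rather than etcd's real implementation:)

```python
def commit_latency_ms(follower_rtts_ms):
    """Time until the leader can commit an entry: the leader counts itself
    toward quorum, so it only waits for the fastest (quorum - 1) follower
    acks before telling everyone to commit."""
    n = len(follower_rtts_ms) + 1       # cluster size including the leader
    acks_needed = (n // 2 + 1) - 1      # quorum minus the leader itself
    return sorted(follower_rtts_ms)[acks_needed - 1]


# 3 masters in one site: commit waits on the single fastest follower
print(commit_latency_ms([2, 3]))       # → 2
# stretch one master across a wan: a 3-node commit can still ride the fast link
print(commit_latency_ms([2, 140]))     # → 2
# but with only distant followers every single commit pays the wan rtt
print(commit_latency_ms([140, 150]))   # → 140
```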
good to go um this one is still waiting for everything to now properly come up on those two machines um and then it'll go ahead and proceed and um it'll then pivot and become that third master node and you see waiting for control plane this is installed so this has completed its installation successfully okay now whilst that third machine actually then goes ahead and we might actually see this uh rebooting it no it's not so um i want to show you uh download kubeconfig so directly right from the ui we can say i want to download the kubeconfig and um i will bring this up and you can see that there now um i can't contact this yet primarily because i haven't set my dns to point at this environment i could go in and i could edit the dns records inside of my lab the corporate dns records to you know point to the ips that it allocated me inside of this cluster but i'm going to wait until the openshift cluster is fully up and running to do that um but you know it's very convenient to be able to grab this kubeconfig directly from the ui nice and uh what i will do is uh whilst we're waiting for that third machine to come online i will do export kubeconfig no what was it called well i download that file less no alright download that file let me download that i don't want to show in a folder no ingress i remember sorry let me that's a heavy-handed yeah remove all my kubeconfigs yeah it's okay you're doing lots of these uh so reese or christian um reese you're typing so i'll poke at christian um there's a question from uh c santana uh how does system file partition configuration work uh for example infrastructure nodes that need some special disk setup for logs or registry images that would be a post step right like you would have well yeah there is um the partitioning happens i know automatically i don't know how that layout um looks like i would have to just dig in and just kind of see
what that layout looks like um i know you can as chris alluded to as a day two step add additional disks um where the containers live i don't think it's configurable at this point um yeah someone mentioned by default we install everything on the first disk right so if you have like 12 disks we install everything on that first disk which is why i think we require a minimum 120 gig right yeah yeah so very much to that point right by default we don't configure it at all it assumes that the entire installation disk which can be controlled when you install coreos right you can specify um i think it's inst disk or something like that as sda sdb whatever you want to use for that um but it will assume that it can occupy the entire disk and there is no partitioning right we used to recommend things like graph storage for the docker daemon being on its own partition or its own disk et cetera um the official recommendation is none of that is a recommendation anymore um okay so and implementing that at install time can be done but it's not straightforward um and reese i don't know if any of that is um incorporated into the assisted installer or not i don't think it is no not at this stage yeah effectively our generic recommendation is one disk for everything um and then clean it up afterwards if needed which we don't think will be needed in many or most instances all right so like of all the things for twitch not to pick up on yes all right like so we have a running joke the mod actions panel in twitch is literally like like yesterday it blocked christian posta his name right it's just like and i accidentally typed in the wrong thing as you know s and c are close on the keyboard and there we go yeah uh thanks twitch my bad sorry everybody that was not my intention and of course there's no edit button or yeah i guess i could delete it or
leave it in all its glory we don't as a yeah i don't know as a warning to others yes yes yeah blocked christian posta don't know why some i think it was universe that mentioned it it like put your thing on pause and i was like wait why it doesn't tell you why it just says automod that's it okay is it the word christian i don't know actually is this a multi-language thing or i sent that screenshot to posta and he laughed yeah narendra was the one that mentioned it it was just like what and of course today it's like oh no you can go ahead chris it's probably because i'm like a moderator or whatever yeah since you're a mod you should always mod the mods all right folks so where we are now is this third machine our node 11 which was our bootstrap machine this has now since rebooted and it's you know this machine has a bunch of uh nics on it so it's just gonna go through and try and dhcp on them all just by default it's eventually going to give up which it just has and you will see this machine then bring up all the necessary control plane services you know etcd all of the various different uh kubernetes bits and pieces to be that full third master node and you'll see that now we're at 95 complete this is in the configuring stage which we previously saw on the two masters and hopefully in just a few minutes this will go to 100 complete and it's going to give us some additional options just at the top where we can go ahead and actually connect into our new bare metal cluster that has been deployed with the assisted installer so all right just a few more minutes and we should be good to go yeah i saw also uh rhys just in case you care um i figured out how to bring andrew up on my view oh you did yeah yeah there's like a little triangle uh next to oh yeah if you click that triangle it makes sense here the triangle makes sense oh because you scrolled off the screen are your windows small and you like scrolling yeah yeah it
basically yeah sorry i was like on like a 4k screen right now so all right oh so it looks like it came up as a master yeah this one is up and it's just going to take a few more minutes to get exactly where we need to be um what i can probably show is the latest installation logs and uh let's see 31 kilobytes that's going to be our bootstrap machine and you can see you got the bootkube logs and the agent logs so the agent is a service that we bring up on each node when it boots up as part of the assisted installer and so this will show us or give us logs for pretty much everything that it's trying to do you know connect out to cloud.redhat.com get everything set up bootkube well we all know what bootkube is that's the service that establishes or helps us bring up that cluster in the first place and then of course we have the installer logs and so this is the openshift installer logs and we can see you know everything that's going on this is a pretty long file you know we're not gonna go through all the text in here but this is obviously really helpful to troubleshoot anything that goes on but there's plenty of information that's available just at the click of a button from the ui always handy yeah usually you have to ssh into the bootstrap and then run the journal command and then pipe to grep and yeah i mean we still can right you know i can do and assuming there's no issues with my key so you know this is how one of our machines i can you know directly connect into um as you can see these are all the pods that are running i mean this is a machine that i have provisioned this is one of the ones that is installed by this so you know i have direct connectivity and obviously it took my key because it didn't ask me for a password and so that got injected during the uh the provisioning of this so you know you still can go in there if uh if
you want to you can view the logs you can watch bootkube running if the bootstrap machine was still you know up and running but um you know we're kind of past that at this stage yeah someone mentioned um and it was already answered but someone had mentioned that this is um rhel coreos only right it's not rhel7 correct correct yeah it's rhcos all the way across yeah by the way rhel seven um seven dot nine right seven dot nine which is the last release of rhel seven which seems weird because i remember like using rhel like rhel three rhel four or whatever yeah uh-huh yeah yeah i remember that being the thing and then i remember the big like sea change that seven was and like now eight uh like seven's retiring and it's like yeah yeah well it's still got four years but i mean it's in the last you know phase of its updates right like it's the last official release kind of thing yeah i remember being an sa and going around to my customers when rhel six came out and you know positioning rhel six and all the new features the latest and greatest some good times i found a lot of the old presentations i used for rhel six yeah it's kind of funny i should be almost there folks i guess i mean it means i need to upgrade from rhel six most of my servers are on either end right they're either on rhel six or on fedora like 33 no 32 whatever their latest one is 32 is the latest 33 is beta right now yeah i was gonna say i saw an email about 33 being on beta i think it releases in mid to late october yeah yeah it's october releases usually right october and i forget april or something like that or they're going to yearly now i'm not sure fedora is i think or like the 20.1 20.12 yeah yeah naming convention or versioning standard or whatever i think yeah who knows i have a question for you that came up the other day in one of the chat rooms which was support for static ips with the assisted installer not yet
because we rely on dhcp as part of the process so the only way you're going to get a fixed ip is if you're statically assigning it because you know the mac address or you know the machines that are going to be requesting an ip you can of course have static ips for api and ingress in the previous page we showed where you could force those yeah but the ips that are used for the nodes we rely on dhcp for those and you might have already covered this right my fault for being late um does this leverage an external load balancer or no it's managed so it follows exactly the ipi bare metal model here okay all right so by the way uh yes i believe we did yeah all right so installation has now finished we're kind of good to go um we're gonna have a problem here because again i randomly allocated my api and my ingress ips so they're not going to have a corresponding dns record so if i click on this it's simply not going to work but what's really cool is if i say well i'm not able to access the web console it's going to give me all of the entries that i can throw directly into /etc/hosts so i will just go ahead and i will disconnect out of that guy and yes i'm using nano and i don't care [Laughter] so there's a reason why i use nano um when i was in my teens i guess uh i got really into gentoo linux yeah and all of the documentation for gentoo uses nano and so i just started to use it and i know every single person i've ever run into completely slates me for using nano but it's just what you're used to you know you're used to all the keyboard shortcuts for doing everything so i'm just gonna stick with it i think all right so all i've done there is i've pasted those in there so you can see 187 which is the ingress and i've got some you know fixed entries in there and the api uh on 189 which corresponds to the ip addresses that got allocated by my dhcp server so i can
save that now and right away i should be able to do oc get nodes come on reese and those three machines are there and you see they are all both master and worker and you'll see we have a discrepancy here between these two machines which have been online for 16 and 19 minutes respectively and the third one is only 2 minutes 44 because that was that third temporary bootstrap machine all right um oc version yeah 4.6 nightly as expected my client is 4.5 but that's irrelevant so now what i should be able to do is launch the openshift console connection is not private to be expected it's gonna do that for a second time as it goes to the oauth but this is a good sign and let me make that a little bit bigger and get rid of that and that so by default it's kubeadmin and the password i can grab directly from here i can copy it there and i'll paste that and away i go so you can see we're up and running openshift 4.6 um provider is bare metal um so again because it's following the bare metal ipi path it has already deployed all of the pieces you know so it's got the keepalived it's got the coredns the mdns everything all ready to go up and running for me um you'll see that some of the pods and stuff are still coming up and it's got the bare metal hosts section so i can dive straight into this now the important thing to note here is that it does deploy all of the metal3 pieces for full bare metal management but metal3 is currently disabled in this release the main reason for that is that when we use the metal3 configuration today it relies on a second dedicated provisioning network right now with the assisted installer we try and make it very very simple and easy to use and only require a single network so until we catch up i think around about the 4.7 time frame is when we're going to drop the requirement for that second provisioning network and we will be able to enable metal3 right out of
the box so what i'll be able to do in here is go and edit this bare metal configuration set the bmc or the out-of-band management configurations and openshift will be able to manage the control plane manage the underlying infrastructure just as it does on a full bare metal ipi um configuration beautiful that's amazing but yeah we're there and yeah we have arrived we've arrived well even with that small hiccup right the small iso hiccup yeah we did it wow 90 minutes right yeah like about 80 minutes yeah yeah like that so very impressive install install started 15:55 installed 16:24 so the installation wow 30 minutes wow dang that's impressive yeah like i'm very happy with that right like having stood up enough kubernetes clusters to know that like that is like almost as good as like our you know partner demo system right or faster even right the fact that you can get a node in their cluster in 30 minutes wow that's just amazing yeah someone mentioned in the comments this is actually pretty cool now all you need is an operator to update the firmware of the bmc on these nodes that would actually be pretty cool that was amazing that would be really cool i actually think that the metal3 stuff is looking into things like that like the firmware update management yeah because metal3 uses openstack ironic um and openstack ironic does have some capabilities to do you know sort of out-of-band configurations um i don't know whether progress was ever made on the firmware updating but in theory there's no reason why that interface couldn't be used at some point to do that kind of stuff yeah that's awesome yeah and you know this is a bare metal environment so we can do you know we could deploy openshift virtualization if we wanted to right uh yeah you don't worry about nested virt yeah no need for nested virt right because you're running on bare metal exactly you can do all the fun things oh yeah um awesome and
that's really technically super simple install openshift on openshift right using um openshift virt right because they're just virtual machines sort of well i mean if you use the agnostic installer so let's talk about that um technically you can deploy openshift to openshift virtualization virtual machines um so why am i talking about it the way that i'm talking about it uh so yeah you see i seem like not a fan because we need to clearly understand the expectations right am i creating child clusters where the parent physical cluster is for example creating routes there that route to applications in the child clusters excuse me sorry for bumping the microphone um or am i creating virtual machines you know openshift instances that are connected directly to an external network just like a traditional one so it's not as straightforward as just oh yeah deploy openshift right into openshift it's what is my expectation around how it's deployed how it's configured and importantly how the applications inside are deployed and connected to and that's why there's some hesitation or some level setting of expectations right so yeah but yes there is nothing that technically prevents you especially if you have configured um you know a direct l2 you know layer 2 network connection for those virtual machines right you know sure deploy it into a virtual machine connect it out just like you would with a traditional hypervisor not a kubernetes based hypervisor um and connect to a standard openshift cluster but if you expect any kind of integration between parent and child that's the complex part or the missing part yeah there's a um the way andrew answered this is indicative of the questions we get asked because sometimes we'll get asked questions and go yeah that's like pause technically possible and then people take it as like oh i can do it and expect the same performance or i can do it and it's supported
it's supported yeah it's like no no like we're just talking you know sometimes yeah like okay like you can't but yeah or we recommend doing this or as long as you keep this in mind you can do it so like um andrew talks like uh someone who's been burned on that a few times just like well it's funny right because there is um chris you were in the military right there's kind of two different types of people that you'll encounter sometimes right those who if it's not explicitly permitted it's denied yeah yeah and those who if it's not explicitly denied it's permitted so sometimes you have to be careful of you know what christian was saying around what's technically possible but not supported um and also setting expectations yeah highly important oh look at that as we were talking the operator installed openshift virtualization ta-da operators are grand aren't they look at that he's creating a vm maybe a bit too quickly but like andrew said he's just showing off this is cool right like i'm spinning up vms like on my local instance and i'm seeing them appear on cloud.redhat.com like this is really awesome stuff right like this is slick i really like this and you know you can view all of the clusters that you've deployed in here so you know again you could have lots of different clusters in different places and you know i've deployed this on bare metal there's no reason why you couldn't follow the same path if you're deploying on top of you know your virtualization cluster you know it doesn't really matter just attach the iso away you go you can view all of the information here you can delete the cluster if you want just deleting it here just deletes it from the assisted installer window uh it doesn't actually um decommission the actual cluster itself but you know you can imagine you've got a list in here you can go into them and you can launch the openshift console directly from here you know you've got everything you
need your kubeconfig passwords it's all stored by the assisted installer configuration so it's incredibly convenient someone asked and um this is way off topic but um i want to make sure we get to that it says uh what's the estimated date for the 4.7 release right so i think so 4.6 is coming i think we've been doing a release cycle i'm not sure six to nine months after no it's every three months everything releases following kubernetes well it's because i'm thinking about the eus release of 4.6 um has that changed did that change anything i don't that won't change the release cycle that changes the support cycle the support cycle yeah yeah so there you go um so 4.5 was june no july july so expect 4.6 roughly three months later right again subject to all kinds of things and insert disclaimer from serena from the future yeah um and then expect 4.7 roughly three months after that yep i mean we try to follow the cadence of kubernetes upstream right that's the biggest thing one behind or whatever yeah usually sometimes uh 4.5 was yeah 4.5 was kubernetes 1.18 so it was same as upstream yeah when it was released it was yes now 1.19's out so yeah i see you sitting at that vm console reese yeah i just wanted to prove that [Laughter] on top of my bare metal openshift cluster so there you go 30 or 29 minutes to go from bare metal to openshift and then another four minutes to go from openshift to vms yeah wow that's so amazing this installer's gonna change my life i feel like yeah it's really convenient yeah this is awesome all right well andrew it's your show what do you want to do now um so reese are you at liberty to discuss anything roadmap related or uh no i would say not at this stage um we've just made this available you know what i've just done anyone can do you know just because i have an employee account doesn't mean you shouldn't see that same um direction uh or you shouldn't see that
that same option inside of your cloud.redhat.com interface but this is the first time that we've made this available you know it's obviously been a work in progress we've been working on this for a number of months um i've been doing some internal demos to show it and the rate at which this has improved you know new features and rfes that have been brought um from folks in you know in my team and lots of different areas um they've been implemented and it's really really strong it has a strong roadmap we talked about the possibility of having this available inside of your own data center you know that's something that we're really looking at as well um that pxe boot aspect yeah yeah there could be things like that but um at this stage no i am not really at liberty to discuss the roadmap for it that's fair um i will say that i saw the first video that you created of this and saying that there has been tremendous change and improvements and expansion of capabilities is a bit of an understatement yeah the team and i don't know the team that's behind this very well i know their product managers a little bit um because i used to work with them on the virtualization side but uh they're doing some incredible work not that the rest of openshift isn't but um really really really impressive um so there's a question in the chat um and i think that it will probably be a somewhat subjective answer but i'll throw you under the bus first christian of uh in general for a fixed amount of resources would you recommend fewer large virtual machines or more smaller virtual machines so the context here i think is i've done a sizing exercise i've determined that i need x cpu and y ram should i break that up into you know fewer larger nodes or more smaller nodes in order to accommodate my uh my application workload yeah so the first thing that you need to keep in mind is that there is a ceiling in terms of how many pods per
node kubernetes and by extension openshift supports so it depends how dense you think your workload's going to be right if you're going to run containers that are five gigs each then yeah i mean you won't hit that ceiling yeah how many people do you honestly think are porting over five gig containers yeah that's right that's the first step let's just put everything in one giant container which i wouldn't recommend but people do that um we can't change that you can't change that um i'm a fan of actually leveraging um kubernetes and openshift just how it was designed it was designed to scale out and not necessarily to scale up um i've always been a fan of having a large number of smaller vms right and letting kubernetes totally do the scheduling uh do what it does best in scheduling those workloads right um but again you know going back to it it all depends on like your workload right so which ceiling are you gonna hit right are you gonna hit um the capacity ceiling first or are you gonna hit uh the total amount of supportable um containers and that's also networking right because there's just a finite number of um addresses that you need yeah addresses so you have to take some of those into consideration as well but i'm a fan of the many smaller ones yeah i'll add a couple of additional considerations um so one it's not just cpu and ram on the node and of the pods themselves it's also storage right so you mentioned the size of the images but also how much of that local storage they're using so for example if you instantiate a container image and it's writing gigabytes of log data you know every minute that's a lot of local storage and that local storage needs to be sized appropriately both gigabytes and iops and latency um for that workload and that could affect how dense your pods can be which then therefore affects the size of those nodes and the quantity that you will need same thing with network right if i'm
running you know i'll pick on cnfs right containerized network functions if i've got a cnf that's consuming an sr-iov or dpdk device and eating tens of gigabits of throughput i might not be able to you know place those very densely um you know even just a database node right or a database pod if my database pod has massive amounts of traffic or a web server pod or redis pod or whatever it may be that's something to take into account which is the amount of traffic that it is capable of supporting and i think even with virtual deployments that's important because your virtualization deployments are consolidating those nodes as well right i may have 30 nodes in openshift but if they're on four physical nodes in my hypervisor right i still have to be aware of those things and the last thing before i i talked over you chris so i'll hand it back to you but um the last thing is failure domain right yeah your application and understanding the application architecture and how it handles failure is important so if your application is you know built assuming that it's going to fan out horizontally and have you know 40 instances to be able to tolerate you know node failure or something like that and then you've only got three nodes where suddenly it loses 33 percent of its capacity because one node failed that's something to take into account um so be aware of your application or at least make sure the application is aware of your infrastructure architecture so that they can make decisions as well it always comes back to you have to know your application stack you have to know what's going on inside your data center and you know we can expose all the metrics you want but if you don't understand that the message queue in your workload is like you know the most important thing and most io intensive network intensive thing in your entire stack like you've got to treat that accordingly if you don't know that you're going to hit a bump in the road
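to make the which-ceiling-do-you-hit-first point concrete here's a rough back-of-the-envelope sketch — the 250 figure is openshift's default kubelet maxPods limit, and the workload numbers are invented purely for illustration, not recommendations:

```shell
# compare two sizing ceilings and plan for the worse one
# (workload numbers are made up; 250 is openshift's default pods-per-node limit)
TARGET_PODS=1800        # pods the sizing exercise predicts
MAX_PODS_PER_NODE=250   # default kubelet maxPods, tunable via a KubeletConfig
TOTAL_CORES=400         # total cpu the workload needs
CORES_PER_NODE=32       # capacity of one candidate node size

# ceiling division for each constraint
NODES_BY_PODS=$(( (TARGET_PODS + MAX_PODS_PER_NODE - 1) / MAX_PODS_PER_NODE ))
NODES_BY_CPU=$(( (TOTAL_CORES + CORES_PER_NODE - 1) / CORES_PER_NODE ))

# the binding constraint is whichever demands more nodes
if [ "$NODES_BY_PODS" -gt "$NODES_BY_CPU" ]; then
  NODES=$NODES_BY_PODS
else
  NODES=$NODES_BY_CPU
fi
echo "pods ceiling: $NODES_BY_PODS nodes, cpu ceiling: $NODES_BY_CPU nodes, plan for $NODES"
```

with these made-up numbers cpu is the binding constraint, and picking smaller vms shifts which ceiling bites first — which is exactly the trade-off being discussed here, before storage, network, and failure-domain considerations are layered on top.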
real quick right like got to know your infrastructure yeah and i think it goes both ways right the infrastructure team or the platform team needs to understand the application and vice versa you know i used to give a talk about um strangely enough it was vulnerability in i.t right and brené brown who um gives a phenomenal ted talk about vulnerability and relationships right and of course it's often in the context of spouses right or you know long-term partners et cetera but the same thing applies oftentimes in work relationships and in particular between teams like developers and infrastructure the business is best suited by the infrastructure team being honest and open with the developer team about hey the infrastructure can't do this or it doesn't do this well can you make up for that can you compensate at the application layer and vice versa hey it's really hard really complex really expensive to do this at the application layer can the infrastructure do it right and it's scary right because oftentimes those teams are measured by very different things right developers are measured by number of features added by you know number of bugs fixed et cetera infrastructure is measured usually by efficiency can i provide the minimum amount of resources to keep things running um and nobody complaining so being open and honest about that is scary you've got to work together though right like and i have a talk about this like heaven is not a cloud right like it's important to bring in all the people at the table and i mentioned in that talk specifically finance because they know how to talk aws better than you do right like when it comes down to brass tacks and dollar bills they own that piece you got to bring in more than just devs and ops sometimes too is a lesson i've learned i see jp dave talking about uh you know the ramming speed in there which i saw a post about um uh cd projekt red who you know their
ceo swore up and down that they would never have you know crunch for releases they just announced that they're doing crunch well i'm not you know we have a relationship with engineering um and i'm aware of what's going on there but i'm not speaking for openshift engineering very much to the answer that you provided in chat um you know our teams work hard they do their jobs really well and it makes it when it makes it um yeah and an example there is windows container worker nodes it makes it when it makes it it makes it when it's ready right so like we you know yeah like we're not gonna put out like a crap product that doesn't work you know like we don't want to be the fail whale right like we have to make sure things are fully baked i think i froze you both with that one yeah i think you know going back to the application side i guess that's like the whole idea of devops right it's not necessarily like a department or like a thing it's like how you work together right the concept it's a process yeah it's a process right yeah exactly and i think also uh eric i think eric's on the chat eric he was earlier yeah eric always says that openshift doesn't fix your crappy application design and it doesn't fix your broken culture either yeah yeah well i mean that's right you can't just like i'm gonna buy this off the shelf and you know uh he said it kind of abrasively but i mean it's true like you know you can't buy this off the shelf um and then just throw it at a problem and then the problem just magically goes away right so you do have to and judging by the questions that are coming through it makes sense right like how do i size this like these are the questions you need to start asking but more important are these other questions the conversations you need to start having um internally as yeah well just as an organization just in general right like how are we gonna and if this platform is
very powerful how you know how do we fully utilize it yeah yeah so just to let i'm gonna post uh i can't post screenshots in the stream that sucks uh but i'll post it on twitter that's fine how about that uh i am in the process of installing using the assisted installer on my r820 across the house all right did you have to run the cable yourself i ran three cables actually [Laughter] they're right above me oh okay okay well because i thought you meant like you know i imagined you outside with the pickaxe trying to run fiber no i'm not like eric or our boss splicing together his own fiber that somebody else cut that's just the next level uh wow i could do that but no thank you all right so andrew what else you got for us anything fun um i know we're coming up on the end of our time together so uh have you talked about next week no have not what are we gonna do next week so next week um the admin hour is not happening what what i know i know you know how hard it is no no do you understand how hard it was not to have uh like just like morning routine wise not to have langdon's show last week right like it broke my morning routine and it kind of threw my whole day off kilter right so you're telling me now and now i have to do this with you too only once well once so far um so the good news is something that i think is uh very relevant for the audience for the admin hour and that is we are having the first of the openshift uh product management team presenting what's new in openshift 4.6 so yeah so this is actually the same thing that goes out to our sales and our solutions architects and our you know our product managers our tmms like me our pmms the whole nine yards you're gonna get it at the same time this month uh 4.6 everybody learns what's new at the same time this is now an internal meeting that we're opening up to the world
through openshift and i think that's the big thing is like this is usually a meeting we have internally and we are now having this public externally with everyone i'm pretty sure chris is going to be broadcasting the blue jeans meeting right yes like i'm literally broadcasting blue jeans yeah yeah it's the real meeting folks you will see our normal hold music and everything that we see internally all the time everyone here on the ground knows what i'm talking about the on-hold blue jeans music so importantly for red hat folks who will still be joining the blue jeans you can still ask questions you can still do all that other stuff the pm team and some others will be there to help answer internal questions and there will be a whole team of folks here on the live stream that will be doing the same thing we'll be answering questions we'll be transferring questions back and forth between internal chats and all that other stuff so i think the one thing to point out for our internal like for red hatters right like the q a thing will still be there in blue jeans for you it will just not be broadcast so you will not as the viewer see the internal questions that's the only kind of firewall we're putting up in between here right like i just want to give you the presentation i don't necessarily want you diving into all the questions and answers if they don't apply to you right because there's going to be some bespoke question about like a specific customer needs this kind of configuration is this being satisfied by this version kind of thing right like there's going to be questions like that that don't necessarily pertain to you right like i don't want to show you that stuff what i want to show you is the actual presentation and i want you to be able to have questions answered there'll be people here on openshift tv from the pm team to help us all i
think the three of us at least from the tmm side will be here to help reese i think you're presenting during that whole thing so yeah like everyone on this call is going to be here as part of that call next week it's going to be super fun i am legitimately excited yes yeah i did not expect us to be broadcasting um that one came to me i was like hell yes [Laughter] all right guys or gentlemen folks thank you all for joining so that was next week we'll have the what's new in 4.6 show uh starting at the same time this show started today actually uh started at the 10 o'clock am slot eastern uh 1400 utc but coming up immediately after this is the openshift commons briefing modernize your application development for openshift with uh joget however you want to say that tool ravish dawan will be joining us from joget or however you say it which is a fun little kubernetes tool so yeah should be fun and in two weeks the admin hour will be back as regularly scheduled i've already talked with and confirmed with richard vanderpool who will be our guest from cee joining us very fun i cannot wait i always love having our support folks on yeah and also right real quick if i may use this platform absolutely promote um i am starting a new series right a bi-weekly series uh called the gitops happy hour which will premiere uh sorry october 8th tomorrow yeah tomorrow it'll premiere october 8th um at 3 p.m eastern kind of similar format we'll be talking gitops basically every two weeks um we'll be bringing in a special guest here and there yeah i mean bring your gitops questions christian is the kelsey hightower of gitops yes right i refer to him when it comes to gitops questions so right yeah so um yeah october 8th 3 p.m uh eastern every two weeks um i'll maybe um increase that cadence if need be but um yeah so
come and join me uh sorry andrew using your show as a springboard oh no that's fine we're all about promoting uh yeah you're doing me a favor by remembering to say it so yeah yeah we have a very small marketing budget so we got to plug what we got yeah so narendev keeps hyping this up as it's a battle between christian and i and that could not be further from the truth christian and i actually march lockstep hand in hand together through the gitops realms [Laughter] yeah so uh this will be fun so join us here in a few minutes for openshift commons thank you for joining us today for this wonderful uh extended openshift admin office hours and we'll be back next week with the what's new briefing in this time slot so it'll be fun we'll be there and stay tuned if you want to try openshift do what we did today head on over to that try link that i just dropped in chat and subscribe to our calendar so that you know what shows are coming and that's all we got for this show and i'll see you here in a few minutes for those tuning in to openshift commons thanks y'all all right special thanks to uh reese and christian for filling in while i was delayed they did a great job great job guys i have no doubt whatsoever you
Info
Channel: OpenShift
Views: 3,117
Rating: 5 out of 5
Id: BNC-5ISbxpU
Length: 114min 0sec (6840 seconds)
Published: Wed Sep 30 2020