OKD4 on Libvirt Bare metal UPI – Charro Gruver OKD Live Deployment Marathon

Video Statistics and Information

Captions
Charro, I know you're up next with the bare metal deployment, which always sounds to me like a heavy metal band kind of deployment, and I saw the guitars behind you, so it might be appropriate. We'll pause now, let the AWS install keep going, and let Charro queue up for his deployment and share his screen. Thanks very much, Christian, for hanging out with us; I hope you can spend some more time today, because I'm sure we'll be repeating some of these questions. Yeah, sure, I'll be here. All right, do you see a whole bunch of open terminal windows? I do, and I see your smiling face, and I'm going to turn my smiling face off. Why don't you introduce yourself and what you're going to demo now?

Okay. I'm Charro Gruver. I'm a new architect for Red Hat Services here in the Southeast. (A stray caller's audio cuts in: "You've reached the Horizon audio conferencing system. After the tone, enter your conference security code followed by the pound sign.") Let me find out who that is. Pause for a second, everyone, and we'll figure out who is doing something odd with sound. I'm looking for him, and I'm just muting him. There you go. All right, so start that again. All right, carrying on.

Well, like Diane has said a couple of times, these are live demos, so we're fully expecting a Bill Gates moment. It might not be a blue screen, but we might see a stack trace of death and all kinds of other interruptions. I'm Charro Gruver. Like I said, I've been with Red Hat for one week, but I've been a consumer of Red Hat products, both upstream and subscription based, for most of my 20-year career in IT, so this is kind of the dream job that I never knew I always wanted.

Today what I'm going to demonstrate for you is a deployment of a bare metal Kubernetes cluster using OKD. This is going to be simulated bare metal, in that I'm actually using libvirt to run the machines, so that you can actually see what's going on, because it would be hard to get you console views of bare metal machines in this configuration. This is a user-provisioned infrastructure deployment, so the installer is not going to be provisioning the machines for us; these machines are already provisioned. In this terminal right here I've given you a virsh list view of the machines that are currently provisioned: you can see we've got a bootstrap node that is not running, we've got three master nodes, and we will have three worker nodes. Throughout this install I'm going to guide you through the process of deploying the cluster, first through the bootstrap process, and then we're going to add the three worker nodes to that cluster.

Now, I'm using Virtual BMC, a tool that comes out of the OpenStack world, to simulate the IPMI management of these virtual bare metal machines. These machines are going to boot into iPXE, and using the MAC address of the machine as it boots, each one is going to pull the appropriate iPXE boot configuration file that sets its kernel parameters, sets the Fedora CoreOS install URL, and the Ignition file that it's going to start from. I'm using fixed IPs for this particular lab setup, so everything is already provisioned in DNS, and I'm using a Fedora CoreOS tool called FCCT to manipulate the Ignition config files and inject the IP configuration into each of the hosts. I've got all of this written up in a little tutorial on my GitHub page, which we can provide a link to.
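For reference, the Virtual BMC plus ipmitool arrangement he describes generally looks something like the sketch below; the domain names, ports, and credentials here are illustrative, not necessarily the ones used in this lab.

    # List the libvirt domains that stand in for bare metal nodes
    virsh list --all

    # Register and start a simulated BMC for a domain (one vbmc entry per VM)
    vbmc add okd4-bootstrap --port 6230 --username admin --password password
    vbmc start okd4-bootstrap
    vbmc list

    # The "bare metal" node can now be power-managed over IPMI
    ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password chassis power status

With one vbmc port per virtual machine, the same ipmitool commands used against real servers work against the lab VMs.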
Without further ado, we'll go ahead and fire this thing up. The first thing I'm going to do, over here in the left terminal, is power on the bootstrap node. Now I'm going to attach to its console, and what we're going to watch here is an iPXE boot. It's a chained boot: it first pulls just a boot.ipxe file, which is what's being served up by the DHCP server for it to pull from TFTP, and that then chains it to look for a file that is named after its MAC address. It pulls that file, and you can see it got its kernel and its initial RAM disk. The kernel parameters that were passed to it gave it its instructions for installing Fedora CoreOS, and you can see right now it's actually pulling that FCOS image across. Now, we've also got an HAProxy load balancer, this guy right here, okd4-lb01, that is already running and configured to sit in front of this new cluster as it comes up.

This will take a little bit; with the scrolling logs, like I said, it's pulling down the image. One other thing I'll point out while we're waiting for the bootstrap node to complete its install is that we're also doing a mirrored install today, which hopefully makes this go a little bit faster than pulling all of the images across the wire. What I have is a local instance of Sonatype Nexus that I have mirrored all of the images into, if you can see this eye chart, and so the install is actually going to pull its images from the Sonatype Nexus. Right now I've got quay.io in a DNS sinkhole so that it can't resolve, and because it can't resolve, the install is going to assume it's an air-gapped installation and it will pull from the configured mirror.

All right, Fedora CoreOS is booting. Now it's going to overlay the rpm-ostree, and when it finishes it will boot one more time and start the bootstrap, which we will watch right here. Okay, it just finished the ostree overlay and now it's coming back up; when it completes booting it should begin the bootstrap. Okay, now I'm going to go ahead and fire up the master nodes, so I'm just running a little script here that's going to run an ipmitool command against those three master nodes and start them up, and the fans on my little Intel NUCs just lit up hot.

In the top right corner here I'm going to run the openshift-install command and direct it to monitor the bootstrap process. If you do this at home and you monitor the logs like this, don't be alarmed by the "failed, failed, failed" entries that you see coming out in the logs. This is the bootstrap process waiting for its resources to go live, so it will continue to loop until the various resources come up, and you can see the API just came up. So our API is now live, and we're waiting for the bootstrap process to complete. Down here in the bottom right-hand corner we're just tailing the journalctl logs of the bootstrap process itself. This all-in takes about 10 minutes, from the bootstrap node firing up to the bootstrap process itself completing; the installation itself will complete after about another 25 minutes, so we've got some time now to pick some questions if folks want.

Yeah, James Cassell is asking from Twitch: is the sinkhole necessary to use the mirror? I think it still is. I know it has been for a while that if you don't create the sinkhole and the node can resolve the external host, it will pull the images from quay.io, and that's why I created the sinkhole: to simulate a disconnected install where I'm behind a bunch of firewalls and proxies that prevent my nodes from having direct internet access.
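For reference, a disconnected or mirrored install is normally driven by an imageContentSources section in install-config.yaml, typically generated by `oc adm release mirror` when the release payload is copied into the local registry. A rough sketch, with illustrative registry host and repository paths rather than the exact ones used here:

    # install-config.yaml (excerpt)
    imageContentSources:
    - mirrors:
      - nexus.example.com:5001/okd4/okd
      source: quay.io/openshift/okd
    - mirrors:
      - nexus.example.com:5001/okd4/okd
      source: quay.io/openshift/okd-content
    additionalTrustBundle: |
      -----BEGIN CERTIFICATE-----
      <CA certificate of the mirror registry>
      -----END CERTIFICATE-----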
A couple of questions, just to double check: the link to the documentation on this, is this the same as the stuff that you did in okd4-upi-lab-setup? Yes, there's a new branch called ipxe. When we're done today I've got a little more cleanup on the documentation to do, but I'm going to merge that branch into master. The old tutorial was the CentOS 7 based one; I've branched master to a centos7 branch, so anybody that's still running CentOS 7 would want to use the centos7 branch. I've upgraded my entire lab to CentOS 8 and have enabled iPXE even for the hardware, for the bare metal itself, so that just by creating an iPXE boot file with the MAC address of a new piece of metal, all I have to do is plug it into the network, click the power button, and it will provision itself with whatever personality I want it to have.

I'm just checking the other feeds here; the other feeds are a nanosecond behind us in BlueJeans. Brian Jacob Hepworth is saying that he really likes the Fedora CoreOS news and seeing that, kudos. So is this going to take us another 20 or 30 minutes here? Well, as soon as the bootstrap completes, we'll be about 23 minutes out from completion; the bootstrap usually takes about 10 minutes in this environment.

I'm going to do another pitch for people to join the OKD working group while we are waiting here, because that's what I'm charged with: getting more folks in. So if you're liking what you're seeing here, or if there are features missing, or other platforms that we should be demoing or testing on, or that you're using OKD on or wishing to, please join the OKD working group. The mailing list is here, I just put it in the chat; it is an open Google group, and we meet bi-weekly. We have a meeting tomorrow, and I'll throw in the Fedora CoreOS one as well. Thanks for joining us, and we will do the Azure one that you requested earlier; that is our second-to-last demo today, I think, the Azure deploy. The Fedora calendar link is here.

All right, the bootstrap is getting close. Okay, the bootstrap has succeeded, and it's going to wait just a little bit longer to send the event, and then you'll see... okay, there it went. So the bootstrap is now done. You can see in the middle terminal that we do have three master nodes that are live. I'm going to now remove the bootstrap node, and I'm going to take it out of the HAProxy configuration as well, so that we forget everything that we know about the bootstrap. Now we'll watch the install complete.

All right, so we are working towards 4.5.0. Now, this is something odd about this install monitor: it will say 42% complete, and here in a minute it may barf a couple of errors as some of the resources restart, and it will also reset the clock. It plays with you a little bit: you'll get up to 74% complete, and then all of a sudden you'll see 12% complete, and then it will quickly wind its way back up. I'm making a bold assumption here that that is actually the result of it monitoring some of the resources that update themselves through this process, so the percentage complete becomes a little bit variable. If you see that while running this at home, don't be alarmed; it is actually working towards completion, and you need to be patient, because from this point it does take about another 23 minutes.
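For reference, the two monitoring commands in play here are the standard openshift-install wait-for subcommands, run from the directory holding the install assets; the directory name below is illustrative. Pulling the bootstrap backends out of HAProxy afterwards is just an edit of haproxy.cfg and a reload.

    # Watch the bootstrap phase, then the remainder of the install
    openshift-install wait-for bootstrap-complete --dir=okd4-install --log-level=debug
    openshift-install wait-for install-complete --dir=okd4-install --log-level=debug

    # Once bootstrap is done, drop its entries from the load balancer config
    # (assumes the bootstrap backend lines are tagged with "bootstrap")
    sudo sed -i '/bootstrap/d' /etc/haproxy/haproxy.cfg
    sudo systemctl reload haproxy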
Twenty-three minutes. Well, while you're doing this, do you want to talk a little bit about the work you're doing around Che? Sure. Well, actually, it turned out not to be much work at all, and in fact, if we end up with enough time, I can deploy a hyper-converged Ceph instance into this cluster to give us a storage provisioner, because that's really, I think, where folks might have struggled with getting Eclipse Che up and running: it does need persistent volumes for Postgres (it deploys an instance of Postgres, plus an instance of Keycloak that provides the identity management for your Eclipse Che environment), and the workspaces themselves also require persistent volumes. You can probably make it work with ephemeral volumes, just understanding that if those pods ever got evicted you would lose everything, which would be significantly detrimental to your Postgres instance. So it does require that you have some kind of persistent storage provisioner. I have done it in the past on older 3.11 clusters with iSCSI, but now, using the Rook operator to deploy Ceph, it's much, much easier.

Something else I'll mention here. I'll run this again: you see we've got three master nodes that are running, but they're also designated as worker nodes. That's an artifact of how we're provisioning here, because the install-config that we used does not designate any worker nodes, so the installer by default makes the masters schedulable when the installation is complete. That's something we're going to change: we will add the three worker nodes and then we will make the masters unschedulable.

All right, Fernando is asking: is it possible to specify a different Ignition version during the .ign files' creation? I don't think so; I believe the answer is that it's not possible. Yeah, we're stuck with one. At this time you should always be using Ignition spec version 3.1.0 for everything. A slight correction: the Ignition versions don't match the spec version at all. It's Ignition v2.x with spec v3.x, and our current config spec version is 3.1.0, so for the Ignition config, always use spec version 3.1 at this time. We should probably just bump the Ignition versions to make this a lot less confusing, because there's no particular reason not to, as far as I'm aware.

Just going to mention that Neal Gompa from Datto is in the house. Well, hi. Yes, I just sort of forgot that I hadn't actually been introduced. I can't figure out why it says the camera isn't in use; anyway, the microphone works, and I'll figure out the camera in a little bit. I'm a DevOps engineer at Datto, I'm here as an OKD working group member, and I'm going to be assisting Dusty in a little bit, once he and I get to our part of this OKD deployment fun, where I will just talk randomly while Dusty pushes buttons and stuff.
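For context on why the masters come up schedulable: a bare metal UPI install-config.yaml sets the worker replica count to zero, roughly as sketched below. The domain, pull secret, and SSH key are placeholders, not values from this lab.

    apiVersion: v1
    baseDomain: my.domain.org
    metadata:
      name: okd4
    compute:
    - name: worker
      replicas: 0          # no workers at install time, so the masters stay schedulable
    controlPlane:
      name: master
      replicas: 3
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: '<pull secret JSON>'
    sshKey: '<ssh public key>'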
So here I'll walk you through a few of the things that were prepared ahead of time; I said a lot of words to describe it earlier. Especially the way I'm doing this, with fixed IP addresses, one of the things that you have to provision is DNS records, a few key DNS records. You can see I've got the provisioning for several different clusters that I run in here, but this is the one that we're presently looking at right here. Each of the master nodes, worker nodes, and etcd nodes requires an A record. The master and etcd entries are obviously sharing the same node, so they're going to have A records with the same IP address. You also need three SRV records for etcd, and then you need a PTR record for reverse lookup for each of the physical nodes, so your masters and your worker nodes will need pointer records. As you can see, the DNS setup is not onerous, but it is necessary.

Here I'll show you: I'm using an OpenWrt router, actually a travel router, to provide my DHCP and iPXE capabilities. The boot.ipxe, as you can see, is very simple: I'm echoing some information just to make sure the right host booted, and then chaining in an iPXE file that is literally named after the MAC address, with hyphens replacing the colons. Here's one of them right here, which I believe will be one of the worker nodes. This right here gives it the kernel parameters necessary to boot, tells it yes, we want to install CoreOS, tells it where to install CoreOS, tells it where to get CoreOS, and tells it which Ignition file to use. And that's really the secret sauce there. Not very secret; you just kind of told the whole world. I did, I know; I've already published it on my GitHub.

All right, we are, in theory, at 84% complete. I expected it to reset the clock at least once while it's doing this. But how do you determine these percentages? Because I don't see anything on screen that would tell you percentages. Oh, right here, can you see it? Okay, there it is; it helped when you highlighted it. There's a lot of word soup on screen. Yes, there is. This is how I keep the install from being boring: give you lots of journalctl output and logs to look at, because otherwise there's not a lot to look at.

So how did you come up with this setup? I mean, you're doing the bare metal, right? How did you come up with it? Because I remember that bare metal is like the least fleshed-out deployment method of them all, so the fact that you came up with something is impressive all on its own; that's worth the story, I'm sure. Yeah, you know, back at the end of 2017 I got addicted to the Intel NUC machines. Those little form factor boxes are not cheap, comparatively, but considering the amount of compute that you can pack into one of them for a home lab setup, they are pretty affordable, and if you buy the right chipset you can put 64 gigabytes of RAM in one of those little suckers. So you get one with a Core i7, and the newest ones, the 10th generation, have six cores, so you've got 12 vCPUs available and 64 gig of RAM; you can run quite a bit on them. My idea was actually to get an OpenShift cluster running on the NUCs, and then I stumbled across this thing called nested virtualization with libvirt. Well, I don't do OpenStack, but I had a curiosity about it, and that's how I came across Virtual BMC, and so I decided to basically bump it up a level and use libvirt virtual machines with Virtual BMC to simulate bare metal. Then it was just sort of, I want to make this work, so I powered through making it work to get a bare metal install of OKD up and running, and I submitted a few tickets to the Fedora CoreOS team.
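For reference, the two iPXE files he's showing follow this general shape; the IP addresses, paths, and exact Fedora CoreOS installer arguments below are illustrative and vary by FCOS release.

    #!ipxe
    # boot.ipxe, served by the travel router: chain to a file named after the MAC
    echo Booting ${mac} via iPXE ...
    chain --replace --autofree ipxe/${mac:hexhyp}.ipxe

    #!ipxe
    # 52-54-00-a1-b2-c3.ipxe: per-host kernel arguments for the FCOS installer
    kernel http://10.11.11.10/fcos/vmlinuz initrd=initramfs.img rd.neednet=1 \
      coreos.inst=yes coreos.inst.install_dev=/dev/sda \
      coreos.inst.image_url=http://10.11.11.10/fcos/fedora-coreos-metal.raw.xz \
      coreos.inst.ignition_url=http://10.11.11.10/fcos/ignition/worker.ign
    initrd http://10.11.11.10/fcos/initramfs.img
    boot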
They were very, very gracious to help out somebody that didn't know what they were doing; I had never touched CoreOS before, so that was quite a learning experience. And thanks for being part of the community. Yeah, Dusty and those guys were incredibly helpful, and it's kind of evolved from that point. The latest iteration of it now uses the FCCT tool to inject some customization into the machines.

Actually, while we're still waiting... oh, there, quick, here's the reset I was talking about. See how we went back to zero percent complete? Don't panic. I don't know why it resets the clock like this, maybe somebody in engineering could tell us, but it is still progressing, I assure you. That is very confusing and kind of frightening. Actually, it looks like it resets after it downloads an update, so it probably loses all of its state when it does that. Yeah, that's my suspicion, because it does go through several iterations of updating some operators. So it's probably just losing its state every time that happens, which is unfortunate, and I'm not sure that makes sense, but it's the best I've got. It still works; that's the important part. So don't freak out when it goes from 80 to 90 to zero.

So right here, I don't know if this is readable, but you can get to it on my GitHub page. Zoom it up just a little bit. There we go, now it's readable. This is a shell script that I wrote that actually does the provisioning of the quote-unquote bare metal for me, and right here is a YAML file that gets created, where it's injecting the customizations that I want each of the machines to have. In this case, what I'm doing is creating basically a rename of the primary NIC to nic0, so that it doesn't come up as some funky enp-blah-blah-blah name. I want it to be more than predictable: I want it to be predictable and known, and so I'm using the MAC address of the machine to explicitly name that network interface device as nic0.
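For reference, a minimal FCC (the YAML that fcct compiles into an Ignition config, spec 3.1.0 in this era) that pins the hostname and static network settings might look roughly like this. The hostname, addresses, and exact file contents are illustrative; the real lab script merges additional customizations and may structure this differently.

    variant: fcos
    version: 1.1.0
    storage:
      files:
        - path: /etc/hostname
          mode: 0644
          overwrite: true
          contents:
            inline: okd4-worker-0.my.domain.org
        - path: /etc/NetworkManager/system-connections/nic0.nmconnection
          mode: 0600
          overwrite: true
          contents:
            inline: |
              [connection]
              id=nic0
              type=ethernet
              interface-name=nic0

              [ipv4]
              method=manual
              address1=10.11.12.60/24,10.11.12.1
              dns=10.11.12.10;
              dns-search=my.domain.org;

The output of fcct is then the .ign file referenced by the iPXE config for that host.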
That way I always know what it's going to be and where it's going to be, and then I inject its specific configuration into that: I'm setting its name server, its domain, its IP address with the netmask and a gateway, and I'm also injecting its hostname so that it persists its hostname. There's a bunch of other stuff that the script does, and one thing I am going to do is add better comments to it, so that if any of you are looking at how this thing works, you'll understand what each of these sections is doing.

All right, we're back up to 84% complete. At this point I'm going to go ahead and fire up the worker nodes; it is safe to do so now. Actually, I could have done it a while back, but I'm going to do it now. I'm sending each of them an IPMI command, with a 10-second pause in between each one, just so they don't slam my poor little travel router with DHCP and file pull requests at the same time, and we'll go ahead and watch one of those guys boot up. There's one of the workers. It's going to do the same thing that you saw the bootstrap node doing: it's pulling the CoreOS image right now, and then it's going to go through the same process, except that once it processes the initial Ignition, overlays the ostree, and starts its process to join the cluster, it's going to get its Ignition file from the cluster, and that will give it the personality of a worker node. If you watch the left-hand side of the screen closely, you should see it hit a point where it's waiting, and then you'll see it very quickly pull that Ignition config, and at that point it will start to join the cluster. Oh, there it was, right there, the start job, and there it goes: it got its Ignition, and so now it is booting up and going to ask to be a worker node.

So, just to give you a quick update on the AWS cluster: it's still waiting for the cluster API to come up. I do have to leave now for 15 or 20 minutes; I'll be back after that, and I hope my cluster will be up by then. I'll see you in a moment. All right, see you in a bit, Christian.

Our cluster is up, and you see, awesome, it gave us our initial password, so let's go ahead and log in and prove to the world, hopefully, that this little guy is alive. All right, and as before, self-signed certs, so in whatever OS and browser you're using, you are going to have to accept those certs; it's okay, self-signed certs are fine. Now, it creates a temporary cluster administrator for you and dumps that password at the end of the install process, which you can use to gain access to your cluster. And there we are. Now, there will still be some operator updating going on, and your control plane will still be settling out, but at this point we have a live cluster. If you will indulge me for a few minutes, we'll go ahead and finish adding the worker nodes, and then we'll do a couple of housekeeping things on our cluster.

So you see we've got some pending certificate signing requests. That is also an artifact of the way we're doing this user-provisioned infrastructure install: it's not automatically going to approve those CSRs, because it doesn't necessarily trust anybody that wants to join the cluster. So I'm going to approve those certs, and there should be another batch of three; they're going to come up pending... yep, and so now we have three worker nodes. They're not ready yet; they're still completing their own bootstrap, and that'll take another minute or two for them to come live.
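For reference, approving the node CSRs from the command line looks something like this; there are typically two rounds per worker (client certificates first, then serving certificates), so the approval may need to be repeated.

    # Show pending certificate signing requests from the joining workers
    oc get csr

    # Approve everything currently pending (fine in a lab; review them in production)
    oc get csr -o name | xargs oc adm certificate approve

    # The workers appear, then flip to Ready once their kubelets finish starting
    oc get nodes -o wide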
I'm going to do a couple of housecleaning things here. One is I'm going to remove the samples operator, because, unless something has changed recently (unfortunately Christian isn't here; we can ask him later), without an official Red Hat pull secret the samples operator won't be fully functional and can in fact impede updates to your cluster, so I yank it out; I'm not using it anyway, at least at this point. I'm also going to create ephemeral storage for the image registry, because it will also be in a Removed state since it doesn't have a persistent volume, so I'm patching its configuration with an emptyDir specification in place of a persistent volume. And I'm going to create an image pruner to run at midnight every night, because the console will gripe at you until you have an image pruner configured. So anything older than 60 minutes it's going to prune at midnight every night... or 60 days, rather; 60 minutes would be aggressive. Yes, I mean, I don't know what kind of storage you have, but 60 minutes might be appropriate if you basically only have enough for the cluster itself to run. And there we are: yay, faster, okay.

Now, huge caveat: our masters are still schedulable. Our workers are schedulable, but that's not bad. Well, it's not, but there is a gotcha in here, which of course I never tripped over: your ingress pods will deploy on any schedulable node. If your load balancer is only configured to look at certain nodes (here you see I've got port 80, port 443, and port 6443 all directed to the master nodes), and those ingress pods get evicted and rescheduled onto a node that is not in your load balancer configuration, then you will lose access to your cluster. Important safety tip. So the key here is either to span your load balancer across all the nodes, which I don't really want to do because that's a lot of extra cruft in the load balancer configuration, or to designate some infrastructure nodes, and that's the path I chose to take. What I'm going to do real quick is designate my master nodes to also be infrastructure nodes.

Why doesn't it do that by default? Well, because the best practice is to create a couple of worker nodes that you set aside as infrastructure nodes. Why? I don't know. Okay, just making sure, because I've seen these recommendations listed in the documentation, but there doesn't seem to be any particular reasoning to back them up. Historically speaking, I've seen clusters typically use the masters as infra nodes, because that way they handle essentially the stuff that keeps the cluster itself running, and the worker nodes are free to work on developer and user workloads. Yeah, I think one of the things you need to consider is how beefy you make your master nodes: if you've got heavy, heavy ingress operations, given everything else the master nodes are doing, that might be a little overwhelming for them. In my particular lab environment the master nodes are heavyweight enough, each of them has 30 gig of RAM and six vCPUs, so I feel pretty confident designating them as infra nodes. Once you run this label on them, then you need to patch the scheduler so that the master nodes are no longer schedulable. You'll see right now they are infra, master, and worker nodes; when I run this, now they're just infra and master nodes.
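For reference, the housekeeping and node-role steps just described usually come down to a handful of oc commands along these lines; node names, the pruner schedule, and retention values are illustrative.

    # Put the samples operator in a Removed state
    oc patch configs.samples.operator.openshift.io cluster --type merge \
      --patch '{"spec":{"managementState":"Removed"}}'

    # Give the image registry ephemeral (emptyDir) storage so it leaves the Removed state
    oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
      --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'

    # Configure an image pruner to run at midnight so the console stops nagging about it
    oc patch imagepruners.imageregistry.operator.openshift.io cluster --type merge \
      --patch '{"spec":{"schedule":"0 0 * * *","suspend":false,"keepTagRevisions":3}}'

    # Label the masters as infra nodes, then stop scheduling ordinary workloads on them
    oc label node okd4-master-0.my.domain.org node-role.kubernetes.io/infra=""
    oc label node okd4-master-1.my.domain.org node-role.kubernetes.io/infra=""
    oc label node okd4-master-2.my.domain.org node-role.kubernetes.io/infra=""
    oc patch scheduler cluster --type merge --patch '{"spec":{"mastersSchedulable":false}}'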
Now, at this point nothing got evicted off of them, so if you want to boot things off of them that you don't want running there anymore, you need to either go through and manually evict all the pods running on each of those nodes, or reboot your master nodes, which is a bit more aggressive a way of doing it. Now I'm going to patch the ingress operator to tell it that it's okay to run on those master nodes, and if you can read the eye chart here, I'll explain what it's doing: it's setting a node placement policy with a match label for the infra node role. That's not enough, though; you also have to set some tolerations, because the master nodes are now tainted, so you need to give it a toleration saying it's okay to run on a node that has a NoSchedule taint for the master role. Now that that's done, you will see one of the ingress pods terminating; there's a new one running that is not in a ready state yet, and as soon as this one is in a running state, the second one will begin terminating. Don't panic that your other one sits in a pending state for a while: it has an anti-affinity rule that it won't run on a node that already has an ingress pod running on it, so it has to wait for one of those terminating pods to finish before it will schedule on the master node. Wow. And so there you go: now we've got one running, one pending, and two terminating, and it will remain in that state until one of the terminating pods completes, and then the anti-affinity rule can be satisfied and the pending pod will also deploy. These take a while to terminate because they're shedding load; they're gracefully shutting down. Okay, there you go: one of them is done terminating, we now have two running ingress pods, one of them in a ready state and one still bootstrapping.

The last thing I'm going to do is get rid of that kubeadmin account, because its password is sitting there in plain text in your installation folder. Oh, so it does get written down somewhere? Yeah, I was going to ask: do you have to make sure you save that output text, or will it actually be somewhere you can get to it? If you look at the directory that you used for the installation, there are the Ignition files that it created and the metadata, and it creates an auth directory; in that auth directory it creates an initial kubeconfig, which you can load to give you access to your cluster directly from your command line, and it dumps that plain-text password right there. But if you get rid of the kubeadmin user, doesn't everything that links to the kubeadmin user break? It's a temporary account, so here's what we're going to do. I created an htpasswd file ahead of time (my tutorial has instructions for how to do that), so I've got an admin user and a dev user with passwords already in there. You saw me just create a secret right here: I created a secret in the openshift-config namespace called htpasswd-secret from that file, and now I'm going to apply a custom resource that I've already got. This is the custom resource we're going to apply: it sets up an htpasswd identity provider and links it to that secret we just created, the htpasswd-secret. So I will apply that. It complains that I used apply instead of create, but I'm just in the habit of using apply to update objects, so you can ignore that complaint.
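For reference, the ingress placement change and the htpasswd identity provider he's applying look roughly like the following; file names, secret names, and user names are illustrative.

    # Tell the default IngressController to run on infra-labeled nodes and tolerate the master taint
    oc patch ingresscontroller default -n openshift-ingress-operator --type merge \
      --patch '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"key":"node-role.kubernetes.io/master","effect":"NoSchedule","operator":"Exists"}]}}}'

    # htpasswd file with an admin and a dev user, stored as a secret in openshift-config
    htpasswd -c -B -b users.htpasswd admin 'changeme'
    htpasswd -b users.htpasswd devuser 'changeme'
    oc create secret generic htpasswd-secret \
      --from-file=htpasswd=users.htpasswd -n openshift-config

    # OAuth custom resource wiring the secret in as an identity provider
    cat <<EOF | oc apply -f -
    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
      - name: htpasswd_provider
        mappingMethod: claim
        type: HTPasswd
        htpasswd:
          fileData:
            name: htpasswd-secret
    EOF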
And then the last thing I need to do: this admin user that I just set up a secret for, but who doesn't exist yet, I'm going to give him cluster-admin rights, and now I'm going to be brave and I'm going to delete... well, it also says the admin user doesn't exist. That's correct, but it creates it in the background. What? Yeah, it's not intuitive or obvious, but it does, and it works. Okay, so there we go: I just logged in with my new, somewhat more secure, cluster admin account, and you can see our four green check boxes; we've got a happy cluster. It will complain about alerts until you set up a Slack channel or something to send your alerts to; that's actually pretty easy to do, you create a receiver and walk through it. But I have used up most of my allotted time, so I'll stop playing now. I think the playing is fine. No, I'm going to give you that; that was "easy button." All right, well played.

And can you do one more thing for me, just because I think people keep asking me these questions: go back to the console and show the operators that are installed in your installation. Sure, I will do that. All right, so you go to Operators, OperatorHub... are there no operators found? Operators don't exist? I think it may still be updating. Well, the OperatorHub operator might not actually be up yet. Yeah, because it does take a while; that initial install took us another 23 minutes, and it does take things a while to settle down. Let me show you what it does look like, because I have another cluster that I stood up this morning. It seems less healthy. Yeah, I think I did something to upset it, but here are the operators that are available, quite a few. You can see, if you want CodeReady Workspaces, its upstream, Eclipse Che, is in here. Do you have enough time to try to install the Eclipse Che one? I might, especially if you don't mind going a couple of minutes over, because the first thing I need to do is deploy... oh, actually, let me make sure I've got Ceph deployed in this cluster. So we're going to go to the rook-ceph namespace. Yes, yes we do. The fact that the rook-ceph namespace exists kind of indicates you have it set up; it shouldn't exist if you don't. No, it can exist even if I haven't completed the install yet. Well, okay, there's that.

All right, so we'll go back to the OperatorHub and we'll find the Eclipse Che operator. Yeah, it's a community operator: if I call Red Hat they're not going to help me with it, but if I go on the Slack channel they're usually nice enough. Okay, and unless you want to do something different, you click Install, and we're going to keep the stable channel. It is going to create the eclipse-che namespace, and we're going to let it have an automatic strategy for its approval. If you switch that to manual, then when the installer wants to install an update, you have to go in and say yes, you can actually install. That seems painful. Well, if you think about it, I'm doing everything as a cluster administrator, so if you're not a cluster administrator but you want to request something... that's part of what we've got going on here, because there are all kinds of configurable RBAC capabilities within this thing. So when you install this operator as a cluster admin, does that mean that anybody who logs in with an account can then instantiate it afterwards? Absolutely, yes.
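For reference, the two command-line steps he describes here, granting the new admin user cluster-admin and removing the temporary kubeadmin account, are typically:

    # The user object doesn't exist yet; it gets created on first login through the
    # htpasswd identity provider, and this role binding then applies to it
    oc adm policy add-cluster-role-to-user cluster-admin admin

    # Remove the temporary kubeadmin credentials (only after another cluster-admin works)
    oc delete secrets kubeadmin -n kube-system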
People will be able to get in and create workspaces. It's got lots of role-based access control, so you can control who can do what, but yes, anybody you've created an account for in this cluster should be able to log into Che, create an account in Che, which will provision them into the Keycloak instance it's going to create, and then they can create a workspace. So let me switch this real quick to the workloads. Okay, our operator is running, it is alive, so we should be able to provision a Che cluster. You see what I did from the operator: here are the installed operators and the provided APIs; that's what I clicked on to get to this view, so that I can now create a CheCluster. It's going to name it eclipse-che unless I tell it to do something else. There are lots of things you can configure in here; I'm going to take the defaults on everything except storage, and this is what I was mentioning earlier that I believe has probably hung some people up: Postgres is going to need a PVC, and then any workspace that you provision is also going to need a PVC, which almost requires that you have a dynamic storage provisioner for this to work. So I am going to give it the name of the storage class. Actually, I'm going to cancel out of this, go down here to Storage, and show you that we do in fact have a storage class; it's a block provisioner that's part of Ceph. When we create our cluster, I'm going to tell it to use that for Postgres, and I'm going to tell it to use that for the workspaces. Also note that each workspace is going to get a gigabyte of provisioned storage. That may or may not be enough, depending on the type of development you're doing; it's pretty minimal, so you might want to crank that up to 5 or 10 gigabytes, depending on how big the artifacts built from the code base are going to be and everything else about the development environments you'll be working with.

So I'll click Create on that, switch back to the pod view, and you can see it's provisioning Postgres. Hopefully our storage provisioner is working, and we do in fact have a postgres-data volume that is bound, so our storage provisioner is working. Okay, Postgres is running but not ready, so it's still deploying itself; this takes a couple of minutes, and then Keycloak is going to provision itself after Postgres is done. So now Keycloak is provisioning, and Keycloak actually goes through a couple of phases: it has an init phase that it runs through, so you'll see that pod come up, then terminate, and be replaced by another Keycloak pod that will be your final configuration, and you won't see the Che controller come up until both Postgres and Keycloak have completed their provisioning. And about how long does that take, Diane asks? Not terribly long, a couple of minutes. Cool. It feels like a long time when you're staring at the screen. That's all right, I have plenty of coffee today.

And Michael has just pointed out: maybe you still have quay.io blocked via DNS. You know what, I don't; that was a good catch. I snuck that in while Neal was talking: right here I blasted a command to my DNS server to remove the sinkholes for quay.io and for the registry server. I did actually notice, which is why I didn't repeat his question, because I figured it was obvious on screen that you got rid of your quay.io block. No, I slipped that in and didn't mention it.
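For reference, the form he's filling in ultimately produces a CheCluster custom resource; a rough sketch of the storage-related part is below. The field names follow the operator's CRD of roughly that era and the storage class name is illustrative, so check oc explain checluster.spec on the installed version before relying on it.

    apiVersion: org.eclipse.che/v1
    kind: CheCluster
    metadata:
      name: eclipse-che
      namespace: eclipse-che
    spec:
      server:
        tlsSupport: true
      database:
        externalDb: false
      storage:
        pvcStrategy: per-workspace
        pvcClaimSize: 5Gi                              # default is 1Gi per workspace
        postgresPVCStorageClassName: rook-ceph-block   # Ceph block storage class
        workspacePVCStorageClassName: rook-ceph-block
      auth:
        externalIdentityProvider: false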
All right, so Keycloak is bootstrapping itself now, so you'll see some activity there. And there it is: now you see another Keycloak instance provisioning, and it will take over from the first one here in a minute, as we all wait with bated breath. In other news, Christian says that his full-blown AWS cluster has finished installation, so when we're done we'll pop over and let him prove that, and then we'll grab Dusty when he's back and hit the DigitalOcean stuff. Okay, Keycloak is running. Any of you who are joining us for the DigitalOcean demo, we'll probably get started on that one a few minutes after the hour; we're running pretty close to on time, which I think is amazing. Indeed. We'll probably lose that thread at some point, but hey, a quick plug for my favorite Java framework, Quarkus. There we go, there's the Quarkus ad, thank you. And what does that have to do with this? Well, once your cluster is up and running, you've got to run something in it, right? Oh, so you're going to make something with Quarkus? Okay, so, mad programming skills. Yes, indeed.

So the first Keycloak instance, you see it terminating now; it's getting itself out of the way. The plugin registry has fired up, and now you see other activity: there's our Che controller right here being created, we've got a devfile registry, we've got a plugin registry, and as soon as this guy becomes ready... I wish you could hear the fans on my little NUCs. I wish I had a fan here; the temperature is popping up here in Canada on the west coast, it's probably going to hit 32 today.

All right, so all of the resources are up, they're all in a ready state, and we've had no restarts, which is always a good sign, although occasionally a restart is not necessarily a bad thing. If we click over here to the routes, we have a route for Che, and if I'm brave and open that... okay, self-signed cert again. So what you have to do at this point is grab that cert. I'm going to create a folder here so you don't have to see all the cruft on my screen. I'm going to go here and show the certificate (this is Safari specific, obviously, so follow the instructions for your favorite browser; Safari is not my favorite, but here it is), grab that, and then, once you've got that certificate, you need to add it to the trust store of your operating system. In my case I'm going to go into Keychain, drop that certificate in, and make it trusted. I'll do that for you real quick: I'm going to drop it in (you see there's an old one from a previous install), pick the one we just downloaded, replace it, open it up, and say Always Trust. Now it's going to make me certify that I am me one more time. Ta-da. And I'm going to say yes, allow these permissions, and now it's going to ask you to create an account.

Now, another important safety tip if you do what I did: there is an admin account that Che creates, and, well, I named my cluster administrator "admin," so I need to give this a different name or I will cause some pain for myself. And there we go: Che is up, running, ready for your code. That is awesome sauce, thank you very much; that makes my day. This is awesome, yay, thank you. Yeah, I think you've just made the entire Eclipse Che community happy too. Well done.
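For anyone following along without a GUI browser handy, the route and its serving certificate can also be pulled from the command line; the hostname below is illustrative.

    # Find the Che route created by the operator
    oc get routes -n eclipse-che

    # Fetch the self-signed serving certificate so it can be added to a local trust store
    echo | openssl s_client -showcerts \
      -connect che-eclipse-che.apps.okd4.my.domain.org:443 2>/dev/null \
      | openssl x509 -outform PEM > che.crt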
Info
Channel: OpenShift
Views: 1,272
Rating: 5 out of 5
Id: wd2SWTC1j80
Length: 65min 8sec (3908 seconds)
Published: Wed Sep 02 2020