How to install OpenShift 4 on Bare Metal - User Provisioned Infrastructure (UPI)

Captions
Hey, what's up. I'm Ryan Hay, and in this video I'm going to show you how to install an OpenShift 4.5 cluster in a bare metal environment using the UPI method of installation. That's user-provisioned infrastructure, and what it means is that the user is responsible for provisioning and maintaining the infrastructure that the platform runs on. This is a little different to the IPI method that's also available, installer-provisioned infrastructure, which is offered in the public clouds and on some on-prem stacks like OpenStack and vSphere. The IPI installer will automatically provision and maintain the infrastructure for you, so for the most part, if you're in an environment that's supported by the IPI installer, you probably want to use that method; it's a lot simpler. If you're in an environment that is not supported by IPI, however, you'll have to fall back on the user-provisioned infrastructure method.

There are two requirements to follow along with this video. First, you need access to seven different machines. For this I'm using VMware's ESXi hypervisor, where I create seven virtual machines; in total they come to about 50 GB of RAM, 450 GB of disk, and I use four virtual CPUs on each host. You could also use something like Proxmox or KVM, or of course, if you have seven physical machines lying around and networked together, you could use those too. The second requirement is a Red Hat account. That's free to sign up for, and with it you get access to a 60-day trial license for OpenShift Container Platform and to all the downloads we need to get the cluster up and running.

All right, let's take a look at the architecture. Here we have the architecture diagram for the environment we'll be using throughout the video. The environment is split into two networks: the local area network at the top, which should already exist, and the OpenShift network at the bottom, which we'll create in a moment using a port group in ESXi. This machine here is the local workstation; in my case that's just my Mac. You can also see what I've labelled a helper node, which joins the two networks together: it has two interfaces, one on the local network and another on the OpenShift network.

The helper node runs a number of services that are required to spin up an OpenShift cluster. First, we have a NAT gateway. That's required because each node needs to be able to pull container images from the internet; for example, this control plane node will come up and use the helper node as a gateway to get out to the internet and pull images. If you do not want to expose these nodes to the internet, you have the option of setting up what's called a mirror registry here, configuring each node to pull directly from that registry, and populating the registry by alternate means, but we're not going to do that in this video; we'll just stick to the internet. There are also a number of required DNS entries to ensure that all the nodes in the cluster can communicate with one another. We've got a DHCP service as well. This one is actually less important, because you can give each of the cluster machines a static IP when you're booting them, but I find it easier to provide DHCP and static IPs from a single source of truth.
We also have an Apache web server. This will host the configs and the Red Hat CoreOS image that each of the hosts pulls during its boot process. We'll also set up HAProxy as a load balancer, for getting requests around the cluster and into the cluster from external networks as well; in this case the HAProxy itself is not HA, but all of the OCP backend services it proxies to are. And finally, we'll set up an NFS server so that we can host the OpenShift image registry.

Each host machine, and actually the bootstrap machine as well for that matter, uses Red Hat Enterprise Linux CoreOS as its operating system. There are some fairly big differences between Red Hat CoreOS and regular Red Hat Enterprise Linux. First of all, CoreOS is an immutable, container-focused version of Red Hat Enterprise Linux. CoreOS uses OSTree for versioning entire file systems: updates are delivered in a container as an atomic unit, and the new file system included in that update is applied on a reboot of the host. CoreOS also includes something called Ignition, which OCP uses as a first-boot system that can manipulate disks and configure the machine. And CoreOS ships the kubelet by default, along with the CRI-O container runtime engine, which replaces Docker in version 4 of OCP. It is compulsory to run CoreOS on the control plane nodes. For the worker nodes you do still have the option of using Red Hat Enterprise Linux, but keep in mind that if you choose to do that, it can increase the management overhead of the cluster, because you won't be able to take advantage of OCP managing the host OS upgrades for you. So in my opinion it's best to stick to Red Hat CoreOS and only make changes to each of these hosts through the Machine Config Operator once the cluster is up and running.

The general flow of the install is that we'll use the helper node to generate the config files required for the deployment of OCP, and we'll host them, along with the Red Hat CoreOS image, on the Apache web server. As a first step we'll boot the bootstrap node. It starts first, pulls the bootstrap Ignition config from the web server, and also pulls the Red Hat CoreOS image to boot the OS. Once the bootstrap machine starts, it will itself begin hosting some resources that the control plane machines are going to use. We'll then start up the control plane machines. These also point at the web server to get their Ignition config and a copy of the Red Hat CoreOS image to boot from, and once they've booted they pull whatever resources they need from the bootstrap machine. The control plane nodes, together with the bootstrap node, then work together to create a temporary Kubernetes control plane. That temporary control plane, which includes the bootstrap node, then schedules the production control plane onto the three control plane nodes. The bootstrap machine then shuts down the temporary control plane, schedules some more workload onto the permanent control plane, and is then ready to be shut down and removed itself. Finally, the worker nodes are started up, and with a little manual intervention around certificate signing requests we'll get them joined into the cluster.

I've created a repo with the instructions I'll be using for this video; I'll put the link in the description. With that, we can get started creating the environment.
We can get started by downloading all the software we need to boot the cluster. First of all, we need the latest image for CentOS 8: you can come to this link here and select the x86_64 ISO. I don't need to download it again because I already have it. Next, we can log into the Red Hat OpenShift Cluster Manager; if you don't already have an account, you can create one to get access. Once it's loaded, select the Create Cluster button, then Red Hat OpenShift Container Platform. From here you see all the different infrastructure providers; a lot of the ones up here support the IPI method of installation, but we want to select Run on Bare Metal. The page you land on has all the links we need to download the software.

The first piece is the OpenShift installer. We're going to run the installer on the helper node, which is the central state machine, so select Linux and download the installer; I've already downloaded it. Also download a copy of the pull secret, and the command line interface, which gives you the oc and kubectl tools. Download that for whatever OS you're using on your workstation (for me that's Mac), and you'll also need the Linux version for the CentOS 8 helper node. Finally, download a copy of Red Hat Enterprise Linux CoreOS. If you open this, there are actually two files we need: one is the metal image, the .raw.gz file, and the other, just above it, is the Red Hat CoreOS installer ISO. The ISO is the one we're going to boot inside ESXi.

Once you've downloaded all of those assets, we can come over to our virtual machine environment, ESXi, and upload some of the files. Come to the datastore browser; I just upload things to an OS folder that I've created. You want to upload the CentOS 8 image and the Red Hat CoreOS ISO only; you don't need to upload the .raw.gz file. Once they've been uploaded, we can come into Networking and create a new port group. This is for the OpenShift network, the network towards the bottom of the diagram. We can call it OCP, give it a random VLAN number that's not in use, leave the rest at the defaults, and add it.

We can now start to build the virtual machines that make up the cluster. If we come back to the GitHub repo, the specs for each of the machines are listed in the "Prepare the bare metal environment" section: there are specs for the control plane nodes, the worker nodes, the bootstrap node, and the services node. These aren't the minimum specs the OpenShift documentation calls for; they're actually a little less, but that doesn't matter so much for a test environment. If you're building a production environment, consult the documentation and follow the minimum requirements there. What I'll do is explain how I create one of the control plane nodes, then speed the video up while creating the other control plane nodes, the worker nodes and the bootstrap node, and then slow it back down again to talk about the services machine that we build.
So we come back to the ESXi dashboard, select Virtual Machines, then Create/Register VM, and create a new virtual machine. We can call this one ocp-control-plane-1, select Linux as the guest OS family and CoreOS as the guest OS version, since we're doing the control plane first, and click Next. I've only got one datastore, so Next again. Per the specs, this machine gets four vCPUs and 8 GB of memory; the hard disk I'm going to thin provision and set at 50 GB. We can remove the USB controller if you don't want to keep it around. The network adapter connects to the OCP port group, and finally the CD/DVD drive boots the Red Hat CoreOS ISO image, which is this one right here. Select Next and Finish. We then just repeat that for the remaining nodes.

The OCP services machine boots a CentOS image instead. This machine actually does a little less than the other machines, so we can give it a bit less RAM; I'm setting it at 6 GB. It does need a much larger hard drive, though, because the services machine hosts the NFS server, and the OpenShift image registry, which is backed by an NFS share, needs at least 100 GB of storage, so we'll set this disk to 150 GB. The other difference is that adapter 1 connects to the VM Network, and because this services VM joins the two networks together, we add a second network adapter connected to the OCP port group. That way adapter 1 gets a LAN IP address and adapter 2 gets an OpenShift network IP address. Finally, this machine boots from the CentOS 8 image.

Now that we've finished building the VMs, we need to collect the MAC addresses assigned to each one. Unfortunately, if you look at the network adapter settings of any of the hosts, you'll notice that a MAC address doesn't get assigned until the VM is actually started. So we'll start each VM so that the MAC addresses get assigned, and then take note of them. We can immediately shut everything off again except for the services node, because we'll be using that first. Now if we go into control-plane-1 and look at the network adapter, we can see it has a MAC address assigned. I'm going to take note of these MAC addresses because we'll use them when we assign static IP addresses.

We can now continue by installing and configuring the OCP services machine. Select your preferred language, time and date; I'm going to choose the Server base environment and add the guest agents, so that the VM's resource statistics show up in the ESXi dashboard. I'll also configure the installation destination: select Custom, then Done, then click to create the partitions automatically. You can see it wants to use 92.47 GiB for the home partition. I don't want that, so I'll remove the home partition; and the root partition is currently set to 50 GiB, so if we clear that value out it will use all the remaining disk space for root. We can then click Network, and you can see the two network devices we created. The first is on the LAN, so if we turn it on we should get a DHCP address. That's good.
We can see here that we have an IP address of 192.168.0.96. The second device is on the OCP network, and if we turn it on nothing happens, because there's no DHCP service running on that network yet; that's actually what this host is for, so we can turn it off for now. We assign the hostname and begin the installation. Once it's done we can remove the CD-ROM from the machine and reboot. When we get the login prompt we can check that we still have that address; 192.168.0.96 is the IP assigned to this host, so I'm just going to SSH into the machine to make this a little easier. OK, great, we're able to SSH into the machine. That's a good start.

Now we need to move all of the files we downloaded earlier onto this machine. We'll do that with scp; I still have everything in my Downloads folder. We want to copy the OpenShift installer for Linux, the OpenShift client, and the Red Hat CoreOS image, in this case the .raw.gz file. We'll send them to root, straight into the user's home directory, and we can see all of the files are there now. Next we extract the client tarball, which gives us oc and kubectl, and move them to /usr/local/bin. Now we can run kubectl version and oc version, so the client tools are working. We also extract the installer, and you can see we now have the openshift-install binary. We then update the operating system so that any new packages we install will get the latest versions, and install git so that we can clone the repository with all of the configuration files in it.

OK, let's clone that repo. There we go, we have ocp4-metal-install. Having a quick look inside, you can see the dhcpd configuration, the diagram we've talked about, some DNS configuration, the HAProxy config, the install-config that's fed to the OpenShift installer, some manifest files we'll use to configure OpenShift once it's up and installed, and the README, which is the instructions for all of this.

The next step is optional; I just prefer to do it because I think it makes editing YAML files in vim a lot easier. In ~/.vimrc we turn syntax on, set number, set expandtab and autoindent, set softtabstop to 0, tabstop to 2 and shiftwidth to 2. The list option is very helpful for YAML files because it shows you where you have tabs and where you have spaces, and hlsearch highlights your searches. We save that, and we also set vim as the preferred editor for both of the tools, oc and kubectl.

Now we have to set the static IP for the OpenShift network interface on this host. We can use the NetworkManager text UI, nmtui, and edit the ens224 interface, which is our second interface. We update the IPv4 configuration by changing Automatic to Manual, select Show, and add our address: 192.168.22.1. If we come back to the diagram quickly, you can see we're setting up this interface here, interface two. We point the DNS server at localhost (at the moment we don't have a DNS server up and running, but we'll get to that in the next step), and we also set the search domain; the domain we're using for this network is ocp.lan. Tick "never use this network for the default route", and set the connection to start automatically.
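If you'd rather script this step than click through the nmtui screens, the rough nmcli equivalent is below. This is only a sketch: it assumes the connection is named after the interface, ens224, and it uses the 192.168.22.1/24 address and ocp.lan search domain from this environment, so adjust both to match your setup.

    # Static address for the OpenShift-facing interface; keep DNS local
    # and make sure this NIC never becomes the default route.
    nmcli connection modify ens224 \
      ipv4.method manual \
      ipv4.addresses 192.168.22.1/24 \
      ipv4.dns 127.0.0.1 \
      ipv4.dns-search ocp.lan \
      ipv4.never-default yes \
      connection.autoconnect yes

    # Bounce the connection so the new settings take effect.
    nmcli connection down ens224 && nmcli connection up ens224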
OK, so now that we've done that, we want to confirm it has taken effect. We can see that ens224 now has that IP address; if it doesn't, you can bounce the NIC.

Now we can set up the firewall. First we set up the zones, an internal zone and an external zone, and to do that we can use the NetworkManager CLI, nmcli, to assign each connection to a zone. We should now be able to query the zones, and you can see that the list includes external and internal. We can also check the active zones, and we can see that external and internal are both active and assigned to the correct NICs. Next we have to enable masquerading on both zones. Masquerading is basically source NAT, and to give a quick example of what that means: every packet leaving the external interface, which in this case is ens192, will, after routing, have its source address rewritten to the interface address of ens192. That way any packets that go out of the network can find their way back to that interface, where the reverse translation happens and the packet is passed on to the correct internal host. We can use firewall-cmd to turn that on and then reload the firewall config, and again we can check that the changes have taken effect: masquerading is on for both zones. We also want to make sure IP forwarding is enabled. I believe that when you enable masquerading on CentOS 8 it is enabled automatically, but we can check, and we can see that it's on.

Next we can install DNS. We'll apply the config that was included in the git repo we cloned earlier, but I'll quickly go through it first. In the dns directory we've got named.conf, the primary configuration file, which points at the zones. It's mostly default settings; everything at the top is basically the stock configuration, and the only differences are that the Google DNS address is put in here and there's a reference to the zone files we're also including. In the zones directory we have the forward and reverse lookup zones. You can see we've created A records for the machines we're using, and because the services node will also host HAProxy, we've pointed the OpenShift-required DNS names at that proxy. Importantly, we also have these SRV records down here for the etcd cluster, so that while the etcd cluster is booting its members can find out how to contact each other. I've left these two entries at the bottom because later on they'll be needed on the LAN network; we're going to add them to the hosts file of the workstation we're working from, but you could also configure DNS on that network instead, in which case you'd take those two entries, plus this wildcard entry here, and put them into another DNS server. We can take a quick look at the reverse zone as well: it's just a number of PTR records, which are also required by OpenShift. All of these records are described in the OpenShift documentation too.

We can apply this configuration now by copying it over from the git repository into the service's configuration. Once again we reload the firewall for the change to take effect, and we enable and start the DNS service. And there we go, DNS is up and running. So let's do a quick test and look up the details for the domain we're using, ocp.lan.
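A couple of lookups against the new server are enough to confirm the zones are being served. A quick sketch: the api record name comes from the repo's zone files, and the reverse lookup assumes 192.168.22.200 is the address the zone gives the bootstrap node, so substitute whatever your zone files actually use.

    # Forward lookup for the cluster API record, asking our own BIND instance.
    dig +short api.lab.ocp.lan @127.0.0.1

    # Reverse lookup that should come back with the bootstrap machine's name
    # (192.168.22.200 is an assumed address; check the reverse zone file).
    dig +short -x 192.168.22.200 @127.0.0.1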
You can see here that this host is not using the correct DNS server, the one we just set up; it's still using the DNS server it learned on the LAN side. You can see down here that the server it queried is 192.168.0.1, which is on the LAN network. We want to change that, and we can do it by updating the external NIC, ens192, on the LAN network to ignore any DNS entries obtained over DHCP and to point at localhost for DNS queries instead. We then restart NetworkManager so that takes effect, and we can see the machine is now using localhost for DNS. We can also do a reverse lookup test, which should resolve to the OCP bootstrap machine, for example. Great, you can see we've resolved that hostname.

Now it's time to install and configure DHCP so that each of our servers gets a static IP when it boots. Again, we have a template for the dhcpd configuration in the repo, except this one we have to update with the MAC addresses we collected from the ESXi console earlier. Once that's done we can save the config and apply it, update the firewall to allow DHCP on the internal zone, and enable and start the dhcpd service. The service is up and running, and you can safely ignore this warning message here, because we're not providing DHCP on the ens192 interface, the LAN interface.

Next up we can install and configure the Apache web server. By default Apache listens on port 80, but HAProxy will be handling ports 80 and 443 on this host, so we update the port Apache listens on to 8080. Once again we update the firewall to allow access to port 8080 on the internal zone, and we enable and start the service. We can see that Apache is running on port 8080, but we can always do a quick test with curl as well; this just prints out the HTML of the default Apache welcome page.

OK, two more services to go: HAProxy and the NFS server. Let's do HAProxy first. We can install it, and before we copy over the config we'll take a quick look at it. The top of the config is mostly defaults. Coming down the page, you can see that the HAProxy stats page is enabled, at the URI /stats on port 9000. That's not a requirement of OpenShift, but I think it helps during the install: you can watch each frontend and backend light up green as the hosts come online. Then we have the frontend and backend configuration for the kube-apiserver: everything arriving at this host on port 6443 is proxied to each of the control plane nodes as well as the bootstrap node, and later on we'll come back and remove the bootstrap node. Moving down, we have the frontend and backend for the machine config server; this is where each of the hosts pulls the machine config it boots against. Finally, at the bottom of the config, the HTTP and HTTPS frontends and backends point at the worker nodes. These are all running in layer 4 TCP mode; the reason we're not doing layer 7 HTTP here is that we just want to pass the connection through to the worker nodes and let the ingress controller decide what to do at the HTTP layer.

So let's copy that config over. HAProxy is obviously listening on a lot of ports here, so we have quite a few firewall rules to add; I'm just going to copy and paste the block of firewall rules. Then, before we enable and start the HAProxy service, we also want to quickly set an SELinux boolean.
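The exact rule block is in the repo's README, but it amounts to opening the HAProxy frontend ports on the right zones and flipping one SELinux boolean. A sketch, assuming the internal and external zone names used above, the standard haproxy_connect_any boolean, and that the machine config port only needs to be reachable from the OCP side:

    # Kubernetes API from both sides, machine config server from the OCP network.
    firewall-cmd --permanent --zone=internal --add-port=6443/tcp
    firewall-cmd --permanent --zone=external --add-port=6443/tcp
    firewall-cmd --permanent --zone=internal --add-port=22623/tcp

    # Application ingress on 80/443, plus the HAProxy stats page on 9000.
    firewall-cmd --permanent --zone=internal --add-service=http --add-service=https
    firewall-cmd --permanent --zone=external --add-service=http --add-service=https
    firewall-cmd --permanent --zone=external --add-port=9000/tcp
    firewall-cmd --reload

    # Let HAProxy connect out to the backend ports under SELinux.
    setsebool -P haproxy_connect_any 1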
Then we can enable and start the service. Finally, we need to install NFS and configure a share. If we have a look at the disk, we have about 139 GB free on the root file system, which is where we'll make the share. So we create the share directory and then export it. We can check that it worked: you can see we're now exporting the share's registry directory to any host on the 192.168.x network. Again there are some firewall rules to set for this service; if you're following along with the GitHub repo, there's a block of firewall-cmd commands you can paste. Then we enable and start the service.

OK, we've finally reached the end of setting up the environment services. Now we can get on to actually generating and hosting the install config so that we can boot the OpenShift cluster. The first thing to do is generate an SSH key pair that will be used to authenticate to each of the nodes in the OpenShift cluster; we can leave everything at the defaults. CoreOS doesn't provide a default password for the core user, so this key will be the only way to authenticate against those servers. Next we create an install directory where all of the install files will be held, and copy the install-config.yaml included in the GitHub repo into that new directory. We then need to update install-config.yaml with a couple of pieces of our own private information: on line 23 we enter the pull secret we got from the Red Hat Cluster Manager site, and on line 24 we enter the public SSH key we just generated. Then we save the file.

We're now going to generate the install files that consume that install-config, and the process is destructive, so if you want to keep a copy of the install-config file, move it somewhere safe now. First we generate the Kubernetes manifests using the openshift-install command. You can see it prints a warning that it's making the control plane schedulable by setting mastersSchedulable to true. If you want to run workloads on your master nodes you can leave it as is; if you don't, we can change it now by editing the cluster-scheduler-02-config.yml file under the manifests directory of the install dir, changing the mastersSchedulable value from true to false, and saving it. Next we generate the Ignition configs used by the bootstrap, the masters and the workers, which also produces the Kubernetes auth files. Again we use the openshift-install command pointed at the ocp-install directory, and you can see the install directory now contains an auth directory, the bootstrap, master and worker Ignition files, and a metadata file that goes along with them.

Now we need to create a directory that Apache will use to host these files, so that each of the VMs can pull them while booting. We can call that directory ocp4, and we copy everything in the ocp-install directory into it. We also need to move across the Red Hat CoreOS raw image that we copied to this server at the beginning of the setup; it's a good idea to rename it to something shorter while we're at it, just because we'll be typing it a few times soon. We change the ownership of everything inside the ocp4 directory to apache so the files aren't owned by the root user, adjust the permissions on them, and, because we're using SELinux, fix the SELinux context type on those files as well.
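Pulled together, the staging steps look something like this. It's a sketch that assumes the install directory is ~/ocp-install, the web root is the default /var/www/html, and the downloaded .raw.gz is the only one sitting in the home directory.

    # Directory Apache will serve the installer artifacts from.
    mkdir /var/www/html/ocp4

    # Ignition configs, auth files and metadata generated a moment ago.
    cp -R ~/ocp-install/* /var/www/html/ocp4/

    # The RHCOS raw image, renamed to something short to type at the boot prompt.
    mv ~/rhcos-*.raw.gz /var/www/html/ocp4/rhcos

    # Hand everything to Apache and restore the SELinux context.
    chown -R apache:apache /var/www/html/ocp4/
    chmod -R 755 /var/www/html/ocp4/
    restorecon -RF /var/www/html/ocp4/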
To confirm everything looks good we can do a quick curl test, and we can see the auth directory, the bootstrap ignition, the master ignition, the metadata.json file, the Red Hat CoreOS image and the worker ignition. That's everything we need.

Now we can use these files to deploy the OpenShift nodes. Navigate back to the ESXi console and start with the bootstrap node: we power the machine on, it loads the CoreOS installer ISO, and we hit the Tab key to bring up the boot options. These boot options are mostly the same for all of the machines, except for the ignition URL, which points to a different Ignition file depending on the type of node. Here you can see that we set the install device to sda, which is the same for all nodes; we set the image URL to the Apache web server, into the ocp4 directory, pointing at the Red Hat CoreOS .raw.gz file, which is also the same for all nodes; and finally there's the additional ignition URL setting, which also points at the Apache web server, and depending on the type of node you specify bootstrap.ign, master.ign or worker.ign. We can set up the control plane nodes in the same way; the only difference is that the ignition URL now points at master.ign. We repeat that config for all three control plane nodes and then start them.

We start the bootstrap machine first. You can see it starts to pull down the image and then installs it as the OS, using Ignition to configure the host on first boot. We can do the same with the control plane nodes now. We can also monitor the process from the services node with a command; here I've specified the debug log level, but error and info are also options. While we're waiting for feedback we can go and start the worker nodes as well. You can see we're now waiting up to 40 minutes for bootstrapping to complete. This is also a good time to bring up the HAProxy stats page, so we can watch the frontends and backends turn from red to green. There isn't really a strict order to any of this, because the different components just wait for their dependencies; you can start the machines in a different order, but if the dependencies aren't met before a timeout is reached you can have trouble with the install. If you get bored, you can SSH into any of the nodes and watch the containers come up and go away as their tasks finish.

Now we can see that bootstrapping has completed; it took a total of 19 minutes and 21 seconds. On the HAProxy stats page the control plane nodes are all up and the bootstrap node is now red, and the info message in the terminal says it's safe to remove the bootstrap node. So we come back to the services machine, update the HAProxy config to remove the bootstrap node from the backends, and restart the service; the bootstrap machine disappears from the stats page. We can then listen for the install-complete event, again using the openshift-install command.
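Both the bootstrap monitoring command used a moment ago and the install-complete one come from the same binary. A sketch, assuming the ~/ocp-install directory from earlier:

    # Follows the bootstrap process and exits once the control plane is up
    # and the bootstrap node is safe to remove.
    openshift-install wait-for bootstrap-complete --dir ~/ocp-install --log-level=debug

    # Waits for the cluster operators to settle, then prints the console URL
    # and the kubeadmin password.
    openshift-install wait-for install-complete --dir ~/ocp-install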
Because we set the master nodes to not be schedulable when we were building the configuration, we need to bring the workers up for the installation to complete. This is because the OpenShift router will be deployed to the workers, and some cluster operators rely on that router, like the authentication and console operators. Once the worker nodes are up the router can come up, and then all the cluster operators can finish successfully. So we can leave this command running, make another connection to the OCP services host, and join the worker nodes in.

First we export the KUBECONFIG environment variable, pointing at our kubeconfig. Once we've done that we can test access to the control plane, and you can see we're able to authenticate as the system:admin user. Now we need to look at the certificate signing requests coming in from the nodes. You can see we've already got two pending requests; I'm going to paste a command that approves all pending requests, and it approves two CSRs. We just need to keep an eye out for more coming in, and there we go, we have worker one's as well, so we run the same command again to approve those. We can keep watching the HAProxy stats page, and we can also run oc get nodes: the cluster now knows about the worker nodes, although they're not Ready yet. As long as all of these CSRs get approved, the cluster should continue installing successfully, and we can come back and wait for the install-complete command to finish. OK, that's all green now, which is good.

While we're waiting for that to complete, we can configure the storage for the image registry. Back on our other connection we can have a look at the cluster operators: the console and authentication operators are still unavailable, and it says the image registry is available, but that will change, so we may as well sort out the storage now regardless. We edit the image registry operator configuration: the managementState has been removed, so we change that to Managed, and we also update the storage section, which is currently an empty object. (This is where the .vimrc settings I pointed out at the start of the video would help, but I didn't save them to the shell profile, so they're not taking effect right now.) We just add an empty pvc claim and save, and that should create a PVC for us; you can see we now have a PVC in a Pending state that was created 19 seconds ago.

What we want to do now is create a PersistentVolume for that PVC to bind to. The manifest for it is in the GitHub repo we cloned earlier: a pretty standard NFS PersistentVolume with the ReadWriteMany access mode and 100 GiB of space. We can have a quick look at the image registry storage PVC as well and confirm it matches: it's requesting the ReadWriteMany access mode and 100 GiB of storage, so that should be fine.
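Creating the volume and watching the claim bind only takes a couple of commands. A sketch, with the path to the repo's manifest assumed (check the repo for the real filename); the PVC name is the one the registry operator creates by default.

    # Create the NFS-backed PV from the manifest in the repo; it declares
    # ReadWriteMany access and 100Gi, matching the registry's claim.
    oc create -f ~/ocp4-metal-install/manifest/registry_pv.yaml

    # The registry operator's claim should move from Pending to Bound.
    oc get pv
    oc get pvc image-registry-storage -n openshift-image-registry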
We can then create that PersistentVolume, and we can see that it is now Bound. We can also see that all the cluster operators are coming up. So we come back to the install-complete command, and it still looks like it hasn't finished. OK, I'm not sure why, so we'll give it another minute and create the first admin user while we're waiting; otherwise I think we can ignore it, because all the cluster operators are up and running anyway. Oh, there we go. All right, great, that's complete now, and we can see the installation was successful.

A couple of things to note from this output. Remember to export the kubeconfig each time you come into the server, if you haven't added it to your user's shell profile. You also want to take note of the console address, so I'll copy that, and of the kubeadmin user's password; we'll need that in a moment as well. If you don't have this output for some reason, don't worry about the password too much: there's another way to get it, in the auth directory from when we originally generated the configuration files for the cluster, in the kubeadmin-password file.

We don't really want to use the kubeadmin credential all the time, so it's a good idea now to create the first admin user. I've supplied a manifest for that in the repo as well, so we can have a quick look at it. This YAML file actually contains two manifests. The first just creates a secret, and the content of that secret is an htpasswd entry; you can see I've left a note that I generated it with htpasswd using the username admin and the password password, so if you want your own credentials, substitute that text. Then, at the bottom, there's an OAuth config that maps the htpasswd secret to an identity provider. We can apply that now and safely ignore those warnings. We now have an admin user, so next we give it cluster-admin permissions; the user itself will appear once we log in.

Now we can finally access the OpenShift console, as the very last step. Ideally we'd like to do that from the local workstation, so to set that up we add an entry to our /etc/hosts file; we just paste in the entry I've included in the README in the GitHub repo. Basically, it says that any of these DNS names we try to navigate to should be directed to the 192.168.0.96 host, which we know is our OCP services machine; once traffic hits it, it hits HAProxy, which forwards the request to the appropriate endpoint. Now, /etc/hosts doesn't support wildcards, otherwise we could just use *.apps.lab.ocp.lan, so if you don't want to add a new entry to this file for every new service you expose on the cluster, you'll want to sort out DNS properly; perhaps the DNS server we just built on the OCP services machine could serve DNS to your local workstation as well. But for now, this is an easy way to get up and running.
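For reference, the entry is just the services machine's LAN address followed by every cluster hostname you want to reach from the workstation, along these lines. The api, console and oauth names follow OpenShift's standard route naming for a cluster called lab under ocp.lan; the README's entry may list additional routes, such as the monitoring ones.

    # Append the cluster hostnames to the workstation's hosts file.
    echo '192.168.0.96 api.lab.ocp.lan console-openshift-console.apps.lab.ocp.lan oauth-openshift.apps.lab.ocp.lan' \
      | sudo tee -a /etc/hosts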
So we save that, grab the console URL again, come back to Chrome, and navigate to the console address. We now have the option of logging in directly as the kubeadmin user, or via the OAuth identity provider where we configured htpasswd access. We'll choose htpasswd, enter admin and password, and there you go: a fully functioning, highly available OpenShift, and all green ticks means it's a healthy cluster. And there we have it, OCP running in a bare metal environment.

You're going to want to leave the cluster running for at least 24 hours, because the first certificate rotation happens within the first day. Once that completes, you're free to shut the cluster down and bring it up whenever you like; just remember to shut the worker nodes down first, then the control plane, and then you can shut the proxy down. When you're bringing it back up again, do that in reverse: bring the proxy up, then the control plane, and then the worker nodes. So go enjoy building and running all your containerized workloads, and if you enjoyed the content, please leave a like and stay tuned for more videos coming soon. Thanks, bye.
Info
Channel: Ryan Hay
Views: 32,029
Rating: 4.9726496 out of 5
Keywords: openshift, kubernetes, ocp, upi
Id: d03xg2PKOPg
Length: 48min 30sec (2910 seconds)
Published: Fri Aug 28 2020