How to Build an Awesome Kubernetes Cluster using Proxmox Virtual Environment

Captions
Thank you so much for tuning in to Learn Linux TV, your source for Linux-related fun and learning. I love producing Linux-related content for you, but I can't do it alone. If the content on this channel has been helpful to you, please consider supporting Learn Linux TV, and one way you can do that is by becoming a patron, which will give you access to exclusive perks. Also be sure to check out my latest book, Mastering Ubuntu Server, Fourth Edition, and while you're here, be sure to subscribe; new content is uploaded each and every week. Thank you so much for your support. Now let's get started with today's video.

Hello and welcome back to Learn Linux TV. Of all the technologies I've covered on this channel, Proxmox is among my favorites. But do you know what else I like? Kubernetes. So how about we combine the two? That's exactly what we're going to do in today's video: I'll walk you through setting up a Kubernetes cluster every step of the way, and we'll build it completely from scratch.

You might be wondering why you would use Kubernetes when containers are built right into Proxmox. Why not use the built-in solution, and is one better than the other? Not really; they're both great solutions, so I'm not going to tell you to use one over the other, because it ultimately depends on your use case. But I know a lot of you are interested in learning Kubernetes, and that can be reason enough in and of itself; even if not, Kubernetes is a ton of fun and something I recommend you learn.

One thing to get out of the way first: Proxmox is not actually required for this tutorial. If you have a virtualization solution of your own, maybe VMware or one of the other technologies out there, the commands I'm going to give you to build the cluster are the same regardless of the virtualization platform. I will be using Proxmox for the examples in this video, but what you really need are at least two virtual machines running Ubuntu Server. I already have a video on this channel that shows how to install Ubuntu Server, and more specifically I'll be showing the process with Ubuntu 22.04, so that's what the virtual machines that become part of the Kubernetes cluster will be running.

On my end, I'll be setting up virtual machines running Ubuntu 22.04 from the very same template I created in a recent video, where I showed you how to use an Ubuntu cloud image as a Proxmox template. If you don't already have that set up, I recommend following that video first, because it makes everything else that much easier. Even if you don't want to do that, you can just set up a few virtual machines running Ubuntu and you'll be good to go; if nothing else, follow along with me and I'll walk you through the process every step of the way.

Before I get into that, I want to mention something important. No, this video is not sponsored, but I do want to let you know about my latest book, Mastering Ubuntu Server, Fourth Edition, a book I'm incredibly proud of.
The fourth edition is due out very soon; in fact, it might even be out already by the time this video is edited and released. Just like the third edition before it, the new version will teach you everything you need to know to hone your Linux server management skills, but this time it's been fully updated for Ubuntu 22.04. This video was actually inspired by chapter 18 of the book, which covers container orchestration. I worked really hard on this book and I can't wait for you to be able to read it. If you go to ubuntuserverbook.com, a website I've set up specifically for this book, you'll find links to the various places you can get it from. I'd really appreciate it if you check it out.

With that out of the way, we're going to set up a Kubernetes cluster on Proxmox. I'll have time codes down below so you can jump right to the section that most interests you or best reflects where you're at in the process. In the next section I'll give you more information about what you'll need to get started, and as we go through the remainder of the video you'll build your very own Kubernetes cluster. By the end, you'll have a cluster that's all set and ready to run your containers, so let's dive in.

In this section, let's talk about what you need to get started. Ubuntu Server 22.04 is the base we'll be building the cluster on, so you'll need at least two instances of it: one for the controller and another for at least one node. If you want to set up additional nodes, I highly recommend it, because that's the power of Kubernetes: your containers can run across different nodes, which helps spread the load. Either way, at minimum you'll need one virtual machine that will become the controller and one that will become a node.

When it comes to how to spec these virtual machines, for the controller I recommend at least two cores and at least two gigabytes of RAM; technically one gigabyte will work, but that's a very low floor, so aim for two or more. For the nodes, at least one gigabyte of RAM, and ideally two. You can go higher, but on the nodes it's usually better to spin up more of them than to give a single node more memory, since spreading the load is the whole point of Kubernetes.

The individual VMs should be running Ubuntu Server 22.04. It's absolutely possible to run a Kubernetes cluster on a different distribution, and many people do, but I can't cover every single distribution in one video. Ubuntu is a good way to get started, and from there you can experiment with other distributions if you like. Just keep in mind that the commands I'm going to give you in this video were only tested against Ubuntu Server 22.04.
As I mentioned in the intro, I'm not going to walk you through installing Ubuntu 22.04, because I've done that in a separate video. In another recent video on this channel I showed you how to use Ubuntu Server cloud images as a template in Proxmox, and I highly recommend doing that first, because when I create my instances that's exactly what I'm going to do: I'll use that very same template, which just makes things that much easier. If you don't have Ubuntu Server set up yet but you do have that template on your Proxmox cluster, you can follow along with me and see the process. If you're using something else, or you don't want to create a template, then I'll leave it up to you to spin up Ubuntu Server on your own. In the next section we'll walk through using that template to spin up a couple of Ubuntu Server instances, and from there we'll actually build the cluster.

Here we have my actual Proxmox cluster. As you can see, I have two nodes in it, pve1 and pve2. I'm not using high availability on my end because you need at least three nodes for that, but this cluster has served me quite well, and this particular implementation of Proxmox is going to serve as the example as I walk you through building your own Kubernetes cluster. If I expand this node, you can see that I have an Ubuntu 22.04 template; as I mentioned earlier, this is the template from a previous video where I showed you how to build a Proxmox template from an Ubuntu cloud image.

When I build the Kubernetes cluster I'm going to have three nodes in it, but for now I'll just clone this template twice, and that will be good enough until we need more machines later. I'll right-click the template and click Clone. It can run on the same host; that's fine. For the VM ID I'll give it 850; I want it to stand apart from the others and keep the clones sequential. You don't have to do that, and it really doesn't matter which ID you use (you could accept the default), but 850 places everything above the template and after the most recent VM I've spun up. For the name I'll use k8s, the abbreviation for Kubernetes, then a hyphen, then ctrl, the abbreviation for controller. I'll create it as a full clone and click Clone, and while that one is cloning I'll start the next clone and give it the ID 851.
For the name of the second clone I'll use k8s-node, which should be descriptive enough, and just like before I'll make it a full clone; we can see on the left side that it's building.

There are some additional tweaks we should consider for these instances. Under Options, make sure Start at boot is enabled. I'm making an assumption here, but if you're running a Kubernetes cluster you probably want it running all the time, and you might also want it to start automatically when Proxmox itself starts up. I set the start-at-boot option in the template, so I didn't have to configure it here, but it's something you may want to check. The Start/Shutdown order is another option you may want to configure; I'm leaving it alone for the controller, but for the node I'll click Edit and add a 15-second startup delay to ensure the controller starts first. There's no actual requirement for this, since the nodes will connect to the controller as soon as it's available, but in my opinion it's good form to have the controller ready before the nodes come up. I also almost forgot the QEMU Guest Agent option: I recommend setting it to enabled. We'll install the guest agent later in the video, but it's a good idea to turn the option on now so the setting is already in place when we install it. You can always click the Edit button to change any of these options later, but so far I think I'm good to go.

Let's start the controller and the node. We can see them boot, and cloud-init is doing its thing, so we'll wait for that to finish; the SSH host keys are regenerated at the end of the cloud-init process. Once it's done I can press Enter and log in, and then run sudo apt install qemu-guest-agent; you can accept the defaults for any prompts, since nothing is really running on these servers yet. Then we'll go over to the node instance and do the exact same thing: log in and install the same package, and the QEMU guest agent is all set on that instance as well. The exact commands are sketched below.

That concludes this section; I wanted to get all of the Proxmox-specific steps done in one place, which we've done here. In the next section we'll make some preliminary tweaks inside these instances to prep them before we install Kubernetes, and after that we'll actually start building the cluster.
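Collected for reference, here's the guest agent installation on each VM as described above; the systemctl line is my own addition to make sure the service is enabled, since the agent generally only starts reporting to Proxmox once the VM option is enabled and the guest has been restarted.

  # Run inside both the controller and the node VM
  sudo apt update
  sudo apt install -y qemu-guest-agent

  # Optional: ensure the agent service is enabled and started now;
  # it normally comes up on its own after the Proxmox "QEMU Guest Agent"
  # option is enabled and the VM has been power-cycled.
  sudo systemctl enable --now qemu-guest-agent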
In this section we're going to do a little preparation before we start building the cluster, just some odds and ends we want taken care of on each of these instances before we build them into what they're going to become. The first thing I'll do is find the IP addresses of the instances; in Proxmox I can click Summary and it shows the IP address, so the node has an address ending in .214 and the controller one ending in .213. Now that I know the addresses, I'll SSH into both instances from my workstation: one terminal tab connected to the controller at 10.10.10.213, and another connected to the node.

The first thing I want to do is install all available updates. Depending on your setup you might not have to; my templates use cloud-init, and when an instance boots for the first time cloud-init makes sure all updates are installed, so I probably have nothing to install, but it's a good idea to check anyway. I'll run sudo apt update && sudo apt upgrade on each instance to make sure everything is in fact up to date. There are a few packages held back here, but we're going to ignore those.

Next, we want to make sure the controller has a static IP. You might want both instances to have one, but I'm going to turn the node instance into a template later, so I won't give it a static IP just yet; I'll take care of the controller now, and later on you'll need to do the same for your node instances. I always recommend static leases (DHCP reservations) over static IPs, but depending on your configuration that may not be a possibility, and I know a lot of you prefer the tried-and-true approach of configuring a static IP, so I'll walk you through the process.

Change directory into /etc/netplan. There's a file in there whose name might be different for you; there are several names it can have depending on how you set up Ubuntu Server, so don't worry if yours differs, that's perfectly normal. Before we modify it, we'll make a backup copy of the file with a .bak extension, so if we make a mistake we have something to go back to, and then we'll open the original (the only file in that directory) with sudo nano. The update and backup commands are sketched below.
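Gathered in one place, with the caveat that 50-cloud-init.yaml is just the filename my cloud-image template happens to use; substitute whatever filename you actually find in /etc/netplan.

  # On every instance: install any pending updates
  sudo apt update && sudo apt upgrade

  # On the controller: back up the existing netplan config before editing it
  cd /etc/netplan
  ls                                                # note the actual filename here
  sudo cp 50-cloud-init.yaml 50-cloud-init.yaml.bak
  sudo nano 50-cloud-init.yaml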
Here we have the file. What I've done is remove most of the configuration underneath the eth0 interface, and I'm going to hand-type the rest. I'll have a blog post linked in the description below with config file samples and all of the commands I'm using, so if you want to grab the sample file you can; just make sure the interface name matches your actual interface name, otherwise it won't work. For me it's eth0, but if yours is different, change it.

Under the interface we type addresses, then in brackets the IP address; in my case that's 10.10.10.213 for the controller. I'm just using the same IP that DHCP would assign it anyway; on your end make sure it's outside your DHCP range and not used anywhere else. We also include a subnet identifier, /24 for my network. One level down, lined up underneath, we add nameservers with its own addresses list; the name server in my case is 10.10.10.1. Then, lined up with nameservers, we add routes, with a default route via the gateway; as you can see, in my case the default gateway is the same IP address as my primary DNS server. Pretty simple. Save the file with Ctrl+O and Enter, then exit with Ctrl+X.

Next we should test this; we definitely don't want a situation where the network can't come up because of a syntax issue. To test and apply it, run sudo netplan try. If there were a syntax error it would have told me, but there isn't, so I'll press Enter to accept it. Believe me, if there's a problem you would know, because it prints error messages; if you see errors, you probably have a typo, so be careful. If there are no complaints when you apply it, you should be good to go. On my end, since I use DHCP reservations, I'm going to move the backup copy back over the original file name to undo the changes and apply that, so I'm back to the default settings. On your end, if you're not going to create a template for your node instance like I am, you can go ahead and set a static IP for that as well. A sample of the static configuration is sketched below.
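Pieced together from the walkthrough above, the finished file would look roughly like this; the filename, the interface name (eth0 here), and the addresses are specific to my environment, so adjust them for yours, or grab the sample from the linked blog post.

  # /etc/netplan/50-cloud-init.yaml (filename and interface name may differ)
  network:
    version: 2
    ethernets:
      eth0:
        addresses:
          - 10.10.10.213/24
        nameservers:
          addresses:
            - 10.10.10.1
        routes:
          - to: default
            via: 10.10.10.1

  # Test and apply the change:
  sudo netplan try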
Another thing we want to check is whether the hostname is set. Since I'm using cloud-init with a Proxmox template, my hostname is definitely set already, and to prove it I can cat /etc/hostname; it shows k8s-ctrl, exactly what I wanted. If we look at /etc/hosts, we see the 127.0.0.1 localhost line, and above it 127.0.1.1 with the controller's hostname. If yours doesn't have the hostname, go into those files and make the adjustments accordingly.

The next thing we'll do is set up a container runtime, which we need because without one we can't run any containers, so we may as well take care of it first. Going forward, every command needs to be run on every instance unless I tell you otherwise, so in my case both the controller and the node; if you have more than one node, run the commands there as well. When we get to anything specific to the controller, I'll let you know.

To install the container runtime, run sudo apt install containerd on each instance. Once that's done, check its status with systemctl status containerd just to make sure it's running; I don't see any reason why it wouldn't be, since we haven't introduced any custom configuration yet, and in my case it shows active (running) near the top of the output on both instances (press q to get back to the prompt).

Next we'll create a brand new directory, sudo mkdir /etc/containerd, and inside it we'll place the default configuration for containerd. To generate that default configuration, run containerd config default and pipe it into sudo tee /etc/containerd/config.toml; the tee command prints the config to the screen and also writes it to the file. Do the same on the node, and now we have the config file on both instances. The default config is mostly fine, but there's one change to make, so open the file with sudo nano. To find the right spot quickly, hold Ctrl and press W and search for part of the line that comes just before the one we want, runc.options; that takes us right there. Scroll down a little from that line and you'll see the SystemdCgroup option: set it to true, then save with Ctrl+O and Enter. Do that on both instances, and that part's done. The containerd setup is sketched below.
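Put together, the container runtime setup looks like this; the sed line is my shortcut for the SystemdCgroup edit and assumes the stock config generated by containerd on Ubuntu 22.04, so feel free to make that change in nano instead.

  # Run on every instance (controller and all nodes)
  sudo apt install -y containerd
  systemctl status containerd          # should show "active (running)"; press q to exit

  # Generate the default containerd configuration
  sudo mkdir -p /etc/containerd
  containerd config default | sudo tee /etc/containerd/config.toml

  # Tell containerd to use the systemd cgroup driver. Either edit config.toml
  # by hand (search for runc.options and set SystemdCgroup = true), or flip
  # the value with sed as a shortcut:
  sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml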
The next thing we want to do is ensure that swap is disabled. You can find out whether you have swap by typing free -m; in my case the last line is all zeros for swap, and the same is true on the node. Normally swap is a good idea, but for our use case we want it disabled, because Kubernetes will complain if swap is enabled, so we need to make sure it isn't. Depending on how you created your instances you might actually have swap, and one way to disable it is by editing /etc/fstab with sudo nano: if there's a line in there for swap, comment it out with a hash symbol in front of it. I don't have swap enabled at all, and it isn't even listed in my fstab, so there's nothing for me to do; I'm all set for that requirement.

Next we'll edit another file, /etc/sysctl.conf. Go all the way down and look for the ip_forward line, net.ipv4.ip_forward; this enables forwarding, which we definitely want, and we do so by simply uncommenting that line. Do the same on the node.

Next up we have yet another file, and that's going to be a common theme in this tutorial; we have a lot of files to edit. This one lives under /etc/modules-load.d and is called k8s.conf. Inside the file, type br_netfilter, save it, and do the same on the other instance. Now the bridge netfilter module will be loaded the next time we boot our instances. A full explanation of br_netfilter is beyond the scope of this video, but a short summary is that it ensures bridging is fully supported within the cluster. Sure, it's more technical than that, but bridging is critical in the cluster since it facilitates communication cluster-wide, so we want to make sure it's fully enabled. These preparation steps are sketched below.

At this point we've worked through all of the preliminary tweaks I wanted you to implement, so reboot these machines, and when they come back up we'll be ready to go. In the next section we go even deeper into building the cluster, so as soon as your instances have rebooted and you've confirmed they're back up and ready, we can continue.
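The remaining preparation steps, gathered in one place and to be run on every instance; the swapoff line is only needed if free -m showed active swap, and the nano steps can of course be done in whatever editor you prefer.

  # Run on every instance (controller and all nodes)

  # 1) Disable swap if it's present
  free -m                                   # all zeros on the Swap line means nothing to do
  sudo swapoff -a                           # only needed if swap is currently active
  sudo nano /etc/fstab                      # comment out any swap line with a leading '#'

  # 2) Enable IP forwarding
  sudo nano /etc/sysctl.conf                # uncomment: net.ipv4.ip_forward=1

  # 3) Load the br_netfilter module at boot
  echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf

  # 4) Reboot so the changes take effect
  sudo reboot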
In this section we're actually going to install Kubernetes. In the previous section we implemented the tweaks needed to support the cluster, so now we're ready to start the installation. Just to make sure we're on the same page, you should have two instances running Ubuntu Server 22.04: one will be the controller and the other will represent a node. On my end I'll reconnect to each of these VMs, and now we're in.

First we're going to add the repository we'll need for Kubernetes, and to support that repository we need its GPG key, so let's install that. All of these commands will be in the blog post linked below, so feel free to copy and paste them from that article; as you're about to see, some of them are a bit on the longer side, so you might want to do exactly that. I'll paste in the command, which saves the key locally under the /usr/share/keyrings directory with the file name kubernetes-archive-keyring.gpg. That looks good, and I'll do the same on the node; I want to keep both instances consistent up to this point, and now we have the GPG key on both VMs.

The next command is a longer one as well, and it installs the repository we'll be pulling the Kubernetes packages from. Some of you paying close attention will notice that the release code name it refers to is xenial, which is absolutely not the code name for the latest version of Ubuntu Server. Yes, I'm aware of that, but I'm not in charge of what they call their repository; this is correct. I would hope they'd update the name, but they haven't, not even since the previous edition of the book, so that's the name, and this is the command to install the repository. After adding it, run sudo apt update to make sure everything works with no problems; everything looks fine here, so do the same on the node, and now we have the repository.

Next we'll install the packages required for Kubernetes: sudo apt install kubeadm kubectl kubelet. What do these packages actually do? kubeadm gives us tools to bootstrap the cluster; for example, it can initialize a brand new cluster, help join a node to an existing cluster, and upgrade the cluster to a newer version down the line. kubectl provides the kubectl command, the command-line utility we'll use to manage and interact with the cluster; you'll see examples of it later in the video. kubelet is like an agent: it facilitates communication between the nodes and provides an API we can use for additional functionality. Sure, that's an overly simplified description, but it's good enough for our purposes. Press Enter to install, do the same on the other instance, and wait for it to finish. A sketch of the repository and package commands is below.
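Reconstructed from the description above and the upstream apt instructions that were current when this video was made, the commands look roughly like this; the key URL and repository line may have changed since, so prefer the versions in the linked blog post.

  # Run on every instance

  # Fetch the signing key for the Kubernetes package repository
  curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
    sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg

  # Add the repository (yes, the suite really is named "kubernetes-xenial")
  echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | \
    sudo tee /etc/apt/sources.list.d/kubernetes.list

  # Refresh the package index and install the Kubernetes tooling
  sudo apt update
  sudo apt install -y kubeadm kubectl kubelet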
The next command I'm going to have you run is specific to the controller, but before we get to it, I'm going to take a quick break from building the cluster to create a template of the node instance. That way, whenever I want to spin up another Kubernetes node, all I have to do is clone additional VMs from that template, and since the Kubernetes packages and repository are already installed in this instance, that saves us a lot of work every time we deploy a new node.

To do that, on the node I'll run sudo cloud-init clean, and to be sure that everything related to cloud-init has been reset, I'll run sudo rm -rf /var/lib/cloud/instances (be very careful with this command). There's one more step, and I'm not really sure why it's even necessary: cloud-init is supposed to reset the machine ID, but for one reason or another it just doesn't; it might be something with the Proxmox implementation. We can easily fix that by running sudo truncate -s 0 /etc/machine-id, which erases everything within that file. If you don't clear it out, every instance you create from the template will end up with the same IP address, and we definitely don't want that. Next, run sudo rm /var/lib/dbus/machine-id, and then create a symbolic link from /etc/machine-id to /var/lib/dbus/machine-id. Double-check it: the link points back to /etc/machine-id, which is now empty. These template cleanup commands are sketched below.

With that out of the way, shut the node down, wait for it to power off, and convert it to a template. As you can see, the node instance has been converted, so now we can make some clones of it: right-click, Clone, give it an ID of 852 and the name k8s-node-1, make it a full clone, then do it again with 853, and one more time, or however many times you want, whatever you have the resources for, a full clone each time.

Before starting those nodes, I want to make sure I've set up the resources appropriately, which I almost completely forgot. The controller is running right now, and nothing I set here will take effect while it is, so I'll shut it down first, then set its RAM to at least 2048 MB (I'm going to go with 4096 MB, or 4 GB, which is more appropriate for something that's going to be doing a lot of work) and its CPUs to at least two (I'll set four, to make sure it doesn't slow down), then start it back up. For the nodes, one CPU and one gigabyte of RAM is technically fine (I've tested with those specs and everything worked), but I'm setting them to two gigabytes of RAM and two cores. This is up to you and your environment; you could even consider four gigs of memory for a node with a very memory-hungry container, but at a certain point it makes more sense to spin up more nodes than to keep increasing memory. Let's start these up, and we'll see cloud-init do its thing, interrupting the output of the boot process, which is perfectly normal. Then we'll check the IP addresses to make sure each node has a different one: checking the Summary for the first, it received an IP address ending in .215.
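For reference, here is the node-template cleanup described a few paragraphs up, collected in one place; run it only on the node you're about to convert, since it wipes cloud-init state and the machine ID.

  # On the node VM only, just before converting it to a Proxmox template
  sudo cloud-init clean
  sudo rm -rf /var/lib/cloud/instances        # be very careful with rm -rf

  # Reset the machine ID so clones don't all get the same DHCP lease
  sudo truncate -s 0 /etc/machine-id
  sudo rm /var/lib/dbus/machine-id
  sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
  ls -l /var/lib/dbus/machine-id              # should point at the now-empty /etc/machine-id

  # Shut down, then convert the VM to a template in the Proxmox UI
  sudo poweroff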
The second node received an IP ending in .218, and the third one ending in .217, so each of the node instances is ready to go, and we also have the template in case we want to add more nodes later; we're in a pretty good position. We'll come back to the node instances shortly, but for right now we need to reconnect to the controller.

On the controller, the next step is to initialize the cluster and its pod network. I'm going to copy the kubeadm init command from the blog article, and you can do the same, but don't press Enter just yet: there are a few values you'll want to change to match your environment. First, adjust the control-plane endpoint IP to be the same as the controller's IP address; in my case that's 10.10.10.213. For the node name, make sure it matches the controller's hostname; the command I pasted had the wrong name, so I'll set it to k8s-ctrl, just as I named the controller. The last IP range in the command, the pod network, we definitely don't want to change: if we do, things aren't going to work. You can customize the pod network, but then there are a lot of other things you'd need to customize as well, so you'd just be creating extra work for yourself, and I don't recommend it.

This command creates and initializes our Kubernetes cluster, and it also prints a command we can use to join nodes to the cluster. Once you've made sure the values match your environment, press Enter to get the process started. You can ignore the warning that shows up; it won't slow us down. As it says, it might take a minute or two, so we'll wait for it to finish. And now we actually have a real Kubernetes cluster. It's not all that useful yet, because we haven't joined a single node, but the cluster technically exists, and that's a great step forward. A sketch of the init command is below.

Now copy the join command from the output and set it aside somewhere, for example in a text editor (I'll open gedit and paste it there); we're not going to use it right now, but we'll need it later. As an aside, you definitely do not want to show your join command in clear text to anyone, especially not in a YouTube video that thousands of people will see. In my case I'll be destroying this cluster and rebuilding it later, so this particular cluster won't exist by the time you're watching, which is why I don't feel any shame showing this output, but don't get in the habit of it; keep it private.
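The full init command is in the linked blog post; reconstructed from the description above, it looks roughly like this, where the endpoint IP and node name are mine and the pod network CIDR is assumed to be the Flannel default of 10.244.0.0/16 (that last range is the one you're told not to change).

  # On the controller only
  sudo kubeadm init \
    --control-plane-endpoint=10.10.10.213 \
    --node-name k8s-ctrl \
    --pod-network-cidr=10.244.0.0/16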
If we scroll up in the init output, there are three more commands we need to run next, all of them against the controller: these give our local user access to manage the cluster, which is very useful. I'll paste in those same commands from the output; all you should have to do is scroll up to get them, or grab them from the blog post, since they don't change from one installation to the next. The first creates the .kube directory in your home directory, the second copies the admin config into it, and the third makes your user the owner of that config. From this point forward we should be able to manage the cluster without needing root; our local user has permission to do the administrative tasks we'll need. So far so good.

At this point we can run kubectl, the very first kubectl command I've given you in this video, and check which pods we have in the cluster: kubectl get pods, checking all namespaces. We have a handful running, but the two at the top, the CoreDNS pods, are waiting on something: an overlay network that we need to create. Let's take care of that so the CoreDNS pods get what they need to be happy. I'll paste in the command, which grabs the Flannel overlay network configuration (a YAML file) and applies it; that gives CoreDNS what it's looking for, and the output certainly looks promising. Run the previous command to check the pods again, and the top two are now running. Sometimes it can take a little while, so if yours still says Pending, wait a few minutes and check again; when everything says Running like you see on my screen, you're good to go. A sketch of these controller-side commands is below.
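Here's a sketch of those controller-side steps. The three access commands are the ones kubeadm prints at the end of a successful init; the Flannel manifest URL is the path the project has published its manifest under, but grab the exact command from the linked blog post if in doubt.

  # On the controller, as your regular user (these come from the kubeadm init output)
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  # Check the pods in every namespace; the CoreDNS pods will be Pending for now
  kubectl get pods --all-namespaces

  # Apply the Flannel overlay network
  kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

  # After a minute or two, all pods (including CoreDNS) should show Running
  kubectl get pods --all-namespaces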
Next, let's join some nodes to the cluster. If we run kubectl get nodes right now, all we have is the controller, because we haven't added a single node yet. I'll open new tabs and SSH into each of the three nodes I cloned from the template: the first at the address ending in .215, the second ending in .218, and the third, if I remember correctly, ending in .217. Now we're connected to all three, and we're going to join them to the cluster.

On the first node, I'll grab the join command we set aside earlier and paste it into the terminal, prefixed with sudo, and press Enter. It looks like we have an error here, and what that usually means, at least whenever I've run into it, is that too much time has passed since the join command was generated; there's a time limit on it. To attempt to fix that, back on the controller I'll run kubeadm token create --print-join-command, just as you see on the screen, which regenerates the join command. I'll grab the new command, move over to the node's tab, type sudo, paste it in, and this time the output is a lot more promising, so let's run the same thing on the other nodes as well. So far so good.

Back on the controller, run kubectl get nodes, and we can see two of the nodes starting to check in; they're not Ready quite yet, but they're in the process of becoming ready. Sometimes it just takes a few minutes, so keep running the command until the status changes: first two nodes become Ready, then the third catches up, and now all three nodes are in the cluster. Again, if your join command timed out, kubeadm token create --print-join-command is what regenerates it so you can add the remaining nodes. The join steps are sketched below.

At this point we're actually finished: we have a Kubernetes cluster. But you might be wondering what you can do with that cluster and how to run something on it, so in the next section I'll show you a quick example of running an nginx container along with a NodePort service, which will at least give you a start when it comes to running things on your cluster.
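A sketch of the join workflow; the token and hash in the join line are placeholders, since the real values come from your own kubeadm output.

  # On the controller: regenerate the join command if the original has expired
  kubeadm token create --print-join-command

  # On each node, run the printed command with sudo. It looks something like
  # this (use the token and hash from your own output, not these placeholders):
  sudo kubeadm join 10.10.10.213:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

  # Back on the controller, watch the nodes come online; repeat until all show Ready
  kubectl get nodes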
In this section we're going to launch a container within our cluster. To do that, we'll create a YAML file containing instructions that tell Kubernetes how to run the container. On the controller I'll use nano and call the file pod.yml, then paste in the contents, which I'll go over fairly quickly; a full walkthrough of Kubernetes YAML files is beyond the scope of this video, but this one is pretty simple. The apiVersion is v1, and it needs to be set to exactly that. The kind is Pod, because we want to create a pod, and in case you didn't already know, a pod is what containers run inside of in Kubernetes; a pod can hold more than one container, but we're only going to have one, nginx. For metadata I'm setting the name to nginx-example, which is just the name of this particular pod, and labels are key/value pairs, so I'm creating a label called app and setting it to nginx. Then we have the spec section, where we set the specification for the container we want to create: we're pulling the nginx container from the linuxserver.io registry. If you're curious why I chose linuxserver.io, besides being an awesome registry, they also have Raspberry Pi compatible containers (actually, I think all of them are), so even if you're running a Kubernetes cluster on Raspberry Pis, as we've done on this channel before, you'll be well served, and if you're not using a Pi it's still fine because they have x86 containers as well. We set a containerPort of 80, the default port for nginx anyway, and give that port the name nginx-http. Save the file, and we have our pod config.

Now let's apply it: kubectl apply -f pod.yml. If this works, it will deploy a container into our cluster, and indeed it simply tells me the pod was created. Let's take a closer look with kubectl get pods: the nginx-example pod is ready to go already. We can also add -o wide, which additionally tells us which node the pod is running on; it's running on node 1, in case you're curious. So we have a real pod and container running in our cluster, which is awesome, but one thing we can't do yet, at least not normally, is access that container: the pod network is segregated, and there's no door, so to speak, that we can go through to reach the container from our local network. We'll need to set something up for that, but there is one way to test that it's working: run kubectl get pods -o wide again, grab the pod's IP address (an address within the pod network), and run curl against it. Check this out: we get the HTML of the sample page that ships with the nginx container, which means it's actually working.

But wouldn't it be a lot better if we could access this from a web browser on our network? That's kind of the whole point. So let's create another YAML file, service-nodeport.yml, and just like before I'll paste in its contents. As you can see from the second line, this time we're creating a Service; there are different types of services, and a full walkthrough goes beyond the scope of this video, but I plan on covering Kubernetes in more detail later in the life of the channel, so stay tuned. We give the service the name nginx-example, and its type is NodePort. We map port 80 to nodePort 30080: port 80 is the port the nginx container itself uses, and 30080, which I chose more or less at random, is the port we'll use to access the container from our local network, pointing at the nginx-http port declaration we added in the previous file. The selector is the label we created earlier, app set to nginx. Save the file, and just like before, apply it with kubectl apply -f service-nodeport.yml. Both manifests and the commands to apply them are sketched below.
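Reconstructed from the walkthrough above, the two manifests look roughly like this; the image name (linuxserver/nginx) is my reading of "the nginx container from linuxserver.io", so check the linked blog post for the exact file contents used in the video.

  # pod.yml
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-example
    labels:
      app: nginx
  spec:
    containers:
      - name: nginx
        image: linuxserver/nginx
        ports:
          - containerPort: 80
            name: nginx-http

  # service-nodeport.yml
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-example
  spec:
    type: NodePort
    ports:
      - name: http
        port: 80
        nodePort: 30080
        targetPort: nginx-http
    selector:
      app: nginx

  # Apply them and check the results
  kubectl apply -f pod.yml
  kubectl get pods -o wide
  kubectl apply -f service-nodeport.yml
  kubectl get service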
To check the status of the service, run kubectl get service: we have our NodePort service there at the end of the output, and we can see that port 80 is mapped to port 30080, so this should expose the container to the rest of the network. I'll open up a fresh web browser and type in the IP address 10.10.10.215 (if I remember correctly) followed by port 30080, and right there we have the default example page for nginx, which means it's working. Out of curiosity, what happens if I change the IP address to end in .213? If you recall, that's the controller's IP address. It still works. Okay, what if I put in the IP address for node 3? The pod isn't even running on node 3, so this is going to fail, right? No, it works fine. That's actually how Kubernetes works: you can type in the IP address of the controller or any of the nodes and it's all the same, because that NodePort service is mapped to port 30080 cluster-wide. The beauty of this is that you can access it from any of the IPs associated with the cluster, which is pretty cool, and it proves that our cluster is actually working. So congratulations, you have a fully working Kubernetes cluster, and if you're like me and you ran this in Proxmox, then you have it running in Proxmox, and that's pretty cool.

And that brings us to the end of this video. Thank you so much for checking it out; I hope you had a lot of fun. I always have fun when I'm working with Kubernetes, it's just so cool. Definitely subscribe if you haven't already done so, because there are additional tutorials coming very soon and I can't wait for you to see them; I'm not going to spoil the surprise, but there are some really cool things on the way. If this video helped you out, click that like button as well. Again, thank you so much for watching, and I'll see you next time.
Info
Channel: Learn Linux TV
Views: 46,799
Keywords: Linux, gnu/linux, LearnLinuxTV, Learn Linux TV, LearnLinux.TV, Learn Linux, Linux Training, Linux Tutorials, Proxmox, Proxmox Virtual Environment, K8s, Kubernetes, Cluster, Ubuntu, Ubuntu Server, Ubuntu Server 22.04, proxmox ve full course, proxmox ve install, home lab, proxmox ve, proxmox ve tutorial, linux for beginners 2022, proxmox tutorial, virtual machine, cloud image, home lab setup, linux administrator, proxmox install, linux commands, proxmox virtual environment tutorial
Id: U1VzcjCB_sY
Length: 57min 18sec (3438 seconds)
Published: Fri Sep 23 2022