Deploy Kubernetes in your Homelab: How to use Kubespray

Captions
Today I'm pretty excited to bring you a tutorial on Kubernetes: we're going to be installing Kubernetes onto our Proxmox cluster, which is basically installing it on bare metal. As a junior DevOps engineer, this is how I learned Kubernetes, by installing it and breaking it on a daily basis and learning from my mistakes, and I highly encourage you to do the same. For a DevOps engineer, or anyone striving to become one, mastering Kubernetes has become more and more important in the global market: 92% of large and medium-sized organizations are actually using Kubernetes for their containerized applications, and it's not only them, as 78% of small organizations are using Kubernetes as well. In this video I want to cover the architecture layout, K3s versus K8s, the subnetting and resource requirements for Kubernetes, and of course deploying a multi-master cluster with Kubespray. Now, I know this topic can be intimidating and has been done a few times; however, I think there's a clear market gap in how much people actually know about deploying Kubernetes, since they likely deploy K3s instead. If you want to skip the why and the why-not and get straight to how we're deploying Kubernetes, go ahead and hit those chapters down below and skip to whatever part you want.

So now let's get into it and dive into why we should learn Kubernetes. Think about the banking industry: transactions are at their highest during the middle of the day and at their lowest during the middle of the night, and you'll also see intermittent peaks around the holidays. You will have scaling issues that normal servers just cannot keep up with. So what do you do? Well, Kubernetes helps you solve this by automating that scaling factor: it can look at how many transactions you have in queue and, based on that, scale the number of pods you have deployed up or down. The banking industry is also mandated to maintain a specific amount of uptime, usually by governments or by SLAs with larger clients; Kubernetes helps solve this too, scaling up and down even if a node goes offline.
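As a rough illustration of that autoscaling idea (this isn't shown in the video), here is a minimal HorizontalPodAutoscaler sketch. The deployment name transaction-processor is hypothetical, and this example scales on CPU; scaling on actual queue depth would need a custom or external metrics adapter such as KEDA or prometheus-adapter.

```bash
# Hypothetical example, not from the video: scale a deployment named
# "transaction-processor" between 2 and 20 pods based on CPU utilization.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: transaction-processor
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: transaction-processor
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
EOF
```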
Often you hear about deploying Kubernetes as K3s; however, these are not the same. While they offer a compatible interface, the deployment methodology and other details under the hood are different. With Kubernetes we get more enterprise features, including built-in support for complex networking layouts, storage, and cluster configurations, and because of this we get a high degree of flexibility, extensibility, and customization. The trade-offs are a steeper learning curve and more components to manage, but also a larger community and more third-party integrations. K3s, on the other hand, is lighter weight with a limited feature set, which makes it good for edge deployments or IoT devices, but not as good for learning, since we want to experience all the features Kubernetes has to offer. To be clear, both are solid projects and both have their trade-offs; however, if you don't have exposure, are trying to learn Kubernetes administration, or need to deploy complex applications, it is my opinion, and that of many others, that you should run Kubernetes.

Now that we know we're deploying Kubernetes and not K3s, why are we choosing Kubespray? Well, Kubespray is actually one of the recommended methods of installing Kubernetes, and it leverages kubeadm to install Kubernetes as well, so it's really listed twice in the official Kubernetes documentation. Kubespray is a composition of Ansible playbooks, inventory, and provisioning tools, but it's also a domain of knowledge for Kubernetes clusters and configuration-management tasks. It offers a way to install, upgrade, add, and remove nodes, and even reset your nodes if you want to test out other CNI implementations or want more practice with other configurations. Kubespray offers the ability to customize many aspects of the deployment: your DNS configuration, your choice of control plane (binary, native, or containerized), component versions, Calico route reflectors, and even component runtime options such as Docker or containerd. Customizations are made via a variables file; however, if you're getting started with Kubernetes, consider using the Kubespray defaults to deploy your cluster and explore Kubernetes at a base deployment first.

Before we begin, let's talk about how we're going to lay out our Kubernetes environment. Kubernetes has a few requirements. First, our machines have to run a Debian- or RPM-compatible Linux OS; for us, that's Ubuntu 22.04. Next, we need at least 2 GB of RAM per machine, as any less leaves little room for your apps to run; we're going to put 4 GB on the control-plane nodes and 16 GB on the workers. Another requirement is at least two CPUs per machine used as a control-plane node; we're going to assign four cores to each machine in our cluster and address this later as needed. Additionally, we need full network connectivity among all machines in the cluster; you can use either a public or a private network, but we'll be using a private one.

Now for the layout inside our Proxmox cluster: we're going to use the first three nodes of our Proxmox cluster. We'll first spin up three control-plane nodes, one on each Proxmox server, each with four cores and 4 GB of RAM. Next we'll spin up three worker nodes, each with four cores and 16 GB of RAM. We will lay these nodes out in a /24 network: the control-plane nodes will be allocated a /27 beginning at .32, and the worker nodes also get a /27, but starting at .128. This allows us to assign IPs in sequential order and leaves room for expansion in the future if needed, as sketched below.
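To make the subnetting concrete, here is a hypothetical address plan. The video never names the /24 itself, so I'm assuming 172.16.43.0/24 to stay consistent with the MetalLB range used later; the exact host IPs are placeholders.

```bash
# Hypothetical address plan, assuming the cluster network is 172.16.43.0/24
# (substitute your own /24).
# Control-plane /27 starting at .32:  172.16.43.32  - 172.16.43.63
# Worker /27 starting at .128:        172.16.43.128 - 172.16.43.159
CONTROL_PLANE_IPS=(172.16.43.32 172.16.43.33 172.16.43.34)   # 4 cores / 4 GB each
WORKER_IPS=(172.16.43.128 172.16.43.129 172.16.43.130)       # 4 cores / 16 GB each
```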
So let's take a look at the GitHub repo for this, because I want to highlight an issue I ran into when spinning this up in the past. On their GitHub, if you click in the top right, you'll see the main or master branch of the project. When I previously deployed from it, it was not stable, so what you actually have to do is go to a release on the side and select which release of Kubespray you want to deploy. I highly recommend doing this, as it ensures the release is stable and you won't run into issues where the deployment just breaks in the middle. Inside the releases page we can see that the latest tag we want is v2.23.0, so we know what tag to pull down when cloning this to our desktop.

Now let's run Kubespray and deploy Kubernetes to our environment. There are two methodologies to accomplish this: during the video I'm going to use their virtual-environment recommendation, and at the end of the video I'll show you how to leverage their Docker container, as it is my preferred methodology. Prior to running the playbook we need to generate our inventory and then configure the variables we want to change. To generate the inventory, we first clone the Kubespray repository and check out the release tag, so let's run a git clone and paste in the link from the repository. After switching into the repository folder, we run git checkout tags/v2.23.0, which puts us on the specific release tag we want for our runtime environment. Now that we're on the release tag, we run the commands found in the documentation for installing requirements.txt: first set up a virtual environment using Python, named kubespray-venv, then activate the virtual environment and install requirements.txt using pip3. Now that we have our environment set up, we copy a sample inventory into a new inventory folder, which initializes all the defaults for the Kubespray playbook we're going to deploy to our nodes. Next we declare an array of variables called IPS, containing all the cluster IPs we want configured: type declare -a, then set IPS equal to your node IPs, making sure to leave a space between each IP. Finally, we run the inventory Python script found in the repository, which takes in our declared IPS variable and writes hosts.yaml to our inventory location. As you can see from the output, all our nodes were added to an inventory file.
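Put together, the setup sequence looks roughly like this, following Kubespray's documented quickstart for v2.23.0; the inventory name mycluster and the node IPs (carried over from the hypothetical plan above) are my own choices.

```bash
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout tags/v2.23.0

# Virtual environment for the Ansible/Python dependencies
python3 -m venv kubespray-venv
source kubespray-venv/bin/activate
pip3 install -r requirements.txt

# Copy the sample inventory, then generate hosts.yaml from our node IPs
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(172.16.43.32 172.16.43.33 172.16.43.34 \
                172.16.43.128 172.16.43.129 172.16.43.130)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
```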
Next, inside VS Code, we can see our new inventory folder with the generated hosts.yaml file inside. In the inventory file we need to make sure we modify the node names; otherwise Kubespray will rename your nodes, which could lead to problems with DNS resolution. We also want to modify our control-plane group to have the three nodes matching what we discussed earlier. With that, our inventory is up to date.

Next we'll define a variable file to pass into our playbook command, so let's create a cluster-variables.yml file to hold all of our variables. The first variable to modify is our Kubernetes version: when deploying anything, it is always good to pin it to a specific version, so you know there will be no unexpected changes to your environment. To do this we define a kube_version entry; in this case we'll pin it to release v1.27.5, since this is a new cluster. Next we'll enable a common Kubernetes tool, Helm: adding helm_enabled: true will configure our cluster to have Helm installed for when we go to deploy applications later. The third thing to change is our kube-proxy mode, setting kube_proxy_mode equal to iptables. By default, Kubespray deploys a cluster with IPVS, which is not installed by default on our nodes; IPVS is not required for our cluster, as it is a small deployment, but I'll leave articles in the description comparing IPVS and iptables so you can choose the proxy mode you prefer. Finally, I'm going to insert a comment that would normally enable MetalLB; when deploying this previously, Kubespray did not seem to install MetalLB properly, so I'm leaving it commented out in our variables file in case we want to revisit this in the future. Because it will not get installed at this point, I'll touch on what MetalLB is and why it is required for our homelab install when we go to install it later. And that's all the variables we need to configure; there are of course a lot more, and I'll leave a link to their documentation below for the others.

Following that, we can bootstrap our cluster. Back in our terminal, let's kick off the ansible-playbook command, passing in the additional arguments we need for our deployment: our inventory file and the variables we defined earlier (yes, the @ sign is required), plus --become and --become-user arguments to grant elevation rights to the playbook so it can use sudo when required. Additionally, my deployment runs under my ansible user, so I pass that in as well; if you are deploying this in your homelab, you will use the SSH user you connect to the machines with. Finally, at the end, we put in cluster.yml, which is the playbook for the install.
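As a sketch, assuming the mycluster inventory from above and an SSH user named ansible, the variables file and the playbook invocation would look something like this; the filename cluster-variables.yml is just the name chosen in this walkthrough.

```bash
cat > inventory/mycluster/cluster-variables.yml <<'EOF'
kube_version: v1.27.5        # pin the release so nothing changes unexpectedly
helm_enabled: true           # install Helm as part of the deployment
kube_proxy_mode: iptables    # default is ipvs, which is not installed on our nodes
# metallb_enabled: true      # left commented out; we install MetalLB by hand later
EOF

ansible-playbook -i inventory/mycluster/hosts.yaml \
  -e @inventory/mycluster/cluster-variables.yml \
  --become --become-user=root -u ansible \
  cluster.yml
```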
Hitting enter, we'll see a flurry of output; this should take about 18 to 20 minutes for our deployment, so I'll see you in a few minutes. Welcome back! As you can see, we finished running our playbook; scrolling up, we can confirm there are no failures from our run, and as you can see there are none, so we are good to go. Following this, we SSH into our control-plane server; for me that's my ansible user, by running ssh ansible@k8s-cp-01.home.local. Inside the server we want to open our kubeconfig file, which is located under /etc/kubernetes, so we first elevate to the root user and then run cat /etc/kubernetes/admin.conf. Now let's copy this file and go back to our host machine. On the host we first create a .kube directory in our home directory, then use vi to create a file called config in that folder, pasting in our kubeconfig and making sure to modify the server address to point at one of our control-plane IPs. Saving this file, we can run kubectl get nodes to validate that all of our nodes are in the Ready state and waiting for a workload.

With our Kubernetes cluster up and running, I want to test that we can deploy an application and that it comes up without any issue. To do this we'll leverage Helm, which is a package manager for Kubernetes; using Helm we can find different containers and applications that can be easily deployed. For our test we're going to use NGINX; this is not the NGINX ingress controller, just an application that exposes a web front end, so we can test that our applications are being deployed correctly. We run a helm install, passing in my-release (which will be the name of your release), then the OCI pointer to the registry and repository defined inside the chart, pointing at the NGINX chart in the repository. Switching over to the terminal and running the command, you can see we've set the OCI reference to registry-1.docker.io/bitnamicharts/nginx, which lets us pull the latest release of the Helm chart and install it into our Kubernetes cluster. Now let's check on the status of our pods before checking whether we can reach it: running kubectl get pods, we can see that our NGINX pod is up and running. We can also check how to connect to it by looking at the service in the default namespace with kubectl get service; the biggest issue we can see here is that we do not have an external IP. Instead it is stuck in a Pending state. This is where MetalLB will come into play.
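A condensed sketch of those verification steps; the control-plane hostname and SSH user match the ones used earlier in this walkthrough, and this assumes the ansible user can sudo without a password, so substitute your own details.

```bash
# Pull the admin kubeconfig from a control-plane node
mkdir -p ~/.kube
ssh ansible@k8s-cp-01.home.local "sudo cat /etc/kubernetes/admin.conf" > ~/.kube/config
# Edit the server: line in ~/.kube/config to point at a control-plane IP, then:
kubectl get nodes

# Deploy a test NGINX release from Bitnami's OCI registry
helm install my-release oci://registry-1.docker.io/bitnamicharts/nginx
kubectl get pods
kubectl get service   # EXTERNAL-IP shows <pending> until MetalLB is installed
```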
So now that we know our problem is with the networking for our service, let's take a look at how Kubernetes does IP address assignment for the service we want to expose. Inside Kubernetes, each pod gets its own IP address, which allows internal pod communication between replicas and things of that nature. For internal services inside the Kubernetes cluster, you need a Service exposing a different IP of type ClusterIP; this creates a separate IP that, for example, your front-end service can use to connect to a MySQL database on the back end. You will not be able to connect to it directly from outside, however: if we give that same IP address to our end users to try and connect to the MySQL service, they will get a connection-failed error, because that IP is not within the scope of the routing tables they have outside the Kubernetes cluster. We instead need to switch this over to a LoadBalancer service, so that an external IP gets assigned to it and our users can connect to the service successfully. When it comes to our cluster, though, the external IP is stuck in a Pending state, so we do not know what IP address to give our users to establish a connection.

So why are we in a Pending state? Well, by default, Kubernetes does not offer an implementation of network load balancers for bare-metal clusters. The implementations of network load balancers that Kubernetes does ship with are all glue code for various IaaS platforms like GCP, AWS, and Azure, and if you're not running on a supported IaaS platform, load balancers will remain in a Pending state indefinitely. Bare-metal cluster operators are left with two lesser tools to bring traffic into the cluster, NodePort and externalIPs, and both of these options have significant downsides for production use, which makes bare-metal clusters second-class citizens in the ecosystem. We're going to work around this by installing MetalLB, which aims to reduce this imbalance by offering a network load-balancer implementation that integrates with standard network equipment.

So let's take a look at how we can actually install this and what it looks like for the metallb-system namespace. When we install MetalLB, it deploys a DaemonSet in metallb-system; this DaemonSet reaches out to our NIC, establishes a connection, and pulls information from it. From there, we assign and create two different kinds for the MetalLB system: one is an IPAddressPool, which defines the range of IPs we want to expose from Kubernetes; the other is an L2Advertisement, which allows you to expose specific IP pools on specific nodes. This can be useful in scenarios where only a specific subset of nodes is exposed to a given network, allowing you to limit those nodes as potential entry points for service IPs. So now, instead of our load balancer staying in Pending, MetalLB will assign that IP and advertise it out, pointing it back to the MAC address of our NIC.

So now that we know how MetalLB works, there's one more thing I want to touch on before we actually install it: if you opted to use IPVS in your installation, you will have to enable strict ARP; this is mentioned in the installation documentation, and I'll leave a link to it below. Now, if we scroll down, we'll find installation by manifest, which installs MetalLB and the metallb-system namespace into our Kubernetes cluster; we copy that down and run it against our cluster. With the manifest installed, we create a new folder inside our inventory to define how we are installing MetalLB and the different kinds we will be creating.
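Here is a combined sketch of the MetalLB install and the two kinds described next. The manifest version v0.13.12 and the pool name default-pool are assumptions (check the MetalLB docs for the current manifest URL); the address range and the default-advertisement name come from this walkthrough.

```bash
# Install MetalLB by manifest; the version here is an assumption.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

# The IPAddressPool and L2Advertisement definitions, combined into one apply
# for brevity (in the walkthrough they live as separate files in the inventory).
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.16.43.1-172.16.43.128
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-advertisement   # no pool selector: advertises all pools
  namespace: metallb-system
EOF
```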
So, as mentioned previously, we need to create an IP address pool, and we do this by creating an ipaddresspool.yaml file and inserting our IPAddressPool definition. For me, I'm going to expose 172.16.43.1 through 172.16.43.128, and you can define different ranges if you want to use different subnets or divide your subnet up further. Next we create another file for the L2 advertisement YAML; inside, we paste an L2Advertisement with the name default-advertisement, because we're not limiting which IP address pool the advertisement picks from. Next we jump back into our terminal and apply the definitions we set up, running kubectl apply -f and passing in the path to our MetalLB definitions. Once they're created, we can run our kubectl get service command and see that we now have an external IP defined; copying this external IP, we can access our NGINX service by going to that IP, because it is exposed on port 80.

So now that we have our Kubernetes cluster set up using the virtual environment, let's talk about how we can instead leverage their Docker container. This will just be a repeat and a summation of how to use the Docker container to do the exact same thing, except we use the tag of the Docker container instead of having to check out a specific tag in git. I will put this into the Pragmatic Engineering YouTube organization on GitHub and leave a link down in the description below, so you can understand exactly what's happening here. Inside our run.sh file we have an image tag declared at the top, and we do a docker pull of the Kubespray container with that image tag. We then run the Docker container, mounting our inventory into the container's inventory folder, mounting our scripts folder, and also mounting our id_rsa file into the container so we can use it for Ansible. We then run the bash command so we can get into the container and run our Ansible playbooks from there.
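A sketch of what that run.sh could look like, modeled on Kubespray's documented container usage; the mount destinations and key path are assumptions, and the image tag matches the release used earlier.

```bash
#!/usr/bin/env bash
# Sketch of a run.sh along the lines described above; paths are assumptions.
IMAGE_TAG=v2.23.0

docker pull quay.io/kubespray/kubespray:${IMAGE_TAG}

# Mount the inventory, our scripts, and the SSH key Ansible will use,
# then drop into a shell inside the container to run the playbooks.
docker run --rm -it \
  --mount type=bind,source="$(pwd)"/inventory,dst=/kubespray/inventory \
  --mount type=bind,source="$(pwd)"/scripts,dst=/kubespray/scripts \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:${IMAGE_TAG} bash
```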
So let's run the script now. As you can see, we are inside the container in the kubespray directory. Inside the container we can list out our scripts to make sure they mounted correctly, and we see our cluster script and our generate-inventory script. If we look at the generate-inventory script, you can see we name the Proxmox cluster we're deploying; in our case, for this example, we're using proxmox1 instead of proxmox01. From there, it copies the same sample directory into our inventory, declares our IPs, and writes hosts.yaml into that same inventory folder. So let's go ahead and run this, and as you can see, our proxmox folder got created in the inventory. For brevity's sake, I'm going to copy the Proxmox hosts file we set up earlier in the virtual environment into our hosts file here, and I'll also copy over the cluster variables we set up earlier. Now, inside the container, we can run the cluster.sh script: it checks that we're passing in an inventory and that the directory actually exists, that we have a hosts file defined, and that we have our cluster variables defined, and at the very bottom you'll see it kicks off the ansible-playbook command, passing in all of the variables we needed earlier. Remember, if you're running this locally, you may need to change your Ansible user. So now, to kick off the script, all we have to do is run the cluster script and provide it the inventory we want to run against, so we run it with proxmox1, and hitting enter starts our playbook. The reason I like this methodology better is that it allows you to maintain a whole git repository without merging in their code; it keeps your inventory, along with any other clusters you set up, separate and organized, and you'll also be able to expand your scripts folder to run other scripts or playbooks more dynamically, to upgrade or add nodes as we expand our cluster in the future.

And that's it! Hopefully you have your Kubernetes cluster up and running; if there are any issues you ran into, please reach out to me in the comments below. If you enjoyed this video, please like and share, as it helps our content reach more people. Now, before we go, there are a couple of things I want to say. One, if you made it to the end of the video, please like, share, and subscribe, as it helps me greatly. Secondly, thank you all for 100 subscribers, as it means a lot to me. My goal for the end of the year was only to reach 100 subscribers, but we blasted past that right in the middle of October, and this really helps motivate me to produce more content and make our content even better in the future. If you'd like to see something specific on this channel, drop a comment below and I'll respond back to see if we can make that happen. Thanks again, this is James from Pragmatic Engineering.
Info
Channel: James Rhoat
Views: 9,325
Keywords: kubernetes, kubernetes tutorial, ansible, kubespray, devops, devops tutorial, kubernetes deployment, kubernetes vs docker, kubespray vs kubeadm, docker and kubernetes, ansible kubespray, metallb, helm, HomeLab, DevOps, Kubectl, MultiMaster, Nginx, Tutorial, helm charts kubernetes, kubernetes tutorial for beginners, kubernetes explained, automation, etcd kubernetes, devops engineer, devops roadmap, devops automation, devops automation projects, devops automation engineer, learn kubernetes
Id: lvkpIoySt3U
Length: 23min 53sec (1433 seconds)
Published: Sun Oct 29 2023