How to Setup an HA K3S Cluster - with Sample Deployment and Tests

Video Statistics and Information

Captions
Let's install Kubernetes. First off, I will lay out the architecture of the cluster we are going to build, along with some Kubernetes basics, then the prerequisites, setting up the tools we need to get this up and running, and then the actual installation. Finally, we'll deploy a simple app and do some testing.

A Kubernetes cluster should have at least one master node, but since I'm building an HA cluster, I will set up three master nodes. But why three? A master node consists of several essential components: the kube-apiserver, which acts as the front end for the control plane and handles all external communication, meaning this is what we interact with when using a command-line tool like kubectl or a UI like Rancher or OpenShift; the kube-scheduler, which schedules pods onto available worker nodes; the kube-controller-manager, which runs controller processes and implements governance across the cluster; and the datastore, an external or embedded database. You can use PostgreSQL, MySQL, SQLite, or etcd for this. Since we are using K3s, which initially supported only SQLite as the embedded DB option but as of release v1.19.5+k3s1 has full support for embedded etcd, we'll be using embedded etcd. Now, etcd needs quorum, meaning you need an odd number of nodes greater than one to get a majority vote; that's the reason I'm going to set up a three-node cluster. Best practice is not to schedule any workloads other than the Kubernetes control-plane processes onto these master nodes, hence I'm going to add two more worker nodes to the cluster to run my own workloads. Before all of this, I'm going to set up a layer 4 load balancer in front of the cluster to make control-plane traffic from Kubernetes agents and kubectl to the API server fault tolerant, instead of having everything communicate with a single selected master node. So altogether we have one LB node, three master nodes, and two worker nodes. I am going to use six identical local VMs with 2 CPUs and 2 GB of memory, running CentOS 8. These are my IPs and hostnames: node1 to node3 for the masters, worker1 and worker2 for the workers, and lb for the load balancer. I have also already set up local DNS with each hostname as a CNAME in the home.lab domain (hostname.home.lab).

Okay, let's set up our load balancer node first and configure the cluster through that node using k3sup. First of all, I am going to disable SELinux and reboot the node for it to take effect, because otherwise I would have to configure it to allow these non-standard ports. I will put a link to all the commands and config in the description. Now that that's done, let's install nginx; I'm using the dnf package manager: dnf install nginx. The kube-apiserver uses port 6443, so with this block of config we can set up a proxy for TCP and UDP traffic on port 6443 to our master nodes. Notice that the upstream k3s block balances the load among the three master nodes, and the server block exposes it on the load balancer node on the same port. That's it, we have our load balancer; all we have to do now is enable the service, so it survives a reboot, and start it. Done.

Now I will install k3sup. It's a handy little tool that SSHes into a remote host and installs K3s. You can get it from here, and the installation is really easy. For k3sup to work, we need to set up passwordless SSH access to the nodes we are going to install K3s on. Okay, let's set it up real quick. All right, before getting into the cluster installation, let's install kubectl, the command-line tool we'll be using to talk to our cluster. Just like K3s, it's a few commands to download and install. Cool.

Now we install the first master node. If you want a specific Kubernetes version, you can use --k3s-version={version}, and the --cluster flag is what tells it to start the server in cluster mode using embedded etcd. I have put my load balancer IP in the --tls-san argument as an alternative name for the certificate. Now, before setting up the second node, let's see how our cluster is doing. For that we need to configure kubectl, and to configure it we need to get the cluster config from the node we just installed. All right, let's remote into it, copy and paste it into the ~/.kube/config file in the home directory, and then edit it and change the local address to our load balancer IP. This way kubectl will be talking to the load balancer. Done. Now we can run kubectl get nodes, and we can see our first master node up and running.

All right, let's spin up the other two master nodes to join the cluster with the k3sup join command. Let's check our cluster status; it seems the last node is still getting ready, so give it a second and try again. Now our three master nodes are ready. Next, let's join the two workers. But before that: this is the command we used to get our last master node up, and this is what we're going to use to start our worker nodes. See the difference? We no longer have the --server flag and the node taint. Let's get those two workers up and running. Done. Let's see the status now; give it a second, and all are ready. Notice that our workers don't have the etcd and master roles like the three master nodes above. Perfect.

But let's do some tests to see if it's really HA. For this I'm going to deploy a really simple nginx container and a service to the cluster. We create the deployment YAML first. This is a slightly modified version of the YAML that you can find in the Rancher documentation: it has both the name and app labels set to mysite, the container port is set to 80 because we are using the nginx base image, and the tolerations section is what I have added, because by default Kubernetes will wait for five minutes before rescheduling the pod onto another node in case of a node failure. Keep in mind that this is only one way of doing it; you can always set this globally at the cluster level too. All right, let's save it, create a new namespace for our app, and apply the YAML we just created. Now let's see the status: it's deployed and in the ready state. To access this, I'm going to create a service and expose it as a NodePort. Create the YAML first: I have set the nodePort to 30000, exposed port 8080 inside the cluster, and the targetPort is 80, the same as the container port of our previous container. Let's save it and deploy, then check the status: it's ready. Let's check if it's accessible from the browser. Here I'm using the IP of our first master node, but you can use any node's IP and it will be accessible, because we exposed it as a NodePort. And we have our title. Good.

Now, if we want to confirm, we can always log into the container and change something or add some text to the HTML. Let's do that. First get the name of the pod and log into it. Let's see what's inside the nginx config; the web root is this, so let's go there and add one little body tag. Done. Let's refresh, and there we go.

All right, now I'm going to power off the worker node our pod is running on to see what happens. Let's see which node it's running on: it's worker2. Let's power it off and watch the cluster status. All right, it's offline. Now let's check what happens to our pod: it just spun up a new pod on worker1. We have to keep in mind that this new pod will only have the original content of the image and will not have the body tag we added previously. Refresh the browser, and our new pod is reachable.

Now let's terminate one of our master nodes and see if our cluster fails. SSH into node1 and power it off, then watch the cluster status. All right, node1 is offline. Check our pod: as you can see, we still have the old pod, which was on worker2, in the Terminating state. That's because it acts like a queued job: when worker2 finally recovers, if it still has our old pod, this will terminate it and clear the status. Let's check via the browser. This time we'll have to use another IP, because we were using the IP of the first node, which we just powered off. Let's use node2's IP. Good, it's reachable. Now I'm going to watch the cluster status and power those two nodes back up to see if they'll automatically rejoin the cluster. Powering them up... nice, now we have our cluster restored. Let's check if our old pod is still in the Terminating state; still there, so let's watch it... and there you go. And that's it. In the next video I'll install Rancher into our cluster. Thank you for watching!
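For reference, the test Deployment (with the added tolerations) and the NodePort Service described in the video might look roughly like this. The resource and namespace names and the 10-second toleration value are my own illustrative choices; the video's actual manifests are adapted from the Rancher documentation and linked in its description.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite
  labels:
    app: mysite
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite
  template:
    metadata:
      labels:
        app: mysite
    spec:
      containers:
      - name: mysite
        image: nginx            # nginx base image, serves on port 80
        ports:
        - containerPort: 80
      # Without these tolerations, Kubernetes waits ~5 minutes (the default
      # not-ready/unreachable toleration) before evicting the pod from a
      # failed node and rescheduling it elsewhere.
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 10
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: mysite
spec:
  type: NodePort
  selector:
    app: mysite
  ports:
  - port: 8080        # service port inside the cluster
    targetPort: 80    # must match the container port above
    nodePort: 30000   # exposed on this port on every node's IP
```

With a low tolerationSeconds, the pod is evicted from an unreachable node after about ten seconds instead of the default five minutes, which is why the failover test in the video completes quickly; the same behavior can instead be configured cluster-wide via the API server's default eviction settings.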
Info
Channel: ivcode
Views: 739
Keywords: k3s, kubernetes, k3sup, rancher, container, how to install k3s, ha k3s, high availability kubernetes cluster, k3s high availability, load balancer, nginx, homelab, home lab, howto setup k3s, how to setup kubernetes, k3s architecture, kubernetes architecture, embedded etcd, etcd, lightweight kubernetes, how to install kubernetes, k3s server, k3s workers, k3s master, k3s agents, what is k3s, what is k3sup, how to use k3sup, k3s setup, k3s tutorial, kubernetes tutorial
Id: QDwhbMvikGQ
Length: 19min 24sec (1164 seconds)
Published: Thu Sep 23 2021