The FASTEST Way to run Kubernetes at Home - k3s Ansible Automation

Video Statistics and Information

Captions
Words to describe setting up k3s: "This is hard." "This is so difficult to set up." "Isn't this overkill?" "What is the load balancer again?" "Why do I need two load balancers?" "Should I use etcd?" "So wait, I need two load balancers AND keepalived?" "What is MetalLB again?" "Have you heard of kube-vip?" "Um, isn't that a single point of failure?" I know, I'll automate the whole thing.

Today we're not only going to set up k3s with etcd in an HA installation with kube-vip and MetalLB, but we're also going to automate the whole entire thing so that we can't really mess this up. We're going to fully automate the installation of k3s so that it's 100% repeatable, and then we're going to tear it all down as if it never happened.

But before we do, a huge thanks to our sponsor, Micro Center. If you're thinking of building a new PC, you should look no further than Micro Center. If you've never been to Micro Center, you're missing out on seeing a huge selection of technology in person. They've got everything for custom PC builders, from SSDs and hard drives to power supplies, memory, air and water cooling, motherboards, video cards, processors, and more. Micro Center is your one-stop shop to totally customize your next PC build. And don't worry if it's your first time building a PC: they have lots of helpful and knowledgeable staff who are there to help you out and will point you in the right direction, so that you don't attempt to apply thermal paste like this. Micro Center has been kind enough to give all new customers a free SSD, available in store only, so see the link in the description for details.

So how did I get here? Well, as you may or may not know, I've been running k3s in my own environment for quite some time, and I even have a video on setting up k3s with MySQL. Now, there's nothing wrong with the MySQL version of k3s; it runs great. But at the time, the etcd version wasn't available, and the etcd version is super interesting because it creates a high availability database on the nodes instead of hosting it outside of the cluster. Right around that time I saw Jeff Geerling create a video on Ansible, and that sent me down a rabbit hole: learning Ansible, creating a video on Ansible, and automating a lot of tasks. Well, you know how that goes.

Anyway, I found that GitHub repo, cloned it, created some virtual machines, and then tried to provision a high availability cluster. But there was just one problem: the Ansible playbook only supported spinning up one etcd node, and that meant only one server node, which isn't HA. I mean, it's configured for HA, but I would have to manually add additional server nodes to make it HA, and that's no fun. So technically it wasn't HA out of the box. I decided to dig around in the code and in the branches, and I found a fork where somebody actually fixed that issue, so I could create an HA cluster out of the box with Ansible, and I saw they also added support for kube-vip. This was awesome, because this is exactly what I was trying to do. I love open source, so a huge thank you to user 212850a. This gave me a nice starting point to automate the rest. Again, a huge thank you to the open source community, Jeff Geerling, and user 212850a.

After poking around for a little bit, I found that most of it was working, but it did need some updates and some configuration changes to work with the latest version of kube-vip, along with some other features I wanted to add. So I decided to roll up my sleeves and start hacking away at this fork in my own branch.
Before making it public, I wanted to accomplish a few things. I wanted to make sure that anyone using this could start with an unlimited number of nodes. I wanted to make sure that kube-vip was rock solid and that it would actually create a load balancer you could use to make k3s fault tolerant. I also wanted to automate an external load balancer, so that when you expose a service you get an IP address for that service from your cluster, and then anyone can use that IP address to access services within k3s.

I had a few choices for this step, and a quick clarification on these two load balancers. The first load balancer you typically need in k3s is a load balancer for your Kubernetes API. This is the load balancer for the control plane, and it should be fault tolerant, so that if you issue k3s commands you can still get a response back. The other load balancer is a service load balancer, for Kubernetes to expose services on. In most cloud environments they supply a cloud load balancer for you to expose services on; the service load balancer I'm talking about is for non-cloud, self-hosted environments. Since we don't have a cloud load balancer to give us IPs to expose our services, we need something that can emulate a cloud load balancer, something Kubernetes can ask for an IP address from so our services can be exposed.

So I had choices to make for load balancers. kube-vip can actually do both: it can be a service load balancer, or a load balancer for your control plane, for your Kubernetes etcd nodes. This sounded like a great solution, because then I wouldn't have to use MetalLB. I love MetalLB, but taking on one less dependency sounded like a good idea, especially when it comes to breaking changes; it's just less to manage. The other option for exposing my services was simply to use MetalLB. And honestly, after hours and hours of trying to get kube-vip's service load balancer to work with my services, I decided to fall back on good old trusty MetalLB. MetalLB just works, and I could use my existing configuration for it, so it really wasn't a loss at all. At this point I had my architecture pretty much decided: kube-vip for my Kubernetes control plane, and MetalLB for my service load balancer.

Once I solved creating multiple server nodes, configuring kube-vip, and configuring MetalLB, it was time to do some testing. For my test I created five nodes, and these are standard Ubuntu cloud image nodes. I just recently created a video on provisioning new Ubuntu machines using cloud image and cloud-init; they're the perfect Ubuntu minimal server for k3s, so really, check it out.

Once I had these five servers up and running and made note of their IP addresses, it was time to configure my Ansible playbook. Here in the group_vars file is where all of my variables are set for Ansible. First you can specify the k3s version, and then you can specify an Ansible user; this is the user that Ansible will run as. Another quick tip: if you need to set up Ansible, I have a really quick video on the bare minimum stuff you need to do in order to set up Ansible. It's a great primer for this too. Next is setting a systemd directory, and you won't really need to touch this. Next is setting a flannel interface of eth0. Flannel is responsible for networking in k3s, and it's pretty dense, but if you want to know more about it you should totally check out their GitHub repo. As I understand it, it's responsible for layer 3 communication between nodes in a cluster, and I set eth0 here because that's the Ethernet interface on these virtual machines.

Next I'm setting a server endpoint, and this is the IP address of the VIP that will get created for the Kubernetes control plane. This VIP gets created instead of you having to create external load balancers along with keepalived; it creates a VIP that is highly available, exposed through the Kubernetes cluster, that we can communicate with, and Kubernetes can too. It's pretty awesome: that takes care of two to three additional virtual machines that you don't have to maintain anymore. Next I set my k3s token. This should be a secret that you obviously keep secret, but it's your password or token for k3s, and you'll only need it in the beginning, or if you join additional nodes later.

I then added some additional arguments for my servers and my agents. As far as the server goes, I disabled the service load balancer; you'll want to do that if you're running MetalLB or another service load balancer, which we are. I'm also telling it not to deploy Traefik. This is up to you: if you want to deploy Traefik, you can just delete that arg, but I'm going to leave it in because I like to install Traefik on my own later with Helm. The next argument just sets permissions on the kubeconfig, and this is really just for convenience, so I don't have to run sudo when I'm remoted into a node to run kubectl. It's probably a good idea not to do this, but I got so tired of typing in sudo every time I was testing this, the thousand times I spun this up, that I just changed the permissions of that file. Feel free to remove that argument if you want. The next string of arguments is quite long, and I'll leave them in the documentation, but to summarize: the rest of these args, as well as the agent args you see here, are ones I found I needed to make k3s a little more responsive. What do I mean by that? One of the defaults in k3s is that if a node's not ready, it won't schedule additional pods on it until that node becomes ready, but the timeout is around five minutes, which is a long time. I mean, it's not a long time if you're running multiple replicas of a pod in HA; you would almost not notice at all, especially in larger installations. But in smaller installations like home labs, I found that five minutes is a really long time, especially if you're running a replica count of one; that means your service is down for at least five minutes. So I scraped the internet, found a lot of these arguments, and I've been using them in my production home lab for about a year now, and they seem to work pretty well. You might need to do some tweaking depending on your services, your hardware, and what works best for you. And again, k3s will work without any of those arguments I just mentioned; maybe you should try it that way first.

Next I set the tag version for kube-vip, and this is just the container image tag. The current version is v0.4.2, so that's what I'm specifying here. I did similar things for MetalLB too: for MetalLB there's a speaker container, for which the latest version is 0.12.1, and there's a controller tag as well, which I also set to 0.12.1. These should be in lockstep at the same version, but I made it configurable in my template just in case they're ever not, so that I wouldn't have to figure that out in the future. And next I chose an IP range for MetalLB: this is the range of IPs your services will be exposed on when you expose them, and you can communicate with them there. I'll show you some examples in a little bit, but I set a range from 192.168.30.80 all the way up to .90, which gives me 11 IPs. Typically I only need one or two, but I set the range from 80 to 90 just in case.
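To make that concrete, here's a minimal sketch of what such a group_vars file could look like. Only the values called out above are from the video (eth0, v0.4.2, 0.12.1, and the 192.168.30.80-90 range); the variable names, the k3s version, the VIP address, and the exact extra args are illustrative assumptions, not the repo's actual contents.

```yaml
# group_vars/all.yml -- illustrative sketch; names and placeholder values are assumptions
k3s_version: v1.23.4+k3s1              # hypothetical release; pin whatever you tested
ansible_user: ansibleuser              # the user Ansible runs as on each node
systemd_dir: /etc/systemd/system      # you won't really need to touch this
flannel_iface: eth0                    # the Ethernet interface on these VMs
apiserver_endpoint: 192.168.30.222     # hypothetical VIP for the control plane
k3s_token: "change-me-keep-me-secret"  # needed at bootstrap and when joining nodes
extra_server_args: >-
  --disable servicelb
  --disable traefik
  --write-kubeconfig-mode 644
# ...plus the node-responsiveness tuning args discussed above (see the docs)
extra_agent_args: ""
kube_vip_tag_version: v0.4.2           # kube-vip container image tag
metal_lb_speaker_tag_version: v0.12.1  # keep speaker and controller in lockstep
metal_lb_controller_tag_version: v0.12.1
metal_lb_ip_range: 192.168.30.80-192.168.30.90   # 11 IPs for exposed services
```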
After that, I checked my hosts.ini to make sure I had all of the IP addresses in there. The three virtual machines I'm going to use for my masters are .38, .39, and .40; these are also referred to as your server nodes. My worker nodes, or agents, are going to be .41 and .42. So this means three servers running the Kubernetes control plane and etcd, making it highly available, and two worker nodes to run my user workloads. If I had more virtual machines, I would just add them below.
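For reference, the inventory could look something like this; the group names and the 192.168.30.0/24 subnet are assumptions inferred from the addresses mentioned above.

```ini
; hosts.ini -- illustrative inventory; group names are assumptions
[master]
192.168.30.38
192.168.30.39
192.168.30.40

[node]
192.168.30.41
192.168.30.42

[k3s_cluster:children]
master
node
```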
With all of this configured, I ran the site playbook and pointed it at my hosts.ini. But before I did that, I started pinging my VIP; obviously it's not there yet, but as soon as it comes up it should respond. So I ran the playbook, and it installed and configured k3s on one of the server nodes. Shortly after that, the VIP started responding, which means kube-vip is installed on that machine and the VIP is up. Then it started joining the other machines to the cluster, and shortly after that I had a high availability Kubernetes cluster on k3s. That's an HA cluster with etcd, with a load balancer that's also HA for my control plane, and HA load balancers for all of my services.

But we need to verify. Hopefully you trust me, but let's also verify. We can SSH into one of our server nodes, and once we're there we can run sudo kubectl get nodes, and we can see we have five nodes and they're all online. You can see I have three control-plane/etcd masters and two workers, or agents, ready for workloads. Super, super awesome.

Instead of SSHing into the server, let's actually copy our kubeconfig locally so we can run the rest of the commands, so let's exit out of here. You'll want to make a directory for your kubeconfig file if you've never done this before, or back up your existing kubeconfig file if it's there. Then we'll just scp, or secure copy, that file from one of the servers back to our local machine. After it transfers, we can run kubectl get nodes and see the same thing. Awesome, so now we have kubectl running on this machine.

Next I created a super simple NGINX deployment for Kubernetes. This deploys an Alpine version of NGINX and sets the replicas to three. I did that by running kubectl apply -f and then the path to the deployment manifest, and Kubernetes told me the deployment was created. Then I wanted to check how this deployment was doing, so I ran kubectl describe deployment nginx, and you can see it is deployed: the desired state is three, and three were updated, three total, three available, and zero unavailable. So all three of my NGINX pods are up and running, but this doesn't give me access to these pods outside of Kubernetes. This is where a service and a load balancer come in, the exact reason why we installed MetalLB. So then I created a super simple service file. This service is just a service pointing to the nginx app from the deployment we just created, and we tell this service to expose it on port 80, with a target port for the container of port 80 as well. And here's where the magic takes place: we tell it that the type is LoadBalancer. This tells Kubernetes to ask our cloud load balancer to give us an IP, and our cloud load balancer right now is MetalLB. So MetalLB should hand us an IP address from the range we specified, and if all of that happens, we should be able to get to our service. (Both manifests are sketched below.)
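Here's roughly what those two manifests could look like. This is a minimal sketch reconstructed from the description above (an Alpine-based NGINX image, three replicas, port 80, type LoadBalancer); the exact files used in the video may differ.

```yaml
# deployment.yaml -- three replicas of an Alpine-based NGINX
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
---
# service.yaml -- type LoadBalancer asks MetalLB for an IP from the configured range
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```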
So then I ran kubectl apply -f and the path to the service file, and Kubernetes told me it created the service. I wanted to verify that, so I ran kubectl describe service nginx, and we can see that it exposed a LoadBalancer ingress on one of the IP addresses we specified in MetalLB. This means my NGINX deployment of three pods is now exposed on a load balancer at 192.168.30.80, and if we go to that IP address, we can see the welcome page from NGINX. This is so awesome; it proves all the way through that MetalLB is working.

But we never really tested the HA side of kube-vip. We know that we can issue Kubernetes commands right now with kubectl, but we didn't take any of those nodes down, so let's do that too. I started pinging that VIP, and while doing it I remoted into my first master node, the server node that's running the control plane; it's also one of the nodes running kube-vip that's supplying this VIP. I decided to shut it down, and as you can see on the right I'm still getting responses, while on the left I'm not getting a response from that machine. So this means we have an HA VIP now. Now, I can't shut down a second node: an HA cluster of only three nodes can only lose one machine. If I shut down another machine, I won't have access to Kubernetes, but I will still have access to all of my workloads that are running; it's just that I can't change the state of Kubernetes, nor access it via kubectl. This is so awesome. So I started that other node back up, and it's responding, and obviously kube-vip is still responding.

So what does one do after building the perfect k3s cluster? We burn it down, of course. There's also a playbook to totally reset k3s back to its initial state. Running this playbook against the same hosts will totally clean it up: it'll clean up all nodes, remove all containers, and reset everything back to the state it was in before we ran the site playbook. This was super handy as I was testing my changes; I must have run it at least a thousand times. After it's done, we're back to a good state. One note: you might want to actually reboot the machines afterwards. I've noticed that the VIP stays up and will keep responding, so I have a playbook to reboot all of these machines, and it will actually wait for them to respond before it reports success. Just like that.
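Putting the whole workflow together, the invocations could look something like this; the playbook and inventory file names are assumptions based on the description above, so adjust them to match the repo you're using.

```sh
# provision the HA cluster (hypothetical file names)
ansible-playbook site.yml -i hosts.ini

# tear it all down, back to the pre-install state
ansible-playbook reset.yml -i hosts.ini

# reboot all nodes and wait for them to come back before reporting success
ansible-playbook reboot.yml -i hosts.ini
```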
And so this addresses everything that everyone struggles with when setting up k3s. No more using MySQL and making that HA, if you don't want to. No more spinning up additional load balancers with keepalived and making those HA, if you don't want to. No more configuring MetalLB, or installing it with Helm, if you don't want to. Just one simple playbook that spins all of that up in one shot, and then you can burn it down if you want to, too. So again, a huge thanks to the k3s community who made this original playbook, along with Jeff Geerling: thank you so much. And also thank you to GitHub user 212850a. I'll have links to all of the code in the description below.

So what do you think of spinning up a truly HA version of k3s using Ansible? Is there anything I should contribute to the script to make it easier for you? Let me know in the comments section below. And remember, if you found anything in this video helpful, don't forget to like and subscribe. Thanks for watching!

Fix the lights! If you weren't here last week, a small episode with the lights: I couldn't figure out what was going on with my bottom lights. My bottom lights ended up having a small issue, and it took me a long time to figure out; it ended up being a firewall rule. So if it's not DNS, it is a firewall rule. Alright, changing the lights as soon as I mention them. If it's not DNS, it's a firewall rule. Now you're really testing me. Alright, it's gonna happen, it's gonna happen.
Info
Channel: Techno Tim
Views: 89,398
Keywords: k3s, etcd, kubernetes, ha, high availabilty, k3s etcd, ha k3s install, ansible, automate k3s, rancher, suse, how to install k3s, github, kube-vip, metallb, service load balancer, embedded db, metal lb, kube vip, keepalived, load balancer, self-hosted, selfhosted, self hosting, k8s at home, kubernetes at home, techno tim, homelab, home lab, database, nodes, node, server, home server, ha k3s cluster, cluster, clustering, k8s, open source, high availability, cloud native, k3s setup, k3s install
Id: CbkEWcUZ7zM
Length: 18min 41sec (1121 seconds)
Published: Sat Mar 26 2022