Building A Kubernetes Cluster With BROKEN PCs

Video Statistics and Information

Captions
When I first bought these three Lenovo mini PCs, they wouldn't even POST, but in some previous videos I managed to fix them and also looked at some ways they might be put to use. Today I'm going to use them in a much more interesting way by setting all three up in a high-availability Kubernetes cluster. This cluster can be used to run containerized web apps and services, and the great thing about high availability is that if one machine dies, all of the services stay up and running. There is one small downside, though: I've never actually set up a Kubernetes cluster. But I'm going to figure it out, I promise, so don't click away. If you don't trust me, you can always trust the future version of me that's editing this video. [Music]

These three PCs are the Lenovo M715q (2nd gen) machines I bought a while back on eBay, where they were listed as "for parts." They come with Ryzen 5 2400GEs and 8 gigabytes of RAM, and I somehow managed to get all three working: one had a dead stick of RAM, and the other two had corrupted BIOSes that I was able to fix with a programmer. I honestly was just hoping to get one or maybe two fixed, so when all three worked I immediately had the idea to do some sort of cluster. I had originally thought to do a Proxmox cluster, kind of like what Raid Owl did with three ZimaBoards, but I've been doing a lot in Proxmox lately and wanted to push myself and learn something new, and I've always wanted to learn more about Kubernetes, so I figured this might be the perfect opportunity.

Now, this might go without saying, but this isn't a full-on tutorial. I won't go into great detail, partly because I don't feel super confident in the subject matter, but also because I don't want this video to get super long. I will still walk you through the process I followed and the hurdles I had to overcome, and I'll point you toward some great resources that I used. Kubernetes is pretty dense, which is why I'm posting this video five months after the original video where I fixed these machines. Running a YouTube channel is a lot of work and I haven't had a ton of free time, but over the last few days I've spent hours doing diligent research, exhaustive testing, and mostly just watching a ton of Techno Tim videos, and I think I have a good game plan for what I want to do with these.

Setting up Kubernetes is complex, and making a video about it is even more complex, but thankfully I use Notion, the sponsor of today's video. I'm super excited to work with Notion because I've used it to write the majority of my scripts, and I use it to stay on track with all of my various projects. Notion is a super powerful project-management solution, and I use it as my main hub for everything I do with the Hardware Haven channel, from brainstorming and planning, to writing scripts with the help of AI, to keeping up with goals and deadlines. You can pretty much centralize your entire workflow with Notion rather than having to hop from one service to another, because while Notion is incredibly simple and flexible, it also has a ton of great integrations and features: custom automations at the click of a button, tons of formatting tools, and integrations with just about any other service, such as Slack or GitHub. It has everything you need to manage your projects effectively. Notion even has optional integrated AI features to help improve your writing, create summaries, and much more. Whether you need to plan your next big marketing project, quickly knock out some sprints, or just keep track of projects at home, Notion is a great tool that you should definitely check out. You can create your workspace for free today, and the AI features are only an extra ten dollars a month, which is pretty awesome, so start managing your projects like a pro with Notion.

There are many flavors of Kubernetes, but I landed on K3s because it's lightweight and easy to set up in high availability, or HA, using embedded etcd instead of
something like an external MySQL database. I could do the install from scratch following the documentation, but to speed things up I'm going to follow this awesome video from Techno Tim, which covers setting up an HA K3s cluster using Ansible. If you're interested in doing something like this, definitely go check that video out, and maybe even leave a comment saying Hardware Haven sent you to a real expert. If you're unfamiliar with Ansible, don't worry; I only recently started dabbling in it myself. Ansible is a tool that lets you automate workflows across multiple servers by running plays, and it's pretty awesome. The Ansible playbook I'll be running will set up K3s on all three server nodes, then install kube-vip as a load balancer for the control plane and MetalLB as a load balancer for all the services. If this all sounds confusing, go watch the Techno Tim video on it, because he does a much better breakdown than I ever would. Once the cluster is set up, I'll also install Rancher for a web UI, and then Longhorn for high-availability persistent volumes, basically just storage for containers. Before building the actual cluster I did a lot of testing and troubleshooting using virtual machines in Proxmox. It took a bit of trial and error, but I think I got it all working, so now I'm going to set it all up on the real hardware. Before hooking anything up, I made sure each of these had 8 gigabytes of DDR4 as well as a 256 gigabyte NVMe SSD. I also installed Debian 12 on each system, made sure the haven user was in the sudo group, and set them all up on static IPs of 192.168.10.21 through .23. Oh yeah, I was just going to name them nodes one through three, but landed on "lenovo-node" instead, because it's pretty fun to say. I didn't want to go through the hassle of installing Ansible or kubectl on Windows, so I set up a Debian 11 virtual machine to use for administration. With all that out of the way, I think it's time to power these puppies on and start setting things up.
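For reference, a static IP on Debian 12 can be set in /etc/network/interfaces roughly like this; note the interface name and gateway here are assumptions (check yours with ip a), and this is a sketch rather than my exact file:

```text
# /etc/network/interfaces (sketch; interface name and gateway are assumptions)
auto enp1s0
iface enp1s0 inet static
    address 192.168.10.21/24
    gateway 192.168.10.1
```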
All right, I'm over at my desk now. I have the three Lenovo ThinkCentres with Debian 12 installed, and we should be able to access them from our administrative Debian VM here, which I have pulled up. Really quick, I'm just going to try SSHing into one of our servers... yes, and we're in lenovo-node-1, and that should work for all three. Okay, cool, it all works, and that's important, because SSH needs to work for Ansible to work. We also actually need to install Ansible, which I already have on this VM, but I'll show you how I did it really quick just in case. There are multiple ways to install Ansible, but I installed it using Python. To do that I had to run sudo apt install python3-pip (it's already installed here, so nothing will happen) and then python3 -m pip install --user ansible; it's important to include the --user flag. This is all on the Ansible website, so you can pull that up yourself. There's a really good chance you'll get a warning that the install folder isn't on your PATH, and you can fix that by exporting your user's ~/.local/bin onto PATH, basically appending it. Now I should be able to run ansible --version, and there we go. I'd like to be able to use Ansible without having to type in a password every time I SSH into these machines, so I'm going to set up SSH keys for all of them really quick. If you don't know how, it's pretty easy to Google, but you can kind of follow along with this, because I'm going to speed it up. Now, with Ansible and SSH keys set up, I should be able to run some ad hoc Ansible commands (I think that's what they're called) just to make sure Ansible is working properly. First of all, I have this test inventory right here, and if I take a look at it you can see I have our hosts, plus a variable for the Ansible user, and that'll make things pretty easy.
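That test inventory looked roughly like this; the file name and exact layout are a sketch from memory rather than a copy of my actual file:

```yaml
# test-inv.yaml - minimal Ansible inventory (a sketch; values match the video)
all:
  hosts:
    192.168.10.21:
    192.168.10.22:
    192.168.10.23:
  vars:
    ansible_user: haven
```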
So what I can do is run ansible with -m for a module, in this case ping, then -i for our inventory... oh, I need to target all hosts first: ansible all -m ping -i test-inv.yaml. Hit enter, and we get a nice "pong" success back from all three machines. That's great. I could actually do something useful with Ansible, like install curl on all three of these servers, so let's do that really quick: change the module to apt, with name=curl and state=present. I think that's right; I'm still very new to Ansible. Run this and... uh oh, we get a permission-denied error, because we're not running as root. To fix that I'll run the exact same command with a capital -K appended, which asks for a "become" password, because Ansible is basically going to become root using sudo. Well, that's not good... okay, I'm dumb, I forgot I also have to add the --become flag. Now you can see we get a whole bunch of output back, and if we run it again it won't actually change anything, because curl was already installed. So curl is now installed on all these machines, which is pretty cool. One problem is that I don't want to type my password in every time I run a command, because I may want some of these commands to run as cron jobs or something in the future; I'll show you how to set that up later, but for now Ansible is working, so we can move on to installing Kubernetes. For this I literally just used Techno Tim's guide, which I'll have linked in the description so you can follow along. In hosts.ini I changed all of the IP addresses in the master group to match my servers and cleared out everything in the node group, since I'm not going to use any dedicated worker nodes. I also added a group labeled master:vars with ansible_user set to haven and ansible_become_password set to my password, which lets me use Ansible without typing in my password each time.
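Based on the layout of Techno Tim's k3s-ansible repository, my hosts.ini ended up looking roughly like this; the group names come from that repo, the password is a placeholder, and you should double-check against the repo's sample file:

```ini
; inventory/my-cluster/hosts.ini (sketch based on the k3s-ansible repo)
[master]
192.168.10.21
192.168.10.22
192.168.10.23

[node]

[k3s_cluster:children]
master
node

[master:vars]
ansible_user=haven
ansible_become_password=CHANGE_ME
```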
Now, there are definitely much better ways of doing this than leaving the password in plain text, but it was quick and easy for me; if this were a production environment, I definitely wouldn't leave my password in plain text. In group_vars/all.yml I changed the Ansible user to haven just to be safe, changed my time zone, and for the flannel interface I checked the name of the network interface on one of my servers and copied that in. I also set an API server endpoint of 192.168.10.29, made up some gibberish for the k3s token (I'm taking this cluster down after this video), and down in the MetalLB IP range I gave it 192.168.10.30-39, which gives me ten IP addresses to expose services on. After following through the rest of the documentation, I ran ansible-playbook site.yml, gave it my inventory, and our Kubernetes cluster started spinning up. You may run into one error that I hit, about the Python netaddr package missing; to fix it, all you need to do is run pip3 install netaddr. I already have it installed, so nothing happens here, but it should install for you and may fix that issue. Once the playbook finished without any errors, I continued following the guide: I installed kubectl, made a directory in my home directory called .kube, and copied the Kubernetes config over from one of my nodes into that new directory. At this point I could run kubectl get nodes and see all three of my Lenovo nodes. I ran the example nginx deployment, which spins up three replicas, giving us high availability, and also spun up the example service, which exposes it through MetalLB on an IP address, in this case 192.168.10.30, and nginx was working as expected.
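The group_vars changes I just described map onto the variables in Techno Tim's k3s-ansible repo roughly like this; the variable names are from memory of that repo, and the timezone and interface name are assumptions, so verify everything against the actual file:

```yaml
# inventory/my-cluster/group_vars/all.yml (sketch; verify names against the repo)
ansible_user: haven
system_timezone: "America/Chicago"   # assumption - use your own
flannel_iface: "enp1s0"              # assumption - check with `ip a` on a node
apiserver_endpoint: "192.168.10.29"  # virtual IP kube-vip serves for the control plane
k3s_token: "some-random-gibberish"
metal_lb_ip_range: "192.168.10.30-192.168.10.39"
```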
To get a web UI rather than having to use kubectl for everything, I followed another Techno Tim guide to install Rancher; I'll also have that guide in the description. The guide works as-is except for one change you might need to make: when running the helm install rancher command, you'll need to add a --set global.cattle.psp.enabled=false flag for it to work properly with the most recent version. Once Rancher installed, I also needed to expose it through my load balancer, so I ran kubectl expose deployment rancher -n cattle-system --type=LoadBalancer --name=rancher-lb --port=443 to expose it on a new IP address; the next one available was 192.168.10.31. I headed to that IP address, was able to access the Rancher web UI, set up a password, and landed at the dashboard. Now I'm going to hop back to me doing this all in real time. Here we go, we're set up in Rancher. We can click our local cluster and see we have three nodes, thirteen deployments, our memory usage, and all this other cool stuff. We can go to Workloads, then Deployments (we can do all namespaces, but just user namespaces is fine down here), and there's our nginx deployment with its three replicas. Under Service Discovery we have the nginx service, as well as the Rancher service. Cool, so the cluster's up and running and we have Rancher for a web UI. But nginx is what you would call stateless, which means it doesn't need storage of any kind; it doesn't read from or write to storage, which is great for high availability, because containers can spin up and come down as much as they need to. But what if you want to run a container that does need storage, like, for example, a Minecraft server? Well, it's going to be pretty hard to run that with high availability unless the container was really designed to do so and manage its storage that way. We can still run something like that on here, though, and then if the node it's running on crashes, or I pull the plug, it can still move over to another node and spin up; it'll just take some time, and there will be a little bit of downtime.
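For reference, that kubectl expose command generates a Service roughly equivalent to this manifest; this is a sketch rather than something exported from my cluster, and the selector shown is an assumption (kubectl expose copies whatever selector the Rancher deployment actually uses):

```yaml
# Roughly what `kubectl expose deployment rancher ...` created (a sketch)
apiVersion: v1
kind: Service
metadata:
  name: rancher-lb
  namespace: cattle-system
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 443
  selector:
    app: rancher   # assumption - expose copies the deployment's own selector
```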
But we need storage, and we need that storage to not live only on the node that might die, because then the workload couldn't spin up on another node. So we need high-availability storage, which is where Longhorn comes in; that's what we're going to use to get storage that can be replicated across all of our nodes. Installing Longhorn is pretty easy, because you can use the chart on the Apps page in Rancher, but according to the Longhorn documentation there are some requirements for the nodes I'll be installing it on: I needed to make sure two packages, open-iscsi and nfs-common, were installed, so I just used Ansible ad hoc commands with the apt module to install them. Once that was done, I created a project in Rancher called "storage", installed the Longhorn chart, selected that storage project, and used all of the default settings. When it finished installing, I was able to go to the new Longhorn tab in Rancher and find myself on the Longhorn dashboard. Here I'm going to jump back to me doing this once again in real time. We can see we have 459 gigabytes of total storage across three nodes, and this should be fault tolerant: we can lose a node and still have all of our storage available. So, to put this to the test, I'm actually going to set up a Minecraft server, which will need storage to save all the information about the world, and see if we can get it working. I'll go back to Rancher, go to Deployments, and create a new deployment. We're going to call it just "mc". For the container I'm going to use the MarcTV PaperMC Minecraft server image that I've used in the past, and instead of "latest" I'm going to use 1.19, because that's what I currently have installed on my computer. Then we will need some storage, but I'm going to set that up over here under Pod: we're going to go to Storage,
add a volume, and create a persistent volume claim. For the storage class we'll pick Longhorn, and for capacity, I don't know how much we really need, so I'm going to say 16 gigabytes. For access modes we'll do single-node read/write, because we shouldn't ever have more than one node needing to access this at any given time. Under the volume claim name we'll call it mc-pvc, and the volume name mc-volume. Really quick, under Pod I'm going to go to Labels and Annotations and add an annotation with key "app" and value "mc"; you'll see why later. And... oh, we still need to add storage on our container here, so under Storage we'll select the volume we just made, and the mount point is going to be /data. For the namespace, we'll put this in the default namespace and call it mc-server, and we're only going to have one replica, because remember, this is a stateful application. It can't have multiple replicas, because then you'd have multiple nodes trying to access the same data store all at the same time, and you'd start having lots of issues. So it only has one replica, but if we lose power to one node, or it crashes, it may take a little while, usually a little over five minutes, but it will spin up on another server, which is still pretty cool. We'll hit Create, and we can go down to this mc-server deployment, click into the pod to get to the container, hit View Logs, and we can see it's starting to spin up. Okay, so it's running, but we don't have a way to access it yet, so we need to set up a service. We'll go to Service Discovery, then Services, and create one of type LoadBalancer. We'll call it mc-service, the namespace is default, and for the port, we don't need to give it a name, but we do need to listen on port 25565, which is Minecraft's port, with target port 25565.
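What I'm building in the Rancher UI corresponds to roughly these two manifests, the claim and the service; this is a sketch from memory rather than YAML exported from the cluster:

```yaml
# PVC backing the Minecraft server (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mc-pvc
  namespace: default
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce   # "single node read/write" in the Rancher UI
  resources:
    requests:
      storage: 16Gi
---
# LoadBalancer Service for the server (sketch)
apiVersion: v1
kind: Service
metadata:
  name: mc-service
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: mc           # must match a *label* on the pod, not an annotation
  ports:
    - port: 25565
      targetPort: 25565
```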
Then under Selectors we need to say key "app" and value "mc", which is what we put in earlier... oh, I put it in as an annotation, not a label. Crap. Under Deployments, mc-server, I'm going to edit this config; under Pod... I don't know why I put that under Annotations, that was supposed to be under Labels. I'm a dummy, dummy mcdumdum. We do need a name, so "mc", there we go, and then the selector app=mc. Now that works. We'll name this mc-service again, and then I think I can go to IP Addresses, say load balancer IP, and put this on .39, which should be the last IP in my range, and hit Create. We'll see if this works; I actually haven't tested this yet. In kubectl it does look like I have 192.168.10.39 set up as my external IP address, though, so let's give this a shot. All right, I have my server list here, with this Minecraft server on 10.39, and let's see if it works. Hey, it looks like we're in! It's already nighttime, which is unfortunate, and the glare off my lights means I can't see what I'm doing, but hey, we're in, and it seems to be working. I love that I spawned at the top of a tree. Two hours later: okay, so I'm dumb. I've been pretty dumb on this channel before, so that's no surprise, but when I set up this Minecraft server deployment and its storage (you can't see it under Config, but trust me), I set up the mc persistent volume claim as single-node read/write, and I actually needed to set it up as read/write many. Even though we're not running multiple instances of this Minecraft server, look at what happens whenever we have a node go down: right now node one is down, and the old pod shows as "terminating", because until the node actually comes back online, Kubernetes won't safely clean it up; it just keeps it in that state. The cluster spins up a new pod, but while the old one is stuck terminating, it hasn't let go of the persistent volume claim, and with single-node read/write only one node can access it at a time, so we need to
set it up as read/write many as well, so another node can access it. I've set that up now, and you can see it's running: I killed node one, and now it's running on lenovo-node-3. If I go back into the game, it drops me into the world I created earlier, where I made a little platform so I wouldn't die this time around. So yeah, our Minecraft server actually will re-spin itself back up after a few minutes; we just have to make sure we have the persistent volume claim settings sorted out correctly. All right, well, it's all set up and seems to be working properly, with high availability for stateless containers and at least minimal downtime for stateful ones like our Minecraft server. I'm sure I've done at least some things poorly here, so if there's any feedback or advice you have, please feel free to put it down in the comments below. Oh, I should probably figure out what the power draw of this is; hold on. Okay, so sitting at idle, meaning not running the Minecraft server, just all the stuff we installed, the whole cluster draws between 30 and 35 watts or so, and that's about what I expected. When I tested these on their own they only pulled around nine watts at idle, and they're not really at idle here, because they're running all of the services for the cluster, so around 30 watts is about what I expected. Not too terrible. Obviously, if you wanted something lower-powered, you could set this up with something like some Raspberry Pis, but 30 watts isn't too terrible for a high-availability cluster that you can run your website or other services on; that's pretty cool. Now, if you're wanting to learn Kubernetes or just mess around with it, I probably wouldn't suggest doing this. I would suggest setting up a single machine with Proxmox or some other kind of hypervisor and using virtual machines to set up your cluster, because it's going
to be a lot easier to save and roll back snapshots, to delete or add nodes, or to just wipe the whole thing when you're done. It'll save you a lot of time compared to doing all of that across physical systems. But if you do want to run a setup like this on bare metal, either just because it's cool or because you want to run it in production, it's pretty awesome how you can do this on pretty old or budget hardware like these mini PCs. You can actually find a lot of great deals on similar systems, in lots of three or five or whatever, on eBay for pretty good prices right now, so make sure to check those out. I want to give a huge shout-out to Techno Tim, because I used so much of his documentation and his great videos to make this video possible, so definitely go show him some love and check out the videos I'll have linked down in the description. And also thanks to Notion for sponsoring this video; Notion's pretty cool, check it out. That's about it for this one, though, so as always, thank you guys so much for watching, stay curious, and I can't wait to see you in the next one. [Music] [Bloopers] "This cluster can be used to run containerized... container-asked..." "...and maybe even leave a comment saying Hardware even sent you... that's stupid, that was so stupid..." "...and start setting everything up..." "I thought it was farther..." "As always, thank you guys so much for watching... watching... I hope you watch... I hope you're clean."
Info
Channel: Hardware Haven
Views: 28,882
Id: S_pp_nc5QuI
Length: 24min 34sec (1474 seconds)
Published: Fri Jun 30 2023