HomeLab Services Tour 2024 - What Am I Self Hosting?

Captions
After giving you a tour of the hardware in my home lab, many of you asked, "What are you running in your home lab?" Well, it ranges from dashboards, to hypervisors, to a NAS, to DNS, to network management, to Docker and Kubernetes, to GitOps, to CI and CD, to multiple reverse proxies, monitoring and data visualization, home automation, and much, much more. And this time I've decided to include network diagrams, as well as logical diagrams for where everything lives, to help you understand how this all comes together. That's why I was a little bit late getting it out. And if you want to see more about this diagram, I'll have links below in the description where you can find it. So let's take a look at everything that I'm self-hosting in my home lab. First, let me show you how my network is laid out. Now before you go accusing me of just taking free stuff from Ubiquiti, I purchased all of these things you see on here, with the exception of one thing, and that was this one switch, which replaced an existing 48-port switch. But this is my UDM SE; the internet comes into here, this is my firewall and my gateway, and it manages all of my VLANs. Then connected to this is my Switch Aggregation Pro, which you saw in the hardware tour. And then from there I have the 48-port switch uplinked to it, as well as this 24-port PoE switch that's here in the studio uplinked to it. This one's uplinked at 20Gb, really a LAG of two 10Gb links, and this one's uplinked at 10Gb. And then here in my studio I have a 16-port PoE switch which is connected back there, and then a Flex Mini switch that's also connected to that TV back there to provide some VLANs. And then I have this USW Flex XG, which has a couple of 10Gb ports and is powered by PoE. That's on my workbench back there, you can kind of see it. I was going to use that for testing 10Gb devices in here, but I ended up putting this switch right here. And then my 48-port switch that's in my rack has a couple of access points connected to it, some cameras connected to it, some additional cameras, and some other mini switches around the home. One of those mini switches you can see on the wall down there in my server room, and there's another access point here which is connected to my doorbell. And then if you see this access point here, it's actually a wireless bridge to my garage, which has another access point, and from there a switch connects, and then some cameras and some other devices in there. Now, I'm not showing all of my devices here, only the UniFi devices. If I showed all of my devices, it would be kind of hectic to look at. And as you can see, that's a lot of devices. So I figured I'd just show you the network backbone, or just the network devices that are connected. But this is the physical layout and not the logical layout. If you want to see my VLANs, you can see them right here: default, guest, cameras, main, IoT, servers trusted, and servers untrusted, and then travel, which is one I was testing with. And these are the VLANs that I use to logically group all of these devices together based on their role or their needs. And here's a network diagram of all of my devices and which VLANs they fall into. So they're logically grouped according to the VLAN and according to their role. This is why this video is a little bit late. But as you can see, I have the same VLANs that we had earlier, and then the devices that fall into each group. Now let's start with how things come into my network. And that's the internet.
So I'm connected to the internet with my modem here, and from there it goes into my UDM SE right here. So this is my UDM SE, my router, per se. And I have VLANs, and I have firewalls in between each VLAN. And if we look at one of these VLANs, this default network right here, this default network is no VLAN at all. And this is where I keep a lot of my network equipment as well as my Proxmox servers. Now you might be asking why I keep my Proxmox servers on a network that carries packets both tagged and untagged. It's because they need to listen for those tags, or those VLANs, so that they can use them for the virtual machines. So I have four Proxmox servers that we'll talk about here in a little bit, and they sit on this default network. And then I have virtual machines that attach to different VLANs. So that's kind of boring. But if we look here, I also have a camera VLAN. And in my camera VLAN, I have wired and wireless security cameras on their own VLAN. Now you might be asking why I want to put them on their own VLAN. Really it's to segment traffic, and it's also to limit internet access for these devices. That's something that I chose to do. So these are firewalled from the rest of my networks. No one can communicate with them, and they're on a network that doesn't allow any outgoing internet. So it's a pretty secure VLAN. Next you can see my IoT VLAN. And this VLAN is really for devices that I don't have control over. I can't really remote into them. I may or may not be able to patch their firmware or give them software updates or even see if they're up to date. And so I put things that other vendors control into this VLAN, so that if something ever happened, they can only communicate with these devices. So my printer, it's on the IoT VLAN. A Kubernetes node, we'll talk about that here in a little bit, but this Kubernetes node is here so that it can communicate with all of these devices, and it does live on that VLAN. Even though I trust this server and it meets some of the criteria for trusted, I still decided to put it here rather than create a lot of firewall rules. It was just easier. Then I have lots of IoT devices, and that's a lot of the devices you've probably seen here. That includes TVs, smart lights, other lights, switches, things like that. So they're all in this IoT VLAN. My wife's work laptop, yeah. Her laptop is in there. That's because it's managed by another company. I don't know what that company's doing, and I don't have a way to patch it myself and get into it and take care of it if something goes wrong. So I'm considering her laptop just an IoT device. Arguably, I could put it on the guest network, but that network's not up all the time. And then testing devices. Anytime I'm testing a new device on my workbench back there, I make sure to plug it into the IoT network so that it only really has access to the internet and not my main VLAN. Next is my main network, and this is really where my trusted PCs and client devices kind of hang out. So I have PCs, I have Macs. I've also decided to put phones and tablets, Apple TVs, and HomePods here. Now, that was a choice I had to make, and a choice that you might have to make sometime too. Remember when I mentioned back here that I didn't want to create a whole bunch of firewall rules so that devices can communicate with each other?
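To make those segmentation rules concrete, here is a minimal Python sketch of the egress and inter-VLAN policy described above. The VLAN names and the exact allow/deny sets are illustrative assumptions, not an export of the actual UniFi firewall rules.

```python
# Toy model of the inter-VLAN rules described above. The VLAN names and the
# exact allow/deny sets are illustrative assumptions, not the real UniFi config.
VLAN_POLICY = {
    #          internet egress     other VLANs this one may initiate traffic to
    "cameras": {"internet": False, "can_reach": set()},   # fully isolated, no egress
    "iot":     {"internet": True,  "can_reach": set()},   # internet only, no LAN access
    "guest":   {"internet": True,  "can_reach": set()},   # client isolation
    "main":    {"internet": True,  "can_reach": {"iot", "servers-trusted"}},
}

def allowed(src: str, dst: str) -> bool:
    """True if a new connection from VLAN `src` to `dst` ('internet' or a VLAN name) is permitted."""
    policy = VLAN_POLICY[src]
    if dst == "internet":
        return policy["internet"]
    return dst == src or dst in policy["can_reach"]

if __name__ == "__main__":
    print(allowed("cameras", "internet"))  # False: cameras have no internet egress
    print(allowed("iot", "main"))          # False: IoT can't reach trusted clients
    print(allowed("main", "iot"))          # True:  clients can reach IoT devices
```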
Well, originally I had all of my phones and tablets and HomePods over on the IoT side, which made it easy to communicate with different devices on the IoT network, but not so easy to communicate with my PCs and everything else that's on my main network. I decided that I do trust Apple devices enough to put them on my main network. My guest network, this really isn't used, as I was talking about earlier. It's there, but I don't advertise this SSID unless people come over and visit and bring devices, which is pretty much never, if I'm being honest. Most people don't ask, "Can I use your Wi-Fi?" when they come over, and that also assumes that people come to my house. But anyways, I have this here. It's here if it's needed, and this is where guests' devices will connect if they come and visit. One thing that I did configure on this network, really for fun but also for security, was to isolate these devices. You can see that on my firewall rule right here. So these devices can only get out to the internet, and they can't communicate with other devices because they're isolated. Next, servers untrusted. It might be easier to talk about servers trusted first, so let's hop over here. Servers trusted, as I kind of hinted at earlier, are servers or devices that I trust. They're devices that I can remote into, I can patch, I can replace, I can wipe if I need to. And on top of that, they're servers, so they're performing some kind of service on the network. You can see I have a trusted Kubernetes cluster, and this is providing services that I trust, internally only. I have my DNS servers. Yes, I do have three DNS servers. We'll talk about that here in a little bit, but they provide DNS to my network. My NAS, which supplies storage services. My PiKVM, I decided to put this on the trusted network as well because I can patch it, I can remote into it. It's running Arch Linux and it fits all of the criteria I set up for whether something is trusted or not. A Windows VM that's doing one or two tasks, that's there too. And then IPMI and UPS. Now this is kind of debatable, but I decided to put my IPMI devices on my trusted network rather than on my IoT network. And again, this is a decision that you're going to have to make. The reason I put them here is because I didn't really feel comfortable putting them on the same network as all of these other IoT devices like switches and stuff like that. I felt like one of those devices having a breach and then being able to get to my IPMI was more likely than my IPMI, which isn't even public facing and can't be remoted into from outside, having some kind of security breach and then breaking out and getting into my trusted servers. Same with my UPS, these aren't public facing at all. And so I decided to put them on my servers trusted rather than on the IoT network, even though technically they are IoT devices. And again, these are tradeoffs you're going to have to make when you segment your network, if you haven't already. And lastly, you see this MetalLB IP. This is an IP address for a load balancer, and it's how machines or clients get into this trusted network. And again, this is only internal. So I have an IP address that I can communicate with to get to the websites and stuff that's running in my trusted network. And now for my untrusted; since you understand what my trusted is, maybe untrusted will be a little more clear, or maybe a little more confusing. My untrusted network isn't really devices that I don't trust.
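The trust criteria above (can I remote in, can I patch it, is it a server, is it public facing) can be written down as a tiny decision function. This is just a toy sketch; the field names and the decision order are illustrative assumptions, not the actual rule set.

```python
# A sketch of the "trusted vs. untrusted vs. IoT" criteria described above.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    is_server: bool       # does it provide a service on the network?
    can_remote_in: bool   # console/SSH access under my control
    can_patch: bool       # I can update firmware/OS myself
    public_facing: bool   # reachable from the internet

def pick_vlan(d: Device) -> str:
    fully_managed = d.can_remote_in and d.can_patch
    if d.is_server and fully_managed:
        return "servers-untrusted" if d.public_facing else "servers-trusted"
    if not fully_managed:
        return "iot"       # vendor-controlled gear, the work laptop, test devices
    return "main"          # trusted client PCs, Macs, Apple devices

print(pick_vlan(Device("pikvm", True, True, True, False)))          # servers-trusted
print(pick_vlan(Device("blog-node", True, True, True, True)))       # servers-untrusted
print(pick_vlan(Device("smart-bulb", False, False, False, False)))  # iot
```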
It's that I don't trust the network itself, because I host things publicly here. So, you know, my website, my blog, my wiki, some of my bots, and some of my webhooks that are all public facing, they all live in here. And so security is pretty tight here in my servers untrusted VLAN. There's not a lot that it can do other than communicate with other devices on here. And so this is where public workloads run. And I have MetalLB running here too, to create a load balancer IP address that people from the public can get to. So if they come in from the public, they're actually first hitting Cloudflare, which I use for my reverse proxy; that will filter out bots and, for the most part, any bad actors. It will come in through Cloudflare, come in through the internet, come into my UDM SE, and that will forward the traffic to my untrusted network, to this MetalLB IP, and then that will forward the traffic on to whatever workload it needs to get to. So again, these are all servers that technically still fit the criteria of trusted. I can remote into them, I can patch them, I can wipe them, I can do anything that I want to them. But they're living on a network that allows public incoming traffic, and so I decided to segment that into its own VLAN. I probably should have called it the public VLAN rather than servers untrusted, but I was already on this whole trusted-and-untrusted servers naming scheme, so I went with that.

So this is logically how my network's laid out. But now let's dive into some of the services that are running within these different VLANs. First, let's start with dashboards. It is still Heimdall. Heimdall is a great product. I think I said last year I might switch off of it, and I think next year I might too, just because of some complications it has with a storage mount and a volume and reading and writing data. That has gotten corrupt for me once or twice, and I've had to rebuild that volume or rebuild my dashboard. And I want to switch to something that's more config driven, so I can feed it a config and not worry about mounting a volume to it. But anyways, long story short, it's still great. Still looks good. It still works. It does exactly what I need it to. But I might be checking out some alternatives in the future.

Next, let's move on to hypervisors. Hypervisors are a big one for me. I'm still using Proxmox and I love Proxmox. I can't imagine using anything else at this point. And as you can see, I have four nodes here. So I do have four nodes and they are in a cluster. And you're probably wondering about quorum since I have an even number of nodes, but I've made some adjustments so that I can keep quorum even with four. Anyways, not important. I do run this in a cluster, but I don't have high availability virtual machines. Instead of having high availability virtual machines, I build high availability into my services when I can, and so I have high availability services rather than virtual machines. Just doing it one step lower. But anyways, Storinator. So that's the big Storinator you saw in the rack. This does have some virtual machines. The biggest virtual machine it used to have was Andromeda, which was my virtualized TrueNAS, but I moved from the virtualized TrueNAS to a physical TrueNAS with the HL15. We'll talk about that here in a little bit. So the list of things that Storinator needs to do is dwindling a little bit.
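On the quorum point: the video doesn't say exactly what was adjusted, but the underlying arithmetic is simple. The sketch below just shows why an even vote count buys no extra failure tolerance and why a tie-breaking vote (for example a corosync QDevice, mentioned here only as a common general approach, not necessarily what this cluster uses) helps.

```python
def quorum(total_votes: int) -> int:
    """Votes required for a corosync/Proxmox-style cluster to stay quorate."""
    return total_votes // 2 + 1

for votes in (3, 4, 5):
    print(f"{votes} votes -> need {quorum(votes)}, "
          f"survives losing {votes - quorum(votes)} voter(s)")
# 3 votes -> need 2, survives losing 1
# 4 votes -> need 3, survives losing 1   (even count, no extra tolerance)
# 5 votes -> need 3, survives losing 2   (e.g. 4 nodes plus a tie-breaker vote)
```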
We'll talk about its roles here in a little bit, but I have a few virtual machines on it for testing, and then you can see I have, let me see, one, two, three. So this is Xing one, two, and three. I've migrated all of my services to these Intel NUCs and they're running here. And so I have one, two, three, four, five, six virtual machines you can see here. They're all running fine. You can see the CPU usage is pretty good. Eight CPUs, or eight logical cores, on each of these. I/O delay is pretty low. RAM: 64 gigs, and a lot of it's used, but there's a lot of KSM sharing, so a lot of this RAM is being shared, about 15 gigs' worth. But the swap is still pretty low. One megabyte of swap. So only one megabyte is being used for swap, which means it's not really out of RAM. These Intel NUCs are a beast. They're running five, six virtual machines each and they are rock solid. And they're all attached to my NAS for backups, as you can see here.

So since we're talking about my NAS, let's go into my NAS, which is TrueNAS Scale. So I'm still on TrueNAS. I've been using TrueNAS since it was called FreeNAS, and then it was TrueNAS, and then TrueNAS Core I think was in there, and then I migrated to TrueNAS Scale. I ended up going with TrueNAS Scale because it's a little bit easier for me to manage a Linux box, this one is Debian based, than it is FreeBSD. And so this NAS is running on the HL15, and it now is bare metal, which is kind of nice. I've been virtualizing my NAS for about three, four years, and now I'm back to bare metal. It's kind of nice having it bare metal because now when I reboot my hypervisor, I don't have to worry about whether my NAS is down. But anyways, this is running on the HL15. I did add some additional RAM, so I have 128 gigs of RAM. I did extend my ARC so it uses more than 50% of my RAM. Thanks, Tom Lawrence. He has a video on that; you should check it out if you need to use more than 50% of your RAM. It has, what, six cores? They're all low clock speed, but this is perfect for a NAS because it uses very little power, all things considered, for a NAS that has about ten drives in it. My NAS provides what I call storage services: SMB, so Windows shares, NFS shares, and also iSCSI targets, because I need some iSCSI drives for some of my machines. And one additional thing it's also doing is running MinIO on top of it. That's the only app that I use, and that gives me object storage so that I can do my backups and use object storage within my network instead of using S3 or anything like that. The nice thing about running it on here is that it's on top of ZFS, so I can do snapshots. And it's not inside of my virtual machines, it's not inside of my cluster. It's outside of my virtual machines and outside of my cluster. So all the things I need to back up that require S3 or object storage within my cluster back up to this, which is outside of my cluster. It's pretty nice if you want to self-host object storage. And as you can see here, I have one, two, three, four, five pairs, so I have ten drives: five mirrored vdevs, all 14 terabytes apiece. I've done a ton of optimization stuff, and I should probably create a TrueNAS video at some point talking about all of those optimizations in one place. If you're interested in that, let me know in the comments below. But this is my NAS. Rock solid, couldn't ask for a better NAS. So since we're talking about some core services on my network, another one is DNS, and I'm still using Pi-hole.
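Because MinIO speaks the S3 API, pushing a backup into it from anywhere on the network only takes a few lines. Here is a minimal sketch using the `minio` Python SDK; the endpoint, credentials, bucket, and file names are made up for illustration.

```python
from minio import Minio  # pip install minio

# Hypothetical endpoint/credentials for the MinIO app running on the NAS.
client = Minio("nas.example.lan:9000",
               access_key="backup-user",
               secret_key="change-me",
               secure=True)

bucket = "cluster-backups"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local backup archive as an object. ZFS snapshots on the NAS then
# protect the data independently of the cluster being backed up.
client.fput_object(bucket, "etcd/etcd-backup.tar.gz", "/tmp/etcd-backup.tar.gz")
```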
Yes, I do have three instances of Pi-hole. I have two instances of Pi-hole that are on virtual machines within those Intel NUCs, and then I have one more Pi-hole instance that's running on that Pi Zero that you saw on the wall in my server room. The reason I do that is so that if my Intel NUCs are down, the Pi Zero running Pi-hole on the wall will still be serving DNS for the rest of my network. This means I can take my cluster all the way down without having to worry about DNS, or my wife asking, "Hey, is the internet down?" And you're probably wondering, whoa, how do you do that with three DNS servers? Primary, secondary, and tertiary? Something like that. The way that I do it is I use keepalived and create a load-balanced virtual IP between the second and third DNS servers. So that means I technically have three DNS servers, but I only hand out the IPs of the primary and the secondary, and the secondary is actually balanced between the two. Anyways, kind of complicated. I have a video on it if you want to see. But Pi-hole is what I still use. Is it the best thing out there? No, obviously not, but it is great for what it does. And then I use Gravity Sync on top of that to synchronize all of my lists and all of my DNS entries across all three devices. And I do use it pretty heavily for local DNS as well.

For network management, I'm obviously using the UniFi network controller because I have a lot of Ubiquiti devices. And I choose this because I have the hardware, and it's a single pane of glass to manage all of my network, all of my firmware, and even my cameras. And on top of that, their mobile apps are nice, and as you can see, the UI is pretty nice here too. And speaking of cameras, I'm going to blur some of these out, but I use UniFi Protect for my home security. It keeps all of my security footage on this device. I have about ten cameras you can see down here, on a 14-terabyte drive, and that gets me 64 days of continuous recording. Not to mention, this is a super low power device that is also acting as my firewall.

Next up is containerization, and this includes two parts: there are Docker-only hosts, and then there's Kubernetes. As far as Docker-only hosts go, I run Portainer to manage them. One migration I did finish up this year was moving all of my Docker-only hosts to Kubernetes. So if we look in Portainer, which I use to manage my Docker-only hosts, we can see that I have one stack, and that one stack is Watchtower, and that's just there for testing. And that's kind of how I use Docker right now. I use Docker to spin something up, to test it out, see if I like it, run it for a couple of days, and if I like what it is, I'll move it over to my Kubernetes cluster. And speaking of my Kubernetes cluster, I have three Kubernetes clusters, and I manage them all with Rancher. I have my local cluster, which is K3s, and it's only running Rancher. Then I have my cluster-01, which is kind of my untrusted cluster. It's a little bit of both right now because I'm in the middle of a migration, but it runs all of my public workloads along with a few internal workloads too. But that's where cluster-02 comes in, and this is meant to run all of my private or internal services, and nothing in it will be public facing. And so with Rancher, I manage all three of these clusters, and you might be wondering, "Well, why do you need Rancher?" Technically, I don't need Rancher. I'm very familiar and very comfortable with the CLI, or with using tools like Lens or anything like that.
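The failover piece is keepalived floating that virtual IP between the second and third Pi-hole. A health check along these lines is the kind of thing a keepalived `track_script` can run to decide whether a node should keep the VIP; this sketch needs the third-party `dnspython` package, and the Pi-hole IP and test record are placeholders.

```python
import sys
import dns.resolver  # pip install dnspython

PIHOLE_IP = "192.168.1.53"   # placeholder: the local Pi-hole instance to test

def dns_ok(server: str, name: str = "pi-hole.net") -> bool:
    """Return True if the given DNS server answers an A-record query in time."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        resolver.resolve(name, "A", lifetime=2)
        return True
    except Exception:
        return False

# keepalived treats exit code 0 as healthy and non-zero as failed.
sys.exit(0 if dns_ok(PIHOLE_IP) else 1)
```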
But I found having Rancher installed in my cluster gives me web management, gives me a lot of discoverability, and gives me an easy way to debug when something goes wrong. That being said, I don't deploy my workloads with Rancher, and I don't manage my workloads with Rancher. I do that with GitOps, which we'll talk about here in a little bit. But I run a ton of workloads in Kubernetes. As you can see, I have 85 pods running in user namespaces, so those are all pods that I put there. There's a lot of stuff going on in here that we'll talk about in a little bit, but I do use Kubernetes pretty heavily. If you want to dive into the nodes that are running in this cluster, I have 1, 2, 3, 4, 5 nodes that are workers, so these are running workloads, and then I have 1, 2, 3 that are running the control plane and etcd. So what does that mean? Well, 5 of them are running all of the workloads that I put on there, and then 3 of them are running Kubernetes itself: running the control plane to communicate with the entire cluster, and running etcd, which is the internal database where it stores all of the Kubernetes information and metadata about the cluster. And you can see I have some labels on these nodes. I have a label for network, and these 3 nodes here are running in the untrusted VLAN, and then of these last 2 nodes, 1 is in servers trusted, that's the one that was on the diagram, and the other one is running in IoT. And I mentioned earlier it's running in the IoT VLAN because it's easier to communicate with those IoT devices. Well, that's where Home Assistant and a few other containers that I have are living, on that node which lives in the IoT VLAN. And so this label is a way that I can say, hey, only put Home Assistant on this node, or basically only put workloads with this label on this node. And the same goes for this node right here. I have a few workloads that are internal, that are trusted, but slowly they're being migrated to my trusted cluster. So for completeness, let's look at my cluster-02. This is my trusted cluster, and I have 3 nodes there, and you can see they have all roles; I decided to do that. So all 3 nodes are running all roles, plus they're workers too, taking on workloads. I'm going to separate that out here in a little bit, as soon as I get some more resources. And the same goes for my local cluster too: 3 nodes, all roles, plus running workloads. And this is a majority of what's running on those Intel NUCs.

Now GitOps. This is a big topic, and the idea behind GitOps is that you treat your infrastructure as code. So what does that mean? That means every piece of your services and infrastructure, as far as load balancers and possibly even nodes, you define in code, you commit them to a Git repo, and you push them up and then let CI take over and deploy those resources to your cluster. So then your Git repo becomes the source of truth, and not the cluster itself or the UI that you're looking at. And I follow this practice pretty exclusively. And so there isn't a lot to show you other than some YAML. But for example, you can see this is my deployment for Uptime Kuma. I have a lot of properties set on here, but at the end of the day, when I commit and deploy this, this is going to tell my cluster what the desired state is, and then the cluster will say, "Okay, well, I'll make it that way."
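Those node labels are ordinary Kubernetes labels, so they can be queried the same way a `nodeSelector` matches them. A quick sketch with the official Python client follows; the `network=iot` key and value are assumptions based on the diagram, and it assumes a working kubeconfig on the machine running it.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()   # uses your local kubeconfig (e.g. one downloaded from Rancher)
v1 = client.CoreV1Api()

# Find the node(s) labeled for the IoT VLAN, i.e. the one Home Assistant is pinned to.
for node in v1.list_node(label_selector="network=iot").items:
    print(node.metadata.name, node.metadata.labels.get("network"))

# A workload gets pinned to such a node with a matching nodeSelector in its pod spec:
#   spec:
#     nodeSelector:
#       network: iot
```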
So for example, if I wanted to give it a little more CPU, I could change this value right here, commit that, deploy it, and then within a minute or so my cluster would see this come in and apply it, and then it would redeploy this container with the new request value. And so what does all of that for me? Well, it's Flux. I have Flux installed on my cluster and it helps orchestrate all of this within Kubernetes. Worth mentioning too that I'm using Renovate, or the Renovate bot. The Renovate bot will look at my repo and see if there are updates for some of the containers that I'm using. If there are, it'll open a pull request or a merge request, and then all I have to do is approve or merge that pull request, and then that gets deployed. We can see that here if we look at the Uptime Kuma image. You can see right now I'm using 1.23.11-alpine, but you can see there was a pull request yesterday to update this to .11. So previously it was .10; I got a pull request from a bot that I created that runs Renovate, it wanted to merge in .11, I accepted that merge, and then that got deployed. So these two work really well together to help keep your cluster up to date and keep it functioning. And if you're curious about GitOps with Flux or the Renovate bot, I have videos on both of those.

Next is reverse proxy, and I'm still using Traefik. I now have three instances of Traefik running, two inside of one cluster and one inside another cluster. And so technically this diagram isn't right. I actually have two of these, two of these MetalLB IPs right here, but I guess this diagram is kind of my desired state because I'm moving all of my trusted workloads, or my internal workloads, over here. But I'm using Traefik to do that, and so it's a reverse proxy. I can use it as an ingress controller to create ingress objects within Kubernetes. I know that sounds super complicated, but at the end of the day it's a reverse proxy, and it will take requests, look at those requests, and route them accordingly. So when someone requests access to my documentation site, techno-tim.live, first they hit my external reverse proxy, which is Cloudflare, worth mentioning. They'll hit that external reverse proxy, that will come in, that will come into my router, my UDM SE, into my firewall. I do some inspection there. Then eventually that will get routed to my servers untrusted, and then within my untrusted it'll hit this MetalLB IP, then there's Traefik running as the reverse proxy that will look at that request and route it accordingly to the service or the pod that's running within Kubernetes. Actually, I made a mistake, because these two are supposed to be down here. Whatever. I'm not going to mess with the diagram right now. But I actually have two MetalLB IPs in my untrusted, not in my trusted. But yes, I'm still using Traefik as my reverse proxy, and I'm also using cert-manager along with that. cert-manager is a Kubernetes service that helps me provision and store my certificates as secrets. So rather than having Traefik go out and get those certificates, cert-manager does that for me. Then I can consume them as certificates, so I can scale both Traefik and cert-manager and kind of separate those concerns. So I don't use Traefik to get my certs anymore, that's the moral of the story; I use cert-manager to do that. But all of these work in cooperation to help me route traffic securely over TLS.

Monitoring and logging. I've had a lot of changes to this.
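In practice that "change the value and commit" step is just editing one field in a manifest that lives in Git, and Flux reconciles the rest. Here is a small sketch with PyYAML; the repo path and the new CPU request value are made up for illustration.

```python
import yaml  # pip install pyyaml

MANIFEST = "apps/uptime-kuma/deployment.yaml"   # hypothetical path in the GitOps repo

with open(MANIFEST) as f:
    deploy = yaml.safe_load(f)

# Bump the CPU request on the first container of the Deployment.
container = deploy["spec"]["template"]["spec"]["containers"][0]
container.setdefault("resources", {}).setdefault("requests", {})["cpu"] = "500m"

with open(MANIFEST, "w") as f:
    yaml.safe_dump(deploy, f, sort_keys=False)

# git add / commit / push this change; Flux notices the new commit and
# re-applies the Deployment, which rolls the pod with the new CPU request.
```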
Yes, I still use Uptime Kuma to do my internal monitoring and logging, and it works great; as you can see, most of the time I don't have any bad statuses being reported. This should update here in a second. I do wish that Uptime Kuma didn't use SQLite, because, I don't know, not to dunk on SQLite, but it gets corrupted a lot for me. Probably my fault, but I'd rather have it use an external database. That's neither here nor there. Anyways, I still use Uptime Kuma. It's great. I use it for my internal stuff. For my external stuff, I use Uptime Robot. And you're probably like, "Well, why do you use one internal and one external?" Well, my internal Uptime Kuma monitors a lot of things that aren't even exposed publicly, and the only way to do that is from within my cluster or within my infrastructure. And then Uptime Robot is kind of watching the watcher, but it watches everything that I have publicly hosted to make sure that's still up. And as far as Loki, Grafana, and Prometheus go, sad to say I ended up pulling these out of my cluster. When I was running my 1U servers, I had plenty of headroom, plenty of CPU, plenty of disk space to support these types of services. But after I moved to my Intel NUCs, I realized that they were a little too busy to be running these services. Now, don't worry. I know it's important to monitor and to log and to visualize your data. I'm going to be bringing those back when I move my 1U servers to a colo. Be sure you're subscribed to see that video. But when I move those to a colo, I'm going to bring these services back and then leave those only for my public facing stuff. I still might bring them back internally for my other things, but I need to find a way of doing that.

Home automation. Yes, I'm using Home Assistant, and it's actually running in my Kubernetes cluster. I think last year I was using Homebridge to get some of my devices into iOS and into HomeKit, but I decided to get rid of that and just use Home Assistant to do the same thing. As you can see, I have lots of devices in here. I've spent lots of time getting them in here, and I have some automations and other things as well. It runs great, as you would assume Home Assistant would. This is the reason I have this one K8s node running here in the IoT VLAN: so Home Assistant can communicate with all of these devices, rather than putting it somewhere over here and creating lots of firewall rules and mDNS forwarding and everything else that comes along with that complexity. I decided that, hey, all of my K8s nodes are hardened anyways, and it was a lot easier to put it here to live with these devices, keep this traffic isolated, and really get the benefits out of VLANs. Now, there's so much you can do with Home Assistant. I've only scratched the surface, and you could probably make a career out of Home Assistant if you wanted to, but right now I only use Home Assistant as an interface to get a lot of my devices into HomeKit so that I can access them there from my phone or from my tablet or from my Apple TVs. And along with that, I'm actually running Scrypted as well. Scrypted is kind of an interface too, one that allows you to import your cameras into the home hub of your choice. It works with a wide range of cameras and will allow you to import them into, say, Google Home or HomeKit.
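For a sense of what these monitors do under the hood, an HTTP check is just a request plus a status and latency assertion. Here is a toy sketch using only the Python standard library, with a placeholder URL; it is not how Uptime Kuma itself is implemented, just the general idea.

```python
import time
import urllib.request

def check(url: str, timeout: float = 5.0) -> dict:
    """Fetch a URL and report whether it looks 'up' plus the response time in ms."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        status = None
    return {
        "url": url,
        "status": status,
        "up": status is not None and status < 400,
        "ms": round((time.monotonic() - start) * 1000),
    }

print(check("https://example.com"))   # placeholder target
```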
Now, as I mentioned earlier, I run UniFi Protect, but I still want to see those cameras within HomeKit and maybe trigger some automations with my HomeKit stuff based on them. And since UniFi Protect isn't compatible with HomeKit, this is where Scrypted comes in. All Scrypted is, is one big interface. It's all local, and all it does is expose my cameras in a HomeKit-friendly way so I can get them into HomeKit. And you can do the same with Alexa and Google Home, and if you want to see a video on that, I have a video on that too. But it's a really cool open source project. Everything that I've tried and used works perfectly in here. It's almost as if my cameras are HomeKit certified. Another thing in my home automation is this Broadlink control. This is something that I built a long time ago so I can control my Broadlink devices. I can control them through a UI, but I also created an API I can control them with, so I can remote control those lights back there during my Twitch stream. As you can see, if I click on Flash, wait a second, this should flash those lights back there. And then if I click back to Blue, it should stop and they should just go to blue. So I wrote this a long time ago. I should probably just use Home Assistant to do that now, and then write code against Home Assistant instead of writing custom code against my custom solution. I might get more mileage out of writing it against Home Assistant now, but that also means rewriting some of my bots. So hey, maybe next year. But this is free and open source. It's out on GitHub if you're interested.

Next is data sync, and I think I'm going to get rid of this category because I'm not doing any data synchronization anymore. At least not with Syncthing. I was using Syncthing to synchronize data from server to server so I could back it up. But now that I've simplified my NAS and simplified a lot of my network and my virtual machines, I don't need to do that anymore. So I got rid of Syncthing. It's great if you need it, but I've outgrown it and don't use it anymore. The only syncing I think I'll do in the future is replicating my ZFS snapshots inside of TrueNAS to some external source at some point.

Next up is my links page, and I'm still using my own links page. This is free and open source. It's on GitHub if you want to use it. But it's a really easy way to create a, I think, good looking homepage that is a collection of all your links, so you can send people to this one page and they don't have to go anywhere else to get to all your contacts or any of the links you want them to get to. I'm sure there are better solutions out there, but I built this myself, so I'm voting for myself in this category. There are a lot of other link page type solutions out there, but I think mine turned out pretty good and I've had a lot of people contribute to it, so I'm still going to use it.

Link shortener. I use a link shortener for a lot of my links so that I have control over where those links go. Not only do they look nice, but I can change them if things change. For example, if my GitHub page changed, I wouldn't have to go through all of my videos and recreate all of those descriptions and edit them with a new link. I can control it here and say, "Hey, my GitHub page is now this." And all it really does is redirect someone to the link that you want. Click on this GitHub redirect link and it sends you to my GitHub page.
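A short link is ultimately just an HTTP redirect, so it's easy to sanity-check that one points where you expect. Here is a small sketch with the `requests` package; the short URL shown is a placeholder.

```python
from typing import Optional

import requests  # pip install requests

def redirect_target(short_url: str) -> Optional[str]:
    """Return the Location header a short link redirects to, without following it."""
    resp = requests.head(short_url, allow_redirects=False, timeout=5)
    if resp.status_code in (301, 302, 307, 308):
        return resp.headers.get("Location")
    return None

# Placeholder short link; a real one would resolve to the full GitHub URL.
print(redirect_target("https://l.example.com/github"))
```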
So you can see these get used quite often, and that also means I have to keep this highly available, which I do. I run, I don't know, three or four of these pods. The back end is Postgres, which is super nice: a database on the back end that I manage, which is also highly available. And all of that runs in my Kubernetes cluster. But it's a great solution if you need a link shortener. Shlink is absolutely the way to go. It's modern, feels nice, uses a modern data store like Postgres, and you get a little bit of metrics and some reporting and stuff like that. A really, really nice solution.

As far as home entertainment goes, I use Plex. I've been using it for years and it's great. I've had a Plex Pass for, I don't know, eight or nine years, and it gives me everything that I need and more. One of the big reasons I choose Plex is for recorded TV. I record a lot of over-the-air stuff that I save to my NAS so I can watch it later, skip commercials, and all of that, and so I don't have to pay for services like Hulu or Peacock or anything like that. I just record shows when they're live and skip the commercials when I watch them, and it works great. And they have a great EPG too, it's always up to date, and their electronic program guide is hands down the best one out there. I've used a lot of them, all the way back to Windows XP Media Center Edition, so I've been using systems like this to record my TV for a long time. Anyways, you know what Plex is. I've been using it and it works great. And Plex is living on a Windows virtual machine, believe it or not. I have a Windows virtual machine in Proxmox that has a GPU passed through to it, and it was doing some tasks already. I had Plex on a Docker-only host and I thought, "Well, why am I doing that? Let me just install Plex on here." So Plex is actually running on a Windows machine, and it's running pretty well. Soon I might attempt to move it to Kubernetes, but I don't know if I'm ready for that pain. Minimally, I am going to move it to a Linux VM next year, or maybe a little sooner.

For power management, I'm still using a NUT server, or Network UPS Tools. I monitor three UPSs; you can see there's something going on with the input of this one. It's nice to know. But I have three UPSs being monitored. This is the one that's in the rack for my servers. And then you can see my second one, the one that's on the wall in my server room; this is reporting on that. And then the last one is the one that's in the rack for network gear only, and you can see information on that. So I use the NUT server not only to visualize this, but also to gracefully shut stuff down. Although I did turn some of that off, because it kind of scares me sometimes that it could just automatically shut stuff down. So I need to tune my rules a little bit more. But if you want to automatically shut stuff down and start stuff up when your UPS gets low, you can definitely do that. And then along with the NUT server, I also use the vendor-supplied UIs to manage my UPSs, so I have one for Eaton and one for Tripp Lite. Those just give me a little bit more configuration for notifications and things like that.

Content management systems. I still use three content management systems. I use WordPress for one of my old blogs, I use Ghost for my wife's portfolio, and I use Wiki.js for a community-based wiki for my Discord community. Now, I'd love to get down to one CMS, but they all do things slightly differently and they all have different use cases.
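Everything those dashboards show is also readable with the stock `upsc` command-line client that ships with NUT, so a shutdown policy can be as simple as polling one variable. This sketch shells out to `upsc`; the UPS name, host, and threshold are made up.

```python
import subprocess

UPS = "rack-ups@localhost"   # hypothetical NUT UPS name and host
THRESHOLD = 20               # percent of battery remaining

def battery_charge(ups: str) -> int:
    """Read battery.charge from the NUT server via the upsc CLI."""
    out = subprocess.run(["upsc", ups, "battery.charge"],
                         capture_output=True, text=True, check=True)
    return int(float(out.stdout.strip()))

charge = battery_charge(UPS)
if charge < THRESHOLD:
    print(f"Battery at {charge}%: time to start a graceful shutdown")
else:
    print(f"Battery at {charge}%: all good")
```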
So I guess I'm stuck with three again this year. I think I could move my WordPress blog to Ghost, but that's a lot of work for very little value. But yes, still three content management systems; they all approach content management a little bit differently, but they're all fine solutions.

Static site generators. I'm using two of them: Hugo and Jekyll. If you're not familiar with static site generators, you basically write a website in Markdown and they create the markup, or the website, for you. So you only need to fill in the content and they will generate a good looking website for you. For instance, my documentation site is based on Jekyll. So I write all of this in Markdown, I run a process, it generates the markup for me, and I can deploy it, and it's hosted in my cluster. And as you can see, it creates a pretty good looking website for only writing Markdown. The same goes for Hugo. As you can see, this is a Hugo site and you get the same thing: you write some Markdown, it generates markup, and then you can deploy a website with it. Now, I could probably get down to one static site generator, but I like using two to keep myself familiar with both.

For CI and CD, or continuous integration and continuous delivery, I'm actually using two solutions. I'm using GitHub Actions runners that are hosted within my network, in my cluster, and I'm also using GitLab runners to build and compile different pieces of code and deploy things to my cluster. Why two? Well, because I use both GitHub and GitLab. And hosting the runners myself ensures that, one, I'm not running my code on shared runners. Not that I'm that worried about anything leaking, but I don't have to worry about that. And two, I get priority and unlimited time and resources for those CI jobs to run. If I run them internally, I don't have to worry about their limits. So even though the code is hosted externally, I run my runners internally. And as you can see with this commit right here, it ran a build, it built a container, and it deployed to my Kubernetes cluster. And the same goes for my GitHub Actions on GitHub. It's pretty cool to self-host these runners and build your code and deploy your code or your infrastructure to your own internal infrastructure. It's pretty awesome.

And then there's everything else in my environment. I'm running custom code for bots on Twitch, Twitter, and Discord. I'm running Longhorn for storage within my Kubernetes cluster, and running netboot.xyz, which really doesn't fit into a category, but I use it to network boot my devices so I can install operating systems on them. And the list goes on and on and on. As you can see, I'm pretty passionate about self-hosting. Well, I learned a lot this year about self-hosting services, consolidating and securing services, and I hope you learned something too. And remember, if you found anything in this video helpful, don't forget to like and subscribe. Thanks for watching.
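As a footnote on the static site generator workflow described in the captions: the core step is just Markdown in, HTML out. Here is a toy illustration of that conversion using the third-party `markdown` package, with made-up content; real generators like Hugo and Jekyll add templating, themes, and asset handling on top.

```python
import markdown  # pip install markdown

# A made-up page, the kind of thing you'd keep in a Git repo alongside the rest of the site.
source = """# Homelab docs

Welcome to the documentation site. Everything here is written in Markdown
and rendered to HTML at build time.
"""

html = markdown.markdown(source)
print(html)
# <h1>Homelab docs</h1>
# <p>Welcome to the documentation site. Everything here is written in Markdown
# and rendered to HTML at build time.</p>
```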
Info
Channel: Techno Tim
Views: 100,830
Keywords: techno tim, technotim, homelab, home lab, self-host, self-hosting, open source, home server, homeserver, opensource, 2024, proxmox
Id: MpaAu3HVDYE
Length: 40min 0sec (2400 seconds)
Published: Thu Jan 04 2024