The moment has finally come. Today we’re gonna take this
rat’s nest of network equipment and tangled wires in a cheap IKEA cupboard… and transform it into this beautiful server rack. It will take up less space, be much neater, and be easily expandable, upgradable, and repairable… and the best part – it won’t break the bank. At least comparatively speaking. Server equipment is still pretty
expensive, even if you buy used. So join me on this rollercoaster of emotions, during which I do as many things
wrong in one video as possible. No, seriously, if you’re a system integrator, a sysadmin, or something of that
sort, don’t watch this video. It will hurt. So last year, I built my ultimate
small form factor home server. It’s based on the ASRock C236 WSI
motherboard and an Intel i3-6100. It currently has 4 hard drives in it, 2 SATA SSDs
and 2 NVMe SSDs that I use for video editing. With 10 gig networking I can comfortably edit 4K ProRes footage off of it and it helps
to free up space on my laptop. However, as my home network grew, managing it in
an IKEA closet became more and more inconvenient. For instance, I had three network cables
going into my main home server slash NAS machine alone – that’s a lot of
cables, with no way to manage them. Plus, the case that I used for my NAS was
just something left over from my desktop PC. Even though it can theoretically fit as many as
6 full-size hard drives and 12 2.5 inch drives, it isn’t really made for NAS use – for instance, replacing or adding new
hard drives was a nightmare. So I started looking for a rackmount
enclosure to house my server. I know I mentioned that I didn’t want a rack mounted case because they’re loud – and I was wrong. As you’ll see in this video, rack
components don’t have to be loud. So I decided that I wanted a 3U
case with hot swap storage bays. And I found out pretty quickly that
they’re all prohibitively expensive. Since I live in a small city, I also couldn’t
find anyone selling an enclosure locally. I almost gave up on the idea, until
one day I saw a listing on eBay for this Supermicro SC833 enclosure. For $119, it came with
8 hot swap hard drive bays, one internal 3.5 inch bay, and a power supply. The only caveat was that it didn’t come with any rails, so I had to buy those myself. With the main part of my homelab taken care of, let’s talk about the rack itself. So during my extensive research,
a.k.a. 5 minutes on Amazon, I found this Samson 12 unit
rack for musical equipment. I looked at the price, said “Hm, I guess I
could afford it”, and added it to my wishlist. What I didn’t take into account is that
this rack is only 46 centimeters deep, and most server enclosures, including the Supermicro SC833, are at least 65cm deep. After some further searching, I came
across this 12U rack from StarTech. At €277 it was twice as expensive as the Samson rack, but it’s built like a tank and has adjustable
rails with different depth settings. The reason I went with an open rack design is that full-blown rack closets are actually even more expensive. This one, for example, is €385, and this one is €350. Both of those closets are 80cm deep,
with no way to adjust the depth, and in general, an open rack would
just be more convenient to work on and provide better airflow for the components. Building the rack was kind of like
building IKEA furniture, but easier. It came with pretty easy instructions and
the whole process took me around 40 minutes. This is what the end result looks like, and with
the rack taken care of, let’s build our server. First I had to prepare the enclosure itself. It arrived in pretty good condition and still had all the original parts –
including the fans and the power supply. So that had to be fixed. There’s nothing wrong with the stock fans
installed in rack servers like this one. However, you have to keep in mind that those
servers are made to be used in data centers, where the most important
things are static pressure, airflow and ensuring that the components
are running as cool as possible. Same goes for the power supply. I’m sure that there are going to be a
lot of angry people in the comments, absolutely furious at me
for removing the Supermicro power supply – the pinnacle of
industrial server hardware, and replacing it with a measly Corsair SF450. But, once again, the power supply that
the case came with had a loud fan in it and only had an 80 Plus Gold rating – whereas the fan in my Corsair SF450 only ramps up under heavy loads, and it has an 80 Plus Platinum rating,
which will definitely help with power efficiency. I won’t be using this server
in a data center setting – it will be standing right next to my desk. And because of that, low noise and power
efficiency are much more important to me. So out went the fans, the power
supply, and the fan bracket. Then it was time to take the
server out of the old case. Not much to say here, this case
is not the easiest to work in, but I pretty much know it
inside out at this point. I connected the SATA cables to the backplane, as well as my PCIe devices – the 10
gig network card and the SSD adapter. I’m using this PCIe bifurcation riser, which
basically splits a PCIe x16 slot into two slots, x8 and x4, and also gives you
an M.2 slot to put an SSD into. For the fans, I used three Noctua NF-A12x25 fans. There were no mounting holes for 120mm fans in
the case, even though the height allowed for it, so I used the only proper way to fix
fans in an industrial server enclosure. Zip ties. Don’t say I didn’t warn you. After mounting the third fan, I realized that it wouldn’t spin freely – because of this
metal post. So I took care of it. And there we go! All three fans are
“secured” to the case and spin freely. Once again – don’t do any of this if you’re actually building a server for any kind of serious, mission-critical application, data center use, or anything like that. The reason I went with this janky fan setup and an SFX power supply is that my server rack will
be standing right next to my desk and I’m using it at home, not in a data center. Even when picking the components, I made a conscious choice to go with low noise, low heat, and low power consumption hardware, and don’t worry – I did a lot of testing and monitoring afterwards to make sure my components are running cool. Finally, shout out to this ridiculous
proprietary front panel connector. I didn’t have any male to female jumper wires, so I just cut the other end off completely and used a multimeter and this pinout diagram that I
found on the Internet to map the connections. One thing that definitely got
the professional enterprise makeover it deserved was the hard drives. I initially started off with shucked SMR drives from WD My Books, and ran a mergerfs/SnapRAID combo for my storage array. But this time, I wanted to do everything
“the proper way” and set up ZFS. So I reached out to Western Digital,
and they actually agreed to send me 4 of their 8 terabyte Western
Digital Red Pro hard drives. Those drives are CMR, so they’ll work really well in a ZFS setup, and I’m actually thinking about switching
to TrueNAS, now that I can actually use ZFS. So thank you Western Digital for sponsoring
this video and providing the drives.
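By the way, if you’re curious what that looks like on the command line, here’s a rough sketch of how a pool made of four drives like these could be created – the pool name, dataset name, and device paths are just placeholders, not my actual setup, and if you go the TrueNAS route you’d be doing all of this through the web UI anyway:

    # rough sketch – in practice, use stable /dev/disk/by-id/ paths instead of sdX names
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    zfs create -o compression=lz4 tank/media
    zpool status tank

With four 8 terabyte drives in a single raidz1 vdev like that, you get roughly 24 terabytes of usable space, with one drive’s worth of parity.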
Unfortunately, I didn’t manage to get a lot of footage of me actually putting the stuff in the rack, so I’m going to show you what it looks like now and walk you through all of the
components that I installed in it. First off, I got this nice wooden top from a
hardware store and put it on top of the rack, to kind of hide my poor cable management. And as an added bonus, I can also put stuff on it! So the power strip is probably
the easiest one of the bunch. There are a lot of rack mountable power strips for sale, and I went with this one mostly because it had the power plugs on the rear. I installed it at the top of the
rack, and that was pretty much it. The only annoying thing is that European
power plugs are oriented diagonally, and a lot of devices have massive power bricks. Because of that, one power
brick can basically block three outlets: its own and the two adjacent to it. Which is not ideal. I’m definitely going to invest in an
actual UPS at some point in the future, but for now this power strip has
been doing its job pretty well. Next, the patch panel. I needed some way to manage all of those network cables for devices that don’t have
any front facing Ethernet ports, like my NAS or the Seeed CM4 router board. So I got this 16 port patch panel from DeLock. I didn’t actually know that you needed a special
tool to punch the wires into the patch panel, and just used a flathead screwdriver and some cutters. Don’t do that though – just buy the tool. Or
better yet – buy a keystone patch panel. They’re usually on the pricier
side, but trust me, it’s worth it. Next up, the switch. I was using a
TP-Link unmanaged PoE switch and this MikroTik 4-port SFP+ switch
for my 10 gig connectivity. But since I wanted untagged VLANs, I
needed something smarter than that. I decided to go with this MikroTik CRS326 switch. It is passively cooled, has 24
Gigabit ports and two SFP+ ports and consumes around 5 watts at
idle, with no ports connected. The SwOS web interface is also super
nice and simple, which is great. Can’t say the same about RouterOS, unfortunately. Some people might wonder why I didn’t go
with a UniFi switch – and UniFi gear is nice – but an equivalent 24-port rackmount switch
from Ubiquiti would cost me 431 euros. Since the patch panel and the
switch were so close together, I wanted to make some short custom cables
to make the rack look a little bit nicer. I… didn’t really have any luck with that. None of the cables that I made worked, and the cable tester also reported
a bunch of faulty contacts. So I gave up and bought a pack of these super
short, thin and flexible LAN cables from Amazon, and they’ve been working great. Next, we’ve got this short 1U shelf
that houses my Raspberry Pis. This is the Seeed CM4 RouterBoard
that runs Home Assistant, an internal reverse proxy, Grafana, and Prometheus.
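If you ever want to run a similar monitoring and home automation stack yourself, here’s a very rough Docker-based sketch – I’m not saying this is exactly how mine is deployed, and the container names, volume paths, and ports are just placeholders:

    # rough sketch only – official images, placeholder volumes and ports
    docker run -d --name homeassistant --network host \
      -v /opt/homeassistant:/config ghcr.io/home-assistant/home-assistant:stable
    docker run -d --name prometheus -p 9090:9090 \
      -v /opt/prometheus:/etc/prometheus prom/prometheus
    docker run -d --name grafana -p 3000:3000 grafana/grafana

You’d still want a reverse proxy of your choice in front of those.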
And this is a regular Raspberry Pi that I use to run PiKVM. Next up, we’ve got my main router – the Fujitsu S920.
I’m going to link in the top right corner – it’s an amazing and
inexpensive OpenWrt machine. It consumes very little power and
has been working great so far. It’s about as powerful as a Raspberry Pi 4, but it’s cheaper
and easier to get, at least here in Europe, and has a PCIe x4 slot. So it’s also a good
option for running Home Assistant, Plex, and other self-hosted applications. Below the router and the Pis,
we’ve got the “drawer of shame”. It’s basically just my ISP router. Next, we’ve got 2U of just empty space… And below that is my main server. Now, the main server is the
heaviest rack module that I have, so I’ve put it at the bottom of the rack. I made the mistake of buying a
pair of these universal rack rails, and don’t get me wrong, they do a decent
job of holding the enclosure in the rack, but getting the server out of the
rack is a huge pain in the butt. This is because the enclosure lid is
just wide enough to fit into the rack, and the rails are actually
wider than the enclosure. So in order to take it out, I pretty much
have to unlatch the lid, tilt it a little bit, and then take the whole thing out. I’m definitely going to invest in
the original Supermicro rails for that enclosure at some point in the future. So there you have it! This is my “virginity
corner” containing almost my entire home lab, and I’ve been pretty happy with it so far. It’s definitely easier to work
on than what I had before, and even though it’s bigger in
size, it’s way more expandable. And since I live in a bigger apartment now
and my homelab has expanded in size anyway, I’m okay with it.