This is the newly released HL15 from 45HomeLab, a division of 45Drives. It's a server meant to meet the needs of the homelab community while bringing the build quality and design of their enterprise offerings. It's a 15-bay server that can be used for just about whatever you want, but chances are you're thinking about buying it to be your next storage server. There's a lot to unpack with this new server, so this will be an in-depth look at the HL15 homelab storage server. We'll cover everything from the chassis, the backplane, the motherboard, the CPU, the power supply and power consumption, cooling, and software selection, to the price and value proposition. Because at the end of the day, if you don't think it's worth buying, you're probably not going to buy it. So, is it any good? Let's find out.

Full disclosure: 45Drives did send the HL15 to me for review, and they've also sent devices to me in the past to help with my storage needs. However, this video is not sponsored and I was not paid. With all that out of the way, let's unpack this machine.

When purchasing this machine, you have a few options. You might have sticker shock when seeing these prices, but we'll talk about that a little later in this video. Your options are: chassis and backplane only; chassis, backplane, data cables, and PSU; or fully loaded and tested, which is the machine they sent to me for testing. You have your choice of color, power cables, and some additional add-ons if you like. They sent a white one to me, which is what I was hoping for.

So first, let's dive into the chassis. This chassis is made of steel and it's solid. Like, really solid. If you've only had aluminum cases in the past, you'll notice the difference right away. It has a powder coat finish that comes in white or black; mine's obviously white. And the design is really nice. It sort of has a Star Wars Hoth vibe to it, which I'm a fan of.

Looking at this case, you can see that it's almost entirely metal and screws, which I think is a good thing. Why does that matter? Well, if you've ever had a rivet pop off one of your cases, it's near impossible to fix. At least for me, I've never worked with rivets and wouldn't even know where to start.

You can open the case using these thumbscrews, and the first thing you'll see is this top-loading drive cage. It holds 15 drives that easily slide in and out, and it's easy to see the serial numbers, so no more labels for me. You can see it has these nice little springs to keep the drives in place. You'll also notice there are no caddies, which is something 45Drives does not like, and after loading quite a few drives in other systems, I think it's starting to rub off on me. Caddies just add more parts, more screws, and ultimately more time and complexity when servicing the drives. It's something I can now appreciate after dealing with the status quo for so many years.

The other nice thing about the case is that you can lay it flat or even stack it up like a desktop with the included feet. If you do choose to rack mount it, you'll just need to attach the included rack ears and then also pick up some rack rails, which don't come with the server. If you don't want the official rack rails, one of the universal rack-shelf-type rails will work just fine.

One of the downsides is that the PCIe slots have breakaways. That's great if you never have to add a card: fewer parts, fewer screws, fewer things banging around.
But it's awkward if you do shuffle hardware around. This is a carryover from their enterprise servers, something I've also seen in my AV15.

I know you can see the backplane, and we'll go more in depth on it and some of its features a little later, but first let's focus on the motherboard and connectivity to help you understand how this all fits together. The motherboard is a Supermicro X11SPH-NCTPF. The board comes in two flavors, one with SFP+ networking and the other with 10 gigabit Ethernet. It's a single-socket LGA 3647 board, and you can socket a CPU with up to 28 cores, but we'll cover the specific CPU a little later. It can hold two terabytes of RAM across its eight DIMM slots, and it takes DDR4-3200 dual-rank ECC registered RAM. I opted for 32 gigs (two 16 gig DIMMs), then picked up more RAM from eBay, and now I have 128 gigs. It's priced pretty fairly there if you're thinking about buying extras. I made sure I tested all of the RAM with MemTest86 and it all passed. If you do decide to do this, be sure to buy matching dual-rank DIMMs and populate the slots according to the motherboard's manual.

If you're looking at the specs and read that it has 10 Serial ATA ports, that's because it has eight SATA ports and two SATA DOM ports. The two SATA DOMs are nice for the OS or something similar, but the eight SATA ports are wired up to the backplane. It also has eight SAS ports, which are also wired up to the backplane, but eight and eight is sixteen. We'll talk about that in the backplane section. There's also an NVMe slot on the motherboard, and mine came with a Kingston drive. The board can support two more NVMe drives via OCuLink using U.2 drives; I'm not sure where you would physically put those drives, but it's definitely possible. If you're thinking about adding more NVMe drives, though, I would just look at one of the PCIe add-on cards from Supermicro. Speaking of PCIe slots, this board has four PCIe 3.0 slots: one x16 slot and three x8 slots, though one of the x8 slots actually runs in x4 mode.

As far as networking goes, this motherboard has two SFP+ ports that give me 10 gigabits per second each. I went with this route because I feel it's a little more flexible than 10 gigabit Ethernet, and I ended up buying a few 10 gig RJ45 transceivers anyway. Another notable thing about this motherboard is that it has IPMI. IPMI allows me to remote into this machine whether it's powered on or off, flash firmware, install operating systems, and do pretty much anything I want to do remotely. I used it quite a bit while testing this and even flashed the BMC firmware shortly after powering it on. I did buy the license upgrade from Supermicro that allows me to flash the BIOS through the IPMI UI; I typically do this with all of my Supermicro motherboards. It probably allows a few other things too, but that's what I use it for. It costs about 27 bucks, and while it stinks to pay for this license, there's no easy way around it if you want to flash your BIOS remotely or want access to some of those other features, the ones I never use.

There are lots of additional headers on the board if you need them, like TPM, USB, and even an additional serial header. If we look at the I/O on the back, we can see that it has VGA, which is totally fine for me since I have IPMI, and is pretty typical of servers. You can also see USB 2.0 and 3.0 ports, the network ports again, and a serial port.
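As an aside, if you'd rather script against IPMI than click through the web UI, here's a minimal sketch in Python wrapped around ipmitool. This isn't anything 45Drives or Supermicro ships; it assumes ipmitool is installed on the machine you're running it from, and the BMC address and credentials below are placeholders for your own.

```python
#!/usr/bin/env python3
"""Minimal sketch: poking the BMC over the network with ipmitool.

Assumes ipmitool is installed locally and the BMC is reachable; the
address and credentials below are placeholders for your own.
"""
import subprocess

BMC_ARGS = ["-I", "lanplus", "-H", "192.168.1.60", "-U", "ADMIN", "-P", "changeme"]

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the BMC and return its output."""
    result = subprocess.run(
        ["ipmitool", *BMC_ARGS, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
print(ipmi("sdr", "list"))                 # temperatures, fan speeds, voltages
```

Handy for checking fan speeds and temperatures from a script instead of logging into the web UI every time.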
So overall, I think it's a solid board for this configuration and what I want to use it for, and it pairs well with the CPU. The CPU is a second gen Intel Xeon Scalable processor. It's Cascade Lake, it has six cores with no hyperthreading, and it has a max clock speed of 1.9 gigahertz, which is low for single-thread performance but fine for me when using it as a storage server. It can address up to 48 PCIe lanes, which is great for this board. It has a TDP of 85 watts, and it supports VT-x, VT-d, AES-NI, and the rest of the typical tech you see on Xeon CPUs.

But in the real world, is this enough CPU? Looking at the specs, it doesn't seem like a lot: you're getting six physical cores and six logical cores, since there's no hyperthreading, at a max clock speed of 1.9 gigahertz. That might not look like enough, but I'll say for me it will be. I'll be doing very little compute on this server and will be using it primarily as a storage server. If you plan on using this as a hypervisor, say with Proxmox, you might want to look into another CPU. I checked other CPU options on the used market, and it's actually not too bad if you want to upgrade. The CPU it comes with is great for storage and great for containers, but if you start spinning up a lot of virtual machines, that's where you're going to see it get a little sluggish.

This backplane was created by 45Drives. It can address up to 15 drives, hence the HL15. They've wired up eight bays to the SATA controller and seven to the SAS controller: the eight SATA bays run through the onboard C622 controller at 6 gigabits per second, and the seven SAS bays run through the Broadcom SAS 3008 controller at 12 gigabits per second. These aren't clearly marked on the backplane, so if you plan on taking advantage of SAS, you might have to trace the cables to see which bays are wired to which controller. Personally, I'm going to use all SATA drives, so it doesn't really matter to me. With some drives in the right configuration, this backplane can push up to 2,000 megabytes per second; that's two gigabytes per second. You can easily saturate a 10 gig link, and you're basically getting NVMe speeds over the network. It's the same backplane they put in their enterprise servers, their Storinators. There's no difference between the two, and it's the one they just developed.

The first feature is universal hot swap, and you're probably thinking, don't all drives support hot swap? Well, kind of, sort of. Really, this is a power-limiting feature, so that when you plug in a drive it won't cause its neighboring drives to drop in voltage. Another cool feature is staggered spin-up: it staggers the spin-up of each drive when the machine powers on. This prevents the machine from pulling a ton of power the moment you turn it on and possibly causing a surge. After learning about this feature, I think I finally figured out why one of my old JBOD servers tripped my power supply when I powered it on; it probably doesn't support staggered spin-up.
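Quick aside before we get into the power supply: since the backplane doesn't label which bays are SATA and which are SAS, here's a rough sketch of how you could check from the OS instead of tracing cables. This is just my own helper, not something 45Drives provides; it assumes a typical Linux sysfs layout, where the C622 SATA ports usually show up under the ahci driver and the SAS 3008 HBA under mpt3sas, so verify on your own system.

```python
#!/usr/bin/env python3
"""Rough sketch: map each disk to the controller it hangs off on Linux.

Assumes a typical sysfs layout; on this board the C622 SATA ports usually
appear under the "ahci" driver and the SAS 3008 HBA under "mpt3sas".
"""
from pathlib import Path

for disk in sorted(Path("/sys/block").glob("sd*")):
    dev = (disk / "device").resolve()
    # Walk up the sysfs tree to the PCI function that owns this disk.
    pci = next((p for p in dev.parents
                if (p / "vendor").is_file() and (p / "class").is_file()), None)
    if pci is None:
        continue
    driver = (pci / "driver").resolve().name if (pci / "driver").exists() else "unknown"
    print(f"{disk.name}: PCI {pci.name}, driver {driver}")
```

Slot a single drive into a bay, run it, and note which controller the drive shows up on; repeat for any bays you care about.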
Now, the power supply is a Corsair RM750e. It's ATX and fully modular, so you only need to use the cables you actually need, which cuts down a lot of clutter in the case. This is welcome, because when I swapped out the power supply in my AV15 there were cables everywhere, and I had to use a specialized power supply. So it's great they're using a standard ATX power supply now. This power supply is also quiet: it has a 120 millimeter fan that can even spin down to zero RPM under low loads. It has an 80 Plus Gold rating, meaning it's up to 90 percent efficient on steady loads. Could you get a more efficient one? Sure, but it's going to cost a lot more. Overall it's a solid power supply, but 750 watts might not leave a ton of headroom if you plan on adding all 15 drives along with some other components, so keep that in mind.

So let's talk about power consumption on this server. Surprisingly, it's pretty good. Turning it on without anything plugged in, it pulls about 80 watts while it runs its system tests, and if you plug in a few additional USB devices, a monitor, and a couple of GBICs, it pulls an additional 10. When fully booted into Ubuntu 22.04 LTS without anything plugged in, it pulls about 70 watts. The only drives I had free to test with are these 8 terabyte Seagate IronWolf 7200 RPM NAS drives. After inserting each drive, letting it spin up, and letting things normalize, each drive added roughly an additional 8 to 9 watts. After inserting six of these drives, without anything else plugged in, we're sitting at about 114 watts. After plugging everything back in, like the monitor, the USB devices, and the NICs, we're sitting right around 125 watts. That's not too bad, all things considered. I disconnected the fans just to see, and it looks like they're using about 4 watts apiece. Any way you slice it, 125 watts is a lot of power, but it's considerably lower than some of my other servers with the same number of drives in them, and I think that has a lot to do with the CPU and the drives.

As far as cooling goes, this has six 120 millimeter fans: three in the front and three in the middle. They're Cool Guys fans and they move a ton of air. The ones in the front pull air in and push it across the hard drives, and the three in the middle pull that air through and push it across all of the components on the motherboard. These fans move a lot of air, and you don't need to worry about your components ever overheating, but they do that at the cost of noise. How loud are the fans? Well, I'm not really sure how to put this, but they are quiet for an enterprise server and loud for a home server. How about that? They are much quieter than their enterprise counterpart, the AV15, but still not something you want to put in your living room. And like my AV15, I'm thinking about replacing these with a Zigbee controller and RGB fans like I did in that customization video, but I haven't had time to rip it apart yet. If you do decide to replace these fans, just be sure you're getting something on par with these, specifically ones with enough static pressure. The CPU cooler is passive, and I think that has a lot to do with why they're using these fans. On my AV15, I swapped in a Noctua CPU cooler that kept it just as cool, if not cooler, which then allowed me to replace the case fans with quieter fans with less static pressure. I'll link the CPU cooler I'm going to use down below if you're interested in doing the same.
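Going back to the power figures for a second, here's the simple back-of-the-envelope math as a tiny Python sketch. The constants are just the numbers I measured for this specific build and these specific drives, so treat them as rough estimates rather than anything official.

```python
# Rough power estimate built from the numbers measured above.
# These constants are observations for this build; yours will vary.
BASE_IDLE_W = 70      # booted into Ubuntu 22.04, nothing extra plugged in
PER_DRIVE_W = 8.5     # ~8-9 W per 7200 RPM IronWolf once spun up
PERIPHERALS_W = 10    # monitor, USB devices, transceivers

def estimated_draw_watts(num_drives: int, peripherals: bool = True) -> float:
    """Estimate total wall draw for a given number of spinning drives."""
    total = BASE_IDLE_W + num_drives * PER_DRIVE_W
    if peripherals:
        total += PERIPHERALS_W
    return total

print(estimated_draw_watts(6))    # ~121 W, close to the ~125 W measured
print(estimated_draw_watts(15))   # ~207 W with every bay populated
```

It's also a quick way to sanity-check that the 750 watt PSU still has headroom with all 15 bays full, at least for drives like these.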
The HL15 ships with Rocky Linux, but I assume that's just there to run their QA tests. Since this is an open system, you can install anything you like on it, from Proxmox to TrueNAS to VMware or any other operating system. If you're buying this as a storage server, most likely it's going to be something that handles storage nicely. I wanted to take a slightly different approach with this system and decided not to install a hypervisor. This time I might go bare metal with Ubuntu LTS and see how manageable it is with some services like SMB, NFS, ZFS, some Docker containers, and possibly KVM if I want some virtualization.

I also decided to give Cockpit a shot, or the Houston UI as 45Drives calls it. It gives me a friendly UI to manage some services that are otherwise pretty complex. That's when I found out they don't yet support Ubuntu 22.04, only 20.04, which is the previous LTS from about three years ago. Because of that, I tried to install Cockpit without the 45Drives special sauce, but Cockpit on Ubuntu 22.04 had its own issues, so for the sake of showing off Cockpit and Houston, I installed Ubuntu 20.04, which is still supported. Hopefully 22.04, or even 24.04 in the spring, will be supported. Installing it was super easy with the 45Drives script, which installs many different modules to help you manage your server; after setting it up, you reboot and can access the Cockpit dashboard.

At the time of testing Houston for this video, some of the modules were not working with the HL15. For example, the 45Drives Disks and 45Drives Motherboard modules do not load, the 45Drives System area only partially loads, and some services fail to load altogether, like znapzend. Those things aside, the rest of the UI seems to be working. I can create ZFS pools, create SMB and NFS shares, and even create some virtual machines if I like, although the UI is pretty basic for that. Creating users, charts, metrics, benchmarks, and even accessing the terminal all seem fine. You can also install additional applications using the CLI by finding them on the project page, though most of them are already installed. For example, I installed the OpenLDAP server on my machine and that worked just fine. However, beta applications like Tailscale and Cloudflare Tunnel aren't available on this version of Cockpit, so you won't be able to install them. Arguably, that kind of thing is probably better suited to something like Docker containers anyway, and if you're going to install and manage containers, you'll want to install Portainer to manage them. So if I end up going this route, I'll leave Cockpit for managing hardware-type services and configuration and leave the rest to Docker.

All that being said, I know this is a new line for 45Drives, but it would have been nice if this were all working, so that I'm not forced to use Proxmox (I don't need virtualization) or TrueNAS (I don't want to run their apps). I might just go bare metal without any GUI, but managing configs for SMB, NFS, ZFS, and so on by hand is such a pain. Hopefully some enhancements come to Houston and Cockpit soon so that we can all manage our HL15s through their UI.

Hey y'all, editing Tim here. I talked to 45Drives and they said they're going to have separate repos for these in the future: one for 45HomeLab systems, while continuing the existing ones for 45Drives enterprise systems. So I guess this wasn't really supposed to be used here yet, but I thought I'd give it a shot. Anyway, more to come on my decision for the OS. I still have time to make that decision, and the nice thing is it's flexible, so I can choose whatever I want.
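For what it's worth, if I do go the Docker route, getting Portainer running is basically a one-liner. Here's a small sketch of the same thing using the Docker SDK for Python, mostly to show how little there is to it; the image tag and port are the usual Portainer CE defaults, but double-check them against Portainer's own docs, and the plain docker run command works just as well.

```python
#!/usr/bin/env python3
"""Sketch: start Portainer CE with the Docker SDK for Python.

Assumes Docker is installed and running, and that the usual Portainer CE
defaults (image name, port 9443) still apply; check Portainer's docs.
"""
import docker  # pip install docker

client = docker.from_env()
client.containers.run(
    "portainer/portainer-ce:latest",
    name="portainer",
    detach=True,
    restart_policy={"Name": "always"},
    ports={"9443/tcp": 9443},  # Portainer web UI
    volumes={
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        "portainer_data": {"bind": "/data", "mode": "rw"},
    },
)
print("Portainer should now be reachable at https://<this-host>:9443")
```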
I decided to do a little network testing on the HL15. I wanted to test the CPU, the RAM, and the board all at the same time, and I wanted to make sure this configuration could really saturate a 10 gigabit network. So I connected the HL15 to the network via 10 gig and then connected another machine on the network as well. I ran an iperf3 server on the HL15 and set up a client on my other machine. Saturating a 10 gig link on the HL15 was easy; it didn't even break a sweat.

But then I thought, well, I have two NICs. Let me LAG these together and see if I can push 20 gigabits per second. Now, when I LAG these NICs, I'm not getting a combined 20 gigabits per second on one connection; I'm really getting two separate 10 gigabit connections. I learned a little about link aggregation in the past. Basically, you get two highway lanes that are each 10 gigabits per second, rather than one combined lane that's twice as fast. That's how I like to think about it in my head. Anyway, I LAGged the NICs together in the OS, LAGged them together on my switch, powered on another one of my machines, and spun up a second iperf3 server on the HL15. So now I have two instances of iperf3 running on the HL15, each listening on a different port, and two clients connecting to the HL15, each on a different port. That was the test I set up, to make sure two clients were talking to the HL15 at the same time, both on different ports. And when I ran iperf3 on the second machine, you can see here that I'm now getting almost 20 gigabits per second combined, without breaking a sweat.

Now, I'm not going to see these speeds at home unless my wife is using 10 gigabits per second and I'm using 10 gigabits per second and we're both pushing files back and forth at the same time. But I wanted to make sure the HL15 could handle this in its current configuration, and as you can see, no problem at all.
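If you want to recreate that dual-stream test, here's roughly what it looks like, sketched in Python around iperf3. The address, ports, and timings are placeholders; it assumes iperf3 is installed everywhere and that two server instances are already listening on the HL15 (for example, iperf3 -s -p 5201 -D and iperf3 -s -p 5202 -D).

```python
#!/usr/bin/env python3
"""Sketch of the dual-port iperf3 test described above.

Assumes iperf3 is installed, two iperf3 servers are already listening on
the HL15 (ports 5201 and 5202), and this box can reach both; the address
below is a placeholder.
"""
import subprocess

HL15 = "192.168.1.50"   # placeholder address for the HL15
PORTS = [5201, 5202]    # one port per iperf3 server instance

# Launch one client per server port; -P 4 uses four parallel streams each.
clients = [
    subprocess.Popen(["iperf3", "-c", HL15, "-p", str(port), "-t", "30", "-P", "4"])
    for port in PORTS
]
for proc in clients:
    proc.wait()
```

In my case I ran the two clients from two different machines, so each one had its own 10 gig link into the switch rather than both streams sharing one.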
So let's talk about some of the configuration options you saw earlier, along with the value you're getting. You can choose the chassis plus backplane for $799, which gets you the chassis with the direct-wired backplane; everything else is up to you to supply and connect. The next option, at $910, is the chassis plus backplane and PSU, which gets you the chassis, the direct-wired backplane, the data cables, and the 750 watt ATX power supply. Again, everything else is up to you to supply. The final option is $2,000, and it's the fully built and tested system; it's the one that's back there.

Now, let's not beat around the bush: all of these options are pretty expensive. No matter which way you look at it, $800, $900, or $2,000 is a lot of money no matter what you're spending it on, and I think you'll have to determine whether there's enough value here for you. Is getting a steel, repairable case with a backplane that can hold 15 caddy-less drives and give you NVMe speeds over the network important to you? If it is, you only have a few options out there, and none of them really have the features this has. This isn't a system that comes with a proprietary motherboard and components; it's a chassis that's open enough to accommodate any motherboard and CPU combination you can throw at it, now and in the future. If you're comparing it to other storage vendors like Synology, it's on par price-wise. But if you're comparing it to old used gear, you're going to think this is really expensive. That being said, I would love to see cheaper options for those who are in the market for a homelab chassis like this one.

And if not, maybe those people will see this on the secondhand market later in life. But if you think there's enough value at this price, I think you'll find a case that has everything you're going to want for a storage server now and in the future. It looks like 45Drives addressed, in the HL15, everything I mentioned having problems with when using the AV15 at home. Well, except RGB, unless you're counting that power switch on the back. No matter which camp you're in, too expensive or just right, you have to give 45Drives some credit, because they took a huge risk bringing something to market that's as niche as homelab. To my knowledge, they're one of the first to do it and brand it with homelab in the title. Well, I learned a lot about the HL15 storage server and network throughput testing, and I hope you learned something too. And remember, if you found anything in this video helpful, don't forget to like and subscribe. Thanks for watching.