Hardware RAID for the fastest Raspberry Pi CM4 NAS!

Video Statistics and Information

Reddit Comments

Blog post version if you're more textually-inclined: https://www.jeffgeerling.com/blog/2021/hardware-raid-on-raspberry-pi-cm4

πŸ‘οΈŽ︎ 13 πŸ‘€οΈŽ︎ u/geerlingguy πŸ“…οΈŽ︎ Feb 26 2021 πŸ—«︎ replies

Cool, but if you are only one step away from putting sand in your oven to make the chips yourself.

πŸ‘οΈŽ︎ 8 πŸ‘€οΈŽ︎ u/IMABEARLAWL πŸ“…οΈŽ︎ Feb 26 2021 πŸ—«︎ replies

Anyone got experience with USB3 raid enclosures? I need max size and speed per dollar for a scrappy little search engine research project I've got going on my rpi4-cluster, and I'm eyeing something in those lines. A proper NAS would be nice, but it's a bit beyond what I'm able to invest this early on.

Right now it's running off a single external hard drive and it's just slightly not enough.

πŸ‘οΈŽ︎ 2 πŸ‘€οΈŽ︎ u/BobTheSCV πŸ“…οΈŽ︎ Feb 26 2021 πŸ—«︎ replies
Captions
You know how I posted a video about enterprise SAS RAID on the Raspberry Pi but never actually showed a SAS drive, and then I posted a video about the fastest RAID ever on a Raspberry Pi? Well, today I'm going to show you actual enterprise SAS drives running on a hardware RAID controller on a Raspberry Pi, and it's even faster than the fastest RAID array I set up before.

Josh, a storage engineer at Broadcom, watched those videos and realized the LSI card I tried was way too old to work properly with ARM processors like the one in the Raspberry Pi, so he asked if I wanted to test a much newer storage controller that might actually work. I said yes, so he sent me this Broadcom MegaRAID storage controller card, along with an 8-bay universal backplane to go with it. And spoiler alert: we got it working. It required about 50 kernel recompiles, and I think I'm getting to the point where I should make a shirt for that. Anyways, we got this thing working, and I can finally say, without any caveats, that I have enterprise-grade SAS RAID on a Raspberry Pi.

But what is SAS RAID, and what makes hardware RAID any better than software RAID like I used in the last video? Well, here's a quick primer. The drives you might use in a NAS or a server today usually fall into three categories: SATA, SAS, or PCI Express NVMe. All three of these drive types can use solid-state storage for high IOPS and fast transfer speeds. SATA and SAS drives might also use rotational storage, which offers higher capacity at lower prices, though there's a severe latency trade-off with that kind of drive.

RAID, which stands for Redundant Array of Independent Disks, is a method of taking two or more drives and putting them together into a volume that your operating system can see as if it were just one drive. RAID can help with redundancy, so you can have hard drives fail and not lose access to your data immediately. RAID can also help with performance by allowing multiple drives to read or write data simultaneously, and some RAID tech can speed things up even more with extra caching, or provide even better data protection with separate flash storage that caches write data when power goes away.

If you can fit all your data on one hard drive, though, and you have a good backup system in place and don't need maximum availability, you probably don't need RAID. But for people like me, who manage tens of gigabytes of video files every day, RAID is helpful to make sure data is accessible and fast. I go into a lot more detail on RAID in my Raspberry Pi SATA RAID NAS video, so go check that out if you want to learn more.

Now, back to SATA, SAS, and NVMe: all three are interfaces used for storage devices, and the coolest thing about a modern card like the one I'm using is that you can mix and match all of these in hardware RAID arrays connected through one HBA, or host bus adapter, card. Now, if you can spend a couple thousand bucks on a fast PC with lots of RAM, software RAID solutions like ZFS or Btrfs offer a lot of great features and are pretty reliable. But on a system like my Raspberry Pi, I found that software-based RAID takes up most of the Pi's CPU and RAM and doesn't perform as well. If you watched my earlier video on RAID, the fastest speed I could get with software RAID was about 325 megabytes per second, and that was with RAID 10.
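For comparison, the software RAID 10 array from that earlier video would typically be built with mdadm. Here's a minimal sketch; the device names /dev/sda through /dev/sdd and the mount point are assumptions for illustration, not the exact drives from the video:

```shell
# Build a 4-drive software RAID 10 array with mdadm
# (device names below are hypothetical examples)
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Format and mount the new array
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid10
sudo mount /dev/md0 /mnt/raid10

# Persist the array definition so it assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

With this approach, all parity and mirroring work runs on the Pi's own CPU, which is exactly the overhead the hardware controller removes.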
Parity calculations make that maximum speed even lower. Using hardware RAID, where the RAID operations are offloaded to an external card, frees up the Raspberry Pi CPU, and it allowed me to get over 400 megabytes per second. That's a 20% performance increase, and it leaves the Pi's CPU free to do other things. We'll get more into performance later, but first I have to address the elephant in the room.

People are always asking in the comments why I test all these outrageously overpowered cards, in this case a storage card that costs 600 bucks used, on a low-powered Raspberry Pi. I even have a 10th-gen Intel desktop in my office, so why not use that? Well, first of all, if you've watched this channel at all before, you know my first response: it's fun. I enjoy the challenge, and I get to learn a lot, since failure teaches me a lot more than easy success. But with this card, there are two other good reasons. First, it's great for a little storage lab in a tiny corner of my desk: I can put eight drives, an enterprise RAID controller, my Pi, and some other gear there for testing. When I do the same thing with my desktop, I quickly run out of desk space, and I don't have the space or time to set everything up and play around with it as much as I do with the Pi. And second, as I mentioned before, using true hardware RAID takes the I/O burden off the Pi's already-slow processor. Having a fast, dedicated RAID chip and an extra 4 gigabytes of DDR4 cache for storage makes the Pi reliable and fast for any kind of disk I/O.

If you don't need to pump through gigabytes per second of data, the Pi plus a RAID storage controller is actually more energy efficient than running software RAID on a faster CPU. I tested my storage setup plugged into a Kill A Watt, and the Pi with the card only used 10 to 20 watts of power. On the other hand, my Intel i3 desktop, running by itself with no RAID card at all, idles at 25 watts and normally runs at 40 to 50 watts when in use. That said, I don't see this Pi hardware RAID solution taking over the storage industry and winding up in Amazon's data centers; the Pi just isn't built for that. But I do think this is a compelling storage setup that was never really possible until the Compute Module 4 came along.

Now let's talk a little bit about the RAID controller card that Josh sent me. The card is PCIe Gen 3.1 and supports 8 lanes of PCIe bandwidth. Unfortunately, the Pi can only handle one lane at Gen 2 speeds, meaning you're not going to get a full six-plus gigabytes per second of storage throughput on the Pi. But the Pi can still use four fancy tricks this card has that the SATA cards I tested earlier don't have. First, it has a powerful SAS RAID-on-Chip, which is a computer in its own right, taking care of all the RAID storage operations; this frees the host computer, in our case a Pi with a slow CPU, from having to manage RAID operations. Second, it has a 4-gigabyte DDR4 SDRAM cache, so it can speed up I/O on slower drives, and the card doesn't have to use any of the Raspberry Pi's own limited RAM. Third, it has a feature called CacheVault flash backup: if you buy an extra supercapacitor and there's a power outage, it dumps everything in the card's write cache to a little built-in flash storage chip. Fourth, it has what are called tri-mode ports, meaning you can plug any kind of drive into the card, whether SATA, SAS, or even NVMe, and it will automatically switch modes to work with that drive. It also does all the work internally, so using multiple NVMe drives won't bottleneck the CPU, which can be a huge problem even with faster processors than my Raspberry Pi has. I don't have a U.2 or U.3 NVMe drive on hand to test right now, but maybe that'll be a fun topic to explore in a future video. Oh, and did I mention it can connect up to 24 NVMe drives, or a whopping 240 SAS or SATA drives, to the Pi? My budget right now is a little bit limited, especially after that nearly thousand-dollar Wi-Fi 6 testing video, so I've only been able to test eight drives at once so far. Maybe if this channel ever hits a million subscribers, I'll budget for a giant 240-drive Raspberry Pi petabyte NAS.

Anyways, I'm getting off track. The goal was to see if a newer card, designed in an era when ARM processors in servers were a thing, could work on the Pi's limited PCI Express bus. The first try was pretty rough. Right out of the gate, Josh sent over a driver that I had trouble compiling. I also realized the card needed at least 10 to 20 watts of power, and the 2-amp, 12-volt power supply I was using with the Compute Module was probably not adequate, so I replaced it with a 5-amp power supply. I should note that I also tried using externally-powered risers, but there are two reasons I didn't use them here: first, one of those risers released the magic smoke in one of my network cards a few weeks ago, and second, when they do work, they only work some of the time, so it's kind of like Russian roulette whether your storage comes up when you boot the Pi. So I chose to increase the amperage to the Compute Module board itself, which allowed the card to run without any issues.

With the power issue solved, I tried getting the card's driver to compile on the 64-bit Pi OS beta, but I ran into more problems. I had to build the kernel headers myself, since at the time the Raspberry Pi kernel headers package wasn't available for the beta OS. After I got that figured out, we found that MSI-X wasn't supported on Raspberry Pi OS, but the card needed it to load the driver. It was a happy coincidence that some other people trying to get Google's Coral AI accelerator working had the same issue we did, and lucky for us, in a forum post Phil Elwell mentioned he had committed a tweak to the Raspberry Pi kernel source to enable MSI-X, with a few little limitations. So it was time to compile a new kernel. Well, we did that, but ran into some other issues with the driver, so the bring-up was on hold. I also tried compiling the driver on Ubuntu 64-bit Server edition for the Pi, but ran into the same MSI-X issue, and I wasn't really set up to recompile the Ubuntu kernel.

After a little time passed, a new driver version was sent my way, and I recompiled the kernel so I'd have MSI-X support. I compiled the driver against the new kernel, and then got a bunch of messages about IRQ poll features not being defined. Apparently I had to recompile the kernel again, this time with the option CONFIG_IRQ_POLL set to 'y'. So I did that, copied the kernel from my fast cross-compile environment on my Mac to the Pi, and then found out the kernel headers and source required by the driver weren't compiled for ARM when I built the kernel on my Mac. So guess what: I had to recompile the kernel yet again, this time on the Raspberry Pi itself. An hour later, with a Pi-compiled kernel, the driver compiled without errors for the first time. I excitedly ran sudo insmod megaraid_sas.ko... and then it hung. The driver initialization just failed after hanging for five minutes or so.

At this point I knew things were serious, because we had two other Broadcom engineers on a conference call, and they told me to pull out a USB-to-UART adapter and watch the serial data coming off the card itself while it was initializing. I got to learn a bit about minicom and debugging Broadcom cards, which was neat. But after a few hours, we found that the driver would work on 32-bit Pi OS but not on the 64-bit version, which is a little strange, since many drivers support 64-bit OSes better than 32-bit ones. Anyways, we dumped a ton of memory data to text files, and I sent that data over to a driver engineer at Broadcom. Finally, a few days later, he sent a patch which fixed the problem on 64-bit Pi OS. It turned out to be related to the use of the writeq function, which is apparently not that well supported on the Pi's PCIe bus. And so, after many recompiles and a lot of iterations of the driver for this card, we got it to work. When the card initializes, dmesg shows that the Broadcom MegaRAID driver can see the attached storage enclosures, and then I can use an app called StorCLI to interact with the card and configure RAID.

I'm not going to get into the details of using StorCLI, since you could write a book on it, and it looks like Broadcom already has. But the process for setting up a volume went like this. First, I used StorCLI to create a virtual drive. The command makes a RAID 5 array named 'sasr5' using drives 4 to 7 in the storage enclosure I have attached, which has an ID of 97. It sets a few options for caching, then sets the strip size for the RAID array to 64 kilobytes. Depending on the type of drives you have and the performance you need, you might want to use different options here, like a larger strip size for slower spinning drives. Like I said, there's practically a book on how to set up RAID with StorCLI, so if you're serious about it, you should probably read that.

After I created one RAID 5 volume for my four Kingston SA400 SATA SSDs and another for four HP ProLiant 10K SAS drives, I used lsblk to make sure the new devices, sda and sdb, were visible on my system. Once I knew Linux could see these volumes, I partitioned them with fdisk and formatted them with mkfs, and boom: I had two new RAID volumes, a 333-gigabyte SSD array and an 836-gigabyte SAS array.

I also wanted to make sure the storage arrays were available at boot time, so I could do things like automatically mount them and share them via NFS. So I installed the compiled driver module into my kernel: I copied the module into the kernel drivers directory for my compiled kernel, then added the module name, megaraid_sas, to the end of the /etc/modules file with tee. Finally, I ran depmod and rebooted, and after boot, it looked like everything came up perfectly. One thing to note is that a RAID card like this can take a minute or two to initialize, since it does a full boot process of its own, so boot times for the Pi will be a little longer if you want to wait for the storage card to come online first. Now it's finally time to unleash the beast.
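The exact StorCLI command isn't quoted in the transcript, but based on standard StorCLI syntax, the virtual-drive creation and boot-time module setup described above would look roughly like this. The controller index /c0, the caching flags, and the kernel module path are assumptions, not the literal commands from the video:

```shell
# Create a RAID 5 virtual drive named "sasr5" from drives 4-7 in enclosure 97
# on controller 0; wb/ra/cached enable write-back, read-ahead, and cached I/O
# (flags shown are standard StorCLI options, not necessarily the ones used here)
sudo storcli64 /c0 add vd type=raid5 name=sasr5 drives=97:4-7 wb ra cached strip=64

# Confirm Linux sees the new volume, then partition and format it
lsblk
sudo fdisk /dev/sda        # create one partition spanning the volume
sudo mkfs.ext4 /dev/sda1

# Install the out-of-tree driver so it loads automatically at boot
sudo cp megaraid_sas.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/megaraid/
echo megaraid_sas | sudo tee -a /etc/modules
sudo depmod -a
sudo reboot
```

Broadcom's StorCLI reference guide covers the full option set, including strip sizes and cache policies per drive type.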
Before I turn everything on again for some performance testing, I guess I should mention that the easiest way to tell if something's designed for enterprise use is how loud the fans are when you turn it on. 90 decibels! That's impressive. Don't run this thing right next to your head on your desk for eight hours, or you might get permanent hearing damage. I think I can hear some viewers from r/homelab shouting "What did you just say?" over the noise of their rackmount servers.

Anyways, for these tests I'm using four Kingston SA400 SSDs; two are 240 gigabytes and two are 120 gigabytes. Why? Because I'm cheap, and that's all the extra SSDs I have. I'm also using four used HP 300-gigabyte 10K SAS drives. And how do I know they're used? Well, just listen to the things when they're active. I don't think it's normal for a hard drive to make its entire drive tray physically move when it's doing heavy write activity, but it is normal for these guys.

So with this thing fully online and operational, I threw some tests at it using fio, which is a commonly used I/O testing utility for storage devices. The first test was 1-megabyte random reads, and that showed 399 megabytes per second on the SSD array and 114 megabytes per second on the SAS array. Then I did 1-megabyte random writes, and that gave me 300 megabytes per second on the SSDs and 98 megabytes per second on the SAS drives. These results show two things. First, even cheap SSDs are still faster than spinning SAS drives; no real surprise there. But second, the limit on the SSD speed is the Pi's own PCI Express bus. In testing a few different network cards, I was able to get between 3.2 and 3.3 gigabits of network bandwidth; with the storage controller, I'm able to get 3.35 gigabits of bandwidth, and that's actually better than the bandwidth I could pump through a 10-gig network adapter, which could only get up to 3.27 gigabits.

There are tons of other tests I could do, but my main intention this month is to see how a bunch of different options fare for network storage, so I next installed Samba and NFS to run some benchmarks. I was kind of amazed that both Samba and NFS got almost wire speed for reads, meaning the Pi was able to feed data to my Mac as fast as the gigabit interface could pump it through. If you remember from my SATA RAID video, the fastest I could get with NFS was around 106 megabytes per second, and that speed would fluctuate as packets got queued up while the Pi was busy sorting out software RAID. With the storage controller handling the RAID, the Pi stayed at a solid 117 megabytes per second continuously, as long as I was copying things across. Write speeds are a little lower, but not by much. Using the RAID storage controller freed up the Raspberry Pi so it can transfer data over the network at full speed, with no bottlenecks at all.

At this point in the video, I was going to plug in a PCIe switch so I could connect a 2.5-gigabit LAN card and the storage card at the same time, to see if I could blow past the 117-megabyte-per-second limit, but I ran into a problem. It seems both of my PCIe switches couldn't provide enough power for both cards using my 2-amp Molex power supply. So I pulled out my big 600-watt PSU and was hacking a connector together to make it easier to tell the PSU to turn on, since I don't have it attached to a real PC with a power button. Well, unfortunately, I released the PSU's magic smoke. The sad thing is, I can't even blame this one on Red Shirt Jeff. Anyways, because of that, I can't power up both the network and storage cards at the same time, so for now you'll have to wait to see how the Pi gets on as a 2.5-gigabit NAS. And maybe support me on Patreon or GitHub, or click that applause button on YouTube; if I can get the funds, I'll be able to replace this power supply for my next video.

So, to wrap this up: I finally got true hardware SAS RAID running on the Raspberry Pi. I can't say I was the first person to do it, though, because that honor belongs to Josh, who got the whole setup running on 32-bit Pi OS before me. Am I going to recommend you go buy this thousand-dollar storage controller for a homemade Pi-based NAS? Maybe not. But if you do want high-end storage on the Pi, there is a lower-end version you can get, the 9440-8i, which should still work, and it's less than 200 bucks used. You do have to make sure you get the right cables to work with your drives or storage enclosure, though. But even that might be overkill if you just want to build a cheap NAS with only SATA drives. I'll be covering a more inexpensive NAS setup this month, and I'll go into more depth on new storage standards on the Pi in upcoming videos, so make sure you subscribe. And if I can get 2.5-gigabit networking working with a new power supply, I'd love to see if there's a way to contain that, the storage controller, and the Compute Module 4 all inside a storage backplane like this one. Until next time, I'm Jeff Geerling.

[Outtakes] I have a link in there... I almost started reading the link. That's... nope. I said yes. I sure did. "She had a gad backup system." "I found that..." Oh, I don't know what I found. That's a long sentence. Wow. "So I installed the compiled kernel module into my driver..." I installed the... I installed the what? It is so hard to say. "Mebabytes." I hate that word so much. "Packets would get queued up while the Pi was buzzing..." While the Pi was buzzing! Hey, hey, remote control worked. There you go. Thank you.
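The fio tests mentioned above can be reproduced with invocations along these lines. The mount point, file size, and runtime are assumptions for illustration; the video doesn't show the exact job parameters used:

```shell
# 1 MiB random reads against a file on the array
# (/mnt/ssd is a hypothetical mount point for the SSD volume)
fio --name=randread --filename=/mnt/ssd/fio-test --size=4G \
    --rw=randread --bs=1M --ioengine=libaio --direct=1 \
    --runtime=60 --time_based

# 1 MiB random writes against the same file
fio --name=randwrite --filename=/mnt/ssd/fio-test --size=4G \
    --rw=randwrite --bs=1M --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```

The --direct=1 flag bypasses the Linux page cache so the numbers reflect the array (and the card's own DDR4 cache) rather than the Pi's RAM.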
Info
Channel: Jeff Geerling
Views: 184,848
Keywords: raspberry pi, raid, sas, sata, nvme, storage, enterprise, hardware, software, zfs, btrfs, broadcom, lsi, pcie, pci express, pci, hba, host bus adapter, backplane, u.3, u.2, SFF-TA-1005, SFF-TA-1001, small form factor, SFF-8639, mini sas, edsff, rack, uart, debug, driver, kernel, recompile, linux, pi os, ubuntu, sbc, bus, card, nas, nfs, samba, smb, performance, benchmark, storcli
Id: Zpfq8ZC2hyI
Length: 18min 39sec (1119 seconds)
Published: Fri Feb 26 2021