6-in-1: Build a 6-node Ceph cluster on this Mini ITX Motherboard

Video Statistics and Information

Reddit Comments

In addition to the video, there's a blog post if you'd rather read through my review and setup notes. I also have this GitHub repo with the Ansible automation I mention in the video.

This was the first time I tried running Ceph on a Pi cluster, and it worked pretty much out of the box, though I couldn't get NFS to work. I have an open issue to keep digging into that.

πŸ‘οΈŽ︎ 14 πŸ‘€οΈŽ︎ u/geerlingguy πŸ“…οΈŽ︎ Aug 17 2022 πŸ—«︎ replies

Wow, this is rad.

This category of products (multi-Pi platforms/cases/whatever) needs a whole lot more competition. Nobody has really managed to bring power and network to multiple Pis in this way before. It seems like such an obvious thing to build, but between this and the Turing Pis, I don't know of anyone else even bothering to shake up the "get a network switch and a powered USB hub" mentality that runs rampant in the Pi clustering world.

Looks like this one has some manufacturing issues. No backplane included, weird case button/LED behavior, extra pin on the USB header, inconsistent tolerances... It works, but it's not perfect.

I can't wait to see more manufacturers enter this space. It's nice that CM4s are a bit easier to find than actual Pi 4 Bs these days, too.

πŸ‘οΈŽ︎ 7 πŸ‘€οΈŽ︎ u/bmn001 πŸ“…οΈŽ︎ Aug 17 2022 πŸ—«︎ replies

I ordered a board after I saw your teaser last week. Should be here next Wednesday!

πŸ‘οΈŽ︎ 5 πŸ‘€οΈŽ︎ u/Odinnswolf πŸ“…οΈŽ︎ Aug 17 2022 πŸ—«︎ replies

Thanks for the awesome video again Jeff! Hope you’re feeling better dude!

πŸ‘οΈŽ︎ 2 πŸ‘€οΈŽ︎ u/Petelah πŸ“…οΈŽ︎ Aug 17 2022 πŸ—«︎ replies

Thank you for this. I just watched your YouTube video, and it's great. But it's unfortunately very frustrating given there has been no general availability of RPi 4s for so long.

πŸ‘οΈŽ︎ 2 πŸ‘€οΈŽ︎ u/flipper1935 πŸ“…οΈŽ︎ Aug 18 2022 πŸ—«︎ replies
Captions
This Mini ITX motherboard has six computers on it, and if I flip it over, there are six SSDs on the back. What time is it? It's clustering time.

This is the Super 6C, a Raspberry Pi cluster board like the Turing Pi 2 supercomputer I built last year. It fits in normal PC cases, but unlike the Turing Pi 2 it holds two more Pis, it fits inside a 1U enclosure, and you can buy it right now for 200 bucks. A lot of people test out things like Kubernetes or K3s on a little cluster like this, but today I'm going to explore something a little different: I'm going to turn this thing into a Ceph storage cluster so I can pool together all these NVMe drives, and we'll see if it can even play in the same league as a $12,000 storage appliance. Spoiler: it can't, but we'll get to that.

I bought my Super 6C from DWM Zone, but DeskPi also sells it on their website. It comes with a 100-watt power adapter, some standoffs so you can run the board without a case, and a set of screws to mount up to six M.2 SSDs on the bottom. There are spaces for up to six Raspberry Pi Compute Module 4s (that's this little guy, an entire computer smaller than a playing card), and the Super 6C has a built-in gigabit switch. Now, that doesn't take all the Pis, put them together, and make one giant Pi. You still have to manage the Pis yourself; it's just that this board makes it a lot simpler, since you don't need to buy six Pi 4s, six power adapters, six Ethernet cables, and a separate network switch. That's how I built my Raspberry Pi Dramble cluster back in the day, but after using this board, I don't think I'll ever go back. This board is truly a cluster in a box, and it's super thin. I'm going to install it in this tiny PC case, and I also designed an I/O shield. Actually, I had to design four versions before I got one that fit, because the tolerances for some of the ports are a little off.

While we're looking at the ports, I guess I'll talk about I/O. There's the power connector, which accepts 19 to 24 volts (I'll talk more about power consumption later). There are two gigabit Ethernet ports connected directly to a Realtek Ethernet switch chip, which connects through to each Pi at a full gigabit. Then there are two HDMI ports, a micro USB port, and two USB 2.0 ports, all connected straight through to CM4 number one, meaning you can manage this entire cluster standalone if you plug in a keyboard, mouse, and monitor. Finally, on the back there are six green LEDs that show you the activity for each Pi in the cluster. And if you like LEDs, that's not all you get: each Pi gets its own set of four LEDs, two for power and activity and two for Ethernet. Each Pi also gets its own microSD card slot on the back, which is useful if you have Lite compute modules, and each Pi gets its own micro USB port on the top side for flashing modules with eMMC storage.

The board has its own little PMU, or power management unit, and it comes with a power and reset button over here. You can also plug in front panel connections, but I noticed the power LED and any fans you have plugged in are always on, which means it can be hard to tell whether the cluster's running if it's installed inside a case. You can turn on the cluster by pressing the power button, and you can force a shutdown by holding the power button for five seconds. I asked DeskPi whether the board's firmware is open, or if they have any remote access features planned, but they said no. Also, the onboard Ethernet switch is not managed, so you can't set up VLANs or do any other advanced routing.
And no, just plugging two connections into a gigabit switch won't double the bandwidth of this board; I actually tested that. Both of those features are coming on the Turing Pi 2, so with all these kinds of boards there are trade-offs. Looking at the board space, I can understand why: trying to cram every feature possible for six computers onto a Mini ITX motherboard is pretty much impossible.

I plugged all the Pis in on the top, but I couldn't use the heatsinks I normally use, so for now I popped them off and mounted a big fan to the case. I actually started with a Noctua, but even with its quiet adapter cable it was a little loud, since this board doesn't have PWM. I swapped in an Arctic F12 Silent, and it was almost silent. Everything stayed cool, even without heatsinks.

On the bottom, I installed six of these: Kioxia XG6 NVMe SSDs. They're overkill for a Raspberry Pi, but I had a couple on hand already, and Kioxia reached out and sent four more. So if you see the "includes paid sponsorship" label on this video, that's why. They didn't pay me, but they did send four SSDs to fill up the rest of these slots.

After I put the drives in, I flashed 64-bit Raspberry Pi OS to six microSD cards and put one in each slot. When I flashed the OS, I made sure to set hostnames like deskpi1, deskpi2, and so on. That way, when it comes time to connect to them, I don't have to figure out their IP addresses.
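Those hostnames are what the Ansible setup later in the video keys on. As a minimal sketch (the deskpiN.local names, mDNS resolution, and the pi user are my assumptions here, not necessarily what the linked repo uses), an inventory for the six nodes might look like this:

```yaml
# inventory.yml -- minimal sketch; the .local hostnames and "pi" user are assumptions
all:
  children:
    deskpi_cluster:
      hosts:
        deskpi1.local:   # first CM4 slot; treated later as the Ceph bootstrap node
        deskpi2.local:
        deskpi3.local:
        deskpi4.local:
        deskpi5.local:
        deskpi6.local:
      vars:
        ansible_user: pi
```

With that saved as inventory.yml, a quick `ansible all -i inventory.yml -m ping` confirms every node is reachable before running any playbooks.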
I think if I had one major complaint about the hardware, it's how hard it is to access the microSD card slots. They're all at different angles, and since they're flat against the bottom, I can't even fit my fingers in to remove a card. A couple of times I even had to use a spudger to get a card out.

With everything put together, it was time to mount this thing in my case. I found a cheap Mini ITX case on Amazon, and I'll put a link to it below. I slipped in my 3D-printed I/O shield, put in the motherboard, and screwed it in. I plugged in the front panel connectors, but when I went to plug in the front panel USB 2.0 plug, I noticed the header on the motherboard had all of its pins. It's supposed to be a keyed connector with one pin missing, so I couldn't plug in the USB connector. I asked DeskPi about this too, and they said it was a problem with the first production run that has since been fixed.

So, on to the first boot. When I plugged it in, the front panel power LED lit up and the fan started spinning right away; it looks like those are just always on. When I pressed the power button, all the Pis lit up almost at the same time, and it was kind of fun and mesmerizing watching all the blinking activity LEDs on the back. It reminded me a little of WOPR from the movie WarGames. (WOPR spends all its time thinking about World War III.) I also measured power consumption and found that the board uses less than a watt powered down with just the fan running, about 17 watts with six Pis running, 24 watts max, and 11 watts if you shut down the Pis but don't power off the cluster with the power button. One thing I don't like about the design is that the Ethernet ports on the back don't have status lights; I had to check the other end of the cable on my switch to make sure it was actually connected. But everything booted up great, except one of the CM4s had a bootloader issue, which meant its activity LED would just light up green and it wouldn't boot. After a quick trip over to my Pi tray to reflash that Pi's firmware, I got it booting too.

A few quick notes about board power: it doesn't look like there's a way to shut off power to just one Pi at a time, so hot-swapping Raspberry Pis doesn't seem like a very safe thing to do on this board. The Turing Pi 2 has more advanced firmware, so you can control power to each slot. That's not a huge issue here, but it is something to be aware of.

And you know how I mentioned that this board doesn't just take six Raspberry Pis and slap them together into one big Raspberry Pi? The natural question, then, is how I'm going to manage six Raspberry Pis. You could log into each one with SSH, but in my case I'm going to use a tool that's perfect for managing a Pi cluster: Ansible. I have a whole book and even a YouTube series on Ansible, and I'll link to them below. To get started, all I need to do is create an inventory file to tell Ansible where to find the servers (that's this file here), and then a playbook to tell Ansible what to do with them. I won't get too deep into it here, but I've posted the entire playbook in a more detailed guide on my GitHub, so check the link below.

The first thing I wanted to do was make sure all the Pis were up to date, running the latest version of Raspberry Pi OS, so I wrote a playbook and ran it on the Pis. It updates them all and automatically reboots them if they need it.

Then I wanted to try out some software I've never used before: Ceph. Ceph is a distributed storage system. You might know RAID, where you can plug multiple hard drives into one computer and mash them together for redundancy or performance. Ceph is like that, but instead of multiple drives on one computer, you can pool drives from multiple computers into a storage array and let Ceph deal with redundancy and networking.

I followed the instructions from this blog post and used a tool called cephadm to set everything up. First I just tried installing cephadm, but apt said it couldn't find it. I actually had to turn on the unstable Debian repository, since cephadm is so new it's not in the repos that ship with Raspberry Pi OS yet. I did that, updated apt's cache, and installed cephadm without any problems. Then I ran the bootstrap command. To get the cluster set up I had to know the Pi's IP address, so I hopped over to another terminal and grabbed that. It took about five minutes, but at the end it spits out some information for accessing the Ceph dashboard in a browser.

Before doing that, I had to copy the Ceph public key from the main Pi to all the other Pis, but I didn't have that Pi set up to log into the others yet, so I decided to just do all of it in Ansible, since that would make the process easier. I wrote a task that saves the public Ceph key from the main server, and then another task that tells Ansible to add it to the authorized keys for the root user on all the other Pis. This way, Ceph will be able to manage the other Pis when I connect them.

It was a good thing I was using Ansible, because I realized after I logged into the dashboard that you have to have some other things installed before Ceph can work on all those Pis. It was saying it needed a container engine, and also something called lvcreate, which is part of the lvm2 package. So over in Ansible, I added a task that makes sure Podman and lvm2 are both installed on all the Pis.

All right, with all that done, I could add all the hosts. It took a few minutes for Ceph to get them all healthy, but after five or so minutes they all showed up, and Ceph could tell me how much raw storage was on each Pi. Looking over at the main dashboard, the cluster status was now OK, and it showed the total raw capacity available to Ceph: 3.3 TB. Not bad.
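The actual playbook is in the GitHub repo linked above. As a rough sketch of the steps described here (upgrade and reboot each Pi, make sure Podman and lvm2 are installed everywhere, and push the Ceph public key into root's authorized_keys so cephadm can reach every node), assuming deskpi1.local is the bootstrap node and the key sits at cephadm's default /etc/ceph/ceph.pub, it could look something like this:

```yaml
# cluster-prep.yml -- a sketch under assumed hostnames, not the exact playbook from the video
---
- name: Update all Pis and reboot if required
  hosts: deskpi_cluster
  become: true
  tasks:
    - name: Upgrade all apt packages
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if the upgrade asked for it
      ansible.builtin.reboot:
      when: reboot_required.stat.exists

- name: Install Ceph prerequisites and distribute the Ceph public key
  hosts: deskpi_cluster
  become: true
  tasks:
    - name: Make sure Podman (container engine) and lvm2 (provides lvcreate) are installed
      ansible.builtin.apt:
        name:
          - podman
          - lvm2
        state: present

    - name: Read the Ceph public key from the bootstrap node
      ansible.builtin.slurp:
        src: /etc/ceph/ceph.pub
      delegate_to: deskpi1.local
      run_once: true
      register: ceph_pub

    - name: Allow the bootstrap node to SSH to every node as root
      # requires the ansible.posix collection
      ansible.posix.authorized_key:
        user: root
        key: "{{ ceph_pub.content | b64decode }}"
```

Run it with `ansible-playbook -i inventory.yml cluster-prep.yml`. The cephadm install and bootstrap themselves (enabling the unstable Debian repo, `apt install cephadm`, then something like `cephadm bootstrap --mon-ip <first Pi's IP>`) were done by hand in the video.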
I wanted to get NFS working so I could test everything from my Mac over the network, so I tried that, but I kept getting an error. I even tried installing some missing packages, but it still wouldn't create the NFS service. So instead, I set up a local storage pool and ran benchmarks using Ceph's built-in benchmarking tool. Write speeds were around 70 megabytes per second, and read speeds were around 110 megabytes per second. That's better than I thought I'd get; in fact, that's about as much speed as I could get through a gigabit network, so I'm not really disappointed. Encryption or other fancy features will slow it down, but getting 110 megabytes per second on a Pi is pretty decent.

You might be tempted to think that plugging two Ethernet cables into the board will double the network bandwidth, but it just doesn't work that way. Since the switch on the board isn't managed, you can't configure anything like link aggregation, so at best the second port provides a redundant network path. I actually tested this: the total speed when running iperf between my Mac and the board, no matter how many Pis I connected to, always maxed out at one gigabit. To prove it's not just a weird networking issue on my Mac, I even ran a benchmark across two switches to two computers simultaneously, and total throughput was always under a gigabit. In fact, sometimes the network performance was a little worse, and I wonder if the switch chip might have been overheating a little. It only got up to about 50 degrees Celsius in my thermal testing, but maybe it doesn't like that much switching. Not sure.

So after all that, could this thing replace a $12,000 enterprise storage appliance like the Mars 400? In a word: no. There are two main reasons for that: networking, and the I/O limitations of the Raspberry Pi. The Mars 400 has less memory and fewer CPU cores, but it has faster internal Ethernet connections plus four 10-gigabit uplinks, so that thing can pump through gigabytes per second. The Pi can't touch that, especially since the Super 6C can only push one gigabit over the network.

So this board is an interesting one. The Turing Pi 2 is a very different product: it has managed Ethernet, so you can assign VLANs or set up link aggregation, while this board has a dumb switch, so having two ports on the back isn't actually all that useful. The Turing Pi 2 can work with other form factors like the Jetson Nano; the Super 6C only works with CM4-compatible boards. And the Turing Pi 2 has other neat tricks, like mini PCIe slots, which I used to build a remote 4G Kubernetes cluster that I ran out on my cousin's farm. Both of these boards have their virtues, but I think the biggest things going for this board are how thin it is and how it crams six CM4s onto one board. It can be very useful for experimentation, or even for some types of edge computing. Oh, and you can buy it right now; you don't have to wait for a Kickstarter.

Assuming Raspberry Pis ever become available again, I think this board would be ideal for things like learning Ceph, Kubernetes, or other clustering tools. It's certainly a lot less hassle than putting together separate Raspberry Pis, power supplies, Ethernet cables, and a switch. If you want to pick one up, check out the link in the description. I've also linked to all the other parts I used in my build, and all the guides and code used in this video. Until next time, I'm Jeff Geerling.
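For anyone who wants to reproduce the storage numbers from the video, Ceph's built-in benchmark is the rados bench tool. Below is a hedged sketch of driving a write and a sequential-read pass from the bootstrap node through the same Ansible setup; the testbench pool name, the 30-second duration, and wrapping the commands in `cephadm shell` are my own choices, not taken from the video:

```yaml
# ceph-bench.yml -- rough sketch of a write + sequential-read benchmark run
---
- name: Benchmark the Ceph cluster from the bootstrap node
  hosts: deskpi1.local
  become: true
  tasks:
    - name: Create a throwaway benchmark pool
      ansible.builtin.command: cephadm shell -- ceph osd pool create testbench

    - name: Run a 30-second write benchmark (keep the objects for the read test)
      ansible.builtin.command: cephadm shell -- rados bench -p testbench 30 write --no-cleanup
      register: write_bench

    - name: Run a 30-second sequential read benchmark
      ansible.builtin.command: cephadm shell -- rados bench -p testbench 30 seq
      register: read_bench

    - name: Show the benchmark output
      ansible.builtin.debug:
        msg: "{{ [write_bench.stdout, read_bench.stdout] }}"
```

Afterwards, `rados -p testbench cleanup` (again inside cephadm shell) removes the leftover benchmark objects.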
Info
Channel: Jeff Geerling
Views: 499,055
Keywords: raspberry pi, compute module, cm4, model b, cluster, supercluster, clusterin, homelab, devops, mini itx, motherboard, k3s, k8s, kubernetes, ceph, storage, nvme, ssd, pcie, pci express, deskpi, super6c, node, controller, ethernet, switch, led, activity, control, plane, performance, ansible, tutorial, how-to, setup, hard drive, compute, power, efficient, edge, resources, install, experiment, testing, platform, iot, thin, 1u, rackmount, case, itx
Id: ecdm3oA-QdQ
Length: 13min 3sec (783 seconds)
Published: Wed Aug 17 2022