Clustering: Trying to Destroy A Clustered Volume During a File Transfer

Video Statistics and Information

Reddit Comments

Setting up a replicated cluster of four Storinator XL60s using GlusterFS and ZFS. The purpose of the test is to show off the resiliency of GlusterFS and ZFS. After starting a file transfer from the cluster to a client, we try to bring the transfer down by unplugging network cables, unplugging drives, and shutting down servers.

👍︎︎ 32 👤︎︎ u/cmcgean45 📅︎︎ Nov 10 2016 🗫︎ replies
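For readers who want to try something similar, here is a minimal sketch of how a four-node replicated GlusterFS volume on ZFS-backed bricks is typically built. The hostnames (gluster1 through gluster4), pool name (tank), volume name (gvol), and disk layout are all hypothetical; the video does not show the exact commands.

    # On each of the four nodes: create a ZFS pool and a dataset to serve as the Gluster brick
    zpool create tank raidz2 sda sdb sdc sdd sde sdf   # placeholder disk names
    zfs create tank/brick

    # From one node: form the trusted pool, then create a distributed-replicated (2 x 2) volume
    gluster peer probe gluster2
    gluster peer probe gluster3
    gluster peer probe gluster4
    gluster volume create gvol replica 2 \
        gluster1:/tank/brick gluster2:/tank/brick \
        gluster3:/tank/brick gluster4:/tank/brick
    gluster volume start gvol

    # On the client: mount the volume (any node can be named as the mount target)
    mount -t glusterfs gluster1:/gvol /mnt/gvol

With replica 2 over four bricks, Gluster pairs the bricks into two replica sets and distributes files across them, which matches the "distributed replica" volume shown in the video.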

I would like to see the recovery process and how performance is affected.

👍︎︎ 19 👤︎︎ u/Nobleitguy 📅︎︎ Nov 10 2016 🗫︎ replies

You should put transferring Linux ISOs in the title, as it's thoroughly appropriate.

Also, is there a specific RAID setup on the drives? Like Z2 striped across 2 volumes so it's like RAID 60?

👍︎︎ 8 👤︎︎ u/SarcasticOptimist 📅︎︎ Nov 10 2016 🗫︎ replies
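The video doesn't spell out the per-node RAID layout, but the RAID 60-style arrangement this commenter describes maps naturally onto ZFS: one pool striped across two RAID-Z2 vdevs. A sketch, again with placeholder disk names:

    # One pool, two RAID-Z2 vdevs; ZFS stripes writes across the vdevs,
    # so the pool behaves much like RAID 60 (striped RAID 6 groups)
    zpool create tank \
        raidz2 sda sdb sdc sdd sde sdf \
        raidz2 sdg sdh sdi sdj sdk sdl
    zpool status tank   # shows both vdevs and per-disk health

Each RAID-Z2 vdev tolerates two disk failures, and even if a node's pool were lost outright, the Gluster replica on its partner node would keep the volume available.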

This is really cool, I love seeing videos like this. However, I wouldn't mind seeing them go into more detail; it seemed a bit too straight to the point.

👍︎︎ 5 👤︎︎ u/i_pk_pjers_i 📅︎︎ Nov 11 2016 🗫︎ replies

It's awesome, it's just a damn shame that GbE is as fast as it goes. We've had it for 10 years. It's damned slow.

👍︎︎ 3 👤︎︎ u/jrwren 📅︎︎ Nov 10 2016 🗫︎ replies

Would this be the type of system used in massive enterprise / cloud computing?

👍︎︎ 3 👤︎︎ u/MySweetUsername 📅︎︎ Nov 11 2016 🗫︎ replies

Great demonstration!

👍︎︎ 2 👤︎︎ u/kabanossi 📅︎︎ Nov 11 2016 🗫︎ replies

What is that mini dashboard?

It says Gluster Cluster 9000, but it seems to tie both ZFS and GlusterFS together? I'd be interested in setting something like that up to compare it to Ceph.

👍︎︎ 1 👤︎︎ u/B4r4n 📅︎︎ Nov 11 2016 🗫︎ replies
Captions
Hey guys, Brett Kelly here from 45 Drives. Today we're in the lab, and I wanted to show you something we've been working on lately: clustering of our Storinator storage pods. Here I have four Storinator XL60s all set up in a replicated cluster that I built using GlusterFS and ZFS. What I want to show you today is how resilient the combination of GlusterFS and ZFS really is. I'm going to start a file transfer from my cluster to a client, and then start simulating failures of various components to try to bring the volume down: I'll pull out Ethernet cords to simulate NIC failure, I'll pull out some drives to simulate drive failure, and I'll even fail whole nodes by turning a couple of them off, and we'll see if we can bring the volume down.

A quick bit of background on my network setup: all four Storinators are tied into a 10-gigabit switch so they can talk to each other at 10Gb speeds, and that switch is tied into our existing lab network at 1Gb, so my client will only see speeds of 111 to 120 megabytes per second.

So let's get started. We'll drag this over here. This is my mounted cluster; as you can see it's 705 terabytes, a distributed replica. Now I'm going to go around behind and start pulling Ethernet cords out. Got one here, got another one, and another one. Let's see: yep, still going, so the NIC failover works. What's next? Let's just turn one of them off. I'll remote into number two and say shutdown now. You can see it closed, she's stopped, and yeah, she's off. Here you can see that Gluster 2 is down, but our transfer is still chugging along, no problem. That's pretty awesome.

What's next? Turn off another one. Due to my configuration I can do this as well, so I'll shut down the fourth one too. There it goes... and it's off now too, and our transfer is still chugging along at full line speed. Pretty awesome.

So what's next? Let's fail a drive. Come over to this guy, pop the lid off, and pull this one out; softly pull this one out; pull this one out. That should be good; three should do it. Let's see if we're still up and running. The file transfer is still going, that's good, and you can see Gluster 1 is degraded. So we have failed two of four servers, we have failed disks in one of the remaining servers, and our transfer is still going.

So let's fail some more. Out of laziness I was only going to pull one this time, but why not, I'll do two. And what have we got here? Still chugging along. Let's see if it registered: there it goes, degraded. So I have failed disks in two of the pods and straight-up turned off the other two, yet the volume is still available and still chugging along at full speed. GlusterFS and ZFS: pretty awesome. That's pretty much it.
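The "Gluster Cluster 9000" dashboard in the video looks custom, but the down and degraded states it reports can be read from the standard CLIs. A rough sketch, reusing the hypothetical pool and volume names from above:

    # Which peers are up? A powered-off node shows as Disconnected
    gluster peer status

    # Which bricks are online, and what still needs healing once a node returns?
    gluster volume status gvol
    gluster volume heal gvol info

    # Per-node disk health: pulled drives show as UNAVAIL and the pool
    # reports DEGRADED while it continues to serve I/O
    zpool status -x tank

This is also roughly what the recovery process looks like: when the downed nodes come back, Gluster's self-heal copies over the writes they missed, and once the pulled drives are replaced, ZFS resilvers the pool.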
Info
Channel: 45Drives
Views: 74,753
Rating: 4.8955226 out of 5
Keywords: storage, clustering, glusterfs, centos, zfs, high speed transfer, NAS, 45 drives, storinator, big data
Id: A0wV4k58RIs
Length: 4min 47sec (287 seconds)
Published: Wed Nov 09 2016