INSANE PetaByte Homelab! (TrueNAS Scale ZFS + 10Gb Networking + 40Gb SMB Fail)

Video Statistics and Information

Captions
Today we're going to go over our petabyte-plus setup on TrueNAS Scale. We're doing this on the amazing JBOD I just reviewed yesterday, the NetApp DE6600, and these are going for great prices on eBay right now, so if you're in the market for a large JBOD that is something you might want to check out. We're going to be using our 20 terabyte SATA disks for this, and we've got 60 of them here. We'll go over the setup and installation of the software in TrueNAS, and we'll look at what kind of performance you can expect over things like a Samba share at really large sizes with ZFS arrays.

So I'm going to go ahead and start getting these opened up. We've got our 20 terabyte SATA disks, and I use these white label disks, which you can get from shop.digitalspaceport.com at great prices, so I want to take a chance to talk about them really quick. These 20 terabyte drives are just starting to become available as white label; before, 18 terabytes was pretty much the biggest size, and now 20 terabyte white label drives are starting to show up. White label means off-spec: these drives have been returned to the manufacturer and reworked, and the manufacturer then sells them as either green label or white label drives. I like to buy the white label ones because they're the cheapest, which gives you the best cost per terabyte.

I'm going to show you how you can get all of the trays open at the same time on the DE6600. The first thing you want to do when you pop them out is not to go all the way; just leave each one about three quarters of the way, then you can get the rest of them to pop open, and then come back down and open them the rest of the way. As I've mentioned before, you do not need caddies on these; with these 20 terabyte SATA drives you can just slide them straight in, no problem, and that's what we're going to do here. Now, as a minimum on this JBOD you do need to populate all of the front slots, which is 20 disks minimum, to fire it up. To get the disks seated in place, simply give them a push back; like I mentioned, there is very firm rubber gripping in here, so caddies are not needed, and that's a big cost saver too, because the caddies are actually pretty expensive. So we've got our first tray done; let's get the rest of them in. All right, let's bring up JBOD number five. Number five alive! Number five is alive.

Our secret weapon for seeing exactly how fast we'll be able to write over the network share is this Mellanox ConnectX-3 QSFP card and the Mellanox SX6036 40 gigabit switch with four terabits per second of switching capacity; this thing is insane. I'm going to plug one end of this incredibly long cable in, the cable I eventually have to run in the attic, and my gosh, that's going to be a pain. For now I'm just going to pull it through the door over there and have it inside temporarily, just to see what kind of speeds we can get writing to the array. Pull the cable this direction, feed it through this cat door; I'm fired for being a network engineer. Boom, that quick. All right, so we'll go ahead and plug this end of the connector up. It's surprisingly close; it's just that much extra that we've got to make it the rest of the way, so there's a little bit of extra pulling to do, and that's going to be interesting. We need about three feet, just barely. Wow, that is very, very close, but it will be enough.
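As an aside on expectations before anything gets copied: a 40 gigabit link caps out well below what the marketing number suggests once you convert it to the gigabytes per second a file copy dialog reports. A rough sketch of that ceiling in Python (the 5 percent protocol overhead is just an assumed round figure for illustration):

    # Theoretical ceiling of a single 40GbE link, in file-copy terms.
    link_gbit_per_s = 40.0                      # QSFP+ 40GbE line rate
    line_rate_gb_per_s = link_gbit_per_s / 8    # bits -> bytes: 5.0 GB/s

    # TCP/IP + SMB framing eats a few percent; ~5% is an assumed figure.
    realistic_gb_per_s = line_rate_gb_per_s * 0.95

    print(f"Line rate:         {line_rate_gb_per_s:.1f} GB/s")
    print(f"Realistic ceiling: ~{realistic_gb_per_s:.1f} GB/s")   # ~4.8 GB/s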
Okay, I can shut this down now. This looks like a good slot for it right here; this is the new AMD EPYC build, and it has no shortage of PCIe lanes, so we're not going to run into the same problems I've run into in the past when I've tried to connect a bunch of really hungry PCIe devices that need a lot of lanes. Very generous case to work in, this Fractal Design Meshify 2 XL; the build video on this case is coming very soon. All right, now we've got our QSFP cable slotted in. Wow, no room to spare; very cool. We're also going to be creating some arrays of these really nice Toshiba PX05S drives. These are great little SSDs, and they're actually SAS, which is surprising, 12 gigabit per second SAS.

Let's take a quick look at some of the hardware we've got installed in our TrueNAS instance. We have 384 gigabytes of RAM, and that's only at 1333 speed; those are 16 gigabyte DIMMs loaded into every single DIMM slot, 24 of them total. We also have our processors, which are E5-2667 v2s. The nice thing about these is that they are very cheap right now, and they're also very fast if you look at what they turbo up to, so they're a good fit for something like an NFS or SMB file share, where single threaded performance dictates a lot of the throughput the client machines will see.

Our instance is TrueNAS Scale, and if we look we can see I've already got quite a few disks running here, more than a few storage pools, and quite a bit of data stored. But today we're going to set up some new storage pools with the SSDs and with the large array of 20 terabyte drives, so we can see what kind of performance we get on SMB writes. We're going to create our first pool, a baseline pool with the three Toshiba 800 gigabyte SSDs; this should give us a really good estimate of what the top end of performance looks like. We're just going to call this ssd3 and add those over here, and instead of a RAID-Z we're going to do this as a stripe; of course you don't want to do stripes in real life for most types of data out there. Click confirm and create the pool.

Next we're going to create our second pool on the 60 bay JBOD with the 20 terabyte drives. We're actually just going to do a stripe for this first pass too; you should never set up anything like this in production, this is just for us to see what the maximum capability looks like. Let me uncheck those two and toss that over there. All right, so we've got our 1.01 pebibytes of space, and we're going to set that as a stripe; again, such a bad idea, and you've got to click confirm several times. We're going to call this one "large boy" and click create. Now if we go down to the bottom we can see our new pools: the ssd3 pool with 2.11 tebibytes of effective space, and the large boy pool with 1.06 pebibytes of space.
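A note on the units in those pool sizes: drive vendors quote decimal terabytes and petabytes, while TrueNAS reports binary tebibytes and pebibytes, which is why sixty 20 TB drives show up as roughly 1.06 PiB. A quick Python sketch of that arithmetic (the usable figures come in slightly lower again because ZFS keeps some space back for metadata and reservations):

    # Decimal (vendor) capacity vs binary (TrueNAS-reported) capacity.
    TIB = 2 ** 40
    PIB = 2 ** 50

    ssd_raw_bytes = 3 * 800e9       # three 800 GB SSDs, striped
    jbod_raw_bytes = 60 * 20e12     # sixty 20 TB drives, striped

    print(f"ssd3 raw:      {ssd_raw_bytes / TIB:.2f} TiB")   # ~2.18 TiB (2.11 TiB usable shown)
    print(f"large boy raw: {jbod_raw_bytes / PIB:.2f} PiB")   # ~1.07 PiB (1.06 PiB shown)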
The next thing we're going to do is create datasets on these. We'll take large boy here and edit it really quick; another bad idea, but good for performance, is turning off compression and disabling sync, and we'll also check the record size setting and leave that at one megabyte, then save. Now we add a dataset; you can leave all of these set to inherit, set the share type to SMB, and hit save. Then we come down to our large boy dataset, edit the permissions, use a preset for the ACL, choose the open NFSv4 preset, continue, and save the access control list. Now we go back to our shares, add a Windows SMB share off of large boy, leave the default share parameters, click save, and restart the service. Next I'll do pretty much the same for the SSD array.

Since it's always fun to look at how much space you've got, we're going to map the network drives for these, and we can see our 1.06 "petabytes" of free space here. That label is actually a misnomer: the value is 1.06 pebibytes, but Windows labels it PB, so keep that in mind. What TrueNAS reports as 1.06 PiB is the actual space; the unit labeling inside Windows is wrong, for mysterious reasons I can't answer for you. Let's also map our network drive for the SSDs.

Okay, great. Now that we've got these, let's take a look at moving some files, because this is what I think people always love to see, and what you usually don't see in a lot of high performance networking videos is anyone actually demonstrating the speed you get when moving files. I've got some large files here, so this should be pretty good. Let's start by moving them to ssd3 so we can get a good idea of what the actual performance of this array looks like. And that is great; that is almost two gigabytes per second at the peak, and it looks like it's coming back down to around 1.5 to 1.6. Now, this is over a 40 gigabit connection, so let's take a quick look at the 40 gigabit network we're moving this over; I think it's down here, and you can see that our throughput is right around 13 to 14 gigabits per second at the moment. Could we change a couple of settings and possibly get this higher? Yes. We're probably going to adjust the RSS scaling and the number of processors assigned to queues to see if we can get this to go a little faster, because I think we should be getting into the 18 to possibly even 20 gigabit range on this 40 gigabit connection.

Another good thing to check whenever you're doing high performance transfers is whether you're maxing out any of your threads, and you can see our highest utilization right now is just at 50 percent, so we're not maxing out anything by any stretch of the imagination; that's a good sign. You can also see that the writes are landing in the ZFS cache, so that should be speeding up the data transfer quite significantly. And if we look at the read speed off of our source SSD here, you can see it's reading at about 1.92 gigabytes per second, so that could actually be a limiter right there; that might in fact be what we were seeing. This is 356 gigabytes that we're moving across.
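To tie the network graph numbers to the file copy numbers, the conversion is just a divide by eight: gigabits per second on the wire versus gigabytes per second in Explorer. A small Python sketch using the figures mentioned above:

    # Gigabits per second on the link -> gigabytes per second to the share.
    def gbit_to_gbyte(gbit_per_s: float) -> float:
        return gbit_per_s / 8

    for gbit in (13, 14, 18, 20, 40):
        print(f"{gbit:>2} Gbit/s on the wire -> {gbit_to_gbyte(gbit):.2f} GB/s of payload")

    # 13-14 Gbit/s works out to roughly 1.6-1.75 GB/s, in line with the copy speeds seen;
    # reaching 18-20 Gbit/s would mean roughly 2.25-2.5 GB/s before protocol overhead.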
Now let's move the files over to the large boy storage and see what kind of performance we get there, because I'm not convinced that I'm not being limited by the read speed of the SSD we're copying from. So we're going to grab ImDisk and create a RAM disk; we've got more than enough RAM on this system, as you can see, 256 gigabytes at 3200, so it's quite fast. We'll store the files in there and then send them off from the RAM disk. Now that we have our RAM disk, let's copy those files up to it; we're going to put two of them up there, which should give us a very good idea of the speed and whether we're being limited by the read speed of the drive the files were sitting on. All of this is being pushed by the AMD EPYC 7302 16 core processor we've set up on this system; that build video is coming very soon, and I've got a better chip that we're going to be throwing into it shortly, and some better GPUs as well, so that's pretty exciting stuff. Make sure you hit like and subscribe and ring the bell down below so you get notified when those videos go live.

My theory is that we were seeing those weird, vaguely similar performance numbers for both of the plot copies because we were essentially being limited by the read speed of the hard drive they were coming from, rather than by the network. So I'm going to delete the ones we copied over there to make way for these two new ones, and let's see what kind of speed we get now. It looks like we're getting a bit more consistent speed, but it is dropping down there, so maybe I was wrong. I think the real test will be when we copy these two over to the SSD array; then we'll really see what kind of performance we should be expecting. And we're seeing clearly nothing drastic in the way of improvement, so I think it's safe to say it was not the read speed of the source device or the RAM disk; it's the wire speed of the 40 gigabit connection, and we probably need to do some tinkering to get that speed up so we can hit at least two gigabytes per second.

So let's run a little CrystalDiskMark benching and see if we can find out what's going on. We'll start by benching the RAM disk, just to make sure it isn't slowing us down, and we can see that we have really good sequential read and write, so once the RAM is allocated it is incredibly fast, as you'd expect from a RAM disk. Now let's look at how fast we can write a plot file up to the share, and I think we can see that our single thread performance is what's limiting us on file copies.

We now have the JBOD set up in a much more common-sense configuration, a single RAID-Z2 vdev 60 wide, with 18.19 tebibytes per drive yielding us 969.79 tebibytes of effective space on the pool.
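For a rough idea of where that 969.79 figure comes from: a 60 wide RAID-Z2 vdev gives up two drives' worth of capacity to parity, and ZFS then holds back more for allocation padding, metadata, and its slop-space reservation, so the usable number lands below the simple parity math. A back-of-the-envelope estimate in Python:

    # Rough usable-capacity estimate for a single 60-wide RAID-Z2 vdev.
    drives = 60
    per_drive_tib = 18.19                                  # TiB per 20 TB drive, as reported

    raw_tib = drives * per_drive_tib                       # ~1091 TiB
    after_parity_tib = raw_tib * (drives - 2) / drives     # RAID-Z2: ~2 drives lost to parity

    print(f"Raw capacity: {raw_tib:.0f} TiB")
    print(f"After parity: {after_parity_tib:.0f} TiB")     # ~1055 TiB
    # TrueNAS reports ~970 TiB usable; the gap is padding, metadata, and slop space.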
Let's check out what the performance looks like now, and you can see it's actually pretty decent; we're not taking a huge hit for having the RAID-Z level set a little higher here. Again, I think what we're seeing comes down to nothing aside from the single-threadedness of the Windows copy. I bet if we take a look at Task Manager right now we'll see at least one thread pegged at a hundred percent, and there it is; it'll switch over to another core before much longer. I think I can squeeze a little bit more out of this; we're actually running at 2.77 gigahertz here, and we should be able to turbo that up by switching off things like the logical processors and disabling some of the virtualization tech, so we're going to try that out and see if we can get a little faster.

But that's all going to be in the next video, where we do some optimization around 40 gigabit transfers. I was certainly able to see much better speeds in the past when I was using higher clocked Ryzens for this kind of file transfer to these same E5-2667 v2 processors, so we should theoretically be able to get well into the 20s of gigabits per second, which should give us 2 to 2.3 gigabytes per second of write speed, and I'm not going to be satisfied until we get there. That's going to have to wait for another day, though.

I hope you guys have enjoyed this. Hit like and subscribe, be sure to ring the bell down below, and you can always check out shop.digitalspaceport.com for great deals on hard drives, and digitalspaceport.com/hardware for hardware reviews, including the DE6600. This is part two of the series, so do check out the DE6600 video linked here; it has an amazing amount of information in it, and if you look at the per-tray cost from this seller right now, these are probably the best deal in JBODs out there. Check the links in the description below for that. All right everybody, have a great rest of your day, and we'll check you guys out next time.
Info
Channel: Digital Spaceport
Views: 42,061
Keywords: Petabyte, TrueNAS, TrueNAS Scale, SMB, 10Gb networking, 40Gb Networking, SMB share, SMB truenas, TrueNAS SMB, 1PB, PB, PiB, Project, Homelab, network share, datacenter, data center, home lab, 20TB Hard Drives, ZFS Raid, RaidZ-1, Raid-z Stripe, Raid 0 ZFS, Raid-z1, raid-z2, raid-z3, ZFS, 60 bay, jbod, jbod storage, homelab tour, homelab setup, homelab storage, storage array, petabyte project, petabyte storage, home network, home data center
Id: HOZWOl0DSQY
Length: 22min 3sec (1323 seconds)
Published: Mon Apr 10 2023