2x NVMe in RAID 0 - Double The Speed? - PCI-E 4.0 Tested

Video Statistics and Information

Reddit Comments

RAID1 should improve random performance since the request can be serviced from either drive.

That said, the real answer for more random performance is to go buy an Optane drive - 900P/905P put out absolutely sick random 4K numbers, if you can afford it. It's just like when SSDs were first introduced - you put the stuff that really needs to go fast on the Optane drive and everything else on a normal SSD.

Or in theory, tiered storage software like Primocache or FuzeDrive/StoreMI should be able to get you the best of both worlds automatically.

1 point · u/capn_hector · Sep 01 2019 · replies
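The comment's logic is easy to sketch: both RAID 1 mirrors hold identical data, so each read can be dispatched to whichever drive is free, while every write must land on both. A minimal Python illustration (purely hypothetical names; real controllers use smarter, queue-aware dispatch than round-robin):

import itertools

class FakeDrive:
    """Stand-in for one physical mirror; returns dummy data."""
    def __init__(self, name):
        self.name = name

    def read(self, offset, length):
        return (self.name, offset, length)

class Raid1Reader:
    """Toy RAID 1 read path: data is identical on every mirror, so any
    drive can serve any read. Round-robin is the simplest policy."""
    def __init__(self, drives):
        self.drives = drives
        self._next = itertools.cycle(drives)

    def read(self, offset, length):
        return next(self._next).read(offset, length)

raid1 = Raid1Reader([FakeDrive("A"), FakeDrive("B")])
print([raid1.read(i * 4096, 4096)[0] for i in range(4)])  # ['A', 'B', 'A', 'B']
# Two queued random reads can run in parallel, one per mirror, which is
# where the random-read gain comes from; writes gain nothing because
# they must hit BOTH drives.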
Captions
Hello and welcome to Tech Deals. To RAID or not to RAID, that is the question. Today we are testing two 1 TB PCI Express Gen 4 NVMe SSDs on the AMD X570 platform in two configurations: on the chipset lanes and on the CPU lanes. Yes, there is a difference; I will explain how the lanes differ in just a second, and then we will get to the benchmarks. I have already reviewed one of these drives, link in the description below. If you would like all the details of PCI Express Gen 4, how it compares to Gen 3, how a single drive performs, and those sorts of details, check out that video, because I'm not going to repeat it here; I'm trying to keep this short and sweet, and today we're only focusing on a two-drive configuration. The test bench configuration is also the same as the last video, so I won't repeat it all; instead I'll stand over here and put a bunch of text up on the screen for you to read, or go watch the original video.

A word about PCI Express lanes: chipset versus CPU, how many you have, and how they work. To explain, I'm going to stand over here and take these boxes off the table to give myself some room, because we're going to put some text up on the screen. The Ryzen 3000 series of CPUs, otherwise known as Zen 2 (it's worth noting this is different for different generations of chips; older Ryzen and Intel parts are all going to be different, and we are not talking about the APUs here, so the 3400G and 3200G don't count, only the 3600 and above), installed on an X570 motherboard, come with 24 PCI Express 4.0 lanes on the CPU and 16 PCI Express lanes on the chipset. For those of you playing the home game, that is a total of 40 PCI Express 4.0 lanes. That sounds really impressive, and in many ways it is, but there are a couple of caveats to those numbers. Number one, you can't actually use all 40 of them. Four are reserved for the link between the CPU and the chipset, so the CPU actually only has 20 that are usable: 16 go to the graphics card, or are split x8/x8 on a board like this if you want to divide them between two slots, and four go to the first M.2 slot (it is configurable by the motherboard manufacturer, but they virtually all go to the first M.2 slot). The last four go to the chipset and thus really aren't usable, since you have to have a chipset. Now, the 16 lanes on the chipset are usable, but there is only a four-lane link between the CPU and the chipset. If you install multiple high-speed devices, such as multiple PCI Express 4.0 NVMe drives, on chipset lanes, all their data has to be routed through just that four-lane link. So if you're moving a lot of data back and forth, installing two, three, or four drives on the chipset is not going to give you 16 lanes of bandwidth to the CPU. It can give the devices bandwidth directly to each other and allow you to connect multiple devices simultaneously, but not necessarily run them all at full speed at the same time.

There is, however, a solution. As I said before, the 16 primary lanes are normally used for your graphics card, in the x16 slot. However, X570 boards such as this one can split those 16 lanes between the first and second slots. You will notice I have a nice dual M.2 card here on the table, also provided by MSI. I wish they sold them; they don't. They bundle these with some of their high-end motherboards; this particular one came out of a previous-generation Godlike board.
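Before getting to that adapter, the lane arithmetic above is worth making concrete. A back-of-the-envelope Python sketch (my own numbers, not the video's; the per-lane rate is the standard PCIe 4.0 figure of 16 GT/s with 128b/130b encoding):

# Ryzen 3000 + X570 lane budget, as described above.
GT_PER_LANE = 16e9                              # PCIe 4.0: 16 GT/s per lane
ENCODING = 128 / 130                            # 128b/130b line encoding
BYTES_PER_LANE = GT_PER_LANE * ENCODING / 8     # ~1.97 GB/s per lane

cpu_lanes = 24          # total on the CPU
chipset_link = 4        # reserved for the CPU <-> chipset link
usable_cpu = cpu_lanes - chipset_link           # 16 for GPU + 4 for M.2

print(f"usable CPU lanes: {usable_cpu}")                         # 20
print(f"per lane: {BYTES_PER_LANE / 1e9:.2f} GB/s")              # ~1.97
print(f"chipset uplink (x4): {chipset_link * BYTES_PER_LANE / 1e9:.2f} GB/s")
# ~7.88 GB/s, theoretical, shared by EVERY device hanging off the
# chipset -- which is why two Gen 4 NVMe drives there can't both run
# flat out at the same time.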
This is a dual M.2 adapter that allows you to put two x4 devices in a single x8 slot. In the benchmarks today you're going to see chipset benchmarks and CPU benchmarks on these drives. Now, this adapter originally came out with PCI Express 3.0, but I checked with CrystalDiskInfo and verified that, yes, when both of these PCI Express 4.0 drives were installed on this adapter, they were running at PCI Express Gen 4 in that second slot. It is also worth noting that when you install this, it drags the graphics card down to eight lanes. That does not seriously impact performance; eight lanes is enough, and PCI Express is not currently a bottleneck for most graphics cards, so it's not a serious concern. Frankly, if bandwidth is that important to you, maybe we should be talking about the high-end desktop platforms. But putting that aside, do note that it does drop your graphics card to eight lanes.

You also have to have a motherboard that supports something called bifurcation. This adapter will not run two M.2 drives on just any board; you need a motherboard where you can go into the BIOS and tell the second PCI Express slot to split into two x4 slots, effectively creating two virtual slots out of one. This board will do that. In fact, this board will also bifurcate the first slot to x8/x8 or to two x4 slots, although if you used it that way you couldn't use the second slot, because all the lanes would be on the first, and that sort of configuration doesn't really make sense because you need a video card of some kind. But it does at least support that in the BIOS. It's worth noting that lower-end motherboards won't support this sort of configuration, but who's buying a $75 motherboard along with two of these drives to put them into RAID on an adapter card? Probably nobody. In fact, I'm willing to bet the vast majority of people doing this will install them in the M.2 slots on the board.

Now here's an interesting configuration thought: you could put one drive in the first M.2 slot, which goes directly to the CPU, and the second drive in the second slot, which goes to the chipset. Except you can't boot that configuration, so it could only be a data array. Then what's your boot drive? The third M.2 slot, going to the chipset? Now your boot drive is competing with your RAID config. Yes, that's a terrible idea. So for all of you thinking, "Well, there's the solution," no, that's horrible, don't do that. Your boot drive should be in the first M.2 slot, full stop, no questions whatsoever; you don't want it competing with all the other devices in your system that need bandwidth. So you're left with the options of putting two drives in the two chipset M.2 slots, putting them on a riser card, or not using two drives in RAID at all. We'll talk about whether or not it's worth it at the end of the video.

With that short and sweet explanation out of the way, let's get to the benchmarks. But a quick reminder: links in the video description below take you to Amazon and Newegg for all the drives talked about here today. If you use those links when you shop, at no extra cost to you, it supports the channel and helps it stay independent, bringing you honest and forthright reviews. Your support is greatly appreciated. On with the benchmarks.

The first chart is the sequential transfer test, queue depth of 32, thread count of 1. We are showing the original results from when I first tested these three drives, along with the new tests on the chipset and CPU lanes.
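As a rough way to predict what this chart should show, you can model the array's sequential ceiling as the sum of the member drives' speeds, clipped by whatever link the traffic has to cross. A hedged sketch (the 5.0 GB/s per-drive figure is an assumed round number for a Gen 4 drive, not a measurement from the video):

# RAID 0 sequential ceiling: capped by the slower of (sum of drives)
# and (the shared link on the path to the CPU).
PCIE4_X4_GBPS = 7.88      # theoretical PCIe 4.0 x4 ceiling, GB/s

def raid0_ceiling(per_drive_gbps, n_drives, link_cap_gbps):
    return min(n_drives * per_drive_gbps, link_cap_gbps)

per_drive = 5.0           # assumed Gen 4 drive spec, GB/s

# Two drives behind the chipset: everything funnels through one x4 uplink.
print(raid0_ceiling(per_drive, 2, PCIE4_X4_GBPS))       # 7.88
# Two drives on CPU lanes (x4 each via bifurcation): no shared choke point.
print(raid0_ceiling(per_drive, 2, 2 * PCIE4_X4_GBPS))   # 10.0
# Real results land below these ceilings (protocol overhead, software
# RAID), but the gap between the two configurations shows up clearly.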
The red bar is a single AORUS Gen 4 NVMe drive by itself. The orange bar next to it is two of them in RAID 0 on the chipset lanes, using the second and third M.2 slots on the motherboard. The yellow bar is the CPU lanes: both drives installed on that riser card I showed you, in the second PCI Express x16 slot (which is electrically x8), bifurcated to x4/x4 in the motherboard BIOS, so they run directly to the CPU. Notice the dramatic difference in performance between chipset lanes and CPU lanes when it comes to sequential transfer rate. The first thing some people might say is, "Hold on a second, how can two drives striped in RAID 0 possibly not be any faster in read speed on the CPU lanes, and actually be that much slower on the chipset lanes?" I understand, but I've tested this in the past with two Samsung 970 Evos and found that two drives together are not always faster than one; sometimes they're slower. Of course, every drive is going to be a bit different. We'll get to the randoms in a second. Notice the write speed is dramatically improved: a single drive is 4.5 gigabytes per second, two drives on the chipset are 6.3 gigabytes per second, and two drives on the CPU lanes are nearly double at 8.2 gigabytes per second. So the write speed shows the improvement, but the read speed really doesn't.

Moving on to 4K random read and write, queue depth of 32, thread count of 1, we see pretty much the same thing we did the last time we looked at these drives: the performance evens out across the board. Notice that a single drive is faster than the RAID 0 array in either configuration. There's overhead in doing software striping (these drives are striped within Windows), so for random performance it frankly doesn't make any difference, even at a queue depth of 32. Reversing the principle, lowering queue depth to 8 but increasing thread count to 8, also does nothing for these drives; they just don't care. Reading and writing, a single drive is a bit faster. Putting the two drives together gets you twice as much space on a single drive letter, and it does of course spread out the reads and writes, but it doesn't actually give you much in terms of performance. And finally, queue depth of 1, thread count of 1: the real-world, normal situation most of you are going to find yourselves in most of the time. Yes, if you're updating a lot of things or doing a lot at once, if you're a super heavy user, you'll occasionally go a bit beyond this. But regardless of which random test you look at, RAID doesn't really make any difference to performance.

I was asked this question multiple times during the original review of these Gigabyte AORUS PCI Express 4.0 drives, and I get asked it quite often: "Why can't I just RAID my drives and make them faster?" Because it doesn't make them faster. It makes the sequential number look pretty, sort of, kind of, depending on which test you look at, but it doesn't do anything for real performance. RAID 0 is really pointless for SSDs; buy a larger SSD instead. That Intel drive is a 2 TB drive and costs a whole lot less than two of any of these 1 TB drives. RAID 0 for SSDs is one of the most pointless things ever. There is almost certainly a corner case out there; there's going to be somebody in the comments below who says, "Yes, but I have this one special situation where it really helped me out." Great, wonderful, the two of you in the back can sit down.
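Why striping does nothing for small random I/O is visible in the address math itself. A short illustrative sketch (the 64 KB stripe size is an assumption; it is a common default for Windows striped volumes):

STRIPE = 64 * 1024    # assumed stripe size in bytes

def locate(offset, n_drives=2, stripe=STRIPE):
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe_no, within = divmod(offset, stripe)
    drive = stripe_no % n_drives
    return drive, (stripe_no // n_drives) * stripe + within

print(locate(0))           # (0, 0)
print(locate(64 * 1024))   # (1, 0): the next stripe lives on drive 1
print(locate(4096))        # (0, 4096): a 4 KB read at offset 4 KB stays
                           # on drive 0, same as offset 0
# A 4 KB random request fits entirely inside one stripe, so exactly one
# drive services it -- at queue depth 1 the second drive just sits idle.
# Only transfers spanning many stripes (big sequential reads and writes)
# keep both drives busy at once.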
For the 99.9% of the rest of you, RAID 0 doesn't do anything other than give you a drive twice as large, which simply makes managing your files a little bit easier. It doesn't help you with performance in the real world. It doesn't even help you with synthetic benchmarks other than sequential transfer rates, and even then not reliably: notice that the read speed on the chipset lanes was worse with two drives than it was with one. It is not a case of "I'll just put two drives together and suddenly get more performance." I get asked this all the time: "Why can't I just RAID them?" Because that's not how RAID works. RAID 0 puts the drives together and stripes the data to increase the sequential transfer rate, except, as you noticed, not when the drives are competing for resources on the chipset, so it's not really a solution on the consumer platform.

I actually do RAID SSDs. I have four 2 TB Intel 660p drives in my Threadripper machine; that is eight terabytes of space on an ASUS Hyper M.2 card. That is not for performance; they run fine all by themselves. It's to give me a single eight-terabyte scratch space for all the video editing I do for YouTube. The fact that they're RAIDed means nothing; it's just simpler for installation. If they offered an eight-terabyte NVMe M.2 drive, I would just get one, put it in, and not bother with the RAID; that would be much simpler. I do it because I want the extra space. But for the 99.9% of you watching this, buy the largest drive you can afford, which honestly, in my opinion, still remains the 2 TB 660p from Intel at $185. That is the screaming value. I have multiple of those, not only in my content creation machine: I have two different machines where that's the boot drive, and I also use one as the game drive for all of the benchmarks. The boot drive on my test bench is actually a Samsung 970 Evo 500 GB drive, but the data drive where all the games are installed is a 2 TB 660p. My previous test benches had 2 TB SATA SSDs, but the NVMe is nice; it makes game updating a little bit faster. It's not a huge difference, but it's very convenient because it installs on the motherboard, as opposed to needing a separate drive and a separate cable. Now that I've said that, some of you are going to say, "Wait a minute." When I did these benchmarks I took that drive off, since I didn't need the game drive for these installations and tests, but normally the game drive is in the second M.2 slot on this board.

One final thought: everything I've said today about the pointlessness of RAID 0 for SSDs applies to both NVMe and SATA drives. It doesn't get any better on SATA SSDs versus NVMe drives. It also doesn't get any better with hard drives: putting four hard drives together in RAID 0 gives you wonderful sequential transfer rates, but it really doesn't do anything for random performance; that's not what RAID does. And of course, RAID wasn't really meant for RAID 0 in the first place; it was meant for data redundancy, but that's a separate conversation. For those of you curious about the other RAID levels, that's not what this video is about; this video is about performance and testing. If there's interest, let me know down below, and maybe we'll cover other RAID levels, RAID 1, RAID 5, RAID 6, at some point in the future. In any case, thank you all so much for watching, and I will see all of you next time.
Info
Channel: Tech Deals
Views: 212,787
Keywords: NVMe, PCI-E 4.0, PCI-E Gen 4, PCI-E Gen 4 NVMe, SSD, NVMe SS, PCI-E 4.0 NVMe, Gigabyte AORUS Gen4, SSD Review, NVMe Review, Gen3 vs Gen4, PCI-E 3.0 vs 4.0, Tech Deals, RAID, RAID 0, RAID-0, NVMe RAID, RAID Worth it?, SSD RAID, RAID Gen 4
Id: Ffxkvf4KOt0
Length: 14min 42sec (882 seconds)
Published: Sat Aug 31 2019