Where Intel is in REAL Trouble... - AMD EPYC Server Upgrade

Video Statistics and Information

Reddit Comments

Awesome video contrasting the polar-opposites of potential between Windows & Linux. As always, Linux pushing the boundaries as a forefront mechanism of change.

👍︎ 8 👀︎ u/SilverSovereign 📅︎ Dec 26 2019 🗫︎ replies

SinusTechTips

👍︎ 5 👀︎ u/[deleted] 📅︎ Dec 26 2019 🗫︎ replies

6:17 is your heatsink on Windows

👍︎ 2 👀︎ u/[deleted] 📅︎ Dec 26 2019 🗫︎ replies

When puberty hits Linus hard, and then goes away

👍︎ 2 👀︎ u/thexavier666 📅︎ Dec 26 2019 🗫︎ replies

It's him, Linus TechTips, creator of Linux

👍︎ 2 👀︎ u/Y1ff 📅︎ Dec 27 2019 🗫︎ replies

Isn't this the same guy who made a video about why Windows "is better" than Linux?

EDIT: Yes it is

👍︎ 1 👀︎ u/Dragonaax 📅︎ Dec 26 2019 🗫︎ replies

Meh, classic is well put. Kernel Panics on his hypervisor thooo. That tech is shiny

👍︎ 1 👀︎ u/anthr76 📅︎ Dec 26 2019 🗫︎ replies
Captions
I'm not feeling great today, but we have a dire situation here at Linus Tech Tips: our Whonnock server, our main production editing storage server, is completely full. Fortunately, I have been planning an upgrade to it for the last little while that is going to be freakin' awesome. This is New Whonnock. It's gonna have more storage space (whoa, over 100 terabytes of NVMe storage), it's gonna be faster, it is going to have lower power consumption, it is going to have more processing cores. It is going to be EPYC. And perhaps the most epic thing about it is going to be how many PCI Express lanes it has.

You see, PCI Express gets used for all kinds of things: graphics cards, video capture devices, peripheral expansion. But the thing is, it's been fast enough for all those consumer uses for many years now, so what's driving the creation of faster and faster PCI Express is actually the server world, where it gets used for storage. That's right: that is a direct PCI Express x4 connection at the back of all 24 of these drives. And the truly unbelievable thing about this server is that, unlike our existing Whonnock server, which takes a handful of those PCI Express connections and splits them across 48 drives, this one has a dedicated connection for every single one. We are gonna be putting up some crazy numbers today, ladies and gentlemen, and it's gonna be brought to you by Seasonic. No marketing today, just happy holidays from Seasonic; go check them out at the link in the video description. Really, they paid for this spot. Nice guys.

While I'm cracking these open, let's talk about the drives that I'm using for the server, and why. Reason number one is that Whonnock is full, so I needed a capacity upgrade. We're going from 1.2 terabyte drives to 4 terabyte drives, because that's what a couple of years of NVMe development does for you. Reason number two is we didn't want to go back to SATA, because NVMe was such a great upgrade for us last time, in terms of not just performance but stability, thanks to the extremely low access latency. Reason number three, and really the main reason for these drives in particular, is that we got a great tip from Wendell over at Level1Techs that Facebook was apparently flipping a bunch of these on eBay because they're upgrading to Optane. So these were a really great deal: just a hair over three hundred and fifty dollars for Intel data-center-grade drives that apparently are still under warranty, on top of everything else. Now, they're not the fastest thing in the world by today's standards: 3.2 gigabytes per second reads, 1.8 gigabytes per second writes, maximum read IOPS of 645,000 and writes of 48,000. But remember, guys, it's almost not going to matter at all, because I'm going to have 24 of them in the same server, and it's just a NAS anyway.

Now let's meet our server. This is the Gigabyte R272-Z32, and we chose this particular model based on the glowing review that was given to it by our buddy Patrick Kennedy over at ServeTheHome. Heavy boy. This is a 2U server, so that's to say it is two rack units in height, and it was designed from the ground up for AMD's EPYC 7002 Rome platform. The advantage we get from that is that it actually has compatibility with PCI Express Gen 4; not all EPYC servers are PCIe Gen 4 ready. What Gen 4 compatibility means for us in practical terms is not necessarily that much today, but in the event that we wanted to upgrade, it means that we could effectively double the bandwidth to almost the whole system.
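As a rough sanity check on what 24 dedicated links add up to, here is the arithmetic, using the per-drive specs quoted above and commonly cited usable PCIe per-lane rates; the x4-per-drive figure and the lane rates are assumptions for illustration, not measurements from this machine:

    # Back-of-the-envelope aggregate bandwidth for a 24-drive NVMe array.
    # Per-drive figures are the rated specs quoted above; per-lane rates are
    # the commonly cited usable numbers (~0.985 GB/s Gen3, ~1.969 GB/s Gen4).
    DRIVES = 24
    LANES_PER_DRIVE = 4                    # assumed dedicated x4 link per U.2 bay

    drive_read = 3.2                       # GB/s rated sequential read
    drive_write = 1.8                      # GB/s rated sequential write
    gen3_lane, gen4_lane = 0.985, 1.969    # GB/s usable per lane

    print(f"Drive-limited reads:  {DRIVES * drive_read:.1f} GB/s")    # ~76.8
    print(f"Drive-limited writes: {DRIVES * drive_write:.1f} GB/s")   # ~43.2
    print(f"Gen3 link ceiling: {DRIVES * LANES_PER_DRIVE * gen3_lane:.1f} GB/s")
    print(f"Gen4 link ceiling: {DRIVES * LANES_PER_DRIVE * gen4_lane:.1f} GB/s")

Even at Gen3 the per-drive links are not the first bottleneck here; as the rest of the video shows, the CPU's I/O fabric and the software stack give out first.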
So let's take a quick tour. We've got dual 1200-watt power supplies here; those are redundant, in case one fails. We've got our CPU socket here, with support for up to 64 cores and 128 threads. We've got 16 memory slots that run in 8-channel mode: freakin' incredible bandwidth. And on the subject of bandwidth, most of the internal PCI Express lanes are taken up, of course, by the 24 U.2 bays at the front, so the way that they're fed is using the mezzanine card slots here to run four of them, and you can see that managed over here. SATA actually goes to these two bays in the back, so that's what you would typically boot off of.

Now, the CPU that we're using is pretty overkill, but AMD sent over a bunch of CPUs, and all the ones that kind of make more sense for this project, like the lower core count ones, I sort of have earmarked for other projects. So unfortunately I am locking away my one 32-core basically forever. That's okay though; we really need the uptime there. It really says a lot about the efficiency of these things that all we need is one of these basic passive heat pipe coolers thrown on here, one of these shrouds, and just the airflow from one of the chassis 80-millimeter fans to keep it cool. And remember, guys, it's validated for up to the 64-core one. This is cool. For my boot drive, I had intended to use the SATA bays at the back of the chassis, but then I realized I've actually got two M.2s that share their PCI Express lanes with this x8 slot over here. So I figured, if I have a couple of these old Optane M.2s lying around, what the hey, might as well throw them in RAID 1 and go full PCI Express NVMe.

Alright, let's turn it around and... oh. Oh, that's funny. Gigabyte sent over a demo unit, and it seems to be a little bit broken. That's okay. Oh, they've got these stupid cable management things that make it really hard. Then again, I mean, that also makes it hard for it to come out accidentally. Just... whoa, okay. Now, I've actually worked with this motherboard a little bit already with a Rome CPU, and it takes forever to POST, so I'm gonna fire it up while we check the compatibility of the sleds that our drives were pre-installed on and see if we're gonna have to swap all those out. I really, really hope I don't have to. It's up; that's good. Let's get into the BIOS and see if everything's detected correctly. Come on, baby... everything detected? Good. Wait, memory training error? What? The... what now? That is not all of our RAM; that is 448 gigs of RAM. Uh-oh. Okay, Brandon, I've got good news and bad news. The good news is, reseating those two memory modules: boom, 512 gigs of RAM, so we're ready to rock as far as that goes.
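For reference, the arithmetic behind that 512 gig figure, and why coming up a couple of modules short points at DIMMs that need reseating; the module size and speed here (32 GB DDR4-3200) are assumptions for illustration, not confirmed specs:

    # Expected memory for a fully populated 16-slot, 8-channel board.
    DIMM_SLOTS = 16
    DIMM_SIZE_GB = 32          # assumed module size; 16 x 32 GB = 512 GB total
    CHANNELS = 8
    MT_PER_S = 3200            # assumed DDR4-3200
    BYTES_PER_TRANSFER = 8     # 64-bit channel width

    expected_gb = DIMM_SLOTS * DIMM_SIZE_GB
    peak_bw = CHANNELS * MT_PER_S * BYTES_PER_TRANSFER / 1000   # GB/s

    print(f"Expected capacity: {expected_gb} GB")                # 512 GB
    print(f"Peak theoretical bandwidth: {peak_bw:.1f} GB/s")     # ~204.8 GB/s
    # A BIOS reading of 448 GB means two 32 GB modules failed memory training,
    # which is usually fixed by reseating them.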
The bad news is that, due to AMD's architecture for their EPYC processors, where there's no actual chipset with functionality like, you know, a RAID controller or anything like that, this has no support for hardware RAID, either on the M.2 drives or the SATA ones. If you want to run RAID on your SATA drives, you have to put in an add-in RAID card. And I asked Gigabyte, well, why didn't you guys just put a RAID chip on the motherboard? And they were like, well, it would have used up PCI Express lanes, which, as part of the design of this board, we were trying to reserve for all these NVMe drives. So I kind of went, okay, fair enough. Now, in Linux you can run your OS on software RAIDed drives, but Windows has no easy way to do that, so that's just something we're gonna have to consider as we build out this machine. I'm gonna shut it down for now, though, because the last thing that I really want to do is remount all of these drives onto the included sleds. Let's see if they happen to be intercompatible.

That would be so cool. That would be so cool. Oh yeah, they are! It is so tedious. It's so tedious. Well, that's not a good sign: only my two M.2 drives are showing up in here, which means that maybe the sleds don't go all the way back. Oh, that would blow. Alright, I put one drive in each one and lined them up, and you can see it doesn't go back far enough. I've got to swap them all. Yeah. And I've had people ask me why so many people work here. You thought I was gonna shuck all these drives by myself and put them in new mounts? Well, what one person can do slowly, four people can also do slowly, but less so. Pretty tedious. Now I just need to install a couple of roles and features, including File Server, and then, yes, we're gonna shut it down and put all the drives in.

Now, really, the most benefit for a solution like this is when you've got, like, a storage area network, and you're using banks and banks of these to act as the storage for a whack of compute servers that are connected over fiber optics at, you know, the other end of the data center, or whatever the case may be. But come on, guys, this is Linus Tech Tips, so we're using it as an 8K video editing NAS. I really hope these all get picked up. Worst-case scenario, though, we've got a couple of bad ones. And the keen-eyed among you might have noticed there are two drives not installed; these are either cold spares, or they're to account for if a couple of our drives are just DOA. I mean, that's the thing with used hardware: you get a good deal, but you might get a couple of bad ones. All right, that's a good sign: every one of them has its light lit. But we won't know if everything's picked up until we actually see them in here. Boom. Okay, they're all there.

Then... Storage Spaces takes a lot of your capacity by default. Oh, actually, that's not that bad; it's about 70 terabytes. Okay, let's try that. Oh, you've got to be kidding me. This stupid error... for, I think, five years or whatever it's had this problem, and they don't just allow you to choose your columns in the GUI; you have to do it via PowerShell. So... you can hear it ramping up now, boys. I want 25 gigs a second. Well, that result is horrible: worse across the board than old Whonnock. Clearly there's a configuration issue here. All right, so I did a simple mirrored space this time, and we're gonna try running that. We've got 44 terabytes here. Well, that's better when it comes to sequential reads, almost 11 gigs a second, but not even close to what we should be getting. Okay, that's it, we're trying a simple virtual disk, which is basically just striped. If this doesn't perform well, then I'm at a loss. Wow, that is complete garbage. It's even worse. How is that even possible? And CrystalDiskMark is not the issue: there's a Microsoft tool called DiskSpd that you can use via the command line to run pretty much any mixture of loads that you want, and it seems like there's just this hard cap at around 10 gigabytes a second, which, I mean, obviously is, like, good or whatever, but for this hardware it's terrible. So, we're gonna FreeNAS.

And it's been about four days. We did manage to push Windows performance a little bit further; the drives needed firmware updates, and they were doing some idle garbage collection and stuff like that, but we capped out at about 10 gigabytes a second, which, to be clear, is plenty for anything you're gonna access over the network. I mean, we would need a 100 gigabit per second network card in order to saturate that. But that's not enough for us; we know this thing should be capable of so much more.
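To put that roughly 10 gigabyte-per-second ceiling in network terms, the same comparison made above works out like this; the NIC speeds are just typical Ethernet line rates, for illustration:

    # How much of a ~10 GB/s local cap common NIC speeds could actually use.
    local_cap_gbits = 10.0 * 8          # 10 GB/s expressed in gigabits: 80 Gb/s

    for nic in (1, 10, 25, 40, 100):    # typical Ethernet line rates in Gb/s
        share = min(nic / local_cap_gbits, 1.0)
        print(f"{nic:>3} GbE can use at most {share:.0%} of the array's throughput")
    # Only 100 GbE (or several slower links bonded together) comes close to
    # saturating the local cap, which is the point made in the video.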
So we moved on to Linux. Jake loaded it up with Proxmox and built a pool with four vdevs, each with six drives in them, running in RAID-Z1. RAID-Z1 is kind of like RAID 5, but without the write hole issue, meaning that we should be giving up, let's see, four drives' worth of capacity total, but with the resiliency of being able to lose up to four drives, as long as we don't lose more than one in a single vdev. So you would have, let's see, six... so, four chunks like that of drives working together.

Now, that should have resulted in some rocking performance, except it didn't. It was actually complete dog crap. Like, what did we max out at, a hundred megabytes a second, two hundred megs a second? Terrible. So it turns out that as soon as ZFS was being loaded up, our machine was actually spitting out kernel panics. That's not a chicken emergency; it's, like, really bad. You liked it. Shut up, you liked it. That makes no sense: a single one of these drives, if I plug it into my computer, should be doing 15 to 20 times that performance. And we verified this: we checked in with our bud Patrick over at ServeTheHome, who's got one of these boxes, and he was getting great numbers with just 10 drives. So what could we do? Well, we called in the tech support for tech support. Wendell from Level1Techs got on the horn with Jake, chatted it up on Discord, and between the two of them they managed to figure out that it looks like there is some kind of compatibility issue between the particular Linux kernel that the latest version of Proxmox is running and ZFS on our particular system, and that's what's causing these problems. So we said, you know what, okay, we're just gonna run a benchmark where we load up each of the drives individually and slam them all at the same time. Jake, hit it. He wandered away; he'll be back.

Ah, okay, you hear that thing ramping up? Oh my goodness. Oh my goodness. What's going on here? Is that over a gigabyte per second per drive? 28 gigabytes a second! At those kinds of speeds we're at the limits of AMD's Infinity Fabric, and in fact, to get to those speeds we actually had to overclock it a little bit. Thank you, Wendell, for the tip there. We pushed it too far, actually; it's spitting out some memory errors, so we're gonna have to dial it back a little bit. But whatever, you guys want to see impressive numbers, right? Well, here they are. Now I'm running a second script that he created that writes, what was it, 15 gigs of data, so this is gonna write to the drives this time. These numbers are nuts. There you go: 23.7 gigabytes a second, right up against the theoretical limits of what can be done.

So that's it, guys. That is basically the pinnacle of modern technology: nearly 30 gigabytes a second of reads and just over 20 gigabytes a second of writes. Guys, to put that in perspective: your home network is probably gonna be gigabit, and this would be around 300 times that. You could read one full Blu-ray disc almost every second. Blu-ray, Blu-ray, Blu-ray. It's crazy stuff. Now we just need to figure out some minor details, like how we're actually gonna get an array loaded up on the thing, because this is just, I mean, this is just theoretical stuff right now.
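For reference, the raw pool math for the layout described above, before ZFS metadata and padding overhead; the drive size and geometry are taken from the video, everything else is illustrative:

    # Capacity and fault tolerance for 4 RAID-Z1 vdevs of 6 x 4 TB drives.
    DRIVE_TB = 4
    VDEVS = 4
    DRIVES_PER_VDEV = 6
    PARITY_PER_VDEV = 1        # RAID-Z1: one drive's worth of parity per vdev

    raw = VDEVS * DRIVES_PER_VDEV * DRIVE_TB          # 96 TB raw
    parity = VDEVS * PARITY_PER_VDEV * DRIVE_TB       # 16 TB given up to parity
    usable = raw - parity                             # ~80 TB before ZFS overhead

    print(f"Raw: {raw} TB, parity overhead: {parity} TB, usable: ~{usable} TB")
    print(f"Survives up to {VDEVS} failed drives, at most {PARITY_PER_VDEV} per vdev")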
And theoretically, I could have a good segue for once, but we all knew that wasn't gonna happen. Thanks to our sponsor FreshBooks for making this video possible today. FreshBooks is the cloud accounting solution that's built for how you want to work, and you can work not just how you want but anywhere you want, thanks to the FreshBooks mobile app. You can create professional-looking invoices on the go, snap pictures of your receipts so you don't lose them, stay on top of important conversations, and never miss an update, for example, being able to see when a client has viewed your invoice for the first time, so you can get your money. You know what I'm talking about. Yeah, you do. So visit freshbooks.com/techtips to get your free 30-day trial today; we're gonna have that linked in the video description.

So thanks again for watching, guys. If you're looking for something else to watch, server related, maybe check out the video where I lost all our data. You know, I was accused in the comments of that video of being a very bad actor. Brandon, out of 10, how much was I acting in that video? Negative 10. That happened. It was a real thing.
Info
Channel: Linus Tech Tips
Views: 2,116,857
Id: uY_jeaxVgIE
Length: 17min 6sec (1026 seconds)
Published: Wed Dec 25 2019