Dell EMC PowerEdge C6525 AMD EPYC Powered Kilo-Thread Server Review

Video Statistics and Information

Captions
Hey guys, this is Patrick from STH, and today we're going to talk about something that is super exciting: the Dell PowerEdge C6525. The C6525 is not just another 2U Dell PowerEdge server like the ones we've been seeing for years. Instead, it's a 2U four-node system, and what makes it even more interesting is that it isn't Xeon based. This is an AMD EPYC based server, specifically designed for the AMD EPYC 7002 series, codenamed Rome. That means you can put up to 64 cores in each socket, with two sockets per node, so you get 128 cores per node. The C6525 takes up to four of those nodes, which means up to 512 AMD EPYC cores in the chassis, and since these chips run SMT2, that works out to up to 1,024 threads per 2U. That makes this a one kilo-thread system.

In this video we're going to walk through the hardware of the Dell PowerEdge C6525, compare it to some of the other 2U four-node options we've reviewed recently on both the Xeon and the AMD EPYC side, take a look at the management, and then cover a few features that are comparatively interesting and even a little quirky about this machine. That's enough of an intro, so let's get to it.
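Before the chassis walkthrough, here is a minimal sketch of the arithmetic behind the kilo-thread figure from the intro; the per-socket numbers are simply the EPYC 7002 maximums cited above, and the script is illustrative only.

```python
# Density math for a Dell PowerEdge C6525 chassis (illustrative only).
# Figures are the EPYC 7002 "Rome" maximums cited in the review.
cores_per_socket = 64      # top-end Rome parts
sockets_per_node = 2
nodes_per_chassis = 4
threads_per_core = 2       # SMT2

cores_per_node = cores_per_socket * sockets_per_node          # 128
cores_per_chassis = cores_per_node * nodes_per_chassis        # 512
threads_per_chassis = cores_per_chassis * threads_per_core    # 1024

print(f"{cores_per_chassis} cores, {threads_per_chassis} threads per 2U")
```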
Let's start with the chassis. The Dell PowerEdge C6525 nodes go into the Dell PowerEdge C6400 chassis. This is a shared chassis with the C6420, which is an Intel Xeon platform, so the enclosure really launched first for Xeon, and now we have AMD EPYC nodes that go into that same shell. One thing worth knowing: I asked Dell whether you can mix and match AMD EPYC and Intel Xeon nodes. You can do that on some other systems in the market, but you cannot do it on the C6525; I believe there is chassis firmware that would need to be updated, so that is just something different about this server. We specifically had the 24x 2.5" bay chassis, but there is also a 12x 3.5" option if you want to use big hard drives. In these types of servers that generally means you get six bays per node, and with four nodes that gives you the 24-drive total.

Just behind the drive bays there is a set of four hot-swap fans. Think about that: up to a kilo-thread of performance with only four fan modules in the system, plus the power supplies. Those fan modules are a big deal, because in the world of 2U4N servers, where the fans are located has a real impact. When the fans sit in the center of the chassis like this, rather than on the nodes themselves, we tend to see better power efficiency, simply because there are fewer, larger fans. Some other designs in the market attach the fans to the nodes, using 1U fans on each individual node; that means more fans, and smaller fans are less efficient. Part of our standard 2U4N testing is to look at the power efficiency of these systems versus using four 1U servers on their own, and we will get to that a little later, but it turns out that the Dell PowerEdge C6525, like other systems with this layout, shows somewhat better power consumption than designs that put the cooling directly on the nodes. In case you are wondering why anybody would do it the other way, the answer is that many organizations prefer self-contained nodes: when they service a node they can pull it out with all of its fans attached and work on it at that level, whereas it is much harder to get to the midplane fans in a C6400 chassis than to fans mounted on the nodes themselves.

Continuing at the chassis level before we get to the nodes: in the middle of the chassis you can see the power distribution and all of that complexity. Something interesting here is that a lot of modern 2U4N systems have a BMC, often the industry-standard ASPEED AST2500, at the chassis level handling chassis-wide management functions. Dell, I think because of iDRAC and iDRAC 9, does not do that; there is no chassis-level management controller. That is a meaningful difference, because Dell thinks about managing these systems as a collection of individual nodes that you then manage as a set through OpenManage, whereas hyperscalers and other organizations that use more industry-standard management interfaces tend to put a BMC in the chassis because they think at the chassis level: they want chassis monitoring with the individual nodes tied in through it. Because we do a lot of these reviews we notice these differences, but you probably would not otherwise.

Moving to the rear of the chassis we have the four nodes, which we will talk about individually in a moment, but one feature worth calling out here is the power supplies. Dell offers different power supply options, 2.4 kW, 2 kW, and I believe a 1.6 kW option, so depending on the processors and how you configure the system, Dell has power supplies that help with efficiency by keeping the load in the sweet spot of the supply.

OK, let's talk about the nodes. You have four nodes, you access them through the rear of the chassis, and you can pull them out easily; Dell has a great mechanism for that. The connectors are not just a standard gold-finger PCB edge; they are high-density connectors, which help increase airflow through the system. That is a nice feature we have seen on some other systems, for example Cisco UCS, and it is the kind of higher-end touch you like to see.

Once we get past the high-density connectors that tie the node to the rest of the chassis, let's talk CPUs. The Dell PowerEdge C6525 is designed to accept a fairly wide range of processors: you can put in lower-power parts in the 155 W range, but you can also go up to 225 W processors such as the 48- and 64-core offerings. Our test system was equipped with AMD EPYC 7452 processors, which are solid 32-core parts; we reviewed those not too long ago, and we will talk a bit about performance in this chassis later in the video.
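As a rough, back-of-the-envelope illustration of why those multiple PSU capacities exist, here is a sketch of the CPU-only power budget at the TDP extremes mentioned above; the figures used are the TDPs and PSU options from the review, everything else is ignored, and this is not Dell sizing guidance.

```python
# Rough CPU-only power budget for a fully populated C6400/C6525 chassis.
# TDP figures come from the review (155 W low end, 225 W high end);
# memory, drives, fans, and PSU overhead are intentionally ignored here.
sockets = 2 * 4  # two sockets per node, four nodes

for tdp_watts in (155, 225):
    cpu_power = sockets * tdp_watts
    print(f"{tdp_watts} W parts: ~{cpu_power} W just for the CPUs")

# ~1240 W vs ~1800 W for CPUs alone, which hints at why matching the PSU
# (1.6 kW / 2 kW / 2.4 kW options) to the configuration keeps the
# supplies near their efficiency sweet spot.
```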
Because this is a 2U4N design, there are some sacrifices made to physically fit the form factor, and one of the big ones is memory: you only get eight DIMM slots per CPU. The AMD EPYC 7002 series Rome can actually run two DIMMs per channel, which would be 16 DIMMs per CPU or 32 per dual-socket server, and on some of Dell's larger platforms that is exactly what you get. On this platform you get eight DIMMs per socket, or 16 DIMMs per node. That actually works out well, because Rome has eight memory channels, so the PowerEdge C6525 is designed for one DIMM per channel: you still get the full memory bandwidth even though you do not get the full DIMM capacity. As a quick comparison, the PowerEdge C6420 nodes, the Intel Xeon version, also support only eight DIMMs per socket, but on the Xeon side you only get six-channel memory until Ice Lake arrives.
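To put a number on the "full memory bandwidth at one DIMM per channel" point, here is a small sketch of the theoretical peak per socket, assuming DDR4-3200 DIMMs (the top supported speed for EPYC 7002); real-world sustained throughput will be lower.

```python
# Theoretical peak memory bandwidth per EPYC 7002 socket at 1 DIMM per channel.
# Assumes DDR4-3200; actual sustained bandwidth will be lower.
channels_per_socket = 8
transfers_per_sec = 3200e6     # DDR4-3200 -> 3200 MT/s
bytes_per_transfer = 8         # 64-bit channel

per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9   # 25.6 GB/s
per_socket_gbs = per_channel_gbs * channels_per_socket           # 204.8 GB/s

print(f"~{per_channel_gbs:.1f} GB/s per channel, ~{per_socket_gbs:.1f} GB/s per socket")
```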
Now for one of the coolest features of the system, one we did not get to test but kind of wish we had: our system is air cooled and really set up for these lower-power processors, but Dell can go much higher. One of the major markets for this box is high performance computing, and in HPC people do not necessarily want low-power 32-core chips; they want higher power and higher core counts, such as the AMD EPYC 7H12, the 280 W HPC processor with 64 cores and 128 threads. Supporting anything up to 280 W means you need something more in terms of cooling, because it is hard to cool 280 W processors in a 2U chassis, and even harder to cool eight of them in one. For that, Dell partners with CoolIT Systems, and there is an entire solution to put water blocks in here and bring liquid into the Dell PowerEdge C6525 for more, and more efficient, cooling. HPC data centers commonly have facility water these days, and facility water is more efficient, so that is why a system like the C6525 would want to take advantage of it.

The other big markets are things like virtualization. Even the 32-core parts we have here offer more cores than Intel currently ships in any Xeon Scalable processor, first, second, or even the third-generation Cooper Lake parts, which top out at 28 cores right now. Getting 32 cores at a fairly low TDP and being able to stuff them into a box like this is something that is genuinely different on the AMD side versus the Intel Xeon side, and it is something Dell is definitely trading on.

All right, let's step away from the CPUs and memory and see what else is in the system. One of the big features Dell has here is the Dell BOSS card. With a lot of operating systems you want mirrored boot devices, but you may need a controller, such as a RAID controller, to do that, and you do not want to burn your primary high-performance RAID controller just on boot devices. So Dell has the Dell BOSS card, and honestly, how could you not love the name Dell BOSS? It lets you take M.2 drives and use them for boot while freeing up your hot-swap bays for higher-capacity, higher-performance storage. Something interesting about the AMD EPYC platforms is that they do not have a platform controller hub, or PCH, the way an Intel Xeon platform does; on the C6420 you have the Lewisburg PCH, and here you do not, so there is no extra IC to cool, but it also means you need something like BOSS if you want boot RAID, because you need something to provide it.

Let's talk expansion for a second. Our system had a single PCIe x16 slot populated, and one of the really interesting features is that Dell supports PCIe Gen4 x16 here; technically the node supports up to two PCIe Gen4 x16 slots, which is fairly unique, since a lot of the early AMD EPYC and EPYC Rome systems only supported PCIe Gen3. We are not really at the point where there are many devices beyond NICs and the like with PCIe Gen4 support, but over the next couple of months we are going to start seeing a lot more, so this will become more important. In our test system only one of the two PCIe slots was populated, which was a bit of a bummer; we really wish we had both so we could do a little more testing.
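To show what the jump to PCIe Gen4 buys in a x16 slot, here is a small sketch of the approximate per-direction bandwidth; it uses the standard per-lane line rates and accounts only for 128b/130b encoding, so treat the results as ballpark figures rather than measured throughput.

```python
# Approximate per-direction bandwidth of a x16 slot, PCIe Gen3 vs Gen4.
# Uses the raw line rate and 128b/130b encoding only; real throughput is
# a bit lower once packet/protocol overhead is included.
lanes = 16
encoding = 128 / 130

for gen, gtps in (("Gen3", 8), ("Gen4", 16)):
    gbs_per_lane = gtps * encoding / 8           # GB/s per lane
    print(f"PCIe {gen} x{lanes}: ~{gbs_per_lane * lanes:.1f} GB/s each direction")
# -> roughly 15.8 GB/s for Gen3 and 31.5 GB/s for Gen4
```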
There is another feature in this system that is historically interesting: it uses the OCP NIC 3.0 form factor (OCP 3.0 NIC or NIC 3.0, however you want to say it). This is a PCIe Gen4 part, and it is a form factor we are going to see in most other systems; most vendors have at this point decided to support it because the hyperscalers said, yep, that is our NIC form factor. So instead of a world of custom mezzanine cards and custom LOMs, we have a standardized OCP NIC that is going to be common across basically the entire industry. That is a really different approach for Dell, and it is absolutely awesome to see it in this machine.

If you want to know why this is so historically relevant, you have to look back a couple of generations. Back in the Xeon 5500/5600 days, Dell pumped out a really awesome system, the C6100 chassis (I am not even sure it carried the PowerEdge name at the time). The C6100 was a 2U four-node system for the Intel Xeon 5500/5600 series, and it went to big organizations; we are talking Twitter, Facebook, those kinds of companies. They used that chassis because it was modular, they could scale out, and they got a lot of density, so it was super popular back then. At STH we actually have fairly complete guides covering some of those older systems from years and years ago. But that was roughly 10 years ago, the market has moved on, and big companies like Facebook have gone to the Open Compute Project. They no longer use Dell servers because they do not really care about iDRAC at all; they want industry-standard BMCs running their own firmware, they do not care for Dell's margins, they go to white-box vendors for lower cost, and they want custom form factors. So they no longer fit the model of the Dell PowerEdge C6400 series. But this is where it comes full circle: those hyperscalers are now driving innovations, specifically the OCP NIC 3.0 form factor, and that is now integrated into the systems Dell sells to enterprises and HPC centers. The computing world comes full circle.

Before we leave the OCP NIC, I want to talk about one absolutely crazy mechanical design choice. On the OCP NIC you will see a little blue latch; you pull it up and that is the lock for the OCP NIC. So you might think: OK, I have pulled my sled out of the system, I need to service my NIC, maybe an upgrade or a failed port, so I pull the node out, pull the little latch up, and now I can slide the OCP NIC out, right? The answer is no, you cannot, which is insane. The riser actually covers the OCP NIC and holds it in place beyond that lock. So not only do you have to release the latch, you also have to remove the riser above the NIC, and because of how it is mounted you then have to remove or unscrew the riser on the other side of the node, on the left, to get to the one on the right and pull the NIC out. You end up basically dismantling two riser assemblies to remove the OCP NIC, even though there is an internal latch. When I saw this I was speechless. What in the heck is going on there? I do not know.

The other thing I wanted to mention about the PCIe slots, and it goes along with this, is that Dell is definitely not leading in PCIe accessibility at this point. A lot of other designs in the market have surpassed what Dell is doing mechanically here; these PCIe slots are nowhere near what some other vendors offer in terms of slot accessibility in their 2U4N chassis, and I think this is something Dell really needs to innovate on if it wants to keep its high bar for mechanical design. It feels like a system that was designed a couple of years ago rather than something more modern. For example, we recently looked at an Intel Xeon based Inspur 2U4N system, and it has probably one of the best PCIe slot mechanisms around; I think the VCS system was also really good in this respect. This one is probably about the level of a Supermicro system in terms of accessibility, and it is just not what we would expect, especially when you look at the rest of the Dell PowerEdge line; you expect something a little bit better here. I am sure there is a mechanical engineer out there who will say we need all of these screws, lots and lots of screws, for structural rigidity and all of that, and that is great, but at the end of the day, if you have to install something, it is going to take longer in this Dell system than in another system just because of this design.

Now, at the back of each node you have a USB 3 port and an iDRAC management port, so you do have out-of-band management, but no general-purpose networking; that is handled by the LOM, or in this case the OCP NIC 3.0 slot, so you will want to use that for your high-speed networking. There is also a direct micro USB port for iDRAC. One thing that is a little different here is that, because of the way Dell constructed the system, there is not room for a full VGA connector. VGA is really old, but it is still kind of the standard in the data center; if you have a KVM cart you most likely have VGA-based systems today. It is not super expensive to change that, but it is a little different that Dell is using a micro DisplayPort here; it is just another cable, another adapter, maybe different monitors. Some other vendors use high-density breakout cables instead, so it is kind of nice that Dell is not relying on that, but it is still something a little different.

In terms of management, this is iDRAC, specifically iDRAC 9, the same management scheme you see on standard PowerEdge servers. So whether you are building a high-performance cluster or a hyperconverged system, you get the same manageability that you get on a normal PowerEdge, and that is great. And since we have standard iDRAC, even though this is an AMD system, we still get to put it into OpenManage, which means all of the standard tooling a Dell shop already has for its servers will work with this, which is awesome.
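Since iDRAC 9 exposes the DMTF Redfish API alongside the web UI and OpenManage, here is a minimal sketch of pulling chassis power readings over Redfish. The host, credentials, and the exact resource layout on any given iDRAC firmware are assumptions; this shows the general Redfish pattern, not Dell's documented tooling.

```python
# Minimal Redfish sketch: read chassis power from an iDRAC 9 endpoint.
# Host and credentials below are placeholders; resource paths follow the
# standard Redfish schema and may vary slightly by firmware version.
import requests

IDRAC = "https://192.0.2.10"      # placeholder iDRAC address
AUTH = ("root", "calvin")         # placeholder credentials

session = requests.Session()
session.auth = AUTH
session.verify = False            # lab use only; use proper certs in production

# Enumerate chassis resources rather than hard-coding an ID.
chassis_list = session.get(f"{IDRAC}/redfish/v1/Chassis").json()
for member in chassis_list.get("Members", []):
    power = session.get(f"{IDRAC}{member['@odata.id']}/Power").json()
    for ctrl in power.get("PowerControl", []):
        print(member["@odata.id"], ctrl.get("PowerConsumedWatts"), "W")
```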
Let's talk power and performance real quick. For power consumption we did a comparison using our STH sandwich, where we put the PowerEdge C6525 between two other 2U4N systems and heat up the top and bottom so we get a more realistic picture of what the system would see in the real world. What we found a couple of years ago is that if you take a 2U4N system and leave open space above and below it, you get better cooling, and that better cooling often means better performance; if a chassis does not cool well on its own, those gaps produce better benchmark numbers than you would actually get in a dense real-world deployment. So we build the sandwich, with 2U4N systems above and below to heat both surfaces, and get more realistic power and performance figures.

On the performance side, what we saw was very good: performance that was almost on par, very close, to what we would see from four dual-socket 1U servers. What that basically tells us is that we are getting nearly the same performance at twice the density of those separate platforms, which is exactly what we wanted to see. If you want more on the performance of the AMD EPYC 7452 processors, we have a whole review on the STH main site; we will link it in the description.

The other advantage of the sandwich is that we get to see the impact on power consumption. We can take the power consumption of the 2U4N chassis and compare it to four dual-socket 1U servers, and when we did that we actually got a decent amount of power savings. This is not a 20 percent saving or anything like that, it is a single-digit percentage, but when you are densely packing high-performance systems into a data center, those single-digit percentages make a big difference. We do want to caveat that this was with relatively low-TDP processors, not the very high end, so we are testing that specific configuration; we cannot simply swap the processors in the system, because Dell blows field-programmable fuses via iDRAC and its firmware, so you cannot take processors out of this machine and put them into other systems. A lot of people do not know that, but it is a feature of Dell systems, and some other vendors do it too. Still, overall, if you are looking to consolidate and you do not need maximum DIMM capacity or expansion, and you just need higher-density compute, this is a really good option, because the fans are being used more efficiently and the power supplies are being used more efficiently, so you should definitely look at something like this.
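To make the single-digit-percentage point concrete, here is a toy comparison; the wattage figures below are invented purely for illustration and are not STH's measured numbers.

```python
# Toy illustration of the density power argument (numbers are invented,
# not measured): compare one 2U4N chassis against four 1U dual-socket
# servers doing the same work.
one_u_server_watts = 500                     # hypothetical per-server draw
four_1u_watts = 4 * one_u_server_watts       # 2000 W
chassis_2u4n_watts = 1880                    # hypothetical shared-fan/PSU chassis

savings = 1 - chassis_2u4n_watts / four_1u_watts
print(f"Savings: {savings:.1%}")             # ~6%, a single-digit percentage
```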
Man, that was a ton to unpack, but hopefully you enjoyed this look at the Dell PowerEdge C6525. Hey, if you made it this far, why not click subscribe and turn on notifications so you can see whenever we come out with new content; we have a whole bunch of Dell and AMD content coming up in the pipeline, so you definitely want to subscribe for that. As always, thanks for watching and have an awesome day.
Info
Channel: ServeTheHome
Views: 35,543
Rating: 4.8905983 out of 5
Keywords: Dell, Dell EMC, PowerEdge, Dell PowerEdge, dell emc poweredge, poweredge c6525, poweredge c6400, poweredge c6420, amd epyc, epyc 7002, epyc rome, epyc 7h12, coolit systems, coolit, epyc 7452, dell amd, amd poweredge, epyc poweredge, dell epyc, rome, dell rome, rome server
Id: ZW4cCSEfs1g
Length: 20min 29sec (1229 seconds)
Published: Mon Aug 24 2020