Let's Look At Some Big, Expensive Old Servers!

Reddit Comments

Gross. I tossed an old PowerEdge just the other day; it easily weighed 80 lbs. It was miserable.

👍︎︎ 2 👤︎︎ u/Ubiquitous-Toss 📅︎︎ Aug 16 2018 🗫︎ replies

I have 4 HP DL580 G7s in my lab; I had to run a dedicated 30-amp circuit to power them. They have 4 x Xeon X7550s (like the G6s) and 256GB of RAM as 64 4GB sticks. I made a mistake when I bought them: I thought it was 4 x 64GB sticks. Boy, was I wrong.

https://www.runlevelone.net/attachments/221/homelab_v4_3.jpg

Just some real feedback: they run hot, really hot. I purposely placed the servers against a window, and I have a dedicated exhaust fan blowing the air out of the house. In a 14x14 room with central cooling and no exhaust fan, the room will reach 95 degrees within 30 minutes. It's about 50 dollars a month in electricity to run each server.

I found a deal on the servers: $1,100 each, shipped. The other lab owner bought two and I purchased two. It was a no-brainer for us: we had been thinking about building a 256GB hypervisor with consumer/pro-grade parts, and 256GB of RAM at the time (2 years ago) was easily $200 per 32GB stick, so that's $1,600 in RAM alone, and we got each server ready to roll for $1,100.

👍︎︎ 1 👤︎︎ u/side_control 📅︎︎ Aug 20 2018 🗫︎ replies
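As a quick check on the numbers in that comment: the RAM math is exactly as stated, and the $50/month figure is plausible at typical residential rates. The average draw and electricity rate below are assumptions for illustration, not figures from the comment:

```python
# RAM cost comparison quoted in the comment
ram_gb, price_per_32gb = 256, 200
print(f"RAM alone: ${ram_gb // 32 * price_per_32gb}, vs. $1100 per loaded server")

# Electricity: assumed ~600 W average draw and $0.12/kWh, both illustrative
avg_kw, rate, hours = 0.6, 0.12, 24 * 30
print(f"~${avg_kw * hours * rate:.0f}/month per server")  # ~$52/month
```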
Captions
Hey everyone, it's Colin. How's it going? I figured I'd show you something I normally don't cover on the channel, but you might be interested in anyway. I decommissioned a few servers; I've got this guy, and then a few more over on the other side that we'll look at in a little bit. There's something kind of interesting about this server and the other ones. I'm filming this on my lunch break at work.

This is an IBM System x3850 X5, from probably around 2011-2012, freshly decommissioned. Most servers are one or two rack units high, so they're about half as high as this guy, but this thing is really, really beefy, and I figured I'd give you a bit of a tour of it, because not too many people get to see this class of hardware. Let's pop the hood and I'll show you what makes it a little bit different. You flip up this latch, push it back, and the whole lid comes up. You can see there's lots of information printed on here; this is classic IBM. The system was purchased before I started working here, but it's also pre-Lenovo-buyout: famously, Lenovo ended up buying IBM's x86 server business, so this is a true IBM system. I remember having to call in once for support on it years ago, and I was actually talking to IBM, which was an experience in and of itself.

The first thing that makes this box interesting is the memory setup. You unlatch this guy (again, trying to do this one-handed), flip it up, and pull this daughter card out. You can see there's some RAM on there: eight RAM slots, DDR3 ECC, so this is server-grade, error-correcting memory. Four of the slots are populated with 16 GB sticks, so that's 64 GB of RAM on this daughter card. A decent amount for a server, right? Maybe not spectacular by today's numbers, but in 2011-2012, 64 GB of RAM was a pretty healthy amount. Now here's the thing: it's not just this one card (I've never had to flip one of these one-handed before... again, there we go). All of these cards are populated the same way, each with 64 GB of RAM, which means this machine has 512 gigabytes of RAM in it. That's a lot for today, but a ton for 2012.

You can see there's a fan here, and there are more fans in this unit, but of course with servers a lot of this stuff is hot-swappable, so the fan assembly just comes out and pops back in, no big deal. What also makes this server special is hidden under here: flip those latches, pick this guy up, and there are four heatsinks underneath. What's nice is they've each got a removable handle so you can take them out individually. This is a quad-socket system. Most servers you see these days are dual-socket, with two separate CPUs, and most consumer computers are just one socket, but each of these four sockets has a separate CPU in it. These are all Xeon E7-4820 CPUs; I believe that's an 8-core, 2 GHz processor with Hyper-Threading, so 16 threads. Multiply that across the sockets and this server has 32 cores and 64 threads. Given what AMD and Intel are doing now, you can get a healthy number of cores out of a single CPU, but remember, in 2012 that was a lot of compute power in a single box. So: four CPUs, a separate heatsink for each. Now you may be thinking, okay, so this server is a pretty beefy box, a lot of RAM and a lot of CPU.
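To keep the totals straight, here's a quick back-of-the-envelope check of the memory and CPU figures from the tour (stick and card counts as described above):

```python
# Memory: 8 daughter cards, each with 4 populated slots of 16 GB DDR3 ECC
cards, sticks_per_card, gb_per_stick = 8, 4, 16
print(cards * sticks_per_card * gb_per_stick, "GB total")  # 512 GB

# CPU: 4 sockets of Xeon E7-4820 (8 cores each, Hyper-Threading)
sockets, cores, threads_per_core = 4, 8, 2
print(sockets * cores, "cores,", sockets * cores * threads_per_core, "threads")  # 32 cores, 64 threads
```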
So what about storage? There must be a ton of hard drives in here, right? Well, if we look at the front (it's a little dark, but you can see), we've got these drive bays, eight of them, and they take 2.5-inch SAS drives. But if we pull the trays out, they're all just dummies. This server actually has no hard drives in it at all. So how does it boot an operating system? That's hidden back under this cover. If we flip this up and look really carefully down in there, you'll see a couple of USB ports, and that thing that looks like a USB stick pretty much is one. In the industry it's called a DOM, or disk on module. That particular one, I think, is 2 gigabytes, and it is the only bit of storage this server has. The idea is you boot your operating system off of it. Windows obviously isn't going to fly with that, and most versions of Linux aren't going to fly with that, so what could this server possibly have been used for? If you work in IT you've probably guessed it already: it's a VMware box. This thing ran VMware ESXi (ran, because it's decommissioned now), so it had a whole ton of virtual machines running on it, hence the lots of RAM and lots of CPU power but no need for disk. The virtual machines' storage actually lived on a separate system called a SAN, or storage area network: basically another server with a whole bunch of hard drives or SSDs in it, whose sole purpose is to serve up storage for things like hypervisors to access their files (you can serve regular file shares off of a SAN as well). This server and the ones next to it don't have any hard drives in them because they got all of their storage off of the SAN.

Let's take a look around back, and you can see we've got quite a bit of networking. Because this box is from 2012, and because we used this particular system as a dev/test kind of box with nothing really critical in production on it, these are all just gigabit ports, but there's still a lot of networking here. Some of these ports were used for SAN access, some for regular production network traffic, some for management; you can slice and dice them however you want. And of course we had multiple ports in use for each type of network, because you don't want the whole thing to go down if one network card or cable fails, so there's a lot of redundancy here. Around back we've also got your typical other server ports. Believe it or not, a lot of servers even these days still just have VGA for video, because servers mostly run headless, with no monitor hooked up, so why would you need really good video? There are USB ports, serial, a couple of onboard Ethernet ports, and then this management port, which is Ethernet and serves up a web page specifically for managing the hardware of the server. It's what they call lights-out management; various vendors have different names for it, but the point is you can log into that web interface and remotely power the server on and off, see what's on the screen, and access the keyboard and mouse, all from within a web browser. That makes things really, really useful if you're in a different physical location than the server: if this thing were to fail in the middle of the night, I could just remote in from home and see what's going on with it instead of having to drive into work. Very useful feature.
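Beyond the web interface, the lights-out controllers of this era (IBM's IMM, HP's iLO) typically also speak IPMI, so you can script power control and hardware health checks. A minimal sketch using the standard ipmitool utility, assuming it's installed and assuming a hypothetical BMC address and credentials:

```python
import subprocess

BMC = ["-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "secret"]  # hypothetical BMC

def ipmi(*args):
    """Run an ipmitool command against the server's management controller."""
    return subprocess.run(["ipmitool", *BMC, *args],
                          capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
print(ipmi("sel", "list"))                 # hardware event log (failed fans, PSUs, ...)
# ipmi("chassis", "power", "on")           # remotely press the power button
```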
Another sign of how beefy this server is: it's not surprising to see servers with redundant power supplies, that's a standard thing, but take a look at this one. It's a monster. You have to slide this over, lift this big handle up, and pull it out. On a standard 100-127 volt household circuit, each power supply will put out 875 watts, a decent amount of power, but if you can feed it between 200 and 240 volts, it'll do 2 kilowatts. That's some serious juice. And because these are redundant power supplies, you can lose one of them and the server stays up and running: an entire power supply can fail, or the source of power feeding it can fail, and the whole server keeps going. So this thing at maximum will draw around 2 kilowatts of power, which is quite a bit of juice, and it also puts out quite a bit of heat.
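For a feel of what those ratings mean at the wall, here's a rough input-current calculation. The 90% efficiency figure is an assumption for illustration, not a number from the video:

```python
# Input current needed for a given DC output, at two supply voltages.
def input_amps(output_watts, volts, efficiency=0.90):  # efficiency is assumed
    return output_watts / efficiency / volts

print(f"{input_amps(875, 120):.1f} A at 120 V for 875 W out")    # ~8.1 A
print(f"{input_amps(2000, 240):.1f} A at 240 V for 2000 W out")  # ~9.3 A
# Either mode pushes the limits of a single 15 A household circuit,
# which is why racks of these run on dedicated high-amperage feeds.
```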
There are various fans throughout the system to deal with that heat. The power supplies have fans built in, and what cools the RAM and the CPUs, which don't have fans of their own, is up front here. Let's see if I can do this one-handed: you pop these tabs, the front panel comes off, and there are fans underneath, hot-swappable as well. Decent-sized fans, and they're what push air through the front of the system and out the back. Servers generally use front-to-back airflow: all the cool air in the server room goes in through the front, gets pushed through the components, and the hot air comes out the back to get sucked into the air conditioning and circulated around. There's a view of the PCI cards, and there are a few slots we weren't using.

Here's a look at the other servers on this cart. These were all decommissioned around the same time. These four servers were used in a cluster; they also ran VMware. They're more or less identical, but they were purchased in pairs at different times, so the specs are very slightly different. These are Hewlett-Packard ProLiant DL580 Gen 7s. This unit and the other one sitting down there on the cart were purchased at the same time, and they're the older of the two pairs. We can pop the lid on this guy and I'll show you what's a little bit different inside. They've got very similar specs to the IBM we just looked at, but the internal arrangement of the hardware is a bit different. You can see the fans are right in the middle of the machine, so instead of having fans up front that push air through, these suck air from the front and exhaust it out the back. There are four of them, also hot-swap, as you'd imagine, and you get really good diagnostics off all of this stuff if any parts fail. In here we've got a midplane-type board, which carries the built-in RAID card (again for SAS drives); here's the backplane for the power supplies, where all the power distribution happens; and then just a couple of auxiliary boards handling a few of the built-in ports on the back. Just like the IBM, these don't have any hard drives in them either. They've also got 2.5-inch SAS drive bays, but they're all just dummies. What's different, though, is that instead of booting VMware ESXi from a USB DOM like the IBM, you've just got an SD card that slots in here; this one is 4 gigs. What's nice about VMware is that when it boots up, it runs directly from RAM, and it never really needs to touch its original storage medium again unless you make a config change that it wants to save. In fact, that SD card could fail while the server is up and running and the server would be just fine. It wouldn't care; it would throw a little warning saying, hey, your SD card failed, you might want to fix that. If you then tried to reboot, the server obviously wouldn't be able to boot up, but because VMware runs directly from RAM, it keeps working fine.

You can see we've got a few more PCIe card slots in here; this one's got kind of two backplanes going on. What's different about this box compared to the IBM is the selection of networking on the back. There are some built-in gigabit ports here, and these four-port cards are also gigabit, but these dual-port cards are 10 gigabit. That speeds things up considerably, especially for storage access, for getting data on and off the SAN the VMs run from. Again, there are multiple cards for redundancy, with a minimum of two links for everything, so that if a card fails, a port fails, or a cable fails or just gets disconnected, everything stays up and running. The last thing you want is a whole bunch of virtual machines running on this thing suddenly crashing on you. On the back it's the same story: VGA, serial, even PS/2, and the lights-out management port for managing the hardware, getting in remotely, turning it on, installing the OS, and all that.

The power supply arrangement is a little different on this guy, in that these are each 1,200 watts. They're physically quite a bit smaller, and they still have a little fan in the back, but this server requires a minimum of two power supplies just to start to wake up. That lights-out management is effectively a computer within the computer; the server doesn't need to be powered on for it to work. As soon as the server receives sufficient power, the lights-out management starts to boot, so as long as you've got two power cords plugged in the back, it will wake up, boot, and then you can get into the web interface and remotely press the power button on the front. But because of the way this server is configured, with the CPUs and the RAM and everything, you need at least three power supplies to provide enough juice for the server to stay up and running, as far as I understand. I've never tried to run one of these with fewer than three power supplies in it; that's generally a bad idea when you rely on a system like this to work all day, every day.

And that's something I should mention: these servers weren't just turned on and off on demand. All of the servers on this cart were literally turned on and running 24/7 for anywhere between five and seven years straight, with very little downtime. The only downtime these HPs saw was probably about half an hour each, a couple of years ago, when I upgraded them with those 10-gig cards, since I obviously had to power each system off to install the cards. I went through the stack one by one: moved all the virtual machines off onto one of the other nodes, powered it off, swapped the cards, powered it on, did the config changes, then moved some VMs from the next node onto this one and did the same thing to it. VMware is really, really cool in terms of what you can do when you've got multiple servers like this, because you can cluster them together. Each virtual machine only really runs on one server at a time, but you have the ability to vMotion running virtual machines: they stay up and running the entire time, serving data, while moving from one node to another on the fly, which is kind of mind-blowing when you think about it. Without any loss of service for the virtual machines, I could basically move them all from this node to that node, or whichever, and it all goes through the network. It copies the contents of the virtual machine's RAM from one node to the other and then does a handoff, okay, now you've got it, and the destination just starts doing all the CPU work, and the virtual machine doesn't know the difference.
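vMotion's actual protocol is VMware's own, but the general technique behind live migration is iterative pre-copy: keep re-sending pages the guest dirties while it runs, and only pause it for the final small delta. A toy sketch of that idea, not VMware's implementation (the dirty fraction and thresholds are made-up model parameters):

```python
def live_migrate(ram_pages, dirty_fraction=0.05, stop_threshold=256):
    """Iterative pre-copy: copy RAM while the VM keeps running, re-copying
    pages the guest dirties, until the remainder is small enough to pause.
    Pages dirtied per round are modeled as proportional to the round's size."""
    pending = ram_pages                       # first pass sends everything
    round_no = 0
    while pending > stop_threshold:
        round_no += 1
        sent = pending
        pending = int(sent * dirty_fraction)  # dirtied while we were sending
        print(f"round {round_no}: sent {sent} pages, {pending} re-dirtied")
    # Brief stun: pause the VM, send the last delta plus CPU state, resume
    # on the destination; the guest only sees a tiny pause.
    print(f"stun: transfer final {pending} pages + CPU state, resume on destination")

live_migrate(ram_pages=131_072)  # e.g. 512 MiB of 4 KiB pages
```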
The other big difference in these servers compared to the IBM is how the CPU and the RAM are arranged. It actually comes out the front: you lift up on this tab, you get this big handle that you can pull down, and the whole assembly slides out. I need two hands to do it, so I'll set it up on top. This whole assembly just comes out the front. You can take the lid off (I think this handle actually needs to be down, there we go) and see the inside. It's kind of the same arrangement as the IBM x3850 X5, where you've got separate daughter cards for the RAM, eight of them. In this case it's all 8 GB sticks, slightly faster RAM but still DDR3 ECC, and each card is fully populated: instead of four sticks of 16 GB you've got eight sticks of 8 GB per card, so it's the same amount of RAM, 512 gigs per server. You've got four CPUs here with separate heatsinks, like before. The CPUs in two of these systems are different from the other two: the IBM had a set of four Xeon E7-4820s, which, if I remember correctly, are the Westmere generation, while this server and the other one underneath all the rack rails is one generation older, the Nehalem series. They're Xeon X7550s, the same arrangement of 8 cores, 16 threads, and 2 GHz, just one generation older, and they're 130-watt-TDP CPUs, whereas the E7-4820s are 105-watt TDP, if memory serves. The VMs really don't care what CPU they run on; the hypervisor takes care of all of that. I mean, you have to stay on x86, you can't run x86-based VMs on SPARC or something like that, but if the CPU models don't match between some of your servers and you want to vMotion a VM from one server to another, VMware just kind of takes care of that for you.
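A quick check that the two memory layouts really come out the same, plus the socket-level power difference implied by the TDP figures quoted above:

```python
# Per-board RAM: IBM x3850 X5 vs HP DL580 G7 (8 boards per server either way)
ibm_board = 4 * 16   # four 16 GB sticks
hp_board  = 8 * 8    # eight 8 GB sticks
assert ibm_board == hp_board == 64
print(8 * ibm_board, "GB per server")                    # 512 GB either way

# CPU power budget across four sockets (TDPs as quoted in the video)
print("X7550:", 4 * 130, "W;  E7-4820:", 4 * 105, "W")   # 520 W vs 420 W
```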
An interesting label on here is this one about weight. This is the quad-CPU version (you can order this particular server with fewer CPUs if you want), and fully loaded, just this assembly weighs 40 pounds. The entire server weighs about 110 pounds with the power supplies and everything in it. These are massive systems; there's a lot of steel in the chassis. They're built to be physically durable and to house a lot of components, and they are not fun to install in or remove from equipment racks. Unfortunately, I had to decommission all of these by myself, and 110 pounds is really too much for any single person to manage, especially with a server this size, when you're also working in a rack full of other equipment that can't be disturbed. So to unrack each of these, I pulled this assembly out, which cut some weight, and then I pulled the power supplies out of the back, which cut some more. That got each chassis down to about 45 to 50 pounds, which was manageable for me to slide out of the rack and set on a cart to wheel out of the server room. The IBM was a bit of a different story, because the CPUs are just stuck in there and I didn't really want to pull the heatsinks out, but I pulled the power supplies and all of the RAM daughter cards, which reduced its weight to, I'm going to guess, 70 or 75 pounds. Thankfully that unit was installed at the bottom of the rack, at about the same height as the cart I used to haul it out, so it was fairly straightforward to slide it out, pick it up, and hop it onto the cart; I didn't have to hold that weight for very long. These HPs were a little further up in the rack, so taking all the parts out helped quite a bit.

These were all decommissioned basically just because they got old. As with most computers, there's a life expectancy to them. While these things were all running just fine when I took them out of the rack, they were between five and seven years old, and it was starting to get expensive to maintain support on them. Obviously, if a part fails, I want a replacement quickly, and if some weird glitch happens, I want access to firmware updates and the like; with this enterprise-grade stuff you generally need to pay for support to keep access to those resources. When servers get to that age, the likelihood of parts failing goes up, even though these are built really well, with lots of redundancy and the ability to see exactly what's going on with the hardware. For example, here's a little control panel that tucks inside there. This is all just for faults: anything that may be wrong with the server, anything that may break, this little panel can tell you about, at least from a hardware perspective. You can see power supply status, fan status, CPU status, and each of the LEDs in this grid corresponds to a RAM module, a separate LED for every single one. So if anything throws a fault or an error, this panel can tell you exactly where it is: you check it and see, okay, fan 2 has failed, because the red light is on, that kind of thing. Plus, the lights-out management web interface can send emails when stuff goes wrong, so you'd get an email saying, hey, a fan or a power supply has died in this server, and then you can look at the little control panel to confirm exactly which one it is.

With all that said, these servers came at a cost. I wasn't working here when they were purchased; I started just after they were all bought and put into production. But I'm going to estimate that each of these was probably around $35,000 brand new, and then tack on maybe another $15,000 to $20,000 per unit over the years to maintain support. So it's kind of funny: when it was brand new, everything on this cart cost well over a hundred thousand dollars.
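Putting the speaker's estimates together for the five servers on the cart (the IBM plus four HPs), with his figures treated as the rough guesses he says they are:

```python
servers = 5                      # 1x IBM x3850 X5 + 4x HP DL580 G7
purchase_each = 35_000           # rough per-unit estimate from the video
support_each = (15_000, 20_000)  # estimated support spend over the years

print(f"${servers * purchase_each:,} purchase alone")  # $175,000
low  = servers * (purchase_each + support_each[0])
high = servers * (purchase_each + support_each[1])
print(f"${low:,} - ${high:,} including support")       # $250,000 - $275,000
```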
These days, though, just because of the age of the systems, even though they work fine, even though literally just a few weeks ago I took them out of production in favor of some brand-new systems, they're worth very little. Once a server gets to five to seven years old, the likelihood of parts failing starts to go up dramatically, so people don't really want to buy used servers this old. Each of these, if we were to sell them, or even try to buy a replacement on the used market, would go for maybe a couple thousand bucks. They still have tons of horsepower in them, even with CPUs many generations old compared to current ones (we're talking Nehalem and Westmere); they can still get viable work done. It's just really interesting to see, compared to the consumer market, how dramatic the drop-off in price is from new to used, in just the span of a few years.

So anyway, hopefully you enjoyed this one. It was kind of impromptu, but I figured this is the kind of thing people don't get to see that frequently, especially with servers this big. The majority of people go with more of a scale-out approach, and that's ultimately what I did with the servers that replaced these. I can't show them to you, because they've got names and proprietary information on them, but instead of four four-socket systems for the production cluster, the new setup is six servers with two sockets each and the same amount of RAM per server, half a terabyte. (I should mention that, maxed out, I think each of these old chassis could hold up to two terabytes of RAM with bigger modules.) The old production cluster was two terabytes of RAM total, plus the 512 GB for dev/test; the new production cluster is three terabytes, because it's six boxes with 512 GB of RAM each. And instead of 8-core CPUs, the new systems have 12-core CPUs. I was never really CPU-bound running all the virtual machines; in general it was RAM, and to some extent disk access on the SAN. They do still sell brand-new versions of all of this. If you're not using the big four-socket systems for virtualization, you're often using them for things like databases, anything where you've just got to crunch a ton of numbers; you're not going to see them for commodity stuff like running web servers, where you can find much more efficient ways to do that. But you can still buy this kind of hardware; the market is just definitely heading away from it. Anyway, if you liked this video, I'd appreciate a thumbs up. Be sure to subscribe if you haven't already, and you can follow me on Twitter and Instagram at thisdoesnotcomp. As always, thanks for watching. [Music]
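For comparison, the aggregate resources of the old production cluster versus its replacement, using the counts described above:

```python
def cluster(nodes, sockets, cores, ram_gb):
    """Total cores and RAM for a cluster of identical nodes."""
    return {"cores": nodes * sockets * cores, "ram_tb": nodes * ram_gb / 1024}

old = cluster(nodes=4, sockets=4, cores=8,  ram_gb=512)  # four DL580 G7s
new = cluster(nodes=6, sockets=2, cores=12, ram_gb=512)  # six 2-socket boxes
print(old)  # {'cores': 128, 'ram_tb': 2.0}
print(new)  # {'cores': 144, 'ram_tb': 3.0}
```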
Info
Channel: This Does Not Compute
Views: 585,703
Keywords: IBM, HP, System x, x5, server, computer, hardware, Intel, Hewlett-Packard, Xeon, RAM, hard drive, Ethernet, networking, VMware, ESXi, hypervisor, virtualization, This Does Not Compute
Id: XqDJNtTPS4k
Length: 27min 44sec (1664 seconds)
Published: Tue Jul 24 2018