Exclusive Insight: Visiting one of the Most Advanced Datacenters in the World

Captions
Hi and welcome back to a new video. Today we are on the road, in a city called Bielefeld, about 400 kilometers away from Berlin. You might remember that around January we posted videos about IBM POWER5 servers. Back then I had organized the server because I knew that IBM was doing some crazy stuff apart from the x86 Intel/AMD world we know; IBM does some very special things. When I got the IBM POWER5 server it was quite difficult to find reliable information about the IBM Power platform, and then a company called OEDIV and a guy called Dominic reached out to me and said: hey, we work a lot with IBM Power servers, why don't you come by and we will show you around.

OEDIV is an IT company that belongs to Dr. Oetker. If you are part of the international viewer base you might not know it, but Dr. Oetker is one of the biggest food processing companies in Germany. They are really huge: practically every pizza you can buy in a supermarket is made by Dr. Oetker, plus a lot of baking products. From 1970 on they built up their own IT operation for their own data processing, and around 1995 they split it off into a separate company that acts as a contractor for many other companies, mainly also in the food processing business, and they all work with IBM Power servers. They also have IBM Power10 here, which we can apparently check out today.

We are now standing in front of the first data center, which we will talk about in detail today. In general there will be two videos: the first is just about the general infrastructure of the data center, how the power supply works, how the cooling solution works; all of that will be in the first video. In the second video we will take apart some of the servers, like a blade server and a Power server, and look at those in more detail.

OEDIV has two data centers in Bielefeld which are connected with something you could call a ring bus. It is built this way so that whenever one of the connections is damaged, the second connection is still fully functional and you never have any kind of data loss. But let's just start where the data arrives at the data center.

This video is powered by Thermal Grizzly and the new Minus Pad Extreme thermal pad. With over 20 W/(m·K), the Minus Pad Extreme offers the highest performance available on the market, ideal for better temperatures on latest-gen graphics cards, for example. It is available in different sizes and in 0.5 to 3 mm thickness; find out more in the link below.

Obviously it is not that simple to just walk into a data center like this; there were several security checks and passes we had to manage to get access. But let's just start. Yeah, that's what happens: I cannot go anywhere by myself.

We are now in the basement, and that is also where the data arrives. If you look to the left you can see a German sign marking this as the eastern point of arrival for the data; it exists here in green, and there is a matching western entry marked in red. Everything that exists in this data center, starting from where the energy arrives, where the data arrives, the cooling and everything else, exists twice. We will just look at the green area, but everything we are looking at exists twice, because something could fail and you always want 100% performance and 100% availability at all times.
The area on the top left right here is the entry point for the fiber optic connections, and these three black conduits are: one for the ordinary internet connection, the second for a big data hub in Frankfurt, and the third for the connection to the second data center here in Bielefeld. Just to give you an idea of how fast data can flow from one data center to the other: this link can reach a speed of about 500 gigabytes per second. The power for the data center is delivered over these three thick black cables, which carry a voltage of 10,000 volts.

What I personally find extremely impressive is the connection to the hub in Frankfurt. That is a distance of about 400 kilometers, for those who are not familiar with German geography. Over those 400 kilometers between Frankfurt and Bielefeld there are four separate fiber routes, and they must not cross at any point. That is a huge effort to put into the ground: if there is construction somewhere on the Autobahn, you do not want a single dig to damage more than one route, so you have to make sure the cables never run through the same spot. And because the routes are laid in different shapes and different lengths, at the end you have to make sure they all end up with the same length so the data stays in sync, which means the length of some of the cables has to be artificially increased.

We are now in the next room. Technically this belongs to the cooling infrastructure, which we will cover in detail later, but since it is the room next door we will look at it quickly: this is the water treatment. Drinking water arrives here, but it can contain all kinds of minerals, and it has to be treated because it will later be used for cooling. When I entered this room for the first time I thought the water would be stored in here, but that is not the case: inside these big tanks is a type of salt which treats the water flowing through them. What exactly this is for, we will figure out once we are outside at the outdoor cooling.

Continuing to the electrical controls: technically the 10,000 volts are transformed down to 400 volts here, but we will skip that and go straight to the 400-volt section. These big gray cabinets next to me are part of the power distribution, and they let you decide whether to rely on external or internal power. So whenever something happens to your external provider, you can switch to the diesel generator, which then supplies power internally for quite some time, and you can switch between those two power sources. Very interesting are the red lines above: they are not wires but tubes, which constantly draw air out of these gray cabinets for smoke detection. They can detect the smallest amount of particles that could theoretically be inside these racks, so if a fire is building up, even the tiniest amount of smoke is detected. What is crucial for the detection is that the same amount of air is drawn from every single cabinet, so that when the sampled air travels through the tube on top to the sensor, every cabinet contributes evenly.
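Picking up the earlier point about length-matching the four fiber routes to Frankfurt: the reason artificial extra length matters is propagation delay. A minimal sketch, where the longer route length is a made-up placeholder, not a figure from the video:

```python
# Hedged sketch: latency skew caused by unequal fiber route lengths.
# Light travels through optical fiber at roughly 2e8 m/s (refractive index ~1.47).

speed_in_fiber = 2.0e8       # meters per second, approximate
route_a_km = 400             # roughly the Bielefeld-Frankfurt distance mentioned above
route_b_km = 430             # hypothetical longer detour route (not from the video)

skew_us = (route_b_km - route_a_km) * 1000 / speed_in_fiber * 1e6
print(f"Extra one-way delay on the longer route: ~{skew_us:.0f} microseconds")  # ~150 us
# Length-matching (adding spooled fiber to the shorter routes) removes this skew.
```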
One thing you will notice for sure, especially compared with my room at home, is that everything is extremely clean and straight; everything is completely organized, and that is something you will notice throughout the entire tour.

We are now in the room of the UPS, the uninterruptible power supply, but we will not spend too much time in here because the noise can be a bit annoying. This is mainly there to deliver very clean power to the servers. This area right here is for the entire IT, so for everything feeding the servers, and there is one more cabinet right next to the camera, which we will film, that is for the infrastructure of the building itself. You can see the size difference between how much power the IT, the servers, consume versus the building itself.

In here we have the batteries for the UPS. It is an online UPS, which means the current constantly flows through these batteries, 480 of them in total, each with a voltage of 6 volts DC. The 400 volts AC arriving from the external provider first go through a rectifier, then through these batteries, and then back into an inverter which converts it back to AC that can be used by the servers. The 480 batteries in here are enough to power the entire data center for about 20 minutes, or at least 20 minutes.

Okay, all jokes aside, this actually happened to me yesterday, even though it is quite visible that there is something in the way. I was already wondering yesterday why you would deliberately put a raised sill right in the doorway; it did not make any sense to me. But pay attention to the sealing on the walls: theoretically, all of those 480 batteries we just saw, plus the additional ones, because the 480 only power the IT and the building infrastructure needs further batteries, could leak at once for whatever reason. If all the acid inside the batteries leaked at the same time, it would build up to this height. They calculated exactly how much volume this room provides so that the acid would only rise to this line, which makes sure that none of the acid can escape the room. There are actually doors I could use, because this one is an emergency exit door.

We will now continue our journey to the diesel generator. Thinking back to the batteries, I said they are sufficient to run the data center for at least 20 minutes. But what happens after 20 minutes? That is a point you would typically never reach, because next to me we have the diesel generator, which reaches 100% power in about seven seconds, and between seven seconds and 20 minutes there is a lot of margin left. What is also interesting: if I touch this thing, I can feel it is somewhere between 40 and 50 degrees Celsius. That is required because you do not want to cold-start the engine; it would take much longer to reach 100% performance. Because it is preheated every single second of the year, it can always start immediately and provide 100% power within about seven seconds.
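To put the "at least 20 minutes" of battery autonomy and the seven-second generator start into perspective, here is a minimal sketch; the IT load is a made-up placeholder, since the video does not state it:

```python
# Hedged sketch: bridging energy of an online (double-conversion) UPS.
# Path described in the video: 400 V AC -> rectifier -> battery bank (480 x 6 V blocks) -> inverter -> AC.
# The IT load below is purely illustrative.

it_load_kw = 300            # hypothetical IT load in kW (not from the video)
autonomy_min = 20           # minimum battery runtime stated in the video
generator_start_s = 7       # time for the diesel generator to reach full power

energy_needed_kwh = it_load_kw * autonomy_min / 60
print(f"Battery energy for {autonomy_min} min at {it_load_kw} kW: {energy_needed_kwh:.0f} kWh")

margin = autonomy_min * 60 / generator_start_s
print(f"Battery window is ~{margin:.0f}x longer than the generator needs to start")
```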
Apart from that, it is just a typical diesel engine: you have a turbocharger right here, you have your cold-air intake, and this is the cooling part, an ordinary radiator like you would have in a car engine. And because a diesel generator with 18 cylinders and about 1,500 horsepower also requires a lot of diesel, we have a tank right here. That is not the main tank; the main tank sits outside and holds several tens of thousands of liters. This is just a small day tank sitting directly next to the engine, which can provide enough diesel for several minutes.

If you just left a diesel generator standing in this room for a year or two and never used it, you would not know whether it had developed some technical problem. That is why each of the two generators, because as I pointed out before they always have two of everything, is run alternately: they use this one, and two weeks later they use the other one, so every single unit is tested about once a month to make sure it is always functional. In case of an emergency, you want to be sure that everything works properly.

Now we are much closer to the center of the data center, very close to the server hardware itself, which we will look at in a second. Above me we have these power lines, 400 volts; you can again find green and red areas of the infrastructure, and following these power lines the servers are connected to the power.

Opening the door, you can hear it is getting a bit noisier, which is a good indicator that we are very close to the actual servers. Above me we again have these power lines, but now they split off to the servers directly. Where you have these splitters, as I would call them, you have cables as thick as an arm, and these are the connections to the servers themselves. Also interesting: these big racks next to me are part of the cooling of the servers, but to follow the entire cooling chain and how cooling works inside this data center, we will first hop over to a server, check how the power arrives at the server, and then look at how the cooling works.

Above me is the area where the power is distributed to the individual servers. Each of these boxes is marked with 63 amps, so each box can supply 63 amps to each of these racks, and again everything is provided twice: we have the red and the green power line. If we open up one of these racks and inspect it from the back, on the left you can see green cables from the green power line and on the right the red cables from the red power line. Looking at a server, it obviously has two PSUs in the back for independent power supply, and the external power feed itself, visible by the green and red lines, is independent as well.

If we think back to the intro of the video, I was talking about IBM Power servers, and right next to me we have an IBM Power9 enterprise server: 1,408 CPU cores, 64 terabytes of memory, and it draws about 16 kilowatts, which is insane. I can also feel that, because there is a lot of air blowing towards me and it is very warm. This warm air was of course cold air before it was used for cooling, and that cold side is on the other side.
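As a rough sanity check on the 63-amp feeds and the 16-kilowatt figure just mentioned, here is a minimal sketch; it assumes the 400 V figure is a three-phase supply (typical for 63 A CEE distribution in Germany), which the video does not state explicitly:

```python
import math

# Hedged sketch: capacity of one 63 A rack feed, assuming 400 V three-phase.
# Power factor is ignored, so the result is kVA rather than kW.

voltage_ll = 400   # line-to-line voltage in volts (stated in the video)
current_a = 63     # amps per distribution box (stated in the video)

feed_kva = math.sqrt(3) * voltage_ll * current_a / 1000
print(f"One feed: ~{feed_kva:.1f} kVA")                      # ~43.6 kVA

# The Power9 system mentioned above draws about 16 kW, so it fits comfortably
# on a single feed -- and each rack has a second, redundant feed anyway.
```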
We are now entering the cold aisle containment, and I am very sorry that it will be very loud again. Cold aisle containment means that underneath us there is a big channel into which cold air is blown with a slight overpressure. We have different machines in here, like the Power9, which is extremely loud and requires a lot of cooling, but there are also servers with lower power draw that need less cooling, and because of that they have to make sure the pressure in here is high enough that every single server, even one with slower fans, still gets enough cooling air. We will now follow the chain to see where this cold air actually comes from.

We are back in the room we just visited, the room for the final power distribution to the servers; you might recognize it. If you pay attention to the side, this is the area where the hot exhaust air from the servers is drawn back into this room, and then we have these air recirculation coolers right here. The air is sucked into these cabinets, and inside we have filter units. What I find quite interesting, and also amazing, is that if you look closely at these filters there is just no dust, nothing dirty, zero dust anywhere, which is very impressive. After the filters the air passes through radiators, very similar to what you would have in a car or in a PC, and these radiators are cooled by cold water. The air is cooled this way, goes back down through the floor, and is distributed to the servers again.

Even though these floor tiles look like plastic and look light, I can assure you they are far from that; that is why they gave me this tool to remove one of them, because they are very heavy. Underneath one of these tiles you can see all the piping, and these pipes carry the cold water. If I put my hand in here I can also feel a lot of cold air, very refreshing. But we will now follow the path of where this cold water comes from.

This is also a room we already visited. Above me we again have the power lines, and behind me we have the cold water; it is marked as cold water, and you can see supply and return. The left side reads a temperature of 14 degrees Celsius and the right side reads 20 degrees Celsius. This is again a slightly obscure German thing, because there are regulations about when you may call something hot or warm water, and since both are relatively cold, both are labeled cold water. The left side with 14 degrees Celsius is the water that goes to the cooling units we just inspected, where the air is cooled down, and the right side is the warmer return water at 20 degrees Celsius, which then has to be cooled down again.

Leaving the data center, we again have these pipes marked as cold water, and they head towards the headquarters. The water is used to heat the headquarters in winter, because through normal radiators you can heat the air inside the building, and in summer, because the water is still fairly cold, the same water can also be used for cooling.

Following our cold water pipes: the heat has to go somewhere, and in the end this is just a massive radiator. It is the same thing you have in a custom water-cooling loop when you use a radiator, only much bigger.
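To connect the 14 °C supply and 20 °C return figures to an actual cooling capacity, here is a minimal sketch; the flow rate is a made-up placeholder, since the video does not give one:

```python
# Hedged sketch: heat removed by a chilled-water loop running 14 degC supply / 20 degC return.
# Q = m_dot * c_p * delta_T. The flow rate below is purely illustrative.

flow_l_per_s = 10          # assumed flow rate in liters per second (not from the video)
c_p = 4186                 # specific heat of water, J/(kg*K)
delta_t = 20 - 14          # return minus supply temperature, in kelvin

q_watts = flow_l_per_s * c_p * delta_t   # 1 liter of water ~ 1 kg
print(f"Heat removed at {flow_l_per_s} l/s: ~{q_watts/1000:.0f} kW")   # ~251 kW
```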
We can see all these fins, with pipes running through them where the water flows, but it is actually a different water loop; it is not the same water we just saw downstairs, and we will get to that in a second. Eventually the heat is dissipated here, and this works up to an outside temperature of about 12 degrees Celsius. There are also fans sitting on top; luckily they are not spinning right now, so it is a little quieter, which is convenient.

For summer, when the temperature can reach 37 degrees Celsius, they can use what we saw earlier: downstairs we had these big greenish tanks where I said there is salt inside for treating the water, and this treated water can be sprayed onto the fins. Then you get a phase change, the water evaporates, and this way, even in summer when the ambient temperature is much higher than the 14 degrees Celsius you are aiming for in the cooling loop, you can still cool with this spray system. Another fun fact on the side, since we are standing in the shade: as I said multiple times, everything here exists twice, and the same goes for the cooling. Right now we are on the shaded side of the building. The other side, where the sun is beating onto the cooling units, is not used for cooling; because we are in the shade it is noticeably colder here, and I can feel that right now, so the shaded side is the one in use, simply because it is colder.

Behind me we have two massive plate heat exchangers. We covered plate heat exchangers on my channel quite a while ago; it is the same principle, just a thousand times bigger. The water that flows through the outdoor cooling unit we just saw enters these plate heat exchangers, and then we have two huge tanks right here, each containing 6,700 liters. They are there not only to even out the temperature; if something goes wrong with the cooling, it also takes a lot of time to heat up 6,700 liters per tank, which buys time. And again, this room exists twice, which means there are four of these tanks in total.

As I pointed out outside, if the ambient temperature exceeds 12 degrees Celsius you cannot rely on the outside cooling alone anymore, and then you need these machines. They work like a huge air conditioner, a huge chiller, which we have also had on the channel multiple times; same principle, just ten times bigger. This is the smaller of the two units, with a power consumption of 87 kilowatts, and the bigger one draws 140 kilowatts, which also explains why you typically want to avoid running them: the power consumption is substantial, and it is simply much more efficient for the data center to rely on ambient outside cooling. That is also why, luckily, these are not running right now, otherwise it would probably be quite loud, and again, both of them exist twice.
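Going back to the 6,700-liter buffer tanks for a moment, here is a rough sketch of the ride-through time such a buffer can provide; the heat load and the allowed temperature rise are made-up placeholders, since the video gives neither:

```python
# Hedged sketch: how long a water buffer can absorb heat before warming up too far.
# Assumptions (not from the video): heat load and allowed temperature rise.

tank_liters = 6700
tanks_in_use = 2            # the video mentions two tanks per room
heat_load_kw = 250          # hypothetical cooling load in kW
allowed_rise_k = 4          # hypothetical allowed temperature rise in kelvin
c_p = 4186                  # J/(kg*K), water; 1 liter ~ 1 kg

energy_j = tank_liters * tanks_in_use * c_p * allowed_rise_k
ride_through_min = energy_j / (heat_load_kw * 1000) / 60
print(f"Ride-through: ~{ride_through_min:.1f} minutes")   # ~15 minutes with these numbers
```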
We are currently inside a small airlock; we already passed this area multiple times during the day, and also yesterday. You might have noticed this sign before, and because it is in German you probably had no idea what it means: it basically says caution, oxygen-reduced area. On the right you can also spot an oxygen meter, which currently reads 17.1% oxygen for the area we want to enter; ambient air normally contains about 21% oxygen. The reason they lower the oxygen level is a safety measure: you absolutely want to avoid any kind of open fire in here, and at about 17% oxygen an open flame is no longer possible.

They also have multiple sensor units like this one inside the building, which can detect any kind of particle in the air; they can even detect that I am standing right next to one. If, for example, a capacitor blew up inside one of the servers, on a motherboard, I would not be able to detect it with my nose, but the sensor can. And because they have almost 2,000 sensors spread across the entire data center, they can even pinpoint the rough area where the incident happened. Theoretically they could lower the oxygen level even further, to a point where I could still breathe, but where any kind of burning or open flame inside a server becomes even harder.

If we assume for a second that you have a motherboard lying in front of you and a capacitor blows up, the smoke rises from the board, and you might ask yourself: these sensors are attached to the wall, how are they going to detect anything if the component blows up somewhere inside a server rack? The answer is that they rely purely on internal air. There is no connection to the ambient air outside, no window or anything in this building; the air is constantly circulated inside the building with a huge airflow, so the air and all the particles in it are always evenly distributed, and the sensors sitting on the wall can pick them up. One more reason they use internally circulated air instead of outside air is the oxygen reduction: if you just opened a window and let outside air in, it would have a huge impact on the oxygen level, and the reduction is both cost-intensive and complex. I personally did not even know how it is possible to reduce the amount of oxygen in the air, so we will check out how it works now.

For the oxygen reduction they have this machine called an OxyReduct. It looks pretty simple, and I did not even know something like this exists. The way it works: on your left there is a big compressor with an air intake from outside, which compresses the air to a high pressure and feeds it into the OxyReduct. Inside there is a membrane, and this membrane can partially filter out the oxygen. The ambient air contains different gases, to a large extent nitrogen for example; all the other gases remain, but the oxygen content is reduced. This remaining gas is then pushed down into the data center area, where as a result you end up with, for example, the 17.1 percent of oxygen.
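To get a feeling for how much nitrogen-enriched air that takes, here is a minimal mixing sketch; it assumes simple ideal mixing of normal air (about 20.9% O2) with pure nitrogen, which is a simplification of what the membrane system actually delivers:

```python
# Hedged sketch: what fraction of the room air has to be replaced by nitrogen
# (ideal mixing, steady state, no leakage) to reach the 17.1% O2 shown on the meter.

o2_normal = 0.209    # oxygen fraction of ambient air
o2_target = 0.171    # oxygen fraction displayed in the video

# Mix (1 - x) parts normal air with x parts pure nitrogen:
#   o2_target = o2_normal * (1 - x)  =>  x = 1 - o2_target / o2_normal
x = 1 - o2_target / o2_normal
print(f"Roughly {x*100:.0f}% of the air volume replaced by nitrogen")   # ~18%
```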
Talking about nitrogen: we also have a large number of nitrogen tanks here, each pressurized to 300 bar, which is enormous, and this is the fire suppression area. Even with all the high-tech detection sensors, it could theoretically still happen that a fire starts inside a server or in another area, and then these tanks can be emptied. The nitrogen goes through these valves into the different fire-suppression zones (areas five, two, six, three, one, depending on where the fire is detected), and this way the oxygen level in that zone is lowered so far that a fire is no longer possible. If there is hardly any oxygen left, you could assume it also becomes dangerous for a human being in there, but rest assured there are several alarm signals that warn you beforehand. Even if you miss them, for example because for whatever reason you are not conscious anymore, the level is only lowered for a very short period to extinguish the fire, and a few seconds later the normal oxygen level is restored, so it is not actually that dangerous.

We are now done with the first part of the video, which covered the entire infrastructure. No worries, we will get to some very impressive servers soon, but I wanted to give you a feel for how complex the infrastructure here is. If I compare this to a lower-end server hoster, or to having your server at home, you might be fine with the server crashing occasionally; a downtime of 10 or 30 minutes for updates might be okay. I do not know exactly what customers they have, because that is obviously protected, but in the end they could have critical customers, a hospital maybe, or logistics companies. Just imagine a customer with 500 trucks on their way to a warehouse to pick things up; they arrive, and for whatever reason the logistics management software does not work, the trucks are stuck, they have to wait an hour, and maybe you end up with a traffic jam on the Autobahn because of it. Or think of a supermarket where the cashier is trying to ring up your shopping and from one moment to the next nothing works anymore because the software is not responding; you certainly do not want to wait an hour at the supermarket. That is more like critical infrastructure, and that is the kind of customer they probably have, so downtime here is absolutely not an option.

We will get to some very impressive servers, like an IBM mainframe, and just statistically speaking, we calculated this yesterday: the IBM mainframe machine has a statistical downtime of 52 seconds over an entire year. But that is only a statistical figure, so it could just as well run three years without any downtime at all, which is what I would typically expect. That also explains the huge effort they put into the infrastructure, having everything twice, maybe even four times depending on what you are looking at. We tried to give you a detailed insight into how much effort it really is; we covered maybe 90 percent, you could go into more detail on some aspects, but I think we covered it quite well. Now it is time to look at some of these impressive servers.
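The 52-seconds-per-year figure mentioned above translates into an availability percentage like this; a minimal sketch, with the seconds-per-year arithmetic spelled out:

```python
# Hedged sketch: converting the stated ~52 seconds of downtime per year into an availability figure.

seconds_per_year = 365 * 24 * 3600      # 31,536,000 s (ignoring leap years)
downtime_s = 52                         # statistical downtime stated in the video

availability = 1 - downtime_s / seconds_per_year
print(f"Availability: {availability * 100:.5f}%")   # ~99.99984%, i.e. better than 'five nines'
```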
So we are now in the data center at OEDIV, standing next to an IBM Power10. The label already reads E1080, and the "10" in the E1080 designation indicates that this is an IBM Power10 server. Even though it might not look that spectacular, there are four nodes: each of these units is a node with CPUs and memory inside, so four nodes in total, plus the control unit at the bottom. The control unit is required for the whole system to start; a basic configuration could be just the control unit with one node on top. If everything works out, an IBM technician will open the server from the back side, and then we can take a look at some of the parts inside.

I think we will just hop over to the back side of the Power10, because from the front it does not look all that spectacular. But I can at least tell you the price: this thing costs millions, and for a single server that is completely insane. If you are wondering what on earth is so loud in here: it is this thing, an IBM Power9, and it is screaming; it has all these tiny fans which are extremely loud. Just for fun, if you compare the Power9 with the Power10: the Power9 draws its intake air through this mesh front, while the Power10, as you can see, has open cutouts instead, because the Power10 has such a high power consumption and runs so hot that even this mesh in front would restrict the airflow too much.

Now, finally, to the back side. Before we take it apart from the front, look at all these cables here; they are normally plugged into the slots above, and these cables form connections between the CPUs. Every single CPU is connected to every other CPU, which also allows, for example, CPU 1 to access memory attached to CPU 12.

Let's go to the front and open it up. Luckily we have a technician here from IBM, because there is another obscure but also fun detail: even though OEDIV purchased these servers from IBM, they are not allowed to work on the IBM Power servers themselves. Even if a single memory module has to be changed, they are not allowed to touch or open the server; an IBM technician always has to be on site who is then allowed to open the server and, say, replace a memory module or a network card. I guess it is a warranty thing, because these servers are a bit costly. Luckily they organized a technician, so we can open one. He already pointed out that we will not be able to look at a single CPU, simply because they are so expensive and there is always a certain risk of breaking something, and we absolutely do not want them damaging a Power10 CPU for us. But I think we will at least be able to look at some memory modules or similar parts, so let's see.

Just as an example, this is a networking card for the Power10. At first glance it looks, well, I do not really know what it is supposed to look like, but internally you can see it is just a simple PCI Express adapter: a PCIe x16 slot with the network adapter sitting in it and its ports on the front, and this is the connector that goes into the motherboard of the Power10.
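For scale, a PCIe x16 slot like the one on that adapter carries roughly this much raw bandwidth; the sketch assumes PCIe 4.0 signaling (16 GT/s per lane, 128b/130b encoding), since the video does not say which generation the slot uses:

```python
# Hedged sketch: usable bandwidth of a PCIe x16 slot, assuming PCIe 4.0.
lanes = 16
gts_per_lane = 16.0          # gigatransfers per second per lane (PCIe 4.0)
encoding = 128 / 130         # 128b/130b line encoding overhead

gbytes_per_s = lanes * gts_per_lane * encoding / 8
print(f"~{gbytes_per_s:.1f} GB/s per direction")   # ~31.5 GB/s
```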
Now to the server we are going to take apart. You can already see there are four PSUs in here, each rated at 1,950 watts, and they are redundant, which means two of them could technically fail and then be replaced. The same goes for the fans: they have indicators showing whether they are working fine, and they can be replaced easily. Fun story on the side: IBM normally only allows the Power10 systems to be installed up to height unit 24 in the rack. If you want to build them higher, to be completely upfront I think the maximum is 43 or 44 units, and about 48 would make sense, then IBM says you have to purchase a ladder and a safety helmet from them to be allowed to work at the maximum height.

So that is the Power10 server, and looking at it from the top you can see the four CPUs sitting in a line. What is very interesting is that these cables coming from the back are connected directly to the CPU: you have these tiny fibers, and if you look from the right side you can see black connectors attaching to the CPU from the side. So it is not the mainboard that forms the data connection; the CPU is connected directly over a cable.

We will now remove one memory DIMM from this server, and it is a completely different form factor from the ones you usually see. This is an OMI DIMM, the module that sits inside the server, and you can see there are plenty of them; in total there are 16 terabytes of memory inside this single server. This single DIMM has a capacity of 256 gigabytes; it is a DDR4 module, here are the memory chips, and in the center there is a small controller which also requires active cooling, which is why there is this copper heatsink on top. The area at the top here is the voltage supply for the module. So this is what they call an OMI DIMM.

Now everything is back in place and reassembled, and I have the honor of pressing the start button. Everyone behind the camera is laughing like hell; I am not sure why, maybe because it is about to get so loud. We will find out. Two hours later...

Okay, talking about the different types of servers and architectures, you might ask yourself: why IBM Power at all, what makes it special or better than an ordinary x86 platform from Intel, for example? There are multiple aspects that make it special and a lot more enterprise-class than x86. We can start with memory: the sheer amount of memory one of these systems can manage is a lot higher. For example, one of the instances they run, not on this system but on a different one, has 14 terabytes of memory for a single instance with about 400 CPU cores, which is enormous. IBM Power is also very well suited to running databases in memory; SAP HANA is a typical database application you would run directly in memory, not on SSDs but purely in memory. And the memory bandwidth the CPU can handle on the IBM Power10, the speed between CPU and memory, is far beyond what you are used to: an ordinary desktop Intel or AMD system has about 50 or 60 gigabytes per second, so it is a completely different world.
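The 50-60 GB/s desktop figure checks out if you work it through for a typical dual-channel setup; a minimal sketch, assuming DDR4-3200 as the example speed:

```python
# Hedged sketch: theoretical bandwidth of a common desktop memory configuration
# (dual-channel DDR4-3200 assumed for illustration).
channels = 2
transfers_per_s = 3200e6     # DDR4-3200: 3200 MT/s
bytes_per_transfer = 8       # 64-bit channel width

bandwidth_gb_s = channels * transfers_per_s * bytes_per_transfer / 1e9
print(f"~{bandwidth_gb_s:.1f} GB/s")   # ~51.2 GB/s, matching the 50-60 GB/s mentioned above
```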
The availability of the IBM Power machines is also much higher, because of the redundancy and the number of components that simply exist two, three, four, and in this case eight times. Take the TPM module, for example, or one of the VRM modules we just pulled out of the Power10: these exist eight times, and theoretically six of them could fail and the machine would still work. The same goes for a core on the CPU, which I find very impressive: if you have, say, a 15-core or a 30-core CPU in there and a single core fails, the machine can disable that specific core and just keep running normally with no performance impact, whereas on an x86 Intel CPU, if a single core fails, the entire CPU fails.

Even though we have talked a lot about IBM Power machines in this video, OEDIV also runs a lot of x86; in fact about 80 percent of their hardware is x86. We are just opening one of these rack servers from the back, and that is what most people would consider a normal server, your ordinary rack server seen from behind. If we move over to the next machine on the right: these are x86 blade servers. Blade servers have a much higher density when it comes to CPU cores and memory, but they require a chassis that they are pushed into. In these two enclosures right here there is a total of 80 blade servers, eight-zero, and all of them are dual-socket, which means 160 x86 Intel sockets in here. Considering that all of them have at least 20 CPU cores, some 22, some 24, that makes a total of about 3,200 CPU cores and about 80 terabytes of memory just in here. But then, thinking back to the IBM Power10 we just inspected, which was about half the size and had 64 terabytes of memory, you can see again how much higher the memory density of the IBM Power machine is compared to an ordinary blade system.

We can now see the blade servers from the front, the same ones we just inspected from the back. This unit right here is a single blade server that is pushed into the chassis and contains dual-socket Intel CPUs. We will now hop back to the headquarters to open one of these blade servers and show you the internals. In general, the second video of this series will contain much more of that, playing around with hardware and opening up different servers, but I think we have to look at one now.

The unit we have here on the table is a Cisco M4 blade server; the ones we just saw in the chassis were M5, so a newer generation, though technically not much different. They could have a different socket, these are Socket 2011, and this one has 512 gigabytes of memory; the newer M5 could take higher-capacity memory sticks, for example. What I found quite interesting are the heatsinks: the first one has quite wide fins, so a lower fin density than the second heatsink. That makes sense, because the first heatsink always receives colder air, and with the higher fin density on the second one, its cooling capability ends up roughly the same despite the higher air temperature. Apart from that, we do not find any storage in this unit, and in the back some blade servers can also take flat GPUs, but in this case it is just networking.
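The 3,200-core and 80-terabyte figures quoted above for the two blade enclosures follow from a simple multiplication; a minimal sketch, where the 20-core per-socket value is the lower bound mentioned in the video and the per-blade memory is an assumption:

```python
# Hedged sketch: core and memory totals for the two blade enclosures described above.
blades = 80                 # total blade servers across both enclosures
sockets_per_blade = 2       # all dual-socket
cores_per_socket = 20       # lower bound mentioned in the video (some have 22 or 24)
memory_per_blade_tb = 1     # assumption: ~1 TB per blade to land at the stated ~80 TB total

print(f"Sockets: {blades * sockets_per_blade}")                                 # 160
print(f"CPU cores: at least {blades * sockets_per_blade * cores_per_socket}")   # 3200
print(f"Memory: ~{blades * memory_per_blade_tb} TB")                            # ~80 TB
```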
Apart from the Intel x86 machines and the IBM Power machines, there is also the IBM z mainframe, where the z stands for zero downtime. Obviously that could just be a marketing term, but they told me they have been using mainframes for 20 years, and over that period of 20 years they never had a single downtime, which is completely insane. And if you think back to IBM Power, where a single core could fail, inside a mainframe an entire CPU could fail and it would still keep working, which is very impressive. These machines are used for very time-critical applications; banks, stock exchanges and insurance companies, for example, use mainframes. The mainframe was designed in the 1960s, a long time ago, and the base is still the same: if you had a program, an application, written back then, you could still run it today without having to adapt your software, which is special. At the same time the hardware has evolved drastically over the years, which speeds things up a lot, and this is the latest generation, the z15.

You might have noticed that all the different servers and systems we have inspected so far, with maybe a few exceptions, did not contain storage directly, no NVMe drives, no SATA drives or anything like that. That is because the storage sits in a completely different, dedicated room, and the connection to the storage obviously has to be fast. That is what these nice pink cables are for; they are not copper cables but fiber connections, and the pink color also indicates the speed, in this case 32G or 64G. Apart from storage you also need admin access, for example, or you may want to transfer some data to your server, and that is what this other type of connection is for. It can be an ordinary Ethernet connection, like the normal network switch you can spot right here, which only has 1G connections, one gigabit. For more data-intensive transfers they also have a 25G fiber connection available on top. You might also have noticed that this network switch on the left has green cables; that is why we also opened the one next to it on the right, which again has red cables. Those two racks again form one unit, because theoretically the power delivery of one of them could fail, the whole thing would be dead, and then you again need the redundancy of the second one.

I just had to smile again, by the way, because we just entered the storage area and they told me they only start at mid-range, which is everything over here; that is where the production storage is. Everything on the left, those Isilon systems, contains backup data, eight petabytes in total, so eight thousand terabytes of data. This here is a more enterprise-class system, a Dell EMC PowerMax, and if you pay attention to all these blinking LEDs, all these small slots, every single one reads 7.68 terabytes of storage, and every single one is an SSD. They do not have any mechanical drives anywhere here, because they consider that old tech, which kind of makes sense. Everything in the center with all the fans are the controllers, RAID controllers for example, to access all the SSDs; what we find at the top is also present at the bottom, and in between, again, a lot of control units.

The last thing we are looking at now is the tape library, because that is pretty much the last step of the backup process. In here we have a lot of tapes and tape drives, readers and writers, where an automatic arm can insert the tapes.
Even though I thought this kind of tape was somewhat outdated, I used to do this 10 to 15 years ago when I was in the army, doing server backups on tiny tapes, these tapes can hold 30 terabytes per tape. What happens first is that the data coming from this data center, from the backup storage here, is written onto tapes in the second data center, and the second data center, which also holds all the backup data, has its data stored on the tapes in here. This way you have a physical separation of the data, not only from this storage to here, but also from the tapes to the second data center. If something went catastrophically wrong and everything burned down, you would still have all the data saved on the tapes, which are not electrically accessible at all unless they are inserted into a tape drive.

We have finally reached the end of our tour here at OEDIV, and I have to say this was one of the most fascinating things I have ever seen in the IT industry, especially everything around the servers. The servers themselves were already quite impressive, but so was the entire infrastructure, also from an efficiency point of view, because this data center has an efficiency rating of 1.35. A value of 1.0 would mean that all the energy you consume goes 100 percent to the servers themselves; 2.0 would mean that for every kilowatt going to your servers, you need another kilowatt for the infrastructure, cooling and everything else. A value of 2.0 is roughly the standard right now in Germany, 1.5 would be considered great, and this facility is at 1.35, which just shows what kind of top-notch technology we were able to witness. It was an honor being here. Thank you very much to OEDIV, to Dominic, Chris, Marvin, and everyone who was involved, thank you for your time showing us around, and I hope you enjoyed this video. Thanks for tuning in, see you next time, bye.
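The 1.35 efficiency figure is a PUE (power usage effectiveness) value; here is a minimal sketch of what it implies for the infrastructure overhead, using a made-up IT load since the video does not give one:

```python
# Hedged sketch: what a PUE of 1.35 means in terms of overhead power.
# PUE = total facility power / IT power. The IT load below is illustrative only.

pue = 1.35
it_power_kw = 500                    # hypothetical IT load in kW (not from the video)

total_kw = pue * it_power_kw
overhead_kw = total_kw - it_power_kw
print(f"Total facility power: {total_kw:.0f} kW")       # 675 kW
print(f"Cooling/infrastructure overhead: {overhead_kw:.0f} kW "
      f"({overhead_kw / it_power_kw * 100:.0f}% on top of the IT load)")   # 35%
```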
Info
Channel: der8auer EN
Views: 343,992
Id: bpTNcbnZjvY
Length: 47min 22sec (2842 seconds)
Published: Sun Apr 03 2022