Dell EMC VMAX All Flash Overview with Vince Westin

Captions
So we're now going to proceed on and talk about the hardware overview — what are we doing, and how do the products work. Again, I'm Vince Westin, this is Adnan Sahin, and we're going to talk through these pieces. I'm going to lead the introductory sections, and Adnan is going to be driving our NVMe discussion as we get further along.

Talking about the family — you had asked earlier where we're positioned in the market and how things are selling. Our current, most up-to-date versions of the VMAX are the 250 and the 950. We still offer the 450 and the 850, and we still offer the hybrid boxes, but these are the current products. The 250 is designed to be very small, simple, and scalable — the one you saw up there: two bricks in half a rack, able to deliver lots of IOPS and lots of capacity, with 64 front-end ports, and you can scale it a little. As we go down market with this, because we have a number of customers interested in some of these resilience features, we're finding that we're selling about half of our product now as 250s — from both a petabyte and a dollar-revenue perspective it's about half on 250s, and the other half is now mostly on the 950. There's a small percentage of other things still going on, but the 950 has given us a great boost: we've now got 18 cores per director and those kinds of things — we'll talk through the hardware in a bit.

So are you saying that customers are focusing on many more smaller boxes than the mega boxes they may have bought in the past? Well, we have two types of customers buying these. Some of the folks buying them have a whole implementation that's smaller — they only want 50, 100, 200 terabytes. And "smaller" still goes up to a petabyte; five years ago we were really impressed with a petabyte customer. But when you look at it, those two configurations are vastly different: 1 petabyte versus 4 petabytes, 1 million versus 6.7 million IOPS — there's quite a divergence there. So it depends. There are some folks who want the additional IOPS: the 950 is four times the capacity but over six times the IOPS, so if you need much higher I/O density you might decide you want the 950. The other thing that causes some people to think about how they want to do this is something the Starbucks IT folks call the blast radius: how much data are you comfortable putting in one array? Because if a grenade went off and that array were dead, what would be the blast radius to my applications — how much would I lose?
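As a quick sanity check on those figures — the 1 PB versus 4 PB and 1 million versus 6.7 million IOPS numbers quoted above — the I/O-density difference works out roughly like this (a simple back-of-the-envelope calculation, not vendor-published math):

```python
# Rough I/O-density comparison using the maximums quoted in the talk.
models = {
    "VMAX 250F": {"capacity_tb": 1_000, "max_iops": 1_000_000},
    "VMAX 950F": {"capacity_tb": 4_000, "max_iops": 6_700_000},
}

for name, m in models.items():
    density = m["max_iops"] / m["capacity_tb"]   # IOPS per usable TB
    print(f"{name}: {density:,.0f} IOPS/TB")

# The 950F works out to roughly 1.7x the IOPS per terabyte of the 250F --
# the "four times the capacity but over six times the IOPS" point above.
```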
I was wondering whether it's more that people are using other types of storage, and the growth is perhaps in unstructured data and other areas, so there isn't necessarily the need for the bigger high-end boxes the way there was twenty years ago. Well, I've got one customer that routinely buys seventy-five or so of these a quarter — we have customers who consume a lot of this stuff in a quarter; put two of them in a rack and it only takes up 35 or so racks. We have other customers who are purely hyperscale and run everything hyper-converged, so they're not buying these at all. And we have other customers who are buying five-plus petabytes a quarter, and when you're buying five petabytes a quarter you may not want that many small boxes, and you may not need the isolation.

It depends on your application style. If you're building small applications that you can reasonably isolate, so you can build pods of things, then a number of small boxes may help you keep your pod definitions and manage it all that way. If you're building large applications that have half a petabyte, a petabyte, or more of associated data that can't live without all of it, then buying one larger array and putting it all in there gives you a different way to manage it. So it really depends. Again, we're splitting our revenue about half and half, and it doesn't really matter how big the customer is, because we have customers who buy dozens or more of these frames every quarter — and luckily we have some who buy lots of 950s every quarter too. On the 250 we get a whole lot of customers who are buying one or two, but lately we've seen a lot of twos with active/active in a Metro-type configuration, just because of the availability you get out of that. For most companies, if they had two real data centers wiped out by some event, their whole company is gone anyway, and out-of-region replication doesn't make much sense to somebody who doesn't have any infrastructure outside their region. Being able to do active/active across a couple of facilities gives them a great option, and we can just drop these in, turn SRDF on, and go — it makes it really easy for them to have that kind of configuration.

What's the story from an operations perspective? I remember when I had VMAXes back in 2012-2013 there was quite a long provisioning time to stand one up — something like two weeks. Has that improved? We worked very hard as we moved into VMAX3 to simplify a lot of things, so we pre-configure the arrays: the drives are carved up — we'll talk about the drive layouts in a couple of slides — and it's all pre-configured and ready to go. When you turn on the box, you plug it in, you turn it on, you give the hosts access on the front-end ports, you start setting up LUNs and doing I/O. You can be doing I/O in 30 minutes or less. It's designed to be dramatically easier than the things we used to do. We used to have limits on the size you could make a device — in the gigabytes range — and you can now take a device that starts at megabytes and online-scale it to 64 terabytes or more if you want. Not that I think giant LUNs are really good, because you're going to have some queuing challenges, but we do have some very large customers who parcel out those 64-terabyte LUNs over and over again, because when you're managing 950s with multiple petabytes of capacity in them, managing small LUNs will drive you crazy. It all depends on the scale of your operations and what you're trying to get done.

Excuse me — isn't this why we automate things? Yes. The REST API and being able to automate all of that is good. Also, when we get to snaps, we'll talk about how we automate that, because one customer is running two million snaps in each array, on multiple arrays. If you actually had to actively manage two million snaps, you'd last about a week — you'd pull your hair out, go home, and check into the insane asylum. So we've figured out some ways to manage that. We've tried to take a lot of the complexity that used to be there and make it all more virtual, real-time, and easy to do: there's one big pool of storage, you create LUNs, they sit in the pool, and the array manages all of it — done. No micromanagement of all the various things that used to be such fun.
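To make the "REST instead of scripts" point concrete, here is a minimal sketch of driving provisioning through a REST call from Python. The hostname, array serial, credentials, and payload fields are illustrative assumptions that follow the general shape of the Unisphere for VMAX REST API; check the actual API documentation for the exact resource paths and parameters.

```python
# Illustrative sketch only: provisioning over REST rather than CLI scripts.
# The endpoint path and payload below are assumptions modeled on the general
# shape of the Unisphere for VMAX REST API, not a verified call.
import requests

UNISPHERE = "https://unisphere.example.com:8443"   # hypothetical Unisphere host
ARRAY_ID = "000197900123"                          # hypothetical array serial

def create_storage_group(session, name, num_vols, vol_size_gb):
    """Create a storage group with num_vols devices of vol_size_gb each."""
    url = f"{UNISPHERE}/univmax/restapi/sloprovisioning/symmetrix/{ARRAY_ID}/storagegroup"
    payload = {
        "storageGroupId": name,
        "sloBasedStorageGroupParam": [{
            "num_of_vols": num_vols,
            "volumeAttribute": {"volume_size": str(vol_size_gb), "capacityUnit": "GB"},
        }],
    }
    resp = session.post(url, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    s = requests.Session()
    s.auth = ("smc", "smc")      # placeholder credentials
    s.verify = False             # lab-only: skip TLS verification
    print(create_storage_group(s, "demo_sg", num_vols=4, vol_size_gb=100))
```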
Speaking of automation — are you looking at things like composable storage, the HPE-style composable-infrastructure story that people are talking a lot more about now? Do you make it easy, if someone wants to treat things as building blocks, to integrate VMAX with all the other components and mix and match the pieces? So the foundations within the array are locked: we don't dynamically reconfigure a lot of stuff, and we're not currently adding external applications into the array. We do offer the ability to embed NAS in the box, we do offer the ability to embed all the management tools, and we've done some other things.

Do you have, say, a Cinder driver or things like that? We've got the Cinder driver — you can pick that up — and we've got the REST API, so you can use us as part of that infrastructure. But we're not trying to change the nature of the internals; we're making it easier to manage externally. Yes, we understand that you need to be able to automate all of that: you need the REST API hooks, you need the Cinder drivers, you need those other things. We're trying to get all our customers away from scripts and into REST API management and those kinds of things. We've also got XML output if you're running the command line, so you can take XML output of all of that and feed it into automation as well, instead of screen-scraping fixed, delimited fields.

And what do you say to customers who need help — do you have services where they can work with your development team to rewrite all this scripting against the APIs? We've got a services team, and we've actually got a public EMC {code} group where we share a lot of the ideas — "if you want to do this kind of thing with your array, here's how you do it" — with examples of lots of different management pieces and reusable code: take it, have fun. As well as professional services, if you want us to take your specific scripts and work to migrate them into that kind of environment. Absolutely.

Looking at the slide you had before, you had a kind of differentiation on the 950F: if I want to plug my VMAX into a zSeries mainframe, I have to go for the 950F? Because looking at it, the ports are no different. Right — we do not have FICON support on the 250.

Okay, so in terms of the architecture: as we moved from the older VMAXes into the VMAX3 world, we decided that instead of having cores or threads dedicated to ports, we would pool our resources. Originally the CPUs were dedicated to ports because the hardware was laid out that way — you literally had ports plugged into the core, and that's the way it worked. At one point, on some of the boxes like the DMXs, you could take a razor blade and scratch across the middle of the boards between the logical directors and not hit any traces; they really were separate, isolated directors. Now that's all software, so we have multiple cores running front-end code, multiple cores running back-end code, and multiple cores running the infrastructure management and the EDS pieces — the Enginuity Data Services. We run all of that as separate pieces.
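A toy model of that pooled-emulation idea — this is not Dell EMC code, just a sketch of services as independently restartable worker pools sharing a director's cores:

```python
# Toy illustration of pooling cores behind per-service worker pools so one
# service can be recycled without touching the others. Purely conceptual;
# it does not reflect the actual HYPERMAX OS implementation.
from concurrent.futures import ThreadPoolExecutor

class Emulation:
    """One logical service (front end, back end, EDS) on a director."""
    def __init__(self, name, threads):
        self.name, self.threads = name, threads
        self.pool = ThreadPoolExecutor(max_workers=threads, thread_name_prefix=name)

    def submit(self, fn, *args):
        return self.pool.submit(fn, *args)

    def restart(self):
        # A fault in this emulation only recycles this pool.
        self.pool.shutdown(wait=True)
        self.pool = ThreadPoolExecutor(max_workers=self.threads,
                                       thread_name_prefix=self.name)

director = {
    "front_end": Emulation("FA", threads=6),
    "back_end": Emulation("DA", threads=6),
    "eds": Emulation("EDS", threads=4),
}
director["front_end"].restart()   # back_end and eds keep servicing work
```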
We do it that way in part for isolation. We don't physically isolate them on CPUs — we can move threads back and forth between these various services on the fly — but we do this because if there's a fault somewhere, for example if the entire block of code running the front end somehow went sideways, you can reboot that front end without touching any of the other pieces, back-end access or anything else. Same kind of thing on the other directors: if you lose a piece of the back end, I can fail those services over to a partner, reboot it, and bring it all back up, and the array doesn't take any issue with that. Being able to do this in compartments allows us to isolate the challenges that do come up — because we always endeavor to write perfect code, but we never succeed. There's always going to be something, and by doing isolation we can do a better job of managing faults and making sure we maintain the resilience we're looking for. This has also allowed us to scale up the front-end ports, scale up the back-end ports, easily manage where things are connected, and give customers flexibility in where this is going to go. And as you embed NAS on the front end, we can drop NAS in here, put some threads on some CPU cores for NAS, and start serving up the various NAS protocols as well — SCSI, FICON, whatever you want to do — each managed as a separate logical emulation.

So are the pools all balanced dynamically? They're currently balanced in a fairly static way; some of the threads will move a bit, but it's fairly static. As we do more and more code, we're making that more and more dynamic, so we're moving more threads in real time based on where the workloads are. That's one of those things that's somewhat tricky to manage: you want to make the array as effective as possible without doing any automation that might lead you down a crazy path.

So from the NAS perspective, it's running in the front end — I guess it's a virtualized process, your eNAS? Right. And the domain for that — the address space that's serving the NAS — I guess the failure domain for that is within that frame, within the two controllers of that VMAX? So the way we implement eNAS — if you know the Celerra code, or VNX NAS, it's the same kind of thing. We load the code in here; it has a logical data mover, which can have multiple virtual data movers, and they each have an address space. The way that particular code works, it doesn't share its data with any other data mover, so it lives on one director, but it has the ability to fail over to a partner on a different director. So we'd have, say, a three-plus-one configuration: three active logical data movers and a standby. In the event that something goes wrong here, or if I need to reboot this director, we simply pass all the I/O over to the other one; it picks up all the IP addresses, everything keeps running, and then we just fail it back. You do the same kind of thing for upgrades of the NAS code.
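Here is a small sketch of that three-active-plus-one-standby data-mover arrangement and the failover/failback flow. The names and structure are made up for illustration; it is a conceptual model, not the eNAS implementation.

```python
# Toy model of a 3-active + 1-standby data-mover layout with IP takeover.
class DataMover:
    def __init__(self, name, director, ips=()):
        self.name, self.director = name, director
        self.ips = list(ips)
        self.healthy = True

active = [DataMover(f"vdm{i}", director=f"dir-{i}", ips=[f"10.0.0.{i}"])
          for i in (1, 2, 3)]
standby = DataMover("vdm_standby", director="dir-4")

def fail_over(failed, spare):
    """The standby picks up the failed mover's IPs; clients keep running."""
    spare.ips.extend(failed.ips)
    failed.ips.clear()
    failed.healthy = False

def fail_back(original, spare):
    """After the reboot or code upgrade, hand the addresses back."""
    original.ips, spare.ips = spare.ips, []
    original.healthy = True

fail_over(active[0], standby)   # e.g. during a director reboot or NAS upgrade
fail_back(active[0], standby)
print(active[0].ips, standby.ips)   # ['10.0.0.1'] []
```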
Are you looking at making the NAS scale out? From a NAS perspective, if I keep adding frames — like your customer who buys seventy-five a quarter — can I just add on and have the NAS scale out that way? So we're looking at file system options. We had ideas of where we were going to go, and then we became part of Dell, and that has changed a few things. As part of Dell, we're looking at what we do for file systems across all of this. We would love to have a file system that does scale out, so that not only could we lay it out across multiple directors in the array, but with Metro we could lay it out across multiple arrays with a single namespace. You have a platform that does that a little bit already, right? Yes, we do. Unfortunately, the way that file system is designed, it's not easy to flip it onto a new block back end: they already have all the RAID, they have everything else, and it's hooked deep into that code. So while Isilon has options for that, there are other things we can do. We have a strategy for where we're going to go with it; it's just going to take a little while to make that happen.

Since you're talking about scale-out and so on — you did the Elastifile thing, right, a partnership with Elastifile — is that going to go into the VMAX? Again, we're looking at where our next file system for VMAX will be. We do have some customers who buy VMAX systems and set up different file systems on servers — OCFS or whatever file system they choose — and use the fact that the LUNs are active/active and scale it across, even with Metro across data centers. There are some folks doing that now. There are multiple file system options; our eNAS is one option that we embed in the array, but that doesn't mean it has to be your choice — it's not the perfect solution for everyone, and that's part of why we have standard interfaces and you can go do all those other kinds of things. eNAS is an extra-cost option on the F and it's included in the FX — you just have to pick the right front-end cards when you're buying the box. The FX includes pretty much every possible option you could ever want; where the F is à la carte, the FX includes eNAS, it includes everything. And eNAS penetration is about 20% of our open-systems frames. That's a little higher than I thought. It's higher than I originally thought too, but especially as we're doing a lot of the smaller boxes, a lot of these customers say, "Well, I only have 100 terabytes; if I put eNAS in there I don't have to buy anything else." One box to manage — the whole unified story is there — and especially as you go down market that becomes much more valuable. And then you end up with the customers who point at the array and go, "That's our SAN."

We also have some customers — for example, I had a doctors' group that had an Oracle database managing everything that was coming in, but all of the dictation was done into files that sat on a file system. When they replicated that, they needed to be absolutely sure that the database and the file system were in sync: you cannot ever fail over and have entries in the database that aren't in the file system, or dictations in the file system that the database can't find. You need to lose the exact same data at all times. If you're doing sync replication you'll get that anyway, sure — that's not how they worded it, but yes — and with async replication, because you're doing both in a single unified box, you get a consistent result at the target.
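That last point — the database LUNs and the file system landing at a single common point in time because they replicate from one box — can be sketched as a grouped-cycle model. This is only an illustration of the consistency idea, with invented names; it is not how SRDF/A is actually implemented.

```python
# Conceptual sketch: writes to the DB LUN and the NAS file system share the
# same replication cycles, so the target only ever sees whole cycles and the
# two stay mutually consistent. Invented model, not the real SRDF/A design.
from collections import defaultdict

class AsyncReplicator:
    def __init__(self):
        self.cycle = 0
        self.captured = defaultdict(list)   # cycle number -> [(device, payload)]
        self.applied = -1                    # last cycle applied at the target

    def write(self, device, payload):
        self.captured[self.cycle].append((device, payload))

    def switch_cycle(self):
        self.cycle += 1

    def apply_next_cycle_on_target(self):
        # Either both the DB row and its dictation file arrive, or neither does.
        self.applied += 1
        return self.captured.pop(self.applied, [])

r = AsyncReplicator()
r.write("oracle_lun_12", "INSERT dictation row for patient 4711")
r.write("enas_fs:/dictations/4711.wav", b"...audio...")
r.switch_cycle()
print(r.apply_next_cycle_on_target())   # both writes arrive as one unit
```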
In terms of scale, we start off with a single-brick system: you've got the pair of directors, and all the cache and everything is set up across the memory in those two. As you expand out, we can online-add fabric and add a second brick with a second set of directors, and all the metadata just extends out to the cache on those, and you can keep going all the way out to eight bricks on the 950, with all of the metadata spread across everything. So you have the option to start small and grow, and we don't lock down where things live. Now, as you do this, the new brick will have new capacity, and once you turn the new capacity on, the array says, "I'm going to go pick up a chunk of the data and try to equalize the utilization of all the drives so they all become equally full." That's the same whether you're spreading onto additional drives within a given brick or adding additional bricks — the functionality works the same; the data just gets re-spread across the bottom.

When you say you're spreading the metadata, are you replicating it more than once? There are a few key pieces of metadata that we keep three copies of, and those are distributed across cache; other than that we keep two copies of pretty much everything. And all of the host data is mirrored in cache at all times — well, the writes are mirrored; the reads have a mirror slot available, so if you write to them you don't have to go find a new slot. So a write I/O that comes in generates a replica across two nodes? Yes — and I've actually got an I/O diagram coming up in a couple of slides to talk through exactly how we do that. And the back-end media is attached to bricks? Back-end media is attached to each brick, with redundant connections from the two directors going down to the media, but you have data spread across the box. Brick failures are not a problem that's actually happening, though? Well, they're not frequent. You're in the "not very frequent" business. They're there. We won't claim that any product in the world with more than a very limited number of installs has no outages; there are issues that happen. The trick is how you make sure you limit the number, limit the duration, and make it data unavailability and not data loss. Those are the tricks. So you might want to think about the scope, too, and have a plan. Certainly — and we'll talk about some of the other things we do with SRDF to help with that, because you could lose an entire set of components here on the drive side and read the data remotely over SRDF while you get those DAEs back online. Lots of options for how you want to manage availability.

Just to confirm, for someone who's newer to VMAX: you've got a brick consisting of a pair of directors; you can add disks onto the back of that if you want to scale up, or if you want to scale out you can add another brick. Yes — it scales drives behind the bricks, and then bricks across.
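The "equalize the drives" behavior described above can be illustrated with a simple greedy rebalance loop. The drive names, capacities, and extent size below are made up, and this is not the array's actual algorithm — just the idea of re-spreading data until utilization is level:

```python
# Toy rebalance: after adding empty drives, move extents from the fullest
# drive to the emptiest until everything is roughly equally full.
def rebalance(used_tb, extent_tb=0.1):
    moves = []
    while True:
        fullest = max(used_tb, key=used_tb.get)
        emptiest = min(used_tb, key=used_tb.get)
        if used_tb[fullest] - used_tb[emptiest] <= extent_tb:
            return moves
        used_tb[fullest] -= extent_tb
        used_tb[emptiest] += extent_tb
        moves.append((fullest, emptiest, extent_tb))

# Existing brick's drives are partly full; the new brick's drives arrive empty.
drives = {f"brick1-d{i}": 6.0 for i in range(8)}
drives.update({f"brick2-d{i}": 0.0 for i in range(8)})
moves = rebalance(drives)
avg = sum(drives.values()) / len(drives)
print(f"{len(moves)} extent moves; every drive now holds about {avg:.1f} TB")
```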
In terms of performance scaling — we had some folks from Gartner (Mr. Faucitt made the joke) who said, "Show us something about your scale." So we said, okay, we'll take a baseline, we'll draw a green line that says this is 2x, this is 4x, this is 6x, this is 8x, and then we'll plot the performance for read misses — because who really cares about all the hits? If the two-brick system is the 2, then where do we land as we scale? Pretty much the four-brick is sitting right around the 4, and the eight-brick is sitting right around the 8, both for 8K back-end I/Os and for 128K large-block back-end I/Os. The scaling is fairly linear: you can just add brick after brick after brick and keep going. Again, we've been doing this for quite a while. That's not to say that a single director on its own couldn't do some things faster than what any one director effectively gets in this configuration — there is some cost in the overhead of having to talk between the many devices — but if you only have one or two directors, you can't scale like this. So we can handle much larger workloads because of the fact that we can scale.
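That "green line" comparison amounts to a scaling-efficiency calculation. The per-brick read-miss numbers below are placeholders rather than published VMAX results; the snippet just shows how "the four-brick sits right around 4x" would be checked against a baseline:

```python
# Scaling efficiency = measured throughput / (bricks x single-brick baseline).
# The figures below are made-up placeholders; substitute real read-miss
# benchmark results to evaluate how close to linear the scaling really is.
baseline_one_brick = 100_000                       # read-miss IOPS, placeholder
measured = {2: 198_000, 4: 392_000, 8: 780_000}    # placeholder measurements

for bricks, iops in measured.items():
    efficiency = iops / (bricks * baseline_one_brick)
    print(f"{bricks} bricks: {iops:,} read-miss IOPS -> {efficiency:.0%} of linear")
```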
What are some of the use cases that are not a good fit for VMAX All Flash? Well, currently, if you walked in and said, "I've got a ton of VDI, it dedupes really well, it compresses really well, and that's all I want to do with this box — it's just my virtual infrastructure," I would say you should probably look at XtremIO, because it can probably give you a more cost-effective solution for that problem. It really depends: we've got XtremIO, we've got Isilon, we've got Unity, we've got SC — we've got a plethora of products, because our customers have a wide variety of challenges they want to solve. What about apps that are mission-critical or business-critical — are there particular types where, when you hear the use case, you know this is not a fit? VMAX originally did better with small block, and we didn't have the big pipes to do all the bandwidth. When we built VMAX3 we really ripped open the pipes and made sure we could do large-block I/O, so if you go look at the SPC-2 numbers we put out, they're monstrous — we're sustaining 80 GB/s of throughput. The thing's a monster.

What are some of the things in your current architecture that you're trying to improve — pain points or risks that engineering is trying to fix? Adnan is going to talk about where we see things going with NVMe, so you can have some of that discussion there. We don't discuss a lot of NDA material in a public forum like this, so there are limits to what I can talk about, but there are changes coming in the technology that get you access to more CPU cores, more cache, and lower-latency media like storage class memory. All of those kinds of things are going on, and we're looking very carefully at the architecture to make sure that where we're going is well suited to all the new technologies that are becoming available, so that we can take best advantage of them.

To that point, let me ask a question: as NVMe has come out, latency is obviously king in that world, and we've talked to a few vendors over the last couple of years who have started to measure, and talk about, their software data path — how much latency they're adding from the media up to presentation, typically in microseconds or nanoseconds. Is that helpful — do you measure that, and do you talk about it? We do things to measure it; we don't really talk about it. But especially as you start talking about things like NVMe and storage class memory, talking in milliseconds is pretty meaningless — you're into microsecond or nanosecond discussions. Adnan is going to cover a good bit of that when we get to the NVMe material at the end of the deck. And you're working with Intel, with the SPDK folks, right? We are working with Intel and several other vendors on storage class memory, to be able to augment the storage in the array so you can have that very low-latency tier alongside everything else. Part of the challenge we're having fun working through is this: if you start buying drives that have latency in the 10, 20, 30 microsecond range, what does that mean when you start doing things like compression, where the work to compress a track costs on the order of 50 microseconds? Everything has a cost. So do I compress the data that I put on that very expensive media? Currently the forecast has been 5 to 10x the price of NAND. Well, if you're buying a VMAX, I guess you're not as sensitive to cost as some of the other guys. You're certainly not as sensitive to absolute cost, sure, but the relative cost still matters — it's still relatively expensive, so at current prices you'd have to get a lot of value out of it. If you're on a trading floor, that might easily be worth it; if you're running a website where your average customer is multiple milliseconds away, plus or minus half a millisecond on an I/O may not mean anything to your business. So it comes down to when that cost gets closer. There have also been some interesting developments over the last couple of weeks, where some of the early storage class memory drives are coming out and the pricing is coming in closer than 5x — it may be more like 3x — which may mean a faster switch of customers onto that technology. So again, we're looking at how we're going to position that, and I'll talk more about the NVMe side later.

Just quickly, on that slide: do you have any customers asking for more than eight bricks — is that a requirement you're trying to meet in the future? We've had customers ask for it, but the larger you scale, the more complexity you have. As you scale up from 64,000 LUNs — we now have support in the software for a million LUNs, but that goes over 16 bits; you're now at 24-bit addressing for LUNs. And all of our customers have lots of infrastructure that knows LUNs have a 16-bit address — it's baked in, and they rely on it. We've had that capability since we came out with VMAX3 over three years ago, but we aren't using it yet for host-facing front-end devices, because we're still waiting for our customers to move everything over to newer software that understands it's not all 16 bits. The last thing we want is for somebody to do I/O to two different LUNs thinking they're the same because they rounded off the upper bits — that would be bad. But in special cases like VVols, where you're using subordinate devices, do you support more than 64K? We're at 64K today — still 64K for VVols.
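The 16-bit versus 24-bit point is just powers of two; here is a quick check against the 64K and million-LUN figures quoted above (the truncation example at the end uses made-up device numbers):

```python
# 16-bit vs 24-bit LUN addressing, matching the 64K and ~1M figures above.
devices_16_bit = 2 ** 16     # 65,536 -> the classic 64K device limit
devices_24_bit = 2 ** 24     # 16,777,216 addressable; ~1M enabled in software
print(f"16-bit addressing: {devices_16_bit:,} devices")
print(f"24-bit addressing: {devices_24_bit:,} addressable device numbers")

# Tooling that silently truncates to the lower 16 bits would treat two
# different devices as the same one -- the "rounded off the upper bits" risk.
a, b = 0x012345, 0x002345
print((a & 0xFFFF) == (b & 0xFFFF), a != b)   # True True: distinct LUNs collide
```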
But the architecture supports more than that: if you look at the LUN numbers for some of our internal devices, those are up in the millions. So we support lots of LUNs; we just haven't yet opened that up on the front end, again because of a lot of the management tools our customers have that assume 16 bits is how it works, and we're still working through a few things. I would still argue that you open it up for VVols via subordinate LUNs and avoid that problem. You can avoid that problem — because 64,000 VMDKs on a six-petabyte device is going to be easy to hit. Yeah — and again, that's a software tweak; we change it in software. Currently our VVol implementation count is relatively low, and the people who are most likely to be running VVols are also the people who would like the new, shiny architecture rather than the old, stable one. So we're having the discussion — it's a matter of where you put your engineering effort and your qualification and all of that. It's there, we can do it, it's on the roadmap; exactly where on the roadmap depends on which customers yell for it.

So the answer at the moment is that eight is where you're staying? Right — we don't plan to increase above eight in this area of operation for a couple of generations. Also, the more pieces you have, the more risk there is in service and everything else: if I've got 48 bricks in there and I go and muck one up, I've now mucked up a 48-brick system. We don't have a lot of customers saying, "Gee, I really wish I could put 25 petabytes in one frame" — again, the blast radius on that just gets really big. In general, eight has been enough to meet what most customers are comfortable buying; they aren't saying they'd really like more. We've had the engineering discussions; we're just not doing it yet. And eight bricks fits into how many racks? Currently that's a four-rack 950; we're working on ways to squeeze that footprint down. Will you discuss federation between boxes? Because federation between boxes, in some respects, eliminates some of that need to build one enormous box — the non-disruptive mobility, being able to pick things up, as well as SRDF being able to coordinate across arrays. Yes, absolutely, we'll talk about that — it's part of the key value proposition.

All right, in terms of the hardware, I'm not going to spend a lot of time drilling into the hardware pieces. I brought every hardware slide — I can show you how the power is connected and all those things if you really want it, because I know this audience gets rather technical and sometimes you want that stuff. If you want all that, I can do it, but in the interest of time we thought we'd start with this. In general, the system can have multiple bricks, and each brick is a dual-director engine. This is how the engines look from the back-end cable side; the front side is not very interesting because it's just fans — you pull those out of the way to get to the directors. Your CPU, your cache, and all the host access ports are there, as well as the connections to the drives. Then we have DAEs that hold the actual media, and then we have standby power supplies, so in the event that we lose one leg of power we keep right on running: we keep caching data, we keep doing all those things.
The cache in the array is just DRAM; it is not a non-volatile store. If it were non-volatile we would have to encrypt it, because of data-at-rest encryption. It's just standard DIMMs, but we have batteries here that power the box, so if we lose both A and B power, the batteries hold the directors up and we vault all the memory onto NVMe SLICs that sit in the back of the director. We don't keep the drive DAEs powered — we just keep the directors powered — but they can vault onto non-volatile storage, and then we can shut down cleanly. That way we're also not dealing with battery-endurance problems; you may remember the power outage in the Northeast a few years ago, where many data centers were out of power for up to 72 hours or so. And the battery life — how long can it sustain a failure? If you lose the A and B power feeds, we stop front-end I/O, we destage to the vault, and we shut down in 30 seconds. The batteries can do five minutes, but we don't keep the array alive — they're just batteries. They're real lithium-ion, and you can't ship them by air fully charged, so our customers have a choice: we can ship them by air, in which case they have to be at 30% charge and they'll charge up in your data center, or we can ship them by ground fully charged. The FAA will not allow us to ship them fully charged by air — these things are big. The truck blows up instead, right? Well, that's why you keep them locally stocked — you run the batteries at 30 percent and you charge them when they get there.

So basically, instead of doing the power-fail-detect dump to flash in the DIMM, you've centralized it. Right — and it allows us to make sure we can manage things and make it resilient: within the box we can make multiple copies striped across the internal flash, so that if I lose a director and a flash module while I'm coming back up, I still have copies of all that wonderful metadata. Because in any modern array, if the metadata is gone, the data's gone. So we have to be very paranoid about how we back all that up. Flash made all of this so much easier. Oh, absolutely — arrays directly supporting battery-backed DRAM was such a disaster. (Did you ask the question from the Twitter feed? Not yet — I thought it might be coming.) The only challenge, of course, is the bandwidth you need to dump all that memory onto those SLICs in time to let the batteries shut down. Right — which is why almost everybody else does it with NVDIMMs, because then you distribute that bandwidth. But when you do it with NVDIMMs, you now have to encrypt all the data going to them in order to have proper data-at-rest-encryption support, because your data persists on that DIMM. I do not believe the vast majority of other people do that, but that's a whole other story. Yeah, that's a separate D@RE discussion — but that's why we do it the way we do: we use the same encryption module to encrypt the data going to these vault devices as we use going to the other drives, so it's all consistent — one key per drive and all that. And it encrypts on power fail? Yes, even the power-vault data is all encrypted. But we run with the cache unencrypted, because we didn't want that overhead.
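The vault-bandwidth concern is simple arithmetic. Only the roughly 30-second destage window comes from the discussion above; the cache size and per-SLIC write rate below are assumed example values, so treat this as an illustration of the sizing question rather than real VMAX numbers:

```python
# How much aggregate write bandwidth a ~30-second vault window implies.
# ASSUMED example values -- only the 30-second window comes from the talk.
cache_per_director_gb = 256     # assumed cache image a director must destage
vault_window_s = 30             # from the talk: stop I/O, destage, shut down
slic_write_gb_s = 3.0           # assumed sustained write rate per vault SLIC

required_gb_s = cache_per_director_gb / vault_window_s
slics_needed = -(-required_gb_s // slic_write_gb_s)    # ceiling division
print(f"~{required_gb_s:.1f} GB/s sustained, i.e. at least {int(slics_needed)} "
      f"vault SLICs at {slic_write_gb_s} GB/s each")
```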
Yeah, that memory bandwidth gets expensive to encrypt in real time — you'd either need a lot of CPU or you'd be putting a lot of extra hardware in, and we didn't want to do either one. It is problematic. And, as we discussed, when you go to multiple bricks we have a virtual matrix. On the 250 we actually cheat: we just connect the matrix back to the other director in the other brick, because there are only two bricks, so I can cross-connect there and use the PCIe cross-memory interconnect between the directors in the same engine — that's all I need. But since the 950 goes bigger, we've got 18-port InfiniBand switches that scale the matrix out, plus Ethernet to allow everything to talk to each other nicely. Again, on the 250 we just cross-connect a bunch of Ethernet wires in fun ways between the little management modules you can see here, and in the larger box we have Ethernet switches that cross-connect everything.

So if you have several 250s, do they each act as a single array, or can you interconnect them with each other? The 250 with two bricks acts as a single array. If you put four bricks in a frame, they're two separate arrays; they don't even know about each other. They're completely isolated, because you can't make all those connections work without the virtual matrix — at which point you're buying the 950. So what is the smallest configuration I can buy? On the 250 you can buy a single brick with 13 terabytes of usable flash — that's a pretty small box — and then you can scale that up to a petabyte with the dual bricks. On the 950, on the open-systems side, we do a minimum of 50 terabytes per brick, and part of the reason is that if you look at the cost of the brick versus the cost of the flash, it's not really worth buying 10-terabyte bricks. How about an upgrade process from a 250 to a 950? We have a very nice forklift operation — we do non-disruptive migration, and we'll talk about that, so we can do it without taking the data offline, and then we take the old box away. So it's a forklift — well, it's a hand truck; the things have gotten smaller. A special hand truck.

All right, drilling a little more into the engine: like I said, on the front side all you see is fans, but if you pull the fan block off you can get to the director. The directors are serviced out the front of the array — I don't know if you all pulled them out out there — but if you slide a director out, you don't change any of the cables and you don't mess with any of the SLICs on the back end. The nice thing is, if you have to replace a director, you power it off, pull it out, drop the new one in, plug it back in, put the fans back on, and you're done — there are no cables to remove to swap a director. And if you need to replace a SLIC, you replace individual SLICs; there's a midplane in between that makes all of that fairly manageable. On power, as you can see here, there are two power supplies — actually four power supplies, two for each director — so you have A and B power going separately to the two power supplies that power each director. There's no shared power between the directors; they each get A and B power feeds. There's no power in the backplane — these are actually very simple pass-through cables — so the power plugs stay on the back and you never have any cables on the front of the system.
But it gives us a way to make sure that if one power supply on A fails, one power supply on B fails, and something else fails besides, none of that will take out both directors. We're rather paranoid about how we keep the directors up. Then on the back end, again, you've got a management module, you've got the vault flash, you've got the front-end host connections, and you've got the InfiniBand interconnect matrix between the bricks. To go back to an earlier question really quickly: since you can't upgrade a 250 to a 950 without a forklift, what's the smallest 950 configuration you can do? A single brick with 50 terabytes — five zero. So that's the basics. We've got a bunch of PCIe slots here that we stick things in, and that's where our vault flash lives; we stick in more vault depending on how much cache you put in the directors.

On the hardware specs for the individual components, the real differences are the number of cores and the clock speed — and there's not a lot of speed difference. Our Intel chips are kept cool — we have strong fans running on them — so they all run in Turbo mode, which means you'll get an effective CPU rate higher than what's shown here, unless you're doing swamp cooling and running your data center at 90 degrees, at which point they will slow down a bit. So if you're doing swamp cooling, don't do it in Arizona — it gets too hot. Actually, it works really well in Arizona, because it's very cool at night; the problem is it's 110 degrees during the day. But evaporative cooling still works really well there, because it's ten percent humidity — it will evaporate, that's for sure.

That's interesting — the 250 is denser on ports. Yes: currently the 250 does 32 front-end ports, and there are some configuration reasons internal to the brick why we do 24 on the 950. Like needing to use some of the PCIe slots for the InfiniBand? Well, the 250 has the same InfiniBand in it. It's actually that we have certain configurations on the 950 where we use four flash SLICs in it, so we don't have the room; this guy only uses three at most. And that's just a Dell EMC card? Yeah, it's just the PCIe card we put in the slot — the interface card. Ah, you adopted the language. Yes — the interface cards that we drop into each of those slots. So because we have the ability to put four of the NVMe SLICs in the chassis, we can't guarantee room for four front-end cards, so you get three there. As we do new generations, we may adjust that.
Info
Channel: Tech Field Day
Views: 1,887
Rating: 4.5 out of 5
Keywords: Tech Field Day, TFD, Storage Field Day, SFD, Storage Field Day 14, SFD14, Dell EMC, Vince Westin, VMAX
Id: 3ZJ3ZePKbyk
Length: 38min 39sec (2319 seconds)
Published: Fri Nov 10 2017