Dell Technologies PowerStore Deep Dive with Chief Architect Dan Cummins

Captions
We're going to jump right into the hardware overview, break down the sheet metal and the array itself, and get into the first slide here, what I like to call the ice cream truck: all the different flavors of PowerStore. We have two deployment methodologies within PowerStore, PowerStore T and PowerStore X. We'll go through the differentiators as we progress through the session, but just know that both the T and the X use the same hardware: the exact same storage array, the exact same controllers, and the exact same hardware specifications. You'll see we start out at the 1000 and can scale that up to the 9000.

Hey Jody, on the NVMe SCM drives — you've got NVMe SCM and NVMe flash. Are there limitations on either of those? You mentioned 384 drives here.

Yes, so the NVMe SCM devices are populated in the base enclosure only, and I have a slide that talks about that, so we'll show you the population methodologies and capabilities of the individual drive types.

Before continuing — I don't know if you're catching this later — you talked about a confusing portfolio, which we all agree on, and we're all happy you're harmonizing the portfolio with this new product. But what is the migration path from all the existing platforms to this new one? You've talked with all your customers, and they're all EMC customers, of course, with Unity and the SC...

Enrico, I hear you. We do have a slide, and we're going to talk about that a little later in the presentation, so if you don't mind, let that flow and we'll definitely answer the question. Yes, we'll talk directly about it, and Enrico, I'll also show you in the interface how simple it is for pre-existing customers to bring data in.

I do have some color to add on the hardware here. You can see from a PowerStore 1000 with 32 cores all the way up to a PowerStore 9000 — that's for our PowerStore T model. If you look at the PowerStore X model, where we have an embedded ESXi hypervisor and can take on customer applications running alongside our storage function, roughly half of the resources are available to serve the storage array functions and the other half are dedicated directly to user VMs. We're going to talk about the PowerStore X implementation later, but what I wanted to say is that the choice to use 50 percent of the resources for the storage function and 50 percent for user applications was a design choice we made. We need to see how our customers are actually going to leverage the number of cores they need for their applications, so in our first release this split is fixed. It's not an indication of how heavy our storage VM is; it's simply how many cores we're dedicating to serving storage while hosting applications at the same time. Along the roadmap we'll begin to give customers the choice to flex that division: if they're more storage-heavy, with external serving requirements, they can assign more cores to that,
or if they have more user VMs and are less storage-intensive in a small footprint, they can add more cores for user VMs. Go ahead, Enrico.

Okay, thank you. You're talking about 50 percent of resources free for applications, but what happens when a failure occurs? If I lose one of the two controllers, I lose 50 percent of that 50 percent, meaning I have only 25 percent. Is that enough to run my workloads at full throughput?

It's a symmetric active-active architecture, so if you're going to exceed the capability of both controllers — more than 50 percent of the resources in the array — and you don't have another appliance you can migrate that VM to, then yes, you're going to exceed the capability of the array. Within our sizer, when we're working with a customer to deploy a PowerStore X implementation, we take all of their requirements — virtual CPU, virtual memory — plug those into the sizer, and do planning to understand where the customer needs to be, where they're going, and how much headroom they need. We work with the customer to make those determinations through our sizing tools before we do an implementation.

My question is, in the X series, where you have an ESXi virtual machine and all that, are you still servicing Fibre Channel, iSCSI, and those sorts of I/Os outside of the virtual environment as well? Absolutely. Matter of fact, why don't we jump to that slide.

Let me introduce something that others keep asking about, because I want to clear the fear, uncertainty, and doubt around the implementation. We do have a unique implementation — we've filed some patents on it and are waiting for them to be issued. Jump down to my slide with the guest VM architecture. Next slide — this one right here. What you're looking at on the right-hand side is a single controller; there are actually two controllers in the storage array. There are a couple of challenges when you're implementing a virtualized storage platform that can serve external storage functions as well as serve I/O to locally running guests, provide high availability, and on top of that deliver all the enterprise-level features. How do I drive low latency to sub-20-microsecond media? How do I handle PCI Express faults from NVMe devices? Customers want hot plug for these devices without having to reboot a controller. In other architectures implemented this way, where the storage function runs as a guest alongside user guests, the I/O flow for the storage guest has to go through the ESXi I/O path, which adds another layer of latency, and you're also relying on all the hardware management capabilities of that hypervisor. So what we did — I don't know if you're familiar with Intel's Volume Management Device, but this was an innovation our Dell server team worked on with Intel years ago. The Intel Volume Management Device is a device that allows you to constrain
or catch PCI Express errors so you can handle them out of band. NVMe devices attach directly to the CPU, and their errors aren't buffered by a controller; if those PCI Express errors traverse to the CPU, you can get a machine check, and nobody wants that. Look at a SAS architecture: that's why there's a SAS controller there. You have your SAS devices behind it, and the SAS controller catches errors — say a problem with the interface, or reseating devices — before they reach the CPU. With NVMe, that protection doesn't exist outside of VMD. Also, to drive low latency for storage class memory or NVRAM, we wanted an I/O stack that can deliver that latency, and by traversing the ESXi I/O stack we couldn't achieve it. ESXi does have a capability called VMDirectPath I/O, where you can pass through a number of PCI Express devices; as long as those devices support a certain type of reset, you can pass them through to a guest, and the guest uses its local drivers. However, that's limited to about 16 PCI Express devices, and PowerStore has 25 NVMe drive slots in the front. We also want the ability, Ray, like you were asking about, to serve both Fibre Channel and iSCSI, block and file protocols, externally while at the same time letting user guests access that same storage — so there are other devices we need direct control over. We worked very closely with Intel and VMware to leverage Intel's Volume Management Device, and when we first tried to use it, it didn't work in a hypervisor environment; there needed to be specific support for it. We co-developed a solution between Intel, VMware, and the PowerStore team, and we delivered that capability in ESXi 6.7 Update 2. What it provides is PCI pass-through for a large number of devices — larger than 16. Basically, we create a guest-managed infrastructure domain where we hide all of these devices from the hypervisor; since they're inside this domain, we just pass through that one VMD function to our guest, and behind it our guest drivers address all of the devices. We also support plug and play: in some competing architectures that implement something similar, you have to reboot the node when you insert a new device. We don't — we get native plug and play, which is great value for serviceability. With direct I/O we can drive our low latency; the one who can drive the utilization of this media is the one who's going to win. We get bare-metal performance on a per-core basis out of our storage guest: 10 cores of PowerStore X performance is equivalent to the same 10 cores of performance on a PowerStore T. And PCI fault containment, which I already mentioned, is another advantage.
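The guest-managed domain idea is easiest to picture from inside the guest. Here is a minimal sketch, assuming a Linux guest where the passed-through VMD function exposes its NVMe endpoints in an additional PCI domain; the domain numbering and sysfs layout shown are generic Linux behavior, not anything PowerStore-specific, and the script simply groups PCI devices by domain so the VMD-owned NVMe endpoints stand apart from domain 0000.

```python
#!/usr/bin/env python3
"""Sketch: group PCI devices by domain on a Linux guest.

Assumption (not from the talk): the Linux vmd driver typically exposes the
NVMe endpoints behind a passed-through VMD function in an extra PCI domain
(e.g. 10000:), separate from domain 0000:.
"""
from collections import defaultdict
from pathlib import Path

NVME_CLASS_PREFIX = "0x0108"  # PCI class: mass storage, NVM subclass

def devices_by_domain(sysfs_root: str = "/sys/bus/pci/devices"):
    domains = defaultdict(list)
    for dev in Path(sysfs_root).iterdir():
        domain = dev.name.split(":")[0]            # "0000", or e.g. "10000" behind VMD
        pci_class = (dev / "class").read_text().strip()
        domains[domain].append((dev.name, pci_class))
    return domains

if __name__ == "__main__":
    for domain, devs in sorted(devices_by_domain().items()):
        nvme = [name for name, cls in devs if cls.startswith(NVME_CLASS_PREFIX)]
        print(f"PCI domain {domain}: {len(devs)} devices, {len(nvme)} NVMe endpoints")
```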
The other thing we've been working on with VMware and Intel is —

Dan, can we break for a second? That was a lot of information, a bit non-stop. Can we take it one layer higher and get some definitions? One, why is this a storage array specifically and not a converged system, because it is a converged system by definition. Two, can you talk, at a high level, about which Intel family of processors you're using? And the other thing that's a little confusing is resource management: from a hypervisor perspective, in these converged-type systems we're used to ESXi being the core hypervisor, the king of the hardware, and that doesn't seem to be the case here. So three things: why this product category, which Intel processors in the X series specifically, and what the underlying controller is for resource contention. Add one to that: predicted use cases — where do you see this fitting? I think that adds onto Keith's question.

Okay, let's start with that last question first, because it's the easiest to answer. If you look at the mid-market, our customers are very cost sensitive, and we're looking at both traditional use cases and emerging use cases where we can expand. We're getting a lot of feedback from our customers, and we wanted to expand our market use cases. You want to do more with less, and you want to do it easily, so it's really about consolidation. If a customer has a few applications, whether they're domain controllers or what have you, they don't want to manage extra hardware infrastructure, but they also need to serve traditional mid-range external storage functions. This provides a nice option: a small footprint that doesn't require multiple storage arrays. From a consolidation perspective, that's really nice.

Let me give a couple of examples, and Keith, to answer your question on processors: these are Intel Xeons, and they're consistent between the T and the X — it's a common architecture; it's just how we divvy up the cores between the T and the X models. Xeon is a very broad term — specifically, what generation? The current generation, Skylake. Okay.

As we look at the deployment use cases and methodologies Dan's talking about — I've flipped to a different slide on the screen — for edge-based deployments there's a lot of interest, especially at edge or remote-office locations, in deploying this and running localized applications. This is very different from hyperconverged. Keith, to your point, you've got somewhat of a converged system here: you're running applications, storage, and compute in a localized appliance. But it's completely different from a hyperconverged model, where everything scales linearly — CPU, memory, and storage all scale in a linear fashion. With PowerStore X we have the ability to independently scale the storage aspect and simultaneously run a subset of static applications — and I say static applications because we're limited to those
two controllers and the CPU resources of those two controllers.

Why didn't you start with Kubernetes, and maybe serverless kinds of functionality — containers? You have full control of the infrastructure, especially on the file services. If you think about surveillance, every time a new file comes in you could do some compute work on top of it without moving it around, and there are a lot of applications now that use this kind of methodology.

So serverless, meaning horizontally distributed, scale-out data planes? Well, I'm thinking about notifications, messages, and then running something directly on the controller — removing the hypervisor.

There are challenges with that. We do have a container-based operating environment, and this is something we looked at very early on. There was this movement to commoditize virtual machines with containers, and that's still not solved today for security and isolation reasons: containers share the kernel, so if I were to allow a customer's container to run alongside my storage container, my customer's data is at risk. The only way to truly solve that is with just enough virtualization, and we haven't seen the market solve it today. Typically, if a customer is going to run their Kubernetes environment in a multi-tenant way, or to protect themselves and the storage and security functions, it's going to be wrapped in some level of virtualization.

That's actually where I was going, based on Keith's question. I've got three extremely large customers I've met with over the past several weeks that are looking at this for edge-based retail locations. One of them has several hundred locations globally, and they're looking at this because they want an application development model for their retail applications that's based on containers — being able to deploy containers at those edge locations. For example, at a checkout kiosk, running a container-based application that does security scanning or monitoring, maintains the live camera or video feed, or runs the database that handles the local inventory schema — in an environment that lets them integrate in the future with vSphere 7 and all the container integration coming there. This is a very attractive offering. To go back to Keith's earlier point about the converged stack: inside a single appliance I have a single package that updates everything. My ESXi is getting updated along with my PowerStore operating system, so the storage environment, the underlying hypervisor, and the associated firmware are all updated out of a single package and a single deployment methodology — while simultaneously having independently scalable storage in a single 2U form factor. At four-to-one data reduction, we're hitting just shy of one petabyte in the current configuration option.

Hey Jody, I have a question, since you mentioned vSphere 7.
This sounds like you're following the VxRail model — that's the best way to put it — where this containerized capability depends on integrations tightly built with VMware. So which version of vSphere are you currently offering with this? And my second, related question: it seems like Bitfusion would be a fantastic application to run here, especially when you think about centralizing GPUs so the GPUs are available. Are you working with Bitfusion?

Let me answer the first question on versioning — I put up this slide to give you an idea. The PowerStore OS for PowerStore X is a pre-packaged bundle that comes with the validated version of ESXi we're supporting, which is 6.7 Update 2, as Dan mentioned earlier. That's what we're rolling out in version one. We'll have a service pack — think of it as PowerStore service pack two, for example; the nomenclature may differ — that will then encompass 6.7 Update 3, and then we'll align a future release that will incorporate ESXi version 7. The way that operates from a customer perspective is that the customer does not independently deploy or manage that ESXi layer; it's part of the single, consolidated, validated package that comes from Dell EMC storage. It's a single package that downloads into PowerStore X; you deploy it, and it updates the underlying hypervisor running on bare metal in the storage controllers and then the entire stack up from there. That's the version we're shipping with today and how that methodology works going forward.

Jody, does that mean customers can't update on their own? If they wanted to go to 7 so they could get Bitfusion, could they do that? No — not until Dell EMC storage releases that version in collaboration with the bundled PowerStore X operating environment.

How do you handle that from a security perspective? If something goes wrong and there's a security hole, do we have to wait for you to come up with an update? We have, in collaboration with VMware and our engineering staff, a very specific time frame: if there are CVEs or other vulnerabilities, we take what's been pushed out from VMware and roll it into a service pack to make sure customers are protected.

To go back to Gina's GPU question — that's an interesting one; I can take it. Gina, you bring up a good point. If you look at PowerStore today, we don't have the power envelope to host GPUs. There are edge applications that can do inferencing without GPUs, but if a customer wants to do machine learning or run an inferencing application that really does require GPUs, you have to have a way to scale GPU as a service, and Bitfusion is a great way to do that. You can think of a PowerStore in 2U and a PowerEdge server with a GPU in it, over the network, and you can offer those GPUs as a service
using Bitfusion. PowerStore X is deeply integrated with vSphere, and getting to vSphere 7.0 is on the roadmap. I can tell you we have been in discussions with VMware and Bitfusion at that level, and you'll hear more about that collaboration in the future. You're absolutely right, though. If you look at our VxRail product that just released, the nice thing they did is use PowerEdge servers with T4 GPUs, so if you have a vSAN cluster with that capability, that's another application. The difference with PowerStore X is that we're really focusing on the small footprint. One of the things you're going to hear about, especially when we talk about our scaling model, is that we're giving customers the ability to scale resources independently, and that's important. A lot of these other solutions aren't bad — they're really good for their use cases — but we're focused on incremental scale: giving the customer the power to add a single drive, to add compute when they need compute. And if you think about Bitfusion, it fits that story: add GPU when I need it, how I need it. I can't tell you how many times I was in an executive briefing center and a customer would come up to me and say, hey Dan, how come you don't offer object in the mid-range array? I'd say, why would I ever offer object storage out of a traditional mid-range storage array? And they'd say, well, I have a small demand for it now and I don't have budget for a full-blown ECS, so if you can help me out by giving me a little bit of object until my demand and my budget increase and I can move to a dedicated solution, that would help me out. It was that sort of thinking that led us to this whole containerized architecture, and it's an example of how we have to be incremental in how we grow resources based on our customers' demand and their budgets. We're in business with these customers, and PowerStore is really about giving them that flexibility.

I drew this up on the whiteboard here, Gina. One of the customers I was working with a few weeks ago had exactly that scenario: they were going to be leveraging GPUs, driving specific application workloads at an edge location, and at the same time they wanted to run a set of static applications localized to the storage array. So they had a single consolidated footprint, knowing certain things were going to need GPU capabilities. PowerStore X can service both: we can run those localized static applications, they can simultaneously throw in a couple of 1U servers with GPUs, and then leverage externally attached storage from the same PowerStore X — a highly scalable storage model that can support hundreds and hundreds of terabytes without any kind of massive growth step.

First of all, I love this online whiteboarding — that's awesome. But when you were talking about it, it just makes so much sense to combine the two, and I'm really glad you're doing that. Going back to the slide: in the slide you said that the
cluster is basically a two-node ESXi cluster, correct? Yes. So that's the maximum size of this type of converged solution — basically the maximum cluster size for a PowerStore X? Okay, this really helps with understanding the use case, because if we're talking about a two-node cluster, we know what we can do with the resources. Another question: why do you require ESXi Enterprise Plus? Are you using some specific feature of Enterprise Plus, like Storage I/O Control? A two-node cluster can be perfect for branch office or edge, and the license cost can be significant.

Andrea, it's a great question. Inside the product group we're actually having conversations with VMware about potentially being able to use their ROBO licensing model. Currently we do require an Enterprise Plus license per socket, so it's four licenses per appliance. Understandably, depending on your agreements with VMware, that could be expensive, but we're in discussions right now about potentially using ROBO licensing for these deployment methodologies, so stay tuned on that one.

Great. Another question: you built this solution on one hypervisor — are you also thinking of adapting it to other hypervisors in the future? Our current go-to-market, collaboration, and innovation is with VMware; we don't currently have plans to offer, say, KVM as an alternate hypervisor, so our strategy is to leverage VMware. Internally we do have alternate versions of our storage stack running on other hypervisors — usually for dev and test environments; we have a virtualized version we use internally — but there's no plan to bring that to market.

Let's go back to slide two, since we've got about 20 minutes left, and feed back into the hardware basics and hardware design; then we'll work our way back to those deployment methodologies and have further discussion as we get there. To give you an idea, we're dealing with what we call an appliance. The base enclosure — that's the terminology we use within Dell EMC — is a 2U form factor with dual controllers and, as Dan mentioned earlier, 25 slots on the front side. Those 25 NVMe-connected slots support the drives you see here: 3D TLC devices at 1.92, 3.84, 7.68, and 15.36 TB. Let me go into annotate mode for a moment: notice right here — we'll classify this on the next slide — these are our NVRAM devices, and I'll talk about those in just a moment. This basically gives us 21 user devices, 21 flash devices capable of holding user data, in the base enclosure. Those 21 devices at a four-to-one data reduction net out in the ballpark of just shy of one petabyte of effective capacity using the 15.36 TB devices. So with the 15.36s at four-to-one data reduction, we net out just shy of one petabyte in 2U, and that applies to both the T and the X — you can get a lot of capacity just inside the base enclosure itself. Digging a little deeper, you can see the NVRAM devices over on the right side. We have four of those slots, and we reserve them.
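Those capacity figures are easy to sanity-check. Here's a small worked sketch: the drive count, drive size, and 4:1 reduction ratio are the numbers quoted above, while the protection/spare overhead is an assumption for illustration only, since the talk doesn't give the exact parity and spare-space fraction.

```python
# Rough effective-capacity check for a fully populated base enclosure.
# Quoted in the talk: 21 user drives of 15.36 TB and a 4:1 data reduction guarantee.
# NOT quoted: protection/spare overhead -- the ~20% used here is only a placeholder.

DRIVES = 21
DRIVE_TB = 15.36
DATA_REDUCTION = 4.0
PROTECTION_OVERHEAD = 0.20   # assumed fraction lost to parity/spare space (illustrative)

raw_tb = DRIVES * DRIVE_TB                        # ~322.6 TB raw
usable_tb = raw_tb * (1 - PROTECTION_OVERHEAD)    # ~258 TB usable, given the assumption
effective_pb = usable_tb * DATA_REDUCTION / 1000  # ~1 PB effective -- "just shy of a petabyte"

print(f"raw {raw_tb:.1f} TB, usable ~{usable_tb:.0f} TB, effective ~{effective_pb:.2f} PB")
```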
Whether you're in a 1000 or a 3000 model — T or X, it doesn't matter, it's the same for both — you only get two of those NVRAM devices, and they're mirrored. If you go to a 5000 and up, you get four. We reserve all four of those slots, which enables customers to do non-disruptive upgrades from a 1000 to a 5000, or a 1000 to a 9000: we just populate the additional NVRAM slots, and they can go from the most basic deployment model all the way up to the largest in the same form factor, without any kind of data movement, native to the base enclosure. So that's a look at the front end and the drive support for PowerStore and the base enclosure.

If we flip it around, this is where we get into the controller sets on the back side and the expansion ports and connectivity options. Notice the purple color, labeled here as the four-port mezzanine card. That's an embedded module that's there in every base configuration. It's an IP-based module: it can be copper, supporting 1 or 10 Gb for base-T configurations, or an optical module supporting 10 or 25 Gb. We can also add expansion cards — the yellow-orange boxes — and those can be up to 25 Gb Ethernet cards or base-T cards, or up to 32 Gb Fibre Channel ports. The blue ports are for SAS-based expansion, so if we're going into an expansion enclosure — we'll show you that in just a second — this is how we cable everything up and get connectivity to a PowerStore array.

Going back: we just talked about the base enclosure supporting up to 21 devices — the four NVRAM devices plus up to 21 NVMe devices — and we start with a minimum of six. Any PowerStore configuration requires a minimum of six flash devices; once we hit that minimum, we can do incremental adds of one, so a customer can add a single drive and scale capacity in as small or as large increments as they want. If we go beyond the 21 devices, we add a SAS-based expansion enclosure. For customers who aren't so concerned with localized performance but need massive amounts of capacity, we can accommodate that: up to nearly a petabyte in the base enclosure, and expanding out we can go multi-petabyte inside a single PowerStore configuration. We can have up to three expansion enclosures — the base enclosure plus three.

Quick question, Jody: is the minimum number of drives due to an overhead requirement? We've actually got some content coming up on how we do data protection — basically our RAID design and layout — and you'll see where that methodology comes in when we get there. Thank you. Sure thing.

All right, we've just gone through the base enclosure concept; we can add up to three expansion enclosures on
the base enclosure, and we also have the ability to scale out inside of PowerStore. This is different. As Dan eloquently explained at the beginning, we're targeting mid-range customers — customers in the mid-range space at a mid-range price point. When you start talking about cluster designs and cluster capabilities, we already meet demands in the high-end space with technologies like PowerMax and XtremIO, which have InfiniBand interconnectivity leveraging low-latency protocols like RDMA, so you can distribute I/O and workload across every available resource in the cluster. But with that additional technology and infrastructure comes additional cost. Targeting the mid-range, it looks more like what the industry has branded a loosely coupled cluster, as opposed to tightly coupled: appliance A, appliance B, appliance C, and appliance D, each with a single storage pool, all acting independently. If you have a volume — say your Oracle redo logs — it could live on appliance A, and your DB files could live on appliance B, so you can distribute load across the cluster. What we don't do is aggregate or span storage or CPU load across the cluster; every single appliance is its own domain. Does that make sense?

We've had this discussion several times already — isn't it more of a federated kind of approach? And another thing: what happens if I'm full on capacity, or I reach my performance limit on one of the appliances? Can I move a volume non-disruptively to another one?

Yes. To your first question, Enrico: from an industry perspective, a loosely coupled cluster does follow the federated concept. Also, there's a ton of intelligence here — we'll show you how we can non-disruptively move loads from anywhere in the cluster to anywhere else in the cluster, and we can do that in an autonomous fashion: the logic and intelligence built into the array can make those recommendations in real time based on what's going on inside the cluster, and even at deployment time. Even when you're creating a new volume out of the gate, with up to four appliances in the cluster, the intelligence can make the recommendation and assign that volume to the appropriate appliance without you having to think about it at all. We'll walk through that, Enrico; I'll show you some of those concepts.

To add a little color: we spent a fair amount of time looking at different scaling architectures — horizontally distributed scale-out architectures versus this more loosely coupled model — and when we talked to our customers, designing for mid-range with transactional latency considerations as well as independent scaling to control cost and infrastructure, this was the choice we made. Both architectures are good, but they solve different needs.
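To make the "each appliance is its own domain" point concrete, here is a toy placement heuristic. It is not PowerStore's actual resource balancer — the talk only says placement recommendations come from capacity and load intelligence — just an illustration of placing a new volume on whichever appliance in a loosely coupled cluster has the most headroom, with no spanning across appliances.

```python
from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    capacity_tb: float      # usable physical capacity
    used_tb: float          # currently consumed
    avg_load_pct: float     # recent controller utilization

def place_volume(appliances: list[Appliance], size_tb: float) -> Appliance:
    """Pick an appliance for a new volume in a loosely coupled cluster.

    Toy scoring only: require room for the whole volume on one appliance,
    then prefer the lowest mix of capacity pressure and controller load.
    """
    candidates = [a for a in appliances if a.capacity_tb - a.used_tb >= size_tb]
    if not candidates:
        raise RuntimeError("no single appliance can hold this volume (no spanning)")
    def score(a: Appliance) -> float:
        capacity_pressure = (a.used_tb + size_tb) / a.capacity_tb
        return 0.5 * capacity_pressure + 0.5 * a.avg_load_pct / 100
    return min(candidates, key=score)

cluster = [
    Appliance("appliance-A", 250, 180, 70),
    Appliance("appliance-B", 250, 90, 35),
]
print(place_volume(cluster, size_tb=10).name)   # -> appliance-B
```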
I'm not questioning the architecture — this is a mid-range kind of product, and nobody needs ten million volumes from a mid-range product. The thing I was getting at, though, is the distinction between scale-out and federated. Right — the thing that is horizontally scaled here is actually the control plane: a single pane of glass that can manage the whole infrastructure. And the nice thing about it for our customers is that we can intelligently migrate workloads, whether applications or storage, to meet their demands economically.

Let me show you, while we're on that subject, Enrico — I'll walk you through this to give you a better understanding. This is a look at the PowerStore interface. At the dashboard you're getting a global view, a view of every appliance in the cluster, and note here we have two appliances in this cluster. We can see what's going on from a block and file perspective as far as what's provisioned; we can look at capacity at a cluster level along with data reduction; and we can see performance at a cluster level. To Dan's point about management and ease of use, you have this single pane of glass that gives you the control plane of the entire cluster.

Let me give you an example: creating a new volume. By the way, we have feature parity with REST — pretty much everything I can do in the GUI I can do via REST, and everything I can do in REST I can do in the GUI — so whether you're using an automation methodology like Ansible, making native REST calls, or working in the GUI, you have all of these capabilities. If I want to create a new volume, first we give it a name and a size, and note here we have this placement capability. We have two appliances in this cluster to choose from, and we also have Auto. The analytics, the machine learning, understand at a cluster level — for every appliance associated with the cluster — where the best place is to put that data set. It's extremely intelligent and extremely easy to use from an administrative perspective: you don't have to ask how many appliances or controllers you have; everything is handled at the appliance level, so it's very simple and straightforward. If we select Auto, it deploys that 10-gig volume to the best place it determines. Now let's say this is a 10-gig test or development volume, and I know that appliance 2 is a 9000 series with all SCM devices — very high performance — and I don't want the autonomous logic to deploy a test workload there just because of that placement intelligence. I can pin the workload to a particular appliance if I want to.

You mentioned you could have 21 SCM devices in one appliance? Yes. Okay, that's pretty impressive, actually. Thanks.
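Since Jody mentions GUI/REST feature parity (and tools like Ansible driving it), here's a hedged sketch of what creating a volume with automatic or pinned placement might look like over REST. The endpoint path, payload keys, and authentication shown are illustrative assumptions, not copied from the PowerStore REST API reference; check the actual API guide before relying on any of them.

```python
import requests

POWERSTORE = "https://powerstore-mgmt.example.com"   # hypothetical management address
AUTH = ("admin", "password")                          # illustrative credentials only

def create_volume(name: str, size_bytes: int, appliance_id: str | None = None):
    """Create a volume; omit appliance_id to let the cluster choose placement.

    Endpoint and field names are assumptions for illustration -- consult the
    PowerStore REST API reference for the real contract.
    """
    payload = {"name": name, "size": size_bytes}
    if appliance_id is not None:
        payload["appliance_id"] = appliance_id        # "pin" to a specific appliance
    resp = requests.post(f"{POWERSTORE}/api/rest/volume",
                         json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

# Auto placement (cluster intelligence picks the appliance) vs. pinned placement:
create_volume("dev-test-01", 10 * 2**30)
create_volume("oltp-redo-01", 10 * 2**30, appliance_id="A1")
```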
So on the base enclosure — let me go back, Ray, and show you. No, I understand; I was just trying to confirm whether that was a limitation on SCM devices or whether you could populate the whole appliance. You can: in version one, storage class memory is 21 devices in the base enclosure. Got it, that's good, thanks.

So you can see this autonomous nature by selecting Auto. We also have capabilities for moving workloads. If we go up to the Migration tab, we see migration actions and internal migrations: we can see whether anything is being moved anywhere within the cluster, what its status is, and what the projection is. If I want to move a volume, I can select it and tell the system to migrate it from one place to another inside the cluster — all of those capabilities are native here. It even goes to the point where, if a particular volume within the cluster starts to fill up and hits a threshold — say I get a notification at 85 percent full — the system generates an alert, and I go in and acknowledge it. One of the actions in that dialog box draws on the analytics and will tell you, "in eight days you're going to run out of space." It will literally tell you eight days, or thirty days, based on the analytics and the trending of that particular volume on that particular appliance. Part of the dialog box asks whether you want to do an assisted migration. Maybe you don't have the physical space to dynamically grow this volume — if you had the space you could, but maybe you don't — so you can check that box and say, yes, kick this off for me. It will pick that volume up and move it to another appliance in the cluster that has free space, completely non-disruptively to the host. So that capability of moving workloads or volumes throughout the cluster in an autonomous fashion is there as well.

Is this something you're working with VMware on as well? They've got vRealize. This is completely independent of VMware. With VMware's DRS — for compute balancing, for example — DRS will make migration recommendations to us, but internally our resource-balancing analytics will either accept or deny those recommendations based on internal factors: the quality of service needed for that application, the capacity available across the cluster, wear on the devices, and capacity forecasting. So it works in concert with DRS.

But some of your customers will probably be working with vRealize Operations as well — how does this hook into what they're seeing there? We have full integration with vRealize, so customers can use vRealize Automation and Orchestration to dynamically assign a workload, move a workload, or take action on workloads. We do have that capability.
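The "you'll run out of space in eight days" style of alert is essentially trend extrapolation. Here is a minimal sketch of that idea, assuming daily capacity samples and a simple linear fit; the talk doesn't describe the actual forecasting model, so this is only an illustration of the concept.

```python
def days_until_full(used_gb_history: list[float], capacity_gb: float) -> float | None:
    """Estimate days until a volume fills, from equally spaced (daily) usage samples.

    Plain least-squares slope over the samples; returns None if usage isn't growing.
    Illustration of trend-based forecasting only, not the product's model.
    """
    n = len(used_gb_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(used_gb_history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, used_gb_history)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None
    return (capacity_gb - used_gb_history[-1]) / slope

# e.g. a 2 TB volume growing ~55 GB/day and already at ~1.6 TB:
history = [1400, 1455, 1510, 1565, 1620]
print(days_until_full(history, capacity_gb=2048))   # -> roughly 7.8 days
```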
So because you have all of this analytic capability, does it help you size things for customers, or make recommendations based on the application workloads they're running?

Gina, that's a great point, and we actually have a couple of slides on CloudIQ. All of this analytical data that lives within the cluster — the customer also has the ability, via a checkbox, to enable CloudIQ, which is free to our customers as part of a support agreement. CloudIQ does a broader churn of analytics, looking not just at a single cluster or a single appliance but at everything within the customer's data center, or at multiple clusters that may be globally dispersed, at edge locations as well as the core data center. It takes all of that data into account and makes recommendations: things like overall holistic growth patterns — "at this point you're going to need to add capacity six months down the road." All of those analytics are aggregated into CloudIQ. We're talking 39 billion data points, and we've got nearly 40,000 systems already working within the CloudIQ structure; that grows every single day, so the analytical points keep increasing, and there's a lot of logic within CloudIQ that takes that data and turns it into usable concepts for the customer. They can generate growth patterns and graphs — and it's not just report-type data; you also get things like anomaly detection.

Let's say, in a particular cluster, I've got a volume sitting over here on appliance A — and by the way, we support intermixing appliances within a cluster, so this could be a 1000, a 5000, a 7000, and a 9000 all in the same cluster. Say this particular volume, at the host level, normally runs at 5,000 IOPS. As CloudIQ looks at the analytics and the trending, it learns that metric and knows that's the normal working range for that volume. I've been an administrator, I've sat in that seat, I've gotten the call at 2 a.m. when something goes south. Let's say that 5,000-IOPS volume suddenly doubles to 10,000 IOPS. That might not be a big deal in a high-performance flash array — it might go unnoticed — but the reality is that something changed within a very small window of time that doubled the workload. CloudIQ, doing all of the aggregated analytics, will notify you: this is an anomaly, and you need to look into it. Maybe a DBA made a SQL query change, maybe an application change rolled out and something is now different. Five thousand to ten thousand IOPS might not be a big deal, but fifty thousand to a hundred thousand could be catastrophic, and CloudIQ would detect that and notify the administrator. And it's not just performance — you get the same anomaly-based trending and notification for capacity as well.
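The IOPS example boils down to comparing current behavior against a learned baseline. A toy sketch of that idea follows; CloudIQ's real models are not described in the talk, and the baseline window and thresholds here are arbitrary choices for illustration.

```python
from statistics import mean, stdev

def is_anomaly(iops_history: list[float], current_iops: float,
               min_sigma: float = 4.0, min_ratio: float = 1.8) -> bool:
    """Flag a workload change that is both statistically unusual and large.

    iops_history: recent samples defining the volume's normal working range.
    Thresholds are illustrative; the point is "doubling in a short window is
    worth a look", as in the 5,000 -> 10,000 IOPS example from the talk.
    """
    baseline = mean(iops_history)
    spread = stdev(iops_history) or 1.0
    sigma_jump = (current_iops - baseline) / spread
    ratio_jump = current_iops / baseline if baseline else float("inf")
    return sigma_jump >= min_sigma and ratio_jump >= min_ratio

history = [4900, 5100, 5050, 4950, 5000, 5020]   # ~5,000 IOPS baseline
print(is_anomaly(history, 10_000))               # -> True  (doubled workload)
print(is_anomaly(history, 5_300))                # -> False (ordinary fluctuation)
```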
I've been down this road: I come into work the next day and a two-terabyte volume has gone from 900 gig to 1.7 terabytes overnight, and everybody's scrambling to figure out what's going on, only to realize that somebody left debug logs on and things filled up like crazy overnight. CloudIQ, with all of its logic and intelligence, is scanning the entire landscape of the customer's install base and will recognize that anomaly and notify them. All of that is built into CloudIQ with PowerStore.

Is CloudIQ, today or in the future, going to have visibility into the full stack — the PowerEdge servers, the switches, even the applications? For those who may not know the history of CloudIQ, we started it when we launched Unity about four years ago; that's when we brought CloudIQ to market. Since then we've rolled in the entire primary storage portfolio, and we're working every day to extend it into other aspects of the Dell Technologies portfolio. I can really only speak to primary storage — we've done a good job bringing all of that in — but I do know, working with the CloudIQ team, that they have a strong focus on bringing other aspects of the Dell Technologies portfolio into CloudIQ, so I'd look for that in the future, and maybe ask those questions in the other presentations you'll have today.

Jody, I have another question, going back a little to the migration topic. Can I also make a copy? Say I have a system in production and I want to let developers or other people run whatever they like on top of the LUN — on a snapshot. I want to make a copy, like a migration but without moving the actual data, and without creating new traffic on that LUN. Is that possible? Yes. In that case you have a production volume running here, and you have a couple of options. You can create a thin clone — still direct pointers to the same data set — make that thin clone read-writable, and have your dev host access it; it stays on the same appliance. If you want to take that workload and move it off, you can then take that thin-clone volume and migrate it to, say, a dev appliance. So you have the ability to put data sets where you want them, when you want them, and how you want them.

Just thinking of a scenario where you're moving a volume from one array to another: how aware is the controller of what's happening in ESXi on that same array? For example, you're running something at the edge on the ESXi that's running on it, and you then move the volume somewhere else. Are you relying on the VMware infrastructure to move the VM as well, or do you recognize that it's running on a local VM and arrange with VMware for a Storage vMotion at the same time the data is moved? Great question, Barry — let me give you some insight.
What we've been describing with up to four appliances is for the T model. In PowerStore X we currently support a single appliance. We have a strategy on the roadmap to change that and extend scalability to multiple appliances for PowerStore X, but today, in version one, T scales up to four appliances and X is a single appliance. So in a PowerStore X deployment today we would not be talking about non-disruptive mobility of a volume across different appliances; we will look to do that in the future. Dan, do you want to add any commentary or color around that, specifically for the PowerStore X deployment? When we get to clustering multiple PowerStore X appliances, it's all integrated with DRS, so anything you can do in vSphere today across hosts, you get all of that same capability. This is where we integrate our storage migration capability with DRS, and we augment the decision-making based on the various things happening within the cluster. Barry, does that answer your question? Yes, it certainly does, thank you.

All right, so here is the data engine. There's a lot here — a bit of an eye chart — but Dan specifically wanted to take the opportunity to walk everybody through the concepts of the PowerStore data engine and how things work under the covers. Let's do a quick time check — how much time do we have remaining? About 25 minutes. Okay, good.

I'm going to briefly touch on a few points within the data engine itself to give you an idea. We get a lot of feedback from media, partners, and customers, and we really want to communicate that this new data engine has been built from the ground up; it's a departure from the architectures we've had in the past. One of the biggest differences is that it's a log-structured storage system. It uses 4 KB granular mapping and always writes full stripes: when we lay down data on the back end, we lay down full 2 MB stripes, and that data is fully packed. We built the data engine from the ground up for data reduction, which means we didn't want any holes in the way we laid out data, and that meant going to a log-structured system — that gives us the best efficiency. Being in the mid-market, the best architectural model we can use is still the dual-controller, highly available, shared-disk model. We did change our caching strategy compared with other mid-range storage arrays, which use a sort of battery-backed mirrored DRAM: we moved to a shared non-volatile memory transaction log using dual-ported NVRAM devices. The key difference is that this is more aligned with our software-defined strategy. As writes come into the system, they're written to the shared NVRAM, and then we only need a low-latency, low-bandwidth connection to send a small record to the peer telling it where that data is, since the peer has access to the same NVRAM. That fits nicely if you want to take our asset and move it into more of a software-defined model: you're not relying on custom hardware or on PCI Express non-transparent bridging between the two controllers.
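Dan's description of the shared NVRAM transaction log suggests a write path roughly like the following. This is a conceptual sketch of that flow only — the names and structures are invented for illustration, not PowerStore code: the payload lands once in dual-ported NVRAM, a small record goes to the peer controller, the host is acknowledged, and full stripes are bound to the back end later.

```python
from dataclasses import dataclass, field

@dataclass
class SharedNVRAM:
    """Dual-ported log both controllers can read; the write payload is stored once."""
    log: list[bytes] = field(default_factory=list)
    def append(self, payload: bytes) -> int:
        self.log.append(payload)
        return len(self.log) - 1             # log position

@dataclass
class Controller:
    name: str
    nvram: SharedNVRAM
    peer_records: list[tuple[int, int]] = field(default_factory=list)

    def host_write(self, peer: "Controller", lba: int, payload: bytes) -> str:
        pos = self.nvram.append(payload)      # 1. persist payload in shared NVRAM
        peer.peer_records.append((lba, pos))  # 2. tiny metadata record to the peer
        return "ACK"                          # 3. acknowledge the host immediately
        # 4. (later, async) reduce the logged data and bind it into full back-end stripes

nvram = SharedNVRAM()
a, b = Controller("A", nvram), Controller("B", nvram)
print(a.host_write(b, lba=4096, payload=b"\x00" * 4096))   # -> ACK
```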
That was another big difference. Obviously it's built for flash; it supports NVMe flash, SAS flash, and 3D XPoint. The resiliency engine and data reduction I'm going to talk about next — I want to clear up some of the confusion around our parity protection and data redundancy. We also have share-based QoS; you can set that on a per-volume basis, or you can use SPBM with VMware.

Let me show you that real quick while Dan's on the topic — the QoS capabilities. Again, going back to the intelligence and simplicity of the architecture for the user: when you're creating a volume, as we went through on this screen earlier, notice over on the right you'll see Volume Performance Policy — high, medium, and low. It makes it very quick for a user to assign a workload to a performance policy, and that policy handles the prioritization of those I/Os based on high, medium, or low, without having to get really complex or granular at the individual workload level. Again, it goes back to the intelligence and simplicity of the system.

One question about this: if you're using a virtual volume, I guess this kind of policy can be driven by a storage policy from vCenter? Correct, and that's what I meant by SPBM. And the VASA provider — where is it located? The VASA provider is located inside the controller. That's great. And if you scale out or federate with four appliances, do you have four VASA providers because you have four appliances? We have one active VASA provider, but there are four running, so if we had an entire appliance failure, we just make another VASA provider active. That's great — a resilient VASA provider, because that's one of the problems you can run into with a virtual volume approach. And a last question, about replication — maybe you'll talk about it later, but here I see you have only asynchronous replication, correct? Some customers, for example Compellent SC Series customers, are using Live Volume today. You got it. So is synchronous replication, or better, something like Live Volume, in your future plans? It definitely is on the roadmap. We're just months away from the v2 release, and you're going to see a lot more of those data services come in with v2.

One thing I'll say to the audience here: building this product from the ground up, in order for us to consolidate our portfolio, we need to begin to absorb all of the use cases of the other products as well, and there's additional innovation coming with PowerStore. We have a very rich roadmap, and this is the first of basically three to four major releases that will build out the story. We have enough in version one; we needed to get it to market, get the customer feedback and reaction, and really understand where the gaps are and how to prioritize what we deliver next. Version one is very fully featured. A lot of the things people bring up —
A lot of the things people bring up — "well, have you thought about this?" — well, yeah, it's actually in the pipeline; we just couldn't deliver it to market with the quality that we wanted at the time that we wanted. So just know that there are going to be three to four major releases to realize the full vision of PowerStore.

Sorry — do you have any plan for a virtual appliance for this product? We actually are discussing a fully software-defined instance of the PowerStore operating system. Of course, if you look at it in the context of PowerStore X, we already have a VMware-virtual-machine-capable instance of PowerStore OS, but we are having conversations with customers about potential use cases for an SDS version of PowerStore OS and how and where that might be used. So yes, Enrico, those conversations are happening within the product and engineering team as well as with our customers about what that need might be in the future.

Okay, so I'm just going to briefly run through our four-to-one data reduction guarantee. We mentioned before that we have an active-active mapping layer between the two nodes, and it supports thin provisioning, snapshots, and clones, so when you copy a volume it's actually a thin copy — there's no data moving, it only involves metadata. We also perform pattern reduction for the most common patterns, like all zeros and all ones: as soon as a write comes into the system and we run through the data checks, we have the ability to detect well-known patterns and store them without having to store copies of that data on the back end. Those are the first two forms of data reduction.

Compression and deduplication are always on and they are inline. For compression we are actually using Intel's QuickAssist offload; it's using deflate — there is both static and dynamic, and we're using dynamic — and it varies between levels one and four in Intel's terms, which roughly equates to levels one through nine. We do have the ability to dynamically detect streams that aren't very compressible, and we will back off compression; or, if the system is under heavy pressure and the Intel offload engine starts to become overloaded, we will back off the compression effort, say from a level nine to a level five, to dynamically adjust the resources. In general, we want to make sure that on the write path it's really the CPU that becomes the bottleneck — we don't want the compression engine to get in the way.

One of the major architectural changes I talked about on the last slide was that we moved to a log-structured layout, and this really enables the following: as data is ingested into the system we quickly acknowledge the host, and some time later we perform the inline compression and deduplication when we bind the data from the logical space to the physical space. When we do that, we're taking a bunch of data segments, organizing them, and compressing them into a contiguous two-megabyte buffer; that buffer is then written out as a full stripe to the back end with no offset overhead, so it's very tightly packed — and we can do that because of the log-structured layout.
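Here is a minimal sketch of the two mechanics just described: compressing incoming segments back-to-back into one contiguous 2 MB buffer that goes out as a full stripe, and dialing the compression effort down when the system is busy. zlib stands in for the QuickAssist deflate offload, and the level numbers and threshold are illustrative assumptions, not the array's actual tuning.

```python
# Sketch only: pack compressed segments into a tightly packed 2 MB stripe and
# back off the compression level under load. zlib stands in for QuickAssist.
import zlib

STRIPE = 2 * 1024 * 1024

def compression_level(system_busy: bool) -> int:
    # Drop from a high-effort level to a cheaper one under heavy load,
    # roughly analogous to the "level nine to level five" back-off described.
    return 5 if system_busy else 9

def pack_stripe(segments, system_busy=False):
    """Compress segments back-to-back into one fully packed 2 MB stripe."""
    level = compression_level(system_busy)
    buf = bytearray()
    placed = []                                        # (offset, length) metadata per segment
    for seg in segments:
        comp = zlib.compress(seg, level)
        comp = comp if len(comp) < len(seg) else seg   # skip incompressible data
        if len(buf) + len(comp) > STRIPE:
            break                                      # stripe full; the rest starts a new stripe
        placed.append((len(buf), len(comp)))
        buf += comp
    buf += b"\0" * (STRIPE - len(buf))                 # pad out to a full-stripe write
    return bytes(buf), placed
```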
The other thing that we're doing with respect to deduplication is granular 4K block deduplication, and one thing that's different from our other products is that we have a global deduplication domain within the appliance. There are no deduplication domains per volume — it's across all volumes within the appliance — so that provides some great opportunities for deduplication. With that said, we have a very rich roadmap for compression and deduplication, so along the roadmap you're going to see some pretty significant enhancements and new innovation. I'd love to talk to you about them today, but I can't; what I do want you to know is that, along with the other enhancements to realize the full vision for the product, there are going to be significant areas of innovation here. What I can say is that, with the techniques we've built into the architecture and the inline data reduction, on the corpuses we've tested with we feel very comfortable offering an average four-to-one data reduction ratio guarantee.

Next, I'm going to briefly go over the resiliency engine to clear up some of the fear, uncertainty, and doubt; there will be a much more extensive blog post on this that gets into more detail. What I will tell you is this: PowerStore has implemented a mapping system for the way it lays out data, much like the thin-provisioning mapping system you would have for your volumes. We have the ability to fully distribute the segments for each of the resiliency stripes, so we can map around drive failures, for example, and it's that mapping that allows us to always write full stripes. And since we're distributing the data segments across all of the drives, we also have the ability to rebuild extremely fast. When you talk about reliability, you're really talking about the mean time between failures, the individual drive capacity, the number of drives you need to support, and then, in order to survive multiple drive failures that would render your data unavailable or offline, you need to stay within that availability window — you need a very fast rebuild. PowerStore can rebuild at up to one gigabyte per second.

So if you're talking about parity protection: our dynamic resiliency engine determines what the appropriate level of protection needs to be based on the class of data, and that could be a mirror, or parity protection, or multiple levels of mirrors. Metadata will normally be protected using mirrors or triple mirrors, and data will be persisted using parity protection. Our goal here was to provide fault resiliency to meet our availability goals while minimizing protection overhead, so we can make the most physical capacity available to our customers. And even using single-parity schemes, what people need to understand is that we can actually survive two drive failures, and the way we do that is by creating something called resiliency sets. When data is written to the back end, it is always written as full stripes to a resiliency set.
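Before continuing with the resiliency sets, here is a minimal sketch of the appliance-wide 4 KB deduplication described a moment ago: every 4 KB block, from any volume in the appliance, is fingerprinted and looked up in a single global index, so identical blocks across different volumes are stored once. Real arrays use far more sophisticated fingerprinting and verification than this toy example; the class and structures here are illustrative assumptions.

```python
# Toy sketch of a global (appliance-wide, not per-volume) 4 KB dedup index.
import hashlib

BLOCK = 4 * 1024

class GlobalDedupIndex:
    def __init__(self):
        self.index = {}    # fingerprint -> physical location; one index per appliance
        self.store = []    # stands in for packed back-end stripes

    def write_block(self, data: bytes) -> int:
        assert len(data) == BLOCK
        fp = hashlib.sha256(data).digest()
        if fp in self.index:               # duplicate: reference the existing block
            return self.index[fp]
        self.store.append(data)            # unique: store once, remember where
        self.index[fp] = len(self.store) - 1
        return self.index[fp]

# Two different volumes writing the same 4 KB pattern consume one physical block:
idx = GlobalDedupIndex()
a = idx.write_block(b"A" * BLOCK)          # written on behalf of volume 1
b = idx.write_block(b"A" * BLOCK)          # written on behalf of volume 2
assert a == b and len(idx.store) == 1
```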
With each resiliency set using single parity protection, you can survive a single drive failure — and as a matter of fact you can probably do a little better than that, because the spare is actually distributed within the resiliency set, and since we have a very fast rebuild we can remap around a failure and always make sure we keep writing full stripes. I think that's one of the key aspects there, Dan: we allocate spare space, so even if you had a failure of a device and you leverage that spare space — at a one-gigabyte-per-second rebuild, with all devices participating in the rebuild process, that rebuild happens very quickly — and then any additional free space in the system can also be used in the event of a failure. So if you have a lot of free capacity, you could literally lose another device and it would rebuild into that space, and then you could lose another device.

The thing there is to make sure that you stay within the availability and reliability goals for your storage subsystem. Now, if you're using single parity protection, that's only going to get you so far. So what we've done, to make the trade-off between meeting our availability goals and making the most amount of storage available, is to distribute the data across multiple resiliency sets. Think about it this way: say I have two writes coming in. Using single parity protection for the primary data, I can write one stripe to resiliency set one and one stripe to resiliency set two, and within each of those resiliency sets I can survive a single drive failure. That's the basic concept.

One question: so you actually split the drives into two resiliency sets — is that how this plays out? It depends on the number of drives. For example, if we start off with the minimum of six drives, then with single parity protection that would be a 4+1 stripe plus another drive's worth of capacity for a distributed spare, and that spare is distributed horizontally; when data is laid out, we only need single parity protection to meet our availability goals. Now, as the number of drives grows, your probability of encountering a drive failure increases — and it also increases based on the amount of data you have written and the capacity of the drives — so at that point we'll allocate a second resiliency set. And that's all dynamic, right? As the system scales, it will dynamically create that and auto-assign it behind the scenes. Yes — but drives are assigned to resiliency sets. Got it, okay. So nobody has to sit down and try to figure this out; it's just the way the system works.
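As a back-of-the-envelope illustration of the layout just described — a six-drive minimum giving a 4+1 single-parity stripe plus a drive's worth of distributed spare, and a second resiliency set being created as the drive count grows, with each set independently tolerating one failure — here is a small worked example. The split threshold, drive size, and spare sizing are assumptions chosen only to make the arithmetic concrete; the array decides these dynamically.

```python
# Back-of-the-envelope sketch of the resiliency-set layout described above.
# Numbers (split threshold, drive size, spare per set) are illustrative assumptions.

def resiliency_overview(n_drives, drive_tb, stripe_data=4, stripe_parity=1, spare_drives=1):
    # Policy assumption for this sketch: one resiliency set up to ~25 drives, then two.
    sets = 1 if n_drives <= 25 else 2
    per_set = n_drives // sets
    raw = n_drives * drive_tb
    # Parity overhead for a 4+1 stripe is 1/5 of each stripe, plus distributed spare space.
    parity_fraction = stripe_parity / (stripe_data + stripe_parity)
    spare = sets * spare_drives * drive_tb
    usable = (raw - spare) * (1 - parity_fraction)
    return {
        "resiliency_sets": sets,
        "drives_per_set": per_set,
        "tolerated_concurrent_failures": sets,   # one failure per resiliency set
        "raw_tb": raw,
        "approx_usable_tb": round(usable, 1),
    }

print(resiliency_overview(6, 7.68))    # minimum config: one set, 4+1 plus spare
print(resiliency_overview(50, 7.68))   # larger config: two sets, one failure tolerated in each
```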
All right, so getting into the intelligent landscape: as we mentioned earlier, we launched the box on May 5th of this year. We support vRealize Automation and Orchestration, we have the CSI plug-in to allow for persistent storage and Kubernetes integration and orchestration from a container perspective, and on May 6th we pushed our Ansible modules up to GitHub, so those are publicly available for customers that want to leverage Ansible. We also have a full, open set of RESTful APIs that customers can independently program against, so the array is highly programmable from the outside coming in — but there is also a lot of logic internalized within the array itself.

For example, we have something embedded into the cluster called the discovery tool. Think of it this way: it allows the cluster itself to become plug and play. We can start off with a single appliance, and then a customer literally just has to plug power into the new appliance and plug it into the network. As soon as it's powered on and on the network, the cluster will discover that appliance; the customer can then say, "yes, that's a new PowerStore appliance, I want to bring it into the cluster," and we have the intelligence built in to IP the controllers, build out the storage resource pool, bring everything online, and ingest that appliance directly into the cluster in an autonomous fashion. So it's very simple even just to scale out or grow the cluster — very straightforward and easy to use.

I gave you the next example on the whiteboard earlier, so for the sake of time I won't rehash it, but it is this: we hit a capacity threshold, and we have the autonomous capability — just by checking a box — to pick a volume up and move it to another appliance anywhere within the cluster that has additional capacity, or I can move workloads anywhere within the cluster that I want to in a PowerStore T schema. We talked about CloudIQ already, so I won't rehash those slides either. And then adaptability: I want to make sure we give some time here in closing for Dan to talk about the concepts behind the container-based operating system.

It's really simple — we talked about much of it before, and you can read about it online as well — but it's no secret that when we re-architected our software stack, we architected it using a container-based operating environment. The way it was born, I think, goes back to Enrico's question: how do you run without an ESX hypervisor and how do you embed applications directly? Maybe that's just containers, and maybe we're just using Kubernetes — I talked about some of the challenges around that — but we did see a benefit from our own perspective in creating a sort of software-defined environment where we can take our best-of-breed assets, containerize them, and offer our customers the same benefits of some of the technology in our portfolio that we bring to market, so they have the same capability and the same look and feel. We don't have silos of organizations creating multiple assets that do the same thing; we can concentrate on one. That's what I really mean by leveraging the best of breed in the portfolio. We also wanted the ability to incrementally upgrade different components: if, say, our file stack has an upgrade, there's no reason to qualify the entire stack; we can qualify the file protocol stack, deploy it as a patch, and the system can individually upgrade just that container. These are all benefits, and of course having a container-based architecture allows you to realize that asset in different forms. There was a lot of discussion today around whether we will ever offer a virtualized version — well, we kind of do already, internally, but that's really used for our own development purposes. Whether there's a use case for customers to use that, as Jodey mentioned before, is another question.
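Going back to the capacity-threshold example above — pick a volume up and move it to another appliance in the cluster that has room — here is a simplified sketch of what that kind of placement decision could look like. This is not the array's actual algorithm; the threshold, the "move the largest volume" choice, and the tie-breaking rule are all assumptions made for illustration.

```python
# Simplified sketch of capacity-driven volume rebalancing across appliances in a cluster.
CAPACITY_THRESHOLD = 0.85   # assumption: propose moves when an appliance is >85% full

def propose_migration(appliances):
    """appliances: list of dicts like
       {"name": "A1", "capacity_tb": 100, "used_tb": 90,
        "volumes": [{"name": "vol1", "size_tb": 10}, ...]}"""
    suggestions = []
    for src in appliances:
        if not src["volumes"] or src["used_tb"] / src["capacity_tb"] < CAPACITY_THRESHOLD:
            continue                                            # source is healthy, nothing to do
        vol = max(src["volumes"], key=lambda v: v["size_tb"])   # assumption: move the largest volume
        # Destination: the appliance that would remain least full after the move.
        candidates = [a for a in appliances if a is not src
                      and a["used_tb"] + vol["size_tb"] < CAPACITY_THRESHOLD * a["capacity_tb"]]
        if candidates:
            dst = min(candidates,
                      key=lambda a: (a["used_tb"] + vol["size_tb"]) / a["capacity_tb"])
            suggestions.append((vol["name"], src["name"], dst["name"]))
    return suggestions
```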
But it's this software architecture that gives us the flexibility to realize these assets anywhere. So that's pretty much it.

So, we talked about PowerStore T a lot; that's your standard deployment methodology, where I've got my array with PowerStore OS running on bare metal and I'm provisioning from that — all of the benefits we've talked about with PowerStore in this traditional deployment methodology. We spent a lot of time early on talking about the deployment methodology of PowerStore X: the ability, with AppsON, to run literal VMware virtual machines — customer applications — directly on the storage array itself while simultaneously providing highly scalable, independent storage. That independent storage can be provisioned out to external appliances; we can leverage onboard replication to send that data back to the core data center; and PowerStore X can also be deployed to work in tandem with customers inside their core data center. If they've got highly scalable, highly performant applications that they want running in a VxRail cluster, they can do that and simultaneously serve up additional storage from PowerStore X via Fibre Channel or iSCSI, or we can simultaneously serve up external storage to a pre-existing ESXi server farm that is independent from everything else. So there's a lot of dynamic flexibility here. We can integrate with VCF: if you've got a VxRail cluster running VMware Cloud Foundation — of course the management side of that requires vSAN — we can run workload domains on PowerStore, so we can have storage expansion directly integrated into a VCF construct. And if we look at this from a cloud-integration perspective, customers running PowerStore X can leverage the native capabilities of VMware Cloud on AWS and have native portability of those applications directly into VMC for disaster recovery, dev/test environments, or whatever else they might want to do, leveraging native cloud integration with VMC. We also have our cloud storage for multi-cloud offering, where the customer, in a fully operational model, can have a PowerStore at a cloud destination that they don't have to purchase, don't have to manage, and don't have to worry about updating; they simply get an annual subscription to an amount of capacity that they can then independently mount to any hyperscaler they want. So there's a lot of cool integration on the back end there.

Some people had questions about ingesting pre-existing infrastructure. We do have the ability to take parts of our pre-existing portfolio and pull that data in, and I'll show you quickly in the interface: in the GUI we have an option called import external storage. It's as simple as selecting the pre-existing remote system from Dell EMC and putting in your credentials; it then gets added to the configuration list. From there I select the system I want, say import, and select the volumes I want to import. If you look at the top you'll see the workflow: it gives us the option to take those workloads and add them into a volume group, or keep them as independent volumes inside of PowerStore; we do our host mapping; and we can schedule the import to kick off at a certain time or we can do it immediately.
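To make the workflow just walked through in the GUI a little more concrete, here is a high-level sketch of the same steps expressed as orchestration code. Every function name on the `client` object below is hypothetical — these are not real SDK or REST calls — and the sketch only mirrors the flow described: add the remote system, pick volumes, choose a volume group versus independent volumes, map hosts, then schedule or start the import.

```python
# Hypothetical orchestration sketch mirroring the import-external-storage flow
# described above. None of these calls are real SDK/REST signatures.

def import_external_storage(client, source_ip, credentials, volume_names,
                            as_volume_group=False, start_at=None):
    # 1. Register the pre-existing Dell EMC array as a remote system.
    source = client.add_remote_system(source_ip, credentials)          # hypothetical call

    # 2. Choose the source volumes to bring across.
    volumes = [v for v in client.list_remote_volumes(source)           # hypothetical call
               if v.name in volume_names]

    # 3. Build the import session: target layout, host mapping, protection policy.
    session = client.create_import_session(                            # hypothetical call
        volumes=volumes,
        group_into_volume_group=as_volume_group,
        map_hosts=True,                 # host-level visibility on both sides
        protection_policy="auto",       # snapshots/replication start once data lands
        scheduled_start=start_at,       # None = start immediately
    )

    # 4. Background sync mirrors the data; the host multipathing cutover to the
    #    new PowerStore volumes happens once the session reports it is ready.
    return session
```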
Once the import is configured, it will auto-assign a protection policy, meaning as soon as that data starts hitting the PowerStore it will start replicating and snapshotting and creating the protection construct automatically, and then you just kick off the import. Basically, we have host-level visibility from the PowerStore perspective, so we can see what the host sees from an individual LUN perspective, and we can also see what the source storage array sees. We then sync that data across, mirroring it over, and then, from a multipathing perspective, the host does a cutover to the new PowerStore array. So that's looking at things in a nutshell.

We close this out with our Anytime Upgrade program, which allows us to offer some clear differentiation in the industry, meaning we can offer our customers on PowerStore models the ability to have those upgrades as part of their support contract. They can go from, say, a 1000 series to the next generation of the 1000 series as part of their support contract; they also have the ability to go from a 1000 to a 3000, or a 7000 to a 9000, so you can literally do a full-blown upgrade, go out of family, and go to the next-largest model. And one of the aspects here that I think is truly differentiating is that, because we have a scale-out architecture, the customer can take the investment they've made — think of it this way: we have our standard offering here and then we have select here — and with Anytime Upgrade Select, since the customer is owed a controller upgrade, instead of going from a 1000 to a 3000 they could say, "look, this thing can scale out; I'm doing fine, I want to keep my 1000, but I want to take that investment and add your new 1500," and they get those new controllers. The customer only has to purchase any necessary expansion capacity, and now they're in a scale-out model. So they can go from a 1000 to a next-generation 1500, they can do a full-blown upgrade from a 3000 to a 5000, or they can scale out. The kicker here is that this works anytime in the contract: they don't have to wait some extended period of time, and they do not have to renew maintenance with Dell EMC in order to take advantage of it. So Anytime Upgrade, out of the gate, entitles our customers to do this anytime they want without going into any kind of long-term lock-in obligation with us.
Info
Channel: Tech Field Day
Views: 9,414
Rating: 4.8961039 out of 5
Keywords: Tech Field Day, Dell Technologies, Dell EMC, PowerStore, Dan Cummins, Jodey Hogeland
Id: jk1heTCUHRI
Length: 86min 46sec (5206 seconds)
Published: Sat Jun 27 2020