Dell EMC Data Services and Data Mobility with Vince Westin

Video Statistics and Information

Captions
This is Vince Westin, technology evangelist for Dell EMC. We're here for Storage Field Day 16, talking about the wonders of PowerMax for our wonderful industry analysts.

All right, reliable data protection. Why do people buy PowerMax for data protection? We've got SnapVX, we've got ProtectPoint, we've got SRDF, which means protection within the array, within the data center, and across data centers. So let's talk about what that means.

Snaps: I can do 256 snaps per source volume and 1,024 linked targets. They're easy to manage because I manage them at the storage group level; I don't have a bunch of individual LUNs hanging out there when I make copies. When we do the demonstration of the user interface, we can show you how easy it is: just say "take this storage group, create snaps on it every hour, keep them for 12 hours," and the snaps are made in the background, done. And once you've set that on the storage group, if you add five new LUNs to the storage group, all the snaps from then on get taken for the new LUNs as well as the old ones. So you do all of this at the storage group level. Going back to the idea, Howard, that we discussed earlier about applying service levels at the storage group level, it's the same kind of thing with snaps. We haven't built snaps into the service levels yet, but one of the things we're talking about is how you might create a service level that specifies "this service level includes snaps: they happen every 15 minutes and are kept for 12 hours, the hourly ones are kept for three days, and the midnight and noon ones are kept for a week." You could create those kinds of service levels that include availability and copies as well as performance and other such pieces. So it's really easy to manage, easy to do.

We also have the secure snap feature that came out a year ago. Secure snap says: when I create a snap, make sure it sticks around for a certain period of time. The examples on the slide are a little long; I usually recommend secure snaps not be longer than three days, but you can make them 30 or 45 days if you want to. Once the retention time has expired, the snap will automatically free itself up, as all snaps do when they expire. You can extend the time of one of these secure snaps, say from 45 to 60 days. So if you're on a legal hold and you've got a copy of the data out there and you say, "gee, I want to keep that for 20 more days," you change the legal hold time, and nobody can delete the snap until that time expires.

[Is that because the snaps are redirect-on-write?] Right, the snaps are actually a read-only point in time. You link them to a device to mount them, and any writes to that device never go back to the underlying snap. So yes, it's redirect-on-write on the source, and the linked target is also redirect-on-write; the snap itself is guaranteed unchanged the whole time. This doesn't impact your ability to make copies: take one, link it up, and mount it; you don't have to do anything special. But again, anything you do to the target is a write to the target, and that doesn't change the nature of the underlying snap. So there's no way for someone to come through and corrupt the underlying snap, even if they're mounting a copy to go play with it, or test it, or do backups with it, whatever else they want to do. So it gives us great security.
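To make the snapshot scheduling model concrete, here is a minimal Python sketch of the tiered, storage-group-level retention scheme described above. Every class and field name here is hypothetical; this is not a PowerMax API, just the policy logic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class SnapPolicy:
    # Applied at the storage group level: LUNs added to the group later
    # are automatically covered by the same policy.
    interval: timedelta   # how often a snap is taken
    retention: timedelta  # how long each snap is kept

@dataclass
class Snapshot:
    storage_group: str
    taken_at: datetime
    policy: SnapPolicy

def expired(snap: Snapshot, now: datetime) -> bool:
    """A snap frees itself up automatically once its retention lapses."""
    return now >= snap.taken_at + snap.policy.retention

# The hypothetical service level sketched in the talk: 15-minute snaps
# kept 12 hours, hourly snaps kept 3 days, noon/midnight snaps kept a week.
snap_service_level = [
    SnapPolicy(timedelta(minutes=15), timedelta(hours=12)),
    SnapPolicy(timedelta(hours=1),    timedelta(days=3)),
    SnapPolicy(timedelta(hours=12),   timedelta(days=7)),
]
```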
This is also useful for air gaps. We have a number of customers doing air-gap backups once or twice a day to defeat the bad guys: you have an air-gap system, you connect it up, you do your backup, you disconnect it, and once or twice a day you do that. The problem is you're doing it once or twice a day. The bad guys may come and get you at any time, and Murphy says that if you're doing it once a day, it's going to be 24 hours after your last backup: just as you get ready to do the next one, they're going to hit you, and you're going to lose 24 hours of data. If you're doing these kinds of snaps every 15 minutes during the day, the bad guys come in and say, "okay, first I'm going to turn off the air-gap copies and try to delete those, and second I'm going to delete all the snaps out of the array." So they send the command to the array, and because these are secure snaps, the array says, "yeah, that's nice, I'm glad you want that, but you're not allowed." And they say, "no, I'm the administrator of the array, I can do anything I want, I can go delete it." And the array says, "yeah, no, you can't delete these snaps; they are not touchable, not until this time expires, have a nice day." And so the bad guys just kind of stand around: "okay, not this system, let's go to the next one." What does this do? It doesn't fix everything, but it makes you less likely to be one of the low-hanging-fruit easy targets. The bad guys hit the easy targets; don't be the easy target.

[So the linking process is exposing a snap as a LUN, so you can actually read and write to it, and at that point it's read-write?] Yep. And that link process isn't like a clone process per se; it's just a set of pointers. These are also redirect-on-write, so when you write, the write becomes part of the target and doesn't get associated with the original snap. The snap is inviolable: you can link it to something and do whatever you want, but any writes are part of the target, not part of the snap.

[Has any customer asked for SEC 17a-4 certification on this?] Not that I know of, but Caitlyn might be the first.

[What do restore and set mode do for a linked snap?] You can restore the device back to the primary. You can still change things about how the device is being used; you just can't touch the snap's data until it expires. You can restore back to the primary LUN and then go write and do whatever you want on the primary LUN; that doesn't change the snap. The contents of the snap are inviolable until the time expires and the snap is deleted.

[What's the maximum allowed retention? What happens if my administrator sets that to infinite and I run out of space?] If your administrator sets it to infinite, you call us, and there's a multi-factor authentication process we can use to lessen it. But we make it intentionally painful. [That's something I'd appreciate you charging for.]

We've also got role-based access controls now for snaps, and online device expansion while snaps are running. I know many people say, "gee, that's kind of table stakes." Okay, now we have the table stakes on that. But you shouldn't have to stop all your replicas just because you want to expand a device, so you can now expand devices with snaps on them. Especially if you have secure snaps on a device, you really weren't going to be able to expand it for quite a while; now you have the ability to do that, plus some other operational improvements.
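The secure-snap behavior described above reduces to two rules: retention can be extended but never shortened, and no credential can delete the snap early. A small Python sketch of those rules, with illustrative names only (this is not the array's actual implementation):

```python
from datetime import datetime, timedelta

class SecureSnap:
    def __init__(self, name: str, created: datetime, retention: timedelta):
        self.name = name
        self.expires = created + retention

    def extend(self, new_expiry: datetime) -> None:
        # A legal hold can push expiry out (e.g. from 45 to 60 days)...
        if new_expiry <= self.expires:
            raise ValueError("secure snap retention can only be extended")
        self.expires = new_expiry

    def delete(self, now: datetime) -> None:
        # ...but nobody, not even an array administrator, can delete early.
        if now < self.expires:
            raise PermissionError(
                f"{self.name} is secure until {self.expires}; not touchable")
        # actual space reclamation would happen here
```

Under this model, a compromised admin account issuing a delete simply gets a refusal until the retention clock runs out.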
We have ProtectPoint for those customers who have problems with large database backup and restore. I usually say if your database is larger than 25 terabytes, you have a backup challenge; if it's larger than 50 terabytes, you're acutely aware that you have a backup challenge; and if it's larger than 100 terabytes, you probably have multiple DBAs who keep their resumes offsite as their backup challenge, because they don't expect to actually be able to restore the production data in a business-useful period of time. We have a solution for that. ProtectPoint allows us, within the array, to grab a copy of any sized database and push the deltas to Data Domain on a nightly basis, dramatically changing your backup times, and then also dramatically changing the time to restore. So if you have that large-database problem, we have a solution. I won't tell you that this is an easy thing to do, which is part of why we say: if you have a large problem, we have a solution. You shouldn't shoot flies with a bazooka. Yes, the fly will die, but you're going to make a really big mess; use a fly swatter. This is a bazooka; only shoot big things with it.

Open Replicator lets us do copies. If you're sitting on some other, inferior version of storage and you want to upgrade to PowerMax, we allow you to grab your LUNs, suck the data over, and put it into the new array. We pretend to be a Windows host to the old array; you give us access to the LUNs and we can copy the data over. Not only that, but you can give us access to the LUNs, shut the server down, we start the copy, and you can bring the server up on the PowerMax while we copy the data in the background. So your outage window is five minutes for any size system. Having said that, if you're running PowerPath on both the old system and the new one, PowerPath can manage that, so the outage time becomes zero, because it'll hide the redirect in the background. Yet another reason to run PowerPath.

Remote replication with SRDF: zero data loss, high performance, awesome stuff for synchronous. Metro is the same thing, only now active-active, read-write on both sides, with the wonderful witness options and all that kind of greatness you'd want for a Metro active-active solution. And then we have asynchronous, going extended distance, multi-cycle, multi-array consistency. One of the big differences here is that we can do asynchronous replication from multiple arrays to multiple arrays at any distance you want. I have a customer with five boxes in Texas and five in Atlanta. I have another one with 20 boxes in New Jersey and another 20 in Tennessee. They do mainframe and open systems and all this other stuff, and it's all consistent, all the time. [And this is continuous, with write-order integrity?] Yep. It's doing basically 15-to-30-second windows, and it's all consistent at the window boundary. It's really difficult to get all the clocks synchronized for every I/O; it's not that hard to synchronize a clock across 20 arrays once every 30 seconds. That's easy to do, so that's what we do. We cheat; we're good at that. Well, we cheat for you; we're trying to make it easy.
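Here is a toy Python model of the cycle-based consistency just described: writes are captured into 15-to-30-second cycles, and each closed cycle is applied to the remote side as one atomic batch, so the clocks only need to agree at cycle switches rather than per I/O. All names are invented for illustration.

```python
from collections import deque

class AsyncCycleReplicator:
    """Writes land in an open 'capture' cycle; a cycle switch every 15-30
    seconds closes the batch, and the target only ever applies whole
    cycles, so it is always consistent at a cycle boundary."""

    def __init__(self):
        self.capture = []       # writes accumulating in the open cycle
        self.pending = deque()  # closed cycles waiting to ship

    def write(self, lun: str, data: bytes) -> None:
        self.capture.append((lun, data))  # acknowledged locally right away

    def cycle_switch(self) -> None:
        if self.capture:
            self.pending.append(self.capture)
            self.capture = []

    def apply_one_cycle(self, target: dict) -> None:
        """Apply a full cycle atomically: all of its writes or none."""
        if self.pending:
            for lun, data in self.pending.popleft():
                target[lun] = data
```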
So what are we doing with SRDF? We talked about enhanced RAID earlier. SRDF is a remote mirror; it has been since it was invented. We first went production in 1994, because in 1993 some idiot drove a truck into the basement of the World Trade Center and tried to blow it up, and several of our customers who had data in those buildings said, "excuse me, I need this data to be somewhere else, now." So the next year the data was mirrored across the river into New Jersey, and then we started doing asynchronous replication all the way to Texas. Enhanced RAID does this automatically. IDC even has a nice paper for us, which you can go out and read, that says: yes, we came in and pulled two drives out of a RAID group, and the host I/Os kept on streaming, and nobody did anything special. SRDF just said, "oh, I can't read that here, I'll go over there." It doesn't take anything; it just happens, magically. Again, it mirrors on top of whatever your local storage is. Multi-session consistency is the multi-array capability I talked about a minute ago.

And migrations with Metro: one of the things customers have asked about is how to move data around. We're going to talk about NDM in a minute, but if you just want to say, "gee, I've got two arrays sitting in my data center, I want to bounce a database between two different boxes," I can grab a database sitting in one array, turn on Metro to a second array, and now all the LUNs have been copied to that second array. I point my server at the new array, I take the server away from the old array, I drop Metro, and I've just moved between frames, done, online. So I can move your data between arrays anytime I want to with Metro. [Assuming PowerPath?] Nope, it works with MPIO, it works with all kinds of stuff. Quick, easy, done.

[It's still single-master replication, right?] Well, when you do Metro it's active-active, so they're both live while you're doing this, and any lone LUN goes from one place to another. But we do it at the storage group level, so this storage group can go from this array to that one, and the one next door can go to the next one. Every individual storage group can go wherever you want, and that's kind of the upgrade plan: you may not have a one-to-one relationship, you may do many-to-one, you may do one-to-many; we don't care. Storage group level, go have fun. [I just wanted to make sure you hadn't invented magic that made multi-master work.] No, because that's hard. [Could you use that to migrate VMAX to VMAX?] Yes, online, and we'll talk about NDM in a minute, which is the really easy way to do that.

New enhancements: online device expansion while you're doing SRDF, with the caveat: not with Metro yet, because trying to expand a device while the other side is also reading and writing the whole time is a little trickier. We'll do online resize with Metro next year; it's coming, it's just a little harder. We did add some things to Metro so that you can do some nice things about dropping devices out of the group. If you have things you're managing with Metro, you can demote them from Metro while leaving most of the group alone: take a storage group, demote it down to SRDF/A for example, change volume sizes, and promote it back up. So you drop one side of the Metro, go to SRDF/A, make the changes, and go back up, and you don't have to resync all the data. So we can do it; it's just not painless yet. We'll get there with Metro next year.
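A rough sketch of the Metro-based move just described, modeling arrays as plain dicts of storage groups. Every name here is illustrative, and none of this corresponds to actual Solutions Enabler or Unisphere calls; it only shows the sequencing.

```python
def move_sg_with_metro(sg: str, source: dict, target: dict, paths: set) -> None:
    # 1. Turn on Metro: the target array gets an active-active copy of
    #    every LUN in the storage group.
    target[sg] = dict(source[sg])
    # 2. Point the server at the new array, then away from the old one.
    paths.add(("target_array", sg))
    paths.discard(("source_array", sg))
    # 3. Drop Metro: the storage group has moved between frames, online.
    del source[sg]

# Hypothetical usage: bounce a database between two boxes.
source = {"oracle_db": {"lun_1": b"data", "lun_2": b"more data"}}
target = {}
paths = {("source_array", "oracle_db")}
move_sg_with_metro("oracle_db", source, target, paths)
assert "oracle_db" in target and "oracle_db" not in source
```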
And we do mainframe stuff. How many of you do mainframe? Okay, so we have a few mainframe enhancements; thanks for coming. Mirror optimization for SRDF: we can write to both sides of an SRDF synchronous link, which is a really interesting idea for zHPF workloads. We've got zDP, the mainframe z Data Protector, which is snaps every 10 minutes, continuously. I have a large Midwestern bank that runs three million snaps per array, because they're doing snaps every 10 minutes on the primary data sets. It's kind of a cool thing: it gives them 10-minute restore points, and they absolutely love it, so it's a lot of fun. Then we've got some new innovation with dynamic volume expansion, so you can make the LUNs bigger, with the devices mapped automatically, SuperPath support, and lots of other things. There's great stuff going on in the mainframe; we've got a whole team of mainframe guys I'd be glad to put you in touch with if you're interested in more about that. [A reason to talk to the mainframe side; that's awesome.]

PowerPath integration, which I mentioned earlier: we have a lot of great things going on. One of the things we've been doing with newer versions of PowerPath is tightening the integration with each of the arrays. With the latest release of PowerPath, when you have a server running PowerPath that sees LUNs from a VMAX, it will automatically reach in and say, "by the way, here's my hostname, here's my OS version, here are some things about my cluster, here's what my VMs look like." ESX licensing can be automated: the array knows about those 75 licenses, so if a server comes up and says, "hey, I'm an ESX server, I have PowerPath installed, but I don't have a license key yet," the array can say, "here, I've got a whole bunch of empty licenses, have one." Automatic, you don't have to manage it, done. We're trying to make all of that easier. We can do host I/O limits and service-level management with PowerPath, and PowerPath is now aware of those. We can do automated mapping from the HBAs, so when you go into Unisphere to map things, you'll see the hostname instead of just worldwide names and crazy stuff like that. And we're working on I/O tagging advancements, so you can even do things like say, "gee, all the I/Os coming in here are part of an Oracle log file," for example, or "these are part of the database files," or "these are part of the archive logs," whatever else you want. We can start tagging the individual I/Os and understanding more about the profiles, and that feeds back into the machine learning inside the box: the more I know about what an I/O is for, the more I know about what its priority should be, and the better I can manage things within the box.
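As an illustration of the I/O tagging idea above, here's a hypothetical Python sketch in which tagged I/Os are ordered by a priority table. The tags and priorities are assumptions for the example, not the product's actual classifications.

```python
from enum import Enum

class IOTag(Enum):
    ORACLE_REDO_LOG = 0   # latency-critical: serve first
    ORACLE_DATAFILE = 1
    ARCHIVE_LOG = 2       # streaming: tolerates queueing
    UNTAGGED = 3

def schedule(ios: list) -> list:
    """Order a batch of (tag, payload) I/Os by tag priority: a stand-in
    for the kind of hint the array's machine learning could consume."""
    return sorted(ios, key=lambda io: io[0].value)

batch = [(IOTag.ARCHIVE_LOG, b"log"), (IOTag.ORACLE_REDO_LOG, b"redo")]
assert schedule(batch)[0][0] is IOTag.ORACLE_REDO_LOG
```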
We have embedded NAS, for those customers who want to be able to run NAS as well: we can just turn on NAS in the array and export ports, so you can run SMB and NFS and FTP and various other things. Usually this gets used by smaller customers who say, "gee, I'm buying one storage array; I don't have to put a NAS on top of this, I don't have to manage a NAS appliance, we just turn it on in the array." Others say, "no, I'm going to put a Windows server or whatever in front to do my NAS," and we don't care; we can go either way. It just gives some customers one less thing to manage. It's installed in ten or so percent of the frames we sell. [Before you move on: does that file data get snap-style retention? And again we come back to 17a-4.] Not 17a-4, but yes, file-level retention, snaps, and such. [And that file data is also compressed and deduped, and spread across the system just like anything else?] Yeah, it lives on standard LUNs within the array, so it gets treated the same as any other data. And everything in the box is thin, automatically; we don't support the concept of thick LUNs anywhere in the environment. It's just not something we do.

Data mobility: we mentioned using Metro, and we've also had this concept of NDM, our non-disruptive migration capability. NDM says: I've got a source array and a target array; I'll use SRDF to copy the data; I'll move the worldwide names, the identity of the source, serial numbers, everything, over to the new frame; and path management at the host will allow all of this to be magical and happen without disrupting things. This is application-level, open-systems, FBA devices only. It gives you a simplified user experience for doing data migrations, with a nice support matrix and a bunch of other things. Let's talk through a couple of the options for how we do this; I'm going to skip the benefits slide because we're running short on time.

Pass-through mode for old SRDF: if I'm running on an old VMAX 20K or 40K, an original VMAX, a VMAX 10K (we still have Cloud Editions out there), a VMAXe, any of those things running 76 code, what I'll do is set up SRDF in a pass-through mode, because the older array understands SRDF and I can run SRDF between the two. My application will see the data on the old array and the new array, and I can move it, with the new array managing the migration. If you're starting on a VMAX3 or a VMAX All Flash at 77 code, we can do Metro to 78 code, which means we can be active-active and simplify some pieces of the migration. The way we used to do migrations before we came up with NDM was all this mess; then we got to "zone some things, create, cut over, commit, and you're done." As we go to Metro we simplify some of that, because the cutover is now just "get ready and commit." With Metro it's really simple: you create it, and when it's done, you just commit it. There's not much to do, because Metro is active-active; you don't have to worry about what stage you're in. It all just happens in the background.

So let me walk through how this works with Metro. You've got a source array that the application is talking to. We install a new PowerMax, we build the SRDF connectivity between them, we create an SRDF group, and all that stuff is ready to go. Now we pick a storage group on the source array that we want to put on the target. We zone the server so it can see the target, and we issue the NDM create command. What does create do? It sets up SRDF, it creates the storage group on the target, it creates all the LUNs, it copies over all the performance attributes, it does all those kinds of things you need copied. It sets the devices up in an R1/R2 relationship, but it's Metro, so they're both active, read-write, and we start synchronizing the data over. We copy the personality (the worldwide names, the serial numbers, and all of that) from the original array over to the LUNs in the new box, we copy the SCSI reservations, and we're done: now we make this available to the server, and away we go. So we're migrating, we're happy, we're reading and writing; the host scan finds the new paths. I'm still in my NDM migration state and moving data across, but I can start doing I/O on both sides. My server is up, my server is running, my server is doing I/O to both the old array and the new array, and invalidates are passing back and forth because this is active-active: I/Os go back and forth between the arrays and nobody cares, and my Metro session is handling all of it.
Then I say: okay, I'm done with my synchronization; I'm now running a standard Metro config, synchronized and ready to finish. So I issue the commit. What happens? First, I stop accessing the old array from the server; then I stop my SRDF; then I give those devices their new personalities. And I'm done: my application just moved from this array to that array. The server saw new paths come alive and its old paths die, and it doesn't know anything else happened. As far as the server is concerned, it's still running on the same LUNs in the same array; they just happened to move to a new platform. So that's pretty much it, quick and done. There's nothing else to do: you never configured LUNs on the new array, you never configured a storage group on the new array, you never configured any masking information on the new array. All of that is done by simply copying what was going on at the source into the new box, and I'll show you how easy that is in the GUI in a minute, when we do the Unisphere demos. We've tried to make this really easy: you can pick up data, move it to a new platform, and be done. This is why we say NDM has made this really, really simple. Questions? I know I flew through a couple of slides a little fast, but I wanted to get to the end point: it's done. A lot of text, but a very simple concept in the end. The server now sees new storage on the new box and is ready to go, and then you can pull the old box away and say goodbye.

So what does this let us do? It lets us take old boxes with 76 code to 77 code or the new 78 code, or we can go from 77 to 78, and in the future we'll do 78 to 78 as well. The new 78 code we call PowerMaxOS. The only thing that's going to confuse people is that the same code runs on VMAX All Flash as well, and it's still called PowerMaxOS even though it runs on a VMAX All Flash; so we just call it 78 code on those boxes to make it easier.
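To summarize the NDM flow in code, here's a simplified Python model of the create/sync/commit sequence, where the device personality (its WWN) is what actually moves. All names and structures are illustrative, not the real implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    wwn: str                               # the identity the host sees
    data: dict = field(default_factory=dict)

def ndm_migrate(source: Device, target_array: list) -> Device:
    # create: build the device on the target with the SOURCE's personality
    # (on a real array this also clones masking, performance attributes,
    # and SCSI reservations), then start the Metro pairing.
    target = Device(wwn=source.wwn)
    target_array.append(target)
    # sync: copy the data; with Metro, both sides stay read-write meanwhile.
    target.data.update(source.data)
    # commit: stop access to the source; the target now owns the identity,
    # so the host only ever sees familiar paths die and new ones appear.
    source.wwn = "retired-" + source.wwn
    return target
```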
Info
Channel: Tech Field Day
Views: 741
Rating: 5 out of 5
Keywords: Tech Field Day, TFD, Storage Field Day, Storage Field Day 16, SFD, SFD16, Dell EMC, Vince Westin, SnapVX, ProtectPoint, SRDF
Id: 8mkwAssD8G0
Length: 23min 25sec (1405 seconds)
Published: Thu Jun 28 2018