Data Protection for VMware and Virtual Machines

Video Statistics and Information

Captions
Hi, my name is [unintelligible], and today we are going to talk about the deficiencies in current data protection platforms and how Cohesity has solved these deficiencies going forward. To understand that, let us look at the typical landscape we see in the data center today. In the data center today we have application servers, virtualized environments, database servers, and production storage, among other things. A typical production deployment looks like this: a virtualized environment, a VM farm running typical virtualized workloads; application servers and file servers; databases running your mission-critical applications; and the primary production storage that serves all of these workloads.

What typically happens in today's data center is that the amount of data being brought in is constantly increasing, and this data needs to be protected at a very fast rate, because the business requirements are very stringent: you want to protect more data, at a faster rate, with a smaller RTO. How are the current backup vendors geared to meet that? In my mind they are not, and here is the reason why.

Backup software vendors are built on a legacy architecture, and this architecture has not changed in many, many years. Typically, backup software vendors have what are called media servers and metadata or master servers. The metadata server is a centralized repository that records where each given backup set is written; the media server is the server that performs those backups; and the data set is then written to target storage. How does this all work? As I perform backups, I decide which backup is performed by which media server, so I have to distribute my backup workload across all of these media servers. The data set is then captured at various time intervals and written to the target storage, whose job is to dedupe and compress that data.

As you can see, as data volumes grow, this architecture cannot scale to meet those needs because there is no scale-out built in: to solve the problem I have to keep adding more media servers, more metadata servers, and more storage servers (see the sketch below). The second problem is that each of these components is a single point of failure. If a metadata server crashes, there is no way to know where a given backup data set is; if a media server crashes, there is no way to perform the backups it was running; and if the target storage crashes, there is nothing to write to.
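The following is a rough back-of-the-envelope sketch of that scaling pressure. The backup-window and throughput figures are assumptions chosen for illustration, not numbers from the video: with a fixed nightly window, the number of media servers has to grow linearly with the amount of protected data.

    # Back-of-the-envelope sketch (all figures are assumptions, not vendor numbers):
    # with a fixed nightly backup window, a media-server architecture can only
    # keep up with data growth by adding more media servers.
    import math

    def media_servers_needed(data_tb: float, window_hours: float,
                             per_server_tb_per_hour: float) -> int:
        """Media servers required to move data_tb within the backup window."""
        return math.ceil(data_tb / (window_hours * per_server_tb_per_hour))

    if __name__ == "__main__":
        window = 8.0       # assumed nightly backup window, in hours
        throughput = 1.0   # assumed sustained throughput per media server, TB/hour
        for data_tb in (50, 100, 200, 400):   # protected data growing over time, in TB
            print(f"{data_tb:>4} TB protected -> "
                  f"{media_servers_needed(data_tb, window, throughput)} media servers")

Doubling the protected data doubles the media-server count, and none of the added servers removes the metadata server or the target storage as single points of failure.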
Finally, I also end up deploying multiple different solutions to try to solve this problem: one backup software to back up VM environments, a different backup software for database environments, perhaps a different product for archival, and yet another for spilling data over to the cloud. So you have point solutions stitched together to form a complete solution, and as a result manageability and maintenance become tougher and significantly more complex.

Finally, what is today's backup software actually useful for? It is merely useful for protecting these data sets as an insurance policy. The business need today is to take the latest production data set and run test and dev environments off of it, and backup software cannot do that, so to overcome that problem people deploy additional storage environments to run their test and dev environments. Secondly, if I ever have to restore a data set from a backup target, it is a very costly and time-consuming proposition: I have to find that data set and copy it out, which is slow and kludgy and does not meet the agility the business needs. So, is there a platform that can meet all of these needs and eliminate many of these different components? That is what we will talk about now: how Cohesity does that.

Once the data is brought onto the Cohesity platform, you can run test and dev environments directly off of the Cohesity platform. The Cohesity data protection platform is not just an insurance policy: when you run test and dev off of us, we become the primary storage for those test and dev environments.

So the question is, how is that possible? As data is brought into the Cohesity platform, our file system, built on our SnapTree infrastructure, ensures that every snapshot is kept in a fully hydrated state. If you ever want to instantiate a given backup, it is just point, click, and restore, and as a result the RTO is very close to zero. You can now run test and dev environments off of us, and you can also start doing many other things that current backup products do not allow, whether that is spinning up a cloned environment for a period of time and then destroying it, spinning it up for backup verification, using it for testing purposes, or using it for analytics. It opens up a whole new paradigm of use cases.

Why is this not possible with the other vendors? It is really the way we keep our snapshots. Other vendors that take continuous snapshots have a limit on how many snapshots they can support; we can take unlimited snapshots. Secondly, as you take more and more snapshots, the other vendors typically keep pointers to those snapshots, so when you want to restore you have to traverse that whole tree of pointers to find where a given snapshot is, and the time taken to restore grows much more complex. The biggest reason we can achieve this is our SnapTree infrastructure: SnapTree lets you take an infinite number of snapshots while keeping a fixed depth, so you can restore any snapshot, at any point, instantaneously. This helps the backup use case because I can pick any given snapshot that needs to be restored (see the sketch below).

So what does Cohesity achieve in the data protection platform? One, we have eliminated all of the different workflows that were needed and converged them onto a single, scalable secondary storage platform.
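To make the snapshot discussion concrete, here is a purely conceptual sketch. It is not Cohesity's actual SnapTree code, and the class and method names are invented for illustration; it only contrasts a parent-pointer snapshot chain, where restoring a block may require walking every ancestor, with a fully hydrated snapshot that can be read directly.

    # Conceptual sketch (not Cohesity's SnapTree implementation): a chain of
    # delta snapshots makes restores slower as the chain grows, while a fully
    # hydrated snapshot can be opened directly.
    from dataclasses import dataclass

    @dataclass
    class ChainedSnapshot:
        """Each snapshot stores only changed blocks plus a pointer to its parent."""
        delta: dict[int, bytes]
        parent: "ChainedSnapshot | None" = None

        def read_block(self, block_id: int) -> bytes:
            # Restoring a block may require walking every ancestor in the chain,
            # so restore cost grows with the number of snapshots taken.
            node = self
            while node is not None:
                if block_id in node.delta:
                    return node.delta[block_id]
                node = node.parent
            raise KeyError(block_id)

    @dataclass
    class HydratedSnapshot:
        """Each snapshot exposes a complete view of the data (blocks would be
        shared under the hood in a real system), so any restore point is one
        lookup away."""
        blocks: dict[int, bytes]

        def read_block(self, block_id: int) -> bytes:
            return self.blocks[block_id]

    if __name__ == "__main__":
        # Build a chain of 1000 incremental snapshots; block 0 exists only in the base.
        tip = ChainedSnapshot(delta={0: b"base"})
        for i in range(1, 1000):
            tip = ChainedSnapshot(delta={i: b"delta"}, parent=tip)
        print(tip.read_block(0))       # walks all 1000 nodes to find block 0

        hydrated = HydratedSnapshot(blocks={0: b"base"})
        print(hydrated.read_block(0))  # direct lookup, independent of history

In the chained model the cost of reading a block grows with the number of snapshots taken since that block last changed, which is the traversal cost the speaker describes; in the hydrated model every restore point is one lookup away, which is what makes near-zero RTO and instant test/dev clones plausible.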
Info
Channel: Cohesity
Views: 11,644
Rating: 4.68 out of 5
Keywords: virtual machine, virtual machines, data protection, backup and restore, disaster recovery, enterprise data protection, continuous availability, virtual environments, hypervisors, vmware snapshot, enterprise backup software, recovery time objective, cohesity vs rubrik
Id: QKDywsxOwNs
Length: 7min 20sec (440 seconds)
Published: Tue Oct 13 2015