Best practices for VMware backup - Webinar

Captions
Okay — good morning, good afternoon, or good evening, and welcome to today's webinar, Best Practices for VMware Backups. My name is Rick Vanover and I'm a product strategy specialist at Veeam Software, and I'm happy to talk to you about this topic today. I work at Veeam, but this session will cover both Veeam's offering for VMware backups and some general practices that can help you in whatever you're doing, so we're going to talk a little bit about both ends of the spectrum. If you're not familiar with Veeam, I encourage you to check us out at veeam.com, and you can also follow us on Twitter at @veeam, or myself at @RickVanover.

Let's get started with a few administrative points about today's webinar. First, we're going to focus a lot on vSphere, and specifically the protection challenges around vSphere that all of us probably face. We're also going to talk about how Veeam can help in some specific scenarios; even if you're not using Veeam, you might be able to take some of these best practices with you. We do want to position Veeam as modern data protection for your vSphere environment that is powerful, easy to use, and affordable, and that's an important aspect of what we do. We'll also mention our free offerings — in fact, all of our product lines now have a free offering, and we can't make it any easier for you to give it a try; we hope you like it. Towards the end of the webinar I have some time dedicated to Q&A, so if you have a question, go ahead and ask it in the GoToWebinar panel. If I see your question and can answer it while we're going through the content, I'll do my best, but I may have to hold it until the end. Lastly, this webinar is being recorded, so you will get a link to replay it and download the PowerPoint slides; that will be sent to you via email a few days after the completion of today's webinar.

I do encourage everyone to stick around, because at the end of the webinar — let's see, we've got a good number of people online today — I'm going to draw seven lucky winners who will win a free book. We've got a great selection of virtualization and IT-related books. I'm going to ask you one question at the end, the answer is going to be one word, and seven lucky people will win a book.

So let's jump into it. This is a very loaded topic. The first thing we're going to talk about is what's most important with virtualization: storage and disk systems. My personal journey with virtualization actually introduced me to shared storage, and I'm sure many of you can relate to that. So when it comes to backups, we have to think a lot about our storage beforehand.

Let's talk about performance first. There's no way to dodge this topic, and I'm not telling you something you don't know: storage performance is very important for virtualization. When it comes to backups, we want to make sure we don't find the I/O tipping point. What I mean by that is, depending on how you do your backups, you might cause problems to come up: I have my virtual machines, I start my backup process, and that puts additional I/O burden on my storage — maybe because I haven't established enough separation. I'll talk more about separation in a bit; in fact, that's probably the most frequent word I'm going to use today. Things like Exchange servers can matter here: if you've loaded up a database on one disk system and you're backing up a very large Exchange server, it might be impactful on the storage. So I like to segregate things as much as possible, and the Exchange example is just one in particular: we can have one very large database on one volume, or we can have multiple virtual machines with multiple databases that are all a more manageable size. Personally, I like the latter, because we could have a distributed Exchange environment with more separation between the large data profiles.

All of that being said, we need tools and we need visibility, because measuring things within our storage can actually shape how we go about our backups. There are three indicators I like to use the most: IOPS, latency, and free space, and I put them in that order because they run from most sophisticated to least sophisticated. If you're only looking at free space, you're going to be in trouble, but if you have visibility into IOPS, chances are you'll be in good shape. Latency is the middle-of-the-road measure; it's a good one, but it takes a little more digging into the disk resource to see what it's capable of and what's normal. I like using all three of these indicators as a general rule. There are plenty of other specific storage measurements, but it's very important to have this visibility — because if not, like I say, the I/O tipping point will be discovered. The I/O tipping point is basically when you start doing something, and then do more of it, and push the storage system to a state where it still works, but the performance degradation is something you notice.

So, a lot of discussion around storage, but the one thing I really want to ingrain in everyone's memory is separation. Separation is the most important thing when it comes to backups. Let me give you the best example: taking your backups from, say, one VMware datastore and putting them on the same datastore. Of course that's not a good idea, in case you lose the datastore. You can apply that thinking to multiple domains of failure: if you back up from one disk tray to another disk tray, but they're behind the same SAN controller, the domain of failure is that controller. Go one step higher: one Fibre Channel SAN to another Fibre Channel SAN, but on the same fabric — what if the switches fail? You can keep going down the line of domains of failure. Whatever you have, my recommendation — the best practice — is to have as much separation as possible between your primary virtual machines, whose performance we're monitoring, and the place where we put our backups. The separation helps us in two ways: in terms of domains of failure, and also in this performance discussion. If we're reading from one place and writing right back to it for our backups — however they're done, take Veeam out of the question — reading from and writing to the same spot is going to be an adverse performance indicator. So separation helps you two ways: protection and performance.

When we talk about separation and where we want to put our backups, one question I always have to field is: where do I put my backups? We'll get to the transport modes in a second, but there are a lot of factors, and chances are everybody wants all of them: yes, I want deduplication; yes, I want cost efficiency; yes, I want high capacity; and yes, I want ultimate performance. The reality is that we can't always do everything. Deduplication is a big topic, and Veeam has an answer there: use Veeam's deduplication within a backup job, so if we're backing up n virtual machines, we deduplicate across all of those virtual machines, and that's really good stuff. Across multiple jobs, Veeam by itself does not deduplicate, but I have a low-cost answer to that, and that's Windows Server 2012 deduplication. Beyond that, there are plenty of disk targets — ExaGrid, Data Domain, HP StoreOnce, and a million other products of that type — that offer hardware deduplication. So if deduplication is a priority, in Veeam's world we have a no-cost option built into our jobs, a low-cost addition in a Windows Server 2012 repository, and then more sophisticated solutions like hardware appliances. Side note: those hardware appliances can receive backup data from other products, and so can Windows Server 2012 for that matter. The average company has 2.4 or 2.3 — I can't remember the number — backup products, so if you have a lot of things going on, that might be a factor.

Cost: getting the money for storage is the hard part; getting the storage isn't. When we talk about where we're going to put our backups, I personally like to start with disk. That might be disk to disk, disk to disk to tape, disk to disk to another disk, disk to disk to cloud — a lot of different arrangements — but I always like that first hop to be disk, so disk to disk to something else, possibly. There's a cost to disk; sometimes people want to go directly to tape, but there are performance considerations with that, and personally I don't know many people who really like tape. Capacity is probably the next priority, so you have to balance all of these: I might have the money for this much storage at this performance, or I could take that same purchasing power and get less capacity but higher performance. These are decisions I'm sure you've had to make; it's really tough to prioritize them, because you want them all — and I want them all too, for that matter.

Whenever you design around whatever storage resource you have, a couple of things come into play. First, separation again: it comes up when we decide where this disk resource will reside. Ideally, it's as far away from the source as possible, with the highest performance possible. A good example is within a rack: if you have a data center rack holding a SAN — even just a modular SAN with one or two drive trays — and you put the backup storage right underneath it, that's fine, unless the rack falls over, or that particular rack gets rained on when the ceiling leaks, something like that. Again, the domain of failure comes in, so as much separation as possible could mean putting the backup storage on the other side of the data center, on the other side of the building, or maybe in a data center closet if you have one. Little things like that are a best practice for separation. Then, when it comes to how these backups are communicated — the transport modes for our virtual machine backups, which we'll cover in a second — we want the best performance coming into these disk resources, so that we don't burden our VMware environment too much, and don't put excessive I/O or long-running processes onto our backup targets.

So when we talk about where our backups will reside, we have to answer where they're going to go, but we also have to identify how they're going to get there. In the VMware world we have three major options built on very specific VMware technologies. There's also a fourth option, and that's agent-based backups. If you're not familiar with Veeam, we don't provide an agent-based backup, which is why I haven't talked about it here: you could treat your VMware virtual machine like a physical server, but personally I'm not a fan of that approach, and Veeam's product doesn't go that route either. That being said, I want to talk about the three VMware-specific options.

The first is Direct SAN access, and I'm starting with it because it's my favorite. Direct SAN access is provided by the Virtual Disk Development Kit (VDDK) — or, probably a better parent family of technologies, the vStorage APIs for Data Protection. These frameworks provided by VMware vSphere allow backup vendors — and other things for that matter, like some of the virtual appliances out there, storage virtualization technologies, and so on — to communicate directly with the storage for a backup. That's really important. What happens is, when a backup runs — let's take a Fibre Channel or iSCSI example for this first case — the data is read directly from the LUN or the iSCSI target, and then something like Veeam picks it up directly from there and puts it into the disk repository we talked about a second ago. The good thing is that if enough separation is in place — meaning I've separated my Veeam infrastructure, or my backup infrastructure for that matter, far enough from the production element of my vSphere environment that there's no performance impact — then I can do a lot of good things. I can avoid using the network for my backup, so it's LAN-free from a backup standpoint. I can avoid cluttering the multipathing on my ESXi servers, because I'm coming through the backup server's own path. And if you put in even more separation, you can even take away the CPU hit. I talked about deduplication earlier; a big resource cost there — and this is the way Veeam implements it as well — is a lot of hashing and checksumming on disk regions, and that takes CPU. So I can take all of that away from my core cluster. I could still do this with a virtual machine — that doesn't mean I have to install Veeam on a physical machine — but maybe I'll put it on the dev cluster, or maybe I'll put my Veeam proxies and Veeam server on a free ESXi host, just to run there, not to back up from. A lot of options. You can really separate your backup components from your production infrastructure to dial up some ultimate performance. That's Direct SAN access, and that's my favorite way to go.

The middle option is virtual appliance mode, or hot-add. It can be used with iSCSI and Fibre Channel as well, but for NFS this is one of the best options — and of course, in the NFS realm, there's no Direct SAN. Virtual appliance hot-add actually attaches the virtual machine's disk to the backup server, so you can avoid sending so much I/O over the network. Hot-add is a little different if you've never seen it, so I have a graphic to help visualize it. On the left, that virtual machine is what we're backing up; it has a mapping directly to the VMDK file at the bottom, and over on the right, the backup server has a temporary mapping, basically directly to that virtual machine's disk. The magic is that with hot-add there's usually a virtual machine snapshot that's taken and later removed; the snapshot itself is not the backup, it's just part of the backup process. When that snapshot is taken, that first VMDK at the bottom becomes read-only, which allows a second virtual machine — the one doing the hot-add work — to map into it. One little trick here: hot-add is great, especially if you don't have 10 Gigabit Ethernet. The only negative to hot-add is the API sequencing. I mentioned the VDDK; there are a lot of calls that have to be made to vCenter to set this up and then to clean it up at the end, and that can be a minute or two just of API calls. A lot of things go into the timings — I throw that out as a generic answer — but the takeaway is that if there's a lot of data to move, this is a pretty fast data mover, especially when Direct SAN access isn't available. Again, I like Direct SAN best, but it's not available in all situations, so this is good to know.

The third option is network block device (NBD), or VMkernel mode. This is basically not using Direct SAN and not using hot-add, but reading the VMDK through the host. That isn't necessarily bad; it's just generally not the highest performing. If you're using 10 Gigabit Ethernet, that might be a different story — in fact, on 10 GbE this one will fly, because there's very little API overhead compared to hot-add, and 10 GbE is quite quick. So there's good and bad, but as a general rule, I prefer these modes left to right. You still want as much separation and as much performance as possible, but above all, use the separation between these different interfaces to get the most performance. Go through the drill in your environment and see which one performs the best — Direct SAN mode, hot-add, or network mode — and figure out which way to go. It's hard to make a blanket recommendation, but: if you're on iSCSI, I would say Direct SAN; if you're on Fibre Channel, Direct SAN or hot-add; if you're on NFS, hot-add or network mode; and if you're on direct-attached storage, hot-add — as a general rule.
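Those rules of thumb can be sketched in a few lines of code. This is purely illustrative Python — the mode labels and the function are my own shorthand, not a Veeam or VMware API — and, as said above, you should benchmark your own environment rather than trust any table:

```python
# Illustrative sketch of the transport-mode rules of thumb from the talk.
# The mode names and this function are hypothetical labels, not a real API;
# always test which mode actually performs best in your environment.

def preferred_transport(storage_type, has_10gbe=False):
    """Return backup transport modes in rough order of preference."""
    prefs = {
        "iscsi":           ["direct-san", "hot-add", "network"],
        "fibre-channel":   ["direct-san", "hot-add", "network"],
        "nfs":             ["hot-add", "network"],   # no Direct SAN for NFS
        "direct-attached": ["hot-add", "network"],
    }
    order = prefs[storage_type]
    # On 10 GbE, network (NBD) mode carries very little API overhead
    # compared to hot-add's vCenter call sequencing, so it can win.
    if has_10gbe:
        order = ["network"] + [m for m in order if m != "network"]
    return order

print(preferred_transport("iscsi"))               # Direct SAN first
print(preferred_transport("nfs", has_10gbe=True)) # network mode first on 10 GbE
```

The point of encoding it this way is simply that the choice is conditional: the "best" mode flips depending on protocol and network speed, which is why a blanket recommendation is hard to make.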
Now, here's the best practice that I think can't be stated enough: don't forget virtual machines. If you're like me when I was an IT admin, it drove you crazy when someone would deploy an application and wouldn't tell you it was now production. I tried to have a pretty good flow from development to production — we didn't promote virtual machines; we actually required our app teams to redeploy code and configuration — so we had some good procedural controls. But teams would still sometimes use development systems as production, or deploy a production system and not tell us everything they needed backed up. That's a surprise I don't want to have, and with agent-based backups there's a real risk that you don't have it all when you need it, if the application team hasn't told you what's going on. Depending on how you run your VMware environment, you might intermix quantities of development and production; that's a great resource management technique, but it's a little short on separation. Not all virtual machine environments are created equal, I get that, but it becomes really difficult to ensure that everything gets backed up when resources are intermixed.

VMware actually has a construct framework that can really help us out, and this is what I consider my best practice when it comes to backing up VMware VMs: use those constructs — datastores, folders, vApps, resource pools, a whole host, a whole cluster, the whole vCenter. Think of this approach as an inventory list: if I back up the VMs in a datastore, a folder, a vApp, a resource pool, and so on, that container produces a dynamic result every time. In fact, my most popular post on the Veeam blog last year was called "VM Backups: Forget Me Not," and that's exactly what I talked about there; I have the short URL up there, vee.am/fo. The takeaway is that if we back up by a container — a datastore, a cluster, a resource pool — we accommodate that organic growth, and to me that's the best safeguard. I love it, and I have an example I'm going to show you in a demo later.

On to the next best practice: never forget about our applications. That's really what our VMs do — they run apps. A lot goes into this, because there are a lot of different applications, as I'm sure you're aware. So what are the boundaries? Backing up a VM agentlessly — which is the first decision point I talked about — sounds great, until we start talking to our application people. What about log truncation? What about granular recovery? What about application consistency? Can you have it all? I believe yes, and I have some examples I can show you, plus plenty of Veeam materials on it. Specifically for VMware, we have a technology called U-AIR, Universal Application Item Recovery, and we have some pretty slick things for, say, Exchange Server and for files out of the guest OS; for SQL Server and Active Directory domain controllers we have wizards and even better tools in those realms. Beyond that — Oracle, MySQL, and on down the line — yes, you can. With our virtualization practice today we have two things going on: we deploy virtual machines very quickly, but we don't necessarily have an equally quick recovery expectation — our users do, though. If we can deploy a VM in an hour, or 20 minutes, but it takes me four hours to restore, do you see the disconnect? If we take the right approach, we can actually bridge that gap, but with our applications we still need to keep this in mind. So my best practice recommendation related to applications is to ensure that everybody is on the same page.
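The "forget-me-not" container idea from a moment ago is easy to show in miniature. This is a hypothetical Python sketch — the inventory and job structures below are made up for illustration, not Veeam's implementation — of why a container-scoped job catches new VMs that a static VM list silently misses:

```python
# Hypothetical sketch: container-scoped backup job vs. explicit VM list.
# A container job re-resolves its VM membership on every run, so a VM
# deployed after the job was created is still picked up automatically.

inventory = {"datastore1": ["web01", "db01"]}  # made-up vSphere inventory

def vms_in_job(job_scope):
    """Resolve a job's VM list at run time."""
    if job_scope["type"] == "container":
        return list(inventory[job_scope["name"]])  # dynamic lookup each run
    return list(job_scope["vms"])                  # frozen at job creation

container_job = {"type": "container", "name": "datastore1"}
static_job = {"type": "vms", "vms": ["web01", "db01"]}

# Someone deploys a new VM without telling the backup admin...
inventory["datastore1"].append("app07")

print(vms_in_job(container_job))  # the new VM is included automatically
print(vms_in_job(static_job))     # the static job silently misses it
```

The design point is the dynamic lookup: the container job's membership is an inventory query, not a saved list, so organic growth is accommodated without anyone remembering to edit the job.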
know what the app the application options are right because when you go down the agentless route it's a different game right maybe the app owner needs to put in a script to dump its specific application information on disk right maybe that's part of the solution so it's not a total under the hood thing you know as a platform provider with a vSphere environment we still need to have some interaction with the application owners so my best practice recommendation is to ensure that application requirements are still met aside from that yet related is to make sure that there's no guesswork problems or alerts or anything like that that they're just dismissed so I like to have good visibility into the changes of vSphere and then ensure that the backups are accommodated accordingly like good example is somebody added a new data store so to my previous example of backing up by data store that's all great until somebody adds a new data store and doesn't tell me if I'm in charge of backups so I have some visibility into that and when things go wrong what do we do right if something starts breaking we start turning off features right we kind of simplify the situation say a good example is something like a backup job fails because it can't log into something well well what if I have this option that lets me turn off that logon mechanism and work a different mode like say instead of going application consistent I can go crash consistent the job doesn't fail but the jobs not running optimal did I go back and fix that so have some some management some process around it so that we don't have these loose ends go on forever and ever so that's an important recommendation and specifically convene talk if you're ever diagnosing like a VSS issue well the easy thing is just to turn the SS off but that's not exactly the right thing to do that might be directing a new short-term but let's address that problem long term same thing around capacity sometimes we have to guess how much capacity 
do we need for a backup well one way to kind of prevent subsequent guesswork is deduplication and it could even be used like I said with Windows Server 2012 tee-do where we can land our VMware backups on that type of disk resource and we can actually be duplicate across beam backup jobs which are already do you be duplicated so that can be a great space savings technique especially if you run into a jam down the road so my best practice recommendation is to not bury things you know if there is a problem get it solved and then secondly ensure that the right people are looking at problems be it alerts reports etc etc so when it comes to Virtual Machine backups as a best practice I'm actually convinced that there's not just one answer and it could be one product you know of course I'd love that to be Dean but it might be a combination of multiple approaches or multiple strategies for the larger Virtual Machine backup process so let's take a look at that here so I have here what I call a layered data protection view and I've got seven things some something as simple as a backup on disk that's on site and then possibly a virtual machine replica on site now I'm actually not going to talk a lot about replication but beam backup and replication also provides over application engine and that's a great additional protection technique in fact you can do that on-site off-site or both for that matter right so right there that's with the backup and a replica on and off site that's three layers of protection we're going to talk a bit about cloud storage here in a second tape is an option and if you have been wondering beam is planning tape support later this year we are able to talk about that and we can also talk about San snapshots and off-site disk resources right so when you look at it this way we can really deliver a lot of options right now Vigne can help you into the future tense with with all of these ok but there's this old adage which sometimes old adages are just that 
they're old but this one actually still helps right there's this old three two one three instances of your data or your in this case virtual machines running two copies on-site one copy off-site etc etc well you can actually dial up more than that if you take this layer data protection approach so my recommendation is is the best practice is to have there's many protection strategies in play as possible and mean can help you with a lot of these all of these right now except tape which will be coming later this year so I kind of talked a little bit about the meme and some of these best practices but it's important to note that a lot of this that I talked about it's actually possible with our free tool so beam backup free edition allows you to do all kinds of things edit DMX files now don't laugh but that's actually pretty hard to do without a tool but basically we make it a right click just to edit a vmx file right from the VMFS or NFS data store the biggest capability within backup free edition is beans if however which is the ability to archive a virtual machine and take it with you WinZip for your VMs does it work but probably the two biggest features that come with the free edition are beam Explorer for exchange and VM Explorer for staying snapshots those are really powerful feature features that just came out in our version 6.5 last October so the takeaway here is that beam Explorer for exchange provides free recovery for exchange server emails and discovery techniques as well you can like Google for your exchange databases basically what it is and then you can recover items or hold mailboxes right out of that database beam Explorer course and snapshots is a great way to have additional recovery points throughout the day from the high efficiency high performance engine that comes with a cup and one in this example with the HP store virtual engine the left hand engine both I scuzzy and virtual storage appliance if you're using those we can actually recover whole 
VMs guest files and even emails from a SAN snapshot those are actually included for free good stuff so I want to take a quick moment and show you some of this stuff ok so let's jump into beam backup and replication here and I have my one example that I want to show you and that's what I call a datastore backup now in this particular example here I have a job and I just gave it a simple name backup datastore and look at this I'm not backing up individual VMs I mean I can do that I can go over here and say browse VMs in fact it'll be that host in clusters view right we all use that to look at our virtual machine environment where we say host cluster it's a resource pool and then the data centers are all populated etc etc but I like this data store view because I don't have to worry about which VMs are in place and especially in this environment I have I have just a crazy number of virtual machines how easy is it and you guys know this to not know which ones need to be backed up or to not accommodate for all the new that's right it happens they come in so what I've actually done here instead of adding individual VMs just flip it over here so you can see a little bit better I flipped the view data stores in VMs in that sense instead of browsing through that host in clusters for you I have this which is actually a little bit cleaner now let's take this example here I've got dad's direct attached storage I've got Nass so I've got some network attached storage and some sand volumes right so I've got a good nomenclature and then when I expand these of course I see the different be ends that are on here and then every time this job runs it just reinvent or ease that so this organic growth that's going to happen here will be updated every time that particular one has a lot on it but the takeaway is I could have a backup job per datastore or a job that has multiple data stores on here and then every virtual machine that's on here will be inventoried the deduplication will 
actually go across all of these different data stores but it's most important to note that all added VMs will be included now that's one example of a way to do it but let's say you've got someone awesome we all have our one offs don't we here's an exchange server not huge but I do have an exchange server and when I do my backups I got to make sure that again I'm taking care of my application so by default and I have it turned on here is what we call application aware processing this does a lot of great things it enables a consistent backup through VSS which is a framework for Windows consistency but also important right he it truncates the logs this is important for exchange because you don't want your logs to get out of hand so right here and by default if I turn on application-aware processing I'm taking care of the logs which is great and that way I really don't have to worry about some of the issues that may come around now I mentioned a word cloud and if you've been following veeam recently you might notice that we have launched our cloud edition so i'm actually i've got a job set up that i just launched right here i'm gonna go over show you this real quick and we have launched our Cloud Storage Gateway so once the backups land on disk and in this case they are landing here I have a local disc that's a T duplicated volume I have some some bean backups here these this particular examples hyper-v but it's VMware or hyper-v for this feature it doesn't matter at all so I've got my full backup 48 gig and then some incrementals like this and basically what I'm doing over here is I just launched a job and it's underway ok it's it's moving along swiftly and the what this job does once the backup job happens it's done right there it's actually going to kick off let's see if I can catch it get them both up here it's going to kick off a backup to the cloud so it hands it right off to our Cloud Storage Gateway and the one that just turned green there that's the cloud 
backup running. So just like that, once the on-site backup happened, I'm now sending that incremental data up to the cloud, and I'm actually sending what we call our Veeam metadata file to the cloud also, the VBM. This particular cloud is Amazon, and I'm limiting my bandwidth to one megabyte per second, and I'm putting my incremental data up there, and this will be not even a minute or two long. So right there I've provided myself two layers of protection, one of which is off-site. I like this new thing—this is new, it just came out on Monday from Veeam—and I'm estimating that my Amazon storage is eight dollars and ninety-four cents a month. Okay, so that's me: I'm not using Glacier, I'm not using Reduced Redundancy Storage, I'm using the full S3 storage, but basically I just took a 58-second backup to move a backup up to the cloud, and that was a small example, I'll give you that. So I can look at some history here, and let's take the biggest one I have: 46 gigs took me about 12 hours—again, I limit my bandwidth to one megabyte per second. But you get the point here that I've got a really good way, and it hands right off from the Veeam backup job, to do off-site storage with cloud technologies. Good stuff, and that's new. Then if you don't like Amazon, that's no problem: we have 14 or 15 different public cloud storage providers, including OpenStack if you have a cloud of your own. And here's a good one: file system storage. If by chance you have different storage resources in your environment, yet you want your Veeam backups to be replicated between them, this can help you. This will actually take it, say, from one NAS resource to local disk on another system, or from one SAN to another SAN—as long as we can access them in Windows. A lot of flexibility with this, right? So we can use this technique for more than just sending to clouds. And of course the household names are there—Amazon, Azure, Google Storage, HP storage, all the big ones—and surely some you haven't heard of
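Those demo numbers are easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch—this is just math, not Veeam code, and it assumes the 1 MB/s cap is a hard limit with no protocol overhead:

```python
def transfer_hours(size_gib: float, rate_mib_per_s: float) -> float:
    """Hours of wall-clock time to push a backup file through a fixed bandwidth cap."""
    return size_gib * 1024 / rate_mib_per_s / 3600

# The 46 GB history entry from the demo, capped at 1 MB/s:
print(round(transfer_hours(46, 1), 1))  # about 13 hours, close to the ~12 observed
```

The same arithmetic explains the 58-second incremental: at 1 MB/s, that run moved only a few tens of megabytes of changed data.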
are on there also, and they each have their own options, right? So here's my Amazon S3 account: I've got my bucket defined, I also have an option here to set my price and then set a limit if it goes over a certain price, and lastly I can even map a drive letter that says, hey, this drive letter is my bucket. We don't want to write our backups there, but it's a great way to look at what's on the cloud storage, or delete it if we wanted to do the housecleaning on our own. But this backup plan that we have has its own logic. Basically we set this up, say which bucket—in this case Amazon is configured for me—and I am going to do this advanced mode where I have some encryption, but I've dragged down and found the path that I want to back up, and I have a very, very deep path here. So there we go, there's the backup job, and I'm backing up—I'm going to do everything, but if I just wanted to do VBKs and not the incremental data, if I wanted to save bandwidth, I could do that, right? You have some granularity. We're technical people, right? We want those knobs and dials. Good stuff. And also I can drop in encryption, which is important, and in fact I can drop in many types of encryption—that's important also. And I like this one here: there's this notion of a purge. This number 3 right here will actually keep three versions of a file that we've changed, and then optionally, right here, I can keep a 30-day trailing window of retention. So after I delete something, I'll leave it in Amazon for an extra 30 days. There you go—that's another point of recovery for you. Good stuff. That's pretty much Veeam Backup Cloud Edition, and that's the new stuff; that's kind of why I wanted to spend a little bit of time looking at it, so hopefully that helped. But the real takeaway is that Veeam Backup & Replication is powerful, easy to use, and affordable, and we want to be there for all of your VMware virtual machines, and hopefully we can build that data protection best practice for your VMware virtual machines around Veeam. We
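The purge rule just described—keep the three newest versions of a changed file, plus an optional 30-day trailing window after a local deletion—can be sketched in a few lines. This is an illustration of the logic only, not Veeam's implementation; the function name and data shapes are made up:

```python
from datetime import datetime, timedelta

def versions_to_purge(versions, deleted_at, now, keep=3, trailing_days=30):
    """versions: newest-first list of stored version ids for one file.
    deleted_at: when the file was deleted at the source (None if still present).
    Returns the versions that are eligible for purge from cloud storage."""
    if deleted_at is not None:
        if now - deleted_at < timedelta(days=trailing_days):
            return []                  # inside the trailing window: keep everything
        return list(versions)          # window expired: the whole file can go
    return versions[keep:]             # file still exists: keep the 3 newest versions

now = datetime(2013, 2, 14)
print(versions_to_purge(["v5", "v4", "v3", "v2", "v1"], None, now))   # ['v2', 'v1']
print(versions_to_purge(["v2", "v1"], datetime(2013, 2, 1), now))     # []
```

The first call shows the version cap on a live file; the second shows a file deleted 13 days ago that is still protected by the 30-day window.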
would really like to say that Veeam is built for virtualization, modern technology, cloud computing—you know, x86 virtualization, that's where we play. And you know, the cloud storage discussion, that's a factor. I had to explain cloud technologies once to someone, and I said: you know, if you look up in the sky, what can you tell me about clouds? The answer I was looking for—I didn't get it—but the answer I was looking for is that all clouds are different. And that's true, and so are everyone's cloud requirements. So, is cloud storage right today? That's a good question. We have an option for you if you're interested, and I personally like it—I'm using it, as you see. Good stuff. And actually I've been using something like this for a while; in my own approach, I've been using data protection in the cloud for a while. But it's worth checking out the Cloud Edition. As I just showed, it basically takes that on-site data and moves it to a public cloud. Now, it's important to note the data still lives on-site, so that backup repository—we still want that high-performance recovery on-site. The recovery options with Cloud Edition actually do bring the data back on-site, so it's not a recovery to the cloud in this initial release. That's kind of an important distinction, which makes sense, because if I have an internal virtual machine, restoring it to the cloud might not be the best option right away. Okay, so as we kind of wrap up here, I want to just summarize that, you know, Veeam Backup & Replication is built for virtualization. And I didn't really talk about the scalable solution—I did talk about agentless, but scalable is an important topic. Basically, depending on the size of your VMware environment, we can help you out; we can grow with it. You can be a small shop and back up a single host or an Essentials Plus cluster, not a problem, but basically whatever the size of the VMware datacenter, we can tune and scale the solution to
meet it, with a lot of knobs and dials. I talked about the vSphere APIs for Data Protection. Changed Block Tracking—I didn't really mention that, but that really helps the performance of our backups. I think we throw out a 90% performance improvement figure for incremental backups, because we don't have to traverse the whole disk; to be fair, the hypervisor is telling us what's changed. Good stuff. I mentioned the Veeam Explorer for SAN Snapshots earlier; if you're using HP StoreVirtual storage, this is a great technology and I really encourage you to check it out. Basically you can restore an entire virtual machine, files and folders from the guest file system, and even Exchange items, right from a SAN snapshot. Now, this is not a replacement for backups—it's very important to note that you still want both. So again, another layer, another type of protection is possible here, in that you'll have your backups taken with Veeam, say, every night, which is a common schedule, but during the day maybe you'll take these low-impact SAN snapshots and have these additional recovery points for those daily disasters. All right, so this is an option as well; in particular, the StoreVirtual snapshot engine is very efficient and very low-impact—I encourage you to check that out—and this is also included in the Free Edition. Now, Veeam Explorer for Exchange—this is actually my favorite one. Again, we have to take care of our applications as a best practice for our backups, and the restore is just as important with that. So all editions of Veeam Backup & Replication, including the Free Edition, allow us to open the EDB file. Now this sounds a little bit crazy, but literally it's a browser into the Exchange database, and from there you can search, you can restore, and with the exception of the Free Edition, you can actually put these items back in the mailbox. So let me give you an example: a user deletes an email. In the Free Edition, you can open it, you can forward it back to them, you can save it as a PST,
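The Changed Block Tracking idea mentioned above is simple to illustrate: the hypervisor hands the backup tool the list of block IDs that changed since the last run, so an incremental reads only those blocks instead of scanning the whole virtual disk. A toy sketch of that concept—nothing to do with the real VADP API, all names here are made up:

```python
def incremental_backup(disk, changed_ids):
    """Read only the blocks the hypervisor flagged as changed since the last backup."""
    return {block_id: disk[block_id] for block_id in changed_ids}

disk = {i: b"\x00" * 4096 for i in range(1000)}  # toy disk: 1000 blocks of 4 KiB
changed = {17, 42, 900}                          # what change tracking reports as dirty
delta = incremental_backup(disk, changed)
print(f"read {len(delta)} of {len(disk)} blocks")  # read 3 of 1000 blocks
```

Reading 3 blocks instead of 1000 is the whole trick: the incremental's I/O is proportional to the change rate, not the disk size, which is where claims like a 90% improvement come from.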
you can save it as an MSG; but with the paid editions, and specifically the Enterprise edition, you can actually inject it back into their mailbox. You can also, if you wanted, move one or more items back into a different mailbox, even in a different Exchange organization. I had someone recently use this as a migration tool, because their company was being acquired and part of it was being divested, and they were going to two different Exchange organizations. For the small group that was being divested to another organization, they actually used Veeam Explorer for Exchange to migrate them to a new Exchange organization. Good stuff. So the takeaway here is that Veeam is modern data protection that is built for virtualization, and we like to say that we're powerful, easy to use, and affordable. I tried to talk about a lot of different things here today, and you guys have been a great crowd. I do have some questions coming in, which is awesome. But basically, when we're built for virtualization, it opens up a new approach, right? And if you're not familiar with Veeam, we really encourage you to check it out; the trial editions and the free products make that quite easy to do. So let's go over to the Q&A—I've got a couple in here, and if you have a question, go ahead and drop it into the Q&A panel, and if you're sticking around for the book, hold on, we'll transition to that in a moment. So Andy asks: "I have a question related to Small Business Server—how do you do it? We have managed to hack it." Sounds like you've had a little adventure with a couple of examples there, Andy. You know, the key thing with Small Business Server will be application-aware processing. Now, it is supported, but myself, I've not used it much. I do encourage you to go to the forums and maybe do a quick search, Andy, on SBS—which is the second-to-last link here. People who are using SBS are discussing it; our product management, our tech support, myself—a lot of people are on there, as well as other
users: "Well hey, this is what I did," or "Hey, here's my script for this." Great resource—I encourage you to check that out. Patrick asks: "Are you going to do a short Exchange item-level recovery video?" I can if you'd like—since you asked, sure, let's do that real quick. If I go over here and I want to do a restore, it's very simple: VMware, what type of restore, application item, Microsoft Exchange. Now here's the magic, Patrick: what Veeam Explorer for Exchange does is actually find our Exchange server—and I have it named accordingly, just for easy demo use. Find the point in time that we do our backups; I did one two hours, an hour and a half ago, locally here. I'm going to take the most recent one, go ahead and launch this up, and basically Veeam Explorer for Exchange will mount the Veeam backup file and browse into the file system of this virtual machine—which was called vexdemo zero-zero, because I messed up the first one. But basically we're going to browse that file system, and what we actually do is read the Windows log of the VM and say: hey, where is a log entry that says it's starting Exchange databases? If I find the EDB files in a log entry on this backup file, I'll go ahead and open them up in this tool. Now, I could browse through and just find the EDB files myself, but I'm going to let the logic do its thing real quick. So I'll let that go for a second, and I'm also going to get something else up here, because I want to show you guys another thing real quick. A lot going on here, busy day—hope everybody's Valentine's Day is going well; we celebrate that here in the United States, and I'm sure many of you do as well. So I just switched over to getting something ready for the Q&A, but basically we're reading the Windows log and we're going to find the EDB files. Now, this particular Exchange server has two EDB files, and I set that up on purpose, because I realized that you might have multiple databases on an Exchange server, and in this particular case
we're going to load it, and we're doing a quick index of the contents—what mailboxes, what folders are in place, etc. We're not actually looking at the whole database; we're only doing a quick index. So here it is, Patrick: we expand it, and there are the user names in this mailbox store. I'm going to go into this particular user name, and I think there's something like 50,000 items in this mailbox, so this is kind of a good example—it's a lot; it used to be five, now it's thirty. But basically I have this particular inbox, and I can open up this guy, and this actually is just filled with Veeam log messages—these particular mailboxes are my demo mailboxes, some of them are from me, but again, they're backup logs. Okay, so once I have these, I can actually go over here and do the restore operations. I've just opened it, but I could also save it as an MSG file—you know, the Outlook format. I could also forward it if I had Outlook installed, and I don't have Outlook installed here. And I can also export it to a PST, which actually might be more efficient if I had multiple items, for example, or if I wanted to do the whole folder I could put it in a PST. Now this option right here, to restore it back to the mailbox—that's pretty powerful as well. But I also have some discovery capabilities, so I can look for—I know that there are some messages from Chewbacca, or that mention the word Chewbacca—so you see, I just typed and I had a quick little search. So that's a quick way to view Veeam Explorer for Exchange. Patrick, hopefully that helped you, hopefully that was beneficial. So: Andy, check the forums; Patrick, hopefully that helped. Martin asks: "Is there an additional license fee for Cloud Edition, or is it covered with active support?" Martin, if you're an active customer, there is an add-on for Cloud Edition, so the cloud storage gateway is not included with existing support—it's a new product in that sense, so check with your sales team. Sander asks: will Veeam
Explorer for Exchange support Exchange 2003? No—it is supported for Exchange 2010, and Exchange 2013 support is going to be released very soon. And for those of you who have asked a couple of times: you will get a copy of these slides and a link to replay the webinar, sent in a few days to the email address that you registered with. Iqbal asks: "So you showed how to back up a whole VM or datastore, but can we back up specific files or folders in a VM?" No, Iqbal, we are an image-based backup of the whole VM. We can exclude VMDKs, but we couldn't say "just back up the data directory on this Windows drive letter." Craig asks: "Is the license for the software unlimited?" Let me change your question a little bit, Craig: our licenses are per CPU socket. So if we have a two-socket VMware host and we have five of those, we would need 10 Veeam licenses, and those 10 licenses would enable us to have an unlimited number of virtual machines and an unlimited array of applications—we don't have Exchange or SQL agents, for example—and we would not be worried about how many terabytes you're backing up. Now, if you're using the Cloud Edition, we do not include the cloud storage, so that Amazon account would be separate from that. Hope that answers your question, Craig. Okay, we are almost out of time. I am going to switch up just a little bit real quick, and we're going to transition to the Q&A where you can win a book—actually, sorry, this is the question I'm looking for here. Basically I'm going to ask you one question, and the first seven people who answer it correctly will win their choice of these popular books. The one I'm recommending right now is the vSphere 5.1 Clustering Deepdive, but there are some other ones on there that are excellent as well. Okay, get your Q&A panel ready, here is the question: there is one word that I used a lot at the beginning of the webinar, as a protection and a performance technique. What word did I use? And a lot of people already got it, and the word was
—"separations"? Veranda, that's not the right answer, but I'll give you a win for that; that's a good one. So it's actually going to be seven plus one: Veranda, Greg, Dan, Gavin, GE, Ryan, Raghuveer, and Patrick—I think that's what you were trying to do. You guys are all winners; you've been an excellent crowd. I appreciate everyone for attending and sticking around to the end of this webinar. I hope you found this valuable, and if you had some Veeam questions, everybody should be able to take something away. All right everyone, thanks again for attending, and that concludes today's featured webinar.
Info
Channel: Veeam
Views: 44,398
Keywords: VMware backup, virtual machines, restores, application backup, disk systems, Veeam Backup & Replication, Software (Industry), Management, VMware (Business Operation), Technology, Data, System, Best Practice, Office
Id: spG3JvF-BhQ
Length: 57min 20sec (3440 seconds)
Published: Fri Sep 11 2015