Understanding SQL Server Backup and Restore

Captions
Good afternoon, everybody, and welcome to DBA Fundamentals Down Under. My name is Warwick Rudd and I'm your host. This afternoon's session is Understanding SQL Server Backups and Restores by Tim Radney. I'd like to put a big thank-you out to SQL Sentry for supporting us with this session. Before I hand over to Tim, a few quick call-outs. For the upcoming SQLSaturday events in Australia and New Zealand: this coming weekend we've got SQLSaturday South Island, and if you haven't registered there's still time to register and see some great presentations from local and international speakers. Then in May we've got the event in Brisbane, and we're also running a pre-con on Power BI for that, so feel free to register for those events; it'll be a great day of sessions from local, interstate, and international speakers. Next month we've got Ben Miller talking to us about SQL Server encryption; that session will be updated and made available for registration very shortly, so I look forward to seeing you again next month. With that, I'll hand over to Tim to take us through SQL Server backups and restores.

All right, thank you very much. Can you see my screen okay? Thank you to the PASS DBA Fundamentals virtual chapter for having me speak. This is Understanding SQL Server Backups and Restores, so let's dive straight in. I'm with SQLskills; we're a small team of Paul Randal, Kimberly Tripp, Glenn Berry, Erin Stellato, Jonathan Kehayias, and myself. We do instructor-led training, known as our Immersion Events, in the US, the UK, and Australia this year, and we do online training with Pluralsight. On the consulting side of the business we do health checks, hardware and performance tuning, and upgrades; we also have a remote DBA service where we do monitoring and troubleshooting for instances. We speak at user groups and conferences such as the PASS Summit and SQLintersection. The big takeaway here is to become a SQLskills Insider: Paul sends out a newsletter every couple of weeks that includes an Insider video with some cool little tidbit, a how-to type video, so it's very cool to get those in your inbox. A little more about the Immersion Events: we have training on everything from the accidental DBA to two different weeks of performance tuning, an Immersion Event on high availability and disaster recovery (that one is one of my favorites), business intelligence, SSIS, data science — you name it. Just check out SQLskills.com/training.

So who am I? I'm a consultant, trainer, speaker, author, all those good things; a little vanity slide. I blog at SQLskills.com and timradney.com, I'm on Twitter at @tradney, and I'm a Data Platform MVP. The real takeaway from this slide is that I dabble in electronics and aquaponics, and I farm chickens, goats, and tilapia, so I'm one of the numerous SQL farmers out there. I mentioned Pluralsight: if you're not a current subscriber, email Paul@SQLskills.com with the subject line "user group Pluralsight code" and he'll send you a code you can use on Pluralsight's website to get a 30-day free trial of all the SQLskills content, which is over 120 hours of SQL Server training. No catch, no credit card; just provide the code and your email address to create an account and get 30 days free.
Now, on to the actual session. I'm going to go over your backup types, talk about why service level agreements matter, which backup type is correct for you, how to know that your backups are valid, and an approach for backing up large databases. Then we'll switch gears to knowing how to restore your backups: having a restore strategy, and demos of performing point-in-time restores and piecemeal restores. Ultimately the goal is for you to have a plan, a solid restore strategy, and a proven disaster recovery plan.

Jumping straight in: backup types, and which is the correct choice for you or your organization. There are several types: full, partial, differential, filegroup, file, log, and copy-only. For any organization, some combination of the above is usually warranted to meet their service level agreements. Within a service level agreement you have your recovery point objective — basically how much data you can afford to lose — and your recovery time objective — how long you can be down. If you ever ask a business unit either of those questions, they're going to tell you no data loss and no downtime; you have to be knowledgeable enough to explain why that's almost impossible and to negotiate something realistic. What I've found to be close to the industry standard is log backups every 15 minutes, so up to 15 minutes of data loss, and anywhere from two to four hours of potential downtime even for the most mission-critical systems. It's unrealistic to have multi-terabyte or half-terabyte databases and avoid a somewhat substantial downtime, such as two to four hours, without very complex and expensive systems in place. Bringing a system up in another data center is a huge event: it could be 30 minutes to an hour before someone makes the call and says okay, fail over to the DR site, and all that infrastructure has to be up before you can even start restoring a database. So know what your capabilities are, and where they don't mesh with expectations, be able to address it and state the facts of what you can and can't do.

I mentioned very large databases and an approach for them. "Very large" is subjective; what makes a database very large? Typically I look at hundreds of gigabytes or more. If it's a 500 GB database, that's pretty large, and at that point I need to start thinking: can I partition the database? What's the growth potential — is this thing going to grow from 500 GB to a terabyte or more? As you get these very large data sets, your backups and restores start taking much longer. If that 500 GB database has ten years' worth of data in it, most of that data is going to be stale. Could I split that data into multiple filegroups? If the data isn't changing, can I put it in a filegroup that's read-only? With it in multiple filegroups, the older, stale data that's accessed less frequently can be brought online at a later time during a DR scenario; I don't have to restore all 500 GB at the same time. So if I can split the data into multiple filegroups, using either partitioned views or a sliding-window partition, and migrate it into individual filegroups, that's a good option, because it allows for a partial restore: you can restore the active partitions or filegroups more quickly.
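For reference, here is a minimal sketch of the three backup types the rest of the session leans on most heavily; the database name and paths are hypothetical placeholders, not the demo's actual files.

-- Full backup: the complete database at a point in time.
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Differential backup: everything that changed since the last full.
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

-- Transaction log backup: the log since the last log backup
-- (requires the FULL or BULK_LOGGED recovery model).
BACKUP LOG [SalesDB]
TO DISK = N'D:\Backups\SalesDB_LOG.trn'
WITH COMPRESSION, CHECKSUM;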
As a scenario: a client I used to work with had a database that was a terabyte in size. It stored financial data, but the type of financial data that could not be changed once it was written — there were no updates — and we had to keep seven years of history. Since the data wasn't changing and it already had a good key on it with a datetime, I could move the data into yearly filegroups. Once a year's data was in its filegroup I could set it to read-only, and once I had backed up that filegroup, knew I could restore it, and had already run CHECKDB so I knew everything was consistent, I could then focus on backing up just the primary filegroup, which held the active data. My nightly backup, and the data I was concerned with being able to restore the quickest, went from a terabyte down to less than 100 GB, because I kept the active rolling year in that primary filegroup. If we had an event, I could restore just the primary filegroup to get the active data back online for the consumers, the external customers. Then, during maintenance windows or as space and time became available, I could start bringing the yearly filegroups online in the most critical order, the newest data first. I'll demonstrate this; it's a lot less complicated than it sounds and is fairly easy to implement, so I'll walk you through it and demo bringing individual filegroups online one at a time.

You need to know how to restore transaction logs. Usually when I have an audience in front of me I ask: how many of you have ever restored a database? Every hand goes up. Then I ask how many have restored a full backup and then applied transaction logs to it; if I had a hundred people in the room, all hundred hands would go up for the first question and I'd have four or five hands still raised for the second. You need to know how to step through that process, and you also need to know how to perform a tail-log backup if you can. What is a tail-log backup? The tail of the log is all the transactions that have occurred since the last transaction log backup. In a traditional OLTP system you always have a tail end of the log; that's the potential data loss. If you're backing up the log every 15 minutes, the last log backup was five minutes ago, and you have a catastrophic failure, you have five minutes of potential data loss. What I'm going to demonstrate is that if you have access to the log file, the LDF, you have a very high probability of still being able to back up the tail end of that log; you'll see that in a demo in a few minutes.

So: you have your full backup and you've been able to take a tail-log backup. You restore your last known good full backup; then, if you're doing differential backups, you apply the most recent differential, then restore your transaction logs taken since the differential, and then restore the tail-log backup you were just able to take. If you couldn't take the tail-log backup, it's your most recent full, the differential if you have it, and then the logs since the differential or since the last known good full. You'll see a demo of stepping through this, and I'm going to provide all the scripts and the sample databases so you can step through it in your own time. But you need to practice.
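As a hedged sketch of that order of operations — file names are placeholders, and the differential step simply drops out if you don't take differentials:

-- 1. Back up the tail of the log first, if the log file is still accessible.
BACKUP LOG [SalesDB]
TO DISK = N'D:\Backups\SalesDB_TAIL.trn'
WITH NO_TRUNCATE;   -- works even if the data files are damaged

-- 2. Restore the last known good full backup, leaving the database restoring.
RESTORE DATABASE [SalesDB]
FROM DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH NORECOVERY, REPLACE;

-- 3. Restore the most recent differential, if you take them.
RESTORE DATABASE [SalesDB]
FROM DISK = N'D:\Backups\SalesDB_DIFF.bak'
WITH NORECOVERY;

-- 4. Restore each log backup taken since the differential, in order.
RESTORE LOG [SalesDB]
FROM DISK = N'D:\Backups\SalesDB_LOG_01.trn'
WITH NORECOVERY;

-- 5. Finish with the tail-log backup and recover the database.
RESTORE LOG [SalesDB]
FROM DISK = N'D:\Backups\SalesDB_TAIL.trn'
WITH RECOVERY;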
The time to go through this for the first time is not in production at two or three o'clock in the morning when your organization has had some catastrophic failure. You want to be comfortable with it, to where it's second nature; if you're a production DBA and your job is backup and recovery, this should be simple.

So have a plan. You have to know how to recover, and you need to test that your backups actually restore. At a company I worked at for a number of years, we would encounter, on occasion, someone going in and commenting out a particular database in our backup routine because something was going on with that database and they didn't want a full backup creating additional I/O or pressure on it. They would forget to go back and remove the commented-out section of code, so we could go days, weeks, or months without a full backup — except that we were testing restores of our backups on a weekly basis. If a database didn't back up and we didn't catch it in another report we were using, we would see it when we went to restore that database and found we didn't have an actual backup. That's one reason it's good to test that your backups restore. Another reason: somebody could modify your backup job to back up to DISK = 'NUL', so you're taking an empty backup. Or something could be wrong with the deduplication device you're writing your backups to, and when you try to restore it can't rehydrate your backup file, so you have no good backups. Regularly testing restores is the most solid way to know that you have good backups.

You also need a disaster recovery plan: if something happens in your organization, how do you recover the database, and what do you need? Well, first you need a server. When I ask this of a physical audience, I'll pick on somebody in the room: your primary, most critical database server failed — what do you do? "We restore our backups." Great — to where? Another server. Okay, where is it? If you have to go to Dell.com, you've got a problem. You need this plan already in place; you need to know where you're going to restore to. So say you have another server you restore to: in what order, and with what process, do you restore? If you restore just your user databases, is the application going to be able to connect? If firewall rules are in place, yes, it can connect — but can it authenticate? No, not unless you've restored the master database. So you either need a process to restore master, or you need your user objects scripted out: your logins, your passwords, those things. I've included the knowledge base article for the sp_help_revlogin script, which lets you script out your logins and permissions and simply add them to another instance. Because if you have to restore master, you have to be at the same minor revision number: if you're on version 12.0.4100, 2014 Service Pack 1, you've got to be on 2014 Service Pack 1 to restore master. What if you've had some cumulative updates? You've got to have those cumulative updates applied to restore master to a secondary. If you only have the ISO of the downloaded media and you don't have all those cumulative updates saved in your DR site as well, you may not have internet access to go out and get them.
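A hedged sketch of what a basic restore-validation pass might look like; the logical file names in the MOVE clauses are assumptions you would replace with the output of RESTORE FILELISTONLY, and only the final actual restore truly proves the backup.

-- Read the backup header to confirm what the file actually contains.
RESTORE HEADERONLY
FROM DISK = N'D:\Backups\SalesDB_FULL.bak';

-- Verify the backup is readable and complete (most useful when the backup was
-- taken WITH CHECKSUM); this still is not the same as a real restore.
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH CHECKSUM;

-- The only true test: restore it somewhere (here as a differently named copy)
-- and run DBCC CHECKDB against the restored copy.
RESTORE DATABASE [SalesDB_RestoreTest]
FROM DISK = N'D:\Backups\SalesDB_FULL.bak'
WITH MOVE N'SalesDB'     TO N'E:\RestoreTest\SalesDB_RestoreTest.mdf',
     MOVE N'SalesDB_log' TO N'E:\RestoreTest\SalesDB_RestoreTest_log.ldf',
     RECOVERY;

DBCC CHECKDB (N'SalesDB_RestoreTest') WITH NO_INFOMSGS;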
A good safety net is to have those objects scripted out. Make that part of a nightly job, so you're outputting that data and storing it with your backups; then you're in a much better place, where you can add your logins without having to restore master, and then restore msdb to get your jobs back. This is stuff you need to have planned, documented, and regularly tested.

You also need to know your facts: how long is a restore going to take? If you have that restore-validation process in place and you're restoring your database backups at least monthly, you have the data: it's all in msdb, because msdb records all your backup and restore history. Create some reports to log that information — how long your restores are taking — and keep another database where you track the service level agreements for those databases. If you have a mission-critical system that you must be able to restore within two hours, but over the past two or three years the database has grown to a size where you can't, you know you can't meet that service level agreement. That's when you start escalating to management, stating that we need to do something else: we can't just rely on restoring from a full, a differential, and transaction logs to keep this system highly available; we need to start looking at availability groups, log shipping, database mirroring, some of those HA/DR technologies. Know those facts. The last thing you want is a C-level executive asking how long it will take to bring back the most mission-critical, livelihood-of-the-business database, and you answer, "I don't know, we've never done a restore of it." That's a career-limiting move.

That's really the whole slide deck; from here it's demos and then a summary, so we'll get out of PowerPoint and into Management Studio. What I always like to do for this demonstration is start with a clean slate, so I'm going to restore a database called BackupSample from a known good state and bring the database online. I'm going to clear my backup history just for the purposes of this demo, and it's a good time to talk about something I mentioned earlier: msdb contains all your backup and restore history. If you're running a very common industry script for your backup routine, it will take care of backup history if you enabled that job, and most third-party tools that schedule your backups will clean up backup history; but if you've created your own process and you're not regularly purging backup history, msdb is going to continue to grow. It's very simple to purge: you run the stored procedure sp_delete_backuphistory and pass it a date. With the syntax I'm using, if you pass GETDATE() minus 30, 60, or 90 days — however much backup history you want to maintain — it will delete all history prior to that date. Right now I'm deleting everything before tomorrow, so I'm clearing msdb of all backup history. Typically you'd use 30, 60, or 90 days, schedule it in a job, and it will help keep msdb lean and more optimal.
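A minimal sketch of that purge with a 90-day retention; the retention window is the part you would tune, and the whole thing is typically scheduled as a SQL Agent job.

-- Purge msdb backup/restore history older than 90 days.
DECLARE @cutoff datetime = DATEADD(DAY, -90, GETDATE());

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;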
On to the actual backup and restore demo. What I want to show you is the top 10 sales order IDs; our top sales order ID is 75123. I'm going to back up my database and take an initial log backup, and then we'll step through adding some transactions: we add 75150 and back up the transaction log, add 75200 and back up the log, insert another record, 75250, and then take a final log backup. So we added three transactions and backed up the log after each one. What would our restore script look like? The server dies right now: do we restore the full we took and then our last transaction log backup, or do we need the full backup and all four transaction logs, in the order the logs were taken? I'm one of those DBAs who likes to script everything, so I have a script, which will be made available to you, that regenerates my restore script for me. Running it, we can see that we have to restore our full, then the four log backups we took, in the order we took them, and lastly restore the database BackupSample WITH RECOVERY.

What I like to recommend to clients, and to anyone watching this demo, is to incorporate a script like this into your backup process so you generate your restore script after every backup you take — whether it's your fulls, your differentials, or your transaction logs. If every one of those jobs generates the restore script for that point in time, then when you have a disaster you can just grab that script; if you can take a tail-log backup you add that syntax, and if you can't, you simply run the script and have your database back to the most recent backup taken. It's a really good practice to get into, and it takes some of the complexity out at two o'clock in the morning when you can just copy, paste, and get your database back online.

What about differentials? Differentials are very nice, because you're not backing up the same data in a full backup every single night. If you have a 100 GB database and backing it up with compression produces roughly a 20 GB file, then over the course of seven days you've backed up that same 20 GB backup file seven times. If you do differentials and only a gigabyte of data changes each day, then with a full on Sunday, Monday's differential would be about a gig and Tuesday's up to two gigs; if some of that initial gigabyte was merely updated or modified again, Tuesday's would be slightly less than a full two gigs, but for simple math say it grows a gig a day: Monday one, Tuesday two, Wednesday three, and so forth. That's still considerably less than the full backup. Think of a differential as a cumulative log backup: it's all the data that has changed since the last full. So it's really nice, if you can, to get into a weekly full, daily differentials, and transaction logs based on your SLAs.
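Tim's restore-script generator isn't reproduced here, but the information that kind of script draws on lives in msdb. A hedged sketch of the underlying query, assuming a database named BackupSample — from this list you can read off the last full, the last differential after it, and every log since:

-- List the backups for a database, oldest to newest, so the restore order is visible.
SELECT  bs.database_name,
        bs.type,                 -- D = full, I = differential, L = log
        bs.backup_start_date,
        bs.backup_finish_date,
        bmf.physical_device_name
FROM    msdb.dbo.backupset AS bs
JOIN    msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE   bs.database_name = N'BackupSample'
ORDER BY bs.backup_finish_date;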
Again, let's start from a known state: we restore BackupSample back to normal, clear the backup history, and take the initial full and log backups for the demo. This is similar to the first demo, except that after we insert transaction 75150 and take a log backup, we insert a second transaction and this time take a differential backup; you'll notice the only syntax difference is BACKUP DATABASE ... TO DISK ... WITH DIFFERENTIAL, so taking a differential is quite simple. Then comes a third record with a log backup, a fourth transaction, 75500, and a final log backup. So we have full, log, diff, log, log. What would the restore script look like: the full, the diff, and all the logs, or the full, the diff, and each log taken after the differential? Again I have a very similar script, except this one also looks for differentials if you have them. We take a look and see a RESTORE DATABASE for the full, then the differential, then the last two logs taken after the differential, and finally we bring the database back online WITH RECOVERY.

All right, now on to the piecemeal restore. I mentioned in the slides that if you have a very large database and you can partition it, you can back up individual filegroups. We'll start with a clean slate again and bring the database online. What I'm showing here is old, stale AdventureWorks data (I need to update my modified dates to something more current); the data runs from 2009, 2010, and 2011 into 2012, and there are 121,000 rows. I'll do a full backup and then back up the log to NUL, just because I need to have had a log backup beforehand. Now let's look at the BackupFilegroupSample database; I'll refresh because I don't know what state it was in before. Looking at the properties, under Files we see just one data file and one log file, and under Filegroups we have only the PRIMARY filegroup. So let's partition this database. I'm going to create three additional filegroups, one each for 2009, 2010, and 2011, and then create a file for each of those filegroups, named BackupFilegroupSample2009.ndf, 2010, and 2011. (Did I not do this in order? Sorry, I didn't run the first script.) So we create the filegroups, then create the files and associate them with the filegroups. Looking back at the database, we see the three additional files; notice the initial size is one megabyte with autogrowth of one megabyte — that's modeled after model, and it's not a best practice, but for this demo it will work out very nicely. Under filegroups we now have 2009, 2010, and 2011, each with one file associated.

Now let's partition the data. This isn't a partitioning session, so all I did was step through the partitioning wizard and it created this nice script for me. I'll run it, then do a full backup, and then back up the database specifying each filegroup individually. And just so you can see we haven't messed with any of the data, we still have 121,000 rows. Now comes the cool part: we're going to restore this database specifying the PRIMARY filegroup, FROM DISK, and the special sauce — the syntax that makes all this possible — is WITH PARTIAL. We choose WITH PARTIAL, bring the database online WITH RECOVERY, and choose REPLACE because the database already exists.
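A hedged sketch of that setup, with hypothetical filegroup and file names standing in for the wizard-generated script from the demo:

-- Add a filegroup per year and one data file for each (paths are placeholders).
ALTER DATABASE [BackupFilegroupSample] ADD FILEGROUP [FG2009];

ALTER DATABASE [BackupFilegroupSample]
ADD FILE ( NAME = N'BackupFilegroupSample_2009',
           FILENAME = N'D:\Data\BackupFilegroupSample_2009.ndf' )
TO FILEGROUP [FG2009];

-- Once the older data has been moved into FG2009 it can be made read-only...
ALTER DATABASE [BackupFilegroupSample] MODIFY FILEGROUP [FG2009] READ_ONLY;

-- ...and each filegroup can then be backed up on its own.
BACKUP DATABASE [BackupFilegroupSample]
FILEGROUP = N'PRIMARY'
TO DISK = N'D:\Backups\BackupFilegroupSample_PRIMARY.bak'
WITH COMPRESSION, CHECKSUM;

BACKUP DATABASE [BackupFilegroupSample]
FILEGROUP = N'FG2009'
TO DISK = N'D:\Backups\BackupFilegroupSample_FG2009.bak'
WITH COMPRESSION, CHECKSUM;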
The only time you can use WITH PARTIAL is when you're restoring the primary filegroup, so the only difference from a regular restore is that we're specifying FILEGROUP = 'PRIMARY'. Once the primary is done, we can do the same thing with the individual yearly filegroups. Let's execute just the primary filegroup restore, and we see it processed successfully. If we look at state_desc from sys.database_files, we see that BackupFilegroupSample and the log are ONLINE, but the three additional filegroups are in a RECOVERY_PENDING state.

What happens if we query data we know is within the primary filegroup? It queries with no problems; we get 50,234 rows. But what if we access data we know is not in the primary filegroup? Querying for dates greater than 2009, we get: one of the partitions of the index for table dbo.Sales (it gives us the partition ID) resides on filegroup 2009, which cannot be accessed because it is offline, restoring, or defunct; this may limit the query results. And because we started low, in 2009, which isn't available, we get no data at all. So let's bring 2009 online; looking at state_desc again, 2009 is now ONLINE. If we put boundaries on the query — everything after January 2nd and before December 31st of 2009 — we can get all of that data back, about 43,000 records. If we query 2009 and above, it throws the same error again, this time for filegroup 2010, and as it stated before, that limits the query results: we see all of our 2009 data, but the query broke once it hit 2010. We know that 2009 and 2012 are available, so if we put boundaries in and search just that data, we get 94,000 records.

At this point, if your users are made aware of what data is available and their application lets them put date boundaries on queries, they should be in good shape and can at least function and work. If your application already has date boundaries for the external consumer — think of a financial or online banking application where you're already limited to 30, 60, or 90 days — and that data resides in the primary filegroup, those users are already fine, because their stored procedures already pass GETDATE() minus 30 or 60; it's already taken care of. Your back-office staff who have access to all the data would need to be told what the date boundaries are while you bring the other filegroups online. So we restore 2010 and 2011 as well, and now the entire database is back online; querying all the data, we have our 121,000 records back.

That's kind of cool: if you've never seen a piecemeal restore, you've just witnessed it, and it's not overly complex. You do have to bring the primary filegroup online first, and there are some rules around read/write filegroups and how you restore them with the transaction logs, so there's a little more you'd have to read up on. But you just have to know what your capabilities are, practice it, know that you're good with it, and document the process so others can follow it. It's a really cool way to get a very large database back online much, much more quickly.
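A hedged sketch of the restore side, using the same hypothetical names as above:

-- Piecemeal restore: the PRIMARY filegroup must come back first, WITH PARTIAL.
RESTORE DATABASE [BackupFilegroupSample]
FILEGROUP = N'PRIMARY'
FROM DISK = N'D:\Backups\BackupFilegroupSample_PRIMARY.bak'
WITH PARTIAL, RECOVERY, REPLACE;

-- Check which files are usable; the yearly filegroups show RECOVERY_PENDING.
USE [BackupFilegroupSample];
SELECT name, state_desc FROM sys.database_files;

-- Later, bring the older read-only filegroups online one at a time.
RESTORE DATABASE [BackupFilegroupSample]
FILEGROUP = N'FG2009'
FROM DISK = N'D:\Backups\BackupFilegroupSample_FG2009.bak'
WITH RECOVERY;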
Now for some fun stuff. Here's the script we created from the first demo: the full backup we took and the four log backups. I'll restore to that state, and then look at our top transaction, which is 75250. I want to insert one more transaction, and this one has some significance in its name: it's five nines, 99999 — we all strive for that five nines of uptime, 99.999%. We look at our top two sales and we have 75250 and our five nines.

The worst type of corruption you could get would be a completely dead database. So I set the BackupSample database offline, browse out to my data folder, find the MDF file for BackupSample, and delete it. We now have no data file. Looking at my databases, BackupSample is offline; we try to bring it back online and get a nasty error stating that the BackupSample MDF file cannot be found, try reverting to a previous state — this is a bad error. If we try a standard log backup — BACKUP LOG BackupSample TO DISK with a path (sorry, I didn't have the whole thing selected the first time) — we get: BackupSample cannot be opened due to inaccessible files or insufficient memory or disk space. When you take a normal backup, checkpoints are written and things happen in the database, and none of that can happen here. The special syntax is WITH NO_TRUNCATE: it lets you back up the tail of the log even though the database is damaged and unavailable, and it doesn't truncate the log — basically, back up what you can. It's a nice safety net. We've now backed up the log successfully; it processed three pages. So we run the same restore script we had above, but this time we include the tail-log backup, and when we look at the top sales order IDs we have our 75250 and our five nines.

This is cool, but it was on the same server where BackupSample existed. I'd like to give an example — it's hypothetical, I'm sure it's happened somewhere, but I'll always deny that it ever happened to me. As DBAs, our databases grow and we have to ask for additional space. If you've ever asked your SAN administrator for additional storage and been told "sure, but we have to provision a new LUN," and then a short while later you get a call that says "oops, we formatted the wrong one" — all your data files are gone. Because you store your MDFs on one disk and your logs on another, your data files, your databases, are gone, but you still have access to the transaction log. That's the case over here: looking at the databases on this other virtual machine, there is no BackupSample database, so I'm going to create one. What you're about to witness is a hack attach. We create the BackupSample database, refresh, and see that we now have a blank database. I alter the database and set it offline, like we did before, then go into my SQL data folder for BackupSample and delete its data file. Then we go over to the logs — or rather to my backups; if you're a wise production DBA you back up your databases to another location, so they're not on the same physical server as your production database. This would be our Data Domain or NetApp or whatever device on a network share.
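A hedged sketch of the tail-log step being described; the database name and path are placeholders, and the surrounding hack-attach steps (creating a stand-in database and swapping the LDF in at the file-system level) are noted only as comments:

-- The data file is gone and the database cannot be opened, so a plain BACKUP LOG fails.
-- WITH NO_TRUNCATE backs up the tail of the log anyway.
-- (On a different server, the same call works after creating a blank database with the
--  same name, taking it offline, and overwriting its LDF with the surviving log file.)
BACKUP LOG [BackupSample]
TO DISK = N'D:\Backups\BackupSample_TAIL.trn'
WITH NO_TRUNCATE;

-- That tail-log backup is then the last file restored, WITH RECOVERY, after the
-- full/differential/log chain shown earlier.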
I copied the BackupSample transaction log over, so we copy it, overwrite the existing LDF file, and try to bring the database back online — we get the same error. We back up the transaction log using WITH NO_TRUNCATE, and we have now attached the transaction log from one database to a completely fictitious, blank, know-nothing database on another instance and backed up the tail end of the log. I like to joke that this is what you can call an extra week of holiday, a bonus, a promotion, or a raise — whatever you want to call it. If you can do this, you have the potential to keep your organization from having to issue an apology of "we lost your data" and instead issue "sorry for the inconvenience that we were down for that period of time." The reputation loss from losing data is pretty substantial; an "oops, we were down due to a computer glitch or a power outage or a hardware failure" is a lot easier for clients to justify and accept than "they lost my data." So we've backed up the tail of the log; and since we do have our full backup and transaction log backups stored somewhere else, safe and replicated to another data center, we should be able to execute the restore and bring the entire database back online. This is a server that has never seen the BackupSample database, and we've restored up to the minute of the failure, including the five nines row. This is huge; you've witnessed it, you've seen how easy it is to do, and you're going to have the code to do it. I was completely blown away the first time I witnessed a hack attach being performed. This is cool stuff.

All right: with SQL Server 2014, backup encryption was added. Prior to this we had to use third-party tools, or back up to disk and then use something like 7-Zip to compress and encrypt the database backups. Now, straight out of the box with 2014, you can start encrypting your SQL Server backups. Why is this important? Anyone who can get access to your backup files and has a copy of Developer Edition, which is now free, can restore your database and have access to all your data. So although we put SQL Servers behind firewalls and lock down security and all those things, if you're backing up your databases to a network share that systems engineers or other people have access to, how secure are you really being? So kudos for this being baked into the product.

How do we make it work? First you need to create a master key, and then you have to back up your master key and your certificate — before you can start using it, it has to be backed up. We create the master key, create a server certificate, and then back up the master key and back up the certificate. Notice that with both of these we have to back them up to a location and encrypt them by password, so I'm giving it a complex password; when you back up your certificate, you have to encrypt its private key with a complex password as well. This is important, because in order to restore your database to another instance you have to have this certificate to decrypt the backup: no certificate to decrypt the backup, no backup. And in order to restore the certificate, you have to have this password. So you have to back up and store your certificates, and you also have to retain the password: you need a key vault, someplace to store these passwords that you can get to during a critical event, or it needs to be something your team knows and keeps safeguarded. You'll see why in just a few more minutes.
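A hedged sketch of that one-time setup; the certificate name approximates the one used in the demo, and the paths and passwords are placeholders:

USE [master];
GO
-- One-time setup: a database master key in master, and a certificate to encrypt backups with.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<a strong password>';

CREATE CERTIFICATE SQLskillsEncrypt
WITH SUBJECT = N'Certificate for backup encryption';

-- Back both up immediately; without the certificate (and its private-key password)
-- the encrypted backups cannot be restored anywhere else.
BACKUP MASTER KEY
TO FILE = N'D:\Keys\master_key.key'
ENCRYPTION BY PASSWORD = N'<a strong password>';

BACKUP CERTIFICATE SQLskillsEncrypt
TO FILE = N'D:\Keys\SQLskillsEncrypt.cer'
WITH PRIVATE KEY ( FILE = N'D:\Keys\SQLskillsEncrypt.pvk',
                   ENCRYPTION BY PASSWORD = N'<a strong password>' );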
So we go ahead and back up our master key and our server certificate. Now, how do you specify encryption in the backup? It's pretty straightforward: we're still backing up the database to disk, we choose WITH COMPRESSION, and then we choose ENCRYPTION, pick an algorithm — we're using AES_256 — and specify which server certificate to use to encrypt the backup. Just so you can see, I'm going to back up the BackupSample database with compression and encryption, and then back it up again with no compression, so you can see that even with encryption we still get a pretty decent compression rate: with no compression our backup was 15 MB, and with encryption and compression it's 4.52 MB. There used to be a myth that if you encrypted your backups you negated any compression; we can see that's not the case.

On the server where we backed up the database with encryption, how do you restore it? That server already has your certificate, so it's no different from any other restore: RESTORE DATABASE from disk, and we choose WITH REPLACE. But what happens if we don't have the certificate? We drop our SQLskillsEncrypt certificate and try to restore the same backup: we get an error, cannot find server certificate with thumbprint, and it gives us a unique value; restore database failed, terminated abnormally. Earlier we created a certificate called SQLskillsEncrypt and gave it a description; let's create the exact same certificate again and attempt the restore: still cannot find the certificate with the unique thumbprint that was created with the first certificate. You can create a certificate with the same name over and over and over again, but it will always have a different thumbprint, so we cannot restore this database unless we have the original certificate. So let's drop that certificate and restore the one we created: to restore it you still use CREATE CERTIFICATE, but you specify FROM FILE, and you have to decrypt the private key with the complex password. Again: no password, no key, no restored backup, no restored database. We've now restored our certificate from file and we're back to being able to restore our compressed and encrypted backup.

What happens if this server doesn't exist and we have to go to another instance? Let's go back to our backup-recovery instance. Just for safety's sake, I make sure a master key exists there; we create a master key on that instance, and then we restore our SQLskillsEncrypt certificate. Notice the master keys on the two instances are different, but we're restoring the same SQLskillsEncrypt certificate we used to encrypt the backup; we restore it from file and we're able to successfully restore the backup to another instance. As I've stressed: if you have the certificate you encrypted your backup with, and you know the password you encrypted it with, you can restore that certificate to any other instance and then restore your encrypted database.
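A hedged sketch of both sides of that, using the same hypothetical names, paths, and passwords as the setup above:

-- Back up with both compression and encryption.
BACKUP DATABASE [BackupSample]
TO DISK = N'D:\Backups\BackupSample_Encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION ( ALGORITHM = AES_256,
                  SERVER CERTIFICATE = SQLskillsEncrypt ),
     CHECKSUM;

-- On another instance: create a master key there, restore the certificate from the
-- backed-up files (the private-key password is required), then restore as normal.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<a different strong password>';

CREATE CERTIFICATE SQLskillsEncrypt
FROM FILE = N'D:\Keys\SQLskillsEncrypt.cer'
WITH PRIVATE KEY ( FILE = N'D:\Keys\SQLskillsEncrypt.pvk',
                   DECRYPTION BY PASSWORD = N'<a strong password>' );

RESTORE DATABASE [BackupSample]
FROM DISK = N'D:\Backups\BackupSample_Encrypted.bak'
WITH REPLACE;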
So if you have a mirror, an availability group, things like that, I highly encourage you to either have access to the keys or have the keys already in place on those servers.

What else is new with backup and recovery over the past few revisions of SQL Server? With SQL Server 2012 — I believe it was Service Pack 1 CU2, or thereabouts — URL-based backups were introduced: we could back up to an Azure blob. I like to joke that URL-based backups are a bit like Henry Ford back in the day, when you could order a Model T in any color you wanted so long as it was black; we could back up to a URL, as long as it was Azure. What's changed with the syntax? Not much: you back up your database TO URL and specify your Azure blob storage location. I always bump the file name by a number so it works, because you can't overwrite what's already there. Notice I'm having to specify WITH CREDENTIAL; I created a credential called AzureBackup, and if we look under Security, Credentials, the identity of AzureBackup is my account within Azure storage, and the password is a key from the Azure portal — very simple to configure and set up. So let's kick this off and see what it looks like to back up to an Azure blob. I'm choosing WITH COMPRESSION since I'm pushing this over the wire; smaller data going across the wire should be faster. We'll see that it's pretty quick — I have a decent connection to the internet and I'm using the East US data center — it took all of 23 seconds. What about restoring? RESTORE DATABASE FROM URL, specify your credential, and I'm choosing REPLACE; it's really no different from restoring from a UNC path or physical disk within your own data center. The restore usually takes a few more seconds than the backup, but this is kind of cool: if you don't have another data center and you need to get your backups offsite, this is a very inexpensive way to do it. You could also keep backing up to disk and use a command-line utility — I think it's the SQL Server Backup to Azure tool — that you point at a folder to move all the backup files up to Azure. Either way you can get your backups offsite to Azure storage, which is really inexpensive, and from there you can restore just like we did from a URL. Some really cool options. I'm loving the technology we're seeing with Azure and its capabilities, and with Azure SQL Database; being able to leverage these hybrid scenarios, where you're getting your backups into that storage and can have the storage geo-replicated, is really starting to change what we're seeing out in the field.
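A hedged sketch of the URL-based backup being described, in the 2014-era form that uses a credential holding the storage account name and an access key; the storage account, container, and key are placeholders (the demo also bumps the file name each run rather than overwriting an existing blob):

-- A credential holding the storage account name and an access key from the Azure portal.
CREATE CREDENTIAL AzureBackup
WITH IDENTITY = N'mystorageaccount',
     SECRET   = N'<storage account access key>';

-- Back up straight to blob storage...
BACKUP DATABASE [BackupSample]
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/BackupSample_1.bak'
WITH CREDENTIAL = N'AzureBackup', COMPRESSION, STATS = 10;

-- ...and restore from it the same way.
RESTORE DATABASE [BackupSample]
FROM URL = N'https://mystorageaccount.blob.core.windows.net/backups/BackupSample_1.bak'
WITH CREDENTIAL = N'AzureBackup', REPLACE;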
So we've covered full backups with log backups, full plus differential, filegroup backups and a piecemeal restore, tail-log restores, backing up with encryption, and backing up to Azure storage — some cool stuff. We'll flip back over to the PowerPoint for the summary. We've talked about knowing your backup types and having the proper backups to meet your customers' service level agreements; that's key. There's nothing more important than being able to recover data, whether from some catastrophic event or from a user having an "oops" situation where they ran an UPDATE without a WHERE clause. A number of things can turn your data to mush and force you to restore from backup, so you need to know the requirements: how much data can you afford to lose, and how long can you be down? Know how to recover — we've gone through many scenarios in these demos. How long are these restores going to take? The best way to know is to practice performing them, including point-in-time restores. Have a plan: the first time you perform a restore should not be in production. Know that you have a solid backup strategy that meets those requirements, and if it doesn't, make it known. If your organization accepts that risk, my best advice is to keep an up-to-date resume, because if disaster hits and your company can't recover from the outage, your company isn't going to recover from the disaster and you're going to be out looking for a job — so have your resume up to date and get that advantage over the teammates you'll be competing with for the same jobs. Have a proven disaster recovery plan in place and walk all the way through it: just because you can restore your databases to another instance doesn't mean all the other connective tissue is in place. If you're importing files through ETL processes, how are you going to get those files? If you're providing files to someone else, how are they going to get them? All of that needs to be known, documented, and tested.

Lastly, the demo scripts you saw me run: I've uploaded those — actually, in a few minutes I'll hit update on my website — and there will be a zip file listed under the 4/12/2016 PASS DBA Fundamentals link, with a hyperlink you can click to download the zip containing the sample databases and all the scripts I stepped through. With that, I want to say thank you all for attending. It's coming up close to time, and I don't know if you heard the thunder behind me, but we have storms rolling in; still, I'm available to answer questions as long as I have connectivity to the internet.

Well, thank you, Tim, and thank you everybody for attending this session. Tim, we do have a couple of questions that have come through. The first one is to do with partitioning: what about if there is an update on an online partition, to data that has not been restored yet? So that's where you get into multiple read/write filegroups, and there are some tricks with those. I think with multiple read/write filegroups they have to be brought online at the same time, with the transaction logs rolled forward to that point in time, for them to be available. I really should dive deeper into that and provide a demo for that particular situation — this session was written for about 60 minutes, so I don't know how long it would take me to dive further into it. If there's a more detailed scenario, or somebody's in that type of situation and has questions, just shoot me an email at tim@sqlskills.com, because I have teammates who are much more familiar with partitioning per se and I can definitely help drive you to the answer for those specifics. But I do recall from my early testing with multiple read/write filegroups that you have to restore the transaction logs for those filegroups before you can bring them online.
That means you'd have to restore all the read/write filegroups at the beginning, which would kind of defeat the purpose: if you partition a terabyte database into multiple filegroups but they're all read/write, it's not going to help you a whole lot. But again, it's been a while since I've played with that, so I'd have to dig in again to give specifics.

Okay, on the slides around encryption: the question was, if you have multiple instances, would you create the same certificate on all instances so you can restore a database to any of them? You can have multiple certificates, and I probably wouldn't use the same certificate and password on all my instances. There are different levels of security, different security types — just as in highly secure environments you have a different SQL service account per instance and then grant permissions only between the individual servers that need to talk to each other. In that case I'd probably have a different certificate per instance, and then, as needed, restore a certificate to another instance, because you can have multiple certificates on an instance. It really all depends on how secure you want to get and how far you want to go with it. At the bare minimum, sure, use the same certificate across instances just so you're keeping non-DBAs from being able to restore; but taking it to that extra level, the paranoid guy in me would have a different certificate per instance so that if somebody compromised one, they didn't compromise them all.

Now this question — I'm hoping we get it right; it's a two-parter, so I'll try to get a little more information. The first part: can you make up a number, say ten full backups, that are automatically date-stamped each night into separate files, then remove the oldest after the ten has been reached, without a script — is it built in, or what is the best practice for doing this? Okay, so are they wanting to split a full backup into ten individual files, or to keep ten full backups in retention and roll the oldest one out? It sounds like they want nightly full backups but only want to keep the most recent ten, with unique file names. There's xp_delete_file, or something close to that — I'd have to look at the syntax — a built-in procedure to delete files, and it's for backup files: you call it, pass it a path, and give it the date and time to delete everything older than. So if you only want to keep ten days' worth of full backups, you run that procedure and it will delete your backup files older than that time. Let me see if I can find the actual syntax — but yes, that's built in. You could also use PowerShell; there are a number of methods for doing just that, and most backup processes have built-in retention handling, whether it be database maintenance plans, Ola Hallengren's backup maintenance solution, or other third-party tools. If that didn't answer the question, just email me at tim@sqlskills.com and I can pull a little more information from you and probably get you the syntax you need to do what you're trying to accomplish.
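For what it's worth, a heavily hedged sketch of the cleanup call being described — xp_delete_file is the undocumented procedure that maintenance-plan cleanup tasks use, the parameter order shown here is the commonly cited one (file type 0 = backup files, folder, extension, cutoff date, include-subfolders flag), and you should verify it before relying on it:

-- Delete .bak files under a folder that are older than 10 days.
DECLARE @cutoff datetime = DATEADD(DAY, -10, GETDATE());

EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups\', N'bak', @cutoff, 1;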
Okay, it looks like the last one: can you please explain the recovery steps in the multiple-filegroup scenario — can I restore the transaction log after the primary filegroup restore is done? Yes. When you restore the primary filegroup, that's your read/write filegroup; if you had any transaction log backups taken after that, you would need to apply them, but once you bring the primary online with those log backups, there's no more restoring of logs. You can then bring your read-only filegroups online after that. That's why, with the previous question, if you had multiple read/write filegroups you would have to restore those read/write filegroups, apply your transaction log backups, and then bring all of that online together. The piecemeal restore process is documented on MSDN, step by step, with the specifics for handling multiple read/write filegroups; my demo was really to step you through the possibility and the functionality, so you know that piecemeal restores are possible. I have very few clients leveraging it, and those that are have a single read/write primary filegroup with read-only filegroups after that, so fortunately for me it's been very simple. But I can quickly change the demo to use multiple read/write filegroups, do transactions into each, and step through it — if somebody needs to do this in production, I'm more than happy to help you step through it; we can fire up a WebEx and play with it.

Okay, thanks. You're welcome, everybody. If there are no more questions coming through, I'd like to take this opportunity to thank Tim for his time in presenting this session in our time zone, and to thank everybody for coming along and being part of it. The slides and the recording will be up on the DBA Fundamentals website in a couple of days once they've been processed, so feel free to go to the archive section on the DBA Fundamentals website and rewatch this session to keep up with what was presented. On that note, thank you all, and we'll talk to you next month.
Info
Channel: DBAFundamentals
Views: 64,720
Rating: 4.8535213 out of 5
Keywords: sql, backups, tim radney, sqlskills, recover, restore, full, differential, log backup, partial, filegroup, copy_only backups, backup encryption, attach, tail log
Id: CPUsHei1CXQ
Length: 62min 57sec (3777 seconds)
Published: Fri Apr 15 2016