CMMC Recovery (RE) and Microsoft 365 GCC High Backup

Captions
All right everybody, thank you so much for joining us on this webinar. It's our first one in a while; I think the last one we did was in February, covering CMMC as a whole and specifically Level 3. Today we're going to be talking about backup, and I'm really excited to have Daniel joining us. Daniel spearheaded a lot of the effort in identifying which vendor we were going to use once CMMC 1.0 hit the streets. Knowing that backup was still going to be one of the requirements, we set out to find an ideal vendor: one that met the other compliance requirements we'll talk about in a bit, had the security features we needed, could back up Office 365 GCC High, which is what most of our clients are on, and could back up into Azure Government. Daniel has some fifteen years of IT management experience, and he helped spearhead our efforts to make sure the MSP clients we provide ongoing IT support for are covered from a backup standpoint. He's a wealth of knowledge and I'm really excited to have him with us today.

A little housekeeping first: we're coming to you from a new studio here at Summit 7 headquarters, so we hope you like the scenery. With that, let's roll into the agenda.

We'll start with a short intro to and overview of CMMC; most of you are probably well read and versed on that. We'll go over the Recovery domain as a whole, then some foundations of recovery, where Daniel will cover disaster recovery and backup scenarios. We'll go through best practices and warnings as they pertain to the CMMC requirements and do a deeper dive there. Then we'll get specifically into GCC High and Azure Government strategy: backing up GCC High, backing up whatever is on premises, backing up whatever is in Azure, and various scenarios along the way. Lastly we'll demo a little of the solution we use for our clients for backup, mention some upcoming events, and do a little Q&A.

A high-level overview of CMMC: you have domains, processes, capabilities, and practices underneath those. There are 17 domains, and the one we're covering today is Recovery, otherwise known as RE; they all have nice little acronyms, because we love our acronyms in the federal space. There are five levels, and they are cumulative, so if you anticipate needing to be a Level 3 certified business, you need to meet the requirements within Level 1, Level 2, and Level 3. Recovery has requirements in Levels 2, 3, and 5; we're going to focus mostly on Levels 2 and 3. For a quick snapshot of the practice counts: 17 within Level 1, 55 within Level 2, 58 within Level 3, 26 within Level 4, and 15 within Level 5.
Bundle those together and meeting Level 3 means 130 practices. Something to note: the reason we're covering Recovery first is that it's one of the domains that was not in NIST 800-171. It's an added requirement beyond NIST 800-171 and beyond what was required in DFARS 7012, though it has always been a best practice to back up whatever is critical to keeping your organization operational, let alone what's important to the government and the data you handle for them.

Getting into the Recovery domain, as you see on the slide there are Level 2, 3, and 5 requirements; we're covering Levels 2 and 3. The Level 2 practices are to regularly perform and test data backups, and to protect the confidentiality of backup CUI at storage locations. The Level 3 practice is to regularly perform complete, comprehensive, and resilient data backups as organizationally defined. One thing that got taken out, which we'll talk about more later, is the explicit requirement that backups be off-site and offline; that hasn't necessarily gone away, but some of that wording was removed. The key words we're going to focus on later are testing your backups, being comprehensive, being resilient, and protecting the confidentiality of, or quite frankly just protecting, the backups you have: who has access to them, are they encrypted at rest as well as in transit, and is the transfer of data by whatever mechanism you use to back up also encrypted or protected.

So let's get into some foundations of recovery. I'm going to hand it over to Daniel to discuss why it's important to have internal conversations first. This is more about written policy that reflects the technical requirements: what do we need to meet and think through as an organization? Because the answer differs depending on the data: your SharePoint data and what's housed there, personal data users have in OneDrive, or a SQL database running in Azure that supports some internal workload, whether that's your accounting system or a software program you're developing for, say, the Air Force, maybe a training application. If you're housing or processing any of that data, how does your backup strategy need to differ depending on what kind of data you're backing up? Daniel, do you want to take that?

Hello everybody, it's nice to meet you; again, my name is Daniel. I want to talk a little bit about the individual aspects of backup and the policy perspective behind them. The first one is your recovery point objective, or RPO: the point in time you want to be able to restore data from. Break that down a little: you might run a backup on a SQL database every hour, because the data going into that database can be incredibly valuable, and losing it could cost thousands, tens of thousands, or hundreds of thousands of dollars to replace. That's an important point to capture.
The next one is the recovery time objective, or RTO: the time span from a disaster striking to the time you can actually recover that data. Not only that, but your executive team will want to understand how much money you're losing during that time; if it takes two hours to restore, what's the expected loss? A lot of this is risk-based and comes down to the data owner, so it's really important to go to each department in your organization, understand what data they're using, and decide how often it needs to be backed up from a risk perspective. If HR data gets lost, or IP gets lost, that is hard to replicate. Those are conversations to have on the front end rather than during the disaster itself.

The last one is the retention period: how long you plan on keeping the data. There's a fine line to be careful of here. Contracts could call for seven years of data retention, whereas for some other intellectual property you might only want three years, and that becomes a liability question at the end of it: if you get subpoenaed, you have to actually produce that data to lawyers to be investigated. So it's a protection point, and it's incredibly important that those lines are very clear and the documents are labeled appropriately.

Looking at the slide, RPO is how frequently you want to be backing up your data. There's a client I previously worked for that had some SQL databases, and we didn't understand the severity of how often they needed to be backed up until we had a conversation with the data owner. The data owner said, for every hour this is down, our company loses ten thousand dollars. Some of you watching might think that's a huge number, some might think it's small, depending on the size of the corporation, but the principle is what matters: have the conversation with the data owner and understand that fifteen minutes of lost data is thousands of dollars to that organization. Go around to each department and work out that a critical line-of-business application needs its SQL backed up every 15 or 30 minutes, whereas the OS level, the Windows or Linux server itself, may only need to be backed up once a day. Draw that clear line in the sand: how frequently do I need to back this up, and what's the impact if I don't back it up that frequently? Present that risk, because backup software can be expensive, and setting up backup policies and testing restores takes time. As an IT specialist or at the executive level, have those conversations with data owners so everything is scheduled out appropriately.
The next part, in the middle of the diagram, is a critical event. Something happened; ransomware, for example, is relatively common. Ransomware strikes these servers, servers are down, employees can't work, and the clock starts. Your point in time is how far back you can go to restore, for example every 15 minutes for SQL, and the RTO is how long it's going to take to restore that data. That's another core concept, because you need to know whether restoring that SQL database takes 30 minutes or an hour, and what the expected loss is: the downtime between the moment you can start the restore process and the moment you finish it and put it back into production. You want to be able to present that risk to leadership: based on what the data owner told me, I'm taking backups every 15 minutes, this backup will take one hour to restore back to production, and the total risk is, say, seven thousand dollars if this application goes down. That also ties into other aspects like high availability and other ways to keep that data up as long as possible.

And internally, especially in the contracting world, everything can vary contract by contract and business unit by business unit. If you have a training and simulation arm of the business, that part of the company may have far higher expectations when it comes to RPO and RTO. Absolutely; you may even have to get your contracts management involved to understand what's unique to each contract and each contract vehicle, and what the best approach is for each workload you have.
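To put rough numbers on the RPO and RTO conversation above, here is a minimal Python sketch of the arithmetic that a data-owner discussion produces. The hourly loss figure, RPO, and RTO below are just the example values from the talk; plug in whatever your own data owners tell you.

```python
# Rough downtime/data-loss cost model using the example figures from the talk.
# All numbers are placeholders to be replaced with what your data owners tell you.

def downtime_cost(hourly_loss_usd, rpo_minutes, rto_minutes):
    """Estimate worst-case loss for one incident.

    hourly_loss_usd : what the data owner says an hour of this workload costs
    rpo_minutes     : backup frequency (worst-case data you can lose)
    rto_minutes     : time from starting the restore to back in production
    """
    data_loss = hourly_loss_usd * (rpo_minutes / 60)    # work that has to be redone
    outage_loss = hourly_loss_usd * (rto_minutes / 60)  # loss while the restore runs
    return {
        "data_loss_usd": round(data_loss, 2),
        "outage_loss_usd": round(outage_loss, 2),
        "total_usd": round(data_loss + outage_loss, 2),
    }

if __name__ == "__main__":
    # SQL line-of-business app: $10,000/hour, 15-minute RPO, 1-hour RTO
    print(downtime_cost(10_000, rpo_minutes=15, rto_minutes=60))
    # -> {'data_loss_usd': 2500.0, 'outage_loss_usd': 10000.0, 'total_usd': 12500.0}
```

A table of these figures per department or per contract is exactly what you would bring to leadership when justifying a more aggressive backup schedule for one workload and a once-a-day schedule for another.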
Okay, let's start getting into best practices and tease some of this out. We're going to go over the key words I mentioned, pulled directly from the requirements or from the explanations and descriptions further back in the appendices of the CMMC documentation: what it means to test backups, to have a comprehensive backup, to have a resilient backup strategy, and to protect the confidentiality of, or simply protect, your backups. And then, even though it's third on the agenda, compliance. That one isn't pulled from the requirements, but it would be negligent of us to come up with a CMMC backup strategy that didn't also take into account DFARS 7012 and the requirements you have there, the requirements around backup if you have ITAR data, and even ISO and the things you do operationally from a process standpoint and how they affect your backup strategy.

So let's get into testing. What we have in front of us are some screenshots of AvePoint, a cloud-based SaaS application that runs backups of Office 365 GCC High specifically. There's a dashboard showing Exchange Online being backed up and the status of a job, and on the right-hand side, depending on your orientation, an Azure Backup screenshot of a VM being backed up. Daniel, can you speak to going beyond just checking whether a job works? You should probably also have some sort of alerting for when a job fails, and maybe some redundancy in who gets those alerts so it's not a single point of failure, though smaller organizations may just be one person. Talk to me about testing more thoroughly than checking a box that jobs ran and ran successfully.

Sure, a couple of things. On the left-hand side you can see AvePoint, where it says Exchange Online; that product is actually backing up our Office 365 GCC High tenant, looking at workloads like SharePoint, Teams, and Exchange Online. When you do a backup like that, you're probably testing for single-file integrity: restore an email, restore a SharePoint document library, make sure all the files are as they should be, the integrity is there, the job restored correctly, and then move on with your testing to something else. On the right-hand side is Azure Backup, really an Azure Recovery Services vault, which backs up one or many VMs. There are alerting and notification options where you can trigger emails, and there's a Power BI integration so you can build your own backup dashboard if you're really interested in what your recovery vaults are doing. It's a quick snapshot of how your backups are doing and when snapshots were taken, and you can set your schedules there as well. Azure makes it really easy to do a single restore, and AvePoint does too, so you can test that a VM restores successfully, assign IP addresses and network adapters to it, and actually log in and test if you'd like. From a recovery perspective, Azure and AvePoint both make it easy to see what's working and what's not and then get on with the rest of your day, because no one likes checking backups all the time.

That's right. Here in a little bit we're actually going to do a small demo of AvePoint, and we'll also get into what Office 365 already does for you from a disaster recovery standpoint, what the limitations are, and why you even need a product like AvePoint.
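As a companion to the alerting point above, here is a minimal sketch of pushing backup-job failures to more than one person so notification is not a single point of failure. It assumes you can export or pull a list of job results from your backup tool's reporting (AvePoint and Azure Backup both expose job status and email alerting natively, as discussed); the SMTP host and the addresses are placeholders.

```python
# Minimal sketch: scan a day's backup job records and notify more than one person
# on any failure, so alerting isn't a single point of failure. The job list would
# come from whatever export or reporting your backup tool provides; the SMTP
# server and addresses below are hypothetical.
import smtplib
from email.message import EmailMessage

RECIPIENTS = ["backup-admin@example.com", "it-manager@example.com"]  # at least two people

def notify_failures(jobs, smtp_host="smtp.example.com"):
    failures = [j for j in jobs if j.get("status", "").lower() != "success"]
    if not failures:
        return
    body = "\n".join(f"{j['workload']}: {j['status']} at {j['finished']}" for j in failures)
    msg = EmailMessage()
    msg["Subject"] = f"{len(failures)} backup job(s) need attention"
    msg["From"] = "backup-alerts@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

# Example records, e.g. parsed from a nightly report export:
notify_failures([
    {"workload": "Exchange Online", "status": "Success", "finished": "2020-05-06T02:10Z"},
    {"workload": "SharePoint", "status": "Failed", "finished": "2020-05-06T02:45Z"},
])
```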
Let's go ahead and get into comprehensive. To talk to the slide for a second before I hand it over: a comprehensive backup means everything in your data estate that, if it were rendered inoperable, lost altogether, or subject to downtime, would have a significant impact on your organization, and especially, regardless of impact, anything holding CUI, intellectual property, or any company-proprietary data, needs to be backed up in some way. That's what the requirement really speaks to. In the graphic you see that CUI and intellectual property can be stored in a myriad of places, and each of those data stores needs some element of a backup strategy behind it, no matter how small or big the implementation. So Daniel, talk to me about the nuances of backing things up on premises versus in the cloud, and how to make sure a company's strategy is comprehensive in terms of data.

It's really important to understand what you need to be backing up; I talked about this a little earlier with data owners. Working through the chart from endpoints all the way up to on-prem, there are some interesting conversations that need to be had.

For endpoints: I had a friend who manages IT at another organization reach out to me. A user who had been let go shipped their laptop back, since everyone has been working remotely, and it came back wiped completely clean. Nothing on it. It had IP on it, specific code he was working on, that is now potentially lost. So your immediate first step is to understand what on a workstation or endpoint needs to be backed up and what needs to live in cloud or other storage instead, whether that's a network share or OneDrive for Business; there's a myriad of options there. Have a clear-cut definition so that when you're onboarding users, they understand what they should and should not be storing on their workstations. That is step one.

The next one is subscriptions and IaaS. Again, make sure the data you're storing there is data you actually want to back up. Honestly, some people back up too much data, which just creates more liability; we talked about retention policies earlier. Have a clear understanding of what you're backing up and how much, and make sure you're not backing up too much, for example a user archive mailbox that's five or ten years old. The other piece, and we see this a lot, is shadow IT: people backing up data to Dropbox and Google Drive when those aren't corporate applications you actually use. Shadow IT appears in the background and you can lose data that way, so lock that down and make sure no other third-party file-sharing services can be used to share and then back up data somewhere else.

The last one is on-prem, specifically servers. It's again important to understand which file shares need to be backed up, to talk to the data owner so you're backing them up appropriately and the retention policies are set correctly, and to avoid backing up too much data. Have a very clear scope on the operating systems and VMs you're backing up, document the policies, whether that's once a day at seven, eight, or nine o'clock, and then document the restore procedure for that specific data set after a disaster occurs.
Looking at that wide breadth of backup endpoints, from workstations all the way up to servers, it really comes down to risk. When you back up data, you're either backing it up to be able to restore after a disaster or some malicious act, or to be able to produce data because you're being audited; you may need to produce evidence in a particular case, and whether you have that evidence will matter to your legal team depending on what the case is. So it's very important to understand every file, folder, application, and data source you're backing up across every platform.

It's interesting that you bring up going back and reviewing the sources of data you're backing up. In some cases it's overkill, and you're basically losing money by backing up things you don't need. So not only is it fiscally responsible to revisit this, but from a CMMC standpoint this is a maturity model, and part of the process piece is going back through these things with some regularity: where are we spending money we don't need to spend? It affects your pocketbook, and it just makes no sense to back up some of these things when you don't have to.

Especially when you're looking at the cloud, because the cloud charges a very clear-cut number, usually on a per-gigabyte model. If you're retaining data for an extraordinarily long time, and the Azure Backup default policy is ten years, and you're keeping yearly copies for that whole window, that's a lot of data and you're going to be charged a lot of money. Your boss, owner, CEO, or president will be very happy if you dive in, understand it, and actually save some money.
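To illustrate the retention-cost point Daniel makes, here is a back-of-the-envelope sketch. The per-gigabyte price and the growth rate are assumptions for illustration only, not Azure's actual rates, which vary by tier, redundancy option, and region.

```python
# Back-of-the-envelope retention cost: protected data that never ages out keeps
# accruing storage charges. The price per GB-month and growth rate below are
# placeholders; check current pricing for your tier, redundancy option, and region.

def retention_cost(gb_protected, retention_years, usd_per_gb_month=0.02, yearly_growth=0.10):
    """Total storage spend over the retention window, assuming the protected
    data set grows by `yearly_growth` each year."""
    total = 0.0
    size = gb_protected
    for _ in range(retention_years):
        total += size * usd_per_gb_month * 12
        size *= (1 + yearly_growth)
    return round(total, 2)

# 2 TB of backup data kept for the 10-year default vs. a 3-year policy
# (with the assumed price and growth):
print(retention_cost(2048, 10))  # roughly $7,834 over ten years
print(retention_cost(2048, 3))   # roughly $1,627 over three years
```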
Good. If it's cool, let's jump into compliance as well. I mentioned this at the beginning, but from a compliance standpoint, what predated CMMC a bit was DFARS 7012, and in DFARS 7012, correct me if I'm wrong, there are requirements that apply if you're using a cloud service, including where your data is stored. Once you get into ITAR, it also matters who is handling your data and how that's being reported. And from an ISO, process-maturity standpoint, there are certain ways you back up your data that need to be considered as a process. With DFARS 7012 and ITAR, data residency comes into it, and as we uncovered going through backup vendors, there are vendors that simply don't meet these requirements. So maybe speak to that selection process; you don't have to get into Office 365 GCC High and all its nuance yet, just picking a backup vendor, whether cloud-based or something on-prem that's doing backup.

Yes, it's really important to understand where your data is at rest. When you're looking for an off-site backup, that could be a full SaaS solution, where the application and the backup storage are both hosted, or it could be half and half: you manage the application, but the off-site storage sits in Azure Government or Azure commercial, or maybe it's AWS, or maybe it's replicating to another data center. Specifically when it comes to cloud storage, DFARS 7012 means the protected CUI data needs to be stored in a FedRAMP Moderate environment; that's checkbox number one. Just so you're aware, Office 365 commercial is FedRAMP Moderate equivalent. Azure commercial is actually FedRAMP High, and Azure Government is also certified FedRAMP High, so both are good storage target locations; because of some other needs we usually recommend Azure Government, and I'll get into that in just a second. On the ITAR piece, there are very specific requirements around storing ITAR data, not only from an accessibility standpoint with respect to foreign nationals but also around storing it in a FedRAMP High data center. Azure commercial is FedRAMP High, Azure Government is FedRAMP High, and GCC High, which we talk about all the time here, is built on top of the Azure Government platform, so we recommend just going with Azure Government, since you're already going to have a subscription anyway and it's already doing your identity management. That also ropes in one of the next conversations, about role-based access. With ITAR data, Azure has RBAC roles that let you set specific permission sets and scope them as narrowly as someone who can review a backup or restore a backup and perform only those actions. Role-based permission sets are really important for shrinking the footprint of who can access your backup data, because if someone is a domain admin locally, a Global Administrator, and a subscription Owner in Azure, they can go do whatever they want, and that's not the separation of duties you want.

Good. Let's talk a little bit about resiliency; I also threw in a little bit about availability, which isn't on the slide, but I felt it was important to touch on. As I said earlier, CMMC used to have a hard-line requirement in the Level 2 and 3 practices that you needed off-site and offline backups. They took that wording out, and it's now interwoven in the descriptions of the Level 2 and Level 3 requirements, so it's still prevalent. For that matter, as I understand it, it's difficult to meet the intent of resiliency and have resilient backups if there isn't some component of your backups that is off the network that could be compromised, and outside the data center or zone that could be affected in some way, shape, or form. So talk a little bit about Azure's off-site and resiliency capabilities, and maybe explain regions and zones along the way.
Absolutely. If you look at the chart where it says off-site, there's a bunch of acronyms underneath it. We'll start with LRS, locally redundant storage: that's basically a storage account sitting in a single Azure data center. If you're in Azure Government, that's probably going to be Virginia or Texas, maybe Arizona; there's a handful of additional data centers here in the US. ZRS is zone-redundant storage, which is a little different: within a region there are data centers close enough to each other that they replicate copies among themselves, so a region might have three data centers within a relatively small radius replicating your data across them. The next one is GRS, geo-redundant storage, where a copy is actually moved from, say, the US Gov Virginia data center to the Arizona or Texas data center, so there's a real geographic difference in where the copies sit. That's obviously going to be one of your better options if you're able to do it; it is more expensive, but you end up with copies on basically opposite sides of the continental United States. The next one is a level up from that: geo-zone-redundant storage, GZRS, which is a mouthful on its own. It does something similar to the previous ones: it replicates from, say, Virginia to Arizona, but it also keeps three copies across the data centers sitting near that secondary region, so at that point you have roughly six copies of your data at any given time, which is a lot of copies and a lot of data. What we usually recommend is at least zone-redundant storage, or geo-redundant storage if you can do it, because that gives you a couple of extra copies, and then you'll potentially also have an on-prem copy if you're backing up to on-prem storage as well.
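For reference, here is a hedged sketch of creating a geo-zone-redundant storage account in Azure Government with the Python management SDK (azure-identity and azure-mgmt-storage). The subscription ID, resource group, and account name are placeholders; the sovereign-cloud endpoint settings should be confirmed against your SDK version, and GZRS availability varies by region, so fall back to Standard_GRS where it isn't offered.

```python
# Sketch: create a backup storage account with geo-zone-redundant replication in
# Azure Government. Names and IDs are placeholders; confirm endpoint settings and
# SKU availability for your region (use "Standard_GRS" where GZRS is not offered).
from azure.identity import DefaultAzureCredential, AzureAuthorityHosts
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# Azure Government uses its own sign-in authority and ARM endpoint.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
client = StorageManagementClient(
    credential,
    SUBSCRIPTION_ID,
    base_url="https://management.usgovcloudapi.net",
    credential_scopes=["https://management.usgovcloudapi.net/.default"],
)

poller = client.storage_accounts.begin_create(
    "rg-backups",                       # resource group (placeholder)
    "s7backupgzrs01",                   # storage account name, must be globally unique
    StorageAccountCreateParameters(
        location="usgovvirginia",
        kind="StorageV2",
        sku=Sku(name="Standard_GZRS"),  # or Standard_GRS / Standard_ZRS / Standard_LRS
        enable_https_traffic_only=True, # keep the transfer encrypted in transit
    ),
)
account = poller.result()
print(account.name, account.sku.name)
```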
The other part of this is Azure Site Recovery. Azure Site Recovery is more disaster-recovery oriented than a general backup and restore: it lets you spin up your VMs and applications in a completely different data center. Replication usually happens about every 15 minutes, and then you literally flip a switch and that other data center is live; you cut over some DNS records externally and things of that nature to get people back up and going. It's a relatively quick process to go from one environment to another, for the scenario where, God forbid, some act of nature occurs, a major power outage, something like that. And a lot of that is even abstracted from the user: they may log in and experience it as though nothing is different, even though their data is now being served from another data center entirely. So it's important to understand the storage piece I just covered, and then the Azure Site Recovery piece, which is more disaster recovery, semi-real-time, replicating every 15 minutes or so to another data center across the continental United States.

One thing to tease out, and correct me if I'm wrong: the difference with Azure Site Recovery, and this gets into some of the nuances of Office 365 and why its native capabilities aren't enough to meet these requirements, is that it doesn't let you go back to an exact moment in time. You can do versioning with SharePoint, as you probably know; if you delete an account, it stays recoverable for about 30 days; with OneDrive I want to say you have close to 90 days to restore; but outside of that, you get what you get. So a lot more thought has to go into how long you need to keep your backup data, what all you need to back up, and making sure you have policies around that, which is why we vetted and recommended AvePoint as our solution with GCC High: it's one of the few vendors that actually supports the platform, let alone provides a really good backup product.

So we've talked about storage redundancy and the resiliency of your backups from a technical standpoint; let's also talk about it from a role and account standpoint. Hypothetically, one of your global admins has their credentials compromised, or something happens to an individual who would otherwise be responsible for recovery. How can you make your backup process and the backups themselves a little more resilient to scenarios like that?

Sure. There are a few things on the Azure side when it comes to RBAC, which stands for role-based access control. As I mentioned earlier, you can segment out and give people very specific roles when it comes to backups and Recovery Services vaults, so it's important to decide who in your organization should be able to access them, keep that very tight, and require MFA. That way, if something happens, you still have an account with a very scoped, specific set of rights that can go restore a file. Now, if an account is compromised and you're a small IT shop, maybe you're the only IT person in the organization, you don't have a team and you don't have great separation of duties, you can still create separate accounts scoped to very specific roles so that the account you use for backup and restore doesn't have Global Administrator or subscription Owner access to the rest of your environment. Separation of duties matters at the person level, but if you can't quite do that because you're one person, least privilege is the other way to swing it: very specifically scoped accounts that do what you need, without giving a potentially compromised account access to the whole thing.

Gotcha. Let's also talk about availability, which can bleed into resiliency: the types of storage and how quickly you can spin up a recovery. From an Azure Blob standpoint, I believe they're called access tiers: hot, cool, and archive. Talk a little bit about that and the nuance of availability, and why you would choose one over the other.
Sure. It's really important to understand the type of data you're backing up and how often it will probably need to be accessed or restored from, and that's where the storage tiers come in. As John just mentioned, there are three different kinds with Azure: archive, cool, and hot. If you look at the pricing model, it's priced around reads and writes, so archive storage might have a really low write cost but a really high read cost, because it assumes you won't need to get to that data very frequently. So again, understand the use case for what you're backing up and how often you'll actually need to pull data back out of it. For something like a VM, you probably want hot storage, whereas for a long-term retention scenario you might go with archive; if it's something you'll maybe touch once every two years, save some money and go the archive route.

The other side of that is interesting. I've worked in the MSP space for a long time, and I remember being at a conference where a client was telling me about a backup product, which will go unnamed here, where they had ten or twelve terabytes of data to restore for a customer, and the vendor's cloud solution was throttling restores to, I want to say, about 500 gigabytes a day. It was going to take them forever to actually restore all the data they had. So it's really important, especially when you're vetting vendors, to understand how quickly you can get to your data and whether there are any limitations on restoring all of it in a disaster recovery situation.
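Here is a small sketch of that tiering choice with the azure-storage-blob SDK: write a long-term archive straight into the Archive tier, then rehydrate it to Hot when a restore is actually needed. The connection string, container, and blob names are placeholders.

```python
# Sketch: keep a rarely-touched backup archive in the cheap Archive tier and move
# it to Hot only when a restore is needed. Connection string and names are
# placeholders. Note that rehydration out of Archive is not instant; it can take
# hours, which is exactly the RTO trade-off to weigh against the lower storage price.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="yearly-backups", blob="fileshare-2020-05.tar.gz")

# Write straight into the Archive tier for data you expect to touch rarely.
with open("fileshare-2020-05.tar.gz", "rb") as data:
    blob.upload_blob(data, standard_blob_tier="Archive", overwrite=True)

# Later, when a restore is needed, rehydrate to Hot before downloading.
blob.set_standard_blob_tier("Hot")
```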
Let's get into protection, which is obviously one of the more critical pieces of this, because it's one thing to back up properly and to have your backups available, but they also have to be protected in a secure manner that doesn't compromise CUI, which is the major crux of CMMC and why it came to be in the first place. Talk through the slide in terms of encryption and everything else.

A lot of this we've touched on in previous slides. Encrypt your backups: data at rest and data in transit. If you're going to the cloud, is it going over TLS? Is your data at rest actually encrypted? You can bring your own keys or use Azure's native storage encryption; that's up to you based on your threshold for security, but regardless, make sure your backups are encrypted both in transit and at rest, because they potentially contain CUI and, if you really look at it, could contain ITAR data, which is incredibly sensitive.

The next one is real-time risk detections, and that's largely about backup failures: I need to know when a backup fails, what went wrong, and how to fix it. Azure Recovery Services vaults give you a level of notification, the email notifications I mentioned earlier, and you can actually generate a Power BI dashboard out of it, which is a newer and pretty neat capability. That even weaves in something like Sentinel, which will definitely tie in, especially for the managed user-account-compromise piece and the risk events. Real-time detection also means making sure you're notified, somebody is assigned to work the request, and if you need to make changes to the environment, like backing up additional items or removing items, that goes through a change control board.

The next one is analyzing jobs and risk events: tying in a log aggregator, a SIEM solution like Sentinel, along with Azure Log Analytics, to track which failures are happening and who is accessing my backups. That goes to the last bullet, compromised accounts. It's important to watch not only inside a specific Windows server, for example, but the surrounding Azure environment, where a different identity might be accessing the higher-level container of your whole Azure estate. How are they getting in? When was the last time someone looked at a backup? When did somebody last do a test restore? You want to be able to show in the logs when those actions happened, and if an account was compromised, what it did in the environment: did it try to encrypt my backups, did it try to restore them, did it actually restore them? All very good things to keep track of, and Azure Sentinel does a really good job of that.

Thanks, Daniel. Let's talk a little bit about some products you've brought up: role-based access control and PIM, which you can expand on as an acronym, and even some nuances of subscriptions, licensing, and additional costs that may come into play, but how those two capabilities can lock down access to your backups.

It's really important, taking another lap on the RBAC piece, to have separation of duties; it's a NIST requirement. Joe can maybe back up the data, but Susan restores it. That's incredibly granular, but you get the concept. You can create custom RBAC roles inside Azure, or use built-in ones like Backup Operator, to really scope down the level of access somebody has. You might have four or five Recovery Services vaults running different backup jobs, and you can scope people to specific vaults: maybe Joe can only access the recovery vault for the Huntsville location, which is where we're located, whereas Susan can only access the one in, say, Hampton, Virginia. It's important to get that under wraps, and you can also have just-in-time access, where an individual can only touch that data during a certain time window or after a certain approval process.
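To make the "Joe backs up, Susan restores" idea concrete, here is a hedged sketch of a custom role definition that only allows viewing backup items and jobs and triggering restores, assignable at the scope of a single Recovery Services vault. The subscription, resource group, vault name, and the exact operation strings are illustrative; verify them against the published Microsoft.RecoveryServices operations, or simply start from the built-in Backup Operator role and trim it down.

```python
# Sketch: a custom Azure role scoped so one person can monitor jobs and trigger
# restores on a single Recovery Services vault and nothing else. Subscription,
# resource group, and vault names are placeholders, and the action strings should
# be verified against the Microsoft.RecoveryServices provider operations list
# (or start from the built-in "Backup Operator" role and remove what you don't need).
import json

VAULT_SCOPE = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/rg-backups"
    "/providers/Microsoft.RecoveryServices/vaults/huntsville-vault"
)

restore_only_role = {
    "Name": "Backup Restore Operator - Huntsville",
    "IsCustom": True,
    "Description": "Can view backup items/jobs and run restores on one vault only.",
    "Actions": [
        "Microsoft.RecoveryServices/vaults/read",
        "Microsoft.RecoveryServices/vaults/backupJobs/read",
        "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems/read",
        "Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems/restore/action",
    ],
    "NotActions": [],
    "AssignableScopes": [VAULT_SCOPE],  # only this vault, not the whole subscription
}

# This JSON shape is what Azure's role-definition tooling expects; print it to hand off.
print(json.dumps(restore_only_role, indent=2))
```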
That's where Privileged Identity Management comes in; you mentioned PIM earlier, and just-in-time access is the whole concept behind it. I believe you have to have an Azure AD P2 license to enable it across your user base, but what it allows you to do is just-in-time, break-glass style elevation with a change-control and approval process to grant the additional level of access that's needed, whether that's to perform a restore or to elevate permissions for some other feature inside Azure.

Awesome. So let's talk about the glorious Office 365, GCC High specifically. This isn't in the bullets per se, but what comes out of the box from a disaster recovery standpoint, at a high level, and reiterate why the native capabilities of Office 365 are just not going to cut it.

Sure. From a disaster recovery piece, that's Microsoft's responsibility: they make sure the data center is up, and if something goes wrong with a data center, Microsoft holds the responsibility to bring it back. There's nothing within your control to spin up another Office 365 tenant and migrate that data immediately; there's no Azure Site Recovery equivalent for Office 365. So that really scopes us down to backup specifically. Looking at the bullet points here, and my apologies if you see me glancing up, we actually have a TV behind the camera, the question is how long and how often you can back up data. We vetted a ton of vendors, and you'll see that on a future slide, but it's important to understand what Office 365 gives you: you can have archive mailboxes in Exchange; OneDrive data I think sticks around for about 90 days; user accounts get deleted 30 days after you perform a deletion; and SharePoint has version control on files, which is important to know. But maybe you need to go back to a specific point in time, roll back 180 days and see what a file looked like in somebody's OneDrive, or somebody was supposed to get an email and says they didn't and you need to verify it by rolling back that Exchange mailbox. The ability to roll back to a very specific date and pull that out is not something natively available in GCC High. You can do some advanced things with eDiscovery for mailboxes, and that has expanded to reach SharePoint and Teams, but from a long-term retention perspective you're going to want to use a third-party tool like AvePoint, which is what we've recommended here.

Okay. So as I understand it, and you can expand on some of these bullets, it wasn't only the location the backups would be going to that needed to be FedRAMP Moderate or FedRAMP Moderate equivalent; the product itself, the SaaS application doing the backups and handling the data, the CUI, also needed to be FedRAMP Moderate.
We also needed, because of most of our clients, some element of bring-your-own-storage, so they could select where backups land rather than relying on the vendor's storage or the vendor's own data center, and specifically we needed folks to be able to store their data in Azure Government, because that's what we're most versed in and what we use most often. Because some of our clients have ITAR data, the vendor also needed to be a US-based company; some of that was preference, but it also mattered from a support standpoint. If something happened with their backups or with the software they were using, we didn't want them getting support from somebody who isn't US-based or a US citizen. Data sovereignty was a huge piece of our research. Did I miss anything, or anything I should explain further?

Those are really the highlights. You see FedRAMP Moderate on the slide: there are data centers out there that aren't FedRAMP certified at all, and for this you can't be FedRAMP Moderate equivalent, you have to actually be FedRAMP Moderate, so there's a bit of a difference there. When it comes to ITAR data, that moves you into the FedRAMP High space, so we're looking ideally at Azure Government to store everything; Azure commercial is also FedRAMP High, which is a recent addition as of a few months ago. Bring-your-own-storage is important, and bring-your-own-encryption-key is another thing worth talking about: the ability, if you want it and your policy and your cyber team call for it, to bring your own encryption key to the party. And US support is huge. It is incredibly hard to get US-based support for many products, so vet not only where the application is hosted, whether the SaaS data center is FedRAMP Moderate or FedRAMP High, but also the support personnel, because if a foreign national is accessing your backup data to troubleshoot it for you, they could potentially have access to ITAR or CUI data. It's very important to confirm the support staff are US citizens, and if they're not, to request it; some companies will allow that and some won't. Again, it's really important to vet every vendor you use to the nth degree from that perspective.
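On the bring-your-own-encryption-key point above, here is a minimal client-side illustration of the principle: encrypt the backup archive with a key you control before it ever leaves your network. It uses the cryptography package's Fernet recipe and placeholder file names; it illustrates the concept, and is not Azure's or AvePoint's customer-managed-key feature.

```python
# Minimal illustration of the "bring your own key" principle: encrypt the backup
# archive with a key you control before it is uploaded anywhere. This is a
# client-side sketch using the `cryptography` package, not a vendor's
# customer-managed-key feature; file paths are placeholders.
from cryptography.fernet import Fernet

# Generate once, store it in your own key vault/HSM, never alongside the backups.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("fileshare-2020-05.tar.gz", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("fileshare-2020-05.tar.gz.enc", "wb") as f:
    f.write(ciphertext)

# At restore time, only a holder of the key can recover the archive:
with open("fileshare-2020-05.tar.gz.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```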
Going into the down-selection we did, the bake-off, looking at the various vendors: when we looked at backing up from on-prem to some other cloud storage location, there were many companies that met that criteria in a safe, secure, and compliant way. But once you started getting into backing up GCC High, that narrowed it down to Veeam and AvePoint, as I understand it, and ultimately we decided on AvePoint. Can you give some flavor to that beyond just that?

Yes. Like you said, the two we narrowed it down to were Veeam and AvePoint, and AvePoint is the solution we ended up going with. Again, part of it is making sure they're in FedRAMP certified data centers, Moderate or High as needed depending on whether you have the ITAR requirement, and US support, going down the list I just mentioned. We decided on this vendor at least three or four months ago, so some of this might have changed, but as far as we're aware none of it has. The reason we moved away from Veeam is that their R&D department is actually based in Moscow, and although they do have a license to work with the Army, we decided it was probably best not to use someone with such a large portion of their organization based out of Russia. So we decided to go with AvePoint, which is US-based; they host their product out of the US Gov Virginia data center, and you can obviously have US Gov data storage there as well, so it's a one-stop shop for both the application access and the storage, and their support is US-based.

And some of the nuances we ran into: there were companies that arguably could technically back up GCC High, but their software application didn't meet the FedRAMP standards it needed to meet, or there were limits on where it could store information and it had to be their data center, which didn't meet the compliance requirements.

At one point in the vetting process we were working with one company that said they could back up GCC High, which was great, but they were a Canadian-owned company, all of their support was Canadian-based, and their platform ran in AWS commercial, which is FedRAMP Moderate. So it was checking some of the boxes, but we actually got to the point in the discussion where they would have had to hire US-based staff in order to troubleshoot any backup issues. So again, very important, and I want you to understand that the lengths we went to in vetting these products were pretty extensive. It's not something I wish on anyone to have to do regularly, but it's needed for compliance; it's very important to use compliant products. That's just how it is.

Right on. So let's get into a little bit of a demo of AvePoint, what users can expect if they decide to go this route, and a little of what we deal with on a day-to-day basis using AvePoint for clients.

Sure, just talking about it a bit first: the configuration of AvePoint is really easy. You register an application inside your Azure tenant and give it very specific permission sets to reach in and back up your Exchange, SharePoint, Teams, and OneDrive data. Then you set up all the users and accounts you want to back up and the site collections you want to back up, going through methodically, understanding what you want to back up and checking those boxes, and then you run your first backup. That first backup takes quite some time, because it's backing up all of that data as a point in time, which could be hundreds of gigabytes or terabytes upon terabytes. After that it starts doing incrementals every day, saves them for up to a year, and you can continue that process year after year depending on how long you want to keep the data.
Perfect. Getting into the demo of AvePoint backing up Office 365: you see the workloads here, Exchange, OneDrive, SharePoint, and so on, and where you can control your backups. Going into General settings, you can even control the storage location. Hypothetically, in a bring-your-own-storage scenario, you don't want to use AvePoint's data centers and their native backup storage; you want to use AWS, Azure Government, or some other on-prem location. You can click to remove AvePoint's backup storage, and under Storage type, the form reacts to whichever storage type you pick. Say you decide to go with Dropbox: you can see it asks for credentials, and you have the retention period right there as well, which goes for all of them. You can run a test of whether everything is connected properly based on what you've configured, and you can set access keys for bring-your-own-key scenarios. Then there's the encryption keys section, where you can export keys and control them in that manner.

Let's look at checking on jobs after you do run a backup. Now that I've shown a little of the granularity and the control you have over where and how you're backing up, this is the dashboard for all of your backups, where you can check on jobs. When you click More details, you can see exactly what caused a backup to fail, what happened to that job when it failed, and when the last successful backup was, all that good stuff. Here you see job analytics per workload: how many jobs finished successfully, looking back as far as seven days, plus all the jobs that have run this week, so you get a quick look at status and can do some diagnostics. Here's a different view of how objects are being backed up within each workload, with successes, failures, and so on. And in Job monitoring you can really start looking at each individual job: whether it was an automatic or a manual backup, the date and time, how long it took, and you can even generate a report. So if you're going into some sort of review or meeting, you can export these to discuss with your team: what's being backed up, whether certain data still needs to be backed up, any anomalies, things of that nature.

That about wraps it up. I just wanted to show a quick snapshot of what's available in the user interface. Obviously it takes a little more work to get the product set up, tapped into your tenant, and backing up the right things, and as Daniel alluded to, the first backup will take much longer. But once everything is set up, it's just a matter of coming into these dashboards and finding your way around; it's pretty self-explanatory. I don't do backups on a regular basis, but it's easy to navigate.
That's more or less what I wanted to show everybody in attendance today. Otherwise, go check out cmmc.blog; that's a quick link to our blog, and you'll see one of our most recent posts covering the material we went through today, but more extensively, mapping it to each requirement within Levels 2 and 3. Again, that's cmmc.blog, and the latest post will be about Recovery and the things we've discussed today. Look out for the recording as well. If you have further questions or want a more extensive demo of AvePoint and what it can do in terms of backing up GCC High, feel free to reach out to our team at cmmc@summit7systems.com, and we'd be happy to follow up on any additional questions that Daniel or I or another one of our team members can address. Thanks again for attending today; we appreciate the questions and all of your feedback throughout the whole process. Signing off here from Summit 7 headquarters at the Summit 7 studios. Man, we like alliteration.
Info
Channel: Summit 7 Systems
Views: 1,371
Keywords: Azure Government, Azure Security, Cloud Security, Cloud Compliance, Microsoft Azure Government, CMMC, DFARS, Cybersecurity, Azure Can, Summit 7, Microsoft, Azure Information Protection, Azure Sentinel, Azure Security Center, Azure ATP, Microsoft Government Cloud, Governance, Backup, Recovery, Cloud Backup
Id: ORDC7BQfM8w
Length: 55min 55sec (3355 seconds)
Published: Wed May 06 2020