AWS re:Invent 2018: Data Protection: Encryption, Availability, Resiliency, & Durability (SEC325-R1)

Video Statistics and Information

Captions
Confidentiality, integrity, and availability: these are the core dimensions, the three legs of the stool, when we say "protect data." How do we achieve these outcomes using the controls and capabilities of the platform? That's what we're talking about today, and then we're going to talk about how encryption can add yet another tier of not only access control but also auditability and trust to achieve these same outcomes.

Managing security means understanding least privilege, and least privilege means doing work. You can't just tell the accounting team, "well, here's the accounting bucket, s3:*" — that's not the way to do it. It means understanding your workloads and understanding the principals who are going to be taking the actions to serve the business. Who are they? What actions do they need to take, on what resources, and under what conditions? PARC: principal, action, resource, condition. Who is the identity and where did it come from? What's the identity provider? Maybe it's your on-prem Active Directory and you federate into the platform; maybe you're building a workload with Cognito, and Cognito is your identity provider. The actions: all of our services have very defined, detailed, granular, specific things you can do. Listing buckets is different than creating a bucket. You can let anybody list a bucket — well, maybe not anybody, but maybe the whole team — but only certain people should be allowed to create a bucket. What are the resources? Am I going to give you access to all of EC2, or to specific instances? Maybe I'm only going to give you access to instances where the tag Department equals your team. And under what conditions: from what originating IP, from what region? Then there are service-specific conditions as well: you're allowed to create a bucket, but only with these characteristics; you can launch an RDS instance, but only certain engine types.

We solve for this problem using a declarative policy language across the platform. Policy language can be attached to the principal — the actor, the identity — and policy language can also be attached to resources. So when we look at a user policy, what are we going to allow? ec2:AttachVolume and ec2:DetachVolume — fine. For what resources? All instances, a wildcard. But under what conditions? Only where Department equals dev, because the people who get this policy should not be attaching volumes from production; they should only be attaching volumes that are tagged dev, and that is what this allows.

This next one is a resource policy: the same style of language, but it's not attached to the principal, it's attached to the resource. You can also use resource policies to grant access, but here's a really nice example of using a resource policy to deny. Now, we always start with deny when we evaluate the policy language during authorization, and we authorize every single API call — there's no session, no session state; every single time you call, we check. So this policy doesn't grant any access. You may have been granted access separately as part of your identity, like the policy on the left, but the policy on the right applies to a specific resource — it's attached to that resource — and it says we're going to deny all principals, for all actions, for this one bucket and prefix (a prefix is a little bit like a folder, the namespace; S3 is a key-value store), for this particular key space, unless you present multi-factor authentication. If you make a call to S3, for any operation, for any principal, no matter who you are and no matter what policies and permissions you've been given, if you're going to interact with this resource that has this policy on it, for that particular prefix, you must present multi-factor authentication during the call. It doesn't matter what other privileges you have — I could give you all the explicit allows on your identity — this policy protects this resource very specifically.
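A rough sketch of the two policy styles just described, written in Python with boto3. The role name, bucket name, and prefix are placeholders, and the exact policy text from the talk's slides isn't reproduced here — this is only an illustration of the pattern, assuming standard condition keys:

```python
import json
import boto3

iam = boto3.client("iam")
s3 = boto3.client("s3")

# Identity policy: allow attaching/detaching EBS volumes, but only where the
# resource is tagged Department=dev (role and policy names are placeholders).
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:AttachVolume", "ec2:DetachVolume"],
        "Resource": "*",
        "Condition": {"StringEquals": {"ec2:ResourceTag/Department": "dev"}},
    }],
}
iam.put_role_policy(
    RoleName="dev-operators",
    PolicyName="attach-detach-dev-volumes",
    PolicyDocument=json.dumps(identity_policy),
)

# Resource policy: deny every principal, for every action, on one prefix
# unless MFA was presented on the call.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::example-bucket/taxdocuments/*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(bucket_policy))
```

Because an explicit deny always wins over any allow, the bucket policy holds even for principals whose identity policies grant broad S3 access, which is exactly the point being made above.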
So data is held in a lot of different places on the platform. We've got a lot of cool managed services, purpose-built databases — we announced some new ones today, and there are some really good ones there; don't sleep on that, it's pretty exciting. RDS, our managed relational database; Aurora, a bigger, better, faster version of RDS. These are AWS resources: they have ARNs and they can take resource permissions. But the data they hold inside of the engine is outside of AWS IAM. You can manage the RDS instance and you can manage the RDS snapshot, but if you've defined a user inside of MySQL, that is outside of AWS IAM.

You need to make sure your data is going to be durable — again: confidential, integral, and available. Snapshots are a great way to ensure availability. You can take snapshots and move them to other regions for purposes of recovery, resiliency, or availability — maybe the regulators are making you do it. You can even copy these snapshots into other accounts. Why would you do that? Because inevitably your CRO is going to come to you and say, "right, right, you have great access control, but what if someone compromises the thing that does the access control?" You can place resources into a separate account, with a different control plane altogether, as the oh-my-god, break-glass, at-least-we-have-this-one-extra-copy option, depending on your resources and your requirements.

But the data inside of the database is protected by the engine, and we've tried to make this easier for customers. In a couple of the engines you can exchange your IAM identity for a database user identity — you can essentially map the database user to the IAM identity. We made this possible first for MySQL, and I was really excited to see it ship for Postgres. But those in-engine credentials — your Oracle user in your Oracle instance — you've got to protect them. If you don't protect those credentials, it's like you're not protecting the data.

So how do you protect those credentials? This is actually a super hard problem, but we solved it for you. We gave you a nice, simple way to not only manage credentials but to rotate them, and not just rotate them but rotate them safely. Of course it's integrated with IAM, so you get the same really nice granular control over who can access the secrets and the credentials. Controlling access to these secrets is almost as important as controlling access to the resource: if I allow or deny you the ability to manage the resource, but you have a login to the MySQL port, that's another dimension of protecting your data. When there are a lot of humans handling a secret, it really creates vulnerabilities. Don't put the secrets in the .password file in your home folder — who's done that? I've done that. Don't do that.

And automated rotation — the big value-add with Secrets Manager is that it's fundamentally a workflow manager for rotating the secret. Why is that important? Well, number one, rotating a credential — I think we all kind of know rotating credentials makes sense in a lot of circumstances, and you should probably be doing it — but rotation also takes on a dimension of availability.
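Not from the talk's slides, but a rough sketch of what this looks like against the Secrets Manager APIs — fetching a database credential at runtime instead of reading a .password file, and turning on automated rotation. The secret name and rotation Lambda ARN below are placeholders:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the database credential at runtime instead of keeping it on disk;
# the secret name is a placeholder.
secret = secrets.get_secret_value(SecretId="prod/orders/mysql")
creds = json.loads(secret["SecretString"])   # e.g. {"username": ..., "password": ...}

# Enable automated rotation, driven by a rotation Lambda; AWS publishes
# templates for the RDS engines, and the ARN here is hypothetical.
secrets.rotate_secret(
    SecretId="prod/orders/mysql",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-mysql",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```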
Maybe in your own life, when you've changed a password on your desktop and your phone still has the old credential, what happens? The phone keeps banging away with the wrong credential and your account gets locked out. Anybody ever have that happen? It's a really ugly problem. So when you rotate credentials, you take on this dimension of availability, and that is really dangerous — so we try to make it safe. Secrets Manager can help you safely rotate credentials, and by making it safe, you can do it more often.

We are already doing this today with EC2 instance profiles. We essentially inject an identity into the EC2 instance, and the actual credentials — the access key and the secret key, which are like a fancy username and password — for an instance profile change throughout the day: hundreds of millions of changes a day on the platform, of rotated credentials inside our customers' instances. Heck, inside our instances — we use AWS too, and we rotate them constantly. You never notice, because it's safe; you never notice because it's handled automatically through the CLI and the SDK. Here we offer Lambda templates to facilitate the rotation: in the box, that works with all the RDS engines, and using the Lambda template you could make it work for your homegrown database or whatever application you've got. Of course it's all logged and monitored, it's stored encrypted, and it makes it really nice to manage these credentials. If you are not solving for this problem, you are not protecting your data.

Then there are the storage services as well, and of course these hold even more data than the databases. We've got S3, our object store; Elastic Block Store, the virtual disk service that attaches to our virtual compute service, EC2; Glacier, of course, for long-term storage; and EFS, a POSIX-compatible managed filer that's highly available across availability zones. And there are now more, as we announced this morning — I'm really excited about FSx; if you ever wanted managed Lustre, now you can have managed Lustre. It's pretty neat.

Protecting data inside these services takes on dimensions that are more tightly coupled to AWS IAM. For EBS, the block store — for the volumes and the snapshots from the volumes — you can write really nice tag-based policies, as we saw earlier: where Department equals dev, or lifecycle equals dev. Durability: you can move snapshots around. As an implementation detail, snapshots are held in S3, so when you take a snapshot and we hold it for you, we're using S3 and you're inheriting the benefit of the durability of S3. And integrity is just solved for: there are no bad blocks on EBS, no corruption — we solve that as part of presenting the block device to you in the first place.

In EFS you can control the attachment of the file system using AWS IAM, but then of course there are POSIX file permissions to go deeper. It does not support ACLs, but it does support Unix-style, chmod-style permissions; that works pretty well and solves for most use cases. Durability: again, you can take backups and move data around. EFS File Sync is like a custom-built widget we made — you light it up inside your VPC and it does all the pumping into EFS. And we announced a new file sync service yesterday — it's a challenge for us to stay on top of what we have and haven't announced — so if you're coming from an on-prem filer, you'll want to pump it all into EFS, and we make that easier for you now.
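Not shown in the talk, but a hedged sketch of the snapshot-movement pattern mentioned above — copying an EBS snapshot into another region, re-encrypted under a key in that region, for resiliency. The snapshot ID and KMS key ARN are placeholders:

```python
import boto3

# Copy an EBS snapshot from us-east-1 into us-west-2 for recovery purposes;
# copy_snapshot is called in the destination region.
ec2_west = boto3.client("ec2", region_name="us-west-2")
copy = ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    Description="DR copy of the orders data volume",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID",
)
print(copy["SnapshotId"])
```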
For S3: integrity is, again, automatically managed. Confidentiality — as we saw earlier, we can write really detailed policies and require MFA. Durability — S3 is extremely durable in region; it's eleven nines of durability, and I like to tell customers those eleven nines are not a marketing number, they're a science number. It actually is eleven nines durable. We are not going to accidentally lose your data: you place an object in S3, it's the root of trust, it's the root of state, and we're not going to lose it. You can also copy your data out of region with cross-region replication (CRR), moving it to another bucket — really nice, it's automatic, and it supports all the encryption options. We can now do selective CRR based on object tags, so if you just want to automatically replicate the objects that have Department equals controller, CRR now supports that. And integrity is automatically solved for: we have some very clever methods throughout the storage layer to make sure everything is always bit-perfect. It doesn't decay, there's no bit rot; it's incredibly integral over time.

So let's go deep on a very common access pattern: I've got identities in my virtual private cloud, and they need to access S3. Four years ago, we would have lit up an internet gateway. I'm sorry, did you say internet gateway, to access my secure data? "It's fine, it's TLS." Your security people don't like that. So we give you something called an endpoint, so you can reach S3 privately without giving internet access to all the instances in your environment. How do we reason about the role accessing the endpoint, and using the endpoint to reach S3? We have policy language, as we saw earlier, attached to the role. As it turns out, there's also policy language that can be applied to the endpoint — that is, separate from the identity and separate from the resource being consumed, the endpoint has a role to play, and the policy language on it can further raise the bar, as we're about to see. And of course we have the bucket policy: policy language attached directly to the resource.

The IAM policy is primarily defining what actions this principal can take — as we saw earlier: principal, action, resource, condition. This is what an IAM policy looks like; we saw this a little bit earlier. We're going to allow listing buckets — all the buckets, s3:* — and we're going to allow PutObject, GetObject, and DeleteObject, three different actions, for this particular resource, the re:Invent bucket. On the bucket policy side: again, you can use a bucket policy to grant access, but it's often very nice, as we saw earlier, to deny — it doesn't matter who you are unless you have MFA. It doesn't grant access; it makes sure that the access takes a certain pattern. In the example bucket policy, we deny all actions for all principals on the tax documents unless the call is multi-factored — the example we saw earlier.

But the endpoint has some additional tricks. The endpoint is a resource, so it gets a resource policy: the endpoint can take policy language, and it can further constrain or allow the actions to S3 that cross the endpoint. This first endpoint policy is really cool: it's got a new condition from us, aws:PrincipalOrgID. It says we are going to deny all principals, for all actions, where the principal's organization is not mine — StringNotEquals on aws:PrincipalOrgID. (That's actually my org ID from my home account, which is not a confidential piece of information.) This means that if somebody brings their home credentials into your environment — pastes their access key and secret key into their console — and attempts to use those credentials through the endpoint, the endpoint is going to deny the use of any credentials whose org ID is not mine.
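A hedged sketch of that deny statement applied to a gateway endpoint with boto3. The org ID and endpoint ID are placeholders, not the values shown in the talk:

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Deny any call crossing this S3 endpoint unless the calling principal
# belongs to my AWS Organization (org ID is a placeholder).
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}
ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",      # placeholder endpoint ID
    PolicyDocument=json.dumps(endpoint_policy),
)
```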
That is really powerful for exfiltration concerns, and really powerful if you've got to allow, say, a partner to use your environment with least privilege. There's another stanza here: we're going to allow anybody to use S3, but only for these resources. API calls using this endpoint to S3 can only regard those buckets — no other bucket, not a different bucket in your company, and certainly not your bucket at home.

You can bring these together with further policy language on the resource itself, so let's look at how a bucket policy can regard an endpoint. We're going to restrict access to a specific VPC: on this bucket, for any principal and all actions, we deny access where the call is not coming from my VPC. All API actions against this bucket will fail if they are not coming from my VPC. You can also write it as "not coming from my VPC endpoint." It would probably be weird to use both — using both doesn't raise the bar — but you have options, because you could also deploy different endpoints inside a VPC. Either way, the critical thing is that nobody can hit this bucket unless the calls are coming from your endpoint. This set of interlocking policies allows you to get really fine-grained and really specific to achieve least privilege, and that is how you protect data: you define principals, actions, resources, and conditions using our granular policy language to get the specific outcome that's required — and to prevent certain outcomes: "absolutely not, I will not tolerate foreign credentials on my endpoint."

But there's a way to raise the bar further: let's add encryption to the mix. We're going to do default encryption on a bucket. This is really nice — it took a long time to get out, and it's something customers really wanted. Default encryption solves a couple of things. Number one, not only do your developers not have to remember to set encryption when they put an object, but if you've got third-party software that integrates with S3 and maybe didn't support encryption — which would have meant you couldn't use it if you needed encrypted outcomes — well, now that's moot. This guarantees that all objects placed in the bucket are encrypted. This is what the configuration looks like when you put it: we're going to enable server-side encryption, and we're going to have it be the KMS version of server-side encryption. If you just need to tick the box, you can use SSE-S3, which is service-managed keys, but typically you want to raise the bar further: use my keys — use this specific key ID to get the encrypted outcome.

So then the question is, why do you encrypt in the cloud? "Because encryption" is what people say, but what they really mean is that they need better control over their data — "I want control with a key that I manage" — and they want to prevent unauthorized physical access. This idea of providing an additional point of control is incredibly powerful, because now not only do you need the permissions to use the resources and the permissions to take the actions, you also need the permissions to use the keys.
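A hedged sketch of turning on default SSE-KMS encryption for a bucket, roughly the configuration being described; the bucket name and key ARN are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Default encryption: every object put into the bucket is encrypted
# server-side with KMS under the specified customer master key.
s3.put_bucket_encryption(
    Bucket="example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",          # or "AES256" for SSE-S3
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
            }
        }]
    },
)
```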
To tell you more about that, it's my privilege and pleasure to introduce my friend Ken Beer, the director and general manager of the AWS Key Management Service.

[Ken Beer] All right, thanks a lot, Peter. So Peter talked about all the ways you can define access control in terms of network-based access as well as the resource itself: if there is an Amazon Resource Name, an ARN, for the resource, then you can make that a condition, you can build a policy on the resource itself, depending on the service, and you can control which identities get access to that resource. But a lot of people say, "I'm not quite sure I like that resource being in plain text, on disks, in data centers that you will never let me visit in person." If you as a customer treat an AWS data center as a hostile environment, encryption is a best practice. Now, customers will say, "all right, I know encryption comes down to where the keys are, and I sort of buy the argument that maybe I don't want to be managing the billions of keys needed to encrypt petabytes of data, and I could put those keys next to my data so it can be encrypted and decrypted quickly — so I sort of buy the idea of keys being in an AWS data center — but dang it, I need to have control over those keys. That's what I tell my auditors; that's what I tell my customers." What does "control" mean in this case? Let me dig down a little.

Here's the fundamental problem with having control over encryption keys. With symmetric key encryption — this is an animation I've done several times in the past, so follow along — you take a key, you take an algorithm, you apply it to your data, and you get ciphertext. That ciphertext can be stored anywhere on the planet; you trust that the use of a 256-bit key under the AES algorithm is going to produce ciphertext that is practically impossible for someone to brute-force. Great — store it wherever you want. However, you now have a problem in terms of access control on that data, because you have this plaintext key that was used, and the algorithm is public — I just told you it's AES. So where are you going to put this plaintext key? You certainly can't store it right next to your data, although that has some attractive durability properties; we don't let you do that. So where are you going to put it? In a different directory structure on the same host, or on a different host, or somewhere far away from the data? This is a hard problem to solve. The best practice in the industry is to protect this key the way you protected your original piece of data: you encrypt it. Now I can put the data key required to decrypt the data right next to my encrypted data — two pieces of ciphertext, sharing the same durability properties — and I don't have to worry about anybody with access to that server host being able to decrypt my data. Problem solved, right? No: you've just encrypted a piece of data called a key with another key, and if you follow along, you can do this ad infinitum. At some point, if you want to decrypt your data, you have to have a plaintext key somewhere that's always available to decrypt it. So where is that key going to exist?

For the past approximately forty years, the best practice has been to devise a specialized appliance called a hardware security module. This hardware security module has some interesting security properties. You'll notice it has very few ports. You'll notice it has this special keypad device called a PED, where you as a human get to go and initialize the device and say, "I now want you to create keys on yourself," and those keys will never leave the device.
Anybody who tries to crack open the chassis will cause the system to zeroize all the keys on itself — you would rather lose those keys than give up confidentiality. They're very clever devices, and like I say, they've been used for decades by governments, by banks, by large manufacturing companies. But they are still a single device with a single copy of a key, so you have to think about durability and availability of these things wherever your data happens to be that needs access to this high-level key at the top of the hierarchy to decrypt your data.

The pros here: you control the actual device, you control how you authenticate to it — with a username and a password, or a certificate — and you control the user accounts and how the authorization to make use of keys on that device works. The best part, if you're in a regulated industry, is that this looks very familiar to your auditing team; they've been checking boxes with FIPS 140-2 validated HSMs for years, so this is easy. However, if you are trying to build a geographically dispersed, highly available, highly durable application that can scale up and down to your needs, this is tough, because you've got to procure these devices — and guess what, you don't get to put your hardware into our data centers. So now you're sort of stuck. And the authentication and authorization mechanisms of these devices use very bespoke, rather arcane cryptographic protocols like PKCS#11 or JCE or CNG. That's not the language you use to talk to AWS services: you use things like Signature Version 4 to sign API requests, and things like the JSON policies Peter went through to define how you access resources inside the cloud — and this is true regardless of the cloud provider.

So the second choice is maybe you take a dedicated HSM, but it exists in the cloud. Now you potentially reduce the time it takes to provision a device. You still control the authentication and authorization mechanisms, so your cloud provider can't see the passwords you use; they can't control the crypto officer or the crypto user — you own that. Because this device is now closer to the EC2 instance or the S3 bucket in that particular region or availability zone, you get lower latency, so encryption and decryption happen faster. And because we supply the HSMs here, through the CloudHSM service, we give you the opportunity to make an API call to cause another HSM to come into existence — you don't have to wait several weeks to procure one of these very expensive boxes from a rapidly diminishing set of vendors; make an API call and it shows up. Again, this looks pretty familiar to your auditors: it's a FIPS-validated HSM, in this case with the underlying cryptographic module supplied by a third party, so it gets easier to check the box. But you're still stuck with "I can't make this work with AWS-speak": there is no policy document that works here. This is designed for your application, whether it's running in EC2 or on-premises, to talk directly to this HSM to make use of keys.

So the third option is something we invented four and a half years ago with the Key Management Service, and the idea is that this is a managed HSM. The key material itself is very safe — it cannot be accessed by humans — and you also control the authentication and authorization, but now you do it with Signature Version 4 and JSON policies. You treat keys inside this HSM as just another AWS resource: all the keys have an ARN; they're yet another Amazon resource.
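Not from the talk, but a minimal sketch of what "keys as just another AWS resource" looks like in practice — creating a CMK and using it through ordinary SigV4-signed API calls with boto3. The alias is a placeholder, and direct KMS Encrypt is limited to small payloads (up to 4 KB), which is why bulk data uses data keys, as described later:

```python
import boto3

kms = boto3.client("kms")

# Create a customer master key; it gets an ARN and a key policy like any
# other AWS resource (the alias name is a placeholder).
key = kms.create_key(Description="orders application master key")
key_id = key["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/orders-app", TargetKeyId=key_id)

# Encrypt and decrypt a small secret directly under the CMK; the plaintext
# key material never leaves the KMS HSM fleet.
ct = kms.encrypt(KeyId=key_id, Plaintext=b"db connection string")["CiphertextBlob"]
pt = kms.decrypt(CiphertextBlob=ct)["Plaintext"]
assert pt == b"db connection string"
```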
You also get lower latency to your applications in the cloud, whether you're calling directly or through an AWS service, and now you don't have to worry about scaling and making these keys highly available — that's our problem. You're used to approximately 20 milliseconds of latency on the decrypt path to gain access to a master key, and you get that over time; that's our commitment to you. You're used to four and a half nines of availability for all KMS APIs; that's what we want to deliver, so that you continue to use not only KMS but all the services that rely on KMS to encrypt your data. And because this service speaks SigV4, understands policies, and understands IAM users and roles, we can integrate it very tightly with other services. So when you call S3 and say, "S3, I want you to encrypt this thing I'm inserting on the PUT path," S3 makes a call to KMS and says, "I need a new 256-bit key to encrypt this brand-new thing" — but it's not the S3 service asking for a key, it's actually customer A; it's their identity, their cryptographic identity, that wants permission to cause a new key to be generated, or on the GET path to cause a key to be used to decrypt. So we're passing along your cryptographic identity directly to the service that holds your master keys.

The big con is that this does not look familiar at all to your auditors. What is a multi-tenant, stateless HSM? That's been one of the biggest challenges we've had over the past four years: explaining how we architected these HSMs. We built them ourselves, because the commercial HSM market did not have a solution that would scale. We operate to the tune of tens of billions of requests on a daily basis; there just is no HSM on the market that can handle that. You can say, "great, scale — deploy a thousand of them." I challenge anybody on the planet to say they have a thousand commercial HSMs in a production deployment; frankly, there's not that much of a need for a single customer to have that. So we built the HSMs, we built the architecture, and we're working with our auditors, our customers' auditors, and our customers' CISOs to explain that the security properties of this service are very similar to what you're used to.

OK, how do we do this? The service is designed so that once master keys are generated, the plaintext copy of the master key is not available to anyone; it is simply not accessible. How does that work? When one of our HSMs gets provisioned, when we launch a new region, once it is in active mode and it has generated its own set of keys — which are then used to protect your customer master keys — there is no SSH to it, there is no Telnet, there is nothing: no humans have any ability to connect to that device to do anything. There is a very limited API, and only other trusted components in the service have the ability to call it. There is no GetKey API, there is no ExtractKey API; there is only "I want to use this key to encrypt and decrypt."

The next question is: well, you've got to update software — how do you do that on a running host? We don't. We kill the host, and by killing the host we wipe all the keys off it; there is no more key material. Then we push new software to that host, and we do it one host at a time to make sure we always have the right set of keys to decrypt your CMK when you need it. The next threat is going to be: what if somebody decides to push some malware to your HSMs
and changes the security properties of the devices — introduces an ExtractKey API? Yes, that's a very real concern, which is why we built these to require a quorum of authenticated digital signatures to be able to use the API that is called UpdateFirmware. We make it a very loud, noisy, internally public process to update the firmware, and we do it as rarely as we can get away with. Every time we do it, we are committed to resubmitting the new version of the firmware to NIST under FIPS 140-2. You will see that commitment inside our SOC controls — the SOC report for fall 2018 is now available, and we have a new control related to our commitment to FIPS — and you'll see the certificate from NIST. So we've designed a system that we think has the right balance between security, availability, and durability. The security properties, again, are our promise to you. How do you know we're not going to change those security properties? Because we would have to violate FIPS to do so, and because we would have to violate our SOC and take a critical finding in our SOC 1 and 2. If you've read our SOC reports over the years, you know AWS has never had a critical finding in a SOC report; that would be a catastrophic event, a huge trust-buster. So we are holding our own feet to the fire to ensure that KMS is as secure as it can be.

OK, so if you buy that the physical security of keys, and the inability to compromise them physically, is fairly well designed, the next question is: how do I know who's going to use those keys? As I mentioned, each CMK has a resource policy — it's called a key policy — assigned to the key. What are the types of things you can express in a key policy, using that principal-action-resource-condition semantics Peter talked about? Well, you might say that these particular users and roles have the ability to use this key for encryption and decryption, either within the same account in which the key exists, or from an external account — another account in your organization, or an external account that belongs to a partner. Say you are a service provider, and your customer is an enterprise with an AWS account; they don't do much in it, but they might have a CMK, and they can block access to you, the service provider, whenever they want — they grant you the ability to use that key only for encrypts and decrypts. This is how the integration with Box works: if there are any Box customers in the audience, every time you upload files into Box, Box will encrypt and decrypt, it goes to KMS, and you have the option of saying "go to my key." If I start to feel like something hinky is going on at Box, I push one policy change, and now Box can neither read nor write my data, because they no longer have access to my key. That's a good separation-of-control story.

Another way to look at these policies is to control which applications can encrypt and which applications can decrypt. For those of you who have been doing crypto for a while, this semantics was often served by public key cryptography: you give the public key to your encrypters and say, "please encrypt data"; I hold the private key, and I'm the only one who can decrypt. You can express that same semantics using a key policy and symmetric keys. The administration of keys can be limited to a specific set of administrators — we strongly recommend that you think very hard about who has administrative rights to manage a key, and I'll go into a little more detail about that. And you can share a key with an external partner account, as I mentioned, but limit it to the use of encryption and decryption only.
You can also put additional conditions on it, so they only get to decrypt if they also pass in a particular string — maybe it's an account ID, maybe some other contextual information that is unique to that particular caller. And the language we use for key policies is the same as IAM policies and S3 policies; the nouns and the verbs are a little different, so let's go through that.

Here's an example of a key policy where the statement ID at the top is about access for key administrators. If you look at the principal, this is the ARN of a particular IAM user — it could certainly be a role; it doesn't have to be a user. When you look at the actions — the verbs, if you will — there's the ability to create keys, the ability to list them, enable them, and disable them; controlling the state of a key is very important. The ones at the bottom there, around scheduling key deletion, are incredibly powerful APIs: anybody who has the ability to delete a master key now controls the durability of potentially a petabyte of data in S3. So as you're creating key policies, one thing to consider is simply not including that action. If the action doesn't exist in the policy, nobody has the ability to delete the key — but the person who has kms:Put*, specifically PutKeyPolicy, can go and update this policy later, when you think you're ready to delete a key, and that can be a ceremony that involves multi-factor authentication and multiple people. These are two potentially scary things to do. Now, the cost of having a key stored inside AWS KMS is a dollar a month, so it's not necessarily painful to have keys lying around, but your auditors like a story of key lifecycle management showing the birth, use, and death of a key.

OK, now let's look at the use of a key. This is much more common: applications, users, and partners need to be able to cause keys to be used, even though the plaintext key material will never be exposed. Here the principal is a role, and the actions are Encrypt, Decrypt, ReEncrypt, and GenerateDataKey. GenerateDataKey is actually a function where we generate a new 256-bit key and encrypt that key under the master key you pass in as the GenerateDataKey API parameter. This is how envelope encryption works. Almost every AWS service that integrates with KMS — and that list is approaching 50 right now; eventually it will be every service — uses this GenerateDataKey API, so that the individual keys scoped to the specific piece of data you want that service to handle are unique. And whenever you see Resource "*" in a resource policy, just note to yourself that this is a reflexive reference to the particular resource the policy is attached to; it does not mean the policy applies to all keys in your account.
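A hedged sketch of a key policy along those lines — one statement for administrators with the deletion actions deliberately left out, and one for users who may only cause the key to be used. The account ID, role names, and key ID are placeholders:

```python
import json
import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessForKeyAdministrators",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdmins"},
            # State management only; ScheduleKeyDeletion is deliberately omitted
            # so nobody can delete the key until the policy itself is changed.
            "Action": [
                "kms:Create*", "kms:Describe*", "kms:List*",
                "kms:Enable*", "kms:Disable*", "kms:Put*",
            ],
            "Resource": "*",   # reflexive: "this key", not all keys
        },
        {
            "Sid": "AllowUseOfTheKey",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/OrdersApp"},
            "Action": [
                "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
            ],
            "Resource": "*",
        },
    ],
}

# The key ID is a placeholder; "default" is the only key policy name KMS accepts.
kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PolicyName="default",
    Policy=json.dumps(key_policy),
)
```

In practice a key policy usually also grants the account root principal access so the key can't become unmanageable; KMS refuses a policy that would lock out the caller unless you explicitly bypass that safety check.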
OK, so you've defined access control in the logical plane, and you agree that it might be a good idea for your cloud provider to host the physical key, because it gives you great availability and low latency. But where does encryption actually happen? What are the choices there? We have two basic choices. First, client-side encryption — and in this case the diagram is a bit of a mess, because there are lots of options for where you do client-side encryption. Start with where your plaintext data is: it's in an on-premises system, or it's inside an application you're running in EC2. You've got to pass it into some program that performs the encryption. That program is going to need a key, and that key can come from your own key management system, from AWS KMS, or from CloudHSM. Ultimately it produces a piece of ciphertext, and you submit that ciphertext to the AWS storage service — whether that's S3, EFS, or any storage service that takes arbitrary data. Then, when you want to get that data out, you have to have the same client processes that can do the decryption dance.

Server-side encryption is a little simpler. Your data starts out wherever it is; you call the appropriate AWS SDK to say "I want to put data here," "I want to write data" — if it's DynamoDB you do reads and writes; different verbs and nouns are used for different services — but you're asking the AWS service to encrypt on your behalf, and you pass the specific CMK key ID to that service and say, "make sure that when you encrypt this data, the data key being used is encrypted under this CMK, because I have defined access policies on that CMK, and I want to ensure that only the correct people are able to write data into this bucket or read data from this DynamoDB table." So you're allowing the AWS service to act as a proxy, passing your cryptographic identity to the place where the key exists to cause it to be used.
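A hedged sketch contrasting the two paths: client-side envelope encryption with GenerateDataKey (using the third-party cryptography package for the local AES-GCM step) versus asking S3 to encrypt server-side under a specific CMK. The bucket, keys, and object names are placeholders, and this is only an illustration of the pattern, not the AWS Encryption SDK's own format:

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

kms = boto3.client("kms")
s3 = boto3.client("s3")
CMK_ID = "alias/orders-app"          # placeholder CMK alias
BUCKET = "example-bucket"            # placeholder bucket

# --- Client-side envelope encryption --------------------------------------
# Ask KMS for a fresh 256-bit data key; it comes back both in plaintext and
# encrypted under the CMK.
dk = kms.generate_data_key(KeyId=CMK_ID, KeySpec="AES_256")
nonce = os.urandom(12)
ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, b"my sensitive record", None)

# Store the ciphertext together with the *encrypted* data key and nonce;
# the plaintext data key is discarded and never leaves this process.
s3.put_object(
    Bucket=BUCKET,
    Key="client-side/record-1",
    Body=ciphertext,
    Metadata={
        "x-enc-data-key": dk["CiphertextBlob"].hex(),
        "x-enc-nonce": nonce.hex(),
    },
)

# To read it back: fetch the object, ask KMS to decrypt the data key
# (requires kms:Decrypt on the CMK), then decrypt locally.
obj = s3.get_object(Bucket=BUCKET, Key="client-side/record-1")
meta = obj["Metadata"]
plaintext_key = kms.decrypt(CiphertextBlob=bytes.fromhex(meta["x-enc-data-key"]))["Plaintext"]
record = AESGCM(plaintext_key).decrypt(bytes.fromhex(meta["x-enc-nonce"]), obj["Body"].read(), None)

# --- Server-side encryption ------------------------------------------------
# Let S3 do the same dance on our behalf: S3 calls KMS with our identity
# and encrypts under the CMK we name.
s3.put_object(
    Bucket=BUCKET,
    Key="server-side/record-1",
    Body=b"my sensitive record",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=CMK_ID,
)
```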
OK, so this is a repeat from Monday — an announcement we made Monday night, for those of you who didn't catch it, because as you know we do announce a few things here at re:Invent every year. What we have done is provide another option for finding this balance between the storage of keys and the access policies for using keys. We call it custom key store, and this is the simplest logical diagram; let me go into a little more detail about how it works. If you were to use KMS by itself today, the red key — if you remember from my diagram, that master key that wraps data encryption keys — is made available at the time you need it inside a fleet of HSMs that we own. You call KMS either from your own applications, via various SDKs including the AWS Encryption SDK, or you ask an AWS service to call KMS on your behalf. If you were using CloudHSM before Monday, you would again have some type of master key, a key-encryption key, created inside a CloudHSM cluster, but the interface to make use of that key is limited to a set of cryptographic APIs. If your application understands PKCS#11, that's great; if you're using something like Oracle Enterprise Edition with Transparent Data Encryption, they've got a way to tie into PKCS#11 and those interfaces. But you could not ask 50-plus AWS services to make use of that key inside the CloudHSM cluster. Custom key store now enables that. You call KMS and say, "here's my CMK key ID; this is what I want to use on the encrypt path." KMS says, "oh, I know where that key actually lives — over in this CloudHSM cluster you configured earlier," and we go from our service VPC into your VPC, where the CloudHSM cluster exists, and we use a credential you have given us that grants access to just that particular key on your cluster.

So how do you set up this connection? Here's a screenshot of the console — clearly there's also a set of new APIs so you can do this programmatically. On the left-hand side you'll see a new option for custom key stores. Here you define an arbitrary name for the store, and you choose an existing CloudHSM cluster — the cluster has to have been created beforehand, and you can certainly read more about how easy that has become over the past 18 months as we've introduced enhancements in CloudHSM. Then you give KMS the proper certificate that proves it is connecting to the right cluster; this is a cryptographic identity KMS will use to ensure that the cluster ID you selected is in fact the correct one — the belt-and-suspenders approach. Finally, you will have needed to create a cryptographic user inside your cluster called kmsuser. You define a password for that kmsuser, you share that password with us the first time, right here, and then we go through the process of rotating it. This is an important distinction, because you're effectively giving KMS permission to connect to your bespoke CloudHSM cluster, use a single key, and act as a crypto user to do encryption and decryption events. You can also have KMS key lifecycle management APIs affect the key in your CloudHSM cluster — for example, disabling or deleting that key. And you get an independent audit log from CloudHSM about all access to keys inside that cluster. That audit log is not tied to an AWS service per se; it's another application log, like you have inside EC2. You can certainly direct those logs to CloudWatch to make them easier to work with, but you're not relying on AWS to generate logs of key access through something like CloudTrail — so you can compare what you saw in KMS and the AWS service, in terms of access to keys and therefore access to data, and verify it against CloudHSM. That can be a really interesting story for your auditors.

So if you think about the various ways you can set up keys and manage them: there's what we call native KMS — you just call the CreateKey API, we generate keys for you, no humans have access to those keys, and you control all the access. You can also import key material — the BYOK feature we launched a couple of years ago. Or you can have your keys stored inside your very own CloudHSM cluster. How do you know which of these is the right choice? We've got a blog post, published Monday night, that might give you a sense for how to look at this. This might be the right conversation to have with your IT security team, as well as your auditors and regulators — or, if you're a service provider and you do business with a large bank or a government or a large manufacturing firm, ask them what makes more sense to them. You can mix and match keys stored in different ways inside KMS; it all comes down to that key ID, which you refer to in your API calls, and we keep track of where exactly that key material lives so that we can do the cryptographic operations against it.
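Not the console flow shown on the slide, but a hedged sketch of the same setup done programmatically. The cluster ID, certificate file, and password are placeholders, and the CloudHSM cluster with its kmsuser must already exist:

```python
import boto3

kms = boto3.client("kms")

# Register an existing CloudHSM cluster as a custom key store
# (cluster ID, trust anchor certificate, and password are placeholders).
store = kms.create_custom_key_store(
    CustomKeyStoreName="example-keystore",
    CloudHsmClusterId="cluster-1a23b4cdefg",
    TrustAnchorCertificate=open("customerCA.crt").read(),
    KeyStorePassword="kmsuser-password",      # the kmsuser credential shared once
)
store_id = store["CustomKeyStoreId"]

# Connecting is asynchronous; the store must reach CONNECTED before keys
# can be created in it.
kms.connect_custom_key_store(CustomKeyStoreId=store_id)

# A CMK whose key material lives in *your* cluster; it still gets an ARN
# and a key policy like any other KMS key.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store_id,
    Description="CMK backed by my CloudHSM cluster",
)
```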
So, to try to summarize, where Peter started — with access control on the resource itself and at the network layer, how you combine those, and how you think about encrypting this information — it really does come down to access control policies. You don't get to worry about physical data security in the cloud; we've taken that away from you, and we provide all sorts of assurances in our SOC reports that should give your own regulators and auditors confidence that we are doing the right thing. But the ability to read and write data in your resources is entirely your responsibility, so you need to invest in understanding the JSON documents that control access. The IAM team — if you've been to some of their sessions this week — is always finding new ways to make it easier to get into JSON policy documents, understand the concepts, and make changes. If you're trying to do this at scale and programmatically, you will appreciate the power of JSON and CloudFormation templates and those sorts of things. It can be confusing at first, but this is where all of your security around access to your data is going to be anchored, because if you want to use encryption, again, all we're asking you to do is manage access control policies; the physical security and all the assurances around the cryptographic operations, we handle on your behalf.

One thing Peter alluded to in his slides that I want to bring up is this idea of a database. We let you control who can spin up a database and who can launch an existing database snapshot, but the data within that database we know nothing about. We can't apply an IAM policy to a Social Security number that exists in a particular field, in a particular table, in your MySQL database. However, you could use encryption to do access control on that: if you did client-side encryption of that Social Security number before writing it into the DynamoDB table or the MySQL database, then you are effectively requiring the right principal to have access to the decryption key to be able to read that Social Security number. It's a bit of a backwards way to think about it — using encryption to provide granular access control on data that AWS knows nothing about, and you don't want us reading individual PII inside your databases — but it's an indirect way to control access, because the master key has to be available to the principal calling Decrypt for that PII to be decrypted.

OK, so we've got some breakout sessions here; for some of these we've removed the dates because they've already happened. It might be useful to take a picture of this, so that next week, when you're back and going through the large library of YouTube presentations, you can type in particular session numbers and learn more about how CloudHSM works — if this custom key store feature is interesting, learn what it takes to set up a CloudHSM cluster — and understand how secrets can be applied not only to securing data at rest but also to transport security using certificates. We invite you to take a look at all the features we provide. Again, at the end of the day, what you need to be responsible for is that resource policy. OK, thank you very much for your time; apologies for Peter and me arriving from the Venetian a little late. Be sure to fill out your session surveys. Peter and I will both be available, if not here at the front then out in the hallway, for any follow-up questions. Thanks again. [Applause]
Info
Channel: Amazon Web Services
Views: 5,982
Keywords: re:Invent 2018, Amazon, AWS re:Invent, Security, Identity, and Compliance, SEC325-R1, AWS Key Management Service
Id: FH6AXreSQWQ
Length: 52min 28sec (3148 seconds)
Published: Thu Nov 29 2018