AWS re:Inforce 2019: Achieving Security Goals with AWS CloudHSM (SDD333)

Welcome, everyone, to our session this afternoon, which is around AWS cryptography: the services that we have, hardware security modules and when they're used, and the choices and options you have as a customer. We're going to look at the range of AWS crypto services that we have, because there may well be a suitable service there for you when you think of hardware security modules, and then we're going to deep dive on the CloudHSM service itself.

I'm Avni, I'm the product manager for CloudHSM. It is my goal that you find an alternative to CloudHSM today; that is my single purpose for this talk. We'll go through several alternatives and walk through different options for your use case, and then, if you must use CloudHSM, we'll go through a whole bunch of power user features to make your lives a little easier.

I'm Squigg, I'm a principal security SA at AWS. I've been here six and a half years, and my focus these days is with our financial services customers, large and small, many of whom have very rigorous regulatory requirements when they're using our platform. So first of all we'll look at the services that we have available for you to use, and here we want to understand the right service for the right job. Then we're going to look at the fundamentals of the CloudHSM service itself, what design goals you might have, and how you can start to optimize your design for cost and efficiency. And then we'll have a look at what we've recently released for the service and what will be coming up on the roadmap. It's primarily focused on the CloudHSM service, but we're going to spend quite a bit of time talking about the other services first of all.

Oh, sorry, a quick shout out first: we just announced root CA support for ACM Private CA. For anyone who has been using CloudHSM because you couldn't do a root CA on Private CA, there's a talk almost right after ours where you can see how to get rid of your HSMs. Okay, so AWS cryptography
services: selecting the right tool for the job that you have in hand. When customers talk and think about crypto, there are a few common use cases: encrypting sensitive data within applications and protecting the keys using a hardware security module; creating a PKI infrastructure for authentication between services that are private (so this isn't about public CAs and TLS for websites, this is about building internal PKIs to protect your own services); the secure storage and retrieval of secrets from applications at boot time, so an application starts up and reads, say, its database credentials; or, separately, having direct access to a FIPS 140-2 Level 3 validated hardware security module that they control themselves.

When customers tell us that they need a CloudHSM, typically what they feel is that they need more control than an AWS managed service gives them. Most of the time customers are happy when they understand that we're using HSMs on their behalf, that we control those HSMs, and that we give customers comfort that we're doing a good job of doing so. Typically customers don't need, or desire, the full terror of running their own hardware security modules; I've built one, so I understand how terrifying it is. So for customers it's about this balance between control on one hand and simplicity on the other hand, and as you're probably used to with AWS, we give you a choice across that spectrum. All AWS crypto services are backed by FIPS-validated hardware security modules, full stop. The difference isn't about security; it's about the control and flexibility that you have, and it's also about cost, because there are very different costs between using an AWS managed service and running your own fleet of hardware security modules. So let's move on now to talk about the different AWS cryptography services that you have available to meet encryption, signing, certificate management, and PKI needs, and how to choose what option works best for
your use case. There are four services in this portfolio, and all of this fits under the banner of AWS Cryptography: the AWS Key Management Service, which gives you an easy way of encrypting data as it flows across different AWS services; AWS Certificate Manager Private Certificate Authority; AWS Secrets Manager, which helps you with secrets management; and then finally the raw control of the AWS CloudHSM service itself.

So the first use case: I want to encrypt sensitive data within my AWS applications. AWS KMS is designed for this task. Virtually every AWS service (I think it's 117 services now) integrates with AWS KMS to help customers encrypt their content by simply clicking a button and selecting a key. It's absolutely awesome. Customers can use KMS to generate strong encryption keys that you can then use in your own applications, for example to encrypt sensitive fields in a payload before you store it in a database. You can generate master keys within the KMS service, you can bring your own keys from your on-premises HSMs and upload them into the service, or if you really do have FIPS 140-2 Level 3 requirements, you can back KMS with a CloudHSM that is fully under your control. So KMS is a key manager: you don't have access to the backend HSMs, the service does so on your behalf. When you're using KMS there is a FIPS 140-2 Level 2 HSM fleet sitting behind that service. You get a really simple API that can generate data keys and encrypt and decrypt things for you, and every time any of that happens you get a record in AWS CloudTrail, so it's a fully auditable service. As you use different AWS services, EC2, RDS, you can check a box and say this is my master key, and KMS uses that master key to protect all the data keys being used to do this encryption on your behalf in the AWS services. You don't have to worry about ciphertext; these services perform the encryption inline between you and the backend storage, so it makes it really simple
for you to consume the data, with the expectation that all of it has been encrypted using really good crypto on your behalf by an AWS service, and your keys are safely stored in a hardware security module. When you bring your own keys to AWS KMS, you take over the redundancy and durability of your keys: if KMS loses its fleet in a region, it will lose your keys, and you have to go through the task of retrieving your keys from your own HSMs and importing them back into the service again. We're giving you balance here. And finally, if you really don't want your keys stored in an AWS managed HSM, you can choose to have your customer master keys stored in a CloudHSM running on AWS that is fully under your control. It's a more expensive option, you have to have those HSMs running, and again you're taking over some of the responsibility for resiliency and durability, but you still get the really simple KMS API and all of the AWS service integration that KMS offers. So you have all of these choices available to you; I would always say look at native KMS first and work backwards from there based on your individual security needs.

When you want to use KMS keys within your own applications, we have the really straightforward AWS Encryption SDK. This makes it easy for you to take the service, and all of its strong security, and do really good crypto in your application. So, for example, you have a web application that receives personally identifiable information and you want to encrypt that information before you store it in a backend database; this makes it really simple to do that. It supports different languages, it has a really easy data format for you to use, and it's open source. At the end of the day all the key material is backed by AWS KMS and a FIPS-validated HSM, so it takes away stuff that is really difficult. You know, it's one of those things: as a security person, don't try to write a new crypto system, don't try to write a new identity system.
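The envelope-encryption pattern being described (generate a per-payload data key, encrypt the payload under it, then wrap the data key under a master key) can be sketched in plain Python. The stream cipher below is a throwaway stand-in for the authenticated encryption the Encryption SDK actually uses; it is purely to show the shape of the pattern, not real crypto:

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256(key || nonce || counter) as keystream.
    # Stand-in for AES-GCM purely for illustration; NOT for production use.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def envelope_encrypt(master_key: bytes, plaintext: bytes) -> dict:
    # 1. Generate a fresh data key (KMS GenerateDataKey plays this role).
    data_key = secrets.token_bytes(32)
    # 2. Encrypt the payload under the data key.
    nonce = secrets.token_bytes(12)
    ciphertext = _keystream_xor(data_key, nonce, plaintext)
    # 3. Wrap the data key under the master key; store the wrapped key
    #    with the ciphertext, then discard the plaintext data key.
    wrap_nonce = secrets.token_bytes(12)
    wrapped_key = _keystream_xor(master_key, wrap_nonce, data_key)
    return {"ciphertext": ciphertext, "nonce": nonce,
            "wrapped_key": wrapped_key, "wrap_nonce": wrap_nonce}

def envelope_decrypt(master_key: bytes, blob: dict) -> bytes:
    # Unwrap the data key first, then decrypt the payload with it.
    data_key = _keystream_xor(master_key, blob["wrap_nonce"], blob["wrapped_key"])
    return _keystream_xor(data_key, blob["nonce"], blob["ciphertext"])
```

The point of the pattern is that only the small wrapped data key ever depends on the master key in the HSM; the bulk encryption happens locally.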
This takes all of that away from you and means you can get on with the task of building really robust applications while we do the hard work of the crypto on your behalf.

Let's look at the next use case: you need a PKI to authenticate internal servers or devices. I remember building my first PKI years ago, on a Windows NT server, and using it to distribute keys, and it was pretty difficult. The AWS cryptography team makes this easier as well. ACM Private CA takes the heavy lifting away from running your own PKI. When we first launched this service it was a subordinate CA: you had to take the certificate and sign it with your own PKI. Thankfully, a few days ago we also released the capability for it to be a full root CA, so you don't need to take your keys and sign them anywhere else. You can use AWS to generate your entire private CA and use it to issue, revoke, and update certificates. It's really easy. Certificate revocation is one of those things that kills everyone; ACM can do this on your behalf and again take away the heavy lifting and the worry, but give you really strong security with the simplicity of an AWS service. Private CA is backed by FIPS 140-2 Level 3 validated HSMs on your behalf, so private keys stay in a Level 3 validated HSM, and you get a really simple API at the front end for issuing and distributing your certificates. This is a fraction of the cost of running your own HSM fleet and then running your own CA on top of that, which was one of the use cases we heard about before this session.

So let's look at the next use case: you need to securely store, retrieve, and use secrets within applications, and AWS Secrets Manager is the right service for this job. Secrets Manager launched just last year, and it's a
very mature service, used by some of the very largest AWS customers in the world. It makes it really simple to do the lifecycle management around secrets, with a standard AWS API: when your application starts, you have an identity and access management role attached to your EC2 instance, you call the Secrets Manager service, retrieve your secrets, and start using them. The magic is that it can automatically rotate secrets for you as well. Secrets are only good as long as they stay secret, so good advice is to rotate your secrets very regularly. I would ask how often you rotate your secrets today; with Secrets Manager you could be rotating your secrets on a daily basis. We've integrated rotation with the Amazon Relational Database Service, RDS, for example, and you can write your own custom rotations using AWS Lambda, so you can integrate this with anything. What the service does is generate the new secret, safely cut over, and discard the old secret. You just get on with building good applications, and all of the rotation happens on your behalf in the background. And every time any of that happens, you've got all that beautiful AWS CloudTrail information about what happened, so you have a full audit view of all of that activity. There are fine-grained policies that control access to the secrets, because access shouldn't be spread too widely, and this is backed by AWS KMS, which encrypts those secrets on your behalf. So again the root of trust of all of this becomes a FIPS 140-2 Level 2 validated hardware security module. It also offers pay-as-you-go pricing, and you don't have to install any software, so it's very cheap to get started and very cheap to run in the long term as well. You can use this from on-premises too; it's just an AWS API, so you can start to look at how you can use it in hybrid deployments as well. Really, really simple.

Now we've got to the meat: I want to have direct access to a FIPS 140-2 Level 3 HSM that I control.
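The generate-new, cut-over, discard-old rotation cycle described here can be sketched with the standard library. The class and method names are illustrative only, not the Secrets Manager or Lambda rotation API:

```python
import secrets
import string

def generate_password(length: int = 32) -> str:
    # Strong random password using the cryptographic `secrets` module.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

class ToyRotator:
    """Mimics the create-new / cut-over / discard-old rotation cycle."""

    def __init__(self) -> None:
        self.current = generate_password()
        self.previous = None

    def rotate(self) -> str:
        # Keep the old value available while clients cut over.
        self.previous = self.current
        self.current = generate_password()
        return self.current

    def finish(self) -> None:
        # Discard the old secret once the cut-over is complete.
        self.previous = None
```

A real rotation Lambda does the same dance against the database and Secrets Manager; the sketch just shows why the old secret must survive until the cut-over completes.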
If I've managed to solve your problem before we get to this slide, congratulations, you can now leave; you don't need a CloudHSM. But if you're still here, maybe you really do need that HSM, and I'm going to pass over to a proper cryptographer to talk about the service. I work with proper cryptographers who make me look smart; those are the cryptographers over there. Can we all wave to the Encryption SDK team here? They do amazing work on your behalf.

All right, so we're going to dive into the basics of CloudHSM and look at a couple of power user tips. I have a problem: I talk very fast when I'm excited about stuff, and I love CloudHSM, so if I'm talking too fast please wave me down and I promise to slow down, for a while at least. So, the fundamentals of CloudHSM. You are using an HSM in the AWS cloud because you need low latency access to a secure root of trust, and you need that root of trust to be under your control. What are the aspects of control? Who can administer and use the HSM: the user management. The algorithms and key lengths: what kind of encryption, what kind of signing, which kinds of algorithms (AES, RSA, elliptic curve), what lengths of keys, 256 bits to 4K. These are general-purpose HSMs that give you far more flexibility compared to the fully managed AWS services that have picked the ideal algorithms and key lengths for their particular application. You have control over your application development; most HSMs give you industry standard SDKs: Java, C, OpenSSL. And you have control over demonstrating compliance with whatever rules you have, whether they're internal regulations or external.

Now, in return for that control you inherit responsibility. For anyone who's had the misfortune of running a fleet of HSMs in-house, you know that high availability is difficult: you have to configure each client to talk to every HSM, you have to monitor that uptime yourself,
you have to load balance yourself, and when you need additional capacity you have to provision, clone, and configure everything yourself. The maintenance is your responsibility: updating firmware if there's a security vulnerability, patching your HSM, changing to new versions of FIPS approved firmware as NIST changes which algorithms are approved and disapproved. All of that is your job. And backing up your data, because if you lose the keys that are on an HSM, the blast radius typically is business-critical; you are storing your roots of trust on these things. You also have to worry about application integration and user management.

So when we built CloudHSM, our goal was to take away a lot of the undifferentiated heavy lifting without impacting the control that actually mattered. We've automated high availability: with CloudHSM you've got zero-config HA. Add an HSM, delete an HSM, and we take care of the reconfiguration of every client. The load balancing is automatic, the failover is automatic; you just have to run the client. Provisioning is one click, either an API call or a button press on the console, and the HSM comes up fully configured, ready to go as part of the cluster, equivalent to any other HSM in the cluster. We transparently handle all maintenance for you, so firmware upgrades are transparently handled, and if HSM hardware were to fail it is transparently replaced; you will see an event in your CloudWatch logs, but other than that your application notices nothing. And we automated backups for you. Backups are taken automatically every 24 hours, and also whenever you add or delete an HSM, so you can be assured that the keys in your HSM are safe. You can also copy backups from one region to another. There is no cost for this API call, so if you have compliance or durability requirements where your keys have to be present in multiple regions, you no longer have to provision and run HSMs in every region; you can just copy the backup over to wherever you need it to be.
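Driving that cross-region backup copy from code is a single control-plane call. A minimal sketch, where the `backup-` ID format and the boto3 `cloudhsmv2` call shape are from memory of the API and should be verified against the current SDK documentation:

```python
def build_copy_backup_request(backup_id: str, destination_region: str) -> dict:
    # Validate the argument shapes before handing them to the service.
    if not backup_id.startswith("backup-"):
        raise ValueError("CloudHSM backup IDs look like 'backup-...'")
    if not destination_region:
        raise ValueError("a destination region is required")
    return {"BackupId": backup_id, "DestinationRegion": destination_region}

# With AWS credentials configured, the actual call would be roughly:
#   import boto3
#   client = boto3.client("cloudhsmv2")
#   client.copy_backup_to_region(**build_copy_backup_request(
#       "backup-abc123", "eu-west-1"))
```

Because this is a control-plane call, it shows up in CloudTrail like any other AWS API call, unlike the data-plane operations discussed later.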
Avni, there's a really nice point there that you made about logging. When you call an AWS service, you get all the logs back as AWS CloudTrail logs, because you're calling our APIs, and that makes it really simple to take those logs, audit them, and use them the way you generally use CloudTrail. But when you're using CloudHSM, you're talking directly to the HSM, and you're responsible for logging. You are, and that's an important distinction, I'm glad you brought it up. HSMs, being under your control, are also your responsibility to monitor. The client software gives you logs, the HSM gives you logs, but none of those are in CloudTrail, because they're in your data plane, which is end-to-end encrypted and invisible to AWS. So you get control over your HSMs, but you take responsibility for your logging; we can't give you those operations in CloudTrail anymore. You're also responsible for user management. You cannot use IAM roles and policies for HSM access. I'll say that again: you cannot use IAM roles or policies for HSM access. This is explicitly outside the scope of AWS. You create your cryptographic officers, you create your cryptographic users, you rotate the passwords, and if you lose a password, you've lost your HSM; there's nothing we can do, we can't see it.

All right, with that said, let's talk about a couple of concepts in CloudHSM. The most basic construct in CloudHSM is a cluster. A cluster is a regional construct; every HSM that you create lives within a cluster and is grouped with the other HSMs in that cluster. HSM instances are FIPS 140-2 Level 3 hardware based instances. They come from a third party manufacturer, and the signed, FIPS-approved firmware also comes from that third party vendor. Each HSM inside each cluster is identified by something called a masking key. The masking key makes each HSM belong to that particular cluster. It's created when you create the first
HSM in a cluster that you've created from scratch, and any synchronization of key objects that we do between those HSMs is protected with that masking key. It's considered to be a cloning operation that stays within the FIPS boundary of the HSM. So you have a cluster, a regional construct, every HSM within it is a clone of every other HSM, and as you create and delete persistent keys, or what we call token keys, in this cluster, they are automatically synchronized between all the HSMs using that masking key, which is built in and cluster-unique. And again, there's an important point of control here: when you bring that HSM up, you can validate its attestations yourself, so the attestation that this is your HSM and not someone else's is under your control. You get a high level of confidence in what you're using; you're really not trusting AWS on anything at this point.

Now, when you add or delete HSMs, the magic that makes all this work is a backup. A backup is a snapshot of your entire HSM: the certificates that demonstrate your ownership, the keys, the users, and any policies you've put in place, including quorum controls and key sharing. When you add an HSM to the cluster, we take a backup of an existing HSM and restore it to the new HSM; that's how it comes up fully configured. Each of the HSMs then communicates the new IP address back to every client, the client knows there's a new HSM in the cluster, it pulls it into the load balancing group, and off you go to the races. The backups are stored in Amazon S3. They're protected with KMS, but more importantly they're protected with a key that comes from the FIPS-validated manufacturer as well as a key that comes from the AWS fleet, and they remain locked down to your credentials, so you can only restore a backup to your cluster, on genuine hardware, in the AWS fleet. And the great thing is, when your backups are stored in Amazon S3, you also get access to the durability that
Amazon S3 provides. S3 is designed for eleven nines of durability, so this is a very safe place for your backups, giving you confidence that over time you still have a backup of this thing. Now, you can create a cluster from scratch, or you can image one cluster from another cluster's backup. When you create a cloned cluster, magic happens: the HSM you create in the clone cluster is cryptographically identical to all the HSMs in the original cluster, which means they share the same masking key, and you can swap keys between these cloned clusters without ever leaving the FIPS boundary. Now, the difference is that the service only recognizes an individual cluster, so we will synchronize the keys within a cluster across its HSMs for you, but we have provided tools for you to manually synchronize across cloned clusters. So if you need the same keys available across different regions, or if you need to shard your clusters because you're running out of key capacity, for any of those reasons you have a safe, inside-the-FIPS-boundary method of synchronizing even non-extractable keys.

All right, there are two ways to use CloudHSM. You can use it in direct transaction mode, which means you're calling to your HSM every time you make a call; for example, if you're running a web server and the web server keys are on your HSM, every time someone connects you're reaching out to the HSM to set up a session. The other way to use it is envelope encryption. This is typically used for databases, for example, where you have one master key on the HSM: you reach out to it for bootstrapping, you decrypt all the client keys, use them locally, and then you're off to the races. So for direct transactions you want to pay a lot of attention to your availability, your latency, and your performance, and we'll talk about some practices to get your performance as high as it can be. With envelope encryption you're really just worried about durability, and we
take care of that for you, so it's a lot simpler to deploy those types of workloads.

All right, now that we're ready to get working with our HSMs, this is the cast of characters you get to deal with. The service API is really the only thing that is familiar AWS territory: this is your create cluster, delete cluster, create HSM, delete HSM, delete backup, copy backup. You can control access to these calls with role-based access control, and you can see them in CloudTrail. Once you've created the HSM and you want to transact with it, you're leaving what we call the management plane, or the control plane, and going into the data plane. That data plane is end-to-end encrypted between your client and your firmware, and we can't see any of it. So we have the CloudHSM management utility, a command line utility that you use for routine management: create user, delete user, set quorum policy, stuff like that. You have the key management utility, which is sort of a utility of convenience for those who have infrequent crypto operations or who don't want to write custom applications; it is scriptable in single command mode, and I'll show you how to use that. We also have SDKs: industry standard PKCS#11, industry standard JCE, and OpenSSL. We also have Windows CNG and KSP for things like IIS integration. And underlying any of the SDK based apps, and the key management utility, is the client daemon, which implements the high availability, the failover, the load balancing, all the magic that makes CloudHSM transparent to you.

I want to spend a minute on the CloudHSM management utility. This one doesn't use the client daemon, so specifically, when you create a user or change a password, the service does not automatically synchronize this across HSMs. It's dangerous for us to do that, because your HSM can get locked out if anything gets out of sync. So when you use the CloudHSM management utility, you want to make sure that the configuration file that starts it
up has the latest snapshot of all the active HSMs in your cluster. You can do that using the configuration utility, which is provided as part of your installation; you just have to make sure you run it before you start the management utility, and don't add or remove HSMs while you're making changes to your users. Generally you will run this utility in what we call global mode: you start it up, it connects to every HSM in your cluster, and whatever command you give it, it runs sequentially on every HSM in your cluster and gives you a trace that tells you whether the operation succeeded or failed. You can also run the management utility in what we call server mode, where you're talking only to one specific HSM and not all of them in sequence. Sometimes you actually do get a cluster that's out of sync: you changed a password, something didn't work, you forgot to update the config utility, something got out of sync for some reason. That's when you go into server mode. There are also power user ways of doing key synchronization across cloned clusters that you will do in server mode with the management utility. But generally speaking, if you're just getting started with the service, stay in global mode; it will keep you out of trouble.

All right, so we'll deep dive. The first section of the deep dive is designing for resilience and cross-region use. Generally speaking, you know that AWS loves the notion of regions. We don't generally want you to have an application in one region reaching out to resources in another region; it messes up the availability story, and we're not huge fans of it. But what you can do is replicate your keys to cloned clusters that are present in multiple regions, and that gives you the benefits of cross-region DR without losing any of the benefits of regional isolation. Indeed, you could be serving different users in different parts of the world, using applications deployed in different AWS regions around the world, to give the best
user experience as well. So what we see a lot of customers doing, for example, is creating intermediate CAs that are identical in every region and then issuing worker certs that are unique to each specific region, or having master encryption keys that are identical worldwide with data keys that are unique to a particular region; there are different use cases for this. The first step is to create the cluster the way you want it, including a bootstrap wrapping key, and we'll walk through this step by step in the next slide, but I want to give you an overview of the process here. Once you have all the users, the policies, and the bootstrap keys that you want, you take a backup, copy it to a new region, and create a clone cluster. So now you have cryptographically identical HSMs in both destinations. Now you have two options to synchronize keys on the fly: you can either extract an object that's encrypted with that masking key we talked about, which is unique to the cluster and never leaves it, or you can wrap and unwrap using a bootstrap wrapping key. Either way, you've got a trusted, safe, cluster-specific way of getting keys across from one cluster to another.

So if you were to do this using wrapping, what you'll see is that in the original cluster we've created a bootstrap wrapping key. This could be an asymmetric key or a symmetric key; generally speaking, customers tend to prefer symmetric keys because it's a more lightweight operation, it's easier to track, and it's more standard, but it can be anything you want. And when you clone the cluster and move it to the other side, you'll notice your bootstrap key is already in the destination cluster. So when you have a new key that you want to move over, you use the key management utility, you call AES wrap on the new key with the bootstrap key, and get it out. It's a wrapped object: you can copy it across S3 buckets, you can really move it over any way you want, email it to
yourself, and then you unwrap it on the other side. Because you have the bootstrap wrapping key there, you can unwrap the key and you're off to the races. If you want to do this using masked objects, it's a little bit different from what you may have normally seen. In this case you rely on the masking key that's already built into your HSM; there's no other bootstrapping key that you require. However, you can only do this across clusters that are cloned; there is no other way to share a masking key. You use the key management utility with a command called extract masked object, it wraps your key with that masking key and gives it out to you, and again you can copy this across S3 buckets, email it to yourself, do whatever you like. Once it's on the other side, as long as you're in a cloned cluster, you will have your key back, and there you go.

So, a quick demo of how you would do this. There are a couple of things I want to point out here. We're running the key management utility in single command mode; this is how you script the command line utilities. If you don't want to get interactive and do Python or expect scripting, you can just run it sequentially in single command mode. I'm finding the keys here, and you can see I've got three keys present; those are the handles. I'm going to extract the masked object for the one with handle 262151 in this case, and I get the masked object out. Then, for simplicity, I'm just going to import it back into the same cluster; typically you would move this to a different place and run the key management utility against the destination cluster you have. At this point we're inserting the masked object, just that same file, and now if you do find key you can see I've got a fourth key with that new handle. Most importantly, this key has the same attributes as the original key: if I shared it with different users, it has the same sharing properties as the original, and attributes like trusted and unwrap-with-template carry over too.
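The extract and insert flow in single command mode is easy to script. A sketch that just builds the argv lists; the utility path, subcommand names, flags, user name, password, and handle value here are illustrative and should be checked against your installed client SDK's documentation before use:

```python
def singlecmd(user: str, password: str, subcommand: list[str]) -> list[str]:
    # Builds an argv list for the key management utility's single command
    # mode, which logs in, runs one command, and exits; handy for scripts.
    return (["/opt/cloudhsm/bin/key_mgmt_util", "singlecmd",
             "loginHSM", "-u", "CU", "-s", user, "-p", password]
            + subcommand)

# Extract the masked object for a key handle on the source cluster.
extract_cmd = singlecmd("crypto_user", "example-password",
                        ["extractMaskedObject", "-o", "masked.key",
                         "-h", "262151"])

# Insert that masked object on a client pointed at the cloned cluster.
insert_cmd = singlecmd("crypto_user", "example-password",
                       ["insertMaskedObject", "-f", "masked.key"])

# Each list would be passed to subprocess.run(...) on the relevant client.
```

Because the masked object is wrapped under the cluster's masking key, running the insert against anything but a cloned cluster will fail, which is the safety property the demo relies on.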
Any permissions or quorums you had associated with that key, including attributes like C_DERIVE, transfer over as-is to the destination. So, a quick comparison: masking versus wrapping. We get this question a lot: which one's right for you? Masking is proprietary to CloudHSM, so you're not going to have it through PKCS#11 or JCE. Generally you want to use masking for the bootstrap keys, the ones that will enable you to do programmatic wraps and unwraps in bulk crypto. Once you have your bootstrap key across with masking, you can use wrapping and unwrapping to do whatever you want in bulk. If you have any questions about this, we'll be by the side of the stage at the end, happy to answer anything at all, and we'll also share our email addresses so you can email us.

All right, so: optimizing performance and cost. Quick show of hands, how many folks here think their HSMs are too slow? If you're using HSMs already, they're slow, right? And how many folks are worried about the network latency and the throughput of their applications? Pretty much everyone. All right, so there's something you need to understand about network HSMs: they are making network calls, and this is fundamentally different from on-premises applications, where very often you have servers with the HSMs wired directly into them. Any transaction you make is going to have a minimum round-trip latency to the HSM, then whatever execution time you have on the HSM, plus, if, say, you're creating a key or deleting a token key, the additional time it takes to safely synchronize that to a minimum number of nodes in the cluster. Now, there are ways around that. The first thing is you want to maximize the utilization of your HSM, which means you can't single-thread all your crypto calls one behind the other. What that's going to do is: you have a network round trip, you do some crypto, you come back, and 90% of the time your
HSM is sitting idle, because the HSMs are fast; it's the network that's causing the delay. So you want to multi-thread your application. We recommend at least 150 to 200 concurrent threads if you're doing bulk crypto like AES, and at least 50 threads if you're doing something more heavyweight like sign or verify; that's when you're really going to start to load your HSM, and you'll realize you're past the point of loading it any more when increasing the number of threads doesn't give you an increase in throughput. So again, the individual call latency isn't going to change, but the throughput will start to go up as you parallelize these transactions. Now, the second part of this, of course, is the operation itself. There are a number of ways to initiate a crypto algorithm in PKCS#11 or in JCE. Very often you're used to finding a key by a label or an ID first, getting the handle, and then using the handle to do the crypto operation. The problem with that is you're making multiple calls to the HSM, and so you're incurring that network latency multiple times. If you're going to be using the same key over and over again, we strongly recommend you find the key once, cache the key handle once, and then use the handle to do all your operations; generally speaking, that's going to give you a 10 to 20x speed-up in throughput just by caching the handle. The other thing I want to cover, and this is more specific to CloudHSM (you generally have not seen this with other forms of HSM): we have two types of keys. We have persistent keys, which you are creating on the HSM as long-lived keys that you actually want to exist on every HSM in the cluster for the full duration of the cluster. These take time to create, because first you generate the key, then you make sure it's replicated across a minimum number of nodes in your cluster, and then the call returns as successful. On the other hand, you can very often get away with session keys. Session keys are keys that you're
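The two optimizations above, caching the key handle and multi-threading, can be sketched with a simulated HSM call standing in for the real network round trip; the PKCS#11-style names here are illustrative, not a real client binding:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

NETWORK_RTT = 0.01  # simulated round trip to a network HSM, in seconds

def hsm_call(op, *args):
    """Stand-in for a real PKCS#11 call; every call pays the round trip."""
    time.sleep(NETWORK_RTT)
    return (op, args)

@lru_cache(maxsize=None)
def find_key_handle(label):
    """Find the key by label once, then reuse the cached handle."""
    return hsm_call("C_FindObjects", label)

def encrypt(label, data):
    handle = find_key_handle(label)  # cached after the first lookup
    return hsm_call("C_Encrypt", handle, data)

# Serial: 50 encrypt operations, one behind the other.
start = time.monotonic()
for i in range(50):
    encrypt("aes-key", b"block-%d" % i)
serial = time.monotonic() - start

# Parallel: the same 50 operations across 50 threads.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=50) as pool:
    list(pool.map(lambda i: encrypt("aes-key", b"block-%d" % i), range(50)))
parallel = time.monotonic() - start
```

The per-call latency is identical in both runs; only the wall-clock throughput changes, which is exactly the effect described above.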
only going to be using in that particular context. If you're doing SSL offload, for example, your session key is always a session key: once the session goes away you don't need it, and you don't need multiple HSMs replicating that key. Very often you'll have applications with millions of keys, right? Millions of data keys, or, you know, hundreds of thousands of key pairs for your users. Each HSM can only store 3,500 keys, so you're not gonna keep these in memory for the entire duration of your application. What you're going to do is create a key pair, for example, when you enroll a user into PKI, then wrap that key out with a persistent key that stays in your cluster, and store it in a database, DynamoDB or whichever one you want. Then, when that particular user needs to authenticate, you'll pull that key in, which means you'll unwrap it, do whatever you need to do in terms of sign or verify, and then you're done with it. Those operations can be done as session keys. That means, A, you're not synchronizing across every single HSM; B, you're not writing audit logs for a key management operation on every HSM; C, you're saving storage space and maximizing capacity; and D, you will get on the order of 25 to 50x faster throughput. So wherever you can use session keys, use session keys: you will have better utilization of your HSMs and you will have faster throughput. The only thing you have to be careful of is managing your retries well. If, for whatever reason, your connection to your HSM snaps in the middle of an operation, the session keys are cleaned up, so if the operation fails you have to watch for it, go back, and retry. Here's a really good segue, again: when you look at the AWS cryptography SDKs, which do things like key caching for you in the client, they can make things really fast; but again, where the back end is KMS, there's another HSM behind it, so you get many of the benefits here of
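A hedged sketch of that session-key pattern with retries; the hsm and session objects below are hypothetical stand-ins for whatever client library you actually use, and the backoff values are our own choices:

```python
import time

class HsmConnectionError(Exception):
    """Raised when the connection to the HSM drops mid-operation."""

def with_retries(fn, attempts=3, backoff=0.1):
    """Retry a session-key operation: if the connection snaps, the session
    keys are cleaned up, so the whole unwrap-and-use sequence must be redone."""
    for attempt in range(attempts):
        try:
            return fn()
        except HsmConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))

def sign_with_session_key(hsm, wrapped_key, payload):
    """Unwrap a stored key as a *session* key, use it, let it vanish."""
    def attempt():
        session = hsm.open_session()
        try:
            handle = session.unwrap(wrapped_key, session_key=True)
            return session.sign(handle, payload)
        finally:
            session.close()  # session keys disappear with the session
    return with_retries(attempt)
```

Because nothing persistent is created, a failed attempt leaves no state behind on any HSM; the retry simply unwraps again and repeats the operation.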
the exceedingly complex HSM world with the simplicity of an AWS service and API when you look at things like our crypto SDKs. Yeah, no, I think that's entirely fair. The other thing that I should call out here is there's no inherent throttling in CloudHSM, because we can't see what calls you're making to your HSM. So as you're looking at your latency and your throughput, as you're doing your load testing, you will have to make sure that you throttle incoming calls to the capacity that you can get off of your cluster, and what capacity you can get depends on the types of operations you're doing. Becky gave a really good talk that explains a lot about performance; if you can hear Becky talk, you really should. So when it comes to throttling, your HSMs will do the best they can and then they will simply block until the previous calls have been processed. As you start to see drops in throughput, or you see increases in latency, you have the choice either to start to throttle incoming calls to your application or to add HSMs to your cluster, right? And so at some point you're gonna run out of all the optimizations that you can do; at that point you're going to have to add HSMs to your cluster in order to get the additional throughput that you need. How many HSMs you need in a cluster depends on a couple of things, starting with what types of operations you're doing: if you're generating RSA keys in bulk, you're going to need many more HSMs; if you're doing bulk AES, which works at wire speed, probably three will be enough for any production cluster. You need at least two HSMs spread across AZs; we call this a multi-AZ cluster, and our SLA does not apply to single-HSM clusters. And there's an interesting thing here, you know:
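Since the service imposes no throttling of its own, one way to protect the cluster is a client-side cap on in-flight calls; a minimal sketch, with the cap value being our own assumption to tune against load-test results:

```python
import threading

class HsmThrottle:
    """Cap concurrent in-flight HSM calls so the cluster isn't overdriven."""

    def __init__(self, max_in_flight=200):
        self._slots = threading.BoundedSemaphore(max_in_flight)

    def call(self, fn, *args, **kwargs):
        # Blocks until a slot frees up, mirroring how the HSM itself would
        # block once its queue of outstanding requests is full.
        with self._slots:
            return fn(*args, **kwargs)

throttle = HsmThrottle(max_in_flight=2)  # tiny cap just for demonstration
```

Every crypto call then goes through `throttle.call(...)`; when measured latency climbs, you lower the cap or add HSMs rather than letting callers pile up.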
we don't have any insight into what the HSM is doing, because it's yours and you have an encrypted connection. So you can't get this automatically in CloudWatch, for example, where normally we would say look at the metrics and make a response. But at your application level you could be sending your own metrics up to CloudWatch, so that you can then hook into the rest of the AWS automation world to say, my HSM is obviously slowing down, create more in my cluster using the AWS APIs. So you can still create that kind of hybrid approach, where you've got full control and full insight into the HSM but use some of the other AWS features to get the benefit of the automation side of it. Now, there's one other thing I want to call out: our HSMs publish health metrics to CloudWatch. If you see an HSM unhealthy, or if you see metrics doing strange things, generally you don't have to respond to it; we watch those metrics for you and we will replace the HSM for you. The one exception is when the HSM stops emitting metrics, because at that point our service doesn't know whether it's the metrics system that's broken or the HSM that's broken, and we will wait until we get explicit unhealthy metrics before we replace the HSM for you. Your application, on the other hand, knows if the HSM is there or the HSM is gone, so that's the one situation where you may want to proactively add an HSM to your cluster; for anything else, you can note the unhealthy metrics but know that we've got it covered. Cost management: this is hands-down my favorite slide. I'll give everyone a minute to take a picture, and then I'm going to walk across the stage for you. All of this is going on YouTube as well; after these sessions you can come back and review, and for the Scottish speaker you may want to slow down the playback speed on YouTube, for example. So, with traditional HSMs we're generally used to provisioning n+1 for peak load and keeping those HSMs alive forever, right? You don't have to do that
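One way to wire up that hybrid approach is sketched below; the metric namespace and the 50 ms threshold are our own assumptions, and the boto3 calls are left commented out since they need real credentials and a real cluster:

```python
def should_add_hsm(p99_latency_ms, latency_slo_ms=50.0):
    """App-side scaling signal: the service can't see our call latency,
    so the application decides when the cluster needs another HSM.
    The 50 ms SLO is an illustrative threshold, not a recommendation."""
    return p99_latency_ms > latency_slo_ms

# Publishing the signal so CloudWatch alarms and automation can react
# (sketch; "MyApp/CloudHSM" and the metric name are our own choices):
#
# import boto3
# cloudwatch = boto3.client("cloudwatch")
# cloudwatch.put_metric_data(
#     Namespace="MyApp/CloudHSM",
#     MetricData=[{"MetricName": "SignLatencyP99",
#                  "Value": p99_latency_ms, "Unit": "Milliseconds"}],
# )
#
# An alarm on that metric can then trigger automation that calls the
# CloudHSMv2 CreateHsm API to add capacity to the cluster.
```

The decision logic stays in your application, while CloudWatch and the AWS APIs supply the alarm-and-automate machinery around it.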
for the new CloudHSM: for one, it's elastic; for two, there are no provisioning fees; and for three, it's zero-config high availability. You can add and delete them as you go, so you can cut your development costs by sixty to seventy-five percent just by deleting HSMs when you're done for the day or when you're not actively testing against the HSMs. When you delete the HSM we take a backup; when you recreate an HSM it spins right up where you left off. We have an IP address flag in the create HSM call where you can specify the exact same IP address that you had before, so you don't even have to re-bootstrap any of your config files. We've generally seen vendors who are building entire services on CloudHSM spend less than four hundred dollars throughout their dev, test and POC; you should not have a high bill for HSMs when you're in the dev-test phase. When you're in production workloads, you can also leverage elasticity. Not everyone has the same peak load 24/7: if you're signing certificates, you may only have to sign at the end of the month or the end of the quarter; if you're doing something like BYOK, you may really only need your HSMs once a year. Create them when you need them, delete them when you're done; you can save a lot of money that way. You could just use CloudWatch to automate this on a schedule, so that as you come into work in the morning your HSMs spin up, ready for you to develop, and it automatically turns off at night, for example, as well. So again, even though this is a very specific service, you still get the rest of AWS available to you to do all the regular automation. Absolutely. The other thing to optimize cost: remember you don't have to run backup HSMs in other regions just for data durability. You can copy your backups over; it's a zero-cost API call, it can be automated, it can be worked into your application, and you don't have to spend money on it. You can maximize utilization by sharing clusters
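A sketch of reusing that IP address flag when recreating an HSM, so existing client configs keep working; the cluster ID, AZ, and address below are placeholders, and the boto3 call is left commented since it needs a real account:

```python
def create_hsm_args(cluster_id, availability_zone, ip_address=None):
    """Build kwargs for the CloudHSMv2 CreateHsm call. Passing the old
    IpAddress brings the new HSM up at the same address, so no config
    files need to be re-bootstrapped."""
    args = {"ClusterId": cluster_id, "AvailabilityZone": availability_zone}
    if ip_address:
        args["IpAddress"] = ip_address
    return args

# import boto3
# hsm = boto3.client("cloudhsmv2")
# hsm.create_hsm(**create_hsm_args("cluster-abc123", "us-east-1a", "10.0.1.9"))
```

Paired with a scheduled delete in the evening, this is the morning half of the create-on-schedule pattern described above.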
across accounts, right? The HSM lives in a VPC, but you can do VPC peering across accounts, and so if you have HSMs that are only used once in a while, create a centralized account, manage it through a centralized security officer, and delegate access to whoever else needs to use it. Use a transit VPC, use any other mechanism to make that available anywhere; it's just an IP address on an ENI at the end of the day. And then, to optimize storage: again, if you're not going to be actively using a key, wrap it out. It takes a few milliseconds typically, a little longer if it's an asymmetric key, to unwrap it back in and use it on the fly, and you'll save a lot of capacity. When you have idle workloads, you can not only delete all the HSMs, you can go forth and just delete the cluster. Even though the cluster doesn't really cost you anything, it does take resources in your account; the backup is still there, and you can create a new cluster from the backup and pick up where you left off. So with that, I'll quickly go through the recent launches. In the last year we've shipped a Windows client, so you can now do IIS server offload on Windows, you can sign anything you want with signtool, and you can use Active Directory to run a PKI using keys you store in CloudHSM. We have JCE samples on GitHub; I know this has been a pain point for many customers, and we're still not happy with the volume of Java samples we have. We're continuing to add to them over time, and we're welcoming contributions from our users, but you can find all the basic usage, all the performance optimization, and most importantly the LoginManager samples for handling network disconnects on the GitHub. Now, as of last year we're fully compliant with the PKCS#11 spec, and the latest client version is 2.0; we strongly recommend you download that if you haven't already, and check our software version history page periodically so that you can get those updates. And then we have backup management. An important thing to cover is, when you rotate user
credentials, make sure you delete all the old backups as well, because otherwise someone could use an old, out-of-date credential by using an older backup to clone a cluster. So you can delete backups, you can copy backups as we looked at, and you have your audit logs in CloudWatch. With that, I'm going to turn it back to Squigg to talk about the feature we're most excited about, which is custom key store. Thank you, everyone. So last year we also announced AWS KMS custom key store, and one of the things we talked about earlier was where customers asked whether KMS was actually an HSM-backed service. What we've given you is the full use of KMS, which means you get all the AWS service integration, you get the use of the AWS Encryption SDK, for example, but with the choice of your own HSM fleet. So, KMS has its own fleet of hardware security modules, always with their Level 2 validation, and some customers in some markets, financial services for example, have a Level 3 validation target they need to meet. Custom key store now allows you to meet a Level 3 validation target: you've still got all the really nice key management features that KMS gives you, but all of the cryptographic operations are performed in your CloudHSM cluster. So again, a really wide variety of choices here at your disposal, hiding the complexity of running an HSM fleet behind the simplicity of KMS sitting in front of it. Here's the comparison across all that; I'm not going to go through this in detail, but again, KMS has concepts like key policies that make it really easy to put good cross-account access control on keys, for example, so that you can specify who can use these keys and what they can use them for, in a way that would be very, very difficult if you're using the raw HSM. There are some differences with CloudHSM: you have to have these HSMs up and running continuously, because KMS may be calling them at any point in time,
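Going back to backup management for a moment, the credential-rotation cleanup described above could be scripted roughly like this; the filter logic is runnable, while the describe/delete calls are left commented since they need a real cluster, and the response field names are taken from the CloudHSMv2 API as an assumption to verify:

```python
from datetime import datetime, timezone

def backups_to_delete(backups, rotated_at):
    """Backups taken before the credential rotation still contain the old
    credentials; cloning a cluster from one would resurrect them, so they
    are the ones to delete."""
    return [b["BackupId"] for b in backups
            if b["CreateTimestamp"] < rotated_at]

# import boto3
# hsm = boto3.client("cloudhsmv2")
# backups = hsm.describe_backups(
#     Filters={"clusterIds": [cluster_id]})["Backups"]
# for backup_id in backups_to_delete(backups, rotated_at):
#     hsm.delete_backup(BackupId=backup_id)
```

Run this as part of the same change that rotates the crypto user passwords, so no stale backup outlives the credentials it contains.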
so these are not clusters that you're going to be able to turn off overnight, because KMS wouldn't work anymore; an EBS volume needs its data key, for example, because you've integrated KMS with the EBS service. So, that's it from us. Now you know the right tool for the job: hopefully the first half of this showed you the services available, and the second half terrified you into going back to the first half again and thinking, I don't need an HSM, I can use one of these managed services to achieve exactly the same goals without the complexity of running my own HSM fleet. But then, finally, when you do need CloudHSM you do need it, and we do have customers who do; hopefully this gave you some more insight into that. So thank you, everyone, and again, if you have any questions we'll be hanging out at the side of the stage here taking questions. Thank you all for coming; here are our email addresses so you can get in touch.
Info
Channel: Amazon Web Services
Views: 5,058
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, AWS re:Inforce, AWS re:Inforce 2019, security, identity, compliance, cloud security, AWS security, cloud security community, learning conference, Detective Controls, Infrastructure Security, Data Protection, Incident Response, Governance, Risk, Compliance, security best practices, Security Deep Dive, AWS re:Inforce 2019 Sessions, Session, SDD333, Stephen Quigg, 300 - Advanced
Id: _gezaWmwzYY
Length: 49min 21sec (2961 seconds)
Published: Wed Jun 26 2019