AWS re:Invent 2019: [REPEAT 1] How to ensure configuration compliance (MGT303-R1)

Video Statistics and Information

Captions
All right, hello everybody, thank you for joining us for our session, How to Ensure Configuration Compliance. My name is MaSonya Scott. I'm a business development manager for two AWS services, AWS Control Tower and AWS Service Catalog, and I also consider myself a cloud operations evangelist. With me today is Sid Gupta, who's the product manager for AWS Config, and we're honored to have Manish Mohite, who's the global head of cloud engineering at Jefferies Group. We're going to give you the best approaches for ensuring configuration compliance. From an agenda perspective, we'll talk about some of the challenges — and you'll probably be familiar with them — we'll give you some best practices, and then Manish will come up and explain how Jefferies Group uses AWS management tools to ensure configuration compliance. Finally we'll wrap up, and if we have some time we'll do a Q&A. We do have some related sessions on governance, so feel free to take a picture of this slide. Some of those sessions will be on YouTube, depending on the session type, but we want you to know there are multiple sessions on governance.

So let's talk about challenges. You have your builders, your developers, your engineers, whatever you may call them. They want to be innovative, have speed and agility. They create a proof of concept, using AWS to provision resources for whatever solution they're building, and now they're ready to move it to the enterprise. They start that process, and all of a sudden either compliance or security stops them: wait, what is this? What are you doing? Have you aligned it to our policies and IT controls? Does it meet all the regulatory requirements we have? How many people have been through something like that? Don't raise your hands, just blink — because compliance might be sitting next to you. All right, so you're familiar with that situation. Like I stated, builders want to move fast, be innovative, get something out there and see the results of what they thought of, while IT or your governance team wants to mitigate risk, protect your brand, and continue to adhere to the governance of the organization. So how do you address both? How do you address compliance now that you're moving from infrastructure racked and stacked in a data center to infrastructure as code? How do you educate your stakeholders — leadership, compliance, security, finance — on what to do? And finally, how do you report to your auditors, who may not be familiar with the cloud, on what you're moving and how you're transitioning? Do we align to software as code, or do we change the dynamic and make additional considerations for the cloud?

From a best-practice approach, one of the things we want you to realize is that with AWS management and governance tools you can have agility and a balance of governance as well. And if you don't remember anything else — I know it's late in the afternoon and you're ready to have some fun — we recommend that you pre-configure compliance into your designs. Begin with the end in mind, and make sure you're meeting the needs of your policies. From a best-practice approach we have four categories.
The other thing you'll hear today is that we're going to feature controls from both a preventive perspective and a detective perspective, across four approaches: establishing your standards, designing your configurations, launching your products, and finally managing the resources you're provisioning.

Let's start with establishing standards. What we recommend is that you identify, in each of your environments, the policies you must adhere to — as you move towards production things get more strict — and understand how you can create templates and designs that meet all your requirements and every business need. Moving on to designing configurations, think about the application stack: at the top you have your code, your configurations, your repositories, and then the resources that code runs on. From a cloud service configuration perspective you'll have the templates, the configurations for security, the metadata (we call them tags), your operational rules, and your detective controls. We recommend that you templatize your provisioning of resources so you have a consistent delivery mechanism and proof of how things are moving into the cloud. We want those templates to be immutable — I love the word immutable, but to keep it simple: unchanging — and we want you to align those configurations to your different environments, as the previous slide showed. We also want you to add detective controls by enabling AWS Config rules and enabling auditing, so you can prove to your auditors and leadership that things are running as you planned and expected.

Now, when we're talking about provisioning or launching products, we have AWS CloudFormation, which is our infrastructure-as-code orchestration service, and on top of AWS CloudFormation we have another service called AWS Service Catalog that lets you provision products in a templated, consistent manner and enables self-service for engineers. AWS Service Catalog acts as a proxy: not only does it do that, it also separates the permissions from your engineers. Your engineers get access to the products you want them to provision, and launch constraints enable the provisioning of those resources — what launch constraints do is let AWS Service Catalog perform the provisioning without giving that direct access to your builders. Now I know some of you are saying, well, MaSonya, we want to be innovative, we want to give our developers freedom — and that's fine. But if you're a platform engineer in here, think about how many times somebody comes to you and asks: can you give me this environment, can you spin up this account, can you do these different things? If you enable self-service and give them permission to do that, then you and your team can go focus on something else — improving what you're doing and architecting for efficiency. From a Service Catalog configuration perspective, there are a couple of areas I want to call out. First, tag options — I call them standardized tags. You associate those tag options with products (and products are nothing but CloudFormation templates in AWS Service Catalog), and any time a product is provisioned, those tags are automatically placed onto the provisioned resource or resources.
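To make that concrete: tag options can be managed through the AWS Service Catalog APIs as well as the console. Here's a minimal boto3 sketch that creates a standardized tag and attaches it to a portfolio — the portfolio ID and the CostCenter key/value are hypothetical placeholders, not values from the talk:

```python
import boto3

sc = boto3.client("servicecatalog")

# Create a standardized tag ("tag option") once, centrally.
tag_option = sc.create_tag_option(Key="CostCenter", Value="1234")  # hypothetical tag
tag_option_id = tag_option["TagOptionDetail"]["Id"]

# Associate it with a portfolio (or an individual product); every product
# provisioned from this portfolio then carries the tag automatically.
sc.associate_tag_option_with_resource(
    ResourceId="port-abcd1234example",  # hypothetical portfolio ID
    TagOptionId=tag_option_id,
)
```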
The other thing we enable is stack sets, where you can provision a product to multiple accounts or multiple regions, depending on your use case. And finally, from a compliance perspective, we know customers have enterprise service management tools or agile development tools that they use and have declared to their compliance auditors as their repositories for certain findings. So we've also enabled two connectors. One is for ServiceNow — we recently launched the fifth iteration of it — and it allows you to provision products through the ServiceNow service catalog, with AWS Service Catalog serving as the source of truth. The other is new; we launched it just last week. It's for Atlassian Jira Service Desk, and it allows you to provision AWS products as a request type in Jira Service Desk. So it gives you the ability to be agile, whether from an enterprise view or a DevOps builder view, and provision resources in a consistent manner. Like I stated, the administrators of Service Catalog — your cloud admins, basically — set up the portfolios, curate them, provide different products within those portfolios for the organization, and then apply the launch constraints and permissions. End users just self-serve, and they now have three options for doing it: through the console, through the command line — about 80% of AWS Service Catalog provisioning happens through the command line — and through additional tooling such as ServiceNow and Atlassian Jira Service Desk.

So what I want to do now — like I told you, we're focusing on controls from a preventive and a detective standpoint. I just talked about CloudFormation and AWS Service Catalog from a preventive point of view. We're going to do a demo, and I'll walk you through that demo for the preventive side, and then I'll turn it over to Sid for detective controls and managing resources. In this quick use case, we'll enable an end user — a developer — to order a web server through a request type in Atlassian Jira Service Desk, and that web server will be provisioned into an AWS account. They go to the Help Center in Atlassian and request the AWS product; they input some summary details that are required as a baseline for a Jira Service Desk request type; then they find which product they want to provision — we'll say a web server. What we expose to end users are the versions of the product, if there are multiple versions, and the parameters for that particular product. So you see the end user has picked their version; now they pick their parameters, leave some defaults, and select the t2.medium as the instance type they want to provision. Notice here we have tag options, so the end user selects their tag option and clicks create, which sends that off as a request. Before it gets fulfilled there's an approval workflow associated through Atlassian — here the administrator just approved it automatically, but it could be another group. Now that product is launching, and once it launches we fill in the details the end user requested, and details start coming back from AWS to this Jira Service Desk instance about what was provisioned. Now I'm showing you that the product was provisioned and it's in the initializing state.
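Behind a connector request like this, it's AWS Service Catalog's provisioning API doing the work. A minimal boto3 sketch of the equivalent self-service call — the product ID, artifact (version) ID, and parameter key are hypothetical placeholders:

```python
import boto3

sc = boto3.client("servicecatalog")

# Launch the "web server" product; the launch constraint's IAM role
# performs the actual CloudFormation provisioning, not the caller.
response = sc.provision_product(
    ProductId="prod-abcd1234example",             # hypothetical product ID
    ProvisioningArtifactId="pa-abcd1234example",  # hypothetical version ID
    ProvisionedProductName="dev-web-server-01",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "t2.medium"},  # assumed parameter name
    ],
)
print(response["RecordDetail"]["Status"])  # e.g. CREATED, then IN_PROGRESS
```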
One thing I also wanted to point out — like I stated about those tag options — is that the provisioning automatically put those tags onto the EC2 instance. How was this done? First, everything was set up in AWS Service Catalog, since it's the source of truth. We put it in a portfolio, which is nothing but an organization of products, including that web server, and there are three different versions; I, the developer, chose the LAMP stack version 3. In the AWS account we also have an integration with Systems Manager, so we attach SSM documents for post-provisioning: you can start, stop, or reboot the EC2 instance, or run a custom document as well. Then we show in AWS Service Catalog that the product is in the provisioning stage. Once it goes through provisioning, you'll see in Jira that it succeeded — Jira already knows it succeeded, but it still cascades some additional information and turns it to the state of available. Then I went to an earlier request to show you that you can do post-provisioning actions such as update, terminate, and run Systems Manager documents; I'm changing an instance from a previous request, lowering it down to a smaller t2-family instance. And what just happened is I got a notification that the other product I requested in Jira Service Desk is now available in AWS as well. The whole goal of that is to show that you can provide consistent delivery of different products, whether at the console, at the command line, or through a connector that we've developed and enabled for customers. So we've talked about three things: establishing the foundation of standards, designing configurations, and how to launch or provision your products. I'm going to turn it over now to Sid, who's going to talk about how to manage resources, with a core focus on detective controls.

Thanks, MaSonya. Hello everyone, my name is Sid and I'm a product manager at AWS. You heard from MaSonya about some of the preventive controls you need to put in place so your resources don't end up misconfigured. Oftentimes, though, you may come across a situation where out-of-band changes are made — someone logged into the console outside of Service Catalog, or made changes through the API or CLI — and now your resources are misconfigured. So as a best-practice recommendation, we recommend not just preventive controls but also detective controls. Detective controls are a set of checks that run post-provisioning of the resource, and their main intent is to find out whether your resources are compliant with best practices and with your internal policies. You may have InfoSec or internal governance teams with policies defined for various services and resources, and you may also have requirements set forth by regulatory regimes like PCI or HIPAA. But in order to have an effective detective control in place, you need to know a few things: what resources currently exist in your account, what their configuration looks like now and at a certain point in time in the past, what relationships exist between your resources, and finally, what is the actual list of resources that are non-compliant with my policies.
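AWS Config's APIs map directly onto those questions. For instance, here's a point-in-time look at a resource's configuration history — the security group ID below is a hypothetical placeholder:

```python
import boto3
from datetime import datetime, timezone

config = boto3.client("config")

# "What did this resource's configuration look like in the past?"
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0abcd1234example",  # hypothetical resource ID
    laterTime=datetime(2019, 12, 1, tzinfo=timezone.utc),
    limit=10,
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```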
In short, you need a real-time configuration auditor in the cloud. The key word here is real time, because as you may know, resources in the cloud are dynamic — with things like auto scaling and Spot Instances, resources come and go within a matter of minutes, and even in that short duration it's important to make sure they don't violate your policies. That's where AWS Config comes in handy. AWS Config is a native, agentless service: it automatically discovers resources in your account, tracks changes to their configuration, and maintains that configuration history for a period of seven years by default. That information is valuable if you're in a heavily regulated industry and want to show your auditors a change log over an extended period of time. And since AWS Config captures the configuration state, you can also compare it against your desired state — policies that you define using Config rules. AWS Config is also integrated with AWS Organizations, so if you're managing a number of accounts spread across various regions, Config can give you an aggregated view of configuration and compliance across your entire organization. And lastly, if you already have investments in ITSM tools like ServiceNow or BMC Remedy or any other tool, you can feed the Config data into that tool and have your change management and incident management workflows kick in.

I'm super excited to talk about some of our recent launches in AWS Config. A couple of months ago we launched automatic remediation for Config rules. Previously, Config as a detective control checked a resource against your policies and alerted you in real time, using Amazon SNS notifications or CloudWatch Events, that the resource had violated a certain rule. Now, with automatic remediation, you can configure an action that Config can take to remediate that resource. For example, if you have a Config rule that looks for wide-open security groups on port 22, you can have a remediation action that goes and fixes that security group's settings. Behind the scenes we use Systems Manager automation documents — that's where you define your remediation action. The action could also be as simple as sending a notification to someone or raising a ticket in an ITSM tool, because a lot of times customers have steps or workflows in place where they don't want to take a remediation action immediately; they want to kick off a workflow instead.
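As a concrete sketch of that port-22 example: the AWS managed rule restricted-ssh (source identifier INCOMING_SSH_DISABLED) flags permissive security groups, and a remediation configuration points it at an AWS-owned Systems Manager automation document. The role ARN is a hypothetical placeholder, and the exact document parameters shown are an assumption about this particular setup:

```python
import boto3

config = boto3.client("config")

# Detective control: managed rule flagging security groups that
# allow unrestricted inbound SSH (port 22).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
    }
)

# Automatic remediation: run an SSM automation document against the
# non-compliant resource ID that the rule reports.
config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "restricted-ssh",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "AWS-DisablePublicAccessForSecurityGroup",
        "Parameters": {
            "GroupId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            "AutomationAssumeRole": {  # hypothetical role the document assumes
                "StaticValue": {"Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]}
            },
        },
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
    }]
)
```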
The other feature we just launched last week is something known as conformance packs. Imagine you have several accounts — maybe 50 accounts spread across three different regions — and a set of policies to apply; say you're subject to PCI, where there are about 50 or 60 different checks you need to run against your resources. Provisioning those rules in every account across all those regions is a lot of heavy lifting. With conformance packs you can package a collection of rules, along with remediation actions, into a single entity known as a conformance pack. You don't have to worry about deploying each individual rule and each individual remediation action; you just deploy the pack as a whole and we take care of the rest, so it's meant to simplify your deployment experience at scale. At the same time, conformance packs simplify your reporting experience, because you can get compliance summaries at the pack level and then, of course, drill down to the resource level for further details. Another important benefit of conformance packs is that when they're deployed through the organization master account, they're immutable, meaning the member accounts — the child accounts in an organization — cannot delete or modify them. This is especially important from a governance perspective, because oftentimes we've heard customers say they enabled a set of baseline checks but the application teams ended up disabling them for some reason. You don't have to worry about that if you're using conformance packs.

The third feature we launched last week is custom configuration items. This is a very interesting feature that enables you to use AWS Config as a single tool to audit the configuration of not just AWS but also non-AWS resources. For example, if you have an Active Directory, which may be on premises, and you want to monitor whether its configuration follows best practices, you can now use Config to check that Active Directory configuration. The way it works is that you pull the data from the Active Directory using a connector we've provided, and then you call our public APIs to feed that data into Config. This opens up a variety of use cases — for example, checking which GitHub repositories are public versus private — so it really opens the door to checking things beyond just AWS resources.

All right, now let me give you a demo of conformance packs. Here on the left nav you can click on Conformance packs — this is on the Config console; the feature is available in our newly redesigned console. Here you see the ability to create or deploy a conformance pack, so let me click on that. Conformance packs use a YAML template under the hood, so folks who are familiar with CloudFormation can use this YAML template, and out of the box we have a few sample templates available: best practices for Amazon DynamoDB, S3, and IAM, and we also have a best-practices pack for PCI DSS. You can take these sample templates, download one, modify it according to your environment, and upload it back to Config. For the sake of this demo I'll modify an S3 best-practices conformance pack. As you can see, this pack has a number of rules within it — this is a Config rule, and the remediation action is within the template itself — this particular template has about six different rules and remediation actions. I'll go ahead and upload this template, give my pack a name, provide any input parameters for the YAML template, hit next, and deploy the pack in my account. Let's wait a few seconds... there you see the best-practices pack for S3: the deployment has completed and it has already detected that it's non-compliant. When I click on it, I can get more details within my pack: the list of all the rules within the pack, the remediation action associated with each, and the compliance status. I can click further to get more details within each rule, see the remediation action associated with the rule, and see the list of resources that were evaluated by the rule.
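The same deployment can be scripted rather than clicked through the console. A minimal sketch with a one-rule inline YAML template — this tiny pack and the delivery bucket name are hypothetical stand-ins, not the actual S3 best-practices sample:

```python
import boto3

config = boto3.client("config")

# A conformance pack is a YAML collection of Config rules (and optional
# remediation actions) deployed and managed as a single entity.
template_body = """
Resources:
  S3BucketPublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
"""

config.put_conformance_pack(
    ConformancePackName="s3-best-practices",
    TemplateBody=template_body,
    DeliveryS3Bucket="my-conformance-pack-artifacts",  # hypothetical artifacts bucket
)
```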
Now let me try to delete this conformance pack. I don't have to worry about deleting each individual rule or remediation action; I just delete the pack, and that takes care of deleting all of its contents. So that was a demo of deploying a conformance pack in a single account. But now imagine you're an organization master user and you want to deploy this pack across your entire organization. Here's a simple CLI command to do that: it's basically calling the put-organization-conformance-pack API, where you specify the S3 template URL, and that's it — it deploys the pack across my entire organization. Currently my organization has two member accounts, so let's quickly log in to one of the member accounts and see if the pack got deployed. Let's refresh the page... there you see I have the organization conformance pack, and it has already detected a non-compliant event. Same as before, you can see the list of rules and the list of remediation actions. Now, like I told you earlier, if you deploy from the organization master, the pack is immutable, so let's try to delete it as a member account. I'll select this pack — I'm logged in as a member account — and go ahead and delete it. As you can see, you get a message saying you don't have permission to delete it. That's a very useful feature from a governance perspective, to make sure your central teams, your InfoSec teams, have baseline policies in place that can't be tampered with by member accounts.

All right, let me give you another demo, this one on custom configuration items. In this demo I'm going to evaluate the configuration of an Active Directory using Config rules. I'll log in to my Active Directory server — this server could reside anywhere; it could be on premises or in any other environment. Here you can see a list of users in my Active Directory, and for this particular user the "password never expires" flag is set to true. Ideally, from a best-practice perspective, you don't want users with that setting; you want your users to change their passwords regularly. So I'll be using Config rules to identify the list of Active Directory users that have this flag set to true. To briefly explain the architecture: I have a Lambda function — an Active Directory Config connector — that polls the Active Directory using LDAP, then turns around and calls our public API to create a custom configuration item in Config. Then I have a Config rule, which uses another Lambda function under the hood, to evaluate that configuration and highlight all the users violating the rule. To briefly show you the Lambda function for the AD connector — this is also available on GitHub, so you can look at the connector in more detail — it essentially relies on the publicly documented userAccountControl codes for Active Directory settings. The "password never expires" setting has a particular code you need to look for, which is what the Config rule evaluates, and you can also see the logic that calls the AWS Config APIs to put that information into Config. The other Lambda function is for the Config rule I've created. It's a very simple rule: it basically marks my resource as non-compliant if the "password never expires" flag is set to true.
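That connector pattern boils down to two steps: translate the external system's state into JSON and call PutResourceConfig, then let a custom rule evaluate it. A minimal sketch under stated assumptions — the custom resource type name and sample user are hypothetical (a custom type generally has to be registered before Config will accept it), while DONT_EXPIRE_PASSWORD (0x10000) is the documented userAccountControl bit being checked:

```python
import json
import boto3

config = boto3.client("config")

DONT_EXPIRE_PASSWORD = 0x10000  # userAccountControl bit: "password never expires"

# State pulled from Active Directory via LDAP (hypothetical sample user;
# 66048 = NORMAL_ACCOUNT (512) + DONT_EXPIRE_PASSWORD (65536)).
ad_user = {"sAMAccountName": "jdoe", "userAccountControl": 66048}

# Feed it into AWS Config as a custom configuration item.
config.put_resource_config(
    ResourceType="MyCompany::ActiveDirectory::User",  # hypothetical custom type
    SchemaVersionId="1.0",
    ResourceId=ad_user["sAMAccountName"],
    ResourceName=ad_user["sAMAccountName"],
    Configuration=json.dumps(ad_user),
)

# The core check the rule's Lambda evaluator would apply:
non_compliant = bool(ad_user["userAccountControl"] & DONT_EXPIRE_PASSWORD)
print("NON_COMPLIANT" if non_compliant else "COMPLIANT")
```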
These Lambda functions are configured to run every minute, so by now that information should already be in Config, and as you can see, the Active Directory evaluation rule has already detected three users that are non-compliant. I can click on it and get the list of those users. That's the end of that demo. A quick recap: we launched these two features last week — conformance packs and support for custom configuration items. These features enable you to do configuration compliance checks at scale across your organization without worrying about them being tampered with or deleted by member accounts, and you can use Config as a single tool to audit the configuration of not just AWS but also non-AWS resources. With that, let me hand it over to Manish, who will talk about how they do configuration compliance at Jefferies.

Thank you, Sid; thank you, MaSonya; thank you, folks, for your time and for allowing me to share. I'm responsible for cloud at Jefferies, and part of my responsibility is to provide governance and agility for the business. Generally those two compete with each other: how do you provide agility and governance at the same time — governance in the context of security, of cost, of performance and reliability, pretty much all the non-functionals? So that's my role there. Jefferies is a diversified financial services company engaged in investment banking, capital markets, asset management, equities, and fixed income. We are global — in the Americas, Europe, and APAC — with about 3,700 employees globally and about $41 billion in total assets, and over the last five years we've run roughly 790 bookrun deals valued at over $167 billion. So, a sizable presence, with many financial services products across fixed income, capital markets, investment banking, and so on.

Let me share our journey first, in terms of what Jefferies has been doing in AWS. Jefferies has a presence in multiple clouds, but from a cloud standpoint there's a little more gravity towards AWS, and we're very opportunistic in the cloud: we look at what can go to the cloud. We started our journey with development and test environments in the cloud — for every production application there are typically four or five non-prod environments: dev, test, QA, performance test, and so on — and it often becomes very opportunistic to say, if I'm not using the performance test environment, I'll shut it down and save money. That's where the journey started and where we were able to start learning the cloud basics. Then we quickly adopted DR to the cloud; we were early adopters of VMC on AWS. It made no sense to us to keep all the DR servers and infrastructure sitting idle, unused for the most part, so VMC on AWS made a lot of sense: we essentially run minimal compute and replicate all our storage into AWS via SRM, and we're meeting aggressive recovery time and recovery point objectives with DR in AWS. Then we started on new applications where the cloud makes sense. For example, we have databases in our on-prem data centers that are constantly growing, so we came up with an architecture to archive databases to the cloud.
We pull data out of the database that isn't needed on a frequent basis, put it on S3, and then have Athena query the data on S3. It's all serverless, so it became a very cost-efficient solution, and at the same time it allowed us to contain the cost of storage on premises. Then, of course, analytics: we have a lot of data — petabytes of data on S3 — so we have a lot of opportunity to run analytics. We're a big consumer of Glue, heavy consumers of Athena, and we've been running products like Databricks for ETL and machine learning models, alongside a lot of the new applications we're building in the analytics and digital space. As we build models to help our customers, we need to interface with the CRM systems, so we're enabling a lot of AI/ML-type capabilities back to the business through the CRM products we use. And as we do all this, we've rolled out many production, mission-critical applications in AWS.

Now, this is a busy slide, but I want to share a lot of different perspectives here. Think about security and the cloud — I'm curious, how many of you are responsible for security in the cloud? Okay. And how many of you are in development? Okay, thank you. I believe you have to be secure in the cloud, no question about it, and there have to be better, simpler ways to enable security in the cloud — you can't compromise security. There's security OF the cloud, and there's security IN the cloud; those are two separate things. For the most part, security of the cloud is taken care of by AWS in this context, but security in the cloud is where the customer has ownership in terms of how you build security. Now, this gets a little tricky, because there are many products and services — 130, 140-plus services out there from AWS — and you can think of them in three different categories. There are infrastructure services — things like EC2 and VPC. There are container services — that's typically how Amazon phrases it — so RDS is a container service, Amazon MQ is a container service, potentially EMR is a container service; that's the second category of services. And then come the abstracted services — things like S3, SQS, SNS. Across these service categories, the security responsibility of the customer versus AWS varies a bit. For abstracted services like S3, AWS secures the service itself and offers IAM all the way up, leaving access to the data with the customer. For infrastructure services like EC2, AWS offers IAM up to the control plane: it lets you define who can launch servers, update servers, shut things down, and so on. So when it comes to abstracted services, there's a lot of ownership on the customer to get it right from a security standpoint.

So let's quickly walk through some of these controls for S3. At the top is the public block: in most cases you don't need S3 buckets or S3 objects to be public, and AWS offers a control that lets you block public access at the S3 bucket level.
From a Jefferies standpoint, we have a requirement that data needs to be private, so we enable the public block for all our S3 buckets. The second control is TLS: you want data in transit to be encrypted at all times, you want to prevent eavesdropping and tampering, and you want perfect forward secrecy — so TLS is mandatory. That's the next control we've implemented for S3. Next comes KMS: our requirement is that all data in the cloud needs to be encrypted in transit and at rest, regardless of whether it's test, dev, QA, or prod. It doesn't matter — all data gets encrypted, and we use KMS keys to encrypt our data in the cloud. What I really like about that is that when you enable a KMS CMK for your data in S3, anonymous access will not work: you have to explicitly grant KMS key usage rights for that data in addition to S3 access. Then, enforce IP restrictions: within the bucket policies for S3 you can put in a condition that says this data can only be accessed from certain IPs. Enterprises have known external-facing IP addresses, so if you put IP address conditions on your bucket policy, you essentially restrict access to requests coming from within your data centers. Think of the large firm that was recently compromised through server-side request forgery, where data was accessed from outside the enterprise network, with a lot of loss of personal information — something like those IP conditions would probably have prevented that. Next, S3 private link: VPC endpoints for S3 have been out there for many years, and for us it's a requirement that all S3 access comes through the private link rather than over the internet. From a networking standpoint this is actually a cost saving as well — if you access S3 over the internet you pay internet egress charges, which are higher than the charges over the private path, possibly over your Direct Connect — and it's much higher performance too. Then the least-permissive model: aligning with core security principles, if you need puts but no gets, or gets but no puts, multipart uploads yes or no, you can express all of those least-permissive permissions, and we do that as part of our Jefferies requirements for S3. From an audit logging standpoint, S3 offers object-level logging and server access logs as well, so anomalies — say a large GET operation — can be detected if object logging is enabled, and things like who created a bucket are captured in the server access logs. That provides the visibility, and it's a requirement at Jefferies to enable that sort of visibility for S3. And the last three sets of controls over there are really about enabling SecOps — the SecOps team — to detect any violations or issues, report on them quickly, and take action.
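Here's a sketch of what a few of those S3 guardrails look like when codified — the bucket name, IP range, and policy details are hypothetical placeholders, and a real policy would be tuned to your VPC endpoints and exact requirements rather than taken as Jefferies' actual policy:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-private-bucket"  # hypothetical bucket name

# Block Public Access: data must be private.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Bucket policy: require TLS, require SSE-KMS on writes, and restrict
# access to a known corporate egress range (hypothetical CIDR).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny any request not made over TLS.
            "Effect": "Deny", "Principal": "*", "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Deny uploads that are not encrypted with a KMS key.
            "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}},
        },
        {   # Deny access from outside the corporate IP range.
            "Effect": "Deny", "Principal": "*", "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        },
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```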
So you can see this is a very broad range of controls just for S3, and it's not everything — it's an evolving set of controls. What was announced, I believe on Tuesday, with Access Analyzer — that's awesome; we'll be able to add that as another one of the controls we're looking for. One thing we also do that isn't on the slide is DLP: we want to make sure we enable DLP for the data that's out there. Some of the things around tagging aren't here either. So this is a very broad set of controls for one service, and there are 130, 140 services like that out there — it becomes a fairly intense effort when you're evaluating a service to decide how we at Jefferies are going to use it.

If I flip this around a bit, we use a model where we always start with the directive controls the enterprise requires. These may come from your CISO: encrypt everything in the cloud, in transit and at rest; implement defense in depth; require MFA for privileged functions — things like that may come from enterprise policies, possibly from your CISO. Those are the directive controls — directional statements, directional policies — and most regulated companies will have them at the enterprise level. Then come the preventive controls. Many of the controls you saw here — block public S3, logging, TLS enforcement, KMS key enforcement — are preventive controls: you can't write data into S3 without using a key. And to MaSonya's point, using Service Catalog is a really awesome way of doing that; it's one of the ways we're doing it, and I have later slides that talk more about it. Then we have the detective controls: any time we see a violation of our policy in some form, we want to detect it quickly, because as Sid pointed out, there's always a chance something happens out of band — a developer, maybe offshore, created a bucket that doesn't conform to your standards — and you want to detect that quickly and take action. And last but not least, audit, which goes back to visibility: what's going on, how do we know about it, what anomalies are happening in the environment. If you take all these controls and put them into those buckets, that's essentially what we're doing at Jefferies to ensure we're secure in the cloud.

This is an example of what we've done building preventive controls in our environment. I gave you the controls for S3, and there are many more — SQS, SNS, Kinesis, EC2, RDS, and so on and on. What we did was build Service Catalog products, align those products to portfolios, build all of that in one central account, and share it across all the member accounts. We have an account strategy in terms of how we build accounts — by line of business, by tenancy, and so on — so we have a clear demarcation of what's where.
We provide the right level of access to the right developer team, so an investment banking developer is working only within the investment banking framework within their account, a fixed income developer is working within the fixed income account, and so on. We built an account structure based on line of business and tenancy, with accounts for dev, test, QA, and prod, and we used the automated AWS Landing Zone as a starting point — we did this before Control Tower even existed — in terms of how we built out the accounts. As part of the products and portfolios we built in Service Catalog, we shared them with pretty much all our accounts, and by sharing those portfolios we maintain the controls, codified in one place, across all the member accounts. What I really, really like about that is the launch constraints. I want to give the developers the ability to do things fast, but I want to make sure they aren't going to mess it up. The launch constraints allow me to give developers the ability to launch products within AWS without having direct privileged access to make API calls to the services. That's really the big win. Today we have a lot of products — EC2, VPC, Kinesis, SQS, SNS, and so on — and developers can just go and request what they want.

The second part of this is the Config rules. This is where all the detective controls come into play: we built Config rules — Lambda functions, basically — in one account, and we run the Config evaluation and analysis across all the accounts. I really like the Config service — how many of you use it? It's an awesome service; you really have to look at it. Just a few weeks back we had an incident where an application stopped working — what changed? We were quickly able to go back to Config, look at all the resources for that application, and confirm nothing had changed from a resource standpoint. Or: you have lots of EC2 instances, you take snapshots, and you want to connect the snapshots back to the instances. With thousands of servers and thousands of snapshots, a good tagging strategy will let you identify that, but if not, the Config service can connect the dots for you. I really like Config; it makes a lot of sense, especially in the IAM policy world, where you have very complex policies set up and want to track what changed and when. I'm also very excited about the conformance packs Sid talked about, because with all those controls we discussed for S3, I don't want to build the checks and balances for every control in every account in every region — I want to package all of that together and run it across all my accounts, across all the regions we operate in. So it's pretty exciting. As part of Config rules we have governance functions that we run, and we log everything into a centralized logging account. From the centralized logging account we look at the aggregated views and reporting to see what has changed in the environment, what the anomalies are, and what we need to take action on.
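The cross-account, cross-region view described here maps to a Config aggregator. A minimal sketch assuming an organization-based aggregator — the aggregator name and role ARN are hypothetical:

```python
import boto3

config = boto3.client("config")

# Aggregate configuration and compliance data from every account and
# region in the organization into one central view.
config.put_configuration_aggregator(
    ConfigurationAggregatorName="org-wide-aggregator",
    OrganizationAggregationSource={
        "RoleArn": "arn:aws:iam::111122223333:role/ConfigAggregatorRole",  # hypothetical
        "AllAwsRegions": True,
    },
)

# Then query aggregated compliance, e.g. non-compliant rules per account.
rules = config.describe_aggregate_compliance_by_config_rules(
    ConfigurationAggregatorName="org-wide-aggregator",
    Filters={"ComplianceType": "NON_COMPLIANT"},
)
for r in rules["AggregateComplianceByConfigRules"]:
    print(r["AccountId"], r["AwsRegion"], r["ConfigRuleName"])
```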
In some cases we take automated remediation actions; in other cases we send a notification back to the teams to take action. The next phase of what we're looking to do is really build out the cost governance aspect. Service Catalog is great — it gives you the preventive controls and the ability for developers to launch products and resources — but we want to build a workflow. We don't want to see a big bill and go, wow, okay, who approved that? So we're building a workflow with the Service Catalog ServiceNow connector: a developer puts in a request, the request goes to their manager — do you approve this, here's what they're asking for, and by the way here's the estimated cost — and then it comes to the cloud team, and the cloud team can look at it, say, yep, makes sense, and it goes through. Very simple — the AWS Service Catalog ServiceNow connector is going to be very helpful here, and we're looking to totally automate that process.

Last but not least, in summary: think of security at scale — security first is our strategy. Think in the context of preventive controls: across all the services you use, what preventive controls do you need to put in place? This is an evolving area; there will be more and more controls coming for all the different services. Detective: out-of-band changes are possible, and they happen all the time — how do we detect them? I think the conformance packs are great, and one of the things Sid talked about was the custom configuration items; I'm quite excited about that too, because we have a DLP system that is not in AWS today — it's external for us — and I want to get the DLP configs into Config so I can track configuration changes on DLP as well. And last but not least, audit: you want to ensure CloudTrail is turned on everywhere, you're doing object logging and server access logging, and you're getting the access logs into your SIEM, whether it's Splunk or Sumo Logic or whatever it is. So that's basically what we've put together. Thank you very much.

Okay, to wrap up: we provided a four-theme best-practice approach, and we talked about preventive and detective controls. The outcomes of ensuring your compliance are operational efficiency, consistent delivery, predictable patterns for your workloads, templated consumption, and finally enforcing and ensuring compliance — and, for the compliance people in here, making you happy. On behalf of Sid, Manish, and myself, we want to leave you with a bit of a call to action: there are a couple of tiny URLs for AWS Service Catalog, AWS Config, and the AWS Answers page on how to ensure compliance. We'd love to hear your feedback and would love the opportunity to work with you to get more compliant workloads into production. Thank you very much — this concludes our session. [Applause]
Info
Channel: AWS Events
Views: 3,321
Keywords: re:Invent 2019, Amazon, AWS re:Invent, MGT303-R1, Management Tools & Governance, Jefferies Group, AWS CloudFormation, AWS Systems Manager, AWS Config
Id: u8u9DXwNoIs
Length: 57min 52sec (3472 seconds)
Published: Sun Dec 08 2019