Advanced Security Best Practices Masterclass

Captions
Hello and welcome to this AWS webinar. My name's Ian Massingham, I'm a Technical Evangelist with Amazon Web Services based in Europe, and I'm going to be your host for this session. This is a Masterclass series webinar, presented in partnership with Intel, and today we're focusing on advanced security best practices on AWS. You may be aware that we have another webinar series called Journey Through the Cloud, where we take a solutions-oriented look at how you can apply a variety of different AWS services to a specific use case or challenge you might be facing. In the Masterclass series we dive deep into one specific AWS service and take a focused look at how you can make use of that service and get the most out of it. Having said that, today's session is not a typical masterclass, because we're going to be covering several different AWS services that have relevance in the area of security control on the AWS platform. We're going to be pressed for time today to cover all of the potential topics relevant to this area, so if you have broad-based questions relating to security best practices on AWS, please submit those via the Q&A panel in the webinar interface, and a member of our solutions architecture team will get back to you over the next few days with answers.

A couple of other things you should be able to see in the webinar interface. Firstly, the materials from today's session are available for download in the Files panel, so you can grab a PDF of the slides; in common with all Masterclass series webinars you'll find that useful, because many of the slides have links on them that will take you to further reading on some of the topics we're covering today. At the end of the session we'll switch the webinar into Q&A mode, and when we do, you'll be given the opportunity to rate today's session. Please give us a rating between five and one, with five being the best, and let us have your feedback; we use it to help us improve our webinars for future audiences. If you want to leave us any qualitative feedback you can do that using the Q&A panel as well; I'm happy to take feedback via that method as well as questions. You can see my Twitter handle on the screen, and at the end of today's session I'll show you a couple more social media accounts you can use to stay up to date with AWS here in the UK and Ireland, and also globally. Please follow us on social media and you'll be able to stay up to date with our education program and other AWS-related news.

OK, let's get on with the masterclass. As I said earlier, this is a technical deep dive that goes beyond the basics in the specific topic area we're covering, intended to educate you on how to get the best from AWS services and show you how things work; in this session there are quite a few demos showing how to get things done with some of the features we'll be talking about. The topic for today is advanced security best practices. If you joined our session a couple of weeks ago, Journey Through the Cloud: Security Best Practices on AWS, you'll know that security is job zero for us. We really regard it as the foundation upon which all AWS services are built, and as a result of that mindset I think we've been able to create a platform that satisfies the most security-sensitive organizations, providing very good features for visibility, auditability and controllability, and allowing you to combine these with one of the main reasons you might use the cloud: agility. We'll talk about how you can achieve that using some technical features later on. For operations on AWS generally, but specifically in the area of security, we feel you can
significantly lower the operational overhead associated with traditional, perhaps privately owned, IT if you deliver services using cloud computing on the AWS cloud instead. Hopefully I'll illustrate that during the course of this session.

So, increasing your security posture in the cloud. I just want to quickly recap this from the last session we did on this topic. Firstly, AWS has a very familiar security approach: if you check out the AWS Security Center at aws.amazon.com/security you can find a lot more information about it, and if you're a customer with a non-disclosure agreement in place with AWS, we can also share additional information about how our security delivery works in terms of people and process, systems, network and physical security, which might help you meet compliance or audit requirements placed upon you or your organization. We also have a very large security team here at AWS, and as I said a second ago, we work to create services that provide you with visibility into usage and resources, which can lead to you developing a really strong security posture on the AWS cloud, arguably better than the security posture you can achieve with traditionally operated private or on-premises IT. We have a broad range of accreditations and certifications: third-party organizations and standards bodies have taken a look at the way AWS does security and certified that our approach, the processes we use and the controls we use, is consistent with a wide variety of industry standards. If you want to learn more about that, aws.amazon.com/compliance is the place to go, and once again we can share much more detail about these compliance reports and the controls we implement if you have a non-disclosure agreement in place with Amazon Web Services.

One other cool feature of the cloud generally, but one that applies very specifically to security, is the idea of a community network effect. We're very lucky at AWS to have over a million customers that have used the AWS cloud in the last 30 days, and amongst those million customers there are some very demanding organizations: organizations like GE, NASA, Shell, Bristol-Myers Squibb and Pinterest, to name just a few customers using AWS extensively. We have a broad partner ecosystem, and by combining the demands from these very demanding customers with the AWS mindset of creating services directly in response to customer demand, we're able to create new and innovative security services that come directly from customers saying things like "AWS, it would be good if you could provide us with a better mechanism for managing encryption keys". We build these services and everybody benefits: when we introduce a new service in response to a very demanding customer, every customer using the platform benefits. We'll talk more about encryption later; it's a really good example of this. But the community network effect in general, which AWS takes advantage of to create services that are highly relevant to customers and help them achieve their objectives, applies in a very specific and interesting way to security services, and I'd ask you to bear that in mind as we go through the webinar. If you think we're missing anything, if there's anything you feel we're not doing that we should be doing that might help you with security management or with maintaining the right security posture for your organization, please let us know, be it via feedback in this session or via the AWS account manager or Solutions Architect that might work with you. Let us know where we can do better, and let us know about the features you'd like to see in the platform, whether those are
security features or other features; we build services in response to customer demand, so let us know what you need from us.

OK, the agenda for today's session. We're going to recap, in a second, the shared security responsibility model AWS provides, and then dive into some best practices for Identity and Access Management with IAM. We'll then talk about defining virtual networks with Amazon VPC and network security for EC2 instances within VPCs, then talk quickly about container and abstracted services, and we'll close by talking about encryption and key management in AWS. There are actually quite a few security topics we won't have time to cover today, so do submit your questions using that Q&A panel, and we'll get back to you with additional information if there are topic areas you don't feel we've covered adequately during this session.

So, the shared security responsibility model. This is a key aspect of operating services on the AWS cloud that you as a customer need to understand very well. What it means is that you let AWS do the heavy lifting: operating the physical security and physical infrastructure, operating the network and virtualization infrastructure, and of course managing the lifecycle of the hardware that provides our services. You're then able to focus on what's most important to your business or organization. But there are responsibilities placed upon you through this model. We have a very clear service demarcation that we operate up to, and we provide you with a set of tools you can use to develop and maintain a security posture using the services we provide, but it's very important that you understand how those tools work, and what your responsibilities are in, for example, installing and configuring operating systems, creating and maintaining a security policy using features like security groups or access control lists, or managing users' credentials in your organization using the AWS service called IAM, Identity and Access Management. We provide technologies you can implement to protect your data in transit and at rest, but how you use those services is up to you; how you configure the platform is up to you. It's very important that you understand the different models I'm going to talk about in a second, so you can be clear where your responsibilities start and our responsibilities end.

The first model is for infrastructure services. These are services such as Amazon EC2, Amazon EBS and Amazon VPC that run on top of the AWS global infrastructure. These services vary in terms of the availability and durability objectives they provide, but they always operate in the specific region where they've been launched, so you can build systems that meet your availability objectives by taking advantage of things like AWS Availability Zones, and also meet your needs in terms of data location or data sovereignty by placing your data and services into specific AWS regions around the world. Building on our secure global infrastructure, you then install and configure operating systems and platforms in the AWS cloud, just as you would on your own premises or in your own data center facilities, whether those are operated by yourself or by a colocation provider. You then install applications on your platform, and ultimately your data resides in, and is managed by, your own applications. For certain compliance requirements you might require an additional layer of protection between services from AWS and your operating systems and platforms, where your applications and data reside, and you can impose additional controls here: things like encryption of data at rest or protection of data in transit, even introducing a layer of opacity between services from AWS and your platform. This layer can include encryption, data integrity authentication, software and data signing, secure time-stamping and other features, so you're able to build that on top of the AWS infrastructure services if you have a requirement to do so.

The second service model is for container services, and the shared responsibility model also applies here. This covers services such as Amazon RDS and EMR. For these services AWS doesn't just manage the underlying infrastructure and foundation services, we also manage all layers up through the container: for example, with RDS for Oracle we manage all layers up to and including the Oracle database platform, providing services such as data backup and recovery, but it's your responsibility to configure and use those tools to meet your business's or organization's continuity and DR policy requirements. For AWS container services you're responsible for your data and for firewall rules controlling access to the container service: in the case of RDS that means security groups, and in the case of EMR it means managing those firewall rules through Amazon EC2 security groups for your EMR instances. You're responsible, of course, for ensuring that those technical features are configured in a way that enables you to meet your security policy objectives.

The last service model is for abstracted services. These are services like Amazon S3 or DynamoDB, where we provide API endpoints that are tightly integrated with our Identity and Access Management service, IAM, which we're going to come to in much more detail in just a moment. With these abstracted services you're responsible for managing your data, including classifying your data assets, and for using IAM tools to apply ACL-type permissions to individual resources at the platform level, or to apply permissions based on user identity at the IAM user or group level, and it's your responsibility to manage those users. For some services, such as Amazon S3, you can also use platform
provided encryption of data at rest, or platform-provided HTTPS encapsulation for your payloads, to encrypt your data in transit to and from the service; we'll talk a little more about encryption at AWS at the very end of this session, as mentioned a few minutes ago when we went through the agenda.

Having said that, the next agenda item is best practices for Identity and Access Management, so let's take a look at that now. There are ten or so different topic areas we're going to cover here.

The first is users. It's very important with IAM to create individual users with unique credentials: you then gain the ability to individually rotate those credentials, and to manage permissions individually for each user. Getting started is very simple. Identify which IAM users you need to create in order to meet those objectives of unique credentials, rotation and individual permissions, then use the console, the CLI or the API to create the users and assign credentials and permissions to them. Once you've done that, individual users will be able to log in using their unique credentials, and will also be able to access APIs, if you provide them with an access key and secret key pair, with the permissions that have been assigned to them.

Moving on to permissions in a little more detail, the key principle here is to work under a principle of least privilege. Granting least privilege has a couple of effects. First, you lessen the chance of individuals making mistakes by overrunning the scope that's been provided to them; they're not able to do that if you give them a least-privilege set of credentials. It's also easier to relax permissions at a later stage than it is to tighten them up, and you're unlikely to break pre-existing functionality, something a user may have developed themselves, by relaxing permissions, compared with the likelihood of that happening if you remove or tighten permissions once they've been issued. You're also able to effect more granular control over APIs and resources if you work on a least-privilege principle. Getting started here is also simple: identify what permissions are required, create passwords or access keys for individual users with those permissions, and avoid wildcards, bearing in mind that there is a default-deny policy with IAM; you need to explicitly allow access to APIs and resources. You can make use of policy templates, and a newer feature we provide called managed policies, which you can attach to users or groups to simplify the process of allocating least-privilege credentials. It's important to know that permissions do not apply to the root user in your AWS account. The root credentials are exceedingly powerful and should be treated with care; there's no mechanism for reducing permissions on the root user, it always has all permissions for all APIs, and therefore its use should be avoided whenever possible.

Groups. This is a mechanism to simplify the management of permissions. Obviously it's much easier to assign the same permissions to multiple users by placing them in a group than it is to manage that individually, and it's simpler to manage the lifecycle of a permission set when it's assigned to a group as well. Reassigning permissions, for example when responsibilities change, is as simple as moving a user between groups, and if you want to change the function of a group of users, you can just change the permission policy assigned to that group. It's also simple to update permissions for multiple users this way. To get started, map permissions to a specific business function, then assign users to that function.
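To make the least-privilege idea above concrete, here's a minimal sketch of what such a policy document looks like, built as a plain Python dictionary and serialized to JSON. The bucket name is hypothetical; you'd attach the resulting JSON to a user or group via the IAM console, CLI or an SDK:

```python
import json

# A least-privilege policy: read-only access to one specific S3 bucket.
# "example-reports-bucket" is a hypothetical name. IAM denies everything
# that is not explicitly allowed, so no wildcard actions are used.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Relaxing this later, as discussed above, just means adding another action or resource to the relevant list, which is far safer than starting from a wildcard and trying to narrow it down.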
It's then simply a matter of managing groups in the Groups section of the IAM console.

You can further restrict privileged access with conditions. This gives you additional granularity when defining permissions; it can be enabled for any AWS service API, and it minimizes the chance of accidentally performing privileged actions. How to get started? Use conditions wherever applicable. There are two different types: common ones that apply to all AWS services, and those that are service specific. It's probably worth illustrating the use of conditions with a few examples, and you can see four different examples on this slide. The first, top left, restricts instance termination, that is, access to the TerminateInstances API action, to users that are authenticated with an MFA device. Top right enables a user to manage access keys for all IAM users only if the user is connecting over SSL, which forces encrypted transport. Bottom left shows source IP restrictions: once again we're working with the TerminateInstances action of the EC2 API, and we're restricting it to a particular /24 network, in this case a private network with RFC 1918 addressing, so this would be either somewhere inside a VPC, or somebody accessing a VPC over a private connection via a VPN gateway or Direct Connect. And bottom right restricts by a specific tag: we're allowing a user to terminate EC2 instances only if the instance is tagged with Environment=Dev. It's a very common use case to want particular users to work only with resources in specific environments: you can tag the resources in those environments and use tag restrictions with conditions to limit what those users are capable of doing.

Auditing. This means enabling AWS CloudTrail to get logs of API calls. AWS CloudTrail is an AWS service for logging and recording API calls and depositing those logs into an S3 bucket that you specify. You can use a variety of visualization tools to visualize CloudTrail logs, or write your own analytics processes if you wish. By enabling AWS CloudTrail you gain visibility into your users' activity; this enables post-event auditing, which may be required for compliance with regulatory standards, or you may want to use it for things like incident remediation. It's very simple: if you visit aws.amazon.com/cloudtrail you'll find the product detail page, with details of the services that are integrated and of the very simple setup process. You create an S3 bucket, enable CloudTrail, and then point an analysis tool at that bucket, or use another service like EMR or Redshift to do the data analysis if you want to build that capability yourself.

Passwords. Configuring a strong password policy is very simple to do, and it helps ensure that your users and your data are protected from simple password breaches, password guessing and similar approaches. You can configure password expiration, forcing users to change passwords at regular intervals; you can configure password strength, length and character mix; and you can place restrictions on password reuse. Again, an important note: the password policy does not apply to the root user in your AWS account, so that's once again a good reason to avoid using the root user unless you have special circumstances that require it.

Credential rotation and deletion. It's normal best practice to delete credentials that aren't in use and rotate credentials that are in use, and you can use credential reports within IAM to identify credentials that should be rotated or deleted, for example credentials that haven't been used for an extended period of time, which makes them good candidates for deletion. You can see this information in the IAM console, as I said, and you can grant IAM users permission to rotate their own credentials.
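The credential report just mentioned is delivered as CSV, and a short script can flag the entries that matter. The rows and the reduced column set below are synthetic, though the column names match those in the real report:

```python
import csv
import io

# Synthetic extract of an IAM credential report (the real report has
# many more columns; these column names match the actual report).
report_csv = (
    "user,access_key_1_active,password_last_used,mfa_active\n"
    "<root_account>,true,2015-06-01T09:00:00+00:00,false\n"
    "alice,true,2015-09-20T11:30:00+00:00,true\n"
    "bob,false,no_information,false\n"
)

def audit(report):
    """Return a list of findings worth acting on."""
    findings = []
    for row in csv.DictReader(io.StringIO(report)):
        if row["user"] == "<root_account>" and row["access_key_1_active"] == "true":
            findings.append("root has an active access key: delete it")
        if row["mfa_active"] == "false":
            findings.append(f"{row['user']}: no MFA device assigned")
    return findings

for finding in audit(report_csv):
    print(finding)
```

In practice you'd fetch the real report with the IAM API or CLI and feed it to a check like this on a schedule, so stale or risky credentials surface automatically.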
It's also worth noting that IAM roles for EC2, which we're going to talk about more extensively later in the session, perform automatic credential rotation for you; if you're accessing AWS API endpoints from EC2 instances, we strongly recommend you use IAM roles, for a number of reasons, one of them being automated credential rotation.

MFA. We've touched on this already in the context of instance termination, using conditions within an IAM policy to restrict particular actions to MFA-authenticated users, but it's simply good practice to use MFA for privileged users, and actually I'd advocate using it for all users in your organization. It's very simple: you can use a no-cost virtual MFA application on most smartphones today, or you can use a hardware MFA device, the Gemalto token for example, and you can use the IAM console to assign the MFA device, which makes the setup process extremely simple. This supplements username and password with a one-time code during authentication at the console, and can optionally be required for API actions, as we showed you earlier. I really would recommend making use of MFA for the users on your account: very simple to set up and very low cost.

Credential sharing. There are circumstances where you may want to provide security credentials to third parties, and you can do this using IAM roles. There's some documentation linked here about the Security Token Service, the AWS API endpoint that deals with credential sharing and the provision of temporary security credentials. This removes the need to share or store long-term security credentials, it's very easy to break a sharing relationship, and it has many use cases: things like cross-account access, intra-account delegation, or federation. You should never share static credentials, your AWS access key and secret access key; always use an assumed role with IAM as the mechanism for granting access to endpoints. You just create a role, specify who you trust, and describe what the role can do; then you share the name of that role externally, and there's an API your federation partners can call to obtain a temporary set of security credentials, which inherit the permissions of the role you've shared with them. It's a very powerful feature and well worth investigating in more detail: check out the URL on the preceding slide, or download the materials to access it, and you'll find a lot more about how to use it and how to set it up.

So, I mentioned a second ago the use of IAM roles with EC2 instances. It's a very powerful feature that makes it very simple to manage access keys on EC2 instances: your access key and secret key are automatically provided, via the instance metadata, to EC2 instances that are assigned a particular role, with support for automated key rotation, enabling you to assign least privilege to the application running on an instance. For example, you might give an instance the capability simply to write data into a specific Amazon S3 bucket, and use that as a simple backup service for data on the instance; that instance would have no other privileges beyond that. But if you did want to add an additional privilege, maybe because you want to start making use of DynamoDB, you can update the policy attached to that instance role, and instances already running with that role will gain the capability to write to the DynamoDB API endpoint, once again with the restrictions you've specified in the policy attached to the role. It's a very powerful and flexible way to give the AWS SDKs or CLI tools access to AWS API endpoints from EC2 instances. To get started, simply create an IAM role in the IAM console, assign a set of permissions to that role, and then launch EC2 instances with that role. If you're not using the SDKs, you can sign requests to AWS services with that role's temporary credentials as well.
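An IAM role has two halves: a trust policy saying who may assume it, and a permissions policy saying what it may do. Here's a minimal sketch of the trust policy an EC2 instance role carries; this is the standard document the IAM console generates when you create a role for EC2, while the permissions policy (such as the S3 write example above) is attached separately:

```python
import json

# Trust policy: allows the EC2 service to assume this role on behalf
# of instances launched with it. What the role can actually do is
# defined by a separate permissions policy attached to the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

For third-party credential sharing, the same shape applies with a different principal: you'd trust an external AWS account ID instead of the EC2 service, and your partner would call STS AssumeRole to obtain the temporary credentials.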
It's a very, very powerful feature, and we'll take a quick look at it later in today's session.

Then, lastly for IAM best practices, and we've trailed this quite a few times already when talking about the others: reducing, or entirely removing, the use of the root user on your AWS account is a very good idea, because it reduces the potential for misuse of credentials. You can do this by going to the Security Credentials page within IAM, deleting the access keys for this user, and then activating an MFA device for console login for this user, as well as ensuring that you've set a strong password; a lengthy password with a variety of different character types in it is, of course, a good idea. This will help you secure the root user, the powerful user that can access all AWS resources in your account. If you have a physical location where you can secure the MFA device you've used, perhaps locking it in a safe, that's an excellent idea as well.

So those are some best practices for the use of IAM; it was something we were asked to cover following the last session we did, Journey Through the Cloud: Security Best Practices. As usual, there's a lot of depth we haven't covered: if you want to learn more about IAM, aws.amazon.com/iam should be your destination, where you'll find the product details as well as a comprehensive set of documentation and many usage examples showing how you can achieve common outcomes with the IAM service.

OK, moving on now from IAM, let's talk about defining virtual networks with Amazon VPC. As you may know, a VPC is a virtual network, your own logically isolated area within the AWS cloud, which you then populate with infrastructure, platform and application services that share common security and interconnection characteristics. You get layer 2 separation, so you can lay out your own subnet topology using your own IP address space, either RFC 1918
address space for non-internet-routable networks, or a publicly accessible address space if you're using internet gateways. You can then connect to your VPC via the internet, via IPsec over the internet, or via Direct Connect; you can combine AWS Direct Connect with IPsec on top of it, and you can combine these to provide multiple connections into one VPC, with customized routing, customized subnet topologies as I said a few minutes ago, and also custom service instances, such as DNS or time servers, that you might place within that VPC. There are many sub-features of this service, including things like elastic network interfaces, subnets, network access control lists, which we're going to come to in a few minutes, route tables, internet gateways, the virtual private gateways I mentioned, and of course Route 53 private hosted zones for in-VPC DNS.

VPC topology. A VPC can span multiple Availability Zones, but each subnet must reside entirely within one Availability Zone, so you would use two subnets in different AZs for each layer of your network in order to support an availability strategy for your particular service or application. You have control of the subnets and their routing tables, and we provide several different template-based layouts you can use to create VPCs, including a VPC creation wizard. You can see here an example of running through the VPC creation wizard, creating a VPC with a single public subnet; you then have options for public and private subnets, public and private with hardware VPN, or a VPC with a private subnet only and hardware VPN. So it's perfectly possible to use a VPC to create an area of the AWS cloud that extends your own corporate network but does not have internet access. That's worth calling out, because it's a commonly used approach for the hybrid architectures that customers might adopt.

You can use AWS CloudFormation, a templating language for defining and creating collections of related AWS resources, to programmatically define the layout of your VPC, source-control that programmatic definition, and use it as a component of an infrastructure-as-code strategy, where you version-control your infrastructure in the same way you might version-control software. That's an increasingly common trend we see amongst customers; it's very common for customers to use AWS CloudFormation to define and lay out VPCs.

You can also peer VPCs. A VPC peering connection is a one-to-one connection between two VPCs that are capable of exchanging traffic with each other once the peering connection has been set up, excluding, of course, cases where you have overlapping address space: it's important to ensure that VPCs you want to peer do not have overlapping address ranges. This can be used, for example, to create a services VPC containing shared service components that you want to share with a number of other VPCs, which might contain individual instances of an application, perhaps supporting different customers in different VPCs. There are also AWS technology partners, such as Cohesive Networks and others, that provide enhanced networking services for VPC interconnect, so if you do have a particularly complex network topology you wish to create, you can also look at partner technology to help you achieve it.

Now, to focus in a little during today's session on network ACLs. These are an optional layer of security that acts as a firewall, which you can use for controlling traffic in and out of the subnets within your VPC. You might set these up with rules similar to your security groups, in order to add an additional layer of security to your VPC. You can find more details on VPC ACLs if you follow the URL at the bottom of this slide.
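Tying the CloudFormation approach mentioned a moment ago to the network ACLs we're about to look at, here's a minimal sketch of a template fragment, expressed as a Python dictionary and serialized to JSON. The logical names ("AppVPC", "PublicSubnet") and CIDR ranges are illustrative; you'd deploy the resulting JSON via the CloudFormation console or CLI:

```python
import json

# Minimal CloudFormation template: a VPC, one subnet, a network ACL,
# and one inbound rule (rule 100: allow HTTP from anywhere).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "PublicSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                "VpcId": {"Ref": "AppVPC"},
                "CidrBlock": "10.0.1.0/24",
            },
        },
        "SubnetAcl": {
            "Type": "AWS::EC2::NetworkAcl",
            "Properties": {"VpcId": {"Ref": "AppVPC"}},
        },
        "AllowHttpIn": {
            "Type": "AWS::EC2::NetworkAclEntry",
            "Properties": {
                "NetworkAclId": {"Ref": "SubnetAcl"},
                "RuleNumber": 100,       # numbered in multiples of 100
                "Protocol": 6,           # TCP
                "RuleAction": "allow",
                "Egress": False,         # inbound rule
                "CidrBlock": "0.0.0.0/0",
                "PortRange": {"From": 80, "To": 80},
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

Because the whole definition is text, it can live in version control alongside your application code, which is exactly the infrastructure-as-code practice described above.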
The basics are that an ACL is a numbered list of rules that we evaluate in order, starting with the lowest numbered, to determine whether or not traffic is allowed in or out of any subnet associated with that particular ACL. You can have up to 32,766 rules, and we would suggest that you number rules in multiples of 100 to give you the ability to insert rules later on if you need to do so. Network ACLs have separate inbound and outbound rules, and each rule can either allow or deny traffic. VPCs come with a modifiable default network ACL which allows all inbound and outbound traffic; this can obviously be changed, and you can create custom ACLs, with each custom network ACL starting out closed, permitting no traffic until you add a rule. Each subnet has to be associated with an ACL, and ACLs are stateless, so responses to allowed inbound traffic are subject to the rules for outbound traffic, and vice versa. This is the default network ACL that will be applied to subnets that you create where you don't specifically associate them with an ACL: you can see there's a default allow here to allow all traffic, followed by a default deny, but the default allow will be matched, and that means that the default is to allow all traffic. There are a couple of different approaches that you can use. The first is whitelisting: this would be removing the default allow and replacing it with a series of explicit allow statements. Here we are, for example, allowing inbound HTTP and HTTPS; we're whitelisting SSH traffic from a specific network's public IP address range via the internet gateway; we're whitelisting RDP traffic; and we're allowing return traffic from the internet for requests that originate in the subnet by working with the ephemeral port range that you can see in the fifth row of this table, 49152 through to 65535. The alternative approach that you have, of course, is blacklisting.
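The evaluation order just described — lowest-numbered rule first, first match wins, implicit deny if nothing matches — can be sketched in a few lines of Python. This is a toy model for illustration only; it is not an AWS API, and the rule set shown is invented.

```python
def evaluate_nacl(rules, protocol, port):
    """Toy network ACL evaluation: rules are checked in ascending
    rule-number order, the first matching rule decides, and traffic
    that matches no rule is implicitly denied."""
    for number in sorted(rules):
        rule = rules[number]
        if rule["protocol"] in (protocol, "all") and port in rule["ports"]:
            return rule["action"]
    return "deny"  # the implicit deny at the end of every ACL

# A whitelist-style inbound rule set: HTTP, HTTPS and the ephemeral
# return-traffic range are allowed; everything else falls through.
inbound = {
    100: {"protocol": "tcp", "ports": [80], "action": "allow"},
    200: {"protocol": "tcp", "ports": [443], "action": "allow"},
    300: {"protocol": "tcp", "ports": range(49152, 65536), "action": "allow"},
}
print(evaluate_nacl(inbound, "tcp", 443))   # allow
print(evaluate_nacl(inbound, "tcp", 3306))  # deny
```

Because rules are evaluated in number order, a low-numbered deny rule matches before any later allow rule is reached — which is exactly how the blacklisting approach works.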
Blacklisting could be used, for example, for targeted denial of traffic for specific protocols, or targeted denial of traffic originating in specific CIDR ranges. We have a very simple deny here: maybe there's a scenario where an SSH vulnerability has been notified to us and we want to block SSH on all of our instances in this subnet until we're able to patch them, so we're applying a deny rule for all SSH traffic regardless of source. It's the first rule in this network ACL, so we're effectively blacklisting all SSH traffic into our subnet and it will be blocked; we're allowing everything else, and then denying everything as the final rule. So that's how you can effect blacklisting with network ACLs. A reasonably new feature in the VPC, much requested by customers, is VPC flow logs, and the idea here is to provide better support for a really important aspect of network monitoring: customers wanting to debug traffic flows within their VPCs. You can enable this for a particular VPC, subnet, or elastic network interface (ENI); once you've done so, relevant network traffic will be logged into CloudWatch Logs for storage and analysis, either by your applications or by third-party tools. You can create alarms which will fire when certain types of traffic are detected, and you can also create metrics that might help you identify trends and patterns. The information captured here includes information about allowed and denied traffic based on security group and network ACL rules; it also includes source and destination IP addresses, ports, the IANA protocol number, packet and byte counts, the time window during which the flow was observed, and the action (accept or reject) taken by the security group or network ACL rules that apply to this traffic. Just one illustration there: we have a partner solution from an AWS management partner, Sumo Logic, who were very quick to provide a beta version of a real-time visualization tool for VPC flow logs just a couple of days after this feature was announced back in June; if you want more on that, check them out at sumologic.com.
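Each flow log record is a single space-separated line carrying the fields just listed (addresses, ports, the IANA protocol number, packet and byte counts, the observation window, and the action). A small Python sketch of parsing one record, assuming the default record format; the sample values below are invented:

```python
FLOW_LOG_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line):
    """Split one default-format VPC flow log record into a dict,
    converting the numeric fields to int."""
    record = dict(zip(FLOW_LOG_FIELDS, line.split()))
    for field in ("srcport", "dstport", "protocol",
                  "packets", "bytes", "start", "end"):
        record[field] = int(record[field])
    return record

sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_log(sample)
print(rec["action"], rec["dstport"])  # ACCEPT 22 -- protocol number 6 is TCP
```

A filter like this over the records delivered to CloudWatch Logs is the kind of building block the third-party visualization tools are built on.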
You can find Jeff Barr's original blog post about VPC flow logs if you check out the URL at the bottom of this slide. So, I've talked a little bit about VPC; the next thing I'm going to do is show you a demo of VPC creation using the VPC wizard in the AWS console. So here we are in the AWS console; we're operating in the Oregon region, as you can see, and we're going to work with VPC here, which is down in the networking section of the console. We're going to use the VPC sub-console to create our new VPC, and this might be the first step in a workflow if you were going to deploy a new application into AWS that was going to run on a collection of EC2 instances: a logical first step would be to create a new VPC to deploy those instances into. So I'm going to start the VPC wizard, and I'm going to create a VPC with public and private subnets; the wizard is going to create that in a single availability zone for me, and I'm going to show you how to add additional subnets that'll be in a second AZ for resilience purposes. I'm going to name our VPC game front-end, because this will be consistent with the naming that we're going to use for the rest of the demo, and we're going to drop our two initial subnets into availability zone us-west-2a. You can see that we've got addressing which is populated for us in the wizard; we could obviously customize this if we wanted to, and we're asked to specify details of a NAT instance that's going to provide NAT connectivity from our private subnet out to the internet. I'll just click create VPC at this point, without changing anything else, and you'll see that the resources for the VPC are created for us. Now that our VPC has been created we can add additional resources to it; I just click OK here, and you'll be taken to a list of all the VPCs available in our account. You'll see we've now got a VPC called game front-end, and now we can filter here, and that'll restrict all of the resources
that are available and visible through the console to just the scope of the VPC that we've selected in the filter. We're going to add an additional subnet here, an additional public subnet for our VPC: we'll call it public subnet two, put it into a different availability zone, and select a different CIDR block for this subnet, a /24. We'll create that subnet, and the last thing that we need to do with the subnet that we've just created is to associate it with a routing table, the same routing table, actually, as the first public subnet that was created, and you can see that that is the routing table whose ID ends in 54bd. It's actually currently associated with the private routing table, so we'll edit that route table association and change it to the 54bd route table, and you can see that that will associate this particular public subnet with a routing table that has the internet gateway for this particular VPC as its default route; in other words, we've converted that subnet into a public subnet with public routing. You can see now that we've got three subnets in our VPC, two public and one private, and really, setting up a VPC via the wizard is that simple; there's not much more to show. Obviously you can then go in and change other settings associated with this newly created VPC, including features like network ACLs; we've got a standard network ACL here, which is the default you'll recall seeing on that prior slide. That concludes that quick demo; let's jump back into the slides. OK, we're going to move on now and take a look at networking and security for Amazon EC2 instances. As you'll know, EC2 instances are the virtual machines that you can run inside the AWS cloud: you install an operating system of your choice on those machines and run your applications and workloads on them. The principal control that's available to you to secure EC2 instances is something called Amazon EC2 security groups, and the
rules of a security group control the inbound traffic that's allowed to reach the instances that are associated with that security group, and of course the outbound traffic that's allowed to leave them (excluding instances that are running in the EC2-Classic network configuration; VPCs offer both inbound and outbound control). By default, security groups allow all outbound traffic, so that's something that you need to change consciously if you want to restrict outbound traffic flow. You can add and remove rules at any time, and when you do so your changes are automatically applied to all the instances associated with that security group after a short period; it doesn't take long for rule updates to be reflected in the control of traffic within your VPC. You can also copy rules from an existing security group to a new security group, and, as I said earlier, you can't change outbound rules for EC2-Classic. OK, so how do they work? Well, as I said, they control traffic. Here's a hypothetical deployment model: we've got two public subnets in our VPC here, with instances in both of them, in availability zones one and two, so this could be the VPC that we just created a second ago, actually. We've got two groups of instances here. We've got some game servers that we've put in a security group called game servers, and you can see the ports we're allowing on here; they're associated with running Unreal Engine and with registering those Unreal Engine game servers with Steam, with the Steam server browser. And then at the back end, if you like, we've got another security group called API servers, and this is for a web services API that we're going to use for some aspect of maybe state or matchmaking in our game, so it's a secondary API tier that we're running. You can see here that we have a reference in that second security group, for API servers, that allows traffic from EC2 instances in the game server security group, and only
that traffic, to access HTTP on that secondary tier. It's just to illustrate that security groups are self-referential: you can reference other security groups in them. This provides a dynamic way to control security policy: if we add or remove game servers dynamically, we don't have to update the security group on our API server tier; it will automatically provide the policy that allows those game servers to communicate with it, regardless of how many or how few game servers are active at any particular point in time. So it's a dynamic mechanism for controlling traffic. The next thing I want to do is show you a quick demo of creating and working with security groups, so we'll jump back into the console now and show you that. Back in the AWS console, here in the VPC dashboard, looking at the VPC that we created earlier, I'm going to just show you the process for creating security groups. I drop into the security groups console, create security group, and I'm asked to name my group; I'll call this one game servers, as I said a second ago. The description is required, game servers again, and it's going to be in the VPC we talked about earlier, game front-end. I'll create that, and once I've created it I'm then asked to add rules, inbound rules and outbound rules; as we said, the default is to allow all outbound, and there are no inbound rules, so it's not permissive at the moment. We're going to create some rules to allow traffic: a custom UDP rule for ports 7777 and 7778, source anywhere, 0.0.0.0/0; then another custom UDP rule, this time taking traffic on 27015, and again the source is going to be anywhere for this. That's our first security group; we can save that, and create our second security group for the API tier, same VPC once again, and here we're going to create a reference to the game servers security group that we already created: add an inbound rule, and we want to allow HTTP traffic where the source is our game servers, and it's that simple.
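That self-referential behaviour can be illustrated with a toy model in Python. Whether traffic is admitted depends on the source instance's group membership rather than on a fixed address list, which is why adding or removing game servers requires no rule changes. All names here are invented for illustration; this is not the EC2 API.

```python
# Toy model: a rule's source is either a CIDR string or the name of
# another security group; a group reference matches on membership.
security_groups = {
    "game-servers": [{"protocol": "udp", "port": 7777, "source": "0.0.0.0/0"}],
    "api-servers":  [{"protocol": "tcp", "port": 80,   "source": "game-servers"}],
}
instances = {"gs-1": "game-servers", "gs-2": "game-servers", "outsider": None}

def allowed(src_instance, dest_group, protocol, port):
    for rule in security_groups[dest_group]:
        if rule["protocol"] != protocol or rule["port"] != port:
            continue
        if rule["source"] == "0.0.0.0/0":
            return True
        if instances.get(src_instance) == rule["source"]:
            return True  # matched via the security-group reference
    return False

print(allowed("gs-1", "api-servers", "tcp", 80))      # True
print(allowed("outsider", "api-servers", "tcp", 80))  # False
```

Launching a new instance into the game-servers group makes it immediately allowed through, with no edit to the api-servers rules — the dynamic behaviour described above.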
We could then launch EC2 instances with these two security groups assigned to them, and that would permit traffic in the way I've described; so you can see how simple it is. The options I've just shown you through the console are of course also available via the AWS CLI, and I just want to show you a few examples. Here's an aws ec2 create-security-group command: we're adding the group name, the description, and the VPC ID that we want to create the group in; this creates a skeleton group that you can then add rules to, and you can find more details on this particular command at the URL at the bottom of the slide. You can then authorize security group ingress, or indeed egress, on a pre-created security group: I'm taking the group ID that was returned to me on the prior slide and using that to authorize UDP traffic on port 27016 from any source address. I can then use describe-security-groups with a filter option, giving a name and a value, where the key name is group-name and the value in this case would be game servers; the output of that is returned back to me in text, and you can see my additional security group rule has been created there, 27016 UDP. So that just describes how to work with security groups using the CLI. We'll move on now quickly and talk about working with container and abstracted services. These are services like Amazon RDS or Amazon EMR, where we're providing managed services to you, essentially, that are built using EC2 as a substrate, as a low-level component. Just to give you a flavor of how this works, for example with RDS: entering into the RDS console, you're asked to create an RDS security group, and this is a reference, actually, to an EC2 security group which is assigned to servers, to EC2 instances, that you wish to have permission to access your RDS instance. So in this case I would create an RDS security group for my MySQL master and slave multi-AZ-enabled RDS instances, referencing back to my API
server security group, and that would permit traffic from that tier into my MySQL database, routing via the routing table that you can see represented here in the middle of these four subnets. So it's very, very simple to permit access to container services, as long as you're familiar with EC2 security groups, of course. There's another component of access to container services, and this is secrets, so things like database usernames and passwords; obviously this is not managed by security groups, and it's something we need an alternative solution for within the operating system environment. There's actually a very good recent blog post on the AWS security blog which talks about whitelisting for access to sensitive S3 buckets and how you might use this as a mechanism for storing credentials. I'm not going to expand on this here, other than to say that if you check out the AWS security blog, for which I'll show you a reference at the end of today's session, there's an excellent new post there that explains how you can use IAM instance roles or STS assumed roles to provide whitelisting capabilities for access to credentials that you might store in a bucket, and that's how we'd recommend that you get credentials onto instances where you require them. The other type of services is abstracted services, services like Amazon S3 and DynamoDB, and in this particular case it's all about IAM roles: it's about using IAM roles to pass access credentials to an instance, to grant permission for instances to work with the AWS API endpoints that abstract these services, services like S3, DynamoDB and other services of that type. I'm just going to show you a very quick demo now of working with IAM roles via the AWS console. So I'm logged back into the AWS console here, in another account which has a very minimal set of resources configured within it, to try and show you a nice clean console; not entirely empty, but almost empty. We jump into the Identity and Access Management sub-console, and creating roles for EC2
instances is very simple. We're going to go into the roles section of the dashboard and create a new role here; we'll call this my game server role, and we want this to be an Amazon EC2 service role. It's an EC2 instance role, and it's going to call AWS services on my behalf. You can see here the number of managed policies that are available for us to attach; we're going to grab an S3 policy here. Say you wanted a role that would have read-only permission to work with the S3 API and access resources within our account in that API namespace: well, we can use a managed policy for that, and if I create this role you'll see that I now have my game server role here with a policy attached to it, which I can show. I can then start an EC2 instance within this account, so I'll jump over into the EC2 console, and if I was to launch an EC2 instance within this account you'll see that I have the option, when launching an instance, to attach an IAM role to it. My role is available for me here, and that will provide the credentials associated with that role to instances that are assigned that role, via instance metadata. I'm then able to modify the policy attached to that role, and that will change the permissions associated with that role and modify the permissions associated with, or delegated to, rather, instances that hold that role. That's it, really; that's a very, very quick demo showing you how to get started with IAM roles and EC2 instances. The last section of today's session is about encryption and key management in AWS; after we've covered this, we'll talk about some resources that you can use to learn more about security in the AWS cloud. Before we start with some AWS-specific topics, I just want to talk briefly about encryption and key management functions and how a process like this is typically implemented: generating a symmetric data key, either generated by hardware or by software; and in this use case symmetric keys are preferable to asymmetric keys
because we want encryption of data of arbitrary size, and we also want this to be strong in terms of performance, very fast. So what we do here is use the key, along with an encryption algorithm like AES, to generate ciphertext, the encrypted data. What about the symmetric key that we just used? You can't store that with the encrypted data; it would be a very insecure thing to do. So we need to protect that key somehow, and the best practice here is to encrypt the data key with yet another key, called a key encrypting key. This key can be symmetric or asymmetric; it needs to be derived and stored in a separate system from the one you're processing your data in. After you encrypt the data key with this key encrypting key, you can then store the ciphertext of the encrypted key along with the original ciphertext, the encrypted data. What about protecting that key encrypting key, how do you do that? You can iterate on the process of enveloping that key with as many additional keys as you want, creating a key hierarchy, but at some point you need to be able to access a plaintext key that starts the unwrapping process to derive the final key to decrypt the data, and the location and access controls around that master key should be distinct from the ones used with the original data. Options for implementing this kind of infrastructure within AWS: well, you have a few. The first is DIY key management. This is where you encrypt data client-side and send the ciphertext to an AWS storage service, whether that's Amazon Redshift or DynamoDB or another service, and in this scenario you need key management infrastructure which provides and manages the keys used to encrypt your data. That can run in private infrastructure on your own premises, or of course it can run in private infrastructure inside a VPC, on an EC2 instance. With this implementation, of course, the only place that you can decrypt your data is in your code, using the keys under your control.
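The envelope pattern described above — generate a data key, encrypt the data with it, wrap the data key with a key encrypting key, and store only the wrapped key alongside the ciphertext — can be sketched in Python. Note that the cipher below is a stand-in built from SHA-256 purely so the example is self-contained: a real implementation would use AES, and all names here are invented for illustration.

```python
import os
import hashlib

def keystream_xor(key, data):
    """Stand-in symmetric cipher (a SHA-256 counter-mode keystream
    XORed with the data) -- illustration only, NOT real AES."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[offset:offset + 32], pad))
    return bytes(out)

def envelope_encrypt(kek, plaintext):
    data_key = os.urandom(32)                        # 1. generate a data key
    ciphertext = keystream_xor(data_key, plaintext)  # 2. encrypt the data with it
    wrapped_key = keystream_xor(kek, data_key)       # 3. wrap the data key with the KEK
    return wrapped_key, ciphertext                   # store together; plaintext key is discarded

def envelope_decrypt(kek, wrapped_key, ciphertext):
    data_key = keystream_xor(kek, wrapped_key)       # unwrap the data key first
    return keystream_xor(data_key, ciphertext)

kek = os.urandom(32)  # the key encrypting key, held in a separate system
wrapped, ct = envelope_encrypt(kek, b"player scores")
assert envelope_decrypt(kek, wrapped, ct) == b"player scores"
```

Only the key encrypting key needs guarding; the per-object data keys can be stored, wrapped, right next to the data they protect.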
In order to simplify this slightly, we provide a solution for client-side encryption of data bound for S3: it's called the S3 encryption client, and it's integrated into the AWS SDKs so that you can minimize the number of calls that you need to make when you're encrypting data before making the put call to put that data into S3. The encryption client lets you configure whether you want server-side encryption to happen in addition, after you put the data, using keys that AWS manages, or client-side encryption to happen before you put, using keys that you manage; those are the two options that you have. There's also an option for server-side encryption using customer-provided keys, and in this scenario you provide a key at the same time as you provide the data that you want to put into S3. In this example you can see the key is used at the Amazon S3 web server to encrypt the data, then deleted, and you must provide the same key when downloading the data in order to allow Amazon S3 to decrypt it; so it's an ephemeral key model, where keys are only held temporarily, in memory, for the purposes of encryption and decryption. A new service that we launched just last year is the AWS Key Management Service, and this is a managed service intended to make it easier for you to create, control, rotate and use encryption keys. It's integrated with the AWS SDKs and a variety of AWS services that use the Key Management Service to manage keys that are used for encryption of data at rest. It's also integrated with AWS CloudTrail to provide auditable logs to help with regulatory and compliance purposes, for example recording an audit trail of each key access event that takes place. It's integrated into the IAM console under key management, where you can create and manage keys, and it is further integrated with a variety of services, as I mentioned, where KMS-managed keys can be used for encryption of data at rest. You can see an example with EBS, the
Elastic Block Store block volumes that are attached to EC2 instances; here's another example of integration with Amazon S3, where we're sharing use of a Key Management Service master key; and here is an example of integration with Amazon Redshift, where we're using it to encrypt a database, again using the KMS service. Services integrate with the AWS Key Management Service using a two-tiered key hierarchy with envelope encryption: a unique data key is used to encrypt customer data, and the AWS KMS master keys are then used to encrypt those data keys. This gives you the benefits of envelope encryption that you can see there: you've got a small number of master keys to manage, rather than the potentially millions of data keys that could be in service. The fundamental characteristics of the Key Management Service are intended to provide security for your keys, with steps like never storing your keys in persistent memory on runtime systems, or automatically rotating keys for you; those are the system-side controls. Then there are process-side controls as well, such as separation of duties between systems that use master keys and data keys, or multi-party controls for all maintenance on systems that use your master keys. These controls are documented in public white papers and also in the SOC 1 compliance package, which is available from AWS if you have a non-disclosure agreement with Amazon Web Services that's in force. OK, so that concludes the technical content for today's session; now some resources that you can use to learn more. Of course, the location for everything security-related at AWS is aws.amazon.com/security; it's a great place to go for finding other information about AWS security features and security best practices, and also for finding sessions and other technical content that might help you understand more about how to secure workloads in the AWS cloud. There's also a great deal of technical documentation; we've referred to quite a lot of it during today's
session. Here we're looking at something that compares security groups and network ACLs, a couple of topics that we've covered today, and there's a huge amount of technical documentation about VPC, about KMS, and about other AWS security features like AWS Config and AWS CloudTrail, so check out the technical documentation for real in-depth content about precisely how to use every feature of every AWS service that we've covered today. You'll want to check out some blog posts to do with security on AWS: we have a security specialist blog at blogs.aws.amazon.com/security; it's where I found the post about whitelisting access to S3 buckets for credential storage, and there have been a lot of other posts of note on there recently, so check out that security blog, put it in your RSS reader, and stay up to date with it. There's a collection of security white papers covering security topics in AWS, including the AWS security white paper, which is an excellent place to go for security best practice associated with running services on AWS, as well as detailed papers around at-rest encryption, security best practices, and logging and governance. There are a large number of AWS services that we've covered today, and several others that we have not covered, that are also relevant for secure use cases; of course, every product and service has its own product detail page, where you can find information such as links to technical documentation, so check out the product detail page for each one of these services if you want to learn more about them. For general AWS training and certification information, go to aws.amazon.com/training; you'll find our technical curriculum there, with hands-on self-paced labs, certification tracks, and of course classroom-based training delivered around the world. So that concludes today's session. We're not going to have time for questions today; we packed a lot of content in, and we're slightly over our one-hour time slot, so I'd like to apologize for that. My colleague is going to put the webinar into Q&A mode
now, and of course, if you do have questions, please continue to submit those using the Q&A panel that you can find in the webinar interface; we'd love to get your questions, and we'll get back to you on them in due course. As well as letting us have any questions that might have arisen during today's session, please do give us a rating from 5 to 1. I'd like to apologize for my voice during today's session; I'm not feeling too great at the moment, and I'm sorry if that was apparent during today's webinar. Lastly, please do follow us on social media: you can find me at @IanMmmm, that's Ian M with four M's, and you can find us at @AWS_UKI for news and information here in the European time zone, in the English language; of course the UK and Ireland education program is covered extensively there, as is the AWS Loft that we currently have open in London. If you want to stay up to date with AWS service announcements globally, and news from AWS re:Invent, coming up the 6th to the 9th of October in the very near future, find us at @awscloud, also on Twitter. Lastly, I just want, as always, to thank you for giving up a bit of your time to learn a little bit more about AWS today; we do appreciate that. Please do keep submitting your questions and we'll get back to you on them, and please don't forget to rate today's webinar. Thanks again for joining us. Bye-bye.
Info
Channel: AWS Online Tech Talks
Views: 46,774
Rating: 4.7142859 out of 5
Keywords: Cloud Computing (Industry), Amazon Web Services (Website)
Id: zU1x5SfKEzs
Length: 61min 46sec (3706 seconds)
Published: Tue Sep 22 2015