Cloud Security Fundamentals | Cloud Computing Tutorial | Simplilearn

Captions
When evaluating the security of cloud providers, it is very important to understand the distinction between the security measures that the cloud service provider implements and operates, known as security of the cloud, and the security measures that the customer implements and operates related to the security of customer content and applications that make use of the cloud service provider's services, which is called security in the cloud.

Basically, the AWS shared responsibility model defines which security controls are yours and which are the responsibility of AWS. In other words, you decide the security for your applications that run in the cloud: for example, which ports are open, which IP addresses can access your resources, what patches are applied to the operating systems, whether you have encryption enabled, and so on. AWS guarantees the global security of the AWS cloud: for example, the hardware, the data centers, the networks, etc. This diagram here gives you an overview of who owns what.

There are exceptions to the AWS shared responsibility model. With AWS managed services like RDS, DynamoDB, and Redshift, AWS is responsible for more than just the hardware and the networks. In these cases AWS is responsible for the security configuration, like patching and antivirus, and you're just responsible for account management and user access.

Amazon Web Services takes cloud security and compliance extremely seriously. One of the biggest concerns of consumers is that the cloud is not secure, and it's something I hear time and time again, but in fact that could not be further from the truth. AWS invests huge resources into securing the AWS cloud and making sure that it's compliant with the required assurance programs. AWS cloud compliance allows customers to understand exactly what controls have been put in place by Amazon to maintain cloud security and data protection. However, AWS doesn't take full responsibility: for systems that are built on top of AWS cloud infrastructure, the compliance responsibility belongs to the end user. AWS meets a large number of assurance programs for finance, healthcare, government, and many more, and here's a list of some of the assurance programs AWS is compliant with; you can see there are some big names in there, like HIPAA, FedRAMP, and ISO. But just because AWS has all these compliance assurance programs, it doesn't mean that your applications are compliant. There's still work you'll need to do on your applications to make them fully compliant with these programs.

Let's start with infrastructure as a service. EC2, S3, and VPC are completely under customer control, so you have to perform all the security configuration and management tasks. The virtual instances are completely controlled by you: you have root access, and AWS does not. AWS has no access rights to your instances or guest operating systems, and they cannot SSH or RDP into your servers, so they have no idea what's going on on your instances. You'll find this out if you ever raise a support ticket with AWS: they are unable to log on to your instances to help you with your problems, so you have to do screen sharing. That's a security attribute that AWS provides.

AWS has a storage decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. Customer instances have no access to raw disk devices; instead, you're presented with virtualized disks. AWS's proprietary disk virtualization layer automatically resets every block of storage used by the customer, so that one customer's data is never exposed to another.
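To make the customer side of the shared responsibility model concrete, here is a minimal boto3 sketch (not from the video) of controlling which ports are open and which IP addresses can reach your resources; the VPC ID and the office CIDR below are hypothetical placeholders:

```python
# A sketch of customer-controlled network security under the shared
# responsibility model: which ports are open, and to whom.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group for our web tier (names are illustrative).
sg = ec2.create_security_group(
    GroupName="simplilearn-web-sg",
    Description="Web tier: HTTPS from anywhere, SSH from the office only",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        # Port 443 open to the world for the web application.
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        # Port 22 restricted to a single (hypothetical) office range.
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
    ],
)
```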
In a similar way, memory is also scrubbed by the hypervisor when it's unallocated from a host, and it is unavailable for use again until completely scrubbed.

With the network, AWS protects against denial of service, man-in-the-middle, IP spoofing, port scanning, and packet sniffing. It prevents IP spoofing with AWS-controlled, host-based firewall infrastructure that will not allow an instance to send traffic with a source IP address or MAC address other than its own. Denial of service is mitigated through the use of security groups and access control lists, so you can minimize public entry points and reduce the surface area of your applications. You can protect databases and non-internet-facing resources in private subnets and use bastion servers for SSH and RDP access to the instances hidden in private subnets. You can put your elastic load balancers in security groups with inbound and outbound restrictions. Amazon EC2 provides a firewall solution that by default is configured in deny-all mode, so you have to open up the ports you require for your applications to work. For network security you can use HTTPS instead of HTTP, and you can use a VPN to provide encrypted tunnel access to AWS.

AWS allows you to perform vulnerability scans, but you have to request permission in advance to perform one, and you have to limit it to your own instances. This is something you may see on the exam; it's a popular question: are you able to run vulnerability scans? The answer is yes, but only after you've got permission from Amazon.

People often ask if the Amazon corporate network is segregated from the AWS network, and the answer is yes, using network security. Different instances running on the same physical machine are isolated from each other via the Xen hypervisor, which is what AWS uses. Also, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface, so all packets have to pass through this layer, meaning that any instances running alongside have no more access to that instance than any other host on the internet. Physical RAM is also separated using similar mechanisms.

AWS provides multiple options to secure your user credentials. You have good old usernames and passwords; we have multi-factor authentication; then there are the access keys and the key pairs; and also X.509 certificates, which are a way of making media you've placed online secure by providing a certificate and a private key to a user, so that only the user with the key and the certificate can view that media.

AWS provides the ability to encrypt with AES-256 encryption, so the data on EC2 instances or EBS storage is encrypted. For this to happen with low latency and to be efficient, it's only available on the more powerful instance types. AWS offers SSL termination on load balancers. This means any traffic that passes between the elastic load balancers and the web servers is unencrypted, so the load is taken off the web servers: you can ensure that your data is fully encrypted all the way into Amazon, and then once it's in there it can be unencrypted to speed things up and make your applications work faster.

AWS Direct Connect provides an alternative to using the internet to utilize AWS cloud services. In all the demonstrations so far I've been connecting to AWS via my web browser; however, if you use Direct Connect, data that would have previously been transported over the internet can instead be delivered through a private network connection between AWS and your data center or your corporate network.
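As a small illustration of the EBS encryption option mentioned above, here is a hedged boto3 sketch; the availability zone and volume size are arbitrary examples:

```python
# A sketch of creating an EBS volume with encryption at rest (AES-256)
# enabled, which is the customer's choice under the shared
# responsibility model.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,          # GiB, illustrative
    VolumeType="gp2",
    Encrypted=True,    # data at rest is encrypted with AES-256
)
print(volume["VolumeId"], volume["Encrypted"])
```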
It allows you to access public services such as S3 and private resources such as EC2 running within your VPC, using your private IP space. It also means you can extend the IP address range of your office into your virtual private cloud.

AWS CloudTrail is a web service that records AWS API calls for your account and delivers log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service. The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. As you know, auditing is a hot topic in today's IT world, so enabling CloudTrail to do this for you is very important. You don't need to know much more about CloudTrail other than what it's for and why you would use it.
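If you want to see that API call history programmatically rather than in log files, something like the following boto3 sketch would work; the event name used for filtering is just an example:

```python
# A sketch of pulling recent API-call history from CloudTrail.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "RunInstances"},
    ],
    MaxResults=10,
)

for e in events["Events"]:
    # Who called the API, when, and which call it was; the source IP and
    # request parameters live inside the full CloudTrail event JSON.
    print(e["EventTime"], e.get("Username"), e["EventName"])
```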
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. CloudWatch enables monitoring for EC2 and other Amazon cloud services, so you can get alerts when things go wrong. You can use Amazon CloudWatch to collect and track metrics, giving you system-wide visibility into resource utilization, application performance, and overall operational health, and you can use these insights to react and keep your applications running smoothly.

CloudWatch offers two types of monitoring. There's basic monitoring, which is included free of charge: it polls every five minutes and gives you ten metrics, five gigabytes of data ingestion, and five gigabytes of data storage. Then there's detailed monitoring, which costs more (there's a price per instance per month) but polls every minute. So if you want more detailed monitoring, you can pay for it.

AWS CloudWatch allows you to record metrics for services such as EBS, EC2, Elastic Load Balancing, and Amazon S3, and you can add these metrics to dashboards to give visual or text-based views of what's going on; this is a diagram of the dashboard in Amazon CloudWatch. Metrics are at the hypervisor level, so you can get things like CPU, disk, and network, but you cannot see memory usage. Metrics appear as you add more resources to your AWS account.

You can create events based on your CloudWatch monitoring, for example triggering Lambda functions. Perhaps if an EBS volume fills up, you could trigger an event so that data is removed and archived from the volume, or a new volume is created; there are many things you can do. You can install CloudWatch agents on EC2 instances, and these will send monitoring data about the instance to CloudWatch, so you can monitor things like HTTP response codes in Apache, or count exceptions in application logs. You can set alarms to warn based on resource usage; for example, if CPU utilization is too high, it could send a notification. It can also auto scale: if your CPU is maxed out, you can get another instance launched to take care of some of the load. Or you can wire CloudWatch alarms to EC2 actions to, say, recover an instance or reboot an instance if something happens. You can also use alarms to shut down instances, not just to start them up, so if you have idle instances you can get CloudWatch to shut them down for you.

In this demonstration we're going to take a look at AWS CloudWatch and how we can use it to shut down idle instances. Before I started this demo, I launched an Amazon Linux instance and let it run for ten minutes, just so we'd have some data. So now I'm going to go to Management Tools and click on CloudWatch, and this brings us to the CloudWatch dashboard. What we're going to do is create a new dashboard, so we'll click on Create Dashboard so that we can have a look at the monitoring statistics for our new instance. Let's give it a name; we'll call it Simplilearn, just for a change. So there's our dashboard created, and now we get the option of adding text widgets or metric graphs to our dashboard, and we're going to select metric graphs. We'll click on Configure, and then we're presented with all the CloudWatch metrics available in my AWS account. I've just launched an EC2 instance, so I'm going to click on EC2 metrics; my new server is called Simplilearn CloudWatch Demo, and I want to add CPU utilization, so I click on that and create the widget, and it now appears on our dashboard. We can add more: we click on Add Widget, we'll do another metric graph, go to another per-instance metric, and let's choose network in and network out and create the widget. As you can see, there's not a lot happening here, because I just launched this instance and haven't really done anything on it, but this is how you would create a dashboard; it's quite useful and very easy to read, and you can create a dashboard for each of your instance types.

What we're also going to do is set up an alarm, so let's go to Alarms. We want to create an alarm, so we click on Create Alarm, and we get to choose the metric we want to base the alarm on. Obviously we have an EC2 instance, so we're going to choose that option, and I'm going to base it on the CPU utilization of our Simplilearn CloudWatch Demo instance, so we'll click on that and click Next. Now we get to give our alarm a name; I'm just going to call this simplilearn_alarm for the purposes of this demo, and we can give it a description. Now we get to set the alarm threshold: we're saying that whenever CPU utilization is less than or equal to 50% for one consecutive period (and remember, a period is five minutes in basic monitoring), we want this alarm to fire. Why would you do this? Well, imagine you had a really high-powered server that runs every night to do some highly intensive computations. You might know that it runs for a couple of hours and that it costs you quite a lot of money per hour to run, but when the CPU utilization drops below, say, fifty percent or ten percent, you know the job is complete. So then you can get an alarm to fire and an action to happen. You could set a notification; we could configure it to send an email when this alarm fires, though we're not going to do that in this demonstration. But we can do an EC2 action: whenever the state is in alarm, I want it to stop this instance. So this instance is going to be stopped whenever the CPU utilization is less than 50% for five minutes, and it's confirming that there. Now if I click on Create Alarm, here is our alarm, and it's telling us we're in alarm already, because this instance has been up and running, it has monitoring data for the last five minutes, and the alarm has fired. Let me click on it and we'll see why: state changed to alarm; reason: threshold crossed, one data point (0.034) was less than or equal to the threshold (50%); and it says when it's in alarm it's going to stop the instance. If we go to the EC2 dashboard, we can see that the Simplilearn CloudWatch Demo instance has been stopped, and it's in the status of alarm, so it's telling us that our alarm fired and it took the instance straight down. Obviously that's a pretty dramatic example, and you wouldn't expect anything like that in the real world, but it's a good way of seeing how alarms work.
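The alarm built in the demonstration above could equally be created from code. A sketch, assuming a hypothetical instance ID:

```python
# A sketch of the demo alarm: stop the instance when average CPU
# utilization is <= 50% for one five-minute period.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="simplilearn_alarm",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # one basic-monitoring period: five minutes
    EvaluationPeriods=1,     # one consecutive period
    Threshold=50.0,
    ComparisonOperator="LessThanOrEqualToThreshold",
    # The built-in "stop this instance" EC2 action shown in the console.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
)
```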
AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. It provides best practices, or checks, in four categories: cost optimization, security, fault tolerance, and performance improvement. The status of each check is shown using color coding on the dashboard page: red means action recommended, yellow means investigation recommended, and green means no problem detected.

Let's start with cost optimization. Using this function you can see how you can save money with AWS. For example, you might see some reserved instances recommendations: if you have instances that have been running for a long time and are always up, AWS might suggest moving them to reserved instances, and you'll save some money. It will also alert you about idle instances that aren't doing anything, which you might want to shut down, or perhaps you have some unassociated elastic IP addresses, which are costing you money because they're not allocated to an instance.

Trusted Advisor will help you improve the security of your applications by notifying you about gaps that can be closed and security features you're not using, and by examining your permissions. You might see a report saying that you have unrestricted access to certain ports; if you have your SQL Server port wide open to the internet, that's a bad thing. It will tell you about S3 bucket permissions (maybe you have more read/write users than you need), and maybe you don't have multi-factor authentication set up on your root account.

Trusted Advisor will help you increase the availability and redundancy of your applications by suggesting that you take advantage of things like auto scaling, health checks, multiple availability zones, and backup capabilities. For example, if you haven't taken a snapshot in a while, Trusted Advisor will let you know that your snapshots are too old and that you should run some new backups. It might suggest that you add more availability zones to your load balancer to make it more redundant, and it also might suggest S3 bucket logging for your auditing.

Trusted Advisor will help you improve the performance of your services by checking your service limits, ensuring you take advantage of provisioned throughput, and monitoring for over-utilized instances. If you have instances that are constantly maxed out, Trusted Advisor will let you know and suggest you move them to a different instance type. It will also let you know when your usage of a service is over 80% of the service limit, which gives you time to raise a ticket with AWS to increase it. And if you have EBS magnetic volumes that are over-utilized, it might be beneficial to switch to SSD; Trusted Advisor will let you know this too.

This is a very quick demonstration to show you how AWS Trusted Advisor can help you optimize the performance and security of your EC2 instances. If we scroll down to the bottom and click on Trusted Advisor, it brings up the Trusted Advisor dashboard. I just ran this a couple of minutes ago to speed the demonstration up, and you can see here are the four areas that we talked about in the lesson: cost optimization, performance, security, and fault tolerance. Underneath, it tells you whether you have any areas of concern. For cost optimization we have none, because I've got hardly anything running in this account; performance, it tells me, is fine; but under security we have two cautions, so let's take a look. It says we have security groups with specific ports unrestricted, and it has been checking security groups for rules that allow unrestricted access to specific ports. Let's take a look at that; we'll click on the little chevron, and it's telling us down here that we have two security groups, one of which is our Simplilearn web server security group, which has unrestricted access to port 22, the SSH port. If you remember, when we set this up we said we would allow anyone to SSH into our web server instances on port 22, so it is warning us and letting us know; if we had done this without realizing, we could go along and say, okay, we need to change that. Another thing it's telling us is that I don't have MFA enabled on my root account. Now, I did, but I changed my phone recently, and when that happens you need to set up your MFA again, and I haven't got around to it yet, so it's a good reminder; I will probably do that after this demonstration. We have no other faults or alerts, but if you had a much bigger environment you would see many more things popping up here. Another cool feature is on the top right: you can click on the little download button and it will download all of this to an Excel spreadsheet. As you can see, it puts everything into a spreadsheet, and across the four tabs we have security groups, IAM MFA, and service limits, so it's telling me my service limits and whether they're all okay. You could download this and send it to your auditor or whoever deals with this kind of thing in your company. This concludes the demonstration on Trusted Advisor.
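For reference, the same checks can be read programmatically through the AWS Support API; note this requires a Business or Enterprise support plan, and the support endpoint lives in us-east-1. A rough sketch:

```python
# A sketch of listing Trusted Advisor checks and their statuses.
import boto3

support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(
        checkId=check["id"], language="en"
    )["result"]
    # Status mirrors the console color coding: ok (green),
    # warning (yellow), error (red).
    print(check["category"], check["name"], result["status"])
```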
You can secure your virtual network at several different levels. The most broadly scoped level of security is the route table level: having a private subnet with no direct path to the internet is one of the best ways to protect your internal computing resources against unauthorized access. The second level is the network ACLs, which provide the ability to define default security behavior for your subnets; the VPC or subnet layer security is controlled by the network security team. At the third level, you can use security groups to control behavior at the instance or ENI level. At the fourth level, you can use third-party host-based detection software that monitors individual EC2 instances for specific threats such as malware, intrusion, known operating system vulnerabilities, and security auditing.

According to AWS, you can use a network address translation, or NAT, instance in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the internet or other AWS services, but prevent the instances from receiving inbound traffic initiated by someone on the internet.
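A minimal sketch of that first, route-table level of security: a route table with no internet route, so any subnet associated with it is private. The resource IDs are hypothetical placeholders:

```python
# A sketch of making a subnet private at the route-table level: we never
# add a 0.0.0.0/0 route to an internet gateway, so instances in the
# associated subnet have no direct path to the internet.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

private_rt = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")
rt_id = private_rt["RouteTable"]["RouteTableId"]

# After this association, only the implicit local route to the VPC CIDR
# exists for the subnet; it is effectively private.
ec2.associate_route_table(
    RouteTableId=rt_id, SubnetId="subnet-0123456789abcdef0"
)
```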
The diagram on the slide represents the basic mechanics of how a NAT operates. The network diagram has been simplified for the sake of discussion: regions, availability zones, routers, and gateways are not shown here. In this example, a database server in a private subnet raises a request for a public internet resource. Because of the routing table rules defined for the private subnet, AWS routes this non-local request to the NAT server via the NAT's private IP. The NAT server rewrites the request, so that to the external resource it appears that the NAT is requesting this resource via its public IP of 154.10.1.3. When the external resource responds, the NAT reverses the process: it takes the packets originally addressed to the NAT's public IP address and re-addresses them to the private IP address of the database server. This way, numerous private instances with only a private IP address can make public requests to the internet via the NAT instance. Keep in mind that source and destination checking must be disabled on the NAT.

While configuring route tables for NAT, you have to keep in mind a few points: you can create route tables and associate each of them with a subnet; public subnets route non-local traffic through the internet gateway; and private subnets route non-local traffic through the NAT instance or NAT gateway.

A common scenario that you might face with NAT is port forwarding. You have learned how to use NAT to provide internet access for the private subnet, but what if you wanted to open up a specific resource in the private subnet and make it available from the internet? For example, assume that you have an internal defect-tracking database running as a web server on port 80 on a machine in your private subnet, and you want to expose this resource to the internet via your NAT so that select customers and partners have visibility into your development process. To enable this scenario using a NAT instance, you have to log in to the instance and configure it to forward all requests to port 80 of your NAT on to the Amazon EC2 instance in your private subnet. However, the NAT gateway does not support port forwarding yet, so if this feature is required in your infrastructure, you need to set up a NAT instance.

You can use a NAT instance in a public subnet in your VPC to enable instances in the private subnet to initiate outbound traffic to the internet or other AWS services, while preventing the instances from receiving inbound traffic initiated by someone on the internet. This slide illustrates the communication between the instances running in the private subnet and the internet through the NAT instance. The main route table sends the traffic from the instances in the private subnet to the NAT instance in the public subnet; the NAT instance sends the traffic to the internet gateway for the VPC, and the traffic is attributed to the elastic IP address of the NAT instance. The NAT instance specifies a high port number for the response; if a response comes back, the NAT instance sends it to an instance in the private subnet based on that port number.

To configure an Amazon EC2 instance as a NAT, you must enable IP masquerading on the instance. This can be done manually, or you can search for an AWS community AMI that is pre-configured to serve as a NAT. Ensure that you choose an image with the correct virtualization type for the instance family you are using; for example, you cannot use a PV image for your NAT if you plan to deploy the NAT as a t2 instance. Ensure that source and destination checking is disabled on the NAT. Source and destination checking is a feature on Amazon EC2 instances that tells the instance to drop network packets that are not specifically addressed to it; as the job of a NAT instance is to serve as a proxy for the network traffic of other instances, this feature must be disabled on your NAT.
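The source/destination check can be switched off with a single API call. A sketch, assuming a hypothetical instance ID for the NAT instance:

```python
# A sketch of disabling source/destination checking on a NAT instance,
# so it may forward traffic that is not addressed to itself.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # the NAT instance (placeholder)
    SourceDestCheck={"Value": False},
)
```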
Keep in mind that a single NAT instance can become a single point of failure. If you are using a NAT instance and high availability is important for your setup, then consider implementing a two-availability-zone NAT instance design as a high-availability failover solution; it will prove more effective when used together with auto scaling. For web access you can also use a Squid proxy architecture.

The most modern and recommended way to enable instances in a private subnet to connect to the internet or other AWS services, while preventing the internet from initiating a connection with those instances, is to use a network address translation, or NAT, gateway. A NAT gateway is a highly available and automatically scalable service managed by AWS. However, a NAT gateway is not a free service: it is charged for hourly usage and data processing, and in addition, Amazon EC2 charges for data transfer also apply.

A NAT gateway is redundant inside its availability zone, but not across the whole region. In the event that the availability zone of the NAT gateway goes down, resources in the other availability zones will lose internet access. To create an availability-zone-independent architecture, you can create a NAT gateway in each availability zone and configure the routing to ensure that resources use the NAT gateway in the same availability zone. A NAT gateway supports bursts of up to 10 gigabits per second (10 Gbps) of bandwidth; if you require more than a 10 Gbps burst, you can distribute the workload by splitting your resources into multiple subnets and creating a NAT gateway in each subnet.

A NAT gateway requires one elastic IP address, and you cannot disassociate an elastic IP address from a NAT gateway after it has been created. If you need to use a different elastic IP address for your NAT gateway, you must create a new NAT gateway with the required address, update your route tables, and then delete the existing NAT gateway if it is no longer required. A NAT gateway supports the Transmission Control Protocol, the User Datagram Protocol, and the Internet Control Message Protocol. You cannot associate a security group with a NAT gateway; you can only use security groups on your instances in the private subnet to control the traffic to and from those instances. You can, however, use a network ACL to control the traffic to and from the subnet in which the NAT gateway is located; the network ACL applies to the NAT gateway's traffic. The NAT gateway uses ports 1024 through 65535. When a NAT gateway is created, it receives an elastic network interface that's automatically assigned a private IP address from the IP address range of your subnet, and you can view the NAT gateway's network interface in the Amazon EC2 console.
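Before the console walkthrough that follows, here is roughly how the same NAT gateway setup looks in boto3; all resource IDs are hypothetical placeholders:

```python
# A sketch of creating a NAT gateway: allocate an elastic IP, create the
# gateway in a public subnet, wait for it to become available, then
# route the private subnet's non-local traffic through it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

eip = ec2.allocate_address(Domain="vpc")

natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",   # the public subnet
    AllocationId=eip["AllocationId"],
)
natgw_id = natgw["NatGateway"]["NatGatewayId"]

# The gateway shows as "pending" for a couple of minutes.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

# All traffic leaving the VPC CIDR from the private subnet goes out
# through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw_id,
)
```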
In this demonstration, we'll learn how to create and use a NAT gateway. In the previous demonstration we created a new VPC with a private and a public subnet. For this demonstration I have created an instance in the private subnet; it does not have a public IP, only a private one, so it does not have access to the internet. I created one more instance in the public subnet to be used as a bastion host, and from that host we'll use SSH port forwarding. Now let's log in to the private instance and verify that it really can't reach anything outside the VPC. First let's check the bastion host: we'll try to ping something from it (the Google URL proves very useful for testing network connections). Okay, so we can see it is working. Next, let's try to ping the private instance; it's .165. Okay, the private instance is pingable. We'll now try to SSH to it, and as you can see, we are in the private instance, so the bastion host is also working fine. And now let's try to ping something on the internet from the private instance; as you can see, the private instance is not connecting to the internet. We'll leave this ping running for now.

Now let's proceed with creating the NAT gateway. We'll go to Services, then VPC, and then to NAT Gateways. We'll create a NAT gateway, and we are going to create it in the public subnet. It wants an elastic IP, and right now we do not have any elastic IPs, so we will create a new one. We are now ready to create the NAT gateway. As you can see, the status of the NAT gateway is shown as pending; it takes a couple of minutes, and now it is up and running. Let's now edit the route table for the private subnet. We'll go to Subnets, select the private subnet, and follow the route table link. Now that we know the ID, we can pick it out of the route table list. We select the route table, and now let's edit its routes. We'll add one more route to it, so that all the traffic that goes outside the VPC CIDR block will go out through our NAT gateway (you can add more routes if you want), and we'll click Save. Now, as you can see, the private instance has started pinging Google, so the NAT gateway and the route table are working fine.

Security groups control access to instances by protocol, port, and source or destination. You can use a single IP address, a CIDR block, or the ID of a resource (for example an instance ID or a security group ID) as the source or destination. Some important things about security groups: only five security groups can be associated with an ENI, which is a soft limit; there is a limit of 100 security groups per VPC, which is also a soft limit; by default, no inbound traffic is allowed, but all outbound traffic is allowed; all outbound traffic in response to an inbound request is permitted; and instances that are part of the same security group cannot communicate with each other by default.

Let's take a look at some examples of security groups in action. In the first example, there is a source of 0.0.0.0/0, to specify that any computer from anywhere on the internet can access a web server on our instance that is listening on port 80; note that security groups that restrict access by IP actually specify an IP range using CIDR notation. In the second example, access is allowed only from a specific IP address. In the third example, SSH access is allowed only from a particular security group.

There are some important limitations for security groups that you need to keep in mind: the number of rules per security group is soft-limited to 50; the number of security groups per ENI is soft-limited to 5; and the maximum number of security groups is limited to 100 per VPC, so your AWS account is limited to 100 security groups per VPC by default. While you can increase this limit, having a large number of security groups can lead to a negative performance impact, not just on your network but on other networks hosted on the same hardware as well. It's best to stay within the 100 security group limit, and as the number of security groups in your VPC increases, you should create security groups based around CIDR blocks instead of around resource IDs such as EC2 servers, network interfaces, and other security groups.

Within a VPC, security is controlled using security groups and network ACLs (NACLs). The NACLs are associated with specific subnets within a VPC. As NACLs are stateless, both inbound and outbound rules (ingress and egress) must be defined. You define the inbound and outbound rules by specifying the type of rule (for example, custom TCP), providing a rule number (keep in mind that rules are processed from the lowest to the highest), defining the port range that will be allowed or denied in or out, and specifying the source or destination IP address or range that will be allowed or denied in or out.
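Those same fields map directly onto the EC2 API. A sketch, with a hypothetical ACL ID, that also shows the stateless nature of NACLs: the inbound rule needs a matching outbound rule for the ephemeral response ports:

```python
# A sketch of defining NACL rules: rule number, protocol, allow/deny,
# port range, and CIDR, for both ingress and egress.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
acl_id = "acl-0123456789abcdef0"  # hypothetical placeholder

# Rule 100 (rules are processed lowest-first): allow inbound HTTP
# from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False, CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)

# Because NACLs are stateless, responses need their own outbound rule;
# they leave on ephemeral ports.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True, CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```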
After the NACL has been defined, it can be associated with subnets within the VPC. AWS officially recommends a set of network ACLs for each of the four configurations offered by the VPC wizard; as these configurations involve applying values that are specific to your private network, they must be implemented manually.

Placement groups are physical groupings of high-performance instances in a single availability zone. The instances use enhanced networking with maximized, consistent throughput. Placement groups are very useful for clustered databases, parallel processing for big data, or rendering of graphics.

AWS Identity and Access Management, or IAM, is a web service that helps you securely control access to AWS resources. IAM provides a centralized environment in which you can administer users, groups, roles, and permissions. IAM is integrated with AWS, providing granular access control at the service, application programming interface (API), and resource levels. AWS IAM is a free service which spans all AWS regions. With IAM you can enable identity federation between your corporate directory and AWS; this means you can use existing corporate identities to grant secure access to AWS resources, such as Amazon S3 buckets, without creating new AWS identities for the users. You can create users and define groups and roles to securely control access to AWS resources. AWS provides various ways of securing AWS resources: IAM users, IAM groups, IAM roles, and IAM access keys. You'll learn about each of them in detail in this lesson.

IAM users are not separate accounts; they are users within your account. You can create IAM users and add them to or delete them from groups based on the permissions you want to provide. IAM provides unique security credentials that can be used to access AWS; it eliminates the need to share passwords or access keys, and makes it easy to enable or disable a user's access as appropriate. AWS IAM enables you to create multiple users and to manage the permissions for each of these users within your AWS account. You can provide each user with their own password for access to the AWS Management Console, and you can manage users' passwords, including strength, length, and password resets. You can also create an individual access key for each user so that the user can make programmatic requests to work with resources in your account, and you can revoke the access key as and when required. AWS IAM also allows you to administer multi-factor authentication, or MFA, for privileged users. Note that a user does not necessarily have to be a human: you can create an IAM user in order to generate an access key for an application that runs in your AWS or corporate network and needs AWS access.

A new IAM user, by default, is not authorized to perform any AWS action or access any AWS resources; you need to grant access to the users. You can assign administrative permissions to a few users, who in turn can administer your AWS resources and create and manage other IAM users. An advantage of having individual IAM users is that you can assign permissions individually to each one. It's best to use a least-privilege policy when assigning permissions: limiting a user's permissions to just the AWS actions and resources that the user needs for his or her job reduces the chance of mistakes, and you can always increase access permissions as and when required.
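A short boto3 sketch of these building blocks: a user, a group, and a console password, with illustrative names:

```python
# A sketch of creating an IAM user, a group, and group membership.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="Developers")
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="Developers", UserName="alice")

# A console password for the user; the temporary value here is a
# placeholder, and the user is forced to reset it at first sign-in.
iam.create_login_profile(
    UserName="alice",
    Password="TempPassw0rd!",
    PasswordResetRequired=True,
)
```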
However, assigning permissions to individuals in a big organization can become a maintenance nightmare: as the list keeps getting longer, it becomes difficult to assess which particular user has been granted which permissions. Using groups simplifies the task of adding or removing permissions for a large number of users simultaneously. An IAM group is a collection of IAM users. IAM groups allow you to define sets of permissions by function, department, or level of responsibility. To provide added security to your AWS account, it is strongly encouraged that you create a set of groups corresponding to the least-privilege principle: grant your users and applications only the permissions necessary to perform the tasks assigned to them. For example, there is no reason to assign EC2 read permissions to a user whose only job with regard to AWS is to generate billing reports.

There are a few points that you need to keep in mind with regard to IAM groups: a group can contain many users, and a user can belong to multiple groups; groups can't be nested, so they can only contain users, not other groups; there's no default group that automatically includes all users in the AWS account, so if you want a default group, you need to create it and assign each new user to it; and there's a limit to the number of groups you can have and a limit to how many groups a user can be a part of.

Let's now study an example to understand how IAM groups work. The diagram presents an example of groups created for a small company. The company owner creates an Admins group and adds users to it to create and manage other users. As the company grows, the Admins group creates a Development group and a Test group. Each of these groups consists of users, both humans and applications, that interact with AWS, and each user has an individual set of security credentials. In this example each user belongs to a single group; however, users can belong to multiple groups as well. For instance, if you need to grant a person from the Development group admin privileges, you can simply add that person to the Admins group; as a result, the person now belongs to both the Development group and the Admins group.

Till now we have discussed accessing AWS resources with the help of unique identities tied to a particular user. However, you can also assume a role to temporarily take on permissions for a specific task. A role lets you define a set of permissions to access the resources that a user or service needs, but the permissions are not attached to any IAM user or group; instead, at runtime, users can programmatically assume the role. When a role is assumed, AWS provides temporary security credentials that the user or application can use to make requests to AWS. Consequently, you don't need to create long-term security credentials for each entity that requires access to a resource. AWS rotates the security credentials automatically on a daily basis, and the new credentials become available at least five minutes prior to the expiration of the old ones.

You need to assign permissions to a user, group, or role as required, so they can securely access AWS resources. To assign permissions, you need to create and attach a policy, which is a JSON document that explicitly lists the permissions. A policy lets you specify the following. Actions: which actions will you allow? Each AWS service has its own set of actions; for example, you might allow a user to use the Amazon S3 ListBucket action, which returns information about the items in a bucket. Any action that you don't explicitly allow is denied. Resources: on which resources will you allow the action? For example, on which specific Amazon S3 buckets will you allow the user to perform the ListBucket action? Users cannot access a resource that you have not explicitly granted permissions to. Effect: what will be the effect when the user requests access, allow or deny? AWS resources are denied to users by default, so you need to specify that you want to allow a particular user access to a specific resource.
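Here is what that ListBucket example could look like as an actual policy document, attached to the Developers group from the earlier sketch; the bucket name is a hypothetical placeholder:

```python
# A sketch of a policy: a JSON document that allows the s3:ListBucket
# action on one specific bucket and nothing else.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",           # effect: allow or deny
            "Action": "s3:ListBucket",   # action: what is permitted
            # resource: which bucket the action applies to
            "Resource": "arn:aws:s3:::example-reports-bucket",
        }
    ],
}

iam.put_group_policy(
    GroupName="Developers",
    PolicyName="ListReportsBucket",
    PolicyDocument=json.dumps(policy),
)
```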
Another way of securely accessing AWS resources is via IAM access keys. IAM access keys are cryptographically generated keys that represent your account. An access key consists of two components: the access key ID and the secret access key. Both can be generated for an IAM user in the AWS web console. The key ID uniquely identifies user credentials and can be considered public knowledge; the secret access key is used to cryptographically sign all requests, and it is private to the user and should never be publicly posted or shared with unauthorized personnel. If compromised, the key can, and must, be revoked. You should never place your access keys anywhere they can potentially be discovered by third parties: third parties can scan source code repositories such as GitHub for AWS credentials and use those credentials to spin up their own AWS resources at your expense. You can create access keys for your master account; however, AWS strongly discourages this practice. It is safer to generate a user in IAM, assign that user to a group with the subset of permissions required by your tools or application, and then generate an access key for that user.

Password-only security is not enough to secure your account: passwords can easily be compromised through social engineering or by malware such as a keystroke logger. To provide an additional level of account security, you can enable multi-factor authentication, or MFA. With MFA, users need to supply a password and a one-time response token. This token can be generated in one of two ways. One, by software: applications such as Google Authenticator for iPhone and Android can be used to generate a token from a smartphone or tablet. Two, by hardware: AWS supports key fob and display card devices sold through Gemalto. Physical, or hardware, token generators provide an additional level of security over software devices, since software devices are harder to secure against unauthorized usage. Both software and hardware devices are configured through a one-time process in which the token device is synchronized with your AWS or IAM account. Make sure to safeguard your MFA token generators: if a physical token is lost or stolen, or a software generator is deleted (for example, as part of a device wipe), you will have to disable MFA on your account, and before disabling it you will have to provide proof of identity to AWS.

Now that you have learned how to use MFA to secure your AWS account, you will learn about the AWS recommended best practices regarding account security: do not circulate information about your master, or root, AWS account; delegate system administration functions to least-privilege IAM admin groups; make multi-factor authentication mandatory for root-level access; physically secure hardware MFA devices in a safe place, such as a vault; do not share master account information with anyone other than the account holder; and use IAM roles to provide cross-account access.
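A hedged sketch of setting up a virtual MFA device for an IAM user; in practice the device's seed (or QR code) must first be loaded into the authenticator app, and the two codes below are placeholders for consecutive codes the app generates:

```python
# A sketch of enabling a virtual MFA device for the user "alice".
import boto3

iam = boto3.client("iam")

# The response also carries the Base32 seed / QR code that must be
# loaded into the software token generator first.
mfa = iam.create_virtual_mfa_device(VirtualMFADeviceName="alice-mfa")

iam.enable_mfa_device(
    UserName="alice",
    SerialNumber=mfa["VirtualMFADevice"]["SerialNumber"],
    AuthenticationCode1="123456",   # first consecutive code (placeholder)
    AuthenticationCode2="789012",   # second consecutive code (placeholder)
)
```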
After securing your account, the next step is to secure your instances. You can define instance-level security by deciding how to grant and revoke permissions as employees enter and leave the company. For Windows instances, you can leverage the AWS Directory Service to grant and revoke access to machines based on existing Windows users and groups. For Linux instances, you can add users, generate new key pairs, and append them to the new user's file with the commands displayed on the screen. Devise a strategy for fleet-wide user management, such as creating Amazon Machine Images, or AMIs, containing all the current users of your company. Use configuration management applications such as Chef, Puppet, or Ansible to simplify the process of granting and revoking access across a large number of machines, often reducing the process to a few simple commands.

Let's now learn the AWS recommended best practices regarding instance security. You should use IAM roles when launching instances and avoid giving permissions to individual users (see the sketch after this section). Use least-privilege access policies: do not give a user more privileges than required. Guard and manage access and secret key tokens, and store your access key tokens in a secure place. Keep security patches up to date: subscribe to the security mailing lists of the operating systems and applications you're using, and apply patches and updates in a timely manner. Use a NAT and bastion host: do not expose your EC2 resources to the internet if it is not required. You should not use root-level (master account) access or secret keys: you do not need to use the master account regularly, so it is highly recommended to remove its access keys or avoid generating them. Do not embed access or secret keys in code or commit them to Git: remember that anyone who has your access keys has the same level of access to your AWS resources as you.

In this topic you'll learn about the responsibilities shared between AWS and the customer. The shared responsibility model defines the responsibilities shared between AWS and the customers with regard to security. AWS is responsible for the security of the cloud, that is, the underlying infrastructure, and the customer is responsible for security in the cloud, that is, the resources built on the infrastructure, which include: AWS EC2 instances and the operating systems and applications installed on them; accounts that access the instances; security groups that allow outside access to the instances; the VPC subnet within which the instances reside; and external access to S3 buckets.

A crucial part of a customer's responsibility involves penetration testing. Penetration testing is the practice of scanning a system, application, or network to search for vulnerabilities that an attacker could exploit. To perform penetration testing of your EC2 instances, you need to request permission, and you need to keep in mind the following points regarding penetration testing requests. Permission is required for all penetration tests. To request permission, you must be logged in to the AWS portal using the root credentials associated with the instances you want to test; otherwise, the AWS Vulnerability/Penetration Testing Request Form will not pre-populate correctly. AWS policy only permits testing of EC2 and RDS instances that you own; tests against any other AWS service or AWS-owned resources are prohibited. Currently, Amazon does not permit testing of small or micro RDS instances, or m1.small or t1.micro EC2 instances.
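As promised above, a sketch of launching an instance with an IAM role attached via an instance profile, so no access keys ever need to be embedded in code; the AMI ID and profile name are hypothetical placeholders:

```python
# A sketch of two best practices combined: launch an instance with an
# IAM role (via an instance profile) instead of baking in access keys.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    # Applications on the instance receive temporary, auto-rotated
    # credentials for this role from the instance metadata; no secret
    # keys live in code or on disk.
    IamInstanceProfile={"Name": "app-server-role"},
)
```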
You have already learned what the customer's responsibilities regarding security are; let's now quickly learn what the responsibilities of AWS are. AWS offers extensive networking and security monitoring systems to provide a secure environment for your instances. AWS is responsible for ensuring the following. Physical and environmental security: fire detection and suppression, uninterruptible power supply, maintaining a constant temperature for servers, management of electrical, mechanical, and life-support systems, and storage device decommissioning. Business continuity management: availability, incident response, and company-wide executive review. Network security: secure network architecture, secure access points, transmission protection, Amazon corporate segregation (separation of duties between the logical and the physical), fault-tolerant design, and network monitoring and protection. AWS access: account review and audit, background checks, and password policy. Secure design principles: secure software development, formal design reviews, threat modeling, risk assessments, and static code analysis. Change management: systematic review, tests, approval, and communication; phased deployment starting with the lowest-impact areas; metrics for impact, health thresholds, and alarms; and root cause analysis.

Hey, want to become an expert in cloud computing? Then subscribe to the Simplilearn channel and click here to watch more such videos. To learn more and get certified in cloud computing, click here.