AWS Supports You: Diving Deep into Amazon S3 Security Features

Captions
Rob: Hello, I'm Rob Pegareta, and I'm an Enterprise Support Lead at AWS, based out of Austin, Texas. Welcome to AWS Supports You, where we share best practices and troubleshooting tips from AWS Support. Joining me today are Luis Fox and Hugo Adele from AWS. Can you give us a quick introduction, Luis and Hugo?

Luis: Sure. Hi, everybody. My name is Luis, and I'm a Technical Account Manager based in Amsterdam, in the Netherlands, where I work with a diverse range of customers on many different types of AWS services, including S3. Happy to be here.

Hugo: Hi, hello. My name is Hugo. I'm a Senior Cloud Support Engineer located in Dublin, on the Storage and Content Delivery team, where we support customers using those services.

Rob: Thank you for the intros. On today's episode we're going to focus on how AWS supports you with S3 security features and support best practices. Before we get into the details, a quick note to the attendees online: please feel free to use the chat window on the right-hand side of your screen to share your thoughts and ask questions throughout the episode — we look forward to hearing from you. Luis, can you start us off and walk us through what we're going to be talking about today?

Luis: Yeah, thanks, Rob. Today we're going to start with access controls, going through the different ways to restrict access to your bucket: bucket policies, IAM identity-based policies, ACLs, and service control policies as well. We'll talk about Block Public Access, which provides controls to help ensure your objects are never public where possible; access points, which simplify managing data access at scale for applications using shared datasets on S3; S3 encryption options, so encrypting your data in transit and at rest; pre-signed URLs, for granting temporary access to a specific S3 object; Access Analyzer for S3, for reviewing bucket access; logging and monitoring, where we'll go through the different services integrated with S3 and which would best suit your use case; compliance; and then security best practices for S3 from myself and Hugo.

Starting off with access controls: there are three different types of access controls for S3. First, IAM identity-based policies, where you configure policies on your IAM users and roles to set which buckets or objects they're authorized to access. Second, S3 bucket policies, which are resource-based policies where you grant who has access to specific buckets. And lastly, S3 access control lists, also called ACLs, where you can specify more granular access for who has control over your objects or buckets.

Let's do some examples with bucket policies. This is the general structure of a resource-based bucket policy: you have the Version, you have a Statement, and optionally a statement ID — here it says PublicRead. The Effect can be either Allow or Deny; in this case it's an Allow policy. The Principal is everyone, and two actions are granted, which are two different S3 APIs: GetObject and GetObjectVersion. In the Resource you specify the bucket, and the slash-asterisk means the objects within the bucket. So what this policy is doing is granting GetObject to anybody — anonymous users, anyone on the internet — for that example bucket.
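For reference, here is a minimal boto3 sketch of attaching the public-read policy just described. The bucket name is a placeholder, not one from the episode:

```python
import json
import boto3

s3 = boto3.client("s3")

# The public-read structure walked through above.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

# Attach the policy to the bucket. Note this call fails with
# AccessDenied if Block Public Access is enabled, as the demo
# later in the episode shows.
s3.put_bucket_policy(
    Bucket="my-example-bucket",
    Policy=json.dumps(public_read_policy),
)
```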
Luis: So this policy is granting read-only permissions to anyone — it's basically making your bucket public-read.

The next example is limiting access to specific IP addresses. The first statement again allows the public read, so anyone can access the objects within your bucket; the second statement, however, denies all S3 APIs to all principals unless the request is made from one exact source IP address. That restricts access to your bucket to only that IP address, and the same can be done with a different condition, like a VPC endpoint.

The next bucket policy grants permission to an Amazon CloudFront origin access identity. This is a special type of user that you can create and then specify in the Principal field, and what it does is ensure requests don't go directly to S3 — they have to go through your CloudFront distribution instead. It's an added bit of security: no direct request can be made to your bucket. Here you're granting GetObject on your example bucket to only the CloudFront origin access identity, so that's the only user that can actually access the bucket, and your users have to go through your CloudFront distribution.

Moving on to access control lists. ACLs are a way to manage access to your buckets and objects: each bucket and object has an ACL attached to it as a subresource, and it defines which AWS accounts or groups are granted what type of access. As the table here summarizes, the ACL permissions mean different things when granted on a bucket versus on an object. That's because bucket and object ACLs are independent of each other — an object does not inherit the permissions from its bucket. The full list is READ, WRITE, READ_ACP, WRITE_ACP, and FULL_CONTROL, and from a security perspective we'd never really recommend granting FULL_CONTROL on a bucket. Moving on to boundary enforcement — I think that's the next slide.

Hugo: Thank you, Luis. Luis was talking about how you can control access to your bucket using ACLs or bucket policies, but there are other ways to enforce a boundary on your S3 bucket as well. Remember that by default, when you create an S3 bucket, the bucket is private — all public access is denied. On top of that we have a special feature, S3 Block Public Access: in the event that you mistakenly create either a bucket policy or an ACL that grants more than you wish, this feature will block and override it. Another boundary enforcement you can use is AWS Organizations service control policies. Let's say you have an organization with several accounts and you want a single policy for all of them, without going into individual accounts to specify each policy — it's one place from which you can push that policy across all the accounts. The next one is the Amazon S3 VPC endpoint policy: if you have a private VPC and you want to allow access to S3 but not to anything else, you can use that. It creates something like a tunnel between your private VPC and S3, allowing your instances and users to access S3.
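As a sketch of the boundary patterns above — combining the condition-based restriction Luis mentioned with the VPC endpoint Hugo described — a deny statement like the following restricts a bucket to a single endpoint. The bucket name and endpoint ID are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

restrict_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-example-bucket",
                "arn:aws:s3:::my-example-bucket/*",
            ],
            # Deny every request that does not arrive through this
            # VPC endpoint. For the IP-allowlist variant described
            # earlier, use "NotIpAddress" with "aws:SourceIp" instead.
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-1234567890abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(
    Bucket="my-example-bucket",
    Policy=json.dumps(restrict_policy),
)
```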
Hugo: Let's talk a bit more about S3 Block Public Access. There are four security settings. You can block the creation of new public-read ACLs, or you can override existing ones — S3 will start ignoring an existing public ACL. The same goes for bucket policies: it can override and ignore an existing public bucket policy, or deny the creation of new bucket policies that allow public reads. These settings are applicable at the account level or on individual buckets: if some buckets need to be public, you can allow that, while at the same time enabling the block at the bucket level for the others. And lastly, as we mentioned, you have the possibility to enforce this at the organization level. Let's say you know for sure that there is no requirement in your organization for public-read buckets: you can apply it at the organization level, which means no bucket, existing or newly created, will have the possibility to be public. It's a great feature for blocking any mistakenly created policies that would allow access to data you don't want exposed, especially when you're dealing with confidential data.

Here we have an idea of what you'll see in the console — in this case at the account level. We have the option to block all four settings. You can block grants through new ACLs, so that if you try to create a new public ACL it will be denied, or you can apply the setting to existing ACLs. Say you have a bucket with millions of objects that carry a public-read ACL: apply this setting, and S3 automatically starts to ignore that public-read ACL, and your objects become private. The same goes at the bucket policy level: either deny new public bucket policies or ignore existing ones. I'm not sure if we have any questions so far?

Rob: Hi — yes, we do have some today. One of our questions, coming from TheSpec007, back on the ACLs topic: can we replace the permissions on S3 objects in bulk?

Hugo: No. Object-level ACLs are per object, so there is no way to apply an ACL to all objects in a single click — you need to apply it to each individual object. If you have 100 objects in your bucket, you'll need to go through all 100 objects and apply that ACL to each one. The recommended way here is to use bucket policies, because from a single point you can create one bucket policy saying: for that prefix, or for all those objects, I want to grant anonymous get access — exactly what you would otherwise have to apply via ACLs on every object.

Rob: All right, thanks, Hugo, appreciate that. We do have one more, from MikeAdam123: how can I get, from one S3 object, all the attributes such as tags and URI? The S3 page does expose them — is there another way to get that?

Hugo: The only way to get that information is through the public APIs. For example, when you do a HeadObject, you'll see all the information about the object without downloading it, and if the object contains metadata you'll see that too. But several features require separate API calls: HeadObject returns the object attributes plus the metadata, but if you want the tags on an object, you need another API call against that same object — GetObjectTagging, specifying the object name — and that's where you get the tags. Unfortunately it's not possible to get all of that in a single API call; you'll need several.
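In boto3 terms, Hugo's two-call answer looks roughly like this — a sketch with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "reports/2021/q1.csv"  # placeholders

# First call: attributes and user metadata, without downloading the body.
head = s3.head_object(Bucket=bucket, Key=key)
print(head["ContentLength"], head["LastModified"], head.get("Metadata", {}))

# Second call: the tag set lives behind its own API.
tags = s3.get_object_tagging(Bucket=bucket, Key=key)
print(tags["TagSet"])
```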
Rob: All right, great — thanks, Hugo. That's all for right now, so I'll turn it back over to you.

Hugo: Great. Moving forward: access points. What are access points? Let's say you have a bucket and, as we talked about, you grant access with a bucket policy — but bucket policies have a limitation: 20 KB is the maximum size of a bucket policy. If you have a big organization and you need to share data with a lot of customers, you can easily hit that limit. S3 access points are one way around it, because with access points you can create a policy for each individual access point and exercise very specific, fine-grained control over those permissions. What you're seeing here is the ARN of an access point: to reach your objects through an access point, you have to use that specific access point.

One of the use cases is exactly this: say you have a huge data lake, and you need to grant finance access to only the finance part, accounting to the accounting part, sales to the sales part, and third-party customers to their part. You can use access points to control that with fine granularity, so finance only sees the finance data, accounting only the accounting data, and the sales teams and third-party customers only see their own data. You can easily segment your clients into these different groups, and because each access point has its own policy, you apply exactly what you wish to that specific access point while other consumers keep broader access — all from a single place where you can restrict and control those access points.
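A minimal boto3 sketch of the per-group pattern Hugo describes — one access point with its own policy. The account ID, role name, region, and prefix are all illustrative placeholders:

```python
import json
import boto3

s3control = boto3.client("s3control")
account_id = "111122223333"  # placeholder account

# One access point for the finance group.
s3control.create_access_point(
    AccountId=account_id,
    Name="finance-ap",
    Bucket="my-example-bucket",
)

# Grant a hypothetical finance role read access to only the
# finance/ prefix, through this access point.
ap_arn = f"arn:aws:s3:eu-west-1:{account_id}:accesspoint/finance-ap"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/FinanceRole"},
            "Action": "s3:GetObject",
            "Resource": f"{ap_arn}/object/finance/*",
        }
    ],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="finance-ap", Policy=json.dumps(policy)
)
```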
Hugo: Next, encryption. Luis, can you tell us a bit about encryption and how you can use it on S3?

Luis: Sure, thanks, Hugo. With S3 there are two types of encryption: encrypting your data in transit and encrypting your data at rest. Encryption in transit uses SSL/TLS, and S3 uses HTTPS by default. That means that for transferring data, or for any S3 service requests issued through the console or the S3 APIs, SSL/TLS encryption is enabled by default — no manual work is required from you.

For encryption at rest, there's server-side encryption, where you request S3 to encrypt your objects before saving them on disks in our data centers and to decrypt them when you download the objects — there are three different types here, which I'll come back to in a second. And there's client-side encryption, where you encrypt the data on the client side and upload the already-encrypted data to S3. If you go with that option, you manage the encryption process, the encryption keys, and any tools you use.

Touching on the three options for server-side encryption — actually, go back one slide for a second, I want to delve into this a bit more. First there's SSE-S3, server-side encryption with S3-managed keys: each object is encrypted with a different key, and S3 manages everything for you. Then there's server-side encryption with KMS, our Key Management Service, where you can manage the keys or even provide your own keys if you wish. If you can't decide between the two, KMS gives you more granular control over your encryption keys: you can control who has access to them by creating key policies, for example, which are another type of resource-based policy defining who can use or administer the key, and you're also in charge of when the key gets rotated — whereas with S3-managed keys, everything is done for you. The third option is customer-provided keys, where you provide your own encryption keys: access, key rotation, everything is organized by you, and you have full control.

Moving on and delving a bit further into KMS encryption: there are two types of KMS keys you can use to encrypt the data in your S3 bucket — and by the way, only symmetric keys can be used here; that's all S3 supports for this. AWS managed KMS keys are fully managed by AWS and generated on your behalf, and they can only be used within the account your bucket is in. You don't manage the key policy — this is all done for you, so AWS, not you, controls who has access to the key — and the key can't be disabled or deleted. Customer managed keys are a bit more work for you, but depending on your use case they may suit you better in terms of security: you get fine-grained access control through KMS key policies, where you define who has access, and you can even grant other accounts access — it doesn't have to stay within your own account. If you wish, you can disable or delete the key, and you can rotate it as well, either automatically or manually on demand.

Here is a diagram of KMS encryption where you're enforcing KMS encryption before data is uploaded to the bucket. As you can see, temporary credentials are generated, a customer master key encrypts the data for your object, and once the object is encrypted it's uploaded to your S3 bucket. You can also define a condition in your bucket policy that enforces KMS encryption — or any type of server-side encryption — on uploads.
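A sketch of the bucket-policy condition Luis mentions — the common pattern of denying any PutObject that is not flagged for SSE-KMS. The bucket name is a placeholder:

```python
import json
import boto3

s3 = boto3.client("s3")

enforce_kms = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Reject uploads that request the wrong encryption type.
            "Sid": "DenyWrongEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
        {
            # Reject uploads that omit the encryption header entirely.
            "Sid": "DenyMissingEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
    ],
}

s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(enforce_kms))
```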
Luis: Beyond encryption, here are some other layers of protection for your data — different best practices and security features we offer. The first is versioning: enable versioning on your bucket, and it helps you recover objects that are accidentally or maliciously deleted or overwritten — really handy in case anything important gets deleted. Then there's Object Lock, which we'll go into more on the compliance side: it prevents objects from being deleted or overwritten for a fixed period of time or indefinitely, depending on your compliance requirements. There's MFA Delete — multi-factor authentication delete — which adds a second layer of authentication before anything in the bucket can actually be deleted or overwritten. And lastly, replication: automatic, asynchronous copying of data from your S3 buckets, which can be done across regions (CRR, Cross-Region Replication) or within a region (SRR, Same-Region Replication). This ensures your data is available in another region, making it more available and more redundant. Are there any questions from the group, Rob?

Rob: Yes, we do have a few additional questions. One of them, from Skolex: is there any way of updating the object owner without an in-place copy?

Hugo: I can take that one, Luis. No — per the S3 ownership model, the account that uploads the object is the account that owns the object. To change the ownership of an object, you need to copy the object back onto itself using the other account's credentials. It's not possible to update the ownership without copying the object back onto itself, I'm sorry.

Rob: Okay, thanks. Another question here, from PrincetonGirl21: can I enable encryption with KMS on all existing objects in the bucket after it was created?

Hugo: I can also take that one. No. When you enable, for example, default encryption on the bucket, that only takes effect for new objects from that moment forward — we will not backfill the objects that already exist in the bucket. For those, you need to copy the objects back onto themselves, as before, and either provide the encryption key or just copy them and let the default encryption kick in. One tool you can use here is S3 Batch Operations, to copy the objects back onto themselves and apply the encryption. But from the moment you apply default encryption, only new objects get encrypted; the existing ones have to be processed manually.

Rob: Great, thanks. We've got one more here today, from Joe Desmond: which keys would you recommend using? It's back to our KMS subject from before.

Luis: Yeah — I'm assuming this is about the encryption keys. Joe, it really depends on your use case and what best suits your security requirements: whether you need to consider key rotation and how often keys need to be rotated, where your keys need to be stored, and how much access control you need — do you need granular access control, or would you prefer it managed for you? It really comes down to your exact use case and security requirements, but if you have a support plan, feel free to reach out to us and we can help you more specifically.

Rob: Thanks, Luis, appreciate it. Back to y'all.
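A minimal boto3 sketch of the copy-in-place approach Hugo describes for re-encrypting an existing object; bucket and key are placeholders:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "data/existing-object.bin"  # placeholders

# Copy the object onto itself so the new encryption setting applies.
# S3 requires something to change on a self-copy; specifying the
# encryption here satisfies that.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    ServerSideEncryption="aws:kms",  # or omit to rely on the bucket default
    MetadataDirective="COPY",        # keep the existing metadata
)
```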
that bucket and that bucket will replicate your object into multiple buckets until then you have to manually do it but now we support that out of the box how you can so what are the use cases for this first disaster recovery let's say that you are doing cross region and for some reason uh there is an issue on aws region you can easily switch over to the other region and that's it and latency wise if you are using a global service and you want to distribute those objects across the globe you can easily do the replica so that once your client in let's say north america uploads an object into your buckets automatically that object will be replicated in europe and asia so the next time and user in asia tries to download that same object instead of need to go to the bucket in north america now we'll just download it close to him so you can get much performance and latency wise will be reduced and what happens also with encryption in this case replica you can also apply encryption so that means that when you upload an object into your bucket you can specify which keys you want for those objects to be replicated and to answer the one question of one of our listeners and while doing a replica and if it's a cross account you in that case you will also have the possibility to select which accounts you want the object to be owned on a normal situation the account that is pushing the replica will be the owner but if it's across account like you have a bucket on account a that is replicating into account b and you upload the object into the bucket on account a you have the possibility for s3 to win uploading in this case when replicating the object into buckets on the count b to change the ownership so that the owner will now be account that's a possibility also to avoid any cross account access in the future next we have president urls as we talked earlier by default your buckets are private so s3 will create a bucket for you and that bucket will be private you can apply policies like a bucket policy or acls to allow those objects to be public but you can also apply the block public access to be sure that those objects are not public but in cases where you want to share objects to other users and without either sharing them your account or granting them access to your buckets through bucket policy you can use present urls basically a president url is going to be a link to that object that was pre-authenticated using your credential so you have user you have a credential on that user and you will pre-authenticate that with the present urls you can specify how long the url will be valid if it's going to be to download an object if it's going to be to upload an object you can either generate presenter else to copy objects between buckets or even lastly to delete an object let's say that you want to grant one customer the possibility to delete that object anytime they want you can also generate a present url so that they on their own can delete that object there are only one classes here is that present urls will be valid until they are expired so when you create a presenter l you define what is the expiration date and on top of that the present url will also be valid until the the client or the user that created that present url exists like for example if you have temporary credentials and you create a present url using temporary credentials when the temporary credentials expire so does your present url even if you set expiration in five or six days in the future but if the the temporary credentials 
Hugo: Moving forward, we have Access Analyzer for S3. We've talked about controlling access to our objects and buckets — but do we know who can access them? With Access Analyzer for S3, you can. You enable it, and it reviews the policies and ACLs on your buckets and shows you which buckets are accessible from outside your account, and through which grants. It continuously monitors your buckets, so you can easily spot policies that are providing more access than they should — it's a highly recommended feature.

How does it work? It looks across your account, or your organization, at the access you've granted — through IAM roles, S3 bucket policies, even Lambda functions, KMS keys, and SQS queues triggered by S3 events. All access in AWS is authenticated and governed through IAM, and the analyzer gives you a single place to know exactly what is shared and with whom, instead of you going through S3 logs, Lambda logs, IAM, and everything else yourself. It provides a high level of security assurance that no one can access anything they shouldn't.

Moving forward, we have logging and monitoring. Luis, do you want to talk a bit about this?

Luis: Sure, thank you, Hugo. These are the different security logging and monitoring features that S3 offers, on the next slide. The first is CloudTrail, which logs all API activity in your account at the bucket level — and you can also enable it at the object level. Then you have S3 server access logs, another way to log detailed records about the requests made to your bucket. Between these two services you can get a very good picture of your customer base, the requests being made, your S3 billing, and your security and access audits.
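To enable the server access logs Luis mentions, a boto3 sketch might look like this. Bucket names are placeholders, and the target bucket must grant S3's log delivery permission to write into it (the demo later repeats the advice to use a dedicated logging bucket):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="my-example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            # A separate bucket dedicated to logs — never the
            # source bucket itself.
            "TargetBucket": "my-example-logs",
            "TargetPrefix": "access-logs/my-example-bucket/",
        }
    },
)
```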
Luis: Next is the bucket access control view in S3: in the S3 console you can see who has access to your bucket in a nice UI, so that's another way to check your access control. Then there's AWS Trusted Advisor, a really handy free-of-charge service that offers real-time guidance to help you provision your resources in line with AWS best practices — reducing overall cost, improving performance, and of course improving security. For example, there's a bucket permission check that identifies any S3 buckets in your account that are publicly accessible due to ACLs or policies; you can be notified of this and change it if it's not what you expected. IAM Access Analyzer, which Hugo just touched on, so I won't spend long on it, helps you identify any S3 buckets in your organization or accounts that are shared with an external entity.

Then Config rules. Config is one of our compliance services, where you can monitor configuration changes to any of your resources and set up Config rules to make sure the resources stay compliant. If they ever drift into a non-compliant state, you can be notified or set up auto-remediation — more on this on the next slide. And lastly, Amazon Macie, which scans your S3 buckets and gives you a full inventory, categorizing the data and surfacing any sensitive data types, including personally identifiable information (PII) such as customer names or credit card numbers, and helping you adhere to compliance requirements such as GDPR or HIPAA. You can also use it to check whether any buckets are unencrypted, publicly accessible, or shared with accounts outside your organization, which lets you quickly address any unintended settings on your buckets — so it's a detective control, basically.

On the next slide, let's talk about Config specifically. Your resources can change as different users in your account make modifications to your S3 objects or buckets; these changes are recorded by Config, and you can have predefined rules managed by AWS, or custom rules that you create and manage yourself, which check that your resources remain compliant with what you've set for your account. If they're not, for any reason, you can set up notifications or even auto-remediation.

Here are some examples of the many Config rules we have for S3. The first checks whether logging is enabled for your S3 buckets: you set this predefined AWS-managed rule on your account, and if any bucket is created without logging enabled, the rule turns non-compliant and you're notified. Another checks whether buckets have policies that require requests to use SSL — making sure all requests go over HTTPS. Another checks whether S3 buckets are encrypted with KMS, the encryption service we talked about earlier; if any bucket is not encrypted with KMS, you can get emails about it or have some other remediation set up. And lastly, one checks whether versioning and MFA Delete are enabled on your buckets, for data protection. Those are just some of the rules we offer.
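As a sketch, deploying one of the AWS-managed rules just listed via boto3 could look like this (it assumes a Config recorder is already set up in the account; the rule name string is arbitrary):

```python
import boto3

config = boto3.client("config")

# The managed rule that flags buckets not requiring SSL/TLS
# for requests.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-ssl-requests-only",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_SSL_REQUESTS_ONLY",
        },
    }
)
```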
Luis: Moving on to S3 compliance. The security and compliance of S3 is assessed by third-party auditors, and those audit reports are all available to you in our compliance service, AWS Artifact. This is another free-of-charge service — a go-to place for all compliance-related information, where you can download audits and different compliance reports and check your own configurations against them to make sure you adhere to them. We also have a dedicated compliance team: you can fill out a form and reach out to them directly if you have any questions.

One of the features we have for compliance is S3 Object Lock. This is what I mentioned earlier: it prevents objects from being deleted or overwritten for a fixed amount of time or indefinitely. It can help with compliance requirements — financial services regulators, for example, might require a write-once-read-many (WORM) data model so that certain types of data, records, or books are kept for a fixed amount of time. It can be used for auditing purposes, for compliance, or simply for data protection.

On the next slide you'll see there are two different modes of protection: compliance mode and governance mode. Compliance mode is intended for compliance, and it does not let anyone in the account delete data that's under compliance mode — including the root user — and this is assessed by a third party. Governance mode is intended for data protection, as I mentioned earlier: you can set an IAM permission to allow specific users to modify the data if needed, but it's mainly there to prevent accidental or malicious deletes or overwrites. Do we have any questions at the moment?

Rob: No, thanks — we don't actually have any knocking against us right now in the chat. So thanks, Luis — carry on.
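A sketch of setting up Object Lock with boto3, under the constraint the demo later calls out — it must be enabled when the bucket is created. Bucket name, region, and the 30-day governance retention are all illustrative:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be switched on at creation time.
s3.create_bucket(
    Bucket="my-locked-bucket",  # placeholder
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    ObjectLockEnabledForBucket=True,
)

# A default retention rule in governance mode: objects cannot be
# deleted or overwritten for 30 days, unless an identity is granted
# permission to bypass governance retention. Using "COMPLIANCE"
# instead makes the lock absolute, even for the root user.
s3.put_object_lock_configuration(
    Bucket="my-locked-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)
```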
Luis: Okay, so moving on to some AWS Support best practices, summarizing what we discussed earlier. The first, as Hugo mentioned, is enabling account-level Block Public Access, to prevent any buckets from being publicly accessible. This can also be enabled at the bucket level, but we usually recommend the account level, because that way any buckets created in the future will have Block Public Access enabled as well. Another is implementing least-privilege access — this goes back to what I was saying about access controls: grant the least amount of permissions required for any user or role to carry out its task. This can be defined with identity-based IAM policies, bucket policies, S3 access control lists, service control policies, KMS key policies — the list goes on — and it's definitely something to implement.

Hugo: Following on from that, as we mentioned: secure your traffic. We talked about encrypting data in transit, but if you don't want to use the public internet and you have a private VPC, you can use VPC endpoints — there's the gateway type and the interface type, each with its own benefits. We also talked about access points, if you want fine-grained control over who accesses your data: if you want to share your buckets and data across multiple users, different groups, and different areas of your organization, access points can do that. On top of that there's encryption — it's a best practice to encrypt everything — and for encryption you can use SSE-KMS, with either an AWS managed KMS key or a customer managed KMS key, each with its own pros and cons. Lastly there's SSE-S3, where the encryption is managed entirely by S3 on its own: you don't get the KMS-level control over your keys, but there's also nothing for you to manage.

As well as encryption, there's data protection: enabling Object Lock, versioning, or MFA Delete are all ways to prevent accidental deletions or overwrites, as mentioned earlier — whether you want to lock data for compliance reasons, keep extra versions of the same data, or require an extra layer of authentication when someone goes to delete or overwrite an object. And monitoring: use the services integrated with S3 — for example CloudTrail, which logs every API request in the account, and Config, which monitors configuration changes made to your resources — if you're considering logging and monitoring your data. On top of that, you can use S3 server access logs or CloudTrail data events. CloudTrail is enabled on your account by default and records bucket-level API calls, but if you want to know who is accessing, downloading, or uploading your objects, you need either S3 server access logs or CloudTrail data events — and you have to enable data events explicitly, because by default only bucket-level calls are recorded. Data events carry an extra cost, but they record much more information than the S3 server access logs. Finally, there's AWS Trusted Advisor to inspect your implementation: as Luis mentioned, if you want to check whether a bucket has a public-read bucket policy, whether SSL is enforced — all of those security best practices — you can rely on Trusted Advisor to inspect the configuration of your account, and if something isn't implemented, it will alert you and recommend enabling it. Moving forward we have a quick demo, but before that, are there any questions?

Rob: Hi, Hugo, thanks. Right now it looks like we don't have any, so if anyone has questions on S3 and the different security features, please feel free to put them in the chat and I'll get them answered for you.

Hugo: Thank you. Moving forward with the demo: we'll create a bucket, apply a bucket policy, and look at encryption and versioning. Let's start. When we go to S3 and create a bucket, remember that S3 is a global service as far as bucket names are concerned — you cannot use the same bucket name in different regions — but your data is stored in a single region, which is why, when creating a bucket, you select the region where you want it stored. Next you have Block Public Access, as we mentioned, which we'll look at more closely in a moment; then versioning, which keeps different versions of your objects; and an advanced setting, Object Lock — again, we'll come back to that a bit further on.

So let's create our bucket. The bucket is created, and as you can see, by default it's private: no access is granted. It now resides in the region I selected, and it has no objects in it. Let's talk a bit about the Permissions tab and Block Public Access. Again, it's not public — by default S3 always creates a private bucket, and you need to explicitly allow access. When you do allow access and want to be sure you're not granting more than you intend, you can enable Block Public Access. This is a bucket-level or account-level feature, and here you can control the settings individually — whether to apply only to new ACLs, to existing ACLs, or at the bucket policy level.
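The same four console toggles from the demo, expressed as a boto3 sketch at both levels; the bucket name and account ID are placeholders:

```python
import boto3

# Bucket-level Block Public Access.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="my-example-bucket",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # deny new public ACLs
        "IgnorePublicAcls": True,       # ignore existing public ACLs
        "BlockPublicPolicy": True,      # deny new public bucket policies
        "RestrictPublicBuckets": True,  # restrict existing public policies
    },
)

# The account-level version shown in the demo lives on the S3 Control API
# and covers every existing and future bucket in the account.
s3control = boto3.client("s3control")
s3control.put_public_access_block(
    AccountId="111122223333",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```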
Hugo: On the left we can see the account level: changes I apply there are applied to all the buckets in my account — the existing ones and any new buckets I create. So let's say we have Block Public Access enabled and I'm going to apply a bucket policy. I'll edit the policy and use, for example, the public-read policy from the examples in the AWS documentation. I copy it over to our bucket — so I'm effectively granting everyone on the internet GetObject — and I just change the bucket name. My bucket policy is in place, and when I try to save, what happens? I get an access denied. Why? Because I have Block Public Access enabled: S3 is not allowing me to create a public-read bucket policy — it's clearly saying, I won't let you allow that. The same goes for ACLs: if I try to create an ACL that allows everyone on the internet to list or read my bucket, because the feature is enabled at the account or bucket level, S3 denies the operation. So keep this feature in mind — it's very useful to ensure no one mistakenly creates a public-read ACL or bucket policy and exposes objects you don't want exposed.

Next, server access logs. As we discussed, server access logs are per bucket, so they're enabled at the bucket level: I go into my bucket, select the destination where my logs will be stored, and save. One recommendation is to create a bucket used only for logging — it's not recommended to make the bucket you're logging the destination for its own logs; always create a bucket specifically for the logs. We also have CloudTrail data events: as you can see, they're disabled. If I want CloudTrail to also record object-level calls — the GETs, the PUTs, the DELETEs — I have to come here and enable the feature, because by default it's off.

Moving forward, another protection is bucket versioning. By default, if I upload an object and then delete or overwrite it, it's gone. When I enable versioning, I'm essentially telling S3: if I delete an object, or overwrite it, or upload a new object with the same name, don't overwrite and don't delete — create a new version instead. For example, I upload an object into a bucket that doesn't have versioning enabled — as you can see, versioning is disabled — and the object lands in my bucket. If I then delete it, S3 asks me to confirm, and notice the "permanently delete": I need to type that to confirm, and then my object is gone. Keep in mind: every time you delete an object from an S3 bucket, there is no way to recover it unless you have versioning enabled.

Now, with versioning enabled on my bucket, when I upload my object, S3 creates the object and also creates a version. If I perform the same delete, notice it now just says "delete" — it doesn't ask me to permanently delete — and the object disappears from the listing. But if I enable the "list versions" option, S3 lists all the versions in my bucket, and I can see that the current version is just a delete marker — a pointer saying this object is deleted — with my previous version underneath. If I delete that delete marker, voilà: my object comes back. Disabling "list versions", I see my object again, because the pointer no longer exists and S3 treats my previous version as the current version.
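The versioning and delete-marker behavior from the demo, as a boto3 sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

# Turn versioning on; from here, overwrites and deletes create
# versions and delete markers instead of destroying data.
s3.put_bucket_versioning(
    Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
)

# A plain delete now just adds a delete marker...
s3.delete_object(Bucket=bucket, Key="demo.txt")

# ...and removing that marker (by its VersionId) brings the
# previous version back as the current one.
versions = s3.list_object_versions(Bucket=bucket, Prefix="demo.txt")
for marker in versions.get("DeleteMarkers", []):
    if marker["IsLatest"]:
        s3.delete_object(
            Bucket=bucket, Key="demo.txt", VersionId=marker["VersionId"]
        )
```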
Hugo: Next we have default encryption. By default my objects are not encrypted, unless I enable this setting or specify encryption at upload time. When we enable it, we have two options: Amazon S3 keys (SSE-S3) or KMS. In this example let's use SSE-S3, as it's simpler — we don't need to set up anything in KMS. Looking at the object we uploaded earlier and checking its encryption: there is none. Now, with default encryption enabled, I upload a new object, and S3 tells me default encryption is enabled. I could of course select another type of encryption if I wanted, but if I don't, S3 applies the default — and again, S3 only applies the default if I'm not specifying any encryption on the upload. My object is uploaded, and remember I have versioning enabled, so let's list the versions: my current version now has S3 encryption, using the type I selected, SSE-S3. And remember, as we discussed, it's not possible for S3 to go back and encrypt the old objects automatically.
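Default encryption as enabled in the demo, sketched in boto3; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3 as the bucket default. Swapping the rule for
# {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "<key-arn>"}
# gives the KMS variant instead.
s3.put_bucket_encryption(
    Bucket="my-example-bucket",  # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```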
Hugo: My bucket is private, and now I want to grant access to one object. As we also talked about, I can create a pre-signed URL. Several tools support this — the AWS CLI allows it out of the box; I just need the CLI configured. So I create a pre-signed URL, specifying the bucket and the key, and I can also specify how long the URL stays valid; the CLI returns the pre-signed URL. This is a URL I can share with other users, and they'll be able to download the object even though my bucket is private and Block Public Access is enabled, because the pre-signed URL automatically authenticates the download.

Lastly, Object Lock. Be aware that this must be enabled at the time you create a bucket: if I try to enable it after the bucket is created, I won't have the option, and the only way to enable it then is to reach out to AWS Support for a special token that allows it. So whenever you create a bucket where you intend to use Object Lock, enable it from the start. And that's the end of our demo — I'm not sure if we have any questions, Rob?

Rob: Thanks, Hugo. We do have one question, real quickly, if you have time for it, from Casper D: they've been looking at creating pre-signed URLs but are getting access denied with an IAM role that's trying to access them. Any tips, off the back of the demo you just did?

Hugo: Sure. When you create a pre-signed URL, you need to be sure that the user generating it — the credentials you're using — has access to the bucket and to the object, and if the object is encrypted with KMS, that user also needs access to the KMS key; check that everything matches. Another thing to check is whether your bucket was created recently. When you generate a pre-signed URL, the signing method also signs the host — in this case bucket-name.s3.amazonaws.com — and if your bucket is very new, it usually takes an hour or two for DNS to settle: the global endpoint initially points to North Virginia, and S3 will say, hey, that bucket isn't in North Virginia, it's in another region, and automatically redirect you to the region where the bucket lives. You might get an access denied, or — the most common error — a signature-did-not-match, exactly because of that. So if the bucket is new, wait a bit longer for DNS to propagate. If it's an existing bucket and object, confirm that the IAM user has the proper permissions to access the object, and whether the object is encrypted with KMS or not. One thing you can do is use the normal CLI — for example an s3api HeadObject against that object: if you have access there, then it's probably the redirection because the bucket is still new. Hope that answers the question. Rob, I think you're on mute there.

Rob: Thanks, Hugo, appreciate that. Everyone, today we looked at S3 security best practices. If there are any questions that weren't answered today, you can post them on the AWS forums at forums.aws.amazon.com, and feel free to email any feedback to AWS Supports You at amazon.com — we want to hear from you, so tell us what else you'd like to see on this show. Thanks for joining us on AWS Supports You, and happy cloud computing.
Info
Channel: Amazon Web Services
Views: 5,689
Keywords: AWS, Amazon Web Services, Cloud, AWS Cloud, Cloud Computing, Amazon AWS
Id: 1wdTEiy6cjA
Length: 56min 21sec (3381 seconds)
Published: Mon Apr 12 2021