Simplified Amazon EKS Access - NEW Cluster Access Management Controls

Captions
Here we go! All right, hello and welcome to another episode of Containers from the Couch. Today we're going to be talking about the new, improved cluster access management APIs in Amazon EKS, essentially making it a little easier to set up authentication and authorization when working with Amazon EKS. I've got two guests with me today, both from AWS, and both, I believe, have been on Containers from the Couch before. Correct me if I'm wrong there, Sheetal; we'll start with you. So excited to have you here, tell us a little bit about yourself.

Hi, I'm excited to be back. I think it's been a long while, but here I am today with a new episode, and it's always exciting to be sharing the screen with Sai, and today with Rodrigo as well. I am a specialist solutions architect at AWS for containers, and I'm excited to present EKS access management today and the new controls that we launched in December. I think we're a little bit late to the game, it's already the end of February, and we tried to fit this episode in earlier, but today is the time for the broadcast. Better late than never.

Absolutely. We did announce this feature at re:Invent, so let's talk about it. Oh, by the way, we've got almost a hundred people watching us live right now; drop a comment in chat and let us know where you're tuning in from. Rodrigo, I'll go to you next. Tell us a little bit about what you do at AWS, and you're going to be demoing for us today, right? So maybe a little sneak peek into what you plan to show.

Hi Sai, hi Sheetal, hi everyone, happy to be here again. My name is Rodrigo, I'm a containers specialist solutions architect at AWS, and I work on the same team as Sheetal. I'm happy to be here to share this new feature
that customers were looking for, to facilitate creating and managing access to EKS clusters. The demo for today, if everything runs correctly (we hope so, it's a live demo): we'll have a cluster deployed in the old-fashioned way, with all authentication managed by the aws-auth ConfigMap, and during the demo we'll transition to the new method using just the API. In the middle we'll cover how to make that transition, how to enable the cluster to support both modes, and at the end we'll be fully transitioned to the cluster access manager and will completely remove the ConfigMap from the cluster.

Excellent, thank you for that peek. And wow, we've got a viewer here from Somalia as well; love to see it, worldwide here. Okay, so Rodrigo, as you mentioned, it's important not only to show the feature, but also how customers who've already set up cluster access using the aws-auth ConfigMap approach can migrate. As much as I'm used to using it, I can't say I love the old way of setting up cluster access with the ConfigMap. It's definitely an error-prone approach, where you're working with not just IAM APIs but also Kubernetes APIs and pairing the two together; not great. I'm giving a little bit of this away, but Sheetal, I want to start with you and ask: why did the EKS product team and engineers develop this feature? Why new APIs for cluster access management?

I think that's a good segue to bring up the slides, which have some background on it and also some details about the feature itself. So what is it? We really wanted to provide seamless and simplified access management. As Sai just mentioned, customers were dealing with cluster creation using the EKS APIs, but identity and access management with the Kubernetes APIs. And also, what most of our customers
found challenging was that they forgot which IAM principal they used to create the cluster and then had trouble getting access to the cluster itself. And if you mess up the IAM principal you used while creating the cluster, you might lock yourself out of cluster access as well. So we said, okay, let's launch this feature and support EKS APIs where you can use any of your IaC tools to set up access to your clusters and manage the authentication and authorization of the clusters. We haven't taken away any of the upstream Kubernetes access permissions, like cluster-admin or view; you still get those. And you can also use access management to simplify the workflow for granting access to other AWS services, like EMR and AWS Batch, using the EKS APIs.

So how do you do that? You can opt in to cluster access management on existing clusters or new clusters. As you can see here, there is a new configuration block called accessConfig, and you can set the authentication mode to API, CONFIG_MAP, or API_AND_CONFIG_MAP. We do still support the ConfigMap as of today, but in a coming Kubernetes version we might flip it so that API becomes the default mode, and until you move to the API mode you will not be able to upgrade your clusters.

Quick question here: for existing clusters, you can update the cluster with this accessConfig approach, but what about brand-new clusters on the newest version of EKS? Is this still an opt-in feature? What's the default, for example, if you don't specify an access configuration?

The default is API_AND_CONFIG_MAP, right, Rodrigo? In the latest versions, or depending on the tool that you're using, actually.

Yeah, that's correct. If you're using an infrastructure-as-code tool out there, for example Terraform or CloudFormation, with the default configuration or the default resources, the default will still be CONFIG_MAP. But if you're using tools like eksctl, or provisioning your cluster through the console, or using the upstream Terraform modules for EKS, those already set it to API_AND_CONFIG_MAP, so you'll be able to manage access in both ways. So I guess we're both correct: depending on the tool you use to create and manage your clusters, the default might vary. You might want to take a look, and I would recommend that you explicitly specify accessConfig and the authentication mode, so that you're not confused, and if the upstream tool you're using flips the flag, you're still protected from changes to any of those upstream tools.

So yeah: CONFIG_MAP, API_AND_CONFIG_MAP, and API. When you create a cluster using only API, the cluster will source authenticated IAM principals only from the EKS access entry API. There are two things introduced with this change: we have something called access entries, and we have something called access policies. The first step is that you create an access entry. When you use the API authentication mode, the cluster sources all authenticated IAM principals only from the access entry APIs. When it is CONFIG_MAP, it's the regular behavior you see as of today, the aws-auth ConfigMap. With API_AND_CONFIG_MAP, the cluster sources authenticated IAM principals from both access entries and the aws-auth ConfigMap; however, if the same IAM principal exists both as an access entry and inside the aws-auth ConfigMap, the policies associated with the access entry take priority. So, as we just mentioned, in a to-be-determined future
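[Editor's note: a minimal sketch of setting the authentication mode at cluster creation with the AWS CLI. The cluster name, role ARN, and subnet IDs are hypothetical placeholders.]

```shell
# Create a cluster that sources IAM principals only from EKS access entries
# (no aws-auth ConfigMap). Names and ARNs below are hypothetical.
aws eks create-cluster \
  --name demo-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-0aaa,subnet-0bbb \
  --access-config authenticationMode=API

# Inspect which mode an existing cluster is using:
aws eks describe-cluster --name demo-cluster \
  --query 'cluster.accessConfig.authenticationMode'
```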
Kubernetes version, EKS will stop supporting the aws-auth ConfigMap as a separate authentication source, and you'll be blocked from upgrading. So I think now is the time to take action and look at the feature and how it's implemented; the demo Rodrigo is going to walk through should be really helpful. There is some work involved: if you want to shift completely to cluster access management and access policies, you'll want to migrate over all of the IAM principals and roles you've created. However, we also support upstream Kubernetes RBAC with cluster access management: you just create the access entries, and you can still use all of the upstream Kubernetes RBAC permissions you've already created.

There's another thing we've introduced with this: you can completely remove the cluster creator's admin permissions. With the aws-auth ConfigMap, the IAM principal used to create the cluster was automatically given cluster-admin permissions. But with this, you can say: I just want to bootstrap, and after the bootstrap of the cluster is done, remove the cluster-admin permissions from the principal that was used to create the cluster.

This is actually a feature a lot of customers have asked for, because previously, when creating a cluster, it would automatically bootstrap the cluster with a cluster administrator tied to the principal that created the cluster, and once that's set, you're kind of stuck with it for as long as you have that cluster. So this feature is not just improving usability; it's also improving our security footprint as a whole. You can create clusters with no cluster administrators and then configure that, you
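[Editor's note: a sketch of opting out of the bootstrap cluster-admin grant described above. Names, ARNs, and subnet IDs are hypothetical.]

```shell
# Create a cluster WITHOUT automatically granting the creating principal
# cluster-admin; access can be granted later via access entries.
aws eks create-cluster \
  --name demo-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-0aaa,subnet-0bbb \
  --access-config authenticationMode=API,bootstrapClusterCreatorAdminPermissions=false
```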
know, down the line based on your needs. I thought that was really interesting.

Yeah, and with this feature we also support the access policies: we have admin, cluster admin, edit, and view, matching the upstream roles that Kubernetes supports today. As we go through the demo, we'll create multiple different access policies and also walk through how authentication and authorization work with these cluster access management additions.

So I did get a question in chat that's really interesting. Ryan Cloud is asking: does that mean an AWS IAM read-only role can be mapped to the cluster-admin role on EKS? Yes. I think this is where a lot of the confusion comes in. The access entries and policies supported by cluster access management are very different from the IAM permissions themselves. Yes, you can have an IAM role that is named "read-only" but that has admin access, depending on which access policy you've associated with that IAM principal.

Great. And Ryan, yes, it can be a little dangerous, but you can feel comfortable knowing that you always have the ability to go in and change it. I think that's the benefit of the new approach.

Yeah, and we also support some IAM conditions: you can use them to say that only if the specific IAM conditions match should access to the cluster be allowed, whether that's read-only, admin, or cluster-admin access. I'll cover that as well. So how do you recover cluster access? With the aws-auth ConfigMap, that was a challenge. Here, you create an access entry and then associate an access policy, as you see here. First we create an access entry for an IAM principal; here, the
IAM principal we use is a role named "admin", but it can be named anything. We suggest using names that match the level of access you're granting to that IAM role. We also recommend using an IAM role rather than an IAM user, so that even if a user changes, you manage access at the role level and add users as principals that can assume the main IAM role. Once you create an access entry, you go ahead and associate an access policy with it using the associate-access-policy API: you say, okay, this is my principal ARN, and the policy I'm going to use is the EKS cluster admin policy. As you can see, it's not an IAM entity; these access policies are specific to EKS, managed and supported by Amazon EKS, and have nothing to do with IAM policies.

If you don't mind, I just want to pull up an image here that I think can help explain this with a little more clarity. Sure. I found this image on the Datadog blog, and we'll share a link to it; they have a great post that dives into this new feature. I think it's a nice mapping: each cluster is going to have zero or more access entries; each access entry corresponds to exactly one AWS principal, but it can also be associated with Kubernetes groups. The benefit here is that even if you don't have an IAM identity, or you have an external identity provider and a Kubernetes username, you can still access the cluster using these access entries, because there are multiple authorizers that work with Kubernetes.

Yeah, Kubernetes supports multiple authorizers, and the cluster access management authorizer sits at the bottom of the chain. If you have Kubernetes RBAC rules defined, those are evaluated first, and RBAC only ever allows; there is no explicit deny. If RBAC is evaluated first and it's an
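[Editor's note: a sketch of the two-step grant just described, create an access entry, then associate an EKS-managed access policy. Account ID and role name are hypothetical.]

```shell
# Step 1: register the IAM role as an authenticated principal on the cluster.
aws eks create-access-entry \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/admin

# Step 2: authorize it cluster-wide with the EKS-managed cluster admin policy.
aws eks associate-access-policy \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/admin \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```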
allow and there is nothing like a deny if a arback is evaluated first and if it is an allow condition yes you will be allowed an access if our back actually denies uh and then it just moves on to the next one in the chain that's when um the new cluster access API authentication will be evaluated um if it is not then it will be a deny condition that is actually returned back and you will not have access to the cluster excellent and and I won't uh talk about access policies just yet but but I think you're about to talk about that sheel so I I'll let you uh I'll pull your slides back up here um yeah I guess what we will do is we will just take a break from a presentation here we will just see everything in action and then we'll come back to some of these IM conditions that that we support excellent okay Rodrigo you're you're up here let's let's see how this actually works in practice let's do it uh uh so uh as I mentioned uh as Sai mentioned and Sh mentioned in the beginning if you provision a cluster uh in the old old way fashion what we'll have is uh your cluster created with the config map configuration right so if we go even using the datw CLI that's what I used here and describe the cluster that I have in my environment is a fresh deployed cluster doesn't have a any like customized axess created uh like for now uh we have just the the cluster access config Set uh to config map and if you go to the console you can see the the same configuration in the access tab that's brand new and you can see the access configuration here in the authentication mode set as config map as well and so uh what does it means uh everything is controlled uh get CM uh everything is controlled by this config map here uh DWS Al uh we can see like uh some kubernetes groups uh we can see the the all are Arn uh this is for the the cluster out scaler or the outscaling groups or uh manage node groups to be able to access the ec2 API and control those actions but uh one thing to mention here is 
that we cannot see the cluster creator role that Sai and Sheetal mentioned before. It's completely under the hood: you cannot access it, you cannot change it. That's where people get lost, losing access to that role or making a configuration change that's not compatible with the ConfigMap, and then losing access to the cluster. Through this presentation we'll migrate from this format, with the ConfigMap, to the API, so that in the future you can get rid of this ConfigMap and just manage access to Kubernetes through the AWS API.

The first thing we'll do is change the access config. Where is it... update-cluster-config. We'll change the cluster config now to API_AND_CONFIG_MAP, so the authentication mode we just saw in the configuration will be changed to this format here.

There's an interesting caveat here, Rodrigo. When you move from ConfigMap-only to API_AND_CONFIG_MAP, it says the status is "in progress". What if you wanted to go back? What if you wanted to go back to just using the ConfigMap?

That's actually a good question, Sai. You can transition from CONFIG_MAP to API_AND_CONFIG_MAP to make the modifications and be compliant with the API format, and then from API_AND_CONFIG_MAP to just API. But this is a one-way door decision: you cannot go back from API to API_AND_CONFIG_MAP, and you can't go back to CONFIG_MAP anymore. Once you've made the transition, you won't be able to go back to the previous configuration.

That's why you want to go step by step before API becomes the default: from CONFIG_MAP, change to API_AND_CONFIG_MAP; map the entries from the aws-auth ConfigMap to access entries; create those access entries; and you can still leverage your existing Kubernetes RBAC
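[Editor's note: a sketch of the migration step being performed here. The cluster name is hypothetical; note this change is one-way, as discussed.]

```shell
# Move an existing ConfigMap-only cluster to the hybrid mode.
# One-way door: you cannot switch back to CONFIG_MAP afterwards.
aws eks update-cluster-config \
  --name demo-cluster \
  --access-config authenticationMode=API_AND_CONFIG_MAP
```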
permissions for authorization. Then, once all of the access entries are migrated, you change the config mode to API; that's when you're ready for whenever we flip that change.

So to be clear: if you're making a brand-new cluster and you don't have existing aws-auth ConfigMaps and all of that, just use API and you'll be good. But for most of us, who have existing clusters to migrate, here's the path: go from CONFIG_MAP to API_AND_CONFIG_MAP, and finally to just API. Because, as we mentioned earlier, the ConfigMap approach is deprecated; we're not removing it just yet, but it is deprecated, and you should move to the new access approach. One quick question: when you're on API_AND_CONFIG_MAP, both at the same time, which one takes precedence?

There are two aspects to it, because we use this feature both for authentication and for authorization. When you're using API_AND_CONFIG_MAP, for authentication, if you've created an access entry, the access entry takes precedence. For authorization, if you have RBAC in place and you have also associated an access policy, the RBAC is evaluated first, and if it's an allow, the access policies you've associated are never evaluated, because it's already an allow: you have access to the cluster and you're authorized for the Kubernetes objects specified in your roles. If it's not an allow, it passes to the next authorizer in the chain, which is our cluster access API authorizer, and the access policies you associated with the IAM principal using the associate-access-policy API are evaluated. If that's an allow, then you're authorized for the Kubernetes objects, you
know, as per the permissions that access policy allows.

Excellent, makes sense. Okay, Rodrigo...

That's where the bigger confusion is; customers ask about authentication versus authorization. I'd make it simple to remember: there's an authentication part and an authorization part. For authentication, if you have access entries, the access entry takes precedence. For authorization, it's RBAC first if RBAC exists, else access policies.

Got it, makes sense.

And that's a good point; it's something we should see happening live in the demo, because we just transitioned the access configuration to the EKS API and ConfigMap, and as you remember, on the previous screen we didn't have any access entries; it was empty, it was blank. By the time I enabled the EKS API as a way to authenticate to this cluster, these access entries were already created. We have the admin one, meaning the cluster admin created when I spun up the cluster, the cluster creator one, and the ones dedicated to the nodes that will be managed by this cluster.

So are these new, or were they pulled from the existing ConfigMap?

I'm not sure whether they were pulled from the existing ConfigMap, but for sure, when I switched on the API, it read my cluster configuration and created the permissions that were needed for the cluster to work, let's say like that.

Got it, makes sense.

But if we look at the ConfigMap again, you'll see that we have two different roles for nodes: the default node group and the initial node group, and in the access entries we just have one, this one here. Another way to list these access entries is list-access-entries. Oh, a good one: before we jump into the access
entries, let's take a look at the access policies. Access policies are global to the AWS account, and these are the default ones: we have the admin, cluster admin, edit, and view policies, the access policies Sheetal mentioned earlier, and they're aligned with the Kubernetes RBAC ClusterRoles and ClusterRoleBindings. If we look at list-access-entries, you'll see that I need to specify the cluster whose access entries I want to see, because they're not global to the account; they exist only for this specific cluster, and we'll see a mirror of what we have in the console here. As I mentioned before, we just have the initial node group here; we don't have the default one that exists in the ConfigMap. So if you have any other identities in your ConfigMap, you need to run a process similar to the one I'll be doing here, to map those permissions over to cluster access entries. Now, if we take a look at the associated policy list...

Rodrigo, quick question that just came into chat. Ryan Cloud asked: but then any IAM user or role with enough EKS access can add any other identity and map them to a role in Kubernetes? And is there a way to restrict the IAM roles or users in a big organization?

We're talking about different entities here: one thing is the IAM role, and the other is the Kubernetes roles, the cluster access roles. The answer is: if the AWS entity or identity has admin access to the account, or admin access to the EKS resources, then yes, it will be able to associate, disassociate, and manage all those access policies on the cluster. What those access policies and access entries provide is access to the Kubernetes API, inside the cluster. So we need to separate those concerns: we have the AWS identities that, yes, can control who can access the
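[Editor's note: a sketch of the listing commands being run in the demo. Cluster name and ARN are hypothetical.]

```shell
# Access policies are account-global and EKS-managed:
aws eks list-access-policies

# Access entries are per cluster, so the cluster must be specified:
aws eks list-access-entries --cluster-name demo-cluster

# Policies associated with one principal on one cluster:
aws eks list-associated-access-policies \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/admin
```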
cluster, and then these policies give permissions to users inside the Kubernetes cluster. Does that make sense?

Yeah, I think that makes sense. One way to think about it is that even previously, with the ConfigMap approach, a principal who had access to the cluster was able to create IAM roles and edit the ConfigMap to add access for other entities. The same thing applies here: that principal is going to have the ability to control the access policies. The access policies Rodrigo is showing here are specifically the predefined policies we provide for working with resources within Kubernetes. Hopefully I got that right, Rodrigo?

Yeah, that's right. And Sheetal, please correct me if I'm wrong: there's no way to create custom access policies today?

Not today, no. If customers have a specific custom requirement for these policies, RBAC is still supported, and that's why we didn't think about adding the ability to create custom policies. These are all Amazon EKS managed, and we're going to continue that way unless we really see a need to provide an API for creating custom access policies.

I'm going to drop a link in the chat right now to our EKS best practices guide, which Rodrigo graciously updated for us before this episode. It actually maps these predefined policies to the RBAC policies: for example, the EKS cluster admin policy is cluster-admin in Kubernetes RBAC, whereas the EKS view policy is the view role in RBAC, and so on and so forth. We have four of those access policies today, which are enumerated in our best practices guide.

That's correct. So, moving on: if we list the policies associated with the entry created during cluster creation, this role is my role, the role of the cluster creator, and we can see all the
associated policies. We have the cluster admin policy here: this is the cluster creator, and it has a scope across the entire cluster. We'll also see some commands through the demo where you set the access scope to type namespace and then define the specific namespaces the entity has access to.

Rodrigo, one last thing I want to call out here, something I'd just remembered: while these principals can be IAM users, we have always recommended that the principals be IAM roles, so you have that decoupling of the user from the permissions that are attached. If one day a role is attached to a cluster but the person it was assigned to needs to change, you work with the role; whereas previously, if you had a user attached, you were basically stuck with that single user. With these new cluster access management APIs, whether or not the principal is a user, you can modify it; but we still recommend attaching these permissions to a role. The principal should be a role.

Yeah. And answering Ryan's question: yes, if an entity has enough access to manage the EKS cluster, it will be able to grant access to other entities inside the cluster. The way to manage that is to grant least privilege to the IAM roles that have access to your accounts. What we're going to do right now is delete the access entry for the admin role.

So when you do this, no one's going to be able to access this thing?

That's correct; that user is me. So if I run kubectl get pods here, I don't have access anymore, because I'm not authenticated to this cluster. I can't do anything at the cluster level, but I can still list the access entries, because my user is an admin of this account. So I can still see the access entries here, but I can't manage anything inside Kubernetes, because I just revoked my
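[Editor's note: a sketch of the revocation step just described. Cluster name and ARN are hypothetical.]

```shell
# Revoke a principal's cluster access by deleting its access entry;
# kubectl calls made as this principal will stop authenticating.
aws eks delete-access-entry \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/admin
```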
access.

Right. So typically you'd have a separate IAM admin who manages access to the cluster itself. Rodrigo created this cluster using his account and was added as an admin; now he's removed himself, but because he has admin access to the account, and to the cluster with whatever permissions that grants, he'll be able to create an access entry again. This goes back to the question that was asked earlier.

Yeah, that's correct. This is something that usually happens in customer scenarios: they lose access to the role that created the cluster, and it was the only one with permissions to manage the cluster. So now we can see: I just removed my access, and I'll just add it again. I've created a new access entry for the admin role on my cluster, and I have the access entry here, the ARN, the principal, and the cluster name, so now I'm back to being able to access the cluster. But one thing that will happen here, because I am already authenticated... let me see if we can show that on this screen.

To answer this question: yes, it is much harder to lock yourself out of a cluster now.

Absolutely. Any user that has admin permissions on the account can regain access to the cluster, or re-grant access to the cluster. But one interesting thing that happened just now: I am authenticated to the cluster, but I don't have any access policies associated with my user. Because I was already authenticated, I still had permissions to do things in the cluster; but as soon as I change the user, and you'll see that in the next few commands, I lose this access, because I don't have any access policies attached. I'm just authenticated, and my previous permissions in this cluster were regained; but as soon as I de-authenticate, I will lose
everything.

So what we're going to do right now, first of all, is create a new access entry for something called "power user". We'll be linking this one to the edit RBAC role. Of course, this principal, this role, already exists in my account; that's why this will work. Otherwise it would just show an error: hey, this entity doesn't exist, so we cannot create this access entry. Same thing as the last one, and we just update our kubeconfig to use this new ARN. What happened here is that I can authenticate to the cluster, but I don't have any permissions inside it, because I don't have any associated policies on the cluster. If we list the access entries, the power user is there; but if we list the associated access policies... oh sorry, that one was for the admin; for the power user we don't have anything either. So what we need to do now is associate an access policy with this user. Let me see, where do I have that... this is the one. I'll be associating an access policy with this cluster; the principal ARN is the same one, the power user, and I'll be providing the EKS edit policy, with a cluster-wide access scope.

So here you're actually calling the EKS APIs, associate-access-policy, as an admin, because your terminal, the AWS user in your AWS config, is an admin; but the kubectl commands you're executing run as the power user role, because you created your kubeconfig using that power user role. So for whatever kubectl operations you perform, the IAM principal that's used is the power user.

Yeah, that's perfect. This also ties into the questions that were sent in chat: my user is an admin of the AWS account, but I don't have any access inside the cluster unless I associate something. So now I
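[Editor's note: a sketch of the power-user flow: authenticate kubectl as the role, then grant it authorization. Names and ARNs are hypothetical.]

```shell
# Point kubectl at the cluster, assuming the power-user role for all calls:
aws eks update-kubeconfig \
  --name demo-cluster \
  --role-arn arn:aws:iam::111122223333:role/power-user

# Authentication alone grants nothing; associate a cluster-wide edit policy
# so the role is also authorized:
aws eks associate-access-policy \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/power-user \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=cluster
```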
can have like C City I'll get pods and C C get pods uh across the entire cluster because the admin policy is the the edit poish is cluster wide but I can't get like uh get cluster roles for example because this user doesn't have cluster admin access it just have admin access so it will have admin access across all the name spaces but not for cluster uh specific resources right so to do that I'll need to do uh to update my Cube config for example and remove this rle here so this will fall in the default one that's my user that now should have lost access okay okay so that so that that's really cool to see Rodrigo but but uh kind of building on this we got a great question from from Juan in chat um is it possible to uh attach you know policies rback policies which you know define specific name spaces and so previously with the config map you would map um you know the roles I am roles in the config map to arback policies but with this new approach since we only have these predefined policies H how would you how would you maintain this use case so this is possible to to do uh using kubernets groups so you can just create uh specific uh our uh role and and role biing or cluster rooll cluster roll binding configuration ATT tie that to the kubernetes group and then inform that that when you're creating your access entry it's it's a scenario that we'll be showing so what we will do just like giving some uh some insights is we'll have like a a view a read only user but we'll be providing access to run jobs in the default name space for example and then in the default name space it will be it will be able to uh run jobs but won't be able to delete jobs or run pods or do anything else and also won't be able to access anything in the other name spaces and also theate access policy supports the access scope as well right so you can say specify an access scope and type as a name space right and specify the list of the name spaces and we also support the wild cards as well so 
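The flow just described, create an access entry, authenticate as that principal, then associate a policy, can be sketched with the AWS CLI roughly as follows; the cluster name, account ID, and role names here are placeholders rather than the actual values from the demo:

```shell
# Create an access entry for an existing IAM role
# (fails if the principal doesn't exist, as noted in the demo).
aws eks create-access-entry \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/power-user

# Point kubectl at the cluster as that role; it can now authenticate,
# but has no permissions until a policy is associated.
aws eks update-kubeconfig --name demo-cluster \
  --role-arn arn:aws:iam::111122223333:role/power-user

# Grant the predefined Edit policy cluster-wide...
aws eks associate-access-policy \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/power-user \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=cluster

# ...or scope a View policy to specific namespaces (wildcards allowed).
aws eks associate-access-policy \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/read-only \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
  --access-scope '{"type":"namespace","namespaces":["default","dev-*"]}'

# Inspect what's in place.
aws eks list-access-entries --cluster-name demo-cluster
aws eks list-associated-access-policies \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/power-user
```

The JSON form of --access-scope is used for the namespace-scoped call because the namespace list can hold multiple entries, including wildcards.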
There are multiple ways to do it; let's see which way Rodrigo actually demonstrates. Yeah, I feel like if we just let Rodrigo do the demo, we'd have all of our questions answered. I try, I try, but it's good to have the questions come up earlier. So before we move on: you see, I'm using my admin, the cluster creator, and I don't have any access to the cluster anymore, because I don't have any associated, oh sorry, associated access policy on my role, right? It's an empty list here. So what we are doing now is regaining my cluster-admin permissions on the cluster. Different from the, hold on, different from the Edit policy... oh right, gotta update the kubeconfig; now it should work. Yeah, now it's working. So now I've regained access as my cluster admin, and different from the power user, which has admin access across the namespaces but not to cluster-scoped objects: if we run kubectl get clusterroles now, I'm able to see all those roles that were missing before. So now I have the access that was not possible with the power user that was impersonated before, right? So let's move forward and create a new access entry for, sorry, for a read-only user, right? I already have this entity created in my account, so I have this principal, and it's just read-only access. As of now it doesn't have any associated policies, so if we list the associated policies for the read-only role... sorry, I can't speak and type at the same time, a problem with coordination... so yeah, we don't have the access policies set. So now we will associate an access policy. Let's find the read-only user; this is the read-only user. And answering the question that we had before, I'm restricting the access scope to just the default namespace. I want this user to be able to read everything inside the default namespace, but nothing outside it. So let's just go and run this one. You can see in the output that we have this access scope described here: it's type namespace, and it's tied to just the default namespace, so we don't have anything else. And if we update our kubeconfig with the read-only role that we just impersonated, we can do a kubectl get pods here in the default namespace, but in the kube-system namespace, for example, we can't do anything; it's all forbidden. And now let me just try to run something: I have access to the namespace, but I cannot do anything in it. So, let's see if I have something ready here. I want to run, like, an nginx Pod. Can I do that? Or create a Job? It's not possible, right? So what we will do here is create a new Role. I need to impersonate my admin one... let's see, create role. So let's create a Role called "audit" that will be able to execute create actions, but just on Job resources, right? I'm calling this "audit" because this is a common requirement from customers: "hey, I have an audit team that has read-only access to my cluster, just to gather information and prepare for an audit", for example. So let's go ahead and create this Role, and then create the RoleBinding for this Role, and, as you can see, the RoleBinding ties that to the group "audit", right? So now what we'll be doing is tying that specific Role to the view, the read-only user that we created before. Does that make sense? Yeah. So we'll just update the access entry for that read-only user and tie it to the Kubernetes group "audit" that we just created here with the RoleBinding. I'm not sure this is shown anywhere other than here, but we can see the Kubernetes group tied here to the permissions in the access entry. So now, what can we do? kubectl get... oh, we need to re-impersonate. So, re-impersonating the read-only user, we can get pods: kube-system, no access; default, we have access to read, because that was predefined, but we cannot run any Pods. Okay, but we can create Jobs. So this is what connects custom RBAC, custom permissions inside the Kubernetes cluster, to the access entries and the access policies in the cluster access management way of managing authentication and authorization inside the Kubernetes cluster. But now, hey, I can still read everything: I can get Jobs, I can get Pods and Jobs, I can see them, but I cannot delete them, right? So by no means is this replacing RBAC, and there's no intention of replacing RBAC here. RBAC is still critical if you want to get that fine-grained access control and authorization. This is simply to help tie the link between IAM roles and users and Kubernetes authorization. Exactly, and this is the goal of cluster access management: to facilitate this management. You can do almost everything through the AWS API, but if you have very custom, fine-grained access inside your Kubernetes cluster, you still need to do some tweaks inside your ClusterRoles, or Roles and RoleBindings, then tie those to Kubernetes groups, and then you can link those Kubernetes groups with the access entries. So what I will do now is go back to my admin permissions. And now we can delete that Job, because this one, this principal, has access to everything. And now we can change the cluster config to use just the API, because we have mapped everything: we have our power user, and our read-only user with the "audit" group access to run Jobs and collect information if needed. So now we can just switch that to just the API, and this will take a few seconds; but if we go, for example, to the console here, we can wait for that change to get in place, and we'll be able to see our access entries that we created through the demo, using the read-only, the power-user, and the cluster-admin access, already shown
in the console. So Rodrigo, when that's done, I'm curious to see what the ConfigMap, the aws-auth ConfigMap, looks like once it's switched over to purely API. Oh yeah, this is the funniest part of the demo: we will remove the ConfigMap. Love it. So now that we just changed the authentication mode to the EKS API, we don't need the ConfigMap anymore. So what we can do now is: kubectl -n kube-system delete configmap aws-auth. Done, we don't have that ConfigMap anymore in the cluster, but as you can see, we can still impersonate that read-only user: kubectl get pods works, and get pods -n kube-system is not allowed anymore. We can switch with update-kubeconfig and go to the power user: get pods, run nginx in the default namespace. Sometimes it takes a little bit longer, but maybe it's just my cluster. But yes, you can see that we can authenticate and switch roles without the ConfigMap; the ConfigMap is gone. And this is the goal of cluster access management: we can guide users to get rid of that aws-auth ConfigMap and then create and manage all the access through the AWS API. Go ahead, Sheel. So Rodrigo, one thing, right: if you go to your access entries, you will see an access entry that was also created for a node. Yeah, we did add, basically, a separate one, because Rodrigo might have used a managed node group while creating these nodes; that's where you will see that access entry that's already created for Rodrigo in the console. There is no access policy associated, but there is a separate type field: for all of the IAM principals that you're using for the end users there is STANDARD, and then you will see EC2_LINUX, and then there is a Windows type for Windows nodes. So when you create an access entry with a type like that, you can use it for a node. Because Rodrigo had already created this cluster using the API-and-ConfigMap mode, when he created the managed node groups, the default used was basically an access entry. But if you had created the nodes before you changed the mode to API-and-ConfigMap, and you're using the ConfigMap, just a word of caution: before you delete your aws-auth ConfigMap, make sure you have recycled the nodes to use the new access entries, so that your nodes can still join the cluster. Yeah, yeah, exactly. So if you have any custom access, any custom configuration for your nodes, and that goes for any other identity that you granted access in your aws-auth ConfigMap, you need to map those here. And yes, you can see, for example, that we don't have access policies on the nodes, but we have this group name system:nodes, which is a built-in group inside Kubernetes, the same way we have the "audit" Kubernetes group that we just created through the console, or the CLI. Okay, excellent. Oh, I see you're about to show the output right now? No, that's it, that's it for the demo today. Oh, okay, excellent. So folks that are tuning in, while we still have our experts on the line, if you have any more questions that you want to ask, we've got some really awesome questions coming in here, don't hesitate. Before we let people go, I just want to call out: if you ever want to get involved with the Amazon EKS service team, the developers and product managers, give feedback, and help either shape or see the features that are coming out, you can always do so by checking out the containers roadmap on GitHub. We pretty much document almost all of the features that we're working on out there, and we're happy to accept feedback over there as well. Before we leave our audience today, anything else that you want to leave folks with here, Rodrigo or Sheel? Yeah, I think cluster access management is a game changer for managing identity access to your cluster; it can help improve your overall security footprint on your Amazon EKS and Kubernetes clusters. Yeah, one thing that I would like to also caution: say you delete an IAM principal and you haven't deleted the access entry or access policy. Do not assume it's going to work, because it's mapped to the principal ID and not the ARN. The ARN might appear the same, and you might think it should work, but if you mistakenly deleted an IAM role or an IAM user that you had used to create an access entry, you will have to delete all of those existing access entries and associated policies, and recreate the access entries for the principal you just recreated. That's a great call-out there, great call-out. All right folks, with that, I just want to tell everyone: if you want to see more content like this, more demos, more updates on what we're working on here at AWS, whether it's Amazon EKS or any of our container-based solutions, please subscribe to twitch.tv/aws or Containers from the Couch on YouTube. We appreciate having you here; thank you for joining us, and until next time, bye! Thanks, everyone. [Music]
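Pulling the remaining pieces of the demo together, the audit Role and RoleBinding, the Kubernetes-group link, the authentication-mode switch, and the ConfigMap cleanup might look roughly like this; the cluster name and principal ARN are placeholders, and the Role mirrors what Rodrigo described (create on Jobs only, in the default namespace):

```shell
# Role that can only create Jobs in "default", bound to the "audit" group.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: audit
  namespace: default
rules:
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: audit
  namespace: default
subjects:
- kind: Group
  name: audit
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: audit
  apiGroup: rbac.authorization.k8s.io
EOF

# Tie the Kubernetes group to the read-only principal's access entry.
aws eks update-access-entry \
  --cluster-name demo-cluster \
  --principal-arn arn:aws:iam::111122223333:role/read-only \
  --kubernetes-groups audit

# Once everything is mapped, switch authentication to API only...
aws eks update-cluster-config \
  --name demo-cluster \
  --access-config authenticationMode=API

# ...and the aws-auth ConfigMap can finally go.
kubectl -n kube-system delete configmap aws-auth
```

One caution from the EKS documentation: the authentication mode only moves in one direction (CONFIG_MAP to API_AND_CONFIG_MAP to API), so once a cluster is on API only, the ConfigMap path cannot be re-enabled; recycle any ConfigMap-mapped nodes onto access entries before the final two steps.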
Info
Channel: Containers from the Couch
Views: 3,191
Id: ae25cbV5Lxo
Length: 54min 0sec (3240 seconds)
Published: Fri Feb 23 2024