Security in Kubernetes - How to do it right!

Captions
In this episode we're here to talk to Nana Janashia about Kubernetes security. Kubernetes is becoming the standard in the DevOps world; teams are adopting it very fast, but rarely with enough time to really understand how it works and what the right ways are to configure and work with it. So in this episode we'll look at some easy and practical ways to make sure we configure and run secure clusters. Stick around until the end and keep an eye on the chat, because we're going to be giving away a copy of Nana's Ultimate Kubernetes Administrator course to one of our attendees. But first, a message from today's sponsor, Datadog.

Hello internet, and welcome to the OWASP DevSlop show. I'm Renan and I'm here with my co-host Nikki. If you can't stay with us until the end, don't worry, the recording will remain available on our channel. The links to everything we cover today will be added to the description box below right after the stream, and make sure to give us a thumbs up and subscribe.

Today's guest is Nana Janashia. Nana is a DevOps engineer and trainer. She started the TechWorld with Nana channel to share her expertise on various DevOps topics and to help DevOps enthusiasts get into the field more easily and with more motivation. Through her channel and courses she has reached millions of people. Her passion is explaining complex topics simply so that everyone can understand them, and she consults with development teams to help them improve their existing processes for CI/CD, containerization, and orchestration. Welcome to the show, Nana.

Hi everyone. Thanks for the introduction, and thanks to everyone attending live for joining in. I'm really excited to talk about my personal favorite subject, Kubernetes, and specifically security in Kubernetes. We can start off with the slides.

First of all, as Renan already mentioned, we can all agree that Kubernetes has established itself pretty much as a standard in the industry. We all know it's a complex tool to set up, but more importantly it's complex to manage and maintain once it's already set up. For the people administering Kubernetes it's a lot of configuration effort, and part of that configuration and setup is the security configuration.

First, let's talk about Kubernetes security in terms of the DevSecOps trend. Kubernetes has a lot of security controls that many people may not be aware of, because by default almost none of these controls are enabled or configured. That gives you the flexibility to configure whatever you want, but it also means that you as an admin actually have to know all these security controls and then decide how to configure what you need for your application setup.

In a presentation dedicated to Kubernetes security it's pretty clear that security in Kubernetes is important. But in reality, in real projects, when teams are setting up a cluster the main focus is not security; it's getting Kubernetes running, deploying the applications, getting them running and making them accessible for the end users. Getting this to work can already be pretty challenging because of the complexity of Kubernetes.
So thinking about security while doing all of this is not very common, and security ends up being an afterthought in the Kubernetes configuration and setup process. Once everything is configured and the application is already running and accessible for users, someone in the team may say: you know what, let's actually check that we're doing security right in our Kubernetes cluster, to make sure we have no vulnerabilities, no security holes and so on. Many projects and teams have dedicated security teams, people who will stop such a cluster from going to production; but in projects where that is not the case, you may actually end up releasing your insecure Kubernetes cluster into production. That is not a rare case.

DevSecOps is basically the trend of highlighting security throughout the whole DevOps process, so that we think about security at the beginning and not after everything is set up and configured. DevSecOps also sounds good in theory, but in practice it's still hard to do, because considering security in every step of the software release lifecycle will, at least initially, slow down the process: the initial setup and deployment of workloads in the cluster. Business-wise there may be pressure on the teams to release fast, to make changes in the cluster and deploy the applications, while security concerns slow them down. But we know that eventually it always pays off to integrate security into the initial setup process.

In terms of Kubernetes specifically, thinking about security may even start before we start setting up the cluster, already during the conceptualization or planning phase. For example, your team may be considering what type of Kubernetes setup to choose: a self-managed cluster, a semi-managed cluster, or a fully managed cluster. When choosing between these options, security should actually be one of the main decision criteria. If you know that the one team member who will be administering the cluster is just getting started with Kubernetes, has never configured a cluster before, and is going to be learning as they set up the whole thing, it would probably be a better idea to go with a fully managed service. That means the cloud platform managing the cluster will take care of the control plane, including its security. The control plane is the most important part of the cluster: if someone gains access to it, they basically have access to the whole cluster, so securing the control plane components is super important. If someone who actually specializes in that can take it over, instead of someone who is just learning Kubernetes, that should be a decision criterion when choosing what type of cluster to run.

So what I have done is prepare a list of some of the most important security topics in Kubernetes, basically showing you how you can secure a Kubernetes cluster, and maybe giving you some aha moments if you're already running a cluster, so you can see where you're still missing some security configuration.
While we go through the list, we'll also take your comments and questions and try to answer them. With that, I'm going to jump to the next slide and get to the list of Kubernetes security best practices.

First of all, the main purpose of Kubernetes, as we know, is running our workloads. We go through the whole effort of configuring and setting up the cluster so we can run applications on it. So one of the main questions is: how do I run my workloads securely on Kubernetes? Security in Kubernetes actually starts before we even deploy the workloads in the cluster; it starts at the phase where we build the images that will later end up in the cluster and run as containers inside the pods.

So what are some of the security issues we may have while building the image? When we think about images, Docker images for example, the images are built in layers: we have the operating system base as a layer, the operating system tools and commands as separate layers, as well as the code that we have written that is going to be built into the image, and all the dependencies and libraries. Each layer may have security issues if you're using a library or tool that has vulnerabilities. So basically, the bigger the image and the more layers it has, including the operating system, our code and all the dependencies, the higher the risk that some vulnerability will slip through, end up in the image, and then end up in the cluster.

Here are some best practices to address that. Use official images, not images from some random repository, but official, approved images. Use, and build yourself, images that are relatively small; usually you don't need a full-blown operating system with all its tools in your containers, they just need enough to run your application. Some companies also have limitations on which image registries developers can use, or they host the external images locally in their private registries and developers can only use those, so everything gets approved.

One very important thing you can do for securing the image is image scanning. Docker, for example, doesn't do it itself; it integrates with a tool called Snyk. There are tools for image scanning that have databases listing vulnerabilities for all the common tools, libraries and dependencies; they go through the image layer by layer and try to identify whether you are using anything with known vulnerabilities. There are two places to use that scanning. One of them is obvious: we scan the image after building it, and if there are no vulnerabilities we push it to the registry. But let's say we have already pushed a bunch of images to the registry; that doesn't mean those images have no vulnerabilities, because vulnerabilities may be discovered later.

Quick question: we have a question in the chat from Smesh about lean images. In terms of originating an image source, like getting it from a vendor or downloading it from Docker Hub, what is your sense on that? How do you source a lean image?
One way, obviously, is to see how many people are using it, because that is a sign of how many people trust it. That's one, in combination with taking an image that comes directly from the vendor and not from some random team or developer. I also look them up on Docker Hub. So it's really judgment.

And then in terms of scanning a lean image: if you have a source and you feel pretty good about it, is scanning enough, or do you go further than that? How do you come to trust that image? The scanning comes once you have done your due diligence: you have checked that the image is from an official source, maybe from the vendor, and for each image you often have a larger version with more operating system tools and a leaner one. Once you have done all that and you do the scanning, you can be pretty confident that the image has fewer vulnerabilities. As I said, you cannot be 100% sure, because some issues may be discovered after you did the scanning; that's why it's important to do the scanning on the registry level as well. Some Docker registries have an integration for these tools, so you can connect them and do regular scans there too.

Two more questions and then I'll let you keep going. One from Israel: can images be cryptographically signed so that you can tell if an image has been tampered with? Yes, that's also one of the features of Docker; you can do that, and many people use it for security checks. I believe it's one of the things Docker itself also suggests.

And the last question, from Pramod: do you recommend any scanner tools? I have used Snyk, as I said, because that's what Docker uses, and I also checked it and it looks pretty good. It integrates with Docker, so you can run it with a Docker command, I think it's docker scan plus the image name or image ID, and it uses Snyk in the background.

Nikki, a follow-up question on that: usually these scanning tools focus more on the operating system side, but what about the application? To know if there's a vulnerability in the application, do you still recommend using something like static code analysis on top of that? Yes, static code analysis should be used in addition, and that would come before you build the image. But Snyk would actually do both: the dependencies on the application level, but also on the operating system tools level. Great.
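To make the scan-before-you-push idea concrete, here is a minimal CI sketch. The talk itself only mentions docker scan with Snyk; this example swaps in the open-source Trivy scanner inside a GitHub-Actions-style pipeline, assumes Trivy is already available on the runner, and uses placeholder image and registry names, so treat it as one possible wiring rather than the speaker's setup.

```yaml
# Hypothetical CI job: build the image, fail the pipeline on HIGH/CRITICAL
# findings, and only push if the scan passes.
name: build-and-scan
on: [push]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Scan image before pushing
        run: trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:${{ github.sha }}
      - name: Push image
        run: docker push registry.example.com/myapp:${{ github.sha }}
```

As mentioned above, a passing scan at build time is not enough on its own, because new CVEs are published after an image has been pushed; that is why registry-level re-scanning also matters.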
All right. So once we have made sure that the images we're building are secure, and we're storing those secure images in the registry, we also have to make sure that those images run securely in the cluster. The main point here is which user runs the process inside the container. In most cases, you could probably say in a hundred percent of cases, there should be no use case for running containers as root. You should have a service user, defined in the image, in the Dockerfile for example, that runs the process. You can also configure this on a pod level: you may have an image that has a service user, like the node user for a Node.js application, but you can escalate the permissions and override that configuration in the pod to run it as root. You shouldn't do that. There can be cases where you need more control of the host or its file system, or you may need to mount the Docker daemon inside, but that creates a really high security risk in the cluster, so none of the containers should be running as root. And I believe that in recent years there was actually a big change where most of the official images got updated to run with service users instead of root.

In Kubernetes we have an additional control for securing how the containers within the pods run, and that is setting limits on the resources containers can use. As I said, like many of the security controls, this is not configured by default. When you define a pod with a simple configuration, by default the container gets assigned some CPU and RAM from the host, but it doesn't have any limits. So if something happens inside and suddenly the application is using lots of RAM or CPU, it can take all the resources from the node. That's why one of the general Kubernetes best practices for proper configuration is to limit the CPU and RAM that each container inside the pod can use; when it hits that limit, that's it, it cannot go over that resource consumption. These limits are set on the container level, in the pod definition, but you also have resource quotas, which are resource limitations per namespace. This is important when you have a multi-tenant cluster where you host multiple projects divided by namespace: you want to make sure that if one project has a bug, it doesn't affect all the other projects in the cluster, so you limit the resources per namespace.
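As a rough illustration of the non-root and resource-limit points, here is a minimal sketch of a pod definition and a namespace quota; the names, user ID, and numbers are placeholders, not values from the talk.

```yaml
# Run the container as a non-root service user and cap its resources.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: team-a
spec:
  securityContext:
    runAsNonRoot: true           # reject the pod if the image would run as root
    runAsUser: 1000              # arbitrary non-root UID for the example
  containers:
    - name: app
      image: registry.example.com/my-app:1.0
      securityContext:
        allowPrivilegeEscalation: false
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m              # throttled beyond this
          memory: 512Mi          # OOM-killed beyond this
---
# Per-namespace cap, so one tenant cannot starve the others.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```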
Quick question around privileged access, from Smesh again, and I'm going to paraphrase a little: there are deployments that require root access, right? If you're running a log collector, Prometheus, things like that, how do you handle deployments that actually need privileged access? Well, you can actually deploy Prometheus without root access; the Prometheus container shouldn't be running as root. Or do you mean root access to the host? Yes, running as root, the permission such a deployment would need to collect logs across namespaces or do some other metric-like activity. Prometheus doesn't actually need complete access to the cluster, and that's why it has limitations: you can define that Prometheus can read metrics at certain endpoints, those endpoints are exposed on HTTP or HTTPS ports, and that's it. It doesn't need direct low-level access; it can just scrape the endpoints, and you give Prometheus access to those endpoints. All right, maybe Prometheus is a bad example, just any deployment that would actually need root access. One common use case that comes to mind, which I have actually used, was when we deployed Jenkins inside the cluster: because of mounting a volume as well as mounting the Docker daemon, I had to run the Jenkins container as root. But if possible you should avoid doing that, and I think there are better alternatives now, even for Jenkins; that is just one of the common use cases I've seen. Yeah, and I think the comment was also about Filebeat, about log collection, but in those cases you can just run it as a sidecar in the same pod, so it has access to the same network stack and even the storage, and you don't necessarily need root for that. So I guess the best answer is: try to avoid this at all costs, if possible. Exactly.

And actually, to transition from that: let's say you have overlooked something, you have forgotten to secure something, and you are running a container or a pod that is not secure. That could obviously happen because of human error, because of forgetting something, or, as we just mentioned, because you didn't have another option. That one insecure container or pod should not be able to affect all the other pods and containers, so we need to make sure that one issue in one application doesn't affect the others, and that is the security configuration on the network level, where pods communicate. By default, when we deploy pods, the Kubernetes network layer works in a way that every pod can talk to any other pod in the cluster. There is no limitation; they talk to each other on HTTP ports, basically unencrypted, and there are no access restrictions. That's how a lot of teams operate their clusters and applications, but it is insecure, because if someone gets access to one of the pods or containers, they will be able to talk to any other pod without any problem. The security best practice here is to limit that, and to reduce inter-pod communication to the minimum required traffic rules: for example, my backend application can talk to the database, but the frontend cannot talk to the database directly. You can configure that using a Kubernetes-native resource called a network policy. A network policy configures the network layer of Kubernetes, which is a third-party plugin that you install and configure when you set up the cluster, and some network plugins do not support network policies, so that could also be one of the decision criteria for which network plugin you choose. You can basically think of network policies as firewalls: you can say all the pods that have a backend label can talk to the database on a certain port, and you can configure incoming and outgoing traffic for any pod. These rules are also namespace-aware, so you can define network policy rules between namespaces or even within a namespace.
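A minimal sketch of the "backend may talk to the database, the frontend may not" rule described above; the namespace, labels and port are assumptions for the example, and it only takes effect on a CNI plugin that supports NetworkPolicy.

```yaml
# Only pods labeled app=backend may open connections to the database pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: database              # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend       # allowed source pods
      ports:
        - protocol: TCP
          port: 5432             # assumed database port
```

Because the database pods are now selected by a policy, any ingress traffic not matching the rule, including traffic from the frontend, is dropped.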
The second point about securing communication between pods: as I said, if you have multi-tenancy in your cluster, for example you decided you don't want to set up ten clusters for ten different projects and have ten different management efforts, so you host one cluster and all the teams deploy their stuff on it, you can isolate those workloads with namespaces. This is not complete isolation, I have to say; it basically just keeps pods from being directly addressed from other namespaces, but it works very well in combination with network policies and other limitations on pod communication.

And finally, this is actually one of my favorite tools, not only for security but generally for working with microservices and deploying them in a cluster: service mesh. A service mesh is a technology that lets microservices talk to each other without putting that communication logic inside the applications themselves; you extract it and put it in the network layer of the cluster, and it adds a bunch of features to this communication. Two of the most important ones a service mesh brings you in terms of security are, first, that it lets you configure mutual TLS, because as I said, by default pods talk to each other unencrypted. The service mesh takes care of generating the certificates, distributing the client and server certificates to each pod, and configuring all of them. The second one is communication rules: just like with network policies, with a service mesh you have an abstraction that lets you define which microservice can talk to which other microservice and what limitations apply. So I personally think a service mesh is a very good way to secure your cluster even more, and with a service mesh you can also secure the communication going outside, from the cluster to external endpoints. For example, if you have a database running outside the cluster, that's an external endpoint for your backend application, and you can also track that, or any third-party APIs you are accessing.

How do you reason about service mesh versus network policies? If you're new to this space and want to understand whether you need network policies or the whole service mesh thing, how do you decide? So service mesh is the technology, and it has implementations; one of the most popular ones I see used a lot in Kubernetes is Istio. Istio is an abstraction over configuring things like network policies: instead of working on that low level, doing all these little tweaks in different places, you have one component, the service mesh, and you configure everything at that level, including the certificates, the traffic rules, the visibility and all that. The traffic rule configurations in Istio actually configure the Envoy proxy it uses in the background, so the configuration is also different: with network policies you're configuring the network plugin running in the cluster, while with the Istio service mesh you're configuring the traffic rules at the Envoy proxy level.
So the configuration you can do with each is different. And network policies only cover the network layer, while with Istio there is much more: the communication between the services, the encryption, and all of that. But one thing you mentioned was the communication rules; what is really the difference between using those communication rules versus the network policies, and when would you choose which one? Generally, if I'm running microservices, especially multiple microservices in a cluster, I would always recommend a service mesh, not even because of security but because of that communication handling, and then security is basically something you get as a bonus. If I have a service mesh, it doesn't make sense to maintain a separate network policy setup, because I can do it in one component, and as you said, I can do more intelligent routing configuration at the Envoy proxy level than with a network policy. Something I have used a lot with microservices was securing the communication coming into the cluster to one of the pods, and also the egress, the outgoing traffic to external endpoints; if you're using a lot of external APIs in your microservices, you can define those traffic rules in the Istio configuration as well.

And when do you decide that Istio is right for a certain organization? Does it depend on the number of microservices, or how complex the communication is? When do you decide, I really need Istio right now? I mean, if you have a setup of just five or six microservices, it already makes sense. There can be different reasons. One example I can give was a project that wanted to add visibility into their microservices communication: they wanted to collect monitoring data about how the microservices were communicating, and also data about the external endpoints they were communicating with. That was a case where they said, instead of just doing the Prometheus configuration on the cluster level, let's deploy Istio and get this visibility or monitoring for each microservice out of the box. And of course once you have Istio, you think, let's maybe use this other feature as well. So I don't really see a reason not to use it when you have a microservices setup in your cluster; the question is rather whether to use another type of service mesh instead of Istio, but generally I'm very much in favor of using a service mesh in Kubernetes.
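For readers who want to see what the mutual TLS and traffic-rule points look like in Istio, here is a small sketch using Istio's PeerAuthentication and AuthorizationPolicy resources; it assumes Istio is installed with sidecar injection enabled, and the namespace, labels, and service account name are placeholders.

```yaml
# Require mTLS between all sidecars in the namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-app
spec:
  mtls:
    mode: STRICT                 # plain-text traffic between workloads is rejected
---
# Only the backend's identity may call the database workload.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: db-allow-backend
  namespace: my-app
spec:
  selector:
    matchLabels:
      app: database
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/my-app/sa/backend"]
```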
Awesome. So we've talked about securing workloads and the communication between the workloads. Another thing that is very important in any type of setup, not only in Kubernetes, is how we manage sensitive data. There are general rules: you shouldn't put secrets in the Git repository with the code, you shouldn't bake them into the images, and you should have external configuration to pass the secrets to the applications when you deploy them. The way it works in Kubernetes is that Kubernetes has a component called Secrets, which is meant for storing sensitive data. The problem is that Kubernetes Secrets are actually stored unencrypted in the etcd store, so they're not secure: if someone has access to etcd, they can look up all the secrets, and if someone has access to the API server or the cluster, they can look up the secret data with a click or a kubectl command. That's the default, and as I said, it's not secure.

To secure it we have a couple of options. One of them is the Kubernetes feature you can enable that encrypts the secrets stored in etcd. This is a Kubernetes-native configuration where you define which algorithm to use to encrypt them; the encryption key itself, however, also has to be stored somewhere the API server can read it, which many organizations don't consider secure enough. So they want something better, and that better solution is usually a secrets management tool; one of the most popular ones used with Kubernetes is Vault. Vault has a really good integration with Kubernetes and lets you do a lot more than just store and encrypt secrets: you also get things like dynamic secrets, secret rotation, and updating all the pods that use those secrets. For managed Kubernetes clusters, or if teams decide to run their cluster on a cloud platform, it's also a very convenient option to use that cloud platform's secrets management service. The secret data you store in Kubernetes could be anything: the actual credentials to a database, credentials to the Docker registries you pull images from, the TLS certificate you use to secure traffic going into the cluster, basically any type of sensitive data you need within the cluster.

I quickly mentioned the etcd store, which stores not only secrets but everything composing the cluster state, which means the data in etcd is also very sensitive. etcd is one of the control plane components, the brain of the cluster: if you get read access to etcd, you basically know everything about the cluster, including the secrets if you haven't encrypted them, and if you get write access to etcd, you can break and mess up everything in the cluster. So restricting access to etcd, especially write access, is very important. Also, because etcd stores the state of your cluster, you have to make regular backups, so that if you lose the cluster you can recover the state from the etcd backup, and of course you have to encrypt those backups as well, not just store them in plain text. Finally, it's a common practice in a lot of projects to run and maintain etcd outside the cluster, so it's not running on the master nodes of Kubernetes but is managed on separate servers with their own security configuration.
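The etcd encryption-at-rest option mentioned above is configured through an EncryptionConfiguration file passed to the API server; here is a minimal sketch. The key value is a placeholder, and this only applies to self-managed control planes where you can set kube-apiserver flags.

```yaml
# Passed to kube-apiserver via --encryption-provider-config.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets                  # encrypt Secret objects before they reach etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder, never commit a real key
      - identity: {}             # fallback so previously stored plain-text secrets stay readable
```

This is exactly the setup where the key management question comes up, which is why many teams move to Vault or a cloud KMS-backed provider instead.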
Now, I mentioned restricting access to the etcd store, but generally, when you have multiple teams deploying to the cluster and an administrator setting all of this up and securing it, you of course have to manage who is accessing the cluster and what they're doing. Kubernetes does not have a concept of users or user groups directly, so you have to create them indirectly. The authentication part, giving access to the cluster, has two different sides. One of them is for human users: as an admin, for example, you want to be able to execute kubectl commands, connect to the cluster and do things. The most common way to indirectly create a user in Kubernetes is client certificates. When you bootstrap Kubernetes with kubeadm, for example, it generates certificates: it creates the CA for the Kubernetes cluster and then creates certificates signed by that CA for all the control plane components, the kubelets, all the worker components and so on, and that's how they communicate with each other. The API server is the main component everybody talks to, including kubectl; when you execute kubectl commands they go to the API server, so it has a server certificate, and you can generate client certificates that will be accepted by the API server. As an admin, when bootstrapping Kubernetes, an admin client certificate gets created automatically and you can configure it for your kubectl tool; if you want to give developers, or senior developers, direct access to the cluster, you can generate client certificates for them, have them signed by the Kubernetes CA, and give them access that way. That is the workaround for creating a user in Kubernetes.

For non-human users, for example a Jenkins pipeline that deploys directly to Kubernetes, you of course don't want to give Jenkins admin access or a lot of permissions; you want to limit them, but you also want Jenkins to have its own identity so you can set specific permissions on it. Non-human users are created with a Kubernetes-native component called a ServiceAccount, which in the background generates a token, and that token is the credential used to access the API server.

Once you have that authentication, a client certificate or a service account, which indirectly creates a user, it still cannot do anything, because you have to assign it permissions. Permissions in Kubernetes are defined on two levels. On one side you have the person administering the cluster, who obviously needs access to the whole cluster, because they will be setting up the namespaces, doing cluster-wide configuration and deployments, and they also need access to the control plane components. These are cluster-wide permissions, and for that there is the ClusterRole, where you define very fine-grained permissions: this gives permission to create, delete, update or list all the Services in the cluster, or volumes, or namespaces, and so on. But if your cluster is used by developer teams, you need permissions for namespace-scoped actions: you're going to say the developer team can use the namespace my-app-one, and I'm going to give the team members access only to that namespace. Or you can say the senior developer in the team will be able to create Pods and Services, but the junior developer will only have read access to Services in that namespace. You can define that very fine-granularly with Roles. Once you have a ClusterRole or Role with a set of permissions, you have to attach it to a specific user: you have to say this Role belongs to Jenkins, or this ClusterRole belongs to the admin, and you do that with the binding components, RoleBinding and ClusterRoleBinding, which simply say this role should be assigned to this user or this service account. That's basically the whole authorization configuration in Kubernetes.
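To ground the ClusterRole/Role discussion, here is a minimal namespace-scoped RBAC sketch for the Jenkins-style non-human user mentioned above; the names, namespace, and verb list are illustrative choices, not a recommended permission set.

```yaml
# Identity for the pipeline.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-deployer
  namespace: my-app
---
# Namespace-scoped permissions only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: my-app
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list"]
---
# Attach the Role to the ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: my-app
subjects:
  - kind: ServiceAccount
    name: jenkins-deployer
    namespace: my-app
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```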
And finally, as a last point: once you have the cluster and access configuration secured, the end purpose of Kubernetes is to make the applications accessible to the end users. So how do you make sure that the applications running in your cluster are securely accessible by end users? There are a couple of rules here. One way to expose any application running in the cluster is using Services, and you have different Service types. One of them is NodePort, which you shouldn't really use; there shouldn't be any reason to use it in production. What NodePort does is open a specific port on every single worker node in the cluster, and that port is then directly accessible from the internet, by everybody. Obviously you don't want that, especially when you have multiple services; you don't want a bunch of ports opening up on your worker nodes and being directly reachable. A better alternative is another Service type called LoadBalancer, which basically puts a virtual IP address in front of the cluster: the load balancer is not inside the cluster but in front of it, and it talks to the services inside. This is an abstraction over NodePort, because ports will still be opened on the worker nodes, but they will not be accessible directly from the internet, only through the load balancer. Now, LoadBalancer, not necessarily security-wise but from a configuration point of view, has a lot of disadvantages, because these load balancers are created by the cloud platform; a load balancer is not a Kubernetes resource, it's a cloud platform resource. When you have ten services, you don't want a bunch of load balancers being created on your platform that you have to manage and configure, and you also don't get a unified access point for your applications. For example, you have one application accessible at bookings.myapp.com and another at events.myapp.com; if you have these subdomains, or access through URL paths, load balancers are going to be super difficult to configure. For that you have Ingress, which has pretty much become the standard in Kubernetes now. With Ingress you not only solve the configuration part of having unified access to the applications in your cluster, it also gives you a lot of security features. For example, you can whitelist or blacklist IP addresses as sources for your cluster applications: you can say this Prometheus monitoring dashboard should only be accessible by people from the admin team, so let's whitelist their IP addresses and nobody else can access it. On the Ingress level you can also configure TLS, so you can make sure that every request coming into your cluster talks to an HTTPS endpoint, and you can do intelligent routing in your Ingress.
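Here is a minimal Ingress sketch for the TLS and IP-allow-list features just described. The hostnames, secret name, and CIDR are placeholders, and the allow-list annotation shown is specific to the NGINX ingress controller; other controllers expose the same idea through different annotations or settings.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bookings
  namespace: my-app
  annotations:
    # Assumes ingress-nginx; restricts access to the given source range.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  tls:
    - hosts:
        - bookings.myapp.com
      secretName: myapp-tls      # Kubernetes Secret holding the certificate and key
  rules:
    - host: bookings.myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bookings
                port:
                  number: 80
```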
And Nana, sorry, just before we move on: there were quite a few questions in the chat, which is great, a lot of people sent in questions, thank you so much; we'll get to as many as we can. I wanted to come back to storing secrets, because with a ConfigMap or anything else you usually put everything in a file and store it in Git, on GitHub and so on, but for secrets that doesn't really work: you can't just go and store secrets on GitHub. There is something that Smesh also mentioned in the comments, sealed secrets; is that something you could talk about? Sealed secrets, I actually don't know that term. Okay, maybe I can do a quick intro. Basically, the way it works is that instead of putting the secret in plain text in your Git repo, you have two components, a client and a server side. The client is just a CLI that encrypts all of your secrets, and on the server side, the cluster, you run a controller or operator, and that operator is the only thing able to decrypt the secret. It uses asymmetric cryptography, so even if you put the encrypted secret in your repo, your cluster is the only thing that can decrypt it. You even mentioned that in etcd everything is stored as plain text by default, it's only base64-encoded, so something like sealed secrets comes in handy here. I'll put a link in the chat so people can read more about it later.
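For reference, this is roughly the shape of the object that the kubeseal CLI produces in the Bitnami Sealed Secrets setup described above; the ciphertext below is a placeholder, and this is the hosts' suggestion rather than something covered in Nana's slides.

```yaml
# Safe to commit to Git: only the controller running in the cluster holds the
# private key needed to turn this back into a regular Secret.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  encryptedData:
    password: AgB3...placeholder-ciphertext...
  template:
    metadata:
      name: db-credentials
      namespace: my-app
```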
Cool. I didn't want to interrupt the line of secrets questions, but I was going to move on from the secrets world and jump to RBAC. Another question, from Will, about RBAC: how about testing the actual permissions? When you give permissions to a user in your cluster, is there an easy way to test and make sure that you're not granting too much? You can test that; if you have given a bunch of permissions and you want visibility into who can do what in the cluster, you can do it with kubectl, I think it's the auth subcommand, kubectl auth can-i. You can issue kubectl commands for a certain user and check: can this user do this specific thing, like create pods in a certain namespace? So you can do this with kubectl commands. But as a general rule, when you have a CI/CD pipeline set up for your cluster, there shouldn't be a reason to give a lot of human users direct access to the cluster. You as an administrator need to be able to access the cluster and administer it, but the developers don't all need direct access: they are not going to be deploying the workloads on the cluster, because the CI/CD pipeline does that, and they may only need access to the applications, so they can hit the endpoint and see that the application was deployed at this version, or check the logs and so on. So direct cluster access should not be necessary for a lot of team members.

And just one last question from me before we move on to Nikki's question: people are asking whether you're going to do videos about all the Kubernetes features you're talking about today. I have videos on some of the topics, like service mesh, and on how to mount secrets in Kubernetes; topics like the whole authentication and authorization setup, setting up the cluster, renewing the certificates, I have all of that in my CKA course. So it's pretty spread out, but in the future I will continue doing videos on Kubernetes, so I'm sure some of these topics will come up.

Great. So there was a question from Israel: with Palo Alto Unit 42's discovery of the Siloscape malware earlier this year, what are your thoughts on the need for anti-malware in a Kubernetes environment? Do you see that as a big problem to solve? Do you need traditional endpoint anti-malware controls in a Kubernetes environment? I would say it probably doesn't depend on the platform level; it would depend on which infrastructure you're running the cluster on: again, depending on which type of cluster setup you use, whether you're self-managing it in your own data center or self-managing it on cloud infrastructure. So it would probably depend on the environment where Kubernetes itself is running, but I personally don't have a lot of experience with that.

Cool. And then there are a few questions centered around reasoning through rolling your own Kubernetes infrastructure versus using a cloud provider. You touched on this at the beginning, but what are your thoughts: should I just go with a cloud provider, or should I try to do this myself? I mean, there are lots of combinations there as well: self-managed on the cloud, self-managed on my own infrastructure, semi-managed where the cloud platform manages my control plane and I manage the infrastructure for my worker nodes, or I can just say, you know what, I don't care about the infrastructure, you do that, cloud provider, you know how to do that, I'm just going to worry about deploying my workloads.
It should basically be very clear to the teams: if there is a startup that has a lot of developers and very few operations people, and they cannot afford the time and the resources to manage a Kubernetes cluster, because that is pretty much a full-time job, then they should probably go with the fully managed service. However, I understand that larger organizations don't want to hand off control over the infrastructure completely to the cloud platform. One reason can be: Kubernetes is great because I can move it to other platforms, so I don't want to be locked into this one, because if we're not happy with it we won't be able to move easily. Another could be: we have our own team of security and operations people, so let them handle it. So it really depends on the project structure. Yeah, and I think it's important to also not say, I'm using a cloud provider so I don't have any security concerns, because that's just not true: you still have security concerns, they're just not the same concerns as locking down the control plane or the etcd service, or managing the actual physical nodes and hardware these things run on. I've seen that in other areas, where people say, oh, Amazon handles it, or Google is doing it, I don't have to do anything. That's true, actually.

Cool, all right, and this is actually the last one. So we have the whole cluster set up, the applications are running inside, we have the access configured; the last point where we can have security issues is at runtime, so basically runtime attacks while the applications are running. There are a lot of different ways to prevent and configure against that, but in Kubernetes one of the main things you need in order to be prepared for this is visibility into the cluster. The thing is, if at some point one of your applications in the cluster suddenly starts behaving abnormally, you will not be able to see that if you have no visibility in the cluster, if you don't see how your applications are running, how many resources they're consuming, who is communicating with whom, and so on. Increasing visibility in the cluster is achieved with monitoring and logging, neither of which is configured out of the box; you have to do that yourself, and for that you also have to use third-party tools. The good thing about monitoring and logging in Kubernetes is that there are tools that were specifically designed for Kubernetes; they integrate with and extend the Kubernetes API, so you have a Kubernetes-native way of configuring them. But of course there are challenges, because in Kubernetes you're usually running distributed applications like microservices, probably not monoliths, and monitoring and logging distributed applications at scale can be challenging, because you have to aggregate all the data from all the different services and have a way to collect it together. Two of the main technologies used for this in Kubernetes are Prometheus, which everyone knows, for monitoring, and Fluent Bit for logging; Fluent Bit was actually created specifically for Kubernetes, that was its main use case.
Now, I have to say that from my experience of working with these two tools, configuring them is not very easy, and setting up the visibility in your cluster the way you want it with these tools is also not very easy. The challenge is not setting up and configuring the tools technically, but first of all deciding what exactly you want to monitor and what kind of data you want to collect from your applications. The main things you probably want to be monitoring are the communication happening between the pods and the resource consumption of the pods, so that your servers are not running out of resources and nothing abnormal is happening in the cluster. And once you have all this data collected from logging and monitoring sources, the data in raw form is not going to be usable for anyone, because Prometheus basically just stores a time series database that you can't read directly. So the second main part of this is not just collecting the data but being able to display it in a way that lets you see any abnormalities, any weird things happening in your cluster. You have a lot of different tools for that: UI dashboards that connect to Prometheus, read the time series data and show it in a nice UI format, and the same for the logging data. The UI dashboard is a very important part of the visibility, and again, configuring this and preparing the data in a way that is easy to read and digest is also not trivial. You have to learn these tools, like Grafana for Prometheus; you have to know how to build the dashboards and write the queries that ask Prometheus for certain data and present it in a form that shows you exactly when something weird is happening in the cluster, when an application is under attack or has an issue that starves the worker node of all its resources, and so on. So the monitoring and logging setup, the data collection as well as the data visualization, is a very big part of creating visibility in the cluster, and that is going to help you, to some degree, avoid and reduce runtime attacks.
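To make the "alert when something abnormal happens" part more tangible, here is a small sketch of a Prometheus alerting-rule file; the metric names come from kube-state-metrics and node_exporter, which are commonly deployed alongside Prometheus in Kubernetes, and the thresholds and durations are arbitrary examples rather than recommendations.

```yaml
groups:
  - name: kubernetes-runtime
    rules:
      - alert: PodRestartingTooOften
        expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} keeps restarting"
      - alert: NodeLowOnMemory
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Node {{ $labels.instance }} has less than 10% memory available"
```

Rules like these feed the same Grafana dashboards and alerting channels discussed above, which is what turns raw time series data into something a human can react to in real time.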
Cool, I'm just going to interrupt you really quick. For our audience: if you would like to enter the giveaway to win the Ultimate Kubernetes Administrator course, Nana's course, the link is in the chat. The initial link was a 404, oops, but we fixed it, so if you click on the last link you can get an idea of what the course is. Drop the hashtag devslop live into the chat; duplicate entries do not count, so please don't flood the chat. That's how you enter, and we will announce the winner in a couple of minutes, towards the end of the discussion.

Meanwhile, a quick question about runtime attacks. Nana, you mentioned scanning the images, which is something you do after you push to a repo, but what about runtime, when the image is actually running as a container? What kind of tools do you recommend for mitigating runtime attacks? I mean, what you mentioned is exactly right: doing all the security things I mentioned up until now, configuring them, obviously reduces the chance that you're going to end up having this type of attack, and the monitoring is basically there in case you missed some of the security configuration and something slipped in, so that you can see, for example, that an application is getting a lot of failed login attempts. You can notice this kind of behavior, where the number of failed attempts spikes, or an application is using a lot of RAM, and so on. That's a good thing as well, because it lets you not only identify issues after they happened but also be notified in real time while it is happening, so you can take action immediately. That's important, because you don't want to be reactive, where something happened and now you're just trying to figure out what exactly happened an hour ago. Yeah, so there are many different components: there's static code analysis for your code itself, then the scanning of the image, especially if you use images from the community, and then on top of that, when your container is running, there are the runtime attacks to worry about, and then we need to implement everything that you mentioned.

There's another question around visualizing what networking is allowed or disallowed in Istio: is there any tooling for that? The only thing I know of is Cilium; if you have the Cilium CNI installed, they have a good interface where you can visualize traffic flows. But does Istio have one? Yes, Istio actually bundles a tool that shows you, in graph form, which pods are connected or talking to which pods, and it also shows you the flow of the traffic in real time, so you see who is talking to whom at this moment, and you can display it on the pod level and also on the service level. I forget the name of the tool, but it comes bundled in the Istio deployment. I think it's called Kiali, or something like that. Exactly.

Okay, so this is another good one from Smesh: any significance for PSS, the Pod Security Standards, or the to-be-deprecated Pod Security Policies, in the context of security? Basically, how PSPs are being deprecated and Pod Security Standards are taking their place, and how you see that as a security best practice and control in Kubernetes. I've actually used PSP before it got deprecated, and the way I see it, whenever Kubernetes offers you control mechanisms for securing your pods and your containers, you should be using them, because they're there for a good reason. One of the things they do is limit access to the host and the resources underneath where the container or pod is running. So I would use these kinds of components.
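Since Pod Security Policies are deprecated, the built-in successor is the Pod Security admission controller driven by namespace labels, which enforces the Pod Security Standards profiles; a minimal sketch, with a placeholder namespace name:

```yaml
# Pods in this namespace must satisfy the "restricted" profile
# (non-root, no privilege escalation, no host access, and so on).
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```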
Cool, all right. I think there are no more questions, but we just wanted to double-check: do you have anything else in terms of slides? That was actually my last slide. Okay, cool, so maybe we can do our giveaway really quick before we wrap up. Let's see if we can get that shared; here we go, very dramatic... Congrats! I think you have to send us an email within an hour after the show, correct, Nana? We'll drop the email address you need to write to in the chat; you have an hour to contact us, and we'll send you your course.

Cool. So, do you have any final thoughts here, Nana, on Kubernetes security, things you want to leave the audience with? The one thing I want to leave the audience with is actually my upcoming roadmap for courses. That was very interesting, because we did a poll on what kind of courses people wanted to see, and surprisingly for me, DevSecOps was at the top, which I didn't expect. So we have a complete DevSecOps course on the roadmap; Kubernetes is just one small part of the whole thing, so it's going to be a more encompassing course, and if anybody is interested in that or any other courses, you can sign up for the waitlist.

Anything else, Renan? Nope, just wanted to thank Nana for the presentation, a lot of good stuff that we discussed today, and in general to thank her for everything she's doing for the community; it's just so huge, and I know so many people who have learned tons from her courses and all of her free content on YouTube. So we all thank you, Nana, for everything you're doing for the community. Thank you, I appreciate it. Yeah, and thanks for being on the show; we really appreciate you spending an hour and more with us to tell everyone about Kubernetes security, so thank you very much. My pleasure.

All right, and next week we are back here with Threats Against Application Identities in the Microsoft Cloud. Our guests are Shanissa Cambrick, who you will know because she's been a co-host on the show many times, and Itan Basari. So please join us next week, on December 12th at 6 PM UTC, and we'll see you then. Everyone have a good rest of your day, good afternoon, good evening, and thanks for joining us. All right, thank you, bye.
Info
Channel: OWASP DevSlop
Views: 11,659
Id: 9OGIaaOYTEA
Length: 70min 19sec (4219 seconds)
Published: Sun Dec 05 2021