How to manage Kubernetes secrets with Azure Key Vault in 5 easy steps

Captions
Hello, and welcome to yet another episode on this YouTube channel. Today we are going to see how to manage Kubernetes secrets with Azure Key Vault. My name is Nilesh, and I blog at handsonarchitect.com. I am also a Microsoft Most Valuable Professional (MVP). So let's get started.

In this video we are going to use the same example from my earlier recordings. It is about a RabbitMQ consumer: we have a producer which publishes a set number of messages onto RabbitMQ, a consumer which consumes from this queue, and KEDA, which auto-scales the consumers based on the number of messages that need to be processed. Today we will extend this example to integrate Azure Key Vault (AKV) for storing secrets.

Currently we store some values as environment variables, pass them into the Kubernetes environment, and they are used by the consumer pod: the ASP.NET Core environment (set to Development), the RabbitMQ hostname, the RabbitMQ username and password, and the batch size. All of these values are stored in clear text, which is not good for a production-like scenario. So we are going to follow the practice recommended by the Twelve-Factor App methodology: secrets such as usernames and passwords should be kept in an external vault. We will use Azure Key Vault to store the username, the password, and the RabbitMQ hostname.

Kubernetes gives us two ways to pass this kind of configuration to pods. The first is ConfigMaps and the second is Kubernetes Secrets. A ConfigMap holds environment-specific configuration, such as the environment name (Development, UAT, SIT, or Production) and other name/value pairs. That is good for plain text, but when we are dealing with secrets it is better to protect the values, so that even if somebody gets access to them they cannot read the plain text directly. That is where Kubernetes Secrets come into the picture, and that is the approach we are going to use: the hostname, username, password, and batch size will be stored as secret values in Azure Key Vault.

What are the prerequisites for running this demo? We need Kubernetes version 1.16 or higher for the steps I am going to show; I will come back later to why. We also need an Azure Key Vault. In the interest of time I have already set up a Key Vault instance and an Azure Container Registry; I will show how these need to be set up, but that can be done outside of this particular video.

So let's get started and see how to integrate Kubernetes secrets with Azure Key Vault. I have deployed a two-node AKS cluster and installed the prerequisites onto it—as I said during the initial discussion, we need RabbitMQ and KEDA. Running `kubectl get nodes` verifies that we have two nodes, running version 1.18.2.

The first thing we need to do for the AKV integration with the AKS cluster is to create the Azure Key Vault. I have a script which does this, and I have already executed it. The script takes parameters such as the subscription name, the resource group name and location for the key vault, the name for the key vault, and the cluster name and resource group name for the AKS cluster. As part of the initial bootstrap it checks whether the resource group exists and creates it if it doesn't, and it does the same for the key vault.

Let's go into the Azure portal and look at the key vault details. I log in with my credentials and open "ngakv", the name of my key vault. Key Vault is a service provided by Microsoft Azure which allows us to store keys, secrets, and certificates, and to define access policies controlling who can access them. So far I have created the four secrets required by my consumer: the batch size, the RabbitMQ hostname, the RabbitMQ password, and the RabbitMQ username. Each of these secrets has a secret identifier, and I can view its versions and its value. Once we have these secrets (or keys, or certificates), we can define who can access them. Since I am the administrator of this subscription, I have access to this key vault, and there are different management operations I can perform—for keys: list, update, create, import, delete, and so on. For secrets I can do similar things, such as list secrets and set
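The bootstrap script described above can be sketched with the az CLI roughly as follows; the resource group name and location here are illustrative placeholders, not the values from the video:

```shell
#!/usr/bin/env bash
# Create the resource group if it does not exist yet
az group create --name rg-akv-demo --location southeastasia

# Create the key vault in that resource group (the video's vault is named "ngakv")
az keyvault create --name ngakv --resource-group rg-akv-demo --location southeastasia

# List the identifiers of the secrets stored in the vault
az keyvault secret list --vault-name ngakv --query "[].id" --output tsv
```

The script in the video additionally accepts the subscription and AKS cluster details as parameters and checks for existence before creating, which the idempotent `az ... create` calls above approximate.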
values for secrets, delete, and recover; and at the certificate level there is again a different set of permissions that can be enabled for the user.

Going back to the script: after the AKV key vault is created, we retrieve some existing values and grant permissions. This is needed because of how the AKS cluster was created. Let me go back to the initialize-AKS-cluster script and see how we created the cluster. In the `az aks create` command we create the cluster with a managed identity and attach the container registry to it. I had created an ACR (Azure Container Registry) and attached it at cluster-creation time; this registry stores the images for my producer and consumer. Since we enabled managed identity, the worker nodes created as part of the cluster are the ones pulling the images from the container registry, and they will also be the ones requesting the secrets. So I need to give that managed identity permission to pull the secrets. In the script I go into the AKS cluster's agent pool, which has the identity assigned to it, retrieve the existing identity, get its client ID, and assign the secret "get" permission to it. As a result of executing this command, the managed identity will be able to pull secrets from the key vault—and only to pull them, nothing more.

That is the part related to accessing secrets from the key vault. The other thing we need to do is create some custom resource definitions, so let's go back to the browser and see how to create those resource definitions in order to connect our Kubernetes cluster to the
key vault.

What we need is the Kubernetes Secrets Store CSI driver. It allows us to mount multiple secrets, keys, or certificates stored in an enterprise-grade external secret store into pods as a volume; the external secret store we are using is Azure Key Vault (AKV). The Container Storage Interface (CSI) itself is a standard for exposing arbitrary block and file storage systems to containerized workloads on orchestration systems like Kubernetes. It is a standard mechanism which allows different vendors to expose storage to a Kubernetes cluster, and the Secrets Store driver implements this interface so that providers can extract secrets and mount them as volumes. That is what we are going to do as part of this exercise.

The specific implementation of the Secrets Store CSI driver for Azure is called the Azure Key Vault provider. It takes some parameters specific to Azure Key Vault; one of them is the tenant ID. We can find the tenant ID by going into Azure Active Directory and looking at the default directory, where it is displayed. We can also do this in a scripted manner: earlier in my script I extracted the tenant ID by running `az account show`, converting the output to JSON, and reading the tenantId property, which gives the same value.

In my Kubernetes manifest files I have a folder for AKV, and there we need to create something called a SecretProviderClass. This object is a custom resource, created with kind SecretProviderClass and API version secrets-store.csi.x-k8s.io/v1alpha1. We need to deploy the custom resource definition first, and that is done using Helm. I have a PowerShell script called deploy-CSI-AKV-provider, so let's go ahead and run it—it is pretty fast, in fact. We can see that it has added the csi-secrets-store-provider-azure repo and deployed it as a Helm chart. If I run `kubectl get crd`, we can see the secretproviderclasses custom resource definition has been added, and if we look at the pods there are now additional pods in the cluster: the Secrets Store CSI driver pods as well as the Azure provider pods.

Once this is installed on the cluster, we need to create an object that describes how the secrets should be fetched from the key vault, and that is done with the SecretProviderClass manifest. We create an object of type SecretProviderClass, give it the name azure-kv-name, and set the provider to azure (I will come to the secretObjects section a little later). One parameter it takes is whether to use pod identity; in this case we are not using pod identity but a managed VM identity, because, as I mentioned earlier, we created the AKS cluster with managed identities. We also need the ID of the user-assigned managed identity, which I obtain by running the initialize-AKV PowerShell script.
That PowerShell script also assigns the read, or "get", permission on the key vault for that managed identity. While the script is running, let's check the key vault. Currently the access policies show my own user plus six other identities that have been granted access to this key vault—those belong to other clusters I had created earlier. Shortly we should see the managed identity of the currently running cluster added to this access policy. The script is now retrieving the existing Azure managed identity assigned to my AKS cluster (the result of creating the cluster with a managed identity), and this client ID is what we need to assign to the userAssignedIdentityID parameter.

Once we have set the value for the user-assigned identity ID, we need to provide the key vault name and the objects: an array of the items we want to pull from that key vault. Here we specify the four objects I have parked in my key vault—the batch size, RabbitMQ password, username, and hostname—and for each one the object type. Since I stored all of these as secrets, I specify the object type as "secret"; if I had stored an item as a key in the key vault this would be "key", and for a certificate the value would be "cert". The objectAlias is the alias we want to give to the particular key or secret we are pulling from the key vault.

The script has finished, so let's go back and check that access has been granted. We should see an additional entry in the access policies—and here it is: the agent pool of my currently running AKS cluster, and you can see it has only the "get" permission for secrets. That is because in my initialize-AKV script I grant only the get permission to that client ID, and that is how I restrict access to this key vault. Now the access part is set up: my Kubernetes cluster's node pool, using this identity, will be able to query the secrets stored in the key vault.

Once those key vault secrets are available as objects in the SecretProviderClass, I want to sync them with a Kubernetes Secret object. That is where the secretObjects section comes into the picture: with the provider set to azure, we list the secret objects we want and set the secretName to akv-secrets. The intention is to create a Kubernetes Secret named akv-secrets of type Opaque and map the objects as keys. The key here is the RabbitMQ hostname, and the objectName refers to the object alias we specified above—this is what gets exposed as a data entry in the akv-secrets secret. I expose all four objects as Kubernetes secret values this way.

Now let's apply this class, which creates the secret provider in our Kubernetes cluster. I navigate to the Kubernetes manifest files' akv folder and run `kubectl apply`, which creates the SecretProviderClass named azure-kv-name. With that created, let's use it in our deployment. I will update the consumer deployment, since the consumer is the one that uses these secrets.
The first thing we do is mount the secrets as a volume, using the CSI driver. We use the secrets-store CSI driver, and the secretProviderClass name must match the name of our SecretProviderClass—azure-kv-name, exactly the same as what we provided in its metadata. What we are saying here is: the secrets should be mounted inline as a volume named secrets-store, using the CSI driver, with azure-kv-name as the provider class. Once the volume is defined, we link it to the pod using volumeMounts, mounting it inside the pod at the mount path /mnt/secrets-store.

The last step is in the pod spec of this deployment: we change the way the environment variables are set. We keep the RabbitMQ hostname as the environment variable name, but we now read its value from a secret; the name of that secret is akv-secrets, matching what we specified as the secretName, and the key is the key we specified in the secretObjects data. That is how we link a particular value inside the secret to the environment variable. If you have used Kubernetes Secrets in the past, this is the usual data element: one Kubernetes Secret can have multiple data entries, and each entry has a key. In our case we have one Kubernetes Secret named akv-secrets with four keys, and we refer to all four using the valueFrom/secretKeyRef construct in the Kubernetes manifest.

Once all these settings are in place, we are good to deploy our application to the Kubernetes cluster. To deploy it I have another PowerShell script, which you might remember from my earlier videos, called deploy-tech-talks.
That script deploys the producer and the consumer manifests. With the producer and consumer deployed, let's also deploy the KEDA autoscaler, which will automatically scale the number of consumer instances. If we now run `kubectl get pods`, we should see the pods for all our application-specific objects: two instances of the producer pod are running. We should also have a consumer—let's try again—but no consumer is running at the moment. This is because the KEDA autoscaler checks whether there are any messages to be processed, and if there are none it scales the consumer down. So to get a consumer we need to pump in some messages. Let's open Octant, find the service IP in the services view, and use it to publish messages onto RabbitMQ. I am using Postman here, and instead of 500 let's populate 5,000 messages. We got status 200 OK, which means the messages were published. Going back to the RabbitMQ UI—we have port forwarding set up which allows us to connect to it—we log in with the username and password and see that there are 5,000 messages ready to be consumed.

We should see the consumers coming up shortly as KEDA scales them out. Hmm—no consumers yet. Let me tear down the tech-talks deployment for a moment and redeploy it; the consumer and producer have been deployed again now. Yes—we can see the consumer has started scaling, and in the RabbitMQ UI there are three consumers up and running.
The consumer count will keep increasing until it hits the limit of 30—as per our configuration the maximum replica count for the autoscaler is 30—so once it reaches 30 it will stop scaling, stabilize at 30 consumers, and all the messages will be consumed. But that's not the point here. The idea of all this was to show how we can pull secrets from Azure Key Vault (AKV) into a Kubernetes Secret object, mount those secrets into the pod, and then use them in the environment variables.

We have gone through that process, so let's go back to the presentation and recap what we did in this demo. We started with the Azure Key Vault provider for the Secrets Store CSI driver. Remember that at the start I said we need Kubernetes version 1.16 or higher: on 1.15 or below the approach to link the key vault is slightly different—there is a concept called Key Vault FlexVolume, which is deprecated, so on 1.15 you would still need the FlexVolume-based approach. With 1.16 and higher we use the Secrets Store CSI driver.

The driver provides several capabilities: mounting secrets, keys, and certificates into the pod at pod-start time using a CSI volume; mounting multiple secret objects into a single volume; security features such as pod identity; and portability through the SecretProviderClass custom resource definition. One good feature is that it also supports Windows containers, which requires Kubernetes 1.18 or higher. We also saw support for syncing with Kubernetes Secrets: we pulled the secrets from the key vault and synced them with Kubernetes Secret objects. And it supports multiple secret store providers—in our case we were using the Azure Key Vault provider.
There is also support for HashiCorp Vault, so if we wanted we could use more than one secret store provider in the same cluster—some secrets stored in Azure Key Vault and some in something like HashiCorp Vault, accessed through different providers via the same CSI driver.

To install the CSI driver we used Helm, on a Kubernetes 1.18.2 cluster; in my case I was using Helm 3.2.4. To add the driver to my cluster I added the repo, referring to it directly by its GitHub URL (shown on screen), then ran `helm repo update` and `helm install`—csi-azure-provider is the name of my release, and the chart is resolved from that repo.

Once the CSI driver is deployed, we create a SecretProviderClass: a custom resource that uses Azure Key Vault-specific parameters to fetch secrets through the CSI driver. For Azure Key Vault there are four modes of connecting to the vault: the first is based on a service principal; the next on pod identity; the third on a VMSS (virtual machine scale set) user-assigned managed identity; and the last on a VMSS system-assigned managed identity, which is the approach we used in this demo. With this approach we set properties like useVMManagedIdentity to true, and we provide the userAssignedIdentityID (the managed identity's client ID), the key vault name, and the tenant ID. Based on these four properties the CSI provider is able to pull the secrets from the key vault. We also went ahead and synced those secrets from the key vault into a Kubernetes Secret object.

Along with syncing the Kubernetes Secret, we enabled the key vault access policy to allow the managed identity to read secrets. And beforehand I had populated the key vault secrets. This can be done through the Azure portal—go into the key vault resource and add secrets—or through the API. For the latter I have a small script called deploy-akv-secrets, an API-based approach that uses the az CLI (command-line interface) to set the secrets: I can uncomment those lines and populate the values, which is how I populated them initially. You can use a similar approach if you want a programmatic way of creating the secrets with the az CLI or another supported API.

Once the secrets are populated, we update the Kubernetes deployment file: the pod spec of the deployment has been updated to mount the AKV secrets as a volume, and the environment variables have been updated to pull their values from the Kubernetes Secret. Looking at the complete picture: the Tech Talks consumer is the Kubernetes object that uses the key vault secret; the pod created as part of the deployment uses a secret named akv-secrets; and that secret is mounted via the SecretProviderClass azure-kv-name, which is populated by the Azure provider (the custom resource definition we deployed using Helm) by connecting to the key vault with the managed identity. That is how it works end to end.

So the five-step process is: in step one we deploy the CSI driver, using Helm. Step two is the key vault: we grant the read (get) permission, for secrets only, and we also populate the AKV secrets.
Then, on the Kubernetes side, step three is to create a SecretProviderClass with the AKV provider, and we also sync the Kubernetes Secrets with the AKV values. Step four is to update our manifest files: we mount the volume using the CSI driver and retrieve the environment variables from the Kubernetes Secret. And finally, step five, we deploy the application to the cluster.

We can also verify this using Octant, a UI that lets us visualize the state of the Kubernetes cluster. If we go to the workloads section and then to the pods—I was hoping one of the consumer pods would still be running, but since all the messages have been consumed, the KEDA autoscaler has scaled all the consumer pods down. So let's create some pods by sending 500 messages; that should trigger auto-scaling again and we should see consumer pods being created. The reason I want to show this is a very nice feature in Octant: for any pod or object, the resource viewer shows all the links for that particular resource, and what I want to show for the consumer pod is its secret reference. There is a slight delay of about one minute while KEDA polls to decide whether the consumers need to be scaled up, so we have to be a little patient here.

Let's go back to the pods—we should see the consumer coming in. I'm not sure why the consumer is not kicking in; usually it starts within a few seconds. The request returned a success status, which means the messages were delivered, and in RabbitMQ we do have 500 messages, so something is wrong with the consumer today. The other thing I can do is tear down and redeploy the application again. Usually this doesn't happen—it looks like something is off—but we have already been able to see the core concepts of this demo, so I'm not going to worry too much about it. Let me quickly redeploy the application, and hopefully we will see consumers coming in. The producer and the consumer have been deleted and redeployed now, and yes, the consumer is up and running. Back in the Octant UI we should see some consumers; if I click on one of them and go into the resource viewer, we can see that the RabbitMQ consumer pod has akv-secrets mounted as a secret. The pods are getting deleted for some reason—some messages were picked up, but some pods got terminated—and now we have no consumers running at the moment. Never mind; in that short span of time we were still able to see that the secret was mounted onto the consumer.

We also saw that we can enhance application security by using an enterprise-grade key vault. What I mean by that is: at the key vault level we can use access policies to control which users have access to the keys, secrets, and certificates, and at what level—get, update, delete, and so on. So we can define very granular access for everything stored in the key vault. We also used managed identities with virtual machine scale sets for the AKS cluster, and we were able to sync the secrets between the key vault and Kubernetes Secrets.

The source code for this whole demo is available in the feature/akv-integration branch of my pd-tech-fest-2019 repository on GitHub. If you wish to follow the steps, all the PowerShell scripts and the steps are documented in the README file. With that I conclude this demo. In case you want to connect with me on social media, these are the different platforms I use; my first name and last name are quite unique, so on most of these platforms, if you search for Nilesh Gule, you will easily be able to find me. Thank you very much—I used this as an opportunity to learn something new. "Code with passion and strive for excellence" is the motto I live by, and I hope this session was useful for understanding how we can integrate Azure Key Vault with Azure Kubernetes Service (AKS). Thank you for watching; if you found this useful, please like and share it with your friends, and don't forget to subscribe to my channel. Thank you once again!
Info
Channel: Nilesh Gule
Views: 3,630
Rating: 5 out of 5
Keywords: Azure Proivder CSI Secret Store Driver, nileshgule, nilesh gule, tutorial, howto, kubernetes secrets tutorial, azure key vault kubernetes secret, akv kubernetes, aks csi driver, aks csi secrets, aks csi driver key vault, akv csi drivers, integrate akv aks, Secure Storage CSI provider for Key Vault, Azure Identity, Secure Store CSI, Azure Key Vault
Id: MHm4IVGVO1w
Length: 48min 38sec (2918 seconds)
Published: Sun Jul 05 2020