AWS Secrets Manager lets you securely retrieve
secrets for use in your Amazon EKS Kubernetes pods. In this video, we will create a secret
in AWS Secrets Manager and then create a simple nginx deployment to demonstrate how
those secrets can be used. We will mount the secret as a file, and we will also expose
it to the pod via an environment variable. To achieve that, we will deploy the Kubernetes
Secrets Store CSI Driver and an AWS component, the AWS Secrets and Configuration
Provider. Working together, they will let you retrieve the secret and provide
it to the pod. To configure access, we will create an OpenID connect provider and associate the
IAM role with the Kubernetes service account. Let's get started. You can find the source code
and the commands in my GitHub repository in the readme file. I'll attach the link to the video.
As always, let's create an admin IAM user with programmatic access only. Mainly we will use
it to create and access the AWS EKS cluster. After you download the credentials, let's go to
the terminal and set up the aws default profile, or you can export environment
variables if you wish. The next step is to create a secret
in AWS secrets manager. You may store database credentials or just simply a single api
token. Actually, this process will work with both the AWS Secrets Manager and Parameter Store.
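If you prefer the terminal over the console, the same secret can be created with the AWS CLI. This is just a sketch; the secret name and values below are placeholders for whatever you pick in the next step:

```shell
# Create a key-value secret in AWS Secrets Manager (name and value are placeholders)
aws secretsmanager create-secret \
  --name prod-service-token \
  --secret-string '{"my-api-token":"some-random-value"}'

# Or store the equivalent value in SSM Parameter Store instead
aws ssm put-parameter \
  --name prod-service-token \
  --type SecureString \
  --value some-random-value
```

Both calls need valid AWS credentials, so run them under the admin profile we just configured.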
Alright, let's select the Other type of secrets and create a key-value pair. Let's use my api token as the name
for the key and just random stuff for the value. On this page, you need to provide
a secret name. You will use it to reference the secret in Kubernetes later
on. Let's call it prod service token. In this example, we won't use Resource
Permissions; we will grant access directly to the IAM role. If you wish to learn
how to use it, I have another video AWS Lambda Secrets Manager Example: 2 Ways to Grant
Access. Also, skip the rotation section. Here you can find code samples, which are useful
when you want to use api calls to get the secret; in our case, the secrets store csi
driver provider will handle it. Now let's open the secret and inspect the Amazon
resource name. AWS will append a random suffix to your secret arn. Later, when you grant access
to the IAM role, we will need to provide this full arn. Now, create an EKS Cluster Using eksctl. You can
configure your cluster by overriding some default values with the cli flags, or you can create
a yaml config. Let's call our EKS my cluster, place it in the cheapest us-east-1 region and
specify the version. You need to provide at least two availability zones. Finally, define
the managed instance group. The managed instance group is created by EKS itself. Go to the
terminal and use eksctl to create a cluster. When you use eksctl to create a cluster, it will
leverage the CloudFormation engine and create at least two stacks: one for the cluster and
one for the instance group. The instance group is created as an autoscaling group, and
you can find it in the EC2 section. Here you have the minimum capacity, which
is mapped to minSize, and the maximum capacity, which is mapped to maxSize. If you
want to autoscale your EKS cluster, you need a third-party tool that can
adjust desired capacity based on the load. I have a lesson if you are interested - EKS
Cluster Auto Scaling. Let's verify that the kubectl client is able to communicate with the
Kubernetes API server. If we get a response, it means everything is good, and we can continue. I don't
spend a lot of time on this section since I assume that you already have a cluster and just want
to integrate secrets manager with Kubernetes. The next step is to create IAM OpenID
Connect Provider for EKS. By the way, if you are not using EKS and deploy Kubernetes
yourself on AWS, you can still follow along; at this point, you just need to create an
IAM role and attach it to the Kubernetes instance group instead of creating
an OpenID connect provider. To start, you need to go to the EKS page and
copy the OpenID Connect provider URL. Now go to IAM service and under
identity providers, add a new one. As you can expect, it will be an OpenID connect
type. Then paste the URL and click Get thumbprint. Under the audience, enter sts.amazonaws.com.
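For context, the trust policy that ends up on the role looks roughly like the sketch below; the account id, region, and OIDC provider id are placeholders, not values from the video:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

It is this aud condition that we will swap for a sub condition when we edit the trust relationship.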
We will update this in the trust section later. Next, we need to create an
IAM policy to read secrets. You can use JSON objects to configure
access. It will be a single statement with effect allow; then, we need to
specify the action get secret value, and identify the resource. Let's go back to the
AWS Secrets Manager and copy the arn of our token. Give it a unique name, for example,
APITokenReadAccess, and click create policy. Now, let's create an IAM
Role for a Kubernetes Service Account. Select the web identity type and find the OpenID
connect provider that we created earlier. Filter policies by customer-managed
and select the APITokenReadAccess policy. Call it api token access. Try to use the same
names when you follow along since sometimes we have references in different places. When you
get this working, then you can come up with your own names. Now, click on the role and go to trust
relationships. Here we want to update trust that only one single Kubernetes service account can
use it and nobody else. I'll attach the link to the official documentation. Here, instead of
the audience (aud), put sub, and update the value. It is going to be a service account
located in the production namespace, and the name of the service account
is the last argument - nginx. Now, we need to associate an IAM Role
with Kubernetes Service Account. Let's create a folder nginx where we are going
to put all the Kubernetes-related files. First of all, let's create a new production
namespace for our demo. This has to match the namespace that you specified in the IAM
role. Then create a Kubernetes service account, put it in the production namespace as well, and
most importantly, add an annotation to allow this service account to assume the role. Let's
copy the ARN of the role from the AWS console. Now go back to the terminal and apply the whole
folder to create a namespace and service account. Let's list the namespaces now; we have a
production namespace created 9 seconds ago. Also, let's check the service accounts; we have
a default service account created every time you create a namespace and an nginx service account.
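The service account manifest with that annotation might look like the following sketch; the names nginx, production, and api-token-access follow the video, while the account id is a placeholder for the ARN you copied:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx
  namespace: production
  annotations:
    # Lets pods that use this service account assume the IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/api-token-access
```

The annotation is the whole trick: the EKS webhook injects the web identity token into any pod that runs under this service account.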
This service account now has access to the secret. The next step is to install the Kubernetes
Secrets Store CSI Driver. It integrates secrets stores with Kubernetes
via a Container Storage Interface (CSI) volume. The Secrets Store CSI driver allows
Kubernetes to mount multiple secrets, keys, and certs stored in enterprise-grade external
secrets stores into their pods as a volume. Once the Volume is attached, the data in it is mounted
into the container's file system. I'll show you both installation ways, using plain
yaml and at the end with the helm chart. I think the best way to follow
along is to clone my GitHub repository or at least open it in the browser. First, we
need to create two custom resource definitions. Secret provider class. And let's
create the second one right away. It is secret provider class pod status.
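Those two terminal steps look roughly like this; the file names are my guess at the repo layout, so adjust the paths to match:

```shell
# Apply both CustomResourceDefinitions
kubectl apply -f secrets-store-csi-driver/secretproviderclasses-crd.yaml
kubectl apply -f secrets-store-csi-driver/secretproviderclassespodstatuses-crd.yaml

# Verify that the CRDs were created
kubectl get crds | grep secrets-store
```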
Let's apply them from the terminal. Let's see if those CRDs were
created; alright, they are here. Let's complete our deployment; it
will include a service account. Then the cluster role; this is an important section. If you want
to use your secrets as environment variables, you need to grant the driver access to Kubernetes secrets. The way
it will work, this store driver will retrieve a secret from the AWS Secrets Manager
and create a Kubernetes secret; then, you will use the secret value in your deployment
to create an environment variable. Next is cluster role binding, to associate the service
account with the RBAC role that we just created. We need to run this secrets store CSI driver on
each node, so let's create a daemonset to do that. Finally, create a CSI driver. Go
over to the terminal and apply. Check the logs to see if there
are any errors in the output. You can use the -l flag to
select pods by the labels. You don't need to run the following two commands.
If you are a helm user, you can add a repo and use helm install; it will give you exactly the same
thing, but without Kubernetes secrets access by default. So if you use helm, you need to
add RBAC permissions if you want to use secrets as environment variables. Out of the box,
if you deploy using helm and default values, you will only be able to mount secrets as files
inside the pod. Something to keep in mind. Next, install AWS Secrets & Configuration
Provider. The AWS provider for the Secrets Store CSI Driver allows you to make secrets stored in
Secrets Manager and parameters stored in Parameter Store appear as files mounted in Kubernetes pods
or use them as environment variables. I don't have a helm for you to deploy this component, so
let's create a folder and place the yaml files there. The first is a service account. We need it for
role-based access control (RBAC), since all these components require some sort of access to the Kubernetes
API server. Then again, the cluster role. Cluster role binding. And a daemonset as well. Let's apply them now. We have a csi secrets
store provider running. Check the logs. Now, let's create a Secret Provider Class.
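A minimal version of the SecretProviderClass described below might look like this sketch; I'm assuming the secret name prod-service-token and the object name aws-secrets, and the apiVersion may differ between driver versions:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets          # referenced later from the pod's volume definition
  namespace: production      # must match the namespace of the workload
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod-service-token"   # name of the secret in Secrets Manager
        objectType: "secretsmanager"
        objectAlias: "api-token"           # file name when mounted into the pod
  # Optional: sync into a Kubernetes secret so it can back an env variable
  secretObjects:
    - secretName: api-token
      type: Opaque
      data:
        - objectName: "api-token"          # must match the alias above
          key: SECRET_TOKEN
```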
This object will map the secret in Secrets Manager to the Kubernetes provider. First of all,
it is a secret provider class. It must be placed in the same namespace where you will deploy
your service, in our case, nginx. Then under the parameters section, we need to specify
the name of the secret. We have a couple of options; we can use either secretsmanager or
ssmparameter. Then we can optionally create an alias. This will be the name of the file that
you will mount to the pod. Since we also want to create an environment variable, we need to
create a Kubernetes secret. api-token will be the name of the Kubernetes secret. SECRET_TOKEN is just
a key within the secret. Let's go and apply it. It's time for the demo. Let's split the screen;
I want to show you the logs from the CSI driver. You can immediately see if you get an error
or if the secret was successfully mounted. Let's create the last Kubernetes object
for this video, deployment with Nginx. Alright, first of all, it's just a simple
deployment based on the open-source Nginx reverse proxy. We don't even need it at all;
we just need a placeholder image. First, to mount this secret as a file, we need to
create a volume using the csi keyword and point it to the provider class; then, let's use
this volume and mount it to mnt api token. Second, let's expose this secret as an environment
variable. Let's call it API token and get the value from the Kubernetes secret api-token; it
was created automatically by the controller. Here it is very important to use our nginx service
account that has proper access. Let's deploy it. In the bottom window, you can see that
the secret was retrieved successfully; if we misconfigured RBAC or IAM policy or role,
you would get an error with permission denied. Now let's go inside that pod and
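The two checks can be sketched like this; the pod name and the mount path are placeholders on my part, since they go by quickly in the video:

```shell
# Open a shell in the pod (-i interactive, -t allocate a TTY)
kubectl exec -it nginx-6bd7c7b9d9-abcde -n production -- /bin/sh

# Inside the pod: print the mounted secret file
cat /mnt/api-token/api-token

# And check that the environment variable is set
env | grep -i token
```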
print the file content first. -it stands for an interactive shell.
Then the pod id and a command to execute. Let's use cat to print the content. You can
see the secret value; it's a key-value from the aws secrets manager. My api token and the
random value that we entered. Now, let's see if we have an environment variable. Alright,
env is accessible as well with our secret. Now, as a bonus. If you are a frequent Kubernetes
user, you need to install kubectx right now. It's a cli tool that simplifies switching between
namespaces and Kubernetes contexts if you have multiple clusters. For example, you want
to get pods in the production namespace. You always need to use the -n flag and provide
a namespace, in this case, production. All the operations in that namespace will require you to
use this flag, which is very annoying sometimes. If you install kubectx, you can use the kubens
command and switch to the production namespace. Then just run kubectl get pods, you'll
get the same result but without the -n flag.
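In commands, the before-and-after looks something like this:

```shell
# Without kubens: every command needs the namespace flag
kubectl get pods -n production

# With kubens: switch the active namespace once, then drop the flag
kubens production
kubectl get pods
```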
clusters quickly as well. To list the namespaces, just run kubens; the production namespace
is active. Please do me a favor, and like this video; it really helps. Thank you for
watching, and I'll see you in the next one.