In this video, we're going to talk about the
relatively new AWS Load Balancer Controller. It replaces the component built into Kubernetes itself that is responsible for creating load balancers for services and ingresses. The AWS Load Balancer Controller creates Application Load Balancers for ingresses and Network Load Balancers for services of type LoadBalancer. By default, when you use this controller to create ingresses, it creates
a separate load balancer for each ingress, but you can use IngressGroup to merge multiple
ingresses into a single load balancer. It also supports two modes: Instance mode, which routes traffic via the node and then to the pod, and IP mode, which routes traffic directly to the pod IP address. The AWS Load Balancer Controller can also be deployed on self-hosted Kubernetes
clusters and not only on EKS. In this video, I'll show you multiple ways to deploy this Controller
to Kubernetes. First of all, we will use the AWS console and plain yaml, then a Helm chart. In the
end, we will convert everything to terraform, including creating an open id connect provider
and deploying the controller to the cluster using the Terraform Helm provider. With the terraform code that
I provide, you can simply update the EKS cluster name and run terraform apply to create everything
from scratch, including VPC. So if you're looking for the fastest way to start, just run terraform
apply. Also, we are going to go over 5 examples. In the first one, we will create a
simple name-based virtual hosting ingress using the Instance mode.
In the second example, I'll show you how to use IngressGroup to merge
multiple ingresses into a single load balancer. Then, in the third example, we will use TLS to securely expose a service to the internet. In the first part, we will issue a certificate from AWS Certificate
Manager and attach it to the ingress. In the second part, we will use the auto-discovery
mechanism to find a matching certificate. In the fourth example, I'll show you how
to automatically create DNS records in Route53 by using an external-dns controller.
And finally, we will create not an ingress but a service of type LoadBalancer using the AWS Load Balancer Controller and a Network Load Balancer. You can find the source code for this
video in my github repository; the link will be in the description. Also, let's connect on
LinkedIn. I have timestamps for each example, so feel free to skip to any section. Alright, let's get
started. The AWS Load Balancer Controller needs permissions to create, at a minimum, the AWS load balancers
themselves. Since we're using EKS, the best and most secure way to access AWS services from
Kubernetes is to use the Open ID connect provider and establish trust between the AWS IAM system
and the Kubernetes RBAC system. I'll show you two ways to get started with IAM roles
for Service Accounts. To better understand what's going on under the hood, first, I'll show you how
to create an Open ID connect provider, IAM policy, and IAM role using the AWS console. Right
after, we will do exactly the same thing using Terraform. The name of the EKS cluster is demo. We will need this name later to configure the aws load balancer controller.
First of all, we need to copy the OpenID Connect provider URL from the EKS page. The next step
will be to use this link to create a provider. You can do it from the IAM section. Click on
Identity providers. Then add a new provider. Select OpenID connect and paste
the URL from the EKS page. You also need to get a thumbprint. For the
audience, you can use sts.amazonaws.com for now; later, we will update this value under IAM role
trust relationships. That's all for this provider; click add provider. Next, we need an IAM policy
with permissions for the aws load balancer controller. The best starting point is to use
the IAM policy provided by the controller project. Go to GitHub and select the version you want to deploy. The latest version is 2.4.2; I'll take that one. Don't use the master
branch because new versions of the load balancer controller may require a different level
of access. Now, under the docs/install folder, you can find the policy. It's the first one,
iam_policy.json. Let's open it and click raw to copy the source. As I said, it's a good
starting point; you can increase the level of security by limiting access to a single
EKS cluster under a condition block, and so on. Let's go back to the AWS console and go to Policies. Then click Create Policy, switch to JSON, and paste the IAM policy. Give it the name AWSLoadBalancerController.
For the first deployment, use the identical version and names to avoid any issues.
Alright, now we have a policy, but we need an identity to attach it to. The next step
is to create an IAM role. It's going to be a web identity type. Then select the provider
that we just created and the same audience. Now we need to attach our IAM
policy to this role. It's the first one, a customer-managed policy. Give the role the name aws-load-balancer-controller. Then
we will use the ARN of this role in Kubernetes to annotate the service account. In a new workflow,
you can update the trust relationships during the creation of the role, or we can do it
later. Let's skip it for now and create the role. The last modification we need to make is to update the trust relationships so that only a specific Kubernetes service account is allowed to use the role. Go to trust relationships and click edit trust policy. Now, this is the most important part, where people make mistakes, so be careful. First, replace aud with sub, then replace the value with the service account. This service account must be in the kube-system namespace in Kubernetes, and the name of the RBAC service account is aws-load-balancer-controller. Only that service account will be able to create aws load balancers. Finally, confirm the changes.
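For reference, the resulting trust policy looks roughly like the sketch below; the account ID and the OIDC provider ID are placeholders that you must replace with your own values, and the condition key is your provider URL followed by :sub.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
        }
      }
    }
  ]
}
```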
The next step is to create exactly the same open id connect provider, IAM policy, and role using Terraform. If you followed along in the console, you will need to delete the objects that you just created. Here we have the terraform
code to create a brand new VPC, internet gateway, subnets, nat, route tables, eks, and a node pool.
Now we need to create an Open Id connect provider. First of all, we need to get a tls certificate
from eks. Then the resource to create a provider. It's the same URL that we copied from the console
last time, the audience, and a thumbprint. The second file will contain terraform code to
create the IAM role and policy and establish trust. The first policy will allow the Kubernetes aws-load-balancer-controller service account from the kube-system namespace to
assume our role. Then let's create an IAM role aws-load-balancer-controller. We
also need to recreate the IAM policy. You can either paste the json content directly into the terraform code, or, if it's a big file, you can use the file function to read it into terraform. Then attach this policy to the role that we just defined. The last thing, just for convenience: you can use an output variable to print the arn of the role that we need to use in Kubernetes. We also need to create a json policy file in the same directory. Let's call it AWSLoadBalancerController.json and then copy and paste the content from the github project.
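Put together, the terraform for the provider, the role, and the policy might look roughly like this sketch; the resource and data source names, and the assumption that the EKS cluster resource is called aws_eks_cluster.eks, are my own choices, so adjust them to your layout.

```hcl
# A sketch only; assumes the EKS cluster is defined as aws_eks_cluster.eks and
# that AWSLoadBalancerController.json sits next to this file.
data "tls_certificate" "eks" {
  url = aws_eks_cluster.eks.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.eks.identity[0].oidc[0].issuer
}

# Allow only the aws-load-balancer-controller service account in kube-system
# to assume the role via the OIDC provider.
data "aws_iam_policy_document" "aws_load_balancer_controller_assume_role" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
    }
  }
}

resource "aws_iam_role" "aws_load_balancer_controller" {
  name               = "aws-load-balancer-controller"
  assume_role_policy = data.aws_iam_policy_document.aws_load_balancer_controller_assume_role.json
}

resource "aws_iam_policy" "aws_load_balancer_controller" {
  name   = "AWSLoadBalancerController"
  policy = file("${path.module}/AWSLoadBalancerController.json")
}

resource "aws_iam_role_policy_attachment" "aws_load_balancer_controller" {
  role       = aws_iam_role.aws_load_balancer_controller.name
  policy_arn = aws_iam_policy.aws_load_balancer_controller.arn
}

# Print the role ARN so it can be copied into the Kubernetes service account annotation.
output "aws_load_balancer_controller_role_arn" {
  value = aws_iam_role.aws_load_balancer_controller.arn
}
```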
It's the same policy that we used before. Now let's save it and go to the
terminal, and initialize the terraform. When it's completed, run terraform apply to
create a provider and role for Kubernetes service account. That's the arn, you can copy it, or I'll
show you later where you can get it. Whenever you create a new EKS cluster with terraform, you need
to update the Kubernetes context with the aws eks command. After that, just a quick check with
kubectl that we can reach the Kubernetes API.
Deploy AWS Load Balancer Controller with YAML
It's time to deploy the AWS load balancer controller to Kubernetes. I'll show you three ways to deploy it: with yaml, Helm, and terraform. You just need to pick one approach that you want to use. I'll start with plain YAML. This approach requires an additional component, cert-manager, that we need to install. On the other hand, Helm doesn't require
it. You can install it with a single kubectl apply command. With kubectl apply, you can use
remote yaml files; just specify the version that you want to install. It's going to create custom
resources for cert-manager as well. Then a quick check that all three pods are in a running state.
Next, we need to download the spec for the load balancer controller. The latest version does not
have a dedicated spec, but we can use this one; we just need to update the image tag. Let's open the
spec and make a few changes. Of course, we need to add an annotation for the service account; you can grab the role arn from the terraform output or the AWS console. The next change is to set the EKS cluster name; you can type it or copy it from the console. And on line 827, let's update the image tag.
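The edited pieces of the spec look roughly like the sketch below; the account ID in the role ARN is a placeholder, and the exact image tag depends on the version you picked.

```yaml
# Service account from the downloaded spec, annotated with the IAM role ARN
# (the account ID is a placeholder).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/aws-load-balancer-controller
# Further down the same file, in the controller Deployment, set the cluster
# name argument and the image tag, roughly like this:
#   args:
#     - --cluster-name=demo
#   image: public.ecr.aws/eks/aws-load-balancer-controller:v2.4.2
```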
Alright, we are ready to deploy the aws load balancer controller to the Kubernetes cluster. Most likely,
you'll get an error the first time you apply because it's trying to create custom
resources and use them at the same time. Just rerun the command to deploy it. This time it
was successfully deployed. I'll show you how to validate deployment later. Next, we will deploy
the AWS load balancer controller with Helm. If you already deployed it with yaml, you can skip this part or delete that deployment. For the Helm deployment, we need to add the helm repository. Then we can
create a Kubernetes service account manually, or later, I'll show you how to automate all
these steps with terraform. It's going to be exactly the same account and the same annotation
with arn of the IAM role. Let's apply it. Now we can deploy the aws load balancer controller.
The important part is to substitute your EKS cluster name, then disable service account
creation, and you must specify the same service account name. Alright, it's done.
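For reference, the same overrides can also be kept in a values file instead of --set flags; this is just a sketch, and it assumes the eks/aws-load-balancer-controller chart from the eks-charts repository.

```yaml
# values.yaml sketch, equivalent to the --set flags described above; the
# role-arn annotation stays on the manually created service account.
clusterName: demo
serviceAccount:
  create: false
  name: aws-load-balancer-controller
```

You would then pass it with -f values.yaml instead of the individual --set flags.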
Deploy AWS Load Balancer Controller with Terraform & HELM
Finally, we will automate the AWS VPC, EKS, and controller creation using terraform. Let's create another terraform file for helm. Terraform has a Helm provider that allows you to deploy Helm charts to Kubernetes. There are a couple of methods to authenticate with the Kubernetes cluster. You can use the Kubernetes context, or, if you use EKS, I would recommend automatically obtaining a token. First is the host; if you created the AWS VPC and EKS using terraform,
you can grab these variables from those terraform resources, or you can just go to aws console and
paste them here. Then the cluster CA certificate. And the command to authenticate with eks. Make sure that you have all the tools installed, such as the aws cli, helm, and terraform.
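A minimal sketch of that provider configuration is below, assuming the EKS cluster resource is called aws_eks_cluster.eks; if you didn't create the cluster with terraform, you can hardcode the endpoint and certificate from the console instead.

```hcl
provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.eks.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.eks.certificate_authority[0].data)

    # Obtain a short-lived token for the cluster; requires the aws cli locally.
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.eks.name]
    }
  }
}
```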
Next, let's deploy the helm chart. To do that, you need to use the helm_release resource. Same
name as before aws-load-balancer-controller, and the repository. You don't need to add helm
repos; you can just use them directly. Specify the chart name and a namespace. Also, I suggest
that you start with the same version and upgrade later if the deployment was successful. Now, we
need to override the same variables as we did with yaml. First is the EKS cluster name. Then the
image, we want to upgrade to the latest version. Kubernetes service account name. And the most
important one is the annotation for the Kubernetes service account. This is exactly the same arn that we
used before but pulled dynamically from the terraform resource. If you deploy the aws load
balancer controller together with VPC and EKS, you need to explicitly depend on those resources.
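Putting it together, the release might look roughly like the sketch below; the chart version, the EKS cluster reference, and the node group and IAM role resource names in depends_on are assumptions based on the earlier files.

```hcl
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"
  version    = "1.4.1" # example chart version; pin one that matches the controller release

  set {
    name  = "clusterName"
    value = aws_eks_cluster.eks.name
  }

  set {
    name  = "image.tag"
    value = "v2.4.2"
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }

  set {
    # dots inside the annotation key must be escaped
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.aws_load_balancer_controller.arn
  }

  depends_on = [
    aws_eks_node_group.nodes,
    aws_iam_role_policy_attachment.aws_load_balancer_controller,
  ]
}
```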
I added a few variables; for example, if you want to rename the EKS cluster, you can do it from
here. Also, you can upgrade the cluster. Now go back to terraform, initialize and apply. As I
said, you can copy the terraform code and simply run terraform apply to create the AWS VPC, the EKS cluster, and the deployed aws load balancer controller. If you just deployed the EKS cluster, you need to use the aws eks command to update your Kubernetes context before you can use it. Let's see if we have a
Helm release. Also, before you deploy your first ingress, I highly recommend running the kubectl logs command to verify that there are no errors in the controller, for example, that it can access the AWS resources. Run this command and keep it running.
Simple Ingress (1-example)
To verify the deployment, let's create the first simple ingress. All examples will use corresponding namespaces, so we need to create them as well. Next is a simple
deployment with echoserver. There are two modes; the instance mode will require a service type of
NodePort. With NodePort service type, kube-proxy will open a port on your worker node instances to
which the ALB can route traffic. And the IP mode, we will talk about it later. Then the ingress
resource. There are a lot of annotations that this ingress controller supports. For example, if
we want to expose our service to the internet, we can do that with an annotation scheme equal
to internet-facing. This will create a public load balancer. You can also create services and
ingresses with private IPs only and use them within your VPC. The next annotation is optional; you can add some aws tags to your Application Load Balancer. Then we need to specify the ingress class of our controller. For the spec, let's use host-based routing and send all the traffic for echo.devopsbyexample.io to our Kubernetes echoserver on port 80.
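A minimal sketch of such an ingress is below; the resource names, namespace, and tags are illustrative, and the echoserver NodePort service is assumed to be called echoserver.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  namespace: 1-example          # illustrative namespace name
  annotations:
    # create a public (internet-facing) application load balancer
    alb.ingress.kubernetes.io/scheme: internet-facing
    # optional AWS tags on the load balancer
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=devops
spec:
  ingressClassName: alb
  rules:
    - host: echo.devopsbyexample.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echoserver   # NodePort service for instance mode
                port:
                  number: 80
```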
Now let's apply our first example. Then switch to the other tab with the logs, just to make sure that it was successfully deployed. You can see the successfully deployed message. You can also
get the ingresses in example 1 namespace. If you see a hostname of the load balancer, it's a good
sign. Now we need to create a CNAME record to test the ingress. My devopsbyexample.io domain is hosted in Google Domains, so I'm going to create a CNAME record there. It does not matter where you host it; even in Route53, you just need to create a CNAME pointing to the AWS load balancer hostname. You can also
go to load balancers in aws to inspect your ALB. Type is an application, which refers
to the application load balancer. Also, it's internet-facing with public
IPs reachable from the internet. And it has some security groups attached. The application load balancer supports security groups; the network load balancer, on the other hand, relies on the security groups attached to the EC2 instances. Under listeners, you can find an http port 80 listener. If you click to view
rules, you can see the exact same spec as in your ingress resource in Kubernetes. Before accessing
the ingress, let's make sure that dns is correct. That's our load balancer with a couple of ips.
You can go to chrome and paste your domain name. Looks like it's working. You can also
find some metadata from echoserver. It can help you to debug.
Multiple Ingresses using Same Load Balancer (2-example)
By default, the aws load balancer controller will create a dedicated application
load balancer for each ingress resource. However, there is a way to combine multiple ingresses and use a single load balancer. In this example, we will create 2 ingresses in
different namespaces and force them to merge ingress rules into a single application load
balancer. I'm going to create two namespaces, example 2 service a and example 2 service b. You can create the ingresses in the same namespace if you want. Then exactly the same deployment with the echoserver image. A similar service; only the namespace is different.
Finally, ingress resources. It's going to be internet-facing. Then the key annotation.
By default, an Ingress doesn't belong to any IngressGroup, and the controller treats it as an "implicit IngressGroup" consisting of the Ingress itself. Ingresses with the same group.name annotation form an "explicit IngressGroup" and are managed by a single aws load balancer. Yes, it's that simple; any ingresses with the same group name will use a single ALB. Next, group.order specifies the order across all Ingresses within the IngressGroup. Rules with a smaller order are evaluated first, and the order starts at 1. Then we use host-based
routing for service-a.devopsbyexample.io. Now let's create a second ingress in example 2 service
b namespace. Same deployment, different namespace. As well as the service. Now, to place this ingress in the same IngressGroup, we use the group.name annotation with the exact
same value. Also, for order, we use 2; this rule will be evaluated after the first ingress.
Different host, service-b.devopsbyexample.io, and a different namespace.
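For reference, the two ingresses look roughly like this sketch; the names, namespaces, and group name are illustrative, and any ingress that sets the same group.name ends up on the same ALB.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-a
  namespace: 2-example-service-a   # illustrative
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: my-group
    alb.ingress.kubernetes.io/group.order: "1"
spec:
  ingressClassName: alb
  rules:
    - host: service-a.devopsbyexample.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echoserver
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-b
  namespace: 2-example-service-b   # different namespace, same group
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: my-group
    alb.ingress.kubernetes.io/group.order: "2"   # evaluated after service-a
spec:
  ingressClassName: alb
  rules:
    - host: service-b.devopsbyexample.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echoserver
                port:
                  number: 80
```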
Let's go ahead and apply it. Run kubectl get ing -A; the -A flag stands for all namespaces. We get two ingresses that use the same load
balancer as you can see by the dns. To test it, we also need to create CNAME records. They
both will point to the same load balancer. And the second hostname service-b. Let's verify that we can resolve each domain name. And use curl or go to the web
browser to verify routing. Alright, service a and service b both work. If
you are building an internet-facing application or even a website, most likely, you
would want to secure it with a TLS certificate. In this third example, I'll show
you how to secure an ingress with TLS using the certificate ARN (amazon resource name) annotation and the auto-discovery mechanism. Dedicated namespace. Same deployment. Same service. And finally, the ingress resource.
To bind the ingress to a certificate, we can use the certificate-arn annotation. Before we can continue, we need to issue a certificate. Go to AWS Certificate Manager and click request a new certificate. Keep the request as a public certificate. And paste a fully qualified domain name. It should
match the ingress host. Then we need to validate that we own this domain. You can select either email or dns validation, which is way faster. Now, if you host your domain in Route53, you'll
get the option to automatically create a CNAME record to verify ownership. If you host your
domain outside AWS, you need to create CNAME manually. Click on the certificate.
You need to copy the CNAME name from this page. Then the CNAME value. Shortly after you create the DNS record,
your certificate should be issued by AWS. The status has changed from pending to issued. I
believe you can also use wildcard certificates, but I almost never issue wildcard
certificates since it's a bad practice. Now go back to the certificate and copy ARN. For this ingress, I also want to redirect
all the requests that come in on port 80 to port 443. This is a very common requirement: instead of simply refusing the connection, we redirect to the secure port. Then specify where you want to redirect, the standard HTTPS port 443. Don't forget to specify the scheme annotation; otherwise, by default, it will create a private load balancer. For the rest of the ingress, I only updated the host to secure-echo.
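A sketch of the TLS-related pieces is below; the names and namespace are illustrative, and the certificate ARN is a placeholder that you replace with the one copied from ACM.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  namespace: 3-example           # illustrative
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    # listen on both ports and redirect plain HTTP to HTTPS
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    # placeholder ARN; use the one from your ACM certificate
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:111111111111:certificate/00000000-0000-0000-0000-000000000000
spec:
  ingressClassName: alb
  rules:
    - host: secure-echo.devopsbyexample.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echoserver
                port:
                  number: 80
```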
Let's go ahead and apply. As always, to be able to use it, we need to create a CNAME record. In AWS, you can find our
application load balancer. We have two listeners, on port 80 and port
443. Also, based on your requirements, you can update the security policy on the load balancer with the ssl-policy annotation. AWS recommends this policy for compatibility, but you can improve security with higher standards. You can see the redirect rule on the port 80 listener. Alright, it's active now; we can verify the dns and test the ingress. First, let's test HTTPS. You can see a lock, which means
the endpoint is secure. Next, let’s test redirect from http to https. It works as well. In the
first ingress, we explicitly specified the ARN of the certificate. But you can also rely
on auto-discovery when you omit the certificate-arn annotation. You just need to specify the HTTPS 443 listen-ports annotation. Discovery will search for the host from the ingress in AWS Certificate Manager. When it finds a matching or wildcard certificate, it will attach it to the load balancer.
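The annotations for the auto-discovery variant might look roughly like this fragment; the certificate-arn annotation is simply omitted, and the rule host (assumed here to be secure-echo-v2.devopsbyexample.io) is what the controller matches against ACM.

```yaml
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # no certificate-arn annotation: the controller discovers the certificate
    # in ACM by matching the rule host, e.g. secure-echo-v2.devopsbyexample.io
```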
If you use the nginx ingress, you know that TLS termination happens at the controller level inside Kubernetes. With the aws load balancer controller, tls termination occurs at the load balancer
level instead. Before we can test it, we also need to create a certificate.
It is exactly the same process. Select the public certificate and
validate it by creating the CNAME records. Here it is important to make sure that the certificate is issued before creating the ingress. Wait till the status transitions to issued. The last CNAME is for the secure-echo-v2 ingress. This ingress points to a different load balancer
since they are not part of the same ingress group. It works. You can check the certificate
from the browser. The certificate is valid. You can also use SSL labs by
Qualys to check the security. Just enter your domain name and click submit. Unfortunately, we got a B because we’re following
the aws recommendation to use their security policy. If you don’t care about compatibility,
you can specify the most secure policy and get A+. In the previous examples, we had to create
CNAME records manually for the load balancer, which was not very convenient. There is a way to automate this task with external-dns. The way it works: you deploy the external-dns controller into Kubernetes, give it the necessary permissions to list and create DNS records, and it automatically watches the ingress resources created in the cluster. When it sees a new ingress, it takes the dns name from the host key and creates a Route53 ALIAS record. The process to grant permissions is similar
to aws-load-balancer-controller. I decided to do it from the console this time, but you
can easily convert it to terraform. In AWS, first, we need to create an IAM policy for
the external-dns controller. This policy grants access to all hosted zones in Route53, but
I would suggest you restrict access to a specific hosted zone by providing the hosted zone id here. The second statement allows listing zones and records. Give it the name ExternalDNSAccess.
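For reference, such a policy looks roughly like the sketch below; if you can, restrict the first Resource entry to your specific hosted zone instead of the wildcard.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
```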
Then we need to create an IAM role for the service account. Similar to the aws load balancer controller, it will be a web identity
type and use the same open id connect provider. Finally, attach the ExternalDNSAccess
policy to this role. Use the same name, external-dns, for the role. And don’t forget to update the
trust relationship as well. We’re going to deploy external-dns
to the kube-system namespace. Next is the Kubernetes deployment. We need a service account with a proper annotation to allow the service account to assume this role. You can grab the role arn from the console. Then a cluster role that will allow external-dns to watch for newly created ingresses. And we need to
bind it to the Kubernetes service account. You need to make sure that you’re using the
same service account in the deployment. You can also customize your deployment with a
few arguments. For example, a source equal to ingress will automate the creation of DNS records from ingress resources. Provider aws, obviously. Then you can specify whether you only want to add or update dns records, or you can configure it to delete records when the ingress is deleted. You can manage DNS for public, private, or both hosted zones. Also, registry txt and txt owner id will allow external-dns to keep track of which records are managed by the external-dns controller. The owner id is just a unique string, for example an EKS cluster identifier, that will be used as the txt value; it can be anything.
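A sketch of those container arguments is below; the owner id value is illustrative, and you can tighten or loosen the policy flag depending on whether you want deletes.

```yaml
# external-dns container arguments described above (owner id is illustrative)
args:
  - --source=ingress            # watch ingress resources for hostnames
  - --provider=aws              # create records in Route53
  - --aws-zone-type=public      # public, private, or omit to manage both
  - --policy=upsert-only        # only add/update; use "sync" to also delete records
  - --registry=txt              # track ownership with TXT records
  - --txt-owner-id=my-eks-identifier
```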
Let's go ahead and deploy it. Make sure that the pod is up and running. You should also check the logs; here, you can see that external-dns was able to
discover my route53 hosted zone, antonputra.com. It is a public zone that I use for the
website. Now let’s test it with ingress. Let’s place it in example 4
namespace. Same deployment as always. But for the service, we will use ClusterIP instead of NodePort. The aws load balancer controller has two modes: instance and ip. IP mode is more efficient
since it removes the additional network hop through the instance. For the ingress, let's use the same internet-facing annotation to get a public ip. And here, we switch from instance mode to ip mode with the target-type annotation. Your network plugin must support it, but if you use EKS, you most likely use the native AWS VPC CNI anyway. Now, this host, api.antonputra.com, will be automatically created in the route53 zone by external-dns.
Let's apply this example. In a few seconds, you should see in the logs of external-dns that it created a few records. In my case, it's 3 records in the antonputra.com
hosted zone. You can also see them in the aws console. You may notice that it's an A record instead of a CNAME, which allows resolving dns directly to an ip. It's done via an alias record pointing to the load balancer hostname. Looks like dns is ready, and we can try to use curl to reach our service
in the Kubernetes cluster. Alright, it works.
Create Service of Type LoadBalancer (5-example)
Aws load balancer controller can not only create application load balancers for ingresses
but is also capable of managing a typical Kubernetes service of type LoadBalancer. For the Kubernetes service, the controller creates a network load balancer that acts on layer 4 of the OSI model. That's going to be example 5. Service resources of type LoadBalancer are also reconciled by the Kubernetes controller built into the cloud-provider component, also called the in-tree controller. The AWS in-tree controller ignores service resources that have the aws-load-balancer-type annotation set to external. Let's also use ip mode for the load
balancer. And expose it to the internet. You can optionally enable proxy protocol v2;
this protocol adds the source ip address to a header on the request. Don't forget to change the service type from ClusterIP to LoadBalancer.
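A sketch of such a service is below; the names, namespace, and ports are illustrative, and the proxy protocol annotation is optional.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echoserver
  namespace: 5-example           # illustrative
  annotations:
    # hand the service to the aws load balancer controller instead of the in-tree controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # optional: pass the client source ip via proxy protocol v2
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: echoserver
  ports:
    - port: 8080
      targetPort: 8080
```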
Let's apply it. We already have the load balancer dns name. You can get the pods with the wide flag to see the pod ip. This ip address will be used in the target
group of the load balancer. You can find the load balancer in the console. The type is network and not application. Also, it is internet-facing. Under the listener, you can find the target group. We
have the same ip address here. You just need to wait a little bit till the target shows up as
healthy. You can use curl on port 8080 to reach the service in Kubernetes. The only issue I found with the AWS load balancer controller that concerns me is that it does not properly support path-based routing. For example, currently, it's impossible to do URL rewrites. In that case, you would need to fall back to the nginx ingress.