How to create EKS Cluster using Terraform MODULES (AWS Load Balancer Controller + Autoscaler + IRSA)

Video Statistics and Information

Captions
In this video we're going to create an EKS cluster using open-source Terraform modules. First we will create a VPC from scratch and then provision Kubernetes. I'll show you how to add additional users to EKS by modifying the aws-auth ConfigMap. We will create an IAM role with full access to the Kubernetes API and let users assume that role if they need access to EKS. To automatically scale the EKS cluster, we will deploy Cluster Autoscaler using plain YAML and the kubectl Terraform provider. Finally, we will deploy the AWS Load Balancer Controller using the Helm provider and create a test Ingress resource. I also have another tutorial where I use Terraform resources instead of modules to create an EKS cluster.

First of all, we need to define the AWS Terraform provider. You have multiple ways to authenticate with AWS, and the right one depends on how and where you run Terraform. For example, if you use your laptop to create the EKS cluster, you can simply create a local AWS profile with the aws configure command. If you run Terraform from an EC2 instance, you should create an instance profile with the required IAM policies. It's a best practice to define version constraints for each provider, but since in this video we will be using the Terraform AWS modules, they already come with version constraints; we only need to require the Terraform version itself, along with the kubectl and helm providers. We will discuss later why I chose the kubectl provider instead of the kubernetes provider to deploy Cluster Autoscaler.

To create the AWS VPC, we use the Terraform AWS VPC module at the latest version available at this moment. Let's call it main and provide a CIDR range. For EKS you need at least two availability zones; let's use us-east-1a and us-east-1b. In almost all cases you want to deploy your Kubernetes workers in private subnets with a default route to the NAT gateway. However, if you're going to expose your application to the internet, you will also need public subnets with a default route to the internet gateway. We will need to update the subnet tags later in the tutorial so that the AWS Load Balancer Controller can discover them. You have multiple options for how to deploy the NAT gateway: you can deploy a single NAT gateway in one availability zone, or choose a highly available setup and deploy one NAT gateway per zone. It depends on your budget and requirements; I always prefer to create a single NAT gateway and allocate multiple Elastic IP addresses. Next is DNS support. It's common for many AWS services to require DNS, for example if you want to use the EFS file system in your EKS cluster. EFS is handy in some cases because it supports ReadWriteMany mode and can mount a single volume to multiple Kubernetes pods.
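The video shows these files on screen rather than in the page, so here is a minimal sketch of what the provider requirements and the VPC module call could look like. The CIDR ranges, region, and version constraints are assumptions, not values confirmed by the transcript; attribute names follow the terraform-aws-modules/vpc module.

```hcl
terraform {
  required_version = ">= 1.0"

  # The AWS modules pin their own aws provider constraint; we only need
  # to require the extra providers used later in the video.
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.6" # assumed constraint
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14" # assumed constraint
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.14" # "latest at this moment" in the video; verify yours

  name = "main"
  cidr = "10.0.0.0/16" # example range

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.0.0/19", "10.0.32.0/19"]  # workers live here
  public_subnets  = ["10.0.64.0/19", "10.0.96.0/19"] # internet-facing LBs

  # Single shared NAT gateway; flip one_nat_gateway_per_az for an HA setup.
  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  # DNS support, needed by services such as EFS.
  enable_dns_support   = true
  enable_dns_hostnames = true

  # Added later in the video so the AWS Load Balancer Controller can
  # discover subnets for public and internal load balancers.
  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}
```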
Now we have all the components that we need to create the EKS cluster. Let's call it my-eks and specify the latest version supported by AWS; right now it's 1.23. If you have a bastion host or a VPN, you can enable a private endpoint and use it to access your cluster. Since we just created the VPC and I have neither, I need to enable the public endpoint as well so I can reach the cluster from my laptop. Next is the VPC ID, which you can dynamically pull from the VPC module. You must also provide the subnets where EKS will deploy workers; let's use only the private subnets.

To grant AWS access to the applications running in your EKS cluster, you can either attach an IAM role with the required IAM policies to the nodes, or use a more secure option: enable IAM Roles for Service Accounts (IRSA). That way you can scope an IAM role to a single pod. Then comes the node configuration; for example, you can specify the disk size for each worker.

To run workloads on your Kubernetes cluster, you need to provision instance groups, and you have three options. You can use EKS managed node groups, which is the recommended approach: EKS can perform rolling upgrades for you almost without downtime, provided you properly define Pod Disruption Budget policies. Alternatively, you can use self-managed groups: Terraform creates a launch template with an Auto Scaling group as your node pool and joins the nodes to the cluster, but with this approach you need to maintain the nodes yourself. Finally, you can use a Fargate profile. This option lets you focus purely on your workload while EKS manages the nodes for you; it creates a dedicated node for each of your pods, which can potentially save you money if your Kubernetes cluster is badly mismanaged.

Let's create managed node groups for this example, as shown in the sketch below. First is a standard node group. You can assign custom labels such as role = general; custom labels are helpful in Kubernetes deployment specifications in case you need to create a new node group and migrate your workload there, because built-in labels are tied to a specific node group. The next group is similar, but it uses spot nodes: they're cheaper, but AWS can reclaim them at any time. You can also set taints on a node group.
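A sketch of the EKS module call matching the walkthrough. The cluster name, version, labels, and spot capacity come from the transcript; instance types, sizes, and the taint key are illustrative assumptions. Attribute names follow terraform-aws-modules/eks v18.

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.29" # assumed; verify the current release

  cluster_name    = "my-eks"
  cluster_version = "1.23"

  # No bastion/VPN in this setup, so keep the public endpoint on.
  cluster_endpoint_private_access = false
  cluster_endpoint_public_access  = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # IAM Roles for Service Accounts: scope IAM roles to individual pods.
  enable_irsa = true

  eks_managed_node_group_defaults = {
    disk_size = 50 # GiB per worker; example value
  }

  eks_managed_node_groups = {
    general = {
      desired_size = 1
      min_size     = 1
      max_size     = 10

      labels = {
        role = "general" # custom label referenced by deployments
      }

      instance_types = ["t3.small"] # assumption
      capacity_type  = "ON_DEMAND"
    }

    spot = {
      desired_size = 1
      min_size     = 1
      max_size     = 10

      labels = {
        role = "spot"
      }

      # Keep workloads off spot nodes unless they tolerate eviction.
      taints = [{
        key    = "market" # hypothetical taint key
        value  = "spot"
        effect = "NO_SCHEDULE"
      }]

      instance_types = ["t3.micro"] # assumption
      capacity_type  = "SPOT"
    }
  }
}
```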
That's all for now; let's go to the terminal and run terraform init first and then terraform apply. It usually takes up to 10 minutes to create an EKS cluster. Before you can connect to the cluster, you need to update your Kubernetes context with the aws eks update-kubeconfig command, then run a quick check to verify that we can access EKS.

Next I want to show you how to grant other IAM users and IAM roles access to Kubernetes workloads. Access to EKS is managed with the aws-auth ConfigMap in the kube-system namespace. Initially, only the user that created the cluster can access Kubernetes and modify that ConfigMap. Unless you provision EKS for a personal project, you will most likely need to grant access to your team members. The Terraform module we used to create EKS can manage those permissions on your behalf. You have two options. You can add IAM users directly to the ConfigMap, but then, whenever you need to add someone to the cluster, you have to update aws-auth again, which is not very convenient. The second, much better approach is to grant access to an IAM role just once in the aws-auth ConfigMap, and then simply allow users outside of EKS to assume that role. Since IAM groups are not supported in EKS, this is the preferred option. In this example we create an IAM role with the necessary permissions and allow an IAM user to assume it.

First, let's create an allow-eks-access IAM policy with the eks:DescribeCluster action; this action is needed to initially update the Kubernetes context and get access to the cluster. Next is the IAM role that we will use to access the cluster. Let's call it eks-admin, since we're going to bind it to the Kubernetes system:masters RBAC group, which has full access to the Kubernetes API. Optionally, this module lets you require two-factor authentication, but that's out of scope for this tutorial. Then attach the IAM policy that we just created and, most importantly, define the trusted role ARNs. By specifying the account root, potentially every IAM user in your account could use this role; to actually allow a user to assume it, we still need to attach an additional policy to that user.

The IAM role is ready. Now let's create a test user that gets access to that role. Let's call it user1 and disable creating access keys and a login profile; we will generate them from the UI. Then we add an IAM policy that allows assuming the eks-admin IAM role. Finally, we create an IAM group with that policy and put user1 in the group. Let's go ahead and apply Terraform to create all those IAM entities.

Now let's generate new credentials for user1 and create a local AWS profile. To create the profile, run aws configure and provide the profile name, in our case user1. To let user1 assume the eks-admin IAM role, we need to create another AWS profile that references the role; you need to replace the role ARN with yours. Let's test whether we can assume the eks-admin IAM role. Now we can update the Kubernetes config to use the eks-admin role. If you try to access EKS right now, you'll get an error saying "You must be logged in to the server (Unauthorized)".

To add the eks-admin role to the EKS cluster, we need to update the aws-auth ConfigMap. You also need to authorize Terraform to access the Kubernetes API and modify aws-auth; to do that, define the Terraform kubernetes provider. To authenticate with the cluster, you can either use a token, which has an expiration time, or an exec block that retrieves a fresh token on each Terraform run. Now you can run Terraform, and then check whether you can access the cluster using the eks-admin role. Since we mapped eks-admin to the Kubernetes system:masters RBAC group, we have full access to the Kubernetes API. Suppose you want to grant read-only access to the cluster, for example for your developers: you can create a custom Kubernetes RBAC group and map it to an IAM role.
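A sketch of the IAM wiring described above: the DescribeCluster policy, the eks-admin role trusting the account root, the test user and group, and the kubernetes provider that lets Terraform manage aws-auth. Names such as user1 and eks-admin come from the video; module version pins are assumptions, and module paths follow terraform-aws-modules/iam v5 and eks v18.

```hcl
data "aws_caller_identity" "current" {}

# Needed to run `aws eks update-kubeconfig` and reach the cluster.
resource "aws_iam_policy" "eks_access" {
  name = "allow-eks-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["eks:DescribeCluster"]
      Resource = "*"
    }]
  })
}

module "eks_admin_role" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role"
  version = "~> 5.3" # assumed

  role_name         = "eks-admin"
  create_role       = true
  role_requires_mfa = false # two-factor auth is out of scope here

  custom_role_policy_arns = [aws_iam_policy.eks_access.arn]

  # Trusting the account root means any IAM user *could* use this role,
  # but each user still needs an explicit sts:AssumeRole policy.
  trusted_role_arns = [
    "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
  ]
}

resource "aws_iam_user" "user1" {
  name = "user1" # access keys/login profile created from the UI instead
}

resource "aws_iam_policy" "assume_eks_admin" {
  name = "allow-assume-eks-admin"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["sts:AssumeRole"]
      Resource = module.eks_admin_role.iam_role_arn
    }]
  })
}

module "eks_admins_group" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-group-with-policies"
  version = "~> 5.3"

  name                     = "eks-admins"
  group_users              = [aws_iam_user.user1.name]
  custom_group_policy_arns = [aws_iam_policy.assume_eks_admin.arn]
}

# Authorize Terraform itself against the Kubernetes API; the exec block
# fetches a fresh token on every run instead of using an expiring one.
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}

# Back in module "eks", the role is then mapped to system:masters:
#   manage_aws_auth_configmap = true
#   aws_auth_roles = [{
#     rolearn  = module.eks_admin_role.iam_role_arn
#     username = "eks-admin"
#     groups   = ["system:masters"]
#   }]
```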
One of the many reasons we choose Kubernetes is that it can automatically scale based on load. To autoscale a Kubernetes cluster, you need to deploy an additional component, and here again you have at least two options. You can deploy Karpenter, which creates Kubernetes nodes directly from EC2 instances and can select the appropriate EC2 instance type based on your workload; I have a video dedicated to Karpenter if you want to learn more. The second option is Cluster Autoscaler, which adjusts the desired size of your Auto Scaling groups based on load. In my opinion, Karpenter is the more efficient way to scale Kubernetes because it's not tied to Auto Scaling groups; it sits somewhere between Cluster Autoscaler and a Fargate profile. Since I already have a dedicated tutorial for Karpenter, let's deploy Cluster Autoscaler in this video.

We have already created an OpenID Connect provider to enable IAM Roles for Service Accounts, so now we can use another Terraform module, iam-role-for-service-accounts-eks, to create an IAM role for Cluster Autoscaler; it needs AWS permissions to access and modify Auto Scaling groups. Let's call this role cluster-autoscaler. Then we need to specify the Kubernetes namespace and the service account name under which Cluster Autoscaler will run.

Now let's deploy the autoscaler to Kubernetes. We're going to use Helm next for the AWS Load Balancer Controller, so to show you other options, I'll use plain YAML for Cluster Autoscaler. For YAML, you can use the kubernetes provider that we have already defined, or the kubectl provider. With the kubernetes provider there is currently no way to wait until EKS is provisioned before applying YAML; you would need to split your workflow into two parts: first create the cluster, then run Terraform again to deploy the autoscaler. The kubectl provider, on the other hand, can wait until EKS is ready and apply the YAML in a single workflow. When deploying the autoscaler, you should preferably match the Cluster Autoscaler version to the EKS version. Go back to the terminal, apply Terraform, and verify that the autoscaler is running. To test it, let's create an nginx deployment. In a separate terminal you can watch the autoscaler logs, just to make sure there are no errors. Now apply the nginx Kubernetes deployment; in a few seconds you should get a few more nodes.

Finally, let's deploy the AWS Load Balancer Controller to the EKS cluster. You can use it to create Ingresses as well as Services of type LoadBalancer: for an Ingress the controller creates an Application Load Balancer, and for a Service it creates a Network Load Balancer. I also have a detailed tutorial with a bunch of examples of how to use this controller; in this video we're going to deploy it with Helm and quickly verify that we can create an Ingress. Since we're deploying the controller with Helm, we need to define the Terraform helm provider first. Similar to Cluster Autoscaler, we create an IAM role for the controller with permissions to create and manage AWS load balancers, and we deploy it to the same kube-system namespace. Then comes the Helm release: by default it creates two replicas, but for the demo a single one is enough. You also need to specify the EKS cluster name and the Kubernetes service account name, and provide the annotation that allows the service account to assume the AWS IAM role.

The Load Balancer Controller uses tags to discover the subnets in which it can create load balancers, so we also need to update the Terraform VPC module to include them: it uses the kubernetes.io/role/elb tag for public load balancers that expose services to the internet, and kubernetes.io/role/internal-elb for private load balancers that expose services only within your VPC. The last change we need to make to our EKS cluster is to allow access from the EKS control plane to the webhook port of the AWS Load Balancer Controller. We're done with Terraform; now let's apply and check that the controller is running. You can watch its logs with the following command. To test it, let's create an echo-server deployment with an Ingress, then apply the YAML. To make the Ingress work, get the Application Load Balancer DNS name and create a CNAME record with your DNS hosting provider. In a few minutes you can try to access your Ingress. Thank you for watching, and I'll see you in the next video.
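To round out the walkthrough, here is a condensed sketch of the scaling and ingress pieces: the IRSA roles for Cluster Autoscaler and the AWS Load Balancer Controller, the helm provider, and the Helm release. The kubectl provider used for the autoscaler YAML is configured the same way as the kubernetes provider above (host, CA certificate, exec block). Chart and module versions are assumptions; attribute names follow terraform-aws-modules/iam v5 and the eks-charts repository.

```hcl
# IRSA role for Cluster Autoscaler: permission to modify Auto Scaling groups.
module "cluster_autoscaler_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.3" # assumed

  role_name                        = "cluster-autoscaler"
  attach_cluster_autoscaler_policy = true
  cluster_autoscaler_cluster_ids   = [module.eks.cluster_id]

  oidc_providers = {
    this = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:cluster-autoscaler"]
    }
  }
}

# IRSA role for the AWS Load Balancer Controller.
module "lb_controller_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.3"

  role_name                              = "aws-load-balancer-controller"
  attach_load_balancer_controller_policy = true

  oidc_providers = {
    this = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
    }
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
    }
  }
}

resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"
  version    = "1.4.4" # assumed chart version

  set {
    name  = "replicaCount"
    value = 1 # chart default is 2; one is enough for the demo
  }

  set {
    name  = "clusterName"
    value = module.eks.cluster_id
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }

  # Lets the service account assume the IRSA role above.
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.lb_controller_irsa.iam_role_arn
  }
}

# Back in module "eks", the control plane also needs access to the
# controller's webhook port:
#   node_security_group_additional_rules = {
#     ingress_allow_access_from_control_plane = {
#       type                          = "ingress"
#       protocol                      = "tcp"
#       from_port                     = 9443
#       to_port                       = 9443
#       source_cluster_security_group = true
#       description                   = "Control plane to LB controller webhook"
#     }
#   }
```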
Info
Channel: Anton Putra
Views: 28,258
Keywords: aws eks, terraform eks, anton putra, eks, create eks cluster aws, create eks cluster aws using terraform, create eks cluster on aws using console, create aws eks cluster, aws eks cluster tutorial, eks cluster tutorial, aws, devops, sre, kubernetes, k8s, aws kubernetes, aws kubernetes tutorial for beginners, aws kubernetes tutorial, aws managed kubernetes, aws k8s, eks setup in aws, eks setup, eks security, aws eks best practices, eks best practices, amazon web services, gke
Id: kRKmcYC71J4
Length: 15min 32sec (932 seconds)
Published: Wed Sep 07 2022