NestJS Microservices | Deploy on AWS EKS & Setup a CI/CD Pipeline

Video Statistics and Information

Captions
Hey guys, today I'm going to show you how we can set up a CI/CD pipeline on AWS, as well as set up a Kubernetes cluster there and deploy a Helm chart. We're also going to expose our Kubernetes cluster externally using an Application Load Balancer configured inside of AWS. The project we're following along with in this video is from my NestJS microservices Udemy course, so if you'd like access to that and the source code, I'll leave a link in the description. However, everything we go over in this video is very generic, so you can apply this CI/CD pipeline to your own Docker images and this Kubernetes setup to any of your Helm charts.

Let's look at how we can run our microservices on AWS. We're going to use Amazon EKS to run our Kubernetes cluster, as well as expose an external load balancer so we can access our NestJS app. We're also going to set up a build pipeline so that our Docker images are rebuilt and pushed to an AWS image repository whenever there is a new commit to our GitHub repository.

To get started, head to aws.amazon.com, where you can sign in if you have an account, or follow the steps to create a new one. Signing up for AWS is free; however, actually running the EKS cluster will cost up to 10 cents per hour, and we're also charged for the underlying compute, meaning the instances we use. We're going to use t2.micro instances in this tutorial, which is a free-tier instance type, so we can set this cluster up really cheaply and experiment with it. You will, however, need to associate billing details with your account.

After you finish signing up, you should land in the AWS Management Console. We'll start by setting up an image repository so we have a place to push our built Docker images to. Search for ECR, which stands for Elastic Container Registry, a fully managed Docker container registry. At ECR, click Get Started to create our repositories. We'll create private repositories, so that only we and our Kubernetes cluster can pull these images, with one repository per microservice. Start with the reservations microservice and click Create Repository, then repeat the same process for the three other microservices: auth, notifications, and payments.

Now that we have repositories to push our Docker images to, let's try this locally on the command line to make sure we can authenticate and push our images correctly. We'll need the AWS CLI installed to easily interact with AWS from the command line; if you don't already have it, go to aws.amazon.com/cli, click Getting Started, and follow the installation instructions for your operating system.

After installing the AWS CLI, run `aws configure` to set up new credentials. To get credentials, go back to the console, click your username in the top-right corner, then click Security Credentials. Scroll down to Access Keys and click Create Access Key. We'll continue using a root access key here; if you'd like finer-grained control over permissions, create a new user in IAM, sign in as that user, and create an access key just for it. In our case we want our root user to be able to do anything with this access key, because we're using it ourselves. Grab the access key and paste it into the CLI, then copy the secret access key and paste that in as well. Feel free to select whatever default region you'd like; I'm going to use us-east-1.

With the AWS CLI configured, go back to ECR, click on any one of the repositories, and in the top right click View Push Commands to see the commands to build and push our image. Copy the first command, which authenticates the CLI with ECR by calling the AWS CLI and piping the result to `docker login`; make sure Docker is running on your system, then paste the command in to log in. After logging in, cd into the apps folder and then into reservations. Copy the next command to build the Docker image and paste it in, with one modification: the path to the Dockerfile is in the current directory, but we want the build context to be the root of our project. Once the image has finished building, copy and paste the command to tag the image with our repository name, and finally run the last command, which pushes the Docker image up to our ECR repository. Once the push completes, refresh the browser and you should see the reservations image with the tag latest.

So now we can push images to ECR. Next, let's set up a CI/CD pipeline to automate this process for the remaining microservices and ensure we always have the latest code in these Docker images. We're going to define a list of steps for automatically building our Docker images and pushing them to ECR. To do this, we'll define the build steps in a new file called buildspec.yml.
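Before we automate it, here's a recap of that manual sequence as shell commands; the account ID (111111111111) is a placeholder for your own, and the region is assumed to be us-east-1:

```shell
# Authenticate Docker with ECR (account ID and region are placeholders)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-1.amazonaws.com

# Build from the project root, pointing at the service's Dockerfile
docker build -t reservations -f apps/reservations/Dockerfile .

# Tag the image with the repository URI and push it up
docker tag reservations:latest 111111111111.dkr.ecr.us-east-1.amazonaws.com/reservations:latest
docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/reservations:latest
```

The ECR console's View Push Commands button generates this exact sequence with your repository's real URI filled in.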
Make sure buildspec.yml is in the root of the project. First we define the version, in this case 0.2, and then we define the phases of the build. The first is a pre_build phase containing a commands section with a list of commands; the first command we need is the ECR login, which we can take from the ECR console by clicking View Push Commands and copying the first command, the one that gets a login token and passes it to `docker login`. That's all we need in the pre_build phase.

Next is the build phase itself, with another list of commands, where we actually run the Docker commands to build and tag our images. Copy the first build command from ECR and paste it in, specifying that the path to the Dockerfile is ./apps/reservations/Dockerfile and that the build context is the root directory. Next, copy and paste the command to tag the reservations image; notice we're using the latest tag. Now copy these two commands and paste them a few more times, changing only the service being built: build the auth service and tag it with latest, do the same for the payments service (making sure we point at ./apps/payments and tag the payments image correctly), and finally tag the notifications image.

Now that we've built and tagged our images, we're ready to push them, which we do in the final stage of the build: the post_build phase, with a new set of commands that push the Docker images to our repositories. Take the last command, `docker push`, and paste it in a few more times, swapping out the image name for auth, payments, and notifications. We now have a completed buildspec that gives CodeBuild the instructions it needs to build, tag, and push our images. Finally, don't forget to commit this change so it's available when CodeBuild checks out our GitHub repository; I'll commit with the message "add build spec" and push it to my repository.

Now we're ready to use CodeBuild and that buildspec to set up automated builds. We'll use CodePipeline to accomplish this, which lets us easily detect new commits and build them with our buildspec. Search for CodePipeline in the search bar and click on it. We want to create a new pipeline, so click Create Pipeline and call it whatever you'd like; I'll call it sleeper. Next, select a service role: click Create New Service Role, use the default name provided, and click Next. Then we specify the source provider for our code, which in our case is GitHub (Version 2).
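Putting the three phases together, the completed buildspec described above might look roughly like this; the account ID and region are placeholders for your own repository URIs:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Log in to ECR (from the console's View Push Commands)
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      # Build each service from the root context, then tag for its ECR repo
      - docker build -t reservations -f ./apps/reservations/Dockerfile .
      - docker tag reservations:latest 111111111111.dkr.ecr.us-east-1.amazonaws.com/reservations:latest
      - docker build -t auth -f ./apps/auth/Dockerfile .
      - docker tag auth:latest 111111111111.dkr.ecr.us-east-1.amazonaws.com/auth:latest
      - docker build -t payments -f ./apps/payments/Dockerfile .
      - docker tag payments:latest 111111111111.dkr.ecr.us-east-1.amazonaws.com/payments:latest
      - docker build -t notifications -f ./apps/notifications/Dockerfile .
      - docker tag notifications:latest 111111111111.dkr.ecr.us-east-1.amazonaws.com/notifications:latest
  post_build:
    commands:
      # Push every tagged image up to ECR
      - docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/reservations:latest
      - docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/auth:latest
      - docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/payments:latest
      - docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/notifications:latest
```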
You can select whichever source provider you use, but since we're using GitHub, we need to create a new connection. Click Connect to GitHub; you can name the connection whatever you'd like (I'll use the name of our app, sleeper) and click Connect to GitHub. Next we set up the GitHub app: click Install a New App to be redirected to GitHub and supply your credentials. You'll be brought to a page where you install the AWS Connector for GitHub; select the repository for your application (in my case the sleeper repository) and click Install. After the installation, the GitHub app should be pre-filled in the form, and we can click Connect. With our GitHub connection ready, select the repository from the dropdown, along with the branch to build from and watch for commits; I'm going to use main. Keep the box checked that starts a new pipeline on source code changes. For the output artifact format, choose CodePipeline Default, then click Next.

Now we set up the build stage. Select AWS CodeBuild as the provider; we then need to select a CodeBuild project to use for this pipeline. We don't have one yet, so click Create Project to be redirected to CodeBuild. We can call this project sleeper as well. Scroll down to the environment image: we'll use Amazon Linux 2 as the operating system, Standard as the runtime, and aws/codebuild/amazonlinux2-x86_64-standard:4.0 as the image. Check the Privileged box so we can build Docker images, and create a new service role for this project. Scrolling further down, we reach the buildspec section, where we could specify the path to buildspec.yml; however, we can leave it blank, because by default CodeBuild looks for a buildspec.yml in the root, which we've already provided. Continue to the bottom and click Continue to CodePipeline. We've successfully created the CodeBuild project, so click Next back in CodePipeline. For the deploy stage, click Skip Deploy Stage and confirm by clicking Skip. Review the settings we've configured and click Create Pipeline.

We're brought to the dashboard, where we can see the source stage has succeeded and the build stage is in progress. Click the Details link to open the logs for the current build in CodeBuild (on the left side you can see we're under the CodeBuild section), and click Tail Logs to follow along. The build is in the pre_build section, logging in to Docker, and an error has been thrown: the assumed role is not authorized to perform ecr:GetAuthorizationToken on this resource. This is because the CodeBuild service role we created hasn't been given permission to interact with ECR. Let's fix this in the IAM console: find the codebuild sleeper service role, click on it, then click Add Permissions to attach new policies. Filter for ECR and you'll find a pre-built policy called EC2InstanceProfileForImageBuilderECRContainerBuilds; opening it up shows a list of permissions that allow this role to push images to ECR, including that ecr:GetAuthorizationToken permission. Select this policy and click Add Permissions. Now the CodeBuild role has the ECR permissions it needs, and we can go back to CodePipeline to re-trigger a release: click on the pipeline, click Release Change, then Release. A new build is in progress; click Details and Tail Logs to follow along. After the build finishes and all of our images are pushed to ECR, we can close the logs and see the build was successful, and back in CodePipeline the source and build stages have both succeeded. More importantly, back in ECR, every service now has a latest image, pushed up by CodeBuild. So now our images are automatically rebuilt any time we make a new git commit and push it to our repository, which is great.

Next, let's look at how we can run our Kubernetes deployment on Amazon EKS and provision a load balancer so we can access the cluster externally. To create our EKS cluster we'll use a command-line tool called eksctl, the official CLI for Amazon EKS, which lets us easily create and manage clusters from the command line. Go to eksctl.io, click Introduction, and then on the right click Installation to see how to install eksctl; there are different installation commands for Windows and for Unix, so copy and paste the ones for your operating system into your terminal. Once it's installed, you should be able to run `eksctl get clusters`, which uses the default AWS credentials we set up at the beginning. Next we want to define a cluster config file in our project that describes how our cluster is created. Back in the documentation, on the home page, scroll down until you find a default cluster.yaml example.
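Adapted as described below, a minimal cluster config along these lines might look like the following; the name, region, node-group name, and node count are the values used in this tutorial:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: sleeper        # cluster name used throughout this tutorial
  region: us-east-1    # pick the region closest to you

nodeGroups:
  - name: ng-1
    instanceType: t2.micro   # free-tier eligible instance type
    desiredCapacity: 3       # number of worker nodes
```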
Copy this example, go back to the project root, and create a new cluster.yaml. Paste in the cluster config, change the name to sleeper, and change the region to whichever is closest to you; for me that's us-east-1, but feel free to use whichever region you'd like. Importantly, the config defines node groups, which are the nodes available to run our workloads. We only want one node group, so we can call it ng-1, and we'll change the instance type to t2.micro; this instance type is on the AWS free tier, so it won't be charged for its usage as long as we stay under the default limit of 750 hours a month. We can also specify the number of nodes; we'll keep this at three.

Now that the cluster is defined, run `eksctl create cluster -f cluster.yaml`, and eksctl will execute tasks through CloudFormation to create the cluster. Let this command finish; it can take some time. Once the cluster has been created, note the log line saying our kubeconfig has been updated automatically, which means we can now run `kubectl get nodes` and see the three nodes provisioned in the cluster. We can also run `eksctl get nodegroups --cluster sleeper` to see the current node groups and confirm the size is three.

Now we're ready to install our Helm chart into the EKS cluster. First we need to update the chart to point our images at the ECR repositories. Open the buildspec, copy the name of the reservations image, then go into the k8s Helm chart, open the templates folder and the reservations deployment, and swap the image out to point at ECR. Do the same for auth, payments, and notifications, copying each repository URI into the corresponding deployment.

Before we deploy the Helm chart to EKS, we need to provide the secrets our deployments require. The easiest way is to run `kubectl config use-context docker-desktop` to switch back to our local Kubernetes cluster, where we can run `kubectl get secrets` to see the secrets we created previously for our deployment. I'll run `kubectl get secrets -o yaml` and pipe the output to a secret.yaml file, then open this newly created file, which is just temporary, and remove the first entry, the gcr-json-key, so that only the secrets we want to carry over remain. Now run `kubectl config get-contexts` to list all of the Kubernetes contexts on our system, copy the context for the EKS cluster, and run `kubectl config use-context` with that context pasted in. Running `kubectl get nodes` again shows our EC2 instances; now, importantly, run `kubectl create -f secret.yaml` to create all of our secrets in EKS. Then make sure to delete the temporary secret file so we don't expose these credentials.

With the images updated and the secrets in place, cd into the k8s folder and then into the sleeper directory, and run `helm install sleeper .` to install the chart from the current directory. The deployment was successful, and after giving the chart a little time we can run `kubectl get pods` and see that all of our pods are in a running state.
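The secret-migration steps above can be sketched as a short command sequence; the docker-desktop context name matches this tutorial's local setup, and the EKS context placeholder is whatever `get-contexts` shows for your cluster:

```shell
# On the local cluster, export existing secrets to a temporary file
kubectl config use-context docker-desktop
kubectl get secrets -o yaml > secret.yaml

# Edit secret.yaml by hand and remove entries you don't want to carry
# over (in this tutorial, the gcr-json-key image-pull secret)

# Switch to the EKS cluster and create the secrets there
kubectl config get-contexts
kubectl config use-context <your-eks-context>
kubectl create -f secret.yaml

# Delete the temporary file so credentials aren't left on disk
rm secret.yaml
```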
We can then check the logs of the reservations pod, or any one of the pods, and see that it has fully started up and is listening. Next, let's provision a load balancer to expose an externally available URL that won't change. A quick note first: if you run `kubectl get pods` and find that some pods aren't starting because there aren't enough nodes available, you can easily scale up the existing node group. Run `eksctl get nodegroups --cluster sleeper` to list the node groups, then run `eksctl scale nodegroup ng-1` with `-n 5` to set the desired count; you'll see that the max size also needs to be updated, so provide `-M 5` to increase the maximum as well.

To provision a load balancer in EKS, we're going to use the AWS Load Balancer Controller, which you can find at kubernetes-sigs.github.io/aws-load-balancer-controller. This controller automatically provisions an Application Load Balancer inside AWS whenever we create a service with an Ingress resource. To install it, click the Deployment link, then Configure IAM, and scroll down to the setup steps. Copy the first command, which associates an IAM OIDC provider, and paste it in, changing the cluster name to the one we're using (sleeper) and the region to whatever region you're in (us-east-1 for me). Next, download the IAM policy for your region; there's one for the US government cloud, one for China, and one for all other regions. I'll copy the curl command for all other regions and download the policy to the current directory, then copy and paste the next command, which creates a new IAM policy from the downloaded file. Finally, we create an IAM service account using this policy: paste in the last command to have eksctl take care of this for us. As we did before, we need to provide the region (us-east-1 for me) and, importantly, the AWS account ID associated with our account; to get this, go back to the AWS console, open the IAM dashboard, copy the account ID on the right, and swap it into the command. We also provide the name of the cluster we're installing into, which stays sleeper. Run this and let eksctl roll out a CloudFormation stack to create the service account.

Now we're finally ready to install the load balancer controller itself. Scroll down to the summary, copy the first command to add the Helm repository, and paste it in. Then run the next command to update the repository, and finally install the Helm chart: copy the command for clusters with IRSA, paste it in, and update the cluster name to sleeper as we've done before. Once the chart has installed, running `kubectl get pods` in the kube-system namespace shows two load balancer controller pods, and we can follow their logs with `kubectl logs` in kube-system to see that the controller is up, running, and watching our services. Back in our project, we can remove the temporary IAM policy file we downloaded.

Finally, to tell the load balancer controller about our Ingress resources, we need to include some annotations on our Ingress. Add a new annotations section with two annotations: alb.ingress.kubernetes.io/scheme set to internet-facing, to make sure the load balancer is attached to a public, externally facing subnet, and the Kubernetes ingress class set to alb, to provision an Application Load Balancer. Save this, then run `helm upgrade sleeper .` with the chart path. Now `kubectl get ingress` shows that a new address has been provisioned for us. It takes some time for the Application Load Balancer to be provisioned; we can check its status in AWS by going to the EC2 dashboard, scrolling down to Load Balancers, and finding our Application Load Balancer in a provisioning state. After a little while the status switches to active, so copy the DNS name and open Postman to test out the load balancer. In Postman, enter the load balancer URL followed by /reservations/ with a trailing slash and send the request: we get an unauthorized response back from our reservations deployment, which is great to see, because we're not passing in any JWT information, yet we are able to reach our application externally from this URL. So now we have a way to provision an external load balancer and expose our application externally.
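The Ingress annotations described above would look roughly like this in the chart's Ingress template; the resource name is illustrative, and the rest of the spec is omitted:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sleeper-ingress   # illustrative name
  annotations:
    # Attach the ALB to public subnets so it is reachable externally
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Tell the AWS Load Balancer Controller to handle this Ingress
    kubernetes.io/ingress.class: alb
```

With these annotations in place, the controller watches the Ingress and provisions an Application Load Balancer automatically on the next `helm upgrade`.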
Info
Channel: Michael Guay
Views: 11,263
Id: G5gt5vIo1rA
Length: 28min 45sec (1725 seconds)
Published: Thu May 18 2023