AWS ECS Tutorial | Deploy a New Application from Scratch | KodeKloud

Captions
My name is Sanjeev, and I will be your instructor in this course. We will learn about AWS's Elastic Container Service (ECS) and how we can use it to deploy a container-based application. We'll start off by going over the basics of ECS: what it is, how it works, and the different components that make up the ECS solution, so we'll learn about tasks, task definitions, and services. In addition, we'll learn about the different launch types, the pros and cons of each, and when you would use one over the other. Once we have a good grasp of the core concepts that make up ECS, we'll move on to deploying a couple of different applications onto ECS. We'll cover all of the main configurations necessary for this, and I'll go over what each configuration does and when you would want one option over another. I'll also show you how to deploy a new version of your application onto ECS. Finally, we'll take a look at how we can integrate an AWS load balancer to direct traffic to your ECS cluster. We've got a lot to learn, so let's jump right in. We're going to kick things off by going over what ECS is and why we would want to use it.

So what is ECS? ECS is AWS's managed container orchestrator, which puts it in the same family as Docker Swarm or Kubernetes, but this one is provided and managed exclusively by AWS. Let's quickly discuss what an orchestrator is, so we have a basic understanding of the ultimate goal and purpose of a solution like ECS. An orchestrator's main purpose is to manage the life cycle of a container: it's responsible for finding available resources, a server, to create and run a container on, it's responsible for creating the container, and it's responsible for destroying
it, and if the container ever crashes, it's the one that restarts it or creates a new instance. It really is the brains behind everything. Orchestrators are also responsible for deploying and load balancing the application across multiple servers. If you're only working with Docker or Docker Compose, you're limited to one server, but in reality you would never want to deploy your application onto just one server; that creates a single point of failure. You want to distribute it across multiple servers so that if one goes down, the others can pick up the slack, and orchestrators make this a lot easier. We also want to be able to auto scale to handle variance in traffic. If we have a small application and for some reason it grows and becomes popular, we might see a spike in demand, and we need to handle that spike by automatically scaling our application, creating more instances of our container so we can process all of the incoming requests. Then, as the spike subsides, we want to lower the number of instances so we're not paying for extra compute power. Lastly, when we make changes to our application and update our code, we need a way to efficiently roll it out without disrupting the end user's experience, and orchestrators make this very easy.

I want to quickly walk through the workflow of deploying an application onto a plain Docker environment with no orchestrator, using the traditional docker run command or Docker Compose. You define your Docker Compose file with all of the configurations for your containers, and let's say we have a few different servers. One of the limitations of Docker Compose or docker run is that we can really only deploy onto this one
application. So if we ever wanted to scale it out to all three of our servers, we can't actually do that with Docker Compose. We could technically copy the file over to the other servers, but you're not truly managing your application across three servers, because none of the three servers is aware that the same application is running on the others; it's not truly orchestrated across them. That's one of the main limitations. In addition, let's say we want to create more than one instance of our application to handle more traffic: we want to scale up easily, scale down easily, and have our orchestrator be intelligent enough to do it on its own without manual intervention. That's something Docker Compose is not able to do. And, as I mentioned in the previous slide, if you ever need to update your application and roll it out to different instances without disrupting end-user traffic, you don't really get that ability with Docker Compose; we need a more intelligent service, an orchestrator, to do that. Traditional orchestrators like Kubernetes, HashiCorp Nomad, or Apache Mesos all require a lot of effort to get up and running, and they're not exactly the simplest or most intuitive to work with; especially with Kubernetes, we all know how much of a beast that is. ECS was created as a simple alternative to these orchestrators: you basically use a simple GUI, put in a few details about how you want your application to operate, and AWS handles the rest. ECS has two different launch types. It has an EC2-based launch type (EC2, if you don't know, is just AWS's compute service, a virtual machine that they provide you), and it has a Fargate launch type, and I'm going to go over both of those in
detail, but I want to discuss one more thing real quick, which is the concept of an ECS cluster. What exactly is an ECS cluster? To define that, I want to talk about what ECS is in itself. It's important to understand that ECS is nothing more than the brains behind how your containers are deployed. ECS only works with containers; it has no other ability. When you deploy a container, it still has to run on a physical or virtual machine, but ECS doesn't act as a server. ECS doesn't have any servers, and ECS doesn't have any compute power; it can only create and delete containers, and it still needs underlying infrastructure to run those containers on. That's what a cluster is: a cluster is just the underlying resources that your containers are going to run on. They'll be based on a set of EC2 virtual machines, and ECS will just be responsible for deploying your containers onto that underlying infrastructure. Your cluster, once again, is nothing more than the underlying resources that ECS can deploy your application on; it's the physical infrastructure.

With an EC2 launch type, the most important thing to understand is that we have to manage the underlying EC2 instances. We have the ECS control plane, which as I mentioned is the brains behind the operation, and we have our cluster, which, as I said in the previous slide, is the physical resources our containers run on. With the EC2 launch type we have to manage the underlying infrastructure: we have to create the individual EC2 instances ourselves and manage them, we have to install Docker on them because they'll be running containers, and we have to install the ECS agent. When you're working with ECS, specifically with an EC2 launch
type, your EC2 servers have to have a special agent so that the ECS control plane can talk to them and give them instructions. And as with any other server, you'll have to do the usual maintenance: configuring a firewall so that only certain ports are accessible and you're not opening yourself up to unnecessary vulnerabilities, and applying routine patches and upgrades so your servers stay up to date. That's the most important thing to understand about the EC2 launch type: you manage the servers all by yourself. You do all of the work, the maintenance, the upgrades, the patches, and the only thing ECS does is manage the containers. You manage the EC2 instances; ECS just deploys the containers onto them. With this launch type, what you get is full control over your infrastructure: you own the underlying EC2 instances, you manage them, and you can configure them exactly how you want.

With ECS Fargate, AWS is now going to manage the underlying infrastructure. We have the same ECS control plane and our cluster just like before, but now we also have this thing called Fargate. What's important to understand is that Fargate follows a serverless architecture. When you take a look at your cluster, you'll see that there are no EC2 instances; there's nothing there. That's just how serverless operates: from the perspective of you deploying an application, you don't see any physical servers, you don't have to interact with them, and you don't have to create them. When we create an application and send it to ECS, ECS will see that we have no servers to run our application on, so it talks to Fargate, and Fargate will create the
servers on demand. It actually creates the underlying resources, whether that's EC2 instances or something else; Fargate handles all of that. Once those instances are created, ECS can deploy our containers onto the newly created infrastructure. The great part is that you do not need to provision or maintain the EC2 servers; Fargate does all of that for you under the hood. And the nice part is that you only pay for what you use: if you delete your application or scale it down, the underlying resources are removed, so you're not paying for an EC2 server that's constantly running all the time.

Now let's talk about a couple of the different components that you're going to have to configure from an ECS perspective, starting with the ECS task definition file. As a user, you have an application, and you dockerize it by creating a Dockerfile. Once you have that Dockerfile and build a Docker image, you can upload the image to Docker Hub or any other repository. After you get it uploaded, it's time to define a task definition file within ECS. This is something specific to ECS: the task definition file acts as a blueprint that describes how your container should launch. It contains all of the container-specific configurations, like the CPU, the memory, what image to use, which ports should be open, and what volumes should be attached, so it basically contains all of the configuration that you'd see in a Docker Compose file. You're just defining the specs for your container. A task definition file can contain the configuration for more than one container, so you can put your entire application in one task definition file or split it out, depending on how you want to configure it. So, we discussed what a task definition file
is: it's just the blueprint for our containers and how they should be deployed. Now let's discuss what a task is. A task is nothing more than an instance of a task definition. The task definition, remember, is just the instructions; when we actually want to create an instance of our application, we create a task. If I wanted two instances, we would have two separate tasks. A task is just a running container with the settings defined in the task definition file. It's kind of like a Docker image: a Docker image is the blueprint for how a container should be created, and a container represents an instance of that image.

Next is a concept called an ECS service. What is a service? A service ensures that a certain number of tasks are running at all times. Let's say we have a simple Python application and we want two instances, two containers, of it running at all times. We provide this instruction to the service: hey, I want two separate tasks, two separate instances of my application. The service says, okay, right now I see zero instances of your application; I'm going to process your request and make sure two are running. It creates two instances and deploys them on available servers within our cluster. In addition, it will restart any containers that have exited or crashed, so that we always have two instances running; if one goes down, the service will notice and create a new one or restart the pre-existing one. On top of that, it also monitors the EC2 instances, and if any of them fail, the service will restart the task on a working EC2 instance.

The last thing I want to discuss is load balancers. Let's say we have our application deployed, our service is managing it, and we've got a couple of different
instances running on several different servers. We can assign a load balancer so that we can route external traffic to our service. The purpose of a load balancer is to make sure incoming traffic gets routed to all of our resources and is evenly balanced across each of the different instances. We take a load balancer and assign it to the service; when it receives traffic, it's intelligent enough to know all of the different instances and which servers they're running on, and it routes the traffic to them. And if we scale up and add a new instance, the load balancer will intelligently pick that up and forward traffic to it evenly as well.

Before we get started working with ECS in the AWS console, I want you to go to Docker Hub and take a look at the two following images. I've created two demo projects that we're going to use for this ECS tutorial. These are public repos, so you can pull them down and follow along with the tutorial; you can reach them at kodekloud/ecs-project1 and kodekloud/ecs-project2. Project 1, which is the one we'll start out with, is a plain old Node.js application with an Express server. If you send a GET request to the root path, you get back a plain HTML file with some text I've written, and the server runs on port 3000. If you take a look at the Dockerfile, it's just a simple Dockerfile that starts the application on port 3000. That's the main thing you need to know: we'll be running on port 3000, so when we configure it in the ECS console, make sure that's the port we expose. In the console, search for ECS and select Elastic Container Service. You'll be prompted with a wizard if you've never used ECS before:
they've got a get started, or quick start, wizard that helps you get things set up pretty quickly. We're going to use that first, and afterwards we're going to delete everything and I'll show you how to deploy everything manually, because I think it's important to understand. Go ahead and hit Get started. They've got a couple of example apps, but we're going to select Custom. Here we provide all of the configurations for our container. We give it a name; I'm going to call this ecs-project1. The image is the one I just mentioned, kodekloud/ecs-project1. If you have this image hosted on a private repository, you would select that option and put in the credentials, but it's public, so we don't need to. If you want to put in any memory limits, you can do that here. This is where we have to specify the port mappings: as you recall, if we go back to the Express configuration, the app listens on port 3000, so we want to expose that port. I'll put 3000 and leave it as TCP. Now, you might be wondering: in Docker, normally when we do a docker run, we pass -p and expose two ports, the outside port and the port the container is listening on. With ECS you don't give two ports, just one. The outside port is always the same as the inside port, so it's always going to be 3000:3000, 4000:4000, and so on. What you cannot do is say, I want to expose port 80 and map it to port 3000.
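To make that single-port rule concrete, here is a small sketch (the docker run line is illustrative, not one of the tutorial's commands):

```shell
# Plain Docker lets you remap the host port, e.g.
#   docker run -p 80:3000 kodekloud/ecs-project1
# With ECS, the host port always equals the container port, so the
# mapping for this app in the task definition is simply:
MAPPING='"portMappings": [{ "containerPort": 3000, "hostPort": 3000, "protocol": "tcp" }]'
echo "$MAPPING"
```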
You can't do that with ECS; they have to match. So I'm going to set this to 3000 as well. If you take a look at the advanced container configuration, you'll see sections for a health check, environment variables, and, further down, volumes, so it's basically all of the settings you would add in a Docker Compose file or on a docker run command, just through a GUI. Scrolling down, you can specify resource limits, labels, and things like that, but that's all the configuration we need, so I'll hit Update. We've got our ecs-project1 container; I'll scroll down and hit Next. Here we can define our service: it's going to create a service called ecs-project1-service, and we can add a load balancer, but for now we'll leave that set to None. Then comes the cluster name. This wizard is actually going to create the cluster for us, and remember, the cluster is just the underlying resources that our containers will run on. We'll leave it as default for now. You'll also see that it's going to create a brand new VPC as well as some subnets, so our cluster will not run on the default VPC; it creates a separate one so that everything is isolated. I'll hit Next. Here you can review your configuration, and you'll see it's going to create a few different things: the container definition, the task definition, the service, and the cluster. I'll hit Create, we'll give this a couple of minutes, and I'll sync back up with you once that's done. Once it's complete, we can hit View service.

Now, before we verify that the application was successfully deployed, I want to go over all of the things the ECS wizard created for us, because it did a lot of things behind
the scenes, and I want to make sure you understand what actually happened. The first thing we want to do is go to Task Definitions. During the lectures we covered task definitions: the task definition file contains all of the configurations for our containers, things like the port mappings, volumes, and environment variables, anything you would configure in a Docker Compose file or with flags on docker run. In my browser you'll see several task definitions; those are just other projects I had going. If you're using ECS for the first time, you should see only one task definition, first-run-task-definition, so select that. In mine you'll see a couple of different revisions, because I ran this a few times, but you should probably see just a 1, which means this is the first revision. Any time you want to change the configuration for your containers, you create a brand new revision, and the higher the number, the more recent the revision, so to see the latest configuration, select the one with the highest number. If we take a look at it, it contains all of the configs for our containers. There are some default configurations here; we can see that this will run on Fargate, and further down we can see how much memory and CPU were assigned. Down here is the one container we deployed: we can see the host-to-container port mapping of 3000 to 3000, no volumes, and no environment variables. So the task definition just has the configs for all of your
containers. There's only one in this case, but you can put more than one in a task definition file, or put them in separate ones, depending on how you want to deploy your application. Now if we select Clusters, this shows us the cluster that the wizard created, because the cluster is all of the underlying resources our containers can actually run on. We're using Fargate; however, if you chose to deploy on an EC2-backed cluster, this would contain all of the EC2 instances. If I select the default cluster that was created, we can see the list of services. The wizard created one service for us, called ecs-project1-service. We can see it's in an active state, we can see which task definition file is associated with it, and we can see that the configuration points to one desired task, with one in a running state, so it looks like everything is good. If I select it, we can take a look at the specific service and its configuration, including all of the network-related information: the VPC, the subnets, and the security group. Under Tasks is where we can look at the actual tasks. We've got one task currently in a running state, and we can select it to get some more information. We can see, once again, that it's running, but more importantly, we can see a public IP. This is the IP address we use to access our application. I'll copy it and open it in the browser, and we can see it just keeps spinning and looks like it's going to error out. That's because, remember, our application runs on port 3000.
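As a quick sketch of what's happening here (the IP below is a documentation-range placeholder; substitute the public IP shown on your task's detail page):

```shell
# The task's public IP serves nothing on port 80, which is why the
# browser spins; the Express app answers on port 3000.
PUBLIC_IP="203.0.113.10"                 # placeholder, not a real task IP
APP_URL="http://${PUBLIC_IP}:3000/"
echo "no response on http://${PUBLIC_IP}/ (port 80 is not open)"
echo "the app answers on ${APP_URL}"
# From a terminal you could verify with: curl "$APP_URL"
```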
So what we want to do is add port 3000, and now we can see the demo application. This is just a very simple application; I wanted to keep it simple for the first one, so it just sends out an HTML file. This confirms that our application was successfully deployed, and we got a chance to see the underlying cluster, the task definition file, and the service.

Now that we've verified that the application was successfully deployed, I want to delete everything and do it all from scratch without the quick start wizard. I'll go back, and the first thing we want to do is go into our default cluster, select the service, and delete it; we type in "delete me" for confirmation. It takes a few seconds, and we should see all of our tasks get deleted as well; if a task doesn't get deleted, go ahead and stop it manually. Okay, now our task was deleted, and if we go back to Clusters, I'll select the default cluster and delete that as well. All right, our cluster has been deleted, so we are basically starting from scratch: there's nothing in our ECS, no configuration, and now we're going to look at how to deploy everything from scratch without that quick start wizard.

The first thing we want to do is create our cluster, so I'll select Create Cluster. You'll see there are three different options. The first one, Networking only, is for Fargate, and then there are two options for clusters backed by EC2: one where the EC2 instances use a Linux AMI and one for a Windows AMI. To keep things simple, so we don't waste time creating EC2 instances, I'm going to do Fargate. We give it a cluster name; I'll just call this
cluster-1, but call it whatever you'd like. We're going to create a VPC; you can change the default CIDR block and the subnets, but I'm going to leave them as is, so it will create a VPC with this CIDR block and these two subnets. We'll go ahead and select Create. Our cluster has been created; we can select View cluster, and it takes us to that specific cluster, cluster-1.

Now that we've got the cluster set up, the first thing we're going to do is create our task definition. I'll go to Task Definitions and create a new task definition. Since this will run on Fargate, we select Fargate. We give it a name; I'll call this ecs-project1-taskdef. Here we have to select a role: if you went through that initial quick start wizard, it will have already created a role for us, so just select that one. If you didn't, go ahead and run through the quick start again and it will create that role for you. ECS has to have certain permissions to create some of the underlying resources; everything with Amazon and AWS requires explicit permissions, and this role will already have all the permissions set. We select the operating system, Linux, and leave the task execution IAM role as the same one. Here we can specify the task size; this depends on the CPU and memory requirements of your application, and since this is just a simple demo, go ahead and select the smallest values. Then we add a container. I'll call this node-app-image, the image is the one we used before, and once again we do the port mapping of 3000.
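For reference, the inputs above correspond roughly to a task definition JSON that you could register from the CLI instead of the console. This is a sketch: the family, container name, account ID, and role name are illustrative assumptions, not values from the walkthrough.

```shell
# Write a minimal Fargate task definition equivalent to the console inputs.
cat > taskdef.json <<'EOF'
{
  "family": "ecs-project1-taskdef",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "node-app-image",
      "image": "kodekloud/ecs-project1",
      "essential": true,
      "portMappings": [
        { "containerPort": 3000, "protocol": "tcp" }
      ]
    }
  ]
}
EOF
# With AWS credentials configured, you would register it with:
#   aws ecs register-task-definition --cli-input-json file://taskdef.json
python3 -m json.tool taskdef.json > /dev/null && echo "taskdef.json parses"
```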
If you want to change any of the extra configs you can, but we don't need any of that, so I'll select Add and then hit Create. Now we've got our task definition defined; I'll select View task definition. Remember, this is just the blueprint for our application, for our task; we haven't actually created an instance yet. So now we have to create the task itself, or rather the service, which will be responsible for creating the tasks. We'll go to our cluster, cluster-1, and under the Services tab we'll create a service. The launch type is Fargate and the operating system Linux. For the task definition, look for the specific one we created, ecs-project1-taskdef; you can select a specific revision, and we'll select the latest. The cluster we leave as is, then we give it a name; I'll call this project1-service. Then, number of tasks: this is how many instances of that task definition you want created. Remember, we have just one container in that task definition file; if we selected one task, it would create one instance of it, and if we selected five, it would create five instances. This is how you scale up your application. We'll do two, which should create two instances. We leave everything else as default and hit Next step. Here we select our VPC, the one we created specifically for ECS, we select the two subnets, and then we select the security group, which is just what traffic we allow to reach our ECS service. I'll select Edit; we can create a new security group or modify an existing one. I'm going to create a new one called project, and we can see that right now it allows traffic to port 80.
However, remember, our application isn't running on port 80, so this would actually break our application. Instead, we want to change this to port 3000: I'll choose Custom TCP, port 3000, and source Anywhere, which means that regardless of where you are located and whatever your IP address is, anybody can access it as long as they go to port 3000. I'll hit Save. We leave everything else as default. It asks if we want a load balancer; I'm going to say no for now, we'll cover that in a bit, and hit Next step. Here is where you can configure auto scaling, so the service can scale based on the demand on your application; we'll leave that disabled for now, but this is where you would go to enable it. Then we review all of our configuration, and I select Create Service. Our service has now been created; if I select View service, it takes us to the service and its tasks. Right now there are no tasks, but if I hit refresh, we can see it's now provisioning two tasks. Why two? We said we wanted two instances, so it creates two tasks, both using the same task definition file, basically identical copies of the same application. If I select one of these tasks, you can see the current status is pending; that's because it's in the process of actually deploying our container. It takes a few minutes, but eventually it should move into a running state if there are no issues. I'll go back; one is running, and after a refresh both are in a running state. If I go to a task, grab its public IP, and open it with port 3000, we can see our application works. And if I go back and select the other one, interestingly enough, it has a different IP. So for some reason,
each of these tasks gets a different IP address, and this is kind of a pain. Think about it: if we deploy a front-end application, it shouldn't have to know all of the underlying IPs; it should be able to go to one specific address and have traffic automatically load balanced across both tasks. Still, this verifies that it did work. It's a bit of an issue, though, because ultimately our front end shouldn't have to know both of those IPs, and if we scale up it would have to keep learning new IPs, and if we scale down it would have to drop some of them. This is where a load balancer comes into play: we define a load balancer that automatically balances traffic across all of these tasks and exposes one single address that our front end can consume. Like I said, we'll cover that in a bit, but there are a few other things I want to cover before we get to it.

Now let's say we make some changes to our application. In this case it's a very simple change: I'm just adding some exclamation points to that H1 tag. Once we make changes, we have to build a new image and push it up to Docker Hub, so I'll do a docker build with a tag of kodekloud/ecs-project1. Okay, we've built a new image; now I'll do a docker push for that image. So we've updated the image in Docker Hub, but at this point, if we refresh our application, nothing changes, because we have to tell ECS to pull the new image. How do we do that? We'll go back to our ECS console, and there are a couple of ways. The simplest is to go to Clusters, go back to your cluster, select the service, select Update, and then check Force new deployment; that causes it to pull the latest image and redeploy. However, if
you were making other changes, not just swapping the image — say you changed the task definition — you could go to the project-1 task definition, select Create new revision, and create a brand new revision. So now we have revision 2, and even though nothing has really changed from a configuration perspective, because we created a new revision it's still going to pull a brand new image. If I go back to my service, we can select the latest revision, skip to Review, and select Update Service. Back in the service, under Tasks, we should see the previous two tasks still running while it starts up two brand new ones. What happens is these boot up, and once they're in a functioning state and all the health checks pass, it shuts down the old two. Okay, now we see all four in a running state, so it's going to move the old two to a stopped state. One has now been removed, and in a second or two the next one should be removed as well. All right, it looks like we're back down to two tasks: we set the desired count to two, and the running count is two.

So let's go back and hit refresh, and I'll select one of the tasks. What's important to understand is that the IP address changes: when we create a new deployment, the tasks get new IP addresses. So I'll grab the new IP, open it on port 3000, and we can see our exclamation points. But I want to focus on the fact that the IP address did in fact change. This is a problem, because anybody using our application would have to keep track of the changed IP address. This, too, can be addressed with a load balancer — a load balancer fixes both of these problems. It will automatically point to the new IP address, but the load
balancer's IP will always remain the same. Anybody consuming the application, whether from a front end or any other client, always points at the load balancer, and the load balancer gets updated any time we make changes to our application.

Okay, now we're going to delete our entire application — we no longer need it — and I'll show you how to deploy a slightly more complex application that involves a database and a volume, and we'll look at how to set up a load balancer as well. So I'll go to my service and delete it, then go to Tasks and verify it was successfully deleted. Okay, everything's been deleted. We still have our cluster, and there's no need to delete that; we're going to deploy our second application on it as well.

Now for our multi-container application. This one is a little more complex: it has two different containers. I've created a dummy example Docker Compose file just to give you an understanding of how the architecture of our application works. It's pretty simple. We have two containers. We've got an api container — a simple Node.js/Express API using the ecs-project2 image we've already uploaded to the kodekloud Docker Hub account. You'll see there are a couple of environment variables; I'll cover those in a second. Once again we're exposing port 3000. The major difference is that there's also going to be a mongo container. This uses the default mongo image, and to get mongo working we have to provide two environment variables: the root username and the root password. For now I've just set those to mongo and password, to keep it simple. And because this is a database and we want the data to persist, we want to define a volume as well. Then, whatever
username and password you specify here we also have to provide to the api container, along with the IP of the mongo container and the port it's running on. And if you want to take a look at the app — I'm not sure what your programming background is, but it's just a simple API. If you look at the code, this is a simple CRUD application where you can create and delete notes. The first endpoint: a GET request to /notes retrieves all the notes in our database. A GET request to /notes with an ID gives you information for one specific note. A POST request to /notes is for creating a brand new note. We also have a PATCH to /notes plus the ID, used to update a pre-existing note, and a DELETE option: to delete a specific note you send a DELETE request to /notes and the ID of the note. So it's a pretty simple application. It uses the Mongoose library to connect to the database and, once again, runs on port 3000. The important thing to understand is that we have to pass in the environment variables for connecting to the database, so keep that in mind.

Before we get started, I'm going to create a security group for our ECS application. If I search for "security group" — this is under EC2 — I'll open it in a new tab so I'm not constantly moving back and forth, and create a brand new security group. I'm going to call it ecs-sg, with the description "ECS security group". Remember, a security group just defines what traffic is allowed to reach a resource. For now I'm going to add a simple rule that allows all traffic, which you don't normally want to do —
you want to specify exactly what traffic is and isn't allowed. Here I'm going to allow traffic from any IP. So, pretty simple: this security group isn't doing anything restrictive yet, but we'll update it later on; I just wanted to create it ahead of time. More importantly, make sure you select the correct VPC — the VPC of the ECS cluster — and then select Create security group.

Once that's complete, I'll go back to our ECS page, go under Task Definitions, and create a new task definition for the new application. This is once again going to be Fargate, and I'm going to call it ecs-project1-task. For the role, it's going to be the ECS task execution role, and for memory I'll just select the smallest option. Now we'll add our containers. The first one is the database, the mongo container. I'll call it mongo, and for the image we'll use the default mongo image from Docker Hub. For the port mappings, this is going to be mongo's default port, 27017. We don't really care about the health check, but we do want to add environment variables — the two listed in the Docker Compose file. So we want MONGO_INITDB_ROOT_USERNAME, which can be any username we want (I'll just use mongo), and we have to provide MONGO_INITDB_ROOT_PASSWORD as well — I'll use password. Not secure at all, but that's okay. And we'll select Value here. So we've got the environment variables set. For now we can close this; I think that's enough configuration on the mongo side.

Let's add our second container, the Express application. I'll call it web-api. The image is going to come from our Docker Hub account, and it's going to be ecs-project2.
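As a point of reference, the mongo container configured just above corresponds roughly to this `containerDefinitions` entry in the generated task definition — a sketch only; ECS fills in additional fields (such as `essential` and log configuration), and the username/password are just the example values used in this walkthrough:

```json
{
  "name": "mongo",
  "image": "mongo",
  "portMappings": [{ "containerPort": 27017, "protocol": "tcp" }],
  "environment": [
    { "name": "MONGO_INITDB_ROOT_USERNAME", "value": "mongo" },
    { "name": "MONGO_INITDB_ROOT_PASSWORD", "value": "password" }
  ]
}
```

Everything clicked through in the console ends up as JSON like this, which could also be registered directly with `aws ecs register-task-definition`.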
And this ecs-project2 image is also listening on port 3000. It's going to have four environment variables: the database username, the password, the IP, and the port. So I'm going to copy those in; the username is the same one we set on the mongo container, mongo, and the password is password. Now, the IP is a little bit interesting. If you've ever worked with Docker Compose, you'll know that Docker provides us DNS resolution, so for the IP I could just provide the name of the service — I could just say mongo. With ECS, however, we don't have access to that; there's no DNS built in. Instead, we can make use of localhost: you can reach the other containers defined in a task definition through localhost and whatever port each container is running on. We know the mongo container is running on port 27017, so we can just provide that here. When this container wants to talk to the database, it sends its traffic to localhost on port 27017. So that's how communication works between containers within one specific task.

And just like with a Docker Compose file, you can add dependencies — if you wanted to say "I need the mongo container in a started state beforehand", you can add that here as well. I'm not going to worry about that too much, but really any configuration you can add in a Docker Compose file you can add in here as well. So we'll add that container.

The last thing we need to do is define the volume — we have a volume for our database — and to do that we go to the volumes section down at the bottom. I'll select Add volume, give it the name mongo-db, and for the volume type we're going to make use of AWS's
Elastic File System. We're going to have to create an Elastic File System, because right now we don't have any. You can just select this link — Amazon has already provided it for us — go there, and create our Elastic File System. We'll select Create file system, call it mongo-db, and make sure you change the VPC to be your cluster's VPC; Standard is fine. Make sure you select Customize, because we have to make one change. We'll go to Next, and this is where we make it: make sure your subnets are selected — if they're not, just open the dropdown and pick your two subnets — and we're going to change the security group. Right now it's using the default security group, and if we use that, you'll see that our containers will not be able to communicate with the Elastic File System. So what I'm going to do is create a new security group for this one. I'll go back to the EC2 management console, go under Security Groups, and create a brand new security group. I'll call it EFS Security Group, give it the description "EFS security group", and make sure to update the VPC. The protocol is going to be NFS, and for the destination, this is where we can be a bit more specific. Right now it would accept traffic from anyone, which is a bit of a vulnerability, because then anyone can access it. Instead, I can say: use the security group that we created for ECS. Remember that previous security group associated with ECS? We can allow only traffic coming from resources in that group. This lets us narrow down the scope of who's allowed to talk to our Elastic File System — only the ECS service. And I've actually made a mistake: this was the outbound rule, and I don't actually want to do this on the outbound rule, so I'm going to just change
that back to all traffic, allowed to everyone. What we want is to add an inbound rule — that was my mistake. So we'll change the type to NFS and select that ECS security group we created. We create that security group, and if we go back to the EFS window, we can see it's not there — that's because we have to refresh it. I'll hit Previous, select the default group for a second, hit Previous again, then go back to Next, and it should reload. Now we can select EFS Security Group for both subnets, and make sure you delete the default one — we don't want that, we just want the one security group. We'll hit Next and Create. Okay, so now we've got our Elastic File System, and if I hit refresh back in the task definition, we should see our file system. Everything else can be left as default, and I'll select Add.

So at this point, what exactly did we do? We did the equivalent of defining a volume down here, but we still have to associate that volume with a container and provide a mount point in the container itself. To do that, go back up to the container, select the mongo container, and there should be a Storage and Logging section. Under Mount points, select mongo-db, and for the path to mount inside the container, that's going to be /data/db. If you don't know where to mount the data for a database, just check the documentation — the mongod documentation told us to mount it at /data/db. I'll hit Update, and now we can go ahead and create our task definition. And I realize this should have been named ecs-project2, not project 1. We can see it was successfully created, so we can select View task definition and look at the final task definition. On mine you'll see four revisions; you'll just have one — I created this a couple of times beforehand, which is why mine is already on the fourth revision. So now
that our task definition is created, let's create our service. We'll go back to our cluster and create a new service. This is going to be of launch type Fargate, OS Linux, and we'll select our new task definition. I'll give this the name notes-app-service, set the number of tasks to one, and leave everything else as default. At the top, make sure the proper VPC is selected, select your two subnets, and for the security group select a pre-existing one — the one we created called ecs-sg. Right now it's allowing all traffic, so it isn't doing anything restrictive, but once we create our load balancer we'll be able to say that only traffic coming from our load balancer should be allowed, and we can apply that kind of policy within the security group just like we did with the EFS security group.

Now we're actually going to add in a load balancer as well. That way, if we have multiple tasks, the load balancer can balance the traffic across all of them, and when we deploy a new task or update our application, we don't have to update the IP address our front end points to — our front end will always point to our load balancer, and that IP never changes. So we'll select Application Load Balancer, and right now it's going to look for load balancers we have configured. We don't have any, so we actually have to configure it ourselves. I'll select the link and open it in a new tab. There are a couple of different types of load balancers; we want the Application Load Balancer. We'll give it a name — I'm going to call it notes-lb — we want internet-facing, and we want the IP address type to be IPv4. Change the VPC to be your proper VPC, and then we're also going to create a brand new security group for our load balancer, which I'll call lb-sg. For inbound rules, normally what we would do is add a custom TCP rule allowing port 3000,
because this security group determines, essentially, what traffic this load balancer is allowed to receive, and our application runs on port 3000, so we would just list port 3000. However, I don't want users or a front end to have to send traffic to port 3000; I would rather have them send it on the default HTTP port of 80 or the default HTTPS port of 443. The great part about load balancers is that we can set up a rule that says: any traffic we receive on port 80, redirect to port 3000 on our container. That way our load balancer is listening on one port and our containers are listening on a different one. So for the security group I'm just going to use the default HTTP rule, receiving traffic from anywhere. We will create the security group — and I realize I made it in the wrong VPC, so I'm going to go back, delete this security group, and redo it. This one will be lb-sg, and the VPC is what I forgot to change; I'll add in the same rule, just HTTP.

So now that we have the security group, we'll go back to the load balancer configuration. I'll hit refresh, delete the default group — we don't want that — and select the load balancer one. Now, this is where we set up the rules that allow us to receive traffic on one port and send it to another. The way you define these rules is that you tell your load balancer which port to listen on — and the default example here listens on port 80, which is perfect, that's what we want — and then you tell it what to forward to, which is our ECS cluster. And to forward to something, whether it's the ECS cluster or EC2 instances, we have to define a target group. A target group is just a list of resources that we can load balance traffic to, so once again this is something where we have to create another entity within AWS.
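The port-80-to-target-group rule described above is what ELB calls a listener. Expressed as part of the input to `aws elbv2 create-listener`, it would look roughly like this — the `TargetGroupArn` is a placeholder for the target group we're about to create, and the required `LoadBalancerArn` parameter is omitted:

```json
{
  "Protocol": "HTTP",
  "Port": 80,
  "DefaultActions": [
    {
      "Type": "forward",
      "TargetGroupArn": "arn:aws:elasticloadbalancing:<region>:<account>:targetgroup/notes-tg-1/..."
    }
  ]
}
```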
We'll select Create target group — I'll open this in a new tab — and we have to specify the target type. For ECS you want IP addresses; if you were configuring a target group for standard EC2 instances it would be Instances, but since we're using ECS it's always going to be IP addresses. For the target group name I'll use notes-tg-1. We can leave the rest as default — IP address type IPv4 — and make sure the correct VPC is selected.

Now, this part is actually important. Our load balancer is going to perform health checks, and it determines whether a target is healthy based on them. The way a health check works is: you specify a protocol — in this case just regular HTTP or HTTPS — and the load balancer sends a GET request to a specific path your application is listening on. If it gets a 200 or another successful response, it assumes the target is healthy and up. If it gets no response, or something like a 404, it assumes the target is down and not working, and stops directing traffic to it. So it's important that you set this up properly. By default it sends the request to the root path, and if we take a look at our application, we don't actually have any routes listening on the root path — our application listens on /notes and /notes with an ID. We have nothing listening on the root path, so the default health check would fail. Now, I could create a new endpoint, maybe called /healthcheck, and change the health check path to that — just some endpoint our application listens on — but instead of doing that, I'm just going to tell it to send the request to the /notes
endpoint, which returns the list of notes and should succeed if our application is working. So I'll set the path to /notes, and then under Advanced health check settings we want to override the port and send it on port 3000, because that's what the containers are listening on. I'll select Next, make sure the correct network is specified, and select Create target group. So now our target group has been created, and if we go back to the load balancer and hit refresh, we can see it's now in the list. I'll select it, everything else looks good to go, and we'll select Create load balancer.

We have now successfully created our load balancer, and we can go back to our original ECS window and finally select it under load balancing — we did all of that to get the load balancer set up, and it's automatically filled in notes-lb. The next thing we have to do is specify which container we want to load balance traffic to, because we have two containers: the api and the mongo container. We don't want the traffic going to the mongo container; we want it going to our web-api, and ECS already knows it's listening on port 3000, so it's going to set up the load balancer to forward to that port. So we select Add to load balancer. Remember the configuration on the load balancer that listens on port 80? Here we're just doing the same thing — ECS will overwrite any of the configs we already did, so we're essentially redoing it. The production listener port is the port our load balancer is going to listen on; you can also set a custom port, but we want port 80, so traffic is going to come in on port 80,
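Wiring the service to the load balancer in the console amounts to filling in the `loadBalancers` section of the service definition. As a rough `aws ecs create-service` fragment (the target group ARN is a placeholder, and other required parameters are omitted):

```json
{
  "serviceName": "notes-app-service",
  "launchType": "FARGATE",
  "desiredCount": 1,
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:<region>:<account>:targetgroup/notes-tg-1/...",
      "containerName": "web-api",
      "containerPort": 3000
    }
  ]
}
```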
and we want to forward it to that target group — so the target group is going to be notes-tg-1. We can see the health check path has already been updated, which is good, and everything else should be good to go. I'll do Next step — here you can specify your auto scaling, which I'm just going to leave disabled — then review the configuration and select Create Service. Our service was successfully created. We'll go to View service, and we can see we have one task in a provisioning state, so we'll give it a minute or so and watch it move into a running state, and then I'll show you how to make use of the load balancer to forward traffic to the containers. All right, so now it's moved into a pending state... okay, now we see the task has moved into a running state. If I select this task, we can see both containers are in a running state.

If we wanted to, we could technically go directly to the container by grabbing this IP. Since this is an API, I'm going to use Postman: I'll send a GET request to that specific IP — and remember the API is running on port 3000 — at /notes, which should return all of the notes. We can see we get an empty array, and that's because we don't actually have any notes yet. So that's one way of doing it, but remember, we set up the load balancer so we don't have to worry about task IPs. To send traffic through the load balancer, let's go find it — actually, we can get there from the ECS console: go to the cluster, go to Services, click on the service, and select the target group. From the target group we can see the load balancer associated with it — this is our load balancer — and if you look at the configuration for the load balancer down here, you'll see the DNS name. This is where we can now send requests: when we send a request to this DNS name, it will
forward it to the container. So I can just replace the address in Postman — and remember, I no longer need port 3000, because our load balancer expects traffic on port 80, the default HTTP port — so I can just remove the port and request /notes. If I hit Send, we can see I get an empty result. Now let's try sending a POST request: in the body I'll add some JSON with a title and a body. I hit Send, we get a response back, and now if I do a GET request I see my one individual note. So we have successfully deployed our application behind the load balancer.

There's one last change I want to make, and that's in the security group for our ECS service. If you take a look at its rule, it allows traffic from any IP address on any port, which isn't very secure. Instead, I only want to accept traffic coming in from our load balancer on port 3000. So we delete that rule and add a new one: custom TCP, port 3000, allowing only traffic coming in from the load balancer's security group. After making the change, we can try sending a request again and see that it still works — we've just made things a little more secure, so that only traffic from the load balancer arrives at ECS.

All right, that's everything I wanted to cover in this video. Hopefully this gives you a good starting point for deploying an application onto AWS using their Elastic Container Service. Let me know in the comments if this is your preferred method of deploying a containerized application, or if you prefer an alternative like EKS. And I think that's going to wrap everything up — I'll see you in the next one.
Info
Channel: KodeKloud
Views: 125,073
Keywords: aws ecs tutorial, aws ecs, amazon web services, aws tutorial, aws docker, elastic container service, aws fargate, aws training, cloud computing, amazon ecs, aws ecs tutorial for beginners, aws tutorial for beginners, aws simplified, aws elastic container service, amazon ecs docker, aws ecs service discovery, amazon web services for beginners, aws ecs demo, aws container service, aws ecs task vs service, DevOps, Cloud, Kodekloud
Id: esISkPlnxL0
Length: 66min 57sec (4017 seconds)
Published: Fri Oct 07 2022