AWS Fargate - Running Dockerized Apps

Video Statistics and Information

Captions
In this episode we are starting a series of videos on how to add CI/CD for containerized applications. The services we are going to use in this series are AWS ECS, the Elastic Container Service, which runs our containers; AWS ECR, the Elastic Container Registry, which holds our container images; and AWS CodePipeline, which automates the whole process. As I said, it is going to be four parts. In the first part we will look at our containerized application, run it locally, create the resources on AWS such as the ECR repository, and push the image to that remote repository. In the second part we will create the network the application needs: the load balancer has to run in public subnets so it can be reached from the internet, but the containers running inside the ECS cluster should sit in a private network that cannot be reached from outside except through the load balancer, so configuring that network will be the main focus of part 2. In part 3 we will connect all the services and test whether the application works end to end. If that all works, then in the last part, part 4, we will automate everything so that whenever we push code to GitHub it triggers the CodePipeline, the pipeline builds the container image and pushes it to ECR, ECS deploys a new container, and the application is updated without me having to intervene. That's the plan for this series; I hope you enjoy it, and I'll see you in part one.

OK, this is our GitHub repository. I will clone it by clicking here and copying the URL. Back in my terminal I create a new folder, which I'll call hello, cd into it, and open it in my code editor. In the integrated terminal I type git clone followed by the repository URL and clone the contents into this folder. Now let's explore the files. One of them is app.js, and this is our entire container code: a simple Express application with two routes. The main (default) route sends "Hello World" to the client, and the other route, which I have named /health, will be used as the health check endpoint for the load balancer; it simply returns status code 200 with the text "healthy". Then we have app.listen, so the Express application listens on port 3000, which is the port we open up for this application.

To run it locally, I clear the screen, go into the cloned folder, and list the files. Before running node app.js we have to install Express, so we run npm install; the dependencies are listed in the package.json file. Once Express is installed we run node app.js again, the app is listening on 3000, and if I open a new browser tab and type localhost:3000 I see "Hello World". If I go to the health check endpoint, /health, it gives me the "healthy" message, and if I inspect it in the network tab and refresh you can also see the health check returns status 200. This is the application we are going to dockerize, so I will stop the server.
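For reference, here is a minimal sketch of the app.js described above; the two routes and port 3000 match the walkthrough, while the exact response strings and log message are assumptions.

```javascript
// app.js - minimal sketch of the Express app described above
// (response text and log wording are assumptions)
const express = require('express');
const app = express();

// default route: sends Hello World to the client
app.get('/', (req, res) => {
  res.send('Hello World');
});

// health check endpoint, later used by the load balancer's target group
app.get('/health', (req, res) => {
  res.status(200).send('healthy');
});

// the application listens on port 3000
app.listen(3000, () => {
  console.log('app is listening on 3000');
});
```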
Now you can see the Dockerfile alongside the app.js file. This Dockerfile contains several instructions. First we use the Alpine image as our base image; it is one of the lightest images on which we can install Node.js and npm, and that is exactly what the second line does: it adds Node.js and npm and installs them with the apk package manager. After Node.js and npm are installed, I create a new directory, /app, and set it as the working directory, so all the commands below it run inside that directory. Next I copy the package.json file into the /app folder and run npm install. At this point I have deliberately not yet copied the rest of the source code (app.js, the .gitignore file, or anything else the application contains) into the app folder, because I want to take advantage of Docker's layer caching: I run npm install to get the Express package installed, and only after that do I copy the rest of the code into the app folder. Then I EXPOSE 3000, because the app runs on port 3000, and finally there is a CMD instruction, which is executed when somebody runs docker run on an image built from this Dockerfile; as that command I use npm start, which starts the application.
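The Dockerfile itself is only shown briefly on screen, so here is a sketch that matches the instructions walked through above; the exact base image tag and the presence of an npm start script in package.json are assumptions.

```dockerfile
# Sketch of the Dockerfile described above (base image tag is an assumption)
FROM alpine

# install Node.js and npm with the apk package manager
RUN apk add --update nodejs npm

# create the /app directory and make it the working directory
RUN mkdir /app
WORKDIR /app

# copy package.json first so the npm install layer can be cached by Docker
COPY package.json /app
RUN npm install

# copy the rest of the source code (app.js and any other files)
COPY . /app

# the Express app listens on port 3000
EXPOSE 3000

# run "npm start" (defined in package.json) when the container starts
CMD ["npm", "start"]
```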
Let's try this on the local machine. I open the terminal, clear the screen, and check which folder I'm in; I have the Dockerfile and everything here, so let's build the image. I type docker build, tag the image with -t (I'll just call it youtube-local, at the latest version), and then add a period. The period denotes the build context, which is this folder, and then I hit enter. It builds the image; you can see it ran through the steps and built successfully. If I run docker images I should see the repository I just built, and there it is. Now let's spin up a container from this image. I copy the image ID, clear the screen, and run docker run in interactive mode with -it, along with a port mapping: the Express application inside this container runs on port 3000, so I map port 80 on the host to port 3000 inside the container, paste the image ID I just copied, and hit enter. The application is running, so instead of localhost:3000 I can just type localhost and hit enter, and there you go, Hello World is shown, this time served from inside the container. To follow along you only need Docker installed and running on your machine, whether that's macOS, Windows, or Linux.

Now let's push this image to a remote repository. For the remote repository you can use either Docker Hub or AWS ECR, the Elastic Container Registry; we are going to use ECR, which is managed by AWS. In the AWS console I search for ECR, click Create repository, name the repository youtube, and create it. So we have our Docker repository, and this is the URI of the repository; in order to push an image to this repository we have to use this URI. If I go into the repository, at the moment it has no images, so let's push our local image up to ECR.

Back in my terminal, we first have to log in to AWS in order to push images to ECR. I open the text file with the ECR commands so we can use the login command, aws ecr get-login, followed by the region. Before running this command you have to install the AWS CLI on your local machine; if you haven't installed it, search for "install AWS CLI" in a browser, open the first official result, and you will see how to install the AWS CLI on Windows, macOS, or Linux (on Windows it's just a matter of running an MSI installer). Once it is set up, go back to your terminal, paste the login command, and hit enter; we are now logged in successfully. We used the us-east-1 region because the ECR repository is in that region.

Now let's build a new image. I use docker build -t again; earlier we built youtube-local for local testing, so this time let's call it just youtube, and the context is the dot, so it finds the Dockerfile in the same folder, and I hit enter. Once the image is built I tag it: I type docker tag, then the image name I used, youtube, and then point it at the remote repository. To find the ECR repository URL you can simply go to your repository and click the copy button; it copies the URL, which I paste into the terminal, and I add :latest at the end so it is marked as the latest tag, and hit enter. Now the image is properly tagged, and we can push it to ECR with docker push: I type docker push, paste the ECR URL again, add the :latest tag, and hit enter. That pushes my local image up to the remote repository in ECR. Once it finishes I go back to the repository, click on youtube, and I can see the latest image tag and its image URI, so it has been pushed to ECR successfully. With that, part one is complete.
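The terminal steps above roughly correspond to the commands below; the account ID, region, and repository URI are placeholders, and the login command shown matches the older aws ecr get-login style used in the video (newer CLI versions use aws ecr get-login-password).

```bash
# Log in to ECR (older AWS CLI v1 style, as used in the video)
$(aws ecr get-login --no-include-email --region us-east-1)

# Newer AWS CLI v2 equivalent (account ID is a placeholder):
# aws ecr get-login-password --region us-east-1 | \
#   docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build the image, tag it against the ECR repository URI, and push it
docker build -t youtube .
docker tag youtube 123456789012.dkr.ecr.us-east-1.amazonaws.com/youtube:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/youtube:latest
```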
Next we will use AWS ECS, the Elastic Container Service, with the AWS Fargate launch type, to run our container without having to manage any compute resources ourselves. But first we need to set up our network properly, because we don't want our containers to run in a publicly accessible network: we only want the load balancer to be reachable over the internet, while the tasks, the containers, stay in a private network. To meet these requirements, let's create a separate VPC, a Virtual Private Cloud, for the application. I click Services, type VPC, and open the VPC console, where we can set up a custom VPC, the custom network that suits our application. There is a default VPC, but we are not going to use it; instead I create a new VPC, call it my-vpc, use 10.0.0.0/16 as the CIDR range, and hit Create. It's created, so I close this dialog, and that is the VPC.

Inside this VPC let's subdivide the network into four sub-networks: two of them will be public subnets, where the load balancer runs, and two will be private subnets, where the tasks run. I go to the Subnets section and click Create subnet. The first one I name my-vpc-public-1; I pick the VPC, my-vpc, choose the availability zone us-east-1a, and carve the main /16 CIDR into smaller pieces; for this one I type 10.0.1.0/24, a /24 CIDR, and click Create. My first subnet is created; if I filter by the VPC (it may take a refresh before it shows up), there it is, a public subnet with 10.0.1.0/24. Now the second public subnet: Create subnet, my-vpc-public-2, my-vpc, and this time availability zone us-east-1b (the first was us-east-1a), with the CIDR 10.0.2.0/24. Make sure these subnet CIDRs are distinct and do not overlap. With public 1 and 2 created, I create the private subnets: my-vpc-private-1 in my-vpc, availability zone us-east-1a again, CIDR 10.0.3.0/24; and my-vpc-private-2, this time in us-east-1b, CIDR 10.0.4.0/24. You can divide your subnets as required; this is just how I chose to do it.

To make the public subnets actually public, I have to attach an internet gateway. I go to Internet Gateways, create one named my-vpc-igw, and create it. At the moment it is detached, so I click Actions, Attach to VPC, pick my-vpc, and click Attach; it is now attached to my VPC. Next I create two route tables: one route table gets a route to the internet gateway I just created and is attached to the public subnets. I click Create route table, name it my-vpc-public-rt (RT for route table), select the VPC, and click Create. With the route table created, I open the Routes tab; at the moment it has only the local route, but we want to add a route that points to the internet gateway as well, so I add the 0.0.0.0/0 destination, meaning anything other than the local routes goes to my internet gateway, pick the suggested internet gateway, and save the route. Now there are two routes: anything outside my network CIDR is directed to the internet gateway, which is what we want. Then I create the second route table, my-vpc-private-rt, pick the same VPC, and create it; this one does not need the internet gateway attached. Now I associate the private route table with my two private subnets: I click Subnet Associations, Edit subnet associations, select private 1 and private 2, and save, so those two subnets are associated with my private route table. Then I select the public route table, the one with the internet gateway route, go to Subnet Associations, click Edit associations, select the public subnets, public 1 and public 2, and save. OK, with that we are done with our network creation.
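The same network could be built with the AWS CLI instead of the console; this is only a sketch, with the IDs shown as placeholders that you would substitute with the values each command returns.

```bash
# VPC with a /16 CIDR (IDs below are placeholders for values returned by each command)
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Two public subnets (for the load balancer) and two private subnets (for the tasks)
aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER --cidr-block 10.0.3.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-PLACEHOLDER --cidr-block 10.0.4.0/24 --availability-zone us-east-1b

# Internet gateway, attached to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-PLACEHOLDER --vpc-id vpc-PLACEHOLDER

# Public route table: default route to the internet gateway, associated with the public subnets
aws ec2 create-route-table --vpc-id vpc-PLACEHOLDER
aws ec2 create-route --route-table-id rtb-PUBLIC --destination-cidr-block 0.0.0.0/0 --gateway-id igw-PLACEHOLDER
aws ec2 associate-route-table --route-table-id rtb-PUBLIC --subnet-id subnet-PUBLIC-1
aws ec2 associate-route-table --route-table-id rtb-PUBLIC --subnet-id subnet-PUBLIC-2

# Private route table (local route only for now), associated with the private subnets
aws ec2 create-route-table --vpc-id vpc-PLACEHOLDER
aws ec2 associate-route-table --route-table-id rtb-PRIVATE --subnet-id subnet-PRIVATE-1
aws ec2 associate-route-table --route-table-id rtb-PRIVATE --subnet-id subnet-PRIVATE-2
```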
Just to summarize: I have created a VPC, my-vpc, with the 10.0.0.0/16 CIDR, and divided that CIDR block, the whole IP address range, into four smaller ranges, which are my subnets. If I filter by my VPC, I have two private subnets with the private route table attached, which has only the local route that routes within the VPC, and two public subnets that reference the public route table, which has an additional route sending any IP address outside my VPC to the internet gateway. Any resource I add to the public subnets will therefore be accessible from the internet. In this series we will spin up the load balancer inside the public subnets, so it can be reached from anywhere on the internet, while the container tasks will be spun up inside the private network, configured so that only the load balancer can reach them and nobody outside can.

In part 3 we are going to create the ECS cluster and create a task definition using our Docker container image. I click Services and search for ECS, the Elastic Container Service. I already have a couple of clusters, but I will create a new one: I click Create Cluster, and among the choices I pick "Networking only - powered by AWS Fargate", because we don't want to manage the cluster's compute resources, such as EC2 instances, ourselves; we let AWS manage them for us, which is why we use Fargate. I click Next step, give it the name my-cluster, and for the cluster VPC I just accept the default CIDR blocks, enable Container Insights, and click Create, then View cluster. I can see I am in my-cluster.

Now that the cluster is created, we can create a task definition, create a service from that task definition, and have our Docker containers run inside this cluster. The first step is the task definition, which roughly describes how to run a Docker container. I click Task Definitions, Create new task definition, choose the Fargate launch type, click Next, and give it the name helloworld. I could assign a task role here; that is needed if the application running inside the container wants to access AWS services, in which case I would attach an IAM role with the right permissions. But the application running inside this container only returns Hello World to the requester, so we don't need a task role. The network mode awsvpc is selected by default, and for the task execution role I click to create a new role. Then I specify some configuration for the task: how much memory and vCPU to allocate. I pick 1 GB for the task memory and 0.5 vCPU. Next we add the container definition: I click Add container, give it the name hello-world, and then specify where the container image resides, which is basically the repository URL, slash image, colon tag. I open a new tab, go to ECR, the Elastic Container Registry, click on my container repository, copy the image URI from there, come back to the ECS tab, and paste it in.
So it now references my youtube image at the latest tag. I could specify memory limits and so on, but I'm not going to do that. What matters here is the container port mapping: my container exposes port 3000, and normally we would map that to a port on the host, but with Fargate we don't have access to the host, since that is managed by AWS, so we just specify the container port, 3000. The rest can be left as it is; you could add environment variables and so on, but let's not focus on that now. I click Add, and then Create. The helloworld task definition is created successfully; let's view the task definition, and there it is.
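Viewed as JSON, the Fargate task definition configured above would look roughly like this; only the fields discussed in the video are shown, and the account ID, region, and execution role ARN are placeholders.

```json
{
  "family": "helloworld",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "hello-world",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/youtube:latest",
      "portMappings": [
        { "containerPort": 3000, "protocol": "tcp" }
      ]
    }
  ]
}
```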
Now I go back to Clusters and click on my-cluster, the one I created, because it's time to create a service. I click Create, pick Fargate as the launch type again, pick the task definition helloworld and its revision (let's take the latest; the revision number may differ in your case), keep the platform version as LATEST, cluster my-cluster, and for the service name let's use hello-world-service. Then we specify the number of tasks this service should manage; let's say two, so the service will manage two tasks, and if one fails it will remove it and spin up a new task from this task definition. Minimum healthy percent and the other settings I keep as defaults, and in the deployments section the deployment type we are going to use is rolling update, so I select that and click Next step.

Here I have to pick the VPC; this is where we configure the network for the tasks, the containers. I need to select my-vpc, and to be sure which one it is I check its ID in the VPC dashboard, search for it here, and there it is. Now I pick the subnets: remember our tasks should be spun up in the private network, so I pick private subnet 1 and private subnet 2. Then I have to specify the security group. I can edit the security group, and you can see that at the moment it accepts HTTP from anywhere, but we only want our load balancer to access these tasks. We could do that by using the load balancer's security group ID as the source, but we haven't created the load balancer yet, so we can edit this security group later; for now let's keep the source as anywhere and click Save.

To create the load balancer that balances requests across the tasks, I select Application Load Balancer under Load balancing. The health check grace period is basically how long ECS should ignore the Elastic Load Balancer's unhealthy target checks, because it takes some time to spin up a task from the task definition; let's say about thirty seconds. Of the three load balancer types (Network, Application, or Classic), the Classic Load Balancer will not work in this case because with AWS Fargate it has to be an Application Load Balancer or a Network Load Balancer: those support balancing traffic to targets identified by IP address, in this case tasks on specific IP addresses within our VPC. So I pick Application Load Balancer. For the load balancer name, I don't actually have a load balancer at this moment, which is why the "no load balancer found" message appears, and a link is given to create one.

I click that link in a separate tab, and now we will actually create a load balancer. I pick Application Load Balancer, give it the name my-alb, and the HTTP 80 listener is already added for me, so the load balancer will listen for HTTP on port 80. Now I pick the VPC this load balancer should run in, my-vpc, and the availability zones in which it should create load balancer nodes: I pick both us-east-1a and us-east-1b, and for the subnets make sure you pick the public subnets, public subnet 1 and public subnet 2, because we want everybody on the internet to reach the load balancer on port 80; it has to be publicly accessible, while only the tasks sit in the private subnets reachable solely through this load balancer. Then I click Configure Security Groups, choose Create a new security group, call it my-alb-sg for my load balancer security group, and add HTTP on port 80 from any source: 0.0.0.0/0 represents any IP address on the internet. With this inbound rule, anybody can reach my load balancer.

Next I click Configure Routing, and this is where we create the target group. Any request the load balancer receives will be sent on to a target group, and we don't have one yet, so I create a new target group and name it my-alb-tg. Here is something very important: we have to select IP as the target type, because the service we create in ECS will create tasks inside our VPC, allocate specific IP addresses from the private subnets we defined to those tasks (containers), and our load balancer must be able to send traffic to those IP addresses. The protocol is HTTP and the port is 80, and the health check endpoint is /health; if you remember, our Express application has a route called /health. Then I click Next: Register Targets. I am not adding any static IP addresses here, because we will let ECS dynamically register those IP addresses with the load balancer, so we don't statically enter anything; let's just click Create. It creates my load balancer; I click Close, and this is the ALB that was created.

Back in the previous tab, I click the refresh button, and there we go: I can select my-alb, and it has already picked up the task definition container hello-world and the port we exposed, 3000, which is good. I click the Add to load balancer button, and I can now select port 80, HTTP, as the production listener port, so any traffic that reaches my load balancer on port 80 over HTTP will be directed to the my-alb-tg target group we just created; you can see the target type was detected automatically, and the health check path was filled in automatically as well, because we already configured that when creating the target group. That all looks fine. Under Service discovery I untick the option, because we don't have any other microservices that need to address this one; if there were, service discovery would be really useful for addressing microservices separately. Then I click Next step, keep the default auto scaling options, click Next step again, and finally I create my service.
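For reference, the load balancer pieces configured above could also be created from the CLI. This is only a sketch with placeholder IDs and ARNs, shown to make the relationships explicit: the ALB sits in the public subnets, the target group is of type ip with a /health health check, and the port 80 listener forwards to that target group.

```bash
# Placeholders: substitute the real ARNs/IDs returned by the earlier commands
ALB_ARN="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/PLACEHOLDER"
TG_ARN="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-alb-tg/PLACEHOLDER"

# Application load balancer in the two public subnets, with the HTTP-from-anywhere security group
aws elbv2 create-load-balancer --name my-alb \
  --subnets subnet-PUBLIC-1 subnet-PUBLIC-2 \
  --security-groups sg-ALB-PLACEHOLDER

# Target group of type "ip", because Fargate tasks are registered by private IP address
aws elbv2 create-target-group --name my-alb-tg \
  --protocol HTTP --port 80 --vpc-id vpc-PLACEHOLDER \
  --target-type ip --health-check-path /health

# Listener on port 80 that forwards all traffic to the target group
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions "Type=forward,TargetGroupArn=$TG_ARN"
```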
Now I view the service; you can click View Service here, or select the cluster again. I click my-cluster, and you can see one service is there, my hello-world-service, and the desired task count is 2, so this service is managing two tasks, but at the moment zero tasks are running. I go to the Tasks tab to see how it is going, and the status is still PENDING while the desired state is RUNNING. After a couple of refreshes it is still pending, and after some time you will see a task disappear; if you go to the Stopped section you will see the tasks are stopping. Two tasks have now been stopped, so what is the reason? My tasks are not reaching the desired RUNNING state, and this has to do with internet accessibility from my tasks. We spun the tasks up in a private subnet which does not have any connectivity to the internet, but if we look at our Dockerfile there is that line that runs npm install, which pulls the package dependencies, like Express, and installs them. When my service spins tasks up from the task definition it cannot do that, because there is no internet access, so the task fails, and that's why our tasks never come into the running state.

So what is the option? We don't want to put the tasks in a public subnet that everybody can access. The solution is to add a NAT gateway for our private subnets. A NAT gateway basically allows resources in a private subnet to reach out to the internet and grab any updates, while nobody on the internet can reach our tasks through it; that's how NAT gateways are designed. Let's do that: I go to Services, open VPC in a new tab, and from the left sidebar select NAT Gateways. I don't have any NAT gateways, so I click Create NAT gateway. We have to specify the subnet where this NAT gateway should run, and a NAT gateway should always run in a public subnet with internet access, because it has to reach the internet on behalf of our private tasks; in this case I select my-vpc-public-1, which has the internet gateway route attached. Then I have to allocate an Elastic IP, so I click the Create New EIP button, which creates a new Elastic IP and assigns it, and then I click Create a NAT gateway. It's created. Now I click the Edit route tables button, filter by my VPC so I only see the route tables associated with my-vpc, and what I basically have to change is my private route table, the one associated with my two private subnets; if I click Subnet Associations, both of my private subnets are attached to this private route table. I go to the Routes tab, where at the moment there is only the one local route, click Edit routes, and add a new route: 0.0.0.0/0, meaning any IP address other than one from this range, should be sent not to the internet gateway but to the NAT gateway, so I select this NAT gateway ID and save the routes. I now have this new rule in my route table, and our tasks should be able to reach the internet, so let's go back to the ECS tab and see whether the service is now becoming active.
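A sketch of the same NAT gateway fix from the CLI, assuming placeholder IDs: allocate an Elastic IP, create the NAT gateway in a public subnet, and point the private route table's default route at it.

```bash
# Allocate an Elastic IP and create the NAT gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC-1 --allocation-id eipalloc-PLACEHOLDER

# Default route in the private route table now goes to the NAT gateway,
# so tasks in the private subnets can reach the internet (but not vice versa)
aws ec2 create-route --route-table-id rtb-PRIVATE \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-PLACEHOLDER
```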
At the moment I still don't see any running tasks, so let me go to the Tasks tab; they are still in the pending state, so let's do a couple of refreshes and give it a couple of minutes. There we go, I have one task in the RUNNING state, which is a good sign; the other one should also reach RUNNING in no time. Let me do another refresh. All right, both of my tasks are now in the running status.

Now we have one other change to make. If you remember, we allocated a security group for this service, and we have to make sure that security group has an inbound rule whose source is the security group attached to the Elastic Load Balancer; at the moment we have only set up HTTP access from anywhere, and that's not going to be enough. To update the security group inbound rule I can simply click Update here, because I'm in the hello-world service; I click through to update the service, give it a second to load, click Next step without changing anything, and under the network configuration section you will see the security group of the service, which I open in a new tab. This security group is what is attached to our service at the moment, and I am going to add the security group ID of my load balancer as the source of an All TCP inbound rule. Let me show you how to find the security group of the load balancer: open a new tab, go to Services, open EC2 in a new tab, and go to the Load Balancers section on the left. If I expand this section and scroll down a bit, under the Security section you will find the security group attached to this ALB, so let me open it. There we go, this is the security group attached to it, and its name is my-alb-sg as well. If I check the inbound rules on this security group, HTTP is allowed on port 80 from anywhere, which is fine, because my ALB should be accessible to anybody on the internet. What I want to restrict is access to my private tasks, or rather the service that manages my tasks: only this ALB should be able to reach the tasks in the service. So I copy this security group ID, go back to my service's security group, click Edit inbound rules, select All TCP, paste that security group ID (the security group ID of my ALB) as the source, and save the rules.
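The inbound rule change described above, expressed with the CLI as a sketch: the service's security group accepts TCP traffic only when the source is the load balancer's security group. Both group IDs are placeholders.

```bash
# Allow traffic from the ALB's security group into the service's security group.
# sg-SERVICE and sg-ALB are placeholders for the two security group IDs.
aws ec2 authorize-security-group-ingress \
  --group-id sg-SERVICE \
  --protocol tcp --port 0-65535 \
  --source-group sg-ALB
```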
After that I can go back to my load balancer and click Listeners; you can see there is a listener set up for HTTP on port 80 that forwards traffic to my target group, so let me click on that. All traffic coming in on port 80 is forwarded to this target group, and if I check the targets right now they are all healthy; you can see the IP addresses from the private subnet ranges, and those are the addresses my two tasks are running on. So now everything is connected. Just to confirm everything works, I go to the load balancer again, copy the DNS name, paste it into a browser window, and hit enter; there we go, my Hello World message is printed out.

To summarize the flow again: anybody on the internet can use the DNS name of my load balancer, and once a request is made to the load balancer it is forwarded to the target group attached to it. In this target group, ECS automatically registers the two IP addresses belonging to the two tasks managed by our service. Let me show you the service: this is my ECS console, I'm in my-cluster, and it has this hello-world-service. The service has a desired task count of two, so it will always make sure two tasks are running; if I come here and stop one of the tasks (I click on it and click Stop), my service will have only one running task for a moment. You can see there is only one running task but the desired count is two, so what the ECS scheduler does is spin up another task; if I go to the Tasks section we can probably see it provisioning a new task right now. Still one task; a couple of refreshes, and there we go, it's provisioning a new one. Meanwhile I can still access my website, served from the remaining task, through the ALB DNS name, so I have a highly available system.

During the last three parts we have built an architecture like this: I have my code on my laptop, a containerized Express application. I build that container image locally and upload the built image to AWS ECR, the Elastic Container Registry. Then in ECS, the Elastic Container Service, using AWS Fargate, we create a task definition from the image we pushed to ECR, and we associate that task definition with an ECS service, which maintains the number of tasks we defined; since we asked for two tasks, the service will always maintain two tasks, or containers if you like, and those containers run my Express application. Anybody on the internet accesses them through an Application Load Balancer: once they make a request it reaches the ALB, and the ALB routes the traffic to those tasks in my ECS cluster running on Fargate.

Today we are going to automate this whole flow: we are not going to manually push container images to ECR; instead we will use AWS CodePipeline to build the image and push it to ECR, and then we will have another stage that deploys a task definition using those ECR images into the ECS cluster. Before that, a quick word about the NAT gateway we spun up so that the tasks in the private subnets can access the internet and grab the Express npm dependencies: it is going to cost you money, so if you are following along, make sure you remove the NAT gateway once you are done, otherwise you will be charged.

Now I am here at my ALB endpoint, the ALB DNS name; if I refresh it you can see I just changed the message to "hello YouTube", so the task, the container running in AWS ECS, is sending me this message. I have also updated the CI/CD container repository on GitHub and added a buildspec.yml file, so I want you to sync; you can easily do that with git pull. Mine is already up to date, but make sure you have this buildspec.yml file, because it is the one we are going to use in the build stage of our AWS CodePipeline. The first step is to link our GitHub repo with AWS CodePipeline; right now my GitHub repository is not linked to any pipeline, so I will create a CodePipeline first. Let me go to the AWS Management Console, and make sure you are in the region where your ECS cluster is.
My ECS cluster is in us-east-1, North Virginia, so I'm going to create the CodePipeline in the same region. I search for CodePipeline; I don't have any pipelines at the moment, so I click Create pipeline and name it youtube. Here you can either create a new service role or use an existing one; I'll create a new service role and click Next. This is where we configure the source, i.e. where we get the source code from, which is GitHub, so I pick GitHub and click the Connect to GitHub button; you might have to click through the accept/allow screens (I have already approved those), and afterwards you get the message that you have successfully configured the action with the provider. If you then click on Repository you should see all the repositories in your GitHub account; I search for my CI/CD repository and select it. If you are following along you can simply fork it into your own GitHub account and then link it with AWS CodePipeline in the source stage like this. The branch I'm using is master, and I select GitHub webhooks, so every time you push a change to GitHub the pipeline is triggered. The source stage is now configured.

Next is the build stage, where we are going to use AWS CodeBuild, so I pick CodeBuild, select the region us-east-1, and click Create project, which opens a new window that you can maximize. I give the CodeBuild project a name, let's just say build, and then I pick the environment this build should use. I can either use a managed image or a custom image where I specify the Docker image; I'll use a managed image, the operating system I'm going to use is Ubuntu, and the runtime is Standard, which comes with Docker installed if I'm not mistaken. For the image you can pick standard:1.0; if you pick standard:2.0 you have to explicitly specify the runtime in the buildspec file, and with standard:1.0 you don't, so select 1.0 here and always use the latest image version. Then make sure you check the privileged flag, the one that says to enable it if you want to build Docker images, because we are going to build a Docker image out of the code we get from the source stage. I can create a new service role or use an existing one; I'll name it codebuild-build-youtube-role, and later we will go to this role and give it permission to push and pull images on ECR by attaching the ECR Power User managed policy; we'll do that afterwards. Under additional configuration you don't have to add anything, and then you can either use a buildspec file or insert the build commands inline. This is where we make use of the buildspec file that you may have noticed when pulling the latest code from GitHub, so let me open it.

This buildspec file defines the steps CodeBuild should execute, and there are several phases: a pre_build phase, a build phase, and a post_build phase. If you remember, when we pushed our Docker image to ECR from the local machine, we first had to log in to ECR, and that is what happens in the pre_build phase; it is the same command we used to log in to ECR, aws ecr get-login.
After this runs, we are successfully logged in to ECR. Then in the build phase we first build our Docker image and tag it, and after that, in the post_build phase, we push the built Docker image to ECR; that is what happens on that line. Now let's replace the placeholders. We build the Docker image with docker build -t, then the image name, then the context (the dot, so it can find the Dockerfile in the same folder); the image name is the one we used, so I copy it and replace the image-name placeholder. Next we tag it: docker tag, the image, and then the ECR URL, which we need to look up, so I open ECR in a new tab, click on the repository, copy the image URL, come back, and replace the placeholder; the tagging line is now complete. After that we push the tagged image to ECR with docker push and the same ECR URL we just copied, which pushes my image to ECR.

But that alone is not enough, because the next step is deploying a task definition to our ECS cluster, and we have to hand some information to that next stage. That is what the last line does: it creates a new JSON file called imagedefinitions.json with this content. It has a name, which we replace with the name we used in our task definition: I can find it by going to ECS, to the cluster we configured in the previous parts, and then to Task Definitions; the name is hello-world, so I copy it and paste it in place of the name placeholder. Then we provide the image URI, which is nothing but our ECR image URL, so I copy that and replace the placeholder as well. The command writes that JSON into a file called imagedefinitions.json, and in the artifacts section we output that image definitions file so the next stage, the deploy stage, can use it.
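Putting those pieces together, the filled-in buildspec could look roughly like this; the repository URI and account ID are placeholders, the container name is assumed to match the task definition's container, and the login command matches the CLI v1 style shown in the video.

```yaml
# buildspec.yml - sketch with the placeholders filled in (repository URI is an example value)
version: 0.2

phases:
  pre_build:
    commands:
      # log in to ECR so the built image can be pushed later
      - $(aws ecr get-login --no-include-email --region us-east-1)
  build:
    commands:
      # build the image and tag it against the ECR repository
      - docker build -t youtube .
      - docker tag youtube 123456789012.dkr.ecr.us-east-1.amazonaws.com/youtube:latest
  post_build:
    commands:
      # push the image and write the file the ECS deploy stage needs
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/youtube:latest
      - printf '[{"name":"hello-world","imageUri":"123456789012.dkr.ecr.us-east-1.amazonaws.com/youtube:latest"}]' > imagedefinitions.json

artifacts:
  files:
    - imagedefinitions.json
```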
So I replace those placeholder values, copy the whole content, and go back to our CodePipeline tab, where we were choosing between using a buildspec file and inserting the build commands inline. Either way works; for simplicity I will insert the build commands inline, so I select Switch to editor, clear the code that's there, and replace it with the edited buildspec content. Make sure you also check the CloudWatch Logs option so we can see the logs, and then click Continue to CodePipeline. It says the name already exists, because I created one earlier, so I'll just use build-youtube as the project name and try again. OK, it was saved successfully and took me back to the CodePipeline step where I left off, with the CodeBuild project name filled in, so I click Next.

The next step is the deploy stage, where we are going to use Amazon ECS as the provider; it is a very easy way of deploying services to ECS. I pick the ECS deploy provider, pick the region us-east-1, the cluster I am going to deploy to, my-cluster (the one we created earlier), and the service I am going to deploy to, hello-world-service. Then I am asked to enter the image definitions file, the JSON file we output in the previous stage, which is imagedefinitions.json; make sure you spell it correctly. I click Next, review, and click Create pipeline. It is created and starts pulling the code from GitHub, but it is going to fail at the build stage, because we haven't yet added the ECR permission to our CodeBuild role. You can see it failed; if I click the link I can see the command execution error: the part that logs in to AWS ECR didn't run, because we don't have the necessary permission attached to that role.

So let's fix that. I go to Services, open Identity and Access Management (IAM) in a new tab, and go to Roles. The role I created is the codebuild-build-youtube-role, the one attached to my CodeBuild project, so I click it; you can see it has no permission to talk to ECR. I click Attach policies, and there is a managed policy you can find by searching for "EC2 Container Registry power user"; that will do, you don't need full access, power user is enough, and I click Attach policy. With that, my AWS CodeBuild project should be able to log in to ECR and that error message will not appear again.

I go back to CodePipeline; the first run failed, which is fine; I click into it and click Release change. It goes into In Progress mode, pulls in the source, and the build stage starts; we'll see if it succeeds. I click into the CodeBuild project again and watch the logs; still in progress, and as you can see the login succeeded, so we are logged in to ECR, our Docker image is built (you can see the steps executing), and finally it has created the imagedefinitions.json file, so this should finish shortly. Yes, it succeeded, so the build stage is complete. Back in the CodePipeline, the build stage shows as completed and the next stage, the deploy stage, is now deploying to ECS. If I click the ECS link it takes me to my cluster, and as you can see a new task definition revision, 8, has been created (earlier it was 7). If I go back to the cluster I can see there are four tasks running at the moment, although the desired count is two, so let's see what's happening. I go to the Tasks section and see all four tasks: two tasks with the new revision 8 and two tasks with revision 7, the old ones. What ECS will do is remove the two old tasks and maintain the desired count of two with the latest task definition revision; that takes a little time, so I'll pause the video for a bit. OK, the deployment is now complete; you can see it succeeded. I go to my ALB and refresh, and it still shows "hello YouTube", which is all fine.

So now let's make a change to the message. I go to app.js, and instead of "hello YouTube" I type "Hello World" with two exclamation marks. Then in the terminal I add all the changes, run git commit with the message "change display name", hit enter, and push to origin master. It's done; you can see the "change display name" commit from 23 seconds ago.
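The commit that triggers the pipeline is just an ordinary push to the master branch, roughly:

```bash
# commit the changed app.js and push it; the GitHub webhook then starts the pipeline
git add .
git commit -m "change display name"
git push origin master
```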
Now if I go to CodePipeline I can see the pipeline is in progress: as soon as I committed the code, the webhook fired and the pipeline started. There you go, it's running the build stage, and by the end of this pipeline I should, ideally, see my new message, so I'll pause the video and check back. OK, the pipeline is now complete, so let's go to our ALB DNS name, the URL, and refresh. This is the moment we've been waiting for; I click refresh, and there you go: Hello World. Perfect. So my pipeline pulled the code from GitHub, built the Docker image, tagged it, and pushed it to ECR, and at the deploy stage it created a new task definition revision and deployed tasks from that definition to our service, so that our load balancer routes traffic to those tasks, those containers, and the newly created container shows me the new message, Hello World. All right, that's what I wanted to show you; let's conclude this last part. I want to remind you once more: when you are done with this, make sure you delete the NAT gateway, otherwise you will be charged. OK, see you again in a new video. Thanks.
Info
Channel: Enlear Academy
Views: 28,006
Rating: 4.949367 out of 5
Keywords: aws, aws fargate, docker, vpc, application load balancer, continuous delivery, aws codepipeline, amazon web services, docker tutorial, aws cloud, Fargate, aws tutorial, aws training, aws fargate introduction, aws fargate docker, ecs, aws fargate load balancer, aws ecs fargate tutorial, application testing tutorial, docker aws tutorial, testing application, aws dockerized, aws simplified
Id: aa3gGwJpCro
Length: 59min 19sec (3559 seconds)
Published: Sat Mar 28 2020