The Beginner's Guide to Running Docker Containers on AWS

Captions
Hey, what's up everybody, it's Mike Pfeiffer. Welcome back to another episode of CloudSkills TV, and in this episode, this is the beginner's guide to running Docker containers on AWS. This is a project-based training lesson that we did at cloudskills.io as part of the AWS Certified Solutions Architect content that we deliver, so this is a pretty long tutorial. Basically, we take you from the beginning, getting the basics of Docker containers down, and then using some of the services available in AWS to help you run containers in production. Keep in mind this was part of a hands-on project, so you can follow along and do the hands-on work, or you can just sit back and watch what I show you here to learn more about running containers on AWS. Hopefully that sounds awesome to you guys; let's go ahead and start this episode.

The idea with this project is that we're architects working on a team for a particular organization, and management wants us to evaluate the options for a containerized web application running on AWS. That means we have to understand the basics of Docker containers, just the very basics, and then figure out the best way to run this on AWS. Some additional requirements: it's got to support high availability within a single region, and we already know from a high level how that works; we just have to make sure we can architect this thing across multiple Availability Zones. It also needs to support auto scaling, and it needs to integrate seamlessly with an Application Load Balancer here in AWS. The final requirement is that we want to identify the solution that requires the least amount of administrative effort. We want to make this as simple as possible; we don't want to burden the developers with a lot of infrastructure concerns, so it's up to us to figure out the right solution here in AWS. The reality is there are several different ways, which I'm going to show you, that we can run containers on the AWS platform, and some of them are more complicated than others, so we're going to spend some time looking at the different options to get a good, clear picture of how all this works. Again, I want to stress that there's no real necessity here for you to become a master with Docker containers; we just want to understand what the options are and be able to speak to these things for organizations that are looking to containerize their workloads. With all that said, let's hop over to the next video and get an idea of how Docker containers work, in case you haven't worked with them in the past.

Before we dive in, I wanted to give you an idea of what containers actually are, how they work, why they're interesting, and why so many companies are trying to use them in their applications, just in case you haven't started working with these in the past. A great little introduction to this is here on the Docker website; you can see the URL in the address bar, so go check it out if you want. This gives you an idea of what containers are and why they're interesting. One thing I want to mention is that containerization, the ability to do process isolation, is really an operating system construct, and the Docker software just makes it easier. So you can run containers without Docker; however, Docker has become kind of a standard when it comes to working with containers.
Let's scroll down on this page; they talk about the benefits and different use cases, but what I really want to show you is the difference between containers and virtual machines. If you've worked with virtual machines in the past, you know that one of the ways we isolate applications is to put them in virtual machines, like we see here, and of course we've been doing this with EC2 instances, and you've probably done this with your own systems. We've got infrastructure, some kind of physical machine; that server runs a hypervisor that gives us the ability to run virtual machines; and as you can see here, we've got three virtual machines running on this infrastructure. Applications A, B, and C are all completely isolated from each other because they're running inside virtual machines. Obviously this model has worked great for a really long time. It's given us the ability to change the way we build and operate IT infrastructure, and it's a really nice way to build applications that don't conflict with each other: application A runs inside its own little isolated environment inside its own virtual server, application B does the same thing, and so on.

The challenge with this implementation is that there's a lot of bloat, overhead, and baggage that comes with virtual machines. We've got the guest operating system, we've got the allocation of virtual resources to these VMs (a certain amount of CPU, memory, and storage capacity), and we've got to license those guest operating systems. In some cases it takes a long time for them to boot up because they have to virtualize hardware, so there's the whole POST routine and the server booting up, and that takes time. One of the things containers help us do is remove a lot of that overhead. On the containerized side of the diagram, we've got our infrastructure (it could be a physical server or a virtual machine), some kind of host operating system, and then the Docker engine, which is responsible for spinning up the containers. It's the same kind of idea: applications isolated from each other in these little Docker containers, but we don't have to virtualize an entire guest operating system, and we're not really virtualizing any hardware. All we're doing is using the Docker engine to spin up different processes that are contained inside what they call a Docker container image. The cool thing is that they all use the same host operating system, so they can start almost instantly; it only takes milliseconds to fire up the applications running inside these containers, and the containers are isolated from each other just like in a virtual machine setup. The benefit is we don't have the heavy overhead of a guest operating system and the allocation of memory and CPU in the same way we do with virtual machines. Essentially, it's faster, easier, and more portable: we can ship these container images around to different infrastructure, spin the application up really quickly, and be more agile with a smaller footprint in terms of how much capacity and resources these isolated applications actually consume.
So that's really just the theoretical piece; let's do this in a practical scenario. In the AWS console, I'm going to set up an EC2 instance where we can spin up a Docker container. I could do this on my Mac or on a Windows computer; it's very common for developers to run Docker locally on their development system, build their container images, and then push those images up somewhere real servers can get to them. But I'll show you this way so you don't have to install anything on your own computer. I'll come into the EC2 service, launch an instance, pick an Amazon Linux AMI, and run it on a t2.micro. In terms of setting up an IAM role, we will want one here, so keep this screen open while I go to the IAM console in another tab. I'll go to Roles and create a new role for this EC2 instance so I can control it remotely; this is something you've seen me do in the past. It's an EC2-based IAM role, and we'll delegate the Amazon EC2 role for SSM to this server so we can control it. We'll just call this ssm-role and click Create role. That's built and looks good, so back over here let's refresh and pick the ssm-role. Next, default storage; we'll name this instance docker-host, click through to the next screen, call the security group docker, and add an additional rule for HTTP, because what we want to do is spin up a web server container on this machine. View instances, and this guy is coming online.

Okay, now that it's running, let me head into Systems Manager, go to Session Manager on the left-hand side, and start a new session on that computer: click Start session, there's the docker host, click Start session again, and that gives us a terminal. We'll sudo up with sudo su, and then run yum install docker -y. That installs the Docker engine, the service that actually runs the containers, and it also gives us the docker command-line tool we can use to interact with that service. Now that that's done, let me clear the screen. If we run the docker client with no arguments at this point, it gives us a list of all the subcommands we can run. For example, if I try something like docker images to see which container images are available, I get an error because I haven't started the service, so let me first run service docker start. Now that the service is started, docker images comes back and shows that there are no images out there. One thing to keep in mind is that there's the command-line tool, and then there's the server process as well.
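For reference, here's the full sequence from that session; this assumes an Amazon Linux EC2 instance, as in the video:

    sudo su                   # become root on the instance
    yum install docker -y     # install the Docker engine and the docker CLI
    service docker start      # start the Docker daemon
    docker images             # list local images (empty at this point)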
Now we can download some images to work with. For example, if we go to hub.docker.com, there are a ton of images out on Docker Hub, and you can create your own images as well. There's one for nginx, which is basically a reverse proxy for web applications; here's the official nginx container image. So here's the idea: if we wanted to build a web server using nginx, we could use this Docker image. These images are really small; they're not an entire operating system like a virtual machine image, it's just the application, and we can use the command shown here to pull the image down and work with it locally. Docker Hub is considered a container image repository, and this is a public one; of course you could do a private repository on this platform, and AWS also has a container registry where you can store your own container images. In this week's content we're not going to get into the container repository in Amazon, but be aware that you can do that; you can store your own images in AWS if you want to.

Let's run this: docker pull nginx. The cool thing is that when you run this command, the default container registry it looks at is Docker Hub, so it should just work. Let's go ahead and run it; it pulls down the image, and it looks like it's done. Clear that, run docker images again, and there we go: we've got the latest nginx container image, and it's only about 100 megabytes in size. Just that alone is very compelling, because a virtual machine image is typically several gigabytes, and these container images can be super small; some are even smaller than 100 megabytes. Now that we've got the image, we can actually spin up a container using the run subcommand. When these containers start up, it's similar to a virtual machine in that each one has its own virtualized network environment, so if we want to communicate with the container, we have to go through the host's IP address. What we want to do here is pass -p to say which port on the host maps to port 80 inside the container; in this case port 80 on the host will map to port 80 inside the container, and then we just say nginx. All we're doing is running the Docker client with the run subcommand and telling the service to fire up a container using the nginx image, and when the container comes online, map port 80 on this container host (the server running the Docker service) to port 80 inside the container, so we can hit port 80 and see the web application. Let's run the command... and it looks like I misspelled the name, so let me clear that; I cannot type today, as you can see. Up arrow a couple of times, it's nginx, okay. Now we're getting a flashing cursor, so we're running the container interactively. Inside the container there's an instance of nginx, the web server (or reverse proxy server), but at this point it's completely isolated: nginx isn't actually installed on this EC2 instance, it's just available inside the container that's currently running. So if we go back over to EC2, grab the public IP for this instance, and navigate over there, we see "Welcome to nginx!", and if we look at the terminal, we can see it's set up to output the server log information, so it sees my client hitting that application. That's just the way this particular container image is set up.
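Again for reference, those commands from the demo look like this:

    docker pull nginx            # pull the official nginx image from Docker Hub
    docker images                # confirm the image is available locally (~100 MB)
    docker run -p 80:80 nginx    # map host port 80 to container port 80 and run interactively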
If I hit Ctrl+C and break out of the container, you can see I'm back at my prompt, and if I head back over to the browser tab and hit refresh, the web service is dead. But that's basically the idea of running a container on a container host, a server hosting the Docker engine. Now, one thing I want you to keep in mind is that we have a requirement to make sure everything we run is highly available. Running a web application inside an nginx container on this single server is great for building an application on one machine; that works fine. But what about high availability? How do we get to a point where we can run multiple instances of this container image on multiple EC2 instances? As you might imagine, sitting here running these docker commands on every individual server would be very hard to do, and that's why we have container orchestration platforms.

Okay, back in the AWS management console, let's scroll down a bit. We want to stay in the Compute section, but instead of EC2, we want to work with the ECS service, or at least evaluate it against our project requirements. This is a service built to help us run containers in production across multiple EC2 instances, like we were talking about in the last video. One big thing to keep in mind is that ECS, the Elastic Container Service, is proprietary to AWS; even though it works with Docker containers, this is a container orchestration service built specifically by Amazon. There are a bunch of third-party options available out in the world, and I'll discuss some of the other options available from AWS, but this is definitely a service we want to evaluate to see if it's going to work for us. The cool thing is that this service integrates with everything we've looked at so far in this program: the load balancing system, the EC2 security groups, the volumes, the IAM roles, all the stuff we've spent a lot of time on up to this point. Now, I could click Get Started and spin up a sample application, but I'm going to show you how to build what they call a cluster manually so you get an idea of all the parts and pieces. Over on the left I'll go to Clusters, and we're going to build a cluster. A cluster is really just a group of EC2 instances that ECS spins up for you as you go through this process, and then we'll be able to spread our containers across those instances inside the cluster. Let's click Create Cluster, and we've got some templates to pick from. This first one I'm going to ignore for now (we'll come back to it), and then we've got EC2 Linux or EC2 Windows. When it comes to running containers, remember there's no virtual operating system, no virtualized hardware at all; it's just processes isolated from each other. So if I want to run a Linux-based container, I need to do that on a Linux server, and if I want to run a Windows-based container, I need to do that on a Windows server, because again, we're just using the operating system under the hood that's powering the server. We're going with a Linux-based implementation here; click Next, and we'll give the cluster a name: my-ecs-cluster.
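As a side note, a bare cluster can also be created from the AWS CLI; this is a rough equivalent rather than what the video does, since the console wizard additionally provisions the EC2 instances through CloudFormation, which you'd otherwise launch yourself from the ECS-optimized AMI:

    # creates only the cluster object; instances are registered separately
    aws ecs create-cluster --cluster-name my-ecs-cluster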
Scrolling down, we can pick a provisioning model: either On-Demand or Spot Instances. We could obviously save a lot of money using Spot Instances, but there's a chance those could go away, because that's how that system works, so let's go On-Demand. From the instance list we'll keep this on the free tier with the t2.micro, and we'll create a cluster with just two of those servers. They'll have 22 GB of storage each, and we'll associate a key pair so we can SSH into those machines if we need to; that way, if we want to install software or do anything on the hosts, we can. The cluster is going to be built inside a new VPC; we could also put it in the default VPC in this region, but let's go ahead and let it build a new one, and it's going to build some security groups for us as well. Scrolling down, you can see that it wants to create an IAM role: we want the ECS service to be able to do certain actions on our behalf, and as they describe here, it's going to spin up EC2 instances running Docker, and there's an ECS agent on those instances that needs to communicate with the platform to report what it's doing, handle logging, and a variety of other things. So we'll let this thing create an IAM role for us and delegate that permission to the ECS service, giving it the ability to work with those servers. When we click the button to create the cluster, we can see it's building a CloudFormation stack, and if we navigate over to the CloudFormation service in another tab (I'll just search for CloudFormation, that'll be quicker, and get rid of some of these pop-ups), we can see they're using a template to build the cluster infrastructure. The create is in progress, so I'm going to pause the video and let this thing sync up, and then we'll pick it up from there.

Okay, after a minute or so, we've got green for all these statuses, and the CloudFormation console shows the stack was created successfully, so everything's good there. If we go back over to EC2, we've got three running instances: the docker host we were working with manually earlier, plus these two ECS instances coming online. I no longer need that docker host, so while I'm in here I'll terminate it, and we'll use these two servers to power our containers. Now, one thing I want to point out: if I click on the first instance and scroll down, it's using the same security group as the other instance in the cluster; if I select the other one and take a look, they're both inside the security group we saw when we were creating the cluster in the ECS console, and if we go into that security group, there's a port 80 rule. Here's the thing: remember when I started the container earlier, I mapped port 80 on the host to port 80 in the container. That works fine when you only have one web server container, but there's a chance we could spin up multiple web server containers on each container host, and I can only use port 80 once per IP address. Because of that, one of the things we'll do later is spin up our containers listening on an alternate port on the outside, on the host, and map that to something like port 80 on the inside. So instead of opening port 80, I'm going to open a range of dynamic ports: instead of HTTP, this will be a custom rule that allows ports 31000 through 61000. This will make more sense later when I start spinning up containers, but essentially we're going to start our containers on a random host port in this range and map our applications to those ports. You'll see what I mean when we get to that point; for now, let's just set that up.
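For reference, the equivalent CLI call would look roughly like this; the security group ID is a placeholder, and the video edits the rule in the console:

    # allow the dynamic port range instead of a single port-80 rule
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 31000-61000 \
        --cidr 0.0.0.0/0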
I'll remind you about this later, but now the cluster is built and everything is set up the way we need. If we go back into ECS and view the cluster, we can see the ECS instances in here, and the system was smart enough to put one in each of two different Availability Zones: us-west-1c and us-west-1b. So far that satisfies one requirement in terms of making this thing highly available: the cluster itself is spread across multiple Availability Zones. We could also set up auto scaling for the cluster itself; over here under ECS Instances we can scale manually, and we could set up auto scaling as well. Another requirement for our containerized application was that the solution we pick integrates seamlessly with the Elastic Load Balancing infrastructure here in EC2, and the good news is that ECS fully supports those load balancers. So let's go into Load Balancers and create one. One of the biggest things to keep in mind when working with something like ECS is that the Application Load Balancer is really your only logical choice: the Network Load Balancer isn't really for this use case, and the Classic Load Balancer doesn't have the sophistication to work with containers the way we need. You'll see in the description text for the Application Load Balancer that it has advanced routing and visibility features for architectures including microservices and containers, so we definitely want that type. Let's create it: we'll call it ecs-elb, it'll be internet-facing, and scrolling down the list, the default listener is fine. Notice that the new VPC is picked on the drop-down list; that's the VPC created to support the ECS cluster, which is an important distinction. Let's pick both Availability Zones so we can send traffic to both ECS hosts, and click Next. For now we're not going to do an HTTPS listener. Click Next, and we'll create an external security group called external-web, opening port 80; this is the outside of the load balancer. Click Next, and we have to create a target group; I'm going to call it temp, because it's not actually the target group we'll want to use later. Click Next; we don't have anything to register just yet (eventually we'll want to register containers), so I'll skip that and go ahead and create the load balancer.
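Again, a rough CLI equivalent of that console work, with placeholder subnet and security group IDs:

    aws elbv2 create-load-balancer \
        --name ecs-elb \
        --type application \
        --scheme internet-facing \
        --subnets subnet-aaaa1111 subnet-bbbb2222 \
        --security-groups sg-0123456789abcdef0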
Now that the load balancer is built, let's close out of this, because we're going to do something a little different from what we did when we worked with this last time, and you'll see why a little later: I'm going to let the ECS service build my listener and my target group for me. So rather than using these defaults, I'm actually going to get rid of them to keep things a little cleaner: I'll pull out the default listener on the load balancer, then on the left go back to Target Groups under Load Balancing and delete that temporary target group. That looks good. At this stage we've got a load balancer with basically no listener and no target group, and later we'll set up an ECS service to work with this load balancer; that process will create the listener and a target group set up properly for our container instances. The load balancer is still provisioning, but at this stage we've got an ECS cluster, a couple of EC2 instances powering it, and an Elastic Load Balancer that's available and that we can configure while we build out the rest of the infrastructure.

Let's go back into the ECS console and build our task definition. A task definition, as it says right here, is the container information for your application: how many containers are part of your task, which resources they'll use, how they're linked together, which ports they'll use on the host, and things like that. For a larger application you might have a task definition for the web front end, one for the database back end, and maybe one for an API tier somewhere in your application. And just like we've seen in different architecture diagrams, we want to make sure those individual layers of the application are highly available, spread across multiple Availability Zones; so a task definition could say: hey, we need two web containers, make sure they're spread across two different physical servers inside the cluster. Let's create our first task definition, an EC2-based one for now (we'll come back to Fargate later); remember, we're running EC2 instances in our cluster, and we have full access to those servers to install software and do anything we want. On this screen we give it a name; this will be our web front end for the application, so we'll say web-fe. Scrolling down the list, one of the important concepts to keep in mind here is the task role. Remember when we had EC2 instances and hooked IAM roles to those instances so they could execute commands against the platform? This is the same thing, except we create a specific IAM role and attach it to the task definition, so when containers are spun up based on this task definition, they inherit the IAM role set here, and it won't be set on the container host. This is an important concept: an EC2 instance can start a container where just the container has the IAM role, not the underlying host. If I've got multiple containers, maybe servicing applications for different customers, I might want certain containers to have more permissions than others, and this is how we can carve out that setup. Let's go over to the IAM console and I'll show you a quick example of this.
What we'd do is create a role, but instead of picking EC2 like I've done a bunch of times in the past, we pick the Elastic Container Service, and then the Elastic Container Service Task role; notice it says it allows ECS tasks to call AWS services on your behalf. Click Next, and let's say, for example, the web application that's going to run in our container needs to manage an S3 bucket: it needs to upload and download objects. What we could do is associate the AmazonS3FullAccess policy with this role, click Next, and call it nginx-role, for example, and then attach it just to the task definition. The server running the container isn't going to have permission to S3; just the task, the container itself, gets this role assignment. So coming back over here and hitting refresh, this is the idea: attach the role directly to the task definition, not to the server under the hood. Now, there are some other permissions delegated to this system so those servers can upload logs into Amazon CloudWatch; that's set up here. We can configure how much memory and CPU the entire task can take up, and we can also do this at the container definition level.

We do need to add a container definition as part of this task, so let's click Add container. This panel flies out, and we'll call this the nginx container. The name's not super important, but the image field is: this is the actual path to the repository where we want to download the image from. You saw me earlier, when we were working with Docker in the terminal in the very first video this week, do a docker pull on nginx, and me simply putting nginx here does basically the same thing: when the ECS host goes to start this task, it looks at the image field and knows that with just an image name it can pull from Docker Hub. If we were using our own container registry, like ECR here in Amazon (the Elastic Container Registry), where we put our own custom images, we'd provide a full path and some additional information, and you can see there's an option for private repository authentication. I'm not going to set that up because it isn't really necessary for what we're trying to do; we just want to understand the basic options. In terms of memory, we can set hard limits on the container, so we could say this container can't use anything more than 128 MiB; since we're using a t2.micro, we've only got a gigabyte of memory on the host, so we can be very particular about how much the container can use. Then the port mappings are important: it says host port and container port, which is something we saw from the command line earlier, but here I'm going to type 0 and then 80. That tells the system to use a random outside port on the host, something in that range of ports 31000 through 61000, and map it to port 80 on the container itself. Scrolling down the list, we can also specify how much CPU this container can use; if you hover over the field, it says it's the number of CPU units you can reserve for your container, and a container instance (an EC2 instance, in this case) has 1,024 CPU units for every CPU core. On a t2.micro we've got just 1,024 CPU units, so we could come in here and reserve something like 256. That looks good; we'll leave the rest of the fields at their defaults, click Add, and that's our container definition for this task. So far so good; let's click Create to build the task definition... task definition created successfully.
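To make those settings concrete, here's a sketch of the same task definition expressed through the CLI; the account ID is a placeholder, and the video builds this in the console:

    aws ecs register-task-definition \
        --family web-fe \
        --task-role-arn arn:aws:iam::123456789012:role/nginx-role \
        --container-definitions '[{
            "name": "nginx",
            "image": "nginx",
            "memory": 128,
            "cpu": 256,
            "portMappings": [
                {"hostPort": 0, "containerPort": 80, "protocol": "tcp"}
            ]
        }]'
    # hostPort 0 = let ECS pick a random port from the host's dynamic range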
Now, the important thing to keep in mind is that the task definition being built doesn't mean we've started any containers; it's just the settings, the definition of the settings. If we want to actually start containers, we have to create something called a service. One way to do that is to hit the Actions drop-down and create a service; for example, for web front ends we might have a web front end service that uses the web-fe task definition to start containers with the settings we've defined so far. So we're building a service: we pick EC2 as the launch type, because that's what we did, we built an EC2-based cluster; we pick the cluster we already built; and we'll call this service nginx. We want the service type of Replica, and by default a replica service will place and maintain the desired number of tasks across your cluster, so we can run multiple if we want to. I'm going to start with one, and I'll show you later how to scale this manually. In terms of placement, this thing will do an Availability Zone balanced spread once we go beyond one container and start running multiple containers in this service, or multiple tasks, which is the correct terminology there. With that, let's build the service and click Next step. We've got some more options here: it knows we've got a VPC set up, and it knows we might want to use a load balancer. When it comes to intelligent routing, we talked about the Application Load Balancer being your best bet, and this is especially true when you're running multiple containers on your container hosts, so let's pick the Application Load Balancer. We need to delegate some permissions to the service to manage the load balancer, which is fine. Then down here, "container to load balance", this is important: we click the button to add this container to the load balancer. Essentially we're saying that when we spin up containers based on the nginx task, the web front end task, they'll be connected to the target group on the load balancer. This is where it builds a listener on port 80 on the load balancer and creates a target group for us called ecs-nginx, and this is why I deleted the listener and the target group on the load balancer earlier: I want the ECS service configuration here to set these up for me, and you'll see how this works later. Service discovery is optional; if we wanted to, we could have this environment register a DNS record in Route 53 that makes it easy to find this service via DNS instead of using some IP address or host name, but I'm going to uncheck it because we don't really need it at this stage. Click Next step; I'm not going to set up auto scaling right now, although we could, for both the containers in this service and the underlying ECS hosts; let's just keep it static for now. Now let's create the service, and creating the service should spin up the container and then register it with the target group that just got built on the load balancer.
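A rough CLI equivalent of that wizard; from the CLI you'd create the target group yourself first, so the target group ARN below is a placeholder:

    aws ecs create-service \
        --cluster my-ecs-cluster \
        --service-name nginx \
        --task-definition web-fe \
        --desired-count 1 \
        --launch-type EC2 \
        --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/ecs-nginx/EXAMPLE,containerName=nginx,containerPort=80"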
The service is now created, so if we go into View Service, we can see down at the bottom that the web front end task is spinning up a container to power our nginx service; hit refresh, and now it's running. Let's hop over to the other tab, go to the EC2 console, scroll down the left-hand side, and go to Load Balancers: the load balancer is active, which looks good. Go to Target Groups on the left, bring this up a bit so we can see what's going on, and go to Targets: we've got a container running on one of our container hosts, listening on port 32768, running in us-west-1b. If we visited that port number on the host, it would take us to port 80 inside the container; but the cool thing is the load balancer has a listener on port 80, so if we hit port 80 on the load balancer, it maps the connection to port 32768 on the ECS host, which eventually gets us to port 80 in the container. For example, let's grab the load balancer's DNS name and head over there: "Welcome to nginx!", just like we saw earlier. Using random ports in that port range on the ECS hosts gives us the ability to load balance multiple nginx containers on the same server, and there may be scenarios where you want to do that, but we definitely want to make sure we can run these web containers across multiple ECS hosts.

Okay, back on the cluster screen in the ECS console, let's take a look at what we've got: an ECS cluster with one service running, powered by one task, and two container instances under the hood. Remember, we can scale the cluster instances, and we can also scale the tasks, manually or automatically, just like we've seen in the past with auto scaling for a simple EC2 instance configuration. Let's go into the cluster real quick, go to ECS Instances, and scale the ECS instances up to three; that's an example of manual scaling, of course, and it basically gives us a third EC2 instance where we can run containers. Then let's go into Services and take a look at the nginx service: if we wanted to go from one running container to something like two or three, we can manually scale in here, and we can also auto scale (you'd come in, do an update, and set up the auto scaling policies). In this case I'm just going to do it manually; either way, you go in and do an update. So: number of tasks, three. I could go to two, obviously, but I want that third EC2 instance to come online for the cluster (take a look, refresh; looks like we've got three now, the third one's coming online) and spin up two new containers for the web front end service, so there's a container across all three of those servers; that's why I set three tasks. Click Next step, next step again, and if I wanted to set up auto scaling for this particular service, this is the point in the screens where I could do it, setting up a policy just like we did when we looked at auto scaling before. Right now I'm just manually scaling to three containers, and then we'll update the service.
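Scaling like this is a one-liner from the CLI as well, for what it's worth:

    # scale the service to three tasks (one per container instance, in this case)
    aws ecs update-service --cluster my-ecs-cluster --service nginx --desired-count 3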
And I'm glad this error came up, because you might run into it when you're working with this service if you're following along. There seems to be a bug in the consoles here in AWS; not really sure what the story is with this, but if you see this error, "failed updating service, the request is invalid", go ahead and sign out and sign back in. Let me show you: sign out, sign back in, go back into ECS. It did scale my container instances, but it didn't do anything to my tasks, right? So let's go back into the cluster, go to my nginx service, update the service, say three tasks, click next, next, next, and update. This time it did update, so if you're hitting that error, make sure you sign out and back in, or close your browser; I started back up, signed back in, and now if we check out the tasks for this service and hit refresh a few times, you can see we have three running tasks as part of this nginx service. Let's head over to EC2 real quick (I closed out of everything I had open earlier, I think), go back to the target groups for our load balancer, and take a look at Targets: across three different instance IDs we've got containers in the target group, port 32768 on two different servers in us-west-1c and 1b, and port 32768 on the third server in the 1c Availability Zone. Everything's healthy, everything's looking good, and at this point it's very simple, the same thing we did before: go to the DNS name for the load balancer, and what's happening behind the scenes, even though I'm hitting refresh and we're just seeing the same page over and over, is that we're actually hitting all three of those containers across three different physical servers. So that's an example of manually scaling both the ECS infrastructure and the containers within the tasks powering the services for your ECS implementation.

At this point ECS is looking pretty good, right? We've got the ability to use the Elastic Load Balancing infrastructure, and we've seen it in practice spreading the containers across multiple Availability Zones behind the load balancer, so we've got high availability there, plus the ability to auto scale at both the server level and the container level. The only challenge: if we take a look at Instances, we've got all this infrastructure to manage. We have to make sure these servers are patched and potentially backed up, and if one crashes, what do we do? So let's move on; in the next video, I'm going to delete all of this infrastructure, clean it up, and show you a way to do basically the same exact thing with less dependency management and less administrative overhead. Really, what we want is for these servers to be managed by Amazon and not by us, so there's less for us to work on. So what we're going to do next is tear down the cluster we built, really tear down everything, and then rebuild it, and I'll explain why as we go. In the cluster, let's take a look at what we've got running: three tasks. Select all of them and stop those tasks, then over on Services, highlight the service and delete it; it says, hey, if you want to do that, you've got to type "delete me" to confirm, so let's do that.
So all that stuff is deleted, and now let's go ahead and delete the cluster as well, which kicks off a delete process in CloudFormation; let's give it a few minutes to run and make sure it deletes everything before we move forward. Actually, while that delete is still in process, I forgot one other thing: head back over to EC2. We deployed the Elastic Load Balancer inside that environment as well, and since it's in that VPC, the delete process is probably going to hang. So let me delete the load balancer, and let's get rid of the target group too, because we can't use it anymore. If we take a look at CloudFormation, the delete is in progress, so I'll give it a minute; hopefully it will figure out that the Elastic Load Balancer is gone and delete all the infrastructure.

Okay, this delete has been running for a while now; it says 11 out of 13 resources deleted, and the CloudFormation console still says delete in progress. Looking at Resources, it's deleted everything except the VPC at the bottom, where the delete has actually been running for a long time; I think this thing is confused about the VPC itself, probably thinking the Elastic Load Balancer is still in there. So let's try this: I'll go into the VPC console and manually delete the VPC; sometimes you'll run into things like this. Okay, the VPC was deleted; CloudFormation didn't do it, but I did it manually. Back in CloudFormation it still thinks that's in progress, so I'll just issue another delete on this stack and see what happens. And after another minute, the CloudFormation stack is gone, and the ECS console says it deleted the cluster successfully. So let's take a look at Task Definitions, go into this one, and deregister it; we don't want that anymore. We don't have any clusters either, which is okay, and then let's score through EC2 just to make sure everything's cleaned up: no running instances, no load balancers, one security group. Looks good; everything is cleaned up.

Okay, so we've reset the configuration and we've got a clean slate to work with. What we're going to do is build a new ECS cluster in the default VPC, which means we need a load balancer in the default VPC, so the first thing I'll do is go into EC2 and build a new load balancer. Go to Load Balancers, create a new load balancer, an Application Load Balancer again, basically the same exact thing as last time with one small difference: it's in a different VPC. We'll call it ecs-elb-2, scroll down the list, pick the default VPC right there, pick the Availability Zones, and click Next. We'll create a new security group for this, just call it external, and listen on port 80 on the outside, so no changes there. We'll create a temporary target group (we're not going to use this one, and we won't register anything because nothing exists yet), and get the load balancer built. The next thing I'll do is the same as before: get rid of the listener and delete it, because ECS is going to build that for us, and I'll nuke that temporary target group too.
So now we've got this load balancer available, and we can go build a different ECS cluster and a new task definition. Let's go back to the services and hit ECS from the history; this time I'm going to Clusters to build a brand new cluster, and instead of doing a Linux- or Windows-based one, we're going to pick this one here, powered by AWS Fargate. Basically, Fargate is a way for you to run a managed ECS cluster: you don't have to worry about seeing your instances in the EC2 console, or somebody accidentally shutting them down or changing the configuration, and the billing model is arguably a little better as well, which we'll take a look at. So that's what we're picking, a Fargate-based cluster; we'll call it ecs-demo and click Create, and just like that, we've got a cluster. There are no EC2 instances we're going to see in the EC2 console; we just have this concept of a cluster available to us, and we're good to go.

Now that we have this new cluster, we can go to Task Definitions, create a new task definition, and select Fargate as the launch type compatibility. Earlier we did EC2 for an EC2-based cluster, but notice this one says AWS-managed infrastructure, no EC2 instances to manage; that's what we're doing on this one. The price is based on task size, whereas in the previous model the price was based on the instances we ran: if we had really beefy EC2 instances, obviously we'd pay a bunch of money, and with this, it's really just based on the resources we allocate to the tasks. So this is a really nice option: not only possibly more economical for what we're trying to do, it's also easier from an administrative perspective. We don't have instances to be concerned about, or their configuration to mess up; it's just going to work, and we can focus on the application and the tasks. Let's build the task definition using that launch type: we'll say this is our web-fe-2, set the task role again (the nginx-role we want), and since we're going to be billed based on task size, let's set the task memory to something like 2 GB and the CPU to 1 vCPU, for example. Then we'll add our container definition: once again, this will just be nginx, and for this particular container definition we'll say it can use 128 MiB of that memory and 128 CPU units. Another important thing to notice is the port mappings: we only have to worry about the container port now; we don't have to worry about the host port, because again, the hosts are managed in this model, so we can simply expose port 80, and that's it. Hit Add, and there's our container definition; it's going to create that task definition.

Now we can build the service based on this definition: go to Actions, Create Service. This is the Fargate launch type, our cluster is ecs-demo, the service name will be nginx, and we'll start off with one task; go ahead and run that. Then we get the screen we saw earlier: which VPC do you want to use? Well, we've only got the default VPC, so let's pick that and both subnets; it's going to create a security group for our implementation. Down here we can pick the load balancer: we've got ecs-elb-2, so let's add the container to the load balancer, listening on port 80 on the outside, and we'll have this thing create a target group for us. Looks good; we'll disable service discovery in Route 53 and just create the service.
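Here's a sketch of the Fargate version through the CLI, under the same caveats as before (placeholder IDs and ARNs; Fargate also requires a task execution role for image pulls and logging, which the console sets up for you):

    aws ecs register-task-definition \
        --family web-fe-2 \
        --requires-compatibilities FARGATE \
        --network-mode awsvpc \
        --cpu 1024 --memory 2048 \
        --task-role-arn arn:aws:iam::123456789012:role/nginx-role \
        --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
        --container-definitions '[{"name": "nginx", "image": "nginx",
            "memory": 128, "cpu": 128,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}]}]'

    aws ecs create-service \
        --cluster ecs-demo \
        --service-name nginx \
        --task-definition web-fe-2 \
        --desired-count 1 \
        --launch-type FARGATE \
        --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
        --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/ecs-nginx-2/EXAMPLE,containerName=nginx,containerPort=80"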
And just like that, very quickly, we were able to spin up a brand new environment: we have a load balancer, but we don't have any EC2 instances to worry about. We can see down here that it's provisioning the task; hit refresh, it goes into a pending state; hit refresh a couple more times, and now you can see the task is running, so it looks good. Let's go back over to EC2 in a new tab, go to the load balancer, grab its DNS name, and head over there: there's our "Welcome to nginx!". The kind of interesting thing here is that if you look at Instances, there are the terminated ones from before and nothing else in the list; the instance powering this container isn't something we have to worry about or need to be concerned about. If we take a look at our target groups behind the load balancer, we do see something connected and healthy; that's our container host, basically, and we're just listening on port 80, so there's none of the craziness with port numbers that we were looking at earlier. And just like before, we can scale this service: go to Update and say, hey, we want two tasks in this service; click next, next, next, and update the service, and it'll spin up another container, and it should do that in another Availability Zone based on our configuration. Back over on the Tasks tab for the service itself, hit refresh a couple of times: we've got another one going into the provisioning state; refresh a few more times, now it's in a pending state, and now both of those are up and running. If we take another look at our targets and refresh, we've got one healthy target in us-west-1c and an initial one coming up in us-west-1b; hit refresh again, and both of those are now healthy.
This is really cool, because we've got containers running in different Availability Zones, we're not worried about any EC2 instances or having to deal with those servers, and our application is sitting behind a load balancer that I can continuously refresh while it basically sends me across different Availability Zones here in Northern California. Awesome. It was so simple in this model to build an ECS implementation using the Fargate launch type, and all we really had to worry about was the task definition. And of course we could set up auto scaling for this service by going into Update and running through the wizard; it's one of those screens at the end where we could set up auto scaling and build the scaling policies, but I think you get the idea. So based on our project requirements, this looks like the most interesting option so far: the ECS service with the Fargate launch type is the least amount of administrative effort, works natively with the Application Load Balancer, and supports auto scaling and high availability across multiple Availability Zones. The goal was to do all that with the least amount of administrative effort possible. But there is one other thing we might want to take a look at.

Okay, at the beginning of this project we talked about ECS being a proprietary container orchestration system built by AWS, but the reality is that a lot of people looking to run containers in production right now are talking about Kubernetes. As they describe it on kubernetes.io, it's a portable, extensible, open-source platform for managing containerized workloads and services. Google open-sourced the Kubernetes project in 2014, and it has gained a ton of traction over the last few years; like I said, a lot of people doing containers in production really like Kubernetes. The challenge is that it's super complicated; there's a lot going on, and in terms of reducing administrative overhead, trying to roll your own solution with Kubernetes is going to be very time consuming, with a significant technical investment to go down that road. However, a lot of people like the granularity and configurability of the system; it's much deeper in what you can do with it than something like ECS. Now, in the AWS console, what you'll notice under the Compute section is that in addition to ECS we also have this option here, EKS, which has been around for about a year as I record this video. This is a managed Kubernetes implementation on AWS, and typically when things are new in AWS, you'll see limited region support; right now it's not supported in Northern California, but I can go over to Virginia to work with it. When you get in here, you can see this is the Elastic Container Service for Kubernetes: fully managed Kubernetes, basically giving you a Kubernetes control plane without having to stand one up and operate it yourself on EC2 instances. From the perspective of lowering your administrative effort, it's nice that Amazon will spin this up for you, and you'll have to determine whether it's the right decision based on keeping things as simple as possible; what we've seen so far, using ECS and Fargate, is probably the easiest way to go for a really vanilla type of setup.
But if you're on a team that needs the control and wants to work with the Kubernetes tooling, and it's something you want to invest in, this might be interesting to take a look at; just keep in mind that as I'm recording this in late 2018, there's limited support for it across the global infrastructure, and you can see a lot of the regions aren't even available for this service right now. Keep an eye on the EKS service, because as time goes on and Kubernetes continues to grow in popularity, this could become the go-to option, and ECS may even end up being less of an option, or less recommended by the community at large.

Now let me walk you through cleaning up the environment and making sure we don't leave anything behind. Make sure you're connected to the region where you've been working with ECS, and let's go back into the ECS console. First, go into Task Definitions, go into this one, highlight the task definition, go to Actions, and deregister it. Then go to Clusters, into the Fargate cluster we have, highlight the service, and delete it (typing "delete me" to confirm), so that gets rid of that. Then let's delete the cluster itself... and it gives me an error: it says it can't be deleted while tasks are active. So what I've got to do is go over here and stop those tasks (they should have been stopped already), and then delete the cluster; there you go. That's one of the reasons I wanted to show you this: it can be kind of a pain to delete stuff here. So this is all cleaned up, no clusters, no task definitions. Let's go over to EC2: we've got no running instances, but we've got a load balancer, so let's get rid of that, and the other thing I want to do to clean things up is get rid of the target group as well. Deleting it, it still says it's in use by a listener rule, but see, there's no load balancer anymore, so that's probably a bug; let's try it one more time, and there we go, deleted. One other thing you might want to do is go into the IAM console and clean up some of the roles that all these different services were spinning up for us. Back under Roles, you can see I've got a ton of stuff in here: I've got the ssm-role, no longer using that, so let me delete it; I've got the nginx-role, delete that; and then we've got these different task and service execution roles that the service was building on our behalf, so let's clean those up (if we build another cluster later, these get recreated, so it's okay to delete them). The other ones to get rid of are the AWS role for ECS, this service role for auto scaling (we don't have an auto scaling group anymore, so delete that), and the one for Elastic Load Balancing, since we have no load balancers anymore. The last two we can keep. One more refresh here (it's still deleting that one), and that's basically it: we've cleaned up everything we set up throughout this week's project.
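And if you'd rather script the teardown, here's a rough CLI version, using the cluster and service names from this walkthrough (the ARNs and the task definition revision number are placeholders):

    aws ecs update-service --cluster ecs-demo --service nginx --desired-count 0   # stop the tasks first
    aws ecs delete-service --cluster ecs-demo --service nginx
    aws ecs delete-cluster --cluster ecs-demo
    aws ecs deregister-task-definition --task-definition web-fe-2:1
    aws elbv2 delete-load-balancer --load-balancer-arn <load-balancer-arn>
    aws elbv2 delete-target-group --target-group-arn <target-group-arn>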
Info
Channel: CloudSkills
Views: 35,840
Rating: 4.9641256 out of 5
Keywords: docker, containers, aws, ecs, fargate, eks
Id: lO2wU2rcGUw
Length: 55min 23sec (3323 seconds)
Published: Fri Apr 26 2019