Deep Dive into AWS Fargate

Captions
Hi everybody, welcome back in. I hope you enjoyed lunch. We're going to kick off this afternoon's sessions talking about Fargate and showing you some of the features it can do. I've got a little demo for you as well which, fingers crossed, works. We're going to talk first of all about why we built Fargate — you'll notice some concepts brought over from the State of the Union chat this morning — and then I'm going to do a bit of a deep dive into the internal workings of Fargate, how it works and how you build your application. Then we're going to put that into practice and actually do a demo and launch a little chat application, and this chat application you'll all be able to log into from the audience if you've got your phones on you or your laptop, so you can put rude messages up on the screen while I'm speaking — please don't — you can put messages up on the screen while I'm speaking. So first of all, let's talk about the motivation. We already talked about this this morning: at first we had EC2, and this is a great way to quickly launch and provision compute resource in the cloud, but then people wanted to use that resource more densely. So Docker came along and we could build containers, and we could run multiple containers on an EC2 instance. We could get better bang for our buck on cost; we could put more containers on the same host, run more workloads without them interfering with each other, and better use our CPU and RAM resources. So that was great, but it was very, very difficult to manage that over a large fleet of servers, and that's where, when the admins came along and said please make this easier for us, we decided to build ECS. Now why am I talking about ECS when this talk is about Fargate? Well, Fargate is kind of an evolution of ECS. It shares a lot of the same principles and concepts and ways of building. I touched on this earlier: you can actually take an ECS task definition and you can change one line in it
and deploy it on Fargate, and vice versa. So they share a lot of compute principles; there's a few little differences we'll point out as we go along, but we'll get there. So why is managing the cluster difficult? Why do I keep saying this? Well, let's look at the hosts that we're running. We have an EC2 instance, and on there we generally want to keep things kind of minimal: we've got our operating system, we've got the Docker agent, and then we've got the ECS agent. The ECS agent is the thing that talks back to our management control plane and says how much resource an EC2 instance has got; it also reports on the health of the tasks running on there and any other constraints we put in. It's also responsible for starting the Docker containers on the host, so it's a bit more than just a communication channel between the management control plane and EC2. But if we're bringing these EC2 instances across ourselves, we've got to manage that: we've got to patch the OS, we've got to make sure we're running the latest version of the Docker agent and that the ECS agent is up to date. Now, a lot of this is quite simplified, because if you use our AMI there's the SSM agent on there, and in the console you can actually just click things like "upgrade ECS agent" and it'll happen for you magically in the background without having to SSH into each box. But OS updates and Docker agent and ECS agent updates you've still got to manage, and this all becomes overhead and time-consuming from an operations point of view, when you could be out building the cool stuff. Instead you're looking after your current infrastructure; you're not moving forward. Just to resume what I said there: doing this across a fleet is huge, and also picking the scale that you need to run at and the right size of instances is all time-consuming, it's all quite complicated, and it's hard to get right. You have to try several iterations before you've got the perfect sized cluster, and you have to know when your auto scaling events are going to be — it's quite hard to manage all that
yourself. Elastic Container Service made this a lot easier, but then Fargate made it even easier, because all you're focusing on now is your containers. No longer do you have to look after all this; instead you've got Fargate. You talk to ECS as per normal — same tools, same command-line tools — but we just focus on our containers. So Fargate basically manages all your management infrastructure and your workloads. It's elastic: it scales up and down seamlessly in the background, something you don't have to worry about. Yet again, there's no scaling for you to do, no consideration for you to have to make, and it's really, really well integrated with the AWS ecosystem: IAM permissions for tasks, Elastic Load Balancer integration, and AWS VPC networking, which is the only networking mode that works in Fargate — you get your own elastic network interface into AWS. Now that's kind of important, because it's very low jitter and low latency into our network, and that means you get a really good speed on your connection to your tasks and the containers within. Also CloudWatch for logging is integrated, but it's very easy to change to another logging model — Fluentd, for example; it's just a configuration change as you go along and build your task definition — but by default all that's there and very easy to use. So what do we have here? We have our image repository, ECR. Then we used to have a way of hosting on EC2, and we managed that using the Elastic Container Service. We're bringing out EKS — that's another management layer, and yet again it uses EC2 for hosting. Fargate fits in here: Fargate is actually the hosting section of this. So whilst you don't manage EC2, it sits at that same tier — that's what you've got to think about in terms of accessing Fargate when you do stuff. So let's talk about what we're going to build. First of all, I'm going to go through and show you all the concepts that we're going to use in this particular demo. We're going to look at
permissions, we're going to look at load balancer connectivity and running containers, and I've also dropped in at the last minute service discovery, which is relatively new — so let's keep our fingers crossed that bit works. So we're going to build a chat application, and it's a very simple Socket.IO Node.js application. I'm going to run it in multiple containers on the front end, linked to a load balancer, and then at the back end I'm actually going to run Redis in my Fargate cluster, just so I can use service discovery. If I was doing this in a production environment, I'd probably consider using something like ElastiCache Redis there, so I don't have to manage that side — the backups and the cluster and the availability and the failover; I'd use a service to abstract that. But for this demo it's going to be great, because we're going to use service discovery so our front-end containers can find the Redis host at the back end, and you'll be able to test this by logging in and doing stuff. So let's have a look at the constructs of Fargate. To start with, it's all very similar: you write a task definition. A task definition is where you define your application — you say which images you're going to use for your particular containers, and you put memory constraints and CPU constraints in there. We then have a cluster, and within that cluster we've got an isolation boundary. So if I create a cluster named production and I launch a task in there, and then I launch that very same task in a cluster called dev, there's an isolation boundary those containers cannot cross and talk to each other, unless you go in and explicitly do some complicated things to make them connect — and you've got IAM permissions here as well. So our task is going to run in our cluster, and it runs by a service being created. The service is what the ECS side of things does: the Fargate management control plane tells our cluster that we want to run a few of these tasks — it might be we want to run multiple copies of this
task throughout our cluster. It will also talk to the elastic load balancers, and it will also keep track of the health of those tasks — if one of those tasks becomes unhealthy, it will replace it somewhere else in the cluster for us. So the service has got a lot of important jobs to do, and it's what basically makes sure everything is working as it should. So this is a very simple task definition snippet up here. I've got a family — it's an immutable, versioned document — and in here I've given my family the name chat-app, and I list a container name, chat-app, and an image, and the image is my Docker repository image. In there I could put up to ten containers, and I could configure them differently: I can open different ports for each container, I can give them different CPU and memory constraints at a container level, and I can also give the entire task memory and CPU constraints as well. So each container definition needs a name and an image URL, and can go into more detail, which we will do in a minute. I'm quickly going to tell you about registry support in Fargate while we're at it. We support ECR naturally, because this is our product. We also support public repositories, so things like Docker Hub — in fact the Redis container I'm going to pull down comes directly from Docker Hub — and we're also working on integration for third-party private repos as well. So Fargate, or the ECS system, will be able to log in to private repositories, be that private Docker Hub or something like Quay.io — it'll be able to log in there. I'm not saying "kway-dah-io" for you over there, our American friend, ahem. So that integration will be coming soon. Right, let's dig into the compute side of this. What's happening now? How are our containers running? So I mentioned that our containers can have CPU limits. Now, one virtual CPU equals 1024 CPU units, and our memory is measured in megabytes. So let's add some task-level resources here: my task, which has two containers in this example, can use a maximum of one virtual
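A task definition along the lines the speaker describes might look like the following JSON sketch. The family and container name match the talk; the account ID, region, image tag and port are placeholder values, not taken from the slides:

```json
{
  "family": "chat-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "chat-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/chat-app:latest",
      "portMappings": [
        { "containerPort": 3000, "protocol": "tcp" }
      ]
    }
  ]
}
```

Up to ten entries can go in `containerDefinitions`, each with its own ports and container-level CPU/memory settings, alongside the task-level `cpu` and `memory` shown at the top.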
CPU, and use a maximum of two gigabytes of memory. On a container level I've said I'm going to use a quarter of a CPU for this first container, and 512 meg of memory — so that's a container constraint, not a task constraint. So you can get quite granular on how you constrain things here: if you've got a container that doesn't need very much resource, tell it at this level that it doesn't need much resource, and if a container needs loads, let it fill its boots and use everything. Now, for task-level CPU and memory configurations, there are 50 different configurations you can actually use here. It starts off at 0.25 of a virtual CPU, and that can come with 512 meg, one gig or two gig memory sizes, and you can jump right up to the big boy at the end here, which is four virtual CPUs and thirty gigabytes of RAM. That'd be a pretty big container, and if anybody's got a use case for that I'd love to hear it, because I want to know what you're running in containers that takes that much — it'd be great fun to see it. Now pricing-wise, you're going to pay for what you provision, and you're billed at a task level. So if your task has five containers in it, and between them you're using two virtual CPUs, you'll get charged on the two virtual CPU total and the memory total that you're using. There's also per-second billing — it's a one-minute minimum, but if you switch your containers off one minute 20 seconds in, you'll get charged for one minute 20 seconds of runtime. Let's have a look at the network. So when you launch your Fargate tasks, you launch them into subnets. In my example I'm using all three Availability Zones — I've got a subnet in each Availability Zone; you'll see that soon. And what happens to that Fargate task when it runs: an elastic network interface gets attached to it, and that will get a private IP — optionally it can have a public IP as well. So that private IP will allow you to natively access other EC2 instances or load balancers or DynamoDB or other endpoints in your VPC, because you've effectively got a network
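The unit arithmetic above — one vCPU equals 1024 CPU units, and per-second billing with a one-minute minimum — can be sketched like this (a rough illustration, not an official AWS pricing calculator):

```python
def vcpu_to_cpu_units(vcpu):
    """Convert virtual CPUs to ECS CPU units (1 vCPU == 1024 units)."""
    return int(vcpu * 1024)


def billable_seconds(runtime_seconds, minimum=60):
    """Fargate bills per second of runtime, with a one-minute minimum."""
    return max(runtime_seconds, minimum)


# A quarter of a vCPU, as used for the first container in the example:
print(vcpu_to_cpu_units(0.25))   # 256 CPU units
# Stopping 1 minute 20 seconds in is billed as 80 seconds:
print(billable_seconds(80))      # 80
# Stopping after 30 seconds still bills the one-minute minimum:
print(billable_seconds(30))      # 60
```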
interface in your virtual private cloud. So you've got an IP address that matches the rest of your network, and you can use security groups and network access control lists to do your normal configuring of access through your network. The internet can come in on a public IP direct to the ENI if you want; it's also possible to do this in a private mode, which we'll touch on in a minute. Now let's have a look here: when we configure our task definition, right at the top of the task definition we have network mode awsvpc. This is the only mode that works with Fargate. ECS supports more: it supports bridge mode — so if anyone's ever done anything with Docker before on their laptop, bridge mode is very similar to that; you share the network interface and you have a virtual interface that comes up on your machine and you route traffic via that. Host mode basically pins everything to the host's elastic network interface. Host and awsvpc are faster than bridge, but awsvpc is far more separated, and each task gets its own network interface — so in my opinion it's one of the nicest modes, and it also seamlessly extends your VPC into your container world, which makes the simplification of security rules and network access control lists so much easier. You don't have to jump through hoops to connect a container to an EC2 instance or to an RDS instance; you can just use the normal constructs we have, which are security groups and network access control lists. So this is a little bit of an example of when you launch: you can specify what subnets you're going to put your application into if you're doing this from the command line — in this case I've just picked two subnet IDs here — and I can specify a security group as well for my launch. We're actually going to launch with CloudFormation; I'll tell you why in a moment. So, internet access: the elastic network interface is used for all inbound and outbound traffic for that task, and that includes pulling the images from ECR or a public repository, and also pushing
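The network configuration the speaker describes — subnets, a security group and the public IP setting — is passed when running a task or creating a service. A hedged sketch of the awsvpc configuration object (the subnet and security group IDs here are placeholders):

```json
{
  "awsvpcConfiguration": {
    "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    "securityGroups": ["sg-0123456789abcdef0"],
    "assignPublicIp": "DISABLED"
  }
}
```

Setting `assignPublicIp` to `ENABLED` instead gives the task's ENI a public IP, the "public task" setup described below.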
logs out to CloudWatch — that elastic network interface is used for that. The underlying host doesn't handle any of that traffic for you; it all goes through your task's elastic network interface, and this is great for separation: if you've got multiple tasks running on the same host, they've all got their own network interface network-wise, and they're not going to be competing with each other for resources. So, two common setups: we have private, with no inbound internet traffic but allowing outbound access to the internet, and we have a public task with both inbound and outbound internet access. So let's have a look at a private task setup first in diagrams. We've got our normal VPC setup here: we've got a private subnet, and we're going to use a public subnet here and a NAT gateway. We've got outbound access to the internet through the NAT gateway, but there's no direct inbound traffic into this task going on here. Now, if we want to make that a public task, we could run a Fargate container in the public network, give it an elastic network interface and give it a public IP address. We set up our security groups and our inbound security rules, and we allow access directly into that container from the internet. Now, what I do in my example is kind of a mix between these two — I'm just going to roll you back to this one. I have a setup like this: I run my Fargate tasks in private subnets, but I have an elastic load balancer in the public subnet that connects through to them, and that's a similar construct I use whenever I'm building anything in AWS. In my view, public subnets should hold NAT gateways, bastion hosts and elastic load balancers. If you can avoid it, don't run servers in your public domain — obviously if you're going to run a VPN server it's got to be there, it's got to have public access — but don't run servers in your public network. Run them in private, and use load balancers to poke pinholes through security groups into the private network. This is great for security. So let's fast-forward through that
again. Elastic load balancer configuration: so we can put that in there. First of all we give our container a port mapping, and then when we create our service we can tell it to go to a particular load balancer — we give it the ARN, the Amazon Resource Name, we tell it what container we're going to connect to and what port. Our service then is responsible for telling the load balancer about each version of the task that's spun up, so if we're running three copies of that task, the service will basically register them all with the load balancer. As we go through here — this is how I set it up; I knew I had the slide somewhere — this is what I do: I have an elastic load balancer, an ALB in this case, in the public subnet, and everything goes through to a private IP in the background, like that. So let's talk about storage quickly — I'm not going to use any storage in my demo, but let's talk about it very quickly. We've got the ephemeral EBS-backed storage, and that's provided in the form of writable layers for storage, and you can have volume storage as well — you can attach a volume directly to a container. The layered storage — if you know how Docker containers work, which is going to be covered in the next session — you'll see how this works there. By default each task has 10 gigabytes of storage, and all the containers on there share that 10 gigabytes of space, even though the containers are logically separated. So if one of your containers uses a lot of that space, the others can use less — that's what I'm trying to say there. Once you stop the containers, that ephemeral storage disappears, it's gone, so you lose that. If you want persistent storage, you're going to use volumes, and you can have a mount point — whether that be an EBS volume or an EFS mount that you want to put into your containers. Or if you want to spin up your own storage cluster somewhere and connect to that thing — you've got lots and lots of options here. So let's
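The load balancer wiring described above is declared on the service. A hedged sketch of the relevant create-service fragment — the target group ARN, container name, port and count here are illustrative values, not the exact ones from the demo:

```json
{
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/chat-app/0123456789abcdef",
      "containerName": "chat-app",
      "containerPort": 3000
    }
  ],
  "desiredCount": 3
}
```

With this in place, the service registers each running copy of the task with the target group and deregisters tasks it replaces.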
skip through to IAM permissions. So, I talked about IAM permissions very briefly; we've got different levels here. We've got cluster permissions: a cluster permission is what access you, as developers, have to do stuff in that cluster — can you start containers, can you stop containers, can you create services, can you scale — all those kinds of granular configurations and access controls that you need. That's your access into ECS and into Fargate. We then have application permissions, and that's what the containers can actually do: you can grant an IAM role to your entire task, and then each container inside can also have a task-level permission on there, which we'll cover in a second. And then we have some housekeeping permissions. Now, in Fargate these are what we create automatically in the background, and this is to allow the Fargate host that we create and manage in the background for you to reach in and do things like pull your images from your ECR, push to your CloudWatch, and create the elastic network interfaces in your account — that's what these permissions allow Fargate to do. It also allows it to talk to the load balancers. So, cluster permissions: there's an example here you can have a look at, and you can set up different access in here. This one would let you run a task within a particular cluster — and in fact the cluster is named as well. This one here will only let you read the tasks and describe the tasks running in the cluster; it won't actually let you start any or delete any or scale anything. So you can get quite granular in this as you go along — obviously they're very simple examples; you can have much bigger policies, this and many more, like it says on the slide there. So, application permissions: if we look at a task role — if you want your application to do something, we're going to create an IAM role, and that's going to give permissions to that particular container. In my example I'm not actually going to add any extra permissions, but what
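A cluster-scoped IAM policy of the kind described — allowing tasks to be run only in one named cluster — might look like this sketch (the account ID, region and cluster name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecs:RunTask", "ecs:StartTask", "ecs:StopTask"],
      "Resource": "*",
      "Condition": {
        "ArnEquals": {
          "ecs:cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/production"
        }
      }
    }
  ]
}
```

A read-only variant would grant only actions like `ecs:ListTasks` and `ecs:DescribeTasks` with the same cluster condition, matching the second example the speaker describes.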
you could say is that if I enhanced my chat container so you could upload an avatar, and I wanted to save that avatar in persistent storage somewhere, I'd write it to S3 as my cheapest option for storage. So I would give my container permission to write those little graphics out to S3, and then my application could consume those images from there and serve them to the general public. Let's have a look at one of these here: so this is a task-level permission, and I can put any task role ARN in. Now, we can go a little bit deeper — this is basically what allows your application to access AWS services without having keys embedded in your code. So if you've ever spun up an EC2 instance and had an IAM role on it, it's the same thing. I said look at our housekeeping permissions: we have an execution role, which is used for pulling images and pushing to CloudWatch, and also the ECS service-linked role, which does ENI management and ELB management. These are created for you in the background; they're immutable, you can't change them. So let's click through these, because I really want to get on to the demo and show you the demo, and hope that CloudWatch is going to be nice to me today. So, our execution role: if we look at this, as well as a task role we can have an execution role where we can give it read-only permission to our ECR, so we can pull images but couldn't push images back from, say, our Fargate cluster — it depends if you want to use anything like automated build tools in your Fargate cluster. There's the service-linked role here — we're going to quickly skip through these so we can get to the demo as well; sorry, I didn't mean to go that fast, ahem. So, we're trying to get rid of the need for API keys in code, and I say this a lot in every talk I do about every service: please don't put your API keys in your code. All it takes, if you're using things like GitHub, is one little misconfiguration to accidentally make your repository public, and there are bots that can go
hoovering all the time. I think it's like 30 seconds after you make a repository public with your API keys in that most keys are discovered and consumed. So if it was me doing that, I'd find your API keys, and if they had a lot of permissions I'd spin up a lot of Bitcoin miners and GPU miners and cost you a fortune, and it wouldn't really cost me anything — so be very, very careful with that. IAM users and IAM groups are for us, for human beings, and you have API keys for them where necessary; roles are for instances, functions, containers. Don't put API keys in code, please. So, visibility and monitoring: by default we've got the awslogs driver, so anything your containers output to standard out, we collect those logs and we push them into CloudWatch Logs. It's really simple to do, and I'm actually doing it in my demo, which is quite nice. So we create a CloudWatch group — in this case my group is called chat-app — I've set a region of where I'm going to push my logs to, and then my stream prefix for this is going to be chat-app as well. And you've got to remember to add permissions for that particular stream to your execution role if you've locked things down. This isn't actually my chat app, but this is what it would look like: if you log into the CloudWatch console you'll be able to search for your particular log stream, go through, find your logs, and you've got centralized logging. So if you're running multiple versions of your tasks and your containers, they all get aggregated together and you can search through them very nicely and easily. There are other visibility tools here: so, I mentioned this earlier, ECS emits a ton of CloudWatch Events — there's a lot going on in the background, and you can look in there at what's happening in your infrastructure and your entire setup, so have a look at those. And if you're using ECS, not Fargate, you can do things on top: there's an open source project called Blox, which is a way to extend your scheduler and do
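The awslogs setup described — group chat-app, a region, and the chat-app stream prefix — lives in the container definition. A sketch, with the region as an assumed placeholder:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "chat-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "chat-app"
    }
  }
}
```

With the prefix set, each container's stream name ends up as `chat-app/<container-name>/<task-id>`, which is what makes the aggregated streams easy to find in the console.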
some cool placement of your containers. And also we emit CPU and memory utilization metrics, and we push those into CloudWatch at a task level as well, so you can see how much memory and CPU your task is taking. Right, we're going to do the demo. So I'm going to switch over now, I'm going to mirror my screen and drop to the console. One last thing I want to say is that I'm going to use CloudFormation for this, because in my opinion everything you deploy should be templated — whether it's CloudFormation or Terraform, I don't mind; it's up to you, you've got options. But the beauty of this is, when I launch my stacks — I'm calling this a production stack even though it's really a demo — if I wanted to make a brand-new copy of my stack, I change one variable and I can launch an identical environment for doing some dev and debug and testing and digging in deeper, by changing one variable. Now that's really powerful: it makes crash recovery better, it makes development better, it just makes your whole lifecycle management of containers better. So: CloudFormation or Terraform for the win. Right, let's drop over to my browser now. I'm going to mirror my screen here again, and I'll make this a little bit bigger for you as well so it's easier to read. OK, so I'm going to start off and tell you what I've already got. I've CloudFormed a VPC — it's a network, a 10.0.0.0/16 network. I've got in there six subnets, two in each Availability Zone: three public subnets and three private subnets. And on top of that I've exposed a load of variables, so in my outputs I can find the names of my subnets — for example public subnet one, public subnet two — and I can find my VPC ID. I can consume these outputs via this export name at the end here in my other scripts. And the first one I've got — my other CloudFormation here — is to create myself a Fargate cluster, which sounds crazy; I've already said there's no infrastructure, so why am I
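The export/import pattern described — one stack exporting subnet IDs and the VPC ID for other stacks to consume — can be sketched in CloudFormation YAML like this (the stack and resource names here are illustrative, not the demo's actual ones):

```yaml
# In the VPC stack: export a subnet ID under a predictable name
Outputs:
  PublicSubnetOne:
    Description: Public subnet in the first Availability Zone
    Value: !Ref PublicSubnetOne
    Export:
      Name: !Sub "${AWS::StackName}-PublicSubnetOne"

# A consuming stack (the Fargate cluster or service stack) would then
# reference the exported value by that name, for example:
#   SubnetId: !ImportValue production-vpc-PublicSubnetOne
```

This is also what makes the "change one variable, get an identical environment" trick work: a different stack name produces a parallel set of exports.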
creating one? I'm creating a namespace, basically. And if I have a look in ECS you can see my CloudFormed Fargate cluster — it's because it's CloudFormed that we've got that big long UID on the end here. And one thing to notice in this view: you can see how many tasks and services we've got, and one is Fargate type — I mentioned we could have hybrid clusters. I want you to notice we've got no infrastructure running, we've got no container instances. I'm not cheating here: I'm using Fargate, I'm not using any EC2 ECS instances in this case. And one thing that will make sense when I go through and do the next bit of my demo: just note I've got two hosted zones in my DNS at the moment. I'm going to do some service discovery that's going to create me a new hosted zone, and these hosted zones here are my demo domain and its production subdomain — quite a mouthful to say — and they're the only two zones I've got. Now, the first stack I'm going to go and create here is my Redis master service. I was hoping to bring some Redis slaves to this, but I couldn't get them working in time, so we've just got Redis master running — please forgive me. And I'm going to go and CloudForm this now. I've stopped it deliberately at a point — a template error is never good: unresolved dependencies, service name. Let's fix this live, let's try this — always good fun. I know what this is; I was messing about with it at midnight last night, which is never a good thing. So let's have a look at our Redis cluster in here — I need an extra configuration. Now, has anyone used CloudFormation before? Excellent, a few — that's good, I'm happy to hear that. So this is a templating language, basically, that allows us to define our AWS infrastructure. I'm just going to put in a random value — I don't believe it matters — and I'm going to save that. Right, let's try that again... no, failed to upload. Let's reload CloudFormation — I've got a backup plan to get out of this, don't worry. CAPITAL LETTERS, yes — see what I
was doing now, sir — I was hoping you'd find that, and I'm pointing out it was a test, honest. You can tell this is my last session of the day. Default — there we go, let's try now, see if it doesn't blow up again. Ah, well done guys, you're actually the most observant audience I've ever had; this makes my life easier — I'm normally panicking at this point. String — here we go. Right, somebody laughed — did I get that right? Excellent. Let's try — here we go, this is looking better. Right, I'm going to create Redis master. I've got some CPU and memory constraints here, and I've also set the port for Redis, which is 6379. I've put these in as defaults, and a lot of this doesn't really matter — we're going to go through and just consume all this. I'm going to skip over this, because at the moment I'm not putting any particular IAM roles or permissions in; as this demo matures it'll get some of those in, and then I'll start doing what I preach as best practice, so it'll get updated. So I'm going to go through and create this, and I'll show you what's happening in the background. Now, I have deliberately stopped at a certain point in this creation: I've only created a task definition, I have not created a service on top, and that's because I actually want to build the service in front of you and show you some of the new little bits and pieces we've got built into Fargate and ECS with service discovery. So what's going to be happening here — we've got a CREATE_COMPLETE; I'm slightly relieved there. If I go into ECS now, I'm going to go to task definitions, and in here I've got a task definition called production — you saw me type that after I got the default value correct. So have a look in production here, and I've got a task definition built: my networking type is awsvpc — we can run on EC2 or Fargate, but I'm saying this is a Fargate task that I'm running — I've got some memory constraints, and then I've got my container definition as well, which says I'm going to use the redis:latest image. That's going to get pulled down
from Docker Hub for us automatically in the background. There are no other real things I need to do with this; what I need to do is turn this into a service. So I'm going to hit Create Service — I'm going to let this sit for a minute, otherwise you get an error on the console; if you notice, if you're too quick it says it's not compatible, and then it decides it is. So I'm going to hit Fargate, and the task definition we're using is production, the platform is going to be the latest one, and my cluster is the cluster I CloudFormed here. I'm going to give it a service name of redis-master again, and I'm going to say I only want one copy of this task running — I only want one Redis container running — and I'm going to hit next step. Now, if I drop down here, I need to select the right VPC — and luckily I can remember this — we need subnets three, four and five; these are my private, isolated subnets here. And I'm going to edit this security group and select an existing one — to cheat, I select this one — and I don't want a public IP on this, so I can disable that as well. I'm not linking a load balancer to this particular service, but what I do want is for my front-end application to be able to find it and get connected on port 6379, so I'm going to enable service discovery, and I'm going to tell it to create a new private namespace. This is going to be a Route 53 zone, and the namespace is going to be called local. The service discovery name I'm going to use is redis-master, so DNS-wise it's redis-master.local that I'll be looking up to connect to, and I'm going to give this a 500-second TTL. So let's hit next, and next step again, skip over this bit, and we're going to tick it to create the service. So, fingers crossed time again — we want to see lots of green here... so we've created a private namespace, we've done our service discovery record and we've created the service. Now, I could click on View Service, but I'm going to drop you into Route 53, and if I update this page you'll notice
I've automatically got a new Route 53 zone here, which is "local", and if I dig into it, at the moment, because my task hasn't come up yet, I've not got the record for redis-master. But if we drop back to our ECS cluster, we've now got one service created and a pending task. So while we wait, what's actually happening in the background? I've not cheated, I've not done anything to pre-warm any Fargate hosts; this hasn't run for about 12 hours, so all the infrastructure that was provisioned in the background has gone away. Fargate is provisioning some instances for me underneath and is now pulling the images fresh, so it's pulling the Redis image down off Docker Hub, which means there's a little bit of a cold start time. If I keep refreshing, eventually a pending task will pop up. While we're waiting for that, let me show you the chat service; I'll make the screen a little bigger for you as well. This is my CloudFormation template. For those of you who haven't used CloudFormation before: at the top we've got parameters, which is where you set defaults and things you can change and tweak in the console as you launch a particular stack, and as we go down I'll point out some important ones. I've got one here called DesiredCount. This is for my chat application: I want to run three copies of my task, so I've pre-programmed that in there. We've then got some resources; my resources are things like log groups and a task definition, the bit I showed you in the console before I turned it into a service. But in this template I've gone one step further and actually CloudFormed my service as well, and this CloudFormation will connect me to a load balancer and run the desired count of three, which I pointed out earlier, so it'll make sure I've got three copies of my task running.
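That DesiredCount wiring can be sketched as a stripped-down template fragment. This is only an illustration of the pattern; the resource name `ChatService` is made up, and a real template carries many more properties:

```python
import json

# Hypothetical slice of the chat stack: a DesiredCount parameter with a
# default of 3, referenced by the ECS service so the value can be
# overridden in the console at stack-launch time.
chat_stack_fragment = {
    "Parameters": {
        "DesiredCount": {"Type": "Number", "Default": 3}
    },
    "Resources": {
        "ChatService": {
            "Type": "AWS::ECS::Service",
            "Properties": {
                "LaunchType": "FARGATE",
                # Three copies of the task unless overridden at launch.
                "DesiredCount": {"Ref": "DesiredCount"},
            },
        }
    },
}

print(json.dumps(chat_stack_fragment, indent=2))
```

Because the service references the parameter with `Ref` rather than hard-coding 3, changing the count in the console (as he does with 6 a moment later) needs no template edit.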
Another little thing I added is a load balancer definition to tell it how it's all connected up, and if I scroll down, I'm going to create a public DNS record called fargate-chat.digilucian.io. I'll put that up a bit later so you can all try to log in and see it. So I'm creating some DNS as well when I create this. Then this section here is about auto scaling alarms; we're not going to demo that today, but I've basically allowed for the fact that if we get lots of users in, auto scaling alarms will spin up more than three copies of my task, and it will scale back down as well. So we're going to cross our fingers and go back here, and we've got one running task. Yes, this is going well so far: I've got a Redis server running, and if I refresh over here, you'll see I've got a new DNS record, redis-master.local, with a 10.0.3.x address. That's in my VPC, and that's the power of the elastic network interface being connected into that task: I've got a real IP in my network, not some obfuscated network that I've got to do some funky routing to get into. Right, let's put the last bit of this up. I'm going to create this stack, and this one's quite nice because, after doing it all by hand in the console, it does a lot of that for me. I'm going to create my chat service, and for the stack name I'll call it chat-app, which is just what the CloudFormation is going to refer to it as. I've got the DesiredCount of three here, which I showed you in the parameters at the top; if I changed my mind and decided I wanted six, I could update it before I hit go, but let's put it back to the normal three and go for it. Now, this is the important bit: my application takes an environment variable that gets passed to the Docker container. If you were doing this with a local Docker
install, this is where you'd use the -e flag and set something like REDIS_ENDPOINT=redis-master.local, in my case; CloudFormation is going to pass that into Docker for me. I'm going to hit Next, just check I've got everything okay (this should be version three of my container, excellent), Next again, and skip through this bit. This one takes slightly longer, but what I'm hoping for it to create is a new task definition, a service that makes sure three copies of that task are running and registers them all with my ALB, and a DNS record for the load balancer as well, to make things nice and easy for the rest of the demo. So let's have a little refresh on here and see what's going on. You can now see we've got two services registered, and I've got three pending tasks: this is my chat application starting to spin up. Depending on the day, that can either be really quick, like that, or take ages; here we go, we've got four running, excellent. Let's have a quick check back here and see the progress. There's still some DNS stuff happening and some linking with my load balancer to be done, so I'm not going to jump the gun and poison my DNS with a record that can't be looked up, which I've done before, live; I'm going to wait until this create is finished and be a patient person. So this is coming up, and it's taking advantage of service discovery. I don't have to use service discovery: I've got a version of this demo where I spin up an ElastiCache cluster for Redis instead, and from that CloudFormation I expose a variable, like I did from the production VPC, and consume that in my chat application. But I wanted to demonstrate this cool new feature, service discovery, because a lot of people will say they need a service mesh for their containers.
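On the application side, those two pieces, the environment variable and the service discovery DNS name, come together roughly like this. The variable name `REDIS_ENDPOINT` and the default are my reading of the talk, not the chat app's verified source:

```python
import os

# The chat app is handed its Redis endpoint via the container environment,
# e.g. REDIS_ENDPOINT=redis-master.local. Ordinary DNS resolution against
# the private "local" Route 53 zone then turns that name into the Redis
# task's ENI IP; no service-mesh machinery is involved.
def redis_endpoint(default: str = "localhost") -> str:
    # Variable name is an assumption based on the demo, not verified code.
    return os.environ.get("REDIS_ENDPOINT", default)

# Simulating what ECS injects from the task definition's environment list:
os.environ["REDIS_ENDPOINT"] = "redis-master.local"
print(redis_endpoint())  # prints redis-master.local
```

This is why the same container image runs unchanged against local Redis, ElastiCache, or the Fargate Redis task: only the injected endpoint differs.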
Well, I would say: if you've not got a scalable DNS infrastructure that you can use properly and you're looking at a service mesh, use the primitives first. Don't make it over-complex; use DNS, that's what it's there for, discovering services. Okay, we've got a create complete, so this is going well now, after the initial test I did on all of you. First I'll show you that we've got everything running and it's all looking happy. If I dig into my fargate-chat service I can look at the three tasks it's running, see which revision of the task definition they are (you can see I've done this demo sixteen times now), and see things like the platform version, 1.1, and other bits of information you can customize a little. I can also drop out to my logs and have a look at what each task is pushing out. Why I've only got two and not three, I don't know; maybe it's just a delay in the console. But this "server listening on port 3000" that was printed to standard out in my Docker container has been picked up by CloudWatch Logs, and it's now in here. So, moment of truth, everyone: I'm going to go to fargate-chat.digilucian.io (is that big enough to read?), and if you go to this on your phones or laptops it will also work, so I'm hoping Brent at least is going to join and I'll have somebody to talk to on this chat. The first thing it asks is your nickname, and I'm going to say Ric, because that's my name, and hit enter. I'll wait for this to resolve on the network, nervously; it doesn't normally take this long. It's fargate-chat.digilucian.io. I'm just going to refresh my page here a second... thank goodness for that, thank you Josh, it's working. So if I say hello here, we can all see each other typing. If the first person types any rude words, I'm just going to pretend they're not there, or type in German and I
won't understand. So this is our little app working, and in the background I'm not really running on any infrastructure. I'll prove that: I'm going to go into my AWS console, go to EC2, Instances, and this is where I hope I've not left anything running... yes, no running instances. I have no EC2 instances that I'm paying for; I'm just paying for those individual containers. I've not got to patch anything, I'm not worrying about agents, OS updates, libraries being out of date, OpenSSL: all of that worry is gone, someone's looking after it for me, and I'm just looking after my containers. I've got an infrastructure there just by giving containers to the service, and that lets me focus on my container; I can go away and improve the application. Now, one of the things we get asked a lot is: how do I get into a container running on Fargate to debug it? The proper answer is that you shouldn't ever exec into a container in production to debug. You should have a proper centralized logging structure, collate your logs, and ship them somewhere like Elasticsearch, or CloudWatch Logs, and look at your logs properly. Now, I told you I came from a Kubernetes background, and I got very, very lazy. Kubernetes has a very cool tool called kubectl: you can do kubectl exec, give it a container name, say bash, and drop directly into a container wherever it's running in your infrastructure. Really cool, but it made me a very lazy ops person, because I'd go in and debug a container, maybe do a little hotfix to keep things alive in production, then someone would come and talk to me, I'd forget to commit that change back to our git repo, and when all the containers restarted it all broke again. When I started using ECS for demos I couldn't do that anymore, and my first reaction was that this was massively frustrating, but then I realized: actually, I'm doing it the proper way now.
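The "proper" logging path he describes, container stdout flowing into CloudWatch Logs, is configured per container in the task definition. A sketch with placeholder names (the group and prefix here are illustrative, not the demo's real ones):

```python
# Hypothetical container-definition fragment: the awslogs driver ships
# anything the process writes to stdout/stderr (like the "server
# listening on port 3000" line) to a CloudWatch Logs group, so you read
# logs centrally instead of exec-ing into containers.
container_definition = {
    "name": "chat",
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/fargate-chat",    # placeholder group
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "chat",          # placeholder prefix
        },
    },
}

print(container_definition["logConfiguration"]["logDriver"])
```

With this in place, the application never needs to know about CloudWatch at all; it just prints to stdout and the platform handles delivery.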
I've had to look at my logs, see there's a problem, go and fix it in git, and do a proper redeploy through my CI/CD pipeline, and it's made me a little bit tidier as an admin. So whilst it felt like a constraint, I now welcome it; it's something I've grown to love. If an exec feature were to come to ECS I'd be very happy, but you've got to ask the team about that, so I'll throw that one over to you as a feature request. One last thing I'm going to show you, because we've got a bit of time. I talked about our ECS cluster and how we could have a hybrid cluster. If you look at the state of my cluster now, I've got no container instances. If I wanted to move my workload onto EC2, so I could SSH to the box and use docker exec to get into the container and do naughty things with bash, I could; but I don't have any container instances, and I can't SSH into Fargate. So I'm not actually going to move my workload now, but I'm going to show how easy it is to build a hybrid cluster. If I go back to CloudFormation, I'm going to create one last stack, and I'll show this one as well: an ECS cluster template. I could override the size of my instances here; I'm going to have a c4.xlarge, mainly because I'm not the one paying this AWS bill. Let's spin that up. I'll pick one of my generic Fargate container security groups (it doesn't matter for this particular demo because we're not going to do anything with it), and I'm going to say I want three servers. So I go along here, hit Next, and Next again, acknowledge that I'm creating some IAM roles, and hit Create. What's going to happen in the background now is that CloudFormation is actually going to provision some EC2
instances for me and connect them to the ECS management control plane, so my control plane will be dealing with both Fargate and EC2 instance types. When I supply a task, all I do is tell it which launch type I want, and the ECS control plane knows whether to put it on Fargate or push it out to EC2, so I can move workloads back and forth between the two environments as much as I wish. Let's have a quick refresh on this; it might take some time and might not complete, but if we drop into EC2 now, hopefully we'll get some pending instances very shortly. While we're waiting for that, let's drop very quickly back to the slides (oh, an automatic switch to presentation mode, quite impressive) and do a little bit of a wrap-up; I'll show the instances at the end. So we've seen our demo and we've leveraged CloudFormation; let's have a quick look at the summary before we drop back to the browser. Fargate is a new launch type for ECS. It uses the same management control plane and the same CLI tools, so you can leverage everything you've used before; you just change one line. Same with Terraform: you change one line, the launch type, from EC2 to Fargate. You get the same integrations with AWS as ECS, apart from a little idiosyncrasy with load balancers: Fargate can use ALBs and NLBs, whereas the EC2 launch type can also use classic Elastic Load Balancers. There's very easy migration between the two, so if you're debating whether to use EC2 or Fargate, start off with Fargate; if you then find a need to drop into your containers or tweak kernel parameters, that's when you move to EC2. I would say don't give yourself the heavy lifting of managing worker nodes if you can avoid it; concentrate on your application. If you really, really
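That "change one line" migration can be illustrated with a toy helper that flips a service definition between launch types. This is purely illustrative, not an AWS API:

```python
# Toy illustration of the one-line difference between the two launch
# types: everything else in the service definition stays the same, which
# is what makes moving workloads back and forth so cheap.
def set_launch_type(service_def: dict, launch_type: str) -> dict:
    if launch_type not in ("EC2", "FARGATE"):
        raise ValueError("launch type must be EC2 or FARGATE")
    updated = dict(service_def)       # shallow copy; original untouched
    updated["launchType"] = launch_type
    return updated

svc = {"serviceName": "fargate-chat", "launchType": "EC2"}
moved = set_launch_type(svc, "FARGATE")
print(moved["launchType"])  # prints FARGATE
```

In a real task or service definition the launch type is exactly this kind of single field, which is why the same cluster can schedule onto Fargate capacity and EC2 container instances interchangeably.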
have to have that underlying access, Fargate doesn't work for you. I talked about the good reasons for not being able to get in, but I also talked about hybrid clusters as a get-out-of-jail-free card: if you need to move a workload to EC2 so you can get into a container, you can do that. And you can start using Fargate today in us-east-1. It's great fun to play with, and it's super simple to get started even if you're not going to CloudForm it: log into the console, switch your region to North Virginia, click launch cluster, and it's there. The best bit is that as long as you're not running any containers, there's no bill associated with it, so you can at least play around with the command line tools and get familiar with what's there. I mentioned this before: this is Nathan Peck's GitHub account. If you go on here, it's a big list of instructions, tutorials, demos, and explanations about networking. As you scroll down you can pick whether you want Fargate or ECS, and whether you want a public deployment or to run in a private network, and it gives you different instructions for each. It's a great set of resources, and I relied on it heavily to ramp up on this product, so it's well worth a look. Let's drop back quickly to our browser and see if our CloudFormation has finished. Right, I've got some instances starting: my CloudFormation is creating three instances, nicely spread across us-east-1a, 1b and 1c. They're just starting up, so they may not have registered themselves with the cluster yet... and now I've got three container instances here. So my one ECS cluster can now launch both Fargate workload types and EC2 workload types: I've got a hybrid cluster, very quickly and very easily. So have a play about with that; it's great fun to play
with, and you'll get extra memory and CPU utilization metrics once all of this registers itself, so you can see how much of the EC2 capacity you're using. The thing to think about with EC2 is that you're paying for the instances underneath; with Fargate you pay at the container level, so that's worth taking into consideration. I've only got one last slide for you, so here we go: aws.amazon.com/fargate. Do feel free to reach out to me on Twitter and ask me any questions, and thank you very much; I hope you enjoyed the session. Cheers. [Applause]
Info
Channel: Amazon Web Services
Views: 58,386
Rating: 4.9198666 out of 5
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, DevDay Munich 2018
Id: IEvLkwdFgnU
Length: 56min 38sec (3398 seconds)
Published: Wed Apr 25 2018