AWS Container Day - Amazon EKS and AWS Fargate: Better Together

Video Statistics and Information

Captions
Hi, thanks for coming to this session. Over the next 20 minutes we're going to discuss the integration between Amazon EKS and AWS Fargate and how they work together. My name is Massimo Re Ferrè, and I'm a Principal Developer Advocate in the container team at AWS. We only have 20 minutes, so we'll go very quickly through some of the background and the idea behind this integration, the high-level flow, and some of the implementation details, and then we want to show you how this actually works.

So let's get started: what is the idea behind integrating EKS and Fargate? If you're coming to this session you're probably familiar with Kubernetes and, to some extent, with Amazon EKS. Amazon EKS is our managed Kubernetes service on AWS. The way it works is that we manage the control plane for you, and you use that control plane to deploy pods, your applications, on top of worker nodes: EC2 instances that run inside your account. These EC2 instances are typically configured with Auto Scaling groups and similar technologies that let you better utilize and scale those worker nodes based on need, but the important point is that these worker nodes run inside your account. The worker nodes can be unmanaged, meaning you manage them entirely, or you can use the managed node groups feature of Amazon EKS, which takes over some of that responsibility from you.

The idea here is that, in addition to deploying pods on this traditional container data plane based on worker nodes running in your account, you can now deploy your pods onto what we refer to as a serverless container data plane, which is what Fargate represents. Fargate is a serverless service managed by AWS that lets you forget about infrastructure details when deploying applications. With AWS Fargate and EKS you can deploy pods without having worker nodes at all. What I showed in the previous slide is essentially what we call mixed
mode operation: an EKS cluster that lets you deploy at the same time to both Fargate capacity and EC2-based capacity in your account. It is also possible to configure an EKS cluster to deploy only onto AWS Fargate. This is fully serverless, in the sense that Amazon EKS is a fully managed service where you never see the master nodes backing the control plane, and Fargate is fully serverless in the sense that you never get to deal with the EC2 instances your pods are deployed on.

Let's get into some of the details of this high-level flow and how it works. If you think about how the EC2 flow works with EKS, this is pretty much the picture. As I said, the control plane and the master nodes run inside our AWS account; you don't get to see those, you only see the endpoint. Then you have your customer account, where you have your own VPC and the EC2 instances that represent the worker nodes, typically tied together with an Auto Scaling group. When you start deploying pods onto this EKS cluster, you're essentially telling the control plane: can you run this container, can you run this pod? You don't explicitly say "on EC2", because that is implied; EC2 capacity is what gives the EKS cluster the capacity to run these pods. So what happens in this case is that the control plane picks one of these EC2 instances, instantiates the pod on that instance, and connects the pod to the VPC through an ENI. It's usually a little more complex than this, but for the sake of time, this is the high-level flow when you have EC2 instances running in your account. The important thing to understand is that you have to manage those EC2 instances. Customers came to appreciate the fact that we manage the control plane for them, and they're asking us to take more and more responsibility for the EC2 instances that run in their accounts. A
partial answer to that was the managed node groups feature that we announced late last year, but late last year we also announced the integration with AWS Fargate. Let's see how this works when Fargate is injected into the picture. As I alluded to before, Fargate is a serverless data plane that we purposely built to run containers at scale at AWS. You don't get to see the EC2 instances or micro-VMs; we run them in our own account. So in this case the flow is: you come in with your pod and, somehow (we'll get to this in a minute), you tell the control plane, can you run this pod or container for me on Fargate? This is what happens: we go out and source a proper EC2 instance or micro-VM to run your pod, we instantiate your pod on that instance or micro-VM, and we attach that pod to your VPC in a very similar way to how it's done with EC2. From a wiring perspective nothing really changes, because the pod shows up as an entity in your VPC, but the compute capacity needed to run those containers doesn't exist in your account. You don't have to manage it, you don't have to scale it in or out, you don't have to worry about AMI lifecycle; you can now use Kubernetes thinking about the pods and not the infrastructure.

Let's get into some more technical implementation details of how we made this work. You may be wondering at this point how you can tweak Kubernetes to deploy to something that doesn't actually exist. To do this, we introduced the notion of Fargate profiles in the EKS service. This is a new API that you can use to create and delete Fargate profiles, and a Fargate profile tells EKS a few things. The first, and probably the most important for this short session, is what we call the pod selectors. Pod selectors are a combination of namespace and labels that EKS uses to match incoming pods.
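As a concrete sketch, a Fargate profile like the one used later in the demo (namespace prod, label stack=blue) can be created with eksctl. The cluster name eks-fargate comes from the demo; the profile name and the exact flags are assumptions based on current eksctl conventions:

```shell
# Hypothetical profile name; creates a Fargate profile whose pod selector
# matches pods in the "prod" namespace carrying the label stack=blue.
eksctl create fargateprofile \
  --cluster eks-fargate \
  --name fp-prod-blue \
  --namespace prod \
  --labels stack=blue
```

A profile can carry several selectors; a pod only needs to match one of them to be scheduled onto Fargate.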
When a pod matches a selector, EKS knows that it needs to deploy that pod to Fargate instead of standard EC2 instances. In addition to this, the Fargate profile provides information such as which subnets to connect the pod to, and the identity of the pod, or rather of the kubelet: the kubelet needs an AWS identity, an IAM role, associated with it, because it will need to pull images from ECR, for example.

These are some implementation details that I'm showing on this screen. Imagine that you have an EKS cluster configured with some worker nodes, capacity in your account, and that you have also configured this EKS cluster with a particular Fargate profile. This Fargate profile has pod selectors, one in particular that says namespace equals prod and, when it comes to labels, stack equals blue. Let's make a quick example. You come in and try to deploy a pod, using kubectl for example, into a namespace called test. On the EKS cluster there are mutating and validating webhooks that inspect this pod and ask EKS: do you have a match in one of your Fargate profiles for this pod, which has namespace test? Since there is only one Fargate profile in this example, and its pod selector doesn't match what's coming in, the mutating and validating webhooks just let the pod go through the traditional Kubernetes scheduler, through the traditional deployment path, and the pod gets instantiated on EC2 instances running in your account. Now let's take a different example: you try to deploy the very same pod but with different specifications, this time into a namespace called prod and with the label stack equals blue. The same mutating and validating webhooks again ask EKS for a matching Fargate profile.
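The routing decision described above can be sketched as a tiny shell function. This is a deliberate simplification of what the webhooks and EKS do, assuming a single profile whose selector is namespace=prod, stack=blue:

```shell
# Simplified sketch of the webhook decision: which scheduler handles a pod,
# given its namespace and labels, with one prod/stack=blue Fargate profile.
scheduler_for() {
  local namespace="$1" labels="$2"
  if [ "$namespace" = "prod" ] && [ "$labels" = "stack=blue" ]; then
    echo "fargate-scheduler"   # profile matches: pod is mutated for Fargate
  else
    echo "default-scheduler"   # no match: standard Kubernetes scheduling path
  fi
}

scheduler_for test ""           # prints "default-scheduler"
scheduler_for prod stack=blue   # prints "fargate-scheduler"
```

In the real system the mutation writes the scheduler name into the pod spec, so the custom scheduler, rather than the default one, picks the pod up.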
This time EKS provides the webhooks with the Fargate profile I just discussed, and the webhooks figure out that there is actually a match. So they mutate the pod to include additional information, such as the profile to use and the scheduler name; this refers to a custom scheduler that we purposely built for this. In that case, the custom Fargate scheduler deploys the pod onto the Fargate fleet. What you see here is how you extend Kubernetes "by the book". We obviously didn't have the option of forking Kubernetes and modifying the scheduler's behavior, so what we have done is extend Kubernetes so that it can discriminate whether you want a pod deployed on standard EC2 instances or on Fargate and, based on your intent, use either the standard Kubernetes scheduler or a custom scheduler that lives inside EKS.

Now that we're at this point, let's take a look at how this actually works for real. This is a Cloud9 instance that I use to demonstrate and work with these technologies. The first thing I wanted to show you is how easy it is to deploy an EKS cluster with Fargate. We use eksctl as our preferred command-line tool to create and delete EKS clusters, and this is the CLI I used to deploy a cluster called eks-fargate with the flag --fargate. We'll see in a second what that flag does to the cluster. It takes a few minutes to deploy, which is the reason I deployed it offline; I didn't want to steal time from the session. At the end of this command, what you have is an EKS cluster called eks-fargate, on Kubernetes version 1.17, which is now the default with eksctl, and the cluster is active.
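The cluster creation just described can be sketched as follows; this is a minimal form of the command, not necessarily the exact invocation used in the demo:

```shell
# Create an EKS cluster with Fargate enabled. The --fargate flag also
# creates a default Fargate profile that selects the "default" and
# "kube-system" namespaces, so core pods like CoreDNS land on Fargate.
eksctl create cluster --name eks-fargate --fargate

# Confirm the cluster came up.
eksctl get cluster --name eks-fargate
```

That default profile is why, in the demo below, the CoreDNS pods and anything deployed into the default namespace run on Fargate without any extra configuration.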
If I go into the EC2 instance view in my account, what you see is that there isn't any EC2 instance backing that cluster. I only have a couple of Cloud9 environments that I use, but I don't have any worker nodes running in my account. So let's see what happens when I start interacting with this EKS cluster. This is a brand-new cluster, so it doesn't have anything other than the CoreDNS pods. Now, you may wonder where these Kubernetes pods are running if there isn't any EC2 instance backing them. This is the whole purpose of the Amazon EKS and AWS Fargate integration. If I query the nodes, I actually see a couple of nodes, called fargate-ip-192-something, which reflects the internal IP schema of my environment, and if I dig into these I see that there is a one-to-one mapping between these Fargate "instances" and my two running pods.

Let's take this example: I have a very simple pod that I'm deploying with replicas equal to one, into the default namespace. So let's do kubectl apply. OK, my deployment is being created. If I query the pods, I see one new pod coming up, my web deployment in the default namespace, with status Pending. What is happening now is what I described on the previous slide: the EKS cluster queries the Fargate profile that is configured on this cluster, there is a match (we'll see that in a minute), and so it deploys this pod onto the Fargate fleet. It takes a few seconds to source an instance from the Fargate fleet and deploy the pod, but if you keep refreshing you will see the pod come up. The other interesting thing to see is that a new node is coming online; this represents the node that is going to provide the capacity to run my pod. And you see, now the pod is running.
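The demo steps can be reproduced roughly like this. The deployment name web matches what's visible in the session, but the manifest itself isn't shown in the captions, so its contents (including the image) are assumptions:

```shell
# Only Fargate-backed "virtual" nodes show up; no EC2 instances exist
# in the account. Node names look like fargate-ip-192-168-x-x.
kubectl get nodes

# A minimal single-replica deployment in the default namespace,
# which the default Fargate profile matches, so it lands on Fargate.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
EOF

# The pod stays Pending while Fargate sources capacity, then goes
# Running; a new fargate-ip-... node appears alongside it.
kubectl get pods -w

# Scaling out sources additional Fargate micro-VMs, one per pod;
# no Cluster Autoscaler and no EC2 instances are involved.
kubectl scale deployment web --replicas=5
```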
As you can see, I didn't have to deploy anything in terms of EC2. And just to be clear, this is not cluster autoscaling kicking in; there is no cluster of worker nodes here to scale in and out. As a matter of fact, if I refresh here, there isn't any new EC2 instance. You can think of these nodes as virtual representations of EC2 instances running in an AWS account, used to back these pods. The other interesting thing I can do here is change the deployment so that I have five replicas. If I apply the same file, the deployment is reconfigured to spin up four additional pods. If I query the pods, I see them coming up; they're Pending right now, and it will take a few seconds to deploy them onto the cluster. I want to make sure you grasp that this is not the Cluster Autoscaler; this has nothing to do with the Cluster Autoscaler, in the sense that there isn't any EC2 instance that I'm running. Having said this, EKS and Fargate work very well with the Horizontal Pod Autoscaler, so you can create rules to scale the pods in and out without having to think about scaling the infrastructure in and out. So what happened here is that my containers are coming up and starting to run. In the background, this integration went out to the fleet, we sourced new EC2 instances or micro-VMs to back these pods, and the pods are now up and running. It takes a few additional seconds, but this is how the integration works.

As I said, we only had some 20 minutes to discuss this, so we wanted to keep it short, give you a high-level overview of how it works, and show you how it works in practice. Here are a few additional resources that you can refer to as a call to action. There is a blog post about this integration that we published when we announced it back in December 2019, and there is a one-hour deep dive that we
did at re:Invent 2019 that goes into more detail about how this integration works and covers other aspects, for example how you size those pods, because you don't have EC2 instances to size. My suggestion, if you want to get started, is to jump onto the EKS and Fargate getting started guide at the link here, which allows you to get started in a matter of minutes. With this being said, thank you very much for coming to this session. These are my contact details; feel free to reach out via email at mreferre@amazon.com or on Twitter at @mreferre with comments or questions about this. Happy to help, and have a great day. Thank you very much.
Info
Channel: AWS Events
Views: 5,895
Rating: 4.816092 out of 5
Keywords: AWS, Events, fargate, eks, aws, containers, kubernetes
Id: -xMNbys0tF8
Length: 21min 50sec (1310 seconds)
Published: Mon Aug 31 2020