AWS Builders' Day | Deep Dive into AWS Fargate

Captions
Ready? Whoo, okay. Welcome to the second-to-last deep dive today, or I guess it's our last service deep dive; the last session after this will be on advanced container scheduling and management. Just in case you weren't sleepy enough at 4:30, buckle up. This is a deep dive on AWS Fargate. We spoke about it this morning in the State of the Union, and Paul mentioned it a couple of times in the deep dives on ECS and EKS. One big Fargate difference: it's not actually a service. It's the underlying technology that powers both Fargate mode for ECS and, coming soon-ish, Fargate mode for EKS. As we covered this morning, that means you don't manage the underlying cluster infrastructure; you do everything at the container level, via the task definition or, sometime soon-ish, the Kubernetes pod.

So what does Fargate mean? Practically speaking, it means you don't do the scaling, or the setup, or the resource allocation. You just pass in your definition, your task definition or your pod, and Fargate handles all the other pieces.

Why Fargate? If you've run containers in production yourself, you can probably answer this question for me: running one container is really easy. Running one container with Kubernetes, or with Docker, or with an orchestration tool, or with ECS; been there, done that, wrote the Medium post saying containers are so easy, why does everyone need Kubernetes. Running lots of containers, though, is really difficult, and all the problems you have in scaling and running a microservice are compounded 10, 20, 50, a hundred times over when you're running lots of containers, especially in production. That's a ton of work: high availability, resiliency, making sure your latency is okay, making sure your response times are okay, making sure you're managing your resources and your scheduling properly. Those are all things that
become extra important in a production system, and that's where you handle your capacity, so you probably have X number of containers or more.

For Docker on EC2, you're responsible for a lot more of this. If you wanted to run just Docker with no orchestration engine, you'd be responsible for writing some sort of custom logic that figures out: when do I need more of this service, when do I need less of it, and out of my pool of 50 instances, which one do I allocate this task to? There's a ton of work there that a lot of people don't want to do themselves. ECS got us part of the way: it took over the scheduling and the orchestration, basically how to manage your cluster resources and how to place the containers on top of them. But it's not totally hands-off. You can still write placement policies, and you still define your task definition, your resource constraints, your networking, your security groups, your policies, your roles. Those are all things you're responsible for in ECS. So it got you part of the way, but it didn't get you to a place where you're hands-off and you could just write containers to run containers.

Fargate is here to let you focus on the application: just the container level, just the task level, nothing else. You're responsible only for defining your container and the resources you want to give your service, and everything else is on Fargate. The best description I've seen of this is: someone asks you for a sandwich; they're not asking you to put them in charge of a global sandwich restaurant chain. They just want a sandwich. With Fargate, it's just the sandwich. No one has to be in charge of deployment and orchestration and management and service discovery; it's just there, because ultimately all you really wanted was your workload, your job, your container. You didn't care about all the
logistics that went into it; you just wanted your sandwich. (Sorry, we got a little PG-13 for a second there; I probably would have left the original quote as-is, because some carefully chosen obscenities really make the point.)

This is a little more of what it ends up looking like between ECS and Fargate. With ECS, you're responsible for the ECS agent, running Docker itself, and picking the right AMI. With Fargate, that bottom layer is removed: you're responsible for just the contents of those boxes and how big the boxes are, nothing else.

What does that look like in practice? This should be a familiar screenshot at this point, because Paul also used it, and he took the other path. What does it look like for Fargate? It does not look any different. All you're doing is choosing a different launch type: instead of choosing EC2, I'm choosing Fargate. A couple of caveats, but we'll start with some similarities. The first is that both the EC2 and Fargate launch types use the same schema: they both take a task definition as an input, and the task definition looks exactly the same. It's AWS, so it's just a big pile of JSON. The task definitions are the same, and the APIs that you use to launch and manage those containers are also the same. All you're changing is the launch type and, for some of you, the network mode, if you were not previously using awsvpc. There's a big reason for this. One part is easy migration: you can run a hybrid cluster, so I can have both EC2 tasks and Fargate tasks coexisting in the same cluster. The other part is that I should be able to switch back and forth. As a nice question earlier raised: maybe I start off using Fargate and then decide, hey, actually I want to control something on my cluster, I want to run some sort of daemon scheduler, I want to install a package myself.
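The "same schema, different launch type" point can be sketched as two hypothetical `aws ecs run-task` calls against the same task definition family; the cluster name, revision, subnet, and security-group IDs here are placeholders, not values from the talk:

```shell
# Same task definition, same cluster, two launch types coexisting.
aws ecs run-task --cluster my-cluster \
  --task-definition web-app:3 \
  --launch-type EC2

# The Fargate launch type additionally requires awsvpc networking.
aws ecs run-task --cluster my-cluster \
  --task-definition web-app:3 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222]}'
```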
Then you can just switch your launch type back, and go from Fargate to EC2, back and forth.

A really common question about the initial state of Fargate is: what happens if I'm not all the way there? What happens if I still find myself in a situation where I need to exec into the container I'm running, either to debug, or to perform an operation, or to run some sort of command? The answer is: in Fargate, you don't. But if you're in a position where you might need to do that, what I would suggest is some kind of flag, so you can flip back and forth between the EC2 type and the Fargate type. If you think, well, I have a persistent problem with this container and I'd really like to get in there and run something or install a package, you can flip back to the EC2 type, where you can exec into the container like you would have before.

So we said they share the same schema, which is a task definition. What exactly is a task definition, if you're just joining us? In the case of Fargate, it's kind of everything. It's an immutable document that describes everything there is to know about your container. It contains a container definition, which is some resources plus the URL you get your image from: a Docker Hub URL or an Amazon ECR registry URL. All containers that belong to the same task are co-located on the same host, you can have up to ten per task, and you identify it by a family and version: web-app 1, web-app 2, messaging-app 10. You cannot change the task definition itself once you've created a version, so you effectively version your application by adding new revisions of the task definition. If I want to change my CPU from 1024 to 2048, I do that by creating a new revision of the task definition, and then I let ECS handle the deployment: draining the connections off the old version of my task and then rolling those
connections over onto the new version.

A couple of things are required: you must have the name and the image URL, and in the case of Fargate you must have some resources. The primitives beyond the task definition are shared with the EC2 launch type. You have to run Fargate in a VPC, there's no option; you can use IAM policies and CloudWatch. A lot of the same primitives that are around in the EC2 type (which I keep trying to call ECS, but really they're all ECS; a lot of acronyms) are all the same: you use the VPC the same way, security groups the same way, CloudWatch and IAM policies the same way.

The workflow looks a little bit different in Fargate. Frankly, it's a little hard to demo, because part of the whole point of Fargate is that you don't do a lot of stuff anymore. It used to be that I could do a cool demo, like, look how good I am, I can make Docker containers and I can put them on ECS, and now it's, well, not anymore. It's the world's most anticlimactic demo. The flow looks something like this. I build my container image, which is the same as it used to be: I work locally as a developer, I add some packages, I expose a port, I copy over some source code from somewhere else. I choose my orchestrator: right now you have only one choice, and it's ECS, but coming sometime in the future, the same will be true for Kubernetes, so at that point I could choose one orchestrator or the other, type ECS or type Kubernetes. Then you define your application, but this is literally just the task definition or the Kubernetes pod. I drop that in, I say these are the resources it needs and here's my definition, and then Fargate does the rest. That's the new deployment and development process with Fargate. By design, it's not super different from the ECS process.
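A minimal sketch of that big pile of JSON, with hypothetical names, image, and sizes (none of these values come from the talk; only the required shape does):

```shell
# Minimal Fargate task definition: family, awsvpc network mode, task-level
# resources, and one container definition with the required name and image URL.
cat > web-app-taskdef.json <<'EOF'
{
  "family": "web-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "1024",
  "memory": "2048",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{"containerPort": 80}]
    }
  ]
}
EOF

# Registering this creates revision 1 of the "web-app" family. Bumping the CPU
# to 2048 later means registering again, which creates revision 2; the old
# revision itself never changes.
aws ecs register-task-definition --cli-input-json file://web-app-taskdef.json
```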
You're just missing a whole bunch of steps, so I don't have to do quite so many of the connecting pieces; all I really need to do is define my container.

Now for the world's most popular question: how do I know whether I should use Fargate or EC2 mode? We had a bunch of these questions both during and after the State of the Union, so I'll give some higher-level advice. I realize everyone is very tired of us all saying "it depends on your workload," but we say it for a reason: the reason I'd use Fargate is not the same as why Paul might use Fargate, and it's not the same as why you might use Kubernetes on EC2. It depends on your workload, on what you can maintain and scale, and on what works best for your process. Some words of advice, though, between Fargate and EC2 mode. If you have a task definition and some idea about resource usage, and you're not doing anything beyond that, meaning all of your configuration fits inside the container and the task definition, then you're fine with Fargate, and I actually recommend Fargate as a really nice starting point. If you're doing anything else, installing a daemon, SSHing into the host, exec-ing into the container, running anything that needs advanced Linux capabilities, then perhaps Fargate is not the right point for you to start at. There are some trade-offs in going back and forth: with Fargate, by passing more of the management responsibilities to AWS for things like high availability and deployments, I lose a little bit of control over things like being able to install my own packages on the host. So if you're doing any sort of customization, maybe start with EC2. If everything about your application can be defined in just the container and task definition, then Fargate is a good place to start.

We're going to look at resource usage in Fargate, because I'm determined to give you all the most exciting afternoon possible,
and because it's one of the only things left beyond the task definition that really controls your workload in Fargate. If you're not totally bored to death, don't worry: our last session is 400-level, and since everyone has been asking me so many delightful questions about networking, I added some slides.

Compute resources in Fargate are a little bit different. It's a sliding scale now, instead of a free-for-all in what you can choose, and the scale is on the next slide, before anyone raises their hand. I choose my CPU, I choose my memory, and I have both task-level resources and container-level resources. A task-level resource is the total CPU and memory shared across all containers that are part of that task; that one is required. Within that, I can also set container-level resources. A use case for this, before anyone asks why you'd need a separate memory field: the person who came up to me afterwards and said "I have one container that steals all my memory," you, my friend, are why there's that other field. Not all containers consume resources responsibly or in the same way, so if you have a container that is a memory hog, or likes to eat up all the available CPU, that's why you have that second-level field. The container-level settings define how resources are shared between all the containers that belong to the task versus that one specific container. If you've ever seen the ECS error message that says task X has been killed for memory usage, that's why: it hit a hard limit and was killed for not acting responsibly. You could ignore that error, technically, by letting it restart over and over until it eats up all the memory and is killed again; or you could give it a new limit; or you could fix the bug that was causing it to eat up all the memory. So many paths, Padawans; that one is a choice for you. In the following session we'll look a little more closely at how containers share resources back and forth, and how to know when to set a hard limit versus a soft limit.
I have a little bit of that in here now, but not too much.

This is the chart that I promised. This is how you do resource configuration in Fargate. You have a couple of choices, and by a couple I mean fifty, though I did not list all fifty on this slide. You get a sliding scale, basically, of CPU to memory. Someone asked me in Helsinki after this talk whether I could theoretically give a task an infinite amount of resources, since it's not my cluster, and the answer is no, not infinite, or you'd probably get a call from someone at AWS asking, you know, maybe not so much. But it's pretty generous: you can allocate a fair amount of resources to these containers. The top level is 4096 CPU units and up to 30 gigabytes of memory. Pretty generous; not infinite, guy in Helsinki.

Container CPU sharing: I promise I have only about two pages on this, even though I think it's really cool. Task CPU is a hard limit: the task cannot have more CPU than that, it cannot exceed it. Container CPU is optional. Remember the task and the containers: the task level is what's shared, the container level is optional, and by default your containers will share everything that's allocated to the task, distributed equally between all the containers that belong to it. I can set container CPU to control that sharing, so I could say that my greedy application gets a larger percentage of the CPU than a really lightweight application that maybe just adds things to a queue. Same basic principles for memory: task memory is a hard limit, my task cannot exceed the amount of memory allotted to it, and the container-level values are optional.

A really frequently asked question is also: how do I know which instance type to choose, and how do I know that I've allocated the right resources for my task, for my container?
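The sliding scale from the chart can be sketched as a small lookup, in CPU units and MiB; the exact pairings below are assumed from the Fargate configurations available at the time of the talk (256 units = 0.25 vCPU, up to 4096 units = 4 vCPU), so treat them as illustrative rather than current:

```shell
# Maps a Fargate task-level CPU value to the memory range it allows.
fargate_memory_range() {
  case "$1" in
    256)  echo "512 1024 2048" ;;
    512)  echo "1024 to 4096, in 1024 steps" ;;
    1024) echo "2048 to 8192, in 1024 steps" ;;
    2048) echo "4096 to 16384, in 1024 steps" ;;
    4096) echo "8192 to 30720, in 1024 steps" ;;
    *)    echo "not a valid Fargate task CPU value: $1" >&2; return 1 ;;
  esac
}

# The 4 vCPU top end is what allows the 30 GB (30720 MiB) mentioned on the slide.
fargate_memory_range 4096
```

Note that you pick from these fixed steps rather than choosing arbitrary values, which is the "sliding scale instead of a free-for-all" point above.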
This is a bad answer, I guess, but I generally start with the default, assuming that someone at AWS has put more research into it than I would care to do. Then a lot of it depends on you watching your application: logging things, looking at metrics to see how your container is doing. Maybe you have an alert on the ECS agent logs for the message that says a task was killed for memory usage; that's something you need to pay attention to, because you can get a lot of performance increases from tuning how much memory and CPU you allocate. You can just use the defaults, though. If you do not know the answer, it is okay to not know the answer; use the defaults and adjust as you see fit. Also, if you know you have a really lightweight container, like a proxy whose only job is to send requests other places, maybe you decide you don't really need to allocate much to it, and you can drop the numbers a little. But you're always required to have a number at the task level for CPU and memory, and that's the hard limit; then you can set soft limits, which change how the containers share memory between themselves.

Moving on a little from resource usage: platform versions. Fargate is obviously a managed service, which means you need a little bit of control over whether we upgrade you to a different platform version or not. Using the platform version argument, you can pin a version, and we will not upgrade you until you change the pinned value; or you can leave it blank, and we will upgrade you as we come out with new platform versions. So if you're running a production system that is perhaps dependent on a certain platform version, pin the version number so that we do not automatically upgrade you; we will automatically send those upgrades out as we release new ones. The platform version itself refers to the runtime in which your containers are being executed.
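Pinning versus floating can be sketched with the standard `--platform-version` option on a service; the cluster, service, and task definition names are placeholders:

```shell
# Pinned: this service stays on platform version 1.0.0 until you change the
# value yourself, so production is never upgraded out from under you.
aws ecs create-service --cluster production --service-name web \
  --task-definition web-app:2 --desired-count 2 \
  --launch-type FARGATE --platform-version 1.0.0 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222]}'

# Omitting --platform-version means "latest": the service is upgraded
# automatically as new platform versions roll out.
```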
That's the environment on those cluster hosts that you don't control anymore, in which your containers are executed. [In response to an audience question about how many previous platform versions will be supported:] Yes, I do not know exactly, but I'm going to guess it will be like EKS, where we support a couple of previous ones, a number to be determined. For most services I think it depends a lot on how people are using it: if you look at usage numbers and no one is ever using more than two versions back, I imagine we'd focus more on supporting newer versions rather than additional older ones. But it's usually at least a couple, though not too far back, if that makes sense, because at some point you have to upgrade; it's a way of life.

So let's talk about networking. I put in jokes to make networking fun; is it working? I'll ask you again in ten slides and we'll see if anyone still thinks networking is fun. We're going to try to make it kind of fun, because in between every single session today there have been at least four or five networking questions. So now we're going to have networking 101, everyone will learn the answers to these questions, and then you can ask me something different. Ready?

Since this is the Fargate deep dive, I started off with three things that you cannot do in Fargate, because I'm helpful. These are the three original Docker networking types; there are many like them, but these ones are mine. The first one is bridge. That's the default; it's called docker0, and if you're someone from Hacker News who says there's a different way to pronounce that, I don't care, I'm going to say "docker zero." This is the default behavior for ECS, and for standard Docker with no orchestration system. It means that containers on the same network communicate with each other via IP address; it does not get you automatic service discovery. If you want to connect your containers, you can use --link. I know the slide has three hyphens in there;
it's because PowerPoint insists that I want only one dash when in fact I want two, so now there are three, so no one's winning, but at least it's not one giant line. None means there is no network interface attached; it's only local loopback, which I will explain very shortly. Host means that I map directly to the host network, so network settings inside the container are a direct equivalent of network settings on the host.

More networking, to answer frequently asked questions from both my Twitter feed and this room: yes, you can create your own bridge networks; no, I will not be covering that in this talk, but if you're interested in learning how, I will send you some very cool links. If you're looking to learn more about overlay networks, which I know Paul mentioned, they're more commonly talked about in the context of Kubernetes, or something like Consul or Docker Swarm. I'm also not covering those, but when the slides go out, I've included a link to some info from the Consul team that talks a little about what overlay networking is. If you're trying to figure out how your containers communicate with each other, or in some cases are failing to communicate, I actually highly recommend the Docker documentation. Networking is worth many whole conversations of its own, so this isn't an exhaustive primer; it's just the basics, so you know a little about what we're talking about when we say what networking type something is using. If you're looking for a deep dive on inter-container communication or more on these networking types, the Docker documentation has answered a lot of questions on networking, because everyone's favorite Docker topic is "but how do my containers talk to each other?" Start there if you're looking for more information, and please feel free to tweet me, because I would love to talk about it.

For our purposes, though, for talking about things like ECS and Fargate,
there are two kinds of networking that we care about. The first is container, or local, networking, and the second is external networking. (Sorry, is that not what I said? It's fine, you knew what I meant; it's on the slides.) The first one is how containers communicate with each other on the same network. On a single EC2 instance, two components can communicate via the local loopback interface. This is how it would work on a standard EC2 instance without Docker, too: if I have two separate processes running, they can communicate via the loopback interface, which lets processes communicate directly. Again, this is not an exhaustive primer on networking, so if you would like to add that there are many more things you can do with the loopback interface, please tell Paul after this brief commercial interruption.

External networking is probably what comes up more commonly for us, though, which is: how do I communicate with services that are not part of my task, or with external services? In a lot of cases, this means our traffic is being routed through a VPC. I launch tasks into subnets, and subnets define traffic through things called routing tables. If you're falling asleep already, it's because a lot of you, if you're a software engineer, might not have done this before, never created your own VPC; if you're an ops engineer, you've probably logged a lot of close-up quality time with the VPC console, so you know all about routing tables. There are two types of subnets, though, which is important for what follows: public means there's an internet gateway attached; private means no internet gateway, and you communicate through a NAT, that is, network address translation. You don't need to know much about networking to use ECS and Fargate, I promise. I'm telling you this so you can make an informed decision when Paul mentions that there are a bunch of different networking types, or when we mention that Fargate only supports awsvpc mode
and not the previous networking types; I want you to know at least a little about what that means: what the difference is between a public and a private subnet, why private subnets need NAT gateways attached while public ones have internet gateways. If you're planning on running this in production and you're the one responsible for it, please, please, please do more research than just this, but hopefully we've answered some of the very basic questions about how you can use this yourself.

awsvpc is the new networking mode we've been talking about all day, and it is your only option for Fargate. If you want a different networking mode, you need to use the EC2 launch type, or I guess Kubernetes, but that was Paul's talk. With awsvpc, each task has its own ENI that's allocated by us; you do not have to do it yourself. By default it receives a private IP, and you can also allocate a public IP. Containers that are launched as part of that same task (remember, you can have up to 10 container definitions in the same task) can use the local loopback interface, which I talked about a couple of slides ago (jet lag is hard), to communicate with each other. With an ENI allocation comes a private IP. ENIs are at the task level, though, so how do I control how containers that are not part of the same task communicate? There's a little bit of a process for how the VPC integration works, and it's the one I just described: we create the ENI, we allocate it from the subnet that you're launching your task into, we attach the ENI for you, and that task now has an associated private IP. You can control inbound and outbound traffic via security groups, the same way you would for any other type of container infrastructure. I have a slide addressing this in a little bit, where we're looking at literally the same
screenshot, and I will show you why. You have to run Fargate in a VPC; you have no other option, there is no other road forward. You can pass in your own VPC configuration. I know some of you have tried the Fargate getting-started wizard, which does not let you add your own VPC; that's just a getting-started wizard for learning, so definitely, in real life, use your own VPC.

Why do we care about ENIs? Why do I keep talking about them? ENIs in Fargate are what control internet access. A task's ENI controls the network access both to and from your task, including things like image pull, from ECR or another registry, and pushing logs to CloudWatch. To push logs and pull images, you must have outbound internet access, even if your application itself does not need internet access. You can do this either with a private task with outbound access, or a public task that has both inbound and outbound. Allowing only outbound means the task will accept no requests; inbound and outbound means it will both emit data and accept incoming requests.

If you're more of a picture person, this is what local networking looks like: two containers that are part of the same task communicating via the shared ENI, with a public and a private IP. This next one is a little more complicated, showing non-local networking, both public and private. You'll see that I have a public subnet and a private subnet; my private subnet communicates through the NAT gateway, so I send my outbound traffic through the NAT gateway, and that task has only a private IP, while there's an internet gateway in front of my public subnet. This is a bigger and better picture of what this might look like from the top, but you just learned all of these concepts, and it is not necessarily important to be able to parrot them all back or to know exactly all the differences.
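The ENI process just described shows up in practice as the network configuration you pass when launching a task; the subnet and security-group IDs below are placeholders, and `assignPublicIp` is the flag that decides between the private-only and public variants:

```shell
# Launch a Fargate task into your own VPC. The ENI is created and attached for
# you, allocated from one of these subnets; the security group controls its
# inbound and outbound traffic.
aws ecs run-task \
  --cluster production \
  --task-definition web-app:2 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={
      subnets=[subnet-aaaa1111,subnet-bbbb2222],
      securityGroups=[sg-cccc3333],
      assignPublicIp=ENABLED}'
```

With `assignPublicIp=DISABLED` (the private-task case), outbound access for image pulls and log pushes would come from a NAT gateway on the subnet's route instead.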
The main takeaway, really, is that for Fargate it's awsvpc only. If you find yourself limited by awsvpc, or the behavior doesn't work for you, look into the different networking modes available outside Fargate. I included a couple of links if you would like to learn more: the first is a whole post on just how awsvpc networking works in Fargate, and the second is a more general view of networking for ECS.

Permissions and access: I get to do all the cool things today, obviously, such exciting topics. We're going to cover this at a really high level, because I think most of you already know it, but there are three kinds of permissions here. Cluster permissions are kind of a people permission: who can launch or perform actions with tasks in your cluster. Application permissions are what your application containers can actually do. And then there's what we've called housekeeping permissions, which is a really friendly way of saying AWS permissions: how we pull images from ECR, push logs to CloudWatch, or create ENIs in your subnets on your behalf. So, three types. Cluster-level permissions, like everything else in AWS, are controlled by policies: you can write an IAM policy for a user or a role that lets them take an action in a specific cluster. Application permissions are your, kind of, meatier permissions: what can your services actually do, what other services can they interact with. Housekeeping we just talked about, and it has a couple of different categories: the things we execute on your behalf, pulling an image for example, and a service role, which lets us do things like create subnets or register targets to your load balancer. You just learned that too, because I put in another summary, but if you're reading the slides at home, that's the high-level view of which one does which.
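Those permission layers map onto two standard role arguments on the task definition; this is a hypothetical sketch, and the account ID, role names, and image are placeholders:

```shell
# Housekeeping permissions: the execution role is what ECS itself assumes to
# pull your image from a registry and push logs to CloudWatch on your behalf.
# Application permissions: the task role is what your containers assume, so it
# scopes which AWS services your application code may call.
aws ecs register-task-definition \
  --family web-app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 1024 --memory 2048 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --task-role-arn arn:aws:iam::123456789012:role/webAppTaskRole \
  --container-definitions '[{"name":"web","image":"nginx:latest"}]'
```

Cluster permissions, the "people" layer, would live separately, as IAM policies on the users or roles allowed to call actions like `ecs:RunTask` against a specific cluster ARN.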
A couple of fun facts, featuring Biggie, just to make sure everyone's still awake. Storage: these are some frequently asked questions. Storage in Fargate is backed by EBS, and the file system space you get for your container file system is 10 gigabytes. Is that a limit you can ask support to increase? Well, a lot of our limits are soft limits if you ask enough, but as far as I know that is the default limit, and I'm not sure what the process would be right now for getting it increased. It's still a very new service, so that could change at any point.

I've already answered this a couple of times, but if you're following along at home and run into the same question of why all of your clusters look the same: they're hybrid. A lot of people have used the wizard and come up to me saying, but it's so unsafe, where can I use my own VPC, does that mean AWS manages the VPC for me? The answer is no. The Fargate getting-started wizard is a "here's how Fargate concepts work" wizard, so in that wizard it does create the VPC for you. In real production life with Fargate, you pass in your own VPC configuration. You need at least a couple of availability zones and multiple subnets in order to distribute traffic properly, but you absolutely, positively can use and pass in your own VPC.

Someone asked me after the State of the Union how you isolate between your resources and everyone else's in Fargate, since you're not managing the infrastructure anymore. It's at the cluster level: everything is defined at the cluster level, so anything in that cluster is in that cluster, and anything outside of it is isolated from its contents. The logical architecture choice for a lot of people so far has been a production cluster, a staging cluster, a testing cluster, a development cluster, and that fits really nicely, I think, with the isolation model here, which is that
Everything in the cluster only ever sees things in that cluster. We'll talk a little bit more about load balancers later; for Fargate, though, you do not have the choice to use Classic ELB anymore. You can only use an ALB or an NLB, you need at least two subnets in two different availability zones (I just said this), and you have to use the IP address target type, not the instance target type. Just some now-you-knows for when you get started with this yourself.

And a couple of different CLIs. We have the AWS CLI, which everyone knows and everyone uses; it's the same one that's shared across many AWS services, so I can use it for Lambda or API Gateway or EC2. Same CLI, open sourced, includes most AWS services; I put some links in, in case for some reason you haven't heard of it. For ECS there's also a second official CLI, the ECS CLI. That's the one that in a lot of cases supports Docker Compose files; it obviously only works with ECS and Fargate, not everything else. Compose support has been a really popular request, but you can't use Compose files from the main AWS CLI, only from the ECS CLI, which I realize is confusing, because you call one with `aws ecs` and the other is just `ecs-cli`. So, fun fact.

There are some good unofficial options too. The one that I'm about to show you is the fargate CLI, made by John Pignata. That one's just for Fargate, and it's really nice; it feels a lot like how I want my Fargate experience to feel, which is a lot like the docker command, with commands like `fargate task run` and `fargate service run`. There's also the Coldbrew CLI. There are a lot of really nice open-source third-party tools for working with these kinds of container setups, so if the AWS CLI does not float your boat, that's fine too.

So I'm going to try to switch windows here, which I've been told may or may not work. Fingers crossed... okay, so the wizard that I'm talking about... oops, it didn't switch windows. I hate this game. I'm going to cancel the demo soon and we'll just do Q&A. Does that work? Okay.
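As a quick sketch of the load balancer point above: with awsvpc networking, each Fargate task gets its own ENI, so the target group in front of the service has to use the `ip` target type rather than `instance`. The target group name and VPC ID below are placeholders.

```python
# Sketch of the target group parameters for putting a Fargate
# service behind an ALB. Names and IDs are placeholders.

def fargate_target_group_params(name, vpc_id, port=80):
    """Parameters you would pass to elbv2.create_target_group."""
    return {
        "Name": name,
        "Protocol": "HTTP",
        "Port": port,
        "VpcId": vpc_id,
        # Required for Fargate: tasks register by IP address,
        # not by EC2 instance ID.
        "TargetType": "ip",
    }

tg = fargate_target_group_params("my-fargate-svc", "vpc-0123456789abcdef0")
# boto3.client("elbv2").create_target_group(**tg)  # the actual call
```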
So, just some thoughts here. You get to Fargate from the same place in the ECS console that you get to regular ECS, and you'll see that I have a bunch of demo clusters, which is not helpful at all. This is how you can tell that they're hybrid: within that same cluster I have Fargate tests running and EC2 tests running. They run in the same place, and I start them the same way, with the wizard that everyone has been talking about to much consternation. Okay, well, this is a fun game, but does anyone have any questions? Yes, in the back.

"Can I build my hybrid cluster with Windows containers and Linux?" Currently not with Windows. You can have EC2 and Fargate launch-type tasks in one cluster, and you can have a hybrid cluster with Windows and Linux in EC2 mode, but Fargate today is Linux only. It's not the first request that I've heard for Windows, though, so I will pass that on. Anyone in the back before I go to the front? No, that's cheating, Paul. Okay, after this, yes.

"Is there any strategy to keep things warm in Fargate, so that when we deploy our tasks they'll reuse the same underlying instances?" So the question was about the infrastructure behind the scenes with Fargate: how it works, how it's provisioned, and the provisioning time. Today, when you want to deploy a container onto Fargate, there's a spin-up time of around 40 seconds, roughly, while that infrastructure is prepared and booted up for you. That is something we're looking to optimize. Ultimately, with Fargate, we want that to be an optimization task for AWS; it shouldn't be something you have to worry about. "Do I need to hot-provision things? Do I need to pre-warm things?" That whole conversation is the complete opposite of where we want to be with Fargate, to be honest, so those are controls that probably won't be exposed; it will be an optimization task for us to best optimize that pool. I would say, from using it, that the only noticeable start-up difference between Fargate mode and EC2 mode is that original one Paul mentioned: it takes a few seconds longer, and when you time it, it feels like it takes longer because I'm not doing any of the work, but it actually takes much less time than it previously took me to click all the buttons. We don't make any guarantees about the infrastructure your task is going to land on, and that may change as we optimize.

"A networking question: if I want to take a closer look at the data flow, would I be able to logically expose the ENI? Can I tap it?" Partially. You can see the ENI and you can get the IP address associated with it, so from the Fargate section of the console you should be able to find out how to communicate with the ENI for whatever networking witchcraft you'd like to do. But not all of it will be possible: you can't do network taps with VPC ENIs. You can't have the equivalent of a SPAN port; the AWS VPC just doesn't support that construct. So while you can see the ENI in your account, you're not going to be able to dual-home that ENI and split the traffic off to send it to two places. You can use things like VPC Flow Logs, though. When you turn Flow Logs on for your VPC, you get logs showing all the packets: where they went, whether they were accepted or denied at the security group level, and loads of metadata about each packet, like the size, the source and destination ports and IP addresses, and so on. The main thing you don't get with VPC Flow Logs is the actual packet contents; you get the metadata, but not the payload. That would be my go-to for diagnosing network flows, and there are a whole load of example visualization tools, Elasticsearch and Kibana dashboards and such, that you can spin up out of the box and that work with VPC Flow Logs.
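The metadata just described is what a default (version 2) Flow Log record carries, as a space-separated line. A minimal parser looks something like this; the sample record is the standard accepted-SSH-traffic example from the AWS documentation.

```python
# Minimal parser for the default (version 2) VPC Flow Log record
# format: version account-id interface-id srcaddr dstaddr srcport
# dstport protocol packets bytes start end action log-status

FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]
INT_FIELDS = {"version", "srcport", "dstport", "protocol",
              "packets", "bytes", "start", "end"}

def parse_flow_log(record):
    """Split one flow log record into a field dict, casting numerics."""
    parsed = dict(zip(FIELDS, record.split()))
    for key in INT_FIELDS:
        parsed[key] = int(parsed[key])
    return parsed

# Sample record: an SSH flow (dstport 22) the security group accepted.
sample = ("2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
flow = parse_flow_log(sample)
```

Note that, exactly as said above, every field here is metadata about the flow; there is no payload in the record.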
I think that answers your question, right? You do have visibility into it; it's not like a secret ENI where you can't see anything. You can see its address, and you can see the ENI itself. But if you're looking for actual packet contents, I don't think there is another option other than Flow Logs. If you do want something beyond Flow Logs, you could maybe run a sidecar container that shares the same IP and look at the traffic before it leaves the instance; you could inspect it that way. Or just flip the task's launch type so it runs on EC2: have one host in your EC2 cluster that you use just for diagnostics and debugging, take that same container, rerun it within EC2, and then you can tcpdump, or whatever you want to do, to your heart's content. I actually think hybrid clusters are way cooler than the name suggests, because you get the best of both: some things are totally managed, where you just pass a container definition, but you don't have to pick one or the other. You can keep an EC2-mode option open for extra debugging, or for the cool stuff where you're trying to change the Linux capabilities or flags on the underlying infrastructure.

"Have you considered something similar to Heroku, where when you try to SSH into a box, it execs into a copy of it?" So right now on Fargate we don't have access to run exec, but something similar to the behavior of Heroku, where you run a new fork of that environment that you can change... when you do that on Heroku, does it copy the memory, or does it just rerun it in another place? It reruns it. So you can do that today with Fargate if you rerun the task on EC2 in the same cluster; I think that would be the Heroku similarity: the equivalent of taking something and just running it on EC2. On a separate but related note, there has been some talk about a more interactive way of debugging containers. I'm not sure exactly what it would look like or what kind of timeline we're looking at, but it's been a not-uncommon request; even if it's not exec-ing into the containers exactly, some easier way to inspect the process that's currently running, even if you're given no actual access, just to be able to see: this is what's actually running on the container. Not sure what it would look like, but it's something that the team has been talking about.

"Why is it called Fargate?" Don't say the actual story; I believe it's a joke. Well, yeah, isn't it? Are we actually saying that, though? I don't know. What we can say is that there's a lot of theming around Star Trek and other space-related programs in the AWS container services; a lot of the code names are based around that sort of thing. Big fans of space, basically. Because space. Anyone else, especially anyone in the back? I need the exercise. Fine, okay.

So I believe we have a 30-minute break scheduled right now, and if I was seeing properly, it looks like the snack trolleys just went that way, so if you're into that sort of thing, I think there might be snack time out there. Paul and I will also hang out to answer questions for another couple of minutes after this, and then, if you're feeling frisky, we'll be talking about advanced scheduling and resource management right after the break.
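As a footnote to the debugging discussion above: the "flip it to EC2" trick amounts to rerunning the same task definition with a different launch type in the same hybrid cluster. A rough boto3-style sketch, with placeholder cluster and task definition names:

```python
# Sketch of running the same task definition under either launch
# type in a hybrid cluster. Names are placeholders.

def run_task_params(cluster, task_def, launch_type):
    """Parameters for ecs.run_task with a chosen launch type."""
    assert launch_type in ("FARGATE", "EC2")
    return {
        "cluster": cluster,
        "taskDefinition": task_def,
        "launchType": launch_type,
    }

# The managed run and the debuggable run differ only in launchType;
# on the EC2 host you can tcpdump to your heart's content.
normal = run_task_params("production", "my-app:1", "FARGATE")
debug = run_task_params("production", "my-app:1", "EC2")
```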
Info
Channel: Amazon Web Services
Views: 9,383
Rating: 4.818182 out of 5
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, Twitch, re:Invent, Containers, microservices
Id: ye3-gUwu9tI
Length: 46min 3sec (2763 seconds)
Published: Wed Feb 14 2018