Containers on AWS: An Introduction

Captions
All right, good morning everyone, can you hear me? Awesome. It's our honor and privilege to talk to you this morning about containers on AWS, and I'd like to be one of the first to welcome you to the Worldwide Public Sector Summit. We're really excited to have you here; there are lots of great sessions, and I hope you get to see all the ones you're interested in. My name is David Curt. I'm a Solutions Architect with Amazon Web Services, and I primarily support the federal government in moving their workloads to the cloud. I'll be co-presenting today with Kevin McCandless. Good morning everybody, my name is Kevin McCandless. I'm a Solutions Architect on the education team here at Amazon Web Services, and I'll be taking over the second half of the presentation today with Dave. Awesome, let's get started.

So, as I said, this is "Containers on AWS: An Introduction," and I want to level set: this is not going to be a deep dive into any particular technology. We're looking to touch base with folks who may just be interested in containers. Maybe you have some experience with containers, and that's okay, but we won't be deep diving into any one service or any particular technology. It really is an overview.

All good things start with an agenda. We'll begin with a containers overview for those of you who aren't familiar with containers or have only heard about them. If you're thinking about using containers, you really have to start thinking about what happens when you deploy those containers at scale, so we'll talk about container orchestration next. At that point I'll hand it over to Kevin, who will cover the AWS container landscape, the different services AWS provides for running containers in the cloud, and then the AWS Marketplace for Containers, an exciting offering that lets you take advantage of containers that our partners and other third-party vendors supply.

In this introduction to containers we'll touch on Docker, because it's a very common platform for running containers. There are certainly other container platforms out there, but Docker is the one we'll use today. First things first: what are containers, and why are customers using them? At its core, a container is just an isolation of processes. How many of you out there are developers? And how many of you would consider yourselves the system administration or infrastructure type? Okay, cool. Containers are really an evolution of virtualization technology. Virtualized hardware, virtual machines, is what we've been used to for many years, and there are many vendors that provide it. With a virtual machine you have a virtualized operating system, and you run an application along with the associated binaries and libraries it requires inside that virtual environment. Containers, on the other hand, sit a little further up the stack: instead of virtualizing the operating system, you virtualize the application and the associated binaries and libraries that the application needs to run.
The reason customers are using them is that they're super lightweight, they're super portable, and they make it very easy to distribute applications and to build applications at scale. In a nutshell, a container is a standard unit of software that packages up your code and all of its dependencies so you can deploy your application quickly. The other real benefit is that you can run it reliably across a number of different compute environments, which can be a challenge if you don't know exactly where your application is going to run. There are a lot of variables to consider, and containers help ease that challenge. How many UNIX veterans do we have out there? If you're not familiar with containers, you may be familiar with some of the technology they evolved from: back in the days of UNIX you had chrooted environments where you could isolate what a particular process or user could do, and containers are really just an evolution of that.

When we look at what kind of application we can run in a container, the sky's the limit, but here are some things to think about. The first thing our application needs is a runtime engine, whether that's the Java virtual machine, .NET, or Node.js; the runtime engine provides the instructions the code needs in order to run on the hardware. Then of course you have your code, which is self-explanatory: the application code itself. Then we have dependencies and configuration. If you're developing an application, it's easy to take a shortcut and hard code your dependencies. Your dependencies are all of the software packages and libraries that your code needs in order to run, and when we build applications that run at scale and in the cloud, we want to extract those dependencies rather than rely on the environment the application runs in happening to have the right packages or libraries. Best practice is to explicitly define your dependencies as part of the code package: if there's a particular library you need, you don't want to rely on that library being available in different environments. Finally, your configuration items are the differences between environments that you know are going to be there. When we develop applications to run in containers, we can think of these as environment variables that we pass to the container, for instance back-end service connection handles like the database it's going to connect to, or DNS names for back-end services. These are really good items to keep as separate configuration, and they shouldn't be hard-coded, because you want the container, with its code and dependencies, to be a discrete unit that can be deployed across many environments. If you bake configuration into the container it becomes very difficult to do that, and it forces you to hard code things later on. That matters a lot when you're running an application in a distributed fashion and at scale; a small sketch of the environment-variable pattern is shown below.
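As a rough illustration of that configuration pattern, here is a minimal Python sketch. The variable names (DB_HOST, DB_PORT, FEATURE_FLAGS) and their fallback defaults are invented for the example; the point is simply that the container image stays the same while each environment injects its own values.

```python
import os

# Configuration is injected by the environment (for example, `docker run -e DB_HOST=...`),
# so the same container image can run unchanged on a laptop, in staging, or in production.
DB_HOST = os.environ.get("DB_HOST", "localhost")      # hypothetical back-end DNS name
DB_PORT = int(os.environ.get("DB_PORT", "5432"))      # hypothetical port
FEATURE_FLAGS = os.environ.get("FEATURE_FLAGS", "")   # any other per-environment setting

def connection_string() -> str:
    # The code never hard-codes an endpoint; it only reads the injected values.
    return f"postgresql://{DB_HOST}:{DB_PORT}/app"

if __name__ == "__main__":
    print("Connecting to", connection_string())
```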
When I say different environments, here's what I'm talking about: your local development laptop or workstation will have all the software you need, but there's no guarantee the libraries and packages will be the same in your staging environment, your production environment, or your on-premises environment. They could be slightly different or vastly different: operating system patch levels, the actual packages, the version numbers, even the runtime engines can differ, and all of that presents challenges when you're trying to build an application that you'll deploy at scale in a distributed fashion. The more environments you have, the larger the opportunity for configuration and dependency drift.

How many of you have asked yourselves whether it will actually work over there? Nobody? I've asked myself that question; it happens. When we build and deploy an application we try our best to think through all the environments it will run in and the different software versions, and in a lot of cases we know what they are. Here's a great example: we know staging is at version 7, we know on-premises is the same, and we know production is a little further behind. We can test for that, we can develop for that, and we think it should work. And then it doesn't. Even when we have a good grasp of the different environments, the variability makes it difficult, and Murphy still rears his ugly head.

So how do we solve that? This is where a container platform, where Docker, comes to the rescue. Docker is a container platform that lets you run these discrete units of code on a platform that abstracts the underlying operating system resources from the container itself and brokers that information between the container and the OS. You can run Docker directly on a physical machine's operating system or on a virtual machine, so it's very flexible. Docker is a client-server environment that consists of the dockerd daemon (the Docker service), a REST API so you can interact with it programmatically, and a command line interface. The platform is easy to use and quite reliable, and the CLI commands are simple to learn, but it offers a lot of functionality: running containers in different kinds of environments, networking containers together, or keeping them completely isolated if you choose. When you use the CLI it actually calls the REST API, and that's important because if you need to communicate with remote servers running the Docker platform, you can manipulate that remote instance of Docker as well. A lot of users run their containers on the same system where they run the Docker service and the CLI, and that's fine too, but there's a lot of flexibility here. Docker gives you, the developer or the system administrator, a way to run the application in a container, and it manages those containers for you; a small sketch of driving Docker programmatically is shown below.
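To make the client-server point concrete, here is a minimal sketch using the Docker SDK for Python (the `docker` package on PyPI), which talks to the same REST API the CLI uses. The image name, environment value, ports, and resource limits are placeholders chosen for the example; `from_env()` honors `DOCKER_HOST`, so the same code can address a local or a remote daemon.

```python
import docker

# Connect to the daemon described by the environment (local socket or remote DOCKER_HOST).
client = docker.from_env()
client.ping()  # quick check that the daemon is reachable over the REST API

# Run a container, passing configuration as environment variables and
# capping the resources it may use on the host.
container = client.containers.run(
    "nginx:1.25",                                       # placeholder image
    detach=True,
    environment={"DB_HOST": "mydb.internal.example"},   # hypothetical config value
    mem_limit="256m",                                   # memory ceiling for this container
    nano_cpus=500_000_000,                              # roughly half a CPU core
    ports={"80/tcp": 8080},                             # map container port 80 to host port 8080
)

print(container.name, container.status)
container.stop()
container.remove()
```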
As we talked about, the container holds the code, the dependencies, and the runtime engine, all defined specifically in that container. Here's a quick look at the difference between a virtual machine and a Docker container. On the left you have the virtual machine: it runs on a physical server with a host operating system and a hypervisor, and the virtual machine itself carries a guest operating system. If you want to isolate an application with a virtual machine, that's how you do it, but that guest operating system can be expensive when you're talking about a distributed application that needs to run at scale. You can find lightweight operating systems to run your application on, but there's still a lot there, and just like with a container you still need the binaries, libraries, and packages your code requires, plus the application itself. On the right is the container, and this is where you see how lightweight containers can be, because Docker lets containers share the resources of the underlying operating system; all you have to do is make sure the container declares the appropriate dependencies and has the binaries and libraries your application needs. Because they're so lightweight, containers are very fast to start, very portable, and consistent, and a container is a discrete unit of software you can deploy across many different environments, which is a big advantage when you're building applications at scale.

Now, when we talk about a Docker container, the container itself is the running instance, the process that's actually executing. The way we get to a container is through an image. The image is a read-only template of everything the container requires to run, everything needed to instantiate it. It starts with a base image, and as you layer on dependencies, each one is created as a layer, which is really convenient when it comes time to update your image. In this example we start with an Ubuntu base image, which gives the application the basics of Ubuntu: some command line utilities we can work with when we interact with the container, and the basic processes the application needs. On top of that we add the runtime engine, in this case Node.js; that's a layer in the image that references its parent. Then we add NGINX, because we're also running a web server, and then our web application code as another separate layer. At the very top sits the running container. Containers are writable in the sense that once one is running you can interact with it and change what's happening inside, adjusting settings if you need to, but that's the only writable part. The image itself is immutable: aside from you deliberately updating the image, you can be assured that every container launched from it will be exactly the same. All of this is expressed in something called a Dockerfile, an easy-to-use, standardized template for building the image. The Dockerfile literally lists the commands the Docker platform will run to build the image, and the commands are simple things like COPY and RUN, so it's easy to use but gives you quite a bit of flexibility. A rough sketch of that layered build is shown below.
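As a rough illustration (not the talk's own demo), here is what that Ubuntu / Node.js / NGINX / application-code layering might look like as a Dockerfile, built and run through the Python SDK. The tag, the file paths, and the assumption that an `app/` directory with a `server.js` sits next to the Dockerfile are all invented for the example.

```python
import docker
from pathlib import Path

# Each instruction becomes a read-only layer; the comments inside the Dockerfile
# describe the layer that follows them.
dockerfile = """\
# base image layer: minimal Ubuntu userland
FROM ubuntu:22.04
# runtime layer: Node.js plus the NGINX web server
RUN apt-get update && apt-get install -y nodejs npm nginx
# application code layer (assumes ./app exists in the build context)
COPY app/ /opt/app/
# default configuration; can still be overridden per environment at run time
ENV PORT=8080
CMD ["node", "/opt/app/server.js"]
"""
Path("Dockerfile").write_text(dockerfile)

client = docker.from_env()
# The image is the immutable, layered template...
image, _build_log = client.images.build(path=".", tag="hello-web:1.0")
# ...and the container is a running instance launched from that template.
container = client.containers.run("hello-web:1.0", detach=True)
print(image.tags, container.short_id)
```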
Now, that challenge we had with four different environments: we can abstract it away by having one container environment, with Docker running in all of them. If I develop my application and it runs great in Docker on my laptop, and I'm running Docker across the rest of my environments, I can be reasonably assured it's going to run exactly the way it's supposed to. So the container benefits you get are, first, that it runs reliably everywhere, which we've seen through Docker itself and through the Dockerfile building a consistent container. Second, you can run different applications simultaneously. That's a benefit of virtualization in general, but now I can run many different containers on one system, and because containers are so lightweight I get a lot more density of applications on a single piece of hardware, which has real benefits for cost and resource utilization. And speaking of resource utilization, with containers you can dictate exactly how much memory and how much CPU each container is allowed to use, which lets you place containers strategically across your physical infrastructure to best serve your customers.

So containers, through their lightweight, portable nature, make it easy to build and scale cloud native applications. Now, running one container is easy, and maybe running ten containers is easy, but when you're talking about running containers at scale, hundreds or thousands of containers comprising hundreds of services, this can get out of hand really quickly. Containers allow for rapid scaling, so you can go from one or two containers to a hundred in no time, which is great for serving your customers, but you also need to know where those containers are being placed. Can you place them automatically? Can you look across your physical infrastructure, see how much of your resources are available, and place your containers there? Unless you're going to do that by hand, you'll want to automate it, and automation is going to be key, but there needs to be some service or tool that handles it for you. That's where container orchestration tools come in. There are a number of them: some of you may have heard of our service, Amazon Elastic Container Service, which Kevin will cover in more detail; Kubernetes is another very popular container orchestration platform; Docker Swarm is Docker's own; and there are also HashiCorp Nomad and Apache Mesos. These are just some of the options you have for orchestrating your containers at scale.
So what is container orchestration all about? It's really about managing the lifecycle of containers. When you instantiate a container, that's not the end of it: there's a very good chance it won't always live on the same physical infrastructure. It might need to be moved around, or it might become part of a load balanced pool of resources, and the orchestration tool handles all of that for you. It handles the provisioning and deployment of containers, their redundancy and availability, any load balancing you want across them, and scaling out and removing containers, so hopefully you can see there are a lot of activities that go into running containers at scale. You might also have a physical system, the underlying infrastructure, that is running out of resources, so you want to rebalance your containers or move them from one host to another, or the host simply shuts down on you, and you need your orchestration tool to handle moving those containers.

Since Kubernetes is really popular, I want to touch on it quickly. Kubernetes is a rapidly moving, open source container management platform. Because it's open source, there's a lot of excitement around it, a lot of developers are engaged in building it, and it moves quickly in terms of the features and functionality it provides. As a container management platform it helps you run your containers at scale and gives you the primitives for building modern applications. Kubernetes is managed by the Cloud Native Computing Foundation; Amazon Web Services is an important member of the CNCF and regularly contributes to Kubernetes development. We have thousands of customers running containers on AWS, and certainly thousands of customers using Amazon ECS, but we also have a lot of customers who use Kubernetes, love it, and want to run it on AWS as well, and we have options for them, whether they run it themselves on our Elastic Compute Cloud or through our managed Kubernetes service. Where you run Kubernetes matters: if the underlying platform can't scale well, it can certainly affect the user experience, so it's important to consider the cloud platform you run Kubernetes on. AWS is a great platform for Kubernetes because of our experience, our ability to scale, and our reliability, which helps keep the underlying infrastructure running, and your users get a great experience because the system and the application stay up.

Since 2014, AWS has launched more than 50 new features and services to help developers run containers in the cloud. At first it was just running containers; then we released services that let you manage and orchestrate those containers. Our mission remains to make AWS the best place to run any containerized application, and our goal is to remove the undifferentiated heavy lifting of underlying infrastructure management and container orchestration so you can get your new ideas out to your customers as quickly as possible. We want you to experiment, iterate, and innovate. And with that, I'm going to hand it over to Kevin to talk about the container landscape on AWS.
All right, thank you Dave. Everybody hear me all right? Can I get a thumbs up? Awesome. Thanks again for joining us this morning. Dave just gave us a great overview of containers in general, a lot of their benefits, and why so many customers are adopting them, so now I'm sure you're wondering how that actually maps to AWS and which AWS services you might use if you want to run containers in the cloud. This is what that landscape looks like at a high level. There are definitely many more services we offer that you'd probably use as part of your overall application architecture, but these are the core services you'll have to look at, think about, and choose between if you want to run a managed container environment on top of AWS.

Starting at the top, for the orchestration engine, the actual management plane for your containerized applications, our two primary managed services are Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Container Service for Kubernetes (Amazon EKS). We'll talk about both in more detail, but keep in mind they're meant to be the control plane for your containers: they handle deploying your containers, scheduling them to run, and placing them on the fleet of compute resources that actually runs them. For the hosting environment of the containers, the thing that actually gives them their CPU and memory, we also have a couple of options. The primary one is Amazon Elastic Compute Cloud, or EC2. Who here has used or knows EC2? I'm guessing most of you. That's our service for spinning up EC2 instances, which are really just virtual machines in a wide variety of types and sizes to suit virtually any workload, and you can use EC2 to host your containers. Or you can use AWS Fargate, which we'll talk more about; it's a serverless way to launch containers on demand. The last service we always mention with our container services is Amazon Elastic Container Registry, or ECR, which is a managed container image repository.

All of these services are designed and built to give you a platform to run your containerized applications on AWS, and like Dave said, we don't just want to be another place where you can run containers, we want to make AWS the best place to run containers. We want to make it easy not just to get started and run a few containers, but also to scale your applications out to hundreds and thousands of containers if you need to. At the same time, all of these services have great native integrations with a number of other AWS services, things you would expect like CloudWatch, CloudTrail, IAM, VPCs, and load balancing, and they pair very well with our continuous integration and continuous deployment tools, so you can build very robust CI/CD pipelines for your containerized applications. And all of this is still built around that core concept of Docker and containers, so you still get all the benefits of containers when you use these services: the portability containers give you, the control you have over your application when it runs in containers, and the rich ecosystem of third-party and partner tools that has already grown up around containers.
You don't just have to take our word for it, either. The Cloud Native Computing Foundation did a study and found that 63% of companies running container workloads run them on AWS today, using services like the ones I just mentioned and many more. And they're doing it at every scale, from small dev and test environments where people are just experimenting, all the way up to enterprise scale, mission critical applications.

Diving in now: Amazon Elastic Container Service, or Amazon ECS, is our own container orchestration service. As Dave mentioned, running one container is easy, but once you scale out to hundreds or thousands of containers you definitely want some sort of management engine in place. ECS acts as that control plane: it handles container level networking, places containers on your fleet of compute resources, and schedules containers to run at certain times or in response to certain events, while still giving you deep integration with the rest of the AWS platform. ECS integrates with services like CloudWatch for metrics and monitoring and with Elastic Load Balancing, so you can load balance across several containers running on ECS. There's also an ECS CLI, so you can interact with ECS from the command line rather than doing everything through the console, and like a lot of AWS services, ECS is available in a number of regions around the world, so you can deploy your containers globally if you need to.

An important thing to keep in mind, again, is that ECS is just the control plane, the management engine. The ECS service itself is responsible for scheduling and orchestrating your containers and placing them on your underlying fleet of compute resources. For the actual hosting of your containers I mentioned two options before, EC2 and Fargate. Talking about EC2 first: each of the boxes at the bottom of the diagram could be an EC2 instance. You provision an instance just like any other EC2 instance, add it to your ECS fleet, your ECS cluster, and once it's part of that cluster, ECS can place running containers onto it. What's great about EC2 instances is the flexibility and very fine grained control you get over the underlying infrastructure powering your containers: you choose the exact type and size of instance you want, you can have an instance with a GPU if you need one, you can get into the operating system of the instance if you need to, and you have the option of using EC2's various pricing models, whether that's On-Demand, Reserved, or Spot Instances. But with that increased control comes an increased burden. If you're using EC2 instances with ECS, you're still responsible for managing the instances as well as the containers themselves: the operating system, the Docker and ECS agents, and any other software and packages on the instance. You're responsible for patching and upgrading the OS and the agents, and for making sure you scale your fleet properly so you have enough compute capacity across your EC2 instances to run and scale as many containers as you need. So while ECS gives you that nice managed control plane and you keep a lot of control over the underlying compute infrastructure, you do have to manage a bit more with EC2. A rough sketch of wiring an instance into a cluster is shown below.
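As a rough sketch of that EC2 launch type setup (not taken from the talk), the following uses boto3 to create an ECS cluster and boot an instance whose ECS agent joins that cluster. The AMI ID, instance profile name, and cluster name are placeholders; in practice you would use the current ECS-optimized AMI and an instance role that carries the ECS instance policy.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the logical cluster that ECS will schedule containers into.
ecs.create_cluster(clusterName="demo-cluster")

# 2. Point the ECS agent on the instance at that cluster via its config file.
user_data = """#!/bin/bash
echo ECS_CLUSTER=demo-cluster >> /etc/ecs/ecs.config
"""

# 3. Launch an instance from an ECS-optimized AMI; once the agent starts,
#    the instance registers itself as capacity in the cluster.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                 # placeholder: ECS-optimized AMI for your Region
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "ecsInstanceRole"},  # placeholder instance profile
    UserData=user_data,
)
```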
So we had a lot of customers who said, you know, ECS on EC2 works great, but I really don't need that much control. All I want to think about is the container itself; I don't need to pick a specific instance type and size, I don't need all that flexibility and control, I just want to worry about the container and let you handle the details. That's where AWS Fargate comes in. Like I mentioned before, Fargate is our serverless way to launch containers, where you can quickly launch containers on demand without having to worry about the underlying infrastructure. With Fargate you basically just tell ECS, I want to launch this container using Fargate instead of on my EC2 instances, and the Fargate service automatically takes care of the underlying compute resources powering that container. That means there's no infrastructure for you to manage; you can think about and manage everything at the container level directly. It also means it's really quick to launch new containers and easy to scale them, because you don't have to worry about having enough EC2 capacity or having instances provisioned; you just say launch this with Fargate and the service takes care of the rest. It also has a very effective pricing model, where you pay based on how long the container runs and how much CPU and memory you requested for that container's task. So this gives you a fully managed experience, because there are no EC2 instances to worry about; it's automatically elastic, because you can scale up and down seamlessly without scaling an EC2 fleet yourself; and you're really only paying for what you use, because there's no EC2 instance sitting provisioned and billed while running no containers. You pay only when you actually launch containers and they're running on Fargate. And it still maintains deep integration with the AWS ecosystem: VPC networking, Elastic Load Balancing, IAM, CloudWatch, CloudTrail, and others. By combining ECS with Fargate you really get that fully managed experience, where the control plane is managed by the ECS service and the underlying compute infrastructure that powers your containers is taken care of by Fargate, and all you have to worry about is building your container images and launching the containers themselves. A minimal sketch of what that looks like from the API is shown below.
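As a minimal sketch (the role ARN, image URI, subnet ID, and cluster name are placeholders), here is what registering a task definition and launching it on Fargate can look like with boto3:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Describe the container to run: image, CPU/memory, ports, and configuration.
ecs.register_task_definition(
    family="hello-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",     # 0.25 vCPU
    memory="512",  # MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-web:1.0",  # placeholder ECR image
        "portMappings": [{"containerPort": 8080}],
        "environment": [{"name": "DB_HOST", "value": "mydb.internal.example"}],  # config as env vars
    }],
)

# Launch one copy on Fargate; no EC2 instances are involved.
ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="hello-web",
    count=1,
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
        "assignPublicIp": "ENABLED",
    }},
)
```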
So taking a step back, now that we know those two options for ECS, the end-to-end workflow looks something like this. You need some place to store your container images; we'll talk a little more about Amazon Elastic Container Registry, but that's where you store the images of the containers you want to launch. Then you create an Amazon ECS cluster, and within that cluster you define what are called tasks and service definitions, which are the mechanism for telling ECS: I want to launch this specific container image, with this much CPU and memory, with these network settings, and all the other configuration details the service needs to know. Once you have those definitions, you launch running containers either onto EC2 instances or via AWS Fargate, and then ECS manages things at the container level, automatically scaling your containers and effectively placing them on your fleet of compute resources.

The other container management service, as opposed to ECS, is Amazon Elastic Container Service for Kubernetes, or EKS. Dave touched on Kubernetes before: it's a super popular open source container orchestration system, and Amazon EKS as a container orchestration service is meant to solve a lot of the same problems ECS solves, container level networking, placing containers, scheduling containers, and scaling containers. The key differentiator with EKS versus ECS is that it's built on 100% upstream Kubernetes. That means you can use the same Kubernetes APIs with EKS, and you have access to the same Kubernetes ecosystem and tooling you'd have if you were running Kubernetes in another environment today, while still getting the benefits of a managed control plane. The EKS service automatically provisions and manages the master nodes that power Kubernetes behind the scenes, while also giving you deep integration with other AWS services, again things like CloudWatch, VPC networking, and load balancing. And again, you don't just have to take our word for it: in the same Cloud Native Computing Foundation study, they found that 51 percent of companies running Kubernetes run it on AWS, so customers aren't just choosing us for ECS, Fargate, and our native offerings, they're choosing us for managed Kubernetes as well.

The workflow for EKS is relatively similar to ECS, but with Kubernetes specific terminology. First you provision an EKS cluster; when you do, the EKS service automatically manages the underlying master nodes that power it. Then you deploy worker nodes and add them to your EKS cluster; those worker nodes are the actual compute infrastructure that runs your containers, while the master nodes handle the control plane and the orchestration. Then you connect to EKS, and because it's built on Kubernetes you can use the same Kubernetes tooling you're already accustomed to, and finally you start launching your Kubernetes applications. A small sketch of that last step is shown below.
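As a rough illustration of that last step (and not specific to EKS), here is what launching an application through the Kubernetes API can look like with the official Kubernetes Python client. It assumes your kubeconfig already points at the cluster, for example after running `aws eks update-kubeconfig`; the deployment name, image, and resource numbers are made up.

```python
from kubernetes import client, config

# Load the local kubeconfig; for EKS this is typically written by
# `aws eks update-kubeconfig --name <cluster>` (assumed to have been run already).
config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator keeps three copies running and replaces failed ones
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-web:1.0",  # placeholder image
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "250m", "memory": "256Mi"},  # hints the scheduler uses for placement
                    ),
                )
            ]),
        ),
    ),
)

# The worker nodes supply the capacity; the managed control plane decides where these pods land.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```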
If you're familiar with the architecture of Kubernetes, this is really what the service is managing for you. When you create a cluster in EKS, the service automatically provisions and manages multiple master nodes, along with the etcd that goes with them, to power Kubernetes and act as the control plane, and it automatically provisions and manages those across multiple Availability Zones within a Region. So you get a highly available control plane without having to set up or worry about provisioning it yourself, but you are still going to have, in your own account, the worker nodes that provide the actual compute capacity for your containers. Using EKS, you create the cluster, it gives you a cluster endpoint, and then you add worker nodes to that cluster; again, those can be EC2 instances in your AWS account, and we recommend spreading them across Availability Zones. Once the cluster is created and ready to go, you can connect to it with your favorite Kubernetes tooling, whether that's kubectl or whatever else it might be, and start using it to launch applications.

Now, you're probably wondering: that's great, that all makes sense, but how do I actually choose between these options? If you want to run managed containers on AWS, there are a couple of choices to make. The first is which orchestration tool you want to use, and again our two managed services there are Amazon ECS and Amazon EKS. One important thing to keep in mind: if you're already using Kubernetes today in another environment or on premises, or you have a very specific reason to use Kubernetes and you want to keep using existing Kubernetes tooling, you might gravitate toward EKS. But if you're not married to Kubernetes and you really just want a quicker, more seamless way to get started, ECS might be the way to go, simply because ECS has been out as a service longer than EKS, so it's a little more mature in the AWS ecosystem. Another key differentiator, as you can see on the slide, is that if you want to use Fargate, you currently have to use ECS, though that support is coming soon for EKS as well. Once you've selected the orchestration tool, you then pick the launch type, the compute environment that actually runs the containers themselves, and your options there are Amazon EC2 or AWS Fargate. If you want a lot of control and the most flexibility over the compute environment, EC2 is the way to go, because you're explicitly selecting the EC2 instances that make up your fleet; but if you want the more managed experience, where all you think about is the containers and not the underlying infrastructure, Fargate is the quicker way to get started.

Now, a couple of other services to call out that you might use with these container services. The first, which I already mentioned, is Amazon ECR, the Elastic Container Registry; again, this is a managed container image repository. It has deep integration with our container services, ECS and EKS, and it also integrates with the Docker CLI, so you can take a Dockerfile, build a container image from the CLI, push it directly into Amazon ECR, and then deploy it from ECR via ECS or EKS. Being a managed service, it's automatically scalable and highly available for you. A small sketch of that push workflow is shown below.
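As a rough sketch of that build-and-push flow (the account ID, Region, and repository name are placeholders, and the repository is assumed to exist already), here is how the Docker SDK and boto3 can be combined: ECR hands out a short-lived registry credential, the local image is tagged with the repository URI, and the layers are pushed.

```python
import base64
import boto3
import docker

REGION = "us-east-1"
REPO = "hello-web"  # assumed to exist already (e.g., created with ecr.create_repository)

ecr = boto3.client("ecr", region_name=REGION)
auth = ecr.get_authorization_token()["authorizationData"][0]

# The token is base64-encoded "AWS:<password>".
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"]  # e.g. https://123456789012.dkr.ecr.us-east-1.amazonaws.com

dockercli = docker.from_env()
dockercli.login(username=username, password=password, registry=registry)

# Tag the locally built image with the ECR repository URI and push it.
repo_uri = registry.replace("https://", "") + "/" + REPO
image = dockercli.images.get("hello-web:1.0")  # image built earlier from the Dockerfile
image.tag(repo_uri, tag="1.0")
for line in dockercli.images.push(repo_uri, tag="1.0", stream=True, decode=True):
    print(line)
```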
Now, you might use ECR as part of your CI/CD pipeline. As a lot of customers and companies move to adopt containers, it's often part of a larger push to adopt DevOps and CI/CD and move to microservices. If you want to build a CI/CD pipeline on AWS, you can use a number of different services. In this example, you use AWS CodePipeline to create the pipeline itself and orchestrate all the different steps in it. As the source step, you might use AWS CodeCommit, our managed service for Git based repositories, so you can store your Dockerfile and all of your application code and components in CodeCommit. Whenever new code is committed to that CodeCommit repository, CodePipeline can automatically trigger AWS CodeBuild to run; CodeBuild is our managed build service, where you can build your code without having to provision a server to do it. In CodeBuild you can have a job that takes your source code from CodeCommit, takes that Dockerfile, automatically builds it into a container image, and pushes the image into Amazon ECR. Once that CodeBuild job completes, CodePipeline triggers ECS to pull the new container image from ECR and update your running application. This isn't exclusive to ECS, either; you can build a very similar pipeline with EKS. Again, we have all the tools and services there to help you build a very robust CI/CD pipeline for these services.

A couple of newer services to point out as well, for service mesh and discovery, that you might want to use with your containerized applications. The first is AWS App Mesh, a service mesh for application level networking. It's designed to give you consistent visibility and traffic control over the services in your application, which might be running in different compute environments. It works across clusters and container services, so you might have one service running in ECS, one in EKS, and one on EC2, and you can use App Mesh to overlay all of those services and get one place for consistent visibility and traffic control across that variety of compute environments. It's an AWS managed service, so it's AWS built and run, but it uses the open source Envoy proxy, which means it integrates with the many partner and third-party tools that also integrate with Envoy. For service discovery, another newer service is AWS Cloud Map, a managed cloud resource discovery service. In Cloud Map you create custom names for your application resources, whether that's a database, application servers, a queue, or even an S3 bucket, and the Cloud Map service automatically tracks the most up-to-date, healthy location of each resource. As you change versions of a resource, or it becomes unhealthy and gets replaced, Cloud Map keeps mapping that custom name to the actual location of the resource, so every time one of those endpoints changes you don't have to go back into your application code and update it; you do all of your service discovery and naming through the Cloud Map service instead. A small sketch of looking up a service that way is shown below.
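As a rough sketch (the namespace and service names are placeholders), an application can ask Cloud Map for the current healthy endpoints at request time instead of hard-coding them:

```python
import boto3

sd = boto3.client("servicediscovery", region_name="us-east-1")

# Ask Cloud Map for healthy instances registered under a custom name.
resp = sd.discover_instances(
    NamespaceName="prod.local",  # placeholder namespace
    ServiceName="orders",        # placeholder service name
    HealthStatus="HEALTHY",
)

for inst in resp["Instances"]:
    attrs = inst["Attributes"]
    # Registrations typically carry address attributes such as AWS_INSTANCE_IPV4 / AWS_INSTANCE_PORT.
    print(inst["InstanceId"], attrs.get("AWS_INSTANCE_IPV4"), attrs.get("AWS_INSTANCE_PORT"))
```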
Now, a final callout: the AWS Marketplace for Containers. If you aren't familiar with our Marketplace in general, it's an online software catalog where our partners and third parties can list different software offerings. Some of the offerings might be pre-configured Amazon Machine Images, and some might even be client applications or SaaS subscriptions. With the Marketplace you can quickly search for, procure, and deploy these software packages, and it's all integrated with your AWS bill, so it simply shows up on your bill at the end of the month. What's really exciting about the Marketplace for Containers is that we have now added Docker compliant containers as a new fulfillment option on the Marketplace. That means sellers can create a custom container image, and we ingest and scan it into the Marketplace, so if you're using some third-party or partner tool that runs in a container and they've published it on the Marketplace, you can just search the Marketplace, find that container, and quickly deploy it directly to ECS or EKS in your own account. Again, it's still integrated with your AWS bill, and a lot of the Marketplace listings have different pricing models as well: some are free, some have a free tier, some are BYOL, and some have usage-based pricing where you pay by the hour.

So what's next? I know we went over a ton of different services and information today, and I don't expect anyone to be an expert in any of this by any means; hopefully you just have a good idea of which services to go look at and start researching and playing with. First, if you want to learn more about containers here at the summit, we do have more sessions going on. I know there are a few tomorrow; if you search in the app you should be able to find them. We have one dedicated to CI/CD with containers, one dedicated to the Elastic Container Service for Kubernetes, a chalk talk dedicated to container security, and a couple of other sessions. Some other resources to point out: our Containers on AWS web page has links to a bunch of great resources; as with any other AWS service, we have a ton of documentation for all of our container services, as well as a bunch of blogs about containers, everything from technical how-tos to use cases to case studies; and we have a ton of videos from past summits like this one, from re:Invent, and from webinars that dive a lot deeper into specific services. A couple of workshops I also want to point out are ecsworkshop.com and eksworkshop.com, which were developed by Brent Langston, a developer advocate here at Amazon Web Services. These are great if you want a guided, hands-on tour through ECS and EKS, or if you just want to see what that workflow looks like end to end; I highly recommend them, along with the Awesome ECS project published by Nathan Peck, another developer advocate here at AWS, which is a curated list of resources, tools, and guides for our container services. It's great to bookmark and reference as you go on your journey of learning more about containers on AWS. Otherwise, that's all we've got for you. Dave and I will be out in the hallway for questions after this, but thank you so much again for choosing us to kick off your 2019 Public Sector Summit, and please enjoy the rest of your time here in DC. [Applause]
Info
Channel: Amazon Web Services
Views: 22,713
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud
Id: kBi-s3eV2Ec
Length: 48min 35sec (2915 seconds)
Published: Fri Jun 21 2019