Moving from Monolith to Microservices with Amazon ECS

Video Statistics and Information

Captions
Hey everyone, my name is Nathan Peck and I'm a developer advocate for Amazon Elastic Container Service. In this video I'll show you how to use Elastic Container Service to break a monolithic application into microservices. We will start out by running a monolith on ECS, then we'll deploy new microservices side by side with the monolith, and finally we'll divert traffic over to our new microservices with zero downtime.

To start, you might be wondering: what is a monolith vs. microservices, and why might we want to migrate from one to the other? A monolith is an application that is a single unit of deployment handling multiple types of business capability, usually all tightly coupled. On the other hand, microservices take each core business capability and deploy it as its own separate unit which performs only a single function. Choosing a monolithic or microservice design for your application has benefits and drawbacks at different stages of a software product's lifecycle, but in general it's often easier to develop a monolith for a new project that isn't fully fleshed out. Then, as the software becomes more fully featured, microservices allow teams to organize their code along the logic of the business in a way that allows different components of the system to be developed and scaled independently.

Many companies go through this process: recognizing that their core code base is growing complex as they begin to have issues adding new features or extending existing functionality, they realize that they need to split some of the functionality out into its own service. But this shift must be handled carefully, especially if you have customers using your application and you want to upgrade your architecture without causing interruptions.

To demonstrate how to do this type of migration, let's look at an example monolithic application. This application is going to be a small REST API for a forum. As you can see, this application is a typical
monolithic application: it's serving a bunch of different RESTful API routes, and there are three different top-level classes of RESTful object being handled: users, threads, and posts. This is a very typical setup for a monolith, with one codebase handling all three types of requests for all three features.

First, let's verify this application works by running it on my local machine. We see a message that says the server is ready, so now let's make a few requests to make sure that the server responds. It looks like this app server is functional, but right now it just runs locally; we need to actually package it up for deployment, and that's where Docker comes in. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run there.

Here's a Dockerfile I've previously prepared for this application. You can see that it's fairly simple: it starts from a base image that contains my specified version of Node.js, then it has a few commands to copy my application code into the container and install external dependencies using npm. I can use this Dockerfile to construct a Docker container image by using the docker build command. The command that I just typed tells Docker that I want to run the container image in detached mode and that the container should receive any traffic that I send to port 3000 on my localhost. After I launch the container, my application is once again running on my local machine, but this time it's actually running in a Docker container. I can still access the application just like I did when I was running the app directly on my host machine, and you can see that there's a response just like there was when I was running the application directly on my host. So the next step is to take this application container that's running on my local
machine and get it running in the cloud.

First I'm going to create an Amazon ECR repository in the AWS console. This repository will serve as a centralized place for all my container images: each time I modify the application, I can build a container image to capture a snapshot of the entire application environment and upload it to the repository. I see a success message that says I've successfully created a repository, along with a list of commands that I can now use to upload my image to that repository. Now that my repository is created, I need to give my local machine a login so it can upload to that repository. Okay, login succeeded. Now I can tag the container image that I had built, and with the Docker container tagged, I can push it to the registry on the latest tag. Now that my image is stored in the repository, it can be pulled back down and run wherever I need to run it, including on Amazon Elastic Container Service.

In order to run the container on Elastic Container Service, I have to do a little bit of setup, though. I've prepared a CloudFormation template to set up a fresh VPC, a cluster of Docker hosts, and a load balancer, and I'm going to launch that stack now using a CLI command. While I wait for my CloudFormation stack to launch, I can go ahead and create a task definition for my application. A task definition is simply a list of configuration settings for how to run my Docker container: this is where I tell ECS what container image to run, how much CPU and memory the container needs, and what ports it listens for web traffic on, among other things. I'm going to use the console to create a task definition for the container image that I just uploaded. I click Create new Task Definition, enter a name for the task definition, and click Add container. I name the container, specify the image URL and tag, how much memory the image needs to run, and then the port that the container will
receive traffic on. I click Add to add that container to the task definition, and Create to create this new task definition.

Now that the task definition is created, I can launch it as a service. This is basically a way to tell ECS: run one or more copies of this container at all times, and connect the running containers to a load balancer. To do this, I select the task definition, click the Actions menu, and click Create Service. I name my service, specify how many copies of my container I want to run, and click Next step. Now I need to select a load balancer to put in front of my application. I'm going to select the Application Load Balancer type, because this type of load balancer is especially well suited for an HTTP REST API. You can see that it has already selected a load balancer that I created for this demo; I just need to add my container to that load balancer, and I do that by clicking the Add to ELB button. I'm going to create a new listener on this load balancer which receives the client traffic on port 80 and forwards it to my container. I name a target group, which is going to be a list of the containers that are running across my cluster, and I click Next step. I'm not going to configure auto scaling in this demo, so I just review the settings that I specified and click Create Service.

After a few minutes everything turns green, and I can click View Service to view my newly created service. I can see that there are two running tasks, which represent two running copies of our monolith container, and they've turned green to show a running status, which means they're up and running. These containers have been connected to an Application Load Balancer. If I go to the CloudFormation stack that I launched earlier, I can locate the URL of the load balancer and make a request to that address to verify the service is up and running. So at this point I have my classic-style monolithic application up and
running in Amazon Elastic Container Service, but I'm not done yet: my goal is to take this monolith and split it up into microservices.

If we look back at the code base for the monolith, you can see the HTTP routes relating to users, threads, and posts. A sensible way to split this application up into microservices would be to create three microservices: one for users, one for threads, and one for posts. Here's what that code might look like. As you can see, it's very similar to the monolithic code, but instead of serving all three different types of RESTful resources, it only serves the HTTP routes that relate to one type of resource: the posts. So what I'm going to do is repeat the steps that I did to deploy the monolithic application, but instead I'll build and deploy three different microservices that will run in parallel with the monolith.

First up, I create three new repositories for the three services. Now I'll build and push each container image to these new repositories. Next, I create a task definition for each of these repositories and turn each of those task definitions into a running service. As you can see, I configure the load balancer for each service to bind to a sub-path of the RESTful route that is specific to that service, and once again I see each service launch with a couple of tasks.

If I make some web requests to the load balancer, I still get the exact same responses that I did before, but what's been happening behind the scenes is much different. If I view the listener rules for this load balancer, I can see four rules: there's a default rule which sends traffic to my monolith, but above that there are three other rules which divert specific paths to my microservices. Based on the priority of these microservice rules, all the traffic that matches these paths is being sent to the microservices instead of the monolith. So I can actually shut down the monolith without impacting service
availability, and that's what I'm going to do right now. I update the number of tasks to zero to tell ECS to remove all of them, then I click View Service, and I see that my service's tasks disappear. As you can see, I can still send traffic to my load balancer, and all the same paths that used to work still work just like they did before. I have successfully migrated all my traffic over from a monolith to microservices without downtime or a single dropped request.

The same approach can be applied to any application behind an ALB, but it works especially well in conjunction with Amazon Elastic Container Service because of the automated configuration of your target groups as you deploy the new services. If you want to try this out for yourself, you can follow the steps in the getting started / container microservices tutorial on the AWS website. Thanks a lot for watching this demo, and I wish you the best of luck in your own microservices adventures on AWS.
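The console and CLI steps narrated above can be sketched end to end as shell commands. This is a minimal sketch, not the video's exact commands: the image name, account ID, region, cluster name, base image version, and all ARNs are placeholder assumptions; the video performs several of these steps in the console; and `aws ecr get-login-password` is the newer CLI's replacement for the `aws ecr get-login` flow that was current when the video was published.

```shell
#!/bin/sh
# Sketch of the migration workflow. All names, IDs, and ARNs are placeholders.

# 0. A Dockerfile along the lines described in the video (details assumed)
cat > Dockerfile <<'EOF'
FROM node:8
WORKDIR /srv/app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF

# 1. Build the monolith's image and run it locally, detached, on port 3000
docker build -t api-monolith .
docker run -d -p 3000:3000 api-monolith
curl localhost:3000/api/users        # smoke-test a route

# 2. Create an ECR repository, log Docker in to it, then tag and push
aws ecr create-repository --repository-name api-monolith
REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin "$REGISTRY"
docker tag api-monolith:latest "$REGISTRY/api-monolith:latest"
docker push "$REGISTRY/api-monolith:latest"

# 3. Register a task definition: image, memory, and the container port
aws ecs register-task-definition \
  --family api-monolith \
  --container-definitions '[{
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api-monolith:latest",
      "memory": 256,
      "portMappings": [{"containerPort": 3000}]
    }]'

# 4. Run two copies as a service, attached to the load balancer's target group
aws ecs create-service --cluster demo --service-name api-monolith \
  --task-definition api-monolith --desired-count 2 \
  --load-balancers "targetGroupArn=$MONOLITH_TG_ARN,containerName=api,containerPort=3000"

# 5. After deploying the users/threads/posts services the same way, add
#    path-based listener rules that match before the monolith's default rule
aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority 1 \
  --conditions Field=path-pattern,Values='/api/users*' \
  --actions "Type=forward,TargetGroupArn=$USERS_TG_ARN"
# ...repeat with priorities 2 and 3 for /api/threads* and /api/posts*

# 6. Finally, drain the monolith by scaling its service down to zero tasks
aws ecs update-service --cluster demo --service api-monolith --desired-count 0
```

The listener-rule priorities are what make the cutover safe: the three path rules match ahead of the default rule, so by the time the monolith is scaled to zero in the last step it is no longer receiving any traffic.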
Info
Channel: Amazon Web Services
Views: 35,093
Rating: 4.9551191 out of 5
Keywords: AWS, Amazon Web Services, Cloud, cloud computing, AWS Cloud, container, microservices, docker, Amazon ECS
Id: _ep_yKuDWkE
Length: 14min 14sec (854 seconds)
Published: Fri Jan 05 2018