Deploy Your Containerized App With Docker Swarm | Scalable App Deployment

Captions
Docker, containers, container images, container orchestrators, Kubernetes... if you're trying to figure out how to deploy your application to the cloud, you've probably heard some of these terms and are working out how they might apply to your system. In this video I'll demonstrate one method for deploying containerized applications to the cloud: deploying into a Docker Swarm cluster running on virtual machines from Linode.

Hi, my name is Sid, also known as DevOps Directive here on YouTube. I'm a developer advocate working with Linode. If you want to follow along with today's tutorial, there's a link in the description that will give you $100 of credit when you create a new account; that should be more than enough to get started. Without further ado, let's get into it.

So what is the sample application we'll be deploying today with Docker Swarm? It's a relatively simple three-tier web application. It has a React client on the front end that makes requests to two different APIs, one written in Go and one in Node.js, which in turn query a PostgreSQL database and retrieve the current timestamp. That timestamp is passed back through the APIs to the front end and displayed, so each refresh shows an updated timestamp from the database. The application is meant to be as simple as possible while still showing the kinds of configuration you would have in a real-world application. Today's video focuses on the deployment of this application, but if you want to learn more about how it was built, I have a much longer course about Docker and containers that you can check out on my channel, DevOps Directive. There's also a companion GitHub repo with all the source code I'll be using today, and I'll showcase some of its different aspects along the way.
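To make the request flow concrete, here is a hypothetical, heavily simplified sketch of what each API does; the endpoint name, framework, and port are illustrative, not the actual course code:

```js
// Hypothetical sketch: query Postgres for the current time, return it as JSON.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // the real app reads connection info from env vars / secrets

app.get('/api/now', async (_req, res) => {
  const { rows } = await pool.query('SELECT NOW()');
  res.json({ now: rows[0].now }); // the front end displays this timestamp
});

app.listen(3000);
```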
Before I jump into the Linode console and start provisioning infrastructure, a little background on containers and container orchestrators. Containers are a mechanism to bundle up an application alongside its binaries and libraries, that is, its dependencies, so it is much easier to deploy onto a variety of systems. A container shares the underlying operating system and the virtual or physical hardware while providing an isolated environment and a standard interface for deployment. There are a number of container platforms available; the most popular, which you probably recognize, is Docker, and Podman is an alternative. The piece of software that runs container images on a host is called the container runtime. Docker includes one, but there are also containerd, CRI-O, and others that adhere to the standard specification and can take a container image and run it with your specified configuration.

If you're working with a single virtual machine, a platform like Docker by itself is great: you can start containers, stop containers, route traffic to them via private networks, and so on. However, when your application starts to scale and you need more than one machine, you want to deploy containers across multiple virtual machines, and that's where container orchestrators come in. Examples include Kubernetes (on Linode there's Linode Kubernetes Engine), HashiCorp's Nomad (which can orchestrate containers as well as other workloads), and Docker's built-in Swarm mode. Swarm mode lets you create a cluster of machines and run workloads across them, with Docker managing those applications across the multiple machines. As the diagram shows, we have multiple virtual machines with a container orchestrator installed, and many different applications deployed across them in isolated containers. What we'll do today is create two Linodes to serve as virtual machine one and virtual machine two, install Docker, initialize the swarm, and then deploy our application across those two machines.

Within the companion GitHub repo there are a number of folders corresponding to different modules of the course. In module 6 we have Dockerfiles for the different services I described. For example, for my Go API I'll look at the final Dockerfile; I won't explain it in detail here, but essentially it brings in my source code, installs the dependencies, and bundles everything up as a binary that ships inside the container image. I use this Dockerfile to create a container image that gets pushed to Docker Hub, and that's what Docker Swarm will pull from when deploying the application.

Because this application uses a PostgreSQL database, we can use Linode's managed database service to provision one for us. Under Databases, I click "Create Database Cluster", call it docker-swarm-demo, choose PostgreSQL 14.6, and put it in the US East region. Because this is just a demo, I choose one of the least expensive plans and a single node, since I'm not worried about high availability here. Eventually I'll add the IP addresses of my Linodes so we can access the database from those machines. I click "Create Database Cluster"; this takes a while to provision, which is why I did it first. We'll let it run in the background while we set up the rest of the infrastructure.

The other two main pieces of infrastructure I need are two Linodes. For the first one I use Ubuntu 22.04 as the operating system and put it in the same region as the database to minimize request latency. I choose the Linode 2 GB shared CPU plan, name it swarm-0, create a strong password, and add an SSH key so I'll be able to log into it. I click "Create Linode" and it goes off and provisions. I then create the second Linode with the same settings (Ubuntu 22.04, US East, Linode 2 GB shared CPU), call it swarm-1, create a password, add my SSH key, and click "Create Linode" so it can provision in the background as well.

With both Linodes running, I connect to them via SSH: the username is root at the machine's IP address; I accept the host fingerprint, and now I have a session inside the virtual machine. Docker is not installed on these machines by default, but there's a useful script at get.docker.com that installs it onto these systems very easily. I curl that script and pipe it into a shell; this downloads the script onto the machine and executes it, installing Docker Engine, which includes the container runtime as well as the Docker Swarm orchestrator built in.
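The install step is just the following, run on each Linode (piping a remote script straight into a shell means trusting that script, so inspect it first if you prefer):

```bash
# Download and execute Docker's convenience install script
curl -fsSL https://get.docker.com | sh
```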
I create a new terminal and log into the other machine so it can install at the same time. Running the docker version command, I can see Docker is now installed and active on the system.

At this point I'll initialize the swarm on one of the virtual machines and then connect to it from the other. One of the nice things about Docker Swarm as a container orchestrator is that it's extremely easy to set up, much simpler than some of the other options. Here on machine swarm-0, on the left, I just run docker swarm init, and we've now turned on Swarm mode for this machine. Then on the other system I issue the join command that was provided, and now swarm-1 has joined the swarm and I have a two-node cluster. Running docker node ls, I can see the two nodes in the cluster; the star marks the machine I'm currently on, which is the leader, while the other node joined as a worker.

Now that I've initialized these two machines, I'll exit the SSH sessions and instead export the DOCKER_HOST environment variable, pointing it at swarm-0's IP address. By exporting that variable, the Docker CLI installed on my local laptop connects to the remote host and behaves as if I were running the commands on that machine directly. Now if I run docker node ls locally, the command is sent to the remote host and executed there.

In module 12 of the companion repo, I have a Docker Swarm configuration already built out. At the top we list the version, which tells Docker which syntax to use when parsing the configuration, and then each of our services is listed: the front end (the React application, built and served via an nginx container), the Node.js API, the Go API, and the database. If I deployed this as-is, the database container would run inside the cluster, but instead we decided to host it on Linode's managed database service, so I'll comment out that portion of the configuration. That means we no longer need its volume, so I comment that out too, along with the password secret that was only consumed by the database when it was being created. We still need the secret holding the database URL that the APIs will use to connect; to build it I'll need to grab the database password from the UI once the database finishes provisioning.

While we wait, let me talk through the other elements of this configuration. For each service we specify an image: the container image I built using those Dockerfiles, pinned to a specific tag. On my Docker Hub profile you can see I have 13 repositories; the three we're deploying from are the Go API, the Node.js API, and the React client served by nginx. For each, I ran a docker build, which takes the Dockerfile plus my source code and bundles them into a container image; a docker tag to associate a specific tag with that image; and finally a docker push to push the image to Docker Hub.
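Roughly, that build-and-push sequence looks like this (the Dockerfile filename and tag scheme here are illustrative; in the repo a Makefile derives them from a number N, and the --tag flag covers the tagging step):

```bash
docker build \
  --file Dockerfile-5 \
  --tag sidpalas/devops-directive-docker-course-client-react-nginx:5 \
  .
docker push sidpalas/devops-directive-docker-course-client-react-nginx:5
```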
I can show you what that process looks like for the client React image. I navigate to that directory, where we've got several Dockerfiles; in this case I want Dockerfile number five, so let me open it up. It's a multi-stage Dockerfile: we start from a Debian-based image, set a working directory, and copy in our dependency configuration, the package.json and package-lock.json. We run our npm install command, which brings in all those dependencies, copy our source code from the host system into the container image, and finally build the application, which exports the HTML, JavaScript, and CSS files for our React application. Then, in the separate deployable stage, we use the nginx-unprivileged container, which runs a copy of nginx as a non-root user, and copy in two things: our nginx configuration file and the output of that npm run build command, from the dist directory into the location nginx generally serves a website from, /usr/share/nginx/html. Finally, we indicate that this container will be listening for requests on port 8080.

This Dockerfile combined with a docker build command gives us the image: we pass the --file flag with Dockerfile number five, tag it with the name of the Docker Hub repo, sidpalas/devops-directive-docker-course-client-react-nginx, plus the number associated with this particular Dockerfile so that we know which one we used to build, push, and deploy, and point it at the directory where the application source actually lives. So I can run N=5 make build, and we can see it's using Dockerfile number five and tagging the image with our repo name. Now it's built, but only locally, so I run N=5 make push, and it pushes to Docker Hub based on that tag. If I look in Docker Hub now under client-react-nginx and refresh the tags, we can see this tag was pushed by me just a few seconds ago.

Let's go ahead and build and push our Node API and Go API as well; the process is the same. For the Go API, I believe we want Dockerfile number eight, and while it builds I can walk through what it's doing. Again we have a multi-stage build: we start from a golang image, which has the whole toolchain necessary to build our Go-based app, set a working directory within the container, and copy in the go.mod and go.sum files, which define all the required dependencies (similar to the package.json file for JavaScript). The go mod download command pulls those dependencies from the internet and builds them into the image. In this case we're not using the development stage, only the production stage: we create a non-root user inside the container so we can run as a user other than root, copy in our source code, and run the go build command to take the application source and all those dependencies and output a single binary. The final stage starts from an image called scratch, a minimal image with basically nothing in it, and copies in the binary built in the previous stage so we can run it; once again we specify that we're listening on port 8080 for inbound requests. Once again, let's push that to Docker Hub.
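As a sketch, the multi-stage pattern just described might look like this (stage names, paths, and the base-image tag are illustrative, not the repo's exact file):

```dockerfile
FROM golang:1.19 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 yields a static binary that can run in a scratch image
RUN CGO_ENABLED=0 go build -o /app/api-golang

FROM scratch
COPY --from=build /app/api-golang /api-golang
USER 1000
EXPOSE 8080
CMD ["/api-golang"]
```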
Finally, we do the same for the Node-based API; in this case we want Dockerfile number nine. You'll see that this Dockerfile looks very similar to that of the React-based client, because it also uses Node and npm to build the application. We start from our common Node.js base image, copy in our dependencies file, and set some environment variables so that when we deploy it, the app will know it's in production mode. We bring in the dependencies with the npm clean install command, which is like npm install but ensures we match the versions from our lock file exactly. We set a non-root user for security purposes, copy in the application source code, run the application in the final line, and indicate with the EXPOSE command that we'll be listening on port 3000. We push that to Docker Hub, and at this point we have our three container images hosted on Docker Hub, ready to be pulled and executed.

Jumping back to our swarm configuration, the next field is the deploy block, which tells Docker Swarm how we want to manage this application. Setting mode to replicated means we can run one or more copies of the service; here we're specifying just one copy of the React client. The update configuration tells Swarm how to handle deploying a new version: with start-first, it starts the new version, waits for it to become healthy, and then transfers traffic over to it. There's also stop-first, the opposite, which brings down the old copy before spinning up the new one; by using start-first we're able to minimize downtime between versions. The init field tells Docker to run a separate init process inside the container, a program called tini, as the first process, which then starts up our application; we won't go into the details of why here, but in some cases it can be useful. I'm also specifying that this service is part of the frontend network: by assigning specific networks, Docker is able to isolate portions of our application and prevent communication between services that should not be able to talk to each other. The ports mapping tells Docker to take traffic inbound on port 80 and forward it to port 8080 inside the container; if you'll remember, our nginx Dockerfile listens on port 8080, while inbound traffic from a browser generally arrives on port 80. Finally, the health check script is executed by Docker at the interval you specify to make sure the service is healthy; if it's not, Docker will try restarting it. That health check is also used when bringing up new versions, to ensure the new version is healthy before we switch traffic over to it.

The other services' configurations are almost identical, with a few minor differences. The Node-based API is set to read_only: true, a security feature saying this application should not need to write anything to the filesystem, so the container runs in a read-only configuration. It also has secret and environment variable fields: the secret is how I'm going to supply the password for that database, so that I can read it in at runtime and not need to store those sensitive credentials in git or in my configuration here.
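Put together, the kind of service definition being described might look roughly like this (the image tag, network name, and especially the health-check command are illustrative; the nginx-unprivileged image may not even ship curl, so the real check could differ):

```yaml
version: "3.7"
services:
  client-react-nginx:
    image: sidpalas/devops-directive-docker-course-client-react-nginx:5
    init: true                    # run tini as PID 1 inside the container
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        order: start-first        # start the new task before stopping the old
    networks:
      - frontend
    ports:
      - "80:8080"                 # host port 80 -> nginx listening on 8080
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080"]
      interval: 30s
      timeout: 5s
      retries: 3
networks:
  frontend:
```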
The database URL file environment variable tells my application where in the filesystem that secret will be mounted, so that I can read it in and utilize it; secrets get mounted at /run/secrets/ followed by the name of the secret. The Go service's configuration is pretty much identical to the Node one, with the difference that I'm specifying two replicas: instead of just one copy of that API I'll have two, and traffic will get load-balanced between them.

Let's jump back over to the console and see if the database has finished provisioning; the cluster is now active. I need to do a few things here, one of which is to grab my password. I copy it and use it to create a Docker secret, replacing the placeholder password I was using to demo (foobarbaz) with the real one; I switch to single quotes so that it doesn't get escaped when we execute the command. For the host, I copy the hostname portion of the connection string. Now I can execute the command: printf sends the connection string to standard out, which I pipe into my docker secret create database-url command; the trailing dash indicates that it should take the contents of standard input and create the secret from that. The secret has now been created, and our swarm configuration file can read it in: under secrets, database-url is marked external, which means I created it ahead of time and Docker Swarm does not need to manage it directly when I provision the stack.

The other thing I need to do now that the database is active is add the IP addresses of my two Linodes to the allow list: go to Databases, click into the cluster, then Settings, Manage Access Controls, and add each IP. That updates the access control list so we'll be able to make requests from our applications running on those Linodes to this database.

One additional thing I need to do is set up my Node.js application to connect to that database using TLS. To do that, I can grab the certificate authority from the database page: I download the CA certificate, move it into the same directory where my Docker Swarm configuration lives, and rename it db.cert. Within my configuration I add a configs element to the Node.js service; this tells Docker Swarm that when I provision the stack, it will create a config object associated with that certificate and load it into my container as a file at the path /db.cert. I also need to update the application to consume it: in database.js, where I'm initializing my database connection, I additionally tell the Pool object to use an SSL configuration when I instantiate it, reading that certificate file from the filesystem, converting it to a string, and passing it in, so that I'm able to connect to the database with encryption enabled. Now that I've made that change to the source code, I need to rebuild the Node-based API and push it, so the modified source code gets built into the application I'm actually going to run.

At this point I can run the stack deploy command, which takes the Docker Swarm configuration file I've been showing and deploys it as a stack within the cluster, with the three services that I've defined.
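Concretely, the secret-creation step above looks roughly like this (the exact connection-string format is my assumption; the user, password, and host come from the database page):

```bash
printf 'postgresql://<USER>:<PASSWORD>@<HOST>:5432/postgres' | \
  docker secret create database-url -   # "-" reads the secret value from stdin
```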
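And the database.js change might look something like this, assuming node-postgres; the environment-variable name and mount paths follow the narration above but are not copied from the repo:

```js
const fs = require('fs');
const { Pool } = require('pg');

const pool = new Pool({
  // connection string read from the secret mounted under /run/secrets/
  connectionString: fs
    .readFileSync(process.env.DATABASE_URL_FILE, 'utf8')
    .trim(),
  ssl: {
    ca: fs.readFileSync('/db.cert').toString(), // CA cert mounted via configs
  },
});
```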
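The deploy itself is a single command (file and stack names here are hypothetical):

```bash
docker stack deploy -c docker-swarm.yml example-app
watch docker service ls   # replicas show 0/1 until health checks pass
```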
One fix first: the configs object under the service should correspond to a config defined elsewhere in the file, so, similar to how we've defined our secrets, I add a top-level configuration called db.cert that uses the file I copied over from the database page, and within my service I consume it by passing the name of that configuration.

Now we can see the three services that were defined in our configuration coming up. They currently show zero replicas active because Docker is running the health checks to make sure they're healthy before routing traffic to them, so I use the watch command while they start. Let's visit the IP address of the swarm-0 node... ah, and it says "password authentication failed for user postgres". That, I believe, is because by default, when we create a managed database on Linode, the user is not postgres; it is linpostgres. Let me correct that: I remove my example app stack, delete the database-url Docker secret, recreate the secret with the proper user, and finally redeploy the application. Now, when the application loads in the new version of the secret, it has the proper credentials and can connect to the database. This time all three services come up much more quickly, with the appropriate number of replicas active, and if I load the web page, we get the timestamp from the database, which confirms the full sequence of calls: from my browser, to the application running in Docker Swarm on the hosts, to the managed database.

Now let's show that these services are running across both nodes. Running docker node ls gives us the IDs of the two nodes, and for each of those we can run docker node ps; "self" refers to the swarm-0 node I'm connected to via the DOCKER_HOST environment variable. We can see our Node API and one copy of our Go API deployed on this host, and using the other node's ID, the other copy of the Go API and our front end hosted on swarm-1. So you can see how easy it is to deploy services across multiple hosts, which enables things like high availability: even if one of our machines goes down, the services running on the other machine remain reachable.

I know that was a bit of a whirlwind tour of Docker Swarm and getting an application deployed onto it. If you want a real deep dive into how this is done, the application itself, and the process of building and optimizing those container images, there's a free course over four hours long in which we go deep on Docker and containers; if you want to learn more about this type of thing, head over to my channel or to courses.devopsdirective.com and check it out. Hopefully you learned a little more about containers, container orchestrators, and Docker Swarm, and might consider using a technology like this to deploy your next application on Linode. If you have any questions about the process I went through today, feel free to leave them in the comments section and we'll try to get back to you. If you want to see more videos like this about different technologies and how to use them with Linode, check out other videos from the Linode YouTube channel. That's it for today; take care.
Info
Channel: Akamai Developer
Views: 140,697
Keywords: linode, linux, cloud computing, linux server, open source, sysadmin, docker, docker swarm, docker swarm deployment, docker tutorial, docker container, docker compose tutorial, docker swarm vs kubernetes, what is docker swarm, dockerfile, docker swarm tutorial, docker tutorial for beginners, docker networking, docker cluster, devops, docker training, docker swarm cluster setup, docker swarm manager, docker swarm load balancing, docker swarm add nodes, docker swarm basics
Id: aghIj6A9dxM
Length: 32min 13sec (1933 seconds)
Published: Mon Jun 12 2023