Docker Compose Explained - Docker Crash Course #2 (2024)

Captions
Hello, my name is Alex, and in this video we're going to go over the Docker Compose file. More specifically, we're going to learn how to set up multiple containers that work together to achieve a specific goal. For example, you may want to run a web application with an attached database, as two separate containers within Docker that work together, and the easiest way to do that is a Docker Compose file, which instructs Docker how to start those containers, what to attach to them, and so on. With that said, let's quickly look at the architecture we're going to follow. If you remember, in the previous video we discussed the general Docker architecture: a Docker client, which can be any client on your machine (a GUI, a command-line tool, or anything else); the Docker host, which is your machine or any machine running the Docker daemon process; and the daemon itself, which you use to build images, start containers, attach volumes, and so on. Docker usually runs on a single machine, and you use the daemon process on that machine to perform various actions. You could run Docker across several machines using something known as Docker Swarm, but that is outside the scope of this video. So we're starting with a single host, on which we want to start several containers that can communicate with and rely on each other. That brings us to the architecture up here: a Docker Compose file, which you interact with through any Docker client, and against which you run various commands to orchestrate the setup and running of those containers. So, let's start from the very
beginning. In the previous video, we explored how Dockerfiles are used to build images, and those images are then used to run containers. In this case we again have a file, called a Docker Compose file, which can be used to run several containers at once and let them work together for a specific purpose. Imagine you want to run a web application: say an Angular application that relies on some sort of database, plus a web server acting as a reverse proxy. If you don't know what a reverse proxy is, that's fine; the point is we need a web server, a web application, and a database. To make these three work together within a Docker environment, we need a way to interconnect them, a way to run the containers together, and a way to set the order in which they start. To do all of that, we create a configuration via the Docker Compose file, and then use specific commands against that file to start our containers and manage them in various ways. Within a Docker Compose file, we define what are known as services in order to configure our containers. Each container is encapsulated in a service, and the service dictates how that container runs, how it interfaces with other containers or services, and generally how it is configured within the scope of the Compose file. To recap: we have a Docker Compose file in which we define services, and these services run our containers. So let's go ahead and take a look at an actual Docker Compose file. Here I have created a file called
docker-compose.yml, which is the naming convention for a Docker Compose file. It's a YAML file, and when we run our commands later on, they should be run from the directory containing this file. You're not specifying the path to the file within the command (although you can do that); all you need to do is run the command from the directory where the file is present. I'll show you an actual demo of two containers running off a Compose file in a moment, but for now let's go into this demo folder, where I have an example Compose file that we're not going to run, but which encapsulates everything you need to know to begin working with Compose files. Let's jump into the file itself: as you can see, it's a pretty small file, with most of what you're seeing being comments. In older versions of Docker Compose you could define the version of your Compose file up top: you would write "version" followed by a version number, and that dictated the format of the file and what you could and could not use within it. However, this has been deprecated. If we go to the Compose file specification on the official Docker website, and specifically to the section on versioning, you will see that since Compose version 2 the version field is no longer used. You can read the new specification if you want to learn about Compose in detail, but generally, just know that versioning is not used anymore in Compose v2. You no longer need to specify a version up top, and if you see a Compose file with one, just know it is a legacy file. So essentially,
since Compose v2, the version property is ignored even if you set it. Moving on, we have our services, which define our containers. You write the name of the service followed by a colon, and underneath it the configuration for that service. In this case, our service is based on a base image, nginx:latest, which will be fetched from the default Docker Hub registry; we're pulling a pre-made image and using it to start a container. Furthermore, we're mapping port 80, our external port, to the internal port 80 of the container: once the container starts, the nginx server runs on port 80 by default, and we map that internal port to external port 80 on our host machine so that we can actually access the container. With that said, we're also attaching a volume. Volumes can be used to attach a local directory or file on your machine to a directory or file in the container. With the default nginx image, when you start up a container you will have a configuration file at /etc/nginx/conf.d/default.conf.
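Put together, a service along the lines described above might be sketched like this (the service name and the local nginx.conf path follow the example being walked through):

```yaml
services:
  web:
    image: nginx:latest        # pre-made image pulled from Docker Hub
    ports:
      - "80:80"                # host port 80 -> container port 80
    volumes:
      # replace the container's default nginx config with a local file
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
```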
So you have this file located inside your nginx container, and following that logic, we want to replace it with our own file, nginx.conf. In this example we would actually need to have that file present locally: we could create it and then use it to provide configuration for our nginx server. Effectively, our local file replaces the server's configuration, which lets us change the nginx configuration just by manipulating this file. We can attach other volumes here too, including entire directories, which allows us to attach directories from our local machine to the running container. Moving on, we can also define a property called depends_on, written as a dash, a space, and the name of a different service. This lets us define the order in which services are run. If our web service depends on the application service, Compose will try to start the application service before starting the web service. Note that this does not mean it waits for the application service to start up completely; it only starts it first. There is no guarantee that everything the application service needs to load will be loaded before the web service starts; to get that, you would need additional logic that ensures the application service has fully started. In general, though, depends_on lets you define the order in which your services start. Lastly, you have networks, which connect containers together and allow them to communicate. We will look at how to define a network in just a second, but
basically, you define one or more networks and then place services (containers) on them, which effectively allows them to communicate within those networks. With that said, let's move on to our application service. The difference here is that instead of using a pre-built image, we're saying we want to build an image from a specific directory. This assumes we have a directory called app; in it I have placed an Angular application along with a Dockerfile from our previous video. What this does is look for a Dockerfile within that directory, build an image from it, and then use that image to start a container. So here you use the build key to build an image and run a container instead of importing a pre-existing image. The next section is environment, which defines what are known as environment variables. Environment variables can be accessed from within the application in question; say this is a Node.js application, then whatever you set as an environment variable can be read from inside the app. This is very useful for things like connection strings, which should not be defined within the application itself but within the configuration of your container. In this case you could write something like process.env.
DATABASE_URL from within your Node.js application, and that would fetch the environment variable. The exact mechanism differs between frameworks, but you can read your environment variables from within your application as long as they're set in the context of the container. So you can provide any environment variables to your application here; just be aware that they are available globally, meaning you would be able to fetch a given variable from pretty much anywhere inside the application. Next, volumes: just as we mentioned before, a volume mounts a specific directory or file on your local machine to a directory or file within the container. This time, instead of mounting a single file, we're mounting the entire application folder to the /code folder inside the container. Also, our application service here depends on the database service down below, which means the startup order will be: the database service, then the application service, then the web service. That's because the database service, as we will see later, depends on nothing, while the application service depends on the database service and the web service depends on the application service. In this way you can define the order in which your services are started. And as you can see, all of our services are on webnet: all three share the same network, which means they will be able to communicate.
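Assuming the app folder layout described above, the application service might be sketched like this (the DATABASE_URL value is purely illustrative, not taken from the file being shown):

```yaml
services:
  app:
    build: ./app               # build the image from ./app/Dockerfile
    environment:
      # illustrative variable name and value
      - DATABASE_URL=postgres://db:5432/mydb
    volumes:
      - ./app:/code            # mount the whole app folder into the container
    depends_on:
      - db                     # start the db service before this one
    networks:
      - webnet
```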
They can communicate with each other as long as the network configuration allows for it. Lastly, we have the command property, which overrides the default command for the container. For example, the image might default to a command like flask run, and you could override it by providing your own command, such as python app.py. It's just an optional way to define a custom command that replaces the default command within your image. With that said, let's move on to the last service, the database service. This one again uses a pre-made image, postgres:latest, and here we can set our environment variables: the Postgres database name, the username, and the password. We then attach a specific volume: we want the data in our Postgres database to be persistent, and since we know the database stores its data in a data directory inside the container, we attach that directory to the db-data volume, which we define further down (we'll look at that in a second). In other words, the contents of that directory within the container are backed by our db-data volume, which is simply a way to make the database persistent. We also place this service on webnet, and that concludes our three services. Now let's move to the last two sections we're going to cover in this video: volumes and networks. The volumes section is used to create new volumes; it's as simple as that. Within the section you can define any number of volumes and then use them, in the manner we saw before, to attach them to specific containers.
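The database service described above, with its named volume, might look like this sketch (the variable values are placeholders; /var/lib/postgresql/data is the standard data directory of the official postgres image):

```yaml
services:
  db:
    image: postgres:latest
    environment:
      POSTGRES_DB: mydb          # placeholder database name
      POSTGRES_USER: user        # placeholder username
      POSTGRES_PASSWORD: secret  # placeholder password
    volumes:
      # persist the Postgres data directory in a named volume
      - db-data:/var/lib/postgresql/data
    networks:
      - webnet
```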
Now, when it comes to actually defining a volume, all you need to do is write the name of the volume followed by a colon. This creates a new volume with the default configuration, which is a volume backed by your local file system. Usually you leave it at that, but in specific cases you may need to create volumes for specific environments: cloud environments, multi-host environments, or any other environment requiring a different type of volume. If you want a custom configuration for your volume, you follow this convention: under the name of the volume you specify a driver, tailored to the environment you're creating the volume for. In this case the configuration creates a volume on our local file system, but in other scenarios the configuration would differ, depending heavily on the target environment; that is outside the scope of this video. Just know that you can provide volume configuration like this after stating the volume's name, but normally all you need is the name followed by a colon, with no additional configuration, and you get a volume with the defaults. Networks work on a very similar principle: you create a new network by stating the network name followed by a colon, and then underneath you can state your custom configuration by listing the corresponding properties. In this case I'm stating that I want to use the bridge driver to override the default driver. The bridge driver allows
communication between containers running on the same host. Again, a Docker host is a machine on which the Docker daemon process runs; normally all of your containers run on the same host, and the bridge driver lets those containers communicate as long as they are on this network. The idea is that you define a network, and all containers on that network share this property. Besides the bridge driver, you also have the host driver, which lets all containers on the network communicate directly with the host machine; it removes the isolation between the host and the network. You can also set the driver to none, which effectively disables networking for the containers on that network. The last driver I want to mention is overlay, and it is a little bit special. Technologies like Docker Swarm let you orchestrate the functioning of containers on different machines: imagine several Docker hosts, i.e., several machines running Docker. We refer to these hosts as nodes, and we want to manage both how the nodes operate and how the containers on them operate; Docker Swarm orchestrates separate Docker hosts in a very controlled way. On that note, we may want containers on different machines, i.e., on different Docker hosts, to communicate with each other. To do that we create something known as an overlay network, which sits on top and allows containers on different hosts to communicate. So by setting the driver to overlay, we create a network that allows communication between containers sitting on different Docker hosts.
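The top-level volumes and networks sections described above might be sketched like this (the bridge driver on webnet is the same default you would get by leaving it out; custom-vol is an illustrative name):

```yaml
volumes:
  db-data:            # default configuration: a volume on the local file system
  custom-vol:
    driver: local     # explicit driver, standing in for environment-specific setups

networks:
  webnet:
    driver: bridge    # same-host container communication (the default driver)
```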
Besides the driver, networks have many other configuration properties that we're not going to discuss in this video, but you can tailor any network to suit your needs. With that said, let's quickly recap what we just saw. To begin with, we looked at services and service configuration; these are the most essential properties you need to set up your own services. You will always use either image or build to tell your Compose file where to start building your container from: a service will always encapsulate a container, and from there you provide custom configuration to that container. You can define the ports, the volumes, and the networks, and even dictate the order in which the containers should run. Generally, when talking about services we are talking about container configuration, so always keep that in mind. Beyond the services, we also looked at volumes, which allow for persistence of data, and at networks, which let us customize the way our containers communicate. For the next part of this video, we're going to create a very basic Compose file by following a tutorial on the official Docker website. The goal of the application we're setting up is simply a web application that counts how many times a user has visited it. In our primary folder we have a Compose file and an application folder; inside the application folder we have our Python application, a Dockerfile, and a requirements file. Let's look at the Compose file first. In this case we need two services, meaning we want to set up two containers. The first service is our web application, and here we have to provide
the path to our application folder containing our Dockerfile: we just write ./app, and that builds an image based on the Dockerfile there. Furthermore, it maps port 8000 to the internal port 5000 of our application. If we go into the Dockerfile, you will see that we state that we want to EXPOSE port 5000. This is done mostly for documentation purposes: it's a way of stating which port we're going to use, but the instruction does not actually publish the port. Rather, the mapping in the Compose file is what maps port 8000 on your machine to port 5000 inside your container. By default, Flask runs on port 5000, so you don't really need to explicitly state that; we're using the default port and mapping it to port 8000 on our machine. It's a pretty standard configuration. Our Dockerfile uses a Python Alpine image; we create a working directory called /code, set two environment variables, FLASK_APP and FLASK_RUN_HOST, and use the requirements.txt file to install the flask and redis dependencies: we state our list of dependencies there and use pip install to install all of them. Then we declare that we're going to expose port 5000 at some point, copy all of the local data into the container, and run Flask. On the application side, as you can see, we're not explicitly running the app from inside the Python file; we define our routes there, but we don't run the application from inside the file. Instead, the CMD property is what runs the Flask application.
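Based on that description, the Dockerfile might look like the following sketch (this mirrors the structure of the Dockerfile from Docker's getting-started tutorial; the exact base image tag is an assumption):

```dockerfile
FROM python:alpine            # small Python base image (tag is illustrative)
WORKDIR /code                 # all later instructions run from /code
ENV FLASK_APP=app.py          # tells `flask run` which module to serve
ENV FLASK_RUN_HOST=0.0.0.0    # listen on all interfaces, not just localhost
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt   # installs the flask and redis dependencies
EXPOSE 5000                   # documentation only; publishing happens in Compose
COPY . .                      # copy the application code into the image
CMD ["flask", "run"]          # the container's default command
```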
Let's recap what happens here. We begin with our docker-compose.yaml file, which is used to build our two services. When building the web service, Compose goes into the app directory and looks for the Dockerfile; in turn, the Dockerfile runs the flask run command to try to run our app.py Python application. Flask detects the file in question by looking up the FLASK_APP and FLASK_RUN_HOST environment variables automatically: using these, flask run determines that we want to run the app.py script, and the host in question is used to expose our Flask application to the outside as well, which is why we set the run host to 0.0.0.0. After the Flask application runs, we enter the app.py file. Inside this file we import redis and flask; we create a Flask application with a default route, and when that route is accessed it triggers the hello function, which then triggers the get_hit_count function and outputs the number of times the page has been visited. Up top we have a cache, initialized as a Redis client that runs on a different port; we have a maximum number of retries and a while True loop that attempts to increment the number of hits in our Redis cache, and as long as the retries have not been depleted, it keeps trying whenever there has been an error. So every single time we visit the page, the counter should be incremented. Again, this is a very simple application that counts page visits using Redis, and we're only using two containers: our web application and a caching mechanism. With that said, let's try to run our Compose file to actually set up our application.
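The retry loop described above can be sketched without the real Redis client; here a minimal in-memory stand-in plays the role of the cache. The get_hit_count function mirrors the tutorial's logic, while FakeCache is purely illustrative:

```python
import time

def get_hit_count(cache, max_retries=5, delay=0.0):
    """Increment and return the hit counter, retrying on connection errors."""
    retries = max_retries
    while True:
        try:
            return cache.incr("hits")
        except ConnectionError:
            if retries == 0:
                raise          # give up once the retries are exhausted
            retries -= 1
            time.sleep(delay)  # back off briefly before the next attempt

class FakeCache:
    """In-memory stand-in for a Redis client; fails the first few calls."""
    def __init__(self, failures=0):
        self.hits = 0
        self.failures = failures

    def incr(self, key):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("cache not ready yet")
        self.hits += 1
        return self.hits
```

In the real application, the cache is a redis.Redis client pointed at the redis service and the exception caught is redis.exceptions.ConnectionError, but the control flow is the same: retry the increment a bounded number of times before giving up.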
Now, to do that, let's go back into our previous demo file, where down here I have written out the commands we're going to look at. The first is docker compose up, which starts all of the services. The up command is what you use to bring services up for the first time: it automatically builds all of your images, creates your networks, sets up your volumes, and so on; anything you state in your Compose file gets initialized by it. Similarly, the docker compose down command removes those services, including the containers, the networks, and the volumes, so these two commands are used at the start and end of the lifespan of your services. By default you can run these commands as they are, or you can target a specific service: docker compose up starts both of these services, while docker compose up followed by the name of a specific service starts only that service, and the same applies to docker compose down. With that knowledge in hand, let's try to run our services. The first thing you want to do is open a command prompt at the location of your Compose file, because Compose will try to find the file at the location where your command prompt is running. With that in mind, let's try running docker compose up at the location of our docker-compose.
yaml file. Let's run the command and see what happens. The first thing that happens is that our redis image is pulled, which takes some time; this is just downloading the redis image from the Docker Hub registry. Once the image has been downloaded, we build our own image using our Dockerfile, executing each of its instructions. Now everything is running, and if we pull up a Docker client (this is a visual Docker client), you will see that we now have two images: the first is called part-two-web, because the folder containing the Compose file is called part-two and our service is called web, and the second is simply called redis. If we go into the Containers section, you will see a group of containers called part-two, and clicking the arrow reveals our two running containers. We can see the same thing from the command prompt: let's open a new terminal, cd into our directory, and list the running containers using docker compose ps, another command that lists the available services. As you can see, it shows our services, when they were created, their status, and the associated command: that's the information on our running services. We can also run docker compose logs, which shows the logs associated with our Compose project. But notice that right now the docker compose up command is attached to this terminal, so if we hit Ctrl+C here, it will stop these two containers. So
effectively, we are stopping the two running containers by exiting the command. If you want to avoid that, you can run the up command with the -d flag, which runs Compose in detached mode, just like when you started a container through the command line in the previous video. If I run my containers using up in detached mode, you will see that our containers are running again. Before we move on, let's visit our web application to ensure everything works correctly. Open a browser: the application is running on port 8000, so all we need to do is visit localhost:8000, and as you can see, the application works. Every single time I refresh the page, the counter increments, meaning both our application and our cache work perfectly. Now, with our services running in detached mode, we can exit the second terminal and try a few more commands. Going back into our text file, let's look at these commands in order. Just like with a single container, you can use the exec command on a specific service to execute a command inside that container. Let's copy that, specify web, then bash, and hit enter. In this case it could not execute bash, due to the image we're using, but we can run sh instead, and as you can see, we've successfully entered a shell. Typing ls shows our Dockerfile, our application file, and our requirements file, and typing exit leaves the shell. Next up, we can also use build, which will rebuild the image inside of our
service. So let's run docker compose build without specifying a service and hit enter: as you can see, it attempted to rebuild our image. But since nothing changed in our Dockerfile, it simply used the cache to go through each layer, found no changes, and finished the entire process in only 4.7 seconds. On this note, I want to take a second to explain how caching works in Docker. Every instruction in a Dockerfile is considered a layer, and each layer is cached by default after you build an image. Every time an instruction executes, if it produces some sort of output (for instance, downloading files or installing something through a RUN command), that output is cached. The idea is that every single layer is cached, but as soon as you change a layer, its cache is invalidated. If I go to this COPY instruction and change the file name from requirements to requirements1, the cache that was previously created is invalidated and the instruction executes once again. However, it doesn't stop there: you should be careful, because the cache of every single instruction from that point onward is invalidated as well, even if those instructions have not been changed. In other words, even if you only change this one instruction, its cache and the cache of everything after it are invalidated too. So you should always place the instructions that are not supposed to change at the top, and the instructions that change often at the bottom; otherwise, changing something near the top invalidates everything after it, and your build time will increase significantly.
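This is exactly why the Dockerfile described earlier copies requirements.txt and runs pip install before copying the rest of the code; a sketch of that cache-friendly ordering:

```dockerfile
# Rarely-changing instructions first: editing app code leaves these layers cached.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt   # re-runs only if requirements.txt changes

# Frequently-changing instructions last: only this layer (and anything after it)
# is rebuilt when the application code changes.
COPY . .
```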
You will need to wait for Docker to execute all of those instructions once again. Now, if we go back into the terminal, you can see that this rebuild took only 4.7 seconds, while the initial build took 55 seconds; that's because everything had been cached, and all Docker did was look up the cache and verify that nothing changed. This is also why the application remained the same: Docker confirmed nothing had changed and loaded everything from the cache. Moving on, you can use the build command on a specific service as well: if I just specify web, it does essentially the same thing, validating the cache and rebuilding our web service. The next command is pull; I'm not going to demonstrate it, but you can use it with a specific image to pull that image, exactly like you would use docker pull. Next up we have the start, stop, restart, pause, and unpause commands. These are very straightforward: unlike the up and down commands, you can use these to start and stop specific services without creating or deleting containers, volumes, and networks. So we can run docker compose stop web, which stops the web service; running docker compose ps confirms the service has stopped. We can then use start instead of stop, which effectively starts the service again. We can also pause a service with pause and resume it with unpause, and besides that, restart will simply restart the
service if necessary; as you can see, the service has been restarted. Lastly, if you want to see the status of both running and stopped services, you can use docker compose ps with the -a flag; running that shows the statuses of both of your services: in this case the web service is stopped while the redis service is running. You can see the same thing via the GUI, where this group icon is yellow, signifying that not all of the services are running, and of course you can also start the service again manually through the GUI by simply clicking the start button. With that said, we're pretty much done with everything you need to know to start working with Docker Compose. In the next video we're going to cover even more advanced topics, but for now, that's about it: make sure to post any questions or suggestions in the comments below, and also make sure to like and subscribe, since it helps the channel immensely. With that said, I'll see you in the next one.
Info
Channel: Code Deck
Views: 147
Id: nzLwyPSwmfc
Length: 42min 8sec (2528 seconds)
Published: Thu Mar 21 2024