Ultimate Docker Compose Tutorial

Captions
In this video you will learn everything you need to know to get started with Docker Compose. We'll go over what it is, what problems it was designed to solve, and its common use cases, and of course we will do some hands-on demos to learn how to actually use Docker Compose in practice. I'm super excited to teach you all of this, so let's jump into it.

To understand Docker Compose, you first need to understand Docker and have some basic experience with it. If you don't, I recommend you pause and watch my Docker crash course first, because Docker Compose is essentially a tool for managing and working with Docker containers, so you need that context first. In the Docker video I break down what containers and images are, what problems Docker solves and what use cases it has, how to dockerize your application with a Dockerfile, and all the concepts you need to understand Docker itself. Based on that knowledge, we can now understand why Docker Compose was created and when we want to use it.

Applications are composed of many different parts: APIs, databases, any services your application depends on. You may even have a microservices application, which is an application broken down into multiple micro applications, or microservices. When you're creating containerized applications, all of these components must be deployed and run together because they depend on each other. So you have a set of containers, running different services and applications, that need to run together and talk to each other.

Docker Compose is a tool that lets you define and run multiple services and applications that belong together in one environment. Simply put, if you want to deploy multiple Docker containers, where each container may have its own configuration options, you can use Docker Compose to manage those containers far more easily. That's just the general definition; to really understand the concepts and the actual use cases, rather than an abstract explanation, we're going to jump right into a demo and explain everything there. Let's get started.

As a first step, we're going to start two services as Docker containers using just the docker command, without Docker Compose, so we can see and compare the before and after states. First we'll create a Docker network where the two containers will run and talk to each other using just the container name. Then we'll start two containers: a MongoDB container and a mongo-express container, which is a web UI for the MongoDB database. A very simple use case, and we'll run both containers with plain docker run commands. So I'm going to switch to a terminal, because we're going to execute those docker run commands there.
You've probably noticed the fancy, fun-looking terminal I've been using recently: it's a terminal app called Warp, which is a sponsor of this video. I've actually played around with Warp and love using it; it's free and easy to install on your computer. I'll be using Warp throughout the entire demo because it highlights commands nicely, which makes it easier for you to follow along. If you want to install Warp yourself, check out the link in the video description, where I'll provide all the relevant links for this crash course, including the Warp installation.

To run Docker containers we of course need Docker installed and running, so I'm going to start up Docker first. With the Docker service up and running, let's create the Docker network. Since we're going to run the mongodb and mongo-express containers, we can call the network mongo-network, and create it. If I now run docker network ls to list all available networks, I see the default ones you get out of the box when you install Docker, plus the network we just created. Awesome, the network is there.

Now let's run our two containers; if you followed my Docker crash course, you already know all of this. We run docker run in detached mode, and we have the mongo image documentation we can reference. First I define the port: MongoDB's default port is 27017, and we map it to the same port on the host. Then we set the two environment variables that configure the root user and password; we'll call the user admin and set the password to something like supersecret. All of this should be a refresher from Docker. We also specify that the container should run in mongo-network, and we name the container mongodb instead of letting Docker generate a random name. Finally, we specify the image, mongo. Executing this pulls the latest image from the Docker Hub repository and runs it in detached mode. Perfect, our MongoDB container is running; the commands so far are sketched below.
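For reference, the commands up to this point might look roughly like the following; the supersecret password value is just the example used in this demo:

```sh
# create an isolated network in which the two containers can reach
# each other by container name
docker network create mongo-network

# run MongoDB: bind its default port 27017 to the host, set the root
# credentials, attach it to the network, and give it a fixed name
docker run -d \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=supersecret \
  --network mongo-network \
  --name mongodb \
  mongo
```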
Now let's start the mongo-express container. I can bring up my previous command and adjust it for mongo-express. The documentation shows that mongo-express runs on port 8081, so that's what we set here. It also takes different environment variables: mongo-express is just a UI for MongoDB, and in order to connect and authenticate it needs the credentials we set on the MongoDB database, passed as environment variables with mongo-express's own variable names. That's why we refer to the official documentation, which you should always do to get the most up-to-date information; there you also see the default values for those variables. The port default is fine because that's what we bound on our host, but the MongoDB server variable defaults to a hostname that must match the MongoDB container name; since we called our container mongodb, I'm going to set this environment variable explicitly to mongodb. Let's not forget the backslash line continuations. So the ports are correct, the environment variables are correct, we run it in mongo-network as well, we name the container mongo-express, and I'll copy the exact image name, mongo-express, so I don't make a spelling mistake. Let's execute this as well; it started without any problems.

Now, to test that it was actually able to connect to the MongoDB database container, let's access it in the browser on port 8081 on our host. It asks for basic authentication, and we can find those credentials in the logs: docker logs mongo-express shows them, and admin / pass works. There you go: that's a test and a proof that it was able to connect to our database, since we have no connection errors and can access the application. The command is sketched below.
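A sketch of the mongo-express command described above, again with demo values; the ME_CONFIG_* variable names come from the mongo-express documentation:

```sh
# run mongo-express: same network, credentials matching the database,
# and ME_CONFIG_MONGODB_SERVER pointing at the mongodb container by name
docker run -d \
  -p 8081:8081 \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=supersecret \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  --network mongo-network \
  --name mongo-express \
  mongo-express
```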
This demonstrated how you start containers that belong together using plain Docker: mongo-express depends on mongodb (we don't need the UI without the database behind it), so we start the containers together, in the same network, so they can talk to each other in that isolated virtual network.

These are just two containers. But if you have a microservices application with ten different services, a messaging service, maybe two databases, and maybe UI services for those databases, that's a lot of containers to start and manage with plain docker commands. Now imagine you need to stop those containers, or make changes and restart them: that's a lot of tedious manual work, and you don't want to execute these commands on the terminal all the time, especially with tens of containers. You want an easier way to configure, start, and stop containers that belong together, and that's exactly where Docker Compose comes into the picture.

Docker Compose makes running multiple Docker containers with all the configuration we just defined, environment variables, ports (maybe multiple ports on the same container), maybe additional volumes for data persistence, much easier. With Docker Compose you have a YAML file where you define, in one central place, the list of containers or services you want to start together and all their configuration, in a file you can modify and use to start and stop those containers.

So how do our docker run commands map to a Compose file? Essentially we take the whole command with its configuration and write it into the file in a structured way. If you have, say, 10 or 20 containers that all need to talk to and interact with each other, you write all the run configurations for each container in the Compose file. The first two lines are the top-level attributes of the file. The first declares the version of the Compose file format; the latest Docker Compose tool installed on your computer will be able to read the latest file version. Then comes services, which is where you list all the services, meaning all the containers, that you want to run as part of this Compose file. In our case the first service is mongodb, and that name will actually become part of the container name when the service is created as a container. Under each service you put the configuration for that specific container. The first attribute is the image, since the container is built from an image, with an optional version tag after the name. Next is the list of ports: you can open multiple ports on a container if multiple processes run inside it, but mostly you'll have one. This is where we define the port mappings, and just like in the docker command, the first port refers to the host and the second to the port inside the container. Then the environment variables are listed under an environment attribute.

Now let's add the second service, which we'll call mongo-express. By the way, the service names are completely up to you, just like container names. Under it we have exactly the same configuration options: the image (again with an optional tag), the port, and all the environment variables we defined in the docker run command, under the environment attribute. The complete file with both services is shown below. So Docker Compose is just a structured way to hold very normal, common docker commands, and of course it's easier to edit this file when you want to change variables or ports, or add more services. And as part of the everything-as-code trend, a Compose file is code that defines how your services should run; you can check it into a code repository and multiple people can work on it together, compared to commands you just execute manually on your own computer.
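Here is roughly what the resulting Compose file looks like for our two services; the version line follows the older Compose schema used in the video (newer Compose releases treat it as optional):

```yaml
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017        # host:container
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=supersecret
  mongo-express:
    image: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=supersecret
      - ME_CONFIG_MONGODB_SERVER=mongodb   # must match the mongodb service name
```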
The final thing, which you may already have noticed, is that the network configuration is not defined in the Compose file; we didn't map that part of the docker run commands. We don't have to explicitly create or define the mongo-network, because Docker Compose takes care of creating a shared network for all the containers in the services list when we execute the file. We'll see that in action right away.

Now let's create the Compose file in a code editor. In the projects directory, where I am in the terminal, I created a mongo-services.yaml file, which is my Compose file with those two services defined, exactly the code you just saw, with our credentials and all our environment variables. Since this is YAML, please make sure your indentation is correct: YAML is a very simple language, but it's strict about indentation. The services need to be on the same level, and inside each service the configuration attributes need consistent indentation. Compared to the docker commands, it's now much easier to go to this file, see which services I'm running with which configuration, edit them, and add any new services I want to run.

Now let's execute this Compose file and see how it works. Back in my Warp terminal, I'll stop all the containers, since we want to start them with Docker Compose, then remove them, and remove the mongo-network too. There you go: a clean state, no containers running.

So how do we execute a Compose file? Good news: if you have Docker installed on your computer, you automatically have Docker Compose installed as well, so you don't need a separate installation; the docker compose command is already available as part of Docker. It takes the file name as an argument, plus a command: up, which means go through the Compose file and start all the services configured in it. The commands are sketched below. Let's execute this and look at the result.

There are a couple of interesting things I want to point out in the output, along with the concepts behind them. Scrolling all the way up to the beginning of the output: first, as I mentioned, Docker Compose takes care of creating a dedicated network for all the containers, and here we see it created a network called projects_default and runs the two containers in it. If I open another terminal and run docker network ls, the projects_default network is there.
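A sketch of the cleanup and the first Compose run, assuming the container names from our manual setup:

```sh
# clean up the manually started containers and the network
docker stop mongodb mongo-express
docker rm mongodb mongo-express
docker network rm mongo-network

# start all services defined in the Compose file
docker compose -f mongo-services.yaml up

# in another terminal: verify the auto-created network
docker network ls
```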
Another interesting thing to point out is the container names. In the Compose file we called the services mongodb and mongo-express, but as you can see, Docker Compose added a prefix and a suffix to each service. The prefix, projects, is the folder that contains the Compose file: Docker Compose always takes the name of the folder where the file is executed and uses it as the container prefix. The suffix 1 means we have one instance of each container. Checking our containers, we see names like projects-mongodb-1.

You'll also notice that the logs of the two containers are mixed: mongodb, then mongo-express, then mongodb again, because we're starting both containers at the same time. If you had 20 services defined, they would all start simultaneously and you'd see their startup logs interleaved. However, when some services depend on others, that can be a problem. In our case mongo-express depends on mongodb, because it cannot establish its initial connection until mongodb is fully up and running. Likewise, you may have a custom web application that needs to connect to a database at startup to fetch some initial data; if the database isn't up when the application starts, the application will fail with an error because the database isn't ready for the connection yet. You may have lots of such dependencies when running multiple services as part of one application, and this is something you can define in Docker Compose with the depends_on attribute. You can explicitly say that a service needs to wait for another service or container to be fully up before this container is created. So we can say the mongo-express service depends on mongodb, and you can have multiple dependencies: for example, an application that depends on two databases plus an authentication service, all of which should be up before the application starts, because otherwise it won't be able to connect on initial startup. depends_on takes a list of services and says: wait until all the dependent services are fully up and running before starting this one. So we can fix our startup ordering very easily with this attribute, as sketched below.
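The depends_on addition described above might look like this in our file; this excerpt slots under the services section:

```yaml
  mongo-express:
    image: mongo-express
    depends_on:
      - mongodb   # wait for mongodb before starting mongo-express
    # ... ports and environment as before
```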
Now, with both services up and running again, I'll refresh the browser, and mongo-express is accessible. We can actually change something in the database: for example, I can create a mydb database and, inside it, a mycollection collection. I'm very bad with names, not very creative, so that's what we get: mydb and mycollection. This actually creates them in the real MongoDB database, and if I go back to the terminal we see the corresponding change logs from mongo-express, and mongodb logging the new entries it created. Cool.

Now, what do we do if we want to stop those containers, or maybe change some configuration in the Compose file and restart them? Right now the docker compose process is running in the terminal, so we'd press Ctrl-C to break out of it, which stops both containers. However, just like with docker run, there is a detached mode: we can run docker compose up with the detached flag to start the containers in the background. If we then want to stop the containers, we could use docker stop with each container ID, but with 20 containers that's not an efficient way to do it. With Docker Compose it's very simple: instead of up, we run down. If we had 20 services defined and running as containers, it would go through all of them and not only stop the containers but also remove them. If I now run docker ps -a, which shows running and stopped containers, containers in any state, you see we have no containers at all, because they've been removed completely, and the network itself was removed too. So with docker compose down you have a very easy way to clean up the entire state, with no leftover containers or networks.

However, when you're running containers and make changes like we did in the database for testing, you may want to retain that state or data, so you don't want to completely remove the containers; you just want to stop and restart them. As you learned in the Docker crash course, containers are ephemeral: they have no persistence by default, so all the data is gone when you remove a container, unless you configure persistence with volumes. But if you just stop the containers and restart them, the state and data survive, because the container itself was not removed; it stayed on the machine. To demonstrate: let's run up again, and then use docker compose stop, which simply stops the containers. docker ps -a shows the containers are still available locally; they're just not running, they're in an exited status. We can start them again with docker compose start. If we refresh... our mydb database and collection are gone, because we recreated the containers with up; let's create them again, restart with Docker Compose, and this time the data is still there. That's the difference between the up and down commands compared to start and stop, and both have their use cases. The command set is sketched below.

One more thing: since we execute Compose commands very often, we can bookmark a command in Warp. If you have too many commands in the history and you're scrolling around, this creates a visual marker you can click to jump directly to that command, then copy it and execute it again.
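A summary of the lifecycle commands discussed above, as a sketch using our demo file:

```sh
docker compose -f mongo-services.yaml up -d    # create and start in the background
docker compose -f mongo-services.yaml down     # stop AND remove containers + network
docker compose -f mongo-services.yaml stop     # stop only; containers (and data) remain
docker compose -f mongo-services.yaml start    # restart the stopped containers
docker ps -a                                   # show containers in any state
```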
Before we move on to the next part of the demo, where we connect our own custom application to the MongoDB database and also run it as a Compose service, let's go to the database and create a new document in our new collection that the application will need. It's a very simple document with two extra attributes. The first I'll call myid (again, as you can see, I'm very uncreative with names): an ID we can reference in addition to the generated one. The second is the actual data, a string where we'll write "some dynamic data loaded from db", so when we load it from our application we know it's coming from the database. I'll save the document in the collection; you can see it was created, with the generated ID, our myid, and the data text. (A sketch of the document is shown after this section.)

To make this a bit more interesting, we're going to use a custom JavaScript application, a super simple app, that connects to the MongoDB database and displays the data in the browser, so we can see some of the concepts in action. We'll containerize this JavaScript application and run it as part of the Compose services. Of course, I'll provide the link to the Git repository hosting this application in the video description, so you can clone it locally to follow along. You'll also find the Compose file in that repository, so all the code we write in this demo will be there.

I've cloned the application locally into projects and called the folder docker-compose-crash-course; let's switch into it. To show you how simple the application is, I've opened it in Visual Studio Code (I don't have the Compose file here yet). This is the entire application: server.js, a Node.js backend, and index.html, which contains the styles, the JavaScript code (basically just one function), and the HTML, all in one file. The simplest app ever created. You don't need to understand any of this code; we'll concentrate on the configuration and dockerization parts. But to run through the logic at a high level: index.html shows two lines of data. One is static data hardcoded in the file itself; the other is data that's supposed to come from the database, so it starts empty and we set it dynamically. When the index.html page loads, the frontend sends a fetch request to the server.js backend. server.js accepts that request, connects to the database using mydb and mycollection as the database and collection names (that's why we created them), grabs the element whose myid key equals 1, and sends that whole object back to the frontend as the response. The frontend then takes the data attribute from the response and sets its value as the second line. That's how the whole thing works.

In order to connect to the database (remember, we set a username and password on MongoDB), our application needs those same credentials, just like the mongo-express container did. We provide them as environment variables as well; the application reads variables along the lines of MONGO_DB_USERNAME and MONGO_DB_PASSWORD and uses them to connect to the database. That's the entire logic.
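For orientation, the document we created above looks roughly like this; the field names are the ones used in this demo, and the _id value is generated by MongoDB:

```json
{
  "_id": { "$oid": "..." },
  "myid": 1,
  "data": "some dynamic data loaded from db"
}
```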
Our goal now is to take this application and build a Docker container out of it using the Dockerfile blueprint, which is also very simple; you learned Dockerfiles in the Docker crash course, so all of this should be familiar. Because it's a Node.js application, we use node as the base image, copy the code from the app folder into the image, run npm install to download the dependencies inside the image, and then start the application with the node command and the server.js file. That starts the app on port 3000 and logs a startup message as its first log line.

So we want to build our custom JavaScript application as a container and run it with Docker Compose along with the mongodb and mongo-express services. That's the goal; how do we do it? First, we copy the Compose file we created into this application code. And here's another interesting point to highlight: the Compose file is part of the application code. Developers work on it just like they work on the Dockerfile and other parts of the code, which is best practice: all of this logic lives together in one repository, in a central place, instead of scripts and commands spread across the laptops of different developers. So let's do a simple copy of mongo-services.yaml from the projects folder into this folder.

As I said, we want to add our application as a third service, but we don't have the image yet, so we need to build it too. In a Compose file you can actually define both in one configuration: build the image, then start the container. So I'll add a service for our Node.js application and call it my-app (great with names, as ever). Instead of image, we simply provide a build attribute with the build context, the current directory: this points to where the Dockerfile is located and serves as the entire build context for the image. The rest of the configuration looks the same as the other services: we have ports (in our case the application starts on port 3000 inside the container, and we bind it to 3000 on our host), and the environment variables we saw, which we set so the application can connect to the database with those credentials. That's the entire configuration: it builds our Node.js application image using the Dockerfile as the blueprint, starts it as a container on this port, and passes in the environment variables our application reads. A sketch follows below.

Note that we don't configure depends_on here, because the application doesn't actually connect to the database when it starts up; there's no connection in the startup logic. It only connects to the database when we load the application in the browser and that fetch script gets executed, so we don't need depends_on. Now let's go back to the terminal.
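A sketch of the two pieces described above. First, a hypothetical reconstruction of the Dockerfile; the exact base image tag and paths in the real repository may differ:

```dockerfile
FROM node
COPY app /home/app         # copy the application code into the image
WORKDIR /home/app
RUN npm install            # install dependencies inside the image
CMD ["node", "server.js"]  # start the app; it listens on port 3000
```

And the additional Compose service, where the environment variable names are approximations of the ones the demo app reads:

```yaml
  my-app:
    build: .               # build from the Dockerfile in the current directory
    ports:
      - 3000:3000
    environment:
      - MONGO_DB_USERNAME=admin
      - MONGO_DB_PASSWORD=supersecret
```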
First, let's see whether we have containers running, and bring up our command to stop them. We're not going to remove them, because we need the database, the collection, and the data inside for our application. Now I'll go into the docker-compose-crash-course folder, where we have the new Compose file, and run docker compose up. As you can see, it actually builds the new Docker image from the node base image, and that was pretty fast. Now we should have all three containers running; let's check.

We have really bad names for our containers, because the folder name is a long, descriptive one that was used as the prefix, but that's fine. More importantly, these were created from scratch: our previous containers with the projects prefix are not running; new ones were created instead. That brings me to another concept: you can override the value used as the prefix. Maybe you want to reuse the same containers, but you've moved the Compose file to another location. So let's remove the containers we just started, leaving only the two old ones, and override the prefix using a flag on docker compose: -p, or --project-name. Essentially, the folder name is assumed to be the project name, and we can override that value with this flag and call it projects, which was the previous one. (See the sketch below.) Let's start the containers and run docker ps: as you can see, our old instances of mongodb and mongo-express were restarted instead of new ones being created, and the projects_default network was already there. That means if I refresh, we still have our mydb, mycollection, and the data inside for our application.

Which means if I visit the application on localhost:3000, we should see our awesome application. Let me refresh once more with the network tab open: this was the fetch-data request we execute in the frontend, which returns the object we created in the database back to the frontend. In the response preview we see the object with myid 1, the generated ID from the database, and the data, "some dynamic data loaded from db", which we use to set the second line on the page. And if I go into the database, change that value, save, and refresh again, you see we now get the updated data. So the entire connection works: our application is connected to the database and displays its information.
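The project-name override described above, as a sketch using the demo's file name:

```sh
# default project name = folder name; override it to reuse the old containers
docker compose -f mongo-services.yaml -p projects up -d
docker ps   # the projects-* containers are reused, not recreated
```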
I mentioned that the Compose file is part of the application code, which means it gets committed and stored in a Git repository so everyone can work on it; it's available to the entire team. If a new engineer joins and downloads the code, they have the Compose file, so they know exactly which services run as part of the application and can easily start it locally. However, that also means it's really bad that we're hardcoding our secret credentials in the Compose file. Security best practice says you shouldn't hardcode any sensitive data in application code, so it doesn't end up in the Git repository; even if you accidentally check it in and remove it later, it remains in the commit history. So there should be no hardcoded values here.

How do we solve this, given that the services still need those credentials? We can use variables, or placeholders, in the Compose file instead of the actual values, and set the values as environment variables on the operating system. Let's see how that works. First we remove all the hardcoded values, and instead define variables in the Compose file, which use the syntax of a dollar sign followed by curly braces; inside, you can name the variable whatever you want. I'll call this one ADMIN_USER, because it's the admin user in MongoDB; it could be lowercase, you really can call it what you want, but I'm using the standard all-uppercase environment variable naming convention. And ADMIN_PASS for the password. We reuse them everywhere, which is another advantage of variables: if you change the password value, you only have to set it once and it gets updated everywhere. Now this Compose file has no hardcoded sensitive data and is safe to check into the Git repository.

However, we still need to set the actual values. Back in the terminal, I'll first stop the containers to test that everything works; on that first docker compose execution we get a warning that the variables are not set, but the containers are stopped anyway. Then we set the variables in our terminal session with export: ADMIN_USER, and the other one the same way. And note that, just as with the up command, we need to specify which containers we're stopping: by default it looks for containers that start with the folder name, so we need the -p projects override again. There you go; I'll bookmark this one as well. If we refresh, the pages no longer work, because the containers are stopped. Then let's start them again with those environment variables set, and everything works the same as before. A sketch follows below.

I also want to mention that Docker Compose has a concept of secrets, another piece of functionality for managing sensitive data, especially when you're running Compose in a production environment, which is exactly this use case of passing credentials or other sensitive data to the services defined in the file. You can use Docker Compose secrets as an alternative to this method.
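The variable substitution described above, as a sketch with the variable names chosen in this demo:

```yaml
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${ADMIN_USER}
      - MONGO_INITDB_ROOT_PASSWORD=${ADMIN_PASS}
```

```sh
# set the values in the shell session before running compose
export ADMIN_USER=admin
export ADMIN_PASS=supersecret
docker compose -f mongo-services.yaml -p projects up -d
```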
Awesome: we've just learned the fundamentals of Docker Compose, and more importantly, you understand its core use case. By the way, I want to highlight the importance of learning tools like Docker and Docker Compose, and cloud and DevOps technologies generally. Nowadays it's becoming more and more necessary for software developers to learn these tools to be more valuable in their roles, especially in the current tense job market, with layoffs and reduced hiring. As more and more companies adopt DevOps, it's a great way to stand out among developers who focus only on programming and aren't interested in learning the new concepts and tools being adopted in the industry. So I think it's more important than ever to keep educating yourself, and with DevOps or cloud engineering skills you'll be ahead of most developers. In fact, most of our DevOps bootcamp students are software developers or software engineers, since many companies don't have a separate DevOps engineer role; the responsibility often falls on senior developers to set up DevOps processes such as release pipelines. So even if you're a junior software engineer, learning DevOps and cloud skills and technologies will absolutely accelerate your path to senior engineer. If you want a complete education on DevOps so you can take over DevOps tasks at work, definitely check out our DevOps bootcamp: you'll learn various technologies from zero to an advanced level to build real-world DevOps processes at your job, and you can contact us there with your questions. Check out the information below, and let's move on to the next part.

So far we're building and running our JavaScript application as a container along with the MongoDB service and the MongoDB UI, but that's usually a testing setup. As you learned in the Docker crash course, eventually you want your custom application container, or rather its image, to be stored centrally in a Docker registry, so you can deploy it as a container on an actual deployment server where end users will access it. So we need to build the image and push it to a repository, such as a Docker Hub repository or whatever other Docker registry you want. Now the interesting question is: after we push the image to a private Docker repository, how do we reference our custom image from that private repository in the Compose file? Note that when we run this on an actual deployment server, we copy the Compose file onto that server, where Docker and Docker Compose are installed, and executing it goes through all the services, pulls all the images defined there, official images from Docker Hub's public registry and any custom images from private repositories, and runs them as containers with all this configuration.

Let's see how it works; it's actually very easy, and building an image and pushing it to a repository should be a refresher from the Docker course. First, I'll log in to my Docker Hub account and create a new private repository; let's call it my-app, since that's generic and I may reuse it for other demonstrations. Now we build our image using the Dockerfile and push it to this specific private repository. I'll build with docker build and tag the image: we need to tag it with the name of the repository, so that on the push command Docker knows which repository to push the image to; the image name itself includes the repository name. We'll tag it simply 1.0, and we provide the build context, the current directory where the Dockerfile is located. Let's execute: super fast. Listing all the local images, there's our image with its tag, and that's exactly the image we want to push to the private repository. A sketch is below.
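A sketch of the build step; <your-dockerhub-user> is a placeholder for your own Docker Hub username:

```sh
# the image name includes the repository, so docker push knows where to send it
docker build -t <your-dockerhub-user>/my-app:1.0 .
docker images   # verify the freshly built image and its tag
```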
As you know, to push to or pull from a private repository we need to be logged in to it, so we run docker login with our Docker Hub username and password. This is different for other Docker registries; with ECR or some other registry the process may differ, but with Docker Hub it's very simple, which is why I mostly use it for demos. Now that we're logged in to Docker Hub, I can push the image we just built with a simple docker push and the image's full name with the tag. There you go: if I refresh Docker Hub, we see one tag, 1.0, in our my-app image repository. Perfect. (A sketch of these commands follows below.)

One thing I want to show here: whenever you build and push an image, you run basically the same commands every time for that action: you build the image, you log in if you're not logged in already, you push the image, and so on. For these kinds of use cases I've been using a Warp feature called Workflows, which can be really helpful: if you have a set of commands you always need for the same type of workflow, you can group them into one unit called a workflow, save it, and whenever you need it, execute all those commands with one click, which I personally find super cool. In Warp Drive you create a new workflow; I'll call it Docker push and list the commands. First, docker login with the username, and obviously we don't want to hardcode a password here, so we pass it as an argument and read it from standard input. Again, this is a refresher from the Docker course: that's how you should use docker login, so you don't type the password directly. With Warp you can use arguments like this: whenever you run the workflow, it prompts you to fill them in before it runs. Then, after docker login, we add docker build as before, with the image tag as a second argument; you can also set default values, so let's default it to 1.0. Then the push. This way you don't have to type out all the commands for the same workflow. Let me check all the commands: we need the build context at the end, and the rest looks good, so let's save the workflow.

The way it works: I now have the workflow right here, and whenever I need it, I just click it. It fills the terminal with all the commands and highlights the arguments I need to set, so I'll put in my password as the argument and choose 1.1 as the second argument, then execute all the commands. This pushed another tag, 1.1: a cool Warp feature to make your life a little more convenient. That means we now have our image with two different tags in a private repository.
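The login-and-push sequence described above, as a sketch; the password file path is hypothetical, and reading the password from standard input keeps it out of your shell history:

```sh
# log in to Docker Hub, supplying the password on stdin
cat ~/my_password.txt | docker login --username <your-dockerhub-user> --password-stdin

# push the tagged image to the private repository
docker push <your-dockerhub-user>/my-app:1.0
```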
Back in the Compose file: we no longer need to build locally. Local builds are convenient for testing, because as a software developer testing on a local environment you may want to make quick changes to the Dockerfile or the application and test them quickly, without building, pushing, and pulling images all the time. But on an end environment you obviously want to define an image that comes from a repository, perhaps one you've already scanned and verified as secure and properly configured. So how do we reference our custom image from a private repository in Compose? Very simply: just like any other image from Docker Hub or any other registry, with a specific available tag; let's use 1.0.

And how will Docker Compose be able to pull that image from a private repository, that is, authenticate with it? It uses the same docker login we used for docker push. After a successful login and authentication with Docker Hub, Docker creates a local config.json file holding the authentication credentials or tokens, and since Docker Compose uses Docker in the background to run the containers, pulling or pushing images from Compose follows the exact same process. That means if you've already done docker login against a repository, you can pull any image defined in Compose from that repository. So this configuration should work. To test it, let's stop our containers, and I'll remove the application container to simulate it being recreated from the newly pulled image. Now, running up again in detached mode: as you see, the my-app image was pulled and the my-app container was started from it. Checking the running containers shows that this is indeed the image used to create the container, and we can confirm our application still works. That's how you reference a custom image from a Docker repository in a Compose file. (See the sketch below.)

One more Warp tip: say you're executing docker build and you forget one of the arguments or use a wrong flag. First of all you get an error, but Warp also has a troubleshooting feature within the terminal that gives you pretty good tips and notes about what the error actually is. Sometimes we make spelling mistakes, sometimes we forget an argument, so it helps when a tool tells you the actual problem so you can fix it. Warp has an AI assistant, which you can see on every command block next to the bookmark icon. For this specific error, clicking it auto-generates the question, because it knows the block is an error, and you can modify the question or ask how to fix it. If I hit enter, it answers that the docker build command requires an argument, the path to the Dockerfile, and gives a corrected example: change into the directory containing the Dockerfile and execute the command with the dot at the end. I found this feature pretty cool too, which means if any command gives you an error while you're following this demo, you can use it to find out what the error is about and, ideally, how to fix it.
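The service definition switched from a local build to the pushed image, as a sketch, assuming the variables introduced earlier are reused for the credentials:

```yaml
  my-app:
    image: <your-dockerhub-user>/my-app:1.0   # pulled from the private repository
    ports:
      - 3000:3000
    environment:
      - MONGO_DB_USERNAME=${ADMIN_USER}
      - MONGO_DB_PASSWORD=${ADMIN_PASS}
```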
This is going to help you troubleshoot your issues. Finally, I want to add a few very interesting and important concepts regarding Docker Compose and what the next steps are, so this final small section may be really interesting for you.

As you've seen, the main use case of Docker Compose is to have a central place to manage containers that are supposed to run together, like your application and all the services it depends on. We configure all the environment variables and other configuration for those services in that one file, start them in one isolated Docker network, and it makes it super easy to clean up all the resources. Engineers took Docker and containerized their applications at a whole new scale that wasn't standard before, and Docker was especially well suited as a host for microservices applications, where even more applications and containers run in one environment. If you don't know about microservices, I have a separate video about them, but essentially it's when all the services needed to run one application are split into separate micro applications, or services, that can be scaled and run independently as independent containers. So we ended up with lots of microservices applications with hundreds, thousands, or tens of thousands of containers, which is pretty much the standard nowadays.

Such scale led to Docker Compose not being able to handle that many containers, and more importantly, engineers still have to manually manage running and operating those containers with Compose: if containers die, crash, or have connectivity issues, you have to detect the problem, debug it, and restart the services yourself. Docker Compose has made some improvements here, with attributes like restart (sketched below), but it's still a lot of operational effort to run containers at a scale of thousands. That's where Kubernetes came into the picture, to solve exactly these two main issues. First, scaling to thousands or tens of thousands of containers: with Kubernetes you can essentially merge hundreds of servers into one huge logical server and deploy all the containers belonging to an application in that environment, as if they were running on the same machine, which naturally makes it easier to scale your applications to thousands of instances. Second, automating operations, or making application operations easier: Kubernetes' self-healing feature manages starting and restarting containers when they crash, with mechanisms to operate large numbers of containers in an automated way when manual effort simply isn't feasible anymore. That's what made Kubernetes so popular. So Docker Compose is a kind of intermediate step: great if you have a smaller set of containers, but by today's standards, for very complex applications at large scale, Docker Compose has its limits, and that's where Kubernetes comes in.
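The restart attribute mentioned above is a standard per-service Compose option; a minimal sketch:

```yaml
  mongodb:
    image: mongo
    restart: unless-stopped   # restart automatically if the container crashes
```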
So if you're learning containerization and container orchestration concepts, I recommend this roadmap: learn Docker with the Docker crash course, then learn Docker Compose with this course, as you just did, and then move on to Kubernetes. If you want to learn Kubernetes, I conveniently also have a Kubernetes crash course to get you started very easily; you can check out any of the many videos on my YouTube channel, but I'd recommend starting with the Kubernetes crash course.

I hope you learned a lot of new concepts; you've obviously learned Docker Compose as a new tool and technology. Thank you for watching to the end! Let me know in the comments how this video helped you in your work or maybe in your job application; I'm always happy to read that feedback from viewers and to know my videos are helpful in real job environments. You can also share any other tips and learnings about Docker Compose from your practical experience, so other viewers can read and benefit from them too. With that, thank you for watching, and see you in the next video.
Info
Channel: TechWorld with Nana
Views: 135,805
Keywords: docker compose, docker compose tutorial, what is docker compose, docker compose yml, docker compose network, docker compose vs kubernetes, docker compose file, why docker compose, why use docker compose, docker compose course, docker compose crash course, techworld with nana, docker, docker compose syntax, docker compose environment variables, docker compose secrets, how does docker compose work
Id: SXwC9fSwct8
Length: 63min 14sec (3794 seconds)
Published: Thu Jan 11 2024