[Backend #25] How to write a docker-compose file and control service start-up order with wait-for.sh

Captions
Hello everyone! Welcome back to the backend master class! In the last lecture, we learned how to use a docker network to connect 2 stand-alone docker containers by name. Today I'm gonna show you how to use docker compose to automatically set up all services in the same docker network and launch them all at once with a single command. OK, let's start!

First, I'm gonna create a new file: docker-compose.yaml at the root of our project. There are a lot of things you can configure with a docker compose file. You can read all about them in the documentation: go to docs.docker.com, open Reference, then Compose file reference, version 3. On the right-hand side menu, you can see a list of the different syntaxes. In this video, I will show you some of the most important ones that we usually use.

Here in the center of the page, we can see an example compose file. Basically, you define the docker compose version, then a list of services that you want to launch together. In this example, there's a redis service, a postgres DB service, and some other web services for voting and computing the result.

Now I'm just gonna copy the first 2 lines of this example and paste them into our docker-compose.yaml file. Note that a yaml file is very sensitive to indentation. At the moment, the file is using 4-space indentation, but I want to use 2 spaces instead. So I'm gonna click on this button, select "Indent Using Spaces", and choose 2 as the tab size. OK, now if we press enter, we can see that it is nicely indented with 2 spaces.

Next, we have to declare the list of services we want to launch. The first service should be the postgres db. For this service, we're gonna use a prebuilt docker image, hence the image keyword, followed by the name and tag of the image, which is postgres:12-alpine in this case. Now we will use the environment keyword to specify some environment variables for the username, password, and db name, just like what we did in the github CI workflow.
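The postgres service described above might look like the following sketch. The credential values here are illustrative placeholders, not the exact ones used in the video:

```yaml
version: "3.9"
services:
  postgres:
    image: postgres:12-alpine
    environment:
      # placeholder credentials -- use the same values as your CI workflow
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=simple_bank
```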
First, let's copy the POSTGRES_USER variable. The syntax is a bit different here: we have to use the equals operator to assign the value. Similarly, I'm gonna copy the POSTGRES_PASSWORD variable, and finally the POSTGRES_DB variable.

Alright, next step: we will declare the api service, which will serve all API requests to our simple bank. For this service, we should build its image from the golang source. So, under the build keyword, we specify the context to build the image: this dot means the current root folder. Then we use the dockerfile keyword to tell docker compose where to find the docker file to build the image. In this case, it's just the Dockerfile at the root of the project. Next, we should publish port 8080 to the host machine so that we can call the simple bank API from outside of the container.

One of the most important things we must do is tell the api service how to connect to the postgres service. In order to do that, we will set 1 environment variable: DB_SOURCE. As we've seen in the previous lecture, setting this environment variable will override the value we declared in the app.env file. And since all services in this docker-compose file will run on the same network, they can communicate with each other by name. Therefore, here in this URL, instead of localhost, we will use the name of the postgres service.

And that's basically it! The docker-compose file is complete. Let's try to run it! If you have the latest docker CLI on your machine, all you have to do is run docker compose up. Then docker compose will automatically search for the docker-compose.yaml file in the current folder and run it for you. As you can see here, before running the services, it has to build the docker image for our simple bank API service first. Then, after the image is successfully built, docker compose will start both the postgres and the api service at once.
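Based on the steps above, the api service section might look like this sketch (the credentials inside the DB_SOURCE URL are illustrative placeholders that should match the postgres service's environment):

```yaml
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      # "postgres" is the service name, used as the hostname on the shared network
      - DB_SOURCE=postgresql://root:secret@postgres:5432/simple_bank?sslmode=disable
```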
In the logs, we can see which service each line comes from by looking at the prefix of the line: it is either postgres_1 or api_1 in this case. Now if, in another terminal tab, we run docker images, we can see a new simplebank_api image. Basically, the image name is prefixed with simplebank, which is the name of the folder containing our docker file, and its suffix is the name of the service itself, which is api.

Now let's run docker ps to see all running services. There are 2 services: simplebank_postgres_1 and simplebank_api_1. Both of them have the simplebank prefix followed by the name of the service. Now if we go back to the docker compose tab and scroll up to the top, we can see clearly that a new network, simplebank_default, is created before the 2 service containers are created. And if we inspect the simplebank_default network, we can see that the 2 service containers are actually running on this same network. That's why they can discover each other by name.

Alright, now it's time to send some real API requests to see if the service is working well or not. As the new database is completely empty, I'm just gonna send the create user API. Oops, we've got a 500 internal server error, and the reason is: relation "users" does not exist. Do you know why? Well, that's because we haven't run the db migration to create the db schema yet.

To fix this, we have to update our docker image to run the db migration before starting the API server. It will be done in a similar fashion to what we've done in the github CI workflow: we will have to download the golang-migrate binary into the docker image and use it to run the migrations. So I'm gonna copy these 2 instructions and paste them into the Dockerfile, in the builder stage. First, we have to run this curl command to download and extract the migrate binary. Then I'm gonna move the second command to the run stage, where we will copy the downloaded migrate binary from the builder to the final image.
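A sketch of those Dockerfile changes, assuming a two-stage build with a golang-alpine builder. The exact migrate version, download URL, and the binary name inside the tarball are assumptions and may differ for your release:

```dockerfile
# ---- builder stage ----
FROM golang:1.16-alpine AS builder
WORKDIR /app
COPY . .
RUN go build -o main main.go
# curl is not preinstalled in the alpine base image
RUN apk add curl
# download and extract the golang-migrate binary (version is illustrative)
RUN curl -L https://github.com/golang-migrate/migrate/releases/download/v4.14.1/migrate.linux-amd64.tar.gz | tar xvz

# ---- run stage ----
FROM alpine:3.13
WORKDIR /app
COPY --from=builder /app/main .
# the file is extracted into /app (the builder's WORKDIR)
COPY --from=builder /app/migrate.linux-amd64 ./migrate
```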
We have to change the path of the original migrate file to /app, because that's the working directory in the builder stage when we download and extract the file. Then I'm gonna put this file in the same WORKDIR folder in the final run stage image, which is also /app.

Next, we also have to copy all migration SQL files from the db/migration folder into the image. So: COPY db/migration. I'm gonna put them in the migration folder under the current working directory. Now, one thing we must do is install curl in the builder stage image, because by default, the base alpine image doesn't have curl preinstalled. To do that, we just have to add a RUN apk add curl instruction here.

Finally, we have to change the way we start the app, so that it can run the db migration before running the main binary. I'm gonna create a new file, start.sh, at the root of our project. Then let's chmod this file to make it executable. This file will be run by /bin/sh, because we're using an alpine image, so the bash shell is not available. We use the set -e instruction to make sure that the script will exit immediately if a command returns a non-zero status.

First step: we will run the db migration. So we call the /app/migrate binary, pass in the path to the folder containing all migration SQL files, which is /app/migration, then the database URL, which we will take from the DB_SOURCE environment variable. So here, I just use $DB_SOURCE to get its value. Let's also use the -verbose option to print out all details when the migration is run. Finally, the up argument is used to run all up migrations.

After running migrate up, we will start the app. All we have to do in this step is call exec "$@". It basically means: take all parameters passed to the script and run them. In this case, we expect it to be /app/main, as specified in the CMD instruction. To make it work, we will use the ENTRYPOINT instruction and specify the /app/start.sh file as the main entry point of the docker image.
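Putting those steps together, start.sh might look like this sketch (it assumes the migrate binary and migration SQL files sit at the paths built into the image above):

```sh
#!/bin/sh
# exit immediately if any command returns a non-zero status
set -e

echo "run db migration"
/app/migrate -path /app/migration -database "$DB_SOURCE" -verbose up

echo "start the app"
# replace this shell with whatever command was passed to the script,
# e.g. /app/main coming from the Dockerfile's CMD instruction
exec "$@"
```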
Keep in mind that when the CMD instruction is used together with ENTRYPOINT, it acts as just the additional parameters that get passed into the entrypoint script. So basically, it is similar to running "/app/start.sh" with "/app/main" as its argument. But by separating the command from the entrypoint, we have more flexibility to replace it with another command at run time whenever we want. You can read more about this in the docker documentation page, in the CMD instruction section.

OK, now the docker file is updated. Let's try to run docker compose again. But first, we need to run docker compose down to remove all existing containers and networks. And we should also remove the simplebank_api image, because we want to rebuild the image with the new db migration scripts. OK, everything is cleaned up. Let's run docker compose up again! It will take a while to rebuild the image.

Oops, we've got an error: start.sh: no such file or directory. Oh, that's because I forgot to copy the start.sh file into the docker image. So let's do that now! Here in the run stage, we just need to add 1 more instruction: COPY start.sh . And that's it! We should be good now. Let's run docker compose down to remove all containers, run docker rmi to remove the old simplebank_api image, and finally run docker compose up to rebuild and relaunch everything!

This time, it looks like the db migration script was run, however it was still not successful. The service exited with code 1, and in the log, we can see the error: connection refused. The app cannot connect to the database to run the db migration. That's because it takes some time for the postgres server to be ready, but the app tried to run the db migration immediately when it started. At that time, the postgres server was not ready to accept connections yet. So to fix this, we have to tell the app to wait for postgres to be ready before trying to run the db migration script and start the API server.
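To see how CMD arguments flow into an entrypoint script, here is a tiny stand-alone demo you can run in any shell. The script path and echo messages are purely illustrative:

```shell
# write a hypothetical entrypoint script that execs its arguments,
# mimicking how docker appends CMD to ENTRYPOINT
cat > /tmp/entrypoint_demo.sh <<'EOF'
#!/bin/sh
set -e
echo "entrypoint: running setup first"
# replace this shell process with the command given as arguments
exec "$@"
EOF
chmod +x /tmp/entrypoint_demo.sh

# simulate ENTRYPOINT ["/tmp/entrypoint_demo.sh"] plus CMD ["echo", "main app runs"]
/tmp/entrypoint_demo.sh echo "main app runs"
```

Because the entrypoint ends with exec "$@", the final command fully replaces the shell process, just as /app/main replaces start.sh inside the container.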
Now, in the docker-compose.yaml file, we can use the depends_on instruction to tell docker compose that the api service depends on the postgres service. But this only makes sure that postgres will be started before the api service; it doesn't ensure that postgres is in a ready state before the api service starts. You can read more about this behavior in the docker compose documentation page: just search for depends_on, and you will see that they list several things to be aware of when using the depends_on instruction. If we want to wait for a service to be ready, we should follow this link to learn how to control the start-up order.

There are several tools that have been written to solve this problem, but the one we should use in our case is the sh-compatible wait-for script, because we're using an alpine-based image. OK, so here is its github page. Wait-for is designed to synchronize services like docker containers, and its usage is pretty simple: we just need to run wait-for and pass in the host:port URL we want to wait for. Here's an example: it's basically waiting for the web page eficode.com, port 80, to be ready before executing the echo statement.

So what I'm gonna do is open its latest release version and click on this link to download the script. The file will be downloaded to my Downloads folder, so I'm gonna open the terminal and copy that file to the root of our simple bank project. I will rename it to wait-for.sh to make the file type clearer. Note that this file is not executable yet, so we should run chmod +x to make it executable.

Alright, now back to our Dockerfile. I'm gonna add 1 more instruction to copy the wait-for.sh file into the final docker image. Then, in the docker-compose file, we should override the default entrypoint and command, so that it will wait for the postgres service to be ready before trying to start the api service.
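The wait-for usage pattern looks roughly like this, mirroring the eficode.com example shown in the video (it assumes the script has already been downloaded and made executable):

```sh
# wait until www.eficode.com answers on port 80, then run the echo command
./wait-for.sh www.eficode.com:80 -- echo "Eficode site is up"
```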
Let's add entrypoint here. Its content should be: /app/wait-for.sh, then the host and port to wait for, which is postgres:5432, followed by 2 dashes, then finally the start.sh script to run after the postgres server is ready.

Now, you should know that when the entrypoint is overridden like this, it also clears out any default command on the image, meaning that the CMD instruction we wrote in the Dockerfile will be ignored. So in this case, we have to explicitly specify the command we want to run in the docker-compose.yaml file as well. We use the command keyword for that purpose, and the command we want to run is simply /app/main.

And that will be it! I think now we have everything we need for the app to launch successfully. Let's try it one more time! First, let's run docker compose down to remove all existing containers, run docker rmi to remove the simplebank_api image, and finally run docker compose up to rebuild and relaunch the services.

This time, you can see that the services are started in order. First, the postgres service runs, and only when it is ready to listen for connections does the api service start running the db migration. Therefore, the migrations were run successfully: first the init_schema migration, then the add_users migration. Finally, the API server was started successfully after the db migrations. We can see the GIN logs saying that the server is listening and serving HTTP requests on port 8080.

So let's open postman to try it! I'm gonna send the same create user request as before. But this time, the request is successful. A new user is created! Awesome!

And with this, I'm gonna conclude today's lecture about docker compose. Now you know how to use this powerful tool to launch multiple services at once, with a specific start-up order. Thanks a lot for watching, happy learning, and see you in the next lecture!
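For reference, the entrypoint and command overrides described in this lecture might look like this sketch in docker-compose.yaml (depends_on is included as discussed above; it only controls start order, while wait-for handles readiness):

```yaml
  api:
    # ...build, ports, and environment as configured earlier...
    depends_on:
      - postgres
    # overriding entrypoint clears the image's default CMD,
    # so the command must be specified explicitly as well
    entrypoint: ["/app/wait-for.sh", "postgres:5432", "--", "/app/start.sh"]
    command: ["/app/main"]
```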
Info
Channel: TECH SCHOOL
Views: 2,314
Keywords: docker compose, docker compose file, docker compose wait for, docker compose wait for database, docker tutorial, docker, backend master class, backend course, backend development, backend tutorial, coding tutorial, programming tutorial, tech school
Id: jf6sQsz0M1M
Length: 16min 8sec (968 seconds)
Published: Mon Apr 26 2021