In this video, we’re going to learn the
differences between the ENTRYPOINT and CMD instructions in Dockerfiles. Then
we’re going to take a look at a few practical examples of the most common use cases for
one or the other, so let’s get started! By the way, if you don’t know how to create
Dockerfiles or you just need a refresher, check out my step-by-step Dockerfile tutorial!
I’ll link it up here and in the video description. In Dockerfiles, both the ENTRYPOINT and CMD
instructions are used to specify the command the container runs when it starts up. So
whether we end our Dockerfile with CMD somecommand or ENTRYPOINT somecommand, it
produces the exact same result. But what’s the difference then?
The first thing to know about these two instructions is that they can both be used
in the same Dockerfile. If you put multiple CMD instructions or multiple ENTRYPOINT instructions, only the
last one takes effect, but if we use a CMD in combination with an ENTRYPOINT, they will both
be taken into account when running the container. Let’s learn what happens with an example.
If we have a Dockerfile with CMD command1, when we run the image with “docker run
myimage”, Docker will create a new container and immediately execute command1. We also know
that we can change this behaviour by specifying a different command after the image name,
which overrides the CMD instruction in the Dockerfile: i.e. “docker run myimage cmd2” will
start a new container that runs cmd2 instead.
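As a rough sketch, with command1 and cmd2 standing in for real programs and myimage for the image we built, that setup would look something like this:

    # Dockerfile (the base image here is an arbitrary choice for the sketch)
    FROM debian:stable-slim
    CMD ["command1"]

    # docker run myimage        -> the container runs command1
    # docker run myimage cmd2   -> CMD is overridden, the container runs cmd2

The JSON-array syntax is the exec form, which runs the command directly instead of wrapping it in a shell.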
When we introduce an ENTRYPOINT, Docker will behave a little differently. It will first look at the command defined by the
ENTRYPOINT and then use whatever we have in CMD as additional options for that command. So a “docker
run” in this case will result in the execution of “entrypoint_1 parameter_1 parameter_2”. Notice
how these are not two commands chained together! When both are provided, the ENTRYPOINT is
the command and whatever is specified as CMD is interpreted as additional options for it.
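Here is a minimal sketch of such a Dockerfile, again with placeholder names:

    FROM debian:stable-slim
    ENTRYPOINT ["entrypoint_1"]
    CMD ["parameter_1", "parameter_2"]

    # docker run myimage   -> the container runs: entrypoint_1 parameter_1 parameter_2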
What’s special about the entrypoint is that additional arguments passed at the end
of a “docker run” do not change the container’s ENTRYPOINT. If we execute “docker
run myimage cmd2”, the new container will start with the command “entrypoint_1 cmd2”.
The entrypoint is transparent to the user and can only be overridden by explicitly passing
the --entrypoint argument to “docker run”.
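In other words, assuming the same placeholder image, the two kinds of override look roughly like this (/bin/sh is just an arbitrary example of a replacement entrypoint):

    # extra arguments only replace CMD, not the entrypoint
    docker run myimage cmd2                    # runs: entrypoint_1 cmd2

    # only --entrypoint replaces the entrypoint itself
    docker run --entrypoint /bin/sh myimage    # runs /bin/sh instead of entrypoint_1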
Defining an entrypoint for your Dockerfiles can be a double-edged sword. From the developer’s
perspective, it is a powerful way to make sure a command runs transparently at container startup;
on the other hand, you can really confuse users if you hide too much logic there.
So when should you use CMD and when ENTRYPOINT? The short answer is that you should
always use CMD in your Dockerfiles, unless your container has to execute a certain
command every time; in that case, you can introduce an ENTRYPOINT. Using CMD is the easiest
and most flexible solution, because it allows users to override the entire startup command just by
appending it to the “docker run” command. For the long answer, I want to show you two
real-world scenarios where the ENTRYPOINT is necessary.
We’ll see how to package a command-line utility in a Docker container,
and how to create a wrapper script that forces the container to run as a non-root user.
I recently needed to run DNS queries to
investigate a network issue on my Mac. I know how to do that in Linux with the dig command
but I didn’t know if it was available for macOS, so I decided to package “dig” in a container.
To do so, I can create a very simple Dockerfile: let’s use Debian as the base image and
then install the dnsutils package with apt; dnsutils includes the “dig” utility.
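The exact file from the video isn’t reproduced here, but a Dockerfile along those lines would look roughly like this (the Debian tag is just an example):

    FROM debian:stable-slim
    # dnsutils is the Debian package that ships the dig utility
    RUN apt-get update && apt-get install -y dnsutils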
I will build the image and tag it as dnsutils, so I can run a DNS query on google.com
with “docker run dnsutils dig google.com”. Even though this works just fine, I
don’t like writing long commands, so I can use the entrypoint to my advantage and
make “dig” transparent to the user. I just need to add an ENTRYPOINT at the end of the Dockerfile
and tell it to run dig as my container command.
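That amounts to appending a single line to the Dockerfile, something like:

    # run dig by default; anything typed after the image name becomes dig's arguments
    ENTRYPOINT ["dig"]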
I’ll build the image again, this time using “dig” as the image name, so now I can run
“docker run dig” and still specify command-line arguments. Let me try a few: I can type
“docker run dig -h” to show the help page, or do a DNS lookup with “docker run dig docker.com”.
If you really can’t stand typing “docker run” every time, you can even create a
command-line alias in your .bashrc file: add alias dig=“docker run dig”, and your shell
will expand that for you! Isn’t that great?
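For reference, the alias line would go into your .bashrc (or the config file of whichever shell you use) like this:

    # make a plain "dig" on the command line run the container instead
    alias dig='docker run dig'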
Let’s take a look at the second, more complex use case now. By the way, if you’d like to
follow along with the tutorial and try this out for yourself, you can pause the video and
download these examples; I have linked the repository page in the video description.
Running containers as the superuser can
leave your services vulnerable to attacks, but there are cases where you need root
privileges at runtime. Let’s say, for instance, we have a containerised application that produces
data, like a database, and stores it in a volume. In this case, we want to make sure that when
we mount the data volume, the non-root user in the container can read and modify all files
in that directory before starting the service. In this example, I have created a simple
bash script that generates 5 unique IDs and appends them to a file in the data volume.
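The actual generator.sh is in the linked repository; a rough sketch of what it does, with the file name and mount path assumed from the rest of the example, would be:

    #!/bin/bash
    # generator.sh - append 5 unique IDs to a file in the mounted data volume
    for i in 1 2 3 4 5; do
        uuidgen >> /data/keys.txt
    done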
To follow best practices, this container runs the generator.sh script as appuser instead of
root. This can be an issue if something else tampers with file ownership in the volume and
appuser can no longer modify the keys.txt file. To demonstrate, I can simply run the container
as root once. I’ll create a new empty volume with “docker volume create keysvolume” and run the
container as root the first time: “docker run -v” to mount keysvolume into the /data directory,
then the root user, and finally the image name.
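Spelled out, and assuming the image is tagged uuidgen as it is later in the video, those commands look roughly like this:

    docker volume create keysvolume

    # first run, forced to root: root creates the keys file in the volume
    docker run -v keysvolume:/data -u root uuidgen

    # second run as the image's default non-root user: appuser cannot
    # write to the root-owned file, so this one fails
    docker run -v keysvolume:/data uuidgen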
If I now run this a second time as non-root, the user won’t have permissions to write the file.
To solve this issue, we can
create a wrapper script, so that if we run the container as
root, it will fix file permissions, and then lower privileges to run
the container command as appuser. Let’s create a new file called entrypoint.sh
in the same directory as our Dockerfile and add the following script: exec, and then $@
in double quotes. $@ is shell notation for all of the command-line parameters, and we know
that whatever is defined as CMD in the Dockerfile translates into additional parameters
for the entrypoint. Let’s also echo the command we’re executing, so we can see what is happening.
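At this stage the script is tiny; a sketch of entrypoint.sh (the exact echo wording is my own) looks like this:

    #!/bin/bash
    # show the command we are about to run
    echo "Running command: $@"
    # replace this shell with the container command (ENTRYPOINT + CMD arguments)
    exec "$@"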
We save this file for now and edit the Dockerfile, where we can add an ENTRYPOINT to run this new script. If we now build the uuidgen
container and run it with “docker run uuidgen ls -l /data”, we can see our
echo instruction from the entrypoint telling us we’re going to list the files in the
data directory; and clearly the permissions on the keys file are wrong, because we need
appuser to be the owner of the files. To fix it, let’s go back to our
entrypoint.sh and add the following: we check whether the current user running the
container is root; if it is, we fix the file permissions by making appuser the
owner of the data directory and all the files in it. Then we run the
container command as appuser; for that we can use the “gosu” command, then
specify the user we want to lower privileges to, and finally the command to run.
We close the if statement and save the file.
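Putting it all together, the finished entrypoint.sh would look roughly like this (the version in the linked repository may differ in details, and gosu is assumed to be installed in the image):

    #!/bin/bash
    echo "Running command: $@"

    # running as root: repair ownership of the data volume, then drop
    # privileges and run the container command as appuser
    if [ "$(id -u)" = "0" ]; then
        chown -R appuser:appuser /data
        exec gosu appuser "$@"
    fi

    # already running as a non-root user: just run the command
    exec "$@"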
After building the container, we simply need to run our key generator as the root user once,
and this time the keys are generated successfully. To make sure we are indeed running as non-root,
we can start the container and print the current user. Notice how, even though we asked Docker to
run the container as root, the current user is appuser. And if we list the data directory again,
appuser is now the owner of the keys.txt file.
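Those last two checks could look like this, with the same flags and image name as the earlier runs:

    # prints "appuser" even though we asked for root, because gosu drops privileges
    docker run -v keysvolume:/data -u root uuidgen whoami

    # keys.txt in the volume is now owned by appuser
    docker run -v keysvolume:/data -u root uuidgen ls -l /data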
In this video, we looked at the differences between CMD and ENTRYPOINT in Dockerfiles and
practised using them. I hope you learned something interesting from this tutorial. If you
enjoyed it, please support my channel by subscribing and smashing that like button! If you
have any questions, don’t think twice and drop me a comment down below! I would love to
hear your feedback! Thank you so much
for watching, and see you in the next video!