In this video I want to talk about how you
can configure Docker to work behind a proxy server. If you are configuring Docker deployments
in a private cloud environment, or maybe you just need to pull images from a registry hosted
in a corporate network, chances are your sysadmin will tell you to connect
through a proxy server. One thing that may be confusing about Docker
and proxies is that the Docker daemon and Docker containers don't share the same proxy
configuration! Let me save you some time and headaches and
repeat it: you will need to provide your proxy settings for the Docker daemon
and also for each individual container! I'm now going to show you how to do both. For this tutorial, I have configured
a few CentOS machines on AWS, so the node I'm using can connect to the internet only
via a proxy that is running on another node on port 3128. If your Docker daemon is not configured to
use the proxy correctly, you will typically see timeout errors when using "docker login"
to log into a private registry, or when pulling images in general. To test the connection, let's try to pull the
alpine image from Docker Hub. And we get a connection timeout. I'm now going to show you how to solve this
problem, whether you're running Docker on Linux or Docker Desktop on macOS or Windows. If you run Docker Desktop, the solution is
very easy: just open the Docker preferences from your OS interface and navigate to Resources > Proxies. Switch to "manual proxy configuration"
and provide the URLs for your HTTP and HTTPS proxies, plus the list of hosts that should bypass the proxy, usually localhost and 127.0.0.1. Click "Apply & Restart" and just wait for
Docker to come back up again. If you run Docker on Linux, the easiest way
to change your proxy settings is through systemd. Your Docker daemon is most likely installed
as a systemd service, and it can be customized using the environment variables HTTP_PROXY,
HTTPS_PROXY and NO_PROXY. Now, this is not going to be a video about
systemd, so I won't bother you with the details, but if you run into any issues with your systemd
configuration, ask in the comments and I will do my best to reply. For this test I am using a CentOS environment,
but you should be able to replicate the following steps in almost any Linux distribution. To customize a systemd service on CentOS we
can use the "systemctl edit" command: "sudo systemctl edit docker.service". You'll notice that systemctl opens a blank
file in your default editor. We can use this file to provide additional
configuration or override the existing docker.service unit file. Let's type the following: a [Service] header
to start the section definition, then set the proxy configuration as environment
variables for this service. I'm going to include the configuration snippets
in the video description, so check it out if you want to copy and paste it into your own
environment. When we're done defining our variables,
we save and close, and restart the Docker daemon with "sudo systemctl restart docker.service".
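For reference, here is the kind of drop-in file I end up with. The proxy address is a placeholder for my lab setup, so substitute your own host and port:

```ini
[Service]
# Proxy used by the Docker daemon itself (logins, image pulls).
# http://proxy.example.com:3128 is a placeholder; use your own proxy URL.
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
# Hosts the daemon should reach directly, without the proxy.
Environment="NO_PROXY=localhost,127.0.0.1"
```

After the restart, these three values should show up in the output of "docker info".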
We can verify our proxy configuration has been applied with "docker info", filtering
for the "proxy" keyword. We can see that our variables have been loaded
by the Docker daemon. So if we now try to pull an image from Docker
Hub, it should work. Great! But what happens if we try to run a container,
and access some online content? Let's run an Nginx container and "curl google.com". After a few minutes we get a connection timeout. As I told you, this happens because running
containers don't share the same proxy settings as the Docker daemon. To access the internet we'll need to set
the HTTP_PROXY and HTTPS_PROXY variables for the container. This can be done in the Dockerfile or at runtime
with the "--env" option, although I recommend that you avoid doing it in the Dockerfile,
since that makes the image less portable. Let's run the Nginx container again, but
this time configure the http and https proxy variables: "docker run", then we set the environment variable
for http_proxy, the same for the https_proxy variable, the image will be Nginx, and the command "curl
google.com" again. And this time, the container successfully
connected to the internet. Setting proxy variables on containers is easy,
but it gets tedious very quickly, since you'll have to do it for every new container. A better way to do this is configuring default
values for the Docker client, so these settings will be applied to all new containers that
the current user runs. To do so, we need to create or edit the ".docker/config.json"
file in our user's home directory (in my case I need to create it) and add the following
configuration. As before, I will include the snippet in the
video description for you to copy, but essentially we are just providing default values for the
same three variables we've set before. Save and close, and now all new containers
started by this user will have the proxy settings applied by default. Let's test the result of this last configuration
by echoing the http_proxy variable in a new container: and indeed, the variable has now been set by the Docker
client. One final test to try connecting to google.com
again, and it works as expected! I hope this clarifies how Docker works with
proxies. Please let me know if you liked this video
with a thumbs up, or a thumbs down if you didn't, to help me improve. I do my best to upload new content every week,
so subscribe to the channel if you'd like to learn more about Docker and containers! Thank you very much for watching until the
end and see you in the next video!
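To close out, here are the container-side snippets mentioned in the video, in the form I'd put in the description. The proxy URL is a placeholder for my lab proxy on port 3128, so replace it with your own. First, running a single container with the proxy variables set at runtime:

```shell
# Run curl inside an Nginx container, with proxy settings for this container only.
# http://proxy.example.com:3128 is a placeholder; use your own proxy URL.
docker run \
  -e http_proxy="http://proxy.example.com:3128" \
  -e https_proxy="http://proxy.example.com:3128" \
  nginx curl google.com
```

And the ".docker/config.json" that makes the Docker client apply these variables by default to every new container the current user starts:

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```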