In this video, we'll talk about how different
containers can communicate with each other and with standalone applications
on the same host. Additionally, we'll explore communications between
containers located on different hosts. We'll cover all the default Docker
network drivers, including bridge, host, overlay, and others, and discuss
when to use each specific one. I'll also provide some hands-on demos that
you can reproduce on your own. We frequently use Windows and Mac for local
development. We may build and run multiple containers on a laptop. While some network
drivers, such as the bridge network on Mac, may still work, they do have
some limitations. For instance, on a Mac, Docker runs inside a
virtual machine, and while on Linux you can access a container using its IP
address directly, that's not possible on a Mac. You would need to forward ports to
localhost in order to access the application. Now, some networking features of Docker
can still be applied for local development, but most of them are used for running
applications that are accessible by external clients on, let's say, a remote
Linux server. So, it's important to keep in mind that there are two different
use cases for Docker and networking. You may be wondering why not use Kubernetes
to run your applications in production. Well, perhaps you only have one or a few applications
to run on Linux, and deciding to run those apps in Kubernetes can lead to significant overhead.
Kubernetes is a distributed system and needs to run many different services, such as kubelet,
DNS, etcd, and different controllers. And you need to spend a lot of time maintaining
and upgrading your Kubernetes clusters. The typical example is that you may have
a standard 3-tier application. You have clients such as web browsers or maybe mobile
applications. There's a logic tier with your application, and for the data tier,
you may have a relational database. You can run your app and the database
on the same Linux server using Docker, which is a perfectly valid use case. Don’t
use Kubernetes unless you really have to. If you want to get the most out of
Docker networking, you need to use Linux. Let's start with the default bridge network,
which is used whenever you start a container. But first, let's take a look at what a bridge
is in a regular network without Docker. In a nutshell, a bridge is a physical or virtual
device that connects multiple local area networks. Let's say you have a large number of devices
connected to the local network in an office building. Maybe you have 30 devices,
like computers, printers, etc. Now, when two or more devices on the same network
try to transmit data at the exact same time, this will lead to a collision, and some packets
will be dropped. So, the more devices you have connected to the same network, the more collisions
you would have. This reduces the performance of the network since if the packets are dropped,
the sender needs to resend those packets. And we say that all those devices on that network
segment belong to the same collision domain. In the early days, to solve this problem,
the bridge was introduced. The bridge can divide the local network into multiple smaller
networks. For example, we can split the original network with 30 devices into two networks with 15
devices each. And each network will get its own collision domain. Fewer devices on the network
mean fewer chances of a collision to happen. Now, when it comes to Docker, the two
segments that the bridge connects are your local host (it can be your
laptop or a Linux server where you run your containers) and the
virtual network created by Docker. For example, when you install
Docker for the first time, it will create a default bridge network
with a CIDR of 172.17.0.0/16, where the .1 address is the gateway. And all containers that you create on that host
will get an IP address from that range. Also, containers that are created in that default bridge network will be able to
communicate with each other. Now, the default bridge
network has some limitations, and even Docker itself does not recommend
using it, especially in production. The recommended approach would be to
create a new user-defined bridge network. The biggest difference you'll notice
immediately is that with a user-defined network, you can use DNS to send requests to
your containers. This applies only to communication between containers inside the bridge
network; you still won't be able to resolve a container's name from the host. The
name that you give to the container when you start it will become its DNS name. This is
not possible in the default bridge network. Alright, let me run a quick demo for you. First of all, let's list all the default
networks created by Docker: we have bridge, host, and none. Now, we can use the 'inspect'
subcommand to get all the configurations for a specific network. For example, the default
bridge network has a CIDR of 172.17.0.0/16, and the gateway is '.1'. Additionally, you can see
the driver that was used to create this network.
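If you want to follow along, this step boils down to two commands; the exact output will depend on your host:

# list the networks Docker created by default
docker network ls
# inspect the default bridge network to see its subnet, gateway, and driver
docker network inspect bridge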
Let's go ahead and run the first container. By the way, I have the source code and all the
commands in the README file. You'll find a link in the description in case you want to
follow along. Now, to run a container, we need to specify
the image. I have built one for both AMD64 and ARM architectures, and it's public, so you can
copy, paste, and run exactly the same command. Use '-d' to run it in the background. On a
Mac, I have to explicitly forward ports to localhost in order to access the application.
For example, if you run Docker on Linux, you can directly access the container by its
IP address from the same host, of course. For Mac, I'll map port 8081 on
localhost to port 8080 inside the container, where I run a Python application. To get all the running containers, you can run
'docker ps'. In case your container has failed, you can add '-a' to get all containers,
including those that are stopped or have failed. Well, to get more information about the container,
we can also use the 'inspect' subcommand. So, you can see that the container got its IP
address from the default bridge network, which is 172.17.0.2. On Linux, you
can directly hit this IP with ‘curl'. On a Mac, I have to use localhost. Let's
send a request to the container. The Python application will return the hostname
and the IP address of the container.
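For reference, this whole first-container exercise looks roughly like this; '<image>' is a placeholder for the public image mentioned above, and the port numbers match the ones used here:

# run the container in the background and publish host port 8081 to container port 8080
docker run -d --name myapp -p 8081:8080 <image>
# list running containers (add -a to also see stopped or failed ones)
docker ps
# find the IP address the container got from the default bridge network
docker inspect myapp | grep IPAddress
# on a Mac, go through the forwarded port; on Linux you could curl the container IP directly
curl http://localhost:8081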
Alright, now let's run the second container. It's exactly the same process, except we'll
give it a different name and port-forward to a different port, 8082.
Alright, so far we have two containers: 'myapp' and 'myapp-v2'. We can
use 'curl' to access both of them. Now, let's verify that both containers can
communicate with each other. We'll open a shell inside one of the containers using the 'exec'
command; the '-it' flags give us an interactive terminal. Since these containers are based on Alpine Linux, we don't
have Bash, so we need to use the 'sh' shell. Let's use 'curl' to access both containers
by using their IP addresses. This way, we can access the same container and also
another one using their respective IPs. However, ideally, we would want to use DNS to
send requests between containers. Let's try that with 'curl'. Well, that's the limitation of
the default bridge network: DNS is not supported. Before we continue, let me remove both containers.
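Sketched out, that part looks something like this; the container IPs are the ones from my run, so yours may differ:

# open a shell inside the first container (Alpine images ship sh, not bash)
docker exec -it myapp sh
# inside the container: requests by IP address work...
curl http://172.17.0.2:8080
curl http://172.17.0.3:8080
# ...but the container name does not resolve on the default bridge network
curl http://myapp-v2:8080
# back on the host: remove both containers before the next step
docker rm -f myapp myapp-v2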
Now, let's go ahead and create a custom user-defined network. We'll name it 'my-bridge-net',
and, though it's totally optional, you can provide a CIDR and a gateway. List all the networks, and you'll find
the new bridge network. Let's go ahead and inspect it. As you would expect, you'll
see the provided subnet and gateway. Since it's a virtual network only accessible
from the host, the CIDR you use doesn't really matter. I would even suggest
letting Docker choose the CIDR for you. Now, in order to create a
container inside that network, we just need to specify the network. It's
as simple as that. The rest stays the same. Let's also create the second
container in the same network. Well, now we can access both containers using
'curl,' but you'll notice that the IP addresses come from the CIDR we specified earlier. So,
we have 10.0.0.2 and 10.0.0.3 as IP addresses. Again, let's exec into the first container. And now we can actually use DNS. The
DNS name will be the same as the name of the container you created. This is the
biggest difference and a key reason why you would want to create a custom bridge
network instead of using the default one. Alright, let me clean this up by deleting
the containers as well as the network.
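Put together, the user-defined bridge demo looks something like this; the subnet and gateway are just the example values from above, and '<image>' is again a placeholder:

# create a user-defined bridge network (the CIDR and gateway are optional)
docker network create --subnet 10.0.0.0/24 --gateway 10.0.0.1 my-bridge-net
# run both containers attached to that network
docker run -d --name myapp --network my-bridge-net -p 8081:8080 <image>
docker run -d --name myapp-v2 --network my-bridge-net -p 8082:8080 <image>
# DNS works here: the container name is its hostname on the network
docker exec -it myapp sh -c "curl http://myapp-v2:8080"
# clean up the containers and the network
docker rm -f myapp myapp-v2
docker network rm my-bridge-net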
Now, most of the time when I need to create containers locally, I use Docker Compose,
which is a declarative way to define which containers to run. This compose file does
exactly the same thing as what we ran
before. We declare to run 'myapp' and 'myapp-v2' containers. By the way, those
are the DNS names that you can use inside the network. We also declare exactly the same
bridge network and optionally provide a CIDR. So, this is exactly the same setup,
but it's defined in a single YAML file. Now, in order to run it, just execute
the 'docker-compose up' command. Use '-f' to specify the compose file, and
'-d' to run it in the background. We can test it with 'curl' in the same
way. And finally, to tear this down, just run 'docker-compose down'. It will shut
down the containers and remove the network.
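As a rough sketch, the compose setup could look like this, written out as a shell snippet; the file name, image name, and subnet are placeholders or example values, not the exact file from the README:

# write a minimal compose file that mirrors the setup above
cat > docker-compose.yaml <<'EOF'
services:
  myapp:
    image: <image>
    ports: ["8081:8080"]
    networks: [my-bridge-net]
  myapp-v2:
    image: <image>
    ports: ["8082:8080"]
    networks: [my-bridge-net]
networks:
  my-bridge-net:
    ipam:
      config:
        - subnet: 10.0.0.0/24
EOF
# start everything in the background, then tear it down when you're done
docker-compose -f docker-compose.yaml up -d
docker-compose -f docker-compose.yaml down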
The next networking mode available is called 'host'. If you use this option, your container
will not get its own IP address and will share the same networking namespace as the host
where you run your container. Essentially,
there is no networking isolation from the host, and it would appear as if you were running a
regular application on that host. Therefore, any application running on a different server
will be able to access your container using the host's IP address. This networking driver
only works on Linux instances and is most likely used in production to expose your services to
clients, rather than for local development. When you use the host network, obviously
you cannot bind multiple containers to the same port. For example, you won't
be able to run two proxies on port 80. Also, the host network does not require
network address translation (NAT), and no 'userland-proxy' is created for each
published port, which improves performance. So, the two main reasons why you would choose
this option are to optimize performance and if the container is using a lot of
ports that you need to bind to the host. Now, to run the demo, we need to use a Linux box.
For example, I’ll use Ubuntu. I have two virtual machines here, both with Docker installed.
Let’s go ahead and create a Docker container on the virtual machine, using the host network
for the container. When you start the container, it will bind to port 8080, which is used
by Python internally inside the container. Next, let's get the IP address of that VM.
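In shell terms, the host-network demo is roughly the following, with placeholders for the image and the first VM's IP:

# on VM 1: run the container on the host network; no port mapping is needed
docker run -d --name myapp --network host <image>
# note VM 1's IP address
hostname -I
# on VM 2: reach the app via VM 1's IP and the app's port
curl http://<vm1-ip>:8080
# cleanup on VM 1 when you're done
docker rm -f myapp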
And now, from the second virtual machine, let's try to access the container. Alright, it works. If you're following along, don't forget
to delete that container. That's all for this demo. My point was to show you that a container
using the host network appears, to other applications, like any regular
application running on that virtual machine. The next type is 'none'. This completely isolates
the container from the host and other containers running on that server. When you use this option,
only the loopback interface will be created for the container. You won't be able to publish
or forward any ports. This mode can be used, for example, to run batch jobs or some
kind of data processing pipelines. For the demo, I’ll use Linux as well. On
other platforms, such as Mac and Windows, Docker may create additional network
interfaces inside the container. Let's go ahead and run the container,
specifying 'none' for the network. Now, let's get all the network interfaces
in that container. You can see that only the loopback interface is present.
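For example, assuming the same placeholder image as before:

# run a container with no networking at all
docker run -d --name myapp --network none <image>
# list its network interfaces: only the loopback device is there
docker exec myapp ip addr    # or 'ifconfig', depending on what the image ships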
This means the container is totally isolated from the host and any other
containers that may be running on it. That’s all for this demo. Traditionally, to expose a container
to the outside world, we used a bridge. It works just fine but adds additional
complexity. Besides the performance penalty, since the packet needs to go through
an additional hop, we also have to map ports from the container to the host in
order to expose it to other applications. Now, IPvlan is a new network virtualization
technique. It's extremely lightweight since it does not use a bridge for isolation
and is associated directly with the Linux network interface. As a result, it is easy to
provide access for external-facing services, as there is no need for port
mappings in these scenarios. When you start a container using this type, the container
will receive an IP address from the same CIDR range as the host. For example, if my host
has an IP address of 192.168.50.55 on a /24 network, the range runs from 192.168.50.0 to
192.168.50.255, which gives you 256 IP addresses. The container may then receive, say,
192.168.50.2. Any service on that network will be able to access your application
using that IP address, whether it's a VM or another container on a different host. And the
traffic will be routed using the network gateway. For this demo, I’ll use two Linux VMs. On
one, I’ll attach a container to the Linux network interface, and from the other one,
I'll try to access the Python application. First of all, let's find the parent network
interface by running this command. Let’s take note of it; my interface is 'ens33.' We also
need to note the host's network. We'll use this in the next step. So, the IP address is
.55, and we have a network of 192.168.50.0/24. Alright, the next step is to create a network.
Don’t forget to update the subnet and gateway values. The gateway IP address is the
first usable address in the range. Also, don’t forget to update the
parent network interface. Now, let’s make sure the network is created. Let’s start the container on the first
host. Remember to specify the network, and note that you don’t need to map any ports. Next, let's find the IP address
assigned to this container. And from the second host, which in my case
is another virtual machine, we can use 'curl' to test the application. Alright, it works.
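Here is the ipvlan demo sketched end to end; adjust the subnet, gateway, and parent interface to your own network, and note that 'my-ipvlan-net' and '<image>' are placeholder names:

# on VM 1: find the parent interface and the host's network (ens33 and 192.168.50.0/24 here)
ip addr show
# create the ipvlan network
docker network create -d ipvlan \
  --subnet 192.168.50.0/24 --gateway 192.168.50.1 \
  -o parent=ens33 my-ipvlan-net
# run the container on that network; no port mapping is required
docker run -d --name myapp --network my-ipvlan-net <image>
# find the IP address Docker assigned to the container
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myapp
# on VM 2: reach the container directly by that IP
curl http://<container-ip>:8080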
It's a simple demo, but in the near future, I’ll create a detailed tutorial covering
new Docker networking techniques. Some applications, especially legacy applications
or applications which monitor network traffic, expect to be directly connected to the
physical network. In this case, you can use the macvlan network driver to assign a MAC address
to each container's virtual network interface. It will appear as a physical network interface
directly connected to the physical network. The difference between ipvlan and macvlan is
that with ipvlan, your container shares the host's MAC address, while with macvlan,
each container gets its own MAC address. Similar to the previous demo, we need
to find the network interface that we want to use for macvlan. It’s going to be the
same 'ens33' interface, so take note of it. Now, let’s create the network. It’s very
similar as well: you specify the subnet that the host uses, the gateway, and the
parent network interface, and of course, you choose to use the macvlan Docker
driver. But before we create this, take note of the MAC address assigned to
this virtual machine's network interface. List all networks. Next, let's run a container
in that network and inspect it. You'll find that this container has a different MAC address
than your VM; in the case of ipvlan, the MAC address would be the same. We can exec into
the container and get all the network interfaces. Just to prove my point, you can compare
the parent MAC address with the MAC address created for the container.
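The commands mirror the ipvlan demo, just with the macvlan driver; the network name and image are placeholders again:

# note the MAC address of the host's parent interface
ip link show ens33
# create the macvlan network with the same subnet, gateway, and parent interface
docker network create -d macvlan \
  --subnet 192.168.50.0/24 --gateway 192.168.50.1 \
  -o parent=ens33 my-macvlan-net
docker network ls
# run a container in that network and check the MAC address it was given
docker run -d --name myapp --network my-macvlan-net <image>
docker inspect -f '{{range .NetworkSettings.Networks}}{{.MacAddress}}{{end}}' myapp
# compare with the interfaces visible inside the container
docker exec myapp ip link show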
That’s all for this demo. Now, when you deploy your
applications to production, of course, you will need more than
one physical or virtual server, each with the Docker daemon installed. The overlay
network driver creates a distributed network among multiple Docker daemon hosts. This network sits
on top of (overlays) the host-specific networks, allowing containers connected to it to communicate
securely, especially when encryption is enabled. Most frequently, this type of network is
used with Docker Swarm; however, it is also possible to connect individual containers
to it. In my opinion, if you really want to manage your containers at scale, especially in
production, you should consider using Kubernetes. Most people use overlay with
Docker Swarm, but in this demo, I want to show you how to connect individual
containers to the overlay network. Well, for this demo, I also have two
virtual machines based on Ubuntu. Before we can run the demo, I found that
many people faced the same problem with the overlay network, and we need to
disable something to make it work. Let's find the network interfaces on both
VMs. So, in my case, it's ens33 on both VMs. Now we need to disable it on that
network interface on both VMs.
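I'm not naming the exact setting here, so treat this as an assumption: a common workaround for overlay (VXLAN) traffic between VMs is to disable checksum offloading on the parent interface, for example:

# run on both VMs; check the exact feature name with 'ethtool -k ens33' in your environment
sudo ethtool -K ens33 tx-checksum-ip-generic off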
This solution is not persistent across restarts, so you would need to create a script and
run it automatically on boot; there are many options available to solve this. Even if we
want to connect individual containers to the overlay network, we still need to initialize
Docker Swarm. It's actually very easy. On
the first VM, just run 'docker swarm init'. Then it will give you a command
that you can execute on other VMs to join the Docker Swarm. Keep in mind
that each VM should have Docker installed. So far, we have a manager and worker node. On the manager, let’s create an overlay
network. Add the '--attachable' flag so that individual containers can attach to this
network; otherwise, only swarm services will be able to use it.
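Roughly, the swarm bootstrap and the network creation look like this; the join token and manager IP come from the output of 'docker swarm init', and 'my-overlay-net' is a placeholder name:

# on the manager VM
docker swarm init
# on the worker VM, paste the join command that 'swarm init' printed
docker swarm join --token <token> <manager-ip>:2377
# back on the manager: create an overlay network that standalone containers can attach to
docker network create -d overlay --attachable my-overlay-net
docker network ls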
Check if the network was created. Now, on the manager node, let’s start the
container and use this overlay network. If you list networks on the second worker node, you’ll notice that the overlay network is
missing. It’s automatically created when you start the container that uses that network.
Let’s go ahead and run a container on the worker node. Now you can see that
the overlay network is created. To verify that we can access containers on
that overlay network deployed on different VMs, we can exec into the second container
and use curl to send a request to the first container on the manager node.
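Sketched out, with the same placeholders as before:

# on the manager: start a container attached to the overlay network
docker run -d --name myapp --network my-overlay-net <image>
# on the worker: the overlay network only shows up once a container uses it
docker run -d --name myapp-v2 --network my-overlay-net <image>
docker network ls
# from the worker's container, send a request to the container on the manager
docker exec -it myapp-v2 sh -c "curl http://myapp:8080"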
I have a lot of tutorials like this on my channel, covering Kubernetes,
components like Kafka. If you like it, consider subscribing to my channel. Thank you
for watching, and I’ll see you in the next video.