In the first episode we had a lot of hands
on experience with bridge networks and we defined our own bridge, also with docker-compose. If you haven’t seen it – here is a link
to the video. Like always – all the commands that I type
or show will be in the description of the video. You might ask: “Why would I want to have
other network types than the default bridge network?” Fair enough – after all, we can
map ports from the docker host to the bridge network in a container and make it accessible
from the physical network. Plus we have all possibilities of communication
between the containers. Well, in fact, there are a handful of limitations
or particularities with the bridge network. For example, a docker container will only be reachable via the IP address of its host. There might be situations where you want the container to appear as a standalone machine in the network, with its own IP address. Also – performance could be a reason. Stay tuned. (intro) Quick reminder, guys – here is the breakdown
of this episode – please use the chapter markers if you’re in a hurry and want to
fast forward. The chapters are on the timeline and in the description of the video. Thank you. Let’s start with the host network. We will use the same container as we did in the first episode, that is, the nginxdemos/hello container. Just this time in the network section we specify
the host network. Guys, we’re using portainer for this. See the description on how to install it or
check the first episode. Thanks. On the command line you could specify the network to use with the --network option.
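Just as a sketch – the container name here is my own choice and the image is the demo image from the first episode – the equivalent command would look something like this:
  docker run -d --name hello-host --network host nginxdemos/hello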
Now – moving over to the containers section in portainer we can see that the container has been created and that there is no mapped port. However, if I open a new tab in my browser
and I browse to the localhost address then I can see that the nginx demo container replies
on standard http port 80. And – it replies with the localhost address
127.0.0.1. It gives back the same IP address if I browse
to the 0.0.0.0 address, so if I let Linux choose which interface to use or if I use the
host name of my docker host. They all resolve to 127.0.0.1 inside the container. If I browse to the LAN IP address of my docker
host then the container replies with that IP address. Same is true if I browse to the default address
of any bridge – it will always reply on that interface. So the first thing we note when we use the host network is that the container replies on all interfaces of the host – in networking terms it is bound to the wildcard address 0.0.0.0, including the bridges. It’s therefore visible not only on this
host, but also in the physical network and all containers can see it. In other words, it is pretty much like running
nginx directly on the host without any network bindings and without any isolation. That’s definitely something to keep in mind. Next, let’s talk about performance. A docker bridge is nothing other than a NAT
or masquerading network. That means, very much like your local network
at home is isolated from the internet behind your router, the docker bridge is isolated
behind the docker host. Very much like your router in the LAN, the
docker host serves as a gateway and can forward ports. It also masquerades outgoing packets with
its own IP. But this mechanism – NAT or masquerading
– uses CPU. Not a big deal on one single container with
one user on a powerful PC, but if you have hundreds or thousands of connections or multiple
high-speed connections then you will notice that. Let’s check. We add two simple containers – one with
the host network, and one with the bridge network. Let’s call them ubuntuhost and ubuntubridge. I’ll just use ubuntu as an image and type
in /bin/bash as a command and – important – select interactive and tty as console
type. What I want to do is stress the network a
little bit. For this, I will use iperf3. That’s just a tool that you can use to measure
point-to-point network speeds. One side acts as a server and the other side
is a client. We will check both directions. iperf3 listens on port 5201, so there is nothing to do for the host network, but on the bridge network I will map host port 5202 to container port 5201 because I can’t use the same port twice on my host. Inside the container I go to the console,
run /bin/bash and run apt update && apt install iperf3 in order to install the tool. On a distant host I launched iperf3 -s, so that host is my iperf3 server, and from within the container I will now just pump a gigabit stream over to it by doing iperf3 -c followed by the IP address of the server.
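Just to recap the whole setup as a sketch on the command line – the names and the server address 192.168.1.50 are only examples from my side:
  docker run -it --name ubuntuhost --network host ubuntu /bin/bash
  docker run -it --name ubuntubridge -p 5202:5201 ubuntu /bin/bash
  apt update && apt install -y iperf3      (inside each container)
  iperf3 -s                                (on the distant host)
  iperf3 -c 192.168.1.50                   (inside a container, outgoing test)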
While I do this I will observe the CPU on my docker host with htop. First the host network – CPU goes roughly from 37 to 50% here, and the iperf3 process uses 2% CPU. Now with the bridge network – hmmm – more
or less the same. So outgoing masquerading doesn’t seem to
be very expensive from a compute standpoint. We can’t really see the difference. Now let’s exchange server and client roles. I start iperf3 in server mode in both containers
by typing iperf3 -s, and now I can connect to my docker host from the distant machine either on port 5201 for the host network or on port 5202 for the bridge network.
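From the distant machine that would look roughly like this, with 192.168.1.10 standing in for my docker host:
  iperf3 -c 192.168.1.10 -p 5201      (host network)
  iperf3 -c 192.168.1.10 -p 5202      (bridge network)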
First the host – that one CPU core here goes roughly to 70 to 75%, plus the iperf3 process pulls between 35 and 40 percent. OK. Next with the bridge network. I think that’s obvious. Total CPU on that core goes close to 90% while iperf3 itself pulls constantly above 45%. Again, it doesn’t seem to matter really
because the machine that I am running it on is so powerful. But in relative numbers that’s a 15 to 20%
uplift in CPU cost. OK? Right – so as we could see, the docker host
network uses fewer CPU resources than the bridge network – it’s a question of scale. No problem on one machine, potentially a big
deal on a million servers with zillions of users. Another reason why you would want to choose the host network is if you have applications which use many ports. In the bridge network you would need to expose and map every single one, while in the host network all ports are automatically exposed. The flip side of this might be security considerations. So if you actually don’t want ports to be
exposed then you have better isolation with a bridge network. Cool – so both network types have their
particularities. However, they both have one thing in common
– that is – if I want to access a container from outside of the docker host I would always
need to go to the IP address of the docker host. Also – if I wanted to run multiple instances of a container on the same port I couldn’t do that, because I only have one host. I would need to use a bridge and map different host ports to the different instances.
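For example, something like this would give me two instances of the same demo container on two different host ports – names and ports are just examples:
  docker run -d --name hello1 -p 8081:80 nginxdemos/hello
  docker run -d --name hello2 -p 8082:80 nginxdemos/hello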
Let’s have a look at the next network type – the macvlan network. Creating a macvlan network in portainer is
a two-step process. First we need to configure it and then we
can create a network using it. The reason is that portainer creates the network
as a so-called config-only network. I don’t really know why. Anyhow – let’s configure one. In the networks section we click on add, then
we select macvlan as the network type and again give it an address range like we did in the first episode with the bridge. A couple of remarks with regards to that range. If you have a router that hands out addresses over DHCP then you should exclude that range here in order to avoid duplicate IPs. Very often those are the address ranges
from .100 to .200 for example. You should also of course exclude your router’s
IP address and any other fixed IP addresses that you have on your LAN. You can do this by defining the subnet and IP range here. Depending on how many IP addresses you want
you can make this larger or smaller. Here are a couple of examples for 8, 16, 32 and 64 addresses in one range.
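Assuming a 192.168.1.0/24 LAN purely as an example, such ranges could look like this:
  192.168.1.192/29 – 8 addresses (.192 to .199)
  192.168.1.192/28 – 16 addresses (.192 to .207)
  192.168.1.192/27 – 32 addresses (.192 to .223)
  192.168.1.192/26 – 64 addresses (.192 to .255)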
Alternatively, you could specify a range that is not inside the IP address range of your LAN at all and change it from inside the container later, for example with DHCP. We’ll come to that. That’s actually what I do here. I specify a range that has nothing to do with
my LAN. Also we need to tell portainer and hence docker
which network card we want to use. Let’s click on create. Next, we do the same thing again but this
time we actually create the network. We select the macvlan network which we have
added before. That actually creates a usable network. On the command line we could have done this
in one step by issuing a docker network create command with the -d option “macvlan” and the -o option “parent=eth0”.
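As a sketch – the subnet, gateway, range and network name are just examples from my setup – the full command could look like this:
  docker network create -d macvlan --subnet 192.168.1.0/24 --gateway 192.168.1.1 --ip-range 192.168.1.192/27 -o parent=eth0 mymacvlan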
Now let’s again define a container – Add Container, give it a name –
use ubuntu as an image, same parameters as before for command and
console. In the network section let’s first use a bridge – we will attach the macvlan later, but I want to show you something, and to make a start we need an interface with internet access in order to pull some software. Next, go over to the capabilities here and let’s add the net_admin capability to the container. It will all become clear in a minute. We deploy the container and launch a console. First we need some additional software in
the container. Therefore let’s do apt update
then apt install iproute2, dhcpcd5 and iputils-ping.
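As a single copy-and-paste line that’s roughly:
  apt update && apt install -y iproute2 dhcpcd5 iputils-ping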
Now we go back to the container properties, scroll all the way down to the networks. We leave the bridge network and we join the
macvlan network. Back to the console. If I type ip -br addr I can see the IP address of the new network. But as I had given it an address outside of my LAN space I can’t access the internet or anything. I first need an address in the LAN, which I can pull from my router using DHCP with dhcpcd. So I type dhcpcd eth1. And – tadaa – I get assigned an IP address by my router. And that only works because I had added the net_admin capability. With this capability and with the iproute2 package I can now release the unused address by doing ip address del (then the address) dev (and here the network interface). Checking with ip -br addr shows the magic – I now only have that IP address in my LAN which I got over DHCP.
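To make that concrete – with eth1 being my macvlan interface and 10.99.0.2/24 standing in for the unused address from the original range – the sequence inside the container is roughly:
  dhcpcd eth1
  ip address del 10.99.0.2/24 dev eth1
  ip -br addr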
Let’s try and ping google. Yep – that works – nice. OK – let’s stop here for a second. We have done a lot of things here. We have added a macvlan config-only network
to docker, we then defined a network using that config. Then we created a container on a bridge network,
pulled some software and after that we left the bridge network and joined the macvlan
network. We then requested an IP address over DHCP
from my router in the LAN, deleted the old IP address and we can now browse to my router’s
status page and here we see that the docker container is listed very much like any other
PC or device or VM or container. You couldn’t really tell from the outside
that it’s actually a docker container. When I defined the network I could of course
have added an IP address range from my LAN directly, and that would have saved us the additional steps of first adding the bridge for an internet connection in order to install the software we needed, and of then joining the macvlan network separately. But I wanted to show you two more things here
– first, you can leave or join networks with a docker container. And second, if you add the net_admin capability,
then you can change the network configuration from inside the container. That may come in handy if you want to create
containers that actually behave like a physical machine in your network such as routers. If you have followed the previous episodes
then you might see where this is going, right? Just a quick remark on the MAC address though
– I had not specified a MAC address for that interface, so when I stop and restart
the container, I might get a new one – and hence, if I launch dhcpcd again, I might get
a different IP address and also the old address might remain blocked on the router until the
end of the lease. So it would be safer to actually define a
MAC address in the container’s network definition.
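On the command line that could look something like this – the network name and the MAC address are just examples:
  docker run -it --network mymacvlan --mac-address 02:42:ac:11:00:10 --cap-add NET_ADMIN ubuntu /bin/bash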
Another thing to know is that if you run docker in a virtual machine then you would need to define the network adapters such that they
support promiscuous mode – that means the virtual network adapter must accept traffic for MAC addresses other than its own. So that might be one thing to check if the
previous steps didn’t work for you. Perfect – so if you have watched both episodes
so far then you should now have a good overview of networking options in docker from the system
bridge to user-defined bridges to the host and macvlan network drivers. We have spoken about docker-compose files
in order to have bridges defined for stacks, we had a brief look at docker integration
into editors such as Visual Studio Code and we can now define containers that behave like
physical machines inside our LAN. The only limitation so far is that we have
done everything on one single docker host. In one of the next episodes we might again
take this to the next level by defining a docker swarm made of two or more hosts and
define an overlay and ingress network in order to be able to scale workloads over multiple
hosts. Let me know in the comment section if you’re
interested. Awesome – Again, I hope that I could give
you some ideas to experiment with – we might follow up with having a look at how we can
import a disk image of a virtual machine into a docker container so that we can for example
run OpenWrt in a container and build a dockerized version of our test lab network. Or – we might have a look at docker swarms
or maybe have a closer look at docker-compose or Visual Studio Code and stuff like git. Leave me a comment. Until then, many thanks for watching, liking,
commenting and subscribing. Stay safe, stay healthy, bye for now.