Hello and welcome to the Docker for beginners
course. My name is Mumshad Mannambeth, and I will be your instructor for this course. I'm a DevOps and cloud trainer at KodeKloud.com, which is an interactive, hands-on, online learning platform. I've been working in the industry as a consultant for over 13 years and have helped hundreds of thousands of students learn technology in a fun and interactive way. In this course, you will learn Docker through a series of lectures that use animation, illustration, and some fun analogies
that simplify complex concepts with demos that will show you how to install and get started with
Docker. And most importantly, we have hands on labs that you can access right in your browser.
I will explain more about it in a bit. But first, let's look at the objectives of this course.
In this course, we first try to understand what containers are, what Docker is, and why you might
need it and what it can do for you. We will see how to run a Docker container, how to build your own Docker image, how networking works in Docker, how to use Docker Compose, what a Docker registry is, and how to deploy your own private registry. We then look at some of these concepts in depth and try to understand how Docker really works under the hood. We look at Docker for Windows and Mac before finally getting a basic introduction to container orchestration tools like Docker Swarm and Kubernetes. Here's a quick note about hands-on labs. First of all, to complete this course, you
don't have to set up your own labs. You may set one up if you wish to have your own environment, and we have a demo for that as well. But as part of this course, we provide real labs that you can access right in your browser, anywhere, anytime, and as many times as you want. The labs give you instant access to a terminal on a Docker host and an accompanying quiz portal. The quiz portal asks a set of questions, such as exploring the environment and gathering information, or you might be asked to perform an action, such as running a Docker container. The quiz portal then validates your work and gives you feedback instantly. Every lecture in this course is accompanied by such challenging, interactive quizzes that make learning Docker a fun activity. So I hope
you're as thrilled as I am to get started. So let us begin. We're going to start by looking at
a high level overview on why you need Docker, and what it can do for you. Let me start by
sharing how I got introduced to Docker. In one of my previous projects, I had a requirement to set up an end-to-end application stack, including various different technologies, like a web server using Node.js, a database such as MongoDB, a messaging system like Redis, and an orchestration tool like Ansible. We had a lot of issues developing this application stack
with all these different components. First of all, their compatibility with the underlying OS was an
issue. We had to ensure that all these different services were compatible with the version of the OS we were planning to use. There were times when certain versions of the services were not compatible with the OS, and we had to go back and look for a different OS that was compatible with all of these different services. Secondly, we had to check the compatibility between the services and the libraries and dependencies on the OS. We had issues where one service required one version of a dependent library, whereas another service required another version. The architecture of our application changed over time: we had to upgrade to newer versions of these components, or change the database, etc. And every time something changed, we had to go through the same process of checking compatibility between these various components and the underlying infrastructure.
This compatibility matrix issue is usually referred to as the matrix from hell. Next, every
time we had a new developer on board, we found it really difficult to set up a new environment. The new developers had to follow a large set of instructions and run hundreds of commands to finally set up their environments. We had to make sure they were using the right operating system and the right versions of each of these components, and each developer had to set all of that up by himself each time. We also had different development, test, and production environments. One developer may be comfortable using one OS, and the others may be comfortable using another one, and so we couldn't guarantee that the application we were building would run the same way in different environments. All of this made our life in developing, building, and shipping the application really difficult. So I needed something that could help us with the
compatibility issue, something that would allow us to modify or change these components without affecting the other components, and even modify the underlying operating system as required. And that search landed me on Docker. With Docker, I was able to run each component in a separate container, with its own dependencies and its own libraries, all on the same VM and OS, but within separate environments, or containers. We just had to build the Docker configuration once, and all our developers could now get started with a simple docker run command, irrespective of the underlying operating system they were on. All they needed to do was to make sure they had Docker installed on their systems. So what are containers? Containers
are completely isolated environments. As in, they can have their own processes or services, their own network interfaces, and their own mounts, just like virtual machines, except that they all share the same OS kernel. We will look at what that means in a bit. But it's also important to note that containers are not new with Docker; containers have existed for about 10 years now, and some of the different types of containers are LXC, LXD, LXCFS, etc. Docker utilizes LXC containers. Setting up these container environments is hard, as they are very low level, and that is where Docker offers a high-level tool with several powerful functionalities, making it really easy for end users like us. To understand how Docker works, let us revisit some basic concepts of operating
systems first. If you look at operating systems like Ubuntu, Fedora, SUSE, or CentOS, they all consist of two things: an OS kernel and a set of software. The OS kernel is responsible for interacting with the underlying hardware. While the OS kernel remains the same, which is Linux in this case, it's the software above it that makes these operating systems different. This software may consist of a different user interface, drivers, compilers, file managers, developer tools, etc. So you have a common Linux kernel shared across all OSes and some custom software that differentiates
operating systems from each other. We said earlier that Docker containers share the underlying
kernel. So what does that actually mean? Sharing the kernel? Let's say we have a system with an
Ubuntu OS with Docker installed on it. Docker can run any flavor of OS on top of it, as long
as they're all based on the same kernel. In this case, Linux. If the underlying OS is Ubuntu,
Docker can run a container based on another distribution like Debian, Fedora, SUSE, or CentOS. Each Docker container only has the additional software, which we just talked about in the previous slide, that makes these operating systems different, and Docker utilizes the underlying kernel of the Docker host, which works with all the OSes above. So what is an OS that does not share the same kernel as these? Windows. And so you won't be able to run a Windows-based container on a Docker host with Linux on it. For that, you will require Docker on a Windows Server. Now, it is when I say this that most of my students go, "Hey, hold on there. That's not true," and they install Docker on Windows, run a container based on Linux, and go, "See, it's possible." Well, when you install Docker on Windows and run a Linux container on Windows, you're not really running a Linux container on Windows. Windows runs a Linux container on a Linux virtual machine under the hood. So it's really a Linux container on a Linux virtual machine on Windows. We discuss more about this in the Docker on Windows or Mac section later during this course. Now, you might ask, isn't that a disadvantage then, not
being able to run another kernel on the OS? The answer is no, because unlike hypervisors,
Docker is not meant to virtualize and run different operating systems and kernels on the
same hardware. The main purpose of Docker is to package and containerize applications and
to ship them and to run them anywhere, anytime, as many times as you want. So that brings us to the differences between virtual machines and containers, something that we tend to compare, especially those of us from a virtualization background. As you can see on the right, in the case of Docker, we have the underlying hardware infrastructure, then the OS, and then Docker installed on the OS. Docker then manages the containers that run with libraries and dependencies alone. In the case of virtual machines, we have the hypervisor, like ESX, on the hardware, and then the virtual machines on it. As you can see, each virtual machine has its own OS inside it,
then the dependencies, and then the application. This overhead causes higher utilization of the underlying resources, as there are multiple virtual operating systems and kernels running. The virtual machines also consume more disk space, as each VM is heavy and is usually gigabytes in size, whereas Docker containers are lightweight and are usually megabytes in size. This allows Docker containers to boot up faster, usually in a matter of seconds, whereas VMs, as we know, take minutes to boot up, as they need to boot up the entire operating system. It's also important to note that Docker has less isolation, as more resources are shared between the containers, like the kernel, whereas VMs have complete isolation from each other. Since VMs don't rely on the underlying OS or kernel, you can run different types of applications, such as Linux-based or Windows-based apps, on the same hypervisor. So those are some differences
between the two. Now, having said that, it's not an either container or virtual machine situation.
It's containers and virtual machines. Now, when you have large environments with thousands of application containers running on thousands of Docker hosts, you will often see containers provisioned on virtual Docker hosts. That way, we can utilize the advantages of both technologies: we can use the benefits of virtualization to easily provision or decommission Docker hosts as required, and at the same time make use of the benefits of Docker to easily provision applications and quickly scale them as required. But remember that in this case, we will not be provisioning as many virtual machines as we used to before, because earlier we provisioned a virtual machine for each application; now, you might provision a virtual machine for hundreds or thousands of containers. So
how is it done? There are lots of containerized versions of applications readily available as of
today. So most organizations have their products containerized and available in a public Docker
repository called Docker Hub, or Docker store. For example, you can find images of most common
operating systems, databases, and other services and tools. Once you identify the images you need and install Docker on your host, bringing up an application is as easy as running a docker run command with the name of the image. In this case, running a docker run ansible command will run an instance of Ansible on the Docker host. Similarly, run an instance of MongoDB, Redis, and Node.js using the docker run command. If we need to run multiple instances of the web service, simply add as many instances as you need and configure a load balancer of some kind in the front. In case one of the instances were to fail, simply destroy that instance and launch a new one.
There are other solutions available for handling such cases that we will look at later during
this course. And for now, don't focus too much on the commands. We'll get to that in a bit.
We've been talking about images and containers, let's understand the difference between the
two. An image is a package or a template, just like a VM template that you might have worked with in the virtualization world. It is used to create one or more containers. Containers are running instances of images that are isolated and have their own environments and set of processes. As we have seen before, a lot of products have been dockerized already. In case you cannot find what you're looking for, you could create your own image and push it to the Docker Hub repository, making it available to the public. So if you look at it, traditionally, developers developed applications,
then handed them over to the ops team to deploy and manage in production environments. They did that by providing a set of instructions, such as information about how the host must be set up, what prerequisites are to be installed on the host, and how the dependencies are to be configured, etc. Since the ops team did not develop the application on their own, they struggled with setting it up. When they hit an issue, they worked with the developers to resolve it. With Docker, developers and operations teams work hand in hand to transform the guide into a Docker
file with both of their requirements. This Docker file is then used to create an image for their
applications. This image can now run on any host with Docker installed on it, and is guaranteed
to run the same way everywhere. So the ops team can now simply use the image to deploy the
application. Since the image was already working when the developer built it, and the operations team has not modified it, it continues to work the same way when deployed in production. And that's
one example of how a tool like Docker contributes to the DevOps culture. Well, that's it for now.
In the upcoming lecture, we will look at how to get started with Docker. We'll now see how to get
started with Docker. Now Docker has two editions, the Community Edition and the Enterprise Edition.
The Community Edition is the set of free Docker products. The Enterprise Edition is the certified
and supported container platform that comes with enterprise add-ons like image management, image security, and a universal control plane for managing and orchestrating container runtimes. But of course, these come with a price. We will discuss more about container orchestration later in this course, along with some alternatives. For now, we will go ahead with the Community Edition.
The Community Edition is available on Linux, Mac, Windows, or on cloud platforms like AWS, or
Azure. In the upcoming demo, we will take a look at how to install and get started with Docker on
a Linux system. Now, if you are on Mac or Windows, you have two options: either install a Linux VM using VirtualBox or some kind of virtualization platform and then follow along with the upcoming demo, which is really the easiest way to get started with Docker. The second option is to install Docker Desktop for Mac or Docker Desktop for Windows, which are native applications. So if that is really what you want, then check out the Docker for Mac and Windows sections towards the end of this course, and then head back here once you're all set up. We will now head over to our demo, and we will take a look at how to install Docker
on a Linux machine. In this demo, we look at how to install and get started with Docker. First
of all, identify a system, physical or virtual machine or laptop, that has a supported operating system. In my case, I have an Ubuntu VM. Go to docs.docker.com and click on Get Docker. You will be taken to the Docker Engine Community Edition page. That is the free version that we're after. From the left-hand menu, select your system type. I choose Linux in my case, and then select your OS flavor. I choose Ubuntu. Read through the prerequisites and requirements. Your Ubuntu system must be 64-bit and one of the supported versions, like Disco, Cosmic, Bionic, or Xenial. In my case, I have the Bionic version. To confirm, view the /etc/*release* file. Next, uninstall any older version if one exists, so let's just make sure that there is none on my host. I'll just copy and paste that command, and I confirm that no older version exists on my system. The
next step is to set up a repository and install the software. Now there are two ways to go about
this. The first is using the package manager, by first updating the repository using the apt-get update command, then installing the prerequisite packages, then adding Docker's official GPG keys, and then installing Docker. But I'm not going to go that route. There is an easier way.
If you scroll all the way to the bottom you will find the instructions to install Docker using the
convenience script. It's a script that automates the entire installation process and works on
most operating systems. Run the first command to download a copy of the script and then run the
second command to execute the script to install Docker automatically. Give it a few minutes to
complete the installation. The installation is now successful. Let us now check the version
of Docker using the docker version command. We have installed version 19.03.1. We will now run
a simple container to ensure everything is working as expected. For this, head over to Docker Hub at
hub.docker.com. Here you will find a list of the most popular Docker images like NGINX, MongoDB, Alpine, Node.js, Redis, etc. Let's search for a fun image called whalesay. Whalesay is Docker's version of cowsay, which is basically a simple application that prints a whale saying something. In this case, it happens to be a whale. Copy the docker run command given here. Remember to add sudo, and we will change the message to Hello-World. On running this command, Docker pulls the image of the whalesay application from Docker Hub and runs it, and we have our whale saying hello.
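To recap the demo, here is a minimal sketch of the commands involved, assuming an Ubuntu host with internet access (the version numbers and output will differ on your system):

    curl -fsSL https://get.docker.com -o get-docker.sh    # download Docker's convenience script
    sudo sh get-docker.sh                                 # run the script to install Docker
    sudo docker version                                   # verify the installed version
    sudo docker run docker/whalesay cowsay Hello-World    # run a test container that prints a whale saying the message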
Great. We're all set. Remember, for the purpose of this course, you don't really need to set up
a Docker system on your own. We provide hands on labs that you will get access to but if you
wish to experiment on your own and follow along, feel free to do so. We now look at some of the
Docker commands. At the end of this lecture, you will go through a hands on quiz where you will
practice working with these commands. Let's start by looking at the docker run command. The docker run command is used to run a container from an image. Running the docker run nginx command will run an instance of the nginx application on the Docker host, if the image already exists on it. If the image is not present on the host, Docker will go out to Docker Hub and pull the image down. But this is only done the first time; for subsequent executions, the same image will be reused. The docker ps command lists all running containers and some basic information about them, such as the container ID, the name of the image used to run the container, the current status, and the name of the container. Each container automatically gets a random ID and name created for it by Docker, which in this case is silly_sammet. To see all containers, running or not, use the -a option. This outputs all running as well as previously stopped or exited containers. We'll talk about the command and port fields shown in this output later in this course. For now, let's just focus on the basic commands. To stop a running container, use the docker stop command, but you must provide either the container ID or the container name in the stop command. If you're not sure of the name, run the docker ps command to get it. On success, you will see the name printed out, and running docker ps again will show no running containers.
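As a quick reference, the basic lifecycle we just walked through might look like this (silly_sammet is just the randomly generated name from the demo; yours will differ):

    docker run nginx          # run an instance of nginx; pulls the image from Docker Hub the first time
    docker ps                 # list running containers
    docker ps -a              # list all containers, including stopped or exited ones
    docker stop silly_sammet  # stop a running container by name or ID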
Running docker ps -a, however, shows the container silly_sammet and that it is now in an exited state as of a few seconds ago. Now, what if we don't want this container lying around consuming space? What if we want to get rid of it for good? Use the docker rm command to remove a stopped or exited container permanently. If it prints the name back, we're good. Run the docker ps command again to verify that it's no longer present. Good. But what about the nginx image that was downloaded at first? We're not using that anymore, so how do we get rid of that image? But first, how do we see a list of images present on our host? Run the docker images command to see a list of available images and their sizes. On our host, we have four images: nginx, Redis, Ubuntu, and Alpine. We will talk about tags later in this course when we discuss images. To remove an image that you no longer plan to use, run the docker rmi command. Remember, you must ensure that no containers are running off of that image before attempting to remove it. You must stop and delete all dependent containers to be able to delete an image. When we ran the docker run command earlier, it downloaded the Ubuntu image as it couldn't find one locally. What if we simply want to download the image and keep it, so that when we run the docker run command, we don't have to wait for it to download? Use the docker pull command to only pull the image and not run a container. So in this case, the docker pull ubuntu command pulls the Ubuntu image and stores it on our host.
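A minimal sketch of the image-management commands discussed above (the container and image names follow the examples in this lecture):

    docker rm silly_sammet    # permanently remove a stopped or exited container
    docker images             # list images available on the host, along with their sizes
    docker rmi nginx          # remove an image; stop and delete dependent containers first
    docker pull ubuntu        # download an image without running a container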
Let's look at another example. Say you were to run a Docker container from an Ubuntu image. When you run the docker run ubuntu command, it runs an instance of the Ubuntu image and exits immediately. If you were to list the running containers, you wouldn't see the container running. If you list all containers, including those that are stopped, you will see that the new container you ran is in an exited state. Now, why is that? Unlike virtual machines, containers are not meant to host an operating system. Containers are meant to run a specific task or process, such as to host an instance of a web server or application server or a database, or simply to carry out some kind of computation or analysis task. Once the task is complete, the container exits. A container only lives as long as the process inside it is alive. If the web service inside the container is stopped or crashes, then the container exits. This is why, when you run a container from an Ubuntu image, it stops immediately: Ubuntu is just an image of an operating system that is used as the base image for other applications. There is no process or application running in it by default.
If the image isn't running any service, as is the case with Ubuntu, you could instruct Docker
to run a process with the Docker run command. For example, a sleep command with a duration
of five seconds. When the container starts, it runs the sleep command and goes to sleep for five seconds, after which the sleep command exits and the container stops. What we just saw was executing a command when we run the container. But what if we would like to execute a command on a running container? For example, when I run the docker ps command, I can see that there is a running container that uses the Ubuntu image and sleeps for 100 seconds. Let's say I would like to see the contents of a file inside this particular container. I could use the docker exec command to execute a command on my Docker container, in this case to print the contents of the /etc/hosts file.
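As a sketch, assuming a container started earlier that is still running (the placeholder name below stands in for whatever name docker ps shows on your host):

    docker run ubuntu sleep 5                      # run the sleep command in a container; it exits after 5 seconds
    docker ps                                      # find the name or ID of a container that is still running
    docker exec <container-name> cat /etc/hosts    # execute a command inside that running container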
Finally, let's look at one more option before we head over to the practice exercises. I'm now going
to run a Docker image I developed for a simple web application. The repository name is kodekloud/simple-webapp. It runs a simple web server that listens on port 8080. When you run a docker run command like this, it runs in the foreground, or in an attached mode, meaning you will be attached to the console, or the standard out, of the Docker container, and you will see the output of the web service on your screen. You won't be able to do anything else on this console other than view the output until this Docker container stops. It won't respond to your inputs. Press the Ctrl+C combination to stop the container; the application hosted on the container exits, and you get back to your prompt. Another option is to run the Docker container in detached mode by providing the -d option. This will run the Docker container in the background, and you will be back to your prompt immediately. The container will continue to run in the back end. Run the docker ps command to view the running container. Now, if you would like to attach back to the running container later, run the docker attach command and specify the name or ID of the Docker container. Now remember, if you're specifying the ID of a container in any Docker command, you can simply provide just the first few characters, as long as they are different from the other container IDs on the host. In this case, I specify a043d.
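A minimal sketch of attached versus detached mode, using the image and container ID from this demo:

    docker run kodekloud/simple-webapp       # attached mode: output streams to your console until you press Ctrl+C
    docker run -d kodekloud/simple-webapp    # detached mode: the container runs in the background
    docker ps                                # note the container ID, for example a043d...
    docker attach a043d                      # re-attach using the first few characters of the ID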
Now, don't worry about accessing the UI of the web server for now. We will look more into that in the upcoming lectures. For now, let's just understand the basic commands. We'll now get our hands dirty with the Docker CLI. So let's take a look at how to access the practice lab environments next. Let me now walk you through the hands-on lab practice environment. The links to access the labs associated with this course are available at kodekloud.com/p/docker-labs. This link is also given in the description
of this video. Once you're on this page, use the links given there to access the labs associated
to your lecture. Each lecture has its own lab. So remember to choose the right lab for your lecture.
The labs open up right in your browser, I would recommend to use Google Chrome while working with
the labs. The interface consists of two parts, a terminal on the left and a quiz portal on the
right. The quiz portal on the right gives you challenges to solve. Follow the quiz and try and
answer the questions asked and complete the tasks given to you. Each scenario consists of anywhere from 10 to 20 questions that need to be answered within 30 minutes to an hour. At the top, you have the question numbers; below that is the remaining time for your lab; and below that is the question itself. If you are not able to solve the challenge, look for hints in the hints section. You may skip
a question by hitting the skip button in the top right corner. But remember that you will not be
able to go back to a previous question once you have skipped. If the quiz portal gets stuck for
some reason, click on the quiz portal tab at the top to open the quiz portal in a separate window.
The terminal gives you access to a real system running Docker, you can run any Docker command
here and run your own containers or applications. You will typically be running commands to solve
the task assigned in the quiz portal. You may play around and experiment with this environment. But
make sure you do that after you've gone through the quiz so that your work does not interfere with
the tasks provided by the quiz. So let me walk you through a few questions. There are two types of
questions. Each lab scenario starts with a set of exploratory multiple choice questions where you're
asked to explore and find information in the given environment and select the right answer. This is
to get you familiarized with the setup. You're then asked to perform tasks like run a container,
stop them, delete them, build your own image, etc. Here, the first question asks us to find
the version of Docker server engine running on the host. Run the Docker version command in
the terminal and identify the right version. Then select the appropriate option from the given
choices. Another example is the fourth question where it asks you to run a container using the
Redis image. If you're not sure of the command, click on hints and it will show you a hint. We now run a Redis container using the docker run redis command and wait for the container to run. Once
done, click on Check to check your work. You have now successfully completed the task. Similarly,
follow along and complete all tasks. Once the lab exercise is completed, remember to leave
feedback and let us know how it went. A few things to note. These are publicly accessible labs that
anyone can access. So if you catch yourself logged out during a peak hour, please wait for some time
and try again. Also remember to not store any private or confidential data on these systems.
Remember that this environment is for learning purposes only and is only alive for an hour, after
which the lab is destroyed, and so is all your work. But you may start over and access these labs as many times as you want, until you feel confident. I will also post solutions to these lab quizzes.
So if you run into issues, you may refer to those. That's it for now, head over to the first
challenge. And I will see you on the other side. We will now look at some of the other Docker run
commands. At the end of this lecture, you will go through a hands on quiz where you will practice
working with these commands. We learned that we could use the docker run redis command to run a container running a Redis service, in this case the latest version of Redis, which happens to be 5.0.5 as of today. But what if we want to run another version of Redis, for example an older version, say 4.0? Then you specify the version, separated by a colon. This is called a tag. In that case, Docker pulls an image of the 4.0 version of Redis and runs that. Also, notice that if you don't specify any tag, as in the first command, Docker will consider the default tag to be latest. latest is a tag associated with the latest version of that software, which is governed by the authors of the software. So as a user, how do you find information about these versions and what the latest version is? At hub.docker.com, look up an image and you will find all the supported tags in its description. Each version of the software can have multiple short and long tags associated with it, as seen here. In this case, the version 5.0.5 also has the latest tag on it.
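In command form, the two runs discussed above look like this (the version number reflects what was current when the lecture was recorded):

    docker run redis        # no tag specified, so the default tag latest is assumed (5.0.5 at the time)
    docker run redis:4.0    # run a specific version by appending a colon and the tag to the image name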
Let's now look at inputs. I have a simple prompt application that, when run, asks for my name and, on entering my name, prints a welcome message. If I were to dockerize this application and run it as a Docker container like this, it wouldn't wait for the prompt. It just prints whatever the application is supposed to print on standard out. That is because, by default, the Docker container does not listen to standard input. Even though you're attached to its console, it is not able to read any input from you. It doesn't have a terminal to read inputs from; it runs in a non-interactive mode. If you'd like to provide your input, you must map the standard input of your host to the Docker container using the -i parameter. The -i parameter is for interactive mode, and when I input my name, it prints the expected output. But there is something still missing from this: the prompt. When we run the app at first, it asked us for our name, but when dockerized, that prompt is missing, even though it seems to have accepted my input. That is because the application prompts on the terminal, and we have not attached to the container's terminal. For this, use the -t option as well. The -t stands for a pseudo terminal. So with the combination of -it, we are now attached to the terminal, as well as in an interactive mode, on the container.
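A sketch of the three variants, using a hypothetical prompt-application image name for illustration:

    docker run simple-prompt-app        # non-interactive: the app prints output but cannot read your input
    docker run -i simple-prompt-app     # -i maps your standard input, but the prompt is still not shown
    docker run -it simple-prompt-app    # -i plus -t attaches a pseudo terminal, so the prompt appears and input works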
We will now look at port mapping, or port publishing, on containers. Let's go back to the example where we run a simple web application in a Docker container on my Docker host. Remember, the underlying host where Docker is installed is called the Docker host or Docker engine. When we run a containerized web application, it runs and we're able to see that the server is running. But how does a user access my application? As you can see, my application is listening on port 5000, so I could access my application by using port 5000. But what IP do I use to access it from a web browser? There are two options available. One is to use the IP of the Docker container. Every Docker container gets an IP assigned by default; in this case it is 172.17.0.2. Remember that this is an internal IP and is only accessible from within the Docker host. So if you open a browser from within the Docker host, you can go to http://172.17.0.2:5000 to access the application. But since this is an internal IP, users outside of the Docker host cannot access it using this IP. For this, we could use the IP of the Docker host, which is 192.168.1.5. But for that to work, you must have mapped the port inside the Docker container to a free port on the Docker host. For example, if I want users to access my application through port 80 on my Docker host, I could map port 80 of the local host to port 5000 on the Docker container using the -p parameter in my run command, like this. And so the user can access my application by going to the URL http://192.168.1.5:80, and all traffic on port 80 on my Docker host will get routed to port 5000 inside the Docker container.
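The general format is -p <host-port>:<container-port>. A minimal sketch, using the web application image from this demo:

    docker run -p 80:5000 kodekloud/simple-webapp      # host port 80 is routed to port 5000 inside the container
    docker run -p 8000:5000 kodekloud/simple-webapp    # a second instance of the same app on a different host port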
This way you can run multiple instances of your application and map them to different ports on
the Docker host or run instances of different applications on different ports. For example, in
this case, I'm running an instance of MySQL that runs a database on my host and listens on the default MySQL port, which happens to be 3306, and another instance of MySQL on another port, 8306.
So you can run as many applications like this, and map them to as many ports as you want. And
of course, you cannot map to the same port on the Docker host more than once. We will discuss more
about port mapping and networking of containers in the network lecture later on. Let's now look
at how data is persisted in a Docker container. For example, let's say you were to run a MySQL
container. When databases and tables are created, the data files are stored in the location /var/lib/mysql inside the Docker container. Remember, the Docker container has its own isolated file system, and any changes to any files happen within the container. Let's assume you dump a lot of data into the database. What happens if you were to delete the MySQL container and remove it? As soon as you do that, the container, along with all the data inside it, gets blown away, meaning all your data is gone. If you would like to persist data, you would want to map a directory outside the container, on the Docker host, to a directory inside the container. In this case, I create a directory called /opt/datadir and map that to /var/lib/mysql inside the Docker container using the -v option, specifying the directory on the Docker host, followed by a colon, and the directory inside the Docker container. This way, when the Docker container runs, it will implicitly mount the external directory to the folder inside the Docker container, and all your data will now be stored in the external volume at /opt/datadir, and this will remain even if you delete the Docker container.
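A minimal sketch of that run command, following the directory names used in this example:

    docker run -v /opt/datadir:/var/lib/mysql mysql    # map /opt/datadir on the host to MySQL's data directory in the container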
The docker ps command is good enough to get basic details about containers, like their names and IDs. But if you'd like to see additional details about a specific container, use the docker inspect command and provide the container name or ID. It returns all details of a container in JSON format, such as the state, mounts, configuration data, network settings, etc. Remember to use it when you're required to find details on a container. And finally, how do we see the logs of a container we ran in the background? For example, I ran my simple web application using the -d parameter, and it ran the container in detached mode. How do I view the logs, which happen to be the contents written to the standard out of that container? Use the docker logs command and specify the container ID or name, like this. Well, that's it for this lecture. Head over to the challenges and practice working with Docker commands.
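As a quick reference (substitute the name or ID of your own container):

    docker inspect <container-name-or-id>    # all details of the container in JSON: state, mounts, config, network settings
    docker logs <container-name-or-id>       # the standard output written by a container running in detached mode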
Let's start with a simple web application written in Python. This piece of code is used to create a web application that displays a web page with a background color. If you look closely into the application code, you will see a line that sets the background color to red. Now, that works just fine. However, if you decide to change the color in the future, you will have to change the application code. It is a best practice to move such information out of the application code and into, say, an environment variable called APP_COLOR. The next time you run the application, set an environment variable called APP_COLOR to the desired value, and the application now has a new color. Once your application gets packaged into a Docker image, you will then run it with the docker run command followed by the name of the image. However, if you wish to pass the environment variable as you did before, you would now use the docker run command's -e option to set an environment variable within the container. To deploy multiple containers with different colors, you would run the docker run command multiple times and set a different value for the environment variable each time. So how do you find the environment variables set on a container that's already running? Use the docker inspect command to inspect the properties of a running container. Under the Config section, you will find the list of environment variables set on the container.
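A sketch of the commands described here, assuming an illustrative image name of simple-webapp-color:

    docker run -e APP_COLOR=blue simple-webapp-color     # set an environment variable inside the container
    docker run -e APP_COLOR=green simple-webapp-color    # another instance with a different value
    docker inspect <container-name>                       # look under Config > Env for the variables set on a running container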
Well that's it for this lecture on configuring environment variables in Docker. Hello, and
welcome to this lecture on Docker images. In this lecture, we're going to see how to create your
own image. Now before that, why would you need to create your own image? It could either be because
you cannot find a component or a service that you want to use as part of your application on Docker
Hub already, or you and your team decided that the application you're developing will be dockerized
for ease of shipping and deployment. In this case, I'm going to containerize an application, a simple web application that I have built using the Python Flask framework. First, we need to understand what we are containerizing, or what application we are creating an image for, and how the application is built. So start by thinking about what you might do if you wanted to deploy the application manually, and write down the steps required in the right order. I'm creating an image for a simple web application. If I were to set it up manually, I would start with an operating system like Ubuntu, then update the source repositories using the apt command, then install dependencies using the apt command, then install Python dependencies using the pip command, then copy over the source code of my application to a location like /opt, and then finally run the web server using the flask command. Now that I have the instructions, create a Dockerfile using this. Here's a
quick overview of the process of creating your own image. First, create a file named Dockerfile and write down the instructions for setting up your application in it, such as installing dependencies, where to copy the source code from and to, and what the entrypoint of the application is, etc. Once done, build your image using the docker build command, specifying the Dockerfile as input as well as a tag name for the image. This will create an image locally on your system. To make it available on the public Docker Hub registry, run the docker push command and specify the name of the image you just created. In this case, the name of the image is my account name, which is mmumshad, followed by the image name, which is my-custom-app. Now let's take a closer look at that Dockerfile. A Dockerfile is a text file written in a specific format that Docker can understand. It's in an instruction-and-arguments format. For example, in this Dockerfile, everything on the left in caps is an instruction. In this case, FROM, RUN, COPY, and ENTRYPOINT are all instructions. Each of these instructs Docker to perform a specific action while creating the image. Everything on the right is an argument to those instructions. The first line, FROM Ubuntu, defines what the base OS should be for this container. Every Docker image must be based off of another image, either an OS or another image that was created before, based on an OS. You can find official releases of all operating systems on Docker Hub. It's important to note that all Dockerfiles must start with a FROM instruction. The RUN instruction instructs Docker to run a particular command on those base images. So at this point, Docker runs the apt-get update command to fetch the updated packages and installs the required dependencies on the image. Then the COPY instruction copies files from the local system onto the Docker image. In this case, the source code of our application is in the current folder, and I'll be copying it over to the location /opt/source-code inside the Docker image. And finally, ENTRYPOINT allows us to specify a command that will be run when the image is run as a container.
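Putting that together, here is a sketch of what such a Dockerfile and the build and push commands might look like; the exact package list, paths, and entrypoint of the lecture's application may differ:

    # Dockerfile for the simple Flask web application
    FROM ubuntu

    RUN apt-get update && apt-get install -y python python-pip
    RUN pip install flask

    COPY . /opt/source-code

    ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run

    # build the image locally, then push it to Docker Hub
    docker build . -t mmumshad/my-custom-app
    docker push mmumshad/my-custom-app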
When Docker builds images, it builds them in a layered architecture. Each line of instruction creates a new layer in the Docker image, with just the changes from the previous layer. For example, the first layer is the base Ubuntu OS, followed by the second instruction, which creates a second layer that installs all the apt packages, and then the third instruction creates a third layer with the Python packages, followed by the fourth layer that copies the source code over, and the final layer that updates the entry point of the image. Since each layer only stores the changes from the previous layer, it is reflected in the size as well. If you look at the base Ubuntu image, it is around 120 MB in size, the apt packages that are installed are around 300 MB, and the remaining layers are small. You can see this information if you run the docker history command followed by the
image name. When you run the Docker build command, you can see the various steps involved and the
result of each task. All the layers built are cached, so the layered architecture helps you restart docker build from a particular step in case it fails, or if you were to add new steps in the build process, you wouldn't have to start all over again. All the layers built are cached by Docker. So in case a particular step were to fail, for example, in this case step three failed, and you were to fix the issue and rerun docker build, it will reuse the previous layers from cache and continue to build the remaining layers. The same is true if you were to add additional steps in the Dockerfile. This way, rebuilding your image is faster, and you don't have to wait for Docker to rebuild the entire image each time. This is helpful especially when you update the source code of your application, as it may change more frequently; only the layers above the updated layers need to be rebuilt. We just saw a number of products containerized, such as
databases, development tools, operating systems, etc. But that's not just it. You can containerize almost any application, even simple ones like browsers, or utilities like curl, or applications like Spotify, Skype, etc. Basically, you can containerize everything, and going forward, I see that that's how everyone is going to run applications. Nobody is going to install anything anymore going forward. Instead, they're just going to run it using Docker, and
when they don't need it anymore, get rid of it easily without having to clean up too much. In
this lecture, we will look at commands arguments and entry points in Docker. Let's start with a
simple scenario. Say you were to run a Docker container from an Ubuntu image. When you run the
Docker run Ubuntu command it runs an instance of Ubuntu image and exits immediately. If you were to
list the running containers, you wouldn't see the container running. If you list all containers,
including those that are stopped, you will see that the new container you ran is in an exited
state. Now why is that? Unlike virtual machines, containers are not meant to host an operating
system. Containers are meant to run a specific task or process, such as to host an instance of a
web server or application server or a database or simply to carry out some kind of computation
or analysis. Once the task is complete, the container exits; a container only lives as long as the process inside it is alive. If the web service inside the container is stopped or crashes, the container exits. So who defines what process is run within the container? If you look at the
Dockerfile for popular Docker images like nginx, you will see an instruction called CMD, which stands for command, that defines the program that will be run within the container when it starts. For the nginx image, it is the nginx command; for the MySQL image, it is the mysqld command. What we tried to do earlier was to run a container with a plain Ubuntu operating system. Let us look at the Dockerfile for this image. You will see that it uses bash as the default command. Now, bash is not really a process like a web server or a database server. It is a shell that listens for inputs from a terminal. If it cannot find a terminal, it exits. When we ran the Ubuntu container earlier, Docker created a container from the Ubuntu image and launched the bash program. By default, Docker does not attach a terminal to a container when it is run, and so the bash program does not find the terminal, and so it exits. Since the process that was started when the container was created finished, the container exits as well. So how do you specify a different command to start the
container? One option is to append a command to the Docker run command. And that way it overrides
the default command specified within the image. In this case, I run the Docker run Ubuntu command
with the sleep 5 command as the added option. This way, when the container starts, it runs the sleep program, waits for five seconds, and then exits. But how do you make that change permanent?
Say you want the image to always run the sleep command when it starts. You will then create
your own image from the base Ubuntu image and specify a new command. There are different ways of
specifying the command: either specify the command simply, in a shell form, or in a JSON array format like this. But remember, when you specify in a JSON array format, the first element in the array should be the executable, in this case the sleep program. Do not specify the command and parameters together like this; the command and its parameters should be separate elements in the list. So I now build my new image using the docker build command and name it ubuntu-sleeper. I could now simply run the docker run ubuntu-sleeper command and get the same results. It always sleeps for five seconds and exits. But what if I wish to change the number of seconds it sleeps? Currently, it is hard-coded to five seconds. As we learned before, one option is to run the docker run command with the new command appended to it, in this case sleep 10, and so the command that will be run at startup will be sleep 10. But it doesn't look very good. The name of the image, ubuntu-sleeper, in itself implies that the container will sleep, so we shouldn't have to specify the sleep command again. Instead, we would like it to be something like this: docker run ubuntu-sleeper 10. We only want to pass in the number of seconds the container should sleep, and the sleep command should
be invoked automatically. And that is where the ENTRYPOINT instruction comes into play. The ENTRYPOINT instruction is like the CMD instruction, as in you can specify the program that will be run when the container starts, and whatever you specify on the command line, in this case 10, will get appended to the entrypoint. So the command that will be run when the container starts is sleep 10. So that's the difference between the two: in case of the CMD instruction, the command line parameters passed will get replaced entirely, whereas in case of ENTRYPOINT, the command line parameters will get appended. Now, in the second case, what if I run the ubuntu-sleeper image command without appending the number of seconds? Then the command at startup will be just sleep, and you get an error that the operand is missing. So how do you configure a default value for the command, if one was not specified on the command line? That's where you would use both ENTRYPOINT as well as the CMD instruction. In this case, the CMD instruction will be appended to the ENTRYPOINT instruction, so at startup the command would be sleep 5 if you didn't specify any parameters on the command line. If you did, that will override the CMD instruction. And remember, for this to happen, you should always specify the ENTRYPOINT and CMD instructions in a JSON format. Finally, what if you really, really want to modify the entrypoint during runtime, say from sleep to an imaginary sleep2.0 command? Well, in that case, you can override it by using the --entrypoint option in the docker run command. The final command at startup would then be sleep2.0 10. Well, that's it for this lecture, and I will see you in the next.
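A sketch of the ubuntu-sleeper example discussed in this lecture, with both instructions in JSON format:

    # Dockerfile for the ubuntu-sleeper image
    FROM ubuntu
    ENTRYPOINT ["sleep"]
    CMD ["5"]

    docker build -t ubuntu-sleeper .                      # build the image
    docker run ubuntu-sleeper                             # command at startup: sleep 5 (CMD supplies the default operand)
    docker run ubuntu-sleeper 10                          # command at startup: sleep 10 (operand appended to the entrypoint)
    docker run --entrypoint sleep2.0 ubuntu-sleeper 10    # overrides the entrypoint: sleep2.0 10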
We now look at networking in Docker. When you install Docker, it creates three networks automatically: bridge, none, and host. Bridge is the default network a container gets attached to. If you would like to associate the container with any other network, specify the network information using the --network command line parameter, like this. We will now look at each of these networks. The bridge network is a private, internal network created by Docker on the host. All containers attach to this network by default, and they get an internal IP address, usually in the 172.17 series. The containers can access each other using this internal IP if required. To access any of these containers from the outside world, map the ports of these containers to ports on the Docker host, as we have seen before. Another way to access the containers externally is to associate the container to the host network. This takes out any network isolation between the Docker host and the Docker container, meaning if you were to run a web server on port 5000 in a web app container, it is automatically accessible on the same port externally, without requiring any port mapping, as the web container uses the host's network. This also means that, unlike before, you will now not be able to run multiple web containers on the same host on the same port, as the ports are now common to all containers in the host network. With the none network, the containers are not attached to any network and don't have any access to the external network or other containers. They run in an isolated network. So we just saw the default bridge network with the network address 172.17.0.1. All containers associated to this default network will be able to communicate with each other. But what if we wish to isolate the containers within the Docker host, for example the first two web containers on an internal network, 172, and the second two containers on a different internal network, like 182? By default, Docker only creates one internal bridge network. We could create our own internal network using the docker network create command, specifying the driver, which is bridge in this case, and the subnet for that network, followed by the custom isolated network name. Run the docker network ls command to list all networks.
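As a sketch, the network-related commands mentioned so far look like this; the subnet and network name follow the isolated-network example from this lecture:

    docker run --network=none ubuntu       # no network access at all
    docker run --network=host ubuntu       # share the host's network; no port mapping needed
    docker network create --driver bridge --subnet 182.18.0.0/16 custom-isolated-network
    docker network ls                      # list all networks on the Docker host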
So how do we see the network settings and the IP address assigned to an existing container? Run the docker inspect command with the ID or name of the container, and you will find a section on network settings. There you can see the type of network the container is attached to, its internal IP address, MAC address, and other settings. Containers can reach each other using their names. For example, in this case, I have a web server and a MySQL database container running on the same node. How can I get my web server to access the database on the database container? One thing I could do is to use the internal IP address assigned to the MySQL container, which in this case is 172.17.0.3. But that is not ideal, because it is not guaranteed that the container will get the same IP when the system reboots. The right way to do it is to use the container name. All containers in a Docker host can resolve each other by the name of the container. Docker has a built-in DNS server that helps the containers resolve each other using the container name. Note that the built-in DNS server always runs at address 127.0.0.11. So how does Docker implement networking? What's the technology behind
it? Like, how are the containers isolated within the host? Docker uses network namespaces, which create a separate namespace for each container. It then uses virtual Ethernet pairs to connect containers together. Well, that's all we can talk about for now; these are advanced concepts that we discuss in the advanced Docker course on KodeKloud. That's all for now from this lecture on networking. Head over to the practice test and practice working with networking in Docker. I will see you in the next lecture. Hello and welcome to this lecture, where we are
learning advanced Docker concepts. In this lecture we're going to talk about Docker storage
drivers and file systems. We're going to see where and how Docker stores data and how it manages
file systems of the containers. Let us start with how Docker stores data on the local file system.
When you install Docker on a system, it creates this folder structure at /var/lib/docker. You have multiple folders under it, called aufs, containers, image, volumes, etc. This is where Docker stores all its data by default. When I say data, I mean files related to images and containers running on the Docker host. For example, all files related to containers are stored under the containers folder, and the files related to images are stored under the image folder. Any volumes created by the Docker containers are created under the volumes folder. Well, don't worry about that for now. We
will come back to that in a bit. For now, let's just understand where Docker stores its files,
and in what format. So how exactly does Docker store the files of an image and a container?
To understand that, we need to understand Docker's layered architecture. Let's quickly recap something we learned: when Docker builds images, it builds them in a layered architecture. Each line of instruction in the Dockerfile creates a new layer in the Docker image, with just the changes from the previous layer. For example, the first layer is the base Ubuntu operating system, followed by the second instruction, which creates a second layer that installs all the apt packages, and then the third instruction creates a third layer with the Python packages, followed by the fourth layer that copies the source code over, and then finally the fifth layer that updates the entry point of the image. Since each layer only stores the changes from the previous layer, it is reflected in the size as well. If you look at the base Ubuntu image, it is around 120 megabytes in size, the apt packages that are installed are around 300 MB, and then the remaining layers are small. To understand the advantages of this layered architecture, let's
consider a second application. This application has a different Docker file. But it's very similar
to our first application, as it uses the same base image, Ubuntu, and the same Python and Flask dependencies, but uses different source code to create a different application, and so
a different entry point as well. When I run the Docker build command to build a new image for
this application, since the first three layers of both the applications are the same, Docker is not
going to build the first three layers. Instead, it reuses the same three layers it built
for the first application from the cache, and only creates the last two layers with the
new sources and the new entry point. This way, Docker builds images faster and efficiently saves
disk space. This is also applicable if you were to update your application code. Whenever you update your application code, such as app.py in this case, Docker simply reuses all the previous layers from cache and quickly rebuilds the application image by updating the latest source code, thus saving us a lot of time during rebuilds and updates. Let's rearrange the layers bottom-up
so we can understand it better. At the bottom we have the base Ubuntu layer, then the packages,
then the dependencies, and then the source code of the application, and then the entry point. All
of these layers are created when we run the Docker build command to form the final Docker image. So
all of these are the Docker image layers. Once the build is complete, you cannot modify the contents
of these layers and so they are read only and you can only modify them by initiating a new build.
When you run a container based off of this image using the Docker run command, Docker creates a
container based off of these layers and creates a new writable layer on top of the image layer.
The writable layer is used to store data created by the container, such as log files written by the
applications, any temporary files generated by the container, or just any file modified by the user
on that container. The life of this layer though, is only as long as the container is alive. When
the container is destroyed. This layer and all of the changes stored in it are also destroyed.
Remember that the same image layer is shared by all containers created using this image. If I were
to log into the newly created container and, say, create a new file called temp.txt, it will create that file in the container layer, which is read-write. We just said that the files in the
image layer are read only meaning you cannot edit anything in those layers. Let's take an example of
our application code. Since we bake our code into the image, the code is part of the image layer
and as such is read-only. After running a container, what if I wish to modify the source code to, say, test a change? Remember, the same image layer may be shared between multiple containers created from this image. So does it mean that I cannot modify this file inside the container? No, I can still
modify this file. But before I save the modified file, Docker automatically creates a copy of the
file in the readwrite layer, and I will then be modifying a different version of the file in the
readwrite layer. All future modifications will be done on this copy of the file in the readwrite
layer. This is called the copy-on-write mechanism. The image layer being read-only just means that the
files in these layers will not be modified in the image itself. So the image will remain the same
all the time, until you rebuild the image using the Docker build command. What happens when we get rid of the container? All of the data that was stored in the container layer also gets deleted. The changes we made to app.py and the new temp file we created will also get removed. So what if
we wish to persist this data? For example, if we were working with a database, and we would like
to preserve the data created by the container, we could add a persistent volume to the container.
To do this, first create a volume using the docker volume create command. So when I run the docker volume create data_volume command, it creates a folder called data_volume under the /var/lib/docker/volumes directory. Then, when I run the Docker container using the docker run command, I could mount this volume inside the Docker container's read-write layer using the -v option like this. So I would do a docker run -v, then specify my newly created volume name, followed by a colon and the location inside my container, which is the default location where MySQL stores data, and that is /var/lib/mysql, and then the image name mysql. This will create a new container and mount the data_volume volume we created into the /var/lib/mysql folder inside the container. So all data written by the database is in fact stored on the volume created on the Docker host. Even if the container is destroyed, the data remains intact. Now what if you didn't
run the Docker volume create command to create the volume before the Docker run command. For example,
if I run the docker run command to create a new instance of a MySQL container with the volume data_volume2, which I have not created yet, Docker will automatically create a volume named data_volume2 and mount it to the container. You should be able to see all these volumes if you list the contents of the /var/lib/docker/volumes folder. This is called volume mounting, as we are mounting a volume created by Docker under the /var/lib/docker/volumes folder.
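As a rough sketch, the volume mounting commands described above would look something like this; the volume name follows the example, and the MySQL environment variables the image needs to actually start are omitted for brevity:

```
# create a named volume under /var/lib/docker/volumes
docker volume create data_volume

# mount the volume into the container's /var/lib/mysql directory
docker run -v data_volume:/var/lib/mysql mysql

# if data_volume2 does not exist yet, Docker creates it automatically
docker run -v data_volume2:/var/lib/mysql mysql
```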
But what if we had our data already at another location? For example, let's say we have some external storage on the Docker host at /data, and we would like to store database data on that volume and not in the default /var/lib/docker/volumes folder. In that case, we will run a container using the command docker run -v, but in this case we will provide the complete path to the folder we would like to mount, that is /data/mysql, and so it will create a container and mount the folder into the container. This is called bind mounting. So there are two types of mounts: a volume mount and a bind mount. A volume mount mounts a volume from the volumes directory, and a bind mount mounts a directory from any location on the Docker host.
One final point to note before I let you go: using the -v option is the old style. The new way is to use the --mount option. The --mount option is the preferred way as it is more verbose, so you have to specify each parameter in a key=value format. For example, the previous command can be written with the --mount option as this, using the type, source and target options. The type in this case is bind, the source is the location on my host, and the target is the location on my container.
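Assuming the same /data/mysql example, the two forms of that bind mount would look roughly like this:

```
# old style: bind mount using -v
docker run -v /data/mysql:/var/lib/mysql mysql

# new style: the same bind mount using --mount with key=value pairs
docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
```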
So who is responsible for doing all of these operations: maintaining the layered architecture, creating a writable layer, moving files across layers to enable copy-on-write, etc.? It's the storage drivers. Docker uses storage drivers to enable layered architecture. Some of the common storage drivers are AUFS, BTRFS, ZFS, Device Mapper, Overlay and Overlay2. The selection of the storage driver depends on the underlying OS being used. For example, with Ubuntu the default storage driver is AUFS, whereas this storage driver is not available on other operating systems like Fedora or CentOS. In that case, Device Mapper may be a better option. Docker will choose the best storage driver available automatically based on the operating system. The
different storage drivers also provide different performance and stability characteristics. So you
may want to choose one that fits the needs of your application, and your organization. If you would
like to read more on any of these storage drivers, please refer to the links in the
attached documentation. For now, that is all from the Docker architecture concepts.
See you in the next lecture. Hello, and welcome to this lecture on Docker compose. Going forward, we
will be working with configurations in YAML files, so it is important that you are comfortable with YAML. Let's recap a few things real quick. Earlier in the course we first learned how to run a Docker container using the docker run command. If we needed to set up a complex application running multiple services, a better way to do it is to use Docker Compose. With Docker Compose, we could create a configuration file in YAML format called docker-compose.yml and put together the different services and the options specific to running them in this file. Then we could simply run a docker compose up command to bring up the entire application stack. It is easier to implement, run and maintain
as all changes are always stored in the Docker compose configuration file. However, this is all
only applicable to running containers on a single Docker host. And for now, don't worry about the
YAML file; we will take a closer look at the YAML file in a bit and see how to put it together.
That was a really simple application that I put together. Let us look at a better example.
I'm going to use the same sample application that everyone uses to demonstrate Docker. It's
a simple yet comprehensive application developed by Docker to demonstrate the various features
available in running an application stack on Docker. So let's first get familiarized with the
application, because we will be working with the same application in different sections through
the rest of this course. This is a sample voting application which provides an interface for a user
to vote and another interface to show the results. The application consists of various components
such as the voting app, which is a web application developed in Python, to provide the user with
an interface to choose between two options, a cat and the dog. When you make a selection,
the vote is stored in Redis. For those of you who are new to Redis, Redis in this case serves as an in-memory database. This vote is then processed by the worker, which is an application written in
.NET. The worker application takes the new vote and updates the persistent database, which is PostgreSQL in our case. The PostgreSQL database simply has a table with the number of votes for each category, cats and dogs. In this case, it increments the number of votes for cats, as our vote was for cats. Finally, the result of the vote is displayed in a web interface, which is another web application developed in Node.js. This result application reads the count of votes from the PostgreSQL database and displays it to the user. So that is the architecture and
data flow of this simple voting application stack. As you can see, this sample application is
built with a combination of different services, different development tools, and multiple
different development platforms, such as Python, Node.js, .NET, etc. This sample application will
be used to showcase how easy it is to set up an entire application stack consisting of diverse
components in Docker. Let us keep aside Docker swarm services and stacks for a minute and see
how we can put together this application stack on a single Docker engine using first Docker run
commands, and then Docker compose. Let us assume that all images of applications are already
built and are available on a Docker repository. Let's start with the data layer. First, we run
the Docker run command to start an instance of Redis. By running the Docker run Redis command,
we will add the -d parameter to run this container in the background, and we will also name
the container Redis. Now naming the containers is important. Why is that important? Hold that
thought we will come to that in a bit. Next, we will deploy the Postgres SQL database by
running the docker run postgres command. This time too, we will add the -d option to run
this in the background and name this container DB for database. Next, we will start with the
application services. We will deploy a front end app for voting interface by running an instance
of voting app image, run the Docker run command and name the instance vote. Since this is a web
server, it has a web UI instance running on port 80. We will publish that port to 5000 on the host
system, so we can access it from a browser. Next, we will deploy the result web application
that shows the results to the user. For this, we deploy a container using the result-app image and publish port 80 to port 5001 on the host. This way, we can access the web UI of the result app in a browser. Finally, we deploy the worker by running an instance of
the worker image.
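Put together, the sequence of commands described above would look roughly like this sketch; the image names voting-app, result-app and worker are assumed for illustration and may differ from the actual repository names, and database environment variables are omitted for brevity:

```
docker run -d --name=redis redis
docker run -d --name=db postgres
docker run -d --name=vote -p 5000:80 voting-app
docker run -d --name=result -p 5001:80 result-app
docker run -d --name=worker worker
```

Okay, now this is all good, and we can see that all the instances are running on the host. But there is a problem: it just does not seem to work. The problem is that we have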
successfully run all the different containers, but we haven't actually linked them together. As
in we haven't told the voting web application to use this particular Redis instance, there could
be multiple Redis instances running. We haven't told the worker and the result app to use this particular PostgreSQL database that we ran. So how do we do that? That is where we use links.
Link is a command line option, which can be used to link two containers together. For example, the
voting app web service is dependent on the Redis service. When the web server starts, as you can
see, in this piece of code on the web server, it looks for a Redis service running on host Redis.
But the voting app container cannot resolve a host by the name Redis. To make the voting app aware
of the Redis service, we add a link option while running the voting app container to link it to the
Redis container, adding a dash dash link option to the Docker run command and specifying the name
of the Redis container which is which in this case is Redis followed by a colon and the name
of the host that the voting app is looking for, which is also Redis. In this case, remember that
this is why we named the container when we ran it the first time so we could use its name while
creating a link. What this is in fact doing is it creates an entry in the /etc/hosts file on the voting app container, adding an entry with the hostname redis and the internal IP of the
Redis container. Similarly, we add a link for the result app to communicate with the database by
adding a link option to refer to the database by the name db. As you can see in the source code of the application, it makes an attempt to connect to a Postgres database on host db. Finally, the
worker application requires access to both the Redis as well as the Postgres database. So we
add two links to the worker application, one link to link the Redis and the other link to link
the Postgres database.
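With the links added, the run commands for the dependent services would look something like this sketch, again with the same assumed image names:

```
docker run -d --name=vote -p 5000:80 --link redis:redis voting-app
docker run -d --name=result -p 5001:80 --link db:db result-app
docker run -d --name=worker --link db:db --link redis:redis worker
```

Note that using links this way is deprecated, and the support may be removed in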
future in Docker. This is because, as we will see in some time, advanced and newer concepts in Docker swarm and networking support better ways of achieving what we just did here with
links. But I wanted to mention it anyways, so you learn the concept from the very basics. Once
we have the Docker run commands tested and ready, it is easy to generate a Docker compose file from
it. We start by creating a dictionary of container names, we will use the same name we used in the
Docker run commands. So we take all the names and create a key with each of them. Then under each
item, we specify which image to use. The key is the image and the value is the name of the image
to use. Next, inspect the commands and see what other options are used. We published ports.
So let's move those ports under the respective containers. So we create a property called ports
and list all the ports that you would like to publish under that. Finally, we are left with
links. So whichever container requires the link, create a property under it called links and
provide an array of links, such as redis or db. Note that you could also specify the name of the link this way, without the colon and the target name, and it will create a link with the same name as the target name. Specifying db:db is similar to simply specifying db; it will assume the same value to create the link. Now that we're all done with our Docker Compose file, bringing up the stack is really simple: just run the docker compose up command to bring up the entire application stack.
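Assembled from the run commands above, the docker-compose.yml could look something like this sketch, in the original version 1 style and with the same assumed image names:

```yaml
redis:
  image: redis

db:
  image: postgres

vote:
  image: voting-app
  ports:
    - "5000:80"
  links:
    - redis

result:
  image: result-app
  ports:
    - "5001:80"
  links:
    - db

worker:
  image: worker
  links:
    - redis
    - db
```

When we looked at the example of the sample voting application, we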
assumed that all images are already built out of the five different components. Two of them
Redis and Postgres images we know are already available on Docker Hub. There are official images
from Redis and Postgres. But the remaining three are our own application, it is not necessary
that they are already built and available in the Docker registry. If we would like to instruct
Docker Compose to run a docker build instead of trying to pull an image, we can replace the image line with a build line and specify the location of a directory which contains the application
code, and a Docker file with instructions to build the Docker image. In this example, for the
voting app, I have all the application code in a folder named vote which contains all application
code, and a Docker file. This time when you run the Docker compose up command, it will first
build the images, give a temporary name for it, and then use those images to run containers using
the options you specified before. Similarly, use the build option to build the two other services from the respective folders.
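For the voting app, that change would look roughly like this; the folder name ./vote is the one assumed in this example:

```yaml
vote:
  build: ./vote
  ports:
    - "5000:80"
  links:
    - redis
```

We will now look at different versions of the Docker Compose file. This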
is important because you might see Docker compose files in different formats at different places
and wonder why some look different. Docker compose evolved over time, and now supports a lot more
options than it did in the beginning. For example, this is the trimmed down version of the Docker
compose file we used earlier. This is in fact, the original version of Docker compose file known
as version one. This had a number of limitations. For example, if you wanted to deploy containers on
a different network other than the default bridge network, there was no way of specifying
that in this version of the file. Also, say you have a dependency or startup order of some
kind. For example, your database container must come up first, and only then should the voting application be started. There was no way you could specify that in version one of the Docker Compose file. Support for these came in version two. With version two and up, the format of the
file also changed a little bit. You no longer specify your stack information directly as you
did before. It is all encapsulated in a services section. So create a property called services
in the root of the file, and then move all the services underneath that. You will still use the
same Docker compose up command to bring up your application stack. But how does Docker compose
know what version of the file you're using? You're free to use version one or version two depending
on your needs. So how does Docker Compose know what format you're using? For version two and up, you must specify the version of the Docker Compose file you're intending to use by specifying the version at the top of the file, in this case version: 2. Another difference is with
networking. In version one, Docker Compose attaches all the containers it runs to the default bridge network, and then uses links to enable communication between the containers, as we did before. With version two, Docker Compose automatically creates a dedicated bridge network
for this application, and then attaches all containers to that new network. All containers are
then able to communicate to each other using each other's service name. So you basically don't need
to use links in version two of Docker compose, you can simply get rid of all the links you
mentioned in version one when you convert a file from version one to version two. And finally,
version two also introduces a depends_on feature, if you wish to specify a startup order. For instance, say the voting web application is dependent on the Redis service, so you need to ensure that the Redis container is started first, and only then should the voting web application be started. We could add a depends_on property to the voting application and indicate that it is dependent on redis.
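In version 2 format, the relevant part of the file would look something like this, trimmed down to just the redis and vote services and using the same assumed image name:

```yaml
version: "2"
services:
  redis:
    image: redis
  vote:
    image: voting-app
    ports:
      - "5000:80"
    depends_on:
      - redis
```

Then comes version three, which is the latest as of today. Version three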
is similar to version two in the structure, meaning it has a version specification at the top
and the Services section under which you put all your services just like in version two, make sure
to specify the version number as three at the top. Version three comes with support for Docker swarm, which we will see later on. There are some options that were removed and added; to see details on those, you can refer to the documentation section using the link in the reference page following this lecture. We will see version three in much more detail later, when we discuss Docker
stacks. Let's talk about networks in Docker compose. Getting back to our application. So far,
we've been just deploying all containers on the default bridge network. Let us say we modify the
architecture a little bit to contain the traffic from the different sources. For example, we would
like to separate the user generated traffic from the applications internal traffic. So we create
a front end network dedicated for traffic from users, and a back end network dedicated for
traffic within the application. We then connect the user facing applications which are the voting
app and the result app to the front end network and all the components to an internal back end
network. So back in our Docker Compose file, note that I have actually stripped out the ports section for simplicity's sake; they're still there, but just not shown here. The first thing we need to do if we were to use networks is to define the networks we are going to use. In our case, we have two networks, front-end and back-end. So create a new property called networks at the root level, adjacent to the services in the Docker Compose file, and add
a map of networks we are planning to use. Then, under each service, create a networks property
and provide a list of networks that service must be attached to. In the case of redis and db, it's only the back-end network. In the case of the front-end applications, such as the voting app and the result app, they are required to be attached to both the front-end and the back-end networks. You must also add a section for the worker container to be added to the back-end network; I have just omitted that in this slide due to space constraints.
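A sketch of what that would look like in the compose file, again with the ports and the worker service trimmed for brevity:

```yaml
version: "2"
services:
  redis:
    image: redis
    networks:
      - back-end
  db:
    image: postgres
    networks:
      - back-end
  vote:
    image: voting-app
    networks:
      - front-end
      - back-end
  result:
    image: result-app
    networks:
      - front-end
      - back-end

networks:
  front-end:
  back-end:
```

Now that you have seen Docker Compose files, head over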
to the coding exercises and practice developing some Docker compose files. That's it for this
lecture. And I will see you in the next lecture. We will now look at the Docker registry. So what is a registry? If the containers were the rain, then they would rain from the Docker registry, which would be the clouds. That's where Docker images are stored. It's a central repository of all Docker
images. Let's look at a simple nginx container. We run the Docker run nginx command to run
an instance of the nginx image. Let's take a closer look at that image name. Now the name is
nginx. But what is this image, and where is this image pulled from? This name follows Docker's image naming convention. nginx here is the image or the repository name. When you say nginx, it's actually nginx/nginx. The first part stands for the user or account name, so if you don't provide an account or a repository name, it assumes that it is the same as the given name, which in this case is nginx. The user name is usually your Docker Hub account name, or if it is an organization,
then it's the name of the organization. If you were to create your own account and create
your own repositories or images under it, then you would use a similar pattern. Now, where
are these images stored and pulled from? Since we have not specified the location where these
images are to be pulled from, it is assumed to be on Dockers default registry, Docker Hub, the DNS
name for which is docker.io. The registry is where all the images are stored. Whenever you create a
new image or update an existing image, you push it to the registry and every time anyone deploys
this application, it is pulled from that registry. There are many other popular registries as well.
For example, Google's registry is at gcr.io, where a lot of Kubernetes related images are
stored, like the ones used for performing end to end tests on the cluster. These are all publicly
accessible images that anyone can download, and access. When you have applications built in
house that shouldn't be made available to the public, hosting an internal private registry may be a good solution. Many cloud service providers, such as AWS, Azure, or GCP, provide a private registry by default when you open an account with them. On any of these solutions, be it Docker Hub, Google's registry, or your internal private registry, you may choose to make a repository private, so that it can only be accessed using a set of credentials. From Docker's perspective, to run a container using an image from a private registry, you first log into your private registry using the docker login command and input your credentials. Once successful, run the application using the private registry as part of the image name, like this.
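As a sketch, with a hypothetical registry address and image name, that would look like:

```
# log in to the private registry first
docker login private-registry.io

# run the container using the full image name, including the registry
docker run private-registry.io/apps/internal-app
```

Now, if you did not log into the private registry, it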
will come back saying that the image cannot be found. So remember to always log in before pulling
or pushing to a private registry. We said that cloud providers like AWS or GCP provide a private
registry when you create an account with them. But what if you're running your application on
premise and don't have a private registry? How do you deploy your own private registry within your
organization? The Docker registry is itself just another application and, of course, is available as a Docker image. The name of the image is registry, and it exposes the API on port 5000. Now that you have your custom registry running at port 5000 on this Docker host, how do you push your own image to it? Use the docker image tag command to tag the image with the private registry URL in it. In this case, since it's running on the same Docker host, I can use localhost:5000 followed by the image name. I can then push my image to my local private registry using the
command Docker push and the new image name with the Docker registry information in it. From there
on, I can pull my image from anywhere within this network using either localhost, if I'm on the same host, or the IP or domain name of my Docker host, if I'm accessing it from another host in my environment.
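A rough sketch of the full sequence, using a hypothetical my-image name and a hypothetical host IP for the remote pull:

```
# run the registry image itself, exposing its API on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# tag my image with the private registry URL and push it
docker image tag my-image localhost:5000/my-image
docker push localhost:5000/my-image

# pull it from the same host, or from another host using the Docker host's address
docker pull localhost:5000/my-image
docker pull 192.168.56.100:5000/my-image
```

Well, that's it for this lecture. Head over to the practice test and practice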
working with private Docker registries. Welcome to this lecture on Docker engine.
In this lecture, we will take a deeper look at Docker's architecture, how it actually runs applications in isolated containers and how it works under the hood. Docker engine, as we have learned before, simply refers to a host with Docker installed on it. When you install Docker on a Linux host, you're actually installing three different components: the Docker daemon, the REST API server, and the Docker CLI. The Docker daemon is a background process that manages Docker
objects such as the images, containers, volumes and networks. The Docker REST API server is the
API interface that programs can use to talk to the daemon and provide instructions, you could create
your own tools using this REST API. And the Docker CLI is nothing but the command line interface
that we've been using until now to perform actions such as running a container, stopping
containers, destroying images, etc. It uses the REST API to interact with the Docker daemon.
Something to note here is that the Docker CLI need not necessarily be on the same host. It could be
on another system like a laptop, and can still work with a remote Docker engine. Simply use the
-H option on the docker command and specify the remote Docker engine address and port, as shown here. For example, to run a container based on nginx on a remote Docker host, run the command docker -H=10.123.2.1:2375 run nginx.
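That is:

```
# -H points the Docker CLI at a remote Docker engine (address taken from the example above)
docker -H=10.123.2.1:2375 run nginx
```

Now let's try and understand how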
exactly our applications are containerized in Docker. How does it work under the hood? Docker uses namespaces to isolate workspaces. Process IDs, network, inter-process communication, mounts and Unix timesharing systems are created in their own namespaces, thereby providing isolation between containers. Let's take a look at one of the namespace isolation techniques: process ID
namespaces. Whenever a Linux system boots up, it starts with just one process with a process
ID of one. This is the root process and kicks off all the other processes in the system.
By the time the system boots up completely, we have a handful of processes running. This can
be seen by running the PS command to list all the running processes. The process IDs are unique
and two processes cannot have the same process ID. Now if we were to create a container, which is
basically like a child system within the current system, the child system needs to think that it
is an independent system on its own. And it has its own set of processes originating from a root
process with a process ID of one. But we know that there is no hard isolation between the containers
and the underlying host. So the processes running inside the container are in fact processes running
on the underlying host. And so two processes cannot have the same process ID of one. This is
where namespaces come into play. With process ID namespaces. Each process can have multiple
process IDs associated with it. For example, when the processes start in the container, it's
actually just another set of processes on the base Linux system. And it gets the next available
process IDs, in this case 5 and 6. However, they also get another process ID, starting with PID 1, in the container namespace, which is only visible inside the container. So the container thinks that it has its own root process tree, and so it is an independent system. So how does that
relate to an actual system? How do you see this on a host? Let's say I were to run an nginx server as a container. We know that the nginx container runs an nginx service. If we were to list all the services inside the Docker container, we see the nginx service running with a process ID of 1. This is the process ID of the service inside of the container namespace. If we list the services on the Docker host, we will see the same service, but with a different process ID. That indicates that all processes are in fact running on the same host, but separated into their own containers using namespaces.
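A rough way to see this for yourself (the exact PIDs will vary, and whether ps is available inside the image depends on the image):

```
# run an nginx container and list its processes from inside the container namespace
docker run -d --name webapp nginx
docker exec webapp ps -ef      # the nginx master process shows up with PID 1

# list the same process from the host - it appears with a different, higher PID
ps -ef | grep nginx
```

So we learned that the underlying Docker host as well as the containers share the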
same system resources such as CPU and memory. How much of the resources are dedicated to the host
and the containers? And how does Docker manage and share the resources between the containers.
By default, there is no restriction as to how much of a resource a container can use. And
hence, a container may end up utilizing all of the resources on the underlying host. But there
is a way to restrict the amount of CPU or memory a container can use. Docker uses cgroups, or control groups, to restrict the amount of hardware resources allocated to each container. This can
be done by providing the --cpus option to the docker run command. Providing a value of 0.5 will ensure that the container does not take up more than 50% of the host CPU at any given time. The same goes with memory: setting a value of 100m to the --memory option limits the amount of memory the container can use to 100 megabytes.
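For example, using the ubuntu image as a stand-in:

```
# limit the container to half a CPU
docker run --cpus=.5 ubuntu

# limit the container to 100 megabytes of memory
docker run --memory=100m ubuntu
```

If you're interested in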
reading more on this topic, refer to the links I posted in the reference page. That's it for now
on Docker engine. In the next lecture, we talk about other advanced topics on Docker storage and
file systems. See you in the next lecture. During this course, we learned that containers share the
underlying OS kernel. And as a result, we cannot have a Windows container running on Linux hosts
or vice versa. We need to keep this in mind while going through this lecture, as it's a very important concept and most beginners tend to have an issue with it. So what are the options available
for Docker on Windows? There are two options available. The first one is Docker on Windows
using Docker toolbox. And the second one is the Docker desktop for Windows option. We will look at
each of these now. Let's take a look at the first option, Docker Toolbox. This was the original
support for Docker on Windows. Imagine that you have a Windows laptop and no access to any Linux
system whatsoever. But you would like to try Docker, you don't have access to a Linux system in
the lab or in the cloud. What would you do? What I did was to install a virtualization software
on my Windows system like Oracle VirtualBox or VMware Workstation and deploy a Linux VM on it
such as Ubuntu or Debian, then install Docker on the Linux VM and then play around with it. This
is what the first option really does. It doesn't really have anything much to do with Windows,
you cannot create Windows based Docker images, or run Windows based Docker containers. You
obviously cannot run Linux container directly on Windows either. You're just working with Docker on
a Linux virtual machine on a Windows host. Docker, however, provides us with a set of tools to make this easy, which is called the Docker Toolbox. The Docker Toolbox contains a set of tools like Oracle VirtualBox, Docker engine, Docker Machine, Docker Compose and a user interface called Kitematic. This will help you get started by simply downloading and running the Docker Toolbox executable. It will install VirtualBox and deploy a lightweight VM called boot2docker, which has Docker running in it already, so that you are all set to start with Docker easily and within a short period of time. Now what about requirements? You must ensure that
your operating system is 64-bit Windows 7 or higher and that virtualization is enabled on the system. Now remember, Docker Toolbox is the legacy solution for older Windows systems that do not meet the requirements for the newer Docker for Windows option. The
second option is the newer option called Docker desktop for Windows. In the previous option,
we saw that we had Oracle VirtualBox installed on Windows and then a Linux system and then Docker
on that Linux system. Now with Docker for Windows, we take out Oracle VirtualBox and use the native
virtualization technology available with Windows called Microsoft Hyper V. During the installation
process for Docker for Windows, it will still automatically create a Linux system underneath.
But this time it is created on Microsoft Hyper-V instead of Oracle VirtualBox and has Docker running on that system. Because of this dependency on Hyper-V, this option is only supported for
Windows 10 Enterprise or Professional Edition and on Windows Server 2016. Because both
these operating systems come with Hyper V support by default. Now here's the most important
point. So far, whatever we've been discussing with Docker's support for Windows is strictly for Linux containers: Linux applications packaged into Linux Docker images. We're not talking about Windows applications, Windows images or Windows containers. Both the options we just
discussed will help you run a Linux container on a Windows host. With Windows Server 2016, Microsoft
announced support for Windows containers for the first time. You can now package Windows applications into Windows Docker containers and run them on a Windows Docker host
using Docker desktop for Windows. When you install Docker desktop for Windows, the default option
is to work with Linux containers. But if you would like to run Windows containers, then you
must explicitly configure Docker for Windows to switch to using Windows containers. In early 2016, Microsoft announced Windows containers. Now you could create Windows-based images and run Windows containers on a Windows server, just like how you would run Linux containers on a Linux system. You can create Windows images, containerize your applications and share them
through the Docker store as well. Unlike in Linux, there are two types of containers in Windows. The
first one is a Windows Server container, which works exactly like Linux containers, where the OS kernel is shared with the underlying operating system. To allow a better security boundary between containers, and to have kernels with different versions and configurations coexist, the second option was introduced, known as Hyper-V isolation. With Hyper-V isolation, each container
is run within a highly optimized virtual machine guaranteeing complete kernel isolation between the
containers and the underlying host. Now while in the Linux world, you had a number of base images
for Linux systems such as Ubuntu, Debian, Fedora, Alpine, etc. If you remember, that is what you specify at the beginning of the Dockerfile. In the Windows world, we have two options: the Windows Server Core and Nano Server. Nano Server is a headless deployment option for Windows Server
which runs at a fraction of size of the full operating system. You can think of this like the
Alpine image in Linux. The Windows Server Core, though is not as light weight as you might
expect it to be. Finally, Windows containers are supported on Windows Server 2016, Nano Server and Windows 10 Professional and Enterprise Edition. Remember, Windows 10 Professional and Enterprise Edition only supports Hyper-V isolated containers, meaning, as we just discussed, every container deployed is deployed on a highly optimized virtual machine. Well, that's it about
Docker on Windows. Now before I finish, I want to point out one important fact. We saw two ways of
running a Docker container, using VirtualBox or Hyper-V. But remember, VirtualBox and Hyper-V cannot coexist on the same Windows host. So if you started off with Docker Toolbox with
VirtualBox and if you plan to migrate to Hyper V, remember you cannot have both solutions at the
same time. There is a migration guide available on Docker documentation page on how to migrate
from VirtualBox to Hyper-V. That's it for now. Thank you, and I will see you in the next lecture. We
now look at Docker on Mac. Docker on Mac is similar to Docker on Windows. There are two options to get started: Docker on Mac using Docker Toolbox, or the Docker Desktop for Mac option. Let's look at the first option, Docker Toolbox. This was the original support for Docker on Mac. It is Docker on a Linux VM created using VirtualBox on Mac, just as with Windows. It has nothing to do with Mac applications or Mac-based images or Mac containers; it purely runs Linux containers on macOS. Docker Toolbox contains a set of tools like Oracle VirtualBox, Docker engine, Docker Machine, Docker Compose, and a user interface called Kitematic. When you download and install the Docker Toolbox executable, it installs VirtualBox and deploys a lightweight VM called boot2docker, which has Docker running in it already. This requires macOS 10.8 or newer. The second option is the newer
option called Docker Desktop for Mac. With Docker Desktop for Mac, we take out Oracle VirtualBox and use the HyperKit virtualization technology. During the installation process for Docker for Mac, it will still automatically create a Linux system underneath, but this time it is created on HyperKit instead of Oracle VirtualBox and has Docker running on that system. This requires macOS Sierra 10.12 or newer, and the Mac hardware must be a 2010 or newer model. Finally, remember
that all of this is to be able to run the Linux container on Mac. As of this recording, there are
no Mac based images or containers. Well, that's it with Docker on Mac for now. We will now try to
understand what container orchestration is. So far in this course, we have seen that with Docker,
you can run a single instance of the application with a simple Docker run command. In this case,
to run a Node.js-based application, you run the docker run nodejs command. But that's just one
instance of your application on one Docker host. What happens when the number of users increases and that instance is no longer able to handle the load? You deploy additional instances of your
application by running the Docker run command multiple times. So that's something you have to
do yourself, you have to keep a close watch on the load and performance of your application and
deploy additional instances yourself. And not just that, you have to keep a close watch on the health
of these applications. If a container was to fail, you should be able to detect that and run the
Docker run command again to deploy another instance of that application. What about the
health of the Docker host itself? What if the host crashes and is inaccessible? The containers hosted
on that host become inaccessible too. So what do you do? In order to solve these issues, you will need
a dedicated engineer who can sit and monitor the state, performance and health of the containers and
take necessary actions to remediate the situation. But when you have large applications deployed with tens of thousands of containers, that's not a practical approach. So you could build your own scripts, and that will help you tackle these issues to some extent. Container orchestration
is just a solution for that. It is a solution that consists of a set of tools and scripts
that can help host containers in a production environment. Typically, a container orchestration
solution consists of multiple Docker hosts that can host containers. That way, even if one fails,
the application is still accessible through the others. A container orchestration solution
easily allows you to deploy hundreds or thousands of instances of your application with a single command. This is a command used for Docker swarm; we will look at the command itself in a bit. Some
orchestration solutions can help you automatically scale up the number of instances when users
increase and scale down the number of instances when the demand decreases. Some solutions can even
help you in automatically adding additional hosts to support the user load, and not just clustering
and scaling. The container orchestration solutions also provide support for advanced networking
between these containers across different hosts, as well as load balancing user requests across
different hosts. They also provide support for sharing storage between the hosts, as well as
support for configuration management and security within the cluster. There are multiple container
orchestration solutions available today: Docker Swarm from Docker, Kubernetes from Google and Mesos from Apache. While Docker Swarm is really easy to set up and get started with, it lacks some of the advanced auto-scaling features required for complex production-grade applications. Mesos, on the other hand, is quite difficult to set up and get started with, but supports many advanced features. Kubernetes is arguably the most popular of them all. It is a bit difficult to set up and get started with, but
provides a lot of options to customize deployments and has support for many different vendors.
Kubernetes is now supported on all public cloud service providers like GCP, Azure and AWS and
the Kubernetes project is one of the top-ranked projects on GitHub. In upcoming lectures, we will
take a quick look at Docker swarm and Kubernetes. We will now get a quick introduction to Docker
swarm. Docker swarm has a lot of concepts to cover and requires its own course. But we will
try to take a quick look at some of the basic details so you can get a brief idea on what it is.
With Docker swarm, you could now combine multiple Docker machines together into a single cluster. Docker swarm will take care of distributing your services or your application instances onto separate hosts for high availability and for load balancing across different systems and hardware. To set up a Docker swarm, you must first have multiple hosts with Docker installed on
them, then you must designate one host to be the manager or the master or the swarm manager as it
is called, and the others as slaves or workers. Once you're done with that, run the docker swarm init command on the swarm manager, and that will initialize the swarm manager. The output will also provide the command to be run on the workers; copy the command and run it on the worker nodes to join the manager. After joining the swarm, the workers are also referred to as nodes.
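As a sketch, the sequence looks something like this; the token and manager IP in the join command are placeholders for whatever the init command actually prints:

```
# on the swarm manager
docker swarm init

# on each worker node, using the command printed by the manager
docker swarm join --token <token> <manager-ip>:2377
```

With that,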
you're now ready to create services and deploy them on the swarm cluster. So let's get into
some more details. As you already know, to run an instance of my web server, I run the Docker run
command and specify the name of the image I wish to run. This creates a new container instance of
my application and serves my web server. Now that we have learned how to create a swarm cluster, how
do I utilize my cluster to run multiple instances of my web server. Now one way to do this would
be to run the docker run command on each worker node. But that's not ideal, as I might have to log into each node and run this command, and there could be hundreds of nodes. I will have to set up load balancing myself, I will have to monitor the state of each instance myself, and if instances were to fail, I'll have to restart them myself. So it's going to be an impossible task.
And that is where Docker swarm orchestration comes in. Docker swarm orchestrator does all of this for
us. So far, we've only set up the swarm cluster, but we haven't seen orchestration in action.
The key component of swarm orchestration is the Docker service. Docker services are one or more instances of a single application or service that run across the nodes in the swarm cluster. For example, in this case, I could create a Docker service to run multiple instances of my
web server application across worker nodes in my swarm cluster. For this, I run the Docker service,
create command on the manager node and specify my image name there, which is my web server in this
case, and use the option replicas to specify the number of instances of my web server I would like
to run across the cluster. Since I specified three replicas, I get three instances of my web server distributed across the different worker nodes. Remember, the docker service command must be run on the manager node and not on the worker node.
similar to the Docker run command in terms of the options passed, such as the dash e environment
variable, the dash p for publishing ports, the network option to attach container to
a network, etc. Well, that's a high level introduction to Docker swarm, there is a lot more
to know, such as configuring multiple managers, overlay networks, etc. As I mentioned, it requires
its own separate course. Well, that's it for now. In the next lecture, we will look at Kubernetes at
a high level. We will now get a brief introduction to basic Kubernetes concepts. Again Kubernetes
requires its own course, well, a few courses, at least five, but we will try to get a brief
introduction to it here. With Docker, you were able to run a single instance of an application using the Docker CLI by running the docker run command, which is great. Running an application has never been so easy before. With Kubernetes, using the Kubernetes CLI, known as kubectl, you can run a thousand instances of the same application with a single command. Kubernetes can scale it up to two thousand with another command. Kubernetes can even be configured to do this automatically, so that instances and the infrastructure itself can scale up and down based on user load. Kubernetes can
upgrade these 2000 instances of the application in a rolling upgrade fashion, one at a time
with a single command. If something goes wrong, it can help you roll back these images with a single command. Kubernetes can help you test new features of your application by only upgrading a percentage of these instances through A/B testing methods.
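Using today's kubectl syntax, and assuming a Deployment and container both named my-web-server (hypothetical names), those operations look roughly like this:

```
# scale the application up to 2000 instances
kubectl scale deployment my-web-server --replicas=2000

# perform a rolling upgrade to a new image, then roll it back if needed
kubectl set image deployment/my-web-server my-web-server=my-web-server:2
kubectl rollout undo deployment/my-web-server
```

The Kubernetes open architecture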
provides support for many, many different network and storage vendors. Any network or storage brand
that you can think of has a plugin for Kubernetes. Kubernetes supports a variety of authentication
and authorization mechanisms. All major cloud service providers have native support for
Kubernetes. So what's the relation between Docker and Kubernetes? Kubernetes uses Docker hosts to host applications in the form of Docker containers. Well, it need not be Docker all the time; Kubernetes supports alternatives to Docker as well, such as rkt or CRI-O. But let's take
a quick look at the Kubernetes architecture. A Kubernetes cluster consists of a set of nodes, let
us start with nodes. A node is a machine, physical or virtual, on which the Kubernetes set of software tools is installed. A node is a worker machine, and that is where containers will be launched by Kubernetes. But what if the node on which the application is running fails? Well, obviously, our application goes down. So you need to have more than one node. A cluster is a set of nodes grouped together. This way, even if one node fails, you have your application still accessible
from the other nodes. Now we have a cluster but who is responsible for managing this cluster?
Where is the information about the members of the cluster stored? How are the nodes monitored?
When a node fails, how do you move the workload of the failed node to another worker node? That's where the master comes in. The master is a node with the Kubernetes control plane components
installed. The master watches over the nodes in the cluster and is responsible for the actual
orchestration of containers on the worker nodes. When you install Kubernetes on a system, you're
actually installing the following components: an API server, an etcd server, a kubelet service, a container runtime engine like Docker, and a bunch of controllers and the scheduler. The API server acts as the front end for Kubernetes. The users, management devices and command line interfaces all talk to the API server to interact with the Kubernetes cluster. Next is etcd, a key-value store. etcd is a distributed, reliable key-value store used by Kubernetes to store all data
used to manage the cluster. Think of it this way, when you have multiple nodes and multiple
masters in your cluster, etcd stores all that information on all the nodes in the cluster in a distributed manner. etcd is responsible for implementing locks within the cluster to ensure
there are no conflicts between the masters. The scheduler is responsible for distributing work
or containers across multiple nodes. It looks for newly created containers and assigns them
to nodes. The controllers are the brain behind orchestration, they're responsible for noticing
and responding when nodes, containers or endpoints go down. The controllers make decisions to bring up new containers in such cases. The container runtime is the underlying software
that is used to run containers. In our case, it happens to be Docker. And finally, kubelet is
the agent that runs on each node in the cluster. The agent is responsible for making sure that the
containers are running on the nodes as expected. And finally, we also need to learn a little bit
about one of the command line utilities, known as the kube command line tool, or kubectl, or kube control as it is also called. The kubectl tool is the Kubernetes CLI, which is used to deploy and manage applications on a Kubernetes cluster, to get cluster-related information, to get the status of the nodes in the cluster, and many other things. The kubectl run command is used to deploy an application on the cluster. The kubectl cluster-info command is used to view information about the cluster. And the kubectl get nodes command is used to list all the nodes that are part of the cluster. So to run hundreds of instances of your application across hundreds of nodes, all I need is a single Kubernetes command like this.
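For example, with my-web-server again standing in as a hypothetical image and application name:

```
# deploy an application on the cluster
kubectl run my-web-server --image=my-web-server

# view information about the cluster
kubectl cluster-info

# list all the nodes that are part of the cluster
kubectl get nodes
```

Well, that's all we have for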
now, a quick introduction to Kubernetes and its architecture. We currently have three courses on
code cloud on Kubernetes that will take you from the absolute beginner to a certified expert.
So have a look at it when you get a chance. So we're at the end of this beginner's course
on Docker. I hope you had a great learning experience. If so, please leave a comment
below. If you like my way of teaching, you will love my other courses hosted on my site at code cloud. We have courses for Docker swarm, Kubernetes, advanced courses on Kubernetes certification, as well as OpenShift. We have courses for automation tools like Ansible, Chef and Puppet, and many more on the way. Visit code cloud at www.kodekloud.com. Well, thank you so
much for your time, and until next time, goodbye.