Hello everyone, welcome to this session
on Docker. Today, in this session, we're going to learn docker end-to-end, so
let's get started. The first topic that we're going to discuss is an
introduction to docker, after that, we're going to discuss the common
docker commands that you will be using in your day-to-day life. Once we're done
with that, we'll move on and talk about docker files: what they are and why exactly they are used. After that, we'll understand what Docker volumes are, then move on to break a monolithic application into microservices and deploy it using Docker. Once we're done with that, we'll move on and understand what Docker Compose is. Once we're done with Docker Compose, I'll explain what Container Orchestration is. So,
in container orchestration, we will be basically using Docker Swarm, and
after that, once you've understood what Docker Swarm is, we'll move on and
deploy a multi-tier application using Docker Swarm. So guys, this is the agenda
for the session. I hope it's clear to you. So, let's move on and start with the
first topic: what is Docker? Docker essentially grew out of a need, which came from the problem that we're going to discuss right now. So, imagine yourself as a
developer and probably you're just creating a website on, say PHP. So,
if you create a website on PHP, the first thing that you would do is write some
code. Now when you test that code, the kind of environment that you would
be putting it in, let's see what all components would be there in that
particular environment. So, the first thing that you would need in this
environment, that is around the PHP file, is an operating system. So, an
operating system probably would have a browser or would have a text
editor. So, you need an operating system to work on. Secondly,
because you are developing a PHP application, of course you need the PHP
software installed on that operating system so that your file can actually
work. When you are developing a web application in a particular language, the language alone will not suffice; you also have to take into consideration the third-party applications and third-party libraries that you would be including in your code. So, the third component that you would need when you are developing a website is libraries. For example, if your PHP website has to connect to a MySQL database, then it would need the php-mysql library for connecting to it. So, these are the
components that are basically required by a developer to run his program. So, what he does is he writes a program, he configures the environment according
to the program, and when everything goes fine his program works well. Now, the
problem occurred when this particular program had to be given to someone else.
So, the developer finished his job and it is the job of a tester to check this
program. So, the developer would give the PHP file to the operations guy, and let's
see what the operations guy would do. Now the operations guy is in the exact same position that the developer was in when he started developing his code, but the difference here is that the PHP file is already written and running on the developer's machine; he just has to run this file on his own system. So, the first thing that the ops guy would do is replicate the environment that the developer has. So, he would get the same OS, the same software, and
the same libraries to run that PHP program. Now there are a lot of versions
for a particular software, for example, for PHP you have PHP 7 which is
currently running, but there are companies still working on PHP 5.6 because of probably legacy problems or because
the commands have changed, and if they want to move on to 7, they will have to
change all the commands in the codebase, so there are a lot of reasons. So, there is a particular version that this company is following, and the ops guy has to have the exact same version to make the developer's file work. Now, he used his best possible judgment: he installed the PHP software, he installed the libraries, he installed the OS, but still the ops guy couldn't make the file run, and he came to the conclusion that the code is faulty. But the developer says, "it ran fine on my system; it's your system which has the problem, so fix it yourself." So, these were some of the statements exchanged between developers and operations guys. I mean, if you
think about it, it was actually nobody's fault. The operations guy did what he could in the best of his ability to
replicate the environment, but it didn't work out. Now you guys might argue: why don't we use VMs in this case? The developer could work in a particular VM and then just give that VM to the ops guy, who could work on the same VM and run the file. I mean, that is a viable solution, but the problem is that VMs are too large to hand over to the ops guy or the testing guy for every feature that a developer builds. In that case, every developer has to have his own version of the VM, because if you are adding, say, feature A, and feature A requires three or four more libraries that you have included in your particular VM, then it will become inconsistent with what the other developers have. Now imagine the ops guy having several versions of VMs and not knowing which VM he has to deploy on the production server or on the testing server, which will basically act as the final version of the image on which all the features from all the developers have to work. It's a very hectic situation. So, the problem was that VMs were very large, and they were complex to handle as well. When we are talking about a staging area or the deployment area, the production area, they were a little difficult to handle.
Now, this was the problem that existed before docker. So, what did we need? We basically needed a wrapper. In the case we just discussed, the wrapper was a VM: it contained all the operating system files, all the software, all the code, and all the libraries together, and could basically be lifted and shifted to a different system where it would execute. But we needed something more portable, something smaller, that could easily be given to another person for testing. We needed a wrapper around our
files. And the answer to this particular scenario is that if we get a wrapper which contains all the files, then each and every environment of ours becomes the same. The developer environment is going to be the same, and the moment it gets passed on to the ops guy, he will again deploy the same wrapper, and his problem is also solved, because the ops guy does not have to match the versions of the OS, the software, or the libraries; everything is there inside that wrapper, everything is there inside that
container. He just has to run that container, and he can just see if the
code is working properly according to what has been specified or not. So,
the answer was that all the environments, i.e., the developer environment, the staging or
testing environment, the production environment, all of these environments
are now going to be the same. This was made possible using docker. So, what is docker? Docker is basically software which helps us create this wrapper around the code files, the operating system files, and the libraries that we mentioned are required for a particular piece of code to work. But the awesome thing about docker is that you do not need the environment to be in GBs; it need not be somewhere around 800 MB, 900 MB, 1 GB, or 1.5 GB. A container image can be tiny; the Alpine image, for instance, is only around 5 MB. So it becomes very easy to give it to other people to test your code, with the whole environment set up in that particular container. So, docker is a piece of software which helps us enable that. Docker is basically a tool which helps us in
containerizing an application. Now let's see how docker does this. How is it so portable? How is it so low in size?
So basically this is the architecture of docker. So, at the base level, you have the
hardware that basically means that you'd have a computer, and on that computer you
would have an operating system present. So this is your operating system.
So it's as simple as you have a laptop and on that laptop you have Windows or
Linux installed. Now once you have the operating system installed, the next
thing is the container engine that you would install. So, the container engine is
nothing but the docker software that you would install on top of your operating
system, and that is it. Once you have installed the container engine, you can run any type of container that is available in the repository or on the web. For example, if your code needs an Ubuntu container, all you have to do is run docker pull ubuntu, and what would it do? It would basically download the ubuntu container image, and then you can add your code files inside that ubuntu container and start working. But how is it so low in size? How is it not in GBs, when we compare it to the Ubuntu operating system, the desktop version, which is around 800-900 MB? How come this container is so small? The answer is that the container does not contain all the operating system files. As you can see, the container engine actually shares its space with the operating system. A container does not have a kernel of its own in place; it shares the underlying kernel of the operating system on which the container engine is installed. So, if I talk about, say, you have an Ubuntu operating system
or you have a Windows operating system, and what you do is download the container for CentOS. CentOS is basically a Linux distribution. So you download the container for CentOS, and when you're working in that container, you feel as if you are in CentOS. But the point is that the CentOS operating system is actually not installed; what is happening is that your container is sharing the resources of the host operating system, while the container has the minimum binaries and libraries required to run the CentOS commands. For example, for installing anything on CentOS, you pass the command yum install and then the package name; so the container would have the yum package in there, and it would have all the repository URLs, and that's how it behaves like a CentOS operating system, although the container is nothing but a set of binaries and libraries which are important for the CentOS commands to work. That is all a container is, but with the virtualization of the underlying kernel of the host, your container seems to be an operating system, although it is not. It is for this very reason, that a container virtualizes the underlying kernel rather than shipping its own, that containers are very lightweight: they don't need to have all the files required for an operating system, just the bare minimum binaries and libraries for that particular environment to work. So, if you again come back to see this
architecture, you have the hardware. On top of the hardware, you have the
operating system, on top of the operating system you install the docker software, and on top of the docker software you run containers. So, all these containers have
the minimum binaries or libraries, and on these you basically just put your code.
In my case, I will put my PHP code on top of the container and that would be app 1,
and there could be multiple containers running on my system. For example, one code file could be running in an ubuntu container, another could be running in a CentOS container, and a third could be running in some other Linux container, and all of this is possible using the Docker software. So, this is the architecture for a docker
container or a container in general. Now, let us compare it with the VM. So, I gave you the example of VM when I said that VMs are heavy in nature. So, let
us understand how containers are different from VMs. So, on the right side, you can see the architecture of a VM. With VMs, you have the hardware, which is the same as in the docker architecture that we saw. On top of the hardware, you have the host operating system, again the same as in the docker architecture. Now, on top of the host operating system, you have a hypervisor, a VirtualBox kind of software, which basically virtualizes the hardware away from the operating system and gives it to the VMs running on top of it. So, you have hypervisor software in the case of the VM architecture, and the container engine software in the case of the docker architecture. Then, on top of the hypervisor, you have the guest operating system, and this is where all the difference is. In the case of containers, you just have the bare minimum binaries and libraries present, but in the case of VMs, you have a whole operating system installed on top of the hypervisor. Because you have the whole operating system installed, the size of a VM is quite a bit bigger than that of a container. As for how many VMs can run on a single machine, that totally depends on the specs of the machine, but yes, you can run multiple VMs on a single machine. One more thing to note here is that there may or may not be a host operating system in the case of VMs: virtualization technology has advanced so much that some hypervisors do not require a host operating system at all and work directly at the hardware level. But usually, with the kind of laptops and specifications we work with in R&D, we go ahead with the architecture where we have an existing operating system, on top of that we install tools like VMware or VirtualBox, and on top of that we install the VMs.
So, this was the basic difference between a docker
architecture and a virtual machine architecture. Moving forward: we have understood why we need docker, and we have understood what docker exactly is. Now let's go ahead and see how we can install docker on our machines. When it comes to installation, there are basically three kinds of installation that you might come across: you could be working on an Apple or Mac system, on a Windows system, or on a Linux distribution. So, let me walk you through the installation for all three, so that we are on the same page. Once we are done with the installations and start doing hands-on, I hope you guys will all be able to follow along with the video as we perform the hands-on. So, if you're on Mac, all you have to do is go to this link. This
link will basically download the docker toolbox for Mac, and the docker toolbox installs everything: it will install Docker Compose, the docker software itself, and the other components of docker, so you don't have to worry about which component is for what. Just go to this particular link and download the docker software. Once you have it, come back to the CLI and pass the command docker version, and if you get back the version of docker running on your system, then your software is successfully installed. So, this is the installation for Mac. If we talk about installation on Windows, again, just go to this link, download the docker toolbox, and everything is going to be set up automatically for you. When we talk about ubuntu, things are a little simpler: all you have to do is pass a couple of commands, and your docker will be up and ready. For our sake, we will basically be doing the hands-on today on the Ubuntu distribution of Linux, so what I'm going to do is go back to my terminal, where we can use the PuTTY software to connect to our ubuntu machine. Just give me one second. So this is basically my
Ubuntu distribution. So, what I'm going to do is I'm going to install docker
in it. So, the first command that I'm going to pass is sudo apt-get update. Once I've done that, the next command is sudo apt-get install docker.io, and then you'll be prompted with a yes or no. Just type in Y, which means yes, and this will install all the packages required for docker to work on this particular system. If your Internet is fast, it will hardly take a minute to install docker, and once docker is all installed and set up, we can go ahead and check whether docker is working on the system or not. So, just give it a minute and this will all be over soon. Docker is installed now, and all we have to do is check the version. The command is docker --version, and as you can see, the docker version is 18.06.1-ce, and the build is as shown. This basically confirms that docker has been installed on our system.
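To recap, the whole Ubuntu install boils down to these commands; a minimal sketch for a Debian/Ubuntu system (on other distributions the package manager and package name may differ):

sudo apt-get update                 # refresh the package index
sudo apt-get install -y docker.io   # install the docker engine; -y skips the prompt
docker --version                    # verify, e.g. "Docker version 18.06.1-ce"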
So now, let's go ahead and see what is in store for us on our next slide. We have installed docker on our system, and depending on the kind of OS that you
are working on, I hope you guys have also installed docker on your system. If not,
you can pause this video, do that first, and come back because I would want you
guys to learn as much as you can from this session, and I would want you to
follow me while I'm doing the hands-on so that you get the maximum knowledge
possible from this session, alright. So, the next thing that we're going to do in
this session is get acquainted with the docker container life cycle. So,
basically this is the whole lifecycle of what a container goes
through when we talk about the docker ecosystem. Now, if you guys know what
github is or if you guys have worked on github, you guys might know that there is
a central repository. Whenever you have to start working on
a particular codebase, the first thing that you have to do is pull the codebase
from the central repository. That central repository is
nothing but something on the cloud or something on the Internet which holds
all the codebases for your organization or for your team that everyone is working on, right? So that is basically a central place where you can download everything from. Similarly, in docker we have something called docker hub. Now, docker hub contains all the open-source images that are available; for example, I told you that you could
run a container on ubuntu, you could run a container on CentOS, you could run a
container on Alpine, you could run a container on some other operating system.
So, all of that is present on docker hub. The first thing that you do is pull an image from docker hub onto your system, the system where the docker engine is installed; what you download from docker hub is the image of a container. So, you get the image, and the next step is to run that image; the moment you run an image, it becomes a container. A running container is the normal state of its life cycle. When you're done working with the container, you can stop it, and once you don't need it on your system anymore, you can remove it. So, this is the life cycle of a container: first you pull it from docker hub as an image, then you run that image and it becomes a container, and a container can be in the running state, the stopped state, or the deleted state. This is a summary of what happens inside docker, but of course there is more to it, and we will see as we move along in the session. So, we've understood the basic ecosystem of docker: we've understood what docker is and how containers work inside the docker ecosystem. You first download them as images, then you run an image, and it becomes a container. So, with this knowledge, and with docker installed on our
systems, let's go ahead and perform some common docker operations that you would
be doing when you are working with docker in day-to-day life. So, this is the first command that we can try out. Whenever you want to find out which version of docker you're working with, you can simply pass the command docker version. Let me quickly change the color of the terminal, because I feel this color is a little dull; just give me one second. All right, much better. So when we pass the command docker version, we get the current docker version installed on the system along with the build name. That is exactly what this command does. The second command that you can go ahead and try is docker pull. We saw in the container life cycle that the first thing you do is pull an image from docker hub. The syntax for pulling an image is docker pull and then the image name. So, as you can see in the screenshot, we passed the command docker pull ubuntu, and what it did was download an image from docker hub automatically. So, we can also try that out. Let me just clear the screen. All right, so we'll type in docker pull and then the image name, and hit enter: sudo docker pull ubuntu. This will download the latest ubuntu image onto your system. Now remember guys, we have just downloaded the image; we have not run a container yet.
So, our next command is docker images. If you want to check which images you have downloaded and verify they exist on your system, all you have to do is type in sudo docker images, and you will see the image we just downloaded, along with its size: 86.2 MB. That's awfully small for an operating system, right? Imagine the beauty of it, guys: an Ubuntu VM is, I think, around 1.5 or 2 GB, but this container image is hardly 86.2 MB. Awesome, right? So, once you have seen the images, the next step is to run an image. The command for that is docker run, along with the image name, and you use some flags as well. Let me explain while I execute the command. So, sudo docker run is the base syntax. The first flag is -it, which basically means make the terminal interactive so that I can pass in commands; the second is -d, which means run the container as a daemon, that is, keep it running in the background even when I'm not working on it, until I stop it. Then you pass in the image name, which is ubuntu, and hit enter. Can you see, you just got back an ID for the container. If you get something like this, it means your container has just started. Then, to view all the running containers, all you have to do is pass the command docker ps, and that will list all the containers which are currently running on your system.
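Put together, the flow we've covered so far looks like this (a sketch of the commands from this session):

sudo docker pull ubuntu          # download the image from docker hub
sudo docker run -it -d ubuntu    # start a container from it; prints the container ID
sudo docker ps                   # list the containers that are currently running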
So, let us do that as well. Let me just clear the screen and pass in sudo docker ps, and this will show you the container which we just started. As you can see, I started an ubuntu container 29 seconds ago, and this is the container ID for that particular container. So my container is running. What's next? docker ps only shows you the containers which are in the running state, but what if you want to see all the containers which are there on the system? For example, I can do a sudo docker stop and stop this container. The container is now stopped, and I can run one more container with sudo docker run -it -d ubuntu. Now, if I do a docker ps, I will only see the container which is currently running; you can see it has been up for seven seconds. But if I want to see all the containers on my system, whether in the running state or the stopped state, all I have to do is type in the command sudo docker ps -a. With this, I can see the container which is running, created around 20 seconds ago, and the container which has exited, the one we manually stopped; we can see it because we passed sudo docker ps -a. Okay, now the next step is to work with the containers we have started, and we can do that using the command docker exec. So what we'll do first is get the container ID with docker ps. This is the container which is currently running. Now, if I want
to get inside this container, the command for that is sudo docker exec -it, then the container ID, and then bash: -it makes it interactive, and bash means I want a bash shell inside the container, running in the terminal I'm currently working in. I'll hit enter and clear the screen, and can you see, we are now inside the container. This is the container ID in the prompt, and we are acting as root inside the container. So this is the environment I was talking about, the one the developer starts working in, and it is basically an ubuntu container, so all the ubuntu commands are going to work inside it. Once you are inside the container, you can do whatever you like. For example, if I have to update the container, I can do an apt-get update, and it will start updating the container as if it were a fresh operating system. Also, to show you guys that this is completely independent from what we were doing outside the container: remember, we have docker installed on our host operating system, but if I try to access docker from here, I will not be able to. If I pass the command docker ps, you can see it says docker: command not found, because docker is not installed inside the container. An interesting thing you can also see here is that when I passed sudo docker ps, it said sudo: command not found; so even the sudo package is not installed in the container. I told you that only the bare minimum libraries required to make the container behave like a particular operating system are present inside the container and nothing else, and that is why even sudo is not present as a command inside the container. All right, so if you are inside the container and you want to exit it, all you have to do is type in the command exit, and this will bring you back out to your host operating system. But mind you guys, your container is still running; if you do a docker ps, you can still see the container running. All right, so this is how you get inside a container: all you have to do is docker exec -it, then the container ID, then the shell you want to work in, which in my case was bash.
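As a quick sketch of that flow (the container ID here is a placeholder you copy from docker ps):

sudo docker ps                              # note the CONTAINER ID
sudo docker exec -it <container-id> bash    # open a bash shell inside the container
apt-get update                              # runs inside the container, as root
exit                                        # leave the shell; the container keeps running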
All right, now again, I already showed you that if you want to stop a container, all you have to do is sudo docker stop and then the container ID; hit enter, and the container will be stopped. If you then do a docker ps, you will not see any containers running on the system; as you can see, once we did a stop, there are no containers in the docker ps output. Okay, you can also kill a container in case it becomes non-responsive: you're stopping the container, but it's not able to exit. Kill is similar to stop, but when you stop a container, it exits gracefully; stopping is like shutting down your computer, while killing is like flipping off the power switch behind the computer's power outlet. So if you have tried to stop the container but it is still not stopping, because of some program that is stuck in a loop inside the container or something like that, you can kill the container immediately using the command docker kill and then the container ID. Then you have something called docker rm. I told you guys that if you do a docker ps -a, you can still see the containers which were stopped, right? But as we saw in the lifecycle, there's a third stage for docker containers, which is the delete stage, and to reach the delete stage, you pass in the command docker rm and then the container ID, and this will delete the container from your system. Let's see how we can do that. For example, if we want to remove both of these containers from the system, all I have to do is take the container ID, pass in the command sudo docker rm and then the container ID, hit enter, and this removes the container from my system. Similarly, to remove the second container, I just pass sudo docker rm and its container ID, hit enter, and the container is removed. And now if I quickly clear the screen and pass in the command sudo docker ps -a, you can see there are no containers anymore, because we just removed them from the system using the command docker rm.
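The stop/kill/remove commands side by side (a sketch; the IDs come from docker ps -a):

sudo docker stop <container-id>   # graceful shutdown
sudo docker kill <container-id>   # force-stop a non-responsive container
sudo docker rm <container-id>     # delete a stopped container
sudo docker ps -a                 # verify that nothing is left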
All right, moving forward: what do you do when you have to remove an image? You guys know we already have an image on our docker system, the ubuntu image. If you want to remove this image from the system, the command for that is docker rmi and then the image ID. So if I want to remove this, all I have to do is type in sudo docker rmi; rm was for removing a container, and if you want to remove an image, you just add an i to the rm command, followed by the image ID. Hit enter, and this deletes the image from your system. If you do a sudo docker images now, you can see the image is not present on the system anymore. All right, so this is how you remove an image from your system.
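And image cleanup itself, as a two-line sketch:

sudo docker images            # note the IMAGE ID of the ubuntu image
sudo docker rmi <image-id>    # remove that image from the local system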
And that was it, guys; these are some of the common operations that you would do in your day-to-day life while working with docker, and I hope you guys are now well acquainted with all of them. What I would suggest is that you pause this video, try out all these commands, and once you're done with that, resume the video and we'll go ahead. Right, so our next topic is basically to create a
docker hub account. Like I was comparing it to github: if you have worked with github, you know you have to create an account on github if you want your own repository where you can push your personal stuff, your personal code for R&D or whatever you want to call it, right? Similarly, you have the same thing with docker hub. If you have created a container for your own testing or R&D purposes, and you want to push it to docker hub so that you can pull it later whenever you want, or share that container with other people, all you have to do is create a docker hub account and push your container image to that account. That is possible, but the first step is to create a docker hub account, and for that you have these steps. First, you have to go to the website hub.docker.com. Let me quickly show you that: all I have to do is go to the browser, go to hub.docker.com, and you will see a website that looks something like this. All right, the next thing you have to do is sign up on this website. Just choose a docker ID; this docker ID is basically your username, and it has to be unique. Enter the docker ID, enter your email address and your password, agree to the terms and conditions, and just click on sign up. After this, you will get a verification email at the email address you specified; verify your email, and that's it, your docker hub account is all set up. I already have a docker hub account; let me just quickly show you that. So my docker hub account looks something like this; whoops, give me one second. All right, so guys, this is my docker hub account, and as you can see, I have some personal containers that I've uploaded over here, and I'll show you guys how you can do this as well. But before that, let's come back. Once you have set up your docker hub account, you will be able to see it, and it will look something like this. And always remember your user ID, guys, because it's going to help you out in the future when you are pushing your custom images onto your docker hub account; always keep your user ID handy. All right, so guys, this is how you can sign up for docker hub. Once you have signed
up, let's go ahead and see how you can save changes to a container. All right, so what does this step mean? It means that if I am on my system and, say, I run my docker run command, this will download and run a container for me. sudo docker ps: this is the container that I want to go into. All right, I'm in the container now. Let us do an ls, and you can see these are all the directories which are present inside the container as of now. Now what I want to do is create a directory, say a directory called app. If I do an ls now, you can see there is a new directory, app, which has been created inside this container. Now, if I exit this container and do a sudo docker ps, I can see that the container is running, but the catch here is what happens if I delete this container. So, one more thing, guys: if you want to remove a container which is running, you can pass in the command sudo docker rm -f and then the container ID. The other way to do it is to stop the container first and then delete it; that is the usual way of doing it, but if you are in a hurry and want to delete a container which is in the running state right now, just type in the command sudo docker rm -f and then the container ID, and this will remove your container. All right, so I've deleted my container, and if I again run the image with sudo docker run -it -d and the image name, go inside this new container, and do an ls, you can see the directory that I created is nowhere present over here. The reason for that is that whatever changes you made were only present inside that container, and once the container was deleted, those changes were gone; they do not propagate into the image that you originally downloaded. All right, now if you want to make changes to a container and want those changes saved inside an image, so that you can later launch a new container and it has all the files, folders, or software that you installed inside the original container, you have to save these changes. So let's go ahead and learn how you can save these changes
inside a container. All right, so for saving changes inside a container, the command is docker commit. You have to pass in the command docker commit, the ID of the container whose state you want to save, and then a new image name. Basically, this creates a new image from a container: after the container ID, you give the name that you want that custom image to have. For example, in my case, I create the folder app, and when I ls, I can see the app folder is there now. I just exit the container, do a sudo docker ps, and I can see the container ID, so I copy that and run sudo docker commit, passing the container ID and then the new image name; let's say the image name is test. With this, if I now do a docker images, I can see that there is a new image present, called test. Now I can run a container from the image test: all I have to do is sudo docker run -it -d test. This runs the container, and if I go inside it, my changes are there: if I do an ls, I can see the app folder is present. And this is how you create a custom container. For demo purposes, I just created a folder, but you can also install software inside a container, for example Apache, MySQL, or any other kind of software you want; then all you have to do is save the container using the docker commit command, and you're all set. Whenever you run that image again, you will have all that software installed inside the container. All right, this is how you save changes to a container.
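Condensed, the commit flow looks like this (a sketch; the directory and image names are just the ones from this demo):

sudo docker run -it -d ubuntu               # start a fresh container
sudo docker exec -it <container-id> bash    # get a shell inside it
mkdir app                                   # make a change, then type exit to leave
sudo docker commit <container-id> test      # save the container's state as image "test"
sudo docker images                          # "test" now shows up in the image list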
All right, now let's do this example. Basically, what we're going to do is run an ubuntu container, install Apache on it, and once Apache is installed, save that container as an image. So let's see how we can do this. Let's clear the screen, exit this container, and first clean everything up. sudo docker ps: there are two containers running. Okay, so let me show you one more command, which is basically a shortcut. If there are, say, more than three or five containers running on your system, you don't have to pass sudo docker rm -f with each container ID one by one; the shortcut to remove all the containers at once is the command sudo docker rm -f combined with another command, sudo docker ps -aq, passed in as its argument. When you pass this command, it removes all the containers present on your system. So if I do a sudo docker ps now, you can see there are no containers running anymore. All right, so this is the shortcut that you can use while working with docker.
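Here is that shortcut written out (a sketch; the -q flag makes docker ps print only the container IDs, which the shell substitutes into rm):

sudo docker rm -f $(sudo docker ps -aq)   # force-remove every container, running or stopped
sudo docker ps                            # verify: nothing left running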
So first, I have to run an ubuntu container, so I will do a sudo docker run -it -d ubuntu. All right, my container is now running; the next thing I have to do is install Apache on this container. Okay, we'll come back to the part that I wanted to explain to you guys later. So basically, now I'll just exec into this container. Okay, I'm in. The first thing I'll do is update the container, so apt-get update. Once it's updated, the next step is to install Apache, and the command for that is apt-get install apache2; this installs the Apache software. All right, and service apache2 status tells us whether Apache is running or not; it says apache2 is not running, so let's start the service: service apache2 start, hit enter. If I now check the status of apache2, I can see that it is running inside the container. This is what I wanted, so let me just exit the container now and save it. We have installed Apache in our container; the next step is to commit these changes so that it becomes an Apache image. All right, so I'll do a sudo docker ps, this is the container ID, and then sudo docker commit, give the container ID, and now give the image name. Let's name it apache; or rather, let me make it a little simpler for myself for later. Whenever you want an image pushed to your docker hub account, there's a certain nomenclature that you have to follow when naming your image. First of all, you have to write the username of your docker hub account (remember I told you to remember your user ID; this is exactly why), then a slash, and then whatever name you want to give to your image, which here would be apache. So the naming is your user ID, slash, the image name you want to give. All right: sudo docker commit, the container ID, and then the name you want to give to your custom image, hit enter, and this saves your image. So if I do a sudo docker images now, you can see that our image has been saved over here with the name hshar/apache. You should also notice that the size of the image is now 212 MB; it was 86 MB before, but because we have installed software on it, the size has gone up, which is fine.
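In short, the commit with the docker hub naming looks like this (a sketch; hshar is the user ID used in this session, so substitute your own):

sudo docker exec -it <container-id> bash         # then, inside the container:
apt-get update && apt-get install -y apache2     # install Apache
service apache2 start                            # start it, then type exit to leave
sudo docker commit <container-id> hshar/apache   # save as <user-id>/<image-name>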
All right, so now we have the new image. What I'm going to do is remove all the containers which are currently running, so sudo docker ps: this is the container running, so let me just remove it with docker rm -f. Now, to check whether my image functions properly, I just pass in the command sudo docker run -it -d and then the image name, which is hshar/apache. Okay, and I'll introduce you to one more flag, which is -p. What -p does is port mapping. Now that I have installed Apache, if I want to check whether Apache is working: normally Apache works on port 80, so on the container's port 80, Apache will be running. But if I want to reach the container's port, or check that everything is running fine inside the container, I have to map the internal port of the container to a port on the host operating system. So in this case, say I want to map port 82 of my host operating system to port 80 of my container; the syntax for that is -p, then the host port number, a colon, and the container port number you want to link it with. So I want to link port 82 with the internal port 80 of my container, which is running the image hshar/apache. All right, I'll hit enter; this creates the container, and if I do a sudo docker ps, I can see the container is now running. So let's exec into this container: sudo docker exec -it, the container ID, bash. If I now do a service apache2 start, it should either give me an error that apache2 was not found, or it should start the server. Let's see what happens. It says it is starting the Apache web server; this basically means Apache was present inside this container, because we did the sudo docker commit after installing Apache, and the service has now been started. Now remember, guys, we mapped it to port 82, so let's check if we can access the apache software on port 82 of this server. Let me just go to the IP address of the server; this server is running on AWS, so this is its IP address, and I'll open it on port 82. Right now, if I hit enter, it will not work, the reason being that I have to open the ports on this machine first, so let me just do that; I'll open all traffic, so that we are on the safer side. Okay, so IP address, colon, port number 82, hit enter, and you can see that Apache is successfully running. Mind you guys, Apache is not installed on the server; it is installed inside the container, and we are able to access that software in the browser using the port that we mapped to the container. For our verification, what I'll do is sudo docker ps, and I'm going to stop this container now: sudo docker stop, pass this ID. Now if I do a refresh over here, you can see it says the site can't be reached, the reason being that I have stopped the container. All right, and this is how you can save changes to a container, install software, and create a new image out of it.
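The run command with the port mapping, spelled out (a sketch; 82 is just the host port chosen in this demo):

sudo docker run -it -d -p 82:80 hshar/apache   # host port 82 -> container port 80
sudo docker ps                                 # PORTS column shows 0.0.0.0:82->80/tcp
# then browse to http://<server-ip>:82 to reach Apache inside the container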
Now, the next step is to push it to docker hub. I told you guys, once you are done working on your container and have created a custom container that you want your team to access, or that you want on docker hub for safekeeping, you can push it to docker hub, and the command for that is sudo docker push, which I'm going to explain in a little bit. But before that: we saw the life cycle, right? We saw that we can push to or pull from docker hub, and this is the pushing stage. We had an ubuntu container, we installed Apache on it, then we committed the changes, and now we have an image on the local system; the next step is to push it to docker hub. Let me show you exactly how to do that. So this is your docker hub, guys, and the first thing you have to do is log in to your docker hub from your console; to do that, you type in the command sudo docker login. With this command you'll be asked for the username, so pass in your docker hub username, and then pass the password. If there is any problem with your password, it will give you this error; so let me just try once more, passing in the username and then the password, and on a successful login you will get the message: Login Succeeded. Awesome, so I have logged into my docker hub. Now I want to push my image, so I'll do a sudo docker push, and what was our image name? It was hshar/apache; this is the name we gave to our custom container image. Hit enter, and it will start pushing your image to docker hub. Once it has pushed the image, you can just visit your docker hub repo and you'll see your image listed over here; just do a refresh, and you can see this is the image that I just pushed to the repository, which is hshar/apache. Awesome, so we have successfully pushed our image to docker hub. It's very easy: just type in the command sudo docker push, and that should be it.
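The push flow in two commands (a sketch; again, replace hshar with your own docker hub user ID):

sudo docker login                # prompts for your docker hub username and password
sudo docker push hshar/apache    # uploads the image to your docker hub repository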
All right, so our next topic is an introduction to the dockerfile; let us go ahead and learn what a dockerfile is. But before that, let us recap what we have just learned. We understood what docker is and why we need it; then we went through some of the day-to-day docker operations and got acquainted with them; then we saw how to run a container, how to make changes to a container and save it on your local system, and finally how to push that container image to docker hub. This is what we have learnt. Now, the next step: what is a dockerfile? A little introduction. You just saw that if I had to make changes to a container, I basically had to run the container first, go inside it, install whatever software I want, then come out and commit the container, and only then would it get saved. But there's a shorter version of this. When you're working in a production-grade environment, remember, everything has to be lightning-fast, and for making changes to a particular container image, you can use something called a dockerfile. So we're going to discuss what a dockerfile is now. A dockerfile is nothing but a text document in which you write how you want your container to be customized. The example I just gave you was done manually: I ran a container, went inside it, installed software, came out, committed the container, and then pushed it to docker hub. Those same changes can be automated using a script, which we call the dockerfile. A dockerfile is very easy to write; there are some syntax rules that you have to learn, but I'm sure they are very easy to pick up, and once you learn them, you'll only be using dockerfiles rather than doing everything manually.
All right, so let's go ahead and see how we can create a dockerfile. Guys, these are some of the important syntax elements relevant to creating a dockerfile. The first line of a dockerfile is always FROM, which names the image on which you want to make changes; for example, we made changes on the ubuntu image when we installed Apache, where we did docker run -it -d ubuntu. In a dockerfile, the base image that you want to work on has to be specified on the first line using the FROM instruction. This area of the slide shows the dockerfile content, so this is exactly what the contents of a dockerfile should look like. The first instruction is FROM ubuntu, meaning the base image I'm going to work on is ubuntu; this is the first line. The second instruction is ADD, which is used to add files into the container. For example, say I create an HTML file and I want that HTML file added inside the ubuntu container. With the ADD instruction, the first argument is the place where the files are present, and the second argument is the location inside the container where you want those files to be copied. So ADD . /var/www/html means: take all the files from the current directory and put them inside /var/www/html. That is what the ADD instruction does. The third is RUN. Whenever you want to run a command in the container, for example apt-get update and apt-get install apache2, which I ran inside the container manually, you can do that from the dockerfile using the RUN instruction. Over here, there are two commands that I want to run: the first is apt-get update, and the second is apt-get -y install apache2. The -y is significant: it answers the yes/no prompt that asks whether you want to go ahead with the installation. If you specify -y explicitly in the dockerfile, it will not ask; it continues the installation without any prompts. So the RUN instruction is used to run any command inside the container that you would otherwise have run on its terminal. Then you have an instruction called CMD. The CMD keyword is used to run a command at the start of the container, and a CMD command runs only when there is no argument specified while you're running the container. While running a container, you can also specify a command: the run command is docker run -it -d and then the image name, and after the image name you can also specify a command that you want to run inside the container. If you don't specify anything there, the command in the dockerfile's CMD will run when the container starts; otherwise, whatever command we specify in the docker run command runs instead. So this was about the CMD instruction. In our case, remember that when we started the container, we always had to go inside it and start the Apache service manually, typing in the command service apache2 start once the container was up. That had to be done manually, but when you pass it using CMD, which holds the commands to run at startup time, the command executes automatically whenever the container starts. So CMD apachectl -D FOREGROUND does nothing but run Apache the moment the container starts, and this is exactly what we want. This was possible using CMD. So again, I'll repeat it: CMD is used to run commands at the start time of a container, and they run only when there is no argument specified in the docker run command; if there is no argument, the CMD command runs, otherwise it is skipped. All right, the next instruction is ENTRYPOINT. Now, ENTRYPOINT is almost exactly the same as CMD, in that it runs at the start of the container, but the difference is that CMD will not run if you specify an argument in the docker run command, whereas ENTRYPOINT runs irrespective of whether you have specified an argument or not. So CMD and ENTRYPOINT can be used interchangeably, but if there are cases where you will be running the container with an argument, it's better to use ENTRYPOINT so that the command does not get skipped; the CMD command gets skipped if you specify an argument after the docker run command. In our case, we use apachectl -D FOREGROUND with the ENTRYPOINT instruction. All right, the next instruction is ENV. If there are any environment variables that you want to set inside the container, you can pass them using ENV, a space, the name of the variable, a space, and the value of the variable. In my case, I specified a variable called name, which has the value intellipaat. All right, so these are some of the instructions that you can use to create a dockerfile.
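Put together, the dockerfile we have been describing would look something like this (a sketch reconstructed from the instructions discussed above):

FROM ubuntu
RUN apt-get update
RUN apt-get -y install apache2
ADD . /var/www/html
ENTRYPOINT apachectl -D FOREGROUND
ENV name intellipaat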
Now, let us create this dockerfile on our own system. What I'm going to do is go to my PuTTY session. Let me first create a directory for it; let me simply call the directory dockerfile, and let us go inside it with cd dockerfile. Now, whenever you are creating a dockerfile, the name of the file always has to be Dockerfile itself, right? So I'll type nano Dockerfile, and we're inside. The first thing we want is for the ubuntu image to be pulled, so FROM ubuntu. Then we want to update this image, so RUN apt-get update. Then we want to install Apache inside it, and for that, RUN apt-get -y install apache2. All right, sounds good. Then we are going to add all the files from this directory to the directory /var/www/html; we're going to create the HTML file in a moment, do not worry. Once we have done that, the next step is to run Apache in the foreground at startup, so we specify the entrypoint: ENTRYPOINT apachectl -D FOREGROUND. This runs Apache automatically. And say I also want to specify an environment variable: let me create a variable called name, and I want to give it the value intellipaat. Okay, sounds good. So this is my dockerfile, and I'll just save it now. Let me create an HTML file as well; I'll create 1.html and keep it simple, a hello-world HTML file: in the body, you can see I have an h1 which says hello from Intellipaat; close the h1, close the body, and then close the HTML. All right, this is what I wanted; save this, and now we are done with the dockerfile and we have the HTML page in place.
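The page itself can be as small as this (a sketch of the 1.html from the demo):

<html>
  <body>
    <h1>Hello from Intellipaat</h1>
  </body>
</html>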
So the next step is to build this Dockerfile; now let's see how we can build it. For building this Dockerfile, all you have to do is docker build, tell it where to build from, which in our case is the current directory, and name the image that it will create; I want to name that image new_dockerfile. Okay, so I'll hit enter... and I have forgotten to mention sudo, so let me just clear the screen. All right, let me first teach you guys how to run a docker command without sudo. For doing that, type sudo usermod -aG docker $USER and hit enter, and now all you have to do is re-log in to your session and it should work. So if I do a docker ps now without sudo, you know, you can see the command runs.
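The exact commands for running Docker without sudo look like this; the newgrp alternative to logging out and back in is my addition:

    sudo usermod -aG docker $USER   # add your user to the docker group
    # log out and back in for the group change to take effect,
    # or start a new shell that has the docker group:
    newgrp docker
    docker ps                       # should now work without sudo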
So what I want to do is go inside the dockerfile directory, and now I want to build it: docker build . -t new_dockerfile. You can see the Dockerfile is now being executed; Docker basically spins up an intermediate container for each instruction and commits the result, so that it comes up with an image which has all the changes we just described. So now, if I do a docker images, I can see that there is a new image which has been created, which is new_dockerfile. So let us run this new_dockerfile image: docker run -it -d, and let us also specify the port number; let us open it at port 84, so -p 84:80, then new_dockerfile, and hit enter.
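In one place, the build-and-run sequence is this; the 84:80 mapping assumes Apache's default port 80 inside the container:

    cd dockerfile
    docker build . -t new_dockerfile            # build the image from the current directory
    docker images                               # confirm the new image exists
    docker run -it -p 84:80 -d new_dockerfile   # map host port 84 to the container's port 80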
Okay, so the container is launched now. If I do a docker ps, I can see that a container was launched seven seconds ago, awesome, and it is being exposed on port 84. So let's check if everything is working well for us: we just have to change the port in the browser to 84, and you can see Apache is running. Awesome, Apache is running, and now let's check if our webpage is there in the container. We named it 1.html, and yes, this is the HTML page that we created, which has been added inside the container. So let me show you, inside the container, what exactly happened: docker exec -it, the container name, bash, and let me just clear the screen and compare it with my Dockerfile. So the first thing that we did was apt-get update and apt-get install -y apache2; this basically installed Apache, so that part is clear. Then we added everything from the current directory to /var/www/html. So if we go inside /var/www/html and do an ls over there, you can see that the 1.html file has been added, and at the same time the Dockerfile was also added, because inside the directory we had these two files, 1.html and Dockerfile, so both of them were copied into the container. Now, if you do not want the Dockerfile inside your container, what you can do is, instead of the dot, specify ./1.html; that would solve the problem, since it would only add 1.html to the container. Okay, so it added 1.html, and index.html is basically the default Apache page that you see; so 1.html was added by the Dockerfile.
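So if you only want the page copied in, and not the Dockerfile, the ADD line changes like this, exactly the one-line tweak described above:

    # copies everything in the build context, Dockerfile included:
    ADD . /var/www/html
    # copies only the page we actually want to serve:
    ADD ./1.html /var/www/html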
At the same time, what we did was define Apache to run in the foreground. As you can see, we did not invoke the Apache service by going inside the container; we just went directly to the port number that we mapped the container to, and Apache was up and running, so that is awesome. And the last thing that we did was define an environment variable. So what I can do is echo the variable, which was called name, and you can see this is the value of the variable that I specified in the Dockerfile: I set the variable name to intellipaat, and if I type echo $name, this is the value that I get automatically. So this was set by the Dockerfile in the container image that I just created. And now what I can do is just exit this container, and if I want you guys to use it, all I have to do is do a docker push of this image to my Docker Hub, and you guys will also be able to access this particular image. For that I just have to change the name and do a docker push, and you should be able to use it. But I'm sure you would not be needing it, because if you write the same commands that are shown over here, you will get the exact same container that I created. All right, so let me know if you guys face any problems in the comment section, and we'll be happy to answer all the queries. All right, moving forward: now we know how to create Dockerfiles, and we know how to build images from a Dockerfile automatically, kind of like a script. So now let's start with the next topic, which is an introduction to Docker volumes.
introduction to a docker volume so what is the dock volume so the volume is
basically used to persist data across the lifetime of a container for example
I demonstrated it to you guys that when you create a continue you make some
changes in the filesystem you delete the container and you launch
it again and the changes are not there anymore
right so that can be fixed so imagine it like this for example you using a patch
a container and you have a website inside of it and the container stops
responding because of some reasons what you do you delete the container and you
launch it again but with that what happens is your website content is also
deleted and if your website content is not present in this new container it
becomes a problem so to solve these kind of problems we came up with docker
volumes so what doc volumes does is it basically Maps or it basically hosts the
storage outside of the container and maps it to inside the container that is
the storage is there on the host system rather than on the container but rather
than the files being written on the container itself they are written
outside the container and that location is basically mapped inside the container
so irrespective the fact whether the container is deleted or started again
the container which is being attached to that volume will have the same file
system as that of the older container and that is how this problem is solved
that is of persistence of data across the container lifecycle all right so
there are basically two ways to do it one is called a bind mount and one is
called a docker volume now what are the differences between a bind Mountain a
docker volume let us see that so basically a bind mount would be that let
me just come out of the directory a bind mount would be that I mount a particular
file location inside the container okay for example I created this docker file
directory right so what I can do is I can map this docker file directory
inside a container now to mount this particular docker file folder inside my
container all I have to do is talker run - i.t and then there's a flag
- V that I left a specifier specify the location of the directory with the slash
of moon - slash home slash open - slash - aqua file now this is the location of
my folder and then what is the mount point or if inside my container so say I
say it slash up right so I've specified that I'll specify - D and then I specify
a bundle and hit enter and my container is now launched now if I go inside this
container I can I will be able to see that there is a folder called slash up
which has been created right so if I go to app and if I do an LS you can see the
folder the files inside this container are that of the taƧa file now these
files are not actually copied inside the container but these are actually being
mirrored from the directly on the host operating system that is if i do if i
duplicate this session let me just duplicate this session so I
just go inside the directory docker file right if I do an LS and say I touch a
file say - dot HTML right so if I do LS in the container now I can see that
there is one more file - dot HTML right so it is inside the container so
whatever changes I make inside this directory will automatically or
dynamically be available inside this container okay the reason for that is
that this directly directory is mapped to the directory slash up in the
container all right so this is called bind mount but there's a disadvantage
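The bind-mount demo, as a command sketch; the host path is the one used in the video:

    # mount the host directory /home/ubuntu/dockerfile at /app in the container
    docker run -it -v /home/ubuntu/dockerfile:/app -d ubuntu

    # in another session: any file created on the host...
    touch /home/ubuntu/dockerfile/2.html
    # ...shows up immediately inside the container
    docker exec -it <container-id> ls /app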
But there's a disadvantage with this. The disadvantage is that a bind mount will only work when the filesystem path that you have specified, like the one in this container, actually exists. For example, what can happen is that I specify this bind mount in the configuration of the container and push it to Docker Hub, so anyone who downloads this container will automatically have this bind mount on his operating system. But the problem which will arise is, say you download this container on a Windows operating system: on Windows, this file path, /home/ubuntu/dockerfile, is not going to exist, and that is where it is going to give you an error and cause you a problem. So this way of persisting data is a little difficult, or I'd say a little complex, when it comes to different environment setups, when you are dealing with different environments. For example, if I have to talk about a different operating system, say I want to work on CentOS, the file structure would be a little different there, and for Windows it is completely different. So a bind mount will not work across different environments, and that is where it has a disadvantage. But this disadvantage is basically overcome by volumes. Now, what are volumes? Volumes are basically storage entities which are managed by Docker. Bind mounts are not managed by Docker, because you create the directory, you decide which directory your data is going to reside in, and the changes you make inside that directory are what get reflected in the container. But if you create a volume, the Docker engine which has been installed on the host operating system automatically decides where this volume will exist, and creates the volume over there. Also, with a bind mount it is a little difficult to migrate the data; with volumes it is very easy, you just pick the volume up, put it on some other system, and it should work. But in the case of a bind mount, you will have to decide at which place you want to keep the data and then bind that place to the container.
All right, let me show you what happens in the case of a volume. If we want to create a volume, this is the syntax: docker volume create and then the name of the volume. So let me show you how you can create it: docker volume create test. This would create a volume called test, as you can see; ignore the warning, guys. And if you want to see all the volumes which are present on your system, you can pass the command docker volume ls, and this will give you all the volumes on your system. So on my system, right now, only the test volume is present. I don't have to know where it lives, because Docker manages that part of the filesystem itself; I don't have to worry about that, I just want to know that the volume exists, and yes, there's a volume on the system which is called test. Awesome. All right, so the next thing that I have to do is mount it, and this is the syntax for mounting a volume: docker run -it --mount source= the name of the volume, which in our case is test, then target= wherever you want to mount it inside the container, which for us is /app, and then -d and then the image name.
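As a command sketch, that is:

    docker volume create test     # let Docker decide where the storage lives
    docker volume ls              # list all volumes on this host
    docker run -it --mount source=test,target=/app -d ubuntu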
All right, so once the container is launched, what we can do is go inside this container to check that everything is working fine. We go inside /app and do an ls, and there's nothing inside this volume as of now. So what we can do is create files, say touch 1.html and touch 2.html. Then let me just duplicate the session, and what I'm going to do now is launch one more container: docker run -it --mount, then source=test, and the target could be some other folder as well, but for the sake of simplicity let's keep /app, and then -d ubuntu. This launches one more container. Let's go inside this container, docker exec -it, this one, bash, and go inside /app. If I do an ls, you can see that whatever changes are made in this volume are also reflected in this container. So say I create 3.html over here; if I go to the other container and do an ls, I can see the 3.html as well. All right, so this volume is basically being shared between two containers. Even a bind mount can be shared between two or more containers, and so can a volume, but in the case of a volume you do not have to worry about where your data is being stored; it is automatically handled by the Docker engine. Okay, the other cool thing over here is that if you delete the container and launch it again, your end user will not even realize that anything changed. For that, let me give you an example.
So let me just exit this container. Now, we created an image called new_dockerfile, okay? Let me first create one more volume: docker volume create apache. Now what I'm going to do is launch docker run -it --mount, and the source would be apache; look at this closely, look at the difference. The source is apache, and what I want to target is that particular directory, /var/www/html. So whatever is inside this directory automatically comes inside the volume; I don't have to add anything to the volume myself, and I'll show you how that works. Then you specify -d new_dockerfile, which is the image name, and hit enter, so it created the container. Let's go inside: docker exec -it, this container, bash. Okay, so we are inside the container now. If I cd to /var/www/html and do an ls, you can see the files are still here, but these files are also now mirrored into the volume, and that is evident over here: if I exit this container and launch a new container with the mount source as apache and the target as, say, /app, using the plain ubuntu image, it should not have those files inside it. But if I go inside this new container now, can you see, the files over here are the same as what was inside the first container at /var/www/html.
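Here is that sequence as commands; the pre-population of a new, empty named volume with the image's existing files at the mount point is standard Docker behavior:

    docker volume create apache
    # files the image already has at /var/www/html get copied into the fresh volume
    docker run -it --mount source=apache,target=/var/www/html -d new_dockerfile
    # a second container on a different image sees the same files through the volume
    docker run -it --mount source=apache,target=/app -d ubuntu
    docker exec -it <second-container-id> ls /app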
Okay, now the beauty is this, guys: say I create a 2.html over here. So what I do is come out of the container, and I create nano 2.html, a plain file, in which I specify html, then body, then an h1 saying this is the new HTML file; close the h1, close the body, close the html, and come out. Okay, now I want to copy this 2.html inside the container, so what I'll do is a docker cp ./2.html, then the container ID, a colon, and /var/www/html. Okay, so now 2.html is present inside the container. Now what I'll do is just delete the container: I delete the new_dockerfile container with docker rm -f. Okay. So ideally, if I now launch a new docker run -it --mount source=apache,target=/var/www/html, and we also specify that we want port 81 to be exposed, then -d and then the image name new_dockerfile.
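That step, condensed, assuming the same apache volume and port 81 on the host:

    # copy the new page into the running container (and hence into the volume)
    docker cp ./2.html <container-id>:/var/www/html
    # remove the container entirely
    docker rm -f <container-id>
    # relaunch from the same image, reattaching the apache volume, on host port 81
    docker run -it --mount source=apache,target=/var/www/html -p 81:80 -d new_dockerfile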
So what will happen now is this: if I go to the browser on port 81, the container is running, and if I go to 2.html, you can see that the new file is also there in this container, the one launched from new_dockerfile. So it does not matter whether your image has the file or not: if you have mounted the volume, and that volume at some point had the file inside it, this container will have it as well. Okay, originally this image does not have 2.html inside it, and let me prove that to you as well: if I do not mount the volume and launch this container, then if I go to this IP address on port 82, you can see the container is up and running, but if I go to 2.html, it will say 2.html was not found on this server. The reason being, the volume is not attached to this one; it only has the 1.html that we wrote using the Dockerfile. The volume is attached to the container which is mapped to port 81, and that one has the file 2.html, which is this. So, awesome, guys, we have successfully understood what Docker volumes are, and I am sure that you guys understood it too; if there are any doubts, guys, you can mention them in the comment box and we'll be more than happy to help you out.
All right, so with that, guys, now we come to the next topic of our session, which is: how are microservices relevant? Till now we have seen how we can deploy a single container and do things with it, but usually, in a production-grade environment, we have multiple Docker containers working with each other. Now, why do we have that? Let us understand it using this topic. First, let us understand what a monolithic application is. So guys, a monolithic application is an application which has everything inside one particular program. For example, let us take the scenario of the Uber app. Our Uber app has things like notifications, mail, payments, location services, customer service, and passenger management, all inside one app, right? Now, just because you can see all of these things inside one app does not necessarily mean that Uber actually wrote it as one program, but for now, let's consider that everything is under one program. And when I say one program, maybe you have written a Java file or a C file and everything is there in that file, or everything is there in the project across different files; basically, everything is on one server, is what I mean. So if that is the scenario, what will happen if I want to change the code in notifications? I will have to pull the whole codebase, go to the notifications file, change the code over there, and then push the whole code back to GitHub, which will get deployed on production after being tested and everything. But the problem over here is that if there is any problem in the notifications code that I just changed, it can have repercussions on all the components of my app. It could be that I made a mistake, and now probably there is something wrong with the mail program: although my intention was not to touch the mail program, because I had to work with the codebase which had everything, probably something went wrong over here, or over there, or somewhere else. So this was a problem with a monolithic application: you had to redeploy the whole codebase even if you had to change the most minute thing in the code. This also led to downtime, because when you are changing the codebase there will be a certain amount of downtime that occurs. There is also a lot of risk, because like I said, the other files of the same project could be impacted: for example, in our case, if notifications or, say, location services is not working, it could have an effect on payments or on mail, if the code is related or if a function is being called from one file to another, et cetera. That is how a normal program is, yeah? In one file you define a function, and in the second file you are basically calling that function. So all of these are dependent on each other, and all of these small things, notifications, mail, payments, passenger management, customer service, are small modules of a program; if the modules are interdependent with each other, it is called a monolithic application.
Now, when we talk about the disadvantages of a monolithic application, they are the following. Like I said, if the application is large and complex, it will be very difficult to understand: if your app keeps gaining more and more features, and someone has to add a new feature, he first has to understand all the dependencies between the components; he has to understand the whole codebase, and only then can he add a feature, knowing what the repercussions will be and which files he has to handle if he's going to put code into the codebase. So it is a little difficult to understand. The second thing is that the whole application is redeployed: like I said, the whole code is deployed again, and there will be a certain amount of downtime attached to updating the application. The third thing is that if there is a bug in any module, then because everything in a monolithic application is dependent on everything else, it can lead to downtime of your entire application; you can bring down your entire application just because of a bug in a single module. This is also a problem in a monolithic application, because the components are dependent on each other. And the last one is that it has a barrier to adopting new technologies. This basically means, say your notification code is in Java: your customer service and passenger management programs, because it's a monolithic application, also have to be in Java. All of the modules, all of the features that you have defined, have to be in one particular language; that is what a monolithic application means. So there was a restriction on adopting new technologies as well: it could not be that one module of mine is written in one language and another module is written in some other language. This is what a monolithic application was.
Now, to solve this, we came up with the microservices architecture. What is the microservices architecture? Each and every module, the notifications function, the mail function, the payments function, the passenger management function, the location services function, is segregated from the others; they all exist independently, that is, they are not dependent on each other. Of course, they still have to interact: for example, in Uber, until and unless you make the payment, you will not be allowed to book a cab, right? So communication still has to happen between these two components, that is, payment and booking have to be, you know, communicating with each other. But now they don't have to communicate from within one program; they can communicate over HTTP, by hitting each other's APIs, or by sending JSON to each other and things like that. They don't have to communicate from within the program, and that is the beauty of microservices: the services that we are defining now are not dependent on each other. For example, say the notifications module is not working; in that case, it's not like your Uber app will go down. Probably it will say the notifications service is having some problems, but you can still book a cab, you will still be able to book a cab, because booking a cab is in no way related to notifications. If we compare that to a monolithic application, in that case, because the code was interdependent, if there was something wrong in the notifications code, the whole application could come down, because the code was not isolated into its member modules. But now, because each and every function of the application can exist on a different server altogether, the downtime of the application becomes almost zero when we have to update a feature. For example, if a developer has to update a feature in, say, the payments app, say he has to add a new payment method, he will only download the codebase for the payments app, make the changes over there, and push the code to the payments module. If there are any problems, there will be a problem only in the payments module; the other services will not be affected. Okay, and this is what microservices are.
Now, you might be wondering: this is a Docker session, so why and how are these microservices related to Docker? To answer that question: these applications, all these microservices, are deployed on Docker containers. Okay, so they all act as separate entities, and these containers then interact with each other. And this also solves the problem of the barrier to adopting new technologies, the reason being that your containers can run different technologies. For example, your customer service could be written in Python and your notifications could be written in Java, but if your customer service and notifications programs have to interact with each other, they can interact through JSON. JSON is a way of structuring a file, just like we have CSVs, where values are separated by commas; in JSON you have a structure, right? Any kind of program, in any language, can convert data into JSON, probably using libraries or built-in functions, and then pass it on to the other service which has to read the information. So with microservices, our problem of the technology restriction was also solved.
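Just as an illustration of that kind of exchange, and not something from the video, a Python customer service could call a Java notifications service with a plain JSON payload over HTTP; the service name, port, endpoint, and fields here are all hypothetical:

    curl -X POST http://notifications:8080/api/notify \
         -H "Content-Type: application/json" \
         -d '{"user_id": 42, "message": "Your cab has arrived"}'

Neither side needs to know what language the other is written in; they only agree on the JSON structure.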
So the advantages would be these. Because the application is distributed, it can be understood really well: it is not like a developer who has to work on a particular feature has to understand all the features of the application; he should know which modules the program he is developing has to communicate with, and that's all he has to know, apart from, of course, the code of his own function. And of course, if he makes any changes, and if he introduces any bugs in the program, it will not affect the whole application; it will probably affect only the function he is creating. Okay, I guess we have covered all the advantages: the first one being that the application is easy to understand; then, the code which has to change is only the code of the microservice which is being worked on; then, a bug in one service will not get into all the components of the application, it will be isolated to that particular function; and of course, you can use any technology you want with the microservice you are working on. Okay, and all of this is possible using containerization.
Okay, now, because we were talking about deploying multiple containers, we have to talk about how to deploy them using Docker Compose. Now, if we were to deploy multiple containers with what we have learned so far, the only way to do it is by either using docker run repeatedly or probably creating a script file which builds multiple Dockerfiles and hence builds those images. But there is an even shorter way to build images and run them, and that is possible using Docker Compose. Now, what is Docker Compose? Docker Compose is basically a tool which can be used with Docker to create multiple containers at once, to create and configure multiple containers at once with a single command, right? And the way you can do that is by writing a YAML file: you write a YAML file with all the configuration required for the containers, and of course I am talking about setups that have more than one container; you can have a hundred containers that you launch with one single command using the Docker Compose file. Okay, and this Docker Compose file is written in YAML format.
Now, if I want to demonstrate the power of Docker Compose to you, I can do that using a sample compose file that I have created. So what will this sample compose file do? It will basically deploy a WordPress website. Now, a WordPress website has a lot of dependencies, guys: it has to have MySQL in the backend, it has to have the WordPress container in the frontend, and of course you will have to configure the DB password and everything inside the container as well, right? Now, one way of doing all this is manually, you know, installing WordPress, then configuring the variables inside it, and then doing the same with MySQL. There's a shorter way to do that, and that way is Docker Compose. Now, this compose file is actually written in YAML, so let me take you through it. The version of the compose file format is 3.3, and there are basically two kinds of containers that we are launching: the first is the db container, and the second is the wordpress container. Okay, so first we define the db service, and the image that we are pulling for the db container is mysql:5.7. In the mysql:5.7 container we define a volume, db_data, and this is the target path inside the container where it should be mounted, that is /var/lib/mysql. Okay, so the environment variables for this container are the MySQL root password, the MySQL database, the MySQL user, and the MySQL password, and the values of these environment variables are also configured inside the compose file itself; so inside this container, these are the environment values which will be set. And that is the end of what we do in the db container. Now we come to the wordpress container, in which we specify that it depends on db; this will basically create a link between the WordPress container and the MySQL container, or the db container, and with this, the WordPress container will be able to communicate with the db container. Then we specify the image of the container that we want to download, so it's wordpress, and that too the latest version; then the ports that we want to expose: inside the container, WordPress is available on port 80, and we are mapping it to port 8000 on the host. Okay, and then, in the environment variables, we specify that the WordPress DB host is db:3306; db is basically the hostname of the service that just got created, which has MySQL inside it, and MySQL is always available on port 3306, so that is what we specify over here. Then we have the WordPress DB user, so we specify the DB user, that is, the MySQL database user, as wordpress, which is exactly the same value that we specified up in the db service, and the DB password is again wordpress, which is also what we specified up there. Okay, and what else? After that, we specify the volume which has to be created, db_data: for the whole YAML file, there is one volume being created, which is called db_data. So this is a YAML file which will create two containers and configure them, and this will all happen in a few seconds.
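For reference, the file being walked through matches the WordPress sample from the Docker documentation, which looks roughly like this; the passwords are placeholder values, so change them in your own file:

    version: '3.3'

    services:
      db:
        image: mysql:5.7
        volumes:
          - db_data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: somewordpress
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress

      wordpress:
        depends_on:
          - db
        image: wordpress:latest
        ports:
          - "8000:80"
        restart: always
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: wordpress

    volumes:
      db_data: {}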
So let us go ahead and try this sample compose file out. But before that, guys, Docker Compose actually has to be installed on the system, irrespective of whether you have installed Docker, because installing Docker does not install Docker Compose automatically. The exception is users who have installed it on Windows or Mac using Docker Toolbox: if you installed using Docker Toolbox, it has everything included. But if you have installed it on Linux and you have just installed docker.io, that just installs the Community Edition, and Docker Compose is not installed by default. Now, to install Docker Compose, let me first demonstrate to you that it is not present: docker-compose --version, and you can see it is not there as of now. So what I can do is go to my browser, search for install docker compose, and the first link is basically the official documentation. All right, so for installing it on Linux, just click on Linux, and these are the commands for installing Compose. First we copy this command, that is, we curl the binary down; the file has been downloaded now, and then we change the permissions on the file which has just been downloaded so that it is converted into an executable, and that's it. And now, if I run docker-compose --version, I can see that the Docker Compose version is 1.23.1, along with the build number, so Docker Compose has been installed.
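The two commands from the Compose install page, for the 1.23.1 release shown here, are:

    sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" \
         -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    docker-compose --version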
Now the next step is to basically create the YAML file. For that, let me first create a directory, so it would be compose; let's get inside compose, and now let's create a YAML file for WordPress, say wordpress.yaml. Okay, and this WordPress compose file is actually available on Docker Docs, so you can go there directly and try this YAML file on your own as well: just copy the YAML, paste it into the file that you are creating, and save it. Let's verify that everything was copied correctly; yes, it has been, so now finally save this YAML file, and then pass the command. Okay, so there's one more thing that I want you to know, and it will be evident from the error, which is that the compose file has to have a proper name. For example, if I pass the command for running your compose YAML file, which is docker-compose up -d, it will give you an error: can't find a suitable configuration file in this directory. So it says the file can only be named docker-compose.yaml or docker-compose.yml, right? So what we're going to do is just rename our file to docker-compose.yaml. Okay, so the file has now been renamed, and now let's pass the command docker-compose up -d. I'll hit enter, and as you can see, everything has started automatically: it is pulling the images as of now, and once it has pulled them, it will start configuring the containers. So the db container has been completed; now it is downloading the WordPress image, and then it will configure it. And once the configuration has been done... yes, it's done, so our WordPress website is now up.
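So the whole sequence on the host, condensed, was roughly:

    mkdir compose && cd compose
    nano wordpress.yaml                      # paste the compose file here
    mv wordpress.yaml docker-compose.yaml    # compose insists on this file name
    docker-compose up -d                     # pull, create, and start both containers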
Now what we have to do is go to the EC2 dashboard, go to the running instances; these are the instances that we launched, and this is my instance. I'll just copy the IP address, and WordPress will be available on port 8000. Hit enter, and here you go: the website is up and running. If I continue, I just have to specify the values, so let's give the site title as intellipaat, the username as intellipaat as well, the password would be intel123, and then your email, let it be something random, intellipaat@intellipaat.com, and we'll click on install WordPress. Then log in with the username and password, so let us specify the same ones that we gave. And now, since it logs in, that basically means it is able to communicate with the database as well, right? So your WordPress website is all up and running; it is now configured with your MySQL database, and that is how it is interacting. And this is your dashboard: this is basically coming from your WordPress container, while your data is all being stored in the MySQL container, and this is a classic case of a multi-tier app that you have just launched using Docker Compose. So if I do a docker ps, you can easily see that this is the WordPress container which is running, and it is being exposed on port 8000, and then you have this MySQL container which is running, exposed on port 3306, which is what this WordPress container interacts with. All right, so guys, this is how you can launch multiple containers at once using Docker Compose, and this is also kind of like a microservice: although this could be broken down further, those broken-down services could also be launched through Docker Compose, and everything can be configured in the docker-compose YAML file itself.
All right, but again, microservices are actually not usually launched through Docker Compose; they are basically put on something called a container orchestration tool. So what exactly is a container orchestration tool? All right, a container orchestration tool is for when you have launched multiple containers and you want to monitor their health as well. For example, if I come back to this example that we just did: what I'll do is docker rm -f, specify the container ID, and hit enter; the container is removed, right? And now, when I go over here and hit enter, it will say the site can't be reached; if I hit enter again, it will again say the same thing, because your site is gone. You accidentally removed your container, or your container stopped working because of some reason, and now you are not able to access your container: your website is down, it's gone. This was a problem which was there with the traditional methods of using containers or microservices on Docker. But then there was something called container orchestration, which became very popular, and which basically says that the health of all your containers will be monitored by Docker itself. So if there is a container which goes down, if there is a container which is not healthy anymore, what Docker Swarm does is automatically repair it, by stopping the container and launching a new one in its place, and the end user will not even realize what happened. All right, so this automation led to what we today know as container orchestration. Now, there are a lot of container orchestration tools; the one that comes prepackaged with Docker is Docker Swarm, so we're going to discuss Docker Swarm now. So, what is Docker Swarm?
It's basically a clustering and scheduling tool which is used for container orchestration of Docker containers. With Docker Swarm you get the functionality of automatic health monitoring of containers, and it helps you always keep the number of healthy containers that you have specified in the running state; that is the basic aim of having a Docker Swarm up and running. Now, how does Docker Swarm basically work? It works like this: a Docker Swarm cannot really work with just one machine, because, like containers, even machines can be faulty sometimes. Maybe, you know, you have configured Docker Swarm on a single machine which can automatically repair containers, but what happens if the machine itself goes down? So to mitigate those kinds of failures as well, we came up with a distributed kind of architecture, where you have multiple machines running in the swarm. In the swarm there will be one machine which is called the leader, which basically tells the workers what to do, and the workers will have the containers running on them, right? So you have the leader, like we have it over here, and then you have multiple workers running in the cluster, and these workers will run the containers that we want to launch. So this was about Docker Swarm, but this is not it:
let's go ahead and start a cluster using Docker Swarm, and let us see how we can do that. For that, let's go ahead and first launch a machine on AWS: we have the master, so let us launch a worker, and for that let us launch an Ubuntu system. Okay, so our Ubuntu system is now launching; let's name this instance worker. Okay, so while this is launching, you'll have to do some steps on your leader. Say this is the instance that I want to become the leader of my Docker Swarm cluster: what I'll have to do is type docker swarm init, and then I have to specify the advertise address, which is basically the private IP address of the instance. In my case, this is the private IP of my instance, so I'll have to specify it over here, and then I'll hit enter. With this, you will get this message: swarm initialized, the current node is this, and it is now a manager; a manager is nothing but the leader, okay? And any node which wants to join this manager will have to pass this command, docker swarm join, with the token that is printed. So basically, I'll have to log in to this particular worker of mine, install Docker on it first, and then we'll go ahead and join it to the cluster.
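On the leader, the initialization looks like this; port 2377 is Swarm's default cluster-management port:

    # on the leader: advertise this node's private IP to the rest of the swarm
    docker swarm init --advertise-addr <private-ip-of-leader>
    # the output prints a ready-made join command of this shape for the workers:
    # docker swarm join --token <token> <private-ip-of-leader>:2377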
So let us connect to our worker, and let me make the font a little bigger so that you can also see what I'm typing. Okay, great, so this is my instance. Now what I'll have to do is install Docker first, so let me do an update first; the machine is updated now, and let me install Docker: sudo apt-get install docker.io. All right, so Docker is installed; let us verify that by typing sudo docker version. Great, so Docker is installed, and Swarm is automatically installed when you install Docker, so all you have to do now is go to the leader, copy the join command, and paste it over here. Also, while we are doing that, I'll have to open the ports for these instances to interact with each other, so let me allow all traffic over here, and now my instances should be able to interact. I just passed the command here and hit enter... all right, so I figured out what the problem was: I had not specified the advertise address correctly. My master is basically on this IP address, so let me copy that, and if we pass the init command again, now this is the join command that we get. Let us copy it, paste it over here, and hit enter, and you can see it says: this node joined a swarm as a worker. All right, awesome. Now, if I go to the leader and do a docker node ls, you can see that I have two nodes: this is my current node, which is the leader, and it's in the ready state, and so is the second node, which is also in the ready state and has joined the swarm.
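The worker-side steps, as commands; the token and IP come from the init output on the leader:

    # on the worker:
    sudo apt-get update
    sudo apt-get install -y docker.io
    # paste the join command printed by `docker swarm init` on the leader:
    sudo docker swarm join --token <token> <private-ip-of-leader>:2377
    # back on the leader, confirm both nodes are Ready:
    docker node ls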
So this is how you can create a Docker Swarm cluster, guys. Again, to be very clear: the first thing you have to do is a docker swarm init on the master node; then you will get a command; just pass that command on the worker, and your worker will connect directly to your master. Now, if you want a worker to leave the cluster, all you have to do is say sudo docker swarm leave, and you can see it says the node left the swarm. If you then do a docker node ls on the master, in a few moments, once the health check has failed, it will say the status is down for this particular node, which basically means that the node is no longer reachable. Okay, and if you want the master itself to leave the swarm, all you have to do is pass docker swarm leave on the master as well, with --force: hit enter, and it will say node left the swarm, and your swarm cluster has now been dissolved. Okay, so this is how you can go ahead and create a Docker Swarm cluster, and the command, like I said, is docker swarm init --advertise-addr equal to the IP address of the leader; specify that, hit enter, and just copy the resulting command onto the worker, and it will work like a charm.
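In short:

    sudo docker swarm leave       # on a worker: leave the swarm
    docker node ls                # on the leader: the node soon shows Status Down
    docker swarm leave --force    # on the leader: a manager has to be forced out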
All right, so our next step is to deploy an app on Docker Swarm. Before that, let me quickly launch the swarm cluster again: docker swarm init --advertise-addr with the IP address of the master, which is this, so I specify that and hit enter, then copy this command and pass it on the worker. Okay, and if I do a docker node ls over here, you can see the cluster is ready, awesome. Now what I want to do is deploy an app on Docker Swarm, but before that, we'll have to understand how an application actually works on Docker Swarm. So an application works something like this: you basically create services on Docker Swarm, and a service will basically create the containers for you, for that particular image that you have specified. All right.
So let's go ahead and create a service. Guys, this is the syntax for creating a service in Docker Swarm: what we'll have to do is type docker service create, then specify the name of the service, say I want to name the service apache; then specify the number of replicas that we want to run, so say I want to run five replicas; then the port mapping, so basically I want it to be available on port 83; and then the name of the image, say I'm launching hshar/webapp. Okay, I hit enter; it runs five containers, verifies that everything is running fine, and once everything is running great, it exits.
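As one command; the 83:80 mapping assumes the image serves its page on port 80 inside the container, and the image name is the one used in this demo:

    docker service create --name apache --replicas 5 -p 83:80 hshar/webapp
    docker service ls      # shows REPLICAS 5/5 once everything is up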
So now, if you want to see all the services running on your swarm cluster, all you have to do is say docker service ls, and it will show you that this is the service which is running; it is running in mode replicated, it has five out of five replicas running, and this is the image name. Now I will show you a very awesome thing. So basically, port 83 is exposed, right? So what I'll do is go to the master's IP address, which is this, copy it, type it here with port 83: all right, this is working. Now what I will do is go to the worker's IP address and go to port 83 there as well: awesome, isn't it? So basically, in a swarm, whichever IP address you go to, of the master or of the worker, the application will be available on both servers on port 83. And the most awesome thing is this: as I showed you guys, if I do a docker ps over here, I can see that two of the five containers are running on the leader, and there should be three containers running over here on the worker, and indeed there are three containers running here. So what I can do is basically just do a sudo docker rm -f and remove all the containers; they are removed, and if I go to my master's IP address again and hit enter, I still have the application running over here, right? Similarly, if I go here and do a sudo docker rm -f, saying I want to remove everything (I forgot the sudo over here), it removed three containers; but if I go to my web browser and refresh the worker's IP address, I still have the application running over here, right? This basically means that my containers are being automatically healed, which means that if they get deleted, they get created again. So you can see I deleted the containers over here, but if I do a docker ps again, can you see, it has three containers launched now, and I guess if we go on the worker and do a docker ps over here, it should have two containers, exactly as we saw. So it automatically creates the containers again, and it will always maintain the number of replicas at five, because that is what you have specified, and that is what it will always retain.
Now, you can always scale the replicas, and for that all you have to do is docker service scale apache= and then the number of replicas that you want: say I want only two replicas to be there, and hit enter, and it will basically scale down to only two replicas. Now, if I do a docker service ls, you can see there are only two replicas running: if I do a docker ps here, I have one container over here, and if I do a docker ps there, I have one container running over there. All right, so you can scale down, and you can also scale up: I can again just go over here and type in the command with ten, and it will start ten containers for that web app, right? And you can verify that: if I do a sudo docker ps over here, it has around five containers here, and if I do a docker ps there, it has around five containers there. Now, if you want to remove a service, if you want to remove an application from the cluster, all you have to do is docker service rm and then specify the name of the service, and it will remove that service. And now, if I do a docker ps, it should slowly remove everything out of here: see, the containers are gone over here, and similarly, if I look here, the containers are also gone over here as well.
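Those scaling and removal commands, in one place:

    docker service scale apache=2    # shrink to two replicas
    docker service scale apache=10   # grow to ten, spread across the nodes
    docker service rm apache         # remove the service and all its containers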
Okay, and again, like I told you guys, if you want a node to leave the swarm, all you have to say is sudo docker swarm leave, and it will say this node has left the swarm; and similarly, if I want the master to leave as well, I will have to say docker swarm leave on the master, always with --force, and it will leave the swarm too, and then you'll have a clean machine again. All right, so this is how you can deploy an application on Docker Swarm. All right, so thank you guys for attending today's session; I hope you guys learned something new today. As always, if you liked this video, please click on the like button and subscribe to our channel for any future updates, and if you have any doubts regarding any of the topics that we discussed today, I would request you all to put them in the comment section that is mentioned below, and we'll be happy to answer them for you. All right, so with that, guys, I will take your leave. Have a great day, and goodbye.