Docker is one of the leading
container orchestration tools in today's market. Hi all, I welcome you
to this full course session on Docker and what follows is the complete
crash course on the same. But before we begin, let's take a look
at today's agenda. So we'll start off
with introduction to Docker, where we'll talk about what is Docker, its components and its architecture. After that, we'll talk about how to install and set up Docker on a CentOS machine and on Windows. Later on in the session, we'll look into Dockerfile and commands. We will understand how to create and run a Dockerfile and use various commands. After that, I'll talk about how to use Docker Compose and Docker Swarm. So over here, you'll understand how to run various containers to host a single application, and also, coming to Docker Swarm, you'll understand how to create a cluster to achieve high availability. Moving forward in the session, we'll look
into Docker networking. We will understand
the various aspects of Docker networking and after that I'll talk
about Dockerizing an application. So over here, you'll understand how to Dockerize an application, be it an AngularJS application, a microservice application or a Node.js application. And finally, I'll end this session by talking about the differences between Docker and virtual machines, and also comparing Docker versus Kubernetes. Right, with that, I come to an end
to my today's agenda. But before we begin I
would like to request all of you to subscribe to our Edureka YouTube
channel to get daily notified on the top trending
Technologies on that note. Let's get started. Why we need Docker. So this is the most
common problem that industries were facing as you can see that there is a developer
who has built an application that works fine
in his own environment. But when it reached production, there were certain issues with that application. Why does that happen? That happens because of the difference in the computing environment between dev and prod. So I hope you are clear with the first problem. I'll move forward and we'll see the second problem. But before we proceed with the second problem, it is very important for us to understand what microservices are. Consider
a very large application that application is broken down
into smaller Services. Each of those Services
can be termed as microservices or we can put it in another way as well microservices can be
considered a small processes that communicates with each other over a network
to fulfill one particular goal. Let us understand this
with an example as you can see that there is an online
shopping service application. It can be broken down into smaller microservices like account service, product catalog, cart service and order service. Microservice
architecture is gaining a lot of popularity nowadays
even giants like Facebook and Amazon are adopting
micro service architecture. There are three major
reasons for adopting microservice architecture, or you can say there are
three major advantages of using microservice architecture. First, there are certain applications which are easier to build and maintain when they are broken down into smaller pieces or smaller services. The second reason is, suppose I want to update a particular software or I want a new technology stack in one of my modules or one of my services, I can easily do that, because the dependency concerns will be very less when compared to the application as a whole. Apart from that, the third reason is, if any of my modules or any
of my service goes down, then my whole application
remains largely unaffected, so I hope we are clear
with what are microservices and what are their advantages? So we'll move forward
and see what are the problems in adopting this microservice architecture. So this is one way of implementing microservice
architecture over here. As you can see that there is a host machine and on top of that host machine there are multiple
virtual machines each of these virtual machines
contains the dependencies for one micro service. So you must be thinking
what is the disadvantage here? The major disadvantage here is
in virtual machines, there is a lot of wastage of resources. Resources such as RAM, processor and disk space are not utilized completely by the microservice which is running
in these virtual machines. So it is not an ideal
way to implement microservice architecture and I
have just given an example of five microservices. What if there are
more than 5 microservices? What if your application is so huge that it requires
50 micro services. So at that time using
virtual machines doesn't make sense because of
the wastage of resources. So let us now see how Docker solves this microservice implementation problem that we just saw. So what is happening here? There's a host machine, and on top of that host machine there's a virtual machine, and on
top of that virtual machine, there are multiple
Docker containers. And each of these
Docker containers contains the dependencies
for one microservice. So you must be thinking what is
the difference here? Earlier we were using virtual machines. Now we are using Docker containers on top of virtual machines. Let me tell you guys, Docker containers are actually lightweight alternatives to virtual machines. What does that mean? In Docker containers, you don't need to pre-allocate any RAM or any disk space. It will take the RAM and disk space according to the
requirements of the applications. All right. Now let us see how Docker solves the problem of not having a consistent computing environment throughout the software delivery lifecycle. Let me tell you, first of all, Docker containers are actually built by the developers. So now let us see how Docker solves the first problem that we saw, where an application works fine in the development environment but not in production. So Docker containers can be used throughout the SDLC life cycle in order to provide a consistent computing environment. So the same environment will be present in dev, test and prod. So there won't be any difference
in the Computing environment. So let us move forward and understand what
exactly Docker is. So Docker containers do not use a guest operating system. They use the host
operating system. Let us refer to the diagram
that is shown. There is the host
operating system and on top of that host operating system. There's a Docker engine
and with the help of this Docker engine Docker
containers are formed and these containers have
applications running in them. And the requirements for those applications, such as all the binaries and libraries, are also packaged in the same container. All right, and there can be multiple containers running, as you can see that there are two containers here, 1 and 2. So on top of the host machine there is a Docker engine, and on top of the Docker engine there are multiple containers, and each of those containers will have an application running on it, and whatever binaries and libraries are required for that application are also
packaged in the same container. So I hope you are clear. So now let us move forward and understand Docker
in more detail. So there's a general workflow
of Docker or you can say one way of using Docker over here. What is happening
a developer writes a code that defines an application
requirements or the dependencies in an easy to write Docker file and this Docker file
produces Docker images. So whatever dependencies
are required for a particular application is present
inside this image. And what are Docker containers
Docker containers are nothing but the runtime instance
of Docker image. This particular image
is uploaded onto the docker Hub. Now, what is Docker hub? Docker Hub is nothing
but a git repository for Docker images it contains public as well
as private repositories. So from public repositories, you can pull images, and you can upload your own images as well onto Docker Hub. All right, from Docker Hub,
various teams such as QA or production team
will pull the image and prepare their own containers as you can see from the diagram. So what is the major advantage
we get through this workflow? So whatever the dependencies that are required
for your application is actually present throughout
the Software delivery life cycle if you can recall
the first problem that we saw that an application works fine
in development environment, but when it reaches production,
it is not working properly. So that particular problem
is easily resolved with the help of this particular workflow because you have
a same environment throughout the software delivery
lifecycle, be it dev, test or prod.
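As a rough sketch of that workflow, the commands involved would look something like this; the image name myrepo/myapp is just a placeholder, not something taken from this session:

    docker build -t myrepo/myapp .    # developer builds an image from the Dockerfile
    docker push myrepo/myapp          # the image is uploaded to Docker Hub
    docker pull myrepo/myapp          # QA or production teams pull that image
    docker run myrepo/myapp           # and prepare their own containers from it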
So I'll move forward, and for a better understanding of Docker, we'll see a Docker example. So this is another way of using Docker. In the previous example, we saw that Docker images were used and those images were uploaded onto Docker Hub. And from Docker Hub,
various teams were pulling those images and building
their own containers. But Docker images
are huge in size and require a lot of network bandwidth. So in order to save that network bandwidth, we use this kind of workflow over here. We use Jenkins servers, or any continuous integration server, to build an environment that contains all the dependencies for a particular application or a service, and that built environment
is deployed onto various teams, like testing staging
and production. So let us move forward and see what exactly is happening
in this particular image over here developer
has written complex requirements for a micro service
in an easy-to-write Dockerfile. And the code is then pushed onto the Git repository. From the GitHub repository, continuous integration servers like Jenkins will pull that code and build an environment that contains all the dependencies for
that particular micro service. And that environment
is deployed on to testing staging and production. So in this way, whatever requirements are there
for your micro service is present throughout the
software delivery life cycle. So if you can recall the first problem, where an application works fine in dev but does not work in prod, with this workflow we can
completely remove that problem because the requirements
for the microservice is present throughout the software
delivery life cycle and this image also explains how easy it is to implement
a microservice architecture using Docker. Now, let us move forward and see how
Industries are adopting Docker. So this is the case study
of Indiana University. Before Docker, they were facing many problems. So let us have a look
at those problems one by one. The first problem was they
were using custom script in order to deploy
their application onto various vm's. So this requires a lot
of manual steps and the second problem was
their environment was optimized for legacy Java-based applications, but their growing environment involves new applications that aren't solely Java-based. So in order to provide these students the best possible experience, they needed to begin modernizing their applications. Let us move forward and see what other problems Indiana University was facing. So as we saw in the previous problem, Indiana University wanted to start modernizing their applications. So for that, they wanted to move from a monolithic architecture to a microservice architecture. In the previous slides, we also saw that if you want to update a particular technology in one of your microservices, it is easy to do that because there will be very few dependency constraints when compared to the whole application. So because of that reason,
they wanted to start modernizing their application. They wanted to move to
a microservice architecture. Let us move forward and see
what are the other problems that they were facing Indiana
University also needed security for their sensitive student data, such as SSNs and student health care data. So there are four major problems that they were facing
before Docker. Now, let us see how they have implemented Docker to solve all these problems. The solution to all these problems was Docker Data Center, and Docker Data Center has various components, which are there in front of your screen. First is Universal Control Plane, then comes LDAP, Swarm, CS Engine and finally Docker
trusted registry. Now, let us move forward and see
how they have implemented Docker data center
in their infrastructure. This is a workflow of
how Indiana University has adopted Docker Data Center. This is Docker Trusted Registry. It is nothing but the storage of all Docker images, and each of those images contains the dependencies for one microservice. As we saw, Indiana University wanted to move from a monolithic architecture to a microservice architecture. So because of that reason, these Docker images contain the dependencies for
one particular micro service, but not the whole application. All right, after that comes
universal control plane. It is used to deploy Services
onto various hosts with the help of Docker images that are stored
in the Docker Trusted Registry. So the IT Ops team can manage
their entire infrastructure from one single place with the help of universal control
plane web user interface. They can actually use it
to provision Docker installed software on various hosts and then deploy applications without doing a lot
of manual steps as we saw in the previous slides that Indiana University
was earlier using custom scripts to deploy
our application onto VMS that requires a lot of manual steps that problem
is completely removed here when we talk about
security, the role-based access controls within the Docker Data Center allowed Indiana University to define levels of access for various teams. For example, they can provide read-only access to Docker containers for the production team, and at the same time they can actually provide read and write access to the dev team. So I hope we all are clear with how Indiana University
has adopted Docker data center. So we will move forward and see what are the various
Docker components. First is Docker registry. Docker registry is nothing but the storage of all your Docker images. Your images can be stored either in public repositories or in private repositories. These repositories can be present locally or they can be present on the cloud. Docker provides a cloud-hosted service called Docker Hub. Docker Hub has public as well as private repositories. From public repositories,
you can actually pull an image and prepare your own containers
at the same time. You can write
an image and upload that onto the docker Hub. You can upload that
into your private repository or you can upload that on a public
repository as well. That is totally up to you. So for better understanding
of Docker Hub, let me just show you
how it looks like. So this is how Docker Hub looks like. So first you need
to actually sign in with your own login
credentials after that. You will see a page like this, which says welcome
to Docker Hub over here as you can see That there is an option
of create repository where you can create your own
public or private repositories and upload images
and at the same time. There's an option called
explore repositories this contains all the repositories
which are available publicly. So let us go ahead and explore some of the publicly
available repositories. So we have repositories for nginx, Redis, Ubuntu, then we have Docker Registry, Alpine, Mongo, MySQL, Swarm. So what I'll do, I'll show you the CentOS repository, the repository which contains the CentOS image. Now, what I will do later in the session, I'll actually pull a CentOS image from Docker Hub. Now, let us move
forward and see what are Docker images and containers. So Docker images are nothing
but the read-only templates that are used to create containers. These Docker images contain all the dependencies for a particular application or a microservice. You can create your own image and upload that onto Docker Hub, and at the same time you can also pull the images which are available in the public repositories in Docker Hub. Let us move forward and see
what are Docker containers. Docker containers are nothing
but the runtime instances of Docker images. They contain everything that is required to run an application or a microservice, and at the same time, it is also possible that more than one image is required to create one container. Alright, so for a better understanding of Docker images and Docker containers, what I'll do on my Ubuntu box, I will pull a CentOS image and I'll run a CentOS container from it. So let us move forward
and first install Docker in my Ubuntu box. So guys, this is
my Ubuntu box over here first. I'll update the packages. So for that I'll type
sudo apt-get update. It is asking for the password. It is done now. Before installing Docker, I need to install the recommended packages. For that, I'll type sudo apt-get install linux-image-extra-$(uname -r), and now linux-image-extra-virtual, and here we go. Press Y. So we are done with the prerequisites. So let us go ahead and install Docker. So for that I'll type sudo apt-get install docker-engine. So we have successfully installed Docker. If you want to install Docker on CentOS, you can refer to the CentOS
Docker installation video. Now we need to start
this Docker service. After that, I'll type sudo service docker start. So it says the job is already running. Now, what I will do, I will pull a CentOS image from Docker Hub and I will run the CentOS container. So for that I will type sudo docker pull and the name of the image, that is centos. First it will check the local registry for the CentOS image. If it doesn't find it there, then it will go to Docker Hub for the CentOS image and it will pull the image from there. So we have successfully pulled the CentOS image from Docker Hub. Now I'll run the CentOS container. For that, I'll type sudo docker run -it centos, that is the name of the image. And here we go. So we are now
in the CentOS container. Let me exit from this and clear my terminal. So let us now recall what we did. First, we installed Docker on Ubuntu. After that, we pulled the CentOS image from Docker Hub. And then we built a CentOS container using that CentOS image.
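For reference, that whole Ubuntu demo comes down to roughly this sequence of commands. Note that docker-engine is the older package name used in this session; current Docker releases ship as docker-ce instead, so treat this as a sketch of what was shown rather than the up-to-date installation method:

    sudo apt-get update
    sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
    sudo apt-get install docker-engine
    sudo service docker start
    sudo docker pull centos
    sudo docker run -it centos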
Now I'll move forward and I'll tell you what exactly Docker Compose is. So let us understand what exactly Docker Compose is. Suppose you have multiple applications
on various containers and all those containers
are actually linked together. So you don't want
to actually execute each of those containers one by one but you want to run
those containers at once with a single command. So that's where Docker Compose comes into the picture. So let us proceed over to the Docker installation.
to talker installation. First I'll make sure my existing
packages are up-to-date. So for that I will type
sudo yum update. And here we go. So no packages marked for update. I will clear my terminal now. Now I will run the Docker installation script. For that, I'll type curl -fsSL and now I'll give the link https://get.docker.com/, piped to sh, and here we go. This script adds the Docker repository and installs Docker. It is done now. Our next step is to start the Docker service. So for that I will type sudo service docker start. Here we go. So Docker has now started successfully. Now I will pull a Docker image for the Ubuntu operating
system. Docker images are used to create containers. If the image is not present locally, Docker will pull the image from registry.hub.docker.com. Currently, I don't
have any image. So I'll pull an image
for the Ubuntu operating system, and for that I'll use sudo docker run and the image name, that is ubuntu, and here we go. As you can see, unable to find image. I can highlight that with my cursor as well. So just notice, it is unable to find the image locally. That means it is pulling from registry.hub.docker.com. So it has downloaded
newer image for Ubuntu. In order to
start using container, you need to type
sudo docker run -it and the name of the image, which is ubuntu, and here we go. As you can see that we are
in Ubuntu container right now. I'll open one more tab. Over here if you want to see all
the running Docker containers, you can type sudo docker ps and it will display them for you. So as you can see, the name of the image is ubuntu and this is the container ID for that particular image.
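To recap, the CentOS installation demo used roughly these commands; the get.docker.com convenience script is just the method shown here, and installing from your distribution's package repositories is an equally valid alternative:

    sudo yum update
    curl -fsSL https://get.docker.com/ | sh
    sudo service docker start
    sudo docker run -it ubuntu
    sudo docker ps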
Docker for Windows? Now? The first reason is that it avoids the work
on my machine, but doesn't work
on the production problem. All right. Now this problem occurs due to
the inconsistent environment throughout the software
development workflow. For example, let's say that a developer
has built an application on a Windows environment
and when he sends the application for
the testing server, it fails to run now this happens because the testing server
operates on an outdated version of Windows now, obviously the application does
not support the dependencies needed to run on the outdated
version of Windows. So because of the difference
in the software versions in the development and testing
server the application will fail but when it comes to Docker
we can run our application within a container which contains all
the dependencies of the application, and the container can be run throughout the software development cycle. Now this practice provides
a consistent environment throughout. Apart from that, it improves productivity. So by installing Docker on Windows, we're running Docker natively. If you've been following Docker for a while, you know that Docker containers originally supported only Linux operating systems, but later Docker made its platform available for other operating systems, but with a small limitation. Now, the limitation was that the Docker engine ran inside a Linux-based virtual machine image on top of the operating system. So basically you could run Docker from Windows or any other operating system, except Linux was still
the middleman but thanks to the recent release Docker can
now natively run on Windows, which means that Linux support
is not needed instead the docker container will run
on Windows kernel itself. All right, so guys just like I mentioned earlier Docker for Windows suppose
native networking now not only the Docker container
the entire dock or tool set is now
compatible with Windows. This includes a Docker
CLI Docker compose data volumes and all
of the other building blocks for darker eyes infrastructure, which are now
compatible with Windows but houses advantages now since all the docker
components are locally compatible with Windows. They can run with
minimal computational overhead. Now, let's move
on to the prerequisites. So before you install
Docker for Windows, you need to check if you are running on a Windows 10 Pro, Enterprise, Education or Student edition 64-bit system. Now guys, a point to note here is that Docker will not run
on any other windows version. So if you're running
on an older Windows version, you can install
the Docker Toolbox instead. Okay, now Docker for Windows requires a type 1 hypervisor, and in the case of Windows, it's called Hyper-V. Now, what is Hyper-V? Hyper-V is basically a lightweight virtualization solution built on top of the hypervisor framework, so you don't need a VirtualBox. You just have to enable Hyper-V. All right, and also you need to enable the virtualization in BIOS. Now when you install Docker
for Windows by default, all of this is enabled but in case you're facing
any issue during installation, please check if your hyper-v and your virtualization
is enabled now, let's move on to the demo. So we're going to begin with
installing Docker for Windows. Now before we go ahead, guys, you have to make sure that you're using a Windows 10 Pro, Enterprise, Education or Student edition. One more important point to note here is that if you're using a VirtualBox on your system, you won't be able to run it, because VirtualBox will not work with the hypervisor enabled. But in order for your Docker for Windows to work on your system, the hypervisor must be enabled. So guys, basically you cannot run Docker for Windows and a VirtualBox on the same system side by side. Okay, so if you have a VirtualBox in your system, it's not going to work because you'll be enabling your hypervisor. So let's get started by installing Docker for Windows. Now in order to install Docker for Windows, you need the Docker for
Windows installer now, I'll leave a link in
the description box so that you can download the installer. So guys have already installed
the talk of a Windows installer. Y'all can go ahead
and download it from the link in the description now
here you can see that I've run the installer. So now let's just wait for the installation
to complete, okay? Now let us click on. Okay. All right, so
it's unpacking files. All right. So the installation
is completed. So guys once you've installed
it, just open the Docker for Windows app. Alright, it's here
on my desktop. So when you try
to start the application, you'll see a whale icon
on the status bar. All right, here you can see the whale icon. Now, when the whale icon becomes stable, it means that Docker has started and you can start working on it. Okay, so this icon needs to get stable. That means that Docker has started. All right. So you can see a message
popped up like this. Okay. It says Docker is
now up and running. All right, so guys
you can either login to your Docker Hub account from
here or you can use the docker login command and log in. All right. I'm going to go ahead and log
into my Docker Hub account. So now you all can open
up any terminal and start running Docker commands. So guys, I'm going to be using
Windows Powershell now make sure you run as an administrator because there are
a lot of commands which require admin access. Okay. So yes, all right. Now in order to check if we've successfully installed
Docker what we're going to do is we're going to check
the version of Docker. So the command for checking
the version is docker space hyphen hyphen version. All right. So it's returning the version of Docker that I've installed, which means that I've successfully installed Docker. Okay. So now that we know Docker
is successfully installed. Let's run a few basic
Docker commands Okay. So let me just
clear the terminal. Now I'm going to run Docker run. Hello world. Now. This is the most
basic Docker command that's run once you install Docker. Okay, so I'm basically gonna run
the hello world image now. Let's see what happens. So it's unable to
find image locally. So it's going to pull
the hello world image from Docker Hub. Okay. All right. So this basically gives a hello
from Docker message. So we finished
the First Command now, let's try something different. Yeah. So you use Docker images
command to check the images that you have in your system. Since we just ran this hello world image from Docker Hub, we have this image
in our repository. All right. Now, let's pull a Docker image
from Docker Hub. Okay. Now in order to do that, you just use a simple command
called Docker pull and the name of the image
that you want to pull on it. So I'm going to pull
an Ubuntu image. Let's see how it works. So it's basically pulling
an Ubuntu image from Docker Hub. Alright, now let's
run this image. So guys, do you remember that? I said that whenever
you run a Docker image, it runs as a Docker container. So whenever I perform
this command docker run -it -d and the name of the image. All right. So whenever I use docker run
and I run an image, it's basically going to create
a container from this image. Okay, so it's going to create
an Ubuntu container. Alright now the next command
is docker space ps -a. Now basically this
should show all the containers. All right. So basically we have
two containers over here because we ran both
of these images. All right, so whenever you run
an image it runs as a container, that's exactly what
I told you earlier. Okay. Let's clear this now. Let me type this out
and then I'll tell you what this does. All right. And what I'm doing here
is I'm just accessing a running container. Okay. This is the container ID, which is basically the Ubuntu image that we pulled, so I'm basically giving the container ID of this Ubuntu image that we pulled. Now, basically, within the container, okay, you can perform commands
like let's say Echo. Hello. All right. So it says hello. Now what you can do is you
can just exit from here. All right, so you come
out of the container. Okay. Now, let's try to stop
a running container. Okay. Let's see. August top and the container ID. All right, so it
stopped that container. Okay. All right. So the next command
is Docker commit. Okay. Let me just type this out
and then I'll tell you what it does. Okay. So basically I'm using
the docker commit command. So basically it's going
to create a new image on the local system. So after Docker commit, I have the container ID
and I'm going to create an image out of this and after a space
I've mentioned zulaikha/ubuntu. Now, zulaikha is basically the name of my Docker Hub repository and ubuntu is the name
of the image. All right. So let's see what happens. So basically we created
a new image over here. So here you can see
that there's another image which is added
which is zulaikha/ubuntu. Okay. It has a new image ID and so on. All right, now guys, if you perform this command
without logging in to Docker Hub, they're going
to ask you to log in first. Okay, and for that you
can use the command, which is Docker login. All right now. Did he logged in earlier
in the session? So that's why it
says login succeeded. Otherwise, it's going to ask you
for your credentials. All right, it's one ask
you for your username and your password. Okay. Now what we're going to do is
we're going to push this image to Docker Hub. So we're going to use
a docker push command along with the name
of my Docker Hub repository and the image name. All right, so it's preparing and it's going to push
this image to Docker Hub. All right. Now, let's say that you want
to delete a container. So what you can do is you
can use the docker RM command. So basically the command goes
docker rm and the container ID. Okay. Alright. Now, let's look
at our containers now, we have only one container. So basically the container with
container ID this got deleted. Okay. Similarly. You can also
remove Docker images. Alright, so first let's look at the Docker image ID that you want to remove. All right, let's say I want to remove zulaikha/ubuntu. Okay. I'm just going to use this image ID. And the command is docker rmi
and the image ID. Now. Let's look at the docker images. Now, you can see only Ubuntu
and hello world is there so this is how you
remove Docker images and I also showed you
how to remove Docker containers. So those of you who weren't familiar with Docker now have a good idea of how simple Docker commands work.
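For reference, the commands demonstrated in this Docker for Windows section were roughly the following. The repository name zulaikha/ubuntu just stands in for whatever your own Docker Hub repository is called, and the exec line is an assumption about how the running container was entered, since the exact command is not read out in the session:

    docker --version                              # check the installed version
    docker run hello-world                        # run the hello-world image
    docker images                                 # list local images
    docker pull ubuntu                            # pull an image from Docker Hub
    docker run -it -d ubuntu                      # run it detached as a container
    docker ps -a                                  # list all containers
    docker exec -it <container-id> bash           # access a running container (assumed)
    docker stop <container-id>                    # stop a running container
    docker commit <container-id> zulaikha/ubuntu  # create a new image from a container
    docker login                                  # log in to Docker Hub
    docker push zulaikha/ubuntu                   # push the image to Docker Hub
    docker rm <container-id>                      # delete a container
    docker rmi <image-id>                         # delete an image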
I'm going to create a simple python web application
using Docker compose. Okay. Now, let me tell you a little
bit about this application. It basically uses flask framework and it maintains
a hit counter in redis. So guys for those of you
who don't know what Flask is, it is basically a web development framework which is written in Python, and Redis is an in-memory storage component. It is basically used as a database. Okay, now guys, don't worry if you don't know Python, this program is very understandable. So we're basically going to use Docker Compose to run two services, which is a web service and a Redis service. Now, what this application does is it's going to maintain a hit counter every time
you access a webpage. So each time you
access the website or hit counter gets incremented. Okay. It's simple logic just increment
the value of the hit counter when the web page is accessed. Okay. Alright, so let's begin
with creating a directory for your application. It's always a good practice
to have a directory with stores all of your code. All right. So let's start with
creating a directory. Let's say web application. All right. Now I'm going to change
to that directory. So guys have already
typed out the entire code because I didn't want
to waste a lot of time. So what I'm going to do is I'm
just going to open up the files and I'll explain
what the code does. All right, so I have all
of my code written in notepad plus plus so I'm just opening up notepad. Also guys, I want to tell you that you don't have
to install Python or Redis; we're going to use Docker images for Python and Redis. Okay. So first what you do is you have
to create a python file. Okay. I've called it web app. So I'm not going
to spend a lot of time. We are just tell you
what we're doing. So first of all, we're going to begin
with importing the dependencies. So we're going to import
time, we need Redis, we also need Flask. Okay. These are the requirements that we're going to import. After that, we're just initializing the name of the application. So here we're just hosting the database and we're connecting to Redis using the port number 6379. All right. This is the default port. Then we define the get hit count function; this basically returns the number of hits. So we are also setting the retries to 5 in case the page does not load
while all of this holds true. The incremented hits
are returned. And if there's an error,
then we have an exception. So we have also defined
exception in case of errors this function
is basically to display the hello world message
along with the hit count. So this is the Python file. It's very simple, guys, very understandable. You don't have to be a pro in Python to understand it.
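For reference, the webapp.py file being described looks roughly like this; the exact message text, function names, file name and the retry delay are approximations of what is shown on screen rather than an exact copy:

    import time
    import redis
    from flask import Flask

    app = Flask(__name__)
    cache = redis.Redis(host='redis', port=6379)    # connect to the redis service on the default port

    def get_hit_count():
        retries = 5
        while True:
            try:
                return cache.incr('hits')           # increment and return the hit counter
            except redis.exceptions.ConnectionError as exc:
                if retries == 0:
                    raise exc                       # give up after the retries are used
                retries -= 1
                time.sleep(0.5)

    @app.route('/')
    def hello():
        count = get_hit_count()
        return 'Hello World! I have been seen {} times.\n'.format(count)

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)          # expose the app on port 5000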
Alright, now the next file you're going to create is a txt file, which I've named requirements.txt. Okay, now over here, I'm just going to add my requirements, which is flask and redis. So next we have the Dockerfile. Now this Dockerfile is used to create Docker images. Okay. I mentioned this earlier in the session that you require Dockerfiles to create Docker images. Okay. So first we're just
setting the base image. So we're building an image starting with Python
3.4. Now in this line, we're going to add the current directory into the /code path of the image. Then we're going to change the working directory to this path. After this, you're going to use the package manager of Python to install the requirements that are mentioned in my requirements.txt file. Okay. So these two were the requirements, which is flask and redis. And then finally we're setting the default command for the container to run the web app with Python. Okay, so it's basically
going to run my web app.
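Putting the last two files together, the requirements.txt and the Dockerfile described above look roughly like this. The python:3.4 base image is the one mentioned in the session, and webapp.py is an assumed name for the Python file created earlier:

    # requirements.txt
    flask
    redis

    # Dockerfile
    FROM python:3.4
    ADD . /code                            # add the current directory into /code in the image
    WORKDIR /code                          # run the following commands from /code
    RUN pip install -r requirements.txt    # install flask and redis with Python's package manager
    CMD ["python", "webapp.py"]            # default command: start the web app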
Now we finally have a Docker Compose file. Like I mentioned earlier, a Docker Compose or YAML file is going to contain all of the services. So there is a web service over
here and there is redis service. So we're basically running
two containers over here, or two services over here, which is web and redis. So now, the web service is basically building the Dockerfile in the current directory. All right, the dot signifies the current directory, and it forwards the exposed port 5000 on the container to the port 5000 on the host machine. Now, the redis service is basically using a redis image pulled from Docker Hub.
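Based on that description, the docker-compose.yml would look roughly like the following; the compose file version line is an assumption, since the session doesn't show which version is used:

    version: '3'
    services:
      web:
        build: .              # build the Dockerfile in the current directory
        ports:
          - "5000:5000"       # forward container port 5000 to host port 5000
      redis:
        image: redis          # pull the redis image from Docker Hub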
So guys, this was all about the files you need. You create a web application file, which is a Python file. And then you have
a requirements dot txt file. Then you have to
have a Docker file and a Docker compose file
to run both of these services. So guys now that I've explained
the various files, what I'm going to do is
I'm going to run both of these services or both of these containers
by using the docker-compose up command. Alright guys, make sure to create all
of these four files and you have to create them obviously in
the web application directory. So if I do LS, I know that I have a Docker
compose dot yongle file. I have a dog of file I
have requirements Dot txt. And I have a web
app dot python file. Now. Let's use Docker - compose up to run all
of these containers. So just building
from my doctor file. So now it's installing
my requirements over here. So now it's running
my web app dot python file now. It's creating two
Services over here, which is web service
and redis service. So what I'm going to do is
I'm going to look at the output by using kitematic. So guys I told you earlier that kitematic is basically
a UI tool for Docker for Windows. So just left-click on the Docker icon over here, and here you're going
to see Kitematic. Okay, click on it. But I think I'm facing an error. I'm just going to go back to my files and see if I have missed out any line. All right, so over here I have written the import wrong. This should actually be import time. Okay. This was a simple mistake. So let me just save this and let's try and
run this again now, it should definitely work. I'll just clear the terminal and we're going to use
docker-compose up. All right. Now what I'm going
to do is I'm going to show you the output using kitematic. Here you can see
an option kitematic. So click on this now. It shows two
applications over here, which are running one is
the web service and the other is the redis service. Now, when you go
to the web service, you can see
the output over here. Let's click on this. So whenever you refresh the page
the hit count increases. So this is
how the application works. If you keep refreshing, the hit count will keep
increasing so guys, this was a simple
web application and I also showed you all how to view
this using kitematic. Okay. So now you can see
that this is green, which means that it's running. All right, you can also restart the container, you can stop it, you can enter into the container and you can run
a few commands Okay, you can use kitematic
in a lot of other ways. Let's go about writing
a Docker file. First of all, your dockerfile is just
gonna be a file. Okay. Just going to be a text file
without any dot txt extension. Okay. Your dockerfile will
basically contain commands and arguments only
and that is all that is needed to run. Okay, these commands
and arguments. But additionally, if you want to just comment something, if you want to add explanatory lines, then you can write them by using this hashtag over here. Okay. So technically it might involve you having commands, arguments and comments. Okay. So my commands and arguments are the ones which are going to help me customize my Docker image, and the comments are something which I can write for my explanation. Okay. So if I put a # over here, then whatever
comes in that particular line after the hashtag
would be ignored. So if I say # print welcome to Edureka, I'm just giving a sample here, so this line will be completely ignored and not executed. However, in the second line, if I have RUN echo welcome to Edureka, then this line would be executed. Okay. And in this case, I have my command and arguments: RUN is going to be my command and echo welcome to Edureka will be my arguments. So I can have argument one, I can have argument two, I can have argument three and many more. Okay, but by default your arguments would be just two. Okay, I will have one command and then two arguments.
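As a tiny illustration of that, here is the sample line from this explanation written out the way it would appear in a Dockerfile:

    # print welcome to edureka
    # the line above is only a comment and is ignored when the image is built
    RUN echo Welcome to Edureka
    # here RUN is the command and "echo Welcome to Edureka" is its argument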
Okay, so let me go into more details of these things, and by details I want to talk about the different syntax that I can use here,
the different functionalities or the different commands
that I can use. Okay. So let me start off by talking
about the most important command which is nothing but the
from command so from is the most important command because without the from command
you cannot write a Docker file because the from command is what is used to specify
a base Docker image. Okay in my case. I was specified
your Ubuntu which means that I will be using an Ubuntu
as my base Docker image and all my customizations will
be on top of my Ubuntu image. Now think of it very much
like you working on a server or you working
on a Linux machine. Okay, you have an Ubuntu machine
with you and then if you want to execute
or deploy your application on that particular
machine of yours, you have to install everything, right? So you've not done
the other steps though, but so far by using a from Ubuntu it means that you
have an Ubuntu machine with you. So this is just
the base image, which can be equivalent to you just having an Ubuntu machine. Okay, and what you do after that onto that particular
Docker image depends on the other functionalities and then moving on to the next
command is the Run command. Now, this is again, I would say the second most used command because at
the end of the day if you want to run
a particular image or if you want to run
a particular command, then you use this run
command. In my case, if I have an Ubuntu image and if I want to install, say, Java or Jenkins or React or curl, then I will be using this RUN command. Okay, so I have my RUN command and then my argument would be apt-get install with a -y flag, and I'm saying react or jenkins or whatever. Okay, so that's
what my run command does. It's basically for executing
any command of mine, but it has a slight difference
when compared to CMD. Okay, because run
is used to run a command. Okay, it can be a command which is it could be
a shell command or it could be a command which basically runs
my image into a container. Okay, and that is what
the difference is with respect to CMD with CMD you again
can execute A shell commands like I've done here
I can say CMD echo welcome to Edureka. But however, I
cannot use a CMD command for building my Docker image. Okay, so I cannot execute
my Docker image or I cannot build
my Docker image with the help of the CMD command. So if I want to execute a command on my shell, I can use either RUN or CMD, and if I want to basically build my Docker image, then I can only use RUN; in such places CMD doesn't work. Okay. So moving on, the next
important command is the entry point command. Okay, the entry point command
is basically used to override whatever function
your CMD command does or the entry point
basically suggests that when you've finished
building your Docker image, then the command
which is specified with the entry point
that will be the one which will be executed first when you run
the docker container of that particular image, right so I can build
a Docker image which has this entry point command, and my Docker image will be built, and when I execute that particular Docker image, then the command which is specified with entry point will be the first one to be executed. Okay, and the additional functionality that entry point has is the one which I already said, which is nothing but it can override your CMD command. So take for example over here. So here I'm saying CMD welcome to Edureka. Okay, and here if I say ENTRYPOINT echo, then my entry point will basically override this, because most of the time your CMD command would be the first set of commands which would be executed in your Dockerfile. Okay, you can have a lot of things, but in your CMD command you will have some set of arguments present over here. So in this case I have one command
but with my entry point, if I say entry point Echo, then this would be used
as my command to execute this argument of mine. Okay, so that is the whole point
of entry point. And this is the subtle
difference between entry point and CMD: entry point can basically override your CMD commands. So next comes the add command
now the ADD command or the COPY command, these are commands which can be used interchangeably, because the ADD command is used to copy whatever files are there in one particular directory to another directory. Okay, so it could be copying files from my host to my container. Okay. So I'm saying ADD
and then I can just specify the path of my source
after that for the space. I can specify the path
of my destination to where I want to copy my files. Okay. So this is also pretty
self-explanatory and then the environment command now if my application needs a
particular environment variable, then I can specify
to my Docker container that this application needs
certain environment variables and this environment
variable is present. So an example of this could be if you want to
execute a Java program, then you need Java, right, and you have to set your environment variables. So I can specify my Java environment variables like this inside my Docker container. So ENV would be my command and these would be my arguments: the variable name would be my argument 1 and its value would be my argument 2. Alright, so that was one example. And then the next
important command that you have to be aware of is
the working directory command. In your Docker container, a lot of times you would want to
go into a particular container and then start execution
inside that container, especially when you want
to execute certain commands in the Shell, right? So if you want to use the CMD
command inside your dockerfile, then you want to basically
execute a particular command on the shell, correct? But where exactly do you
want to execute that command because these commands
will be executed from inside the container
and inside the container if you want to customize the place by you want
to execute that command if you want to change the place where the CMD command
will be executing its arguments, then you have to set the working directory over here. So you will say working directory and you'll just
set the path over here. And then whenever you
have a CMD command, which gets executed then that
CMD command will be executed in this particular path. Okay, so pretty simple, right and then we
have the EXPOSE command, and the EXPOSE command is a very important command in case of front-end applications, because with the EXPOSE command you can specify a port number
and you can specify that this application
would be active on this particular port number
inside the container. Okay. And yeah, this will be the one which is running inside
the container and however, if you want to execute
the same particular application and you want to run
on a particular port number on your host, then you have to do
the port mapping but that comes later on
but inside the dockerfile, this is how we specify that
and remember this is going to be only specific
to your container. And this port number
is only going to be used from inside your container. The next thing is
the maintainer command so it's not a
very technical thing. But if you want to tag
your name along with the image, which you are building then
you can use this. Not to specify who's the person that's metering
this particular Docker images before you approach
the docker Hub. And then that way whoever
uploads or downloads your image from the doctor up
will know that okay. This is the guy
that basically built your image. Okay, so we can just
set your name over here. And this has to be present only
after the from command. That's the point which you have a node and then
we have the user command and if you want
a particular user to execute or to run a container, then you can use this USER command and specify the user ID of that particular user whom you want to execute the Docker container. Okay, so it's pretty simple, right? So USER here is my command
will be executing that particular Docker
container of mine. And then we have the one last command
that we are going to talk about which is nothing
but the volume command. So this volume command
is basically used to set a custom path where your container
will store all the files. So this is the place
where all the files related to your Docker container
will be present and even if you want Containers
to share the same path then you can use this volume. So this path it can be shared
by multiple containers. So logically if you
have multiple containers which are hosting
the same application, then you might want them
all to use the same path, right, where it's stored. So this is the path where it can be present. So that's it.
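To tie these instructions together, here is a small illustrative Dockerfile that strings the examples mentioned above into one file; the echo message, environment variable, port, user ID and paths are just the sample values from this explanation, not a real application:

    FROM ubuntu
    MAINTAINER edureka
    RUN apt-get update
    ENV my_var edureka            # sample environment variable (name and value)
    ADD ./app /code               # copy files from the host into the image
    WORKDIR /code                 # commands below run from this directory
    EXPOSE 80                     # the app listens on port 80 inside the container
    USER 751                      # run the container as this user ID
    VOLUME /data                  # path where the container stores its files
    CMD echo Welcome to Edureka   # default command for the container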
And now let's move on to our demo. First I will show you how to install an Apache web server and I will show you how to write a Dockerfile. Okay. So in the first demo of mine, I have a simple Dockerfile where I'm first of all using an Ubuntu image as my base image, and then I'm saying the maintainer is Edureka, and then I'm running
a few commands. Even if you're trying
to install Apache on your local machine on Ubuntu, then you will have to probably
run these commands will offer do an AB get update this basically updates
my Advocate repositories. Okay, and then you'll have to mainly install
your Apache service. So the command for that
is app get install Apache where in your Apache
will be downloaded from your app data repository and then you will want to clean
your object repository. So you'll use the app
get clean command. And most importantly we are
deleting this particular path. Okay, so you will have
these files which are there in this particular path. We're live apt lists. Okay, so whenever you
use an app get update and if you get an error
that time then that's because of the files which are present
in this particular path. So to avoid any error
that comes in the future we are deleting whatever
is created over here. Okay and RM - RF is what is used for that
and our RUN command is what performs all of these. So I have up to four functions or four commands which need to be done, and I'm using one RUN command and using an ampersand over here, okay, an ampersand to say that I have multiple commands which need to be run. So that's the thing, and for my Apache to work I have to set my environment variables, and that's what I've done here. So it's pretty simple and it's the same as the installation process. It's just that I'm setting
Apache log directory, there are various parts where it has to be present and
that's what have a specified. Arguments over here. So these are my first arguments and these are
my second arguments and then I'm saying expose T. Which means that on port number
80 my Apache service would be hosted. Okay, but remember this is only
from within the container. Okay, if I want to access it
on my host machine then after the port mapping while starting this particular
container of mine, so in my inside my container on port number 80 Apache
will be present and finally if I want to start
this Apache service, then I have to go
to this particular path and then after Art
that apache2 service right? I'm doing the same thing
over here using my CMD. So using the CMD command
I'm saying go to this particular slash user
slash sbin slash apache2 and I'm saying execute this and
run it in foreground mode. So - D is the flag, which we have to specify and then I'm saying
foreground to basically get the UI up and running
and to get it hosted. Okay now to show
you the same demo. Let me open up
my virtual machine where I have prepared
this Docker file. So this is my VM. I hope you all can see
my VM over here. Okay, so I'll just
open up my terminal and I'll bring up
my Mozilla Firefox. Okay, so let me do an LS and then I have
my documents folder right? So let me do CD documents
and I let me do LS so I have dockerfile here. Let me do cat Docker file
and show you that I have the same code
present over here. Okay. So what I explain right now as
to how to install Apache and then use the various
commands I have Copy that same code into this
particular dockerfile of mine. Now. The first thing after do
is build a Docker image, which is going to be
my custom Docker image out of this Docker file. Okay, and the second thing after do is to run
that particular Docker image into a container. All right. So let me get started
with the first thing. So if you want to basically
build a Docker image from your dockerfile
the command is Doc a build. - t and then you have to specify
the name of your Docker image. Okay, so I'm going
to say my Apache image. Okay, and then I'll
just say the place where the docker file is present by mentioning period so
by mentioning period it means that the dockerfile
can be present in this particular directory
and based on that dockerfile. This Docker image
would be built. So let me hit
and oh and just wait for the Viennese steps
to be performed. Okay, so step one step two step. Three and then all
the different steps, which I specified in semi dockerfile are being executed
one after the other. So my first step here is nothing
but from Ubuntu which means that I'm pulling
a base Ubuntu image that is present over here. And then I'm saying
step number two make a dareka as the maintainer and then step number
three bottom installing various functionalities. Okay. So let this complete Okay, I think this part
can be forwarded and moved ahead. Okay, so I think all the steps
have been executed successfully because I've got this message
right successfully built and the idea of my Docker image. So if you intermittently
see step for executed step 5 executed step six step
seven and step eight everything. I've been executed. So my Docker image
has been built and then I can verify the Same by running
this command Docker images. Okay. Let me say pseudo Docker images and as you can see
here my Apache image with the latest
tag hasn't built. Seconds ago. All right, and this is the size of this particular
Docker image of mine. Now, let me use
this Docker image and bring up the docker
container out of this image. And the command for that
is pseudo Docker run and now up to specify the port number because inside
my Docker file and specify that the application
has to be active on port number 80 and if I
want to access that application on my host machine then after do the port mapping
Port mapping of my host port to my container border, so let me I say -
p and say 80 colon 80. So this means that port number 80 of
my container will be mapped to my port number 80 of my host. Okay. So first comes the host
then comes the container and after this I can just
simply specify the image which I want to build
right so I can say the name of the images might Apache image and I can also give a name
to my particular container. I can say - -
Game equal to app one. Okay, so I can give
enter and yes, so ignore this message. But anyways, my Apache servers
would have been installed. So let me just go here. And if you remember it was port number 80
where it was hosted right? So, let me just type
in localhost Colon 8080 and yes, it says that it works. This is the default page
for the server and the server is running but no content
has been added yet. That's because I have not. Not done anything manually, but it's just that have hosted
the same service which I got. This is my party service
which I have installed. Okay, so my service is running
now and I can verify that from a different terminal. So let me just go
here and say Docker PS, okay. So dog appears and you can see that my Apache image was
the name of the image. And then the name of the container is app1
and this has been continued rise and it's basically created
these many seconds ago right now if I want to stop running the service I can either stop
this container over here. I can run a command
to stop the container or I can simply use a control C. And with that I'm out, right this container
is not being executed anymore. So this is a shortcut
but it's not advisable. Either the command to stop
a container is Docker stop and then container ID. So I will show you how to do it
with the second demo. Okay, but let me just go here
and verify that again. So if I refresh this then the
page is not accessible anymore. That means my container
is not running and hence. The server's not working. So that's the end
of the first demo of mind which shows how to install
and how to install Apache. Okay. So let me just do
a sudo Docker PS - A over here and show you that the same container over
here with the same idea. Has exited okay and let
me clear the screen here and over here too. Okay, and then get
my second demo. So my second demo is all
about installing engines. So again to install
in my engines server, I'll follow the same steps. Okay, at first for all users are going to base
image and I'll be installing my answering service
on my Ubuntu machine. So that's why
I'm doing from Ubuntu and then I'm specifying
maintainer at Eureka and then similarly I'm using
or are running the command. Run apt-get update
run apt-get install - why and drinks and then
I'm doing add an index dot HTML. Okay. Now first of all, let me tell you
that with respect to the previous demo. I ran these two commands on the same line correct
with an ampersand here. I've just divided
into two lines. So it's just to show
you the functionality and the shortcut which are used
in the previous demo. Okay, but otherwise it's all pretty much the same, and then what is new here is the index dot HTML, because with nginx this index dot HTML is created by default. So I just created an index dot HTML file and then I put my own code in that index dot HTML file, and that is what I'm putting inside
my container over here. So if you remember the add
command basically copies, what is there in one path
to the destination path? So this is my source path,
which is my host path. And this is going to be
my container path. So index.html, which was there in my same folder, that is copied
inside my container. So there is nginx inside /usr/share, okay. So inside that there's another folder called html, and inside that folder the index.html file will be copied, and once it's copied
over there, from here I'm using an ENTRYPOINT command, so that whenever my container is running, right, so I have my Docker image, and then when I execute the docker container, then this line would be executed. So all of these will be built and the environment would be set up over here, but this command, /usr/sbin/nginx, this is the service which needs to be started, so my Docker is going to go to this particular path and then start my nginx service by giving the flag -g and daemon off. So daemon off basically helps me bring my application to the foreground. Okay, so if it's daemon on, then the application will be running in my background. Okay, but since I've specified daemon off over here, and because I've specified daemon off and brought it to my foreground, I can see the UI, and I can only see the UI if I specify the particular port number, and that is what I've done in my final line, where I EXPOSE port number 80 again. So if I have said EXPOSE port number 80, it means that my container is going to be hosted on port number 80, and I can map this to my host port in my run command.
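Putting those instructions together, a rough sketch of the Dockerfile being described would look like this; the maintainer label and the nginx paths follow the standard layout mentioned in the demo, so treat the exact values as assumptions rather than the literal file:

    # Base image and maintainer
    FROM ubuntu
    MAINTAINER edureka
    # Install nginx on top of Ubuntu
    RUN apt-get update
    RUN apt-get install -y nginx
    # Copy the custom landing page into nginx's default html directory
    ADD index.html /usr/share/nginx/html/index.html
    # Start nginx in the foreground so the container stays alive
    ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]
    # The application inside the container listens on port 80
    EXPOSE 80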
Okay, and I'm going to repeat again: when I run the command, this ENTRYPOINT line would be executed, and this is what brings up my nginx service. Okay. So let me go back to my terminal
and show you the file. But however, it's
not in the documents folder. So let me go back and go
to my downloads folder. Okay, so here I have
my index dot HTML file and then I have my Docker file
for running nginx. So first, let me do a cat Dockerfile. So it's the same set of code which I explained a few seconds back, and then let me also do cat index dot HTML, and this is basically my HTML code which will be displayed on my UI. So the title of my page is going to be Edureka's Docker Nginx tutorial, and then in h1 tags I have Hello Edureka learners, and then I have a p tag where I've specified something. So this is the back end
of HTML file and this is how any HTML file
is built, right? So I've just copied it and I've pasted it. So this is how the back-end HTML of any web page will look like, right. So I've just created that; I actually had it on my machine, and then I basically put it inside my Docker container, and then I'm going to start the nginx service. When I start the nginx service, this index dot HTML file will be picked up and be used as the default page, or the first page which comes up in my UI. Okay. So let me clear the screen and first of all execute this Dockerfile. Now, to build the Docker image out of this Dockerfile, the command is docker build -t and then I have to specify the name of the image. Let me say my nginx image. And then after this, let me specify the path where the Dockerfile is present, and I can do that by specifying dot, and let me go ahead and also specify sudo. Okay. So my first step executed,
Second Step also executed and so is my third step
and let's just wait for all the steps to be executed so that my Custom Image
is built and trust me. This is my custom
Ubuntu image, okay. Okay, so that was my build command. And if I want to execute or bring up the container out of this particular image of mine, I can use the command docker run -p, I'll specify the port number where it has to be active, and let me again say 80 colon 80, followed by the name of this particular container, I can say name is equal to app2, and then I'll just specify the image which I am going to use, so my nginx image. So this is the name of my image, right, and I'll however also specify sudo over here. So if I hit enter, my container would be active, and since
I've specified entry point. Basically, it would go
to that particular path and start my service. So let me just go
to localhost and check if that's working. Okay, I'm going to say
localhost 8080, and yes, this is my nginx service. Of course it's a little different from my Apache service, and I've customized this one to say that this is my page title, Edureka's Docker Nginx tutorial, and then I have specified Hello Edureka learners. And then this, right, this is what I also showed
you some time back in that index dot HTML file. So this time let
me show you how to stop this particular container
by not doing Ctrl+C, but by actually stopping your container in a healthy way. So let me open up the second tab over here, and over here let me say sudo docker ps. This would first of all list down the running nginx image of mine. Okay, this is the container ID, so I'm going to copy this container ID and then stop this particular container. I can say sudo docker stop and then the container ID, and my container
would have stopped by now. So if I go back
to the other tab, you can see that I've got
control back here, which means that my application
is stopped being deployed. So if I refresh
this page you can see that I do not have access
to the page anymore, right? So that's how you host any application of yours, and that's how you bring it down, with the help of your containers. Okay, in a snap of your fingers you can get anything done, and that's why Docker is really good and really useful.
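Before moving on, here is a compact recap of the two commands the nginx demo relied on, assuming the image was tagged mynginximage (a placeholder for whatever name you pass to -t):

    # Build the image from the Dockerfile in the current directory
    sudo docker build -t mynginximage .
    # Run it, mapping host port 80 to container port 80 and naming the container app2
    sudo docker run -p 80:80 --name app2 mynginximage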
The commands that you see on your screen here are the ones which are most commonly used. So if you are a DevOps engineer, or if you're just someone that's working on Docker, then you might have already used these commands, or you
might use them in your future. Some of those commands
are docker version, docker help, docker pull, docker run, docker build, docker login, docker push, docker ps, where PS stands for Docker processes, and this command is used to see what are the active containers currently, and then we have docker images. Then you have docker stop, docker kill, there is docker rm, which of course stands for docker remove, docker rmi, which stands for remove images, we have docker exec, and this is used to access the bash of any active container, and then we have docker commit, we have docker import, docker export, docker container, docker compose, docker swarm and docker service. So these are the 20
plus Docker commands, which are most commonly used now
without wasting much time. Let me get started and discuss each of
these commands: docker version. This command is used to find out what is the version of your Docker engine, okay. So remember there will be two hyphens that come in before writing version, and then we have another command which is docker, again two hyphens, and then help. Now this is basically used to list down all the possible commands that you can use with Docker. So here docker will be a parent command, and whatever child commands are possible here as permutation combinations, those would be listed down. Now let me quickly open my terminal and execute these two
commands for you. Do remember that I will be using
my Linux virtual machine. Okay, and this Linux virtual machine of mine is an Ubuntu machine, and it's hosted on my VM, like I said. So let me just open my terminal over here, and the first command that we were supposed to execute
is docker version, right? So as you can see, the version of my Docker engine is 17.05. Okay, so that's
how this command works. Now the next command that we were supposed to execute
is docker help, and that of course will also come with two hyphens. And like I told you, there are various commands that you have in Docker, like docker attach, docker build, docker commit, docker cp, docker create, docker diff. So all of these are the child commands that can be used with docker as a primary command. So I hope that was clear,
and at any point of time if you people have any doubt with respect to the usage
of any command in Docker, then you can just use the help, right? The help will basically tell you the different commands that are there, along with a description, so it also explains what each and every command does. Now, let's say you have docker build; then you can see the explanation that says build an image from a Dockerfile, right? So that's a good enough explanation. If I was a guy that's working on Docker, then I would know which option to use, right, similarly for everything. So for docker rm, it says remove one or more containers, and then for docker start, it says start one or
more stopped containers, and many more. So whenever you are free, you can use the docker help command and see what are the different commands possible, along with their explanation.
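For reference, the two inspection commands demonstrated above look like this (the version string printed will of course differ on your machine):

    # Print the installed Docker version
    docker --version
    # List every available sub-command with a one-line description
    docker --help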
Okay, so I am going to clear the screen, go back to my PPT and check what are the next set
of commands that I can execute and remember these are all still
basic Docker commands Okay. So the next command
is docker pull. Now the docker pull command is used to pull any image from the Docker Hub. Okay, and then we have
Docker images command, which of course list
down all the images in your local repository. Now in the previous command,
we do a Docker pull, right? So for the first time you
will not have any image in your local repository. You will have to pull it
from your Docker Hub, and when you pull it
from your Docker Hub, it gets stored
in your local repository. And once it is there
in your local repository, you can run the docker images
command and then check all the different images. So all the images
would be listed down. Okay, so that's
about these two commands, and then we have the docker run command.
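As a quick illustration of those two commands, pulling an image and then listing what is stored locally might look like this; ubuntu is just an example image name:

    # Download the latest ubuntu image from Docker Hub
    docker pull ubuntu
    # List all images now present in the local repository
    docker images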
to execute that image and I'm pretty sure
you are aware that whatever you download from the docker Hub our images right and if you Running instance, or if you want it to be active
then you have to run it because what you
will have to deal with they are containers, right? I'll to get containers running
then you have to basically run those images. That's the same thing
that we are doing here. The Commander's Docker run along
with the image name. So supposing I am pulling
it Ubuntu image from Docker. So I will be using this command
Docker pull Ubuntu. Okay, and if I want to execute this image and get
a running continue Rod of it then I would have to basically go here and run
the command Docker run along with that particular
image name Docker run Ubuntu. Okay guys, so I think you have a decent understanding
of these three commands. Now, let me again go
back to my terminal and execute these three commands
and show you how they work. So I'm back to my terminal here. Okay. So here let me write down
the command docker pull ubuntu. Now by running this command, I am pulling the latest Ubuntu image from Docker Hub. Okay, so hit enter. In spite of the fact that I did not specify any tag over here, such as latest, it's
pulling the latest image that is available
on the docker Hub. Okay. So let the process happen guys. Give it a minute. Perfect. So now you can see
the status here, right? It says it has downloaded
a newer image for Ubuntu, that is, with the tag latest. Now if I want to check
if this image is actually been pulled then I can run
the command Docker images now, let me run that command
by first clearing the screen and then actually running
the command Docker images when you hit enter, like I said, you have the entire list
of images available in your repository over here. The entire list
is listed down over here. So you have my custom image, you have vardhanns / custom image, right? So this is another image which I created. If you want to check if your image is the latest, then you look at the
Ubuntu here, right? So this has a tag as latest
and this was an image which was created 10 days ago in my local repository and it
is about 112 MB. Okay. This is the one
which we downloaded recently and it has the latest tag. So this is how you check
the different images that you have in your local repository, guys. Okay, so I'm going to clear
the screen, and now it's all about executing a Docker image. So for sample purposes, I can run any kind of image and get a container, right? So before I execute an Ubuntu image, let me execute a simple hello world container. So for that, I'm going to say docker run hello-world. Now remember, when we say
docker run hello-world, you might ask me a question: is hello world already present in my local repository? Well, the answer is it's
actually already present. So I have an image
of hello world in my local repository. But even if I do not have
the hello world image in my local repository
this command will run because when you do
a Docker run command, this would first of all look for this particular image
in your local repository. If the image is not present then
it will go to the docker Hub and look for an image
with this particular name and it will pull that image
with the latest tag. Okay, so run does two things. As it pulls and it executes. Okay. So let me hit enter
and there you go. It says hello
from Docker, right? So this is the hello
world container for Docker. Now the reason I did not execute
the Ubuntu images because I want to make a few
modifications to that image. But if you want to make a few
modifications to that image, then you have a different set
of commands which are involved. So let me go
through those commands and then get back to
what I was supposed to do that was about Docker run and then we have something
called as Doctor. Built. Okay, and this Docker build command is used
to build a custom image of yours supposing you
have a low bun to image. Okay, but you do not want
it exactly as it is and you want to make
a few adjustments to that. So one other example for that
would be the node image, right. In my previous sessions I have had sessions on Docker swarm, a session on Docker compose and many more, right, so over there what happens is I'm using a node.js image as my base image and then I'm building my entire application on that. So what you have here is you have a base node image, okay, and on that node image you build your entire application, be it an angular application or be it a mean stack application, and the command that you use to build the entire application is the docker build command. And as you can see, this is the syntax: we have to say docker build with the flag -t, and what the -t flag does is it basically tells you that you can give a name, with a tag, to the image which you're building, because this image is going to be your image, right, you're custom building this image. So when you custom
followed by that with a space. I have specified a DOT here now the dot specifies
that the dock of file which is needed to build. This Docker image is present
in the current directory where this command
is being executed. Now, how do I specify the entire
path of my dockerfile then? I didn't I wouldn't be
Be specifying the dot over here, right? But in that case if I'm specifying
the entire path then that means that my Docker file is present and some other location not
necessarily in the same location where the command
is being executed. Okay. I hope that was a little fear
you people so now if you're still not here, let me give you a demonstration
and then you will be able to understand this
in a better fashion. Okay. So let me open my terminal again
and currently we are in the slash home slash
iterator directory now for the demo purpose. I had created a new Docker file. Let me first open
and show you that dockerfile. Okay, and that Docker file is present in the downloads
demo folder of mine. If I do an LS,
there is a dockerfile perfect. So let me say cat dockerfile. In fact, let me open
this in G edit. So pseudo G edit dockerfile. Yes. Now the dockerfile is
the most important file if you want to build
your own custom images because whatever you want
for the application to run, those dependencies are specified in this file. We have the FROM, which is the base image that you have to first of all download from the Docker Hub and use as the base for your application, and then you have to say the other commands that you want to run. Now in this demo of mine, I'm simply downloading an Ubuntu image from the Docker Hub, and I'm just echoing this sentence: Hi, this is Vardhan from Edureka. So it's a very simple process, right? I'm pulling one Ubuntu image and I'm doing an echo on that particular image, so you can just save this, close this Dockerfile and then execute
this particular Dockerfile. Okay. And since I am in this folder, I can use the dot to specify that the Dockerfile is present in this directory.
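The whole demo Dockerfile is tiny; a sketch of what it contains, assuming the name in the echo statement is as heard in the recording:

    # Base image pulled from Docker Hub
    FROM ubuntu
    # A single command that runs while the image is being built
    RUN echo "Hi, this is Vardhan from Edureka"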
Now, let me first clear the screen and then run that command again. So the command is docker build -t; let's give the name of the image as my custom Ubuntu image, MyCustomUbuntu. Well, that's good enough, right? And then I'm going to say dot, because the Dockerfile to build this my custom Ubuntu image is present in the same path. Okay.
to should be in lowercase. Okay, no problem. So, let me just check
my dockerfile once okay. Now the reason I got this is because my image name
cannot be in caps. So what I'm going to do
is let me read on the command with a different name
possibly in small letters. My customer going to okay, perfect the command
got executed. So if you can see here, it says selling the build
context to the doctor demon and since I had specified
only two steps in my dockerfile those two steps
are being executed here Step One is it's pulling the Ubuntu image
from the docker Hub. And since it's already there
in my local repository. It's using whatever is there. Okay and step two is running
the echo statement: Hi, this is Vardhan from Edureka, right. This is the second step, and the same echo command has been executed over here: Hi, this is Vardhan from Edureka, correct? Perfect. So I hope you guys got
a good understanding of this particular command because this is
the most important command if you want to make it as a DevOps engineer, or a person that's regularly working on Docker, because for all of the images that you will be working on in your office or in your workspace, you will have to be working on custom images for your application. So remember how this command is used and how the applications are built. So let me just clear the screen
and go back to my slides and see what is the next command
in store for us. Okay. So the next command is
the docker container command, and this docker container
command is basically used to manage your containers. Now, let's say you have
a number of containers and because so many containers
are active at the same time your system may be lagging right
there's a performance issue. So at that time you
might want to close or end certain containers. Right kill their process. So at that point
of time you can use the container command and kill
the container straight away. So it's just one
of the different options that we have. So there are a number
of other commands which can be used with Docker
container as The Parent Command and I would request
you to look up the set of commands on Docker docs. Okay, but for now, let me just go back
to my terminal and execute one of these commands
and show you how they work. So let me go back
to my terminal here, and here I'm going to run that command, docker container, and let me run docker container logs. Okay, so here let me run the command docker container logs to basically find out the different logs that are associated with my container. Okay. Now the thing is, in the arguments I have to specify the container name or the container ID, and since I don't know it right now, let me first find out what is my container ID. Okay, so I'm going to do a docker ps command to list down
the different containers. Okay, if there are
no active containers, so I'm going to add the -a flag. So these two commands I
will explain in detail at a later point of time guys. Okay, but anyways getting back
to our problem here, you can see that the hello world
container got executed, right? So I want to copy
the container ID here. And now I'm going to find out what are the logs of this: docker container logs, and then I'm going to paste the container ID. This way, whatever logs were generated for this container, those would be displayed. Perfect, it worked, and the container output showed up again. So the same thing can be done
for any of the other containers. Okay. If I do a docker ps -a and see, there are so
many other containers which are there
right in my system. I can copy the container ID
of any of these and I can execute
the same thing again and again: docker container logs, right, and then I can paste this. So this time the logs of this particular container, with this entire ID, have come out. And like I said, with docker container you have various other options, correct? You have options like docker container kill, you have docker container rm and all those things, so I can use a docker container rm and hit enter. And basically when I do that, this particular container is gone. So if you remember, right, this container is the hello world container, and when I said rm, this container is removed. So if I go back
and do docker ps -a, then the first entry for the hello world container
would not be present. And yes, as you can see
it's not present, right the hello world container
is not present here. Now that's what I wanted to show you. So let me clear my screen and get back to my slides. So I basically executed the docker container logs command and the docker container rm command.
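For reference, those two sub-commands look like this, with <container-id> standing in for whatever ID docker ps -a shows you:

    # Show everything the container wrote to stdout/stderr
    docker container logs <container-id>
    # Remove a stopped container so it no longer appears in docker ps -a
    docker container rm <container-id>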
You have various other options; like I said, we have the container kill, which can be used if you want to kill any one particular container. Okay, you can use the docker container run command to start any container which has been temporarily stopped or which is inactive. Okay, and if you want to again start a container that has been stopped, you can use the docker container start command, and these are just a few of the commands; the entire list of docker container commands can be found in Docker docs. Okay, so I would request you to go to Docker docs and then see the entire list of commands if you want to learn more about this command. In the meanwhile, let me go to the next slide and continue with our session. The next command
is the docker log in command. Okay, and as simple as it
sounds this is used to log into your Docker Hub account can any of you guess why
we would need to login? Well, it's for the simple reason
that you might want to push any of your image that you
have created locally, right? So when you're working
with a team who are all using Docker, then you can just
pull the docker image or create a new Docker image
from scratch at your end and build a container. And if you want
to share that container with other people in your team, then you can upload it
to Docker Hub, right? So, how do you upload
it to Docker Hub? So if you want to upload it, you don't have any other workaround, so you do it through the terminal, and to do it through the terminal you have to first do a docker login. Once you have logged in using your Docker Hub credentials, then you can simply start pushing your Docker image to the Docker Hub. Okay, so that's why this command is really important. So let me go to my terminal
the command is dr. Log in when I hit
enter it says login with your doc ready to push
all images from Docker Hub. If you don't have a doctor ID, it says head over
to this website. So this is where you can create
a new Docker ID, okay. And the username it says
in Brackets it says what ananas that's my username
because I'm already logged in. So I'm just going to hit enter
without entering the username again and the password I
can enter is my password so that of course I'm not going
to reveal to you people. But once you enter the password and hit enter
then it says login succeeded, right if your credentials
are a match then you are successfully logged in and once you're logged in you can start pushing
your Docker images, which you worked on locally, to your Docker Hub. Okay, perfect. Right. So let's clear the screen and
get back to our slides now. Like I said, the next command is basically
to push your Docker image to your Docker Hub. Remember, the command should have your Docker ID / the image name, okay, my Ubuntu image. This may be the name of the image that you might have created locally. Okay, but if you want to push it to the Docker Hub, you have to tag it with a name, and that name should start with your Docker ID. Okay, so let me get to the terminal and show you
how this command works. So let me first look
for the image that I want to upload
to my Docker Hub. Okay. So when I hit Docker images the
list of all the images comes out, and if you remember, my custom ubuntu is the name of the image which I created. Now let me try pushing this image to the Docker Hub. Okay, so I'm going to copy this and first clear the screen, and here I need to tag this image with my Docker ID, right, because right now it has the name my custom ubuntu and I cannot upload it to Docker Hub with this name. Now, since I have to tag it with my name, there's a command called docker tag, and here you have to specify which image it is that you want to tag. So the image is my custom ubuntu, and here let me specify my Docker ID / the image name, so I'm going to say vardhanns. Okay, now that's my Docker ID, and slash my custom ubuntu image, right? So this is the name of my image. I can even change the name, but I've just retained my custom ubuntu as the name of my image. So when I hit enter, this image would be tagged with that new name. And now this image has been renamed to vardhanns / my custom ubuntu; we can verify the same by running the command docker images, and as you can see here, there is one image with the name my custom ubuntu, and then there is another image with vardhanns / my custom ubuntu, correct. Now this is what I have to upload. So now I can use the docker push command. So I'm going to say docker push and then simply specify the image that you want: docker push vardhanns / my custom ubuntu, hit enter, and the image would be getting
uploaded to Docker Hub. And once you do it from your end, after this command is executed successfully, you can go to your Docker Hub and check that your image, which you created locally, has been uploaded to Docker Hub, where it can be shared and accessed by other people. Okay? Okay, perfect. So this shows that my image has been uploaded, so let me just clear the screen and get back to my slides.
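Here is the tag-then-push sequence written out in one place; vardhanns is the Docker ID heard in the recording, so substitute your own:

    # Give the local image a name that starts with your Docker ID
    docker tag mycustomubuntu vardhanns/mycustomubuntu
    # Log in once, then upload the tagged image to Docker Hub
    docker login
    docker push vardhanns/mycustomubuntu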
Moving forward, this next command is something that I already executed some time back. Right, if you remember, I used the docker ps command to identify which are the containers which are currently active in my system, right, in my Docker engine. So that's what this does; PS
basically stands for processes. And when you hit
docker ps, then all the container processes which are currently running in your Docker engine would be listed. However, if you append this command with an -a flag, right, then all the containers which are inactive, even those containers, would be listed down. So that is the difference between these two commands, docker ps and docker ps with the flag -a.
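Side by side, the two variants are simply:

    # Only containers that are running right now
    docker ps
    # Every container, including the ones that have exited
    docker ps -a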
Now let me go to my terminal and show you that. So docker ps first. Okay, and right now there are no entries, because there are no containers which are currently active. But if you want to find out all the containers, irrespective of whether they are active or whether they are not active, then it would list down all the containers in my system, right, or in my host, and that's what it's going to do: docker ps -a. And as you can see,
there's an entire list of Docker containers over here. There is the custom image which I created, and then there are
various other images over here, which I used to build
a container, and I showed you how they work in my previous sessions. So there is the test angular, and then there is a demo app one. These were images which I used for my Docker swarm and for my Docker
compose videos respectively, so if you want to go
and see those videos, the link will be there in the description below, guys, and I would request you to go through those videos to understand other Docker concepts better. Okay, because Docker compose and Docker swarm are the advanced concepts in Docker, and that's a must-know if you want to make it as a Docker professional. The link for those videos is
in the description below. So let me just clear the screen and get back
to what I was doing. So the next command that we have is the docker stop
command. Now the docker stop command is basically used to shut down any container. So if there's any container in your Docker engine which is running, right, in your host, and if you want to stop it, then you can use this command. And do note that the container would not be shut down right away; it might take a few seconds, because it would be a
graceful shutdown waiting for the other dependencies
to shut first. Okay, it's not a force stop, it's a very gentle stop; that's what this command is. But we have something called as a docker kill command, okay, and what this docker kill command does is it ungracefully stops your container, so if there is a container that is actively running, it would straightaway kill it; it's something similar to a force kill, right? So that is the difference between these two commands, docker stop and docker kill: kill would straightaway kill your container. Now before I show a hands-on of this, let me go forward and talk
about a few more commands. There is something called as
a docker remove, right, docker rm. What this one does is it removes a container. At this point of time you have to remember that if you want to remove any container from your host, you have to first stop it, and how will you stop it? By the two commands that I explained in the previous two slides: you either force kill it using the docker kill command, or you stop it gracefully using the docker stop command, and once you've used those two commands, you can remove the container from your host. Okay, and we have another command, that is docker rmi, okay. So docker rm would remove containers, but if you want to remove images themselves from your repository, then you can use the docker rmi command. Okay guys, so these are the four different commands that we have here, which are regularly used.
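As a quick sketch, the lifecycle of those four commands in order, with placeholder IDs and names:

    # Graceful shutdown: waits for the process inside to exit cleanly
    docker stop <container-id>
    # Immediate, forceful shutdown
    docker kill <container-id>
    # Remove a stopped container
    docker rm <container-id>
    # Remove an image from the local repository
    docker rmi <image-name>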
Now let me open my terminal and show you how they work. First, let me do a docker ps, and since there are no containers which are currently active, what I'm going to do is I'm going to start a service. Okay, I want to containerize a particular service and then I will show you how to stop it, or kill it, or remove it. Okay. There is one particular
image demo app one. Okay, which I use to deliver
my previous session. There was the docker compose
session right over there. I used that particular image and I created
an angular application. So I'm going to first
start that service, and the command for that is docker run --rm; I want to say the port number is 4200:4200, because it's an angular application. Let's give it a name, it's --name, right, so let's give it a name, my angular application, or let's give it the name my demo application. Okay, and demo app one is
the name of that image. So when you hit enter first, the image would come up
right, the image would be spun and the container will come up. Let's just wait for the container
to become active. So let me first open
a new tab of this terminal. Okay, and here let me run
the command Docker PS and you can see
that 42 seconds ago this app was created, right, the demo app one. Here it says the webpack has compiled successfully. So if I go to my Firefox,
the service would be active. The angular application
would be active Okay, but if I want to temporarily
stop this container or if I want to kill
this then I can use those commands Docker stop
or I can use Docker kill. Okay, so let's use
those commands and see how they work. I'm going to say
docker stop followed by the container ID, correct, hit enter. So the container has stopped now. If I do a docker ps command, this container would not be active, okay, and over here also you can see that the webpack which was compiled, it has ended right here, and the service is not hosted anymore. So that's how the docker
and restart the same service. And over here this time instead
of using the docker. Stop command. Let me say Docker kill. Okay. Sorry. I've just used
the same container ID. Right? So I need to do
a Docker PS first. Okay. And yeah now this
is the container ID, which I have to kill so
I'm going to say dr. Kiehl pasting this culinary
and a tender and that's container
has also ended so you're in the service
has exited from here, right? So that's how you
kill a container. That's different between
the stop command and the kill command. Okay, so I'm going
to clear the screen and after these two commands that are two commands like
Docker RM and Docker RMI, right? They are used to remove
containers and Respectively, so let me go ahead
and do that first. Let's run the command Docker RM. Okay, and now we have to specify which container
you want to kill or remove. So for that purpose. Let me first find out which are
the different Docker containers. There are there in my system. So when I do a Docker PS - A there are a number
of containers and from here. Let me remove
this test angular container. Okay. This is the name of the image
and this is the container ID. So I'm going to copy
this container ID and go back here. Let me clear it, and here let me run the docker rm with the container ID, and when this returns it means that my container has been deleted successfully, and the benefit with this is I have freed up a little more space in my host, right, in my Docker engine. Now guys, similarly, we saw how to
remove a container. Okay, now let me go here. Let me do a docker images. So this is the list of the different images that are there in my repository, and if I want to remove any of these images, then I can do docker rmi. And what we have here is we have a redis image, we have an Alpine image, which I do not need. So let me copy this redis image and remove this image from my repository. So the command is docker rmi this time, because remove image is what it stands for, and I can specify the image name, or I can even specify the image ID. So the image name is good enough. So that's what's happening, right? It says untagged and deleted, perfect. Now I can clear the screen, and what I wanted to show you I've shown you already: now if I run the docker images command again, then redis would not be visible here, so you can see Alpine, but you can't see redis, correct. So that's how it works. So let me go back to my slides
and go to the next command. We spoke about stop, we spoke about kill, we spoke about docker rm, and we also spoke about docker rmi. Now the next command that is in question
is the docker exec. Command Okay. This command is used to access
any Act of container. Right? Any container that
is actively running. If you want to access the bash
of that particular container, then you can use
this exec command. Okay, and we use an -it flag over here. So you can either use -it together, or you can use hyphen i space hyphen t. Now, what i does is it basically says access your container in interactive mode; so that's the option this flag specifies, and that's why we're able to access the container. Okay, and you have to specify which container you want to access, followed by the word bash.
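A sketch of how that exec call looks, using a placeholder container ID:

    # Open an interactive bash shell inside a running container
    docker exec -it <container-id> bash
    # Type exit inside that shell to return to the host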
So let me go back to my terminal and show you how that works. So over here, let me clear the screen and do a docker ps and check which containers are actively running. None of them are running right now. So let me start a container over here. Okay, let me do a docker run; in fact, I can start one
of the containers. I started sometime back
the demo app one, right? The one I spoke about is
the angular application. Let me start
this same container. Let's wait for it
to come up perfect. Now it says webpack
compiled successfully. So now let me go to my browser and hit
localhost 4200, because my angular application is active on port number 4200, right. So this is that angular application which I was talking about. So if I go back to my terminal, you can also see that I have specified 4200 as the port which is to be used to access
that application on my host. And this is the port
number it's running on internally in my container. So I'm mapping my container port to my host port and because
of this I could access that angular application on my web browser now
getting back to our slides. We are supposed to use
the docker exec command to access this container, right? So right now I cannot access
this container from over here. Let me access this container in a new terminal. So this is the new
terminal and here if I do the same
Docker PS command, the new container is active. So from here, let me copy the container ID
and then run the command docker exec with the flag -it, followed by the container ID and then bash. Bingo. So right now I am inside this container. So all this time, this was the user, right, edureka@ubuntu; this was my host machine and this was my username. Right now I'm logged in as the root user inside the container, having this ID, because this is what I specified over here. So now we are not
in my local system. We are inside the container and what can we find
inside the container? We would basically
find dependencies libraries and the actual application code of this particular
angular application which is hosted over here, right, and you can see that all the project code would be present inside this container, correct. So let's verify if that is the case by checking if we are actually there and by doing an ls; you can see all
the different files here. We have a Docker file, which was used to
build this application. And then we have
a package dot Json file, which is the most
important file to build any angular application or any MEAN stack application, and then we have protractor dot conf dot js, which is used to test any angular applications, and then we have so many others, right? We have an src folder, we have an e2e folder, and then you have node_modules. So this is where all your
project dependencies are stored. Correct? So package.json specifies. What are the project
dependencies that you want? And this is
where it's all stored. So this is
my angular application, right? So if I go one directory back, I am in this src folder now. Okay, let me do an ls. Let me go one path back again and do an ls, and here you can see that I have other folders like bin, games, include, lib, local, sbin, share and src. Now, these are inside my container. I hope this was
enough evidence for you. I hope it was so I'm back here. And yeah, that's
how you access your container. If you want to make changes
you can make changes here. Okay, and since we are
inside the container, let's just create a new file. So let's just say touch F1. So the touch command is used
to create an empty file, right? So now if I do cat F1, of course, there is nothing
but let me do a sudo gedit. Okay, so I don't need to give a sudo, because I'm already logged in as the root user. So I'm just going to do gedit f1. Okay, so it's not letting me access this command, right? Okay guys, anyways, that's how you access the container. Okay. So let me just clear the screen, and if you want to exit the container, exit the bash, then you should use the command exit. So when you hit exit, you're back as the edureka user on your Ubuntu host system. Interesting, right? So I'm going to clear
the screen and go back to my slides and check what's next and then we have
the docker commit command. And what this Docker
commit command does is that it basically creates a new image of an edited container in the local repository. In simple words, it creates a new image of any container which you have edited, correct. So let's execute this docker commit command and see how that works. Let me go to my terminal here. Let's first run the docker ps command and check: this is the container ID. I accessed my Docker container, so some changes would have been made there, so let me create a new image of that particular Docker container. Okay, so I'm going to copy this and run the command docker commit, and then specify the container ID of your container, and followed by that you have to specify the name of your new image, so I can say vardhanns / my image, right, my angular image. So this would basically create an image of this container which is running. And bingo, perfect, it's done. So if I run
the command docker images, then there will be a new image with this name and tag. Let's verify that by going to docker images. Let's go up, and as you can see, there is vardhanns / my angular image. Perfect, this is what we wanted to verify, correct. So let me clear the screen and go back here. And yes, the webpack compiled successfully; this was the message we got earlier. So anyways, let's not worry about that. That's what the docker commit command does.
from the new terminal, let me just kill
that container service for X. So this is the ID would
have copy this and then I'm Gonna Save Docker container stop and my container
would have stopped. So here. Yes, my service has stopped
over here Bingo. So I'm going to clear the screen
and both the places. Okay now, let me get
back to my slides. So So the next command
that we're going to talk about is the docker
export command, correct? So the docker export command
is basically used to export any Docker image in your system
into a tar file, correct? So this tar file is going to be
saved in your local file system and it's not going to be
inside Docker anymore. This is another way of sharing
your Docker images, right? So one way was by uploading
it to Docker Hub. But in case you
don't want to do that, you don't want to upload
it to Docker Hub because the image is very heavy so that This is an alternative
which is used in the industry where we do a Docker export from one machine and we save
that image as a tar file and this start file is imported
inside the destination system and over there. It can be accessed again
and the container can be run. So let me show you
an example of that by first of all getting to it. Okay, so it says
docker export, right? So this is the syntax for that. Okay, you say docker export, you use the output flag with two hyphens, you can specify the name of the tar file that you want to store it with, and then you have to specify the name over here. Okay. So the name over here is actually my container, so you'll have to specify your container name. So let me go
to my virtual machine and what are the docker images
that I have available? There is vardhanns / my angular image. There is my custom ubuntu. So what I'll do is let me save this my custom ubuntu image. Okay. This is my image, and this is the image ID, so I want to copy this,
go back to this terminal and what I'll do here is
I'll say docker export, double hyphen, which is the flag, I'm going to say the output flag equals, to specify the name of my tar file, right? So my docker tar file, I can say that: mydockertarfile. And here, after that, I have to specify the container name. So docker images wouldn't do for that, so I have to do a docker ps -a. So I have a custom image here, right. So let me save this particular container. So I'm going to copy the container ID, copy this and paste it over here, which indicates that I will create a tar file of this particular container, and this tar file would be saved in this directory itself. Now since it's a heavy container, it's going to take
a few seconds, and it's done, and we can verify that by doing an ls. The name we gave was mydockertarfile, correct? And then you can see there's a mydockertarfile, which is basically a tar file. So if you go back to your documents, you can see that there's a new tar file, mydockertarfile, that is created, and you can verify the same mydockertarfile over here. This is the newly created tar file, so I can go back to my slides here and let
me just clear the screen. Okay, perfect. So going back to my slides: that's how the docker export command works, and that's the benefit.
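For reference, the export step looks roughly like this; the tar file name and the container ID are placeholders matching the demo:

    # Write the container's filesystem out as a tar archive on the host
    docker export --output=mydockertarfile.tar <container-id>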
Now in the next slide we have the docker import command. The docker import command is basically used to import any tar file. If you have any tar file which has been given to you by your fellow developer, and if you want to create a container out of that one, then you have to import it, right. So how is that possible? So this is the syntax for that: the command is docker import, and then the complete path of that tar file, okay.
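And the matching import, again with a placeholder path:

    # Turn a tar archive back into an image in the local repository
    docker import /home/edureka/Downloads/demo.tar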
So for this particular purpose, I have already created one tar file, because I wanted to create one which can be imported very soon. So I created a tar file over here, demo.tar, so it is present inside my downloads folder, correct. So let me import that file.
darker import and then I'm after specify the complete path. So it's slash home slash
a dareka /downloads. / demo Tower, let's hit enter
and this particular image has been successfully imported. You can verify that by seeing
the first few characters of the newly created image. Okay. So let's run the command Docker
images over here and you can see that just recently 23 seconds
ago New Image was created right with the same image re to 3ef and does the same
image audio over here, right? It starts with the
same sequence characters and right now it has Your name so that is how you
easily import Docker images? Okay, the first exported
and then you can import it. So let me just clear
the screens of both the tabs and now getting back to my slides mom download
my doctor import command and now comes the advanced
Docker commands Okay. So after here we saw
the docker commands which are very basic
and you know, which can be executed easily. But here comes the challenging
part Docker compose and Docker swamp, right? These are all Advanced concepts and DACA which solves a lot
of business problems. And of course the commands are
also little Advanced nature. So first, let's start
with Docker compose, you know, there are two variations to it
and the to syntax can be seen over here doctor - compose built and
Docker - compose up. So these are the two commands which work very similar
to Docker build and Docker run, right? So Docker build is basically
used to build a new image from a Docker file. Correct? Similarly. Docker compose build
is used to build your Docker compose by using
your Docker yawm Al file. Now your Yama file stands
for yet another markup language. And now in the yongle file, we can specify
which all containers we want to be active. Okay, and you have
to specify the path of the different doctor files, which will be used
to create those containers or those services and that's
what Docker compose does, right? It creates multiple services. And it's basically used
to manage all those services and the start them at one go so it would use more
than one dockerfile. Probably. If you go through
my previous video on Docker compose have explained
it in detail over there. I have used three different
Docker files and using those three doctor files. I have created three services, right and the three different
services are the angular service the express and load service
and the mongodb service. The mongodb was used as
a database the express and load was used as My back-end server and angular was used
for my front table. Okay. Now the link to this video is present
in the description below. Okay, but let me just
quickly wrap this up by saying that if you want to build that you use a Docker
compose built and if you want to start
your Docker compose and start the container service, then you can use
the docker compose up. This is very similar
to the docker run command. Okay, and that's what your Docker
compose does, right? It creates multiple doctors
services and computer Rises each of them and gets
the different containers. Has to work together so perfect. Let me go back to my terminal
and let me do that for you. So Docker PS, there is nothing. So right now we are in the home
/ it reca folder, correct. So let me do LS and there is
a folder called means that cap. So I'm going to Siri into
this particular folder and here if I do an LS, you can see that there's
a docker-compose dot yml file. So let me do a gedit docker-compose.yml file. So here you can see that I have specified the commands to create
three different Services one is the angular service others
the express service and finally my database service. Okay. I've explained all these things
in my previous video. I repeat the link for that video would be
in the description below. Okay. So let me quickly execute
this YAML file. Okay, so if I do a docker-compose build, then
this command would look for this Docker compose file
inside this directory. Okay, and then it
would you know, once this image
is built I can sit we execute that command
by using the docker compose up. Okay, so I'm just
going to replace build with up this way. My Docker compose
would also be up and running earlier. I showed you an Android
application and this time is going to be a entire. Means like application
which is going to involve everything mongodb Express
angular and node.js. So my express is up and running.
number two seven zero one seven. My expense would be active on port number 3000 and angular
as usual would be active on port number
for to double zero. So let's verify the Same
by going over here. Okay. It also says we're back up
I successfully so this time if I refresh this there's a
different application that would Come up, correct. So this is my main stack
application or important photo double zero is the front end
on port number 3000. This is my server end which simply says
full bar and then and port number two seven zero. Sorry zero one seven. There is my mongodb, right? So these are the three different
Services which are active on my waist port numbers. So going back to my terminal. I can do a Docker PS to verify that there are three
different Services of mine which are running If I want to stop each
of these Services, I can simply do
a Ctrl C from here and hopefully it stops. Yes. All three services has stopped. Let me execute the same command
and this time yeah, they're all gone. Right so the docker
PS command shows no containers active bingo. So I'm going to clear
the screen out. Okay and go back to my slides and go to the next command
and the next Advanced command that we have is the docker
swamp command Docker compose. I told you was to basically have
a multi container application. Right and Doc a swarm
is however used to manage multiple Docker engines
on various hosts, right? So usually you might be aware that your Docker engine
is hosted on one particular host and you're executing your Docker
commands over there, right? That's what we
were doing all this time. Even dr. Campos did that on the same host three different Services were
started but with Docker swamp, the benefit is that we can start those services
in Multiple machines so you will have
a Master machine which is nothing but the doctor
manager as visible from here and then you will
have different slaves or the charcoal as
worker in Docker terms. So you have a manager and work up and whatever
service you start at. The manager will be executed
across all the machines which are there
in that Docker swamp cluster. Okay. So it says right it creates
a network of Docker engines or hosts to execute
the containers in parallel and the biggest benefit
of Docker swarm is scaling up and ensuring high availability. Okay, so some of the commands which are associated with Docker swarm are these: if you want to first of all start off creating a Docker swarm, then you use this command,
docker swarm init, and you say advertise-addr, okay, and then you have 192.168.1.100. It's supposed to be two hyphens over here, okay. Yeah, so this is how the syntax is supposed to be: docker swarm init --advertise-addr, and then you have to specify the IP address of your manager machine. So if I start the swarm from this particular host of mine, then this would assume the role of my manager. Okay, and in this syntax, remember, I have to specify my own IP address so that the other workers who will be joining my network would subscribe to my IP address over here.
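A compact sketch of the swarm lifecycle commands being described, with 192.168.1.100 as the manager IP from the demo and a placeholder join token:

    # On the manager: initialise the swarm and advertise this machine's IP
    docker swarm init --advertise-addr 192.168.1.100
    # On each worker: join using the token printed by the init command
    docker swarm join --token <worker-token> 192.168.1.100:2377
    # Re-print the worker join token later, from the manager
    docker swarm join-token worker
    # Leave the swarm (a manager additionally needs --force)
    docker swarm leave --force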
So let's quickly go and execute this first command; let me show that to you. Okay, so let me open
up the terminal and the command is
docker swarm init, which stands for initialize, with the flag advertise-addr and then the IP address. So the IP address of my VM is 192.168.1.100. Okay, so when I hit enter, see what happens: it says
swarm is initialized and this is the current mode. This particular node
is now a manager. Okay? And if you want other machines to join this particular manager as workers, then they have to use this token. So we have to just copy this, go to the other machines and execute it. Supposing this is another machine of mine, okay, I'm giving you an example, so over here you would have to paste that token. Okay, so this is called the token; you just hit enter and then you will join as a worker. Okay? So that's how the docker swarm command works. Now, I cannot go into too much details with respect to how Docker swarm works, okay, because there again it
will take a lot of time and if you want to actually
learn Docker swarm, you can go and watch the other video which I delivered a couple of months back, right; that video is called Docker Swarm for High Availability. Okay, and that is a detailed video and you will enjoy that video, because with that video I have shown how Docker swarm can be used, and you will see the power of Docker in that particular video. So I would request you to go there, and the link for it is again below in the description. Okay, so I would request you to go there if you want to learn more about Docker swarm. But getting back to our slides,
we have other commands here, right. So the docker swarm join is what I already explained to you; followed by this you will have a token, so if you give that, you can join a particular swarm cluster as a worker. Okay, so if you want to regenerate that particular token, which is needed to join that particular cluster, then at the manager's end you can execute this command, docker swarm join-token, so it would generate that token and give it to
you. And similarly, if you want to leave the docker swarm cluster, then you can execute this command, docker swarm leave. Okay. So if you execute this command straight away at the worker's end, or the nodes, then it would simply leave, okay, but at the manager's end it would not leave just like that; you'd have to append the flag force. So let me show you that; let me just execute the command docker swarm leave. If it was a worker, it would leave right away, but since it's a manager, like I said, it says use the force option. So let's use that: okay, docker swarm leave with the double-flag force, and it says the node has left the swarm, perfect. Right? So this is all
about Docker swarm, guys. Okay, so let me go
back to my slides and cover the one last command for today, and that command is
the docker service command. So this command is used to control any existing Docker service, be it any container, or be it a Docker compose or Docker swarm setup, or anything else, right. So docker service is a very underutilized command, I would say, because if you want to control your different nodes when you're in a Docker swarm, then you use the docker service command: you use a docker service command to list down the different nodes that are there in your cluster, you use a docker service ps command to find out what containers are being executed in a particular node, and then if you want
to scale the number of containers supposing you
have a cluster of five machines and then you have five containers running
in each of those machines. If you want to scale
those containers to 25, that means you will
be executing five containers on each machine, right? So for that you have to use
a command Docker service scale if you want to
stop any container on any particular node, then you use a command
Docker service stop. And then if you want to find
out the different logs, then you use command
Docker servers logs Docker servers are on and so on. Right. So the docker service command
is let me repeat it's used in sync with your Dockers warm
and Docker compose primarily. So that's why these form
the advanced Docker commands. So let me go to my terminal and quickly show you
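Just to put those subcommands in one place, they look roughly like this when run on a manager (a minimal sketch; the service name web is only a placeholder, not something from this session):

    docker service ls              # list the services running in the swarm
    docker service ps web          # see which nodes are running the tasks of service "web"
    docker service scale web=25    # scale the service up to 25 replicas
    docker service logs web        # view the logs of the service
    docker service rm web          # remove the service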
So let me go to my terminal and quickly show you a glimpse of this. It's docker service: if we do an ls, you will not have anything listed, because it says this node is not a swarm manager currently. But if I start my Docker swarm and then run the same command, docker service ls, then you can see that the output is different, right? I have a few attributes here, ID, name, mode and so on, which are basically details about the different services in my cluster. But since no service is running in my cluster yet, there are no entries here. So that's how it is; that is docker service ls. If you want to find out the logs, then you can do that too with docker service logs. If you use docker service logs, you have to specify which service you want to check the logs of, and what is the task, right, so which task and which service. So it's that simple, guys; that's how docker service is used. Okay guys, and again if you want to stop or remove any service, you can use the docker service rm command. What is Docker Compose? So the definition says Docker Compose is used to run
multi-container applications. Multi-container, right? Well, the thing is, you usually use one container to host one service; that's what we discussed all this time. Now let's take the case of a massive application. It has multiple services, and in fact multiple web servers, which need to be placed separately on a particular server or on a particular VM, because hosting two or three such servers on the same machine might cause an overhead. So at that time what we usually do is bring up a new VM and host it there, or we bring up a new server altogether. For example, if you want to monitor your application, then you might probably use Nagios, and Nagios is something you may have to host separately on a different machine, and similarly you will have various other servers like Jenkins and many other web services. So at that time, instead of having a different machine or a different VM for each of them, we can simply have a different container. Okay, you can have these multiple services hosted in these multiple containers. Each container will be hosting one service, and then these containers would be run such that they can interact with one another, exactly how it works in the case of servers or VMs. It's exactly the same, but it's just that it's going to be one very simple command, docker-compose up. So it will run all the three containers at the same time, it will host all of these services, and it will get them to interact with one another. That's the benefit with Docker Compose, and that's the whole point of today's session, right? I want to show you how awesome Docker Compose is. So yeah, moving on to
what I'm actually going to show you in today's session: I'm going to show you how to set up a MEAN stack application, like I mentioned earlier. First of all, MEAN stands for four different things. The M stands for MongoDB, which is the database, E stands for Express, A stands for Angular, and N stands for Node.js. Together, this is a full stack application. Since we are using a combination of these four technologies, we call it a MEAN stack application; that's what the acronym stands for. So this full stack application is again a web service, such that you have a front-end client, you have a back-end server, and then you have a database. Whenever your clients or your customers interact with your web application, they would be interacting with the front-end client. Whatever actions they perform or whatever requests they make, those go from the client to the server, and the server does the necessary work. Sometimes it needs to fetch data from the database; in that case it fetches the data and provides a response, and sometimes it has other functions to perform. So the actual functionality is done by the server, the displaying part is done by the client, and the actual data is stored inside the database. That's how the full stack application works: it's a combination of these three services, the front-end client, the back-end server and the database, and that's what I'm going to use, right? So if I want to have
these three services, then I would have to create three different containers, right? So I have container number one, which I can use for MongoDB, which would be my database; I have container number two, which I can use for my back-end server, where I'm going to use Express and Node.js in combination; and the third service that I'm going to use is my front-end client, for which I'm going to use Angular. Now, I'll be hosting these three services inside these three containers, and each of these three containers would be built from their respective Dockerfiles. As you can see, there is Dockerfile one, Dockerfile two and Dockerfile three. In the same way that I explained in the previous slide, a Dockerfile builds the image first and then that is spun into a container; the same process follows here also. It's just that each of these containers would be built separately from its own Dockerfile, and each of these Dockerfiles would be called, you know, one after the other with the help of our Docker Compose file. So "compose" is the key term that you need to note here. The Compose file is a YAML file basically, okay, yet another markup language, and in the Compose file you specify the location where each Dockerfile is present, and then you also specify the port numbers that each container needs to use to interact with the other containers. At times, if you have a database in place, you might also have to specify the link to the database container so that the database gets connected; for that purpose you do that. So that's how Docker Compose works, and that's the overview I've given you, right: three containers built from three Dockerfiles, which would be called by the Docker Compose file, which is a YAML file, and there you go, you will have a web application hosted that's up and running. All right, MEAN is nothing
but a full stack application that involves the combination of these four technologies: Angular, Node.js, Express and MongoDB. So the three services that my MEAN stack application involves are primarily the front-end client, the back-end web server and the database. This is the same thing that I explained a couple of minutes earlier, but since you have a pictorial representation, I hope you can relate to this better, right? My front-end client is going to be Angular, the back-end server would be Node.js and Express, and the database is going to be MongoDB. Okay, so you guys shouldn't have any problems now. These three services would be hosted separately in three different containers, and that would all be brought up from my Docker Compose file. So that's what I'm going to do now. Now, let's see how to containerize
and deploy a MEAN app by using Docker Compose. Great, so first of all, let me open my virtual machine. I have my terminal here and now I want to show you my project. So mean-stack-app is the folder where I have my project present. As you can see, there is one angular-app folder, which basically contains all the code for my client, the front end. This is the back-end server folder where all my server code is present, and this is the Compose file which I have written; this Docker Compose file is what is going to do all the work for us. One thing you might notice here is that I don't have a Dockerfile for my MongoDB, right? I mentioned earlier that I would be using a Dockerfile for creating each container, but in this case I don't really need to do that. A Dockerfile is a slightly involved procedure, but for my database I don't need to build something from scratch and I don't need something customized, so I can simply use an existing MongoDB image which is there on Docker Hub. I can use that and link it with my back-end server. That's why I don't have a Dockerfile for it; instead I have directly called that MongoDB image over here. Okay. So this is the YAML file,
and if you guys are watching this video at a later point of time, you can also relate to it and understand what I have specified here, because I have mentioned in the comments what each and every line does. So it'd be helpful for you people to come back and have a look at this later if you are having any problem. But in the meanwhile, let me explain each line here. In the first line we are saying the Compose file version to be used is 3.0. You have to install Docker Compose separately, and the Docker Engine will anyway be there, right, and you have to download a version of Docker Compose which matches your Docker Engine version, because certain versions of Compose are not compatible with certain versions of the engine. So you just have to look up the right version; I am using version 3.0 of the Compose format and I have version 1.16 of my Docker Engine, so just make a note of that. Specifying the version is going to be our first line, and after that you simply specify the different services that you want to run. The keyword for that is services, you give a colon, and you specify the three different container names. Each of these containers will contain the actual services. So in my case, angular is going to be the name of my container number one, and here I'm saying build this container from the Dockerfile that's present in this particular folder, this particular path. Similarly, express is the name of the second container, and I'm asking it to build this container from the Dockerfile that's present in this particular path, express-server. Then for my MongoDB image, I'm creating the container with the name database, and I'm not giving a Dockerfile here; I'm just saying pull the image from Docker Hub. So it would use the mongo image with the latest tag and use that. Okay, so let me just quickly go
to my folder and show you where the Dockerfiles are present. So this is the angular app; since my Compose file is here, relative to my Compose file this is the path where my Dockerfile is present, right? So this is that Dockerfile; let me just open it and keep it here. Similarly, if I go back, there's the express-server folder, right? This is where my server-side code is present, and my Dockerfile for that is present over here. Now, coming back to my YAML file: after specifying the path of each of these Dockerfiles, I'm specifying the port numbers on which they should be running, in other words how the port mapping happens. Whatever application you are hosting inside the Docker container will be hosted on one particular port number of your container, and you have to map it to one of the port numbers of your localhost machine; if you want to interact with it from the web browser, then you have to map it to a particular port number. So here I have said this is going to be the local machine port number on which it would be visible, and this is the port number inside Docker where the application is going to be running. Similarly, for express it's 3000:3000, and for MongoDB it's 27017:27017. Each of these port numbers is the default for these applications, so I haven't done anything fancy over here. Now that I've explained the YAML file in a decent fashion, there's one more thing left, and that is links. This line, if you see, is linking my server side to my database. Since I have a database from which the server needs to fetch data on a regular basis, we have to give the keyword links with a colon and then specify the container name. My third container is the MongoDB container, it's going to have the container name database, and I'm linking that over here. All right, so it's pretty simple.
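Just so you can picture it while I talk through it, the Compose file I'm describing is along these lines (a minimal sketch reconstructed from what I just said; the folder names angular-app and express-server are from my project, so adjust the build paths to match yours):

    version: "3.0"
    services:
      angular:                    # front-end client container
        build: ./angular-app      # folder containing the front-end Dockerfile
        ports:
          - "4200:4200"           # host port : container port
      express:                    # back-end server container
        build: ./express-server   # folder containing the server Dockerfile
        ports:
          - "3000:3000"
        links:
          - database              # link the server to the MongoDB container
      database:                   # no Dockerfile; pull the image from Docker Hub
        image: mongo:latest
        ports:
          - "27017:27017"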
And now that I've explained each part of this Compose file, it's time I explain the Dockerfiles. So this is the first Dockerfile, which I created for my front end, and it's very similar to the Dockerfile that I used in my previous session. If any of you remember that session, you might recognize that, first of all, I'm using a FROM command to pull the node image with the version 6 tag; I'm doing this from Docker Hub. Inside this image I'm creating a new directory; mkdir is the Linux command that you use, and I'm doing that with the -p flag so that the entire path, including the parent directories, gets created: /usr/src and so on. So I'm creating this folder inside my Docker image, and I'm changing the working directory to the newly created path, the project path. Now, what we need to do is copy all your dependencies, all your project code and all these things, right? That's what I mentioned to you when I was delivering the slides: all your project code, your application code, the dependencies and libraries will all be packaged together. So that's what we are doing here. First thing: copy the package.json file to the project path. Let me just show you the package.json file. So this is the package.json file; first of all I'm copying this one inside my Docker image, and that's because this file is the most important file, the one which has details about which versions of dependencies are needed for your code. My Angular code is present over here, in the src/app path, so whatever dependency versions I would need have to be mentioned in the package.json file. So I'm copying this file into my image first, and after I copy it, I'm running an npm cache clean command. npm stands for node package manager; it manages your application's packages, and with cache clean you're just removing the cache. It's not a very important command, but the important command is npm install. When you give the npm install command, what this does is first of all look for the package.json file; npm, which is the node package manager, would look for the package.json file, and whatever versions of dependencies are mentioned inside it, those would be downloaded. They will be present inside a new folder called node_modules, which gets created and which should sit inside this particular path inside your Docker image. After downloading node_modules, all the actual dependencies along with the actual project code are copied by giving the COPY instruction with a dot. When I say dot, whatever is present in that particular path on the host machine gets copied to this path in my Docker container. Then I'm simply doing an EXPOSE 4200, indicating the fact that I want this container to have an open port at 4200. The same 4200 is what I'm using over here, since 4200 is where Angular is hosted, and I'm mapping that to port 4200 of my host machine; this I do in the YAML file. Anyway, once I specify the port number on which it's running, I can simply give an npm start command. When you run npm start, your node package manager will straight away look for your code. Your code would be present inside the src folder, so it would look for everything there, and whatever is present there, it would start executing. And of course the dependencies would be present inside the same image, so your application will be successfully hosted in that way.
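Putting those instructions together, the front-end Dockerfile I just walked through looks roughly like this (a sketch; the exact project path /usr/src/app is my assumption based on the description above):

    FROM node:6                      # base image pulled from Docker Hub
    RUN mkdir -p /usr/src/app        # create the project path, parent folders included
    WORKDIR /usr/src/app             # make it the working directory
    COPY package.json /usr/src/app   # copy the dependency manifest first
    RUN npm cache clean              # clear the npm cache (optional step)
    RUN npm install                  # download the dependencies into node_modules
    COPY . /usr/src/app              # copy the rest of the project code
    EXPOSE 4200                      # the Angular dev server listens on 4200
    CMD ["npm", "start"]             # start the application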
Okay, and similarly, going to the second Dockerfile: this is the one creating the server side. If you notice, there's not much of a difference; almost every step is the same except for the port number I'm exposing here. I'm saying my server would be hosted at port number 3000 of my Docker container, and this again I'm mapping in the YAML file to the host machine port number 3000. That's the only difference between the two Dockerfiles. And now that I have also mentioned in my YAML file where these files are present, I can simply execute this Docker Compose file to have my servers hosted. So the command for that is, let me check where I am right now, okay, home/edureka, I'm going to cd into the folder, and here we have the same set of files and folders, right? So this is the file that we want to execute, and the command to execute the Docker Compose file is docker-compose, space, up. It's a very simple command, and your Compose file would basically be executed. As you can see, it's starting the angular container, the database container and the server-side container. Great. So guys, this is going to take a minute. Okay, the Angular development server is listening on localhost 4200. Great, this is the indication from my client side; it says webpack compiled successfully. That's great, my web services are hosted. Now, what I can do is open
my web browser here, go to those particular port numbers, and see if my services are up and running. If you remember, my client is hosted at port number 4200, right? So let's hit enter. I can either give localhost or I can give 0.0.0.0; either of those will do, and as you can see my Angular client is up and ready, and my app is simply a form page. I can add details here, my first name, last name, phone number and so on, and just click on add; those details would go to my database. So it's a very simple application which we have created. Similarly, you can verify if the server is running by going to the port number where it was hosted, and you can recall this is the port number it was hosted on, 3000. Great, so that was the confirmation I needed, and localhost 27017 is where MongoDB is hosted. For the MongoDB container we get a message like this: it looks like you are trying to access MongoDB over HTTP on the native driver port, and if you get this message, it means your containers are up and running, very good. All right, so we can just test the web application by entering some details. For the first name, let me give my own name, for the last name I'll enter something as well, and for the phone number I can just give a random number here. Okay, and if I say add, this data would go into my database, that is, my MongoDB container. All right, great. It shows that already I had these details present; so I had this one record, and now this record has been added. Now let's verify that by making an API call. Now, I have only explained
the client aspect; the server-side aspect is something that I didn't explain, right? But you guys should know by now that the server takes the request from the client and queries based on that. If there is a request to access the database, then it fetches from the database and responds to the client. So let's do that. This of course is the UI that I created, which shows my database, but anyway, to verify that the same thing has gone into my database, we can do that by going to the server here and going to this particular URL; there's a /api/contacts path, right? This is basically an API call that my server is serving, and at this URL, /api/contacts, you can view what data is present inside my container. It says that the first record that is present is this one; it has an ID, which was generated automatically, and of course the first name, last name and phone number that were given, and this was the record that I created. And as you can see, this one is present too. So if you want to play around a little bit, you can do that, and let me just do that by deleting one of these records. I'm going to delete the first record. Now if I just go back and refresh this page, you can see that the first record is gone. So we only have the other record, with its name and phone number, and that's because we deleted the first one from the database itself. I made the call from my client: I hit the delete button from my client, that request went to my server first, and the server in turn went to the database and deleted that particular record. And since I did a /api/contacts, as I refresh this, it returns whatever is present inside the database; that's what is visible here, right? So currently this is the only record that is present in my MongoDB database, and that's what's visible here. Similarly, beyond these two functionalities I showed you, there are a couple more things we can do with this app: you can retrieve one particular record if you want to, you can do all these things. And this is just my application; you can come up with your own customizations, you can build your application in your own way, correct? Of course, I cannot go into depth and teach you in detail what the different parts of this application are, but instead I can point you to one of our videos, a recorded video which has a complete tutorial on how to create a MEAN stack application. So let me give you the link of that video in some time. But before that, I just want to quickly show you that this was
the express server page. Again, we had the package.json file here, and the app.js, which is basically the entry point into my server. In this app.js file we have details about what APIs are there and what function calls can be made, and whenever a particular function call is made by the client, it is routed to this route.js file inside the routes folder. So the definition of those functionalities is present there; whatever actions need to be performed when you click on something will be specified there. That's how the server communicates with the database and back, right? So that's how it works. And yeah, that's the explanation of both parts of the MEAN stack, the complete explanation of the Angular app and the Express server. What is Docker Swarm? So, Docker Swarm is
a technique to create and maintain a cluster of Docker engines. Now, what I mean when I say a cluster of Docker engines is that there will be many Docker engines connected to each other, forming a network. This network of Docker engines is what is called a Docker Swarm cluster, and as you can see from the image over here, this is the architecture of a Docker Swarm cluster. There will always be one Docker manager; in fact, it is the Docker manager which basically initializes the whole swarm, and along with the manager there will be many other nodes on which the services are executed. There will be times when a service will also be executing at the manager's end, but basically the manager's primary role is to make sure that the services or the applications are running perfectly on the Docker nodes. Whatever applications or services are specified or requested, they will be divided and executed on the different Docker nodes. This approach is called load balancing, right, the load is balanced between all the nodes. So that's what happens with Docker Swarm, and that's the role of a Docker manager. Now, let's go and see what the features of Docker Swarm are, why it's really important and why it's, you know, the go-to standard in the industry. That's because with Docker Swarm there is high availability of these services. It's so much so that there can be hundred percent high availability all the time, right? That's what high availability means. So how is that possible? That's possible because at any point of time, even if one node goes down, the services which were running inside that node can be brought back: the manager will make sure that those services are started on other nodes, right? So the service is not hampered even though the node may be down; the load is balanced between the other nodes which are active in the swarm. That's what a Docker manager does, and that's why the Docker manager is the heart of the Docker Swarm cluster. Okay, that's one feature. The other feature is
auto load balancing. Now again, auto load balancing is something that is related to high availability itself, where at any point of time, if there is any downtime, the manager will make sure that those services are not stopped; they are continued and executed on other nodes. That's what the manager does, but along with that, load balancing also comes into the picture when you want to scale up your services. Supposing you have, say, three applications and you have bought three nodes for that, right? Including the manager you will have four nodes, because the manager is also technically a node. So you have a manager node, and then you have three worker nodes. In this case, the three services which you deploy will be running on three different nodes. And if you want to scale them at a later point of time, let's say you want to scale them up to 10 services, then at that time the concept of auto load balancing would still come into the picture, wherein the ten services would be divided between the nodes. It would be such that you will probably have three services running on one node, three more services running on the second node, and the remaining three services on the third node. And the one service that is left out would, you know, sometimes be run on the manager, or it would be load balanced onto some other node. The best part with Docker Swarm is that you don't need to do any of this load balancing yourself; it's all done on its own, right? There's an internal DNS server which the Docker manager manages, and the DNS server makes sure that all the nodes are connected in the cluster, and whenever any load is coming in, it balances the traffic between the different nodes. So that's one big advantage with auto load balancing. And another feature is that
of decentralized access. When we say decentralized access, it means that we can access these managers or these nodes from anywhere. If you have these managers or these nodes hosted on any server, then you can simply SSH into that particular server and get access to that particular manager or node. If you access the manager, then you can control what services are being deployed to which nodes. But if you log in or SSH into a server which is a node, then you can control or see which services are running inside that node itself; however, you can't control the other nodes from inside a node, only the manager node can do that for you. But anyway, all that we need is to log into or SSH into a Docker manager and, you know, control which services are running, right? That's all we need, and that can happen this way. And of course, it's very easy
to scale up deployments. I also spoke about that earlier: let's say you already have a certain number of services, and you suddenly want to scale up to 50 or say a hundred services; then what you can do is just buy a few more servers and deploy those hundred services onto those servers, right? It's a very simple functionality, where you can do it with just one single command; one single command is all it takes to scale up your number of services or applications to the desired amount. And you will have multiple services running inside the same Docker node, so each node can probably have 10 or 15 services running, and it basically depends on the number of nodes you have. But ideally you shouldn't overdo that; you cannot have too many services running inside the same node, because that causes performance issues, right? So all those things you can do. And finally, there is this concept of rolling updates, and rolling updates are by far the most catchy feature. When we say rolling updates, what we mean is that these applications or services which are running will have to be updated at one point of time or the other; down the line you will have to update them. Now, you cannot, you know, update them manually on every single machine, right? If you don't have Docker, if you have hosted your web servers on either virtual machines or on actual web servers, then at that time you would have to go to each and every system and probably update it everywhere, or you might have to use other configuration management tools. But with the help of Docker you don't have all those problems; you can simply use the rolling updates functionality for that, and you can specify a delay. With that delay, it updates each service which is deployed on every node, one after the other, with a gap of the specified amount of time. So even when one service is getting updated, the other services are not down, and because of that there is high availability: since the other services are still up and running, there is no downtime caused, so you can be sure of that. And rolling updates are very simple also; again, it's just one command and you're all done.
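To give you a feel for how simple that one command is, a rolling update on a swarm service looks roughly like this (a sketch; the service name web and the image tag are placeholders, not something from this demo):

    docker service update \
      --image mywebapp:2.0 \       # the new image version to roll out
      --update-delay 10s \         # wait 10 seconds between batches of tasks
      --update-parallelism 1 \     # update one task at a time
      web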
These are the benefits of Docker Swarm, and these are the reasons why you should implement Docker Swarm in your organization if you have massive web applications or web servers deployed over multiple servers. So that's the big benefit with Docker Swarm, right? So, moving on to the next slide. Okay, now it's time for the demo. Now, let's see how to achieve high availability with Docker Swarm. But before I get started
with the hands-on part, where I would be showing you on my virtual machines, I want to first go through what I want to show you with respect to high availability and how to achieve it with Docker Swarm. So, first of all, in terms of high availability, the ideal definition is that you have the application or the services deployed on each of the web servers, okay? Now look at this architecture, where I have two nodes and I have one manager, and I have the Docker engine running on each of these nodes, and each of these is highly available. At this point of time I don't have any problem with respect to any services, and my application is deployed on each of these servers, each of these nodes. So at this point of time, if I try to access this port number in my browser, I can see my application running. Now, this is the application on which I will be showing my demonstration, and this is also the application which I executed a couple of sessions back; the link to that demo I will share at the end of the session, but don't worry about that, because this session is all about Docker Swarm. So, getting back: what I was saying is, since these are hosted on each of these servers, I can access the application that I have deployed on each of these machines. But look at this scenario, where my service is hosted on only one particular node this time. I have the other servers connected to my cluster; this is my swarm cluster, where it's all connected, but the application is not hosted on these two nodes. So at this time, can you guess what happens? Does anybody think that the application will not be accessible on these machines? Well, if you think like that, then you are wrong, because since they are connected in a cluster, whatever is hosted on one particular node can also be accessed from the other nodes. So in spite of the fact that these servers do not have the application running, the port on which this application is hosted will be internally exposed to all the nodes inside this cluster, and since the port number on which it's running over here, that is 4200, is exposed to the cluster, then on all the other nodes in the cluster, on the same port number 4200, the application would be accessible, right? Same thing with even this particular node: on 4200 you can access this Angular application. This is the second scenario of high availability, but that is just a scenario where you don't have your application on some nodes. This is the third scenario, where high availability is actually being implemented: now you have a scenario where you have your three nodes and one of your nodes goes down. So this time, forget about the fact that the application is not hosted there; think about the scenario where your node is not accessible at all. It's down for some reason, some natural calamity. At that point of time, do you think you can't access the application? You can. That's because, again, the nodes are connected inside the Docker Swarm cluster, and the port number is exposed. Because of this, you would still be able to view the Angular application on these servers. That's the benefit of having a Docker Swarm cluster. All right, so this is how the high availability factor is achieved with the help of Docker Swarm, and this is what I'm going to show you in my hands-on part. But before I go to that part, let me just quickly run through these Docker
extensively in my demo and they're also the most
common swamp commands that you need to use when you're starting
with your Dockers warm cluster. Okay. So first of all to initialize
the swamp you use this command you say Docker swarm and in it, you use double flag and
say advertised Adder. Okay followed the followed by
that you specify the IP address of the manager machine or that same machine
where you are. Starting this service. Okay. So the when you
when you do this, whatever IP address is specified
here that particular machine would be acting as a manager. Okay. It is also ideally
the same machine on which this command
is running right the IP address of the of what you specify your
it should be the same machine. So that's the thing
and whenever you you should this command
this swarm would be initiated along with the manager being
this particular machine. Gene which has this IP address. Okay. That's what happens
when you do a initializing the when you initialize
the Swarm and of course when you initialize the Swarm
you will get a token. It's more like a the
key Enter key using which your other workers
can join your Docker cluster. Okay, but getting back
to our Dockers warm, once you finish lies your swamp you can list
the different services that are running
inside that swamp Okay, you can list the different you
can list the different. Nodes that are running
you can check which all nodes are connected
to your swamp cluster. You can check what all tasks
of services are running. You can check you can create
a new service new service as in a new container
and then you can also remove that a new container and you can scale them
up using these commands Okay. So use the docker service LS
to list down the services that are running then if you want to drill down
on one particular service and check in which node one particular service
is running then you can the docker service
PS command Okay, so it lists down that process when you shoot with the name
of the service that you want to check for and then if you want to create
a new service, then you use this command
of Docker service create then you specify the name
of the servers in fact and you got to specify the image with you want to which you want
to use to build that particular container and service? And finally to remove a service
you use this Docker service rm4 by the name
of that particular service. And finally if you want
to scale your services, then you can use this command
Docker service scale and then you can just specify the name of the service
and you can specify the number that you want to scale it up to. Okay. So in this case if I had the servers
which are which was which had two replicas then
by simply specifying is equal to 5 I can out of scale
it up to five different. I was right. So those are these are the swamp commands which Which
are applicable from the manager and and now going
back to the nodes: if you want to list down all the nodes that are there in your swarm, then you can use docker node ls. Do note that up to here it was all about the different services, and those commands cannot be run on the Docker nodes; they can only be run on the Docker managers. So here you have docker node ls, which lists down all the managers and the nodes. Then if you do a docker node ps followed by the node you want to inspect, it basically lists down all the containers or services that are running inside that machine. The docker node ls command, though, can only be run on the manager. And finally, if you want to remove a particular node from your cluster, then you can run the command docker node rm followed by the ID of that particular node. At times you might not be able to do that, because the node might still be connected to the cluster. In that case, what you have to do is use the docker swarm leave command. If you run this command from the nodes, then the nodes would leave the cluster, and then you can just end your cluster, right? And finally, you can run docker swarm leave from the manager and then you can end the whole cluster itself, so even the manager would leave. The manager would ideally be the last instance to leave, right? When there are nodes present, you cannot have the manager leave with the nodes still in the cluster. At times you would be given an error saying that you cannot leave the cluster because you're a manager; at that time you can use the --force flag. So at that point, as a manager, you can leave the cluster, and your cluster session ends there, right? So these are the top commands in question.
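For reference, the node-level commands I just described look roughly like this (a minimal sketch; worker-1 and the node ID are placeholders from my setup):

    docker node ls                    # run on the manager: list all managers and workers
    docker node ps worker-1           # list the tasks running on a given node
    docker node rm <node-id>          # remove a node that has already left the swarm
    docker swarm leave                # run on a worker to leave the cluster
    docker swarm leave --force        # run on the manager to leave and end the swarm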
So yeah, I think it's time for me to go to my hands-on session, where I'll open up
my virtual machines. For the demo purpose I have got three different VMs, and inside these VMs I have three different Docker engines. I will basically be using two of the Docker engines as my nodes, and I would be using one of them as my manager. So this is my manager one, as you can see over here, and I'll just enter the password. Okay, so this is the manager one, from which I'm going to start the whole swarm and the services. And if I go here, this is my worker one, as you can see over here. All right, and this is worker two. Now, on these two nodes I would be executing my applications or services. So first of all, if you want to create the swarm, then you have to run the command docker swarm init --advertise-addr followed by the IP address. The IP address of this manager machine is 192.168.1.100. Okay, so great, my swarm is initialized, and as it says, if you want to add a worker to this swarm, then you have to run this token. Now, this is the token; let me copy this token and run it
at the nodes' end. So I'm going to go to worker one and paste this token, and when I hit enter it says this node has joined the swarm as a worker. Now let me verify that: if I go back here and issue the command docker node ls, then it says that I have one manager, which is myself; "myself" is indicated by this asterisk referring to this own system, which is also the leader, so it says manager status leader, correct. The status and availability show it is active, and since I just added the worker node, it says even this one is available. Now let me go to the third VM and enter the token here, and it says even this node has joined as a worker. If I go back to the manager and run the same docker node ls command, you can see that worker two has also come in now; that's because I issued the join command at my node's end. So I'm going to clear the screen, and now we can start creating our services. First of all, if you want to create a service,
the command is docker service create followed by the --name flag. So you specify the name of the service that you want to give; let's say I want to call it angular-app, I will say this, and followed by this we should specify the name of the image. The image name is demo-app-1. Along with this, I also want to specify the port number on which I want to do the binding, because the Angular application is being hosted on one particular port number inside my container, and that has to be mapped to my browser port number, right, if I want to access it on my web browser, that is, this Firefox. For that reason I will use the -p flag, and I'm going to say port 4200 of the browser should be mapped to port 4200 of my container. So this is the command.
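For reference, the command I'm building up here is along these lines (a sketch; angular-app and demo-app-1 are how I'm reading the names from my setup, and yours will differ):

    docker service create \
      --name angular-app \     # name of the swarm service
      -p 4200:4200 \           # publish container port 4200 on port 4200 of the host
      demo-app-1               # the image to run the service from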
Okay, now this command simply creates one instance of this service, angular-app, which will be built from this image, demo-app-1, and it would expose port number 4200 from the container onto port number 4200 for my browser. So let me hit enter and let's see what happens. We've got to give it a few seconds because it's a big
application, right? Yeah. So now let's do a docker service ls. Okay, you can see that one angular application is created. Now, this is just a warning, you can ignore it; such warnings don't matter because this is the confirmation that your service has been started. What you need to look for is this service ID: if you get this ID, it means that your service has been created. As you can see here, the mode is replicated and there is just one single instance; the same thing you can see where it says replicas is one, along with the name which I specified, the image that it used, and the port number where it's active right now. Now, let me do a docker ps command from the manager and check if this application is running inside this node. Yes, it says that this application is running over here. Parallelly, let me go to my worker one; this is also connected to the same cluster. I'm going to do a docker ps over here, and you can see that I have got no output. This means that there is no container started inside this node; this is worker one. Similarly, let me go to worker two and say docker ps; again there is no output, it says no container started. Now, if I go back to the manager, I can verify in which node this application is started, and the command for that is docker service ps followed by the name of the application, that is angular-app. When I hit enter, you can see the name of the application, this is the ID, the image that was used, and the node where it's running. So it's hosted on manager one, in my system itself, my primary system. The desired state says running and the current state is running, about a minute ago. Okay. Now let me go to my browser and access localhost 4200. As you can see, this is the Angular application which I've hosted. I've explained what this application is about in one of my previous sessions; I would request you to go to that video and get more details about this application. I'm going to just quickly get back to my session here with respect to Swarm. Since I have started my application, I can access it over here. Now, as I explained earlier, all the nodes in your cluster can see the application that you've started, right? Let's verify that by going to the other nodes. So in spite of the container not being hosted on this particular node, I can get the same Angular application over here, because the port number is exposed internally between the different nodes in that cluster. Same thing with my Docker worker two, right?
my Docker worker to right. So if I do well,
I've already done a Docker PS you can see
there is no container here. So let me just quickly go here and do a local host
for double zero. Yeah, so you can see
the application is hosted. Even on this particular node. Now this is good news. So this means that your application
is successfully hosted on the cluster and it's accessible
on all the nodes right now. I'm gonna do a Docker node LS. And yeah, we have
three different nodes. It's the application
executed over here if you want to verify
that you can also do this. docker service LS, okay,
it just one application. And if I do PS with the angular name
with the image name, it says it's running here. Great. This is one this is
one of the scenarios which I want to show you. Okay, but I want to show
you another scenario where the application
can be hosted on multiple nodes at the same time
from the manager. Okay, and the command is not going to be
too lengthy also. Okay. So last time what I did I basically executed
the container at Right, so that was executed
only over here. Let me but before I go
to the next scenario, let me remove this service. Okay, so the command to remove
the services Docker service? remove Angular app so
when you get this output, it basically means
our application has stopped the deployment has been removed. So if I try and refresh
this folder will report it says it's unable
to find anything there and similarly you
won't be able to find it on any of the other nodes also because the cluster
itself does not have access to this particular
angular application right now, but now let me go back to what I was talking
about the second scenario where I can start
the same service. On all the three nodes, okay. The same dock ourselves, which I created
I'm going to issue that with a slight modification. So after my port options
after this flag, I'm going to use
this flag of mode. Okay, I'm going to say
mode is global now with the help of this flag. The application which I
am deploying which I am, you know hosting this
will be basically deployed onto all of the all
the three nodes of mine. Okay, we can so I can show you that by first hitting enter and
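So the modified command is roughly this (same naming caveat as before):

    docker service create \
      --name angular-app \
      -p 4200:4200 \
      --mode global \          # run one task of this service on every node in the swarm
      demo-app-1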
I can show you that by first hitting enter; let's see what status comes up. So it did take a few seconds, because it's being deployed to multiple nodes, right, that's the only thing. So yeah, again the service has been created, and this is the service ID. Now let's do a docker ps and check: there's one instance of this application running on this same manager, like before it's running over here. And let me verify if it's running over here this time by running the same docker ps command. Yes, as you can see, some seconds ago this application was created, and similarly if I go to the third VM and run the same docker ps command, it's up over here also. This means that the application this time was deployed to all the three nodes parallelly. We can also verify that by going to the 4200 ports of each of these machines. So this time the app is back up, right, it's running again. Same thing here: I can refresh this and I will see the application coming up, it's just connecting. And similarly over here also I will have the same success scenario. It's connected, great. This is the movie rating system that has been deployed across all the three VMs.
Now, let me verify this and show you the way you can confirm it, by running the docker service commands. First you do the docker service ls command. With this you can see the mode is global, and it says replicas 3 of 3; that's because there are three different nodes connected, and since it's deployed to all three it says replicas 3 of 3, correct. The only difference from last time, when it was one out of one, is that this time it's three of three. To drill down further into the details, as to whether it's running on each of these nodes, we can use the command docker service ps followed by the application name, that is angular-app. When I hit enter, as you can see, it says there's one instance running on worker one, one instance running on manager one, and the third instance running on worker two, great. So this is the real fun part with Docker, right, with one single command you can do all these things. So we just verified this right now. Now comes the concept of high availability: if any of my nodes goes down, then what happens? Will I still get access to my application over there? That question needs to be answered. So let's see
of my worker one, right? This is my worker one. Right? Let's say I am I enter my notice
down and to get my note down. I'm just going
to do a disconnect. Okay, so right now it's
not connected to the internet. And if I go here and do
a Docker node LS command, which would list down
all the different nodes in my system. You can see that. The status of worker
one is down. Okay all this time. It was getting we
were getting ready. That's because the state is was that's because the
server server was up. But since I turned off
the internet in my worker one, it's telling state is as down
but in spite of that I won't have any problems
accessing any application, okay? So even though I
refresh it you can see that on this port number I
could access the application. That's because in
spite of the fact that this node is connected
to the cluster. I can access this. Right and the Very fact
I can do that is because all the machines
are all the nodes in the cluster will have
the port number opened, right? It would be exposed between all the other nodes
the same concept. I explained during
my slides, right? So in spite of the fact that my note being down
I could do this now this solves one. If I availability, right? So in this case, even if I have like
multiple nodes going down then some of the nodes
which are you know, good enough which are
healthy dose can service. They those can satisfy
all my services for a temporary period of time, but of course, I'd have to bring up
my your notes again, right? So this is how one high
availability can be accessed that that's one thing. So, let me just go back to my worker 1 and N
able internet again. Okay, so that I can continue
with my demonstration. Okay connected now, so if I do a doc
or node Docker node LS again? Let me just refresh this. Huh? Yes now it says
the state is ready. Great. I'm going to just
clear the screen. Now, since the last command I ran was in global mode, I had an instance running on each of the nodes, right? This time, let's say I don't want to do that: I have three different nodes, but I want to host the application on only two nodes. Well, I can do that also; I can set the number of replicas of my service in the command where I'm starting the service. So let me go back to that start command and modify it as per our needs. I'm going to remove this mode global; once you remove this flag, you can add --replicas and set the number of services you want. But before this I would have to remove the existing service, right? Sorry, my bad, I just forgot to do that. So let's say docker service remove angular-app; I have removed it now and I'm going to restart the service. Okay, so now let me start modifying this start command. I'm going to remove this global mode, I'm going to say --replicas, and I'm going to set the replicas to two. This would indicate that I will have two running instances of this service between the three nodes. It will load balance between the three nodes, and the manager will choose on its own; it will deploy the application on two of the best performing nodes.
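The command this time is roughly (again with my assumed names):

    docker service create \
      --name angular-app \
      -p 4200:4200 \
      --replicas 2 \           # keep exactly two tasks of this service running across the swarm
      demo-app-1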
Let's verify if that's happening. So yeah, it's successful. We can verify that by doing a docker node ps; this would basically list down whether the container is present on this node. Yeah, there is one container, one service, running over here. But to get more details, let's run the docker service ps command. Let me just clear the screen and run the command again for you. When I do this, it says that two instances have been created, right, one has been started on my worker one and the other has been started on my manager one, so two instances between the three nodes. Let me also do this just for you: let me do a docker service ls to confirm the replicas, right? It says the mode is replicated and it's two out of two, correct. So no hassle anywhere here, right? If I refresh it, I would still have the Angular application hosted. This is worker one, it's anyway hosted over here, so I don't need to verify anything, but to give you a confirmation I can do that also by running the docker ps command. The docker ps would list down all the containers and services running on this particular node. When I hit enter, I have one entry here, for the one service that got started. However, on this node two, the worker two, I do not have the application, right? So let me verify that by running the command docker ps. Okay, it's not running here, there's no service, but in spite of that the application would be accessible here. So that's the concept of the Docker cluster, wherein all the nodes get access to what's there in the Docker cluster. So that's the fact. And now comes the concept
of scaling up and scaling down, right? This is one thing a lot of people have doubts about, because it's not always obvious: in spite of having a cluster with only three nodes, we can scale up to any number of service instances that we want. So right now I have two instances running; if I do a docker service ls, you can see that there are two replicas running. Now, if I want to scale up to, let's say, five, I can do that too. The simple command to do that is docker service scale; we choose the service and set the number we want to scale it up to. Let's say I want to scale it up to 5, so in this case three more instances would be added onto this cluster.
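If you want to try the same thing, the scale command and the follow-up checks look roughly like this (angular-app is just the service name from my setup):

```bash
# Scale the existing service from 2 replicas to 5
docker service scale angular-app=5

# Check on which nodes the replicas were scheduled
docker service ps angular-app

# Confirm the replica count (should show 5/5 once all tasks are running)
docker service ls
```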
Okay, I'm going to go ahead and hit enter. And yeah, it says the application has been scaled to 5 now. Let me run the same docker service ls command. When I do that, it says three of the replicas have already started; let's give it a few minutes so that it can start on all the other nodes. In the meanwhile, I'll clear the screen and do a docker service ps angular-app. This tells me on which nodes my instances are getting deployed. It says that, out of the five, two of those instances will be running on worker-1; as you can see, this is instance number one and this is instance number two, both on worker-1. Again, on worker-2 there will be two instances running, and you can see those two over here. And then on manager-1 there is one instance running. This is because I scaled it up from two to five. Now let me do a docker service ls to check if all my replicas are up. Yes, we've given it sufficient time and by now all the services are up and running. We can check it over here, but we don't need to, because we know for sure that it's going to be hosted anyway. So this is good news, correct? So yeah, this is how we can easily scale up, we can easily scale down, and we can achieve a lot of comfort by using Docker, correct? So, yeah guys, this brings an end to my hands-on
session. I've also showed you how to scale up and scale down, I've showed you the concept of high availability, and the whole concept of load balancing also happened here. But there is still one more thing which I want to add from my side, and that is: why are services being executed on the manager at all? A manager, ideally, is not supposed to do any work, right? That's what the workers are for; the manager just manages. This is a question you could come up with, and it's a very valid question. So if I want to change that, I can run one more command and enable that behavior as well. The command to do that is docker node update; I can use the --availability flag here and say drain, and I can choose which node I want to drain. Drain basically stops allocating tasks to whichever node is specified.
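The drain and the later re-activation are done with the same command, roughly as sketched below; manager-1 is just the hostname of my manager node:

```bash
# Stop scheduling new tasks on the manager and move its existing tasks elsewhere
docker node update --availability drain manager-1

# Later, allow the manager to receive tasks again
docker node update --availability active manager-1

# Inspect the AVAILABILITY column to confirm the change
docker node ls
```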
So over here, if I specify manager-1 and hit enter, then from now on the task which was allocated over here would shut down, and a new task would be created on either worker-1 or worker-2. And instead of draining my manager, I could also drain one of the workers, either worker-1 or worker-2. But let's say in our case we want to drain the manager, so I can do that by simply hitting enter over here. And yes, we've got manager-1 as the return value. That's great. So now if I do docker service ps angular-app, which is the same command as before, you can see that the task on manager-1 has been shut down, and an additional task has started on worker-1. So right now there are three tasks running on worker-1, and these two are running on worker-2, right? So I'm going to clear the screen here, execute the same command, and also show you what happens when I do a docker node ls. The node list basically lays down all the nodes connected inside the cluster. So I'm going to do a docker node ls, and over here this time you can see that for my manager-1, which is this ID, the state is ready; however, the availability is not active, it is drained. Even though it is the leader, it is drained. So from now on, if I scale up the services, or whatever I do, even in case of high availability, no new tasks will be allocated to my manager unless and until I remove the drain. I can remove the drain by again specifying the command with availability set to active. So let me run that command and show you: here, instead of saying drain, if I change the availability to active, then I can start allocating tasks to my manager also. If I hit enter, it returns manager-1. Again, if I run the same docker node ls command, the availability is active, and from now on, whenever I scale up, only at that point will my manager start getting tasks. Whatever tasks already exist will not get reallocated to my manager; only if there is any downtime, if any of my nodes goes down, at that time manager-1 will get tasks again. So this is the simple demonstration which I wanted to show you. It sounds simple, but it solves a lot of industry issues, correct? Docker is one of the best tools I have worked on, and Docker Swarm is one amazing technology that I have also witnessed. So I hope you've understood what I'm talking about over here, correct? So yeah, that brings an end to my demonstration here.
exactly is Takin it working. Let me show you
the workflow of talker. All right guys. So this is the general
workflow of talker. So a developer writes a code that defines all
the application requirements and dependencies in an easy
to write dockerfile and this talk of file
produces Docker images. So whatever dependencies are required for a particular
application is present inside this image. And then when we run
this Docker image, it creates an instance. And that is nothing
but the docker containers this particular image is then
uploaded onto the docker Hub. So from these repositories, you can pull your image as well as you can upload your images
onto the docker Hub, then from the docker Hub, very steams just
a quality assurance team or the production team will pull the images and prepare
their own containers, as you can see from the diagram. Now these individual containers
will communicate with each other through a network to perform
all the actions required. This is nothing
but talk a networking. So what exactly
is Takin it working. So when containers are created
these isolated containers have to communicate
between each other. Right? So the communication Channel
between all the containers introduces the concept
of Docker networking. All right, so now, what would be the goals of Docker networking? First, Docker is flexible, or in other words pluggable. By flexibility I mean that you can bring in any kind of applications, operating systems, or features into Docker and deploy them. Next, Docker can easily be used cross-platform. By cross-platform I mean you can have any number of containers running on different operating systems: one container can run on an Ubuntu host, another container can run on a Windows host, and so on, and you can have all these containers work together with the help of swarm clusters. After that, we have Docker offering scalability: as Docker is a fully distributed network, it lets applications grow and scale individually. Then we have Docker using a decentralized network, which enables the capability to have applications spread out and highly available. So, in the event that a container or a host suddenly goes missing from your pool of resources, we can automate the process of either bringing up additional resources or failing over to the services that are still available. Apart from offering a decentralized network, we have Docker being really user-friendly: Docker makes it really easy to automate the deployment of your services or containers, which makes things easy in your day-to-day work. And finally, we have Docker offering out-of-the-box support: the ability to use Docker Enterprise Edition and get all the functionality easily and straightforwardly makes the Docker platform very easy to use. So those were the goals
of Docker networking. Now, to enable these capabilities we have container network management, and to do that we have libnetwork. What is this libnetwork? libnetwork is open source, so you can read through the source code and build on top of it. libnetwork is basically a Docker library that implements all of the key concepts that make up the CNM, the Container Network Model. Now, what exactly is this Container Network Model? Well, the Container Network Model formalizes the steps required to provide networking for containers while providing an abstraction that can be used to support multiple network drivers. The CNM requires a distributed key-value store, like Consul, to store the network configurations. The Container Network Model has interfaces for IPAM plugins and network plugins. The IPAM plugin APIs are used to create or delete address pools and to allocate or deallocate container IP addresses, whereas the network plugin APIs are used to create or delete networks and to add or remove containers from networks. The Container Network Model is basically built on three main components: the sandbox, endpoints, and the network object itself. A sandbox contains the configuration of a container's network stack; this basically includes the management of the container's interfaces, routing tables, and DNS settings. A sandbox may contain many endpoints from multiple networks. An endpoint is something which joins a sandbox to a network; an endpoint can belong to only one network and to only one sandbox. And finally, as I was saying about the network: a network is a group of endpoints that have the ability to communicate with each other directly. Now that you have a brief idea about
the Container Network Model, let me tell you the various objects involved in this model. The Container Network Model comprises five main objects: the network controller, drivers, network, endpoint, and sandbox. Starting with the network controller: this object provides the entry point into libnetwork and exposes simple APIs for users such as the Docker Engine to allocate and manage networks. Since libnetwork supports multiple active drivers, both built-in and remote, the network controller allows users to bind a particular driver to a given network. Next comes the driver. The driver is not a user-visible object, but drivers provide the actual implementation that makes the network work. A driver can be either built-in or remote, to satisfy various use cases and deployment scenarios. The driver owns the network and is responsible for managing it, which can be further improved by having multiple drivers participate in handling various network management functionalities. After the driver object, we have the third object, the network. The network object is an implementation of the Container Network Model; as I said, network controllers provide APIs to create and manage the network object. Whenever a network is created or updated, the corresponding driver is notified of the event. libnetwork treats the network object at an abstract level to provide connectivity between a group of endpoints that belong to the same network, and at the same time isolate them from the rest; the driver performs the actual work of providing the required connectivity and isolation. The connectivity can be within the same host or across multiple hosts. After that, the next object we have is the endpoint. As I discussed before, an endpoint mainly represents a service endpoint, so it provides connectivity between services exposed by a container in a network and other services provided by other containers in the network. The network object provides the APIs to create and manage endpoints, and an endpoint can be attached to only one network. Since an endpoint represents a service and not necessarily a particular container, an endpoint has a global scope within the cluster as well. And finally, we have the sandbox. The sandbox represents a container's network configuration, such as the IP address, MAC address, routes, and DNS entries. A sandbox object is created when the user requests to create an endpoint on a network; the driver that handles the network is responsible for allocating the required network resources, such as the IP address, and passing that info, called sandbox info, back to libnetwork. libnetwork then makes use of OS-specific constructs to populate the network configuration into the container that is represented by the sandbox. A sandbox can have multiple endpoints attached to different networks. Alright guys, so that was a brief about the
various network model objects. Now, let me tell you about the various network drivers that are involved in Docker networking. Docker networking has mainly five network drivers: bridge, host, none, overlay, and macvlan. So starting with the bridge: the bridge network is
the default network driver. So if you do not specify a driver, this is the type of network you're creating. The bridge network is a private internal network created by Docker on the host. All the containers are attached to this network by default, and the containers can access each other using their internal IPs. If it is required to access any of these containers from the outside world, then port forwarding is performed to map a container port onto the Docker host. Bridge networks are usually used when your applications run in standalone containers that need to communicate. Another type of network is the host network. This removes the network isolation between the Docker host and the Docker containers and uses the host's networking directly. So if you were to run a web server on port 5000 in a web app container attached to the host network, it is automatically accessible on the same port externally, without requiring you to publish the port, as the web container uses the host's network. This also means that, unlike before, you will not be able to run multiple web containers on the same host on the same port, as the ports are now common to all the containers on the host network. The third option is the none network. Here the containers are not attached to any network and do not have any access to the external network or to other containers. This is usually used in conjunction with a custom network driver, and it is not available for swarm services. The next network in the list is the overlay network. To understand this network, let's consider a scenario. Let's say we have multiple Docker hosts running containers; each Docker host has its own internal private bridge network, in the 172.17.0.x series, allowing the containers running on that host to communicate with each other. However, containers across the hosts have no way of communicating with each other unless you publish ports on those containers and set up some kind of routing yourself. This is where the overlay network comes into play. With Docker Swarm you can create a new network of type overlay, which creates an internal private network that spans across all the nodes participating in the swarm cluster. We can then attach containers or services to this network using the --network option while creating a service, and they can then communicate with each other through this overlay network. So you can see that you can use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. And finally we have the last network, that is the macvlan network. Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon then routes traffic to the containers by their MAC addresses. The macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network rather than routed through the Docker host's network stack. So guys, that was about the various network drivers.
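Just to make the driver names concrete, here is a rough sketch of how you would create networks with each driver from the CLI; the network names are only placeholders, and the macvlan parent interface is an assumption:

```bash
# List the networks that exist by default (bridge, host, none)
docker network ls

# A user-defined bridge network for standalone containers on one host
docker network create -d bridge my-bridge-net

# An overlay network for swarm services spanning multiple hosts (requires swarm mode)
docker network create -d overlay my-overlay-net

# A macvlan network bound to a physical interface
docker network create -d macvlan --subnet 192.168.1.0/24 -o parent=eth0 my-macvlan-net
```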
Now, let me brief you a little bit about Docker Swarm and tell you the significance of Docker Swarm in Docker networking. In simple words, if we have to define Docker Swarm, then Docker Swarm is a cluster of machines running Docker. It provides a scalable and reliable platform to run many containers, which enables IT administrators and developers to establish and manage a cluster of Docker nodes as a single virtual system. So, as we know, Docker Swarm is a technique to create and maintain a cluster of Docker engines. What exactly is this cluster of Docker engines? In a cluster of Docker engines, there will be many Docker engines connected to each other, forming a network, and this network of Docker engines is what is called a Docker Swarm cluster, as you can see from the diagram on the screen. This is also the architecture of a Docker Swarm cluster. There will always be at least one Docker manager; in fact, it is the Docker manager which basically initializes the whole swarm, and along with the manager there will be many other nodes on which the services get executed. There will be times when a service is also executed on the manager, but the manager's role is to make sure that these services, or the applications, are running perfectly on the Docker nodes. Now, whatever applications or services are specified or requested, they will be divided and then executed on different nodes, as you can see in the diagram here. These different nodes are nothing but the workers. Alright guys, that's all you need to know
about Docker networking. Now, let's move on to the hands-on part. In the hands-on part, first I'm going to show you how to create a simple network and how to deploy a service over that network. After that, we'll create a swarm cluster, then we'll connect two services, and we will scale a single service. Alright, so let's get started with our hands-on. First we're going to deploy an application named apache2 by creating a Docker service on the default network, that is, the bridge network. Apart from that, we'll also initialize the swarm cluster, as we want it to work on two different nodes, that is, the manager node and the worker node. For that, let me open my terminal and type in the command sudo docker swarm init --advertise-addr followed by the IP address; I'll mention the IP address of the manager node and hit enter. Once you hit enter you'll be asked for the password, so type in the password, and then you can see that the swarm has been initialized. Now, to connect the slave node to this particular manager, you have to copy this join command, go to the slave node, open the terminal, and paste it there. You can see that this node has joined the swarm as a worker. So that is the manager, and this is the slave.
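For reference, the swarm initialization and the join step that were just performed look roughly like this; the IP address and the join token are of course specific to my machines:

```bash
# On the manager node: initialize the swarm, advertising the manager's IP
sudo docker swarm init --advertise-addr 192.168.56.101   # example IP

# Docker prints a join command containing a token; run it on the worker (slave) node
sudo docker swarm join --token <worker-join-token> 192.168.56.101:2377

# Back on the manager: list the nodes in the cluster
sudo docker node ls
```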
Now let's go back to the manager node, and over here we're going to deploy an application named apache2 by creating a Docker service. For that you have to type in the command docker service create --name, giving the name of the application, then --mode, that is, which mode we want it to work in (we want it to work in global mode), then -d, and -p for port forwarding along with the port it's going to work on, which is 8003, and then mention the account name from which the Docker image is pulled.
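The exact image name depends on the Docker Hub account used in the demo, so treat the repository below as a placeholder; mapping to the container's port 80 is also my assumption for an Apache image. The shape of the command is what matters:

```bash
# Create a global-mode service that publishes port 8003 on every node
docker service create \
  --name apache2 \
  --mode global \
  -d \
  -p 8003:80 \
  <dockerhub-account>/apache2   # placeholder image

# List running services
docker service ls
```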
Once you hit enter, you can see that your Docker service has been created. Now, to check whether your Docker service has been created or not, you can use the command docker service ls. This lists all your running services; at present we have just one service, apache2. Now, to check whether it is running or not, you have to go to the slave node, open the web browser there, and go to localhost on port 8003. So let's go to that port, and you can see the message "It works!". That means our application has been deployed onto a container and it is also connected to the swarm cluster, so the worker also has this particular application. Now, if you want to deploy a multi-tier application in a swarm cluster, how will you do that? So let me show you
how to do that. But before I do, let me tell you what we are going to connect. Basically, we have two applications: the web application and the MySQL database. The web application has two parameters, the course name and the course ID, and once you enter the details and click on Submit Query, they will be automatically stored in the MySQL database. This multi-tier application is connected together through the overlay network. So let's start doing it. First, let's create
the overlay network. For that you have to type in the command docker network create -d overlay my-overlay-1. my-overlay-1 is basically the name of the network that I am giving; you can give any other name that you want. After that, let's create a service for the web application. For that I'll again type in the command docker service create --name, with the name of the application as webapp1, then -d, then --network to connect it to the my-overlay-1 network, then -p for port forwarding with the port on which it is going to run, and then the account details from which this Docker image will be pulled. After that you can just hit enter, and you can see that your Docker service has been created. Let's check it once again: for that you type in the command docker service ls, and you can see that the webapp1 service has been created. Now we'll create another service for the MySQL application. For that you type in the command docker service create --name mysql, which is the name of the application, then -d, then --network with the network we want to connect to, which is my-overlay-1, then -p for port forwarding with the port, and then the account details. You can see that your Docker service has been created. Let's check again: type in the command docker service ls, and you can see that the MySQL service has also been created.
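Putting the last few commands together, the overlay network and the two services are created roughly as below. The image names are placeholders for whatever repositories the demo images live in, and the published ports (8001 for the web tier, 3306 for MySQL) are my assumptions based on how they are accessed later:

```bash
# Create an overlay network for the multi-tier application (run on the manager)
docker network create -d overlay my-overlay-1

# Web tier: the PHP front end, published on port 8001
docker service create \
  --name webapp1 \
  -d \
  --network my-overlay-1 \
  -p 8001:80 \
  <dockerhub-account>/webapp1        # placeholder image

# Database tier: MySQL on the same overlay network
docker service create \
  --name mysql \
  -d \
  --network my-overlay-1 \
  -p 3306:3306 \
  <dockerhub-account>/mysql-demo     # placeholder image

docker service ls
```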
Now, what you have to do is go inside the web application service and make some changes there. To go inside the web app container, you need to know the container ID, and to know the container ID you type in the command docker ps. This lists all the containers present on this node; you can see that webapp1 and apache2 are present. We need the container ID of webapp1, right? So to go inside this container, you type the command docker exec -it, then copy this container ID and paste it here, and then end the command with bash. You can see that you've gone inside this container. Now you have to go to the file index.php and make some changes. For that we type the command nano and mention the path, and the PHP file opens up. Now you have to change the server name to mysql, since we want to connect to the MySQL service, then change the password to edureka, and let's say we keep the database name as HandsOn. After that, use the keyboard shortcut Ctrl+X, press Y, and save the file. Once you're done with this, you have to exit the container, so type exit and you're out of the container. Now, you must have observed here
that only the webapp1 and apache2 services can be seen on this node, whereas we are not able to find MySQL. That is because it is present on the slave node. Let me show you there: let me go to its terminal, type docker ps, and you can see that the MySQL container is over here. So why is that? That's because swarm is performing load balancing; it is dividing the containers across the two nodes so that the load is balanced properly. So now what you have to do is go inside the MySQL container. For that you type the command docker exec -it, mention the container ID of this particular container, and end the command with bash. You'll go inside this particular container. Once you're inside, you need access to the MySQL commands, so for that you type the command mysql -u root -p followed by the password edureka. Once you type in this command, you can see that you have got access to the MySQL prompt. Here -u is the user and -p is for the password, and if you're wondering why we used -it earlier, that's because we are opening the container in interactive mode. Once you get access to MySQL, you have to create a database and then create a table inside it. For that you use MySQL commands such as create database HandsOn, and then you have to use the database, so you type use HandsOn, and you can see the message that the database has been changed. Now you have to create a table: type create table courses, with courses as the name of the table, and then mention the two columns. We have course_name, which is of varchar type, allowing, let's say, a length of 15 characters, and course_id of varchar type with 12 characters; after that close the bracket, and this creates the table for you.
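If you want to reproduce these steps, the exec into the MySQL container and the SQL statements look roughly like this; the container ID, the password edureka, and the column sizes are just the values used in this demo:

```bash
# Interactive route used in the demo:
#   docker exec -it <mysql-container-id> bash
#   mysql -u root -pedureka          # then type the SQL below at the prompt
#
# Non-interactive equivalent, run from the node hosting the MySQL container:
docker exec -i <mysql-container-id> mysql -u root -pedureka <<'SQL'
CREATE DATABASE HandsOn;
USE HandsOn;
CREATE TABLE courses (
  course_name VARCHAR(15),
  course_id   VARCHAR(12)
);
SQL
```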
After that you have to exit your MySQL connection, so you type exit, and then you have to come out of the container again, so again type exit and you'll be out of the container. Now, what you have to do is go to the slave node and open this index.php page. For that let me open localhost, since the web application service is running on port 8001, followed by index.php. All right, so you can see that our Docker sample app has
been opened up. Now, before you fill in the details, let me go back to my web app container and show you what all has changed in the file. You have to mention the server name as mysql, the username as root, the password as your password, and then the database I mentioned there, that is HandsOn. The name and ID are the parameters that we will provide, so they will be the course name and the course ID, and then you have a basic PHP file in which you write the SQL command that inserts the course name and the course ID; this takes the details that we fill into the application and stores them directly into this table. Now, let's go back to the slave node and enter some details. Let's say we mention the course name as Blockchain and the course ID as some random number, and then we'll submit the query. Once you submit the query, you can see that a new record has been created successfully. So let me just create a few records. Alright, so I've typed in a few records. Now let's go back to the MySQL container, get into the database, and see whether the table has all the records we entered or not. So let's go back to the terminal. Now let me type in the command docker ps, then docker exec -it with the container ID, and then the command mysql -u root -p with the password edureka. This gives me my MySQL connection. Now I'll type in the command use HandsOn; the database has been changed. Then I'll type the command show tables, which shows me the tables included in the database, and I have this table. Now let me type in the command select * from courses. This lists all the details stored inside the table. So you can see that we have entered so many details with the help of this web app, and they got stored directly into the MySQL database. So guys, that's
how you can connect multi-tier applications over the overlay network. Now, if you want to scale any particular service, you can just scale it with a simple command. For that you go back to your manager node and use the command docker ps to list all the containers; we have two containers. Let's suppose we want to scale this web app service to around five instances. You can do that with a simple command, docker service scale webapp1=5, and you can see that the webapp1 service has been scaled to five. Now, if you want to check whether it is working or not, you just have to type in the command docker service ps webapp1, and you can see five instances of the same service. So guys, that's how you can deploy a simple application over the default network, connect multi-tier applications with an overlay network, and finally scale any particular service. So guys, that was a short demo
on Docker networking. The project that I'm going to show you is that of an Angular application, which was created by my team, and what I'll be doing is deploying this Angular application by implementing the DevOps strategy. The first topic that I'm going to talk about today is what is Angular, and after I talk about Angular and give you an introduction, I'm going to talk about what is DevOps, right? This is going to be very brief; I'll quickly talk about these two things and then I'm going to go to the third topic, which is the different DevOps tools and techniques to achieve continuous deployment, because this is the highlight of today's session. I will be spending a lot of time on this slide and on the final slide, which is containerizing an Angular application the DevOps way, and the DevOps way basically includes a combination of these three tools: Git, Jenkins, and Docker. All right guys, so enough talk. Let me get started with my first topic, and that is: what is Angular? Angular is an open-source, client-side framework for deploying single-page web applications, and the keyword that you need to note here is single-page applications, which has the acronym SPA. There are quite a few technologies for developing single-page applications: Angular is a very popular one, React.js is another popular technology, similarly Vue.js, and there are a couple more. Well, the thing that you're going to ask me here is: why single-page applications, right? You might ask why I'm having a demo on single-page applications. Well, the answer for that is because single-page applications are the way forward. They are more effective, they are easier to build, and they come with a lot of other benefits that I can't, of course, cover in today's session because they'd get too detailed, but I do have a couple of benefits that are mentioned on the slide, and you can see them on your screen now.
And the biggest reason, among the most important factors: single-page applications, which are created by technologies like Angular and other JavaScript technologies, are really fast. They are fast because while accessing any webpage developed in Angular or such technologies, your browser fully renders the entire DOM in one go, and later on it only modifies the view, or the content displayed to you, when you interact with that web page. Even these modifications are done by the JavaScript which runs in the background. And yeah, you can see an example of an SPA architecture over here, right? So basically any web application which is developed with the help of Angular will be called a single-page application, and it will have different components. In my example there are three components, but in general they'll have different components, and the components that you can expect are those of a navigation bar, where you can switch from one tab to another tab; then you will have a sidebar, where you can filter down to the different options that you want displayed; and then of course you'll have a content component, which will be the actual display. So whatever you're actually viewing on your web page will be displayed over here, and what is displayed here can be controlled from here by clicking on the different items, or you can also control it by switching or clicking on a different option in the navigation bar or in the sidebar. So you can switch the view like that. And when you do it this way, your browser will not take too much time to fetch the information from the server, because the entire DOM is fetched in one go. That's the big benefit of single-page applications, and Angular is used especially for developing single-page applications; that's why it's the way forward, it's really popular, and the technology is really coming up. So my team has developed a single-page application using Angular, and that's what I'm going to deploy today, right? Now let me quickly go to the next slide and talk
about what DevOps is here. Right? I'm sure everyone here knows what DevOps is: it's a software development approach which involves continuous development, continuous testing, continuous integration, continuous deployment, and continuous monitoring of the software throughout its development lifecycle. Well, I've mentioned this numerous times in my video sessions, and I expect you to know this. Okay, but what you might not know is which of these tools are used for continuous deployment. On a higher level, I can say that Docker is the most important tool for achieving continuous deployment, but as you can see on the screen, I will also be showing the part of continuous development and continuous integration in today's session. Continuous development is achieved with the implementation of Git and GitHub, continuous integration is achieved with the implementation of Jenkins, and continuous deployment is achieved with the implementation of Docker. Using GitHub you can pull the code from the repository, then using Jenkins we can deploy that code to the production environment, the servers, or virtual machines, whichever suits you, and then finally we can make use of Docker to containerize those deployments. So that's how the different DevOps tools that you see here, Git, Jenkins, and Docker, can be orchestrated to achieve automation for software development. That's how things go, and in my Angular application I'm going to use these three DevOps tools. Now, moving on to the next slide. This slide is all about deploying
an Angular application. This is the most interesting slide in today's session, and you can ask me why. The reason it's interesting is because we are using Docker majorly, right? We are pulling all the code from Git, we are using Jenkins integrated with Docker, and we are creating multiple containers by using Docker. Basically, Docker containerizes the application along with all its dependencies, and when we say containerize, it means that we are packaging the code of the application along with all the required packages and dependencies in a lightweight manner, which is not too heavy on the server on which we are deploying the application. And the best part with these Docker containers is that they can be run on any operating system, irrespective of the one they were built on. What that means is I can containerize any application; in my case, I have the Angular application, right? What kind of dependencies will my Angular application have? The things your Angular application would primarily need are Node.js, the node package manager, which again has the acronym npm, and of course the package.json file. Node.js is basically going to be the back end for your Angular application, npm is going to install the Angular application and maintain all its dependencies and the versions of those dependencies, and the package.json file is again the most important file, because it contains details about your project: what dependencies are needed and what versions of those dependencies are needed. All these things will be present in your package.json file. So basically these three will be the dependencies, and in my case, what I can do with the help of a container is this: I can have a container, I can install all these dependencies, I can place them all together, and without shipping the operating system that's actually powering it, I can package all these things into this particular container and just share it with other people. What the other people have to do is just run the container, and they can boot it on top of any operating system. That's the benefit with containers: developers can containerize an app that was created on, say, a Mac operating system, upload that container image to Docker Hub, and someone in a remote location can download that Docker image and spin a container out of it, even though the person who is remotely located is on a different operating system. The person who built the Docker image could have done it using one operating system, and the person who's actually running the container at a remote location can have a different operating system. So that's the big benefit. So you guys can see the advantage that lies here, with the help of Docker and with the help of containerizing all your applications and dependencies minus the actual operating system. Okay. So anyways, I think
it's time to move on, and now I've reached the demonstration aspect, right? So now I've told you exactly what I'm going to do; that's going to be the architecture of how I'm going to deploy my Angular application. Now that you know it, I'm going to start with my demonstration, and what I'm going to do is do it with the help of continuous deployment using Jenkins and Docker, and we will also use GitHub, from where we will be pulling the code. So let me first of all open my machine for that. First of all, to achieve this continuous deployment, like I told you, Jenkins is the broker, right? Jenkins is the one that pulls the code from the repository, and that is what is going to help us build Docker images and spin containers out of those images. So what I have to do is first open my web browser and launch my Jenkins dashboard. Jenkins is by default hosted on port 8080, so let me just launch that particular port number on my localhost. Sorry for the delay guys, it's lagging a little bit. Right, so this is the port number where Jenkins is hosted. Now, in the meanwhile, let me just quickly go to my terminal and show you my project folder where my actual application is present. So here's my terminal, and my project is present in this folder: I've created a demo folder, inside which there's a top-movies folder. Top-movies is the project folder which I created, and what you see here are the different files and folders present inside this project folder. Let me also open it in my file explorer here and explain what these different packages, files, and folders are for. So this is the project that I created, and as you can see there are a number of files here, and the number one file that I want to talk about is the Dockerfile. Now, the Dockerfile is basically used to build your Docker images and spin containers out of those Docker images, and to build your Docker images you specify the commands inside the Dockerfile. Then you execute the Docker image which is built by issuing the run command; at that time your Docker container is spun up, your container is ready, and your application is hosted on a particular port number, which is then mapped to a particular port on your localhost. All these functionalities are done with the help of the Dockerfile. Okay, but that is only with respect to Docker; the other folders and files that you see here are with respect to my Angular application. We have different files like the package.json file, we have node_modules, we have the src folder. So for all these things, first of all, let me talk about
the package.json file. Now, this package.json file is a very important file which contains all the details about my project: it contains which dependencies my project needs, what the name of my project is, and what versions of those dependencies my project needs. All these details are present inside my package.json file. Without the package.json file, your application cannot get hosted. For those of you who know what metadata is, you can consider this to be like the metadata; package.json plays a similar role, right? But here comes the question: how will the package.json file be initiated? How do you execute the package.json file? What's the first step? And that's where this whole node_modules folder comes into the picture. So you have a command called npm install, right? npm stands for node package manager, and it installs all the dependencies that your project needs. When you run the command npm install through your terminal, it looks for the package.json file in that particular directory. So I have to execute npm install from the directory where my package.json file is present. If I execute that command there, it first reads the package.json file, and whatever dependencies are listed over here for my project, for my code, all those are downloaded and installed from the node repository, and all those dependencies end up inside this folder called node_modules. This node_modules folder is going to be a very heavy folder; there's going to be a lot of content here, so it's ideal that you don't place it in GitHub. If you want to share your project with someone else, in a real-world environment what happens is you just share the package.json file, and when they do the npm install, they automatically get all the dependencies installed as per whatever the package.json file specified. So that's what it does.
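For anyone following along locally, the step being described is simply this (the folder path is just the demo's project folder, so adjust it to yours):

```bash
# Run from the directory that contains package.json
cd ~/demo/top-movies      # assumed path from the demo
npm install               # reads package.json and populates node_modules/
```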
And then you have other files here, the configuration files: the protractor configuration file, the TypeScript configuration file, the TypeScript lint configuration file, and the other files. So guys, all these configuration files are the configuration for your Angular application, be it the TypeScript configuration, the linting configuration, or the protractor one. All these can basically be considered the boilerplate that comes with the actual Angular application, so these are dependencies; you need them with your project. And the folder src, this is where your actual project is present; whatever code you've written for your Angular application will be present here. So yeah, these are basically the contents of my repository, and these are what is needed for containerizing my Angular application. And maybe you would point out that I have not explained this folder, e2e; this one is basically used for end-to-end testing, so whatever is needed for that is present in this package. But yeah, on a high level this is what you need to know; these are the packages. And the first thing that I've got to do, to
containerize this application, is to pull the code from our GitHub repository, and I will do that with the help of Jenkins. Even though I have it locally, in a real-world environment developers or engineers would pull this code from GitHub, right? So I will show you how that happens by first of all going to my Jenkins dashboard. Here, this is my dashboard. I already have a project called demo; this is the one that I want to show you a demonstration of. I've already pre-built the environment so that I don't waste much time downloading everything, because downloading and installing everything would take a lot of time. If I have the environment ready, I can just show you straight away. So I have it over here, and if I go to Configure, I can show you what elements have been defined already. Let's just wait a minute for this to come up. First of all, we have to go under Source Code Management; this is where you need to enter the GitHub repository from where you want to pull your code. Now, let me just open this repository and show you what I'm going to pull. It's basically the same content that is there on my localhost, in my whole system. So of whatever you saw here in the file explorer, most of the contents are there in my GitHub repository, except for node_modules, because that gets installed automatically when you run npm install against the package.json file. Yeah, so you guys see this, right? We have the same e2e folder, we have the src folder, and then we have various other files like angular-cli.json, we have the .dockerignore, and we have the Dockerfile. The Dockerfile is present inside the GitHub repository; the reason I have the Dockerfile inside the GitHub repository is because wherever my code is present, that's where my execution should ideally happen. If I have my Dockerfile present in the same directory, then I can use my Dockerfile to build my Docker image inside that repository, and it will also pick up the dependencies and the Angular code for my application from the same repository. That's why I have the Dockerfile in the same repository. Right, so that's what the Dockerfile is used for, and similarly we have the other dependencies like package.json, the config files, and the other things which I spoke about, which were there over here; the same things we have in our GitHub repository. So, getting back to our Jenkins:
the code from here. And what we do next is we
can go down to the build option. So under build we have
our shell here, right? So whatever commands
you specify here, they would be executed
on your shell. So since I am using
a Linux system, I have chosen to execute
these commands on my ex on my shell well in case you guys are executing
it at your end, you know, if you're using
a Windows system, you might probably want
to choose Windows batch command and then specify the commands that you want to run
a new windows. CLI, right, so that's
the only difference but yeah, whatever commands
I specify here. They will be run on my shell
and the commander. And first of all running is
that of Docker build and building the image
called demo app one. So right now I'm using
the dockable - t flag T command to build
a new image called demo app one and it would build this application based
on the dockerfile which is present in this folder. So this is the folder
where my dog Is present right so I can you know, in case I do a CD and if I give this folder
then I would have moved to that particular directory and then I can just simply
replace this path with a DOT. So that's another alternative. But yeah, otherwise, you can specify the entire
absolute power also here. So I've done that and it's basically
creating a new image based on the dockerfile and the instructions present
inside the dockerfile in this command. And then in the second command
is the docker run command. And okay, so the image
was created your demo app one. So that image is
basically being run. Okay, you spend on an image
into a container by running this command Docker run
and you specify other options while doing this, you know, we specify flag flag
RM we specify single flag copy and then we specify
the port numbers. So the the pflag
is used for mapping your Docker containers put to your host machine sport so
over here the photo double zero that you see here. This is the port
number of my yard. Host machine on
which the subsequent or the equivalent port on which
my Docker container, right? So whatever is basically president medical container the
port number whatever was hosted at that would be visible
inside my photo double zero port in my your host system. Okay. So yeah, it's
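The two shell commands configured in this Jenkins build step are, roughly, the ones below. demo-app-1 and top-movies-1 are just the image and container names from this demo, and the project path is specific to my machine:

```bash
# Build an image named demo-app-1 from the Dockerfile in the project folder
docker build -t demo-app-1 /home/edureka/demo/top-movies   # assumed absolute path; '.' works if you cd there first

# Run the image, mapping the container's port 4200 to port 4200 on the host
docker run --rm -p 4200:4200 --name top-movies-1 demo-app-1
```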
So yeah, it's an Angular application, right, a web application. What you have to do is host it on one of the ports, and by default Angular applications are hosted on port 4200; I have also specified the same in the package.json file. That's where you specify the port number, and what I'm saying here is that whatever is running inside my Docker container on port 4200 should also be visible, or available, on port 4200 of my host machine. That's what the -p flag translates to. And then we have the name I'm giving this container which I'm building: I'm giving it the name top-movies-1. And yeah, this is basically the same image that we first built with the help of the Dockerfile. So these are the two commands that I'll be running. At this point of time, if there are any of you who are new
to Jenkins, or if you are new to Docker: if you execute the same commands from your Execute Shell, you might have a problem. Can any of you guess what that problem might be? I can give you a hint: the problem would lie over here, right at the beginning of this command. Well, no problem. See, the thing is, any Docker command that has to be run has to be run with sudo access. Only the root user can execute Docker commands, especially the build and the run commands. There are a few commands which can be executed without sudo, but these two commands especially need root access. If you're executing the same two commands from your terminal, then you can simply prefix the whole command with sudo, and the shell will prompt you for the password, and you can enter the password. But what would you do in the case of Jenkins, right? This is Jenkins; you cannot put sudo here, because Jenkins cannot manually enter the password for root access, and Jenkins does not have the root credentials. So in this case, what you need to do is give that root-level access to Jenkins itself. Jenkins is actually a user; if you guys notice, or let me just tell you this, Jenkins is a separate user because it's a web server, and any commands that you execute through Jenkins are executed as the user jenkins. So what you have to do, similar to how you let a regular user execute Docker commands without sudo, is create a docker group and add the user from which you are executing to that group. Similar to that, you have to add your jenkins user to the docker group, and the docker group is basically on par with root in terms of the access that it has over the system.
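The usual way to grant that access is sketched below; note that membership in the docker group is effectively root-equivalent, and the Jenkins service needs a restart before the change takes effect:

```bash
# Create the docker group (-f: succeed even if the group already exists)
sudo groupadd -f docker

# Add the jenkins user to the docker group
sudo usermod -aG docker jenkins

# Restart Jenkins so the new group membership applies
sudo systemctl restart jenkins
```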
So that's the important step that we need to do, because otherwise, if you don't enable this access, your commands are not going to get executed; it would say failure, permission denied. So that's the thing. And yeah, if you have these two commands ready, then it's pretty much done: your Dockerfile would be used to build the image, and then that image would be used to get the container out. So I'm just going to save this
and quickly show you how to build this application. Okay, so to build the application we can simply go to Build Now, right? You can see the build history; these are the previous times I ran the same job, and if I do it again with Build Now, a build gets scheduled and you see a new build pop up over here, build number 212, right? If I click on this and go to Console Output over here, you get to see what the status of this build is. So let me just go here. Yeah, if you go to Console Output, you will get to see what's happening, similar to the output that you'd get on your terminal. Okay, we're here already, so let me just quickly scroll up. As you can see, the first set of commands have started executing via Jenkins, and of course the first one was to pull the code from Git, from the git repository. So whatever was there is being fetched over here. Then the first command that we're executing on the shell can be identified by the plus symbol, which basically indicates that this command is being executed on the shell, the command prompt. So docker build -t demo-app-1 is the command that's being run, and when you build it you can see that there are various steps being performed: for each line in your Dockerfile, there is a step that gets performed. Now, let me quickly go to the Dockerfile and explain what the different steps are that are going to be performed. Okay, so at this point of time I'm going to go back here, and let me open the Dockerfile and explain the different steps.
to host a note, you know, if you want to host
an angular application we have to first of all Bill
pulled a node image, right? Your angular application
would be present or you know, it would be a hosted only with only when there's
a load application which is running
at the back end. So the First Command that is from it's going
to pull the node image which has the tag 6 right so
version number 6 of 4 node, so probably this is what is going to get pulled
with the help of from node 6 and when it pulls Then what do you call it was you have
to use the Run command to make a directory inside
this particular image. So you use the - pflag to specify the path that
you want to create / user / Sr. C / app. So you're creating
this particular path inside your Docker image, which you pulled and then you're changing the
working directory to the path that you created by using
this command working work dir and this and the first thing
that you got to do that you need to notice here is Is the package or Json file which is are present
in my local system that I'm moving to my path, which I created inside
my Docker image now that is because this is the file that
contains all the dependencies that are needed to to do
basically download all the node and a node node modules, right? So what about dependencies
are are there inside the node modules? They will be downloaded with
the help of package or Jason. So right now it's present
in my local system am specifying that so stirring Docker
to copy this file into the A patio and then
when your once you've done that it asked me to run
the npm cash clean now, if you are, you know running the npm
install for the first time or if you're using npm the first
time you might not need it but since I've run this command
earlier and the using it because I want to avoid
any version conflicts between the different dependencies. Right? So dependencies can be afraid
of different versions of angular 2 version or angular
for version on all those so I'm just using this two
different to to you know, keep myself healthy there and Then I'm using
their run npm install. So this is the most
important command which would basically
start everything. So the npm is
the node package manager. And when I the moment I issue
this command my package dot. Json file would be searched for and one it's located the
commands there the dependencies which are there
inside those would be created inside a Docker modules, right? Sorry node modules
inside that folder. Everything would be created. So that's what
this command does. And the next command is all about copying every single file or folder which is present inside my host directory into the image. So the other files, those configuration files that I spoke about earlier, the TypeScript configuration file, the TypeScript lint files, all of those will also be copied to this path inside my Docker image. And then I'm finally saying EXPOSE 4200, because this is the port on which my angularjs application will be hosted, and I'm finishing it off by specifying the command npm start. So you do the npm install here, and at this point of time your dependencies are ready, everything is ready, your application is ready to be deployed and hosted, and the start command is what is going to actually do the hosting on this port number 4200, right. So that's what the Dockerfile instructions are.
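Putting those instructions together, a minimal sketch of the kind of Dockerfile being described here could look like the one below. The base image tag, the /usr/src/app path and port 4200 come from the walkthrough above; the exact file used in the demo may differ, so treat this as an illustration rather than the original file.

```dockerfile
# Pull the official Node.js base image, tag 6
FROM node:6

# Create the application directory inside the image and make it the working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Copy package.json first, clean the npm cache, then install the listed dependencies
COPY package.json /usr/src/app
RUN npm cache clean
RUN npm install

# Copy the rest of the application source (TypeScript configs, lint files, etc.)
COPY . /usr/src/app

# The Angular app is served on port 4200, and npm start does the hosting
EXPOSE 4200
CMD ["npm", "start"]
```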
The same instructions have been running on my Jenkins. So it says Step 1 of 9: it's pulling the node image, then it's creating a new directory, moving the working directory to it, and then copying the package.json. So each and every step is being executed one after the other, and if any of them fails, you would get a notification saying that the step failed and asking you to check your command. But anyway, since ours is a successful build, this is the ID of my image which was generated, right, and this is the tag that was added to it. The next command that's being run from the shell is the docker run command with the --rm flag and the -p flag, the same command which I explained earlier. And when this command is executed, it says that your Angular source is being hosted. This is the localhost port which I'm mentioning, on which I wanted it to run, and you can see the status here: it says zero percent, ten percent, and it keeps increasing, eleven percent and so on. It's a big process, because there are a lot of dependencies that get downloaded. In the meanwhile, as we saw in the Dockerfile that I was explaining, we get all our packages downloaded and installed, and the application is actually hosted. So it says webpack compiled successfully, right? This is the success message. If I now open localhost on port 4200, you will see that my Angular application is up and running. Right, so you can see that the application name is
movie rating system. And this is something I can tell you: this was the application which my team created for me, and this project is all about the top 250 movies that you have to watch before you die, right, some of the biggest blockbuster hits of Hollywood. So all those are present here, and you will get the Angular feel over here by looking at the different components. So the logout option
that you see here is a different component, right? If I log out, then I will not get to see the list of movies; only if you log in will you get to see that. And then you have the navigation bar, where you can switch to different tabs. You can go to the Edureka home tab, you can go to the about tab where again we have more details, and then if you log in successfully, you will get to see the movie list that we have. So let's just wait for it to log in and I
can show you the movie list that we have. Yeah, so we have a movie list here, right? So in the navigation bar, let me just click on this movie list and you can see the 250 movies that we chose, which have been the best ever Hollywood movies. So the number one movie that you have to watch is The Shawshank Redemption, right? And then we have movies like The Godfather, part 2 of The Godfather, and The Dark Knight, which again is my favorite movie. We have Schindler's List. We have a number of movies here which are favorites, of course, and we've created an application this way, and this is
a simple single-page web application that we created, and you can create all these things if you know how to work with node.js and if you know how to work with angularjs, right? Similarly, if I go to the Edureka tab, we have details about Edureka over here. So, you know, we believe in "tech up your skills, rediscover learning", so we have that, live classes and expert instruction. So this is the interface that we built in our application, and that's what I wanted to show you. And in the movie list again, of course we have the list of movies, and if you click on any of the movies, you can look at the details of that movie, as in when it was released, what the genre of the movie is, who the director, writer and actors are, and what ratings it has got. So whatever data we have here with respect to ratings and stars was basically taken from IMDb, right? So it is those IMDb ratings that we are using as the dataset
in today's session. We are going to discuss
about the two most popular DevOps tools, which are Jenkins and Docker, and we are going to see how these tools can be integrated to achieve a better
software delivery workflow. So first off let me run you
through today's agenda first. We are going to see
what exactly Jenkins is and how it works. Next, we're going to see how Docker solves the problem of inconsistent environments by containerizing the application. Once we're done with that, we'll briefly have a discussion on microservices, because in the hands-on part I'm going to deploy a microservice-based application by using Jenkins and Docker. Now, after you've got a brief idea about microservices, we're going to look at a use case and see how to solve a problem statement by using Jenkins and Docker. And finally, we're going
to move on to the Hands-On part where we will deploy a micro
service based application by using Docker and Jenkins. So guys, I hope you find the session
interesting and informative. Let's get started
with our first topic now before listing down
a few features of Jenkins. Let me tell you some fun facts
about Jenkins. Currently, there are over 24,000 companies which use Jenkins; to name some of them, there is Google, Tesla, Facebook and Netflix. Now there has to be a reason
why such reputed and successful companies make use of Jenkins. Now, let's discuss
a few key features and see why Jenkins is so important. All right. Now, the first feature is that it is an open source
freely available tool which is very easy to use. It has various features
like the build pipeline plug-in, which lets you graphically
visualize the output and apart from that. There is also a feature
known as user input which lets you interact. With Jenkins. All right. Now one major feature
of Jenkins is that it implements
continuous integration. Now what is continuous
integration? Every time a developer commits into a source control management system, that commit is continuously pulled, built and tested using Jenkins. Now, how does Jenkins do all of this? Jenkins has over 2,000 plugins which allow it to integrate with other tools like Docker, Git, Selenium, etc. So by integrating with other tools, it makes sure that the development process
an automation server which make sure that the software delivery cycle
is fully automated. Now, let's see
how Jenkins works. So here you can see there is
a group of developers committing the code
into the source code repository. Now every time a developer
makes a commit is stored in the source code repository. Now what Jenkins does is
every time a commit is made into the source code repository, Jenkins will pull that commit, build it, test it and deploy it by using plugins and other tools. All right, now not only is it used for continuous integration, it can also be used for continuous delivery and continuous deployment
with the help of plugins. So by integrating
with other tools, the application can be deployed
to a testing environment, where user acceptance tests and load testing are performed to check whether the application is production-ready, and this process is basically continuous delivery. Now, it can also make use
of plugins to continuously deploy the applications
to a live server. So here we saw how Jenkins can be used for continuous
integration, continuous delivery and continuous deployment by integrating with other tools. All right. Now, let's move on to what Docker is. Now, before we
discuss about Docker, let's compare virtualization
and containerization. Now the goal of virtualization and containerization is
to solve the problem of the code works on my machine, but it does not work
on the production. Now this problem happens because somewhere along
the line you might be on a different operating system. Now, let's say your machine is a Windows machine and you're pushing the code to a Linux server; this will usually result in an error, because Windows and Linux support different libraries and packages, and that's why your code works on the development server and not on the production server. All right.
to virtualization, every application is run on a virtual machine now
the virtual machine will basically let you import
a guest operating system on top of your host
operating system. Now this way you can run
different applications on the same machine. All right. Now you're wondering what is the problem
with virtualization? Now one major drawback
of virtualization is that running multiple
virtual machines on the same host operating system
will degrade the performance of the system now this is because the guest operating
system running on top of your host operating system
will have its own set of Kernel and set of libraries and dependencies which take up a lot of resources like the hard
disk processor and RAM and another drawback is that it takes time to boot up
which is very critical when it comes
to a real-time application. All right. To get rid of these drawbacks, containerization was introduced. Now, in containerization there is no guest operating system; instead the application will utilize the host
operating system itself. So basically every container is going to share
the host operating system and each container will have
its own application and application-specific
libraries and packages. All right. So within a container there
is going to be an application and the application
specific dependencies. I hope this is clear guys. Now that we've
discussed containerization. Let's see how Docker
uses containerization now Docker is basically
a containerization platform which runs applications within
different Docker containers. So over here you can see that there is
a host operating system on top of which there
is a Docker engine. Now this Docker engine will basically run container number one and container number two. Within these two containers are different applications along with their dependencies. Alright, so basically within a container the application is going to have its own dependencies installed, so it does not have to bother any other container. Okay. So basically there is process-level isolation that happens here. All right. Now there are three important
terminologies to remember when it comes to Docker. Now the first is the dockerfile now the dockerfile
basically contains the code which defines the application
dependencies and requirements. All right, and from the Dockerfile you're going to produce the Docker image, which contains all the dependencies such as the libraries and the packages of the application. Next is the Docker container. Now, every time a Docker image is executed, it runs as a Docker container. So basically a Docker container is a runtime instance of a Docker image.
So now let's look at a Docker use case. Now over here, you can see
that I've created a Docker file. Now within the dockerfile are basically defined the
dependencies of the application. Now, out of this Dockerfile a Docker image is created. So basically the libraries and the packages that the application needs are installed within the Docker image. Now, every time the Docker image is run, it runs as a Docker container. These Docker images
are pushed into a repository known as Docker Hub. Now this repository is very
similar to the git repository where in you're committing code
into the git repository in this case. You're going to
commit Docker images into the docker Hub repository. All right. Now you can either have
a private or public repository depending
on your requirements. Now, after the image is published to Docker Hub, the production team or the testing team can pull the Docker images onto their respective servers and then build as many containers as they want. All right. Now this ensures that a consistent environment is used throughout the software development cycle. Now, let's look at what microservices are. Now guys, I'm going to explain what microservices are, because we need
to deploy a micro service based application in our demo
just to take it up a notch. I've implemented microservices. Now first. Let's look at the monolithic
architecture now over here. Let me explain this with a Example now
on the screen you can see that there is an online
shopping application which has three services
customer service product service and card service. Now these services are defined within the application
as a single instance. So when I say single instance, it means that these three
services will share the same resources and databases, which makes them dependent on each other. Now, if they share resources, obviously they are dependent on each other. Right, now you must be wondering what's
wrong in this architecture. Now, let's say that the product
service stops working because of some problem now because the services are dependent on
each other the customer and the card service
will also stop functioning. So basically if one
service goes down, the entire application
is going to go down. All right. Now when it comes
to a micro service application the structure of the application
is defined in such a way that it forms a collection
of smaller services or microservices and
each service has its own database and resources. All right, so basically the customer microservice, product microservice and cart microservice will have their own database and their own resources, and therefore they're not going to be dependent on each other. All right, so they are basically independent,
autonomous microservices. Alright. Now, let's look at a few
advantages of microservices. Now, the First Advantage
is independent development. Now when it comes
to a monolithic application developing
the application takes time because each feature
has to be built one after the other so in the case of the Online
shopping example only after developing the customer
service the product service can be started. So if the customer service takes two weeks to build
then you have to wait until customer service
is completed and only then you can start building
the product service. All right, but when it comes
to a micro service architecture, each service is
developed independently, and so you can develop
customer service card service and product service parallely, which will save
up a lot of time. Alright, now the next advantage is independent deployment. Now, similar to independent development, each service in a microservice application can be deployed irrespective of whether the service before it was deployed. So each service can basically be deployed individually. Now, fault isolation: when it comes to a monolithic application, if one of the services stops working, then the entire application will shut down. But when it comes to a microservice architecture, the services are isolated
from each other. So in case anyone
service shuts down. There will be no effect
on any other service now. The next Advantage is
mixed technology stack now each micro service
can be developed on different technology. Now, for example, the customer service
can be built on Java and the product service can
be built on Python and so on. Alright, so basically you're
allowed to use mixed technology to build your microservices. The next is granular scaling
now granular scaling means that every service within an application can be scaled independently. Basically, the services are not dependent on each other; they can be developed and deployed at any point of time, irrespective of whether the previous service has been deployed or not. So guys, I hope you are clear
with the advantages now over here. We're going to compare how microservices
can be deployed by using virtual machines
and Docker containers. All right. Now when it comes
to a virtual machine now, let's say that we have
a micro service application which has five Services now
in order to deploy these five services on virtual machines, we will need 5 virtual machines. All right. Now each virtual machine will be for one microservice. Now, for example, if I allocate 2 GB RAM for each virtual machine, then five of these virtual machines will take up 10 GB RAM, and the microservices may not even require so much of resources. So we just end up
wasting these resources and at the same time you're
occupying too much disk space which will degrade
the system's performance. Now, let's see how Docker
containers deploy microservices. So instead of running
five virtual machines, we can just run
five Docker containers on one virtual machine now by doing this we're saving
up a lot of resources, but when it comes
to a Docker container, you don't have to
preallocate any Ram the docker container will just
utilize the resources that are needed. And another point
to remember here is that Docker containers
are light-weighted. They do not require additional
guest operating system instead. They just share
the host operating system. All right, so this makes
them very lightweight when compared to a virtual
machine Now let's move on to the use case now. Basically, we're going to try
and understand the problem with the help
of an analogy. Now over here, you can see that in the favorable environment the soil is fertile and the tree is watered on a regular basis, and as a result of this the tree grows properly. But when the tree grows in an unfit environment, where the required dependencies for growing a tree are not present, then the tree will die. All right, and similarly, when an application runs in an inconsistent environment which does not have the application dependencies, then the application will fail. All right guys. Now let's look
at the problem statement with a small example. Now, let's say that a developer is building an application using the LAMP stack. After the application is developed, it is sent for testing. Now, this application runs properly on the testing server, but when it is deployed to production, a feature or the entire application fails. This may happen because the Apache version of the LAMP stack is outdated on the production server. So due to the difference in the software versions on the production and development servers, the application fails. Now, in order to get rid of the inconsistent environment problem, we're going to deploy
environment problem. We're going to deploy
an application using Docker now Docker will make sure that the environment throughout the development cycle
is consistent now deploying a monolithic
application can cause many problems like for example, if one other feature of
the application stops working, then the entire application
will shut down. So for this reason we are going
to create a micro service based application. And build it on the Jenkins
server and finally use Docker to maintain
a consistent environment throughout the cycle. So over here you can see that there are four
micro services and for each microservices. I've built a Docker file. All right. So first let me discuss what each of these microservices
do. Now, the account service and the customer service are the main microservices, whereas the discovery and gateway services are supporting microservices. Now, the account service will basically
hold the account details of a customer and similarly the customer
service will have a list of Customer details
now Discovery service, which is the supporting
service will hold details of the services that are running on the application
and apart from that. It will also register every
service in the application. Now what a Gateway service
does is on receiving a client request. It will route
the client requests to the destination service now
to be more specific it will provide the IP address
of the destination service. Okay. So now that you know
how these microservices work. Let's move on to the next part
Now basically, these microservices are coded and their dependencies are put into a Dockerfile. For each of these Dockerfiles a Docker image is created by packaging the Docker image with the jar files. Now, how do you create a jar file? A jar file is created by running the mvn clean install command, which basically cleans up anything that was created by the previous build and runs the pom.xml file, which will download the needed dependencies. So whatever dependencies are needed are stored in this jar file. All right.
is packaged with the jar file then a darker image
is created for each of these micro services. So here we are going to use
Jenkins to automate all of these processes. So Jenkins is basically
automatically going to build and then push these Docker
images to Docker Hub. Now after the images
are pushed to dock a ham the quality assurance or the production team
can pull these images and build as many containers are to fit. All right. So basically over here we're going to create
Docker files for each of these. These micro services and then we're going to package
these doc of files along with the jar files and create a Docker image for
each of these micro Services. All right. Now after creating
the docker images, we're going to push these images to Docker Hub after which
the quality assurance or a production team
will pull these Docker images onto their respective servers
and build as many containers as they want. I hope this is clear. So now let's practically
Implement all of this. Alright guys, so I've logged
into Jenkins, and I've created four different jobs, one for each microservice. All right. Now, let me just show
you the configuration of one of these jobs. Now, let's go
to account service. Let's go to configure. So guys make sure that you enter
your GitHub repository here. So go to source code management
click on get and then enter your repository URL over here. Now, let me just show you
what we're doing here. Now within the build step. I've selected execute shell now. Let me just show you
how that's done. So it's simple just
go to add build step and click on execute shell. So when you click
on execute shell or command prompt will open like this and you can type
this code in there. Now, what I am doing here is: first I'm changing the directory to account-service, because I'm running the account service within this job. After that, I'm performing an mvn clean install, which I explained earlier. Once we're done with that, we're going to build a Docker image and then we're going to push it to your Docker Hub account. Now, these are the credentials of my Docker Hub account: this is the username of my Docker Hub account, and edureka-demo is the repository that I've created in my Docker Hub account. Then click on apply and save. Apart from the account service, I've built jobs for the other services as well.
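Spelled out, the Execute shell step being described here amounts to a handful of commands along these lines. The folder name, image tag and Docker Hub details below are placeholders based on the walkthrough, so adjust them to your own job.

```sh
# Sketch of the "Execute shell" build step for the account service job
cd account-service

# Clean the previous build and package the service into a jar (driven by pom.xml)
mvn clean install

# Build the Docker image with the "account" tag and push it to the edureka-demo repository
docker build -t <dockerhub-username>/edureka-demo:account .
docker login -u <dockerhub-username>
docker push <dockerhub-username>/edureka-demo:account
```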
Now, let me just show you the customer service job also. Let's go to configure. Within the source code management section, like I said earlier, enter your repository URL; after that, go to the build steps. Over here you can see that I'm changing the directory to customer service, then I'll perform mvn clean install. Next I'm going to build
the Docker image. Now, over here "customer" is basically the tag of the image. So whenever this image gets pushed to my Docker Hub, the tag is going to be customer. All right, and similarly for the account service the tag was account. Now, I've done the same thing
for all the other jobs. All right, click on apply and save. Now guys, in order to run these four jobs as one workflow, I've created a build pipeline. This pipeline will basically execute these four jobs in one workflow. Now, if you want to know
how to create a build pipeline, please refer the video
in my description box. I'm going to leave a link
where you can see how to create a build pipeline. All right. Now, let me just show you
my GitHub account now over here, you can see that I have account service
customer service Discovery service Gateway service. And also there's
Zipkin service now guys, this service basically
keeps a track of all the other services. So it's going to keep a track
of where the requests are going and how they are getting sent from the account service through the customer service. Now, within the account service, you can see that I have a Dockerfile, a Jenkinsfile and a pom.xml file. All right guys, now let's start
building an application. Just click on the run now
here you can see that account service
is getting executed. Now, let's individually go
to account service first. Let's click on account service. All right here you can see
that it's building this job. So basically here what we're going
to do is we're going to change the directory
to account service. After that, we're going
to perform an mvn clean install, and then we're going
to build an image and push that image
to Docker Hub. All right, so guys remember to provide
your Docker Hub credentials. Now this job has
successfully executed now after the job has executed is
going to trigger the next job, which is customer service. Now, let's look at the build
pipeline now over here. This is turned green because account services
completed execution now customer service is currently
running so Let's look at the build
in customer service. So the account service has completed execution. By the way, you can also check the output of the account service from here: scroll down and it says success. Now that the account service has finished, let's trigger the
customer service. So customer service
starts executing now now let's individually look at
customer service now over here, you can see that customer
services building now in this job. You're basically going
to change the directory to customer service. After that, we're going to perform an mvn clean install command, and then we're going to build
and push a Docker image to Docker Hub. All right. So once this is completed
the next job in the pipeline will get executed. Okay? Alright guys, so this
is successfully executed. Now, you can see that customer service
is also turned green, which means that it has
successfully finished building. Alright. Alright guys, so you can see that customer service
is completed execution. Now, let's trigger the build
of Discovery service. Now, let's look at the output
of Discovery service. So guys, you can see
the output from here itself. Let's click on console. All right. So within the discovery service
again, we're going to change the directory
to the Discovery service; after that, we're going to perform an mvn clean install, and once we're done with that, we're going to build an image and then we're going to push it to Docker Hub. Guys, make sure you have entered your Docker Hub credentials. Alright guys, so this has completed execution. You can see that it says success, and now it's triggering the next build, which is the Gateway service. So here you can see that the Gateway service has
started execution. Let's look at
the console output. All right, so similarly
in the Gateway service, first you're going to change the directory to the Gateway service; after that you're going to perform an mvn clean install command, and once you're done with that you're going to build an image and then push it to Docker Hub. So guys, the Gateway service has successfully completed execution. Now the build pipeline has fully turned green, which means that the entire workflow has completed execution. Now, let's go to our Docker
Hub account and see if all these images
got pushed to Docker Hub. All right, so I'm going to my Docker Hub now. Let's go to the edureka-demo repository. Over here, you can see the account, customer, discovery and gateway services; all four of these images were pushed to Docker Hub. All right. So with this we have come to the end of the demo. Now, after pushing these images to Docker Hub, any quality assurance team or any testing team can pull these images onto their respective servers, so they can easily deploy
this to production. Docker and node.js tutorial
So why use node.js with Docker? It speeds up the application deployment process and deployment becomes easier; there are a lot of things which you don't need to look at while deploying. If it runs locally in your Docker container, surely it will run on any other machine or any other server in a Docker container. Application portability increases: you're developing on Windows and deploying on Linux, and you don't need to care about all that; if it works in one container, it'll work in another container. It simplifies the version control process, promotes componentry, uses a very light footprint and puts less overhead on the app. Now, let's start with a simple node app. I'll do npm init and create an empty project, so we have an empty package.json file. We'll go ahead and install Express, so we'll try to create
a very simple hello world application using Express. So it will be a web
application Express is a very popular web framework
created on node. So now, if we open the package.json, we can see the name of the application and, mainly, in the dependencies we have express listed over here. Okay, we'll create our app.js file. That will be our app. Let's write our
application over here. First we'll import
the express module. We'll use the object of the express module to create our app object. We'll use the app object to listen; this is where we actually start our HTTP server and ask it to listen on a particular port. We can use any port number over here, say 3000; the commonly used port numbers are 3000, 8080 or 8888, basically any port that is open on your system. And finally we'll just create one route where the app will give a response when that route is hit in the browser. We'll just send "Hello from Docker".
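Written out, the small Express app being dictated here would look roughly like the file below; the port and response message match the walkthrough, while app.js is the file name created a moment ago (treat the exact code as a sketch).

```js
// app.js - minimal Express "hello" server, as described above
const express = require('express');
const app = express();

// Respond on the root route when it is hit in the browser
app.get('/', (req, res) => {
  res.send('Hello from Docker');
});

// Start the HTTP server on port 3000 (any open port works)
app.listen(3000, () => {
  console.log('Listening on 3000');
});
```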
Okay, I think that's all we need for a simple demo. We'll try and run this. Okay, it's listening on 3000, no errors. We'll open it in the browser. Okay: "Hello from Docker". We have it, great. So I'll close this now. Now, let's dockerize this application. So as you remember
the three basics of Docker dockerfile
Docker image, Docker container. We need a Dockerfile over here, so I create a Dockerfile; no need to give any extension, it's still a text file. We'll open it; this is where we will tell Docker what to do. Before moving on with the commands, this is the Docker Hub website, hub.docker.com. I suggest you please create
account on this website, or I guess when you go to install when you
go to download the docker, I think it will tell
you to create an account or Docker ID. So once you have an account
on hub.docker.com, you go to Explore, and this is where you see all the popular images on Docker Hub. So anytime you're working with any language platform, for example, let's say Node.js in our case, or PHP or .NET, or a database, let's say, like over here, Postgres or Couchbase, all these standard technologies have their own official Docker images already on Docker Hub. You don't need to create images for these from scratch, because these are all readily available to use. So in our case right now, we need to use the node image from Docker Hub. You just search for node. Okay, and yes, you can see we have node as an official image. Official image means this particular Docker image is created by the people who created node. Okay, like Mongo Express: this Docker image
is created by the people who actually create
the Mongo Express Library. Okay, and then obviously
there are verified Publishers, like for example, Microsoft and anybody
can upload a Docker image over here in the hub. Okay, and you can also
filter them by category, the kind of image that you want, the supported OS, supported architectures, etc. So for us, all we need is this node Docker image. So this is the image upon which
our image will be built and the container
would be working. Okay, so coming down
how to use this image. Okay. So you need to go
to this guide will open the setup page of node. Okay. So over here create
a Docker file in your node.js app project
specify the node base image with your desired version. This will be the first line
of our dockerfile. So this is the official
Docker image that we want to use and this is the version of that Docker image
not the version of node. You can have a look at all the supported
Docker image versions of node. I think 10 is good for us. I think it is the latest one. We don't actually
need to go and dig into which version supports what but this is good
for us for now. Okay, moving forward now. What do we want
to tell Docker to do? First we'll create a working directory for Docker. So this is where we are telling Docker to create an app directory for itself, where whatever work or things Docker needs to do for our application, Docker can put inside the app folder. Then we'll tell Docker to copy our package.json, and if you remember, the Dockerfile is inside our app folder. So this is inside our app folder, which means we don't need to give any paths for any files that we are referencing, and I can directly say copy package.json inside the app. Okay, then RUN npm install. So these are very easily understandable commands: copy package.json inside the app folder and then run npm install. This command will be run inside this folder of the Docker container, and what npm install does is install whatever packages are listed inside package.json; if you remember, we have express, so those packages will be installed. Okay, then we tell Docker to copy. The dot over here means the current directory the file is in, so this is our current directory, and we are telling Docker to copy everything from the current directory to the app folder, and then run the command node. So this is the command which we ourselves used to run the node app. Okay. So this is what Docker would run inside its container. And lastly, if you remember, we had a port number which we were using; for us it's 3000, so we need to expose that port number as well, okay. Yeah, so I think this is it, we are done now. We'll try and run this inside Docker. Okay.
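Putting those instructions together, a plausible sketch of this Dockerfile is shown below. The /app working directory and the node app.js start command are assumptions based on the description, not a verbatim copy of the demo file.

```dockerfile
# Use the official node image, version 10, as the base
FROM node:10

# Directory inside the image where Docker will do its work
WORKDIR /app

# Copy package.json first, then install the listed dependencies (express)
COPY package.json /app
RUN npm install

# Copy everything else from the current directory into /app
COPY . /app

# The app listens on port 3000, so expose it, and start it with node
EXPOSE 3000
CMD ["node", "app.js"]
```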
So first we need to build the Docker image. We only have a Dockerfile right now, so the next step is to build a Docker image from the Dockerfile. We need to give a name to the Docker image; I'll name it hello-docker. Okay, and don't miss the dot at the end, which just means the current directory we are in. So as you can see, these are the commands that are running one by one. It's going through all the steps: FROM node:10, WORKDIR, COPY package.json, RUN npm install, and so on. It might take a while over here, because it needs some time to download the packages listed in package.json. Okay, and then it went through everything fine. Now we have the image created, so we need to run the Docker app. The command
for that is Docker run. Also again, we do have
the docker documentation online, which you can refer any time. Okay, so any language
any platform technology, you learn always try and refer
to the official documentation, if you can like refer
from it study from it. That is the best always
another thing even in the terminal in the console. You know, you can always look at help it will list
out all the options that you can give
to the command. So these are the options
for the Run command not the docker command
the run command. Okay, all the options for the docker command itself would be under docker help. Okay. So these are all the sub-commands and options that you can use with Docker. So moving on, we're going to run the image with docker run. The -it means we want the Docker container
to run in an interactive shell. So if you look at the options, that's not required,
but it's good. Okay, so -i, interactive, keeps standard input open, and -t allocates a pseudo-TTY; TTY means a terminal. So, docker run. And we want to tell Docker which port we are exposing and which port we are using inside the application. So again, if you look at help, there is this -p option, publish: publish a container's port to the host. We'll be creating a Docker container which is running on the host; the host is our Windows operating system. We need to tell which port Docker is exposing and which port is being used by the app inside. This port 3000 over here and the other one may be different; it's not compulsory that they should be the same all the time. We can have 8000 over here and 3000 over here, and all we need to do is map them in the command, but over here they are the same. So, docker run. I'll show you what it basically does when the command runs. So this first 3000 is the port number we listed in the Dockerfile, the port exposed by Docker, and the second one is what our app is using. Okay, now our image name was hello-docker; let's run this.
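For reference, the build and run commands used in this part of the demo boil down to something like the following; the hello-docker name comes from the build step and the 8888-to-3000 mapping mirrors the port discussion above, so treat the exact flags as a sketch.

```sh
# Build the image from the Dockerfile in the current directory
docker build -t hello-docker .

# Run it interactively, publishing container port 3000 on host port 8888
docker run -it -p 8888:3000 hello-docker
```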
Okay, we have some error: it says the port is already allocated. Okay, we'll try another port; this may be because I was already doing some tests and running things earlier. Now, once I have changed the Dockerfile, I need to rebuild my image, so I run the build command again. Okay. Now I will run the image: docker run with the interactive flags, port 8888 on the host and 3000 inside the application, image hello-docker. And it is running now. Let's try and open it again. Okay, the site can't be reached; now, if you remember, we have this Docker Toolbox thing open and it has its own machine IP, the 192.168 address, and port number 3000. Okay, so I'm sorry, but I had another instance
running already on that Port that is why it did not allow
me to run that Docker image on that particular Port I
changed my Dockerfile port to 8888, and that is the one that is being exposed. So for us over here the port number should be 8888, and that is where I see the message "Hello from Docker", which is what we actually used. The reason why 3000 is also working is because there's another Docker instance running in the background which I hadn't closed, and it says hello world. So there's actually another Docker container already running on that port, and that is why it did not allow me to run the Docker container on this port. So as you can see, I already had one running which I forgot about, sorry about that. So our app is running right now on port 8888, and that is the port exposed by Docker to my operating system, which is Windows, but the port on which the app is running inside the Docker container is 3000. So the app is still running on 3000, but Docker is exposing our app on port 8888, and that is what we have mapped over here. And the -it means it's interactive:
right now this message what you're seeing is
actually from a console inside the docker container. This is not from our own CMD
or command line. Okay. So now I can press
control C to end this Okay. Yeah, so as you
can see it has ended. Okay, so I was able to give
it the command to end it and that came and went
to the console of the container. So I hope you enjoyed
this small demo. Okay. And yeah, this is what we actually did: basically create the Node.js app, create a Dockerfile, build the image and then execute it. Let's look into the topics
for today's session. So we'll start today's
session by understanding what a virtual machine is, and then I'll tell you the benefits of virtual machines. After understanding that, I'll tell you what Docker containers are, and then I'll tell you the benefits of Docker containers. After an introduction to virtual machines and Docker containers, I'll tell you the difference between Docker containers and virtual machines, and then the uses of them. So now let's get started with the first topic for today's session, that is: what is a virtual machine? A virtual machine is an emulation of a computer system. In simple terms, it makes it possible to run what appear to be many
separate computers on Hardware that is actually one computer
the operating systems and their applications
share Hardware resources from a single host server
or from a pool of host servers. Each virtual machine requires, its own underlying
operating system. And then the hardware
is virtualized. Not only this, but a hypervisor or virtual machine monitor is software, firmware or hardware that creates and runs
virtual machines. It sits between the hardware
and the virtual machine and is necessary
to virtualize the server since the Advent of affordable
virtualization technology, IT departments have embraced virtual machines as the way to lower costs and increase efficiencies. Now, with a note of this, let me tell you the benefits of virtual machines. The benefits of virtual machines are mainly that all the operating system resources are available to all the applications, they have established management and security tools, and not only this, but they're better known for security controls. Now, who are the popular virtual machine providers? Well, the popular virtual machine providers are VMware, KVM, VirtualBox, Xen and Hyper-V. So now that you've understood
what is the virtual machine? Let me tell you what
Docker containers are. So as we all know that Docker is the company
driving the container movement and the only container platform provider to address every application across the hybrid cloud. With containers, instead of virtualizing the underlying computer like a virtual machine, only the operating system is virtualized. Containers sit on top of a physical server, and each container shares the host operating system kernel and usually the binaries and libraries too. Sharing the operating system resources such as libraries significantly reduces the need to reproduce the operating system code and means that the server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light; they're only megabytes in size and they take just a few seconds to start. In contrast, virtual machines take minutes to run and are an order of magnitude larger than the equivalent container. All that the container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. What this means is that, in practice, you can put two to three times as many applications on a single server with containers as you can with a virtual
machine in addition to this with containers. You can create
a portable consistent operating environments for development
testing and deployment. So now that I've told
you about containers, let me tell you the Types
of containers so mainly there are two different
types of containers that is the Linux container
and the docker containers. So the Linux container is a Linux operating system-level
virtualization method for running multiple isolated
Linux systems on a single host, whereas Docker started as a project to build single-application Linux containers, introducing several changes to Linux containers that make containers more portable and flexible to use. At a high level, we can say that Docker is a Linux utility that can efficiently create,
ship and run containers. So now that I've told you the
different types of containers, let me tell you
the benefits of containers. So containers offer reduced
it management resources. They reduce the size
of the snapshots. They're used in quicker spinning
of apps and they also make sure that the security updates
are reduced and simplified and they also make sure that there is less code
to transfer migrate and upload workloads. Now who are the popular
container providers? Well, the popular
container providers are Linux Containers, Docker and Windows Server Containers. So now that I've told
you individually what a container is what
a virtual machine is and how do these two work now? Let me show you
the major differences between Docker containers
and virtual machines. Well, the major
difference is come with operating support security
portability and performance. So let's discuss each one of these terms one by one and
let's know the differences between both of them. So let's start with
the operating system. Support the basic architecture
of Docker containers and virtual machines differ in their operating system
supports containers are hosted in a single physical server
with the host operating system, which is shared among them. But the virtual machines on the other hand have
a host operating system and an individual
guest operating system inside each virtual
machine irrespective of the host operating system. The guest operating system
can be anything like it can be Linux windows or
any other our operating system. Now the docker containers
are suited for situations where you want to run
multiple applications over a single
operating system kernel, but if you have
applications or servers that need to run on different
operating system flavors, then virtual machines are required sharing
the host operating system between the containers
make them very light and helps them to boot up
in just a few seconds. Hence the overhead to manage
the container system is very low compared to that
of Virtual machines now, let's move on to
the second difference, that is security. In Docker, since the host kernel is shared among the containers, the container technology has access to the kernel subsystems, as a result of which a single vulnerable application can compromise the entire host server. Providing root access to the applications and running them with superuser privileges is therefore not recommended in Docker containers because of these security issues. On the other hand, virtual machines are unique instances with their own kernel and security features; they can therefore run applications that need more privileges and security. Now, moving on
to the third difference that is portability. Docker containers are
self-contained packages that can run
the required application since they do not have a
separate guest operating system. They can be easily ported
across different platforms. The containers can be started and stopped in a matter of a few seconds compared to VMs, due to the lightweight architecture. This makes it easy
to deploy Docker containers quickly in servers on the other hand virtual machines are
isolated server instances with their own operating system. They cannot be ported
across multiple platforms without incurring compatibility
issues for development purposes where the applications have
to be developed and tested in different platforms. Docker containers are
the ideal choice now, let's move on
to the final difference, that is performance. Docker and virtual machines are intended for different purposes, so it's not fair to measure their performance equally, but the lightweight architecture makes Docker containers less resource-intensive than virtual machines, as a result of which containers can start up very fast compared to virtual machines. Also, the resource usage varies between the two: in containers, the resource usage such as CPU, memory and input/output varies with the load of traffic on it, unlike the case of virtual machines; there is no need to allocate resources permanently to containers. Scaling up and duplicating containers is also an easy task compared to that of virtual machines, as there is no need to install an operating system in them. So now that I've told
you the differences between Docker containers
and virtual machines, let me show you
a real life case study of how Docker containers And virtual machines
can complement each other. So all of us know PayPal, right? So PayPal provides
online payment solutions through account balances, bank accounts, credit cards or promotional financing without sharing the financial information. Today PayPal is leveraging OpenStack for its private cloud and runs more than 1 lakh virtual machines. Now, one of the biggest desires
of PayPal's business was to modernize their data
center infrastructures making it more on demand. Improving its security
meeting compliance regulations and also making
everything cost efficient. So they wanted to refactor
the existing Java and C++ legacy applications by dockerizing them and deploying them as containers. This called for a technology that provides a distributed application deployment architecture and can manage workloads, but must also be deployable in both private and public cloud environments. So PayPal uses Docker's commercial solutions to enable them
to not only provide gains for the developers in terms
of productivity and Agility but also for
the infrastructure teams in the form of cost efficiency
and enterprise-grade security. The tools being used
in production today include talk of commercially supported
engines Docker trusted registry and as well as talk
a compose the company believes that containers and virtual
machines can co-exist and thus they combined these two technologies
leveraging Docker containers and Two machines together gave PayPal the ability
to run more applications while reducing the number
of total virtual machines and also optimizing their infrastructure this
also allowed PayPal to spin up new applications
much more quickly and also on an as-needed basis since containers are
more lightweight and instantiate in a fraction of second while
virtual machines take minutes. They can roll out
a new application instance quickly, patch up an existing application, or even add capacity to compensate for peak times within the year. So this helped PayPal to drive innovation and also outpace the competition. So guys, that's how the company gained
the ability to scale quickly and deploy faster with the help
of Docker containers and virtual machines. So now let me summarize
the complete session in a minute for you. So Docker is
a containerization app that isolates applications
at the software level if a virtual This a house the docker container
is a hotel room. If you do not like the setup, then you can always
change the hotel room as it is much easier
than changing a house, isn't it? So similarly as a hotel
has multiple rooms sharing the same underlying
infrastructure doctor offers the ability to run
multiple applications with the same
host operating system and sharing underlying
resources now, it is often observed
that some people believe that Docker is better than a virtual machine, but we need to understand that, while having a lot of functionality and being more efficient in running applications, Docker cannot replace virtual machines. Both containers and virtual machines have their own benefits and drawbacks, and the ultimate decision will depend on your specific needs. But let me also tell you some general rules of thumb: virtual machines are a better choice for running applications that require all of the operating system resources and functionalities, when you need to run
multiple applications on servers or have a wide variety
of operating systems to manage. Whereas the containers
are a better choice when your biggest priority
is to maximize the number of applications running
on a minimal number of servers. But in many situations the ideal setup is likely to include both: with the current state of virtualization technology, the flexibility of virtual machines and the minimal resource requirements of containers work together to provide environments with maximum functionality. These will be the parameters
these two tools against insulation cluster configuration
GUI scalability auto-scaling load balancing updates and rollbacks data volumes and
finally logging and monitoring. Okay. Now before I get started
with the difference, let me just go back a little bit and give you a brief about
communities and Doc is warm. Okay now first of all, At ease and Dockers warm are
both container orchestration tools orchestration
is basically needed when you have multiple
containers in production and you will have to manage each
of these containers and that's why you need these tools. Okay, Cuban eighties was first
of all created by Google. Okay, and then they donated
the whole project to the cloud native Computing foundation. And yeah now it's a part of
the CN CF open source project. Okay, and since communities
was Google's brainchild. It has a huge developer
community and a lot of people who are contributing. Two communities. So if you have any errors at any point of time when
you're working with kubernetes, then you can straight away
put that error on github.com or stackoverflow and you
will definitely have solutions to those errors. So that's the thing
about communities and we consider Cuban at ease to be more preferable
for a complex architecture because that's when the whole
power of Cuban and is comes out. So communities is really strong. Okay, if you're going to use a very simple architecture
may be an application which has very few services and
which needs very few containers. Then you're not going to really
see the power of Been at ease when you have like
hundreds of thousands of containers and Broad that's when kubernetes
is actually beneficial and that's why you see
the difference between Cuban IDs and Dockers Wang, right? So Doc is form on the other
hand is not that good when you have to deal
with hundreds of containers. Okay, so functionality wise they
are pretty much head-to-head with each other. Okay. So with both you
can set up your cluster, but yeah, dr. Swarm is little easier
and it's more preferable when you have less
number of containers. Okay, but whatever it is
if you are dealing with fraud environment Then
Cuban at ease is your solution because Cuban artists will
ensure your classes strength in prod a little more at least when you compare
it to dock a swamp. Okay, and yeah the doctors from Community is unfortunately
not as big as the communities because Google is basically
bigger than darker and darker swarm is again owned by
and Marion by darker ink, so that is the deal
with kubernetes and doctors from all right, but never mind the fact
that the base continues which are used for these
are again Docker containers. So at the end of the day Docker
is definitely going to be a part of communities
and as part of dr. It's just what you do
after your containers. That's what matters
the container management part. Okay. So anyway, I have given
you a good brief about these two tools. Okay. Now, let's get down to the functional differences
between these two. Let's start with insulation and cluster configuration now
for setting up your cluster with kubernetes is going to be really
challenging in the beginning because you will have
to set up multiple things. You have to first bring
up your to Cluster. Then you have to set
up the storage volume for your cluster and then you have to set up your environment
and then you have to bring up. Up your dashboard. You have to bring up
your Port Network. And when you bring
up your dashboard, you have to do the cluster role
binding and all these things. Okay, and then finally you can get your node
to join your cluster. Okay, but with Docker swamp, it's pretty simple
and straightforward. You need to run one command
to bring up the cluster and one command add the node end for it to join the cluster and to simple commands
and your classes running. You can straight away
get started with deploying. Okay. So this is where
kubernetes fall short. It's a little more complicated, but it's worth the effort
because the classes And that you get with kubernetes
is way more stronger than doctors warm. Okay, so when it comes
Okay, so when it comes to failure mechanisms and recovery, Kubernetes is a little faster. And in fact Kubernetes will give you more security compared to Docker Swarm, because your containers are more likely to fail with Swarm than with Kubernetes. It's not like I'm saying that your containers will definitely fail, but if at all they do fail, there are more chances of that happening with Swarm than with Kubernetes. Okay, so that's about cluster strength. And if your prod environment really matters to you, and if you have a business which is basically running on these containers, then I would say your preference should be Kubernetes, because at the end of the day your business and your containers running in prod are more important, so the cluster is more important, and that's why I'd go with Kubernetes.
Now moving on to the next parameter, we have the GUI. Kubernetes wins over here also, because Kubernetes provides a dashboard through which we can control the cluster. Not just control it; we can also figure out the status of the cluster: how many pods are running in your cluster, how many deployments there are, how many containers and services are running, and which are your nodes. You will have all these details in a very simple fashion. Okay, it's not like you don't get all these things with Docker Swarm. You get them with Docker Swarm also, but you don't have a GUI over there, one dashboard where you can visually see everything. So you can use the CLI with Docker Swarm and you can use the CLI with Kubernetes also, but it's just that Kubernetes provides a dashboard, which is a little better and, to our eyes, a little easier to understand. When you see graphs, when you see that all your deployments are a hundred percent healthy, when you see something like that, you will relate to it a lot more and you will like it a lot more. So that's the additional functionality which you get with Kubernetes, and that's where Kubernetes wins over here. And I also want to add another point: with your Kubernetes dashboard, you can easily scale up your containers, and you can also control your deployments and make new deployments in a very simple fashion. So even non-technical people can use Kubernetes. But I mean, if you are a non-technical person, then what are you doing with containers, right? That's what veterans would say. Seasoned developers would say that if you're not technical enough to deal with containers, then you don't deserve to be here. So that is one point which can defend Docker Swarm, but it does not change the fact that Kubernetes makes our life easier.
Now moving on to the third one, which is scalability. Both Kubernetes and Docker Swarm are very good at scaling up. That is the whole point of these tools; when we talk about orchestration, this is the biggest benefit. Kubernetes can scale up very fast, and Swarm can also scale up very fast, but there's a saying that Swarm is five times faster than Kubernetes when it comes to scaling up. So on this point I think Swarm just edges out Kubernetes to victory, right? But yeah, whatever it is, it's scaling up that matters, and since both can do it, well and good.
The next point is auto-scaling. Now if I were a salesman, I would use this whole point of auto-scaling as my sales pitch, because auto-scaling is all about intelligence. With Kubernetes, your cluster will always be analyzing your server traffic, and whenever there's a certain increase in your traffic, Kubernetes will automatically scale up your number of containers. And when the traffic reduces, the number of containers will automatically be scaled down. So there's no manual intervention whatsoever; I don't need to barge in. If there's a weekend coming up and I'm pretty sure that my website is going to get a lot of traffic over the Saturday and Sunday, then I don't have to manually configure my deployments for the weekend; Kubernetes will automatically do that for me. With Docker Swarm, that is a major drawback, because you cannot do auto-scaling; you have to do it manually. Okay, you can do scaling, and it's not that scaling is a big deal, but during emergency situations it's really important. Kubernetes will automatically analyze that, okay, you're getting a lot of traffic today, and it will automatically scale it up for you. But Swarm is a little different: if there's an emergency and your containers run out of capacity for the requests they can serve, then they cannot do anything about it on their own; worst case scenario, they will just die out. So this is where Kubernetes wins during these emergency situations, because auto-scaling is not possible with Docker Swarm.
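Just to give you a rough idea of how that looks in practice, here is a minimal sketch; it assumes the metrics-server add-on is installed and that you already have a deployment called hello-world, which is just the name from my later demo:

kubectl autoscale deployment hello-world --cpu-percent=80 --min=3 --max=10   # keep 3 to 10 replicas, targeting 80% CPU
kubectl get hpa                                                              # watch the current utilisation and replica count

With Docker Swarm you would instead scale by hand, for example with docker service scale <service>=10, which is exactly the manual step Kubernetes saves you.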
Now moving on to my next point, which is load balancing. With Kubernetes, at times you will have to manually configure these load balancing options; with Docker Swarm you don't need to do that, because it's done automatically. Now, the reason you sometimes have to do it with Kubernetes is that in Kubernetes you will have multiple nodes, inside each node you will have multiple pods, and inside these pods you will have your containers. If your service is basically spanning multiple containers running in different pods, then there's this concept of load balancing which you may have to configure yourself, because a pod lets all the containers inside it talk to each other, but when it comes to managing your load between these pods, that's where the challenge comes, especially when the pods are on different nodes. So you will face times when you have to manually configure this load balancing, and you will have small issues. It's not that it's going to be a major thing; you can still deal with it. But Swarm wins here, because you have no pods over there: you have a swarm cluster in which there are containers, so these containers can be easily discovered by the others. They use IP addresses, they can easily just discover each other, and you're all good.
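To make that a bit more concrete, the usual way you spread traffic across pods in Kubernetes is by putting a Service in front of the deployment. This is only an indicative sketch; the deployment name and port numbers here are made up for illustration:

kubectl expose deployment hello-world --port=80 --target-port=8080   # a ClusterIP Service that load-balances across all hello-world pods
kubectl get svc hello-world                                          # the single virtual IP your other services talk to

In Swarm you normally don't need this extra step, because every service already gets a virtual IP and ingress load balancing out of the box.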
So that's that point, and now coming to the sixth point, which is rolling updates and rollbacks. These two are very important aspects and some of the best features of both tools. Now rolling updates are basically needed for any application: whether a software application is using Kubernetes or not, whether it's using Docker Swarm or not, any application needs updates. Any software application, any web application, definitely needs updates to its functionality. Now if your application is containerized, then at no point do you need to bring down your containers to make the updates; the different containers and pods can be progressively given the updates. So in Kubernetes we have the concept of pods, and inside the pods we have multiple containers, right? What happens in Kubernetes is that these rolling updates are gradually sent to each of the pods as a whole, so all the containers inside a pod will be gradually given the rolling update. With Docker Swarm you have the same thing, but it's a little different, because you don't have pods; the rolling updates are gradually sent to all the containers one after the other. That's the only difference: rolling updates are gradually sent to the containers in both Kubernetes and Docker Swarm, but in Kubernetes they go out pod by pod, to all the containers within the same pod. And when it comes to rollbacks, again both provide the same thing; you can roll back your changes, and you have that functionality in both Kubernetes and Docker Swarm. The difference is how much of it is handled for you: with Kubernetes, if a rolling update is failing, the previously stable pods are kept running until the new ones come up healthy, so your application effectively stays on the last stable state and you can roll back from there. With Swarm, the swarm manager will not handle that for you automatically; it gives you the option to roll back, but you have to trigger it yourself. That is the only difference between these two, so I think over here also Kubernetes slightly beats Docker Swarm; it just nudges ahead of it.
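If you want to see what that looks like on the command line, here is a rough sketch for both tools; the deployment, service and image names and tags are only placeholders from my example:

# Kubernetes: roll a new image out gradually, watch it, and roll back if needed
kubectl set image deployment/hello-world hello-world=hello-world:v2
kubectl rollout status deployment/hello-world
kubectl rollout undo deployment/hello-world

# Docker Swarm: the same idea per service, but the rollback is a step you trigger yourself
docker service update --image hello-world:v2 --update-parallelism 1 --update-delay 10s helloworld
docker service update --rollback helloworld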
And now coming to the seventh point, which is data volumes. Data volumes are a very key concept, because you can have a shared volume space for different containers. The conceptual difference between the two is that in Kubernetes you have multiple pods, and only the containers inside one particular pod can have a shared volume. With Docker Swarm, since there are no pods, pretty much any container can share the volume space with other containers. That is the only difference, so I don't think I would go ahead and rate these two on this point; it's just a functional and conceptual difference between the two tools.
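Here's a small sketch of what I mean, assuming you have kubectl access to a cluster; the pod, container and volume names are made up. Two containers in the same pod share an emptyDir volume, which is exactly the per-pod sharing I was describing:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}            # lives as long as the pod; visible to every container in it
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello from writer > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
EOF

On the Swarm side, any service can mount a named volume, for example docker service create --name web --mount type=volume,source=shared-data,target=/data nginx, which is why the sharing there is not limited to a pod boundary.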
Now moving on to the last point, logging and monitoring. With Kubernetes you have built-in tooling which does the logging for you, and the monitoring also happens out of the box. There is a particular place where you can go and read your logs and find out where your errors are, why your deployment failed, why something happened; you can get all those details because Kubernetes does the logging automatically. And the monitoring part is used by your master to analyze what your cluster state is at all times: what is the status of all your nodes, what is the status of the different pods on the nodes, are all the containers up and running, are the containers responsive. Kubernetes uses monitoring for all these purposes. But with Docker Swarm there is no built-in tool, and you have to use third-party tools, something like ELK, right? I've done that before: I've set up ELK to work with my Docker Swarm, and it pretty much does the same thing. With ELK you can collect all the logs, you can figure out where the error is, and even monitoring can be done. But it's just that ELK is, again, not a very easy tool to set up and use, and that's the extra step which you have to do with Docker Swarm. So I think that's pretty much the end of the functional and conceptual differences between these two tools, Kubernetes and Docker Swarm.
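A few commands to make that difference tangible; the pod and service names here are just placeholders:

kubectl logs <pod-name>            # container logs, straight from the cluster
kubectl describe pod <pod-name>    # events, restarts, and why a rollout is stuck
kubectl get events                 # the cluster-wide event stream the master keeps
docker service logs <service>      # Swarm can show service logs too, but for dashboards,
                                   # searching and alerting you bolt on a stack like ELK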
Now that is the end of the theory part. Over here I want to open up Docker Swarm and Kubernetes and show you a demonstration, to give you a feel of how they work. So for that, let me open up my VMs where these two are installed, and let me start the demo with Docker Swarm first. Can everybody see my VM over here? I have two virtual machines: a master and a slave, but when it comes to Docker Swarm they're not called master and slave; they are rather called manager and worker. So my manager is the manager of the cluster, and my worker would be the one executing the services. All right. So like I said, with Docker Swarm it's very easy to bring up the cluster, and the command to do that is very simple. You just run docker swarm init with the advertise-addr flag and specify the IP address of your master. In my case, my master's, or rather my manager's, IP address is this one. So if I just hit enter, then everything is up and ready and I get the joining token. If I execute this command at my node, then my cluster would be ready and the node would have joined it. So I'm going to copy this, go to my worker, and paste it here. If I hit enter, it says that this node has joined the swarm as a worker. Brilliant, right? So that's as quick as it is. And on your master you got this message saying that to add a manager to the swarm you should use this other command, but that's only if you want another node to join as a master, or rather a manager; otherwise this command is good enough. So your cluster has been initialized, and in just a few seconds, right? It's as simple as that.
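For reference, the whole cluster bring-up is essentially these commands; the IP address is just whatever your manager's address happens to be:

docker swarm init --advertise-addr <manager-ip>               # run on the manager; prints a join token
docker swarm join --token <worker-token> <manager-ip>:2377    # run on the worker, exactly as printed
docker swarm join-token manager                               # only if you want the token for joining another manager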
If you want to deploy an application, you can go ahead and straight away do that. Let me show you a simple application; let me run a hello-world. I can use the command docker service create, give the name as helloworld, and the image to be used is the hello-world image. So I will basically get a container with the name helloworld, and it could be running on both the manager and my node. If I want one replica of that running on both my manager and my node, there's another flag I can use to set that, and that is the mode flag: I can say mode equal to global. With this I will have one container running on my manager end and one container on my node. And this is where the difference is with respect to Kubernetes, because in Kubernetes only your nodes will run the services; your manager, or your master, will not execute any service, it will only manage the whole deployment. Okay, there was a spelling mistake in my command, so let me just go here and correct it to say service. Now let's just wait for a few seconds until my container is up and running. Great. So as you can see, my service has been created, and this is its ID.
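So the command I ran is essentially this; the names are just the ones from my demo:

docker service create --name helloworld hello-world
# or, with the mode flag, one task on every node in the swarm (manager included):
docker service create --name helloworld --mode global hello-world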
If I want to verify my cluster state and check if my services are ready, I can run a couple of commands. I can do a docker node ps; in this way I'll know how many containers came up and on which nodes they were executed. It says on my manager there were four of these that started. Okay, and of course it's a hello-world container, so it basically shuts down immediately. And I can do a docker node ls to identify how many members there are in my cluster. With this you can see there are IDs for two different nodes: one is my worker and the other is my manager, right? And that's how simple it is with Docker Swarm; with Kubernetes it's a little more involved. So I think by now you've got a good idea of how simple Docker Swarm is.
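Those two verification commands, for reference:

docker node ps     # the tasks that ran on the current node and their state
docker node ls     # every member of the swarm, its availability, and which one is the manager (Leader)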
Besides, I can also check the service status by running a few commands, right? There is docker service ls; this basically lists the services and the containers behind them. It says I have a helloworld service which is running, it's in replicated mode, and there is one replica of this particular container. I can also drill down further: I can do a docker service ps and give the name, helloworld, and I get the details about my containers. Initially there was one container that started on my worker and then got shut down, then one more on my node, and then a couple of them on my manager. So these are the details, and this is how you drill down, right? You specify that you need one particular service, that is the helloworld service, with this many replicas running on your manager and your worker, and then the containers are deployed and running, and that's what you can see here. So that is about Docker Swarm.
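And the service-level commands I just used look like this:

docker service ls               # every service, its mode and how many replicas are running
docker service ps helloworld    # the individual tasks behind one service: which node, current state, errors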
Now let me bring this VM down and bring up my Kubernetes VMs to show you a demonstration of Kubernetes. Well, I hope the concept was clear, but in case you have any doubt, I would request you to go to Edureka's YouTube channel and watch my video on Docker Swarm, where I've shown load balancing between the different nodes and also how to ensure high availability. So in the meanwhile, let me just shut this down. To bring down the cluster, I can first get my nodes to leave the cluster and then shut down the cluster itself, and I can do that with a couple of simple commands. But before I do that, let me stop the service that I created: docker service rm, and helloworld was the name. I want to stop the service, and now there are not going to be any replicas of this service or this container. Now let me go back to my node, and here let me leave the cluster, the swarm cluster. The simple command for that is docker swarm leave. If I hit the command at the node, this node will leave the swarm, and if I run the docker node ls command over here on the manager, the entry for my node will be gone; it will have only one entry, that is my manager. But anyway, if I want to bring the cluster to an end, I can get even my manager to leave the cluster, and I can execute the same command for that: docker swarm leave. Since it's the last manager, let me use the force flag. Okay, and now there are no nodes left in the swarm. So that's it for Docker Swarm; let me just close this terminal.
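The teardown, summed up; run each command on the machine mentioned in the comment:

docker service rm helloworld    # on the manager: remove the service so no replicas get rescheduled
docker swarm leave              # on the worker: leave the swarm
docker swarm leave --force      # on the manager: force is needed because it's the last manager
docker node ls                  # would now report nothing to list, since the swarm is gone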
Now let me go to my next demo and open up the VMs which have Kubernetes running. So this is my Kubernetes master and this is my Kubernetes node. Now, I'm not showing you the entire setup over here, because it's fairly involved; I don't want to go ahead and execute all the ten or fifteen commands and show you the entire setup process, because it's going to take a lot of time. Rather, if you want to know how to do the cluster setup with Kubernetes, you can go and see the blog which I have written, and there's also another video on how to set up the Kubernetes cluster. Those two should help, the YouTube video and the blog, and the links to both of them are in the description below. So let me just straight away get started and tell you what I have already done. To set up the cluster, there are a lot of things which had to be done, starting from here. Basically I started with my kubeadm init command over here; I specified which pod network I'm using (I'm going to use the Calico pod network, so I've specified that) and specified the address on which the other nodes are to subscribe. Then there are various commands which I've run with respect to setting up the environment, these ones, and then I basically set up the Calico pod network over here, made room for setting up the dashboard, and brought my proxy up. Then, from the second terminal, I've done a few more things. I have brought up my dashboard account: I created a service account for my dashboard, and then I've done the cluster role binding, saying that for this dashboard I am the admin, give me the admin privileges. I've done that here, and then I obtained the key, which is the authentication token for accessing the dashboard. So these are the many other commands which need to be executed from the master's end before your node joins. After that I went to my node and executed the one command which I was asked to execute; this was the join command generated at my master, so I took that and pasted it here, and then my node successfully joined the cluster. So this was the entire process, and you can go to my blog and my videos and go through the whole thing.
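For completeness, here is roughly the sequence I'm describing; treat it as a sketch rather than copy-paste material, because the exact Calico and dashboard manifest URLs and the token command depend on the versions you install:

# on the master
kubeadm init --apiserver-advertise-address=<master-ip> --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f <calico-manifest>.yaml                 # the Calico pod network
kubectl apply -f <dashboard-recommended>.yaml           # the Kubernetes dashboard
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# obtain the dashboard login token for that service account (the exact command varies by version)
kubectl proxy                                           # serve the dashboard locally

# on the node: paste the join command that kubeadm init printed
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>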
I have also brought up the dashboard, right? So let me just straight away go to the dashboard and show you how simple and easy it is to make any deployment, because that is the whole advantage of the dashboard. Docker Swarm may be easier to set up, that's what I showed you, but with Kubernetes the experience from here on is much better. This is the Kubernetes dashboard that comes up, and it comes up on this port number. All right, and if you want to start your deployment, it's very, very simple. You can just go to this create button over here, click on Create, and then you have options: you can either write your JSON spec, or upload the JSON file which you have already written, or click on this "create an app" option over here. It's basically click-through functionality, and here you can just put in your app name. So let's say I want to deploy the same hello-world app. I'll give the name over here, hello-world, and the base image which I want to use for this is going to be the hello-world image, present in my Docker Hub registry or the Google registry. So hello-world is the image, and let's say I want three pods initially. Okay, and let me just straight away click deploy. And with that, your application is deployed. Similarly, if there's anything else you want containerized, it's just as simple: if it's going to be an NGINX server or a Tomcat or an Apache server which you want to deploy, you can just choose the base image and hit the deploy button, and yes, it's straight away deployed. And you'll get something like this, which shows the overview and the status of your cluster. As you can see, my deployments, my pods and my replica sets are all healthy, a hundred percent, right? So this is my deployment; it says that two out of three pods are running. Let's just give it a few moments and then all three pods will be up and running; you've got to give it a few seconds, because it's just 19 seconds old. And yeah, these are the three pods; the third one is coming up. It says terminated because the hello-world container is a container that exits on its own, right? It prints hello world and exits; that's what happens here. Same thing with the replica sets: I have mentioned three pods, which means that at all times there will be three pods running, and my replica set, the replication controller, is the one that will control those pods. So yeah, that is pretty much it, and that's how easy and simple it is to work with Kubernetes.
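And if you prefer the CLI over the dashboard, the same deployment is roughly these commands; hello-world is just the demo name again:

kubectl create deployment hello-world --image=hello-world
kubectl scale deployment hello-world --replicas=3
kubectl get deployments        # 3 desired, and how many are currently available
kubectl get pods               # the three pods backing the deployment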
So you can have your own opinion; you can choose whether you want to use Kubernetes or Docker Swarm. My take on this is: if you have a very simple application, then you would rather be better off with Docker Swarm, and the same goes if you are dealing with very few clusters. But if you're dealing with a real prod environment, then I would say Kubernetes is the better option. And also when the containers are many in number, when you have a lot of containers, it's easier to work with Kubernetes. You can just specify the configuration: you can say that I need this many containers running at all times, I need this many nodes connected to my cluster, and I will have this many pods running on these nodes, and whatever you say will be followed by Kubernetes.
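That "you just specify it and Kubernetes follows it" idea is the declarative model, and a minimal sketch of such a spec looks like this; the names and replica count are only examples:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3                  # "I need this many pods running at all times"
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hello-world
EOF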
So that is why Kubernetes is better in my opinion, and you can have your own view. Whatever your choice between the two, I would like to hear your opinion; please put it in the comment box, and if you have any doubts you can let me know.
All right, so before I end this video I would like to talk about the market share between Kubernetes and Docker Swarm. When it comes to new articles or blogs written on these two tools, Kubernetes beats Docker Swarm nine to one: for every nine blogs written on Kubernetes, there's one on Docker Swarm. So that is the differential, 90 percent to 10 percent. Same thing with web searches, right? Kubernetes gets way more searches, roughly 90% of them, compared to Docker Swarm's 10%, and the same goes for GitHub stars and GitHub commits. So Kubernetes pretty much wins on every count here; it's way more popular, it's way more used, and it's probably more comfortable, because if you have any problem at any point of time, you have a huge community which will reply to whatever your errors are. So if you want simplicity, I would say go for Docker Swarm, but if you want cluster strength and you want to ensure high availability, especially in your prod, then Kubernetes is your tool to go for. However, both are good, and they are pretty much neck and neck on all these grounds. This was a statistic which I picked up from Platform9, which is a fairly well-known tech company; they write about these things.
So I think on that note, I would like to conclude today's session, and I'd like to thank you for watching the video to the very end. Do let us know what topics you want us to make more videos on, and it would be a pleasure for us to do the same. With that, I'd like to take my leave. Thank you and happy learning. I hope you have enjoyed listening to this video. Please be kind enough to like it, and you can comment any of your doubts and queries and we will reply to them at the earliest. Do look out for more videos in our playlist, and subscribe to the Edureka channel to learn more. Happy learning.