Welcome everyone to the Edureka YouTube channel. My name is Saurabh, and today I'll be taking you through this entire session on the DevOps full course. We have designed this crash course in such a way that it starts from the basic topics and also covers the advanced ones, so we'll be covering all the stages and tools involved in DevOps. This is how the modules are structured. We'll start by understanding what DevOps means and what the methodology before DevOps was, right? All those questions will be answered in the first module. Then we are going to talk about what Git is, how it works, what Version Control means, and how we can achieve it with the help of Git; that session will be taken by Ms. Reyshma. Post that, I'll be teaching you how you can create really cool pipelines with the help of Jenkins, Maven, Git, and GitHub. After that, I'll be talking about the most famous software containerization platform, which is Docker, and post that, Vardhan will be teaching you how you can use Kubernetes for orchestrating Docker container clusters. After that, we are going to talk about configuration management using Ansible and Puppet. Both of these tools are really famous in the market: Ansible is pretty trending, whereas Puppet is very mature, as it has been in the market since 2005. Finally, I'll be teaching you how you can perform continuous monitoring with the help of Nagios. So let's start the session, guys.
We'll begin by understanding what DevOps is; so this is what we'll be discussing today. We'll begin by understanding why we need DevOps: everything exists for a reason, so we'll try to figure out that reason. We are going to see the various limitations of the traditional software delivery methodologies and how DevOps overcomes all of those limitations. Then we are going to focus on what exactly the DevOps methodology is and what the various stages and tools involved in DevOps are. And then finally, in the hands-on part, I will tell you how you can create a Docker image, how you can build it, test it, and even push it onto Docker Hub in an automated fashion using Jenkins. So I hope you are all clear with the agenda. Let's move forward, guys, and we'll see why we need DevOps. So guys, let's start with the waterfall model.
Now, before DevOps, organizations were using this particular software development methodology. It was first documented in the year 1970 by Royce and was the first publicly documented life cycle model. The waterfall model describes a development method that is linear and sequential; waterfall development has distinct goals for each phase of development. Now, you must be thinking: why the name waterfall model? Because it's pretty similar to a waterfall. What happens in a waterfall? Once the water has flowed over the edge of the cliff, it cannot turn back, and the same is the case for the waterfall development strategy as well. An application will go to the next stage only when the previous stage is complete. So let us focus on the various stages involved in the waterfall methodology. Notice the diagram that is there in front of your screen. If you notice, it's almost like a waterfall, or you can even visualize it as a ladder. So first, what happens? The client gives the requirements for an application. You gather those requirements and you try to analyze them. Then you design the application, how the application is going to look. Then you start writing the code for the application and you build it; when I say build, it involves multiple things: compiling your application, unit testing, and even packaging as well. After that, it is deployed onto the test servers for testing and then deployed onto the prod servers for release. And once the application is live, it is monitored. Now, I know this model looks perfect, and trust me guys, it was at that time, but think about what will happen if we use it now. Fine, let me give you a few disadvantages of this model.
So here are a few disadvantages. The first one is that once the application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage. What I mean by that: suppose you have written the code for the entire application, but in testing there's some bug in that particular application. Now, in order to remove that bug, you need to go through the entire source code of the application, which used to take a lot of time, right? So that is a very big limitation of the waterfall model. Apart from that, no working software is produced until late in the life cycle; we saw that when we were discussing the various stages of the waterfall model. There are high amounts of risk and uncertainty, which means that once your product is live, once it is there in the market, if there is any bug or any downtime, then you have to go through the entire source code of the application again; you have to go through that entire waterfall process that we just saw in order to produce working software again, right? So that used to take a lot of time; there's a lot of risk and uncertainty. And imagine if you have upgraded some software stack in your production environment and that led to the failure of your application; going back to the previous stable version also used to take a lot of time. Next, it is not a good model for complex and object-oriented projects, and it is not suitable for projects where requirements are at a moderate to high risk of changing. What I mean by that: suppose your client has given you a requirement for a web application today. Now you have taken your own sweet time, and you are in a condition to release the application, say, after one year. After one year the market has changed; the client does not want a web application, he's looking for a mobile application now. So this type of model is not suitable where requirements are at a moderate to high risk of changing. So there's a question that has popped up on my screen; it's from Jessica.
She's asking whether every iteration in the waterfall model goes through all the stages. Well, there are no iterations as such, Jessica. First of all, this is not the agile methodology or DevOps; it is the waterfall model, right? There are no iterations: only once a stage is complete will the application go to the next stage. If you're talking about the application being live and then there being some bug or some downtime, then at that time it depends on the kind of bug which is there in the application. Suppose there is a bug because of some flawed version of a software stack installed in your production environment, probably some upgraded version because of which your application is not working properly; you need to roll back to the previous stable version of the software stack in your production environment. So that can be one kind of bug. Apart from that, there might be bugs related to the code, in which case you have to check the entire source code of the application again. Now, if you look at it, rolling back and incorporating the feedback that you have got used to take a lot of time, right? So I hope this answers your question. All right, Jessica says that answers it. Any other doubt you have, guys, you can just go ahead and ask me. Fine, so there are no questions right now. So I hope you have understood what the waterfall model is and what the various limitations of this waterfall model are. Now we are going to focus on the next methodology, which is called the agile methodology.
Agile methodology is a practice that promotes continuous iteration of development and testing throughout the software development life cycle of the project. So the development and the testing of an application happen continuously with the agile methodology. What I mean by that: if you focus on the diagram that is there in front of your screen, here we get the feedback from the testing that we have done in the previous iteration. We design the application again, then we develop it, then again we test it; then we discover a few things that we can incorporate into the application, we again design it and develop it, and there are multiple iterations involved in the development and testing of a particular application. So in the agile methodology, each project is broken up into several iterations, all iterations should be of the same time duration (generally between 2 to 8 weeks), and at the end of each iteration a working product should be delivered. So this is what agile methodology in a nutshell is. Now let me go ahead and compare this with the waterfall model. If you notice the diagram that is there in front of your screen, the waterfall model is pretty linear and pretty straight: as you can see from the diagram, we analyze requirements, we plan it, design it, build it, test it, and then finally we deploy it onto the prod servers for release. But when I talk about the agile methodology over here, the design, build, and testing part is happening continuously. We are writing the code, we are building the application, we are testing it continuously, and there are several iterations involved in this particular stage. And once the final testing is done, it is then deployed onto the prod servers for release, right? So agile methodology basically breaks down the entire software delivery life cycle into small sprints, or iterations as we call them, due to which the development and the testing parts of the software delivery life cycle happen continuously. Let's move forward, and we are going to focus on the various limitations of the agile methodology.
The first and the biggest limitation of the agile methodology is that only the dev part of the team was pretty agile: the development and testing happened continuously, but when I talk about deployment, that was not continuous. There were still a lot of conflicts happening between the dev and the ops side of the company; the dev team wants agility, whereas the ops team wants stability. And there's a very common conflict that happens, and a lot of you can actually relate to it: the code works fine on the developer's laptop, but when it reaches production, there is some bug in the application, or it does not work in production at all. This is because of some inconsistency in the computing environments, and due to that, the operations team and the dev team used to fight a lot; there were a lot of conflicts happening at that time. So the agile methodology made the dev part of the company pretty agile, but when I talk about the ops side of the company, they needed some solution in order to solve the problem that I've just discussed, right? So I hope you are able to understand what kind of problem I'm focusing on. If you go back to the previous diagram, over here, if you notice, only the design, build, and test part, or you can say the development, building, and testing part, is continuous; the deployment is still linear. You need to deploy it manually onto the various prod servers. That's what was happening in the agile methodology, right? So the error that I was talking about, due to which our application is not working fine: once your application is live and you upgrade some software stack in the production environment, it doesn't work properly, and going back and changing something in the production environment used to take a lot of time. For example, you have upgraded some particular software stack and because of that your application stops working; now, to go back to the previous stable version of the software stack, the operations team used to take a lot of time, because they had to go through the long scripts that they had written in order to provision the infrastructure. So let me just give you a quick recap of the things we have discussed till now.
We have discussed quite a lot of history. We started with the traditional waterfall model; we understood its various stages and the limitations of the waterfall model. Then we went ahead and understood what exactly the agile methodology is, how it is different from the waterfall model, and what the various limitations of the agile methodology are. So this is what we have discussed till now. Now we are going to look at the solution to all the problems that we have just discussed, and the solution is none other than DevOps. DevOps is basically a software development strategy which bridges the gap between the dev side and the ops side of the company. So DevOps is basically a term for a group of concepts that, while not all new, have catalyzed into a movement and are rapidly spreading throughout the technical community. Like any new and popular term, people may have confused and sometimes contradictory impressions of what it is. So let me tell you, guys: DevOps is not a technology, it is a methodology. Basically, DevOps is a practice that equates to the study of building, evolving, and operating rapidly changing systems at scale. Now, let me put this in simpler terms. DevOps is the practice of operations and development engineers participating together in the entire software life cycle, from design through the development process to production support. And you can also say that DevOps is characterized by operations staff making use of many of the same techniques as developers for their systems work. I'll explain how this definition is relevant, because all we are saying here is that DevOps is characterized by operations staff making use of many of the same techniques as developers for their systems work. When I explain infrastructure as code, you will understand why I am using this particular definition.
So, as you know, DevOps is a software development strategy which bridges the gap between the dev part and the ops side of the company and helps us deliver good quality software on time, and this happens because of the various stages and tools involved in DevOps. So here is a diagram which is nothing but an infinite loop, because everything happens continuously in DevOps, guys: everything, starting from coding, testing, deployment, and monitoring, is happening continuously, and these are the various tools which are involved in the DevOps methodology, right? So not only is the knowledge of these tools important for a DevOps engineer, but also how to use these tools: how can I architect my software delivery life cycle such that I get the maximum output? So it doesn't mean that if I have a good knowledge of Jenkins or Git or Docker, then I become a DevOps engineer. No, that is not true. You should know how to use them; you should know where to use them to get the maximum output. So I hope you have got my point about what I'm trying to say here. In the next slide we'll be discussing the various stages that are involved in DevOps. Fine, so let's move forward, guys, and we are going to focus on the various stages involved in DevOps. So these are the various stages involved in DevOps.
Let me just take you through all these stages one by one, starting from Version Control. I'll be discussing all of these stages one by one in detail as well, but let me first give you the entire picture of these stages in one slide. So Version Control is basically maintaining different versions of the code. What I mean by that: suppose there are multiple developers writing code for a particular application. How will I know which developer has made which commit, at what time, which commit is actually causing the error, and how will I revert back to the previous commit? So I hope you are getting my point; my point here is, how will I manage that source code? Suppose developer A has made a commit and that commit is causing some error. Now, how will I know that developer A has made that commit, at what time he made that commit, and where in the code that edit happened, right? All of these questions can be answered once you use Version Control tools like Git and Subversion; of course, we are going to focus on Git in our course. Then we have continuous integration. Continuous integration is basically building your application continuously. What I mean by that: suppose any developer makes a change to the source code; a continuous integration server should be able to pull that code and prepare a build. Now, when I say build, people have this misconception that it means only compiling the source code. That is not true, guys; it includes everything, starting from compiling your source code, validating your source code, code review, unit testing, integration testing, etc., and even packaging your application as well. Then comes continuous delivery. Now, the same continuous integration tool that we are using, suppose Jenkins: what Jenkins will do, once the application is built, is deploy it onto the test servers for testing, to perform user acceptance tests or end user testing, whatever you call it. There we'll be using tools like Selenium for performing automation testing. And once that is done, it will then be deployed onto the prod servers for release, right? That is called continuous deployment, and here we'll be using configuration management tools. This is basically to provision your infrastructure, to provision your prod environment. And let me tell you, guys, continuous deployment is something which is not a good practice, because before releasing a product to the market there might be multiple checks that you want to do, right? There might be multiple other tests that you want to perform, so you don't want this to be automated, right? That's why continuous deployment is something which is not preferred. After continuous delivery, we can go ahead and manually use configuration management tools like Puppet, Chef, Ansible, and SaltStack, or we can even use Docker for a similar purpose, and then we can go ahead and deploy the application onto the prod servers for release. And once the application is live, it is continuously monitored by tools like Nagios or Splunk, which will provide the relevant feedback to the concerned teams, right? So these are the various stages involved in DevOps.
Now let me just check if there are any doubts. Fine. So this is how the various jobs are scheduled: we have Jenkins here, a continuous integration server. What Jenkins will do is, the moment any developer makes a change in the source code, it will take that code and trigger a build using tools like Maven, Ant, or Gradle. Once that is done, it will deploy the application onto the test servers for end user testing, using tools like Selenium, JUnit, etc. Then it will automatically take that tested application and deploy it onto the prod servers for release, right? And then it is continuously monitored by tools like Nagios, Splunk, ELK, et cetera. So Jenkins is basically the heart of the DevOps life cycle. It gives you a nice 360-degree view of your entire software delivery life cycle, so with that UI you can go ahead and have a look at how your application is doing currently, which stage it is in right now, whether testing is done or not; all those things you can go ahead and see in the Jenkins dashboard. There might be multiple jobs running in the Jenkins dashboard that you can see, and it gives you a very good picture of the entire software delivery life cycle. Don't worry, I'm going to discuss all of these stages in detail as we move forward. We are going to discuss each of these stages one by one, starting from source code management, or what we also call Version Control.
Now, what happens in source code management? There are two types of source code management approaches: one is called centralized Version Control, and the other is called distributed Version Control. Now, imagine there are multiple developers writing code for an application. If some bug is introduced, how will we know which commit has caused that error, and how will I revert back to the previous version of the code? In order to solve these issues, source code management tools were introduced, and there are two types of source code management tools: one is centralized Version Control and the other is distributed Version Control. So let's discuss centralized Version Control first. A centralized version control system uses a central server to store all the files and enables team collaboration. It works with a single repository, and users can directly access the central server. So this is what happens here, guys: every developer has a working copy, the working directory. The moment they want to make any change in the source code, they can go ahead and make a commit to the shared repository, and they can even update their working copy by pulling the code that is there in the repository as well. So the repository in the diagram that you are noticing indicates a central server, which could be local or remote, and which is directly connected to each of the programmers' workstations. As you can see, every programmer can extract or update their workstation with the data present in the repository, or can make changes to the data and commit them to the repository. Every operation is performed directly on the central server or the central repository. Even though it seems pretty convenient to maintain a single repository, it has a lot of drawbacks. But before I tell you the drawbacks, let me tell you what advantages we have here. First of all, if anyone makes a commit to the repository, there will be a commit ID associated with it and there will always be a commit message, so you know which person has made that commit, at what time, and where in the code, basically; so you can always revert back. But let me now discuss a few disadvantages. First of all, it is not locally available, meaning you always need to be connected to a network to perform any action; it is not available locally, so you need to be connected to some sort of network. Secondly, since everything is centralized, in case of the central server getting crashed or corrupted, it will result in losing the entire data of the project. That's a very serious issue, guys, and that is one of the reasons why industries don't prefer a centralized version control system.
Let's talk about the distributed version control system now. These systems do not necessarily rely on a central server to store all the versions of the project files. In a distributed version control system, every contributor has a local copy, or clone, of the main repository, as you can see where I'm highlighting with my cursor right now; that is, everyone maintains a local repository of their own which contains all the files and metadata present in the main repository. As you can see in the diagram as well, every programmer maintains a local repository of their own, which is actually a copy or clone of the central repository, on their hard drive. They can commit to and update the local repository without any interference. They can update their local repositories with new data coming from the central server by an operation called pull, and they can affect changes to the main repository by an operation called push, which pushes from the local repository. Now, you must be thinking: what advantage do we get here? What are the advantages of distributed version control over centralized Version Control? Basically, the act of cloning the entire repository gives you that advantage. Let me tell you how. All operations apart from push and pull are very fast, because the tool only needs to access the hard drive, not a remote server; hence, you do not always need an internet connection. Committing new change sets can be done locally without manipulating the data on the main repository; once you have a group of change sets ready, you can push them all at once. So what you can do is make commits to your local repository, which is there on your local hard drive; you can commit the changes you want in the source code, review them, and once you have quite a lot of them ready, you can go ahead and push them onto the central server. Also, if the central server gets crashed at any point of time, the lost data can be easily recovered from any one of the contributors' local repositories; this is one very big advantage. Apart from that, since every contributor has a full copy of the project repository, they can share changes with one another if they want to get some feedback before affecting the changes in the main repository. So these are the various ways in which a distributed version control system is actually better than a centralized version control system.
you have understood it. We are going to discuss a one source code management tool called gate,
which is very popular in the market right now almost all the companies actually use
get for now. I'll move forward and we'll go into focus on a source code management tool
a distributed Version Control tool that is called as get now before I move forward guys.
Let me make this thing clear. So when I say Version Control or source code management,
it's one in the same thing. Let's talk about get now now git is a distributed Version Control
tool. Boards distributed nonlinear workflows by providing data Assurance for developing
quality software, right? So it's a pretty tough definition to follow but it will be
easier for you to understand with the diagram that is there in front of your screen. So
for example, I am a developer and this is my working directory right now. What I want
to do is I want to make some changes to my local repository because it is a distributed
Version Control System. I have my local repository as well. So what I'll do I'll perform a get
add operation now because of get add whatever was there in my working directory will be
present in the staging area. Now, you can visualize the staging area as something which
is between the working directory and your local repository, right? And once you have
done get ad you can go ahead and perform git commit to make changes to your local repository.
And once that is done you can go ahead and push your changes to the remote repository
as well. After that you can even perform get pull to add whatever is there in your remote
repository to your local repository and perform get check out to our everything which was
there in your Capacity of working directory as well. All right, so let me just repeat
it once more for you guys. So I have a working directory here. Now in order to add that to
my local repository. I need to First perform get add that will add it to my staging area
staging area is nothing but area between the working directory and the local repository
after guitar. I can go ahead and execute git commit which will add the changes to my local
repository. Once that is done. I can perform get push to push the changes that I've made
in my local repository to the remote repository and in order to pull other changes which are
there in the remote repository of the local repository. You can perform get pull and finally
get check out that will be added to your working directory as well and get more which is also
a pretty similar command now before we move forward guys. Let me just show you a few basic
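So, putting that whole flow in one place, here's a minimal sketch of the commands we just walked through; the file, remote, and branch names are simply the ones we'll be using in the demo in a moment:

    git add edureka.py                 # working directory -> staging area
    git commit -m "first commit"       # staging area -> local repository
    git push origin master             # local repository -> remote repository
    git pull origin master             # remote repository -> local repository (fetch and merge)
    git checkout master                # bring the local repository's contents into the working directory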
Now, before we move forward, guys, let me just show you a few basic commands of Git. I've already installed Git on my CentOS virtual machine, so let me just quickly open it to show you a few basic operations that you can perform with Git. So this is my virtual machine, and I've told you that I have already installed Git. In order to check the version of Git, you can just type git --version, and you can see that I have version 2.7.2 here. Let me go ahead and clear my terminal. Now let me first make a directory, and let me call it edureka-repo, and I'll move into this edureka repository. The first thing that I need to do is initialize this repository as an empty Git repository. For that, all I have to type here is git init, and it will go ahead and initialize this empty directory as a local Git repository. So it has been initialized; as you can see, it says an empty Git repository has been initialized in the .git folder inside our edureka-repo directory, right? Then, over here, I'm just going to create a file, a Python file, so let me just name it edureka.py, and I'm going to make some changes in this particular file. I'll use gedit for that, and I'm just going to write a normal print statement in here: print("Welcome to Edureka"), close the parenthesis, save it, close it. Let me get my terminal back. Now, if I hit an ls command, I can see that the edureka.py file is here. Now, if you can recall from the slides, I was telling you that in order to add a particular file or a directory to the local Git repository, first I need to add it to my staging area, and how will I do that? By using the git add command. So all I have to type here is git add and the name of my file, which is edureka.py, and here we go. So it is done now. Now, if I type git status here, it will give me the files which I need to commit. This particular command gives me the status; it basically tells me which modified files I need to commit to the local repository. So it says a new file has been created, that is edureka.py, it is present in the staging area, and I need to commit this particular file. So all I have to type here is git commit -m and the message that I want, so I'll just type in here "first commit", and here we go. So it is successfully done now. So I've added a particular file to my local Git repository.
So now what I'm going to show you is basically how to deal with remote repositories. I have a remote Git repository present on GitHub. So I have created a GitHub account; the first thing that you need to do is create a GitHub account, and then you can go ahead and create a new repository there, and then I'll tell you how to add that particular repository to a local Git repository. Let me just go to my browser once and zoom in a bit. And yeah, so this is my GitHub account, guys. What I'm going to do is first go to this repositories tab, and I'm going to add one new repository, so I'll click on New. I'm going to give a name to this repository; whatever name you want to give, just go ahead and do that. Let me just write it here: git-tutorial-devops, or whatever name you feel like, just go ahead and write that. I'm going to keep it public; if you want any description, you can go ahead and give that, and I can also initialize it with a README. Click Create repository, and that's all you have to do in order to create a remote GitHub repository. Now, over here, you can see that there's only one README.md file. So what I'm going to do is just copy this particular SSH link, and I'm going to perform git remote add origin and the link that I just copied; I'll paste it here and here we go. So this has basically added my remote repository to my local repository. Now what I can do is go ahead and pull whatever is there in my remote repository into my local Git repository; for that, all I have to type here is git pull origin master, and here we go. So that is done. Now, as you can see, I've pulled all the changes. So let me clear my terminal and hit an ls command, and you'll find README.md present here. Right now, what I'm going to show you is basically how to push this edureka.py file onto my remote repository. For that, all I have to type here is git push origin master, and here we go. So it is done. Now, let me just go ahead and refresh this particular repository, and you'll find the edureka.py file here. Let me just go ahead and reload this, so you can see the edureka.py file where I've written "Welcome to Edureka". So it's that easy, guys. Let me clear my terminal now. So I've covered a few basics of Git.
Now let's move forward with this DevOps tutorial, and we are going to focus on the next stage, which is called continuous integration. But first, a quick recap: we have seen a few basic commands of Git; we saw how to initialize an empty directory as a Git repository, how we can add a file to the staging area, and how we can go ahead and commit it to the local repository. After that, we saw how we can push the changes in the local repository to the remote repository; my repository was on GitHub, and I told you how to connect to the remote repository and how you can even pull the changes from the remote repository. All of these things we have discussed in detail. So now, guys, let's focus on continuous integration.
So continuous integration is basically a development practice in which developers are required to commit changes to the source code in a shared repository several times a day, or you can say more frequently, and every commit made to the repository is then built. This allows the teams to detect problems early. Let us understand this with the help of the diagram that is there in front of your screen. Here we have multiple developers writing code for a particular application, and all of them are committing code to a shared repository, which can be a Git repository or a Subversion repository. From there, the Jenkins server, which is nothing but a continuous integration tool, will pull that code: the moment any developer commits a change to the source code, the Jenkins server will pull it and prepare a build. Now, as I have told you earlier as well, a build does not only mean compiling the source code. It includes compiling, but apart from that there are other things as well, for example code review, unit testing, integration testing, and packaging your application into an executable file; it can be a WAR file, it can be a JAR file. So it happens in a continuous manner: the moment any developer commits a change to the source code, the Jenkins server will pull it and prepare a build. This is what is called continuous integration. Jenkins has various tools in order to perform this; it has tools for development, testing, and deployment technologies, and it has well over 2,500 plugins. So you just need to install the plugin and you can go ahead and trigger whatever job you want with the help of Jenkins. It is originally written in Java.
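Just so the word "build" is concrete, here's a minimal sketch of the kind of steps such a CI job typically runs for a Java project built with Maven; these are standard Maven lifecycle phases, and the project itself is only a placeholder:

    # assuming the job has already pulled the latest code from the shared repository
    mvn compile            # compile the source code
    mvn test               # run the unit tests
    mvn package            # package the application into a WAR/JAR file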
Right, let's move forward, and we are going to focus on continuous delivery now. So continuous delivery is nothing but taking continuous integration to the next step: in a continuous manner, or in an automated fashion, we are taking this built application onto the test server for end user testing, or user acceptance testing. That is basically what continuous delivery is. So let us just summarize continuous delivery again: the moment any developer makes a change in the source code, Jenkins will pull that code and prepare a build; once the build is successful, Jenkins will take the built application and deploy it onto the test server for end user testing, or user acceptance testing. This is basically what continuous delivery is; it happens in a continuous fashion. So what advantage do we get here? Basically, if there is a build failure, then we know which commit has caused that error, and we don't need to go through the entire source code of the application. Similarly for testing: even if any bug appears in testing, we know which commit has caused that error, and we can just go ahead and have a look at that particular commit instead of checking out the entire source code of the application. So basically this system allows the team to detect problems early, as you can see from the diagram as well. If you want to learn more about Jenkins, I'll leave a link in the chat box; you can go ahead and refer to that, and people watching this on YouTube can find that link in the description box below. Now we're going to talk about continuous deployment. Continuous deployment is basically taking the application, the built application that you have tested, and deploying it onto the prod servers for release in an automated fashion. So once the application is tested, it will automatically be deployed onto the prod servers for release. Now, this is not a good practice, as I've told you earlier as well, because there might be certain checks that you need to do before releasing your software to the market, or you might want to market your product before that. So there are a lot of things that you want to do before deploying your application, which is why it is not advisable, or a good practice, to automatically deploy your application onto the prod servers for release. So this is basically continuous integration, delivery, and deployment. Any questions you have, guys, you can ask me.
All right, so Dorothy wants me to repeat it once more. Sure, I'll do that. Let's start with continuous integration. Continuous integration is basically committing changes to the source code more frequently, and every commit will then be built using a Jenkins server, or any continuous integration server. So this Jenkins, what it will do is trigger a build the moment any developer commits a change to the source code, and the build includes compiling, code review, unit testing, integration testing, packaging, and everything. So I hope you are clear with what continuous integration is: it is basically continuously building your application; the moment any developer commits a change to the source code, Jenkins will pull that code and prepare a build. Let's move forward, and now I'm going to explain continuous delivery. In continuous delivery, the package that we created here, the WAR or the JAR file, the executable file: Jenkins will take that package and deploy it onto the test server for end user testing. This kind of testing is called end user testing or user acceptance testing, where you deploy your application onto a server which can be a replica of your production server, and you perform end user testing, or what you call user acceptance testing. For example, in my application, if I want to perform functional testing, I will first go ahead and check whether my search engine is working, then I'll check whether people are able to log in or not. So checking all those functions of a website or an application is basically done after deploying it onto the app server, right? That sort of testing is basically what functional testing is, or what I'm trying to refer to here. Next up, we are going to continuously deploy our application onto the prod servers for release. So once the application is tested, it will then be deployed onto the prod servers for release, and as I've told you earlier as well, it is not a good practice to deploy your application to the prod servers continuously, or in a fully automated fashion.
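To give you a rough idea of what that delivery step can look like under the hood, here's a minimal sketch of copying a packaged WAR onto a test server and restarting the app server there; the host name, user, and paths are purely placeholders, and in practice Jenkins would run these steps for you:

    # assuming the build stage has already produced target/app.war
    scp target/app.war deploy@test-server:/opt/tomcat/webapps/app.war
    ssh deploy@test-server "sudo systemctl restart tomcat"
    # the user acceptance / Selenium tests would then run against the application on test-server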
So guys, we have discussed a lot about Jenkins. How about I show you how the Jenkins UI looks, how you can download plugins, and all those things? I've already installed Jenkins on my CentOS virtual machine, so let me just quickly open it. So guys, this is my CentOS virtual machine again, and over here I have configured Jenkins on localhost, port 8080, at /jenkins, and here we go. You just need to provide the username and password that you gave when you were installing Jenkins. So this is how Jenkins looks, guys. Over here there are multiple options; you can just go and play around with them. Let me just take you through a few basic options that are there. When you click on New Item, you'll be directed to a page which will ask you to give a name to your project, so give whatever name you want to give, then choose the kind of project that you want, and then you can go ahead and provide the required specifications and configurations for your project. Now, when I was talking about plugins, let me tell you how you can actually install plugins. You need to go to Manage Jenkins, and there's a tab there that you'll find called Manage Plugins. In this tab you can find all the updates for the plugins that you have already installed; in the Available section you'll find all the available plugins that Jenkins supports, so you can just go ahead and search for the plugin that you want to install, check it, and then go ahead and install it. Similarly, the plugins that are installed will be found in the Installed tab, and then you can go ahead and check out the Advanced tab as well. So that is something different; let's not focus on it for now. Let me go back to the dashboard, and this is basically one project that I've executed, which is called Edureka Pipeline, and this blue colour symbolizes that it was successful; the blue colour ball means it was successful. That's how it works, guys. So I was just giving you a tour of the Jenkins dashboard; we'll actually execute the practical as well, so we'll come back to it later. But for now, let me open my slides and we will proceed with the next stage in the DevOps life cycle.
So now let's talk about configuration management. What exactly is configuration management? Let me first talk about a few issues with the deployment of a particular application, or with provisioning of the servers. So basically what happens is, I've built my application, but when I deploy it onto the test servers or onto the prod servers, there are some dependency issues because of which my application is not working fine. For example, on my developer's laptop there might be some software stack which was upgraded, but in my prod and test environments they're still using the outdated version of that software stack, because of which the application is not working fine. This is just one example. Apart from that, what happens when your application is live and it goes down because of some reason, and that reason can be that you have upgraded the software stack? Now, how will you go back to the previous stable version of that software stack? So there are a lot of issues with the admin side of the company, the ops side of the company, which were removed with the help of configuration management tools. You know, earlier admins used to write these long scripts in order to provision the infrastructure, whether it's the test environment, the prod environment, or the dev environment. They utilized those long scripts, which are prone to error, plus it used to take a lot of time; and apart from that, apart from the admin who has written that script, no one else can actually recognize what's the problem with it if you have to debug it. So there were a lot of problems with the admin side, or the ops side, of the company, which were removed with the help of configuration management tools. And one very important concept that you guys should understand is called infrastructure as code, which means writing code for your infrastructure. That's what it means: suppose I want to install a LAMP stack on all of these three environments, whether it's dev, test, or prod. I will write the code for installing the LAMP stack in one central location, and I can go ahead and deploy it onto dev, test, and prod. So I have the record of the system state present in my one central location; even if I upgrade to the next version, I still have the record of the previous stable version of the software stack, right? So I don't have to manually go ahead and write scripts and deploy them onto the nodes. It's that easy, guys.
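Just to give you a feel for what infrastructure as code looks like, here's a minimal sketch using Puppet's standalone puppet apply mode; the package names are illustrative CentOS packages, and in a real master/agent setup this manifest would live on the Puppet master instead:

    # write the desired state of the node as code
    cat > lamp.pp <<'EOF'
    package { ['httpd', 'mariadb-server', 'php']:
      ensure => installed,
    }
    EOF
    # apply the same manifest on dev, test, or prod to get the same state everywhere
    puppet apply lamp.pp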
So let me just focus on a few challenges that configuration management helps us to overcome. First of all, it can help us figure out which components to change when requirements change. It also helps us in redoing an implementation because the requirements have changed since the last implementation. And a very important point, guys: it helps us revert to a previous version of a component if we have replaced it with a new but flawed version. Now, let me tell you the importance of configuration management through a use case. The best example I know is of the New York Stock Exchange: a software glitch prevented the NYSE from trading stocks for almost 90 minutes, which led to millions of dollars of loss. A new software installation caused the problem; that software was installed on 8 of its 20 trading terminals, and the system was tested out the night before. However, in the morning it failed to operate on those 8 terminals, so there was a need to switch back to the old software. Now, you might think that this was a failure of NYSE's configuration management process, but in reality it was a success: as a result of proper configuration management, the NYSE recovered from that situation in 90 minutes, which was pretty fast. Had the problem continued longer, the consequences would have been more severe, guys. So I hope you have understood its importance. Now, let's focus on the various tools available for configuration management. We have multiple tools like Puppet, Chef, Ansible, and SaltStack; I'm going to focus on Puppet for now. Puppet is a configuration management tool that is used for deploying, configuring, and managing servers. So let's see what the various functions of Puppet are.
First of all, you can define distinct configurations for each and every host, and continuously check and confirm whether the required configuration is in place and has not been altered on the host. What I mean by that: you can actually define distinct configurations, for example, on one particular node I need this software stack and on another node I need that software stack, so I can define distinct configurations for different nodes, and Puppet will continuously check and confirm whether the required configuration is in place and has not been altered; and if it is altered, Puppet will revert back to the required configuration. This is one function of Puppet. It can also help in dynamically scaling machines up and down. What will happen if in your company there's a big billion day sale and you're expecting a lot of traffic? At that time, in order to provision more servers: probably today our task is to provision 10 servers, and tomorrow you might have to provision 20 machines, right? So how will you do that? You cannot go ahead and do that manually by writing scripts; you need tools like Puppet that can help you in dynamically scaling machines up and down. It provides control over all of your configured machines, so a centralized change gets propagated to all of them automatically. It follows a master-slave architecture in which the slaves will poll the central server for changes made in the configuration. So we have multiple nodes there which are connected to the master; they will poll, they will continuously check whether any change in the configuration has happened on the master, and the moment any change happens, they will pull that configuration and deploy it onto that particular node. I hope you're getting my point. So this is called pull configuration, as opposed to push configuration, where the master actually pushes the configurations onto the nodes; that is what happens in Ansible and SaltStack, but it does not happen in Puppet and Chef. So these two tools, Puppet and Chef, follow pull configuration, whereas Ansible and SaltStack follow push configuration, in which the configurations are pushed onto the nodes. In Chef and Puppet, the nodes pull the configurations: they keep on checking the master at regular intervals, and if there's any change in the configuration, they'll pull it.
it now. Let me explain you the architecture that is there in front of your screen. So
that is basically a typical puppet architecture in which what happens you can see that there's
a master/slave architecture here is our puppet master and here is our puppet slave now the
functions which are performed in this architecture first, the puppet agent sends the fact to
the puppet master. So this puppet slave will first send the fact to the Puppet Master facts
what our Fox basically they are key value data appears. It represents some aspects of
slave states such as its IP address up time operating system or whether it's a virtual
machine, right? So that's what basically facts are and the puppet master uses a fact to compile
a catalog that defines how the slaves should be configured. Now. What is the catalog it
is a document that describes a desired state for each resource that Puppet Master manages.
Honestly, then what happens the puppet slave reports back to the master indicating that
configuration is complete and which is also visible in the puppet dashboard. So that's
how it works guys. So let's move Forward and talk about containerization. So what exactly
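If you want to see those pieces from the command line on an agent node, here's a rough sketch; it assumes open source Puppet installed with its default settings, and the fact names shown are just common examples:

    facter os.name ipaddress uptime    # a few of the key/value facts the agent reports to the master
    puppet agent --test                # contact the master, fetch the compiled catalog, apply it, and send back the report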
Now let's move forward and talk about containerization. So what exactly is containerization? I believe all of you have heard about virtual machines, so what are containers? Containers are nothing but lightweight alternatives to virtual machines. Let me just explain that to you. We have Docker containers that will contain the binaries and libraries required for a particular application, and that's when we say we have containerized a particular application, right? So let us focus on the diagram that is there in front of your screen. Here we have the host operating system, on top of which we have the Docker engine. We have no guest operating system here, guys; it uses the host operating system, and we are running two containers: container one will have application one and its binaries and libraries, and container two will have application two and its binaries and libraries. So all I need in order to run my application is this particular container, because all the dependencies are already present in that particular container. So what basically is a container? It contains my application and the dependencies of my application; the binaries and libraries required for that application are there in my container. Nowadays, you must have noticed that even when you want to install some software, you will actually get a ready-to-use Docker container, right? That is the reason: it's pretty lightweight when you compare it with virtual machines, right?
So let me discuss a use case of how you can actually use Docker in the industry. Suppose you have some complex requirements for your application; it can be a microservice, it can be a monolithic application, anything. Let's just take a microservice. Suppose you have complex requirements for your microservice, and you have written the Dockerfile for it. With the help of this Dockerfile, I can create a Docker image. A Docker image is nothing but a template; you can think of it as a template for your Docker container, right? And with the help of a Docker image, you can create as many Docker containers as you want. Let me repeat it once more: we have written the complex requirements for a microservice application in an easy-to-write Dockerfile; from there, we have created a Docker image, and with the help of the Docker image we can build as many containers as we want. Now, that Docker image I can upload onto Docker Hub, which is nothing but a repository of Docker images; we can have public repositories and we can have private repositories there. And from Docker Hub, any team, be it staging or production, can pull that particular image and prepare as many containers as they want. So what advantage do we get here? Whatever was there on my developer's laptop, right, the microservice application, along with its requirements, written by the developer, I have replicated in my staging as well as in my production environment, so there's a consistent computing environment throughout my software delivery life cycle. I hope you are getting my point. So guys, let me just quickly brief you again about what exactly a Docker container is. Just visualize a container as a box in which our application is present with all its dependencies, except the box is infinitely replicable. Whatever happens in the box stays in the box, unless you explicitly take something out or put something in, and when it breaks you just throw it away and get a new one. So containers basically make your application easy to run on different computers; ideally, the same image should be used to run containers in every environment stage, from development to production. So that's what Docker containers basically are.
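To make that workflow concrete, here's a minimal sketch of the commands involved; it assumes a Dockerfile already exists in the current directory, and the image name, tag, and Docker Hub account are only placeholders:

    docker build -t myapp:1.0 .                 # build an image from the Dockerfile
    docker run -d -p 8080:8080 myapp:1.0        # run a container from that image
    docker tag myapp:1.0 mydockerid/myapp:1.0   # tag the image for Docker Hub
    docker push mydockerid/myapp:1.0            # push it so staging or production can pull the exact same image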
So guys, this is my CentOS virtual machine here again, and I've already installed Docker. The first thing is that I need to start Docker; for that I'll type systemctl start docker and give the password, and it has started successfully. So now, what I'm going to do: there are a few images which are already there on Docker Hub, which are public images; you can pull them any time you want, you can go ahead and run an image as many times as you want, and you can create as many containers as you want. Basically, when I execute the command for pulling an image from Docker Hub, it will first try to find it locally; if it is present, well and good, otherwise it will go ahead and pull it from Docker Hub. So before I move forward, let me just show you how Docker Hub looks. If you have not created an account on Docker Hub, you need to go and do that, because for executing the use case you have to, and it's free of cost. So this is how Docker Hub looks, guys, and this is my repository that you can notice here, right? I can go ahead and search for images here as well; for example, if I want to search for Hadoop images, which I believe one of you asked about, you can find that we have Hadoop images present here as well, right? So these are nothing but a few images that are there on Docker Hub. So I believe now I can go back to my terminal and execute a few basic Docker commands. The first thing that I'm going to execute is docker images, which will give the list of all the images that I have in my local system. So I have quite a lot of images, as you can see; this is the size, when the image was created, and this is called the image ID, right? So I have all of these things displayed on my console. Let me just clear my terminal. Now, what I'm going to do is pull an image. All I have to type here is docker pull; for example, if I want to pull an Ubuntu image, I just type in here docker pull ubuntu, and here we go. So it is using the default tag, latest. Tag is something that I'll tell you about later; by default it will provide the tag latest all the time. So it is pulling from Docker Hub right now, because it couldn't find the image locally. The download is completed and it is currently extracting it. Now, if I want to run a container, all I have to type here is docker run -it ubuntu, or you can type the image ID as well. So now I am in the Ubuntu container. So I've told you how you can see the various Docker images, I've told you how you can pull an image from Docker Hub, and how you can actually go ahead and run a container. Now we're going to focus on continuous monitoring.
monitoring tools resolve any system errors, you know, what kind of system errors low memory
unreachable server, etc, etc. Before they have any negative impact on your business
productivity. Now, what are the reasons to use continuous monitoring tools? Let me tell
you that it detects any network or server problems. It can determine the root cause
of any issue. It maintains the security and availability of the services and also monitors
in troubleshoot server performance issues. It also allows us to plan for infrastructure
upgrades before outdated system cause failures and it can respond to issues of the first
sign of problem and let me tell you guys these tools can be used to automatically fix problems
when they are detected as well. It also ensures it infrastructure outages have a minimal effect
on your organization's bottom line and can monitor your entire infrastructure and business
processes. So what is continuous monitoring? It is all about the ability of an organization to detect, report, respond to, contain and mitigate attacks that occur on its infrastructure or on the software. So basically we have to monitor the events on an ongoing basis and determine what level of risk we are experiencing. So if I have to summarize continuous monitoring in one definition, I will say it is the integration of an organization's security tools, so we have different security tools in an organization and we integrate those tools; the aggregation, normalization and correlation of the data that is produced by those security tools; the analysis of that data based on the organization's risk goals and threat knowledge; and a near real-time response to the risks identified. That is basically
what is continuous monitoring and this is a very good saying guys if you can't measure
it, you can't manage it. I hope you know what I'm talking about. Now, there are multiple
continuous monitoring tools available in the market. We're going to focus on Nagios. Now, Nagios is used for continuous monitoring of systems, application services and business processes in a DevOps culture, right? And in the event of a failure, Nagios can alert technical staff of the problem, allowing them to begin the remediation process before outages affect business processes and users or customers. So with Nagios you don't have to explain why an unseen infrastructure outage affected your organization's bottom line. So let me tell you how it works.
So focus on the diagram that is there in front of your screen. Nagios runs on a server, usually as a daemon or a service. It periodically runs plugins residing on the same server, and they contact hosts or servers on your network or on the internet, which can be locally present or remotely present as well; you can see that in the diagram too. One can view the status information using the web interface, and you can also receive email or SMS notifications if something happens. So Nagios behaves like a scheduler that runs certain scripts at certain moments; it stores the results of those scripts and will run other scripts if those results change. Now what are plugins? Plugins are compiled executables or scripts that can be run from a command line to check the status of a host or service. Nagios uses the results from the plugins to determine the current status of the hosts and services on your network. So what actually happens in this diagram is that the Nagios server is running on a host, and plugins interact with local or remote hosts, right? These plugins send the information to the scheduler, which displays it in the GUI. That's what is happening, guys.
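And just to make the idea of a plugin concrete, here is a tiny sketch of what a custom one could look like. This is not one of the standard Nagios plugins and not something from this session; it is just an illustration of the usual convention where the script prints a status line and exits with 0 for OK, 1 for WARNING, 2 for CRITICAL and 3 for UNKNOWN. The thresholds are made up.

#!/bin/bash
# check_disk_usage.sh - hypothetical example plugin: warn on root filesystem usage
USAGE=$(df / | awk 'NR==2 {gsub("%",""); print $5}')
if [ "$USAGE" -ge 90 ]; then
  echo "CRITICAL - root filesystem at ${USAGE}%"
  exit 2
elif [ "$USAGE" -ge 80 ]; then
  echo "WARNING - root filesystem at ${USAGE}%"
  exit 1
else
  echo "OK - root filesystem at ${USAGE}%"
  exit 0
fi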
All right, so we have discussed all the stages. So let me just give you a quick recap of what we have discussed
first. We saw what was the methodology before devops? We saw the waterfall model. What were
its limitations then we understood the agile model and the difference between the waterfall
and agile methodology. And what are the limitations of agile methodology then we understood how
devops overcomes all of those limitations and what exactly DevOps is. We saw the various
stages and tools involved in devops starting from Version Control. Then we saw continuous
integration. Then we saw continuous delivery. Then we saw continuous deployment. Basically,
we understood the difference between integration delivery and deployment then we saw what is
configuration management and containerization and finally explained continuous monitoring,
right? So in between I was even switching back to my virtual machine, where I have a few tools already installed, and I was telling you a few basics about those tools. Now comes the
most awaited topic of today's session which is our use case. So let's see what we are
going to implement in today's use case. So this is what we'll be doing. We have git repository,
right? So developers will be committing code to this git repository. And from there. Jenkins
will pull that code and it will first clone that repository after cloning that repository
it will build a Docker image using a Docker file. So we have the dockerfile will use that
to build an image. Once that image is built. We are going to test it and then push it onto
Docker Hub; as I've told you, Docker Hub is nothing but like a Git repository of Docker images. So this is what we'll be doing. Let me just repeat it once more: developers
will be committing changes in the source code. So the moment any developer commits a change in the source code, Jenkins will clone the entire Git repository. It will build a Docker
image based on a Dockerfile that we will create, and from there it will push the Docker image onto Docker Hub. This will happen automatically, at the click of a button. So what I'll do is, we'll be using Git, Jenkins and Docker. Let me just quickly open my virtual machine and I'll show you what our application is all about. So we are basically creating a Docker image of a particular application and then pushing it onto Docker Hub in an automated fashion, and our code is in a GitHub repository. So what is the application? It's basically a Hello World server written with Node. So
we have a main.js. Let me just go ahead and show you on my GitHub repository. Let me just go back. So this is how our application looks, guys: we have main.js, and apart from that we have package.json for the dependencies. Then we have a Jenkinsfile and a Dockerfile. The Jenkinsfile, I'll explain to you later what we are going to do with it. But before that, let me just explain a few basics of the Dockerfile and how we can build a Docker image of this particular, very basic Node.js application. The first thing is writing a Dockerfile. Now, to be able to build a Docker image with our application, we will need a Dockerfile. Right, you can think of it as a blueprint for Docker: it tells Docker what the contents and parameters of our image should be. Docker images are often based on other images, but before that, let me just go ahead and show you the Dockerfile.
So let me just first clone this particular repository. Let me go to that particular directory first; it's there in Downloads. Let me unzip this first, unzip devops-tutorial, and let me hit an ls command. So here is my application. I'll just go to this particular devops-tutorial-master directory, and let me just clear my terminal. Let us focus on what files we have. We have a Dockerfile; let's not focus on the Jenkinsfile at all for now. Right, we have the Dockerfile, we have main.js, package.json, README.md and we have test.js. So I have a Dockerfile with the help of which I will be creating
a Docker image, right? So let me just show you what I have written in this Dockerfile. Before this, let me tell you that Docker images are often based on other images; for this example, we are basing our image on the official Node Docker image. So the line that you are seeing is basically to base our application on the official Node Docker image. This makes our job easy and our Dockerfile very, very short, guys, because the hectic task of installing Node and its dependencies in the image is already done in our base image. So we'll just need to include our application. Then we have set a maintainer label; I mean, this is optional, if you want to do it, go ahead, and if you don't want to do it, it's still fine. There's a health check, which is basically for Docker to be able to tell if the server is actually up or not. And then finally we are telling Docker which port our server will run on, right? So this is how we have written the Dockerfile. Let me just go ahead and close this.
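I am not pasting the exact file here, but a minimal sketch along the lines of what I just described could look like the following; the /app path and the curl-based health check are my assumptions for illustration, not necessarily what is in the repository.

# base our image on the official Node image, so Node and its dependencies come pre-installed
FROM node
# optional maintainer label
LABEL maintainer="you@example.com"
# copy the application into the image and install its dependencies from package.json
WORKDIR /app
COPY . .
RUN npm install
# let Docker check whether the server is actually up (assumes curl is available in the image)
HEALTHCHECK CMD curl --fail http://localhost:8000 || exit 1
# the port our server listens on, and the command that starts it
EXPOSE 8000
CMD ["node", "main.js"]

With a file like that in place, building and running the image is just a docker build followed by a docker run, which is exactly what we do next.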
Now I'm going to create an image using this Dockerfile. For that, all I have to type here is sudo docker build /home/edureka/Downloads/devops-tutorial, basically the path to my Dockerfile, and here we go; I need to provide the sudo password. So it has started now and it is creating the Docker image for me. And it is done, it was successfully built, and this is my image ID, right? So I can just go ahead and run this as well. All I have to type here is docker run -it and my image ID, and here we go. So it is listening at port 8000. Let me just stop it for now. So I've told you how you can create an image using a Dockerfile. Right, now what I'm going to do is use Jenkins in order to clone a Git repository, then build an image, then perform testing and finally push it onto Docker Hub, my own Docker Hub profile. All right, but before that, what we need to do is we need
to tell Jenkins what our stages are and what to do in each one of them. For this purpose, we will write the Jenkins pipeline specification in a Jenkinsfile. So let me show you how the Jenkinsfile looks; just click on it. So this is what I have written in my Jenkinsfile, right? It's pretty self-explanatory. First I've defined my application: I just clone the repository that I have, then build the image. This is the target image name, which is made up of my Docker Hub username and the repository name, right? We build that image, then test it; we are just going to print "test passed", and then finally push it onto Docker Hub, right? So this is the URL of Docker Hub, and my credentials are actually saved in Jenkins under a Docker Hub credentials ID.
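The file is small, and while I am not reproducing it word for word, a declarative Jenkinsfile with these four stages could be sketched roughly like this; the image name, the registry URL and the credentials ID are placeholders you would replace with your own values, not the exact ones used in the demo.

pipeline {
    agent any
    stages {
        stage('Clone repository') {
            steps {
                // pull the source, Dockerfile and tests from the Git repository
                checkout scm
            }
        }
        stage('Build image') {
            steps {
                script {
                    // <username>/<repo> is a placeholder for your Docker Hub target
                    app = docker.build("<username>/<repo>")
                }
            }
        }
        stage('Test image') {
            steps {
                // the demo keeps this stage simple and just reports success
                echo 'Test passed'
            }
        }
        stage('Push image') {
            steps {
                script {
                    // 'docker-hub-credentials' must match the credentials ID saved in Jenkins
                    docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
                        app.push('latest')
                    }
                }
            }
        }
    }
}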
So, let me just show you how you can save those credentials. Go to the Credentials tab, then click on System and then on Global credentials. Now over here, you can go ahead and click on Update, and you need to provide your username, your password and your Docker Hub credential ID, whichever ID you are going to reference in the Jenkinsfile, right? So, let me just type the password again. All right. Now we
need to tell Jenkins two things where to find our code and what credentials to use to publish
the docker image, right? So I've already configured my project. Let me just go ahead and show
you what I have written there. So the first thing is the name of my project, right, which I was showing you; when you create a new item, there's an option where you need to give the name of your project, and I've chosen a pipeline project. So if I
have to show you the pipeline project you can go to new item. And this is what I've
chosen as the kind of project. Then I have clicked on Build Triggers. So basically this will poll my SCM, the source code management repository, every minute; whenever there is a change in the source code it will pull that, and it will repeat the entire process every minute. Then under Advanced Project Options I've selected Pipeline script from SCM; here you can either write the pipeline script directly or you can choose Pipeline script from source code management, where the kind of source code management is Git. Then I've provided the link to my repository, and that's all I have done. When I scroll down there's nothing else, so I can just click on Apply and Save.
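Just for reference, the Build Triggers option I used is Poll SCM, and Jenkins takes a cron-style schedule there; a schedule that checks the repository every minute looks like this (this is the generic syntax, not a value read off the demo screen).

# Poll SCM schedule: minute hour day-of-month month day-of-week
# five asterisks means "check the repository every minute"
* * * * *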
So I've already built this project once, but let me just go ahead and do it again. All right, so it has started. First, it will clone the repository
that I have. You can find all the logs. Once you click on this blue color ball and you
can find the logs here as well. So once you click here, you'll find it over here as well.
And similarly the logs are present here also. So now we have successfully built our image, we have tested it, and now we are pushing it onto Docker Hub. So we have successfully pushed our image onto Docker Hub as well. Now if I go back to my profile and I go to my repository here, you can find the image is already present; I have actually pushed it multiple times. So this is how you will execute the practical. It was very easy, guys. So let me
just give you a quick recap of all the things we have done first. I told you how you can
write a Dockerfile in order to create a Docker image of a particular application. We were basing our image on the official Node image present on Docker Hub, right, which already contains all the dependencies, and it makes the Dockerfile look very small. After that, I built an image using the Dockerfile. Then I explained to you how you can use Jenkins in order to automate the task of cloning a repository, then building a Docker image, testing the Docker image and finally uploading it onto Docker Hub. We did that automatically with the help of Jenkins. I told you where you need to provide the credentials, what tags are, and how you can write a Jenkinsfile. The next part of the use case is that different teams, be it staging or production, can actually pull the image that we have uploaded onto Docker Hub and run as many containers as they want. Hey everyone, this is Reyshma from Edureka,
and in today's tutorial we're going to learn about Git and GitHub. So without any further
Ado, let us begin this tutorial by looking at the topics that we'll be learning today.
So at first we will see what is Version Control and why do we actually need Version Control
after that? We'll take a look at the different version control tools, and then we'll see all about GitHub and Git, also taking into account a case study of Dominion Enterprises and how they're using GitHub. After that, we'll take a look at the features of Git, and finally we're going to use all the Git commands to perform all the Git operations. So this is
exactly what we'll be learning today. So we're good to go. So let us begin with the first
topic. What is Version Control? Well, you can think of Version Control as the management.
System that manages the changes that you make in your project till the end the changes that
you make might be some kind of adding some new files or you're modifying the older files
by changing the source code or something. So what the version control system does is
that every time you make a change in your project? It creates a snapshot of your entire
project and saves it and these snapshots are actually known as different versions. Now
if you're having trouble with the word snapshot just consider that snapshot is actually the
entire state of your project at a particular time. It means that it will contain what kind
of files your project is storing at that time and what kind of changes you have made. So
this is what a particular version contains now, if you see the example here, let's say
that I have been developing my own website. So let's say that in the beginning I just had only one web page, which is called index.html, and after a few days I have added another web page to it, which is called about.html, and I have made some modifications in about.html by adding some kind of pictures and some kind of text. So, let's
see what actually the Version Control System stores. So you'll see that it has detected
that something has been modified and something has been created. For example, it is storing
that about.html is created and some kind of photo is created or added into it, and let's
say that after a few days I have changed the entire page layout of the about.html page. So again, my version control system will detect some kind of change and will say that about.html has been modified, and you can consider all of these three snapshots as different versions. So when I only have my index.html web page and I do not have anything else, this is my version 1; after that, when I added another web page, this is going to be my version 2; and after I have changed the page layout of my web page, this is my version 3. So this is how a version control system stores different versions. So I hope that
you've all understood what is a version control system and what are versions so let us move
on to the next topic and now we'll see why do we actually need Version Control? Because
you might be thinking, why should I need version control? I know the changes that I have made, and maybe I'm making these changes just because I'm correcting my project or something. But there are a number of reasons why we need version control, so
let us take a look at them one by one. So the first thing that version control system
avails us is collaboration. Now imagine that there are three developers working on a particular
project and everyone is working in isolation or even if they're working in the same shared
folder. So there might be conflicts sometimes when each one of them are trying to modify
the same file. Now, let's say they are working in isolation. Everyone is minding their own
business. Now developer one has made some changes, XYZ, in a particular application, and in the same application developer two has made some other changes, ABC, and they keep doing the same thing: they're making the same modifications to the same
file, but they're doing it differently. So at the end when you try to collaborate or
when you try to merge all of their work together, you'll come up with a lot of conflicts and
you might not know who have done what kind of changes and this will at the end end up
in chaos. But with Version Control System, it provides you with a shared workspace and
it continuously tells you who has made what kind of change, or what has been changed. So you'll always get notified if someone has made a change in your project. So with a version control system, collaboration is available between all the developers, and you can visualize everyone's work properly, and as a result your project will always evolve as a whole from the start. It will save a lot of time for you because there won't be many conflicts, because obviously if developer A sees that someone has already made certain changes, he won't go for those, right? He can carry out his other work and make some other changes without interfering with that work. Okay, so we'll move on to the next reason why we need a version control system, and this is one of the most important ones. I'll tell you why now. The next reason is
because of storing versions because saving a version of your project after you have made
changes is very essential and without a Version Control System. It can actually get confusing
because there might be some kind of questions that will arise in your mind when you are
trying to save a version the first question might be how much would you save would you
just save the entire project or would you just save the changes that you made now? If
you only save the changes it'll be very hard for you to view the whole project at a time.
And if you try to save the entire project at every time there will be a huge amount
of unnecessary and redundant data lying around because you'll be saving the same thing that
has been remaining unchanged again and again, and it will cover up a lot of your space. After that, the next problem comes: how do I actually name these versions? Now, even if you are a very organized person, you might come up with a very comprehensive naming scheme, but as soon as your project starts growing and the versions start piling up, there is a pretty good chance that you'll actually lose track of naming them. And finally the
most important question. Is that how do you know what exactly is different between these
versions now you ask me that? Okay. What's the difference between version 1 and version
2 what exactly was changed you need to remember or document them as well. Now when you have
a version control system, you don't have to worry about any of that. You don't have to
worry about how much you need to save or how you name them, and you don't have to remember what exactly is different between the versions, because the
Version Control System always acknowledges that there is only one project. So when you're
working on your project, there is only one version on your disk. And everything else
all the changes that they've made in the past are all neatly packed inside the Version Control
System. Let us go ahead and see the next reason now version control system provides me with
a backup. Now the diagram that you see here is actually the layout of a particular distributed
Version Control System here. You've got your central server where all the project files
are located and apart from that every one of the developers has a local copy of all
the files that is present in the central server inside their local machine and this is known
as the local copies. So what the developers do is that every time they start coding at
the start of the day, they actually fetch all the project files from the central server
and store it in the local machine, and after they are done working they actually transfer all the files back into the central server. So at any time you'll always have a local copy in your local machine at times of crisis. Like, maybe let's say that your central server
gets crashed and you have lost all your project files. You don't have to worry about that
because all the developers are maintaining a local copy the same exact copy of all the
files that is related to your project that is present in the central server; it is there in your local machine. And even if, let's say, a particular developer has not updated his local copy with all the files when the central server gets crashed, there is always going to be someone who has already updated it, because obviously there is going to be a huge number of collaborators working on the project. So a particular developer can communicate with other developers and fetch all the project files from another developer's local copy as well. So it is very reliable when you have a version control system, because you're always going to have a backup of all your files. So the next thing with which version control helps us is to analyze
my project because when you have finished your project you want to know that how your
project has actually evolved so that you can make an analysis of it and you can know that
what could you have done better or what could have been improved in your project? So you
need some kind of data to make an analysis and you want to know that what is exactly
changed and when it was changed and how much time did it take, and a version control system
actually provides you with all the information because every time you change something version
control system provides you with the proper description of what was changed. And when
was it changed you can also see the entire timeline and you can make your analysis report
in a very easy way because you have got all the data present here. So this is how a version
control system helps you to analyze your project as well. So let us move ahead and let us take
a look at the version control tools, because in order to incorporate a version control system
in your project, you have to use a Version Control tool. So let us take a look at what
is available. What kind of tools can I use to incorporate version control system. So
here we've got the four most popular version control tools. They are Git, and this is what we'll be learning in today's tutorial, we'll be learning how to use Git; and apart from Git you have got other options as well. You've got Apache Subversion, which is also popularly known as SVN, and CVS, which is the Concurrent Versions System. They both are centralized version control tools. It means that they do not provide all the developers with a local copy; all the contributors or all the collaborators are actually working directly with the central repository only, they don't maintain a local copy, and they are kind of becoming obsolete because everyone prefers a distributed version control system where everyone has a local copy. Mercurial, on the other hand, is very similar to Git; it is also a distributed version control tool. But we'll be learning all about Git here; that's why Git is highlighted in yellow. So let's move ahead. So this is
the interest over time graph and this graph has been collected from Google Trends and
this actually shows you that how many people have been using what at what time so the blue
line here actually represents Git, the green is SVN, the yellow is Mercurial and the red is CVS. So you can see that from the start Git has always been the most popular version control tool as compared to SVN, Mercurial and CVS, and it has always kind of been a bad day for CVS, but Git has always been popular. So why not use Git, right? So there's not much more to say about that, and a lot of my fellow attendees agree with me: we should all use Git, and we're going to learn how to use Git in this tutorial. So let us move ahead
and let us all learn about git and GitHub right now. So the diagram that you see on
my left is actually the diagram which represents that what exactly is GitHub and what exactly
is get now I've been talking about a distributed version control system and the right hand
side diagram actually shows you the typical layout of a distributed Version Control System
here. We've got a central server or a central repository now, I'll be using the word repository
a lot from now on just so that you don't get confused. I'll just give you a brief overview.
I'll also tell you in detail. What is the repository and I'll explain you everything
later in this tutorial, but for now just consider repository as a data space where you store
all the project files, any kind of files that are related to your project, in there. So don't get confused when I say repository instead of server or anything else. So in a distributed version control system, you've got a central repository and you've got local repositories as well, and each of the developers at first makes the changes in their local repository, and after that they push or transfer those changes into the central repository. They also update their local repositories with all the new files that are pushed into the central repository by an operation called pull. So this is how they fetch data from the central repository. And now if you see the diagram again on the left, you'll know
that GitHub is going to be my central repository and Git is the tool that is going to allow
me to create my local repositories. Now, let me exactly tell you what is GitHub. Now people
actually get confused between Git and GitHub; they think that it's kind of the same thing,
maybe because of the name they sound very alike. But it is actually very different.
Well git is a Version Control tool that will allow you to perform all these kind of operations
to fetch data from the central server and to just push all your local files into the
central server. So this is what get will allow you to do it is just a Version Control Management
tool. Whereas in GitHub. It is a code hosting platform for Version Control collaboration.
So GitHub is just a company that allows you to host your central repository in a remote
server. If you want me to explain in easy words, you can consider GitHub as a social
network, which is very much similar to Facebook; the only difference is that this is a social network for developers. Where on Facebook you're sharing all your photos and videos or any kind of statuses, what the developers do on GitHub is share their code and their projects for everyone to see, and how they have worked on them. So that
is GitHub. There are certain advantages of a distributed Version Control System. Well,
the first thing that I've already discussed was that it provides you with the backup.
So if at any time your central server crashes, everyone will have a backup of all their files
and the next reason is that it provides you with speed because Central servers typically
located on a remote server and you have to always travel over a network to get access
to all the files. So if at sometimes you don't have internet and you want to work on your
project, so that will be kind of impossible because you don't have access to all your
files, but with a distributed Version Control System, you don't need internet access always
you just need internet when you want to push or pull from the central server apart from
that you can work on your own; your files are all inside your local machine, so fetching them into your workspace is not a problem. So those are all the advantages that you get with a distributed version control system, which a centralized version control system cannot actually provide you. So now let us take a look at a GitHub case study of Dominion
Enterprises. So Dominion Enterprises is a leading marketing services and Publishing
company that works across several Industries and they have got more than 100 offices worldwide.
So they have distributed technical teams that support the development of a range of websites, and they include popular websites like ForRent.com, boats.com and Homes.com. All the Dominion Enterprises websites together actually get more than tens of millions of unique visitors every month, and each of the websites that they work on has a separate development team, and all of them have unique needs and workflows of their own. All of them were working independently, and each team has their own goals, their own projects and budgets, but they actually wanted to share resources and they wanted everyone to see what each of the teams is actually working on. So basically they wanted transparency. Well, they needed
a platform that was flexible enough to support a variety of workflows. And that would provide
all the Dominion Enterprises developers around the world with a secure place to share code and work together, and for that they adopted GitHub as the platform. And the reason for choosing GitHub is that all the developers across Dominion Enterprises were already using github.com. So when the time came to adopt a new version control platform, GitHub Enterprise obviously seemed like a very intuitive choice, and because all the developers were already familiar with GitHub, the learning curve was also very small, so they could start contributing code right away into GitHub. And with GitHub, all the development teams were provided a place where they can always share their code and what they're working on. So at the end everyone has got a very
secure place to share code and work together. And as Joe Fuller, the CIO of Dominion Enterprises, says, GitHub Enterprise has allowed them to store the company source code in a central, corporately controlled system. Dominion Enterprises actually manages more than 45 websites, and it was very important for Dominion Enterprises to choose a platform that made working together possible. And this wasn't just a matter of sharing Dominion Enterprises' open
source projects on GitHub; they also had to consider the implications of storing private code while making their work more transparent across the company as well. They were also using Jenkins to facilitate a continuous integration environment, and in order to continuously deliver
their software. They have adopted GitHub as a Version Control platform. So GitHub actually
facilitated a lot of things for Dominion Enterprises, and with that they were able to incorporate a continuous integration environment with Jenkins; they were actually sharing their code and making software delivery even faster. So this is how GitHub helped, and not just Dominion Enterprises; I'm sure this might be common to a lot of other companies as well. So let us move forward. So now this is the topic that we were waiting for, and
now we'll learn what Git is. So Git is a distributed version control tool, and it supports a distributed, non-linear workflow. Git is the tool that actually facilitates all the distributed version control system benefits, because it will allow you to create a local repository on your local machine, and it will help you access your remote repository to fetch files from there or push files to it. So Git is the tool that you require to perform all these operations, and I'll be telling you all about how to perform these operations using Git later in this tutorial. For now, just think of Git as a tool that you actually need to do all kinds of version control tasks. So we'll move on and we'll see the different
features of Git now. So these are the different features: Git is distributed, Git is compatible, Git provides you with a non-linear workflow, it avails you branching, it's very lightweight, it provides you with speed, it's open source, it's reliable, secure and economical. So let
us take a look at all these features one by one. So the first feature that we're going
to look into is that it is distributed. Now, I've been telling you it's a distributed version control tool; that means the feature Git provides you is that it gives you the power of having a local repository, and it lets you have a local copy of the entire development history, which is located in the central repository, and it will fetch all the files from the central repository to keep your local repository always updated. And we are calling it distributed because, let's say, there might be a number of collaborators or developers, and they might be living in different parts of the world; someone might be working from the United States and one might be in India. So the work on the project is actually distributed, everyone has a local copy, so it is distributed worldwide, you can say. So this is what distributed
actually means. So the next feature is that it is compatible. Now, let's say that you
might not be using Git in the first place, but you have a different version control system already installed, like SVN (Apache Subversion) or CVS, and you want to switch to Git because obviously you're not happy with the centralized version control system and you want a more distributed version control system. So you want to migrate from SVN to Git, but you are
worried that you might have to transfer all the files all the huge amount of files that
you have in your SVN repository into a Git repository. Well, if you are afraid of doing that, let me tell you that you don't have to be anymore, because Git is compatible with SVN repositories as well. You just have to download and install Git in your system, and you can directly access the SVN repository over a network, which is the central repository. So the local repository that you'll have is going to be a Git repository, and if you don't want to change your central repository, you can do that as well: you can use git svn and directly access all the files in your project that are residing in an SVN repository. So you don't have to change that, and it is compatible with existing systems and protocols; there are existing protocols like SSH, and Git obviously uses SSH to connect to the central repository as well. So it is very compatible with all the existing things, and when you are migrating to Git, when you are starting to use Git, you don't have to actually change a lot of things. So has everyone understood these two features so far? Okay, the next feature of Git is that it supports
non-linear development of software. Now, when you're working with Git, Git actually records the current state of your project by creating a tree graph from the index, and as you know, a tree is a non-linear data structure; it is usually in the form of a directed acyclic graph, which is popularly known as a DAG. So this is how Git actually facilitates non-linear development of software, and it also includes techniques where you can navigate and visualize all of the work that you are currently doing. And when I'm talking about non-linearity, the way Git actually facilitates non-linear development is by branching. Now, branching actually allows you to have non-linear software
development. And this is the Git feature that actually makes Git stand apart from nearly every other version control management tool, because Git is the only one which has this kind of branching model. So Git allows, and actually encourages, you to have multiple local branches, and all of the branches are independent of each other; the creation, merging and deletion of all these branches takes only a few seconds. And there is a thing called the master branch. It means the main branch, which runs from the start of your project to the end of your project, and it will always contain the production-quality code; it will always contain the entire project.
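Just so you can picture it, a typical branching round-trip is only a handful of commands; feature-login below is a made-up branch name, not something we create in this tutorial.

# create a new branch and switch to it
git checkout -b feature-login
# work, stage and commit on the branch as usual
git add .
git commit -m "work on the login feature"
# switch back to master and merge the branch in
git checkout master
git merge feature-login
# delete the branch once it has been merged
git branch -d feature-login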
After that, the next feature is that Git is very lightweight. Now you might be thinking that, since we're using local repositories on our local machine and
we're fetching all the files that are in the central repository. And if you think that
way, you can see that there are maybe hundreds of people pushing their code into the central repository and updating my local repository with all those files, so the data might be very huge. But actually Git uses a lossless compression technique and it compresses the data on the client side. So even though it might look like you've got a lot of files, when it actually comes to storing the data in your local repository, it is all compressed and it doesn't take up a lot of space. Only when you're fetching your data from the local repository into your workspace does it convert it, and then you can work on it; and whenever you push it again, it compresses it again and stores it in a very minimal space on your disk. And after that, it provides you with a lot of speed. Now, since
you have a local repository and you don't have to always travel over a network to fetch
files, it does not take much time to get files into your workspace from your local repository; it is actually a few times faster than fetching data from a remote repository, because there you obviously have to travel over a network to get the data or the files that you want. And Mozilla has actually performed some performance tests, and it was found that Git is actually one order of magnitude faster than other version control tools, which is equal to 10 times faster than other version control tools. And the reason for that is that Git is actually written in C, and C is not like other high-level languages; it is very close to machine language, so it reduces the runtime overheads and makes all the processing very fast. So Git
is very small and Git is very fast. And the next feature is that it is open source. Well, you know that Git was actually created by Linus Torvalds, and he's the famous man who created the Linux kernel, and he actually used Git in the development of the Linux kernel. Now, they were using a version control system called BitKeeper first, but it was not open source; the owner of BitKeeper actually made it a paid version, and this actually got Linus Torvalds mad. So what he did is that he created his own version control tool, and he came up with Git, and he made it open source for everyone, so the source code is available and you can modify it on your own and you can get it for free.
So that is one more good thing about Git. After that, it is very reliable. Like I've been telling you since the start, you have a backup of all the files in your local repository. So if your central server crashes, you don't have to worry; your files are all saved in your local repository, and even if something is not in your local repository, it might be in some other developer's local repository, and you can tell him whenever you need that data. If you lose the data, then after your central server crash is resolved, he can directly push all the data into the central repository, and from there everyone can always
have a backup. So the next thing is that Git is actually very secure. Now, Git uses SHA-1 to name and identify objects. So whenever you make a change, it actually creates a commit object, and after you have made changes and you have committed those changes, it is actually very hard to go back and change it without other people knowing it, because whenever you make a commit, SHA-1 actually converts it. What is SHA-1? Well, it is a kind of cryptographic algorithm; it is a message digest algorithm that actually converts your commit object into a 40-character hexadecimal hash. Now, message digests use techniques and algorithms like MD4 and MD5, and SHA is considered to be very secure, because even the National Security Agency of the United States of America uses SHA. So if they're using it, you might know that it is very secure as well. And if you want to know what MD5 and message digests are, I'm not going to take you through the whole cryptographic algorithm about how they make that cipher and all; you can Google it and you can learn what SHA is. But the main concept of it is that after you have made changes, you cannot
deny that you have made changes, because it will store them and everyone can see them; it will create a commit hash for you. So everyone will see it, and this commit hash is also useful when you want to revert back to previous versions: you want to know which commit exactly caused what problem, and if you want to remove that commit or remove that version, you can do that, because SHA-1 will give you the hash of every commit.
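Once you have a repository with a couple of commits, you can see these hashes yourself; the hashes and messages below are invented just to show the shape of the output.

# show each commit with its abbreviated SHA-1 hash and message
git log --oneline
# example output (hypothetical):
# f3a2b1c Modified about.html page layout
# 9d8e7f6 Added about.html
# 1a2b3c4 Added index.html
# show the full 40-character hash of the latest commit
git rev-parse HEAD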
So we move on and see the next feature, which is economical. Now, Git is actually released under
the general public license and it means that it is for free. You don't have to pay any
money to download Git on your system; you can have Git without burning a hole in your pocket. And all the heavy lifting is done on the client side, because everything you do, you do it in your own workspace and you push it into the local repository first, and only after that is it pushed to the central server. So it means that people are only pushing into the central server when they're sure about their work, and they're not experimenting on the central repository. So your central repository can be simple enough; you don't have to worry about having very complex and very powerful hardware, and a lot of money can be saved on that as well. So Git is free, Git is small, and Git provides you with all the cool features that you would actually want. So these are all the Git features.
So we'll go ahead to the next topic. First, we'll see what a repository is. Now, as GitHub says, it is a directory or storage space where your projects live. It can be local to a folder on your computer, like your local repository, or it can be a storage space on GitHub or another online host, meaning your central repository. And you can keep your code files, text files, image files, you name it, inside a repository, everything that is related to your project. And like I have been chanting
since the start of this tutorial that we have got two kinds of repositories. We've got the
central repository and we've got the local repository and now let us take a look at what
these repositories actually are. So on my left hand side you can see all about the central repository, and on the right hand side this is all about my local repository, and
the diagram in the middle actually shows you the entire layout so the local repository
will be inside my local machine and my central repository for now is going to be on GitHub.
So my central repository is typically located on a remote server and like I just told you
it is typically located on GitHub, and my local repository is going to be in my local machine, where it resides as a .git folder inside your project's root. The .git folder is going to be inside your project's root, and it will contain all the templates, all the objects and every other configuration file when you create your local repository; and since you're pushing all the code, your central repository will also have the same .git folder inside it. And the sole purpose of having a central repository is so that all the actors, or all the developers, can actually share
and exchange the data because someone might be working on a different problem and someone
might be needing help with that. So what he can do is push all the code, all the problems that he has solved or something that he has worked on, to the central repository, and everyone else can see it, and everyone else can pull his code and use it for themselves as well. So this is just meant for sharing data. Whereas the local repository, only you can access it and it is only meant for your own work, so you can work in your local repository in isolation and no one will interfere; and after you are done, after you are sure that your code is working and you want to show it to everyone, you just transfer it or push it into the central repository. Okay, so now we'll be seeing the Git operations
and commands. So this is how we'll be using it: there are various operations and commands that will help us do all the things that we were just talking about, like pushing changes. These are all Git operations. So we'll be performing all these operations: we'll be creating repositories with commands, we'll be making changes in the files that are in the repositories with commands, we'll also be doing the parallel, non-linear development that I was just talking about, and we'll also be syncing our repositories so that our central repository and local repository are connected. So I'll show you how to do
that one by one. So the first thing that we need to do is create repositories, so we need
a central repository and we need a local repository. Now, we will host our central repository on GitHub, so for that you need an account on GitHub and you create a repository there; and for your local repository you have to install Git in your system. And if you are working on a completely new project, if you want to start something fresh and new, you can just use git init to create your repository; or if you want to join an ongoing project, and you're new to the project and have just joined, what you can do is clone the central repository using the command git clone.
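Just to show the shape of those two commands before we use them, they look like this; the URL is a placeholder, not the repository we are about to create.

# start a brand new local repository in the current directory
git init
# or join an ongoing project by cloning its central repository (placeholder URL)
git clone https://github.com/<username>/<repository>.git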
So let us do that. Let's first create a GitHub account and create a repository on GitHub. As I said, first you need to go to github.com, and if
you don't have an account, you can sign up for GitHub and here you just have to pick
a username that has not already been taken, you just have to provide your email address and a password, and then just click this green button here and your account will be created. It's very easy, you don't have to do much, and after that you just have to verify your email and everything. After you're done with all of that, you can just go and sign in. I already have an account, so I'm just going to sign in here. After you're signed in, you'll
find this page here. So you'll get two buttons where you can read the guide of how to use
GitHub or you can just start a project right away. Now, I'll be telling you all about GitHub
so you don't have to click this button right now. So you can just go ahead and start a
project. So now, it is recommended that for every project you maintain a unique repository; this is because it's very healthy and keeps things very clean, because if you
are storing just the files related to one project in a repository, you won't get confused
later. So when you're creating a new repository, you have to provide with a repository name
Now, I'm just going to name it git-github. And you can provide a description
of this repository. And this is optional. If you don't want to you can leave it blank
and you can choose whether you want it public or private. Now, if you want it to be private, you have to pay some amount; like this will cost you $7 a month. And what is the benefit of having a private repository? It is that only you can see it: if you don't want to share your code with anyone and you don't want anyone to see it, you can do that in GitHub as well. But for now, I'll just leave it public; I just want it for free and let everyone see the work I have done. So we'll just leave it public for now, and after
that you can initialize this repository with the read me. So the readme file will contain
the description of your files. This is the first file that is going to be inside a repository
when you create the repository, so and it's a good habit to actually initialize your repository
of the readme, so I'll just click this option. This is the option to add git ignore. Now.
There might be some kind of files that you don't want when you're making operations,
like push or pull you don't want those files to get pushed or pulled like it might be some
kind of log files or anything, so you can add those files in .gitignore here. Right now I don't have any such files, since this is just the start of our project, so I will just ignore this .gitignore for now. And then you can actually add a license as well; you can just go through what these licenses actually are, but for now I'll just leave it as none. And after that just click on this green button here to create
the repository. And so there it is; you can see this is the initial commit, you have initialized your repository with the readme, and this is your readme file. Now if you want to make changes to the readme file, just click on it and click on the edit pencil icon that is in here, and you can make changes to the readme file if you want to write something. Let's just write a description saying this is for our tutorial purpose,
and that's it. Just keeping it simple. And after that you've made changes. The next thing
that you have to do is commit the changes, so you can just go down and click on this green Commit changes button here. And it's done. So you have updated README.md, and this is your commit hash, which you can see here. So if you go back to
your repository, you can see that something has been updated, and it will show you when your last commit was; it will even show you the time. For now you're on the branch master, and this will actually show you all the logs. So since only I'm contributing
here. So this is only one contributor and I've just made two commits. The first one
was when I initialized it and right now when I modified it and right now I have not created
any branches. So there is only one branch. So now my central repository has been created.
So the next thing that I need to do is create a local repository in my local machine. Now
I have already installed Git in my system. I am using a Windows system, so I have installed Git for Windows. If you want some help with the installation, I have already written
a Blog on that. I'll leave the link of the blog in the description below. You can refer
to that blog and install Git in your system. Now, I've already done that. So let's say that I want my project to be in the C drive, and I'm just creating a folder here for my project. I'll just name it edureka project, and let's say that this is where I want my local repository to be. So the first thing that I'll do is right-click, and I'll
click this option here git bash here. And this will actually open up a very colorful
terminal for you to use and this is called the git bash emulator. So this is where you'll
be typing all your commands and you'll be doing all your work, in Git Bash here. So in order to create your local repository, the first thing that you'll do is type in the command git init and press enter. So now you can see that it has initialized an empty Git repository on this path. So let's see, and you can see that a .git folder has been created here, and if you look inside you can see that it contains all the configurations and the object details and everything. So your repository is initialized.
This is going to be your local repository. So after we have created our repositories, it
is very important to link them because how would you know which repository to push into
and how will you just pull all the changes or all the files from a remote repository?
How would you know, if they're not connected properly? So in order to connect them, the first thing that we need to do is add a remote, and we're going to call our remote repository origin, and we'll be using the command git remote add origin to add it, so that we can pull files from our GitHub or central repository. And in order
to fetch files. We can use git pull and if you want to transfer all your files or push
files into GitHub will be using git push. So let me just show you how to do that. So
we are back in the local repository. And as you can see now that I have not got any kind
of files. And if you go to my central repository, you can see that I've got a readme file. So
the first thing that I need to do is to add this remote repository as my origin. So for
that I'll clear my screen first. You need to use this command: git remote add origin, followed by the link of your central repository, and let me just show you where you can find
this link. So when you go back into your repository, you'll find this green button here, which
is Clone or download; just click here, and this is the HTTPS URL that you want. So just copy it to your clipboard, go back to your Git Bash, paste it and press enter. So your origin has been added successfully, because it's not showing any kind of errors. So now what we'll do is perform a git pull; it means we will fetch all the files from the central repository into my local repository. So just type in the command git pull origin master, and you can see that it has done some fetching from the master branch into the master branch. Let us see whether all the files have been fetched or not. Let us go back to our local repository, and there is the readme file that was in my central repository, and now it is in my local repository. So this is how you actually update your local repository from the central repository: you perform a git pull and it will fetch all the files from the central repository into your local machine.
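So the syncing commands we have used so far, plus the push we will do a bit later, look like this when you put them together; the URL is whatever your own repository's Clone or download button shows.

# connect the local repository to the central one on GitHub and call it "origin"
git remote add origin https://github.com/<username>/git-github.git
# fetch everything from the central repository's master branch into the local one
git pull origin master
# later, once there is local work to share, the reverse direction is
git push origin master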
So let us move ahead to the next operation. Now, I've told you that in order to sync repositories, you
also need to use a git push, but since we have not done anything in our local repository
now, I'll perform the git push later on, after I show you all the operations, and we'll be doing a lot of things. So at the end I'll be performing the git push and push all the changes into my central repository. And actually, that is how you should do it; it's a good habit and a good practice if you're working with GitHub and Git that when you start working, the first thing you do is a git pull to fetch all the files from your central repository, so that you get updated with all the changes that have been recently made by everyone else; and after you're done working, after you're sure that your code is running, only then make the git push so that everyone can see it. You
should not make very frequent changes into the central repository because that might
interrupt the work of your other collaborators or other contributors as well. So let us move
ahead and see how we can make changes. So now, Git actually has a concept of an intermediate layer, called the index, that resides between your workspace and your local repository. When you want to commit changes, or make changes in your local repository, you have to add those files to the index first. So this is the layer that is between your workspace and your local repository. Now, if your files are not in the index, you cannot make a commit, which means you cannot make changes in your local repository. For that you have to use the command git add, and you might get confused about which files are in the index and which are not; if you want to see that, you can use the command git status. And after you have added the changes to the index, you can use the command git commit to make the changes in the local repository.
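So the basic flow we are about to walk through is just these three commands; edureka1.txt here stands for whatever file you have created.

# see which files are untracked or modified and what is already staged in the index
git status
# stage a file (add it to the index); edureka1.txt is the example file used in this demo
git add edureka1.txt
# record the staged changes in the local repository with a message
git commit -m "adding first commit"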
Now, let me tell you what exactly a git commit is; everyone talks about committing changes when you're making changes, so let us just understand what a git
commit. So let's say that you have not made any kind of changes or this is your initial
project. So what a commit is, is a kind of object which is actually a version of your project. Let's say that you have made some changes and you have committed those changes; what your version control system will do is create another commit object, and this is going to be your different version with the changes. So your commit object is actually going to contain a snapshot of the project with what has changed. So this is what a commit is. So I'll just go ahead and show you how to commit
changes in your local repository. So we're back into our local repository. And so let's
just create some files here. Now, if you're developing a project, you might only be contributing your source code files into the central repository, but I'm not going to get into coding here, so we're just going to create some text files and write something in them, which is pretty much the same as if you were working on code and storing your source code in your repositories. So I'll just go ahead and create a simple text file, name it edu1, and write something; I'll just write "first file". Save this file and close it. Now remember that even though I have created it inside this repository folder, this folder is actually showing my workspace, and the file is not in my local repository yet because I have not committed it. So what I'm going to do is see which files are in my index. But before that I'll clear my screen, because I don't like junk on my screen. Okay. So the first thing we're going to see is which files are added in my index, and for that, as I just told you, we're going to use the command git status. You can see that it is calling edu1.txt, which we just wrote, an untracked file. Untracked files are those which are not added to the index yet. This one is newly created and I have not explicitly added it to the index. So if I want to commit changes in edu1.txt, I will have to add it to the index, and for that I'll just use the command git add and the name of the file, which is edu1.txt. And it has been added. Now let us check the status again, so we'll use git status, and you can see that under changes ready to be committed there is edu1.txt, because it's in the index, and now you can commit the changes to your local repository.
In order to commit, the command you should use is git commit -m, because whenever you are committing you always have to give a commit message, so that everyone can see who made the commit and what exactly was changed. This commit message is mainly for your purpose, so that you can see what exactly was changed; but even if you don't write it, the version control system is still going to record the commit, and if you have configured your git it will always show which user committed the change. So, as I was saying about writing a commit message, I'm just going to write something like "adding first commit" and press enter. You can see one file changed, something has been inserted; so the changes are finally committed in my local repository. And I'll show you how git actually stores all these commits after I show you how to commit multiple files together. So let's just go back into our local repo folder and we'll just create some more
text files. I'm just going to name one edu2 and create another one, name it edu3. Let's just write something over here; we'll just say "second file". So let's go back to our git bash terminal and now let us see git status. You can see that it is showing that edu2 and edu3 are not in my index, and if you remember, edu1 was already in the index. Actually, let me just go back and make some modifications in edu1 as well; I'm going to write "modified one". So let's see git status again, and you can see it is showing that edu1 is modified and that there are untracked files, edu2 and edu3, because I haven't added them to my index yet. So now, Sebastian and Jamie, you have been asking me how to add multiple files together. I'm going to add all these files at once, and for that I'm just going to use git add -A with a capital A. Just press enter and now see git status, and you see that all the files have been added to the index at once. And it's similar with commit as well: now that you have added all the files to the index, I can also commit them all at once. Let me just show you how to do that; you just have to write git commit with a small a. So if you want to commit all, you use a small a in the case of git commit, whereas in the case of git add, if you want to add all the files, you use a capital A. Just remember that difference, and add a message. So you can see three files have been changed (a quick summary of these two flags is just below).
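A minimal sketch of the two flags being discussed (the commit message is just the one used in this demo):

    git add -A                           # capital A: stage all new and modified files at once
    git commit -a -m "adding 3 files"    # small a: commit every tracked file that has changes
                                         # note: -a on its own does not pick up brand-new, untracked files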
Now let me show you how git actually stores all these commits. You can perform an operation called git log, and you can see the 40-digit hexadecimal code that I was talking about; this is the SHA-1 hash. You can also see the date, and you have got the commit message that we just provided, where I wrote "adding three files together". It shows the date and the exact time and the author, which is me, because I've already configured it with my name. So this is how you can see commits, and this is actually how a version control system like git stores all your commits.
So let us go back and see the next operation which is how to do parallel development or
non-linear development. And the first operation is branching now, we've been talking about
branching a lot and let me just tell you what exactly is branching and what exactly you
can do with branching. Well, you can think of a branch as a pointer to a commit. Let's say that you've made changes in your main branch. Remember that the main branch I told you about is called the master branch, and the master branch will
contain all the code. So let's say that you're working on the master branch and you've just
made a change and you've decided to add some new feature on to it. So you want to work
on the new feature individually or you don't want to interfere with the master Branch.
So if you want to separate that you can actually create a branch from this commit and let me
show you how to actually create branches. Now let me tell you that there are two kinds of branches: local branches and remote tracking branches. Your remote tracking branches are the ones that connect the branches in your local repository to your central repository, while local branches are something that you only create in your workspace and that only work with the files in your local repository. So I'll show you how to create branches and then everything will be clear to you. So let us go back to our git bash and clear the screen. Right now we are in the master branch, and this indicates which branch you are on right now. So we're in the master branch right now, and we're going
to create a different branch. So for that you just have to type the command git branch
and write a branch name. So let us just call it first branch, and enter. Now you have created a branch, and this first branch will contain all the files that were in the master, because it originated from the master branch. It still shows that you are in the master branch, and if you want to switch to the new branch that you just created, you have to use the command git checkout; moving from one branch to another is called checking out in git. So we're going to use git checkout and the name of the branch. It says switched to first branch, and now you can see that we are in the first branch and we can start doing all the work in our first branch (the commands are sketched just below).
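Roughly, the commands here look like this (I've written the branch name as first_branch, since git branch names cannot contain spaces; use whatever name you actually gave it):

    git branch first_branch      # create a new branch pointing at the current commit
    git checkout first_branch    # switch ("check out") to that branch
    git branch                   # list branches; the one you are on is marked with *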
So let us create some more files in the first branch. Let's go back; this folder is now actually showing the workspace of my first branch. We'll just create another text document and name it edu4, and you can just write something, say "first branch". Save it, and we'll go back. Now we've made some changes, so let us just commit these changes
all at once. So let me just use git add. After that, what do you have to do if you remember
is that you have to perform a git commit? And there it is, one file changed. Now remember that I have only made this edu4 change in my first branch, and it is not in my master branch, because right now we are in the first branch. If I list out all the files in the first branch, you can see that you've got edu1, edu2, edu3 and the readme, which were in the master branch; they are there because this branch originated from the master branch. And apart from that, it has a new file called edu4.txt. Now if you move back into the master branch, let's say we're going back into the master branch, and if you list the files in the master branch, you'll find that there is no edu4, because I've only made the changes in my first branch. So what we have done now is that we have created branches, and we have also understood the purpose of creating branches. Now we're moving on to the next topic. The next thing we'll see is merging. If you're creating branches and you are developing a new feature and you want to add that new feature back, you have to do an operation called merging. Merging means combining the work of different branches all together, and it's very important that after you have branched off from a master branch, you always combine it back in at the end; after you're done working with the branch, always remember to merge it back in. So now we have created branches, and we have made changes in our branch, like adding edu4, and we want to combine that back into our master branch, because, like I told you, your master branch will always contain your production-quality code. So let us now actually start merging those files, because
I've already created branches; it's time that we merge them. So we are back in my terminal, and what we need to do is merge those changes. If you remember, we've got a different file in my first branch, which is edu4, and it's not there in the master branch yet. So what I want to do is merge that branch into my master branch, and for that I'll use a command called git merge with the name of my branch. There is a very important thing to remember when you're merging: you want to merge the work of your first branch into master, so you want master to be the destination. Whenever you're merging, you have to remember to always be checked out on the destination branch. I'm already checked out on the master branch, so I don't have to change it. So I'll just use the command git merge and the name of the branch whose work you want merged into the current branch that you are checked out on. For now I've just got one branch, which is called the first branch, and enter. You can see that one file changed, something has been added, and we are in the master branch right now. So now let us list out all the files in the master branch, and there you see you now have edu4.txt, which was not there before, because I've merged it in. So this is what merging does (see the commands just below).
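As a rough sketch of the merge that was just performed (again using first_branch as an illustrative branch name):

    git checkout master       # always check out the destination branch first
    git merge first_branch    # bring the work of first_branch into master
    ls                        # edu4.txt now shows up in master as well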
Now you have to remember that your first branch is still separate. If you want to go back into your first branch, modify something again in the first branch and keep it there, you can do that; it will not actually affect the master branch until you merge it. So let me just show you an example. I'll just go back to my first branch. Now let us make a change in edu4; I'll just write "modified in first branch". We'll go back, and we'll just commit all these changes, and I'll just use git commit -a. Now remember that git commit with the all flag also serves another purpose: it doesn't only commit all the uncommitted files at once. If your files are already in the index and you have just modified them, it also does the job of adding them to the index again and then committing them. But it won't work if you have never added that file to the index. Now, edu4 was already in the index; after modifying it, I have not explicitly added it to the index again, and if I'm using git commit -a, it will add it to the index for me, because it was already a tracked file, and then it will commit the changes into my local repository. So you see I didn't use the command git add; I just did it with git commit, because it was already a tracked file. So one file has been changed. Now if you just cat it, you can see that it's different; it shows the modification that we have done, which is "modified in first branch". Now let's just go back to my master branch.
Now remember that I have not merged it yet, and my master branch also contains a copy of edu4; let's see what that copy actually contains. You see that the modification has not affected the master branch, because I have only made the modification in the first branch. So the copy that is in the master branch is not the modified copy, because I have not merged it yet. It's very important to remember that if you actually want all the changes that you have made in the first branch, all the things that you have developed in the new branch that you created, make sure that you merge it in; don't forget to merge, or else it will not show any of the modifications. So I hope you've understood why merging is important and how to actually merge different branches together. So we'll
just move on to the next topic, which is rebasing. Rebasing is another kind of merging. The first thing you need to understand about rebase is that it actually solves the same problem as git merge: both of these commands are designed to integrate changes from one branch into another, it's just that they do the same task in a different way. Now, what does rebasing mean? If you look at the workflow diagram here, you've got your master branch and you've got a new branch. When you're rebasing, instead of creating a merge commit which has two parent commits, what rebasing does is place the entire commit history of your branch onto the tip of the master. Now you would ask me, why should we do that? What is the use of that? Well, the major benefit of using a rebase is that you get a much cleaner project history. So I hope you've understood the concept of rebase; let me just show you how to actually do rebasing (a rough sketch of the commands is just below).
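A minimal sketch of the rebase that follows, assuming the same illustrative branch name first_branch and the example files created in this demo:

    git checkout first_branch    # do some more work on the feature branch
    # ...create edu5.txt and edu6.txt, git add -A, git commit...
    git checkout master
    git rebase first_branch      # replay the current branch on top of first_branch;
                                 # here master simply catches up, so its history stays linear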
Okay, so what we're going to do is some more work in our branch, and after that we'll rebase our branch onto master. So we'll just go back to our branch using git checkout first branch, and now we're going to create some more files here: name one edu5 and, let's say, edu6. We're going to write some random stuff, say "welcome to Edureka 1", and the same thing again in the other file, "welcome to Edureka 2". So we have created these, and now we're going back to our git bash and we're going to add all these new files, because this time we do need git add; we cannot do it with just git commit -a, because these are untracked files, the files that I've just created right now. So I'm using git add, and now we're going to commit, and it has been committed. So now if you just list all the files, you can see edu1, 2, 3, 4, 5, 6 and the readme. And if you go back to the master and list out all the files in master, it only has up to edu4; edu5 and edu6 are still only in my first branch and I have not merged it yet. This time I'm not going to use git merge, I'm going to use rebase instead, and you'll see that this actually does the same thing. So for that you just have to use the command. Let us go back to our first branch. Okay, I made a typing error in the branch name; okay, now we've switched to the first branch, and we're going to use the command git rebase
master. Now it is showing that my current branch, first branch, is up to date, just because whatever is in the master branch is already there in my first branch and there were no new commits to be applied. So that is the thing. But if you do it the other way around, I'll show you what happens. So let's just go and check out master and do the rebasing: git rebase first branch. Now what happened is that all the work of the first branch has been attached to the master branch, and it has been done linearly; there was no new merge commit. So now if you list all the files in the master branch, you'll find that you've got edu5 and edu6 as well, which were in the first branch. Basically, rebasing has merged all the work of my first branch into the master, but it happened in a linear way: all the commits that we made in the first branch got attached at the head of the master. So this was all about non-linear development. I have told you
about branching, merging and rebasing; we've pulled changes and committed changes, but remember that I haven't shown you how to push changes yet. Since we're done working in our local repository now, we have made all our final changes and we want to contribute them to our central repository. For that we're going to use git push, and I'm going to show you how to do a git push right now. Before I go ahead and explain a git push, you have to know something about how you set up your repository. If you remember, your GitHub repository is a public repository, which means that you're giving read access to everyone else in the GitHub community, so everyone else can clone or download your repository files. But when you're pushing changes to a repository, you need to have certain access rights, because it is the central repository; this is where you're storing your actual code, and you don't want other people to interfere with it by pushing wrong code or anything like that. So we're going to connect to my central repository via SSH in order to push changes into it. Now, at the beginning, when I was trying to make this connection with SSH, I was facing certain kinds of problems. Let
me go back to the repository and show you: when you click this button, you see that this is the HTTPS URL that we used in order to connect with your central repository, and if you want to use SSH, this is your SSH connection URL. So, in order to connect with SSH, what you need to do is generate a public SSH key and then simply add that key to your GitHub account; after that you can start pushing changes. So first we'll generate the SSH public key, and for that we'll use the command ssh-keygen. There is already an SSH key in that file, so it asks whether I want to overwrite it; yes. So my SSH key has been generated and it has been saved here. If I want to see it, I just use cat and copy it. This is my public SSH key. To add this SSH key, I'll go back into my GitHub account, go to Settings, and click on the option SSH and GPG keys. I already have two SSH keys added, and I want to add my new one, so I'm going to click the button New SSH key. Just make sure that you provide a name for it; I'm going to keep it in order, because I've named the other ones ssh1 and ssh2, so I'm going to call this one ssh3. Then just paste your SSH key in here.
Just copy the key, paste it, and click on this button, which is Add SSH key. Okay, so now the first thing to do is clear the screen, and then you need to use the command ssh -T with the SSH URL that we use, which is git@github.com, and enter. So my SSH authentication has been done successfully. I'll go back to my GitHub account, and if I refresh this, you can see that the key is green; it means that it has been properly authenticated, and now I'm ready to push changes onto the central repository. So we'll just start doing it (the key-related commands are roughly summarized below).
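A minimal sketch of the SSH setup shown here (the key file path is the usual default and is an assumption; yours may differ if you chose another location):

    ssh-keygen                 # generate an SSH key pair, accepting the defaults
    cat ~/.ssh/id_rsa.pub      # print the public key, then paste it into GitHub > Settings > SSH and GPG keys
    ssh -T git@github.com      # test that GitHub accepts the key for authentication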
Let me just tell you one more thing. If you are developing something in your local repository and you have done it in a particular branch, let's say you don't want to push those changes into the master branch of your central repo, your GitHub repository. Instead, whatever work you have done will stay in a separate branch in your GitHub repository, so that it does not interfere with the master branch, and everyone can identify that it is your branch, you created it, and it contains only your work. For that, let me just go to the GitHub repository and show you something. Let's go to the repositories; this is the repository that I have just created today. When you go into the repository, you can see that I have only got one branch here, which is the master branch. If I want to create branches I can create them here, but I would advise you to create all branches from your command line, or from your git bash, in your central repository as well. So let us go back to our branch. Now what I want is for all the work of the first branch in my local repository to make a new branch in the central repository, and that branch in my central repository will contain all the files that are in the first branch of my local repository. So for that I'll just perform git push with the name of my remote, which is origin, and first branch.
And you can see that it has pushed all the changes. So let us verify: let us go back to our repository and refresh it. This is the master branch, and you can see that it has created another branch, which is called the first branch, because I have pushed all the files from my first branch into it, and it has created a new branch in GitHub similar to the first branch in my local repository. Now if we go to branches, you can see that there is not just a single master; we have also got another branch, which is called the first branch. If you want to check out this branch, just click on it, and you can see it has all the files, with all the commit logs, here in this branch. So this is how you push changes, and if you want to push all the changes into master, you can do the same thing. Let us go back to our branch master, and we're going to perform a git push here; the only difference this time is that we're going to push all the files into the master branch of my central repository. So for that I'll just use git push. Okay, so the push operation is done, and if you go back here and look at master, you can see that all the files that were in the master branch of my local repo have been added into the master branch of my central repo as well. So this is how you push changes from your local repository to your central repository (roughly as sketched below).
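As a rough sketch of the two pushes shown here (again with first_branch standing in for whatever the branch is actually called):

    git push origin first_branch   # create/update a branch of the same name in the central repository
    git checkout master
    git push origin master         # push the local master branch to the central repository's master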
This is exactly what you do with git. So if I have to summarize what I just showed you, when I was talking about git add, committing, pushing and pulling, this is exactly what is happening: this is your local repository, this is your working directory, and the staging area is our index, the intermediate layer between your workspace and your local repository. You have to add your files into the staging area, or the index, with git add, then commit those changes into your local repository with git commit; and if you want to push all these changes to the remote repository, or the central repository where everyone can see them, you use a git push. Similarly, if you want to pull or fetch all those files from your GitHub repository, you can use git pull. If you want to use branches and move from one branch to another, you can use git checkout, and if you want to combine the work of different branches together, you can use git merge. So this is entirely what you are doing when you perform all these kinds of operations, and I hope it is clear to everyone.
Now I'll just show you how you can check out what has been changed and revert back to previous changes. So just clear the screen. Okay, let us go back to our terminal, just for experimentation purposes, to show you how we can actually revert back to previous changes. Now, you might not want to revert everything that you made in edu1 or edu4 or the other files that we just created, so let's just go and create a new file, modify it two times, and revert back to the previous version, just for demonstration purposes. So I'm just going to create a new text file.
Let's call it revert. Now let us just type something: "Hello". Let's keep it that simple. Just save it and go back. We'll add this file, then commit it; let's just give the message "revert 1", to remember that this is the first commit that I made to revert, and enter. So it has been committed. Now let's go back and modify this. After I've committed this file, it means git has stored a version with the text "Hello" in my revert text file. So I'm just going to go back and change something in here; let us just add "there", so it says "Hello there". Save it, and let's go back to our bash. Now let us commit this file again, because I've made some changes and I want a different version of the revert file. So we'll just go ahead and commit again; I'll use git commit -a, give the message, say "revert 2", and enter, and it's done. So now, if I want to revert back: if you just see the file, you can see I've modified it, so now it has got "Hello there". Let's say that I want to go back to my previous version; I would just want to go back to when I had just "Hello". For that, I'll just check my git log. I can see here that this is
the commit hash from when I first committed revert; it means this is version one of my revert file. Now, what you need to do is copy this commit hash. You can just copy the first eight hexadecimal digits and that will be enough. So just copy it; I'll clear the screen first. You just need to use the command git checkout, then the hexadecimal digits that you just copied, and then the name of your file, which is revert.txt. So you just have to use the command git checkout, the first 8 digits of the commit hash that you just copied, and the file name, which is revert.txt. Now when you look at this file, you have gone back to the previous commit, and when you display the file, you can see that now I've only got just "Hello". It means that I have rolled back to the previous version, because I used the commit hash from when I initially committed the first change. So this is how you revert back to a previous version (sketched below).
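As a minimal sketch of that rollback (the eight-digit hash is only a placeholder; use the one git log actually shows you):

    git log                                    # find the hash of the commit you want to go back to
    git checkout <first8ofhash> revert.txt     # restore that file to how it was in that commit
    cat revert.txt                             # the file now shows the older content ("Hello")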
So this is what we have learned in today's tutorial. We have understood what version control is and why we need version control, and we've also learned about the different version control tools. Among those we primarily focused on git, and we have learned all about git and GitHub: how to create repositories and perform various operations and commands in order to push, pull and move files from one repository to another. We've also studied the features of git, and we've seen a case study about how Dominion Enterprises, one of the biggest publishing companies, which makes some very popular websites that we have today, has used GitHub as well. Hello everyone, this is your instructor from Edureka, and in today's
session we will focus on what is Jenkins. So without any further ado, let us move forward and have a look at the agenda for today. First, we'll see why we need continuous integration and what problems industries were facing before continuous integration was introduced. After that, we'll understand what exactly continuous integration is and we'll see various types of continuous integration tools; among those continuous integration tools we'll focus on Jenkins, and we'll also look at the Jenkins distributed architecture. Finally, in our hands-on part, we'll prepare a build pipeline using Jenkins and I'll also tell you how to add Jenkins slaves. Now I'll move forward and we'll see why we need continuous integration.
So this is the process before continuous integration over here, as you can see that there's a group
of developers who are making changes to the source code that is present in the source
code repository. This repository can be a git repository subversion repository Etc.
And then, once the entire source code of the application is written, it will be built by tools like Ant, Maven, etc. After that, the built application will be deployed onto the test server for testing; if there's any bug in the code, developers are notified with the help of the feedback loop, as you can see on the screen, and if there are no bugs, then the application is deployed onto the production server for release. Now, you must be thinking, what is the problem with this process? The process looks fine: you first write the code, then you
build it. Then you test it and finally you deploy but let us look at the flaws that were
there in this process one by one. So this is the first problem guys as you can see that
there is a developer who's waiting for a long time in order to get the test results as first
the entire source code of the application will be built and then only it will be deployed
onto the test server for testing. It takes a lot of time, so developers have to wait for a long time in order to get the test results. The second problem: since the entire source code of the application is first built and then tested, if there's any bug in
the code developers have to go through the entire source code of the application as you
can see that there is a frustrated developer because he has written a code for an application
which was built successfully but in testing there were certain bugs in that so he has
to check the entire source code of the application in order to remove that bug which takes a
lot of time so basically locating and fixing of bugs was very time-consuming. So I hope
you are clear with the two problems that we have just discussed now, we'll move forward
and we'll see two more problems that were there before continuous integration. So the
third problem was that the software delivery process was slow. Developers were actually wasting a lot of time in locating and fixing bugs instead of building new applications; as we just saw, locating and fixing bugs was a very time-consuming task, due to which developers were not able to focus on building new applications. You can relate that to the diagram in front of your screen: just as people waste a lot of time watching TV and on social media, developers were also wasting a lot of time in fixing bugs. All right. So let us have a look at the fourth problem, that is, continuous feedback. Continuous
feedback related to things like build failures test status Etc was not present due to which
the developers were unaware of how their application was doing. In the process that I showed you before continuous integration, there was a feedback loop present, so what I will do is go back to that particular diagram and try to explain from there. The feedback loop is here: only when the entire source code of the application is built and tested are the developers notified about the bugs in the code. All right, but when we talk about continuous feedback, suppose this developer that I'm highlighting makes a commit to the source code that is present in the source code repository; at that time the code should be pulled, it should be built, and the moment it is built the developer should be notified about the build status. Then, once it is built successfully, it is deployed onto the test server for testing, and at that time, whatever the test results say, the developer should be notified about them. Similarly, if this developer makes a commit to the source code, at that time the code should be pulled, it should be built, the build status should be notified to the developers, after that it should be deployed onto the test server for testing, and the test results should also be given to the developers. So I hope you are all clear on the difference between continuous feedback and plain feedback: in continuous feedback you're getting the feedback on the run. So we'll move forward and we'll see how
exactly continuous integration addresses these problems. Let us see how exactly continuous
integration is resolving the issues that we have discussed. So what happens here, there
are multiple developers. So if any one of them makes any commit to the source code that
is present in the source code repository, the code will be pulled it will be built tested
and deployed. So what advantage do we get here? First of all, any commit that is made to the source code is built and tested, due to which, if there is any bug in the code, developers actually know where the bug is present, or which commit has caused that error, so they don't need to go through the entire source code of the application; they just need to check that particular commit which introduced the bug. All right. So in that way, locating
and fixing of bugs becomes very easy apart from that the first problem that we saw the
developers have to wait for a long time in order to get the test result here every commit
made to the source code is tested. So they don't need to wait for a long time in order
to get the test results. So when we talk about the third problem that was software delivery
process was slow is completely removed with this process developers are not actually focusing
on locating and fixing of bugs because that won't take a lot of time as we just discussed
instead of that. They're focusing on building new applications. Now a fourth problem was
that continuous feedback was not present. But over here, as you can see, on the run developers
are getting the feedback about the build status test results Etc developers are continuously
notified about how their application is doing. So I will move forward now, I'll compare the
two scenarios that is before continuous integration and after continuous integration now over
here what you can see is before continuous integration as we just saw first the source
code of the application will be built the entire source code then only it will be tested.
But when we talk about after continuous integration, every commit, whatever minor change you committed to the source code, at that time only the code will be pulled, it will be built, and then it will be tested. Before, developers had to wait for a long time in order to get the test results, as we just saw, because the entire source code had to be built first and then deployed onto the test server. But when we talk about continuous integration, the test result of every commit will be given to the developers. And when we talk about feedback, there was no feedback present earlier, but in continuous integration, feedback is present for every commit made to the source code: you will be provided with the relevant result. All right, so now let us move forward
and we'll see what exactly continuous integration is. Now, in the continuous integration process, developers are required to make frequent commits to the source code; they have to frequently make changes to the source code, and because of that, any change made in the source code will be picked up by the continuous integration server, and then that code will be built, or you can say it will be compiled. Now, depending on the continuous integration tool that you are using, or depending on the needs of your organization, it will also be deployed onto the test server for testing, and once testing is done, it will also be deployed onto the production server for release, and developers are continuously getting feedback about their application on the run. So I hope I'm clear with this particular process. So
we'll see the importance of continuous integration with the help of a case study of Nokia. So
Nokia adopted a process called nightly build nightly build can be considered as a predecessor
to continuous integration. Let me tell you why. Over here, as you can see, there are developers who are committing changes to the source code that is present in a shared repository. Then what happens in the night: there is a build server, and this build server will poll the shared repository for changes, pull the code and prepare a build. In that way, whatever commits are made throughout the day are compiled in the night. Obviously this process is better than writing the entire source code of the application and then building it, but again, if there is any bug in the code, developers have to check all the commits that have been made throughout the day, so it is not the ideal way of doing things, because you are again wasting a lot of time in locating and fixing bugs. All right, so I want answers from you all, guys: what can be the solution to this problem? How can Nokia address this particular problem? Since we have seen what exactly continuous integration is and why we need it, now, without wasting any time, I'll move forward and I'll show you how Nokia solved this problem. So
Nokia adopted continuous integration as a solution in which what happens developers
commit changes to the source code in a shared repository. All right, and then what happens
is that there is a continuous integration server. This continuous integration server polls the repository for changes, and if it finds that any change has been made in the source code, it will pull the code and compile it. So what is happening: the moment you commit a change to the source code, the continuous integration server will pull it and prepare a build. So if there is any bug in the code, developers know which commit is causing that error, and they can go through that particular commit in order to fix the bug. In this way locating and fixing of bugs was very easy, whereas we saw that in nightly builds, if there is any bug, they have to check all the commits that have been made throughout the day. So with the help of continuous integration, they know which commit is causing the error, and locating and fixing of bugs didn't take a lot of time. Okay, before I move forward, let me give you
a quick recap of what we have discussed till now first. We saw why we need continuous integration.
What were the problems that industries were facing before continuous integration was introduced
after that. We saw how continuous integration addresses those problems and we understood
what exactly continuous integration is. And then in order to understand the importance
of continuous integration, we saw case study of Nokia in which they shifted from nightly
build to continuous integration. So we'll move forward and we'll see various continuous
integration tools available in the market. These are the four most widely used continuous
integration tools. First is Jenkins on which we will focus in today's session then buildbot
Travis and Bamboo. All right, let us move forward and see what exactly Jenkins is. So Jenkins
is a continuous integration tool. It is an open source tool and it is written in Java. How does it achieve continuous integration? It does that with the help of plugins. Jenkins
has well over a thousand plugins, and that is the major reason why we are focusing on Jenkins. Let me tell you guys, it is the most widely accepted tool for continuous integration because of its flexibility and the amount of plugins that it supports. As you can see from the diagram itself, it supports various development, deployment and testing technologies, for example Git, Maven, Selenium, Puppet, Ansible, Nagios. All right. So if you want to integrate a particular tool, you need to make sure that the plugin for that tool is installed in your Jenkins. So for a better understanding of Jenkins, let me show you the Jenkins dashboard. I've
installed Jenkins in my Ubuntu box. So if you want to learn how to install Jenkins,
you can refer the Jenkins installation video. So this is a Jenkins dashboard guys, as you
can see that there are currently no jobs, and because of that this section is empty; otherwise it will give you the status of all your build jobs over here. Now when you click on new item,
you can actually start a new project all over from scratch. All right. Now, let us go back
to our slides. Let us move forward and see what are the various categories of plugins
as I told you earlier, Jenkins achieves continuous integration with the help of plugins. All right, and Jenkins supports well over a thousand plugins, and that is the
major reason why Jenkins is so popular nowadays. So the plug-in categorization is there on
your screen: there are certain plugins for testing, like JUnit, Selenium, etc.; when we talk about reports, we have multiple plugins, for example HTML Publisher; for notifications also we have many plugins, and I've mentioned one of them, the Jenkins build notification plugin; when we talk about deployment we have plugins like the Deploy plugin; and when we talk about compile we have plugins like Maven, Ant, etc. Alright, so let us move forward and see how to actually install a plugin on the same Ubuntu box where my Jenkins is installed. So over here, in order to install a plugin, what you need to do is click on the Manage Jenkins option, and over here, as you can see, there's an option called Manage
plugins. Just click over there. As you can see that it has certain updates for the existing
plugins, which I have already installed. Right then there's an option called installed where
you'll get the list of plugins that are there in your system. All right, and at the same
time, there's an option called available. It will give you all the plugins that are
available with Jenkins. Alright, so now what I will do I will go ahead and install a plug-in
that is called HTML publisher. So it's very easy. What you need to do is just type the
name of the plugin here: HTML Publisher plugin. Just click over there and install
without restart. So it is now installing that plug-in we need to wait for some time. So
it has now successfully installed now, let us go back to our Jenkins dashboard. So we
have understood what exactly Jenkins is and we have seen various Jenkins plugins as well.
So now is the time to understand Jenkins with an example; we'll see a general workflow of how
Jenkins can be used. All right. So let us go back to our slides. So now as I have told
you earlier as well, we'll see a Jenkins example, so let us move forward. So what is
happening developers are committing changes to the source code and that source code is
present in a shared repository. It can be a git repository subversion repository or
any other repository. All right. Now, let us move forward and see what happens next. Over here, what is happening is that there's a Jenkins server; it is actually polling the source
code repository at regular intervals to see if any developer has made any commit to the
source code. If there is a change in the source code it will pull the code and we'll prepare
a build and at the same time developers will be notified about the build results now, let
us execute this practically. All right, so I will again go back to my Jenkins dashboard,
which is there in my Ubuntu box. What I'm going to do is create a new item, basically a new project. Now over here I'll give a suitable name to my project;
you can use any name that you want. I'll just write compile. And now I click on freestyle
project. The reason for doing that is free-style project is the most configurable and the flexible
option. It is easier to set up as well. And at the same time many of the options that
we configure here are present in other build jobs as well. So let's move forward with the freestyle project
and I'll click on ok now over here what I'll do, I'll go to the source code management
tab, and it will ask you what type of source code management you want; I'll click on Git, and over here you need to type your repository URL. In my case it is https://github.com/, then your username, slash the name of your repository, and finally .git. All right, now in the Build section you have multiple options, and what I will do is click on "Invoke top-level Maven targets". Now over here, let me tell you guys, Maven has a build life cycle, and that build life cycle is made up of multiple build phases. Typically the sequence of build phases will be: first you validate the code, then you compile it, then you test it, performing unit tests using a suitable unit testing framework, then you package your code in a distributable format like a jar, then you verify it, you can install the package with the help of the install build phase, and then you can deploy it to the production environment for release. So I hope you have understood the Maven build life cycle (a rough sketch of these phases as commands is just below).
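As a rough sketch, these lifecycle phases map onto Maven commands like the following (run from the directory that contains the project's pom.xml; each phase also runs the phases before it):

    mvn validate   # check the project is correct and all needed information is available
    mvn compile    # compile the source code
    mvn test       # run the unit tests with the configured testing framework
    mvn package    # bundle the compiled code, e.g. into a jar
    mvn verify     # run checks on the packaged result
    mvn install    # install the package into the local Maven repository
    mvn deploy     # copy the final package to the remote repository / release environment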
So, in the Goals tab: what I need to do is compile the code that is present in the GitHub account, so in the Goals field I need to write compile. This will trigger the compile build phase of Maven. Now, that's it guys, that's it; just
click on Apply and Save. Now on the left-hand side there's an option called Build Now; to trigger the build, just click over there and you will be able to see the build starting. In order to see the console output, you can click on that build and view the console output. So it has validated the GitHub account and it is now starting to compile the code that is there in the GitHub account. So we have successfully compiled the code that was present in the GitHub account. Now let us go back to the Jenkins dashboard. In this Jenkins dashboard, you can see that my project is displayed over here, and the blue color of the ball indicates that it has been executed successfully.
All right. Now, let us go back to the slides now, let us move forward and see what happens.
Once you have compiled your code, the next thing is that you need to test it.
All right. So what Jenkins will do it will deploy the code onto the test server for testing
and at the same time developers will be notified about the test results as well. So let us
again execute this practically, I'll go back to my Ubuntu box again. So in the GitHub repository,
the test cases are already defined. Alright, so we are going to analyze those test cases
with the help of Maven. So let me tell you how to do it: we'll again go and click on new item, and over here we'll give any suitable name to the project; I'll just type test, and I'll again use freestyle project, for the reason that I've told you earlier, and click on OK, and go to the source code management tab. Now, before applying unit testing on the code that I've compiled, I need to first review it with the help of the PMD plugin, so I'll do that first. For that I will again click on new item, and over here I need to type the name of the project, so I'll just type it as code_review, choose freestyle project and click OK. Now, in the source code management
tab, I will again choose Git and give my repository URL: https://github.com/, username, slash the name of the repository, .git. All right, now scroll down to the Build tab, and again I will click on "Invoke top-level Maven targets". Now, in order to review the code, I am going to use the metrics profile of Maven. How to do that? Let me tell you: you need to type here -P metrics pmd:pmd, and this will actually produce a PMD report that contains all the warnings and errors. Now, in the post-build actions tab, I'll click on "Publish PMD analysis results". That's all; click on Apply and Save, and finally click on Build Now and let us see the console output (the underlying Maven invocation is sketched just below).
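In other words, this freestyle job ends up running something roughly equivalent to the command below; the profile name metrics is whatever profile the demo project's pom.xml happens to define, so it may differ in your own project:

    # run the PMD static-analysis goal using a profile called "metrics" from the project's pom.xml
    mvn -P metrics pmd:pmd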
So it has now pulled the code from the GitHub account and it is performing the code review. So it has successfully reviewed the code. Now let us go back to the project; over here you can see an option called PMD
warnings just click over there and it will display all the warnings that are there present
in your code. So this is the PMD analysis report. Over here, as you can see, there are a total of 11 warnings, and you can find the details here as well, like the package, the categories, and the types of warnings that are there, for example empty catch blocks and empty finally blocks. Now, you have one more tab called warnings; over
there. You can find where the warning is present the filename package. All right, then you
can find all the details in the details tab. It will actually tell you where the warning
is present in your code. All right. Now, let us go back to the Jenkins dashboard and now
we'll perform unit tests on the code that we have compiled. For that, again, I'll click on new item and give a name to this project; I will just type test, and I'll click on freestyle project, okay. Now, in the source code management tab I'll click on Git, and over here I'll type the repository URL: https://github.com/, username, slash the name of the repository, .git. In the Build option I'll again click on "Invoke top-level Maven targets". Now, over here, as I've told you earlier as well, the Maven build life cycle has multiple build phases: first it will validate the code, compile it, then test it, then package it, then verify, then install if certain packages are required, and then finally deploy. Alright, so one of the phases is testing, which performs unit testing using a suitable unit testing framework. The test cases are already defined in my GitHub account, so to analyze the test cases, in the Goals section I need to write test. All right, and it will invoke the test phase of the Maven build life cycle. So just click on Apply and Save, and finally click on Build Now (the equivalent Maven command is sketched just below).
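Putting test in the Goals field makes this job run something equivalent to the command below (the report location is the usual Surefire default and is an assumption):

    mvn test    # runs validate, compile and then the project's unit tests
                # test reports typically end up under target/surefire-reports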
To see the console output, click here. Now, in the source code management tab I'll select Git; over here again I need to type my repository URL, that is https://github.com/, username, slash, repository name, .git. Now, in the Build tab I'll select "Invoke top-level Maven targets", and over here, as I have told you earlier as well, the Maven build life cycle has multiple phases, and one of those phases is unit testing. So in order to invoke that, what I need to do in the Goals tab is write test, and it will invoke the test build phase of the Maven build life cycle. The moment I write test here and build it, it will actually analyze the test cases that are present in the GitHub account. So let us write test, Apply and Save, and finally click on Build Now; and in order to see the console output, click here. So it has pulled the code from the GitHub account and now it's performing the unit tests. So we have successfully
performed testing on that code. Now I will go back to my Jenkins dashboard, where, as you can see, all the three build jobs that I have executed are successful, which is indicated with the help of the blue colored ball. All right. Now, let us go back to our slides. So we have successfully performed unit tests on the test cases that were there in the GitHub account, and
now, we'll move forward and see what happens after that. Now finally, you can deploy that
built application onto the production environment for release. But when you have one single Jenkins server, there are multiple disadvantages, so let us discuss them one by one; we'll move forward and we'll see what are the disadvantages of using one single Jenkins server. Now, what I'll do is go back to my Jenkins dashboard and I'll show you how to create a build pipeline. All right. So for that I'll move to my Ubuntu box once again. Now over here you can see that there is an option with a plus sign; just click over there, and then click on build pipeline view. Whatever name you want, you can give; I'll just give it as Edureka pipeline, and click on OK. Now over here you can give a certain description of your build pipeline, and there are multiple options that you can just have
a look at. Over here there's an option called "select initial job"; I want compile to be my first job. Then there are display options, like the number of displayed builds; I'll just keep it as 5. Then there are the row headers and column headers that you want, so you can just
have a look at all these options and you can play around with them just for the introductory
example, let us keep it this way now finally click on apply and ok. Currently you can see
that there is only one job that is compiled. So what I'll do, I'll add more jobs this pipeline
for that. I'll go back to my Jenkins dashboard and over here. I'll add code review as well.
So for that I will go to Configure, and in the Build Triggers tab I'll click on "Build after other projects are built". Here you type whatever project you want to execute before code review, so I want compile; click on compile, and over here you can see that there are multiple options, like trigger only if the build is stable, trigger even if the build is unstable, trigger even if the build fails. I'll just click on trigger even if the build fails. All right, finally click on Apply and Save. Similarly, if I want to add my test job to the pipeline as well, I can click on Configure, and again in the Build Triggers tab I'll click on "Build after other projects are built". Over here, type the project that you want to execute before this particular project; in our case it is code review, so let us select that, choose trigger even if the build fails, and click Apply and
Save. Now let us go back to the dashboard and see what our pipeline looks like. So this is our pipeline. Okay, so when we click on run, let us see what happens. First it will compile the code from the GitHub account, that is, it will pull the code and compile it. So now the compile is done. All right, now it will review the code; the code review has started, and in order to see the log you can click on Console, which will give you the console output. Once the code review is done, it will start testing and perform the unit tests. So the code has been successfully reviewed; as you can see, the color has become green. Now the testing has started, and it will perform unit tests on the test cases that are there in the GitHub account. So we have successfully executed three build jobs, that is, compile the code, then review it, and then perform testing. All right, and this is the build pipeline, guys. So let us go back to the Jenkins dashboard, and we'll go back to our slides now.
So now we have successfully performed unit tests on the test cases that are present in the
GitHub account. All right. Now, let us move forward and see what else you can do with
Jenkins. Now the application that we have tested that can also be deployed onto the
production server for release as well. Alright, so now let us move forward and see what are
the disadvantages of this one single Jenkins server. There are two major disadvantages of using one single Jenkins server. First, you might require different environments for your build and test jobs, and at that time one single Jenkins server cannot serve the purpose. The second major disadvantage is, suppose you have heavier projects to build on a regular basis; at that time one single Jenkins server simply cannot handle the load. Let us understand this with an example: suppose you need to run web tests using Internet Explorer. At that time you need a Windows machine, but your other build jobs might require a Linux box, so you can't use one single Jenkins server. All right, so let us move forward and see
what is actually the solution to this problem. The solution to this problem is the Jenkins distributed architecture. The Jenkins distributed architecture consists of a Jenkins master and multiple Jenkins slaves. This Jenkins master is actually used for scheduling build jobs; it also dispatches
builds to the slaves for actual execution. It also monitors the slaves, possibly taking them online and offline as required, and it also records and presents the build results; and you can directly execute a build job on the master instance as well. Now, when we talk about Jenkins slaves, these slaves are nothing but Java executables that are present on remote machines. These slaves basically hear the requests of the Jenkins master, or you can say they perform the jobs as told by the Jenkins master, and they operate on a variety of operating systems. So you can configure Jenkins to execute a particular type of build job on a particular Jenkins slave, or on a particular type of Jenkins slave, or you can actually let Jenkins pick the next available slave. All right. Now
Now I'll go back to my Ubuntu box and show you practically how to add Jenkins slaves. Now over here, as you can see, there is an option called Manage Jenkins; just click over there, and when you scroll down you'll see an option called Manage Nodes. On the left hand side there is an option called New Node. Just click over there, click on Permanent Agent, and give a name to your slave; I'll just give it as slave_one. Click on OK. Over here you need to write the remote root directory, so I'll keep it as /home/edureka. Labels are not mandatory, but if you want you can use them. And for launch method, I want it to launch slave agents via SSH. All right, over here you need to give the IP address of your host. So let me show you the IP address of my host; this is my Jenkins slave, the machine that I'll be using as the Jenkins slave. In order to check the IP address, I'll type ifconfig.
This is the IP address of that machine just copy it. Now I'll go back to my Jenkins master.
And in the Host field, I'll just paste that IP address. Over here you can add the credentials; to do that, just click on Add, and over here you can give the username. I'll give it as root, then the password. That's all; just click on Add, and over here select it. Finally, save it. Now it is currently adding the slave; in order to see the logs, you can click on that slave again. Now it has successfully added that particular slave. Now what I'll do, I'll show you the logs for that: click on the slave, and on the left hand side you will notice an option called Log. Just click over there and it will give you the output. So as you can see, the agent has successfully connected and it is online right now. Now what I'll do, I'll go to my Jenkins slave and I'll show you in /home/edureka that it has been added. Let me first clear my terminal. Now what I'll do, I'll show you the contents of /home/edureka. As you can see, we have successfully added slave.jar. That means we have successfully added the Jenkins slave to our Jenkins master. Hello
everyone, this is your instructor from Edureka, and today's session will focus on what is Docker. So without any further ado, let us move forward and have a look at the agenda for today. First we'll see why we need Docker; we'll focus on various problems that industries were facing before Docker was introduced. After that we'll understand what exactly Docker is, and for a better understanding of Docker we'll also look at a Docker example. After that we'll understand how industries are using Docker, with the case study of Indiana University. Our fifth topic will focus on various Docker components, like images, containers, etc., and our hands-on part will focus on installing WordPress and phpMyAdmin using Docker Compose. So we'll move forward
and we'll see why we need Docker. So this is the most common problem that industries
were facing as you can see that there is a developer who has built an application that
works fine in his own environment, but when it reaches production there are certain issues with that application. Why does that happen? That happens because of the difference in the computing environment between dev and prod. I'll move forward and we'll see the second problem. Before we proceed with the second problem, it is very important for us to understand what microservices are. Consider a very large application; that application is broken down into smaller services. Each of those services can be termed a microservice, or we can put it in another way as well: microservices can be considered as small processes that communicate
with each other over a network to fulfill one particular goal. Let us understand this
with an example as you can see that there is an online shopping service application.
It can be broken down into smaller microservices like account service, product catalog, cart service and order service. Microservice architecture is gaining a lot of popularity nowadays; even giants like Facebook and Amazon are adopting microservice architecture. There are three major reasons for adopting microservice architecture, or you can say there are three major advantages of using microservice architecture. First, there are certain applications which are easier
to build and maintain when they are broken down into smaller pieces or smaller Services.
The second reason is, suppose I want to update a particular software or I want a new technology stack in one of my modules, on one of my services; I can easily do that, because the dependency concerns will be very less when compared to the application as a whole. Apart from that, the third reason is if any of my modules or any of my services goes down, then my whole application remains largely unaffected. So I hope we are clear with what microservices are
and what are their advantages so we'll move forward and see what are the problems in adopting
this micro service architecture. So this is one way of implementing microservice architecture
over here, as you can see that there's a host machine and on top of that host machine there
are multiple virtual machines each of these virtual machines contains the dependencies
for one micro service. So you must be thinking what is the disadvantage here? The major disadvantage
here is, in virtual machines there is a lot of wastage of resources: resources such as RAM, processor and disk space are not utilized completely by the microservice which is running
in these virtual machines. So it is not an ideal way to implement microservice architecture
and I have just given you an example of five microservices. What if there are more than five microservices? What if your application is so huge that it requires many more microservices? At that time, using virtual machines doesn't make sense because of the wastage of resources. So let us first see how Docker solves this microservice implementation problem that we just saw. So what is happening here? There's a host machine, and on top of that host machine there's
a virtual machine and on top of that virtual machine, there are multiple Docker containers
and each of these Docker containers contains the dependencies for one microservice. So you must be thinking, what is the difference here? Earlier we were using virtual machines; now we are using Docker containers on top of virtual machines. Let me tell you guys,
Docker containers are actually lightweight alternatives to virtual machines. What does that mean? In Docker containers you don't need to pre-allocate any RAM or any disk space; they will take RAM and disk space according to the requirements of the applications. All right. Now let us see how Docker solves the problem of not having a consistent computing environment throughout the software delivery life cycle. Let me tell you, first of all, Docker containers are actually built by the developers. So now let us see how Docker solves the first problem that we saw, where an application works fine in the development environment but not in production. Docker containers can be used throughout the SDLC (software delivery life cycle) in order to provide a consistent computing environment. So the same environment will be present in dev, test and prod. So
there won't be any difference in the Computing environment. So let us move forward and understand
what exactly Docker is. So Docker containers do not use a guest operating system; they use the host operating system. Let us refer to the diagram that is shown: there is the
host operating system and on top of that host operating system. There's a Docker engine
and with the help of this Docker engine Docker containers are formed and these containers
have applications running in them and the requirements for those applications such as
all the binaries and libraries are also packaged in the same container. All right, and there
can be multiple containers running as you can see that there are two containers here
1 & 2. So on top of the host machine is a docker engine and on top of the docker engine
there are multiple containers, and each of those containers will have an application running in it, and whatever binaries and libraries are required for that application are also packaged in the same container. So I hope you are clear. So now let us move forward
and understand Docker in more detail. So this is a general workflow of Docker or you can
say one way of using Docker. Over here, what is happening? A developer writes the application requirements, or the dependencies, in an easy-to-write Dockerfile, and this Dockerfile produces a Docker image. So whatever dependencies are required for a particular application are present inside this image. And what are Docker containers? Docker containers are nothing but runtime instances of Docker images.
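Just to make the idea of an easy-to-write Dockerfile concrete, here is a minimal, hypothetical sketch; the base image, the file name app.py and the commands are assumptions for illustration, not the exact file from this session.

    # Dockerfile: package an application together with its dependencies
    FROM ubuntu:16.04
    RUN apt-get update && apt-get install -y python3   # dependency needed by the app
    COPY app.py /opt/app.py                            # the application code itself
    CMD ["python3", "/opt/app.py"]                     # what runs when a container starts

Building this file with docker build -t myapp . produces the image, and docker run myapp starts a container, which is the runtime instance mentioned above.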
This particular image is uploaded onto Docker Hub. Now, what is Docker Hub? Docker Hub is nothing but a Git-like repository for Docker images; it contains public as well as private repositories. So from public repositories you can pull images, and you can upload your own images onto Docker Hub as well. All right, from Docker Hub various teams, such as QA or production, will pull the image and prepare their own containers, as you can see
from the diagram. So what is the major advantage we get through this workflow? So whatever
dependencies are required for your application, they are present throughout the software delivery life cycle. If you can recall the first problem that we saw, an application works fine in the development environment, but when it reaches production it is not working properly. That particular problem is easily resolved with the help of this workflow, because you have the same environment throughout the software delivery life cycle, be it dev, test or prod. Now, for a better understanding of Docker, we'll see a Docker example.
So this is another way of using Docker. In the previous example, we saw that Docker images were used and those images were uploaded onto Docker Hub, and from Docker Hub various teams were pulling those images and building their own containers. But Docker images are huge in size and require a lot of network bandwidth. So in order to save that network bandwidth, we use this kind of workflow: over here we use a Jenkins server, or any continuous integration server, to build an environment that contains all the dependencies for a particular application or a microservice, and that build environment is deployed onto various
teams, like testing staging and production. So let us move forward and see what exactly
is happening in this particular image over here developer has written complex requirements
for a microservice in an easy-to-write Dockerfile, and the code is then pushed onto the Git repository. From the GitHub repository, continuous integration servers like Jenkins will pull that code and build an environment that contains all the dependencies for that particular microservice, and that environment is deployed onto testing, staging and production. So in this way, whatever requirements are there for your microservice are present throughout the software delivery life cycle. So if you can recall the first problem, where the application works fine in dev but does not work in prod, with this workflow we can completely remove that problem, because the requirements for the microservice are present throughout
the software delivery life cycle. And this image also explains how easy it is to implement a microservice architecture using Docker. Now let us move forward and see how industries
are adopting Docker. So this is the case study of Indiana University. Before Docker, they were facing many problems, so let us have a look at those problems one by one. The first problem was they were using custom scripts in order to deploy their applications onto various VMs. This required a lot of manual steps. The second problem was their environment was optimized for legacy Java-based applications, but their growing environment involved new products that aren't solely Java-based. So in order to provide their students the best possible experience, they needed to begin modernizing their applications. Let us move
forward and see what other problems Indiana University was facing. So, as in the previous problem, Indiana University wanted to start modernizing their applications. For that they wanted to move from a monolithic architecture to a microservice architecture. In the previous slides we also saw that if you want to update a particular technology in one of your microservices, it is easy to do that because there will be very few dependency constraints when compared to the whole application. So because of that reason they wanted to start modernizing their applications; they wanted to move to a microservice architecture. Let
us move forward and see what are the other problems that they were facing. Indiana University
also needed security for their sensitive student data such as SSN and student health care data.
So there are four major problems that they were facing before Docker now, let us see
how they have implemented Docker to solve all these problems the solution to all these
problems was Docker Data Center, and Docker Data Center has various components, which are there in front of your screen: first is Universal Control Plane, then comes LDAP, then Swarm, then CS Engine, and finally Docker Trusted Registry. Now let us move forward and see how they
have implemented Docker data center in their infrastructure. This is a workflow of how
Indiana University has adopted Docker Data Center. This is Docker Trusted Registry; it is nothing but the storage of all your Docker images, and each of those images contains the dependencies for one microservice. As we saw, Indiana University wanted to move from a monolithic architecture to a microservice architecture, so because of that reason these Docker images contain the dependencies for one particular microservice, but not the whole application. All right, after that comes Universal Control Plane. It is used to deploy services onto various hosts with the help of the Docker images that are stored in the Docker Trusted Registry. So the IT ops team can manage their entire infrastructure from
one single place with the help of the Universal Control Plane web user interface. They can actually use it to provision Docker-installed software on various hosts and then deploy applications without doing a lot of manual steps. As we saw in the previous slides, Indiana University was earlier using custom scripts to deploy applications onto VMs, which required a lot of manual steps; that problem is completely removed here. When we talk about security, the role-based access controls within Docker Data Center allowed Indiana University to define levels of access for various teams. For example, they can provide read-only access to Docker containers for the production team, and at the same time they can provide read and write access to the dev team. So I hope we all are clear with how Indiana University
has adopted Docker Data Center. We'll move forward and see what are the various Docker components. First is Docker registry. A Docker registry is nothing but the storage of all your Docker images; your images can be stored either in public repositories or in private repositories. These repositories can be present locally, or they can be present on the cloud. Docker provides a cloud-hosted service called Docker Hub. Docker Hub has public as well as private repositories; from public repositories you can pull an image and prepare your own containers, and at the same time you can build an image and upload it onto Docker Hub. You can upload it into your private repository or you can upload it to a public repository as well.
That is totally up to you. So for better understanding of Docker Hub, let me just show you how it
looks like. So this is how a Docker Hub looks like. So first you need to actually sign in
with your own login credentials. After that. You will see a page like this, which says
welcome to Docker Hub over here, as you can see that there is an option of create repository
where you can create your own public or private repositories and upload images and at the
same time there's an option called Explore Repositories; this contains all the repositories which are available publicly. So let us go ahead and explore some of the publicly available repositories. So we have repositories for NGINX, Redis, Ubuntu, then we have Docker Registry, Alpine, Mongo, MySQL, Swarm. So what I'll do, I'll show you the CentOS repository. So this is the CentOS repository, which contains the CentOS image. Now, what I will do later in the session, I'll actually pull the CentOS image from Docker Hub. Now let us move forward and see what are Docker images and containers. So Docker images
are nothing but read-only templates that are used to create containers. These Docker images contain all the dependencies for a particular application or a microservice. You can create your own image and upload it onto Docker Hub, and at the same time you can also pull the images which are available in the public repositories on Docker Hub. Let us move forward and see what are Docker containers. Docker containers are nothing but the runtime instances of Docker images; a container contains everything that is required to run an application or a microservice, and at the same time it is also possible that more than one image is required to create one container. Alright, so for a better understanding of Docker images and Docker containers, what I'll do on my Ubuntu box is pull a CentOS image and run a CentOS container from it. So let us move forward and first
install Docker on my Ubuntu box. So guys, this is my Ubuntu box. Over here, first I'll update the packages; for that I will type sudo apt-get update. It is asking for the password. It is done now. Before installing Docker, I need to install the recommended packages. For that I'll type sudo apt-get install linux-image-extra-$(uname -r) and linux-image-extra-virtual, and here we go; press Y. So we are done with the prerequisites. So let us go ahead and install Docker; for that I'll type sudo apt-get install docker-engine. So we have successfully installed Docker. If you want to install Docker on CentOS, you can refer to the CentOS Docker installation video. Now we need to start the Docker service; for that I'll type sudo service docker start. It says the job is already running. Now what I will do, I will pull the CentOS image from Docker Hub and I will run a CentOS container. So for that I will type sudo docker pull and the name of the image, that is centos. First it will check the local registry for the CentOS image; if it doesn't find it there, then it will go to Docker Hub for the CentOS image and pull the image from there. So we have successfully pulled the CentOS image from Docker Hub. Now I'll run the CentOS container. For that I will type sudo docker run -it centos, that is the name of the image, and here we go. So we are now inside the CentOS container.
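To keep the commands from this demo together in one place, this is roughly what was typed (package names match the older docker-engine release used here; newer Docker versions ship as docker-ce instead):

    sudo apt-get update
    sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual   # recommended extras
    sudo apt-get install docker-engine        # older package name used in this demo
    sudo service docker start                 # make sure the Docker daemon is running
    sudo docker pull centos                   # fetch the CentOS image from Docker Hub
    sudo docker run -it centos                # start an interactive CentOS container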
Let me exit from this and clear my terminal. So let us now recall what we did. First we installed Docker on Ubuntu; after that we pulled the CentOS image from Docker Hub, and then we built a CentOS container using that CentOS image. Now I'll move forward and I'll tell you what exactly Docker Compose is. So let us understand what exactly Docker
Compose is. Suppose you have multiple applications on various containers, and all those containers are actually linked together. You don't want to execute each of those containers one by one, but you want to run those containers at once with a single command. That's where Docker Compose comes into the picture. With Docker Compose you can run multiple applications present in various containers with one single command, that is docker-compose up. As you can see, there is an example in front of you: imagine you're able to define three containers, one running a web app, another running a Postgres, and another running a Redis, in a YAML file that is called the Docker Compose file. And from there you can execute all these three containers with one single command, that is docker-compose up. Let us understand this with an example. Suppose you want to publish a blog; for that you'll use a CMS, and WordPress is one of the most widely used CMSs. So you need one container for WordPress, and you need one more container for MySQL as the backend, and that MySQL container should be linked to the WordPress container. Apart from that, you need one more container for phpMyAdmin that should be linked to the MySQL database, as it is used to access the MySQL database. So what if you are able to define all these three containers in one YAML file, and with one command, that is docker-compose up, all three containers are up and running? So let
me show you practically how it is done, on the same Ubuntu box where I've installed Docker and pulled the CentOS image. This is my Ubuntu box. First I need to install Docker Compose here, but before that I need python-pip, so for that I will type sudo apt-get install python-pip, and here we go. So it is done now. I will clear my terminal, and now I'll install Docker Compose; for that I'll type sudo pip install docker-compose, and here we go. So Docker Compose is successfully installed. Now I'll make a directory and name it wordpress: mkdir wordpress. Now I'll enter this wordpress directory. Now over here I'll edit the docker-compose.yml file using gedit; you can use any other editor that you want, I'll use gedit. So I'll type sudo gedit docker-compose.yml, and here we go. So over here what I'll do, I'll first open a document and I'll copy this
YAML code and I will paste it here. So let me tell you what I've done. First, I have defined a container and I've named it wordpress. It is built from the image wordpress that is present on Docker Hub. But this WordPress image does not have a database, so for that I have defined one more container and I've named it wordpress_db. It is built from the image called mariadb, which is present on Docker Hub, and I need to link this wordpress_db with the wordpress container; for that I have written links: wordpress_db:mysql. All right, and in the ports section, port 80 of the Docker container will be mapped to port 8080 of my host machine. So are we clear till here? Now, I've defined a password here as edureka; you can give whatever password you want. And I have defined one more container called phpmyadmin. This container is built from the image corbinu/docker-phpmyadmin that is present on Docker Hub. Again, I need to link this particular container with the wordpress_db container; for that I have written links: wordpress_db:mysql. And in the ports section, port 80 of my Docker container will be mapped to port 8081 of the host machine. Finally, I've given a username, that is root, and I've given the password as edureka. So let us now save it and quit.
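Based on that walkthrough, the docker-compose.yml looks roughly like the sketch below. Treat it as an approximate reconstruction: the image names, port mappings and the edureka password follow the description above, while the exact environment variable names expected by the phpMyAdmin image are an assumption from that image's documentation.

    wordpress:
      image: wordpress
      links:
        - wordpress_db:mysql        # make the database container reachable as "mysql"
      ports:
        - "8080:80"                 # host port 8080 -> container port 80
    wordpress_db:
      image: mariadb
      environment:
        MYSQL_ROOT_PASSWORD: edureka
    phpmyadmin:
      image: corbinu/docker-phpmyadmin
      links:
        - wordpress_db:mysql
      ports:
        - "8081:80"                 # host port 8081 -> container port 80
      environment:
        MYSQL_USERNAME: root
        MYSQL_ROOT_PASSWORD: edureka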
Let me first clear my terminal, and now I'll run the command sudo docker-compose up -d, and here we go. So this command will actually pull all three images and build the three containers.
So it is done now. Let me clear my terminal. Now what I'll do, I'll open my browser and
over here I'll type the IP address of my machine, or I can type the hostname as well. The hostname of my machine is localhost, so I'll type localhost and the port 8080 that I've given for WordPress. It will direct you to a WordPress installation page. Over here you need to fill this particular form, which is asking you for a site title; I'll give it as edureka. Username, also, I will give as edureka. For the password I'll type edureka and confirm the use of a weak password, then type your email address. It is also asking about search engine visibility, which I'll set as I want, and finally I'll click on Install WordPress. So this is my WordPress dashboard, and WordPress is now successfully installed. Now what I'll do, I'll open one more tab, and over here I'll type localhost, or the IP address of the machine, and I'll go to port 8081 for phpMyAdmin. And over here I need to give the username; if you can recall, I've given root, and the password I've given as edureka, and here we go. So phpMyAdmin is successfully installed. This phpMyAdmin
is actually used to access the MySQL database, and this MySQL database is used as the backend for WordPress. If you've landed on this video, then it's definitely because you want to install a Kubernetes cluster on your machine. Now, we all know how tough the installation process is, hence this video on our YouTube channel. My name is Vardhan and I'll be your host for today. So without wasting any time, let me show you the various steps that we
have to follow. Now, there are various steps that we have to run both at the master's end and the slave end, then a few commands only at the master's end to bring up the cluster, and then one command which has to be run at all the slave ends so that they can join the cluster. Okay, so let me get started by showing you those commands, those installation steps, which have to be run commonly on both the master's end and the slave end. First of all, we have to update your repository. Okay, since I am using Ubuntu, I have to update my apt-get repository. Okay, and after that we have to turn off the swap space; be it the master's end or the slave's end, Kubernetes will not work if the swap space is on. Okay, we have to disable that, so there are a couple of commands for that. Then the next part is you have to update the hostname, the hosts file, and we have to set a static IP address for all the nodes in your cluster. Okay, we have to do that because at any point of time, if your master or a node in the cluster fails, then when they restart they should have the same IP address. If you have a dynamic IP address and they restart because of a failure condition, then it will be a problem, because they will not be able to join the cluster since they'll have a different IP address. So that's why you have to do these things. All right, there are a couple of commands for that. After that we have to install the OpenSSH server and Docker; that is because Kubernetes requires the OpenSSH functionality, and it of course needs Docker because everything in Kubernetes runs in containers, right? So we are going to make use of Docker containers, and that's why we have to install these two components. And finally we have to install kubeadm, kubelet and kubectl. These are the core components of Kubernetes. All right, so these are the various components that have to be installed on both your master and your slave end.
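Put together, the common steps on both the master and the slave boil down to something like this sketch for Ubuntu; the Docker package name and the exact Kubernetes apt repository lines depend on your distribution and are assumptions here, and the hostname, hosts and static IP edits are shown separately later in the demo.

    sudo apt-get update                              # refresh the apt repositories
    sudo swapoff -a                                  # disable swap (also comment it out in /etc/fstab)
    sudo apt-get install -y openssh-server           # SSH access for the machines
    sudo apt-get install -y docker.io                # a container runtime for Kubernetes
    # add the Kubernetes apt key and repository here (the curl commands shown later in this demo)
    sudo apt-get update
    sudo apt-get install -y kubeadm kubelet kubectl  # the core Kubernetes components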
So let me first of all open up my VMs and then show you how to get started. Now, before I get started, let
me tell you one thing. You have a cluster you have a master and then you have slaves
in that cluster, right? Your master should always have a better configuration than your slaves. So for that reason, if you're using virtual machines on your host, then you have to ensure that your master has at least 2 GB of RAM and two core CPUs. Okay, and your slave has 2 GB of RAM and at least one core CPU. So these are the basic necessities for your master and slave machines. On that note, I think I can get started. So first of all,
I'll bring up my virtual machine and go through these installation processes. So I hope everyone
can see my screen here. This is my first VM and what I'm going to do is I'm going to make
this my master. Okay, so all the commands to install the various components are present
with me in my notepad Okay, so I'm going to use this for reference and then quickly execute
these commands and show you how Kubernetes is installed. So first of all, we have to update our apt-get repository. Okay, but before that, let's log in as su. Okay, so I'm going to do a sudo su so that I can execute all the following commands as the superuser. Okay, so sudo su, there goes my root password, and now you can see the difference here: right here I was executing as a normal user, but from here on I am the root user. So I'm going to execute all these commands as su. So first of all, let's do an update. I'm going to copy this and paste it here: apt-get update, to update my Ubuntu repositories. All right, so it's going to take quite some time, so just hold on till it's completed. Okay, so this is done. The next thing I have to do is turn off my swap space. Okay, now the command to disable my swap space is swapoff, space, flag a. Let me go back here and do the same:
Okay, swapoff -a. And now we have to go to the fstab. So there is a file called fstab, okay, and it will have a line with the entry for the swap space, because at any point of time if you have enabled swap space then you will have a line over there. Now we have to disable that line. Okay, we can disable that line by commenting it out. So let me show you how that's done. I'm just using the nano editor to open this fstab file. Okay, so you can see this line right here where it says swap file; this is the one which I have to comment out. So let me come down here and comment it out like this, okay, with the hash. Now let me save this and exit. Now the next thing I have to do is update my hostname
and my hosts file and then set a static IP address. So let me get started by first updating
the hostname. So for that I have to go to this hostname file, which is in the /etc path. So I'm again using nano for that. You can see here it's the default VirtualBox hostname, right? So let me replace this and say kmaster, as in Kubernetes master. So let me save this and exit. Now, if you want your hostname to reflect over here (because right now the prompt still shows the old hostname, so the hostname does not look updated as yet) and you want it to be updated to kmaster, then you have to first of all restart this VM or your system. If you're doing it on a system, then you have to restart your system, and if you're doing it on a VM, you have to restart your VM. Okay, so let me restart my VM in some time. But before that there are a few more commands which I want to run,
and that is to set a static IP address. Okay, so I'm going to run this ifconfig command. Okay, so right now my IP address is 192.168.56.101, and the next time I turn on this machine I do not want a different IP address. So to set this as a static IP address, I have a couple of commands; let me execute that command first. So you can see this interfaces file, right? Under /etc/network we have a file called interfaces. This is where you define all your network interfaces. Now let me enter this file and add the rules to make it a static IP address. As you can see here, the last three lines are the ones which ensure that this machine will have a static IP address. These three lines are already there on my machine.
Now, if you want to set a static IP address at your end, then make sure that you have these lines defined correctly. Okay, my IP address is .101, so I would just retain it like this. So let me just exit. The next thing that I have to do is go to the hosts file and update my IP address over there.
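For reference, the static-address stanza at the end of /etc/network/interfaces looks roughly like this; the interface name enp0s8 and the netmask are assumptions matching a typical VirtualBox host-only network, so adjust them to what ifconfig shows on your machine.

    auto enp0s8
    iface enp0s8 inet static
        address 192.168.56.101      # the fixed IP this node should keep
        netmask 255.255.255.0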
Okay, so I'm going to go to my /etc/hosts file. Now over here you can see that there is no entry, so I have to mention that this is my kmaster. So let me specify my IP address first (this is my IP address), and now we have to update the name of the host. So this host is kmaster, so I'm
just going to enter that and save this. Okay. Now the thing that we have to do now is restart
this machine. So let me just restart this machine and get back to you in a while. Okay, so now that we are back on, let me check if my hostname and hosts have all been updated. Yes, there you go. You can see here, right, it says kmaster. So this means that my hostname has been successfully updated. We can also verify that my IP address is the same; let me do an ifconfig, and as you can see my IP address has not changed. All right, so this is good; this is what we wanted. Now let's continue with our installation
process. Let me clear the screen and go back to the notepad and execute those commands
which first of all install my openssh server. So this is going to be the command to do that
and we have to execute this as the sudo user, right? So sudo apt-get install openssh-server, that's the command. Okay, let me say yes and enter. Okay, so my SSH server would have been installed by now. Let me clear the screen and install Docker. But before I run the command which installs Docker, I will update my repository. Okay, so let me log in as sudo first of all. Okay, sudo su is the command, and okay, I have logged in as the root user. Now the next thing is to update my repository, so I have to do an apt-get update. Now again, this is going to take some more time, so just hold on till then. Okay, this is also done. Now we can straight away run the command to install Docker. This is the command to install Docker, okay, from the apt-get repository. I'm installing Docker and I'm specifying -y, because -y is my flag. So whenever there's a prompt that comes in while installing, asking do you want to install it, yes or no, then when you specify -y it means that by default it will accept yes as the answer. Okay, so that is the only concept behind -y. So again, installing Docker is going to take a few more minutes; just hang on till then. Okay, great, so Docker is also installed. Okay, so let me go back to the notepad. So
to establish the Kubernetes environment, the three main components that Kubernetes is made up of are kubeadm, kubelet and kubectl. But just before I install these three components, there are a few things I have to do, like installing curl, then downloading certain packages from this URL, and then running an update. Okay, so let me execute these commands one after the other first and then install Kubernetes. So let's first of all start with this command, where I'm installing curl. Okay. Now the next command is basically downloading these packages using curl, and curl is basically a tool with which you can download these packages from your command line. Okay, so this is basically a web URL, right? So I can access whatever packages are there on this web URL and download them using curl; that's why I've installed curl in the first place. So when executing this command I get this, which is perfect. Now when I go back, there is this command which we have to execute. Okay, let me hit enter and I'm done. And finally I have to update my apt-get repository, and the command for that is this one: apt-get update. Okay, great, so all the preparation steps are also done. Now I can actually set up my Kubernetes environment by executing this command. So in the same command I say install kubelet, kubeadm and kubectl, and to just avoid the yes prompt I'm specifying the -y flag, okay, which would by default take yes as a parameter. And of course I'm taking it from the apt-get repository, right? So let me just copy this and paste it here. Give it a few more minutes, guys, because installing Kubernetes is going to take some time. Okay, bingo. So Kubernetes has also been installed successfully. Okay, let me conclude the setting up of this Kubernetes environment by updating the Kubernetes configuration. Okay, so there's this kubeadm configuration file; kubeadm is the one that's going to let me administer my Kubernetes. So I have to go to this file and add one line. Okay, so let me first of all open up this file using my nano editor. So let me again log in with sudo su, and this is the command. So as you can see, we have this set of environment variables. Right after the last environment variable I have to add this one line, and that line is this one. All right, now let me just save this and exit. Brilliant. So with that, the components which have to be installed at both
the master and the slave come to an end. Now. What I will do next is run certain commands
only at the master to bring up the cluster, and then run one command at all my slaves to join the cluster. Alright. So before I start doing anything more over here, let me also tell you that I have already done the same steps on my node. So if you are doing it at your end, then whatever steps you've done so far, run the same set of commands on another VM, because that will be acting as your node VM. But in my case I have already done that, just to save some time. So let me show you: this is my kmaster, and right here I have my knode, which is nothing but my Kubernetes node, and I've basically run the same set of commands in both places. But there is one thing which I have to ensure before I bring up the cluster, and that is to ensure the network IP addresses, the hostname and the hosts file are right. So this is my Kubernetes node, so all I'm going to do is cat the /etc/hosts file. Okay, now over here I have the IP address of my Kubernetes node, that is this very machine, and I specify the name of the host. However, the name of my Kubernetes master host is not present, and neither is its IP address. So that is one manual entry we have to do. If you remember, let me go to my master and check what is the
IP address? Yes. So the IP address over here is one ninety two dot one sixty eight dot
56.1 not one. So this is the IP address. I have to add in my node end. So after modify
this file for that, all right, but before that you have to also ensure that this is
a static IP address. So let me ensure that the IP address of my cluster node does not
change. So the first thing we have to do before anything is check. What is the current IP
address and for my node the IP addresses one? Ninety two dot one sixty eight dot 56.1 not
to okay now, let me run this command. Network interfaces. Okay. So as you can see here,
this is already set to be a static IP address. We have to ensure that these same lines are
there in your machine if you wanted to be a static IP address since it's already there
for me. I'm not going to make any change but rather I'm going to go and check. What's my
host name? I mean the whole same should anyways give the same thing because right now it's
keynote. So that's what it's gonna reflect. But anyways, let me just show it to you. Okay,
so my host name is keynote brilliant. So this means that that is one thing which I have
to change and that is nothing but adding the particular entry for my master. So let me
first clear the screen and then using my Nano editor. In fact, I'll have to run it as pseudo.
So as a pseudo user I'm going to open my Nano editor and edit my hosts file. Okay, so here
let me just add the IP address of my master. So what exactly is the IP address of the master?
Yes, this is my k Master. So I'm just going to copy this IP address come back here and
paste the IP address and I'm gonna say the name of that particular host is came master.
And now let me save this perfect. Now, what I have to do now is go back to my master and
ensure that the hosts file here has the entry for my slave. I'll clear the screen and first I'll open up my hosts file. So at my master's end, the only entry there is for the master. So I have to write another line where I specify the IP address of my slave and then add the name of that particular host, that is knode. And again, let me use the nano editor for this purpose. So I'm going to say sudo nano /etc/hosts. Okay, so I'm going to come here, say 192.168.56.102, and then say knode. All right, now all the entries are perfect. I'm going to save this and exit.
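So after these edits, the relevant lines of /etc/hosts on both machines look something like this (host names written here the way they are used in this demo):

    192.168.56.101   kmaster    # the Kubernetes master
    192.168.56.102   knode      # the Kubernetes slave node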
So the hosts file on both my master and my slave has been updated, the static IP addresses for both my master and the slave have been set, and the Kubernetes environment has also been established.
Okay. Now before we go further and bring up the cluster, let me do a restart because I've
updated my hosts file. Okay, so let me restart both my master and my slave VMs, and if you're doing it at your end then you have to do the very same. Okay, so let's say restart, and similarly let me go to my node here and do a restart. Okay, so I've just logged in, and now that my systems are restarted, I can go ahead and execute the commands, at only the master's end, to bring up the cluster. Okay, so first of all, let me go through the steps which need to be run at the master's end. So at the master's end, first of all we
have to run a couple of commands to initiate the Kubernetes cluster and then we have to
install a pod network. We have to install a pod network because all my containers inside a single pod will have to communicate over a network; a pod is nothing but a group of containers that communicate over a shared network. So there are various container networks which I can use: I can use the Calico pod network, I can use a Flannel pod network, or I can use any other; you can see the entire list in the Kubernetes documentation. In this session I am going to use the Calico network. Okay, so that's pretty simple and straightforward, and that's what I'm going to show you next. So once you've set up the pod network, you can straight away bring up the Kubernetes dashboard. And remember that you have to set up the Kubernetes dashboard and bring it up before your nodes join the cluster, because in this version of Kubernetes, if you first get your nodes to join the cluster and after that you try bringing the Kubernetes dashboard up, then your dashboard gets hosted on the node, and you don't want that to happen, right? If you want the dashboard to come up at your master's end, you have to bring up the dashboard before your nodes join the cluster. So these would be the three things we will have to do: initiating the cluster, installing the pod network, and then setting up the Kubernetes dashboard. So let me go to my master and execute commands for each
of these processes. So I suppose this is my master. And yes, this is my kmaster. So first of all, to bring up the cluster we have to execute this command. Let me copy this, and over here we have to replace the IP address with the IP address of my master, right, this machine. I have to specify that IP address over here, because this is where the other nodes can come and join; this is the master, right? So I'm just saying apiserver-advertise-address 192.168.56.101, so that all the other nodes can come and join the cluster on this IP address. And along with this, I have to also specify the pod network CIDR. Since I've chosen the Calico pod network, there is a network range which my Calico pod network uses; CNI basically stands for container network interface. If I'm using the Calico pod network then I have to use Calico's network range, but in case you want to use a Flannel pod network, then you can use Flannel's network range. Okay, so let me just copy this one and paste it. All right, so the command is sudo kubeadm init with the pod network CIDR, followed by the apiserver advertise address from where the other nodes will have to join. So let's go ahead and hit enter.
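Written out, the command being run here is essentially the one below; 192.168.0.0/16 is the pod CIDR that the Calico manifests of that time expected (Flannel setups typically use 10.244.0.0/16 instead), so treat the exact range as an assumption to check against your pod network's documentation.

    sudo kubeadm init \
        --apiserver-advertise-address=192.168.56.101 \
        --pod-network-cidr=192.168.0.0/16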
So since we're doing this for the first time, give it a few minutes, because Kubernetes takes some time to install. Just hold on until that happens. All right. Okay, great. Now it says that your Kubernetes master has initialized successfully; that's good news. And it also says that to start using your cluster, we need to run the following commands as a regular user. Okay, so I'll note that: I'll log out as the sudo user and, as a regular user, execute those three commands. And also, if I have to deploy a pod network, then I have to run one more command; that is the command which I have to run to bring up my pod network, where I'll basically be applying the YAML file which is present at that URL.
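Those regular-user commands and the pod-network command are, in essence, the following; the kubeconfig lines are the standard ones kubeadm prints, while the Calico manifest URL is left as a placeholder because you should use exactly the URL shown in the kubeadm output or the Calico documentation.

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config     # answer yes if asked to overwrite
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f <calico-manifest-url>                       # deploys the Calico pod network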
So before I get to all these things, let me show you that we have a kubeadm join command which has been generated, right? This is generated at my master's end, and I have to execute this command at my node to join the cluster. But that would be the last step, because like I said earlier, these three commands will have to be executed first, then I have to bring up my pod network, then I have to bring up my dashboard, and only then do I get my nodes to join the cluster using this command. So for my reference, I'm just going to copy this command and store it somewhere else. Okay, so right under this, let me just keep this command for later reference. And in the meanwhile, let me go ahead and execute all these commands one after the other, as per Kubernetes' instructions, right? Yes, I would like to overwrite it, and then okay. Now that I'm done with this, let me first of all bring up my pod network. Okay, the command to bring up my pod network is this. Perfect, so my Calico pod has been created. Now I can verify
if my pod has been created by running the kubectl get pods command. Okay, so this is my kubectl get pods; I can say -o wide --all-namespaces. By specifying -o wide and --all-namespaces, I'll basically get all the pods ever deployed, even the default pods which get deployed when the Kubernetes cluster initiates. So basically the Kubernetes cluster is initiated and deployed along with a few default pods: for your pod network there is one pod which is hosted for your cluster, there are pods for the cluster components themselves, and later there will be one pod which is deployed for your dashboard and whatnot. So this is the entire list, right? So there's a pod for your Calico, for your etcd, there's one pod for your kube-controller, and we have various pods like this for your master and your API server and many things. So these are the default deployments that you get. Anyways, as you can see, the default deployments are all healthy, because the status is all Running, and everything is basically running in the kube-system namespace. All right, and it's all running on my kmaster, that's my Kubernetes master. So the next thing that I have to do is bring up the dashboard before I can
get my nodes to join. Okay, so I'll go to the notepad and copy the command to bring up my dashboard. So copy and paste, great. This is my Kubernetes dashboard; as you can see, basically this pod has come up now. If I execute the same kubectl get pods command, then you can see that I've got one more pod, which is deployed for my dashboard basically. So last time this was not there, because I had not deployed my dashboard at that time; earlier I had only deployed my pod network and the other default things, right? So now I've deployed the dashboard and it is in the ContainerCreating state, so in probably a few more seconds this would also be running. Anyways, in the meanwhile, what we can do is work on the other things which are needed to bring up the dashboard. First of all, enable your proxy and get the dashboard to be served on a web server; there's a kubectl proxy command for that. Okay, so with this, your service would start to be served on this particular port number, okay, localhost port number 8001 of my master, okay, not from the nodes.
So if I just go to my Firefox and go to localhost:8001, then my dashboard service would be up and running over there. So basically my dashboard is being served on this particular port number. But if I want to actually get to my dashboard, which shows my deployments and my services, then that's a different URL. Okay, so yeah, as you can see here, localhost:8001/api/v1 and so on; this entire URL is what is going to lead me to my dashboard. But at this point of time I cannot log into my dashboard, because it's prompting me for a token, and I do not have a token because I have not done any cluster role binding and I have not mentioned that I am the admin of this particular dashboard. So to enable all those things, there are a few more commands that we have to execute, starting with creating a service account for your dashboard. So this is the command to create your service account.
So go back to the terminal, probably a new terminal window, and execute this command. Okay, so with this you're creating a service account for your dashboard, and after that you have to do the cluster role binding for your newly created service account. Okay, so the dashboard service account has been created in the default namespace as per this. Okay, and here I'm saying that my dashboard is going to be for the admin, and I'm doing the cluster role binding. Okay, and now that this is created, I can straight away get the token, because if you remember it's asking me for a token to log in, right? So even though I am the admin now, I would not be able to log in without the token. So to get the token I have to again run this command, kubectl get secret. Okay, so I'm going to copy this and paste it here. So this is the token, or this is the key, that basically needs to be used. So let me copy this entire token and paste it over here in the dashboard login screen.
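For reference, the dashboard-related commands in this part follow this general shape; the service account name dashboard and the default namespace are as described here, and the secret name is whatever kubectl get secret lists for that service account.

    kubectl proxy &                                         # serve the dashboard on localhost:8001
    kubectl create serviceaccount dashboard -n default      # service account for the dashboard
    kubectl create clusterrolebinding dashboard-admin \
        --clusterrole=cluster-admin \
        --serviceaccount=default:dashboard                  # the cluster role binding mentioned above
    kubectl get secret                                      # find the dashboard token secret
    kubectl describe secret <dashboard-token-secret>        # copy the token into the login screen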
So let me just sign in, and yeah, now you can see that my Kubernetes dashboard has been set up, and I can see the cluster from the dashboard over here. So basically, by default, the kubernetes service is deployed, right? That is what you can see. But I've just brought up the dashboard now, and the cluster is not fully ready until my nodes join in. So let's go to the final part of this demonstration, wherein I'll ask my
slaves to join the cluster. So you remember I copied the join command, which was generated at my master's end, into my notepad. I'm going to copy that and execute it at the slave's end to join the cluster. Okay, so let me first of all go to my notepad, and yeah, this is the join command which I had copied, right? So I'm going to copy this, and now I'm going to go to my node. Yep, so let me just paste this and let's see what happens. Let me just run this command as sudo. Perfect, I've got the message that I have successfully established a connection with the API server on this particular IP address and port number, right? So this means that my node has joined the cluster. We can verify that from the dashboard itself. So if I go back to my dashboard, which is hosted at my master's end, I have an option here called Nodes. If I click on this, then I will get the details about my nodes over here. So earlier I only had the kmaster, but now I have both the kmaster and the knode; give it a few more seconds until my node comes up. I can also verify the same from my terminal. So if I go to my terminal here and run the command kubectl get nodes, then it will give me the details about the nodes which are there in my cluster. So kmaster is the one that is already there in the cluster, but knode however will take some more time to join my cluster. Alright, so that's it guys. So that is about my deployment,
and that's how you deploy a Kubernetes cluster. So from here on you can do whatever deployment you want; whatever you want to deploy, you can deploy it easily and very effectively, either from the dashboard or from the CLI. And there are various other video tutorials of ours which you can refer to, to see how a deployment is made on Kubernetes. So I would request you to go to the other videos and see how a deployment is made, and I would like to conclude this video on that note. If you're a DevOps guy, then you would have definitely heard of Kubernetes, but I don't think the DevOps world knows enough of what exactly Kubernetes is and where it's used. And that's why we at Edureka have come up with this video on what is Kubernetes. My name is Vardhan and I'll be representing Edureka in this video.
And as you can see from the screen, these will be the topics that we'll be covering
in today's session. I'll first start off by talking about what is the need for Kubernetes. After that I will talk about what exactly it is and what it's not, and I will do this because there are a lot of myths surrounding Kubernetes and there's a lot of confusion; people have misunderstood Kubernetes to be a containerization platform, which it's not, okay? So I will explain what exactly it is over here. And then after that I will talk about how exactly Kubernetes works; I will talk about the architecture and all the related things. After that I will give you a use case: I will tell you how Kubernetes was used at Pokemon Go and how it helped Pokemon Go become one of the best games of the year 2017. And finally, at the end of the video, you will get a demonstration of how to do a deployment with Kubernetes. Okay, so I think the agenda is pretty clear to you, and I think we can get started with our first topic then. Now, the first topic is all about why do we need Kubernetes. Okay, now to understand why we need Kubernetes, let's understand what
are the benefits and drawbacks of containers. Now, first of all containers are good. They
are amazingly good, right? Any container for that matter of fact, a Linux container or a Docker container or even a rkt container, right, they all do one thing: they package your application and isolate it from everything else. They isolate the application from the host mainly, and this makes the container fast, reliable, efficient, lightweight and scalable. Now hold that thought: yes, containers are scalable, but then there's a problem that comes with that, and this is what results in the need for Kubernetes. Even though containers are scalable, they are not very easily scalable. Okay, so let's look at it this way. You have one container; you might want to probably scale it up to two containers or three containers. Well, it's possible, right? It's going to take a little bit of manual effort, but yeah, you can scale it up without much of a problem. But then look at a real-world scenario where you might want to scale up to a much larger number of containers; in that case, what happens? I mean, after scaling up you have to manage those containers, right? We have to make sure
that they are all working. They are all active and they're all talking to each other because
if they're not talking to each other, then there's no point in scaling up itself, because in that case the servers would not be able to handle the load if they're not able to talk to each other, correct? So it's really important that they are manageable when they are scaled up. And now let's talk about this point: is it really tough to scale up containers? Well, the answer for that might be no. It might not be tough; it's pretty easy to scale up containers, but the problem is what happens after that. Okay, once you scale up containers, you will have a lot of problems. Like I told you, the containers first of all have to communicate with each other, because they are so many in number and they work together to basically host the service, right, the application. And if they are not working together and talking together, then the application is not hosted and scaling up is a waste. So that's the number one reason. And the next is that the containers have to be deployed appropriately, and they have to also be managed. They have to be deployed appropriately because you cannot have the containers deployed in random places; you have to deploy them in the right places. You cannot have one container in one particular cloud and the other one somewhere else. Well, of course it's possible, but yeah, it would lead to a lot of complications internally, and you want to avoid all that. So you have to have one place where everything is deployed appropriately, and you have to make sure that the IP addresses are set everywhere and the port numbers are open for the containers to talk to each other, and all these things, right? So these are the two other points. The next
point, or the next problem, with scaling up is that auto-scaling is not a functionality over here. Okay, and this is one of the things which is the biggest benefit with Kubernetes. The problem, technically, is there is no auto-scaling functionality with plain containers; there's no concept of that at all. And you may ask at this point of time, why do we even need auto-scaling? Okay, so let me explain the need for auto-scaling with an example. Let's say that you are an e-commerce portal, okay, something like an Amazon or a Flipkart, and let's say that you have a decent amount of traffic on the weekdays, but on the weekends you have a spike in traffic; probably you have like 4x or 5x the usual traffic. In that case, what happens is maybe your servers are good enough to handle the requests coming in on weekdays, right? But the requests that come on the weekends, from the increased traffic, cannot be serviced by your servers, right? Maybe it's too much for your servers to handle the load, and maybe in the short term it's fine, maybe once or twice you can survive, but there will definitely come a time when your server will start crashing because it cannot handle that many requests per second or per minute. And if you want to really avoid this problem, what do you do? You have to scale up. And now, would you really keep scaling up every weekend and scaling down after the weekend, right? I mean, technically, is it possible? Will you be buying your servers and then setting them up, and every Friday would you again buy new servers and set up your infrastructure? And then the moment your weekday starts, would you just destroy all your servers, whatever you built? Is that what you would be doing? No, right?
Obviously, that's a pretty tedious task. So that's where something like Cuban Aires comes
in and what communities does is it keeps analyzing your traffic and the load that's being used
by the container and as and when the traffic is are reaching the threshold auto-scaling
happens where if the server's have a lot of traffic and if it needs no more such servers
for handling requests, then it starts killing of the containers on its own. There is no
manual intervention needed at all. So that's one benefit with Kubernetes and one traditional
problem that we have with scaling up of containers. Okay, and then yeah, the one last problem
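Just to make that concrete, here is a minimal sketch of how you might turn on that kind of auto-scaling once a cluster and a deployment exist; the deployment name, CPU threshold and limits below are purely assumptions for illustration, and a metrics source such as metrics-server has to be running for it to work:

# scale the (hypothetical) edureka-demo deployment between 2 and 50 pods,
# targeting roughly 70% average CPU utilization
kubectl autoscale deployment edureka-demo --cpu-percent=70 --min=2 --max=50

# check what the Horizontal Pod Autoscaler is doing
kubectl get hpa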
And then the one last problem we have is the distribution of traffic, which is still challenging without something that can manage your containers. You have so many containers, but how will the traffic be distributed? Load balancing: how does that happen? You just have containers, say 50 of them; how does the load balancing happen across them? These are all questions we really should consider, because containerization is all good and cool, and it was much better than VMs. Containerization was basically a concept that was sold on the basis of scaling up: we said that VMs cannot be scaled up easily, so use containers, because with containers you can scale up easily. That was the whole reason; we basically sold containers with the tagline of scaling up. But in today's world our demand is so much greater that even regular containers are not enough; scaling happens at such a scale and in such detail that we need something else to manage the containers. Do we agree that we need something? And that is exactly what Kubernetes is. Kubernetes is a container management tool. It is open source, and it basically automates your container deployment, your container scaling and descaling, and your container load balancing. The benefit is that it works brilliantly with all the big cloud vendors and hybrid cloud setups, and it also works on-premises. That is one big selling point of Kubernetes. And if I should give you more background, let me tell you that Kubernetes was developed at Google; it is basically a brainchild of Google, and that is pretty much the end of the story for every other competitor out there, because the community that Google brings along with it is huge. The head start Kubernetes got from being a Google brainchild is humongous, and that is one of the reasons why Kubernetes is one of the best container management tools in the market, period. And given that Kubernetes is a Google product, the whole thing is written in the Go language. Of course, Google has since contributed the whole Kubernetes project to the CNCF, which is nothing but the Cloud Native Computing Foundation, or simply the Cloud Native Foundation; you can call it either. They have donated this open-source project to the CNCF.
If I have to summarize what Kubernetes is, you can think of it like this: it can group a number of containers into one logical unit for managing and deploying an application or a particular service. That is a very simple definition of what Kubernetes is. It can easily be used for deploying your application; of course it will be Docker containers that you are deploying, but since you will be using a lot of Docker containers in production, you will also need Kubernetes to manage those multiple Docker containers. So that is the role it plays in terms of deployment, and scaling up and scaling down is primarily the game of Kubernetes: from your existing architecture it can scale up to any number you want, it can scale down at any time, and the best part is that the scaling can also be set to be automatic, like I explained some time back. Kubernetes will analyze the traffic and figure out whether scaling up or scaling down needs to be done. And of course there is the most important part, load balancing: what good is your container, or group of containers, if load balancing cannot be enabled? Kubernetes does that as well, and these are some of the points on which Kubernetes is built. So I'm pretty sure you have a good understanding of what Kubernetes is by now, or at least a brief idea. So moving forward, let's look at the features of Kubernetes.
Okay, so we have seen what exactly Kubernetes is and how it uses Docker containers, or containers in general. Now let's see some of the selling points of Kubernetes, or why it is a must for you. Let's start off with automatic bin packing. When we say automatic bin packing, it basically means Kubernetes packages your application and automatically places containers based on their requirements and the resources that are available. That is the number one advantage. The second thing is service discovery and load balancing. There is no need to worry here: if you are going to use Kubernetes, you don't have to worry about networking and communication, because Kubernetes will automatically assign containers their own IP addresses, and probably a single DNS name for a set of containers that are performing a logical operation, and of course there will be load balancing across them. So you don't have to worry about any of this; that is why we say there is service discovery and load balancing with Kubernetes. The third feature of Kubernetes is storage orchestration. With Kubernetes you can automatically mount a storage system of your choice: you can choose that to be local storage, or a public cloud provider such as GCP or AWS, or even a network storage system such as NFS. So that was feature number three. Feature number four is self-healing, and this is one of my favorite parts, actually, not just of Kubernetes but even with respect to Docker Swarm; I really like this self-healing part. What self-healing is all about is that whenever Kubernetes realizes that one of your containers has failed, it will restart that container on its own, or create a new container in place of the crashed one. And in case your node itself fails, then whatever containers were running on that failed node will be started on another node. Of course, you would have to have more nodes in the cluster; if there is another node in the cluster, room will definitely be made for this failed container to start as a service. So that happens.
The next feature is batch execution. When we say batch execution, it means that along with services, Kubernetes can also manage your batch and CI workloads, which is more of a DevOps role. As part of your CI workloads, Kubernetes can replace containers that fail, and it can restart them and restore the original state; that is what is possible with Kubernetes. And then there is secret and configuration management, which is another big feature of Kubernetes. This is the concept whereby you can deploy and update your secrets and application configuration without having to rebuild your entire image, and without having to expose your secrets in your stack configuration. So if you want to deploy and update only your secrets, that can be done, and that is not available with all the other tools; Kubernetes is one that does it. You don't have to restart everything and rebuild your entire container. That's one more benefit.
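As a small, hedged illustration of that idea, secrets and configuration data can be created and updated as their own objects, independent of the image; the names and values here are made up for the example:

# store sensitive values as a Secret instead of baking them into the image
kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password='S3cret!'

# keep non-sensitive settings in a ConfigMap that pods can reference
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug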
Then we have horizontal scaling, which of course you already know about by now: you can scale your applications up and down easily with a simple command. That simple command can be run on the CLI, or you can do it easily on your GUI, which is the Kubernetes dashboard. Auto-scaling is also possible: based on CPU usage, your containers will automatically be scaled up or scaled down. So that's one more feature. And the final feature we have is automatic rollouts and rollbacks. What Kubernetes does is, whenever there is an update to your application that you want to release, it progressively rolls out these changes and updates to the application or its configuration, ensuring that one instance after another receives the update and that not all instances are updated at the same time, thus ensuring high availability. And even if something goes wrong, Kubernetes will roll back that change for you. So all these things are enabled, and these are the features of Kubernetes.
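To ground those last two features, here is a rough sketch of the kind of commands involved in scaling and rolling back a deployment; the deployment name, container name and image tag are assumptions, not something from this demo:

# manual horizontal scaling
kubectl scale deployment edureka-demo --replicas=5

# trigger a progressive rollout by changing the image, then watch it
kubectl set image deployment/edureka-demo nginx=nginx:1.25
kubectl rollout status deployment/edureka-demo

# if something goes wrong, roll back to the previous revision
kubectl rollout undo deployment/edureka-demo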
So if you are really considering a solution for managing your containers, then Kubernetes should be your solution; that should be your answer. So that is about the various features of Kubernetes. Now moving forward, let's talk about a few of the myths surrounding Kubernetes, and we are doing this because a lot of people are confused about what exactly it is. People have this misunderstanding that Kubernetes is like Docker, that it is a containerization platform; that is what people think, and that is not true. This kind of confusion is what I intend to clear up in the upcoming slides. I will talk about what exactly Kubernetes is and what Kubernetes is not, and let me start with what it is not. The first thing is that Kubernetes is not to be compared with Docker, because that is not the right set of parameters to compare them on. Docker is a containerization platform, and Kubernetes is a container management platform, which means that once you have containerized your application with the help of Docker containers or Linux containers, and when you are scaling those containers up to a big number like 50 or a hundred, that is where Kubernetes comes in. When you have multiple containers that need to be managed, that is where Kubernetes can come in and do it effectively. You can specify the configurations, and Kubernetes will make sure that at all times those conditions are satisfied. That is what Kubernetes is: you can state in your configuration that at all times you want this many containers running, this many pods running, and quite a few other requirements, and whatever happens, your cluster master, your Kubernetes master, will ensure that the condition is satisfied. But that does not mean Docker does not address this space at all. Docker also has its own tool, I wouldn't call it a plug-in, it is actually another tool of theirs called Docker Swarm, and Docker Swarm does a similar thing: it does mass container management, similar to what Kubernetes does. When you have 50 to 100 containers, Docker Swarm can help you manage them. But if you look at who is prevailing in the market today, I would say it is Kubernetes, because Kubernetes came in first, and the moment it came in it was backed by Google; it had this huge community that it swept along with it, so it has left hardly any market for Docker Swarm. But that does not mean Kubernetes is better than Docker, because at the end of the day it is using Docker. Kubernetes is only as good as what Docker is: if there are no Docker containers, then there is no need for Kubernetes in the first place. So Kubernetes and Docker go hand in hand. That is the point you have to note, and I think that also explains the point that Kubernetes is not for containerizing applications. And the last thing is that Kubernetes is not for applications with a simple architecture. If your application's architecture is pretty complex, then you can probably use Kubernetes to un-complicate it, but if you have a very simple one in the first place, then using Kubernetes would not do you any good, and it could probably make things a little more complicated than they already are. So this is what Kubernetes is not. Now, speaking of what Kubernetes actually is, the first point is that Kubernetes is robust and reliable.
When I say robust and reliable, I am referring to the fact that the cluster that is created, the Kubernetes cluster, is very strong; it is rigid and it is not going to be broken easily. The reason is the configuration you specified: at any point of time, if any container fails, a new container will come up, or that container will be restarted; one of those things will definitely happen. If a node fails, then the containers that were running on that particular node will start running on a different node. That is why it is reliable and strong: at any point of time your cluster will be at full force, and if at any time that is not happening, you will be able to see that something is wrong, troubleshoot your node, and then everything will be fine. Kubernetes does pretty much everything possible to let us know that the problem is not at its end and that it is giving us the exact result we want. That is what Kubernetes does. The next thing is that Kubernetes is actually the best solution for scaling up containers, at least today, because the two biggest players in this market are Docker Swarm and Kubernetes, and Docker Swarm is not really the better one here because it came in a little late. Even though Docker was there from the beginning and Kubernetes came after it, Docker Swarm, which is what we are comparing, came in somewhere around 2016 or 2017, while Kubernetes came somewhere around 2015, so it had a very good head start. They were the first ones to do this, and the backing from Google is just icing on the cake, because whatever problem you have with respect to containers, if you just put your error out there, you will have a lot of people on github.com, in GitHub issues and on Stack Overflow, resolving those errors. That is the kind of market presence it has; it is backed by a really huge community. That is what Kubernetes is, and to conclude this slide, Kubernetes is a container orchestration platform and nothing else. So I think these two slides have given you more information and more clarity with respect to what Kubernetes is and how different it is from Docker and Docker Swarm. Now moving on, let's go to the next topic, where we will compare Kubernetes with Docker Swarm, and we are comparing it with Docker Swarm because we cannot compare Docker and Kubernetes head on. That is what you have to understand: if you are this person over here, if you are Sam, who is wondering which is the right comparison, then let me reassure you that the comparison can only be between Kubernetes and Docker Swarm. So let's go ahead and see what the differences actually are. Let's start off with installation and configuration.
That is the first parameter we will use to compare these two, and here Docker Swarm comes out on top, because Docker Swarm is a little easier: you have around two or three commands that will get your cluster up and running, and that includes the node joining the cluster. With Kubernetes it is way more complicated than Docker Swarm: you have close to ten or eleven commands to execute, and there is a certain order you have to follow to ensure there are no errors. It is time-consuming, and that is why it is complicated. But once your cluster is ready, Kubernetes is the winner, because the flexibility, the rigidity and the robustness that Kubernetes gives you cannot be offered by Docker Swarm. Yes, Docker Swarm is faster, but it is not as good as Kubernetes when it comes to the actual working of the cluster. And speaking of the GUI: once you have set up your cluster, you can use a GUI with Kubernetes for deploying your applications, so you don't always need to use the CLI. You have a dashboard, and if you give it admin privileges, you can deploy your application from the dashboard itself, with just click functionality. The same is not the case with Docker Swarm; there is no GUI in Docker Swarm. So Docker Swarm is not the winner here, it is Kubernetes. Now, going to the third parameter, scalability. People again have a misconception that because Kubernetes is the solution for scaling up, it must be better and faster than Docker Swarm. It may be better, but it is not faster than Docker Swarm. There is a report I read recently which says that scaling up in Docker Swarm is almost five times faster than scaling up with Kubernetes. That is the difference. But once your scaling up is done, your cluster strength with Kubernetes is going to be much stronger than with Docker Swarm, and that again is because of the various configurations that will have been specified by then. Now moving on, the next parameter is load balancing, which requires manual service configuration in the case of Kubernetes, and yes, this can be a shortfall, whereas with Docker Swarm there are in-built load balancing techniques that you don't need to worry about. That said, even the load balancing that requires manual effort in Kubernetes is not too much: there are times when you have to manually specify your configuration and make a few changes, but it is not as much as you might think. And speaking of updates and rollbacks, what Kubernetes does is the scheduling to maintain services while updating. That is very similar to how it works in Docker Swarm, where you have progressive updates and service health monitoring happens throughout the update, but the difference is that when something goes wrong, Kubernetes goes the extra mile of doing a rollback and putting you back to the previous state, right before the update was launched. So that is the thing with Kubernetes.
The next parameter we are comparing these two on is data volumes. Data volumes in Kubernetes can be shared with other containers, but only within the same pod. We have a concept called pods in Kubernetes: a pod is nothing but something that groups related containers, a logical grouping of containers, and whichever containers are inside a pod can have a shared storage volume. In the case of Docker Swarm you don't have the concept of a pod at all, so shared volumes can be between any containers; there is no such restriction in Docker Swarm. And then finally we have logging and monitoring. When it comes to logging and monitoring, Kubernetes provides in-built tools for this purpose, whereas with Docker Swarm you have to install third-party tools if you want logging and monitoring. That is a drawback for Docker Swarm, because logging is really important: you will know what the problem is, you will know which container failed and what exactly the error was, so logs help you answer that. And monitoring is important because you always have to keep a check on your nodes; as the master of the cluster, it is very important that there is monitoring, and that is where Kubernetes has a slight advantage over Docker Swarm. But before I finish this topic, there is one slide I want to show you, which is about statistics. I picked this stat up from Platform9, which is a company that writes about tech, and what they have said is that of the news articles produced in that one particular year, 90 percent covered Kubernetes compared to 10 percent on Docker Swarm. Amazing, right? That is a big difference: it means that for every one blog or article written on Docker Swarm, there are nine different articles written on Kubernetes. Similarly for web searches, Kubernetes is at 90 percent compared to Docker Swarm's 10 percent, and for publications, GitHub stars and the number of commits on GitHub, all of these are clearly won by Kubernetes; it is everywhere. So Kubernetes is the one dominating this market, and that is pretty visible from this stat as well. So I think that pretty much brings an end to this particular topic.
Now moving forward, let me show you a use case. Let me talk about how this amazing game called Pokemon Go was powered with the help of Kubernetes. I am pretty sure you all know what it is, right? You know Pokemon Go; it is the very famous game, and it was actually the best game of the year 2017, and the main reason it was the best is Kubernetes. Let me tell you why, but before that there are a few things I want to cover; I will give you an overview of Pokemon Go and talk about a few key stats. Pokemon Go is an augmented reality game developed by Niantic for Android and iOS devices, and the key stats read that they have had 500 million plus downloads overall and 20 million plus daily active users. That is massive: if you have 20 million plus daily users, you have achieved an amazing thing. That is how big this game is. Now, this game was initially launched only in North America, Australia and New Zealand, and I am aware of this fact because I am based out of India and I did not get access to it: the moment news got out that there was a game like this, I tried downloading it, but I couldn't find any link and couldn't download it at all. So they launched it only in those countries, but in spite of releasing it only in those regions they had a major problem, and that problem is what I am going to talk about in the next slide. My use case is based on the very fact that in spite of launching only in North America, Australia and New Zealand, they could have had a meltdown, but instead, with the help of Kubernetes, they used that same problem as the basis for their raw success. So that is what happened; let that be the suspense. Before I get to that, let me finish this slide: one amazing thing about Pokemon Go is that it inspired users to walk over 5.4 billion miles in a year. Do the math: 5.4 billion miles in one year; that again is a very big number. And it is said that it surpassed engineering expectations by 50 times. Now this last line is not with respect to Pokemon Go the game, but with respect to the backend and the use of Kubernetes to achieve what was needed.
I think I have spent enough time here, so let me go ahead and talk about the most interesting part and tell you how the backend architecture of Pokemon Go looked. You have a Pokemon Go container which had two primary components: one is Google Bigtable, which is the main database where everything is going in and coming out, and then you have the programs, which run on the Java cloud. These two things are what run the game. MapReduce and Cloud Dataflow were used for scaling up; it is not just about the containers scaling up, but about how the application reacts when there is an increased number of users and how it handles the increased number of requests. That is where the MapReduce paradigm comes in, the mapping and then the reducing, that whole concept. So this was their one deployment, and when we say 5x, it means they had an over-capacity that could go up to five times: technically they could serve X number of requests, but in case of failure conditions or heavy traffic the maximum the servers could handle was 5x, because after 5x the servers would start crashing. That was their prediction. And what actually happened to Pokemon Go, on releasing in just those three geographies, is that once they deployed it, the usage became so high that it was not X, which was technically their target, and it was not even 5x, which was the servers' limit; the traffic they got was up to 50 times, 50 times more than what they expected. And you know that when your traffic is that much, you are going to be brought down to your knees; that is a given. This is a success story that sounds too good to be true, and in that kind of scenario, if the incoming requests really reach 50x, the application is gone for a toss. That is where Kubernetes came in and helped them overcome all the challenges.
How did they overcome those challenges? Because Kubernetes can do both vertical scaling and horizontal scaling with ease, and that was the biggest problem here: any application and any other company can easily do horizontal scaling, where you just spin up more containers and more instances and set up the environment, but vertical scaling is something very specific, and it is even more challenging. It is especially specific to this particular game, because the augmented reality keeps changing whenever a person moves around, walks around in their apartment or somewhere on the road; the RAM, the in-memory usage and the storage would all have to increase, so in real time the server's capacity also has to increase vertically. So once they deployed it, it was not just about horizontal scalability anymore; it was not only about satisfying more requests, it was about satisfying the same request with more hardware, more RAM and so on, so that one particular server had more performance capability. That is what it is about, and Kubernetes solved both of these problems effortlessly. Niantic themselves were surprised that Kubernetes could do it, and that was because of the help they got from Google. I read an article recently that Niantic met with some of the top executives at Google and GCP, and they figured out how things were supposed to go; of course, they also met with the heads of Kubernetes and figured out a way to scale up to 50 times in a very short time. That is the challenge they were presented with, and thanks to Kubernetes they could handle traffic far beyond what they had expected, which is a one-off story and very, very surprising; you would not expect something like this to happen. So that is the use case, and it pretty much brings an end to this topic of how Pokemon Go used Kubernetes to achieve something big, because in today's world Pokemon Go is a really revered game because of what it did: it basically beat all the stereotypes of a game and whatever negatives anybody could hold against games. People could say that mobile games and video games make you lazy, that they make you just sit in one place, and all those things. Pokemon Go was different: it actually made people walk around, it made people exercise, and that goes to show how popular the game became. If Kubernetes lies at the heart of something that became so popular and so big, then you can imagine how big, and how beautiful, Kubernetes is. So that is about this topic. Now moving forward, let me quickly talk about the architecture of Kubernetes. The Kubernetes architecture is very simple.
We have the Kube master, which controls pretty much everything. Note that this is not like Docker Swarm, where the master will also have containers running; there won't be containers running on the master here. All the containers and all the services will run only on your nodes, not on your master. You first create your Kube master, that is the first step in creating your cluster, and then you get your nodes to join the cluster. So be it your pods or your containers, everything runs on your nodes, and your master only schedules or replicates those containers across the nodes and makes sure your configurations are satisfied, whatever you specified in the beginning. The way you access your Kube master is via two ways: you can use it via the UI or via the CLI. The CLI is the default way, and technically the main way, because to start setting up your cluster you use the CLI; you set up your cluster, and from there you can enable the dashboard. When you enable the dashboard you get the GUI, and then you can start using Kubernetes and deploying just with the click functionality of the dashboard, rather than having to write a YAML file or feed commands one after another on the CLI. So that is the working of Kubernetes. Now, let's concentrate a little more on how things work on the node end. As I said before, the Kubernetes master controls your nodes, and inside the nodes you have containers. Now, these containers are not just contained inside the nodes; they are actually contained inside pods. So you have nodes, inside which there are pods, and inside each of these pods there will be a number of containers, depending on your configuration and your requirement. These pods, which contain a number of containers, are a logical binding or logical grouping of those containers. Suppose you have one application X running on node 1: you will have a pod for this particular application, and all the containers needed to execute that application will be part of this particular pod. That is how a pod works, and that is the difference between Docker Swarm and Kubernetes, because in Docker Swarm you do not have a pod; you just have containers running on your node. The other two terminologies you should know are the replication controller and the service. The replication controller is the master's resource for ensuring that the requested number of pods is always running on the nodes; it is an affirmation that says this many pods, and this many containers, will always be running, and the replication controller will always ensure that is happening. And your service is just an object on the master that provides load balancing across a replicated group of pods. That is how Kubernetes works, and I think this is a good enough introduction. I think now I can go to the demo part, where I will show you how to deploy applications on Kubernetes either via the CLI, via YAML files, or via the dashboard. Okay guys, so let's get started.
For the demo purpose I have two VMs with me. As you can see, this is my kube-master, which will act as the master in my cluster, and then I have another VM, which is my kube-node-1. So it is a cluster with one master and one node. Now, for ease of use in this video I have compiled the list of commands in this text document, so here I have all the commands needed to start the cluster, and then the other configurations and all those things. I will be copying these commands and showing you side by side, and I will also explain, as I do that, what each of these commands means. There is one prerequisite that needs to be satisfied: the master should have at least two CPU cores and 4 GB of RAM, and the node should have at least one CPU core and 4 GB of RAM. So just make sure this much hardware is given to your VMs. If you are using a native Linux operating system, well and good, but if you are using a VM on top of a Windows OS, then I would request you to satisfy these two criteria. And I think we can straight away start. Let me open up my terminal first. Okay, this is my node; I am going back to my master. Yes. So first of all, if you have to start your cluster, you have to start it from the master's end, and the command for that is kubeadm init, where you specify the pod network flag and the API server flag. We specify the pod network flag because the different containers inside your pods should be able to talk to each other easily; that was the whole concept of service discovery that I spoke about earlier during the features of Kubernetes. For this discovery we have different pod networks through which the containers talk to each other, and if you go to the Kubernetes documentation you will find a lot of options: you can use either a Calico pod network or a flannel pod network. When we say pod network, it is basically framed as the CNI, the Container Network Interface. So you can use either a Calico CNI or a flannel CNI or any of the other ones; these are the two popular ones, and I will be using the Calico CNI. This is the network range for this particular pod network, and this is what I will specify over here, and then over here we have to specify the IP address of the master. So let me first copy this entire line, and before I paste it, let me do an ifconfig and find out the IP address of this master machine. The IP address is 192.168.56.101, so let's keep that in mind, and let me paste the command over here; in place of the master IP address I am going to specify the IP address of the master, which, as I just read out, is 192.168.56.101. And for the pod network, I told you I am going to use the Calico pod network, so let's copy that network range and paste it here; all the containers inside this pod network will be assigned an IP address in this range. Now, let me just go ahead and hit enter, and the cluster will begin to set up.
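Putting that together, the command I ran looks roughly like this; it is a sketch, where 192.168.0.0/16 is Calico's usual default pod CIDR and 192.168.56.101 is the master IP from this demo:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.56.101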
It is going as expected; it is going to take a few minutes, so just hold on. Okay, perfect: my Kubernetes master has initialized successfully, and it says that if you want to start using your cluster, you have to run the following as a regular user. So we have three commands suggested by Kubernetes itself, and that is actually the same set of commands I have here; I will be running those same commands to set up the environment. And after that we have this token generated, the joining token: the token, along with the IP address of the master, is what I will execute on my nodes so that they join this cluster where this machine is the master. This is my master machine, and it has created the cluster. Before I do that, though, there are a few steps in the middle. One of those steps is executing these three commands, and after that comes bringing up the dashboard and setting up the pod network, that is, the Calico pod network. I have to set up the Calico pod network and also set up the dashboard, because if I do not start the dashboard before the nodes join, then the node cannot join properly and I will have very severe complications. So let me first go ahead and run these three commands one after the other. Since I have the same commands in my text doc, I will just copy them from there: Ctrl+C, paste, enter. Okay, and I will copy this line. Remember, you have to execute all these things as a regular user; you can use sudo where needed, but you will be executing them as your regular user. It is asking me if I want to overwrite whatever already exists in this directory; I will say yes, because I have already done this before, but if you are setting up the cluster for the first time, you will not get this prompt. Now let me go to the third line, copy it, and paste it here. Okay, perfect. Now I have run these three commands, just as I was told.
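For reference, the three commands that kubeadm typically prints for a regular user at this point are these (shown here just as a sketch of what was pasted from the text document):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config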
Now, the next thing I have to do, before I check the node status and all those things, is set up the network, the pod network. Like I said, this is the line, the command we have to run to set up the Calico network so that all the nodes that join become part of this particular network. It will pick up the template from the Calico manifest file available at the URL given in this box. So hit enter, and yes, it is created: the Calico kube-controllers are created. Now I will just go back here, and at this point I can check whether my master is connected to this particular pod network. I can run the kubectl get nodes command; this says that I have one resource connected to the cluster, with the name of the machine, the role of master, and the state Ready. If you want to get an idea of all the different pods which are running by default, you can do kubectl get pods with a few options: you specify the --all-namespaces flag and, with the -o flag, specify wide. This way I get all the pods that are started by default. There are different services, like etcd, the kube-controllers, the Calico node and so on; for every single service there is a separate container and pod started. That is what you can understand from this output, and that is a safe assumption.
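The commands from this step look roughly like the following sketch; the Calico manifest URL is an assumption on my part, since the exact file used in the demo is not shown on screen:

# apply the Calico pod network (CNI) - URL is an assumed example
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# the master should now show up as Ready
kubectl get nodes

# list the default system pods, one per cluster service
kubectl get pods --all-namespaces -o wide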
Now that we know the cluster is ready and the master is part of it, let's go ahead and bring up the dashboard. Remember, if you want to use the dashboard, you have to run this command before your nodes join the cluster, because the moment your nodes join, bringing up the dashboard becomes challenging and it will start throwing errors: it will say that it is being hosted on the node, which we do not want; we want the dashboard to be on the master itself. So first let's bring the dashboard up. I am going to copy this and paste it here. Enter. Great, the Kubernetes dashboard is created. Now the next command you need to get your dashboard up and running is kubectl proxy. With this we get a message saying it is being served at a particular port number, and right now you can access localhost, what was the port number again, yes, 127.0.0.1 is localhost, followed by port number 8001. Right now we are not seeing the dashboard, because it is technically accessed at another URL, but before we go there, there are various other things we have to set up, because so far we have only enabled the dashboard. If you want to access the dashboard, you first have to create a service account. The instructions are here: you create a service account for the dashboard, then you say that you are going to be the admin user of this particular service account, so you give the dashboard admin privileges, and you do the cluster role binding. And after that, to get access to the dashboard, you basically have to provide a key; it is like a password, so we have to generate that token first, and then we can access the dashboard. So again, for the dashboard there are these three commands. You might get confused down the line, but remember this is separate from the earlier steps: what we did initially was run the three commands that kubeadm told us to execute; after that the next necessity was bringing up the pod network, so that was the command for the pod network; then this was the command for getting the dashboard up; right after that you run the proxy, and the dashboard starts being served on that particular port number. So my dashboard is being served, but I am not getting the UI there yet; to get the UI, I have to create the service account and do these three things. So let's start with those and then continue. I hope this wasn't confusing, guys. Okay, I can't run it in this terminal, so let me open a new one.
Here I am going to paste the first command, and yes, the service account is created. Let me go back and execute the next command. When I am doing the role binding, I am saying that my dashboard should have admin functionality, and that is going to be the cluster role, cluster-admin; the service account is the one I just created, and it is going to be in the default namespace. When I created the account I said I wanted it in the default namespace, so that is the same thing I am specifying here. Okay, dashboard-admin created; good. Now let's generate the token that is needed to access my dashboard. Before I execute this command, let me show you the URL once: if you go to this URL, /api/v1/namespaces and so on, let me show it to you here. This is the particular URL where you get login access to the dashboard: localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy. Remember this one; that is the same thing over here, and like I told you, it is asking me for my password. So I will choose the token option, but first let me go here, run the command, and generate the token. This is the token; let me copy it from here till here, and this is what I have to paste over here. All right, sign in, and yes, perfect, with this my dashboard is up. This is my Kubernetes dashboard.
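Condensed into commands, the dashboard-access steps from this part of the demo look roughly like this sketch; the account name and namespace are just the ones used here, and the token lookup shown is one common way of doing it on older clusters where a token secret is auto-created for the service account:

# service account for the dashboard, in the default namespace
kubectl create serviceaccount dashboard -n default

# give it cluster-admin via a cluster role binding
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=default:dashboard

# fetch the token to paste into the dashboard login screen
kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

# with 'kubectl proxy' running, the dashboard is reached at a URL of the form
# http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/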
This is how it looks: I can get an overview of everything I want. There are Workloads, and if I come down there are Deployments, I have the option to see the Pods, and I can see the different Services running, among most of the other functionalities. Right now we don't have any bar graph or pie chart showing which cluster or which pod is up, because I have not added any node and there is no service of ours running yet. So this is the layout of the dashboard: you get access to everything you want from the left, and you can drill down into each of the namespaces, pods and containers. Now, if you want to deploy something through the dashboard, through the click functionality, you can go here. But before I create any container, any pod, or any deployment for that matter, I have to have nodes, because these will run only on nodes; whatever I deploy runs only on a node. So let me first open up my node and get it to join this particular cluster of mine. Now, if you remember, the command to join the node was generated at the master end, correct? So let me go and fetch that again; that was the command that got printed earlier, this one. Let's just copy it and paste it at my node's end. This has the IP of my master, and the node will join at this particular port number. Let me hit enter and see what happens. Okay, let me run it as the root user. Okay, perfect: it has successfully established a connection with the API server, and it says this node has joined the cluster. Bingo, that is good news.
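The join command that kubeadm prints on the master looks roughly like this; the token and hash below are placeholders, because the real values come from your own kubeadm init output, and 6443 is simply the default API server port:

sudo kubeadm join 192.168.56.101:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>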
Now if I go back to my master and open up the dashboard, there will be an option for Nodes. Initially it was showing the master as the only entry under nodes; let me just refresh, and you will see that even node-1 is now part of it. So there are two resources, two instances: one is the master itself and the other is the node. Now if I go to Overview you will get more details, and once I start my applications, my services or containers, all of those will start showing up here. So it is high time I show you how to deploy, starting with the dashboard. I told you about this click functionality, so let's go ahead and click on Create. And mind you, the dashboard is the easiest way to deploy your application; even developers around the world do the same thing: the first time they probably create the deployment using a YAML file, and from then on they edit the YAML on top of the dashboard itself, or create and deploy the application from here. So we will do the same thing: go to "Create an app" and use the click functionality. Let's give a name to the application; I will say edureka-demo, let that be the name of my application, and I basically want to pull an nginx image, I want to launch an nginx service, so I will specify the image name from Docker Hub. It says you can give either the URL of a public image in any registry, or a private image hosted on Docker Hub or Google Container Registry. So I don't have to specify a URL as such: if the image is to be pulled from Docker Hub, I can just use the name of the image, and that is good enough. nginx is the name, and that is good enough. I can also choose to set my number of pods to one or two; that way I will have two pods running. So this is done, and then there is one more part: without it I could simply hit Deploy, and my application would be created, but I just wouldn't get the UI, meaning I wouldn't see the nginx service page. So that I get to the service, I have to enable one more thing here: under Service, click on the dropdown, and you will have an External option. Click on External; this lets you access this particular service from your host machine. You can see the explanation here: an internal or external service can be defined to map an incoming port to a target port seen by the container. So nginx, which would be hosted on one of the container ports, would not be accessible if I don't specify anything here, but now that I have said to access it externally on a particular port number, it will get mapped for me. By default nginx runs on port 80, so the target port will be 80, but the port I want to expose it on I can map to anything I want, so I am going to say 82. All right, so that's it; it is as simple as this, and this way your application is launched with two pods. I can just go down and click on Deploy, and this way my application gets deployed. My deployment is successful and there are two pods running, so what I can do is go to the service and try to access the UI. It says it is running on this particular node port, so I will copy that and open localhost with that port number. Hit enter, bingo: it says "Welcome to nginx!" and I am getting the UI. So I am able to access the application I just launched through the dashboard; it was as simple as that.
my application which I just launched through the dashboard. It was as simple as that. So
this is one way of for launching or making a deployment. There are two other ways. Like
I told you one is using your CLI itself your command line interface of your draw Linux
machine, which is the terminal or you can do it by uploading the yamen file. You can
do it by uploading the yamen file because everything here is in the form of Yama Lord
Jason. Okay, that's like the default way. So whatever deployment I made right that also
those configurations are stored in the form of Yaman. So if I click on view or edit yeonggil,
all the configurations are specified the default ones have been taken. So I said the name should
be a director demo that is what has been. Oh you're that is the name of my deployment?
Okay. So kind is deployment the version of my API. It's this one extension /we 1 beta
1 and then other metadata I have various other lists. So if you know how to write a normal
file then I think it would be a little more easier for you to understand and create your
deployment because you will file is everything about lists and maps and these are all files
are always lists about maps and maps about lists. So it might be a little confusing.
So probably will have another tutorial video on how to write a normal file for Cuban its
deployment but I would keep that for another session. Okay. Let me get back to this session
and show you the next deployment. Okay, the next deployment technique, so let me just
close this and go back to overview. Okay, so I have this one deployment very good. Okay.
So let's go to this. What I will do is delete this deployment, or at least scale it down, because I don't want too many resources to be used on my node, since I still have two more deployments to show. So I have reduced my deployment here, and I think that should be good enough. Great. So let's go back to this document of mine with the cluster setup commands. This is where we are at: we could check our deployments, we could do all these things. One thing I might have forgotten is showing the nodes that are part of the cluster. This is my master; I kind of forgot to show you this: kubectl get nodes. So the same view that you got on the dashboard, you get here also: these are the two nodes, with the name and all those details. And I can also do kubectl get pods, which tells me the pods that are running; edureka-demo is the pod I started, that is my pod. Now if I specify the other flags, --all-namespaces and -o wide, then all the default pods which get created along with your Kubernetes cluster are also displayed. Let me show you that too, just in case. Yes, so this is the one I created, and the other ones are the default deployments that come with Kubernetes; the moment you install and set up the cluster, these get started. And you can see here that this particular edureka-demo, which I started, is running on my node-1, along with the kube-proxy and this particular calico-node. So those services run on both master and node: the calico-node runs on my node over here and on my master, and similarly the kube-proxy runs on my node and on my master, whereas edureka-demo is the one running only on my node. Okay, so getting back to what I was about to explain.
The next part is how to deploy through your terminal. To deploy the same nginx application through the CLI, we can follow this set of commands; there are a couple of steps here. First of all, to create a deployment we have to run this command: kubectl create deployment, then a name, and then the image you want to use. The first name is going to be the name of your deployment, and the image is what you want to run. So Ctrl+C, and let me go to the terminal on my master and execute this kubectl create deployment command. The deployment nginx is created, and if you want we can verify that over here as well: under Deployments we earlier had one entry, edureka-demo, and now you can see there are two, nginx and edureka-demo. This one is pending; it will take a few seconds, so in the meanwhile let's continue with the other steps. Once you have created the deployment, you have to create the service, that is, the NodePort which can be used to access that particular service, because a deployment is just a deployment: you are only deploying your container, and if you want to access it from your local machine, from your host, like I told you earlier, then you have to enable the NodePort. If you want to see your deployments on the terminal, you can run kubectl get deployments; nginx also comes up here. If you want more details about your deployment, you can use kubectl describe; you get more details about this particular deployment, like the name, the port number it is residing on, and all those things. Let's not complicate this; you can use that for understanding later. Once that is done, the next thing you have to do is create the service on the nodes. You have created the deployment, but you create the service using this particular command: kubectl create service nodeport, and you say 80:80. This means you want to access it at this particular port number; you are doing the port mapping, container port 80 to the internal node port 80. So the service for nginx is created, and if you want to check which services are running and where, you can run the command kubectl get svc. This tells you that you have two different services of ours, edureka-demo and nginx, and they are on these port numbers on these nodes. The kubernetes service is the one that got created automatically, whereas edureka-demo and nginx are the ones I created; kubernetes comes up on its own, I am just pointing that out because it is a service for the cluster itself. So let's just go back here, and similarly, if you want to delete a deployment, you can use the command kubectl delete deployment followed by the name of the deployment. It is pretty simple; you can do it this way, or otherwise you can delete it from the dashboard like I showed you: you click over here, then click on Delete, and if you want to scale it, you can scale it there as well.
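Pulling those CLI steps together, the flow from this part of the demo looks roughly like this; the nginx names match what was used here, though the exact flags can vary a little between kubectl versions:

kubectl create deployment nginx --image=nginx      # create the deployment from the nginx image
kubectl get deployments                            # verify it exists
kubectl describe deployment nginx                  # more detail: labels, strategy, ports
kubectl create service nodeport nginx --tcp=80:80  # expose service port 80 -> container port 80 on a NodePort
kubectl get svc                                    # see which node port was assigned
kubectl delete deployment nginx                    # clean up when you are done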
So both of these deployments of mine have one pod each, right? So let's do one thing: let's go to the nginx service and try accessing it on localhost. Perfect, here also it says "Welcome to nginx", so with this you can understand that the port mapping worked, and by going to the service you get to know on which port number you can access it from your host machine. This is the internal container port mapped to this particular port of mine. Now, if this doesn't work, you can also use the cluster IP for the same thing; the cluster IP is basically the IP through which all your containers access each other. Your pod will have an IP, so whatever is running in its containers will also be accessible on the cluster IP. It is the same thing. So let me just close these pages; that is how you deploy an application through the CLI. And this brings us to the last part of this video, which is deployment via a YAML file. For deployment via a YAML file, you have to write your YAML code, or alternatively your JSON code. So this is the code I have written, in YAML format, and in fact I already have it on my machine here. How about I just do an ls? Yes, there is deployment.yaml. All right, so let me show you that. This is my YAML file.
So this is my YAML file. Okay. Here I specify various configurations, similar to how I did it using the GUI or using the CLI; it's something similar, I just specify everything in one particular file. If you can see, I have specified the apiVersion. Okay, so I'm using extensions/v1beta1; I can do this, or I can just simply specify v1, I can do either of those. And then the next important line is the kind. The kind is important because you have to specify what kind of object this file describes: is it a Deployment, is it just a Pod, or what is it? So I've said Deployment, okay, because I want to deploy the containers along with the pod. In case you want to deploy only the pod, which you realistically don't need to do, you can go ahead and write Pod here and just specify the different containers. Okay, but in my case it's a complete Deployment, with the pods and the services and the containers. So I will go ahead and write the other things, and under the metadata I will specify the name of my application. I can specify whatever I want; I can put my own name over here, vardhan, okay, and I can save this.
And then the important part is this spec part. So here is where you set the number of replicas. Do you remember I told you there's something called a replication controller which controls the number of pods you will be running? This is that line. So if I have set 2 over here, it means I will have two pods running of this particular application, vardhan. Okay, and what exactly am I doing here under spec? I'm saying that I want two containers, so I have an indented containers line over here, and then I have two containers inside. The first container which I want to create is named frontend, okay, and I'm using an nginx image, and the port it will be active on is container port 80. All right, and then I'm saying I want a second container, and I could name this anything; I can say backend, and I can choose which image I want, I can probably choose an httpd image. Okay, and I can again say the port this will be running on; I can say the container port it should run on is port number 88, right? So that's how simple it is. All right.
And since it's your first YAML tutorial, the important takeaways from this YAML file configuration are that under spec you have to specify the containers, and that everything has to be properly indented and formatted. Okay, even if you have an extra space anywhere over here, your YAML file will throw an invalid error, so make sure that is not there. Make sure you specify the containers appropriately: if it's going to be just one container, well and good; if it's two containers, make sure you indent them in the right way.
And then you can specify the number of pods you want, give a name to your deployment, and basically stick to these rules. Okay. So once you're done with this, just save it and close the YAML file. Okay. So this is your deployment.yaml. Now, you can straight away feed this file to your Kubernetes cluster, and that way your application will be deployed straight away.
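For reference, a minimal sketch of what a deployment.yaml along these lines could look like is below; the apiVersion, names, images and ports are just the ones mentioned in this walkthrough, so adjust them for your own cluster (on newer Kubernetes versions you would use apps/v1 and an explicit selector instead of extensions/v1beta1):

    ---
    # deployment.yaml -- create it with:  kubectl create -f deployment.yaml
    apiVersion: extensions/v1beta1     # older API group used in this demo
    kind: Deployment
    metadata:
      name: vardhan
    spec:
      replicas: 2                      # keep two pods of this application running
      template:
        metadata:
          labels:
            app: vardhan
        spec:
          containers:
            - name: frontend
              image: nginx
              ports:
                - containerPort: 80
            - name: backend
              image: httpd
              ports:
                - containerPort: 88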
Okay. Now, the command for that is kubectl create -f followed by the name of the file. So let me run that; the name of my file is deployment.yaml. Let me hit enter. Perfect. So my third deployment, vardhan, is also created, and we can check our deployments with the earlier command, which is nothing but kubectl get deployments. Okay, it's not get deployment; sorry, it's get deployments. And as you can see here, there is edureka-demo, there is nginx and there is vardhan, and the thing you should have noticed is that I said I want two replicas, right, two pods. So that's why DESIRED shows 2 while AVAILABLE is still 0; let's just give it a few seconds.
In twenty-odd seconds I don't think the pods would have started yet. So let's go back to our dashboard and verify whether a third deployment comes up over here. Okay, perfect. So that's how it's going to work. Okay, it's probably going to take some more time because the containers are just starting, so let's give it some more time. This could well be because my node has very few resources, right? I have too many deployments; that could be the very reason. So what I can do is go ahead and delete the other deployments so that my node can handle these containers and pods, right? So let me delete this nginx deployment and let me also delete this edureka-demo deployment of mine. Okay. Now let's refresh and just wait for this to happen. Okay.
So what I can do instead is have a very simple deployment, right? Let me go back to my terminal, delete my deployment and redeploy it again, so kubectl delete deployment. Okay, so the vardhan deployment has been deleted. Let's just clear the screen and open the YAML file in gedit again, and here let's make things simpler: let me just delete the second container from here, save this and close it. Now let me create a deployment with this. Okay, so the vardhan deployment is created again.
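A rough sketch of that delete-edit-recreate cycle, using the file and deployment name from this demo, would be:

    kubectl delete deployment vardhan      # remove the earlier deployment
    gedit deployment.yaml                  # edit the file: keep only the single nginx container
    kubectl create -f deployment.yaml      # recreate the deployment from the simplified file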
Now let me go up here, refresh, and let's see what happens. Okay. So this time it's all green because it's all healthy; my pods are successful, or at least the containers are being created successfully. Perfect. So two pods of mine are up and running, and both are running on node 1: pods two of two. Those are the deployments, the replica set and then the services, right? And it's the nginx image which is being used. So well and good, this is also working. So guys, yeah, that's about it. When I tried to upload it earlier, maybe there was some other error, probably a small mistake in the YAML file, or it could have been because my node had too many containers running; those could have been the reasons. But anyway, this is how you deploy through your YAML file.
All right, so that kind of brings us to the end of this session, where I've shown you a demonstration of deploying your containers in three different ways: the CLI, the dashboard and YAML files. Hey everyone, this is Reyshma from Edureka, and today we'll be learning what Ansible is. First, let us look at the topics that we'll be learning today. Well,
what is ansible. First,let us look at the topics that we'll be learning today. Well,
it's quite a long list. It means we'll be learning a lot of things today. Let us take
a look at them one by one. So first we'll see the problems that were before configuration
management and how configuration management help to solve. It will see what ansible is
and the different features of ansible after that. We'll see how NASA is implemented and
civil to solve all their problems. After that. We'll see how we can use ansible for orchestration
provisioning configuration management application deployment and security. And in the end, we'll
write some ansible playbooks to install lamp stack on my node machine and host your website
in my note machine. Now before I tell you about the problems, let us first understand
what configuration management actually is. Well configuration management is actually
the management of your software on top of your Hardware. What it does is that it maintains
the consistency of your product based on its requirements its design and its physical and
functional attributes. Now, how does it maintain the consistency it is because the configuration
management is applied over the entire life cycle of your system. And hence. It provides
you with very good visibility and control. When I say visibility, it means that you can continuously check and monitor the performance of all your systems, so if at any time the performance of any system is degrading, the configuration management system will notify you, and hence you can prevent errors before they actually occur. And by control, I mean
that you have the power to change anything. So if any of your servers failed you can reconfigure
it again to repair it so that it is up and running again, or you can even replace the
server if needed. And also, the configuration management system holds the entire historical data of your infrastructure; it documents all the snapshots of every version of your infrastructure. So overall, the configuration management process facilitates the orderly
management of your system information and system changes so that it can use it for beneficial
purposes. So let us proceed to the next topic and see the problems before configuration
management and how configuration management solved it and with that you'll understand
more about configuration management as well. So, let's see now, why do we need configuration
management? Now, the necessity of configuration management arose from a certain number of factors and reasons, so let us take a look at them one by one. The first problem was managing multiple servers. Now, earlier every system was managed
by hand and by that, I mean that you have to login to them via SSH make changes and
then log off again. Now imagine if a system administrator would have to make changes in
multiple servers: they would have to do this task of logging in, making changes and logging off again and again, repeatedly. This would take up a lot of time, and there was no time left for the system administrators to monitor the performance of the systems continuously. So if at any time any of the servers failed, it took a lot of time to even detect the faulty server, and even more time to repair it, because the configuration scripts that they wrote were very complex and it was very hard to make changes to them. So after
configuration management system came into the picture what it did is that it divided
all the systems in my infrastructure according to their dedicated tasks, their design or architecture, and it organized my systems in an efficient way. For example, I've grouped my web servers together, my database servers together and my application servers together, and this process is known as baselining. Now, let's say for example that I wanted to install a LAMP stack in my system. A LAMP stack is a software bundle where L stands for Linux, A for Apache, M for MySQL and P for PHP. So I need these different pieces of software for different purposes: I need the Apache server to host my web pages, PHP for my web development, Linux as my operating system and MySQL for my data definition and data manipulation. Since all the systems in my infrastructure are now baselined, I would know exactly where to install each of these. For example, I'll use Apache as my web server here; for the database I will install MySQL here, and it also becomes easy for me to monitor my entire system. For example, if my web pages are not running, I would know that there's something wrong with my web servers, so I'll go check here; I don't have to check the database servers and application
servers for that. Similarly. If I'm not able to insert data or extract data from my database.
I would know that something is wrong with my database servers. I don't need to check
these too for that matter. So what configuration management system did with baselining is that
it organized mess system in an efficient way so that I can manage and monitor all my servers
efficiently. Now, let us see the second problem that we had which were scaling up and scaling
down. See nowadays, you can come up with requirements at any time and you might have to scale up
or scale down your systems on the Fly and this is something that you cannot always plan
ahead and scaling up. Your infrastructure doesn't always mean that you just buy new
hardware and just place them anywhere. Haphazardly. You cannot do that. You also need to provision
and configure this new machines properly. So with configuration management system, I've
already got my infrastructure baselined so I know exactly how this new machines are going
to work according to their dedicated task and where should I actually place them and
the scripts that configuration management uses are reusable so you can use the same
scripts that you use to configure your older machines to configure your new machines as
well. So let me explain it to you with an example. So let me explain it to you with
an example. Let's say that if you're working in an e-commerce website and you decide to
hold a mega sale, a New Year or Christmas sale or anything. So it's obvious that there is going to be a huge rise in traffic, so you might need more web servers to handle that amount of requests, and you might even need a load balancer, or maybe two, to distribute that traffic onto your web servers. These changes, however, need to be made
at a very short span of time. So after you've got the necessary Hardware, you also need
to provision them accordingly and with configuration management, you can easily provision this
new machines using either recipes or play books or any kind of script that configuration
management uses. And also after the sale is over you don't need that many web servers
or a load balancer so you can disable them using the same easy scripts as well and also
scaling down is very important when you are using cloud services when you do not need
any of those machines, it's no point in keeping them. So you have to scale down as well because
you have to reconfigure your entire infrastructure as well, and with configuration management it is very easy to automatically scale your infrastructure up and down. So I think you have all understood this problem and how configuration management solved it, so let us take a look at the third problem. The third problem was that the work velocity of the developers was affected because the system administrators were taking time to configure the servers. After the developers have written the code, the next job is to deploy it on different
servers like test servers and production servers for testing it out and releasing it but then
again every server was managed by hand before so the system administrators would again have
to do the same thing log in to its server configure them properly by making changes
and do the same thing again on all the servers. So this was taking a lot of time. Now, before DevOps came into the picture, there was already agility on the developers' end, for which they were able to release new software very frequently, but it was taking a lot of time for the system administrators to configure the servers for testing, so the developers would have to wait for all the test results, and this highly hampered the work velocity of the developers. But after
there was configuration management the system administrator had got access to a configuration
management tool which allowed them to configure all the servers at one go. All they had to
do is write down all the configurations and write down the list of all the software's
that there need to provision this servers and deploy it on all of the servers at one
go. So now agility came in on the system administrators' side as well, and after configuration management the developers and the system administrators were finally able to work at the same pace. Now, this is how configuration management solved the third problem. Now, let us take a
look at the last problem. Now the last problem was rolling back in today's scenario. Everyone
wants a change and you need to keep making changes frequently because customers will
start losing interest if things stay the same so you need to keep releasing new features
to upgrade your application even giants like Amazon and Facebook. They do it now and then
and still they're unsure if the users are going to like it or not. Now imagine if the
users did not like it they would have to roll back to the previous version again, so, let's
see how it creates a problem. Now before there was configuration management. Let's say you've
got the old version which is the version one when you're upgrading it you're changing all
the configurations in the production server. You're deleting the old configurations completely
and deploying the new version now if the users did not like it you would have to reconfigure
This Server again with the old configurations and that will take up a lot of time. So application
is going to be Down for that amount of time that you need for reconfiguring the server
and this might create a problem. But when you're using configuration management system,
as you know that it documents every version of your infrastructure when you're upgrading
it with configuration management, it will remove the configurations of the older version,
but it will be well documented. It will be kept there and then the newer version is deployed.
Now if the users did not like it this time, the older of the configuration version was
already documented. So all you have to do is just switch back to the old version and
this won't take up any time and you can upgrade or roll back your application in zero downtime
zero downtime means that your application would be down for zero time. It means that
the users will not notice that your application went down and you can achieve it seamlessly
and this is how the configuration management system solved all the problems that existed before. So guys, I hope that you have all understood how configuration management did that. Let us now move on to the next topic. Now, the question is, how do I incorporate configuration management in my system? Well, you do that
using configuration management tools. So let's take a look at all the available configuration
management tools. So here I've got the four most popular tools that is available in the
market right now. I've got Ansible and SaltStack, which are push-based configuration management tools; by push-based I mean that you can push all those configurations directly onto your node machines. Chef and Puppet are both pull-based configuration management tools, meaning they rely on a central server for configurations and pull all the configurations from that central server. There are other configuration management tools available in the market too, but these four are the most popular ones. So now let's know more about Ansible. Now, Ansible
is a configuration management tool that can be used for provisioning orchestration application
deployment Automation and it's a push based configuration management tool. Like I told
you what it does is that it automate your entire it infrastructure and gives you large
productivity gains and it can automate pretty much anything. It can automate your Cloud
your networks your servers and all your it processes. So let us move on to the next topic.
So now let us see the features of ansible. The first feature is that it's very simple.
It's simple to install and set up, and it's very easy to learn because Ansible playbooks are written in a very simple data serialization language known as YAML, which is pretty much like English, so anyone can understand it. The next feature, because of which Ansible is preferred over other configuration management tools, is that it's agentless. It means that you do not need any kind of agent or any kind of client software to manage your node machines. All you have to do is install Ansible on your
control machine and just make an SSH connection with your nodes and start pushing configurations
right away. The next feature is that it's very powerful, even though you call ansible
simple and it does not require any agent. It has the capabilities to model very complex
it workflows and it comes with a very interesting feature, which is called the batteries included.
It means that you've got everything that you already need and in ansible it's because it
comes with more than 750 inbuilt modules, which you can use them for any purpose in
your project. And it's very efficient because all the modules that ansible comes with they
are extensible. It means that you can customize them according to your needs and for doing
that you do not need to use the same programming language that it was originally written in
you can choose any kind of programming language that you're comfortable with and then customize
those modules for your own use. So this is the power and Liberty that ansible gives you
now, let us take a look at the case study of NASA. What were the problems that NASA
was facing and how ansible solved all those problems? Now NASA is an organization that
has been sending men to the Moon. They are carrying out missions and Mars and they're
launching satellites now and then to monitor the Earth and not just the Earth. They're
even monitoring other galaxies and other planets as well. So you can imagine the kind and the
amount of data that NASA might be dealing with but all the applications were in a traditional
Hardware based Data Center and they wanted to move into a cloud-based environment because
they wanted better agility and they wanted better adaptive planning for that. And also
they wanted to save costs because a lot of money was spent on just the maintenance of
the hardware. And also, they wanted more security, because NASA is a government organization of the United States of America and they hold a lot of confidential details for the government as well. So they just cannot always rely on the hardware to
store all This Confidential files, they needed more security because if at any time the hardware
fails, they cannot afford to lose that data and that is why they wanted to move all their
65 applications from a hardware environment to a cloud-based environment. Now, let us
take a look. What was the problem now for this migration of all the data into a cloud
environment. They contacted a company called InfoZen. Now, InfoZen is a cloud broker and integrator that implements solutions to meet needs with security, so InfoZen was responsible for making this transition, and NASA wanted to make this transition in a very short span of time. So all the applications were migrated as-is into the cloud environment, and because of this all the AWS accounts and all the virtual private clouds that were previously defined got accumulated in a single data space. This built up a huge chunk of data, and NASA had no way of centrally managing it; even simple tasks like giving a particular system administrator access rights to a particular account became very tedious. NASA wanted to automate end-to-end deployment of all their apps, and for that they needed a management system. So this was the situation when NASA moved into the cloud: you can see that all those AWS accounts and virtual private clouds got accumulated and made a huge chunk of data, and everyone was accessing it directly. So there was a problem in managing the credentials for all the users and the different teams. What NASA needed was to divide up all their inventories, all the resources, into groups and numbers of hosts. And also
they wanted to divide up all the users in two different teams and give each team different
credentials and permissions. And also if you look in the more granular level each user
in each team could also have different credentials and permissions. Let's say that you want to
give the team leader of a particular team access to some kind of data what you don't
want the other users in the team to access that data. So also NASA wanted to Define different
credentials for each individual member as well. They wanted to divide up all the data according to projects and jobs as well. So NASA wanted to move from chaos into a more organized manner, and for that they adopted Ansible Tower. Now, Ansible Tower is Ansible at a more enterprise level. Ansible Tower provides you with a dashboard which gives a status summary of all the hosts and jobs, and Ansible Tower is a web-based interface for managing your organization. It provides you with a very easy-to-use user interface
for managing quick deployments and monitoring all the configurations. So let's see what Ansible Tower did. It has a credential management system which could give different access permissions to each individual user and team, and it divided the users into teams and single individual users as well. It also has a job assignment system, so you can assign jobs using Ansible Tower. Suppose you have assigned job one to a single user and job two to another single user, while some other job could be assigned to a particular team. Similarly, the whole inventory was also managed: all the servers dedicated to a particular mission were grouped together, along with the host machines and other systems. So Ansible Tower helped NASA to organize everything. Now, let us take a look at the
dashboard that ansible Tower provides us. So this is the screenshot of the dashboard
at a very initial level. You can see right now there is zero host. Nothing is there but
I'm just showing you what ansible tower provides you so on the top you can check all the users
and teams. You can manage the credentials from here. You can check your different projects
and inventories. You can make job templates and schedule job. As well. So this is where
you can schedule jobs and provide every job with a particular ID so that you can track
it. You can check your job status here whether your job was successful or failed and since
ansible Tower is a configuration management system. It will hold the historical data as
well. So you can check the job statuses of the past month or the month before that. You
can check the host status as well. You can check how many hosts are up and running you
can see the host count here. So this dashboard of ansible tower provides you with so much
ease of monitoring all your systems. So it's very easy to use the Ansible Tower dashboard; anyone in your company can use it because it's very user-friendly. Now, let us see the
results that NASA achieved after it used Ansible Tower. Updating nasa.gov used to take one hour of time, and after using Ansible it got down to just five minutes. Security patching updates were a multi-day process, and now they require only 45 minutes. The provisioning of OS accounts can be done in just 10 minutes. Earlier, the application stack set-up time required one to two hours, and now it's done in only 10 minutes. NASA also achieved near real-time RAM and disk monitoring, and baselining all the standard Amazon Machine Images, which used to be a one-hour manual process, no longer needs manual intervention; it became an invisible background process. So you can see how Ansible has drastically changed the overall management system of NASA. So guys, I hope you have understood how Ansible helped NASA. If you have any questions, you may ask me at any time in the chat window. So let us proceed to the next topic. Now, this was all about how others
have used ansible. So now let us take a look at the ansible architecture so that we can
understand more about ansible and decide how we can use ansible. So this is the overall
Ansible architecture. I've got the Ansible automation engine, and I've got the inventory and a playbook inside the automation engine. I've also got the configuration management database here and the hosts, and this configuration management database is a repository that acts as a data
warehouse for all your it installations. It holds all the data relating to the collection
of your all it assets and these are commonly known as configuration items and it also holds
the data which describe the relationships between such assets. So this is a repository
for all your configuration management data and here I've got the ansible automation engine.
I've got the inventory year and inventory is nothing but the list of all the IP addresses
of all my host machines now as I told you how to use configuration management you use
it with the configuration management tool like ansible but how do you use ansible? Well,
you do that using playbooks. And playbooks describe the entire workflow of your system.
Inside playbooks. I've got modules apis and plugins now modules are the core files now
play books contain a set of place which are a set of tasks and inside every task. There
is a particular module. So when you run a play book, it's the modules that actually
get executed on all your node machines. So modules are the core files and like I told
you before ansible already comes with inbuilt modules, which you can use and you can also
customize them as well as comes with different Cloud modules database modules. And don't
worry. I'll be showing you how to use those modules in ansible and there are different
APIs as well. Well, APIs in Ansible are not meant for direct consumption; they're mostly there to support the command-line tools. For example, there is the Python API, and these APIs can also be used as a transport for cloud services, whether public or private. Then I've got plugins. Now, plugins are a special kind of module that allow you to execute Ansible tasks, for example as a job build step, and plugins are pieces of code that augment Ansible's core functionality. Ansible also comes with a number of handy plugins that you can use, for example action plugins, cache plugins and callback plugins, and you can also create plugins of your own. Let me tell you how exactly a plugin is different from a module; let me give you the example of the action plugin. Now, action plugins are front-end modules, and what they do is that when you start running a playbook, something needs to be done on the control machine as well, so these action plugins trigger those actions and execute those tasks on the control machine before calling the actual modules that get executed in the playbook. And also, you have a special kind of plugin called
the connection plugin, which allows you to connect to the Docker containers on your node machine, and many more. And finally, I have the host machines that are connected via SSH, and these host machines could be Windows or Linux or any kind of machines. Also, let me tell you that it's not always necessary to use SSH for the connection; you can use any kind of network authentication protocol, you can use Kerberos, and you can use the connection plugins as well. So this is a fairly simple Ansible architecture. So now that you've understood
the architecture, let us write a playbook now. Let me tell you how to write a playbook. Playbooks in Ansible are simple files written in YAML, and YAML is a data serialization language. You can think of a data serialization language as a translator
for breaking down all your data structure and serialize them in a particular order which
can be reconstructed again for later use and you can use this reconstructed data structure
in the same environment or even in a different environment. So this is the control machine
where ansible will be installed and this is where you'll be writing your playbooks. Let
me show you the structure of how to write a play book. However, play book starts with
three dashes on the top. So first you have to mention the list of all your host machines
here. It means where do you want this Playbook to run? Then you can mention variables by
gathering facts, then you can mention the different tasks that you want. Now remember
that the tasks get executed in the same order that you write them. For example, if you want to install software A first and then software B later on, make sure that the first task is to install software A and the next task is to install software B. And then I've got handlers at the bottom. Handlers are also tasks, but the difference is that in order to execute handlers you need some sort of trigger in the list of tasks; for example, we use notify. I'll show you an example now. Okay, let me show you an example of a playbook
so that you can relate it to this structure. So this is an example of an Ansible playbook to install Apache. Like I told you, it starts with three dashes at the top; remember that every list item starts with a dash in front. Here I've mentioned just the name of one group, but you can mention the names of several groups where you want to run your playbook. Then I've got the tasks. You give a name for the task, which is install Apache, and then you use a module; here I'm using the apt module to download the package. So this is the syntax of the apt module: you give the name of the package, which is apache2, and update_cache equal to yes, which means it will make sure the apt cache is already updated on your node machine before it installs apache2, and you mention state equal to latest, which means it will download the latest version of apache2. And here is the trigger, because I'm using handlers: notify. The handler here is to restart Apache, and I'm using the service module, where the name of the service I want to restart is apache2 and state equals restarted. So with notify I've mentioned that there is going to be a handler whose job is to restart apache2, and then the task in the handler gets executed and it restarts apache2. So this is a simple playbook.
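As a rough sketch, a playbook along the lines of the one just described might look like this; the group name webservers is only an assumption for illustration, so replace it with a group from your own inventory:

    ---
    - hosts: webservers          # assumed group name
      become: true
      tasks:
        - name: Install Apache
          apt:
            name: apache2
            update_cache: yes
            state: latest
          notify:
            - Restart Apache
      handlers:
        - name: Restart Apache
          service:
            name: apache2
            state: restarted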
We will also be writing similar kinds of playbooks later in the hands-on part, so you'll be learning this again; if it looks a little like gibberish to you now, we'll be doing it hands-on and that will clear all your doubts. So now let us see how to use Ansible and understand
its applications so we can use ansible for application deployment configuration management
security and compliance provisioning and orchestration. So let us take a look at them one by one first.
Let us see how we can use ansible for orchestration. Well orchestration means let's say that we
have defined configurations for each of my systems, but I also need to make sure how
these configurations will interact with each other. So this is the process of orchestration, where I decide how the different configurations on the different systems in my infrastructure will interact with each other in order to maintain a seamless flow of my application,
and your application deployments need to be orchestrated because you've got a front-end
and back-end Services. You've got databases you've got monitoring networks and storage
and each of them has its own role to play with its configuration and deployment, and you cannot just run all of them at once and expect that the right thing happens. So what you need is an orchestration tool so that all these tasks happen in the proper order: the database is up before the backend server, the front-end server is removed from the load balancer before it gets upgraded, and your networks have their proper VLANs configured. So this is what Ansible helps you do. So, let me give you a simple
example so that you can understand it better. Let's say that I want to host a website on
my node machines. And this is precisely what we're going to do later on the Hands-On part.
So first and in order to do that first, I have to install the necessary software, which
is the lamp stack and after that I have to deploy all the HTML and PHP files on the web
server. And after that I'll be gathering some kind of information from my web pages that
will go inside my database server. Now, if you want to perform these all tasks, you have
to make sure that the necessary software is installed first now, I cannot deploy the HTML
PHP files on the web servers. If I don't have a web servers if a party is not installed.
So this is orchestration where you mention that the task that needs to be carried out
before and the task that needs to be carried out later. So this is what ansible playbooks
allow you to do. Now let's see what provisioning is. Provisioning in English means to provide something that is needed, and it's the same in the case of Ansible: Ansible will make sure that all the necessary software that you need for your application to run is properly installed in each of the environments of your infrastructure. Let us take a look at this
example here to understand what provisioning actually is. Now if I want to provision a
Python web application that I'm hosting on Microsoft Azure. Microsoft Azure is very similar to AWS; it is also a cloud platform on which you can build all your applications. So now, if I'm developing a Python web application, for coding I would need the Microsoft Azure document database, I would need Visual Studio, I would need to install Python, and I would need some kind of software development kit and different APIs for that. So in Ansible you can list out the names of all the software development kits and all the necessary software that you will require in order to develop your web application; you can list out all the software you'd be needing in an Ansible playbook. And for testing your code out, you will again need the Microsoft Azure document database, you would again need Visual Studio and some kind of testing software, so again you can list out all that software in an Ansible playbook and it will provision your testing environment as well. And it's the same thing when you're deploying it on the production server. Ansible will provision your entire application at all stages: at the coding stage, at testing and at the production stage as well. So guys, I hope you've understood what provisioning is. Let
us move on to the next topic and see how we can achieve configuration management with
Ansible. Now, Ansible configurations are simple data descriptions of your infrastructure, which are both human-readable and machine-parsable, and Ansible requires nothing more than an SSH key in order to start managing systems; you can start managing them without installing any kind of agent or client software. So you can avoid the problem of managing the management,
which is very common in different automation systems. For example, I've got my host machines
and Apache web servers installed in each of the host machines. I've also got PHP and MySQL
installed if I want to make configuration changes if I want to update a party and update
my MySQL I can do it directly. I can push those new configuration details directly onto
my host machines or my note machines and my server and you can do it very easily using
ansible playbooks. So let us move on to the next topic and let us see how application
deployment has been made easier with ansible now ansible is the simplest way to deploy
your applications. It gives you the power to deploy all your multi-tier applications reliably and consistently, and you can do it all from one common framework. You can configure all the needed services as well as push application artifacts from one system. With Ansible you can write playbooks, which are descriptions of the desired state of your systems, and these are usually kept in source control. Ansible then does all the hard work for you to get your systems to that desired state, no matter what state they are currently in, and playbooks make all your installations, your upgrades and your day-to-day management very repeatable and reliable. So let's say that I am using a version control system like Git while I'm developing my app, and I'm also using Jenkins for continuous integration. Now, Jenkins will extract code from Git every time there is a new commit and then make a software build, and later this build will
get deployed in the test server for testing. Now if changes are kept making in the code
base continuously. You would have to configure your test and the production server continuously
as well according to the changes. So what ansible does is that it continuously keeps
on checking the Version Control System here so that it can configure the test and the
production server accordingly and quickly and hence. It makes your application deployment
like a piece of cake. So guys, I think you have understood the application deployment.
Don't worry in the Hands-On part will also be deploying our own applications on different
servers as well. Now, let us see how we can achieve security with Ansible. In today's complex IT environment, security is paramount: you need security for your systems, you need security for your data, and not just your data but your customers' data as well. Not only must you be able to define what it means for your systems to be secure, you also need to be able to simply
apply that security and also you need to constantly monitor your systems in order to ensure that
they remain compliant with that security. And with Ansible, you can simply define security for your systems using playbooks; with playbooks you can set up firewall rules, you can lock down different users or groups, and you can even apply custom security policies as well.
Now, Ansible also works with the MindPoint Group, which writes Ansible roles to apply the DISA STIG. The DISA STIG is a cybersecurity methodology for standardizing security protocols within your networks, servers and different computers. Ansible also works over the existing SSH and WinRM protocols, which is another reason why it is preferred over other configuration management tools, and it is also compatible with different security verification tools like OpenSCAP and STIGMA. What tools like OpenSCAP and STIGMA do is carry out timely inspections of all your software inventory and check for
any kind of vulnerabilities and it allows you to take steps to prevent those attacks
before they actually happen and you can apply the security over your entire infrastructure
using ansible. So, how about some Hands-On with ansible? So let us write some ansible
playbooks now. So what are we going to do is that we are going to install lamp stack
and then we're going to host a website on the Apache server and will also collect some
data from our webpage and store it in the MySQL server. So guys, let's get started.
So here I'm using the Oracle VirtualBox Manager, and here I've created two virtual machines. The first is the Ansible control machine and the second is the Ansible host machine. The Ansible control machine is the machine where I have installed Ansible, and this is where I'll be writing all my playbooks, and ansible-host-1 here is going to be my node machine; this is where the playbooks are going to get deployed. So on this machine I'll deploy my website; I'll be hosting a website on ansible-host-1. Let's just go to my control machine and start writing the playbooks. So this is my Ansible control machine. Now, let's go to the terminal first. So this is the terminal of my Ansible control machine. Now, I've already installed Ansible here, and I've already made an SSH connection with my node machine. So let me just become the root user first. Now, you should know that you do not always need to become the root user in order to use Ansible; I'm just becoming the root user for my convenience because I like to have all the root privileges while I'm using Ansible, but you can sudo to any user if you like. So let me clear my screen first. Now, before we
start writing playbooks, let us first check the version of Ansible that is installed here, and for that I'll just use the command ansible --version. As you can see, I have got Ansible version 2.2.0.0 here. Now, let me show you my host inventory file. Since I've got only one node machine here, I'm going to show you where exactly the IP address of my node machine is stored. So let me open the
hosts file for you now; I'm just going to open the file and show it to you. I'm using the gedit editor, and the default location of your host inventory file is /etc/ansible/hosts. This is your host inventory file, and I have mentioned the IP address of my host machine here, which is 192.168.56.102, and I have put it under the group name testservers. So always write the name of your group within square brackets. Now, I just have one node machine, so there is only one IP address; if you have many node machines, you can just list their IP addresses under this line, it's as simple as that. Or if you want to group them under a different name, you can use another square bracket and put a different name for another set of your hosts.
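For reference, a minimal sketch of that inventory file would look something like this; the group name and IP address are just the ones from this demo:

    # /etc/ansible/hosts -- default inventory location
    [testservers]
    192.168.56.102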
Okay, now let me clear my screen first. Before anything else, let me just test the SSH connection to see whether it's working properly or not, using Ansible. For that I'll just type the ansible command with the ping module and the name of the group of my host machines, which is testservers in my case. And ping changed to pong, which means that an SSH connection is already established between my control machine and my node machine.
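A quick sketch of that connectivity check, assuming the testservers group name from above:

    ansible testservers -m ping      # each reachable host should answer with "ping": "pong"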
So we are all ready to write playbooks and start deploying them on the nodes. The first thing that I need to do is write a provisioning playbook. Since I'm going to host a website, I would first need to install the necessary software, so I'll be writing a provisioning playbook for that and provision my node machine with a LAMP stack. So let us write a playbook to install the LAMP stack on my node machine. Now, I've already written that playbook, so I'm just going to show it to you. I'm using the gedit editor again, and the name of my provisioning playbook is lamp-stack; the extension for a YAML file is .yml, and this
is my playbook. Now. Let me tell you how I have written this Playbook as I told you that
every play book starts with three dashes on the top. So here are the three dashes and
then I've given a name to this Playbook which is to install Apache PHP and MySQL. Now, I've
already got the L in my LAMP, because I'm using an Ubuntu machine, which is a Linux operating system, so I only need to install Apache, PHP and MySQL now. Then you have to mention the hosts on which you want this playbook to get deployed, so I've mentioned that over here. Then I want to escalate my privileges, for which I'm using become and become_user. It is because sometimes you want to become a user different from the one you are actually logged into the remote machine as, so you can use privilege escalation tools like su or sudo to gain root privileges, and that is why I've used become and become_user. So I'm becoming the user root, and I'm using become: true here at the top. What
it does is that it activates Your Privilege escalation and then you become the root user
on the remote machine. And then gather_facts: true; what it will do is gather useful variables about the remote host. What exactly it gathers is some sort of facts, files or keys which can be used later in a different playbook. And as you know, every playbook is a list of tasks that you need to perform, so this is the list of all my tasks, and since it's a provisioning playbook, it means I'm only installing the necessary software that will be needed in order to host a website
on my node machine. So first I'm installing Apache, so I've given the task the name install apache2, and then I'm using the package module here. This is the syntax of the package module: you first specify the name of the package that you are going to install, which is apache2, and then you put state equal to present. Since we're installing something for the first time and we want this package to be present on the node machine, we put state equal to present; similarly, if you want to remove something, you can put state equal to absent, and it works that way. Then I've installed the Apache PHP module, the PHP client, the PHP GD library and the php-mysql package, and finally I've installed the MySQL server in the same way that I installed apache2. So this is a very simple playbook to provision your node machine, and actually all the playbooks are this simple.
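For reference, a minimal sketch of a provisioning playbook along these lines is shown below; the group name testservers and the exact package names are just the ones used in this demo, and they may differ on your distribution:

    ---
    # lamp-stack.yml -- run with: ansible-playbook lamp-stack.yml
    - name: Install Apache, PHP and MySQL
      hosts: testservers
      become: true
      become_user: root
      gather_facts: true
      tasks:
        - name: Install apache2
          package:
            name: apache2
            state: present
        - name: Install PHP and related packages
          package:
            name: "{{ item }}"
            state: present
          with_items:
            - php
            - libapache2-mod-php
            - php-gd
            - php-mysql
        - name: Install MySQL server
          package:
            name: mysql-server
            state: present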
So I hope that you have understood how to write a playbook. Now, let me tell you something that you should always keep in mind while you are writing playbooks: make sure that you are always extra careful with the indentation, because YAML is a data serialization language and it differentiates between elements with different indentations. For example, I've
got a name here and a name here also, but you can see that the indentations are different
it is because this is the name of my entire Playbook while this is just the name of my
particular task. So these two are different things and they need to have different indentations
the ones with the similar indentations are known as siblings like this one. This is also
doing the same thing. This is also installing some kind of package and this is also installing
some kind of package. So these are similar, so that's why you should be very careful with
indentation. Otherwise, it will create a problem for you. So what are we waiting for? Let us
run this playbook. Let me clear my screen first. So, the command that you should use to run an Ansible playbook is ansible-playbook followed by the name of your file, which is lamp-stack.yml, and here we go. And here it is: it was able to connect to my node machine, apache2 has been installed, and it's done. My playbook ran successfully. And how do I know that? I know that by seeing these common return values. These common return values, like ok, changed, unreachable and failed, give me a status summary of how my playbook was run. So ok equal to 8 means there were eight tasks that ran okay; changed equal to 7 means that something on my node machine has been changed, because obviously I've installed new packages onto my node machine, so it's showing changed equal to 7; unreachable equal to 0 means there were zero hosts that were unreachable; and failed equal to 0 means that zero tasks failed. So my playbook ran successfully on my node machine. So let us check my node machine
and see if Apache and MySQL has been installed. So let us go to my node machine now. So this
is my node machine. So let us check if the Apache server has been installed. I'm going to my web browser; this is the web browser on my node machine. Let me go to localhost and check if the Apache web server is there, and it's there: It works! This is the default web page of the apache2 web server, so now I know for sure that Apache was installed on my node machine. Now, let us see if the MySQL server has been installed.
Let me go to my terminal. This is the terminal of my node machine. Now, if you want to check whether MySQL is installed, just use the following command: mysql with the root user and -p, then enter the password when prompted, and there it is.
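A quick sketch of that check (it simply opens a MySQL shell if the server is installed and running):

    mysql -u root -p      # prompts for the MySQL root password and opens the mysql> shell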
So the MySQL server was also successfully installed on my node machine. Let's go back to my control machine and do what is left to do. So we're back in our control machine. Now, I've already provisioned my node machine, so let's see what we need to do next. Since we are deploying a website on the node machine, let me first show you what my first web page looks like. So this is going to be my first web page, which is index.html, and I've got two more PHP files as well; I'll actually be deploying these files onto my node machine. So let me just open the first web page for you. This is going to be my first web page, and what I'm going to do is ask for a name and an email, because this is a registration page for Edureka where you have to register with your name and email, and I want this name and email to go into my database. So for that
I need to create a database and also need to create a table for this name and email
data to be stored in. So for that we'll write another playbook, and we'll be using database modules in it. Let me clear the screen first. Now again, I've already written that playbook, so let me just show it to you. I'm using the gedit editor here again, and the name of this playbook is mysql-module. Okay, so this is my playbook. Like every playbook it starts with three dashes, and here I have mentioned hosts: all. Now, I have only one host; I
know I could have mentioned just the one IP address directly, or given the name of my group, but I've written all here so that you know that if you have many group names or many nodes and you want this playbook to run on all of your node machines, you can use all and this playbook will get deployed on all your node machines. So this is another way of mentioning your hosts. And I'm using remote_user: root, which is another method to escalate your privileges; it's similar to become and become_user, so the remote user has root privileges while this playbook runs. And then there is the list of tasks. What I'm doing in this playbook is this: since I have to connect to my MySQL server, which is present on my node machine, I need a particular piece of software for that, which is the MySQL-python module, and I'm downloading and installing it using pip. Now, pip is the Python package manager with which you can install and download Python packages. But first, I need to install pip on my node machine. As I told you, the tasks that you write in a playbook get executed in the same order that you write them, so my first task is to install pip, and I'm using the apt module here. I've given the name of the package, which is python-pip, and state equal to present. After that, I'm installing some other related software using apt as well; I'm also installing libmysqlclient-dev. And after that, using pip, I'm installing the MySQL-python module. Now notice that you can consider this an orchestration playbook, because here I'm making sure that pip gets installed first, and after pip is installed I'm using pip to install another Python package. So you see what we did here,
right? And then I'm going to use the database modules for creating a new user to access the database, and then I'm creating a database named edureka. For creating a MySQL user, I've used the mysql_user database module that Ansible comes with, and this is the syntax of the mysql_user module: you give the name of the new user, which is edureka, you mention the password, and then the priv here, which means what privileges you want to give to the new user; here I'm granting all privileges on all databases. And since you're creating it for the first time, you want state to be present. Similarly, I'm using the mysql_db module to create a database named edureka in my MySQL server, and this is the very simple syntax of the mysql_db module: you just give the name of the database in db equal to, and state equal to present.
So this will create a database named edureka. After that, I also need to create a table inside the database for storing the name and email details, right? And unfortunately Ansible does not have a MySQL table-creation module, so what I did is use the command module here; with the command module I'm directly going to use MySQL queries to create a table, and the syntax is something like this, so you can write it down or remember it if you want to use it. Since I'm writing a MySQL query, I started with mysql, the user edureka with -u for the user, then -p and the password, and so on. Now, after -e you just write the query that you need to execute on the MySQL server, and write it in single quotes. So I have written the query to create a table with the name and email columns, and after that you just mention the name of the database in which you want to create this table, which is edureka for me. So this is my orchestration playbook.
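A minimal sketch of this playbook is below; the password, the table name reg and its column sizes are placeholders I've added for illustration, so adjust them to your own setup:

    ---
    # mysql-module.yml -- run with: ansible-playbook mysql-module.yml
    - hosts: all
      remote_user: root
      tasks:
        - name: Install python-pip
          apt:
            name: python-pip
            state: present
        - name: Install libmysqlclient-dev
          apt:
            name: libmysqlclient-dev
            state: present
        - name: Install the MySQL-python module via pip
          pip:
            name: MySQL-python
        - name: Create a MySQL user named edureka
          mysql_user:
            name: edureka
            password: edureka123              # placeholder password
            priv: "*.*:ALL"
            state: present
        - name: Create a database named edureka
          mysql_db:
            name: edureka
            state: present
        - name: Create a table for the name and email fields
          command: mysql -u edureka -pedureka123 -e 'CREATE TABLE IF NOT EXISTS reg (name VARCHAR(100), email VARCHAR(100));' edureka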
Let me clear my screen first. The command is ansible-playbook followed by the name of your playbook, which is mysql-module.yml, and here we go. Again, my common return values tell me that the playbook ran successfully, because there are no failed tasks and no unreachable hosts, and there are changed tasks on my node machine. So now all the packages are downloaded, and my node machine is well provisioned and properly orchestrated. Now, what are we waiting for? Let's deploy our application. Let me clear the screen first. So now let me tell you what
exactly do we need to do in order to deploy my application and in my case, these are just
three PHP files and HTML files that I need to deploy it on my Note machine in order to
display this HTML files and PHP files on my web server in my note machine. What I need
to do is that I need to copy this files from my control machine to the proper location
in my notebook machine and we can do that using playbooks. So let me just show you the
playbook to copy files. The name of my file is deploy-website. So this is my
playbook to deploy my application, and here again I've used the three dashes, and then
the name of my play is copy. The hosts, as you know, are going to be the test servers.
I'm using privilege escalation again with become and become_user, and
gather_facts is again true. And here is the list of tasks. The task is to just copy my files
from my control machine and paste them on my destination machine, which is my node machine,
and for copying I've used the copy module; the copy module is a files module that
Ansible comes with. So this is the syntax of the copy module: you just need to mention
a source, and source is the path where my file is contained on my control machine, which
is /home/edureka/Documents, and the name of the file is index.html. And I want it
to go to /var/www/html as index.html, so I should be copying my files into
this location in order for them to be displayed on the web page. Similarly, I have copied
my other PHP files using the same copy module; I've mentioned the source and destination,
copying them to the same destination from the same source. So I don't think any of you
would have questions here; this is the easiest playbook that we have written today.
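For reference, the copy playbook I'm describing looks roughly like this; it is a sketch, and the PHP file names in the loop are assumptions, only the index.html path comes from what I described:

    ---
    - name: Copy the website files to the node
      hosts: test-servers              # assumed group name
      become: true
      become_user: root
      gather_facts: true
      tasks:
        - name: Copy index.html to the web root
          copy:
            src: /home/edureka/Documents/index.html
            dest: /var/www/html/index.html

        - name: Copy the PHP files to the web root
          copy:
            src: "/home/edureka/Documents/{{ item }}"
            dest: "/var/www/html/{{ item }}"
          loop:
            - insert.php               # assumed file names
            - view.php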
So let us deploy our application now and for that we need to run this play book and before
that we need to clear the screen, because there is a lot of stuff on our screen right now.
So let's run the playbook. And here we go; it was very quick because there was nothing
much to do: you just have to copy files from one location to another, and these are very
small files. Let us go back to our host machine and see if it's working. So we're back again
at our host machine. Let's go to my web browser to check that. So let me refresh it, and there
it is. So here is my first web page; my application was successfully deployed.
So now let us enter our name and email here and check if it is getting entered in my database.
So let's put our name and the email, and add it. So: new record created
successfully. It means that it is getting inserted into my database. Now, let's go back
and view it, and there it is. So congratulations, you have successfully written playbooks to
deploy your application, provisioned your node machines using playbooks, and orchestrated
them using playbooks. Even though at the beginning it seemed like a huge task to do,
Ansible playbooks made it so easy. Hello everyone. This is Saurabh from Edureka, and in
today's session we'll focus on what Puppet is. So without any further ado, let us move forward
and have a look at the agenda for today. First, we'll see why we need configuration management
and what are the various problems industries were facing before configuration management
was introduced. After that, we'll understand what exactly configuration management is and
we'll look at various configuration management tools. After that we'll focus on Puppet and we'll
see the Puppet architecture along with the various Puppet components, and finally, in our
hands-on part, we'll learn how to deploy MySQL and PHP using Puppet. So I'll move forward
and we'll see what are the various problems before configuration management. So this is
the first problem, guys; let us understand this with an example. Suppose you are a system
administrator and your job is to deploy the MEAN stack, say, on four nodes. All right, the MEAN stack
is actually MongoDB, Express, AngularJS and Node.js. So you need to deploy the MEAN stack
on four nodes; that is not a big issue, you can manually deploy that on four nodes. But
what happens when your infrastructure becomes huge? You may need to deploy the same MEAN
stack on hundreds of nodes. Now, how will you approach the task? You can't do it manually,
because if you do it manually it'll take a lot of time, plus there will be wastage of
resources, and along with that there is a chance of human error; I mean, it increases the risk
of human error. All right, so we'll take the same example forward, and we'll see what are
the other problems before configuration management. Now, this is the second problem guys. So it's
fine, like in the previous step you have deployed the MEAN stack on hundreds of nodes
manually. Now what happens, there is an updated version of MongoDB available and your organization
wants to shift to that updated version. Now, how will you do that? You want to go to the
updated version of MongoDB. So what you'll do, you'll actually go and manually update
MongoDB on all the nodes in your infrastructure, right? So again, that will take a lot of time.
But now what happens, that updated version of the software has certain glitches, and your
company wants to roll back to the previous version of the software, which is MongoDB
in this case. So you want to go back to the previous version. Now, how will you do that?
Remember, you have not kept a historical record of MongoDB during the updating; I
mean, you have updated MongoDB manually on all the nodes, and you don't have a record of
the previous version of MongoDB. So what you need to do is go and manually
reinstall MongoDB on all the nodes. So rollback was a very painful task; I mean, it used to
take a lot of time. Now, this is the third problem, guys. Over here what happens is you have
updated MongoDB in the previous step in, say, the development environment and the testing
environment, but when we talk about the production environment, they're still using the previous
version of MongoDB. Now what happens, there might be certain applications
that are not compatible with the previous version of MongoDB. All right, so what happens, developers
write code and it works fine in their own environment, be it their own laptop; after that,
it works fine in testing as well. Now, when it reaches production, since they're using
the older version of MongoDB, which is not compatible with the application that developers
have built, it won't work properly; there might be certain functions which won't work
properly in the production environment. So there is an inconsistency in the computing
environment, due to which the application might work in the development environment but in
production it is not working properly. Now what I'll do, I'll move forward and I'll tell you
how important configuration management is with the help of a use case. So configuration
management at the New York Stock Exchange. All right, this is the best example of configuration
management that I can think of. What happened: a software glitch prevented the New York Stock
Exchange from trading stocks for almost 90 minutes; this led to millions of dollars of
loss. A new software installation caused the problem. The software was installed on 8 of
its 20 trading terminals, and the system was tested out the night before; however, in
the morning it failed to operate properly on the 8 terminals. So there was a need to
switch back to the old software. You might think that this was a failure of the New York
Stock Exchange's configuration management process, but in reality it was a success:
as a result of a proper configuration management process, NYSE recovered from that situation
in 90 minutes, which was pretty fast. Let me tell you guys, had the problem continued
longer, the consequences would have been more severe. So because of proper configuration
management, the New York Stock Exchange averted a loss of millions of dollars; they were able
to roll back to the previous version of the software within 90 minutes. So we'll move
forward and we'll see what exactly configuration management is. So, what is configuration management?
Configuration management is basically a process that helps you to manage changes in your infrastructure
in a more systematic and structured way. If you're updating a software, you keep a record
of what all things you have updated, what all changes you are making in your infrastructure,
all those things. And how do you achieve configuration management? You achieve that with the help
of a very important concept called infrastructure as code. Now, what is infrastructure as
code? Infrastructure as code simply means that you're writing code for your infrastructure. Let
us refer to the diagram that is present in front of your screen. Now, what happens in infrastructure
as code is you write the code for your infrastructure in one central location; you can call it a
server, you can call it a master or whatever you want to call it. All right, now that code
is deployed onto the dev environment, test environment and the prod environment, basically
your entire infrastructure. All right, whatever node you want to configure, you configure
it with the help of that one central location. So let us take an example. All right, suppose
you want to deploy Apache Tomcat, say, on all of your nodes. So what you'll do: in one location
you'll write the code to install Apache Tomcat, and then you'll push that onto the nodes which
you want to configure. What are the advantages you get here? First of all, the first problem,
if you can recall, was that configuring a large infrastructure was a very hectic job, but because of configuration
management it becomes very easy. How does it become easy? You just need to write the code in one
central location and replicate that on hundreds of nodes; it is that easy. You don't need to
go and manually install or update the software on all the nodes. All right, now the second
problem was that you cannot roll back to the previous stable version in time. But what happens here,
since you have everything well documented in the central location, rolling back to the
previous version is not a time-consuming task. Now, the third problem was that there was
a variation or inconsistency across various teams, like the dev team, test team and prod
team; the computing environment was different in dev, testing and prod.
But with the help of infrastructure as code, what happens is all your three environments, that
is dev, test and prod, have the same computing environment. So I hope we all are clear with
what configuration management is and what infrastructure as code is. So we'll move forward
and we'll see what are the different types of configuration management approaches. Now,
there are two types of configuration management approaches: one is push configuration,
another is pull configuration. All right, let me tell you about push configuration first. In push
configuration, what happens is there's one centralized server and it has all the configurations inside
it. If you want to configure a certain number of nodes, all right, say you want to configure
four nodes as shown in the diagram, what happens is you push those configurations to
these nodes. There are certain commands that you need to execute on that particular central
location, and with the help of those commands the configurations which are present
will be pushed onto the nodes. Now, let us see what happens in pull configuration.
In pull configuration there is one centralized server, but it won't push all the configurations
onto the nodes. What happens is the nodes actually poll the central server, say every 5 minutes
or 10 minutes, basically at periodic intervals. All right, so a node will poll the central server
for the configurations, and after that it will pull the configurations that are there in
the central server. So over here you don't need to execute any command; the nodes will
automatically pull all the configurations that are there in the centralized server. And
Puppet and Chef both use pull configuration, but when you talk about push configuration,
Ansible and SaltStack use push configuration. So I'll move forward and we'll look at various
configuration management tools. So these are the four most widely adopted tools for
configuration management. I have highlighted Puppet because in this session we are going
to focus on Puppet, and it uses pull configuration. When we talk about SaltStack, it uses
push configuration, and so does Ansible; Ansible also uses push. Similarly, Chef also uses
pull configuration. All right, so Puppet and Chef use pull configuration, but Ansible
and SaltStack use push configuration. Now, let us move forward and see what exactly Puppet
is. So Puppet is basically a configuration management tool that is used to deploy a particular
application, configure your nodes and manage your servers. It can take your
servers online and offline as required, configure them, and deploy a certain package or an application
onto the nodes. So with the help of Puppet you can do that with ease, and the architecture
that it uses is a master-slave architecture. Let us understand this with an example. So this
is the Puppet Master over here; all the configurations are present here, and these are all the puppet agents.
All right, so these puppet agents poll the central server, or the Puppet Master, at regular intervals,
and whatever configurations are present, they will pull those configurations. So
let us move forward and focus on the Puppet master-slave architecture. Now, this is a master-slave
architecture, guys. Over here, what happens is the puppet agent, or the puppet node, sends
facts to the puppet master, and these facts are basically key-value data pairs that
represent some aspect of the slave's state; that aspect can be its IP address, uptime, operating
system, or whether it's a virtual machine. Facter gathers this basic information
about the puppet slave, such as hardware details, network settings, operating system type and
version, IP addresses, MAC addresses, all those things. Now these facts are then made available
in the Puppet Master's manifests as variables. Now the Puppet Master uses those facts that it has
received from the puppet agent, or the puppet node, to compile a catalog. That catalog defines
how the slave should be configured, and the catalog is a document that describes the
desired state for each resource that the Puppet Master manages on the slave; so it is basically
a compilation of all the resources that the Puppet Master applies to a given slave, as well as
the relationships between those resources. So the catalog is compiled by the puppet master
and then it is sent back to the node, and then finally the slave provides data about how it has
implemented that catalog and sends back a report. So basically the node, or the agent,
sends the report back that the configurations are complete, and you can actually view that
in the Puppet dashboard as well. Now, what happens is the connection between the node,
or the puppet agent, and the puppet master happens with the help of SSL secure encryption.
All right, we'll move forward and we'll see how the connection between the puppet
master and puppet node actually happens. So this is how the puppet master and slave connection happens.
What happens first of all is the puppet slave requests the Puppet Master certificate.
All right, it sends a request for the master certificate, and once the Puppet Master receives
that request it will send the master certificate. Once the puppet slave has received the master
certificate, the Puppet Master will in turn send a request to the slave regarding its own
certificate; that is, it will request the puppet agent to send its own certificate.
The puppet slave then generates its own certificate and sends it to the Puppet Master. Now, what the puppet
master has to do is sign that certificate. All right, so once it has
signed the certificate, the puppet slave can actually request the data, all right, all the configurations,
and then finally the Puppet Master will send those configurations to the puppet slave. This
is how the puppet master and slave communicate. Now, let me show you practically how this
happens. I have installed puppet master and puppet slave on my CentOS machines.
All right, I'm using 2 virtual machines, one for the puppet master and another for the puppet slave.
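In command form, the handshake I've just described boils down to a few commands; this is a sketch for a classic puppet cert based setup (Puppet 3.x style), and the agent hostname here is just a placeholder:

    # On the agent: generate a key and certificate request, fetch the master cert
    puppet agent -t

    # On the master: list pending certificate requests, then sign the agent's
    puppet cert list
    puppet cert sign agent1.example.com    # assumed agent hostname

    # Back on the agent: run again now that the certificate is signed
    puppet agent -t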
So let us move forward and execute this practically. Now, this is my Puppet Master virtual machine.
Over here I've already created a puppet master certificate, but there is no puppet agent
certificate right now. And how will you confirm that? There is a command, puppet cert
list, and it will display all the certificates that are pending in the puppet master, I mean
that are pending for approval from the master. All right, so currently there are
no certificates available. So what I'll do is I'll go to my puppet agent and I'll fetch
the Puppet Master certificate which I generated earlier, and at the same time generate the
puppet agent certificate and send it to the master for signing. So this is my puppet agent
virtual machine. Now over here, as I've told you earlier as well, I'll generate a puppet
agent certificate, and at the same time I'll fetch the Puppet Master certificate; that
agent certificate will be sent to the puppet master, and it will sign that puppet agent certificate.
So let us proceed with that. For that I'll type puppet agent -t, and here we go. All right,
so it is creating a new SSL key for the puppet agent, as you can see in the logs itself. So
it has sent a certificate request, and this is the fingerprint for that. So it is exiting: no
certificate found and waitforcert is disabled. So what I need to do is go back
to my Puppet Master virtual machine and sign this particular certificate that was
generated by the puppet agent. Now over here, if you want to see the list of certificates,
what do you need to do? You need to type puppet cert list, as I have told you earlier as
well. So let us see what all certificates are there now. So as you can see, there
is a certificate that has been sent by the puppet agent. All right, so I need to sign this particular
certificate. So for that, what I will do is type puppet cert sign and the name
of the certificate, that is the puppet agent's, and here we go. So that successfully signed the
certificate that was requested by the puppet agent. Now what I'll do, I'll go back to my puppet
agent virtual machine, and over there I'll update the changes that have been made on the Puppet
Master. Let me first clear my terminal, and now again I'll type puppet agent -t. All
right, so we have successfully established a secure connection between the puppet master
and puppet agent. Now, let me give you a quick recap of what we have discussed so far. First,
we saw what are the various problems before configuration management; we focused on three
major problems that were there. All right, and after that we saw how important configuration
management is with the help of the use case of the New York Stock Exchange. And finally we
saw what exactly configuration management is and what we mean by infrastructure
as code. We also looked at various configuration management tools, namely Chef, Puppet, Ansible
and SaltStack, and after that we understood what exactly Puppet is, what the master-slave
architecture that it has is, and how the puppet master and puppet slave communicate. All right,
so I'll move forward and we'll see what use case I have for you today. So what we are
going to do in today's session is deploy MySQL and PHP using Puppet. So
for that, what I will do is first download the predefined modules for MySQL and
PHP that are there on Puppet Forge. All right, those modules will actually define
the two classes, that is PHP and MySQL. Now, you cannot deploy a class directly onto
the nodes. So what do you need to do? In the Puppet manifests, you need to declare
those classes, whatever classes you have defined. I'll tell
you what manifests and modules are, so you don't need to worry about that; I'm just giving a general
overview of what we are going to do in today's session. So you just need to declare those
two classes, that is PHP and MySQL, and finally just deploy them onto the nodes. It is that
simple, guys. So as you can see, there will be code for PHP and MySQL on the
Puppet Master, and it will be deployed onto the nodes, or the puppet agents. We'll move forward
and we'll see what are the various phases in which we'll be implementing the use case.
All right, so first we'll define a class. Classes are nothing but a collection
of various resources. How will we do that? We'll do that with the help of modules: we'll
actually download a module from Puppet Forge and use that module, which defines
the two classes, as I've told you, PHP and MySQL. Then I'm going to declare those classes in
the manifest and finally deploy them onto the nodes. All right, so let us move forward.
Before actually doing this, it is very important for you to understand certain basics
of Puppet, the code basics of Puppet, like what classes, resources, manifests and modules are,
all those things. So we'll move forward and understand those things one by one. Now, what
happens is, first of all, I'll explain resources, classes, manifests and modules separately,
but before that, let me just give you an overview of what these things are and how
they work together. So what happens is, there are certain resources, all right: a user is a
resource, a file is a resource; basically anything that is there can be considered as
a resource. So multiple resources actually combine together to form a class. Now, this
class you can declare in any of the manifests that you want; you can declare it in multiple
manifests. All right, and then finally you can bundle all these manifests together to
form a module. Now, let me tell you guys, it is not mandatory that you combine
the resources and define a class; you can actually deploy the resources directly. It
is a good practice to combine the resources in the form of classes, because it becomes
easier for you to manage them, and the same goes for manifests as well, and I'll tell you how to
do that too. You can write Puppet code and deploy it onto the nodes, and at the
same time it is not necessary for you to bundle the manifests that you are using in the form
of modules. But if you do that, it becomes more manageable and it becomes more structured;
all right, it becomes easier for you to handle multiple manifests. So let
us move forward and have a look at what exactly resources are and what classes are in Puppet.
Now, what are resources? Anything that is there is a resource: a user is a resource, and as I told
you, a file can be a resource; basically anything that is there can be considered as
a resource. So Puppet code is composed primarily of resource declarations. A resource describes
something about the state of the system, such as that a certain user or a file should
exist, or a package should be installed. Now, here we have the syntax of a resource. All
right, first you write the type of the resource, then you give a name to it in single quotes,
and then the various attributes that you want to define. In the example I've shown you,
it will create a file, /etc/inetd.conf, and this attribute will make sure that it is present.
So let us execute this practically, guys. I'll again go back to my CentOS virtual machine.
Now over here, what I'll do is use the gedit editor, you can use whatever editor you
want, and I'll type the path of my manifests directory, and in this directory I'll define
a file, all right, with the .pp extension, so I'll just name it site.pp, and here we go.
Now, remember the resource example that I've shown you in the slide? I will just write
the same example, and then let us see what happens: file, open the braces, now give the
path, /etc/inetd.conf, then a colon, press Enter, and now I'm going to write the attribute. I'm
going to make sure that the file is present, so I write ensure, which ensures the file
is created; give a comma, now close the braces, save it and close it.
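For reference, the resource I'm typing into site.pp looks roughly like this; it is a sketch, and the exact path on my screen may differ slightly:

    # A single resource declaration: type, title, attributes
    file { '/etc/inetd.conf':
      ensure => present,
    }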
Now, what you need to do is go to the puppet agent once more,
and over there I'm going to execute the puppet agent -t command, which will pull the changes made
on the Puppet Master. Now we're here; I'll use the puppet agent -t command and let us
see if the file inetd.conf is created or not. All right, so it has done it successfully.
Now, what I'll do, just to confirm that, is use the ls command. For that I will type
ls /etc/inetd.conf, and as you can see, it has been created. Right, so we have
understood what exactly resources are in Puppet, right? So now let us see what classes are.
Classes are nothing but a group of resources. All right, so you group multiple resources
together to form one single class, and you can declare that class in multiple manifests,
as we have seen earlier. It has a syntax; let us see: first you need to write class, then
give a name to that class, open the braces, write the code in the body and then close
the braces. It's very simple, and it is pretty much similar to the other coding languages,
if you have come across any other coding languages; it is pretty much
similar to the classes that you define over there as well.
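As a quick illustration, a class wrapping a couple of resources might look like this; the name and the resources inside are my own example, not something from the slide:

    # A class bundles related resources under one name
    class webserver {
      package { 'httpd':
        ensure => installed,
      }
      service { 'httpd':
        ensure => running,
      }
    }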
All right, so we have a question from one of our attendees; he's asking, can you specify what exactly the difference between a resource
and a class is? Classes are actually nothing but a bundle of resources. All right, all those
resources grouped together form a class, and you can say that a resource describes a
single file or a package, whereas a class describes everything needed to configure an
entire service or an application. So we'll move forward and we'll see what manifests are.
So this is a Puppet manifest. Now, what exactly is it? Every slave has got its configuration
details on the puppet master, and they are written in the native Puppet language. These details
are written in the language that Puppet can understand, and that language is termed
manifests. So this is a manifest: all the Puppet programs are basically termed manifests.
So, for example, you can write a manifest on the puppet master that creates a file and installs
the Apache server on the puppet slaves connected to the Puppet Master. All right, so you can
see I've given you an example over here. It uses a class that is called apache, and this
class is defined with the help of predefined modules that are there on Puppet Forge, and
then various attributes, like defining the virtual host, the port and the document root directory. So,
basically, there are two ways to actually declare a class in a Puppet manifest: either
you can just write include and the name of the class, or, if you don't want to
use the default attributes of that class, you can make changes to them by using this
particular syntax, that is, you write class, open the braces, then the class name, a colon, whatever
attributes you want apart from the ones which are there
by default, and then finally close the braces. All right.
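Those two ways of declaring a class look roughly like this; take it as a sketch, the apache parameter shown is only illustrative:

    # Way 1: include-style declaration, taking the class defaults
    include apache

    # Way 2: resource-like declaration, overriding some attributes
    class { 'apache':
      default_vhost => false,    # illustrative parameter
    }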
So now I'll execute a manifest practically that will install Apache on my nodes. All right, I now need to deploy Apache
using Puppet. So what I need to do is write the code to deploy Apache
in the manifests directory. I've already created a file with the .pp extension, if
you can remember, when I was talking about resources, right? So now again I'll use the
same file, that is site.pp, and I'll write the code to deploy Apache. All right, so
what I'll do is use the gedit editor, you can use whatever editor you feel like:
gedit /etc/puppet/manifests/site.pp, and here we go. Now over here I'll just delete
the resource that I've defined here, I like my screen to be nice and clean, and now I will
write the code to deploy Apache. So for that I will type package, then 'httpd' and a colon; now I need
to ensure it is installed, so for that I'll type ensure => installed and give a comma. Now I need
to start this Apache service; for that I'll type service, 'httpd', ensure => running, give
a comma, now close the braces, save it and close it.
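What I've just dictated into site.pp comes out to something like this; again, treat it as a sketch of the two resources rather than an exact copy of my screen:

    # Install the Apache package
    package { 'httpd':
      ensure => installed,
    }

    # Make sure the Apache service is running
    service { 'httpd':
      ensure => running,
    }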
Let me clear my terminal. Now what I'll do is go to my puppet agent; from there it will pull the configurations that
are present on my Puppet Master. Now, what happens is that periodically the puppet agent actually
pulls the configuration from the Puppet Master, and that interval is around 30 minutes, right? It takes
around half an hour; after every half an hour the puppet agent pulls the configuration from
the Puppet Master, and you can configure that interval as well. If you don't want to wait, just
throw in the command puppet agent -t, and it will immediately pull the configurations
that are present on the puppet master. So for that I will go to my puppet agent virtual
machine. Now here, what I'll do is type the command puppet agent -t, and let us see what
happens. So it is done now. Now what I'll do, just to confirm that, is open my browser,
and over here I will type the hostname of my machine, which is localhost, and let us see
if Apache is installed. All right, so Apache has been successfully installed. Now, let us
go back to our slides and see what exactly modules are. So what are Puppet modules? A Puppet
module can be considered as a self-contained bundle of code and data. Let us put it
another way: we can say that a Puppet module is a collection of manifests and data such
as facts, files, templates, etc. All right, and they have a specific directory structure.
Modules are basically used for organizing your Puppet code, because they allow you to
split your code into multiple manifests. So they provide you a proper structure in order
to manage manifests, because in real time you'll be having multiple manifests; to manage
those manifests, it is always a good practice to bundle them together in the form of modules.
So by default Puppet modules are present in the directory /etc/puppet/modules; whatever
modules you download from Puppet Forge will be present in this modules directory. All right,
even if you create your own modules, you have to create them in this particular directory, that
is /etc/puppet/modules. So now let us start the most awaited topic of today's session,
that is deploying PHP and MySQL using Puppet. Now, what I'm going to do is
download two modules, one for PHP and another for MySQL. Those two modules
will actually define the PHP and MySQL classes for me. After that I need to declare those classes
in the manifest, in the site.pp file present in the Puppet manifests directory. So I'll declare the
classes in the manifest, and then finally I'll throw in the command puppet agent -t on my
agent, and it will pull those configurations, and PHP and MySQL will be deployed. So basically,
when you download a module you are defining a class; you cannot directly deploy the class,
you need to declare it in the manifest. I will again go back to my CentOS box. Now
over here, what I'll do is download the MySQL module from Puppet Forge. For
that I'll type puppet module install puppetlabs-mysql --version, give the
version name, so I will use 3.10.0, and here we go. So what is happening here: as you can see, it is
saying preparing to install into /etc/puppet/modules, right? So it will be installed in this directory.
Apart from that, it is actually downloading this from forgeapi.puppetlabs.com. So it is done now; that means we have successfully
installed the MySQL module from Puppet Forge. All right, let me just clear my terminal, and now
I will install the PHP module. For that I'll type puppet module install, then the PHP module name, --version,
that is 4.0.0-beta1, and here we go. So it is done. Now that
means we have successfully installed two modules, one is PHP and the other is MySQL. All right,
let me show you where they are present on my machine. So what I'll do is just hit an
ls command, and I'll show you, in puppet modules. And here we go. So as you can see, there's
a MySQL module and a PHP module that we have just downloaded from Puppet Forge.
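For reference, the two install commands look like this; the author prefix of the PHP module is an assumption on my part, since several authors publish a php module on the Forge:

    puppet module install puppetlabs-mysql --version 3.10.0
    puppet module install mayflower-php --version 4.0.0-beta1    # author prefix assumed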
Now, what I need to do is: I have defined the MySQL and PHP classes, but I need to declare them in the
site.pp file present in the Puppet manifests directory. So for that, what I will do is first use
the gedit editor, you can use whatever editor you want, I'm saying it again and again,
but you can use whatever editor you want, I personally prefer gedit: gedit /etc/puppet/manifests/site.pp,
and here we go. Now, as I told you earlier as well, I like my screen to be
clean and nice, so I'll just remove this, and over here I will just declare the two classes,
that is MySQL and PHP: include mysql::server, and in the next line I'll include
the PHP class, for that I type php. Just save it and close it.
Let me clear my terminal. Now what I'll do is go to my puppet agent, and from there I'll hit the command puppet
agent -t, which will pull the configurations from the Puppet Master. So let us just proceed
with that. Let me first clear my terminal, and now I'll type puppet agent -t, and here
we go. So we have successfully deployed PHP and MySQL using Puppet. All right, let me
just clear my terminal, and I'll confirm it by checking the MySQL version. All right, this
displays the version; now let's just exit from here, and I'll show you the PHP version;
for that I'll type php --version, and here we go. All right, so this means that we have successfully
installed PHP and MySQL using Puppet. So now let me just give you a quick recap of what
we have discussed in today's session. All right, so first we saw why we need configuration management
and what are the various problems that were there before configuration management, and we understood
the importance of configuration management with the use case of the New York Stock Exchange.
All right, after that we saw what exactly configuration management is, and we understood
a very important concept called infrastructure as code. Then we focused on the various types of
configuration management approaches, namely push and pull, then we saw various configuration
management tools, namely Puppet, Chef, Ansible and SaltStack. After that we focused on Puppet,
and we saw what exactly Puppet is, its master-slave architecture, how the puppet master and slave
communicate, all those things; then we understood the Puppet code basics, we understood what
resources, classes, manifests and modules are, and finally, in our hands-on part, I told you
how to deploy PHP and MySQL using Puppet. My name is Saurabh, and today we'll be talking about
Nagios. So let's move forward and have a look at the agenda for today. So this is
what we'll be discussing. We'll begin by understanding why we need continuous monitoring, what
continuous monitoring is, and what are the various tools available for continuous monitoring.
Then we are going to focus on Nagios; we are going to look at its architecture and how it works.
We are also going to look at one case study, and finally, in the demo, I will be showing
you how you can monitor a remote host using NRPE, which is nothing but the Nagios Remote Plugin
Executor. So I hope you all are clear with the agenda. Let's move forward and we'll start
by understanding why we need continuous monitoring. Well, there are multiple reasons guys, but
I've mentioned four very important reasons why we need continuous monitoring. So let's have
a look at each of these one by one. The first one is failure of CI/CD pipelines. Since
DevOps is a buzzword in the industry right now, and most of the organizations are using
DevOps practices, obviously they are implementing CI/CD pipelines, also called
digital pipelines. Now, the idea behind these CI/CD pipelines is to make sure that
releases happen more frequently and are more stable, in an automated fashion,
right? Because there are a lot of competitors you might have in the market and you want
to release your product before them, so agility is very, very important, and that's why we
use CI/CD pipelines. Now, when you implement such a pipeline you realize that there can't
be any manual intervention at any step in the process, or the entire pipeline slows down;
you will basically defeat the entire purpose. Manual monitoring slows down your deployment
pipeline and increases the risk of performance problems propagating into production, right?
So I hope you have understood this. If you notice the three points that I've mentioned,
they're pretty self-explanatory: rapid introduction of performance problems and errors, right,
because you are releasing software more frequently, so there is a rapid introduction
of performance problems; rapid introduction of new endpoints causing monitoring issues,
again, this is pretty self-explanatory; then the root cause analysis as the number of services
expands, because you are releasing software more frequently, right? So definitely the
number of services is going to increase, and there's a lengthy root cause analysis, you
know, because of which you lose a lot of time, right? So let's move forward and we'll look at
the next reason why we need continuous monitoring. For example, we have an application which
is live, right? We have deployed it on the production server. Now we are running APM
solutions, which is basically application performance monitoring: we are monitoring our application,
how the performance is, is there any downtime, all those things, right? And then we
figure out certain issues with our application, some performance issues. Now, to go back, basically
to roll back and to incorporate those changes, to remove those bugs, developers are going
to take some time, because the process is huge, because your application is already live,
right? You cannot afford any downtime. Now, imagine what if, before releasing the software,
on a pre-production server, which is nothing but a replica of my production server, I
could run those APM solutions to figure out how my application is going to perform before
it actually goes live, right? That way, whatever issues are there, developers will be notified
beforehand and they can take corrective action. So I hope you have understood my point. The
next thing is that server health cannot be compromised at any cost. I think it's pretty obvious,
guys: your application is running on a server, and you cannot afford any downtime on that particular
server, or an increase in the response time, right? So you require some sort of monitoring
system to check your server health as well, right? What if your application goes down
because your server isn't responding, right? You don't want any scenario like that in
a world like today where everything is so dynamic and the competition is growing exponentially;
you want to give the best service to your customers, right? And I think server health is very, very
important because that's where your application is running, guys; I don't think I have to
stress too much on this, right? So we basically require continuous monitoring of the server
as well. Now, let me just give you a quick recap of the things that we have discussed.
So we have understood why we need continuous monitoring by looking at three or four examples,
right? The first thing is we saw what are the issues with a CI/CD pipeline, right? We
cannot have any sort of manual intervention for monitoring in such a pipeline, because you're
going to defeat the purpose of such a pipeline. Then we saw that developers have to be notified
about the performance issues of the application before releasing it in the market. Then we
saw that server health cannot be compromised at any cost, right? So these are the three major
reasons why I think continuous monitoring is very important for most of the organizations,
right? Although there are many other reasons as well. Right, now let's move forward and
understand what exactly continuous monitoring is, because we just talked about a lot of scenarios
where manual monitoring or traditional monitoring processes are not going to be enough, right?
So let us understand what exactly continuous monitoring is and how it is different from the
traditional process. So basically, continuous monitoring tools resolve any sort of system errors before
they have any negative impact on your business; it can be low memory, an unreachable server, etc.
Apart from that, they can also monitor the business processes and the application
as well as your server, which we have just discussed, right? So continuous monitoring
is basically an effective system where the entire IT infrastructure, starting from your
application to your business processes to your servers, is monitored in an ongoing way and
in an automated fashion, right? That's basically the crux of continuous monitoring.
So these are the multiple phases given to us by NIST for implementing continuous
monitoring; NIST is basically the National Institute of Standards and Technology. So let me just take
you through each of these stages. The first thing is define: in it you basically develop a monitoring
strategy. Then what you're going to do is establish measures and metrics,
and you're also going to establish monitoring and assessment frequencies, that is, how frequently
you are going to monitor, right? Then you are going to implement whatever you have established,
the plan that you have laid down. Then you're going to analyze data and report findings,
right? So whatever issues are there, you're going to find them; post that, you're going
to respond to and mitigate those errors, and finally you're going to review and update the application
or whatever you were monitoring. Right, now let us move forward; there are also multiple phases
involved in carrying out continuous monitoring itself. So let us have a look at those
one by one. The first thing is continuous discovery. So continuous discovery is basically
discovering and maintaining a near real-time inventory of all network and information
assets, including hardware and software. If I have to give an example, it is basically identifying
and tracking confidential and critical data stored on desktops, laptops and servers. Right,
next comes continuous assessment. It basically means automatically scanning and comparing
information assets against industry and data repositories to determine vulnerabilities; that's
the entire point of continuous assessment, right? One way to do that is prioritizing
findings and providing detailed reports by department, platform, network asset and vulnerability
type. Next comes continuous audit: continuously evaluating your client, server and network
device configurations and comparing them with standard policies is basically what continuous
audit is, right? So basically what you're going to do here is gain insights into problematic
controls, usage patterns and access permissions for sensitive data. Then comes continuous patching:
it means automatically deploying and updating software to eliminate vulnerabilities and
maintain compliance, right? If I have to give you an example, maybe correcting configuration
settings, including network access, and provisioning software according to end users' roles and policies,
all those things. Next comes continuous reporting: aggregating the scanning results from different
departments, scan types and organizations into one central repository is basically what continuous
reporting is, right, for automatically analyzing and correlating unusual activities and compliance
with regulations. So I think it's pretty easy to understand. If I have to repeat it once
more, I would say continuous discovery is basically discovering and maintaining an inventory, a
near real-time inventory, of all the network and information assets, whether it's your
hardware or software; then continuous assessment means automatically scanning and comparing
the information assets from continuous discovery that we have seen against industry and data
repositories to determine vulnerabilities; continuous audit is basically continuously
evaluating your client, server and network device configurations and comparing them
with standards and policies; continuous patching is automatically deploying and updating software
to eliminate vulnerabilities and maintain compliance, right, patching is basically your
remedy kind of thing, where you actually respond to the threats or vulnerabilities
that you see in your application; continuous reporting is basically aggregating scanning
results from different departments, scan types or organizations into one central repository.
So these are nothing but the various phases involved in continuous monitoring. Let us
have a look at the various continuous monitoring tools available in the market. So these are
pretty famous tools; I think a lot of you might have heard about them. One is
Amazon CloudWatch, which is nothing but a service provided to us by AWS. Splunk is also
very famous. And we have ELK and Nagios, right; ELK is basically Elasticsearch, Logstash and
Kibana. In this session we are going to focus on Nagios, because it's a pretty mature tool, a
lot of companies have used this tool, it has a major market share as well, and it's
basically well suited for your entire IT infrastructure, whether it's your application or server, or even
your business processes. Now, let us have a look at what exactly Nagios is and how it
works. So Nagios is basically a tool used for continuous monitoring of systems,
your applications, your services, business processes, etc., in a DevOps culture. Now,
in the event of a failure, Nagios can alert technical staff of the problem, allowing them
to begin remediation processes before outages affect business processes, end users or customers.
So I hope you are getting my point: it can alert the technical staff of the problem, and
they can begin remediation processes before outages affect their business processes or end
users or customers, right? With Nagios, you don't have to explain how an unseen
infrastructure outage affects your organization's bottom line, right? So let us focus on the
diagram that is there in front of your screen. So Nagios basically runs on a server, usually
as a daemon or a service, and it periodically runs plugins residing on the same server; what
they do is basically contact hosts and servers on your network or on the Internet. Now,
one can view the status information using the web interface, and you can also receive
email or SMS notifications if something goes wrong, right? So basically the Nagios daemon behaves
like a scheduler that runs certain scripts at certain moments; it stores the results
of those scripts and will run other scripts if these results change. I hope you are getting
my point here. Right, now if you're wondering what plugins are, these are nothing but
compiled executables or scripts, they can be Perl scripts, shell scripts, etc., that can run
from a command line to check the status of a host or a service. Now Nagios uses the results
from the plugins to determine the current status of the hosts and services on your network.
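As an illustration, a plugin is just something you could also run by hand from the command line; this is a sketch, and the plugin paths and thresholds are assumptions based on a default source install:

    # Check ping and SSH the same way the Nagios daemon would
    /usr/local/nagios/libexec/check_ping -H 127.0.0.1 -w 100.0,20% -c 500.0,60%
    /usr/local/nagios/libexec/check_ssh 127.0.0.1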
Now, let us see the various features of Nagios. Let me just take you through
all these features one by one. It's pretty scalable, secure and manageable as well.
It has a good log and database system. It automatically sends alerts, which we just saw. It detects
network errors and server crashes. It makes plugin writing easy; you can write your own
plugins based on your requirements, your business needs. Then, you can monitor your business
processes and IT infrastructure with a single pass. Guys, issues can be fixed automatically:
if you have configured it in such a way, then definitely you can fix those issues automatically.
And it also has support for implementing redundant monitoring hosts. So I hope you have understood
these features; there are many more, but these are the pretty attractive ones, and why
Nagios is so popular is because of these features. Let us now discuss the architecture
of Nagios in detail. So basically Nagios has a server-agent architecture. Now,
usually in a network a Nagios server is running on a host, which we just saw in the
previous diagram, right? So consider this as my host. Now the Nagios server is running
on a host, and plugins interact with local and remote hosts. So here we have plugins:
these will interact with the local resources or services, and these will also interact with
the remote resources, services or hosts, right? Now, these plugins will send the information
to the scheduler, which will display that in the GUI, right? Now let me repeat it again:
Nagios is built on a server-agent architecture, right, and usually a Nagios server is running
on a host, and these plugins will interact with the local host and services or even the
remote hosts and services, right? And these plugins will send the information to the scheduler,
the Nagios process scheduler, which will then display it on the web interface, and if something
goes wrong the concerned teams will be notified via SMS or through email, right? So I think
we have covered quite a lot of theory. So let me just go ahead and open my CentOS
virtual machine where I've already installed Nagios; let me just open my CentOS
virtual machine first. So this is my CentOS virtual machine, guys, and this is how the
Nagios dashboard looks. I'm running it at port 8000; you can run it wherever you
want, and I've explained in the installation video how you can install it. Now, if you notice,
there are a lot of options on the left-hand side; you can, you know, go ahead and play around
with them, you'll get a better idea, but let me just focus on a few important ones. So here
we have a map option, right? If you click on that, then you can see that you have a
localhost and you have a remote host as well. My Nagios process is monitoring both the
localhost and the remote host; the remote host is currently down, that's why you see it like
this, and when it is running I'll show you how it basically looks. Now, if I go
ahead and click on Hosts, you will see all the hosts that I'm currently monitoring: I'm
monitoring edureka and localhost, where edureka is basically a remote server and localhost
is the one on which my Nagios server is running, right? So obviously it is up and the other
server is down. If I click on Services, you can see that these are the services that I'm
monitoring: for my remote host I'm monitoring CPU load, ping and SSH, and for my localhost
I'm monitoring current load, current users, HTTP, ping, root partition, SSH, swap usage and total
processes. You can add as many services as you want; all you have to do is change the
host's .cfg file, which I'm going to show you later. But for now, let us go back to
our slides and we'll continue from there. So let me just give you a small recap of what all
things we have discussed. So we first saw why we need continuous monitoring. We saw
various reasons why Industries need continuous monitoring and how it is different from the
traditional monitoring systems. Then we saw what is exactly continuous monitoring and
what are the various phases involved in implementing a continuous monitoring strategy. Then we
saw what are the various continuous monitoring tools available in the market, and we focused
on Nagios: we saw what Nagios is, how it works, and what its architecture is, right?
Now we're going to talk about something called NRPE, the Nagios Remote Plugin Executor,
which is basically used for monitoring remote Linux or Unix machines. It allows you
to execute Nagios plugins on those remote machines. Now, the main reason for doing this
is to allow Nagios to monitor local resources, you know, like CPU load, memory
usage, etc., on remote machines. Now, since these local resources are not usually exposed to
external machines, an agent like NRPE must be installed on the remote Linux or Unix machines.
So even I have installed that on my CentOS box; that's why I was able to monitor
the remote Linux host that I'm talking about. Also, if you check out my Nagios installation
video, I have also explained how you can install NRPE. Now, if you notice the diagram here,
what we have is basically the check_nrpe plugin residing on the local monitoring
machine; this is your local monitoring machine, which we just saw, right? So this is where
my Nagios server is. Now, the check_nrpe plugin resides in the local monitoring
machine where your Nagios server is, right? So the one which we saw is basically my local
machine, or you can say where my Nagios server is, right? So this check_nrpe plugin
resides on that particular machine. Now, the NRPE daemon which you can see in the diagram
runs on the remote machine, the remote Linux or Unix machine, which in my case was edureka,
if you remember, and since I didn't start that machine it was down, right? So that NRPE daemon
will run on that particular machine. Now, there is a Secure Sockets Layer, SSL, connection between
the monitoring host and the remote host; you can see it in the diagram as well, the SSL connection,
right? So what it is doing is checking the disk space, load, HTTP, FTP and other remote services
on the other host; these are local resources and services on that host. So basically this
is how NRPE works, guys: you have the check_nrpe plugin residing on the host
machine, you have the NRPE daemon running on the remote machine, there's an SSL connection
between them, and this check_nrpe plugin basically helps us to monitor
that remote machine. That's how it works.
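To make that concrete, on the Nagios server side a remote check via NRPE is usually wired up with a command and a service definition along these lines; this is a sketch, and the host name, service template and check name are assumptions rather than my actual config:

    # commands.cfg (on the Nagios server)
    define command {
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }

    # a service definition (on the Nagios server)
    define service {
        use                     generic-service    ; assumed template
        host_name               edureka            ; assumed remote host name
        service_description     CPU Load
        check_command           check_nrpe!check_load
    }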
Let's look at one very interesting case study. This is from Bitnetix, and I found it on the Nagios website itself. So if you want
to check it out, go ahead and check out their website as well; they have pretty cool case
studies there, and there are a lot of other case studies on
their website. So Bitnetix basically provides outsourced IT management and consulting to
nonprofits and small-to-medium businesses. Now, Bitnetix got a project where they were
supposed to monitor an online store for an e-commerce retailer with a billion-dollar
annual revenue, which is huge, guys. Now, it was not only supposed to, you know, monitor
the store, but it also needed to ensure that the cart and the checkout functionality were
working fine, and it was also supposed to check for website defacement and notify the necessary
staff if anything went wrong. Right, seems like an easy task, but let us see what were the problems
that Bitnetix faced. Now, Bitnetix hit a roadblock upon realizing that the client's data center
was located in New Jersey, more than 500 miles away from their staff in New York, right?
There was a distance of 500 miles between where their staff was located and the data
center. Now, let us see what were the problems they faced because of this. Now, the two areas
needed unique but at the same time comprehensive monitoring for the dev, test and prod environments
of the same platform, right? And the next challenge was that monitoring would be hampered by the firewall
restrictions between different applications, sites, functions, etc. I think a lot of
you know about this; firewalls can sometimes be a nightmare, right? Apart from
that, most of the notifications that were sent to the client were ignored, because mostly
those were false positives, right? So the client didn't bother to even check those notifications.
Now, what was the solution? The first solution they thought of was adding SSH firewall rules for
Network Operations Center personnel and equipment; the second was analyzing web pages to see if there
were any problematic occurrences; the third and very important point was converting notifications
to Nagios alerts, and the problem that we saw of false positives was completely removed
with this escalation logic: they were converting notifications to Nagios alerts and
escalations with specific time periods for different groups, right? I hope you are getting
my point here. Then, configuring event handlers to restart services before notification, which
was basically a fix for 90% of the issues, and using Nagios Core on multiple servers
at the NOC facility, where each Nagios worker was deployed at the application level with
direct access to the host. So whatever Nagios worker or agent or remote machine they had
was deployed at the application level and had direct access to the host, or the master,
whatever you want to call it, and they implemented the same architecture for production,
quality assurance, staging and development environments. Now, let's see what was the
Now, let's see what the results were. Because of this, there was a dramatic reduction in notifications, thanks to the
new event-handler configuration. Then there was an increase in uptime from 85% to nearly
98%, which is significant, guys. Then they saw a dramatic reduction in
false positives because of the escalation logic that I was just talking about. And the fourth
point is that it eliminated the need to log into multiple boxes and change configuration files,
and that happens because the Nagios configuration is maintained in a central repository, or a central
master, and can be pushed automatically to all the
slaves, to all the servers or slaves or agents, whatever you want to call them.
So this was the result of using Nagios. Now it's time to check out a demo, where what I'll be
doing is monitoring a couple of services, actually more than a couple of services, of a
remote Linux machine through my Nagios host, which I just showed you, right? So from
there, I'll be monitoring a remote Linux host called edureka, and I'll be monitoring around
three or four services; you can have whatever you want. Let me just show you the whole process:
once you have installed Nagios, what you need to do in order to make sure that you
have a remote host or remote machine being monitored by your Nagios host. Now, in order
to execute this demo, which I'm going to show you, you must have the LAMP stack on your system,
right, Linux, Apache, MySQL and PHP, and I'm going to use CentOS 7 here.
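If you still need the LAMP pieces on CentOS 7, a minimal install might look like the following; these are the standard CentOS 7 package names, but treat this as a sketch of one possible setup rather than the exact commands used in this demo.

# On CentOS 7 (run as root or with sudo): Apache, MariaDB (MySQL-compatible) and PHP.
yum install -y httpd mariadb-server mariadb php php-mysql
systemctl enable httpd mariadb
systemctl start httpd mariadb
# Building Nagios from source also needs a compiler and a few libraries:
yum install -y gcc glibc glibc-common wget gd gd-devel perl unzip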
Let me just quickly open my CentOS virtual machine and we'll proceed from there. So guys, this is my CentOS
VirtualBox machine where I've already installed Nagios, as I've told you earlier as well.
This is where my Nagios host is running, or you can say the Nagios server is running,
and you can see the dashboard in front of your screen as well, right? So let me just
quickly open the terminal first and clear the screen. Let me show you where I've
installed Nagios; this is the path, right? If you notice in front of your screen, it's
in /usr/local/nagios. What I can do is just clear the screen and show you what all
directories are inside this. We can go inside this etc directory, and inside
this I'm going to go inside the objects directory, right? Why am I doing this? Basically,
because I want to add a command, for example the check_nrpe command.
That's how I'm going to monitor my remote Linux host, if you remember from the diagram,
right? So that's what I'm going to do: I'm going to add that particular command. I've
already done that, so let me just show you how it looks. Just type gedit, or you can
choose whatever editor you like, and go inside the commands.cfg file. Let me
just open it. So these are the various commands that I was talking about. You can just
have a look at all these commands. This one basically notifies about a host by email if anything
goes down or anything goes wrong with the host. This one is for services; basically it'll notify
you by email if there's any problem with a service. This one will check if my host machine
is alive, I mean, is it up and running? And these commands are basically to check the disk
space on the local disk, then the load, and you can see all of these things here, swap,
FTP and so on. So these commands are there and you can have a look at all of the ones
I've mentioned here. The last command you see is one I've added manually, because all the other
commands come by default once you install Nagios, but the check_nrpe command, which I'm
highlighting right now with my cursor, is something I have added in order to make sure that
I can monitor the remote Linux host. Now, let me just go ahead and save this, right?
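For reference, the check_nrpe entry added at the bottom of commands.cfg usually looks something like this; it's a sketch of the commonly documented definition, so double-check the plugin path on your own install.

# Appended to /usr/local/nagios/etc/objects/commands.cfg
# 'check_nrpe' command definition
define command {
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
# $USER1$ normally expands to the plugin directory (e.g. /usr/local/nagios/libexec),
# $HOSTADDRESS$ is the monitored host's address, and $ARG1$ names the remote check to run.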
Let me clear my screen again and I'll go back to my Nagios directory. Let me share my screen
again. Now, basically what this does is allow you to use the check_nrpe
command in your Nagios service definitions, right? Now, what we need to do
is update the NRPE configuration file. So use your favorite editor and open nrpe.cfg,
which you will find in this particular directory itself. All I have to do is first
hit ls, and then I can just check out the etc directory. Now if you notice, there is an
nrpe.cfg file, right? I've already edited it, so I'll just go ahead and show it to you with
the help of gedit, or you can use whatever editor you prefer. Over here you
need to find the allowed_hosts directive and add the private IP address of your Nagios server
to the comma-delimited list. Scroll down and you will find something called allowed_hosts,
right? So just add a comma and then the IP address. Currently, let me just open it
once more; I'm going to use sudo because I don't have the privileges otherwise. Now in this
allowed_hosts directive, all I have to do is add a comma
and the IP address of the host that I want to monitor, so it is 192.168.1.21.
Just go ahead, save it, come back, clear the terminal. Now save and
exit. This configures NRPE to accept requests from your Nagios server via its
private IP address, right? And then just go ahead and restart NRPE to put the changes into effect.
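Here is roughly what that edit looks like inside nrpe.cfg; keep in mind that allowed_hosts is the list of addresses the NRPE daemon will accept connections from, so on the monitored machine it should include your Nagios server's address. The file path and addresses below are illustrative.

# In nrpe.cfg (commonly /usr/local/nagios/etc/nrpe.cfg or /etc/nagios/nrpe.cfg)
# allowed_hosts is a comma-delimited list of clients allowed to talk to the daemon.
allowed_hosts=127.0.0.1,192.168.1.21

# Then restart the daemon so the change takes effect:
# sudo systemctl restart nrpe.service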
Now, on your Nagios server, you need to create a configuration file for each of
the remote hosts that you monitor, as I was mentioning before as well. You're
going to find it in the etc/servers directory, and let me just go ahead and open that for you.
Let me go to the servers directory. Now if you notice here, there is an edureka.cfg
file; this is basically the host we'll be monitoring right now. If I go ahead and
show you what I have written here: first, I have defined the
host. It's basically a Linux server, and then comes the host name, which is edureka, an alias,
whatever you want to give, the IP address, the maximum check attempts, the check period, I want
to check it 24x7, the notification interval, which I have mentioned here, and the notification
period. So this is basically all about my host. Now, within that host, which services am I going
to monitor? I'm going to monitor generic services like ping, then I want to monitor SSH, then
I'm going to monitor CPU load, so these are the three services that I'll be monitoring,
and you can find all of that in the etc/servers directory over there. You have to create
a proper configuration file like this for every host that you want to monitor.
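To give you a feel for it, a host file like that could look roughly like this; the exact values (alias, IP, intervals) are placeholders matching what I described, not a copy of the file on my machine.

# /usr/local/nagios/etc/servers/edureka.cfg (sketch)
define host {
    use                     linux-server
    host_name               edureka
    alias                   edureka remote host
    address                 192.168.1.21
    max_check_attempts      5
    check_period            24x7
    notification_interval   30
    notification_period     24x7
}

# A simple ping check plus services checked remotely through NRPE:
define service {
    use                     generic-service
    host_name               edureka
    service_description     PING
    check_command           check_ping!100.0,20%!500.0,60%
}

define service {
    use                     generic-service
    host_name               edureka
    service_description     SSH
    check_command           check_ssh
}

define service {
    use                     generic-service
    host_name               edureka
    service_description     CPU load
    check_command           check_nrpe!check_load
}

After saving a file like this, it is common to verify the setup with something like /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg and then restart the Nagios service so the new host shows up on the dashboard.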
Let me clear my terminal again, just to show you my remote machine as well; let me just open that.
So this is my remote machine, guys. Over here I've already installed NRPE, so I'm
just going to show you how you can restart NRPE: systemctl restart nrpe.service, and
here we go, it's asking for the password. I've given that, and now the NRPE service has started,
or actually has restarted; I had already started it before as well.
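On the remote CentOS machine, installing and restarting the NRPE agent typically boils down to a few commands like these; the EPEL package names are the usual ones, but treat this as a sketch of one possible setup rather than the exact steps I ran.

# On the remote (monitored) CentOS 7 machine:
sudo yum install -y epel-release          # NRPE lives in the EPEL repository
sudo yum install -y nrpe nagios-plugins-all
sudo systemctl enable nrpe
sudo systemctl restart nrpe.service       # pick up changes to nrpe.cfg
systemctl status nrpe                     # quick sanity check that the daemon is running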
Let me just show you how my Nagios dashboard looks on my server. This is my dashboard again.
If I go to my Hosts tab, you can see that we are monitoring two hosts, edureka and localhost.
edureka is the one which I just showed you, and it is up and running, right? I can go ahead and
check out the Map (Legacy) viewer as well, which basically shows me my edureka
remote host and also the various services being monitored. If you remember, I
was monitoring CPU load, ping and SSH, which you can see over here as well, right?
So this is all for today's session. I hope you guys have enjoyed listening to this video.
If you have any questions, you can go ahead and mention them in the comment section. And
if you're looking to gain hands-on experience in DevOps, you can go ahead and check out
our website, www.edureka.co/devops, where you can view upcoming batches and enrol for the
course that will set you on the path to becoming a successful DevOps engineer. And if you're
still curious to know more about DevOps roles and responsibilities, you can check
out the videos mentioned in the description. Thank you and happy learning.