DevOps Interview Questions and Answers | DevOps Jobs | DevOps Engineer | DevOps Training | Edureka

Captions
Hi, good afternoon, good evening, good morning everyone. Welcome to this DevOps interview preparation webinar conducted by Edureka. I am Pradeep. I have been in the IT industry for almost 15 years, with DevOps experience for several years. I have worked on various projects, I have seen multiple organizations transitioning from classical practices to DevOps practices, and I have been part of various DevOps transition and transformation programs. It is a privilege for me to give this webinar.

We'll start with the first topic: the demand for DevOps. Why do we actually need DevOps, what is trending in DevOps, and what is the driving factor for DevOps? There are two primary ideas here. The first is team collaboration and agility: code is deployed 30 times more frequently. These boxes all talk about the agility, the speed, of deployment: if you have DevOps practices in place, agility is possible and you can deploy code 30 times more frequently than usual. The second is about reliability, or quality: 50% fewer failures of new releases. By adopting DevOps practices and principles, you can cut the failure rate of new releases by 50 percent. So both agility and quality can be achieved using DevOps practices.

Next, the DevOps market. DevOps is really trending nowadays. According to recent research, the DevOps market is growing at around 20 percent through the end of the decade, which is a huge number. It started in 2012 with very modest numbers, the numbers have been encouraging ever since 2015 and 2016, and tremendous growth is expected in the coming years. So there is a good opportunity for DevOps professionals, and DevOps practitioners are paid very well. In the US the starting salary is between 100,000 and 120,000 USD per annum, which is quite good. Practitioners are also paid very well in the UK and Australia, and in some of the places I have worked they pay almost as well as the UK; places like Singapore and Hong Kong are trending a lot too. So the market is doing quite well for practitioners: DevOps engineer, senior DevOps engineer, DevOps lead, DevOps architect, AWS cloud engineer, cloud engineer, cloud support engineer — multiple organizations are opening various new roles for DevOps practitioners with very good pay.

When it comes to adopting DevOps across organizations: according to a recent survey, for a share of organizations we don't know what they are doing, but of the rest, most are adopting DevOps company-wide. Some organizations are adopting it for a particular business unit — for example, in the organization I work with, my business unit is currently moving towards DevOps — and some projects and teams are adopting DevOps independently, with their own DevOps tools, practices, and so on. About one percent are not adopting DevOps, but sooner or later they will probably go for it too, because DevOps brings speed, agility, and better quality, so there are quite a lot of gains an organization can expect from adopting DevOps.
Today we're going to discuss some of the basic interview questions. These questions cover each topic at a fairly high level; for more depth on each topic, you can enroll in the DevOps course provided by Edureka. We're going to see some general questions and then questions organized by topic. The general questions are applicable for anybody — project managers, team leads, QAs — questions like "What is DevOps?", "What is the need for DevOps?", "What is the driving factor for DevOps?", and so on. Then the tool and concept questions — source code management, continuous integration, configuration management, continuous monitoring, containerization and virtualization — are divided into several subgroups. Most of these questions are relevant for anybody: not just a developer or DevOps engineer, but also a project manager, a project lead, someone on the management side, or someone just starting their career. Irrespective of level, these questions and answers apply to anyone venturing into the DevOps area.

On the left side there is a small picture of the SDLC. It starts with the planning stage; once the project is planned, you code and build the code, test it, release it, deploy it to an environment, start operating the application, and then keep monitoring it. That is the general SDLC lifecycle, and various tools are used in each of these phases.

If we leave the planning stage aside, we start with coding. Once you start coding, you need to store that code in a repository, in a version control system. For that purpose you can use Git or SVN (Subversion). Git being the popular, modern version control system, we're going to cover some Git topics today. Git supports source control management — source code management, version control management, all these terms are used pretty much interchangeably — and it is used to store the code you are writing.

Alongside that, we also use a tool called JIRA. JIRA is an issue tracking tool — it can also be called a defect tracking tool, or a requirement gathering tool — and it is used to track development activities. For example, say you want to develop a small GUI application, and that development can be divided into multiple tasks. One of the tasks would be to design a GUI interface with a login ID and password screen. This can be recorded as a task in JIRA, assigned to a developer, with an estimated time entered in JIRA. The developer starts working on the activity and starts coding, and once the coding is complete, changes the status of that particular task in JIRA to closed, or development complete, or similar. So the tracking of tasks and issues can be done using JIRA.
JIRA is from Atlassian; it is a licensed tool, but it can be used for free by small teams of, say, five to ten developers. For organization-wide use you need to procure the license from Atlassian.

Next, Eclipse is an IDE, an integrated development environment, used by developers to write and compile their code. If you're writing Java or .NET code, you can use the Eclipse IDE with the appropriate plugins, write your code, and compile it; it shows you basic syntax errors and helps you debug and compile. It is a widely used, free, open-source tool; IntelliJ IDEA is another option for developers as well.

Once the code is written, the next stage is to build it, and for that we use several tools. Ant, which has been in the market for many years, is a classic build tool. Alternatively there is Maven: Maven is used for Java-based projects, is very popular and very powerful, and lets you compile and build your Java projects and more. Gradle is a more modern build tool with a lot of out-of-the-box features: with Gradle you can run a build with very little code at all — for example, where a Maven build might take a hundred lines of configuration, the same can be achieved with around ten lines in Gradle, because it has so many out-of-the-box features.

Once the build is done you need to test the code, and for that there are various testing tools. Selenium is one testing tool that works very well for web-based applications, GUI testing, and functional testing, and JUnit is for unit testing of Java-based applications.

After testing, you can combine the code, build, and test phases in a continuous integration tool. The little figure you see here is the Jenkins tool; Jenkins is a continuous integration tool. Alternatively, Bamboo is an Atlassian continuous integration tool with a very good graphical interface, but it is a licensed product, so you need to buy a license for it, whereas Jenkins is free, open source, and widely used across organizations. Continuous integration tools integrate with the source code and then with the build and test phases, so you can automate building and testing, and you can also use them for deployment.

The next phase is deployment. Before you deploy any code, you need to provision the environment, and you can use Puppet, Chef, Ansible, or SaltStack for provisioning the servers and various instances. Then you use configuration management principles and practices to make sure those provisioned servers are up to the mark — the required prerequisite packages are installed, the required prerequisite configuration is complete, and so forth — and you can also deploy using these configuration management tools. We're going to cover each of these topics in detail in the upcoming slides. Once the deployment is done, your software is in operation, and the next thing is to monitor it.
Monitoring can be done using various tools. Nagios is a widely used open-source tool with very powerful features built into it; alternatively there are Splunk and New Relic, some of which require licenses. So that is the whole DevOps tool stack, and we'll cover the topics in detail next.

The first topic is general DevOps questions. So, what exactly is DevOps? If you ask anyone for the definition of DevOps, each person will define it in their own way, because there is no single official definition of DevOps. If you ask me, I would say that DevOps is a practice that brings developers, operations, testing, and build-and-release activities together, in a collaborative fashion, using proper processes and tools, to achieve fast and reliable software releases. That is how I would define it. Collaboration is at the core of DevOps, and DevOps is about using the right tools and techniques, but the ultimate purpose is to deliver faster releases with high quality; that is where the return on investment is. If you break the principles and practices into segments, here is a small example: source code management is one area, continuous integration is another, continuous testing is one, configuration management — which also covers infrastructure as code — is one, and continuous monitoring is another. All these areas combined can be called DevOps practices.

The next question: how is DevOps different from Agile? Agile is a methodology; DevOps is a practice. Agile, as a methodology — take the Scrum framework, which is a widely used Agile framework — defines particular processes and rules, and all the development has to adhere to, or fit into, that framework. Scrum defines development activities as sprints, repeatable iterations, and each sprint has a number of tasks defined. So there is a framework defined in the Agile methodology, and the development practices fit into that framework. But Agile, as a framework or methodology, only deals with development: it does not say how you test, how you release the code, or how operations and ongoing production support will work. DevOps, on the other hand, covers the whole end-to-end SDLC lifecycle: development, testing, operations, and production support. It covers quite a wide range of topics, and it is not a framework or just a methodology; it is more of a practice, and each organization defines its DevOps practices uniquely to meet its own needs. So DevOps is quite different from the Agile methodology: it is not a methodology, it is a practice.

The next question: what is the need for DevOps — why do we actually need it? Here are some examples. The first thing we have seen is increased deployment frequency.
We saw on the first slide that code can be deployed many times more frequently than the traditional norm. Traditional software companies were doing perhaps two, three, or four releases per year. Things are changing now: to sustain in the market you need to respond to change as quickly as possible, and a new change means a new deployment and a new release. The faster you get your product and its ongoing changes to market, the better the return on investment and the better the benefit for your product. Deployment frequency at that level is only practical using DevOps principles.

Next is a lower failure rate of new releases. Every release you do carries a potential risk of service or release failure, and that risk can be reduced drastically by using DevOps practices. Next is a shortened lead time between fixes: when you identify an issue, how much time does it take to fix it, and how fast can you identify such issues in the first place? This is made possible by DevOps practices. Then there is faster mean time to recovery in the event of a new release failing. That is possible with DevOps because you are using continuous monitoring: you monitor the application after the release is complete to proactively check whether the application is giving the required results. Instead of waiting for a failure at the end — which could cause potential business or reputation damage for your organization — you proactively identify these small issues and gaps and fix them as early as possible. That is one of the important aspects of DevOps, and continuous monitoring deals with exactly this: faster and more reliable software, shorter times to fix issues, and much faster recovery in the event of an incident or failure.

The next question: name some of the DevOps tools, and how do these tools work together? This is an important question, and it can be put another way: you are a DevOps professional, and the organization has given you the flexibility of choosing the budget and the tools — as the DevOps lead, architect, or engineer, which tools will you choose? To answer: the first thing we start with is a source control tool, and of the source control tools available, Git is one of the most popular because of the way it works and the features it provides, so you choose Git; it is a distributed version control tool, and we will see detailed questions about Git shortly. Git is used for version control, and then Maven is used for building and compiling your code; a lot of code is written in Java, and Maven provides out-of-the-box features for compiling Java code and storing the artifacts in a proper way. For example, Nexus is an artifact repository management solution that is very well integrated with Maven, so once you compile, you can store the artifacts in a Nexus repository. All these things then have to be integrated with a continuous integration tool such as Jenkins, a widely used open-source tool.
As far as I can see, Jenkins fits very well in most organizations; it has a lot of features and plugins available. Jenkins can be integrated with test tools — Selenium is one example, for web interface testing and functional testing — and various other testing tools can be integrated with Jenkins as well. So now you have identified tools for your source code management, your builds, your testing, and your continuous integration; the next thing is to deploy the application to a particular environment. To have an environment ready, up and running, you need a provisioning system, and that provisioning system should also be capable of deploying the application. For that purpose you can use Puppet, Chef, or Ansible: these tools can be used to provision the servers, deploy the application, and keep everything consistent. Additionally you can use Docker for microservice applications: Docker containers provide a consistent computing environment through the SDLC. With Docker you may not need a full virtual machine, because you are running small applications — microservices — in containers alongside your application. So Docker can also be a good fit in a project; it is not the only approach, but it is one of the approaches. That covers question number four.

The next section is source code management questions, and here we talk about Git, because Git is the most widely used modern source control management tool. What function does Git perform in DevOps? Look at the SDLC picture again: you plan the project and then start coding, and when you start coding, you need to store that code in a repository, and you should also be able to share your code with the other colleagues on your team. For that purpose we use Git: to store the code and to share it with the other people in your organization who are working on the same project. We can then integrate that source code with continuous integration tools, so that every time you change your source code, it triggers a build automatically, then triggers tests automatically, then applies configuration changes, deploys to an environment, and can be monitored — all of this can be further automated. So, every time a commit is made to the Git repository, the continuous integration server pulls it, compiles it, and also deploys it onto the test server for testing purposes. That is the central function Git performs.

Next: explain Git's distributed architecture — how does Git work? Git has a distributed architecture, unlike SVN. If somebody asks the difference between SVN and Git, it is this: Git is a distributed version control system, while SVN is a client-server based version control system. In a distributed version control system, the repository, which holds all the source code, can have multiple complete copies in multiple places. The remote repository — a centralized repository, or call it the original repository — is the initial repository, and each collaborator or developer takes a copy of the entire repository onto their local desktop or laptop, works on this local repository, commits the changes locally, and then pushes the changes to the remote repository so that other developers and contributors can pull the changes and work on them. A minimal sketch of this workflow follows below.
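A small sketch of that clone-commit-push cycle, assuming a hypothetical repository URL and file name:

```bash
# Hypothetical example: every developer works on a FULL local copy of the repository.
git clone https://example.com/scm/project.git   # copies the entire repository, history included
cd project

# Work and commit locally -- no network or central server is needed for this step.
echo "change" >> App.java
git add App.java
git commit -m "Describe the change"

# Publish local commits to the remote repository so colleagues can pull them.
git push origin master

# Pick up commits that other collaborators have pushed.
git pull origin master
```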
So the repository has multiple copies — that is a distributed version control system. Subversion, by contrast, is a centralized version control system: there is a central repository on a centralized server, and each collaborator or developer copies only the contents they need to their desktop, into what is called a workspace. It is not the entire repository, just a workspace — a view of a certain branch or certain files that you copy locally before committing changes. That is the key difference between SVN and Git.

Now, in Git, how do you revert a commit that has already been pushed and made public? Let's say you are using Git: you run git commit to commit the change, and git push to push it — so how do you revert it? There is a git revert command available to revert the change. For example, git commit -m "message" commits the change, and then git push pushes the change from your local repository to the remote repository. Once it has been pushed to the remote repository, if you want to revert the change, you use git revert followed by the commit name: each commit has a hash, so you just provide that hash, and git revert will undo that commit.

Like I said, the questions and answers I will leave to the end, so feel free to type your questions and I will answer all of them at the end; that way we won't break the flow.

The next question: how do you find the list of files that changed in a particular commit? Git offers several commands; git diff-tree is one we can use to see the files that changed. You run git diff-tree -r (recursive) followed by the hash — the commit hash, called a hash because it is not just a number but a combination of characters and numbers — and it shows the list of files that have been changed. Now, git diff-tree with -r alone gives you the full details: the commit, the author, the timestamp, the commit message, and the differences. If you don't want to see everything and just want the files, the next command — git diff-tree --no-commit-id --name-only -r followed by the hash — gives you only the list of files, nothing else, not even the commit ID of that particular commit. Both commands are sketched below.
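A minimal sketch of these two answers (the hash shown is a placeholder):

```bash
# Revert a commit that has already been pushed and made public:
# git revert creates a NEW commit that undoes the given commit,
# so the public history is never rewritten.
git revert a1b2c3d
git push origin master

# List what changed in a particular commit.
# Full details -- commit, author, timestamp, message, plus the diffs:
git diff-tree -r a1b2c3d

# Only the file names, nothing else:
git diff-tree --no-commit-id --name-only -r a1b2c3d
```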
The next question: how do you squash the last N commits into a single commit? This is very interesting, and a fairly unique feature of Git: in a traditional version control system, once something is committed you cannot change the history, but Git allows you to rework your version history, your version graph, if you need to. What is squashing? Squashing means combining multiple commits into a single commit. For example, you have made four commits — one, two, three, four — and you realize that all of them were supposed to be a single commit rather than multiple commits, perhaps because you want to share a single commit ID and its files with somebody else. For that purpose we squash, and one way is the git reset command, which rewrites the version history. To combine the last four commits, you use git reset --soft HEAD~n, where n is the number of commits counted back from HEAD; HEAD is the tip, so tilde-n means the last n commits — if you give four, it is the last four commits. Once you run this command and then run git commit, it combines all those commits into one single commit; you provide a new commit message, and that new commit contains the file changes of all four commits. You are essentially rewriting the version graph. Similarly, if you don't want to lose the old commit messages, you can reuse them: using git commit -m with a command like git log --format=%B --reverse, which takes the commit messages of all the previous commits and adds them to the new commit, so the messages committed in the past are not lost. This is another interesting, advanced topic in Git.
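A minimal sketch of the squash, assuming the last four commits are being combined (the message-reuse variant follows a common recipe; the exact range may need adjusting):

```bash
# Combine the last 4 commits into one.
# --soft moves HEAD back 4 commits but keeps all their changes staged.
git reset --soft HEAD~4

# One new commit now carries the changes of all four:
git commit -m "One combined commit message"

# Variant (instead of the commit above): reuse the old commit messages.
# HEAD@{1} is where HEAD was before the reset, taken from the reflog.
git commit -m "$(git log --format=%B --reverse HEAD..HEAD@{1})"
```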
Before I continue to the next topic, I want to take a quick look at the questions, because I see people asking things related to this same topic. Let me open the Q&A window and look at the Git questions; I see some general questions, which I'll skip for now. "Can two people collaborate in Git without pushing the code to a remote repository?" — this question is from Saurabh. If you need to collaborate on your changes, you have to push them somewhere: either you push to the origin, or you push the change to another remote repository — it could be your colleague's repository. There are alternative ways that avoid pushing to the origin, but you still have to push somewhere, namely to another (local) repository. So, to answer the question: the best, practical solution is to push the code; collaborating without pushing at all is possible but tricky, and it will likely mess something up.

The next question: "Can we reverse this commit back to four different commits?" I didn't understand this question entirely — if it means whether we can revert to a particular commit: yes, we can revert to any commit we want. For example, if you have made ten commits and want to go back to the tenth commit, that is possible: just use git revert and give the commit name. Another question: does every company require DevOps? That's right — effectively every company requires DevOps if they want to sustain in the market; that is my understanding. Even companies that don't call it DevOps have some sort of these practices, so one way or another they are following DevOps. I will skip the remaining questions for now; if you have more, we'll see them at the end, and I'll move on to the continuous integration topic.

What is meant by continuous integration? Continuous integration is defined around the commits you make: every commit to a repository, every change, has to be immediately compiled, built, and integrated — that is continuous integration. The best practice is that you may commit on your own branch, but you continuously integrate the change into the trunk — the master, the mainline of the code — so it can be compiled immediately and feedback given on whether it failed or succeeded. For example, you are a developer working on a single feature, and for that purpose you created a separate feature branch; you have done your coding and compiled it on your desktop perfectly fine — but how sure are you that your code works with the code the other developers are writing? That is what integration means: you integrate your code with the other developers' code, and that integrated code has to be compiled and tested to check whether it works. So that is continuous integration: you commit your change, integrate it with other people's changes, and then build, test, and deploy it. If you read Continuous Delivery by Jez Humble, a very famous book, it describes the best practices for continuous integration. One that I like is that all code by all developers should be committed directly onto the trunk — the main branch, the master branch — and that master branch should be integrated with a continuous integration server, for example Jenkins or Bamboo, which pulls every commit; for each commit a build should trigger, automated tests should trigger, and a deployment should follow. So each commit results in a build, a test, and optionally a deployment; a sketch of what such a per-commit job might run follows below.
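A minimal sketch of the commands a CI job might run on every commit, assuming a Maven-built Java project (the tool choices here are illustrative, not from the slides):

```bash
# Triggered by the CI server (e.g. Jenkins) on every commit to the mainline.
git checkout master
git pull origin master          # integrate with everyone else's latest changes

mvn -B clean verify             # compile and run the automated (JUnit) tests
mvn -B deploy                   # optionally publish the artifact (e.g. to Nexus, if configured)
```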
The next question: mention some commonly used Jenkins plugins. The focus here is Jenkins, the continuous integration server, and Jenkins will not work without plugins: by default, if you install Jenkins, it is just a shell, and without installing plugins you cannot really use it — so this is one of the basic interview questions. (Again, this session covers only the basic interview questions, not the in-depth topics of each tool; if you want in-depth questions on Git or any specific topic, feel free to enroll in the Edureka course, where we discuss each topic extensively over several hours and all such questions can be answered properly.) Some commonly used plugins: the Git plugin (or SVN plugin) is essential, because Jenkins has to be connected to a source control system in order to monitor it. The SSH plugin is a pervasive plugin for logging into remote machines — executing commands, copying files, and so on. The Build Pipeline plugin provides a chaining mechanism you can use to create upstream and downstream connections: with pipelines you can see the entire view of building and deploying to different environments, staging, production, and so on. The Email-ext plugin sends notifications as soon as you get a result out of a build, and the HTML Publisher plugin is for publishing reports. The multi-slave configuration plugin allows the administrator to configure slaves — the agents that run jobs remotely on other servers — and manage them in bulk. The Parameterized Trigger plugin lets you trigger builds based on parameters, so you can pass various parameters during the build phase. And there are several more: there are plugins for provisioning; if you want to connect with AWS, you can easily use the AWS plugins; if you want to work with Docker images, you can use the Docker plugins. You can also use some of Jenkins' basic native plugins, such as the dashboard and release dashboard plugins, and advanced ones like the Job DSL plugin to manage all your jobs and script your whole Jenkins job infrastructure as code. These are some of the basics.

The next question: explain Jenkins' distributed architecture — what is the need for this architecture, and why? Jenkins has a distributed (master/slave) architecture. There is a master server, configured with all the jobs, the requirements, and everything else, and you can choose to run a particular job on a different server. Why? Because, for example, you want to deploy code on a Linux box, and you want to deploy the same code on a Windows box, or on a different Linux distribution. For that you need to connect to that server and perform the activity there, and this can all be automated by installing a Jenkins slave (agent) on each of these nodes and running the job there. You may also have multiple environments, and each environment can be treated as a separate slave or combination of slaves. So a Jenkins master alone is not enough; you need Jenkins slaves, and you can use as many slaves as you like, because Jenkins is an open-source tool and you're not paying for them. That is the distributed architecture of Jenkins; a small pipeline sketch pinning stages to labeled agents follows below.
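A minimal declarative-pipeline sketch of that idea, assuming agents have been registered with the labels 'linux' and 'windows' (the labels and build commands are hypothetical):

```groovy
// Hypothetical Jenkinsfile: the master schedules each stage
// onto an agent (slave) carrying the matching label.
pipeline {
    agent none
    stages {
        stage('Build on Linux') {
            agent { label 'linux' }
            steps { sh 'mvn -B clean package' }
        }
        stage('Test on Windows') {
            agent { label 'windows' }
            steps { bat 'run-tests.bat' }
        }
    }
}
```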
The next question: how will you secure Jenkins — in other words, what are the security best practices for Jenkins? First, make sure global security is enabled in Jenkins' security configuration settings, so that anonymous users cannot access it: you are restricting anonymous users and enabling security. You can also integrate Jenkins with your company's user directory (for example Active Directory); that is one of the best ways, and a good approach to securing your Jenkins. Next, there is project matrix authorization in Jenkins: you can choose who can access which projects, and that is possible through the project matrix settings. You can also automate the setting of rights and privileges in Jenkins using a custom version-controlled script; that requires some scripting work, but it is possible. Also, limit physical and filesystem access to the Jenkins data folder: if you log into the server and open the Jenkins home, you have the Jenkins data folder and the jobs folder, and these folders should be locked down so that only the Jenkins user has access. Finally, periodically run security audits; this is something every organization should do. It sounds complicated, but it isn't: you log into the server from time to time and check who has access and whether the accesses have changed, or you run a script that performs these audits automatically.

Next: explain how you can create a backup and copy of files in Jenkins. This is also one of the ways you can migrate Jenkins from one instance to another; the same process applies. The way to do it: there is a folder called jobs, and inside the jobs directory all the jobs are stored, so you can take a copy of the whole jobs folder. That means you can copy it, or clone it onto a different machine, and then reuse it — for example when migrating Jenkins, or when you rename an existing job, in which case you also have to make sure the underlying job directories are consistent. So, to answer quickly: a copy of the jobs folder does the work.
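A minimal sketch, assuming a typical default Jenkins home location (the paths are illustrative):

```bash
# Back up every job definition (and its build history) by copying the jobs folder.
JENKINS_HOME=/var/lib/jenkins            # typical default; adjust for your install
tar czf /backup/jenkins-jobs-$(date +%F).tar.gz -C "$JENKINS_HOME" jobs

# Restoring / migrating: unpack the archive into the new instance's home
# and reload the configuration from the Jenkins UI.
# tar xzf /backup/<backup>.tar.gz -C /var/lib/jenkins
```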
Next is configuration management, but before I go there, let me take a quick look at the questions. Somebody asks: what is the meaning of git bisect? git bisect is a Git command that uses a binary-search algorithm to find which commit in the project's history introduced a bug. It is a somewhat advanced topic, but you can use it to separate the bad commits from the good ones; again, this is an in-depth topic and we cover it in proper detail in the course. The next question, on continuous integration: is a nightly build part of continuous integration — are nightly builds continuous integration? Continuous integration means each commit should trigger a build. This is actually a somewhat controversial question, but if you read Jez Humble's book, he clearly states that nightly builds are not continuous integration, and most organizations have adopted the principles defined that way. So the answer is no.

The next question: what's the difference between CI and CD — continuous integration and continuous deployment? A very good question; there is a real difference. Continuous integration is about building your code as quickly as possible to get fast feedback on whether the build is successful: each commit triggers a build and the corresponding tests. That is continuous integration. The next phase is continuous deployment: every build that runs generates some artifact — a JAR file or some similar artifact — and that new artifact, the result of the build, has to be deployed continuously to an environment so it can be tested. That is continuous deployment. There is also another CD, which stands for continuous delivery, and that is the larger picture: as soon as you build your code, you continuously deliver it all the way towards production. It covers continuously deploying to your test environments, with automated triggers and a defined workflow that automatically runs the tests in the test environment, and if everything passes, continuously delivers that to production. There are organizations that follow continuous deployment and continuous delivery: Netflix, for example, does maybe 30 deployments per day — quite a lot of deployments. The way they get to production is that each commit triggers a continuous integration build that creates an artifact; that artifact is automatically deployed to a test environment; the automated tests run against that test environment; if they succeed, the build is promoted towards production immediately, and if they fail, feedback goes back to the developer, who fixes it, and the cycle continues. That is continuous delivery, rather than just continuous deployment.

The next topic is configuration management questions: explain the various configuration management phases. This is not yet about a particular tool: you are managing the configuration information of each of your resources, or assets. For example, a server is an asset, each server has its own configuration information, and that has to be managed properly. The first phase of configuration management is configuration identification: identify what the configuration of your resource or asset is. Say you have a server: how much memory does it have, how many packages are installed, what are the configuration and provisioning files on that particular server, what purpose is it used for, which services does it run — there is quite a lot of information about each node, and identification is the first part: that information has to be gathered and stored somewhere. In the old days we called this the CMDB, the configuration management database, where all that information was stored; with modern tools like Chef, Puppet, or Ansible, a lot of this is automated — when you install these configuration management tools, they fetch the information automatically into their central server, which is a great thing. Once configuration identification is done, you have to manage change, which is change management: each change has to be deployed properly and rolled back in the event of a problem. Then comes configuration status accounting, which reports the status of whether each configuration item is working properly or not, and is used for reporting purposes.
The next phase is configuration audit: verifying the consistency of the documentation against the actual product. These are the principles of configuration management — the theory side; when it comes to configuration management with the actual tools, like Puppet, Ansible, or Chef, the picture is a little different.

Another pointed question: what's the difference between asset management and configuration management? Asset management is more about money and logistics, whereas configuration management is viewed from an operations perspective — it is ITIL-based, and it exists for troubleshooting when there are errors, across the lifecycle from deployment to decommissioning. Asset management is concerned with finance: how much each asset costs, who owns it, the purchasing and leasing processes and terms, how to maintain the servers, licensing, and so on. Configuration management, by contrast, is about deployment and operations. That is the key difference.

The next question as part of this topic: what do you understand by infrastructure as code? Infrastructure as code has become very popular; in fact, the whole DevOps practice largely evolved from the principles of configuration management and infrastructure as code. What it means is that provisioning an environment should start with code: whether you want to provision a dev, test, or production environment, everything should start from code — you have code, you trigger the code, and that brings your environment up. You are essentially storing the infrastructure as code. The definition says: IaC is the automation of IT operations such as building, deploying, and managing, by provisioning through code rather than through a manual process. So you provision dev, test, and production environments using code from a centralized location — that is infrastructure as code.

The next question: what's the difference between push and pull in configuration management? Why do we need push and pull at all? Your centralized server stores the configuration information of all your nodes — what node A contains, what node B contains, and so on. If there is a change that a node needs to receive, there are two mechanisms that can be involved: either the centralized server pushes the change to the required node — that is the push mechanism — or the nodes use a polling mechanism, where each node periodically checks in with the centralized server and updates itself whenever there is a need — that is the pull mechanism. We discuss this because some tools are push-based and some work on a pull-based mechanism: Ansible uses the push method, while tools like Puppet, Chef, and SaltStack use a pull-based mechanism. A small sketch contrasting the two follows below.
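A minimal sketch of what each style looks like on the command line (host and group names are placeholders; the Puppet agent normally also runs this check-in on a schedule by itself):

```bash
# PUSH (Ansible): the control server connects out over SSH and applies the change.
ansible webservers -i inventory -m yum -a "name=httpd state=present" --become

# PULL (Puppet): the node asks the master for its catalog and applies any drift.
# Run on the node itself; the agent daemon typically does this every 30 minutes.
puppet agent --test
```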
The next question: how does Puppet work? Puppet has a master and slave (agent) concept. You have a master server — a centralized server used for storing all the configuration information about all your Puppet slaves — and the Puppet slave is your node, where you are probably running Apache, some application, or a database. You manage those servers through Puppet: you install the Puppet slave (agent) on the nodes, and you install the Puppet master on a centralized server to manage your changes. How it works: first, the Puppet slave connects to the master over SSL encryption — all the slaves are connected to the master using SSL, the secure socket layer. Then the Puppet slave sends its facts. What is a fact? Facts are the configuration information, everything about the slave: the number of CPUs, the memory, the packages installed — quite a lot of detailed information is sent to the Puppet master. The Puppet master compiles a catalog and sends it to the slave; the catalog describes which packages should be installed and what configuration changes need to be made on that particular slave. The slave compares the catalog against its current state and checks for deviations — say one line needs to be changed in a config file; the slave identifies that change and applies it once the catalog has been received from the master — and then it reports the result back: the node reports to the Puppet master indicating the configuration is complete, which is visible in the Puppet dashboard. That is how Puppet works.

Now, what are modules and manifests in Puppet? Manifests first, on the right side of the slide: every change to the configuration of a particular server or Puppet node is expressed in Puppet's native declarative language, and that information is stored in manifests. For example, to install Apache on a particular server, the manifest contains the lines of code to perform that activity on the node. A module is a combination of such manifests — for example, a manifest to install Apache, another manifest to install MariaDB, another, say, to install Tomcat; these can all be combined into a single module, and that module can be reused. As the slide on the left says: a Puppet module is a collection of manifests and data such as facts, files, and templates, with a specific directory structure. Modules are useful for organizing your Puppet code properly, because they allow you to split your code into multiple manifests — much like organizing modules in a Java project. Manifests are individual files written in the Puppet language, with the .pp extension. In the small example, node 'host2' means: apply this to the Puppet slave called host2; you call a class called apache, and then you declare the apache::vhost resource, defining the name as example.com and passing parameters like port => 80 and docroot => '/var/www/html'. That small example is reconstructed below.
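A reconstruction of that narrated example as a manifest — a sketch, assuming the apache class and apache::vhost defined type from the puppetlabs-apache Forge module (Forge is mentioned next):

```puppet
# site.pp -- apply this configuration to the Puppet slave named host2
node 'host2' {
  class { 'apache': }                 # install and manage the Apache service

  apache::vhost { 'example.com':      # declare a virtual host on that node
    port    => 80,
    docroot => '/var/www/html',
  }
}
```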
Another thing to note: there is Puppet Forge, an open-source community site where people develop Puppet modules and share them with each other. It is a good place to look for Puppet modules — instead of writing your own, you can often pick one of the existing ones.

The next topic is Chef: explain the Chef architecture. The three key terms in Chef are the Chef server, the Chef node, and the Chef workstation. The Chef server, like the Puppet server, is a centralized store of all the infrastructure configuration information. The Chef node is what the Chef server manages: your server — running Apache or whatever — where your application runs; you want to manage those nodes with your Chef server. Finally, the Chef workstation is your desktop, or a server or virtual box, from which you initiate Chef commands. knife is one of the commands used to upload configuration changes to the Chef server. For example, say you want to change something in a cookbook. What is a cookbook? A cookbook is where you store the configuration information — all your configuration templates and files together. You write a cookbook in Chef's DSL, which is Ruby-style, and in the cookbook you define which packages need to be deployed onto a particular Chef node: the cookbook has the rules, the packages required, the services to be started, everything combined together. Once you have written that cookbook and want to store it on the Chef server, you use the knife upload command from the Chef workstation. So you make changes to the Chef server's information from the Chef workstation, and the Chef server then supplies that information to the Chef nodes. As the slide says on the left: the chef-client runs on each node and obtains configuration information from the Chef server; the chef-client and knife use API clients to talk to the Chef server; knife communicates with nodes using SSH; systems managed by Chef are called nodes; and a Chef workstation is simply a computer with a local Chef repository and a properly configured knife command. On the right side: all configuration is first tested on the workstation — if you write a cookbook you should test it, using testing tools such as Test Kitchen or InSpec on your workstation — and then you push it to the server, and the Chef server manages pushing the information to the Chef nodes through a process called convergence, which makes sure the changes are properly applied and the state of the Chef nodes stays in line with what is defined on the Chef server. That is the architecture; a short sketch of the workstation-side commands follows below.
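A minimal sketch of that workstation-to-server-to-node flow (the cookbook and node names are hypothetical):

```bash
# On the Chef workstation: upload a cookbook to the Chef server.
knife cookbook upload apache_setup

# Attach the cookbook to a node's run-list so the server will apply it there.
knife node run_list add web01 'recipe[apache_setup]'

# On the node: chef-client pulls its run-list from the server and converges.
sudo chef-client
```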
Now an example question: write a recipe to install httpd and copy an index.html file to the default document root as part of the installation. This is a small example somebody may well ask in an interview, and it is quite easy. First, the difference between a recipe and a cookbook: a recipe is basically just like a manifest — it declares the resources that bring the server into a particular state — and when you combine recipes with all the templates, variables, and so on, you can call it a cookbook. A cookbook can contain several recipes combined together. So, just as manifest corresponds to recipe, module corresponds to cookbook: what Puppet calls a module, Chef calls a cookbook, and what Puppet calls a manifest, Chef calls a recipe.

In the recipe here, package is a resource — it is called a resource in Chef, and it is predefined — and when you declare package 'httpd', it will install the package called httpd. service is another resource, which starts, stops, or enables a service. You write it with do … end — do is like an opening bracket, end like a closing bracket — and inside you define an action. If you don't define an action, there is a default action associated with each resource, and that gets applied. Here we specify action [:enable, :start], which means you are enabling the service called httpd and then starting it. Then you use a file resource to copy a file to the target, /var/www/html/index.html, with do, then content — where you write the content of the file, "Welcome to Apache in Chef" as a test — and then end. It is that simple a script; see the reconstruction after this passage.

The next question: what happens when you don't specify a resource's action? In the first example snippet, file 'C:\Users\Administrator\chef-repo\settings.ini' with content 'greeting=hello world' does not specify any action, whereas the next snippet adds action :create. Both do exactly the same thing: they create a file called settings.ini with the content greeting=hello world. The second one just states the action explicitly; the first works anyway, because :create is the default action for the file resource, so it gets executed even when you don't write it.

Another question: are these two recipes the same, and if not, what is the difference? The first one is: package 'httpd', then service 'httpd' do action [:enable, :start] end. The second one starts with the service and then the package: it starts the service httpd first and then installs the httpd package. The way Chef works is that it applies resources sequentially, from top to bottom. So the first one is correct, because it installs the package and then starts the service; the second one is wrong, because it tries to start the service before installing the package — you have to install the package first in order to start the service, otherwise it fails, saying the service does not exist. So the answer is: no, they are not the same, because Chef applies resources in the order they appear. The first recipe ensures the httpd package is installed and then configures the service; the second configures the service and then ensures the package is installed — and only the first is correct.
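Reconstructions of the narrated snippets as Chef recipes — a sketch of what was described, with paths and strings following the narration:

```ruby
# 1) Install httpd and copy an index.html into the default document root.
package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content 'Welcome to Apache in Chef'    # test content, as narrated
end

# 2) Default actions: these two are equivalent, because :create is the
#    file resource's default action.
file 'C:\Users\Administrator\chef-repo\settings.ini' do
  content 'greeting=hello world'
end

file 'C:\Users\Administrator\chef-repo\settings.ini' do
  content 'greeting=hello world'
  action :create                          # now stated explicitly
end

# 3) Ordering matters: resources run top to bottom, so installing the
#    package BEFORE enabling/starting the service (as in 1) is the only
#    correct ordering.
```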
The next question is about Ansible: is Ansible a push-based or a pull-based CM tool? Like I explained before, Ansible works on a push-based mechanism, not a pull-based one. How does it work? You have an Ansible server where you keep all your playbooks, and those playbooks can be version controlled. Then you specifically run Ansible commands to push those changes to the various remote nodes that you have, and by default Ansible uses SSH to connect to the remote servers you are managing. The other famous interview question is that you don't need to install an Ansible agent on each of the nodes you're managing: there is no such thing as an Ansible agent. There's only the Ansible server; you install Ansible on one server, and as long as that server can connect over SSH and run remote commands on a box, it can manage it. All you need to do is configure your inventory, which is basically the list of hosts the server is managing, properly in the inventory file. As long as you can do that, Ansible can manage those machines. So that's the difference between push and pull mechanisms and the way Ansible works; the advantage with Ansible is that you don't need to install an agent on each box.

A quick interview question can be: let's say you have 50 different Linux boxes, you are able to SSH from your machine to all 50 servers, and you want to install a particular application on all of them. How can you do it? The simplest thing is to use Ansible; it can be one command to install, say, Apache on all these nodes. Just write a small playbook (what cookbooks are in Chef, playbooks are in Ansible) that installs Apache, then run the ansible-playbook command against your hosts, and it goes and connects to each of them, installs the httpd package, and reports the status back; see the sketch below.

The next thing is how the Ansible architecture looks. As I mentioned, the inventory is basically the list of hosts you're managing; then there are playbooks (the same idea as cookbooks), modules, APIs and plugins. The playbook is the code you write, and it calls the various existing predefined modules, which in turn use APIs and plugins. The Ansible automation engine runs your playbook against an inventory: you define a playbook to install Apache, for example, and run it against an inventory, and that inventory can be your UAT servers, development boxes or production hosts. A CMDB can be connected to supply configuration information, and then all the hosts get managed. It's a very simple architecture diagram. All right, before we go to the next topic I want to take a look at the questions; I see several questions here.
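A minimal sketch of that 50-box scenario, assuming Red Hat-family machines (hence yum); the group name webservers and the file names are my assumptions, not from the webinar:

    # install_apache.yml -- push Apache to every host in the "webservers" group
    - hosts: webservers
      become: yes
      tasks:
        - name: Install Apache
          yum:
            name: httpd
            state: present

        - name: Enable and start Apache
          service:
            name: httpd
            state: started
            enabled: yes

You would run it with something like `ansible-playbook -i inventory install_apache.yml`, and Ansible pushes the change to all 50 boxes over SSH in one go.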
The question is about whether Ansible only uses a push mechanism. As far as I am aware, and as far as my understanding goes, Ansible uses push. I have heard it can be used in a pull mode as well (there is an ansible-pull command), but I have not used it that way; the popular way of using Ansible is push. We're talking about interview questions here, not trick questions, and when the interviewer asks this, she is probably expecting "push" as the answer. If you want to say it can also be used for pull, you have to be able to back it up: which exact mechanism or plugin does that? In my experience I have only used Ansible for push, and that's how most people use it; essentially, Ansible means push. In an interview you may well know more than the person interviewing you, but you just need to answer accurately what is actually being asked; that's what gives you success in the interview.

The next question is: what's the difference between tools like Chef, Ansible and Puppet on the one hand and Docker on the other? Chef, Ansible and Puppet are configuration management tools, used to manage configuration. For example, in the old days I used to manage various environments: say 10 development environments, 10 UAT environments and 10 production environments, each with its own set of servers, and each server with its own role. In the dev environment, one server runs the web application, another runs some back-end services, another is a database server, and then there's a cluster of all these things. All of it integrated is actually a very good infrastructure, but here is what happens: when we promote code from one of these environments to another, the deployment is successful, yet we hit issues; the deployment goes through, but after starting the service some things don't work, and this was a repeated problem. Why? Because the configuration of these environments is not identical: somebody has made some small change to an Apache config file, or changed a registry entry on Windows, or fine-tuned some memory parameter in, let's say, the database server. As a result, everything worked fine in development, but when the code goes to UAT it doesn't work, because that small configuration change was never recorded anywhere. That is what configuration management is for: managing consistency between all the environments. Any change we make, we make through the tool: with Ansible, through playbooks, which are again stored in version control; with Chef, through cookbooks; with Puppet, through modules and manifests. That way, when we re-provision an environment, we deploy the playbooks and all the configuration information to the new or existing environment, and it is automatically updated to match exactly what we need, so there will not be any failure when we deploy the actual code. That's the purpose of configuration management tools.
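To make that concrete, here is my own minimal sketch (not from the webinar; the Apache directive and paths are purely illustrative) of recording such a "small change" as code instead of making it by hand:

    # enforce_apache.yml -- keep one Apache setting identical in every environment
    - hosts: all
      become: yes
      tasks:
        - name: Pin KeepAlive so no environment drifts
          lineinfile:
            path: /etc/httpd/conf/httpd.conf
            regexp: '^KeepAlive '
            line: 'KeepAlive On'
          notify: restart apache

      handlers:
        - name: restart apache
          service:
            name: httpd
            state: restarted

Because the playbook lives in version control, the change is recorded, reviewable, and re-applied identically in every environment.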
Docker, on the other hand, is used for virtualization purposes, and there is a slight difference between virtualization and Docker; we're going to cover that topic in the upcoming slides. The next question: a module is used for defining classes; can we deploy a module directly on the Puppet agent? Yes, we can deploy a particular module directly on an agent. A module can have a default class; basically every module should have a class, and if you don't specify one, the default class is taken, so you can run that module against the agent directly. That's possible.

All right, the next question: for provisioning infrastructure on a cloud, which tool is better, Puppet or Chef? Both are equally powerful, so for me to pick one would be unwise; it depends on the person's interest and experience. If you're more experienced with Puppet, use Puppet; even the licensing costs are more or less the same. If you're using, say, the AWS cloud, AWS OpsWorks is tightly integrated with Chef, so your default option there would probably be Chef; but Puppet is equally powerful, and you can still use Puppet to provision on AWS. And most of the people I know who use OpenStack actually use Puppet widely. So my understanding is that Puppet goes well with OpenStack and Chef goes well with AWS, but again, that's just a default; you can reverse it and it works exactly the same, sometimes even better. OK, is install the default action? Yes, :install is the default action for the package resource.

Now, which is better, a push or a pull mechanism? It actually depends, case by case. A one-time activity, like provisioning servers, can be done with a push mechanism, because it's a one-time activity; but once things are provisioned and connected to an environment, and you have to keep receiving updates, for that purpose you can use a pull mechanism. The next question, about whether it can be done by creating Docker images and uploading them to Docker Hub, I'll come back to a little later, in our last section on virtualization.

All right, the next topic is continuous monitoring. Why is continuous monitoring necessary? Continuous monitoring allows the timely identification of problems and weaknesses in the software we deploy, and it reduces the organization's expenses, because you find the issues early, rather than after an incident has happened and caused financial or reputational damage through a service failure. You're identifying potential failures in advance; it's being proactive. This is done by solutions that address continuous auditing, continuous monitoring and transaction inspection. So that's why we need continuous monitoring; now let's see what Nagios is and how it works.
Nagios is an open source monitoring tool, used for monitoring your applications and servers. The Nagios server is installed centrally: you install the Nagios server module on a particular machine, and that is used centrally to monitor all your applications and all your servers; it's an agent-based, client-server style application. The Nagios daemon has a process scheduler that executes the plugins on a scheduled, recurring basis and collects their results. Basically, you install the Nagios server and it can monitor local resources like CPU, memory and disk information, or service information. You can also do remote server management: using the Nagios server you can monitor remote hosts, and for that purpose you need to install another component called NRPE, which we'll get to in a second.

Nagios comes with quite a lot of plugins. The base installation contains essentially nothing on its own; when you install plugins, each plugin performs a specific check: one plugin to monitor the CPU, another to monitor disk space, and so on; it's all plugin-based. So you install the Nagios process scheduler (the server) and then the Nagios plugins, which are executed by that process, and that gives you the information. The flow is: first, Nagios runs a check on a schedule; second, the plugin checks the status and gets the result; third, the plugin sends the result back to the Nagios process; and then Nagios displays that information in a web GUI interface. The plugin can be a local plugin, or it can be executed against a remote box: if you look at step number two, "plugin checks status and gets result", that can also apply to a remote host, and for a remote host to work we need another piece. Step number four is "notifies admins about the status", processed by the scheduler; notifications can be sent either by SMS or by email.

Now, to monitor remote hosts, which is very important, we have to install something called NRPE, the Nagios Remote Plugin Executor. We install NRPE on the remote box: for example, if you have a remote Linux or UNIX host, NRPE is installed there as another package, and the monitoring host, the Nagios server, runs a module called check_nrpe. check_nrpe connects over SSL to the NRPE service on the remote box, which runs the checks locally: check_disk, check_load, check_ftp and whatever other plugins are installed locally on that box. So we have the NRPE plugin on the remote box, and on the Nagios server we have check_nrpe, which connects to NRPE, pulls the information about the remote box, and shows it on the web interface. OK, this is a concept you really need to install and get hands-on with to fully understand, but these questions can give you some understanding. If somebody asks you about Nagios and you have not worked on it, probably the best answer is that Nagios is a service installed on a server, and with the right plugins selected, those plugins are run and give information about CPU, memory and so on, which is then displayed in the Nagios web UI; and to monitor a remote host, you additionally install the Nagios Remote Plugin Executor, through which you can know the status of the remote box.
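As a rough illustration (my own sketch, not from the webinar; the host and service names are hypothetical), the server-side wiring for an NRPE check typically looks like this in the Nagios object configuration:

    # command definition: how the Nagios server invokes check_nrpe
    define command {
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
    }

    # service definition: run the remote box's local check_disk via NRPE
    define service {
        use                  generic-service
        host_name            remote-linux-box
        service_description  Disk Usage
        check_command        check_nrpe!check_disk
    }
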
When does Nagios check for external commands? Nagios checks for external commands under the following conditions: at regular intervals, specified by the command_check_interval option in the main configuration file, so you can set up an interval and Nagios will look for external commands at that frequency; and also immediately after event handlers are executed. When certain event handlers run, Nagios checks for external commands right away, in addition to the regular command-check cycle; this is done to provide immediate action in case an event handler has submitted commands in the meantime.

Moving on to the next question: what is the difference between active and passive checks in Nagios? The major difference is that active checks are initiated by Nagios, while passive checks are performed by some external application. You can have a third-party application that gathers information or performs checks and submits the results to Nagios; checks initiated by external applications are called passive checks, while the checks Nagios itself performs are active checks. Passive checks are useful for monitoring services that are asynchronous in nature, which means they cannot be monitored effectively by polling their status on a regular schedule. You can specify an interval on the Nagios server so it checks the status of a particular service regularly, but what if your application doesn't fit a regular interval and only needs to report when some event occurs? In that case you use passive checks. Also, if a host is located behind a firewall, your Nagios server cannot reach it directly, so there too you rely on passive checks, with another monitoring host submitting the results. To summarize the two important points: active checks are initiated by Nagios and run on a regularly scheduled basis; passive checks are initiated by some other application at the remote end, where you don't rely on scheduled polling.
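For reference, a minimal sketch of the relevant main-configuration directives; the interval value here is just an example, not a figure from the webinar:

    # nagios.cfg (main configuration file)
    check_external_commands=1       # allow Nagios to process external commands at all
    command_check_interval=15s      # poll the external command file every 15 seconds
    command_file=/usr/local/nagios/var/rw/nagios.cmd   # where external apps write commands
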
Next: what is flapping? Flapping occurs when a host or a service changes state too frequently: it goes red one moment, green the next, red again, green again, so the results are very inconsistent and you're left confused about the real status. How does Nagios deal with it? Nagios stores the results of the last 21 checks of the host or service, i.e. the historical check results, and analyzes that history: it uses the state transitions in those results to determine a percent state change value for the host or service. In other words, it looks at the last 21 checks, compares consecutive results, and works out what percentage of them were state changes. A host or service is determined to have started flapping when its percent state change first exceeds the high flapping threshold, and it is determined to have stopped flapping when its percent state change drops below the low flapping threshold. So there are configured thresholds, Nagios keeps the last 21 check results, computes the percent state change from them, and compares it against those thresholds to decide whether the host or service is flapping.

All right, before we go to the next topic, let me take a few questions. Which is better, Nagios or Splunk? Nagios is widely used and has a lot of support, but it is a little complicated to install and manage, so people also use Splunk. I have had the chance to use Nagios, and it is a bit tricky; you need quite a lot of expertise to work with it, and frankly I think Splunk is a little easier than Nagios. That said, tools like OpenStack use Nagios as the default monitoring tool; it's very powerful and it has been there from the very beginning, which is why Nagios remains popular.

Now, what are containers? Here we come to containers and virtualization techniques. Virtualization pretty much everybody knows: you have a big box, for example a blade machine with 20 processors, or 100 processors and hundreds of GBs of memory, and you can split that box into multiple virtual machines, each of which can be used independently. Each virtual machine requires its own dedicated resources: memory, CPU, disk space and so on. So that's virtualization; but what is containerization? Containers constitute a kind of runtime environment for an application: the application and all its dependencies, libraries, binaries and configuration files, bundled into one package. Containerizing an application platform and its dependencies removes the differences between OS distributions and the underlying infrastructure. That's the definition; let me explain it quickly. Say you have only one box, running Red Hat Linux as the host machine, but you want to run a small application like httpd that is built for Ubuntu. The native way, you would have to create a separate server running Ubuntu, which requires its own resources, its own memory, a separate virtual machine. But containers are lightweight: they provide a runtime environment on the host itself. For example, you can start an Ubuntu-based container to run your Apache directly on your Red Hat host. Containers don't need separate virtual machines, and you can run them on your own box; all you need is a container engine running on your host.
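For instance, here is my own illustrative sketch (not a command from the webinar; the image tag and port mapping are assumptions) of running Ubuntu-based Apache on a Red Hat host:

    # assumes the Docker engine is already installed and running on the Red Hat host
    docker run -d --name ubuntu-apache -p 8080:80 ubuntu:16.04 \
      /bin/bash -c "apt-get update && apt-get install -y apache2 && apache2ctl -D FOREGROUND"
    # the host now serves the container's Apache on port 8080 -- no separate VM involved
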
The container engine has to be installed on your host box, and on top of it you can start multiple containers; each container has its own runtime environment and its own runtime variables, while sharing the host's Linux kernel, which the container engine uses to isolate them. That's why, if you look at the image sizes, container images are pretty small, maybe 10 MB up to a few hundred MB, whereas a virtual machine image can be anywhere from 500 MB to several gigabytes. Containers carry only the essential information to run that one application and provide a separate runtime environment, and several can run independently on the same box: you can run app one's binary and app two's binary, each with its own simple set of binaries and libraries, on the same machine. That's why they're used for microservices.

The next question: imagine a scenario where a large application is broken into small composable pieces, each with its own set of dependencies; let us call these pieces microservices. Microservices means you break the application into small applications which can run independently and which each have their own dependencies. To run each of these microservices, would you use containers or virtual machines? Basically, we use containers, not virtual machines. The reason is that with virtual machines you would have to create one VM per microservice, and you'd need several of them, which means a lot of CPU and lots of GBs of memory consumed; but with containers you don't need to extend anything, and your physical infrastructure can be as modest as your laptop, where you can run the containerized application for each microservice. That's the advantage: lightweight, low CPU and memory use, running only the service. A VM runs many processes; if you look inside a container built this way, it runs only the one service.

Now let's see an example: write a Dockerfile to create an image for MongoDB. A Dockerfile stores all the information about your container in one file; when you build from this Dockerfile, it creates an image with all the rules you have defined in it. The first thing in a Dockerfile is always FROM and then an image, which we call the base image. The base image can be Ubuntu, Red Hat, Alpine Linux, or on the Windows side a Windows base image; these are lightweight, small images stored in something called Docker Hub, which is a public registry where Docker images are kept. So when you write FROM ubuntu, it first pulls the small Ubuntu image onto your host machine, and then it runs the various other commands you define. In this example we're building a container image that has MongoDB installed; let's walk through the Dockerfile below.
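This is a rough reconstruction of the Dockerfile walked through in the session (it follows the old MongoDB-on-Ubuntu tutorial layout; the maintainer name and key-server details are my assumptions):

    FROM ubuntu
    MAINTAINER Example Author <author@example.com>

    # add the MongoDB package signing key to apt
    RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    # add the 10gen MongoDB repository to the apt sources
    RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/mongodb.list
    # refresh the package index and install MongoDB
    RUN apt-get update
    RUN apt-get install -y mongodb-10gen

    # create the default data directory MongoDB expects
    RUN mkdir -p /data/db

    # expose MongoDB's port to the host machine
    EXPOSE 27017
    CMD ["--port", "27017"]
    ENTRYPOINT ["/usr/bin/mongod"]
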
So first we pull the Ubuntu base image. MAINTAINER is basically the name of the author of the image. Then we use RUN, which is the instruction for invoking any command inside the image being built: RUN apt-key adv installs the repository signing key into your apt keyring. The next step is running echo with the line "deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" and piping it into the file /etc/apt/sources.list.d/mongodb.list, so you're writing the MongoDB repository information into that file. Then we run apt-get update, which refreshes the repository index, and finally we install MongoDB with apt-get install -y mongodb-10gen. Then you create the data directory with mkdir, and then you expose a port number. This is important: if you don't expose the port, the host machine where you run the container will not be able to reach it. MongoDB in the container runs on port 27017, and that port has to be exposed to the host machine. Then you set CMD with --port 27017, and finally the ENTRYPOINT: the entry point is the process that actually runs, and here that is /usr/bin/mongod. If you build this particular Dockerfile, it creates an image; and once a container is started from it, if you log into the container using exec and list the processes, you will see only one process, /usr/bin/mongod, and nothing else, which shows how lightweight it is. That's about Docker images.

Now the next question: with an example, explain how Docker is used across the SDLC. This is an important question. Let's say you have an application that runs an httpd service, you have some content for that httpd, like a web application, and you want to dockerize it. What would you do? You have the code, which you keep on your local desktop or, better, in a git repository. To test whether it works, you use the dockerization mechanism: you write a Dockerfile, and the Dockerfile says FROM ubuntu, then RUNs commands like apt-get update and installing httpd, then a COPY command that copies your code from your repository into the container, and then runs your httpd application. So you have a simple setup: a Dockerfile, which again lives in the git repository together with your code, and you can integrate that with Jenkins. Once the build is done, Jenkins runs the Dockerfile, which creates a container, and that container can be tested at any stage: you can write and run Selenium tests against it to check whether everything works. And you can do exactly the same thing with the same Dockerfile and the same code to spin up the staging and production environments.
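A minimal sketch of such an application Dockerfile; this is my illustration, not shown in the webinar, and the webapp directory name is hypothetical:

    FROM ubuntu
    # install the web server inside the image
    RUN apt-get update && apt-get install -y apache2
    # copy the application code from the build context (checked out from your git repo)
    COPY ./webapp/ /var/www/html/
    EXPOSE 80
    # run Apache in the foreground as the container's single process
    CMD ["apache2ctl", "-D", "FOREGROUND"]

Jenkins can docker build this after every commit and run the Selenium tests against the resulting container.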
So exactly the same thing gets deployed everywhere. For example, you have a host machine, say a production host, and you want a new container: you simply run the same Dockerfile, and it spins up a new Docker container. That is how Docker is used for continuous integration, and that's how Docker provides a consistent computing environment through the SDLC. As we've seen: you have a project's code, that project has a Dockerfile, the Dockerfile produces a Docker image (with the base image pulled from Docker Hub, for example Ubuntu), and from the image you run a Docker container. If you run this Dockerfile you can start a server in staging, and the same thing can be started on the production server; it's exactly the same source you're using to build the container, so consistency is assured.

The next question is: what is Docker compose? One Dockerfile, like the example we just saw, is about one container; but what if you want to run multiple containers? That's where we use Docker compose. Docker compose uses a YAML file where you can define multiple containers together: you can have a few lines to start your web application container, a Postgres database container and a Redis cache container, and these three containers are declared together in one docker-compose file. Then you run docker-compose, and it spins up the three different containers with one single command; that's the point, we can run all three containers with a single command using Docker compose. You need to install docker-compose on the box for that.
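A minimal sketch of such a compose file; this is my own example, with service names and images assumed to match the web + Postgres + Redis description:

    # docker-compose.yml
    version: '2'
    services:
      web:
        build: .          # build the web app image from the Dockerfile in this directory
        ports:
          - "8080:80"
        depends_on:
          - db
          - cache
      db:
        image: postgres   # official Postgres image from Docker Hub
      cache:
        image: redis      # official Redis image from Docker Hub

A single `docker-compose up -d` then starts all three containers together.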
Before I continue, I want to take a question here. The question says: suppose I need a LAMP stack on my nodes; I can use tools like Chef or Puppet to do that. Right, you can use either of them to stand up a LAMP stack. For LAMP you need Linux, which is obviously your OS, then Apache (or, say, Tomcat for a Java stack), then MySQL and then PHP, and those are the installations you'd script. For example, with Chef you can write a cookbook that defines all the dependent packages in one single cookbook, run that cookbook against the node, and it installs the LAMP stack; that's a good example. The question continues: but at the same time I can write a Dockerfile, upload the image to Docker Hub, and the teams can pull the image and build the containers. That's right. So in both cases I can easily deploy a LAMP stack? Yes, you can. With Docker, as we've seen, it's lightweight: you can create your own Docker image that has the whole LAMP stack in it, and there are actually examples of this on Docker Hub. That's the lightweight route. But not every application can work in that lightweight fashion: you may already have a dedicated server for Apache, a dedicated server for MySQL, a dedicated server for your PHP runtime, where you're not changing that setup, and they are physical boxes or virtual machines; in that case you use Puppet or Chef to manage those dependencies and installations.

So, when I use Docker, why do I need tools like Chef or Puppet? If you use Docker for a given application, there is indeed less for Chef and Puppet to do for that piece; but Chef and Puppet can be used to provision everything around it. If you're running a Docker image, you don't need Chef or Puppet for the image itself; but you're running that image on a host machine, and that host machine needs all its installations done: it should have a Docker engine running, and it should have the prerequisites in place, for example git, because your Dockerfile is probably copying code from your git repository. For that purpose you use Chef or Puppet. Chef, Puppet or any other configuration management tool together with Docker gives you the best result, because we're not just talking about running one small container on one box, but about how you do this enterprise-wide; Chef and Puppet have their purpose, Docker has its purpose, and you combine them to make your DevOps practices work enterprise-wide.
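As an illustration of that division of labour (my own sketch, not from the webinar; the group name and image name are hypothetical), the CM tool prepares the host and Docker then provides the application runtime:

    # prepare_docker_host.yml -- configuration management provisions the host,
    # Docker runs the application on top of it
    - hosts: docker_hosts
      become: yes
      tasks:
        - name: Install the Docker engine and git on the host
          yum:
            name:
              - docker
              - git
            state: present

        - name: Start and enable the Docker service
          service:
            name: docker
            state: started
            enabled: yes

        - name: Run the application container (illustrative, not idempotent)
          command: docker run -d --name webapp -p 8080:80 mywebapp:latest
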
Before I wrap up, a quick summary of what we covered in this session: general DevOps questions; source code management using git; continuous integration best practices and a bit about Jenkins; configuration management, push and pull mechanisms, and the differences between the various configuration management tools like Chef, Ansible and Puppet; continuous monitoring, where we saw what NRPE is and how Nagios works; and containerization, with an example Dockerfile and Docker compose. The difference between docker-compose and a Dockerfile, once more: with a Dockerfile you run one container; with docker-compose you combine multiple containers into one single file and run them together. The next question is: was DevOps introduced in order to facilitate microservice architecture? Not just for that. DevOps got going when the now-popular concepts of configuration management and infrastructure as code started. Microservices is a modern architecture, and without DevOps you cannot really run a microservice architecture, that's for sure; but DevOps is not only for microservices, it also serves the normal service-oriented architectures that many organizations already have. Whether you use microservices or not, DevOps is still a requirement for agility and for the quality of your releases; for microservices it's simply a must, a mandatory one. All right, do you have any other questions? If you do, feel free to post them, and if you have any feedback you'd like to give us, please do; I'll spend a few more minutes answering. I tried to answer the questions as we went along, maybe not all of them, so I'll give you two more minutes to post anything remaining and I'll try to answer. I assume there are no more questions, so thank you very much for joining this session; I hope you learned something from it and from the questions you all posted. Thank you very much, and hope to see you soon, bye bye. I hope you enjoyed listening to this video; please be kind enough to like it, and you can comment any of your doubts and queries and we will reply to them at the earliest. Do look out for more videos in our playlist, and subscribe to the Edureka channel to learn more. Happy learning!
Info
Channel: edureka!
Views: 236,594
Keywords: yt:cc=on, devops interview questions and answers, devops interview questions, git interview questions and answers, git interview questions, puppet interview questions, jenkins interview questions, docker interview questions, devops interview, chef interview questions, ansible interview questions, devops tutorial, devops training, devops edureka, edureka, devops engineer interview, devops engineer interview questions, devops engineer, devops roles, devops jobs, devops pipeline
Id: clZgb8GA6xI
Length: 93min 31sec (5611 seconds)
Published: Mon Mar 06 2017