Devops Interview Questions | DevOps Interview Questions And Answers | DevOps Tutorial | Simplilearn

Video Statistics and Information

Captions
Hello everyone. Today we will be looking at the interview questions a typical DevOps engineer faces in an interview. This is both for people who are new to the DevOps world and for people who are experienced in other fields, like Linux or production support, and want to move to DevOps. So let's take a look.

It will pretty much start with questions about yourself. They'll have your resume in front of them, since you would obviously have sent it to them, so they'll look at that and ask you to quickly walk them through it and tell them what you've been doing in your previous project. You'll have to summarize this in a way that walks through all the layers: tell them that you worked on a team where you managed the AWS part and the automation part. That can include any infrastructure deployment scripts you have written, any configuration management you've done, and any deployment automation. It would also include things like setting up a monitoring system, setting up the entire stack on the cloud, collaborating with teams, your code review process, scrum methodology, whatever you do in your team.

They will definitely ask what your team size was. You could say you've been working on a team of about five to six people, which would include team leads, consultants, managers, scrum masters, and so on. What was your role in the team? The typical DevOps role, in which you've written code to automate things on the cloud.

They'll ask how good you are at programming. This is a typical question asked of a DevOps engineer to gauge whether they are fit for the DevOps role. You would answer that you've not written full-fledged applications in something like Java or Ruby on Rails, but you're good at Ruby, Python, shell, and Perl from a scripting perspective: maybe not a full-fledged programmer, but you know the ins and outs of scripts.

One of the typical questions is: are you from the dev side or the ops side? The answer is that you're from the ops side, but you have a good hold on the languages used for scripting and configuration management, and whatever code is required you can write easily. They may ask how quickly you can learn, which gauges the candidate's ability to pick up new things and adjust to a new environment. Given the chance, can you architect an application? The answer is definitely yes: tell them you have been working with architects recently, contributing to the architecture from a DevOps perspective and giving input so the application can be developed in a more easily deployable manner. Given the chance to lead a team, can you do that? You'll say yes, definitely, I have more than five years of experience. Typically this is true for people who have at least five years of experience leading a team, because less than that would really not be meaningful.

The next level of questions that comes up in a DevOps interview is typically Linux questions. These could be anything from the Linux world, but these are the most common. What is the command to view the crontab? The command is crontab -l. What is an alias in Linux? An alias is a shortcut for a command on that system; system-wide aliases are defined in the /etc/bashrc file. What does the chmod command do? The chmod command allows you to change the permissions of a file in Linux: it can be changed between read, write, and executable modes, so a file could go from read to read-write, or to read-write-execute, depending on the use case.
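A quick shell sketch of those Linux answers; the file names deploy.sh and config.yml are placeholders:

    # List the current user's cron entries
    crontab -l

    # Define an alias (persistent aliases go in /etc/bashrc or ~/.bashrc)
    alias ll='ls -alF'

    # Change permissions: add execute for the owner, then set read/write
    # for the owner and read-only for group and others
    chmod u+x deploy.sh
    chmod 644 config.yml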
What is SSH port forwarding? It's a way to forward ports through the SSH protocol, so it allows you to bypass firewalls and tunnel ports through strictly guarded environments. This is one of the ways you can connect to instances or services in your private subnets in AWS or in your data center.

The next question is: what is a zombie process? This is about the most common Linux question asked in an interview. A zombie process is a process that has terminated but has not yet been cleaned up: most commonly it's a child process that has exited while its parent has not yet read its exit status, so its entry remains in the process table. They'll ask how to check the top CPU-consuming processes, and you can use the top command for that; top gives you a very nice view of the entire Linux process table.

Looking at the deployment-related questions, which is one of the most important areas used to gauge candidates, one of the questions that comes up is: what is a blue-green deployment? A blue-green deployment is one in which you have X resources running your application, say 10 servers in a web farm. You take half of those out of the actual production rotation and deploy the new code on them; in the meantime the other half keeps serving production traffic and is not hindered or hampered in any way. Once the deployment completes on the first half, you put those servers back in, wait for them to come back into service, then take the other half out of the load and deploy on those. In a blue-green deployment you never let the end user see any downtime; it's an always-up environment.

How do you do a hot deployment? This is really just a rephrasing of the previous question. A hot deployment can be done by having two environments of the same size, redirecting traffic through a load balancer or proxy service to one environment while you deploy on the other, and then redirecting to the first while you deploy on the second. Just like a blue-green deployment, you never show the end user any downtime.

What is your rollback strategy? This is something you need to be very confident about, because every deployment should have a rollback associated with it. If your deployment fails, how do you roll the system back? It has to be linked to a blue-green or hot deployment, so you say that you have a Jenkins job or a script that does the blue-green deployment and, in the middle of it, checks whether the services are up and running, or at the very least whether the HTTP endpoints of your application are responding.

Have you used Jenkins for deployment? Yes, using a couple of strategies: with plugins and with my own scripts. Plugins such as Publish Over SSH were used to deploy code onto the environment, and I also had Ansible, Chef, or Puppet code that did the deployment for me, so we used Jenkins as an orchestrator and Puppet, Chef, and Ansible for the actual deployment to the target service.
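A minimal sketch of the blue-green swap described above, assuming a classic ELB named web-prod and placeholder instance IDs; a real Jenkins job would loop over each half and wait on health checks before proceeding:

    # Take half of the web farm out of the production rotation
    aws elb deregister-instances-from-load-balancer \
        --load-balancer-name web-prod \
        --instances i-0aaa1111 i-0bbb2222

    # ... deploy the new code on the deregistered instances here ...

    # Check instance health before putting them back in service
    aws elb describe-instance-health --load-balancer-name web-prod

    # Return the updated half to the load, then repeat for the other half
    aws elb register-instances-with-load-balancer \
        --load-balancer-name web-prod \
        --instances i-0aaa1111 i-0bbb2222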
One of the other common questions from a deployment perspective is: what Jenkins plugins have you used? You would say you've used the Maven and Gradle plugins, kebechet for testing, PMD for programming mistake detection, Aztek for Ruby on Rails testing, Karma for AngularJS testing, and integration with S3. I've also used the Git plugin for checking out code, the SVN plugin for checking out SVN repositories, and the upstream/downstream plugin for connecting builds. In addition, I've used the archive artifacts plugin and the publish HTML reports plugin to publish test reports.

Have you ever used user data for deployment? You answer yes, I've used it in AWS. When you have instances behind load balancers, we used to specify the deployment script in the user data so that every time a new instance comes up in the auto scaling group it gets the latest code; that latest code is checked out by shell commands specified inside the user data.

Looking at the AWS questions: what is a VPC? A VPC is a slice of the Amazon cloud that they give you to run your resources in. What is the difference between a public and a private subnet? A public subnet is directly accessible from the internet; a private subnet is not accessible from the internet and is only reachable from within the VPC.

What is a reserved instance? A reserved instance is an instance that Amazon reserves for you for a term such as a year, and they give you significant price reductions on it. You can buy it with no-upfront, partial-upfront, or all-upfront payment, and you get discounts of roughly 20 to 60 percent based on the payment type and term. What is the difference between a spot instance and a reserved instance? A spot instance is a bid-based instance: you bid a specific price and are assigned instances based on it, and the moment the spot price rises above your bid, your instance is terminated and the capacity goes to a higher bidder. A reserved instance, on the other hand, is not a biddable instance; you buy it for a specific term.

What is CloudFormation? CloudFormation is an orchestration and infrastructure deployment service that Amazon provides and manages. It takes a JSON template as input, and in that template you describe everything required to build up an environment, from VPCs to servers to buckets, so it's an Amazon-provided service that lets you build entire application stacks from scratch.

Have you used Route 53? Yes, I've used Route 53 for managing DNS. We redirected our domain's DNS to Route 53 using Amazon name servers, and from there in Route 53 we created CNAME entries, A records, MX records, and TXT records and managed the entire DNS. What is the feature of AWS you like best? I like the auto scaling group and the elastic load balancer, because they allow you to scale your application to practically any level.

Coming to configuration management: which configuration management tool have you used? You can say that you've used all three, Chef, Ansible, and Puppet, in one project or another; you have good hands-on experience with each of them and have written Chef cookbooks, Puppet modules, and Ansible playbooks.
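A minimal sketch of the user data approach mentioned above; the repository URL and paths are hypothetical placeholders:

    #!/bin/bash
    # EC2 user data: runs once when a new auto scaling group instance boots,
    # so every fresh instance comes up with the latest code on it.
    yum install -y git
    git clone https://github.com/example-org/example-app.git /opt/example-app
    /opt/example-app/scripts/deploy.sh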
What is the difference between Chef and Ansible? Chef is a Ruby-based tool; it uses Ruby as its main programming language and is open source, just like Ansible. Chef offers both a client-server architecture and a client-only mode, whereas Ansible is simply an agentless tool: there is no agent in Ansible whatsoever, while in Chef there are two methodologies, one agent-based and the other agentless. The other difference is that Ansible uses YAML to define the state of systems in its playbooks, whereas Chef uses Ruby to define the state of systems in its cookbooks, which makes Chef feel more like a programming language; the YAML in Ansible is not really a programming language, it's more of a static, declarative description.

Have you written any cookbooks, modules, or playbooks? Yes, I've used the repositories that each of these configuration management tools provides as well as my own code: for Chef I've used Chef Supermarket cookbooks and also written custom cookbooks; for Puppet I've used modules from Puppet Forge and written my own modules for custom applications; for Ansible I've used community-provided playbooks as well as written my own custom playbooks for provisioning instances on the cloud and managing them.

Now, production support. What is the biggest issue you've faced in a production environment? You can say there was an application we ran on the cloud, and a deployment introduced a logical error in the application which caused errors behind the elastic load balancer. The load balancer was tied to an auto scaling group, so the problem snowballed and we were not able to control it: the application kept scaling out and snowballing the instances. We had to go in and manually freeze the size of the auto scaling group; once the size was frozen, we were able to go into the instances, check the logs, fix the issue, rebuild the AMI, and restart the auto scaling process. That was one of the biggest issues I've faced in production recently.
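A hedged sketch of the "freeze the auto scaling group" step from that incident, assuming a placeholder group name web-asg and a frozen size of 4:

    # Pin the group so it can neither grow nor shrink while we investigate
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --min-size 4 --max-size 4 --desired-capacity 4

    # Optionally suspend launch/terminate activity entirely during the incident
    aws autoscaling suspend-processes \
        --auto-scaling-group-name web-asg \
        --scaling-processes Launch Terminate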
What is your DR (disaster recovery) strategy for a live website? One DR strategy is a DNS-based failover using Route 53: you have an exact replica of your environment, your web servers, your database, your cache. The database may not sync in real time with the DR site, it could be a one-day-behind sync, and traffic is switched to DR automatically through a DNS switch whenever there is downtime in production. The other strategy is to have instances in two availability zones and let your load balancer switch traffic based on the health of those zones, but that is same-region DR; for multi-region DR the best option is the DNS-based switch, and you can also have a proxy layer that routes traffic based on the health of your endpoints.

How do you scale a production web service? If you're on the cloud, you can use an auto scaling group, which scales automatically based on demand. If you don't want to manage instances yourself, you can use Elastic Beanstalk for a managed stack that takes care of the load balancing itself. If your application is already containerized, it's better practice to run it on ECS in the cloud, which lets you scale based on demand behind an application load balancer. If you're in-house, you can scale by having a container management system with some base hosts running multiple services: you keep adding hosts to that cluster and don't have to worry about the scaling part, because hosts can be added on the fly and your Docker cluster, something like Swarm or Kubernetes, takes care of bringing those new machines into the cluster and running your existing containers on them (a minimal Swarm sketch follows below).

Thanks a lot, everyone, that is all for the interview questions. Want to become an expert in cloud computing? Then subscribe to the Simplilearn channel and click here to watch more such videos, and get certified in cloud computing.
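A minimal Docker Swarm sketch of that container scaling answer; the service name web and the replica count are hypothetical:

    # On an existing manager node: print the join command for new worker machines
    docker swarm join-token worker

    # On each newly added host, run the printed command, for example:
    # docker swarm join --token <worker-token> <manager-host>:2377

    # Back on the manager: spread an existing service across the enlarged cluster
    docker service scale web=20
    docker node ls   # confirm the new workers are Ready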
Info
Channel: Simplilearn
Views: 196,845
Rating: 4.8774967 out of 5
Keywords: devops interview questions, devops interview questions and answers, devops interview questions and answers for experienced, Jenkins interview questions, ansible interview questions, chef interview questions, puppet interview questions, git interview questions, docker interview questions, devops engineer interview, devops engineer interview questions and answers, simplilearn, devops questions and answers, devops training, devops tutorial, Simplilearn DevOps
Id: WxjJlYFIWtI
Length: 15min 0sec (900 seconds)
Published: Fri Sep 01 2017