DevOps Interview Questions and Answers (From a Sr. Cloud Architect) | Moderate to Advanced

Video Statistics and Information

Captions
Hello guys and girls, Raj here, back with another video. This is the beginning of March, all the companies are hiring in full force for cloud jobs, and in cloud interviews DevOps is a super important topic. So in this week's video we are going to go over some DevOps interview questions, from moderate to advanced, more suitable for the architect level or lead DevOps engineer level. The questions we are going to cover today are: What is the difference between continuous delivery and continuous deployment? Explain a DevOps pipeline that you have implemented (we are going to do that by explaining a Kubernetes pipeline). Tell me about a challenge you faced with DevOps. How do you implement unit testing in DevOps? How do you implement security in your DevOps pipeline? Is infrastructure as code necessary for a DevOps pipeline? And finally, how will you implement changes in multiple AWS accounts? As always, I have given the timestamps for your viewing convenience. All right, let's get started.

Question one: what is the difference between continuous delivery and continuous deployment? DevOps has these four main phases: source, build, test, and prod. Source is when you check in your source code. Build is when you compile your code, run unit tests, and create the artifact, such as a jar file or a zip file, that is ready to be deployed. Then you test: you do integration testing with other systems, load testing, UI testing, penetration testing, etc. And then you deploy the artifact to production. The source and build phases are termed continuous integration: the code is compiled, unit testing is done, and the artifact is created, ready to be deployed. Continuous delivery is the process of deploying that artifact to prod with a manual intervention. You guys and girls see the checkpoint sign: once all the unit testing and integration testing has been done, and before it gets deployed to production, a human comes and checks if everything is okay; if everything is fine, he or she clicks a button and the package gets deployed into production. With continuous deployment, as you can see, there is no checkpoint: continuous deployment is the process of deploying the artifact without any human intervention. If all the automated testing passes, it gets automatically deployed; if the automated testing fails, it stops and sends a notification, and then a human comes and takes a look.

All right, next: explain a DevOps pipeline that you have implemented. When they ask that, if you can give a Kubernetes example or a serverless example, that's impressive, because Kubernetes is pretty hot right now, so in this video I'm going to go over a Kubernetes pipeline. The big picture for a Kubernetes pipeline is: the developer checks the code into a code repository, the code is dockerized and a container image is created from it, that container image is stored in a container image repository, and then it is deployed into a Kubernetes cluster. The process up to the point where the container image is stored in a repository is the build process, and the next part is the deployment: basically continuous integration, and then either continuous delivery or continuous deployment. If we want to take it one step further: the code repo could be GitHub; to dockerize your application you need some tool to run commands; the container image can be stored in a container image repository; and in the deployment phase you again need some tool to run commands to deploy those images into the Kubernetes cluster, and those commands could be Helm commands or kubectl commands. If we take these concepts one step further, this is the actual pipeline: the code gets checked into, let's say, Git; some of the tools where you can run the dockerize commands could be GitLab, CodeBuild, or Jenkins; the image could be stored in, let's say, ECR (Elastic Container Registry) or the tried-and-tested Docker Hub; the deployment could again be done via Jenkins, CodeDeploy, or GitLab, basically any DevOps tool that can run helm or kubectl commands; and it could be deployed to EKS (Elastic Kubernetes Service) in AWS, or really any cloud; the flow will remain the same. On this I actually have a separate video where I show how you test your code on your local machine, dockerize it, create a container image on your local machine, test that container locally, and then migrate it to the cloud, expose it to the outside world, scale it, etc., basically end to end. I'll give the link up top as well; check it out if you are interested.
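To make the build, push, and deploy steps concrete, here is a minimal sketch, assuming hypothetical names (an ECR registry URI, an image called my-app, a local Helm chart in ./chart) and that the docker, aws, and helm CLIs are installed and authenticated. A real pipeline would run these same commands as stages in Jenkins, CodeBuild, or GitLab rather than from one script.

```python
import subprocess

# Hypothetical names for illustration; substitute your own account,
# region, repository, and chart.
REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com"  # placeholder
IMAGE = f"{REGISTRY}/my-app"
TAG = "1.0.0"

def run(*cmd):
    """Run a pipeline step, failing the build if the command fails."""
    subprocess.run(cmd, check=True)

# Build: dockerize the checked-in code into a container image.
run("docker", "build", "-t", f"{IMAGE}:{TAG}", ".")

# Log in to ECR, then push the image to the container image repository.
login = subprocess.run(
    ["aws", "ecr", "get-login-password", "--region", "us-east-1"],
    check=True, capture_output=True, text=True,
)
subprocess.run(
    ["docker", "login", "--username", "AWS", "--password-stdin", REGISTRY],
    input=login.stdout, text=True, check=True,
)
run("docker", "push", f"{IMAGE}:{TAG}")

# Deploy: roll the new image out to the Kubernetes (e.g. EKS) cluster.
run("helm", "upgrade", "--install", "my-app", "./chart",
    "--set", f"image.repository={IMAGE}", "--set", f"image.tag={TAG}")
```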
Moving forward: tell me about a challenge you faced in DevOps. What the interviewer is trying to see is not just the challenge; the interviewer is looking at how you solved it. One of the challenges I faced was that we were running our pipeline using Jenkins, and Jenkins, in AWS or any cloud, you generally have to run in a VM. There is a primary node, and then, depending on the workload, other secondary nodes get spun up. If a lot of teams are deploying changes or checking in code at the same time, sometimes it's very slow to scale, and sometimes it cannot keep up, because the primary node has a fixed size, and upgrading it to a bigger size requires manual intervention, or takes some time even if you want to do it in an automated way. So there is a scaling challenge. We also faced idle cost: because Jenkins runs on an EC2 server, even if there is no deployment going on, you still have to pay for that primary EC2 node. It has a steep learning curve, and it's also hard to troubleshoot, because there are so many Jenkins plugins; if the team has just adopted a new plugin, they have to get used to how that plugin does error reporting and all that stuff. So how did I solve it? I used CodePipeline for newer projects. CodePipeline scales automatically; it doesn't use any EC2 or any server, it is serverless, so it is pay as you go; and I could use all my existing AWS learning, such as checking metrics in CloudWatch, CloudTrail, etc. It also integrates really well with other AWS services: if I want to call CloudFormation, invoke a Lambda, or use some other AWS service, it supports all that out of the box. If you don't want to mention CodePipeline because it's cloud specific, you can also mention GitLab Runner. GitLab Runner also solves some of the challenges that Jenkins has; in fact, GitLab Runner can now run on Fargate, so you don't have to pay for the EC2, and it scales pretty well.

Moving on: how do you implement unit testing in DevOps? For this you use a unit testing framework. How you do it is: you create code whose only purpose is testing, and that code executes the code you are trying to deploy. So if you have created a jar file to be deployed, but it hasn't been deployed yet, you execute your code with predefined inputs, and you know what output you should get for those predefined inputs. Once you get the output, you match the actual response with the expected response. You can also control the percentage of error: let's say you call your code with 10 test cases, and 80 percent of the test cases must pass; so if 8 test cases pass, you could mark it as complete, but if fewer than 8 pass, you error it out. And where do you implement this code? One way I have done it is to implement a Lambda in the pipeline; this Lambda, using the unit testing framework, calls your actual code, compares the responses, and either errors out or passes. If you want to see this in action, I also have a separate video where I show, with an actual demo, how I implemented this in the pipeline; I'm going to give the link up top, feel free to check it out as well.
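Here is a minimal sketch of that Lambda-style test gate, under the assumptions above: call_my_code is a hypothetical stand-in for however you actually invoke the artifact under test (an HTTP call, a Lambda invoke, a subprocess running the jar, etc.), and the test cases and threshold are placeholders.

```python
# Minimal sketch of the unit-test gate described above.

def call_my_code(x):
    # Placeholder for the real code under test; here it just doubles x.
    return x * 2

# Predefined inputs paired with the outputs we expect for them.
TEST_CASES = [(i, i * 2) for i in range(10)]
PASS_THRESHOLD = 0.8  # e.g. 8 of 10 cases must pass

def lambda_handler(event, context):
    passed = 0
    for given_input, expected in TEST_CASES:
        actual = call_my_code(given_input)
        if actual == expected:  # match actual response to expected response
            passed += 1
    rate = passed / len(TEST_CASES)
    if rate < PASS_THRESHOLD:
        # Raising here fails the Lambda, which fails this pipeline stage.
        raise Exception(f"Only {passed}/{len(TEST_CASES)} test cases passed")
    return {"passed": passed, "rate": rate}
```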
Moving on to a very important topic: how do you implement security in your DevOps pipeline? This goes into DevSecOps territory: DevOps plus security, that is, securing the DevOps life cycle. The first thing you have to implement is authorization, as in: does the person who is submitting the pipeline have the authority to deploy the application? Generally how we implement it is: when you check in code and a DevOps job gets submitted, we know who submitted or checked in the code, and we know what application that code is being deployed to. We have an LDAP group for that application, and the user should be part of that LDAP group; that's how we check whether the user has permission or not. You can also implement it using IAM user groups; there are different ways. If the user is in the group, the pipeline proceeds fine; if the user who submitted the code or the DevOps pipeline is not in that LDAP group or IAM user group, it is rejected and the pipeline fails.

Another part of implementing security in DevOps is that you have to scan your code for vulnerabilities. You should scan CloudFormation for poor security: for example, if you are creating a security group in the CloudFormation template with everything wide open, anyone can come and access your application, and the scan should catch that. You should scan code for vulnerable libraries, SQL injection, etc. (if you're using some library in your code that is not secure), and check against the OWASP Top 10 vulnerabilities. You should also scan the Docker container repository: scan the image and see if the image is secure. And you can also scan the running code and containers: when the code or the container is running in production, you should scan that too. It will be impressive if you mention tool names. For scanning CloudFormation you can use cfn-nag, an open-source tool; there are also many other tools you can google, one of them is Parliament. For scanning code for vulnerable libraries you can use Fortify. For scanning the container repository: if you are using ECR, ECR gives you this feature, and you can also use tools like Sysdig, Falco, etc. For scanning running containers you can use Twistlock, and for scanning running code you could use Black Duck; you can google it, there are many other tools. Finally, and this again is where you can impress the interviewer: DevSecOps covers general security best practices, but once you've said all that, mention that if your application needs even stronger security compliance, such as FedRAMP High, HIPAA, SOC, etc., you can appropriately pick GovCloud instead of commercial cloud, and you can pick and choose services that are compatible with the security compliance requirements. I'm going to give a link in the description that you can check out, showing for AWS which services have which compliance, like which AWS DevOps services are FedRAMP High, HIPAA, etc. compliant. You can give a couple of these examples if you are interviewing at a company with very high security requirements. Also, it's a good idea to explain with one actual example: if you are interviewing for, say, a Kubernetes architect role, you don't have to go through the generic code-scanning story; you can explain with one concrete example.
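As a sketch of the authorization check, here is the IAM user group variant of the LDAP idea above, using boto3. The my-app-deployers naming convention and the user name are hypothetical; note also that list_groups_for_user is paginated, which a sketch this small ignores.

```python
import boto3

def is_authorized(user_name: str, application: str) -> bool:
    """Check that the user who triggered the pipeline belongs to the
    group that owns this application (IAM variant of the LDAP check)."""
    iam = boto3.client("iam")
    required_group = f"{application}-deployers"  # hypothetical convention
    groups = iam.list_groups_for_user(UserName=user_name)["Groups"]
    return any(g["GroupName"] == required_group for g in groups)

# In the pipeline: fail fast before any deploy step runs.
if not is_authorized("jane.doe", "my-app"):
    raise PermissionError("User is not authorized to deploy this application")
```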
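And for the template scanning, a minimal sketch of a pipeline step that wraps the open-source cfn-nag tool mentioned above. It assumes cfn_nag_scan is installed separately (it is a Ruby gem) and that, as I understand its behavior, it exits non-zero when it finds failing violations; the template path is a placeholder.

```python
import subprocess
import sys

# Scan the CloudFormation template for poor security, such as security
# groups open to the world, and fail the pipeline on any failing finding.
result = subprocess.run(
    ["cfn_nag_scan", "--input-path", "template.yaml"],
    capture_output=True, text=True,
)
print(result.stdout)  # the findings report, kept in the pipeline logs
if result.returncode != 0:
    sys.exit("CloudFormation template failed the cfn-nag security scan")
```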
Moving on: is infrastructure as code necessary for DevOps pipelines? This is a trick question. An example of infrastructure as code is AWS CloudFormation: you can run a CloudFormation template from a DevOps tool, or yourself, and it spins up AWS infrastructure. There are other ways to spin up infrastructure as well; for example, you can run AWS CLI commands. Basically, in your DevOps pipeline you can have some plugin, or a compute, or a tool that can run AWS CLI commands to go spin up AWS resources. So infrastructure as code is not absolutely necessary, but it makes your life much easier, and this is the part the interviewer is trying to hear from you. Infrastructure as code makes it faster. Why? Because it's easier to change and version control an infrastructure-as-code template than commands: you can check the template into Git, revision it, and run tools on it, which you can't do as easily with plain AWS CLI commands. Also, with infrastructure-as-code tools such as CloudFormation or the CDK, AWS gives you controls and monitoring where you can go and track changes to the infrastructure: before you deploy a CloudFormation template, you can run a change set and see what changes will be made, right before you run the template. There are a lot of out-of-the-box features that you get with infrastructure as code that you do not get by just running AWS CLI commands. Also, all DevOps tools support infrastructure as code; any tool you name, such as CodePipeline, Jenkins, GitLab, Flux, Spinnaker, etc., they all support it. For running the CLI, you need to code the AWS CLI commands into a plugin, or maybe into a Lambda or something, and put it in your pipeline, so there is additional management overhead and headache in running CLI commands in a DevOps pipeline. Also try to understand why the interviewer is asking this: is the interviewer thinking that if they use CloudFormation, for example, they might get locked into one cloud? So mention that you can also use tools like Terraform, which is infrastructure as code that can help you deploy infrastructure in any cloud, so you don't get locked into a particular cloud, and you can use Terraform from any DevOps tool, along with all the advantages I just went through.

All right, moving on: how will you implement changes in multiple AWS accounts? Again, this is a really realistic question: if you haven't worked on an actual DevOps project, you will never have faced this, because if you are just learning DevOps you are doing everything in one account. But in reality, enterprises have multiple production or test accounts, and you cannot just go and implement changes in each account separately; it is impossible. So how do you do this using a DevOps pipeline? You use a hub-and-spoke model. It looks like a bicycle tire, with the hub as the central account and the spokes in different child accounts. The central account is called the primary, master, or shared-services account. This master account assumes roles in all the child accounts and then submits CloudFormation stacks, so all the DevOps tooling and administration needs to be done only in this master account. You submit, let's say, a CloudFormation stack once in this master account using the DevOps pipeline, and the master account can assume a role in the accounts you want to deploy to and submit CloudFormation stacks into multiple accounts. So you don't have to define the same DevOps pipeline in all these separate accounts, and you don't have to go and submit to each separately. This is a tricky one, but for those of you who are working on actual enterprise DevOps projects, you know that's how you implement it.
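Here is a minimal sketch of that hub-and-spoke fan-out using boto3, which also shows the change-set preview from the previous question: from the master account, assume a role in each child account and submit a CloudFormation change set there. The role name, stack name, and account IDs are hypothetical placeholders, and it assumes the stack already exists in each child account (hence ChangeSetType="UPDATE").

```python
import boto3

def deploy_to_child_account(account_id: str, template_body: str):
    """From the master (shared-services) account, assume a role in a
    child account and submit a CloudFormation change set there."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/DevopsDeployRole",  # placeholder
        RoleSessionName="cross-account-deploy",
    )["Credentials"]

    # CloudFormation client scoped to the child account via the temp creds.
    cfn = boto3.client(
        "cloudformation",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Create a change set first, so the changes can be reviewed before they run.
    cfn.create_change_set(
        StackName="my-app-stack",
        TemplateBody=template_body,
        ChangeSetName="pipeline-change-set",
        ChangeSetType="UPDATE",
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    cfn.get_waiter("change_set_create_complete").wait(
        StackName="my-app-stack", ChangeSetName="pipeline-change-set"
    )
    cfn.execute_change_set(
        StackName="my-app-stack", ChangeSetName="pipeline-change-set"
    )

# One pipeline run in the master account fans out to every child account.
for child in ["111111111111", "222222222222"]:  # placeholder account IDs
    deploy_to_child_account(child, open("template.yaml").read())
```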
All right, so that's all the questions and answers. Also coming up: I am preparing myself for the Kubernetes certifications, so I plan to do a lot of videos, hands-on labs, and practice tests for those certifications, all on YouTube, so please like and subscribe if you want to be part of that. Also, if you liked this video, smash that like button, click subscribe, and let me know in the comment section if there are any other DevOps interview questions that you want me to cover; all the likes and subscribes really help this channel grow. We are almost at 10,000 subscribers, and I can't wait till we cross that milestone. All right, with that, guys and girls, I end this video. I'll see you in the next video. Bye!
Info
Channel: Agent of Change
Views: 18,091
Rating: 4.9684043 out of 5
Id: tBlj1J11f88
Length: 20min 41sec (1241 seconds)
Published: Sun Mar 14 2021