GCP Interview Questions | GCP Interview Questions and Answers for Experienced | GCP DevOps Interview

Captions
Hey folks, my name is Raish, and welcome back to another video in the series of DevOps interviews. In today's interview I have a profile with me: this person has around 5+ years of experience, with 2 years in DevOps and around 3+ years in cloud, and is Kubernetes certified. I have divided my interview into three or four parts; it is one hour in total. The first part is the project: I ask him what exactly he does in his project and what his roles and responsibilities are, and based on that I ask two or three situation-based questions. This goes on for 5 to 10 minutes. The second part is a few more situation-based questions, another 5 to 10 minutes, on SonarQube: why do we use SonarQube, and what exactly are the nitty-gritties in it. The third part is Docker: around 5 to 10 minutes about what exactly you do in Docker, with different questions on that. The fourth part is the major one; it took a bit less than 20 minutes, but it is the crux of this interview. I went for GCP this time, since this person had been working on GCP for more than a year, I guess, and was quite confident in it. I gave him three basic scenarios, the kind you generally face in certifications and the kind someone asks you in an interview, so do take a look at the GCP part; it went on for 15 to 20 minutes, if I am not wrong. At the end I asked a few Kubernetes-based questions, and that was the whole interview for today, close to one hour. All right, just before moving further, I
would like to request that you kindly subscribe to the channel, because it really motivates me to create more content and more interviews like this, and it takes a lot of time to record these videos, work on them, edit, and do other stuff. So kindly subscribe, and let's dive right into the video and get started.

Okay, I started my IT journey in 2017 as a sysadmin. I stayed at the first company for about two and a half years, till the end of 2019, then I moved to a more DevOps role starting from 2020, and I have been moving from company to company to gain experience and to get to know other stacks. So I'm basically DevOps from 2020, but I have also been freelancing to gain some more experience, since I don't have the years of experience that some people would like.

Okay, so your total years of experience is five plus, right?

Yes.

And in DevOps, specifically from 2020, so that would be three years till now?

Yes, exactly.

Okay, and in cloud?

Cloud would mainly be from the time I started DevOps, so that would be three years as well.

Okay, so can you walk me through the project that you're working on right now, and what are your roles and responsibilities in it?

Okay, so in my current role I'm a senior DevOps engineer. Mainly we are doing support currently, because we deployed the infrastructure a while back. The last big task I worked on was migrating our tools, including Jenkins, from an older cluster that had issues with volumes to a new cluster that I deployed with Terraform. I moved the Jenkins deployment over, and I also took the old data volume, because we couldn't afford to lose the data we had over there. It also included moving our Nexus deployment and SonarQube. Starting from next week I'm going to work on the Prometheus side of it, because we need that migrated over to the new cluster as well. The role also includes a bit of mentoring, because we have a junior DevOps engineer, so part of my job is to mentor them.

Okay, and what exactly do you do in your project, can you explain that?

Okay, so it's not a project per se, it's a day-by-day job. My responsibilities include support, doing some R&D, and working on our infrastructure. The current big task I have is migrating our tools from the old Kubernetes cluster to the newly deployed one.

Your voice is breaking a lot. So, in your last work, can you explain the application that you folks were working on, how you were deploying it, and what CI/CD was happening?

Okay, so currently all of our stack is Java. We have our repos in GitHub, all of our pipelines are in GitHub Actions, and the deployment part is done through Jenkins, because that was the way they started. They were previously keeping the repos and pipelines in GitLab, but we migrated over to GitHub about three months ago. So all of our pipelines are through GitHub Actions, and the deployment part is through Jenkins and Helm charts.

Okay, so here is a situation-based question. Let's say you are the only DevOps engineer in your team, and the developers are writing code in Java. I want you to create a pipeline end to end: the first step should be checking out the code, and it should go to production with a proper amount of testing; security principles have to be invoked, with static testing and dynamic testing if you want. How will you make sure this pipeline goes end to end and everything works out? It's a Java-based application, could be a Spring Boot application, and it runs on the web, so the user who is accessing the
application, or the customer, is accessing it from the web. That's all. I hope you got it; I can repeat it if you want.

Can you repeat it, please?

Again: basically it's a Java-based application, and it's web-based, so people access it through the web. You have to create a pipeline in such a way that it goes till production; an end-to-end flow has to be done. You can have multiple environments in it, and the code has to be compiled and has to go to an artifactory or something, whatever you want to push. If you want to use Kubernetes, Docker, or any other technology, you can do that. I want to know each and every stage: what happens in the first stage, second stage, third stage, till it goes to production. That's what you have to design, so let me know.

Okay, so will it be a single pipeline?

Yeah, a single-branch pipeline.

Okay, so if we have a single-branch pipeline and we need to deploy it all the way to production, I guess the first step would be to test the code with SonarQube.

Sorry, the first step would be to test the code with SonarQube? Then why are we having the first step as SonarQube and not the checkout and the compilation?

Just a second. In our case we do the building first: we build the code and then we check it with Sonar. No, the building and the SonarQube check will be in a single step, as we can have the quality gates check the requirements that we might put in place, so that the code is clean and up to the standards that we have. That will be the first part. The second part will be to build it into a Docker container, and then we can push it to our repo; for this environment, let's say it's over on GCP. Then we can deploy over to the dev cluster. For this I would say we can have a manual approval step, so that the QC team can promote it if they see that the dev environment is up to standard, sorry, if their tests pass in the dev environment. Then we can promote to the staging environment, and the final tests of the QC team can be done there. If they see that everything is okay on their end, they can promote it manually over to the production side.

Okay, so where is the testing happening here?

The testing, basically the QC team can do the testing on their part, or we can integrate some testing steps into our pipeline.

Okay, what sort of testing will you do?

Basically we can do a smoke test after the deployment, to check that the build and the application are basically running.

Okay, so when you were checking the code quality using SonarQube in those first two steps, what kind of results does your Sonar give?

SonarQube will basically either pass or fail. If it fails, let's say on the quality gate that we defined, it gives us a URL that the dev can click on, and then he will see the results and what exactly the issue behind the failure is.

Okay, so what exactly are these quality gates?

The quality gates are basically where we set the standards that we need in our code base. Actually, I don't know exactly what the steps in it are, because the lead engineer on the backend team is the one responsible for the standards he places in it. So I don't know exactly what they do in it, but I know that when it breaks, the backend team has to check why it failed.
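The flow the candidate describes (build plus SonarQube quality gate in one step, Docker image build and push, then gated promotion between environments) can be sketched as a GitHub Actions workflow. This is only an illustrative sketch, not the interviewee's actual pipeline; the project key, registry path, chart location, and secret names are all hypothetical:

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  build-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build and SonarQube analysis in one step, so the quality gate can
      # fail the pipeline before anything is published
      - name: Build and analyze
        run: mvn -B verify sonar:sonar -Dsonar.projectKey=my-app   # hypothetical key
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
      # Package the app as a Docker image and push it to a private registry
      - name: Build and push image
        run: |
          docker build -t europe-docker.pkg.dev/my-proj/repo/my-app:${{ github.sha }} .
          docker push europe-docker.pkg.dev/my-proj/repo/my-app:${{ github.sha }}

  deploy-dev:
    needs: build-scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to dev cluster via Helm
        run: helm upgrade --install my-app ./chart --set image.tag=${{ github.sha }}

  deploy-staging:
    needs: deploy-dev
    environment: staging   # manual approval configured on this environment
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: helm upgrade --install my-app ./chart --set image.tag=${{ github.sha }}
```

The manual promotion the candidate mentions maps naturally onto GitHub Actions environments with required reviewers, which pause a job until someone approves it.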
Then they go over the code again, fix it, and the pipeline passes the quality gate check.

Okay, so you were saying that your build sometimes passes and sometimes fails because of those quality gates, right? What is the criterion for passing or failing? When does it pass, and when does it fail?

Okay, it should basically pass all the time, but when it fails, it is based on the standards they put in the quality gate. Let me remember one of the times it failed, and the reason. I think once it failed because of some duplication in the code base: the duplication can't be over a certain percentage, and since it passed that percentage threshold, they had to redo their code to avoid the duplication.

Okay, do you have any idea what exactly code coverage is in Sonar?

The term, yes, but what exactly it does, no.

Okay, so by the definition, what do you understand?

By the definition, I think code coverage can be related to there being tests that cover a lot of the functionality in the code, the functions and the classes. I think it can be like that, but again, I don't know the exact meaning of the term.

Okay, no problem. So I can see that you have worked on Docker as well, right?

Yes.

Okay, so I'll start from the very basics: why do we need Docker? Or, to make it very simple, what was the problem that Docker was trying to resolve, and has resolved?

Okay, so the main problem would be separating environments. Say we have an application that is Node or Java: if we have a single VM or server, the thing is, if we want to install several
versions of Java or Node, and we would need to compile each project for a specific version, it would be a hassle. So basically Docker has given us a way to compartmentalize each service along with the dependencies it needs, and it's portable, so the Docker container can run on any machine.

Okay, have you ever worked on Docker Swarm?

I think once or twice before.

Okay, why do we need Docker Swarm?

Docker Swarm, basically we have several nodes that join the swarm cluster, so I think it's like Kubernetes: it's mainly for scaling purposes.

Okay, so have you ever written a Docker Compose file?

Yes.

Okay, did you write it in YAML or something else?

YAML.

Okay, so can it be written in JSON?

Actually, I haven't checked whether it would be okay to put it in JSON, but I don't think so, because the default file is docker-compose.yml. So I don't know exactly if it's possible, but I don't think it can be done.

Okay, so if you exit the Docker container, will you lose the data?

If we don't have an attached volume, I think yes, the data would be destroyed.

And how do you attach a volume, do you know the command?

I think if we do a docker run, we can pass -v and then give it the path on the host and then the path in the container.

Okay, is this your first round or second round here?

No, this is my first interview.

Okay. What do you understand by the term hypervisor?

Can you repeat that, sorry?

Hypervisor. Heard about it?

Yes.

What exactly is it?

A hypervisor is software that can take the bare-metal resources and divide them across several smaller VMs. QEMU and, let's say, ESXi, these are hypervisors.
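The docker-compose file mentioned above is indeed written in YAML, with docker-compose.yml as the default filename. A minimal sketch, with a made-up service name and a named volume so the data outlives any single container:

```yaml
# docker-compose.yml
services:
  app:
    image: my-app:1.0           # hypothetical image name
    ports:
      - "8080:8080"
    volumes:
      - app-data:/var/lib/app   # named volume: data survives container exit

volumes:
  app-data:
```

Running `docker compose up -d` from the file's directory starts the service; the named volume answers the data-loss question above, since removing the container leaves `app-data` intact.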
Okay, so if we talk about the Docker architecture, can you throw more light on that? What exactly do we know as the client, what is the Docker host, what is the registry, what are the components it touches?

Okay, so if I understood this right: Docker is an application that we install, and there is a daemon that is running. We can build Docker images, give an image a tag, and push it to our registry, which can be a public or a private one. Then we can pull that tag from the repo and deploy a container from it in any environment that has Docker running, or in a Kubernetes cluster. I don't know if this answers your question or not.

Okay, so when your Docker client talks to the Docker host, what commands do you generally write so that the client talks to the host?

So basically, at the terminal I can do docker ps to check the Docker containers that are currently up.

docker ps will show you the number of what, containers or processes?

docker ps will show me the number of containers that are up. If there is no container it shows nothing, and if there are containers it shows their number and details.

I'm talking about the point where the client makes a call to the Docker host. There are three commands for that; you can tell me any one. You know we have a Docker client and a Docker host, and then it goes to the registry, which is on the web. Both your client and host would probably be on your local machine, right? So there is something where the client makes a call to the Docker host, there is a Docker daemon over there, it gets the call and executes the process. Any light on that?

You don't mean docker login? Then no, I don't think I know this.

Okay, the three commands are docker build, pull, and run. Whenever you run them, they make a call to the Docker daemon, and the daemon looks for containers, looks for images, and so on. The daemon is also responsible for talking to the registry, and the registry holds a lot of stuff: your nginx, your Ubuntu, your CentOS, or your OpenStack images. That's how it works, generally.

Okay, so basically we have build, pull, push, and run.

Okay, no problem. So, have you ever done any Docker monitoring, in production or anywhere else?

We did. We used Prometheus and the node exporters, and, what was it called, cAdvisor. There was an exporter for the Docker daemon that would show us the number of containers that are up and the resources they are currently using. And for the Kubernetes cluster we used Prometheus and the node exporters, and it would all show up on the Grafana dashboard.

Okay, so in your current project, how are you using Docker?

How are we using Docker? Okay, in all of our pipelines there is a step that builds the Docker image, and then it is pushed to our private Docker registry, and then the specific tag gets deployed in the specified cluster, whether that is dev, testing, staging, or production.

Okay, so who is writing these Dockerfiles? Is anyone writing them, or are you?

Actually, the Dockerfiles are basically all the same, and they have been there since before I started with this company.
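To recap the client commands discussed above (build, pull, run, plus the -v volume flag from earlier), the command-line flow looks roughly like this; the image name and paths are made up for illustration:

```shell
# Build an image from a Dockerfile and tag it: the client sends the build
# context to the Docker daemon, which produces the image
docker build -t registry.example.com/my-app:1.0 .

# Push the tagged image to a registry; pull fetches it on another host
docker push registry.example.com/my-app:1.0
docker pull registry.example.com/my-app:1.0

# Run a container, attaching a host directory as a volume so data
# survives when the container exits
docker run -d -v /srv/app-data:/var/lib/app registry.example.com/my-app:1.0

# List running containers (empty output if none are up)
docker ps
```

Each of build, pull, and run is a client call that the Docker daemon actually executes, which is the point the interviewer was driving at.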
They are basic: from the top there is a base OS image that they are all sharing, and then for each specific service there are some ENVs that are set. They are all basically the same Dockerfile, because the services are all in the same language, they are all Java.

Okay, so you have worked on GCP a lot, I can see from your resume, right?

Yes.

Okay, so we'll start with GCP now. In the world of cloud computing we have four types of cloud, right? Can you name all of them and say what exactly they are?

I know public, private, hybrid, those three. There is one more... on-prem, I think? No, on-prem would be the same as private. Actually, I don't think I know the fourth one.

Okay, that's the community cloud, basically, but that's fine. Can you just walk me through what exactly public, private, and hybrid are?

Okay, so public is like when we use GCP services and then pay for the resources or the services. The private cloud is basically when we use our own servers to deploy our own cloud, which is not accessible to the general public; or we can use something like Nutanix, I think they also have a private cloud offering.

Okay. And the hybrid?

Hybrid would be if we are using both.

You mean the combination?

Yes, a combination.

Can you give an example of it?

Okay, so let's say we have our databases in our data center, and they are connected over a private VPC to the GCP instances, or to EC2 instances over on AWS, and they are communicating over a private IP address.

Okay. So basically, community cloud is when multiple companies are able to share the same online storage space. For example, there are three or four companies sharing their own space, and you can use that, maybe for some PoCs or something. They might or might not be secure, that totally depends. So yeah, that's one idea of it.

Okay, with respect to GCP, what did you find that is different from AWS, Azure, and the other cloud service providers?

Basically, the IAM part is very different from AWS. I think because I started with AWS, it was kind of easier there, so when I tried to do the same thing over in GCP, it had a different meaning, a different way of being implemented. There is a user and there is a service account, and we give it specific permissions, and there is a key file that we can authenticate with. That was the first thing I found to be the big difference, in the way they did IAM there.

Okay, so while working on GCP, have you heard about Google App Engine?

Yes, I think so, but I haven't gotten to use it yet. From my understanding, we can give it a code base and then it will deploy it on its own, and we don't have to manage the underlying infrastructure at all. I think that's what it does.

Okay. So basically, with Google App Engine you get the ability to immediately run your code: whatever you are writing, you can run on it. And yes, you said it right, it's basically serverless, and it ensures that your app is constantly accessible to the user.
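Going back to the IAM difference described above: the service-account-plus-key-file pattern the candidate mentions can be illustrated with gcloud. The project name, account name, and role here are hypothetical, a sketch rather than the interviewee's actual setup:

```shell
# Create a service account in a (hypothetical) project
gcloud iam service-accounts create deployer --project=my-proj

# Bind a specific role to it, e.g. permission to manage GKE workloads
gcloud projects add-iam-policy-binding my-proj \
  --member="serviceAccount:deployer@my-proj.iam.gserviceaccount.com" \
  --role="roles/container.developer"

# Download a key file and authenticate with it, as mentioned above
gcloud iam service-accounts keys create key.json \
  --iam-account=deployer@my-proj.iam.gserviceaccount.com
gcloud auth activate-service-account --key-file=key.json
```

In AWS terms the rough analogue would be an IAM role with an attached policy; the key-file-based authentication flow is the part the candidate flags as distinctly GCP-flavored.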
The idea behind it is that GCP will handle all of the management of this: GCP has a fleet of servers, they provide it, and that's what happens.

Okay, so what is the difference between IaaS, PaaS, and SaaS, and can you give me an example of each?

Can you repeat the question, sorry?

IaaS, PaaS, and SaaS. Can you give me an example?

Okay. Infrastructure as a service is like in Google when we have the VMs: we create them and deploy our services onto them, but we are responsible for that kind of management. That is IaaS. SaaS, or software as a service, is like Zoom: we have a service that we consume, so we aren't bothered by anything else. Platform as a service is like Google App Engine, where we only give them the code and the deployment part is automated; we don't have to manage it.

Okay, so if I talk about databases, what category do they come under?

Databases, I think, would be SaaS... SaaS, because we don't have to manage them. No, we do have to manage the users and the permissions and all that, so my guess is it would be PaaS, not SaaS.

Okay. Have you ever looked into GCP DevOps, or just GCP?

No, GCP DevOps, no, I haven't encountered it.

Okay, no problem. By any chance, have you heard about something known as Stackdriver?

Stackdriver, I think it relates to the logging part of GCP, because we had a task, it was in Q3 of 2022, where we had to deploy a Helm chart for Stackdriver so that we could get the metrics over from GCP. But I don't think I know any more information about Stackdriver, either.

Okay, no problem.
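Deploying a metrics stack via a Helm chart, as mentioned for the Stackdriver task, generally looks like the commands below. Note this shows the community Prometheus stack rather than whatever exact chart the candidate's team used:

```shell
# Add the community chart repo and install the Prometheus stack, which
# bundles Prometheus, the node exporters, and Grafana in one release
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```

This is the same Prometheus / node-exporter / Grafana combination described in the Docker monitoring answer earlier.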
Okay, consider a scenario. I think you are already Kubernetes certified, so this would be related to GKE. There are multiple applications, but there is a main application written in any language, let's say Node.js. As the DevOps engineer you have deployed it to Google Kubernetes Engine (GKE), and it is going to production. Now, this application is making HTTP requests to the dependent applications; I mean, there are other applications it depends on. You want to anticipate which dependent application might cause a performance issue. How will you do that?

Okay, so, can you repeat the question?

Okay. There is an application written in Node.js, and it is deployed on Google Kubernetes Engine. This application is making requests to dependent applications, and these requests are HTTP in nature. You want to figure out which dependent application can cause a performance issue. How will you do that? What would be your strategy as a DevOps engineer?

I would like to anticipate which service will be the bottleneck for the application, mainly?

Yeah, you understood the question correctly. How would you figure that out?

I think there are several ways of doing this. First I would set a resource limit for the dependent apps, and then we can do testing: we stress-test each of the dependent apps, collect the metrics, and see which one of them causes a spike in resources. Then we would increase the replica count for that pod or that service. This way we would basically understand that if this service got requests, or too many of them, it would cause a spike on our side. I don't know if this answers your question.

Okay, so the thing is, I asked you about Stackdriver before this because when you connect all these applications with Stackdriver Trace, there is an option for inter-service HTTP requests, so you can go there and check the bottleneck. That's the straightforward answer to this.

Okay, yeah, I didn't know.

No problem. So how do you do logging and so on in your system?

Basically, before I came, they had Elasticsearch, and the applications are sending logs to Elasticsearch. But I would say that in the coming months we may explore the Loki option, because it would be much simpler: both the deployment and the logging inside Loki are simpler than the Elastic stack.

Okay, cool. So I'll give you one more scenario question. Consider that you are working on an application and you have to deploy it, and let's say tomorrow is a weekend. So you have deployed this new release of the internal application during a weekend maintenance window. Why am I saying this? Because on the weekend a lot of people are sitting at home and there is minimal user traffic. Now, after this window ends, you learn that one of the features is not working as expected, and that happens in the production environment. So there is an extended outage after that, and a lot of people complain; your client complains that this is not working. In haste, what you do is roll back the changes and deploy a fix. Now, for these kinds of situations, you want to modify your release process to reduce the
mean time to recovery, so that you can avoid extended outages in the future. What will you do?

So, in my current experience, there is a QC team responsible for promoting the app from staging to the production side, so basically it comes down to the testing part. The testing should cover all of the features that are being deployed. As for the outage, there shouldn't be a large window, because the moment we understand there is an issue, we roll back immediately. I haven't been in a situation with this kind of big outage due to a production deploy, because we can easily go back to the previous tag or version.

Okay. Do you have any idea of the deployment strategies that we have in DevOps?

Yes, there are the blue-green and the canary deployments.

So what exactly is the blue-green deployment strategy?

In a blue-green deployment strategy we have two matching environments, and we deploy to the first one. If the deployment passes, and the testing says the environment is okay, then we can say this is the green one, and it serves production. When we try to deploy a new feature, it goes to the other environment, and if the checks pass, it becomes the new green one, the new production environment.

Okay, so for that outage scenario I gave, you can adopt a blue-green deployment strategy whenever you do a new code release on your continuous deployment server. And you can add one more thing: if you have a separate continuous integration server or process, you can add a suite of unit tests there, so that you can verify any changes. That's why I was asking that.
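On Kubernetes, the blue-green switch described above is often implemented by repointing a Service selector from the old deployment to the new one once checks pass; the names and labels below are illustrative, not from the interview:

```yaml
# Two Deployments run side by side, labelled version: blue and version: green.
# The Service sends production traffic to whichever label it selects, so
# flipping "version" from blue to green performs the cutover, and rollback
# is just flipping it back.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: green   # change to "blue" to roll back instantly
  ports:
    - port: 80
      targetPort: 8080
```

Because rollback is a one-line selector change rather than a redeploy, this directly reduces the mean time to recovery the scenario asks about.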
Okay, so when we talk about GCP, what are all the services that you have worked on?

We have worked on storage with GCS, we have worked on Cloud SQL, and GKE, and Cloud DNS. These are mainly the services we are using day to day.

Okay. Have you ever worked with the Google Compute Engine API? Heard about it?

I have heard about it, but I haven't had the chance to interact with it.

Okay. When we talk about a cloud architecture, we have multiple layers, right? Can you tell me about them, any three or four layers in a cloud architecture?

Okay, so are you talking about the physical layer, or three-tier, for example, or what exactly?

You can give an example of three-tier, but I was talking about the physical layer, the infrastructure layer, that kind of thing. What exactly are these, and what components do we have in them, with relation to GCP as well, if any? Any idea about this? Physical, infrastructure, platform, application layer, something like that.

I don't think I know what this is. But you aren't asking about the OSI model, right, the seven layers?

No, I don't think that's it, but no problem. So, have you created any project from scratch in GCP?

In GCP I have created a cluster from scratch, for example. There were two: one that I did manually, and the other was from Terraform.

Okay, so you have worked on Terraform as well, right?

Yes.

Okay, this is my last question with respect to GCP: heard about BigQuery?

Yes.

What are the advantages of BigQuery, and what does it do?

From my understanding, though I haven't had the chance to deploy or interact with it in a
production environment, but it can query the SQL databases that we have deployed; we can create a connection to them so that we can gather some data, for example, for other uses. That is basically what I understand about it.

Okay, it's basically a substitute for the hardware setup of a traditional data warehouse, and it's used for that. There are multiple benefits: there is no need to make requests for backup or recovery, and it does not need any provisioning of resources before usage, it assigns them on the basis of your need and your usage, and it's a fully maintained and managed service. So that's a few things about it.

Okay, last question on this: have you ever used Binary Authorization in Google Cloud? Unfortunately, no. Any idea about it? No as well, sorry. Because in GKE and Cloud Run, Binary Authorization is used to make sure that only trusted container images are deployed, so I thought you might have worked on that. Okay, no problem.

So you are Kubernetes certified, right? Yes. So when did you clear that examination? I don't remember exactly, but it was about four months ago, I think. It was in the middle of 2022, because I joined my current company in March and I cleared it in the following months, so around June or July, I think.

Okay, what do you understand by a namespace in Kubernetes? A namespace is a logical separation of resources, so it's like we can have several environments in a single cluster.

Okay, and any idea about the Ingress default backend, what exactly it is? For the Ingress part, I haven't used the default resource; I have deployed the NGINX
Ingress controller and managed the traffic with it; I didn't use the default one.

Okay, so let's say your pod is not getting scheduled. How will you troubleshoot it as a DevOps engineer? If it's not getting scheduled, I would run the describe command to check exactly why it is not getting scheduled, and then based on that I would take further action. So say, for example, that the describe command displays that there are insufficient CPUs: that would mean that we hit the resource limit on the current nodes, so then I would try to decrease its resource requests, or decrease the resources of other services that might have been over-allocated.

Okay, so in your recent work, what was the latest problem that you resolved on Kubernetes? The latest problem was related to disks that weren't getting mounted in the pods, but unfortunately we couldn't understand why, because the disks were working fine outside of the cluster: we created a VM, mounted the same disks on it, and we could access all of the files. Unfortunately, because of the downtime of the deployed services there, we couldn't troubleshoot it further, because the services were affecting the devs, so we had to deploy a new cluster.

Okay, and did you do all of that single-handedly, or did you take help from someone? No, this part I did all on my own.

Okay, I think I'm done. Do you have any questions for me? For the technical part, I would like to ask for any feedback that you might give me, that could help me in any of the
aspects that we discussed. Okay, I mean, here is one piece of feedback. First of all, whenever you talk as a DevOps engineer, you need to understand the end-to-end flow of any application. For example, if I say that I work on a Java-based application as a developer, then I would know its architecture. As a DevOps engineer, you need to know, for a Java-based application, what the first step is when it gets checked out, the next step when it compiles and builds, the third step when it gets tested. For each and every step, you should know what kind of software it is touching, what kind of technology stack it is touching, where it is going, and how the test cases are running on it. And if you're talking about Sonar: what is the code coverage, how is it measured? If the code coverage comes out at 45% in one situation and 85% in another, what is the difference between the two? Who is writing the quality gates? It's totally your responsibility to know all of this. After that: how it is getting tested in UAT, stage, production, or dev environments, what branching strategies are involved, and how we are deploying it, the deployment strategies, canary and blue-green, how they happen and the thinking behind them. That's the very basic thing you should know as a DevOps engineer. And now cloud is getting mixed into DevOps as well, so we expect someone in DevOps to know cloud too: how it happens on the cloud, what services it touches, what is happening behind it. As a DevOps engineer you need to think about that. It's not just that you clicked once, it got deployed, everything is done, and people have access and can run it. No, that's fine for someone who is a fresher in the DevOps world with one or two years of experience, but as you grow in DevOps you need to know each and
everything. So that would be my one piece of feedback that you can learn and improve on. Okay, thank you so much. Yeah, thanks.
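The blue-green strategy described in the interview is often implemented on Kubernetes by keeping two Deployments and flipping a Service's label selector between them. A minimal sketch, with hypothetical names (`myapp`, a `slot` label with `blue`/`green` values); this assumes two matching Deployments already exist:

```yaml
# Hypothetical Service fronting the app; traffic goes to whichever
# Deployment carries the matching "slot" label.
apiVersion: v1
kind: Service
metadata:
  name: myapp            # hypothetical name
spec:
  selector:
    app: myapp
    slot: blue           # change to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

Cutting over (or rolling back after a bad release, as discussed above) is then a single selector change, for example `kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","slot":"green"}}}'`.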
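For the "cluster from scratch with Terraform" part of the conversation, a minimal GKE sketch with the `google` provider might look like the following; the project ID, region, and resource names are placeholders, not taken from the interview:

```hcl
# Minimal GKE cluster sketch (placeholder project, region, and names).
provider "google" {
  project = "my-project-id"   # placeholder
  region  = "us-central1"
}

resource "google_container_cluster" "primary" {
  name     = "demo-cluster"
  location = "us-central1"

  # Drop the default node pool so node pools are managed explicitly.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "default" {
  name       = "default-pool"
  cluster    = google_container_cluster.primary.name
  location   = "us-central1"
  node_count = 2

  node_config {
    machine_type = "e2-medium"
  }
}
```

A `terraform init` followed by `terraform apply` against a project with the GKE API enabled would stand up the cluster described.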
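The "several environments in a single cluster" idea behind namespaces can be sketched as plain manifests (the environment name is illustrative); a ResourceQuota per namespace is a common companion so one environment cannot starve the others:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging            # illustrative environment name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"      # cap total CPU requests in this namespace
    requests.memory: 8Gi
```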
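For the unschedulable-pod scenario, the flow the candidate describes is `kubectl describe pod <name>`; if the Events section shows `FailedScheduling ... Insufficient cpu`, lowering the pod's requests (or freeing capacity elsewhere) is one fix. A sketch of the relevant container spec fragment, with illustrative values:

```yaml
# Fragment of a Deployment's pod template. If "kubectl describe pod"
# reports "Insufficient cpu", these requests are what the scheduler
# could not fit on any node. All values here are illustrative.
containers:
  - name: app
    image: myorg/myapp:1.2.3   # placeholder image
    resources:
      requests:
        cpu: "250m"      # lower this if nodes have no spare CPU
        memory: 256Mi
      limits:
        cpu: "500m"
        memory: 512Mi
```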
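The end-to-end flow in the closing feedback (checkout, build, test, Sonar scan, quality gate, deploy) can be sketched as a declarative Jenkins pipeline. Stage names, the Maven commands, the SonarQube server ID, and the deploy script are all illustrative assumptions, not details from the interview:

```groovy
// Illustrative declarative Jenkinsfile; "sonar" is a placeholder for
// the SonarQube server name configured in Jenkins, and deploy.sh is
// a hypothetical script.
pipeline {
  agent any
  stages {
    stage('Checkout') { steps { checkout scm } }
    stage('Build')    { steps { sh 'mvn -B clean package -DskipTests' } }
    stage('Test')     { steps { sh 'mvn -B test' } }
    stage('Sonar analysis') {
      steps {
        withSonarQubeEnv('sonar') {        // injects server URL and token
          sh 'mvn -B sonar:sonar'
        }
      }
    }
    stage('Quality gate') {
      // Fail the build if the SonarQube quality gate does not pass.
      steps { waitForQualityGate abortPipeline: true }
    }
    stage('Deploy') { steps { sh './deploy.sh green' } }  // placeholder
  }
}
```

The `withSonarQubeEnv` and `waitForQualityGate` steps come from the SonarQube Scanner plugin for Jenkins; the quality gate stage is where the 45% vs. 85% coverage difference mentioned in the feedback actually blocks or passes a release.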
Info
Channel: LogicOps Lab
Views: 3,264
Keywords: gcp interview questions, gcp interview questions and answers for experienced, gcp interview questions for freshers, gcp interview, gcp interview questions and answers, gcp interview questions and answers for freshers, google cloud platform interview questions, google cloud platform interview questions and answers, google cloud platform interview, gcp scenario based interview questions, gcp scenario based questions, gcp devops interview questions, gcp devops interview
Id: zu8-9RULD5E
Length: 63min 28sec (3808 seconds)
Published: Tue Oct 24 2023