AWS Interview Questions | AWS Interview Questions for Solutions Architect | Intellipaat

Captions
Hey guys, welcome to this session by Intellipaat. In this session we're going to look at AWS interview questions, and not in the regular way: we're going to simulate how the interview process happens and show you exactly how you should answer the questions. Once that's done, we'll also go through a set of interview questions you can use to clear the next interview you get into. Before moving on with the session, please subscribe to our channel so you don't miss our upcoming videos, and leave a like if you enjoy our content. Now let's begin.

Interviewer: Hi, let's start off with this interview process. Let's start with an introduction of yourself.

Candidate: I'm basically a developer, and I wanted to switch my domain to cloud, so I took it upon myself to learn AWS. I've done a lot of projects, as you can see on my resume, and this would be my first job as a cloud engineer.

Interviewer: I see from your resume that you worked as a developer. How exactly did you learn AWS?

Candidate: I self-learned, and I also took a course from Intellipaat. I did my training there and worked on a lot of projects, which gave me the confidence that I can actually contribute to an organization like yours and be a good cloud engineer.

Interviewer: Looking at your resume, it seems you have worked on FSx. What do you think is the difference between FSx and EFS, and what gave FSx the edge here?

Candidate: FSx, like EFS, is a shared file system service, but the major difference is that FSx gives you high I/O. Whenever you have an application that requires a high input/output rate, it's better to use FSx, even though the pricing is on the higher side. If the application does not require I/O-intensive performance, EFS is the better fit. The application in my project had high I/O requirements, and that's why I chose FSx.
Interviewer: Now let's look at RDS. There is an RDS cluster where blue-green deployment has not been set up. If the master goes down, what exactly will you do to bring the cluster up?

Candidate: If my master goes down, I can no longer write to the cluster, but read operations will still work because I have multiple read replicas. Had there been a blue-green deployment, I could simply have switched to the other cluster. But since you said there is no blue-green deployment, there is only one master, and that master has gone down, so I'll promote a read replica to be the new master, and that's how write operations will start again.

Interviewer: Now let me ask you some very basic questions which are also really important. How is stopping an instance different from terminating it?

Candidate: When we stop an instance, the storage is preserved while the underlying server is given back to AWS. When we terminate an instance, even the storage is erased and given back to AWS along with the server. In terms of pricing: if you stop a server, you are only charged for the storage; if you terminate it, you are not charged for anything, because both the server and the storage are gone.

Interviewer: In the same pattern, another basic but important question: how exactly do you choose an availability zone?

Candidate: The first thing I have to consider is the target audience I'm catering to. For example, if I have an application whose user base is mostly in the Mumbai region, I should choose a region close to Mumbai in order to reduce latency.
So latency is the first factor. The second thing I have to check is whether the pricing is something the organization agrees to. For example, say a server is available at a much lower price in a US region than in the Indian region where I was planning to launch it; that factor also has to be considered when deciding where to launch a server. And sometimes, when you try to launch an instance, AWS tells you that the instance type is not available in a certain availability zone. So after doing the research on latency and pricing, I would settle on the availability zone where I don't get any such errors, and that's where I deploy my server.

Interviewer: Okay, got it. Now let's look at the next question. Give me the difference between stateless and stateful systems.

Candidate: Stateful systems are systems where the server remembers what the user, or a job, was doing. To give an example, say I have ten jobs that I want my system to execute, and the system scales itself out to three or four servers that work through those ten jobs. If the first server is about to pick up a job, it should know whether the second or third server has already done that job. That is a stateful system: the servers are aware of their surroundings. Stateless systems, by contrast, are not aware of what has already happened or what will happen next.
Taking the same example, if you have ten jobs and four systems working on them, and none of the four systems knows what the others are doing, they might do the same job repeatedly. To avoid that, we make sure the application is stateful. Sometimes stateless applications also make sense: if that disadvantage doesn't apply to your workload, stateless systems are fine; otherwise they are not.

Interviewer: Got it. So tell me, is AWS Lambda a stateful or a stateless service?

Candidate: Honestly, I've never worked on AWS Lambda, so I can't say whether it's stateful or stateless.

There is a very important point here: if you don't know something that's asked in an interview, it's better to say you don't know than to give a wrong answer. A wrong answer signals to the interviewer that you might do the same in a critical situation at the company and give a wrong suggestion about something you don't actually know. It's always good to talk about what you know, and for the topics you're not comfortable with, tell the interviewer exactly that.

Interviewer: Let me answer that question for you: AWS Lambda is a stateless service. On that note, another question: how would you make an application stateful while using AWS Lambda?

Candidate: For any stateless system (take the same example, where I have four servers and a sequence of jobs I want them to execute), I can make it behave statefully by adding a queue in front of the servers. The queue holds all ten jobs, and the moment a job is taken up by a server, the queue deletes that job from itself. That way each server only ever gets jobs that no other server has taken, and the setup mimics the behavior of a stateful system.
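The queue idea described in that answer can be sketched as a small simulation. This is a toy, plain-Python illustration of the principle (a shared queue in front of stateless workers), not real SQS or Lambda API calls; the job names and worker count are made up:

```python
import queue
import threading

def run_jobs(jobs, num_workers=4):
    """Simulate stateless workers pulling from a shared queue.

    The queue plays the role SQS would play in front of stateless
    compute: taking a job removes it from the queue, so no two
    workers ever process the same job.
    """
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    processed = []              # (worker_id, job) pairs
    lock = threading.Lock()

    def worker(worker_id):
        while True:
            try:
                job = q.get_nowait()   # atomically removes the job
            except queue.Empty:
                return                 # nothing left to do
            with lock:
                processed.append((worker_id, job))

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed

results = run_jobs([f"job-{n}" for n in range(10)])
handled = [job for _, job in results]
print(len(handled), len(set(handled)))  # -> 10 10 (every job done exactly once)
```

However the threads are scheduled, each job is handled exactly once, which is the "stateful" behavior the queue buys you.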
Interviewer: So now let's look at our final question. You have a video transcoding application, and the videos are processed according to a queue. If the processing of a video is interrupted on one instance, it resumes on another instance. Currently there is a huge backlog of videos that needs to be processed, and for this you should obviously add more instances, but these instances should only be available until the backlog is reduced. Which instance type would you choose to make this happen? I'll give you some options: On-Demand, Spot, and Reserved Instances.

Notice, guys, that the interviewer has given a pretty big scenario here. The first part of it says that if a video being transcoded is interrupted on one system, it resumes on another. But the question is about which instances you should use, and that first part is actually not relevant to the answer. This sometimes happens, simply to confuse you, to make you think the details are related when in this particular question they are not.

The main question the interviewer has asked is this: the videos have to be processed, and processed fast, since there is already a huge backlog, so we need to add more instances, and those instances are only required until the videos are processed; once the videos are done, we don't need the servers anymore. The three options given are in fact the AWS pricing options for EC2: Reserved Instances, Spot Instances, and On-Demand Instances. Reserved Instances only make sense when I know the time frame I need the servers for, say a one-year or three-year term.
But here we do not know how long we will need the servers, so Reserved Instances do not make sense. We also want to get rid of the backlog very fast, so Spot Instances do not make sense either, because a Spot Instance will stop the moment the spot price goes up. The only option we are left with is On-Demand. On-Demand Instances are perfectly suitable for this kind of scenario: we want them only until the videos are processed, and once the videos are done we simply terminate the instances, with no time commitment of the kind Reserved Instances require. So for this scenario, On-Demand Instances are the correct option.

All right, now for the set of questions you can use for your next interview. The first question says: what is the difference between an AMI and an instance? Guys, an AMI is nothing but a template of an operating system. It's just like an installation CD of an operating system that you can install on any machine on the planet; similarly, an AMI is an operating system image that you can install on any server within the Amazon infrastructure. There are many types of AMIs: Windows AMIs, Ubuntu AMIs, CentOS AMIs, and so on, and a lot more are present in the AWS Marketplace that you can install on any servers in the AWS infrastructure. Coming to instances: instances are the machines on which you install an AMI. Like I said, AMIs are templates that can be installed on machines, and these machines are called instances.
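The reasoning in the transcoding question above can be condensed into a toy decision helper. This is only an illustration of the logic just discussed; the function, its parameters, and the rules are my own sketch, not an AWS API:

```python
def choose_purchase_option(known_term_years=None, interruption_tolerant=False):
    """Pick an EC2 pricing model from two workload traits.

    A toy encoding of the interview reasoning:
      - a known long-term commitment -> Reserved Instances
      - work that can tolerate being stopped mid-way -> Spot Instances
      - everything else -> On-Demand Instances
    """
    if known_term_years in (1, 3):
        return "reserved"          # commit for the known term
    if interruption_tolerant:
        return "spot"              # cheapest, but can be reclaimed
    return "on-demand"             # pay per hour, stop whenever you like

# The backlog scenario: unknown duration, and (per the discussion)
# urgency rules out Spot, so we treat it as interruption-intolerant.
print(choose_purchase_option())  # -> on-demand
```

Calling it with `known_term_years=3` returns `"reserved"`, matching the one- or three-year commitment case from the answer.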
Instances also have types, classified by hardware capacity: for example, a machine with 1 CPU and 1 GB of RAM is called a t2.micro; similarly you have t2.large and t2.xlarge, and then I/O-intensive machines, storage-intensive machines, and memory-intensive machines, all grouped into different classes depending on their hardware capability. So that is the difference between an AMI and an instance.

Our next question asks: what is the difference between scalability and elasticity? Guys, scalability versus elasticity is a confusing topic if you think about it. Scalability is nothing but increasing a machine's resources: if your machine has 8 GB of RAM today and you increase it to 16 GB, the number of machines does not increase; you are just increasing the specification of the machine. That is scalability. Elasticity, on the other hand, means increasing the number of machines in an architecture without increasing the specification of any machine. For example, we decide we need a machine with 3 GB of RAM and around 8 or 10 GB of storage; any replica that gets made, or any auto scaling that happens, will only change the number of machines, never their specification. The specification stays fixed while the number of machines goes up and down: that is elasticity. Scalability is changing the specification of the machine itself, the RAM, the memory, the hard disk, and so on, without changing the number of machines. That is the basic difference between scalability and elasticity.

Moving forward, our next question is: which AWS offering enables customers to find, buy, and immediately start using software solutions in their AWS environment?
Think of it like this: say you want a Deep Learning AMI, or a Windows Server AMI with specific software installed on it. Some of these are available for free, while others can be purchased. The answer is AWS Marketplace: it's the place where you can find AWS and non-AWS software to run on the AWS infrastructure.

Moving on, our next questions fall under the domain of resilient architectures, so everything we discuss in this domain will deal with the resiliency of an architecture. A customer wants to capture all client connection information from his load balancer at an interval of five minutes. Which of the following options should be chosen for his application? I'll read out the options. Option A: enable AWS CloudTrail for the load balancer. Option B: CloudTrail is enabled globally. Option C: install the Amazon CloudWatch Logs agent on the load balancer. Option D: enable CloudWatch metrics on the load balancer.

Now, CloudTrail and CloudWatch are both monitoring tools, so this is a bit confusing, but if you have studied them deeply, or understand how CloudTrail works and how CloudWatch works, it is actually not that difficult. The answer is A: enable AWS CloudTrail for the load balancer. Option B is not correct because CloudTrail is not enabled by default, or globally, for all services. Options C and D you will not even consider, the reason being that we are talking about logging client information: which clients are connecting to the load balancer, and from which IP addresses.
CloudWatch deals with the local resources of the instance you are monitoring: for an EC2 instance, CloudWatch can monitor the CPU usage or memory usage of that particular instance, but it cannot take into account the connections coming into your AWS infrastructure. CloudTrail, on the other hand, deals with exactly this kind of thing: client information, or any data that can be fetched from a particular transaction, can be recorded in CloudTrail's logs. Hence, for this question, the answer is to enable AWS CloudTrail for the load balancer.

Moving on, our next question is: in what scenarios should we choose a Classic Load Balancer versus an Application Load Balancer? I think the best way to answer this is to understand what each of them is. A Classic Load Balancer is an old-fashioned load balancer that does nothing but round-robin distribution of traffic, which means it distributes traffic equally among the machines under it. It cannot recognize which machine requires which kind of workload; whatever data comes to a Classic Load Balancer is distributed equally among the machines registered to it. The Application Load Balancer is a new-age load balancer that can identify the workload coming to it, based on things like the request path. For example, say you have a website that deals in image processing and video processing, reachable at paths like intellipaat.com/images and intellipaat.com/videos. If the path is /images, the Application Load Balancer routes the traffic only to the image servers, and if the path is /videos, it automatically routes the traffic to the video servers.
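That path-based rule can be sketched in a few lines. This is a plain-Python toy, not an AWS API; the target-group names and the prefix-matching rule are made-up illustrations of what an ALB listener rule does:

```python
def route(path, rules, default="default-servers"):
    """Return the target group for a request path, the way an ALB
    listener rule with path conditions would: first match wins."""
    for prefix, target_group in rules:
        if path.startswith(prefix):
            return target_group
    return default

# Hypothetical rules mirroring the /images and /videos example.
RULES = [
    ("/images", "image-servers"),
    ("/videos", "video-servers"),
]

print(route("/images/cat.png", RULES))    # -> image-servers
print(route("/videos/intro.mp4", RULES))  # -> video-servers
print(route("/about", RULES))             # -> default-servers
```

A Classic Load Balancer, by contrast, would behave as if the rules list were empty: every request goes to the same pool regardless of path.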
So whenever you are dealing with multivariate traffic, that is, traffic meant for specific groups of servers, you would use an Application Load Balancer. On the other hand, if your servers all do exactly the same thing and you just want to distribute the load among them equally, you would use a Classic Load Balancer.

Our next question says: you have a website that performs two tasks, rendering images and rendering videos. These two pages are hosted on different sets of servers, but under the same domain name. Which AWS component is apt for your use case among the following? I think this is an easy question, because we just discussed it: the answer is the Application Load Balancer, the reason being that the incoming traffic is specific to its workload, and this can be differentiated easily by an Application Load Balancer.

Okay, we are done with the resilient architecture questions; now let's move on to the performance architecture domain, where we'll discuss architectures that are performance-driven. Let's take a look at the first question. It says: you require the ability to analyze a customer's clickstream data on your website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data? The options are Amazon SNS, AWS CloudTrail, AWS Kinesis, and Amazon SES. Let's start by eliminating the odd ones out.
Amazon SNS deals with notifications, and since we want to track user data, SNS would not be the apt choice: sending that many notifications in a short amount of time would not be appropriate. Similarly, SES would not be apt, because we would then be getting emails about user behavior, and this would amount to a huge number of emails, so it's not an appropriate solution either. That leaves AWS CloudTrail and AWS Kinesis. Both of these services could actually do this work, but the keyword here is real time. Since the data has to be analyzed in real time, you choose AWS Kinesis: CloudTrail cannot pass on logs for real-time analysis, whereas Kinesis is built especially for this purpose. Hence, for this question, the answer is AWS Kinesis.

Moving on, our next question is: you have a standby RDS instance; will it be in the same availability zone as your primary RDS instance? The options are: it's only true for Amazon Aurora and Oracle RDS; yes; only if configured at launch; and no. I like to think about it like this: the standby RDS instance only comes into play when your primary RDS instance stops working. Now, what could cause your RDS instance to stop working? It could be a machine failure, or a power failure at the place where your server has been launched; it could even be a natural calamity striking the data center where your server exists. All of these could lead to disruption of your RDS service. Now, if your standby RDS instance is in the same availability zone as your primary, none of these situations can be tackled. So it is always logical to have your standby machines somewhere else.
That way, even if there is a natural calamity or a power failure, your instance is always up and ready. Because of that, AWS does not give you the option of launching your standby RDS instance in the same availability zone; it always has to be in another availability zone. And that's why the answer is no, your standby RDS instance will not be in the same availability zone as your primary instance.

All right, our next question: you have a web application running on six Amazon EC2 instances, consuming about 45 percent of the resources on each instance. You are using Auto Scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business, and you want high availability at all times; you want the load distributed evenly between all instances, and you also want to use the same Amazon AMI for all instances. Which of the following architectural choices should you make? This is a very interesting question. The options: Option A, deploy six EC2 instances in one availability zone and use an ELB. Option B, deploy three EC2 servers in one region and three in another region, and use an ELB. Option C, deploy three EC2 instances in one availability zone and three in another availability zone, and use an ELB. Option D, deploy two EC2 instances in each of three regions and use an Elastic Load Balancer. Now, the correct answer is C, the reason being that AMIs are not available across regions: if you have created an AMI in one region, it will not automatically be available in another region; you would have to perform some operations first.
Only then will it be available in the other region, so that is reason number one, and the options that mention multiple regions get cast out because of it. Second, if you look at the first option, deploying all six EC2 instances in one availability zone defeats the purpose of high availability, because, like I said, a natural calamity or a power failure at that data center would bring down all your instances. It's always advisable to distribute your servers, and since we have the limitation that an AMI is not accessible across regions, we choose to distribute our instances among availability zones instead. Here we happened to have the option of two availability zones with three servers each; it could just as well have been three availability zones with two servers in each, and that would also give high availability. And of course, because you want to load-balance the traffic, an ELB applied on top of availability zones within a region works like a charm, whereas across regions it can become a problem. So the answer for this question is: deploy the EC2 instances among multiple availability zones in the same region, behind an ELB.

Just a quick note, guys: Intellipaat provides online AWS certification training in partnership with FutureSkills, mentored by industry experts; the course link is given in the description below. Now let's continue with the session.

Our next question is: why do we use ElastiCache, and in what cases? The answer is related to the nature of the service. ElastiCache, as the name suggests, is a cache, which can be accessed much faster than your normal application, and it helps whenever you find yourself running the same kind of query against a database instance over and over.
For example, suppose you are always fetching the password for particular users. If you're using ElastiCache, that data can be cached inside it, and whenever a similar request comes in asking for that kind of data, your MySQL instance will not be disturbed: the data will be relayed directly from ElastiCache. That is exactly the use of ElastiCache: you use it when you want to increase the performance of your systems and you have frequent reads of similar data. Querying the same kind of data every time increases the load on your database instance; to avoid that, you introduce ElastiCache as a layer between your database and your front-end application, which not only increases performance but also decreases the load on your database instance. So that was all about performant architectures, guys, and the next domain deals with secure applications and their architecture.

Let's start with the first question of this domain. A customer wants to track access to their Amazon Simple Storage Service buckets and also use this information for their internal security and access audits. Which of the following will meet the customer's requirement? The options: you can enable CloudTrail to audit all Amazon S3 buckets; you can enable server access logging for all required Amazon S3 buckets; you can enable the requester pays option to track access via AWS billing; or you can enable AWS S3 event notifications for PUT and POST. I would say the answer is A. Why is the answer not B? Because server access logging is not actually required when all you want is to track access to the objects present in the S3 bucket.
The requester pays option, tracking access via AWS billing, is again not required, because there is a very simple feature of CloudTrail available to all buckets across S3, so why not use that? And using S3 notifications would not be apt, the reason being that there will be a lot of operations happening; rather than sending a notification for each and every operation, it is better to log those operations, so that we can extract whatever information we want from the log and ignore the rest. So the answer is: use AWS CloudTrail.

Okay, our next question: imagine you have to give AWS access to a data scientist in your company, and the data scientist requires access to S3 and Amazon EMR. How would you solve this problem from the given set of options? So you basically want to give an employee access to particular services, and we want to know how to do that. The options are: give him the credentials for root; create a user in IAM with one managed policy covering EMR and S3 together; create a user in IAM with managed policies for EMR and S3 separately; or give him the credentials of an admin account and enable MFA for additional security. A rule of thumb, guys: never give root credentials to anyone in your company, not even yourself. Always create a user for yourself and access AWS through that user. That was point number one. Second, whenever you want to give people permissions to particular services, you should always use the policies that pre-exist in AWS, which means: never merge two policies. Merging EMR and S3 means you create one policy document that grants the required access in a single place, that is, in one document you mention the access for EMR and, in the same document, the access for S3 as well.
This is not recommended. The managed policies are created and tested by AWS, so there is no chance of any leak from the security standpoint. The second thing is that needs change: if tomorrow your user says he doesn't want access to EMR anymore and probably wants access to EC2 instead, what will you do? If both permissions lived in the same document, you would have to edit that document. But if you attach a separate document for each service, all you have to do is remove the EMR policy and add the policy for the other service he requires, say EC2; your S3 policy is never touched. That is much easier to manage than writing everything into one document and editing it later whenever the required permissions change. So the answer is: create a user in IAM with managed policies for EMR and S3 attached separately.

Let's move on to the next question: how does a system administrator add an additional layer of login security to a user's AWS Management Console? This is a simple question: the answer is to enable multi-factor authentication. Multi-factor authentication deals with rotating keys: every 30 seconds a new key is generated, and this key is required while you are logging in. So once you have entered your email and password, it will not log you straight in; it will give you a confirmation page asking for a code, which is valid only for those 30 seconds. This can be done using apps: there is an app from Google, and apps from other third-party vendors as well, which are compliant with AWS and give you access to those keys that change every 30 seconds.
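The "rotating key" idea behind those authenticator apps is the time-based one-time password (TOTP) scheme. A minimal sketch in plain Python, using only the standard library (the secret below is the well-known RFC test key, not a real account credential):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Time-based one-time password in the RFC 6238 style.

    The code is derived from a shared secret plus the current
    30-second window, which is why it 'rotates' every 30 seconds.
    """
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step                  # current 30-second window
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test key ("12345678901234567890" in base32).
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, timestamp=59))   # -> 287082 (the RFC test vector for window 1)
print(totp(secret, timestamp=60))   # next 30-second window, different code
```

The server and the app both hold the secret and both compute this function, so a code entered on the confirmation page can be checked without ever transmitting the secret.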
are rotated every 30 seconds. So enabling multi-factor authentication is the best way of adding a security layer on top of the traditional username and password that you enter.

All right, our next domain deals with cost-optimized architectures, so let's discuss these questions as well. The first question is: why is AWS more economical than traditional data centers for applications with varying compute workloads? Let's read out the options: (a) Amazon EC2 costs are billed on a monthly basis; (b) Amazon EC2 costs are billed on an hourly basis — which is true; (c) Amazon EC2 instances can be launched on demand when needed — also true; (d) customers can permanently run enough instances to handle peak workloads. Because this question is about the economical value of AWS, options (b) and (c) are correct: you are charged by the hour, and at the same time you can have instances on demand. If you don't need them after two hours, you just pay for two hours, and you don't have to worry about where that server went. This is very economical compared to buying servers whose usefulness ends after, say, one or two years, when the hardware gets outdated and it becomes a bad investment on your part. That is why AWS is so economical: it charges you by the hour and also gives you the option of using servers with on-demand pricing. So options (b) and (c) would be the right answer for this particular question.

Moving further, our next question says: you are launching an instance under free-tier usage from an AMI having a snapshot size of 50 GB. How will you launch the instance under free usage? The answer for this question is pretty simple: it is not possible. There is a limit on the snapshot
size you can use under the free tier, and 50 GB exceeds what the Amazon free-tier rules allow; hence this is not possible.

All right, our next question says: your company runs a multi-tier web application that does video processing. There are two types of users who access the service: premium users and free-edition users. The SLA for video processing is fixed for premium users, while for free users it is indefinite — that is, there is only a maximum time limit of 48 hours. How would you propose the architecture for this application, keeping cost efficiency in mind? To rephrase: you have an application with two kinds of traffic, free and premium. The premium traffic has an SLA that a task should be completed in, say, one or two hours; for the free traffic there is no guarantee of when it will finish, only a maximum SLA of 48 hours. If you were to design the back-end architecture for maximum cost efficiency, how would you do it?

The way we can deal with this is with Spot Instances in AWS, which work through bidding: you bid for AWS capacity at the lowest price possible, and as long as the market price stays within the range you specify, you keep that instance. All the free users coming to this website can be allotted to Spot Instances, because there is no strict SLA — even if prices go high and the instances become unavailable, it does not matter; the processing can wait when you're dealing with free users. But for premium users, since there is an SLA and you have to meet a particular deadline, I would use On-Demand Instances. They are a little more expensive, but because premium users are paying for their membership, that should cover that part, and
Spot Instances would remain the cheapest option for the users who come to your website for free, because they have no urgency about their work and can wait if the prices get too high for you.

All right, our next domain talks about operationally excellent architectures, so let's see what questions are covered in this particular domain. Imagine you have an AWS application that is monolithic in nature. Monolithic applications are those that have the whole code base on one single machine. This application requires 24/7 availability and can only be down for a maximum of 15 minutes. Had your application not been monolithic, I would say there should be no downtime at all, but since it is monolithic, the question mentions an expected downtime of, say, 15 minutes. How will you ensure that the database hosted on your EBS volume is backed up? Since it's a monolithic application, even the database resides on the same server as the application, so the question is how you ensure your database is backed up in case there is an outage. The answer is pretty easy: you can schedule EBS snapshots of your EC2 instance's volume at particular intervals of time, and these snapshots act as a backup for the database deployed on that instance. Hence the answer is EBS snapshots.

All right, our next question: which component of the AWS global infrastructure does Amazon CloudFront use to ensure low-latency delivery? CloudFront is a content delivery network, which basically means that if you are in the US and the application you are accessing has its servers in India, it will probably cache the application on a US server, so that you can access it faster than sending traffic
packets all the way to India and receiving them back. This is how CloudFront works: it caches the application at the server nearest to you so that you get the minimum latency possible, and it does this using AWS edge locations. Edge locations are servers located near your place, or near a particular availability zone, which cache the applications that are actually hosted in different, far-off regions.

Just a quick info, guys: Intellipaat provides online AWS certification training in partnership with FutureSkills, mentored by industry experts; the course link is given in the description below. Okay guys, that's it. I hope you enjoyed this conversation as well as the tips that were shared in it. If you have any more questions, or any doubts about anything we discussed in the video, please put them down in the comments section and we will be happy to answer. Now let's go ahead and look at some more interview questions which could be asked in the next interview you get into.
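The IAM answer above — attach one AWS-managed policy per service instead of one merged document — can be sketched as plain data manipulation. This is an illustrative sketch: the policy ARNs follow AWS's managed-policy naming, but the helper functions and the `swap_access` workflow are invented for this example, not an AWS API.

```python
# One managed-policy ARN per service -- nothing is merged into a single document.
AWS_MANAGED = {
    "s3":  "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "emr": "arn:aws:iam::aws:policy/AmazonEMRFullAccessPolicy_v2",
    "ec2": "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
}

def policies_for(services):
    """The list of managed-policy ARNs to attach, one per service."""
    return sorted(AWS_MANAGED[s] for s in services)

def swap_access(services, drop, grant):
    """Revoke one service and grant another; every other policy is untouched."""
    return [s for s in services if s != drop] + [grant]

data_scientist = ["emr", "s3"]
# Tomorrow the user needs EC2 instead of EMR: only one attachment changes,
# and the S3 policy document is never edited.
data_scientist = swap_access(data_scientist, drop="emr", grant="ec2")
```

With a single merged policy document, the same change would mean editing and re-testing the whole document; here it is a detach plus an attach.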
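The rotating 30-second codes described in the MFA answer are standard TOTP (RFC 6238), which is what virtual MFA apps like Google Authenticator implement. A minimal stdlib-only sketch of the algorithm the answer describes (this is the generic RFC algorithm, not AWS's own code):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter,
    then dynamic truncation down to a short decimal code."""
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # low nibble picks a 4-byte window
    word = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(word % 10 ** digits).zfill(digits)

# RFC 6238's test secret; t=59s falls in the second 30-second window.
print(totp(b"12345678901234567890", at=59))  # -> 287082
```

The console prompt that appears after username and password is simply asking for this value; it changes when the counter (current time divided by 30 seconds) rolls over.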
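The tiered video-processing architecture above (premium → On-Demand, free → Spot, with the 48-hour SLA absorbing Spot interruptions) boils down to a small piece of routing logic. The function and tier names here are invented for illustration:

```python
def place_job(tier, spot_capacity_available):
    """Route a video-processing job to the cheapest capacity that still
    honors the user's SLA."""
    if tier == "premium":
        return "on-demand"    # fixed SLA: pay more, start immediately
    if spot_capacity_available:
        return "spot"         # no strict SLA: use the cheapest bid capacity
    return "queue"            # free tier can simply wait (up to 48 hours)
```

Premium jobs never touch Spot capacity, so a price spike or interruption only ever delays free-tier work, which the 48-hour SLA explicitly permits.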
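The backup answer for the monolithic application — schedule EBS snapshots at fixed intervals — is sketched below as the scheduling logic alone. The real call would be EC2's `create_snapshot` API; `take_snapshot` here is a stand-in that just records intent, and the volume ID and interval are made up:

```python
from datetime import datetime, timedelta

def snapshot_times(start, interval_hours, count):
    """The moments at which a periodic EBS snapshot job would fire."""
    return [start + timedelta(hours=interval_hours * i) for i in range(count)]

def take_snapshot(volume_id, when):
    """Stand-in for ec2.create_snapshot(VolumeId=...); returns a record only."""
    return f"snapshot of {volume_id} at {when.isoformat()}"

start = datetime(2021, 3, 6, 0, 0)
# Four snapshots, six hours apart, of the volume holding app + database.
plan = [take_snapshot("vol-0abc1234", t) for t in snapshot_times(start, 6, 4)]
```

Because the database lives on the same EBS volume as the application, each point-in-time snapshot of that volume doubles as a database backup to restore from after an outage.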
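The CloudFront answer — serve the cached copy from the nearest edge location rather than the far-off origin region — amounts to a minimum-latency lookup. A toy sketch with invented latency numbers:

```python
# Invented round-trip latencies (ms) from a viewer in the US to candidate
# servers; the origin sits in India, the edge locations sit much nearer.
LATENCY_MS = {
    "origin-mumbai": 230,
    "edge-virginia": 12,
    "edge-frankfurt": 95,
}

def nearest_location(latencies):
    """The CDN idea in one line: serve the cached content from the
    location with the lowest latency to the viewer."""
    return min(latencies, key=latencies.get)

print(nearest_location(LATENCY_MS))  # -> edge-virginia
```

The content still originates in one region; the edge location only holds a cache, which is why the viewer sees edge-level latency instead of origin-level latency.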
Info
Channel: Intellipaat
Views: 29,127
Rating: 4.9284863 out of 5
Keywords: AWS Interview Questions, AWS Interview Questions for Solutions Architect, AWS Solutions Architect Interview Questions, AWS Interview Questions and Answers, AWS, WS Interview Questions and answers for experienced, aws solution architect interview questions, aws solution architect certification, aws interview questions for experienced, aws certified solutions architect training, aws interview tips, aws training, aws interview questions, AWSQuestiinsForBeginners, AWSAdvance
Id: XbsnOTwvxgQ
Length: 42min 25sec (2545 seconds)
Published: Sat Mar 06 2021