Solution Architect Technical Interview (Master the Solutions Architect Interview Questions)

Video Statistics and Information

Captions
Are you looking for solution architect interview questions or cloud architect interview questions? If so, this video is for you.

Hi, my name is Michael Gibbs, and I'm the founder and CEO of Go Cloud Architects, an organization dedicated to building high-performance cloud computing careers. Personally, I've been working in technology for over 25 years, and I've been coaching and mentoring others to get their first tech job or get promoted in tech for over two decades. Today we're going to talk about preparing for that cloud architect or solutions architect interview, and we're going to do so by giving you some solutions architect and cloud architect interview questions.

Now, when the hiring manager asks you these interview questions, they're asking for a few reasons. First, and obviously, they're trying to check your technical competency, but they could do that in a lot of different ways. The reason they're going to ask you a lot of open-ended questions is that they want to see not only your technical competency but also your ability to describe technology, your communication skills, and how you act under pressure. So what's the best way to act under pressure? Be prepared for the pressure in the first place. If you're prepared, it will be a review when you're there, not the first time, so you won't be stressed. That's why we have these videos on cloud architect interview questions: to make sure you're successful on that cloud architect or solutions architect interview.

The first question we're going to ask you is to describe the four types of disaster recovery options on the cloud, and the strengths and weaknesses of each approach. First, obviously, you need to know the four types of disaster recovery options on the cloud, and then you need to know the reasons why an organization would choose each. So let's walk through that together.

The first disaster recovery option is simply backup. Let's say you're in your data center: you take an image of your servers and move it over to the cloud, you back up your data to the cloud, and maybe once per day you take the data from your data center and synchronize it with the cloud. That's it. That's backup. What's great about this? You back your data up to the cloud, it's very cheap, and if you needed to get the organization's systems up and running within about 12 hours, you could do it. It's a wonderful low-cost, high-reliability way to do disaster recovery.

Now let's talk about the next type. In the previous version we talked about copying your data and synchronizing it, say, every 24 hours. The next version keeps some of your data synchronized more frequently, because not all data needs to be synchronized more frequently. In disaster recovery version two, we take the same images of our servers as we did previously and move them over to the cloud, and we back up our data, say, every 24 hours and move that to the cloud. But in this case we also keep a database synchronized between our data center and the cloud, and by keeping our databases synchronized, our transactions are synchronized in both places. So if there's an outage in our main environment, say our main data center, we come up and running on the cloud. It's still going to take us about 12 hours to come up, but our transactions are synchronized, so our data is fresher. That's an improvement over disaster recovery option one.
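If the interviewer pushes for specifics, it can help to sketch what the daily "back up to the cloud" step in options one and two might look like. Here is a minimal sketch, assuming Python with boto3 and configured AWS credentials; the bucket name and file path are placeholders for illustration, not anything stated in the video.

```python
# Minimal sketch of the daily "backup to the cloud" step (DR options 1 and 2).
# The bucket name and backup path below are illustrative placeholders.
import datetime
import boto3

s3 = boto3.client("s3")

def upload_daily_backup(local_path: str, bucket: str = "example-dr-backups") -> str:
    """Upload one day's backup image or database dump to S3 under a dated key."""
    key = f"backups/{datetime.date.today().isoformat()}/{local_path.split('/')[-1]}"
    s3.upload_file(local_path, bucket, key)
    return key

# Example: run once per day from a scheduler (cron, EventBridge, etc.)
# upload_daily_backup("/var/backups/db-dump.sql.gz")
```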
The next form of disaster recovery, option three, is where the cloud really shines, because the cloud has auto scaling. Disaster recovery option three revolves around auto scaling. In your data center you've got your actual production environment, and you create a replica environment in the cloud. Meaning, if you had a network load balancer and a hundred web servers in the data center, maybe you set up a network load balancer and two web servers in the cloud, in an auto scaling group that can scale out as needed, and you do this for all of your environments. Then here's what happens: if anything happens in your data center, traffic is redirected to the cloud, and all your systems are already running in the cloud and synchronized, but they're in auto scaling groups and they're small. All the traffic is redirected to the cloud, the cloud auto scales, the systems scale out in about 45 minutes to an hour, and the cloud becomes a perfectly operating, fully established disaster recovery environment for the organization. So disaster recovery option three is really amazing. It provides high-speed, high-performance failover for disaster recovery, and it does so at a relatively good cost by leveraging small instances of everything on the cloud, a replica environment, and auto scaling.

The last form of disaster recovery is pure active-active, and here's what this does. If you've got an organization with a thousand web servers in its data center, you're going to have a thousand web servers in the cloud. If you have a thousand app servers in the data center, you'll have the same thousand app servers in the cloud. It's just a mirror image, and if anything goes wrong in the data center, the traffic simply gets redirected to the cloud, where everything is already running.

So what have we got? We've got backup; we've got backup plus synchronized databases; we've got a small replica environment on the cloud using auto scaling; and we've got a complete active-active, hot-hot environment. Those are your four types of disaster recovery options on the cloud.
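For option three, the small replica that scales out on failover, the key building block is an auto scaling group kept tiny until it is needed. Here is a rough, hedged sketch of what that could look like with boto3; the launch template name, subnet IDs, and capacity numbers are placeholder values, not anything specified in the video.

```python
# Rough sketch of the small, scalable replica tier in DR option three.
# The launch template, subnet IDs, and capacity numbers are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="dr-web-tier",
    LaunchTemplate={"LaunchTemplateName": "dr-web-server", "Version": "$Latest"},
    MinSize=2,            # tiny footprint while the data center is healthy
    MaxSize=100,          # room to scale out toward production size on failover
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```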
The next question we're going to ask is a networking question, and it's a really, really important one. Most of what we cloud architects do is take systems from the network and the data center and migrate them over to the cloud. Now, what are we migrating? We're migrating data as well as systems, and the way we get that data or those systems to the cloud could be one of a few ways. If we've got a private line, a direct connection, or a VPN, we can just transfer it over. But if we don't have enough capacity on those lines, or we have too much data to transport and not enough time, we might need services such as the import/export service, the Snowmobile, or the Snowball. Whether your link is enough comes down to its capacity and how much data you have to transfer, and you're never going to know that if you don't know the capacity of your link.

So the next question is: how much data can you transfer in 24 hours on a gigabit Ethernet link? I'll say it again: how much data can you transfer in 24 hours on a gigabit Ethernet link? You will see this on exams like the Certified Solutions Architect Professional, but you will also see it in interviews. So how do we determine this? We know that a gigabit per second is 1,000 megabits per second, and we know that there are eight bits to a byte. So let's do this. First, because it's megabits, we convert to bytes: we take 1,000 megabits, divide by eight, and now we know we can transfer 125 megabytes per second. There are 60 seconds in a minute, so if we take our 125 megabytes and multiply by 60, we can transfer 7.5 gigabytes per minute on this link. There are 60 minutes in an hour, so if we take the 7.5 gigabytes we can transfer in a minute and multiply by 60 minutes, that equates to 450 gigabytes per hour. And there are 24 hours in a day, so we take our 450 gigabytes, multiply by 24, and we get 10.8 terabytes. So we know, at least theoretically, that we can transfer 10.8 terabytes in 24 hours on a gigabit Ethernet link.

Next question: why can't we get the full 10.8 terabytes on a gigabit Ethernet link? This is where we're testing your networking knowledge. What we're looking for here is that you know how the network works, meaning just because we have a gigabit doesn't mean we can use all of it. For example, with TCP there's flow control going back and forth, and acknowledgments, and that costs us bandwidth. When we take the data and put it in an Ethernet frame, the Ethernet header adds overhead; then we slap an IP header on there for TCP/IP networking, and that adds overhead; we might even have a VLAN tag or something like that, and that adds overhead too. So the reason we will never get maximum performance out of our link is the overhead of the encapsulation methods, in other words Ethernet and IP. In practice there's always going to be about five to ten percent overhead on the link. So now you know why you can't get the full utilization out of a link: there's overhead in the way. It's the same reason that if you have an 80-terabyte device and you format it, you might only get, say, 72 terabytes of usable capacity, like an AWS Snowball, because of the overhead associated with it.
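You can sanity-check that arithmetic, and the effect of protocol overhead, with a few lines of Python. This is just the calculation from above expressed as code; the five to ten percent overhead figures are the rough estimates mentioned here, not exact measurements.

```python
# Gigabit Ethernet transfer estimate, following the steps described above.
gigabit_in_megabits = 1_000                              # 1 Gbps = 1,000 Mbps
megabytes_per_sec = gigabit_in_megabits / 8              # 125 MB/s
gigabytes_per_minute = megabytes_per_sec * 60 / 1_000    # 7.5 GB/min
gigabytes_per_hour = gigabytes_per_minute * 60           # 450 GB/hr
terabytes_per_day = gigabytes_per_hour * 24 / 1_000      # 10.8 TB/day

print(f"Theoretical maximum: {terabytes_per_day} TB/day")

# Rough effective throughput after ~5-10% protocol overhead (estimate only).
for overhead in (0.05, 0.10):
    print(f"{overhead:.0%} overhead -> ~{terabytes_per_day * (1 - overhead):.1f} TB/day")
```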
In this next question we're going to talk about three kinds of storage: when do you use object storage, when do you use block storage, and when do you use file storage?

First, let's talk about object storage. You should be able to tell the employer that object storage is a type of storage that is very unique in the way it operates: it takes data and breaks it down into objects, and each object carries metadata, data about that object. Because each object has metadata, it is very easy to search, very easy to query, and integrates really well into a big data environment. You should also be able to tell the employer that object storage is not regular storage and does not get used by computer systems like regular storage. Object storage is not suitable for regular computer systems because any time something is modified, even a little bit, a new version of the object is created; that's why object storage can't be used like a hard drive, whether for a swap file, an operating system, or anything that is constantly changing. You might also want to say that object storage is really more like a database, and that it's not hierarchical in nature: the data is simply placed into object storage, and there's essentially a database-style pointer that points to the location of each object. You might even want to tell them that object storage is used for software distribution, that it's great for backup and archival purposes, and that it's great for data lakes. That's realistically what you want to tell them about object storage. You could call it S3 if you're dealing with AWS, you could call it Cloud Storage if you're on the Google platform, and you could call it Blob storage if you're on Microsoft, but realistically speaking, what we're talking about is object storage.

The next type of storage is block storage. Block storage is another type of storage area network technology, where data is broken down into blocks. What makes block storage so good for the cloud providers is that it effectively lets you place the blocks anywhere they need to be in the storage environment, so it decouples your compute from your storage; that's why the cloud providers use block storage. Now, block storage is network storage, so it's not going to be as fast as local storage; its speed and throughput are limited by the network you're using. If you're limited to one gig, 10 gig, or 100 gig, that's the limit of the performance you're going to get from your block storage, because it's network storage. Also be able to describe that block storage looks and feels just like a hard drive when it's mounted. Block storage is used in a cloud computing environment when an organization needs something that functions as a virtual hard drive. Why do servers in the cloud need a virtual hard drive? Because the storage that comes with the virtual machines, which may be called instance storage or ephemeral storage, is very fast but goes away with a system reboot. So if you're going to have a server in the cloud and the server needs to keep anything stored on it, you have no choice: you're going to use block storage, because you can't store it on the instance itself. That's why organizations use block storage, and that's why cloud providers use it: because it scales so well.

The next type of storage is network file storage, and there are really two kinds. If we're dealing with Unix and Linux systems, we're really dealing with some version of the Network File System (NFS), invented by Sun Microsystems (now Oracle) a while back; if we're dealing with Windows systems, we're dealing with some form of Server Message Block (SMB). On AWS, for example, we have two options. There's their version of NFS, called the Elastic File System, and we use the Elastic File System when we've got a bunch of Linux and Unix servers that need to look at the same information: they all mount a shared drive. So for shared information used by lots of servers, we use EFS in the AWS cloud and NFS in the data center. Now let's say we've got a lot of Windows systems. We could obviously set up a server and run Samba on it, but in the AWS cloud they've got FSx, which is basically a fully managed file system for Windows, essentially managed Windows file servers. So those are the storage options you have in the AWS cloud and the purposes for each one.
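To back up the object storage answer with something concrete, you could show how each object carries its own metadata. Here is a small, hedged sketch using S3 with boto3; the bucket name, key, and metadata fields are made-up examples rather than anything from the video.

```python
# Sketch: object storage keeps metadata alongside each object (S3 example).
# Bucket name, key, body, and metadata values are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

# Store an object with user-defined metadata attached to it.
s3.put_object(
    Bucket="example-data-lake",
    Key="logs/2021/11/29/app.log.gz",
    Body=b"example log payload",
    Metadata={"source-system": "web-tier", "retention": "7-years"},
)

# Read the metadata back without downloading the object body.
head = s3.head_object(Bucket="example-data-lake", Key="logs/2021/11/29/app.log.gz")
print(head["Metadata"])   # {'source-system': 'web-tier', 'retention': '7-years'}
```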
Now, this next question is on DNS, and we're going to base it on the AWS flavor of DNS today: describe AWS Route 53 and the main routing policies and what they do. First, you need to know that AWS's brand of DNS is called Route 53; interestingly enough, DNS runs on TCP and UDP port 53. You should also know that DNS basically maps a name to an IP address, giving you a name like www.gocloudcareers.com, which is really easy to remember, as opposed to its IP address.

So now let's talk about the types of routing. The first type is the simple routing policy. What this is is we map a name, www.gocloudcareers.com, to whatever its IP address is. That's simple routing.

The next routing policy with Route 53 is the failover routing policy. This is quite simple: we've got two data centers, there are servers in both, and we say send your traffic here, and if this goes away, send the traffic there. Go to the primary; if the primary goes away, go to the secondary. That is the failover routing policy. Here's how it works: the system sends health checks. DNS keeps asking "are you there, are you there, are you there," and the data center keeps answering "I'm here, I'm here, I'm here." If the data center stops responding to the health checks, no response, traffic is shifted over to the backup data center. That is the failover routing policy.

The next policy we're going to talk about is geolocation, and geolocation is really cool. Geolocation routes traffic based upon the location of your users. Here's geolocation routing: I leave my house in Florida and go visit my family in our village in Greece. I'm in my village in Greece, I connect to the internet, I want to go to a page, and something amazing happens: it sends me to a page with Greek writing all over it. What happened? The geolocation routing policy. I go to a country, it looks at my source IP address, and from my source IP address it knows where I am, and then it sends me to the closest or most appropriate website based on that. So if I'm in Paris, I'll get sent to a French website; if I'm in the Middle East, I might get sent to an Arabic website. It is really cool: figure out the source IP address, figure out the source country, and then route the user to a different load balancer. For global organizations that might have an Arabic web page, a French web page, a Spanish web page, an Italian web page, a Greek web page, and a Mandarin web page, how neat is it that you can figure out where the user is from their source IP address and then send them to the best page for them? That is geolocation routing, and it is cool.

Now, there is another geography-based routing option, and that's called geoproximity routing. Realistically speaking, you're going to use this when you want to route traffic based upon the location of your resources, but with a little more control over where you're sending your data. That's the geoproximity routing policy: it gives you the ability to modify the size of a region and shift your traffic from point A to point B.
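To show you know how a geography-based policy looks in practice, you could sketch a geolocation record set. Here is a hedged boto3 example; the hosted zone ID, domain name, and IP addresses are placeholders invented for illustration.

```python
# Sketch: a geolocation routing policy in Route 53 (boto3).
# Hosted zone ID, domain name, and IPs below are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {   # Users whose source IP resolves to Greece get the Greek site.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "greece",
                    "GeoLocation": {"CountryCode": "GR"},
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            },
            {   # Everyone else falls back to the default location record.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "default",
                    "GeoLocation": {"CountryCode": "*"},
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "198.51.100.20"}],
                },
            },
        ]
    },
)
```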
Now let's talk about some other, much more commonly used policies. First, latency-based routing. Here's what latency routing is: when the user's request hits the DNS server, it determines which region is closest to them, which has the lowest latency, and it sends them to the environment with the lowest latency, which gives the best user experience.

The next one is called the multi-value answer routing policy, and it is kind of a random policy. Basically, you've got a couple of web servers, and multi-value answer will just route randomly to whichever one it feels like at the time. Interesting. Random. Generally speaking, if you can engineer things ahead of time, that's a good thing, so most organizations do not use multi-value answer routing; instead they use a deliberate policy. But it's good to know you have that option.

The last option we'll talk about, and it's a great option, is the weighted routing policy. Weighted routing enables you to load share. Maybe you want to send 50 percent of your traffic to one place and 50 percent to another, or 70 percent and 30 percent. Or better yet: you've got your old website and it's running great, everything's good, users are happy, and your team builds a new website, but you don't want to lose the old one yet. So with DNS you send 90 percent of traffic to the old site and 10 percent to the new site, and you get feedback: the new website works, it's great. Then you shift another 20 percent over to the new website, and finally you shift everybody over once you know it's good. Weighted routing is a perfect option for rolling out new websites and a great way to test new applications.

So AWS's DNS flavor is Route 53, and we talked about simple routing, failover routing, geolocation routing, geoproximity routing, latency-based routing, multi-value answer routing, and of course weighted routing.

Today we talked about the disaster recovery options, data transfer speeds, overhead on the link, storage and why it's used on the cloud, and DNS routing policies. Thank you so much for watching this video. I look forward to seeing you in a new video very soon. Take care.

It was so nice having you join us for this video today. Let me tell you about some free services we do for the cloud community. Once per week we have a free question-and-answer session live on YouTube, where you can ask us any questions you want about building your career in cloud computing or networking, and we'll answer them in real time, because we want to get you to your goals. Several more times per week we have guests from industry, experts I've known for decades, movers and shakers who have changed the world, who can give you information so you can build the best career. I invite them onto my show periodically, and if there's a chance to do some free training on our channel, we'll do it live, because we want you all to have the best skills for the best career. So please subscribe and hit the bell. I look forward to seeing you and to assisting you in your technology career. Thank you so much. This is Michael Gibbs from Go Cloud Architects.
Info
Channel: Go Cloud Architects
Views: 42,608
Keywords: cloud architect interview questions, solution architect interview guidance, cloud architect technical interview, cloud architect career guidance, solutions architect technical interview, cloud architect job, solution architect interview, cloud architect interview guidance, aws interview questions for solution architect, aws solution architect interview, aws interview tips, aws solutions architect interview questions, azure solution architect interview questions, cloud hired
Id: 8IJUHf5cdbc
Length: 20min 54sec (1254 seconds)
Published: Mon Nov 29 2021