Backend Developer Mock Interview | Interview Questions for Senior Backend Developers

Video Statistics and Information

Captions
[Music]

Jose: Hello everyone, we are back with another round of the Turing mock interview series. I am Jose from Montreal, Canada, and at Turing I work on hiring the best engineers by helping them through the vetting process. I have more than 17 years of experience, and my expertise lies in JavaScript. Today we'll be interviewing Faisal for the role of an experienced back-end developer, and stick around until the end, because this interview will blow your mind. First of all, hi Faisal, how are you doing? How has your day been so far?

Faisal: I'm all good, it has been a great day so far. How are you doing, Jose?

Jose: I'm doing great, as always. Thanks for asking. First of all, could you please introduce yourself and tell me a little bit about your professional experience and background, and then I'll take it from there?

Faisal: Sure, Jose. I am Faisal from Pune, India. I am a Google-certified Associate Cloud Engineer and a full-stack developer with five years of experience in designing, developing, and deploying scalable, highly available, and secure applications on Web 2.0 and Web 3.0 technology stacks. I have solid expertise in Web 2.0 technologies such as React, Angular, Node, MongoDB, and SQL, and I have hands-on experience with Web 3.0 technologies such as Ethereum, Polygon, Solidity, and IPFS. I am looking forward to working in a fast-paced startup environment where I can leverage my experience and expertise to create products that scale, add value to human lives, and create revenue streams for the organization.

Jose: That's awesome, thanks for sharing. I'm glad to be your interviewer. Before every interview we check each candidate's profile and the tests they have done at Turing, and I checked yours beforehand. Your background, as I said, looks amazing. Could you please tell me a little bit about some exciting back-end projects you have worked on?
Faisal: Sure, Jose. I've worked on more than 10 projects in the past five years, but in the interest of time let me talk about the one I found the most interesting and the most relevant for this position. Recently I got an opportunity to develop a cross-platform application for an American client in the domain of automobile servicing. It was a team of 15 members, and we created modules such as user authentication, scheduling automobile servicing appointments with pick-up and drop-off, tracking the service history of a vehicle, and nudging users so that they would send their vehicles for servicing at regular intervals. In this project we used the power of React and React Native to develop a cross-platform application that was accessible both as a web application and as a mobile app. On the back end we used Node.js and MongoDB, and we deployed the entire solution on Microsoft Azure. With these technologies we were able to provide dynamic scalability, business continuity, and low operational cost to our client.

Jose: Cool. Believe me, when I hear "low operational cost" I love it, and I'm pretty sure they loved it as well. I have a follow-up question: why did you choose this tech stack of React, React Native, Node.js, MongoDB, and Azure?

Faisal: Sure, let me go one by one. One of the major reasons for choosing React and React Native is that we wanted to create a cross-platform application: our view layer was different for web and mobile, but the logic behind the view layer had to be shared. Between React and React Native, we created a shared library that was used in both the web application and the mobile app. That is how we reduced the time and effort that went into creating the front end and brought in reusability.
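[Editor's note: the "shared library" idea Faisal describes, where the view layers differ but the business rules are shared, can be sketched as a plain-JS module that both the React web app and the React Native app would import. The module name, plate format, and service interval below are illustrative, not the project's actual code.]

```javascript
// shared/validation.js -- touches no DOM or native APIs,
// so both the web bundle and the mobile bundle can import it.

// Hypothetical plate rule: two letters, two digits, two letters, e.g. "MH12AB".
function isValidPlate(plate) {
  return /^[A-Z]{2}\d{2}[A-Z]{2}$/.test(plate);
}

// Compute the next nudge date from the last service (interval is illustrative).
function nextServiceDue(lastService, intervalDays = 180) {
  const due = new Date(lastService.getTime()); // copy, don't mutate the input
  due.setDate(due.getDate() + intervalDays);
  return due;
}

// Both platforms would do: import { isValidPlate, nextServiceDue } from './shared/validation';
const due = nextServiceDue(new Date(2022, 0, 1));
console.log(isValidPlate('MH12AB'), due.getMonth() + 1, due.getDate()); // true 6 30
```

Because the module uses only plain JavaScript, the same file can be bundled into both platforms, which is the reuse Faisal describes.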
Faisal: Talking about Node.js, one of the major reasons we picked it is its scalability, and it gels very well with MongoDB. The reason we chose MongoDB is that it provides horizontal scaling using sharding out of the box. Moreover, since our company was a Microsoft Azure partner, we chose the Azure cloud platform, which also reduced the cost for the client.

Jose: That was an interesting answer. I'm going to ask you later how you scaled your Node.js application and your database, but for now let's stick with the back end: what are some of the best practices you use for performance testing?

Faisal: Sure, that's an interesting question. I've seen many teams struggle with performance testing, and I feel it should be performed in any application from the start. Let me talk about some of the performance-testing practices we recently used on the automobile servicing project. The client being a startup, we were unsure of the load the system would have to handle from day one during normal and peak traffic hours, so we had to come up with assumptions for queries per second and requests per second based on the market area the client wanted to cover. First and foremost, we created a separate performance-testing environment on Azure with a configuration similar to production, with multiple read replicas and a write database. We leveraged Azure serverless functions so that the application logic had dynamic scalability out of the box and the business-logic layer would never become the bottleneck of the application. We also deployed load balancers to ensure the load was distributed approximately evenly among all service instances and that the available resources were used efficiently. Finally, we used JMeter to track the performance of the application and tweak the architecture until it met the assumed queries per second and requests per second. That was all about the performance testing we did there.
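[Editor's note: in the project, JMeter drove the load tests. As a toy stand-in, a minimal throughput probe can be sketched in plain JavaScript; the handler and duration here are hypothetical, and real performance testing should exercise the deployed system over the network, as described above.]

```javascript
// Toy throughput probe: how many calls per second does a handler sustain?
// (JMeter measures this against a live deployment; this is only the idea.)
function measureThroughput(handler, durationMs = 200) {
  const start = Date.now();
  let calls = 0;
  while (Date.now() - start < durationMs) {
    handler();
    calls++;
  }
  return Math.round(calls / (durationMs / 1000)); // calls per second
}

// Hypothetical stand-in for a request handler doing a little work.
const fakeHandler = () => JSON.parse('{"ok":true}');
const rps = measureThroughput(fakeHandler);
console.log(`sustained ~${rps} calls/sec`); // machine-dependent
```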
Jose: Cool. In your experience, what is acceptable in terms of requests per second?

Faisal: That ultimately boils down to the number of users we are expecting. You can push the requests per second and queries per second higher based on the expected user base, but going higher also means incurring more cost, because you have to scale up the application layer as well as the database layer. So we have to go case by case rather than aiming for a fixed number; there is no one-size-fits-all figure for queries per second and requests per second, in my opinion.

Jose: Cool. So my next question is: how did you scale your Node.js application in the project you mentioned?

Faisal: Sure. First and foremost, we used the dynamic scalability that comes out of the box when you go with a platform-as-a-service offering, in our case Microsoft Azure serverless functions. With serverless functions, we developers need not worry about deployment; we only worry about the application code, and the cloud provider, Azure in our case, takes care of dynamic scalability and of keeping pre-warmed instances of those services so we can serve the peaks of traffic.
Faisal: Talking about the database layer, MongoDB provides sharding out of the box. By sharding I mean we can horizontally scale a MongoDB database by choosing the correct shard key. Let me give you an example of how we horizontally scaled the servicing-history details: we sharded the data based on the state the car belonged to, so we had multiple shards, each storing the servicing history of cars belonging to a given state in America. The benefit we reaped is that no single database carries the full load; the load is distributed roughly equally among all the available shards. In addition, to scale reads we created multiple read replicas, so that one database is not being hit with both read and write queries: transactions are handled by one database, and read queries are handled by the read replicas, which MongoDB supports.

Jose: That's cool, you just killed my next question about scaling reads, but I'm glad you mentioned it. So basically the master sits behind a proxy, all the write requests go to one server, and the read requests go to the replicas, right?

Faisal: Yes, exactly.
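[Editor's note: the state-based sharding Faisal describes can be sketched as a toy shard router. In real MongoDB the mongos process routes queries automatically based on the shard key; the hash function and shard names here are illustrative.]

```javascript
// Toy shard router: every state deterministically maps to one shard,
// so all reads and writes for that state's cars hit the same database.
const SHARDS = ['shard-east', 'shard-central', 'shard-west'];

function shardFor(state) {
  let hash = 0;
  for (const ch of state.toUpperCase()) {       // normalize the shard key
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return SHARDS[hash % SHARDS.length];
}

console.log(shardFor('Texas'));                     // always the same shard
console.log(shardFor('texas') === shardFor('TEXAS')); // true: key is normalized
```

Because routing is deterministic, load spreads across shards instead of hammering one database, which is exactly the benefit described above.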
Jose: So, what do you understand by the ACID properties of a system?

Faisal: Great question. The term ACID refers to the atomicity, consistency, isolation, and durability of a transaction in a database system. Let me talk about each of them. Atomicity guarantees that if one of the queries that is part of a transaction fails, the entire transaction fails and the database is rolled back to its previous state. Consistency ensures that after performing a transaction, the database is always in a valid state as per the constraints defined on the database tables. Isolation ensures that concurrent execution of transactions results in the same state that would be obtained if those transactions were executed sequentially. Finally, durability makes sure that once a transaction has been committed, it remains in the database even in the face of a catastrophic failure such as a power loss or a network issue. That was all about the ACID properties of a database system.

Jose: Cool. And since you mentioned catastrophic failures, in my case here it would be a snowstorm.
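[Editor's note: the atomicity-and-rollback behavior Faisal describes is provided by the database itself; this in-memory toy merely mimics it with a snapshot-and-restore around a list of operations.]

```javascript
// Toy "transaction": apply every operation, or roll everything back.
function runTransaction(db, operations) {
  const snapshot = JSON.parse(JSON.stringify(db)); // save the prior state
  try {
    for (const op of operations) op(db);           // apply each "query"
    return true;                                   // commit: all succeeded
  } catch (err) {
    Object.keys(db).forEach((k) => delete db[k]);  // roll back to the snapshot
    Object.assign(db, snapshot);
    return false;
  }
}

const accounts = { alice: 100, bob: 50 };

// Transfer that succeeds: both updates land together.
const ok = runTransaction(accounts, [
  (db) => { db.alice -= 70; },
  (db) => { if (db.alice < 0) throw new Error('constraint violated'); db.bob += 70; },
]);
console.log(ok, accounts); // true { alice: 30, bob: 120 }

// Transfer that violates a constraint: neither update survives.
const ok2 = runTransaction(accounts, [
  (db) => { db.alice -= 70; }, // would leave alice at -40
  (db) => { if (db.alice < 0) throw new Error('constraint violated'); },
]);
console.log(ok2, accounts); // false { alice: 30, bob: 120 } -- rolled back
```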
Jose: You mentioned layering your application several times. What do you mean by layering, why is it important, and could you give some good and bad examples?

Faisal: Sure. Layering, in my opinion, is of absolute importance for back-end applications: it ensures separation of concerns, unit testability, and reusability of the code. One of the most common layering approaches is to divide the application into a controller (or routing) layer, a services (or business-logic) layer, and a repositories (or data-access) layer. Let me talk about one of the APIs that we unfortunately had to refactor because it did not follow the correct principles of layering. There was an API to fetch the car servicing-history details, that is, the previous services a car had gone through and their respective service centers; it was mainly used for reporting. The developer who originally built this API assumed it would always be called by a REST client, and made the mistake of returning HTTP responses such as 500, 200, and 201 from the business-logic layer. Later, an internal API needed to access the same servicing-history logic, and the problem was that internal API-to-API communication happened over the gRPC protocol, whereas the developer had hard-coded HTTP status codes into the business-logic layer. To reuse the same business logic for the gRPC client, we had to abstract the HTTP status handling out of the business-logic layer and keep it in the controller, so that the same business-logic layer could serve both a REST client and a gRPC client. That is why I always consider it worth spending some effort on low-level design, so that the layering is thought through up front rather than at a later point in time.
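[Editor's note: the refactoring Faisal describes, protocol-agnostic business logic with thin protocol-specific controllers, can be sketched as follows. The repository and data are hypothetical; the gRPC numbers follow the standard gRPC status codes (0 = OK, 5 = NOT_FOUND).]

```javascript
// --- service layer: no HTTP or gRPC knowledge, returns plain domain results ---
function getServiceHistory(carId, repository) {
  const history = repository.findByCarId(carId);
  if (!history) return { kind: 'NOT_FOUND' };
  return { kind: 'OK', history };
}

// --- REST controller: maps domain results to HTTP status codes ---
function restController(carId, repository) {
  const result = getServiceHistory(carId, repository);
  return result.kind === 'OK'
    ? { status: 200, body: result.history }
    : { status: 404, body: { error: 'car not found' } };
}

// --- gRPC-style adapter: same service, different status vocabulary ---
function grpcController(carId, repository) {
  const result = getServiceHistory(carId, repository);
  return result.kind === 'OK'
    ? { code: 0, response: result.history } // 0 = OK in gRPC
    : { code: 5, response: null };          // 5 = NOT_FOUND in gRPC
}

// Hypothetical in-memory repository for the demo.
const repo = {
  findByCarId: (id) => (id === 'car-1' ? [{ date: '2022-05-01', center: 'Pune' }] : null),
};
console.log(restController('car-1', repo).status); // 200
console.log(grpcController('car-9', repo).code);   // 5
```

The key point is that only the two thin adapters know about status codes; the service layer stays reusable, which is the fix Faisal's team made.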
Jose: And did you use any design pattern, such as pub/sub? How did the communication between your REST API and your gRPC application work?

Faisal: I remember one occasion where we used pub/sub. We had webhooks in place to receive events from a third party. Whenever we received such a webhook, an event emitter would emit an event, and a subscriber that was always listening for that event would log the webhook into a database table. That is one of the places where we used the pub/sub pattern in our application. I hope this answers the question.

Jose: It does; I'm not going deeper on this question today.
Jose: A tip for our audience: if you want more questions about design patterns, leave a message in the comment section below and we will make a video for you explaining design patterns, and system design as well if you want it. I would also like to call upon our React, back-end, and front-end developer fellows: if you want a remote job, my tip is to go to turing.com/jobs, search for jobs by tech stack, and apply for the one that suits you best. Once you apply, you just need to pass through the vetting process, and then you can get a remote job, as we did. One more thing: if you are enjoying this content, please consider giving this video a big fat thumbs up, comment below with how you would have answered differently, and subscribe to our channel if you haven't already. Now let's go back to you, Faisal, and talk more about databases. You mentioned sharding, which was great, but I would like to ask you about indexes. What do you understand about indexing in a database?

Faisal: Sure. One of the major reasons to index data in a database is to make searches efficient. Let me talk about the different kinds of indexes: we have clustered indexes and non-clustered indexes. When a clustered index is applied to a table column, the rows are physically stored in the database sorted by that column's values. Whenever we apply a primary key to a column, that column is automatically treated as the clustered-index column, so whenever we retrieve data it comes back ordered by the primary key by default and we need not apply any sorting on it. On the other hand, when a non-clustered index is created on a table column, the database creates a separate structure holding the sorted values of the indexed column together with pointers to the actual physical rows. For example, assume there is a students table with an id and a name. The id would generally be the primary key and thus the clustered index, and if there is a requirement to search students by name, we can create a non-clustered index so that the search by name is performed efficiently. That is how I have used indexing in a couple of my projects. I hope this answers the question.

Jose: It does. Let me extend the example: suppose I update the table and add an email column, so now we have id, name, and email; the email is unique, while names can repeat. I want to search by name and email together, and my query starts getting slow. What could be the reason, and how could we debug and improve it?

Faisal: One way to improve the query in this particular case is to create a composite index, which can be applied to a group of columns. If you want to improve query performance across multiple columns together, a composite index on that group of columns is the way to go.

Jose: Cool. And is that considered a clustered or a non-clustered index?

Faisal: It would definitely be a non-clustered index.
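[Editor's note: a non-clustered index, a separate structure mapping indexed values to row locations, can be modelled with a toy lookup table, together with the composite (name, email) index from the follow-up question. A Map stands in for the B-tree a real database would use, and the data is illustrative.]

```javascript
const students = [
  { id: 1, name: 'Asha', email: 'asha@example.com' },
  { id: 2, name: 'Ravi', email: 'ravi@example.com' },
  { id: 3, name: 'Asha', email: 'asha2@example.com' },
];

// Non-clustered index on name: value -> array of row positions (names repeat).
const nameIndex = new Map();
students.forEach((row, pos) => {
  if (!nameIndex.has(row.name)) nameIndex.set(row.name, []);
  nameIndex.get(row.name).push(pos);
});

// Composite index on (name, email): both columns combined into one key.
const nameEmailIndex = new Map();
students.forEach((row, pos) => nameEmailIndex.set(`${row.name}|${row.email}`, pos));

// Index lookups instead of scanning every row:
const ashaRows = (nameIndex.get('Asha') || []).map((pos) => students[pos]);
console.log(ashaRows.length);                                          // 2
console.log(students[nameEmailIndex.get('Asha|asha2@example.com')].id); // 3
```

The separate-structure-plus-pointer shape is exactly what makes a non-clustered index fast: the table itself is untouched, and the index answers "where are the matching rows?".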
Jose: Great. Next question: what's the difference between WHERE and HAVING?

Faisal: That's an interesting one. The WHERE clause is used to add conditions on the rows of a table, whereas the HAVING clause is used to add conditions on the aggregations obtained from those rows. Let me give you a business requirement where we used WHERE and HAVING together, along with GROUP BY. We had to fetch the brands of cars whose mileage is greater than 10 miles per liter and whose average yearly maintenance cost is less than one thousand dollars. In that scenario, WHERE filtered down to the cars whose mileage is more than 10 miles per liter, then we used GROUP BY to group by brand, and we used HAVING with AVG as the aggregate to compute the average yearly maintenance cost and keep only the brands where it was under one thousand dollars. That is how we used WHERE, followed by GROUP BY, followed by HAVING, to get the brands satisfying the criteria.

Jose: Cool, and I have one more question about WHERE: what's the difference between WHERE and ON in SQL?

Faisal: That's again an interesting one; people often confuse WHERE, ON, and HAVING in SQL. ON is used to define the join condition between tables, whereas WHERE is used to define filter criteria on the joined rows. In the case of an inner join, WHERE and ON can be used interchangeably, but in my opinion, for readability and accuracy of results, we should always use ON to provide the conditions for joining two tables and WHERE to filter the joined rows. That is the difference between ON and WHERE in SQL.
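[Editor's note: the WHERE → GROUP BY → HAVING pipeline from the answer, mirrored in plain JavaScript so the order of operations is visible. Data and thresholds are illustrative.]

```javascript
const cars = [
  { brand: 'A', mileage: 12, yearlyCost: 800 },
  { brand: 'A', mileage: 14, yearlyCost: 1400 },
  { brand: 'B', mileage: 11, yearlyCost: 900 },
  { brand: 'B', mileage: 13, yearlyCost: 950 },
  { brand: 'C', mileage: 8,  yearlyCost: 500 }, // dropped by WHERE
];

// WHERE mileage > 10 -- a row-level filter, applied before any grouping.
const filtered = cars.filter((c) => c.mileage > 10);

// GROUP BY brand.
const groups = new Map();
for (const c of filtered) {
  if (!groups.has(c.brand)) groups.set(c.brand, []);
  groups.get(c.brand).push(c);
}

// HAVING AVG(yearlyCost) < 1000 -- a filter on the aggregate, after grouping.
const brands = [...groups.entries()]
  .filter(([, rows]) => rows.reduce((s, r) => s + r.yearlyCost, 0) / rows.length < 1000)
  .map(([brand]) => brand);

console.log(brands); // [ 'B' ] -- A averages 1100; C never reached GROUP BY
```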
Jose: Cool. Let me share my screen and show you two queries; I'll walk through them and then ask you one question about them. Here I have this users table. Could you explain what the first query is doing?

Faisal: Sure. In the first query, we are selecting all the columns from the users table and performing an inner join with the dependents table where the dependents' user_id equals the users' id; basically there is a primary-key/foreign-key relationship from the users table to the dependents table. After joining, we keep the users whose id is greater than 10 and whose name contains "turing". That is what the first query fetches.

Jose: Correct. And what about the second one?

Faisal: In the second query, rather than joining only on the primary-key/foreign-key relationship, we add the extra conditions to the join itself: the user's id should be greater than 10 and the name should contain "turing".

Jose: Correct. So supposedly we get the same results. What's the difference between these two queries?

Faisal: The major difference is where the conditions are evaluated. In the first query, the join is performed across the tables first, and then the joined records are filtered. In the second query, the join itself is performed using the full condition, including the AND and the LIKE, which prevents the join from touching extra rows.

Jose: Correct. And which one is recommended, the first or the second?

Faisal: The recommended one is the first: as I mentioned, we should use ON to provide the joining condition and WHERE to provide the filter criteria.
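[Editor's note: the two screen-shared queries can be mirrored in plain JavaScript. As Faisal notes, for an INNER JOIN the extra conditions yield the same rows whether they sit in ON or in WHERE; they are just applied before versus after the join. The table contents are invented for the demo.]

```javascript
const users = [
  { id: 9,  name: 'turing fan' },
  { id: 11, name: 'turing dev' },
  { id: 12, name: 'someone else' },
];
const dependents = [
  { userId: 9,  dep: 'a' },
  { userId: 11, dep: 'b' },
  { userId: 12, dep: 'c' },
];

// The extra conditions: id > 10 AND name LIKE '%turing%'.
const extra = (u) => u.id > 10 && u.name.includes('turing');

// Query 1: JOIN ... ON d.userId = u.id, then WHERE applies the extra conditions.
const q1 = users
  .flatMap((u) => dependents.filter((d) => d.userId === u.id).map((d) => ({ u, d })))
  .filter(({ u }) => extra(u));

// Query 2: JOIN ... ON d.userId = u.id AND id > 10 AND name LIKE '%turing%'.
const q2 = users
  .filter(extra) // conditions evaluated as part of the join itself
  .flatMap((u) => dependents.filter((d) => d.userId === u.id).map((d) => ({ u, d })));

console.log(q1.length === 1 && q2.length === 1); // true: same single row either way
```

For an outer join the placement would matter (rows failing an ON condition are kept with NULLs, rows failing a WHERE condition are dropped), which is one reason the ON-for-joining, WHERE-for-filtering convention aids accuracy as well as readability.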
Jose: Great, thanks for explaining that. My next question is related to the API gateway: could you explain the API gateway pattern?

Faisal: Sure. An API gateway sits between the clients and the services; it acts as a reverse proxy that routes requests from clients to the APIs behind the gateway. In addition to routing, an API gateway may take on other responsibilities such as authentication, monitoring, load balancing, SSL termination, request/response caching and shaping, static response handling, and so on. One of the major reasons to use an API gateway is that it encapsulates the internal structure of the application and hides it from the outside world. With an API gateway, the client, that is, the front end, need not know the IP addresses or domain names of every API deployed in a microservices setup; it only needs to know the address of the gateway, which takes care of routing requests to the respective APIs. That is the benefit of having an API gateway in your architecture.

Jose: Cool, let's move on to databases again. You mentioned that your infrastructure used MongoDB; what kind of caching mechanisms are you familiar with?

Faisal: We used Redis to perform in-memory caching. Redis is an in-memory key-value data store that can be used for caching frequently accessed data. One requirement was to keep master data away from MongoDB: we did not want to burden the MongoDB read replicas with queries for master data such as country details. So every time a service spun up, it would load the in-memory Redis database with all the master data, and whenever there was a request from the client side to fetch master data, we would always serve it from Redis rather than loading MongoDB with read queries that could easily be handled by a caching layer.

Jose: Cool, that was interesting. So when should we use Redis over MongoDB, or would using both be a great fit?

Faisal: It is a great fit. Let me compare MongoDB and Redis briefly. MongoDB is a schemaless, document-based, permanent data store. Redis, on the other side, is an in-memory key-value data store that enables faster access to data, but its storage isn't permanent in the way MongoDB's is.
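[Editor's note: the master-data caching Faisal describes is the classic cache-aside pattern. Here a Map stands in for Redis and a plain function for the MongoDB query; a real implementation would use a Redis client and add key expiry.]

```javascript
const cache = new Map(); // stand-in for Redis
let dbHits = 0;          // counts how often we fall through to "MongoDB"

function fetchCountriesFromDb() {
  dbHits++;
  return ['Canada', 'India', 'USA']; // stand-in for a master-data query
}

function getCountries() {
  if (cache.has('countries')) return cache.get('countries'); // cache hit
  const data = fetchCountriesFromDb();                       // cache miss
  cache.set('countries', data);                              // populate on the way out
  return data;
}

getCountries();
getCountries();
getCountries();
console.log(dbHits); // 1 -- the database was queried once; the cache served the rest
```

This is why the MongoDB read replicas stay free for real queries: after the first request, master data never touches the database again until the cache is refreshed.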
Faisal: Let me give you an example of how we used both together. There was a requirement in the automobile servicing application to create leaderboards: whoever serviced the vehicles and received better feedback from the customers would be awarded more points, and those points would eventually be converted into dollars when their salary was credited. These leaderboards are frequently accessed but change only monthly, so we stored the leaderboard data in the Redis cache rather than triggering a MongoDB query every time someone fetched the leaderboard details. That is how we used the best of both Redis and MongoDB together to get performance out of the architecture.

Jose: Cool. And regarding the tech stack you mentioned earlier: if Node.js is single-threaded, how does it handle concurrency?

Faisal: This was a question that came from our client as well. The magic lies in the way Node handles asynchronous operations such as file I/O, DNS lookups, and network calls. Whenever Node gets an I/O request, it hands that request to a separate thread, and that thread performs the I/O operation. Once the operation either completes or errors, an event is placed on Node's event queue. The event loop keeps reading messages from the event queue, and as soon as it finds that an I/O operation, or any asynchronous operation, has completed, it continues that execution on the main thread. That is how Node, despite being single-threaded, internally and intelligently uses separate threads to handle asynchronous requests.
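[Editor's note: the completion-queue behavior Faisal describes can be shown with a deliberately tiny toy: callbacks are queued while synchronous code runs, and a loop drains them only once the call stack is empty. Node's real event loop lives in libuv and is far more elaborate; this sketch only illustrates the ordering.]

```javascript
// Toy model of Node's event loop (illustration only).
const eventQueue = [];
const order = [];

function simulateAsyncIO(name) {
  // In real Node, the I/O is handed off; completion enqueues a callback.
  eventQueue.push(() => order.push(`${name} callback`));
}

simulateAsyncIO('readFile');
order.push('sync work');       // main-thread code keeps running...
simulateAsyncIO('dnsLookup');
order.push('more sync work');  // ...never blocked by the pending I/O

// The "event loop": drain queued callbacks only after the sync code finishes.
while (eventQueue.length) eventQueue.shift()();

console.log(order);
// [ 'sync work', 'more sync work', 'readFile callback', 'dnsLookup callback' ]
```

This is why all synchronous work completes before any I/O callback runs: the single thread is never interrupted, it simply picks up completed events when it becomes free.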
Jose: Cool, and that was my last question for you. Thank you for a really nice conversation; I hope everybody liked it as much as I did. To everybody else, thank you for tuning in to this mock interview series. We'll be back very soon with many more mock interviews, so please let me know in the comment section below which kind of mock interview you would like to see from us, and what other kinds of videos as well; just drop a comment and we will make it for you. Don't forget to subscribe to our channel and hit the like button if you enjoyed this content, and let me know in the comments if you would have answered these questions differently; I would love to see that. Follow Turing on Twitter, Instagram, Facebook, LinkedIn, and YouTube; we are everywhere. Stay close to Turing, and I'm pretty sure you will find the job you are looking for. That said, that's a wrap. Thank you again, Faisal, for your kind words and your amazing answers. Take care, stay safe, and happy coding! [Music]
Info
Channel: Turing
Views: 77,084
Keywords: Backend Development, Backend Developer Job, Backend Jobs, Backend Developers, Turing Backend Jobs, Turing Backend Developers, Remote Backend Developers, Turing Backend Developer Jobs, Backend Development Jobs, Turing Remote Jobs, Turing Remote Developers
Id: e_i2RETG0yo
Length: 31min 55sec (1915 seconds)
Published: Thu Jul 07 2022