Mock Interview 5+ year experienced | Spring Boot | Java | Microservice | System Design | Code Decode

Captions
Interviewer: Hey guys, welcome to Code Decode. In this video we cover a mock interview with Denish, who has around five-plus years of experience as a Java backend and frontend developer. If you want to be a part of this interview series, fill out the form linked in the description, add your details, and we will reach out ASAP. We are also thinking of starting a series where we review your resume, so if you feel your resume is not up to the mark and you want it reviewed, let us know in the comments; there is a second link in the description where you can add your resume and details. So there are two forms: one to be part of the mock interviews, and one for a recorded resume review session. Let's get started with Denish now. Hi Denish, how are you?

Denish: I'm doing very well, actually. How are you?

Interviewer: All good. Before we start, I can see a very nice background. Where are you right now, and how is the weather?

Denish: I'm in Canada right now, in Montreal. It's snowing and a bit cold, around minus two, so chilly weather.

Interviewer: Nice to know. Let's get started. Can you please introduce yourself?

Denish: Sure. My name is Denish. I'm originally from India and moved to Canada two years back for my master's. I've finished my studies and I'm now looking for jobs, especially in the software development field.

Interviewer: Awesome. Can you briefly discuss your tech stacks and projects?

Denish: Sure. I worked with Oracle in India on a Java and Spring Boot stack for web applications. After I moved here I worked for a startup with React and Node.js, and recently I did an internship with .NET. So I have several tech stacks in my bag, but Java is always my go-to language; it's the first language I choose whenever I take a coding test.

Interviewer: Awesome. I can see from your resume that you have worked on the frontend as well as the backend, so can you give some insight into how you implemented Spring Security end to end? How have you secured your application, frontend to backend?

Denish: Sure. We generally enforce security mainly on the backend, because the main thing we have to ensure is that our servers are protected. We typically use JWTs, because HTTP REST APIs are stateless: every request arrives with no prior history, so authenticating from scratch each time would be a burden. With JWT we hash the authentication details into a token so we can authorize the user on subsequent requests. That's the backend side. On the frontend we pass the same JWT with every request, in a cookie or session storage, so the backend can read it. The token also carries a TTL, a time to live, so it has a shelf life of a few minutes or seconds, and once that expires the user has to re-authenticate.

Interviewer: Okay, and that's where refresh tokens come in, when the expiry is reached.

Denish: Yes, exactly.
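A minimal sketch of the issue-and-verify flow Denish describes, assuming the jjwt library (io.jsonwebtoken, 0.11.x) on the classpath; the class and method names are illustrative:

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

import javax.crypto.SecretKey;
import java.util.Date;

public class TokenService {

    // In a real system the key comes from secure configuration,
    // not a per-instance random key.
    private final SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);

    // Issue a short-lived token: the expiration is the TTL discussed above.
    public String issue(String userId, long ttlMillis) {
        Date now = new Date();
        return Jwts.builder()
                .setSubject(userId)
                .setIssuedAt(now)
                .setExpiration(new Date(now.getTime() + ttlMillis))
                .signWith(key)
                .compact();
    }

    // Verify signature and expiry; a tampered or expired token throws JwtException,
    // at which point the client must re-authenticate or use its refresh token.
    public Claims verify(String token) {
        return Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(token)
                .getBody();
    }
}
```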
Interviewer: And what kind of authorization server have you used for getting your refresh and access tokens?

Denish: I have generally used Okta with OAuth 2.0; I think it's one of the most popular authentication and authorization tools. Spring also comes with Spring Security, which integrates nicely on top of it, so that's the combination I've used.

Interviewer: I agree: Okta as the authorization server, and Spring Security is something that integrates properly with Okta. Now my question is, from where did you hit Okta? Usually Okta gives you a page where you enter your user ID and password, and it returns access tokens that you store in the session. How does the complete flow work: from which place do you hit the URL, how is authentication done, how is authorization done with Okta, what are the roles and grant types? I'd like to dive a bit deeper if you're okay with it.

Denish: Sure. When we create an account in Okta, we get an API key, or access key, and we also configure callback URLs: the URLs Okta redirects to once authentication succeeds, which we point at our own application endpoints. We use that API key in our application, so whenever a client makes a request, we first send them to the Okta server, which handles authentication itself. That authentication could be our own username-and-password login, or any social login: Facebook, Google, or LinkedIn. Once authentication succeeds, Okta redirects back to our application along with the access token. Based on the access token comes authorization: we grant only the accessibility the user's role permits, which limits the user on the server side.

Interviewer: Okay, so basically from the frontend you hit Okta for the authentication part, get the access token, and with that access token you hit the backend.

Denish: Yes. We can also avoid hitting Okta directly from the frontend: we can have our own web layer in between, which the frontend calls first and which in turn calls Okta. There are two ways, but I would go with the second one.
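A minimal sketch of the backend side of this setup, assuming Spring Boot 2.7 / Spring Security 5.x with the spring-boot-starter-oauth2-resource-server dependency; the issuer URI is a placeholder for an Okta dev domain:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    // With this chain, every request must carry a bearer token; Spring Security
    // validates its signature, expiry, and issuer as a JWT.
    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .oauth2ResourceServer(oauth2 -> oauth2.jwt());
        return http.build();
    }
}
```

The authorization server is then pointed at via configuration, for example `spring.security.oauth2.resourceserver.jwt.issuer-uri=https://dev-123456.okta.com/oauth2/default` (domain illustrative).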
Interviewer: Now suppose an unethical hacker sitting in between is trying to access your backend application. How would you make sure that the token he presents is a legitimate token?

Denish: The token is generated using a hashing algorithm, so it's one-way. A JWT splits into three parts, and one of them is the signature, which exists precisely to verify whether the hash has been corrupted or tampered with. Assuming the signature checks out, the middle part carries the actual content — it could be a password or some authentication ID which we also store in the database — and once we get that part back, we check it against the database to confirm it matches.

Interviewer: I was curious how you validate that token at the backend.

Denish: Once we get the middle part of the JWT, which holds the user information, we check it against the database: if the password or authentication ID matches, the user is the actual trusted user rather than someone spoofing the token. Even if a forged token looks similar, hashing it back will not produce the same value.

Interviewer: But it's always said that the claims — the middle, body part — should not contain passwords or anything sensitive, because a JWT is just encoded: you can go to jwt.io and decode it, so having the password there would reveal it to attackers.

Denish: I think it works on hashing, which is one-way mapping: from the keyword you can compute the hash, but not the other way around, so you can't recover the actual password. Only the user knows his own password, and when we save it in the database it's always the hash; we never store the raw password in the tables.

Interviewer: Correct. So when the JWT comes, you should never have passwords or personal information in the body, because it is easily viewable — you can paste your token into jwt.io, decode it, and see everything in the body. So, correct on that front: never put sensitive information in the body part.

Denish: Yes.
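To make the jwt.io point concrete: the payload (middle part) of a JWT is only Base64URL-encoded, not encrypted, so a few lines of plain Java are enough to read it — which is exactly why secrets must never go in the claims:

```java
import java.util.Base64;

public class JwtPeek {
    public static void main(String[] args) {
        String token = args[0]; // any JWT, e.g. copied from an Authorization header
        String[] parts = token.split("\\.");
        // parts[0] = header, parts[1] = claims, parts[2] = signature
        String payloadJson = new String(Base64.getUrlDecoder().decode(parts[1]));
        System.out.println(payloadJson); // readable by anyone holding the token
    }
}
```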
Interviewer: Nice, sounds good to me. Now, I can see that you have implemented schedulers in your application, so my question is: what kind of scheduler did you implement? And suppose that scheduler code is deployed on multiple instances in the cloud — say three instances — so it runs three times. Suppose the scheduler inserts a record in the database, for example ID 1 at 12:00. I would end up with three entries in the database when I should have just one. How do you prevent that scenario? Any clue?

Denish: Generally we go with the Spring scheduler, but the drawback, as you mentioned, is that with multiple instances the Spring scheduler can't coordinate across them, so the job ends up running on all three. We want singleton behaviour here, and for that we can go with something called ShedLock. It orchestrates the job so that it runs only once across all the instances rather than on each of them; that's how its internals work.

Interviewer: Perfect, sounds great.
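A sketch of the single-run behaviour just described, using ShedLock (net.javacrumbs.shedlock) with Spring's scheduler; it assumes shedlock-spring plus a configured LockProvider (e.g. the JDBC one), and the job name is illustrative:

```java
import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT10M")
class SchedulingConfig { }

@Component
class MidnightJob {

    // All three instances fire at 12:00, but only the instance that wins the
    // shared lock runs the body; the record is inserted once, not three times.
    @Scheduled(cron = "0 0 0 * * *")
    @SchedulerLock(name = "midnightInsert", lockAtLeastFor = "PT1M", lockAtMostFor = "PT10M")
    public void insertDailyRecord() {
        // insert the record here
    }
}
```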
Interviewer: I can see that you have implemented caching in your application as well. How did you implement it?

Denish: For caching we generally use a third-party store that's already available; Redis is one of the most popular caches these days. We use it to improve performance: there is some data we find is read frequently, and rather than hitting the database every time, which adds latency, we use the cache, which is an in-memory key-value store. Whenever a request comes to the server, the server first checks the cache; if the data is there it returns it, and if not, it makes a database call to get the data.

Interviewer: You told me something about in-memory caching. What is in-memory caching?

Denish: In cloud deployments the database may run on a different server from the application, to make the system more resilient, and if that server is far from the application servers there can be a noticeable delay. So on the same server where the application runs, we can keep a temporary in-memory copy of the data for a fraction of time. There is a TTL attached to the entries, and if something is updated in the main database, the entry is marked dirty and rewritten, so the user still sees up-to-date data quickly.

Interviewer: Nice. I agree there can be a chance of a dirty read until the TTL expires if the underlying data has changed in the meantime. So how do you prevent dirty reads? If you keep the TTL very short, a lot of database calls go out again and the cache is of no use; but if the TTL is long enough, you have a chance of dirty reads. Any possible ways to prevent this with caching?

Denish: Generally the server can't tell whether cached data is dirty, because only the database has a clear idea whether that data has changed, so there is a chance of reading dirty data. We use this kind of caching specifically for applications where the data need not be perfectly accurate. For example, on Twitter, what I see on my page need not be exactly as fresh as what you see. When we can compromise on consistency, we can go with this kind of caching. But for anything financial, a dirty read is not something we can compromise on — we don't want an account balance showing different values on two different screens.

Interviewer: Correct. So I agree you can use this caching mechanism only where some kind of eventual consistency is okay.

Denish: Yes, eventual consistency is a good way of putting it.

Interviewer: Perfect, sounds great to me.
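A minimal sketch of the cache-aside pattern with a TTL, assuming spring-boot-starter-cache plus spring-boot-starter-data-redis, `@EnableCaching` on the application, and a Redis CacheManager (the TTL can be set with a property such as `spring.cache.redis.time-to-live=60s`); all names are illustrative:

```java
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // First call hits the DB and stores the result in Redis;
    // later calls are served from the cache until the TTL expires.
    @Cacheable(cacheNames = "products", key = "#id")
    public Product findById(long id) {
        return loadFromDatabase(id); // only reached on a cache miss
    }

    // On update, evict so the next read refreshes the entry instead of
    // serving a dirty value for the rest of the TTL window.
    @CacheEvict(cacheNames = "products", key = "#product.id")
    public void update(Product product) {
        saveToDatabase(product);
    }

    private Product loadFromDatabase(long id) { /* JPA lookup */ return new Product(id); }
    private void saveToDatabase(Product product) { /* JPA save */ }
}

class Product {
    private final long id;
    Product(long id) { this.id = id; }
    public long getId() { return id; }
}
```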
Interviewer: Nice. Have you implemented Hibernate caching, or just the Redis one?

Denish: If you use the JPA family of ORM tools, the implementation can be Hibernate or EclipseLink; there are different implementations available. JPA itself gives you entity managers and entity-level caching, but with Hibernate we also get level-one and level-two caching: session caching and session-factory caching. One is scoped to a particular session instance; the other is shared, so all sessions see the same cache. This lets us fetch entities very quickly rather than going down to the database every time.

Interviewer: Perfect, and that reduces latency. So there are two levels of caching, first-level and second-level. Which is associated with the session, which with the session factory, and which is enabled by default versus explicitly? Any clue?

Denish: Yes. The session level is the L1 cache and the session-factory level is L2. The session-level cache is enabled by default and is limited to a particular session. If you want data that should be cached across all sessions, you put it in the L2 cache, which sits under the session factory; since we create sessions from the session factory, all sessions inherit the session factory's cache.

Interviewer: Perfect. So even if a session is closed, your data is still cached.

Denish: Yes, even when one session is closed, the data is still there in the session factory's L2 cache.

Interviewer: It's always said that the session factory should be a singleton. Any reason why only one instance should exist?

Denish: As the name indicates — first, it's a costly object to create; in plain words, it's like creating the connection to the database. We want it highly synchronized so that no two interactions can corrupt the data, and when we make it a singleton, every new session creation reuses the same single instance for all upcoming requests.

Interviewer: Perfect, basically one connection point to the DB. Now suppose I have a MySQL DB, an Oracle DB, and a PostgreSQL DB — three databases for one application. How many session factories will be created?

Denish: Three session factories, one per database, because each database has its own isolation levels and consistency behaviour, which vary per database engine.

Interviewer: Yes, a session factory is configured per database: you give the URL, the username, the password, and those differ for all three. So for three different databases you have three different session factories. Perfect on that front.
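A sketch of switching on the L2 cache for one entity, assuming Hibernate with a cache provider configured (e.g. `hibernate.cache.use_second_level_cache=true` plus an Ehcache region factory); the entity is illustrative:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Student {
    @Id
    private Long id;
    private String name;
    // L1 caching needs nothing: it is on by default, per session.
    // With @Cache, loaded Student rows are also kept in the SessionFactory-wide
    // L2 cache, so a second session can read them without a database hit.
}
```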
Interviewer: I can also see that you have used GraphQL. Why GraphQL and not a REST API? What scenario forced you toward GraphQL, and for what reasons?

Denish: It's not really a limitation of the REST API; it's just that with REST, many endpoints come into the picture, which makes it hard for client applications or third parties to keep track of them. For example, if I have a student as the main entity with all the CRUD operations, I end up with many endpoints for the student table, and similarly many more for accounts. To avoid this we go with GraphQL: there is only one endpoint, which takes a payload shaped so that the server can handle each operation internally in a very specific manner. That's one of the main reasons to go with GraphQL, but it's up to the team to decide between REST and GraphQL.

Interviewer: What scenarios did you face that made you choose GraphQL over the REST API? You have worked on both, so there must be a clear distinction.

Denish: Sure. We talk about fine-grained versus coarse-grained APIs. REST APIs tend to fall on the coarse-grained side, where there are joins between multiple entities and you don't have to pass all the details on each call; in that case I would keep REST. But if you want to avoid so many joins, send the data in a single payload, and perform different operations on different entities, I prefer to go with GraphQL.

Interviewer: There are also differences in the responses. Suppose I do not want all the fields in a response: I have an Employee entity with a hundred fields and I just want the ID and the name. Can that partial response be sent with REST, or only with GraphQL?

Denish: Per-field selection comes with GraphQL. We can also limit responses in REST based on what kind of DTO we return, and there is @JsonIgnore: if there are crucial fields we don't want to expose, we can annotate them with @JsonIgnore. But it's still a laborious task to do this with REST, whereas it's automated with GraphQL.

Interviewer: Correct, I can see there are real advantages of GraphQL over the REST API, and one of them is exactly that you can send partial responses rather than the complete DTO. With a DTO, the shape is hardcoded in your POJO: if it has a hundred fields and you're sending only ID and name, and a different kind of consumer asks for the salary too, you'd have to change the POJO — a source-code change, which is very bad. With GraphQL the client just asks for what it wants. That's what I find to be an advantage of GraphQL, among several.

Denish: Yes, that's one scenario I've worked with.
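A minimal sketch of that field-selection point with Spring for GraphQL, assuming spring-boot-starter-graphql and a schema file declaring `type Query { employee(id: ID!): Employee }`; the entity and lookup are illustrative:

```java
import org.springframework.graphql.data.method.annotation.Argument;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.stereotype.Controller;

@Controller
public class EmployeeController {

    // Bound to employee(id: ID!) in the schema. The server always returns the
    // full object; the client's selection set decides which fields are serialized.
    @QueryMapping
    public Employee employee(@Argument Long id) {
        return find(id);
    }

    // Hypothetical lookup, standing in for a real repository.
    private Employee find(Long id) {
        return new Employee(id, "Alice", 90_000);
    }
}

record Employee(Long id, String name, int salary) { }
```

A client query of `{ employee(id: "1") { id name } }` then gets back only `id` and `name`, with no server-side code change when a different consumer also asks for `salary`.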
Interviewer: Fine. Next, I can see that you have done some external configuration in your Spring Boot application. How did you set up those external configurations?

Denish: Could you add more detail on what you mean by external configurations?

Interviewer: Understood. There are properties files and there are YAML files, and you put your properties and customizations for your Spring Boot application in those files. I can see you have used both, so what difference did you find between them, which is better, and why?

Denish: Functionally there is not much difference; it's just the way the files are written — the structure. The main problem with the normal properties file, application.properties, is its dot structure: we have to keep repeating the same prefix for every setting, for example server.name, server.port, and so on. Whereas in YAML we write server as the parent and name and port as children underneath, so it's hierarchical. There's also a JSON option, but the one popularly used is YAML. Functionality-wise there is no difference; YAML is just structured in a better way, avoiding the repeated prefixes of the properties file.

Interviewer: I agree that YAML is more readable than a properties file, and you can also use maps and lists — data types you can't really express in a properties file, where everything is a string. Agreed; I'll take it.
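The repeated-prefix point is easiest to see side by side; a sketch with illustrative keys (these particular Spring Boot properties exist, the values are made up):

```properties
# application.properties — the prefix repeats on every line
server.port=8080
server.servlet.context-path=/shop
spring.datasource.url=jdbc:mysql://localhost:3306/shop
spring.datasource.username=app
```

```yaml
# application.yml — the same settings, written hierarchically
server:
  port: 8080
  servlet:
    context-path: /shop
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/shop
    username: app
```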
Interviewer: Nice. I would move on to microservices now. How did you handle distributed transactions in your microservices?

Denish: Distributed transactions are easy to handle in a monolith architecture, because we can open one transaction and perform all the operations we want inside it. But in a distributed setup we split the functionality across different microservices, and the main problem is that if something goes wrong we have to roll back even though the work ran under different transactions. I think there are two approaches: synchronous and asynchronous. The commonly used one is the asynchronous Saga pattern, and there is also a synchronous approach called 2PC, the two-phase commit. The main advantage of Saga is that it's asynchronous, so your client doesn't have to wait for the other services to respond or acknowledge. Each local step acts as a saga participant: it ensures its own transaction executes, and once it's done it triggers an event so the next transaction can begin. If something goes wrong, there is no real rollback, because the steps ran in multiple different microservices; instead a compensating command is sent, and each service has to revert its own database changes.

Interviewer: Perfect.
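A rough choreography-style sketch of that compensation flow using spring-kafka; the topic names and the `Orders` gateway are illustrative assumptions, not a fixed protocol:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderSaga {

    private final KafkaTemplate<String, String> kafka;
    private final Orders orders; // hypothetical local-state gateway

    public OrderSaga(KafkaTemplate<String, String> kafka, Orders orders) {
        this.kafka = kafka;
        this.orders = orders;
    }

    // Local transaction 1: persist the order, then emit the event that
    // triggers the next saga step (payment) in another service.
    public void placeOrder(String orderId) {
        orders.markPending(orderId);
        kafka.send("order-placed", orderId);
    }

    // Compensation instead of rollback: on failed payment, revert the
    // local change and tell downstream services to revert theirs.
    @KafkaListener(topics = "payment-failed", groupId = "order-service")
    public void onPaymentFailed(String orderId) {
        orders.markCancelled(orderId);
        kafka.send("order-cancelled", orderId);
    }
}

interface Orders {
    void markPending(String orderId);
    void markCancelled(String orderId);
}
```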
Interviewer: Can you design an e-commerce site for me? Just a very high-level overview: what components will be there, what kind of interactions, what kind of databases. Can we do that?

Denish: Yeah, sure.

Interviewer: I'll share my screen. Denish, this is a collaboration tool where we can both work. Can you design an e-commerce website: the overall components, how they interact with each other, and what types of databases they'd use? A very high-level overview of e-commerce is good for me.

Denish: Sure. For e-commerce I would take Flipkart or Amazon as the example. When I get this kind of question, I first think about what the functional requirements and the non-functional requirements could be, because the functional requirements are what the business team or customer specifies. For e-commerce I won't go deep into the login platform; I'll concentrate on the main parts: a homepage where I can see all the items, search functionality, add to cart, and payment. Those are the main areas I would consider as functional requirements. For non-functional requirements I would think about scalability, because there could be millions of users hitting this application at the same time, so I have to ensure the application is resilient — it should keep responding and never go down at any point — and latency has to be low; I can't wait forever to get the response for any of my bookings.

Interviewer: I've set up both panels: you can put notes on the left and draw on the right. I'll write the requirements down for you, the way you said, functional and non-functional. Functionally, as a client I need to: search products; add to cart; place an order; assign a delivery partner; track my product; and send email notifications for every change in order status. Anything else we do with e-commerce? No, I think that's it. Now the non-functional ones: handle load during sale time; and prevent the system from spam hits — because as an unethical user I could fire hundreds of requests, search this, search this, search this. How would you prevent n hits against your legitimate system? Also, scale during peak hours. I'll add more as and when required while you draw the components. You can switch back to see the requirements whenever you click on "both". Go ahead; we'll collaborate over this problem.

Denish: Based on these functional requirements, I'll start with a simple design and keep enhancing it for the non-functional ones as well. Generally we have a client, which could be mobile or web; the page has to be responsive so every resolution renders the HTML clearly. Since we're talking about an e-commerce website, there could be millions of users — what we call daily active users — and based on that we decide how many replicas of the servers we want and how we route calls to them. For that we generally go with a load balancer. I also want it to act as a proxy, so we can use NGINX, which acts as both load balancer and proxy. For the spam hits mentioned in the non-functional requirements, we can use rate limiting: it keeps track of how many requests are coming from the same IP address, and if the volume looks suspicious rather than like genuine user traffic, it stops that caller from hitting the system.

Interviewer: Rate limits — yes, that's what I was trying to say. No problem that you don't have a filter-style icon; go ahead.
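A hand-rolled fixed-window sketch of the per-IP idea, to make "rate limits" concrete; production systems would more likely do this at the proxy (e.g. nginx's limit_req) or with a library such as Bucket4j. Assumes Spring Boot 2.x / javax.servlet; the limit is arbitrary:

```java
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RateLimitFilter implements Filter {

    private static final int LIMIT_PER_WINDOW = 100; // requests per window per IP
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String ip = ((HttpServletRequest) req).getRemoteAddr();
        int seen = counts.computeIfAbsent(ip, k -> new AtomicInteger()).incrementAndGet();
        if (seen > LIMIT_PER_WINDOW) {
            ((HttpServletResponse) res).setStatus(429); // Too Many Requests
            return;
        }
        chain.doFilter(req, res);
    }

    // A scheduled task would reset `counts` at each window boundary
    // (e.g. every minute); omitted here for brevity.
}
```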
Denish: After the load balancer we have the web services that take the requests; the load balancer chooses which instance of a service to call. I'll add the services one by one, starting with search. For search it's good to include caching, so we can add a Redis cache in front of the search service. It caches the catalogue data coming from the inventories, and this data need not be perfectly accurate: you might find a product in search that's no longer in the inventory, so eventual consistency is fine here. For the database, we could go with NoSQL or SQL, but since the items can have very different schema shapes, I'd go with NoSQL. Depending on the cloud service, that could be Cassandra or MongoDB; since Cassandra provides eventual consistency, I'll go with Cassandra here.

Interviewer: So basically you're going with NoSQL for the search service.

Denish: Yes, NoSQL, and Cassandra feels like the most relevant fit. Since it's search, we can also include Elasticsearch — for example AWS's managed Elasticsearch — in between so that search is very fast. In the search flow, when a user types in the search box, the request first comes to the load balancer. The search service is a microservice with multiple replicas, so based on load, the balancer picks an instance that's available at that moment and sends the request. Matching works on keywords; there are different algorithms available and we go with whichever is efficient. We dump all the data into our NoSQL store, and Elasticsearch queries that store and indexes the data by keyword, because users search by different criteria and there can be typos — not every query arrives with correct spelling, and the application still has to figure out what was meant. That's where Elasticsearch comes into the picture, to make searching more efficient. That's pretty much the search side. I'd also use Redis here to keep the most common search criteria: we often know the top searched items, so we keep those in the Redis cache, serve them from the cache first, and fall back to the main service if they aren't there.
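A small sketch of the search side with Spring Data Elasticsearch, assuming spring-boot-starter-data-elasticsearch; the index and field names are made up:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import java.util.List;

@Document(indexName = "products")
class ProductDoc {
    @Id String id;
    String name;
    String category;
}

interface ProductSearchRepository extends ElasticsearchRepository<ProductDoc, String> {
    // Derived full-text query; Elasticsearch's analyzers give a tolerance
    // for keyword variations that a plain SQL LIKE would not.
    List<ProductDoc> findByNameContaining(String keyword);
}
```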
Denish: All these services run as multiple replicas, so more than one instance is available. For add-to-cart: when a user adds items, it's essentially a mapping between this user and all his items — a one-to-many mapping — so although it's debatable, I'd go with a SQL database; MySQL is a good fit here. There's the CAP theorem — consistency, availability, and... I forget the last one — and in this case I need full consistency and availability, so that's the reason I go with MySQL.

Interviewer: There also have to be some relationships between your users and your cart items.

Denish: Yes, exactly. I didn't draw a few of the required tables — users, and items — but those are needed too. The cart service ensures the mappings happen correctly, so we need something with a strong schema structure and efficient joins, and in that case I go with SQL.

Interviewer: Perfect, sounds good.

Denish: Next, place order. These services have to talk to each other, because a microservice doesn't hold all the information; each performs one single responsibility. Behind the scenes I can go with RestTemplate or a Feign client to communicate between services, but for a system design discussion I won't go deep into the code. When a user places an order and we assign a delivery partner, I'd use the Saga design pattern, because these three things — placing the order, assigning delivery, and payment — have to happen as a single logical transaction. In the place-order service I'd also use Kafka: whenever we place an order, the item count has to be reduced in the inventory and updated periodically, so that the next user doesn't run into the problem of ordering an item that's gone; eventually it shows as no items available. So once I place the order, I publish to Kafka so my NoSQL inventory gets updated.

Interviewer: A question there. Kafka is async communication: the message goes to some topic, and there's a lag before it's picked up. In that window a second user comes and asks for the same product, but you have only one unit of that product in inventory. You haven't decremented it yet, so he can place an order and you can place an order, but you only have one in stock. That's a problem. Are you sure we can use Kafka for this phase?

Denish: At that point, yes, it could end up inconsistent; we can't have two users grabbing a product that isn't available. Kafka could be a better candidate for the notification part, but for the inventory decrement I can't go with Kafka, since it's asynchronous; this has to happen synchronously so there's no inconsistency.

Interviewer: Perfect.

Denish: Once we place the order, the order service also communicates with the cart service so it can see which items were added to the cart.

Interviewer: There can also be an inventory service that manages inventory for each product.

Denish: Yes, I mentioned it but I didn't draw it. We need an inventory service so that vendors can add items.

Interviewer: That's okay, I understood that you can have one more service. Also, beyond inventory, there would be a need for a user service as well as a product service, because every vendor will add products; it's good to manage products as a separate service so the single-responsibility principle is followed and each microservice owns its own domain. Otherwise every task lands on the cart service, whose job is only to manage carts for a particular user. That was my thinking about how many microservices to have; you can go ahead with your version too, I'm okay with that.

Denish: No, that makes sense. I was drawing only the normal client side; we can also have vendors going through the inventory service, which links to the NoSQL store so those items get mapped.

Interviewer: Perfect. It's a very big application to draw along, but what you're drawing is very understandable, so that's fine; I'm good with it.
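The user-to-items mapping described for the cart, sketched as JPA entities backed by MySQL; all names are illustrative:

```java
import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
class Cart {
    @Id @GeneratedValue
    Long id;

    Long userId; // one cart per user

    // One cart holds many items: the one-to-many mapping from the discussion.
    @OneToMany(mappedBy = "cart", cascade = CascadeType.ALL, orphanRemoval = true)
    List<CartItem> items = new ArrayList<>();
}

@Entity
class CartItem {
    @Id @GeneratedValue
    Long id;

    Long productId; // reference into the product service's data
    int quantity;

    @ManyToOne
    @JoinColumn(name = "cart_id")
    Cart cart;
}
```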
Denish: For assigning delivery partners, we need another service, with a table holding all the delivery partners who are available. I go with SQL here because the delivery partners have well-structured data — what kind of delivery they provide, and so on — so all those details go into a MySQL database. The place-order service calls the assign-delivery-partner service to find who can be assigned to deliver the items. One part still missing is the payment: the order should be assigned to the delivery partner only after the payment is done, and all of this has to happen with ACID properties; generally we'd go with a Saga design pattern or a 2PC commit pattern so the transaction either completes fully or gets rolled back.

Interviewer: I have a question on this. If you use Saga, it won't have the atomic nature, right? Atomicity says everything happens at once or nothing happens at all. With Saga you place an order, and the delivery partner is assigned later, because an event is being generated. Now suppose your payment fails: you roll back, unassign the delivery partner, and cancel the order. So you cannot maintain the ACID properties end to end; eventually you roll back, but at the moment of failure they don't hold. The ACID properties aren't really satisfied by the Saga design pattern. Also, one more thought: when you place an order, you don't need the delivery partner assigned the moment the order is placed, right? The order is placed, and whenever some delivery partner is available, it gets assigned. So here I'd agree you can use Kafka; you don't need it synchronous.

Denish: Yes, it mostly depends on the use case, and the one you describe is the better one: I don't need to assign the delivery partner synchronously. In that case, once the order is done I publish to a Kafka topic, the delivery-partner side listens on that topic, and based on the kind of order coming in, a suitable delivery partner gets assigned. Once that's done, I can use the same Kafka to trigger notifications. We have a notification service, and based on the user's preference — SMS, email, or some other channel of informing the user — the corresponding notification handler is used. One more thing I've yet to draw is tracking the orders: once the delivery partner is assigned, the user can track the order, like a running history. We'd also keep log data, say in an AWS S3 bucket: the logs help us track what's going wrong, and also what kind of items we should bring to the site in future. If some item is selling very well, we can run an analytics tool on top — Apache Spark or something similar — and decide to stock more of those items, or add discounts, like we see on every holiday. For tracking the product, once the delivery partner is assigned, the same Kafka notifications help us put an entry into a tracking table. Since I said items can have varying structure, I could keep this as NoSQL too: it would hold the historical data in chronological order, with a flag saying whether the order is delivered, and the corresponding checkpoints — dispatched, assigned to the nearest hub, and so on — so the user can keep track of where exactly the item is.
Interviewer: On the tracking part, what do you actually show in the system? I think it's just the timestamp, the current place, and the product, right? So isn't it more of structured data, where only the value in a particular column changes?

Denish: That makes sense. I chose NoSQL because I was thinking of keeping a full record of the item, but I already have the item stored elsewhere, so I don't need to duplicate it; I can keep a clickable reference that navigates to the product service, where the product description already lives. SQL looks better to me here; I agree. I need to keep track of the timestamp, whether it's delivered, and who currently holds the item — whether it's still with the dispatcher or with a vendor, and so on. Once the item is delivered, the delivery person gets the confirmation from the user, and that part can be asynchronous: it publishes to a Kafka topic and the consumer marks the item as delivered in our tracking records, because it need not be an immediate operation where you see the update instantly in the database.

Interviewer: Agreed; that lets you do it in the background for a performance benefit. Perfect, this system looks good to me. Now I have a query. Suppose one of my services is fetching a lot of data from its table, and I see a performance issue: the SQL query is taking a long time. How do you optimize on that front? How do you optimize your queries?

Denish: The first thing is to figure out why it's taking time; a couple of reasons are possible. If the query is slow because it lacks indexing, we add an index: an index is a companion data structure we use to fetch rows quickly. So the first step is analyzing, or profiling, the query to learn whether it's a query problem. If it's not the query, but the database itself has grown very large, we can split the data; there are two ways, partitioning and sharding. There isn't much difference between them except that with sharding we keep the same table across different servers, in different places, whereas with partitioning the splits stay within the same server. Since an e-commerce site can serve the US, India, and so on, we can use sharding and keep the data local to each region — that's the reason we have amazon.in and amazon.us. It gives us a logical way of distributing the data.
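The indexing step can be declared at the entity level in JPA 2.1+, matching the "add an index before reaching for partitioning" advice; the table and column names are illustrative:

```java
import javax.persistence.*;

@Entity
@Table(name = "orders",
       indexes = @Index(name = "idx_orders_user_created",
                        columnList = "user_id, created_at"))
class OrderRecord {
    @Id @GeneratedValue
    Long id;

    @Column(name = "user_id")
    Long userId;

    @Column(name = "created_at")
    java.time.Instant createdAt; // queries filtering by user and date now use the index
}
```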
Interviewer: Perfect, so you have fine-tuned your SQL queries. Fine. Now suppose there is a Kafka topic that is facing some lag. How do you remove the lag on that particular topic? Any clue?

Denish: Generally Kafka comes with resilience built in: if something fails, Kafka works on the concept of multiple brokers, with a leader and follower or worker broker nodes. But your question is about what to do when something actually goes wrong, right?

Interviewer: Correct. Suppose there are multiple consumers reading, and there are multiple messages left unconsumed, so there's a lag on the topic. It's quite a corner case; you'd only be able to answer it if you've actually worked on it. Have you worked on clearing lag on a topic?

Denish: No, actually, I haven't.

Interviewer: That's okay; it's a real corner case, I was just checking whether you'd had a chance to work on it. It's similar to garbage collection: you might not get a chance to work on it with every Java application. That's completely fine. This system looks good to me. Thank you for putting this much into a complete e-commerce design; there are endless things you can add or improve, but for me this is good.

Denish: There's one more thing we could add: a CDN, generally to serve static data. What I'm thinking of is images and other data that are the same for everyone — if you're shopping for a pen, it's a common image we can show — so we can use a CDN. There are popular ones like Akamai and Cloudflare, which keep the data distributed geographically across edge endpoints so the client hits the nearest CDN node to get those images. This really helps when items are being browsed and added to the cart: we keep the images and static data or static HTML there, so you don't have to hit the server to get them.

Interviewer: Okay, so this is for when your application is geographically distributed. Agreed. Any more inputs you want to add?

Denish: I mentioned analytics already, which helps us improve the application. And one thing you asked about was handling scaling during high-sale periods, especially holidays like Diwali. If you use a cloud platform, it comes with alerting features that track the metrics, and we can set thresholds: if CPU utilization exceeds the threshold, it automatically spins up new servers, and if usage drops below it, it reduces the number of instances. That lets us orchestrate dynamically rather than hardcoding the number of servers we want for the application.

Interviewer: Perfect, sounds great. Okay, I think I'm good with this question.
Interviewer: Now one more thing, on the search service: it's going to show all the products, right? Since you've worked on both frontend and backend, can you give me some idea of how you'd add pagination for the n products shown on the page? The database can hold thousands of products and the frontend can't show them all, so how does pagination work on both sides?

Denish: Sure. There are two approaches. One is lazy loading of the data: we load in bulk — first only 100, then we keep loading. The other is infinite scrolling: when the scroll hits the end of the viewport, the page makes a call to the backend, sending the next page number, so the query fetches the next 100 records starting from that batch. This is the efficient way to avoid loading all the data onto the page, since we can't usefully show all of it anyway. So one is lazy loading of data, the other is infinite scrolling.

Interviewer: And that's the frontend part, right?

Denish: Yes, the way the fetch is triggered is decided on the frontend, but the pagination itself — loading the data in chunks — is something we tune in the SQL queries.

Interviewer: At the backend, how did you do it? With batch operations?

Denish: Yes, we specify the next batch number and where the next records start. I'm not sure of the exact query, but I used the JPA methods where, with the entity manager, we can specify the next batch number and the size of the next batch.

Interviewer: Okay, so batching at the backend, and at the frontend, pagination via infinite scroll: when the end is hit, you send a request, the backend processes it and returns the data. So there will be a lag in between, right? How do you manage that lag?

Denish: When we say it hits "the end", it needn't be the literal end of the page: efficient client code detects when we're getting close to the end and makes the call early, just before the end, so the user doesn't feel the lag. But yes, a lag is possible, and in that case we show spinners, or a responsive skeleton — a grey buzzing placeholder screen that gets replaced with the actual data. We also add throttling, or the debouncing concept, so we don't fire a search for every single character typed; we only search when the user releases the key, which avoids unnecessary queries on every keystroke.

Interviewer: Perfect, sounds great. Okay, I think I'm good with this; I'll switch back now.
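The backend half of this with Spring Data JPA, as one possible sketch: the client sends a page number and size, and the repository translates them into a LIMIT/OFFSET query. Endpoint, entity, and defaults are illustrative:

```java
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
class Product {
    @Id Long id;
    String name;
}

interface ProductRepository extends JpaRepository<Product, Long> { }

@RestController
class ProductController {

    private final ProductRepository products;

    ProductController(ProductRepository products) {
        this.products = products;
    }

    // GET /products?page=3&size=100 -> records 301-400, fetched as one batch;
    // the infinite-scroll frontend just increments `page` on each trigger.
    @GetMapping("/products")
    Page<Product> list(@RequestParam int page,
                       @RequestParam(defaultValue = "100") int size) {
        return products.findAll(PageRequest.of(page, size));
    }
}
```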
Interviewer: Okay, perfect. Now we're back. Denish, I'll start on another part, if you're comfortable with CI/CD and git. Can I go ahead with that?

Denish: Yeah, sure.

Interviewer: What branching technique have you used in your projects, and how do you all collaborate with each other?

Denish: We generally use git for version control. We have dev branches and QA branches, and a master branch, which we also call the stable branch; for every monthly release we cut a new branch from it. It depends on the scale of the team: with many team members we also use feature branches. Once the changes on a feature are done, we test: first the developer does their own testing, following all the test cases, and if everything looks fine to the developer and the automation cases pass, we deploy the same branch to the testing environment. Once the QAs have also tested, we merge to a pre-prod branch rather than directly to stable. Pre-prod looks just like the production branch, except customers aren't using those changes; it exists to ensure the changes work in a production-like environment. Once everything looks fine there, we deploy to production.

Interviewer: Perfect. As soon as you merge your code to a particular branch — say dev, pre-prod, or prod — how does your code get deployed to that particular environment?

Denish: That's generally handled with profiling: we specify which environment we're running, and based on that profile the right configuration is used. To explain with Spring Boot: application.properties is the default file, but we can also have application-dev, application-prod, and application-staging property files. Depending on the environment, the matching file is used and it overrides the settings of the default one. For example, for my local run I use my local DB, but QA uses some other DB, and those differences live in the corresponding property file.

Interviewer: Okay, but my question was about the deployment part. You raise a PR, you merge it to a particular branch — how is that code deployed to the particular environment?

Denish: Okay. It depends on what we use for deployment; it could be GitHub Actions or Jenkins. The pipeline checks periodically whether changes are available; that's the point of continuous integration and continuous deployment — ensuring every change gets deployed as soon as it lands. It watches for changes in the commit IDs, and when it detects one, it automatically runs the test cases and then deploys the changes.
Okay, but my question was about the deployment part itself: you raise a PR and merge it to a particular branch, so how does that code actually get deployed to that environment? Okay, so it depends on what we are using for deployment; it could be GitHub Actions or it could be Jenkins. The pipeline periodically checks whether changes are available, which is why we talk about continuous integration and continuous deployment: every change should get deployed as soon as it lands. It automatically watches for new commit IDs, and when it detects one, it runs the test cases and then deploys the changes.

Okay, understood. You talked about CI and CD, so can you explain each of them, which tools you used for CI and which for CD, and, if you used Jenkins, what your build stages were? Sure. As the names indicate, CI is continuous integration, and CD is continuous delivery or deployment. On the CI side, I write my code together with JUnit tests so that it is unit-test based, and we can also follow TDD patterns to keep the code robust. Once my change is done, though, it still has to be integrated with the other components to find out whether everything works together, and that is where CI comes into the picture. Instead of getting the failures in one big chunk at the last moment, every commit we make triggers the integration test suite, which runs CRUD-style operations as automation cases; if anything goes wrong, it blocks us from deploying so we can fix it first. I generally write JUnit test cases and also Selenium tests; which automation framework to go with is up to the team. Once the whole continuous-integration run is green, the change goes on to deployment. It is the same trigger: whenever the pipeline notices a new commit ID, it automatically starts to deploy; it creates a jar file out of the code, and, depending on how we are orchestrating the system, with a container-based structure like Docker it builds a new image, creates new containers from that image, and deploys them.

Agreed. So you have used Docker for your containerization, right, and you have written Dockerfiles and images as well? Yes. In the same project we have Dockerfiles, which hold a bunch of instructions: what our JDK version should be, what the entry point is, and which ports should be exposed. Those instructions tell Docker which packages to install and how to execute the jar file.
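A minimal sketch of the kind of Dockerfile described, assuming a Spring Boot fat jar and JDK 17; the base image choice and jar name are hypothetical:

```dockerfile
# Pin the Java runtime version the application needs
FROM eclipse-temurin:17-jre

# Copy the fat jar produced by the Maven/Gradle build
COPY target/order-service-1.0.0.jar /app.jar

# Port the embedded server listens on
EXPOSE 8080

# Entry point: run the jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Built with something like `docker build -t order-service:1.0.0 .`, this produces the versioned image that the pipeline then deploys.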
Okay, so basically your Docker container has everything, every library you need to run on your instances. Now, in the earlier e-commerce example you would have a minimum of ten microservices, so ten containers would be running; how do you manage those containers in production? That is a good question. We generally use Kubernetes, the most popular orchestration tool. The job of Kubernetes is the scaling of these containers: we push our images to a registry and automate things with Kubernetes, so that whenever it sees some overload on a service it creates a new container from the image, and if something is underused it scales back down the same way.

Okay, so you use Kubernetes to maintain your pods in production. Suppose one of the pods is crashing; it is the job of Kubernetes to restart the pod with a new instance. But now suppose something is really wrong in production: even though Kubernetes keeps trying to restart your pod, it ends up in CrashLoopBackOff, the state where it has tried multiple times to restart your application and cannot bring it up. How do you handle those situations? Generally the developer gets a PagerDuty call when a failure keeps recurring. We can also make these alerts smarter by separating what we can control from what we cannot. For example, if MySQL is going down, that is something we cannot control; we can do nothing until the database team fixes it. But if the service itself is crashing and Kubernetes cannot get it up, the worst case is that we have to bring it up manually. I have not encountered a case where we used any automation tool beyond Kubernetes to make it more resilient.

Yes, manual intervention can help in such cases, or else your logs will help you: around Kubernetes you have tools like Rancher where you can see the logs and find out why your application cannot come up, and if something deployed recently is making the application crash again and again, you can revert it. So how did you do your versioning, and how do you roll back? Because this happens to all of us: code on production will not always work seamlessly, so what is your rollback strategy? Generally, when we build new Docker images we also version them, as a namespace followed by a name and a version tag, which means Kubernetes has the version history of the image. If the latest version is not working, it can go back to the previous one and create a new container out of that image. And how exactly does it roll back? So, we have images with version numbers, and Kubernetes keeps them in historical order, latest and previous. If the latest one is not working, it retries a few times, and if it still will not come up, it can fall back to the previous version and create a new container from that image. It is similar to what we do with Git: if the latest version is not working, we go back to a previous version and branch from there. And is that a manual intervention you do? No, that part can be automated with Kubernetes, if I am not wrong, though in the worst case we can do it manually as well.
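The flow described maps onto Kubernetes Deployments, which keep a per-revision rollout history. A short sketch, assuming a hypothetical Deployment named order-service with a container named app:

```
# Roll out a new, explicitly versioned image
kubectl set image deployment/order-service app=registry.example.com/order-service:1.4.2

# Inspect the revisions Kubernetes has recorded
kubectl rollout history deployment/order-service

# If 1.4.2 ends up in CrashLoopBackOff, fall back to the previous revision
kubectl rollout undo deployment/order-service
```

With a rolling-update strategy, Kubernetes also refuses to take down healthy pods until the new ones pass their readiness checks, which limits the blast radius of a bad release.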
Understood. Okay, and that is where Terraform, or infrastructure as code in general, comes into the picture: rather than doing it manually, we can use that kind of tooling. I worked with Pulumi, which is similar to Terraform and works as infrastructure as code, letting us orchestrate the infrastructure itself. Understood. So what infrastructure have you orchestrated with Pulumi? I worked with Azure: B2C is one piece we used very heavily for authentication, and I also worked on the alerting side, where we trigger alerts in an efficient way whenever something goes wrong. Those two I worked on and automated a fair bit with the help of Pulumi. It is generally written in a YAML file; no other coding knowledge is required, it is mainly a property-style file with a bunch of instructions. So basically you have created your alerts as infrastructure in the system? Yes; the main job was that when we get a flood of logs or something is going wrong, we analyze those alerts and trigger the appropriate actions with the help of Pulumi.

Okay, understood. I can also see in the resume that you achieved 90 to 95% code coverage, which is a very good practice for unit testing. So I would ask: what did you include in your test cases, what did you exclude and why, and where are those inclusions and exclusions added as configuration in your code? Okay, so when we write the automation I generally concentrate on the unit test cases: every method should have its own test cases. I write them with JUnit 5, which supports nested classes, so I can segregate the functionalities within the test classes. And mocking is something I use a lot, because for a few things, when you are checking functionality, you do not have to make a database call every time; in that case we mock those calls, define what data should come back, and write the functionality tests against that, which increases my coverage. I also use a code-coverage tool, and SonarQube for static analysis of the code base, to see what technical debt, or rather what technical issues, could be fixed. And you configured that locally, as a plugin? Yes. I actually did that for an Android project rather than the Spring Boot one, but it is pretty much the same: you mention a few properties in your build files, because SonarQube has its own listening port and URL, so you specify those endpoints and which repository it has to scan, and then run a command that performs the static analysis.
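A minimal sketch of the nested-class, mock-the-repository style described above; the service, repository, and test names are all hypothetical, with JUnit 5 and Mockito assumed on the classpath:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

// Minimal collaborators so the sketch is self-contained
interface OrderRepository { int quantityFor(String sku); }

class OrderService {
    private final OrderRepository repo;
    OrderService(OrderRepository repo) { this.repo = repo; }
    int availableQuantity(String sku) { return repo.quantityFor(sku); }
}

class OrderServiceTest {
    private OrderRepository repository;
    private OrderService service;

    @BeforeEach
    void setUp() {
        // The repository is mocked, so no real database call is ever made
        repository = mock(OrderRepository.class);
        service = new OrderService(repository);
    }

    @Nested
    class QuantityLookups {   // @Nested groups the cases for one functionality
        @Test
        void returnsWhatTheRepositoryReports() {
            when(repository.quantityFor("book-1")).thenReturn(2);
            assertEquals(2, service.availableQuantity("book-1"));
        }
    }
}
```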
Sounds good to me. I have a question on this unit testing part only: suppose we have DTOs, and DTOs have getters and setters; would you have written test cases for those as well? Generally it is a bad practice to write tests for a few kinds of things, especially for getters and setters, naming conventions, and plain variables. Yes, exactly, that makes no sense; it simply adds unnecessary code to the code base just to push the percentage up. Yes, I have seen that kind of practice as well, where we add test cases that do not help but just inflate the coverage number. I agree, so adding test cases for DTOs is not required. Then how do you exclude those DTOs, and where is that exclusion written? We can write regex-style patterns which exclude the packages we do not want to track in our coverage. We have written that in XML with Maven: you can specify which packages to skip, something like dot-star for the DTOs, because we generally follow package conventions for all files, so every DTO goes under a dto package and every entity under an entity package. Those getter-and-setter POJO files we do not have to unit test. Generally, when it comes to those POJOs, they are covered by integration tests, which we run with real database queries. Running database queries? I mean the entity files: we do not write unit tests for them. Yeah, correct: if something is wrong with such a file, our CRUD operations will definitely fail, so I can agree that they get tested through integration testing.
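The transcript does not name the coverage tool, but with the widely used JaCoCo Maven plugin that kind of package exclusion looks roughly like this; the group and package names are hypothetical:

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- generated getters/setters carry no logic worth measuring -->
      <exclude>com/example/orders/dto/**</exclude>
      <exclude>com/example/orders/entity/**</exclude>
    </excludes>
  </configuration>
</plugin>
```

SonarQube has an equivalent knob (`sonar.coverage.exclusions`) if the exclusion should also apply to the static-analysis dashboard.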
Perfect. Now, in the earlier scenario I gave you there were ten microservices, and these microservices interact with each other to create an order or build a cart. For testing that end-to-end process you should have some system testing in place; did you do any kind of system testing across your multiple microservices to make sure they work together seamlessly? I have written such test cases, but generally they were more like integration tests, where we make the call through a RestTemplate, passing the payload. One thing we verify is that the services are running, and also that for a particular request the corresponding response comes back. So you have included that in the test cases themselves? Yes; even when we make a call to a different microservice we just pass a payload and verify that the data gets created, or that the right response comes back for that payload.

That is exactly my question about this practice. Whenever you build, the test cases run every time, so every time a call actually goes out to the second API. Suppose your build stages run them, say, three times across CI and CD: each run really hits the other service and fetches data, which is not good, because REST calls introduce latency; unit-testing the same remote call again and again does not make sense to me. So why not have BDD or Cucumber test cases, with system testing in place for these microservices, where you mock the REST APIs and test the end-to-end flow with a single system integration suite? Does that sound good to you? Yes, for the case you mention I agree: go with behaviour testing rather than testing it in that integration way. Everybody has different scenarios; what I described will work, but it does add latency here and there, which is a problem for us developers. It depends on the use case and the kind of testing we are doing; we can go with a different approach where it fits. Perfect; your approach will work too, no doubt about that.

I can see that you have worked with reactive programming as well, so in which scenarios did you choose reactive programming over the blocking style we usually write with Spring? Generally we go with reactive programming because threads are limited: we cannot have threads sitting idle waiting for responses to come back. When we write calls in the usual synchronous style, a few operations have to happen one after the other, so we block a thread to wait for each response. With reactive programming there is instead something like a callback: once the action is done, the result comes back on its own, and whatever thread is available picks it up and executes the rest. It is not something you are forced to use; it depends on the application. Wherever very many requests are coming in, say a streaming platform or an e-commerce website, we can go with reactive programming. It comes with two types, Mono and Flux: a Mono returns at most one instance, while a Flux is for more.
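A minimal Project Reactor sketch of that Mono/Flux distinction and the callback style; the values are hypothetical, and reactor-core is assumed on the classpath:

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactiveSketch {
    public static void main(String[] args) {
        // Mono: a stream of at most one element
        Mono<String> one = Mono.just("order-42");

        // Flux: a stream of zero or more elements
        Flux<String> many = Flux.just("order-1", "order-2", "order-3");

        // Nothing executes until something subscribes; the lambda is the
        // callback that fires when the value arrives, instead of a thread
        // blocking until the result is ready
        one.subscribe(id -> System.out.println("got " + id));
        many.map(String::toUpperCase)
            .subscribe(System.out::println);
    }
}
```

In a Spring WebFlux controller the framework does the subscribing: a handler simply returns the `Mono` or `Flux`, and the runtime writes elements to the response as they become available.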
Okay, and what kind of engine did you use for that? There is a reactor you can use; how did you implement it? Project Reactor, yes. It works like any other dependency: when we create a Spring Boot project we specify which dependencies we want, and Project Reactor is one of them; we can pull it in as a Maven or Gradle dependency in our pom file, and it automatically downloads the required jar files for us.

You just talked about Maven and Gradle; what are these two? They are two different build tools. I think all of us are familiar with Maven, which we can easily recognize by its pom.xml, and Gradle is something that does a similar job. Whenever we pick up a package from the Maven repository there are different flavours available, one for Maven and one for Gradle; the naming conventions are a bit different, but they do the same thing. I have noticed that Gradle builds a bit faster than Maven, which could be down to its internal architecture, but there is no hard and fast rule; we can go with either.

Perfect. Okay, so what is the pom; why did you use a pom in your applications? The main problem if you do not use a pom is that if you want to move a jar to the next version, you have to do it manually: install the new version and put it in the right folder so the code keeps running as before. Another problem is that if I hand my project to someone else and that user does not have the required version, the code will not run at all. The good way to automate this is the pom file, where I specify which jar and which version I want. It will first check the local Maven repository, and if the artifact is not available there it checks the central one, so behind the scenes it fetches all the jars and the project just runs; the user has no manual task of installing jar files. Otherwise you would have to add those jar files to the classpath yourself. Correct, the classpath, I agree. So that is where you define all your starters and required external libraries in one single place. Perfect.

Okay, now one more thing I can see in your resume: you have worked with Kanban, so can you please explain what Kanban is and why you used it? Kanban... you are talking about the Scrum-style processes, right? Yes, Kanban. Okay, so this is one of the project-management approaches; I think we are all familiar with agile methodologies, and Kanban works in a similar way. It is not predictive but adaptive, meaning we make changes as and when new requirements come into the picture. I have not worked in a team that uses pure Kanban, though sometimes a team uses a mix of Kanban and Scrum; I have mainly worked with Scrum and agile methodologies. It is written in your resume, so you could be grilled on it. Oh, okay, I had not checked that. No worries, no worries; just have another look at the Kanban entry written there. I think Kanban is more of a strict way of following the sprint work, as far as I understand it.

Okay. So the next thing I would ask you is: what is a ConcurrentHashMap, why would you use it, and how does it work internally? So, the main thing about the regular HashMap we use is that it is fail-fast. That means that while we are iterating over the HashMap, if it is being changed concurrently, say we are adding new entries, it will throw an exception, because the iterator keeps checking whether the map has been structurally modified; if it has, it throws a ConcurrentModificationException. To avoid that kind of scenario we can go with a different flavour of hashing called ConcurrentHashMap, which works efficiently enough that even if you add entries to the map while you are iterating, it handles that scenario. So an exception will still come? I do not think any exception will come, because this is the main map we use for multi-threaded environments, when more than one thread is manipulating the same map. So how does it work internally; how does it make sure multiple threads can work concurrently without a ConcurrentModificationException? With multithreading, the main issue is that one thread can tamper with data that another thread is accessing or updating. It uses the concept of synchronization, which gives mutual exclusion: no two threads can enter the shared critical section at the same time. But for the ConcurrentHashMap it is more fine-grained: say you have a map with 16 buckets; then, the way I understand it, something like eight threads can access the map in parallel, each effectively holding a couple of the buckets, so the other threads can still access the remaining blocks. Okay, so basically you want to say the complete map is divided into segments, and each segment is locked by a particular writing thread: everybody can read all the segments, but when a thread is writing, only that segment's lock is taken, and the other threads are free to access the other segments. Sounds great. So it is thread-safe? Correct, it is thread-safe in that way, and since its performance is better than a fully synchronized map, we go with the ConcurrentHashMap.
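One note worth adding: the segment-per-lock design described here is ConcurrentHashMap as it was before Java 8; since Java 8 it locks individual bins and uses CAS operations instead of fixed segments, though the observable guarantees are the same. A minimal sketch of the behavioural difference under modification during iteration:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapIterationSketch {
    public static void main(String[] args) {
        Map<String, Integer> plain = new HashMap<>(Map.of("a", 1, "b", 2));
        try {
            for (String key : plain.keySet()) {
                plain.put("c", 3);   // structural change mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            // HashMap's iterator is fail-fast: it detects the change and throws
            System.out.println("HashMap iteration failed fast");
        }

        Map<String, Integer> concurrent = new ConcurrentHashMap<>(Map.of("a", 1, "b", 2));
        for (String key : concurrent.keySet()) {
            concurrent.put("c", 3);  // weakly consistent iterator: no exception
        }
        System.out.println("ConcurrentHashMap survived: " + concurrent);
    }
}
```

In a real multi-threaded service the writes would come from other threads rather than the iterating one, but the fail-fast versus weakly-consistent contrast is the same.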
Perfect. Then the last question from my end. A HashMap is a key-value pair, right, and for the key you usually have IDs, integers. Now suppose I want to make an Employee class the key: what do I have to do to that class to make it capable of becoming a key for a HashMap? Generally, if you want to make any object the key, there are two things we have to override: one is the equals method and the other is hashCode. The main reason is that the map has to be able to tell whether two objects are the same or not, and that is decided by these two methods: whenever we do a put, it automatically calls them to determine whether this is an update to an existing key or a brand-new insert, based on the implementation of equals. Exactly, because the key always has to be unique in a HashMap, and to keep that uniqueness consistent you have to override hashCode and equals together. Perfect, yes.

I think that is it from my end, so let me close with quick feedback: most of the questions you answered perfectly fine, and you are all set for interviews, so go ahead and crack them, and all the very best. Just have another quick look at Spring Security here and there, and otherwise you are perfectly fine. Also keep an eye on the keywords you have added in the resume, because they are very attractive: when I saw your resume I thought it was a perfect resume; if I were given the chance I would show it to other candidates as an example of how a resume should look, and if time permits we will do that resume session together. So just small touch-ups here and there and you are all set for your interviews. Any feedback for me, since you are on the other side of the table, if I was rude or anything? No, no, I liked this interview session very much, especially compared with the previous one, because that was mainly on Java and this one went a bit beyond it and also included some hands-on discussion, which I have noticed most companies now do: they do not just run question-and-answer, they also do a system-design interview nowadays, so I found this very useful from your platform. Also, I think I took your Udemy subscription, and there I saw how you orchestrate applications; I would really like to see more of those kinds of videos, because almost nobody makes them, and otherwise we only get that exposure once we join a company. I agree, thank you so much for that, and thank you for sharing your knowledge with the community; it will really help a lot. You are well prepared for interviews at the five-years-experience level, so all the very best, and thank you so much.

So that was all about the mock interview we had with D, and in our view he is very well prepared. Thank you so much, D, for taking the time out and helping us share this knowledge with the community. Also, if you want to be a part of this mock interview series, just add your details in the Google form given in the description below and we will reach out to you as soon as possible. Thank you.
Info
Channel: Code Decode
Views: 52,038
Keywords: code decode, mock interview java developer, mock interview spring boot, java mock interview for 5 years experience, system design mock interview, microservice mock interview, spring boot interview questions code decode, microservice interview questions code decode, java interview questions code decode, system design interview questions code decode, code decode java interview questions, code decode interview questions, codedecode, spring boot code decode
Id: xMlcsFLk-CU
Length: 94min 47sec (5687 seconds)
Published: Tue Jan 30 2024