MICROSERVICES ARCHITECTURE | DEPLOYMENT STRATEGIES | PART - 10

Video Statistics and Information

Captions
So far we have learned a lot of different concepts about microservices, and in the end what we have to care about is how to deploy them and let a third party, the UI, or another service access them. For that we need to deploy these microservices somewhere — a server, a virtual machine, a container, or the cloud — let them run, and make them accessible to the UI or whatever consumes them. Before deploying them we need to set some deployment goals, and here is the list of goals we need to achieve when deploying these microservices.

The first one is scalability and throughput. While deploying these microservices we definitely have to think about scalability, because that is what lets us scale our applications up or scale out to handle more throughput. The next is reliability and availability: we are building microservices for a reason, because we need our application to be highly available and to behave reliably. The third is isolation: when we deploy our services, one service should not affect or disturb the other services we have deployed, so we need proper isolation around these microservices. The fourth is resource limits: we cannot just let services consume however many resources they want on the server where they are deployed. For example, if I specify that a particular microservice gets one CPU and 2 GB of RAM, that service should be allowed to consume only that much and nothing beyond it. The fifth is monitoring: monitoring is a vital part of maintaining an application, so we need some way to monitor all the deployments and the services running on the servers. The last, but very important, goal is that the deployment should be cost effective: we have to utilize the resources of whatever we have deployed these services on to the maximum extent and keep our cloud bill or server cost low. That is what makes the deployment cost effective and the business profitable.
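As a rough illustration of the resource-limit goal, here is a minimal Python sketch (not from the video — the 2 GB figure is just an example value) that caps a process's own memory using the standard-library resource module on Linux. In real deployments this limit is usually enforced from the outside, for example by a container runtime, rather than by the service itself.

```python
import resource

# Cap this process's virtual address space at 2 GB, mirroring the
# "one service may use at most 2 GB of RAM" deployment goal.
TWO_GB = 2 * 1024 * 1024 * 1024
resource.setrlimit(resource.RLIMIT_AS, (TWO_GB, TWO_GB))

# Any allocation that would push the process past the limit now
# raises MemoryError instead of starving neighbouring services.
try:
    hog = bytearray(3 * 1024 * 1024 * 1024)  # deliberately too large
except MemoryError:
    print("allocation blocked by the 2 GB resource limit")
```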
Now let's see the different patterns the industry uses to achieve these goals when deploying microservices, starting with the very first deployment pattern, called multiple services per host. As the name suggests, running multiple services on a single virtual machine or physical server is called the multiple services per host pattern. As the image shows, this is one virtual machine or physical server in which we are running microservice one and microservice two. You can run more than one microservice — it could be 10 or 15 microservices; that is not really an ideal situation, but just as an example, you can run any number of microservices on the same virtual machine, and that is what this pattern is all about. Usually these services run as separate processes on the same operating system. For example, if both services are written in Java, you can run one Tomcat server and both will be instances on the same Tomcat server and serve traffic; or if one is in Python and the other in Java, you can deploy separate servers and configure them to redirect traffic accordingly. It doesn't matter how you configure it: this pattern just says you can run multiple services on one host. This is a very traditional approach. Even now we sometimes run multiple services on a single machine, but it is not really advisable because it has its own drawbacks, and there are much better patterns the industry has adopted these days (a small sketch of the pattern follows below).

Before the advantages and disadvantages: how do we scale here? If you want to scale out, all you have to do is add one more virtual machine or physical server, run the same copy of the services there as well, and then load balance across them. Then we have the microservices running on multiple servers or virtual machines, and you scale out based on the traffic you are receiving. That is the scaling part.

Now, the advantage of using the multiple services per host pattern is that it is very efficient in resource utilization. How? In one virtual machine we are running more than one service, so even if one service is not serving a lot of traffic, the other services running on the same machine might be busy using as much of the hardware as possible to serve more traffic. That way, at any given time, one or another of the services will most likely be using the hardware efficiently — or every service might be serving a lot of traffic, which also maximizes hardware utilization. Either way we are utilizing the hardware, and that is very cost effective for the business. The second advantage is faster deployment: once you have one server up, you already have a place where you can deploy a lot of different services, so deployment is much faster.

What are the disadvantages of this pattern? The first one is poor isolation. Both services live in the same operating system, which means one service might disturb the other at any point. For example, if one service is placing temp files in some directory, the other service might delete them. That is a very simple example, but there are many ways one service can affect another, including resource consumption. What if one service consumes all the resources available on the server and leaves no hardware for the other service at all? That is the second disadvantage: we can't really limit resources per service, so any service can utilize everything available and starve the other services, which will then never serve more traffic. The third one is dependency conflicts. Say these two are Java applications: what if one uses some package at a specific version and the other uses the same package at a different version? That could be a conflict in the libraries used, or in OS-level libraries needed to run the services. We would have to add one more isolation layer on top just to resolve the dependency conflicts, and that increases the complexity of deployment.
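To make the pattern concrete, here is a small hypothetical Python sketch (not from the video; the service names and ports are made up) that runs two independent HTTP "services" as separate processes on the same host — exactly the situation the isolation and resource-limit drawbacks describe.

```python
import http.server
import multiprocessing

def run_service(name: str, port: int) -> None:
    """Serve one trivial HTTP service; each call plays one 'microservice'."""
    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            body = f"hello from {name}\n".encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    http.server.HTTPServer(("0.0.0.0", port), Handler).serve_forever()

if __name__ == "__main__":
    # Two services, two processes, one host, one OS: they share CPU,
    # RAM, disk, and system libraries, which is where the poor-isolation
    # and dependency-conflict drawbacks come from.
    for name, port in [("service-one", 8001), ("service-two", 8002)]:
        multiprocessing.Process(target=run_service, args=(name, port)).start()
```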
Now let's learn the second deployment pattern, called service per VM or container. In this pattern you deploy your microservice in one virtual machine or one container. If you think of the box in the diagram as a virtual machine or a container, you have only one microservice running inside it; you are not supposed to run multiple services in there at all. If you want to scale out when there is high traffic or a demand for more throughput, you add one more instance of the virtual machine or container and let it run. How do you add one? You will have an image that is already built for a given microservice: say image one is for microservice one, and there is another image for microservice two. If you want to scale up microservice two, you take its image and deploy another instance from it. If you are using a VM to run the microservice, it is a virtual machine image; if it is a container, it is a container image. Those are pre-built images, and that is how you scale.

Scaling with virtual machine images is how we used to do this until we started to adopt containerization. Containers are now the hottest thing in microservice deployment, but until recently we used a lot of virtual machines for it. VMware has tools you can use to scale up or scale down, and in AWS you have Auto Scaling groups and dynamic load balancing, which add EC2 instances whenever traffic is high. If you want to scale containers, there are container orchestration technologies like Kubernetes or Docker Swarm, which do the same thing: you give them the images and they automatically scale based on many different parameters — the number of incoming requests, CPU usage, and so on.

Now, what are the advantages and disadvantages of service per VM or container? Some of them differ slightly between virtual machines and containers, so let's call those out specifically as we go. The first advantage of this pattern is better isolation, and with it security. We are running one microservice in one instance of a virtual machine or container, which means the service cannot really touch any other services, even if they run on the same server or the same hypervisor. It is well isolated from other services, and it is also secure: a virtual machine is really secure, and a container is still fairly secure — there are some cases where it is not rated quite as secure as a VM, but it is still a good option. The next advantage is manageability: it is very easy to deploy and scale out because, as mentioned, we have technologies like Kubernetes and Docker Swarm that manage all of these instances and deployments, so it is much easier.
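As an illustration of the one-service-per-container idea, here is a minimal hypothetical Python entrypoint (not from the video; the service name and default port are made up): the process serves exactly one service and reads its port from the environment, so an orchestrator such as Kubernetes can stamp out many instances from the same image.

```python
import http.server
import json
import os

# One container image, one service, one process: the orchestrator scales
# by starting more containers from this same image, so the only
# per-instance configuration comes in via the environment.
PORT = int(os.environ.get("PORT", "8080"))

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A trivial health/info endpoint; an orchestrator's liveness
        # probes would typically hit something like this.
        body = json.dumps({"service": "microservice-two", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```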
The third advantage is that it is fast, though this is specifically for containers. If you want to spin up one more instance of the service and you are using containers, it is really fast, because containers are very lightweight: they do not boot an entire operating system. A virtual machine is not nearly as fast. The fourth advantage is auto scaling, which should be much easier here because you have an image and you just need to spin up one more instance of it. Auto scaling with a virtual machine or a container is much easier compared to the previous pattern, multiple services per virtual machine or physical server.

Those are the advantages; what are the disadvantages? The first is that it is slow if you are using virtual machines. Since a virtual machine runs an entire operating system, the VM image is large, and it takes more time to copy the image to whichever server we want to spin up a new instance on. Virtual machines are also slower to boot, because starting the whole operating system takes time. So service per VM is a little slow; containers, on the other hand, start fast — spinning up one more container is much quicker, so containers are still fine here. The next disadvantage is that virtual machines are not very efficient in resource utilization, because each one has to run a whole operating system, which consumes a lot of resources by itself. A container does not run its own operating system; it is very lightweight and does not consume many resources beyond the service it runs. The third disadvantage is that containers are not quite as secure: as we discussed, a container is secure, but not to the level of security a virtual machine provides.

No matter what, the industry trend — the most used way of deploying today — is service per container, not service per virtual machine. We used service per VM until a couple of years back, but the industry has moved forward with containers, and with the hype around Kubernetes and Docker, everyone is using containers for microservice deployment.

Now let's learn the third and last deployment pattern for microservices, called serverless. As the name suggests, it is serverless, meaning there are no servers involved in your deployment at all — at least from your point of view. You might be thinking: then where does my code run? Your code definitely runs on a server, but you don't need to worry about scaling servers up or out, maintaining instances of your microservices, or configuring a service registry, an API gateway, and all of that. You just write the piece of code that accepts a request and returns a response. In AWS you log into the console and use what AWS calls AWS Lambda; Google Cloud calls it Google Cloud Functions, and Azure calls it Azure Functions. Basically every major cloud provider supports serverless under a different name.
What you have to do is log into the console; they give you a place where you can paste or write your code — basically the business logic — and that is nearly all you need to do. You make a little bit of configuration, like API gateway settings, and once you do that you have your service or API up and running with whatever cloud provider you selected. How it works is that you store the code of that function or service with the cloud provider, and they know when to trigger it. Usually it is an HTTP request that triggers the function or code you placed there; after processing, you return a response, and they send it back as the HTTP response.

These functions are not just for HTTP request and response; they can be triggered by different events that happen across the cloud platform. For example, in AWS, say there is an S3 bucket and you want to trigger this particular function or service based on events that happen in S3: if someone uploads an image to the bucket, you can configure it to trigger the function, access that image, do some processing on it, and exit. In the HTTP case, the function is triggered by an HTTP request on a specific path, and then you return a response or do some processing. While configuring for HTTP you will actually need an API gateway — you already learned all its functionality — and in this case the API gateway is also provided by the same cloud provider. You just need to define the path at which the function is going to be triggered, and maybe the HTTP method that triggers it. Whenever a request comes in on that specific path with that particular method, your configured function gets executed and you send the response back.

It looks so easy — then why did we take so much time to learn all the different concepts and components of microservices? You might be thinking that, and it is true that you don't need to worry about many of those things if you are using AWS Lambda or similar serverless functions. But it is not entirely true, because serverless as a way of deploying has some limitations, which we will talk about later. So let's understand a little more about it.
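To ground this, here is a minimal hypothetical AWS Lambda handler in Python — not code from the video, and the response payload is made up — but the (event, context) handler signature, the API Gateway proxy-style statusCode/body response shape, and the S3 event record layout are the standard Lambda conventions.

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes; 'event' carries the trigger payload."""
    # API Gateway (proxy integration) delivers the HTTP details in the event.
    path = event.get("path", "/")
    method = event.get("httpMethod", "GET")

    # Business logic goes here; servers, scaling, and routing are the
    # platform's problem, not ours.
    payload = {"message": f"handled {method} {path}"}

    # API Gateway expects this response shape from a proxy-integrated Lambda.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }

def s3_handler(event, context):
    """Same idea for an S3 trigger: the event lists the affected objects."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"object uploaded: s3://{bucket}/{key}")  # process the image here
```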
Now you can ask the question: how will this scale, and how does scaling out work? Say requests are coming in and the function is keeping up right now, and suddenly many more people come to your service and many more requests arrive. How does serverless manage to scale? The answer is that you don't have to worry about it at all: you don't have to worry about adding more servers or containers or VMs or anything. The cloud infrastructure provider will scale these functions to more instances, or whatever it is they do; you really don't have to worry about it. All you have to do is put the code — the business logic — into that serverless function. Whether your application is getting one request per second, one request per day, or 10,000 requests per second, it still works. That is the beauty of serverless. The cloud provider does not give you access to configure any of the internals of how they scale it; they take care of it, and they always make sure the function is scaled automatically, irrespective of the number of requests coming into that service or function. Absolutely zero worries. You never get to know whether they are using containers, physical servers, EC2, or virtual machines to scale it — they will never tell you; they do it all on their back end.

Now I think it is time to understand the advantages and disadvantages: if this is so cool, why don't we use serverless everywhere? Let's understand that. The first advantage of serverless is that you focus on the code only. Generally, when we think about deploying microservices in an application, we always think about the infrastructure and how to handle a huge number of requests. With serverless you don't need to worry about that; you just write the business logic and put it there, and the cloud provider takes care of scaling and maintenance, so you focus on the code. The second advantage is no worries about scaling, which we just discussed: you don't need to worry about it, they scale it automatically. The third is pay as you go, and this is the really interesting part: how the billing happens. In the case of EC2, virtual machines, or any other servers, the cloud provider counts the number of minutes or hours you have used that particular machine or resource and bills based on the type of virtual machine. Say the rate is simply $1 per hour: if you used the machine for 10 hours, no matter whether you served one request per second or 10,000 requests per second on it, you still end up paying 10 hours times one dollar, a total of 10 dollars. With serverless that does not happen: the billing is calculated from the number of requests the platform handled for the specific function, plus the time (seconds or minutes) the function took to execute, plus the resources, like CPU and RAM, it consumed. So it doesn't matter that you have uploaded your code: if your service gets zero requests per second, or zero requests in a day, you don't need to pay anything. You can have your whole service sitting available — in AWS, say — and pay zero dollars a month, because no one is using it: zero requests for your service. But if one fine day someone figures out this is a really good service and starts to use it, and suddenly you are getting a thousand requests per second, they automatically
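As a back-of-the-envelope illustration of that billing difference, here is a hypothetical Python calculation; all rates are made-up example numbers, not real cloud prices.

```python
# Hypothetical rates purely for illustration -- not real cloud prices.
VM_RATE_PER_HOUR = 1.00          # flat $1/hour for an always-on VM
PER_REQUEST = 0.0000002          # serverless: charge per request...
PER_GB_SECOND = 0.0000166667     # ...plus per unit of memory-time consumed

def vm_cost(hours: float) -> float:
    # The VM bills for wall-clock time whether it serves 1 request or 10,000.
    return hours * VM_RATE_PER_HOUR

def serverless_cost(requests: int, seconds_each: float, gb_memory: float) -> float:
    # Serverless bills only for work actually done.
    return requests * (PER_REQUEST + seconds_each * gb_memory * PER_GB_SECOND)

# Ten idle hours: the VM costs $10, the function costs $0.
print(vm_cost(10))                                  # 10.0
print(serverless_cost(0, 0.2, 0.128))               # 0.0
# Under sustained heavy traffic the comparison can flip the other way:
print(serverless_cost(10_000 * 3600, 0.2, 0.128))   # an hour at 10k req/s
```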
scale it, your service stays available, and the billing starts counting based on the resource usage and the number of requests. You basically pay the bill only when the service is used, and that is a really cool thing about it: it is pay as you go, based on the number of requests and the resources consumed.

Now it is time to understand the disadvantages. The first one is that not all runtimes are available on the serverless platform. In AWS, for example, only a limited number of runtimes are available. The famous, commonly used runtimes are obviously there, but if you want to run some new language you have built, that is not really possible, and some rarely used runtimes are definitely not available. There is also the matter of handling third-party libraries: you can't just go into the platform and run apt-get or pip install; that is definitely not possible. The only way is to package a bundle of all the dependencies and provide it along with your function, so it is available at some specific path, and you import from there. That is one disadvantage.

The second one is that it is actually kind of expensive. I said the billing is really good because it is pay as you go, but overall, if you consider how much you actually pay when your service is heavily used every day and every second, there is a certain threshold: when the number of requests crosses it, you end up paying more with serverless than you would have paid with, say, EC2. You should definitely read the blogs that compare serverless pricing against EC2; reading them will help you understand this better.

The third one is vendor lock-in. If you write your function specifically for AWS Lambda, you can't just take the same code, put it in Google Cloud Functions, and run it, because some of the request and response handling is written for, and the code depends on, a specific platform. In the case of AWS we write code against the objects AWS provides when the function call happens, so you can't take that code and run it somewhere else; you basically have to keep using that code on that one platform. So there is a possibility of vendor lock-in.

The fourth one is debugging pain. You can't run the whole serverless setup on your local machine, so it is very difficult to debug and check what is really happening. There are some tools that let you simulate serverless locally and try things out, but in general it is always painful to debug serverless code on any cloud platform.

The fifth one is stateless and short-running processes only. Usually these functions are executed when a request comes in, and they die immediately once the response has gone back; they don't remember anything that happened earlier. That tells you that whatever code you run on the serverless platform should be stateless; it can't be stateful. If you really want some state preserved, you have to keep it in something external — a cache like ElastiCache, a database, or S3 — and then you have to do a lot of extra work.
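For instance, here is a hypothetical sketch of the usual workaround (the bucket name and key are made up, but the boto3 client calls are the real S3 API): since the function forgets everything between invocations, each invocation reads its state from S3 and writes the updated state back.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-state-bucket"  # hypothetical bucket name

def lambda_handler(event, context):
    # The function instance may die right after this call, so no state
    # can live in memory; pull the previous state from external storage.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key="counter.json")
        state = json.loads(obj["Body"].read())
    except s3.exceptions.NoSuchKey:
        state = {"count": 0}

    state["count"] += 1

    # Persist the new state before returning; the next invocation
    # (possibly on a different instance) starts from here.
    s3.put_object(Bucket=BUCKET, Key="counter.json", Body=json.dumps(state))
    return {"statusCode": 200, "body": json.dumps(state)}
```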
So usually these functions are really useful for processing that is short-lived and stateless. They are only good for short-lived work because, for example, in AWS you can't have a request-response cycle that goes beyond five minutes; it gets timed out. There is a hard timeout for the request and response to finish, so anything you execute should complete within that limit; you can't just keep running for hours. That is one more pain point, I think. I believe I have covered, more or less, the important things related to serverless. There is also an open-source flavor of serverless available, called OpenFaaS: if you don't want to use any of the cloud providers and you want to set up serverless on your own cluster of servers, you can use OpenFaaS, deploy it on your cluster, and try it out as well.

I think this will be the last video in this series of sessions about microservices. It should definitely help you understand what a microservice really is and all the different components and concepts of microservice architecture; it will also help in your interviews and make your microservices concepts stronger. If you really liked this session, please share it with your friends — as any YouTuber says, please share, subscribe, and comment. Thanks a lot. I'll try to include all the interesting links along with the video, and as usual, thanks a lot for watching. Do subscribe if you haven't. Thank you!
Info
Channel: Tech Dummies Narendra L
Views: 23,424
Rating: 4.9552794 out of 5
Keywords: Amazon interview question, interview preparations, software interview preparation, developer interview questions, Facebook interview question, google interview question, Technical interview question, software architecture, system design, learn System design, MICROSERVICE, learn microservices, introduction to microservices, advanced microservices, microservice architecture, microservices system design questions, microservices deployment types, deployment patterns microservice
Id: XJS_GwcLfHc
Length: 28min 58sec (1738 seconds)
Published: Fri Mar 06 2020