What is a service mesh?

Captions
What is a service mesh? A service mesh is a concept that applies to microservices, so let's start there. Say we have an application, a huge monolith: a single codebase, multiple teams working on it, containing all the features of your entire application. Now say we split that monolith into a set of microservices talking to each other. Because we do that, we get a lot of benefits. Since each component of the application is now small, you can deploy it frequently, because each change is small and does not affect all the other parts of the system. Similarly, if a component crashes or goes down, that isolation ensures it does not affect other parts of the system. You can scale components individually: if your payment service is used a lot more, you can scale it independently of all the other modules. It improves your culture, because each team now works on a single business capability. And you get flexibility: as long as the contract, the API, between two microservices stays intact, you can change the language or the framework used to build a microservice, and you can change its database, so each microservice can use its own data stores, queues, and so on.

But since we have now converted a monolith into a distributed system, we also get a set of challenges, and those challenges are listed here. We'll talk about the challenges on the left in the following slides; there are also some challenges listed on the right, but we are not going to cover those in this particular video.

Let's start with the list on the left. The first one is service discovery. Once you split your monolith into microservices, each microservice could be running on a different machine or a different VM, with its own IP address and port. So how will one service get to know the IP addresses of the other services? One option is of course to just hard-code the IP addresses, but the problem is that in the cloud, if your VM is restarted, or if your service has multiple copies, the service will have a different IP address after the restart and each copy will have a different IP address. So of course you cannot hard-code them. What you essentially need is a registry of sorts. Say you have a single service registry, and whenever any service comes up, it registers itself with this registry: "Hi, my name is service A and this is my IP address; please keep it in your registry." Whenever another service wants to talk to it, it does a lookup: "I want to talk to service A; can you give me the IP address of service A?" During this lookup, the service registry gives back the IP address of service A, and then the calling service can talk to service A directly; that is what the connector in the diagram represents.

This is what we have the Spring Cloud project Eureka for. In this project there are two components. The first component is an application, also called the Eureka server, which you deploy yourself; it is a Spring Boot application and it acts as your service registry. This is the application all your services talk to, to register themselves and to look up other services. The second component is the Eureka client, a library that you add to each of your microservice projects, along with a set of configuration properties.
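The video stays at the whiteboard level, but as a rough sketch of what those two pieces look like with Spring Cloud Netflix Eureka (the class names, port, and property values here are placeholders, not from the video):

    // Registry project: a standalone Spring Boot app that acts as the service registry.
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

    @SpringBootApplication
    @EnableEurekaServer   // turns this application into the Eureka server
    public class RegistryApplication {
        public static void main(String[] args) {
            SpringApplication.run(RegistryApplication.class, args);
        }
    }

    // Each microservice project adds the Eureka client dependency plus properties such as:
    //   spring.application.name=service-a
    //   eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
    // so that it registers itself as "service-a" on startup and can look up other
    // services by name instead of by hard-coded IP address.

When service A starts it registers under its application name, and any other service can ask the registry for "service-a" and get back its current IP addresses, which is exactly the lookup flow described above.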
Once you add the client library and those properties, when the service starts it registers itself with the Eureka server, and whenever the Eureka client wants to look up another service, the server gives back that service's IP address and the client connects to it.

But there are a few problems with this. First, the service registry, the Eureka server, is a standalone application, so you have to manage it yourself. It is also a single point of failure, because if you have N microservices, all of them talk to this single registry; even if the registry goes down for a few minutes, it can affect your entire system. That is why it has to be highly available all the time, and there are peer-awareness and clustering concepts in the Eureka server that let you deploy multiple copies of it. The other problem is that you need to add libraries and code to each of your microservices for this whole thing to work.

Remember the example of the payment service requiring scaling: say service A has multiple copies, all running at the same time. On the client you need to add another library, called Ribbon, which gets the addresses of all the copies from the registry and then internally decides how to load-balance between them: the first time it might connect to copy one, the next time to copy two, then to copy three.

Rolling deploys are the next challenge. Say you have version 1.0 of a service and all your traffic is going to it. You want to be able to start version 1.1, a new update to your service, and at the right time switch all of that traffic from 1.0 to 1.1, while ensuring that during the transition your traffic is still handled properly; behind the scenes you are updating the service. There is a similar concept called canary deploys: say you have updated a feature and want to test it with a very small crowd before deploying it to a wider audience. For example, you want to expose the new feature to only 5% of the users, so you need the ability to push only 5% of your traffic to the new version of the service while the remaining 95% still goes to the old one. Once you think the feature is working really well, you slowly migrate more and more traffic to version 1.1, and eventually the entire hundred percent of the traffic goes to the new version. This concept is called canary deploys.

Another problem is that the flow of a single request can now go through multiple microservices, so how do you trace it? Whenever a request comes in, you need to be able to assign it a unique code, also called a trace ID, and then every microservice, when it logs anything for that request, logs it with that unique code. You can then trace the entire flow across all these services using that particular ID. (A rough hand-rolled sketch of such a filter appears a little further below.)

Another challenge of microservices is security. How do you ensure that when two services talk to each other they use HTTPS? For that you need to install certificates on both services, and then there is also the concept of certificate rotation: say every 30 days you make sure the certificates are renewed or replaced, so that if someone gets hold of a certificate they cannot keep using it to impersonate that microservice.
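Spring Cloud Sleuth (and later the mesh itself) can do this tracing for you, but as a minimal hand-rolled sketch of the trace-ID idea, here is a hypothetical servlet filter that reuses an incoming trace ID or mints a new one, and attaches it to every log line for that request (the header name and MDC key are assumptions, not from the video):

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import org.slf4j.MDC;

    public class TraceIdFilter implements Filter {
        private static final String TRACE_HEADER = "X-Trace-Id"; // assumed header name

        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            String traceId = ((HttpServletRequest) request).getHeader(TRACE_HEADER);
            if (traceId == null || traceId.isEmpty()) {
                traceId = UUID.randomUUID().toString(); // first service in the chain creates the ID
            }
            MDC.put("traceId", traceId); // the log pattern can then include %X{traceId}
            try {
                chain.doFilter(request, response);
            } finally {
                MDC.remove("traceId"); // avoid leaking the ID to the next request on this thread
            }
        }
    }

For the trace to span services, each outgoing call must also forward the X-Trace-Id header; that is exactly the kind of plumbing a service mesh later takes off your hands.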
The second security issue is network policies: how do you enforce a set of rules saying that service A is allowed to talk only to service B and service C, but not to service D? You need to be able to define which service can talk to which other service.

Now that we have seen some of these problems, some of these challenges of microservices, let's see how a service mesh is a concept that intends to solve them. The motto of a service mesh is: do not burden my code with all these infrastructure-related decisions. All the things we talked about are administrative, infrastructure-related, or communication-related; they are not related to your actual business code. We want some platform, some framework, some code to take these responsibilities out of our application code and handle them somewhere else, on a different plane.

Initially we had service A talking directly to service B. Now, instead, say we have a new program, a new application (it could be built in Java, it doesn't matter) called a sidecar, which always runs in the same place where a microservice is running. Wherever service A runs, it will have its own sidecar program running along with it, and wherever service B runs, in the same machine or the same container, there will also be a similar sidecar program. Whenever service A wants to connect to service B, it will not connect to it directly; it will always talk to its sidecar. The sidecar acts as a proxy for service A, and it is the sidecar's responsibility to talk to the other service's sidecar. The sidecar proxy offloads all the communication and infrastructure concerns we spoke about from your code onto the sidecar; that is the concept and the advantage of having a sidecar. (A toy sketch of this local-proxy idea appears right after this passage.)

Now let's add one more component: a control tower. The control tower is a program that manages these sidecars. We don't want to hard-code that logic into the sidecar; we want the sidecars to be managed by someone else, and that someone else is the control tower. The control tower is responsible for pushing down all the configuration for service discovery, network policies, traffic management, load balancing, certificates for mutual TLS authentication, and so on and so forth. The control tower takes care of all these problems and challenges, so we don't have to deal with them within our code.

Going one step further: since we now have this control tower, this program that dynamically manages all the sidecars, we can do all the things we spoke about. In this example, say we have a stable service and an updated version we want to test. We can ask the control tower to send just 1% of our traffic to the canary version of the service while the remaining 99% goes to the stable service. Similarly, we can check the headers of the request and say: if the request comes from an iPhone, send it to this service; otherwise, always send it to service A.
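To make the sidecar idea concrete, here is a deliberately toy sketch of a local proxy written in plain Java (nothing like a real Envoy): the service always calls http://localhost:15001, and this process forwards the call onward. The port and upstream address are made up for illustration:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ToySidecar {
        // A real mesh would get this from the control plane instead of hard-coding it.
        private static final String UPSTREAM = "http://service-b-sidecar:15001";

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpServer server = HttpServer.create(new InetSocketAddress(15001), 0);
            server.createContext("/", exchange -> {
                try {
                    // Forward the local service's request to the peer sidecar (GETs only, for brevity).
                    HttpRequest forward = HttpRequest.newBuilder()
                            .uri(URI.create(UPSTREAM + exchange.getRequestURI()))
                            .GET()
                            .build();
                    HttpResponse<byte[]> resp = client.send(forward, HttpResponse.BodyHandlers.ofByteArray());
                    exchange.sendResponseHeaders(resp.statusCode(), resp.body().length);
                    try (OutputStream out = exchange.getResponseBody()) {
                        out.write(resp.body());
                    }
                } catch (InterruptedException e) {
                    exchange.sendResponseHeaders(502, -1); // upstream call interrupted
                }
            });
            server.start();
        }
    }

This is where retries, mutual TLS, metrics, and routing rules would live in a real sidecar; the point is only that the application itself never has to know where service B actually runs.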
One more advantage of the control tower and the sidecars is that, since your services no longer talk to each other directly, the sidecars can decide which communication protocol to use between themselves: if two services were initially talking over HTTP/1.1, the sidecars can switch that to HTTP/2 or to gRPC. You can really see the advantage of this when you have not just two or three microservices but hundreds of services, and this control tower, this control plane, is responsible for managing all of them without a single line of code that you have to write; that is when the benefit becomes even more apparent.

This whole concept of a control tower plus sidecars is called a service mesh. The service mesh allows you to offload this undifferentiated heavy lifting from your code onto a control plane, driven by these sidecars. One of the projects which implements a service mesh is Istio plus Envoy. Envoy is the project which acts as the sidecar we just talked about: here is service A, and it has an Envoy sidecar; and in the control plane you have the project called Istio. The configuration is pushed down to the sidecars, the TLS certificates are pushed down, and all the metrics and the network policies about which service can talk to which other service are also pushed down by this control plane; and between themselves the Envoys can choose which protocols to use. So Istio and Envoy combined are one such service mesh project.

So those are the responsibilities of a service mesh: a service mesh is a mechanism by which you do not have to write code for any of these microservice challenges; it is the component, the framework, the platform that takes care of all these challenges for you. That's it for this video. If you have any comments, let me know, and talk to you in the next one. Thank you.
Info
Channel: Defog Tech
Views: 100,199
Rating: 4.9316506 out of 5
Keywords: service mesh, microservices, microservice pain points, microservice issues
Id: QiXK0B9FhO0
Length: 13min 47sec (827 seconds)
Published: Sun May 27 2018