Service Mesh: What & Why? - a new series

Video Statistics and Information

Captions
So you're building a service-based architecture, whether it's microservices, nano services or even giga services. There are some fundamental things that you need to know about service-to-service communication.

If service A makes a call to service B, there are a couple of things we usually do. If the call to service B fails, service A may want to retry. This concept is called retry logic. Developers usually have retry logic in their code base that handles these kinds of failure scenarios, and this logic may differ between different types of services and different programming languages. How many times do you retry before giving up? And what if too much retry logic causes more harm than good?

Between service A and B we may want authentication. Service A has to have logic about how to authenticate with service B, and service B has to have logic about how to handle that authentication, so their code bases grow and become more complex.

Sometimes we may also want mutual TLS or SSL connectivity between microservices: we may not want services to talk over port 80, but instead talk securely over port 443. This means certificates have to be issued, rotated and maintained for every service, which becomes a huge maintenance nightmare at scale.

Another one is metrics. For services A and B we may want to know the number of requests per second that service A sends and the number of requests per second that service B receives. We may also want metrics on latency: the time it takes for service B to respond. There are great metrics libraries available, but this takes development effort, which makes the code base grow and become more complex.

Now what if service A calls service B, but service B makes requests to services C and D? Sometimes we may want to trace requests down to each service to figure out where the latency may be: service A to B might take five seconds, service B to C only half a second, but B to D takes four and a half seconds. Tracing these web requests helps us find the slow areas in web systems, but it is very complicated to achieve and requires a lot of code investment in each service in order to trace each request and its latency.

Sometimes we may also want to do traffic splits and only send 10 percent of traffic to service D.

In traditional web servers we have firewalls that allow us to govern which services can talk to each other. With distributed systems and microservices at scale this is almost impossible to maintain: the more services we add, the more we have to continuously tweak complex firewall rules, and we may need to constantly update and set up policies about which services are and are not allowed to talk to each other.

So if we take a look at services A and B, and we add retry logic, authentication, mutual TLS, metrics about requests per second and latency, and logic to trace requests from service A to B, B to C, and C to D, and then we scale this because of demand and add services E, F, G, H, I and J, as you can see this adds a ton of development effort and operational pain which does not scale well.

This is where service mesh technology comes in. In software architecture, a service mesh is a dedicated infrastructure layer for facilitating service-to-service communication, usually among microservices. A service mesh is designed to make the network smarter: basically, take all that logic and those features I spoke about earlier and move them out of the code base and into the network, which keeps your application code bases smaller and simpler. You get all of those features and keep your code base mostly unchanged.

So let's take another look at services A and B. The way a service mesh works is that it discreetly injects a proxy as a sidecar into each service. This proxy hijacks requests coming in and out of the pod for services A and B: a web packet would hit the proxy in service A first, then route to the proxy in service B, before actually hitting service B. Instead of us adding logic to services A and B, the logic lives in the sidecar proxies, and we can cherry-pick the features we want in a declarative config. Say we want TLS: the proxies would manage their own certificates and rotate them automatically. Say we want auto retries: the proxies would retry requests in case of failure. Say we want authentication between services A and B: the proxies would handle authentication between the services without the code knowing. We can turn on metrics and automatically see requests per second and latency between every pod in the cluster without adding code to our services; no matter the programming language, they all get the same metrics. All of this is defined in a declarative config file per microservice, which makes it easy to opt in and out. It also makes it easy to scale, especially when you have more than 100 services in your cluster.

So to start the series on service mesh, we're going to need a really good use case. If you take a look at my GitHub repo, I have a kubernetes folder, and in here I have a servicemesh folder with a readme. This readme covers all the steps I'm about to show you today, so be sure to check out the link to the source code down below so you can follow along. In the service mesh series we'll be taking a look at Linkerd and Istio. Service meshes cover a wide variety of features, as I mentioned earlier, but the great thing about a service mesh is that it's not something you just turn on in your cluster. Well, you can, but that's not the approach I would recommend. Instead, I would recommend installing a service mesh, cherry-picking the features you need, and turning them on for the services that need them, and applying that approach until these concepts mature within your team. Once you gain value from it, you can decide to expand these features to other services, or adopt more features as you need them.

So in the series we're going to need a great use case. If you take a look at the kubernetes servicemesh folder, you'll see I have an applications folder with three microservices for this use case. These services make up a video catalog, which is basically a web page that shows a list of playlists and videos. Let's take a closer look at the design. We start with a simple web UI called videos-web. This is an HTML application that lists a bunch of playlists with videos in them. So you can see we have one box for the videos-web that we're going to run in Docker, and for the videos-web to get any content, it needs to make a call to the playlists-api. The videos-web runs in the browser and loads a bunch of playlists, so it makes a web request to the playlists-api to get a list of playlists. A playlist basically holds data like a title, a description, and a list of video IDs. The playlists-api also has to store its data, so it needs a database. So you can see we have videos-web making a call to the playlists-api, and the playlists-api gets its playlists from the playlists-db.

Now let's add a little bit of complexity. If we take a look at what a playlist looks like, it's a JSON response: a playlist has an ID, a title, and a list of video IDs. Take note that the videos are just a list of IDs; there are no video titles, thumbnail images or descriptions here. That data is stored in the videos-api, a separate API we can pass a video ID to and retrieve all the content of that video. The videos-api has its own database, the videos-db.

So if we take a look at the full architecture: we have videos-web, and when it loads in the browser, it fires off a single web request to the playlists-api to say "get me all your content". The playlists-api makes a request to its database to load all the playlists it has, then loops through each playlist, gets all the video IDs needed to populate the playlist, and makes subsequent network calls to the videos-api to get the video content from the videos-db. This results in network fan-out between the playlists-api and the videos-api, with a lot of requests between them and the videos database, and this is intentional, to demonstrate a busy network.

To build and run this application locally, you can change directory to the kubernetes servicemesh folder and say docker-compose build. That goes ahead and builds all these applications into Docker containers. Now that my containers are built, I can say docker-compose up, which starts up all my applications, and if we go to the browser on localhost port 80, we can see we've successfully loaded the videos-web, which has rendered four playlists, and we can expand each playlist to see the video titles and thumbnails. So the videos-web makes a call to the playlists-api and loads all the playlists; the playlists-api makes calls to the videos-api, gets all the thumbnail images and titles, and sends it all back to videos-web for rendering.

So now that we have our applications running in Docker containers locally, to take a look at service meshes we're going to deploy all of this to Kubernetes. For that I like to use a product called kind, which helps me run a Kubernetes cluster locally in Docker containers. I have instructions here on how to create a kind cluster: I say kind create cluster, I call my cluster servicemesh, and I run Kubernetes 1.18. I go ahead and run that, and it spins up a Kubernetes cluster within a container on my machine, where I can deploy the video catalog application.

To deploy these microservices to Kubernetes, I'm going to need YAML files. If we expand the applications folder, we see our three microservices, and if I expand each of them, you can see I have a deployment.yaml inside the playlists-api folder, in the videos-api, and in the videos-web. For the videos-web, let's take a look at the deployment.yaml file: it's a simple deployment with one replica, running the container we've just built; we call our container and deployment videos-web, we expose port 80, and we define a small service to expose videos-web. To deploy it, I change directory to the kubernetes servicemesh folder and say kubectl apply -f with the videos-web deployment.yaml file. That deploys the pod to our cluster and exposes it via a service. We can see a pod creating for our videos-web, and with the pod running, I can port-forward to it so I can access it in the browser. If I port-forward, we can see there's a connection, and when I open up localhost we see there's no content. That's because we only have the videos-web, and there's no API for it to connect to.

So now we need to deploy the playlists-api. To do that, I leave the port-forward running, open up another terminal, change to the kubernetes servicemesh folder, and take a look at the playlists-api YAML. It's very similar to the videos-web: a deployment with one replica, running the container we want, on port 10010, with a couple of environment variables about its database, and a simple service to expose it. We also have another deployment object, the playlists-db, which in this case runs Redis 6 as the database, with a small service to expose it. To deploy the playlists-api and database, I say kubectl apply -f with the playlists-api deployment file. That creates the pod for my playlists-api as well as a Redis pod, with services exposing each of them.

Now we can see we have our videos-web up and running, plus our playlists-api and its database. To expose the playlists-api, I port-forward to that one as well: I say kubectl port-forward to the playlists-api service on port 81, because the videos-web makes a call to localhost port 81 to talk to the playlists-api. If I go ahead and refresh, we can see that after a few seconds we've loaded the playlists successfully, but notice that there's no video content. That's because we still have to deploy the videos-api microservice.

To do that, I leave this terminal running, open up another terminal, change to the kubernetes servicemesh folder, and take a look at the videos-api deployment.yaml. It's very similar to the playlists-api: a deployment with one replica, the image we're going to run, some details about its database, and a service to expose itself; plus another deployment of one replica for its database, also running Redis 6, with a service to expose that database. To deploy it, I say kubectl apply -f with the videos-api deployment.yaml. Finally, if I say kubectl get pods, we now have all the services running, and if I go back to the browser and hit refresh, we see the full application: the playlists-api serving all the playlists, and the videos-api serving all the content of each of the videos.

I hope this video lays a good foundation for understanding the challenges of service-to-service communication, and for what a service mesh is. I'm pretty excited, because in a future video I'll be taking a look at a service mesh, how to install it, and we'll take our videos application and cherry-pick features from the service mesh to see what value it adds. Be sure to like and subscribe and stay tuned for the next one. Also remember to check out the community page in the link down below, and if you want to support the channel further, be sure to click the join button and become a member. As always, thanks for watching, and until next time: peace!
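The declarative, opt-in mesh config described in the transcript could look like the following sketch, assuming Istio as the mesh (the series covers both Linkerd and Istio); the host and subset names are hypothetical. It turns on automatic retries in the sidecar and sends 10 percent of traffic to a second version of the videos-api, with no change to application code:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: videos-api
spec:
  hosts:
    - videos-api           # in-cluster service name (hypothetical)
  http:
    - retries:
        attempts: 3        # sidecar retries failed calls up to 3 times
        perTryTimeout: 2s  # timeout budget for each attempt
      route:
        - destination:
            host: videos-api
            subset: v1
          weight: 90       # 90% of traffic stays on v1
        - destination:
            host: videos-api
            subset: v2
          weight: 10       # 10% traffic split to v2
```

The v1/v2 subsets would also need a matching DestinationRule; this is a sketch of the idea, not a complete setup.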
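Likewise, the mutual TLS requirement, with certificate issuing and rotation handled by the mesh rather than by each service, is a single declarative switch. This sketch assumes Istio's PeerAuthentication API:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default   # applies to all workloads in this namespace
spec:
  mtls:
    mode: STRICT       # sidecars accept only mutually-authenticated TLS
```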
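The deployment.yaml files in the walkthrough all follow the same shape: a single-replica Deployment plus a small Service. The sketch below approximates the videos-web manifest from what the transcript describes (one replica, port 80, a service); it is not a copy of the repo's file, and the image name is hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: videos-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: videos-web
  template:
    metadata:
      labels:
        app: videos-web
    spec:
      containers:
        - name: videos-web
          image: videos-web:latest   # hypothetical image name
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: videos-web
spec:
  selector:
    app: videos-web
  ports:
    - port: 80
      targetPort: 80
```

The playlists-api and videos-api manifests described in the transcript add database environment variables and a second Deployment/Service pair for their Redis instances, but keep this same structure.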
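The per-service retry logic the transcript describes can be sketched in a few lines of shell. This is a minimal illustration, not the repo's code; real services would implement this in their HTTP client library, and the endpoint in the usage comment is hypothetical:

```shell
#!/bin/sh
# Minimal sketch of the retry-with-backoff logic that, without a service
# mesh, each service would carry in its own code base. A mesh moves this
# behaviour into the sidecar proxy instead.
retry() {
  attempts=$1; shift   # max number of attempts; rest is the command to run
  delay=1              # initial backoff in seconds
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0         # the call succeeded
    fi
    echo "attempt $i failed, retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))  # exponential backoff
    i=$((i + 1))
  done
  return 1             # gave up after all attempts
}

# Hypothetical usage: probe the port-forwarded playlists-api up to 3 times.
# retry 3 curl -fsS http://localhost:81/
```

Note the open questions the transcript raises (how many attempts, how much backoff) all become per-service tuning decisions when this lives in code, which is exactly the duplication a mesh removes.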
Info
Channel: That DevOps Guy
Views: 13,286
Rating: 4.9893618 out of 5
Keywords: devops, infrastructure, as, code, azure, aks, kubernetes, k8s, cloud, training, course, cloudnative, az, github, development, deployment, containers, docker, messagebroker, messge, aws, amazon, web, services, google, servicemesh, linkerd, istio, mesh, gateways, gateway, proxy, traffic
Id: rVNPnHeGYBE
Length: 13min 37sec (817 seconds)
Published: Wed Sep 30 2020