Is your Nginx Ingress Controller Observable - Part 1 with Prometheus

Captions
If you have ever tried to observe the communication between your end users and the applications hosted in your company's cluster, then this episode is for you: we are going to study how to observe an NGINX ingress controller.

Welcome to Is It Observable. The main objective of Is It Observable is to provide tutorials on how to observe a given technology. Today's episode is part of a series related to the NGINX ingress controller, and in reality it is mostly related to logs. We will try to answer one question: is the NGINX ingress controller observable? We will look at three distinct solutions in three distinct parts. In part one we explain what an ingress controller is and look at the Prometheus exporter. In part two we focus on how to retrieve relevant KPIs utilizing Loki. In part three we build a log stream pipeline, extracting our logs and transforming them into metrics. If you enjoy today's content, don't forget to like and subscribe to the channel.

So let's see what we are going to learn in this part one of the NGINX ingress controller series. We will do an introduction explaining how to expose your services outside of your cluster, utilizing of course an ingress controller. Then we will look at the types of metrics we would like to collect from an ingress controller to get good visibility, then we will present the Prometheus exporter, and of course, like usual, we will jump into a tutorial.

So let's start with the introduction. In Kubernetes there are several ways, several solutions, to expose your services outside of your cluster. We can create a service with a specific type called LoadBalancer, and that will help us expose our service outside of the cluster. Just as a reminder, there are several types of services in Kubernetes: ClusterIP, NodePort and, last, LoadBalancer. The service type LoadBalancer allocates an external IP address to our service, so if we create several LoadBalancer services it means we will end up with several public IPs in front of our cluster, and as a result there will be an impact on the cost of our cluster, especially if you're using a cloud provider.
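For reference, here is a minimal sketch of a Service of type LoadBalancer; the service name, port and selector labels are placeholders rather than values taken from the video.

```yaml
# Hypothetical example: each Service of this type gets its own external IP,
# which is exactly the cost problem described above.
apiVersion: v1
kind: Service
metadata:
  name: app1            # placeholder name
spec:
  type: LoadBalancer    # the cloud provider allocates a public IP per such service
  selector:
    app: app1           # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080
```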
So how can we handle one IP address for several services, and have a mechanism that routes the traffic to the right service? There are two solutions that allow us to do that. The first one is utilizing a service mesh. If you don't know what a service mesh is, I recommend watching the episode dedicated to service meshes and especially to Istio; in that episode we introduce the concept of a service mesh and explain the various components of Istio and what you can achieve with it, especially with the Istio ingress gateway combined with gateways and virtual services, which basically achieves what we're looking for. The other solution that helps us expose our services outside of our cluster is utilizing a Kubernetes ingress controller. The ingress controller receives the incoming traffic and routes it to the right service. Kubernetes in fact only provides the Ingress interface; there is no built-in implementation, so if you want to use it you have to pick an existing implementation of an ingress controller, and there are several out there: there is obviously the NGINX ingress controller, the HAProxy ingress controller, Contour, the cloud provider solutions (Azure has one, AWS as well), and then Traefik, Ambassador, Kong, and so on.

The ingress controller will be the main component in charge of receiving the external traffic. The Ingress object has a specific structure in Kubernetes, so here is an example of an Ingress deployment file. The Ingress resource specifies the routing rules to handle the traffic to our applications. Let's say we have http://mydomain/app1 that we want to route to service 1, and another path, mydomain/app2, that we want to route to service 2: we create two backend rules, one to route the path of application 1 to service 1, and one to route the path of application 2 to service 2.
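As a rough illustration of those two rules, here is a sketch of such an Ingress resource using the networking.k8s.io/v1 syntax; the host, paths and service names are placeholders.

```yaml
# Hypothetical Ingress describing the two routing rules mentioned above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-routing-rules              # placeholder
spec:
  ingressClassName: nginx             # handled by the NGINX implementation
  rules:
    - host: mydomain.example.com      # placeholder host
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: service1        # placeholder backend service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: service2        # placeholder backend service
                port:
                  number: 80
```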
The Ingress resource is just the description of the rules; you need an implementation to actually make the routing happen. The ingress controller evaluates the defined rules and manages the redirections within our cluster. Because our ingress controller will be the entry point of our cluster, we probably want a proper level of observability on it. Recently the NGINX ingress controller has introduced several CRDs (custom resource definitions) to enhance the configuration of our ingress: you have VirtualServer and VirtualServerRoute, which allow us to create traffic splits, routing and more, and you can also configure things like the number of connections, the size of the upstream queue and so on. Then there are policies, for example to define the source IPs that are authorized to interact with our applications.

What are the types of metrics we would like to collect from our ingress controller? To measure the health of the ingress controller we want to report system metrics, such as the average CPU usage of the ingress controller and its memory usage, and then more application-level metrics like the number of bytes exchanged, the time to first byte, the number of requests split by HTTP code, the request time, the client response time, the upstream and downstream times, and much more. What is even more important is to collect the right dimensions, the ones that allow us to filter, to split, and to build efficient dashboards. A metric on its own is fine, but remember that our ingress controller serves various application services and various paths, so it is very important for us to be able to report the number of requests coming into a given service.

So let's see the various options to collect the desired data. The NGINX ingress controller has a status page; this HTTP endpoint reports metrics like the active connections, the number of accepted connections, the handled connections and the requests. On top of the status page, NGINX provides a Prometheus exporter that helps us collect those indicators; here is the link to the official NGINX Prometheus exporter. Keep in mind that those metrics require Prometheus support to be enabled, so you will have to modify the configuration of your ingress controller to make sure the Prometheus metrics are exposed. In fact it is a setting called enable-prometheus-metrics that needs to be set to true, and you can also modify the port and a few other options in the same way. Of course you will need to make sure that the port exposing the Prometheus metrics is available and reachable on the NGINX ingress controller. The default port, by the way, is 9113, but as mentioned this can be changed: you can customize the port through the setting called prometheus-metrics-listen-port.
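Concretely, those settings are passed as arguments on the ingress controller container. Here is a hedged sketch of what the relevant fragment of the controller's pod spec can look like; the image tag is a placeholder, and the flag names should be verified against the controller version you deploy.

```yaml
# Fragment of the ingress controller container spec (not a complete manifest).
containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress:2.0.3                       # placeholder version
    args:
      - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config    # ConfigMap holding the NGINX settings
      - -enable-prometheus-metrics=true                     # expose the Prometheus exporter
      - -prometheus-metrics-listen-port=9113                # default exporter port
      - -enable-latency-metrics=true                        # extra module for latency histograms
```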
From the exporter you will be able to retrieve the same level of detail as the status page, and you will also get information about the work queue and the last reload. A few metrics, like the latency, require enabling an extra module, the latency metrics, and this one can also be enabled directly from the controller settings when deploying the NGINX ingress controller in your cluster. You can deploy the controller in various ways, but you can also do it through a Helm chart. The Helm chart allows you to automatically enable a few settings, like the port of the Prometheus exporter, enabling the Prometheus exporter itself, and enabling the latency metrics if you want them. The NGINX ingress controller will be deployed through a StatefulSet, and all the settings of NGINX are done with the help of those arguments and environment variables, as mentioned before, and also a ConfigMap that replaces the NGINX configuration; through that configuration you are able to adjust your logging format if you want. In the documentation of the NGINX ingress controller there is clearly a troubleshooting process, and that procedure explains to look first at the metrics exposed by the exporter. But that only gives you a basic level of understanding, the level zero: you will probably quickly need to jump into the logs produced by the NGINX controller to get more insight into what's going on. It means that the logs are the source of truth, a great source of information for our observability.

So, the tutorial. This tutorial requires several things. Of course we need a Kubernetes cluster with one, two or three nodes. We have to deploy our ingress controller, in our case the NGINX ingress controller. Then, since we will expose Prometheus metrics, we need Prometheus, so we will deploy the Prometheus operator. To be able to scrape those new metrics from NGINX we will have to deploy a ServiceMonitor, to let the Prometheus operator scrape them. We will deploy a demo app, the hipster shop, and we will configure the ingress to expose Grafana on one path and the hipster shop on another path. Once the metrics have been scraped from the NGINX exporter, we will connect to Grafana and build a dashboard using the Prometheus metrics. All right, so let's start.

Like every tutorial that we deliver on Is It Observable, there is of course a GitHub repository, like usual. Here we are looking at the overall GitHub repo that holds the tutorials related to NGINX in general, and as you can see there are three solutions exposed. As of now, when I'm recording this episode, only two tutorials are available, so depending on which part you're looking for you should click on the right one. Today it's part one, using Prometheus, so let's click on it, which brings up the readme file related to this episode. Similar to the previous episodes you will of course need a Kubernetes cluster: either you can use, as I do, a GCP cluster in GKE, or if you prefer something else like Rancher you are free to do so without any issue. As usual, you have to clone the repo, and we will walk through the various steps of the installation.

As explained, Kubernetes has the Ingress interface but it doesn't have the implementation, so we need to install one of the implementations; in our case it will be the NGINX ingress controller. So step one: add the NGINX repo to Helm if you want to install it through Helm (I have already done it). The great thing with utilizing Helm is that the chart provides you a couple of variables, and this is exactly what we're looking for, because we want to enable a few things: remember, we need to enable the latency metrics to expose response times, we want to create the Prometheus exporter automatically, and last we want to create a ConfigMap that will be used for the settings of NGINX. Today we're not going to touch that last one, but it will be required in any case. All right, so first thing, I simply copy and run the install command. It deploys our Helm chart with our ingress controller.
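As a reference, here is roughly what those Helm commands look like. The repository URL is the official NGINX Helm repo; the release name and the exact value keys (prometheus.create, prometheus.port, controller.enableLatencyMetrics, controller.config.name) are assumptions based on the chart around the time of the video, so check them with helm show values before using them.

```bash
# Add the official NGINX Helm repository and install the ingress controller.
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update

# Value names are assumptions; verify with:  helm show values nginx-stable/nginx-ingress
helm install nginx-ingress nginx-stable/nginx-ingress \
  --set prometheus.create=true \
  --set prometheus.port=9113 \
  --set controller.enableLatencyMetrics=true \
  --set controller.config.name=nginx-config
```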
Let's have a look: you can do a kubectl get pods to see what we have in terms of pods in this namespace. We can see that our NGINX ingress pod is running, so the implementation has been deployed. In fact, as you may not have noticed, there is also a ConfigMap that has been deployed: it's the NGINX configuration that I have forced through the Helm variables, where I set a specific name for this NGINX configuration. This ConfigMap hosts our NGINX configuration, but in this part we won't touch it, because we are not going to adjust the logging format; we're not touching any logs here. What I would like to do now is describe the pod related to NGINX, because you will see that there are several settings, like mentioned, that NGINX uses. Here they are: arguments, in fact. You can see that the various modules, the various things related to NGINX, are enabled through those arguments, and you can find the Prometheus metrics port and see that the Prometheus exporter has been enabled. Most of this has been created for us; that's the great news if you use Helm charts, you don't have to touch those arguments, but if you deploy the controller manually you will have to edit those deployment files yourself.

The thing I'd like to do next, now that we have an ingress controller running, is to get the IP address of our ingress controller. Let me run this command; I have piped it to store the result in a variable, but let me remove that part so you can see the IP address. Here it is: the ingress controller comes with a service, a LoadBalancer service that exposes an IP, so this is the IP address of our ingress gateway. I will note it down for later, because I need to update two deployment files. Let me show you: the first one is related to the hipster shop. I have customized the deployment of the hipster shop: I'm not using the load generator service, like explained in the beginning, but I'm adding an Ingress with a new ingress rule. I don't know if you use nip.io, but it gives you a wildcard DNS name for any IP address, which is perfect for demo purposes. Here you can see I have put "online boutique" in the host name, because it's the hipster shop, and I need to put the IP address in there; that's why I need the IP address, because we will do a sed command to change the configuration of our ingress. As you can see, the definition of this Ingress is fairly easy: I have only one host, the path is /, and it brings me to the frontend service of the hipster shop; very basic. I have another one prepared for Grafana as well; here is the Grafana one, and similar to the online boutique it does the same thing, so here I'm replacing the IP address too. Now I have two host names, and the Grafana one routes to the prometheus-grafana service. All right, so now let's run those two sed commands to update those two deployment files; I'm going to paste them in the command line.
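The two helper commands look roughly like this; the service name, the file names and the IP_TO_REPLACE placeholder are illustrative, taken from the spirit of the repo rather than verbatim from it.

```bash
# Grab the external IP allocated to the ingress controller's LoadBalancer service.
# The service name depends on the Helm release name; adjust it to yours.
IP=$(kubectl get service nginx-ingress-nginx-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$IP"

# Substitute the placeholder in the two ingress manifests (hypothetical placeholder and file names).
sed -i "s/IP_TO_REPLACE/$IP/g" hipstershop/ingress.yaml
sed -i "s/IP_TO_REPLACE/$IP/g" grafana/ingress.yaml
```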
So now those two deployment files have been updated. The next step, of course, is to deploy the hipster shop, because we need the hipster shop, and then the only things left are to deploy Prometheus with the various exporters and the ServiceMonitor, and then we can jump into creating a dashboard. So I'm creating first the namespace for the hipster shop, then applying a role binding, here it is, and last I'm deploying the manifest file that utilizes my ingress controller. There it is. You will also have to add the Helm repo for Prometheus, but I already have it, so I'm updating all my repos to make sure I have the latest version of the Prometheus operator, and then I can simply install the chart. All right, now that we have this, let's do a kubectl get pods to make sure we have everything: kube-state-metrics is here, the node exporter is there, and Grafana is also there. Let's get the services to see what we have. We should have a Grafana service, which is here; it's a ClusterIP, so I'm going to change it to a NodePort. I'm just going to do kubectl edit on the svc and change the type to NodePort, just for the purposes of this tutorial. All right, now this has been edited, and last I have to go to the Grafana folder of the repo, here it is, and apply my ingress. Now that we have deployed the ingress, I'm just going to get the default password of Grafana, which is stored in a secret; usually it's prom-operator, the default one, so if you keep it don't forget to change it, which is of course required for security reasons.

The other thing that we need and have not deployed yet is the ServiceMonitor. With the Prometheus operator you have new CRDs that are exposed, and those custom resources bring a new object called the ServiceMonitor. Basically it helps you configure Prometheus so you don't have to touch the Prometheus configuration file, which would otherwise live in a ConfigMap; here it makes more sense to utilize those objects. To do that, the first thing I do is create a new service, called nginx-prom-metrics, that exposes the metrics of the NGINX ingress controller, the service that we had. Let me apply this one: it creates a service that exposes the metrics endpoint, and then I am able to reference it in the ServiceMonitor directly.
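For reference, a ServiceMonitor for this setup typically looks like the sketch below. The label selector, namespace and port name must match the nginx-prom-metrics service created above, and the release: prometheus label is an assumption about how your Prometheus operator instance selects its ServiceMonitors; adapt both to your installation.

```yaml
# Hypothetical ServiceMonitor telling the Prometheus operator to scrape the NGINX exporter.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress
  labels:
    release: prometheus          # must match the serviceMonitorSelector of your Prometheus
spec:
  selector:
    matchLabels:
      app: nginx-prom-metrics    # label carried by the service exposing port 9113
  namespaceSelector:
    matchNames:
      - nginx-ingress            # namespace where the controller runs (placeholder)
  endpoints:
    - port: prometheus           # name of the 9113 port in the service definition
      interval: 30s
```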
Now that we have applied the service and the ServiceMonitor, the next thing to do is to jump into Grafana. Let me go, in my case, to the GCP console: I have those two ingresses currently exposed, so let me test them first. The hipster shop is responding, that's perfect, and the other one goes to Grafana, which is great. admin is the default user, and remember the default password is prom-operator. Now that we have Grafana and our hipster shop application working, we can create a dashboard. The good news is that we don't have to configure anything in Grafana: because it has been installed with the operator, the Prometheus data source is already configured.

So let's create a new dashboard and add a new panel. For the first panel: by looking at the metrics of NGINX you will see that we have a couple of metrics available, and we will see, by playing around, that due to the missing dimensions we won't be able to get the detail we're looking for, the number of connections per endpoint. But the first thing which is important, I think, is to check whether NGINX is up or not; that could be quite an interesting indicator. So I'm going to pick that "up" metric: the value is very simple, it returns one if it's up and zero if it's down. Currently the panel is a graph, so let's remove the graph and change it to, let's say, a stat. On the display we don't need all the values, to be honest; the last value will be good. Like this it looks better, with the value one in here. I will put it in "area" mode so it is colorized, and what I would like to do is add some thresholds: if it's zero it's going to be an error of course, so red, and if it's one it's going to be green. So now if it's one it's green and if it's zero it's red. Now we have a nice little panel just to see the health; let's save it and call it something like "nginx up". We can keep it very small, just to have it as a green reminder, and let's save the dashboard and name it "nginx ingress controller".

The other thing we can do is add another graph, so let's have a look at the metrics by selecting nginx here. First of all we have the last reload; I'm not interested in this. As you can see we also have the server response latency, because we enabled the latency module, so we can plot some response times later. But before we jump there, a few more things: there are metrics for the virtual servers and virtual routes, but we have no VirtualServer or VirtualServerRoute in our case, so it doesn't make sense here; still, you could count the number of virtual servers and so on. What I'm interested in is a graph plotting the accepted, handled, reading and waiting connections, all of them split out. Let me grab the one for the accepted connections: it's of course a counter, so to make it a per-second value, like we saw with PromQL, we have to apply a rate over a window, let's say the last 30 seconds; I think that would be good.

And that's it, let's plot it: now we have the number of connections accepted per second. We can add another metric; it's going to be very similar, so let's copy and paste this one, but instead of the accepted we want the handled connections. Let's label everything properly, so "handled" for this one, and I'm going to copy the same rate function, same thing, 30 seconds. Now we have the handled and the accepted, and the values are the same, and we can also add the number of waiting connections; that could also be an interesting indicator. You can add as many as you want; you can see that there are counters for accepted, handled and active, so let's just add the waiting one as well. All right, here it is. The main problem here, if you look at the data we are exposing, is that there are no labels that would allow us to split by path. We have two ingresses, but in the data I'm getting out of Prometheus I'm not able to split, which means we cannot achieve what we were initially looking for: splitting the number of connections to figure out which service gets them. That detail we don't have, and that's the main problem related to the Prometheus metrics; that's why we have the other episodes. So let's name this panel "nginx connections per second", apply, save our dashboard, and put this panel underneath, like this.

Now we can add another graph, and in this one I want the number of requests coming in; that could also be interesting. We have the number of connections, but it is also relevant to follow the number of requests coming into our ingress. For this, scroll down: there is a counter at the bottom called http requests, and this is pretty interesting. Similar to what we did before, you can see there are no endpoint labels for us, so no way of splitting by service; that's still the main problem. Let's put a rate again, because remember it's a counter, over 30 seconds, like this, and now we have the number of requests per second coming into our ingress controller; let's call it "number of http requests". Let's move it next to the other one; the layout doesn't always cooperate, but that will do.

The last thing is the response times, so let's create another panel. There is no worker traffic here, so no value there; let's look at the response times. This metric is provided by another module than the status page: you see here there is the upstream server response latency, and it's a histogram bucket, in fact. I can see the values here, and in this particular case, if you look at the labels available, we have the hipster shop online boutique ingress, so we have the data we're looking for, and we even have the pod names, which is more precise from our perspective. I know that I have the data I'm looking for, so I'm going to do a sum of it. Now that we have the sum, we're going to do a rate of this one, so that will be per second, and I will have the milliseconds, over the last 30 seconds again. There it is, let's refresh. Now we can do the average, and with the average the good thing is that we are able to split by the dimension we're looking for. We have the pod; the pod name is one option, but if we look at service, it's really the Prometheus endpoint, so let's just split by the upstream instead. Now we have two values, the default Grafana one and the hipster shop. There's no major traffic yet, so let's just name this panel "response times".
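Putting the panels together, the queries look roughly like the PromQL below. The metric names are written as they appear for the NGINX Inc. ingress controller exporter around the time of the video (prefixed with nginx_ingress_), but treat them as assumptions and confirm the exact names in Grafana's metric browser or on the /metrics endpoint.

```promql
# Is NGINX up? (1 = up, 0 = down) - used for the stat panel with thresholds
nginx_ingress_nginx_up

# Connections per second (the accepted/handled series are counters, hence the rate over 30s)
rate(nginx_ingress_nginx_connections_accepted[30s])
rate(nginx_ingress_nginx_connections_handled[30s])
nginx_ingress_nginx_connections_waiting

# HTTP requests per second coming into the ingress controller
rate(nginx_ingress_nginx_http_requests_total[30s])

# Average upstream response time in ms, split by upstream (the only useful dimension here)
sum(rate(nginx_ingress_controller_upstream_server_response_latency_ms_sum[30s])) by (upstream)
  /
sum(rate(nginx_ingress_controller_upstream_server_response_latency_ms_count[30s])) by (upstream)
```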
Now we have those panels. What we can do, maybe, is generate some traffic, so I'm going to go to the hipster shop, do a few actions, and come back. Now we should have something here; let's look at the last 30 minutes, and we can see the response times coming in with the right data. So, as you can see, we have created a very basic dashboard. I could also add the work queues and a few things about the workers, but it won't help us; it won't bring the data we were looking for, especially since we wanted to split per namespace. So it's interesting, but it doesn't provide what we're looking for, and that's why we need to jump quickly to part two, where we see how to bring more insight to those dashboards by utilizing logs instead.

All right, that's it for part one of the NGINX ingress controller series. As you can see, we explained a couple of things: of course what an ingress controller is and the various implementations of ingress controllers available for Kubernetes, then we looked at the Prometheus exporter provided by NGINX and we built this dashboard in Grafana. It's interesting, it brings the level zero of understanding of what's going on in your ingress, but it's not enough; we need to extend it, and we will see in part two how to utilize your logs. All right, see you soon for part two.
Info
Channel: Is it Observable
Views: 127
Keywords: cloud, grafana, ingress, k8s, kubernetes, nginx, observability, prometheus
Id: Qz-nN7CGBII
Length: 38min 1sec (2281 seconds)
Published: Thu Dec 02 2021