Kubernetes monitoring with ELK stack | Demo

Captions
Hi everyone, this is Deekshith, welcome to my channel. In this video I will be talking about monitoring a Kubernetes cluster with the ELK stack. This is the second video in this series: in the first video I explained the tools I use in this setup and gave a brief overview of everything that appears in this demo. So let's get into the demo quickly.

My Kubernetes cluster is already running, so let's check whether any resources have been created: kubectl get all. I will be doing everything in the kube-system namespace, which already has the Kubernetes pods, services and DaemonSets running, and that's why they show up here. If you want to see which namespace you are currently working in, the command is kubectl config view; there you can see that the namespace is kube-system.

For this demo I have a handful of YAML files, basically one file per tool, each containing several Kubernetes objects, and I will walk through all the fields I've used in them. First I am going to create the Elasticsearch deployment, because I am using it as the centralized storage: when I deploy Filebeat, Metricbeat or Logstash onto the cluster, they will look for Elasticsearch, so it has to exist first. Then I will create Logstash, Filebeat and Metricbeat, and at the end Kibana.

This is the Elasticsearch YAML file. First I've created a ServiceAccount: the apiVersion is v1 and the kind is ServiceAccount. A service account is basically used for authentication and authorization. I am naming it elasticsearch-logging, and the namespace I apply it to is kube-system, because I am going to create all these Deployments and StatefulSets in kube-system; that is why I put the service account in that namespace, along with some common labels.

The next object is a ClusterRole. A cluster role is, in layman's terms, a rule: which resources you can access and which actions you can take on those resources. The kind is ClusterRole and the apiVersion I've used is rbac.authorization.k8s.io/v1, and I'm naming the cluster role elasticsearch-logging with the same labels. Then come the rules, where I specify which API groups can be accessed, and which resources and actions within those groups. Under apiGroups I'm giving double quotes (""), which selects the core API group. Every Kubernetes object belongs to some API group: a ServiceAccount is v1, the ClusterRole is rbac.authorization, Deployments and StatefulSets are apps/v1, a Service is v1, and an Ingress is extensions/v1beta1. Here I only want the core group, which is what the empty string means. Within that group I want to use the resources services, namespaces and endpoints, and the verbs tell which actions are allowed; in my case it is only get. If you want other actions, like list, you need to specify them too.
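As a rough sketch, the ServiceAccount and ClusterRole he is describing look something like this; the object names and namespace are as stated in the video, while the exact label keys are an assumption based on common ELK-on-Kubernetes examples:

```yaml
# ServiceAccount and ClusterRole for Elasticsearch, as described above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging   # assumed label key/value
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
rules:
- apiGroups: [""]          # "" selects the core API group (v1)
  resources: ["services", "namespaces", "endpoints"]
  verbs: ["get"]           # add "list" etc. if you need more actions
```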
So now I have a service account, which is used for authentication and authorization, and I have a cluster role, which says what can be accessed. I need to attach this role to my service account, and that's where the ClusterRoleBinding comes into the picture: a cluster role binding is a bridge between the service account and the cluster role; it assigns the cluster role to the service account. As you can see, the kind is ClusterRoleBinding and the apiVersion is the same rbac.authorization one. The namespace is kube-system, because that is where I'll be creating it, and I'm naming it elasticsearch-logging with the same labels. In the subjects section I tell what the role should be applied to: it can be a group of users, a single user, or a service account. In my case it is the service account elasticsearch-logging; the name I gave when creating the service account is the same name I use here, and I specify its kind, name and namespace. Then comes the roleRef, which says which role should be attached to this service account. Note that for the subject's apiGroup you can again give double quotes, since a service account belongs to the core group, whereas the roleRef spells out the rbac.authorization group explicitly.
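A minimal sketch of that ClusterRoleBinding, assuming the same names and labels as above:

```yaml
# Binds the elasticsearch-logging ClusterRole to the ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""                        # core API group
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: rbac.authorization.k8s.io
```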
Next is the Elasticsearch deployment itself, and here I'm using kind StatefulSet. A StatefulSet is similar to a Deployment: it can do rolling updates and it maintains the pods just like a Deployment does, but with one small difference: it gives each pod a sticky identity. When you create a pod through a Deployment, a random suffix is concatenated to the pod name, but when you create a StatefulSet the pods get an ordinal index starting at 0. I have two replicas here, so the pod names will be elasticsearch-logging-0 and elasticsearch-logging-1. If pod 0 is killed and restarted, it gets the same name again, whereas a Deployment would recreate the pod under a different name. That is the slight difference, and you will see it once we create this StatefulSet.

The metadata is very similar to the other Kubernetes configurations: the namespace is kube-system and I specify the labels. The serviceName is elasticsearch-logging. I need two replicas because this is the storage for both my Filebeat and my Metricbeat data. The update strategy is RollingUpdate, which is the default even if you don't mention it. Then comes the selector, which is very important: whatever you put under matchLabels must match your pod template labels. The pod definition starts from the template: metadata and labels, and then the spec, the specification for the pod. Again I give the serviceAccountName elasticsearch-logging, and then the containers section starts: I'm using the official Elasticsearch image, naming the container elasticsearch-logging, setting the resources (how much CPU may be used), and specifying two ports, one for accepting data and one for inter-node communication.

Next are the volume mounts. When you deploy something like this, what happens first is that the volumes are created, for example from a ConfigMap, and only then is the pod created. If you want whatever is in the volumes to be available inside your pod, you need to specify volumeMounts: volume mounts are the bridge between your volumes and your pod. When you specify one, the volume is mapped to a particular folder in the pod, so whatever is in the volume appears there, and whatever the pod writes to that folder goes back to the volume.

My Elasticsearch also expects an environment variable: its name is NAMESPACE and I'm taking its value from the pod metadata via the downward API, so here it resolves to kube-system. If you create this in some other namespace you don't need to change anything, since it is read from the metadata automatically.

Finally there is an init container. For Elasticsearch to run successfully I need the kernel parameter vm.max_map_count set to a particular value. If it is already set on your host you don't need this container, but just for confirmation I want that value set, because if it isn't, my Elasticsearch will fail. An init container basically runs before your application container, which in our case is Elasticsearch.
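Putting that together, a condensed sketch of the StatefulSet might look like the following. The image tag, resource figures, volume name and data path are assumptions; the structure (two replicas, downward-API NAMESPACE variable, sysctl init container) follows what he describes:

```yaml
# Condensed Elasticsearch StatefulSet sketch.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  updateStrategy:
    type: RollingUpdate            # the default, shown for clarity
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging   # must match matchLabels above
    spec:
      serviceAccountName: elasticsearch-logging
      initContainers:
      # Runs before the Elasticsearch container; without this sysctl
      # Elasticsearch refuses to start on many hosts.
      - name: max-map-count
        image: alpine:3.9                # assumed helper image
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-logging
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2  # assumed tag
        resources:
          limits:
            cpu: "1"
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200            # REST: data in and out
          name: db
        - containerPort: 9300            # inter-node transport
          name: transport
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace   # downward API
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data                    # assumed data path
      volumes:
      - name: elasticsearch-logging
        emptyDir: {}
```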
Next I am creating a Service to access this deployment from outside the cluster, so I'm giving it type NodePort. The apiVersion is v1 and the kind is Service; if I want to access a pod from outside my Kubernetes network, I need a NodePort. Under ports, the nodePort I've given is 31335, which means I'll be able to reach my Elasticsearch on that node port. The selector must match whatever I've defined on the pod; if it is different, the service will end up pointing at some other pod.

Let's go to my Kubernetes master and deploy it. I'll clear the screen; I have already copied these files onto the cluster, so I just need kubectl apply -f with the Elasticsearch file. It takes a little time to come up, so until then I'll pause the video. Now you can see my Elasticsearch stateful application is up and running, and as I mentioned, the pod names end in 0 and 1. If I restart pod 0 it will be 0 again, whereas if I restart a pod created by a Deployment, the concatenated random suffix will definitely change. So if your use case requires a sticky identifier, go for stateful applications. To verify the deployment, I open <node IP>:31335 and get back the JSON status page, which means Elasticsearch is working fine.

The next one to deploy is Logstash, which is basically a bridge between our beat (Filebeat, in our case) and Elasticsearch. It is a log forwarder: it takes logs from the input, applies modifications and filters, and forwards the formatted output to the destination. Let's look at the YAML file for Logstash; this one is also very simple. I am using a ConfigMap because I want to change the configuration inside my Logstash pod. The apiVersion is v1 and the kind is ConfigMap; in the metadata I'm giving the name logstash-config and creating it in the kube-system namespace. The data holds the configuration files I need to copy into the Logstash pod: in logstash.yml I set the HTTP host and the config path, and in logstash.conf I define on which port I accept data and which filters to apply. When input arrives, I choose which fields to select and how to modify them: as you can see, I take the message field, parse it as JSON, pick up the client hostname, and do some manipulation there. Then comes the destination for the output: elasticsearch-logging, which, if you remember, is the service name I defined for my Elasticsearch, and 9200 is the port I am using. That is about the configuration.
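A sketch of what such a ConfigMap could look like; the filter body is illustrative, since his exact manipulations (the JSON parsing and client-hostname handling) aren't fully legible in the video:

```yaml
# Logstash ConfigMap sketch: settings file plus pipeline definition.
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-system
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044            # Filebeat will ship here
      }
    }
    filter {
      json {
        source => "message"     # parse the message field as JSON
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch-logging:9200"]   # the ES service name
      }
    }
```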
There is official documentation for Logstash covering all the filters, so you can go through these configuration options in detail; I'll put the links in the description in case you want to make other changes.

Then I'm creating the Deployment: the apiVersion is apps/v1 and the kind is Deployment, with the usual metadata, and I need only one replica. Logstash is actually optional in my case, because Filebeat and Metricbeat already send data that Elasticsearch can accept directly; for this demo I'm just showing how we can push our logs through Logstash, modify them, and send them on to Elasticsearch. Then comes the template with its labels; make sure the matchLabels match your pod template labels, and as you can see they do. For the containers I'm using the official Logstash image and exposing port 5044, because that is where I'll be accepting the input. The volumes work exactly as I explained a few minutes ago: first the volumes are created from the ConfigMap, so those files are copied into the volumes, and then I reference them in the pod. When I specify the config volume under volumeMounts, the files in that volume are mounted into a particular folder in the container, and if I log into the container and change a file in that folder, the change is reflected in the volume as well. (Sorry, something went wrong on my system for a moment there.)

Next I'm creating a Service: the kind is Service and the apiVersion is v1, created in kube-system. I don't need to access Logstash externally, so I'm creating a ClusterIP service; if you don't specify a type it is ClusterIP by default, which can be accessed only from inside your Kubernetes cluster. The port and targetPort I'm specifying as 5044. Let's go to the Kubernetes master and deploy this one: kubectl apply -f with the Logstash file, wait a couple of seconds, kubectl get pods, and yes, the Logstash deployment is up and running.
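For reference, a condensed sketch of the Deployment and Service just deployed; the image tag, label key and mount paths are assumptions:

```yaml
# Logstash Deployment and ClusterIP Service sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.6.2   # assumed tag
        ports:
        - containerPort: 5044          # beats input
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: pipeline-volume
          mountPath: /usr/share/logstash/pipeline
      volumes:
      # ConfigMap files are copied into these volumes, then mounted above.
      - name: config-volume
        configMap:
          name: logstash-config
          items:
          - key: logstash.yml
            path: logstash.yml
      - name: pipeline-volume
        configMap:
          name: logstash-config
          items:
          - key: logstash.conf
            path: logstash.conf
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: kube-system
spec:
  type: ClusterIP        # the default if type is omitted
  ports:
  - port: 5044
    targetPort: 5044
  selector:
    app: logstash
```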
The next one is Filebeat. Filebeat is used for monitoring a particular log file or folder inside your machines, that is, inside your nodes, and that is why we are using it in this scenario. The ServiceAccount I have already explained, so I won't spend much time on it: I'm creating it in kube-system (as I mentioned, many service accounts can be attached to the same namespace). Again there is a ClusterRole, this time named filebeat, specifying which resources I want to use and which actions: get, watch and list. Then there is the ClusterRoleBinding to map this cluster role to the service account. I'm going through this quickly because I've already explained what a service account, a cluster role and a cluster role binding are; the names and resources differ from what I used for Elasticsearch, but the usage is the same.

The next object is the ConfigMap, which I've taken from the official documentation; let me switch to the browser and show you. This is the official page for running Filebeat on Kubernetes, and they provide a complete Kubernetes manifest containing all the objects. That is the file I copied onto my cluster and what I'm going to deploy, with one change. If you look at filebeat.yml, the default configuration file of Filebeat, the target they give is Elasticsearch, but I don't want that; I want Logstash in between, so I'm specifying Logstash as my destination and giving port 5044, the input port I defined for Logstash. This configuration file is also where we specify which folders and log files should be monitored: as the document shows, Filebeat monitors /var/lib/docker/containers, because that folder holds all the container logs, so by watching it we pick up everything; I've kept that and only changed the destination. Then there is one more config, kubernetes.yml, since this is used for Kubernetes monitoring: it specifies which container IDs to take into account. There is proper documentation for this as well; you can go to the modules section and check all the modules, how to configure Filebeat, and all these options.

That's about the ConfigMaps. Filebeat itself I'm creating as a DaemonSet, because every node in my cluster should run it. In my case I have only one node, but say you have n nodes: each of them should run this Filebeat pod, because it has to collect the logs from /var/lib/docker/containers on every node. A DaemonSet creates one pod per node: when you delete a node, its pod is deleted, and when you add a new node to the cluster, this pod is deployed on it automatically.
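Here is a hedged sketch of the relevant fragment of that ConfigMap, with the one change he mentions (output.logstash instead of output.elasticsearch); the input type, paths and processor follow the official example of that era, and the details vary by Filebeat version:

```yaml
# Filebeat ConfigMap fragment: watch container logs, ship to Logstash.
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/lib/docker/containers/*/*.log   # where Docker keeps container logs
      processors:
        - add_kubernetes_metadata:             # enrich events with pod metadata
            host: ${NODE_NAME}
    # The official manifest ships output.elasticsearch here; changed to:
    output.logstash:
      hosts: ["logstash:5044"]
```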
The rest is again similar to a Deployment: you specify the labels, and then comes the pod section. In the pod spec I'm using the service account filebeat, a termination grace period of 30 seconds, and a container named filebeat running the official image. Then I'm giving the arguments the pod starts with. You would normally also supply a few environment variables pointing at Elasticsearch, but in my case I can take those out, since my destination is Logstash, not Elasticsearch. Then come the security context, the resources (CPU limits and requests), and the volume mounts: I'm creating volumes from the ConfigMaps I just made and referencing them inside the pod with volumeMounts. Let's go ahead and create it: kubectl apply -f with the Filebeat file, give it a little time, kubectl get pods, and you can see Filebeat is running.

The next one is Metricbeat. Filebeat was for getting the container logs by monitoring a particular file or folder; Metricbeat, again a tool from Elastic, is used for collecting the metrics of your system and of the services running on it. For this one too I've used the YAML from the official documentation. As you can see there, Metricbeat is deployed in two different ways, so I need two instances of it: one as a DaemonSet and one as a Deployment. The DaemonSet is for all the host-level information: system metrics, Docker stats, metrics from every server, so I need a pod fitted on every node, which is exactly what a DaemonSet gives me. And I need one more as a Deployment, because I also have to retrieve the metrics that are unique to the whole cluster; that's why Metricbeat is created as a Deployment as well as a DaemonSet. If you go to the URL in the docs you'll find the YAML files. I have used them almost as-is, because in my case Metricbeat sends directly to Elasticsearch, so I just copy-pasted everything. The structure is similar to the Filebeat setup: service account, cluster role, cluster role binding, the ConfigMaps, and the same environment variables. And as you can see, there is something called modules in Metricbeat, where we specify which metrics to collect: I want CPU, load, memory, network and so on, so those are the configurations I'm creating.
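A sketch of what the system-module fragment of the Metricbeat ConfigMap could look like; the period and the exact metricset list beyond the CPU/load/memory/network ones he names are assumptions based on the official example:

```yaml
# Metricbeat DaemonSet modules ConfigMap fragment: the system module.
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
```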
If you want to look at these things too, as I said, they have very nice documentation: go to the modules section, check the system module, and you'll see everything there, each option and how to use it. Then I'm creating the DaemonSet, because I want this pod on all the nodes I'm going to have, and I'm also creating the same thing as a Deployment, because I need to collect the data that is unique to the whole cluster, which is basically the kube-state-metrics kind of thing. So Metricbeat gets both a Deployment and a DaemonSet. In the pod spec I'm using the official Metricbeat image, then the arguments the container starts with, and the environment variables needed by the configuration files. By default Metricbeat loads metricbeat.yml as its configuration, which says where my destination is and where to get the information from, and in system.yml I specify what should be collected.

All these files are already pushed to GitHub; I'll leave a link, so if you want to deploy this on your own Kubernetes cluster you can clone the repo and kubectl apply it on your systems. Let me do kubectl apply -f with the Metricbeat file; it has been created. kubectl get pods, and as you can see, one Deployment and one DaemonSet have come up: the DaemonSet on each node to gather the system metrics, and the Deployment just for the cluster-wide metrics. Note that the configuration file for the Deployment and the configuration for the DaemonSet are different; you need to keep that in mind.

So now we have Logstash, the bridge between our beats and Elasticsearch; Elasticsearch to store the data; and Metricbeat and Filebeat to take the data from the Kubernetes cluster and send it to Elasticsearch or Logstash. The next thing, now that we have the data, is to visualize it, and for that I'm using Kibana, the visualization tool in the ELK stack. For it I'm creating a Deployment and then a Service to access it from outside. I haven't configured Ingress because I wanted to keep this very simple, so I'm exposing it as a NodePort service and accessing it via the node port. Let me walk you through the deployment file: the apiVersion is apps/v1 and the kind is Deployment; I'm giving the namespace it should be deployed in and one replica, and then the selector, which as always must match the pod labels. In the containers part of the spec I'm using the official Kibana image and specifying the resources; this one is very simple. For the environment variables I need to tell Kibana what my source is, basically the Elasticsearch URL: I'm giving http://elasticsearch-logging:9200, using the service name I configured for Elasticsearch, and Kibana itself listens on port 5601. Then, to reach this pod from the outside world, I'm creating a Service: the apiVersion is v1 and the kind is Service, with these labels and the kube-system namespace. In the spec the type is NodePort, because I want access from outside, the nodePort I've given is 31336, and the selector again matches the pod labels.
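A condensed sketch of the Kibana Deployment and NodePort Service; the image tag and label key are assumptions, while the ELASTICSEARCH_URL, port 5601 and nodePort 31336 follow the video:

```yaml
# Kibana Deployment plus NodePort Service sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.6.2   # assumed tag
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch-logging:9200     # the ES service name
        ports:
        - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 5601
    targetPort: 5601
    nodePort: 31336      # reachable at <node IP>:31336
  selector:
    app: kibana
```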
Let's go ahead and create this one as well: kubectl apply -f with the Kibana file. When I do kubectl get pods I can see my Kibana pod is also up and running. I want to access it, so kubectl get svc to check whether the service has been created: yes, the kibana-logging service was created 17 seconds ago. To reach it I take my node's IP and then the port, which is 31336. When I open that, you see the Kibana homepage; it takes a couple of seconds to load. On this default page, what you need to do is click on Index Patterns and give an index name: in my case I'm giving logstash-*, which is my index pattern, then click Next. For the time filter field I'm selecting @timestamp, then click Create Index Pattern. Once this is done successfully you'll see the fields, and then you can go to Discover. If everything worked, the logs show up within a couple of seconds: now I have all the Kubernetes-related logs, from my containers and my systems.

If you want to filter on a pod, say I want the logs related to one particular pod, let me take Filebeat itself: just type the pod name in the search field and click search (sorry, I had typed it in the wrong field there; do it like this). It takes a couple of seconds to filter, and then you can see the data: the logs shown are all related to Filebeat. That is the point of this ELK stack: we store our logs in one centralized system, so even if the pod goes away, the logs are still available here. You can play with the filters, add a filter, and change the time range: if you want the logs of the whole day you select that, but I just wanted the last 15 minutes, so that's what I picked.

One more thing: if you keep all your logs forever, they consume a lot of storage, so some job should be scheduled to delete the old ones. What I did was use one more tool, Curator, which Elastic provides, and which deletes logs older than a particular time period. I'm creating a CronJob for it: this is similar to a normal cron job in that it runs on a schedule, in my case once every week. What it runs is a command that does pip install elasticsearch-curator and then deletes my old indices with the curator command. Storing logs indefinitely eats a lot of space, so it is always recommended, if you only need the logs for the last week or the last month, to keep only that.
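A minimal sketch of the Curator CronJob idea he describes, assuming a weekly schedule and a seven-day retention; the image and the exact curator_cli invocation are illustrative, not taken from the video:

```yaml
# Weekly CronJob: install elasticsearch-curator and delete old indices.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: curator
  namespace: kube-system
spec:
  schedule: "0 0 * * 0"         # once a week, Sunday midnight (assumed)
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: curator
            image: python:3.8-slim    # assumed base image
            command:
            - /bin/sh
            - -c
            # Delete logstash-* indices older than 7 days.
            - >
              pip install elasticsearch-curator &&
              curator_cli --host elasticsearch-logging --port 9200
              delete_indices --filter_list
              '[{"filtertype":"pattern","kind":"prefix","value":"logstash-"},
              {"filtertype":"age","source":"name","direction":"older",
              "timestring":"%Y.%m.%d","unit":"days","unit_count":7}]'
```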
That way you delete the older logs. I just go to my Kubernetes master and create this one too: kubectl apply -f with the Curator file, and it will be cleaning up my logs once a week. So that is about the ELK stack. If you liked the video, please share and subscribe. Thank you, have a good day.
Info
Channel: Deekshith SN
Views: 13,905
Keywords: Devops, ELK
Id: b4wOV6vlqPU
Length: 40min 29sec (2429 seconds)
Published: Wed Apr 22 2020