Watch-as-I-code: Elastic stack on-premise setup

Captions
I think we have a bit for everyone, so welcome to the session. The purpose of me doing this is that I worked with Elastic from about the beginning of 2018 — late 2017, actually — and at that time we were using Elasticsearch 6.4. I left that job about a month back, and since then I've talked to at least five or six different companies; most of them are very interested in what Elastic can do, and they're really surprised by the kind of capabilities it has. So I wanted to do this session to go through the thought process I follow when I set up an Elastic Stack. Elastic has some products which are very good value for money: you can set them up very quickly and easily, and they provide very good capabilities right out of the box without having to invest a lot of time. I'll be talking about those, and I'll also show a bit of Logstash and a bit of APM, because unless you've actually used them, they're a bit tricky to get your head around. I've set most things up already, but I'll also be setting some of them up as we go. This is a live setup session, so most probably there are going to be some mistakes — but that's what a real setup is like. I'll try to share as much of my own thought process as possible while I do these things.

So the first question is: why should we use the Elastic Stack when others are available as well? Elastic has a very powerful stack, and it's used for system observability. Observability is a newer concept where you can actually see what is going on inside your system — whether that's monitoring, structured logs, alerting, or tracing (distributed tracing). Elastic gives you capabilities for all of that, and Elastic has a Basic license which does not cost any money: you can set it up on your own machines, and those capabilities are included in the Basic license. So you don't need to spend a lot of money — or any money at all — to set it up and use it to monitor your own infrastructure and your own applications. Elastic is also used for geospatial analysis — it has very good capabilities for maps — and you can analyze the performance of your applications: it has distributed tracing, which you can enable to see what is happening inside a Java application or a Python application. I'll give an example of that.

So, what we will be building: right now I have a Spring Boot application running on an Ubuntu VM, and it's listening on port 1990.
I have an HAProxy in front of it, and this HAProxy writes to a log file; from the browser I will call the proxy, and it will talk to the Spring Boot application. On the Elastic side we will have three Elasticsearch nodes — let me just fix this — plus Kibana, Logstash, and APM. So there are five VMs, and this is the way it's configured. From your point of view it really doesn't matter where you put Kibana, Logstash, or the APM server, but it is very important that you put the Elasticsearch instances on separate machines — or on separate containers if you're doing Kubernetes or Docker Swarm — because this way you get some reliability: if one of the nodes fails, the others can take over. Elasticsearch uses something like a modified version of the Raft protocol: if one of the nodes goes down, the other two nodes vote among themselves, elect a new master, and carry on the cluster coordination between them. This master takeover is relatively easy in Elasticsearch; it's there in a lot of other NoSQL solutions as well, but from what I've seen it's not available out of the box in most RDBMS solutions like MySQL and Postgres — there you need to work around it. Elasticsearch has it out of the box, and for it you need an odd number of master-eligible machines. These Elasticsearch nodes each have the data role, the master role, and all the other capabilities that come out of the box. What people typically do is set up an Elasticsearch cluster so that every node has all the capabilities. Unless you know very well what you're doing — in which case you have separate master nodes and separate data nodes — if you're just starting out, doing this for the first time, I strongly recommend from my side that you go with the default configuration, where each node is both a master node and a data node. Eventually, as your cluster grows bigger, your needs grow bigger, and you're more familiar with clustering operations, you can create dedicated master and dedicated data nodes.

If you want a rule of thumb: this is being set up on-prem, but Elastic has a cloud-hosted model as well, and you can sign up without a credit card and create a cluster. About ten minutes ago I set one up there, and once it's created you can go into the cluster and click Edit; it shows you the configuration it has used by default. If you look here, it shows there is a data node, and under dedicated masters it says there are none — they get added automatically once there are at least six Elasticsearch nodes. This is running on AWS, and I can't add six nodes yet: I'd have to take the RAM per node all the way up to 58 GB before I could add three nodes per zone. There are two zones — an availability zone is a data center for AWS — so two zones with three nodes each means six nodes, and only then does it say you can create a dedicated master. So the rule of thumb is: unless you really know what you're doing, don't add dedicated masters at the beginning of your cluster design.
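If you're curious which node is the elected master on a self-managed cluster like this one, the cat nodes API will show you; a quick check might look roughly like this (node names and output here are illustrative placeholders):

    # list the nodes, their roles, and the elected master (marked with *)
    curl 'http://localhost:9200/_cat/nodes?v&h=name,node.role,master'
    # name    node.role   master
    # node-1  dilmrt      -
    # node-2  dilmrt      *
    # node-3  dilmrt      -

The node.role letters include m (master-eligible), d (data), and i (ingest); on a default setup every node carries all of them, which is exactly the all-roles configuration recommended above.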
I ran a cluster that was doing something like 1.5 million ingest events per day; we had three nodes, later increased to five, and we didn't have any issues with that. OK, I see a comment — someone new has joined. For the people who just joined: the reason for doing this session is that there are a lot of people coming in and trying to use Elastic who really don't know where to begin, so my job is to help people find some footing — where do you actually start? That's the basic idea.

So we have three Elasticsearch nodes here, which are master and data both; on another Ubuntu VM we have a Kibana node, and on another Ubuntu VM we have a Logstash and APM server node. What these are, I'll explain as the session goes on. Now, how does data get from here to there? Elastic has these products called Beats. Beats are small, lightweight daemons which run in the background. We have something called Filebeat, which we install over here, and it will read the HAProxy log: the HAProxy server is writing to disk, and the Filebeat service will read from that disk line by line and send it on to Elasticsearch. We will also have Metricbeat, Packetbeat, and Auditbeat — three other separate products — and these read the VM itself, so you don't point them at a file; they talk to the operating system. Metricbeat does essentially what top does: if you want to look at the state of your CPU, your memory, and your disk usage, Metricbeat will record that every few seconds and send it to Elasticsearch. Packetbeat reads your network interface, and Auditbeat looks at the audit logs of your system — who logged in, who failed to log in, who used a wrong password, that kind of information. So if you need to comply with some audit requirement for your environment — for example, you work for a bank or a telco and you want a system that implements a certain level of auditing capability — you can install Auditbeat on all the machines that need to be audited, send their events to Elasticsearch, and view them in Kibana. I'll show this for one VM, and you can essentially replicate it across multiple VMs.

Lastly, just to show who talks to whom: Filebeat, which reads from HAProxy, goes to Logstash and then to Elasticsearch, while Metricbeat, Packetbeat, and Auditbeat send directly to Elasticsearch; and then we have something called an APM agent, which sends to an APM server, which in turn goes to Elasticsearch. What's essentially happening is that the data Metricbeat, Packetbeat, and Auditbeat read from the VM doesn't need to be transformed, so it can go straight to Elasticsearch. If you do want to transform data in some way, there is a capability on the Elasticsearch node itself — it's called an ingest node, or ingest capability — where you can run a pipeline over incoming documents.
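As a rough illustration of that ingest capability — a minimal sketch, assuming a hypothetical field called "bytes" that arrives as a string — an ingest pipeline with a convert processor can be created like this:

    curl -X PUT 'http://localhost:9200/_ingest/pipeline/strings-to-ints' \
      -H 'Content-Type: application/json' -d '
    {
      "description": "convert a string field to an integer (sketch)",
      "processors": [
        { "convert": { "field": "bytes", "type": "integer" } }
      ]
    }'

Indexing requests can then reference this pipeline by name, and the conversion happens on the Elasticsearch node itself before the document is stored.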
But there's also a separate product called Logstash: you can send it the data that Filebeat has read and modify it there. One good thing about Logstash is that it can persist the data: if for some reason your Elasticsearch goes down, or there's some other issue on that side, Logstash keeps queueing the data. By default it queues in memory, but you can also persist to disk and set the size of the disk queue to a few GBs or however much you want, and it will write to disk. Be aware, though, that when it reaches that limit, it rolls over and starts overwriting the oldest data — that is something you need to be careful about — but Logstash has that capability.

Another thing: like I said, on Elasticsearch there's the ingest node capability — we're not going to use it here — and what it can do is run a language called Painless. It's a pretty powerful language, and you can modify your fields with it: for example, you have a string field and you want an integer field, you can do that there. You can do the same thing in Logstash, but there are differences between them. For one thing, when you do these ingest operations on Elasticsearch, that puts load on the node itself; if the node is doing ingest processing as well as indexing the data, you don't want it to be overloaded, so that needs to be evaluated while you're making the decision. With Logstash you can do the same kinds of operations — including things like turning a string into an integer — but the good thing about Logstash is that it can talk to systems outside Elastic as well. If you want to talk to a MySQL database and pull some data from it to enrich the data you're receiving from Filebeat, you can do that in Logstash; in the ingest node you cannot. So with Logstash you can talk to systems other than Elasticsearch, and that's a very powerful and interesting capability to have. The other good thing about Logstash is that you can actually write Ruby in it, so if you know Ruby — or any higher-level language — it's very easy to write quite complex logic, run for loops, all those kinds of things.

The last piece is the APM server. What the APM agent does is called distributed tracing. Let me show you what that is: in Elastic you have something called APM — Application Performance Management; if you've used something like Jaeger or Zipkin, they do something similar. For every transaction — for example, there is an HTTP call for orders, and inside it you have a GET call, a GET /api/orders call, and so on — you get this very nice stack of the services that are being called and how much time each of them takes. The whole request, in APM or tracing terminology, is called a trace; so this is a trace, and inside it there's a span, and another span, and another span. You know how much the whole trace takes and how much each of these spans takes, and that's a very good way to figure out which part of the application is actually slow.
For example, you had an orders request, and the HTTP GET took only two milliseconds, but the MySQL query took 50 milliseconds; you would know that MySQL is the slow part, so I need to look at that — maybe there is no index there, or some other issue. I've actually used this in one scenario where our PostgreSQL was taking a very long time, and we figured it out using this distributed tracing, this APM agent. The APM agent here is not run as a daemon; there are different ways of running it, and the way we will run it is to attach it directly to the Spring Boot jar file and use that to send the APM data.

So I'll just go on and start the installation process. I'm running this on AWS: I have three Elasticsearch nodes, a Kibana node, and an application server with APM and Logstash on one machine. The reason I call this installation "unsecured" is that what I'm going to show you is an unsecured installation, which means there is no username and password on it. If you set this up on your own system, make sure it's behind a firewall, inside a DMZ, not visible to the internet — right now mine is, but it's a demo, which is why I have it that way. There is a way to make it secure as well, but that's quite an involved process and a whole other session on top of this one.

So, this installation is unsecured. What I've done already is install Elasticsearch on node one and node two, I think, and I need to install it on the third one; the process is absolutely identical on all three, and as I go along I'll show you how I do it. Let me just confirm — I'll look at the IP address. This is the public IP address of my machine, and this is the Ubuntu machine I'm running on; I want to make sure whether it's running Elasticsearch or not. With Elasticsearch, unlike other databases — Postgres has psql, MySQL has the mysql command-line interface, or you use a JDBC driver or something like that — everything is on an HTTP REST interface: if you want to read or write something, you use a GET or a POST for those kinds of commands, and it runs on port 9200. So I'm just checking whether it's already installed here: if it is installed and running, it gives you back a JSON document showing the name of the node, the name of the cluster, and the version number — 7.9.1. This is how you confirm that Elasticsearch is actually running.
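That check, and the shape of the response you get back, look roughly like this (node and cluster names are whatever you configured; the response is trimmed):

    curl http://localhost:9200
    {
      "name" : "node-1",
      "cluster_name" : "my-cluster",
      "version" : {
        "number" : "7.9.1",
        ...
      },
      "tagline" : "You Know, for Search"
    }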
If you want to locate the logs, you can go and look at them — I think I need sudo for this — yes, you can see the logs and what is actually happening inside. But this is not the machine that needs the install; that confirms the second one — and it's also on this one — so it's the third one that still needs installation. I'll just SSH over there and make sure.

So, there are some steps for installing Elasticsearch. What I typically do, if I'm doing this on an Ubuntu machine, is search for how to install Elasticsearch with apt. Different products — Prometheus, Elastic, any product — have a certain release cycle, and Elastic releases very frequently: the 7.8 release came out about eight weeks ago, and 7.9 came out about two weeks ago. So every six to eight weeks there's a new minor release, and each minor release itself is almost like a new product — so many new, very nice, very interesting features. Some organizations don't update their apt repositories that frequently, but Elastic does, so whenever you want to install, you can install from the apt repository. So I actually went to Google for "installing Elasticsearch on apt using Ubuntu" — I mean to install the Elasticsearch Debian package — and Elastic's own documentation is really good, so you don't need to go anywhere else; I just have some shortcuts of my own written down that I use, since I end up doing this so often. There's a command you run that takes the Elasticsearch Debian repository information and adds it to a file — if you want, I can go over here and show it — and basically that tells apt that Elastic packages come from here. Then you do an apt-get update, which fetches the latest package information for the Elastic repositories, and then you do an apt install. The reason I'm doing it this particular way is that on my other nodes I installed Elasticsearch 7.9.1, but I think yesterday or the day before, 7.9.2 came out. When you're starting out with an Elastic cluster, it's a very good idea to make sure that even the minor and patch version numbers are the same. Typically Elastic is very compatible, very flexible, but it's always a good idea to keep the versions identical so you don't run into strange issues. So you can specify the package version: I'm installing Elasticsearch 7.9.1, not 7.9.2, which is the latest; if I had just run apt-get install elasticsearch, it would have installed 7.9.2.
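Collected in one place, the steps follow Elastic's documented 7.x apt-repository setup; pinning the version on install is the only extra twist:

    # import Elastic's package signing key
    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

    # apt needs the https transport for this repository
    sudo apt-get install apt-transport-https

    # add the 7.x repository definition under /etc/apt/sources.list.d
    echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" \
      | sudo tee /etc/apt/sources.list.d/elastic-7.x.list

    # refresh package lists, then install a pinned version to match the other nodes
    sudo apt-get update
    sudo apt-get install elasticsearch=7.9.1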
What this does is put the Elasticsearch executables under /usr/share/elasticsearch/bin: you have the elasticsearch executable itself, and you have all these other executables as well — certificate generation, the keystore tool, and so on. Those are used for generating certificates and setting up passwords for a secure cluster; we're not going to do that here, but this is where everything gets installed. And typically, for any Elastic Stack product, the configuration lives under /etc/ plus the product name — for example, you have /etc/filebeat for Filebeat — so I'll just sudo, and here we have /etc/elasticsearch. The configuration file for Elasticsearch is elasticsearch.yml, and you can see these settings in it. What I'll do is take the settings I have on my other nodes and put them in here one by one, rather than copying the file, so I can walk through them.

First, the cluster name. I already have two nodes in this cluster, and I'm going to add a third; to make sure it joins the same cluster, the cluster name must be the same. If the cluster name matches and the nodes can talk to each other, they join the same cluster. This is also very helpful because — and this situation actually happened to me — after running a three-node cluster for about a year, we got a new customer, a lot of data started coming in, and things started to get slow. We just added a new node with the same cluster name, it joined the cluster, it was very painless, and things started improving. You can also give the node a proper name: call it node-1, node-2, whatever. What I typically do is use whatever the hostname is, and then I make sure /etc/hostname is set to the name that should show up — that's a personal best practice, but you can use any string here.

Then there are the paths to the data. The information Elasticsearch stores — the shards, the replicas, and the other cluster information — goes into /var/lib/elasticsearch by default, so make sure that wherever this directory points, it has enough space. A good practice is to create a separate disk with a lot of space and attach it to your VM; that way you have more room, and it isn't on the root disk itself. The logs are created under the logs path. And the setting bootstrap.memory_lock is actually very important: Elasticsearch uses a lot of JVM memory, and it wants to be sure that the memory available to it doesn't fluctuate, so you need to lock the memory — freeze it. The recommendation from Elastic is: whatever the memory of the VM you're using, give Elasticsearch half of it, but not more than 32 GB. That's certainly what I've read and what you should use, so I always make sure
that bootstrap.memory_lock is set to true. A few days ago I was at another session — it was a microfinance bank — and they were having problems with their memory: performance started going really, really bad, and they just couldn't figure it out. If you don't set this setting, things will be fine for a few days, but after a few weeks or months you'll run into serious problems, so make sure the memory setting is done. I'm trying to set up a production-grade cluster here, in terms of memory and performance, as much as possible; if you do these things, they should be good enough as far as cluster performance is concerned, and you can optimize afterwards.

Next, for the network address I've put _eth0_ and _local_. What this does: if I run ip address, I have two network interfaces — the loopback interface and eth0, which has the private IP address. With _eth0_, whatever IP address is mapped to that interface is what Elasticsearch binds to, and _local_ binds it to the loopback address as well. So you can curl the private address on port 9200 and you can also curl localhost; if you want both of those to work, put both values in network.host — it's an array, so you can put more than one.

The third setting is discovery.seed_hosts. This says which nodes are available for cluster discovery, and I'm going to add all three of them here — the private IP addresses of the three nodes: the one I'm actually on, and the other two.

Then a question from the chat: OpenTracing — is this cloud native? No, this is not cloud native, actually. This uses the Elastic Stack's own APM tracing, which is more or less compliant with OpenTracing for most things. As for cloud-native tooling — say, an application gateway or a firewall — I'm pretty sure a firewall doesn't support OpenTracing, but if a given tool does support it, it can be part of the OpenTracing flow. Here, though, nothing is cloud native; everything is being done on-prem.

Back to the settings: discovery.seed_hosts is the initial list of hosts this node uses to perform discovery when it starts up — it needs to be able to reach at least one master-eligible node in order to join the cluster — and these are the private IP addresses of the nodes. We don't need to do any other settings; for this file, that's pretty much all you need.
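Put together, the elasticsearch.yml for this node ends up looking roughly like this — the cluster name, node name, and addresses are placeholders for whatever your environment uses:

    # /etc/elasticsearch/elasticsearch.yml -- minimal three-node setup (sketch)
    cluster.name: my-cluster            # must match on all three nodes
    node.name: node-3                   # I use the hostname here
    path.data: /var/lib/elasticsearch   # make sure this disk has enough space
    path.logs: /var/log/elasticsearch
    bootstrap.memory_lock: true         # lock the JVM heap in RAM
    network.host: [_eth0_, _local_]     # bind to the private interface and loopback
    discovery.seed_hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # placeholder private IPs

On the very first node of a brand-new 7.x cluster you would also set cluster.initial_master_nodes for bootstrapping; here the node is joining an existing cluster, so seed_hosts is enough.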
There are a couple of other settings as well — some JVM settings you need to do to make sure performance is good. We'll give a certain amount of memory to the Elasticsearch JVM. To figure out how much, look at what you have: this particular VM has about 7.8 GB — call it 8 GB — of memory, with about 6 GB free right now, so I can give it 3 GB, and that shouldn't be a problem. It's slightly less than half, and it shouldn't be more than half; so I'll give 3 GB to this node. I'm not plugging my own blog, but I actually wrote these settings down for quick reference because I ended up doing this so often; you can find them at this URL, and they're also in the Elasticsearch documentation, just a bit scattered, so it's a bit tricky to go there and find them — but it's not that much.

What you do is go to /etc/elasticsearch/jvm.options — this is the node we're doing this on — and change the memory: set Xms and Xmx to 3 GB, the amount of heap the JVM should have. Then you go to the security limits file and make sure the memlock limit is set to unlimited — this is for Ubuntu; for a CentOS or other Red Hat based system it could be slightly different. Then there's a systemd override: the directory doesn't exist yet, so we create it, create a new file called override.conf in it, and add the limit line there. Those are the three additional settings. Then you do a daemon-reload, and then you start Elasticsearch; hopefully it will join the cluster with the other two nodes. This takes maybe a minute, a minute and a half, so let's see what happens — fingers crossed. I'll just look at the logs and see whether the service started; you can hit localhost. It started — but we want to see whether it's part of the cluster, and whether all three nodes have the memory settings done correctly. To do that you can run this command. Elasticsearch has different APIs — there's a nodes API, there's an index API — and the nodes API tells you the condition of each node. I query it and check a particular attribute in the returned data, and it shows me that there are three nodes, and for all three of them mlockall is true, which means the memory has been locked. If you want the complete output, it comes out unformatted, which is just useless to read; you can always put ?pretty at the end, and it gives you nicely formatted JSON — that's the same output we just saw, filtered down to that particular attribute.

So we now have three Elasticsearch nodes. If you want to do this on the other two nodes, you just repeat the exact same process; there's no difference. In fact, if you look at this file, there is nothing in it specific to this particular node, so you can just copy the file over with SFTP and restart — that's equally convenient.
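To recap the memory-related steps as one sketch — the heap size is whatever fits your VM, and the paths are the Debian-package defaults:

    # /etc/elasticsearch/jvm.options -- give the JVM a fixed 3 GB heap
    -Xms3g
    -Xmx3g

    # /etc/security/limits.conf -- let the elasticsearch user lock memory
    elasticsearch - memlock unlimited

    # systemd drop-in: create the directory and the override file
    sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
    # /etc/systemd/system/elasticsearch.service.d/override.conf
    [Service]
    LimitMEMLOCK=infinity

    # reload systemd and start the service
    sudo systemctl daemon-reload
    sudo systemctl start elasticsearch

    # verify every node locked its memory (mlockall should be true on each)
    curl 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'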
So that's Elasticsearch installed here. The next thing we're going to do is install Metricbeat — and Filebeat, Auditbeat, and Packetbeat — and I'm going to do this on the machine where the application server is running, because I want this machine to send its own metric information. I could do this on any machine, that's absolutely fine, but I'm going to use this particular one; ideally you should be doing this on every machine of yours, because you want to see what is happening inside the system — it's very important to keep an eye on that. So I'll SSH over. To install Metricbeat, you go through the same steps as last time: install the signing key, set the repository definition, do an apt-get update, and apt-get install metricbeat. I've actually done all these steps before, so I won't do them all again, but this one I'll do again just to show you. So you install it — and it's already installed. The process is very similar to setting up Elasticsearch: you run those four commands, plus a fifth one, because you want the service to restart whenever your VM is rebooted, so you set it to auto-start.

Just as Elasticsearch had /etc/elasticsearch, here we have /etc/metricbeat, with a settings file, metricbeat.yml. With these settings you really don't need to do much; you can mostly leave them as they are. Metricbeat has modules as well — I won't go into those now, but they're specific to different services. If you want, you can give tags to the data that comes out: for example, the machines Metricbeat is installed on might be production machines or staging machines, and that kind of information can travel with the events through these attributes. And then you have dashboards. In the file there's a setup.kibana section, an output.elasticsearch section, and an output.logstash section, and this is important to understand. We already have Kibana running on a particular port, and I'm telling Metricbeat that the Kibana instance is running on this host — this is used to load the Kibana dashboards. Kibana is the visualization layer for Elasticsearch; it's used to display the information received from different sources. I'll show it properly later, but basically you can go and see, for example, the logs received throughout the day, and you have these things called dashboards. Dashboards are pre-built visualizations you don't need to build yourself — Elastic has built them for you: there are Filebeat dashboards, network/Packetbeat dashboards, and Metricbeat's own dashboards. You don't strictly need them, but if you want, Metricbeat can install them for you. The downside is that you get a lot of dashboards you won't be using and may want to delete afterwards; the good thing is that they're really good dashboards, and they're just there. To load them, make sure setup.kibana is set — by default it points to localhost, so you go in and put in your
own Kibana hostname, and then you run metricbeat setup. What this does is talk to the Kibana instance you configured and install those dashboards inside Kibana. What's actually happening is that Kibana uses the Elasticsearch cluster running behind it: the definitions for those dashboards get stored there, and Kibana reads those dashboard definitions and displays them. Kibana itself does not run its own database and has no storage of its own; it talks to the Elasticsearch cluster and stores all of its state there. This takes, I think, half a minute or so — it's a somewhat time-consuming process. Once it's done, the dashboards show up over here: we can look for Metricbeat — we don't have Kubernetes, so let's open the Metricbeat system ECS overview. ECS, by the way, is the Elastic Common Schema — Elastic's own standard for how data is named and structured. We can see different visualizations for the metrics data. I just started this, so let me change the time filter to show the last 15 minutes — and I think Metricbeat is not running yet, which is why nothing shows up — but these are the dashboards that get set up when you run this.

So that process is completed. Now we can go back and tell Metricbeat where Elasticsearch is running, so it can send data there. You do that by setting an array of Elasticsearch hosts. What we've done is say: we have three Elasticsearch nodes, which are all data nodes, and you can send to any one of them. Metricbeat talks to one of them and sends the data there, and — I've not tested it, but I'm pretty sure — if that one is not available, it talks to another one and sends there instead. So you give it the array of Elasticsearch hosts. For me, typically, what I used to do was send directly to Elasticsearch; I would not use Logstash until I'd evaluated the need for it. So: the private IP addresses of the three nodes, one of which we just set up. Clients like Metricbeat talk to Elasticsearch on port 9200; the nodes talk to each other on port 9300. Now we'll just start the service and see whether it's working.
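For reference, the pieces of /etc/metricbeat/metricbeat.yml we just touched, plus the commands, as a sketch — the hosts are placeholders for the Kibana VM and the three Elasticsearch nodes:

    # /etc/metricbeat/metricbeat.yml (sketch)
    setup.kibana:
      host: "10.0.0.4:5601"           # the Kibana VM, used only to load dashboards

    output.elasticsearch:
      hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]

    # load the pre-built dashboards into Kibana, then start shipping metrics
    sudo metricbeat setup --dashboards
    sudo systemctl enable metricbeat
    sudo systemctl start metricbeat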
There's an error over here — I don't know what it is, but let's see. I think it's trying to talk to nginx and can't find it; I did some configuration for nginx as well, and I think I changed it somehow — but it should still send the metrics. Let's go to the dashboard and see what's happening. You can see the data flowing in: it shows the amount of disk that has been used, it shows you traffic metrics, and it shows top hosts by CPU — I have only one host, showing about 0.5% usage. It's green, and if usage goes up, the color will change, so it's a nice visual way of seeing what's happening. You can also click through to a host overview and get all this information — it's like a very nice-looking top, and it shows network information as well. And if you have containers, it shows what's happening inside the containers too; I'm not sure how these containers are showing up here — maybe I have containerd running there — but that's a different way of setting things up. I'm mainly installing and testing this on VMs and terminals here, because once you go to Docker containers, that's a different game altogether. But what this does is send the system's metric information to the Elastic Stack.

If you want to see how frequently this data is collected, you can go to the modules.d directory, where all the modules live. You can enable them, and Metricbeat will start using them: a module is disabled when its filename ends in .disabled, and if you remove that suffix, it becomes enabled. The system module is always enabled by default, and if you look inside it, you can see that it reads every 10 seconds, and these are the metricsets it sends out — CPU, load, memory, network, process, process summary — along with filesystem and fsstat information. So you can also check whether your disk is filling up, and if it crosses 80 percent, you can generate an alarm for that; that's possible as well. As for the alerting capability: it varies which kinds of alerts you can send — some are in the paid version, some are in the free version — but for either of them you need to enable X-Pack security, so I won't be showing alarms. Right now you can see this information is being read every 10 seconds — the filesystem information every minute — and sent to Elasticsearch, and from Elasticsearch, Kibana displays it.
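For reference, the enabled system module — modules.d/system.yml — looks roughly like this by default (a sketch from memory; check the shipped file for the exact defaults):

    # /etc/metricbeat/modules.d/system.yml (sketch of the default)
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat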
So that was Metricbeat. Before the next Beat, there's one more thing I want to show you. The dashboards are one thing, but something you end up using very frequently is the Discover tab: you have Dashboard over here and Discover over here, and what Discover shows you is how frequently data is coming in. It adjusts the time bucketing automatically: I said show the last 15 minutes, and it's showing a timestamp every 30 seconds — every 30 seconds you have two or three records coming in. This particular data is from Filebeat; we can change the index — that one was Metricbeat — and if we go to Metricbeat, we see the number of records coming in over 15 minutes, and this is the same information you saw on the dashboard, in the form of structured logs. The good thing about a structured log is that you can run a query on it: for example, I want to see logs where the service type is "system". You can query on anything — you can add a filter, say service.type is "system", and it applies that query and filters down to those items; in our particular case I think it matches all the documents, which is why everything shows up. That's one way to query documents. Another way is to type the query up here — this is called KQL, the Kibana Query Language. So you can build a UI filter, or write KQL, and then you can save that particular view — let's call it myview — and share it with other people: go to the permalink, make a short URL, and send it to anyone, and they can open that exact view; you can embed the time range in it too, which is also very helpful.

One more concept. This "metricbeat-*" thing you see over here is called an index pattern: when Kibana wants to search through a certain type of data, it needs an index pattern, and you manage these under Stack Management → Index Patterns. Index patterns are relevant to Kibana — Kibana uses them to show the visualizations. These are some patterns already created; this one got duplicated — I think when I ran setup just now, it duplicated it — so I'll just delete it. You can create your own index patterns that match different kinds of indices: go to Create Index Pattern and type, for example, metricbeat-*, and it shows you the indices that it matches. An index is kind of like a table, and when you create an index pattern, Kibana can read inside those tables. In the next step you tell it whether the data is time-series based and which field is the timestamp, and you create the pattern. It's a regular-expression-like pattern, so you can create an expression matching whatever you want, then go to the Discover tab and use your own index pattern. In the beginning there are no index patterns, so unless you load Elastic's pre-built dashboards, you need to create one yourself — I'm not actually sure whether loading dashboards creates the index pattern as well — but if you come here and no index pattern is available, just go ahead and create it from Stack Management → Index Patterns. We don't really need this duplicate, so we'll just delete it.
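A few KQL queries of the kind you might type into that Discover search bar — the field values here are illustrative, not from the demo:

    # filter to one dataset
    service.type : "system"

    # combine conditions
    host.name : "app-server" and event.dataset : "system.cpu"

    # range query, e.g. disks more than 80% full
    system.filesystem.used.pct >= 0.8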
In addition to that, there are two more services we'll install: one called Packetbeat and another called Auditbeat. The installation for those is very similar — I think we know the drill by now. For people who joined just now, or late: you go to the documentation for the Beat, install the public signing key, install these packages, set the repository, and then install that particular Debian package. We installed Metricbeat already; now we can install Auditbeat — and again, since 7.9.2 has been released, I want to make sure the versions are the same, so I do apt-get install auditbeat with the version pinned. It has already been installed, but basically it gets installed the same way, and its configuration is in /etc/auditbeat.

Here again I did not change any of the other settings, but out of the box it tells you that it checks for file integrity, and it checks for host information and login information; there are some other settings as well. Then you tell it where to send the data: you can do a setup against Kibana, you can load dashboards — I'm not doing that right now — and you set the output to Elasticsearch. If you want to set up the dashboards on Kibana, we can do that too; it's very similar to the previous one. I haven't done this yet, so let me look at the Kibana settings — this is the Kibana URL — so you set setup.kibana to the Kibana hostname, then you run auditbeat setup. It's working; this will load some Auditbeat dashboards.

What Auditbeat gives you is really nice security information. Elastic is actually focusing a lot on system security; there are different ways of ensuring that a system is secure, and Auditbeat is one of the important parts of the ecosystem used for this. Kibana now has a dedicated Security section — it uses a mix of Metricbeat, Packetbeat, and Auditbeat data; before, these were separate dashboards, but now Elastic has this dedicated Security section for Auditbeat and the other information that's available. So we've installed the dashboards; let's start the service — this will send all the Auditbeat information — and we can go and see whether we have any audit data. This shows the last dashboard we loaded, so we'll go back and look at the Auditbeat system overview. What's going on here? I just installed it, and I don't think it was running before, so let's look at the last 15 minutes and see if it shows up. (As an aside: the memory I'm using here is only 3 GB; realistically you want an 8, 12, or 16 GB VM so you can give 6, 8, or 12 GB to Elasticsearch itself.)

So it's showing that it's reading information already: it shows you which packages are installed — you can check for insecure packages — and the hosts being monitored, just one here. These events were at 9:25; I actually logged into this machine, so that's where those came from. Now if I do something like this: I'll log out and try to log in with a wrong password — permission denied — that should show up over here, that somebody was trying to get into the system. And the event does show up: there was a logout event — it takes a few seconds to be consumed — and here's another event saying that somebody called sami tried to SSH from this IP address. You can generate an alarm based on this, or your system people can look at it and say something fishy is going on, somebody is trying to get into our system. You can also look at all the users that exist on the system — this particular system has 38 users, a lot of them system users — and the processes, so you can see what processes are running. This is for Ubuntu or Linux machines; I'm not sure whether Auditbeat does the same thing on Windows, but Windows machines have something called Winlogbeat — just like Auditbeat and Metricbeat, there's a Winlogbeat — which you can install, and it reads the Windows event logs and gives you a lot of information as well. Again, I'm not sure whether Auditbeat audits Windows systems, but for Windows you have that additional Beat available.
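Back on Linux: the out-of-the-box module section of /etc/auditbeat/auditbeat.yml is roughly along these lines — a sketch from memory, with placeholder addresses; check the shipped file for the exact defaults:

    # /etc/auditbeat/auditbeat.yml (sketch)
    auditbeat.modules:
    - module: file_integrity
      paths:
      - /bin
      - /usr/bin
      - /sbin
      - /usr/sbin
      - /etc
    - module: system
      datasets:
        - host       # host information
        - login      # logins and failed logins
        - package    # installed/removed packages
        - process    # running processes
        - user       # local users

    setup.kibana:
      host: "10.0.0.4:5601"
    output.elasticsearch:
      hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]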
The third one we're going to install is called Packetbeat. With Packetbeat you can drill down into the details of individual packets; it gives you a lot of information and also helps with the overall security picture. Again, 7.9.1 is installed. The settings live in /etc/packetbeat, and I think they should look familiar by now: we have setup.kibana, and we have the output going directly to Elasticsearch. I've already loaded the Packetbeat dashboards, so we'll just start the Packetbeat service and go look at the information. I think you can appreciate that in less than an hour we have system monitoring, we have audit logging, and we have all these really cool things available. With Packetbeat you can do a lot of cool stuff as well: for example — this one isn't showing up, but this does — you can see on a map where your sessions originate from, and there's other packet-level information available here too; you can look at network flows, which part of the network is talking to which. This hasn't been running for long, but if it had, it would show some very nice visualizations. A lot of times people don't have the knowledge, or don't have the time, to build these visualizations and dashboards themselves, so you get them out of the box, and you can also use them as inspiration to build further visualizations of your own. Let me try something: I think I have a hello-world application running somewhere, so I just made an HTTP request — let's see if it shows up; let me refresh — OK, it's not showing up yet. But Packetbeat looks at HTTP traffic, and that HTTP traffic can also show up on a map, so you can see which part of the world is generating the most traffic. Actually, I have it over here — I ran it earlier as well — and it's not showing now, so that's something to be investigated, but basically you also get a map.
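For completeness, the pieces of /etc/packetbeat/packetbeat.yml that matter here, sketched with placeholder addresses (the interface and protocol sections are close to the shipped defaults):

    # /etc/packetbeat/packetbeat.yml (sketch)
    packetbeat.interfaces.device: any    # sniff all interfaces

    packetbeat.protocols:
    - type: http
      ports: [80, 8080, 3000]            # the nginx, HAProxy, and Node.js ports in this demo

    setup.kibana:
      host: "10.0.0.4:5601"
    output.elasticsearch:
      hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]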
So, we've installed these Beats, which send their data directly to Elasticsearch. The next thing I'm going to show you is Logstash and APM. I've already written the configuration for that, so I'll walk through it, and if there are any questions, I can answer them. You can go ahead and install Logstash — I think it's already on this machine; let me just confirm, 172.31.16.39. You do the same thing you did before, and it's always a good idea to go directly to the Elastic documentation rather than DigitalOcean or somewhere else — the documentation is actually pretty good. Let's check whether it's already installed: all the Elastic products install into /usr/share, so you have /usr/share/elasticsearch, you have /usr/share/filebeat, and Logstash is there as well. One oddity: I guess you cannot pin an exact version number for Logstash with apt — I'm not sure why — but 7.9.1 is already installed.

With Logstash, the configuration is slightly different from the other products. If you remember, we'd go into /etc/ plus the product name and edit, for example, metricbeat.yml; here we have /etc/logstash/logstash.yml, but you really don't need to do much in it. One thing I mentioned earlier: Logstash queues the data you send it, either in memory or on disk. If you want to queue on disk, you change queue.type to persisted, and it will write the queue to disk. This is always a good idea, because no matter how much memory you have, if something happens to your Elasticsearch cluster while Filebeat keeps sending data, the time will come when your memory is exhausted and your older data starts getting overwritten. That happened to me once, so I know it from experience — it's always a good idea to persist the queue to disk.
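The change amounts to a couple of lines in /etc/logstash/logstash.yml; the size cap shown is an assumed value you'd tune to your disk:

    # /etc/logstash/logstash.yml (sketch)
    queue.type: persisted     # queue events on disk instead of in memory
    queue.max_bytes: 4gb      # upper bound for the on-disk queue (assumed value)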
So we have Logstash installed — and let me check where my Filebeat is; it's on the same server. Before Logstash, actually, I want to talk about Filebeat. Filebeat, again, is installed using the same process: apt-get install filebeat plus all the other settings. Now, what Filebeat does is the interesting part. With Filebeat you can, first of all, load dashboards, like I did for the other Beats — there are Filebeat dashboards too — though I haven't used them, because I didn't really need them. What Filebeat does here: on this machine I have a Node.js application running on port 3000 — a simple application that just returns a hello world — and I have an nginx service in front of it as a reverse proxy: it listens to everything on port 80 and reverse-proxies it to port 3000. So this is my nginx talking to my Node application. Now, log files are being created in /var/log/nginx, and I want to see who is accessing the service — I want to store the logs, structure them properly, and then work with them. You can use Filebeat for that. I made a request, and this is it arriving in the access log — I'm using a Mac, and it says it got a GET /. Usually, when you have log files being generated on disk, you want to be able to access them in a visualization, and we can tell Filebeat to use its default support for nginx to read these files. We do that in /etc/filebeat/filebeat.yml — I'll disable this part and come back to it in a minute. What we're saying is: Filebeat, send your output to Elasticsearch. We could send it to Logstash or to Elasticsearch — in Filebeat you can only send output to one of them, not both — and here we're sending it directly to Elasticsearch.

Then Filebeat has a modules.d folder — ah, no, sorry, this is /etc. I told you I'm running an nginx service, and there's an nginx module in there, and it's enabled — you can see there's no .disabled at the end of the filename. If I open it, I'm telling it to read the access logs and the error logs (there's also something for ingress controllers on Kubernetes, which we don't need), to parse them, convert the fields that should be integers into integers, do all that stuff, and send the result directly to Elasticsearch so I can see it in Kibana. So, again: in filebeat.yml, the raw input part is disabled by default — enabled is false when you open it for the first time. What you need to do is set output.elasticsearch.hosts to your hosts, and not set the Logstash output. So I set the output to Elasticsearch, and in modules.d I renamed the nginx file to enable it, making sure the access log is enabled and the error log is too. I will now start Filebeat, and this should start processing the requests.
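What we just enabled, sketched out — the log paths are nginx's defaults on Ubuntu; adjust if yours differ:

    # enable the module (equivalently, rename nginx.yml.disabled to nginx.yml)
    sudo filebeat modules enable nginx

    # /etc/filebeat/modules.d/nginx.yml (sketch)
    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log*"]
      error:
        enabled: true
        var.paths: ["/var/log/nginx/error.log*"]

    # and in /etc/filebeat/filebeat.yml, point the output at the cluster
    output.elasticsearch:
      hosts: ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]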
so i will now start filebeat, and this should start processing the requests. if i tail the log, this is the request we're looking for, and it should show up in kibana, because the request gets processed by filebeat and shipped directly to elasticsearch. so i go to the discover tab, and here's something you might see: "undefined is not a configured index pattern". what happened was, if you remember, i deleted an index pattern and recreated it, and the name you see is really just a label, not the id by which kibana identifies the index pattern internally, so this screen still remembers the old index id. don't worry about it if you see this error. it's also saying "no results match your search criteria", probably for the same reason; fingers crossed. and now it's showing a lot of data, so i guess filebeat has been running for a while. we should be able to see some nginx information, including the entries we just generated: the module pulls out things like the source ip, the timestamp, the url information, the response status. that's not quite the one we're looking for, so let me just run the request again... okay, here it is. this was the actual request that i sent, and filebeat got that request, parsed it, processed it, and even ran a geoip lookup on it. elastic ships a geoip database that it uses for these lookups, and it used that to infer additional information. if we had to do that ourselves it would probably be pretty time-consuming, but it can do this really cool stuff where it finds out my ip address, the name of my telco provider, even my latitude and longitude are in there. you get all that information just from enabling the nginx module.

while we're here, notice the field types. elastic has different data types, and kibana marks them: the hash symbol shows an integer, the t shows a text type, there's a timestamp type, i think this one is a boolean, and this is a geo_point. geo_point is a specific elastic type: if you map a field as geo_point, elasticsearch will store it as a geographic location. but the enrichment you see here is separate processing; you normally need to tell elastic explicitly to derive that additional information via geoip, and the nginx module is smart enough to set all of that up for you.

i was hoping this would show up in one of the prebuilt dashboards, and there is a way to get it there, but let me show you a shortcut instead. filter down to fewer logs so they're quicker to load, find the geo location field on a document, and click the visualize button next to the filter. it will create a new map for you and populate it with that information. right now it just shows that my request came from pakistan, but if you had lots of different people accessing the service from all over, all of their geo points would show up on the map.

so that was nginx: filebeat using filebeat's own modules. the other thing we're going to do is this: i also have an haproxy service running here on port 8080, and that haproxy writes its log to disk. haproxy typically doesn't write to a file directly, but i've configured it that way. if i do a refresh, the service is running (there's nothing behind it yet, but you can see it responding), and it generates this log file. this log file is structured, so i can query it, process it, and build visualizations over it, and i will use the same filebeat for this. now, there is an haproxy module as well; yes, there it is, and if i used it, it would do a lot of cool stuff for me. but i don't want to use it; i want to use logstash for this, just to show that you don't always have a module available. your data can be any kind of data: it doesn't need to be an haproxy, nginx, or apache log file; it can be telco cdr data, some csv files, an excel file exported to csv, and you need to know how to handle it without an available module.

so we'll use the same filebeat, and i just change it: instead of sending the output to elasticsearch, i tell it to send to logstash. just make sure the indentation is right; this is yml, and yml is very sensitive to indentation. the changed filebeat.yml looks roughly like the sketch below.
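A minimal sketch of the relevant pieces; the HAProxy log path and the Elasticsearch host are from my setup, so treat them as assumptions:

    # /etc/filebeat/filebeat.yml
    filebeat.inputs:
      - type: log
        enabled: true                    # false by default; nothing is read until this is true
        paths:
          - /var/log/haproxy.log         # a single file, or a directory glob like /var/log/haproxy/*.log

    # output.elasticsearch:              # comment the Elasticsearch output out...
    #   hosts: ["http://192.168.56.11:9200"]
    output.logstash:                     # ...because Filebeat accepts exactly one output
      hosts: ["localhost:5044"]          # 5044 is the conventional Beats port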
we're telling filebeat that there is a logstash listening on localhost:5044, which is the default setting for logstash. let's see if it's running... there's no logstash running yet, so we will start logstash in a bit. another thing i need to do is make sure the input is enabled: if you want filebeat to read a file explicitly, you go to the filebeat.inputs section at the top, set type: log and enabled: true, and give the path. you can give the path to a directory and it will read everything in it, or you can give the path to a particular file, which is what i've done here: the log file that haproxy generates. so i did this, i changed the output from elasticsearch to logstash, and that's it. one more thing, just to be sure: i'll rename the nginx module file back to .disabled; i'm not sure it makes a difference, but let's keep it clean.

filebeat will now start reading the haproxy log file line by line, and filebeat is actually quite intelligent about this: if logstash is not listening, filebeat remembers which record was the last one consumed by logstash and keeps retrying until it gets through. it guarantees at-least-once delivery, and in that way it's quite valuable.

so i installed logstash, and the logstash configuration lives in /etc/logstash. like i said, the pipeline configuration is not something you change in logstash.yml; instead you have a conf.d directory, and any file in there with a .conf extension will be used as a pipeline for consuming the input data. i have an haproxy.conf in there, and it has three main sections: an input section, a filter section (i've written the filter section twice; you can use one filter block or more than one, that's fine), and an output section. the input says: i'm listening on port 5044 on the local machine. when a line is received, i apply a grok pattern: grok takes a particular line of text and transforms it based on a regular-expression pattern, so from a line of text it becomes structured data that you can query, and you can then take that and add additional data to it. for example, one field i receive here from the haproxy log is the backend connect time, and i can use logstash's own filter language to convert that field from a string to an integer. you can even run ruby code inside the pipeline, which i'll come back to; the whole pipeline looks roughly like the sketch below.
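Here is a minimal sketch of such a pipeline. The grok pattern and field names are illustrative rather than the exact ones from the session: HAPROXYHTTP is one of the patterns that ships with Logstash, and the derived field in the ruby block is a hypothetical example.

    # /etc/logstash/conf.d/haproxy.conf
    input {
      beats {
        port => 5044                        # Filebeat's output.logstash points here
      }
    }

    filter {
      grok {
        # split an HAProxy HTTP log line into named fields such as time_backend_connect
        match => { "message" => "%{HAPROXYHTTP}" }
      }
      mutate {
        # timing fields arrive as strings; convert them so they can be aggregated
        convert => { "time_backend_connect" => "integer" }
      }
      ruby {
        # arbitrary Ruby against the event API (explained below); hypothetical added field
        code => "event.set('ingested_at', Time.now.utc.to_s)"
      }
    }

    output {
      elasticsearch {
        hosts => ["http://192.168.56.11:9200"]   # assumption; you can list all three nodes
        index => "filebeat-%{[@metadata][version]}-%{+YYYY.MM.dd}"   # daily rolling index
      }
    }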
about that ruby filter: inside a filter you can add a ruby block, give it a code block, and write multi-line code to do your conversions. in order to understand this, you can go and look up the logstash ruby event api. what it tells you is that the data logstash receives is represented as an event: logstash understands each record as an event object, and from plain ruby code you can get particular fields out of the event, manipulate them, and set them back however you want.

once all this filtering is done, the result is sent to elasticsearch. i've sent it to only one elasticsearch node, but you can actually list all three nodes in the output. the output also tells logstash what the name of the index should be, and here it builds the index name automatically from the event metadata: for example this one will be filebeat-7.9.1- plus today's date. what that does is that every day, after twelve o'clock, it creates a new index for that particular day. for time-series data it's actually a very good idea to have rolling indexes that roll over per day, or even per hour, depending on the kind of volume you get.

so that's how you use logstash, and i will just start it now... let's see if it's running. it actually takes a bit of time: filebeat and the other beats are pretty fast, but logstash takes some time, and eventually you should see "successfully started logstash". note that the port it mentions there is for the logstash api, which is something different; that's not the pipeline port you configured, it runs separately. okay, so logstash is started, and when haproxy writes to its log file, the lines will be sent here. but you can see the haproxy endpoint is currently saying service unavailable; that's because it's expecting a spring boot application behind it. if i go to the haproxy config, it's a reverse proxy to localhost:1990, and what's supposed to be on localhost:1990 is a spring boot application. the application is the "accessing data with jpa" demo from the spring boot guides themselves; i just went and downloaded it and created a jar file out of it, and i can just run it and it starts on port 1990.

before running it, though, let me also show you the installation for apm, and then we can run haproxy, filebeat, and apm all together, because it's all supposed to come together. apm is also installed here using the same process: the package is called apm-server, and you can install it from the same repositories using the same commands. checking the version, it offers to downgrade; i think when i installed this, 7.9.2 was already out, so this is 7.9.1. anyway, this works.
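In shell terms the install and start follow the same pattern as every other package in this session; a sketch, assuming the Elastic apt repository was already added earlier:

    # install and start apm-server from the same Elastic apt repository
    sudo apt-get update
    sudo apt-get install apm-server
    sudo service apm-server start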
with the server installed, you configure apm-server.yml. apm is actually a completely different topic all on its own, but the essentials are these: i've set the address that the apm server itself needs to listen on, so whenever an apm agent wants to talk to the apm server, this is the host and port it connects to. and again you'll see an elasticsearch setting: there's an output.elasticsearch section, and that is where the server sends its data. then we just start it with service apm-server start and confirm that it's running. so we have an apm service running, and we can send tracing information to it: the tracing information goes from our jar file to an apm agent, from the agent to the apm server, and from the apm server into elasticsearch.

there are different ways of sending information to the apm server, but note that you cannot send this information directly to elasticsearch: the apm server does some transformation on the data, adding the additional information that turns it into a trace, so you need to send it to the apm server and let the apm server forward it to elasticsearch. the easiest way that i've found is to use an elastic apm agent. we have a java application, a java jar, so we use the java agent. one thing to notice: you saw with the other products that the version number was 7.9-something, but with the apm agents the version numbering is different, it's 1.x, so don't be too surprised by that. finding the agent itself was a bit tricky: in the documentation you go to "set up the agent" and the manual setup instructions, they tell you to use the java agent option, and they point you to maven central, and from maven central you download the elastic-apm-agent jar. this is the file that i downloaded.

to run it, you essentially attach the agent to your jar file, and how you do that is all given in the same documentation, nothing special about it: you pass the agent as a command-line argument. what this does is instrument the jar file: it auto-instruments certain parts of it, for example jdbc calls, rest calls, and rest controller calls. instrumented means that a start time is recorded before the call and an end time after the call, so that you can tell how much time it took and tracing can happen. so i pass the -javaagent option with the path to the elastic-apm-agent jar, i name the service (i called mine "mydemo", it can be anything), i tell it which package i want it to process, i point it at the apm server (the apm server is running on the same machine, so i gave it the private ip address), and finally i give it the path to my jar file. the full command looks roughly like the sketch below.
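Put together, the launch command looks roughly like this. The agent version, service name, package, address, and jar name are from my setup or made up for illustration; 8200 is the APM server's default port:

    # attach the Elastic APM Java agent to the Spring Boot jar
    java -javaagent:/opt/elastic-apm-agent-1.18.0.jar \
         -Delastic.apm.service_name=mydemo \
         -Delastic.apm.application_packages=com.example.accessingdatajpa \
         -Delastic.apm.server_urls=http://10.0.1.5:8200 \
         -jar accessing-data-jpa.jar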
so i run this... okay, this did not run; let me see what's going on. connection refused to the apm server, and it's trying to run the service on port 1990, which seems to be taken; maybe there's an indentation issue somewhere. i'm not sure why this happened, but what should happen is that the service runs on port 1990 and haproxy talks back to it. so instead of trying to diagnose this live, let me show you what the output should look like; i ran this yesterday as well. kibana now has a whole dedicated section for apm: previously with elastic and kibana you used to have dashboards for this, but recently elastic has been maturing a lot and these features get their own dedicated apps, so apm has its own section. let me set the time range to the last 48 hours... this is what happens when you do a live demo... okay, transactions: it shows you the transactions that happened in the last 48 hours, and it actually had a really nice transaction here which showed a database query as well.

so the application that i use is a spring boot application; it has a controller with a method called getAllUsers, and when haproxy gets called, it calls the spring boot application and this controller gets invoked. when you instrument the jar with the agent like this, it automatically instruments this particular jar file, and the call shows up as a transaction. if you go into it, it will show you how much time was spent where: there was an app span, and there's a mysql database running in the background, so there's a mysql span for the query. there's also the transaction duration distribution: most of my transactions ran in 0 to 50 milliseconds, and one particular request took around 600 to 650 milliseconds, and you can go in and actually see the trace of that request: the controller called a database query from inside, here is the query itself, and here is how long it took.

and the beauty of this is that you don't need to write any additional code to do this tracing; it's opentracing compliant, and you get all this information out of it for free. if you have a lot of http controllers and a lot of database queries, they will all start showing up here, and you can then go in and see which part of the application has the maximum impact. if you have a lot of slow queries, they will show up on top, and it shows you the average duration of every query, the 95th percentile, the transactions per minute, and the impact score. so you can do a very nice analysis of the performance of your application.

yeah, so that's it for me. if anybody has any questions, ashutosh, if you are still there, you can ask me and i'll try to answer. thank you, swami. okay, thank you very much, ashutosh and michael, for this late-night session; i hope everyone took something from it, and that's it from me. thank you very much, bye everyone.
Info
Channel: Official Elastic Community
Views: 1,399
Keywords: elasticsearch, elk, elastic stack, elastic community, elastic stack on-premise
Id: 0r-3BbxW4UI
Length: 105min 43sec (6343 seconds)
Published: Sun Sep 27 2020