Filebeat with Elasticsearch 8.x - Part 1: Install and Secure

Video Statistics and Information

Captions
In this video I want to show how to install, configure, set up, and secure Filebeat so that it can be used with Elasticsearch and Kibana version 8. Filebeat is a very useful tool for ingesting a wide range of file sources for DevOps analysis. In a few days I will be releasing a part 2 to this video; part 2 will focus on configuring email alerts and notifications on top of Filebeat, but in this video I just want to focus on getting things up and running.

This video assumes you already have Elasticsearch and Kibana installed. If you haven't done that yet, I will leave a link in the description to another video that shows how to get Elasticsearch and Kibana up and running. The other thing we need is a new server to install Filebeat on; I spun up a fresh installation of Ubuntu 20.04 here in this window. During this video I will also install the Apache web server on this machine so that I can demonstrate how Filebeat can ingest logs from installed applications. Before we begin: if you find the contents of this video helpful, give it a like, and if you want to stay on top of the videos as we release them, you can subscribe to this channel.

All right, let's get started by installing Filebeat on this new instance of Ubuntu 20.04. I'm going to pull up the installation instructions and use the apt-get approach, which you can find on the Elastic website under the Filebeat documentation. It is simply: get the public signing key, install the apt-transport-https dependency, add the repository information, and then update and install Filebeat. I'm going to copy all four of these commands and paste them into one bash script so I can run them all at once; you can call this file anything you want. I'll paste in the four commands and run it. It might take a moment, so I'll pause. Okay, the installation is complete.
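For reference, the four commands from the Elastic apt instructions that go into that one bash script look roughly like this; check the current Filebeat documentation for the exact signing-key and repository lines for your version:

```bash
#!/bin/bash
# 1. add the Elastic public signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
  sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

# 2. install the https transport for apt
sudo apt-get install -y apt-transport-https

# 3. register the Elastic 8.x apt repository
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/elastic-8.x.list

# 4. refresh the package index and install Filebeat
sudo apt-get update && sudo apt-get install -y filebeat
```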
If I go to the /etc/filebeat directory, you will see the default configuration files, or template files, that Filebeat comes with. Let's open one of them: filebeat.yml is our main configuration file, and once we're in here I'll explain the options that are most relevant to us.

The first option is filebeat.inputs. In this section we can list all the types of data inputs that we want Filebeat to ingest, and you can see in this YAML file that Filebeat has already given us an example input to start from. The first input we will ingest is of type filestream; specifically, we're going to look in the /var/log directory for all files that end with the .log extension. There are actually many different input types. On the Filebeat website, I have a tab open on configuring inputs, and these are all the types you can consume. Filestream is the one we're going to demonstrate in this video, but there's a lot more you can ingest, each with its own options to configure. For example, if you want to ingest logs from a Mosquitto broker, you would set the type to mqtt and then configure the options specific to MQTT, which are all listed here. Or if you want to ingest HTTP JSON, you set the type to httpjson, and if you scroll down you can see all the options you can configure for the httpjson type. But we'll keep things simple for now and just focus on filestream; you can see all the options available for filestream on this page. Many of them we don't need to set, because we're just doing a quick dabble.

So what properties do we need for filestream? We have to give an ID for every input, and that ID should be unique, so I'll leave this one as is. For enabled, if we delete the line it defaults to true, or you can just explicitly set it to true. We could add more directories to monitor, but I'm not going to do that for now; I'll keep it simple like this.

Moving on to the filebeat.config.modules section. This section is very similar to the filebeat.inputs section just above; the difference is that here we can list the file types we want to ingest in separate YAML files underneath the modules.d directory. Filebeat constantly watches that directory for changes, and when there are changes it automatically loads them and reports to Elasticsearch and Kibana, whereas any inputs specified in the section above require a systemctl restart. I'm going to enable config.modules because there are some file types I want to ingest later on at runtime; I'll demonstrate that a little later. We'll choose a reload period of 10 seconds.

Next, I do want to set up dashboards. If we go to our Kibana and click on Dashboards, we'll see that this section is currently empty. By setting this option to true, the setup command we run later will load a lot of different lenses and dashboards into Kibana and Elasticsearch that become useful for monitoring and reviewing the logs that get fed into the system. This is optional, but it's a really good idea to turn on. Then I'm going to specify our Kibana URL, which is basically just taken from the browser here, and specify our Elasticsearch connection. I am using commercially signed certificates, and I'll use the elastic superuser. The first thing we're going to do is run the setup command, and because setup is a one-time activity I don't mind using superuser credentials here; these are the same credentials I use to log into Kibana when I need superuser access. But when Filebeat publishes ongoing event data to Elasticsearch, it should use a less privileged user; that way, if this server is compromised, at least bad actors will not gain access to the superuser account. For now, because we need to do a one-time setup, I'm just going to use the superuser, and I think that should be it for the configuration.
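Pulling those pieces together, the relevant parts of /etc/filebeat/filebeat.yml at this stage look roughly like the sketch below; the hostnames, certificate path, and password are placeholders, not the values used in the video:

```yaml
filebeat.inputs:
- type: filestream
  id: my-filestream-id            # any unique id
  enabled: true
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s

setup.dashboards.enabled: true

setup.kibana:
  host: "https://kibana.example.com:5601"            # placeholder

output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]  # placeholder
  # superuser credentials are only used for the one-time setup;
  # they are replaced by the less privileged publishing user later
  username: "elastic"
  password: "<superuser-password>"
  # ssl.certificate_authorities: ["/path/to/ca.crt"] # if using self-signed certs
```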
So let's go ahead and run the setup command. I'm going to go into the /usr/share/filebeat directory and then the bin directory, where some of the commands that come with the Filebeat installation live. Actually, before we run setup, we should test the configuration file for errors: filebeat test config with -c /etc/filebeat/filebeat.yml plus the path.home and path.data flags. path.home is the directory where the Filebeat commands are, and path.data points at the /var/lib/filebeat directory, because eventually we're going to set up a keystore there to store some sensitive information. I noticed on some of my colleagues' machines that they don't need these two extra flags, path.home and path.data, but for whatever reason I seem to need them on my system. I'll press enter and this will test the configuration file. Okay, the configuration seems fine. Let's also test the connection to Elasticsearch with filebeat test output. Everything seems okay here; this is actually the address of our elastic.evermight.net server. If you're using self-signed certificates you might see some warnings instead, but that's completely fine.

And just like that, we should be able to run the setup command. It's the same as the test config and test output commands; we just change those words to setup. Before I run it, remember that our dashboards page is currently empty; after running setup there should be dashboards in there. I'm going to let this run, and it might take a moment, so I'll pause.

It looks like the setup is complete. If we go to Kibana and refresh the Dashboards page, we can now see a whole lot of dashboards available to us, and each of these dashboards reflects a different service you could potentially install on your server. The other thing we should check: if we go to Stack Management, click on Index Management, and look under Data Streams, we should see there is now a filebeat data stream. If you click through to the index, you can see that a lot of the schema, or mappings, have been set up; it might take a while to load because the mappings can be quite extensive. There we go, the mappings are here. One more thing we can check: under the Discover tab we should also see an option for filebeat now; prior to setup this option would not have been there.
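The test and setup invocations above look roughly like this; the path.home/path.data flags are the ones the video mentions needing on this particular machine, and may not be required everywhere:

```bash
cd /usr/share/filebeat/bin

# check the configuration file for errors
sudo ./filebeat test config -c /etc/filebeat/filebeat.yml \
  --path.home /usr/share/filebeat --path.data /var/lib/filebeat

# check the connection to the configured Elasticsearch output
sudo ./filebeat test output -c /etc/filebeat/filebeat.yml \
  --path.home /usr/share/filebeat --path.data /var/lib/filebeat

# load index templates, ingest pipelines and the Kibana dashboards
sudo ./filebeat setup -c /etc/filebeat/filebeat.yml \
  --path.home /usr/share/filebeat --path.data /var/lib/filebeat
```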
So at this point we're almost ready to enable and run Filebeat. Actually, I just remembered: before we enable and run Filebeat, we need to create a less privileged user for it to use. Even before that, let me pull up the instructions on what privileges that user needs. Here in the Filebeat documentation there's a section on creating a publishing user, and the publishing user needs a role with these privileges, so we're going to create a role with those privileges and then assign that role to a user.

Let's go to Security, Roles, and create a role called filebeat-publisher. The privileges we need are the monitor cluster privilege, read ILM (read_ilm), read_pipeline, and create_doc on the filebeat index. Let's create that role and then assign it to a user. Create user: the username can be anything we want; I'll keep it the same as the role name, and I'll use the same thing for the full name. The email address doesn't matter because I don't intend to send this user anything. For the password I'll just pick something easy like abcd1234, because I don't ever plan on logging in with these credentials, but in your production environment use a much more secure password. Assign the role we just made and create the user.

At this point I could just go into the filebeat.yml file, replace the username with filebeat-publisher and the password with abcd1234, run systemctl enable and start, and Filebeat would start loading information into Elasticsearch. But I prefer to use an API key rather than login credentials like this, so let's do that. We can't create this API key through the UI; we actually need to use Dev Tools. I'll hide this hint and copy the command. What this command says is: we call the Elasticsearch REST API endpoint _security/api_key/grant, and because we're using the grant type password, we log in as the user filebeat-publisher with the password associated with that user, and that creates an API key with the name filebeat-publisher. Let's run it. Here is the API key, and here is the base64-encoded version of it. I'm going to copy this and save it to Notepad, and I need to do a little massaging: I take the ID and the API key and combine them as id:api_key.

I could paste that string directly into filebeat.yml, but I don't like leaving sensitive information in plain text in a file like this, because when bad actors compromise the server and gain access to this file, they also gain access to that API key. Instead, I'm going to reference a variable here called ES_API_KEY and use the Filebeat keystore to store that value and load it at runtime; that way bad actors will never gain access to the key simply by looking at this file. Let's quit this, go to /usr/share/filebeat, into the bin directory, and run the same command as our setup, except we replace the word setup with keystore add and the name of the value we want to create, ES_API_KEY. Hit enter; we don't have a keystore yet, so yes, let's create it now, and paste in the value. So now Filebeat should be using the filebeat-publisher user to connect to Elasticsearch. systemctl enable filebeat, systemctl start filebeat, and we'll give it a moment to load up. Once it loads, we should see data under the Discover tab, so after a few minutes something should show up there; I'll pause until that happens.
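For reference, the grant request in Dev Tools looks roughly like this; the response returns an id, the api_key, and a base64-encoded form, and the value saved to the keystore is the id and api_key joined with a colon:

```
POST /_security/api_key/grant
{
  "grant_type": "password",
  "username": "filebeat-publisher",
  "password": "abcd1234",
  "api_key": {
    "name": "filebeat-publisher"
  }
}
```

The keystore step then looks roughly like the sketch below, after which filebeat.yml references the value as api_key: "${ES_API_KEY}" under output.elasticsearch in place of the username and password lines:

```bash
cd /usr/share/filebeat/bin

# Filebeat offers to create the keystore if one does not exist yet,
# then prompts for the value to store under ES_API_KEY ("<id>:<api_key>")
sudo ./filebeat keystore add ES_API_KEY -c /etc/filebeat/filebeat.yml \
  --path.home /usr/share/filebeat --path.data /var/lib/filebeat
```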
It's been a few minutes and there's still nothing loading in here. I took a look at journalctl, and what I see is an error I've seen in all the other Beats as well: a 403 error where Filebeat can't seem to do a bulk create. My solution has always been to update the privileges of the Beat user: if I go to Users and back to this filebeat-publisher user, I find that adding the editor role seems to solve the problem. The documentation doesn't mention assigning an extra role to enable this publishing user, so if anyone knows why, let me know. Before it even works, though, I need to revoke this API key. The reason is that an API key retains the privileges the user had when the key was created, so just because I've added the editor role to this user, the existing API key does not pick it up. I need to destroy the key and recreate it for the new key to get the new privileges. So let's go back to Dev Tools, hit run again, and here is our new API key. I paste it into Notepad, build the same id:api_key format, copy it, and run that keystore add command again to overwrite the value. Then systemctl restart filebeat, and let's go to the Discover tab and see if records start showing up in a few minutes.

Okay, now we're finally seeing some data in the system. I can also click over to the Observability area: on the Overview page there's a widget for log events, and I can click Show log stream there, or go to the log stream directly and see similar information. This is basically parsed entries from log files, the same as in the Discover tab.

Now let's take a look at some of the dashboards. I mentioned earlier that a lot of these dashboards will be empty because we haven't installed many of these services, things like Salesforce or NetFlow, so naturally those dashboards are empty. But some will be of interest to you: the Syslog dashboard, the SSH dashboard, the Sudo dashboard, and the New users and groups dashboard. I'll give them a moment to load, but I expect all of them to be empty right now, because we haven't actually enabled Filebeat to read those logs from the system. What I mean is: if we go back to our server and look at filebeat.yml, right now we are only reading the /var/log/*.log files from the input we configured. But recall that we also have the modules.d folder, which is actively monitored for any .yml files. Let's go in there: these are module files for many different types of services you could potentially install on your server, and one thing that already comes with Ubuntu is the system log files. Let's open the system one; you can see we can enable the monitoring of syslog files, so I'll change that to true, and the authorization logs, which I'll also change to true. With that done, I'll enable this file by removing the .disabled extension, because filebeat.yml says to look for anything with a .yml extension, which this system.yml file will now match. I shouldn't have to restart the Filebeat service; Filebeat should automatically recognize this file, and momentarily we should see results in these dashboards. I'll give it a moment, or let me just hit refresh quickly to see if anything has come in.
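After those edits, the enabled module file at /etc/filebeat/modules.d/system.yml ends up looking roughly like this:

```yaml
- module: system
  # system logs (/var/log/syslog* on Ubuntu)
  syslog:
    enabled: true
    #var.paths:

  # authorization logs (/var/log/auth.log* on Ubuntu)
  auth:
    enabled: true
    #var.paths:
```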
I've waited a few minutes and these dashboards are still empty. I went onto the server and took a look at journalctl, and I can already see a lot of issues: all these warnings, and if I scroll to the right it looks like there are permission issues preventing the indexing of events. Let's see if we can resolve that. I've seen this before, and I believe again it's because the documentation on the privileges needed for a publishing user isn't complete. You can see this one entry where it says that if you want to use modules you must have the read_pipeline privilege, but I think that's still insufficient. So I'm going to come back over here, go to Stack Management, click on Roles, find the filebeat-publisher role, and increase its privileges by adding anything related to ingest pipelines, ignoring the Logstash pipeline privileges since we're not using Logstash just yet. I'll update the role. The filebeat-publisher user already has that role, so I'm not changing anything there, but I must revoke this API key and regenerate it so that the new key will have the new privileges and permissions. Let's generate a new one, copy it, paste it into Notepad, and recreate what we need: this is the ID, this is the API key, joined as id:api_key. Then back to /usr/share/filebeat/bin and filebeat keystore add ES_API_KEY with all the same flags again; copy the value, overwrite the old one, and paste it in. Now systemctl restart filebeat. Sometimes restarting Filebeat can take a minute or two if there's a lot going on in the background, like additional setup work or a cache that needs to be cleared, but this one was pretty quick for me. I'll take a look at journalctl to see if any more warnings happen. So far so good, mostly infos, so let me bring Kibana back up and wait a moment to see if anything comes in.
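The video makes these privilege changes through the Kibana Roles UI without reading out the exact privilege names, so the Dev Tools equivalent below is an assumption based on what is described: the documented publishing-user privileges plus the ingest-pipeline privileges added here (the separate editor role stays assigned to the user itself, not the role):

```
PUT /_security/role/filebeat-publisher
{
  "cluster": [
    "monitor",
    "read_ilm",
    "read_pipeline",
    "manage_ingest_pipelines",
    "manage_pipeline"
  ],
  "indices": [
    {
      "names": [ "filebeat-*" ],
      "privileges": [ "create_doc" ]
    }
  ]
}
```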
It's been a few minutes and I can already see data for SSH login attempts; the server is out in the world somewhere, so it's getting spammed by a lot of bots on the web. For sudo commands I haven't done anything lately, so let's try something: I'll type sudo echo hello world, and maybe in a moment that will be reported here. For new users there's nothing yet, but if I create a new user, hopefully that information will show up here as well. For the syslog dashboard I'll just write a syslog entry and see if it shows up in a moment. Taking a quick look at journalctl, it looks like there are still warnings, and there is a syslog-related issue here, so let me wait a moment to see if anything gets into Elasticsearch.

All right, I can now see the sudo commands being reported, and new user creations are also being reported, but the syslog still seems to be failing. I do have a way around this. It's not ideal, but what I can do is go to the filebeat.yml file and temporarily stop using the less privileged user: I'll use the superuser account to write the first record, and once that first record has been written I revert back to the less privileged user. After that, the less privileged user seems able to keep publishing more information down the road; it's just this initial record that needs to be created by the elastic user, at least from my experimentation. So let's refresh, and I'll pause for a moment. It looks like the syslog data is now finally loading while using the superuser account. I can search for the pizza record: only one result came back, and the word pizza is in it, so it's there. Now let's revert back to the less privileged user, systemctl restart filebeat, clear this filter, and write another syslog entry, say chocolate; hopefully that record will also appear in a moment while we're using the less privileged user. It's been a few minutes, so let's see if chocolate appears, minding case sensitivity. Good. So it's working with the less privileged user. Let me do one more test to be sure, without restarting Filebeat or anything, and just see if the record appears: that was actually pretty fast, only a few seconds, and I can already see the record, so I won't even bother filtering for it. We've now got all four of these system dashboards working.

Just for fun, let's get another dashboard working: the Apache dashboard. Apache is a very popular web server, and Filebeat comes with the tools to automatically ingest and parse Apache log files. Right now this dashboard is empty because we don't have Apache installed on our server, so let's install the apache2 web server; we'll give it a moment. Okay, Apache has been installed. Let's figure out the IP address of the server, and if I paste that into the web browser, there we go, we get the default Apache2 web page. The Apache dashboard will remain empty until we enable the module in the modules.d directory, so let's go there. We see there is an apache module file; let's edit it, set the access logs to true and the error logs to true, and rename the file to apache.yml by removing the .disabled extension. Hopefully, without needing a systemctl restart, we'll start seeing data in a moment, but I have a feeling it won't work because of those same permission issues. Checking journalctl, the second-to-last line says "cannot index", so I suspect it won't work, but I'll give it a few minutes. I waited a few minutes and only two or three records came through, which doesn't look complete, so I'm going to try that hack again: escalate to the superuser, let it load one or two records, and then revert back to the less privileged user. Okay, this is looking much better now, and I think I can reduce the privileges again and everything should be fine at this point. Yep, perfect: I intentionally generated this 404 error with this URL while using the less privileged filebeat user, and it shows up, so everything seems to be working well.
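The enabled module file at /etc/filebeat/modules.d/apache.yml ends up looking roughly like this:

```yaml
- module: apache
  # access logs (/var/log/apache2/access.log* on Ubuntu)
  access:
    enabled: true
    #var.paths:

  # error logs (/var/log/apache2/error.log* on Ubuntu)
  error:
    enabled: true
    #var.paths:
```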
Well, actually, not quite everything. Another issue I often notice with Elastic is that some of the maps, the ones that try to translate IP addresses into geolocations, seem to fail: these two maps are empty, but I'm pretty sure there should be data here. The reason these maps might be empty is that Elasticsearch normally has a data set that matches IP addresses to geographies, and this data set is supposed to be downloaded automatically at the time Elasticsearch is installed, but I've noticed that in a lot of my installations that automatic download tends to fail. We can double-check that by going to the console: there's an endpoint you can call, GET _ingest/geoip/stats, and you can see there's one failed attempt at this download.

I have a workaround to get this working. It involves restarting Elasticsearch a couple of times while adding a property to elasticsearch.yml and then removing it, so let's go through that process. I'm going to SSH to my Elasticsearch server, open the elasticsearch.yml file, and add a line that disables the automatic GeoIP download, then restart Elasticsearch. Okay, Elasticsearch has restarted; if I call the endpoint again, it looks like no attempts were made to download the GeoIP data set. Now I delete the line I added and restart again. Elasticsearch has restarted, and if I call the endpoint one more time, I've confirmed that a new GeoIP data set has been downloaded. Now if I go to my Apache dashboard and hit refresh, and do the same with the SSH dashboard, you can see that the GeoIP enrichment is available, and on the SSH dashboard you can see all these spammers and bots from around the world trying to SSH into our servers.

This concludes part one of our tutorial on Filebeat. In a few days we will be releasing part two, which will focus on alert rules and email notifications. If you found the contents of this video helpful, give us a like, and if you want to stay on top of the videos as we release them, consider subscribing to our channel. See you in the next video.
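For reference, the GeoIP check and the toggle from this last section look roughly like this. The video doesn't read the elasticsearch.yml property aloud, so the setting name below is an assumption based on the standard option that controls the automatic GeoIP database download in Elasticsearch 8.x:

```
GET _ingest/geoip/stats
```

```yaml
# temporarily added to elasticsearch.yml before one restart,
# then removed again before a second restart
ingest.geoip.downloader.enabled: false
```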
Info
Channel: Evermight Systems
Views: 7,559
Id: Bquc9I63DA0
Length: 43min 7sec (2587 seconds)
Published: Mon Jan 02 2023