Setup Elasticsearch Cluster + Kibana 8.x

Captions
In this video I am going to set up a five-node Elasticsearch cluster with Kibana. Before we get started, I should outline what we need and define a few glossary terms.

Let's start with the resources we need. For this tutorial I will be using six different Ubuntu machines: five will be used for Elasticsearch nodes and one will be used for Kibana. I also prepared six publicly signed SSL certificates ahead of time for these six domains. If you want to use publicly signed certificates and you don't have them yet, you can generate some for free with Let's Encrypt; I will leave a link in the description of this video to another video that shows you how to generate Let's Encrypt SSL certificates you can practice with. If you have your own certificates, that's even better, and things should also work with wildcard certificates.

Next, let me quickly cover a few glossary terms and concepts, starting with nodes and clusters. An Elasticsearch cluster is a collection of Elasticsearch nodes. When you start one instance of Elasticsearch, it is one node running in one cluster. One of my earlier Elasticsearch installation videos demonstrated this: we had one machine for Elasticsearch and one machine for Kibana, and the single Elasticsearch machine was a single node running in its own cluster. You can spin up more machines, each with Elasticsearch installed, and connect them to the cluster; then you'll have many nodes in the cluster. The benefit of a multi-node cluster is scalability and resilience. A multi-node cluster is more scalable because it can distribute resources and capabilities among many nodes, and it makes your overall Elasticsearch service more resilient and fault tolerant, because any node that experiences a service interruption can automatically be replaced by other active nodes.

Next, let's talk about roles for a node. A role defines what capabilities and responsibilities a node can have. You can see on this page that a node can have many roles; if you don't explicitly declare them, all of these roles are assigned to the node by default. The two most important roles you need to be aware of are the master role and the data role. When you start a cluster, there must always be one node designated as the master node. Reading from the definition here: the master node is responsible for lightweight cluster-wide actions such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. In terms of which node is the master, any node that has been assigned the master role is eligible to become the master node. When you start up your cluster, some of the nodes will vote on which master-eligible node actually becomes the master node, and if the master node becomes unavailable at runtime, the remaining nodes in the cluster will elect a new one. That is the importance of the master role. The data role is much easier to understand: any node that has been assigned the data role can store documents for your indices. There are quite a few other roles you can look into, but you can learn about them as you go; a small configuration example follows below.
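As a quick illustration (a hypothetical sketch, not part of the cluster built in this video; the node names and role mix are made up), roles are declared per node in elasticsearch.yml:

```yaml
# Hypothetical elasticsearch.yml fragments illustrating node.roles.
# Omitting node.roles entirely assigns ALL roles to the node by default.

# A dedicated master-eligible node:
node.name: master-1
node.roles: [ master ]

# A data-only node would instead declare:
# node.name: data-1
# node.roles: [ data ]
```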
That is my introduction; we can get started now. I will leave a link in the description of this video to all the configuration files I'm about to use and any resources I'm about to mention.

I have five Linux machines out in the cloud somewhere, each with a different IP address. Some machines use Ubuntu 20.04 and others use 22.04. Each machine is labeled with a hostname like node1, node2, all the way to node5, and I also have domain names, node1.evermite.net through node5.evermite.net, mapped to each corresponding machine. So you can see, for example, that node1.evermite.net corresponds to the IP address 172.105.3.82, which is the address of node 1.

Because all these machines are new instances of Ubuntu, I'm going to update the distribution and install some packages I may need, like the vim editor, curl, gnupg, and gpg. So let me update each of these machines: apt-get update, then apt-get dist-upgrade -y to automatically agree to everything, and then install vim, curl, gnupg, and gpg. I'm going to copy this, paste it to every single machine, run it, and come back when everything's done.

Node 1 is done and the others are still going, but let's get ready to install Elasticsearch anyway. We're going to use the apt-get approach, so let me pull up the documentation. It's pretty straightforward: get the public signing key, install a dependency, update the repository definitions, and finally install Elasticsearch. What I'll do is copy each of these commands into a shell script so I can reuse it on every machine; I'll fast-forward through this cut-and-paste part, and the collected commands are sketched below. Okay, everything is in there, so let me change the permissions on this file, and then I'm going to scp it over to each of the servers; I'll fast-forward through this part as well. Now I'm just going to run it on each of them, and I'll come back when everything is done.
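Here is a sketch of what that install script contains, based on Elastic's documented apt instructions for the 8.x repository (check the current documentation for the exact repository line before using it):

```bash
#!/bin/bash
# Update the distribution and install helper packages
apt-get update && apt-get dist-upgrade -y
apt-get install -y vim curl gnupg gpg

# Add Elastic's public signing key and the 8.x apt repository,
# then install Elasticsearch
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
  gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
apt-get install -y apt-transport-https
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | \
  tee /etc/apt/sources.list.d/elastic-8.x.list
apt-get update && apt-get install -y elasticsearch
```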
Okay, all the installations are done. Each instance of Elasticsearch comes with a built-in superuser account with the username elastic. Here's the password for the elastic user on node 1, and if I go to node 2 and scroll up, we can see the password for the elastic user on node 2; node 3, node 4, and node 5 all have different passwords as well. None of this matters, because once I set up the cluster I'm going to reset the password for one elastic user, and it will be used across all nodes in the cluster.

Let's go to the /etc/elasticsearch directory, where you'll see the default configuration files that come with our installation of Elasticsearch, and edit the main one, elasticsearch.yml. There are only a few things in here that we need to edit to set up our first instance of Elasticsearch as well as our cluster, so let's go through them; the resulting settings are sketched below. I'll start with the cluster name. We can call it anything we want; I'll call mine es-demo. Other nodes are going to need to know this name to connect to it. Next I'm going to edit the node name. I'll call it node-1, and we should make sure this name is unique within the cluster to avoid any naming collision. Next I will edit the network host, using the domain name that maps to this server, node1.evermite.net. For the port I'll use 9200, but you should consider using a non-default port for security reasons.

Next we need to decide how to configure SSL certificates for HTTP communication, which is typically used for the Elasticsearch REST API. We can either use our own publicly signed certificates or the self-signed certificates that were automatically generated by Elasticsearch during installation; just so you know, the automatically generated self-signed certificates can be found inside this certs directory. Most of the documentation on the Elasticsearch website discusses using the self-signed certificates. This is because, as you will see later in the video, Elasticsearch comes with command line tools for working with your Elasticsearch instances, and some of these tools depend on the self-signed certificates; based on my understanding, these tools require specific details of the root or intermediate certificates that come with Elasticsearch's auto-generated certificates. So if you use publicly signed or commercial certificates from authorities like Let's Encrypt, GeoTrust, GlobalSign, and others, you will not be able to benefit from some of Elasticsearch's command line tools, and you will also need to manually edit various configuration details to get a single Elasticsearch instance, or the whole cluster, working.

So the benefit of using Elasticsearch's self-signed certificates is that you can use Elasticsearch's command line tools. The inconvenience is that you need to put more effort into configuring SSL verification between Elasticsearch and the client applications that need to connect to it, because client applications won't automatically trust self-signed certificates without extra configuration. In the near future I will create a video on SSL trust chains that explains all this in more detail, but for now, just know that there are trade-offs between publicly signed certificates and Elasticsearch's auto-generated self-signed certificates.

In terms of what we're going to do: we'll use Elasticsearch's self-signed certificates to start off, so that we can benefit from the command line tool that generates enrollment tokens for enrolling other nodes into our cluster. Then, once our cluster is up and we no longer need that tool, I will swap out the self-signed certificates for our own publicly signed certificates from Let's Encrypt, which will make it easier for other client applications to connect to our Elasticsearch cluster.
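A sketch of the node 1 settings just described (es-demo and the domain are the values used in this walkthrough; the exact node name spelling is an assumption, so adapt it to your own naming):

```yaml
# /etc/elasticsearch/elasticsearch.yml on node 1 (sketch)
cluster.name: es-demo
node.name: node-1
network.host: node1.evermite.net
http.port: 9200
# xpack.security.http.ssl and xpack.security.transport.ssl are left
# at their generated defaults for now (self-signed certificates)
```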
So for now, I'll leave these two lines alone. Next is the transport SSL. Elasticsearch uses the transport protocol for communication between nodes in the cluster, and here you also have a choice between Elasticsearch's auto-generated self-signed certificates and publicly signed certificates. Again, Elasticsearch's documentation mostly covers the self-signed certificates, and based on my own research and experiments, the cluster will allow nodes to join if those nodes have specific root certificates in the trust chain. For simplicity's sake, what I'm trying to say is that the self-signed certificates almost act like access tokens between a node and the cluster.

Let's consider the other alternative, where you use commercial or publicly signed certificates to encrypt the transport protocol. In that situation your Elasticsearch nodes are using root certificates that are publicly available to everyone, which means any stranger in the world could set up an Elasticsearch node and connect to your cluster without any restriction or authentication; I've done experiments that verified this was the case. If for any reason you choose commercial or publicly signed certificates for the transport SSL, make sure you set up a firewall to limit access to your cluster. Also check the documentation: from what I read, Elasticsearch uses ports in the range 9300 to 9400 by default for transport communication, so you may want your firewall to restrict access on those ports. In general you don't want Elasticsearch to be publicly accessible; you should take the same safeguards protecting your Elasticsearch cluster that you would take protecting a mission-critical database. With that said, we will use Elasticsearch's auto-generated self-signed certificates for the transport protocol, and we'll leave these settings as they are.

The next line to look at is cluster.initial_master_nodes. As far as I can tell, this field is only needed the first time you start this Elasticsearch instance, which also bootstraps the cluster the node belongs to; afterwards, I don't think this line does anything. The value should be the node name of this node, which in this case is node-1. If this value were not set, Elasticsearch would just take the hostname of your Linux instance, which might actually cause problems with your cluster, so make sure you set the initial master nodes properly before you start the cluster.

That should be all we need to edit, so now we can try to start up the cluster with just this node. This might take a minute, so I'm going to pause until it's done. Okay, let's check the status to see if anything failed. Great, everything seems okay. What we should do next is ping the API to see what the cluster health is, but I don't remember the password for the elastic superuser, so I'm going to reset it. I'm going to go to the /usr/share/elasticsearch/bin directory, which holds the command line tools that came with our installation of Elasticsearch, and reset the password for the elastic user with this one here: elasticsearch-reset-password, in interactive mode, for the user elastic. We'll give it a moment... yes... and I'll use the password abcd1234, but you should use something more secure. Roughly, the commands look like the sketch below.
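A sketch of the reset and the first API check (abcd1234 is the throwaway password from this walkthrough):

```bash
# Reset the built-in elastic superuser password interactively
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic

# Ping the REST API; -k skips certificate verification because node 1
# is still serving Elasticsearch's self-signed certificate
curl -k -u elastic:abcd1234 "https://node1.evermite.net:9200"
curl -k -u elastic:abcd1234 "https://node1.evermite.net:9200/_cluster/health?pretty"
```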
Now let's ping the Elasticsearch API. I'll just curl as elastic:abcd1234 against node1.evermite.net:9200. Oh, right: because we're using a self-signed certificate, I'm going to skip verification with the -k flag. There we go; we can see that our Elasticsearch cluster is up, but it's not showing any nodes. Okay, let me just do _cluster/health, and let's pretty-print it, since it's hard to read like this. There we go: now we can see that there's one node in our cluster and that the name of the cluster is es-demo. There are a few other useful commands, so let me type this here: _cat/nodes lists all the nodes in your cluster, and the asterisk marks the node that is the master node. Another useful one is _cat/master, which explicitly tells you which node is the master.

All right, we've got our cluster up with one node in it, and we're ready to add a second node. I pulled up the documentation on how to add a node to an Elasticsearch cluster, and it's pretty straightforward. However, these instructions are only relevant if you're using Elasticsearch's self-signed certificates for both the REST API and the transport protocol. If you are using publicly signed certificates, none of these instructions will work, and you would need to configure the other Elasticsearch nodes manually. If anyone needs to know how to configure nodes manually for use with publicly signed certificates, just make a request in the comments below and I will post a follow-up video on how to do it.

Adding nodes to the Elasticsearch cluster can be pretty straightforward if we use the command line tools that come with Elasticsearch, so let me scroll down and start right here. This elasticsearch-create-enrollment-token command comes with the installation of Elasticsearch, and you can see it right here. You have to run this command from node 1, and once you do, it generates a token which is then used by a different command on node 2, or whichever node wants to join the cluster. So once you have the token, you take it over to node 2 and use it along with this command over here... no, wait, actually that is not correct: that command would start up node 2, which we can't do because we haven't even configured it yet. Let me pull up different documentation. Okay, yes, this is the right command: elasticsearch-reconfigure-node. This command also exists on node 2, because it comes with the installation of Elasticsearch. Let me just show you; let me find my second window. Yep, here's node 2, and if I go to the /usr/share/elasticsearch/bin directory you can see that elasticsearch-reconfigure-node is right here.

In terms of what this command does: when you run it on node 2 with the token generated from node 1, it does a couple of things. First, it edits node 2's elasticsearch.yml file to contain some connection details for the cluster. Second, it causes node 2 to generate some new self-signed certificates that will work against the cluster; like I said earlier, these self-signed certificates basically act like access tokens to the cluster. There's another thing to keep in mind: elasticsearch-reconfigure-node assumes you haven't touched anything in node 2's /etc/elasticsearch directory. If you've made any changes in there, the command will fail with some errors, and from my experience the easiest way to get around those errors is to completely rebuild the server, reinstall Elasticsearch, and try again. The two-step enrollment flow is sketched below.

Let's try this all out. I'm going to go to node 1, here it is, and generate the enrollment token with elasticsearch-create-enrollment-token, with -s for scope, of type node. Now let's just wait... ah, here it is, here's the token, so I'm going to copy it.
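A sketch of that enrollment flow (the token argument is a placeholder for the value printed on node 1):

```bash
# On node 1: generate an enrollment token scoped to a new node
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node

# On node 2: consume the token to pick up the cluster's connection
# details and generate matching self-signed certificates
/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node \
  --enrollment-token <token-from-node-1>
```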
Before I use it, I actually want to show you something interesting. I'm going to open up Firefox, go to the developer tools, and use some JavaScript: I'll type the function atob and paste in the token. atob is basically base64-decode, so I'm going to base64-decode the string and see what's in it. You can see what's in that token: it's basically connection details for node 1 and the version of Elasticsearch on node 1, along with some of these keys over here. So now we've gained a little insight into what the elasticsearch-reconfigure-node command does: it parses connection details out of the enrollment token, and probably uses some of those other secrets we saw to generate the self-signed certificates. That's all useful information.

Now let's continue with this enrollment token. I'm going to copy it from node 1, go to node 2, and use the elasticsearch-reconfigure-node command, passing in the enrollment token. Paste it like this... yes... okay, that should be good. Now we go to the /etc/elasticsearch directory, open up the yml file, and go all the way to the bottom. What you see here is that the discovery.seed_hosts field has been populated with the IP address of node 1; this was done by the reconfigure-node command. It means that when node 2 starts up, the first place it's going to look is node 1, for information on how to connect to the cluster. The other thing the reconfigure command did was uncomment this line here for transport.host. So, to summarize what elasticsearch-reconfigure-node did: it populated the discovery.seed_hosts field, it uncommented the transport.host field, and it probably generated new self-signed certificates that will work with the cluster.

Now it's up to us to manually configure the remaining fields in this file, and there are only a few. The first is the cluster name, which has to be exactly the same as the actual cluster name, so let's go find it: I'll hit our API endpoint at _cluster/health, and there it is, so I'll copy the cluster name and paste it in here. Next let's set the node name; remember, it has to be unique within the cluster, so I'll call mine node-2. Coming down, let's set the network host to node2.evermite.net, and let's set the port. Let's see what else... okay, the SSL certificates. We should not edit the SSL certificates for the transport section, because those will use Elasticsearch's self-signed ones, but for the REST API we will use our own Let's Encrypt certificates, so I'm going to upload them here. Let me make a new directory, there we go, and let me find my local machine; give me a second, here it is. I have all the SSL certificates made ahead of time on my local machine, so let's take a look: node2.evermite.net, right, those are the certificates for node 2, and here are the certificates for node 3. I'm just going to scp node 2's certificates onto the node 2 server, into certs/node2.evermite.net. There we go. Let's make sure they're all here... oh, I have to change the ownership of these files, so let's chown them to elasticsearch. Now let's check again: great, all our certs are here and the permissions look okay as well. Now let's update the paths to our Let's Encrypt certificates so we can use them for the REST API: the key is certs/node2.evermite.net/privkey1.pem, and the certificate is the same path but with fullchain1.pem. I think that should be it; the resulting file is sketched below.
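A sketch of how node 2's elasticsearch.yml ends up after the reconfigure command plus the manual edits (the seed-host IP mirrors node 1's address from this walkthrough, and the transport.host value shown is the commented-out default that the tool uncomments; treat both as assumptions to verify on your system):

```yaml
# /etc/elasticsearch/elasticsearch.yml on node 2 (sketch)
cluster.name: es-demo
node.name: node-2
network.host: node2.evermite.net
http.port: 9200

# REST API: our own Let's Encrypt certificates
xpack.security.http.ssl:
  enabled: true
  key: certs/node2.evermite.net/privkey1.pem
  certificate: certs/node2.evermite.net/fullchain1.pem

# Transport SSL settings are left as generated (self-signed certificates)

# Written/uncommented by elasticsearch-reconfigure-node:
transport.host: 0.0.0.0
discovery.seed_hosts: ["172.105.3.82:9300"]   # node 1's IP
```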
Oh, I forgot to mention one more thing. On node 1 we're using the self-signed certificates for the HTTP SSL, but here on node 2 we're using the publicly signed Let's Encrypt certificates for the HTTP SSL. The reason we can use the Let's Encrypt certificates on node 2 is that I don't plan on using the command line tools on node 2 anymore, so I might as well use the publicly signed certificates here. And even if I did need the command line tools on node 2, I could always swap the Let's Encrypt certificates out for the self-signed ones, restart Elasticsearch, use the tools, and then swap things back when I'm done.

I think that's it for this file, so we can actually try to start node 2 and see if it connects to the cluster: systemctl enable elasticsearch, then systemctl start elasticsearch. This might take a moment, so I'm going to pause. Okay, it's been about 15 seconds; I'm going to run systemctl status elasticsearch. Great, this looks good. Let's go to node 1 and ping the API. Great, there are two nodes in our cluster, so that's a good sign. Let's get a list of nodes with _cat/nodes. Right, there are two nodes, but I misspelled node 2 as node 1, so let's fix that: I'm on node 2, I'll go to the node name right over here, change it to 2, and restart. Let's give it a moment... okay, node 2 should have restarted; let's go back to the node 1 window and get the list again. Great, I fixed the typo for node 2.

Up until now we've been hitting the node 1 API endpoint, and in theory we should be able to hit the node 2 API endpoint and get exactly the same details. Plus, the node 2 server is using the publicly signed Let's Encrypt SSL certificate, so when we hit the node 2 API endpoint we shouldn't have to add the -k flag to our curl statement, since the -k flag skips certificate verification. So let's try hitting the node 2 API endpoint with full SSL verification and see if we get the same details. Let me pull up the old command, remove the -k flag, and change the host to node 2. Great, these look like the same results, and if I take this command and run it from node 2 itself, it returns the same result as well. The check is sketched below.
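The verification check just described, as a sketch (no -k flag, because node 2 serves a publicly trusted certificate):

```bash
# Full certificate verification succeeds against node 2's
# Let's Encrypt certificate
curl -u elastic:abcd1234 "https://node2.evermite.net:9200/_cat/nodes?v"
```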
Now that node 2 is up, let's summarize what we did to make it part of the cluster. First, on node 1, we ran elasticsearch-create-enrollment-token -s node, which gave us a token. We took that token over to node 2 and ran elasticsearch-reconfigure-node --enrollment-token with the token; this command edited our elasticsearch.yml file and generated some self-signed certificates that allow node 2 to connect to the cluster. Next we opened the elasticsearch.yml file and edited the cluster name, node name, network host, and network port, and we also edited the xpack.security.http.ssl fields, though the HTTP SSL part was optional: we could have kept the self-signed certificates there if we wanted to. We then reviewed the discovery.seed_hosts and transport.host fields. We noticed that the reconfigure-node command had populated discovery.seed_hosts with some hosts from the cluster, in our case just the IP address of node 1, and that it had uncommented the transport.host field.

In the next part of this video: I've just finished repeating these exact same steps for nodes 3, 4, and 5, and they are all using the Let's Encrypt certificates. What I want to do next is go to node 1 and swap out the self-signed certificates with the Let's Encrypt certificates, so let's go ahead and do that. This is the SSH window for node 1; you can see node 1 is part of a five-node cluster and node 1 is the master. I'm going to go to node 1's /etc/elasticsearch directory and open up the yml file. I think the only thing I need to change is the path to the HTTP certificates, so I'm going to type in the key path (I still have to upload the private key, so let me type the path first), copy the line, paste it, and point the certificate at fullchain1. Okay, let me make the directory certs/node1.evermite.net, go to my local machine, and copy everything from my node 1 directory up to node 1. All right, everything should be there, and let me change the ownership to make sure elasticsearch owns all these files. Oh, typing mistake: not chmod, it should be chown. Let's check the permissions, and if everything looks good we just restart node 1. Yep, this looks good, so now we restart.

Okay, it took about 30 seconds, so let's take a look at the status. This looks good, no errors. Let's ping an API endpoint for a list of nodes; I'll just search through my history. There we go: node5.evermite.net:9200/_cat/nodes. Oh, it looks like node 1 is missing from the cluster. Let's try again, but pinging the node 1 API endpoint instead... okay... oh, a security exception. So node 1 definitely couldn't join the cluster. Let's look at the /var/log/elasticsearch/es-demo.log file. There we go, I see the error: master not discovered or elected yet. The issue is that the elasticsearch.yml file for node 1 is missing the discovery.seed_hosts field. If you recall, we used elasticsearch-reconfigure-node with an enrollment token made on node 1 to set the discovery.seed_hosts field on nodes 2 through 5; this is how nodes 2 through 5 know which host to talk to for information on joining the cluster. However, we never set discovery.seed_hosts on node 1, because we never ran elasticsearch-reconfigure-node with an enrollment token on node 1; node 1 was the first-ever node of our cluster and didn't need to connect to any other node. The diagnosis steps are sketched below.
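A sketch of that diagnosis (the log file is named after the cluster, es-demo in this walkthrough):

```bash
# Node 1 no longer appears in the node list served by another node
curl -u elastic:abcd1234 "https://node5.evermite.net:9200/_cat/nodes?v"

# The cluster log on node 1 shows why it cannot rejoin
sudo tail -n 50 /var/log/elasticsearch/es-demo.log
# ... "master not discovered or elected yet" ...
```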
But when we restarted node 1 just now, node 1 left the cluster, and after leaving it didn't know how to rejoin; it couldn't rejoin because it didn't have the discovery.seed_hosts field. To fix the problem, we just need to manually type in the discovery.seed_hosts field for node 1, so let's go ahead and do that.

Okay, I'm back in the yml file. This cluster.initial_master_nodes line: I could leave it here if I wanted to, but it doesn't do anything anymore, so I'll just delete it. Next I'll scroll up, and around here I'll type in discovery.seed_hosts. In terms of which hosts to list, I should list the most stable ones, so that node 1 will always find a host to connect through. If I list too many hosts, I think it just means I have a longer list to maintain; I can't really think of any other problem. It's also okay to put in hosts that are temporarily unavailable, because Elasticsearch will iterate through the array of seed hosts until it finds one that node 1 can connect to. So let's start filling this out. I'll copy and paste the IP from node 5 first, there we go, with port 9300, which is the default port; if I omitted the 9300, Elasticsearch would use it anyway. I might as well copy the other four hosts here too, since five seed hosts isn't that much really, so let me just fast-forward through this copy-and-paste; the result is sketched below. All right, that should be good, and I think now we can try to restart node 1. I'll give it a moment... okay, let's ping the API endpoint to see if node 1 is part of the cluster now: node1.evermite.net:9200/_cat/nodes. There we go, node 1 is now part of the cluster.

This reminds me: when we used the elasticsearch-reconfigure-node command on node 2, the command only added node 1 to node 2's discovery.seed_hosts field. It didn't add nodes 3, 4, or 5 to node 2's seed hosts, because at the time nodes 3, 4, and 5 didn't exist yet. So, just to be consistent, I'll go to nodes 2, 3, 4, and 5 and make sure they all have a seed-host list similar to the one we just made for node 1. I'm going to fast-forward through this, because it's really just copying and pasting IP addresses. Everything should be updated now.
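A sketch of the seed-host list added on node 1 and mirrored on the other nodes (only node 1's IP is known from the walkthrough; the others are placeholders, and :9300 could be omitted since it is the default transport port):

```yaml
# /etc/elasticsearch/elasticsearch.yml (sketch; repeat on every node)
discovery.seed_hosts:
  - "172.105.3.82:9300"   # node 1
  - "<node-2-ip>:9300"
  - "<node-3-ip>:9300"
  - "<node-4-ip>:9300"
  - "<node-5-ip>:9300"
```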
I pulled up the documentation on discovery, and it goes through everything we discussed about seed hosts in more detail, so if you get a chance you should definitely read it. The other section you should read is the one on quorum-based decision making, which talks about how a master node is selected within your cluster. The one thing you should definitely pay attention to is this sentence I have highlighted, and I'll read it out: to be sure that the cluster remains available, you must not stop half or more of the nodes in the voting configuration at the same time. I want to take a few minutes to demonstrate this right now.

You can see that node 2 is the master node of my five-node cluster. Let's say I stop node 1 and node 5 at the same time. That's removing two nodes out of five, which is less than half of the nodes in the cluster, so the cluster will still be up with nodes 3, 4, and 2. Next, if I stop node 3, that's removing one node out of three active nodes, still less than half, so the cluster will still be up with just nodes 4 and 2. Next, if I stop node 4, that's removing one node out of two active nodes, which is half of the cluster, so the cluster will be down. If I type systemctl status elasticsearch on node 2, systemctl will say that Elasticsearch is up on node 2, but there will be a lot of errors in node 2's logs saying it is not fully functional.

There is also an interesting consequence to all this. Let's say I also systemctl stop elasticsearch on node 2, and then systemctl start elasticsearch on nodes 1, 5, and 3. Although systemctl will report that the Elasticsearch service is running on 1, 5, and 3, you will find that the cluster is still down and that 1, 5, and 3 are not functional. This is because node 2 was the last known master node, meaning node 2 was the last node to have the most recent information about the cluster, and the Elasticsearch software will not let other nodes like 1, 5, and 3, which have stale data, go rogue and cause a split-brain effect. Node 2 must come back online first before 1, 5, and 3 can synchronize with it and be fully functional.

So let's demonstrate all this; the stop/start sequence is sketched below. Let me shut down node 1 first, and then shut down node 5. That's stopping two out of five nodes, less than half of the cluster, so the cluster should still be up. Let me ping node 2: great, the cluster is still up. Let's go to node 3 and stop it here; that's stopping one, hold on, systemctl stop elasticsearch. So now I'm stopping one out of three, still less than half, so the cluster should still be up. Great, it's still up. Now let me open up node 4, here it is, and stop node 4. One out of two is half, so the cluster should be down. Let me press enter: great, I'm not getting any response, which is what I expect, because I expect the cluster to be down after node 4 goes down. Next, let's test Elasticsearch's safeguards against the split-brain effect. I'm going to shut down node 2, and with node 2 being the last known master, I shouldn't be able to start up a cluster with 1, 3, and 5. So let me start up node 1, then turn on the Elasticsearch service for node 3, and then for node 5.
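A compressed sketch of that experiment (each command runs on the node named in the comment):

```bash
# Five-node cluster, node 2 is master. Stopping fewer than half of the
# voting nodes leaves the cluster available.
sudo systemctl stop elasticsearch    # on node 1
sudo systemctl stop elasticsearch    # on node 5
sudo systemctl stop elasticsearch    # on node 3

# Stopping node 4 removes one of the two remaining voters (half): down.
sudo systemctl stop elasticsearch    # on node 4

# With node 2 (the last known master) also stopped, restarting 1, 3,
# and 5 does NOT re-form the cluster; they wait for node 2 to avoid
# a split-brain scenario.
sudo systemctl stop elasticsearch    # on node 2
sudo systemctl start elasticsearch   # on nodes 1, 3, and 5 (cluster still down)
sudo systemctl start elasticsearch   # on node 2 (cluster re-forms)
```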
Okay, the Elasticsearch service should now be up on 1, 3, and 5, but the cluster should still be down. Let me ping the API endpoint; I'm going to ping node 1. Great, I'm seeing an error, so node 1 is not functional and the cluster is down. We need to bring node 2, the last known master, back online before the entire cluster can come up. So I'll start node 2, and now let's take a look at the API. Great, the cluster is up with all four nodes. Let's go ahead and bring node 4 back as well, and now we have our full cluster again with all five nodes. So we've just convinced ourselves that Elasticsearch can protect against the split-brain effect: the last known master node must be online before any of the other nodes will form a cluster.

Next, let's install Kibana so that we have an interface for interacting with our Elasticsearch cluster. I pulled up the installation instructions here, and it's pretty much like Elasticsearch: get the public signing key, a dependency, a repository definition, and then finally install Kibana. This is a new instance of Ubuntu 20.04 that we will install Kibana on, but first let me update the distribution with dist-upgrade and install the typical things I need, like vim, curl, gnupg, and gpg. While this is installing, I'm going to go to the Elasticsearch node, make a copy of the Elasticsearch installation script, and edit it to install Kibana instead; the installations are pretty much the same, and the only difference is that I need to change elasticsearch to kibana. Okay, I'm going to scp this install-kibana script over to the Kibana server, and we'll run it as soon as the OS updates are done there. The updates are done, so let's run the install script, and I'll pause until it's done.

Okay, the installation is done. Let's go to the /etc/kibana directory, where you'll see the yml files that come with the installation, and open up kibana.yml. We can get Kibana up and running by editing some of the configuration details here; there are only a few fields we need to edit, so I'll explain each one, and they are sketched below. The first field is server.port; we'll use the default 5601, but you should consider using something non-default for more security. For server.host I'll use a non-loopback address, 0.0.0.0, and the public base URL will just be the URL we're going to connect to from our browser. Next let's edit elasticsearch.hosts, where I'm going to list each node's URL so that Kibana can connect to any of them: node1.evermite.net on port 9200, and also nodes 2, 3, 4, and 5.
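A sketch of those kibana.yml fields (the kibana.evermite.net base URL is an assumption based on the domain used later in the video; SSL and credentials get filled in next):

```yaml
# /etc/kibana/kibana.yml (sketch; auth and SSL settings come later)
server.port: 5601
server.host: "0.0.0.0"
server.publicBaseUrl: "https://kibana.evermite.net:5601"
elasticsearch.hosts:
  - "https://node1.evermite.net:9200"
  - "https://node2.evermite.net:9200"
  - "https://node3.evermite.net:9200"
  - "https://node4.evermite.net:9200"
  - "https://node5.evermite.net:9200"
```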
You only need to list a few stable nodes here; you don't actually have to list them all. Next we need to provide credentials for Kibana to connect to the cluster. We can do this with either a username and password or a service account token, and I prefer the service account token. I'm going to pull up the documentation: you can see that we can ping the Elasticsearch REST API to generate a service token, and all we need to do is specify a namespace and a service. If I click on the service accounts, I can see the list of available services; the one we want is the elastic namespace and the kibana service.

Let's try setting this. I could paste the token right into this kibana.yml file, but I don't like leaving secrets in plain text: if hackers gained access to the server, they would be able to read all the contents of this file. What I'm going to do instead is store the secret encrypted in a keystore and only load the token on startup. I'm going to go to the /usr/share/kibana/bin directory, which holds the command line tools that come with Kibana, and use kibana-keystore add, pasting in the field we want to set.

Now I'm going to go to node 1 to generate the actual token. Actually, it just occurred to me that making the service token is a REST API call, which means I could have run the curl statement from any machine; it doesn't have to be node 1. But since I'm already on node 1, I might as well continue from here: curl -u elastic:abcd1234, and this needs to be a POST, so -X POST, then the URL to node 1 on 9200, and now let's grab the endpoint, which is this, and paste it here. Now let's give our token a name. I'll just call it kibana_token, but it can be anything you want; the service is kibana and the namespace is elastic. Enter, and there's our token. So I'm going to copy it, go back to my terminal window for Kibana, and paste it there, and now this value should be part of the keystore. Let me go back to my kibana directory, and in here you can see the kibana.keystore file; if I open it up, our service account token is actually encrypted in there. Next I have to change the ownership of these files to kibana, just to make sure Kibana can read them, and I think that should be it. Let's go back to the yml file. We're done with the service account token, and the whole token workflow is sketched below.
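A sketch of that workflow (the _security service-token endpoint is Elastic's documented API; kibana_token is just the name chosen here):

```bash
# On any machine that can reach the cluster: create a service token
# for the elastic/kibana service account
curl -u elastic:abcd1234 -X POST \
  "https://node1.evermite.net:9200/_security/service/elastic/kibana/credential/token/kibana_token"

# On the Kibana server: store the returned token encrypted in the keystore
cd /usr/share/kibana/bin
sudo ./kibana-keystore add elasticsearch.serviceAccountToken
# (paste the token value when prompted)
```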
Moving on: for the elasticsearch.ssl verification mode, we will set it to full. We can do this because, for the HTTP protocol on the Elasticsearch nodes, we decided to use publicly signed certificates from Let's Encrypt for the encryption. If you are running into SSL issues, then as a temporary course of action you can set the verification mode to none, but you probably don't want that as a long-term solution. If you are using Elasticsearch's self-signed certificates, you might need to copy those certificates from the Elasticsearch server onto your Kibana server and reference them with these fields here; I haven't done this often, so if you run into issues, let me know and I'll see if I can post a follow-up video around it.

I think we can now actually start up Kibana, so let's try things out: daemon-reload, enable, and start. This could take a moment... it's been about 30 seconds, and the website is giving an error, some SSL error, and I know the reason: if I go to my kibana.yml file and scroll up here, I say https for my URL, but I actually forgot to upload Kibana's Let's Encrypt certificates. So I'm going to do that now. Let's make a directory on the Kibana server, I'll call it the certs directory, and inside the certs directory will be the kibana.evermite.net directory. Then I'm going to go to my local machine, where I already have the Let's Encrypt certificates, and secure-copy them onto the Kibana server at that directory. Whoops, I made a mistake; it has to be one more directory down. Okay, let's go back to the Kibana server and take a look. Great, everything is here, but I have to change the ownership; the files have to be owned by the kibana user. Let's take a look again: this looks good.

Now let's continue editing this file. We need to type the full absolute path to the certificate; unlike on the Elasticsearch server, this can't be a relative path. So the certificate (server.ssl.certificate) is the path to fullchain1.pem, and the key (server.ssl.key) is the path to privkey1.pem. Now let's try again: restart Kibana, and take a look at the status to see if any issues come up. It'll take a while, so I'll wait... okay, it's been 30 seconds, let's try this now. Great, it loaded, so let me try to log in with the elastic superuser; that was elastic with abcd1234. Good, this looks like it's working.

Let's just do a quick test. I'll go to Dev Tools and try creating some random index, which I'll call my-test-index, and then insert a hello-world document with a content field of "hello world" (whoops, let me just highlight this); the Dev Tools commands are sketched below. Now let's see if that index is split across a few Elasticsearch nodes. I'll do _cat/shards for my-test-index: great, this index is spread across the cluster on nodes 1 and 2. So let's try another experiment: I want to see what happens if I take out node 1, since we know the index is partially on node 1. Stop elasticsearch, go back to Kibana, and check the index again: it's only on node 2 now. I waited a few seconds, so let's check again: now you see that the index is on nodes 2 and 3. So Elasticsearch was able to replicate, or redistribute, the index across a few more nodes in the cluster to improve fault tolerance.
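Roughly what that Dev Tools test looked like, in Kibana Console syntax (the index name and document are throwaway examples):

```
PUT my-test-index

POST my-test-index/_doc
{
  "content": "hello world"
}

GET _cat/shards/my-test-index?v
```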
And that's pretty much it for setting up a cluster with Kibana. Let's summarize all the edits we made.

For node 1, we opened the default elasticsearch.yml file and edited the cluster name, node name, network host, HTTP port, and cluster.initial_master_nodes. Once we actually got our cluster up, we went back into the elasticsearch.yml file and updated the seed hosts, and we also swapped out the self-signed certificates for our own Let's Encrypt certificates. By the time you watch this video, Elastic may have changed the default elasticsearch.yml file; if that's the case, you may need to make a few more edits, so be careful.

For nodes 2, 3, 4, and 5, we followed a different process for the nodes to join the cluster created by node 1. On node 1 we generated an enrollment token, and on nodes 2, 3, 4, and 5 we accepted the enrollment token with the elasticsearch-reconfigure-node command. Then on nodes 2, 3, 4, and 5 we edited the cluster name, node name, network host, and HTTP port, and we also used our own Let's Encrypt SSL certificates. We made sure that the discovery seed hosts and transport host looked correct, and once the whole cluster was up, we revisited the elasticsearch.yml file and took another look at the discovery.seed_hosts field to make sure it had all the hosts we wanted.

Once the whole Elasticsearch cluster was up, we installed Kibana on a different server and opened up the kibana.yml file. We made edits to the server port, server host, server public base URL, and the server SSL; for the server SSL we used Let's Encrypt certificates. We also enabled the elasticsearch.ssl verification mode, and we added several nodes to our elasticsearch.hosts field. I should also mention that we did not have to wait until all the nodes were part of the cluster before setting up Kibana; we really could have set up Kibana immediately after node 1 was set up, and in that case, all we would have needed in the elasticsearch.hosts field is the address of node 1.

And that's it for this tutorial. If you found it useful, give us a like, and if you want to stay on top of videos as we release them, subscribe to our channel. We hope to see you in the next video.
Info
Channel: Evermight Tech
Views: 18,175
Id: TfhcJXdNSdI
Length: 57min 13sec (3433 seconds)
Published: Wed Mar 01 2023