Redis: How to setup a cluster - for beginners


Captions
What is up people, and welcome to another video. In this video we're going to be taking a look at Redis high availability and replication. High availability for Redis can seem very complicated and daunting, and it is very important to understand the type of data you're storing and also to reconsider whether you actually even need high availability. In this video I'm planning to lay the foundation of high availability and replication, to give you a basic understanding of how to get replication going; then you can tweak that to suit your needs. So without further ado, let's go.

If you're new to Redis, I've made a video on the basics of Redis: how to get it up and running, how to configure it, persistence, and how to write an application that reads and writes data to and from Redis. Check out the links down below and be sure to check that video out.

Now, if you're new to this channel, everything I do is on GitHub. If you go over to the docker-development-youtube-series GitHub repo, I've made a folder called storage, and everything I do in the Redis series is under a folder called redis. I have a readme file in there that shows you how to get Redis up and running, which is basically the first video I've done. In this video we're going to be taking a look at replication and clustering, which is in the clustering folder. I've created a readme down there with all the steps we're going to be taking a look at today, so be sure to check out the links down below to the source code so you can follow along.

Now, the first thing in my readme is replication, so I have a link to the Redis replication document over here. If we take a look at the documentation, at its base replication excludes high availability: Redis simply allows you to run instances in a master-slave replication setup. Replication has three main mechanisms. To summarize them: the master is configured to replicate its data to the replicas, and the replicas are configured to read the data from the master and ensure that they remain in sync. Now, persistence is very important in replication, because if the master dies and loses its data, it will start up with an empty data set and replicate that empty data set down to the replicas, which will wipe all your data. So it's very important that we configure persistence in this demo.

The first important part of the replication document is authentication. It is very important to run Redis with a strong password, because Redis by default is not configured to use a password, so anyone can connect to it. As part of replication we also have to configure a strong password on each of the replicas, and configure each of the replicas with a master password. To showcase this example we're going to run three Redis instances: redis-0, redis-1 and redis-2. In my readme file I've shown the differences between the configuration files for each one of them: this is redis-0's configuration, this is redis-1's, and this is redis-2's. Now try to spot the difference. The setup for replication is very, very simple: the only difference between them is that the replicas have the slaveof key. This tells Redis where to look for the master node, so you can see redis-1 is a slave of redis-0, and redis-2 is a slave of redis-0.
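To make that concrete, here is a rough sketch of the lines that differ on the replicas. The hostnames match the container names used in this demo, 6379 is the default Redis port, the password is just a placeholder for whatever strong password you choose, and the file layout is assumed from the repo description; newer Redis versions also accept replicaof as a synonym for slaveof.

    # redis-1 and redis-2 config (the replicas)
    slaveof redis-0 6379                    # where to find the master
    requirepass <your-strong-password>      # password clients must use to connect to this instance
    masterauth <your-strong-password>       # password this replica uses to connect to the master

redis-0 carries the same requirepass and masterauth entries, just without the slaveof line.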
So in this example redis-0 is the master. The slaveof key that we see here is what makes Redis replication work: it tells Redis where the master is and sets up the replication. For authentication we have to set requirepass on each one of the instances. requirepass is basically the password that this Redis instance will run with; anyone who wants to connect to it needs to have this password. Now, if we take a look at redis-1 and redis-2, they also have the key masterauth. This is the password used to connect to the master, so it allows redis-1 and redis-2 to connect to redis-0. The reason I have the same masterauth on redis-0 is just in case: when we take a look at high availability, we're going to have a scenario where the master might switch to redis-1 or redis-2, and in that case we want to make sure that redis-0 can run as a replica and connect to the new master.

In my previous video on the basics of Redis I showcased how to run Redis with a custom configuration file. Since we're going to be running redis-0, redis-1 and redis-2, I've created separate config files for each of the nodes. The way I like to start my configuration is to go to the Redis documentation on configuration and grab the self-documented config for the version of Redis I'm running. In this example I took the config from Redis version 6, pasted it into my folder structure down here, and summarized the changes I've made over here. Each one of the instances will run this configuration. To show you an example, if I go into the redis-1 config, hit Ctrl+F and search for requirepass, we can see requirepass is set on this instance, and if we search for masterauth we can see the masterauth key is set in here as well. So this is enough to get replication going with three instances.

Now, I also mentioned that it's very important to set up persistence on each of the Redis nodes to make sure they keep their data across reboots. To do that we take a look at the Redis persistence documentation; in my previous video I covered the basics of it. It's important to understand that there are two modes of persistence in Redis. The first one is RDB mode, which is Redis taking a snapshot of its database and dumping it to the file system at certain intervals. This is really good for performance, because every time Redis dumps its data to a file it uses compute to do so: the more often you write to the file system, the less performance you're going to get, but also the higher durability you're going to get, so it's a bit of a trade-off. The other mode of persistence is AOF, which stands for append-only file. This means that every time Redis receives a write transaction, it appends that transaction to a file on disk. This gives you much higher durability, but at a cost to performance. So it's very important that you first understand your workloads and what type of persistence you really need. The persistence document lists the advantages and disadvantages of each mode, and the Redis community recommends that you use both types of persistence together. In this demo we're going to set up RDB persistence as well as append-only file persistence, and we're going to write both to a data volume.
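The knob for that RDB trade-off is the save directive, which controls how often snapshots are taken. The stock self-documented Redis 6 config ships with rules along these lines; the exact values in the repo's config may differ.

    # snapshot to disk if at least <changes> keys changed within <seconds>
    save 900 1        # every 15 minutes if at least 1 key changed
    save 300 10       # every 5 minutes if at least 10 keys changed
    save 60 10000     # every 60 seconds if at least 10000 keys changed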
To show you the configuration of that, you can hop into any one of the Redis config files and search for dump; you'll find a setting called dbfilename. dbfilename is what you want to call the RDB file, and in this example we're going to call it dump.rdb. That's all you need to enable persistence in RDB mode. If you want to enable AOF mode, all you need to do is come to this file, search for appendonly and turn it on, so appendonly yes, and then set the append-only file name you want to use. The other property that's also very important is dir: if you search for that, you'll come across the dir key, which is where Redis should write both of those files. I'm going to mount a volume into this container, which will allow me to persist the data folder inside Redis on the host. So to summarize the configuration for persistence: we have a directory we want to write the data to, we have a dbfilename for the RDB file, we have append-only mode enabled, and we've also set a file name for that. Redis is going to write the dump.rdb and the appendonly.aof into the /data folder, which we're going to persist as a Docker volume.
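Put together, the persistence section of each node's config comes down to something like this sketch; the file and folder names are the ones mentioned above.

    dir /data                          # where Redis writes its persistence files
    dbfilename dump.rdb                # RDB snapshot file
    appendonly yes                     # enable AOF persistence
    appendfilename "appendonly.aof"    # AOF file, written into the same /data folder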
We then have a summary of the Redis configuration: redis-0 is going to be our master, redis-1 and redis-2 are going to be replicas, and you can see we've specified slaveof so that redis-1 becomes a slave of redis-0 and redis-2 becomes a slave of redis-0: one master, two replicas. So what I want to do is grab that slaveof redis-0 value and go to my redis-1 config. You'll see that a replicaof entry has been added automatically, because I ran this cluster before, so make sure you wipe any of those existing values and set it again to slaveof redis-0. We're updating redis-1 to be a slave of redis-0, and then I'm going to go into the redis-2 config and make sure it's updated as well. That makes sure that when our Redis nodes start up, our master is redis-0 and our slaves are redis-1 and redis-2.

Now, it's very important to know before you run these containers that they have to run on the same network. So whether you're running in the cloud or on Kubernetes, make sure these containers can actually talk to one another. Because we're running in Docker, I'm going to create a network called redis, so I'm going to say docker network create redis, and then I'm going to start up each one of these containers on the redis network. Because my configuration files are in the storage/redis/clustering folder, I change directory into that folder, and when I do an ls you can see we have our folders with the configuration separated out. I want to run three containers, so I'm going to say docker run and call them redis-0, redis-1 and redis-2. I'm going to start them up on the redis network and mount in each config folder respectively: redis-0's config goes into the redis-0 container, and the same for 1 and 2. Finally, in the entry point I say I want Redis 6.0 and I want to run redis-server, passing in the configuration file that I've mounted. It's very easy to run all of this: I grab all of these commands and paste them into the terminal, and that starts up three Redis containers. I can then do docker ps, and we can see we have three Redis containers up and running.

Now, in a previous video I showcased how to write an application that reads and writes data to Redis, so to test our replication we're going to run this example application and see if the data gets replicated. It's very simple: in a new terminal I change directory into the client application folder under storage/redis. You can see on the left side here I have a small Golang client application in there. Once I've changed folders, I can say docker build -t and tag this image as redis-client. I paste that, and it builds a container image that we can run to access Redis. To run that application I say docker run -it, I run it on the redis network, and I pass in some connection details so the app knows what to connect to: I set the Redis host to redis-0, since redis-0 is our master, I set the port, and I pass in my super strong password. I copy this and paste it into the terminal, and now we have the application up and running. If I go to the browser, you can see we have an application that has stored a counter in Redis; every time I refresh, it increments that counter and writes it back to Redis. So let's increment it to 20, and now what we want to do is go into Redis and make sure the data has been replicated.
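Before checking the replicas, here is a consolidated sketch of the commands from this part. The config folder names and mount path, the redis:6.0 image tag and the client's environment variable names are assumptions based on the description above, so check the repo for the exact versions; the password is a placeholder.

    docker network create redis

    # one container per node, each with its own config folder mounted in
    docker run -d --name redis-0 --net redis -v ${PWD}/redis-0:/etc/redis/ redis:6.0 redis-server /etc/redis/redis.conf
    docker run -d --name redis-1 --net redis -v ${PWD}/redis-1:/etc/redis/ redis:6.0 redis-server /etc/redis/redis.conf
    docker run -d --name redis-2 --net redis -v ${PWD}/redis-2:/etc/redis/ redis:6.0 redis-server /etc/redis/redis.conf
    docker ps    # expect three redis containers running

    # build and run the example Go client, pointing it at the master
    docker build -t redis-client .
    docker run -it --net redis -e REDIS_HOST=redis-0 -e REDIS_PORT=6379 -e REDIS_PASSWORD="<your-strong-password>" redis-client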
To test the replication, I'm going to open up another terminal and go into one of the Redis nodes; in this example, let's go into redis-2. So I'm going to say docker exec -it redis-2, and once I'm in, I run the redis-cli. Then I authenticate, so I say auth and pass in my password, and that's OK. Then, to list out the keys, I type keys and a pattern; I can just say * and that will list out the keys. We can see we have a key called counter stored in here, so the data has now been replicated to our third instance of Redis: our application wrote it to redis-0, and we picked up the key on redis-2. That is the basic concept of replication, and it's very easy to set up in Redis.

But what if the master fails? Having replication enabled doesn't mean you have high availability; it only means that you're able to replicate the data among multiple Redis instances. If the master dies, we actually want to be able to promote one of the other replicas to become the master, and that is how we achieve high availability. In Redis that feature is called Redis Sentinel. If we take a look at the Redis Sentinel documentation, the Sentinel is what provides high availability for Redis: if the master dies, the Sentinel's job is to elect a new master. It has a couple of features. It has monitoring built in, so it constantly checks the masters and replicas. It has notifications, so it can notify your system admins if there's an issue. And it has automatic failover, so it detects when a master is down and then fails over to another replica. It's also very important to know that you should run a minimum of three instances of the Sentinel, not just two, because for failover to work there has to be a majority vote among the Sentinels to elect a new master. In this example we're going to run three Sentinel services and test the automatic failover. I'm going to leave this document with you, as it has several examples of different failure scenarios and how the Sentinel manages them.

Now, configuring the Sentinel is very straightforward; I have a basic configuration that I'm going to showcase today. We say we want to run the Sentinel service on port 5000. We then tell it which master to monitor: we say sentinel monitor, we give our master a name, and we say we want to monitor redis-0, so our Sentinels start off by looking at the current master. We also say what the quorum value should be and the port to monitor that master on. Then we can configure timeout values, so we can say how long the Sentinel should wait for the master before initiating failover: if the master is down for longer than 5000 milliseconds, it will start the failover process. We can also specify the failover timeout and parallel syncs, and the most important one is the authentication password to talk to the master node, so we have to say sentinel auth-pass, the name of the master, and the password. Just like I mounted the Redis configurations into those three Redis containers, I've also got sentinel-0, sentinel-1 and sentinel-2 folders specified over on the left here.
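A sketch of that base Sentinel config, using mymaster as the master name from the video. The quorum of 2, the failover-timeout and the parallel-syncs values are assumptions, since the exact numbers aren't spelled out here, and the password is a placeholder.

    port 5000
    sentinel monitor mymaster redis-0 6379 2           # master name, host, port, quorum
    sentinel down-after-milliseconds mymaster 5000     # master unreachable this long => start failover
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1
    sentinel auth-pass mymaster <your-strong-password>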
If you open up these folders, they contain separate configuration files for each Sentinel. If we take a look at one of the Sentinel configuration files, you'll see that there are a lot more values in here; that's because the Sentinel writes values to this file, and when changes happen in the environment, the Sentinel will update it. So to start from scratch, what I'm going to do is copy the values out of the base configuration, go to sentinel-0, remove everything from that file and paste our config in; I'm then going to repeat that for sentinel-1 and sentinel-2.

Now that our configuration is ready, we can go ahead and start up the Sentinel containers. We change directory into the clustering folder where our configs are located, and then I say docker run -d to run in background mode and start up sentinel-0, sentinel-1 and sentinel-2. You can see I'm mounting each one of the Sentinel configuration folders into each one of the containers and then starting it up in Sentinel mode by saying redis-sentinel and passing the configuration file. I'm going to copy all of that and paste it into the terminal, and that runs three more containers. If I do docker ps, we can now see we have our three Redis Sentinels, our three Redis instances and our example application.

Now, the first thing we need to do is check the health of the Sentinels, so we're going to say docker logs and look at the logs of sentinel-0. When we run this, we want to see the following information: we want to see +monitor mymaster, so we can see there's one master up and running (it has a plus sign), and we can see there are two slaves up and running. We can also see that there are two other Sentinel services in the cluster that it's communicating with. It's important to see this: if you do not see this type of output, it means there's a problem with the Sentinel service and your clustering is not going to work. The other thing we can do is go into the Sentinel service itself, so I say docker exec -it sentinel-0 and go inside the sentinel-0 container. Once inside, I can run the redis-cli on port 5000 and type info. We can see a lot of information here about our cluster, but the most important thing is to query the status of the master, so I say sentinel master mymaster, and that gives me some information about the master. The first thing is to make sure the num-other-sentinels value is correctly updated: you can see here we have two other Sentinels in the cluster, making up a cluster of three Sentinels. If you don't see this, there's a problem with your Sentinel setup and you need to review your configuration. We can also see that the master is up and connected.
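Roughly, that start-up and health check looks like the sketch below; as before, the mount path, image tag and config file name are assumptions based on the description, so check the repo for the exact commands.

    docker run -d --name sentinel-0 --net redis -v ${PWD}/sentinel-0:/etc/redis/ redis:6.0 redis-sentinel /etc/redis/sentinel.conf
    docker run -d --name sentinel-1 --net redis -v ${PWD}/sentinel-1:/etc/redis/ redis:6.0 redis-sentinel /etc/redis/sentinel.conf
    docker run -d --name sentinel-2 --net redis -v ${PWD}/sentinel-2:/etc/redis/ redis:6.0 redis-sentinel /etc/redis/sentinel.conf

    docker logs sentinel-0               # look for +monitor mymaster, two +slave lines and two +sentinel lines

    docker exec -it sentinel-0 sh        # then, inside the container:
    redis-cli -p 5000
    > info
    > sentinel master mymaster           # check that num-other-sentinels shows 2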
Now let's go ahead and test the failover process of our cluster. We know we already have the example application connected to redis-0, so what I'm going to do is delete the redis-0 container, and we should see a new master elected as well as the replication remaining intact. To do that I say docker rm -f and delete redis-0. So now our master is down. If I do docker logs on one of the Sentinels, we can confirm this: we can see there's been a config update and that it's tried to fail over. It's detected that the master is down, it's gone ahead and switched the master to a new instance, and it now reports new slaves, so the master that was redis-0 is now actually a slave. We can also see that if we go back to our example application and refresh it, it's down; this is because the application is configured to write to the old master, and that master is gone. Even if we bring that master back up, it's important to know that the application needs to talk to the new master, because the master handles the writes and the replicas are read-only. So to finish the failover test, I go to my example application and this time point it at the new master, which is now redis-1. I run my application pointing to the new master, and when we refresh the example application we can see that it continues from 20. The replication made sure the data was available on all the replicas, and the Sentinels made sure that when the master went down, a new master was elected and our application could connect to that new master and continue working.

So hopefully that helped you understand the basics of replication and how to replicate your data across multiple instances, and also the basics of Redis clustering with the Redis Sentinel service and how to ensure your Redis cluster remains highly available. If you liked this video, remember to like and subscribe, and stay tuned, because in a future video we're going to take everything we've learned and run it on top of Kubernetes. Let me know down in the comments what sort of videos you'd like me to cover in the future and how you manage Redis yourself. And as always, like and subscribe, and until next time: peace.
Info
Channel: That DevOps Guy
Views: 24,031
Keywords: devops, infrastructure, as, code, azure, aks, kubernetes, k8s, cloud, training, course, cloudnative, az, github, development, deployment, containers, docker, rabbitmq, messagequeues, messagebroker, messge, broker, queues, servicebus, aws, amazon, web, services, google, gcp, redis, cache, sentinels, cluster, ha, highavailability, replication
Id: GEg7s3i6Jak
Length: 18min 41sec (1121 seconds)
Published: Fri Aug 21 2020