Redis on Kubernetes for beginners

Video Statistics and Information

Captions
What's up people, welcome to another video. In a previous video we took a look at Redis: what it is, its configuration, replication and persistence, and how to write an example application that reads and writes data to Redis. In this video we're going to take everything we've learned and look at what it takes to run all of that on Kubernetes. So without further ado, let's go.

Now there are two main things that are important in this video. Number one is replication. We're going to deploy three instances of Redis. For replication, Redis has the concept of a master and replicas: we will always have one master and a minimum of two other replicas, and we also want to be able to scale these up. In a distributed system like Kubernetes, pods can come and go, so we need to ensure that these pods can find the current master: if a pod dies and comes back, it needs to be able to find and join the current master.

The second most important thing is high availability. When a master dies, the Sentinel's job is to promote another replica to master, so the Sentinel does the failover. We're going to be running Redis Sentinels to form a cluster out of the Redis pods; in my example, for learning purposes, I'll be deploying the Sentinels separately from my Redis replicas. The Sentinel's job is to form a cluster and do the automatic failover if the master dies. When a Sentinel starts, the first thing it needs to know is who the master is. Out of these three instances, any one of them can be the master, so we'll need to do something so that the Sentinel can discover who the master is, probably an init container. Once the Sentinel knows the master's address, it'll do everything else automatically: it'll contact the master, find the replicas, find the other Sentinels, and form a cluster. This will allow us to scale the Sentinels at will, but we'll need a minimum of three, else cluster failover will not work. Now there is one thing I want to mention about running Redis on Kubernetes
that's very important when it comes to high availability. Redis has a really good document on the topic of Sentinel, and there are some fundamental things you need to know about the Sentinel before deploying. One thing I want to stress is that there is no HA setup which is safe, which basically means you need to test your Redis instances to make sure they can tolerate disruption. If a pod dies and comes back, you want to make sure you don't have any disruption, and you want to make sure your applications can tolerate these disruptions. If you're totally new to Redis, please check out the links below: I've made a video on the basics of Redis, as well as on running Redis in high availability mode with the Sentinel. In that video I cover replication and the Sentinel in much greater detail. Here we're going to be looking at how to run Redis, with replication in mind, using the Sentinel service on top of Kubernetes.

If you're new to this channel, everything I do is on GitHub. If you take a look at my GitHub repo, I have a folder called storage, in that folder a redis folder, and under that a folder for kubernetes. Everything I do is going to be in the kubernetes folder, in a README, and these are all the steps we're going to be looking at today on how to run Redis on top of Kubernetes. So be sure to take a look at the link down below to the source code so you can follow along.

The first thing we're going to need is a Kubernetes cluster. I like to use kind, which basically allows me to create disposable Kubernetes clusters on top of Docker. In this demo we're going to say `kind create cluster`: we're going to create a cluster called redis, and we're going to use Kubernetes 1.18.4. This takes a couple of seconds to run and we're good to go. Now that I have a Kubernetes cluster up and running, I can say `kubectl get nodes` and we can see we have one node up and running. The next thing we're going to want to do is create a
namespace. A namespace allows us to group and hold all these resources together, so we're going to say `kubectl create namespace redis`. The third thing we're going to need is a storage class. If you run `kubectl get storageclass`, it will show you all the storage class options available for your cluster; this will be different depending on your cloud provider. I've made a separate video about persistent volumes, which is linked down below for you to check out: it covers persistent volumes, different types of storage for different cloud providers, as well as storage classes. A storage class allows us to define the type of storage we want to attach to Kubernetes, and then we can use a persistent volume to attach that storage to a container. In this video we're going to use the standard storage class, which is basically just a local volume that allows us to persist data on our host.

Now the next part of this demo, which is very important, is the configuration. If you take a look at the GitHub repo, I have a storage/redis folder, under that is the kubernetes guide, and inside the redis folder we're going to be taking a look at how to deploy a bunch of Redis pods and then have them replicate, authenticate, and form a cluster. The first thing is the configuration. We have a Redis ConfigMap: here you can see I have a ConfigMap called redis-config containing a redis.conf. Redis has great documentation around configuration, and they publish a config file with the defaults for every Redis version, so what I've done is grab the Redis 6.0 config and paste it in here as is.

The first thing we want to talk about is authentication. In order to authenticate, we need to put the password inside the configuration file. We have two pieces here: `requirepass`, which is the password used for any app or CLI to connect to this Redis pod, and then `masterauth`, which is basically the password for any of
the replicas to talk to the Redis master. Because we're running a bunch of pods on Kubernetes, the plan is to run pod zero as the master on startup, and remember: because we're going to be running a cluster with failover, at any given point in time any of the pods can become a master. That's why we keep the master password and the required password the same. This will allow replicas to talk to the current master, and it will allow the master to talk to the current replicas.

The next bit to talk about is Redis replication. In my high availability video I spoke about replication in much more detail, but basically what it boils down to is the `slaveof` key. This tells a Redis instance where the master address is. You can see I have it commented out; this is just to show you what it would look like, but remember, we're going to have to generate this value dynamically on the fly, and I'll show you how. When Redis starts up, we're going to make pod 0 the master; every other pod in our cluster will connect to pod zero, and then the Sentinels will take over, so that when cluster failover happens the Sentinel can promote any replica to a master. That's why it's very important not to hard-code this value.

The next piece I want to talk about is persistence. In order for Redis to persist its data, I've spoken about the storage class and the persistent volume. To enable persistence in Redis, there are two modes you have to know about: RDB mode and append-only file mode. I cover this in great detail in my HA video, but at a very high level, RDB mode is where Redis dumps its database file to disk at various intervals. This is good for performance, although it takes up a bunch of compute every time Redis dumps the file to disk, and it's not as durable: if Redis dies in between those intervals, there is a chance of data loss. Append-only file mode is a little bit better for durability, because it writes every
transaction to disk as it happens, which is good for durability but not so good for performance, so it's a trade-off. The Redis community recommends that you run both. So here's what I've done: if we search for "data", we can see I have a `dir` key, which tells Redis where to write its data. I've created a folder here called /data, and this is going to be where our persistent volume attaches to the container. The next piece is to turn on append-only mode: if you search for "appendonly", you'll see I've turned it on, so `appendonly yes`, and I've also specified the append-only file name to use, which is just going to be appendonly.aof. Then, to turn on RDB mode, you just tell Redis what DB file name you want to use, so I say `dbfilename dump.rdb`.

To deploy this, all I need to do is change directory to the storage/redis/kubernetes folder and then say `kubectl apply` in the redis namespace to deploy this ConfigMap. This gives us a basic ConfigMap that each of the Redis pods can use.

Now that we have our ConfigMap deployed, the next thing we want to do is deploy our StatefulSet. If we take a look at the kubernetes redis folder, we have a Redis StatefulSet YAML file over here. StatefulSets are very important for running stateful workloads like databases. The reason is, first of all, that we need to provide a stable DNS name for each of our Redis pods: we need redis-0, redis-1, redis-2 and so forth as we scale up. Each pod needs to be individually addressable, which is not something a Deployment does; StatefulSets give our pods a persistent network identity that persists across reboots. The other thing a StatefulSet gives us is the ability to mount separate volumes: it generates volumes on the fly and mounts them into each of the pods automatically. This, again, is something a Deployment does not do. If you're interested in StatefulSets, I've made a video about it; the link is down below, so be sure to
check that out.

So here we run a very simple StatefulSet called redis. We run three replicas, and you can see we're running a container called redis, running Redis 6.0, and we're starting Redis up with our custom configuration. We're also exposing the Redis port, and we're mounting our data volume into the container over here; this is the path where Redis will be writing its data. If you scroll down, you can see that we are using a volume claim template: this tells the StatefulSet to issue storage using the storage class from earlier, so every pod that is part of the StatefulSet will get its own persistent volume, automatically mounted into the container. We also have a headless service, type ClusterIP None, which gives every single pod a stable DNS record, so we'll have redis-0, redis-1, redis-2 and so forth as we scale up.

Now, one important thing to note here is that I'm creating another volume mount for the Redis ConfigMap. You can see redis-config: I'm mounting it to /etc/redis, but if you take a look at the volume, it's not really the ConfigMap, it's an emptyDir. This is really important, because Redis needs write access to its configuration: at any given time a Redis replica might be promoted to master, so it needs to write details to its configuration on the fly. For that reason it's not great to mount the ConfigMap directly, because if the pod gets recreated it will get the original ConfigMap back while the details have changed: it won't know how to connect to the current master, and it won't know how to find the Sentinels either. If the Redis pod dies and gets recreated, we don't want it to get the default config again; we want its configuration file generated on the fly. To achieve that, what I have is an init container
called config. This is a container that starts up before all the main containers in the pod, and its job is to generate the config on the fly. You can see I have a volume mount here: I mount the ConfigMap into a temporary location, and then I copy that temporary Redis config into the real redis.conf location. This container will attempt to contact the Sentinel to figure out who the master is. If the Sentinels are not running, it will default to redis-0; and if this container is redis-0, it will do nothing, because it is the master. We achieve this with a simple bash script: we run `redis-cli` against the Sentinel and try to ping it. If that fails, we say "master not found" and default to redis-0: we do a quick check to see whether we are redis-0, and if we are, we simply do nothing, because we are the master; if we're not redis-0, we update our `slaveof` value to point to redis-0 by default. If the Sentinel is found, we run `redis-cli` to connect to the Sentinel, call `get-master-addr-by-name` to get the IP address of the current master, and then update the `slaveof` key in our config to point to that master. So you can see we're heavily reliant on the Sentinel to do the cluster failover and the leader election; we just contact it and point to the master. Remember that at any given time a pod can fail and restart, a node can fail, and pods can be moved to different nodes: you have to make sure that you test all of these different types of disruption.

I've also taken a look around, and there are a ton of different community and Helm charts out there to deploy Redis, but make sure you don't fall into the trap of just blindly deploying them. You want to make sure that you understand how the replication works under the hood. You can see in the demo I've just showcased that I've basically taken the approach of learning exactly how the Sentinels communicate, how the master leader election works, and how pods can come back to
the cluster and join it in the event of a failover. So make sure you really go over your requirements and understand what you need from Redis, and then use this video as a foundation for understanding how the Redis Sentinel works, how replication works, and all the different failure points that can occur.

To deploy the StatefulSet, I'm going to say `kubectl apply` in the redis namespace and apply that statefulset file. That goes ahead and creates the StatefulSet as well as the service. I can then say `kubectl get pods`, and I can see we have a pod up and running. This Redis pod is figuring out who the master is; it will realise that it is the master, because there is no Sentinel up and running, and it will leave its configuration without a `slaveof` entry. Then you can see redis-1 and redis-2 are also coming up. From a storage perspective, we can say `kubectl get pv` and see that three persistent volumes have automatically been created by the storage class and mounted to the StatefulSet pods. Now you want to make sure that each instance is healthy, so to verify that, we're going to take a look at the logs. We say `kubectl logs` on redis-0 and we can see that it's starting replication synchronization, so this one is the master. Similarly, I can run `kubectl logs` on redis-1 and see that the master-replica sync has started and that it has connected to its master successfully. The same thing goes for redis-2.
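To make the init container's decision logic described above concrete, here is a minimal sketch in shell. This is an illustration, not the repo's actual script: the Sentinel lookup (normally a `redis-cli sentinel get-master-addr-by-name` call) is stubbed out as a function argument so the fallback path can be shown without a running cluster, and the DNS name assumes a headless service called `redis` in the `redis` namespace.

```shell
#!/bin/sh
# Sketch of the redis init container's master-discovery logic.
# The default master name below is an assumption based on the video's
# naming (StatefulSet "redis", headless service "redis", namespace "redis").
DEFAULT_MASTER="redis-0.redis.redis.svc.cluster.local"

decide_replication() {
  pod="$1"               # this pod's hostname, e.g. redis-0
  sentinel_master="$2"   # master reported by the sentinel; empty if unreachable
  if [ -n "$sentinel_master" ]; then
    # Sentinel answered: follow whoever it says is the current master.
    echo "slaveof $sentinel_master 6379"
  elif [ "$pod" = "redis-0" ]; then
    # No sentinel yet and we are pod 0: we are the default master,
    # so no slaveof line is written at all.
    echo ""
  else
    # No sentinel yet: every other pod defaults to pod 0 as master.
    echo "slaveof $DEFAULT_MASTER 6379"
  fi
}

# The emitted line would be appended to /etc/redis/redis.conf:
decide_replication "redis-1" ""
```

In the real init container the result is written into the emptyDir-backed config copy, which is exactly why the config cannot live in a read-only ConfigMap mount.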
We can see it's connected to its master and that synchronization has started. We can also test our replication by using `kubectl exec` to go into the redis-0 instance. Once we're in, we run `redis-cli`, authenticate with `auth` and our password, and then run the `info replication` command to see the replication info: the role is master and it has two connected slaves. This is a good way to test whether replication is working. So now we have one master and two replicas running in Kubernetes.

Now, what if one of the replicas died? It doesn't really matter, because the replica will find its master anyway, as I showed earlier. But what if the master dies? This is where the Redis Sentinel comes in. The job of the Sentinel is to form a cluster and to allow for cluster failover: if the master dies, the Sentinel will promote one of the replicas to master. To deploy the Sentinel, we take a look at the kubernetes folder, where I have a sentinel folder containing a Sentinel StatefulSet file. Here we have another StatefulSet, for three replicas of the Sentinel, and this time we're going to run a container called sentinel: it runs Redis 6 and starts it up as a Sentinel, passing in the Redis Sentinel configuration. We're exposing port 5000, which is the address the Sentinels use to talk to each other. Very similarly to our Redis setup, we also mount in a configuration volume, and you can see that volume is also an emptyDir: we're again going to use an init container to generate the configuration on the fly. You'll notice we don't have a ConfigMap defined here; that's because we're going to generate the entire config from the init container, since it's very simple to do. This is easy with the Redis Sentinel, because when the Sentinel starts up, all it needs to know is where is one
of the Redis instances. So we can loop through a list of DNS names for redis-0, redis-1 and redis-2, and the init container can contact any one of those instances. In the case of redis-1 or redis-2 being down, we just loop through the list until we find one that is up; then we contact that node, find out the master address, and point the Redis Sentinel at the master. That's all the Sentinel needs: once it connects to the master, it will find all the replicas that are part of the replication cluster, and it will find all the other Sentinel addresses as well; this is part of the Sentinel's built-in behaviour.

So let's take a look at our init container. We have a container called config, and we're running another Redis image so that we have access to `redis-cli`. We're running a shell script here that contains our Redis password (you could mount this from a Secret) and a list of nodes that we're going to loop through. We have to assume that some Redis instances might be down, so it's a good idea to take the entire list of instances and loop through them. What we do here is run `redis-cli info replication` against each instance and try to grep out the master host address. Once that is found, we update our Sentinel config with `sentinel monitor mymaster` and the master address we found; you can see I run a simple batch of commands to generate a Sentinel configuration file pointing at that master. Once the init container has created the Sentinel configuration, the Sentinel starts up and automatically goes and finds the master address, finds all the replicas, finds the other Sentinels, and forms a cluster.

Deploying it is very simple: we say `kubectl apply` in the redis namespace, applying the Sentinel StatefulSet YAML. That's going to create
the StatefulSet as well as a service to contact each of the StatefulSet pods. We can then say `kubectl get pods` and see that we have three Sentinel pods up and running. To make sure the clustering has worked, we take a look at the logs of one of the Sentinel pods: what we're interested in is seeing that it's monitoring a master. It found a master on that address, it also found two slave nodes, and it found two other Sentinels, so you can see the Sentinel is really good at finding all the replicas once it has located the master.

Now, as I said earlier, it's very important for us to test cluster failover, so I want to simulate a failover by deleting one of the pods. I say `kubectl get pods` in the redis namespace and we can see all our pods up and running. What I'm going to do is delete the current master: I say `kubectl delete pod` and delete pod 0. To see what the Sentinel has done, I get the logs of a Sentinel with `kubectl logs sentinel-0` and read down the logs to see what happened. If we take a closer look, we can see that it has done a switch-master: it switched to this IP address, it indicates that it can connect to all the slaves, and it has marked the old master as a slave that is down. If we run `kubectl get pods -o wide`, we can see the current IP addresses of all the Redis pods as well as the Sentinel pods, and if we compare those with the switch-master line in the logs, it has switched the master to this IP address: redis-2 is now the master. To verify that, I can exec into the redis-2 pod, run `redis-cli`, authenticate, and run `info replication`, and now I can see that it has taken up the role of master and has one connected slave. So when pod zero comes back up, it will become a slave and connect to this master. We can also see, if we go
back into redis-0 and run the same commands, that it has now taken the role of slave. You can also see, if I say `kubectl get pods -o wide`, the IP addresses of each of the pods, and if we take a look at the logs of the new redis-0 pod that came up, we can see that it went looking for its master: it found the Sentinel, it found the new master, and it has gone ahead and updated its configuration. And if we jump back into redis-2, which is the current master, and check its `info replication`, it has taken the role of master and now has two connected slaves. So redis-0, which was the master, went down; the Sentinels promoted redis-2 to be the current master; redis-0 came back, contacted the Sentinel, got the master address, and reconnected to the master, which is redis-2.

The next step is: what happens when one of the Sentinels dies? I'm going to go ahead and delete one of the Sentinel pods, sentinel-0. Now technically nothing should happen: the Sentinel should come back, loop through the addresses, contact each one of the nodes, ask who the master is, contact the master, find out who the replicas are, and connect to the other Sentinels. This should cause no disruption to our replication or our clustering. If I say `kubectl logs` on that Sentinel, we can see that it has detected who the current master is: it's connected to the IP address ending in .10, which is redis-2.
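Earlier, the Sentinel init container was described as generating its whole configuration once it had discovered the master. A minimal sketch of that generation step is below. The master address is hardcoded here, whereas the real init container discovers it via `redis-cli info replication`; the quorum of 2 (for three Sentinels) and the timeout values are assumptions for illustration, and port 5000 matches the port the video exposes.

```shell
#!/bin/sh
# Sketch of generating sentinel.conf once the master address is known.
# MASTER is hardcoded for illustration; the real init container loops
# over redis-0/1/2 and greps the master host out of `info replication`.
MASTER="redis-0.redis.redis.svc.cluster.local"

cat > /tmp/sentinel.conf <<EOF
port 5000
sentinel monitor mymaster $MASTER 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
EOF

cat /tmp/sentinel.conf
```

The quorum of 2 means two of the three Sentinels must agree the master is down before a failover starts; with fewer than three Sentinels, as the video notes, failover cannot work reliably.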
It's also gone ahead and connected to both slave addresses, which you can see here are redis-0 and redis-1, and then it has the two sentinel entries: it has picked up two Sentinels, one on the IP ending in .14 and one on .16, which are sentinel-1 and sentinel-2.

Hopefully this video was helpful for understanding the fundamentals of running Redis on Kubernetes. If you're new to Redis, be sure to check out my basics videos on Redis as well as my clustering guide, and let me know down in the comments what sort of videos you'd like me to cover in the future. Also remember to like and subscribe. If you want to be a part of the community, check out the links below to the community page, and if you want to support the channel even further, check the join button and become a member. And as always, thanks for watching, and until next time: peace.
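For reference, the command sequence demonstrated across the video, collected in order. The manifest file names and paths are assumptions based on the repo layout described in the video (a `kubernetes` folder with `redis` and `sentinel` subfolders); these commands require a Docker host with kind and kubectl installed and are not runnable standalone.

```shell
# Create a disposable cluster (image tag per the video's Kubernetes 1.18.4)
kind create cluster --name redis --image kindest/node:v1.18.4
kubectl get nodes

# Namespace and Redis resources
kubectl create namespace redis
kubectl -n redis apply -f redis/redis-configmap.yaml      # path assumed
kubectl -n redis apply -f redis/redis-statefulset.yaml    # path assumed
kubectl -n redis get pods
kubectl -n redis get pv
kubectl -n redis logs redis-0

# Verify replication from inside the master
kubectl -n redis exec -it redis-0 -- redis-cli
#   auth <password>
#   info replication

# Sentinels
kubectl -n redis apply -f sentinel/sentinel-statefulset.yaml   # path assumed
kubectl -n redis logs sentinel-0

# Simulate failover, then watch the sentinel promote a replica
kubectl -n redis delete pod redis-0
kubectl -n redis logs sentinel-0
kubectl -n redis get pods -o wide
```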
Info
Channel: That DevOps Guy
Views: 15,011
Keywords: devops, infrastructure, as, code, azure, aks, kubernetes, k8s, cloud, training, course, cloudnative, az, github, development, deployment, containers, docker, rabbitmq, messagequeues, messagebroker, messge, broker, queues, servicebus, aws, amazon, web, services, google, gcp, redis, cache, database
Id: JmCn7k0PlV4
Length: 21min 18sec (1278 seconds)
Published: Wed Sep 09 2020