Connecting Elixir Nodes with libcluster, locally and on Kubernetes

Captions
In the last few articles we saw how to make our Phoenix chat app distributed: at first with Redis, and then with distributed Elixir, connecting the nodes together. We had just one problem: we had to manually connect the nodes in the IEx console, which is an issue in production. In this video we will see how to automatically cluster the Phoenix chat nodes using the libcluster library, locally and on a Kubernetes cluster with a dynamic number of nodes.

First, let's download the code of the Phoenix chat example from my GitHub account, poeticoding, and use the pubsub-pg2 branch: we clone the code, check out the pubsub-pg2 branch, download the dependencies and try to run it locally. We can pass the port as an environment variable, so we start the first node with port 4000 and the short name a, and start the Phoenix server; then we do the same with the name b and port 4001. Now let's connect node a to b@mbp, and we see that it connects correctly. Let's try the chat app with two browsers: one tab on port 4000, which is node a, and the other on port 4001, which is node b. User A sends "hello" and we see that the message is broadcast correctly to the other node; user B sends "hello too" and we see that it works.

So far we had to manually connect the nodes using the connect function in the Node module. Let's see how to use libcluster to connect the nodes automatically. At first we need to add libcluster as a dependency. We then need to start a Cluster.Supervisor, which is part of the libcluster library, passing some topologies and the name of our supervisor. The list of topologies is really simple: we call our topology chat, and we use the Gossip strategy, which uses multicast UDP to gossip node names to the other nodes on the network. Great, and it should work straight away.
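The setup just described can be sketched roughly like this (a sketch, not the video's exact code: the module names, the `chat` app name and the version constraint are assumptions; the child spec follows libcluster's documented usage):

```elixir
# mix.exs — add libcluster as a dependency (version constraint is indicative)
defp deps do
  [
    {:libcluster, "~> 3.0"}
    # ...the other chat app dependencies
  ]
end

# lib/chat/application.ex — start the Cluster.Supervisor with a Gossip topology
defmodule Chat.Application do
  use Application

  def start(_type, _args) do
    topologies = [
      # Gossip uses multicast UDP to broadcast node names on the local network,
      # so nodes on the same LAN find and connect to each other automatically.
      chat: [strategy: Cluster.Strategy.Gossip]
    ]

    children = [
      {Cluster.Supervisor, [topologies, [name: Chat.ClusterSupervisor]]}
      # ...the rest of the supervision tree (endpoint, PubSub, etc.)
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Chat.Supervisor)
  end
end
```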
As before, we start one node, a, on port 4000, and the other node, named b, on port 4001, and you see that node a is now connected to node b and vice versa: running Node.list() on both shows that we didn't have to connect them manually. The same happens if I add another node, named c, on port 4002: it connects to both, and Node.list() on a, b and c confirms they are now all connected. Great, let's see if everything works in the browser. This time we have three nodes: localhost:4000, localhost:4001 and localhost:4002, which is node c. C sends "hello" and we see it's propagated correctly, then "hello" from a, and the last one from b. Perfect.

Let's now see how to deploy this distributed application on Kubernetes, making the clustering of the Elixir nodes automatic with libcluster. We are going to deploy multiple chat nodes on my local Kubernetes setup, but what I'm going to show you should work without any radical change on any cloud Kubernetes setup. We're going to deploy our chat nodes with a Kubernetes Deployment and connect them together automatically, thanks to libcluster and something called a Kubernetes headless service, which we'll see in a moment. We will then create a Kubernetes load balancer which spreads the connections from different browsers across the chat nodes.

So, what is a headless service? I've put a file, nginx.yaml, which you can find under the libcluster branch, so let's see what a headless service is with a simple nginx deployment. We define it as a Service, but we specify clusterIP: None. Its DNS name will be nginx-nodes under the default namespace, and in this case both the port and the target port are 80. You see here the selector, app: nginx-nodes: through it, this service targets all the pods of the deployment.
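A headless service of this shape might look like the following (a sketch reconstructed from the description: the name nginx-nodes comes from the video, while the label key and the rest of the manifest are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodes
spec:
  clusterIP: None      # "headless": no virtual IP, DNS resolves directly to the pod IPs
  selector:
    app: nginx-nodes   # targets all the pods of the nginx deployment
  ports:
    - port: 80
      targetPort: 80
```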
Looking at the Deployment, you see the container spec: it's just a pretty simple, normal nginx deployment with, for example, four replicas, and the headless service will point to these pods. As I said, I'm going to use my local Kubernetes setup, so let's apply this file: it creates the service and it creates the pods. Okay, we have four pods running and we have our headless service; you see here that it doesn't have a cluster IP. To see what it does, we run a pod interactively: we give it a name, pass --rm so that once we exit from bash the pod is removed, use an ubuntu image and execute bash. This is a basic Ubuntu distribution, so I run an update, install dnsutils so that we have nslookup, and also install curl. If we do nslookup nginx-nodes, we see the list of all the pods. If we scale out to, for example, eight replicas, we see our bash pod still running plus the eight nginx-nodes pods, and if we do another nslookup we see that we are able to list all eight IPs.

Back to our chat application: let's start by changing the topology. This time we use the Cluster.Strategy.Kubernetes.DNS strategy, which relies on the headless service. We set service to the name of our headless service, chat-nodes (just as, in the example before, we had nginx-nodes), and the application name to chat. What is this? When we start the interactive Elixir console we usually pass --sname, a short name, like a: the node name then becomes, for example, a@mbp, because the local hostname of my MacBook is mbp. Since each pod has a different IP, we need to use the --name option instead.
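In code, the new topology might look like this (a sketch: the config keys follow libcluster's documented Cluster.Strategy.Kubernetes.DNS options, and the names chat-nodes and chat come from the video):

```elixir
# Topology for Kubernetes: resolve the headless service via DNS
# and try to connect to chat@<pod-ip> for every pod IP it returns.
topologies = [
  chat: [
    strategy: Cluster.Strategy.Kubernetes.DNS,
    config: [
      service: "chat-nodes",    # name of the headless service
      application_name: "chat"  # node names have the form chat@<pod IP>
    ]
  ]
]
```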
With --name, the application part of the node name can always be the same, chat, while the host part is the pod IP, which is unique. As before, we pass these topologies to the Cluster.Supervisor and initialize the supervisor with its name. We are going to make another change, in the controller: what I want to do is show which node we are connected to, so we assign self_node, which is inspect(node()), and also the list of nodes this node is connected to, nodes, which is inspect(Node.list()), and we render both in the template. It will not be really pretty, but it works. (In the controller the assign is called self_node, while in the template the parameter is just node.)

The application is now ready; we need to build a Docker image, but before building it let's look at the Kubernetes resources. First, the headless service: it's like before, but instead of port 80 we expose EPMD's port, because this is the port the containers need to reach each other on; the name of the headless service is chat-nodes and, obviously, clusterIP is None. We then create a chat load balancer: since I'm using my local setup, port 80 will be open on my localhost, and the target port will be 4000, which is the port the chat nodes will open. In the Deployment we start with four replicas; this is the image (I've already uploaded it, but you can also build your own, as we will do together) and the port is 4000. Then there is the Erlang cookie environment variable: each single node has to be started with the same Erlang cookie, so the nodes share this secret, and this is really important. We also need to create an environment variable with the pod IP: using the pod's status.podIP field, we are injecting it into an environment variable that we can use when we start the application.
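Put together, the resources described above might look roughly like this (a sketch, not the video's exact manifest: the label names, env var names and image tag are assumptions, the cookie value is a placeholder, and $(VAR) expansion in args is standard Kubernetes behavior):

```yaml
# Headless service: lets libcluster discover the pod IPs via DNS, and
# exposes EPMD (port 4369) so the Erlang nodes can reach each other.
apiVersion: v1
kind: Service
metadata:
  name: chat-nodes
spec:
  clusterIP: None
  selector:
    app: chat
  ports:
    - port: 4369
      targetPort: 4369
---
# Load balancer: spreads browser connections across the chat pods.
apiVersion: v1
kind: Service
metadata:
  name: chat
spec:
  type: LoadBalancer
  selector:
    app: chat
  ports:
    - port: 80
      targetPort: 4000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat
spec:
  replicas: 4
  selector:
    matchLabels:
      app: chat
  template:
    metadata:
      labels:
        app: chat
    spec:
      containers:
        - name: chat
          image: chat:libcluster
          ports:
            - containerPort: 4000
          env:
            - name: PORT
              value: "4000"
            - name: ERLANG_COOKIE
              value: "a-shared-secret"   # every node must start with the same cookie
            - name: POD_IP               # injected via the downward API
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command: ["elixir"]
          args:
            - "--name"
            - "chat@$(POD_IP)"           # unique node name per pod
            - "--cookie"
            - "$(ERLANG_COOKIE)"
            - "-S"
            - "mix"
            - "phx.server"
```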
You see here the command: it is elixir, and these are the arguments, as we saw before: --name with the application name chat plus this POD_IP environment variable, and --cookie with the secret. The node name is dynamic, so we don't need to hardcode the pod IP, which is pretty useful; we actually need this because every time we terminate a pod, Kubernetes will spawn a new one and the IP will be different. And then we start the Phoenix server.

Let's see how to build the Docker image and deploy all of this. Building the image is pretty simple: docker image build -t with the name of the image; since it's local, let's call it chat, with the tag libcluster, and the current directory as build context. We can then use this image in our Kubernetes deployment: in the Deployment, instead of the uploaded image, we use this one. To create all the resources we just need to run kubectl create (I use k, which is just an alias for kubectl) with our config file. You see that it creates the chat Deployment, the chat service, which is the load balancer, and the chat-nodes service, which is the headless service. Let's connect to the load balancer. Great, we see self_node, and the nodes connected to this node: we have four nodes, this is the local node and these are the other nodes. And here, in another tab, I'm connected to another node: I send "hello" and you see the message, "hi there"; and in yet another tab, on another node again, I send "hello" and you see how the message is propagated correctly. Great, let's now put ten replicas, apply the change and connect again: you see here we have all the pods, counting one, two, three... up to ten.
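As shell commands, the build-and-deploy steps above might look like this (a sketch; the file name chat.yaml and the image tag are assumptions based on the narration):

```shell
# Build the local image, tagged libcluster; "." is the build context
docker image build -t chat:libcluster .

# Create the Deployment, the load balancer and the headless service
kubectl create -f chat.yaml

# After editing the file to replicas: 10, apply the change
kubectl apply -f chat.yaml

# Tear everything down again
kubectl delete -f chat.yaml
```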
If we want to delete the resources we created, we just have to delete all the resources in our config file: you see that the pods are now terminating and the services are deleted. So we saw how easy it is, with libcluster, to connect the nodes together and deploy our distributed Phoenix chat application on Kubernetes. If you have a question or something wasn't clear, please post a comment in the comment section below, and subscribe to be updated with new articles and new screencasts. See you next week!
Info
Channel: poeticoding
Views: 3,576
Rating: 4.9569893 out of 5
Keywords: elixir, phoenix, kubernetes, libcluster
Id: FARtObrKr5I
Length: 19min 26sec (1166 seconds)
Published: Thu Feb 07 2019