Deploying NodeJS & Nginx to Kubernetes using Helm Charts

Captions
This is a quick guide on taking a Node.js and nginx application and deploying it to a Kubernetes cluster using Helm charts. To quickly introduce the app we'll be working on today: it's a repository with two Node.js applications, app and user-service. App is a Node server that runs on port 3000. It has a couple of endpoints: a health check endpoint at /ping, a root endpoint that returns an HTML page, and an endpoint at /api/users that proxies to another service, hitting localhost:4000 on a different endpoint, taking the data, and returning it to the client. The server running on port 4000 is user-service, again a Node.js application, with a single endpoint, /v1/users, which returns a mocked set of users to whoever consumes it, in this case app. To add one more layer to the mix, there's an nginx server that runs alongside these. It isn't needed in a local development environment, but if this were deployed to, say, a production environment, we'd want nginx to sit in front of the localhost:3000 server, accept requests on port 80, and forward them along to port 3000 so the server is easier for people to reach. As a small addition, there's an error page configured: for any 5xx error, nginx returns a maintenance.html page that simply says the site is down for maintenance.

To see the app in action locally, the package.json exposes a start-all command, which internally runs the start-app and start-user-service scripts to launch app and user-service respectively. Jumping to the terminal and starting both, we can see user-service running on port 4000 and the server running on port 3000. In the browser, hitting localhost:3000 shows the root handler returning the static text we send back from the handler; the health check endpoint works as well; and /api/users makes a call to localhost:3000, which makes another network call to localhost:4000 (the user-service server) and proxies the set of users back to the browser. In the logs we can see user-service being called and the app server returning the /api/users response.
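The source isn't shown in the walkthrough, but based on the description above the app server might look roughly like this. This is a minimal sketch: Express, the handler bodies, and the response shapes are assumptions, and it relies on the global fetch available in Node 18+.

```js
// app/server.js — rough sketch of the app server described above (details assumed)
const express = require('express');
const app = express();

// Health check endpoint
app.get('/ping', (req, res) => res.send('pong'));

// Root endpoint returning a simple HTML page
app.get('/', (req, res) => res.send('<h1>Hello from the app server</h1>'));

// Proxies to the user-service running on port 4000
app.get('/api/users', async (req, res) => {
  try {
    const upstream = await fetch('http://localhost:4000/v1/users'); // Node 18+ global fetch
    res.json(await upstream.json());
  } catch (err) {
    res.status(502).json({ error: 'user-service unavailable' });
  }
});

app.listen(3000, () => console.log('app server running on port 3000'));
```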
Now that we have an application running locally, let's see what we need to do to make all of this run in a Kubernetes cluster. We don't have a production environment to test our Kubernetes deployments against, but we can set one up locally. A couple of tools need to be installed: something to talk to Kubernetes, and something to run a local cluster. They're all available on the Kubernetes documentation page, but the two we need today are kubectl, a CLI tool that lets you connect to a Kubernetes cluster and execute commands against it, and minikube, which lets you spin up an entire Kubernetes cluster locally. Specifically for today's demo we'll be using Docker Desktop's Kubernetes capabilities, so the other tool we'll need is the actual desktop version of Docker, not just the CLI. You can verify everything is installed correctly by running a version command on each of them; I have minikube, kubectl, and Docker all available.

Before we deploy anything to Kubernetes, though, the question remains of how we get something that's compatible with Kubernetes in the first place. That's the prerequisite we have to address now: we need to run all of our applications — the app server, user-service, and nginx — in Docker, which essentially means creating Docker images for them and using those images to run containers, so they're compatible with a Kubernetes environment. To dockerize the application we start by creating a Dockerfile. If you think about it, we have three different applications here — the app server, the user-service server, and nginx — and each has to be dockerized by creating a Dockerfile and building a Docker image from it. Let's dockerize app first by creating a file called Dockerfile under it. Most, if not all, Docker images start from a base image; here I need a Linux environment with Node available, so I use an Alpine Linux image tagged with the Node version I want. The Dockerfile then has to cover three things: what code to copy from the host system into the image, which port to expose (networking is sandboxed between the running container and the host, so exposing a port is how you interface between the two), and what command should run when the image is started as a container.

To move files from the host into the image, we first create a working directory, /usr/share/app. The WORKDIR instruction both creates the directory and cds into it, so everything after it is relative to that path. The first thing to copy is the server code, which lives at app/server.js, into a file called server.js inside the working directory. For that server to work we also need our installed packages, so we copy node_modules as well. We know the app server runs on port 3000, so we need a way to talk to it; that's done by exposing port 3000, which allows anyone using this image to map a host port to the exposed port and talk directly to the server. The last thing is the final command to run when the image becomes a running container. Everything above this point happens at build time, but the CMD statement is executed every time a container is spun up from the image. CMD is just an array of strings, where each element is one space-separated part of the command; here we want to run the server, so it's node server.js. That's pretty much all we need to dockerize the app server and expose port 3000.
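Put together, the app Dockerfile described above would look something like this; the node:18-alpine tag and the exact COPY paths are assumptions based on the walkthrough.

```dockerfile
# app/Dockerfile — sketch of the Dockerfile built up above (tag and paths assumed)
FROM node:18-alpine

# Create the working directory and cd into it
WORKDIR /usr/share/app

# Copy the server code and the already-installed dependencies from the host
COPY app/server.js ./server.js
COPY node_modules ./node_modules

# The app server listens on port 3000
EXPOSE 3000

# Executed every time a container is started from this image
CMD ["node", "server.js"]
```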
So let's tweak this and copy it over to the nginx and user-service folders to dockerize those servers as well. Now we have Dockerfiles for the app server, the nginx server, and user-service. The only tweaks: in user-service we copy over the user-service's Node server and expose port 4000 instead of 3000; for nginx we use a different base image, copy our default.conf to nginx's configuration directory, copy our maintenance.html into the folder nginx serves static files from, and expose port 80, since that's where nginx listens. We also tell nginx not to run as a daemon. This matters because a container stops running when its main command ends: if the command reaches a conclusion, Docker thinks your container has finished. nginx runs as a daemon by default, which means the command that starts nginx would start and exit almost immediately, so we turn daemon mode off so that the process that starts nginx is also the process actually running it.

Now that we have Dockerfiles for all three applications, let's use them to build Docker images and spin those up locally as containers, just to check that everything has been dockerized correctly. To create an image from a Dockerfile you use the docker build command. You give it a build context — here the current folder — but since we have three Dockerfiles we need to tell Docker which one is relevant for a given build, which is what the -f argument does. Let's build the app server first: we pass app's Dockerfile with -f, and the -t argument to give the image a name, so the image will be called app-server with the tag v1 (a tag is just a way to keep multiple versions of the same image). Running this, a series of steps execute — the steps from the Dockerfile — and docker images now lists app-server with the v1 tag.

Let's quickly spin this image up as a running container to check that the app server works. To run an image that has already been built you use the docker run command: the -d flag detaches it so your terminal session isn't tied up tailing its logs, the -p flag maps a host port to the port exposed by the image — here I want my host's port 3000 mapped to the 3000 exposed by the image — and finally you pass the image name, which in this case is app-server with the v1 tag.
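For reference, the nginx Dockerfile described above might look roughly like this; the base image tag and the destination paths inside the image are assumptions based on the standard nginx image layout.

```dockerfile
# nginx/Dockerfile — sketch based on the description above (tag and paths assumed)
FROM nginx:alpine

# Replace the default server configuration with our reverse-proxy config
COPY nginx/default.conf /etc/nginx/conf.d/default.conf

# Static page served for any 5xx response
COPY nginx/maintenance.html /usr/share/nginx/html/maintenance.html

# nginx listens on port 80
EXPOSE 80

# Run nginx in the foreground so the container keeps running
CMD ["nginx", "-g", "daemon off;"]
```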
The moment we run this we get a container ID back, and we can check its logs with docker logs, which show the server running on port 3000. To test it, jump back to the browser and hit the health check endpoint; the server responds, and this time the response is coming from the server running inside our Docker container, not from the process on the local system, which we killed earlier. So the app server container works; let's do the same exercise for user-service and nginx. We run the build command again, except we pass the user-service Dockerfile and the name user-service, with v1 again as the tag, and it builds and tags successfully. Then we run the user-service image. We know it listens on port 4000, and just to show that the host port and the exposed port don't have to match, I'll map port 5000 on my local system to the 4000 exposed by the image. It prints a container ID, which means the run command succeeded, and the logs show it running on port 4000. Back in the browser, localhost:4000/v1/users doesn't work, because we mapped 5000 on the host to the container's 4000, but localhost:5000/v1/users comes back with the mock users, so user-service is working too.

The last test is nginx. Let's build the nginx image and call it app-nginx (tag v1), mainly because nginx is already an existing Docker Hub image name, then run it with port 80 mapped to port 80. docker logs shows everything up and running. We can't really test the proxy passes right now, because none of these containers know about each other, but one thing we can test is whether the maintenance page renders: hitting /maintenance.html shows the maintenance page we configured in our nginx server, because we requested it directly. So as it stands we have three images — one for the app server, one for the user-service server, and one for nginx — all running on the local system, all tested, and everything looks fine.
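For reference, the build-and-run commands used in this section look roughly like this; image names, tags, and port mappings follow the walkthrough, while the Dockerfile paths are assumptions.

```sh
# Build one image per Dockerfile, using the repo root as the build context
docker build -f app/Dockerfile -t app-server:v1 .
docker build -f user-service/Dockerfile -t user-service:v1 .
docker build -f nginx/Dockerfile -t app-nginx:v1 .

# Run each image detached, mapping a host port to the port the image exposes
docker run -d -p 3000:3000 app-server:v1     # host 3000 -> container 3000
docker run -d -p 5000:4000 user-service:v1   # host 5000 -> container 4000
docker run -d -p 80:80 app-nginx:v1          # host 80   -> container 80

docker images                 # list the built images
docker logs <container-id>    # tail a container's output
```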
Now that we've tested all of our Docker images by running containers locally and verifying that things work, let's move on to the Kubernetes side. We installed tools like kubectl and minikube earlier to run a local Kubernetes cluster, but we only did the installation; we never actually spun up a cluster. We can do that with the minikube start command, and since, as mentioned earlier, we want to use Docker's Kubernetes capabilities rather than a separate virtual machine, we pass docker as the driver. Once we hit start, a series of steps execute and minikube spins up the cluster, which takes a little while, maybe a minute or so. Once it's up, we can verify it by opening Docker Desktop and seeing that minikube is now controlling a cluster in Docker's context. Kubernetes installations usually come with a dashboard that shows what's running in your cluster, which entities exist, and what state they're in; running minikube dashboard tunnels that dashboard to the local system. Opening it, we can see Kubernetes running with its default configuration, and nothing else running yet, which is where everything we did earlier around creating Docker images comes into play.

As part of this guide, here's what we want to recreate: a cluster running one nginx server that sits in front of three app servers, which in turn sit on top of two user-service instances. Three and two are just numbers for this demo; technically they could be scaled up or down for whatever load you expect. Notice that in front of the app pods there are services (what load balancer and cluster IP mean, I'll get to in a bit), which give us a single entity to refer to that, under the hood, points to the individual running instances: all of the apps talk to a single service that load-balances across the two running user-service instances, and nginx talks to a single service that load-balances across the app instances. This is the entire cluster we're going to bring up in our local Kubernetes installation. We have images for all of these pieces, but how do we go about creating the cluster? That's where resource manifests come in.

To talk about resource manifests we need to step back and talk about pods. Pods are the most atomic entity that can be deployed to a cluster, and a pod is essentially a running instance of your application. Containers are also running instances of your application, so what's the difference? In most cases a pod is just the container you're running; the same containers we built for the app server, user-service, and nginx are, for all intents and purposes, pods in the Kubernetes world. But the definition really is a running instance of your app: if you have multiple tightly coupled containers that must work together to function — say a Node.js server that sits on top of a database — you'd want those containers to coexist at all times, and in that case a single pod could contain multiple containers. Whether a pod is one container, which it will be in most cases, or multiple containers is decided on a case-by-case basis.
So here, nginx, each app instance, and each user-service instance are all pods. To create this cluster we want one pod of nginx, three pods of the app server, and two pods of user-service. But spinning up pods one by one gets manual: we'd have to run the pod creation command three times for the app server and twice for user-service, and if we later want to scale them to support a larger workload we'd have to keep applying them over and over. That's where the first resource manifest we'll work with comes in, a Deployment. A deployment is a declarative way to work with pods: through a simple YAML file I can say that I want a pod containing this container, and that I want this many instances of that pod in my cluster. I can also attach metadata to those pods so that other pods and services in the cluster can identify them, all through a single YAML file.

The second type of resource we'll talk about today is a Service. A service sits in front of a number of pods and acts as a proxy between whoever wants to reach those pods and the pods themselves. I can scale the pods up to handle a larger workload or scale them down, and I never have to care about their individual IPs; I reference the single service, and the service talks to the pods under the hood. If one pod gets replaced by another, it doesn't matter; the service remains the abstraction everyone else talks to. There are two types of services in this cluster: a LoadBalancer service and ClusterIP services. The stark difference is that ClusterIP services are only reachable by other pods and services running inside the cluster, whereas a LoadBalancer service exposes an endpoint that the rest of the world can send requests to. Any request from outside the cluster has to come through a load balancer, not a cluster IP, because the latter is only relevant within the cluster. In our example we don't want the app server or user-service exposed directly; we want all requests to come through nginx — maybe some authorization or a pre-check runs there — so everything is funneled through the single load balancer service.

All of these resources are controlled by resource manifests, which are YAML files, so let's write the manifest that controls our app pods. To see one of these manifests in action, we jump back to the code base: we want a deployment resource for app that specifies which container to run and how many instances of it. Under app we create a file called deployment.yaml and specify the configuration for the deployment resource in it.
The first key is standard across all deployment resources: apiVersion. We also say the kind is Deployment, meaning this YAML specifically describes a deployment resource. We can then provide some metadata — as I said, these manifests let you attach metadata that other manifests can link to — and here I name this deployment resource app-server-deployment. Now comes the main part, the specification, under the spec key. Here I say how many replicas I want of something, and what that something is depends on a selector: whatever matches a set of labels whose key is server and whose value is app-server. What you put under matchLabels is arbitrary — any key, any value — it just has to match labels you specify in the pod template, which is what we'll build next. So we're saying: whatever matches the label server: app-server, spin up three replicas of it. Then we create the template for the pods and give it metadata; one such key is labels, and under labels I put server: app-server, which is how the replica count ties into the pod template.

Having provided the metadata, we now provide the spec for the pods themselves, which is the set of containers we want to run as part of the pod, under an array called containers. The container's name can be anything; it's just what shows up in your Kubernetes dashboard. The image is what matters: in our case app-server:v1. Since we're running a local Kubernetes cluster we don't want the image to be pulled from Docker Hub, which is the default lookup, so because our images only exist locally I set imagePullPolicy: Never, meaning never look it up remotely, always use the image that's already available. Based on your use case you might omit this, for example if the image actually lives on Docker Hub or in a custom registry. Next are the ports: the container exposes ports, so we provide a containerPort. This is an array of values because a container can expose multiple ports, but ours exposes a single one, port 3000. This is also where you can provide environment variables, so just to test it out let's pass an array of env variables with one entry whose name is NODE_ENV and whose value is production.

So that's how a deployment resource manifest looks: the kind is Deployment, here's some metadata about the deployment itself, and here's the specification, which says that for whatever matches this label, spin up three replicas; the label itself, server: app-server, is specified in the pod template, which is how the two tie together; and the template's spec defines the containers — this image, this exposed port, and these env variables for my container.
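Assembled from the description above, the manifest would look roughly like this; field values follow the walkthrough, though the exact file may differ slightly.

```yaml
# app/deployment.yaml — sketch of the deployment manifest described above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-server-deployment   # names must be lowercase alphanumeric/'-' (DNS-1123)
spec:
  replicas: 3
  selector:
    matchLabels:
      server: app-server        # must match the pod template's labels
  template:
    metadata:
      labels:
        server: app-server
    spec:
      containers:
        - name: app-server
          image: app-server:v1
          imagePullPolicy: Never  # only use locally built images
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
```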
Now let's apply this deployment to the Kubernetes cluster and see whether at least this part of the cluster spins up correctly. To apply resource manifests we use kubectl, which we installed early on and which is now talking directly to the Kubernetes cluster running inside our Docker context. kubectl takes a number of commands; here we want to apply a new resource, so we use apply, with -f to apply a file: kubectl apply -f app/deployment.yaml. This errors out, because resource names have to be lowercase alphanumeric (DNS-compatible) values; after fixing the name and running it again, the deployment is created. Now we can open the dashboard with minikube dashboard, and we can see that three pods spun up, but with a "container image is not present" error. That's interesting, because we did build the image locally and told Kubernetes not to pull it from anywhere, so why can't it find the image on our local system? This wouldn't come up in a production environment, but when building locally there are a few things we have to do to get minikube talking to our local Docker environment. What I mean is: docker images shows these images on my machine, but they are not available inside minikube, which is what's controlling the Kubernetes cluster. If I run the command that switches my shell to minikube's Docker environment and then run docker images, my images aren't there; they don't exist in the context of the Docker daemon running inside minikube, and minikube can only use images that are within its own Docker environment.

So now that we've run that eval command and our host shell is using minikube's Docker daemon, we need to rebuild the images in that Docker context so minikube can see them. Again, this isn't something you'd run into in a production environment; it's a small extra step for minikube. Let's run those docker build commands again: building app-nginx:v1 pulls in everything and starts from scratch, and once it's done docker images shows that app-nginx is available. We do the same for user-service, which again builds from scratch, and for the app server, and now docker images shows that app-server, user-service, and app-nginx are all available inside minikube's Docker daemon.
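The eval command referred to above is minikube docker-env; a rough recap of this step:

```sh
# Point this shell's Docker CLI at the Docker daemon inside minikube
eval $(minikube docker-env)

# Rebuild the images so they exist in minikube's Docker context
docker build -f nginx/Dockerfile -t app-nginx:v1 .
docker build -f user-service/Dockerfile -t user-service:v1 .
docker build -f app/Dockerfile -t app-server:v1 .

docker images   # the cluster can now find these images with imagePullPolicy: Never
```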
Let's jump back. You can either use the dashboard or kubectl to see what's running; for example, I've listed the pods running inside my cluster, and now the pods are Running. That's because the image is now available inside Kubernetes, and because Kubernetes is a self-healing environment: if something errors out, it keeps trying to bring it back up until it succeeds. In the previous case the image wasn't available, so the pods failed, but the moment we made the image available a restart found it and everything came up. We can also check the logs for one of the running pods with kubectl logs and the pod name, which show the server running on port 3000. These pods run within the cluster, though, so there's no way to access them directly from the outside world. We can also see that exactly three replicas are running. Like I said, if you have a larger workload and want to scale up, you come back to this deployment.yaml, change replicas to four, and apply the deployment again; kubectl get pods then shows four pods, with the newest one spun up only a few seconds ago. That's how you declaratively scale running instances of your application up and down through a single resource manifest instead of spinning them up one by one.

Now that we have a deployment running, let's add a service to abstract the deployment — or rather the pods behind it — from the other services running in the cluster. A service is another type of resource manifest you can add. Looking back at the cluster diagram, we want our apps to sit behind a ClusterIP service, so back in the code base we create a service.yaml, which looks similar to how we defined the deployment manifest. There's an apiVersion, although for services it's just v1; the kind is Service rather than Deployment; and, as with the deployment manifest, we can give the service itself metadata, naming it app-server-cluster-ip-service. All of these resource YAML files have a spec that defines what the manifest is supposed to do. For a service, the spec defines the type, either LoadBalancer or ClusterIP, and since we want a ClusterIP we set type: ClusterIP. Every ClusterIP service has requests coming into it and requests going out of it, which is what the ports key defines: you specify the port the service exposes, in this case 80, but requests arriving on port 80 have to be funneled on to the actual pods, and each pod listens on its own port. The pods defined by our deployment all run on port 3000, so we set the target port to 3000, which means requests reach the service on port 80 and get sent to whatever sits behind it on port 3000. Finally, we need this service to work on the pods defined by our deployment, and that's where the label we defined earlier comes in as a selector: we had a label server: app-server, so we use it here, meaning this service sits in front of pods that carry the label server: app-server.
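Written out, the service manifest described above would look roughly like this:

```yaml
# app/service.yaml — sketch of the ClusterIP service described above
apiVersion: v1
kind: Service
metadata:
  name: app-server-cluster-ip-service
spec:
  type: ClusterIP
  ports:
    - port: 80          # port the service listens on inside the cluster
      targetPort: 3000  # port the app-server pods listen on
  selector:
    server: app-server  # matches the pods created by the deployment
```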
Now let's apply this manifest as well: kubectl apply -f app/service.yaml, and the service is created. If you run kubectl get services — the new resource type we've introduced into the system — you can see app-server-cluster-ip-service running on a cluster IP but exposing no external IP, which again means it's an IP that pods within the cluster can reference but that nothing outside the cluster can call, and it exposes port 80 as a TCP connection. Back in the Kubernetes dashboard we can also see the app-server cluster IP service, and clicking on it shows it sitting in front of the app-server pods created by our deployment; they can find each other because we gave the service the right selector. No matter how many pods the deployment scales up or down to, this cluster IP service remains the abstraction everyone else talks to; nothing has to talk to the pods directly.

Now that we have a deployment orchestrating our app-server pods and a service sitting in front of them, let's go ahead and construct the rest of the cluster: a ClusterIP service and a two-replica deployment for user-service, plus an nginx deployment with one replica and a LoadBalancer service in front of it. There's some copy-pasting here. We take the deployment.yaml, drop it into user-service, change the name to user-service-deployment, set two replicas, change the label to server: user-service, set the image to user-service:v1, and change the port from 3000 to 4000 since user-service exposes 4000. We also add a service for it: user-service-cluster-ip-service, again of type ClusterIP, listening on port 80 with a target port of 4000, and a selector of server: user-service so it matches the label. Then we apply both: kubectl apply -f user-service/deployment.yaml creates the deployment, and applying the service.yaml creates the service as well. kubectl get pods now shows the four app-server instances plus the two user-service instances up and running, controlled by the user-service deployment we just added.
kubectl get services shows a user-service-cluster-ip-service, also running on its own cluster IP, and to check that it's connected correctly we can go to the Kubernetes dashboard through minikube, click on it, and see that it's sitting in front of both pods defined by the user-service deployment. Now let's do the same for nginx and see whether the whole thing works end to end. We copy the deployment into the nginx folder and call it nginx-deployment, with one replica; the server label is nginx, the container name is nginx, the image is app-nginx:v1 (since we namespaced the image name), the container port is 80, and there are no environment variables, so we drop that block. Then we put another service in front of it, and this is where things change a little: we call it nginx-load-balancer-service and the type is LoadBalancer. It receives requests on port 80, the target port is the same as the port exposed by the nginx container, also 80, and the selector is nginx, matching the server: nginx label. With that in place we spin this up as well: kubectl apply -f on the nginx deployment and then on the nginx service. kubectl get pods confirms one nginx pod is up and running, started about 14 seconds ago, and kubectl get services shows the nginx-load-balancer-service with a cluster IP; notice that instead of none under the external IP, it says pending. In a production environment this would resolve to an IP you can actually hit; since this is a local Kubernetes cluster we need to do some mapping through minikube to make it reachable, but note that the external IP is something the nginx load balancer service will eventually expose for consumption directly from the host system, which we couldn't do with the cluster IP services.
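The nginx deployment mirrors the app one (image app-nginx:v1, one replica, container port 80, no env block); the LoadBalancer service described above would look roughly like this:

```yaml
# nginx/service.yaml — sketch of the LoadBalancer service described above
apiVersion: v1
kind: Service
metadata:
  name: nginx-load-balancer-service
spec:
  type: LoadBalancer    # reachable from outside the cluster via an external IP
  ports:
    - port: 80          # port exposed to the outside world
      targetPort: 80    # port the nginx container listens on
  selector:
    server: nginx       # matches the pod label in the nginx deployment
```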
To get the nginx load balancer service to expose an external IP locally, we can use minikube service with the service name, which tunnels that IP to the host system so we can access it directly. Hitting it, we can indeed reach the service, but the moment we do, it says the site is down for maintenance, even though we're hitting the root endpoint. That means we're actually getting a 5xx back from nginx. The reason is that our nginx default.conf still has a proxy_pass to localhost:3000, but now that we've moved to a cluster environment where the pods have their own IPs, localhost:3000 doesn't make sense anymore: nginx needs to hit the app server's cluster IP service, the apps need to hit the user-service cluster IP service, and so on. If we boot up the Kubernetes dashboard again we can see that each cluster IP service exposes an IP we could consume — technically our nginx pod's proxy_pass could hit the app server service's IP, and every call our Node server makes to localhost:4000 could hit the user-service service's IP. But here's the thing: something we configure in our local Kubernetes deployment may not hold true in another Kubernetes environment. If we take this to a production environment someone else is offering, we have no guarantee the cluster IPs will be the same. Kubernetes solves this through service name discovery: anywhere you want to reference another pod or service, you can reference it by name. So instead of giving nginx an IP, we give it the service name itself — proxy_pass to app-server-cluster-ip-service — and that name gets resolved by the cluster whenever calls are made to it; DNS resolution for that domain name happens within the cluster. We add that, and similarly we make one change in our app server so it consumes user-service-cluster-ip-service instead of localhost:4000. There's one concern, though: if I run this locally now, it will start failing, because that name doesn't resolve to anything outside the cluster. So we can put in a small check that picks the upstream based on NODE_ENV, which we're already supplying through the deployment resource manifest: if NODE_ENV is production, the response comes from the cluster IP service; otherwise it comes from localhost:4000. This ensures we don't break the local development environment. And remember, NODE_ENV itself is powered from the deployment.yaml, where we pass in our environment variables.
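Sketches of those two changes, assuming the file layout from earlier; the conf paths, the error_page wiring, and the variable name are assumptions.

```nginx
# nginx/default.conf — proxy to the app server by service name; serve the
# maintenance page for any 5xx response (paths assumed)
server {
    listen 80;

    error_page 500 502 503 504 /maintenance.html;

    location = /maintenance.html {
        root /usr/share/nginx/html;
    }

    location / {
        proxy_intercept_errors on;                        # let error_page handle upstream 5xx too
        # Resolved by cluster DNS; this used to be http://localhost:3000
        proxy_pass http://app-server-cluster-ip-service;
    }
}
```

And the NODE_ENV-based switch in the app server might look like this:

```js
// app/server.js (excerpt) — pick the user-service upstream based on NODE_ENV,
// which the deployment manifest sets to "production" inside the cluster
const USER_SERVICE_URL =
  process.env.NODE_ENV === 'production'
    ? 'http://user-service-cluster-ip-service'  // resolved by cluster DNS
    : 'http://localhost:4000';                  // local development
```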
minicube dashboard and if we open up any deployment we can see that we have our app server app server deployment that is now pointing to the images app server v2 right which means that we have you know configured a new version of the app to go out and something that's crucial here is if we go back to the age of every pod we can see that none of the other pods have changed it's strictly only the app deployment that has gone through which means that only the app server code has spun up new new versions of the of the code on the cluster now let's also do that for nginx so let's cube ctl apply minus f the nginx deployment yaml because there's a new version of the nginx image as well and we can see that the deployment got configured you can get the pods and we can see that the nginx port the old one was is terminating and the new one is now spinning up and give it enough time and the pod has now like it was terminating at this point and now it's completely been removed from the cluster now let's open up mini cube and tunnel the nginx load balancer service and now instead of the 5xx page we now get the page that's rendered directly from our app server so the proxy pass from nginx to the node server is working uh correctly we can also use the health check endpoint to get the response back from the from the app server and now to truly check if it works end to end let's hit the api slash users endpoint which by the way just to reiterate calls the user service cluster ap service and returns the users from there so now hitting slash api users we can now see the data for the for the for the users now being passed from user service to the node to the app server and from the app server through nginx and back to us so now we've essentially constructed this entire cluster with all of these services now running in front of these deployments and anytime there's any code change that we need to do we can just go to a deployment.tml file change the new change the tag version or change the replica account and reapply the deployment to have a new version of or or the new look of our cluster up and running as we see fit so while the resource manifest files provide a very declarative way for us to control our pods and our services in our cluster we can already see that it's getting a little bit out of hand with the number of deployment files that we're maintaining and the number of service files that we're maintaining and this is for a simple application that has two two services and an nginx layer on top of it now if i if our cluster gets slightly more complex with more uh services coming into play we have a lot of these yaml files that we have to start uh like paying at paying attention to and orchestrating the right way not to mention that we have only talked about two resource manifests here technically there are a lot more resources that you can apply to your cluster such as auto scaling configurations and like config maps and dynamic configs and so on so there is a there is a significant overhead here not to mention that there's another problem which is the fact that if you see the environment variables that we're passing in and the replica account that we're passing in all of these are sort of set in stone for a certain type of environment for instance here i could say that this deployment yaml here for my app server gives me a good idea of how my production cluster is going to be but let's say i have a staging environment that needs a different env pass to it and a different replica account right because 
Maybe the staging cluster has fewer resources to work with, so I want just one replica of this server there, and I also want to pass NODE_ENV as staging. The easiest way would be to keep two duplicates of this deployment.yaml, one for production and one for staging — deployment-production.yaml and deployment-staging.yaml — but that's only two environments. If I have more permutations and combinations of the values I provide here, the number of files grows linearly with every variant of my application. All of these are problems you can run into when Kubernetes runs your applications at scale, and to solve them we have Helm, which sits as an abstraction on top of the resource manifests in our Kubernetes infrastructure. Helm lets you think simply in terms of a blueprint for your cluster: you define which resources are part of the cluster and which values get passed to those resources, and you capture that blueprint as something called a Helm chart. Applying a Helm chart to a Kubernetes instance spins up the entire cluster for you; uninstalling the chart removes the entire cluster in a single go; and you can upgrade the chart with whatever smaller changes you make over time.

Helm can be installed directly from a script available on the Helm documentation site, and once it's installed you can check it with helm version; here it's running 3.4.0. So what is this Helm chart concept? A chart starts with a simple YAML file that defines the chart's name, the chart's version, and what the chart works upon. To see how all of this works, let's create a folder for the chart — call it k8s — and a Chart.yaml file inside it. The chart gets a name, app-cluster, and a version, 0.1.0. This Chart.yaml is everything we need to tell Helm that there's something we want to install into a Kubernetes environment, that it's called app-cluster, and that this is its version. As for what the chart works upon: like I said, Helm takes care of all of the resources for you; you just need to tell it which resources you want in your environment, and it learns that from the templates folder. All we have to do is move the deployments and services we had in separate folders into this templates directory, and Helm will understand that these are the various resources we want to install and deploy to our cluster. So let's move in the app deployment and app service and rename them (they don't have to keep the same names) to app-deployment and app-service, move in the nginx deployment and service as nginx-deployment and nginx-service, and move in the user-service deployment and service as well; the user-service service file ends up with the slightly awkward name user-service-service.
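The chart skeleton at this point would look roughly like this. The walkthrough only mentions the name and version; Helm 3 charts also need an apiVersion field, and the folder and file names here are the ones chosen above.

```yaml
# k8s/Chart.yaml — minimal chart definition described above
apiVersion: v2   # required by Helm 3, not mentioned explicitly in the walkthrough
name: app-cluster
version: 0.1.0
```

```
k8s/
├── Chart.yaml
├── values.yaml          # added in the next step
└── templates/
    ├── app-deployment.yaml
    ├── app-service.yaml
    ├── nginx-deployment.yaml
    ├── nginx-service.yaml
    ├── user-service-deployment.yaml
    └── user-service-service.yaml
```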
Now that we've moved everything in, we've essentially co-located all the different resources we need to install in our cluster, and all of them will be orchestrated by Helm, which I'll show in a bit. Another key component of a Helm chart is values.yaml. This file lets you provide a YAML of any shape — there's no fixed format — with keys and values that you can then use inside your various templates, which should solve the problem of needing different environment variables and different configurations for different environments. Let's take the app deployment as an example and start simple by configuring the replicas and the NODE_ENV. In values.yaml I create a top-level key called appServer and say that the name of my app server is app-server; I also provide some details about my image — the image name is app-server, the image tag is v2 — the exposed port is 3000, and the replicas I want by default are 4, though based on the environment I can change that. So now we have this little map in values.yaml; let's see how to use it to parameterize the app deployment so we can integrate it into different environments.

The template syntax here is based on the Go templating language. Let's plug in a couple of these values, starting with replicas: I don't want 4 baked into my app deployment, but rather driven from values.yaml, so the value becomes .Values.appServer.replicas. You can also interpolate values into a larger string rather than replacing a whole value: the image becomes .Values.appServer.image.name, then a colon for the tag, then .Values.appServer.image.tag, two values joined into a single string. We also have a configuration for the exposed port, so the container port becomes .Values.appServer.image.exposedPort. Another thing we wanted was different environment variables for the different environments we deploy to, with production as the default, so let's add an env.nodeEnv key and use .Values.appServer.env.nodeEnv for the NODE_ENV value. And we can use the name key to replace all of the app-server occurrences, so there's a single source of truth rather than the same string split across multiple places; replacing all of them with .Values.appServer.name keeps things consistent and avoids mistakes where, say, a typo in matchLabels leaves the deployment mislabeled. With that, the app deployment has essentially been templatized, and all of its values flow in through values.yaml.
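Putting that together, the values file and the templated deployment would look roughly like this; the key names are one possible spelling of what the walkthrough describes, and your layout may differ.

```yaml
# k8s/values.yaml — values for the app server (nginx and userService follow the same pattern)
appServer:
  name: app-server
  image:
    name: app-server
    tag: v2
    exposedPort: 3000
  replicas: 4
  env:
    nodeEnv: production
```

```yaml
# k8s/templates/app-deployment.yaml — the deployment, now driven by values.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appServer.name }}-deployment
spec:
  replicas: {{ .Values.appServer.replicas }}
  selector:
    matchLabels:
      server: {{ .Values.appServer.name }}
  template:
    metadata:
      labels:
        server: {{ .Values.appServer.name }}
    spec:
      containers:
        - name: {{ .Values.appServer.name }}
          image: {{ .Values.appServer.image.name }}:{{ .Values.appServer.image.tag }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.appServer.image.exposedPort }}
          env:
            - name: NODE_ENV
              value: {{ .Values.appServer.env.nodeEnv }}
```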
Let's do the same thing for the app service. In values.yaml I add a key called service — it doesn't have to be called service, there's no fixed format — and this is where you start to see the benefit of templatizing around a single source of truth. We change the app-server occurrences in the service to .Values.appServer.name as well. It's a ClusterIP service, so let's add that to the values too and use .Values.appServer.service.type for the type; if tomorrow we want to swap it out for a load balancer, we just come in and change that one value. Here's another place the single source of truth pays off: we're using exposedPort to define the deployment's container port, but we also need our cluster IP service to talk to that same target port, so we point targetPort at the same .Values.appServer.image.exposedPort. Now we can be certain that changing that one value means the deployment sees the right container port and the service forwards requests from port 80 to the right exposed port. We also change the selector to use .Values.appServer.name. And there we have it: both the service and the deployment are templatized, with their values coming from values.yaml. Let's take a quick minute to templatize the rest of the deployments and services, and then see how all of this plays together when we do a helm install.

So here's what we've done so far: the app server has its set of values; nginx has its image name and tag, its exposed port, one replica, and a service type of LoadBalancer; and user-service has a name, the user-service image with tag v1, port 4000, and two replicas. We've used all the user-service values in the user-service deployment and service, and similarly all the nginx values in the nginx deployment and the nginx service.
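For completeness, the templated app service might look like this, assuming a matching appServer.service.type: ClusterIP entry has been added to values.yaml:

```yaml
# k8s/templates/app-service.yaml — service driven by the same values as the deployment
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appServer.name }}-cluster-ip-service
spec:
  type: {{ .Values.appServer.service.type }}
  ports:
    - port: 80
      targetPort: {{ .Values.appServer.image.exposedPort }}  # single source of truth
  selector:
    server: {{ .Values.appServer.name }}
```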
We can confirm the empty cluster by opening the minikube dashboard again: there are no deployments, no pods, and no services apart from the default kubernetes service, no custom services that we've configured, which means we have a completely clean slate.
So now we can use Helm to install the entire chart that we've created under the kth folder: the chart configuration is in Chart.yaml, and values.yaml contains all the values that the templates for our various resources work upon. We can run helm install, which asks for a path to the chart folder; let's run it as a dry run before the actual installation, and also ask it to generate a default name for us. If you notice, all of our resource files are simply printed to the console because we ran it as a dry run, but what's very neat is that all of our template values have been replaced: the nginx name has been replaced everywhere, the nginx image name has been replaced correctly, the replicas too, and the same goes for the user service deployment. So we have a single values.yaml that's now driving all of these templates, which means we could technically use values.yaml to drive different values for different environments. But before we get there, let's drop the dry run, do the full installation of the chart folder with a generated name, and we can see that the deployment has gone through successfully.
Opening the Kubernetes dashboard in minikube, with a single installation command three deployments have been pushed to our cluster and all of their pods are running: four instances of the app server, one instance of the nginx deployment, and two replicas of the user service. Our services are up and running as well: the app server ClusterIP service, the user service ClusterIP service, and the nginx LoadBalancer. The LoadBalancer is still pending because we have to tunnel through minikube, but in a production environment it would have resolved to an external IP. To check that everything is working correctly, let's fire up the tunnel to the LoadBalancer service: we can go to /api/users and see the data being funneled end to end through the cluster.
Helm is also great in the sense that if we do a helm list we can see the various Helm releases deployed to our cluster, along with the chart name and the chart version. It's good practice to maintain proper semver for the chart version, because charts can be reused by other charts, similar to how Docker images have base images: you can have a base chart too. If you want to share this chart with another team or with the public, you publish the chart name with a version associated with it, and other folks can use it as a dependency for their own chart, meaning you bring all of your resource templates and the values.yaml and they extend on top of that to build their use cases. So it's good practice to follow proper semver and bump the version any time the chart itself changes; the sketch after this paragraph recaps the commands used for this install flow.
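For reference, the install flow described above boils down to a handful of commands. This is a sketch assuming the chart lives in a folder called kth, as in the video; the LoadBalancer service name in the last line is a placeholder.

    # render the templates without touching the cluster
    helm install ./kth --dry-run --generate-name

    # the real installation, with a generated release name
    helm install ./kth --generate-name

    # inspect what got created
    helm list              # release name, chart name, chart version
    kubectl get pods       # app server, nginx and user-service pods
    kubectl get services   # the two ClusterIP services and the LoadBalancer
    minikube dashboard     # the same view, in the browser

    # expose the LoadBalancer service locally (service name is a placeholder)
    minikube service <nginx-load-balancer-service>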
Whether a change to just the values.yaml warrants a bump is slightly debatable, but whenever the resource templates change or new resource templates get added, it's good to update this version. So let's make a small change to see how a second deployment can happen on the same cluster. Let's go back to our code base, pull up the user service, and say we want to add another user to our list, whose name is Jordando. At this point we need to build another Docker image that contains the new code and then reference that image in our values.yaml to configure the new value for our deployment. We can do a docker build of the user service again, except this time let's tag it as v2 because we have a new version of the code. Checking the local images, we can see that user-service:v2 is now available.
Now let's jump back to our values.yaml: in kth/values.yaml the user service image tag is v1, so let's bump that up to v2 and do a helm upgrade of the same installation, providing the path to the chart. What Helm is doing is essentially taking the new version of the chart and applying it to the same installation that we had earlier, but with the new values.yaml, which is why we get a new revision of the release. Now if you do a kubectl get pods, what's interesting is that the old user service pods are being terminated and new ones have taken their place (you can see they're just a couple of seconds old), but none of the other pods have been touched in any way. In other words, you can use the same chart to deploy a new version of your cluster, and a clean diff is done to ensure that running pods with no change in the new version of the chart are left running exactly as they were before. Once all the terminating pods are gone, to check that the new code is actually live with the new user we added, let's again tunnel to the LoadBalancer service through minikube service and hit /api/users: the new user is now part of the data array, which means the new Docker image has spun up new containers in the cluster and those containers are serving the new code.
So this is how you can use Helm: for all intents and purposes you envision how your cluster is going to look, along with the various values you want to use for this version of your blueprint, and use those values in templatized resource manifest files. And the beauty of this is that, since these are templatized, you could override the values.yaml values through CLI arguments when you want to deploy to a different environment. These are all default values that we're applying to our templates, but if you have a separate environment, say a staging environment, you can use the helm install CLI to pass in a different value for the app server env or the app server replicas, based on the environment you want to deploy to. That allows these templates to be far more generic, and we don't have to keep copies for every environment we want to deploy to; the sketch below shows roughly what the upgrade and the per-environment overrides look like on the command line.
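Here's a rough sketch of that upgrade flow and of overriding values per environment. The build path, release name, value keys, and the staging flags are assumptions for illustration, not the exact ones from the video.

    # build and tag the new user-service image
    docker build -t user-service:v2 ./user-service
    docker images | grep user-service     # confirm the v2 tag exists locally

    # after bumping userService.image.tag from v1 to v2 in kth/values.yaml:
    helm upgrade <release-name> ./kth      # creates revision 2 of the same release
    kubectl get pods                       # only the user-service pods are replaced

    # instead of editing values.yaml, individual values can be overridden per environment,
    # e.g. a hypothetical staging install:
    helm install --generate-name ./kth \
      --set appServer.env=staging \
      --set appServer.replicas=2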
Similar to how helm install spins up an entire cluster and helm upgrade rolls new values onto it, we can use the helm uninstall command to completely purge an installation as well. Instead of having to go and delete all of our pods, deployments, and so on by hand, we can use our chart, which is anyway a blueprint of what we've deployed, as a stencil to keep track of what needs to be removed when we want to remove an installation. I can just do a helm uninstall with the name of the Helm release, and it tells us that the release has been uninstalled. Now if we do a kubectl get pods, we can see that all of the pods controlled by this chart are being terminated, the deployments have been purged, and the services have been purged as well, apart from the default one: the user service ClusterIP service, the app server ClusterIP service, and the LoadBalancer service have all been removed (a short sketch of this teardown follows below). So in a single command we can get an entire cluster up and running, in a single command we can install a new version of the cluster, and in a single command we can purge the entire cluster as well. That, in a nutshell, is how we can get a Node.js and nginx application up on a Kubernetes cluster using Helm. I hope you found this useful, and I'll see you in the next one.
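For completeness, the teardown described above is a single command plus a couple of checks; the release name is a placeholder.

    helm uninstall <release-name>   # removes every resource the chart created
    kubectl get pods                # the chart's pods move to Terminating
    kubectl get services            # only the default kubernetes service remains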
Info
Channel: Abinav Seelan
Views: 1,686
Keywords: nodejs, node, k8s, kubernetes, docker, nginx, helm, helm charts
Id: u3sXfcncrJQ
Length: 77min 30sec (4650 seconds)
Published: Sun Dec 20 2020