Production-ready Node.js on Kubernetes

Captions
Hey everyone, welcome, and thank you for joining us today. Today we're going to talk about how to deploy a resilient Node.js application on Kubernetes from scratch. The unofficial title of this talk is "from npm init to kubectl create": we'll go from the basics of containers to deploying a production-ready application on Kubernetes. My name is Kamal Nasser, I'm a Developer Advocate at DigitalOcean. You can find me on Twitter and GitHub; my username there is kamaln7.

A couple of quick notes before we get started. First, this talk is being recorded, and the recording will be shared with everyone registered after the talk and also posted on the DigitalOcean YouTube channel, so don't worry too much about taking notes or screenshots; it'll be available for everyone afterwards. Second, if you have any questions during the talk, please enter them in the questions box and we'll answer as many of them as we can live at the end.

I'll start with a very quick outline of what we're going to cover in the next 30 minutes. Before we can even talk about Kubernetes, we need to talk about containers, specifically Docker containers. Then we can talk about Kubernetes and how to deploy resilient, performant, and scalable applications on it. (I just realized my screen share was paused; hopefully you can see my slides now.) So that's the outline: first containers, then Kubernetes, then deploying resilient, performant, scalable applications on Kubernetes.

Now, those are cool words, but what do they mean? They mean the application we'll be deploying will be suitable to serve five users at a time or thousands of users at a time. We'll have multiple copies of our application running at the same time, so if one of them goes down or crashes, our users won't notice downtime, because we'll have backup instances running. We'll also be able to scale the application, the number of copies, up and down as we need to. And not only that: Kubernetes will always keep an eye on our application and make sure everything is running perfectly, and if our application does go down, Kubernetes will automatically detect that and bring it back to a healthy state for us.

Okay, let's get started. Containers: what are they? I like to go with Docker's official definition, which says a container is a standard unit of software that packages up code and all of its dependencies so the application runs quickly and reliably from one computing environment to another. What this means is that you can think of a container as a way to take your application, its source code along with all of its dependencies and instructions on how to run it, and package all of that into one single unit called the container. Because the container contains everything your application needs to run, you can easily upgrade or downgrade dependencies, and you can even run multiple versions of a dependency at the same time, because each container is completely separate from the others. When you create a container, it gets its own file system, its own networking stack, and a bunch of other things that make it isolated and secure by default, so it only has access to what you want it to have access to. Lastly, again because the container contains everything the application needs to run, you can easily have a pretty much identical environment in development, staging, and production.

Now, I like to explain things by example, so let's do a quick live demo.
I'll go to my terminal. I'm in an empty directory on my laptop, there's nothing here, and I'm going to quickly create a Node.js application. First I'll run npm init; this creates an empty Node.js package for me. Then I'll install Express, because this will be a web application, and it's only pulling in a couple thousand dependencies, as usual for Node packages, so that's all right. Now I can create my index.js file; this will be my application. First I import Express, then I import os. Then I create my Express app and define a route, a GET route at the root, that says hi from the system's hostname. Then I set my port to 3000 and listen on it. That's it, that's my application. It doesn't really do much, but it's the application we'll be using throughout this talk. I'll go ahead and save this file, and let's try it out: node index.js. I'll go to my browser at localhost:3000, reload, and there we go: "hi" from my laptop's hostname. The application works. That's pretty much it, so I'll stop it now.
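For reference, here is a minimal sketch of the index.js described above. The transcript doesn't show the file verbatim, so details like the exact greeting string are reconstructed from the demo:

```javascript
// index.js - reconstructed from the demo
const express = require('express');
const os = require('os');

const app = express();

// Root route: respond with a greeting plus the machine's hostname
app.get('/', (req, res) => {
  res.send(`hi from ${os.hostname()}`);
});

// Listen on port 3000
app.listen(3000, () => {
  console.log('listening on port 3000');
});
```

Running `node index.js` and visiting localhost:3000 produces the response shown in the demo.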
Now let's take a look at what it takes to package this application into a container. The first step in creating a container is creating the Dockerfile. A Dockerfile is a file literally named Dockerfile, and it contains instructions on how to build the container image. So I'll create my Dockerfile here and open it in my editor.

Like I said before, a container gets its own file system when it runs, so right now the container we're building has a completely empty file system. We don't want to start from a completely empty environment; we want at least an operating system and Node.js preinstalled, just to make things easier for us. One of the cool things about the Docker ecosystem is that there's a huge library of community-built images we can tap into. Just as Node.js has npm, probably the most widely used package registry, Docker has Docker Hub, the most widely used registry for container images, and one of the images on there is the node image. That's what we'll use as a base for our container. So I type FROM node; this tells Docker to use that image as the base for this container. We'll go with version 13, for Node.js 13, and also say -alpine. If I were to just use node:13 that would also work, but the -alpine part selects a variant of the node image based on the Alpine Linux distribution. Alpine is a very minimal Linux distribution; it contains only the bare minimum you need from an operating system, so it's very small, and by opting into it we keep the resulting container image small and fast to build.

Now that we have a base with Node.js preinstalled, we can start copying our application into the container. First I'll set my working directory; this can be anything, and I like to go with /app. Then I copy my package.json and package-lock.json files into the container. With those files in there I can install my dependencies, so I run npm install, passing --production to optimize for a production environment. With the dependencies taken care of, I copy the rest of my application over: COPY . . copies everything in the current directory into the container, so it brings in my index.js file and everything else in my app directory.

Now that we've got everything in there, we just need to declare the port our app will listen on, and that's 3000, so I say EXPOSE 3000. This doesn't actually do anything; it's only there for documentation, but it's good practice when writing container images so that people who use your image know what to expect. Finally, I set the command that runs when the container starts, and that's node index.js. So this is it, this is our Dockerfile, this is how we'll build our application. When Docker builds this image it reads the Dockerfile and executes it line by line, in order: it starts with the base image and then copies things in, in the order we specified. I'll save the Dockerfile, and now we can build our container.

To do that we run docker build. By default, when you build a container image, it gets a hash as its ID. It looks like a random string (it's not entirely random, it's based on the contents), and it's not very convenient to use, so what we can do is tag it with an additional name. I'll pass -t and tag it kamaln7/node-hello-app. Then I give it the path to the current directory, and once I run it, it starts building the image. You see it executing the Dockerfile step by step: step one, the FROM line; step two, the working directory; and so forth, until the image is built. This is the hash, the image ID I mentioned, and this is the tag we chose for it.

If I scroll back up, you'll see that step one gets a hash, step two also gets a hash, step three, and so forth. What's happening is a result of the way Docker builds its images. Instead of building an image as one single unit that contains everything, each line in the Dockerfile gets its own layer, and all the changes that line makes are recorded separately from the other lines. The resulting image is basically a composition of all those layers applied on top of each other. This is actually really useful, because if I go back to my terminal, open my index.js file, and change the greeting from "hi" to "hello" (a one-line change to index.js), save it, and try to build the image again, we'll see that for the first few steps it says "using cache". Up through step four it says that for each step, and only at step five does it stop. What's happening is that Docker detected we already built an image a few seconds ago, and the only change since that older image is to the index.js file. It's smart enough to know that's the only change, so only at the step where index.js is brought into the container (step five, where we copy everything in) does anything change. For the layers before that, it reuses the exact same layers from the older image and builds the new image with our changes on top of them. It's a really smart way Docker keeps your image builds efficient and fast.
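Put together, the Dockerfile described over the last few steps would look roughly like this:

```dockerfile
# Start from the community node image, Alpine variant, to keep things small
FROM node:13-alpine

# All subsequent paths are relative to /app
WORKDIR /app

# Copy the dependency manifests first so this layer (and npm install below)
# stays cached until package.json or package-lock.json actually change
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm install --production

# Copy the rest of the application (index.js and everything else)
COPY . .

# Documentation only: the app listens on port 3000
EXPOSE 3000

# Command to run when the container starts
CMD ["node", "index.js"]
```

Note the ordering: copying package.json before the rest of the source is exactly what makes the layer caching described above work.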
Okay, so we've built a few images; let's run one, let's try it out. To run our image we use docker run, with a few options. By default, when a Docker container stops, Docker saves it for you so you can start it back up again later. But this is just for testing, and I won't need the container once it stops, so I pass --rm, which tells Docker to delete it when it stops. I'll also pass -d, which runs the container in the background, because I want to keep using my terminal while it runs. And this is where we actually expose the port: I say -p and map port 3000 on my laptop to port 3000 inside the container. Finally, I give it my image name, kamaln7/node-hello-app. Once I run this, I get an ID back, and that's the ID of the container we just created. If I run docker ps, which prints the currently running containers on my laptop, you'll see the container we just made (this is the short version of its ID, just the first few characters of the full ID) along with a few other details about it.

So the container is running and our application should be available on port 3000. Let's try it: back to my browser, reload, and just like that you see a response from the container. You'll notice the "hi" was changed to "hello", and instead of my laptop's hostname you see the container's hostname, which is actually the container's ID. Just like that, we took our application, packaged it into a container, and now we're accessing it through the container running on my laptop.

That's pretty cool, but it's only available on my laptop, right? If I wanted to run this elsewhere, or share it with others, I'd need to push it to a registry first. So I'll push my container image to Docker Hub. To do that I run docker push with the image name, and you'll see it pushing the image, again layer by layer. For some layers it says "layer already exists". That's another nice thing about the layering mechanism: it also keeps your image uploads efficient. We built our image on top of the node:13 image from Docker Hub, and the layers of that image already exist on the registry; that's where we got them from. Docker is smart enough to know it doesn't have to upload those layers again, so it reuses them and only uploads the new layers we built. Now the push is finished, and anyone in the world can take my image name, run a container, and get exactly what I just got on my laptop. It's exactly identical. That's pretty cool.
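Collected in one place, the Docker commands used in this part of the demo (the image name matches the tag chosen earlier):

```sh
# Build the image and tag it
docker build -t kamaln7/node-hello-app .

# Run it: --rm deletes the container once it stops, -d runs it in the
# background, -p maps port 3000 on the laptop to port 3000 in the container
docker run --rm -d -p 3000:3000 kamaln7/node-hello-app

# List the currently running containers
docker ps

# Push the image (and any new layers) to Docker Hub
docker push kamaln7/node-hello-app
```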
So this pretty much concludes the Docker part of this talk. If you have any questions, please type them into the questions box in the GoToWebinar software and we'll answer them at the end. With that done, we can move on to the second part of the talk, and that's Kubernetes.

You might be asking: what is Kubernetes, and why should I be excited about it? Kubernetes is a container orchestration system, and a very simplified version of what that means is that it lets you deploy and manage your containers in a declarative way. There are two styles of deployment, declarative and imperative, and there's one main difference between them. Let's forget about Kubernetes for a second and imagine we wanted to take the container we just built and push it to production. What you could do is go to your DigitalOcean account, create a new Droplet, install Docker on it, run the container, map your domain name to it, and you'd have your application running. If the application crashes, you'd have to manually go in there and fix it. That's an imperative deployment style: you want to achieve something, so you tell the computer, step by step, how to get there.

A declarative deployment style, on the other hand, which is what Kubernetes gives you, means that instead of outlining the exact steps needed to achieve the result you want, you just describe the resulting state you want. For example: I want this container running, and I want my domain name mapped to it. That's all you have to do. You take that description, send it to Kubernetes, and Kubernetes goes off on its own, figures out the exact steps it needs to take to get there, and does them for you. And not only that: it also keeps an eye on everything, so if something goes wrong it automatically detects it and brings things back to the desired state, without you having to manually go in and fix anything.

So, I have a Kubernetes cluster running on my DigitalOcean account. You might have seen the meme that says "there is no cloud, it's just someone else's computer", and I mean, it's funny and it's true: there's no such thing as a magic cloud, it's all actual servers running in data centers, with some layers of abstraction on top. But one really cool thing about Kubernetes is that even though it's built out of actual servers, it lets you think of your cloud infrastructure less as individual units and more as a generic pool, a cloud, of resources. So there is some sort of cloud after all.

A Kubernetes cluster is built from a couple of types of servers, or nodes. First you've got the master node; that's pretty much the brains of the system, it makes all the decisions and is responsible for taking care of everything. The second type is worker nodes, and these are the nodes your containers actually run on; usually you'd have several of them in your cluster. In my DigitalOcean Kubernetes cluster I have three worker nodes, which you can see here, and the master node, which you don't see here, is handled by DigitalOcean for me, so I don't have to worry about it.

The main way you interact with a Kubernetes cluster is the kubectl command-line tool. Going back to declarative deployment: the way you describe your resulting state is with YAML manifests that describe what components you want and how you want them to behave, and you send those descriptions to Kubernetes using kubectl apply. If you've used Kubernetes before, you're very familiar with this command and this style of deployment. But we're not going to do that today, because writing YAML is boring and no one really wants to watch me type it live. Instead, we're going to build our deployment piece by piece, also using kubectl. I do want to highlight that in a production environment you would not do this; you would write those YAML manifests and apply them to your cluster, and that's the correct way to do it. But this is a demo, and I think doing it interactively is a good way to see what each component does and how all the different components work together.
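For comparison, here is a minimal sketch of the kind of YAML manifest being alluded to; the interactive demo below builds essentially the same thing with individual kubectl commands:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-hello-app
          image: kamaln7/node-hello-app
          ports:
            - containerPort: 3000
```

You would send this to the cluster with kubectl apply -f deployment.yaml.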
So, it's time for another live demo. I'll go back to my terminal. One thing about Kubernetes is that everything in it is represented as a resource, kind of like an API resource, and using kubectl you can create, edit, delete, patch, and do all those operations on resources. One example of a resource in Kubernetes is the node resource; these are the worker nodes you saw in the screenshot before. Let's try running kubectl get nodes. This gets all the resources of type node in my cluster, and you see the three worker nodes from the screenshot. Okay, so kubectl is working and connected to my cluster.

Let's get our container running on Kubernetes. I run kubectl create deployment, give it the image name of my Docker image, kamaln7/node-hello-app, and give the deployment a name; I'll go with node-app. Once I run this I see "deployment node-app created", which means it was successful. So right now I should have my container running inside my Kubernetes cluster, using the container image we just built.

Another thing about Kubernetes is that its smallest basic building block is not a container but something called a pod. Just to keep things simple, I'd like you to think of a pod as a container, a fancy word Kubernetes uses for a container. It's not exactly a container, there are some differences, but to avoid confusion you can think of it that way. So let's look at the pods in my cluster: we have a pod running, named node-app plus some random characters, and it's running the container we just made; you can see it was created 44 seconds ago. Our application is now running in Kubernetes.

Now, by default, when you create a bare pod in Kubernetes, if it crashes or goes down, it will not be started back up again. But what we have here is a little different. When we ran kubectl create deployment, we didn't actually create this pod directly; what we created was a resource of type deployment, named node-app. If I run kubectl get all, which lists the most common types of resources in my cluster, you see the pod from before, but also two other resources: a deployment (this is the node-app name we chose, you can see it here) and a replica set, which is another type of Kubernetes resource. When we ran kubectl create deployment, the deployment is what we created; we didn't create the pod or the replica set. The deployment itself created the replica set and the pod. A deployment in Kubernetes is a higher-level resource type that manages pods for you, and you can think of a replica set as another Kubernetes resource that helps the deployment do its job.

Because we have this deployment taking care of our pod, if the pod goes down or crashes, the deployment will create a new pod to take its place. We can see that in action. If I run watch kubectl get pods, it runs kubectl get pods every two seconds and prints the output in my terminal. I'll split my terminal, one pane at the top and one at the bottom, and in the bottom one I type kubectl delete pod and paste in the pod name, kind of simulating a crash.
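The kubectl commands from this part of the demo, collected below; the pod name suffix is random, so the one here is a placeholder:

```sh
# List the worker nodes
kubectl get nodes

# Create a deployment named node-app from the image we pushed
kubectl create deployment node-app --image=kamaln7/node-hello-app

# See the pod, the deployment, and the replica set it created
kubectl get pods
kubectl get all

# In one terminal pane: refresh the pod list every two seconds
watch kubectl get pods

# In another: delete the pod to simulate a crash and watch it be replaced
kubectl delete pod node-app-<random-suffix>
```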
If you keep an eye on the terminal at the top, once I delete the pod you'll see it being terminated, and instantly a new pod is created to take its place. That's the deployment doing its job.

Another thing we can do is run kubectl scale deployment, give it the deployment's name, node-app, and pass --replicas 3. Right now we have one copy of our pod running; this command instructs the deployment to keep three copies of the pod running at all times. I run it, it says "scaled", and if you look up you'll see two new pods being created, so alongside the older pod we now have three pods, three copies of our application running at the same time. And just like before, if something happens to one of those pods, a new one is created to take its place, so we're always at three running pods.

Okay, so we have all these copies of our application running on the cluster; let's access it, let's try it out. Well, not so fast: we can't access our application right now. The reason is that it's only reachable from within my Kubernetes cluster; I can't reach it from my laptop, which is outside the cluster. To be able to access the deployment, we can expose it, just like we did with Docker. So I run kubectl expose deployment and give it the deployment name. There are a few ways to expose a deployment. The first is the type ClusterIP, which exposes it on a private, internal IP address inside the cluster; that's still not accessible from my laptop, so I don't want that. The second type is NodePort, which exposes it on a random port on the worker nodes, and that is publicly accessible, so that's what we'll use. I just need to give it the port my application listens on inside the pod, and that's 3000. Once I run this: "service node-app exposed".

What this actually did was create a new resource, of type service. If we look at the services in our cluster, kubectl get service, you see one named node-app, the type is NodePort, and you also see the random port that was assigned to it. Our application is now actually reachable on that port, so let's try it out. I'll get my worker nodes again, kubectl get nodes, but this time pass -o wide, which tells kubectl to print even more details about the worker nodes. One of the new columns is EXTERNAL-IP, the public IP addresses of my worker nodes. I'll grab one of them and use the curl command to simulate a browser request; it sends an actual HTTP request, just like browsing to it, but without leaving my terminal. I paste in the IP address and the random port, and I get a response back: "hello from" one of our pods, with the name of the pod.

Just like that, we took the Docker image we built, deployed it to our Kubernetes cluster, and we can communicate with it. That's pretty cool, but it's not good enough for production. It's okay for testing or development, but not production, for a couple of reasons. First, I'm using an IP address and a port; in a production environment you'd have your domain name mapped to it, and using just DNS there isn't an easy way to map an IP address plus an arbitrary port to a domain name. It has to be port 80, or 443 for HTTPS.
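The scaling and exposing steps as commands; the IP and port in the curl call are placeholders for the values printed by the two get commands:

```sh
# Keep three copies of the pod running at all times
kubectl scale deployment node-app --replicas=3

# Expose the deployment on a random port on every worker node
kubectl expose deployment node-app --type=NodePort --port=3000

# Find the assigned NodePort and the nodes' public IP addresses
kubectl get service node-app
kubectl get nodes -o wide

# Request the app through any worker node's external IP
curl http://<worker-external-ip>:<node-port>
```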
The other thing is: remember when I said Kubernetes lets us think of our cloud infrastructure less as individual units and more as a generic cloud of available resources? We're using the IP address of one specific worker node here, and that goes against the whole cloud-of-resources idea, because if this worker node goes down, if it stops responding for any reason and needs to be replaced, our application goes down alongside it; we're depending on its IP address directly.

So what we can do is use a different type of service to expose our application. One other nice feature of kubectl is that it lets you edit resources in place. I run kubectl edit service node-app, and kubectl fetches the YAML manifest backing the service that's running right now in the cluster and opens it in my text editor. This is the actual YAML config for the live service. I scroll down to the type and change it from NodePort, the type we used, to the third type: LoadBalancer. I'll also go up to the port and change it from 3000 to 80. targetPort remains set to 3000; that's the port used inside the container, and that hasn't changed. Once I save the file, kubectl looks at the changes I made, validates them, and sends them back to the cluster, and the service gets those changes applied.

If I get the services again, you'll see the node-app service is now of type LoadBalancer and its external IP address says "pending", whereas previously it had no external IP at all. What's happening behind the scenes is that Kubernetes is creating a native DigitalOcean Load Balancer for me and attaching it to this service. If I go to my browser, to my DigitalOcean control panel, you'll see this new load balancer being created, and once it loads you'll see the worker nodes being attached to it. These are the worker nodes in my cluster, and this is handled automatically: if I create a new worker node, or replace any of them, those changes are applied to the load balancer too.

The way this works is actually kind of cool. It's not Kubernetes itself that creates the load balancer; it's a separate program running inside my cluster. When DigitalOcean provisioned this managed Kubernetes cluster for me, they installed what's called a cloud controller manager. The cloud controller manager is a kind of program, commonly used in cloud-hosted Kubernetes clusters, that watches for a few special types of resources, like services of type LoadBalancer, and handles them on behalf of Kubernetes. The reason this is neat is that if I did everything we've done here, all these commands and manifests, on a Kubernetes cluster on, say, Google Cloud, without changing anything, I'd get a native GCP load balancer instead of a DigitalOcean one, because it's handled by the cloud controller manager. It's a really nice, cloud-agnostic way to handle provider-specific resources.

Now our load balancer is created. I'll get my service again, and you'll see it has an external IP address now. This is a public IP address, and it's where my application is now available.
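After the edit, the relevant part of the service manifest would look roughly like this (a sketch; the live manifest contains more fields that Kubernetes fills in):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  type: LoadBalancer   # changed from NodePort
  ports:
    - port: 80         # the load balancer listens on port 80
      targetPort: 3000 # the app inside the pod still listens on 3000
  selector:
    app: node-app
```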
So if I curl it again, and this time I don't need to specify a port because we set it to 80, I get a response back from one of the pods, and this time it goes through the load balancer first. Instead of going directly to a worker node, to a pod, the request hits the load balancer, which forwards it to any one of the worker nodes. If I run it a couple of times, you'll see I get a different pod name each time; these are the hostnames of the pods, and I got a different one here, here, and here. That's the load balancer doing its job: as its name implies, it balances all this incoming traffic, all these incoming requests, across all of my worker nodes and all of my pods. It also monitors my worker nodes for me, so if one of them stops responding and becomes unhealthy, the load balancer is smart enough to stop forwarding requests to it, and our users won't see any downtime. And if one of the pods goes down, no requests will be forwarded to it either. So we have a couple of layers of protection against user-visible downtime.

Okay, so we've got our application running and a load balancer handling all the incoming traffic. To quickly sum up what we've done: we took our Node.js application and packaged it into a Docker container using the node image from Docker Hub. We built the image and pushed it to Docker Hub, so it's now accessible to everyone. Then, within my Kubernetes cluster, we created a deployment that ran one copy of my pod, scaled it up to three copies, and created the load balancer that takes all the incoming traffic from the internet and balances it across all the copies of my application. Now I can take my domain name, map it to this load balancer, post it on Reddit and all those social platforms, and know for sure my application can handle the incoming traffic. We can also scale the application up to more replicas if we get more traffic than expected; if the app goes viral, we can scale it up to, I don't know, seven replicas, depending on what it needs, and once traffic calms down we can scale back down to one replica. That's another cool thing about Kubernetes: it lets you maximize the efficiency of how you use your cloud resources.

Okay, so that's it. I hope this was helpful; thank you for tuning in and listening. The link here goes to a page with the slides I used for this talk and all the code and commands we ran. There's also a $100 DigitalOcean credit included in that link, so if you don't have a DigitalOcean account and you want to try out Kubernetes, you can sign up using it and get a free trial with $100 of credit to try out everything you've learned about Kubernetes and Docker. The QR code in the top right corner is that same link. All right, let's look at questions.

[Samantha] Thanks so much for the presentation, Kamal, and to the audience, thank you for joining us. I'm Samantha from the community team at DigitalOcean, and I'll be moderating the question and answer portion. Like Kamal said, we'll answer as many questions as we can live, and if your question is one we don't get to today, we'll answer it later and share it with you via email along with the video recording.
[Samantha] First question for you, Kamal: can you describe, briefly, how your terminal connects to your DigitalOcean tools, Kubernetes for example?

[Kamal] Okay, I should unmute myself. Yes, so the question is how my terminal connects to my DigitalOcean Kubernetes cluster. When you first create a cluster in DigitalOcean (let's go to my cluster here), you can download a config file, and this is the config file with all the authentication details that the kubectl command needs in order to connect to the cluster. There's also another way to do this, using the doctl command-line tool, DigitalOcean's command-line API client: you run doctl kubernetes cluster kubeconfig save, and it takes care of configuring kubectl for you, filling in all the authentication bits needed to connect to the cluster.
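The doctl route mentioned in the answer looks like this; the cluster name is a placeholder for whatever you named your cluster:

```sh
# Fetch the cluster's credentials and merge them into ~/.kube/config
doctl kubernetes cluster kubeconfig save <cluster-name>

# Verify that kubectl can reach the cluster
kubectl get nodes
```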
[Samantha] Great. Does DO provide a private Docker registry service, or do we need to spin one up ourselves?

[Kamal] Yes, this is actually a great question, because it's something we recently launched: DigitalOcean Container Registry is now available in early access to everyone. You can use it to create your own private Docker container registry. For example, you can see I have my own registry here, at this URL; it's completely private, and you can use it with your Kubernetes cluster and also with regular Docker containers.

[Samantha] Awesome. What is a good use case for a ClusterIP service?

[Kamal] A good use case for a ClusterIP service is, say, a microservice-based architecture, or really any private program that accepts connections from other applications. Let's say you have a database running and you want your Node.js application to connect to it: you'd create a ClusterIP service and connect to the database through it from your application running inside Kubernetes, but it won't be reachable from outside the cluster. So ClusterIP services are for connecting to private deployments inside the cluster, and the other two types, NodePort and LoadBalancer, are for publicly accessible services.

[Samantha] What kind of DigitalOcean account would you recommend for setting up a Kubernetes cluster? How much does it cost at a minimum, and how many cores and how much RAM will I need?

[Kamal] For a Kubernetes cluster, first, the master node is completely taken care of by DigitalOcean, so you don't pay for it. The worker nodes use the exact same plans as regular Droplets. Going to the create page here, I can set up my worker nodes; the very minimum is one worker node, though you'll see this notice that if you want to prevent downtime during upgrades or maintenance, you need multiple nodes running. But the bare minimum is one node at the smallest plan of $10 a month, which comes with 2 gigabytes of memory and one virtual CPU, just like a regular Droplet.

[Samantha] Cool. Should load balancers also be scaled? What if they go down?

[Kamal] Load balancers are highly available by default. A load balancer created by DigitalOcean has multiple instances running; it's managed by DigitalOcean for you, and maintenance, or bringing it back up if it crashes, is handled automatically. So it's scaled by default, pretty much.

[Samantha] Does docker push match the container tag to the Docker Hub tag and upload there automatically? What if the tags don't match?

[Kamal] Yes. When I ran docker push here, the tag that was used is latest; that's the default tag when you don't specify one. If I build a new image using the exact same image name and tag and push it again, it overwrites the existing image on Docker Hub; that's how you'd push new versions. If the tag doesn't exist, say I push a new tag like version 1.0.0, it gets created on Docker Hub. So it's synced with the tag names you push using the docker command.

[Samantha] Which layer of the Docker build process is read-only? What are the security issues related to data on the host that is mapped to a data directory in the container?

[Kamal] Okay, there are two questions here. First, what's read-only: nothing is really read-only from the point of view of the application running inside the container; it has access to everything, just like on any Linux operating system. You can run as a different user and change ownership and permissions to restrict that, but the container only ever has access to its own file system; it has zero access to the file system of the host running it. Now, if you want to share a directory between the host and the running container, you can mount a volume, which shares access between the two, so the container can overwrite files in there. But when you configure the volume, there's an option to set it as read-only, so the container can only read the files, not write to them.

[Samantha] In the demo you chose to scale your application manually to 3 replicas. Is it hard to make the up- and down-scaling automatic?

[Kamal] That's a good question. I did it manually, but you can definitely configure it to be automatic. There's something called horizontal pod autoscaling, which monitors CPU usage, maybe memory, and other metrics you choose, and if it notices the load is getting high and the application can no longer handle all the incoming traffic, it automatically scales it up and down between the minimum and maximum you set, depending on the traffic.
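A sketch of the horizontal pod autoscaling being described, using kubectl autoscale with made-up bounds (this also assumes the cluster exposes resource metrics, for example via metrics-server):

```sh
# Scale node-app between 1 and 7 replicas, targeting 80% average CPU
kubectl autoscale deployment node-app --min=1 --max=7 --cpu-percent=80

# Inspect the autoscaler
kubectl get hpa
```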
[Samantha] What kind of customization does DigitalOcean offer for a Kubernetes cluster?

[Kamal] You can choose different Kubernetes versions, obviously, and you can create different worker pools. So if you want, say, three standard nodes, you can also create another pool with CPU-optimized nodes, with dedicated CPU cores, for very heavy workloads, and specify which deployments go on which worker pools. You can customize the cluster in that way. Other than that, there isn't much to customize about the cluster itself; there's a settings page where you can enable automatic upgrades and set their timing, but everything else is standard Kubernetes cluster behavior. If you have any specific questions, I'd be happy to answer them.

[Samantha] Cool. You created three nodes and three pods. When you created the pods, did Kubernetes deploy them over the three nodes, or only over one?

[Kamal] This is a good question. It should deploy them on different nodes, and you can check for sure: take the pod's name and describe it, kubectl describe pod, and you'll see which node it was deployed on. I can't find it right now, but you'd be able to see which node each pod landed on, and by default I believe it should distribute them across all the worker nodes.

[Samantha] What about a secret key or DB password? Do we have to upload those to a Docker Hub repository?

[Kamal] No, uploading secrets is never something you should do in your Docker images. If you use environment variables, you set them when you run the container, or, when you run the deployment, you can set any environment variables there. Kubernetes also has a resource of type Secret that can hold secrets, and you can mount those secrets either as environment variables on the containers or as files in a directory. But they should always live outside your Docker images.
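A sketch of keeping a secret out of the image, as described; the secret name and value here are made up for illustration:

```sh
# Store the password in a Kubernetes Secret instead of baking it into the image
kubectl create secret generic db-credentials \
  --from-literal=DB_PASSWORD=s3cret

# Inject every key in that secret into the deployment's containers
# as environment variables
kubectl set env deployment/node-app --from=secret/db-credentials
```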
[Samantha] Why should I use Kubernetes instead of just building Docker containers for deployment?

[Kamal] It really depends on your use case. Personally, for a personal project I just use Docker; I don't use Kubernetes. For very high-traffic websites and important production environments, the benefits Kubernetes provides are all the automatic health checking we've just seen, plus the cloud-native integrations with load balancers and everything else. So it depends on what kind of workload you're expecting, what kind of environment, and what resources you have to manage that environment. Both are completely valid; it just depends on your use case.

[Samantha] Great. When you scale database containers, will each one contain different data?

[Kamal] It depends on how the database is deployed. If you deploy it like a normal container, using the container's own file system, they'll have different data. Just like with Docker, where you can mount a directory inside the container to hold the database's data, you can do the same on Kubernetes using a volume. Most database software actually keeps its own copy of the data and periodically syncs the different copies so they stay up to date, but each copy has its own underlying storage.

[Samantha] Does it change the process if you have nodes in different server locations, such as SFO or NYC?

[Kamal] Right now a Kubernetes cluster can only be deployed in one region; you can't create nodes in multiple data centers. What you can do is create multiple Kubernetes clusters, but that's a bit complicated and really outside the scope of this talk.

[Samantha] In a situation where a containerized application needs to access a file system or database that is common to all instances, how does Kubernetes help, if each pod has its own version of the file system?

[Kamal] If you need to share something across multiple pods, you could copy it into the Docker image, though in the case of databases that's not a good idea, or you can mount a volume. By default, on DigitalOcean Kubernetes clusters, a block storage volume can be mounted on only one pod at a time, but some of the options you have are running a network file share like NFS or another shared file system. There's also a cool Kubernetes project called Rook, rook.io I believe, that basically spins up a Ceph block storage cluster inside Kubernetes for you, so you can mount the same volume on multiple pods. But it's a bit much for just a simple file; it's really meant for heavy production environments.

[Samantha] What's the best practice to get logging from the Kubernetes nodes?

[Kamal] You have multiple options here. You can take the logs from the containers' output themselves and send them to an external service. Remember when I said a pod is kind of like a container? A pod is basically a wrapper around containers; a pod can be one container or multiple containers, and if it has multiple containers, they're represented as a single unit in Kubernetes. So you can run your application like we did just now, save all the logs to a file on the file system, and run another container inside the same pod that watches those logs and ships them to an external logging service. Those are two options for shipping logs directly from the pods. The other option is listening for the Docker engine's logs on each worker node: zooming out a bit, outside the pod, you run one program per worker node that ingests all those Docker logs and ships them to an external service.

[Samantha] How do we sync data between load-balanced nodes?

[Kamal] It depends on what kind of data you're talking about. You'd usually use an external database; if you have a MySQL database, for example, it would be external to the pods themselves and would handle sharing the data for you. If you're talking about files, you can share volumes like we mentioned before. But it really depends on what kind of data you need to share.

[Samantha] Does Kubernetes take care of rolling deployments?

[Kamal] Yes, it takes care of rolling deployments for you. The deployment we have right now handles the process: when you push a new version of the container image, by default it uses a rolling upgrade to roll out the new version. First it creates the new pods, and then it gradually deletes the older pods. This is customizable, but it's enabled by default.
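A sketch of what pushing a new version looks like with the demo's deployment; the tag is hypothetical, and the container name here assumes the default that kubectl create deployment derives from the image's base name:

```sh
# Point the deployment at a new image tag; this triggers a rolling update
kubectl set image deployment/node-app node-hello-app=kamaln7/node-hello-app:1.0.0

# Watch new pods come up before the old ones are removed
kubectl rollout status deployment/node-app
```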
[Samantha] Does creating a LoadBalancer service in Kubernetes also create a load balancer service on DigitalOcean?

[Kamal] Yes, exactly. It creates a load balancer in DigitalOcean and links the two together. One important thing to note, though: when you create such a service, be sure not to make any changes to the underlying load balancer yourself; make those changes on the service instead. Changes made to the service are applied and pushed to the load balancer, but if you modify the load balancer directly, those changes will be reset, because Kubernetes periodically makes sure the load balancer matches the service. So always edit the service itself, never the load balancer directly.

[Samantha] Would we be able to use Helm to install custom packages on the cluster?

[Kamal] Yes, definitely.

[Samantha] Great. Let's see, there are a lot more questions here; I'm trying to read through the ones we have time to get to today. All right: if using microservices and a database cluster, how does Kubernetes handle the transaction of data? Do you create multiple database pods with a single master, or does every new Node.js pod connect to one master?

[Kamal] Assuming the database is also running on Kubernetes, you would, just like we said before, create a ClusterIP service that maps to the database itself, and all the Node.js pods would connect to that same service. Usually you have one master node in a database and connect to it directly. But running a database inside Kubernetes is a little complicated; it's definitely doable, but the recommendation is to use an external managed database, because it makes things much simpler.

[Samantha] Can I manage application deployment and versioning using Kubernetes or DigitalOcean, or should it be done by an external service like GitLab? Is an external tool needed, and do you have an integration with GitLab?

[Kamal] Assuming I understood the question correctly, it's about how to get new versions pushed to the cluster. You can do it however you like; it can be part of your CI/CD workflow. Just like we did with kubectl on my laptop, as the last step in your continuous delivery workflow you'd run the necessary kubectl commands to update the image version and push it to the cluster. You can do that automatically, or manually yourself if you want. I know GitLab integrates with Kubernetes, but I haven't used that feature, so I can't say for sure how it works.

[Samantha] Is it possible to create a UDP load balancer, or a TLS load balancer, using the DigitalOcean Kubernetes service?

[Kamal] So, when you create a Kubernetes resource, any Kubernetes resource, one of the fields lets you attach annotations to it, and the cloud controller manager, the program that creates the load balancer, also looks at those annotations. If I go to my load balancer here, the one we just created, you'll see it uses the TCP protocol by default, but you can add an annotation to the load balancer's service that says, hey, change this to HTTPS or HTTP/2, and you can also select the TLS certificate using annotations. This is all documented in the DigitalOcean cloud controller manager repo; you can find it here, with all the documentation on how to apply those annotations and configure your load balancer, plus some examples.
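A sketch of the annotation approach being described, with annotation names as documented in the digitalocean-cloud-controller-manager repo; the certificate ID is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-app
  annotations:
    # Terminate HTTPS at the DigitalOcean load balancer
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "<certificate-id>"
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 3000
  selector:
    app: node-app
```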
[Samantha] Is there a way to see how much load a pool is currently undergoing? Can this be done with an HTTP call?

[Kamal] Sorry, can you repeat that, please?

[Samantha] Is there a way to see how much load a pool is currently undergoing, and can this be done with an HTTP call?

[Kamal] It's possible, in a way. I don't remember for sure whether it's part of the default Kubernetes services, but there are a couple of services you can install that monitor the nodes for you and expose metrics on HTTP endpoints; kube-state-metrics is a very popular one. So yes, there are programs you can run to expose metrics; I just don't remember whether any are part of the defaults.

[Samantha] Thanks. Does the load balancer support MQTT?

[Kamal] What's that again?

[Samantha] MQTT, a lightweight messaging protocol.

[Kamal] I'm not familiar with that one, but the load balancer does support the protocols we mentioned, so if MQTT runs over those, then it works.

[Samantha] Cool. Well, we have three minutes left and quite a few more questions, so we may not have time for everything. Kamal, a few people asked about demonstrating autoscaling; do you have time to do a quick demo before we hop off?

[Kamal] That would have been very awesome to see live, but there are quite a few steps to setting it up, so I won't be able to demonstrate it live, I'm afraid.

[Samantha] No worries. All right, thank you so much, everybody, for your wonderful questions. I know there are more questions out there, especially ones that involve longer demos; we'll try to answer them as best we can via a Q&A doc and send that to you over email if we don't get to them. We're going to send out a survey right after this, when the webinar closes, so tell us what you'd like to learn about next and we'll be happy to produce more of these Tech Talks for you. Thank you so much for joining, thank you Kamal, and we hope to see you again soon.

[Kamal] Awesome. Thank you, everyone!
Info
Channel: DigitalOcean
Views: 17,307
Keywords: DigitalOcean, Digital Ocean, Cloud, Iaas, Developers, kubernetes, node.js
Id: T4lp6wtS--4
Length: 58min 52sec (3532 seconds)
Published: Wed May 27 2020