NestJS Kubernetes Minikube

Captions
What's up guys, in this video we'll dive into the world of container orchestration with Kubernetes, exploring how it works and why we would use it, and then demonstrate a hands-on example using Minikube with NestJS so we can develop with Kubernetes locally.

What is Kubernetes? Kubernetes (or K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a portable and consistent environment for deploying applications across different infrastructure environments, enabling organizations to avoid vendor lock-in and maintain flexibility in their choice of cloud providers or on-premises infrastructure.

Let's take a quick look at some of the problems with manual deployments of containers: containers might crash and need to be replaced, we might need more or fewer containers depending on traffic spikes, and incoming traffic should be equally distributed across the multiple instances of a container.

So why K8s? With Kubernetes we can create configuration files to specify our deployment requirements, things like the desired number of containers. We can then give this YAML config to any cloud provider, and it can also run locally with Minikube, as we'll soon see. This gives us a standardized config that is independent of third-party providers, but it can be used with a cloud provider, which will create resources and adhere to your deployment specifications. Just to note: Docker Compose is a great tool for local development, but it creates all of your containers on the same machine. Kubernetes is like Docker Compose that works across multiple machines, which is what we typically want when deploying containerized services such as microservices in production.

We're going to start off by looking at the cluster architecture diagram so we can quickly familiarize ourselves with the concepts needed to make use of Kubernetes. The first thing you'll note is that we have these things called nodes, and these are just machines: they could be EC2 instances or other virtual machines. Each node holds pods; a node can contain one or more pods. Typically you'll have one pod, representing one microservice, per node, but it's possible to have multiple pods, as shown in the diagram. Node one and node two are known as worker nodes, and these machines run the containers, our microservices. Each worker node also contains a kube-proxy and a kubelet. The kube-proxy gives a container a way to communicate with the outside world: it handles the inbound and outbound requests so we can connect from the internet to our container. The kubelet is a service on the worker node that allows communication between the worker nodes and the control plane (you'll also see the term master node), such that the control plane is able to manage all of the pods on the worker nodes.

The control plane interacts with the worker nodes to control them, and through the kube-apiserver it can also send instructions to the cloud provider, which can then provision resources. The control plane also contains a scheduler and a controller manager. You have a whole bunch of worker nodes, and the scheduler selects which worker nodes new pods should be placed on. It works closely with the kube-controller-manager, which watches and controls the worker nodes overall and corrects the number of pods. The kube-apiserver is the component on the control plane that gives each kubelet a way to communicate with it. Finally, the cloud-controller-manager is a layer that sits on top of all of that and is specific to the cloud provider: when we write these K8s configuration files, the setup is cloud-agnostic, but when we deploy to the cloud, the cloud-specific layer allows, say, AWS to provision the particular resources, like the EC2 instances.

That's the cluster architecture in a nutshell, and the main concepts we need to know. Just to note, you'll still need to actually create things like a load balancer and file systems on AWS, and create your node instances; you still need to write the code, create your NestJS API, and hook it up to different AWS services. What Kubernetes will do is manage your deployed application: it will make sure it's running, restart it if it crashes, scale the number of containers as needed, and do all of that across different EC2 instances or virtual machines. With that, you can see it's quite a powerful orchestration and management tool for containerized applications such as microservices.

We need two things to be able to create clusters and interact with them locally: kubectl and Minikube. On the Kubernetes website you can see how to install kubectl, which is just a tool for sending instructions to the cluster. This whole thing is known as a cluster: the worker nodes, the master node, everything in the control plane, everything that makes up your whole application. What kubectl does is give us a way to send instructions to the cluster, to create, update, or delete a deployment, for example; those instructions go to the master node, so we can say we want to create more pods for a container.
kubectl is needed for communicating with the cluster both locally and remotely, so we'll still need it in a proper production deployment as well, but we'll use it here in this local deployment. Go ahead and install it. If you're on a Mac with an M1 or M2 chip it's quite simple; you can just go through the instructions directly. I usually use Windows, but I actually switched to a Mac because everything with Docker just seems to run smoothly on a Mac for some reason, so if you have a choice between the two, I'd recommend the Mac. However, if you don't have an M1/M2 chip, or you're on a Windows machine, you might need to get what's known as a hypervisor; that's covered in the instructions, and it's just an extra step.

Then there's Minikube. Minikube is a tool for testing Kubernetes locally: it uses a virtual machine on your machine to create a cluster there. Even though everything will be on our local machine, each of the individual services that we would expect to be deployed on different machines (EC2 instances or whatever) gets its own port number, simulating what it would be like if they were deployed on different machines. If we test on Minikube, we only need to make a couple of tweaks to be able to deploy things for real. So go ahead and get kubectl and also get Minikube; we'll need both.

As I mentioned before, you'll also need Docker Desktop. I'm going to use Docker Desktop as the driver, because it integrates really nicely with M1/M2 chips, so I don't need to install anything else. If you don't have one of those machines, you'll need a hypervisor; you could use something like VirtualBox, which I think works on both platforms and would probably be the one I'd use. You just install it and select it as your driver when you run minikube start, but I'm going to use the Docker driver. Another thing to note is that you should have a Docker Hub account. This is super easy: go to hub.docker.com and log in or sign up, and then you can create your images. Note that in Docker Desktop you need to be signed in to your Docker account so everything runs smoothly, because, as you'll see, we've got Minikube running and we want to interact with it.

Finally, before we get started: all the code is in this repo, and you'll want to clone it if you want to follow along step by step, or you can just watch what I'm doing.

With that said, let's jump into the code. I'm going to open Visual Studio Code, and as you can see, we have a basic NestJS application; it's essentially just a hello world application, and you can see the getHello method. I've also created another GET endpoint, /crash, which just throws an error. The reason is that we want to crash our server and make sure Kubernetes redeploys another container, patching any errors that happen. I've also got a simple file POST and GET endpoint. Looking at it quickly: if we send a JSON object with some text from Postman, we write a file called temp.txt; at the moment it's not going to append, it just overwrites the file.
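The repo's endpoints aren't shown line by line in the video, so here is a minimal sketch of what such a controller could look like. The class, method, and file path names here are my assumptions, not necessarily the repo's actual code, and it assumes a standard NestJS project with @nestjs/common installed.

```typescript
// Hypothetical sketch of the controller described above; the real repo's
// names may differ. Assumes a standard NestJS project setup.
import { Controller, Get, Post, Body } from '@nestjs/common';
import { promises as fs } from 'fs';

@Controller()
export class AppController {
  // Plain hello-world endpoint; the liveness probe later hits this path
  @Get()
  getHello(): string {
    return 'Hello World!';
  }

  // Deliberately throws so we can watch Kubernetes restart the pod
  @Get('crash')
  crash(): never {
    throw new Error('Simulated crash');
  }

  // Overwrites (not appends to) temp.txt; this is what the persistent
  // volume will preserve across container restarts
  @Post('file')
  async writeFile(@Body() body: { text: string }): Promise<void> {
    await fs.writeFile('text/temp.txt', body.text);
  }

  @Get('file')
  async readFile(): Promise<string> {
    return fs.readFile('text/temp.txt', 'utf8');
  }
}
```

The /crash endpoint matters later: it lets us confirm that the liveness probe notices a broken container and replaces it.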
We're doing that to demonstrate persistent volumes, because when we turn off our containers and exit, we don't want to lose any data; and we'll see that this is very similar to what you'd do for a database. If you're interested in seeing a proper deployment on AWS with NestJS and Postgres, let me know and I'll make a video if there's enough demand. Pretty much all the steps we'll do today, about 95% of them, you'll be able to reuse for a proper deployment; obviously there will be some cloud-specific stuff and some tweaks, but this is a really good first video for understanding how K8s works and how to develop locally, specifically for a NestJS project. As you can see, we create this temp file with a POST and we can GET it back as well, so nothing too fancy here.

The first thing to do is open the command line (I'll just run clear) and check that we've installed things properly. Run kubectl version --client; if you get output, kubectl is installed, and that's good. The other thing to check is Minikube. You can just go ahead and start it (I've already done this) by running minikube start --driver=docker if you're using Docker, or pass virtualbox if you're using VirtualBox, in case you have a hypervisor rather than using Docker on the latest M1/M2 chips. I've already got that running, and we saw it running before in Docker Desktop.

With that, we can go ahead and create a repo on Docker Hub. Let's go to Docker Hub, create a repository, and call it nestjs-k8s.
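The two checks just described look like this in a terminal (the Docker driver is what's used in the video; substitute your hypervisor's name if you use one instead):

```shell
# Confirm kubectl is installed (client only; no cluster needed yet)
kubectl version --client

# Start a local cluster; use --driver=virtualbox with a hypervisor instead
minikube start --driver=docker

# Optional: confirm the cluster components are running
minikube status
```

These commands require kubectl and Minikube to already be installed, so they are shown here only as a reference for the steps in the video.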
I'm going to add -pv2 because I've already got a pv1. Oh, and that should be nestjs, so: nestjs, then k8s (the 8 in K8s just represents the number of letters between the K and the s in Kubernetes, in case you're interested why k8s appears here; I've already done one of these). The pv stands for persistent volumes, and the 2 is just because I've already got one repo for that. So this is just going to be a NestJS image which works with Kubernetes and persistent volumes. Go ahead and create that. Now we can see we've got this repo, and if you've never used Docker Hub before, it's basically like GitHub but for your Docker images.

At the root of the project we can now create a Dockerfile, and we'll also create a .dockerignore file. In the .dockerignore, what I want to do is ignore node_modules (in case you've built locally), ignore the Dockerfile itself, and also ignore all of the YAML files, which we haven't created just yet; essentially Kubernetes is all about configuration in these YAML files, which are just the set of instructions. We specifically don't want to include the Dockerfile and the configuration files in our image, because they can potentially interfere with how Docker Hub does the build and how you retrieve the image. It's probably all fine either way, but it's best to lean on the safe side. So we create this .dockerignore file, and we also create the Dockerfile.

If I look at my Node version, I'm on 18.3.0, so I want to make sure I use the same version: we get the node image from Docker Hub, specifically version 18.3.0, just to avoid any versioning errors. Next I set the working directory; this will be the main directory we're in on the actual container itself. I'll just call it app (call it whatever you want). Then I copy my package.json over, and we might want the lock file as well, so package*.json, into our working directory. Then I run npm install, which installs all of our node modules on the actual container, and after that we can copy everything else over from this root directory (it's important to note that we're in the root). Then we want to build the project, so we run npm run build; if you look in package.json, the build script just runs the nest build command. If we look at main.ts, we see we're listening on port 3000; this is just the standard hello world boilerplate. Because we're creating a container, even though it's on our computer it's running in a virtual machine on our computer, so to be able to interact with the port that the container exposes, we need to EXPOSE port 3000, so we're able to access it from our non-virtualized machine. Finally, building a NestJS project creates a dist folder with a main.js file in it, and since we'll have Node installed (the specific 18.3.0 version), the start command is node dist/main; by default Node resolves the JavaScript file there. Then just save the file.
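Putting those steps together, the Dockerfile looks roughly like this. The Node version and the app directory follow the video; treat it as a sketch rather than the repo's exact file.

```dockerfile
# Pin the Node version to match the local environment, avoiding version drift
FROM node:18.3.0

# All subsequent commands run relative to /app inside the container
WORKDIR /app

# Copy package.json (and the lock file) first so the npm install
# layer stays cached when only source files change
COPY package*.json .
RUN npm install

# Copy the rest of the source (node_modules, the Dockerfile, and *.yaml
# are excluded via .dockerignore) and compile NestJS to dist/
COPY . .
RUN npm run build

# The app listens on port 3000 (see main.ts)
EXPOSE 3000

# Run the compiled entry point
CMD ["node", "dist/main"]
```

From here the image can be built and published with `docker build -t <your-dockerhub-user>/<repo> .` followed by `docker push <your-dockerhub-user>/<repo>`.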
Now that we've built the Dockerfile, we can push the image to Docker Hub. Basically what I want to do is run docker build -t to give it a tag, and this is where we give it the tag of the repo we just created. So: docker build -t johnpeping/ followed by the repo I created (johnpeping being my Docker account username), and I want to build the folder I'm in, so I add the dot at the end. That builds our Docker image, running through the steps we've outlined. Then we want to make it available on Docker Hub, so we run docker push with that same repo tag; notice we didn't have to include any of the Kubernetes files. This will take a little while, and if we go back to Docker Hub and refresh the page (it might take a moment), we can see we've got the latest tag there, so we've pushed our image to Docker Hub.

Now we can go ahead and create our Kubernetes configuration files; these are just YAML files that allow you to provision resources. You could put them in a kubernetes folder (you'll often see folders named k8s or kubernetes), but since we've got a pretty basic example here, I'm just going to do the standard thing and put them in the root. The first file we want to create is going to be called deployment.yaml. It doesn't have to be called deployment.yaml; it could be anything you want, or you could make it more descriptive with your microservice name, say todo-deployment.yaml. In this file we're telling Kubernetes how we're going to do things; Kubernetes will just read this file. We're specifically doing a Deployment here. You'll see other kinds for other purposes: things relating to storage will be a different kind, and the way to communicate with the outside world will be a Service. But we're specifically talking about the pod deployment right now, the pod which contains the container. A pod can also have its own volume, although we want a persistent volume, and we'll set that up too.

We start with the basic config, where we specify the version. I've got the Kubernetes extension for VS Code, and if you have that, you can just type deployment and you'll get boilerplate (I'll close the terminal for now and make the editor a bit bigger). By default we get apps/v1; that's just a default you can find on the Kubernetes website, so leave it as is. We've got kind: Deployment, and we've got metadata, where we just need to name our app something. I'm going to call it nestjs-api-pv2. It doesn't actually have to be called the same thing as what I named it on Docker Hub (I didn't even name it the same thing anyway), so I'll leave it like that, just to show that you don't need them to match.

Now, under spec we should have replicas. replicas is how many instances you want, essentially. We just want one, but if you wanted more than one you could specify that here, and it's really easy: start off with one, then change it to two and reapply the changes, and you'll see your project scale as needed. That's one of the nice things about Kubernetes.
It's just a super simple way to manage everything from this YAML file. Next we have the selector, where we match the labels. The key doesn't have to be called app, by the way; we could call it api, we just have to make sure it's called api everywhere, and of course the value can change too. Let's call the value nestjs-api. We're specifying that this Deployment has this particular specification: one replica (the number of desired pods) and this matchLabels entry. Because you can have multiple things running here, we need a way to reference exactly which pods this Deployment controls, and that's why the template's metadata.labels must match the selector. This outer spec is the spec of the Deployment, and the nested spec is the spec of the pod: remember that the Deployment can have more than one pod, but the nested part describes what happens within the pod itself.

Here we can define the containers: we give one a name (I'll call it api), and then the image. The image is going to be our repo on Docker Hub. Docker Hub is the standard registry, so you can just put the repo name; if you're using something else, I think you need to put in the full URL. Kubernetes will just go to Docker Hub and get our image, which has all of the software we need (basically Node, the installation steps, and everything we've seen in the Dockerfile) for the particular container, so when it creates containers, all of that is taken care of for us and we don't have to do it over and over again.

Something you can add here is a tag after the image name for whatever version you want. We only have one version, but you can imagine making changes to your image (maybe you change the Dockerfile or change something in your application) and rebuilding it; typically we just want to get the latest one, so you can specify that. An alternative is the imagePullPolicy, which we can set to Always. If we hover over it (the good thing about this Kubernetes extension is that if you ever forget what something is, you can hover over it and it tells you), we can read that imagePullPolicy is one of Always, Never, etc., and Always means the kubelet always attempts to pull the latest image, and the container will fail if the pull fails. You can specify pretty much anything in this configuration file as you need it, so you don't have to memorize anything at all; just have a general understanding and tweak your configuration to your project requirements as needed.

Next, we're going to need some ports (the ordering of these fields doesn't really matter, but the indentation and colon syntax does). The containerPort is just going to be 3000. If we look at the resources, these are just some defaults. All we're doing is saving a small text file, not much more than a line, so we really don't need much at all; you can play with these numbers for your requirements, and you don't even have to include them, but it's a good idea to have limits so you don't exceed your resources and blow the budget. Here we've got cpu: 500m and memory: 128 megabytes.
Those are the limits; let's just keep the defaults, we don't really need to change them. You could also, for example, have a requests section: you could have the CPU as 0.1 and the memory as 128 megabytes. I actually used different values in the version I've already pushed to GitHub, so I'll change the 500m limit to 0.5 and make the memory 256, not that it needs to be, but just to keep things consistent with what I've done previously.

Pretty basic stuff so far: we've got the ports. We'll also want something known as a volume mount, but we'll come back to that in just a second. One thing to note first: how does Kubernetes know whether your container is up? How does it know when to restart a container, shut one down, or redeploy one? Basically that's built in: by default it periodically checks your container (say every 10 seconds), but you can override that if you wish. Let's move the ports up a bit, so we've got the name, the image, the imagePullPolicy, and the ports, and then we can add something else called a livenessProbe. A liveness probe is like a health check. We can use httpGet on the path / (which I think is the default anyway) and specify the port, 3000. In addition to the httpGet, we can set periodSeconds: every 10 seconds we want to make an HTTP GET request to / on port 3000. Perhaps we also want the first check to happen more or less straight away, so we can set initialDelaySeconds, which I'll set to 2. That's just a health check, telling Kubernetes under what circumstances it should redeploy: if requests to / are failing, it will destroy that container and recreate another.
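Assembled, the deployment manifest built up in this section looks roughly like this. The names and numbers follow the video (the image name is my reconstruction of the repo tag), and the volume mount discussed later is omitted here.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nestjs-api-pv2        # need not match the Docker Hub repo name
spec:
  replicas: 1                 # bump this and re-apply to scale out
  selector:
    matchLabels:
      api: nestjs-api         # must match the pod template labels below
  template:
    metadata:
      labels:
        api: nestjs-api
    spec:                     # pod spec (the outer spec is the Deployment's)
      containers:
        - name: api
          image: johnpeping/nestjs-k8s-pv2   # the Docker Hub repo from earlier
          imagePullPolicy: Always            # always attempt to pull the latest image
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: "0.5"
              memory: 256Mi
            requests:
              cpu: "0.1"
              memory: 128Mi
          livenessProbe:       # health check: GET / on port 3000 every 10s
            httpGet:
              path: /
              port: 3000
            periodSeconds: 10
            initialDelaySeconds: 2
```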
It will do that a few times, and I think the delay incrementally doubles, so it might happen after a second, then 2 seconds, then 4 seconds, then 8, and so on. That's just a way to have safety over the number of things you provision, and you can have more safety measures on top of that as well, like the resource limits, plus cloud-specific tooling and monitoring; we won't get into those details much more here.

Just to finish up on this deployment YAML file: one thing we want to add to it is volumes, but before I do that, let's go to something even more essential, because you don't necessarily have to have persistent volumes. We've seen how to create our Deployment; now let's create a file called service.yaml (again, it could be called whatever you want), and this will just be a Service. Let's grab the boilerplate: apiVersion: v1, kind: Service, and for metadata I'm going to call it api-service. This whole Service is just a way to communicate with the container from outside, as we mentioned before. In its spec we've got the selector, and this actually needs to be changed to api, because we changed our key to api, and the value to nestjs-api, so let's use that. That's how the Service knows it's going to be this Deployment: you can imagine that in Kubernetes you'll have big deployments with lots of services, and right now we've only got one, but once you have more, you need a way to select which one you're connecting to and to organize that.

For the ports, we want to use the protocol TCP on port 80; 80 is the port for HTTP, so we want to be able to make HTTP requests from our computer, with a targetPort of 3000, since the Node server on the container runs on port 3000. If we were to deploy to, say, AWS as an HTTP server, we'd want 80, and if you want HTTPS, you want 443. We'll see with Minikube that the port number actually gets changed anyway, because it needs a different way to connect in order to simulate being on different servers.

The final thing to note here, and the most important thing in this whole file, is the type: LoadBalancer. Hovering over it gives some information: it defaults to ClusterIP, which means internal communication. Say in a microservices setup you have an API gateway that you want exposed to everyone, and it then talks to another microservice; you need a way to communicate from the first microservice to the second, and in that case you'd probably use something like ClusterIP. LoadBalancer, on the other hand, is good for endpoints that you want available to the public, and in addition to that it puts a load balancer over your containers: it will distribute load for you if you have more than one instance running. So that's a really important thing to note.

Now let's go back to the deployment.yaml file. What we want is this: when we're in our local environment and the API call creates the temporary file or gets the text from it, we don't lose that file. But you can imagine that if you have a container and you shut it down, that data is lost. We want a way to keep it, and you do that with persistent storage; if you don't set it up, by default it's lost, so we'll set it up. In line with resources, we can put a volumeMounts option and give it a name (I'm just going to call it text-volume), and then we need a mountPath; I'm just going to put it in a text folder.
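Before moving on to volumes, the Service just described can be sketched in full (names follow the video):

```yaml
# service.yaml: exposes the Deployment's pods to the outside world
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: LoadBalancer        # public endpoint + load balancing across pods
  selector:
    api: nestjs-api         # matches the Deployment's pod labels
  ports:
    - protocol: TCP
      port: 80              # HTTP port exposed to callers (443 for HTTPS)
      targetPort: 3000      # the NestJS server inside the container
```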
The file is called temp.txt, and then you can have a subPath if you wish, just to help it point at the temp.txt file. With that, if we go in line with containers, we can add the volumes. Just to note and reiterate: we have a pod, which has a container on it; it can have more than one container, but typically you'd have just one, denoting your microservice. A pod can also have a volume, but typically that volume lives with the container, so it would be lost. What we're doing is specifically overriding that and creating a persistent volume, so that even when the container gets shut down, we can still access the information; it will just be stored somewhere else on the system, essentially. So we give the volume the name we specified above, the same text-volume, and this is going to be a persistentVolumeClaim (we'll come to this in just a second), and we've got to give it a claimName, which I'm just going to call pvc, and save that. (I was just making sure the linting was working correctly.) Essentially, all we need to know is that we have this claimName, and we're going to connect it to another file.

First, let's create a file called pv.yaml; this is going to be a PersistentVolume. We have the metadata, so we give it the name pv, and then we have a spec. For capacity I'll just put something, 1 gigabyte (although you don't want the claim to request more than this). The volumeMode will be Filesystem, because we just have a file here. accessModes: ReadWriteOnce. We don't need the persistentVolumeReclaimPolicy; for the storageClassName we can just use standard; mount options we don't need; NFS we don't need. So basically all we've got here is capacity, volumeMode, accessModes, and storageClassName, and then we also want hostPath, and the path needs to be the same as what we have in our deployment, the text folder, with the type DirectoryOrCreate: if the folder doesn't exist, it gets created, otherwise it just uses the existing directory. That's our persistent volume.

Note that this isn't yet a persistent volume claim: the PersistentVolume is going to exist regardless of the pod, but to be able to use it in a particular pod, we need a PersistentVolumeClaim. So we'll have a pvc.yaml file, and in that file we can make use of it. We say the kind is PersistentVolumeClaim and call it pvc, and that pvc is what's referenced by the volume mount for the Deployment's pod; that's how it connects, and that's how we can keep things. In its spec we need the volumeName, because it needs to reference the PV we just created; to do that we can just say pv, because that's what we called it. Then, under resources, we set the requests for storage.
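The two manifests described here could be sketched as follows. The 1Gi capacity and the names follow the video; the exact hostPath is my assumption (on Minikube a hostPath lives inside the Minikube VM, so treat it as a local-development convenience, not a production storage setup).

```yaml
# pv.yaml: a cluster-wide volume that outlives any single pod
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem       # we are storing a plain file
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /text                # assumed path; same folder the container mounts
    type: DirectoryOrCreate    # create the folder if it doesn't exist
---
# pvc.yaml: a pod's claim on the volume above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc                    # referenced by claimName in deployment.yaml
spec:
  volumeName: pv               # bind explicitly to the PV above
  storageClassName: standard   # must match the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # must not exceed the PV's capacity
```

In deployment.yaml, the pod's volumes section then references claimName: pvc, which is what ties the mount to this claim.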
than the total capacity we've given the volume, which was 1Gi, so let's just say 1Gi here. For volumeMode we've got Filesystem, although I don't think we strictly need it because it's already set on the PV. So we've got the volumeName, the resources with a storage request of 1Gi, accessModes ReadWriteOnce, and the storageClassName, which needs to be the same as the other one, so we go ahead and put standard there too. Before we apply all of those configuration files, we can run kubectl get pods, just to make sure we don't have anything running that we've been working on previously; since we've just started, there shouldn't be anything. We can do the same thing for deployments with kubectl get deployments, and for the services with kubectl get services. For services you'll see the default kubernetes service, but that should just be there anyway. So now that we can see nothing already exists, we can run kubectl apply with the -f flag for each of the files we want to apply. I'm going to apply everything we just created, starting with the pv.yaml and then the pvc.
yaml, and I'm doing those first because they're the least dependent: the PVC depends on the PV, for example, and the deployment depends on the service. So we also take the service.yaml and then the deployment.yaml. I just run kubectl apply -f for all of the YAML files in order, from least dependent to most dependent, and that provisions everything we just configured. With that, we can go ahead and run the minikube service command for the actual API service we've created, and we can see it opens up a hello application for us. I'm going to go into Postman and check that everything works. First I hit the main endpoint, and that works. Then I try to crash the application, and we get a 500 Internal Server Error, which should crash our pod, but if we rerun the request we get the hello world again, so we can see Kubernetes actually restarted our broken pod, or our broken container. Next I POST "hello world" with two exclamation marks and save it, and when I GET it back I can see I get it with the two exclamation marks, so that works too. But the interesting part is this: I press Ctrl+C so the service stops running, and when we run it again it'll be on a different port, simulating a different machine. We can also undo all the things that were created by running kubectl delete with the same files, and that deletes all of the resources those YAML files provisioned for us. It doesn't delete the files themselves, just the things they created, so we could rerun this, for example
: we could just run kubectl apply with all of the files again, and it recreates everything for us. Then finally, if I run the minikube service command for the API service again, we can see it takes us to a new port, 55274, which simulates that it's on a new computer. So where it was 55223 before, it's now 55274, and if I try the GET on the new port, we can see it actually does return the data again (the exclamation marks look a little different because I was playing around with things while the video was paused). The point is that the data is persistent: we've turned it off, we turned it back on, we got a whole new server, which basically simulates a whole new EC2 instance with the container rerun, and we didn't lose the data, whereas typically you wouldn't get anything back at all. So let me know if you want to see another video after this, say Postgres and NestJS on AWS, just so we can have that finale; if there's enough demand for it I'll make a video on that. Okay, thanks so much for watching, and I'll see you in the next video. Cheers.
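As a rough sketch of the volume wiring described in the walkthrough, the relevant fragment of the deployment might look like the following. The container name, image, and mount path are assumptions for illustration; the names text-volume and pvc follow the names used in the video:

```yaml
# deployment.yaml (fragment) -- container + volume wiring only
# Assumed names: api, my-nest-api:latest, /usr/src/app/temp.txt
spec:
  template:
    spec:
      containers:
        - name: api
          image: my-nest-api:latest
          volumeMounts:
            - name: text-volume            # must match volumes[].name below
              mountPath: /usr/src/app/temp.txt
              subPath: temp.txt            # mount just the one file from the volume
      volumes:
        - name: text-volume
          persistentVolumeClaim:
            claimName: pvc                 # must match metadata.name in pvc.yaml
```

The subPath keeps the mount scoped to the single temp.txt file rather than shadowing a whole directory inside the container.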
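Putting together the fields listed for the PersistentVolume (capacity, volumeMode, accessModes, storageClassName, hostPath with DirectoryOrCreate), pv.yaml might look like this; the hostPath directory itself is an assumption:

```yaml
# pv.yaml -- cluster-level volume backed by a directory on the node
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 1Gi              # claims may not request more than this
  volumeMode: Filesystem      # we're storing a plain file
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  hostPath:
    path: /data/temp          # assumed path on the node
    type: DirectoryOrCreate   # create the folder if it doesn't exist
```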
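And the matching claim, pvc.yaml, which binds a pod to that PersistentVolume, per the fields named above (volumeName, a 1Gi request, ReadWriteOnce, and the same storage class as the PV):

```yaml
# pvc.yaml -- the claim the deployment references via claimName
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc                   # referenced by claimName in the deployment
spec:
  volumeName: pv              # bind to the specific PersistentVolume above
  storageClassName: standard  # must match the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # cannot exceed the PV's capacity
```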
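The command sequence from the walkthrough could be sketched as below; these require a running minikube cluster, and the service name api-service and the service/deployment filenames are assumptions:

```shell
# check nothing is already running
kubectl get pods
kubectl get deployments
kubectl get services    # only the default "kubernetes" service should exist

# apply from least dependent to most dependent
kubectl apply -f pv.yaml -f pvc.yaml -f service.yaml -f deployment.yaml

# open the API service in the browser (minikube picks a local port each run)
minikube service api-service

# tear everything down (removes the resources, not the YAML files)
kubectl delete -f pv.yaml -f pvc.yaml -f service.yaml -f deployment.yaml
```

Because the PV is hostPath-backed, reapplying the files and rerunning the service still finds the data written before the teardown, which is the persistence demo at the end of the video.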
Info
Channel: Jon Peppinck
Views: 587
Keywords: kubernetes, kubernetes tutorial, minikube, how to install mongodb on kubernetes, kubernetes mongodb, install mongodb on kubernetes, kubernetes mongodb tutorial, mongodb statefulset kubernetes, deploy mongodb on kubernetes, kubernetes database deployment, kubernetes cluster, kubernetes installation, kubernetes node js mongodb, nestjs mongodb, nestjs mongoose, kubernetes beginner, node js kubernetes, kubernetes monitoring, istio vs kubernetes, kind kubernetes, devops, k8s
Id: sqv3bxcx8H8
Length: 52min 8sec (3128 seconds)
Published: Wed Apr 10 2024