[ Kube 43.2 ] Getting started with KinD | Local multi-node k8s cluster in Docker containers

Video Statistics and Information

Captions
Do you want to run Kubernetes in Docker containers? There is kind. I've done a video on kind before, about a year ago, and a couple of you asked me to redo it with the latest version, so here it is.

Before going into the demo, let me show my notes to give you a rough idea of what this video covers. Think of a Kubernetes cluster: it's just a bunch of machines running your containerized workloads. You might have a couple of masters and a few worker nodes. These nodes can be physical machines in your data center, virtual machines on your local workstation, or virtual machines with a cloud provider like AWS, GCP or Azure. If you look inside a node, it has a container runtime of some sort, for example Docker (although Kubernetes is deprecating the Docker runtime), containerd or CRI-O, so each Kubernetes node has a runtime to run its containers. Then you have the usual Kubernetes components: a master runs the API server, controller manager, scheduler and kubelet, and a worker node runs the kubelet and kube-proxy and hosts all your pods. That's what's happening inside a node.

As I said, these Kubernetes nodes can be physical machines or virtual machines, and they can even be containers. That's what we're going to look at in this video: Kubernetes in Docker. The nodes are going to be Docker containers, and we'll use those containers as Kubernetes nodes with a tool called kind.

Let's get started. I've opened the kind website; I'll put links to all these pages in the video description. You'll see a banner that says the dockershim deprecation does not impact kind. You probably already know, and I've done a video on this, that Kubernetes is deprecating Docker as a container runtime, so going forward you won't be able to use Docker as the runtime in your cluster; you'll use containerd, CRI-O or another runtime. I've done videos on containerd and CRI-O; search my playlists, or let me know if you can't find them and I'll share the links.

The only requirement is Docker (or Podman) installed on your laptop. That's just to provision the Kubernetes nodes; the nodes themselves won't be using Docker. You use Docker on your laptop to deploy the cluster, and inside the nodes the container runtime is containerd, so you don't have to worry about the dockershim deprecation notice from Kubernetes.

In my original video a year ago I installed kind as a Go module, which meant installing the Go toolchain first. This time I don't want to install Go on my machine just for kind, so instead I'm going to download the stable binaries. They have binaries for Linux, Windows and macOS on the releases page of the GitHub repo, which I'll show you in a minute.
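As a quick reference, the Linux install from the Quick Start boils down to something like this (v0.10.0 is the release used in the video; the URL and version will differ for newer releases):

```sh
# Download the kind binary (version used in the video; adjust for current releases)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind   # any directory on your PATH works

kind version   # should report v0.10.0
```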
Let's go to Quick Start and then Installation. On Linux it's just three commands: a curl command to download the binary, then you set the executable permission, then you move it to a location on your PATH. Let me run the curl command; right now I don't have kind. After it finishes, an ls shows the kind binary, so I set the executable bit with chmod +x kind and move it to /usr/local/bin. Now which kind finds it, and kind version reports 0.10.0. If you run kind on its own you get some basic help output, which we'll come back to in a minute.

The first thing I'm going to do is create a kind Kubernetes cluster. The simplest command is kind create cluster; that gives you a cluster with a single node, a single control-plane node. Let's try it. It pulls the kindest/node image for Kubernetes v1.20.2 (that's the container image it uses for the nodes), prepares the node and writes the configuration, and once it's complete you have a kubeconfig file ready to use.

So we just ran kind create cluster; let's explore what it did. First, in the .kube directory there's a config file: that's the kubeconfig that kind generated for me automatically. If I run docker images, you'll see kindest/node, the image it pulled, and docker ps shows one container running called kind-control-plane, the Kubernetes control-plane node. I can run kind get clusters to list the clusters I'm running; at the moment there's just one, named kind. And because the kubeconfig is already in place, kubectl cluster-info works, so we have access to the cluster, and kubectl get nodes works too.

Note the naming convention: node names always start with the name of your cluster. kind create cluster creates a cluster with the default name kind, which is what kind get clusters shows. If you want a different name, you can pass --name, for example --name mycluster, and all the nodes will be prefixed with mycluster so you know which cluster a node belongs to.

kubectl get pods -A shows everything running: CoreDNS, etcd, kindnet, the API server, controller manager, kube-proxy, the scheduler and the local-path-provisioner. Let me show the docker ps output again: we have this single-node cluster, one control-plane node running in a Docker container, and port forwarding is set up from 127.0.0.1 on some local port (42829 here) to 6443 inside the container, where the API server listens.

Now look at the kubeconfig in the .kube directory. It contains the cluster certificate, the context is set to kind-kind, and the important bit is the server entry: this file is what I use to connect to the cluster, and the server entry tells kubectl where the cluster's API server is.
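Pulling the whole first-cluster walkthrough together, the commands look roughly like this (the local API port is random, so yours won't be 42829):

```sh
# Create the default single-node cluster and inspect it
kind create cluster                    # cluster name defaults to "kind"
kind get clusters                      # -> kind

docker images | grep kindest           # the node image that was pulled
docker ps                              # one container: kind-control-plane

kubectl cluster-info                   # uses the kubeconfig kind just wrote
kubectl get nodes                      # node names are prefixed with the cluster name
kubectl get pods -A                    # etcd, apiserver, coredns, kindnet, kube-proxy, ...

grep server ~/.kube/config             # https://127.0.0.1:<random-port> -> 6443 in the container
```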
Let me grep for server in the kubeconfig: that's the API server endpoint I connect to, and if you compare it with the docker ps output it's exactly the same mapping. Hitting 127.0.0.1:42829 on my machine takes you to port 6443 inside this container, where the API server is running.

Let's do a docker ps again and exec into this node to see what's actually going on. We used Docker on the local machine to create Kubernetes nodes, so: docker exec -it kind-control-plane and launch a bash shell. Now I'm inside the control-plane node, which is a Docker container. which crictl shows that crictl is available, and crictl ps lists the containers running inside this node: kube-proxy, kindnet, etcd, the API server, scheduler, controller manager, CoreDNS and the local-path-provisioner. All these containers run within this one container: that's Kubernetes in Docker. And it's using containerd, not Docker.

Let me go back in for a moment: /etc/kubernetes is where all the cluster configuration files live, and the manifests directory holds the static pod manifests for the scheduler, controller manager, API server and etcd. So it's a proper Kubernetes distribution running inside a Docker container. Let me exit out, and kubectl get nodes -o wide confirms the container runtime is containerd 1.4.0.

Now let me delete this cluster. kind get clusters shows kind, so kind delete cluster is enough: you don't have to pass the cluster name if you're using the default kind cluster, but if you gave your cluster a different name you need --name. In our case it's the default, so kind delete cluster, and it's gone. docker ps shows the container is gone, cluster-info has nothing, but ls .kube shows the config file is still there; deleting the cluster doesn't delete the kubeconfig file.

Next I'm going to create two clusters and show you how kind uses the kubeconfig file and how you can switch between the clusters. First, a default cluster with kind create cluster, and once that's ready I'll create another one. OK, the first cluster is ready: kind get clusters shows kind, docker ps shows one container, and that's the endpoint. If I grep for server in my kubeconfig I see the same port, 38057 in this case, so kind created a kubeconfig entry I can use to interact with this particular cluster. cluster-info works and get nodes works.

Now I'll create another cluster with kind create cluster --name mycluster. You can't just run kind create cluster a second time, because it would try to create another cluster named kind, which already exists, and it would error out; that's why I'm giving this one a specific name. All right, the second cluster is ready.
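To recap how to poke around inside a node and clean up afterwards (the container and cluster names below follow kind's defaults):

```sh
# Look inside the control-plane "node" -- it's just a container running containerd
docker exec -it kind-control-plane crictl ps                      # kube-apiserver, etcd, scheduler, ...
docker exec -it kind-control-plane ls /etc/kubernetes/manifests   # static pod manifests

kubectl get nodes -o wide                # CONTAINER-RUNTIME column shows containerd

kind delete cluster                      # default cluster "kind"; the kubeconfig file itself is kept
kind delete cluster --name mycluster     # named clusters need --name
```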
kind get clusters now shows two clusters, kind and mycluster, and docker ps shows two containers, one for each. If I grep for server in my kubeconfig there are now two entries: one on port 38057, the first cluster, and one on port 39311, the second. kind has merged the configuration for both clusters into a single kubeconfig file. Looking inside the file, you can see the cluster certificate for the first cluster and for the second, named kind-kind and kind-mycluster respectively, so both clusters live in one kubeconfig.

Let's run kubectl get nodes and see where it takes us. I haven't changed anything, but previously it was connected to my first cluster, and since I created the second cluster it's now pointing at that one. So how do I talk to the first cluster? I can run kubectl get nodes --context kind-kind: if I pass the context of the first cluster, I interact with the first cluster, and you can see that in the node name. Likewise kubectl get nodes --context kind-mycluster (not just mycluster; the context name is kind-mycluster) talks to the second cluster. That's how you interact with multiple clusters from a single kubeconfig file.

To make life easier there's kubectl config get-contexts, which reads your kubeconfig and shows the contexts you have. At the moment there are two, kind-mycluster and kind-kind, and the asterisk marks which one is active. So kubectl get nodes connects to mycluster, because that's the current context. If you want to switch, you've got two options: pass --context to every kubectl command you run, as I showed, or change the current context with kubectl config use-context kind-kind. After switching, kubectl get nodes goes to the kind cluster by default, and kubectl config get-contexts shows the asterisk has moved to that cluster, meaning it's now my default; all my kubectl commands go there unless I pass --context or switch back.
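Here's the two-cluster, one-kubeconfig workflow in one place (ports in the kubeconfig are random; the context names follow kind's kind-&lt;cluster&gt; convention):

```sh
kind create cluster                         # first cluster, named "kind"
kind create cluster --name mycluster        # second cluster; a second "kind" would error out

kind get clusters                           # kind, mycluster
grep server ~/.kube/config                  # two API endpoints, one per cluster

kubectl config get-contexts                 # kind-kind and kind-mycluster; * marks the active one
kubectl get nodes --context kind-kind       # target a specific cluster per command
kubectl config use-context kind-kind        # ...or switch the default context
kubectl get nodes                           # now goes to the "kind" cluster
```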
So far we've seen how to create a single-node cluster, just one control-plane node. Now let me show you how to create a multi-node cluster. You can't do that with the plain kind create cluster command; you have to write a configuration file and pass it to kind create cluster. Before that, let me delete my clusters. If I just run kind delete cluster it deletes the default kind cluster, as kind get clusters confirms; running kind delete cluster again would still target the kind cluster and leave mycluster alone, so for that one I need to specify --name mycluster. That's gone too, and docker ps shows my containers are gone.

Now let's see how to deploy a multi-node Kubernetes cluster. In the documentation, under Quick Start, there's a section on configuring your kind cluster. There's a multi-node example: with that configuration you get a single control-plane node and two worker nodes, but I'm going to go with the HA example, because I want a multi-master cluster. I'll copy that and paste it into /tmp/kind.yaml; I don't want three worker nodes, just one, since this is mainly to show the multi-master setup. Then you pass the --config option to kind create cluster with the file you created, /tmp/kind.yaml.

Now you can see it preparing four nodes: three control-plane nodes and one worker node. You can also see it configuring an external load balancer. In a multi-master setup you don't want to access any particular master node, because if that node goes down you'd lose access to the cluster, so you access the cluster through a load balancer instead.

The cluster is created, and docker ps now shows five containers: the worker node, three control-plane containers (let me reduce the font size so you can see it better), and the external load balancer container. If I grep for server in my kubeconfig, that's the endpoint we connect to, and in docker ps that port, 40607, is mapped to the external load balancer container. So we're not connecting to any of the master nodes directly; we connect to the external load balancer, which directs our traffic to one of the three masters. kubectl cluster-info and kubectl get nodes work, and the version is 1.20.2.

Let's take a look inside the load balancer container: docker exec -it kind-external-load-balancer and get a shell. If I do an ls and a ps to see what's going on, there's an haproxy process running, and it points at a particular configuration file. Let's look at that haproxy configuration under /usr/local/etc. In the frontend, any connection coming to 6443 on this load balancer container is directed to one of three backend servers: kind-control-plane:6443, kind-control-plane2:6443 and kind-control-plane3:6443. So it just forwards traffic to the API server port on one of the three control-plane containers, and it addresses them by container name, not by IP address.

Let me do a ping test: from this haproxy container I can ping kind-control-plane by name, and kind-control-plane2 and kind-control-plane3 work too, and I can also ping the worker container, kind-worker. Everything works, and that's because all of these containers share a Docker network, which we'll look at next.
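To recap, the HA config and create command from this part look roughly like this (three control-plane nodes plus one worker, written to /tmp/kind.yaml as in the video):

```sh
cat > /tmp/kind.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
EOF

kind create cluster --config /tmp/kind.yaml
docker ps    # 3 control-plane containers, 1 worker, plus kind-external-load-balancer
```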
If I do docker network ls, there's a kind Docker network, and all the Docker containers created for this cluster (kind get clusters shows the one cluster) share that network; that's why they can talk to one another by name. I can show you with docker inspect: for the external load balancer container, the network settings at the end show it's on the kind network, and similarly docker inspect on one of the other nodes, say a control-plane container, shows the kind network too. So all of these containers use the same network, which is why they can reach each other by name rather than by IP address. That's multi-master: kubectl get nodes shows all of them ready, and kubectl get pods -A shows all the pods, including an API server for each of the three master nodes.

What I'm going to show next is how to provision a specific version of Kubernetes. Let me delete this cluster with kind delete cluster and go back to the documentation, to the releases page. I'm using kind 0.10.0. To use a specific Kubernetes version, look at the help for kind create cluster: there's an --image option, the node Docker image to use for booting the cluster, so you can run kind create cluster --image with the image you want. Every kind release supports a specific list of Kubernetes versions; for example, the 0.10.0 release of kind can run one of the node images listed in its release notes. By default it goes with 1.20.2, but if you want, say, 1.19.7, you specify the full name of that image and it works. Alternatively, there's an option to set it in the configuration file, the kind.yaml we created: you can specify an image there, and you can even specify a different image per node, for example 1.17 for one node. So you can pick an image for every single node, but I wouldn't advise running different Kubernetes versions on different nodes; stick with the same version for all your nodes.

Let's go with the command-line approach: kind create cluster --image with the 1.19.7 node image. You can see it fetching the 1.19.7 kindest/node image and preparing the node; it's a single node because I'm not using my configuration file, just the default kind cluster. It starts the control plane, and we get a cluster running 1.19.7. kubectl get nodes confirms the version is 1.19.7, and I'll delete this one too with kind delete cluster.
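A sketch of the version-pinning step (the release notes for each kind version list the supported kindest/node images, usually with a sha256 digest you can append):

```sh
# Boot a cluster on a specific Kubernetes version via the node image
kind create cluster --image kindest/node:v1.19.7
kubectl get nodes          # VERSION column shows v1.19.7
kind delete cluster

# The image can also be pinned per node in the config file, e.g.:
#   nodes:
#   - role: control-plane
#     image: kindest/node:v1.19.7
# (use the same version for every node)
```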
Now, kind also comes with a default storage class that you can use for persistent storage, and I'll quickly show you how that works. For that I'll create a default cluster again with kind create cluster. The cluster is created and the output says it's installing a StorageClass, so let's take a look at it. If I run kubectl get storageclass, there's a storage class called standard, and it's the default storage class, so I can start using persistent volume claims straight away. Let's give it a try.

I'm going to go into my play directory and git clone my kubernetes repository (I'll put a link to my GitHub repository in the video description), then go into kubernetes/yamls, where I have a couple of persistent volume claim examples. What we're trying to do is create a persistent volume claim and see whether it provisions a persistent volume. I'll edit pvc-nfs.yaml and change the name to pvc-demo. I don't have to specify the storage class; if I did, it would be standard, because that's the storage class kind deployed for me, but since it's the default I can leave it out. For the access mode, the local-path provisioner only supports ReadWriteOnce; if you try ReadWriteMany, the claim will stay in the Pending state forever and no persistent volume will be provisioned (give it a try if you like). So I'll change it to ReadWriteOnce, requesting 500Mi of storage, with the claim named pvc-demo. Save that and kubectl create -f pvc-nfs.yaml.

kubectl get pv,pvc shows our persistent volume claim in the Pending state; let's see why. kubectl get storageclass says the volume binding mode is WaitForFirstConsumer: when you create a persistent volume claim, a persistent volume won't be created until a pod actually consumes the claim. (Did I say "first customer" earlier? It's first consumer.) So right now we have a claim, but it's pending.

I also have an example pod I can use. Let me edit that pod YAML: it's a simple busybox pod that does nothing but sleep for ten minutes, uses the persistent volume claim pvc-demo that we created earlier, and mounts it at /mydata inside the container. I'll create it with kubectl create -f on that pod YAML. kubectl get pods shows it being created, and kubectl get pv,pvc now shows the persistent volume claim bound to a newly created persistent volume, and the pod is running fine. That's how easy it is: if you want persistent volumes with kind, you can start using them right away.
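As an illustration of that flow, the claim and pod look roughly like this (the names, size and mount path are just the ones used in the demo; the busybox pod only sleeps so the claim gets a consumer):

```sh
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce          # the local-path provisioner only supports RWO
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-pvc-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "600"]
    volumeMounts:
    - name: data
      mountPath: /mydata
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-demo
EOF

kubectl get pvc,pv    # the claim binds once the pod (its first consumer) is scheduled
```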
Now what I'm going to show you is how to use a LoadBalancer service with MetalLB. I've done a video on MetalLB covering exactly the same process, but for completeness I'll quickly show how it works here; if you want more detail on MetalLB and how to deploy it on any Kubernetes cluster, watch that video, Kube 33.1. Going back to the kind documentation, there are a few manifests to apply before we can start using a load balancer.

First I'll copy this command and run it; it just creates the metallb-system namespace, as kubectl get namespaces confirms, nothing more. Then create the memberlist secret: copy, paste, done. Then apply the MetalLB manifest, which deploys all the resources we need: a pod security policy, service accounts, cluster roles, role bindings, a daemonset, a deployment and so on. If I look at the metallb-system namespace with kubectl get all, the controller pod is being created, the speaker daemonset and the deployment are there, and everything comes up.

Now I was about to create a Service of type LoadBalancer, but there's one more thing I forgot: you need to set up the address pool so MetalLB knows what address range it can use when handing out load balancer IP addresses. For that, first run the docker network inspect command from the docs to see what your Docker network range is; it tells me my kind Docker network is 172.18.0.0/16. Then I copy the example ConfigMap into a temporary file and change the address range to match my network: since it's 172.18.0.0/16 I can use any range within it, and I've set it to 172.18.255.200-172.18.255.250. The docker network inspect command just shows you what network range your Docker runtime is using, and based on that you choose the IP range you want MetalLB to hand out. Now deploy it with kubectl create -f on that file, and the ConfigMap is created.

Now we're ready to test this out. I'll create a simple nginx deployment: kubectl create deployment nginx --image nginx, then kubectl get pods shows the container being created, and after a moment the nginx pod is running. I can expose it: kubectl expose deployment nginx --port 80 --type LoadBalancer. With that exposed, kubectl get all shows our pod, our deployment, and the service of type LoadBalancer, and it has already been assigned a load balancer IP: that's the external IP address we'll access. You can add it to your DNS, put it in your /etc/hosts file, or access it directly by IP. Let's take a look: I've got lynx, a command-line web browser, and if I open the load balancer IP, 172.18.255.200, I get the "Welcome to nginx" page, so that's working.

So give this a try, and let me know if you have any questions; I'll be happy to help. I'll see you all in my next video. Until then, keep learning. Bye.
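To recap the MetalLB setup in one place: the manifest URLs below follow the kind LoadBalancer guide for MetalLB v0.9.x (check the current docs for the exact version), and the address pool is an assumption based on the 172.18.0.0/16 kind network seen in the video, so adjust it to whatever docker network inspect reports on your machine:

```sh
# Find the kind network's address range (e.g. 172.18.0.0/16)
docker network inspect -f '{{.IPAM.Config}}' kind

# Install MetalLB (v0.9.x manifest layout, as in the kind docs at the time)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml

# Address pool carved out of the kind network
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
EOF

# Test with an nginx LoadBalancer service
kubectl create deployment nginx --image nginx
kubectl expose deployment nginx --port 80 --type LoadBalancer
kubectl get svc nginx     # EXTERNAL-IP comes from the MetalLB pool
```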
Info
Channel: Just me and Opensource
Views: 3,016
Rating: 4.9652176 out of 5
Keywords: just me and opensource, kind kubernetes, kubernetes kind docker, kind docker kubernetes, kind kubernetes tutorial, kubernetes kind tutorial, kubernetes kind installation, kubernetes in docker containers, multi node kind, kind kubernetes multi node, kind kubernetes metallb, kind kubernetes load balancing, kind k8s for beginners, kubernetes certification training, cka ckad kubernetes, just me kind kubernetes, set up kind kubernetes, k8s kind install
Id: kkW7LNCsK74
Length: 30min 1sec (1801 seconds)
Published: Sun Apr 25 2021