Kubernetes 101 workshop - complete hands-on

Captions
[Music] Cool, hello everyone and welcome to the second workshop from Kube Simplify. This workshop is Kubernetes 101. My name is Saiyam Pathak and I am a CNCF ambassador, working as Director of Technical Evangelism at Civo, building the next-gen cloud platform. Civo is a cloud hosting company that has its own managed Kubernetes offering based on k3s, and I'm very excited to be delivering this particular workshop. I love the enthusiasm going on in the chat, so hello to all the people who are there, and thank you so much for joining in. Make sure to share the stream: copy the link and share it across your social media so that the maximum number of people can benefit. Now that you have joined, stay for the next few hours. Whether you are new to Kubernetes or revising it, just hang in there and I promise you will learn something today. It will be fun, exciting, hands-on, with lots of Q&A, so stay till the end and enjoy the workshop. A few things first: be respectful to each other in the chat; we are all here to learn. The chat is moderated by the Kube Simplify ambassadors, who will also be available for Q&A and will try to keep you all going. Also make sure to join the Discord. Let me add the Discord link: join the kubesimplify.com Discord, because there is a workshops channel there. In the YouTube chat you won't be able to post screenshots or links, but in the Kube Simplify Discord you can post your queries and concerns, what is working and what is not, and our ambassadors will make sure your query gets answered. That worked really well for the last workshop and it should work really well for this one too. So: kubesimplify.com Discord, workshops channel; anything that's not working on your side will be handled by the ambassadors. Now let's kick-start our Kubernetes 101 workshop. One last thing, promise: many people were asking about the prerequisites. This is actually workshop two; workshop one was taken by Chad and covered Docker and Linux fundamentals, so if you have not watched it, you probably should, because I will not cover the basics of containers here. This will be core Kubernetes, hands-on, from scratch, so let the fun begin. Let me remove the banner, share the screen (I hope you are able to see it), take out my magic pencil, and we will take off. Today's session is all about Kubernetes. There are a lot of things I want to cover and a lot of things I have deliberately left out, because this is a Kubernetes 101 workshop, not a complete end-to-end Kubernetes course; we cannot cover everything in a single workshop because there are a lot of topics. I'll give you pointers at the end on where to go to learn further (my channel, obviously, but I'll tell you that later). So, in this workshop we will cover: an introduction to Kubernetes, which is very important to set the stage.
Then the basic components: pods, deployments, StatefulSets, networking and services, some level of authentication, authorization and admission, then ConfigMaps and Secrets, and, if time permits, a local and a kubeadm setup; I'm planning to do the local one, and the kubeadm one if time permits. So there are a lot of things to cover; hang in there, get set, go. First, the introduction to K8s. This first section covers a lot: what is Kubernetes, why Kubernetes, the architecture, the YAML file, Kubernetes components, imperative versus declarative, namespaces, labels and selectors. This comes before any of the real hands-on stuff because you need to know some theory first; that is true for anything. After that we'll go hands-on, and don't worry, I'll tell you where to do the hands-on: you can easily join and perform the tasks within your browser, so whatever laptop you have, you should be able to follow along. So, what is Kubernetes? As per the talks and the definitions you may have read, Kubernetes is also known as K8s (we say K8s because there are eight letters between the K and the S). It is an open source orchestration system for automating deployment, scaling and management of containerized applications. It is a CNCF graduated project, meaning it is hosted under the umbrella of the CNCF and has graduated status there. Now, a bit of history. Google runs infrastructure and services at massive scale, and internally they had already developed distributed systems called Borg and Omega. Those were closed source systems solving large-scale problems, and I think they are still around, because they handle stuff at massive scale. Kubernetes was born from the best practices and learnings of those systems, sharing the same DNA as Borg and Omega. It was first launched in 2014, so I would not say it's brand new, but compared to programming languages like Java it is also not old. People want to learn it and are learning it, especially in today's era where containerized applications are everywhere. Kubernetes is the need of the hour: whether you're a software developer, a DevOps engineer, an SRE, a developer advocate or a support engineer, whatever role you are doing at your organization, you definitely benefit from knowing containerization and Kubernetes. It also adds a lot to your resume, profile, skills and knowledge, so it is very good to have Kubernetes knowledge. And it's designed for scale: Kubernetes is designed for massive scale, hundreds of thousands of containers and pods, and it can run anywhere, on your Raspberry Pis, on your on-premises systems, on virtual machines, in the cloud. So that's Kubernetes. From the previous session you have learned about
containers, right? So you have containers; who is orchestrating them? It's K8s. That's the fundamental idea, although K8s does much, much more than just orchestrating, because it handles scaling and a lot of other things which we'll cover just below. So, why Kubernetes? That's a very critical question: why do we actually need Kubernetes when we have containers? Okay, we have learned Docker, we have our app in Python or Node and we have containerized it: we wrote a Dockerfile, we did docker build to create an image, and then we did docker run. It's running fine, I'm able to serve traffic to my application, do the port forwarding and everything, so what's the problem? Why do I need Kubernetes, such a complex ecosystem with such a huge learning curve? Now, the thing is, this is fine for a container or a few containers, but what about scale? I'm not talking about two, three or four. When a monolithic application gets divided into microservices, there are different APIs. Take any example, like a ticketing or flight booking system: they have a lot of pages, a vacations page, a hotels page, a bookings page, a payment gateway page, a flight search page, a flight booking page. You can see how many microservices there are within a single application. So that's the first set. Now, to maintain the availability of the application, so that nothing goes wrong during a Black Friday sale or whatever sale is on, you scale it: you run another set of the same microservices. What about the development and staging environments? Let me scale it more: that first set was for the prod environment, now let me create the dev environment, the staging environment, the pre-prod environment, and do it all again with the same number of replicas. You can see the overhead; now you are managing thousands of containers. Now tell me: what happens when one of them goes down? You manually need to fix it. What happens if another goes down? You manually need to fix it. How do you monitor them? How many custom scripts will you write? You have to write so many custom scripts to monitor one environment, then a second environment, get the metrics, trace them, and then build yet another system for that. What about flexibility? What about extending it, or recreating the same thing as a disaster recovery setup? You would have to go through the same huge process once again; that's a tedious job. We cannot run and monitor containers at scale like this. We cannot run pods across multiple nodes manually; I would need hand-written logic saying this pod runs here, its replica goes there, and this other container must not land here. So many things: flexibility isn't there, auto scaling isn't there, scheduling means manually writing scripts, and what about self-healing
capabilities? If something goes down, can I just say I need 20 copies of my application running, and have some piece of software automatically create those replicas so I don't have to do it manually? All of these things are done by Kubernetes, and all of that overhead goes away. Yes, there is a learning curve for Kubernetes; there are things and terminologies you need to learn. But you can see the benefit: flexibility, auto scaling, scheduling, monitoring, so many things out of the box when you're running at scale, and it's built for production. You can expand from small scale to large scale very easily, because you never know: you in the chat might be the future CEOs of some company. You might start a small organization with small traffic, then suddenly you build a killer product and the traffic spikes. What would you do? On your own you can't do the auto scaling, but with Kubernetes you can handle all of that. So that's where Kubernetes comes in. Kubernetes is an orchestration system: it is used to manage all the containers at scale. It manages their scheduling, how and where they run, and their auto healing: if you say you need three copies of your replica, then three copies will be there, copies in the sense of identical copies, like a photostat. So that's what Kubernetes is. The first hurdle of this particular workshop is complete: what Kubernetes is and why we are learning it. Kubernetes is an orchestration system; orchestration just means running, managing, scaling and monitoring all the containers that are there. And it is a massive piece of software; Kubernetes is not small, it's a massive piece of software which helps you achieve all these things. Now let's come to the architecture: how do things actually work in Kubernetes? What is actually there when you say Kubernetes? So this is the architecture; let me zoom into this area. Cool. This is me, this is my Twitter image, and I am trying to run a pod. For the first-timers, kubectl is a command-line tool that you use to interact with the cluster; you need a way to interact with the cluster, and that is kubectl. Call it kube-control, kube-cuttle, whatever you want, but I call it kubectl. Any sort of operation you do, you will be using kubectl. So: kubectl run nginx. If you're not understanding any of this yet, it's completely fine; I'm just trying to make you understand the Kubernetes components. Saiyam has given a command and wants a pod to run, which basically means he wants a container (let's come to the pod terminology later on). I just want my nginx container to run. With Docker I can do docker run -it nginx, right? How do I run it in Kubernetes? I do kubectl run nginx --image=nginx. Nginx is just a web server.
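As a quick side-by-side, here is that same idea as a small sketch you can type along with (the container/pod name nginx is just an example):

```bash
# Plain Docker: run one nginx container on this machine.
docker run -d --name nginx nginx

# Kubernetes: ask the cluster to run an nginx pod; the control plane decides where it lands.
kubectl run nginx --image=nginx
kubectl get pods          # check that the pod reached the Running state
```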
So I run that. What happens? The request goes to the API server. Now, what is the API server? Whenever you talk about Kubernetes, you take a few machines: bare metal machines, Raspberry Pis, or virtual machines in the cloud (in the cloud you create VMs; on your laptop you can create a VM). Say you have created two virtual machines: one is the control plane node and the other is the worker node. This particular pod, this container that I want to run, will run on the worker node. The worker node is the machine actually running your container; your traffic, the "Welcome to nginx" page, will be served from that machine, and the control plane node manages everything. Now let's see how it is managed. There are different components, and first we are talking about the control plane node. We call it the control plane node because it has the control plane components; it is controlling your complete cluster. What are the components installed there? The API server, the scheduler, the controller manager, etcd, and the CCM. What is the API server? The API server is the main brain of Kubernetes. Anything that you do with Kubernetes goes to the API server; all the calls go to the API server. You can provide a YAML file, you can provide a JSON file, or you can use the imperative way with kubectl run, and every command will go to the API server. So this kubectl run command goes to the API server, and the API server does three things: authentication, authorization and admission. First, whenever a request comes in, I need to know whether the caller is genuine: if anybody is entering my house, do they have the key? When you enter your office, you show your ID. That is authentication: the user is authenticated based on what is passed with the request, the headers and so on. Then the request is authorized: is this person allowed to do this particular task? If someone comes to your home to, say, fix a light bulb, you want to validate that they are actually authorized to perform that particular action. That's authorization. Then there is admission: when the request is admitted, there are certain policies it needs to be checked against. A very simple example: when you run this nginx image, I might want to check that the image comes only from Docker Hub and nowhere else, so I would have an image policy webhook and validate against it. You can also mutate requests; we are not going into the details of validating and mutating webhooks, but that is a big concept on its own. And then the request is saved to etcd. What is etcd? etcd is the key-value database for your Kubernetes
cluster. All the cluster state is stored there: your application is running in Kubernetes, so Kubernetes needs to know that it is running this application; it has to store its cluster information somewhere, and that somewhere is etcd. It is a key-value store for distributed systems, and the API server writes to it. When you talk about HA mode you will have three etcd nodes, because you need to maintain consensus; I can briefly touch on that. With three nodes you can tolerate the failure of one. That HA setup is actually out of scope of this workshop, but for HA you need three, and etcd uses the Raft algorithm: there is a leader election, consensus is reached, and the elected leader is the node the API server writes to. Okay, so where are we with respect to the request? We have told the API server we want to run an nginx pod. The API server says: I have the request, I have authenticated, authorized and validated it, it looks perfect from my side, let's move on to the next stage. The second component that comes into play is the scheduler. The scheduler is a very intelligent component in Kubernetes: it finds the best-fit node to run your pod. It takes the resource requests and limits into account; there is something called resource requests and limits where you can define how much CPU and memory this particular pod needs and what its upper limit is. The scheduler can read that, and it also knows the capacity of the nodes. There are two nodes here, so does this one have the capacity, does that one, which one can actually fit this particular pod? It finds the best node, also taking into account taints, tolerations, affinity and node selectors that you may have defined, and then updates the pod's spec section with the node it chose.
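Just to make that concrete, this is roughly what those requests and limits look like in a pod spec; a minimal sketch with made-up numbers and a made-up pod name, not something from the workshop repo:

```bash
# Illustrative pod: the scheduler uses the "requests" to find a node with enough free capacity.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: web-demo          # example name
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:           # minimum the scheduler reserves on the chosen node
        cpu: 250m
        memory: 128Mi
      limits:             # upper bound enforced at runtime
        cpu: 500m
        memory: 256Mi
EOF
kubectl get pod web-demo -o wide   # the NODE column shows where the scheduler placed it
```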
So where are we now with respect to the request? We ran kubectl run nginx --image=nginx and the pod is still not running. The request went to the API server, then to the scheduler, and now at least we know the pod will run on node A, because the scheduler has said so. That piece of information is also updated in etcd. Note that only the API server talks to etcd; all the other components talk to the API server, just like all our body parts talk to our brain, so we have a central control point, and that's why the API server is there. Now let's move to a different dimension, the kubelet; we'll come back to the rest of the control plane components later on. On the worker node there are a few components which are constant across however many worker nodes you add. One is the container runtime: you can still have Docker, but from 1.24 the dockershim is maintained by Mirantis, so assuming you are new, you will typically have containerd as the container runtime, implementing the CRI, the Container Runtime Interface. You will also have a component called the kubelet, and another component called kube-proxy. The kubelet is the component that keeps polling the API server: "hey API server, do you have anything for me to run?" As soon as the scheduler says this pod needs to run on worker node one, the API server immediately tells that kubelet: run this, here is the request, a pod with image nginx and such-and-such spec section (there is a spec section, which I'll cover in the YAML part). Now how does the kubelet run it? The kubelet talks to the CRI, the container runtime interface, and says: I need to run this particular pod, can you fetch the image from the registry, can you attach the network, and so on. There are three interfaces here: CRI, CNI and CSI. The CRI is the runtime interface, implemented by containerd. The CNI is the networking component in Kubernetes; implementations include Flannel, Calico, Cilium and a few others (I told you Kubernetes involves a lot of terminology, so keep your mind open; that's why I'm going slowly here). Then there are the storage drivers: the storage component is the CSI, the Container Storage Interface. So those are the interfaces, the standards, and then there are the implementations: containerd is an implementation, Flannel is an implementation, Longhorn and OpenEBS are implementations. containerd will pull the image, the network gets attached, the IP and routes are set up, and so on. Then kube-proxy: kube-proxy is the component that maintains the network rules on the node. Obviously one pod needs to communicate with another pod, possibly on another node; kube-proxy maintains the network rules on the nodes, and these rules allow network communication to your pods from network sessions inside or outside of the cluster. It maintains a list of iptables rules (or IPVS rules), and every time a pod is created, the iptables changes are handled by kube-proxy. You can actually run the iptables -L command on the node and you will see those rules. So where are we now? The request came in, the scheduler picked the node, the API server told the kubelet to run this particular pod, the kubelet talked to the CRI, the CNI attached the network and did all the fancy stuff it does, kube-proxy updated the iptables, and the pod started running. The pod is running now.
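If you want to poke at that yourself, here's a rough sketch of what you could run; chain names and pod labels vary by distribution and by whether kube-proxy is in iptables or IPVS mode, so treat this as illustrative:

```bash
# On a worker node (not inside a pod): list some of the NAT chains kube-proxy manages in iptables mode.
sudo iptables -t nat -L KUBE-SERVICES | head

# From your workstation: check the kube-proxy pods themselves
# (the k8s-app=kube-proxy label is what kubeadm-based clusters use; yours may differ).
kubectl get pods -n kube-system -l k8s-app=kube-proxy
```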
Internally, of course, there is a shim: the containerd-shim talks to containerd, which calls runc, and runc actually runs the container; that's the deep-dive stuff. So now the container is running. Now, this was just a simple example; where does the controller manager come into play? There are different objects in Kubernetes, which we'll talk about throughout this workshop. If I want to run five replicas of my application, I create a different type of object, and that is controlled by the controller manager. The controller manager is a single binary that contains multiple controllers: the ReplicaSet controller, the Deployment controller, the Job controller, the StatefulSet controller, the DaemonSet controller and so on. These correspond to Kubernetes objects; when we get to that section you'll understand what a Kubernetes object is. This is a pod, and similarly you can create a Deployment, a StatefulSet, a DaemonSet and all of that. The controller manager makes sure that, out of your five replicas, if a node goes down and three replicas go down with it, those three replicas are spun up on another node. The controller manager tells the API server: there is a mismatch, only two replicas are ready out of five, please create the others; then the scheduler again says this particular node has capacity, run them over there; and the process goes on. That's the reconciliation loop, a term you will hear often in the Kubernetes ecosystem. The controller manager is responsible for continuously looping and checking whether your desired state and your actual state are the same. In Kubernetes you define the desired state: I want five replicas, five photocopies, of my application, in very layman terms, and the controller manager is responsible for keeping those five copies running. The last component, which is not really a core component because it depends on the cloud, is the CCM, the cloud controller manager. Whenever you create a Service of type LoadBalancer or anything else that connects to the cloud, Kubernetes itself cannot create a load balancer IP; it is not that intelligent. But if your cloud supports it, the cloud controller manager talks to the cloud and to the Kubernetes cluster to perform those actions. It is very tied to the cloud; the name says it, cloud controller manager. So that is, in short, the architecture diagram. My drawing has gone way below the slide, which is fine, no worries, I can just delete the scribbles. Cool, so that was the Kubernetes architecture. Now you know the complete process of what happens when I run kubectl run nginx, what components are in the control plane node, what components are in the worker node, and that your application actually runs here: this pod is your application, and those are the components around it. I hope I have gone really slowly through the architecture and that you understood it. Just give a thumbs up in the chat if you understood the architecture of Kubernetes completely. Cool.
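A quick way to see that reconciliation loop for yourself; a sketch using a throwaway deployment named web (we cover deployments properly later in the workshop):

```bash
# Desired state: 3 replicas of nginx, managed by the deployment/replicaset controllers.
kubectl create deployment web --image=nginx --replicas=3
kubectl get pods -l app=web                      # actual state: 3 pods

# Kill one pod; the controller notices desired != actual and creates a replacement.
kubectl delete $(kubectl get pods -l app=web -o name | head -1)
kubectl get pods -l app=web                      # still 3 pods (one is brand new)

kubectl delete deployment web                    # clean up
```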
Let's move on to the next topic, which is YAML. Again, I told you this is beginner friendly, Kubernetes from scratch, and the next important topic is YAML because this is what you will actually be writing. So what exactly is YAML? Let me zoom in a bit. YAML is a widely used file syntax in the cloud native space. It is a human-readable data serialization language, and the name is recursive: YAML Ain't Markup Language. It's very easy, but indentation matters a lot, so let's try to understand its syntax. This particular file is the example of a pod, but don't try to understand what is in the YAML yet, just the YAML syntax, because this section is about YAML. In YAML everything is key-value. Here you have apiVersion and v1: a key and a value. kind is also a key and a value. But sometimes you need more complex structures, which can be represented as objects: here metadata is the object and the indented fields are the attributes of the object. Next come lists: whenever you need to define a list inside YAML, for example multiple containers, that should be a list of items, and you represent each list item with a dash. And always remember, the two spaces of indentation matter a lot: the indentation shows that these attributes are part of this object, and that this key-value pair belongs to this attribute inside the object, so you need to make sure your indentation is perfect. If you have another container, you add another list item and do the same: image redis, name, and so on. Now suppose you need to write commands or multiple lines of text inside a YAML file. How do you do that? You use the pipe symbol: a pipe starts a multi-line string. One example is a CA certificate file from Kubernetes (I'm giving you real use cases along the way): to put the certificate into the file you need a multi-line string, so you prefix it with the pipe symbol. Another example is a shell script inside a pod: if you need to write a shell script inside a pod manifest, you use the same pipe symbol. Now, inside the containers section of this YAML file (sorry, I'm used to saying container), you can also see placeholders using the dollar sign, like image, deploy and tag placeholders. And just a second, the screen broadcast stopped; it should not be a problem. Cool, we're back. Where were we? Yes, the dollar sign is used for
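To tie the syntax together, here is a small self-contained sketch you can paste into the playground; the pod and the script in it are purely illustrative:

```bash
# Illustrative YAML: key/value pairs, a nested object, a list, and a pipe multi-line string.
cat <<'EOF' > yaml-syntax-demo.yaml
apiVersion: v1                # key: value
kind: Pod                     # key: value
metadata:                     # object
  name: syntax-demo           #   attribute of the object
  labels:                     #   nested object
    app: demo
spec:
  containers:                 # list: each "-" starts a list item
  - name: busybox
    image: busybox
    command: ["sh", "-c"]     # inline (flow) list form
    args:
    - |                       # pipe: everything indented below is one multi-line string
      echo "hello from a multi-line script"
      sleep 3600
EOF

kubectl apply -f yaml-syntax-demo.yaml --dry-run=client   # validate the syntax without creating anything
```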
the environment-variable style placeholders. And if we want to have multiple objects in a single file, multiple complete documents, we separate them with three dashes: ---. One document can be above it and one below it, and they will be read and applied separately. Cool. I think that's pretty much it; YAML is not something I want to cover in more detail than this, because it is just key-value pairs and fairly simple to write. The terminology you'll pick up as we move to the next sections; you just need to recognize what is an object, what is a list, and what are the list items. Cool, let's move to the next section: what is the difference between imperative and declarative commands? In the architecture diagram we were running kubectl run nginx --image=nginx, a very simple example. That is the imperative way: we are doing it via the CLI. Declarative means the YAML file: we declare everything inside a file and apply it. Generally the imperative way is limited in what you can do, so in production you always end up writing declarative files. But that doesn't mean you have to write them from scratch; you can copy-paste like a good developer, or do a dry run to get the skeleton of the object and then add or remove whatever you need. So that's imperative and declarative. Now, inside a Kubernetes object's YAML file there are a few things that are common across all of them: apiVersion, kind, the metadata section, and the spec section. The most important concept here is called GVR: group, version, resource; you will hear this term. The group is a collection of kinds: for example, apps is a group, and that group has different kinds in it, Deployment, ReplicaSet, StatefulSet and so on. So if instead of a Pod I want a Deployment, I cannot use just v1; I have to use apps/v1. That's how the apiVersion and kind are mapped. Then the version: every group has one or more versions, alpha, beta, stable, so v1alpha1, v1beta1, v1; you will often see alpha features, beta features and stable features in Kubernetes. And then you have the resource, which is the use of the kind in the API: pods, deployments; that is the resource. So that's the group, version and resource.
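You can explore that mapping straight from the cluster; a quick sketch (the exact column layout and explain output differ a bit between kubectl versions):

```bash
# The APIVERSION column shows the group/version for each resource:
# pods live in the core group ("v1"), deployments in "apps/v1".
kubectl api-resources | grep -E 'NAME|^pods |^deployments '

# kubectl explain shows the kind, group and version for one resource.
kubectl explain deployment | head -5
```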
Then comes the metadata section. The metadata section generally contains two things; it can contain annotations and more as well, but we are not going into the details of annotations (though they matter sometimes, for example with Ingress or certain other controllers in Kubernetes). At a very basic level, with which you can start writing good-quality YAML files, you just need labels and a name; they are the bare minimum. The creationTimestamp you see here is not required; we never write it ourselves, it appeared automatically because this was copied from a dry run. So you have labels and the name, and then comes the actual spec section. The spec section is what you actually want your object of that particular kind to do. I want to create a pod, it should have a single container, that container's image should be nginx, and the container name should be nginx; that's it. The resources field here is optional, dnsPolicy and restartPolicy are optional; the rest is mandatory. So I hope you understood kubectl run as well; when we get to the terminal I can show you the dry run. I know you might be a little bored with the theory, but I'm telling you once again, stick to your laptops and screens and do not miss anything; you need to learn Kubernetes today, so stay with me. Okay: go to killercoda.com; I'll be sharing the link in the chat. Go to killercoda.com, click on Playgrounds, and then click on Kubernetes 1.24. It might take a couple of minutes, because when there is a sudden load, additional instances have to be provisioned in the backend, so you might see an over-capacity message; don't worry, it will scale. I have given the link in the chat: killercoda.com, Playgrounds, the Kubernetes scenario. If you don't have an account, you can create one. Somebody asked about GVR again: a group is a collection of kinds. In any software ecosystem you have everything defined as APIs, each API has different versions, and when you develop software you release an alpha version first, then a beta, then a stable version; while it's alpha your API version is alpha, then beta, and so on. And within a group there can be multiple kinds with different features; think of it like that. So, where were we? I hope you are all in the playground now. Usually workshops start by having you create the cluster, but the thing is, you already have a cluster here, so let's first learn how to use it; I'll tell you later where cluster creation fits, and there are managed ways to create clusters too. I know cluster creation is hard and there are big pieces of software that ease the process, but let's learn Kubernetes first. This is a two-node Kubernetes cluster. Run your first kubectl command and give me a thumbs up: kubectl get nodes.
Give me a thumbs up that you ran kubectl get nodes; it's a big thing, right, your first kubectl command. The next command is kubectl get pods -A (capital A). This will show you all the pods that are running inside your cluster: the networking stuff, CoreDNS, etcd, the API server, the controller manager, kube-proxy, the kube-scheduler; all the major components we have discussed plus the networking pieces. That's what comprises your Kubernetes cluster; these are its components. And you can go further: kubectl get pods -A -o wide (let me clear the screen) shows you the node where each pod is running. You can see that the controller manager, API server, etcd and scheduler are all running on the control plane node, because they are the control plane components. Pretty simple. Now, the first command from the architecture diagram; everybody run it: kubectl run nginx --image=nginx (oops, two dashes before image). Your pod is created; simple. Now do kubectl get pods. We'll go to the pod section and discuss in more detail what a pod is, but I want you to get the feel that you are running something: this is the first pod you have actually run in the cluster, and you now know the complete architecture behind how it runs: the API server, the scheduler, the controller manager, the kubelet, the CNI, the CRI, all of that communication has happened. Okay, let's move on. I was telling you that with a dry run we can actually get a YAML file; that's what I meant. Don't give it the same name nginx, because you can't create two pods with the same name in the same namespace; give it any other name and try --dry-run=client -o yaml. It will give you the YAML file, the same kind of YAML file I have just shown you. Cool. Let's come back to the slides, because that's enough demo for now; I don't want you just doing stuff without knowing what it means.
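Here is that skeleton trick written out; the name web2 is arbitrary, pick anything that isn't already taken in the namespace:

```bash
# Generate a pod manifest without creating anything...
kubectl run web2 --image=nginx --dry-run=client -o yaml > pod.yaml
cat pod.yaml                 # the same apiVersion / kind / metadata / spec skeleton we discussed

# ...then create it the declarative way and check it.
kubectl apply -f pod.yaml
kubectl get pod web2
```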
Now let's talk about namespaces in Kubernetes. You already have a Kubernetes cluster in your Killercoda environment, and that cluster has namespaces. Out of the box there are four of them; let's look side by side with kubectl get ns: you have default, kube-node-lease, kube-public and kube-system. kube-system means the system pods run in this namespace: kubectl get pods -n kube-system shows the pods in that particular namespace (to see pods in a specific namespace you just add -n and the namespace at the end). You can see all the system pods, the API server, the controller manager, the critical components; they are in the kube-system namespace, and you should not run your applications or databases in this particular namespace, because it is reserved for the system. Then comes kube-public. Actually, sorry, I started describing the wrong one: the namespace with the heartbeats is kube-node-lease; kube-public is the one containing the cluster info and the certificate data. If you look at kube-public (kubectl get pods -n kube-public; not the pods, the ConfigMap there, and I'll tell you what a ConfigMap is later on), it contains the cluster info, the same data that kubectl cluster-info shows; that's the publicly accessible stuff. Next is kube-node-lease, which is what I was talking about a moment ago: it is not very old, I think it came around 1.16, and every node has a Lease object in it, basically the heartbeat of the node. We can run kubectl get lease -n kube-node-lease (you should know this one too), and you can see that each node has its lease, its heartbeat. And then you have the default namespace, which is where you create your stuff, your applications. Those are the namespaces; now, what are namespaces for? Namespaces make your environments isolated. For example (sorry, I wasn't on the right slide), say you have different teams in your organization: a dev team and a testing team. Each team should have its own quota, its own controls, its own policies. For example, I do not want the testing team to use more than a certain amount of resources, and I do not want the dev team's application to interfere with the testing team's application. Say this is my app at version v1, running perfectly fine, and somebody from testing suddenly changes the image because they are testing something, without me even knowing, because they added it to their CI pipeline. Now my application is gone; it is their application. So I run my application in my dev namespace, and they are free to run the same copy of the application with their image in their namespace. I am happy because my workload is running, and they are happy because they can test their version; and if you want to test a different version of your own application, you can also do that in a different namespace. So namespaces give you isolated environments where you can have different policies: you can make sure the dev team's service accounts do not have access to the testing team's, and you can make sure the dev team's services cannot
communicate with the testing team's services; you have network policies for that. So you can have different sets of policies and different quotas in terms of CPU and memory defined within your namespaces, and each namespace can be used by a separate team and for a separate purpose. There is a secondary reason for namespaces as well. For example, you deploy monitoring resources, then your application, then your database, then, say, the controllers of a service mesh. You are deploying multiple things, and suppose everything goes into default: you do kubectl get pods and, oh my god, so many things appear. Ideally you create a namespace called monitoring and put the monitoring components there; easy to monitor, easy to aggregate, easy to isolate. Put your application in a different namespace, your service mesh components in another, your database components in another. So you can also segregate namespaces by what you want to deploy on the cluster; don't blindly deploy everything in the default namespace, segregate by namespace so that it's easy to view, visualize, monitor and check. So that's the other reason: grouping resources separately, monitoring, databases and so on; plus running different versions of your application, which we discussed, and separating your teams. Now let's see this in action on the screen. We already saw kubectl get namespace, but I typed ns; how did I know ns exists? Let me show you: kubectl api-resources. When you run kubectl api-resources it gives you the list of all the API resources, and some of them are namespaced while some do not require a namespace. For example, kubectl api-resources --namespaced=false shows things like persistent volumes, storage classes and nodes; these are not namespaced resources. There is also a short-name column, which is where I get the short names: you can do kubectl get nodes or kubectl get no. Nodes are not a namespaced resource; pods are, which is why right now I only see the pods of the default namespace, which is the namespace of the current context. Let me also give you the GitHub link, the Kube Simplify workshop content for Kubernetes 101; please take note of it, because all the examples I will be doing are already present in that repository and we will perform the steps one by one (yes, I spent my weekend on this for you; you just have to stay till the end). Let's create a namespace: kubectl create namespace dev; the namespace dev is created. Let's create another one: kubectl create namespace testing. Then create a deployment: kubectl create deployment nginx --image=nginx (note it is create deployment, not create pod; you have to add "deployment" or "deploy" there, otherwise it won't run). And you can create a deployment with the same name in different namespaces: one I created in the default namespace, one in the dev namespace. How do you list all namespaces? kubectl get ns; you can see I created dev about 48 seconds ago and testing about 45 seconds ago.
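Since we just talked about per-namespace quotas, here's a minimal sketch of what one looks like, with made-up numbers, applied to the dev namespace we just created:

```bash
# Illustrative only: cap the total CPU/memory and pod count allowed in the dev namespace.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
    pods: "10"
EOF

kubectl describe ns dev        # the quota now shows up under "Resource Quotas"
```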
I can also do a kubectl describe of the namespace: kubectl describe ns dev. It shows the dev namespace, the automatic label it gets, status Active, no resource quota and no limit range, because we haven't specified any of those things when we created this particular namespace. Now, that was the imperative way; you can also do it in a declarative way. How do you get the declarative form? kubectl create namespace demo --dry-run=client -o yaml. You will see the declarative way of creating the namespace: apiVersion v1 (it is present in the core v1 group), kind Namespace, and the metadata contains the name of your namespace; nothing else is specified (quotas and limits come from separate ResourceQuota and LimitRange objects that you create in the namespace, like the one above). In order to delete a namespace, you do kubectl delete ns testing, or namespace testing; I got an error the first time because I forgot to type kubectl. Deleting a namespace deletes all the resources created in that namespace. I have spent a lot of time on namespaces, honestly more than enough, so if you have understood namespaces, please give a thumbs up. One last thing on namespaces: switching the context. Right now, if I do kubectl get pods, I only see the pods in the default namespace. What if I want to switch the context to the dev namespace? I switch the current context's namespace to dev, and now kubectl get pods shows the pods from the dev namespace and not from the default one. So if you want to switch your context to a different namespace, you can use that command, or you can use fancy tools like kubens and kubectx. Now let's go to the next section: labels and selectors. Let me bring it up here and zoom in, because I want to show you a few things. Let's start with labels. With labels in Kubernetes we can make our objects meaningful. When you go to a restaurant and they have tea, coffee, milk, hot chocolate in big cans, they put a label on top of each one. It makes it meaningful; it tells you this is coffee, this is tea, otherwise you don't know how to pick the coffee or the tea. Similarly, labels add meaning to your Kubernetes objects, and they are the same kind of key-value pairs: drink=coffee, drink=tea, or a different set of key-value pairs that you define on your own objects. In this particular example, nothing was there, then we created a pod nginx and ran kubectl get pods --show-labels, which shows the labels on it, like run=nginx, which was created automatically. What if we want to add a label to the pod? We use kubectl label, give the pod name, and then key=value (always remember: key equals value). The pod is labeled; run kubectl get pods --show-labels again and you will see the newly created label there. Labels are defined in the metadata section, and as I told you, there are a
lot of things in metadata, like annotations; for example, the CNI and the pod IP information can end up in annotations, but that is out of scope of this workshop. You also have labels, which you define right there in metadata. What else? Now let's go to selectors. In some Kubernetes objects, things are selected based on labels. If a service has to send traffic to a pod in the cluster, it needs to know where to send the traffic, and that happens on the basis of labels: I need to know which pods I have to send the traffic to, so the service selects on labels. Same for a deployment: if there are five replicas, I need to know that those five replicas have the same label, otherwise I won't know whether they are part of my deployment or not. So in the metadata I have this label, and in the selector of the deployment spec I have matchLabels; that is selecting on the basis of labels. This type of selection is called equality-based selection. Another example is the node selector: you can add a nodeSelector, which is also key-value, equality-based. There are other, more advanced selectors which are set-based, with in, notin and exists; this is a slightly more complicated concept, so stay with me. This example is taken from the docs: the selector matchLabels component=redis is normal equality-based selection. The second one is matchExpressions: there I say key, operator In, values, and then key environment, operator NotIn, values. The in/notin forms are for more complex operations, and I do not think you will be using them anytime soon, because 90 percent of the work can be done without them; you mostly just need simple labels, simple key-value pairs. Let's move on to the exercise for this. Move to the labels directory of the exercise repository and run the nginx image; the pod won't already exist here because we have switched to the dev namespace, so create it, and then the nginx deployment, which I think is already created. Let's label the pod: first kubectl get pods --show-labels, and you can see the label run=nginx is there. Now label the pod with the next command, kubectl label pod nginx app=demo, run the --show-labels command again, and app=demo has been added. So I'm not lying: this is how you add a label and this is how you see the labels added in your cluster. (The full document will only be given to those who stay till the end; very simple, just keep learning.) So now we know about labels and selectors as well. Cool. After this, please jump to the init container section in your file and leave the one above it, because that is something we'll do at the end.
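To round this off, here's a small sketch of selecting by label from the CLI, plus the equivalent set-based form; it assumes the app=demo label we just added:

```bash
# Equality-based selection: only pods carrying app=demo.
kubectl get pods -l app=demo

# Set-based selection works on the CLI too.
kubectl get pods -l 'app in (demo)'

# The same idea inside a deployment spec (snippet only, not a full manifest):
#   selector:
#     matchExpressions:
#     - {key: app, operator: In, values: [demo]}
```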
The exercise document will only be given to those who stay till the end, so just keep learning. Now we know about labels and selectors as well. After this, please come to the init container section in your exercise file and skip the part above it, because that is something we will do at the end. Now let's move on. So far we have learnt what Kubernetes is and why Kubernetes, we have done the architecture deep dive, we have learned what namespaces are and their use cases, we have learned about YAML files and what the YAML of a pod looks like with its major components, and we have actually run a pod, seen the dry-run command, and used both the imperative and the declarative way. That is a lot of ground covered; now let's move to more action. In Kubernetes, anything that you run runs as a pod. Everything runs as a pod: you want to run a container, but it actually runs inside a pod. This is the pod manifest we have seen a couple of times now, so you should be familiar with it. How does it run on a node? Your node is nothing but a virtual machine, a bare-metal machine, a Raspberry Pi, whatever is there, and the pod runs your container inside it; a pod can have one or more containers, and we will talk about multiple containers and their use cases later. Wherever a pod is created it is assigned a pod IP: there is a pod IP range and every pod gets its own IP. In the manifest, the metadata section has the pod name and the spec section defines the containers. That is a pod.
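For reference, a minimal pod manifest of the kind used throughout this section looks roughly like this (the pod and container names are just examples):

  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx          # pod name lives in metadata
    labels:
      run: nginx
  spec:
    containers:          # one or more containers, all sharing the pod's IP
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80

You can create it declaratively with kubectl apply -f pod.yaml, or imperatively with kubectl run nginx --image=nginx.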
Now, what can we do with pods? Let's see. Clear the screen, do kubectl get pods, then kubectl run with your name as the pod name (I'll use mine) and --image=nginx, and the pod is created. These are the operations I want to show you. After creating, you can get the pod with kubectl get pods, and you can describe it with kubectl describe pod. In describe there is a lot you can learn; whenever your pod is not starting, not behaving properly, or stuck in Pending, this is the first command you should run. It shows that first the scheduler scheduled the pod to a node, then the kubelet triggered pulling the image, then the kubelet created the container and started it; it lists all the events that have happened. It also shows the tolerations, whether the pod tolerates any taints; the quality of service class, here BestEffort (I will not go into QoS classes because they are out of scope for Kubernetes 101); and the mounts, where this one is a service account token with which you can talk to the Kubernetes API server from within the pod, mounted inside every container. It gives you the container information: the container ID, the image and its SHA, the host port, the state, whether it is ready or not, the restart count (for example if it went out of memory), and the pod IP. It shows the annotations that got added; on this cluster an annotation is added by the mutating webhooks from the Calico project, since Calico is the CNI here and that is part of its functionality, so you do not need to worry about it. Describe also gives you the pod name, the namespace, the node where it is running, and the labels, so it is the complete description of the pod. Next, just as you look at the logs of any application, you can run kubectl logs -f on the pod (the -f is to follow), and whenever a new request comes in you see the log lines; that is how you see the logs of the container. Another action is deleting: kubectl delete pod nginx and the pod is gone. You can also exec into the pod to see what is happening inside: kubectl exec -it, the same idea as docker exec, then the pod name, then -- and the shell (some images have sh, some have bash). You can see the pod has its own filesystem; you can run ps, cat, move into different directories, whatever you normally do, and then exit. You can also run kubectl get pods -o wide, which gives you the wider view: not only the age but extra information such as the IP, the node name, the nominated node and readiness gates. One last thing you can do with a pod, and with other resources too, concerns deletion: whenever you delete a pod there is a termination grace period, which I believe defaults to about 30 seconds. A common scenario: a request is in flight and you suddenly issue a delete; you at least want that request to be processed, with no further requests accepted. So if the pod is serving a request, the termination grace period gives it up to those 30 seconds to finish before it is killed. That is what a pod is. I am not sure whether I put the basic pod examples in the repository, but from now on there will be good examples there; the plain pod we have already created imperatively ourselves ten or fifteen times. To come out of the exec session just type exit; I will add these commands to the repository as well. Anyway, let's go back to our notes.
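Before moving on, a rough cheat-sheet of the pod operations we just ran (the pod name nginx is just the example):

  kubectl describe pod nginx                 # events, tolerations, QoS class, mounts, IP, node
  kubectl logs -f nginx                      # follow the container logs
  kubectl exec -it nginx -- sh               # open a shell inside the container, type exit to leave
  kubectl get pods -o wide                   # adds IP, node, nominated node, readiness gates
  kubectl delete pod nginx                   # respects the termination grace period (default 30s)
  kubectl delete pod nginx --grace-period=0 --force   # skip the grace period (use with care)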
Next is the pod lifecycle. Everything has a lifecycle, and a pod does too. Whenever a pod is created, the first state it goes through is Pending. Pending means the scheduler is still trying to find a node for it: say you request a certain amount of resources and no node can cater to that request, then the pod stays in Pending; or say you created a pod with storage attached through a persistent volume and a persistent volume claim, then the pod stays in Pending until the PV and PVC are created and bound to each other. That is the first stage. Second is ContainerCreating: the CRI is pulling the image from the registry, the CNI is attaching the network, and the CSI is attaching the storage. Then comes Running, which we already know. If there is an error in your code the pod errors out, for example with exit code 1, and you can get CrashLoopBackOff when the process dies too many times, runs out of memory, or simply finishes but keeps being restarted; Kubernetes has the ability to keep your container running, but your process is still your responsibility. Finally there is Succeeded, or Completed, which you see when the work has actually finished, for example with jobs. That is the pod lifecycle. Now you know about containers and pods, and I told you a pod can have one or more containers, c1 and c2, which is actually the next topic, but it is worth mentioning here because an init container is also just another container. An init container is a special type of container that runs before the main container, and it has its own use cases. The main container is your application container; the init container runs before it. It can contain custom code that is not present in the app image: maybe you want some files to be created, or the filesystem changed in some way, before your application container starts. It can also do precondition checks: maybe you want to verify that certain services are running, or that other preconditions for your container are satisfied. You can have multiple init containers, three or four with specific jobs, and people really do use two or three init containers before starting their main application, because there can be several things that must be ready before the application container runs. If you have multiple init containers they run in sequential order, one after another. That is the screenshot of the init container spec; you can see the init containers listed there, but we will see it in action anyway.
Let's go to the workshop; this is the init container example. Copy it and do a vi of init.yaml. The kind is Pod, and in the containers section you have a name, the image nginx, a containerPort of 80, and a volume mount; I will explain the volume in a moment, but first the init container. Init containers are specified with initContainers (note the camel casing). You give the name of the container, the image, for which I chose busybox, and the command: if you want to run a command inside the container you write command, and here it is wget -O with the output going to a particular location, /shared/index.html, fetching kubesimplify.com. Then there is the volume. A volume in Kubernetes, specifically emptyDir, is just an empty directory created on the node, not inside the container; that directory lives on the node, and as soon as the pod goes away the emptyDir goes away with it. To define the volume you give it a name and say emptyDir, so this is one type of volume, a local one. When you want to mount that volume inside a container you use volumeMounts: I want the shared volume mounted inside the init container at /shared, and I want the same volume mounted inside the nginx container at /usr/share/nginx/html. So what we are doing is fetching the kubesimplify page with wget and storing it at /shared/index.html. Since this is an init container, it runs first; it writes that file into the emptyDir, which is shared, so /shared now contains index.html, and that same directory is mounted into the nginx container at /usr/share/nginx/html. When nginx starts, instead of the default "welcome to nginx" page, our HTML is served; that is exactly the use case I was describing. Save with :wq, kubectl apply -f init.yaml, the pod init-demo is created, kubectl get pods, and you can see init-demo running. Now exec into the pod and curl localhost: kubectl exec -it init-demo -- sh, then curl localhost, and you can see it is not the default HTML, it is the proper kubesimplify page being served; you can even spot Hashnode in there, because kubesimplify.com runs on Hashnode. Ctrl-C, exit, clear the screen, and let's go back. So we learned about init containers: they are a special type of container that runs before your main container, you can mount volumes and prepare the filesystem for the main container to use, I gave you a proper use case and an example, and you can have multiple init containers which run in sequential order.
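A minimal sketch of the init-container pod from this demo follows; the exact file in the workshop repo may differ in details, and fetching https from busybox depends on the image's TLS support:

  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    initContainers:
      - name: fetch-page
        image: busybox
        # runs to completion before the main container starts
        command: ["wget", "-O", "/shared/index.html", "https://kubesimplify.com"]
        volumeMounts:
          - name: shared
            mountPath: /shared
    containers:
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80
        volumeMounts:
          - name: shared
            mountPath: /usr/share/nginx/html   # nginx serves the fetched page
    volumes:
      - name: shared
        emptyDir: {}                           # created on the node, lives as long as the pod

Then kubectl apply -f init.yaml and kubectl exec -it init-demo -- curl localhost should show the fetched page instead of the default one.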
Now, multi-container pods. By now you know a single pod can run multiple containers. One interesting fact first: I told you that whenever you create a pod it gets an IP, but what about two containers? The pod always gets a single IP; the containers share it. Containers in a pod share namespaces, share the IP, and talk to each other over localhost. Multi-container pods have their own use cases. Running one container per pod is good practice, but sometimes you genuinely need more, like the sidecar patterns: you need an Envoy proxy alongside your application, you need a container that collects the application logs and ships them somewhere, you need a reverse proxy to host static files produced by the main container, or you need to route network traffic out through a proxy. For those kinds of scenarios you use multi-container pods. Let's quickly see the example; vi multi.yaml and let's look at it. In the spec section of a pod you define containers, and containers is a list, so these are list items. Actually, looking more closely, this is not the multi-container example, it is the multiple-init-container example, so init containers are not over yet. As I said, you can have multiple init containers. Here the app container is a busybox container that just does a sleep of 3600, so it is a sleeping container, and there are two init containers, first init-myservice and then init-mydb. What does each of them do? It loops with until nslookup against a local service; calling a service is simple, and since the mounted token gives you the namespace, the lookup is against something like <service>.default.svc.cluster.local. If the first service resolves, that init container succeeds; if the second resolves, that one succeeds; and only if both succeed does the app container start. This is exactly one of the use cases I mentioned earlier if you were listening carefully: precondition checks; only if certain services are present should my container start. That is a very valid use case. Now apply the YAML: kubectl apply -f multi.yaml, init-demo-2 is created, kubectl get pods, and there you go, you can see the fancy state Init:0/2, meaning none of the init containers has completed yet. What happened? The first thing you do is describe: kubectl describe pod init-demo-2, and it says it successfully pulled the image and started init-myservice, the first init container, but until the init containers finish, the main container will not start. So now let's create the services we said we would create; if you go back to the workshop there is an svc file (do not worry about services yet, I will cover them later, but we need them for this demo to work). I hit an error validating the file because I typed an extra character, but it is correct now, and both services are created. Now run kubectl get pods and we will slowly see the pod start.
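The pattern looks roughly like this; the service names myservice and mydb are placeholders, and this mirrors the well-known example from the Kubernetes docs rather than the exact workshop file:

  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo-2
  spec:
    containers:
      - name: myapp
        image: busybox
        command: ["sh", "-c", "echo the app is running && sleep 3600"]
    initContainers:
      - name: init-myservice
        image: busybox
        # block until the Service called myservice exists in cluster DNS
        command: ["sh", "-c", "until nslookup myservice.default.svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
      - name: init-mydb
        image: busybox
        # runs only after init-myservice has finished, again blocking on DNS
        command: ["sh", "-c", "until nslookup mydb.default.svc.cluster.local; do echo waiting for mydb; sleep 2; done"]

Until the two Services exist the pod sits in Init:0/2, and kubectl describe pod init-demo-2 shows which init container it is still waiting on.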
I hope I got the names right; let me check: db service and app service, yes. It took a little bit of time, but kubectl get pods now shows the pod running, because both init containers succeeded once I created both of those services. That is another use case of init containers. Now let's move to multi-container pods; we have already covered the theory, so let's do the practical. Copy and paste the multi-container example, vi mc.yaml, and paste it. It is a multi-container pod, and again we have the same kind of emptyDir volume that I explained earlier. How do you define a multi-container pod? In the containers section there is a list. The first item is the nginx container, which again mounts the shared-data emptyDir to /usr/share/nginx/html inside the container, so from shared-data on the node to /usr/share/nginx/html in the container. The second one is the alpine container, which mounts the same shared-data volume at the path /mem-info; both containers share this directory, and there is a purpose for that. What command does the alpine container run? It runs /bin/sh -c with arguments, and the argument is a small script: while true, append the date to /mem-info/index.html, then read /proc/meminfo and append that to index.html as well. Very simple. Let's apply it: kubectl apply -f mc.yaml, the multi-container pod is created, kubectl get pods, the containers are being created, and after a moment you see 2/2 instead of 1/1, which shows both of our containers are running fine. Now, the logs of a multi-container pod: if we describe the pod we can see the two containers, nginx and alpine. To see the logs of the nginx container you run kubectl logs -f, and after the pod name you give the container name with -c, so kubectl logs -f multi-container -c nginx, and you see the logs for that specific container. But we want to look at the HTML file and what is happening there, so do kubectl exec -it and just curl localhost to see what is in the file. You can see the timestamp, 15:38:16; run it again and it is 15:38:22, so every couple of seconds the date and /proc/meminfo are appended to the HTML file and you can watch it grow. That is one of the use cases of a multi-container pod, and if you liked the example and have followed along this far, hit the thumbs up. Come on, don't slow down; I know things are getting complicated, but it is important to learn, because the examples I am giving are actual use cases, not just simple pods. This is not just any Kubernetes workshop; I prepared it so that you understand what a pod is, what init containers and multi-container pods are, how to run them and what they are used for, so that you have real examples in front of you and can see how the pieces fit together.
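A sketch of that multi-container pod; the paths and names are approximate and the workshop file may differ slightly:

  apiVersion: v1
  kind: Pod
  metadata:
    name: multi-container
  spec:
    volumes:
      - name: shared-data
        emptyDir: {}
    containers:
      - name: nginx
        image: nginx
        volumeMounts:
          - name: shared-data
            mountPath: /usr/share/nginx/html   # serves whatever the other container writes
      - name: alpine
        image: alpine
        volumeMounts:
          - name: shared-data
            mountPath: /mem-info
        command: ["/bin/sh", "-c"]
        args:
          - >
            while true; do
              date >> /mem-info/index.html;
              cat /proc/meminfo >> /mem-info/index.html;
              sleep 2;
            done

kubectl logs -f multi-container -c nginx picks one container's logs, and kubectl exec -it multi-container -c nginx -- curl localhost shows the growing index.html.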
In this example, the shared directory is mounted from the node, the alpine container writes the date and the meminfo into index.html there, and nginx mounts the same directory at /usr/share/nginx/html, which is why one container is writing the data and the other is serving it in the HTML file. Those are the kinds of use cases that exist. Let's move on. So far we have learnt pods, the pod lifecycle, init containers, multiple init containers and multi-container pods; now let's look at a very interesting topic, which is probes. Open your brain and try to understand this. Container probes are something you apply to a pod. For example, say you have a pod and, via a service, you are sending traffic to it. You keep sending traffic, but what if the pod itself is not ready to handle it? Maybe there is a sudden burst of requests it cannot handle (yes, scaling exists, but still), maybe a file got corrupted or deleted, maybe someone tampered with the container; something has gone wrong where the pod looks ready but is not actually ready to take traffic. Kubernetes does not know that, so it keeps sending traffic even though the pod cannot handle it. Another example is a deadlock: you have a complex database, and you have read about deadlocking in databases; in that scenario the deadlock will not be identified by Kubernetes, it will keep sending traffic, and that leads to data inconsistencies and your application not giving proper output. For these situations there are probes you can define inside your pods. There are three types: readiness probes, startup probes and liveness probes. A readiness probe checks whether the pod is ready to accept traffic; it is a simple check of the pod's dependencies in terms of availability or latency, for example whether a particular port is open or whether a /health endpoint responds. A liveness probe can use one of three mechanisms: httpGet, tcpSocket and exec. With httpGet, if the response is OK the check passes; tcpSocket is a port check, and if the port is open it passes (sometimes a port is not open inside the container and Kubernetes would not otherwise know); and exec lets you run a custom command, like a file check similar to what we did earlier, as the condition. The kubelet restarts the container whenever the liveness probe fails. The startup probe is a special type of probe which executes first, and until it succeeds, none of the other probes start running.
Now, a misconception, or wrong usage, that I have seen in my experience is that people use the readiness and liveness probes together with exactly the same check; that is not actually helpful, because they exist for specific purposes and should be used for those purposes. You can also define initialDelaySeconds, how long to wait before even running the probe; the timeout, how long the kubelet waits for a response from the liveness, startup or readiness probe; the period, how often the probe should run; and the success and failure thresholds, after how many passes it counts as a success and after how many failures it counts as a failure. For example, if the httpGet fails three times, only then does the container get restarted, and when it passes once the pod is considered healthy again. (Just hold on, my screen share dropped, so I need to reshare. Yes, we are back; let me use a bigger screen size.) Where was I? Right: we have three probes, readiness, startup and liveness, each with its own purpose. For checking traffic and latency-style dependencies, go for readiness; for a direct httpGet, tcpSocket or exec check, use liveness; and the startup probe is for an additional check you want to run first and only once, because after the startup probe succeeds it never runs again, and from then on only readiness and liveness keep running. Think of it like this: if you want a file check to run just once and then disappear while the other probes keep running, that is the startup probe. Let's see whether the workshop has an example; nice, the workshop has a container probes example, so copy it and go to the playground. The playground has expired because it has been about an hour, so let's start it again; by the way, a huge shout out to Killercoda and Kim, who have been really helpful with these workshops and pre-provisioned a lot of things so we can all do the labs peacefully. Create the container-probe file and paste the example. The kind is Pod, the spec section has containers, and on the container you specify the probes: a livenessProbe with httpGet on port 80, so if I can do an HTTP GET on port 80, the container is able to take traffic. Then kubectl create -f cp.yaml. By the way, nobody has asked me why I sometimes run kubectl create and sometimes kubectl apply; I was doing it intentionally so that someone would ask. Basically, create creates the resource once, while apply applies changes on top; if you have created something once you cannot create it again, but you can keep applying changes, so you can always use apply, and for quick one-offs create also works. kubectl get pods shows my container running, and kubectl describe pod shows the liveness section in the output. Now the workshop says to redo the demo with the path /demo to force a failure: instead of /, use /demo in the httpGet, which we know does not exist.
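Here is a hedged sketch of a pod with a liveness probe like the demo's, plus a startup probe to illustrate the "runs first, then never again" behaviour; the startup check and the exact threshold values are illustrative, not taken from the workshop file:

  apiVersion: v1
  kind: Pod
  metadata:
    name: probe-demo
  spec:
    containers:
      - name: nginx
        image: nginx
        ports:
          - containerPort: 80
        startupProbe:
          exec:
            # a one-time file check; once it passes, this probe never runs again
            command: ["cat", "/usr/share/nginx/html/index.html"]
          periodSeconds: 2
          failureThreshold: 30
        livenessProbe:
          httpGet:
            path: /                  # change to /demo to watch the probe fail
            port: 80
          initialDelaySeconds: 5     # wait before the first check
          periodSeconds: 10          # probe every 10 seconds
          timeoutSeconds: 1          # how long the kubelet waits for a response
          failureThreshold: 3        # restart the container after 3 consecutive failures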
Edit the file and apply it; actually we cannot change the probe on a running pod, so we need to delete it first with kubectl delete (I should have added --force) and then apply it again. It is created, and it will fail. kubectl get pods: after some time, about 30 seconds I believe, because we have three failure checks and each check runs every 10 seconds, the container restarts; note this is the liveness check, not the readiness check. Another thing you can do is kubectl get pods -w, and you can now see one restart. Let's describe it: kubectl describe pod, and you can see "Liveness probe failed" with a 404, because /demo simply does not exist. That was one demo. You can also switch to tcpSocket: instead of an httpGet path you just check a port; we know the container port is 80, so a tcpSocket check on port 8080 will fail, and on port 80 it will succeed.
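For the tcpSocket variant, a minimal sketch (again illustrative rather than the exact workshop file) looks like this:

  apiVersion: v1
  kind: Pod
  metadata:
    name: tcp-probe-demo
  spec:
    containers:
      - name: nginx
        image: nginx
        livenessProbe:
          tcpSocket:
            port: 80        # set this to 8080 and the check fails, so the kubelet restarts the container
          periodSeconds: 10
          failureThreshold: 3

kubectl get pods -w is an easy way to watch the RESTARTS column tick up while the probe is failing.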
The next demo is interesting and I do not have slides for it; after this demo we will take the break, and it will take about 10 minutes to complete. I hope probes are clear, by the way, because they are important; now things get interesting, so stay with me. Go to the playground and apply the file mentioned in the workshop; I will tell you what it does: we are deploying a metrics server onto the Kubernetes cluster, which creates a lot of components, the metrics-server ClusterRole, ClusterRoleBinding, Service, Deployment and so on. kubectl get pods -A and it should be running soon; actually, it will not run as-is, so wait, do not apply it yet, because I forgot one thing: we also have to add a flag. First run kubectl delete -f on it to delete all the components, then fetch the manifest; instead of curl -O, just do a wget of the raw file so you get it locally, then vi into it and scroll down to the section with the metrics-server flags. At the end of those flags you need to add --kubelet-insecure-tls, because we do not have the proper certificate setup on this lab cluster; so add a dash, then two hyphens and kubelet-insecure-tls, save it, and apply it. Now kubectl get pods, and the metrics server should start running in a moment; come on, come on, and finally the metrics server is 1/1. That is what I was waiting for. Done, let's move on, copy the next file, and now I will explain resource requests. In any of your manifests, such as deployments, you can define resources and requests, and it is very important to do so if you want to use the resources in your cluster efficiently. The object name is resources, and resources has two parts, limits and requests; limits has cpu and memory, and requests has cpu and memory. What does a request mean? It means I am requesting this much resource as the minimum for my pod; the limit means I do not want the pod to go beyond that particular amount. Run kubectl create -f rr.yaml and I will show you an example; kubectl get pods (yes, you can set up aliases, I know I keep typing the full command) shows limit-test running. We can check with kubectl top pods once the metrics are available; what I really want to show you first is kubectl top nodes, which shows the CPU and memory per node, so you can see there is one CPU and how much the control plane is using. What are we asking for? cat rr.yaml: we request 500 millicores, the limit is one CPU, but in the command-line arguments we are stressing two CPUs. What happens in this case is throttling: even though the process tries to use two CPUs, usage stays under the one-CPU limit. Let it generate the metrics while we do another example: vi into rr.yaml, change the name to limit-test-2 (because you cannot create two pods with the same name), change the cpu values everywhere to 3, and kubectl create -f again. This one will always stay in Pending. Why? Clear the screen and run kubectl describe pod on it: it says 0/2 nodes available, insufficient cpu. We do not even have the three CPUs we are requesting, so the scheduler cannot find a node; this is where the intelligence of the scheduler comes in: it says I have no node where I can run this, so the pod sits in Pending until you add a node with that capacity. Now run kubectl top pods and you can see limit-test is not going beyond one CPU: even though we requested 500m and the container arguments try to run two CPUs worth of work, the CPU gets throttled and stays below the limit we specified, always below one. I hope you understood the concept of resources.
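A sketch of the kind of requests/limits pod used in this demo; the stress image and its arguments are assumptions for illustration (polinux/stress is a commonly used stress image), not necessarily the workshop's exact file:

  apiVersion: v1
  kind: Pod
  metadata:
    name: limit-test
  spec:
    containers:
      - name: stress
        image: polinux/stress
        command: ["stress"]
        args: ["--cpu", "2"]        # try to burn two CPUs...
        resources:
          requests:
            cpu: "500m"             # schedule me only where at least 0.5 CPU is free
            memory: "100Mi"
          limits:
            cpu: "1"                # ...but throttle usage at one CPU
            memory: "200Mi"

Once the metrics server is up, kubectl top nodes and kubectl top pods show the pod pinned at roughly the 1-CPU limit; bump the request to a value no node can satisfy and the pod stays Pending with an "insufficient cpu" event.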
It is 9:30, the perfect time to take a 15-minute break, and we have a lot of interesting material to cover after that. Let me scroll back up and make you feel good about what we have covered so far: introduction to Kubernetes, cheers for you, pods, keep on clapping. There is not that much left, because the remaining topics are now easy for you: you understand the concepts of Kubernetes, of YAML files, of resource limits; I wanted to make the base clear, and now your foundations are solid, so when I return in 15 minutes we will cover the rest very quickly. It is 9:30 p.m. IST and I will be back at 9:45 p.m. IST, which is 9:15 a.m. PT; see you in 15 minutes. Okay, we are back, and I hope you are enjoying the workshop and practicing along; the demo repository is there and will stay there, so practice as much as you can, creating and running applications with Kubernetes, because the next set of workshops is built on top of Kubernetes and will teach more advanced concepts: GitOps, troubleshooting, chaos engineering and all those fancy things. The previous workshop was the extreme foundation, Linux and Docker; this one is the core foundation, which is why I wanted to spend more time on the Kubernetes pieces so that you get a good understanding. After this, everyone who attended should never say they do not know Kubernetes; they should say I know Kubernetes and I know it in depth: requests and limits, init containers, multi-container pods, namespaces, what YAML is, all of it with use cases and examples, and I have practiced it. That is how you grow. Okay, it is 9:45 and we are back; welcome back everyone and thank you so much for staying, because the next part is going to be even more exciting: I have kept networking for the end, the networking section has demos, and I will show you a lot of good stuff there. Let me take some questions first. One question was about HA: Kubernetes does not do this for you automatically; if you want to spin up an HA cluster you need three control plane nodes, each with etcd (etcd can actually also live outside the control plane), plus as many worker nodes as you would like to attach. Fun fact: the default limit of pods per node is 110, and you can edit that. There was one more question, about init containers. Init containers are very simple: they just run before the main container. I gave you the example of checking that certain files exist, or that certain services are up, before starting the main container, because if your main container requires some services and they are not there, it will simply fail; so instead of putting it into a Running state only to fail, you do those checks in init containers first.
Next, requests: requests are something you ask for when you create your pods and deployments. You are saying that for this pod I need the requested CPU to be 500m and the requested memory to be 100, but I do not want it to use more than one CPU; that example was a stress test, so do not focus on the command arguments, they are only there for the demonstration, focus on the resources section, on defining the resource request and limit. There are interesting takes on this, to be honest: as I showed, even if you specify the request, the process can still try to use two CPUs, so the request alone is not a cap; you also give an upper limit, and usage does not go over it. There are interesting discussions in the community about whether requests and limits should always be set; I recommend reading them and deciding for yourself. Okay, we still have to do deployments. Any other questions? RBAC: we are not there yet, and I probably will not cover it in depth because RBAC is too much to explain here, but I will tell you a few things and at least cover the authentication and authorization mechanisms. Let me check for other questions; I think I have answered most of them. Yes, one about Docker: when they designed Docker they did not think about scale; Docker came very early and it had its own container orchestration system, Docker Swarm, which was very easy to spin up but was not meant for scale, and they were not able to get it there, so you could say they failed on that front, but because of Docker we are all here, to be honest. Leases and heartbeats are nothing fancy: there is a kube-node-lease namespace and each node has its own Lease object. I can show you the enhancement proposal that summarises node heartbeats: node heartbeats are necessary for the correct functioning of a Kubernetes cluster, and leases make them significantly cheaper from a scalability and performance perspective; you can read about graceful shutdown, what the Lease is, the struct type and so on in the enhancement proposal if you want a deep dive. I think it became stable around 1.17; by the way, today we are running Kubernetes 1.24, so all the lab clusters you are using are on version 1.24. I hope I have answered all the questions, so let's get back. I was waiting for 100 viewers, but I think 25 or 30 people dropped off; that is very bad, you need to stay till the end, and keep sharing as well. The next thing you are learning is deployments, so let's come to deployments. We now know about pods, but when you deploy a single pod, what happens when that pod is deleted? It goes away. What happens if a node is deleted? Your application goes down.
So we need the true power of Kubernetes: if a pod goes down, Kubernetes should automatically recreate it; if a node goes down, Kubernetes should shift those pods to another available node; and if we say we need five copies of our pod, there should always be five copies. We want Kubernetes to manage that, because we do not want that overhead ourselves; that is why we are using Kubernetes in the first place. So yes, Kubernetes provides an object called Deployment. A Deployment is a Kubernetes object that internally creates a ReplicaSet; you will often hear about the ReplicaSet, handled by its controller, whose purpose is to maintain the replicas of a particular pod. Replica means copies, like going to a shop and asking for five copies of your photo. The ReplicaSet is itself a Kubernetes object with kind ReplicaSet, but you never create it directly; you always deal with the higher-level abstraction, the Deployment, because you can do much more with it: define strategies, define replicas, define many other things. If you look at this Deployment, it has kind Deployment, some labels and a name, and in the spec section, importantly, we define replicas, how many copies of my application I want to run; a selector, to select pods based on the app label; and all the pods that get created will carry the label app: nginx. Then there is the template of the pod, whose spec section defines the containers, the image, the name and the resources, the same requests and limits we just covered. This creates two pods, so kubectl get pods will show two pods. Now, you created this pod with the nginx image, but your application keeps changing; you keep releasing new versions, so what happens then? Whenever you update the image in your Deployment, a new pod gets created and the service stops sending traffic to the old pod. Think of it this way: you have three replicas and you change the image. Kubernetes creates one new pod, and only after making sure it is healthy (via the readiness and other probes we covered), it starts sending traffic to the new pod and breaks the link to one old pod. The old pod does not disappear immediately; it gets the termination grace period I told you about earlier, so if it is already serving a request it can complete it. Then another new pod is created, another old link is broken, and so on, one by one.
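A sketch of a Deployment along these lines, including the strategy fields discussed next; the values shown are defaults or examples rather than the workshop's exact file:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx
    labels:
      app: nginx
  spec:
    replicas: 3
    minReadySeconds: 5            # how long a new pod must be Ready before it counts as available
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%       # how many existing pods may be down during an update
        maxSurge: 25%             # how many extra pods may be created during an update
    selector:
      matchLabels:
        app: nginx                # must match the pod template labels below
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
            resources:
              requests:
                cpu: "100m"
                memory: "64Mi"
              limits:
                cpu: "250m"
                memory: "128Mi"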
This is the rolling update strategy: your application never goes down and you have changed the image as well. You can define the strategy yourself; by default the rolling update strategy is 25% maxUnavailable and 25% maxSurge. maxUnavailable is how many pods you allow to be unavailable during the update, so if there are three, only one may be unavailable and two should always be available; maxSurge is how many extra pods may be created on top of the desired replicas during the update. With the defaults it rolls pods one by one, and the default strategy type is RollingUpdate. minReadySeconds is how many seconds a newly spun-up pod must be ready before traffic is sent to it. These details are important from the deployment perspective and you should know them. That is what a Deployment is; now let's create one and see how it looks. Clear the screen and run kubectl create deploy nginx --image=nginx; our deployment is created. How to get it? kubectl get deploy. I told you a Deployment internally creates a ReplicaSet, which internally creates pods, so in the pod name you see the deployment name nginx, then the ReplicaSet's random hash, then the pod's own hash; that is how the name is built. And just like with a plain pod, you can do the usual things, such as viewing logs. We want to change the image, but first let's scale it and update the replicas: kubectl scale deploy nginx --replicas=5, and now I have five pods, some in ContainerCreating. By the way, if you are copy-pasting from the workshop file, make sure the deployment name matches what you created. So far we have created the deployment and scaled it with kubectl scale deploy nginx --replicas. Alternatively you can run kubectl edit deploy nginx and edit it on the fly: go into edit mode, go to replicas, change it to three, save, and you will see two pods terminating, leaving three replicas. So you can scale either with kubectl scale or with kubectl edit deploy. Now let's run the command from the workshop to set the image; I said we need to change the image, and I will also tell you why: kubectl set image on deployment nginx, changing the image to nginx:1.15.2. kubectl get pods shows pods terminating and creating, terminating and creating, and if I run kubectl describe on the deployment I can see more clearly what is happening: it scaled up the new ReplicaSet, scaled down the old one, scaled up, scaled down, and one by one the pods move over. kubectl get pods shows one still creating, which will finish soon.
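The imperative commands from this part, gathered in one place (the deployment name nginx is the one used in the demo):

  kubectl create deployment nginx --image=nginx
  kubectl get deploy
  kubectl get rs                     # the ReplicaSet created by the Deployment
  kubectl get pods                   # names look like nginx-<replicaset-hash>-<pod-hash>

  kubectl scale deploy nginx --replicas=5
  kubectl edit deploy nginx          # or change spec.replicas interactively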
We have most pods Running, and there is still one extra because one is still in ContainerCreating: that extra pod is the surge from the rolling update strategy. The status says replicas are three, two are updated, four in total, three available, one unavailable, and you can see the image has been changed to 1.15.2. Now let's change the image to a deliberately wrong one; if I get pods now, it says ContainerCreating and then nothing happens, just that one pod. Sometimes you give a wrong or bad image and it goes into an error state, but your traffic is not hampered: all the previous pods are still running and only the one new pod failed, so the rollout has simply stopped. What you can do is check kubectl rollout status for the nginx deployment, which says it is waiting for the deployment; that is how you see how many replicas have been updated. Another handy command is kubectl rollout history, to check the rollout history of the nginx deployment, and you can see the recorded revisions. This latest one is the bad one, so we want to roll back to revision two; that is where rollback comes in. Note that the workshop command uses a deployment named demo, but I created mine with the name nginx, so change the name accordingly. I have rolled back, the pod that was stuck in ContainerCreating is gone, and I am back on 1.15.2. So that is what a Deployment is and how you create one. For logs you still go through the pods: kubectl get pods and then view the logs of the individual containers. We have done edit, get, and now delete: kubectl delete deploy nginx (you can add --force again), after which kubectl get deploy shows nothing, kubectl get rs shows nothing, and kubectl get pods shows only the earlier standalone pods. I hope you now understand deployments.
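The update and rollback commands from this demo in one place (the deployment and its container are both named nginx here):

  kubectl set image deployment nginx nginx=nginx:1.15.2   # <container-name>=<new image>
  kubectl rollout status deploy nginx
  kubectl rollout history deploy nginx
  kubectl rollout undo deploy nginx                        # back to the previous revision
  kubectl rollout undo deploy nginx --to-revision=2        # or to a specific revision
  kubectl delete deploy nginx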
We keep building on top of what we know: we started with pods, built up to multi-container and init containers, did some complex operations there, and now deployments. Let's go back to the notes and ask what happens when you want to run stateful applications like databases. Say you are running a database and, since you now know the concept of a Deployment, you created the database with one. You have seen that a Deployment creates pods with a random hash in the name, and whenever a pod dies the new one gets a new hash. It is backed by a service, and that service sends read requests to any of the replicas, but, because it is a Deployment, it also sends write requests to any of the replicas. That is a big problem: if I send one write to this replica and the next write to that replica, there will be data inconsistency when we do read operations, because a Deployment does not give you sticky sessions. Even if we point writes at a specific pod name or pod IP, when that pod dies it is recreated with a different name, so the mapping is lost. For these scenarios there is another Kubernetes object, just as the Deployment exists, called a StatefulSet. We use it whenever we want to deploy a stateful application, meaning an application where the data and the state must be persisted: you enter your name, email and so on, and that is persisted in the database, and the database is deployed as a StatefulSet. It is not a very straightforward process; it is not as easy as it sounds, and there are things you still need to take care of yourself, like replication, which Kubernetes will not do for you. But the StatefulSet gives you capabilities that help you run stateful workloads. It creates pods with predictable names: if I create a StatefulSet named web, the pods are created as web-0 and web-1, and any time web-1 dies it is always recreated as web-1, so I always know the names. Now, if I want to write data, I can do the write operation against only one particular pod, do read operations from both, and then have my own replication mechanism, which I need to implement because Kubernetes will not handle it. For a StatefulSet you cannot use just any service; you need a headless service. We will cover headless services in detail later, but with a headless service your pod or your application can directly use the DNS names of the StatefulSet pods for writing, so there are no data inconsistencies, and then you run your replication mechanism between the other replicas. Replicas are also interesting in a StatefulSet: when you scale up, the next replica is created in sequential order, so the next one is web-2, your replication mechanism copies the data over, and it too becomes ready to serve read traffic. So you can build a highly available setup using that replication mechanism. There are operators for this: Percona, for example, provides a very good operator that gives you three StatefulSet replicas bound to three dynamically provisioned volumes, with the replication and the read/write routing handled automatically, an HA-style cluster; people are making this easier, even though it is not straightforward. On the deletion side of things, when you delete a Deployment the pods can be deleted in any order, but when you delete a StatefulSet the pods are deleted in reverse order: web-2 first, then web-1, then web-0.
That is what a StatefulSet is: it is for your stateful applications and databases, it gives you stickiness through predictable pod names, and you use a headless service with it, which gives you stable DNS. Let's see if we have a demo for that; yes we do, and it is interesting because I will have you install the local-path provisioner first. I have only six minutes left on my lab session, so I will restart my terminal because I do not want to get interrupted; please check the time left on your Killercoda environment too. First you apply the local-path storage manifest directly. It relates to two Kubernetes objects around volumes, which you can see with kubectl get pv and kubectl get pvc. If you do not have a dynamic storage provisioner, this installs one: whenever you create a PVC, it is the responsibility of the provisioner installed on the cluster to create a PersistentVolume for it. A PersistentVolume is basically a disk: on a local system or a VM it is a disk partition, which is local storage; the alternative is remote storage living outside the cluster. This provisioner keeps everything inside the cluster, which is fine for testing, but you cannot use it for production; for production you use external provisioners backed by cloud volumes or dynamic provisioners like NFS, GlusterFS and so on. It also automatically creates a storage class, so kubectl get sc shows one. Now copy the StatefulSet manifest. Here we are creating a StatefulSet and specifying a serviceName, which is nginx; we have to specify the service name, and we are creating a service with clusterIP set to None, which is exactly how you create a headless service; do not worry, I will cover this in detail in the services section. The spec section has replicas, I want two, same idea as a Deployment; it is almost a copy of a Deployment: I have the image, the container port, and a volume mount. Now here is the interesting part: volumeClaimTemplates. This means I want a PVC; defining it here creates a PVC per pod, and since we have just installed a provisioner, the provisioner reads the claim template, automatically creates a PV, binds the PVC and PV together, and then the pod can use it: the pod mounts the PersistentVolumeClaim at the given path. The storage class and access mode are already filled in, because we have the local-path storage class. kubectl apply -f ss.yaml, kubectl get pods, kubectl get pv: right now no PV exists, then the provisioner kicks in and starts creating the PVs; kubectl get pods shows web-0 being created while web-1 is Pending because its PV has not been created yet; then the second PV and the second PVC are created, both show status Bound, and the pods move to ContainerCreating. Perfect, this is how your StatefulSet works.
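A sketch of that headless Service plus StatefulSet pair; the names and sizes are approximate and close to the standard example, and the workshop file may differ in details:

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx
  spec:
    clusterIP: None              # headless: gives each pod a stable DNS name instead of a virtual IP
    selector:
      app: nginx
    ports:
      - port: 80
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: web
  spec:
    serviceName: nginx           # must reference the headless service above
    replicas: 2
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: nginx
            ports:
              - containerPort: 80
            volumeMounts:
              - name: www
                mountPath: /usr/share/nginx/html
    volumeClaimTemplates:        # one PVC per pod; the provisioner creates and binds a PV for each
      - metadata:
          name: www
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: local-path
          resources:
            requests:
              storage: 1Gi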
you know how a PVC flows: first the PV gets created, then the PVC gets bound; that's the complete flow. Now let's exec into one of the pods and write into the volume: kubectl exec -it web-1 -- sh (not web-2, we haven't scaled that far) and echo "hello from Saiyam" into the index file. If I curl localhost from inside the pod, I see "hello from Saiyam". Now let me curl using the DNS name instead. What was our service name? nginx, in the default namespace, so I need to pass it as web-0.nginx.default.svc.cluster.local, and there you go. What I did here is use the pod name in the DNS name; I told you the names are predictable, so I can use this DNS name to write to a specific pod and then run my replication mechanism. The structure is pod name (web-0), then the service name, then the namespace, then svc.cluster.local; that is how the DNS gets resolved. You can also cat /etc/resolv.conf inside the pod and see that default.svc.cluster.local is already in the search domains, so just web-0.nginx should work too, because the rest is resolved automatically. Yes, it works. I hope the StatefulSet demo was interesting and you learned something. Now let's come to the next section, an important one; stay here, because this next portion is everyone's favourite: networking. I have prepared some demos to show exactly how this works, and you will find it interesting because you will get to know what happens when you run a pod. Someone was asking about CNI; this is where the CNI comes into the picture. We have covered good stuff up to StatefulSets, so let's start networking. Open up your minds and freshen up, because this is interesting but it does involve picking your brain a bit, so relax, calm down and follow along. So, what happens when you run a pod? kubectl get nodes shows we have two nodes. We have a pod YAML, and when we apply it, it creates a pod, very simple. This particular pod is a multi-container pod: it has two containers, one named busybox and one named nginx (a sketch of such a pod is below). After the pod is created, the CNI, the Container Network Interface, attaches an IP address to the pod and attaches it to the network. How does it do that? Picture the node: every node has its eth0 interface, the entry point through which your traffic flows, and it has a root network namespace. In the previous session Chad explained namespaces; Linux containers are all about combining different namespaces, and you all agree with that. The same magic happens here: a separate network namespace is created for the pod. Inside this pod namespace you have the busybox container, the nginx container, and also a pause container.
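As a reference, here is a minimal sketch of a two-container pod of the kind used in this networking demo; the pod name shared-namespace matches what the demo uses later, but the sleep command and image tags are illustrative assumptions.

# Two containers in one pod: they share the pod's network namespace and IP.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-namespace
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]   # keep the container alive so we can exec into it
  - name: nginx
    image: nginx
EOF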
Two containers is what I created, so why is there a pause container, and what actually is the pause container? We will come to that. First, about eth0: every pod gets its own IP address, and it is the CNI that assigned it. Now, when you exec into that pod (sorry about that, I think the session disconnected for a moment; it doesn't take long to recover) and run ip address, you will find something like eth0@if9. Obviously the number will differ on your machine; this is as per the screenshot. That is the interface attached to the pod, and it is attached to the root network namespace using a veth pair, virtual ethernet pairs. You can also see the routes, so you can see traffic is routed through the gateway. Now let's do it side by side so we understand it well. Go to the networking section of the workshop, copy the pod manifest, kubectl apply -f pod.yaml, and kubectl get pods; the shared-namespace pod is being created. To list network namespaces, the command is ip netns list. We also have another node, so ssh into it: open another tab and type ssh node01, then run ip netns list there; you'll see a longer list because more containers run on that node. The veth pairs that get created depend on the CNI; usually they are veth interfaces, but with a different CNI they can look different, and I'll show you how. Now, the pause container. The pause container is basically a sleep container, and its only purpose is to hold the pod's network namespace: the CRI (containerd) creates the network namespace, and the pause container holds it. How do you see that? Run lsns and grep for nginx: you can see namespaces for the two nginx processes, with a mount namespace and a PID namespace, but the listing does not show a net namespace for them. To see that, run lsns -p with the process ID from the first entry, and now it shows everything: cgroup, user, net, uts, ipc, mnt, pid. The net, uts and ipc namespaces are held by the pause container; that is how you can verify that the pause container holds the network namespace. Clear the screen and come back. Now you know what the pause container does and that every pod gets an IP. Next, run ls -lt on the directory where the network namespaces usually live, /var/run/netns; I am running this just to see which namespace was created most recently, so I know which one belongs to the shared-namespace pod we just created (the commands are collected in the sketch below).
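Here is a rough sketch of the namespace-inspection commands used above, run on the node; the grep pattern and the idea of picking the first PID are assumptions about what your own output will look like, so adjust them to your environment.

# List the network namespaces managed on this node (run as root on the node).
ip netns list

# Show namespaces of the nginx processes; the net namespace does not appear here
# because it is owned by the pod's pause process, not by nginx itself.
lsns | grep nginx

# Inspect every namespace of one process (replace <pid> with a PID from above);
# net, uts and ipc show up as held by the pod's pause container.
lsns -p <pid>

# Find the most recently created network namespace (our new pod's).
ls -lt /var/run/netns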
Looking at /var/run/netns, the most recent entry (in my case dated 18th of July, with the latest timestamp) is the namespace created for our pod. Now I can show you a lot of good things. Run ip netns exec on that namespace followed by ip link. A quick note: kubectl get pods won't work on the worker node, it only works where the kubeconfig is, so run kubectl on the control plane. Next, exec into the shared-namespace pod: kubectl exec -it shared-namespace -- bash, or sh if bash isn't there, and run ip link. It says eth0@if12, and the same interface index shows up from the namespace side, so we now know this namespace was created for this pod. One more thing I wanted to show: take that number 12, it is important. On the node (remember, I am running this on the node) do ip link and grep for 12. You can see it's a Calico interface. I told you there are veth pairs from the pod's eth0 to an interface in the node's root network namespace, but different CNIs do this differently, with different protocols, different networking bridges and so on; here the pairing happens on the Calico side. To maintain communication with the root namespace, that node-side interface is linked to the pod's network namespace, and you can see the link-netns reference pointing at it. For every pod you create there will be such an ip link entry, a separate network namespace, and a pause container holding that namespace; that is how it works internally. I hope you got some gist of how networking works for the containers. Just to recap: you have a node with a root namespace; the pod gets created; containerd creates the network namespace and the pause container holds it; the CNI is responsible for attaching the pod's eth0 to the veth pair; and kube-proxy is responsible for creating the iptables rules. The naming of the pair can go either way depending on the CNI, and if time permits I will show you a different style of veth pair as well; I have an example for it. Now let's move on to intra-node pod communication, meaning how pods communicate inside a single node. Let's zoom in: this is the node, this is pod A and this is pod B, and pod A has to send traffic to pod B. The packet first reaches pod A's eth0, then goes across the veth pair (the veth acts like a tunnel) into the root namespace. The bridge then resolves the destination address using its ARP table; the ARP resolution table holds the mapping, so the bridge knows where to forward, and it sends the packet to pod B's veth interface,
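Here is a rough sketch of the commands used to match the pod's interface to its peer on the node; the container name, the @if12 index and the Calico-style output are assumptions based on this particular demo, so substitute whatever your own ip link output shows.

# Inside the pod: the interface name ends with @ifN, where N is the ifindex of
# the peer interface in the node's root namespace.
kubectl exec -it shared-namespace -c busybox -- ip link show eth0

# Same view via the namespace directly (replace <ns> with the /var/run/netns entry found above).
ip netns exec <ns> ip link

# On the node: look up that ifindex to find the peer end of the pair; with Calico
# it shows up as a cali* interface carrying a link-netns reference to the pod's namespace.
ip link | grep -A1 "^12:"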
and from there it reaches pod B; that's how the intra-node flow works. Now let's see how node-to-node communication works. You have node A and node B, and traffic has to go from a pod on node A to a pod on node B. ARP cannot be used here, because ARP only helps when the destination is on the same node. To check whether the destination is local, the kernel does a bitwise AND of the destination address with the subnet mask; since the destination is not on this node, it forwards the packet to the default gateway (you saw the default gateway in the route output earlier). From the default gateway the packet travels to the other node's eth0, and after that the destination node again uses its own ARP table to deliver the traffic to the right pod. Services mostly use iptables and netfilter under the hood, and CoreDNS handles service discovery and related things; we'll get to that. So that was networking: pod-to-pod communication on a node, node-to-node communication, and how to see the pause container; I hope you got a good gist of the pause container. To be honest this is not a networking video, so I'll keep the discussion scoped to the Kubernetes ecosystem rather than going into core networking concepts, because that rabbit hole is deep. In the workshop notes I have written all the steps: check the network namespaces (this lists all of them), exec into a namespace with ip netns exec and run ip link, kubectl exec into the shared-namespace pod and run ip a (lowercase ip), note the number after the @if in the interface name, and find the corresponding node-side link with ip link and grep for that number (in our case 12) to confirm it is the same network namespace; these are the veth pairs, which differ per CNI. To check pause containers, lsns piped through grep nginx and then lsns -p with the PID from above will show the pause container holding the network namespace. I hope I have explained clearly how to perform this lab as well. Cool, now Services. Easy stuff, right? It looks easy, but there are a lot of connections here, so let's simplify. Services in Kubernetes are a way to route traffic: in the end you want your end user to be able to access your application, so the end goal is that a user can reach it. Kubernetes has an object called a Service that routes traffic, external or internal, to pods. Why Services? Because pods keep coming and going, so pod IPs keep changing; there is no way to pin our application or a DNS name to a particular pod IP. We need a constant virtual IP, and the Service IP remains constant,
and it load balances when you have multiple replicas. If two replicas are on one node and one replica is on another, and the request lands on the first node, the Service machinery is intelligent enough to spread requests across all of them, so load balancing comes along with Services. There are four types of Services. The first is ClusterIP, the default: if you don't specify a type, the Service you create is ClusterIP. In the manifest, apiVersion is v1, kind is Service, metadata carries the labels and the name, and the spec carries the ports: which port to open, the protocol (TCP or UDP), and the targetPort of the container. The targetPort matters because if you have multiple containers or ports, you have to say which container port the traffic should reach; if I want port 80 on the container, that's the targetPort. Let's try it: kubectl run nginx --image=nginx (our favourite), then expose it with kubectl expose pod nginx --port 80, then kubectl get svc; you can see the nginx Service got a ClusterIP, so by default a ClusterIP Service was created. With kubectl get pods -o wide you see the pod IPs, and those IPs are listed in the endpoints: kubectl get endpoints shows another object you will almost never manage yourself, because the endpoint controller owns it; whenever a pod goes down and a new one comes up with a different IP, the endpoints are updated automatically, and the Service keeps sending traffic to the latest ones. The Service selects pods by label: the selector run=nginx matches the pods carrying that label, and traffic is sent to them; that's where the selector comes in. About multiple ports: when a Service handles several ports you need to name them; with a single port the name is optional, but with multiple ports each one needs an additional name field. The next type is NodePort. A ClusterIP Service does not give you access from the outside world; to reach a ClusterIP from outside you need an Ingress and an ingress controller, so the ideal path is to install an ingress controller and create an Ingress that maps to this Service. Alternatively you can create a LoadBalancer Service, which is cloud-specific and gives you an external IP (right now the EXTERNAL-IP column shows none). Or you can use NodePort, which is not the recommended way: set --type=NodePort (or edit the type on the Service) and it becomes accessible on nodeIP:nodePort.
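A condensed sketch of that flow, plus a roughly equivalent declarative Service, is below; the port name http and the label run=nginx are assumptions matching what kubectl run sets by default, so use one path or the other rather than both at once.

# Imperative path from the demo: run a pod and expose it as a ClusterIP Service.
kubectl run nginx --image=nginx
kubectl expose pod nginx --port 80
kubectl get svc nginx
kubectl get endpoints nginx

# Roughly equivalent declarative Service: selects pods labelled run=nginx and
# forwards Service port 80 to the container's port 80.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  selector:
    run: nginx
  ports:
  - name: http          # naming ports becomes mandatory once there is more than one
    protocol: TCP
    port: 80
    targetPort: 80
EOF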
So that's NodePort, and the last type is the headless Service. It will become clearer in the demo, but a headless Service, as I told you, is used with StatefulSets: you set clusterIP to None, you can talk to a pod directly using its DNS name (I showed you how), and you avoid having requests load balanced behind a single virtual IP. That was the curl command we ran earlier: curl web-0.nginx.default.svc.cluster.local, that is pod name, service name, namespace, then svc.cluster.local. Now let's do the Services demo and make this clearer. Are you ready? First, clean up: kubectl delete statefulset --all --force and kubectl delete pods --all --force (make sure you're running this on the control plane); kubectl get pods should report no pods in the default namespace, so we start from a clean state. Now run two pods, one named nginx and another named nginx2, and then manually label the second pod with run=nginx; we pass --overwrite because that label key already exists and we want to override it. I'm doing it mostly for fun, so that the Service picks up both pods. Next, expose, but first do a dry run of the expose command: in the generated YAML you can see kind Service and the selector run=nginx, and kubectl get pods --show-labels confirms both pods carry run=nginx. Now create it; the imperative way is kubectl expose pod nginx --port 80. A Service with that name already exists from before, so delete the old nginx Service and create our own. kubectl get ep (endpoints) together with kubectl get pods -o wide shows both pod IPs (192.168.x.x addresses) listed in the endpoints, so our Service is wired to both pods. If I curl the Service's ClusterIP I get "Welcome to nginx!", but I cannot reach it from the outside world. So let's change the Service to NodePort: kubectl edit svc nginx, go down in the spec, and change the type to NodePort; kubectl get svc now shows a node port. In Killercoda you can go to Traffic/Ports, enter that port, and access it: "Welcome to nginx!" from the outside world, you have done it. And that's pretty much it about Services, to be honest; a condensed sketch of this demo follows.
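Here is a sketch of the same NodePort flow, using kubectl patch instead of the interactive kubectl edit from the demo; the placeholders in angle brackets are values you read from your own cluster.

# Two pods, the second relabelled so the Service selector matches both.
kubectl run nginx --image=nginx
kubectl run nginx2 --image=nginx
kubectl label pod nginx2 run=nginx --overwrite

# Expose on port 80 (ClusterIP by default), check endpoints, test from inside the cluster.
kubectl expose pod nginx --port 80
kubectl get endpoints nginx
curl <cluster-ip>

# Switch the Service type to NodePort to reach it from outside the cluster.
kubectl patch svc nginx -p '{"spec":{"type":"NodePort"}}'
kubectl get svc nginx        # the assigned node port falls in the 30000-32767 range
curl <node-ip>:<node-port>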
Now let's move on and cover authentication, authorization and admission. I told you that whenever a request comes in, it first goes through the authentication phase, which is the credential check. There are two types of users: regular users, which are managed by an external user management system and not by Kubernetes, and service accounts, which Kubernetes itself manages. Whenever you create a ServiceAccount, it creates a Secret containing a token, and that token can be used to authenticate with the kube-apiserver; listen again: create a ServiceAccount, you get a secret token, and the token can be used for authentication. (From version 1.24 onwards you have to create the token yourself.) Before the request moves from authentication to authorization, some information is attached to it: the username, UID, groups and extra fields are all added in the authentication phase, and there is a set of authentication plugins you can read more about; I have just listed them here. The authorization phase then checks whether the user is allowed to perform the requested operation, and admission is a policy check done by admission controllers, such as the ImagePolicyWebhook. The simple example I always give: I want images to come only from a particular registry; if an image comes from any other registry, the admission controller refuses to admit the request and rejects it. There are also AlwaysAllow and AlwaysDeny admission controllers. Cool, let's do the demo, and instead of only the simple demo I'll also tell you a bit about RBAC, even though I haven't written it in my notes. RBAC is role-based access control. You have a ServiceAccount, and then you have ClusterRole and ClusterRoleBinding, which are cluster-wide resources, and Role and RoleBinding, which are namespaced resources. When you create a ClusterRole you can bind it with a binding; when you create a Role you can only bind it with a RoleBinding. In the Role you define verbs such as list and get, and the objects they apply to, such as pods and deployments. In the RoleBinding you reference the ServiceAccount and the Role, binding the two together. That's the basic idea, but let's try authentication first; that's a very interesting demo and I think you'll like it. First, run kubectl config view: it shows your cluster name, the API server address and so on. Export the cluster name (here it is kubernetes) and export the API server address; you can pull that straight from the config with kubectl config view and a JSONPath expression for the cluster's server field. Clear the screen and curl the API server: the version endpoint is open and responds fine, but what about deployments? If I try to list deployments it says Forbidden: the user system:anonymous cannot get deployments. We can get past this by passing the client certificate and key, so we need to extract those from the kubeconfig. Open ~/.kube/config: the client certificate data and client key data are there, base64-encoded, so you have to decode them; a sketch of the whole flow follows.
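Here is a rough sketch of that flow; pulling the fields with --raw and JSONPath is an alternative to copying them by hand as done in the demo, and the file names client.crt and client.key are assumptions.

# Point at the API server address from the kubeconfig.
APISERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}')

# Open endpoint: works without credentials.
curl -k $APISERVER/version

# Authenticated endpoint: anonymous access is forbidden.
curl -k $APISERVER/apis/apps/v1/deployments

# Extract and decode the client certificate and key from the kubeconfig,
# then repeat the request with them.
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key
curl -k --cert client.crt --key client.key $APISERVER/apis/apps/v1/deployments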
To decode, echo the certificate data and pipe it through base64 -d (the data is base64-encoded, so we are decoding it) and save the result into a file called client. Do the same for the key: echo the value in double quotes, pipe it through base64 -d, and save the decoded private key with vi into a file called key. Now run the curl command again, this time with the client certificate and key, and we are able to list all the deployments. Fancy. So the first way was the open endpoint (with the CA cert), and the second type of authentication is with the user's client certificate and key. The third type uses a service account token, which is what I was telling you about: in Kubernetes 1.24 you have to create the service account token explicitly, so let's create that token and use it as a bearer token; you can see we can authenticate with it as well, and from inside a pod you can use the mounted token to make the same kind of requests. You can also run kubectl proxy and then curl localhost on the proxy port (8001 by default) and reach the same API without handling credentials yourself. So that covers authentication. For authorization, you can create a ServiceAccount (kubectl create serviceaccount sam, then kubectl get serviceaccount) and then a ClusterRole. Let me give a quick example from the help output: a pod reader. Create a ClusterRole named pod-reader that allows only get, watch and list operations on pods. Then create a ClusterRoleBinding, let's name it demo, with the ClusterRole we just defined, pod-reader, and we also need to pass the ServiceAccount: the --serviceaccount flag takes namespace:name, so default:sam. Now test it with kubectl auth can-i, using the --as flag with the service account identity: kubectl auth can-i list pods as system:serviceaccount:default:sam says yes; if we try create, it says no. You can check all the verbs this way, and you can read more about the auth can-i command; the steps are collected in the sketch below.
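Collected together, the RBAC commands from this demo look roughly like this; the names sam, pod-reader and demo come from the walkthrough, and the final token command reflects the 1.24 behaviour mentioned above.

# ServiceAccount plus a ClusterRole that only allows read operations on pods.
kubectl create serviceaccount sam
kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods

# Bind the ClusterRole to the ServiceAccount (cluster-wide).
kubectl create clusterrolebinding demo \
  --clusterrole=pod-reader \
  --serviceaccount=default:sam

# Check what the ServiceAccount can and cannot do.
kubectl auth can-i list pods --as=system:serviceaccount:default:sam     # yes
kubectl auth can-i create pods --as=system:serviceaccount:default:sam   # no

# Since 1.24, request a token for the ServiceAccount explicitly.
kubectl create token sam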
This was the RBAC portion. RBAC is honestly not really a Kubernetes 101 topic, which is why I only covered an overview; someone asked for a gist of it, and you can see I don't even have it in my notes. If you want to go deeper I can make a separate video, but it's out of scope here, because RBAC involves a lot of users, groups and verbs you can explore: you can inspect the resources and the service account permissions as well, and there are commands for all of it. That's how you create the Role and the RoleBinding and specify the API resources; the docs are actually good for this. Okay, the next topic is also very important. When you write your Node.js application or your Python application, you provide property files, you need environment variables, you need data that can be swapped out, you need usernames and passwords. How do you supply those in Kubernetes terms? With two more Kubernetes objects: ConfigMaps and Secrets. ConfigMaps hold the application properties needed by the application, very simple, and Secrets hold the sensitive data the application needs, like passwords. Example: apiVersion v1, kind ConfigMap (you can shuffle the field order), data is a set of key-value pairs, and metadata has the name test. Inside the pod is where it becomes interesting: in the spec you have the containers as usual (image, command), and then env. In env you give the name, which is what the environment variable will be called inside the container, and its value, which comes from configMapKeyRef: the name of the ConfigMap and the key. So the variable look inside the container gets the value demo; that's how you use a ConfigMap for environment variables. The second important point is that a ConfigMap can be mounted as a volume, so you can mount the whole ConfigMap. Again an example: a ConfigMap whose data holds two files, file-a with "hello saiyam learn kubernetes" and file-b with "test2". In the pod I create a volume of type configMap, give it the name of the ConfigMap, and in the container I add a volumeMount with the volume name demo and a mount path. Now if I kubectl exec into the pod, cd into /home/config and run ls, I will see the two files, file-a and file-b, with exactly those contents. A combined sketch of both patterns follows, and then we'll see it in action in the last lab.
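The sketch below combines the two patterns into one ConfigMap and one pod for brevity; the lab itself uses separate ConfigMaps, so the names, the busybox command and the uppercase variable name are illustrative assumptions.

# ConfigMap with a plain key/value pair plus two file-like keys.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  li: demo
  file-a: |
    hello saiyam learn kubernetes
  file-b: |
    test2
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: LOOK                 # env var name inside the container
      valueFrom:
        configMapKeyRef:
          name: test             # ConfigMap name
          key: li                # key whose value becomes LOOK's value
    volumeMounts:
    - name: demo
      mountPath: /home/config    # file-a and file-b appear here as files
  volumes:
  - name: demo
    configMap:
      name: test
EOF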
So let's do the last lab, ConfigMaps and Secrets. First create a ConfigMap; you can create one from a literal, from a string or from a file, so there are multiple ways. Clear the screen, create the ConfigMap, then create the pod, and let me tell you a fun fact while we're at it: if you type cat <<EOF | kubectl apply -f -, press enter, paste the YAML, and finish with EOF, the object is created straight from the terminal; just a tip if you want to do it for fun. So the pod is created. kubectl get pods shows the busybox pod. What do we have to check in this example? Right, the environment variable: kubectl logs busybox should show look=demo, because we created the literal li=demo, the env name is look, and the value comes from the ConfigMap test with the key li, so the value should be demo, which is correct: look is demo. Demo one is complete. Example two: this time we create a ConfigMap declaratively with kubectl apply -f, and demo-vol-1 is created, just as I showed you. Then create the pod that mounts it. If you hit an error on the apiVersion line when trying demo two, just make sure to add a space after the colon. Now exec into the pod and look at the files: the mount path is /home/config, so cd /home/config and ls; cat file-a shows "hello learn kubernetes keep it up you're learning awesome kubernetes" and cat file-b shows test2. Cool. Let's try the last one, Secrets. Creating a secret is very simple: kubectl create secret, and we want a generic secret; there are different types of secrets, like docker-registry and TLS, that you can use, and this one is from a literal, li=demo. (If kubectl is not found, you are still inside the container, so make sure you exit first.) The secret is created. kubectl get secret, and kubectl get secret -o yaml shows the data, which is base64-encoded; I can check the value the same way, echo the value in double quotes and pipe it through base64 -d, and it is demo, which is what we gave. Then create a pod that uses the secret, a busybox pod again (kubectl delete pods --all first to keep things clean). The secret is available inside the pod: printenv, or kubectl get pods and kubectl logs busybox, shows the value surfaced from the secret. Cool.
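A condensed sketch of the secret flow; the names db-secret and secret-demo are illustrative assumptions rather than the names used in the lab files.

# Create a generic Secret from a literal, then inspect it (values are base64-encoded).
kubectl create secret generic db-secret --from-literal=li=demo
kubectl get secret db-secret -o yaml
kubectl get secret db-secret -o jsonpath='{.data.li}' | base64 -d   # prints "demo"

# Consume it from a pod as an environment variable.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "printenv LI && sleep 3600"]
    env:
    - name: LI
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: li
EOF
kubectl logs secret-demo     # shows the decoded value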
So we have completed all the parts of the Kubernetes 101 workshop. Let's go back and check whether we covered everything: Deployments, StatefulSets, networking, then authentication, authorization and admission, and the very basics of RBAC. RBAC, to be honest, comes down to this: create a ServiceAccount, create a ClusterRole or Role, and then create a ClusterRoleBinding or RoleBinding, which is what I did. Let me show you one more thing, because the dry-run YAML makes it clearer. The apiVersion for a ClusterRole is rbac.authorization.k8s.io/v1 and the kind is ClusterRole. You define rules: which apiGroups it is valid for (here the core API group), the resources it applies to, and the verbs, which are the actions, like get pods, watch pods, delete, create; all the verbs are there. Then there are RoleBinding and ClusterRoleBinding: a RoleBinding is namespace-scoped and a ClusterRoleBinding is cluster-scoped. In a binding you define the subjects, a user or a ServiceAccount, and then the roleRef, so the binding ties the subject to the role. The docs have examples with all sorts of subjects; the one relevant here has kind ServiceAccount with the account name in the subjects. That's how you define the Role and the RoleBinding, and I hope it is clearer now. We have also covered ConfigMaps and Secrets, so the last thing is setup; I can do that in five minutes. For local Kubernetes you can use Docker Desktop (go to the site and download it) or Rancher Desktop, which I already have installed. Let me share my screen: this is my Rancher Desktop starting up its containers; I have a complete video on it, and it starts a local Kubernetes cluster on your machine. Then there is minikube, and remember: minikube is required for your next workshop, so your task is to install minikube on your local system. Choose your operating system, architecture and release, run the install command it gives you, then minikube start and minikube kubectl and so on; minikube is also for local development. For something closer to production, say one control plane node and a few workers, there is kubeadm. Go to the Killercoda playgrounds, start an Ubuntu playground, and run the command I pasted; while it runs I'll explain step by step what it does. It's pretty simple: first we add the GPG key for the Kubernetes packages, then we install the components, vim, wget, kubelet 1.24, kubeadm and kubectl. Next we turn swap off, load the overlay and br_netfilter modules, and set the ipv4 forwarding sysctl (a condensed sketch of these host-prep steps follows). You can also refer to the Kubesimplify blog on setting up Kubernetes with containerd, written by Saloni, which walks through the same steps: step one runs on all machines, installing kubeadm, kubelet and kubectl, and disabling swap, because swap has to be disabled for kubelet to work properly.
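Roughly, the host-prep portion looks like the sketch below, run as root on each machine. The exact apt repository setup and package versions come from the linked blog and the playground script, so treat the install line as a placeholder rather than the literal command used in the demo.

# Disable swap (kubelet expects swap to be off unless the swap feature gate is used).
swapoff -a

# Load the kernel modules needed for container networking.
modprobe overlay
modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# Install containerd plus kubelet/kubeadm/kubectl 1.24 from the Kubernetes apt
# repository (see the referenced blog for the exact repo and GPG key setup).
apt-get update && apt-get install -y containerd kubelet=1.24.* kubeadm=1.24.* kubectl=1.24.*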
There is a feature gate that lets kubelet run with swap enabled, which was alpha at the time, but you don't need to worry about it here; just keep swap disabled. Then load the br_netfilter module and let iptables see bridged traffic, which is what those settings are for. Next is setting up containerd: you generate the containerd configuration, set the required kernel parameters, apply the sysctl parameters without a reboot, and install and enable containerd; that's the download, install and restart/enable you see in the script. From 1.22 onwards, if you do not set the cgroup driver in the kubelet configuration, it defaults to systemd, so you don't need to do anything there. Then you pull the images with kubeadm config images pull; now we are interacting with kubeadm, the tool we use to bootstrap the cluster, and we pass the CRI socket pointing at the containerd runtime and the Kubernetes version, 1.24. After that comes kubeadm init, the phase where you actually bootstrap your cluster. The pod network CIDR depends on the CNI (yes, I am using Flannel), and the control plane endpoint is the public IP of the instance; it can be private, but if you want to use the kubeconfig from outside, it should be public. So you pass the pod network CIDR, the Kubernetes version, the control plane endpoint, --ignore-preflight-errors for the memory check (the Killercoda VM does not meet the minimum memory requirement, and without that flag kubeadm would fail there), and the CRI socket, and it installs. That is the complete installation; a condensed sketch of the bootstrap commands follows. By now it has finished, so kubectl get nodes shows the one-node Kubernetes cluster you just spun up on Ubuntu, and you can run your first command, kubectl run nginx --image=nginx. Congratulations, you have done everything.
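And the bootstrap itself, roughly; the CIDR matches Flannel's default, while the endpoint placeholder, the exact 1.24 patch version and the ignored preflight check are assumptions based on what the script does, so adjust them to your environment.

# Pre-pull the control-plane images through containerd.
kubeadm config images pull \
  --cri-socket unix:///run/containerd/containerd.sock \
  --kubernetes-version 1.24.0

# Bootstrap the cluster; 10.244.0.0/16 is Flannel's default pod CIDR, and the
# memory preflight check is ignored because the playground VM is small.
kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --kubernetes-version 1.24.0 \
  --control-plane-endpoint=<public-ip> \
  --cri-socket unix:///run/containerd/containerd.sock \
  --ignore-preflight-errors=Mem

# Verify and run your first pod.
kubectl get nodes
kubectl run nginx --image=nginx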
I'll stop sharing the screen now. It's great to see so many people still here; it's been almost four hours, and we have covered a lot of Kubernetes. My aim was three hours, but we did it in four, and honestly we covered more than a 101, with many topics in depth and many examples. The video will be available afterwards so you can go through the concepts again, and the workshop link is there so you can keep practising. Any doubts you have, join the workshops channel on Discord and ask us anything; the Kubesimplify ambassadors will be happy to help. Make sure to check out the rest of the workshops: there is one every week. The next one is GitOps on Kubernetes, so make sure you have practised Kubernetes well; in that workshop you will learn GitOps principles with Argo CD, CI/CD and so on, and you will also be able to get the certification for it free of cost, which you can put on your LinkedIn and share with everyone. That workshop will be super awesome, and after it there is a workshop on Kubernetes troubleshooting; I am telling you again and again, the prerequisite for that one is setting up minikube, so make sure you have minikube ready to launch clusters when it happens. Stay tuned, because awesome things are coming from the Kubesimplify ambassadors, including a video series. Thank you so much for tuning in today. Share your reviews on Twitter and make sure to tag me so I feel energized to do more of this; I have never taught from such a basic level, so I hope this was Kubernetes from scratch for you all. These notes are important, so do not worry about them: I will put them in the same repository as a PDF in the coming days so you can download them. Keep replaying the video and keep sharing it with your network so that everybody learns Kubernetes the easy way. Follow Kubesimplify on Twitter, follow me on Twitter, and thank you for all the enthusiasm here and for staying till the end; that's very important, and you who stayed are the best (not that the others aren't, they probably just had other things to do). I hope you learned a lot today and that it accelerated your DevOps learning curve. Thank you so much for watching, and see you in the next one.
Info
Channel: Kubesimplify
Views: 829,152
Keywords: Kubernetes, kubernetes training, kubernetes bootcamp, kubernetes workshop, kubernetes hands-on, kubernetes handson, kubernetes handson workshop, live kubernetes, containers deep dive, kubernetes deep dive
Id: PN3VqbZqmD8
Length: 236min 2sec (14162 seconds)
Published: Mon Jul 18 2022