Kubernetes Tutorial for Beginners | Kubernetes Tutorial | Intellipaat

Captions
[Music] Hello everyone, welcome to this session on Kubernetes. Today we are going to discuss why we need Kubernetes in the first place, then move on to what Kubernetes is. After that I'll show you some of the prominent features of Kubernetes that make it stand out from the crowd of container orchestration tools, and then we'll discuss how Kubernetes actually works. I'll then walk you through the architecture of Kubernetes, which will give us more clarity on how it works, and finally I'll show you how you can deploy an application on Kubernetes infrastructure. So that is the agenda for the session; let's move on to the first topic: why do we need Kubernetes?

To understand this, let's first look at how applications were designed in the past. Here is a sample monolithic application, which is exactly how applications used to be built. A monolithic application is one where all the features of the application are consolidated into a single code base. For example, take this sample application to be Uber: you have features like location services, customer service, notifications, mail, payments and passenger management, and all of them are part of the one application. When you tap on notifications in your Uber app, the app contacts the server, and your application simply reflects whatever it gets from the server. Earlier, all of these features lived on one server: if my application has to stream notifications, it reaches out to that server, and if I have to call customer support, that API query goes to the same server as well.

Now this might seem fine, but it caused problems. First of all, everything is in one code base. If I have to update any part of the application, say change how passenger management works, I have to work in the same code base that also contains mail, notifications and customer service, and it is highly likely that some of those features will stop working or malfunction because of the changes I make to passenger management. There was a lot of dependency between all the features of the application, precisely because they lived in one code base: they were not interacting through JSON or some other form of communication, they were interacting through direct function calls within the code. So if, say, my passenger management code fails tomorrow, all the other features fail along with it, since they all depend on each other.

This was solved by microservices. With microservices, we decided that every feature should have its own code base and should be deployed separately. Now my customer service is deployed separately with its own code base, my notifications are deployed separately with their own code base, and all of them interact using JSON.
When two services have to communicate, one sends the other an encoded message, the receiving service decodes and reads it, and any response generated is sent back over the same channel. So there are no longer function calls between these services, only API calls exchanging JSON.

But now the question is: where do we deploy these services? One way is to give each service its own machine: my customer service module on one computer, my notifications on another server, my mail service on a third, so if there are five features there will be five different servers, each serving one service's requests. That sounds nicely distributed, but it becomes a problem when it comes to scaling. The customer service module might not need all the power available in a server, yet it occupies the whole server and nothing else can be deployed there, so scaling down is a problem. Scaling up works, but it causes the same waste: say your customer service module is being bombarded with requests and CPU usage peaks at around 85 to 90 percent. If you have deployed on a cloud, the cloud will bring up one more server running exactly the same service, and requests will be divided between the two, but the second server, running nothing except the customer service module, will also always be underutilized.

This was the problem we had to solve, and we solved it using Docker. Docker gives you a separate environment for each service. We had deployed services on different servers because we wanted them to be totally independent of each other, with no dependency on any other feature's environment. So instead we dockerized these services, and now all of them can live on a single computer while behaving as if they were on different machines. In that case your application is only down when the computer, the server as a whole, fails; the services remain independent of each other. For example, your customer service could be running in a Red Hat docker container, your payments inside a container based on a different distribution, and your mail inside an Ubuntu docker container; they can all be different and all be running on the same system. This solved the problem of underutilization. Scaling up is also fine: if I feel passenger management is being over-utilized and causing a CPU spike, I can deploy one more container of passenger management and requests will be routed between the two. And if passenger management is not working properly, I can shut that container down and deploy a fresh one, without affecting the other features of the application.
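To make that idea concrete, here is a minimal sketch; the image names and port mappings are purely illustrative placeholders, not from the session:

    # two independent services sharing one host, each isolated in its own container
    docker run -d --name customer-service -p 8081:80 nginx:1.7.9   # placeholder image
    docker run -d --name notifications    -p 8082:80 httpd:2.4     # placeholder image
    docker ps   # both run on the same machine, but one failing does not affect the other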
So with microservices in containers, what we get is zero downtime: you get a chance to update your application without disturbing the other services, and much more. Once we had this dockerized setup, things seemed fine, but now the problem was: how do we monitor these containers? If there are 300 or 400 containers running, how do I assess which container is running fine, what the health of each container is, and, if a container goes down, how do I automatically deploy a replacement? This orchestration of containers was a problem, and this is where Kubernetes came to our help.

Now, there are many container orchestration tools, and all of them perform differently based on the features they have. Kubernetes natively has the largest set of the features that are essential for container orchestration. Docker also gives you Docker Swarm, which you can use for orchestration, but personally I feel Kubernetes has many more features than Docker Swarm, and that is the reason we're discussing Kubernetes today. Most of the industry today is also using Kubernetes for the same reason: it gives you a lot of flexibility in what you can do with your application.

So, because our containers were large in number and needed monitoring, we came up with Kubernetes. If Kubernetes were not there, what would the problems be? First, with many instances of each feature running, monitoring became difficult. Scaling a particular feature based on load was also not possible automatically: you had to keep someone watching the usage of the containers, and only if they noticed a container being over-utilized, or a container dying for some reason, would a new container be launched or an old one deleted, manually. So there was too much manual intervention required from the programmer's end. Also, as I told you, if everything lives on one server, the only way your application goes down is when that server goes down, and we had to mitigate that. But once we deployed containers on multiple servers, managing, or even knowing, how many containers were running for a particular service across multiple nodes also became a problem. There was no way to solve this without a tool that could orchestrate the containers for us, and this is exactly what Kubernetes gives us.

So what is Kubernetes? I think we have discussed most of it already: Kubernetes is a container orchestration tool used to monitor and orchestrate multiple containers. It helps us monitor containers, scale containers, and restart containers if any of them fail, and all of this happens automatically, with no manual intervention required. That is what Kubernetes is all about. Now, what are the features of Kubernetes? We touched on some, monitoring and scaling, but let's discuss them in more detail. The first feature is managing multiple containers as one entity.
If you have four hundred containers running, serving perhaps five features, you will only see those five features; you don't have to worry about the number of containers running behind them, because Kubernetes automatically takes care of health monitoring and scaling up your containers. You only have to worry about what kind of services you want to launch: if there is a new feature to include in your application, all you do is launch a new deployment containing that feature, specify the number of containers you want supporting it, and Kubernetes automatically takes care of how it is deployed in the backend. So managing multiple containers as one single entity became very simple with Kubernetes.

Resource usage monitoring was also solved by Kubernetes. For every container that is running, you can define how much of your compute resources it may utilize, and the moment the container needs more than that, Kubernetes can deploy a new container. That is how it handles resource usage and scaling.

Then we have networking. Networking is a fantastic feature in Kubernetes: all the containers deployed on a Kubernetes cluster can interact with each other as if they were on one system, even though they might be spread across 100, 200 or 300 nodes; as long as they exist on the cluster, they can talk to each other.

Then there is the rolling update: if you want to update your application, you have the option of doing a blue-green style deployment using the rolling update feature. Then we have load balancing: if multiple replicas of a container are running inside your cluster, Kubernetes can automatically load balance requests among them. And you have health checks: if your container is not running the way it is supposed to, or it is down, or the service itself is down, Kubernetes takes care of redeploying the container so that the service is always up. You will never notice downtime while your container is running inside Kubernetes, unless there is some problem in the code itself; if your code runs fine, everything is going to run fine in Kubernetes.
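As a hedged sketch of the resource limits and health checks just described, a container spec inside a Kubernetes manifest can carry both; the names and values below are illustrative, not from the session:

    # illustrative container spec fragment
    containers:
    - name: web
      image: nginx:1.7.9
      resources:
        requests:            # what the scheduler reserves for this container
          cpu: "250m"
          memory: "128Mi"
        limits:              # hard ceiling the container may not exceed
          cpu: "500m"
          memory: "256Mi"
      livenessProbe:         # health check: the container is restarted if this fails
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10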
Moving forward, let's discuss how Kubernetes actually works. Kubernetes, like I said, is basically a cluster: you have a master, and you have several nodes under that master. The job of the master is to schedule containers on the nodes under it, monitor the nodes, monitor the containers running inside each node, and keep track of logs of what operations are being performed on each container. So the master plays the important role of monitoring and scheduling everything, while the workers, the nodes, do the job of processing whatever your application requires, whether that is computation or serving API calls. Kubernetes itself runs on both the master and the nodes; it is basically the same software running everywhere, and the machines interact with each other as if they were one whole system. It is also very easy to add more nodes to an existing cluster; it can be done in a jiffy. The master gives you one command when it is initialized: you save that command, and whenever you want to attach a new node to the cluster, you first install Kubernetes on it and then run that command, and your node automatically joins your Kubernetes master. So this is how Kubernetes works from the outside; now let's understand how it works from the inside, through its architecture.

All right, so the first thing you will deploy in Kubernetes is a pod. A pod is essentially a container, but with a difference: a pod can hold multiple containers as well. That is for the case where your application is deployed in one container but also needs a separate container running right alongside it. For example, if you have a PHP container and your files live in a different container, these two containers cannot work without each other: your files need the PHP container, and the PHP container needs the files, to give you the website. These two containers are bound together and are supposed to work together, and for that very reason we have pods. Usually a pod has a single container, but like I said, there can be situations where one container cannot exist without another, and in those cases you deploy multiple containers inside one pod. For the sake of simplicity, though, you can think of a pod as a container.

So here is one pod, a second pod and a third pod: we are basically deploying three containers of the same application. For example, say I want to deploy a website on Kubernetes. When I deploy it, I have the option of specifying how many replicas of my website I want running on the cluster, so that if one container goes down my application does not go down; my traffic is automatically redirected to the second or third container. That is why you specify the number of replicas you want for your application. The pod is the simplest entity in Kubernetes.

You deploy a pod, or a number of pods, by specifying them in a deployment. So what is a deployment? A deployment takes care of the number of pods running inside Kubernetes. You create a deployment, and it automatically creates the pods for you; while creating it, you specify which container image you want to run and how many instances of that container you want. The deployment then makes sure that the number of pods I specified is actually running, and that the image I specified is present inside each pod. To summarize, a deployment deploys the number of pods you want, with the image you want inside each pod.
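Here is a hedged sketch of the multi-container pod idea from the PHP example above; the images, names and shared-volume wiring are assumptions for illustration only:

    # hypothetical pod: two co-dependent containers sharing one volume
    apiVersion: v1
    kind: Pod
    metadata:
      name: php-site
    spec:
      volumes:
      - name: site-files
        emptyDir: {}               # scratch volume shared by both containers
      containers:
      - name: php                  # serves the website
        image: php:7.4-apache
        volumeMounts:
        - name: site-files
          mountPath: /var/www/html
      - name: files                # in practice this container would populate the volume
        image: busybox:1.36
        command: ["sh", "-c", "while true; do sleep 3600; done"]
        volumeMounts:
        - name: site-files
          mountPath: /data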
Now, what we want to understand from this slide is how a user, a person like me who uses your application, will be able to access the application that exists inside your pods. For example, if you go to intellipaat.com, how will Kubernetes serve the website to you, given that the pods hosting the website are inside the Kubernetes cluster? As a user you go to a URL, and that URL points to the server that has your application. In this case the deployment exists on some node of the cluster, and you would not know which node you have to contact. In Kubernetes, any application you deploy can basically be reached through any of the IP addresses of the cluster: if there are four servers, one master and three nodes, any of the master's or the nodes' IP addresses will give you access to the application. So if I go to intellipaat.com, the domain can point to any of these IP addresses; that is my choice. But whichever IP address I land on, I cannot access my deployment directly: I have to go through a service.

Now what is a service? A service is basically an internal load balancer for all the running pods. The problem statement was: if there are three pods running inside my cluster, all hosting the same application, how does the cluster know exactly which pod a user's request should land on? Should the request be routed to pod one, pod two or pod three? They all have the same application, and the user does not care which pod serves the request, but the cluster has to decide. For this reason, in front of every deployment you have a service, and the service does nothing but expose the deployment to the outside world. The service load balances the requests coming to it: for the very first request it receives it will probably pick a pod more or less at random, but as more and more concurrent requests come in, the service sends more requests to whichever pod is more free. So a service is nothing but an internal load balancer catering to your requests. The moment you head to intellipaat.com, the domain points to an IP address, that IP address points to the service, and the service load balances your request to one of these pods.

Now the next question: say I want to go to intellipaat.com/blog. How would the cluster differentiate where exactly my request should go? If I type intellipaat.com/blog it should go to the blog page, and if I go to intellipaat.com it should land on the home page, and here comes the problem of having multiple services. Say this is my blog service and this is my home page service: how will the cluster know which service the user should reach? Obviously, for a user the only way to interact is through a URL.
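Before we solve that routing question, here is a hedged sketch of the service just described; the selector label is an assumption and must match whatever labels the deployment puts on its pods:

    # illustrative Service manifest: an internal load balancer in front of the pods
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: NodePort         # also reachable via a port opened on every node
      selector:
        app: nginx           # assumed pod label
      ports:
      - port: 80             # port the service listens on
        targetPort: 80       # port the pods' containers expose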
So say the user goes to intellipaat.com/blog; how will the cluster know exactly where to send his request? For that we have something called an ingress. What an ingress does is route based on the request it receives from the user: if the ingress sees that the user is requesting intellipaat.com/blog, it sends the request to the blog service, and if it sees the user is requesting intellipaat.com, it sends the user to the home page service; one service fronts the blog application and the other fronts the home page application. And this is how Kubernetes works internally: the user sends a request, the request reaches an ingress, and the ingress decides, based on the path the user is requesting, which service to send the request to. Your cluster can also do without an ingress: if you do not have multiple exposed services on your cluster, you will not require one, but whenever you have multiple exposed services, you always need an ingress. So that is a summary of how Kubernetes works internally.
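For concreteness, here is a hedged sketch of an ingress with the two routes described above; the hostname, the service names, and the use of the current networking.k8s.io/v1 API are assumptions, not from the session:

    # hypothetical Ingress: path-based routing to two services
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: site-ingress
    spec:
      rules:
      - host: intellipaat.com        # example host from the discussion
        http:
          paths:
          - path: /blog
            pathType: Prefix
            backend:
              service:
                name: blog           # assumed name of the blog service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: homepage       # assumed name of the home page service
                port:
                  number: 80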
Now let's go ahead and do some hands-on to see how we can implement what we just learned. All right, before starting I would just like to tell you: whether your cluster is on Minikube, Google Cloud, AWS or anywhere else, all the commands we are going to run now will run the same. Most of you are probably working on Minikube; I am working on an installation on AWS. I am not going to discuss how to set up a Kubernetes cluster in this session; I will probably add a separate session where I discuss the different ways you can set one up. Here I have already set up a cluster, so let me quickly go to the terminal to start the hands-on.

All right, our terminal is here, and the cluster is up and running. The way you can see that is by checking the nodes that have joined the cluster. For that I type kubectl, which is cube-control, the command with which you control the whole Kubernetes cluster. The first thing I want to see is the nodes running in my cluster, so: kubectl get nodes. These are the two systems in my cluster; this is my master, on which all my applications will be orchestrated, and this is my worker node, and both of them are in the Ready state. You can also see all the pods running on your cluster: type kubectl get pods --all-namespaces, which shows every pod running throughout the cluster, and if you want to see where exactly each pod is running, add -o wide. Here are all the pods running on the cluster; some of them run on one node and some on the other. My master is 172.31.44.86, so every pod you see with that IP address is running on the master, and the rest are running on the worker.

Then you have IP addresses like 192.168.1.2: these are the IP addresses of the containers themselves. Like I told you, Kubernetes runs its own network, so if one pod of your cluster wants to interact with another pod, they interact through these IP addresses, but these addresses are only valid inside the Kubernetes cluster; they are not valid outside. If you type such an IP address in your browser you will not get anything; let us try that. If I go to my browser and paste it, you cannot see anything, because this is a private IP address, visible only to the cluster working here.

Now the first thing we want to do is deploy pods, which means we have to create a deployment, and in order to create a deployment you have to write a YAML file. What is the YAML file? It is a file in which you specify all the specifications of your deployment. For example, I want a deployment that goes by the name nginx; its specification says I want three replicas running, the containers should run the image nginx:1.7.9, and each container should expose port 80. Now say I want to change the number of replicas to five: I specify five here, then save and exit this file, deployment.yaml.

There are two ways of creating a deployment. Either you run kubectl create deployment, then the name of the deployment, nginx-1, and then the image, nginx:1.7.9; if I do this, it creates a deployment called nginx-1, and since I did not specify any number of replicas, only one pod is running for this particular resource. Or you create it from the file: kubectl create -f deployment.yaml. (My first attempt failed because I missed the -f flag before the file name.) Now if I run kubectl get pods, you can see there are five replicas of nginx running here, and if I open my deployment.yaml I can see why: I specified five replicas, hence five replicas of nginx are running. These are all pods, and as we discussed in the architecture, here is the deployment with the pods specified inside it: we created a deployment, we specified the number of replicas, and these are the pods being run for that deployment.
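Here is a reconstruction of what that deployment.yaml plausibly looks like; the labels and selector are assumptions, since the full file is not shown on screen:

    # hedged reconstruction of the demo's deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 5              # changed from 3 to 5 during the demo
      selector:
        matchLabels:
          app: nginx           # assumed label
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9 # image used in the demo
            ports:
            - containerPort: 80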
To delete all the pods, all you have to do is delete the deployment, and that deletes every pod running inside it. For example, I created the nginx-1 deployment, so: kubectl delete deployment nginx-1, hit enter, and nginx-1 has been deleted. If I do kubectl get pods, I can see there is no nginx-1 pod running any more, only the pods of the nginx deployment.

So we have a deployment with a number of pods running inside it; now we want to access these pods. Like I said, if you want to see where exactly your pods are running, run kubectl get pods -o wide. My pods are all running on the worker node, which has the address 172.31.38.114, while my master's IP address is 172.31.44.86. You can check that these pods are running as containers on the worker by going over to the worker and running docker ps: these are all the containers running right now, the containers inside the pods along with the system containers that Kubernetes itself runs.

So my pods are now running on my worker; the next thing I want is to access these pods. If I go to the pods' IP addresses I can reach those specific pods, but only from inside the cluster. If I type such an address into my browser I will not be able to see anything, but if I try the same IP address here on the cluster, I get what is inside the pod, which is my nginx page. So I am able to access the pods on the cluster; now I want to access them from outside, and for that, as we discussed in our diagram, we need a service.

Let's create a service. For that you type kubectl create service and the type of service; there are three or four types of services you can deploy, and we will discuss the different types in coming sessions. Right now let's focus on creating a service of type NodePort. After the type, you specify the name of the service, and remember that the name of the service should always be the same as that of your deployment; in our case the deployment's name was nginx, so we specify that. Then you specify the ports the service should map: port 80 of the service should point to port 80 of your pods, because when you created the deployment you specified that port 80 of the pod should be exposed, so you write 80:80, since nginx works on port 80. Hit enter, and the service has been created. To see your service, run kubectl get service, and you can see a service has been created of type NodePort, available on a particular port. NodePort basically means that a port is opened on the master, and in fact on the whole cluster, and using this port you can access your pods.
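To recap the hands-on so far as commands (the NodePort number is assigned by the cluster and will differ on yours):

    kubectl create service nodeport nginx --tcp=80:80   # exposes the pods labeled app=nginx
    kubectl get service                                  # note the assigned port, e.g. 80:31827/TCP
    kubectl delete service nginx                         # tearing it down makes the pods unreachable from outside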
So I copy this port, go to my master's IP address, open a new tab, and specify the port, 31827. The moment I hit enter I can see my nginx page, which means my service is working correctly: it is pointing to my pods, and my requests are being load balanced among all the running pods; there are five pods, so my requests are being load balanced among those five.

Now let us check that it is actually the service doing the load balancing. Let's delete the service: kubectl delete service nginx, hit enter, and now if I go back and try again, I am not able to access it. Let me copy the address once more and open it in a new tab: as you can see, I am not able to access my cluster now. For the first couple of minutes the browser was actually serving the website from its cache; that is why a refresh here still appears to work, it is coming from this system's cache, but in an incognito window you can clearly see the service is not reachable.

Now if I create the service again, it becomes accessible again: kubectl create service nodeport nginx --tcp=80:80, making the resource available on port 80. It has been created, and if I do kubectl get service I can see the port it is available on; I copy that port number, paste it over here, and I am able to access the nginx service. It is that simple. Also, like I told you, this port is available throughout the cluster, not just on the master: if I copy the node's IP address, come back here, and just change the IP address to the node's, you can still access the service, because this port is open across the cluster; any IP address of the master or a node, together with this port, will reach the nginx service.

So this was the hands-on for today, and with this we come to the end of our session. I recommend you try all the commands I showed on your own: try changing the deployment file (you can see it by going to that part of the video and copying it from there), try changing the name of the image, run the YAML file, and you will be able to deploy your own application with a custom service and access it from your browser. And that is the whole point. Thank you for attending today's session, have a great day, and goodbye.
Info
Channel: Intellipaat
Views: 108,359
Keywords: kubernetes tutorial for beginners, kubernetes tutorial, what is kubernetes, kubernetes, kubernetes introduction, kubernetes architecture, getting started with kubernetes, kubernetes networking, kubernetes docker, intro to kubernetes, kubernetes installation, kubernetes vs docker, kubernetes cluster, kubernetes edureka, edureka, kubernetes explained, kubernetes training, devops intellipaat, kubernetes intellipaat, kubernetes training online, Simplilearn
Id: NsDhBEsTTHs
Length: 39min 38sec (2378 seconds)
Published: Wed Jun 12 2019