AWS EKS Cluster Setup | Kubernetes Load Balancer | Ashok IT

Captions
Good morning guys. In the last session we discussed Helm charts in Kubernetes. Helm is a package manager for Kubernetes: using Helm we can install applications on the cluster, such as metrics-server, Prometheus, Grafana, or an ELK stack. In general we deploy our applications with manifest files, but writing manifest files is a difficult process, so just as Linux has package managers, Kubernetes has Helm. We also discussed Prometheus and Grafana: Prometheus is a monitoring and alerting tool, whereas Grafana is a visualization tool. How the cluster is running, how many requests are coming in, what the performance looks like, whether there are any issues in the cluster — all such details can be monitored using Prometheus and Grafana. Until yesterday we worked on a self-managed cluster — a kubeadm cluster we created and operated with kubectl — but nowadays companies prefer provider-managed clusters. In AWS the provider-managed cluster is called EKS, Elastic Kubernetes Service. Instead of creating the cluster on your own, you can use EKS, a fully managed Kubernetes service from AWS. EKS is one of the best places to run Kubernetes applications because of its security, reliability, and scalability.
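As a quick illustration of the Helm workflow mentioned above, here is a hedged sketch of installing metrics-server with Helm. The repository URL and chart name follow the commonly published metrics-server chart; verify them against the chart's own documentation before use:

```shell
# Add the metrics-server chart repository and refresh the local index
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update

# Install the chart into kube-system; "metrics-server" is the release name we chose
helm install metrics-server metrics-server/metrics-server \
  --namespace kube-system

# List installed releases to confirm
helm list --namespace kube-system
```

The same pattern (`helm repo add`, `helm install`) applies to Prometheus, Grafana, or any other chart.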
EKS can be integrated with other AWS services such as ELB, CloudWatch, Auto Scaling, IAM, and VPC. EKS makes it easy to run Kubernetes on AWS without needing to install, operate, or maintain your own Kubernetes control plane. Last time, when we set up the Kubernetes cluster, we created one t2.medium instance, installed the control plane on it, and managed it ourselves. With EKS you don't need to bother about the control plane, because AWS provides it: whenever you create an EKS cluster, AWS runs the Kubernetes control plane across three availability zones to ensure high availability, and it automatically detects and replaces unhealthy masters — if the control plane becomes unhealthy, AWS takes care of replacing it. AWS has complete control over the control plane; we don't. With a self-managed cluster we create and manage both the control plane and the worker nodes, but with EKS we are only responsible for creating the worker nodes and attaching them to the control plane, typically as an Auto Scaling node group. Control plane charges plus worker node charges will then apply.
Since AWS provides the fully managed control plane, AWS charges for it; and since we create EC2 instances as worker nodes and attach them to the control plane, worker node charges apply as well. Other clouds offer the same model: Azure has AKS and Google Cloud has GKE — every major cloud provider supports a managed Kubernetes cluster, and we can simply run our applications on it. So, to summarize: EKS is a fully managed AWS service and one of the best places to run our applications, because AWS takes care of the cluster's security, reliability, and scalability; it integrates with the other AWS services; and it is easy to run applications on it because we don't need to set up the cluster ourselves. The cloud provider gives you only the control plane, which you cannot control; we just create the worker nodes as an Auto Scaling group and attach them. As for billing, everything is charged, so be careful when working with an EKS cluster: control plane charges and worker node charges will both appear on your AWS bill. Now I'm going to set up an EKS cluster in my AWS account.
Log in to the AWS Management Console. I'm able to log in; go to EC2. There are some instances available here — Docker, Kubernetes, Nexus, database server, helper EC2. I'm not deleting my self-managed cluster, because we'll use it for the remaining concepts; I just stopped those machines, so they are in the stopped state and won't generate a bill right now. To set up the EKS cluster, the prerequisites are: an AWS account with admin privileges, an instance to manage and access the EKS cluster using kubectl, and AWS CLI access to use the kubectl utility. We already have an AWS account; mine is a root account, so admin privileges are available. The control plane will be given by AWS, so in order to communicate with the control plane's API server and execute commands against it, we need one separate machine — we can call it the Kubernetes client machine. Last time, the control plane was our own machine, so I installed kubectl directly on it and performed operations there. Now the control plane is provided by AWS EKS; it is not our machine. I am going to create worker nodes — node 1 and node 2 — and attach them to the control plane.
Node 1 and node 2 are the worker nodes. We can't control the control plane, because it is an AWS product. What I will do is take one more machine, which I'll call the Kubernetes client machine; in this client VM I'm going to install kubectl and use it to execute commands against the control plane. So the machines circled in red are the ones we create ourselves: the worker nodes, which attach to the EKS cluster, and the client machine, a separate EC2 instance where I install kubectl — the CLI-based tool — to execute Kubernetes commands against the EKS cluster. That matches the prerequisites: an AWS account with admin privileges (my root account is available), one instance to access the cluster using kubectl (the Kubernetes client VM), and AWS CLI access — for that we need an IAM user whose access key and secret key are required for the AWS CLI. Next, the steps to create an EKS cluster in AWS. First, create a VPC using CloudFormation with the S3 template URL below. We are going to create a separate VPC to set up our EKS cluster: by default we have one VPC, but it is not recommended to use the default VPC for our cloud resources, which is why I'm creating a dedicated one. An older VPC from a previous session is still present.
Let me delete that older VPC. Now there is only one VPC in my account — I am in the Mumbai region, and that is the default VPC. I don't want to set up my cluster in the default VPC, so I am going to create a separate one using a CloudFormation template. CloudFormation is an AWS service, which we have already discussed, used to create infrastructure from templates, and AWS provides a ready-made CloudFormation template for the EKS VPC. Go to the CloudFormation service; under Stacks there are currently none. Click Create stack. It asks whether the template is ready, whether to use a sample template, or whether to create one in the designer — my template is ready, so I paste the S3 bucket URL of the AWS-provided template. Click Next, enter a stack name (you can give any name; I'm using eks-vpc-cloudformation), click Next, optionally add key/value tags, click Next again, then Create stack. That's it: the CloudFormation template is creating one VPC in the Mumbai region. Why this VPC? The cluster I want to create should live in a dedicated VPC. We already know what a VPC is — Virtual Private Cloud — an isolated network we create for our cloud resources for security reasons. A VPC can be created manually, with a CloudFormation
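For reference, the same stack creation can be done from the AWS CLI. This is a hedged sketch: the stack name matches the one used above, while `<template-s3-url>` is a placeholder for the AWS-provided S3 template URL pasted in the console, which is not quoted in the video:

```shell
# Create the VPC stack from the AWS-provided template
# <template-s3-url> is a placeholder; substitute the real S3 URL shown in the console
aws cloudformation create-stack \
  --stack-name eks-vpc-cloudformation \
  --template-url <template-s3-url> \
  --region ap-south-1

# Block until the stack reaches CREATE_COMPLETE
aws cloudformation wait stack-create-complete \
  --stack-name eks-vpc-cloudformation --region ap-south-1
```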
template, or with Terraform. Terraform is a general-purpose tool that works across all cloud providers, whereas CloudFormation is specific to AWS. Here we used the CloudFormation template to create the VPC, and the creation is in progress; once it completes, the stack status will change. While the VPC is being created — let's wait another thirty seconds — the public subnets, private subnets, and route tables are all created as part of it. All of that code is in the CloudFormation template, written in JSON or YAML. Once the VPC is created, we need to create roles in IAM — basically two roles. The first one: create an IAM role with entity type AWS service, use case EKS cluster, and role name eks-cluster-role. Then we create the EKS cluster using the created VPC and IAM role, with the cluster endpoint set to public and private. Let's check the VPC status — still in progress, it's taking time... and now the status shows CREATE_COMPLETE: our VPC was created successfully. Go to the VPC console and you can see two VPCs: one is the default VPC and the other is the EKS CloudFormation VPC we just created. So in my account, in the Mumbai region, there are now two VPCs — the default one and my custom one. Next step: create the IAM role, with entity type AWS service and use case EKS cluster. Go to the IAM (Identity and Access Management) service.
In IAM we can create users, groups, and roles. Go to Roles; there are already 13 roles in my account, and I want to create one more. Click Create role. For the trusted entity type I select AWS service, meaning I'm creating a role for a particular AWS service. Which service will use this role? EKS — specifically the EKS Cluster use case, because I want this role to be used to create the EKS cluster. Click Next; the EKS cluster service permission is attached to this role. Click Next again and enter the role name — you can give any name, I'm using eks-cluster-role — then click Create role. With this, one new role named eks-cluster-role exists in IAM. Two steps are now complete: VPC creation and role creation. Using this VPC and this role, I will create my EKS cluster — this is the most important point. Go to the EKS service. EKS, Elastic Kubernetes Service, is described as the most trusted way to start, run, and scale Kubernetes applications. Do I have any clusters in this service? No clusters are available in my
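The console steps above can also be sketched with the AWS CLI. This is a hedged example: the role name matches the one created above, the trust policy simply allows the EKS service to assume the role, and `AmazonEKSClusterPolicy` is the standard AWS-managed policy for this use case:

```shell
# Trust policy letting the EKS service assume the role
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the managed EKS cluster policy
aws iam create-role \
  --role-name eks-cluster-role \
  --assume-role-policy-document file://eks-cluster-trust.json

aws iam attach-role-policy \
  --role-name eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```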
account as of now; I have not created any EKS cluster yet. Read the description: Amazon EKS is a fully managed Kubernetes control plane — a managed service that makes it easy for you to use Kubernetes on AWS without needing to install and operate your own control plane. You don't need to set up a control plane; when you go for EKS, AWS gives you the control plane to run your applications on AWS. Now click Create cluster. It asks for a cluster name — eks-cluster, for Elastic Kubernetes Service cluster — and a Kubernetes version; the latest here is 1.23. For the cluster service role, the eks-cluster-role we created is selected by default, because only one role in the account is associated with the cluster use case. Click Next. For the VPC, you could create the cluster in the default VPC, but that's not recommended, so select the custom VPC we created with CloudFormation; it has four subnets. For cluster endpoint access, you choose how the cluster is exposed: if you keep it private, outside users cannot connect; if you choose public, everything is public; with public and private, the cluster endpoint is accessible from outside your VPC while worker node traffic to the endpoint stays within the VPC. That's what I want — worker nodes reachable within the VPC, and the cluster also accessible from outside — so I choose public and private. So: I selected the custom VPC we created with CloudFormation; in
that custom VPC, four subnets are available, and it also asks which security group to use. Whenever a VPC is created, a security group is created inside it, so I select the security group belonging to our custom VPC. Cluster endpoint access: public and private — are you getting my point? Click Next. Control plane logging asks whether you want logs for the API server, authenticator, audit, controller manager, and scheduler; enable them if you want those logs, otherwise just continue. Review the cluster configuration we just filled in and create the cluster. Cluster creation is simple: we created a VPC and an IAM role, and using that VPC, IAM role, and security group, with endpoint access public and private, I am creating my Kubernetes cluster. Creation will take about five minutes. Once the cluster is created we get only the control plane — worker nodes and the client machine don't exist yet; we created just this control plane using the VPC and IAM role. Now, to operate this cluster and deploy our applications on it, we need access to it, and for that we use kubectl: with kubectl I can connect to this control plane. So now the fourth step. Three steps are already complete: I created the VPC using the CloudFormation template, I created the IAM role with EKS cluster access, and I created the cluster using that VPC and IAM role. Cluster access is public and private: the control plane can be reached from outside the VPC, whereas worker nodes
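The console-based cluster creation above can also be sketched with the AWS CLI. This is hedged: the cluster and role names match the ones used in this session, `123456789012` is a placeholder account ID, and the subnet and security-group IDs are placeholders for the ones your CloudFormation stack actually produced:

```shell
# Placeholder IDs: substitute the subnets/SG from your CloudFormation VPC
aws eks create-cluster \
  --name eks-cluster \
  --kubernetes-version 1.23 \
  --role-arn arn:aws:iam::123456789012:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaa,subnet-bbb,subnet-ccc,subnet-ddd,securityGroupIds=sg-xxx,endpointPublicAccess=true,endpointPrivateAccess=true \
  --region ap-south-1

# Block until the control plane is ACTIVE (takes a few minutes)
aws eks wait cluster-active --name eks-cluster --region ap-south-1
```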
keep their traffic within the VPC. Cluster creation is in progress and will take some time, so without wasting that time, let's meanwhile create the client machine, install kubectl on it, and use it to connect to the cluster. You can take any EC2 machine — Amazon Linux, Ubuntu, or Red Hat. Launch a new instance: I'm naming it kubernetes-client-vm, choosing Red Hat Linux on t2.micro, selecting my key pair, the default VPC, and an existing security group, then Launch instance. It's just a normal EC2 instance running Red Hat Linux. Meanwhile, check the EKS status — the cluster is still in the Creating state. The client machine is ready, so let's connect to it with MobaXterm over SSH: the default username is ec2-user, and for the private key we use the AWS .pem file. Click OK — we're connected to this Linux VM; whoami shows ec2-user. On this machine I need to install kubectl, because this client machine will perform operations against the control plane. As we discussed initially, we can deploy applications to the Kubernetes control plane using kubectl or using the web dashboard; currently we're using kubectl, and in future we'll use the dashboard as well. First, let's check whether kubectl is already available on this client machine: kubectl version --client. It says kubectl:
command not found — kubectl is not installed. So let me install it: download the binary with curl, install it, then run the version command again to check the kubectl version. Now kubectl is installed and we can see its version, so kubectl is ready on the client machine. Along with kubectl I need the AWS CLI as well, so on this EC2 instance I download the AWS CLI, which arrives as a zip file. Because it's a zip, we also need the unzip package: sudo yum install unzip — yum is the package manager here, and with it I install unzip. (In the background, my cluster is still being created.) Once unzip is installed, unzip the CLI archive and run the installer with sudo ./aws/install. Now aws --version confirms the AWS CLI is installed. Next we need to configure an access key ID and secret access key so the CLI can authenticate against AWS IAM. You can create an IAM user, or use your root account credentials — under the security credentials of your root account you can see the
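The install sequence just described can be sketched as follows. Hedged: the kubectl URL uses the upstream Kubernetes release layout rather than the mirror used in the video, and `v1.23.0` is chosen only to match the cluster version in this session — pick the version appropriate for your cluster; the AWS CLI v2 URL is the standard Linux x86_64 bundle:

```shell
# Download and install kubectl (upstream release URL; match your cluster version)
curl -LO "https://dl.k8s.io/release/v1.23.0/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

# Download and install AWS CLI v2
curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
sudo yum install -y unzip
unzip awscliv2.zip
sudo ./aws/install
aws --version
```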
access key ID and the secret access key. This is my root account's access key and secret access key; you can copy those details from your own account, but creating an IAM user with administrator access and using its credentials is actually the recommended approach. We should not use root credentials: the root account is very powerful, with permissions for every service in AWS. Create one IAM user with programmatic access and you will get an access key ID and secret access key. Once that's done, run aws configure. It asks for the access key ID, the secret access key, the default region — we are using the Mumbai region, whose region code is ap-south-1 — and the output format, which I leave as none. With this, I'm able to configure my AWS IAM credentials via aws configure. So far, then: I created one virtual machine with Red Hat, installed kubectl on it, installed the AWS CLI, and ran aws configure with those credentials to work with the CLI. Next, let's go to our cluster and check the status — it's still in the Creating state; it's taking a lot of time. Once the cluster is created we can check the available clusters with the AWS CLI command aws eks list-clusters. While the EKS cluster is being created and we wait, you may get a doubt: sir, for the
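The configuration step looks like this; the key values shown are obviously placeholders, and `ap-south-1` is the Mumbai region code used in this session:

```shell
$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-south-1
Default output format [None]:
```

The values are stored under `~/.aws/credentials` and `~/.aws/config`, and every subsequent `aws` command on this machine uses them.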
control plane I used the VPC created with CloudFormation, but for this client machine I used the default VPC — you can use either; that's not a problem. If you look at my running client machine, kubernetes-client-vm, under Networking you can see which VPC it was created in: the default VPC. The client machine is in the default VPC, while the control plane is in the custom VPC we created. There is no need to put the kubectl client machine in the same VPC, though if you want, you can create it in the custom VPC made for the control plane. I'm waiting for my cluster to complete... yes, the EKS cluster was created successfully and its status is Active, so the cluster is ready. What does cluster ready mean? It means our control plane is ready. From the client machine, let's execute aws eks list-clusters — my client machine and my control plane are both in the same Mumbai region. list-clusters is an AWS CLI command, which is exactly why we installed the CLI first and configured the IAM credentials; based on that, I can run this one CLI command and get the cluster information. Now observe: is there any relation yet between this client machine and the cluster? As of now, no connection. The cluster is running in the Mumbai region and this machine is also running in the
Mumbai region, and kubectl is available here, but kubectl doesn't know where exactly the cluster is running — are you clear with my point? We now need to configure the cluster data in kubectl, and there is a command for that. With aws eks list-clusters we can list the cluster data; the next step is to update the kubeconfig file on the remote machine from the cluster, using the command below. The cluster has a kubeconfig file containing the cluster information, and we need that configuration on our client machine so the client machine can access the cluster. Last time, the control plane was our own machine, so we installed kubectl directly on it; today the control plane is given by AWS — an EKS cluster, fully managed — and we have no control over it. So instead, I take the cluster's details and update the kubeconfig file on my client machine so that my client machine can access the cluster. How? With: aws eks update-kubeconfig, passing the name of the cluster and the region it lives in. Our cluster name is eks-cluster and it is in the Mumbai region, so this command updates the kubeconfig file on the client machine.
The kubeconfig represents the cluster's location, and updating it on the client machine is what lets this machine access the cluster. You can see "Added new context" for our EKS cluster: that data was added to my machine at the path shown. cat is used to display the content of a file, and running cat on the kubeconfig path displays apiVersion, the clusters section with the cluster server — that is, where the cluster is running, in the AWS cloud as the EKS cluster — the context, the region (Mumbai), and the cluster name (eks-cluster). That cluster data was copied into the kubeconfig, so with this we are able to access our cluster. Now try kubectl get nodes — no resources available; kubectl get pods — no resources found, no pods. That's expected, because only the control plane has been created so far. To recap: on this client machine kubectl is configured, and the kubeconfig file — which records in which location our control plane is running — has been updated, so when we execute kubectl commands from this machine, they execute against the control plane running in the cloud. The client machine is ready and the control plane is ready; the next step is adding worker nodes.
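Putting the last two steps together, here is a hedged sketch of wiring kubectl to the new cluster; the names and region match this session, and the kubeconfig path is the default one kubectl reads:

```shell
# List clusters visible to the configured IAM credentials
aws eks list-clusters --region ap-south-1

# Write the cluster's endpoint and auth details into ~/.kube/config
aws eks update-kubeconfig --name eks-cluster --region ap-south-1

# Inspect the generated kubeconfig and verify connectivity
cat ~/.kube/config
kubectl get nodes   # empty until worker nodes are attached
kubectl get pods    # likewise, no workloads yet
```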
earlier today, AWS EKS gives you only the control plane; you are responsible for creating the worker nodes and attaching them to the control plane. So let us see how to create worker nodes; that is our step 5. In order to create the worker nodes we need to create one IAM role with three policies: an IAM role for the EKS worker nodes with the EC2 use case. We already created one IAM role for the EKS cluster with one policy; now we are going to create another IAM role with three policies, and that role is used to create worker nodes for our control plane. Let's go to the IAM service (Identity and Access Management), go to Roles, click Create role, choose AWS service, and select EC2 as the use case. Click Next. Here 771 policies are available, and I need to select three of them. The first is AmazonEKSWorkerNodePolicy; observe carefully how I select it: copy the policy name into the filter, select the one matching policy out of 771, then clear the filter. One policy is added. The second is AmazonEKS_CNI_Policy: filter, select, clear the filter; two policies are selected. The third is AmazonEC2ContainerRegistryReadOnly: select it and clear the filter. Now three policies are selected out of 771, and those three are required for our worker node group. Click Next. For the role name I'm giving eksWorkerNodeRole, because it is the worker node role, with the three policies I selected. Click Create role; a new role is being created for our worker nodes, and the IAM role is
created. Yes, eksWorkerNodeRole is created successfully. Once it is created, the next step is to create the worker node group, and we create the node group inside the cluster. Let's go to our cluster: in the EKS service one cluster is created and it is in the Active state. Click on the cluster, then go to the Compute tab. Currently no nodes are available; no worker nodes are added to this cluster, nodes are zero and node groups are also zero. It is our responsibility to create the nodes and attach them to the control plane. In the Compute tab, under Node groups, click Add node group. For the node group name I'm giving eks-node-group. For the role, select the worker node IAM role we just created, and click Next. What kind of instances do you want to use? We could take t3.medium, or even t2.micro, but if you use larger instances the performance will be better, so instead of medium let's go with t2.large: that means we get 8 GB of RAM, and 20 GB of disk space. Then the scaling configuration: desired size sets the number of nodes the group should launch with initially, minimum size sets the minimum number of nodes the group can scale in to, and maximum size sets the maximum number it can scale out to. This is an auto scaling group we are creating, so if you want to auto-scale the nodes, the node group can be updated later. I'm setting desired, minimum, and maximum all to 2. Click Next. So here,
for the subnets, it is selecting the VPC that we used for our cluster; the cluster was created on the custom VPC, and the same VPC is used for these worker nodes, with its four subnets. Click Next; no other changes are needed, just click Create. With this, the worker nodes are created and attached to the cluster by default. The node group is still creating, so we need to wait; you can observe the status as Creating. Once the node group is created we can get started with our kubectl commands. I'm waiting for the node group creation to complete; it is taking time, guys. Two worker nodes are getting added to this cluster: the cluster is created, and inside the cluster the node group is getting created. A node group is nothing but worker nodes, and two worker nodes will be added because we gave the desired size as 2.
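The console clicks above (worker node role, three policies, node group with scaling config) can also be driven from the AWS CLI. This is a hedged sketch, not the exact commands from the video: the role name, node group name, account ID, and subnet IDs are placeholder assumptions you would replace with your own values.

```shell
# Trust policy letting EC2 instances (the worker nodes) assume the role.
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
  --role-name eksWorkerNodeRole \
  --assume-role-policy-document file://ec2-trust-policy.json

# Attach the three managed policies the node group needs.
for policy in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy \
    --role-name eksWorkerNodeRole \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

# Create the node group (cluster name, role ARN, and subnet IDs are placeholders).
aws eks create-nodegroup \
  --cluster-name eks-cluster \
  --nodegroup-name eks-node-group \
  --node-role arn:aws:iam::111122223333:role/eksWorkerNodeRole \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --instance-types t2.large \
  --disk-size 20 \
  --scaling-config minSize=2,maxSize=2,desiredSize=2

# Block until the node group reports ACTIVE, then verify from the client machine.
aws eks wait nodegroup-active \
  --cluster-name eks-cluster \
  --nodegroup-name eks-node-group
kubectl get nodes
```

These commands need AWS credentials with IAM and EKS permissions; the `wait` subcommand saves polling the console for the Creating-to-Active transition.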
So once this node group is added, we can go to our Kubernetes client machine and execute kubectl get nodes, and it will display how many nodes are added to the cluster. When I execute kubectl get nodes: yes, the nodes are Ready and already added; the console status was still showing Creating, but actually the nodes were ready. Now the status has changed: the node group status is Active, and you can see the nodes are added. So with this, what is ready? The control plane is ready, the two worker nodes are ready, and our client machine is also ready. Now we can play with our EKS cluster from this client machine: kubectl is installed there and the kubeconfig is updated. What is the kubeconfig? It is nothing but a file which contains the cluster information, where the cluster is running, and I already configured it on the client machine. The control plane is up and running, the worker nodes are up and running, and my client machine is also ready with kubectl. If you observe, the control plane is managed by the AWS cloud; we just created the worker nodes and attached them to the cluster. Now let's execute a command on this client machine. kubectl get nodes: how many nodes are available? Two nodes, and those two are worker nodes only. What about the control plane? It is provided by AWS, and AWS will manage it. Let's go for kubectl get pods: it checks for pods in the default namespace, and no pods are created. So with this, can I say my cluster setup is ready? Whenever you go for a self-managed cluster, nodes are added to the cluster by using a worker join token; using the worker token we join the worker nodes to the cluster if you go
for a self-managed cluster. But here, do we need to join the nodes to the cluster using a worker token? No. Why? Because the worker nodes are created directly by selecting the cluster, so internally EKS takes care of that. If you create the control plane manually on a Linux machine, then the worker nodes have to join the control plane by using a worker token. But here I have not created the machines separately: I selected the cluster, went to Compute inside the cluster, created the node group, and created the nodes, so by default those nodes are added to the control plane and we don't need to add them with a token. Did you get the clarity now on how the worker nodes are added to the control plane, and whether we need a token? In this scenario it is not required, because we are using an EKS cluster and we created the node group within the cluster. If you go for a kubeadm self-managed cluster, then you need to add the nodes by using a join token, but here it's not required. So what is the proof that my cluster is up and running? When I execute kubectl get nodes it gives me the node information, two worker nodes added; when I execute kubectl get pods it shows there are no pods available. Guys, where am I executing the kubectl commands: on the control plane or on the client machine? On the client machine. And how is the client machine connected to the control plane? Through the kubeconfig; not a ConfigMap, it is the kubeconfig. I updated the kubeconfig file on the client machine. How did I update it? I gave you one command: I gave the cluster name and the
region in which the cluster is running, then I asked the AWS CLI, with aws eks update-kubeconfig, to take the kubeconfig details for this cluster running in this region and update my kubeconfig file. As the kubeconfig is updated, my kubectl is able to access the cluster which is running on another machine. kubectl get nodes; kubectl get pods: currently no pods are available in the default namespace. Now across all namespaces you can see the aws-node pods are running and kube-proxy is running; there are two worker nodes, so kube-proxy runs on both worker nodes, and CoreDNS and the aws-node pods are available. From all the namespaces I got the pods. Now, why are we not able to see the API server, scheduler, controller manager, and etcd? Last time, when we worked on the kubeadm cluster and listed the pods, we could see the API server as a pod, along with etcd, the scheduler, and the controller manager. So why can't we see them this time? Are we managing the control plane, or is AWS? AWS is managing the control plane; we are just managing the worker nodes. All right, now let's create one deployment on the EKS cluster and see whether we are able to deploy our application on it. Let me take my Kubernetes manifest files; using these manifest files we can deploy our application on the cluster. In the running notes I have given the YAMLs which we use to deploy our applications: a pod creation with a service manifest, that is, create the pod and expose the pod by using a
service. So this is my deployment, and here I'm using a NodePort. Let me take this YAML; it is used to deploy my Java web application. Let me open it in the VS Code IDE so that if any indentation is wrong we can easily find out, and save it as deployment.yml. The apiVersion is apps/v1, the kind is Deployment, there is metadata, one replica, the deployment strategy is Recreate, and in the container specification I'm giving my Java web application running on port 8080. Then we have the service manifest: whenever we create a pod, pods cannot be accessed from outside the cluster, so I'm exposing the pods outside the cluster as a NodePort service. Last time we worked with ClusterIP and NodePort; today we can work with LoadBalancer also. There are three types of services available in Kubernetes: ClusterIP, NodePort, and LoadBalancer. Currently the type is NodePort, and I'm giving the nodePort number as 30002. If you don't set nodePort, Kubernetes will generate a random port number, and we know the range of those port numbers; if you give the port number, a fixed port will be used. That port number we need to enable in the security group, and then we can access our application. NodePort we have already seen, so remove this and let me go for type LoadBalancer. I'm going to create the deployment on the EKS cluster by using the manifest file with the service type as LoadBalancer. But observe: do I currently have any load balancers in my account in the Mumbai region? I am doing the operations in the Mumbai region, and currently we have no load balancer there. Now I have a deployment manifest and I'm going
to expose the pod. Here the pod label is the Java web app label, and I'm using that label as the selector to expose the pod through a LoadBalancer service. Copy this, go to the client machine, and create the deployment manifest javawebapp-deploy.yml: go to insert mode, paste the deployment manifest, and save. Now let me do kubectl apply -f javawebapp-deploy.yml to create the deployment. See: my application deployment is created on the cluster and one service is also created, with the service type LoadBalancer. Now refresh here: can you see one load balancer created in AWS? Did I create this load balancer? No, my EKS cluster created it. Now check the pods which are running: kubectl get pods shows one pod created for our Java web application. kubectl get svc shows one service created as a LoadBalancer service. Last time we tried ClusterIP and NodePort; today for the first time we are trying LoadBalancer, and a load balancer got created for our service. Then check the deployment also: kubectl get deployment shows the deployment is up and running. kubectl get pods -o wide shows the pod IP and the node, that is, which node this pod is running on. There are two worker nodes available and one pod is created, so the pod is running on one worker node, and that pod is exposed by using the load balancer. Okay, let's access the load balancer DNS; this is the DNS name of the load balancer. By default, when I access this load balancer DNS, it uses port 80, because in the manifest the
service is mapped to port 80: the targetPort is 8080 and we mapped it to port 80, so in the URL specifying port 80 is optional. When I hit my load balancer URL it opens Tomcat, and within this Tomcat my application is running. The context path of my application is java-web-app. Can you see that I am able to access my application which is running on the EKS cluster? I have given the URL in the chat box; you can hit it. I have deployed my Java web application as a Docker container on the Kubernetes cluster, and that cluster is an EKS cluster which is managed by the AWS cloud. My Java web application image is available in my Docker Hub account; in Docker Hub I have stored my Docker image, and that image I am pulling using this Kubernetes manifest. Go to Docker Hub: in my account there is an image called ashokit/java-web-app, which I pushed three months back. In the manifest I am referring to the same image: in the deployment manifest I am creating the container specification, and in it I am giving the image name as ashokit/java-web-app. This is a public image available in my Docker Hub account, so anybody can pull it by using this image name. In my Kubernetes manifest file I have configured the same image name, and the pod is created to run my application container; the Docker container is created inside the pod. Pods are accessible only within the cluster by default, so we need to expose the pods for outside access, and we have the Kubernetes service for that. There are three types of services available in Kubernetes: ClusterIP, NodePort, and LoadBalancer.
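The deployment and LoadBalancer service walked through here can be sketched as one manifest applied from the client machine. The image name ashokit/java-web-app comes from the video; the resource names, the app: javawebapp label, and the jsonpath lookup are reconstructed assumptions rather than the exact files shown.

```shell
# Create the deployment and the LoadBalancer service in one apply.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebapp-deploy
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebapp
        image: ashokit/java-web-app
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebapp-svc
spec:
  type: LoadBalancer
  selector:
    app: javawebapp      # must match the pod label above
  ports:
  - port: 80             # service port, so ":80" is optional in the URL
    targetPort: 8080     # Tomcat inside the container
EOF

# Verify pod, service, and deployment; EKS provisions an ELB for the service.
kubectl get pods -o wide
kubectl get svc javawebapp-svc
kubectl get deployment

# Grab the ELB DNS name and hit the app's context path.
LB=$(kubectl get svc javawebapp-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl "http://${LB}/java-web-app/"
```

Note that the ELB hostname can take a minute or two to resolve after the service is created, so the curl may need a retry.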
Today for the first time I exposed my pod by using a LoadBalancer, so how is the pod exposed by the LoadBalancer service? We already discussed labels and selectors in Kubernetes: the pod has a label, and I'm using the pod label as the selector. That means I am telling the service: find a pod whose app label is the Java web app label, then expose that pod. As the service is a LoadBalancer, one load balancer got created in my AWS cloud, and that load balancer has a DNS URL; using the DNS URL plus the context path of my application, I'm able to access it. Sir, how do we know this context path? The context path is our WAR file name. If you inspect my Docker image, in the Dockerfile we wrote the instruction to copy the WAR file to the Tomcat server, and the WAR file name becomes your context path. See here: this is the COPY command I mentioned in the Dockerfile, copying the WAR file from my project's target folder to the Tomcat server's webapps folder. When it copies the WAR file, I have given the WAR file name as java-web-app.war, so this WAR file is copied into the Tomcat webapps folder and my project context path becomes java-web-app; that is what I am giving in the URL. If you use just the load balancer URL you will get Tomcat; that is the URL with port 80.
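The Dockerfile instruction described here might look like the sketch below; only the COPY step and the WAR file name are given in the video, so the Tomcat base image tag and folder layout are assumptions.

```shell
# Write a minimal Dockerfile; the tomcat:9-jdk8 base tag is an assumption.
cat > Dockerfile <<'EOF'
FROM tomcat:9-jdk8
# The WAR file name becomes the application's context path:
# java-web-app.war  ->  http://<host>/java-web-app/
COPY target/java-web-app.war /usr/local/tomcat/webapps/
EOF
```

Tomcat auto-deploys any WAR dropped into webapps/, which is why the file name alone determines the URL path.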
On port 80, Tomcat comes up, and inside Tomcat my application is deployed. What is my application context path? The WAR file name: java-web-app. When I hit that, I'm able to access my application, and now anybody can access it by using this URL. But this is a lengthy URL; it is not a user-friendly URL. Then what can we do? We can map this URL to our own domain name in AWS by using the Route 53 service. What is Route 53? It is the AWS service which is used for domain mapping. You have to register one domain in Route 53; when you register a domain you get one hosted zone, and in that hosted zone you can map your application to the domain name. That we will discuss in another session. All right guys, are you able to understand how we deployed our application on the Kubernetes cluster by using kubectl? My cluster is an EKS cluster: we manage only the worker nodes, and the control plane is managed by the AWS cloud. If you want the logs of the pod, you can see the pod logs; if you want to get into the pod, you can do that too, by using kubectl exec with the pod name and bash. You can get into the pod and you can see the logs of the pod as well. All right, thank you.
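The client-machine workflow used throughout this session can be summarised as a handful of commands. The region, cluster name, and the app: javawebapp label are assumptions carried over from the Mumbai example, so substitute your own values.

```shell
# Point kubectl at the EKS cluster (region and name are assumed values).
aws eks update-kubeconfig --region ap-south-1 --name eks-cluster

# Cluster health: worker nodes, workload pods, system pods.
kubectl get nodes
kubectl get pods
kubectl get pods --all-namespaces   # aws-node, kube-proxy, CoreDNS; no apiserver/etcd pods on EKS

# Inspect the application pod (first pod matching an assumed label).
POD=$(kubectl get pods -l app=javawebapp -o jsonpath='{.items[0].metadata.name}')
kubectl logs "$POD"                 # view the pod's container logs
kubectl exec -it "$POD" -- bash     # open a shell inside the pod
```

The `--` in the exec command separates kubectl's own flags from the command to run inside the container.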
Info
Channel: Ashok IT
Views: 25,290
Keywords: kuberentes, kubernetes setup, aws eks, working with kubernetes, k8s services, docker kubernetes, pods, namespaces, deployment, pod auto scaling, hpa, ashokit devops, devops tools, aws ecs, kubernetes deployment, kubernetes load balancer, kubernetes services, kubernetes interview questions, kubernetes projects, kubernetes cluster ip, kubernetes nodeport, kubernetes architecture, ashokit kubernetes, kubernetes for beginners, learn kubernetes
Id: z8qDyO8F3XQ
Length: 70min 55sec (4255 seconds)
Published: Sat Oct 22 2022