10 Microservice DevOps Project Live CICD Pipeline | DevOps Shack

Captions
Let me first explain what kind of application this is. It is a ten-microservice, component-based application created by the Google team; basically, it is an e-commerce website. If you scroll down the repository page, you will see that each function, such as the payment service, email service, shipping, and currency, is its own component, and each component is written in a different programming language: Go, Python, Java, Node.js, .NET, and so on. When we talk about microservice components, we mean separate components, each with its own code, that perform one specific task.

To see these components, go to the repository: each one is kept in the `src` folder. Open it and you can see a folder for each microservice component. For example, if I open the `adservice`, you will see it is a Gradle-based application, and the description lists all the commands for building it. We, however, are going to use the Dockerfiles. Each Dockerfile contains the complete information about how the application will be built as well as where it will run; in this case it runs on a specific port. All the commands in it are written from scratch: for example, it sets up Java, then uses Gradle commands to build and package the application.
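As a rough illustration (not the exact file from the repo), a Gradle-based service's Dockerfile typically has a two-stage shape like the one below; the base images, paths, binary name, and port here are assumptions:

```dockerfile
# Build stage: set up Java and use the Gradle wrapper to package the service
FROM eclipse-temurin:17 AS builder
WORKDIR /app
COPY . .
RUN ./gradlew installDist

# Runtime stage: copy only the packaged application into a slimmer image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=builder /app/build/install/adservice /app
# Port the service listens on (illustrative)
EXPOSE 9555
CMD ["/app/bin/AdService"]
```

The multi-stage split is what lets the final image skip the Gradle toolchain entirely.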
So in this ten-tier application, each microservice component contains a Dockerfile, and that Dockerfile contains the complete list of commands to build and package the application. Once we have all these components, we are ready to do the build and deployment; I will also explain how the components connect to each other.

One more thing to focus on is the `release` folder: it contains the Kubernetes manifest file, with a Deployment and a Service for each microservice component. It is a very long file. The problem you will face if you try to use it directly is that the Google team wrote these YAML files with respect to their own GCP Kubernetes cluster. So I had to modify the file to make it deployable on any kind of Kubernetes cluster, which I have done; once modified, we can use it for deployment.

Now let's start by setting up the Kubernetes cluster inside EKS. For that you will need an AWS account. You can create a free account, but the EKS service is somewhat costly and uses a good amount of resources, so make sure you are ready before setting it up. There are many ways to set up EKS inside Amazon. One simple way is to use the root account, but that is not best practice: it is suggested that we not use the root account for this kind of thing, because it has complete access. Instead, the best practice is to create a new user.
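The per-service manifests in `release/` follow the usual Deployment-plus-Service pattern; a trimmed sketch for one service (image name and port are illustrative) looks like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adservice
  template:
    metadata:
      labels:
        app: adservice
    spec:
      containers:
        - name: server
          image: adservice:latest   # image reference is illustrative
          ports:
            - containerPort: 9555
---
apiVersion: v1
kind: Service
metadata:
  name: adservice
spec:
  type: ClusterIP
  selector:
    app: adservice
  ports:
    - port: 9555
      targetPort: 9555
```

The full release file is simply this pair repeated once per microservice, which is why it is so long.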
We give that user a specific level of access and then use it to create EKS; that way the user has limited access rather than the complete access the root account has, which is the preferred option.

So first of all, go to IAM, which is Identity and Access Management. Whenever you want to create users or assign roles to them, you do it in IAM; if you don't see it, just search for "IAM". Once it is open, go to Users, click "Create user", and provide a name. Make sure you check the option "Provide user access to the AWS Management Console" and select the user type "I want to create an IAM user". Here you can set your own password; the option that forces a password change at the next sign-in is optional, and I don't want it. With these settings you create a new AWS user that will be used specifically over the console. Click Next.

The next step is to assign policies. There are three options: add user to group, copy permissions, and attach policies directly. If you are doing this for the first time, click "Attach policies directly" and select the specific policies. As for which policies to assign, I have a screenshot: AmazonEC2FullAccess, the Amazon EKS CNI policy, and the rest. Out of these you will notice one difference: most are AWS managed policies, but one of them is a custom policy.
That custom policy, which I created, is an EKS full-access policy. I am granting EKS full access so that the IAM user we are creating has complete access to create resources and perform actions inside the EKS cluster. So select each policy you want to assign (I have added the list to the repository), then click Next and "Create user". In my case the user already exists, so let me show you with that: going back to Users, this is the user I created, and these are the policies I assigned, IAMFullAccess and so on, plus the one policy I created manually that gives complete access over EKS.

Now, how do we create such a policy? Click "Add permissions", then "Create inline policy", and select the service, in our case EKS. If you know the exact operations you are going to perform on EKS, you can select them individually; if you are not sure, just go with "All EKS actions", which makes sure you have access to everything inside EKS. That is exactly what I did; once created, the policy was available. Then I clicked "Add permissions" again, searched for EKS, and the EKS full-access policy appeared in the list, ready for use, so I assigned that same policy to this specific user. So now these policies are assigned: most of them AWS managed, and one custom inline policy that we created ourselves.
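The "all EKS actions" inline policy described above corresponds to a JSON document along these lines (a sketch of what the visual editor generates; in real use, scope `Resource` down from `*` where you can):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EksFullAccess",
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    }
  ]
}
```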
Once this is done, we are good to go, and we can use this user to create EKS clusters.

Next, go back to the AWS homepage: we need to create a VM. What are we going to use it for? First, we will set up Jenkins on it, and from that same VM we will create our EKS cluster using the user (Aditya) that we created in IAM with specific permissions. So let us create an instance. Click "Launch instance", provide a name, say "jenkins", and choose an Ubuntu server image. For the instance type, since I am going to run a lot of things that need a good amount of resources, go with either t2.large or t2.xlarge; that will be comfortable and the machine won't hang. I'll go with t2.xlarge so I can demonstrate without any issues.

To access the VM you need a key pair: either click "Create new key pair", or, if you have already created one, it will be available in the dropdown. If this is your first time, "Create new key pair" will generate one; in my case I have already created it, so I'll just select it. Coming to the networking section, I am going to select the "launch-wizard-2" security group. In case you are wondering about its configuration, let me open it on a new page and show you quickly: this is launch-wizard-2, and I have opened these ports. It is my default security group and I use it for all my VMs. Some of these ports are not strictly necessary; I only opened them for when I set up a self-hosted Kubernetes cluster of my own, but a few of the ones shown here will be needed by default.
Everything else is fine, and the security group is assigned. Now the storage configuration: I can go with either 20 GB or 30 GB; to be on the safe side, and since this is a demo, I'll go with 30 GB of volume. Click "Launch instance". The instance is created successfully and is in pending status.

Now, how do we connect to this VM? The best application for that is MobaXterm: it is a portable application and it is available for free. Let's open it; this is a fresh install, nothing configured yet. To configure access to our VM, click Session, then SSH, and provide the public IP address of the VM, which you can copy from the instance page. Next the username: we created an Ubuntu machine, and by default the user on Ubuntu is named `ubuntu`. For authentication, as I said, we need the key, so select "Use private key" and browse to the `.pem` file saved on the local machine. Select it, click OK, and the machine is accessible; I can clear the screen with Ctrl+L. One more thing: let me rename this session to "jenkins" and give it a red color so it is easier to find.

So now our VM is created, and we will perform all the tasks from it. Whenever you create a VM, the first command you should always run is `sudo apt update`. What it does: Linux machines come with a set of default package repositories, and to make sure their indexes are updated and ready for installing any kind of package, we run `apt update`.
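The first-connection steps above, sketched as commands (the IP address and key path are placeholders; MobaXterm does the SSH part through its session dialog):

```shell
# Connect to the fresh VM as the default Ubuntu user
ssh -i /path/to/key.pem ubuntu@<vm-public-ip>

# Always refresh the package index first on a new machine
sudo apt update
```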
I run it and it finishes. Next, we can start configuring things for the creation of the EKS cluster. Let me show you the steps. First, we created a user in AWS IAM (any name works) and assigned the policies I mentioned, including the one custom policy we created for complete access on EKS; I have the screenshot of those as well.

The first thing we need to install on the VM is the AWS CLI. CLI means command-line interface, and its purpose is this: we want to access AWS so that we can create resources on it, but you cannot do that directly; you first need to authenticate yourself. The AWS CLI acts as the medium between our AWS account and the VM. Once we authenticate, proving we have access to AWS through this user, we can start creating resources from the VM. So let's install the AWS CLI.

Once we download the package (a zip file), we need to extract it with the `unzip` command. By default `unzip` is not present on this machine, but these days Linux helpfully suggests the command to install a missing tool, so I use that suggestion to install `unzip` and then extract our package. Once that is done, we run the installer script inside the extracted folder. So the first three commands here (download, unzip, install) set up the AWS CLI, which will act as our medium to connect to AWS. The next command is `aws configure`.
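The download-extract-install sequence plus `aws configure` look roughly like this; this is the standard AWS CLI v2 install for Linux x86_64 as documented by AWS, so check the official docs if your architecture differs:

```shell
# Download the AWS CLI v2 package
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

# unzip is not present by default on a minimal Ubuntu image
sudo apt install -y unzip
unzip awscliv2.zip

# Run the bundled installer
sudo ./aws/install

# Authenticate this VM against the AWS account
# (prompts for the IAM user's access key ID, secret key, region, output format)
aws configure
```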
What `aws configure` does is authenticate you to your AWS account. I clear the screen and paste it, and you see it asks for an AWS access key ID. Where do we get that? Go to IAM, where we created our user, open the user, and you will see the "Security credentials" tab. Here we can create the ID and secret for this user, which will be used for authentication. Scroll down to the "Access keys" section. There is an old key here; I have deactivated it and am removing it now so I can show you how to create a new one. In your case, doing this for the first time, the section will simply be empty. Click "Create access key", select "Command Line Interface (CLI)" as the use case, confirm, click Next, and then "Create access key". Now an access key ID and a secret access key have been created: basically, a username and password for this Aditya user. You can either note them down somewhere or simply download the CSV file, which contains the same credentials we just generated. Using those credentials we will configure the VM to connect to AWS so that we can perform any kind of action on AWS from the VM.

So let's enter the details: first the access key ID, then the secret access key, which I just generated and can copy in as well. Next it asks for the default region name.
Since I am in India, the default region for me is the Mumbai location, whose code is `ap-south-1`. Press Enter, and it asks for the default output format, which we will just leave as is. We have now configured the VM with credentials to access AWS. To confirm, run `aws configure list`; you will see that the credentials are added. So this part is done: we have installed the AWS CLI and successfully connected to our AWS account.

Now we need to install two more CLIs. The first is kubectl; in simple terms, a CLI is nothing but a medium to connect to a specific service from a VM or some other location, and kubectl is the CLI that acts as our medium for working with Kubernetes. Whatever commands we run against the cluster, we run through kubectl. So let's install the package: done; we move the binary into place, and to confirm the installation we check the version, which shows the kubectl build for EKS is installed.

One final CLI to install: eksctl. This CLI helps us work with EKS specifically: it can create an EKS cluster and run commands to perform any kind of action inside it. Again we run the same kind of basic commands: download the package, extract it, and install it. Once installed, we check the version; I have installed 0.165.0. So now we have installed all the required tools and CLIs to communicate with AWS, Kubernetes, and EKS. The next step is to set up the EKS cluster.
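A sketch of the kubectl and eksctl installs; the URLs follow the official release patterns, but versions move, so copy the current commands from the Kubernetes and eksctl docs rather than from here:

```shell
# kubectl: download the latest stable binary, make it executable, put it on the PATH
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

# eksctl: download the release tarball, extract it, put it on the PATH
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" \
  | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
eksctl version
```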
There are going to be three main steps, and I'll explain everything. The first command we run is `eksctl create cluster`, where we provide some inputs: the name we want for the EKS cluster, the region where we want to host it, and the zones we select. Notice the "without nodegroup" part: with this command we are only creating the master node, the control plane, of Kubernetes. The reason I am creating the cluster without a node group is that we will create the node group separately, because we need to configure certain things on it, such as the auto-scaling options. So I copy the command and edit it a little: I'll put the name as `my ek8`, and the region and zones are already correct; they are specific to the India location. Now, team, this will take at least nine or ten minutes.

The cluster gets created through a service known as CloudFormation. CloudFormation works with templates, known as stacks, and it uses those templates to create the resources the EKS cluster needs. If I go back here, you can see "waiting for CloudFormation stack": inside the CloudFormation service in AWS, a stack is a template containing all the information about which services will be created. One more thing: the benefit of creating a Kubernetes cluster on a cloud platform like AWS or Azure is that you already have a template in which the networking and everything else is pre-configured; you just decide the names and a few basic things, provision it, and work with it. It is much easier.
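The control-plane-only create command has this shape; the region and zones match the Mumbai location mentioned above, and the cluster name is the one used in the demo, so adjust both to your account:

```shell
# Step 1 of 3: create only the EKS control plane (no worker nodes yet)
eksctl create cluster \
  --name my-ek8 \
  --region ap-south-1 \
  --zones ap-south-1a,ap-south-1b \
  --without-nodegroup
```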
Basically, what I am trying to say is that creating a cluster on a cloud platform cuts the manual effort almost in half. That is one of the reasons why, at this point, almost every company with applications hosted on on-premises servers is trying to migrate to cloud platforms: it is safer and there is far less downtime. Also, team, don't worry: after the session is over we'll discuss many more things, like the kinds of questions and issues that come up in Kubernetes.

Meanwhile, while the cluster is being created, we can set up Jenkins. I'll open the same VM in a new tab and set up Jenkins there. The prerequisite for Jenkins is that a JDK must be installed on the VM. If I run `java`, I see that Java is not present, but Ubuntu gives me the command to install it, so I can copy that to install version 17 specifically. The reason I am installing 17 (previously I used to install 11) is that Jenkins will drop support for 11 after some time; you might already see a notification about it. So I'll install JDK 17, one of the latest versions in use. This command installs the JDK on our virtual machine, and once it is installed we can run the commands to set up Jenkins.

In case you are wondering where to get those commands, you don't need to search far: just search for "install Jenkins on Ubuntu".
The first link that comes up is the official page, and from there you can get the commands to install Jenkins. I need the Linux instructions; if you are using another OS, you can select it there (macOS, Windows, and so on). In my case I am using Ubuntu, so I select that, and I will install the LTS (long-term support) version, since it is stable; that is one reason I prefer it. I copy the whole thing, and since Java is already installed, I paste it and it sets up Jenkins. This will take a few minutes. Meanwhile, checking the cluster: it is still in waiting status, because creating the cluster takes some time, so we'll wait.

Jenkins is now set up. To access it, copy the VM's public IP address and open it in the browser: by default, when you install using the commands from the official page, Jenkins runs on port 8080, so make sure that port is open. For the first-time authentication to Jenkins, we need to read the initial admin password file: copy the path shown on the unlock page, run `sudo cat` with that path, and you will see the credential. Copy it into the browser and click Continue. Then you will see two options: "Install suggested plugins" and "Select plugins to install". Always go with the suggested plugins, because Jenkins knows which tools will be needed for the initial phase. Also, team, this step depends on your internet speed; with a good connection it is fast. One more thing: if you see errors here (red crosses), do not worry.
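The Debian/Ubuntu LTS install from the official page amounts to roughly the following; the key URL and repository line change over time, so always copy the current versions from jenkins.io:

```shell
# Jenkins needs a JDK first (17 here, per the discussion above)
sudo apt install -y openjdk-17-jre-headless

# Add the Jenkins apt repository key and source, then install the LTS package
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" \
  | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
sudo apt install -y jenkins

# First-time unlock: print the generated admin password for http://<vm-ip>:8080
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```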
There will be an option to retry or reinstall, so just click it and the step will start again. Aniket asks: for Jenkins, didn't we require a WAR file and Tomcat? See, the WAR file and Tomcat are not mandatory; it depends on how you want to set up Jenkins. When it comes to setting up Jenkins in companies, there are two good ways. One is using a stable Jenkins Docker image: at least in my company, we took a Jenkins Docker image, created a YAML file, decided how much storage, memory, and CPU to give it, and deployed Jenkins inside a Kubernetes cluster. That is how we set up our Jenkins.

Coming back here, let me check the status: Jenkins is installing and our cluster is still in progress, since, as I said, CloudFormation takes a few minutes to create the resources needed for EKS. Meanwhile, since we are also going to use SonarQube, we can set it up now. The way I prefer to set up SonarQube is with a Docker image, because it is the easiest way and it comes up very quickly. First we need Docker on the machine: if I run `docker`, I see it is not installed, but we got the suggested command to install it, so I copy that and it installs Docker.

By default, when you set up Docker on a VM, users other than root do not have access to run Docker commands.
To fix that, you can either add your current user (here, `ubuntu`) to the `docker` group, so the group's permissions apply to the `ubuntu` user as well (you need to restart the session afterwards), or you can simply run a command that gives all users permission on the Docker socket. Since this is a demo, we'll go with the latter, and now if I run `docker -v` we can see that Docker is installed. Let me check the cluster: still in progress.

Now, to set up the SonarQube server, we use `docker run`. There are two main commands here: `docker pull` and `docker run`. `docker pull` only fetches the Docker image, nothing more; `docker run` first fetches the image and then creates the container with whatever options I give it. So: `docker run -d`, where `-d` means detached mode, meaning the container runs in the background and we don't see its logs in the foreground, which we don't need. Then we provide the ports on which the application inside the container should be reachable: `-p 9000:9000`. The first number is the host port, the port open on this VM where we are creating the container; the second is the container port, the port that will be open on the SonarQube container itself. Do not get confused between the two. Then we provide the Docker image name, `sonarqube`, and, as I always say, I prefer the LTS version of applications because it is both recent and stable.
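The Docker setup and the SonarQube container described above, as commands; the socket `chmod` is a demo-only shortcut, and the `docker` group route is what you would use in a real environment:

```shell
# Install Docker from the Ubuntu repositories
sudo apt install -y docker.io

# Option 1 (preferred): add the current user to the docker group, then re-login
sudo usermod -aG docker ubuntu

# Option 2 (demo shortcut): open the Docker socket to all users
sudo chmod 666 /var/run/docker.sock

docker -v

# Run SonarQube detached, mapping host port 9000 to container port 9000
docker run -d -p 9000:9000 sonarqube:lts-community
```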
So I provide the tag `lts-community`. SonarQube has two editions: Developer Edition, which is paid, and Community Edition, which is free; the Community Edition has all the features we need, so we go with it. You can see it was unable to find the image locally, so it started pulling it; once the pull completes, it will create the container. Meanwhile, the cluster is still in progress; as I said, it takes some time.

The container is created now: if I run `docker ps`, you can see it running. To access the SonarQube server, use the VM's public IP address again, and remember to add the host port we chose, 9000. Press Enter and you can see SonarQube is starting, which means it was set up successfully. Going back to Jenkins, let's quickly finish the setup wizard: provide the admin credential details, confirm the instance URL, and click "Save and continue"; do not click "Skip and continue as admin", "Save and continue" is the better option. So Jenkins is set up, and SonarQube has finished restarting; after a refresh, SonarQube is up and we can click through to home. When you set up SonarQube for the first time, the default username and password are both `admin`, which we need to change right away: log in, and the first thing it asks is to update your password, so let's quickly update it. So now SonarQube is set up, Jenkins is set up, and checking the cluster status: the master node, the control plane of the EKS cluster, has been created.

The next step is to set up the worker nodes. Before we create the node group for them, we need to run one more command. You might be wondering what it is: the key phrase in it is "associate IAM OIDC provider".
OIDC is OpenID Connect, and this command associates an OIDC provider for IAM with the EKS cluster. What happens when we run it is that eksctl enables the cluster to assume IAM roles: the different IAM roles that will be needed can then be mapped to the service accounts we create, so that if a service account requires specific access through specific IAM roles, that access can be granted. For that reason we run this command. I copy it, change the EKS cluster name to the one we just created, check that the rest looks fine, run it, and this part is done.

Now comes the important part: creating the worker nodes. We create a node group, which, as the name suggests, is a group where our nodes are kept. There are two very important things to understand here, in these three settings: nodes = 3, nodes-min = 2, and nodes-max = 3. This is where auto-scaling happens. The number you put for "nodes" is the desired number: if I put three, then three worker nodes are created initially when the node group is created. The minimum means that at all times at least two nodes must be present. And the maximum is what allows the group to grow: say I raise the maximum to four; then, if during a deployment the three worker nodes do not have enough resources to host a new Kubernetes pod, the EKS cluster will automatically create one more node. Within these three settings alone, our auto-scaling is handled.
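The OIDC association and node-group creation have roughly this shape; the flags mirror what is described above, while the node-group name, instance type, and SSH key name are placeholders:

```shell
# Step 2 of 3: let the cluster assume IAM roles via service accounts
eksctl utils associate-iam-oidc-provider \
  --region ap-south-1 \
  --cluster my-ek8 \
  --approve

# Step 3 of 3: create the worker node group with desired/min/max sizes
eksctl create nodegroup \
  --cluster my-ek8 \
  --region ap-south-1 \
  --name worker-nodes \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key <your-key-name>
```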
three worker nodes there are not enough resources to host a new Kubernetes pod — in that case, if I have set the maximum number of nodes to four, the EKS cluster will automatically create one more node. So within these three settings our autoscaling is handled. The rest of what I have written here covers the other settings, such as how we are going to access the nodes: we will access them over SSH using this key, and the key name must be the same one we just used for creating the VM — the key must already be present in your AWS account. Let me copy this as well. Here we need to change a few things: the node group name, and for the current setup I want two nodes created initially, with one extra node added if needed. Everything else you can keep as it is; just remember that whatever key name you put, you must actually have that key. Click OK — everything seems fine — and the creation of the worker nodes has started. While that is getting done, we can configure Jenkins so it is ready to work with Kubernetes. As you saw, I have not done anything specific on Jenkins yet — we did not configure anything — so first of all we need to install certain plugins. Go to the plugin section, Available plugins; the first plugin we need to install is SonarQube Scanner. Now, one distinction you need to understand, for those who don't know: the SonarQube server versus the SonarQube scanner. The server is where the project is created and where all the report information lives, while the SonarQube scanner is the tool — the plugin inside Jenkins — that performs the analysis for SonarQube and sends the report to the SonarQube server. That's the main thing. After Sonar we need Docker, and for Docker I'm going to install multiple plugins that will be
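The node group creation described above can be sketched with eksctl as follows — the cluster name, node group name and key name are assumptions; the node count, scaling bounds, instance size and SSH key requirement come from the video:

```shell
# Create a node group: 2 nodes initially, autoscaling between
# 2 (nodes-min) and 3 (nodes-max). SSH access requires an EC2
# key pair that already exists in your AWS account.
eksctl create nodegroup --cluster my-eks \
    --name worker-nodes \
    --node-type t3.medium \
    --nodes 2 --nodes-min 2 --nodes-max 3 \
    --ssh-access --ssh-public-key my-key
```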
needed, because we need Docker itself and also a specific set of Docker-formatted pipeline steps so that we can write Docker commands. The final thing is Kubernetes — team, make sure you understand this part properly, because using the method I'm showing you, you can integrate Kubernetes from anywhere: we can connect to Kubernetes from Jenkins regardless of where the cluster runs. For Kubernetes, let me see — meanwhile let us install this one, one second — yes: two plugins are installed, Kubernetes and Kubernetes CLI, so that we have access to the Kubernetes CLI and can run kubectl commands with it. Those are the plugins I'm going to install, and we can check the progress here. Meanwhile, if I go back — here too you can see that whatever we create gets created through CloudFormation, through a stack. A stack is simply a template that CloudFormation uses to create whatever resources we have defined, and it's being created now, so we can look at the next step. Once this part is completed, we need to open inbound traffic — I will explain why we need that. Once that is done, we are going to create a service account, then a role for role-based access control (RBAC), then assign that role to the service account; and for authentication from Jenkins to this service account we are going to use a token generated for it. Everything is done here, so let me quickly configure things. As I have told you before in previous videos on YouTube, whenever you install a third-party plugin in Jenkins you need to configure it, and we do that in the Tools section (previously known as Global Tool Configuration). Scroll down to Docker; first of all we are going to just select this
option and provide the name docker, choosing Install automatically with the latest version — that's one part done. Next, SonarQube Scanner installations: here I'll put the name sonar-scanner, again with the latest version, and that will work. Other than that we don't need to configure anything specific, so just save it. You might be wondering: even though we have Java-based microservice components in this project, why are we not installing a JDK? Let me show you why. Which component was Java — Java is in adservice. If I go inside adservice, the point is that whatever Java version is needed gets installed through the Docker image — through the Dockerfile, as you can see here. Since everything is done at runtime through the Dockerfile, we don't need to install a JDK separately, and that's why I did not configure it here. Click Apply, and the tools are now configured. One more thing we need to do with respect to SonarQube: we have configured the SonarQube scanner, and now we need to configure the SonarQube server so that Jenkins can easily connect to it. For that, go to Administration inside SonarQube, then Security, then Users. As of now only one user exists, administrator, and we need to generate a token that will be used for authentication. You can provide any name and select an expiration date as per your requirement; this token is going to be used for authentication. So copy the whole token, go to Jenkins, and save it in credentials: click Global so it has global-level access, then Add Credentials. Since this is a kind of secret, select the Secret text option, paste the token we just generated in SonarQube, and set the ID to sonar-token — we can put the same thing in the description. That part is done. Another thing we need to do for
connecting SonarQube is adding the SonarQube server in Jenkins, and whenever you want to connect a third-party server with Jenkins you go to the System section. (One second — no, we are not using any customized Jenkins plugins; we use only the official plugins that are already available with Jenkins.) Coming back: to configure the SonarQube server in Jenkins, scroll down and you will see the option "SonarQube servers". This option is available only because we installed the SonarQube Scanner plugin — only after installing it will this option be visible. So in the SonarQube installations section, click Add SonarQube, and provide a name, which I'll write as sonar. Then the server URL: we can just copy this part and paste it here — make sure you remove the trailing slash. The token we just added to our credentials is now available, so select it and click Apply. Jenkins is now completely configured for the creation of the pipelines for the 10-tier application. Now let us check the status: the EKS cluster has also been created. Just to verify everything is fine we can run `kubectl get nodes` — as of now you can see two worker nodes have been created, and to view them inside AWS you can click Instances, where you'll see two worker nodes of size t3.medium, which we defined in our script. This is good. Another thing we can do is check Kubernetes itself: I can go to the Kubernetes page and see that it is also created; the version is 1.27, one of the latest versions, so we are not going to update it to a newer one — it's fine. This is the place where you can find all the details about the cluster we created, and we need to make some
changes now. If I go to the Networking section you see there are two security groups — and here you see the benefit of creating a Kubernetes cluster inside a cloud platform: all these networking concerns are taken care of by the cloud platform itself. Of the two security groups created, we are going to make some changes inside the additional security group, and let me explain why. Once I open it: this security group is specifically used for communication between the control plane and the worker node groups — that is, the communication that needs to happen between the master node and the worker nodes goes through this security group. As of now you can see that all traffic is enabled in the outbound rules, but nothing is enabled in the inbound rules, so the communication may not happen properly. To make sure it works, we will open the ports: I go here, search for All traffic — this part can be tightened later, but for demo purposes I'm just opening everything — and save the rules. Now our Kubernetes cluster is completely configured and we are ready to do the deployments. Coming back to the homepage, we can start writing pipelines, and we will write the pipeline from scratch as well. Up to this point we have completed: creating the user, assigning policies, installing the AWS CLI, kubectl and eksctl, and creating the cluster. The next steps, as I told you, are to create a service account, create a role, assign that role to the service account, create a secret for the service account, and then generate a token — these are YAML files. One more thing I need to show you: if I run the command `kubectl get namespace` — what exactly is a namespace? It's a project, or a group you could say, inside which certain resources will be created. But if you don't
use any new namespace, by default every deployment goes into the default namespace. In companies, though, if there are multiple projects using Kubernetes, what we do is create one namespace per project, and we'll do the same thing here. So I can create a namespace using the command `kubectl create namespace` followed by the name I want to create it with — let's say I want to create the namespace webapps. Hit enter and it's created; if I run `kubectl get namespace` again, you can see the new namespace. Whatever deployments we do, we are going to deploy our application inside Kubernetes in this webapps namespace — this specific location — not in default. As I said, when it comes to working in companies, they have one namespace for each different project. Now the namespace is also created, and next we need to create the service account. What exactly is a service account? Say there are multiple people working in a company. Deployment through Jenkins is one thing, but say I want to make changes manually — not everyone on the team can have direct access to Kubernetes, and it's also very hectic: for example, if there are 50 team members on the DevOps side, you cannot collect everyone's details to grant each of them access to Kubernetes. Instead we create a service account, which is simply a common account that can be used by everyone — that's the reason we have service accounts. So I'll copy this YAML, but I do not create it directly: you should always validate your YAML file first. For that I use yamllint, a website where you can verify whether your YAML is correct — and by correct
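The namespace steps above amount to two kubectl commands — webapps is the namespace name used in the video:

```shell
# Create a dedicated namespace for this project instead of
# deploying everything into "default".
kubectl create namespace webapps

# Confirm the new namespace appears in the list.
kubectl get namespace
```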
I basically mean the indentation, because YAML requires it, and if the indentation is wrong then everything falls apart — so we need to make sure we check it. YAML files may seem a little complex, but they are very easy to understand. Here you see there are only three sections: first the apiVersion for this specific kind of file; then the kind of resource we are going to create — here, ServiceAccount; then metadata, which is simply the detailed information about whatever you are creating. Here the service account name is going to be jenkins — you can put anything; I'm just using jenkins for the sake of a name — and the namespace inside which the service account will be created. Now it says the YAML is correct, so I can copy the whole thing, go to my server, create a file named sa (meaning service account) with the .yaml extension, paste the contents, and save it. When you have a YAML file and want to create the resource from it, the command is `kubectl apply -f` (f for file) followed by the file name or its location — currently it's in the current directory. Hit enter, and the jenkins service account got created inside the webapps namespace only; it is not created anywhere else, and it can be used only inside that namespace. Coming to the next part: now we need to create a role. It also seems a little complex but is very easy to understand. First the apiVersion; then kind — we are creating a Role; then metadata, the information about it, such as the name of the role — I've just written app-role, but you can name it anything. Then we have two more sections: rules, i.e. what kind of rules we want to set up, and within that, resources — what kind of
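The service account manifest described above looks like this — jenkins and webapps are the names used in the video:

```yaml
# sa.yaml — a service account scoped to the webapps namespace,
# to be used by Jenkins instead of personal user accounts.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
```

Applied with `kubectl apply -f sa.yaml` from the directory containing the file.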
resources we want to grant access to — I've listed almost every resource here — and the verbs, i.e. what actions may be performed on those resources. You can see it's mostly CRUD operations: reading, then writing, then updating, then deleting. So now we create the role — this is specifically for RBAC, role-based access control. Again, as I said, whatever YAML file you have, make sure you validate it, and also pay attention to the namespace field: whatever namespace you are creating the role in, make sure the name is correct. This YAML is also correct — let me just check one more time — great, everything seems fine, so I'll copy it. To create the role we again need a YAML file: `vi role.yaml`, paste the contents, and save. For deployment using a YAML, as I said, you just run `kubectl apply -f` followed by the file name — in our case role.yaml. Again, this role is created only inside the webapps namespace; it won't be usable in other namespaces. Now the role is created and the service account is created; the next step is to assign this role to the service account so that the service account gets access to all of those resources — then it will be able to create pods, config maps and everything else. Talking about binding the role: we need to assign the role to the service account, and for that we can use this YAML. See, these YAMLs are not that complex — they have sections, and you just need to understand which section is used for what. For example, the first three you will always have: apiVersion; kind, for what kind of resource you want to create; and metadata, which is just the detail information. Then we have roleRef, which is
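The role manifest described above can be sketched like this — the video's actual resource list is longer, so this is a trimmed illustration; app-role and webapps are the names used in the video, and the verbs are the CRUD operations mentioned:

```yaml
# role.yaml — an RBAC Role granting CRUD access on common
# resources, valid only inside the webapps namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "services", "deployments", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```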
another section, and then subjects, where we again give the name and namespace, plus the kind — i.e. what we want to assign the role to. We want to assign it to a service account whose name is jenkins. Again, let us verify this YAML file — everything seems correct — so now we can bind this role to the service account we created, jenkins in our case. Again create a file, which I'll name bind.yaml, paste the contents, and save. To perform the actions we've described in the YAML, we again run `kubectl apply`, and now the role binding is done. So this part is complete; the next thing we need to do is create a token using the service account in the namespace. Most of what you see here comes from the official Kubernetes documentation — you can find everything there, but you can also get it from me, since having it all categorized makes it useful. We need to create a token for the service account, which will be used for authentication. For that too we have a small YAML file: we need to create a Secret, and here the service account name must be changed — in our case, jenkins. In most cases, the way I like to work is that I don't write YAML files from scratch; I take a template and adapt it, because it saves a lot of time — once you understand how a YAML file works and how it's written, it becomes much easier to work with. So this part is also done; let us create another file, which will be sec.yaml. Everything else seems correct. Now one thing you need to understand: here we have not specified the webapps namespace, but we want to create this secret for the
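The binding manifest described above can be sketched like this — the binding name is an assumption; the role, service account and namespace names are the ones used in the video:

```yaml
# bind.yaml — attach app-role to the jenkins service account,
# giving it the role's permissions within webapps.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: webapps
```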
jenkins service account inside the webapps namespace only — so how do we do that? For that we can pass the namespace on the command line: the basic `kubectl apply` command we have, then `-n`, then the name of the namespace. This ensures the resource you are about to create gets created inside that namespace only. Now the secret is created — but we have not retrieved the token yet; we'll do that part in a little while. For now, let us create our pipeline so we can start with the building and deployment. Let's give it a name — I'll just put 10-tier — and select Pipeline as the type. As a best practice (something from my previous company): think about how many builds you want to keep in your Jenkins build history, because keeping too many occupies space, which nobody wants. I like to keep just two builds in my history, which you can set with this option; you can also define how many days each build should be kept. If I define nothing else, that means at most two builds will be maintained in the history. Coming down, I can start from the Hello World template — a basic template gets created — and then create multiple stages that I can modify one by one afterwards. The first stage in any pipeline is always a way to fetch the source code from wherever your repository is stored, so I'll name it Git Checkout. In case you don't know the steps, you can always use the Pipeline Syntax option — it will generate whatever commands or script you need. I go there and use the git step; now we need to provide the repository URL, which I can copy from here. Now the
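The token secret described above follows the pattern from the official Kubernetes documentation that the video references — the secret name is an assumption; the annotation ties it to the jenkins service account:

```yaml
# sec.yaml — a long-lived token Secret for the jenkins
# service account; Kubernetes populates the token field.
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: jenkins-token
  annotations:
    kubernetes.io/service-account.name: jenkins
```

Since the manifest itself has no namespace field, it is applied with the `-n` flag: `kubectl apply -f sec.yaml -n webapps`.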
branch — and this is where things get a little complicated. There are many branches, but I have modified a lot of things — by "modified" I mean I changed the YAML files so they are compatible with my EKS and the other services I want to use, plus a few other changes — and I made all of those changes in the latest branch. So when you build, use the latest branch only, because in latest I created my own YAML file: it derives from the original, but I removed the components that were specific to GCP — Google Cloud's Kubernetes and so on. So here we set the branch name to latest, click Generate Pipeline Script, and that creates the command we can use to perform the Git checkout. One stage completed. The next stage I want is the SonarQube analysis. The caveat is that since there are so many different languages here, SonarQube won't fail outright, but it is not able to analyze everything properly — still, we will get results for specific components. So let's do that as well. We need to write the command, but before that: we have already installed the SonarQube Scanner tool, and in order to use it for analysis we need to reference the tool in our pipeline. The best way is to create a variable for the tool using the environment block. We will create a variable named SCANNER_HOME, set to the tool we want a variable for — that is `tool` followed by the name we configured in the Tools section, sonar-scanner. Now the tool is wired up and we can write the commands that perform the Sonar analysis. In Jenkins, when you want to write
any shell commands, you need to enclose them inside an sh step. Usually we write sh with double quotes and the command inside, but if I have multiple lines of commands to write, I can use triple quotes instead. Now, we created a variable, so we can reference it with a dollar sign — as you all probably know, $ substitutes a variable with its actual value, and that's what I'm using. The executable of the sonar-scanner (or any other tool you install) lives in a specific location: bin, then the tool name, so it is bin/sonar-scanner. This small expression resolves to the scanner tool — the executable that is going to perform the scanning. One second — now we need to write the analysis parameters. Since we have multiple kinds of code, we cannot define settings in a way that covers every component individually, so I will simply provide two pieces of information: the project name and the project key, which SonarQube uses to create the project. We pass them with -D flags — these are standard properties: -Dsonar.projectKey=10-tier, and then -Dsonar.projectName, which gets the same value, 10-tier. There is one more problem that can occur: when you have Java-based applications or Java-based components in your project, the SonarQube scanner will by default search for the compiled Java binary files. To make sure your command works and the Sonar
analysis succeeds even for the Java code, we need to provide the location of the Java binaries. For that we write -Dsonar.java.binaries= — and say I don't know the location: I can simply put a dot, and it will assume the current location. The SonarQube command is written, but we have only defined the scan itself, not where the server is, so we need to reference our server. Again, as I said, whenever you need help getting the commands, use Pipeline Syntax: scrolling down you will find the option "withSonarQubeEnv", and I can generate the code with an authentication token. You see this block has been generated; we can execute our SonarQube commands inside it, so let me add it here. Now notice — since I like to tidy things up — the generated code includes a credentials token, but in reality, since we have already configured our SonarQube server in the System section, we don't need to provide the specific token here. Instead we can simply pass the name of the server we configured, which is sonar. You see it just became very small, and now we can put our set of commands inside this block, and Sonar will work fine. Before we jump to building the applications, let us run this part and see if there is any error, then we can proceed. An error came up — the spelling of "environment" was wrong — so let me quickly fix it and trigger the build once more. Now it has started: the Git checkout is performed and the SonarQube analysis has begun, and it will analyze everything. One more
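Putting the pieces above together, the first two stages look roughly like this — a sketch, not the video's exact file; the repository URL is a placeholder, while the branch, tool name sonar-scanner, server name sonar and project key 10-tier match what was configured earlier:

```groovy
pipeline {
    agent any
    environment {
        // Resolves to the sonar-scanner installation configured under Tools
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('Git Checkout') {
            steps {
                // The "latest" branch holds the EKS-compatible YAML files
                git branch: 'latest', url: 'https://github.com/<your-user>/<repo>.git'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                // 'sonar' is the server name configured under System;
                // no token is needed here because the server entry holds it
                withSonarQubeEnv('sonar') {
                    sh '''
                        $SCANNER_HOME/bin/sonar-scanner \
                          -Dsonar.projectKey=10-tier \
                          -Dsonar.projectName=10-tier \
                          -Dsonar.java.binaries=.
                    '''
                }
            }
        }
    }
}
```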
thing, team — I'm not sure whether you have watched my SonarQube videos, but I have mentioned that SonarQube uses quality profiles to perform analysis. There is a mechanism in SonarQube that identifies which programming languages the files are written in, and because of that you can see here, under Quality Profiles, that for every language present in this SonarQube project a specific quality profile is applied — for the Go language it will use its Go profile, and so on. Let me show you from here: under Quality Profiles you see a profile for each programming language, each with a certain number of rules. Since SonarQube can identify which language a file is written in, it can decide which quality profile to use, and you can see the profiles applied for each of the different languages — SonarQube is actually very smart. So the SonarQube analysis is a success: we can go to Projects and see that it completed. But, to be very frank, in my opinion it's not 100% correct: it found code smells and just three bugs — obviously there are more than three — but it couldn't process everything properly because this is a very complex project. It could be configured to work better, but for now the target is simply to connect to SonarQube and generate the report, which is done properly. The Sonar part is done; the next part I want to do is build each microservice component separately. One second, let me check the chat — yes Vitesh, we can use Trivy. Trivy will actually work fine, but OWASP Dependency-Check would give some errors, because Dependency-Check cannot scan all sorts of applications and programming languages, so it may error out. "How can we increase the size
of nodes after..." — let me read: how can we increase the number of nodes after creation of the cluster? Shashank, we can do that — eksctl has a command for it, `eksctl scale nodegroup`, where you provide the values you want to change. "Have you integrated Jenkins with K8s?" — that part I will show you in a minute; let's do it. So now we want to build our Docker images and integrate Jenkins with Kubernetes. How do we do that? The first thing we do is create a stage — say I want to build the first component, the adservice; as far as I can see, the Dockerfile for adservice is inside the src/adservice folder. Let's configure this: I name the stage adservice and remove the placeholder. Since we'll be running Docker commands, we will write everything inside a script block, which we can do in this format. Here we need a block for writing the Docker commands, and again we can use Pipeline Syntax: go there, and at the end you will find the option "withDockerRegistry" — select it. For the Docker registry URL, the thing is: if you are using a public Docker Hub repository, you don't need to provide any URL; but if you are using a private registry like ECR, ACR or your own self-hosted Docker registry, then you must provide the URL and the credentials. In our case we need to add the credentials — my Docker Hub username and this password — and for the ID I'll write docker-cred, short for Docker credential. Likewise, select the Docker installation we have already configured with the latest version, and now we can generate the pipeline script. This is the block created, so whatever commands
we write inside this Docker block will be executed without any issues, so let me put it here and remove this part. Now let us write the Docker commands for building the images. The thing is, each microservice component inside the src folder lives in a different location, so we need a way to move to each location and build the Docker image there. Jenkins has an option for that too: under Sample Step you'll find "dir" — with it we can change the directory in which we want to run our commands. How do we get the directory location? From a previous build I ran, I can go to Workspaces, and the location you see there is the directory where we have our project. So I copy it, go back to Configure Pipeline — sorry about that, one second — and paste it here. That's the project location, but our adservice part is inside the src folder, under adservice, so I append that part, since we need to provide the Dockerfile's location — you can see the Dockerfile exists there. I put it in this format and generate the script; now we have another block that places us in the directory where our Dockerfile exists. I paste it in, and whatever commands we run here basically execute inside that location. The first command I want to run is docker build, for building the Docker image — and I'll build and tag the image in the same stage using `docker build -t` followed by the image name, which starts with the
Docker Hub username — in my case adijaiswal — then a slash and the name of the image to build, which in my case is adservice, then the tag latest, and finally the location of the Dockerfile: a dot, because we are currently inside that directory. Now, we need to do the same kind of thing for each component, so that we are inside the location where its Dockerfile is present and can build there. I have actually already written the stages for the others, so I'll just paste them in to save time — 2, 3, 4, 5, 6, 7, 8, 9, 10. I hope everyone understood this part: we need to go to the location where the Dockerfile is present. Alternatively, if you don't want to use the dir block, instead of providing a dot you can provide the path to the Dockerfile directly — copying this path here would work fine too — but I wanted to keep it a bit tidier, and for that reason I used the dir step. I've pasted everything; let me confirm it's all correct, then we can start configuring the connection to Kubernetes. For each component, I've written three steps — one second. In the first step I build and tag the Docker image in its directory; in the second I push the Docker image to the Docker Hub repository for later use; and third, to save some space — because once we build the Docker image it gets built on our Jenkins machine, and these Docker images are very big, gigabytes in size — once the image is pushed to Docker Hub I run a command to remove the Docker image that was just built. The same pattern repeats for each component. One issue I will tell you about — a very simple issue I faced
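One build stage, as described above, would look roughly like this — the workspace path is a placeholder (in the video it is copied from the Workspaces page of a previous build); adijaiswal/adservice, docker-cred and the docker tool name match what was configured earlier:

```groovy
stage('adservice') {
    steps {
        script {
            // 'docker-cred' is the Docker Hub credential ID added earlier;
            // no registry URL is needed for public Docker Hub.
            withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                // Change into the folder containing this service's Dockerfile
                dir('/var/lib/jenkins/workspace/10-tier/src/adservice') {
                    // Build and tag, push for later deployment, then remove
                    // the local copy to free disk space on the Jenkins machine
                    sh 'docker build -t adijaiswal/adservice:latest .'
                    sh 'docker push adijaiswal/adservice:latest'
                    sh 'docker rmi adijaiswal/adservice:latest'
                }
            }
        }
    }
}
```

The same three-step stage is repeated for each of the ten components, with only the directory and image name changing.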
and at first was not able to resolve, because I was not looking carefully. For each component, you will find the Dockerfile directly inside that microservice's folder — but in the case of the cart service, the Dockerfile sits inside an src subfolder, which I did not notice, and because of that I ran into a lot of trouble. Here I have configured it correctly: cartservice, then src, and then you can build. I have done the same for the other components too; let me just check that every component is fine — shipping, recommendation, product catalog, payment, load generator, frontend, email service, currency service, checkout service, cart service, ad service — okay. This part is done, so let me save it, and now let us configure the Kubernetes part. As I told you in the beginning, I had to modify the YAML file, because this project is from Google and their team configured it for GCP. So I created my own version of the deployment-service YAML file, inside which a few things needed replacing. First, obviously, the Docker images: in each location you need to put the exact Docker image that is going to be used. Beyond that, if you compare the original file the Google team created with my file, you will find a lot of differences. Coming back to configuring Jenkins: we go to Pipeline Syntax again and search for the option withKubeConfig — Configure Kubernetes CLI. What we are trying to do is use kubectl, the CLI for Kubernetes, from Jenkins itself, so that we have a way to run commands directly inside Jenkins and have them executed on the Kubernetes cluster. For that we need to provide a few details. First, the Kubernetes API server endpoint, which you
can get from the EKS console — if you go inside EKS you will find the API server endpoint, so I will just copy it and paste it here. Cluster name you don't strictly need to fill in, but since we already have one we can put the EKS cluster we created. Context name is not necessary. Namespace in our case is webapps: the thing is, since we want to build and deploy our application inside a specific namespace named webapps, I will deploy everything inside webapps only, so we can provide that detail here. Certificate of authority you don't need to provide. Now we need to provide the credentials, and by credentials I mean we need to generate a token — a token for the service account named jenkins. How do we do that? We can run a command on our Kubernetes server; and since this VM is already connected to the EKS cluster, we can run the command here. Let me explain what I'm trying to do: I want to generate the token for the specific secret that we created a few minutes ago. You remember when I was writing the YAML file for the secret — the secret name is mysecretname. So we have the secret name, and we can run kubectl describe secret mysecretname, which will show us the token; but I want to run this inside the webapps namespace. Hit enter, and it prints all the detailed information — you can see this token field, so we copy the whole token. Once we have copied it, we go back to Jenkins and add it as a credential; since a token is a secret-text kind of value, we select Secret text, paste it in, and then we need to give the credential an ID; we can write anything there.
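The token-generation step just described amounts to something like the following, run on the VM that is already connected to the EKS cluster (the secret name mysecretname and the webapps namespace follow the walkthrough; yours may differ):

```shell
# Describe the service account's secret in the webapps namespace;
# the output includes a 'token:' field.
kubectl describe secret mysecretname -n webapps

# Copy everything after 'token:' and save it in Jenkins as a
# 'Secret text' credential (Manage Jenkins > Credentials).
```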
For example, k8-token; we can put the same in the description, click Add, and then select it. Now this is configured, and we can generate the script. The block that gets generated is a block inside which we can run all our Kubernetes commands. So again we need to create a stage. See, the way I work is I leave the last two closing brackets alone, because they close the pipeline and stages sections, and write the rest of my commands above them. So let's quickly create the stage we need: I'll copy an existing one, name it k8-deploy, remove the extra things — okay, this is fine — and paste in the withKubeConfig block we just generated. Inside it we write the commands we want to run. In our case we want to deploy the file, which we can do with sh 'kubectl apply -f' followed by the name of the YAML file. As I told you, I created my own version of the YAML file, which exists at the repository root, so I will reference that. And do not forget: this is the latest branch — a new branch which I created and configured as per my own requirements — so if you are going to use this project, use that latest branch; it has everything properly configured. The next commands we can run check the status: kubectl get pods, and kubectl get svc. Maybe I should start the build now, since it will take some time; meanwhile I can explain a few more things. Let's hope for the best... okay, it has started, and it might take some time. Now I want to show you something — let me open this deployment-service file. There are a few things you need to know; there are many components in it.
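Putting the generated block and the commands above together, the deploy stage might look roughly like this — the credential ID, API server URL, and file name are placeholders for illustration, not values copied from the video:

```groovy
stage('K8-Deploy') {
    steps {
        // Block generated via Pipeline Syntax > 'Configure Kubernetes CLI (kubectl)'.
        withKubeConfig(credentialsId: 'k8-token',
                       serverUrl: 'https://<EKS-API-SERVER-ENDPOINT>',
                       namespace: 'webapps') {
            sh 'kubectl apply -f deployment-service.yml'  // deploy all ten microservices
            sh 'kubectl get pods -n webapps'              // check pod status
            sh 'kubectl get svc -n webapps'               // check services and external URL
        }
    }
}
```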
But the entry point for this application — the component through which we access it or go inside it — is the frontend; that's the first thing. Secondly, in the YAML file that I have written, if I go to the frontend section, there are two kinds of frontend Services. One is frontend, with type NodePort; NodePort is not always the best practice, but for demo purposes you can work with it. The more important one, which we are actually going to use, is frontend-external, which is of type LoadBalancer. Now let me explain the difference between the load balancer and the node port. A NodePort is a port that gets opened on the worker node itself, and you can understand why that's not best practice — exposing a port directly on a node is not always a good idea. The second option we can use is a LoadBalancer, which is the more secure option: it keeps access to things locked down, and it also manages the load. When traffic comes in to access the application, the load balancer can figure out where to redirect it — whichever pods have more free resources — because there can be multiple pods for the same component, so the load balancer decides where to push the incoming traffic. The second thing you might be wondering is how the components are connected to each other. They are connected through the env section you will see in each deployment file; for example, take one of the microservice components, the load generator.
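As a sketch (ports and selectors are illustrative, not copied from the repository), the two frontend Services described above contrast like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend            # NodePort: opens a port on every worker node (demo use only)
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-external   # LoadBalancer: the cloud provisions an external URL for access
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 8080
```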
The load generator connects to the frontend: in its env section we have written the frontend's name and its port. You will find the same pattern in the other components too — for example, the checkout service is connected to several other components; you can see inside its environment section we have mentioned each service name, like product catalog, along with its port. Each of those names is a different microservice component, and because the port is mentioned alongside the name, that is how the microservice components get connected. You can go through the whole file and you will find which component is connected to which others. Now, you can see the build has started, and it's going to take around 13 to 14 minutes; once everything is done, the pods and the services will be created. Inside the services — I will show you in a minute — we will be able to see two specific entries for the frontend: one NodePort and one LoadBalancer. The LoadBalancer will generate an external URL, and using that URL we can access the application. But there is a caveat: many times you might be using a self-hosted Kubernetes cluster — one you set up on your own VMs — and in that case it's possible that the external URL will not be generated by the LoadBalancer, because that part is specifically supported on cloud platforms. When I was testing with my own self-hosted Kubernetes cluster, the external URL was not generated.
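The env-based wiring described above looks roughly like this in a deployment manifest — the variable names mirror the checkout-service example, but exact names, ports, and the image reference are illustrative and may differ in the actual files:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkoutservice
spec:
  selector:
    matchLabels:
      app: checkoutservice
  template:
    metadata:
      labels:
        app: checkoutservice
    spec:
      containers:
        - name: server
          image: adijaiswal/checkoutservice:latest   # illustrative image reference
          env:
            # '<service-name>:<port>' — Kubernetes DNS resolves the Service name,
            # which is how one microservice finds another.
            - name: PRODUCT_CATALOG_SERVICE_ADDR
              value: "productcatalogservice:3550"
            - name: CART_SERVICE_ADDR
              value: "cartservice:7070"
```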
Okay — there are lots of questions, guys. Yes, Rohit, I have changed the YAML file, but the YAML file I changed is not specific to EKS; it's going to work on every cloud platform, on every Kubernetes cluster you set up. Ingress — yes, I have implemented it, but not for this demo; I did implement Ingress for my own personal learning, and I think I uploaded the steps I used on LinkedIn as well. Someone asked whether we have to alter the deployment YAML when practicing. As of now, if you are going to use the latest branch I created, the only thing you need to modify in the deployment YAML file is the Docker images — currently I have added my own Docker images, and you can change them to your own. The rest, as I just showed you, is creating the pipelines, so you should now know how the pipelines will work. Deepak Mishra asked: so just by setting an environment variable with the key in capital letters and the value as another service's name and port, they are able to communicate? Deepak, see — this is what I learned from the YAML files, because, as I said, this project is not made by me; whatever information I could see through the deployment files is how I know the components connect in this format. If the developers have added other things in the back end, in the actual code, I have not seen that; but through the YAML files, this is what appears to be happening. Where can we pin a particular microservice to a particular node? I will show you in a minute — let the deployment finish. Vishal, you asked: do we not have to create the JAR or WAR files that we normally do
in Java projects, for all the microservices used here? Vishal, in this case everything is done at runtime through the Dockerfile: the commands to build and package the application and generate the JAR file are all executed during the Docker image build, so we don't do it separately. As for how developer teams work: first they build and test each component on their local machines, and only once everything is fine do they hand it over to us. What about Ingress? There will be more master classes in upcoming sessions. I have implemented it, but I did not add it here, because we are using a LoadBalancer and that is more than enough, since we get an external URL; Ingress we'll also cover, but in a later master class. VKM, you're asking whether all the services communicate just by service name and port — bro, as I said, this project is not made by me; the information I have is what I got from the YAML files, and there could be something in the back end as well, which the developers have written and I have not seen. Can we do this with Helm charts and do the deployment in Jenkins? Yes, you can. No, Deepak, we can consider it for other environments also. "I hope the load balancer will not work without Ingress" — let's see; don't worry about that. Anil Kumar, can you share the GitHub repo? Yes, later. How do we ensure that every build of images only builds the services where changes were introduced? See, in this case there are two scenarios: either you have multiple separate Jenkins jobs, one per service, or you write if-else logic in the Jenkins pipeline to create the condition that only the changed part gets built.
Can we see the console log of the pipeline? Sure, bro, why not — one second. See, these Dockerfiles contain all the information, from scratch, for each specific component: from the build stage to packaging the application and generating artifacts if there are any; then the pipeline builds the Docker image and pushes it to the Docker Hub registry. That is how it works. How much is left? One, two, three, four, five, six, seven, eight — two more components are pending, I guess. Why didn't we use any build tools? The reason is that in the Dockerfile, whatever tool is needed to package the application is already specified, so everything happens at runtime: it downloads whichever tool is needed, packages and builds the application, and then creates the Docker image. That is why we did not use any separate build tools. Yes, bro. To see what kind of changes I have done, I'd suggest you download the files and compare them with the master branch or similar — then you will know. Does the secret file mean the kubeconfig file? No, the kubeconfig file is a different thing: the kubeconfig contains all the information about the cluster, and in many places, if you want to do a deployment somewhere, you can use the kubeconfig file's content to authenticate. What kind of example project is this, and for how many years of experience? To everyone I'll say: this kind of project is specifically for freshers to add to their resume. For experienced people, I would suggest, if you are really interested in getting a job, to gain real-time project experience of how things get done. In my opinion — I'm not sure how many of you know — I have five years of total experience, and in my current company I have handled 25 projects
so if you want real experience of how things are done, you can join the batch and I can definitely guide you. But this specific project is a freshers' project: any freshers still in college can add this project to their resume and describe everything in it. On the testing part — see, Akash, the thing is, for this kind of project, the way it happens in companies is that when developers write code they also test it on their local machines first; they test and build the application locally, and only then does it come to Jenkins. What is SonarQube scanning when we build? Deepak, SonarQube scans the files — it scans the source code. Let me show you what happened: if I go to Issues, you can see lots of issues present in the source code — not in the Dockerfile or anything (the Dockerfile is scanned as well), but these issues and code smells are found in the actual source code files, and those are what SonarQube scans. How much is done? The shipping service is in progress. Coming back — will we be using AWS services or Azure? Actually, we are going to use both: the infra part — EKS, EC2, creating a secure infrastructure — we are going to do with AWS, while the pure cloud DevOps part — CI/CD and building complex applications on the cloud — we are going to do on Azure; the rest we will do with on-premises tools like Jenkins, GitHub Actions, GitLab CI/CD, SonarQube, Trivy, OWASP ZAP analysis, OWASP Dependency-Check, Anchore — all of those. Can we add quality gates on our SonarQube server? Yes, we can. Raj, about the service account and RBAC: the main purpose of creating a service account is to have a common account that can be used by multiple people; that is
the main reason. On the question about when the course is starting: that will be in the upcoming batch three; I'll be giving batch-three students access to batch two as well. Batch-two access will be given by the 15th of December, and live sessions will start from the 5th of January. Coming back to the topic: a service account is a common account, so instead of giving access to all the users individually, we give access to one service account, and then that account can be used for builds, deployments, and other purposes. After that comes RBAC — role-based access control. This is a mechanism by which you create a list of access levels, or roles, saying only these kinds of actions should be performed; that group of rules is then assigned to the service account, and the service account will have exactly the access we defined in our RBAC file. Are we using a database? Yes, Santos, we are using a Redis database, connected through the YAML file — if you go through it you will find a lot of configuration for Redis as well. For batch three: the contents of batch three will be accessible for two years, which I believe is more than enough time to understand and do everything. Yes — this project has everything: the Redis database is used for storing data for a few of the components, which you can check out, and it has front end, back end, everything. The way I'm going to take the sessions is that we are going to deploy applications on EKS, on AKS, and on a self-hosted Kubernetes cluster — all three, so that you have all of them covered. Someone asked whether we could add a project for experienced people — sure, that's possible.
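A minimal sketch of the ServiceAccount-plus-RBAC setup just described — the names and the exact rule list are illustrative, assuming the jenkins account in the webapps namespace from earlier:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                      # the list of allowed actions (access levels)
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding               # assigns the role to the service account
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: webapps
```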
You can do that, but corporate-level projects are the kind of thing I teach in my batch; if you're interested, you can enroll. No — this project is good, the functionality and everything about it is good, but you should not add it to your resume as an experienced person. For freshers it can definitely be added, but it is not a project for experienced people. See, I have ten corporate-level projects — ones I have actually worked on — for which I can give you an idea of the proper diagrammatic structure, functionality, architecture, CI/CD pipelines, and what my role as a DevOps engineer was, with recorded videos. Anil, if you want, you can connect with me later and I can guide you. Can you scan a Docker image using Trivy? Yes, we can scan it, but as of now I have not done that part. Okay — it's deployed, so let's see the status. As of now not everything is in Ready status, because it takes some time for things to be created. For more details we can go back here and run the command kubectl get pods. Now see the error: "No resources found in default namespace." As I told you, we have done the deployment in the webapps namespace, so we need to go inside webapps; I can add -n webapps to the command. Now all these pods are Ready, and they are up and running. Coming to services: to view the services we have, run kubectl get svc, again with -n webapps. One second, let me clear this and run it again — not everything was on one line; yes, now everything is on one line. You see, for the frontend — which, as I told you, is the entry point for this application — we have two things: the NodePort and the LoadBalancer. Using the NodePort we also have the port on which the application is running, so we can access it that way, but having an
external URL is a much better option for accessing the application; since we are using LoadBalancer as the service type, it generates an external URL. Let me try to access it — it may not work immediately, because it takes some time to come up properly, so we will wait... yes, great: you can see that using the load balancer's external URL we are able to access the application. The application has been deployed and it's running fine, and the best part is that everything is properly customized and properly written. So this is one of the applications I suggest for practicing — it's the best application you can work with. Freshers should definitely add this project to their resume. For experienced people, if you are interested, I can teach you: I cover everything, and I can assure you that if you put in the effort too, you can definitely clear an interview. My support is not limited to the duration of the course — as I've told all my previous students, any time you are going for an interview you can ping me a day before, I will connect with you, and I can guide you on how to explain the project, what kind of project you should add (I'll give you projects too), the functionality and architecture of those projects, the CI/CD pipelines used to deploy them, and how to explain what your role as a DevOps engineer was on that specific project. That is what I try to do. About the course fee: after the discount it's 6,000, and trust me, given the amount of syllabus and the number of things we cover, it is actually very cheap for the amount. Can we work on real-time projects with you after joining your batch? Yamna, see — real-time projects live in companies, so I cannot give them to you directly, but what I do is take reference from them and create a proper diagrammatic
structure; I explain the functionality, I explain the deployment architecture, I explain the pipelines that were used to deploy those applications, and I explain what my role as a DevOps engineer was. Trust me, if you have that much information, it is more than enough for you to explain things in interviews. And if you ask other people who have joined my course, they will tell you it is purely oriented toward two things: first, how things happen in companies, and second, real-time scenarios. So many questions here — one second. Deepak, you want to transition from your current role to the infrastructure side, to start growing in cloud? Yes, Deepak, you can do it; it's not a very hard thing. In my opinion, if you are new: the course I have designed runs for two months; if you enroll and complete it, after that there will be 15 to 20 days where you configure things yourself, and then you can do the deployments. That is how my course is designed. How long is batch three? Batch three is a two-month course, but trust me, you will come out with real-time knowledge of how things happen in companies, what kinds of projects exist, and the architecture and deployment of everything — I'll explain it all. Thank you, Rohit. How can you connect with me privately? Chandan Shukla, you can ping me on Telegram at @devopsshack — it's public — or you can all join my Telegram group, which is also public. And for those who don't already know, I have made my Topmate account completely free, so any of you can schedule calls with me. I'm not sure if someone from my batch or from my Telegram group is here, but if you are, can you paste the Telegram link? People can join there and you can
ping me directly — one to one, and there is no cost involved. Is batch three on weekdays or weekends? Kumari, as I told you, my syllabus is very large, so to cover the whole syllabus the way I take sessions is this: the important, big topics I take as live sessions on weekends, and for the small topics I share recorded videos on weekdays. That way you'll be able to cover the complete syllabus properly, with hands-on too. I understand many of you are thinking about doubt sessions, or about the kinds of issues you might face. In my sessions I don't stop anyone — anyone can ask in between, "I'm having this doubt, kindly clear it," and that is fine. If you are facing issues during implementation, you can ping me directly in my available time; I will connect with you over Google Meet and help you resolve the issue — that is not a concern. Learner: WhatsApp I actually use very little; you can ping me on Telegram at @devopsshack. Once again, let me paste the URL of my Telegram group so anyone interested can join — it's completely free, and I believe my Telegram group is the only one that is completely open, where anyone can post anything. "I'm four years experienced but need to learn more; I have only basic knowledge" — it's not very hard; for those who need guidance and proper dedicated support for the coming two months, you can enroll in batch three, and trust me, it is going to be really useful — every single thing that you should know, I cover. Adita, you can also enroll in the course; let me see if I can share the link — one second — I have pasted my batch-three link as well, so whoever is interested can check it out and enroll if needed. In real time, do we use PV and PVC in a managed Kubernetes
cluster for this microservice project? Akash, yes, we do use them, but as of now — as I said, this project is not mine — the main things I could configure and show you were how we connect from Jenkins to Kubernetes on EKS, and how we set up EKS and perform autoscaling; that was the main purpose here. Timing of the batch: we take live sessions on Saturday and Sunday evenings, 8:00 to 10:00 p.m. — two hours — and on weekdays you get recorded videos, which you can watch in your available time; if you face any issue, you can ping me directly anytime, no problem. Sure, Deepak, you can ping me. Can I buy the batch-two videos? Ramkumar, I would suggest enrolling in batch three, because with batch three I'm giving you complete access to the batch-two videos plus the batch-three live sessions, and in batch three I'll be including ten corporate projects — projects you can use in your resume and explain, since I'll be explaining the complete details. Will it be on YouTube? I'm not sure as of now, but I'll try. If you're not in the IST zone — yes, Brian, most of my students are outside India, which is why I keep the sessions in the evening IST hours; if you cannot watch live, recorded videos will be available for two years, so you can watch later. The syllabus — okay, let me explain the syllabus too. We'll start with DevOps itself: what exactly DevOps is, why we need it, and how it is used in companies — the actual workflow. Then we'll start with Linux: complete Linux plus complete shell scripting; you'll get proper knowledge, and with practice you will get better at Linux and shell scripting. After that we are going to
get started with build tools, where I'll cover build tools for Java, for Node.js, for Python, and so on. In future sessions in batch three we are going to work with all kinds of projects — Java, .NET, Python, microservices, Node.js, Docker Compose-based applications — all sorts of full-stack applications. After that we'll get started with CI/CD tools, where I cover Jenkins in detail, plus GitHub Actions and GitLab CI/CD. Then we move to the security aspect: we'll cover SonarQube, Trivy, OWASP Dependency-Check, and OWASP ZAP analysis, which is used for penetration testing — a very important tool. After that we get started with Docker. See, for each tool, the way I'm going to teach is: first, what exactly the tool is and what problem it solves — why we need it in DevOps — then hands-on, real-time implementation of the tool, and then the kind of troubleshooting you need for it; those are the only things you need for any single tool. In Docker we are going to cover Dockerfiles, Docker images, Docker containers, Docker networking, Docker volumes, Docker Compose, and how to integrate Docker with the CI/CD tools. Then we get started with Kubernetes: I'll explain the Kubernetes architecture — master node, worker node, and the processes that run on each — and then we'll set up our own Kubernetes cluster across multiple VMs. For Kubernetes I will show you deployment to EKS, to a proper self-hosted cluster, and to AKS as well, so you'll have proper knowledge of all kinds of Kubernetes platforms. After Kubernetes we'll get started with Azure DevOps: whatever we integrated previously, we are going to create the same sort of pipelines
inside Azure DevOps. In Azure DevOps, the services I cover are Azure Functions, AKS, ACR, and Azure DevOps itself — those are the main ones — plus App Service, because we deploy many websites on App Service. That is the Azure DevOps part. Then we'll get started with IaC, with Ansible and Terraform. The examples and scripts I'll be sharing with you have nothing hardcoded in them: if I share one with you, you can simply change your credentials and the script will start running. I'll be giving you the kind of work I used to do in my previous company, with hands-on implementation, and the complete scripts will be shared with you too, so you'll understand Terraform and Ansible properly and have enough knowledge for what you need to say in interviews. Once the IaC part is done and fully understood, we are going to integrate both Terraform and Ansible with Jenkins. After that part is completed, I'll dedicate time to guiding you on how to create your resume: what kinds of projects you should add, whether you are a fresher or experienced, and what you should write for your day-to-day tasks. I can connect separately with every single person — if you're interested, just ping me in my available time and I'll connect with you over Google Meet to guide you. Once the resume guidance is done properly, we'll have some sessions about interviews; in fact, for all the tools I teach, I will explain what kinds of questions they ask, what kinds of errors may come up, and how to do all the troubleshooting. And once that part is done, we have the most important section in our course: corporate projects. In batch three I'll be giving you a total of ten corporate projects with
their diagrammatic structure, their architecture, their functionality, and how they were built and deployed — some projects deployed on Azure, some on premises, some on OpenShift — and I will explain every single project in detail. I will also guide you on which projects you can add to your resume and how to explain them. That is the complete syllabus we are going to cover, and I believe you can see that if anyone covers this much — if you put in the effort to learn all of these things — you can get a job in DevOps very easily.
Info
Channel: DevOps Shack
Views: 21,634
Keywords: cicd, cicd pipeline, jenkins pipeline, 10-tier microservice, microservice pipeline, jenkins, cicd jenkins pipeline, 10-tier, devops project, free cicd pipeline, devops pipeline, kubernetes, aws, eks, eks setup, deploy to eks, k8, devops shack, abhishek veeramalla, free tutorial, full pipeline, microservice project, microservices, microservice devops project
Id: utgEmyh5ZFo
Length: 120min 5sec (7205 seconds)
Published: Thu Dec 14 2023