DevSecOps Pipeline Project: Deploy Netflix Clone on Kubernetes

Captions
Recently I posted a YouTube Short about a DevSecOps project, and most of you wanted me to build the full project and upload it on YouTube — so here we are, watching Netflix. This Netflix is actually hosted on my AWS account, and in this project I'm going to show you how you can create your own Netflix application using DevSecOps practices. That includes CI/CD using Jenkins, monitoring using Prometheus and Grafana, security using SonarQube and Trivy, and more. Along with this, we're also going to use Argo CD and Helm to deploy the application on Kubernetes.

Before we deploy the Netflix application to the cloud, let me explain the architecture so you get a clear idea. We start by creating an EC2 instance and deploying the application locally in a Docker container. Once the application is running, we integrate security using SonarQube and Trivy. Once that works manually, we automate the process using a CI/CD tool, Jenkins. Jenkins will automate the creation of a Docker image, which is scanned for vulnerabilities and uploaded to Docker Hub. Once that process is automated, we integrate monitoring using Prometheus and Grafana, which will monitor the EC2 instance as well as Jenkins itself — things like CPU usage, RAM usage, and which jobs succeeded or failed. Whenever a job fails or succeeds, you'll also get an email notification via SMTP. Finally, we deploy the application on Kubernetes using Argo CD, a GitOps tool, and we'll have monitoring on the Kubernetes cluster too, installed through Helm charts. At the end, we'll have an application similar to this one, running in the cloud. So that's our architecture.

Now, a few important notes before we start. First, this project is quite lengthy, and most of you will come across issues and errors while doing it. Don't worry — search for things online and learn the skill of troubleshooting. If you look at ChatGPT or the documentation you will usually find answers; if you don't, let me know in the comment section, or connect with me on LinkedIn and ask your questions there. Second, this project is not in the free tier: you will be charged for the t2.large instances and for the Kubernetes cluster. Even so, I recommend you do this project once, because it will teach you a lot and it makes a very good resume project. Third, this project was originally done by Mr Cloudbook (AJ) on YouTube, and I had a chat with him on LinkedIn — if you're stuck on any issue and want a reference, you can also watch his video or go through his blog, which is very good. That's all from me; I hope this video helps you learn a lot of new things. All the best — let's do this project!

All right, let's start with deploying our Netflix application on AWS using DevSecOps practices. We'll integrate security right from the start using tools like SonarQube, Trivy, OWASP Dependency-Check, and more. For this you'll need an AWS account, and I already have the steps written up to help you follow along. The first thing whenever you want to deploy an application is to create a server. We're going to use an EC2 instance for Jenkins and the supporting tools, and once the application is built into a secure Docker image, we'll deploy it on Kubernetes using Argo CD — so we're also integrating GitOps. Let's start by creating a t2.large instance for Jenkins, SonarQube, and Trivy. Why t2.large? Because Jenkins is going to do a lot of work and we'll install a lot of plugins, so we need a bigger server. So I'm going to create a server — let's name
this Netflix-Jenkins, or anything you like. For the operating system (AMI) I'm using Ubuntu 22.04. Scroll down to select the instance type, which is going to be t2.large. I know t2.large is not in the free tier, but if you want to learn something you have to spend a little money, and this project will teach you a lot — everything about Jenkins, how SonarQube works, what Trivy is, and more. It's a very good project if you want to learn DevOps or DevSecOps. For the key pair, I'm using one I already have; if you don't, create a new one — a .pem key works for Linux and Mac, and for Windows you can use .ppk with PuTTY or MobaXterm. I have a Linux machine, so I'll use my existing key. In the network settings we'll use the default VPC and subnet, but for the firewall (security group) I'm creating a new security group that only opens the ports we need — leaving all ports open is not a secure setup. For now we'll allow only SSH, HTTPS, and HTTP; HTTPS is needed because when Jenkins talks to Docker Hub, it talks over HTTPS. Later in the project we'll open more ports as required: 8080 for Jenkins, 9000 for SonarQube, 9090 for Prometheus monitoring, and so on. For storage, add around 25 GB — as I said, we'll install a lot of plugins, and I hit a storage error when I tried this with less — then click Launch instance.

For this project we're using the GitHub repo I have here; I'll paste the link in the description. Along with the code, the repo also contains the steps and the commands for you to copy and paste, to make this easier. Now let's see if the instance is ready so we can connect to it. Before connecting, let's attach an Elastic IP. Because this is a lengthy project, you might do it in two or three sittings, and an Elastic IP means you keep the same public IP address even after you stop and start the instance. Go to Elastic IPs under Network & Security, click Allocate Elastic IP address, then Allocate. Name it Netflix-EIP, save, click Associate Elastic IP address, and search for the instance you want to attach it to — I have many instances, but I'll select the one I created, Netflix-Jenkins, which is in the running state — then click Associate. Now the Elastic IP is attached.

Back on the instance page, click Connect. There are four ways to connect, and you can use any of them — an SSH client like MobaXterm or PuTTY on Windows, or a terminal on Linux or Mac — but the easiest is EC2 Instance Connect, which is supported by Ubuntu and Amazon Linux, so that's what we'll use. If you get a connection issue, make sure port 22 is open in your security group (you can reach the security group from the Security tab). I have port 22 open and the key pair attached, so I'm inside the server now.

The first thing to do on any new server is update the packages, so I'll run sudo apt update -y. While that runs: this video is split into three parts — Dev, Sec, and Ops. We start with Dev, where we run the application locally; then we integrate security; then the Ops part, where we set up CI/CD, monitoring, and deployment on Kubernetes. The update has finished, so let's clone the code: git clone followed by the repo URL. That fetches the code into a directory named after the DevSecOps project. cd into it and run ls to see the project files — there's a Dockerfile here. (If you'd rather run it without Docker, you'd need to install npm, which isn't here, and then run npm install and npm start — but let's do it with the Dockerfile.) To build a Docker image we first need Docker installed. You can follow the official documentation or use the commands from my repo — just copy and paste them. While it installs, let me explain: the first command updates the packages, the second installs Docker, the third adds the user to the docker group so we don't have to prefix every Docker command with sudo, and then we set permission 777 on the Docker socket. Once that's done, run docker --version — the version comes up, which means Docker is installed.
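The server-setup steps above can be sketched as follows — the repo URL is a placeholder (use the link from the description), and the Docker install here uses Ubuntu's docker.io package as a simple stand-in for whatever the repo's install script does:

```shell
# Update packages on the fresh Ubuntu server
sudo apt update -y

# Clone the project code (placeholder URL -- use the repo from the description)
git clone https://github.com/<your-user>/DevSecOps-Project.git
cd DevSecOps-Project

# Install Docker (simplified: Ubuntu's docker.io package)
sudo apt install -y docker.io

# Let the current user run docker without sudo; chmod 777 on the socket
# is what the video does, though it is overly permissive for production
sudo usermod -aG docker "$USER"
sudo chmod 777 /var/run/docker.sock

# Confirm the install
docker --version
```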
Now let's do a docker build for this Dockerfile and see if we can run the application as a container on the server, before we set up CI/CD or integrate security — the first step is always to get the application running; then you integrate security, then you automate it. So: docker build -t netflix . — the -t adds a tag (let's call it netflix), and the trailing dot tells Docker to build from the Dockerfile in the current directory. The build has about 16 steps, so it takes a while.

While the image builds, let me show you the code. I'll open VS Code, which has the same code that's on GitHub, so don't worry. In the Dockerfile, the base image is node 16, which means the app is built on Node.js, and then there are WORKDIR, COPY, and other instructions. The most important one is an ARG asking for TMDB_V3_API_KEY. If you don't know about APIs: API stands for Application Programming Interface, and it's used for communication between different services. For example, say you have a weather application, or an Uber-style application that needs a map — you can take a map API someone has already built and use it in your own application. In the application we're building, we have the list of all the movies and series on Netflix, and we want an API that can fetch those movies from somewhere and put them in our application. As you can see, the image is still building; once it's done we'll run the application and see whether we get the complete app. Because the Dockerfile asks for an argument we haven't provided, we'll most likely get a blank page with no movies, and to fix that we need an API key from TMDB.

So let's go do that while the build finishes. Search for TMDB — TMDB stands for The Movie Database, and it's where you get the API key to fetch all the movies and series into the application we're building. To get the API key you first need to log in; I already have an account. If you're new, go to TMDB, click Login, enter your username and password, and you can start using TMDB and get an API key. Let me show you.

Now the image is built. Run docker images and you can see a netflix image has been created — actually, I think I misspelled it; you can see it's "netflix" without the i. Let's try to run it: docker run -d -p 8081:80 followed by the image ID (or the name, netflix) — we'll run the application on port 8081. Paste that and press Enter, and the container starts. But it will only be reachable if port 8081 is open in my security group, which it isn't: looking at the security group, we only have 22, 443, and 80 open, which is why the page keeps loading and will eventually give a connection timeout. To fix that, go to the security group and add some inbound rules. The first port is for the application: 8081, Anywhere-IPv4, "app port". We'll also add Jenkins, which runs on port 8080: 8080, Anywhere-IPv4, "Jenkins". And SonarQube, which runs on port 9000: 9000, Anywhere-IPv4, "SonarQube". That's what we need for now; if we need more ports, we'll add them later. Now let's access the application on port 8081 again with the updated security group.
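If you prefer the CLI to the console, the same three inbound rules can be added with the AWS CLI — a sketch, assuming the security group ID below is replaced with your own:

```shell
SG_ID="sg-0123456789abcdef0"   # placeholder: your security group's ID

# Open app (8081), Jenkins (8080), and SonarQube (9000) to anywhere (IPv4)
for PORT in 8081 8080 9000; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" \
    --protocol tcp \
    --port "$PORT" \
    --cidr 0.0.0.0/0
done
```

As the video notes, opening ports to 0.0.0.0/0 is only acceptable for a learning project; restrict the CIDR in anything real.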
When I open it, you can see the Netflix branding come up, but because we haven't provided any API key, we get a completely blank page. If you get this — which was also the most common problem in the previous project I did — it's because the API isn't integrated correctly; the API is what fetches the data into your application. So let's build the image again, this time providing the API key. Once your TMDB account is created, click your avatar, go to Settings, find the API option, and copy the API key. This is my key — you have to create your own, because I'll be deleting this one and it won't work for you. Now I'll rebuild, passing the key as an argument. If you check the commands in the repo: earlier we ran the plain docker build and got a blank app because we had no API key; now we'll follow the steps of logging in to TMDB, creating an API key, and using the docker build command that passes the key's value as a build argument. Before rebuilding, let me clean up — Ctrl+L to clear the screen; docker ps shows the container running, so docker stop followed by docker rm on that container removes it, and docker ps now shows nothing running. Then, from the GitHub repo: docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix . — put your API key in place of the placeholder (Ctrl+Shift+V to paste). This time the build won't take long, because Docker images are built in layers and only the layers that haven't been done yet are rebuilt.
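The rebuild-with-key flow above, sketched as commands (the container ID and API key are placeholders):

```shell
# Stop and remove the container that served the blank page
docker stop <container-id>
docker rm <container-id>

# Rebuild, passing the TMDB key as a build argument
# (TMDB_V3_API_KEY matches the ARG declared in the project's Dockerfile)
docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .

# Run it again on host port 8081
docker run -d -p 8081:80 netflix
```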
Let's wait for the build — it's almost done. This is the Dev part of the DevSecOps project: deploying the application on our machine. Once that works, we'll integrate security to make sure the code is correct, the dependencies are correct, the image we build is correct, and everything else. Now the image is created — confirm with docker images, and you can see an image created 12 seconds ago. So I'll run it: docker run -d -p 8081:80 netflix (you can use the image name or the image ID, it doesn't matter). It prints a SHA, which means the container is running. Now refresh the page at port 8081, and you should see an application with all the movies and images — because this time the API key is integrated. You can browse all the movies and open them as well. The application is running properly on the local machine, so let's do the Sec part: adding security.

For security we're using two main tools. The first is SonarQube — most of you have probably heard of it. SonarQube analyzes code and gives you reports on what's wrong with it; if there are issues or vulnerabilities, you can go and fix them. We're deploying SonarQube as a container, and the command is in the repo: docker run -d, a container named sonar, running on port 9000, with the SonarQube image. Copy and paste it, and it will pull the SonarQube image from Docker Hub onto our machine.
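A sketch of that SonarQube container command — the lts-community tag is an assumption on my part; use whatever image tag the project repo specifies:

```shell
# Run SonarQube as a detached container named "sonar" on port 9000
docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
```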
Once the pull finishes, we should see a container running on port 9000. If port 9000 isn't open in your security group, go open it — we already have it open, so that's not a problem. You can see the sonar container is running; confirm with docker ps, which should show two containers, the first netflix and the second sonar. Let's access SonarQube on port 9000: copy the server IP, open a tab, and paste it followed by :9000 — you should see the SonarQube page. Since the container was just created, it takes a little time — usually a minute — to start before we can log in; we'll configure it later.

Along with SonarQube we're also using a tool named Trivy, a popular open-source security scanner used for finding vulnerabilities — it can scan Docker images as well as file systems — and we'll use both. For Trivy I already have the commands in the GitHub repo, so go to the Trivy section, copy the commands, and paste them in. While Trivy installs, let's check whether SonarQube is up — you can see it is, and it's asking you to log in. The default login and password for SonarQube are both admin, so I'll enter admin and admin. After logging in, it asks you to change the password: old password admin, new password anything easy — I'll use "pass", but put anything you like — then update. You're now inside the SonarQube UI; we'll make changes here later during the Jenkins part, so if you see this screen, you're good. Now let me show you Trivy: it's installed, and if I type trivy I see all the options, which means Trivy is installed.
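Trivy's Ubuntu install looks roughly like this — sketched from Trivy's apt-repository install method; check the project repo or Trivy's documentation for the current commands:

```shell
# Prerequisites for adding the Trivy apt repository
sudo apt-get install -y wget apt-transport-https gnupg lsb-release

# Add Aqua Security's signing key and apt repo
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | \
  gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | \
  sudo tee /etc/apt/sources.list.d/trivy.list

# Install Trivy
sudo apt-get update
sudo apt-get install -y trivy
```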
You can also run trivy --version to check the version. As I said, Trivy is an open-source security tool: it scans file systems and Docker images and reports what's wrong with them. Let me show you an example — let's scan the file system first. The current directory has the project files, as you can see; ask Trivy to scan it with trivy fs . — the dot means scan the current directory. It scans, builds a report, and flags vulnerabilities — for example one marked HIGH that you can go and investigate — so you get both the report and the logs. Similarly, to check an image: run docker images, and to check the vulnerabilities in my netflix image, run trivy image followed by the image name or ID — I'll use the ID, in case you were thinking I might be scanning the official image from Docker Hub. The scan runs, as you can see, and when it's done it reports the problems: eight findings — two MEDIUM, six HIGH, and zero CRITICAL. If security is your main concern, you can go through them and fix them. So those are our two tools. The Dev part is done, and the Sec part is done — we've added Trivy and SonarQube. Now let's do the Ops part, where we integrate a CI/CD pipeline using the very popular tool Jenkins. The commands for installing Jenkins are again in my GitHub repo, but you can always use the official documentation — and if you read it, you'll see that to install Jenkins you also need Java installed, or you'll get an error.
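The two scans from this section, as commands (the image ID is a placeholder):

```shell
# Scan the current directory (file system scan)
trivy fs .

# Scan the locally built image by ID (or by name, e.g. "netflix")
trivy image <image-id>
```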
In the repo I have the Java part and then the Jenkins part, so I'll simply copy that and paste it into the server. By the way, if you're using a smaller instance — say t2.medium, or t2.micro — you may find it very, very slow; you need to upgrade to t2.large. You can do that easily: stop the instance first, then go to Actions → Edit instance settings → Change instance type and pick a bigger type. Normally stopping the instance changes its IP, but that's exactly why we attached an Elastic IP. Coming back: let's see whether Jenkins is installed. Jenkins runs on port 8080, so to reach it, make sure port 8080 is open in your security group. I'm trying to make this as simple as I can, and I'm sure most of you will still have questions — that's fine; ask in the comment section or message me on LinkedIn. But I want you to do this project, because it will teach you a lot and it will be a very good project on your resume, so make sure you complete it to the end. You can see Jenkins being set up here — it also created a symlink. Let's wait a bit so we can access Jenkins and start building the CI/CD pipeline for our project. Now Jenkins is installed; to confirm, run sudo service jenkins status — if the status is running, it installed successfully. Open Jenkins on port 8080 (the server IP followed by :8080), which brings you to the sign-in page. On a fresh Jenkins install, the initial admin password is always stored at the path shown on that page, so copy the path, go back to the server (Ctrl+C to close the status output, Ctrl+L to clear the screen), and read the file — cat is the command to read a file — with sudo cat followed by the path to the initialAdminPassword file.
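The Java + Jenkins install, sketched from Jenkins' Debian/Ubuntu instructions — the key URL and repo line change over time, so prefer the commands in the project repo or the official docs:

```shell
# Jenkins needs Java first (OpenJDK 17)
sudo apt update
sudo apt install -y fontconfig openjdk-17-jre

# Add the Jenkins apt repository and its signing key
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | \
  sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

# Install and verify
sudo apt update
sudo apt install -y jenkins
sudo service jenkins status

# Initial admin password for the first login
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```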
Run that command, copy the admin password it prints, and paste it into the Administrator password field, then click Continue. Jenkins then asks whether you want to customize it by installing the suggested plugins or by selecting plugins yourself; I'll go with Install suggested plugins, and it will start installing them all. That usually takes some time, so meanwhile let me show you the next steps on my GitHub. As we know, Jenkins is a CI/CD tool, and for CI/CD the Jenkins server needs all the necessary plugins. This application runs on Node, so we'll need NodeJS; we need Java; and we need SonarQube, because we're integrating SonarQube into the pipeline. The plugin names are listed in the repo — those are the three plugins to install for now; later we'll install more when we add monitoring and Docker. Back in the setup wizard, it asks you to create the first admin user — you can if you want, but I'll click "Skip and continue as admin". You can also set your own domain name if you have one; I don't for this demo, so Save and Finish — and Jenkins is ready. Let's start using it.

We're inside the Jenkins dashboard, and as I said, we need to install some plugins. Go to Manage Jenkins → Plugins → Available plugins and search for and select: NodeJS; Eclipse Temurin installer (which is used for the JDK); and SonarQube Scanner. Install them — you don't need to restart unless Jenkins asks you to; if the install says Success, you're done.

Once those plugins are installed, you need to configure them in the Tools section: Manage Jenkins → Tools. Under JDK installations, click Add JDK, select "Install automatically" from adoptium.net, choose version jdk-17.0.8.1+1, and name it jdk17. Make sure you use the same names I do, because these names are referenced in the CI/CD pipeline and they have to match — if they differ, your pipeline may fail. Then scroll down to NodeJS, again select "Install automatically", choose version 16.2.0, and name it node16 — you can see the tool name is also node16. Click Apply, then Save.

If you remember, we also installed a plugin for SonarQube but haven't used it here. That's because we first need to connect SonarQube with Jenkins, and then we can add the system entry and the tool. So let's go to SonarQube and I'll show you how to configure it. On the SonarQube page, click Administration → Security → Users; you'll see the administrator user, and you can create a token here. Click the update-tokens option, enter a name — Netflix, or Jenkins, anything you like — click Generate, and copy the token.

Back in Jenkins, we'll create a credential for that token, because it will be used when we configure the SonarQube server and scanner. Go to the Credentials section, click Add Credentials, choose kind "Secret text", paste the token we copied from SonarQube as the secret, set the ID to sonar-token, and use the same for the description; create it, and you should see it listed. Next, go to Manage Jenkins → System and search for SonarQube (Ctrl+F, "sonar"): you'll see "SonarQube servers" and installations, so click Add SonarQube. For the name, enter sonar-server. For the URL — Jenkins and SonarQube are usually on the same server, so you could also put localhost — I'll copy the server's IP address with port 9000 and paste it here. For authentication, use the token we just added, sonar-token. That's all we need in the System section: click Apply, then Save.

Now, if you look back at the CI/CD pipeline, there's also a tool in use called sonar-scanner, so we need to create that tool too. Go to Manage Jenkins → Tools again and search for SonarQube Scanner installations; add one, name it sonar-scanner — make sure it's the same name you see in the pipeline — leave the version as is, click Apply, and Save.

Our setup is ready to deploy an application, so let's create the pipeline. Click New Item, give it a name — Netflix, or anything you like — select Pipeline, because we're creating our own pipeline script, and click OK. Then go to the Pipeline section and paste the pipeline I have in the repo — but if you're new to Jenkins, or if
you're new to Groovy pipelines, you can use the Pipeline Syntax tool to generate snippets. For now, copy the pipeline, paste it in, click Apply and Save, and run the pipeline. While it runs, let me explain the code. The pipeline uses the tools named jdk17 and node16, plus the scanner tool, sonar-scanner. Then come the stages: the first stage cleans the workspace to make sure everything is clean and fresh; the next checks out the code from GitHub; the next runs the SonarQube analysis on that code to make sure everything is fine — if there are issues, we should get an error. You can see there's a project key named Netflix in the pipeline, and we need to create it in SonarQube: go to Projects, create one manually, give it the name Netflix (because that's what our CI/CD pipeline uses), choose "Locally", click Generate, and after you pick the operating system you get the scanner invocation — the same -Dsonar.projectKey and -Dsonar.projectName values we use in our code.

You can see the pipeline has reached this stage, which is why a report is now appearing on the SonarQube page: the SonarQube analysis is done. It scanned around 3.2k lines, and we can see more information about the issues — bugs, code smells, vulnerabilities, scope, and everything else. For example, one finding says a lowercase letter should be replaced with an uppercase one to keep the code consistent. These are the security checks it ran against the code in the GitHub repo. So the pipeline runs, and if you look at the last stage after SonarQube, there's a Quality Gate — something that checks everything is fine before moving to the next stage. If you don't know what a quality gate is, a quick search gives: a quality gate is a milestone in an IT project that requires predefined criteria to be met before the project can proceed to the next phase. So we do a quality check to make sure everything is fine, and only then install the dependencies and deploy the application.

That's where we are so far. Let's check off our tasks: the t2.large EC2 instance — done. The Jenkins CI/CD pipeline with security — still in progress: we only have four or five stages, covering SonarQube, but we also used Trivy in this project, if you remember, and nothing is in a Docker container yet. We need to set up Docker in Jenkins so the build produces a Docker image, and we need to integrate Trivy there to check the image. And if you look, there's a dependency install step — we need to make sure that the dependencies we install are also
secure. If you remember, there was news about Log4j, which caused serious issues; if you haven't read about it, go ahead and do so. The Log4j flaw allowed an attacker to inject their own JNDI lookups. This is why we need to make sure that whatever dependencies we use are secure and protected, not putting our application at risk, so we'll add a dependency checker. If I search for "dependency check Jenkins", what comes up is OWASP Dependency-Check, and that's the tool we're going to use to add more security to our pipeline. While the pipeline keeps running, let's install a few more plugins, for Docker and for OWASP Dependency-Check. Go to Manage Jenkins, click Plugins, and under Available plugins search for OWASP; you can read about it if you want, but I'm going to install it. Along with this we also need Docker, since, as we said, we're going to build a Docker image and push it to Docker Hub. I already have an image there, but I'm going to delete it in front of you so we can prove the pipeline creates one for us. This is my Docker Hub; you can see an image created 18 hours ago. I'll delete it by going to Settings, Delete repository, typing the repo name, and confirming. Now I don't have any repo with that name; it should be created by the pipeline every time it runs. Coming back, I'll search for Docker and install Docker, Docker Commons, Docker Pipeline, Docker API, and Docker build step, then click Install. For Jenkins to push an image to Docker Hub, it needs to log in to Docker Hub, so we have to add credentials. I'll go to Manage Jenkins, click Credentials, then System, then Global credentials (unrestricted), then Add credentials. Here we pass the username and password for Docker Hub: the username is my Docker Hub username, the password is my password, and the ID is going to be "docker". The ID can be anything, but I'm using "docker", and the same for the description. Click Create. Now we have Docker Hub credentials, so whenever we want to push an image, Jenkins can log in with them. Next we need to configure the tools for the OWASP plugin we downloaded and for Docker. Before the Docker part, let's do the OWASP Dependency-Check: you can see the Dependency-Check installation section. For the name I'll use "DP-Check"; make sure you use the same name, because that is exactly the tool name the next part of our CI/CD pipeline references. I'll select "Install automatically" from github.com, and the default version is fine, though you can change it. Then there's the Docker installation section: I'll name it "docker" and choose "Install automatically" from docker.com, with the latest version. Click Apply and Save, and that's all we need there. Now we can switch to a pipeline that scans images with Trivy, checks dependencies with OWASP Dependency-Check, and builds the image. Let me show you the pipeline rather than just describing it: in my README you can see that after the OWASP scan we run docker build (passing our API key), then docker tag and docker push, which pushes the image to our repo on Docker Hub. After that we scan the image with Trivy and then run the container locally on the machine. Right now I'm going to delete the container I already have: docker ps, then docker stop with the container ID, then docker rm, because we want this to be created automatically through Jenkins. Now if I do docker ps there's no container named netflix. All right, let's copy the new pipeline, which has more stages, and paste it into the pipeline section. I'll go to Jenkins; you can see the old pipeline is still running, stuck at the quality gate while it checks everything, but SonarQube has been integrated, and clicking it takes you back to the page with all the details about our code and its problems. For now I'll stop it and use the new pipeline: click Configure, go to Pipeline, and it's the same kind of pipeline with a few more stages. In the build stage I need to put my API key, and you should put your own: I'll remove the placeholder and paste my API key into the pipeline.
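The Docker-related stages from the README boil down to roughly this command sequence. This is a sketch written to a local script for inspection; the Docker Hub user "yourname" and the build-arg name are placeholder assumptions, not the video's actual values:

```shell
# Sketch of what the pipeline's Docker stages run: build with the API key,
# tag, push to Docker Hub, then scan the pushed image with Trivy.
# "yourname" and API_KEY are placeholders/assumptions.
cat > docker-stages.sh <<'EOF'
docker build --build-arg API_KEY="<your-api-key>" -t netflix .
docker tag netflix yourname/netflix:latest
docker push yourname/netflix:latest
trivy image yourname/netflix:latest > trivyimage.txt
EOF
cat docker-stages.sh
```

In Jenkins these commands run inside a registry wrapper that logs in with the "docker" credential we created, which is why the push can succeed without an interactive login.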
We're using the same pipeline I have in the README section of my GitHub repo: the first stage cleans the workspace, then we get the code, run SonarQube analysis, do the quality gate check, install dependencies, run the dependency scanner, scan the file system, then run the commands to build, tag, and push the image. Once the image is created we want it scanned by Trivy, and then deployed as a container on the server. To avoid errors, I'm going to remove the fixed "netflix" container name parameter, because you can't reuse the same name every run; if you rerun the pipeline you may get an error saying the name is already in use, so we'll let Docker assign a random name instead. I'll click Apply, then Save, and let's see whether this pipeline passes. I'll hit Build Now and it will start running the pipeline. While it runs, let's add monitoring, the next part of our Ops section. We're going to monitor our server and Jenkins, and later, when we have Kubernetes, we'll monitor the cluster too, which will be interesting because we'll install Prometheus and node exporter through Helm. I'll go to my instances: to enable monitoring we need to create another t2.medium instance as a dedicated monitoring server. I tried running this on the same t2.large and it was very, very slow, which is why you need a separate server. Click Launch instance, name it "monitoring", select the same Ubuntu 22.04 AMI, and for instance type pick t2.medium. Why t2.medium and not t2.micro? Because if you look at the minimum system requirements for Prometheus, it asks for two CPUs and 4 GB of RAM, and you only get 4 GB on t2.medium, not t2.micro. Add a key pair (I'm using the same one as earlier), and for the security group you can create a separate one with ports open for Grafana, Prometheus, and anything else you need; for now I'm just adding HTTPS and SSH. For storage we want more than the stated requirement of 20 GB of free disk, so let's set 20 GB. Most of you might be worried about the cost, and yes, you may spend some money, but this project will teach you a lot, so please do it once and post it on LinkedIn or your resume; it's a good return on investment even if you spend a little. Coming back to the pipeline to check that everything is working: all the checks have passed, and it's now scanning the dependencies. You can open the logs by clicking through, and scrolling down you can see SonarQube has finished its configuration, with results on the SonarQube page. After the scan it installs dependencies, then uses the Dependency-Check tool to inspect them, and we should get a report here once it's done. Then it will do the docker build and docker push, and we should see an image appear in Docker Hub; right now there's no netflix image there, though you might see other images like gateway or converter from another project I'm going to create soon (if you like this project, I'll make more of these). For now, let's set up Prometheus on the monitoring instance we created. I'll open the instance again; we're also going to attach an Elastic IP so the public IP doesn't change. Go to the Elastic IP section, click Allocate Elastic IP address, click Allocate, name the new IP "monitoring-eip", click Save, then Actions, Associate, select the monitoring instance, and click Associate. Back on the instances page, click the monitoring instance and you'll see the Elastic IP attached, which is what we want. Now let's connect to it and install Prometheus, Grafana, node exporter, everything necessary for monitoring. I have all the commands in the repo, so you don't need to worry: in the section below you'll find "Install Prometheus and Grafana", which is phase four, the monitoring phase. Meanwhile the pipeline is still scanning, and we have to wait for it to finish and push the image to Docker Hub, so to avoid wasting time let's do the Prometheus part. The first thing on any new server is to update all packages: sudo apt update -y, which starts updating everything. Once the update is done, I'll go to my GitHub repo and copy the commands to install and configure Prometheus: the first command creates a user named prometheus with the listed parameters, and the next downloads Prometheus from the official Prometheus releases as a tarball we'll then extract. The upgrade is done, so I'll clear the screen and paste the commands, which create the user and download Prometheus. Once it's downloaded, run ls and you'll see Prometheus is there as an archive, which we need to extract, so I'll go back to the repo and copy the next commands.
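The user-creation and download commands from the repo amount to something like this. This is a sketch; the version number is an assumption, so substitute the current release from the Prometheus downloads page:

```shell
# Sketch: create a non-login system user for Prometheus and fetch a
# release tarball. PROM_VERSION is an assumption; use the current release.
PROM_VERSION="2.47.1"

cat > install-prometheus.sh <<EOF
sudo useradd --system --no-create-home --shell /bin/false prometheus
wget https://github.com/prometheus/prometheus/releases/download/v${PROM_VERSION}/prometheus-${PROM_VERSION}.linux-amd64.tar.gz
tar -xvf prometheus-${PROM_VERSION}.linux-amd64.tar.gz
EOF
cat install-prometheus.sh
```

Creating the user with no home directory and a /bin/false shell is the usual pattern for service accounts: the daemon can own its files without anyone being able to log in as it.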
These commands extract the archive, go inside the folder, create directories named /data and /etc/prometheus, move the necessary binaries (prometheus and promtool) into /usr/local/bin, move the consoles and console_libraries into the Prometheus directory, and move prometheus.yml into that same folder; we'll look at what prometheus.yml, promtool, and the prometheus binary each are as well. Let's copy and paste this, and it runs without error because the Prometheus files are here. Once it's done, ls shows there are no files left in the current directory; they've all been put in their target paths. Let's go there: cd /etc/prometheus, and ls shows console_libraries, consoles, and prometheus.yml, which we moved, and in /usr/local/bin we have prometheus and promtool. The prometheus binary is the main application that does the monitoring; promtool is a utility that checks configs and runs queries against the monitoring data; prometheus.yml is where you define the servers you want to monitor. And for a server to actually be monitored you need something known as node exporter: an exporter is an agent that gathers system metrics and exposes them in a format that can be ingested by Prometheus. So along with Prometheus we also need node exporter, which extracts the metrics Prometheus uses to show CPU, RAM, and all the other monitoring data. To install node exporter you'll again go to my GitHub repo, but we haven't finished the Prometheus part yet: we still need to change the permissions on these files and create a service for Prometheus, because right now sudo service prometheus gives an error, "unrecognized service". Let's copy the commands to first change the permissions: if I show you the ownership of promtool, prometheus, console_libraries, and prometheus.yml, the owner is still the default ubuntu user, so we need to change the owner to the user we created in the first step, the prometheus user. Run chown -R prometheus:prometheus with the names of the directories, press Enter, and checking again, the owner is now prometheus. After that, let's create the service that will start Prometheus: I'll copy the command, which uses nano to create a service file named prometheus.service inside /etc/systemd/system. Paste it on the monitoring server and nano opens; paste in the service configuration (you can read about each field in the repo), which is the configuration we're using for Prometheus. Once I save it, start it, and enable it, Prometheus should be installed and reachable on its port. Checking the status with sudo systemctl status prometheus shows it running and active. Let's access it: Prometheus runs on port 9090, so make sure your instance has port 9090 open. I'll go to Security, click the security group, and in Edit inbound rules add a rule for 9090 from Anywhere-IPv4, labeled Prometheus, then Save rules. Now when I open the server's address on port 9090 I can see the Prometheus landing page.
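For reference, the prometheus.service unit pasted into nano earlier looks roughly like this. This is a sketch based on common Prometheus setups; the flags and paths are assumptions that should match wherever you moved the binaries, config, and data directory:

```shell
# Sketch of /etc/systemd/system/prometheus.service, written to a local
# file here so it can be inspected before copying into place.
cat > prometheus.service <<'EOF'
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target
EOF
cat prometheus.service
```

The --web.enable-lifecycle flag matters later: it is what allows reloading the configuration with a curl request instead of restarting the service.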
So we have Prometheus up. Click Status, then Targets, to check the targets; right now there are none apart from the localhost one, but we can add more. Before we add targets, though, we need node exporter installed to get the machine metrics, so go to the node exporter installation section in the repo: you'll run those commands one by one and then create a service for node exporter, which fetches the metrics Prometheus will consume. Meanwhile, let's check the status of our pipeline: it has failed. Why? In the console output, scrolling down, the dependency check ran fine, as you can see; there's a Dependency-Check section and a report with the CVEs it found, which is the extra security layer we added. But the pipeline still failed, so let's see why. Scrolling to the end, it says Docker login failed: in the Docker login section it reads "downloading Docker client latest... failed to get the registry endpoint from daemon: permission denied while trying to connect to the Docker daemon socket". If you hit this error, first check whether your security group has HTTP and HTTPS open; I'll do that quickly: in my Jenkins instance under Security, HTTP and HTTPS are both already open, so it's not a port problem. If you get the same "Docker login failed" error even though your credentials and tools are set up, go into your instance and run two commands to add the jenkins user to the docker group: first become root with sudo su, then run sudo usermod -aG docker jenkins, and then restart Jenkins. I'm going to paste these two commands into the README for reference, because I got this error earlier, and after that troubleshooting you can see the build is running. If I show you the previous pipelines, all of them failed at the Docker login error, so if you see it despite having the credentials and tools configured, run the commands I'll share (make sure you've done sudo su, since this only works as root). Once you do, your pipeline should run fine and your image should be built: the latest pipeline, the sixth one, is running right now, and in its console output you can see the image being created and "Docker login succeeded". Once this pipeline passes we should see a repo and an image on Docker Hub. So that's how you troubleshoot it. Let's continue with the monitoring section while the pipeline runs. On the monitoring server we were installing node exporter: the previous command checked that Prometheus is running, and with Prometheus up, you install node exporter by running these commands one by one. The first creates a user named node_exporter and then downloads node exporter from the official releases. I'll paste that on the monitoring server and it starts installing; ls shows a node exporter tarball has been downloaded, so let's extract it with the next command.
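Circling back to the Docker-login troubleshooting for a moment: the fix amounts to these two commands, run as root ("jenkins" is the default service account name for a package-installed Jenkins):

```shell
# Sketch: add the jenkins user to the docker group so pipeline steps can
# reach the Docker daemon socket, then restart Jenkins to pick it up.
cat > fix-docker-login.sh <<'EOF'
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
EOF
cat fix-docker-login.sh
```

The "permission denied on the Docker daemon socket" message is a group-membership problem, not a credentials problem, which is why fixing the Jenkins credentials alone doesn't resolve it.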
This moves the node_exporter binary into /usr/local/bin and then cleans up: tar xvf, then mv, then remove the leftovers. If I run ls /usr/local/bin, the node_exporter binary is there as well. Once that's done, you also need to create a service, similar to what we did for Prometheus: copy the command from the repo to create a service under /etc/systemd/system, paste it, and nano opens; copy the service configuration from that section, paste it, then Ctrl+S and Ctrl+X to save and exit. Now run the commands to start and enable node exporter, and confirm with sudo service node_exporter status: it is running. Coming back to check the pipeline: it says SUCCESS, which means it has also completed the Trivy image scan in the next stage, and it has created and pushed the image. To confirm, refresh Docker Hub, and yes, a netflix image has been created there. So we have successfully built a pipeline that gets the code, creates an image, runs all the scans to make sure everything is in order and the image is secure, and then pushes that image to Docker Hub, where anyone can use it since it's public; I could run a Kubernetes cluster from this image and deploy my application through GitOps, which we're going to do very soon. But before that, let's finish monitoring, to be sure everything runs fine and to watch CPU and RAM on our server and in Jenkins. You can also see a Netflix container running: on the server, docker ps shows the Netflix application started about a minute ago, all done through CI/CD with Jenkins. We've covered most of the project; what remains now is the monitoring part.
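For reference, the node_exporter service unit we just created follows the same pattern as the Prometheus one. This is a sketch; the user name and binary path are assumptions matching the steps we ran:

```shell
# Sketch of /etc/systemd/system/node_exporter.service, written locally
# for inspection.
cat > node_exporter.service <<'EOF'
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
EOF
cat node_exporter.service
```

After writing the real file, sudo systemctl enable --now node_exporter starts it and makes it survive reboots.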
We're on that now: with node exporter installed, we need to edit the prometheus.yml file. In the repo you can see that after the installation comes the configuration, which means adding jobs inside prometheus.yml. Go to cd /etc/prometheus; the file named prometheus.yml is what you edit whenever you want to monitor something. If I open it, there's already a job with job_name "prometheus" scraping localhost:9090, which matches what you see on the targets page. Let's add jobs for node exporter and for Jenkins. I'll run sudo nano prometheus.yml and go down to add a job for node exporter. Make sure the indentation is correct: the new job_name must line up exactly under the existing one. So: job_name is "node_exporter", then static_configs, and then the targets, which go quoted inside square brackets on the same level. I could put localhost, since node exporter runs on this machine, but I'll use the server's IP address along with the node exporter port, which is 9100 (you can simply copy and paste this from the repo if you prefer). Add the port 9100, close the double quotes and the bracket, save, and Ctrl+X. If you're not sure whether your syntax is correct, copy the command from the repo that checks the config using promtool: it reports success if everything is fine, or a failure otherwise.
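The scrape job added to prometheus.yml looks like this. The IP is a placeholder for the monitoring server's address:

```shell
# Sketch of the node_exporter scrape job appended under scrape_configs
# in prometheus.yml. <monitoring-server-ip> is a placeholder.
cat > node-exporter-job.yml <<'EOF'
  - job_name: "node_exporter"
    static_configs:
      - targets: ["<monitoring-server-ip>:9100"]
EOF
# Validate the full file afterwards with:
#   promtool check config /etc/prometheus/prometheus.yml
cat node-exporter-job.yml
```

Indentation is significant here: the job entry must sit at the same level as the existing "prometheus" job, or promtool will reject the file.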
Once the config checks out, reload Prometheus with the curl command. After the reload, refreshing the Targets page shows a new target for our node exporter. It shows as unknown at first; give it a moment and it should turn healthy, and clicking its link shows all the metrics Prometheus will use for CPU, memory, and the rest of the monitoring data. Actually, I think it's stuck because we don't have port 9100 open in our security group, so let's fix that real quick: go to the monitoring instance, Security, Edit inbound rules, add a custom rule for 9100 from Anywhere-IPv4, label it node exporter, and Save rules. Now it can fetch the metrics, you can see them coming through, and after a refresh the node exporter target shows as up. Always make sure your security groups have the needed ports open, or you'll get timeouts and endless loading. Next, let's set up Grafana to see all of this in a nice dashboard. The install commands are in my README too: this is how to install Grafana on Ubuntu 22.04 and set it up to work with Prometheus. I'll copy the commands one by one and run them on the server. Grafana runs on port 3000, so we also need to add port 3000 to the security group; I'll do that quickly: 3000, Anywhere-IPv4, Grafana. If you're confused about the different ports, watch this video to the end and take a screenshot of all the ports we open, then add them up front to avoid issues; whenever your application keeps loading, buffering, or timing out, it's almost always a security group problem. Now I'll run the commands: first add the GPG key, which should print "OK"; then add the Grafana repository with the echo line, which should echo it back correctly; then sudo apt update and sudo apt-get install grafana. Once Grafana is installed you can enable it, start it, and check the status; it should be running on port 3000, where we'll install some dashboards and see everything in nice charts. So: run the enable command to enable it, the start command to start the service, then the status command to confirm it's running, and it is. Copy the server address and open it on port 3000, and Grafana comes up. Grafana is another monitoring tool, commonly paired with Prometheus, though you can use it with InfluxDB or other monitoring backends as well; it turns your metrics into dashboards full of charts and graphs. Let it load, then log in: the default username and password are both "admin". Click Log in; you can change the password if you want, but I'll click Skip for now. Now we have the Grafana dashboard. The first thing to do is click Data sources and choose Prometheus as the data source, since that's what we're using.
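On Ubuntu 22.04 the Grafana install sequence is roughly the following. This is a sketch of the standard APT-repository route; the repo URL and key-handling commands are assumptions that may lag behind Grafana's current instructions, so prefer the exact commands in the project README:

```shell
# Sketch: add Grafana's APT repo and install/start it (Ubuntu/Debian).
cat > install-grafana.sh <<'EOF'
sudo apt-get install -y apt-transport-https software-properties-common
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get -y install grafana
sudo systemctl enable grafana-server
sudo systemctl start grafana-server
EOF
cat install-grafana.sh
```

The apt-key step is what prints the "OK" mentioned above, and the echo line is the repository entry that should be echoed back.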
If we were using InfluxDB or Elasticsearch or anything else, we'd pick that instead, but I'm choosing Prometheus, and the data source is added. Next, set the URL: even though I could use localhost, I'll copy the server address and paste it here, then click Save & test, and it says "successfully queried", which is what we want. Now click the menu, go to Dashboards, and click the plus symbol at the top; we don't have any dashboards yet, so I'll click Import dashboard. We want the dashboard for node exporter, now that its metrics are flowing. If you don't know the dashboard number or URL, just search for "node exporter grafana dashboard"; the grafana.com page gives you the ID, and you can click Copy ID, which is 1860, I believe. Paste it here (yes, 1860), click Load, select the Prometheus data source, click Import, and you'll see a nice dashboard showing CPU, memory, RAM, and more information about the node in a good-looking Grafana view. Similarly, let's also monitor Jenkins. To do that we need to install a plugin in Jenkins, as usual: go to the dashboard, Manage Jenkins, then Plugins, and under Available plugins search for "prometheus metrics"; that's the one to install. After installing, it needs to reconfigure the system, and it wants a restart, so I'll click the restart option. If you're restarting, have your password ready, because you'll need to log in again; if you haven't changed it, print it with the sudo cat command on the Jenkins secrets file, which gives me the password I'll need after the restart. Wait for it to finish, enter the username admin and the password you copied, and sign in. Inside Jenkins, go to Manage Jenkins again and click System to set up the Prometheus configuration. You might notice Jenkins is a bit slow because of everything installed on this server; anything smaller than t2.large would be extremely slow, which is why this project requires a t2.large instance. Scroll down and you should see the Prometheus section (just Ctrl+F if you don't); we're not changing anything for now, so click Apply and Save. For Jenkins monitoring through Prometheus, you also need to add Jenkins to the prometheus.yml file, if you remember. So back on the monitoring server: ls shows the prometheus.yml where we entered the node exporter's IP, and we'll do the same for Jenkins with sudo nano prometheus.yml. If you're not sure how, just copy it from the repo. You can verify the metrics endpoint first: open the Jenkins URL at the /prometheus path and you'll actually see metrics there, so that's the path we'll query; we need to add a job so it shows up in the targets, and later we'll add a Grafana dashboard for Jenkins too. In the file, I'll press Enter and add a job under the existing job_name entries (again, indentation matters, and you can check it afterwards with promtool): job_name is "jenkins", then static_configs, and the targets entry is the Jenkins server's IP address with port 8080. I'll copy the IP and paste it with :8080, but we also need to set the path, because the default is /metrics while our plugin serves /prometheus, so add metrics_path: /prometheus, as shown in the repo section. Now Ctrl+S, and run promtool check config to verify. We got an error saying there's an issue on line 39: sorry, I didn't add the closing double quotes at the end, so I'll add them and check again, and it says success, meaning the configuration is correct. Next run the reload command so Jenkins shows up; it reloaded, and refreshing the page shows Jenkins in the targets, and it's up. Let's add a dashboard for Jenkins too: I'll click the plus symbol, import a dashboard, and do the same thing, searching for a Jenkins Grafana dashboard.
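For reference, the Jenkins job added to prometheus.yml above needs the non-default metrics path. The IP is a placeholder for the Jenkins server's address:

```shell
# Sketch of the Jenkins scrape job: the Jenkins Prometheus plugin exposes
# metrics at /prometheus rather than the default /metrics.
# <jenkins-ip> is a placeholder.
cat > jenkins-job.yml <<'EOF'
  - job_name: "jenkins"
    metrics_path: "/prometheus"
    static_configs:
      - targets: ["<jenkins-ip>:8080"]
EOF
# Reload without restarting (works because of --web.enable-lifecycle):
#   curl -X POST http://localhost:9090/-/reload
cat jenkins-job.yml
```

Forgetting metrics_path here leaves the target up but scraping the wrong URL, so the job would show no Jenkins metrics.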
will get the dashboard's ID number. I'll click on it and copy the ID, which is 9964, paste it in, click Load, select Prometheus as the data source, and click Import. Once you do that you'll see a nice-looking dashboard like this, showing data about Jenkins: CPU usage, memory usage, how many executors are free or in use, and how many jobs succeeded, failed, or were aborted. So now if I go ahead and run a pipeline, I should be able to see it in the monitoring. Click Build Now, and the pipeline starts building; you'll see the changes reflected in the monitoring dashboard very soon. So we've successfully set up monitoring for our server and for Jenkins, and we also have CI/CD in place to build Docker images and push them to DockerHub. The project is almost done; you can see our pipeline is running, creating a Docker image and pushing it to Docker Hub. The only part remaining in the pipeline is the email notification, which will contain the Trivy scans and reports. If you want to continue with deploying this application on Kubernetes using Argo CD and enabling monitoring with Helm, do follow along; that part is optional, since you will be charged when you create a Kubernetes cluster, but if you want to learn, go ahead and create a cluster and do it along with me. For now, let's enable notifications. To do that you need a Gmail account; I already have one, which is here. Go to it and click on Manage your Google Account; on the page that opens, go to the Security section. There you need 2FA (two-factor authentication) enabled; you can see mine already is. After you enable it, just search for App passwords, because Gmail
does not allow third-party applications to use your real username and password. So search for App passwords here, and Google will give you a password you can use in your third-party applications. I'm going to enter a name; I already created one, but I'll delete it and create a new one to show you. Let's name the app Netflix and click Create. This gives me a password like this, which is what we'll use in Jenkins to send a notification every time a pipeline runs, along with the reports. After you've added the username and password in the Credentials section, go back to the dashboard: click Manage Jenkins again, then System, and scroll down to the E-mail Notification part. For the SMTP server we'll put smtp.gmail.com, because that's Gmail's SMTP server. Then enter your username; mine is iambatmanthegoat@gmail.com. In the Advanced section, enable authentication and use SSL: copy the same username here, and for the password paste the app password you got from Gmail. Use 465 as the SMTP port. We don't need anything in the reply-to address option. Now if I test the configuration with a sample email address, you should see that the email was sent successfully, and if I check my mailbox, there it is: a test email from Jenkins. Then, in the Extended E-mail Notification section, enter the same thing: smtp.gmail.com, port 465 as we used earlier; in its Advanced section select the credential you added and enable SSL; and for the default content type you're
going to use HTML. For the triggers, select Always, or only on failures if you prefer. That's all you need; click Apply and Save. Now we'll add a section to the pipeline to send the notification. Let's go to our Netflix pipeline, click Configure, and add a post stage so a notification is sent when everything is done. I'll paste the post-stage section here, which will send us the email containing the build details. I'll be sharing this entire script on GitHub, so you don't need to worry about it; the only thing you need to do is change the email ID here, and I'll remove the "change email here" placeholder. Once you've done that, click Apply. If the script is not approved, it won't be approved on Save either; modify the script and click the Approve script option. It says script approved, so click Apply and Save. Now I'll click Build Now: if it runs, the script has been approved; otherwise you'll get an error like "script not approved", so make sure you click that approve button first, and then the pipeline will start running. At the end of this pipeline we should have scans from SonarQube, the dependency check, and Trivy, plus an image pushed to DockerHub; after that, we (or anyone on the internet, since it's a public image) can use that image to deploy the application on Kubernetes, on ECS, or wherever. While that finishes, let's go ahead and create a Kubernetes cluster, because it usually takes a while. I'll go to EKS, click Add cluster, then Create. Let's name it Netflix (you can name it anything); the Kubernetes version stays the same, and for the cluster role I'm using the default one. If you don't have a cluster role, make sure you watch my previous
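As a hedged sketch, the post stage pasted above typically looks something like the following in a declarative pipeline; the recipient address, attachment names, and body text here are illustrative assumptions, not the exact script from the video (that one is shared in the GitHub repo):

```groovy
post {
    always {
        // emailext comes from the Email Extension plugin configured earlier.
        // Replace the address with your own; the attachment names assume the
        // Trivy stages wrote trivyfs.txt and trivyimage.txt to the workspace.
        emailext(
            subject: "'${currentBuild.result}' - ${env.JOB_NAME} #${env.BUILD_NUMBER}",
            body: "Project: ${env.JOB_NAME}<br/>Build: ${env.BUILD_NUMBER}<br/>URL: ${env.BUILD_URL}",
            to: "your-email@gmail.com",
            attachmentsPattern: 'trivyfs.txt,trivyimage.txt',
            mimeType: 'text/html'
        )
    }
}
```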
video, where I've shown the required policies; or you can simply search for "EKS cluster role", open the official documentation link (this one), and use it to create your own role, and similarly for the node role. Then click Next. In this section I'm using the default VPC, removing all my private subnets, and also removing this us-east-1 subnet because I don't have capacity to launch servers in it. For the security group I'm using my default one, but you can use any; just make sure it's the correct one. For the cluster endpoint I'm making it public. Click Next; no need for logging, no need to change any add-ons (just the defaults), click Next, Next, review everything, and click Create. This starts creating the cluster. In this cluster we're going to set up Argo CD and install Prometheus and Grafana using Helm charts, then check the monitoring from our monitoring server. Now, back here, you can see the cluster is still creating, and a SonarQube analysis is also running since we integrated it, so you can check that if you want. Once the cluster is created we can add our node group: one node group of t2.medium or larger, because Argo CD requires about 4 GB of RAM, as we discussed in the previous GitOps video where we also deployed a Tetris game as a demo. If you haven't checked it out, go watch that video; it shows how to deploy a Tetris game on Kubernetes using Argo CD. Now the cluster has been created and our pipeline is also running fine; let me show you. These are the checks, and it also sends you an email whenever the pipeline runs, whether it succeeds or fails. You can see this is the failure email we got, with the Trivy
filesystem scan (trivyfs.txt) and the build log attached, which you can open if you want. You also get success emails whenever your pipeline runs successfully; you can see one here. So that's the pipeline complete: it's up and everything is working. Let's continue with deploying to Kubernetes as planned. I'll go to the Compute section and click Add node group. Let's name it "nodes", and for the IAM role I'm selecting my Amazon EKS node role. If you don't know how to create it, again do a quick search for "EKS node IAM role" and the official documentation will explain it, or watch the previous video where I showed the policies this role needs. Click Next. I'm choosing t3.medium and keeping just one instance running, so I'll set the count to one; that's all we need. Click Next, remove the private subnet, click Next, and click Create; this will create our nodes. While the nodes are being created, let's quickly install Argo CD. I'll open my terminal. The first thing to do is set the kubectl context to this cluster, which I can do by running aws eks update-kubeconfig --name Netflix --region us-east-1. Now if I run kubectl get pods, there are no pods, because this is a new cluster. All right, let's install Argo CD: search for the Argo CD installation docs and open the link (I'll put it in the description, so don't worry), then simply copy the commands present there and paste them in. Let's wait for that to finish. So the plan is: install Argo CD on this cluster, deploy our application through it, and also set up monitoring on
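For reference, the context-setting and Argo CD installation steps above boil down to commands like these; the cluster name and region are the ones created in the video, and the manifest URL is Argo CD's standard upstream install manifest:

```shell
# Point kubectl at the new EKS cluster (name/region as created above)
aws eks update-kubeconfig --name Netflix --region us-east-1

# Sanity check: a fresh cluster has no pods in the default namespace
kubectl get pods

# Install Argo CD into its own namespace using the upstream install manifest
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```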
this cluster using Helm. I already have the Helm commands in my GitHub repo, so you can use those: if you look below in the Kubernetes section, you'll see "create a Kubernetes cluster" and "monitor Kubernetes with Prometheus", and we're going to run those commands to configure monitoring with Helm charts. If you don't have Helm installed, go ahead and install it (I already have it): simply search for how to install Helm on your machine, whether that's Ubuntu, Windows, or anything else, and get the commands from the official page. Once Argo CD is installed you can run kubectl get ns, and to confirm everything is up, kubectl get all -n argocd; you'll see the Argo CD pods up and running. Next we need to expose our Argo CD server, which we can do with a service, so I'll paste that command here, and it will expose the Argo CD service. The service has been created; we need to wait a bit, because it's of type LoadBalancer, so it will create a load balancer in our AWS dashboard. Once the load balancer is created we can get its DNS name, log in to Argo CD, connect our repo, and deploy automatically. While that's being created, let's run the Helm commands one by one: I'm going to add the repo for Prometheus, create a namespace for it, press Enter, then copy the command to install the node exporter with Helm. This uses the published chart to install the Prometheus node exporter, which we can then add as a job on our monitoring server, so it starts monitoring our Kubernetes cluster in the targets as well. This is almost the end of the project; I hope it has been helpful. Let's do the last part, and then you can share your learnings on LinkedIn; make
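The Helm steps mentioned above come down to roughly the following; the namespace and release names are assumptions based on the repo's README, so adjust them to match the exact commands there:

```shell
# Add the Prometheus community chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Namespace and release name are assumptions; match them to the repo's commands
kubectl create namespace prometheus-node-exporter
helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter \
  --namespace prometheus-node-exporter
```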
sure to tag me. I'll be sharing all the code, and if you have any questions or doubts, feel free to reach out in the comment section or on LinkedIn; I'll try to reply to everyone. Now it says the chart has been deployed, and you can verify that by running kubectl get ns (you should see the new namespace), then checking the node exporter pods with kubectl get pods in that namespace; you can see the pods are running, so the node exporter has been installed using Helm. Now let's expose our Argo CD server so we get its DNS. I'll clear the screen and paste the command here. The server DNS is stored in a variable; let me show you: I'll run echo with a dollar sign, and it prints the endpoint. I'll copy that endpoint and open it in a new tab. This opens the Argo CD page; click Advanced, then Continue past the certificate warning. Argo CD requires a username and password to sign in. We know the username, which is admin, but for the password you need to run the next command, which exports the initial Argo CD password into the variable ARGO_PWD. Once that's done, I'll run the same kind of command as for the server, echo $ARGO_PWD, and that's the password. Let's use it and sign in: I'll put the password here, the username is admin, click Sign In, and we're inside Argo CD. Now let's connect our repo: I'll click Repositories, then "Connect repo using HTTPS". For the repo, copy your own repository URL, or use mine if you're not using your own: I'll click copy and paste my Git URL here. The project is going to be "default", and no username or password is needed because the repo is public. I'll click Connect, and it says successfully connected. All right, coming back to the Argo CD home page, click New App; we need to create an app, which will fetch
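The expose-and-login steps can be sketched like this: the patch switches the argocd-server service to a LoadBalancer, and the password comes from the standard argocd-initial-admin-secret that the install creates:

```shell
# Expose the Argo CD API server through an AWS load balancer
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

# Grab the load balancer DNS name once AWS has provisioned it
export ARGOCD_SERVER=$(kubectl get svc argocd-server -n argocd \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo $ARGOCD_SERVER

# Decode the initial admin password (the username is "admin")
export ARGO_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d)
echo $ARGO_PWD
```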
the manifest files present in the Kubernetes folder of our repo. So I'll create an app and name it netflix. The project is the default one. For the sync policy I'll choose Automatic, so it automatically syncs any changes made to the manifests. For the repository URL I'm using the one I just connected; the revision is HEAD, and the path is Kubernetes (make sure you use the same spelling, with a capital K). For the cluster URL I'm using this one, and for the namespace, if I run kubectl get ns you can see there's a default one, so I'll enter default. Click Save; this saves it. Now let's sync: I'll click Sync, then Force and Synchronize, click OK, and the error should be gone. Within a short time we'll see our application come up. This application is exposed through a NodePort service: if I show you the service, it's of type NodePort working on port 30007, so I need to access it through a node. Come back to your nodes, click the node in the Nodes section after refreshing, and copy the node's IP address, which is here. (Optionally, you could also use a LoadBalancer service type if you want.) On this node, make sure port 30007 is open, or you won't be able to access your application, because that's the NodePort it's running on, as mentioned in the manifest. I'm going to open that port now: in the security group for the nodes, I'll click here and add a rule, Custom TCP, port 30007, Anywhere-IPv4, with a description like "app node port", and click Save. Once I've done that, I'll open a new tab, paste the node IP with :30007, and press Enter. Now we should have an application running, and you can see it here, which means we have successfully created an application that has security, that has CI/CD, that's using
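Equivalently, the app created in the UI above corresponds to an Argo CD Application manifest roughly like this; the repo URL is a placeholder for your own fork:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: netflix
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-user>/<netflix-clone-repo>.git  # placeholder
    targetRevision: HEAD
    path: Kubernetes            # capital K, matching the folder in the repo
  destination:
    server: https://kubernetes.default.svc   # the in-cluster API server
    namespace: default
  syncPolicy:
    automated: {}               # auto-sync, matching the "Automatic" policy chosen in the UI
```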
Argo CD, so GitOps is also in place, and we also have monitoring using Prometheus and Grafana. Now let's add this cluster to the monitoring; for that we need to change the prometheus.yml file on our monitoring server. So I'll go there, cd into /etc/prometheus, run ls, then sudo nano prometheus.yml. In this file we need to add a job that will scrape metrics from our Kubernetes cluster's nodes, and we need the right port as well: if you look at the manifest for the node exporter service, it's working on port 9100. So if I open my node's IP address on port 9100, we should see the node exporter installed; but first you need to open that port too. I'll go to my EC2 dashboard and click Edit inbound rules: in the node security group you need both 30007 (the app's NodePort) and 9100 (the node exporter) open, then click Save. Once you save, if you refresh, you'll see the node exporter is reachable when you open your node IP on port 9100, and if I click Metrics, this is where you get the metrics for your Kubernetes cluster. Let's go and add that in this section here: under job_name I'll name this k8s, then set the metrics path; by default it's /metrics, which is the same here, but I'll set metrics_path to /metrics anyway, and then under static_configs, for targets, put your node IP with port 9100, so my node IP followed by :9100. Close the bracket, save, and run the commands to check whether the syntax is correct and to reload, which you can get from the section here. So I'll run promtool with the check config command; it should say success or failure. It's giving me FAILED; we haven't
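The node-exporter job described above would look roughly like this in prometheus.yml; the node IP is a placeholder for your worker node's public address:

```yaml
# Scrape job for the Kubernetes node exporter, added under scrape_configs.
# 3.91.xx.xx stands in for your worker node's public IP.
- job_name: "k8s"
  metrics_path: "/metrics"    # the default, shown explicitly
  static_configs:
    - targets: ["3.91.xx.xx:9100"]
```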
done the indentation correctly; let's fix it like this, save, and try running the check again. Yes, now it says SUCCESS. After you get the success result, run the reload command; after reloading you should see the cluster endpoint in the Targets section. There it is: the cluster is up and the endpoints are coming in, which means we have now successfully deployed our application. To summarize: we started with a single server where we cloned the code and deployed the application locally in a container; then we added security using SonarQube and Trivy; we brought in Jenkins for the CI/CD pipeline; then we added Prometheus and Grafana; and finally we deployed the application on Kubernetes using Argo CD, with monitoring installed through Helm. I hope this was informative. Let me show you how to delete all these resources once you're done with the project, and after you've posted about it on LinkedIn, tagging me as well. Also make sure you write a good description if you're including this on your resume, because it's a very strong project covering all the important tools: Kubernetes, Helm, Argo CD, Jenkins, Prometheus, Grafana, SonarQube, Trivy, GitOps. So describe them properly in your resume, and if you have any questions about this, let me know; I can create a sample or template description that you can include in your resume for this project. Now let's go ahead and delete all the stuff we have. First, if you have the Kubernetes cluster, delete the node group: I'll go to the Compute section, select the node group, and click Delete; you need to type the name of the node group, which is "nodes", then click Delete, and this will delete the node group. Once it's deleted we can then delete the cluster, so let's wait for that. Next, let's go to the EC2
section. If you did the Argo CD part, you also need to delete the load balancer, which is here: I'll select it, click Delete, and confirm. Then you also need to delete the two instances you have: select them, click Terminate, and they'll be terminated. After that, you also need to release your Elastic IPs, or you will be charged for unused ones. Go to Elastic IPs, select them, and choose Actions, then Release Elastic IP addresses. If you get an error, select each one, click Disassociate, and disassociate it; do the same for the second one, and once they're disassociated you can release them (Actions, then Release). That takes care of your Elastic IPs too. So this is how you delete everything and avoid getting charged after finishing the project. Once the node group is deleted, wait a bit and you'll be able to delete the cluster as well, because the cluster requires the node group to be gone first, and that usually takes some time; if I try deleting the cluster now, you can see it asks me to delete the node group in the Compute tab first. So wait a while, and after that you should be able to delete your cluster too. That was the project! If you're still with me at this point in the video, I hope all of you have completed it, but if you ran into any issues, let me know in the comment section. If you did complete it, make sure to mention it in your resume and share it on LinkedIn, tagging me if you do. I hope this video was informative and taught you a lot. Please like the video and subscribe to my channel. Thank you, and have a good day!
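As a final aside, if you prefer the CLI over the console for teardown, the node group and cluster deletion described above can also be done with the AWS CLI; the names match what we created earlier:

```shell
# Delete the node group first; the cluster can't be removed while it exists
aws eks delete-nodegroup --cluster-name Netflix --nodegroup-name nodes --region us-east-1

# Wait until the node group is gone, then delete the cluster itself
aws eks wait nodegroup-deleted --cluster-name Netflix --nodegroup-name nodes --region us-east-1
aws eks delete-cluster --name Netflix --region us-east-1
```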
Info
Channel: Cloud Champ
Views: 130,101
Keywords: devsecops project, devops project, devops, devsecops, cicd project, devops monitoring project, jenkins project, kubernetes projects, cicd pipeline project, sonarqube, real time devops project, devsecops hands on project, cicd devops project, devsecops tutorial, devsecops course, what is devsecops, netflix clone project, devops kubernetes project, devsecops pipeline, devsecops cicd pipeline, devops project from scratch, devops project for resume, complete ci cd project, aws eks
Id: g8X5AoqCJHc
Length: 94min 30sec (5670 seconds)
Published: Sat Oct 21 2023