Building, Deploying and Managing Microservices-based Applications with Azure pipeline and AKS

Video Statistics and Information

Captions
Well, I don't want that... can you hear me? Yeah, I can hear you. Hello... yeah, I'm doing good, thank you. We're really excited to have you as a speaker, so it would be great if you could go ahead and introduce yourself and... yes, just let me know when you guys can see my screen. Yep. Okay then.

Hi, good morning everyone, hope everyone has been having great sessions over the last two days. Today I attended Murugan's session and then Aaron's session, and it was more on Azure DevOps, I felt; that's because Azure DevOps is pretty famous, so I'm also trying to cover a bit of Azure DevOps and show a deployment to AKS, which is Azure Kubernetes Service. Let me start with my introduction first. My name is Mamta. I recently founded a company called Aztec Scalable; we work on cloud technologies, doing consulting and training. I come with around 14-plus years of IT experience, I'm certified in all the public clouds, and my passion these days is DevOps and Kubernetes. I love doing a lot of things on Kubernetes, and that's the reason we will be talking about a lot of Kubernetes and DevOps today. Hope you people like the session, so let's start.

Today we will be talking about microservices-based applications which can be deployed to AKS using Azure DevOps. These are two of the basic tools provided by the cloud, and by the end of the session we will see why Azure DevOps is pretty famous and what its benefits are. So, today's agenda: we will have the AKS cluster up (we will be setting up a two-node cluster); we will have a private container registry, which is Azure's private container registry, into which we will push the images we build using Docker; and we will set up our Azure SQL server. Here I want to show how an application gets distributed across AKS plus a back-end SQL database, and how Kubernetes seamlessly talks to any type of back-end database, whether it is running inside Azure Kubernetes Service or as any other managed service. We will discuss how to use Azure Repos for version control of the code, and we will use Azure Pipelines for continuous integration and builds; Murugan's and Aaron's sessions covered a bit of that, so I'll just quickly run through it and try to focus more on AKS. Then, with the Azure release pipeline, we will deploy the code to AKS.

Let's first understand how we would build our infrastructure. There are a lot of ways to build it, and we can build it in an automated fashion as well, but here I wanted to show the ease with which you can deploy using the Azure command line. I'll break the commands down, because I've already brought up the infrastructure (it takes some amount of time), so that when you go back and try it you understand which parameter needs to be passed at what point in time. To deploy an AKS cluster you say az, which is the Azure command line; aks is the resource name; and the action, or verb, is create. You specify the resource group in which you want to create this particular resource and the name of the resource. You enable add-ons: you can have a dashboard, you can have monitoring, and a lot of other features provided by Azure itself can be clubbed together with AKS. You can choose which version of Kubernetes to run and how many nodes to start with. You generate SSH keys, so that those keys are used for setting up your credentials for the cluster, and you specify the region in which you would like to spin up the cluster. I have spun up my cluster in the Southeast Asia region and given it a unique name, aksdemo.
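For reference, here is a rough sketch of the cluster-creation command being described. The resource group, cluster name, node count, version and region are illustrative, not the exact values used in the demo:

# Create a resource group, then a small AKS cluster with monitoring enabled.
# --generate-ssh-keys creates the key pair used for the cluster credentials.
az group create --name demo-rg --location southeastasia
az aks create \
  --resource-group demo-rg \
  --name aksdemo \
  --node-count 2 \
  --kubernetes-version 1.15.10 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --location southeastasia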
Let me quickly show you that. This is my Azure portal (hope the font size is good), and this is the Kubernetes service which I have brought up, so let me take you through it. If you want to do things like upgrading, scaling and so on, you can do it from here itself. I had started with a one-node cluster and now I have gone to two nodes; that's pretty easy: you click on this, you say Scale, move the slider to how many nodes you need in the cluster, and just say Apply. Obviously that will take some amount of time, because it is bringing up a new virtual machine at the back end, installing the Kubernetes worker components (kubelet, Docker and all that stuff) and making it ready to join the cluster alongside the master and the two worker nodes we already have.

We can check in the command line as well, so let me quickly go to the command line. At this point in time you will see just two nodes, but we will come back to this, run kubectl get nodes, and in some time we will find the third node up. kubectl is the command-line tool to talk to a Kubernetes cluster; once you have set up the az command line you can just set the subscription this cluster is part of and right away go and run these commands. So kubectl is the command-line tool, get is the verb or action I am going to perform, and after that comes the resource type I want to get; here I am just listing the nodes. And I have picked version 1.15.

The next resource we will create is the Azure Container Registry. For that we say az acr create (Azure Container Registry create), we specify the resource group in which we want to create it, we give a unique name for the container registry, and we specify the location. With that, your Azure Container Registry comes up. Let me quickly take you to my Azure Container Registry. I find it right here because I have used it recently, but in case you don't, it's pretty easy: type a few words like "container" and it will be among the first options in the drop-down. So this is the container registry. As everyone knows, we have registries for saving our code; in the same way we have registries for uploading our Docker images, or any type of container image, where we can push them, and it works very similar to tools like Git or Bitbucket, where it uploads the delta and downloads the delta. Here, if you see, I have the Repositories section. The registry is the place, like a server I am getting in Azure, and I have given a name for that particular registry; you have to create a unique name.
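For reference, a minimal sketch of the commands being walked through here: fetching the cluster credentials, listing the nodes, and creating the registry. The names are illustrative, and the registry name has to be globally unique:

# Point kubectl at the new cluster and list its nodes; the third node shows up
# once the scale operation completes.
az aks get-credentials --resource-group demo-rg --name aksdemo
kubectl get nodes

# Create a private Azure Container Registry (the name must be globally unique).
az acr create \
  --resource-group demo-rg \
  --name aksdemoacr \
  --sku Basic \
  --location southeastasia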
The repository is the package, and in that package I can have several versions. Since morning I have been testing, because as we all know demos don't always work as expected, so I was a bit cautious and kept trying it continuously. I have pushed so many versions of the image, and the newest one is always tagged latest: as soon as we push an image it gets tagged as latest. So this is the Azure Container Registry, and I would like to show you the unique name we have. This is the unique name of the server, the server to which you will be pushing your Docker container images.

Let's go back and see what else we have to spin up. My Azure Kubernetes Service has to push or pull images (in this case we will be pulling the image from the Azure Container Registry), and AKS has to have the authority for that. That is the reason we are assigning a role to it: we create a role assignment in which AKS is the assignee, the role is AcrPull (Azure Container Registry pull), and the scope is the ACR. This is at the Azure level; I am just creating a service principal. Just as users authenticate with an ID and password, when services communicate they authenticate with something called a service principal (for those from the Kubernetes world, the equivalent idea is a service account). This is for inter-service communication, so you are authorizing AKS to pull images from ACR at this point in time.

Once that is done, because we are deploying our back-end database in Azure SQL, with this short and sweet set of commands I can create the server and the database as well. Let's break that down and see what exactly we are executing. The SQL server create is very simple: the region in which you want to create it, the resource group, the name for the server, and the admin user and password you give over here (sorry for that). And this is the DB create command with which, inside the server you created with the command above, you create a database; here you specify the database name.
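For reference, a sketch of the two pieces just described: granting AKS pull access to the registry, and standing up the Azure SQL server and database. All names are illustrative and the password is a placeholder:

# Look up the AKS cluster's service principal and the registry's resource ID,
# then grant the AcrPull role so AKS can pull images from ACR.
CLIENT_ID=$(az aks show --resource-group demo-rg --name aksdemo \
  --query servicePrincipalProfile.clientId --output tsv)
ACR_ID=$(az acr show --resource-group demo-rg --name aksdemoacr \
  --query id --output tsv)
az role assignment create --assignee "$CLIENT_ID" --role AcrPull --scope "$ACR_ID"

# Create the Azure SQL logical server, then a database inside it.
az sql server create \
  --resource-group demo-rg \
  --name aksdemo-sql \
  --location southeastasia \
  --admin-user sqladmin \
  --admin-password '<a-strong-password>'
az sql db create \
  --resource-group demo-rg \
  --server aksdemo-sql \
  --name mhcdb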
Once these pieces of infrastructure are up, let's see who sets this infrastructure up. In our office or project environment we would have a person with access to Azure who would spin up the AKS cluster for us, spin up the SQL server and create the container registry; then he or she, or maybe some other person not from the infra chain, would set up the repository where people can push the code, and then the pipelines would be created. There should be two sets of pipelines. The word "pipeline" is a bit confusing: under Pipelines, "Pipelines" does the build and "Releases" does the release, so you have the build pipeline and then you have the release pipeline. That is the infra part, bringing up the complete kit, and just as we have Git, the Azure DevOps equivalent is Azure Repos, plus the pipeline setup. We have seen the ACR, the SQL server and AKS, but we have not yet seen the Git part, so let me take you quickly to my Azure DevOps portal. This is where I have created a project; it's pretty simple: you have an organization, you can create several projects in it, there is a quick button over here to add and create projects, and you can push your code to that.

In Murugan's case the code was residing in GitHub, but I am using Azure Repos, and this is the place where my code lives. So this is the Repos part, and this is the Pipelines part, in which I have both the build pipeline and the release pipeline in place, and I have done the deployment as well. But let me quickly run through the code files. When we say we are deploying, there are two pieces: first, we have the code from which we have to prepare the image; and second, there should be a set of instructions with which the Azure Kubernetes Service will go ahead and deploy this code as a container in the AKS cluster. For building the images, as usual, we have the docker-compose file; we run the build command and prepare the image, these things are done in the pipeline, and this is the file used by the build pipeline once the code is built. The code build itself remains the same: depending on what type of language you have picked, you pick that type of build task, and for building the image out of the code you use the docker-compose file.

Let me take you quickly to the build pipeline so that we understand what exactly we are doing in it. I have tried not to keep things static: I have kept the SQL server's ID as a variable, I have kept the Docker container image as a variable, I have kept a lot of things as variables, and I will show you how you can define variables over here. Here you can see a set of tasks defined; a lot of options are given, and there is a set of actions these tasks perform. In the last few sessions you saw that if we want a server to build an image, for example if I am not using Azure DevOps and I want to build the Docker image, I would need a machine on which I can do the build and create a Docker image out of it; I should have a machine with Docker on it, plus a machine on which I can build my code as well. Here we take those resources, the servers, from Microsoft-hosted agents, and it's pretty easy: as and when you need a machine you just request it, and if it is granted, you have it.

So let's focus on these tasks: Run services, Build, and Push. Run services prepares a suitable environment, pulling the required images; we are using ASP.NET Core, so it picks that base image, restores packages, and all of that is set up in that task. Then you have the Build section, where you build the Docker image. This is where it picks up the docker-compose file I showed you a few minutes back; reading from top to bottom, it says this is my container registry type, this is the subscription, this is the container registry to which you will be pushing the image, and you build the image using this docker-compose file.
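For reference, a rough manual equivalent of what the build-and-push tasks in the pipeline are doing: logging in to ACR, building the images from the compose file, then tagging and pushing an image. The registry and image names are hypothetical, and in the actual pipeline the tasks also tag each image with the build number as well as latest:

# Log in to the registry, build the images defined in the compose file, then
# tag and push one of them (registry and image names are hypothetical).
az acr login --name aksdemoacr
docker-compose -f docker-compose.yml build
docker tag mhc-front:latest aksdemoacr.azurecr.io/mhc-front:latest
docker push aksdemoacr.azurecr.io/mhc-front:latest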
And, sorry, one second... yes, a lot of people are saying that the font is not clearly visible, so do you mind increasing the size? Yes, sorry for that. In case you guys have missed anything, I was just covering the Run section, in which I showed that you can specify the type: I am saying I will be using a container registry, which is the Azure Container Registry, and once you choose the proper subscription (you need to have a service connection set up in the project settings for this), this comes up in the drop-down itself. You can see this is the registry and this is the compose file with which you build your image. So a few things are set up as environment variables, then you build, and then the final task is the Push. In this you push; where to push is given, it says what the environment variables are, and how you tag the images when building; here you are saying include the latest tag. Each time I build, the image gets a number; you would have seen in my Azure Container Registry that there were a few numbers showing in the repository. If we quickly go there, I would like to show you that there was a set of image versions being uploaded, and it keeps increasing, but it is much easier to pull by the name latest, and that is why we retain a tag called latest over here, so that it can be pulled into AKS quickly. Then we do the publishing. Here you publish two things: you publish the main artifact of the code, plus you copy a set of files, and this is where you say that this also has to be part of the package, because this also has to go along with the code; only then can the AKS cluster pick it up and spin up your resources in the AKS cluster.

So let's quickly go back to our repository and look at that particular file, its contents and what we intend to deploy in our AKS cluster. Hope the font is fine. This is the apiVersion in which this kind, the Deployment kind, was introduced. Whether it's Azure or any other Kubernetes, these are standard Kubernetes things: once your Kubernetes cluster is up, whether it is in AWS, Azure, on-premises or any cloud provider, you can run all the kubectl commands and manage the cluster the same way it works anywhere else. It's pretty simple. So this is the kind of resource we are trying to create. And what exactly is a Deployment? A Deployment is a type of resource which helps us manage replicas of an application; I can do auto-healing, auto-scaling and all that with the help of a Deployment. A Deployment is a controller which helps us maintain the desired number of replicas at any point in time. So I say I want to run one replica of this; mhc-back is the name of the application, and here, if you see, it is just labeling those resources so that they can be worked with in the Kubernetes cluster. From here the Pod section starts: this is the label for the Pod, and in the spec section is the container; inside that you just give the image, and this is the container port. This image I am not pulling from the Azure container registry; I am just pulling it from an open-source registry like Docker Hub. And in Kubernetes, when you are deploying an application, because it gets spun up as a container and containers don't have a persistent IP address, we wrap it up with one more component called a Service. A Service is not a process that will get killed and lose its IP address; rather it is like an entry, and it retains itself until you go ahead and delete it.
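For reference, a minimal sketch of the kind of Deployment being described: one replica, labels on the Pod, a container image and a port. The names are illustrative, and the Redis image and port are an assumption based on the back end mentioned later in the talk, not the exact manifest from the repo:

# A minimal Deployment of the shape being described (illustrative names; the
# Redis image and port are assumed for the back-end tier).
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mhc-back
  labels:
    app: mhc-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mhc-back
  template:
    metadata:
      labels:
        app: mhc-back
    spec:
      containers:
      - name: mhc-back
        image: redis            # pulled from a public registry, as in the talk
        ports:
        - containerPort: 6379
EOF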
Services can be of several types. You can expose your application only within the cluster, and because this is a back-end type of application and you just want it exposed within the cluster, that is the reason the type is given as ClusterIP. With this, Kubernetes exposes the application only within the cluster and you cannot reach it from outside. You can specify the port number on which you want to connect to this application, and this is the selector for the labels. Because in the world of containers you cannot rely on IP addresses, labels and selectors play a very important role: you can identify an application, or any sort of resource I would say, with the help of labels, and whenever you want to select a particular label you use the selector part.

So we are deploying two applications: that one is the back-end part and this is the front end. The Deployment is the controller monitoring the application which has been deployed, and it will make sure that at every point in time I have one replica running. And here, if you see, there are a few ways in which you can roll out: you have canary-type deployments, rolling deployments, blue-green deployments; you can do a lot of things with the help of the resource type Deployment. Okay, let's go to the section of the Pod and the container, which we are more interested in at this point in time. This is the label for the Pod and this is the spec section in which I am defining my container: this is the name of the container, this is the image I am telling it to pull, and I am saying always pull the latest image. What happens is that in ACR, even though the last build was 97, that particular image is tagged with two different names, one is 97 and one is latest; even though the image is one, it has two separate names, just like how you have a formal name and a name you are addressed by at home. So if you do not want to change this number every now and then with the build number, you can write it as latest, and at push time you say include the latest tag. This particular container exposes port 80, and there are a few resource constraints which we have defined, how much resource we would need. And this particular Service is being used to expose the front-end type of application: because it is a front-end application and I will be exposing it outside the cluster, I am going with the service type LoadBalancer. This is very beautiful in the case of the cloud: when we say type LoadBalancer, AKS takes the responsibility of talking to the cloud provider. In our case AKS will talk to our Azure account and spin up a load balancer in Azure, and that load balancer helps us balance across however many replicas there are. So this is type LoadBalancer and this is the port number.
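For reference, a sketch of the two Service shapes just described: a ClusterIP Service for the back end, reachable only inside the cluster, and a LoadBalancer Service for the front end, which makes AKS provision an Azure load balancer with a public IP. Names, ports and labels are illustrative:

# Back-end Service: internal only (ClusterIP). Front-end Service: exposed
# through an Azure load balancer (illustrative names, ports and labels).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: mhc-back
spec:
  type: ClusterIP
  ports:
  - port: 6379
  selector:
    app: mhc-back
---
apiVersion: v1
kind: Service
metadata:
  name: mhc-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: mhc-front
EOF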
Now let's quickly go back and see the release pipeline. I have shown you the build pipeline, and I will show you the release pipeline; you can click on Edit at any time and look at both the build and the release pipeline. This is the part where I am saying here is the artifact, and where the artifact comes from: once the build is done by the build pipeline you can pull it in, and it is right there in the pipeline itself. And this trigger, you can see here you can enable or disable it: I want to build every time there is a change, and that is the reason I have marked it as continuous; or you can schedule it, build every day at six pm, build every Friday at six pm, anything like that.

These are the set of jobs and tasks we have, because we have to deploy two things: the database content is pushed to Azure SQL, so there is a DB deployment happening, and then we have the complete set with which we deploy the two container services which will be running in Kubernetes. Here, if you see, we deploy that to AKS, and this is that deployment section. This is the normal DB deployment which we have, and in this case as well we are using Microsoft-hosted agents so that we can trigger that deployment; and here too we have an agent with which we communicate with our AKS cluster, because Azure DevOps is in the cloud, right, and I do not have a machine of my own from which I can run a kubectl command or really push or trigger a deployment, so we pick up an Ubuntu machine and then go ahead and trigger this deployment. This is also pretty much the same: you just use your service connection and say which subscription; earlier we were intending to pull or push images to a container registry, but in this case, because you are deploying to your AKS cluster, we use this resource group, inside this resource group we have a cluster, and that is where we are deploying. In Kubernetes, apply is the command: in case there is a delta change, that gets updated, and in case you are doing a fresh deployment, with kubectl apply as well you can apply that deployment for the first time; if there are no resources it creates them, and if there are existing resources, whatever delta there is gets applied. This is the file it will pick up; we have just seen this file, the mhc-aks.yaml, which had the Deployment and Service definitions, and this is the place where it says where to pick the image from.

So with this, if you trigger the build... I know the build will take a bit of time, but let me trigger it anyway, so that in case it finishes we can see that it has finished. This is my build pipeline; I am just running it, and because I have automated the build and the deployment, the deployment also happens in an automated fashion. You can quickly go and check for updates: the build has started, it has grabbed a Microsoft-hosted agent, and it will start doing its set of actions; it is starting the first phase, it is initializing, it is checking things out, and slowly it will run all of the tasks and jobs which you specified in your build pipeline. In the same way, once the build pipeline is done and has pushed the image to your ACR, it will trigger the release pipeline.
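For reference, roughly what the AKS deployment step in the release pipeline ends up running on its hosted Ubuntu agent. Inside the pipeline the service connection handles authentication; the names here are illustrative and the manifest file is the one shown in the demo repository:

# Fetch credentials for the target cluster (the service connection does this
# inside the pipeline), then apply the manifest. apply creates the resources
# on a fresh deployment and only pushes the delta on later releases.
az aks get-credentials --resource-group demo-rg --name aksdemo
kubectl apply -f mhc-aks.yaml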
Okay, so let me quickly take you to my terminal; to talk to the cluster I am using the terminal of my Mac laptop itself, and from here you can do kubectl get all. This gives me all the resources running at this point in time in my Kubernetes cluster, and if you see, I have two pods running, one is the front-end pod and one is the back-end pod, and I have two services, one of type ClusterIP and the other of type LoadBalancer. The ClusterIP address is not resolvable outside, only within the cluster can you resolve it, and it does not have an external IP associated with it; for the LoadBalancer type you get an IP address, and with this you can reach the application. And if you see, the Deployment already shows how many replicas are ready, how many are expected, and all that.

Instead of waiting for that deployment, because that will take a bit of time, I will show you how you can scale from here itself. We want to scale the number of replicas in the Deployment, so you give the name of the Deployment you want to scale and you give --replicas equal to 5, for example. What happens is that it quickly spins up five new containers, wrapped up as Pods, so that Kubernetes can manage them. Let me show you the Pods: you can do kubectl get pods to list the Pods in the cluster, and you can see two are already in the Running state; we were running with one and I scaled to five, so three of them are in the Running state and two have just started creating, and if you want to keep watching you can run the command with -w. And there is an error: a few of the images are proper and a few of the images are not reflecting correctly; we can come back to that. Let me quickly show you where these Pods are; with the right command you can quickly go and check, and you would see they normally get distributed. Our third node is also in action, 0, 2 and 3 are the node numbers, and the Pods are almost equally distributed across them. There are a few errors in this, and we can obviously go and check what the issue is. How do you do that? You can do kubectl get events. Normally we do not expect this to happen, but in real-time live demos you cannot help it; there is some issue, it is giving an error, so let's look at why it has failed. It says the object has been modified and it does not have the latest version. I think because we have the build running in parallel and we are doing the deployment as well, a few of the things have worked fine and a few have not. So let's see, kubectl get pods, and I will quickly scale down as well; this is the way in which I am scaling up and scaling down. That is manual scaling, right? You could say, "Mamta, what's the fun in this? If someone has to really run a command to scale it, how helpful is it?" Agreed, it is not that helpful, but in case you want that automated as well, there is a component, a type of resource in Kubernetes, called the Horizontal Pod Autoscaler; you can implement that and make the deployment much more intelligent, scaling depending on the CPU or memory usage.
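For reference, the manual scaling commands being demonstrated, followed by the Horizontal Pod Autoscaler variant the speaker mentions. The deployment name and the autoscaling thresholds are illustrative:

# Scale the front-end Deployment up, watch the pods come up, inspect events,
# then scale back down (illustrative deployment name).
kubectl scale deployment mhc-front --replicas=5
kubectl get pods -w
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl scale deployment mhc-front --replicas=1

# Or let Kubernetes scale between 1 and 5 replicas based on CPU usage.
kubectl autoscale deployment mhc-front --min=1 --max=5 --cpu-percent=70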
So this was the demo part I wanted to show. Here is what we had planned: we checked in a piece of code, front end and back end, to the repository; the repository triggered the build pipeline; a Docker image was created and pushed to my Azure Container Registry; and then the release pipeline was triggered with the new set of artifacts and images. That was the continuous integration phase. Once we have the package built, the artifact and the Docker images, you can go ahead and deploy the newer version of your SQL database, and you can also trigger your AKS deployment with the new set of AKS files you have. Once that is done, the AKS master, which we do not own in the case of the managed service, goes ahead and pulls the latest set of images, and once it has that package it goes ahead and creates the Pods. So this is completely automated, right: the Pods have come up automatically, we have not done anything to them, and we have also created a load balancer which sits in our Azure account; it is not residing in the cluster, it is an external load balancer we have created. Once we have done this, that is the CD part, and it works awesome; the CI and the CD are done.

So let's quickly step back and think of the runtime part of it. If a set of users wants to reach my web server, they are routed to the load balancer using the DNS name, and that gets routed to the front-end application container; and if my front-end application container wants to talk to the back-end application, and that wants to talk to the database, all of those things are pretty much stitched up at this point in time, and we can see our database is serving and our website is up. So quickly, let me pick up the service's IP address: you can do a get service, and this will list all the services and the IP address with which I can reach the application. Let me quickly go to the page and see whether I can reach it; this is the one we are trying to open, and if you see, this is the same IP address. It will take a little time to load... yeah, it is up and running. So this is the website we are hosting on our AKS cluster; it is running in containers, a three-tier type of application: the front end, Redis, and the SQL back end as well. So this is what I had planned to show as a demo; in case there are any questions I would love to take them. Let me quickly switch to my StreamYard thing; hope I am on time, I was not actually keeping track of how much time I have taken. [Host] Yeah, Mamta, that was a great session, and we have got a bunch of questions in the comment section, so what we are doing...
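For reference, the commands used at the end of the demo to find the front-end Service's public IP and check the site; the service name is illustrative:

# The EXTERNAL-IP column of the LoadBalancer Service holds the public address.
kubectl get service mhc-front
# Replace <EXTERNAL-IP> with the address shown above.
curl http://<EXTERNAL-IP>/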
Info
Channel: KonfHub Tech
Views: 426
Keywords: DevOps, Serverless, Azure, Microservices, Azure pipelines, CI / CD, CCdays, AKS, konfhub
Id: xCUmcuM0GFc
Length: 34min 51sec (2091 seconds)
Published: Sun Jun 21 2020