CI/CD: Build, Test and Deploy to a Kubernetes Cluster on Azure Cloud, Live Webinar, May 2, 2020

Captions
Okay, can you hear me now? Yes? Okay, let's start.

First of all, my apologies for what happened. The problem is that the maximum number of people who can join is 250, and everyone tried to join the meeting, so even I couldn't get in, even though I am the organizer. I was not expecting that, because around 200 people had confirmed, but the invitation was sent to more than 700, so it seems some people joined without confirming. In the end I couldn't join, so I created a new meeting and tried to send out the invite, but my email was blocked for sending to that many addresses. Still, this will be a useful experience, because it reinforces the same idea I raised in my first session: the importance of staying connected. We need a central point of communication, and that is why I posted the new link to the event on our central point, the Facebook page. Anyway, my apologies again; the meeting simply reached its maximum.

Okay, let me start. Let me share my screen, and please let me know when you can see it. You can see it? Okay.

Let me begin by quickly introducing myself for people who don't know me. My name is Mohammad Radwan. I have been doing software development for more than 17 years, moving through different roles and positions, and for the last eight years I have focused on consultancy-based roles, where I have helped more than 50 enterprise customers, some of them from the Fortune 500. I have relocated to many countries, and I am now based in London, UK. As a part of the community, I
have contributed many projects on GitHub and the Azure Marketplace, I have my YouTube channel and blog, and I have delivered many sessions at many conferences. So I have had the chance and the opportunity to work on projects of different sizes, and companies of different sizes as well.

Okay, a quick code of conduct. I recommend that you go and read the code of conduct in detail, but let me go through it very quickly. First, every session is recorded, so if you don't want to be recorded, please leave the session now. Please respect everyone, no matter their religion, gender or age; we don't discuss any of that. Keep your mic off all the time; only open it when you are speaking, and turn it off immediately after you finish. Our central point of communication is the Facebook page; again, if we look at today's problem, we could have solved most of it if people were communicating through the same central point.

I have divided the community into two groups for now, Group A and Group B. Group A is structured and Group B is responsive; you can read more about that on the page. Priority questions are for Group B, because Group B is more responsive, with open discussion at any point. Usually the priority question relates to something I did before, like my videos, my books, or any of my other contributions. For Group B as well, because it's an open discussion, try to avoid asking the same questions again and again, like how to become a DevOps engineer or how to change career. It's the same question every time; I want to answer it once, agree about it in the open discussion, leave the door open for other opinions, and then move on. So people who would
like to know that question or its answer can just go back to that conversation. For Group B, we will also work on an open source project, and please fill in the survey; it's very important for improving things. For Group B it's also very good if you can prepare your questions up front and send them early enough, so that everyone in the group can vote for the questions they would most like to get answered during the meeting.

The discussion is not moderated during the meeting; I got some feedback about that, so to be clear, there is no moderation of the discussion. Again, everything is in the FAQ, and you can use it to get answers to all your questions. Those are the details, and of course I'm not going to read through all of them now. Your support is really needed and appreciated.

What we are doing here, as I explained before, for people who don't know: in this community we do mentoring, open source, study groups, certificates, tips and tricks, and also job recommendations. And more community champions: our goal is not just to have people become MVPs, but also to make more community champions out of you, and this again is how we encourage you to participate in the community.

About the surveys and the results: we reached around 700 registrations from 40 countries, and this is what caused the problem, the number of people trying to join the meeting at the same time. I will try to think about how we can sort that out and manage it. From my point of view, it means staying connected to the same central point, which is the Facebook page, and also confirming whether you will attend. I understand that sometimes you will confirm and then you can't make it; that's fine, no problem,
but it's really helpful to know who is going to join and who is not, so we don't have this problem again.

The importance of community and networking: why community? Besides learning and other things, community is a platform for people, for many things. People are looking for jobs, and hiring managers notice and trust people through it. This community is a platform for you, and what I'm trying to do with this gathering of 700 people from more than 40 countries is to use this network to help each other. It is not just a session that you attend. So keep connected; again, everything is on the Facebook page, so please make sure you are connected, read the FAQ very well, and actively participate in the surveys and the meetings. (Can you please mute your mic?)

So yes, everything is on Facebook, so please make sure you stay connected and get all the information and updates, because there are usually updates on a regular basis.

I would also like to recognize and give a big thank you to Mustafa, because he answered the quiz correctly; about ten people answered the quiz, but he was actually the only one who got it right.

As I explained, we are divided into two groups: today is a Group A session, and there is another group, Group B. The main idea is that Group A, which is today, is more structured, because based on the survey around 80 percent, I would even say 90 percent, of people would like a structured session with a well-defined agenda, and then go to questions. So this is what I'm trying to do, and this is the reason for that decision. Again, please make sure to read more about the difference between the two groups. I will re-evaluate the groups from time to time, and maybe divide them into more groups along the
way.

Exam vouchers and Azure subscriptions: I will have some exam vouchers or discounts sometimes, and I'm also trying to connect with Microsoft to get some Azure subscriptions. I can't promise or give any guarantee about it; what I will try is to get either vouchers or subscriptions and give them to people who, for example, contribute or answer a quiz, and so on.

There are upcoming sessions: for example, DevOps, open source and open Q&A next Saturday, and another session on Saturday the 16th; one is for Group A and one is for Group B. As you can see, your support is really appreciated.

Okay, let's start the session for today. What we will cover in our agenda: I will give a quick introduction to Kubernetes, and then we will understand the structure of Kubernetes, like pods and nodes. Then we will have an introduction to AKS, the Azure Kubernetes Service, and we will understand the difference with and without AKS, that is, how to work with Kubernetes with and without it. After that we will get an introduction to Azure Container Registry, and we will get an overview of the end-to-end Kubernetes CI/CD pipeline. This includes, of course, creating the Azure DevOps project, creating the Azure Pipelines, the Kubernetes extension, and many other things.

So, what is Kubernetes? I think it's better to start from the beginning. The story begins when companies started moving from waterfall development to agile and DevOps. All companies would like to speed up the delivery of value to the end user and the customer, and this has many obstacles. I'm not going to cover all of them, or I'm not going to
talk about them all, but one of the major challenges or obstacles is the monolithic application. I can give you an example from experience: I have had many projects like this. I remember one project where they usually ran the build at night; when people left the company, they started the build, which took maybe six or seven hours, and after the build completed came the deployment to the QA machines, so that in the morning the QA team could just start working and testing the application. And guess what: on many days the build and deployment failed, and the QA team had nothing to work with. There are many stories about monolithic applications and their problems.

But as I usually say, you solve one problem and it brings more problems, or not problems, I would say challenges. So, the monolithic application: what was the solution? The solution was to break down this monolithic application into small pieces, small parts. If we look at the monolithic application, it is usually a single application that runs in a single process, and we deploy it to either a VM or a container; if we want to replicate it, we have to replicate the whole thing on a different server, VM or container. But when we look at the microservices approach, it's all about the segregation of business functionality across different independent services, and this is the most important part: each service is deployed independently, on either a VM or a container, whichever platform you choose, but the main idea is that replication and scaling are independent. You scale each component independently on VMs or containers, and this led to
also a shift from horizontal layers into vertical ones, which helps teams move from horizontal teams to vertical teams. For example, in a horizontal team you find the UI team working on all components of the application, the API team working on all components, the data team likewise; it's one team per layer working on everything. But when we talk about a vertical team, you have one component, and for each component you have people working on each part of it: the UI, the API, the data, and the other parts. So you start having vertical teams, which gives us the ability to manage that, and again we would like to speed up the delivery of value to the end users and the customers.

This leads to the segregation of the application: instead of having a monolithic application, we may end up, as you can see, with a huge bunch of microservices. Again, we solved some of the challenges, but this number of microservices brings other challenges, namely the increased complexity of managing those services.

As a real example, if we look at Amazon or Netflix, you can see their microservices, and each dot in the diagram is a microservice. So when we talk about how we are going to host these, are we going to host each microservice in a VM? That would be really huge, consuming a lot of resources with no need. The alternative was the container, and this again brings the question: how are we going to manage all these
containers? How am I going to orchestrate all these containers, work with containers and nodes and clustering, and start managing which containers run on which node, and so on? This orchestration and management of a whole bunch of microservices, of containerized applications, is what led to Kubernetes. So now you understand the story behind the scenes.

So this is Kubernetes: Kubernetes is an open-source system for orchestrating and automating the deployment, scaling and management of containerized applications. It helps you deploy your application quickly and predictably. It improves reliability, because it continuously monitors and manages your containers. It will also scale your application to handle changes in load, spinning up more pods on the fly as needed, so it's very dynamic, which means it provides better usage of the infrastructure and, of course, of the cost of that infrastructure. It also coordinates which container runs where, across the whole orchestration system, and lets these containers talk to each other. This is why Kubernetes provides a lot of capabilities and features: portability, extensibility and self-healing.

Let's talk about what a pod is. The pod is the core component of Kubernetes. It is a Kubernetes abstraction, a logical representation of the atomic unit of Kubernetes. It can contain one or more containers, plus other resources shared or used by these containers. For example, in pod 1 here I have one container; in pod 2 I have a container and a volume, which is a shared storage area; here I have two containers using a shared resource, the volume; and here, as you can see, we have more than three containers and two volumes shared between them. With each
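The shared-volume pod just described can be sketched as a Kubernetes manifest. This is a minimal illustrative example, not taken from the webinar's demo project; the pod name, container names and images are assumptions.

```yaml
# Illustrative pod manifest: two containers sharing one volume (names are made up)
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data          # the shared storage area mentioned above
    emptyDir: {}
  containers:
  - name: web                  # first container serves the files
    image: nginx:1.17
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: writer               # second container writes into the same volume
    image: busybox:1.31
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers mount the same `emptyDir` volume, so whatever `writer` produces is immediately visible to `web`; and because they share the pod's network, they could also talk to each other over localhost.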
pod, it has a unique cluster IP, and information on how to run each container, such as the container image version, specific ports, and so on, is kept with each pod. So, in summary: the pod is the atomic unit on the Kubernetes platform when we create a deployment on Kubernetes.

Okay, containers, pods and scheduling. As we understand now, containers must run or be inside a pod on Kubernetes. Let's get into more detail about the pod from the inside. What is it? You can think of it as an isolated area of the operating system which has a network; for example, the localhost here is for communication inside the pod. It can also have namespaces that are used within the pod, and it usually runs a single container, or closely coupled containers can run together in the same pod. It is also the minimum unit of scheduling: imagine that you want to schedule a component to run on a node; the minimum thing you can schedule is the pod, not the container. The same goes for scaling up: you don't scale your application by adding more or fewer containers, but by adding or removing pods in your orchestration system. And note that a single pod can only be scheduled to a single node, regardless of the number of containers.

Now that we understand pods, let's talk about the node. What is a node? As you can see, the node holds more than one pod. The node is the worker machine in Kubernetes and can be either a virtual or a physical machine; this is the actual infrastructure. So if the pod is a logical container, a logical space, the node is a physical component. A node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the nodes in the cluster. Okay, so to get more detail on Kubernetes
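The pod/node relationship above is easy to see from the command line. A small sketch, assuming `kubectl` is already configured against some cluster; the placeholder names in angle brackets are not from the demo.

```shell
# Assumes kubectl is configured against a cluster; names in <> are placeholders.
kubectl get nodes -o wide          # list the worker machines (virtual or physical)
kubectl get pods -o wide           # the NODE column shows which node each pod landed on
kubectl describe pod <pod-name>    # per-pod details: pod IP, containers, images, ports
kubectl scale deployment <name> --replicas=5   # scale by adding/removing pods, not containers
```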
operating a cluster of nodes: each node, since a node is just a machine, has its own runtime environment for containers. For example, node 1 has the Docker runtime, and node 5 has another container runtime; it depends on the containerization software you install or configure that node for. So if you have an image built with one containerization technology, it runs on a node configured for it, and again, this is the beauty of Kubernetes: it manages this orchestration for you.

So, what is AKS, the Azure Kubernetes Service? It is fully managed Kubernetes orchestration. To give you a very quick idea: now you understand what Kubernetes is, and Kubernetes as you have seen it so far is essentially infrastructure as a service. So what is AKS? It brings you Kubernetes on Azure: instead of working with Kubernetes as infrastructure as a service, it becomes a platform as a service, a PaaS solution. You get the benefit of Kubernetes, all of the orchestration system and the open source, but instead of having the overhead of working with Kubernetes as infrastructure, you work with Kubernetes as a platform, which means more abstraction, more manageability, and less overhead.

This is why with AKS you can deploy and manage Kubernetes very easily: you are abstracted from the actual configuration of the Kubernetes cluster itself. You don't interact with the infrastructure of Kubernetes; you interact with the platform as a service. You can scale up and down very confidently, because it's not your problem; it is the problem of the cloud provider, which in the case of AKS is Microsoft Azure. Of course it's
secure, because it is on the Azure platform, which has very high security, and of course this accelerates development, because by reducing, or rather removing, this overhead, you give yourself more focus on developing the application and on its value. And of course you can set up CI/CD.

Okay, so Kubernetes with and without AKS. When we look at Kubernetes without AKS, we can see that the nodes are divided into two types: the control plane, which has the master nodes, and this is like a controller, like the controller in MVC. The control plane has the master nodes which control the agent pool, the worker nodes; those are the actual nodes that hold the pods. The master nodes don't hold pods; they are just for managing and controlling the agents. So we have the control plane and the agent pool. But with Kubernetes as AKS, the Azure Kubernetes Service, the point is that there is a hosted control plane: you are fully abstracted from the master nodes and the control plane, because it is fully hosted by Azure, with no master nodes for you to manage or pay for. It is very easy to upgrade; I will show you in the demo how easily you can upgrade AKS, whether for adding or removing nodes or for updating the version of Kubernetes. So it's very easy to scale up and upgrade.

Azure Container Registry: the Azure Container Registry is the repository for storing your container images. I will show you now that in modern engineering practices, when we build our application, instead of build-deploy, we build, store, and then deploy. Azure Container Registry provides a repository to store your container images, no matter what the containerization technology is, even if it is
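The push-and-pull workflow against Azure Container Registry can be sketched with the Azure CLI and Docker. The resource names (`myDemoRG`, `mydemoacr`, `myapp`) are placeholders, not from the demo.

```shell
# Sketch of creating a registry and pushing an image to it (names are placeholders).
az acr create --resource-group myDemoRG --name mydemoacr --sku Basic
az acr login --name mydemoacr                        # authenticate the local Docker client
docker tag myapp:latest mydemoacr.azurecr.io/myapp:1.0
docker push mydemoacr.azurecr.io/myapp:1.0           # store the image
docker pull mydemoacr.azurecr.io/myapp:1.0           # later: pull it back for deployment
```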
different; it supports different containerization technologies, so you can just push and pull images from this repository. Any questions so far?

So with Azure DevOps, the pipeline, AKS and Azure Container Registry, we can have a full lifecycle for managing AKS, for the whole Kubernetes lifecycle. To give you the idea, because it's very important, about end-to-end delivery: we used to build, unit test, package the application, then provision the environment and then deploy the application. But this is the old approach. In modern engineering you store your package in a storage area before you deploy, so later, in another stage, you can just pull the package from the storage and deploy it to the other environment after provisioning; you don't rebuild it again. So let's focus on the end-to-end lifecycle that I'm going to show you now.

To give you an idea, these are the components we will work with, or at least see some of: Azure Container Registry for storing the Docker image, AKS, where the Docker image is deployed to pods running inside AKS, and Azure secrets.

I will start the demo by creating the resource group which will hold all the cloud resources I will create. After that I will create, or provision, an Azure Kubernetes Service. The first step when creating it is to get the latest version of the Kubernetes service, and then use that to provision the AKS cluster. Just to let you know, this provisioning of the environment could also be part of the continuous deployment pipeline, as in the previous slide when I talked about provisioning the environment, but to keep this demo simpler, I just provision the environment manually. Some of
the environments I will provision as part of the CD, but not all of them; again, everything can be automated in the end.

So, manually, I will use the command line, the Azure Cloud Shell, to provision the AKS, and by default this AKS will be configured with three virtual machine nodes; I will show you that in the demo. Once I have that, I will provision the private container registry, and then create an Azure SQL server and database.

Once we have that, think about the normal lifecycle of software development: developers work on their machines and make changes to the application; in our case we have an ASP.NET Core application. Imagine they complete the application and then push the changes to the repository, the source control repository on the cloud. Once this happens, if we are working with continuous integration, the pipeline will automatically trigger the continuous integration build. It's very important to understand where the CI/CD pipeline starts and ends, so what will happen during the CI/CD?

The first thing is that the pipeline will pull an image from a public registry, which has many Docker images; in our case it's the ASP.NET Core image. Once it pulls the image from the public registry, the pipeline will build the application, run the unit tests and all of that, and then put the application on the image: it commits these changes to the image and then tags the image with the build number. After that, the build will push the image to the private registry, because what I got was a plain ASP.NET Core image from a public registry, and I will not use a public registry for deployment now that I have put my whole application on that
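The CI steps just described (pull the public base image, build, tag with the build number, push to the private registry) look roughly like this when done by hand. The registry name, image name and `BUILD_ID` value are placeholders, not the demo's real names.

```shell
# Roughly what the CI pipeline does, written as manual commands (names are placeholders).
BUILD_ID=20200502.1

# 1. The Dockerfile's FROM line pulls the public ASP.NET Core base image,
#    e.g. FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
docker build -t mydemoacr.azurecr.io/mhc-web:$BUILD_ID .   # build the app and bake it into the image

# 2. The tag carries the build number; push the result to the private registry
az acr login --name mydemoacr
docker push mydemoacr.azurecr.io/mhc-web:$BUILD_ID
```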
image, and then push it, after, very importantly, tagging the image with the version of the application, the build number. So now my private registry has the container image, the Docker image, which includes my application.

After that I will have the YAML file, which contains the manifest of the Kubernetes deployment, describing the infrastructure of the Kubernetes objects inside AKS. I will store it; do you remember when I said build, store, deploy? So at this point I'm storing the YAML file in the Azure Repos of Azure DevOps. I will also store the dacpac file, which is the database package file. Does anyone work with dacpac? Yes? Okay, but not to deploy to Azure SQL, rather to deploy to SQL Server? Okay, that's great. The dacpac, for people who don't know, is just a package representing your schema, your initial lookup data and so on, and again, part of the build is storing that in Azure DevOps Repos as well.

Once this is complete, the CD is ready to run. The last step in the CI is storing the packages in the storage areas. Can anyone remember what we store in the storage? The generated Docker image, okay, and where do we store that? The Azure Container Registry, that's great. And what else? Also the YAML file for the deployment, yes, that's right, and the dacpac file, yes, great, thank you. So we store those, but they are stored in different places: the Docker image in the Azure Container Registry, and the YAML and dacpac files in the Azure DevOps repos. That is the last step of the CI, and then the continuous deployment can start.

So what is the first step of the continuous deployment? It will start downloading from the storage, picking up from the storage; just imagine any distributor or courier company: you have the storage, you
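The stored YAML manifest typically pins the image tag via a token that a token-replacement step (such as the Replace Tokens extension) fills in with the real build number at release time. A hedged sketch of how such a manifest might look in the repo; the names and the `__BuildId__` token convention are illustrative, not the demo's actual file.

```yaml
# Illustrative deployment manifest as it might sit in Azure Repos (names are made up).
# __BuildId__ is a token a replacement step swaps for the real build number at release time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mhc-front
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mhc-front
  template:
    metadata:
      labels:
        app: mhc-front
    spec:
      containers:
      - name: mhc-front
        image: mydemoacr.azurecr.io/mhc-web:__BuildId__
        ports:
        - containerPort: 80
```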
go to the storage and you just pick. So the pipeline will first start downloading the dacpac onto the release agent, and once it has it, it will use the dacpac to execute the command which creates the database described in the schema of the dacpac, including the initial data. Once this is complete, it will take the YAML file, and then, based on that YAML, it will provision the pods and the deployment of the Kubernetes service. As we explained, we provisioned AKS, but that was not the actual deployment onto Kubernetes; the YAML file defines the manifest of that deployment for our Kubernetes. The pipeline will also push the Docker image, because now I have my pods; the pods are configured and everything is fine, so now I can deploy my Docker image to the pods on Kubernetes. Once I have that, based on the manifest file it will also provision the services, which includes a load balancer, and then I can go to the web browser and browse to my application, which again is a very simple application.

So this is the demo; let's look at it. Here I will go to Azure DevOps. As you can see, I am using an Azure DevOps organization here, and there is no project in this organization, because I want you to see it end to end. If I go to extensions, there are also no installed extensions in this organization. So what I am going to do is go to the Azure DevOps Demo Generator; this website generates everything: the web application, the YAML configuration, the pipeline, all of it, so we can just use that. I just sign in with my MSA account, and then I choose my Azure DevOps organization, give it a little time to load, and from here I give a name, MHC, or My Health Clinic, then choose the template: navigate to DevOps Labs and go with Azure Kubernetes
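The CD steps just walked through (publish the dacpac, apply the manifest, expose the app through a load balancer) can be sketched as commands a release agent might run. The server names, credentials, file names and resource names are all placeholders.

```shell
# Sketch of the release steps run on the release agent (all names are placeholders).
# 1. Publish the downloaded dacpac to the Azure SQL database (schema + initial data)
sqlpackage /Action:Publish /SourceFile:mhc.dacpac \
  /TargetServerName:mydemosql.database.windows.net \
  /TargetDatabaseName:mhcdb /TargetUser:sqladmin /TargetPassword:"$SQL_PASSWORD"

# 2. Apply the (token-replaced) manifest: creates the deployment, pods and services,
#    including the Service of type LoadBalancer
az aks get-credentials --resource-group myDemoRG --name myDemoAKS
kubectl apply -f mhc-aks.yaml

# 3. Find the public IP the load balancer was given, then browse to it
kubectl get service mhc-front
```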
Service. As we can see, choosing this template requires two Azure Pipelines extensions: the first one is the Kubernetes extension, and the second one is Replace Tokens. So I will just open these two extensions and install them: I choose the organization and then install, and I do the same with Replace Tokens. Now the two extensions are installed, and I unfortunately need to repeat the earlier steps, but it's not a big deal: choose the organization, type the name of my project, MHC, choose the template, navigate to DevOps Labs and choose Azure Kubernetes Service again. As you can see, now everything seems to be working fine, and it is creating the project.

Let's navigate to the project and the extensions. If we go to the extensions while it creates, we can see that we have the two extensions now, and this will create, as I told you, the source code, the project, everything required; it's just a sample application. Now it's completed; I don't need the demo generator anymore. Let's go to projects, and we can see that I now have MHC, the My Health Clinic project.

If I open the project and navigate to my Azure subscription, you can see the current resource groups; I'll just refresh. The first step is that I will run the Cloud Shell here; I can run it from here, choose the subscription, and then create a storage account. The first time you run the Cloud Shell, you need a space where it can store temporary files, and because there is a cost for storing this information, you need to create that; it creates it automatically for you, but once you have it, you don't need to create it again. Behind the scenes it creates the storage account, and there is also a resource group for it; as you can see, we have the cloud shell storage in West Europe. So now I can run the
Cloud Shell, and I will start creating the components as we explained before. The first step: by running this command I declare a variable called version to get the latest Kubernetes version available on Azure. This gets the latest version, and I can echo the variable to see it; as you can see, this is version 1.13.5. Then I will create the resource group which will hold all the cloud resources I'm going to have; as you can see, I just create the resource group. By the way, in the Cloud Shell you can use Bash or PowerShell, and for people who ask me which language: you can use PowerShell if you prefer; for me, I really like Bash, I really like the Linux syntax, which is why I usually use Bash. Anyway, after creating the resource group, let's navigate to the Azure portal to see my resource group: it's created. Now I will start creating the AKS cluster, because I have the resource group and I have the version. I create it using this command: as you can see, I specify the resource group, the name of the AKS cluster, and a switch to enable the monitoring add-on (I will show you later how I use the monitoring, so I can monitor the Kubernetes infrastructure), and of course I use the version variable, and it generates SSH keys. This will take some time, around 15 or 20 minutes. And as you can see, this is very important: it finished the service principal creation. Behind the scenes it creates a service principal, which is why it's very important that the account I'm using on the Azure subscription has enough permission to create a service principal, and creating a service principal usually requires a very high permission level.
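The Cloud Shell steps described above can be sketched roughly as follows. This is a sketch, not the exact commands from the demo: the resource group name, cluster name, location, and the query path are illustrative assumptions, and the flags follow the Azure CLI as it looked around the time of this webinar.

```shell
# Latest Kubernetes version available in the region, stored in a variable
VERSION=$(az aks get-versions --location westeurope \
  --query 'orchestrators[-1].orchestratorVersion' --output tsv)
echo "$VERSION"

# Resource group that will hold all the demo resources
az group create --name akshandsonlab --location westeurope

# AKS cluster with the monitoring add-on enabled; SSH keys are generated,
# and a service principal is created behind the scenes
az aks create --resource-group akshandsonlab --name mhcaks \
  --enable-addons monitoring \
  --kubernetes-version "$VERSION" \
  --generate-ssh-keys
```

As noted in the talk, the `az aks create` step is the slow one (15 to 20 minutes) and is the point where the service principal is created.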
For example, if you would like to try this with an Azure subscription under your organization's tenant, you may not have permission to create service principals; usually you won't. So it's better to have your own Azure subscription, and you can get a free subscription for up to 30 days very easily. Anyway, this creates the service principal, and for people who don't know what a service principal is: it's just a service account, the same idea as a service account, an account created for managing these services later. So now it's still running, and as you can see, I'll refresh here just to show you: I now have this resource group created, and also this other resource group. I didn't specify this second resource group, right? It was created automatically. The AKS resource itself is created inside my resource group, and this is exactly the abstraction I'm talking about: I will manage AKS as a platform-as-a-service (PaaS) solution from here, while for the other components I can of course look at them, like the monitoring, or interact with them, but I don't have to take on the overhead of managing them. So this is AKS, and this other resource group holds the monitoring capabilities; as we can see, it has the Log Analytics workspace. "Mohamed, just a question." Yes, and for people asking questions, please say your name. "My name is Reggie, I'm based in France. It's the first time I've joined your webinar and I'm very satisfied; thank you for spending your time and sharing your knowledge. My question is about the Cloud Shell: the SSH key, where is it stored? Because your home directory is in the cloud, how do you
keep your SSH key safe?" You mean keep it safe? OK. The key is generated just for this deployment. The main idea here is that this SSH key is used by the provisioning, so you work with AKS through the abstraction; the information is there, but you are not going to interact with it directly. Yes, it's part of the provisioning, but you are not going to manage that information yourself. "OK, but it's important to know where everything is stored." Yes, that's a good question; honestly, I don't know exactly where it's stored, I didn't look at that part. But what I'm saying in the end is that the main idea is that it keeps you at the abstraction level: you are not going to hold this SSH key yourself and then manage where to store it. "OK, thank you." No problem. OK, so if I refresh now, as you can see this is the backend for the Azure Kubernetes Service; all of these components were created automatically. Let's just wait for the completion of the AKS provisioning... now it's complete. As you can see, there are three resource groups: this is my main resource group, which has the AKS; this is the resource group for the monitoring capabilities; and this is the resource group for the backend. Let's sort that by type, and as you can see we have three virtual machines, which are the three nodes that will serve the AKS cluster. OK, let's now open my resource group and create the Azure Container Registry. This command creates a container registry and gives the location for it; this will be just the
creation of the container registry. If I go back here and refresh, as you can see the container registry is created; this is a private container registry. Then back to the Cloud Shell: here I will get the client ID of the service principal. If you remember, when I provisioned the Azure Kubernetes Service, a service principal was created, so now I'm going to get the client ID of that service principal, and I will also get the ID of the Azure Container Registry. The main idea is that once I have the ID of the Azure Container Registry and the client ID from the Azure Kubernetes Service, I can create a role assignment, which grants permission between AKS and the Azure Container Registry, so AKS can pull the images and interact with the image repository. So this gets the Azure Container Registry ID, and then here I create the role assignment linking the client ID of the service principal and the Azure Container Registry. Now I have the role assignment created; let's clear the screen.
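A sketch of the registry creation and role assignment just described. The resource names are illustrative, and the role shown here is `acrpull`, one common way to grant AKS pull access; the talk doesn't name the exact role used.

```shell
# Private container registry in the same resource group
az acr create --resource-group akshandsonlab --name mhcregistry \
  --sku Standard --location westeurope

# Client ID of the service principal created during AKS provisioning
CLIENT_ID=$(az aks show --resource-group akshandsonlab --name mhcaks \
  --query "servicePrincipalProfile.clientId" --output tsv)

# Resource ID of the container registry
ACR_ID=$(az acr show --resource-group akshandsonlab --name mhcregistry \
  --query "id" --output tsv)

# Let the AKS service principal pull images from the registry
az role assignment create --assignee "$CLIENT_ID" --role acrpull --scope "$ACR_ID"
```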
"Mohamed, I want to ask a question: are you pulling the Docker image from the public registry now?" No, I'm not doing a pull; I'm just provisioning the resources. That's why whenever I provision something I go back to the Azure portal, so you can see the provisioned items; there's nothing but provisioning so far. OK, so now I will create, or provision, the Azure SQL server, and of course I give it the name of the SQL server, the username, and the password. This creates the SQL server; let me refresh and give it some time. By the way, I'm speeding all of this up because this is a video; just provisioning the AKS takes around 20 minutes. "Mohamed, sorry, can you please increase the size of the text in the console?" Unfortunately that's not possible, but all the commands will come with the recorded video afterwards, so you will get all the commands if you want to try this. "OK, thank you." No problem. So yes, as you can see the SQL server is created, and then I will create the database on that SQL server: as you can see, just a database, giving the resource group, the SQL server name, and the database name. This creates the database, but of course an empty database. If we refresh, we can see I now have a SQL database here. As you can see, I'm doing all of this provisioning through the Cloud Shell, the command line, but I could also have a fully automated provisioning of the environment as part of the CD. Remember, as I explained before, it's very important to understand what should be inside the CI and what should be in the CD.
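The Azure SQL provisioning shown a moment ago, sketched. The server name and admin credentials are illustrative placeholders, not the real values from the demo.

```shell
# Logical Azure SQL server
az sql server create --resource-group akshandsonlab --name mhcsqlserver \
  --location westeurope --admin-user sqladmin --admin-password 'P2ssw0rd1234'

# Empty database on that server; the dacpac creates the schema later
az sql db create --resource-group akshandsonlab --server mhcsqlserver --name mhcdb
```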
Provisioning the environment is usually the first step in continuous delivery, if you remember the slide where I talked about that. All of this was done manually to keep the demo simple. So now I have the database; we can navigate to it, and this is the server name of the SQL server. And this is the Azure Container Registry, the name of the container registry. Let me also just show you, so people can follow: I completed this part using the CLI in the Cloud Shell, with the whole environment provisioned, and now we will start the CI/CD part. If I navigate to my pipelines, as you can see, this is the build pipeline, and I will explain everything here. Let me start with the Replace Tokens tasks; we have two of them from the beginning. The first one replaces tokens inside the app settings of the web application: because, for example, I created the SQL server and the SQL database with a username and password, I want to change this in the app settings of the web application so it uses my new SQL server name, username, and password. The second Replace Tokens task replaces values inside the manifest file of the Kubernetes deployment, including where the private container registry is; I will replace that there. Then this Run Services task is just for pulling the image from the public registry and restoring all the third-party packages. The next one, Build Services: this task is
building the image (of course, building the application first, then building the image with the application deployed onto it) and tagging the image with the build number. The next task, Push Services, pushes the image (I now have the image locally on the agent, which is created on the fly) to the private registry. The last one, Lock Services, just locks the image digest: it makes the reference unique for your image, so you don't get confused by updates and you can confirm you have this exact version. After that, Copy Files copies the files from the source control, and Publish Artifacts takes these files, the YAML file and the dacpac, and stores them on Azure DevOps. OK, so now you have an overview. What I'm going to do here in the Run Services task is set up communication between the pipeline and my Azure subscription, because I'm going to store the image on the Azure Container Registry. So what I need here is to choose my subscription, as you can see, and then Authorize. Once I authorize, this creates a service principal behind the scenes and adds a service endpoint on Azure DevOps to communicate with the Azure cloud; you can also do that manually, by the way. Once I'm authorized, I can choose the container registry here, and I will do the same for all of the tasks. So now I've given authorization to my build tasks. Let's now talk about the Replace Tokens task. If I go to the variables defined on the build, as you can see, these are the names of the variables and their values; it's key and value pairs. So here is the ACR, which is the Azure Container
Registry, and this is the name. So what I will do is replace these placeholder values with the actual values; again, I named the container registry with a specific name, so I will use that here. "Thank you; I was glad to have the opportunity to join, and I follow your Azure Tips of the Week content. I have two questions about this. The first: if you go to add a task, you can see there is one called Publish Build Artifacts and another called Publish Pipeline Artifacts. What's the difference between the two, and which should I use? The blogs I've seen mention that Publish Pipeline Artifacts is typically faster and has more capabilities. The second question: you put the password as plain text in a variable; shouldn't you use Key Vault?" For the second part: of course, we would never use a plain value inside a variable in reality; the best practice is to use Key Vault, but again, this is just a demo, so yes, definitely use Key Vault for that. For the first part, Publish Build Artifacts versus Publish Pipeline Artifacts: honestly, I don't know the difference off the top of my head. For me, when we're talking about a demo, as long as it works, that's fine; the most important thing is that you understand what's happening. Usually I say it's like test-driven development: first I need to make it run, and once it runs, I look at how to improve. So again, I think it's a good chance to look at the difference between them, but my way is running first, then improving; refactoring comes later. No problem, my pleasure. So, as you can see, I will now copy the values in: the login server of the container registry, the container registry name, the name of the database, and this is the SQL server. "Mohamed, sorry, when you copied the name you made a mistake." Yes, thank you, I just noticed that; I will fix it. What's your name, and where are you based? "From France." OK, hello. So yes, as you can see, here I put the name of the actual SQL server, so I will replace that; and this is the Run Services task I told you about. Now let's go to the Replace Tokens task. As you can see here, this is the file it will look at: this is the directory, and the file is appsettings.json, the configuration file for the web application, which includes the connection string. So let's navigate there: this is the configuration file, and here is the connection string. As you can see, this is the SQL server name placeholder; these are the variables, and they will be replaced while the pipeline is running (the build pipeline, sorry) by the values inside the variables. And as you can see, each token has a prefix and suffix, the double underscore, so the task can identify it. This is for the web app settings. So now
I go to the variables again: these will be replaced as I explained, including the SQL username and the SQL password with the actual values I just put in. So that's the app settings for the web application. The second Replace Tokens task replaces values inside the manifest file of the Kubernetes deployment, and here I will be replacing the name of the Azure Container Registry. Let me show you that: this is the manifest file, and if you go down, as you can see, this is the token for the ACR, the Azure Container Registry, and I will replace it with the actual name. And then here is the docker-compose file, which includes pulling the ASP.NET Core image; as you can see, this builds the .NET solution, this is the working directory storing the solution, and here it builds the application. Now let's navigate to the deployment pipeline, because the previous pipeline was the continuous integration, the build, and this is the deployment. As you can see, I have three tasks here. The first one executes the dacpac file: this creates the schema of the database and the initial lookup data. Again, I also need to authorize, to give the release pipeline access from Azure DevOps to the Azure subscription, the same authentication step; and here are the username and password that will be used, and here the path to the dacpac file. The second one, as you can see, is Create Deployments and Services: the manifest file of the Kubernetes deployment is just a list of all the deployments and services that will be created. In our case we have two deployments and one service, the load balancer. So again I need to authorize, and this is the resource group.
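To make the token mechanics concrete, here is a small local reproduction of what the Replace Tokens task does to appsettings.json. The file contents, server name, and password below are made up for illustration; the `__...__` prefix/suffix convention is the one described above. Assumes GNU sed.

```shell
# Sample appsettings.json containing placeholder tokens in the
# __PREFIX__/__SUFFIX__ style used by the Replace Tokens task
cat > appsettings.json <<'EOF'
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=__SQLserver__;Database=mhcdb;User Id=sqladmin;Password=__SQLpassword__;"
  }
}
EOF

# Replace each token with its actual value, as the pipeline task would
sed -i 's/__SQLserver__/mhcsqlserver.database.windows.net/; s/__SQLpassword__/P2ssw0rd1234/' appsettings.json

cat appsettings.json
```

The real task does the same substitution using the pipeline variables as the value source, so the actual values never live in source control (and in production they would come from Key Vault, as discussed).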
And of course this is the AKS, the Azure Kubernetes Service; here is also the container registry, and here we update the image so I can push the changes. OK, now I've updated all the values; as you can see, this is the Azure Container Registry, and I updated it in the continuous deployment pipeline as well. Now let's run the build. What I'm doing now is running the build, not the deployment, and let's see what happens. Again, I'm using the Ubuntu hosted agent, because this is .NET Core, so it can run on Linux or Windows. As you can see, the build is running. Let's navigate to the Azure Container Registry: if you remember, what the build does is create the image, put the application on it, tag the image with the build number, and push the image to the container registry, so I will show you here when the image is pushed. Here is the Replace Tokens task, here is the Lock Services task... and as you can see, I now have my image tagged with the build number. The build is complete, and because I have continuous deployment enabled, once the build completes and the artifacts are stored, the release pipeline is triggered. So this is the continuous deployment pipeline, triggered. As you can see, the first step is downloading the artifacts; do you remember what the artifacts are? The YAML file and the dacpac file. It will not download the image, because the image is pushed automatically from the private container registry to the Kubernetes service directly. It downloads the build artifacts from the Azure DevOps repository, and then here you can see it creating the database schema, and even the data, by the way: you can see the creation of foreign keys. Now it's complete: the artifacts are downloaded and the dacpac executed. Now let's connect. What I'm doing is just getting
credentials so I can log in to the Kubernetes service; this is just like logging in, and it returns this value, so now I'm authorized. I run kubectl get pods to get the information, the metadata, about the pods currently running inside the Azure Kubernetes Service. As you can see, I have two pods: the first one is MHC (My Health Clinic) backend, and the other is the My Health Clinic frontend. So these are the pods. Then I use kubectl get service for the frontend, with the watch switch, so I can get the IP of this service; if you remember, each pod has a unique cluster IP, right? So now I get the IP for the frontend pod, and I will copy the public one, of course, not the private, so I can access it externally, not only from a virtual machine on the same cloud network. Then I browse to the application, and as you can see, the application is deployed, with its database, and I can access it. "I have one question." Yes, can I have your name, please? "My name is Sumi, I'm from India. Thank you for the great demo. My question is about how you managed the integration between the ACR and AKS: you are pulling the image from ACR to AKS to run the pods, so how did you manage that integration?" Yes, the integration has several parts. The first part, if you remember, is when we created the role assignment between AKS and the ACR: when provisioning at the beginning, I created a role assignment using the service principal's client ID and the ID of the Azure Container Registry. That gives this access, so this authentication is used between them. And from the pipeline side, the rights come from the Authorize step, as you saw when I authorized; that gives the pipeline the rights to interact with the container registry and the Kubernetes service.
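The kubectl steps just shown, as a sketch; the cluster and service names are illustrative, not necessarily the exact names from the lab.

```shell
# Merge the cluster credentials into the local kubeconfig
az aks get-credentials --resource-group akshandsonlab --name mhcaks

# Metadata about the pods currently running (expect the MHC front end and back end)
kubectl get pods

# Watch the front-end service until the load balancer is assigned an external IP
kubectl get service mhc-front --watch
```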
Is there any question? "OK, I think I got it myself: when we first provisioned AKS, you created the role between the ACR and AKS." Yes, exactly. No problem. So now I will use kubectl, which is the command-line interface for interacting with Kubernetes. As you can see, it is not installed on my machine, so I will install it. To install it on a Windows machine I need the Azure CLI first, and because I have the Azure CLI, this command installs the kubectl command line. Then I just need to add the path to kubectl to the environment variables, so I can load it from any location on the command line. And now, as you can see, if I run kubectl from anywhere it just loads; but again, the Azure CLI needs to be installed first, and once you have the Azure CLI, this is how you install kubectl. Once I have kubectl, I just need to grant permission by creating a cluster role binding, to give access to the Kubernetes dashboard. Once I've done that, I can access the dashboard: this opens a proxy tunnel to the Kubernetes cluster. Just give it time to load, and as you can see, now I have the Kubernetes dashboard, and it includes the deployments, the replica sets, and the pods. You can see I have two pods here, the backend and the frontend, and because I defined only one replica for each deployment, I have only one pod for each part, but of course you can define how many replicas you want for each. Now let me show you the part I explained before about the monitoring, because we enabled the monitoring capability.
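The dashboard steps above, sketched with the Azure CLI of that era (`az aks browse` has since been deprecated in favor of newer tooling, and the cluster-admin binding is demo-only, far too broad for production):

```shell
# Install kubectl through the Azure CLI
az aks install-cli

# Grant the dashboard's service account broad rights (demo only)
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard

# Open a proxy tunnel and launch the Kubernetes dashboard
az aks browse --resource-group akshandsonlab --name mhcaks
```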
Of course, I can go inside and look for more information: I can get insights about what's running, for example here are the nodes, the health of the nodes, and the containers running on them as well. I have all these capabilities. ("Could you please mute your mic?") Yes; so this is what we call configuration as code, or infrastructure as code. Usually you don't touch the environment: you don't go to the portal or the command line and change it there; you change it first in the YAML file. For example, let me show you: if we open the YAML file, this is the deployment YAML file, and the replica count for the backend is one; this is the backend, and here, for example, is the port for the frontend, whose replica count is also one. If I edit that and change it to two, which means I add one more replica, one more pod, for this frontend or this backend, then I run the pipeline again. This is exactly what we mean by infrastructure as code: everything is under source control, and I can look at the history, see when a change happened, who made it, and so on. And as you can see, I can scale the nodes up or down very easily, and I can also upgrade the Kubernetes version; this is exactly what I'm saying about AKS giving you an abstraction for interacting with Kubernetes as a platform as a service, and I can also do all of that from the command line. So that's it. "Mohamed, can I ask: what would be the case if we have an update in our code and generate a new Docker image? How will it be updated in Kubernetes?" Sorry, what are you asking?
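The replica change described above is a one-line edit in the deployment manifest. This fragment is illustrative, not the exact lab manifest; the names and image path are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mhc-front
spec:
  replicas: 2              # was 1; commit this and re-run the pipeline to get a second pod
  selector:
    matchLabels:
      app: mhc-front
  template:
    metadata:
      labels:
        app: mhc-front
    spec:
      containers:
      - name: mhc-front
        image: __ACR__/myhealth.web:latest   # token replaced by the pipeline with the real registry name
```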
"If we are creating a new version of our application, the build pipeline will create a new image, yes? This new Docker image, how will it be deployed to Kubernetes?" It will use the same flow: as you can see, you push the new image to the private registry, and then the deployment pipeline picks up this image and does the same thing. OK, so these are the commands, as you can see, and I will include them with the video. And of course your support is really needed. Upcoming sessions: next Saturday there is also a session about open source, for group B, and there will be one for group E as well. As you saw in this video, we used the pipeline as a visual pipeline, but it's also recommended, if everything is under source control, to use pipeline as code, using YAML, so that session will be about that. Any other questions? Thank you. OK, great, really thank you so much to the people attending, and my apologies to the people who couldn't join for any reason. "Hello, just a question: if you have a new version being developed and the functionality doesn't work, how do you revert back to the previous version?" First, you shouldn't reach that point, deploying and then finding the functionality isn't working; I would say I have a concern about that scenario, but OK. The main idea is that once you deploy the application, for any reason, you can just push the previous image. So you may have another pipeline; this would be a workaround, I'm just giving you an option, another pipeline which can just
push the previous image. Because if you look at the manifest file, it deploys the latest image: the private registry, the Azure Container Registry, has many images, right, and in the manifest file we specify that we want the latest. So you may have another pipeline which just pushes the image and specifies exactly which version, using the tag. But again, this is a workaround; you can design a better scenario. In the end, because you have full automation, you can adapt it the way you prefer. For example, if you have fully automated tests, then after provisioning you run the automated tests, and if they fail, it automatically rolls back by pushing the previous image; you can have a fully automated rollback. Again: if you can do it manually, you can automate it, and if it is automated, you can improve it. And this is exactly what I keep saying: I'm not looking for improvement without having it work first, and I'm not looking for automation without doing it manually first. You do it manually, you automate it, and then you keep improving the process; you refactor later. And remember, you refactor because you don't want to accumulate technical debt. OK, great, again thank you so much to the people attending, and my apologies again for the situation today. I think if you keep connected to the Facebook page we can sort out this kind of issue with the numbers; I will think about how we can handle it better, but I didn't expect all these people to join at the same time. "Mohamed, this is Salam from France. I was waiting on the Facebook page; I thought it was launched from there, so I missed the first 20
minutes. Will the recording be available, along with the source code you indicated earlier?" Everything will be available: everything was recorded, and everything will be published. Even all the questions: I keep a list of FAQs, frequently asked questions, and I keep it updated, so you will find where the video is and where to find everything. And I'm thinking, because even I couldn't join the first part myself (I was expecting 250 people, and even more people emailed me saying they couldn't join), maybe I should repeat this event because of the problems that happened. For me, I've now hit the limit of 250 attendees, so I need to figure out how to manage a situation like that; of course, one possibility was that this many people would come, but I was not expecting it today. "Well, thank you; the session was very interesting and very useful. Where can I get those FAQs and URLs?" Everything is on the Facebook page: if you got the notification, just scroll down on the Facebook page and you will find all the information, the code of conduct, the FAQs, everything. "And for the next webinar, should we go to Facebook or wait for an email from your side to get the link?" I prefer to use Facebook, but many people still don't use it, which is why I'm still using email as well; and even with email I have a problem, because I now send a very large number of emails and I got blocked. I still do both, but I prefer that in the future people understand that our point of communication will be the Facebook page; until
we reach that point, I'm using both. Of course I prefer Facebook because, in the end, once a problem happens I can just put the link of the meeting on the Facebook page and that's it; whereas now I need to send a new email invitation to 700 people, and a single email is not allowed to include more than 500 recipients outside the organization, so it got blocked the first time. It's a very long story. "Now I know; I thought the link would be on the Facebook page, so I kept waiting there." Actually, I sent the link to people by email, but anyway, OK, no problem, thank you.
Info
Channel: Mohamed Radwan - DevOps
Views: 2,322
Keywords: Azure, Kubernetes, CI/CD, Live Webinar, Azure Kubernetes Service, DevOps, Cloud, devops career path, learn devops, community
Id: QSapxLWZYq4
Length: 90min 28sec (5428 seconds)
Published: Sat May 02 2020