Deploying Python app to AKS using Azure DevOps

Captions
Hello everyone, good morning to all of you who are joining us today from India. Today's session is going to be really amazing, because we have a really amazing speaker. Before I introduce her, let me quickly run you through what this community is about. We have been doing so many meetups in this community, the Azure Developer Community, and we have currently crossed 6,000 members.

The Azure Developer Community started with three basic ideas. We are here to help everyone learn different technologies by sharing ideas and giving people a platform to learn. A community is also a big opportunity to network: we have many experts, organizers, speakers and mentors here with whom you can network and grow. And we give a platform to experienced people who have something to share, so that the community can upskill and grow with us. We started this community on the 26th of January, and within a span of just three to four months we have crossed 6,000 members across 100-plus cities; we are not just touching the high-tech cities but also the tier-2 and tier-3 cities. Our organizers have been running many events, and more are coming up under the banner of the Azure Developer Community; today we have Mamta with us for this session. If you haven't joined one of our chapters yet, just search for your nearest one and join it, so that you get regular notifications about events happening near you. We also have many partner communities, for whom we are really thankful; they help spread the word about the events we are doing, and we collaborate together. We have a set of community organizers who run these events regularly in their chapters. We have also recently launched a Speakers Bureau, which will help you get connected with experts who are ready to share knowledge within the community; we already have a great list of speakers, so you can visit the website and check it out.

So today we have Mamta from TechScalable, and she is going to give a wonderful session on Python on Azure. I'll bring her on board and she can tell you what she is bringing to the plate for all of you.

(Mamta:) Hello Rohit, hello everyone, I hope you are all keeping safe. (Rohit:) We are really grateful to have you today. This is the first time you have joined us here, and we have so many people joining from across India. Can you tell us what today's session is going to be about? (Mamta:) Today's session is about deploying a Python application to an Azure Kubernetes Service cluster. We will automate the entire process of creating a containerized image of the Python application using Azure DevOps, and we will deploy the application to the AKS cluster in an automated fashion.
We plan to use all the tools depicted as icons on the right side: Docker, Kubernetes, AKS, Azure DevOps, and certainly the Azure platform. We will use a few other services as well, like Azure Container Registry, to integrate, push and keep our images.

A brief introduction about myself: I run a company called TechScalable, and I have around 17-plus years of experience. I belong to Odisha; I was born and brought up there, did my engineering there, and I am married into an Oriya family. So when Rohit approached me for this particular session I immediately said yes, looking at Bhubaneswar, because it is obviously very close to my heart. From the time I came to Bangalore I have worked on quite a few things. I started my journey as a C/C++ developer and then moved on to all three clouds; I initially started with OpenStack, but that didn't capture that much of the market in India, so I moved to public cloud. These days we mostly do automation: containerizing applications and running them in Kubernetes is what we love to do.

So let's start with today's session. We will take a Python application, containerize it, and deploy it to a Kubernetes cluster, but that Kubernetes cluster will be Azure Kubernetes Service. We will use Azure Container Registry, and we will use the Azure DevOps tool, which gives us Azure Repos and Azure Pipelines; we will write a YAML pipeline that does automated integration and automated deployment. I have not pre-cooked things and kept them ready; normally we pre-cook for a demo, but I have not done that, because I want to deploy everything from scratch. I just have a virtual machine running, in which I have done nothing apart from keeping my code in this folder.

(After a bit of back and forth about which screen is being shared:) This is an Ubuntu 18.04 machine. I have done nothing on it: I have just executed apt update and installed Docker with an apt install command, nothing else, and I sit and develop my code here. As a developer, we sit and create the Python application on our workstation, so I have named my Ubuntu machine "my workstation". This is the folder, and this is my app.py file, which does nothing much other than serve this index.html page; index.html lives in the templates folder, so cd templates and there it is. When we open the web page it will say "Namaskar", tell you that we have automated all of this using these tools, and say "keep learning"; that's my message for you.
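(Editor's note: the file is only glimpsed on screen, but from the description, Flask serving templates/index.html on port 5000, app.py is plausibly something like this minimal sketch; the exact contents are an assumption.)

    # app.py: a minimal sketch of the demo app (assumed, not shown in full on screen)
    from flask import Flask, render_template

    app = Flask(__name__)

    @app.route("/")
    def index():
        # serves templates/index.html with the "Namaskar" message
        return render_template("index.html")

    if __name__ == "__main__":
        # the app is hard-wired to listen on port 5000
        app.run(host="0.0.0.0", port=5000)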
Now, if I have an application like this, how do I work with it? I first need to containerize my application, so we will go step by step (let me just move this window to the other side).

We have the application: my app.py and my supporting index.html template. That is my Python app, and it can run anywhere; I could deploy it on a virtual machine as well, but I don't want to do that. I want to containerize it, in the sense that I want to run it in a container. For that I have to prepare a container image, meaning I need to put this application inside the image of a container. For that we do quite a few things. First I create a Dockerfile; a Dockerfile is used to create Docker images, so I write a Dockerfile for the application (you can think of my Python application as a microservice-based application). The Dockerfile is executed with a build command, and it produces an output which is nothing but a Docker image.

I will first do things manually and then show the automation part, because unless and until you do something manually, you will not know how to automate it. So: I have a virtual machine in which my code is residing. I need a Dockerfile with which I can containerize my application into a Docker image, and then I will manually run a container on my system. I will use the docker run -d command, because I want to run my container as a background process, and I will expose a specific port number on the host machine where I can reach this application when it is running as a container (I'll tell you a little bit about container networking then as well). My containerized application is expected to listen on port number 5000, and whichever image I produce with docker build is what I will provide to the run command. So that is the first step; then we move on.

If we go back to this folder, the Dockerfile is residing here; let me open it and discuss what exactly it is. A container always comes up from a base image; the base image provides the operating system on top of which you deploy your application. I am starting my container with a base image of alpine:3.5. Alpine is a very thin operating system from the Linux family itself, very thin compared to Ubuntu and other OSes. I am installing the Python pip package, and then copying a file: the COPY instruction copies a file from the host machine. You can see I have a requirements.txt file here; in requirements.txt I write all the packages I need. Today I need just one single package, but it could be that I need multiple packages. Instead of writing the install command several times, I write the pip install command once and provide this text file, in which I have listed all my packages. With the -r flag, pip reads requirements.txt and installs every package mentioned in it.
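(Editor's note: for this demo the requirements file likely holds just Flask; the exact contents are an assumption.)

    # requirements.txt (assumed contents)
    flask

Inside the image it is consumed with pip install -r requirements.txt.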
That's the intention behind writing that line. Next I copy my code file, app.py, and I copy my index file as well: app.py is copied into /usr/src/app inside the container, and index.html is copied inside the templates folder. The EXPOSE statement is just metadata: it informs the user of the image that when you go ahead and deploy a container from it, port number 5000 is what your container application will use. Why did I mention 5000? On the host machine I can pick any free port and mention it as the first parameter of the mapping, which I have left blank here, but 5000 on the container side is constant, because my application is made to listen on port number 5000; I would not be able to change that, because my application is made that way. That's the reason we specify it here: if someone uses an image created from this Dockerfile, the person knows that 5000 is the port number which will be used. The last statement in the Dockerfile (it's not necessary that it is the last instruction in the file, but it normally is) is the CMD instruction, which says what process will run inside the container once the container starts. I am saying that once the container starts, my app.py gets executed.

Let me verify the docker command is working. Yes, Docker is installed and the command works. Installing Docker is pretty simple: you can just run sudo apt install docker.io -y (I've already done this). After this I should be able to build from my Dockerfile and produce an image. The docker build command picks up the Dockerfile, and you see here the step-by-step execution of all the statements; at the end the entire Dockerfile has executed: Flask is installed, the index file is copied, app.py is copied, everything is done, and it has produced this particular image. You can execute the docker images command to list the images; you can find that this image ID matches the image ID which was produced, but, strangely, there is no name for it. That is because in the docker build command I did not give a tag. I can tag it with the docker tag command: I give the ID and specify a name for it, a name like my-python-app for example, and I give the version as latest. Now the image is produced.
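(Editor's note: reassembling the instructions she reads out, the Dockerfile looks roughly like this; the paths and the apk package name are assumptions based on common Alpine Flask samples.)

    # Dockerfile: a sketch reconstructed from the walkthrough
    FROM alpine:3.5

    # Alpine ships without Python, so pull in pip (brings python2 with it)
    RUN apk add --no-cache py2-pip

    # install everything listed in requirements.txt (here: flask)
    COPY requirements.txt /usr/src/app/
    RUN pip install -r /usr/src/app/requirements.txt

    # copy the application code and its template
    COPY app.py /usr/src/app/
    COPY templates/index.html /usr/src/app/templates/

    # metadata: the app inside listens on 5000
    EXPOSE 5000

    WORKDIR /usr/src/app
    CMD ["python", "app.py"]

She builds untagged and names the image afterwards with docker tag; passing -t to docker build, as in docker build -t my-python-app:latest . , does both in one step.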
Let's see whether my application actually runs inside a container before I plan to deploy it in a Kubernetes environment. For that we execute the docker run command, which creates and starts a container. I plan to start the container as a background process, so I give -d (d stands for detached mode, or you can see it as a background process). Then I have to give -p: on this host machine I have opened all the port numbers, but obviously that's not safe; let me give port number 5000 for the host machine and port number 5000 for the container. The first parameter stands for the host, the second parameter stands for the container. I could even give it as 80:5000; that would also work. We just need to make sure the container's packets get forwarded in a proper way. So how does this work? At present this is what we are planninging to do; I'm just trying to explain it so it makes sense for you.

This is my virtual machine running in Azure, and it obviously has an ethernet interface, eth0, from where I have SSH'd into this machine. I plan to run a container inside it. As soon as Docker starts running on a system, a bridge gets created. This bridge is there at the Linux level as well; you can list the Linux bridges with the brctl show command. At the Docker level this network is called "bridge", but at the Linux level the bridge is called docker0. Whenever we create a container in the default bridge network, it gets hosted there. So when I execute a docker run command, a container C1 comes up, grabs an IP address from the subnet range of the bridge, and is up and running, listening on port number 5000.

But remember that a container always gets a private IP address, and a private IP address is not accessible from outside. So when I want to reach my application from outside, what do I do? I grab a port number on the host machine and say that this container is attached to this particular port number on the host. In the world of Docker this is called port mapping: mapping, or publishing, a port on the host, so that a host port is tied to one container's port. What exactly happens at the back end when you write -p 80:5000 or -p 5000:5000? As soon as you do that, an iptables rule gets written: whenever a packet arrives on the VM's IP address on, for example, port number 80 (the container's port is anyway 5000), accept the packet and forward it to the bridge, and the bridge forwards it to the container's IP address on the container's port number. That's the reason we reach the container: a user accessing our application on port number 80 directly reaches our container.

So we plan to do this; let's see if we succeed. I need to specify the image with which I want to bring up my container. Let's confirm the container is running with the docker ps command: yes, the container is up and running on port number 80. Let me grab the IP address of my virtual machine and go and access it. I think I picked up the correct one, my workstation, and yes, you see this welcome message being served from within my container. I'm not hosting anything on my host machine itself on port 80; I'm hosting a container mapped to port 80 on the host machine. This is looking good: our application is running, and the container was created successfully. Now it's time to set up our AKS cluster, our container registry and all that, so let's understand that part as well. Hopefully this part is clear. Rohit, if there are any questions, where do I see them, in the comment section? Okay, sure.
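(Editor's note: the -p behaviour she draws can be inspected on any Docker host; a quick sketch, with a made-up container name.)

    # run the demo image, publishing host port 80 onto the app's port 5000
    docker run -d --name pyapp -p 80:5000 my-python-app:latest

    # the Linux bridge Docker created
    brctl show docker0

    # the DNAT rule Docker wrote for the published port
    sudo iptables -t nat -L DOCKER -n

    # reach the app through the host port
    curl http://localhost/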
So now we have our workstation. I'll draw the workstation a little smaller this time, because I need to draw a few other things, and I'll draw my Azure cloud account here; let me pick a different color for it. This is my Azure cloud account (the Azure cloud, not Azure DevOps), and this is the workstation. This workstation could be my laptop as well; my code is residing here. I have installed the Azure CLI on it so that I can interact with my Azure cloud account, but I have not configured it, because I wanted to show you how to configure it. So the Azure client tool is installed, but it can't talk to my Azure cloud account yet. You can create your AKS cluster from the CLI, or you can create it from the portal; I am going to create the AKS cluster from the portal. My AKS cluster will reside here in my Azure cloud account; let me draw a small cluster with two nodes, worker node 1 and worker node 2. I plan to deploy my application there. (If you guys need my GitHub link, one second, hold on; I think my project is accessible. Rohit, I'm posting the link, and I'm not sure if others are getting it, so if they don't, can you post it for them? It's a public project, so you should be able to access it; if not, just let me know.)

So I have worker 1 and worker 2 on my AKS cluster. I also need to create another component where I will push my image. Understand that at present the Docker image is residing here: I have the OS, I have Docker, I executed the docker build command, I created the image, and it is residing in the file system of this server. But I can't share it with anyone else unless and until I push it to a central location, right? That central location will be my Azure Container Registry. I'll create a private registry here, and in it I will keep my Python app image, so that on the fly, when the Kubernetes cluster needs the image, it can pull it and deploy the container. So let's first create the AKS cluster and the Azure Container Registry; we will do both together.

Kubernetes Service: if you don't find it at the top because you have not used it before, just type "kubernetes" in the search bar and you should see Kubernetes Service as one of the options. I select it, and on the left-hand side there is a "create a cluster" option, so let's go ahead and create a cluster. In Azure, all resources are segregated, or grouped, inside a resource group, so it asks me to create a new resource group or select an existing one. The advantage of creating a resource group is that when I want to delete everything, I can delete it in one shot. Let me check: I have an aksdemo resource group, so I select that. It's very simple to create one: just write a name, and if it is syntactically correct Azure accepts it and you press OK. I give the cluster name: aksdemo. I select the East US region. Availability zones determine, if I am creating multiple servers in my AKS cluster (worker node 1, worker node 2, worker node 3), whether all of the servers will reside in one zone or be spread across different zones; I have chosen all three, to spread across all three zones. The version of Kubernetes I've picked is the default, 1.19; we prefer to go with that because it's well tested. I change the size of the machine: you can go with B2s, 2 vCPU and 4 GB RAM, which is the minimum size needed.
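(Editor's note: she clicks through the portal, but the same choices map onto the CLI roughly as follows; the names and sizes are the ones on screen, everything else is assumed.)

    # sketch: the portal selections expressed as Azure CLI
    az group create --name aksdemo --location eastus

    az aks create \
      --resource-group aksdemo \
      --name aksdemo \
      --node-count 3 \
      --node-vm-size Standard_B2s \
      --zones 1 2 3 \
      --network-plugin azure \
      --network-policy calico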
For the scaling option there is manual scale and autoscale; we know a cluster can autoscale on demand. If I want that, I can click on autoscale and say start with two servers and, when the load increases, go at most to five. So these are a few of the options, and after that you keep going to the next tab and updating each of the parameters. Now, our AKS cluster can have different sizes of machines; for example, I can have five machines of 2 vCPU / 4 GB RAM and three machines of 8 vCPU / 16 GB RAM. How do I specify that? They are segregated into agent pools; a pool has nodes of a similar size. I am having B2s-size nodes, in a count of three. If I want to add more, I can add a pool from here and choose what the size would be; I'm not doing that, I'll live with a three-node cluster. I'm keeping a few of the options here as they are, like virtual nodes and VM scale sets.

The authentication mechanism: normally when we have authentication across services (my Azure Kubernetes Service will interact with other Azure cloud services), there is authentication involved. AKS will go and pull the image from ACR: why would ACR allow it to do that? There has to be an authentication success first, and only then will that happen. Inside Kubernetes you also have role-based access control: for example, I can say that on the cluster I am creating, Mamta is an administrator and can do anything on the cluster, Rohit is a developer with limited access, and Venkateshwar and Sumit belong to the test team and can do just a certain set of actions in the cluster. That's the reason I am enabling it, even though we might not be utilizing it now. The nodes will have disks, and the encryption of the disks can be chosen here.

In the networking section I can use Azure CNI or the Kubernetes default networking. With Azure CNI, an Azure virtual network (VNet) gets created and it has a subnet; I can define the bridge IP address range for Docker and the service ranges for Kubernetes; I have the liberty of doing that here, so I am choosing it. I already have a standard load balancer, the normal Azure load balancer, the network-layer load balancer, integrated with my AKS. We normally switch on the Calico network policy or the Azure network policy; this helps us in limiting things. Say I don't want two pods to communicate with each other: I can limit them later by applying policies saying pod one can't talk to pod two, or pod two can't talk to the outside world, that sort of stuff. So I am incorporating that as well.

In the integration section I go ahead and create a container registry; a create button is given right here, so let me quickly create it. The registry name has to be unique across Azure, because it has a DNS name associated with it, so I'll say aksdemo and add my initials, mj. It will belong to the same resource group and be in the same region. I am enabling admin user access, so that when I want to log in and push my Docker images I can do that; I'll get a password there. And here I press OK. So these are the parameters for my container registry. I am disabling container monitoring because we are not using it and it charges heavily; if you are just trying this out on a pay-as-you-go account, it's good to disable Container Insights.
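(Editor's note: the registry creation has a CLI equivalent too; a sketch, with the SKU assumed.)

    # sketch: create the private registry with the admin user enabled
    az acr create \
      --resource-group aksdemo \
      --name aksdemomj \
      --sku Basic \
      --admin-enabled true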
Container Insights is a little bit costly as a service now, but it does a lot of stuff, and that's the reason it's costly. (Answering Tejaswini's question from the chat:) Yes, Azure creates a load balancer, and our virtual machines sit behind it; that is the one which is already integrated. And tomorrow, when I deploy my Python application and want to expose it to the outside world, I will expose it with a Service of type LoadBalancer; at that time AKS can directly create a load balancer in Azure because of this integration. These days it is there by default; you don't have a choice: your AKS can talk to the Azure load balancer service by default.

So these are the parameters we have chosen. Validation has passed; we hit Create, and we will have to wait for some time. Cluster creation takes nearly five to seven minutes, and it will bring up two things: an AKS cluster and an Azure Container Registry. So we will have those in some time. Meanwhile, let's go and authenticate our virtual machine with our Azure account. az login is the command to log into your Azure account. I'm not sure if you people are able to see this yellow message: it asks you to go to a page in the web browser and enter a code which has been specified here. So I pick up the code, go to the browser, enter the code which was prompted, mark next, sign in, choose the account, and say continue. Authentication is successful. Let's go back: in some amount of time you should get the prompt back here, and the prompt comes back with all your Azure account details. I have multiple subscriptions, so you see a lot of them get listed. Now, in which subscription will I be doing the actions? That needs to be pretty clear, because I have multiple of them.

There is a command to list the accounts as well, az account list; that also gives the list of the subscriptions in my account (oh, that was a typo in "account", sorry). And then az account set --subscription. When you work on several clouds, many a time your brain doesn't cooperate with respect to which command you have to enter now, and I was a bit skeptical typing it. I want to work with a particular subscription, so let me just choose it and see whether I have written the command correctly. Awesome. So it is az account set, and you give the subscription ID. Now I am in the Azure subscription in which I actually want to execute my commands; my system is now configured to talk to my Azure account.
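(Editor's note: the login-and-subscription steps she just ran, collected as a sketch; the subscription ID is a placeholder.)

    # device-code login: prints a code to enter at the URL it shows
    az login

    # list every subscription visible to this account
    az account list --output table

    # pick the subscription the demo resources live in
    az account set --subscription "<subscription-id>"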
Now let's go back and check the status. The cluster is not up yet; it's still in progress. Remember that this is a managed cluster: it does all the work and gives you a ready-made, ready-to-use cluster. A lot of things (installing the OS, Docker, Kubernetes, creating the cluster, joining the nodes) are happening at the back end which we are not doing ourselves, because it's a managed service; that's the reason it takes some amount of time. Let's leave it in peace and see if our container registry is created. For the container registry, just type "container", find Container Registry in the drop-down, go and check: yes, it is created. aksdemomj is the name, and these are its login details. Let me show you how to access them: just type "admin" here and you should get the access keys. I need to push the image in an automated fashion eventually, no doubt, but I mentioned that we do things manually first and then automate. So I will manually push the image to my ACR, manually deploy my application to the AKS cluster, and then the hero of the session, Azure DevOps, will come into the picture.

This is my username. Let's go back to the workstation: az acr login is the command (sorry, I mistyped it first), and I specify the name of the Azure Container Registry with -n. Let me pick up the name of my registry first... yes, correct, that is the same. It would normally prompt for a username and password, but here it succeeded immediately, because my az credentials have already been configured; if they were not, it would surely have asked. It says login succeeded.

Now my Docker images are residing here, but I can't literally push the image like this. Why not? To push an image to a registry server, the naming has to be proper. What is the naming we have to provide? Let me get my writing pad. The name of an image is always registry/repo:tag. This first part is my ACR name; this is my-python-app, the name of the image I want to give; and this is the version, say version one. Now, if I build and push an image every day, the first two parameters remain the same and I just keep changing the version: version one, version two, version three, like the various software releases, nginx 1.17, 1.19, 1.20, 1.21, that sort of stuff. My intention is to tag the image in a proper way so that I can push it to my Azure Container Registry. Again using the docker tag command, you give the old name and specify the new name: I give the old name, then my ACR name, a slash, and the same repo name. If you run docker images now, don't think there are two images, guys; if you see the image ID, it remains the same, and the size is also the same. It is one image with two different names, like the same person being called by two different names.

Now I do a docker push. Where is it going? To my Azure Container Registry. Let's quickly verify: in the portal you have Repositories on the left, so go and click on it. Yes, my-python-app is present, and it should have a latest tag; whenever you push an image, it gets updated here. (Thanks, Praveen, for posting the az account set --subscription command in the chat, guys.) If you are interested in the commands we executed, let me type history: I set the subscription, logged into my ACR giving the ACR name, listed the images, tagged the image, and pushed it to my Azure Container Registry. The image is pushed manually; now the deployment also needs to be triggered manually.
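(Editor's note: the manual tag-and-push sequence, as a sketch with the registry name from the demo.)

    # authenticate Docker against ACR (reuses the az session)
    az acr login -n aksdemomj

    # registry/repo:tag naming: same image ID, second name
    docker tag my-python-app:latest aksdemomj.azurecr.io/my-python-app:latest

    # upload the image to the registry
    docker push aksdemomj.azurecr.io/my-python-app:latest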
For that I have to go to my Kubernetes service and see if it is up and running. Sorted, everything is up and running. But how do I make my Ubuntu machine talk to my AKS cluster? (Someone says the fonts are too small; okay, does that make sense now? Your name is so coded, SMVK, amazing.) az aks get-credentials is the command to fetch the AKS cluster's credentials, so that you can configure this machine to talk to your AKS cluster. I just execute it (you can also type --help) and it tells you: hey, you have missed these, go ahead and supply these options. So I pick them up from here: --name is the name of my cluster, which I think was aksdemo, and the resource group is also aksdemo. Let me specify those and execute it.

When you execute this command, what happens? A file gets copied from your Azure account to your local machine; let me show you that file. From this machine I can talk to several Kubernetes clusters; it's not necessary that you talk to just one, right? Whenever you are talking to a Kubernetes cluster, there is a variable called KUBECONFIG that gets set, and that cluster's context and that user's context are picked up. So this file got auto-copied from Azure. It has the cluster details: this is my cluster's certificate (I can't see my master, but this is the master, wherever it is running in the Azure account) and the name of the cluster; there is a context for the user; I am the administrator for this particular cluster; this is my certificate and these are my client keys.

So when I execute a kubectl command (let me clear the screen), if I execute kubectl get nodes... oh, amazing, I don't have kubectl, because I never installed it. az aks install-cli, then; I need to install it, so let me pick up the command (sorry for the many typos). Permission denied. Why? How would it allow it: I'm not running as a sudo user, and the ubuntu user, or the mamta user on this system, is not allowed to install packages; that's the reason it was unable to do it. So I just became root with sudo su (I would remain root for this, to be on the safer side), and let me try executing the command again. Now this works: the kubectl CLI is installed.

And the context is set for the mamta user, so mamta can go ahead and execute kubectl commands. If I wanted the root user to execute the command, why didn't it execute as root? Because the root user's home directory doesn't have the config file which we downloaded just now. When you execute this az aks get-credentials command, the file that gets generated lands in /home/mamta, so mamta as a user can execute kubectl, but your root user can't, because root doesn't have access to that file. If I copy this file to the root user's home, then certainly, yes, root can also do it. (sudo is for superuser permission: with sudo I become the root user, and the root user has the permissions to install packages on a system; that's the reason I did that. Akansha and Tejaswini, in case you people were puzzled by that: a normal user on a system can't execute these commands for installing packages.) So I did the install of the CLI and executed kubectl get nodes, but before that we had to actually execute get-credentials so that the config gets copied; if that file is not present, kubectl get nodes gives you that 8080 error. Now my cluster is also up and I have an image, but how will I deploy it?
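(Editor's note: the wiring-up steps in one place, as a sketch.)

    # copy the cluster's kubeconfig into ~/.kube/config
    az aks get-credentials --resource-group aksdemo --name aksdemo

    # install kubectl (needs root, hence the earlier sudo su)
    az aks install-cli

    # fails with the localhost:8080 error if the kubeconfig is missing
    kubectl get nodes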
kubectl is the command-line interface for talking to a cluster. I should have a YAML file with which I can deploy my application to the cluster, and I have written one; there are two definitions in this YAML with which we will deploy our application to the AKS cluster. So let's understand this file, and let me tell you a little bit about Kubernetes, because I'm not sure how many people are aware of it.

In Kubernetes we can deploy an application with the help of a controller called a Deployment. A Deployment is a controller which has the ability to create containers; containers in the world of Kubernetes are wrapped in pods. A pod actually has a container running inside it; you can think of it as a wrapper on top of your container. This pod gets deployed in the Kubernetes cluster, and someone sits and monitors it: we tell this Deployment, "hey, you have to manage this application, and two replicas of it" (or three replicas), and it is the Deployment controller's task to do that. Deployment controllers are normally used for highly available, scalable applications which we want to scale. So this application will get deployed in a container in the AKS cluster, but it will be wrapped up by a pod and controlled by the Deployment; if I say two replicas, this Deployment's job is to make sure that at every point in time two replicas of my Python application are up and running.

These pods again have private IP addresses; the containers don't have a public IP address, right? So I will create a static IP address, a public one, and I will expose that to my users; when I give you my link, you people will be accessing this, and it will forward the packets to the actual application. You can think of it as a load balancer: it is a Kubernetes Service, which has the ability to load-balance the traffic coming across the same set of application pods. So this is what we are planning to do; let's quickly do it.

I am creating kind: Deployment; metadata with the name of the deployment; how many replicas I have; this is my pod's label, and this is my container specification. I will have to change the name here, because my image is not residing at the address written here; the name of my image is something different. Whatever I actually pushed is what this has to pull, otherwise it will not be able to pull the image, right? The image residing in ACR has to be provided. So let me quickly go to the portal, click here, you get that entire string, copy it, come back and paste it here. So I am saying: pull this image and deploy a container which will listen on port number 5000. At the same time I am creating a LoadBalancer type of service, which will actually go ahead and create a load balancer in your Azure account; that load balancer is also listening on port number 5000, so the load balancer will receive packets on a public IP address on port 5000 and forward them to the container's port 5000. That's what we are planning to do. I write and quit the file.

The command to deploy an application is kubectl create -f, giving your definition file; it will create all the resources which are defined in the file. The kubectl get all command lists all the resources: the containers are in creating state; the Deployment controller is created, which expects two pods to come up, and at present zero of them are ready. Execute the command again: both of them are up and running.
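(Editor's note: the two definitions in the file are along these lines; the image and ports are from the demo, while the names and labels are assumptions.)

    # azure-aks.yaml: a sketch of the two objects described
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-python-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-python-app
      template:
        metadata:
          labels:
            app: my-python-app
        spec:
          containers:
          - name: my-python-app
            image: aksdemomj.azurecr.io/my-python-app:latest
            ports:
            - containerPort: 5000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-python-app
    spec:
      type: LoadBalancer      # asks Azure for a public IP
      selector:
        app: my-python-app
      ports:
      - port: 5000            # load balancer port
        targetPort: 5000      # container port

It is deployed with kubectl create -f azure-aks.yaml and watched with kubectl get all.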
Thankfully nothing went wrong, and I have a load balancer created, but the IP address is still showing pending, because it takes a bit of time for Azure to give me the load balancer's IP address and for it to reflect inside our AKS cluster. Yep, it has come up; let's go to our portal.

An app can run without a LoadBalancer service, but then you will not be able to access the application from outside the cluster. Some applications are meant to talk only within the cluster; some applications need external exposure. This application of mine needs external connectivity, and that's the reason I have given it a LoadBalancer type of service. In case you don't need external connectivity (say, instead of a Python app I was deploying MySQL), I would have created a service of type ClusterIP: an IP address for the service which is resolvable only within the cluster. What exactly is a service? Containers don't have a static IP address on which you can reach them: if a container restarts, it loses its IP address. We need to provide a static entity for them so that we can connect, and that's the reason the concept of a Service was introduced in Kubernetes. A short session is really too little to explain each of the concepts, but a Service is something which gives us a static endpoint so that I can reach my application. So now I have a static IP address, and your user reaches this; you are not going and talking to my pod's IP, and my pod's IP is anyway a private IP address. Every node, Prashant, has a different internal subnet range, and in that internal subnet range my pod has received an IP address; and the service also gets an IP address. You people would have seen that when I selected Azure CNI there was a range for the service IPs; the service IP has come from there.

So with this setup, even if my pods go up and down, there is no impact: what my user hits is not going to change. Think about it: millions of servers are serving google.com; where do we reach? We reach one well-known address, right? Behind it there could be a million servers running; we never know. You reach that one endpoint by saying google.com, and it routes you to a specific instance of a google.com server. It's exactly like that.

So the manual steps have worked really well. Now let's go and set up our application to get built and deployed with Azure DevOps, and automate the entire thing; both building and deploying have to be stitched together with Azure DevOps, that's important. Just to differentiate: this one is running as a container inside my application server, and this one is running as a container on my AKS cluster.

(Question: what is the difference between the Service we have created and the load balancer we got while creating the cluster in the Azure portal?) The ones we create are for individual applications: if I have to expose ten applications, I will have ten individual services for them. And when we created our cluster, remember there are two parts to a cluster, guys: the data plane and the control plane. When the Azure-managed master nodes are going to talk to my worker nodes, they also need to come via some load balancer; that load balancer, Tejaswini, you can say in a way was for the control plane. And if I am creating one application, I create one service and I get a load balancer; if I am creating ten such applications, I will have ten services and a load balancer endpoint for each one of them. It will look clumsy, but we have advanced techniques for reducing that count as well.
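(Editor's note: for contrast, the internal-only case she mentions, a MySQL that nothing outside should reach, would use a ClusterIP service; this one is entirely hypothetical, not part of the demo.)

    # sketch: a ClusterIP service, resolvable only inside the cluster
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      type: ClusterIP
      selector:
        app: mysql
      ports:
      - port: 3306
        targetPort: 3306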
We are not covering those in this session. So the manual deployment has succeeded. Now let's go to Azure DevOps: my code has to reside there. I already have one project, which I won't touch; I'll show you how to create one from scratch. aks-demo-python is my project name; I am logging into Azure DevOps with the same credentials, and I am creating a public project (this StreamYard pop-up keeps troubling me). Hit Create, let the project get created first, and then we will host our code in it.

There is a question: can we have multiple applications behind the load balancer? Yes, certainly we can have that, but at present my load balancer is a layer-4 load balancer, which can route just on IP addresses. If I have an intelligent load balancer, a layer-7 load balancer, then yes, it can have multiple back ends: for example, I have ola.com and I have ola.com/corporate, and I can distribute the traffic between them; for that my load balancer should be intelligent enough to understand which URL you are trying to access. We can certainly do that, SMVK, but for that it has to be a layer-7 load balancer, which is not what we have at present.

Now, Repos is here, and I will push my code to Azure Repos. I'll just go and execute the commands shown, on my terminal (let me move this pop-up to the other side so that it doesn't trouble us much). So I am executing this: I tell git that the remote server for this repository is that particular link, and the next command is also given right there, so I can push everything. I am adding and pushing my code to Azure Repos. It will ask you for a password; here you can see the option "Generate Git credentials", so copy that and paste it here. In no time your entire code base is ready here. I have my azure-aks file, which we have recently updated (I think this is showing the last commit); I am just committing all the files. You can generate that credential several times, not a problem; otherwise you can save it, but saving is a bit dangerous. Let's see if my file is reflecting: yup, so now I have the proper file, aksdemomj.azurecr.io, the name is correct. Now let's see the actual app files: the templates folder, index.html, all correct. So all things are sorted, and the code is residing here. Now it's time to build our pipeline; pretty simple, everything is automated these days, not much effort is needed.
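(Editor's note: the push to Azure Repos is the standard remote-add flow; a sketch, with the organization URL as a placeholder.)

    # point the local repo at Azure Repos and push everything
    git remote add origin https://dev.azure.com/<org>/aks-demo-python/_git/aks-demo-python
    git add -A
    git commit -m "python app, dockerfile and aks manifest"
    git push -u origin master   # paste the generated Git credentials when prompted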
(Give me one second; I was looking for the project I had created in the morning, and this is a different project. Fine.) So my code is residing here, and in the Pipelines section I just hit Create Pipeline. In the create-pipeline flow you say your code is residing in Azure Repos, select the repository, and it populates an entire pipeline by looking at your code base. So here you can see it has written a script like this, azure-pipelines.yaml, for the master branch of this repository. I am declaring some variables: this is my Docker service connection, the Azure Container Registry's ID (it's the resource ID); the image repository name is the one I will be pushing to, and I give the name here, so I just need to change this value. For that I have to pick up that particular string... I've picked it up, so this is also done. A Dockerfile exists at this path, which is very true. I will be creating a secret with this. And I am using a temporary agent during the build: the build has to happen somewhere. We built it on our Docker host machine earlier, but going further we are not going to build it on our Docker host machine: I will use an agent called a Microsoft-hosted agent, which will be an Ubuntu-based machine, and in that the image gets built.

So I am saying: this is the stage "Build", and the display name comes here. One particular pipeline can have several stages, one stage can have several jobs, and one job can have several tasks. Sounding very complicated? Let me just write it out. In your Azure pipeline, a pipeline can have multiple stages, like a stage "Build"; in that I can have several jobs, job one, job two; and inside those I can have several tasks, like "create image" and "push image", that sort of stuff. Then I can have a deployment stage, in which I go ahead and pull the image and deploy it to my AKS cluster. So that is the hierarchy: pipeline has stages, stages have jobs, jobs have tasks. We can create separate configurations if we have separate files for them, Smitha; with separate YAML files you can do that, and then you would have stages like first deploy to dev, then to QA, then to prod. At present I am just deploying to one environment; after that you can expand it by writing a few more stages.

Up to here everything is sorted. I am tagging the image which will be created; I am saying build and push the image to this repository (the repository name is given at the top, which is where it has to push); this is the path of the Dockerfile it will build; and this is the connection credential it will use to log into that Azure Container Registry. Let me just check this credential, 56636; I suspect this belongs to my previous container registry. Let's quickly verify... no. Okay, I think by mistake the old file was in my repo, and that's the reason it has populated the old values. Just give me one second; I'll do one thing, I will delete this file. Sorry for that, but there is an intention behind deleting it: when you set up a pipeline, this flow actually creates the pipeline file, and I was a bit puzzled earlier, because I wanted to pick a task which would help me build, push, and deploy to an AKS cluster and have it populated automatically; when you people do it you would come to that screen, and I was not coming to it, so I was wondering why.

So now things are looking good; I'll just keep a copy of my previous pipeline ready so that I don't make a mistake. My resources are residing in this subscription, so I select that; which cluster in this subscription: aksdemo; I'll remain with an existing, default namespace (don't ask me what a namespace is, I can't explain that today, maybe some other day); the name of the image and the port number on which it will listen; validate and configure, and this pipeline file actually gets generated. Okay, it has been generated; it's taking some amount of time, and then we will go ahead and edit it. This time when it comes up you will find I don't have to change anything; it has come up with all the correct specifications. Last time it had picked up the old file; the old file got checked in, and that was the reason. So these are the variables, a few variables auto-populated by selecting the subscription and the cluster; on this the agent is picked; and this is the Azure service connection, so that I can communicate with my Azure cloud account from Azure DevOps.
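(Editor's note: the hierarchy she sketched maps onto pipeline YAML like this skeleton; the names are illustrative only.)

    # sketch: pipeline -> stages -> jobs -> tasks (steps)
    stages:
    - stage: Build
      jobs:
      - job: BuildJob
        steps:
        - script: echo "create image"
        - script: echo "push image"
    - stage: Deploy
      dependsOn: Build
      jobs:
      - job: DeployJob
        steps:
        - script: echo "deploy to AKS"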
Then the build command is there to build the image. This is the build-and-push task: it will build and push the image to Azure Container Registry, tagging it with $(Build.BuildId). I want to tag it with latest as well, so let me write the latest tag here, because I will always deploy my application with whatever is latest.

Now, the upload part which you see here: there is a YAML file I need to publish, which will be picked up during my deployment, the azure-aks file we were writing; I want to pick up that file. For that, you can see Show Assistant here: just look for "copy and publish artifact". I am not touching the root path; my file name is azure-aks.yaml, and I'll give the artifact the name "manifests", because at the bottom it will look for a folder called manifests, and I'll keep this output. So what am I doing? I am publishing. One thing getting pushed is the image, to Azure Container Registry; but the code should also travel from the build stage to the deployment stage: if my azure-aks.yaml file doesn't reach the deployment, it can't execute that file. So we are doing that. For this I have to bring this block in by a few steps; the indentation should be aligned, because YAML is all about indentation. Now things look good.

This is the deployment stage, in which I am deploying. It depends on Build: if the build succeeds, only then does this stage get triggered; if the build is not succeeding, it will not work. First we create a secret. Why is a secret created? Because of what we are now planning to do; let's get our picture back once again. Instead of us going and deploying, we now have Azure DevOps coming in between; you don't just have your workstation any more, you have something else as well. Let me pick a peppy color for Azure DevOps. This is my Azure DevOps server (when you draw without a scale, this is how you draw). The code is residing in Repos, which is connected to my code here, and from here I have created a pipeline. The pipeline will grab a temporary Microsoft-hosted agent server, in which Docker will already be present. This pipeline says build the image, so the image gets built there; it says push the image to this registry, so it pushes it there, where we had pushed manually earlier; now my pipeline is pushing. In the second stage we say deploy: the same kubectl command we executed here by hand will now get executed from the agent, and the agent needs to go and talk to our AKS cluster to deploy; it will execute that same kubectl create. Now AKS should be able to pull the image and deploy it, and for that it needs authentication; that's the reason we are creating this secret. So whatever we did, we will now be doing in an automated fashion: not our workstation, but a temporary Microsoft-hosted agent is actually doing it.

So here we are all sorted: this part is going to create that secret for pulling the image, and this part is actually going to build and push the image and deploy it to the cluster. Let me see why these few extra tasks have come in; to be on the safer side I will just remove them: I am not creating an ingress controller, so I am removing that, and I am not doing a post-deployment task. The tasks come from a template, and whatever is in the template gets populated; whichever things you don't need, just go ahead and delete them.
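(Editor's note: stitched together, the generated pipeline is roughly the following; the service connection name, secret name, environment name and variable values are assumptions, while Docker@2 and KubernetesManifest@0 are the tasks the Deploy-to-AKS template emits.)

    # azure-pipelines.yaml: a sketch of the generated pipeline
    trigger:
    - master

    variables:
      imageRepository: 'my-python-app'
      containerRegistry: 'aksdemomj.azurecr.io'
      dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
      tag: '$(Build.BuildId)'

    stages:
    - stage: Build
      jobs:
      - job: Build
        pool:
          vmImage: 'ubuntu-latest'   # temporary Microsoft-hosted agent
        steps:
        - task: Docker@2
          inputs:
            command: buildAndPush
            repository: $(imageRepository)
            dockerfile: $(dockerfilePath)
            containerRegistry: 'acr-service-connection'   # assumed name
            tags: |
              $(tag)
              latest
        # ship the k8s manifest to the deploy stage as an artifact
        - publish: '$(Build.SourcesDirectory)/azure-aks.yaml'
          artifact: manifests

    - stage: Deploy
      dependsOn: Build              # runs only if the build succeeds
      jobs:
      - deployment: Deploy
        pool:
          vmImage: 'ubuntu-latest'
        # the environment resource carries the cluster connection
        environment: 'aksdemo.default'   # assumed name
        strategy:
          runOnce:
            deploy:
              steps:
              - task: KubernetesManifest@0
                inputs:
                  action: createSecret     # lets AKS pull from ACR
                  secretName: 'acr-auth'   # assumed
                  dockerRegistryEndpoint: 'acr-service-connection'
              - task: KubernetesManifest@0
                inputs:
                  action: deploy
                  manifests: '$(Pipeline.Workspace)/manifests/azure-aks.yaml'
                  imagePullSecrets: 'acr-auth'
                  containers: '$(containerRegistry)/$(imageRepository):$(tag)'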
So here it is going to deploy this particular application, and you can see: from this container registry, from this image repository, a tag will be picked up and it will deploy; the Deployment definitions are also given, and the secret we are creating is used here. Things are looking cool; let me just quickly verify everything once again. Now it's time to save, and let's run the pipeline; hopefully it succeeds and doesn't crash during the demo.

So let's wait for our agent to be grabbed: an agent is grabbed on the fly, with Docker and everything configured, so that it can build our Docker image and push it. The build of the Docker image is in progress; just as we built the Dockerfile by hand, this agent is building that Dockerfile step by step; the image will be produced and pushed, and after the image is pushed it will go to the next stage, the deploy stage. Someone was asking about dev and QA; I think it was Smitha, if I'm not wrong. So, Smitha, you would have more stages: deploy stage to dev, deploy stage to QA, deploy stage to prod. We will be sharing the files, don't worry; Rohit will get them sent to you people. The entire repository is open source, so you should get it; I can even put it on GitHub as well.

So building and pushing the image is successful; that's the reason you are seeing the check mark. Let's see if the deployment succeeds. Let me go to my Azure Container Registry and see whether our image has been pushed. There are two repositories: the one we pushed manually some time back, and the one which was pushed in an automated fashion. The automated one pushes all the images with two tags: one tag is 465, which is the build number, and latest is the tag we configured.

But it has failed. It says it can't find the manifest. Where is it; why didn't it find it? Let's see. You can go and find what was published in the artifact, and see whether your azure-aks file was not published or we didn't give the proper path. Is this not the correct file? Okay, we have made a very silly mistake, which I'll correct right away. What mistake have we made? Our image name is not the latest image we are producing. Let me go to the pipeline once again, edit it, and see what exactly has gone wrong. Hmm, we should have picked up this, the image repo name; we didn't write that. That we will rectify. And it is unable to find the manifest file; at the same time it is saying, "whatever you have published, I am unable to locate this file in it". We have published "manifests", so let me check where it is picking it up from... okay, yes, this is not correct: the place from where it picks the file for deploying is not correct. We are actually publishing into the pipeline's workspace with the name manifests, and that is the file which has to be picked up; that's the reason it couldn't find it. I think now it should work. I'll do one more thing: I'll save this, which will trigger a pipeline run, and I will cancel that, because I need to change something in the repository as well; this name is not correct, so let's edit it. Actually we should edit it on our server and push, but I am doing it directly over here.
Now let's see: the build will pass; the deployment was the problem for us, since it couldn't find the manifest. Understand that the build pipeline and the deployment pipeline are two different things. When I produced the image, the image got pushed to the registry, but for the deployment stage I need the YAML file with which I execute it: I have to copy my aks file over there, and only then can it run the kubectl create command, right? It was unable to locate that file because the place we were picking it up from was not the correct path; it was a default path, whereas we had published it in our pipeline under the manifest artifact.

I'll quickly show you the change I made. We were already publishing it; you can see I'm saying the content is aks.yaml, the artifact name is manifest, and it is published in this pipeline itself. But when picking it up, I was not picking from the correct place, and that's why it couldn't find the YAML file. Whenever you publish in a pipeline, you can go to the previous run and check: the published artifacts are shown there, and the YAML file was present, but the deploy stage was unable to pick it up; that's why I went and looked into it.

Now downloading the artifact is done, creating the image is done, it has successfully pulled the image, and it's going ahead and deploying. Let's see whether the deployment succeeds. This is the point where it runs the kubectl create command; it is doing that, and you can see it checks and verifies whether it can reach our load balancer's IP address before marking the stage green. It has successfully completed both stages: the first one shows one artifact produced, and the second one just does the deployment.

Let's go back to our terminal and confirm we can access the application. When you run kubectl get all, the older version is terminating and the newer version is coming up, and this is the IP address. The older version is the one we deployed manually; the newer version is the one we deployed in an automated fashion. So if I go to my web browser with port 5000, you should be able to access the application: the app is being served from my Kubernetes cluster, and it was deployed in an automated fashion. Rohit, if you can put this link in the chat, people can access my application. Sorry, I wrote 500; can you write 5000 please, instead of 500? You can reach my application at 20.72.89.66:5000.
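For reference, these are the checks being run on the terminal, as a short sketch; the external IP is the one from this demo and the rest is standard kubectl:

    # Watch the old pods terminate and the new ones come up
    kubectl get all

    # The LoadBalancer service's EXTERNAL-IP column shows the public address
    kubectl get service

    # The demo app listens on port 5000 behind the load balancer
    curl http://20.72.89.66:5000/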
And you people are not able to access my repository? Is that so? Then let me quickly do one thing for you. Any questions? The deployment has also succeeded; are we all good? That was the agenda, and it's done.

A question: do the VM and AKS have to be in the same VNet? No, not at all; it's not necessary, BJ. Your VM is in one subnet and your AKS is in a different subnet; it doesn't matter. Two machines residing in different networks can also talk to each other, right? So they can. Let me quickly download a zip onto my local machine. Rohit, if you're there: these comments are not updating for me; it just says "new comment" and I'm not able to see whether there are any questions. Would you mind reading them for me? Let me show it on screen.

"What was the upload stage in the pipeline that you edited earlier?" The upload step was not publishing the artifact into the pipeline; it was expecting me to create a file share and share the aks file through that. So what I did was change it to simply publish that artifact in the pipeline itself; that is the change I made. Any other question?

"What is the CI output that gets published?" The CI output is my artifacts: one published output is the image built into ACR, and the second thing we publish is just the copied aks file, so that it is available to our deployment pipeline; nothing more than that. There was also a query about both being in different VNets, which I have answered: it doesn't make any difference.

All right, I think someone wanted the code, so I'll put it on GitHub; hopefully it's fine if I share my GitHub link. Yes? Okay. Hopefully people liked the session. That was a really nice session, and the way you explained using the whiteboard really helps. There are a few questions: "Where can I see the build agent in DevOps?"

You can't see the build agent, Smitha, because it's a temporary Microsoft-hosted agent that is given to you just for a short while; that server is yours only until your build finishes executing. If you look at the pipeline (Rohit will share the code repository), let me show this: the pipeline asks for an Ubuntu agent. Wherever you saw vmImage, that vmImage is asking for an Ubuntu-based agent. And if you go to any of the runs, you'll find that whenever it executed, the very first task was grabbing a hosted agent: you can see it going to the Azure Pipelines pool and asking for an ubuntu-latest agent, which is nothing but a Microsoft-hosted agent. Azure has given me a temporary server on which I did what we initially did manually: docker build, docker tag, docker push; all three happened on that server. The second server, the deploy server, has my Azure client and kubectl, and it executes the kubectl create command on my behalf, pulls the image, and deploys it to my AKS cluster. So there are two agents here: one agent builds, one agent deploys.

Okay, the secret file is needed when AKS goes and talks to ACR to pull the image, because AKS needs to authenticate to ACR; ACR will not just hand the image to anyone and everyone, right? When I do a docker pull myself, it asks me to enter credentials. Now imagine where your images are actually pulled: on the worker nodes. In an AKS cluster, the kubelet residing on the worker node gives instructions to Docker, and Docker on that machine talks to your ACR and does a docker pull; the image arrives there, the container gets created, and it gets wrapped up by the pod. So we feel a pod is created, but really the instructions go to Docker to create a container, and to do that it needs to pull the image. That is the point where the secret is used, so that the node can talk to my ACR and pull the image; if it doesn't have authentication, it can't do so.
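As a concrete sketch of that flow, the secret in question is a standard Kubernetes docker-registry secret; the registry URL, secret name, and credential values below are placeholders:

    # Create a docker-registry secret holding the ACR credentials
    # (registry name and credential values are illustrative)
    kubectl create secret docker-registry acr-secret \
      --docker-server=myregistry.azurecr.io \
      --docker-username=<service-principal-id> \
      --docker-password=<service-principal-secret>

    # In aks.yaml, the pod template then references the secret (names assumed):
    #   spec:
    #     imagePullSecrets:
    #     - name: acr-secret
    #     containers:
    #     - name: pythonapp
    #       image: myregistry.azurecr.io/pythonapp:latest
    #       ports:
    #       - containerPort: 5000

Listing the secret under imagePullSecrets is what lets the kubelet's docker pull against ACR succeed on the worker node.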
"Can scaling be set up in the pipeline?" Scaling can actually be done from here: you just increase the number of replicas and the application instances scale. For example, if you run kubectl get all, you'll find there are two instances of my deployment's pods running. This is manual scaling; for automated scaling I would have to create one more controller, called a HorizontalPodAutoscaler. I'll run kubectl scale deployment, give the deployment name, and specify how many replicas I want. A Deployment supports scaling, but only manual scaling; if I want autoscaling, I have to create the horizontal pod autoscaler component as well. I have now scaled to five replicas, and you'll find the controller makes sure that five replicas are running at every point in time. So that's the scaling part.
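A short sketch of both scaling paths, with an assumed deployment name of pythonapp; the HPA thresholds are illustrative and assume the metrics server is available in the cluster:

    # Manual scaling: ask the Deployment for five replicas
    kubectl scale deployment pythonapp --replicas=5

    # Automated scaling: a HorizontalPodAutoscaler for the same Deployment
    kubectl autoscale deployment pythonapp --cpu-percent=70 --min=2 --max=10

    # Verify that the controller keeps the desired replica count running
    kubectl get all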
Rohan is asking about the advantage of Azure AD authentication over RBAC. Azure AD authentication is used for user authentication to the cluster; RBAC is used for authorization. Azure AD is an authentication tool: I can authenticate you and say, for example, that Rohit has a key associated with him, so when Rohit logs in, Rohit is a valid user in the cluster. That is one thing. But what actions Rohit is allowed to perform in the cluster is defined by RBAC: RBAC will say Rohit is a developer, and a developer can only create, not delete, that sort of thing. So they work in combination.

Rohit Gyan is asking what ReplicaSets and replication controllers are, and what the difference between them is. Replication controllers are the older way of creating replicas; ReplicaSets are the newer way, and they help us do blue-green and canary deployments. You can see here that when I deployed my new version, a new ReplicaSet got created. When a Deployment is created, it actually has a ReplicaSet created internally, and that has the pods running under it. If I'm deploying version one, this ReplicaSet makes sure both pods are of version one. When you upgrade the image to, say, version two in your YAML file, or a new build happens and the version changes, then the existing one is my blue deployment, and I go ahead and roll out my green deployment. That creates another ReplicaSet, say rs2, and the newer pods run under it rather than the old one, which gets scaled down. So gradually new-version replicas come up and old-version replicas get terminated. Remember I mentioned that one version we deployed manually, and when we deployed in an automated fashion it showed "terminating": that was the older pods being terminated. And the nice part is that if I want to go back to the previous version, I can quickly roll back to it. ReplicaSets help us in replicating in sets; that's the advantage.

Aqua is used for vulnerability scanning; that's one of the good tools. Anchore is also used for vulnerability scanning; we can even use that. Can variable groups be created for each of the environments? Yes, that can be done and incorporated into our pipeline so that we can supply those variables at the different environment levels. Yoganand and Smitha are both correct: I can even set up my nodes in a different cluster, in different clouds, or on my local system, anywhere; Azure DevOps helps me deploy not just to Azure but to any other cloud or on-prem environment as well.

Amol's answer is correct: if a pod goes down, the kubelet informs the API server, the API server updates that in etcd, and once etcd is updated, your controller comes into the picture. The controller understands that the desired state is five, in my case, and quickly brings that back into action. So the self-healing process gets triggered.
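To make the ReplicaSet and self-healing behaviour concrete, a small hedged sketch; the deployment name is assumed and the pod name is a placeholder:

    # Each rollout creates a new ReplicaSet; old ones are kept for rollback
    kubectl get replicaset

    # Roll back to the previous version (the old ReplicaSet is scaled up again)
    kubectl rollout undo deployment pythonapp

    # Self-healing: delete a pod and watch the controller replace it
    # to restore the desired replica count
    kubectl delete pod <pod-name>
    kubectl get pods --watch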
A deployment group in Azure Pipelines is normally used when I have a set of virtual machines configured as deployment servers; that is when we configure a deployment group. In the case of AKS we don't use that, because our deployment is not onto a VM. If we had taken three VMs and deployed onto them, we would have created a deployment group out of them and deployed into that.

Let's see the time: okay, we have overshot by 15 minutes; hopefully people aren't cursing us for that. Yes, Shubham, for scaling on the fly based on load you need to create that other component called the HorizontalPodAutoscaler. Kian, stateless is for applications which don't remember what they have done in the past; stateful ones are database-type applications, which have to remember a lot of sessions and state. Our front-end applications are stateless; our back-end database applications are stateful; you can think of it in those terms. I think, Rohit, we are good; we have overshot by 15 minutes, but if people are happy, we are good with it. And Facebook is not just a single application, Yoganand: Facebook would be built from some 170 to 180 microservices, and one of those microservices is a database keeping our data, and that one is stateful. Netflix- or Amazon-type applications are made up of that many microservices; we just created and deployed one microservice, so real architectures are more complicated than we imagine. Great, thanks a lot, everyone; thanks for attending and for making the session so interactive. Hope you people enjoyed it. Rohit, over to you.

Thank you so much for joining us today. This was definitely a very interactive session, the way you presented it; you have seen the comments, and you answered all the questions, so it was end to end, as you mentioned before starting the session. I hope everybody got what they came intending to get from today's event, and we will look forward to more events by you. People asked me to mention that Mamta is also a co-founder of TechScalable, and they do several sessions and workshops on Azure. "We are Microsoft training partners ourselves; we do training and we do consulting, quite a number of things. We work on machine learning, AI, and DevOps; that's our forte." So if you're looking at getting certified on Azure, she is the go-to person, and their workshops are really helpful: very intensive, starting from the basics. "I think those are around seven to eight hours of workshop, right?" "No, the paid certification programs are normally five-day workshops, which span across five weekends for individuals; on weekdays we run them for corporates, and for individuals we run them over the weekend. Whoever is interested to know more can connect with me on LinkedIn; you can look me up, and the events I've spoken at have tagged me, so you can certainly connect with us. Batches are coming up: I have an Azure DevOps batch in the month of August itself, and I would be the instructor for the sessions, so we can certainly get you in."

Thank you so much for joining us today, and we'll also try to have you at our future events; I know your calendar is busy, but if you want to do a series as well, that would make more sense, so that people can continue over multiple sessions. "Rohit, I'll pass you my code base; if you can share it with the others, that would be really helpful." Yes, we'll mention that in our post-event communication as well. Okay, thank you, thank you so much; thank you, everyone, please take care and stay safe.

All right, so guys, that was Mamta, and we hope you enjoyed this session. We also have our future events lined up, so check out our website, and if you have any queries related to this session, drop your questions in the comment box; I will also try to answer them offline. Thank you so much for joining us today, have a great weekend ahead, and be safe. Thank you all.
Info
Channel: Azure Developer Community
Views: 1,884
Id: n5kSqFNfYxM
Length: 108min 27sec (6507 seconds)
Published: Sat Jun 26 2021