Azure Arc-enabled Kubernetes

Video Statistics and Information

Captions
My name is Asif Khan and I'm a Cloud Solution Architect at Microsoft. In today's session we will be covering Azure Arc. Azure Arc is a Microsoft Azure service which is built for hybrid use cases. The service has a wide range of capabilities, but in today's session we will be focusing just on Azure Arc-enabled Kubernetes. Azure Arc-enabled Kubernetes is a capability where you can take an existing Kubernetes cluster running anywhere, whether on-premises in a data center or in other clouds like AWS, Google or Alibaba, and use Azure Arc to connect that cluster into the Azure portal, just like a native service experience.

The Azure Arc service as a whole is pretty broad, because it covers a wide range of capabilities, like Azure Arc-enabled VMs and Azure Arc-enabled data services (which includes Hyperscale), and there are some services in preview as well. So one part of the story is where you have VMs, Kubernetes clusters or data services running in other clouds, on-premises or in your own data centers, and you want to bring them to Azure. The other capability is the reverse: say you have Azure services like Azure Web Apps or Logic Apps and you want to run those services natively on your own infrastructure, which can be on-premises data centers or other clouds like AWS or Google. For that, Azure Arc-enabled Web Apps, Azure Arc-enabled Logic Apps and a couple of other services are also coming up, but they are still in preview at this stage. With respect to GA, we have Azure Arc-enabled VMs, Azure Arc-enabled Kubernetes and Azure Arc-enabled data services. What we will be covering today is, as I said, the Azure Arc-enabled Kubernetes cluster.

In a nutshell, for Azure Arc we first need a Kubernetes infrastructure. If you build a Kubernetes cluster on bare metal, or use a managed Kubernetes service from another cloud provider like EKS or GKE, there are a couple of requirements: you need at least one master node and one worker node, and that's the bare minimum for those infrastructures. But if I'm building a dev, non-prod or pre-prod environment, I don't need something that heavy; if I can run a Kubernetes cluster locally on my laptop, or on a very lightweight, lean infrastructure, that would be my preference. That's where I came across Canonical MicroK8s. Canonical is the same organization which maintains Ubuntu, and they have come up with a full-fledged Kubernetes distribution which has all the bells and whistles you would see in a regular Kubernetes setup, but which is very slim and lean.

I'll quickly show you the MicroK8s portal so you can see what the Canonical team has come up with. As you can see, it's a pretty lean Kubernetes distribution, and it supports Windows, Linux and macOS, so you can take the installer and run it on any of those platforms. With the recent releases they are also making it highly available, and as far as their commitment is concerned, they say it is production ready and you can use it for production workloads as well. In my understanding, if you're doing anything in prod,
you can definitely go for it. And it's very quick: as per their commitment it takes under 60 seconds to set up a Kubernetes infrastructure in your environment, though I haven't checked it against the clock.

What we will do today is install MicroK8s on an Ubuntu server running on Hyper-V, so let me quickly show you that. That's the server which I have, and just to prove there are no smoke and mirrors, this is my Hyper-V infrastructure. I just have a couple of VMs; one of them is the Ubuntu VM, where I've installed a vanilla Ubuntu, and I'll be installing MicroK8s on top of it. I'll just minimize that, and then we will run a couple of commands. If you look at the MicroK8s portal, it gives you straightforward installation steps: you just select the relevant operating system and in six steps you are all done. I've taken a note of those steps, plus some of the steps which are required for preparing the MicroK8s cluster for Azure Arc, and I'll run those as well to get this up and running.

So I'll run this command now, and as you can see I've selected a specific version. If you run the default, which is sudo snap install microk8s, it will take the latest stable release as of today, the 9th of September, but I have seen that one have some challenges with Azure Arc, so I would recommend using 1.21 if you are trying this out around the same time as I'm recording this session. For now I'll just run this command. I'm already the root user (you first need to be root or use sudo), and I've run the command. Normally, as the MicroK8s team commits, it should be done in less than 60 seconds, but I've seen it take a little bit longer, so I'll let it install.

All right, the installation has completed; it didn't take long, it was pretty quick, and we can see the green tick over here, so MicroK8s is installed. The next best way to check whether everything is up and running is a command MicroK8s provides to verify that everything is set up correctly, so I'll just run that. It doesn't take too long, and it says that MicroK8s is running.

One other thing is that we should enable DNS; that's one of the things recommended when preparing the cluster for Azure Arc enablement, so that's the next step, and I'll run the microk8s enable dns command. That again shouldn't take too long, because it just enables DNS and the routing pieces. Once that is done, the next step is to take the configuration file and update it with the cluster details, so let's quickly do that. If I just copy that... yes, it's just restarting the kubelet so that the DNS details get updated everywhere in the cluster. In the meanwhile I'll show you some of the other things we will do after the cluster is set up; those are mostly about connecting the cluster with Azure Arc.
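For reference, the MicroK8s setup steps I just walked through, including the kubeconfig export we will use shortly, look roughly like the sketch below. Treat it as a sketch of what I ran; the channel name and the kubeconfig path are assumptions you may need to adjust for your environment.

# Install MicroK8s pinned to the 1.21 channel (run as root or with sudo)
sudo snap install microk8s --classic --channel=1.21/stable
# Wait until the core services report ready
sudo microk8s status --wait-ready
# Enable the DNS add-on, recommended before connecting the cluster to Azure Arc
sudo microk8s enable dns
# Export the MicroK8s kubeconfig so the az CLI and kubectl tooling can use it
mkdir -p ~/.kube
sudo microk8s config > ~/.kube/config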
Once the cluster is set up (the last step of which is just enabling DNS), there are a couple of things which need to be done on the Azure side. Firstly, we need to register the resource providers. What this provider concept means is that when you set up a subscription, a lot of providers get enabled by default, things like the providers for virtual machines, storage and networks. For Azure Arc-enabled Kubernetes we have to enable a couple of specific providers: one is Microsoft.Kubernetes and the other one is Microsoft.KubernetesConfiguration. This is the exact command to run; firstly you have to do an az login, so you sign in to your Azure subscription using the CLI, and once you have logged in you just run the registration commands.

Okay, that's good, the DNS has been enabled, so we'll just copy the configuration file. This is the same kind of config you would normally see in the kubeconfig under ~/.kube if you use Kubernetes; MicroK8s keeps its config under the name microk8s, so we'll just take the command and that will update our config file. That's it, our config is updated and our Kubernetes cluster is all up and running. If I come over here and say microk8s kubectl get pods -A, from all the namespaces, I can see I have four different pods running, so it all looks good. I'll clear that.

As I was saying about the registration of the providers, you can run these commands; in my case I've already run them, so let me quickly show you where you can actually find these providers registered. In the portal, you go into Subscriptions, select your subscription, and go to Resource providers. As I said, there are a lot of providers which are enabled by default when your subscription is created, things like web, network and storage, and there are a couple of providers which are not registered as well. In my case, since I have already enabled the Kubernetes providers, I can see both Microsoft.KubernetesConfiguration and Microsoft.Kubernetes showing as registered.

Coming back to our CLI: I've already registered those, so the other thing is installing the CLI extensions. These extensions need to be installed on the same machine from which you are accessing the cluster, because they help in connecting your cluster to the Azure Arc-enabled Kubernetes service. If I run this command it will say the extension already exists, because mine is already set up, but in your case it will install the connectedk8s extension as well as the k8s-configuration extension. Once we have done these steps, that's it for the preparation.
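Pulled together, the Azure-side preparation looks roughly like the following. These are standard az CLI commands, but treat the sequence as a sketch of what I described rather than a definitive script.

# Sign in to the Azure subscription from the CLI
az login
# Register the two resource providers needed for Arc-enabled Kubernetes
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
# Registration can take a few minutes; check the state
az provider show --namespace Microsoft.Kubernetes --query registrationState -o tsv
# Install the CLI extensions on the machine that can reach the cluster
az extension add --name connectedk8s
az extension add --name k8s-configuration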
The next step is all about onboarding this particular cluster into Azure using the az connectedk8s connect command, which is just a regular az CLI command. As you can see, there is a name, which will be the name of our cluster, there is the resource group, and then we provide the configuration file. As you remember, we created a copy of our config file above; that's the same config file we will be providing, and we also provide a context from inside that config file. What that context is: if you open the config file, you can see there's a context property, and if I run this command you can see I have a context. That's the same context name which we provide over here, the microk8s context. And then there's the tag; this tag will be visible on the service once it is provisioned in Azure.

So all good from here; the only thing we need to do is run this command, so I'll just copy it across and run it. Once I run this command it obviously takes a while, because it installs a lot of agents and a lot of other things in the background (I'll show you what happens in the background), and it normally takes some time before the service gets enabled and visible in the Azure portal. For now I'll just run the command and then we'll quickly talk through the other things which happen in the background.

Coming back to our whiteboard, this is, in a nutshell, the entire experience. On the left side we have Microsoft Azure with all the different Azure capabilities, and in the middle is our customer location. The customer location is where we have our agents installed, because that's from where the agents access the cluster, and at the same time they also communicate back to the Azure Arc-enabled Kubernetes service endpoints so that they get all the policy details, monitoring details and so on. It's a bi-directional agent: it takes the configuration and everything from Azure and puts it into the cluster, and it takes the cluster details, like metadata, configuration and monitoring telemetry, and puts them into the Azure ecosystem, so it works in both directions. At the end of the day this is all an agent model. I honestly haven't read any specific documentation on this, but based on what I've heard, these agents use Service Bus relays, because they only have an outbound HTTPS requirement on port 443, and that's how they communicate back. So you don't need to poke any holes in your security infrastructure to get this entire capability working in your environment.
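For reference, the onboarding command itself looks roughly like the sketch below. The cluster name, resource group and tag are placeholders, so substitute your own values; the kube-context is the microk8s context we saw in the config file.

# Onboard the local MicroK8s cluster into Azure Arc
az connectedk8s connect \
  --name arc-microk8s \
  --resource-group arc-demo-rg \
  --kube-config ~/.kube/config \
  --kube-context microk8s \
  --tags environment=demo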
Now, once the command we ran completes, we will see a couple of different things; these are the pods which will get deployed inside your Kubernetes cluster. We have a config agent, a controller manager and a metrics agent. The config agent, as the name already states, watches the connected cluster, and all the config sync happens in both directions. The controller manager is the orchestrator which orchestrates everything. The metrics agent collects metrics for the Arc agents and puts them into Azure, so you can see whether they are all healthy and performing at the optimal level. Then there is the cluster metadata operator, which gets the metadata information of the cluster, things like the version, the number of nodes which are running and so on. There is also a cluster identity operator, which is like the managed service identity; that's basically how these agents talk to the Azure ecosystem. And then we have the flux logs agent. Flux is very much aligned to the next stage, GitOps, where Flux is used for connecting a Git repo with your cluster deployments; in this case the flux logs agent just collects the logs from the Flux operators deployed as part of your configuration.

We'll leave the command running at this stage; it's not yet completed, but once it completes we can continue from there. Now I can see the command has already completed, so let's switch to our Ubuntu machine. Pretty good: as you can see on the screen the execution is done and it comes back with JSON response details. This is basically the public certificate with which the agent will be able to talk to the Azure Arc-enabled Kubernetes instance in Azure. It also gives some details: the name of our Arc Kubernetes cluster, the resource group where we have deployed it, all the creation details and whatnot, and the tag which we used. It also shows that the status is Connecting. So now let's switch to the Azure portal and see what the status is over there. In the portal you can see the Arc MicroK8s cluster is already there and the status is showing Connecting; if I refresh, you can see the status has changed to Connected, and that's what we needed.

Now, if you look at the portal side of things, in the menu blade you can see there are a couple of different things, like monitoring and Insights, but the most interesting one is GitOps. GitOps is something which we will be covering in our next session, not in this one, but just to give you some overview: GitOps is about having application lifecycle management for the containers or solutions running on this cluster. In a nutshell, the way it works is that we keep all the configuration for the application, the versioning, the container versions and so on, in, say, a Helm chart or a YAML file, and then we provide that configuration to the GitOps instance on our Arc-enabled Kubernetes cluster, and a Flux agent gets deployed on the cluster side when the GitOps setup is done. If I click on this, you can see I can add a configuration; there's a wizard, though we'll run it through the command line, but once I set this up and add it, it will go and install a Flux agent which will do the syncing of the application lifecycle, keeping the application up to date with the latest versions and patches.

The other thing is the policies: you can set up policies and they will automatically be deployed to the Azure Arc resource; these are not Kubernetes policies, these are all VM-based policies. I think those are the main considerations. Apart from that, if we look at the overview, it all looks good.
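If you prefer the CLI over the portal for this check, something like the following should report the same connectivity status; the cluster and resource group names are the same placeholders as before.

# Check the Arc connection status from the CLI
az connectedk8s show --name arc-microk8s --resource-group arc-demo-rg --query connectivityStatus -o tsv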
It still shows Connected, and you can see a couple of other things: the Kubernetes version which is running over there, the agent details and whatnot. Now the interesting bit is to see whether we can see the Insights, but we won't be able to, because it needs to connect to a Log Analytics workspace, and at this stage, since it's newly set up, we haven't connected one. So what I'll do is quickly connect it to a Log Analytics workspace I have already created for this particular instance; that's the microk8s Log Analytics workspace, and if I just click Configure it will start the configuration, which is about the agents we were seeing earlier on the whiteboard. The agents we were seeing over here, the metrics agent and the flux logs agent, all that data will now be streaming into the Log Analytics workspace. If we come back to our browser you can see the status is "onboarding in progress"; in due course it will set up the whole Log Analytics connection and all the telemetry will start to come in.

Now let's go to our cluster and look at a couple of things over there. To get my pod details, let me first create an alias, because the command becomes pretty lengthy with microk8s kubectl... okay, I've misspelled something and it's not happy, so I'll just use the command directly: microk8s kubectl get pods -A. It will show the pods which got installed during the configuration process, and obviously it will also list the pods I was showing you in the docs, like the config agent, the controller manager and so on. It doesn't take that long, hopefully it should come back quickly. In the meanwhile, if we go back to our browser and refresh, and I go to Logs and come back into this... okay, it takes a while before the setup gets completed; it says around five minutes. Something interesting is happening on the server side: because it's running on Hyper-V on my machine, it's pretty choked, since I'm also recording at the same time. So I'll just clean it up and run microk8s kubectl get pods -A again; if this doesn't work, maybe I'll open another terminal... perfect.

Now, as you can see, we have a couple of namespaces which got created. azure-arc is one of them, and that's the namespace where all our different components are sitting. Let me select that and highlight some of the things which are interesting in this output. If you look at the different pods which got created, we have the flux logs agent, which is the one we saw in the docs as well, we have the metrics agent, the cluster metadata operator, the controller manager, the cluster identity operator (which is for the managed service identity), the extension manager, which is all about managing the extensions, kube-aad-proxy, which is for the proxy management, and the config agent, which, as we discussed earlier, is about managing the configuration and all those details.
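For reference, the alias I was fumbling with, and the commands to list the Arc agent pods, look roughly like this; the alias name mk is just my own shorthand.

# Shorten the lengthy microk8s kubectl prefix
alias mk='microk8s kubectl'
# List pods across all namespaces, including the new azure-arc namespace
mk get pods -A
# Or look at just the Arc agents
mk get pods -n azure-arc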
I'll just quickly switch to our whiteboard. These are the different agents I was referring to: the config agent, the controller manager, the metrics agent, the cluster metadata operator, the resource sync agent, the cluster identity operator and the flux logs agent. These are the same pods and components, the agents which got deployed into our Kubernetes cluster, and at this stage it is properly connected.

Now what I'll do is maybe quickly run a simple command, microk8s kubectl run... okay, I think I clicked something. What I'm trying to do is run an nginx pod: run nginx and then the image. Actually, before I do that, let me check whether the Log Analytics workspace is set up or not, because what I want to show is the Log Analytics view first without this new pod, and then with the new pod set up, so we should get some telemetry for it. Okay, it still says it needs some time; let me check the Logs preview... everything is still under configuration, so that's all good. So I won't execute this command at this stage, because what I want to show you is this: firstly, once the Log Analytics workspace is set up, we will be able to monitor all these different agents directly in the portal itself, without coming to this Ubuntu VM, so it's all monitored remotely; and secondly, if we add any new thing, any new pod, service or component, that will also be reflected back in our Azure portal. So once we have our Log Analytics agent set up, we'll come back over here.

All right, our Log Analytics setup has completed, so all the telemetry from this particular cluster is being published to Azure Arc and the Log Analytics workspace. Let me switch to our Azure portal. As you can see, this is our same Azure Arc cluster and the Log Analytics workspace which I have enabled. It has got a couple of Kubernetes tables which have come in as part of the telemetry. If we look at some of the queries: if I want to see the container logs, I can run the query, and once I run it, it gets all the details of whatever logs have been streamed in; it gives the container ID and the log entries for that container ID. This is something similar to what you would get if you run kubectl logs. And if you want to see the container services, that query will show Flux and all the other services which are running inside our Kubernetes cluster; if you look at the services, we have the flux log agent, kube-aad-proxy, kube-dns, pretty much all those different services which are running in the background.
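The container-logs query I ran in the portal can also be fired from the az CLI; here is a rough sketch, assuming the log-analytics CLI extension is available and using a placeholder workspace ID.

# The query command ships in the log-analytics extension
az extension add --name log-analytics
# Query the ContainerLog table of the workspace connected to the cluster
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "ContainerLog | project TimeGenerated, ContainerID, LogEntry | take 20"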
Now if we flip over and get into Insights, this is where we get some interesting graphical views, and we don't need to write any custom queries. As you can see, we have some high-level details: cluster reports, nodes, container-level controllers and containers. At this stage we have just connected, so there is only one data point which has come in, covering node memory, node CPU and node counts (obviously we have one node), plus the active pods and so on; it shows there are 15 active running pods, and this gets richer as time passes.

If we go into Containers, that's where we see all the containers which are running inside our cluster, and it streams pretty well. Over here we have the OMS agent, the Calico nodes and Fluent Bit, but these are all internal, I think; the interesting ones are the cluster connect agent, which is where it connects, the flux agent, which is for the logs we saw earlier, and Calico, which was installed with the cluster. If you click on a container it gives you more details about that container and whatnot.

Now that we have all the different agents and components installed over here, what I'll do is go back to our server and Kubernetes cluster and run an nginx pod: kubectl run, give it a specific name, say nginx-arc, and then --image=nginx. Let's run this; hopefully that should create a new pod. Yes, it's created, and if I run get pods -A I should see the new nginx pod running. It is running inside the default namespace; the container is creating, it should be quick, we'll just wait for it to come up. Okay, it's still creating. I think it's because the VM is pretty exhausted, which is why it is taking a little while. Let me cancel this, clear the screen and run get pods -A again. I hope there is no image issue; let me describe it and check that I'm writing the name correctly... "the server does not have a resource type nginx", okay, I need describe pod, I wasn't writing the command right. All right, it says successfully assigned, pulling the image, successfully started container, which is a pretty promising response. Yes, now it's running, so all is well.

The interesting bit is that we created this particular nginx pod, I would say, two minutes back, so it's not even that long ago. Now let's go into our Azure portal and refresh; it will bring in all the containers which are there. I can even search for "ng", and as you can see I have the nginx pod being reflected over here along with its status. It's a very good experience, because everything is abstracted out in a near real-time model. If I click on the specific pod it shows all the different components running inside the pod, if there are any; it tells you this is your nginx-arc pod, and inside that it also gives you details like container restart details, uptime details, and obviously the node on top of which it is running. So it gives you very granular details. That was at the controller level, and if I look at the node view, it shows that it's my Ubuntu node where everything is running. I can open that as well without even getting into the details, and it shows all the different things running inside it, like the Calico pods and whatnot. So it's a very intuitive experience.
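Pulling that little nginx experiment together, the commands look roughly like this; nginx-arc is just the name I gave the pod.

# Create a test nginx pod on the Arc-connected cluster
microk8s kubectl run nginx-arc --image=nginx
# Watch it come up (it lands in the default namespace)
microk8s kubectl get pods -A
# If it seems stuck, describe the pod to see the events
microk8s kubectl describe pod nginx-arc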
If you want to push the envelope, you can go and build your own workbooks, and you can even consume a predefined AKS gallery workbook; that can give you a very nice graphical visualization of all the telemetry coming out, because at the end of the day it's all Kubernetes, so the telemetry is pretty aligned. I think that gives you the graphical view.

Now look at the experience: everything is running in a very remote location, on my laptop, and Azure isn't hosting any of it, but it is streaming in a way that makes it look like it's running natively in Azure, and the performance is so impressive that it gives a very real-time experience for everything. So I highly recommend everyone to go and have a play with Azure Arc. Azure Arc with Kubernetes is pretty mature; it's already GA. What I would recommend, if you are just testing and getting your hands dirty, is to use US-based regions. The service is good all over the globe, as I said it's a GA service, but I think it's more comfortable and reliable to use a US region if you are just trying it out.

In the next session we will be covering the GitOps capability, which is the gem of this entire service: if you enable GitOps, your entire pod deployment and the deployment experience on Kubernetes becomes very easy. I hope you had a good experience with this session, and I would highly recommend you go and check out the experience yourself, because there's nothing better than getting your hands dirty. Thanks a lot for your time, and I hope you have a good day. Thank you.
Info
Channel: TECH SCOUT
Views: 228
Keywords: Azure, Arc, Kubernetes, K8s, Arc K8s, Azure Arc K8s, Arc Kubernetes, Arc K3s, Arc MicroK8s, MicroK8s, Arc enabled K8s
Id: hnLeAFnAJaM
Length: 35min 34sec (2134 seconds)
Published: Thu Sep 09 2021