ECS-W3: Running Windows Containers in the Cloud with ACI and AKS

Captions
Hey, how are you doing? My name is Elton and welcome to the show. This is the third episode this month, and this month is all about Windows containers. The show has the same format every month: there's a common theme and I do four episodes around it. We've done the introduction to Windows containers, and we've done building and packaging apps to run in Windows containers - using things like MSIs for pre-built packages, and building from source code using the SDK images that Microsoft provides. In this episode we're going to look at running these Windows workloads in the cloud, in managed container platforms.

This is the goal for a lot of people with an existing Windows estate: they want to package this stuff up and move it out of the data center, or move it to a different cloud, but in a way that keeps it easily portable in the future. When you package these things to run in containers, ideally that's the last big migration you'll do, because you can run them in Docker, you can run them in Kubernetes, and you can run them in a managed container platform that doesn't require you to look after a cluster of your own or worry about the virtual machines that are actually hosting things. That's what we're looking at today.

So this is ECS-W3. If you're new to the show, welcome - I hope you find this useful. We run for around 30 to 45 minutes; it's very informal, there's no script and no editing afterwards. There's a Q&A, so if you've got any questions put them in the YouTube chat and I'll pick them up as I go along, or at the end if I run out of time. Everything I'll be going through is very demo heavy, and all the demos are up on GitHub. If you go to eltons.show - the homepage for the show - you'll find links to all the episodes and all the code on GitHub, and every episode lists the prerequisites you need to follow along and the commands I'll be running, so you can try this out yourself.

OK, I'm going to switch over to this camera and pull down my green screen, and while that's happening I've got a very short advert - I'll see you in about 20 seconds. You're watching Elton's Container Show, with your host - that's me, Elton Stoneman. While I'm fiddling with my green screen you might want to bookmark eltons.show, which is where you'll find the details of all the episodes. I go into quite a lot of depth in these shows, so if you want to read further around these technologies I can recommend my books Learn Docker in a Month of Lunches and Learn Kubernetes in a Month of Lunches, and if you just want to see what I'm up to you can check out my blog, blog.sixeyed.com. And now back to the show.

OK, so here I am. This is the Visual Studio Code setup I'll be using for the rest of the episode, and these are the docs for the demos I'm going to walk through. I'll run through the demos, show you what all this stuff does, and hopefully you'll get a feel for how it works. I'm going to be using two different container platforms in the cloud, both on Azure - on Azure there are a whole ton of different ways you can run containers. First I'll look at Azure Container Instances, which is like a PaaS for containers: you spin up a container and you don't need to worry about the host it's running on.
You don't need to manage a cluster or anything like that, it comes with a full SLA, so you can run your production applications in containers without having to worry about managing the thing the container runs on. Then, at the other end of the spectrum of managed platforms that can run containers for you, there's AKS, the Azure Kubernetes Service. With AKS you can spin up a hybrid cluster with a mixture of Windows servers and Linux servers, and across those you can deploy a mixture of Windows and Linux workloads. That's what we're going to be showing today.

These capabilities are not exclusive to Azure. I use Azure because I'm an Azure MVP and it's the cloud I'm most familiar with, but AWS and Google Cloud both have similar offerings: you can spin up a managed Kubernetes cluster with Windows and Linux servers in Google and Amazon, and Amazon has an equivalent of ACI called Fargate, which lets you spin up containers without worrying about the underlying platform beneath them. So you can do this in the other clouds too.

OK, so there are a whole bunch of links at the top of the docs for the things I'm talking about, if they're new to you. One of the demos I'll be running uses a multi-architecture image - an image that's packaged to run on Linux or Windows - and if you're not familiar with that, check out that episode of my Docker in a Month of Lunches series. The other links are for the source code of the apps we'll be running.

There are two ways you can work with these things: you've got the Azure portal, and then you've got some kind of automation tooling. Azure has its own command line, az; there are also PowerShell commands, and you can describe things in ARM templates to manage your deployments. I'll be using the az command predominantly, but I'll show you what these things look like in the portal as well.

First of all I'm going to open up my Azure portal over here... here we go. I'm going to create an Azure Container Instance which is going to run SQL Server in a Windows container - just like I've run it locally in other episodes, but this time in Azure. I'll show you the UI, but I don't tend to use the UI much because it's not repeatable; the UI is a great way to explore the options and see what kind of things you can do, and then I'll translate that and run it as an az command.

So I'll go to the Container Instances blade - from the home page I can go through the services to Container Instances, there are the ones I've used most recently - and create a new container instance. I've got a whole bunch of options when this screen comes back; I'm going to grab my other screen and put it over here so I can see what I'm meant to be doing. First thing is I need to choose a resource group. If you're not familiar with Azure: all the clouds have some way of grouping together a set of components that represent an application or an environment or whatever, and Azure has resource groups. I've got a resource group called ecs-w3 - that's the one I created earlier - and that's where I'm going to deploy this. I'm going to create my container and call it sqlserver.
It checks that the name is available; it can run in East US, that's fine. Then for the image, I've got this list of quickstart images - these are Docker container images, some of them live on the Microsoft Container Registry, or they can live in your own container registry or on Docker Hub. I could pick one of those if I just wanted to get up and running and see what all this does, and I'd get a really basic container that just runs a website. I don't want any of that, and I don't have my own ACR to deploy from - I'm going to deploy from Docker Hub, so all I need to put in here is the name of the image I want to deploy.

I'm going to use my own SQL Server image. It's packaged on top of Microsoft's Windows Server Core image with SQL Server 2017 Express Edition. You could run this for production workloads - although you might not necessarily run SQL Server in a container for production - but it's really great for non-production environments: someone asks you to quickly spin up a SQL Server database they can access from different offices around the globe, and you can do it in about 15 minutes, with most of that time being the container creation time.

So that's my image. I can tell it whether it's Windows or Linux, and then I can change the size: I've got a whole bunch of sizing options for the number of CPU cores and the amount of memory I want. If I were running in Linux mode - which I can't for this container image, although the UI doesn't know that yet - I could also add a GPU, so I'd get graphics processing if I were doing some kind of machine learning. The limits here look fairly low, but they can be raised, so you can go from hundreds of megabytes up to hundreds of gigabytes for your containers if you're using GPUs. One core is fine, I'll bump the memory up to two gig and click OK.

Next I've got networking, which lets me configure access to this container from the outside world. When you run a container, the networking is locked down - you can't send traffic in by default when you spin up a container on your machine, and it's the same when I'm running in ACI. If I want to let traffic in I need to publish a port. The SQL Server port is 1433 and it's a TCP port, so that's fine; I'll get rid of the HTTP port that's there by default because I don't need it. I can give it a DNS name and I'll get a fully qualified domain name ending in eastus.azurecontainer.io, which gives me remote access from anywhere in the world. There are other things I can put in here, like environment variables - really what I'm doing in this UI is mapping what I would put in my docker run command into the portal. So I'll put in my SQL administrator password, which gives me access to that instance, and my password is going to be the one I always use. Tags are just key-value pairs you can use to manage your Azure objects; they're not really anything to do with containers. And that's it - that's everything it needs - so I'll click Create, and that's going to spin up my Azure Container Instance.
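Translated out of the portal, the same deployment is roughly one az command. This is only a sketch - the resource group, image name, DNS label and password are placeholders for my setup, the sa_password variable name depends on which image you use, and the backticks are just PowerShell line continuations:

    az container create `
      --resource-group ecs-w3 `
      --name sqlserver `
      --image <your-dockerhub-id>/sql-server:2017 `
      --os-type Windows `
      --cpu 1 --memory 2 `
      --ports 1433 `
      --dns-name-label <unique-dns-label> `
      --secure-environment-variables sa_password=<your-strong-password>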
What ACI does is find a host somewhere, create my container on that host, set up the networking for me, inject all the environment variables and so on - and it's all completely managed. I don't know what it's running on: it could be some uber Kubernetes cluster running somewhere, it could be using Service Fabric - I don't know, and I don't particularly need to care. It's all starting up now.

So I'll go back home and click on Container Instances. It takes a little while to show up in here, and when it does it isn't going to be running straight away, because it's pulling down that SQL Server image. If I open the link to my SQL Server image - this is the image on Docker Hub - we'll see it's quite a big image. It's Windows Server Core, which we've talked about in previous episodes: pretty much all of Windows Server 2019. If I look at the 2017 tag and zoom in a little, we can see that some of these layers come from the base image - the 'apply image 1809' and 'install update' layers are part of the Windows Server Core image, so those come from MCR, from a Microsoft data center somewhere, close to where my containers are running. But the next bits are bits I've packaged up myself: this 400-megabyte compressed layer lives on Docker Hub, so there's a transfer delay.

Ideally, when you're running your applications in production in the cloud, you'll want to spin up your own container registry local to where your applications are running. Because I'm running on Azure I'd probably spin up an Azure Container Registry; if I were running in AWS I'd use Amazon's container registry - so that my image layers are local to where my applications are running, moving the data nearer to the compute.

OK, so that transfer is going to happen. If I go back now I should see my container instance - there's the sqlserver instance I've just created. You can have multiple containers within your Azure container instance; you can model it so that the container instance is a group of containers that all talk to each other internally. I've only got one in here, and it's actually pulled the image already - that's quicker than when I ran it earlier. Inside here is the list of containers in this container instance, and if I look at the events it tells me everything that's happened: it was created, it started pulling my image, it pulled the image and it started my application. It was created at about 15:11:25 and started at 15:11:40, and the pull started at 15:09.
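As an aside on that transfer delay, here's a sketch of moving the image into a registry in the same region so the big layers sit next to the compute - the registry and image names are placeholders, not the ones from the demo:

    az acr create --resource-group ecs-w3 --name <yourregistry> --sku Basic
    az acr import --name <yourregistry> `
      --source docker.io/<your-dockerhub-id>/sql-server:2017 `
      --image sql-server:2017

The container instance would then pull from <yourregistry>.azurecr.io/sql-server:2017 instead of Docker Hub.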
So it takes a couple of minutes to get up and running, and that's purely because of the size of the image. Now if I connect to that SQL Server instance - I've got a really lightweight SQL Server UI here, already set up with the address I configured earlier: the fully qualified domain name, the port and the password. If I test the connection, it's all up and running. We can get to that SQL Server because it's just a published port on the container; it's just SQL Server, just a remote database that happens to be running in a container in the cloud. I can connect to it and do whatever I need to do: I can run CREATE DATABASE ecs and click Execute, and I've got a fully operational SQL Server instance - empty to start with. If I disconnect and connect again, there's my new database, so I can work with this in any way I'd normally work with SQL Server. I can have multiple containers inside that instance and they can connect to each other to model a distributed application. Nice and simple - just a really quick demo. It's an older Windows application, packaged up with a Dockerfile using the same techniques I've shown you before in these episodes, and it's really quick to get up and running and cheap to run, because you only pay for the compute for the time your container is running.

OK, but obviously using the UI isn't great, so the alternative is the az command line. Let me spin back over here and scroll up to what I'm going to do. I'll open my command line - if you're not familiar with it, az is the command line for all of your Azure resources; all the clouds have some sort of command line. You run an az login command to connect; I've already done that and set myself up. If I want to work with container instances I can use a command like this one: az container create will create a new container instance, and I can put in all the stuff you've just seen me put into the Azure portal - which is really a mash-up of the things Azure needs to know and the things I would ordinarily put in my docker run command. There's the image I'm going to use - a really simple web server that just tells you about the operating system it's running on inside the container - I can set the OS type to Windows, say how much CPU and memory I want, and then, same sort of thing, publish the ports and give it a DNS label, which becomes the fully qualified domain name.

This is going to do exactly the same kind of thing: it's going to run a Windows container, but this one is an ASP.NET Core web application based on Windows Nano Server, so it's nice and small. Instead of the roughly four gigabytes that needed to be pulled in total for the other image, it's a couple of hundred megabytes, so it's a lot quicker to start up. What you see in the output is that this is an interactive command: while the container is being created my command line just sits there and waits for it to be up and running. If I switch back to my portal, go back to my container instances and refresh... yes, there's my new whoami container.
That's the one I've just spun up, and we can see it works in the same way: there's the fully qualified domain name I put in, it's been allocated a public IP address, and my port is published, so that's all looking good. If I look at the container events I can see it's been created, pulled and started - everything happened much more quickly. If I switch to UTC I can see the timestamps better, and comparing them, it took around 20 seconds instead of two minutes, because it's a much smaller container image. As I said right at the beginning of episode ECS-W2, when you're packaging your Windows apps the first thing to assess is whether you can run them in a Nano Server container image, because if you can, you get a much better experience when you do things like this: 20 seconds to pull instead of two minutes.

So that's up and running - I used the command line for that, but it runs in exactly the same way. I can browse to the application, my whoami app, on its fully qualified domain name, and it tells me a little about what's running inside the container: it's running on a 64-bit Intel machine, it's Microsoft Windows, and it's based on build 17763 - which is actually quite an old build, from when I packaged this particular image for my Docker book. Then there's the container name - this elaborately generated container name I get from ACI; obviously a different format of container name than if I ran it in Docker or in Kubernetes. So this is telling me a little bit about the platform - I can see 'caas' in there, which may just be a random name or may stand for containers-as-a-service.

So that's my Windows container up and running in Azure. But there's a disconnect in my workflow here, because I'm using Docker to package my application - I've got my Dockerfile and my docker commands when I'm running locally - and when I go to the cloud I'm either using this click-through portal, which I'm probably not going to do in production, or I'm using the az command line, so there's a separate set of tools to learn. Recently - and currently this is a beta feature in the Docker Desktop Edge release - the Docker team have been working with the Azure team at Microsoft, and also with Amazon, on a similar experience where you can create and manage containers in ACI using your Docker command line. That's a really nice way of managing these things, because it gives you a production-grade platform without having to manage an orchestrator - all that stuff is taken care of for you - but you're using your standard Docker command line.

The way you do that: there's a link right at the top of the docs for this episode to the 'Deploying to ACI' page in the Docker documentation. I've already set it up, but basically you need the latest Edge edition of Docker Desktop, you run a docker login azure command, which takes you to the ordinary Azure login page, and then you create a Docker context. I've done a YouTube video about Docker contexts - they're just a way to switch from one Docker server to another.
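The setup looks roughly like this - a sketch assuming the Docker Desktop Edge build with the ACI integration, so the exact flags may differ between versions; the context and resource group names here are mine:

    docker login azure                  # opens the normal Azure sign-in flow
    docker context create aci ecs-w3 --resource-group ecs-w3 --location eastus
    docker context ls                   # the new ACI context appears alongside the local ones
    docker context use ecs-w3           # docker commands now target ACI
    docker ps                           # lists the container instances in that resource group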
The work Docker have been doing with Microsoft is to present ACI with a kind of Docker Engine API, so the docker command line can talk to it as if it were a remote engine. I've already set that up, so if I do docker context ls, we'll see a bunch of contexts I use locally, and somewhere in here is my ecs-w3 context, which is an ACI context type, and it tells me where it's connected to - the resource group. So my context for my ACI containers is a resource group and an Azure region, and I can manage everything within that group and spin up my containers there.

Let's see what I've got. If I switch to that context with docker context use, my local docker command line now talks to my remote engine, which in this case is ACI. If I do a docker ps to list all my containers, there's a little bit of network lag - obviously I'm not talking to my local Docker server, I'm talking up to the cloud - and we can see the things I started up: sqlserver sqlserver and whoami whoami. I can see the ports they're published on, the public IP addresses, the images - all the usual things I'd see when working with my Docker containers; these just happen to be running in a completely managed container platform in the cloud. They have these doubled-up names because of the idea of container groups that can hold multiple containers: the container group is the first part of the name, and the container itself is the second part.

I can look at those containers and look at the logs. If I get the container logs for the whoami container - I've already browsed to it, but this is a high level of logging, so I don't see all the access logs, just the basic stuff about where it's running. You can see the content root is a C:\ path, so this really is a Windows container, running on Windows - it just happens to be in ACI.

OK, let me clear this down. Actually, let me show you one more thing: if I do a docker inspect for that whoami container, it tells me the platform is Windows, the ports that are available, and the restart policy - with ACI the standard Docker restart policies are observed, so if the app fails the container can restart, and all that sort of stuff. If you're used to the docker inspect command you'll know there's usually a ton more detail in the JSON; I'm only seeing part of it because this is all the ACI version of the Docker Engine API gives me - a subset - but I can see that my platform is Windows.

OK, cool. Now I'm going to run a new container, still using the Docker command line, from that same whoami image. This is the multi-architecture image: when I created it in ACI with my az command, I said I wanted it to run on the Windows platform, so it was spun up on a Windows host to pull the image. This time I'm not specifying the platform, just the image name, which is a multi-architecture image; I'm going to publish my port and give it a name, whoami-2.
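The command is roughly this - a sketch with a placeholder image name; note there's no platform flag, so with the ACI context the multi-arch image resolves to the Linux variant:

    docker run -p 80:80 --name whoami-2 <your-dockerhub-id>/whoami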
You get this interesting UI - I don't know if it's intentional; it might just be because I'm running this inside a PowerShell terminal in Visual Studio Code - but you get output that makes it kind of difficult to tell what it's doing. It's creating the container group, and it's going to create the container inside there; it's obviously doing a polling update, but instead of updating the same line I get all these lines printed out. Like I said, it's a beta feature. But think about what it's doing behind the scenes: it's creating these containers in a production container platform from a single command - the same command I would run locally.

OK, so there's my whoami-2. If I do docker ps again I should see my new container, and because I created this one directly with Docker and gave the container a name, I don't have that doubled-up container name business, but I've got my IP address allocated and the port published, so that's all looking good. Now if I curl it just to see what I get: Operating System: Linux. I'm getting the Linux version, because I used the multi-architecture image, and by default, when I create through the Docker command line connected to an ACI context, I get Linux containers. Docker does support a platform flag, but that's not integrated with ACI right now, so if I try to pass --platform it throws an error. And because this integration is at an early stage, I can run Windows containers in ACI and manage those Windows containers through Docker, but I can't create them with the Docker integration just yet. If I run the same sort of command again but with the specific tag for my Windows image - not a multi-architecture image, just Windows - I still get a whole bunch of output, and at the end it throws an error saying the specified OS is Linux and this container image is for Windows: it's trying to schedule the container onto a Linux host and it's a Windows container image. So the Docker integration doesn't let me specify the platform yet, and it doesn't let me do everything I could do with the az command or the portal - but it's really promising stuff, and hopefully it'll get traction and be developed further.

You can use this with Docker Compose as well: I can have a Docker Compose file that specifies all the bits and pieces I run locally with docker compose up, and when I switch to an ACI context and do docker compose up, everything gets created as Azure Container Instances in the same container group. So it's the SLA for production and a super easy, lightweight way to manage it, because I'm using the same modelling specification and the same tools I use locally, just through Docker's integration with ACI - which is really, really cool. At the moment the ACI integration is in beta, so it doesn't have full Windows support: you can create Windows containers using the Azure tools and manage them using Docker, but you can't yet do the whole thing - create and manage - using Docker.
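As a sketch, a compose file for something like the whoami app can be as small as this (the image name is a placeholder):

    # docker-compose.yml
    services:
      whoami:
        image: <your-dockerhub-id>/whoami
        ports:
          - "80:80"

With the ACI context selected, docker compose up creates the whole thing as a container group and docker compose down removes it again - and the same file works against a local Docker engine when you switch contexts back.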
The other thing that's slightly weird is cleaning up. I'm still connected to my ACI context, and I can run docker rm -f, which removes any running containers. It removes the one I created using docker run, so my whoami-2 is gone, but for the others it says it can't delete the service, because it thinks they're a Compose project - they're part of that nested thing, the sqlserver container is part of the sqlserver container group. For those I need to do a docker compose down - and if you've got a keen eye you'll notice this is a slightly different Docker Compose command, because it doesn't have the dash: compose is becoming a sub-command inside the ordinary Docker CLI. So I can use docker compose down to take these things down; I'm still talking to ACI, so it removes those ACI container groups and gets rid of the containers. That's all been removed now; if I go back to my portal, back to my container instances and refresh - it takes a little while for the UI to catch up - we'll see that whoami-2 is gone, and if I keep refreshing, the sqlserver and whoami ones will go too. But we've got a limited amount of time, so instead of refreshing, if I do a docker ps - because my context is set to ACI - I can see there's nothing up and running there.

OK, cool. So that's ACI; the equivalent in Amazon is called Fargate. It's containers-as-a-service: you bring your container definition, whether you're using the az command line, a Docker Compose file or the Docker command line, you spin the containers up, and they get managed for you. I don't have to worry about a virtual machine going down, I don't have to worry about a problem in the region or the data center - the SLA I'm asking for is purely about running this container, and everything else gets taken care of. It's a really nice option if you've got production requirements for your containers but you don't need the full flexibility of something like Kubernetes and you don't want the overhead of managing the complexity of that application model - Kubernetes lets you do all sorts of fantastic stuff, but not every application and not every organization needs it. By getting rid of the orchestration layer you can take the Docker Compose file you use locally and in your test environments and use the same thing in the cloud with the ACI or Fargate integration, which is a really nice, promising way of doing things in the future. And even without that integration you can still spin things up and run them in ACI or Fargate using the cloud's own tools: you're still using the same Docker container images, you're just creating them with the az command or whatever else you're using. The Docker Compose integration, which is really new, is looking good already, so hopefully it will be a nice production alternative in the future.

And then at the other end of the spectrum of complexity and setup is a managed Kubernetes service. If you're looking at Kubernetes, the modelling language lets you configure all sorts of interesting tweaks, and we'll look at some of those files later in the episode. As for the platform itself, ideally if you're spinning it up in the cloud it's a managed platform, so you don't have to worry too much about looking after the VMs, or deploying Kubernetes yourself, or connecting the cluster together. If you're running in the data center you do have to worry about all that, and it's not a small undertaking.
But if you're looking at Kubernetes in the cloud, all the clouds have a managed Kubernetes offering. If I go to Kubernetes services here, I've already got a couple running - my blog runs on Kubernetes - and I've already set up the one for today's session, because it takes a little while to create, but I'll show you what it looks like. I can go to add a Kubernetes cluster. With Azure you can also take a local Kubernetes cluster and connect it, so you can manage it from your Azure command line and see it in the portal - it behaves like an Azure resource even though it's your local Kubernetes cluster - but that's a topic for another day.

Same sort of thing when I'm creating this in the portal: I need to select a resource group where everything will live and give the cluster a name. The interesting thing when I'm creating this for a cloud deployment is that in Azure I can have multiple node pools - groups of nodes, the servers that are going to run my application containers. If I want some Windows and some Linux, I have to start with a Linux node pool: that's the default, primary pool, and I can scale it up or down to as many nodes as I want - I'll take it down to one node. Then I can add a new node pool and say this one is going to be Windows, and that's how I get a hybrid cluster. The user experience is different in different clouds, but the end result is the same: a single Kubernetes cluster with multiple nodes, some of them Windows, where I'll run my older Windows workloads, and others Linux.

This is fantastic if I'm moving existing applications to the cloud and I want to run them in containers in a portable way - I can do that with Kubernetes. And if I've got my older Windows estate and I want to reduce my Windows footprint over time, all I do is move those sliders: I go from having, say, ten Windows nodes and two Linux nodes, and gradually move to maybe five Linux nodes and two or three Windows nodes, spreading the workloads around as I move from .NET Framework to .NET Core or .NET 5 and turn them into Linux workloads. The only real difference is a couple of lines in the Kubernetes YAML, which I'll show you in a second. So that's really all I do: I say I want Windows nodes, I set how many nodes I want, and I choose a size, which is just the VM size for the nodes.

I'm not going to create it here, because I've already got it set up and running, and I did it with the Azure command line - so let me open that up so you can see how it looks. It's a couple of different steps with the az command line, because first you create the cluster; you can take these commands and paste them straight into your own az session if you want. There's the resource group I've already created, and the Kubernetes version I want to run - Azure, and all the clouds really, are good about keeping up to date, so I can run 1.18, and the new 1.19 release is available in preview. And if I want to run Windows nodes, I need to provide the Windows administrator username and password.
The first step creates the cluster, which gives me the Linux node pool - the primary node pool - and then if I scroll down, I run another command to create a new node pool: this is the nodepool add command, which gives me my Windows servers. In here I'm saying the OS type is Windows, there's my node count, and I can specify the VM size and all that sort of stuff as well. I've already done this, and my cluster is already up and running, because provisioning the cluster, setting up all the VMs and connecting everything takes a few minutes - so rather than sit here and wait, I did it in advance. Then you run a command which takes the credentials for connecting to that Kubernetes cluster and puts them into your local kubeconfig, so I can use my local kubectl command - which I get with Docker Desktop - to connect to my remote Kubernetes cluster, just like I used docker to connect to ACI. So I'll do my az aks get-credentials - I'm already logged in with az to my Azure subscription - and it goes and gets the credentials and merges them into my local kubeconfig; I don't need to authenticate again because it's already in there.

Now if I run kubectl get nodes, I see a whole bunch of things in here - I created this a couple of hours ago. These are my default node pool, my Linux nodes, and here are my Windows nodes. They're all running the same version of Kubernetes, it's all one cluster; Kubernetes doesn't really care - as far as it's concerned it's got a bunch of machines all running the kubelet and all reporting back the same information, and some of them just happen to be running the Windows operating system.
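Pulling those steps together, the cluster setup looks roughly like this - a sketch where the names, version and VM size are placeholders; Windows node pools need the Azure CNI network plugin, and Windows pool names are limited to six characters:

    az aks create `
      --resource-group ecs-w3 `
      --name ecsw3-aks `
      --node-count 2 `
      --kubernetes-version 1.18.8 `
      --network-plugin azure `
      --windows-admin-username <admin-user> `
      --windows-admin-password <admin-password> `
      --generate-ssh-keys

    az aks nodepool add `
      --resource-group ecs-w3 `
      --cluster-name ecsw3-aks `
      --name win1 `
      --os-type Windows `
      --node-count 2 `
      --node-vm-size Standard_D2s_v3

    az aks get-credentials --resource-group ecs-w3 --name ecsw3-aks
    kubectl get nodes -o wide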
OK, so to deploy to my Windows nodes I've got a really old application: the .NET Pet Shop. The link there takes you to the source code - I grabbed it from CodePlex, which was Microsoft's way of distributing code many years ago, and Pet Shop was on there. It's a really simple application that tried to showcase some of the early .NET features; it's from 2008, using .NET 3.5 - an old application. You can browse the code; I think I made one code change just so things package up more neatly in containers, and that was about it, and then I built it into Windows container images on top of those Microsoft images we've already seen in the series. I'm going to deploy it to Kubernetes now.

Inside the docs and demos for this episode, in the w3 folder, there's a whole bunch of Kubernetes stuff - and if you're not familiar with Kubernetes you can skip over a lot of this. I've got two things to deploy. The first is an ingress controller, which runs in Linux containers on my Kubernetes cluster and receives all the incoming network traffic. An ingress controller is how Kubernetes lets you map a bunch of rules onto one central component listening for all incoming traffic: I can have just port 80 open across my entire cluster, any traffic that comes in gets picked up by one of those Linux containers, and a bunch of rules say, if you get a request for this path or this domain name, here's the pod you fetch the content from. It's just like a reverse proxy, if you've seen me talk about that before, but it's managed in a way that makes it a first-class resource in Kubernetes, called Ingress.

The thing I want to show you in the ingress controller spec - and again, if you're not familiar with Kubernetes this is just a massive lump of YAML you can more or less ignore; I'm not showing it off, it's just how you set these things up, and there are some good practices in here - is the node selector. This is the bit that tells Kubernetes this is a Linux workload and it has to run on the Linux machines in the cluster. Every machine in the cluster has a whole bunch of labels - Kubernetes is very big on labels, they're just key-value pairs, some generated by the platform and some you apply yourself - and every node has an operating system label, so when it joins the cluster Kubernetes knows whether it's Linux or Windows, whether it's Intel or Arm, and a bunch of other things. This node selector says these pods, these containers, have to run on a Linux server, so my ingress controller - my incoming network traffic - runs in the Linux containers.

I'll deploy that in a second, and then I've got the Pet Shop application itself. There's a SQL Server database - we can see the container image in here, again just an ordinary public image on Docker Hub - and things like the environment variable for the administrator password, which I'm getting from a Kubernetes Secret. If you're interested in Kubernetes and want to see how Windows workloads get modelled, have a look at this stuff: you'll see there isn't much difference from what you'd normally do. I'm creating Kubernetes Secrets, I'm using ConfigMaps for my application configuration - all the usual stuff you'd do in a production deployment of Kubernetes, and it's the same for Windows. The only thing that's different is down here: my node selector says use the Windows operating system, and that's really it - that's the only big difference between my Windows application components and my Linux application components. All the rest - I can specify how much CPU I expect my application to use, I can specify the limits that should be applied so that if something goes wrong and my application goes rogue it doesn't take down all the compute resources on the node - all those production-y things, all those best practices, are the same for Windows workloads in Kubernetes as they are for Linux. The only thing that's different is the node selector.
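As a cut-down sketch - the names and image are placeholders, and on older clusters the label may be beta.kubernetes.io/os instead - the shape of that deployment is:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: petshop-web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: petshop-web
      template:
        metadata:
          labels:
            app: petshop-web
        spec:
          containers:
            - name: web
              image: <your-registry>/petshop-web:latest
          nodeSelector:
            kubernetes.io/os: windows   # "linux" for the ingress controller pods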
I've got the same thing for my web application. There's a Service, which is how traffic gets from my ingress controller to my application; I've got two replicas so I can scale across multiple containers; and there's the other stuff you'd normally expect to see - volume mounts for configuration, so I'm loading configuration from Kubernetes into my application container and the application reads it from there, plus readiness probes and health probes so Kubernetes can test whether the application is healthy. Down here I'm loading in those configs, there's that node selector again, and - just to show you that you can do the same stuff with Windows that you do with Linux - Kubernetes has this concept of affinity, which says this pod (a pod is just a wrapper for one or more containers) should run near this other pod, or should run far away from this other pod for high availability. What I'm saying here, with anti-affinity, is that my Pet Shop web application pods shouldn't run next to each other in the same zone. It's a complicated way of saying: if I have multiple nodes in my cluster running in different availability zones - all the clouds have this notion of availability zones; think of them like different racks in your data center - then I don't want all my pods running on servers in the same rack, because if that rack goes, I lose everything. This spreads the pods running this application across different racks, different zones. It's quite a complicated little bit of YAML, but it's very flexible, because it lets you say things like 'I don't want my applications co-located on the same server or the same rack', however your topology is defined - or the opposite, 'I do want my web application pods next to my API pods to reduce network traffic'. I'm only showing you this to make the point that almost all of the Kubernetes API spec applies to Windows containers in the same way it does to Linux containers.
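Inside the Pod spec, that anti-affinity piece looks roughly like this - a sketch where the label and topology key are assumptions, and older clusters may only carry the failure-domain.beta.kubernetes.io/zone label:

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: petshop-web
              topologyKey: topology.kubernetes.io/zone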
OK, cool - let's go and actually deploy this so we can see how it looks. Get rid of that browser, bring the terminal back and clear it down. Right now I've got nothing deployed: my nodes are up and running, I've cleared everything down, each of these things is going to live inside its own namespace, and right now I don't have any custom namespaces. So let's go to where all those YAML files are. If you're completely unfamiliar with Kubernetes, this is how you deploy stuff: kubectl apply, giving it a folder path where all the YAML files for the ingress components are. That's the reverse proxy - it listens across the cluster and decides what to do with any traffic that comes in - and it's deployed all that stuff. Those things run in Linux containers, and if I look at the pods for my ingress, they're all up and running and all ready, so that's looking good; it only takes a few seconds for small Linux containers. I've already done a dry run of this on the same cluster, so all the container images have already been downloaded onto the nodes and everything starts up pretty quickly.

OK, clear that down. Deploying the Pet Shop app is the same process. The manifests are split into different folders just because I want to manage them in different ways - I could put everything in one big YAML file if I really wanted to - but the fact that these are Windows components makes no difference: I'm still creating the same sorts of Kubernetes resources. Secrets, which is where my database password lives, and which have a mechanism for separating who can work with them, so not everyone can see those secrets; ConfigMaps, which is where I'm putting the XML config for my web application; and then the thing I haven't shown you yet, which I should: my ingress rules. The ingress rule for the web application is the bit that tells the ingress controller: you're listening for traffic, and when you receive a request - I'd ordinarily have a domain name in here, but I'm only running one component - send it to the pet-shop-web component, which is my Windows application. I've actually got some static content rules in here too: my Pet Shop web application is a dynamic web app, but there are a whole bunch of things that could be cached, like images, so I've got another ingress rule which says anything that comes in for the product images or the common images can still be served by the ingress controller, but cached within the ingress controller. Again, all the flexibility of Kubernetes applies whether you're running Windows apps or Linux apps.
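A sketch of one of those routing rules, using the older networking.k8s.io/v1beta1 Ingress schema to match a 1.18 cluster like this one - the service name and the ingress-class annotation are placeholders, not the exact ones from the demo:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: petshop-web
      annotations:
        kubernetes.io/ingress.class: nginx
    spec:
      rules:
        - http:
            paths:
              - path: /
                backend:
                  serviceName: petshop-web
                  servicePort: 80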
OK, so back over here - everything should be up and running now. If I look at the pods for my Pet Shop: that's all looking good, there's my database and my two web pods, and they'll be running across different servers because that's what I specified in my topology rules. And if I look at the service for the ingress controller - the Service is just the network component that listens for incoming traffic - this is a public load balancer. Kubernetes integrates really nicely with all the cloud platforms, so when you deploy this specific type of service, a LoadBalancer service, I get a public IP address that Azure assigns for me. What you actually get depends on which platform you're running on, but here I get an actual load balancer spanning the nodes in my cluster. If I ran this same deployment on Docker Desktop I couldn't run the whole thing, because you can't have Linux nodes and Windows nodes on a single Docker Desktop machine, but if I ran just the Linux parts I'd still get a LoadBalancer service and the external IP address would be localhost - so I can get the same behaviour from the same YAML files.

So this external IP address should be something I can browse to... and there we go: this is the Pet Shop from 2008. I've taken the code that was on CodePlex, packaged it into Docker images, written a whole bunch of YAML to run it in Kubernetes with a whole bunch of Kubernetes best practices, and it just works. I can go and look around - as I'm looking at the images, they're being cached in my ingress controller. I've got some other stuff in there too: the Pet Shop application needs sticky sessions for the wish list and checkout to work, and I've got that specified in my ingress settings. If I go back and look at these pages again, they should be faster, because the images are cached inside the ingress controller. So I've got all the Kubernetes goodness, I've got the managed platform - I created my whole Kubernetes cluster with a couple of az commands - and I'm deploying with the standard Kubernetes tools; they just happen to be Windows containers.

OK, cool - hopefully that's given you a flavour. (I need to update that slide - that's this week, not next.) So that's how we can take these Windows applications and run them in the cloud without adding a whole bunch of ceremony. If I just want something quick, or something that's fairly simple to model, I can use ACI, and I still get a production-grade container platform without having to learn a new modelling language - because I can use Docker Compose, hopefully at some point in the future if the integration gets extended to Windows containers. And if I do need all those tweaks, or I need to model things in a complex way like I've done with the Pet Shop here, I can do that with Kubernetes: I can spin up a Kubernetes cluster, add some Windows nodes, and run a hybrid workload. As you saw, my ingress controller is running in Linux containers and fetching content from my Windows containers; if my Windows containers were reading from an API running in a Linux container, that would work in the same way - they just use the DNS names of the Kubernetes services and everything works the same.

OK, cool, we've got some stuff going on in the chat, which is really interesting. Hey Mike, hey Nuno. Sebastian asks: why would you use a container instance instead of a web app based on containers - is there an advantage of one over the other? So, Azure has a whole wide range of PaaS offerings for running your applications, and Web Apps is one of them. The idea of Web Apps in the old days was that the fact it ran in containers was kind of hidden from you: you brought your code, it packaged it up and ran it in containers under the hood, and then it evolved to let you say 'look, I've got my container image, just run this container image for me'. I think the difference is the landscape you're running on. If you're using Docker locally, if you're planning to use Docker in your production environments, if you're looking at containers as your primary deployment tool, then I would stick with the most container-y thing, which is ACI. ACI also gives you these container groups - you can't do that with Windows yet - which effectively let me connect containers together, like the local Docker network I get when I'm running containers locally.
So I've got a lot more flexibility in what I can do there, but it means I'm working in a Docker-ish way instead of an Azure-ish way. It really depends on what else you've got in your landscape and what other runtimes you're using: if you're more container-focused, I think ACI will give you more options, and if you're more focused on Azure PaaS, then Web Apps are probably the way to go. But the beauty of it is you can prove the concept really quickly: package your app in a container, run it on both, and see what works for you, for your pipelines and your workflow.

OK, cool. So next week is the last episode about Windows containers - remember, each theme runs for a month; this month was all about Windows containers and next month is going to be about orchestration. But before that, I've got one last Windows episode where we'll dig a little more into the detail of what's going on inside those Windows versions we've been looking at. Windows containers have been around since Windows Server 2016, and the versioning story is a little bit involved, so I'm going to get into that so you understand what's happening inside those versions, and talk about isolation - process isolation, Hyper-V isolation - and some of the other quirks, like LCOW, for running Linux containers on Windows. That's coming next week, and then when we get into November it's all going to be about orchestration and comparing different orchestrators. I hope you found this useful - my name is Elton, this is eltons.show, and next week we'll dig into the details of Windows container versions when you're running with Docker. OK, thank you.
Info
Channel: Elton Stoneman
Views: 317
Id: jpG0sBqWfgo
Length: 45min 0sec (2700 seconds)
Published: Tue Oct 20 2020