Microsoft Azure Master Class Part 8 - Application Services

Captions
hey everyone, welcome to part eight of the Azure Master Class. This one is all about application services: containers, which are really everywhere today, and what containers in Azure look like - yes, I could run them in a VM, but there are also Azure Container Instances - then orchestration with Kubernetes and the Azure Kubernetes Service, that complete managed offering, then some of the App Service capabilities like App Service plans and Azure Functions, and we'll finish off with Logic Apps.

When I think about app services: we already covered VMs and virtual machine scale sets, and absolutely I can run my applications in there, but what we're really thinking about here is moving up the layers. We talked before about the idea of pets and cattle, and the idea that Azure always manages the core fabric - the storage, the networking, the compute and the hypervisor. With a virtual machine I'm responsible for the operating system, the runtime, the middleware and then obviously the app and the data. When we start moving into the true app services, our focus is really "hey, I want to think just about my app and my data." As always, if this is useful please go ahead and hit like, subscribe, comment below and share.

So as I think about these services, this whole idea of pets and cattle is really important as we move from VMs to app services; we need to understand the difference. We're absolutely used to the idea of pets - we'll make our pet Garfield. With our pets, we care for them: we patch them, we fix them, and if they're sick we heal them. Then we have the idea of cattle - I'm obviously not a great artist, so I'll just make sure it's saying "moo", that's a cow. With cattle we don't care: if one is sick we just replace it, we don't really name them, there's nothing special about any individual.

If I think about where the services live, we think VMs live on the pet side, but in truth even VMs are sometimes cattle. With something like a VM scale set we don't particularly care about any individual instance, and even with Windows Virtual Desktop - yes, it's using a VM underneath - we don't care about any of those individual virtual machines. Then we get to things like containers, which by extension includes Azure Container Instances and AKS, and then the app services. Realistically we don't want to be in the pet world. Some things are pets - domain controllers, or SQL Server - and what a lot of this boils down to, pet or cattle, is whether I have some state I care about on the instance itself.

Now you may argue "wait, these services have state, otherwise they would be completely useless," but it's about where the state is. Normally the state is part of the virtual machine - the virtual machine's disk. For things like scale sets, we have some external storage and maybe a data layer, a database. For Windows Virtual Desktop we abstract away the profile and the applications - it could be FSLogix user profile disks - so we separate the data from the actual machine. For containers, if I really need something stateful, I can have a persistent volume that I attach and detach from whatever compute instance. So when we talk about state, we're really talking about whether the compute aspect does or doesn't have state - whether or not we have to care about the particular instances. That's what we're trying to move away from: we don't like the idea of having to care for and feed some compute instance. I want it to be completely disposable - the state is separated, and anything can just connect to it.
So for all of these types of things - virtual machine scale sets, Windows Virtual Desktop, containers, App Service - the app obviously still has state, but we separate it from the compute block. If the compute block is sick, we don't care, we just replace it.

Let's start with the first kind of compute block we always talk about: containers. A virtual machine virtualizes the hardware. We're used to that idea: there's a physical box, and that box has resources - CPU, certain amounts of memory, storage or access to storage, and networking, physical NICs onto the actual network. With a virtual machine, I create a VM on that box and the VM gets a portion of virtual resources: some number of virtual CPUs, a certain amount of memory, virtual hard disks that consume some of that local or remote storage (things like storage accounts and managed disks), and virtual network adapters that bind to the physical network connectivity. Inside there is a whole bunch of stuff I have to manage, and I can have multiple VMs on the same physical box. Into each one I install an OS: that OS has a kernel, and then a bunch of processes that run in a user-mode space - so there's kernel mode, user mode, and these are all just processes.

So that's virtual machines: I'm virtualizing the hardware to create a virtual piece of hardware, but then I have to install an operating system. The OS has a certain amount of bloat, a whole bunch of processes and manageability, and it takes time to spin all of that up. It does give me great isolation: if I have another virtual machine on the same box, it has its own OS instance, its own kernel, its own process space, different versions of libraries - there's no conflict between them. That's why virtual machines are so attractive. Before, we had things like DLL hell: if I tried to run multiple apps on the same OS instance, they maybe had different requirements and would clash with each other. Put them in VMs and they're completely isolated - they can't see each other, the processes can't see each other, they have different file systems - it removes any risk of a clash.

A container, on the other hand, virtualizes the operating system. Containers are not a first-party part of the operating system as such, but there are features in the operating system that containers leverage. Things like cgroups: a cgroup controls how much CPU you can use, how much memory you can access, what devices you can see. Namespaces provide isolation walls: a network namespace to control networking, a process namespace, a mount namespace. And then there are union file systems, which means I can have layers of a file system.

So take that physical box - I can run containers on bare metal, though it's not that common - or expand out one of those VMs; either way it now becomes a container host. I still have an OS, I still have that kernel, but now containers give me sandboxes: isolated spaces using cgroups and namespaces, via a container runtime (containerd is the common one). Now I have container instances, and I can have multiple of them. When I think about how the machine is divided up, they are sharing a kernel, each with its own user space: container one and container two each have their processes inside, they cannot see each other, but they share the kernel. They also have constraints - I can control how much resource each can use through those cgroups - and what each one sees is a virtual, layered file system.

The container is actually running an image, and another container can run a completely different image - say, image two. The image does have to match the OS of the container host - these are not VMs, they're still running on the underlying operating system - so I could not, for example, run a Windows image on Linux. It doesn't work that way: I'm still using the host's OS; I'm just isolating the processes from each other.

When I think about these images, it's layers, and this is what makes containers so powerful and useful. There's a base layer - the OS; there are many different distributions and versions, and I pick what I'm building from. Then I might have another layer off of that, which maybe adds a certain runtime. I could have a completely different layer off the base from a different image that uses runtime two, and then another layer that adds an application. All of these layers are read-only: anything I fetch as an image is read-only, I cannot change it. What gets put on top when I actually create a container is a read-write layer, so my app running inside the container can write, but it cannot modify the underlying image - the image is immutable. And what I do with these images, most commonly, is store them in a registry. Docker has a registry.
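That layering can be sketched as a hypothetical Dockerfile - each instruction produces another read-only layer on top of the base image (the names here are illustrative, not taken from the demo):

```dockerfile
# Base layer: the image this build starts from (an OS/runtime image)
FROM httpd:2.4

# Each instruction below adds another read-only layer on top
RUN rm -rf /usr/local/apache2/htdocs/*    # a layer that removes the default site
COPY ./site/ /usr/local/apache2/htdocs/   # a layer that adds my app's content
```

When a container is created from the resulting image, a writable layer is stacked on top; the layers beneath stay immutable and can be shared by any other image built from the same base.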
Azure has the Azure Container Registry, which I'll talk more about. Essentially, when we make our images up - this is my app one image - I can push them into my registry. So now I have my app image v1, and later I could do a v2 or v3 or whatever I want, but each is immutable: I can't change it. I could have a dev environment, a production environment, a UAT environment, and these images can move through all of them. I can pull the same image down in five different environments and it's guaranteed to be the same thing.

Think about what the big pain point was with virtual machines: the developer creates the app, they have a bunch of stuff on their machine, they write down what they think the dependencies are, and then you go install it in prod and it doesn't quite work - "oh, you missed this thing" - and you end up with a big set of instructions on how to deploy the app. We don't have that anymore. The developer creates an image; the image has all the layers it needs, the runtimes - it's self-contained. All I have to do to deploy is pull the image and run it. There are no other dependencies to go and pull down.

We can actually go and see these layers, so let's jump over to a demo for a second. I'm running Docker here - Docker is one of the most common tools when we think about containers - and I can quickly look at my images. If I run `docker images`, I can see all the images on my box, which includes httpd - that's one I pulled down from a public repository. I can also look at the history of that image, and you'll notice it shows the various layers that build up what's happened to that image over time - it's made up of many different steps, files being copied and so on, to actually make that image.

In this example I have a Dockerfile, a composition file - that's how we create images. The composition file tells Docker how to create the image. Going back to the whiteboard, I had the idea of a base image: if I want to create my own, I build from something and then modify it - put other things in, maybe remove things; again, the layers are read-only, but I can hide things through those layers of the file system - and create my own image. So here my composition file builds from httpd, that's my base image; I run a command to remove the built-in default web page; and then I have a local folder that I copy into it.

So I can go and build my image. If I run the build command, you can see it running through those various steps, and it's now created. And now I can run it: the image is the thing we build, and running it creates a container instance, pulls that image in, and executes the process within it. If I run this command, it goes ahead and creates that container instance and it's now running. One thing you'll notice is this `-p 8080:80`, which maps port 8080 on my host to port 80 in the container. So I can now just go to 127.0.0.1:8080 - my local page - and there I'm running against my container. The reason it's "bad father" is that this is the first time I took my son on a roller coaster, and yeah, that was the face. If I click here I can see the first time he went on Splash Mountain, and click again to get an even bigger picture - you can see that in action.
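The build-and-run flow just described looks roughly like this (image and container names are assumed for illustration, and a Docker engine must be available, so treat this as a sketch rather than the exact demo commands):

```shell
# build an image from the Dockerfile in the current directory,
# tagging it "badfather" (name assumed)
docker build -t badfather .

# create and start a container from that image; -d runs it detached,
# and -p 8080:80 maps port 8080 on the host to port 80 in the container
docker run -d -p 8080:80 --name badfather-demo badfather

# the site is then reachable at http://127.0.0.1:8080
```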
So that's the idea of creating my own image. Now I can look at what containers I have running: when I run `docker ps -a`, there's my bad father container instance, with a container ID I can copy and use as part of other commands later on. I could stop the container, remove the container, all these various things. Right now I'll just stop it, and once it's stopped I can go in and remove it - so I've killed off that container. A container generally runs a process, and as soon as that process terminates, the container terminates as well.

So that's the idea of a container - it's really not that complicated. I have an image that I build from a composition file that includes everything it needs to run, all the dependencies, and I execute it inside a container instance: a sandbox with its own isolated namespaces for networking and processes, and its own union, layered file system. And I have a registry, a repository of my images. This is super powerful. Those registries can be private or public - it's really up to me: private if it holds my corporate IP, public if I want everyone to be able to see it. The registry could be in the cloud or on-prem; there's not really a right or wrong.

You saw how quickly that container got created, because it's not having to spin up a whole operating system - the OS is already there; a container just uses bits of the OS to give it the isolation it needs. Generally I can spin these up in about a second, they're that fast, and I still have all the same resource controls. They're completely portable because of the image - I can run it really anywhere; I just have to make sure the image is running on a container host with a compatible OS.
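The lifecycle steps from the demo, as a sketch (the ID placeholder stands in for the value `docker ps` actually shows):

```shell
docker ps -a                 # list containers, including stopped ones
docker stop <container-id>   # stop the running container (its process terminates)
docker rm <container-id>     # remove the stopped container; the image stays cached
```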
It's still using that host's operating system to execute; it's just that the file system it sees is built off those layers. This is one of the huge benefits of containers: they're much lighter weight than virtual machines, and completely self-contained. Think of microservices - microservices and containers go hand in hand. We're used to a big monolithic application, one big thing that's just running all the time. With microservices I decouple everything in my application - loose coupling, maybe a RESTful API called between components. Imagine I have some task that needs to run: I spin up a container, it does the task, then it dies. "Hey, I've got a stock application - go and research these five stocks": spin up a container for each of the five investigations, each does its job, then they terminate. They're constantly being created and deleted.

Back in my example, the next thing you'd commonly do once you've created your image is put it in that registry. (Bear with my cursor - this is a remote machine and it's hiding the cursor on my secondary display, making it near impossible to do anything; mental note, don't use this again.) Scrolling down, you can see what I do: I tag the image and connect to my Azure Container Registry. I've got my `az acr login` - saviletech is the name of my registry, so saviletech.azurecr.io - and I tag the image with the name of the registry plus images/badfather, and then I push it into my Azure Container Registry, and again I can see the full history of my image.

One thing I'd point out: I created badfather, but I can tag it with versions, many different things. So I tag it with that registry name, and then I can go and look at the history. On my image here - let me make this a bit bigger, scroll up, and run the command again to get it more current - you can see my commands: there's where I removed the htdocs content, there's where I copied in the folder, so you can see how I actually made my image. All of this was built around the idea that I created an image and, with that command, pushed it up to the container registry.

And finally, when I was in my office preparing for this session I found this comic - it's Nano Server Man, and it has a whole thing about how containers are saving the world. Comics about containers, if you're really, really bored.

So, typically a container runs a primary process, and that's its job: when the process finishes, the container finishes as well - they share a common life cycle - and again, we create them from that immutable image. Now, there are ways to have state. I talked about the idea that a container has no state, but that may not be what I need. Remember, these processes can absolutely talk to external storage or a database, and even for basic containers I can map pieces of the container's file system to folders on the host, so we can store persistent data. When we move beyond this and start looking at the core orchestration solutions, they have better options for this.
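The tag-and-push sequence described above, sketched with the registry name from the demo (the exact repository path is assumed, and these commands need the Azure CLI and a Docker engine):

```shell
# authenticate docker against the Azure Container Registry
az acr login --name saviletech

# tag the local image with the registry's login server so docker
# knows where to push it
docker tag badfather saviletech.azurecr.io/images/badfather

# push it; only layers the registry doesn't already hold are uploaded
docker push saviletech.azurecr.io/images/badfather
```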
Additionally, about those layers I drew: the way this works is copy-on-write. I can have multiple containers running the same image, and it only duplicates the parts that are unique. I could absolutely be running completely different images, maybe with different runtimes, but it doesn't store each entire image - it understands the layers, so if I have two images built off the same underlying layer, that layer is still stored once. It's intelligent and really efficient in how it does that work.

So think about container support. Docker really brought containers to the masses, and what I was demoing on is Docker Desktop - it's great for learning this stuff, and it even has a mini version of Kubernetes, which is the de facto orchestrator, so I definitely recommend it. The Docker management commands - how I interface with containers - have really become a standard, and the Docker registry is likewise one of the standards for storing those images. For container runtimes there are a lot of options; containerd is the most popular one right now, and it's what Azure Container Instances leverage. And to be clear, containers started off on Linux, but with Windows Server 2016 Microsoft added its own set of container technologies, so I can use the same management interfaces, the same registry, a consistent API - it's using something different under the covers, but I can absolutely do containers on Windows and Linux. That really is a key point.

Then there are Hyper-V containers, which give me an isolated kernel. Think about what I drew: containers share a kernel. That's great if I trust the containers - it's my organization, my host, I'm fine sharing a kernel, I want the efficiency and speed of it, and I'm not worried about a bad actor running in that process. But now imagine I'm multi-tenant, or maybe I'm super important and don't want to risk someone impacting me. The shared kernel is absolutely the normal model, but there's something called a Hyper-V container. I have the same container host with its kernel, and I run my regular containers on top of it - absolutely - and then a Hyper-V container actually creates a managed virtual machine. It is managed: I don't see it in, for example, Hyper-V Manager if I do this locally; it's designed around containers. What that means is it runs its own kernel, and then it runs container three inside. Nothing about the container is different - this is not a different version of the image; the container doesn't care. It's a deployment choice: when I start the container, I pick an isolation mode - "hey, I need Hyper-V isolation." It's still a container, still running on a container runtime (I'll just write containerd for fun - there's still a container runtime in all of these); it's just not sharing a kernel anymore, because it's the only container running in that managed VM. I don't want a completely different host just for this, so this gives me that isolation.

When I think about a multi-tenant environment like Azure Container Instances, which we're going to talk about, the ability to spin up this super fast managed VM but run just one container in it, with its own kernel, is super useful. Also, because it's running its own kernel, its own OS instance, the container host could technically still be running Windows, and if I create a Hyper-V isolated container it could run a Linux image, because that managed VM could be a Linux managed VM - so it gives me more flexibility in how I deploy things.
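On a Windows container host, that isolation mode is just a run-time switch - nothing about the image changes. A sketch (the image tag is an assumption; this only works on a Windows Docker engine):

```shell
# default on a trusted host: process isolation, sharing the host kernel
docker run --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello

# Hyper-V isolation: the very same image, but run inside a managed
# utility VM with its own kernel
docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello
```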
So if I need absolute isolation between containers, at run time when I start the thing I can pick an isolation mode of Hyper-V, and it will put the container in its own managed VM. It's still super fast - obviously not as fast as a regular container, since it has to create that VM, but the VM is super thin, designed around running a container.

Now let me talk about the Azure Container Registry. Again, there's the Docker container registry and many others out there, but logically ACR provides a private repository (and I can make things public as well). That's where I'd put my corporate IP, the corporate images I want to use in my environment. I can also geo-replicate it - there are different SKUs, and the premium SKU lets me geo-replicate. That's partly for resiliency in case something happens to a region - we always think best practice in Azure, we geo-replicate somewhere else - but that's not actually the biggest reason. Think logically: I've got that registry, and it exists within a location, say region one, and my container hosts are in Azure region one too. I want the registry close to the hosts because they've got to pull the image down; if it's half the world away, it's going to be much slower to create that container. Now if I have another bunch of container hosts over in region two, I don't want them pulling from region one - the performance won't be great. It may not be that big of a deal, the images are generally pretty small, but what I can do is tell the Azure Container Registry service, "replicate to region two," which means there's a replica there as well. So when I deploy from ACR and grab the app one image, it's in the same region - I'll pull from the closest region to me.

So that's geo-replication: yes, it's useful because it makes the registry resilient to a region failing, but mainly I want my registry, my repository of images, close to where I'm going to run them - I want to reduce latency and also increase reliability. I don't really want to be going over the internet; the Azure Container Registry runs in Azure, on that same network where my container hosts are running. So we like the Azure Container Registry, placed as close as possible, and with premium I can do private endpoints - I can actually put an endpoint into a virtual network. I can even run jobs. I've been talking about storing images, but they've actually changed the container registry so it can store other artifacts I might want to use as part of my all-up deployment.

And you saw me build the image: I had my Docker composition file, I ran it, it built the image, and I pushed it. That's great if I have the dev tools locally on my box. The way this really went was: there's my machine, there's me (no hair), and I have my Dockerfile, my composition file, that declarative definition of "I'm building from this base image and doing these commands"; it ran, created the image, and then I pushed it to the Azure Container Registry. Well, what I can actually do is say "hey, Azure Container Registry, you do it for me." I can have the composition file up there, and as a job it builds the image for me in the cloud. It can even see that my image is built from some source image, and if that source image changes - a new version - it rebuilds my image and I don't have to do anything anymore. I don't need my own pipeline: I've given it the composition file, I've told it what it's built from, and that job, if the base changes up there, is going to go and do the work for me.
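That "let ACR do the build for me" idea maps to ACR Tasks; a sketch, with the repository URL and names hypothetical:

```shell
# one-off cloud build: send the local build context to ACR and build there
az acr build --registry saviletech --image images/badfather:v1 .

# or register a task that rebuilds automatically when the source repo
# (and, on supported SKUs, the base image) changes
az acr task create \
  --registry saviletech \
  --name build-badfather \
  --image images/badfather:{{.Run.ID}} \
  --context https://github.com/example/badfather.git \
  --file Dockerfile \
  --git-access-token <token>
```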
So the Azure Container Registry does a lot more than just store images: it can build the images, and it can store other artifacts I need for that complete picture.

OK, so now let's think about the Azure services. Azure Container Instances can be Linux or Windows, and it's literally container as a service - almost serverless computing in a way, because there is no particular container host I have to create or pay for. I literally say "hey, I want a container, built from this image, at this specification," and I pay for it while it's running. That's it. It can be built from standard images or my own custom images, and those can be public or private. The reason it's "Linux" in brackets: today, to integrate with my virtual network, it needs to be Linux-based - Windows is coming, but it's Linux today. These are super useful for burst scenarios, or maybe just some very basic scenarios, because again I pay only for roughly the seconds the thing is running.

In a minute I'm going to talk about something more powerful, orchestration, but one thing I want to mention now: the Azure Kubernetes Service can use ACI. I can actually make ACI look like a node - an infinite-scale node - to Kubernetes. It'll be tainted, since it's not one of my regular worker nodes, so I will have to add a toleration to my deployment to use it, but I can absolutely do that: beyond a certain ordinary capacity, I can burst to ACI as required.

So let's have a little bit of fun and try to use this. Remember, I created my own custom image, that bad father one, and pushed it to my container registry in Azure. Over here, if I search for container registries, I can see the registry I created, and if I look at my repositories, I can see my badfather image.
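The same deployment done in the portal here can be sketched with the CLI (resource names, credentials, and sizes are hypothetical):

```shell
# create an Azure Container Instance from the ACR image; the DNS name
# label gives it a public FQDN like <label>.<region>.azurecontainer.io
az container create \
  --resource-group rg-aci-demo \
  --name badfather \
  --image saviletech.azurecr.io/images/badfather \
  --registry-username <acr-username> \
  --registry-password <acr-password> \
  --cpu 1 --memory 1.5 \
  --ports 80 \
  --dns-name-label badfather-demo \
  --location southcentralus

# delete it when done - billing is per second while it runs
az container delete --resource-group rg-aci-demo --name badfather --yes
```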
just got the latest one i really don't have any other so i'm using this is just the the real cheap type of azure container registry again there are different types there's different levels i can purchase based on what i need how much data i can store number of concurrent kind of accesses to it things like if it supports private endpoints i don't need any of that so i have my image in here so what i can do is i can go to container instances now i don't have any i'm showing it from the portal just for kind of fun i can say hey create new if you want a new resource group or just for speed it really doesn't matter what i pick i will just pick myself central give it a name um aci bad i'm really going all in on this i can use built-in images they have like a little hello world they have some engine x which is super powerful iis built a nano server that's what this comic is about my nano man comic super super thin version of windows for certain scenarios um really now just for containers it used to be maybe running services as well they changed it to its built all around containers or i can pull from azure container registry and there's my image or things like docker hub or something else it knows it's linux i can pick the size of what i actually want to run it as i could also for networking i'm going to do public i could also do private and i could integrate with my virtual network in that region and deploy it to kind of a private ip but for fun don't do that i'm just gonna do public and we'll call it bad father here as well and it likes that name and i can say create and it pass validation and go so what this is now doing is it's pulling the image and it's going and deploying it to the azure fabric now there is no set node essentially azure has a whole batch of capacity so it's finding where it has some capacity in south central us and it's going to use this type of container a hyper-v container it's multi-tenant i don't trust the container running next to me so it's 
gonna basically create my own kernel so i'm not sharing that with anyone it is my isolated the same way we do vms in azure and it's isolated using hyper-v technologies it's the same hyper-v technology here spinning up this managed vm super super fast using an os based on the type of the container windows or linux pulling down my image and will execute on top of it so your deployment is complete already you saw how fast that was so even though that was kind of this um hyper-v isolated how quickly did it spin that up it's not like a regular vm it's much much quicker than that so now i can see okay if my deployment is complete i can go to my resource there's my fully qualified domain name that i'll copy and let's just see if that works and there it is so now i'm running me as a bad father my son's going to remember this forever using an azure container instance that's public facing and again i'm not the world's worst father so i will actually go and delete that but you can kind of see how quickly i can spin those things up and that's my image so azure container instances are phenomenal to quickly go and just create something that i need so it's a container as a service i pay for when it's running very very simple and it can still integrate with my virtual network if i pick that private networking so now we get into kind of the the big boy azure kubernetes service because yes one container is great it can do something it can do a process and on its own yeah i could maybe map a folder from a file system yeah i can do some basic uh networking but it's isolated there's nothing really else about it we typically need a more complete set of orchestration around it now there were multiple orchestrators for containers um there's kind of a dcos there's docker swarm and there's kubernetes and honestly a few years ago it wasn't clear which one to use today kubernetes one um initially azure had kind of an azure container service and what it would do is you got to pick the 
orchestrator and it would deploy it for you well they've already got rid of that there's an acs-engine still out there you can use if you really want to create an arm template but it's now azure kubernetes service a managed offering and the reason we need this is great i've got those containers but is that everything do i want just one probably not so i probably need some kind of auto scale i need upgrades a life cycle management and i want that rolling i don't want to delete it all i want some kind of rolling update capability i need to be able to front the services with things like load balancers so load balancing multiple instances i need to put that behind something maybe affinity and more likely anti-affinity the idea of saying okay i want three containers spread over three fault domains i don't want them all put on the same box i want storage yes i'm stateless but actually i need something to persist something durable and yes i have that read-write layer we talked about but that read-write layer goes away when the container goes away yeah i can map folders to the host but what if the host dies i don't like that so i want some kind of storage capability and i want a rich networking set of capabilities as well there's more to it than just that super simple thing so all these requirements how do i get them and it really comes down to this well i need orchestration and when we say orchestration what we really mean today is kubernetes now you might see this written as k8s and the reason it's k8s is well there's one two three four five six seven eight characters between the k and the s so k eight characters and s who knew so kubernetes is the de facto orchestrator that we're actually going to leverage as part of our environment now azure actually provides a free managed kubernetes environment with aks so kubernetes itself has a whole bunch of components it can't just run on thin air so if i think about kubernetes well there's a database where it stores information about everything that's going on that's etcd an open source key-value store then there's an api server this is our interface in and out of kubernetes that's where i make the restful calls and there's a command line tool kubectl to actually go and interact with the api server to make it go and do things then there's a scheduler and the scheduler is actually super powerful because what's going to happen is we're going to request things to get deployed and the scheduler's job is to notice that hey something has been requested but isn't running yet it's pending the scheduler's job is to find where do i run this thing put it somewhere run it make sure it's running and it has a whole set of controls there's things called predicates and priorities and you can really think about these as hard requirements and soft requirements so a hard requirement could be you must have this amount of memory a soft requirement might be some kind of anti-affinity so if a target node doesn't have enough memory we can't run it there that's a hard block essentially for predicates we filter if a node doesn't meet it we can't use it whereas if we want anti-affinity i want to spread them out but if i can't i don't just not deploy it so that's a soft requirement so we have a list of possible nodes we can deploy to we filter it then we sort it and then we deploy to the best node so the scheduler is super important there's also a whole bunch of controllers and these controllers do a whole bunch of different things i can think about node health for example go and check in on those things checking on replication all sorts of different things but all of these things this kubernetes management plane with aks all of those bits is free you do not pay for the kubernetes control plane it's free you don't pay for it all of this stuff is essentially running in an azure paid-for location instead it's an instance dedicated to you it's running a certain version you can say hey i want to upgrade to a newer version so these elements are dedicated to you but you're not paying for them it's just there all goodness which is why yes i could deploy kubernetes myself absolutely i can have virtual machines read a book on it deploy it and manage it but this is fully managed it keeps it healthy for you i can just use it and focus on what i really care about and what i care about are my worker nodes and these can actually auto scale using things like vm scale sets underneath so if that's my kubernetes environment and this bit is all free remember i use kubectl from my machine to interface and again i still don't have any hair i'm a bit conscious i've got a hat on so that's how i interact where the things are running are worker nodes so what we kind of think about is i have worker nodes let's say we have three i can change the number but these are my worker nodes and what i start with is a system pool and the reason is there are actually some containers for certain core functions that get deployed to this pool that kubernetes has to use for interaction with these nodes now the first thing that's part of this is you'll actually see the kubelet i can't spell i really can't spell today third time's a charm so the kubelet is actually what talks to the api server that's how this kubernetes cluster remember the control plane is microsoft's talks to these worker nodes which are yours you pay for these worker nodes they're just vms you pick the size you pay for those remember these have a kernel actually a different color i don't draw that color so once again each is an os this is a vm fundamentally you don't see it as a vm but the containers on each node are sharing its kernel once again they're running that container runtime on all of them and then it just creates your containers container 1 container 2 container 3 container 4 you get the idea containers everywhere there is another component that's built in on each of these nodes something called kube-proxy it's all about networking it helps the node interact with the various types of networking i will have in that environment and as i talked about remember there's aci out here this infinite on-demand scale and the way that works is in my cluster the kubelet is how we talk to nodes well aks actually creates a virtual kubelet that the control plane can talk to and then instantiate things on azure container instances so i can do that as well but that's what azure kubernetes service is there's nothing magical about it i have the management all these orchestrator components and then i deploy to it and again it's the scheduler's job to deploy these things now i said we're running containers that's actually not strictly true kubernetes really doesn't care about containers let me change color that's not good oh my whiteboard crashed all right let's spin it up again that never happens right there we go back again i think i draw too much stuff on the whiteboard all right so let me change my color again there we go what kubernetes actually does is it deploys a pod so i can think about the idea that we have a pod and the pod generally runs one container so a one to one but that's not always true it's actually very common to have something called a sidecar a container that helps that main container do its job in fact pods always have at least one piece that owns the networking of the pod because the pod has its own network address that's a big part so when i actually deploy all of these containers they actually sit inside a pod and it is the pod that i actually deploy so kube-proxy is for the networking side and with kubectl what i actually do is i have a yaml file yet another markup language and in the yaml file i basically have my deployment of what i want the desired state so i don't care about really anything i'm saying this is what i want my desired state to be so it's a desired state configuration i want a deployment of two or three replicas or whatever it is of this image go and make it so and under the covers kubectl talks to the api server this is what we want to be reality and then it goes and creates deployments and replica sets and stateful sets the scheduler is told hey there are these pods that have been created and are pending the scheduler's job is to look at the nodes look at the predicates look at the priorities where can i put them instantiate them somewhere then they go to a running state and then all these controllers keep an eye on it if a node dies they go and recreate them on other boxes that's the job of this so let's have a look at this so if we jump over to visual studio code in here i actually have a basic aks configuration so let's first go and see i actually have a couple of kubernetes services one for container network interface and one for kubenet i'll talk about that in a second but not very much so let's just say if i look at my workloads you see all of these core things that belong to kubernetes if i look at things like my node pools i'll talk about this again i can see well yeah i've got my node pool i have two nodes in
that node pool i could change this so i could actually do a scale notice i could upgrade it to a newer version of kubernetes if i go to the overview my kubernetes is running a certain version i can upgrade the kubernetes cluster now i just upgraded this yesterday so there isn't a newer version for me to actually run i would upgrade the kubernetes service then i could update the node pool so the kubelet running on each node matches my kubernetes service i can see various capabilities i can integrate with but right now you can see i've got my node pools i've got various configuration items there i'm running oh that's a newer one already look at that that's funny so since yesterday there's now another one there's only so many upgrades you can skip at a time so because i was behind i think what's happened is now that i've upgraded my cluster i can upgrade it again so i'm responsible for picking when i want to do that upgrade there are some automations i can use but hey i could at this point say i want to upgrade my cluster i'm not going to because i want to use this i don't want to risk it getting in the way of those things you can see i've got role-based access control that integrates with kubernetes roles i can enable aks-managed azure ad so the pods can actually go and integrate i've got various scale capabilities but that's actually part of the node pool which i just showed you but right now my node pool is a system node pool which means it runs all those standard things that kubernetes needs but i can also run my own applications in there so what i'm going to do is in this environment i've installed kubectl i can run az aks install-cli and it will add kubectl to my box whoops wrong key let's try that again if i run kubectl cluster-info let's make this a bit bigger i can see yeah there's my cluster running all good stuff i can for example get my nodes to check what my nodes are and yep i can see i've got my two nodes running in this cluster this is also how you'd integrate aci azure container instances as it becomes a virtual node i've also connected my azure container registry to this cluster and because i've actually got two clusters what you can do is get the credentials for each cluster so i can actually go and interact with it so what i want to do at this point if i was to look at all of my pods i don't have any in the default namespace all i've got running are system pods so now this is a yaml file for a deployment now i'm not going to go into detail of this there's too much to it but essentially i'm doing a deployment you can see the kind is deployment and i'm doing a specification i'm saying i want two replicas i want to match the azure bad father web label and then i'm using a template and i'm labeling this template as the bad father website so it will get used and the specification is hey i want to pull my bad father image from my container registry i'm specifying the resources it can have and i'm exposing port 80.
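the deployment described above can be sketched roughly like this (the names, registry path, and resource numbers here are assumptions based on the walkthrough, not the exact file from the video):

```yaml
# Hedged sketch of the deployment from the demo; image path and
# names are placeholders for whatever is in your own registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-bad-father-web
spec:
  replicas: 2                       # two instances of the pod
  selector:
    matchLabels:
      app: azure-bad-father-web     # must match the template labels below
  template:
    metadata:
      labels:
        app: azure-bad-father-web
    spec:
      containers:
      - name: badfather
        image: myregistry.azurecr.io/badfather:latest   # assumed ACR path
        resources:
          requests:                 # resources the scheduler reserves
            cpu: 100m
            memory: 128Mi
          limits:                   # hard caps for the container
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80         # the web app listens on port 80
```

a file like this would be applied with kubectl apply -f and the scheduler then places the two pods on whatever nodes pass its predicates.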
now i have two instances of this so the other thing i need is a service and this part is pure kubernetes my service is actually going to be of type loadbalancer so it's going to use an azure load balancer and it's going to be external by default i have annotations to help control things if i wanted it internal i would comment out the top line and uncomment these two to make it integrate inside but as you can see it's basically going to create two pods so now if i jump back over we'll just deploy it so what i'm going to do is apply my desired configuration my desired configuration says hey i want two replicas of my image so here i can select this i'm going to run it and it says okay there was a deployment created and a service created so that was super fast if i now go back over and look at my workloads you can see there's bad father two of two age is 20 seconds old if i look at my services we can see i've now got this service for azure bad father web i can actually see an external ip address and what that's actually doing behind the scenes is an azure load balancer now the networking of kubernetes is actually pretty complex i'm not covering it here beyond a super high level i talked about kube-proxy really there's two models of networking so when i think about my networking it really boils down to the ip addresses used within the pods so there's something called kubenet where the pods get nated ip addresses so there's an ip space within the hosts that is not routable on the overall network a special route table gets created on the network that directs that space and then it does nat so the pods can actually go and talk to things externally then there's cni well with cni the pods just get ips from the subnet directly there is no nating it's not that one is necessarily better than the other for both of these i could create a new vnet or use an existing vnet if i have containers that are constantly being created and deleted created and deleted do i really want those using real ips on the network constantly maybe a bit messy so maybe for me the nating might be the right option i have a separate video where i talk for i think nearly an hour just on kubernetes networking and i show you how it integrates with the azure load balancers how i use azure app gateway how to use nginx i talk about all of that stuff so if you want to know the internals of the networking you should go and check that out but essentially this has gone ahead and integrated with the azure load balancer and is now there so again if we go over to this i can see well there's the external ip if i select that there we go so now it's me as a bad father running over two different pods with a container in each pod running my terrible father website so i really am stressing how bad a father i am and it's just available if i now look at my pods over here i can see i've got two pods running i can see the ip of the actual pod because i'm using cni that's an ip address actually from my underlying virtual network it's not nated i could also go and look at the services which tie to my azure load balancer here i can see the load balancer that i'm using the external ip i could get details i could see the actual endpoints i can really see all of the different pieces of information and then when i'm done well i could change that deployment file maybe change it to three replicas it's declarative it's just going to make that state so in this case i'll remove yet more evidence of me being that terrible father from the internet so that's the idea behind this thing it's this declarative state that drives what we're actually doing now i guess i should mention storage i talked about the networking what about if i want persistent storage how does that persistent storage work so if you think about
well you've got these pods so i can think okay great i've got this pod and the container but i want some real storage now in azure there's a whole bunch of different types of storage there's things like azure disks so managed disks there's azure files so smb-based access there's azure netapp files so different types of durable storage actually available in azure so what we can have is a persistent volume a persistent volume is created using a provider and i have those types available to me and then we have a particular object called a persistent volume claim which maps to a particular persistent volume so that's how i can now say hey i want some durable storage made available to me i don't want to map it to the host the host could go away so if i actually want persistent storage of something well i have these persistent volume claims which map to a persistent volume that's using one of the various storage providers pods on the same worker node could actually go and share storage but that's the idea if i want that shared persistent durable storage available to me i can use one of these azure types that aks actually has storage providers for and then i can create persistent volumes with a specification of what i want my size and performance and a persistent volume claim maps to those so i can actually use them so that's how i can get durable storage in my environment other cool stuff i can have multiple node pools so i showed you i had a node pool and i said it was actually the system node pool so the system node pool has a whole bunch of kubernetes things and i showed you those in the portal remember we saw the workloads it was a whole bunch of stuff and if you actually look at the pods there were a whole bunch of pods these are things that help it run but what we can actually have is well i could add additional node pools so i can pick hey is it linux is it windows and this would drive the types of containers i could run remember this is not using hyper-v isolation the image used by the container has to match the underlying container host that worker node so i can pick hey okay i want these things what's my max scaling how many nodes do i want what size should the nodes be linux or windows and these would now just be user pools and the benefit of the user pools is they're not running any system components so technically i could actually scale those down to zero and while i'm thinking about it i'm going to upgrade my cluster to the latest version and save so in the background it will go through and actually upgrade my cluster so i can have multiple node pools i can also use aci as we talked about and i can also use spot instances so we think about okay i drew that picture so all of these things here are in that system node pool so all of the nodes are the same spec in there so there's pods running things that kubernetes needs and i can deploy my apps to it as well which is what you saw me do i can also use azure container instances this is phenomenal for bursting i have a typical normal workload i'm going to run on my worker nodes and i can't only do aci if you think about it i have to run the kubernetes things somewhere today i have to have at least one node in my system node pool i can burst to aci but maybe i need a different spec a different type linux versus windows maybe i'm running some containers that want to use more advanced physical gpu capabilities so i can actually create other node pools i can have a bunch of other workers of a different spec call it spec two and that's user pool one i could have user pool two and again they would have the same kubelet they would talk to the api server and now i would be able to select as part of my deployment remember i have the taints and the tolerations i could say oh i actually want to go and deploy that to user pool one i could control that as part of my deployment i could if i want say actually i want these to be spot i can't mix it within a pool they're spot or they're not spot remember spot instances are the much cheaper resources but they might be taken away from me it's spare capacity and i talked about that in the vms part so i'm not going to talk about that here but maybe i have certain workloads that i want to run just as cheap as possible and if they go away for a period of time i don't care or for my deployment i would say well if they go away i'll go and run them somewhere else i can use spot to host those instances i have that capability i can also and this is in preview as i'm recording this actually stop and start my cluster so remember what i said if it's a user pool we can change the scale at any time i can go into the portal and change my scale sorry not in there it's in node pools i could pick my node pool you can see my node count is two i could select that and change it down to one there's also auto scale capability using vm scale sets underneath but i can't scale it below one i can't scale it to zero now if it was a user pool i could actually scale it to zero that's allowed but i have to always have at least one node in the system pool now what if this is a dev test environment and i just want to shut the thing down and stop paying for those worker nodes so what they've introduced is the ability to actually stop it now again this is preview today but with a very simple command you can actually stop your cluster and i have so i've got two clusters here the portal doesn't really understand this today so i didn't think you'd see anything different but this kubenet one oh actually you can see it has a node count of zero so you can't ordinarily set a node count of zero but i have stopped this cluster so essentially it's down
it's not running if we try to look at the workloads the portal actually errors it doesn't understand so i've actually shut this down at this point so we have that capability now to actually stop it so i can completely stop paying for everything remember we never pay for the management anyway it also has auto healing this is interesting remember i don't care about the control plane that's azure's responsibility but my worker nodes i do care about maybe a node is sick so what would fix any problem on a node have you tried turning it off and on again correct if it detects a node is sick it will turn it off and on again if it still detects it's sick it will try to re-image it and if it still detects it's sick after that it will delete it and spin up a completely new node so there's a certain amount of auto healing it will actually do there's these constant health checks and if the node reports not ready on consecutive checks i think it's for 10 minutes it will first turn it off and on again then re-image it and then hey we'll just create a new one completely and then we have managed identity aks always uses a managed identity so it can go and interface with things like the azure load balancer anyway but there are also open source projects and integrations to actually let the pods use a managed identity so they can go and talk to other azure resources in a controlled manner so aks is i mean honestly it's phenomenal containers are going everywhere if you look at things like azure arc azure arc talks about bringing data services to any cloud or your on premises arc is three things one is management of vms with inventory and tagging part two is it will manage your kubernetes cluster and it can even deploy aks to your environment and three those data services it delivers well it does that by doing gitops and what it will actually do is it sets configurations to say hey worker nodes i want you to look at this container registry for your images and this repository for your config as things get pushed there pull them down and start running them so the way it deploys data services through azure arc is it actually uses kubernetes to do gitops it's not devops it's gitops so it points the cluster to a container registry and a git repository for configuration so they work together so there's a git repository like azure devops or github and then it uses the registry for the images and it basically tells the worker nodes hey your configuration well yep there's a git repository up here with your config in it you go and check that for your config so that's gitops and then it will just deploy those deployments those yaml files i configure and pull down the images from the container registries that's how it can deploy data services containers are everywhere pretty much everything behind the scenes now is containers containers containers you may wonder why i would ever have more than one aks cluster i've got multiple node pools well remember the cluster runs a certain version as you just saw me upgrade it so i may want at least a dev and a prod so i can test upgrading the cluster then i can go and upgrade the nodes separately and if the control plane crashed it wouldn't bring down the nodes the workloads would carry on running and it should heal but just think about resiliency this is running a version and note you are responsible for saying hey go and upgrade it so if i go back to my cni cluster and go to my configuration i'm now current so the next thing i would do is go to my node pool look at my nodes oh there i am i'm running 1.17.11 hey i could now upgrade to 1.18.8 just so you can see that so now i can do my upgrade apply and it will roll that out to all the various nodes okay so containers super good now app service plans this was the original platform as a service in azure this is where it all began but it's
really all about web apps so saying talking over http could be obviously https there's also things like api apps and mobile apps but they're really just some additional things mobile apps for example around pushing and some replication of things but it's really just a web app there's a wide range of runtimes supported in these app service plans so in the app service plan i create an app service and i can pick well what is the time i actually want to build that off of so if i go to my portal and if we jump over let's go and look at our app services so go home i can look at my app services if i just create a new one what we can see here notice i can run a docker container i can actually run the containers and i'll talk more about that in a second but these are all the stacks it supports so i can run.net core asp.net java node php python ruby and obviously i can write all those things on those runtime stacks in many many different languages so a huge set of capabilities and runtimes actually supported within this i can have both windows and linux are running on that container or not containerized you might use a container if you think about it a web app it's platform as a service there's a lot of flexibility in those runtimes so maybe i have a dependency that is just not supported by app services so what i can do with a container is i can really do anything i want within that container it's my container image so it gives me more flexibility even the container still has to be http based though so that's kind of the constraint around it but i just get more flexibility so if i i can't get what i want maybe i can't access a certain type of thing there are restrictions without services i can't get to what i want to get to there's some dependency i need well if it's http based i could run it in a container so the idea here is i have a certain number of nodes and then i can auto scale if i have a standard or above plan so there's different types of plans so i'm still paying 
For the nodes, I don't see the OS. This is PaaS: I get the app and my data, and I don't see the runtime, middleware, or OS. I'm not responsible for any of that; it's managed for me. Now, if we look at the number of nodes, there are different specs available, and the pricing works around different types of environment. There's a Free tier, so I can just play around with this: I get a certain number of CPU minutes a day (60) and a certain amount of RAM and storage. Then there's Shared, with a bit more CPU a day and more storage. These are free, but from what I found I couldn't do a git push. One of the nice things about the Basic and above plans is that to deploy my code I can do a git push; I don't have to FTP. I can just have my repo, push it, and it does the npm start and everything it needs.

Then there are the Basic plans; notice I'm basically picking the size of the VM now. Basic doesn't have auto scale, and it doesn't have traffic management, meaning I can't integrate with things like Azure Traffic Manager to have multiple instances across different geographies and balance between them. But I can see that the bigger the instance, the more I pay. Then there are Standard plans; this is for production. This is where I get load balancing support, which I don't have in the others, and auto scale based on my traffic needs, and I can also have Linux runtimes with web app containers. Then there are Premium plans with higher ratios of compute and memory, Premium v3 with Windows- and Linux-based containers and higher scale, and Isolated, where there isn't anyone else on my hardware: these run in a dedicated environment in the Azure datacenter, if I need that kind of isolation. So I have all these capabilities, and I pick the sizing based on what I need.

Essentially I can have multiple applications deployed to the same plan; the plan is where I specify the resources. So the first thing I do is create an App Service plan. My plan might say: I want three instances of a certain size, and depending on the plan I could have auto scale and things like that. Then into that plan I deploy App Services. I deploy App Service 1 and it runs across those nodes; I deploy App Service 2 and it runs across the same nodes. The decision on whether or not to deploy to the same plan comes down to how much resource I want to guarantee each app gets: could another app be a noisy neighbor and get in the way? That drives a lot of those decisions. And if I scale out, all of the apps in that plan scale out. It's not like I can add a fourth node just for App Service 1; it doesn't work that way. I scale the plan, and everything in the plan scales with it.

We can see this. When I was creating a new app, you saw it would pick a plan, but I already have a plan, this demo plan over here, and I can scale out to add additional instances. I only have manual scale because I'm running a cheap plan. I can also scale up, which is actually pretty interesting, because that's not normally something we can do. What App Service actually does is this: the instances are a certain size, and if I say scale up, it creates new instances of the new size, spins up my apps on them, and once everything's running it deletes the old ones. So it doesn't scale them up in place; it creates new ones in parallel, gets everything onto them, and then deletes the old ones. And obviously I can scale out and in, adding and removing instances. So I can scale up and scale out to a certain configuration, I can see the sizes, and it tells me the features of each tier.
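The plan behavior described here, where apps share the plan's instances, you scale the plan rather than an app, and "scale up" replaces instances rather than resizing them, can be sketched as a toy model. This is plain JavaScript for illustration only, not an Azure SDK or API:

```javascript
// Toy model of an App Service plan (illustration only, not an Azure SDK).
// All apps in a plan share the plan's instances; you scale the plan, not an app.
class AppServicePlan {
  constructor(size, instanceCount) {
    this.size = size; // e.g. "B1"
    this.instances = Array.from({ length: instanceCount }, (_, i) => `${size}-${i}`);
    this.apps = [];
  }
  deploy(appName) { this.apps.push(appName); }
  // Scale out: every app in the plan now runs on the extra instance too.
  scaleOut() { this.instances.push(`${this.size}-${this.instances.length}`); }
  // "Scale up": create new-size instances in parallel and drop the old ones,
  // mirroring how App Service replaces nodes rather than resizing in place.
  scaleUp(newSize) {
    const count = this.instances.length;
    this.size = newSize;
    this.instances = Array.from({ length: count }, (_, i) => `${newSize}-${i}`);
  }
  instancesFor(appName) {
    return this.apps.includes(appName) ? this.instances : [];
  }
}

const plan = new AppServicePlan('B1', 3);
plan.deploy('app-service-1');
plan.deploy('app-service-2');
plan.scaleOut(); // both apps now span 4 instances; no per-app scaling
console.log(plan.instancesFor('app-service-1').length); // 4
console.log(plan.instancesFor('app-service-2').length); // 4
plan.scaleUp('S1'); // same count, new size
```

The point the model makes: there is no way to give App Service 1 a fourth node without also giving one to App Service 2, because scale is a property of the plan.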
So I'm on this Basic tier: I can have a custom domain, I get SSL, and I can do manual scale. Then there are the production tiers, and again it shows you the features: auto scale, staging slots (I'll talk about those in a second), and daily backup, so it will actually back up the app and the configuration mapped to the app. A lot of the time you might say: why do I need to back this thing up? I'm pulling my source code down from a repo, I've got a continuous integration / continuous deployment pipeline, I don't care about a backup. But there's other configuration that goes into this, so having that backup is useful. Notice it also includes things like Traffic Manager, so I can have multiple instances and balance between them at a geography level.

Then into the App Service plan I deploy apps, and I have an app deployed: this is a Node.js application (bad father, I did that already). For this one I had to FTP it up, since this is a cheapo plan, so I FTP'd it up and then ran npm to start it. But even on the cheapo plan there are various tools: I can use the Kudu console, I can do SSH. What I now have is this Node.js application, and it's running on PaaS. You saw me being a bad father on premises with a local Dockerfile, then I used Azure Container Instances, then I used AKS with multiple instances of it across different workloads, and now I'm running it on a true PaaS. Once again I can click through, and, since I'm really not a developer anymore, my good friend timoronki actually created this for me: a little Node.js application where I can cycle through, seeing the sheer terror in my child's face whenever I want. And you know what, this one I'll probably leave up for you, so if you really want to go and experience that sheer joy, you can. A terrible power, but we have that capability. So we run App Services within our App Service plan.
Some of the special features: scale up and scale out I've covered already. We also have deployment slots; the better plans include them. In the picture I just drew, with App Service 1 and App Service 2, I can create for App Service 1 a production slot and an App Service 1 staging slot. It uses the same set of resources: it's not a different set of nodes, my deployment slots still share the same underlying resources, and it's important to think about that. But now, while production is running v1 of my code, I can deploy v2 to staging, test it, and warm it up so it's ready. Each slot essentially has its own DNS name, and when I'm ready I swap them: staging becomes prod, prod becomes staging, just flipped over. If something's gone wrong I can flip them back; rolling back is super easy. Depending on the plan I can have multiple deployment slots, which enables warming up code, rolling back, and controlled rollouts: I could send a certain percentage of traffic, say 10%, to staging, or do a blue-green deployment and just switch them over.

We can also use service endpoints. I'll cover those in the networking module, but essentially there's a Microsoft.Web service endpoint, so on the App Service I can restrict access to just certain subnets. It can also fully integrate with virtual networks: I can use a private endpoint, an IP address in the virtual network, to talk to it; I can use the Hybrid Connection Manager; there's a VPN option; or there's regional VNet integration, which uses subnet delegation (the only downside today is that it can't go cross-region). With that, the app has a subnet actually in your network and uses it to talk to things in your network. And again, I have a full, massive video going into all that detail; watch that if you want to understand it.

Then there's this thing called an App Service Environment. With an App Service plan, the worker nodes are yours: in any plan outside of Free and Shared, no one else is on them. But there are other bits that make an App Service plan work: front-end services, file share services, load balancers, IP addresses, and that stuff is shared between multiple tenants. It's one of the reasons some of the network integration is difficult for App Service plans: those pieces are multi-tenant. With an App Service Environment, nothing is shared; all the bits that are normally shared are dedicated to you, and it deploys into your subnet, into your virtual network. That makes inbound and outbound communication much simpler, but since I'm not sharing anything, it costs more money. It also supports bigger scale, so there are benefits there as well.

Okay, we're on the home stretch: Azure Functions. I love these. This is serverless compute, ordinarily. When I create a function I pick the plan. With a Consumption plan I'm not paying for any underlying resource: something spins up when needed, does my computation, and goes away, and I just pay for the cycles it uses. If I have an App Service plan, the function can run in that plan instead. There's also a Premium option where it starts faster, because ordinarily nothing is running until a trigger fires. Serverless things have to be triggered by something: maybe some kind of event or a schedule.
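The slot swap described above is essentially a pointer exchange between two hostnames, which is why swapping and rolling back are near-instant. A toy sketch of the idea (plain JavaScript, not the Azure API):

```javascript
// Toy illustration of deployment-slot swapping (not an Azure SDK call).
// A swap repoints which deployment each slot serves; nothing is redeployed,
// which is why the swap (and the rollback) is effectively instant.
const app = { production: 'v1', staging: 'v2' };

function swapSlots(app) {
  [app.production, app.staging] = [app.staging, app.production];
}

swapSlots(app); // v2 goes live; v1 is parked in staging
console.log(app.production); // "v2"
swapSlots(app); // something went wrong? swap again: instant rollback
console.log(app.production); // "v1"
```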
Or a manual trigger, or Event Grid, and Event Grid is phenomenal for serverless. This is a little off topic, but think about it this way: there are event sources, and that could be someone creating a blob, someone creating a subscription, someone creating a resource group; a huge number of things can be sources of events. Then we have the idea of event handlers. An event handler could be an Azure Function, a Logic App (which we'll talk about), Azure Automation, or any kind of HTTPS/REST call. Ordinarily, if I write one of these things, I have to go and poll: have you done something? They call it a hammer poll: have you got anything for me, have you got anything for me, have you got anything for me? What Event Grid does, sitting in the middle, is the job of connecting to all those sources, and then it pushes out to whatever handler is registered for those event sources. It can also do filtering, so I can say I only want to know about this type of event, and it does things like retry. So yes, I could write something that triggers directly when a blob gets created, but with Event Grid I get retry capabilities, I can add filters, and I can have multiple handlers driven off the same event; it just gives me more capability. There are so many types of sources, I can't even remember them all; they're trying to make nearly everything able to trigger. One of the things just announced is that Key Vault can now trigger via Event Grid, so I can find out if a secret has changed. For serverless technologies this is phenomenal, because they can all act as event handlers to consume what's going on.
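The push-with-filtering-and-retry model that Event Grid provides can be sketched in a few lines. This is a toy dispatcher, not the Event Grid service or SDK, and the event type strings are only illustrative (Azure's real types look like `Microsoft.Storage.BlobCreated`):

```javascript
// Toy sketch of the push model Event Grid provides (illustration only):
// handlers subscribe with a type filter, the grid pushes matching events and
// retries failed deliveries, so handlers never have to "hammer poll" a source.
const subscriptions = [];

function subscribe(eventTypeFilter, handler) {
  subscriptions.push({ eventTypeFilter, handler });
}

function publish(event, maxRetries = 3) {
  for (const sub of subscriptions) {
    if (sub.eventTypeFilter !== event.type) continue; // filtering
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try { sub.handler(event); break; }               // push delivery
      catch (err) {                                    // retry, then give up
        if (attempt === maxRetries) console.error('delivery failed', event.type);
      }
    }
  }
}

const seen = [];
subscribe('Microsoft.Storage.BlobCreated', e => seen.push(e.subject));

publish({ type: 'Microsoft.Storage.BlobCreated', subject: '/container/photo.jpg' });
publish({ type: 'Some.Other.Event', subject: '/ignored' }); // no subscriber: filtered out
console.log(seen); // ["/container/photo.jpg"]
```

The real service adds durable retry with backoff, dead-lettering, and fan-out to many handler types, but the shape is the same: register once, get pushed to, never poll.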
So a function is event-driven: again, some HTTP or HTTPS event, a schedule, Event Grid, blob creation; I have all these options to leverage. In addition to the trigger, I can bind: I can bind to inputs, other things I want to go and get, and I can bind to outputs, and many types of services can be an input and an output. So I could trigger off Event Grid, bind to blob as an input because I'm pulling an image, and then push a result to a Cosmos DB database through an output binding.

There's a huge range of runtimes supported, and I can even run containers in here, which is kind of insane. If we look at the runtimes and languages, I can use C#, JavaScript, F#, Java, Python, TypeScript, and the one I love and use all the time, PowerShell; that's just because I'm an infrastructure geek. I now use this instead of things like Azure Automation: I fire off PowerShell I need to run on a schedule, using a cron-format timer, and I also have a RESTful interface I can call, so I have a whole bunch of functions I just trigger that way. There are templates built in to really help, and I get a bunch of executions for free each month, so often it doesn't actually cost me anything.

If we quickly look, let's create a new resource and search for "function app create." We can see the runtime stack with those different options, and it asks whether I want to create a Docker container, so I can base it off a container as well, and deploy it to a certain location. Right now I have a bunch of different function apps already created. This function has a managed identity, like everything else, so I can integrate with things like Azure Storage, and it automatically signs in as that managed identity if you turn it on; I don't have to do anything. This one is PowerShell: I just wrote code to integrate with storage and queues, and I get triggered via a RESTful call. Look at my integration: I have an HTTP trigger, then I run my function, and I have other inputs and various outputs. For mine it's just HTTP, but you could absolutely have others that run on a timer. When you create these, you go in and pick all of that: if I hit Add, it asks where to deploy, and there are a bunch of templates built in based on the language, HTTP triggers for F# and C#, timer triggers, queue triggers, a whole bunch of samples to help me get started. So it's super easy to get going, and I write in the language I'm used to, C#, JavaScript, whatever I want. Again, it can be serverless, where I just pay for what runs; there's a Premium version that's pre-warmed and costs more, where I also get things like private endpoints; or, if I have an App Service plan already, I can run it on top of that.

Finally, we have Logic Apps. This is graphical orchestration: I think about my business logic, there's some trigger, it does something, and there's some kind of output. There's a serverless model where I pay only when it's running, so let's quickly look at the pricing options. With Logic Apps I pay for the actions it performs, and then it has connectors to things, both Standard and Enterprise connectors; every time it uses an Enterprise connector, it costs a bit more than a Standard connector.
There's also the idea of a fully isolated and dedicated environment, the Integration Service Environment; at that point I'm paying by the hour, because it has to be spun up and sits there waiting for me, so whether it's doing something or not, I'm paying for it to be ready. Ordinarily, though, I just pay for the actions performed and for what it connects to.

A Logic App is initiated by a trigger, some event, and again it could be Event Grid, an HTTPS call, or a connector: say, someone tweets. Someone tweets, the tweet could go into sentiment analysis, and then maybe I write something out: good tweet or bad tweet. There are many connectors and templates to help me get started. If I quickly jump over, I created a blank one, so there's nothing in it yet, and it jumps to the designer. Notice it shows me the triggers I can start with; these are the common ones, and there are others: a message lands on a Service Bus queue, an HTTP request arrives, someone tweets, an Event Grid event fires, a file is added via FTP or to OneDrive, a recurrence. I don't have to use any of those; I could start blank and add my own trigger. It also has complete flows to give you ideas of what you might want to do: delete old blobs on a timer; HTTP request/response, where I take an HTTP in and send an HTTP out; someone creates a file in Dropbox, copy it to OneDrive; send an email when a SharePoint list is modified; email yourself when you see tweets about a certain keyword via Outlook; post to Slack; send an SMS; all these different things. You can pick any of these integrations, or just create your own completely from scratch and do whatever you want. But it's all in this graphical view.
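Under the designer, a Logic App is stored as a JSON workflow definition with `triggers` and `actions` sections. Here is the rough shape expressed as a JavaScript object; it's a trimmed sketch with illustrative values, not a complete deployable definition:

```javascript
// Rough shape of a Logic App workflow definition (illustrative, not complete).
// The graphical designer reads and writes this JSON: triggers start a run,
// actions are the steps that follow.
const workflowDefinition = {
  $schema: 'https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#',
  triggers: {
    // A recurrence trigger, like the "delete old blobs" template uses.
    Recurrence: {
      type: 'Recurrence',
      recurrence: { frequency: 'Day', interval: 1 },
    },
  },
  actions: {
    // A hypothetical HTTP action; real templates chain connector actions here.
    Call_cleanup_endpoint: {
      type: 'Http',
      inputs: { method: 'GET', uri: 'https://example.com/cleanup' }, // made-up URI
      runAfter: {}, // empty means it runs right after the trigger
    },
  },
  outputs: {},
};

console.log(Object.keys(workflowDefinition.triggers)); // ["Recurrence"]
```

Dragging connectors around in the designer is really editing this document; `runAfter` is how the designer encodes the arrows between steps.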
I'm not coding anything. If we pick something simple, for example this template, you can see it shows me the connectors it's going to use; I'd have to go and add those connectors, but it's all designed in this view. I'm really just dragging and dropping the components I want and adding the various connectors. That's the point of Logic Apps: there's some trigger, and then it runs the logic I define in the designer.

Those are the app services I'm covering. There are others, things like service mesh, and tons more out there; I didn't talk about things like IoT Hub, and that world uses containers underneath too. I think for most people, containers and the app services are going to be the key ones, but certainly there are others; there are just only so many minutes. So we covered a huge amount; as always, it's probably two hours again, and I apologize, but I hope this was useful. If there are questions, please go and ask in the comments; I keep an eye on those, so if I see one I'll answer it. I really do hope this was useful. Please give it a like and subscribe; a huge amount of work goes into creating these. Until next time, take care.
Info
Channel: John Savill's Technical Training
Views: 43,641
Keywords: azure, azure cloud, azure paas, azure app service, app service, serverless, containers, aci, aks, kubernetes, begineers, learning, azure functions, logic apps, docker
Id: _E73_SQN8ZU
Length: 108min 34sec (6514 seconds)
Published: Tue Oct 27 2020