Azure Master Class v2 - Module 8 - App Services

Captions
Hi everyone, welcome to part eight of our v2 Azure Master Class. In this part I want to explore application services. Given the direction of the industry, a huge part of these services is all around containers, so I want to start with what containers are, then look at Azure Container Instances, Azure Kubernetes Service, Azure Spring Cloud, Azure Container Apps, and then the original PaaS App Service. We're going to explore all of those different things.

Now, to start with, it's important to understand the idea of pets. We used to say pets and cattle, but I know that's offensive to some people, so we'll say tin soldiers. The idea is that as we move from virtual machines to application services, this distinction between pets and tin soldiers becomes more and more important. So think of a pet; I'll draw one really quick. Some pet, we'll say it's Garfield. Garfield, very happy. The idea of a pet is that it's normally some named instance with some special, unique configuration, which means I have to care for it: I have to patch it, I have to heal it if there's a problem, and it typically has state inside that resource. The resource we really think about here is a virtual machine: a domain controller, my SQL Server; it has state.

Ideally, what we want to get to is no state. Remember, we're talking about no state in terms of the resource itself; there may absolutely be state in the architecture, but that state lives on some back-end database tier, for example. Here we think of the idea of a tin soldier, and tin soldiers are not unique; they're just cloned. I have some base template, some image, and I clone it. I can replace one very simply: if something happens to it, I just delete it and recreate it. I don't try to heal it, I don't try to troubleshoot it. There's nothing unique or special about my tin soldiers; if one is broken, I just destroy it. This starts with virtual machine scale sets, but then things like containers and the other services built on top of this idea, Azure Virtual Desktop for example. Where possible, we want this idea that there's nothing unique about an instance; it's just cloned from some image.

This also helps us shift responsibility. If I'm in the pet world and it's a VM, remember what I'm responsible for: the OS, the runtime and keeping it updated, the .NET, the J2EE, whatever it is, I have to take care of all of that. In the tin soldier world, I don't care. My focus is my app; that's really all I have to focus on. Those other things are just managed for me. So as much as possible, I want to get to this world of tin soldiers, and this is the focus of the app services, the platform as a service: VMs are infrastructure as a service, but in the PaaS world I'm not responsible for those components; that's just done for me. That's what we're going to focus on.

Now, the starting point for this is containers. If I think about a virtual machine, what are we virtualizing? We're virtualizing the hardware, and I'll draw this out in a second. Whereas if I think about a container...
...what it does is virtualize the operating system. A VM virtualizes the hardware; a container virtualizes the operating system. There are many components of the operating system that make this work, and we'll go through some of them, but it's all about still giving me the different types of isolation I need. Now, instead of that isolation coming from an entire operating system, I get it from constructs within the operating system; I don't have to create separate virtual machines.

Think historically. I had the hardware: the CPU, the memory, the storage, the network connectivity. On top of that I would put some kind of hypervisor; obviously in Azure that's Hyper-V, but it could be ESXi on premises, or others. Then I create virtual machines on top of that hypervisor. Each of those virtual machines gets a certain amount of virtualized resource from the hardware: a certain amount of virtual CPU, memory, and storage, and some network connectivity. Inside each virtual machine I have to stand up an operating system, install whatever runtimes or libraries are required, and then I can put my application onto it. Every VM has its own operating system, its own runtimes (maybe different ones), and then its app.

The challenge with this approach is that all of this is essentially a tax. It's costing me CPU cycles, it's costing me memory, it's costing me disk space, because every one of those VMs is running its own instance of an operating system. Even a very simple little file server is running a complete OS: the bits on disk for the OS, the CPU to process core functions of the OS, a certain amount of memory. What this does give me, though, and it's very attractive, is isolation. Every application is in its own complete set of virtualized hardware. There's no shared process space, no shared network stack, no shared resources. Because it's a whole operating system, I won't use the term namespace here; I'll just say it has complete virtual hardware and a completely separate OS, including the kernel and everything inside it. So we have really powerful isolation, which is fantastic, but also this very expensive tax, which is not attractive.

Now take the same idea with containers. Obviously there's still hardware. There may or may not be a hypervisor, but it's very common that you would still run this on some kind of hypervisor. We still have some virtual machine, and in that virtual machine there's still an operating system. But now what we have is some kind of container runtime, and often that's containerd, as we're going to see. This container runtime is responsible for creating the various constructs in the operating system that give us the isolation. So now the isolation comes in the form of various types of namespaces: process namespaces, network namespaces.
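To make those kernel constructs concrete, here's a minimal sketch you can try on a Linux box, assuming the standard unshare and ps utilities are available. A new PID namespace makes a process believe it's PID 1, which is exactly the kind of building block a container runtime assembles for you:

```
# Create a new PID namespace; inside it, our shell sees itself as PID 1.
# --fork gives the namespace its own init-like first process, and
# --mount-proc remounts /proc so 'ps' reflects only the new namespace.
sudo unshare --pid --fork --mount-proc bash -c 'echo "I am PID $$"; ps aux'
```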
I'm also going to have virtual file systems. And now, inside here, instead of creating virtual machines, I'm going to create, well, you guessed it, containers. A key point here is that each container runs from some container image, and they can each be running different images.

Now, the image is made up of layers, and this is the key point. The container image is what the actual app running inside it, app one, app two, sees. But it is running on a shared kernel; it is not a complete copy of the operating system. The containers share the kernel-mode space, whereas a virtual machine is a completely isolated operating system: its own kernel, its own user-mode space where the applications run. That's why I could have a Linux VM and a Windows VM on the same physical box: they've each got their own complete copy of the operating system. Here it's a shared kernel, and that's what gives us a lot of the flexibility. There's no tax anymore of an entire operating system per instance; it's a shared kernel underneath. What each container gets is these partitions, these isolated namespaces, so processes can't see each other between the sandboxes that are the containers, and they don't see each other's file systems thanks to that virtual file system layer. And because a container is actually running on a shared, already-running OS, it starts up sub-second. Think about creating a new virtual machine: that might take minutes. A container is sub-second, because it's just spinning up some processes.

Now, this container image is actually made up of layers; many, many layers make up an image. The base layer has to be compatible with whatever that shared kernel is. If the host is a certain Linux distribution, my image has to be based on Linux as well; I couldn't run a Windows image on a Linux container host, because it's still running on that underlying kernel. Now, there's fantastic compatibility, though you have to test exactly how far you can stretch it. I could run images of a different Linux distribution from that of the host: maybe the host is Ubuntu and the image is Rocky Linux or CentOS, for example. There is some flexibility between them, but you'd have to test exactly where that might break.

So a container image is made up of layers, and this is what gives us a lot of the flexibility. The base layer is probably the OS, then maybe some runtime or library is injected as another layer, then maybe some service is added on top of that, something like Tomcat or Apache. These are all read-only layers; they're immutable. Then, when I create my container, a thin read-write layer is added on top. My container can still change things, but if I modify a file that exists in one of the image layers, it doesn't change that layer. The read-only layers are the actual image, and the read-write layer is part of my running container. If I modify a file that exists in a lower layer, what actually happens is it writes to...
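You can see those layers, and the copy-on-write behavior, with stock Docker commands; a quick sketch, using the public httpd image as an example:

```
# List the read-only layers that make up an image.
docker history httpd

# Start a container, change a file inside it, then inspect the
# container's private read-write layer: only the delta shows up.
docker run -d --name demo httpd
docker exec demo sh -c 'echo hi > /usr/local/apache2/htdocs/index.html'
docker diff demo   # shows what this container changed, not the image
```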
...a file in the read-write layer that supersedes the files in the lower layers. The app looks down the stack, so if a file is replaced in an upper layer, it sees that higher file. And the benefit I get here is the host doesn't have to store those layers multiple times. I might have lots of images that use the same set of base layers; it doesn't store each of those every time. There's a nice shared cache on the host, so it only has to download one copy of those layers.

Now, thinking about that compatibility for a second, I do want to stress: it's not only the operating system, it's also the architecture, because again, it's running on that kernel. For example, I couldn't mix x86 with ARM. I have to be the same architecture and the same high-level type of operating system, Windows versus Linux. And absolutely, I can do containers on Windows. Although we think a lot about containers on Linux, and in a lot of ways Linux is ahead of Windows in the container space, Windows does have containers as well. But very commonly you'll see containers running on Linux.

Carrying on with this for just a second: typically a container runs a primary process and has some lifecycle. A container will spin up to go and do a job and then cease to exist. We don't do that with virtual machines, because they take too long to start, but a container is sub-second. Hey, I've got some job I want to run: go and look up stock prices for these 20 stocks. Spin up a container to evaluate each stock; it gets the results, sends them back, and then it's gone. I can have those very short-lifetime containers because they spin up and spin down so quickly. And I create each one from that immutable image. Under the covers it's using functionality like cgroups to meter and limit the resources, namespaces for network interfaces, sockets, route tables, and process IDs, and mounts for the file system objects. So there are all these different capabilities and ways to use them.

But let's jump over for a second (let me just reconnect; it timed out) and start with Docker. Docker is what we very commonly think about with containers; they led the way in a lot of ways to standardize what we do with containers. Now, Docker is actually many, many things, and it can get confusing, because Docker used to be the runtime, Docker used to be the management infrastructure, Docker used to be how we created images. A lot of that functionality has been split apart. The runtime, for example, was split off from the main Docker project, so now it's called containerd, and containerd is a high-level container runtime that gets instructions from some API. The actual low-level work of creating those OS components that give me the isolation is runc. So Docker can be the management, calling an API in containerd, which packages up the image and the components needed and hands them to runc, which actually goes and creates those objects. So there are these different parts to it, and I'll draw this out in a second. But Docker is still what we think of, and the term we'll use, when we think of containers. And certainly I can install it; I've got Docker Desktop here.
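Since Docker is handy, that do-one-job-and-exit pattern is a one-liner; a small sketch, where the image and the command are just placeholders for whatever job you'd actually run:

```
# --rm deletes the container the moment its primary process exits,
# so the container only lives for the duration of the job.
docker run --rm python:3.12-slim python -c "print('evaluated one stock')"
```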
I can play around with all these things to show you a lot of these constructs. What I want to quickly show is the idea of image repositories. I'm not going to do all of these different things, but there are image repositories: there's Docker Hub, there's Azure Container Registry, and I could search, for example, for a certain image. What we see here is the actual official image from the Apache project, and I'm going to do a docker pull. What the docker pull does is download the image. Now remember I said images were made of layers? Notice what it did: some of those layers already existed over here, so as it pulled the layers that make up the image, I had some of them cached already. It only pulled down the layers I was missing.

Now I could list my images, so I've got my httpd image. At this point I could run it and get a process inside it. This next command, the -it, means interactive: it says create me a new container, docker run, use the image httpd, and fire up a process called bash. Bash is your entry point into that container. So if I run this, I'm now inside that container. I'm not in my external container host anymore; I'm actually running inside it. If I exit, then I've left it. And from this point I can do various other things, so let's have a quick look. If I do a docker ps -a, I can see it's still there. Although I exited it, the actual container still exists; it's just not running anymore. It exited when I exited, which is the whole point: it's doing a certain job, its job was to run bash, and when I exited bash, its lifecycle was finished. It's like, I've done my job, I'll exit; it's no longer running. So we control which process that container actually runs.

But we can build our own container. I could take that httpd image, which has a web server, and I've got a Dockerfile. All this Dockerfile does is remove the default content from within the image (remember, it just looks like it's been deleted; it can't change the underlying image, it's recording in the read-write layer, hey, these files don't exist anymore) and then copy in the content from my location. So I quickly go into part 08, into my Bad Father Apache folder, where I have that website and this Dockerfile. So I can build my own image: I'm going to say docker build, as you can see over here, tag it badfather, from just my current location. If I run this command, it creates a new image, and you can see it made a new layer. I could inspect the history, and it shows me all of the commands that went in to make my final image: the things that were done before, as part of creating those underlying layers, and then the commands I just ran. And if I now look at the images on my system, docker image ls again, I have a new image; we can see down here I've got my badfather. Notice the size: it's saying the same size as the httpd image it was built on top of. It isn't recopying all that content; it's just showing me the sum of all those layers as the size.
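The Dockerfile being described is tiny; a sketch of what it likely looks like (the folder name is a placeholder, and the paths assume the official httpd image's default document root):

```
# Sketch of the Dockerfile: start from the official httpd image,
# drop the default site, and copy in our own content.
cat > Dockerfile <<'EOF'
FROM httpd
RUN rm -rf /usr/local/apache2/htdocs/*
COPY ./site/ /usr/local/apache2/htdocs/
EOF

# Build and tag the image from the current directory.
docker build -t badfather .
```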
At this point I could use it. To use it I do a docker run again, but this time I don't want interactive, so I'm using -d, a detached process, and I want to be able to talk to it. Now remember, this is just a container; it doesn't have its own external networking. So what I'm telling it to do here is: map port 80 on my container host to port 80 inside the container, name the container badfather, and use the image badfather. So if I run this and then just go to localhost on the box, well, there's my container, and there's me and my son on the roller coaster. We're going to see this app over and over again. That's us on Splash Mountain; that's him screaming in terror. This was ten years ago; he's no longer scared of roller coasters, but it was fun at the time. So I'm browsing against my local box and being mapped into that container. If I do my docker ps again, that one we'll see is actually still running; the top one is still there. And I could go and get those processes, I could stop them, I could do all those various things, I could remove the images.

But that's, I guess, the key initial point: we go and create an image, and then we run a container using that image. In this case I had my Dockerfile, based on some underlying image, and I made changes to it to create my new image. Then what I want to do is store that in a repository; this is the whole point. I create my own image, my actual service, badfather, built on Apache, which is based on, let's say, Ubuntu or Alpine; there are lots of different distributions we could have there, but I'm building on top of that to get my image. Then there's going to be some container registry, a place where I store container images. These can be public or private. So I create my image and I push it to my container registry. Docker has its own registry, and as long as it's compatible with that Docker image format, there are many different solutions for this, so I could have Azure Container Registry. Again, these can be public, where I make images available to everyone, or private, where I just want to use them in my organization.

And I built that badfather image just recently. What I did this morning, before I started recording, is that rebuild. If we look at my container registry, we can see I've got a registry, savilltech, and if I look at my repositories, there's my badfather image, and we can see, yep, I built it this morning at 5:00 AM (I get up at 3:00, even on a Sunday). And I've tagged it, so you have tags; tags are a way to identify different versions of an image, and 'latest' just means if I don't specify a particular version, hey, that's the one I actually want.

But one of the nice things I can do with these registries is have replicas. I might want replicas all throughout the world, and that makes a lot of sense, because if I'm going to create container instances in different regions, I don't want a reliance on a container registry in a different region. What if there was a problem there? And think of the speed, when I talk about how quickly we can spin up containers.
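Pushing a local image to Azure Container Registry is a tag-and-push; a sketch (the registry name savilltech comes from the demo, so adjust for your own):

```
# Authenticate to the registry, tag the local image with the
# registry's login server name, and push it up.
az acr login --name savilltech
docker tag badfather savilltech.azurecr.io/badfather:latest
docker push savilltech.azurecr.io/badfather:latest
```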
I have to be able to pull the image. Once I've pulled it, as we saw, it's cached, so unless I remove images I'm not using, it'll be pre-cached if I've used it before. But if not, I don't want to have to go to the other side of the country to get the image. So one of the nice things with Azure Container Registry is that yes, it's created in a specific region, region one for example, but if I know I'm also going to create container instances over here in region two, I can set up a replica. As I push, I create new versions, and I can even configure it so that if the base image I'm building on top of changes, they release a new version, it automatically rebuilds my image so it's always got the latest bits and makes it available in the registry.

I don't patch these. I'm never going to have a container running and apply Patch Tuesday or the nightly Linux updates in that read-write layer; it makes no sense. These, remember, are tin soldiers; I don't care about them. When there's, let's say, a patched version of Ubuntu or AlmaLinux or Rocky or Alpine or Mariner (Mariner is Microsoft's own Linux distribution, really designed around containers, super thin), when they patch that, I just rebuild my image so it gets all of the patches they've applied, and then I update my deployment with the latest version of my image. I don't patch these things; I rebuild when there's a new version available.

So the point is, we have that container registry. Now, to bring it all together: there's Docker, and remember, Docker was many, many things. We think of it a lot around the overall management; it has the command-line tool we use for running docker commands, and it has a huge amount around image management, building images, updating them. But what they did is split out the container runtime, and so what happens now is Docker calls containerd. There are other high-level container runtimes too; some were built expressly for Kubernetes, really designed around that. Docker actually has a little shim it uses to talk to containerd, and containerd then talks to runc, which actually goes and creates those OS components. And there are specifications for all of this: the OCI specifications for images and runtimes, and the CRI, the container runtime interface, which is how an orchestrator talks to a runtime. If I'm using AKS, or just Kubernetes generally, Kubernetes talks directly to containerd, for example. And the whole point is, when I do some container deployment, the runtime is responsible for pulling down the required image from the container registry so it can create the bits that manifest my container. It's taking care of the hard work to make all of this just happen.

So from a container support perspective: Docker brought containers to the masses, and that's really the whole point. We use it a lot today for image creation, but even that we're going to automate in our pipelines. Hey, we push some new code for our application, it goes through some tests, it automatically runs the docker build to create the new image, pushes it to the repository, deploys it to some test environment, does some load testing, maybe uses Chaos Studio to break some things.
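As an aside before we move on: the geo-replication and rebuild-on-base-image-change capabilities just mentioned are both driveable from the CLI; a sketch with placeholder names (the commit trigger is disabled here only so the sketch doesn't need a Git access token):

```
# Add a replica of the registry in a second region so pulls are local.
az acr replication create --registry savilltech --location westus2

# An ACR task that rebuilds the image when its base image is updated.
az acr task create --registry savilltech --name rebuild-badfather \
  --image badfather:latest \
  --context https://github.com/example/badfather.git \
  --file Dockerfile \
  --commit-trigger-enabled false \
  --base-image-trigger-enabled true
```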
And then once that's passed, hey, we start rolling it out to production. A lot of that will be automatic, but these are all standards. When I talk about Kubernetes later on, it talks to containerd directly for the runtime; it doesn't go via the Docker shim, it goes straight to it. So Docker today is really about the management and the image creation. The registry, yes, it was the original, but Docker themselves split the runtime out into the separate containerd project to make it more compatible and easier to use. So we have a common runtime.

Then there are Hyper-V containers, and this is an interesting concept. Remember, we're using a shared kernel, a shared kernel space. That's fantastic if this is inside my company and I trust the other people. If I'm in a multitenant environment, I don't trust that other container; I don't want a shared kernel; I don't like this. So one of the things we have is Hyper-V containers. What it actually does is create an automatically managed, very thin VM, so that each container runs in its own virtual machine. It's still a regular container; I don't have to do anything different with the container image. But the isolation is now at a VM level. It's going to be fatter, a bigger tax, because now it is running a complete OS, thinner than usual, but still a bigger footprint: more disk space, more CPU, more memory being used. But now I have complete isolation, the same isolation I would get with a regular VM, without changing anything about how I create my container images or run them. It's just a switch when I instantiate the container that says, actually, I want Hyper-V-level isolation, and it creates that automatically managed VM and runs my container inside it.
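On a Windows container host, that switch is literally a flag on docker run; a minimal sketch (the image tag is illustrative and needs to match the host's Windows version):

```
# Hyper-V isolation: Docker creates a thin, automatically managed
# utility VM so this container gets its own kernel rather than sharing.
docker run --isolation=hyperv \
  mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c "echo isolated"
```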
So, Azure Container Registry, as I showed: it can do private repositories, it can be geo-replicated, I can have zone redundancy so it's distributed over different availability zones if required, and you probably want to place it close to where you're actually going to create the containers. And as I mentioned, it can run tasks, so I don't even have to do this stuff manually: if what I'm building my Docker image from changes, it can automatically trigger a job to rebuild my image and put it in my repository, ready to be used.

So that was the foundation of what a container is, and hopefully you get the idea. Now let's talk about the Azure services that actually use those containers. Azure Container Instances is, I guess, the most basic one. Think of it as a Pez dispenser: containers as a service. It can be Linux or Windows, built from standard images out in the public repositories, or I can use my own custom images in my registry, which can be public or private. There are a lot of restrictions around Windows versus Linux here, and Linux really is the predominant one we're going to see. Let's actually look at this super quick: the docs talk about the different things we can do, and what you'll see is that some features are restricted to Linux. When we think about the private option, so not public, well, that virtual network deployment is only available for Linux, so we're restricted on the Windows side: Windows instances are going to be public. But it's very useful for a burst scenario, hey, I just want to create something quickly to do a job, some very basic scenario. And I can integrate this with Azure Kubernetes Service via something called the virtual kubelet; that's not going to make sense right now, but when we talk about Kubernetes it will, and I'll explain it in a bit more detail.

Now, one of the things I can do with Azure Container Instances, for Linux, concerns the container group. The top-level resource is a container group, and my container runs inside it. With Linux I can actually have multiple containers in the same container group. What that enables is, if they want to talk to each other on the network, they can just talk over localhost on different ports. I don't have to know anything else: hey, localhost:5000, and I can talk to another container running some other process without knowing anything else about it. We often call this a sidecar: a second container, grouped with the primary container, that helps it do its job, that provides some extra service to it. So with Linux I can have those multi-container groups. They also have additional shared storage options, because they're placed on the same host: they can use local storage as a shared location, you might see emptyDir, and there's secret, which is a RAM-backed file system. So there are extra things I can have with this.

Let's just see one of these. Remember, I placed badfather in my container registry; it's sitting right there. If I jump over to my container instances, I don't have any running, so I'll show you how easy this is. I create a new container instance: pick a resource group, we'll call it, I don't know, badfather-aci; region South Central; I don't need availability zones; and I'm going to use my own image, though notice I could use a quickstart, it has a bunch of them available. I pick my badfather image, and I can pick the resources. One of the choices here is the networking: public, private, or none. This is Linux-based, so I could integrate it into a virtual network if I wanted to, but I'm not going to. I've got a restart policy, I could have some key management, but let's just go ahead and create it. Remember, all I did was say: hey, I want a container, and I want you to use this image. What it's doing is going off and essentially creating me one of these things. So where's my container registry? All the way over there. What it's doing is pulling that image all the way over here and creating an Azure container instance, a super happy little resource that has pulled in this image and is just going to run it. It's super, super simple, which is why this is so attractive: there's really nothing I have to do. So if we look back over, it says it's complete. We can go to the resource, it says it's running, and it's got the public IP address. So if we now just go over to there... there it is.
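For reference, that same create-and-delete flow from the CLI is a couple of commands; a minimal sketch, using a public sample image so no registry credentials are needed (names and region are placeholders):

```
# Resource group, then a public container instance listening on port 80.
az group create --name badfather-aci --location southcentralus
az container create \
  --resource-group badfather-aci \
  --name badfather \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --ip-address Public

# When it's done its job, delete it; that's the whole lifecycle.
az container delete --resource-group badfather-aci --name badfather --yes
```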
So there's that same Docker image we created and ran locally; there's my son Ben, tormented on roller coasters again. And when it's done its job, I'll just delete it. It's that simple: it was that fast to create, and it's that fast to delete. Really the only time was pulling the image down, and then it spun it up. Now, behind the scenes, ACI is doing virtualization-level isolation; I'm not sharing a kernel, because this is a multitenant service, so I am getting that isolation.

But that's ACI: super, super thin, super, super simple. However, often we want more than that. We need more than just one container in some isolated way. I want autoscaling based on some amount of resource being used. I want, when the image has been updated, not to just update all of them at the same time; I want a rolling upgrade. I want to be able to discover services offered in other places. I want easy integration with other Azure services like load balancers or App Gateway. I want volumes, persistent volumes, so even when a container goes away and I create a new one, I still have some state; if I do need some stateful service, it's there. I can run SQL in a container, I just need the storage to be somewhere durable. I need richer networking. I want taints and tolerations, so workloads are attracted to certain nodes; maybe some have a GPU, maybe some are using Azure Container Instances. I need these richer pieces of functionality. I need orchestration, and Kubernetes has become the standard. Years ago there were many different orchestrators, Kubernetes was one of them, and now Kubernetes is the orchestrator for containers; I don't think there's really any doubt anymore.

Now, Azure Kubernetes Service is a managed Kubernetes offering, and when we think about "managed", there are really two planes. Kubernetes itself is not Azure-specific; it's a CNCF standard offering. Often you'll see it written as K8s: K, then eight letters, then s. Kubernetes is split into two parts. There is a control plane; this is the brain, the bit doing the main orchestration, and the primary brain is a thing called the API server. That's what we're really talking to when we run commands: if I'm a user sitting out here at my machine, the tool I use is kubectl, and when I run my commands, kubectl is talking to the API server. Additionally, the control plane has persistent key-value storage; its store is the etcd database. It also has the scheduler, which is responsible for saying: hey, a manifest came in, or there's been a scaling event, or a pod failed because it was unhealthy; I need to get more pods scheduled, they need to get created somewhere. And there are controllers, different types of controllers for various things, each watching different aspects of the system: the node controller, the job controller, the endpoint controller, the token controller, and so on. All of these things talk to the API server.
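In AKS, "talking to the API server" starts with pulling the cluster's credentials into your kubeconfig; a quick sketch (cluster and resource group names are placeholders):

```
# Merge the AKS cluster's credentials into ~/.kube/config; after this,
# every kubectl command goes to that managed API server.
az aks get-credentials --resource-group myrg --name myaks
kubectl cluster-info       # shows the API server endpoint
kubectl get nodes -o wide  # the worker nodes, as seen by the API server
```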
This API server, this scheduler, the etcd database, the controllers: this is all managed for you. In Azure Kubernetes Service you don't see any of it; it's just managed. But there's a free and a standard offering. The standard tier has an SLA, it can use availability zones, and it supports up to 5,000 nodes. If I'm in production, I'm probably using standard. If it's just me playing around, dev/test, sure, I can use free. But for production usage I want an SLA; I don't want production running on components that have no SLA around them. That's generally not a great thing to have. So that's the control plane, and it's fully managed for me; I don't touch it in any way, shape or form.

Now, what do I pay for? With free, and obviously with standard as well, I pay for the worker nodes, and these can autoscale. This is a key point: where do my containers actually run? Not on that managed part; that's all control plane. What we have underneath are the node pools. I'm going to have at least one node pool, and a node pool is made up of nodes, as you'd expect; there are N nodes within it, so this could be node pool one. And I need at least one of them to be a system node pool, because there are certain pods (that's what we'll call the container constructs) that Kubernetes itself needs to work. Not everything Kubernetes needs runs up in the control plane; some components run down here as well. We'll see there's something called the kubelet: it runs on every node and talks to the API server, and that's how a lot of the management happens. Hey, a pod needs to get created; OK, the kubelet talks to the API server and goes and creates that thing. I also have to have, remember, my container runtime, so containerd is running on there. There's also a component called kube-proxy; that's for the networking. So there's a whole bunch of components before I even get to my workloads.

And I can have multiple node pools, so I could have node pool two. These are built on virtual machine scale sets, which gives them the ability to scale, as we're going to see. All of the nodes in a node pool have the same configuration, the same VM type. But this second pool might be user-only, not running any system pods. Maybe it's spot; remember the really cheap VMs? Maybe I've got some process that could use spot and run within there.

Then, once I have this, what Kubernetes actually creates is pods. There are some number of pods, and inside a pod is where it actually runs my container. Normally it's one-to-one: a container runs in a pod, which is just a construct to manage it.
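A sketch of standing that up from the CLI, a Standard-tier control plane plus a user node pool on spot instances (all names, sizes, and counts are illustrative):

```
# Create a Standard-tier AKS cluster (the paid control plane with an SLA).
az aks create --resource-group myrg --name myaks \
  --tier standard --node-count 1 --generate-ssh-keys

# Add a user-mode node pool on spot VMs for interruptible work.
az aks nodepool add --resource-group myrg --cluster-name myaks \
  --name spotpool --mode User --priority Spot \
  --eviction-policy Delete --spot-max-price -1 --node-count 1
```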
Sometimes you'll hear the idea of a sidecar container, a second container that runs in the same pod. The benefit is that the sidecar provides some additional functionality to the primary app; maybe it handles certain networking, traffic splitting, certain services for microservices. And obviously there's a big push around microservices in what we're doing with app services. We used to be monolithic: one big app does everything. With microservices we break it down so each piece of functionality runs as its own component, loosely coupled, so I can scale them independently. But when I have those microservices, there are things I need: discoverability, traffic splitting, the ability to communicate with, discover, and bind to other services, storage, queues maybe. I can enable that through some sidecar, and then my app doesn't have to care; it just talks through some REST API and the sidecar does the work. So I, as the app writer, don't care: hey, I want to create some secret, hey, I want to store some durable value; you work out how to do that on the back end. So I may have those in the same pod. And that's the structure.

So the whole point is, the node pools are VMs in my subscription; I would see them. Let's go and look for a second. I spun up an AKS cluster this morning, so if I look at my Kubernetes environment, I have my Kubernetes service and I can see my node pools. If I look at this node pool, we can see the mode is System. Now, it can still run user workloads, but System means it's allowed to run those system components as well. If we look right now, I only have one node. I can see the node image version: Ubuntu is the standard today, and it's moving to Mariner over time; we can opt into that today, but for now you'll probably see Ubuntu by default. I can see the container runtime is containerd, right there, and you'll notice it has a certain version; we're going to come back to that, because that version actually comes into play with some of the responsibilities we have.

If we look at the workloads running, firstly the deployments: well, there's CoreDNS, and that's not mine; CoreDNS, the CoreDNS autoscaler, the metrics server. And look, I deployed my badfather, so I can show you that later on. But there's a whole bunch of stuff the cluster needs to do its job. If I just look at the pods, there's kube-proxy, again my CoreDNS with multiple instances, and there are storage drivers, the CSI interfaces for disk and file. So there are certain things it needs to do its job, which it sticks in namespaces like kube-system, kube-public, and kube-node-lease. I can create my own namespaces to organize things, and if I don't specify one, hey, there's default; it just sticks things in that default namespace. Namespaces help separate different things. So we can see we've got this thing up and running; I've got an AKS cluster right there.

And the point on pricing: yes, I pay for that control plane if I'm using standard, but beyond that I just pay the regular VM price; there's no Kubernetes add-on price on the nodes. If it's spot, obviously I pay less; if it's some advanced GPU type, I pay more. And I pay for the number of instances there are. Now, they're going to scale, and there are different types of autoscale. Remember, when I do my deployment, it probably wants multiple instances; let me actually show you this.
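Those built-in namespaces and system pods are easy to see for yourself; a quick sketch:

```
# The namespaces AKS creates out of the box, then the system pods
# (kube-proxy, CoreDNS, CSI drivers, etc.) that live in kube-system.
kubectl get namespaces
kubectl get pods --namespace kube-system
```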
If I go back and look at my workloads, and look at my badfather, you actually see I've got two of them, because what I created was a manifest. I created a manifest file over here. The way Kubernetes works is that it's declarative: I tell it what I want to be there. So in my manifest I said, hey, I want 2 replicas. I gave it the name I wanted, I told it the image that I want to use, and I told it the resources I want it to be limited to. I also told it, look, I want a service, which is going to create me a load balancer, as you can see here, and I want it to listen on port 80. I could also have created an internal load balancer; there are extra annotations I could add, which I've commented out, to make it internal. But I just created it as external. All I do is pass this manifest, this YAML file, to the API server, via my kubectl command or via GitOps or other means, and Kubernetes goes and makes it reality.

So if I go back and look, the reason I have two is because I said I want two. My desired state was two, and it's the job of those various controllers and the scheduler to make sure that need is met. And I can see both those pods. Now, I only have one node, because I'm cheap, but it would balance them over multiple nodes if it could; it prefers to do that. I can also see it went and created a service, a load balancer, for my app, and I can see the external IP. If I click this, well, now I'm running that same badfather app, but on Kubernetes, and again it's public facing. I don't have to make it public facing; I have lots of choices around how I expose it, but that's what I picked for this particular application.

So I said I want two pods, but the number of pods I want may vary depending on the situation. For each of my deployments, which create replica sets, which go and create the pods, I can absolutely have horizontal pod autoscaling. Based on criteria I specify, it adds or removes pod instances: hey, the CPU is above this amount, add more pods; the CPU dropped below, or there's memory pressure, adjust accordingly. Now, when it adds more pods, the scheduler has to be able to schedule them on some node. If it can't, if a pod just sits there waiting to be scheduled, that's where the cluster autoscaler comes in. I have N nodes, and the cluster autoscaler triggers based on the fact that pods can't be scheduled; that's what drives it, nothing else. The horizontal pod autoscaler looks at what's happening and says, OK, I need more instances based on some pressure. If it decides it needs two more pods but everything is full, those pods get stuck in a pending state, and the cluster autoscaler says, hey, there are pods we can't schedule, I need more nodes, and it adds nodes to the node pool so the pods can get scheduled. Likewise, if there's a bunch of free space on the nodes, the cluster autoscaler says, hey, I can bring this back in, I don't need quite this many anymore. So we have that ability.
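To make that concrete, here's a sketch of what a manifest like the one described might look like, two replicas plus a LoadBalancer service, with a basic horizontal pod autoscaler added at the end (image path, labels, and thresholds are placeholders; the commented annotation is the internal-load-balancer switch mentioned above):

```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: badfather
spec:
  replicas: 2                  # desired state: two pods
  selector:
    matchLabels: {app: badfather}
  template:
    metadata:
      labels: {app: badfather}
    spec:
      containers:
      - name: badfather
        image: savilltech.azurecr.io/badfather:latest
        resources:
          limits: {cpu: 250m, memory: 256Mi}
---
apiVersion: v1
kind: Service
metadata:
  name: badfather
  # annotations:
  #   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: badfather
EOF

# A simple CPU-based horizontal pod autoscaler against that deployment.
kubectl autoscale deployment badfather --cpu-percent=70 --min=2 --max=10
```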
Now, there are other options for the scaling. I talked about horizontal pod autoscaling; there's also KEDA, Kubernetes event-driven autoscaling. It's just a more powerful autoscaling technology. It isn't restricted to those basic metrics: I can have scalers like a queue depth, the number of messages on an event hub, passing some threshold of messages, richer pod-level metrics of CPU and memory; I just have more combinations of what I can do around scaling. So I have choices for how I scale those pods, but again, that still drives the cluster autoscaler: if pods are stuck pending with nowhere to schedule them, hey, I need to go and add more nodes. It adds and removes nodes as it needs them, based on that state.

There's also networking to consider, and networking is hugely complicated for Kubernetes, I think the most complicated part of Kubernetes, so I'm not going to go into full detail. But there are actually multiple levels of networking. For one thing, there's the networking of the API server: how do I talk to the API server? By default it's a public endpoint, but I could make it an AKS private cluster, where it becomes a private endpoint within a certain virtual network; I can even now do a VNet-integrated API server, where the API server has a load balancer within your VNet. So there are different ways I can make that available. Then there's the networking to talk to a service, and then there's the networking for the pods to talk to each other. There are all these different levels, but let's stay at a super high level.

There are two key types of network you'll hear about with Kubernetes. There are various options even within these, and I'll talk about them fairly briefly, but it all comes down to the fact that, remember, these containers might fly into existence and then fly out of existence really, really quickly. So there's a strong chance I don't want them to have real IP addresses in a real network; especially on a real physical network, all that churn would be horrible. I'd rather hide the internal IP space of the pods and just have some network address translation in front of them.

So the first option is kubenet. Kubenet is an internal IP space for the pods. The nodes still have IP addresses from some subnet, but the pods get their own IP space that isn't the subnet's IP space, and network address translation goes on in between. Now, there are things I can do with routing tables so that other workloads on other subnets can talk directly to a pod, but this really limits me in a lot of ways. Route tables have a finite size; a route table, I think, can have 400 entries, so I can only have 400 nodes. If I'm dual-stack, IPv4 and IPv6, I only have 200. So it limits me, and it's kind of slow, kind of kludgy to actually do this, but it's there. Then we have the option of Azure CNI, the container networking interface. The point here is that pods get a direct subnet IP address, so I can talk to them directly, and they can talk to each other; but each pod is using up an IP address from my subnet.
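That plugin choice is made when the cluster is created; a sketch of the relevant flags (resource names and the subnet ID are placeholders):

```
# Kubenet: pods get a private IP space, NAT'd behind the nodes.
az aks create --resource-group myrg --name aks-kubenet \
  --network-plugin kubenet --generate-ssh-keys

# Azure CNI: pods get real IPs from the node's VNet subnet.
az aks create --resource-group myrg --name aks-cni \
  --network-plugin azure --vnet-subnet-id <subnet-resource-id> \
  --generate-ssh-keys
```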
With classic Azure CNI, the pod IPs come from the same subnet as the node, and when I configure AKS I tell it the maximum number of pods I'll support per node; it uses that to pre-allocate all of the IP addresses. In my cluster I just used CNI, because I was being lazy and did the quick-and-dirty configuration. If I look at my networking, it shows I'm on Azure CNI, and I pointed it at a specific subnet within my network. I'll just show you: look, in my virtual network I created a subnet called AKS-CNI, and now it's only got 142 available IP addresses. I don't have that many pods running; what happens is it pre-allocates them. Even though I don't have that many containers running, it pre-allocated 110 IP addresses, the maximum number of pods I support per node. If I look at my AKS node pools, this agent pool, the system one, there we go: max pods per node is 110, so that's what it pre-allocated, 110 IPs, and it just gives them out.

But there are other modes as well. Beyond that, there's dynamic CNI. It's still a real IP for the pod, but the pods draw from a different subnet than that of the node, and it grabs the IPs in batches (I think it's 16 at a time), so it doesn't pre-allocate everything up front; it gets them as it needs them. And then the other one is CNI overlay. CNI overlay is like kubenet in that it's a different IP space from the actual virtual network, but it's not using route tables, so it's faster and I don't have those limits. This would be the preferred approach: I'm saving precious IP space on my virtual network, which is probably constrained in size (which is why kubenet was attractive too), so I get the benefit of not wasting maybe thousands of IP addresses from my usable IP space, but I'm not limited to 200 or 400 nodes, and I don't have the pain of the time required to update route tables. Now, there are some flip sides: because it's not using route tables, I think things like App Gateway don't work with it today, so there are things that won't work with this model, but it's available to me.

There's also storage to think about. What about when I do want durable storage? How is that going to work? Firstly, the read-write layer doesn't just disappear: if a container is still running and I don't delete it, it may keep that storage. But if I actually want durable storage that can survive the container being deleted, or maybe I want some shared storage, then for storage we have this idea: a pod can make a persistent volume claim. I need some persistent space of a certain type. That claim gets satisfied by a persistent volume, and the persistent volume uses a storage interface, a CSI driver: maybe Azure Disk, maybe Azure Files, maybe Blob, and it can be Azure NetApp Files as well. There are standard and premium variations of these. It goes and allocates space on one of those backing services, so now I've got durable storage. Azure Files is obviously very attractive in that it's shareable; it's SMB, so I can use it across different things.
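A sketch of that claim flow using one of the storage classes AKS ships with (managed-csi, backed by the Azure Disk CSI driver; names and sizes are placeholders):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: [ReadWriteOnce]    # a disk: mounted by one node at a time
  storageClassName: managed-csi   # Azure Disk CSI driver
  resources:
    requests:
      storage: 5Gi
EOF
# A pod then mounts the claim by name; the CSI driver provisions the
# Azure Disk behind it, and the data outlives any individual container.
```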
So if I need some storage, one of the things I can absolutely do is use persistent volume claims to get it.

OK, so back to the cluster; there are some other nice features. Obviously I have multiple node pools: some might be spot-based, some standard, some with GPUs. And we use the idea of taints and tolerations. I would say, hey, this particular node pool, I'm tainting it, maybe it's GPU. Then pods can say, well, I tolerate GPU, and they'll get aligned to and use those nodes that actually have the GPU capabilities. It lets me have different types of nodes and have them used only by the pods that will take advantage of that underlying functionality, that underlying hardware.

And remember, we have the idea of system and user node pools. User pools only run our user applications; the system pool runs the system pods, and I have to have at least one, though I can put user applications on it as well. I can actually stop and start AKS clusters. I could always stop and start user node pools, or basically drop them to zero instances, but now I can stop the entire AKS cluster, so even the system node pool stops and I stop paying for the compute side. One of the nice things is, remember, there's no state on these nodes; they're created from some base image and then deleted. So AKS can use ephemeral OS disks (we talked about ephemeral disks for virtual machines): if it's a VM type that has enough temporary or cache space, by default it will use ephemeral OS disks to save me money, so it's not going to go and create managed disks for the nodes.

You can also use ACI. Remember in the picture we had those Azure container instances, all happy over here? How does the API server talk to a node? It's the kubelet. So what we can have is a virtual kubelet, which makes ACI look like a node of infinite size, and the API server can talk to it. Again, that virtual node gets a special taint, and to use ACI a pod has to tolerate it. If I quickly look at my deployment files: alongside my regular badfather YAML, I've actually got a version that would work with ACI, and what you'll see is it has a toleration, key virtual-kubelet.io/provider, operator Exists; see the sketch below. This says, hey, I'll actually run on ACI. So I have that ability, and I can leverage it to get as much benefit as possible: maybe instead of scaling up to more nodes for some burst scenario, if it suits the type of work I have, I don't need to scale my nodes at all; I just spin up some ACIs. That can be my burst scenario.
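The toleration being described looks roughly like this inside a pod spec; a sketch (image and names are placeholders, and the node selector values follow the ones commonly shown for AKS virtual nodes, so treat them as assumptions):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: burst-pod
spec:
  containers:
  - name: badfather
    image: savilltech.azurecr.io/badfather:latest
  # Tolerate the virtual kubelet's taint so the scheduler is allowed
  # to place this pod on the ACI-backed virtual node.
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  nodeSelector:
    type: virtual-kubelet
EOF
```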
User node pools can also use spot. I can't do that for the system pool, but for user node pools I can use those much cheaper VMs; remember, my workload has to be tolerant of the fact that it could just get evicted. And there's auto-healing. AKS is constantly looking at the health of the nodes, and if it sees a NotReady status for too long (I think it checks every 10 minutes, and I don't remember the exact failure count), it will try a reboot. Fixes everything; have you tried turning it off and on again? If that doesn't work, it will reimage the node, and if that doesn't work, it will just delete the node and create a new one. So there are steps it takes to try and heal something that's just not working.

It also has auto-upgrade, and this is actually an interesting one when you think about shared responsibility, because I said you're not responsible for the control plane. But remember, there's that control plane and then there are the node pools, and there's a version of software on all of it. Kubernetes has different versions: roughly every four months a new version comes out, and it uses semantic versioning, major dot minor dot patch. The major is always 1 today, so it's 1.something, and every four months there's a new minor version, with support for 12 months. So I have to upgrade, and remember there are multiple places to upgrade: the control plane has to get upgraded, the components on the nodes have to get upgraded, the system pods have to get upgraded. I can trigger a control plane upgrade, but then the nodes have to be upgraded too, and they don't upgrade in place. When I upgrade the nodes, it creates a new node from a new image that has the new versions of the Kubernetes components, the new kubelet, and all the new images pre-cached for things like kube-proxy. It doesn't actually patch anything, and it doesn't do it on its own by default; I have to say, hey, I want you to upgrade.

Also remember these nodes are running an operating system. I drew containerd, but there's obviously an OS underneath: most of the time it's Linux, Ubuntu today, maybe Mariner, and you can do Windows. Those things get patched. Windows gets fixes every Patch Tuesday; Ubuntu, I think it's nightly, does a check. But if there's a kernel-mode update, I'd actually have to reboot the nodes periodically. There are solutions like kured, a reboot daemon that will automatically do that reboot for me, but there is a responsibility there; or I just make sure I'm regularly reimaging the nodes, because when I update the node image I also get the patches to the underlying OS. So there is a responsibility to update those things.

If we go and look over here for a second, you can see it right here: there's a Kubernetes version, 1.24.6; that's my major dot minor dot patch. You'll see there's an "upgrade Kubernetes" option, so I can upgrade the individual node pool; I'm already running the latest because I created the cluster this morning, but I could update the node pool. Likewise, for the cluster, I could upgrade its version, and if I upgrade the cluster it gives me the option: update the control plane and the nodes together, or just the control plane, and then later individually update the node pools. And it shows me the different versions that are supported; it supports basically three minor versions at a time, a new one every four months, one year of support each. So I could be doing that manually (there's a CLI equivalent sketched below), or, you'll notice, I've got automatic upgrade. That's a feature I can enable, where I pick how the automatic upgrade should work.
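Those portal actions, and the automatic-upgrade channel I'm about to describe, map to CLI calls like these (names and version numbers are placeholders):

```
# See which Kubernetes versions this cluster can move to.
az aks get-upgrades --resource-group myrg --name myaks

# Upgrade the cluster, or refresh just a node pool's image
# (new OS patches, new kubelet) without changing Kubernetes versions.
az aks upgrade --resource-group myrg --name myaks --kubernetes-version 1.25.5
az aks nodepool upgrade --resource-group myrg --cluster-name myaks \
  --name nodepool1 --node-image-only

# Or opt in to automatic upgrades on a chosen channel.
az aks update --resource-group myrg --name myaks --auto-upgrade-channel stable
```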
"Stable" says move to the latest n−1 minor version and its latest patch. "Rapid" says move me to the latest minor and the latest patch version. And as part of those updates, when it updates Kubernetes it will update the node images as well, so it's going to update all the components, get the OS patched to the latest version — it brings all of those things together. If I didn't want to update Kubernetes automatically, there's also a "node-image" option: it won't touch the Kubernetes components, but it will automatically update the node image to the latest version at whatever interval it uses behind the scenes. But there is a shared responsibility for that.

I'd also caution you: on dev/test, auto-upgrade — fantastic. I'd be careful doing it on production, or certainly I wouldn't have production and dev/test on the same schedule. If my production was going to auto-upgrade, I'd want it at least on "stable", because what if it breaks something? What if I have some use of an API that gets changed and it breaks my code? Well, I want to see that in dev/test first. So be very careful of any kind of automatic upgrade. In a production environment there should always be some staggering and some rolled deployment: I see any update in dev/test first, and then, when I'm happy, I push it into production. Otherwise I may see very, very bad things and have a very, very bad day. So it does have auto-upgrade, but use it sensibly.

It has managed identity, and managed identity is hugely powerful — we're going to cover this when we talk about security. There have been multiple solutions for Kubernetes, because it has a challenge: it's not just like a virtual machine. A pod is a pod within a node, and there are many pods. I can't just use the identity of the node, because that's very generic, and they might be very different applications. I need the pod to be able to have its own identity, which is actually pretty difficult when you think about it — how does it automatically go and get a certain identity? What's happened is there are now specific solutions for this: Azure AD workload identity. This replaces a previous solution, pod-managed identity, which was based on the node's instance metadata service and doing some tricks. Well, while they were doing that work, the nice people at Kubernetes were busy. They went and created their own native solution: they came up with their own OIDC (OpenID Connect) issuer, and the ability to project service account credentials into a pod — service account token volume projection. So what Azure Kubernetes Service now does is use that native capability.

There are a few layers to this, but basically, somewhere (I'm going to run out of space, but imagine there's Azure AD over here) I create a user-assigned managed identity, and that user-assigned managed identity has a client ID. Then, in Kubernetes, I create a service account, and what I'm really doing is mapping that service account to the user-assigned managed identity. Then I create a federated identity credential. This maps the user-assigned managed identity to my service account in a certain namespace, and the pod just gets the service account token projected in via a mount point. It can go and trade that in for an Azure AD token — it doesn't care, it doesn't have to know anything about it. OK, I'm going to do a deeper-dive video; there's a lot to that. But basically what it allows the pod to do is easily see this service account, grab the token, and, through an endpoint, say, hey, give me an Azure AD token. And it's just going to work, because behind the scenes we've set up this mapping of a managed identity to the service account, the issuer, and the subject. So the pod is going to have a unique identity it can use without having to know anything about Azure AD, without any special magic. Every pod can have a different identity from other pods on the same node — nothing to do with the node itself. The node has no access to this; it's all done directly as part of the pods. So that's a fantastic feature.
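On the Kubernetes side, that mapping ends up looking something like the sketch below — a minimal illustration, assuming the user-assigned managed identity and the federated credential have already been created in Azure; the names, namespace, and client ID are all placeholders:

```yaml
# Sketch of the Kubernetes objects for Azure AD workload identity.
# The client ID comes from the user-assigned managed identity in Azure.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: badfather-sa
  namespace: production
  annotations:
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
---
apiVersion: v1
kind: Pod
metadata:
  name: badfather
  namespace: production
  labels:
    azure.workload.identity/use: "true"   # ask AKS to project the token volume
spec:
  serviceAccountName: badfather-sa        # ties the pod to the mapped identity
  containers:
  - name: badfather
    image: myregistry.azurecr.io/badfather:latest   # placeholder image
```

The pod then exchanges the projected service account token for an Azure AD token; the app itself just reads a file and calls an endpoint.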
And then, I guess finally for AKS: portal resources and DevOps. One of the things they've really worked a lot on is the portal interface — it's great, and we've seen a lot of it already. I don't have to use kubectl for many things now. I can drive a lot of what I'm doing from here: I can look at the namespaces, the workloads — the pods, the ReplicaSets, the StatefulSets, the DaemonSets, the jobs, the cron jobs — also services and my ingress (so I have my ingress controllers there, nginx for example), and my storage — the persistent volume claims, the persistent volumes, the storage classes that are supported — all through here. Really useful. In the old days you'd be doing a ton of stuff with kubectl and having to mess around with that; now it's very rare that you do. Even my deployment of badfather this morning — where's it gone? hold on — I just went to Workloads, hit Create, and I could either use a starter app or just paste in the YAML. So I can easily deploy from here as well.

And it integrates with GitOps. GitOps is a really nice capability, and what it lets me do — obviously the whole point is everything's moving to DevOps — is take this deployment file right here and, instead of applying it by hand, commit it to a certain branch of some Git repo. This could be GitHub, could be Azure DevOps; there are many of them. What I can configure with GitOps is a component that runs on the cluster, Flux v2 (I'm trying to work out where I'm going to draw this, if I'm going to run out of space). Flux v2 is basically watching, and when I do a commit to the branch it's configured for, it will automatically go and grab the manifests — these could be Helm charts; there are different ways I can organize the resources — and, through that component running on the cluster, apply them. When it does the apply, it's actually talking to the API server; there are whole sets of interactions going on back and forth. But I don't even have to have a pipeline necessarily. I could make the creation of the new manifest part of my pipeline, but I don't have to push it to my Kubernetes environment — all I have to do is commit it to the right branch. Flux is watching, sees, oh, there's a new deployment manifest, I should make that reality in my environment, grabs it, and pushes it out. So the GitOps integration is really powerful, and it might really simplify how I think about doing my various tasks.
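Under the covers, Flux v2 is driven by objects along these lines — a minimal sketch, with a placeholder repo URL, branch, and path (and note the exact apiVersions vary by Flux release):

```yaml
# Sketch of the Flux v2 configuration behind the GitOps flow:
# watch a branch of a Git repo and apply the manifests found in a folder.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: badfather-config
  namespace: flux-system
spec:
  interval: 1m                    # how often to check the repo
  url: https://github.com/example/badfather-config   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: badfather-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: badfather-config
  path: ./manifests               # folder holding the YAML to apply
  prune: true                     # remove resources deleted from Git
```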
We spent a lot of time on that, but AKS is such a useful service. Containers are driving a huge amount of what we do today, and of the services we're about to talk about, most actually sit on top of AKS — the last one doesn't, but pretty much all of the others do.

Speaking of which: Azure Container Apps. I mentioned microservices, and Kubernetes and containers are fantastic for microservices — some small unit of functionality, where I can have lots and lots of them, each doing some piece of work. They may be persistent, they may last, or they may flit in and out of existence. But for true microservices to be useful, I need other services to make them work well, and so there are often other components we layer on top of Kubernetes: Dapr, KEDA. KEDA we talked about already — that Kubernetes event-driven autoscaling, where based on a number of HTTP requests, a certain queue depth, certain resource use, I scale out the pods on those various metrics, and I can even scale to 0, so it becomes serverless. Dapr is the distributed application runtime, and it's all about giving me really great portability: it doesn't care what language my app is in, it doesn't care what back-end services it might be using. I just make an HTTP or gRPC call (gRPC being a kind of modern version of remote procedure call — very thin, very lightweight), and it takes out all of the complexity.

Now, I could install these things myself on Kubernetes. But maybe I don't even want to understand Kubernetes — I just want to create my microservices app and say "run". So Azure Container Apps runs on top of AKS, but it abstracts AKS away, and then it gives me KEDA, it gives me Dapr, it gives me Envoy. Envoy is one of those classic sidecars that handles the networking for me: if I want to do traffic splitting — A/B, so 80 percent goes to A and 20 goes to B — blue-green cutovers, quality of service, inspection of traffic, Envoy is a great way to do it, and it does it via a sidecar.

So let's go over here and think about my Azure Container Apps. The whole point is I can just focus on my app; I don't even have to understand AKS. Yes, absolutely, AKS is still there, but it's hidden away from me. What I focus on is my application, and what Container Apps brings me is, again, that event-driven autoscaling, Dapr, and Envoy, and it facilitates all of this. It's still a pod — I still have my pod — but I get that sidecar providing all of these different functionalities around Dapr and Envoy, and then I have my main container itself.

Now, the beauty of what this does — and I'll focus on the Dapr side — is that if I'm doing a true microservices solution, I may want publish/subscribe on some bus. I may need some persistent state to store things, maybe a key-value store. I may need to discover other services. I may need secrets and secret management.
I may want certain types of encryption or telemetry enabled for me. I might need to integrate with other types of service, and that's a lot of work for me to write as a developer — remember, these could be in many different languages. So what does the Dapr sidecar do? Well, it's running in the same pod, and remember, once we're in the same pod, what can we do? I can just talk to localhost. So I can just make an HTTP or gRPC call to localhost — port 3500 by default, though I can change it — then /v1.0/, and then some service; maybe it's state. If I want to do some state, I give it a JSON payload with the key/values I want to store. Maybe I want to get, so I can both put and get those values. I just make this request to my sidecar.

It doesn't matter what language my app is — this could be any language. It's just a regular REST call. That's it. It doesn't need to know anything at all about anything; I'm just making a call to localhost, /state, here's some JSON payload. It could be Java, it could be C#, it doesn't matter; I can have a mix of languages in my app. The Dapr sidecar is then responsible for actually taking that and talking to some back end. So Dapr abstracts all of that away, and maybe the back end is, I don't know, Cosmos DB, maybe it's table storage, maybe it's SQL Server — it doesn't care. I could change the back-end store any time, and my app would carry on making the same call, because Dapr has completely abstracted it away from everything I'm doing. And that's really the huge point of Dapr: it's providing these services, and I don't have to care about some special API. My app always just talks to localhost:3500/v1.0/state — and there are different types of service I might call, but it's just that sidecar, so I can always talk locally.

And this is the point: now my microservice doesn't have to worry about how do I talk to SQL, how do I talk to Cosmos DB, how do I talk to Service Bus, how do I go and discover services. I don't have to learn different things depending on what language I'm in; these are just provided for me by Dapr. And so what Azure Container Apps is doing is providing all of that for me. I just say, hey, here's my app, and I deploy it. It's managed. That's the big deal about Azure Container Apps.
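To make that concrete, here's a minimal sketch of those state calls from a Node.js app (Node 18+ for the built-in fetch). The component name "statestore" and the key/value are placeholders; whatever back end "statestore" points at, these calls stay the same.

```javascript
// Sketch of talking to the Dapr sidecar on localhost for state storage.
// The sidecar listens on port 3500 by default.
const STATE_URL = "http://localhost:3500/v1.0/state/statestore";

async function saveState() {
  // Store some key/value pairs; Dapr forwards them to the configured back end.
  await fetch(STATE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify([{ key: "order-1", value: { status: "shipped" } }]),
  });
}

async function getState() {
  // Read the value back; the call is identical whatever the back-end store is.
  const res = await fetch(`${STATE_URL}/order-1`);
  console.log(await res.json()); // -> { status: 'shipped' }
}

saveState().then(getState);
```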
Then we get Azure Spring Apps. Java — J2EE, Java Enterprise Edition — is one of the most popular languages, and Spring provides a number of benefits on top of just regular Java. What Azure Spring Apps does is deploy a full Spring Cloud on top of AKS. So again, it abstracts the AKS away from me, but it is sitting on AKS. And there's both an open-source-software version and a VMware Tanzu version — two different versions of this — across three tiers. (Come on.) If we look at those tiers really quickly, we can see there's Basic, for tyre-kicking and individual dev/test, at a certain price per hour; there's Standard; then there's Enterprise. Enterprise uses VMware Tanzu instead of the open source, but realize that if I'm using Enterprise, I have to bring, worry about, and deal with the VMware Tanzu license — it takes me over and I go and buy the Tanzu piece. Obviously there's different supportability across these tiers, but I do get to pick exactly what I need.

So what does this look like? If we go back over to my whiteboard — over here, if that was Azure Container Apps, then in a slightly different color, Azure Spring Apps. Once again there's an AKS hidden away behind the scenes, but what it's deploying for me is Spring Cloud. Once again, there's both the open-source-software version and the VMware Tanzu one, where I need to bring my own license for Tanzu. And this is based on the idea that, hey, I have J2EE, but when I'm writing my application there are often things that would help me — frameworks that make the developer more productive. There's the whole model-view-controller pattern: there's a model of the data, then the view takes that data and makes it usable by the end client, and there's a controller — the user interacts with the controller, which then goes and updates the model. What the Spring Framework does is provide a lot of those capabilities, that model.

Then what they created was Spring Boot. Spring Boot built on the Framework, but made it self-contained and added autoconfiguration: I can have annotations — really, metadata about the application — that automatically go and bring in the libraries and frameworks required. So if I'm doing serverless or microservices or a web app or a cron job, it has templates to help me just get up and running, so I'm not rewriting all of the code, and it simplifies all of the dependency management. And then Spring Cloud sits on top of Spring Boot. It uses Spring Boot to package all the bits, but now it's really focused on cloud-native microservices. It has service discovery, so services can register and find other services. It has Spring Cloud Gateway, so I can do traffic splitting — that A/B 80 percent/20 percent, the blue-green. It has a configuration service to externalize the configuration of my app and centralize it into one configuration store. So we have all these different capabilities built on Spring.

The idea is that if I'm using Spring as part of my Java application, Azure Spring Apps simplifies all of that deployment for me. Instead of me worrying about those components — deploying Spring and Spring Cloud and all the other things — it's a managed service. And if I pick the Enterprise SKU, it's a joint managed service by Microsoft and VMware, so I get full supportability of it.

So as we saw, many things are actually built on top of AKS. Finally, something not built on top of AKS: App Service plans. This was actually the original PaaS for Azure — when Azure first came out, it didn't have VMs, it had app services. This is hosting for web-based applications: HTTP, and gRPC for Linux app services using HTTP/2. In the old days we would hear talk of, is it a web app, is it a mobile app, is it an API app? That's gone away: it's a web application, and if I want to do an API or a mobile app, I just bring my own components for that. But this is a huge area of innovation from Microsoft — they're still putting a huge amount of work into App Services — and it supports a huge number of runtimes and languages that I don't have to maintain. I say, hey, I want it to run on Node.js, or I want it to run on .NET, whatever that is, and I just bring my code to that environment. It supports Windows and Linux, including containers. It's a little bit interesting when we go and look at one of these, so let me jump over super quickly and go to my app services.
Once again I've got badfather up and running, but first, if I just created a new one: notice I've got a whole bunch of options. It can be code, or it can be a Docker container (so I can deploy an image), or it can be a static web app, which we'll talk about at the end. If it's a Docker container, is it Linux or Windows? And then I pick the image. If it's code, well, is it .NET? Is it Go? Is it Java? Is it Node.js? Is it PHP? Is it Python? Is it Ruby? I've got all these different runtime stacks I can leverage. And notice I can do availability zones, depending on which plan I use. So there are different plans with different capabilities, but I don't worry about anything here — I literally just deploy my code.

So I created my badfather web app. Notice there was a Free tier if I just want to play around, so I've got my little Free tier — it's obviously limited in the amount of bandwidth et cetera — and I literally just pushed my code. It's just a Node.js file, so you could FTP it, you could push it with DevOps; there's a VS Code extension which makes it really easy to do. Now, this is going to take a second. Remember, this is Free, and I have not hit this any time recently, so it has to go and fire up an instance on the back end. If I were talking to it fairly frequently this wouldn't be an issue, but because I haven't, it now basically has to warm it up — go and provision some stuff behind the scenes, make it available — and then my terrified son will suddenly show up. (I probably should change this at some point.) So that's App Services, and while that's warming up we'll carry on and come back to it.

It can run containers too. If it's a Windows app service, code just runs natively, and if it runs a container, it's obviously a Windows container. If it's a Linux app, the code actually runs in a Microsoft-managed Linux container, or it can be a bring-your-own container — containers are actually pretty predominant here.

I have a certain number of nodes, and I can do autoscale. This comes back to: what is the plan? Here we can see the different plans, and I'm using the Free one. With Free I can have 10 apps and I get a certain amount of disk space, but it's fairly limited. There's also Shared, then Basic, then Standard. Standard is when I get autoscale — that same idea that, hey, I can now change the number of nodes based on the amount of work being done — and I can have up to 10 instances, whereas with Premium, or Isolated for high performance, I can have up to 100. And obviously you see there are other types of functionality I get as part of those.

If I look at my badfather app — there's my app service, and I get that same app: once again, my poor son being terrified over various things (gotta love it). And the way this is working is there are nodes, where we create an App Service plan.
Now, my App Service plan is what has the resources. The plan has a certain number of worker nodes. The thing with App Service is there are other components — there are some front ends, there are some file services — but they're multi-tenant, in a way. Whereas the worker nodes are unique to you: they're your worker nodes, they're running your stuff. And then within my App Service plan I deploy apps. So say I've got multiple apps: my app one runs on however many nodes are in the App Service plan, and if I had a second app, it's using the same nodes. It's the App Service plan that has the resources, and whatever I deploy into that plan shares those resources. So if I were worried about an app being greedy and taking more than it should, I shouldn't put them on the same App Service plan — I'd use different App Service plans if I need some isolation between them. Otherwise they're sharing those resources, and once again, what I can have here is autoscale — that horizontal autoscaling — as long as my plan is greater than or equal to Standard. So I have this ability to have multiple apps running in an App Service plan; right now I'm using Free, and I've just got the one running in there. Multiple applications can deploy to the same plan, but they're going to share the resources — that's the key point.

There is actually an app migration service that will go and look at, for example, hey, I'm using IIS, I'm using .NET or Java, and it can help pull that into an App Service — there are a lot of technologies out there to help you adopt this. I can actually run this on Azure Arc as well. Remember, Arc is extending Azure outside of the Azure cloud: if I have a Kubernetes environment that's CNCF-conformant and I've made it an Arc-enabled Kubernetes, I can then bring App Services down on top of it — App Services, Functions, Logic Apps can all run on an Azure Arc-enabled Kubernetes environment. So an important point: I don't have to run this only in Azure; I can run it anywhere I have Arc.

I can actually scale up and down as well — we can change the size of the nodes upwards. It doesn't actually resize the VMs in place; it creates new VMs of the new size, moves the workloads over, and then deletes the old ones. In addition to scaling in and out, it has deployment slots. Deployment slots are useful if I think about a staging environment. Imagine this is my app one, and this is my production slot — so this is production. But I've got a new version coming out. What I can do is create a staging slot: it's still app one, but it's staging. I can deploy my code to staging — it's using the same resources; the App Service plan has the resources, and apps will always scale across all the nodes — warm up the code, any caching it needs, any just-in-time compilation, whatever it needs. And then, when I'm ready, I can do a VIP swap — a virtual IP swap. What was production now becomes staging, and what was staging now becomes production. So it's really nice to flip that over, and I can also roll back really easily: if I've broken something in this code, I can VIP-swap back again and go back to the old version of production. Now, there are some assumptions in there about what upgrades I've done, where the state lives, and everything else.
But on the surface, I can flip back and forth between these very, very simply. So those are deployment slots, and again, the plan controls whether I can use them and how many I get.

I have different virtual network integration options — and again, this depends on the plan; not all plans can do virtual network integration. Think about it: I have a vnet, say over here, and there are two directions of virtual network integration. One is things talking to the app. The best way of doing that is a private endpoint, so I can create a private endpoint that talks to one specific app back in the App Service plan. Maybe I create a different one, PE2, to talk to another app — I'd need a private endpoint for each app. What about the other direction — if the apps want to talk to things inside the vnet, or to things the vnet connects to? For that I have to have a delegated subnet. This is the regional vnet integration, and the App Service plan goes and takes over that subnet, so every app in this App Service plan would use it to reach the vnet and anything it's connected to. And they've done some updates in terms of the scope of this as well, so things like connected and peered links work with it now. So this enables me to integrate with things on the virtual network — because, again, the front ends are a shared space, so by default it's not running in my virtual network. I have to do certain integrations to make it work with my virtual network: private endpoints for traffic to my service, regional vnet integration so the apps can go and talk to things the vnet is connected to. I can do service endpoints as well, so on the firewall I could limit traffic to things only coming from certain subnets, but today most of this is really done with private endpoints.

Then there's App Service Environment as well. I talked about the idea that there are a bunch of shared components — well, it doesn't have to be that way. So, an App Service Environment: if that over there is regular App Services, think of the same sort of virtual network (I'll draw it bigger). Once again we have a vnet, and what I do now is delegate a certain, bigger subnet to something called an App Service Environment. It's a different beast. What this does is remove the shared multi-tenant components: it deploys everything into your environment. This is v3 — ASEv3 made a lot of changes to the architecture: there's no stamp fee, and it uses the Isolated v2 SKU. So this deploys into that particular subnet, and this integration is for the data plane, so things talking to my app services go via my virtual network. Within the App Service Environment I still break it down (what color did I use?) and create one or more App Service plans inside my ASE — my App Service plan 1, and I can have an App Service plan 2 — and within each App Service plan I still go and create my apps. And again, there'll be multiple nodes actually making this up, using the Isolated SKU. But a key point is that it also has a connection to a Microsoft vnet for all of the control-plane traffic. It used to be a problem on ASEv2 that if I applied certain network security groups, certain restrictions, I could break the App Service Environment, because it couldn't get to things it had to be able to get to.
And customers didn't like that: (a) they could break it, and (b) they had to let things through that maybe they didn't want to let through. So now it takes that away. The internal stuff it needs is on a split plane: the control plane goes to its own vnet that I don't see, and only the data plane goes into my network. That's what ASEv3 does. And because it's running in my vnet, I don't have to use regional vnet integration, I don't have to use private endpoints — it's just running in my environment. And again, there's no separate stamp fee anymore; it's just using the Isolated v2 SKU, I create those instances, and all of my apps run directly on that. So that is ASEv3.

Now, one of the things you may wonder at this point: remember I said I can run containers on App Service, and I'm running containers on Spring Apps and Container Apps and Kubernetes and Azure Container Instances — what am I supposed to use? And it boils down to: what is my use case? If it's a single little container for a short-term thing that appears and disappears — no orchestration, no autoscale — sure, I can use Azure Container Instances. If I want something more standardized, I'm used to Kubernetes and I want that hands-on control, that integration: hey, I can use AKS. You know what, I really just want to get some microservices deployed, and I'm using Dapr, maybe Envoy, maybe KEDA, and I don't want to know about Kubernetes: Azure Container Apps. I'm using Java, I'm using Spring, Spring Boot, Spring Cloud, that's my focus: Azure Spring Apps — I don't have to worry about the rest. I'm used to Microsoft, I'm focused on Microsoft, I understand Azure App Service: great, use that — I can run containers on that as well. There's not necessarily a right or wrong; some of them are obvious — I'm all-in on Spring, so Spring Apps might make sense; hey, I'm using Dapr and Envoy, so Container Apps. But it may come down to other things: obviously Kubernetes is a CNCF standard, so it's usable anywhere, highly portable, whereas App Services is Azure — it's Arc as well, but it's Azure-centric. If I'm used to App Services, if I like the simplicity — because this is simpler than Kubernetes — hey, maybe that's absolutely the right solution for you. So that's how I think about it when there are these different options: which one should I pick? There's often not a right or wrong; it comes down to what I'm really trying to do.

OK, we're coming into the final straight. Azure Functions. This is all about serverless, and, funnily enough, I can actually run this in an App Service plan, or it can run purely on consumption, where I'm only charged for the work I'm doing. And before I talk about Functions, let's just set the stage a little bit about the whole point of serverless. Let's go over here. When I think serverless, the key point is there's some work I want to do, and there's going to be some event that triggers it. That event could be a schedule — at a certain time or a certain frequency — it could be some type of message coming in, it could be a RESTful API call; it could be a million different things. But the whole point is there is something that triggers my work. And then there might be other services that I write to, talk to, or read from, and these are called bindings — or connections, depending on which technology I'm using.
So there's something that triggers me to do something, and then there are other bindings or connections that I might read from or write to as well. Now, there are many things that can raise these events — a huge number of sources. If I think of event sources, there's just a massive number: I could write a blob to Azure Storage, there might be a new key in Azure Key Vault, some event happens in Azure Kubernetes Service, something happens in an App Service, some activity happens in a subscription — the list goes on and on. And it would actually be fairly hard for me, as an app, to know about those. I'd have to do a hammering poll: is there anything for me? Is there anything for me? Is there anything for me? Which is horrible.

And so what we have is Event Grid. Event Grid connects to all these different event sources, and then it has event handlers, which it is responsible for delivering to. There are different types of event handler. It could just be a webhook — webhooks are one of the ways we can talk to other things, so via a webhook I might talk to Azure Automation, I might talk to a logic app (which we're going to talk about); there are others too. It can also natively talk to Azure Functions, which we're going to talk about as well. It could go and integrate with some queue. There are other things it can do, but the point is: it's pushing. I don't have to poll; Event Grid pushes the event to these event handlers. And when you think about these serverless things we're talking about being triggered by something, it's very common that they get triggered by Event Grid pushing that event to them. So understanding that is pretty important — I'm going to leave that up there for a second.

So, Azure Functions is a serverless technology. It can run on the resources of an App Service plan, or it can run on a consumption basis, where I get billed for what I use. It gets triggered by some event, and it can bind to additional inputs and outputs. It supports a huge number of languages — if we look really quickly at the languages over here, I can see C#, Java, JavaScript, PowerShell, Python, TypeScript, all in different versions. There are actually different versions of the Functions runtime as well; we're going to focus on 4. So there are lots of different languages I can use for this, and when I create my function apps — I create a function app, and I have functions within it, each with some event trigger. Mine was PowerShell: when I think about wanting to run PowerShell now, I don't use Azure Automation, I actually prefer Functions. I can have the same triggering — I can do it on a schedule, I can trigger it on some event — but it's just code. It's just code I'm writing, and I guess that's the point. When I think about my serverless options, realize that with Functions, it's code. If I jump over to my function here: yes, it's many languages, but it is code — I am writing code. That's the whole point of this. So that's Azure Functions. It's hugely powerful serverless — I just pay for what I'm using — but I am writing the code to do that. So that's one option.
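As a flavor of what one of these looks like — my demo function was PowerShell, but to keep the examples here in one place, this is a minimal sketch in Node.js using the v4 programming model (the function name and schedule are placeholders):

```javascript
// Sketch of an Azure Function with a timer trigger (Node.js v4 model).
// The event here is a schedule; it could equally be an HTTP call, a queue
// message, or an Event Grid push.
const { app } = require('@azure/functions');

app.timer('nightlyCleanup', {
  schedule: '0 0 3 * * *',          // NCRONTAB: every day at 03:00
  handler: async (timer, context) => {
    context.log(`Timer fired; past due: ${timer.isPastDue}`);
    // ...the actual work goes here...
  },
});
```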
Another serverless option we have — and you actually get a certain amount for free — is Logic Apps. Logic Apps is graphically based workflows: if I think no-code or low-code, I get a nice graphical interface that I just drag bits of logic onto. I drag connectors on, and I pay for different types of connector, but I don't have to know any coding. It's actually now built on Azure Functions — it uses the Functions runtime, so anywhere Functions can run, I can now run Logic Apps in the same place. So it's serverless, I pay only when it's running, it runs on Functions, it's initiated by some event, and there's a huge number of connectors and templates to use.

But the whole point of this: if we go and look at one of these — remember, Functions was writing code — I actually wrote the same thing as a logic app, and this is the designer. This one is shutting down a bunch of VMs with a logic app. You can see the different steps based on what I was doing: if I select "get VM detail", there's the detail — this was a GET method. There are also things like, hey, I can parse JSON; it just has native functionality to do that for me. It flows down: I can have for-eaches, I can have conditions, I can deallocate within a for-each step. I can say only run this step after the step before, and if the step before was successful go this path, or if it wasn't successful go this other path. So I basically just drag and drop components onto this canvas based on what I need to do — that was me just creating it, adding in those various components. There are templates, there are connectors — it shows the different connectors available to me that I can leverage — but it's just this really nice graphical view. I don't have to be an expert in coding at all to leverage this. If you ever hear about, for example, Power Automate: Power Automate is built on Logic Apps, so it will seem very, very familiar.

And so then we get to the point that, hey, there's also Logic Apps: workflows that I create, really no- or low-code. I don't have to understand some syntax or programming language; I can just create things by dragging them onto a canvas, and there's a whole bunch of templates available that I can just use. So with serverless there are different options depending on whether I want to code or not. On the decision of Logic Apps versus Power Automate: Power Automate is probably what I would use if I'm doing something for myself; Logic Apps if I'm doing something for my company or my department. That's really the boundary of those technologies.

Finally — and this is a newer one, but it's actually really nice — Azure Static Web Apps. It provides globally distributed hosting for my pre-rendered content, and that's the key point. There are different SKUs available (what did that do? let's try this again — it drew something): there's a Free and there's a Standard. I'm using the Free for a whole bunch of different things. There are limits, so Free is for hobby and personal projects: you get 100 gigabytes of bandwidth per subscription, you can have two custom domains, and I get SSL certs for free — really nice capabilities. With Standard I can have custom authentication, it can integrate with Azure Functions, there are SLAs. These are really cool features. So if I look, for example, at mine really quickly — if I look at my static web apps, I've got a whole bunch of them, some of them used for redirects, but my Learn Azure one — if you go to learn.onboardtoazure.com, my curated set of learning — is running on an Azure static web app.
It's just pre-rendered content, and this is the key point: it is pre-rendered, there's no server-side brain required. So if we think about what we're doing here — an Azure static web app is pre-rendered. That content could be HTML; remember, HTML can have JavaScript in it, and that JavaScript could have been created through React, could have been created through Angular, which basically compiles down into this. It could be custom style sheets, it could be images, it could be a whole bunch of other stuff. But the point is, on the machine that's serving it, there are no CPU cycles that have to get used to generate it — I guess there are network packets to be served up, but it's really storage. There's no server-side compute required to generate these pages; it's a file, pre-generated, and it has global distribution. Now, it is its own distribution — this is not using, like, Azure Content Delivery Network — but there is an option called Enterprise Edge: if I turn on Enterprise Edge, it uses Azure Front Door for the distribution. So that's an option I can enable if I want that larger scale, closer to even more people. But it just serves up this pre-rendered content.

Now, one of the other things that's really nice: what if I need a little bit of server-side processing — go and check a database, work something out? Azure Static Web Apps will actually integrate with managed functions. So I can have these managed functions, and what I'm using them for is: I need some bit of server-side work done, maybe to go and look something up or generate something, and it makes it transparent to my code. In my JavaScript, I can just talk to /api/ and then the name of the function. It integrates natively with that, so there's no cross-origin resource... whatever CORS stands for. It's also got integrated authentication, so I could use GitHub, I could use Twitter, I could use Azure AD, and if I authenticate with any of those, it will go and populate that credential into a header it gives my code, so I don't have to worry about any of that authentication — it populates that for me straight away. And it creates these functions for me: if I've got my Azure static web app and I define the function in VS Code, it will actually go and create the function and that integration for me just automatically. So in my JavaScript I can just do fetch('/api/name') — that's all I have to know; I don't have to think about anything else. So, cross-origin resource sharing — that's what the R stands for — it's just going to work. Now, my function has to be listening on that full path, /api/ plus its name, but I just get these managed functions. I can bring my own functions as well — there's a blade where I can add those in, and I'd get those same benefits. It's a really great way, if I do need a little bit of server-side processing, to very simply integrate it without having to worry that it's some different site name or different fully qualified domain name, which would normally cause me problems with resource sharing from a different origin.
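From the static site's JavaScript, that call really is about as minimal as it sounds — a small sketch, with "message" as a placeholder function name:

```javascript
// Sketch of calling a Static Web Apps managed function from the site's JS.
// Because /api/* is served from the same origin, no CORS setup is needed.
async function getMessage() {
  const res = await fetch('/api/message'); // routed to the managed function
  const data = await res.json();
  console.log(data);
}

getMessage();
```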
The other thing Azure Static Web Apps integrates with is Git. So if this is GitHub, or this is Azure DevOps: when I commit my code, it will automatically push it to my Azure static web app. What it's doing behind the scenes is creating a GitHub Actions workflow or an Azure DevOps pipeline to push my code, so when I do a commit to the branch, it will do any building required and it will do the deploy. I don't even have to think about setting anything up: I pre-render my content, I commit it, and it goes and brings it into the Azure static web app for me. It even understands things like pull requests: if I do a pull request, it will go and create a special variation of my app with the pull request number in it, so I can go and see what it would do. I can create branch-named versions of my static web app as well, so I can keep consistency between them. There are even things like hybrid rendering, where it can split the path so some requests go via a function — that's a bit more advanced, but they're building those capabilities in as well. So it's integrated with managed functions, and it's got the DevOps integration.

And that was it. We covered huge amounts of stuff, but hopefully it actually made sense, and it builds on the idea that, hey, containers are really about virtualizing the operating system — I still get that resource control and management, and I get the isolation of namespaces, and the images. We build on that to bring orchestration with Kubernetes and AKS. Then there are other things on top of that when we think of microservices — like Dapr, and KEDA, and Envoy for the networking — and I could bring those myself and put them on AKS, or I can use Azure Container Apps. If I'm using Spring for my J2EE apps, well, there's Azure Spring Apps. App Services: maybe I'm not using containers, I just bring my app and whatever runtime I'm using and get a lot of great functionality — or if I like App Services, I can bring containers and run them on that. It's simpler, and if I don't need portability to any CNCF-conformant Kubernetes, I can just run App Services. And then for serverless, there's some event that drives it: hey, I want to write code, I use Functions; I don't want to write code — low- or no-code — I can use Logic Apps. Then Azure Static Web Apps for that pre-rendered, globally distributed content, and if I need a bit of server-side processing, it can hook into those managed functions.

So that was it. As always, I hope this was useful, and I'll see you in the next video.
Info
Channel: John Savill's Technical Training
Views: 13,951
Keywords: azure, azure cloud, microsoft azure, microsoft, cloud, app service, paas, platform as a service, containers, docker, kubernetes
Id: bMlkYLX4ZfQ
Length: 130min 38sec (7838 seconds)
Published: Tue Feb 07 2023