Azure Functions for .NET Developers – Everything You Need To Know

I'm part of the engineering team on Azure Functions, and I'm Matt Anderson, a program manager on the Azure Functions team, and today we're here to talk about one of our favorite things, which is .NET on Azure Functions. We'll go through a couple of different aspects of it; we'll cover the agenda in a moment, but just a reminder: please do the session survey. We love feedback, and we'll put the link up again at the end. We're going to do a quick lap around Functions in case anybody here isn't super familiar, make sure we cover all the bases and what the concepts mean, and then show how you start bringing in all the goodness of .NET's most modern capabilities and apply that to Azure Functions. Hopefully a lot of it feels very familiar as we go. We've got some slideware, but we're going to spend a lot of time in demos, and then hopefully leave plenty of time for Q&A at the end.

So when we talk about Functions, you're going to hear this word "serverless" come up. Serverless is really a continuation of a trend that's been going on in the cloud of increasing abstraction, so that you're able to spend less time focusing on pieces of infrastructure that aren't unique to your product and aren't the value you're bringing in what you're trying to deliver. It's about owning less and
focusing your attention on what really matters to you; that's the ultimate goal of the serverless concept. So we try to move away from infrastructure where we can. Obviously there are going to be places where you still need some; there are plenty of pure serverless apps out there, but a lot of apps are a mix of serverless pieces and more traditional technologies where, yes, there's still some operations work, but for the serverless pieces you have less to worry about. You're able to react to all sorts of events and have the infrastructure scale for you; that's not something you have to reason about, it happens in response to the load the system is seeing. And in a lot of the models it's pay-per-use, so you're only spending money when the code is actually running; if it's not doing anything, it's not on your bottom line.

Functions as a service, the general pattern here, is one aspect of serverless. There are plenty of other technologies as well, but it does encapsulate the general mission and the reason this trend has caught on. You have functions with a single responsibility: they do one thing, they're not trying to capture the entire world in one function. You break your app into discrete components that are easier to reason about, manage, and test. They're short-lived; we're not running super long-lived workers or holding major amounts of state, and we tend toward stateless. They're typically more I/O-bound than CPU-bound, although you can go in both directions. And the scalability, making sure that that aspect of infrastructure management is out of the way, is a huge part of it as well. So that's the general space. When we talk about a function, it's some event that causes the function to be invoked, a piece of code
that actually does something with that event, and then perhaps some outputs; that's the general gist of it. So we have a couple of different concepts. Triggers are the things that cause your function to run: that could be a message going into a queue, an image being uploaded to blob storage, an HTTP request, a timer event, those types of things. We're going to be talking exclusively about .NET here; your code is just a standard .NET process, but we also have some things to facilitate working with those other services, and you can always do anything that works well for how you build your apps.

I mentioned scaling and responding to events. Ignore a lot of the words on this slide; the general idea is that you start at zero, with no code running because there are no events coming in. Events come into the system, the system scales up to make sure you're able to process them, and when it burns through the backlog it scales back down. Again, you're not paying for it if it's not running, and that's one of the key things for a lot of folks when they look at serverless and Functions: you can frankly see some good cost savings if your load is spiky. If you're getting constant load at certain volumes, that model doesn't necessarily provide as much benefit, but in general the elasticity, being able to react when suddenly you've got a surge of traffic you weren't expecting and have the system just handle it, is a real value of these types of systems.

People use it for a lot. Sometimes you see fully serverless applications; sometimes you see a single function added to, say, a web stack, where it's just doing one thing and providing utility. A surprisingly common example is image resizing on upload: have a website that takes image uploads, do the resizing operation in the background,
put the result in storage, and let the web app serve it from that point forward. So you can start small or go all in; either way is valid, but it's a handy tool you can apply to a bunch of different situations.

That's enough extolling the virtues of Functions; let's assume we want to do it. How do you get going? Make sure you have the Azure development workload installed in your Visual Studio, and from there we're going to jump over to VS and make it happen. We have a few demos we want to share with you all today, but we figured we'd spend a little time, before we jump into the demos, showing the experience starting from the very beginning: if I need to create a project, what does that look like? A lot of times when we jump straight into a demo, some of that gets a little hazy.

So I'm starting in Visual Studio to show the flow end to end, and I'm just going to hit Create Project. I happen to have Functions as one of my recent project templates, but if you have the workload installed, you're going to see it as one of the project templates supported out of the box. As I go through this flow I can name my function, and I'll have a few options to choose from. .NET 7 is what I want to use today, the latest supported .NET version, so I'm going to go with that. For our overview, let's stick with a queue trigger; I'll have a couple of other demos showing HTTP endpoints, but let's pretend we want to create something that will respond to a message being sent to an Azure Storage queue. For the demo we'll just use the local emulator, so we can leave
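If you prefer the command line over Visual Studio, the same file-new flow can be done with Azure Functions Core Tools. This is a sketch; the project name is arbitrary, and the exact template name string may differ slightly between Core Tools versions:

```shell
# Create an isolated-worker .NET project (directory name is just an example)
func init QueueDemo --worker-runtime dotnet-isolated
cd QueueDemo

# Scaffold a queue-triggered function from the built-in template
func new --name QueueFunction --template "Queue trigger"

# Build and run locally (same runtime bits that run in Azure)
func start
```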
the connection string empty here, since your local experience will default to the emulator. For the queue name we'll just use what we have in this dialog, and with those clicks I have a running local project. I have a function set up that's showing a pattern and some features we'll dive into a little deeper later in the demos, but it shows dependency injection, taking a dependency on the logger factory, using logging, and a queue trigger setup pointing to the queue we defined when we created the project.

If you look at the project structure, it's very similar to what you'd see working with a standard console app, and that's one of the things we want to drive home today: if you're familiar with .NET, if you work with ASP.NET Core, if you work with .NET applications for different types of workloads, all of those skills transfer. We've been working very hard to make sure there's good alignment with the rest of the ecosystem, that it fits well and uses the same set of abstractions and APIs available in the other .NET stacks you're familiar with. The primary difference is what you can see on screen: the attributes that define the actual trigger and the function itself as part of that handler.

If I switch to the entry point of the application, you see how this is all bootstrapped, and you have full control over it. You can configure how the application is bootstrapped using the same set of APIs you'd use in other application types: service injection, more advanced features that require access to the builder, you have access to all of that here. In the model we're looking at today, you own Main, you own the process, and you have full control over the set of dependencies brought into the application. That's how the
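For reference, the piece that makes the empty connection string fall back to the emulator is the generated local.settings.json. This is a config fragment showing the template defaults, not anything deployment-specific:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
  }
}
```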
application is structured. Then we can run this as a quick validation; let's put a breakpoint here so we can see something actually happening. I now have the app running using what we call Core Tools. This is integrated once you install the Functions workload in Visual Studio; Core Tools will be automatically there and automatically updated for you. It's using the same runtime that runs in Azure, so this is not an emulation; it's the exact same set of bits, giving you really good fidelity with what you'd expect the application to do once you finally deploy it.

For the demo we'll use Storage Explorer, pointing at the local emulator as I mentioned, so I'll just pick out the queue that our app is using and add a quick message to it. We're using the storage emulator here, but especially for different event types you may not always have an emulator, so when working locally a lot of folks also connect directly to an Azure-hosted resource. We'll just send a message, and what we should see is that the message gets picked up; we're seeing the queue trigger being executed here. I didn't have my breakpoint set, so let me redo that. If we refresh, we'll see the message has been consumed. I'll send another message that says "hello", and we'll see the breakpoint hit with the content of my message. Pretty straightforward, pretty simple, but hopefully that helps you understand how to get started if you wanted to create a project right now. Again, when I selected queues in the project dialog, we have a lot of different starter templates: you can start with HTTP triggers, queues, blobs; we support a number of services and integrations out of the box that you can just leverage as part of
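The generated project described above looks roughly like the following sketch (the function name and queue name are the template defaults I'm assuming; the attribute shapes are from the isolated worker model):

```csharp
// Program.cs: you own Main; the host bootstraps like any console app.
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()   // registers the Functions worker
    .Build();

host.Run();

// QueueFunction.cs: attributes define the trigger; the body is plain .NET.
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class QueueFunction
{
    private readonly ILogger _logger;

    // ILoggerFactory arrives via the worker's dependency injection container.
    public QueueFunction(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<QueueFunction>();
    }

    [Function("QueueFunction")]
    public void Run([QueueTrigger("myqueue-items")] string message)
    {
        _logger.LogInformation("Queue trigger processed: {message}", message);
    }
}
```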
your function. With this out of the way, the next step would be to publish. That's the last leg, right? I have this working, I've built and debugged it locally, and now I want to get it into the cloud. You have the ability to do it straight from VS: you can publish directly to Azure, and by following those steps you'll provision resources and do everything you need to get the application running. But of course we also support more advanced CI/CD pipelines and deployments that integrate with your repo and your application lifecycle management, and VS will help you set those up as well.

With that context, I want to jump into some of the demos we've prepared for today. We wanted to show some .NET features the team has been working on; some of them are a bit newer, and some of them may be unfamiliar even if you've been working with Functions for a while, so we wanted to highlight them and make sure you're aware of them. I have two different projects here; we'll start by looking at this HTTP Download project. If we look at the entry point of the app, it's very similar to the application we just created with File > New, so not a lot of changes happening here, except one. A small change you'll notice is that we're enabling some additional web application integration. What that means is that we're bringing in a deeper integration with ASP.NET Core: there's a new extension you can reference that gives you access to a richer set of types, the same set of abstractions you use when you're working with an ASP.NET Core app. We can see that when we go into the function implementation; if I collapse this, you see
I'm using the HttpRequest type that you may be familiar with if you're experienced with ASP.NET Core, and I'm using some more advanced features here with the blob input, where I'm actually taking a blob stream. Now that I've mentioned inputs, it's worth explaining the concept of bindings that exists in Functions. There are three different types of bindings. You have triggers, which are primarily responsible for running your app; a trigger is what you define as the event source for your code. In this case we're using an HTTP trigger, so HTTP requests will cause this function to run, and in the previous example you saw it was a queue trigger. In addition to triggers, we have inputs, which you can have many of as part of your function. Those are any additional data you want passed into your code when the function is triggered; Functions will make sure those inputs are resolved and that, when your code executes, those references are passed to you.

Matthew talked a little about the abstraction that serverless provides over the platform and servers: you don't have to worry about patching machines and maintaining the platform yourself. This is another set of abstractions, fairly unique to Azure Functions, over the services we integrate with out of the box. It gives you the ability to work with services like Azure Blob Storage. In this case I'm not using anything directly from the service, I'm not using the SDK directly; all I'm doing is taking a standard .NET Stream and saying, hey, when you get this blob, just give me the stream, I just want to consume it. In that spirit of owning less, we're not actually going through any of the mechanics of dealing with the service; we're dealing with the data, which is the actual thing we want this
function to operate on. We're trying to encapsulate as much as possible so that the only lines of code we're writing here are, hopefully, the more valuable ones. So I'm going to go ahead and run this so we can see it in action. Actually, one additional binding type I forgot to mention as I was going through is the output binding: in addition to your inputs, as a result of your function running you can define a set of outputs, and those provide the same level of abstraction on top of the services we integrate with. For this function, if you notice, I have a reference here to a blob in a container called "runtime", and I'm just using this blob name. That's a .NET 8 preview, I guess, is what it is; we'll need .NET 8 here at some point, so we figured we might as well create a function that brings it down from a container in storage, again using the emulator.

So we have the function running, and this shows a simple but very common scenario in Functions: the ability to very efficiently respond to an HTTP request with a payload that's coming from storage. If you go back to the application graph here, one thing I did want to show is that this is processing that data very efficiently for you, which is something the team has been spending a lot of time on: making sure that communication with these underlying services, even in the more recent isolated model, is efficient, and that you can easily meet the requirements for these types of workloads. Just to put a point on that: your memory graph topped out at what, 40 megabytes? Yes. Whereas the payload itself was something like 200, so we weren't allocating additional memory or anything like that. One thing that maybe I
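A minimal sketch of the HTTP-triggered function with a blob input binding being described here. The container name "runtime" comes from the demo; the blob file name and the exact response type are my assumptions, not shown on screen:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;

public class HttpDownload
{
    // The trigger starts the function; the BlobInput binding resolves the
    // blob and hands us a plain Stream, so our code never touches the
    // storage SDK directly.
    [Function("Download")]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        [BlobInput("runtime/dotnet-sdk-preview.zip")] Stream blob)
    {
        // Stream the payload straight through to the HTTP response,
        // avoiding buffering the whole 200 MB payload in memory.
        return new FileStreamResult(blob, "application/octet-stream");
    }
}
```

Returning a stream rather than a byte array is what keeps the memory graph flat at tens of megabytes for a payload many times that size.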
glossed over is that the billing isn't just based on time; it's actually memory over time, so the amount of memory you're consuming also matters in that model. Things like this translate directly into savings, which is something we want to make sure of: we're not creating additional memory pressure. This also points to patterns for your application. If you're dealing with a bunch of different event sources or data sources (we'll talk a little about things like connection pooling in a bit), you don't want to over-allocate and create additional pressure in your system. There are a few things like that that are good to watch for in Functions. So in addition to the programming model, using the ASP.NET Core abstractions and the additional types we can support for bindings, this shows a lot of the performance work that's been done by the team. With these models you get throughput enhancements beyond what you would expect: in the traditional Consumption-based scenarios it still benefits you, but if you're moving toward any of the dedicated or Elastic Premium plans, that's where you see a substantial difference from this kind of performance improvement.

The other demo we wanted to show highlights some additional .NET features in the context of Functions. We have this HTTP client factory project, and we'll start with that. What you'll see here is that we're demonstrating basic dependency injection usage in Functions; in this case we're using the standard HttpClientFactory functionality. This is identical to the way you would use the same feature set in any other application type: if you're using HTTP client factories in ASP.NET Core or in a console app, the code will look identical to this. It's the same set of APIs, there's nothing Functions-specific about it; you're registering those
services the same exact way. If I shift to the function, we can see how it's actually taking in those dependencies and making use of those services. We're taking an ILogger that's been injected, and we're using the HttpClientFactory. When the function is eventually executed, you'll see that we have the request being passed in, we're taking the cancellation token for the invocation, and we create a client using the standard APIs available to us from the HttpClientFactory. We registered a named client, so we're retrieving a client that's specific to this function by name. That gives us basic configuration, centralizing the way you configure those clients where you register them with the factory, and importantly it means you're getting connection pooling, which is one of the things people run into a lot in these highly elastic environments. You're going to have tons of invocations potentially happening all at once, and that's great until you exhaust all of your ports, or you effectively DDoS one of your own downstream dependencies because you have so many connections being opened against it. So DI is the same everywhere, but oh boy, it's an important thing to remember when you're dealing with Functions. This highlights both dependency injection and proper management of the client: if you look here, this function isn't dealing with lifetime management for that HttpClient instance at all. It's all using the best practices that have been put in place; we're just leveraging the standard .NET capabilities, again trying to emphasize that your .NET skills transfer and apply here. Once we get that client, it will be disposed of when the scope of this function is finished, and we are
downloading a payload; we expect this payload to be HTML. We're just using the base URI that was configured for that client, so we're looking up the exact resource we want and passing the result back as part of our response. Let's run this function. For this example we've got a client that's pointing at the docs, and we're using that to pull information back and resurface it. So this shows the function calling a downstream API. The sample is maybe a bit contrived, but it shows a very real scenario that we see customers use in Functions all the time: it's very common that, as part of your function, you're communicating with downstream APIs, and this shows how to do that efficiently, getting some content and passing it back to the client.

You might have noticed when I ran this that I have another function defined here. This is a super simple function that's just going to return a greeting back to me; if I run it, I get this greeting back. If we look at how this function is implemented (I'll keep it running so we can see the different things it can do), it's similar to what we've seen before: we're passing in a logger (in this case we're not actually using it), and we're simply looking at the request to see if we have a query string with a name associated with it. If I pass my name in, I return a more personalized greeting, and otherwise I return a very general greeting; that's what we see here. The way I'm doing that is, again, standard .NET features: I'm using resource files, resx, pulling the resources and returning the greeting that's appropriate based on the context, based on the data that's been passed to my function. You may be wondering,
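The named-client pattern being demoed looks roughly like this sketch. The client name "docs" and the base address are my assumptions standing in for whatever the demo project actually configures:

```csharp
// Program.cs: register a named client once; every invocation then reuses
// the factory's pooled connections instead of opening new ones per call.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        services.AddHttpClient("docs", client =>
        {
            client.BaseAddress = new Uri("https://learn.microsoft.com/");
        });
    })
    .Build();

host.Run();

// ClientFactoryFunction.cs: the factory is injected; the function never
// manages the HttpClient lifetime itself.
public class ClientFactoryFunction
{
    private readonly IHttpClientFactory _factory;

    public ClientFactoryFunction(IHttpClientFactory factory) => _factory = factory;

    [Function("ClientFactory")]
    public async Task<string> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        CancellationToken cancellationToken)
    {
        // Retrieve the named client; its BaseAddress is already configured.
        var client = _factory.CreateClient("docs");
        return await client.GetStringAsync("azure/azure-functions/", cancellationToken);
    }
}
```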
okay, the sample we looked at before seemed a little more complex; it seems like we've taken a step back here. But we wanted to highlight another capability available to you in Functions, which is the ability to register middleware in the application's invocation pipeline. Very similar to what you can do in ASP.NET Core, where you can register middleware in the HTTP invocation pipeline, you have the ability to do the same thing for event-based workloads with Functions. If we go back to my bootstrapping code and expand the first few lines, you'll see that we're using the same builder pattern, the same pattern for dependency registration and middleware injection, to inject Functions-specific middleware. This is middleware that's given a context that is not HTTP-centric; it works for any event type supported by Functions.

Let's take a look at what this middleware is actually doing, because this particular middleware is looking specifically for HTTP-triggered functions, using some of the newer APIs that give us access to the ASP.NET Core features. We have the context I described, which carries event information for any event type, any of the bindings, any event source you're using; for this example, we're getting the HttpContext associated with the invocation. Before we leave that point: the function context is also something that can be passed into your functions. It gives you information about the actual triggering context, so it's a generally useful object to know about and reference, in your function code or in the middleware itself. That's a great point. In this case, we only want to
deal with HTTP requests, so we're checking: do we actually have an HttpContext associated with this invocation? If we do, do we have this additional query string? We're trying to do something different here; maybe we have a pattern where our clients pass a query string telling us there's a different culture they want applied to the request. If that's the case, we just apply it to the current thread's culture and UI culture; again, .NET features that have been around for a long time, and we're showing how to leverage them from Functions. Some folks also use serverless and Functions to integrate with existing solutions and code bases; middleware is a great way of translating workloads, matching schemas, or getting things into the right shape so they interface well with, say, a legacy library you have, while still presenting whatever newer, modern application you're building. Again, this highlights how I can leverage .NET features you're probably pretty familiar with. If you look back at my function, it's not doing anything to deal with culture; there's no logic here to identify the language. That's all based on the functionality available to me from .NET, plus the fact that we injected this middleware that sets the context before the function is invoked. So if I run this and actually pass that query string (I gave it away earlier), automatically the resource resolution gives me a localized version of that string, a greeting that matches my preferred language. I didn't know that "Azure Functions" was just "Azure Functions" in Portuguese, so you'll learn more than just Azure in these talks.

One of the things we want to highlight is that this middleware
is a great way to address cross-cutting concerns: anything you don't want to repeat as part of every function definition, build a middleware and you can handle a lot of it there, keeping your functions simple, single-responsibility, owning less. So if we go back to the initial function we looked at, the one calling that downstream API to get the Functions overview document: now that we have the middleware injected, we can essentially do the same thing. I can call this client factory function but pass the same culture query string; the middleware will set up the same context, and that's going to return a version of the content that matches, as long as the downstream API supports it, because we set up the context when we build the HTTP client used by the function. Again, the function didn't have to be modified; the code is exactly the same. Just by having the middleware there, we're taking care of that cross-cutting concern, the logic that influences the behavior of the function. If we look back at how the client was set up, we're simply taking the culture we have in the context and setting it as part of the default headers, the Accept-Language header, for that HttpClient.

We have a quick question; do you want to ask that real quick? So the question was whether we can look at the middleware again, and how it gets invoked. For the middleware you register, we also provide a set of APIs that let you register conditional middleware, which can depend on the trigger type or some logic you add. In this case we're using the builder pattern and injecting that piece of middleware when we call this UseMiddleware method, passing the middleware type.
This injects the middleware into the pipeline used by Functions, so from that point on, every invocation dispatched to a function goes through this middleware chain before it reaches your function. Ordering here matters: the same rules and logic that apply to ASP.NET Core apply to Functions middleware registration as well. As you register middleware, they're executed in the order of registration, forming a middleware chain. And if you jump back in there, you'll notice the `next` delegate, pointing to the next middleware that's going to be invoked. This gives you the ability to change your app's behavior: this middleware could run some logic that decides to short-circuit the invocation. If I want to do that, I simply don't call `next`; the same things you can do with ASP.NET Core middleware apply here as well. For HTTP scenarios, people use this for things like authorization and filtering; for other trigger types, it's sometimes data format checks: is the serialized message in the queue in the right format and encoding, is it what we expect, is it missing properties? You do some of those validation tasks in the middleware and then let the function assume a certain amount of integrity in the data that's coming in. From here, I just want to show some of the properties and methods available to you on the context that's passed into the middleware. There's a lot you can do to inspect the function being invoked: the bindings that were used, features associated with the context, the actual function definition, scoped services, and so on. Does that help? This shows
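Putting the pieces described above together, a culture middleware might look like this sketch. The middleware contract (`IFunctionsWorkerMiddleware`) is the isolated worker's real interface; the query-string name "culture" and the registration shape are my assumptions based on the demo:

```csharp
using System.Globalization;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;

public class CultureMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Only act on HTTP-triggered invocations; GetHttpContext() returns
        // null for other event types, which then pass straight through.
        var http = context.GetHttpContext();
        var culture = http?.Request.Query["culture"].ToString();

        if (!string.IsNullOrEmpty(culture))
        {
            var info = CultureInfo.GetCultureInfo(culture);
            CultureInfo.CurrentCulture = info;
            CultureInfo.CurrentUICulture = info;
        }

        // Not calling next would short-circuit the invocation; here we
        // always continue down the chain.
        await next(context);
    }
}

// Registration in Program.cs (ordering relative to other middleware matters):
//
//   .ConfigureFunctionsWorkerDefaults(builder =>
//   {
//       builder.UseMiddleware<CultureMiddleware>();
//   })
```

With this in place, the resx-based resource lookup in the greeting function picks the localized string automatically, with no culture logic in the function itself.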
the registration of the middleware, how the logic applies to the different functions that we have defined here, and how they react differently based on how the logic is handled outside of the function logic. Okay, so we've looked at how a function project is actually laid out, and we've gone through some of the features that we have there. Maybe also note, if we hit the csproj, there are a few previews going on in there as well; this is all some of the newer stuff. So we've got our project going, we're ready to take it to Azure, we have that right-click publish, or CI/CD pipelines if we're being a little safer. Where does it go? We have a couple of different hosting options for functions, and the two main ones I want to highlight are the Consumption plan and the Elastic Premium plan. The Consumption plan is exactly the model that we were describing earlier, to a T. It scales down to zero, that's where we're starting; as events come in it scales up, deals with them, and then scales back down. You're paying for the amount of time your code is running and how much memory it's taking as it does. There are some limits there that help guide things; like I said earlier, we want short-lived functions, and you also don't want a runaway bill, so execution is bounded to five minutes, which can be raised to ten, and things like that. Those are some things to note as you're evaluating different options, but Consumption is where most people land; it's the default, what people go to with Functions. If you need additional networking integrations, or if you don't want that cold start coming up from zero to serving the first instance, you can have a pre-warmed instance; those types of things are exposed through the Elastic Premium plan. So typically what we recommend is: start with Consumption, see if it meets your needs, but if you see that there's something that's kind of
lacking there, the Elastic Premium plan is a really good option for getting a few more of those controls, blending that event-driven scaling with the kind of instance management you might be more familiar with from something like an App Service plan. And speaking of which: if you're familiar with web apps, App Service, App Service plans, App Service Environments, these are all concepts that work in Functions. Ultimately, the resource that you're working with in Azure and in Azure Resource Manager is Microsoft.Web/sites; it's all the same. So if you're familiar with those products, a lot of things already work. And if you have App Service plans sitting around that maybe aren't using all of their resources, running functions on those is a good way, especially if they have affinity with those web apps, of going into that pool of resources you're already paying for without paying extra for the extra site. So those are the main ones that I'll flag as the premier options, but there are certainly some other options we can talk about. One that's newer: as part of Azure Container Apps, we have the ability to run functions in there as well. That's currently in preview, so a little bit newer, but it provides a really nice option if you're looking to integrate functions with other container-based workloads. In fact, a pattern that's fairly common, especially if you need a long-running task of some unbounded length, is to use functions to kick off a job that then takes a longer execution time, and then have other functions that respond to the output. The other thing to note is that so far I've talked only about managed services, where you say: here's a function app. We do have other options as well. Functions is open source, our runtime is available, and we publish base images, so if
you want to take those and throw them into any Kubernetes cluster you happen to have lying around, there are a bunch of different ways that you can run functions in various places. Those options are all great, and we're happy to have folks doing that wherever it makes sense, but typically we point people to start with a Consumption plan and then evaluate how that works in terms of informing other options. Start there. But then, once we've started using functions, perhaps we're getting more of them, we're having lots of them, and they might be part of overall orchestrations, chains of logic that we have to deal with, where our actual business logic might sometimes be kicking in between the functions. There's a lot that can be challenging about doing orchestration logic yourself. Durable Functions is an extension that makes this way easier. Specifically, Durable exposes new triggers and bindings that allow you to define the orchestration itself and checkpoint it, so that it is tolerant to restarts, failures, and all sorts of things like that, and keeps that orchestration state as it works itself through multiple different functions, which might be spread out doing lots of things. I mentioned the time window for an individual execution; your orchestration could be much longer. This leads to a bunch of patterns that you could absolutely write the code for yourself, but that can get quite complex very quickly. My favorite example is the fan-out/fan-in pattern that you see on the screen here. It's one thing to say, hey, I'm going to fan this out and call a bunch of these other functions; I'm going to put a message into three queues or whatever, and they're going to go off and be processed. That's fine. Now, at the end of all that, for every execution that came out of any of those queues, I need to make sure that once all of them are done
then I can move on to the next step. That's the part that actually gets a little bit harder. Durable makes it easier by exposing the orchestration as a function itself. So here's that fan-out/fan-in example that we were just talking about. You'll notice that in the attribute there it's an orchestration trigger; that's one of the things brought in by Durable, and it exposes a context object that gives us a few other methods and things that we'll talk about in a moment. For this one, we're just doing a generic example to match the diagram we just saw. We're calling this context.CallActivityAsync, so we're just invoking another function that's part of the Durable setup here, and then we're looping through and doing that middle chunk where we kick off a bunch of them in parallel. The magic here is the Task.WhenAll: a standard construct that you might have bumped into plenty before, but it's allowing us to say, hey, this orchestration is actually going to checkpoint itself and wait until all of those are actually done, and then it will proceed to the next step and fire off, in this case, that third function, F3. I mentioned checkpointing: this is one of the things about Durable and basically where it gets its name. If at any point things go wrong, it can checkpoint and restore itself. So once it does that first CallActivityAsync, the state is stored and that work batch object is known, and the orchestration trigger actually goes to sleep when the activity gets called. We run through up to the first activity call, kick that off, the orchestration goes away; the orchestration will eventually wake up once that activity is completed, replay its history, and continue from there. This also means that if you have certain mission-critical processes, you can do things like error handling and poison queue management, things like that, which
encapsulates some of the fault tolerance that you might have needs for in your overall application. It gets advanced pretty quickly, but the nice thing about Durable is it makes a lot of those things way simpler. This is code that, for me, is way more friendly than if I tried to work on the exact same task myself. So we just want to make sure everyone's aware of Durable, but we also want to highlight that it, and a bunch of the other things we've been talking about, have been going through a lot of evolution. Specifically, we've been making Functions work a lot more tightly with the latest features of .NET and the general ecosystem, as well as the patterns that are common there. That's hopefully something we've covered a good bit today, and if you're familiar with Functions, if you've been working with us for a while, this might be really interesting to you. There's a lot here that has changed, and we are pretty pleased with the way things integrate with the broader .NET ecosystem, and we're excited about where that's going going forward. There are a ton of features here that you may not even be aware of if you're a long-time Functions user; we want to make sure folks are flagged on those. There's a quick question. Yeah, sure, going back to the last slide: the question was about how going through the tasks in parallel actually retries. So the retry... oh, I just accidentally hit the cable that's giving us our presentation, stand by. This is a great example of a fault getting injected and our ability to recover from it; that's part of the orchestration logic of this presentation. But kicking off the multiple tasks is the actual goal. In some cases you might do this for, say, order processing, which I think is the classic orchestration example people go to. If you have to do sub-fulfillment, going out to different things, that's represented as your business logic. So the task
here is defined in the actual application domain, and the work batch is just how we're accomplishing that notion of: hey, I've gotten through function one, so the order has been submitted, and now I'm doing some fulfillment tasks that are being kicked off. Let's call them approvals, or completion tasks for somebody to go do elsewhere in the organization, or whatever. We're basically then saying, hey, I only care about moving forward once all of those are done. So maybe another framing of that example: perhaps I need approvals from three different people. I'm going to submit a request for an approval to all of them, and only once everyone has approved can we move on to the last bit. We're defining that as the core business logic. The checkpointing occurs so that when somebody doesn't approve and throws an exception, let's say, I don't have to start over from the very beginning and deal with prerequisite steps. We're able to pick back up and do retries: okay, they didn't approve, but we just need to kick off the next bit. That could be either in logic that you're building into the application, or there are actually ways for Durable to let you interact with your orchestration state as well. Yep, the checkpointing is built in so that it is sort of transparent, but yeah, that is a feature of Durable that makes it so that when you have these complex orchestrations, they're resilient to different types of things that can happen. And I framed that as a business example in terms of why things go wrong, but certainly network hiccups happen, or data is in the wrong format for a given case. So that kind of error handling, being able to recover from it, especially when you have much bigger orchestrations than this, can be very critical. So yeah, another
question. [Inaudible audience question.] Yeah, so the question here is about the Premium hosting plan as a means of dealing with longer-running functions, versus something like this. I would say look to something like this if you're breaking it into discrete activities that themselves fit within that window. Or, let me reframe that: the statement was that you might have cold startup time in the Consumption plan. That's where I was saying, hey, going from zero to one instance is going to be the most performance-impacting part, where we have to load in all of your code, all of our stuff, things like that. You did say minutes; it's certainly not that anymore. There were times where that might have been the case, I know, fair enough. Cold start is something that absolutely exists and absolutely matters for a certain number of applications, but part of the performance improvements that we've been doing have dropped that pretty dramatically, so again, if you're familiar with Functions, that's worth looking at again. That is one of the things Elastic Premium can help with, because you already have it warm, with an instance that's already been provisioned for you. Right, for your specific case, I think Elastic Premium might be a good fit if you don't have the ability to break things up into smaller functions, but any place where you have the opportunity to refactor or break things up, that's where this is extremely helpful. Now, it's worth mentioning that this is one way to do multiple functions operating in concert with one another; it is certainly not the only way. In some cases, if you don't need some of the behaviors of Durable, there are other options, like using a message queue, having a queue trigger like we saw before. But it is something that we find, when folks need Durable, is truly a huge asset, and it's very frequently
something that folks point to as the big thing for a lot of their applications, so we just want to make sure that everybody's aware of it. There's plenty of good stuff there; we talked about one pattern here, but there are plenty of others, and I think our docs are pretty good on that one, actually. So Durable is something to definitely learn more about; there's a lot of helpful documentation on scenarios and which patterns you would use Durable for, depending on scenarios and requirements. And for cold starts: our baseline today is under a second. You should see a function start up and serve an initial request in under a second. If you're seeing anything beyond that that you can't account for, by all means, we want to hear about it. We perform daily measurements; that's an area of intense focus for the team, and it's an area that's seen a lot of improvement in the past three years. The baseline is certainly quite a bit under a second today. It's worth mentioning, probably mostly for HTTP triggers: cold start is sometimes also not as scary as it may sound, even when it is large, just because some of your functions are responding to events; they may actually be asynchronous background tasks. It absolutely depends on your application and what performance characteristics you require, but for many, if it's asynchronous and happening in the background, that extra little bit of time is not a major concern. With HTTP, though, if you have a client actively waiting, especially if it's a human user sitting there waiting for a page to load or something like that, that's where it really, really matters; that's where some of that tuning is especially important, and where you might want to be looking at something more like Elastic Premium. But for many workloads, it is something that's there and you should be aware of,
but it doesn't necessarily have to be a major factor in plan evaluation if you're not doing anything that has a client actively waiting for it. [Audience question.] Okay, so just to recap really quickly: there were three parts to that. One was that you're seeing differences between, say, Linux and Windows, and it's actually something to note that within those hosting options you have different OS choices and a few other considerations you can tune as well. It does depend on the stack, but yeah, the cold start numbers might be different, probably particularly on the Consumption plan, between Windows and Linux; that's something to be aware of for sure. Then within that, there were two more things. One was the async pattern in Durable; if you jump back one slide, please, the middle-bottom one there. This is a really good pattern, actually. The idea is that you would acknowledge the request and return an Accepted status code, and Durable provides basically something that can go in the Location header to allow whatever client it is to poll back and say, hey, is that task done yet? So your client is not waiting on a standing HTTP request that might get timed out by a middle tier or something like that between you and the function; instead, the client is actively monitoring it. It's better for UX, spinners, anything like that. So that's a very powerful pattern: you've returned Accepted, you've enqueued the work, and you know that work is asynchronously being processed. That's one. And then the third point you raised was about the App Service plan again. If you're looking at something like Elastic Premium, that's a great choice as well; a lot of the same characteristics:
you're not getting the event-driven scaling, but you are getting certain baselines, more control over the actual hardware class that you're running on, and some of those characteristics, and it's always on, sure, yeah. Premium will bring the best of both worlds and give a little more flexibility there. So, a few different options there; I hope we covered all of the questions in full, just checking with the folks who've been asking to make sure we're good. Jumping back: we've hit on a few of these features that have been iterations over Functions, so again, if you're a long-standing user, a lot of these might be new and worth taking a look at. We highlighted a few of them, and some things have been moving from preview into GA status, general availability, with full support. We're pretty pleased with how Functions lands in the broader .NET ecosystem and how it's able to take advantage of some of the newer things happening in .NET. One other comment on cold start, something that might not be clear: a lot of the performance improvements we were describing will make the per-instance performance significantly better, and even where that doesn't have a significant impact on the overall cost at the end, because in the Consumption environment you're scaling more and you're not paying per instance, it does reduce the number of cold starts, and it does enable us to more densely pack workloads into a single instance. So those things at the end of the day still matter a lot and bring significant benefits, overall performance benefits. Yeah, and as you lean up your functions, you'll see that compound on itself because of factors like that. It is truly, like we mentioned, those best practices: single responsibility, own less, make sure you're managing your connection pooling, keeping your memory profile low. Those things add
up for those performance characteristics as well. So if you are liking what you're seeing here, if there are things that you're looking to upgrade, from existing versions of Functions or existing versions of .NET, one thing we also want to highlight is the .NET Upgrade Assistant. Even if you forget everything we've said about Functions, this is worth knowing as a .NET developer. It's an extension in Visual Studio that allows you to move across .NET versions; it helps do some of the scaffolding and code management changes to address the differences between .NET versions, as well as, in Functions, any changes that might have to occur there. It's not going to do everything for you; if you're moving from an old version of, say, the Azure Storage SDK, you'll need to do some work to bring it up to the newer versions, but it gets you a lot of the way there, and we highly recommend this tool. It's kind of indispensable, especially as we look toward the next major version of .NET, being able to very quickly migrate older projects forward. Super valuable, so we just want to make sure folks know about it. And I mentioned this before, but Azure Functions is open source. If you want to take a look at our GitHub repos, all of this code is there; that's where we engage with the community. Please feel free to file issues there, and we accept pull requests. We have a good ecosystem of .NET developers all working with Functions, and that's where we centralize things. Lastly, there are lots of good resources; the community provides plenty as well, and I'm sure you can find lots of good stuff out there, but Microsoft Learn has a lot of great content. We're updating it constantly, especially as we roll out lots of new things, so please, by all means, check those out. And with that, I think we're just going to do Q&A for the rest of the session, so thank you
very much. If you wouldn't mind filling out the survey, we do appreciate it, but otherwise, thank you, and we'll take questions. [Applause] [Audience question.] Absolutely, so the question was: can you self-host, maybe start something on-prem and then take it into Azure later, or run it on behalf of other customers? Absolutely. Because we have the runtime itself open source and available, and the base images published, we absolutely see folks doing that. Most of the time you're probably going to be interacting with Azure services as part of it, just for the event sources and things like that, so there is sometimes a kind of implied bridge to Azure through that, but yeah, people absolutely happily run this on their own stacks all the time. In fact, we did a collaborative project with Red Hat to bring the event-driven scaling to Kubernetes as well; it's a project called KEDA. So we have a lot of folks who take the Functions base images and run them in Kubernetes with KEDA, so they get the event-driven scaling but on their own infrastructure, through the orchestration layer of their choice. And that's a fairly common approach for a lot of customers today, particularly in hybrid scenarios where they want to run some of the workloads on-prem and some of the workloads in the cloud, but the team wants to use the same programming model; they leverage Functions and they leverage those different hosting options. Yeah, and you know, we have IoT devices here; we've also got functions running on Raspberry Pis out there, which I think is fun. But in general, the programming model is very flexible. Functions tend to be net-new applications in a lot of cases, but they're also really good at helping stitch together workloads, and if you need to bring together any existing assets, they're not bad for hybrid situations where we have some on-prem assets,
some in the cloud, any sort of bridging solution; sprinkling some functions throughout your infrastructure topology can be really helpful in those situations as well. Yes? Right, so the question was: if you have App Service already, when would you choose Functions versus WebJobs? Just to define that: WebJobs is actually an aspect of App Service; there are a couple of different varieties of it, but they fill a similar niche. In fact, Functions was actually born out of the WebJobs project originally. So if you're using WebJobs, that is absolutely a fine place to be, especially when it's part of the application payload itself that you're deploying to the web app; when those are very tightly coupled to those web apps, that can be a much better solution. But for a lot of situations, especially when you're dealing with needs for these more elastic workloads, instead of having it running on those dedicated plans, being able to have this more event-driven scaling and only bring up those assets when needed tends to be one angle. HTTP scenarios, too. Yes, you do have the ability to combine a web API with some of the other WebJobs triggers and bindings, but that is where the limitation is: Functions will provide that out of the box. So you have the ability to leverage the WebJobs programming model with an HTTP-based setup for APIs, and that's where you've seen a lot of tooling investments as well, the hosting options, the scaling behavior; but the more serverless approach will be Functions. Again, if you're using WebJobs, there's nothing wrong with that, and you can continue to use it, particularly if you already have App Service and you're leveraging that. With that said, we are here presenting Functions, and we do think in general that it's a better model to be putting most work toward if you are looking at a new project. And
if that decision is less clear, we are indicating Functions tends to be the better option for most users; I would only point somebody at WebJobs if they were already familiar with it and have those needs. When I say better, I guess I'm conflating a few different things. It's not necessarily the performance aspect as much as it is some of the ways of doing error handling; the monitoring solutions are a little bit more robust in general. Monitoring is by far one of the things that comes up quite a bit with serverless in particular: we've defined small bits of single-responsibility functions, that's great, but then you suddenly have a bunch of different components, so now you're not dealing with a single monolithic thing where your tracing is more managed; you now have distributed tracing concerns more broadly, and things like that. So especially if you're dealing with lots of them, monitoring is a good example. There are a few others in terms of usability and where some of the future investment is. I probably mentioned the Service Bus trigger; there are a couple of others that are more unique, and the Durable piece in particular is unique to Functions as well. So there are a couple of things where there's a bit more feature richness on the Functions side; that's where a lot more investment has been. And some of the modern features, like middleware and some of those other things you've seen come to the Functions .NET experience: even though that builds on top of WebJobs, for a lot of that stuff you're not going to have it readily available in WebJobs. So just in terms of future breadth and starting off on the right foot, we don't want users to be in a situation where they start a solution and then run into a limitation; that's counter to our goals. So we think Functions
is just a safer bet for getting a new project started, but there's nothing wrong with the WebJobs answer. Yeah, the comparison was just made against .NET Framework, and just to clarify, both support both Core and Framework; but again, if WebJobs is meeting your current requirements and you already have that investment there, there's nothing wrong with leveraging the plans you're already paying for. That is just fine. Other questions? Did we do that good? There's got to be more. There we go. Yeah, so the question was throttling, and dealing with a downstream resource that you might be overwhelming. That will depend a little bit on the trigger type that you're using. Different triggers will have different concurrency knobs available to you, things like controlling the batch size or how many messages you can process concurrently, and that differs when comparing things like Service Bus, Event Hubs, and Storage Queues; we provide documentation on what those options are. There are also limits that you can apply at the function app level that constrain how far the platform will scale you out before it stops. And there is a significant amount of investment happening right now to improve that and provide even more fine-grained control over your concurrency, modeling how many functions you want to see executing in parallel, exactly for that reason: you want to protect the downstream resource. We will have better controls in place that will enable you to more granularly control how many executions you have. And the model that we showed, in terms of connection pooling and using DI-registered clients, is also a big deal, especially for certain types of downstream resources that care a little bit more about their connections. I'm thinking SQL; SQL loves to fall over when you throw a bunch of connections at it
really fast, and a lot of them. So that's one place where it can help to balance some of that, where it's a little bit more stateful as well. The other thing is that some of the application patterns we were discussing can factor in as well. If you do get throttled or something like that, you can do back-off strategies, either through enqueuing new work to be picked up by something that's going to do the retry; Durable can be used for some of that. But also, even just with messages: if there's something odd about a queue message that causes it to fail, we might put that into a poison queue, and that's just a general pattern for, hey, this wasn't able to succeed for some reason, we need to go back and do it, either as part of some cleanup batch, maybe a timer that goes and evaluates those, or it just gets re-entered back into the thing that would kick off that function. You mentioned the trigger type, would you say, with queues or Service Bus? Yeah, so Service Bus also has all the different message settlement type things, which is something that, in the .NET model, we're actually evolving right now. The trigger will provide some of the configuration knobs that you can tweak to define how many messages you want to see, how many functions you want running at the same time within a given instance, and you can apply some of those more app-level global controls that will limit the scale-out behavior the platform will apply. It's a combination of those; they essentially add up to the maximum number of concurrent invocations you have. And then from there, any error handling from that downstream is just a matter of figuring out what makes sense for how you handle retries, really. Other questions? We've got a few more minutes. If there are no questions, then... oh, one more.
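To make the per-trigger concurrency knobs concrete, here is a sketch of a host.json that caps concurrency for a couple of trigger types. The exact property names and defaults vary by extension version, so treat the values as illustrative:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxConcurrentCalls": 16
    },
    "queues": {
      "batchSize": 8,
      "newBatchThreshold": 4
    }
  }
}
```

Per instance, the storage queue trigger here would process at most `batchSize + newBatchThreshold` messages concurrently; multiply by the number of scaled-out instances for the effective maximum against the downstream resource. Scale-out itself can be capped with an app setting such as `WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT` on the plans that support it.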
[Audience question] about the in-proc and isolated models, which we didn't really touch on, but they're two different models for .NET applications, isolated being the slightly newer one that has more of the features we were showcasing. Looking at .NET 8, the question was: what's the thing that needs to be looked at? Do you need to look at isolated? It was specifically dependency injection and how you're handling configuration injection. Right, so you have a configuration builder, you're adding custom configuration sources. For .NET 8 specifically, at RTM time, you will see that support coming to the isolated model first. In fact, the beauty of the isolated model is that really you're just changing a TFM in your csproj, so if you want to be using the .NET 8 previews right now, at least locally, you'd be fine; you just wouldn't be able to publish it to Azure and run .NET 8 there yet. The key thing about the model is how it decouples you from the version of .NET that's being used by the host itself, so it gives you a lot of flexibility to be either ahead of or behind the host, depending on what your requirements are. You can run isolated today using .NET Framework, so we do have full support for .NET Framework in that isolated model. If you're looking to move to .NET 8 at RTM time, exactly when it comes out, like hours after the announcement kind of thing, then that upgrade will be a requirement; you will need to move to isolated, to that model. Luckily, the Upgrade Assistant will hopefully be of help: you can point your existing projects at it and it will take them to the isolated model; it will replicate a lot of the logic, you just need to massage some APIs that the Upgrade Assistant may not be aware of. Part of the trend here also is that you mentioned overriding the configuration, things like that: isolated removes a little bit of that, in that it does it a
little bit more with the traditional config-building approach, so you're more directly in control. There are still a few things that are loaded by default, but there's less, I guess, how would you put it, there's less danger, I suppose, in interfacing with it; the behavior is very consistent with the rest of the .NET ecosystem. It's the same behavior you would expect from other application types: you have full control over your dependencies, configuration, injection, the configuration sources; all of that will work identically to the other application models. So if that's something you're considering, just in terms of how you want to handle your upgrade strategy and so forth, it's certainly better to get ahead of it, so that when the day comes for .NET 8, it's just that TFM switch. But yeah, we think the configuration model there should be a bit cleaner, so if you're looking to get rid of anything you've had custom handling around, certainly, I probably sound like a bit of a broken record, but owning less is great; I love deleting code. So that is an advantage of the isolated model. We'll be hanging around here if folks want to come up and ask questions after we get off mic. Thank you all, really, thanks again. [Applause]
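For reference, the "just changing a TFM" point above amounts to a one-line edit in the project file. This is a sketch of a typical isolated-worker csproj; the package list is the usual minimum and the floating versions are illustrative, not a recommendation:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- The TFM switch: e.g. move net6.0 -> net8.0 when the host supports it -->
    <TargetFramework>net8.0</TargetFramework>
    <OutputType>Exe</OutputType>
    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <!-- Core isolated-worker packages; pin real versions in practice -->
    <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.*" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.*" />
  </ItemGroup>
</Project>
```

Because the worker runs out-of-process, this TFM is independent of the runtime version the Functions host itself targets, which is what allows being ahead of or behind the host, including targeting .NET Framework.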
Info
Channel: Microsoft Azure Developers
Views: 5,571
Keywords: Azure, Functions, Serverless, Computing, azure app service, azure developers, azure dev cli, azure functions
Id: 82QnxMp8PRY
Length: 72min 48sec (4368 seconds)
Published: Thu Aug 03 2023