Azure Serverless Conf - APAC

Captions
Hey everyone, welcome to Azure Serverless Conf. My favorite part about serverless technology is that it's so easy to get started. My favorite part is the ability to be productive without having to worry about all the underlying bits and pieces. My favorite part is that I can get started super quickly and I only pay for what I use: just let me write my code, build my integrations, spin up databases, and Azure, you take care of the rest. My favorite thing about serverless technologies is no infrastructure maintenance: you don't think about compute or storage, you just get started and start building your apps. I'm looking forward to connecting and engaging with our community and all of the viewers who join us today. I'm looking forward to hearing from people all around the globe about what they're doing with serverless, and seeing what I can learn to make my applications even better. Over the next 20 hours we'll have some fantastic live sessions showcasing many of the serverless technologies available in Azure, as well as a host of sessions you can watch on demand. Enjoy the event, and enjoy Azure ServerlessConf.

Hello everyone and welcome to Azure ServerlessConf. I've just had to check that my microphone was working; I'm going to assume it is, so give us a nod if you're there, Rahul, and you can hear me. My name is Aaron Powell and I'm one of your hosts today. I work for Microsoft as part of the cloud advocate group, and I'm excited about the content we've got today: a heap of different speakers covering all different aspects of serverless technologies, from the code side through to databases and everything in between. I'm also joined by my friend Rahul, who is going to be helping me with hosting duties. Rahul, say hi to everyone watching on the stream and introduce yourself.

Welcome everyone to Azure ServerlessConf. My name is Rahul, I'm a Microsoft Azure MVP and a group product manager at LogicMonitor.

So we've got a big lineup of sessions, and if you're joining us for the first time today, this is one of three streams happening over the course of about 20 hours. We've already had the Americas stream with a bunch of content, and those sessions will be available on demand afterwards. Later this evening, Australia time (time zones are hard), there will be a stream for the Europeans, so feel free to jump on for that, and there will also be a bunch of on-demand sessions on our YouTube channel after the fact. But enough of me talking; you didn't come here to hear from the two hosts, you came to hear from some speakers, so I'd like to bring our first speaker up. There we go, it's connected. Let me introduce you to Anna. Anna is a data scientist and database fanatic; she works as a program manager on the SQL Server team here at Microsoft, and she's going to tell us all about things you can do with SQL Server serverless, Azure Functions, Logic Apps, and everything in between. I'm really excited for this, because I love to see how databases can work in a serverless environment. Anna, welcome.

Awesome, thanks so much, Aaron. I'm super excited to be here, joining in from LA.
I think a lot of people are probably joining from a different time zone, and I definitely agree that time zones are hard, but thanks so much for having me on the show. I guess we can get right into it. And I was the first one to forget to unmute themselves on the stream, so we're off to a good start. You might be the first, but you won't be the last.

Great, so today I'm going to be talking about catching the serverless bus, which is going to include Azure Functions, Logic Apps, and Azure SQL Database, and since we don't have a ton of time I'll get right into it. You might be wondering about the catchy title, but that really is what this scenario is about. Let's go back to a time when we went to the office regularly (maybe some of you still do, but I don't). Typically, when you want to catch the bus, one of two things happens: either you get to the bus stop, wait, catch the bus, and get to the office; or, more often than not for me, you get there right as the bus is leaving, you run for it, you miss it, and then you have to wait for the next one to get to the office. That's less than ideal. So a colleague, Davide Mauri, and I said: most of us have phones, so what if we could use them to track our location and the location of the buses, get some sort of notification when the bus is close by, and catch the bus without having to wait or run for it, and then go to the office? That's the ideal scenario, and it's the scenario that started it all.

To give a sneak preview of where we're headed: it turns out a lot of counties actually release this data publicly. This is the King County Metro area, roughly the Seattle, Washington area in the United States, and they have a real-time feed that publishes a JSON document containing all the buses for all the routes in real time. That's the scenario we're exploring today. We're going to go fast and focus on the demos, because I think that's more exciting, but know that everything we do today is available as a learning path on Microsoft Learn. The best part about Microsoft Learn, in my opinion, is the free built-in sandbox: every demo I do is actually in the form of an exercise, and you can spin up these sandboxes several times a day for about four hours at a time, so you have lots of free resources to play with. Definitely check out Microsoft Learn and the learning path, which we'll fly through today; each piece touches on the database, the API, the app, and then putting it all together with GitHub Actions.

So let's get right into it with creating a foundation for modern apps, which to me often means selecting the back-end database service. When we think about the back-end database we want for this solution, we think about the overall solution requirements and then the specifics for each piece. Overall, we want a website for monitoring real-time bus locations, we want notifications when a bus is close by, and we want deployment automation, so that as we improve the application over time redeployment is smooth, a CI/CD type of pipeline.
Now, when it comes to available data, we saw that JSON feed a little earlier, and we'll also have a flat file containing some route information. If you think about someone using the application we're going to build, they're not going to want to monitor all of the routes, only the routes related to where they're trying to go.

When it comes to selecting a modern database for your application, even abstracting away from this specific scenario, it often comes down to a few things. You want to pay for what you need, when you need it; you've been hearing the serverless hype, but can it apply to databases? You want it to support your development workflows, so the tools you already know and use: VS Code, GitHub, Azure CLI, PowerShell, and so on. For this scenario, and for a lot of modern applications, we're sending, receiving, and processing a lot of JSON, so you need a database engine with great JSON support. Ideally it also has built-in geospatial capabilities, so you don't have to import a bunch of geospatial libraries and figure all that out yourself. And you want to pick a database that isn't so new and startup-y that it might disappear on you. For all those reasons (surprise, I'm from the Azure SQL team), I'm going to suggest Azure SQL Database, specifically the serverless option from the Azure SQL family, because it gives you those serverless capabilities and everything else you see here.

With that said, let's get into our first demo. I'll hop back over to the browser; for time's sake I went ahead and deployed everything using the exercises. What you can see is that I have this bus database, and it's currently a serverless database. What's nice about the serverless tier is that you set a minimum and a maximum amount of compute, and we auto-scale you in between to exactly what you need at a given time. On top of that, you can enable an auto-pause delay; here I set it to one hour, so if no one is using the application we separate the compute from the storage, and during that time you only pay for storage, which really helps you save on costs. That's a great capability, along with the auto-scaling, because you only pay on a per-second basis for what you actually need.

The other piece I wanted to show you is the geospatial support. I'm going to hop over to Azure Data Studio; this is a SQL notebook, and you might be familiar with notebooks, but let's get right into the example. Here I have some sample JSON that looks similar to our bus data, and what's great about T-SQL is that it has built-in functionality to take a latitude and longitude, just like we pulled out of this nested JSON, and store it as a geography value, according to a standard, as a single location.
If we take a look at what that looks like, it's one serialized value representing the location. If we want to get back to the point, the latitude and longitude you saw earlier, we can use ToString, and that returns the coordinates. There's a really cool OpenStreetMap playground you can use to test these points: if I hop over to it and paste in this point, we can check that this is indeed meant to be a sample bus point in Seattle. Seattle is over here, Bellevue is over here, the Microsoft office is right over here, and the bus point is just down here, so that's working pretty well.

Back in Azure Data Studio: we know we can store geospatial values, and the next thing is that you might want to do something with them. For example, we can use the STDistance function to find the distance between those two bus data points. We can also use a polygon: with a polygon we're essentially drawing a shape on a map and calling it a geofence. A geofence is useful in this scenario because we want to identify when a bus has entered or exited it, so we can notify our users. There are lots of other use cases too, like finding the closest coffee shop or doing any sort of routing, so the geofence idea is important to understand. We can plot this polygon, but I'll skip ahead to where it gets really interesting: you can also plot a geometry collection, that is, a collection of locations (points) and potentially a collection of geofences (polygons), all with T-SQL, using the STGeomCollFromText functionality. Even better, you can use STWithin to ask whether a location is within a geofence, and store the result. If we look at the results, we have just those two rows here; we can copy this one and see that SQL thinks it's inside the geofence. To validate, we hop back over to the map: sure enough, this bus is here, this is the geofence we're using for the session, and the point is inside it, so that checks out. If we check the other one, which SQL says is not in the geofence, we can plot it as well and see that it's indeed outside. So that's a really quick example, but hopefully one that helps you understand how Azure SQL Database can support serverless, JSON, and geospatial.
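To make that concrete, here is a minimal sketch of the kind of geofence check described above, run from Python with pyodbc against Azure SQL Database. The connection string, polygon coordinates, and query shape are illustrative assumptions, not the exact ones from the Learn module.

```python
# Minimal sketch: check whether a bus position falls inside a geofence using
# the geography type in Azure SQL Database, called from Python via pyodbc.
# Server name, credentials and polygon coordinates are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=bus-db;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

check_sql = """
-- Ring is listed counter-clockwise so the interior of the geography polygon
-- is the small rectangle, not the rest of the globe.
DECLARE @geofence geography = geography::STGeomFromText(
    'POLYGON((-122.14 47.62, -122.12 47.62, -122.12 47.64, -122.14 47.64, -122.14 47.62))', 4326);

SELECT geography::Point(?, ?, 4326).STWithin(@geofence) AS InGeofence,
       geography::Point(?, ?, 4326).ToString()          AS Location;
"""

lat, lon = 47.63, -122.13  # a sample bus position near the geofence
row = conn.cursor().execute(check_sql, lat, lon, lat, lon).fetchone()
print("bus inside geofence:", bool(row.InGeofence), "point:", row.Location)
```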
If we look at the architecture we have so far: we have the buses, and of course they have some sort of IoT device that's sending this JSON in real time; how it gets stored is the part we don't know yet. We're also going to store the route data in the database (we can import it with T-SQL or from Azure Blob Storage), because keeping it there lets us do comparisons and only track certain routes and certain geofences for our users. The next step is to figure out how to go get that data on a regular basis and update the database, and that brings us nicely to the second piece: serverless APIs with Azure Functions and Logic Apps.

Going back to our requirements: we already know the solution requirements, and the API requirements are that we need to get the latest data, identify buses in the monitored routes, and detect geofence activation, so we know when a bus enters or exits a geofence a user has decided to monitor. Once that happens, we want to send some sort of notification (we'll just do email) saying the bus is activating the geofence and they should probably go catch it.

For the API we're going to use Azure Functions. It's a great way to accelerate and simplify development because it supports a lot of different triggers; for this we'll use a timer trigger, but there are lots of others. Functions are really easy, really fast, serverless, and scalable; there's not a lot you have to know to get started, you can just deploy and start coding, and there are lots of templates and great support in the Azure Functions tools and the Visual Studio Code extensions, which you'll learn all about today.

So what do we want this Azure Function to do? Every 15 seconds, on a timer, we want to determine what the monitored routes are, get the latest bus data, and identify buses on monitored routes that are activating a geofence; for those two checks we'll actually use Azure SQL Database, because we saw that it lets us do that easily. Finally, we want to send an email notification for each activated bus. For that I could code up a custom connector to Outlook or Gmail inside the Azure Function, but I know about Azure Logic Apps, which has built-in connectors and is a great low-code/no-code platform where you can drag and drop and build things very easily. I'm lazy (I think a lot of developers tend to be, or maybe it's just me), so that's the route I decided to go, and I want to show you what it looks like.

Again, for time's sake I've already deployed everything, but what you can see is that I deployed an Azure Function app, using the code sample I'll share at the end, and it has one function in it called GetBusData. If I hook into the logs I can see some of the real-time results coming in; what's great about Azure Functions is that I could run this all locally without deploying anything, using the Functions extension, but for demonstration purposes I deployed it. You can see it's looking at all the bus positions: it found one in my monitored routes, but it didn't find any buses entering or exiting a geofence, so it's not going to trigger a Logic App. If it did, it would essentially call the Logic App with the bus route, the bus ID, and whether it's entering or exiting the geofence. That's where the Logic App comes in: when that HTTP request arrives with the bus route and the entering/exiting status, it sends an email.
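As a rough sketch of the shape of that function, here is what a Python timer trigger could look like, assuming the v1 Azure Functions programming model with the schedule defined in function.json. The feed URL, stored procedure name, result columns, and Logic App endpoint are placeholder assumptions, not the actual names from the sample code.

```python
# Rough sketch of the timer-triggered function described above.
# function.json would carry the schedule, e.g. "schedule": "*/15 * * * * *"
# (an NCRONTAB expression meaning "every 15 seconds").
import json
import logging
import os

import azure.functions as func
import pyodbc
import requests

def main(mytimer: func.TimerRequest) -> None:
    # 1. Get the latest real-time bus positions from the public JSON feed.
    feed = requests.get(os.environ["BUS_FEED_URL"], timeout=30).json()

    # 2. Hand the raw JSON to the database; a stored procedure (placeholder
    #    name) compares it with the monitored routes and geofences and
    #    returns any geofence activations.
    with pyodbc.connect(os.environ["SQL_CONNECTION_STRING"]) as conn:
        rows = conn.cursor().execute(
            "EXEC dbo.AddBusData @payload = ?", json.dumps(feed)
        ).fetchall()

    # 3. For each bus entering or exiting a geofence, call the Logic App,
    #    which crafts and sends the notification email.
    for route_id, bus_id, status in rows:
        requests.post(
            os.environ["LOGIC_APP_URL"],
            json={"routeId": route_id, "busId": bus_id, "status": status},
            timeout=30,
        )
        logging.info("Notified Logic App: route %s bus %s %s", route_id, bus_id, status)
```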
And how easy was this? I just found the Outlook connector, added those two values in, and set up who I want to send it to; I sent it to myself, since I'm the only user of this application today, but you can see how you might set that up for a larger application. If I hop over to my email account, you can see that over the past little while I've been getting email updates. It's getting into the evening in Seattle so there are fewer buses coming, but we can still see when a bus was entering my geofence, along with the bus information and whether it's entering versus exiting. Really cool, really simple, really quick demo, but remember, everything we have here you can get hands-on with afterwards.

So where are we? We'll take a brief pause to make sure we're good. We have the buses, with some sort of IoT device sending JSON data, which we use Azure Functions to bring into Azure SQL Database. Then we use a stored procedure in Azure SQL Database to say: of the routes you're monitoring, these are the buses on your routes, and these are the buses activating the geofence. For the buses activating the geofence, we send that information to an Azure Logic App, which you saw, and it crafts the email from the parameters (the route number, the bus number, and whether it's entering or exiting the geofence) and sends it. Hopefully you've seen how easy this has been to set up so far.

One of the cool things, which we won't see a ton of today (maybe at the end if we have time), is that we wrote all the code for the functions and the application in Python, Node, and .NET. So if you want to do it in a language you already know and love, or you want to learn a new language and compare the code side by side, you can. I didn't know a ton about Node (I still don't), but Davide wrote the Node version and I wrote the Python version, so we could look at them side by side and see what the differences are, which was an added benefit.

Coming back to the architecture, the one thing remaining is the website. It might be useful to look at the application and see, oh, there are some buses in the general area, or there are no buses coming toward me that I want to take, so maybe I should make a new plan, rather than just waiting for an email that says "hey, you should go outside." That was the reason for having some sort of website (you'll see it's very simple), and that's where Azure Static Web Apps comes into the picture. Going back to the requirements: we have the same solution requirements, most of them are covered (we haven't talked about deployment automation yet), and we do have the notifications when a bus is close by. For the web application we want a simple website that displays a map with real-time buses and geofences, and we want to be able to filter based on bus routes and geofences. (I never checked my internet connection, but okay, it looks like we're still live, so that's good.)
For this we decided to use a static web app. These are popular, and a lot of the frameworks in this space are growing in popularity, but the key thing about a static web app is that it doesn't require server-side rendering. To be honest, I was new to most of these frameworks (I don't do a lot of front-end development), but Davide and I decided to use Vue: we'd been hearing a lot about it, we wanted to try it out, and it's supported by Static Web Apps.

Then the question is how we're going to host it, and for that we decided to use Azure Static Web Apps. I think these are really cool because they combine the power of static web application hosting with Azure Functions: essentially, in one repository you have the code for an Azure Function and the code for your static content, and you can use GitHub (or Azure DevOps) pipelines and actions to automate your deployment, so we check all our boxes in one go. We haven't talked much about GitHub Actions, but I want to briefly mention the GitHub Actions workflow for Azure Static Web Apps, because most of it is built for you: when you deploy an Azure Static Web App it takes care of creating the YAML instruction file, and the only thing you have to supply or change is where in the repository your client (front end) is located and where your API is located.

As for how data moves through this stack: the client UI essentially uses JSON to call the back-end API, which executes a stored procedure and returns data from the database all the way back up the stack to the front end. The two important inputs it takes are the route ID and the geofence ID, because remember, we don't want to monitor everything, just the things our user cares to track. The front-end application (I'm not showing all the code right now) is super simple: it basically just calls the back end and plots the data on a map.

One thing to keep in mind with Azure SQL Database (and other database providers) is that the connection string changes based on the language you're using. I call that out because it sometimes trips me up: I use the wrong connection string, I can't connect to the database, and then I'm just stuck. Most languages have great libraries or packages for connecting to Azure SQL Database, but make sure you get the connection string right. And as I mentioned, the back-end API basically just calls a stored procedure; this is the stored procedure we call to return data from Azure SQL Database, because the database is maintaining the state and understanding everything going on in the system, and we can specify exactly how that JSON is returned so the application can display it very easily.
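Here is a hedged sketch of what that back-end API could look like as a Python HTTP-triggered function. The stored procedure name and parameters are illustrative, and the assumption that the procedure returns its result as a single JSON string (for example via FOR JSON) is mine, not confirmed by the talk.

```python
# Sketch of the Static Web Apps back-end API described above: an HTTP trigger
# that takes a route ID and geofence ID, calls a stored procedure, and returns
# the JSON the front end plots on the map. Names are illustrative.
import os

import azure.functions as func
import pyodbc

def main(req: func.HttpRequest) -> func.HttpResponse:
    route_id = req.params.get("routeId")
    geofence_id = req.params.get("geofenceId")
    if not route_id or not geofence_id:
        return func.HttpResponse("routeId and geofenceId are required", status_code=400)

    with pyodbc.connect(os.environ["SQL_CONNECTION_STRING"]) as conn:
        # Assumes the procedure shapes its output as one JSON string
        # (e.g. using FOR JSON) so the client can consume it directly.
        row = conn.cursor().execute(
            "EXEC dbo.GetMonitoredBusData @routeId = ?, @geofenceId = ?",
            route_id, geofence_id,
        ).fetchone()

    body = row[0] if row and row[0] else "[]"
    return func.HttpResponse(body, mimetype="application/json")
```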
All right, final demo: let's take a look at the Azure Static Web App and monitor the solution. I'll go back to the Azure portal; again, I'm not going to show the deployment, but it's all documented and it's all free using Microsoft Learn. You can see here I have the bus Azure Static Web App, with its URL and other related information, and in the configuration you can also see that I've configured the connection to Azure SQL Database. If I hop over to the application itself, notice that up here I'm specifying the route ID and geofence ID. I know this application probably needs some work, but it's working for now: we have one geofence and all the latest bus points, so it looks like there should be some buses coming into my geofence soon and we should get some notifications. That's really the app; it's really simple.

The final thing I wanted to mention is that all of this is managed in a GitHub repository using GitHub workflows. You can see I have different GitHub Actions set up: one for the Azure Static Web App, one for the Azure SQL Database (so we can use a DACPAC to update the schema of the database), and then different ones documented depending on which language you want to use for the Azure Function. It's all in there, all documented, and hopefully fun and useful for learning about all of these things.

At the end of the day, this is what you can build. What we added in this last step was the Azure Static Web App, which combines these three pieces, and you also saw how you can use GitHub Actions to deploy the Azure Function and the Azure SQL Database, and we have this nice (well, mediocre) website where you can track the real-time buses. This was quite a whirlwind, and you're probably thinking "wow, she talks really fast, I'm not sure I could keep up," but the good news is that this is a whole learning path: there are four modules where you can take your time, get hands-on, dive deep, and actually deploy and play with all of these things in your own environment for free. So please go check out the learning path (there's an aka.ms short link on the slide), watch our show on YouTube, and follow us on Twitter to stay up to date with what our team is doing. Davide, myself, and a few others wrote a book last year, so if you're really interested in developing modern apps and how a database can help, that's a shameless plug. And I think we're at time; somehow I managed to pull this off in about 25 minutes.

You did, and actually with a few minutes to spare, so feel free to keep talking, because we'll have to fill the space otherwise. Great, I should have known I can fill some space. Did we get any questions on the stream? Yes, we did; the first one should pop up on the screen shortly, we'll tap the producer on the head, there we go. There was a question about why you would choose SQL, a relational database, to store JSON, as opposed to Cosmos DB, which is more designed around storing unstructured data.

Yeah, that's a great question, and one we get a lot. The classic answer that everyone hates, and everyone also finds themselves giving, is: it depends. When you're thinking about when to use Azure SQL Database versus Cosmos DB, the biggest reason that should drive the decision is whether you're dealing with relational data or non-relational data.
What we find many times is that a lot of applications do use some non-relational things, like JSON in this case, but relational can still be the backbone of a lot of more transactional applications. For the example we went through today you could use either; either one will fit your needs, and hey, both of them have serverless options. For me it was just a matter of familiarity, but at the end of the day you should choose based on what the main workload is going to be. If you really are going to run relational workloads and you'll benefit from tables, then something like Azure SQL Database is going to be better; if not, and you're doing that kind of multi-region replication or other globally distributed scenarios, then Cosmos DB might be a better fit. So, short answer: it depends.

As someone who did consulting for a number of years, I always like an "it depends" answer, because it leaves it up to the viewer to decide what's right for them. I think we've still got another minute or two for questions. Rahul, did you have any questions for Anna before I dive in with mine?

Yes, we have a question from the audience: can you give us another example of the benefit of using the Cosmos DB change feed in other types of applications?

Sure. I'll be honest with you all, I am not a Cosmos DB expert; my team does sit right next to the Cosmos DB team, but this might not be my best area of expertise. I can't really comment on the Cosmos DB change feed, but hopefully someone on the stream can, and if not I will go look it up, let you know, and respond publicly. How's that for an answer?

That works. We've got a couple of other folks coming up who are going to talk about Cosmos DB, so we'll bring that question up again and see if we can't pump a few more people for answers before the afternoon is out. I think we've got time for one more question, and I actually had one while watching: I haven't played with the serverless aspect of SQL Server, so from a data storage and data usage perspective, is there any difference compared to a traditional SQL Server or Azure SQL, rather than using it in a serverless model?

Yeah, Aaron, that's a great question. Using Azure SQL Database serverless does mean a couple of different things, and two are good to highlight. One is that in the serverless model we bill you on a per-second basis for the max of CPU or memory that you actually use. If you think about the way SQL Server works, once it starts using memory it tends to keep it, so we had to implement a cache reclamation mechanism to bring your memory usage back down once your big workload completes; that's one thing to keep in mind. The other big thing is that when we pause a database we separate the compute from the storage, essentially shutting down the SQL Server process. When you try to connect again, the initial attempt might fail while we spin the compute back up and reconnect it to the storage, and you'll be starting with a cold cache, so you won't have a full or warm cache available. We've found that for many applications that's okay, and if it's not, what people do once the database wakes up is run a set of queries that resaturate the cache to the level they need. Those are the two big things to keep in mind; otherwise, the auto-scaling is a great benefit, whereas with a provisioned tier you're paying at the provisioned level whether or not you're using it.
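Since the first connection after an auto-pause can fail while compute resumes, the usual client-side mitigation is a small retry loop. A minimal Python/pyodbc sketch of that idea, with made-up retry counts and delays:

```python
# Minimal retry sketch for connecting to a serverless Azure SQL database that
# may be auto-paused: the first attempt can fail while compute resumes, so
# back off and retry for a short while before giving up.
import time

import pyodbc

def connect_with_retry(conn_str: str, attempts: int = 5, delay_seconds: int = 10):
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return pyodbc.connect(conn_str, timeout=30)
        except pyodbc.Error as err:
            last_error = err
            print(f"Attempt {attempt} failed (database may be resuming): {err}")
            time.sleep(delay_seconds)
    raise last_error
```

In scenarios like the ones in this session, where functions only connect a handful of times a day, a short delay on the first attempt after resume is usually acceptable.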
Yeah, and that's good to know. I don't think any of us properly appreciate the technical challenges of taking what has been a traditional database model and turning it into a serverless design. And I think we're out of time for Q&A today. Thanks, Anna, for a really good session, giving us a good taste of how you can pull together a number of different things inside Azure to build a serverless-based solution.

Awesome, thanks so much for having me on the show, such a pleasure, and I hope everyone enjoys the rest of Azure ServerlessConf.

Excellent. We've just got a short video coming up, and then we'll be back with our next speaker.

Hi, I'm Anna Hoffman from the Azure data product group, here to tell you about a show I host called Data Exposed. We talk about all things data: what's new, deep dives, how-tos, and we even give you a glimpse under the hood from the people who actually build the products. We cover topics like Azure SQL security, running high-performance SQL Server workloads on Azure virtual machines, migrating all your database assets to Azure SQL, data science with something old and something new, Azure SQL Managed Instance, developing apps with Azure SQL Database, and more. We post short episodes on Thursdays, and once a month on Tuesdays we release a special MVP edition with the community. May the fourth be with you. Catch us live on Learn TV on Wednesdays at 9 a.m. Pacific.

We're back, and I'm very happy to introduce Will, who is an MVP in the data platform space. He's joining us from New Zealand, where he's building a health platform on serverless tech.

Awesome, thanks very much for having me. Welcome, everyone, to Azure ServerlessConf, and good morning, good afternoon, good evening, wherever you are in the world; thanks for joining us. My name is Will Velida, I'm an Azure integration consultant working for Datacom and a Microsoft Data Platform MVP based in Auckland, New Zealand. In this session I'm going to showcase a little side project I've been working on: building my own health platform with some serverless technologies. It's a nice overview of several different serverless technologies I've used to build my own platform as a side project, and hopefully it gives you some ideas if you've got a side project you want to work on in your spare time, or you're thinking about a system to improve or something to make your life a little bit better.

To start right from the beginning: this year was a big year, because I'm starting to get old; I turned the big 3-0, and I had this realization that I should probably look after myself a little more and start thinking about whether I'm getting enough exercise, eating right, sleeping properly, and doing a certain number of steps a day. Prior to my 30th birthday I already had a bunch of Fitbit devices that I use day to day: a Fitbit watch, which tracks my activities and my step count, and which I also wear to bed, so it tracks my sleep.
I also have some Fitbit scales I can jump on to track my weight, and I use the Fitbit app to log my food. So I have all this data just sitting there, not really doing much; it's just there harvesting my data, and I was thinking, how can I actually use it to make more informed decisions about my health? How can I access it and do something useful with it?

I did a bit of investigation into how to get this data, and it's available as an API: the good people at Fitbit have developed an API, and I can call some endpoints to retrieve the data that belongs to me and that I'm generating through my devices. So it got me thinking: how can I build an application that incorporates my Fitbit data, uses some of the technologies I know and love, and turns it into something meaningful and useful?

I started with a simple piece of paper, writing up some requirements. I can call the API on a daily basis to get last night's sleep, what I ate the day before, and what kind of activity I did the day prior. I can also call the API to get a month's worth of weight data; I don't always jump on the scales day to day, but I can do that over a period of a month. Then I looked at the data that gets returned when you call the API: some of it I'd want to use, some of it isn't so useful, so I needed a process that takes the raw data from the Fitbit API and transforms it into a schema I can work with. Then I need to take that transformed data and persist it somewhere, and it needs to be really simple and flexible, because over time I might find that some pieces of information are more useful, or that there's data I'm not retrieving that could be useful, so I need a database or data store that gives me that flexibility. And once the data is persisted, I need a way to serve it via an API, so I can do some visualizations on it, maybe build a little website, or feed it into some machine learning workloads.

So what I came up with was the following architecture. Starting from the Fitbit API, I use timer functions to run on a schedule and call the API: at five o'clock in the morning my timer functions start and call the API to retrieve yesterday's data, or the monthly data for my weight. I also have another function that refreshes my authentication with the Fitbit API; for those of you who aren't so familiar with the Fitbit API, you need a way to authenticate to it, so I have another timer function that handles that for me: it generates a new access token, and I store that in Key Vault. Then, when I need to retrieve data from Fitbit, I get that token from Key Vault so I can make authenticated calls to the API.
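As a sketch of that retrieval flow, assuming the azure-identity and azure-keyvault-secrets packages: the vault URL, secret name, and Fitbit endpoint path here are illustrative assumptions rather than the exact ones from Will's repositories.

```python
# Sketch of the daily retrieval step described above: read the Fitbit access
# token from Key Vault (kept fresh by a separate timer function), then call
# the Fitbit Web API for yesterday's data. Names and endpoint are illustrative.
from datetime import date, timedelta

import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-key-vault>.vault.azure.net"

def get_yesterdays_activity() -> dict:
    # Managed identity in Azure (or your local sign-in during development)
    # authenticates the function to Key Vault.
    secrets = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
    token = secrets.get_secret("FitbitAccessToken").value

    yesterday = (date.today() - timedelta(days=1)).isoformat()
    response = requests.get(
        f"https://api.fitbit.com/1/user/-/activities/date/{yesterday}.json",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```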
Next, I need to handle the processing of messages from function to function, and I'm using Service Bus to enable that. I have other functions that read messages from Service Bus, do a bit of mapping from the raw data into the new schema I want to persist, and persist it to Azure Cosmos DB. Sitting on top of that, I've got a bunch of serverless API functions, essentially HTTP triggers, that serve the data out of Cosmos DB. I've also got a Logic App: every time an exception gets raised in one of my functions, the function sends a message to a Service Bus queue, and the Logic App picks the message up off the queue and sends me an email, so I get a little email saying "hey, something bad has happened in one of your functions, you should go and take a look at it." Then there's a bit of basic monitoring and CI/CD that orchestrates all this: Application Insights monitors my functions (I'll talk about how well Application Insights works with Azure Functions), and for CI/CD I'm using GitHub to host my code and Azure DevOps to manage my builds and releases; I'll talk a bit about how that works in this project.

So why Azure Functions? I really wanted to take a code-first approach; another motivation for this project was to work on my development and coding skills, so I opted for Azure Functions. Functions can meet demand with as many instances and resources as needed, only when needed. With this pipeline I don't need my functions running all the time: there are literally some timer functions that wake up at five o'clock in the morning, make calls to the Fitbit API, and once that's done I don't need them running any more, and Azure Functions gives me a really flexible way to do that. We can write Azure Functions in any of our favorite programming languages: C# (like I have), Java, JavaScript, PowerShell, or Python, and if we're using other languages, maybe Go or Rust, we can write custom handlers for our functions. I also wanted a way to automate the deployments for my functions, both the function application infrastructure and my code, and I needed monitoring, so that when things go wrong I know what went wrong; Azure Functions integrates really well with monitoring tools that let us gain insights from our applications.

Now, to react to events, functions use something called triggers. Triggers are what cause a function to run: a trigger defines how a function will be invoked, each function must have exactly one trigger, and triggers have associated data that is often provided as the payload of the function. In my application I'm mainly using three trigger types. First, a timer trigger, which lets a function run on a schedule: we provide an NCRONTAB expression, and that's when the function is invoked; I use that to make a GET request to the Fitbit API at a particular point in time. Second, a Service Bus trigger: to react to messages sent to queues and topics, we can create Azure Functions with a Service Bus trigger to process those messages, which is what I'm doing in my health platform; whenever a message is sent to Service Bus, a function reads it off the topic.
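A hedged sketch of one of those processing functions in Python: a Service Bus topic trigger that maps the raw payload into a slimmer document and upserts it into Cosmos DB. The database and container names, and the assumed shape of the incoming Fitbit payload, are illustrative, not taken from the actual project.

```python
# Sketch of a processing function like the ones described above: triggered by
# a message on a Service Bus topic, maps the raw payload into the schema we
# want to keep, and upserts it into a Cosmos DB container. Names illustrative.
import json
import os
import uuid

import azure.functions as func
from azure.cosmos import CosmosClient

cosmos = CosmosClient.from_connection_string(os.environ["COSMOS_CONNECTION_STRING"])
container = cosmos.get_database_client("health").get_container_client("activity")

def main(msg: func.ServiceBusMessage) -> None:
    raw = json.loads(msg.get_body().decode("utf-8"))

    # Keep only the fields we actually care about (assumed payload shape).
    document = {
        "id": str(uuid.uuid4()),
        "date": raw.get("date"),
        "steps": raw.get("summary", {}).get("steps"),
        "caloriesOut": raw.get("summary", {}).get("caloriesOut"),
    }
    container.upsert_item(document)
```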
Third, I'm using HTTP triggers: we can invoke functions via HTTP requests (and also use HTTP triggers to respond to webhooks), and I'm using them for a bunch of serverless APIs I've developed that make GET requests to retrieve the data persisted in Azure Cosmos DB. With Functions there's a huge amount of support for a variety of different triggers and bindings beyond this small subset: you can create functions that get invoked by events in Event Hubs, there's Kafka support, and quite a lot more you can use in your applications.

There's also a lot of flexibility in how we host our functions. In my scenario there's only very low, bursty traffic: really, things only happen at five or six o'clock in the morning, and for the rest of the day the functions just sit there idle. For that we can use a consumption plan: you only pay for your functions when they run, and scaling is automatic, to as many instances as the events require, all handled for you. Depending on the workloads you'll be processing, there are a variety of hosting plans for Azure Functions. The next step up is Premium, which gives you pre-warmed instances: with consumption plans you can scale right down to zero instances, which can cause some latency through cold starts, but with Premium that's not the case, because pre-warmed instances run your application with no delay after it's been idle. Premium also runs on more powerful instances, can connect to virtual networks, and allows a longer execution time than consumption. You can also host functions on a dedicated plan: if you have an App Service plan whose underlying VM you're not really utilizing, you can deploy functions to it, get more predictable scaling and costs, and make use of the under-utilized VM. There are also more isolated options: you can run Azure Functions in an App Service Environment, which gives you a fully isolated and dedicated environment for secure network access, or you can host them on Kubernetes, either directly or via Azure Arc, which gives you an isolated environment for your functions running on Kubernetes.

That's not to say Azure Functions are the be-all and end-all of this platform. Some parts of the workflow are more straightforward to develop with Logic Apps. Within my platform I have a single Logic App that sends exception details from my functions to me via email. This came about because I was messing about with the SendGrid API in Azure Functions, trying to get it working with an outlook.com account, which was a bit frustrating and didn't work quite as well as I expected.
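One way the exception-to-email flow could be wired up from the function side, assuming the azure-servicebus package; the queue name and message shape are illustrative, and the Logic App on the other end simply watches this queue and emails the contents.

```python
# Sketch of how a function could push exception details onto the Service Bus
# queue that the Logic App watches, so the Logic App can send the email.
# Queue name and message shape are illustrative.
import json
import os
from datetime import datetime, timezone

from azure.servicebus import ServiceBusClient, ServiceBusMessage

def report_exception(function_name: str, error: Exception) -> None:
    client = ServiceBusClient.from_connection_string(os.environ["SERVICE_BUS_CONNECTION_STRING"])
    with client, client.get_queue_sender("exceptionqueue") as sender:
        sender.send_messages(ServiceBusMessage(json.dumps({
            "function": function_name,
            "error": str(error),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })))
```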
What Logic Apps provides is a large collection of connectors and actions we can use to build highly available, scalable integration solutions, and we can connect to a variety of different systems, whether legacy or modern, really simply. I'll show you how I actually use the Logic App to process those exception messages and send them to me via email.

Moving on to the data store: Azure Cosmos DB. Cosmos DB comes with a serverless offering that lets you use your Cosmos DB account in a consumption-based fashion, where you're only charged for the request units consumed by your database operations and the storage consumed by your data. For those of you who aren't too familiar with Cosmos DB: before the serverless option, we would provision throughput, at either the database or the container level, allocating so many request units, and regardless of whether you were performing operations against it or not, you were charged for those request units. With serverless we don't have that; we're only charged for what we use, there's no capacity planning required, it can serve thousands of requests per second, and there's no minimum charge. There are some limitations, though: with Azure Cosmos DB serverless you can only have a single region (it's not possible to add additional Azure regions to a serverless account once you create it), and it's not possible to enable Synapse Link, so if you're using Azure Synapse to run analytical workloads on top of your Cosmos DB data, you can't do that with a serverless account.

One of the big appeals of the serverless option is that it best fits scenarios with intermittent, unpredictable traffic and long idle times. Like I said, I'm only really performing work in the morning: the functions get triggered at five a.m. and I persist to the database around that time, and for the rest of the day there's not much going on, just a couple of GET requests to retrieve data out of Cosmos DB. That doesn't really justify provisioning throughput on an hourly basis, because it isn't required and it would cost quite a bit. So if you're tossing up between serverless and provisioned throughput: if you're just getting started with Azure Cosmos DB, if you're running applications with bursty, intermittent traffic that's hard to forecast and quite low, if you're developing, testing, or prototyping, or if you're running new applications in production where you're not quite sure what the traffic will be like, I'd really recommend Azure Cosmos DB serverless rather than provisioned throughput.

Let me just step into the pricing calculator. What I've got here is a serverless Cosmos DB account in Australia East, which is the Azure region closest to me.
Per one million request units you get charged 43 cents New Zealand dollars (apologies, I don't know what the exchange rate is for you), and with a gigabyte of transactional storage I'd pay about 87 cents a month if I were hitting around a million request units. However, if I switch this to standard provisioned throughput in a single region (Australia East), with 400 request units per second over a 31-day period, I'm paying $41.40, plus storage costs, which are still relatively low at 43 cents, so in total $41.83 per month. That doesn't really make sense if your traffic is mostly idle; you're essentially just burning money on throughput you don't need. So if you have an application that doesn't have a lot of traffic, I'd really recommend going with the serverless option.
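To make the comparison concrete, here is a quick back-of-the-envelope calculation using the NZD figures quoted above; prices vary by region and over time, so treat them as indicative only.

```python
# Back-of-the-envelope comparison using the NZD figures quoted from the
# Azure pricing calculator in the talk (Australia East); indicative only.
serverless_monthly = 0.87            # ~1M RUs consumed + ~1 GB storage per month
provisioned_monthly = 41.40 + 0.43   # 400 RU/s for 31 days + storage = 41.83

print(f"serverless : ${serverless_monthly:.2f} NZD / month")
print(f"provisioned: ${provisioned_monthly:.2f} NZD / month")
print(f"provisioned is ~{provisioned_monthly / serverless_monthly:.0f}x the cost for a mostly idle workload")
```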
Then there's monitoring: we need a way to see what's going on with our applications in case something bad happens or an exception is thrown, so we can diagnose what's happening inside our functions. The really cool thing here is that Azure Functions offers built-in integration with Application Insights. Application Insights collects log, performance, and error data, automatically detects performance anomalies, and with its analytical tools you can diagnose issues in your function applications, understand how your functions are being used, and see the dependencies between your functions and the other components in your architecture, which I'll show you in a little bit. This really helps us continuously improve the performance and usability of our functions, and we can use Application Insights locally while developing as well, so it's a really powerful way of getting insights into your Azure Functions.

We can also use our favorite CI/CD tools for our serverless applications. For this particular project I wanted an easy way to boost my GitHub stats, so I host my code on GitHub, and I've implemented my build and release pipelines in Azure DevOps (I didn't think of using GitHub Actions at the time, but you can mix and match GitHub and Azure DevOps). We enable this using service connections: I've created service connections from Azure DevOps to GitHub and to Azure so I can run my CI/CD process through Azure DevOps, and you can use YAML multi-stage pipelines for your builds and releases. I've also used Terraform to provision my infrastructure; here I've used classic release pipelines, but you can achieve this with YAML pipelines as well.

That's a lot of slides, so what I'm going to do now is take a very quick tour of the application and show you all the moving parts. This is Application Insights: I've got a single Application Insights resource, and all of my functions emit logs to this central location. What I've got here is an application map (I've set it to the last 12 hours, because all of this happened early this morning), and with it we can see, at a very high level, the dependencies between the different components in the architecture. I can see my authentication service that refreshes my token, interacting with Key Vault and making calls to the Fitbit API, and from here we can investigate the performance of those calls: this is a function within my function application, we can investigate its performance, see any failed dependencies (I can see one there), and view it all in the logs if we want. So those are my functions at a high level.

Now let me go into my function code. For my activity data, weight data, food data, and so on, I've got a similar pattern going. Starting off, we have a timer trigger function; this kicks off at 5:15 in the morning New Zealand time (you can set the time zone on your function application), makes a call to the Fitbit API, and sends the result to this activity topic I've got; if any exception gets thrown, it gets sent to the exception queue. If I go into this function here, this is where my document gets created: it reads the message that's been sent to the topic, deserializes it into the schema I want to persist, and persists that activity document to my Cosmos DB account. And finally I've got a couple of functions here that read the data back out of the database. So that's a very high-level view, from point A to point B, from calling the Fitbit API to persisting into my Cosmos DB account.

If I go back here, this is my App Service plan; it's a dynamic (consumption) plan, and I've got all my applications running on Linux (we can deploy functions to either Windows or Linux, and I've chosen Linux). These are all the functions sitting within that App Service plan, which you can create using Terraform, Bicep, or ARM templates; I've opted for Terraform because I was already using it quite heavily. Here I'm creating a dynamic Linux App Service plan, and I'm also provisioning my Azure Functions with Terraform: I import the App Service plan, and when creating the function application I say, deploy it to that particular App Service plan we've imported into the Terraform file. If I go back, here's one of the functions I've created; we can see the App Service plan it's deployed to, and if I go into its functions I can see the single timer trigger function I showed you earlier. Here we can see past invocations in the monitor, and we can run this query in the Application Insights instance the function is connected to. My recommendation: if you have a bunch of different Azure Functions, use only one instance of Application Insights for them. We can also configure deployment slots, check configuration, and, if you're using managed service identities, use MSI to connect to different services in Azure; this one is actually connecting to Key Vault.
This one is actually connecting to Key Vault. We can also configure availability tests; essentially that's a URL health check that pings the function every so often to make sure it's healthy and ready to go, and it looks like it passed. Last one, and I'm just watching the time to make sure I don't overrun: I've also got this serverless Cosmos DB account. Here I'm partitioning on my document id and, as you can see, I can't deploy to other regions as part of this account, but I've got the serverless capacity there, and we can also incorporate some CI/CD. Sorry, I'm going to have to jump in there for a second. This is really exciting, and I'm loving the detail in the way that you set up Application Insights, the application map and everything, but we are running a bit out of time. There have been a few questions coming in, but unfortunately we're not going to have time to get to them today; for those of you who have been asking questions in the chat, don't worry, we'll get to them and answer them there for you. Just to finish off, do you have any closing remarks or any links to share with the audience? Yes. The code is all hosted on GitHub, and there's my GitHub account there; just search for "my health" and all of those different repositories are the code I've got for the project. Also feel free to connect with me on Twitter; I am in New Zealand, so with the time difference I may be delayed in getting back to you. As for takeaways: hopefully that basic demo showed that with serverless technologies you can build something really simple and straightforward, and then add layers of sophistication as required. It can be anything from a simple project getting data from A to B, but you can also add layers of sophistication such as CI/CD and monitoring and really scale up your application, depending on what you're trying to achieve. Excellent, sounds good. We've just got another short video and then we'll be back with our third speaker. Up until now, the problem with building applications has been that before you could even start, you had to choose a framework, learn how to deploy that framework onto servers, then manage and maintain those servers over time; not exactly easy, quick or efficient. Thankfully, now there's Azure Functions, the easy way to build the applications you need using simple serverless functions that scale automatically to meet demand, with no worrying about infrastructure or provisioning servers. Whether you're new to development or a seasoned pro, within a few minutes Azure Functions helps you create applications that accept HTTP requests for a website, process product orders via queue messages, react to data flowing from devices like an IoT hub, or just run simple scheduled tasks. And since Azure Functions scales automatically, all you need to do is write your function logic; Azure Functions handles the rest. What's more, you only pay for what you use. With Azure Functions you'll write less code, manage less infrastructure and have a lot less upfront cost. See for yourself at functions.azure.com and try it free; you'll see that making applications has never been easier with Azure Functions. [Music] We're back, and we've got another one of our New Zealand friends
joining us in a moment. When he pops up on the screen... there we go. Wagner is joining us from New Zealand, again, and he is going to be talking to us about Logic Apps, which I think is a massively underrated technology, so I'm really looking forward to getting some new ideas about what we can do with Logic Apps. Wagner, welcome. Thanks Aaron. Let me set everything up here and we're ready to go. Welcome everyone, my name is Wagner Silveira, I'm a digital integration practice leader at Theta and a Microsoft Azure MVP, and what I'm going to try to show you in the next 25 minutes or so is some new scenarios with Logic Apps. Let's go through the agenda and see what I'm going to discuss. First, I'm going to be talking a little bit about identity-based authentication; there are a couple of things that came up in the last year or so that not everyone is taking into consideration. We're going to be talking about state versus speed, and when to use some of the new features that came with Logic Apps Standard. We're going to talk about the concept of Logic Apps anywhere, and then finally something that was a big problem with Logic Apps before, which is private network integration. At the end of that I'm going to try to give you some takeaways to think about when you finish the presentation, and some of these sections will have a demo, so start helping me pray to the demo gods. Let's start with identity-based authentication. What is the story so far? Basically, if you're thinking about inbound, when you're trying to get into a Logic App there was only one way to authenticate, and that was shared access keys: you have that big, massive URL to access your Logic App, and part of that URL gives you the shared keys. If you're thinking about going out of Logic Apps, into all the things that Will and Anna talked about, you have the ability to use a lot of connectors, and that's actually one of the great value propositions for Logic Apps, but there were two things that were usually a problem here. One is that the identity was always associated with the connector itself, as a connection, almost like a connection string, and there was no way to identify the Logic App itself as a user; instead, you had to use a service account or your own account to make that work. So what is new? There are two things that are really interesting. When you're thinking about inbound authentication, we now have the ability to do OAuth-based authentication when making a request to a Logic App, and when you're thinking about outbound authentication, you now have the ability to create a managed identity for the Logic App and use that to authenticate against some of the connectors. How does that happen? The first thing is that there are new authorization options on Logic Apps Consumption, which I'm going to show, and there are also new managed identity options in some of the connectors, for both Consumption and Standard. So there is a lot here that we didn't have a way to use before that is now available in Logic Apps. Let's take a quick look at the first one, Logic Apps inbound authentication using OAuth. I'm going to change back to my Azure environment here, and what I'm going to show you is
this. First, let's take a look at the Logic App. Inside that Logic App we now have this option called Authorization, which was not there before. With that, you now have the ability to create one or more policies for the Logic App, so it can receive a JWT token, evaluate that token, and say: if you present the information with those claims, you can get in; otherwise, you are not able to. The demo I have for you is this: inside API Management we have an API called "azure serverless conference" with a single method, but on this method, as the call goes into the Logic App, I pass in my MSI access token. I'm creating my authentication from the managed identity, calling a specific enterprise application, putting that information into the MSI access token if it's valid, and then passing it as a bearer token in the Authorization header to the Logic App. Let's see if that actually works. If I go to Postman, I have the call to that endpoint first, and you'll see that in the headers I'm only passing the Ocp-Apim-Subscription-Key, which is the shared key you need to authenticate against API Management, not against anything else. So if I call that, and the demo gods are with me, we're going to receive a very complex response. Let's see what happens... there we go, I receive a very complex response called "hello world". That means the Logic App has actually been called, and we can see it happening: there's a call at 5:09 pm New Zealand time, which executed my hello world. But if I try to call that same Logic App directly now, even with what seems to be a valid token, let's see what happens. First, I get "security token expired", so it identified that I tried to pass a token that is not valid; just making sure you can see that. That's fine, that can be fixed, so let's get a new access token. There we go, we have a new access token, and I'm going to use it and send it. But because it didn't come from API Management, which was the only caller I'm allowing through, the request is rejected, and you can see the Logic App name there, just to prove there is nothing up my sleeve. If I go to my trigger history, it didn't even execute, because the request didn't get access; I just wanted to find my trigger definition here, but let's forget about that because I don't want to spend the time. I just wanted to prove that this Logic App is the Logic App being called. A rough sketch of what that authenticated call looks like from a client follows below. To recap: authorization is something you can now implement by taking advantage of JWT tokens instead of just shared keys. Let's go back to the presentation and talk a little bit about the gotchas, because you always have some gotchas. The first one is that inbound OAuth authentication is not available on Logic Apps Standard yet, so you need to take that into consideration. You also cannot disable the shared access keys: even though you're using the OAuth authentication method, if someone gets hold of the shared keys they will still be able to call that Logic App. And if you call the Logic App without the shared keys, only with the Authorization header, it is going to complain that the shared access keys are missing.
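To make the shape of that call concrete, here is a minimal, hedged sketch in TypeScript of calling a Logic App with a bearer token instead of a SAS key. The URL, the audience and the payload are placeholders and assumptions, not the values from Wagner's demo; when going through API Management you would instead send the Ocp-Apim-Subscription-Key header and let the APIM policy attach the managed identity token.

```typescript
import { DefaultAzureCredential } from "@azure/identity";

// Placeholder values: the real callback URL and the audience your authorization
// policy checks would come from your own environment / app registration.
const logicAppUrl = "https://<region>.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke";
const audience = "https://management.core.windows.net/";

async function callLogicApp(): Promise<void> {
  // Acquire an Azure AD token; locally this uses your dev credentials,
  // in Azure it can fall back to a managed identity.
  const credential = new DefaultAzureCredential();
  const accessToken = await credential.getToken(`${audience}.default`);

  // Node 18+ provides a global fetch.
  const response = await fetch(logicAppUrl, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken.token}`, // JWT evaluated by the Logic App's authorization policy
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ message: "hello from the serverless conf demo" }),
  });
  console.log(response.status, await response.text());
}

callLogicApp().catch(console.error);
```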
There are also no operators on the policies; you saw that it only does "this claim equals that value". There is no option to say "if that claim contains something". And just to complete the picture, although we didn't look at managed identity because we don't have time to go through everything, managed identity is only enabled for selected connectors, so when you're looking at the connectors, check whether managed identity is available, and if it is, try to take advantage of it. It's going to make your system much easier to manage, because it's passwordless and more secure. All right, let's go to the next scenario and talk about state versus speed. What is the story so far? The first thing is that there was only a single model for Logic Apps. When you look at Logic Apps Consumption, for example, there is only one model, and in that model state is king: you always have state for each of the connections, meaning everything before and after an action in the Logic App is kept as state. That is really good on one side, because it lets you know exactly what's happening inside the Logic App, something we sell as one of the big advantages of Logic Apps, but on the other side all of that auditing takes a toll in terms of performance, and there is only so much optimisation you can do. You can combine actions, you can try to make things more condensed to reduce the pre- and post-action auditing, but you can only go so far. So what is new? Well, the first thing is that you can now choose a model that is closer to your requirements. There are two models available, compared to just the stateful model we had before. The first one is the stateful workflow, where you have full auditing of everything that happens in your Logic App: for the trigger, what initiated it and what its result was; and for each of the actions, what initiated it, what the inputs were and what the outputs were. That makes your troubleshooting incredibly simple, because you know exactly where it failed and you know the data that made it fail. It also gives you a lot of support for resubmission: since you have the inputs that triggered your Logic App, you can resubmit directly from the portal, or from the engine itself, instead of having to receive the request again to initiate it. Then you have stateless Logic Apps. What is the advantage of stateless in this case? One is a lot more throughput, because since you don't have the burden of auditing everything, you are able to run things fast. You also have an advantage from a cost point of view: with Logic Apps Standard you're paying both for the engine running inside an environment, just like Azure Functions, and for the storage that stores all your metadata and auditing; you pay for your storage account. So if you're using stateless Logic Apps you have fewer things written to that storage account, and you're paying less. You also have the ability to achieve privacy. One of the things that people really complained about with logic
apps is especially when you're talking about uh private data was you have to go to every single action inside the logic app and select which action the that that action is has secure inputs and outputs right so we needed to to remember everywhere the data that was supposed to be private and shouldn't be seen even by the people that is managing support in the logic app you needed to to enable data input and and secure output for it to be uh obfuscated or or to be that are encrypted so the met the people that is supporting wouldn't have access to that with this stateless logic apps because that is not being implemented by default it is not being audited by default that information is not there and is you needed to force to actually uh enable that for debugging so how is that working right what what does that uh that creates that magic the first thing is that what it's doing now is the new models on logic apps standard if you needed to use stateless logic apps you needed to implement that on the new uh logic apps deployment model that is logic apps standard your large caps consumption doesn't have that option for you so stateless logic apps is going to give you high throughput you just need to to remember that any state for logic caps is going to give you high resilience so all that ability is are going to allow you to actually um resilience in this case the ability of re uh submitting messages without the requirement of your uh without your source having to send the trigger again and auditability from the point of view that you know exactly what's happening on each one of the actions but as everything there is some gotchas on that right so the first thing is is stateful or stateless needs to be a design decision and the reason for that is because each logic apps only have a single state and although in theory you should be able to change that states not everything that is available in the state foreign is available in stateless right so for example you have managed connections uh with triggers from management uh managed connections you don't have all of them you you only have the the http requests and the timers and variables those are the two things that that i notice are not available in in logic apps stateless right you can always use the uh the ability the compose action to manage variables that what you used to do before right but uh you're not able to use variables itself so this is the second state right let's take a look at the next uh the second scenario let's take a look at the third scenario that is logic caps anywhere so what is the story so far the first one is the the only place we could deploy logic apps was on azure right and you were bound to a resource group so you have a logic apps bound to a resource group and existed by itself it didn't uh it was not something that you could combine together and deploy as one package for something your boundaries or your your package was a resource group and how it's uh done today what is what is different today so first you have now a choice of deployment you can deploy the logic apps by itself or can deploy the uh the the logic apps as a group of workflows if using logic apps standard that means that now you have the choice of the packaging right you can deploy as a set of phone templates or you can set deploy as a package based on web apps uh format of function format using the logic apps standard and also you have the choice of compute you can deploy on azure using the consumption engine you can deploy on azure using the serverless 
or the best component based on on logic apps standard and you can deploy on any other cloud or on on-premises how is that happening with the introduction of logic apps standard now you have the ability to deploy either using the pass or using the containers right so if you wanted to use the uh logic apps by itself you can do that using logic apps consumption if you wanted to uh use logic apps as a group of workflows that represents an application you can do that using logic apps is standard and if you wanted to deploy outside azure you can use logic apps standard package as a container if you're running that on premises you can use a run either inside a docker server or you can run that using a kubernetes server and if you wanted to run out any other clouders not azure or even inside azure you can use kubernetes so what are some of the gotchas of that first you needed to choose your uh your storage right and uh depending on on where you put in that information you might have like differences on how that uh that's going to to to work right now that uh for the for for your logic episode standard you only have the ability to deploy that on uh on the the azure storage but the the sql server as a storage is starting to to become available and there is a preview for that second thing that you needed to think about is your management plane right so uh uh if you're running that on logic apps that uh on logic apps standard and you're deploying the kubernetes you needed to find a way to integrate to to see the management plan and the way that that's working today is by using uh azure arc so you should be able to to see your logic apps like it was running on on azure and because of that you would be able to see and manage your logic apps and you needed to take a consideration built-in versus managed connectors managed connectors is still going to require a logic apps a resource group for that to be deployed and when you're calling when you're talking about managed connectors are the connectors associated to uh azure so finally let's talk about private networks so one of the uh problems that we have with logic apps before is you only have out outbound integration right for for uh for private networks and that was based on on-premises data gateway so you would only be able to actually connect to either a private network or on premises using on-premises data gateway and then was kind of uh on-premise that gateway is very broad brush right it needs to to you open a pipe into that network but you don't have much control so what is new outbound and inbound integration is available now right you'll be able to define if you wanted to have integration from your logic app from your private network into the logic app and logic app into the network it's much more focused because now you you able to actually define exactly what is going to be integrated and how is that working again using the uh logic app standard because logic app standard is based on the functions engine that by the uh uh that is based on the web app uh uh framework of the the web app engine you able to leverage from outbound options like v-net integration and hybrid connections so you also have the ability to have inbound options with the private endpoints right but let's actually take a look at that instead of just talking about through a little demo so just going to jump here quickly because i can see that are we getting really close to that what i'm going to show you on this one is on logic apps standard although that is a logic app right 
with its own workflows for a couple of workflows right i have now the networking ability and on network i have the ability to do inbound traffic by access restriction by app assign address or by private endpoints and for outbound traffic i have the ability to do v-net integration and hybrid connections okay so i run a really quick uh item here i'm going to show you that this is actually going to the logic capacity standard and just going to insert this item here the demo is working here is the proof right you can see that this is actually running on my laptop and i have those three entries now i'm going to execute the same thing from going with the logic apps standard that is going to connect to the hybrid connection and call my uh sql server so let's see if it works so first a comment this is a demo for azure civilized conference anonymous is saying wagner is cheating sorry i'm not and then me say no the demo is working here is the proof all of that although looks like a really simple demo went up to the uh the logic apps via hybrid connection called the sql server inside my my logic apps and come back and using hybrid connection that means that i'm not able to open everything just that particular uh ip import let's go back to this session what are the gotchas on that the first is uh you needed to understand that that is going to be uh only for selected connectors right there is some built-in connectors that are able to use the v-net integration those built-in connections that grow on time but just take that in consideration when you're thinking about how to use uh the logic apps uh virtual network features on standard then finally let's go to the takeaways so the first one is is obviously logic caps open a lot of new scenarios right and now with logic apps standard the you able to create workflows but bringing really close to app development before it was is this thing on the portal that you have to develop and then and then later on has a lot of ways a lot of work to actually deploy now is something that is together with your the uh development environment of choice the package just like an azure function deploy just like an azure function but just remember that there's still a new technology right so there are some gaps that might affect your design so make sure that you understand what are your choices and the uh to see how it's going to work properly but also don't discard consumption right just because logic caps standard is there that means that the consumption is is not part of the puzzle it is still something that is quite uh useful and that is me right if you wanted to connect with me you can go to the uh connected through through my twitter or send me a message on on my uh personal email or connect it to my blog post thanks man that was really great i like i'm someone that thinks that logic apps is a technology i need to learn more about and never invest the time so i love seeing demos like this and just how easy it is to use as a as a as a stack as a tool in our toolbox and that hybrid connection stuff that you showed at the end super powerful uh because it just makes it so much easier if you're doing uh hybrid application design where maybe you're not ready to put everything in the cloud and you but you can still use the power of the cloud there cool um but yeah i was like that is something something that is really interesting in uh with logic apps is standard now is growing a lot and it's going to get much closer to developers so we should be hearing a lot more about logic 
apps even more than what we saw before for sure um but unfortunately we're out of time for questions with you today check out the uh the poll again um keep popping those questions in there for our speakers as we get to them but next up we have another speaker and we're gonna gonna keep rolling through them today no rest of the wicked and uh we're being joined by uh melissa melissa is a senior wait lead developer i i wrote this down i knew i was gonna get it wrong uh um down in melbourne uh she also is one of the organizers of the net user group and microsoft mvp but today she's going to be talking about something that i love javascript and azure functions melissa welcome thanks for having me and thanks for everyone for tuning in to the azure serverless conference today so my talk is called javascript in the cloud and really it's a high-level look at all the cool stuff you can do with javascript on the cloud specifically in azure my socials are up on the screen if you'd like to connect with me i love to connect with people who have similar interests to mine so so please reach out a little bit more about me so as aaron said i'm a lead software engineer at a company called aznix and i'm a microsoft mvp in developer technologies i mainly work in application development focused around.net and worked a lot with angular and other javascript frameworks and also have been working quite a lot in azure and when i've been doing my more recent learning about cloud computing and azure i was surprised by how much you could actually do with javascript in the cloud so i put together this talk so what is the cloud we know that it's software and services that run on the internet instead of locally on your computer in a big data center usually run by azure whatever cloud provider you're using and you rent that space and pay for what you use letting it scale as your business needs change well cloud providers like azure take care of managing that physical infrastructure so really there is no cloud it's just someone else's computer and the other part of this talk is javascript so javascript is a lightweight programming language that allows you to implement complex features usually aren't used for web pages it runs natively in the browser and many people when they start programming today start with javascript and this is because it's relatively easy to get started and since it runs in the browser it's quick to get up and running and my first experience with javascript actually started with this o'reilly book i'm not sure exactly what the edition was at the time but during my university degree i didn't actually learn any javascript there it was in one of my first jobs in the tech industry while i was still studying i started at the software solutions company and i was told that we were using the mean stack which is express at the time was angularjs because the newer version didn't exist yet and node which is all javascript based frameworks for building end-to-end full stack applications so i was handed that javascript book and after reading it trying to understand what the heck a closure was i was partnered with the senior to do web development and from there i've used javascript mostly throughout my careers focused on web development and traditionally javascript is a language for the web it was created in 1995 in just 10 days by brendan ike who worked for netscape it was created as a scripting tool to manipulate web pages in the netscape navigator browser and soon after it was standardized and took off to become 
one of the most popular client-side programming languages with today 97 of the web using it however javascript is much more than just a client-side programming language it was also initially created to be used on the server side but didn't become popular as a server-side language until later on with the creation of node.js in 2009. so node.js is an open source cross-platform backend javascript runtime environment that runs on the v8 engine and allows you to execute javascript code outside of a web browser and with the advent of node.js as a popular server-side javascript programming language it opened us up for the ability to use javascript in the cloud and i personally am a really big fan of simplifying the developer experience and using one language end-to-end across your application and so with the the use of node and all these other javascript frameworks that we we know and love makes it really easy for developers to work across the full application and you don't have to have the separate front-end back-end team specialists you can have all the skills to you need to build the full robust applications and having some of the same toolkits that you're used to working with different parts of your code and with node we can now run our javascript applications on the server and if you look at the major cloud platforms today javascript is one of those supported languages including on azure which is why i'm here talking to you today uh azure runtime supports javascript but it also supports typescript and any other flavor that you you love that trends files down into javascript and in the words of scott hanselman one of the partner program managers at microsoft the cloud doesn't care about language of choice and one of the best ways to use javascript is through serverless solutions so this conference is all about serverless and serverless means that the cloud provider in this case azure automatically allocates those machine resources on an as used basis and they take care of the services and underlying infrastructure for you so despite the name there are servers that are still there but we as developers don't have to deal with them so it means you can write and deploy your code without worrying about provisioning or doing the admin and configuration of those servers and they're inherently auto scaling and you only pace pay based on your consumption generally depending on how you set it up this makes several solutions very flexible and it naturally optimizes costs and performance of an application and allows developers to quickly update and release new versions and have a quicker term around time to market and one of the most popular serverless computing is with azure functions and functions as a service so we've we've seen a couple examples already today of how you can use azure functions in robust solutions but functions as a service allow you to execute small discrete pieces of code and focused generally on event driven execution such as an http call or a timer trigger you for example you could have a function that sends out an automated email once a week or generates a report once a month something like that and when we're using these azure functions they are serverless solutions and they really allow you to create a simplified serverless backend or api layer for your application and the good news is that all of these can functions can be written in javascript so if you want to get your hands dirty with node or you're already a javascript developer wanting to have the same language across the 
front and back end you can do that with javascript and azure functions so you can have a nice cloud hosted auto scaling solution for your application serverless and microservices also go together very nicely microservices are independently releasable services that are modeled around a business domain and with microservices we want them to be deployed independently which is encouraged with functions as a service we want them to be small flexible and focused on automation which when we have working with azure functions we know that's what they inherently are but also to be a market service it needs to be modeled around a business domain so when you're it comes down to how you actually design your serverless solutions so they go nicely together but just because you're using functions and serverless solutions doesn't mean you're automatically using microservices so you have to focus on making those discrete pieces of code align to your service boundaries and that will lead you to a serverless microservice architecture and this is an example of what that could look like so i just took this from the microservice one of the azure sample architectures and this is showing a rideshare application solution and it's using the fully managed services from the azure serverless platform so we have an azure app service hosting the application api manager which is exposing the different endpoints and hiding the implementation details and then we have our azure functions for our apis that are based around different business domains in this case it's for drivers trips and passengers and we're also using durable functions as orchestrator and setting up event driven messaging service all these components are scalable and focused around that automation that we want to have with microservices and most importantly they can be all be written in javascript you can also write it in other languages as you if you want as well but this talk is all focused on the cool stuff that you can do with javascript and so you can have these really robust solutions following popular architecture design patterns and taking advantage of all the great stuff that you can do with javascript in the cloud and when it comes to developing these applications and solutions there are some really good tools and guides out there to help you if you go to the microsoft docs they have all kinds of quick start templates and and learning resources for javascript developers in azure uh they can help you get started there's also azure cli supports javascript and you can and that'll allow you to create and deploy your javascript based applications with the cli your favorite ides such as visual studio code and webstorm also have extensions that let you integrate with azure and access quickstart templates write debug and deploy your code all from within that ide that you probably already use with your javascript application so let's take a quick look at a demo of what those extensions look like so i have a i've already created a javascript function azure function using the the templates in visual studio code so i have the cloud tools extension installed so this is the the template that automatically gets generated when you choose node.js as your language and javascript as your language for your function if i go over to the extension i can access the what i have locally for azure functions but i can also actually see what i have on the cloud so this is connected to my azure cloud instance and there's a function app that i've already deployed up there and i 
can interact with that function from within visual studio so if i wanted to execute this and test it out from my local machine i can go ahead and do that now so if we press enter this will execute and the template for this one let's see if we open it here so it executed successfully this one it's doing a hello world but this one i've said i'm deployed on azure so we know it's the one that's on the our azure instance and it has deployed successfully i can see in the if i jump over to my portal really quickly um that's that same one we have here so this is running hitting our azure instance all from within visual studio so it makes it nice for for testing and if we want to go back to the one that we still have locally um i will close this message we can run and deploy uh debug this all within our um visual studio as well so if you click f5 you can start running it and once that's running you can execute in the same exact way that we just did the one that's hosted on azure and test it out locally before we deploy it up to the cloud so i think that might be running not yet what happened cool well if i if it started running it's being slow of course doing a live demo always makes it a bit slower but if once you start running you you do it the same way where you right click and go execute function and we would see that pop up here cool so it looks like now it's running if i jump back here and go execute function and that executed successfully so if you have some more complex code that you really want to verify and make sure it's all working and test it out locally and fix any issues you may have you can all do that really nicely from within visual studio code and if you once you're happy with it this one is just locally you can also deploy from within visual studio code as well you can click the deploy to function app and that will connect up to your azure resources that you have and deploy it onto the cloud so you can deploy it to a test environment you can deploy it wherever you want before you get started and you can manage everything that's happening within these tools that we have here so it's really handy and nice to work with and all these templates that we get started so this template that was created is written in node and it's based on the language that you choose when you set up your your function app if we go back to my slides um so another really cool thing is the massive suite of javascript sdks so the software development kits the libraries that allow you to integrate with all of the azure services so azure has provided us a huge list of all the sdks that are available for example if you watched anna's demo earlier she connected to an azure sql database and that can be done from a javascript application using one of these sdks and actually her whole example installation architecture could be written in javascript and she said she had the different language options there so go check out her code if you're interested in that um but if we look at uh if you end up going to the azure javascript page and looking at all the different npm packages that are available this is just scrolling through it just to show you how many there really are because they they make it so that you can do all the things that you want to do with your azure services from within javascript applications available to you through node packages and to integrate these with an application all you have to do is run npm install and the name of the package so this example here is using azure digital twins sdk which is 
part of the iot solution services that are available on azure so you can integrate with all different things such as iot and once you import that client package you can create a new client tool and then you have access to a suite of all the apis to be able to update the twins to insert and create and do all the nice things that you would want to do with say your iot digital twin solution similar goes for any of your ai and ml packages there are sdks available to integrate with those within your javascript framework as well so this is an example of the azure cognitive services sdk where you can do some computer uh vision analysis that allows you to analyze image images so you can do some very very robust things with your javascript applications and another part this was also mentioned in anna's app is of like really cool stuff that you can do with javascript in the cloud is with azure static web apps so azure static web apps are serverless web app web app hosting service offering and it's an offering under the azure app service but microsoft have extended the managed hosting service to automatically build and deploy full stack web apps from azure from your repo using github actions or azure azure devops if you want as well to seamlessly connect static web apps with azure functions so it's a fully serverless solution it gives you a globally distributed cdn and a very scalable backend ready to go and you guessed it this can all be based in in javascript and the static content is javascript based anyways unless you're using like jekyll or one of those there's other um frameworks but generally it's you have say like your react or your view app that's transpiled into javascript then you have your api layer of your azure functions which can nicely be written in node and have your end-to-end javascript application for that as well and um if we're looking at the visual studio tools as well there's an extension available for you for azure static web apps that sets it up uh automatically for you and that'll automatically create the github action which helps you quickly deploy your code to azure and it'll set up some of the framework and do the same kind of thing that allow you to build deploy and debug all from within visual studio code and the cool thing is if you do everything uh end to end say in javascript in azure static web apps you can put breakpoints in both your front and back ends and debug them from within visual studio so it's it's really nice and easy to work with and really great for javascript developers or people who are wanting to work with javascript and it lets you build and deploy everything as one unit so we mentioned github actions so that's part of the the solution with azure static web apps but it's actually really cool github actions is a ci cd tool to automate customize and execute your software development workflows right from within your github repository and you can actually even write your github actions in javascript so if you have custom actions where you're wanting to maybe automatically open up an issue with some some information or just print out certain information in a message you can use javascript to write those custom actions so the javascript actions are just programs written in javascript that run based on a specific trigger and it's a modular piece of code and you can run it directly on the ci server or the build agent so they actually end up being faster than container actions which is the other way of writing custom actions and because this is all part of 
the microsoft ecosystem it all integrates and works very nicely with azure um i'm quickly just gonna jump over to a uh github action that was written uh in javascript so the template for the yaml you still you still need the yaml template file um but all it does is you're set defining the action and you're defining the inputs to your function and what that's going to look like this is just a hello world one so if we go to our main file all it's doing is taking in those inputs and having some code there so here it's just doing a console.log we can do really advanced fancy stuff such as connecting to different apis and as i mentioned creating issues and all of that so you can get really creative with what you're doing with javascript so in summary we have seen how to create a serverless function in javascript how you can deploy it using your javascript dev tools how you could set up a microservice serverless solution an example of javascript sdks connecting to a cloud service we've seen a little bit of the azure static web apps and how you can work with github actions as well and there are many many things you can do with javascript and javascript in the cloud you can expand out into machine learning and ai and you can even go further into the world of gaming and robots and mobile and cloud because javascript is a very very powerful and amazing language there's even been a complete implementation of a linux emulator written in javascript it can compile c programs support shell commands and everything all based on a javascript implementation there is a concept called the rule of least power from tim berners-lee the creator of the web that suggests the least powerful language is suitable for a given purpose as a corollary to this rule we get atwood's law which states that any application that can be written in javascript will eventually be written in javascript so i hope if you're not using javascript today that you go out and learn it you look at all the cool and awesome stuff you can do with it especially with the integrations you can have into the cloud and all the different azure services because the opportunities with javascript are endless especially when paired with the power of the cloud so thank you um i have a link there for all of my slides and resources and the the different tutorials and stuff that i use today and i'll take any questions if we have some time thank you melissa it was a very uh it was a very knowledgeable session i learned quite a bit from it one thing about i personally like about azure functions is that i don't have to worry about any certain programming language in order to you know start writing code on it with that we do have a question from one of our audience which is what is your preference for the node.js unit testing framework um there's quite a different uh range of frameworks that you can use when you're you're testing um javascript in general so i've used um like jasmine you can use that for for node as well um and uh yeah a few different ones but yeah jasmine's a really good one thank you okay so with that uh we have no other questions so uh i would like to thank you melissa while we move on to the next session so moving forward i would uh like to invite patrick who will be talking about uh how we can do monitoring uh with serverless tech it's going to be exciting over to you patrick all right great thanks rohu for your warm introduction and thanks again um everyone for joining on today's ipad track on azure service conf there have been some great 
sessions so far, and it's a good opportunity for us as serverless and Azure lovers to get together to share our passion, our experience and the lessons we learned using Azure serverless technology. My topic today is how we can build a dynamic heat map and monitoring dashboard system using Azure serverless, and coincidentally I have also prepared a demo that's done in TypeScript, so if you enjoyed Melissa's session about using JavaScript married up with Azure serverless technology, do stick around, and hopefully you enjoy my session as well. A little bit about myself: my name is Patrick, I'm working at SSW Melbourne as a senior software architect, and I see myself as a full-stack engineer and public speaker. I have over seven years of experience in the IT industry, during which I have successfully designed, architected, led and delivered a number of solutions for a variety of clients in different industries, with a focus on fintech. My interests in technology include IoT, big data, machine learning, AI and, most importantly, Azure and Azure serverless technology. Over the last year or so I've been using Azure serverless technology extensively, helping my clients solve their business problems and build applications like real-time voice and video streaming as well as document collaboration. If you want to find me on Twitter, here's my handle; if you'd like to be a little more formal, this is my LinkedIn. At SSW each consultant has their own profile page, which you can easily find by googling "Patrick SSW". In my spare time I like to follow, and sometimes contribute to, some open source projects. I've recently been working on a conversational AI chatbot in an SSW public repo, which can help you find an employee in an organisation with certain skill sets and availability, so come to my GitHub and check out my activity if you are also interested in AI. Lastly, there's a technical blog that I sometimes write; I see it as good practice to retrospectively summarise the experience I gained or the lessons I learned from my recent projects. All right, enough introduction, let's look at the agenda. First of all, I'd like to show you and explain why we want to build a heat map and dashboard system, why they are a good fit, and what the key elements are in a contact tracing or social distancing monitoring system. Then we'll move on to introducing some Azure serverless technologies and why I reckon they are a good fit for designing a solution that renders and delivers our data and conveys the information the stakeholders are concerned about around contact tracing and social distancing; I'll especially focus on Azure SignalR Service and Azure Functions. Then we'll move on to a design pattern that comes with Cosmos DB but is often ignored by developers, called the change feed, and the change feed design pattern, and why I believe it is the best fit as the core engine of this heat map and dashboard system. Once we understand all the technologies, we can bolt everything together into our overall architecture design, and then I'll move on to a quick POC demo to show you how we can build this solution easily on Azure serverless. For a well-known reason, since the COVID-19 pandemic started, contact tracing, social distancing and, yes, lockdowns as well have been the new buzzwords. A lot of great solutions have been invented by technologists around the world to capture contact
tracing history and to measure social distancing via smart devices, either a watch or a smart lanyard or a band. Now that we are able to collect data from different channels like IoT or Event Hubs and so on, the next challenge is how to better visualise the data and serve it to the stakeholders and decision makers so they can take action. And mind you, the people who want to look at the data are sometimes from a non-tech background, so how properly and attractively we render the data in front of them will either help them or hinder them in making decisions. A good way, I reckon, to convey the data is via a heat map or a dashboard. If we ask a manager at a shopping mall to run a SQL report or a CSV report, it may be annoying, but if instead we can show a dynamic heat map and dashboard with a real-time updating chart in front of them, they'll tend to be drawn in and find the most useful insights they can. We want our dashboard and heat map to be real-time, and we also want them to cater for a certain amount of dynamicity in the data, in terms of the types of data. Why? Because, as I mentioned, we might collect data from different sources, and technology moves very fast these days, so if a manufacturer decides to upgrade a certain smart device, it's likely that the format of the data it collects will change. We want to keep the change required on our end, in serving the data, to a minimum; we can make changes, but we don't want to spend too much time making them. Once we have a nice dashboard system, it can be deployed to a small venue, like a small company or startup or a restaurant, or at an enterprise level, a big building or a big stadium. Now here's an example. Aidan is my son; he's a "horrible three", and apparently we are in lockdown, so we are not sending him to his childcare. Childcare here in Australia works like this: the kids are usually grouped according to their age and assigned to a dedicated room, so throughout the day the kids spend most of their time in the room they're assigned to. However, these little creatures are very active, so the exception is that sometimes they want to go to another room to play with their friends or attend some classes, and this affects the social distancing that we try to apply on the premises of the childcare. The management team, or the director of the childcare, would like to know whether, at some point in time, there are more than, say, 10 people, including kids and educators, in a room, and if that happens they should take action promptly: they should go there and bring some kids back. So if we are going to build a solution, what can we do? We need a smart device, of course, to detect how many moving human beings are in the room, and once we have that, imagine we have the data already: how can we better render it? Of course, we want to set up a big TV screen at the reception desk and show a dynamically updated chart, so management will notice that a certain room has more than X number of people and take action. Now that we understand the requirements, let's explore some Azure serverless technologies that can help us build our solution. The first technology I'd like to bring up is Azure SignalR Service; yes, SignalR Service is one of my favourite Azure technologies, and a rough sketch of the serverless SignalR pieces follows below.
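As a hedged sketch of what "SignalR Service plus Azure Functions" looks like in code, assuming the Node.js Functions model and the @microsoft/signalr client; the hub name, the binding names and the event name "roomOccupancyUpdated" are illustrative assumptions, not Patrick's actual code.

```typescript
// negotiate/index.ts: HTTP-triggered function the dashboard calls first.
// A signalRConnectionInfo input binding named "connectionInfo" (declared in
// function.json with the hub name) hands back the URL and access token for SignalR Service.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const negotiate: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  context.res = { body: context.bindings.connectionInfo };
};

export default negotiate;
```

```typescript
// Dashboard client (browser): connect through the negotiate endpoint and
// re-render the chart whenever the serverless hub broadcasts an update.
import { HubConnectionBuilder } from "@microsoft/signalr";

const connection = new HubConnectionBuilder()
  .withUrl("/api") // the Functions app exposes /api/negotiate
  .withAutomaticReconnect()
  .build();

connection.on("roomOccupancyUpdated", (update) => {
  // update is whatever the function broadcasts, e.g. { room: "Lion", count: 15, at: "..." }
  console.log("room occupancy changed", update);
});

connection.start().catch(console.error);
```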
As we all know, the SignalR library has been very famous among C# and .NET developers and is almost our default choice when we try to build real-time applications on WebSockets. It is great, don't get me wrong; however, by using the SignalR library we need to manage the hosting and infrastructure ourselves. If there are tens of millions of client applications trying to connect to the hub we host on our server, then we need to scale our server out, or scale it up to allocate more memory or compute power. All of this can be done, but it requires a lot of effort. SignalR Service is a great choice in this case. What Azure SignalR Service offers is an abstraction, so that with only a turnkey configuration change you are able to scale the SignalR Service up or out as you wish; Azure manages all the rest for you, and you can focus on implementing the logic in an Azure Functions serverless hub. On the server end, if the Azure SignalR Service has an increasing amount of traffic, your Azure Functions can be scaled accordingly, and this happens automatically, so we don't need to worry and can focus on writing our business logic, which is great. Once we know how we can broadcast the information to a client application, the dashboard in our case, we want to find the best choice for a triggering point, so we know when there is some data we want to broadcast, and that's where I'll mention the change feed. The reason I think the change feed is good is that there's a pattern that allows us to use the change feed with Cosmos DB as a central data store; I'll explain more about this. With that, let's look at a diagram here. We have our Azure Cosmos DB sitting here and, as I mentioned before, we can have data from different sources ingested. That ingestion is not today's concern, but we know we can ingest the data from IoT, big data, Data Factory and so on. Once the data arrives, the change feed comes into play: it captures the change in the document, either a document insertion or an update. You might wonder why there is no deletion; I'll expand on that later. Once an item appears on the change feed, we can do something subsequently. The first thing we can do is trigger a call to an external API, and this includes SignalR Service, so we can react to the change on the Cosmos DB container (a sketch of that trigger-and-broadcast step follows below). Or we can use Azure Cosmos DB as an intermediate store in our entire data journey, with some subsequent processing or data transformation tasks we want to perform. Or, lastly, we can do some data warehousing, like data movement between containers or moving data to Azure Blob Storage for backup purposes, with zero downtime and minimal impact on the live Cosmos DB. So here is the change feed design pattern in summary: it enables us to efficiently process a large amount of data, especially with a high volume of writes, which is the nature of Cosmos DB, so we don't need to do anything in particular, and it is also a great alternative to querying an entire data set to find what changed. Traditionally, if you want to know what has changed within a certain time frame in your data store, what we tend to do is query a snapshot at the beginning of the time frame, then query another snapshot of the entire database at the end of the time frame, and then calculate the delta. It works, but it requires a lot of computation power, because we are querying for a snapshot of the entire database.
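Here is a minimal, hedged sketch of that "change feed triggers a broadcast" step, again using the Node.js Functions model. The binding names ("documents" for the Cosmos DB trigger, "signalRMessages" for the SignalR output), the target name and the document shape are illustrative assumptions; the real bindings would be declared in function.json.

```typescript
import { AzureFunction, Context } from "@azure/functions";

// Fired by the Cosmos DB change feed whenever documents are inserted or updated
// in the monitored container; pushes each change to the dashboard via SignalR Service.
const broadcastOccupancy: AzureFunction = async function (
  context: Context,
  documents: Array<{ id: string; room: string; count: number; timestamp: string }>
): Promise<void> {
  if (!documents || documents.length === 0) {
    return;
  }

  // SignalR output binding: one message per changed document.
  context.bindings.signalRMessages = documents.map((doc) => ({
    target: "roomOccupancyUpdated", // the event the dashboard client listens for
    arguments: [{ room: doc.room, count: doc.count, at: doc.timestamp }],
  }));
};

export default broadcastOccupancy;
```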
The change feed is a great alternative to this: every single change within the time frame is captured by the change feed, so all we need to do instead is accumulate each individual change, and that is our delta. We don't need to do any calculation, we just add them up, and that's it. Cosmos DB is globally distributed, so the change feed is also globally distributed; as an outcome, we can deploy our subsequent applications, whether an API call, another data processing step, or a data warehousing task, as close to our consumer applications or clients as possible. The key benefits of the change feed pattern include the following. First, it's perfect as a central data store for the event sourcing pattern: in a typical microservices or event-driven architecture we will have some type of message queue, and the Cosmos DB change feed can be a great alternative to a traditional message queue. Why? The biggest benefit I see is that it supports multiple consumer applications subscribing to the container's change feed without any additional work in your infrastructure. If we compare with Azure Service Bus: to set up multiple subscribing consumer applications in Service Bus, we need a topic, we need to provision multiple subscriptions, and we need to allocate each consumer app a subscription. With the Cosmos DB change feed, in comparison, we can simply spin up as many consumer applications as we want; all we need to do is provide each one with a connection string, and that's it. And of course, from a cost perspective there's no extra charge to use the change feed; use it or not, it comes with your Cosmos DB engine, so we might as well use it. After looking at all the benefits, remember that everything has two sides, so there are limitations. The change feed is not an operational log, meaning that we can capture the changes, but if you are looking for a full operational log of your data store engine, the change feed is not a good choice. Also, only the most recent change of a given item is included. What this means is that if, for some reason, your consumer application misses a change item, perhaps because of an exception in your logic or some downtime in your consumer application, it's very hard for you to recapture it. So it's not suitable for applications that need the ability to replay past events or need dead-letter queue management; if you're looking for those, you'd better choose a traditional event queue such as Azure Event Hubs, Service Bus or Kafka. And the change feed in Cosmos DB does not capture deletes. This is annoying, but there's a way to get around it: we can have an isDeleted flag on each document that we store in our Cosmos container, and whenever we want to delete an item, we don't actually remove it from the container; we set the isDeleted flag to true, so it's treated as an update, the change is still captured by the change feed, and all our subsequent applications can still be triggered as usual. Cosmos DB also offers a great feature called TTL, time to live: when a certain time arrives, based on our configuration, the document is completely removed, so we can leverage that to do the actual delete, while still capturing the change by setting the flag (a small sketch of that soft-delete workaround follows below).
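A hedged sketch of that soft-delete workaround using the @azure/cosmos SDK; the database and container names, the document shape and the 24-hour TTL are assumptions for illustration, and per-item TTL only takes effect if TTL is enabled on the container.

```typescript
import { CosmosClient } from "@azure/cosmos";

// Connection string and names are placeholders.
const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING ?? "");
const container = client.database("tracing").container("occupancy");

// Instead of deleting the document (which the change feed would not see),
// mark it as deleted and let Cosmos DB's TTL physically remove it later.
async function softDelete(id: string, partitionKey: string): Promise<void> {
  const { resource: doc } = await container.item(id, partitionKey).read();
  if (!doc) {
    return;
  }

  await container.item(id, partitionKey).replace({
    ...doc,
    isDeleted: true,   // downstream change-feed consumers treat this as a delete
    ttl: 60 * 60 * 24, // seconds until Cosmos DB removes the item for real
  });
}

softDelete("room-lion-1515", "Lion").catch(console.error);
```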
the last limitation i'd like to touch on is that there is a guaranteed order in the change feed within a partition key value, but not across partition key values. so if the order of changes across different partition keys matters a lot in your application, then you need to choose a great partition key. this topic is quite big, so i'm not going to expand on it too much in the interest of time. we've understood why we want to use these technologies, let's look at how we can do it. how can we read the change feed? there is a push model, as you have already seen on the screen: we can use the azure functions cosmos db trigger. it comes with azure functions and is no different from other azure function triggers, but what it does is encapsulate a layer that uses the change feed processor library under the hood. the architecture is very similar to how azure signalr service encapsulates the under-the-hood infrastructure management of the signalr library: we don't need to worry about how to scale or how to troubleshoot, so we can focus on implementing the business logic. the other way around, there is a pull model, which means the client application needs to request the changes by itself; there is no automatic polling, the client application needs to manage that itself. all right, we understand why and how, let's look at the overall architecture. starting from the right hand side we have our data store, which is our cosmos db. whenever a change is captured by the change feed built in with cosmos db, it can trigger the azure functions serverless hub, and the serverless hub has an output binding that broadcasts messages to client applications via a typical pub/sub architecture through signalr service. now before we allow the connection to be established between the client application, our dashboard on a big screen, and the signalr service, there needs to be a negotiation, or handshake, process happening beforehand; i'll show you the code so it will make more sense. all right, with a quick overview of the overall architecture, let's see things in action. if we jump to my domain we can see that i have a colourful dashboard in front of your eyes, and what is more important is that it's being updated in real time without me needing to click any button to refresh; you'll see that when it happens. let's have a quick look at this dashboard: we have an x axis of the timestamp and a y axis of the different rooms in my son's childcare. if you hover over any slot you'll see the number of people in the room, and we can see that in the lion room at a quarter past three there are 15 people. oh, it just changed. we can even hover over the extreme or high or other categories, so that we know the urgency of taking action to either move some kids back or better arrange the classes in the rooms. this is cool, and i built this application with vue. let's move on to azure and quickly give you an overview. you must have seen all these services before, so i'll just quickly walk through what they are: we have a cosmos db, which comes with the change feed feature, we have our azure function, our signalr service, and application insights to monitor the application, easy. now if we jump to the code, what i'd like to show you, starting from the client end, is that in order to connect to signalr service all we need to do is very simple: we install a package and import it as we import any other package, then all we're going to do is instantiate an instance of a signalr hub connection via the hub connection builder and give it the url, which we can grab from the signalr service on azure, and then we're done, we're all good. we can define any functions that need to be triggered on certain events, so here when a new message arrives we can update our chart, and then we just start the connection, that's it.
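a minimal sketch of that client-side wiring, assuming the @microsoft/signalr package; the negotiate url, the "newMessage" event name and the chart helper are placeholders rather than the code shown in the demo.

```typescript
import * as signalR from "@microsoft/signalr";

// the azure function app's negotiate endpoint hands back the signalr service
// connection details; the url below is a placeholder
const connection = new signalR.HubConnectionBuilder()
  .withUrl("https://<your-function-app>.azurewebsites.net/api")
  .withAutomaticReconnect()
  .build();

declare function updateChart(stats: unknown): void; // hypothetical vue chart helper

// whenever the serverless hub broadcasts a "newMessage" event,
// refresh the dashboard with the latest room occupancy
connection.on("newMessage", (roomStats) => {
  updateChart(roomStats);
});

connection.start().catch((err) => console.error("connection failed", err));
```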
so from the client application perspective it's very easy. if we look at the backend, what we have is a bunch of azure functions. i have a data ingester, which is a function that fakes some data and ingests it into our cosmos db, but in real life the data can come from different sources, it doesn't really matter. once we have the data we can look at our broadcasting function. we define an input binding from the cosmos change feed to our application, so that if anything changes in my device choice collection the function is going to be triggered, and we also define an output binding with signalr, so that any messages generated and hydrated in the function body can be broadcast to the clients connected to our azure signalr service. if we look at the implementation, it runs this way: we get a copy of the documents whose change has been captured in the change feed, we can do our own data massaging and hydration if we need to, and then via the binding we define the signalr message, populate it, and that's it, done. our client application gets the message broadcast by signalr service; azure handles all the connections and all the scaling for us. now, as i mentioned before, there's a negotiation process that needs to happen beforehand. i haven't implemented any logic here, but if we need to, because this is a simple http trigger, what we can do before we return the connection information and allow the client application to connect to us is grab some identity information coming with the request, either the jwt token or the cookie, and do the authentication and authorisation before we allow a connection by issuing the connection info; that way it is secure. as you can see, the programming model is so simple: with just a few tweaks here and there we are able to build a great dashboard system.
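a sketch of what that broadcast function could look like, assuming the classic function.json programming model for azure functions; the binding, database and hub names are illustrative, not the demo's actual configuration.

```typescript
// the bindings would look roughly like this (names are illustrative):
//
// {
//   "bindings": [
//     { "type": "cosmosDBTrigger", "name": "documents", "direction": "in",
//       "connectionStringSetting": "CosmosConnection",
//       "databaseName": "tracking", "collectionName": "rooms",
//       "createLeaseCollectionIfNotExists": true },
//     { "type": "signalR", "name": "signalRMessages", "direction": "out",
//       "hubName": "dashboard", "connectionStringSetting": "SignalRConnection" }
//   ]
// }

import { AzureFunction, Context } from "@azure/functions";

// triggered by the cosmos db change feed; each changed document is massaged
// and pushed to connected dashboards through the signalr output binding
const broadcast: AzureFunction = async (context: Context, documents: any[]) => {
  if (!documents || documents.length === 0) return;

  context.bindings.signalRMessages = documents.map((doc) => ({
    target: "newMessage", // the event name the client subscribed to
    arguments: [{ room: doc.room, count: doc.count, timestamp: doc._ts }],
  }));
};

export default broadcast;
```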
if we jump back to my slides, as a quick summary: today i've shown you why we want to build a heat map and dashboard system and how it can help convey information and help the audience and stakeholders find insights and take action properly if they're concerned about contact tracing or social distancing on their premises. we've also explored some azure serverless technologies that i reckon are a good fit for our solution. i've especially focused on the change feed design pattern that comes with the azure cosmos db change feed, and explained why it can be used as the core engine to help us architect this solution. we looked at the solution design and, as part of the software development lifecycle, we've done a quick poc in typescript married up with azure serverless technologies and proved that the dashboard works. so the management team at my son's childcare is happy: they can watch a big screen and they will be alerted whenever a certain room exceeds x number of kids or educators. with that, that's the end of my presentation, i hope you've enjoyed it, any questions? thank you patrick, i loved how you used change feeds to address an interesting scenario. we do have some questions from our audience today. the first question on the screen is: can you guide us with another example of the benefit of using the cosmos db change feed in some other types of applications? yeah, sure. if i may bring my screen back again, let me try to find the slide where we have the diagram. today, in the dashboard system, i mainly used the first route at the top, which is the event computing and notification functionality, but another example could be the data movement at the end. as we all know, the partition key of a cosmos container is very important: if we have a query on our cosmos db container against the partition key, the performance is optimal and the cost is the most effective. however, sometimes we might want some other query pattern. if we have a person document, most of the time we might query based on the person's email address, because that might be the unique identifier, but sometimes we might also want to query only by the person's first name. if we query the same container with the same partition key of email address, the query will go across different partitions, and that's resource consuming and expensive in terms of the rus, the request units, it consumes; we need to pay a lot for that query. an alternative way to do this is to create another container with the person's first name as the partition key and load the original piece of data there, so we can have different query patterns querying different containers. this way we can save cost and achieve optimal performance. now how can we make sure the primary container and the secondary container keep in sync in terms of data? we can leverage the change feed: whenever there is a change to a person's document in the primary container we can move a copy of the data across to the secondary container, so the data stays in sync and your application can still perform the queries using different query patterns. hope that answers the question.
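the container-to-container sync described in that answer could be sketched as another change-feed-triggered function; the binding and container names below are hypothetical, and this is only an illustrative sketch of the idea, not the speaker's code.

```typescript
import { AzureFunction, Context } from "@azure/functions";

// a change-feed-triggered function that copies each changed person document
// into a second container partitioned by /firstName, so queries by first name
// stay single-partition; "peopleByFirstName" is assumed to be a cosmosDB
// output binding pointing at that secondary container
const syncByFirstName: AzureFunction = async (context: Context, documents: any[]) => {
  context.bindings.peopleByFirstName = (documents ?? []).map((person) => ({
    ...person, // keeping the same id means a later change overwrites the earlier copy
  }));
};

export default syncByFirstName;
```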
yep, thanks patrick. another question that we have from the audience is: is there a limit on how many instances you can scale up to when using change feed? here i guess the question is about how many instances of the consumer applications: no, there is no limit. you can have your consumer applications hosted in any type of hosting model, either a web app, a function app or even a containerised app, so the scaling is really up to the hosting model that you use for the consumer app. all right, thank you. the last question the audience has for you is: when using the pull model, does change feed copy data mutations across to a disaster recovery region? that's a very good question, i guess i need to do some research. personally, due to the complexity in the pull model, because the clients need to request the changes themselves, to be honest i haven't used the pull model before, so i'd like to do some research and get back to you. as i mentioned before, you can connect with me on twitter so we can discuss this further offline. thanks a lot patrick for the very informative session. thank you. i was going to jump in with a question before patrick runs away, so can we get him back? i'll just go get him. oh, that didn't work, i was really hoping that transition would work with the production team, but no, all good. if he does come back i'll ask the question of him, and if not we will carry on. so yeah, i've been really enjoying the content so far. is there anything that's particularly stood out to you, rahul, in what we've seen so far today? yeah, i had a lot of fun learning so much about serverless today, aaron, more so that i love the fact that i'm not restricted by the choice of programming languages and i can very easily start building applications and get productive from day one when i'm using serverless infrastructure in my applications. patrick's back, so we did actually just have another question pop in on the chat that i'll ask you quickly. it's not directly about change feeds, but the question is: does cosmos db support any sort of schema validation of the data structures, so that you can ensure the data being sent matches what you know you're storing in your data set? to my knowledge i don't think there's an out of the box schema check on cosmos db, because it's a nosql, a typical non-relational database, where you can store any format, and if the format changes you can still store it. but in your application layer you can always do the validation, or you can leverage the change feed and have an azure function subscribe to it and do the validation, then throw exceptions or alerts if there's any change in the schema. but really, if you choose to use nosql that means you want to cater for the dynamicity in the format of data, so you don't want to do validation unless the change in the format is too drastic. excellent, thanks. so we hope that you are having fun learning azure serverless from the experts today. with that, here's a short message from our friends at azure cosmos db. azure cosmos db helps you get more value for your money by making it easy to manage the components you pay for: database operations and storage. the cost to perform database operations, including memory, cpu and iops, is normalized and expressed as a request unit; more request units are charged for more demanding activities. for database operations you can select one of two models: provisioned throughput or serverless consumption. provisioned throughput is the capacity you allocate for database operations, measured in the number of request units per second and billed hourly. it works best for workloads that always have some traffic and require high performance slas. if the traffic is predictable, you can use standard provisioned throughput to manually set and adjust capacity as needed. if the traffic is unpredictable, you can use autoscale provisioned throughput to instantaneously and automatically adjust capacity between 10 and 100 percent of your set limit; autoscale becomes more cost effective than standard when traffic is unpredictable and not close to maximum capacity most of the time. provisioned throughput may not suit workloads with only occasional database operations and lower performance requirements; these applications can benefit from the serverless model. while it has a higher unit cost, it is consumption based and only charges for the request units used for database operations. for consumed storage, fees are charged for the total gigabytes used per month for both transactional and analytical storage, and you also pay for storage i/o and analytical storage. get the most value from your workloads by understanding the components you're billed for on azure.
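as a small aside on request units, the @azure/cosmos sdk reports the ru charge on every response, which is one way to see what an individual operation costs under either billing model; this is a hedged sketch with illustrative database, container and query names, and is not part of the session itself.

```typescript
import { CosmosClient } from "@azure/cosmos";

// every response from cosmos db includes how many request units the operation
// consumed, which is the quantity both provisioned-throughput and serverless
// billing are based on
async function showRequestCharge(): Promise<void> {
  const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);
  const container = client.database("tracking").container("rooms");

  const { resources, requestCharge } = await container.items
    .query("SELECT * FROM c WHERE c.room = 'lion'")
    .fetchAll();

  console.log(`returned ${resources.length} items for ${requestCharge} RUs`);
}
```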
[Music] we've only got one more session for our event today, and i was going to say welcome to the stage; it's been that long since i've been at an in-person event that i'm just not used to the fact that we don't have stages anymore, i would have thought i'd have adapted to that by now. all right, i'd like to bring up ezzy, who is here to help us navigate this landscape of the various things that we can do with serverless and how we can integrate them all together using azure integration services. over to you. thanks aaron for this wonderful introduction, and thanks for having me at the azure serverless conference, it's a pleasure to be here. today i'm going to talk about the secret behind building a reactive azure integration application, and a little bit about me and how i am the right person to talk about this. before we get started: i'm the lead product consultant for serverless360 at kovai.co. serverless360 is a tool for azure, so we transform the support for azure serverless applications across enterprises, and this role gave me the opportunity to interact with azure experts and solution architects across various domains around the globe. i'm here with some of the interesting learnings i've had in the last four years, out of my total 12 years of it experience. my journey of the last four years in the azure serverless integration space has been amazing, every day turns out to be a new learning, and i'm excited to share some of them with you here. so let us get started with a very basic question: what is a reactive application, and when do we call an application reactive? according to the reactive manifesto it is expected to have four key characteristics, the first and foremost being responsive. we all, whether as consumers or developers of modern applications, need an application which is highly responsive, so let us elaborate on this word a little more to make sure that we build reactive applications. a responsive system is expected to respond in a timely manner if at all possible. responsiveness is a cornerstone of usability and utility, but beyond that, responsiveness also means that problems may be detected quickly and dealt with effectively, even before the customer realises there was a problem; my application should restore itself and respond back to the customer. responsive systems should focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service. this consistent behaviour in turn simplifies error handling, builds end user confidence and encourages further interaction. so how can i build a resilient system? resilient systems should stay responsive in the face of failure. this applies not only to highly available, mission critical systems: any system that is not resilient will go unresponsive after a failure, which has to be avoided. but how do i achieve resilience? through four critical aspects: replication, containment, isolation and delegation. let me elaborate a bit more on all four factors. failures should be contained within each component, so isolating components from each other, and thereby ensuring that parts of the system can fail and recover without impacting the entire system, is very critical here.
and recovery of each component is delegated to an external component, in which case we can ensure high availability by replication wherever it is necessary. the most important point here is that your client should not be burdened with handling these failures. my application is also expected to be elastic if i'm building a reactive application. when i say elastic, the system is expected to stay responsive under varying workloads. in real scenarios none of my applications get static or predictable input rates, they keep changing, so reactive systems should react to changes in the input rate by either increasing or decreasing the resources allocated to serve those inputs, in real time and seamlessly. this implies that our design should have no contention points or critical bottlenecks, resulting in the ability to replicate my components and distribute my inputs among them. reactive systems should support predictive as well as reactive scaling algorithms by providing relevant live performance measures, and they should achieve elasticity in a cost-effective way. and the system is expected to be message driven. most of our modern applications rely on asynchronous message passing to establish a boundary between components, which ensures loose coupling; isolation and location transparency are also ensured with the help of message driven architectures. this boundary also provides a means to delegate failures as messages, and it ensures non-blocking communication that allows a recipient to consume resources only while active, keeping system overhead down. as all these terms sound complex, is there any support i get from application development platforms to help me build reactive applications with ease, which encapsulates all the complexities and lets me focus on the core business needs i would like to achieve? yes, we have azure to save us. what has azure got to offer to help me build reactive applications with ease while ensuring all the four integral principles we just talked about? i have a complete package of enterprise integration platform services that helps in building reliable, reactive azure integration applications. when i bring all of them onto a slide it might sound like i have a huge bunch of resources to pay attention to and to understand, so let me give a glimpse of some of these services which play a critical role in almost all industries, across domains and enterprises who build mission critical applications, and as we proceed further i'm going to bring in an interesting use case from a real customer scenario which has been solved with the help of azure integration services. the beauty of the azure services that i admire is that they fit exactly into your real business use case. we just talked about delegating failure from a component: imagine i have certain messages coming into my service bus queue that i cannot process at this point in time due to various reasons, which could be system specific or business specific. service bus supports a secondary sub-queue called the dead letter queue, wherein i can pile up all my failed messages and process them when it is the right time to process them. such features fit exactly into your real use cases.
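a minimal sketch of reading that dead letter sub-queue when it is the right time to reprocess, assuming the @azure/service-bus v7 sdk; the queue name is illustrative.

```typescript
import { ServiceBusClient } from "@azure/service-bus";

// drain a few messages from the dead letter sub-queue of a hypothetical
// "follow-up-commands" queue and reprocess them
async function reprocessDeadLetters(): Promise<void> {
  const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING!);
  const receiver = client.createReceiver("follow-up-commands", {
    subQueueType: "deadLetter", // read from the secondary sub-queue
  });

  const messages = await receiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
  for (const message of messages) {
    // business-specific recovery logic would go here
    console.log("dead-lettered message:", message.body, message.deadLetterReason);
    await receiver.completeMessage(message);
  }

  await receiver.close();
  await client.close();
}
```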
take for example logic apps: as many of the speakers before me in this conference have talked about the capabilities of logic apps, functions and cosmos db, i need not emphasise how resilient they are, how capable they are of integrating with external third-party applications seamlessly, and how they scale at runtime. so all the four pillars required for my reactive application are largely taken care of by a cloud service provider like azure. now let us focus on our core business. we are going to proceed by addressing a real business need, and we are going to fit these integration services into that need to build a complete end-to-end integration application. let's get started by understanding the business. imagine this is for a shipping and logistics enterprise who has decided to build an intelligent fleet management system. they have vehicles taking care of shipments across the globe, and they wanted to make their system data driven and more intelligent with the help of azure. there is a trend of industries adopting the cloud: some of them choose to build their entire business application in azure, but there are still enterprises who choose to retain their legacy code on premises and integrate part of their application with the cloud services. here is one such case, wherein the customer has their core business logic still existing in their on-premises system, and they want to expose it to the cloud securely and integrate it with other integration services, making use of their capabilities to build a truly intelligent fleet management system. that is the scenario; let us expand it further by understanding all the other requirements they have. to get started, their vehicles are all fitted with telematic devices, and these devices are capable of streaming useful real-time information, like what the fuel condition is right now, what speed the vehicle is travelling at, and what its live location is; all this necessary data is being streamed by the telematic devices fitted to the vehicles. now what do i actually need to solve this business need? first and foremost i need an event ingestor that accepts and stores the stream of events. this event ingestor should be capable of ingesting millions of events per second, and it is expected to scale seamlessly with the scale of my business. i cannot expect my legacy code running in my on-premises application to handle the events as and when they come in, so i need a cloud storage service that will allow my on-premises application to process these events at its own convenient time. and since my core business service exists on premises, it has to be safely exposed to my cloud services, so i need a service in place which can do this job for me, which can expose the service from my corporate network into the cloud securely, without inviting many changes to my corporate infrastructure. i also need a reliable storage option where my business service can process the data and store persistent information too, and that storage is expected to have a business continuity guarantee and enterprise grade security in place. i need an event routing service, because we are creating a reactive application: i do not expect my on-prem service to keep looking at the blob container for the arrival of the event it is required to process, and i do not expect my business service to keep pinging the storage service to check whether the data has arrived or not. instead the system should be reactive, that is, as and when an event is available for my business service to process, i expect a notification to go to my business service informing it that the data it is looking for has come in and it can pick it up for processing; that is how i want this system to work. following the processing of the information, there are certain follow-up operations that need to be taken care of, and this invites interaction with third-party applications, which are my clients, that needs to be orchestrated as and when data is processed by my business service. here i need to make sure my business service is completely decoupled from my backend follow-up processing orchestrator, so i need to bring in a messaging service that decouples the follow-up actions from my core business service. these are my requirements from the azure integration space to fulfil the business needs of the fleet management system that i am building.
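the event ingestor in these requirements turns out to be event hubs in the design that follows; here is a minimal sketch of a telematic device pushing one reading into it, assuming the @azure/event-hubs sdk, with a hypothetical hub name and payload shape.

```typescript
import { EventHubProducerClient } from "@azure/event-hubs";

// a telematic device (or a gateway on its behalf) sending one reading into
// the event ingestor; hub name and payload fields are illustrative
async function sendTelemetry(): Promise<void> {
  const producer = new EventHubProducerClient(
    process.env.EVENT_HUB_CONNECTION_STRING!,
    "vehicle-telemetry"
  );

  const batch = await producer.createBatch();
  batch.tryAdd({
    body: {
      vehicleId: "truck-042",
      fuelLevel: 0.63,
      speedKph: 78,
      location: { lat: -33.86, lon: 151.21 },
      timestamp: new Date().toISOString(),
    },
  });

  await producer.sendBatch(batch);
  await producer.close();
}
```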
what is the next step? let us now choose the right azure services to fit these business use cases, and let us visualise our business application. here is how i have designed the fleet management system, which is partially in the cloud. i have an event hub in the place of the event ingestor, capable of seamlessly ingesting the stream of telematic events, and event hubs has a wonderful feature called capture. when we deal with events there are two critical processes to take care of: saving the events and then processing them. event hubs takes away the overhead of me dealing with saving the events; let me show you how easy it is to enable saving of events using event hubs. here is the bunch of resources that i have in my azure portal to fulfil this business need. here i have my event hub; i have created it in the standard tier, where the capture feature is available (the basic tier doesn't have capture enabled), and it's just the flip of a toggle button with a little bit of configuration, so the complete overhead of saving events is taken off me and i can now focus on processing the events, which is my core business service. getting back to the presentation: i turn on capture on my event hub, which captures the events into a storage destination. here i have a couple of options, i can choose either a storage account or azure data lake storage; i have chosen a storage account, blob storage. so when i turn on capture, the events received by the event hub from the telematic devices are saved comfortably into my storage blob. the next step is exposing my on-prem business service to the cloud, and i have put azure relay in place to do this job. relay is a reliable service that enables you to securely expose services that run in your corporate network to the public cloud, and this can be done without opening a port on your firewall or making any intrusive changes to your corporate infrastructure, which is the best part. so i expose my on-prem service through a relay, and the next step is to make this whole setup event driven, which is key in a reactive application. so what is the component that is the heart of my reactive application here? event grid. event grid is a wonderful service from azure which can seamlessly route events from any service to any service; you can publish custom events to an event grid topic, and you can subscribe with a webhook in an event grid subscription.
this makes event grid an excellent choice to route an event in any scenario. in this case i have an event hub that emits an event to event grid saying that a capture file has been created, and event grid passes the event to the relay, telling it that the file it is looking for is now available. now the on-prem application can comfortably pick up the file from the storage blob and continue with the processing at its own pace, so this part is completely achieved. next are the follow-up operations that need to be taken care of. we have collected and processed a lot of data to come up with useful information, and this information has to be passed on to the respective clients or third-party applications to react to. that is where i bring in my logic app, a wonderful orchestrator service from azure which exhibits a number of connectors that facilitate integration with third-party applications; you can even build custom connectors, and a number of built-in connectors also let you build your solution without writing a line of code. what is necessary here is to decouple the core business service from the follow-up operations defined in my logic app, so i bring in a queue: any command that is expected to be processed by the logic app is composed as a service bus message and pushed into this service bus queue, and the logic app, which is listening to this queue, can pick up the message at its convenience and process it.
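a minimal sketch of how the core business service might drop such a command onto the queue for the logic app to pick up, assuming the @azure/service-bus sdk; the queue name and message shape are illustrative, not the customer's actual contract.

```typescript
import { ServiceBusClient } from "@azure/service-bus";

// the core business service hands follow-up work to the logic app by
// dropping a command message on the hypothetical "follow-up-commands" queue
async function queueFollowUp(shipmentId: string): Promise<void> {
  const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING!);
  const sender = client.createSender("follow-up-commands");

  await sender.sendMessages({
    body: { shipmentId, action: "notify-client" },
    contentType: "application/json",
  });

  await sender.close();
  await client.close();
}
```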
as we observe, we have clearly decoupled all the core components of this business application, ensuring that the failure of one is not going to impact the others; the others would continue working without any challenge. we can make use of features from azure like geo-replication to ensure high availability, and you can make use of dead lettering in event grid and service bus queues to make it fail proof. and serverless, when we coin the term, means seamless scaling, so you can configure things appropriately to make sure your application is elastic. we have also designed the application to be completely event and message driven, ensuring that the components are loosely coupled. later i'll also show you a glimpse of event grid in the azure portal, where we have a connection to the relay, which is a hybrid connection because it's exposing an on-prem service. this application design is also extendable: in this particular scenario we later decided to replace the on-prem component with a reliable durable function, and inserting a function into the scenario can be done seamlessly without impacting the rest of your business components. so we understood the business needs and we have identified the right services to fit into the business scenario and build the solution. what is next? as we build these services it's necessary to adhere to best practices, but this topic itself could run for an hour or so, so i'm giving you a reference where you can find tips from experts across the world on serverless: a knowledge base that covers serverless best practices, powerful tips and the latest announcements from top-notch industry experts, to help you discover new opportunities and save time within the stack. some of the very common tips would be: make sure you adhere to standards when you build your logic apps, say retry policy definitions or error handling, and make sure you take care of these when you build your azure services; extensive knowledge of every service we are working with will help us use them appropriately. last but not least, we have built a reactive application, and it's equally important to support it in real time, so you need to define a frictionless support strategy for your reactive integration applications in production. azure does offer a handful of management and monitoring toolsets that help us have live monitoring of our resources. to add to that, let us look into a typical support scenario in an enterprise. i have various sets of users who are involved in supporting my azure application that is in production, but azure portal access is restricted to designated azure experts in an organisation, because it's pretty much harmful to give support people access to the azure portal; that's how enterprises see it. when we look at how typical support is split across the members of the team, we find that it requires a good amount of expertise to deal with the real-time support requirements of a production application, but unfortunately the business users and the l1 and l2 support teams do not have visibility of the solution they are dealing with, and there is a security challenge in giving them access to the azure portal, while the l3 it support team is present with only limited experience of the solution. how do we tackle this? what actually ends up happening is that most of the support gets addressed by the level 3 it support team, and 70 percent of such support requests are addressed by an experienced azure developer. what is the harm here? you are investing your skilled azure resources' time into support tasks when it should instead be going into innovation in your business. how do we tackle this? that is where a tool like serverless360 can pitch in: we have brought in an excellent tool set that complements the azure portal by delegating the support tasks to these it support teams. we enable the support team with the right tool set to help them understand the solution they are dealing with and proactively react to the issues that occur, so we can safely offload these support overheads from the azure experts, allowing them to focus on innovation and hence reducing the total cost of ownership of your application. how does this look in action? let me quickly give you a glimpse. i just showed you the resources that i created in the azure portal to fulfil the business need, and i have the same resources represented in serverless360 as an application. when a support person looks at this representation of the business application in serverless360 they can clearly point out where the problem is: they visualise the business application, they identify the issue, they find that the dead letter messages in the queue have piled up. if you are not using a premium tier queue you do not get visibility of the messages in the queue, and that can be addressed with the help of serverless360, where you get visibility of the dead letter messages and they can be viewed and processed right from there. this way the tool set to fulfil the operational needs of an azure application is provided to the support team, and this can also be automated with the help of automated tasks in serverless360.
say any failure in your logic app can be spotted and actioned as required, the run can be resubmitted or restored, and any real-time tracking of the business application can also be done with the help of business activity monitoring. this is how the tool can complement the azure portal in making your azure experience much better. in the scenario we just talked about, a typical support conversation would look like this: a business user asks, is the follow-up processing broken? i have no idea, let's raise a support call. we can't see anything, let's escalate it. this is how the issue gets escalated to the azure team, and the azure expert is expected to invest a lot of time in looking into the problem. instead, when we have the right tool in place, the team can be proactive: they get an alert and they solve the problem even before the issue has been escalated by the business users. so be proactive, and give your customers a wonderful experience when dealing with your azure application. to summarise all that we discussed today: we understood the requirements of a reactive application as recommended by the reactive manifesto, and we understood how to build an application to meet the business needs by choosing the right services from azure. i gave you the resource link to check on the best practices you will need to adhere to when you construct your business components, and when you need to define a frictionless support strategy, serverless360 is the place to go. here are some useful links, and thanks for listening to me; i am open to any questions that you have. excellent, thank you for that, that was a really informative session, particularly helping to demystify some of the complexity that we've got around azure. unfortunately we are out of time today, so we don't have any time for questions, but do reach out on social media. lastly, i would like to thank all of our speakers with a big round of applause, and i'd also like to thank my co-host rahul for jumping in and helping me out with the hosting duties today. but that's all that we've got, thanks for watching and we'll see you next time.
Info
Channel: Azure Cosmos DB
Views: 193
Id: 5x0ZOUlCq2I
Length: 184min 55sec (11095 seconds)
Published: Thu Sep 30 2021