Mastering Azure API Management - Demos and Best Practices

Captions
cool and next um I think today we have um our awesome speaker Jeff who is a Chief Architect at Fred ID group um and Jeff will introduce you some cool stuff about Azure API management so you will learn from quite a few demos about how to use Azure API management and some new features common use cases I think that would be cool um okay or to you Jeff great thanks I'll start sharing yeah right hello everyone today I'd like to present on API management my name is Jeff Gilberto a bit of background on myself my family and I have been living in Auckland since the end of 2006 I'm currently Chief Architect at Fred it a leader in the Australian pharmaceutical software industry I also collaborate as an author software developer and as a indie game developer thanks for having me so what we'll cover uh I'll provide an overview of apim and highlight major features of apim I'll move fast through these as I really want to explore capabilities of apim that I've seen organizations not used to their full potential compared to other Azure Services apim is different than most as it provides an offering that merges Business and Technology concerns this means when we approach apim we need to look at it with a different lens than we would a function app or a database service I also wanted to show some patterns as I feel these show the strength behind apim this does mean this will be a demo heavy presentation so we'll see if the demo gods are with me tonight saying that I do recognize there will be those on the call with deeper knowledge of apim devops load testing and other aspects of the presentation please post your thoughts in the chat so we can discuss at the end and if you see me struggle please speak up and help me out let's get started so uh first and overview Azure API management provides organizations with a comprehensive set of tools and features to create manage and secure apis enabling them to expose their digital Assets in a secure and controlled manner Microsoft describes 
APM as having three distinct components the API Gateway acts as a facade to the backend Services allowing API providers to abstract API implementations and evolve back-end architecture without impacting API consumers API providers interact with the service through the management plane which provides full access to the API Management Service capabilities this can be through the Azure portal Azure Powershell Azure CLI Visual Studio code extension or client sdks the open source developer portal is a fully customizable website provided so you can easily provide documentation about your apis before we start on the features let's discuss a simple scenario that we will use throughout the presentation so here's a typical scenario supporting the ability to retrieve and add patients to a database this was implemented using Azure SQL function apps and Azure storage queue are retrieve patient API supports HTTP get operation the add patient operation is triggered when an entry is added to the add patient queue so now we want to expose this functionality this is where apim comes in there are a couple things to note about this example first we've chosen to expose our api's as restful where they were originally designed as RPC this can be referred to as API wrapping or API bridging the advantage of API wrapping might just appear like personal preference but if you imagine a scenario where a company might have dozens or even hundreds of apis created at different times or in different technology then apim is a great mechanism for presenting a consistent offering to customers so let's see this in action right I'm using Postman just to test the API and in this case warm it up so first I'm going to be doing retrieve patients sorry about the size uh but hopefully you can read it I'm calling a Dev instance of apim um where I'm using the the restful endpoint so you can see patients doing a get operation if I send that in um I'm going to get eventually account a total count of the patients 
in the database and then just an array of pretty much junk data that I've that I pushed in you can contrast this with the function app so you'll see here the function app is using a rpt RPC style endpoint but it's in essence basically doing the same thing it in the real world you could imagine uh maybe this older endpoint returns XML and then you translate an apim to more of a a modern restful uh interface and the other thing to point out is here again off the dev API M I'm posting a Json payload to the queue which then is processed so this is just an acknowledgment that it was added to the queue and so we should see then our account go up from 259 hopefully oh first part of the demo okay something's funny happening already did you send that in again oh dear well I'm sure we're going to stumble upon what has broken in a second right moving on always the way with uh demos isn't it right oh and the uh another pattern to highlight is we have the basis when it works of Cubase load leveling the main benefit here is to reduce the impact receiving many requests in a short period of time that might have on our application so you can you can basically get the idea that we are smoothing out our our request received uh rate so let's take a look at how this was implemented right here I am I'm in in apim um uh I'm sure some of you have seen this before but basically we have our endpoints here and so I'm going to take a look at the post operation which is where we're um posting to the queue and so if we take a look at the policy again apology apologies for the size you can get an idea of the the way that we Define um our our different apis so for now if we Overlook the set body and concentrate on the right queue here with this set body operation we're basically wrapping the payload that we received as a base64 encoded message we are directing this message to a particular backend service and then we're setting some headers that are required um uh authentication using managed 
entity which we'll talk about in a second and we're performing a rewrite of the URI to the messages format so all we've really done here is implemented the um defined a message and I'm just bringing that up now uh that of the put message which they call it put but it's actually a post let's see if I can zoom in a little bit nice so you can see it's a post message to the messages endpoint and um and further on you can see the different request headers that are required so um nothing too revolutionary but a nice use of of apim where we use the functionality of APM as opposed to standing up another service in order to integrate to the storage queue so the demo I put together centers around two deployments of the patient API a development environment and a production environment I'll be using devops Azure devops and the API Ops framework for CI CD in addition to the monitoring within apim I will be using application insights to visualize what is happening within the apis and to create some activity against our environments I'll be using Azure load testing managed service um so so that we can start to build up some activity I'll launch some tests now so um this is the Azure low testing service and I've defined two tests within it basically they they're just jmeter scripts so um what I'm going to do for the first one is I'm just going to go ahead and start it so this is against the the dev environment and I'm going to do the same thing against the prod but let's just pause a second and take a look at some of the features of the low test Suite so we are able to configure the test so just to give you a little bit of understanding you know we can give it a a nice name the test plan is just a j meter file that's been uploaded in parameters I've been able to feed it specific parameters used in the test for example the endpoint the production endpoint for the test plan this is the same test plan but I needed to use different subscription Keys we'll um we'll talk about that in 
a second the loads also variable so I've set this up to be on two instances but because we want to push the envelope a little bit I'll increase this to four um the other thing I can do is uh create some test criteria I haven't in this case but um uh if if people are interested I'll I'd love to come back and talk about load testing and the other aspect is is we can monitor different aspects within um uh Azure itself and this this is one of the great features of the load testing Suite where you can actually hook it up to different services that are involved so in this case I'm bringing in aspects metrics from the database from apam and also from application insights just to monitor so I'm going to go ahead and apply that and uh it'll kick me out and I'll go ahead and start the Run so we can give it a nice name like demo you know the uh hopefully it'll work we'll just kick off the start so what we'll have uh actually what I'm going to do is I'm going to switch back to the dev instance because he should already be showing a better activity oh did I not hit run there's nothing hit run sorry everyone I'll go back to the dev instance and what we'll see we'll see that we'll see it kick off in a second and because of that whoops I'll pop back to prod we'll see if this guy is kicked off all right good so he he's actually started up so you can see that we're doing our four instances um looks like there was an initial delay on the the startup of the services but now it seems to be um ticking along fine so you can see the response times uh showing up this is the number of requests per second so looks like we're maxing out oh well we're getting up to around 120 so that's quite exciting this is probably the most exciting thing for me is our error account at the moment is zero so I like that the other thing to point out is we'll start seeing some server-side metrics appear so let's just um let's just let this run for a bit and we'll um we'll go back to the presentation but then 
we'll come and um and take a look yeah so we can start seeing it ramp up a bit good stuff so um the first feature I'd like to highlight is API lifecycle management apim provides tools and features to create version and publish apis making it easy to manage the entire API lifecycle from creation to retirement it allows for versioning deprecating and retiring apis ensuring smooth updates and transitions so let's have a look at that so again back in API management we can see that we have a single version of the patient's API the versioning scheme here has been chosen as path other options are header and query string and in case it's not obvious what I'm referring to if we take a look at a test for example we can see that the versioning scheme is within the URL shown in the path versioning is primarily used when there are breaking changes the Azure architecture Center provides good advice on API design including what is and what is not considered breaking change at the end of this presentation there's a page of useful links so you can read up on more information about that another mechanism for life cycle management is revisions so um what I wanted to do is I wanted to Post in the chat a quick poll because I'm curious to see how others are using api-m so bear with me as I navigate around Okay so for those of you using apim it would be great if you were to react to this post so take your time but for those using APM to use revisions thumbs up yes no laughing and no idea what's that surprise face all right okay so let's let's check back to see how the low test performance is doing in order to show how revisions can be used navigating is a bit of a challenge okay here we go so I'm going to actually pop back to the the dev instance because it it highlights what I want to show a little bit more so you can see here we only have one instance of the engine running looks like our uh our performance is a little bit more chaotic usually it's not this chaotic so I don't know 
what's happening uh no errors so that's great requests per second we're not getting very high you can see we're only around 40. what I wanted to show is this the CPU percentage is very very high so um a little bit better of a view is what we have over in um uh the SQL database view so I'm on metrics and I'm looking at the DTU percentage so you can see that we're actually around 80 percent which is is quite high for what we're doing we're just basically sending a request to get I think it's around a hundred users but it's not the most efficient query so what I'm going to do is I'm going to use a revision to put in caching on the service so the reason why it's a revision is that it's um uh not a breaking change so no one needs to be alerted of this um in a similar way that you would have that you would use versioning so let's go ahead and do this so revisions you can see they're located over on revision so I'm going to add a new revision call it caching type and so now we'll have Vision Two once it decides to come back okay so we have revision two an important thing to point out is that it's not marked as current so no one's going to see the change that I'm working on you can see I'm working on revision two if I pop back to design and select uh retrieve patient I'm going to amend this policy to include caching so I'll look at my cheats cheat sheet and the first thing I need to do is is set up the Cash Store so in the outbound policy um I'll cache the response and um what I'm doing here is this is in seconds so I'm going to Cache it for 30 seconds but of course the um the uh the other aspect of this is on the inbound before we actually do any of the back end querying we're going to actually do a cache lookup so you'll see here I'm going to um uh yeah vary by a couple of different parameters but you can get a little bit more sophisticated in your cash lookup I'm keeping it simple so I'll go ahead and and save this now one of the really powerful things about revisions 
is no one can see it so we won't be seeing any impact to the performance but we can still test it so I'll use Postman again and you'll see that we have a revision two and hopefully you can see it it's not too small but you can see that we specified rev2 in the uh in the path so if I send that in uh it looks like it's it's still working um you know it's really hard to tell what impact it it's doing but let's say we're happy we believe it's working and we're going to go ahead and uh set that as the as the current revision so I'll go ahead and do that I'll just say make current um I'm not going to worry about making a public changelog but I'll change that one so that now revision two is current and what we should see is in our we should see a dramatic change in our test run so let's just be patient and we might already be seeing it with that little dip oh geez that was the opposite of what I wanted to see it's fun having a a live demo but hopefully now we see it plummet back down important we didn't have a failure but now we should see the performance change that's what I'm hoping and the other aspect of it too wow look at that graph all over the place um but we should see an impact to the database unless I'm proven a liar exciting live demo well I'm not going to overthink it we'll come back and we'll have a look we should see an impact if not I did something wrong but hopefully you can you can see where I was going with that that change right going back okay so uh one reason why I wanted to do this presentation um is uh is it providing me an opportunity to explore support for CI CD I've seen several attempts to solve how change is rolled out from a development API to other environments these have been a combination of manual processes Powershell and other approaches but they seemed error prone and incomplete to me uh I then came across API Ops the tool is supported in both devops and GitHub some of you are probably familiar with a popular arm-based approach um but 
let me show you something important if you haven't already spotted this so this is the Azure API management devops resource kit it's actually in maintenance mode in favor of what we're going to be looking at today so in this today I'm actually using uh devops for my for running pipelines but it's also supported with GitHub actions let me kick off a pipeline and then we'll I'll explain what's happening it's right here right so let me just go ahead and start the extractor pipeline there's a couple of couple bits of information that I need to supply it I need to supply the APM instance name and just so that I don't make a mistake I'm going to go ahead and copy that over nice and the resource Group that it lives in so I'll again do the same grab the resource Group name um the repository is just you can see it here the project name the folder where I want to place it and I've just called it artifacts because it seemed like an appropriate name um I'm going to Target my target Branch I'm going to keep Main and and I'm just going to use the defaults uh for the for our purposes it's not it's not important to uh dive into these so I'll go ahead and kick that off so while that's running let's just talk a little bit about what it's actually doing and then we'll come back and revisit so nice little graphic that they provided us but let's just talk it through so what we've done right now is we just kicked off the extractor pipeline what that's going to do is it's going to take the current state of the API instance in this case the dev instance and it's going to compare it to the git repo um if it finds changes hopefully our cash change is going ahead and create a PR uh I'll go and review the pr which merges it back to the main branch which will then kick off the publisher pipeline which then will update the API management instance later in this presentation we'll also talk about uh how we can do this uh Direct in the GitHub repository so we'll also do that but for now let's see 
how that pipeline's doing all right well so far so good we have we have one green the next step is to actually create the pr so it's it's in the process hopefully this will will work nothing worse than when it doesn't work shouldn't take long all right and creating the pull request foreign there we go great so what we should see then is we should see a pull request active uh that we just created Created just now and if we dive into that we can actually see what files were updated so popping into the files uh again apology for the size but hopefully it's not too bad um what we're seeing over on the left hand side is the structure of our artifacts folder and so you can see some of the files that were updated by our change so for example the the policy file of the retreat patients we can see our two changes the cash lookup and the Cash Store and we can see also some revision information and various bits and pieces so uh we can we can dive in and look at all the detail details but for our purposes I'm just going to go ahead and approve it I think it's good enough and I'll go ahead and complete the pr so this will then bring it back to main which if it was successful should then kick off the API publisher so an important thing to point out about the publisher is it's going to update both our Dev apim and our production apim and and in order to push to the changes to the production we'll have to approve it so we'll have to wait until it progresses since we have a little bit of time to wait uh let's have a little bit more look at the the repo file structure so in artifacts these are the different files that it's going to build up to describe our apim so you can see you know that we have a a revision one of the patients API and now we have this is actually our revision two uh and you can see our we have both add patient get tree patient um and various various bits and pieces about it the probably the the biggest riddle that people are thinking of is how do we control the 
different environments that's in a configuration file so in the configuration file this will actually update um values in in some of the other XML files it it's was relatively easy for me to get it to work uh the documentation's reasonable to give you an idea uh API Ops they have uh you know a published Wiki and there it describes the publisher some of the variables that you can control uh and it provides a exciting map of how the um I know you can't read it I can barely read it of how the uh configuration file maps to the the different um you know XML files Within um the artifacts folder so it's reasonably well documented um and uh an important aspect of it too is it actually encouraged me to um write my policies in I would say probably more of a best practice so let me give you an example of that if we pop back to our um uh apim instance for Dev um one of the things that I did is I created name values and back ends so that I could actually replace it in that configuration yaml file so you'll see here we have our ad patient queue and the actual URL that it will be using so our prod instance has the same value but it's been replaced with instead of API API Ops storage Dev it's API of storage prod so I think we're all interested to see how that pipeline is doing right so what we're seeing here is it successfully pushed the changes to Dev which in in essence it shouldn't have made any changes at all now it's going to go ahead and push the changes to prod I do need to approve it so I'm going to review it and say yep I think it's good and that'll then kick it off so one of the things that I was really interested to find out is what was the impact to a running system so when I did this before sorry I'm just trying to find my here we go in the test run what I found uh that's what it is so you can see now do you see the caching how the caching now is reduced it to uh milliseconds as far as the um uh you know the performance of the retrieve patient uh one of the important 
things to point out is we were able to update the service live without an impact to um to Performance and more importantly no breaking changes what what I found in practice because I've obviously done this several times is depending on where it is in the um uh in the jmeter script it could have an impact on performance but it never failed which is um uh an important aspect so this is this is the dev instance running um and you can see the CPU has has dropped dramatically uh let's just see how prod fared during that deployment or I should say as the deployment so we come in so you know obviously we haven't done anything with caching so the response time is still High no errors which is good a higher requests per second but that's understandable since our uh since we're running four instances of the engine you know which equates to more virtual users and again no failures um so let's see has that deployment completed if we pop back to the pipeline yep so it's it's pushed the changes um and like I said what I've seen sometimes is an impact to Performance but in this case it was nice and smooth so I I'm really excited about this because it it's one step towards um zero downtime um deployment which I think apim is is a great um great service to to Really highlight how you can achieve zero Time Zero downtime deployments um hopefully there's people on the call who can appreciate the value of um not midnight deployments right so going back to the presentation okay uh next feature I want to talk about with security and authentication apim enables organizations to add authentication and authorization mechanisms to apis using a variety of protocols such as oauth open ID connect and API keys and I'm going to put in my second poll because I'm also very curious about this from the from the group right so what security have you used in apim uh and if multiple then pick open ID or oauth because I'm curious to hear how many companies out there are actually using open eye open ID 
connect or oauth because so far the majority of the companies that I've dealt with aren't aren't using the security mechanisms or using different security mechanisms back to the presentation right uh so oh and not exactly security but it's a good time to mention some related features the subscription key is a way of identifying the caller in our scenario we have two external systems so let me show you what I am describing so um over in our development API Ops if we look at subscription I've created two subscriptions one for Galaxy high school and one for Westfield high school but I just created two separate ones and what that means is they use different subscription keys so to highlight this in our um in in Postman in the header we actually specify the subscription key as as a separate header ocp APM subscription key there is some flexibility on how that's specified so popping back to APM and we look at our API definition we can see that in settings this is specified under subscription so I've made it so that the description is required and I'm supporting both header name and also as a query parameter so if a consumer wanted to specify it as a query parameter they have the ability to do it with subscription key and again this is the section I was talking about with the user authorization where for convenience I've set it to none it's also important to point out that apim off also supports other features like verifying certificates and IP filtering so the developer portal apim includes a customizable developer portal that serves as a self-service hub for developers to discover understand and consume apis it provides documentation interactive API testing code samples and allows developers to subscribe to apis and manage their subscriptions um will not dive into the developer portal in order to spend more time on other features so I believe this is my last poll but this is another thing that I'm really interested to hear from everyone on the call um so at your company 
is the developer portal being used to communicate the apis to the customer and and how well is it doing it I strongly agree heart partly agree thumbs up and do not agree Smiley so that's uh teleportal is probably another feature that I see um uh not used to its full potential my advice here is to consider how your customers will use the developer portal and only provision if it's required for example it might make sense to only provision in one environment I often see it provisioned in multiple environments which increases operational and management overhead in some cases this could provide an unintentional mechanism for accessing production information so um yeah I guess proceed to proceed with caution uh monitoring and analytics um I see where where I won't spend too much time on it because of time but uh I'll give I'll I'll click around a bit to kind of show some of the power of apim so uh APM offers monitoring and analytics capabilities allowing organizations to gain insights into API usage performance and errors it provides real-time historical analytics logging and Reporting helping organizations to troubleshoot and optimize API performance so let's have a look at that so on insights no not insights yeah analytics sorry we can see uh if we just say last hour we can see some of the the activity so I'm in I'm in Dev at the moment we can get a summary of activity um I'm assuming that's related to our cash change Yeah well yeah and you can see how now um the number of cache hits has increased quite significantly um and again all I'm trying to do is just click around some of the features that would be more interest um in regard to apis we get an idea of which apis are being called um why it only says add patient all patients API sorry patients API um because I only have one API it makes sense operations we can see the split between retrieve patients and add patients in my jmeter script I had uh it tailored to do more retrieves than ads so you can see this 
reflected in the successful requests and products um I didn't spend time on products sorry instead I did subscriptions and here you can see that Westfield there's more requests going against Westfield in Galaxy high school um users isn't going to be very interesting uh because I I didn't I'm not highlighting that scenario and then in requests we can get more detailed information so it's it's it's pretty good one feature I wanted to highlight though is its integration with application Insight so if we pop over and look at application insights and do a refresh what I've done is I've um uh hooked up both uh Dev and production into the same um application insights instance because I thought it would be interesting to show um you know how the how the uh well two things really one is to show it in one image and the other is to show that um are devops pipeline didn't uh did a good job of actually making sure that we overrode the correct instance of the function app and the storage queue because it would be um uh very poor if we had production pointing to the uh Dev instance of the database and vice versa so that was the main reason of just showing this image um and of course if we look at performance we'll see what we were hoping to see where we can see uh the different messages um being displayed in independently and we can also dive into examples so let's just take a look at the oh let's say the um uh patients API so this is on apim and if we dive into one of the samples when I'm trying to show here is we have both api-m and the function app both contributing to application insights which then gives us a nice view of um sorry trying to expand it of the apim calling the backend service so I'm having Mouse issues at the moment but hopefully you can get an idea of what I'm describing here where it hits apim on our wristful API which which then hits the function app on the RPC Style API so of course this isn't a great view but you could imagine if we had our other 
dependencies in here it could start being a very rich view of the activity so let's move on because I realized that a little behind schedule so policy based control uh probably probably one of the the key Keyport parts of apim apim allows organizations to bind policies to control the behavior apis policies can be used to enforce security throttling caching logging and other custom business rules policies can be applied globally or at the API product or operation level providing fine grain control over API Behavior so let me highlight what I'm doing in the ad patient so again going back to our API in the dev instance we can see here on ADD patient if we look at the policy um this section let's see if I can zoom in right kind of okay in this set body statement what I'm doing is I'm pulling the subscription ID and I'm adding it into the payload as the registration source so this is a great way of of in in a way providing more of like a multi-tenant situation same API um but uh being able to enhance the data based on who's calling I I could have solved it in a couple of different ways um I could have had the calling system Supply the information the disadvantage here is relying on the calling system to consistently and accurately Supply the value I could also have stood up a separate API for each subscriber with a separate value for each this might make sense if you have a couple subscribers but we do have hundreds then it becomes unmanageable so great technique for multi-tenant implementations right so scalability and high availability this is uh important so apim is built on Azure which provides global scale High availability and fault tolerance it supports Auto scaling allowing organizations to handle variable API traffic loads efficiently so I wanted to talk about scaling in a little bit more detail um important to stress that APM is not a load balancer in the example here we're using serverless resources so it doesn't really highlight so instead let's switch to a 
scenario where we have APIs hosted on some VMs. In this case we're using Application Gateway to act as a load balancer in front of the different APIs. It's worth noting that APIM does have an instance count: the Standard SKU allows up to four instances, where Premium does not have a specified upper limit. Scaling can be manual, or you can autoscale based on a schedule or on metrics. It's also worth noting that APIM supports availability zones in some regions, so it is possible to have regional redundancy by spreading the instances across availability zones. It's also possible to self-host the gateway — if anyone on the call has experience with self-hosting, please speak up at the end of the presentation; I'd love to hear about it.

APIM is a regional service. Here we see APIM deployed to two different regions. We can achieve this either by having a multi-region deployment or by managing the deployment ourselves. The multi-region deployment is available on the Premium SKU; it replicates the gateway component across multiple regions and provides routing based on lowest latency. If we manage the deployment ourselves, we could use a global service like Front Door to distribute traffic. It's also important to stress that APIM has many security features, like IP whitelisting, but it does not directly integrate with a WAF; this slide illustrates how we can use a WAF at Front Door to provide that capability. It's a big topic with many advantages and disadvantages, so at the end, again, I'd love to hear about your experiences.

Integration with Azure services: APIM integrates seamlessly with other Azure services such as Azure Logic Apps, Azure Functions, Azure Active Directory, and Azure Monitor, allowing organizations to build comprehensive and secure API solutions within the Azure ecosystem. APIM supports managed identity, and it's possible to secure communication using Azure AD role-based access control. I thought it was important to dive into this a
little bit, so I will again use our Add Patient as the example. This step here is where we're using managed identity — and I'll briefly talk about that, because if you're not using managed identity, I encourage you to. Let me find it... no, that's the developer portal... this is where someone can speak up if I'm overlooking it... here, Security, Managed identities — there it is. So we have a system-assigned identity created, and if we dive into it, we'll see in the Azure role assignments that we have access to our storage account, in particular a queue data contributor role.

As far as integration with Azure services goes, something else to show is that when creating a new API, we can actually create it against different existing services. For a quick example, let's say I want to take a function app and expose it within APIM. There's a wizard where we can go in and select an existing function — you'll see this one, mermataka, has a Get Astronomy API. I'm able to select that and build it up; if I hit Create, it would create a new API. It's a nice, convenient wizard, but I'm not going to run it, so I'll cancel. There was one thing I wanted to show that I discovered recently, where you can go the other direction. If I find my function app — there's this capability right in front of me; don't you hate it when you present and your IQ drops by half — there it is: API Management. From the function app you're able to select an existing API Management instance, create the API, and link the two together. It's a nice, streamlined way of exposing your function apps in APIM.

Right, back to our presentation. The last feature I wanted to highlight was API bundling. APIM allows organizations to create products, which are collections of APIs with specific
configurations, policies, and access controls. Products can be used to bundle APIs together, create API marketplaces, and onboard developers. Organizations can define product tiers, set up documentation, and manage access and subscriptions to products, making it easier for developers to discover, understand, and consume APIs. To highlight this a bit, let's take a look at the Starter product. If we go back to our dev APIM instance, we have the concept of products; here we have two that were automatically defined, a Starter and an Unlimited. If we dive into the Starter, it actually has a policy associated with it that provides both rate limiting — five calls per renewal period — and a quota, which I believe is a hundred calls in, well, whatever that period is; someone can let me know.

Right, let's go back and see how the load tests are performing. We can see we have some variance, but for the most part it looks good — no errors. We've had a drop in CPU percentage... that doesn't look that exciting... ah, because I'm in prod. Let me just do a refresh — yep, there's our drop in the request count; I was looking at a stale view. Hopefully this gives you an idea of how you can perform load testing against APIM, and also do a deployment without impacting performance.

Going back to the demo: let's say the dev team missed something, and a mischievous tester found an issue. Let me show you something in Postman. I've created this request here called Add Patient, and when I hit Send — OK, I still had the payload from the last run — you can see it's actually failing, saying the request body is too large. Let's talk about that in more detail. This is what's happening: we're getting a failure when we try to add to the patient queue. What we're seeing is a limitation on the message size supported by storage — each service actually
does have a limitation: an Azure Storage queue message can be up to 64 KB; depending on the Service Bus tier, you can have a max of either 256 KB or 100 MB; and Event Grid also supports varying message sizes. So how are we going to solve this? With a very popular pattern called Claim Check. What we'll do in this situation is identify, in APIM, the messages that are too large for the Azure Storage queue; we'll save the message to blob storage and instead send a reference to the blob in the Add Patient queue message. Let's say in this scenario the team has already updated the function app to handle the claim-check pattern, so what we'll do now is update APIM to identify when we receive a large message and implement the claim-check pattern. For the grand finale demo, I'll attempt to do this code-first from the API Ops project... this UI is hiding things from me... right. If we go back to our DevOps project and go into Files — actually, instead of updating it directly, I'll create a branch and make my change on the branch. I'll call the new branch Wellington; it will be a copy of dev. Popping into it, in Files, I'll go ahead and modify my Add Patient API. We can see this is what we looked at before. I'll edit it — and consult my cheat sheet, because I won't be able to type it without making a mistake. There are a couple of things we need to do. First, we need to identify the size of the message body, so after this step I'll take the content length and put it in a body-size variable. Next, I'll make a decision on whether to send the message directly or send it to blob storage, so I'll paste that in. Let's look at this in a little more detail. In the choose statement here, I'm
going to determine how large the body size is: if it's over 60 KB, I'll send it to blob storage. You can see here we're doing a send-request — again, using the definition provided by Microsoft — and we set the URL to the blob file name, which is defined here: basically the claim-check storage account, with the request ID from the context, to ensure we don't have any collisions. It's a PUT method; we need to add some header information, including the blob type, which is BlockBlob; we set the body equal to the message we received; and again we use managed identity. What we do next — and this is probably the more interesting bit — is in the set-body here, we use basically a keyword to say what the request ID is. This will trigger our function to recognize that this is a claim check and to go retrieve the payload from blob storage. Otherwise, we just do what we did before. I'll remove these two lines, because they're now located on lines 28 and 29.
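Putting the spoken description together, a policy fragment along these lines would express the claim-check step. This is a hedged sketch, not the exact demo code: the storage account name `claimcheckstore`, the `claims` container, and the 60 KB threshold are illustrative placeholders.

```xml
<policies>
    <inbound>
        <base />
        <!-- flag payloads too large for an Azure Storage queue message (64 KB limit, with headroom) -->
        <set-variable name="isLarge" value="@(context.Request.Body.As<string>(preserveContent: true).Length > 61440)" />
        <choose>
            <when condition="@(context.Variables.GetValueOrDefault<bool>(&quot;isLarge&quot;))">
                <!-- park the full payload in blob storage, keyed by the request ID to avoid collisions -->
                <send-request mode="new" response-variable-name="blobResponse" timeout="30">
                    <set-url>@($"https://claimcheckstore.blob.core.windows.net/claims/{context.RequestId}")</set-url>
                    <set-method>PUT</set-method>
                    <set-header name="x-ms-blob-type" exists-action="override">
                        <value>BlockBlob</value>
                    </set-header>
                    <set-body>@(context.Request.Body.As<string>(preserveContent: true))</set-body>
                    <authentication-managed-identity resource="https://storage.azure.com/" />
                </send-request>
                <!-- the queue message now carries only a claim-check reference -->
                <set-body>@($"{{\"claimCheckId\":\"{context.RequestId}\"}}")</set-body>
            </when>
        </choose>
    </inbound>
</policies>
```

The function app then treats any message containing the `claimCheckId` keyword as a claim check and fetches the real payload from the blob; anything under the threshold flows through unchanged.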
Right, and that's it, so I'm going to go ahead and save. If someone noticed a typo, now is probably a good time to speak up — but let's say I did it correctly. I'll go back to my branch... sorry, I won't; I'll go to Pull Requests and create a new pull request, going from our Wellington branch back into main. We can see there's only one file updated, and that should be our change — yep. I'll go ahead and create that pull request, go through a rigorous approval procedure, and complete it. In theory that should have kicked off our pipeline — and here it is. While this is running and, hopefully, updating correctly, why don't we dive a little into how these pipelines are actually put together. There are a couple of interesting pieces to this. Instead of looking at all the fancy purple text, I'll go straight to Pipelines, where we can see the definitions of the YAML files I've created. This is where, if you had bespoke functionality you wanted to run as part of your publish or extract, you could add it in the YAML file. An important bit to note: if we dive into the publisher-with-environment template — basically, run-publisher calls it depending on whether it's dev, prod, or UAT — we'll see that it actually downloads a release that the team working on API Ops has created. If I take that and have a look, you'll see there are different releases available. The release that I'm calling is actually 404.
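As a rough sketch of the mechanism being described — the variable-group name matches the demo, but the step layout and the release asset URL are illustrative, not the exact API Ops pipeline code:

```yaml
# Illustrative fragment: the publisher template downloads a pinned API Ops
# release, with the version supplied by a variable group in the library.
variables:
  - group: apim-automation   # defines e.g. apiops_release_version

steps:
  - script: |
      # fetch the pinned publisher tool from the Azure/apiops GitHub releases
      curl -fL -o publisher \
        "https://github.com/Azure/apiops/releases/download/$(apiops_release_version)/publisher.linux-x64"
      chmod +x publisher
    displayName: Download API Ops publisher
```

Upgrading then means changing one value in the variable group rather than editing every pipeline.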
But you'll see that they're on 413, so you can get an idea of what's changed — as they fix bugs, you can see they're doing a good job of documenting what's changing. The reason I said 404 is that the pipeline pulls the version from a variable, and that variable is defined in my library: if we look at "apim automation", this is what I was referring to — version 404. When I'm ready to go to a newer version, it's as simple as updating this value, and the pipelines will then retrieve the new release. Going back to our API Ops publisher — look at that, good news, green. There's another feature I wanted to highlight: again, you can add additional steps, but there is actually a lint step. I don't have a lot of experience with linting, but it's interesting to note that it pulls from a community-defined rule file, so there's no reason you couldn't host your own rule file and use it for your own specific rules. Here we go — mine at the moment is showing 10 warnings, but I have found, when I've made some modifications, that I actually do get errors if I forget a particular policy that's defined in that rule file. It's a nice touch to raise the quality of the APIM APIs being defined. For completeness, I'll go ahead and approve that review. But let's make sure in our dev APIM instance that it actually made the change. I'm expecting it against our Add Patient, and in Add Patient we now see — that's a good sign — the body size and our changes. In theory, then, we should be able to go back to Postman and rerun this... it thought about it, but it looks like it's now fixed. So that was my grand, exciting demo of improving it. Oh, and I forgot to highlight: you can see in the notes section here why it was
of a large size. So it's good that the tester discovered that before we actually hit the issue in production. Right, that's it — thank you. I realize I went for a long time, so maybe there's only one person left, but thank you for listening, and please reach out on LinkedIn; I'd love to hear from everyone on the call about the things you liked and your own experiences with APIM.

That's cool, thanks Jeff. We have a couple of questions in the chat — can you see them? Yes. The first is Marcel's question: are there any considerations with API Ops when it pulls artifacts? I'm really at the beginning of my learning of API Ops — I wanted a realistic scenario to start exploring, and I've learned quite a lot, but if I were doing this full-time as an APIM developer I'd learn a lot more. The consideration in particular, from what I've found, is that you have to plan for how you're going to override values when the deployment moves to different environments. I only have two, dev and production, but there will be lots of situations where you have dev to UAT to, I don't know, maybe SIT — different environments. So how you define your APIs is very important. I know I went really fast, but you'll see what I've done: I'm defining backend services, which makes it much easier to replace those values in the configuration file — the configuration file being this file defined here. That would be an important consideration, but I'm sure there are tons of other ones that will also catch you out.

Is the DevOps YAML part of API Ops? Yes, in the sense that you pull these files down into your own repo and you can customize them — you can add steps, and if you didn't want the lint step, you could remove it — but as part of the
startup it actually pulls down their release. So part of it lives inside your repo, and another part is a release, which is good because you have some flexibility but you're also able to leverage an open-source community.

Cool, thanks. Do we have any other questions? Maybe I can ask some, Jeff. The first one: in your demo you showed that we can set up caching in APIM. I wonder what the difference is between the APIM cache and the caches we can provide in the application, like an in-memory cache or a distributed cache — what's the benefit of using the APIM cache? Well, the main benefit is that you're putting the pressure on APIM to do that work. You can of course still do in-memory caching in your applications, but the idea is to stop the request as close to the customer as possible. In the globally load-balanced scenario we had Front Door, and Front Door also has a really great caching mechanism, so it's a design decision about where you want to put it. A lot of the solutions I've seen come along were existing systems that already had a caching strategy, and in that case I wouldn't suggest ditching something that's already working in favor of the APIM one. But I have seen, in a few scenarios, that it can quickly add a lot of value: if someone is hitting your system very aggressively — multiple times per second — and the value isn't going to change between calls, it makes sense to look at these features.

What kinds of cache patterns are supported? If we implement the cache in the
application, it can probably support something like a sliding window or other patterns — can the APIM cache support those kinds of patterns? The example I used did a 30-second cache, but I'll pass on getting myself into trouble trying to explain something I don't know well. In the documentation, when you look at the different policies, there is some flexibility in what you can define — whether you have a sliding expiry, or how it decides what gets evicted from the cache. I'm sure it's documented, but I don't know offhand.

Thanks. My next question: you showed how to do versioning for APIs. From your experience, what's the best practice for versioning — do we put the version in the URL or in the HTTP headers? I think there are different approaches. What I've done is add some useful links at the end, because there are some big brains out there with great information on this. It's probably more important to be consistent than inconsistent in how you communicate to your customers, and I think that's the real strength of APIM: it has a versioning strategy you can choose — in the path, as a parameter, or whatever. In the documentation — the Architecture Center, I think it's here, yes — there's some really good information on REST, and you'll also find information on breaking changes and on versioning. You can see there are different versioning strategies they describe, and they'll do a much better job than me trying to summarize in 30 seconds.

Yeah, that's a great question, and I think we've all tried to answer it in our roles also, because
it's very common: what versioning practice do you have, and whether it's REST or RPC. My opinion is that's a design choice depending on your consumers — that's probably not popular; REST is the popular pattern — but in some situations, for example if your customer base isn't as sophisticated as you are, and you're pushing a workflow or a complex set of business steps onto them, some customers aren't able to handle that. So it depends on your customer. Sometimes it's easier to have an RPC style that takes what would be thirty REST-like steps and combines them into a single package, for customers who would find it really challenging to implement them as separate REST calls. The nice thing about REST is that, as developers, you define the resources and then let your customers consume them how they want. Again, it's a design decision with a lot of factors.

Great. Also, I think this documentation is really helpful — can you share the links in the chat? Yes, I'll post each of the URLs I think are really useful in the chat.

That's cool. Do you have any other questions? That was a great session — lots of demos, quite engaging. Does anyone on the call have anything they want to share? I'm genuine in wanting to hear. I see someone saying their company is using the developer portal. My experience is I've seen it as a developer-led initiative, but it was never communicated strongly to the customer. I'd be curious to hear from whoever made that comment how they were able to get their customers to consume it — or was it a
particular type of business scenario? I don't know if they're still on the call... oh, was that you who did the heart? Yeah. I've used the developer portal a long time ago. Yeah, it's really good — I think it's a great idea, but it's what I was talking about at the beginning: the business aspect of it. We can create this great portal, but unless we know how to deliver it to our customers and get them to use it, what we end up doing is writing a Word document describing how to use our developer portal and supplying them with a Postman project — and that sort of defeats the purpose of the developer portal, because it's incredibly powerful, but how do you communicate it, how do you get your customers on board? I would think it depends a bit on who your customers are: for a SaaS solution it would probably be more successful than for a business app, especially an internal business app.

The other question was: has anyone here actually done self-hosting of their API gateway? I'd be surprised if anyone has, but it would be great if someone had. No? OK, I was just curious, because it's a great idea for scaling out or for high availability, but it's so easy to stand up in APIM that I'd be surprised about the self-hosting side of it.

Do you have any comments? By the way, I forgot to introduce myself — I'm also a Microsoft MVP. So if you have any comments, feel free to turn on your microphone and join the discussion. No mic? OK, that's all right. Cool, thanks Jeff. Any other questions? I think we're at about one and a half hours — that was a really insightful
presentation, with lots of cool stuff. If we don't have more questions, I just want to thank Jeff for his fantastic presentation on Azure API Management — oh, we do have a new message... yeah, thanks Aaron. Jeff, we really appreciate your insightful and engaging presentation and demos, and especially your focus on API Ops and how it can help developers with CI/CD. I'd also like to thank all the attendees today — thanks all, and hope to see you next time. Thanks Jeff. Yeah, thanks, I appreciate it. Thanks Jeff. Thank you. I'll stop recording. Hey Gopal, I see you're online.
Info
Channel: AzureTar
Views: 4,240
Keywords: azure, azure api management, azure apim, api, api management, azure api development, microservices, gateway, api gateway, development, api policies, mastering azure api management
Id: UyXNkm6hcjI
Length: 79min 45sec (4785 seconds)
Published: Thu May 18 2023