Azure Automation in production, lessons learned in the field - Jakob Gottlieb Svendsen

Captions
Anyway, I'm going to have this session about Azure Automation in production. How many of you have used Azure Automation? All right, that's good, it's almost everybody, or 60 or 70 percent or something like that. The idea is to get an overview of the different steps that you can take. We've taken whatever challenges we've had in different projects and collected them over the last few years into some best practices in different subjects around automation. So it is all stuff we have used in production, at least almost all of it, but I've been trying to collect everything into one session here, and of course it's something that always changes.

My name is Jakob Gottlieb Svendsen. I come from Copenhagen in Denmark, and I work as a principal consultant and lead developer in a company called CTGlobal. My main area is C#, PowerShell and anything in that vein; I've been an MVP for a few years, and you can see my Twitter handle and my email right there.

Let's get started. As I said, I'm going to try to go from A to Z in using Azure Automation in production. We're going to look at authoring tools, and then the structure of the code, the PowerShell scripts. We're going to look at administration and source control: where to keep the scripts, and how to automatically deploy the scripts through source control. We're going to look at logging, and from logging we can then have alerting, and in the end we also want to add some reporting, of course. Afterwards I just need to finish packing it up, and then I'll put it in the PSConf repo on GitHub, so you can download the slides and all the content that I've shown.

All right, so authoring. PowerShell ISE is great, it's been there for years, and for use with Azure Automation it has the Authoring Toolkit. This means that you can go in and create new runbooks, and you can add or edit assets (of course they've changed the name to shared resources now), such as credentials and stuff like that, and that's really nice. One important thing: if you install it on a hybrid worker, for example in a test environment, you should scope it so it is not installed for the same user that is running the hybrid worker, because then it will actually conflict with some of the functions the hybrid worker uses itself, and it will fail. So it's important to install it not as the same user running the hybrid worker.

Then there's also Visual Studio Code. How many use Visual Studio Code? How many are still using ISE in combination? I do that myself; I use Visual Studio Code alongside ISE, and I'm super happy about features like formatting, so now my scripts look really pretty (well, ISESteroids for ISE can do that too), and stuff like jumping between functions, all the things I'm used to from Visual Studio. But sometimes when I need to run the code in VS Code it's slow, or debugging doesn't work as well, at least not yet, so I still use both. And there is no official Azure Automation extension; well, there is a community Azure Automation extension made by a few Danish people, but it doesn't have the same functionality as the Authoring Toolkit. What you may or may not know is that you can actually use ISE for the administration, still run the code in Visual Studio Code, and still use the shared resources such as modules. Hopefully an official extension will come at some point; and I had actually heard that they are working on one, I just didn't know if I was allowed to say it, so thank you for that information. It's always hard to know when something gets public, and what we can say and what we can't say. So for me it's ISE for administration, VS Code for coding, at least right now, which means you can use your favorite editor, VS Code, of course. And there is the community extension called Azure Automation.
With the community extension you can create runbooks and create assets, and there are some really nice features there; you can execute a runbook in Azure Automation, but you don't have the rich capabilities of the Authoring Toolkit. Just to show you here: I always have a snippet on Ctrl+K, and the first one loads the toolkit add-on, because I don't want it to load every single time. Here we can do anything with the runbooks, connect to any account, and it is actually open source, so you can help fix it if you want to. For example, I have eight customers or so that my account has access to, and one of the customers introduced some kind of device compliance policy, which meant my device couldn't log in, which meant that this plug-in would fail because it couldn't log into one of the eight customers when I tried to log on. So I made a very dirty fix in the code, which is actually in there now, I think. But Visual Studio Code is also nice to look at, and the really nice thing is that it has built-in source control integration with an easy interface, so you don't need to use the command line or external tools for that.

All right, enough about authoring. Next up is the structure of the runbook. Of course I'm a bit afraid to come to a PowerShell conference and say "these are my practices", and hopefully nobody brought anything to throw at me if you don't agree, because of course this is something for discussion. But this is an example of how we have done it in some projects, and I've collected the pieces. I think I'll actually jump right into the demo here, and we'll take the slide afterwards.

The thing is, we decided to make one common template to start from. The reason for doing that is that our scripts become much better quality; it's much easier to start a script and have proper logging and comment-based help, for instance. Especially sending results back from your runbook to whatever calls the runbook is something that we have worked on through different iterations. What we have right now (and as I said, this is of course going to be uploaded to the repo on GitHub, and probably also my own repo, so newer versions will end up there in the future) is a module called CTToolkit, which is on the PowerShell Gallery. This module contains some basic functions, such as handling what we call a trace log: we can run a command to add something to the trace log, and there are functions to write entries to the local event log, because some customers wanted that. The event log has a maximum size of around 30,000 characters per entry, so this function will actually break a very long trace log into multiple event log entries. There are also functions for using something I call an index table (I'll get back to that in a second), a few helper functions, and the logging functions.

Most importantly, on top here we have a class called CTReturnObject, and the idea is that any script that is used by other scripts should return this object with some specific things: the input it received; the output that we want to return, which goes into the Output property right here; the trace log, which goes into TraceLog; the status, which is our own status message; and if there is an error, it goes into Error. This makes sure that when we call a child runbook and it fails, we still get the inputs back so we can log them, we get any outputs that were already produced, and we get the trace log back, instead of just an error message. Some people will say, why don't you just use the verbose log, since you can just click to enable it? But anybody who has ever enabled the verbose log, which is probably most of you, knows that some commands just output millions of verbose lines, like they were in a competition to write as many verbose messages as they could.
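As a rough sketch of the return-object idea just described (the real class ships in the CTToolkit module on the PowerShell Gallery, and its exact property names and types may differ):

```powershell
# Hypothetical reconstruction of the CTReturnObject idea from the talk;
# refer to the CTToolkit module for the real implementation.
class CTReturnObject {
    [hashtable] $Input      # the parameters the runbook received
    [hashtable] $Output     # only the values we explicitly choose to return
    [string[]]  $TraceLog   # our own timestamped trace entries
    [string]    $Status     # our own status message, e.g. Success / Failed
    [string]    $Error      # error text if something went wrong

    CTReturnObject() {
        $this.Input    = @{}
        $this.Output   = @{}
        $this.TraceLog = @()
        $this.Status   = 'Unknown'
    }
}
```

Because a child runbook returns an object of this shape even when it fails, the caller still gets the inputs and the trace log back instead of just an error record.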
Making our own log means we can much more easily control what we want in the log, what we want in the report in the end, and so on. The script here actually uses this module with the new way of importing a module, the `using module` statement. This means that the class is available in the script, and this was actually required: in normal PowerShell it works with Import-Module, but when you upload the script to Azure Automation it does a check for the classes used and says they are not available, so using this method instead makes sure the class is available when it checks the script. This means we can go here and create this return object as the first thing. Of course we have some examples of the parameters, and we always use at least Mandatory set to true or false, and also a type. You can discuss whether you want to set ErrorAction to Stop in the script, but that's usually what we need to do. We have sections to define shared resources, like getting credentials from Azure Automation, a region for functions, and when we come to the main code there is a region called Main Code, and this is where you put any code that is specific to what you want to do.

Last year I learned a few tricks, one of them from Tobias, I think it was. First of all, we want everything to be in a try/catch, of course; this means we can do our own logging of the error. In some cases we want more try/catch blocks inside the main code, but we want a wrapper around everything too. An issue we have sometimes had is: what does your runbook actually output? We really want to control that. So there is this little trick, `$null = . {` with a space there, which dot-sources a script block opened with a brace.
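A minimal sketch of the template shape being described (simplified; the downloadable template is the real reference, and helper details here are assumptions):

```powershell
# Simplified, untested sketch of the runbook template structure.
param(
    [Parameter(Mandatory = $true)]  [string] $UserName,
    [Parameter(Mandatory = $false)] [string] $Domain = 'contoso.local'
)
$ErrorActionPreference = 'Stop'
$ReturnObject = [CTReturnObject]::new()
$ReturnObject.Input = @{ UserName = $UserName; Domain = $Domain }

try {
    # Dot-source a script block and discard its output stream, so a chatty
    # module import or cmdlet cannot pollute what the runbook returns
    $null = . {

        #region Main Code
        $ReturnObject.TraceLog += 'Starting main code'
        # ... the task-specific work goes here ...
        #endregion

    } # closing brace of: $null = . {
    $ReturnObject.Status = 'Success'
}
catch {
    # Capture the error first, before any other command changes $_
    $CurrentError = $Error[0]
    $ReturnObject.Status = 'Failed'
    $ReturnObject.Error  = $CurrentError.Exception.Message
}
finally {
    # Always return the whole object, even when the runbook failed
    $ReturnObject
}
```

This is only the skeleton; anything written explicitly to the Output property or the trace log still comes back in the returned object, while stray pipeline output is discarded.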
At the end we have the closing brace, and one thing I always do personally is to put a comment with the line the brace is connected to, which makes it much easier to find. This `$null = . { }` construct means that whatever the script block sends to the normal output stream goes to $null, so we don't have to worry that some module import or some command suddenly produces output we didn't expect, and that our return value to the other runbook is polluted with other types of output. Of course we could pipe every command to $null or Out-Null, but it's easier just to have a block around the whole thing.

So we can get the start time, then we can add these trace log entries, and then comes the main code, as I said. In here we can, for example, import modules, and we can write really high-performance code here. Actually, if you read the comment, it says this is not how it's supposed to work; it's just an example. The thing is, if you have a big collection of objects, it can take a long time to read and filter them. What you can do with a big collection is turn it into a hash table. This hash table then has one key, whichever property you want to search by, and the value contains the whole object under a field called Object. This gives a performance improvement of a thousand percent or something like that in some cases. Instead of having to create such a hash table by hand every time, we created a function called ConvertTo-IndexTable, where you pass in the property you want to use as the key, and it outputs a hash table keyed on that property, with the object stored in the Object field. And for filtering we now always recommend the .Where() method, because it's faster.
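The index-table idea could be sketched like this (ConvertTo-IndexTable is named in the talk, but this implementation is my own illustration; the real CTToolkit function may differ):

```powershell
# Hypothetical sketch of the ConvertTo-IndexTable idea: trade one pass
# over the collection for O(1) lookups afterwards.
function ConvertTo-IndexTable {
    param(
        [Parameter(Mandatory = $true)] [object[]] $InputObject,
        [Parameter(Mandatory = $true)] [string]   $Key
    )
    $index = @{}
    foreach ($item in $InputObject) {
        # Keep the key value plus the whole object in an 'Object' field
        $index[[string]$item.$Key] = @{ $Key = $item.$Key; Object = $item }
    }
    return $index
}

# Usage sketch: one hash lookup instead of filtering the whole collection
$users  = Get-ADUser -Filter *        # imagine a very large collection
$byName = ConvertTo-IndexTable -InputObject $users -Key SamAccountName
$user   = $byName['jsvendsen'].Object # direct lookup, no Where-Object scan
```

The win comes from doing the expensive enumeration once, up front, instead of once per lookup.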
Then, as an example, we can search for a username, this guy here, take that entry from the result, and we get the whole object back. This was something I also learned at the PowerShell conference last year, in Mathias' session, I think, so there was some good stuff I learned last year. Whenever we want to add something to the output, we go to our return object's Output property and add it there, so we control exactly which things we send back. We are 100 percent sure that nothing suddenly pops up because some condition made a command output stuff we weren't expecting; we control it fully.

After the main code, we add things like a "finished" entry and the total runtime, and then we set the status to Success. As you can see, the try/catch has a finally down here, so it will always output this. Actually, this line is wrong; I made a mistake when I tried to make it prettier, because we want to send the whole object, of course, so the output, the inputs, the trace log, the error messages and the status are all sent back in a controlled form.

[Audience question] Yeah, okay, the question is that if you filter down a hash table, you sometimes get a dictionary instead. I haven't checked whether that's an issue, but a dictionary works in a very similar way, right? You can do very similar things with a dictionary, so I'm not sure it's actually a problem.

Okay, so if there is any error, we catch it. There's a long discussion on the net about how to catch the error; we always use $Error[0], put it in our own variable, and then design our error message from there. We don't use $_ further down in the catch block, because if you run some kind of Where or something, $_ suddenly means something else. So get the error from $Error[0], or from $_ as the very first thing in the catch block, put it in your own variable, and go from there.

We have two templates here. This one is meant to be triggered by other runbooks or other scripts, so it is what we call a control runbook: it outputs its data, but it doesn't do anything else with that data. Let me just jump to a slide here. I don't know if anybody has ever done Orchestrator and read the book about runbook best practices, but we got the idea from there that we want three different kinds of runbooks.

First, an init runbook, which is meant to be triggered from outside, by a human, by the REST API, something like that. The output here should either be human readable, or it should be JSON; I'll show you a little later why we want JSON, because we can use it in the reporting. A control runbook is meant to be used by other runbooks, and it should always return the standardized object, CTReturnObject, so that the calling runbooks can see if something went wrong and go fix things. A control runbook does something specific for a task we need, such as creating a new employee, or it could be part of a flow, such as creating an employee's mailbox or home drive. It is meant to output the object, not anything human readable, just the object with the trace log and so on. The last one is a component runbook, and a component runbook doesn't have to follow our template; it's for very generic stuff. For example, instead of using New-ADGroup directly, we make one called New-CTADGroup, because it just creates an AD group with some specific properties, maybe our naming standard, and such a runbook script should just output whatever object it created, such as the AD group. It's not very often we use these, because in some cases you would rather create a module with these functions in it, or just a function that does it.
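A component runbook along the lines of the New-CTADGroup example might look roughly like this (the prefix logic is invented purely for illustration):

```powershell
# Hypothetical component runbook: wraps New-ADGroup with a naming
# standard and outputs the created object, nothing human readable.
param(
    [Parameter(Mandatory = $true)] [string] $GroupName,
    [Parameter(Mandatory = $true)] [string] $TargetOU
)
# Apply a company prefix as the "specific properties" the talk mentions
$fullName = "CT-$GroupName"

# -PassThru makes New-ADGroup return the group it created,
# so the caller gets the object back
New-ADGroup -Name $fullName -GroupScope Global -Path $TargetOU -PassThru
```

Note how it deliberately skips the full template: no return object, no trace log, just the created object on the pipeline.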
These components should be generalized, so that they don't have too much hard-coded stuff, and they shouldn't rely on any outside resources such as credential objects from Azure Automation.

If we look at the init template, it's very similar; everything at the top is exactly the same as before, except that it's actually calling the other runbook, just as an example. What's interesting is down here in the catch: it sets the status to Failed, builds the error message, writes the error with ErrorAction Continue, and then writes the error again with ErrorAction Stop, because the general error action is set to Stop on this runbook. This is again for logging purposes: in Log Analytics you'll be able to see this error output by itself, so we can easily find it. Another thing we do is add the trace log and then send the whole object out as JSON, because when we send that to Log Analytics it parses the JSON, and we can use the parts of the object as separate fields in our reports, which is very useful.

Of course, I could talk a whole hour about this template and how we use it, but this is just an example you can download and try, and of course send me comments if you have any, or we can talk afterwards; just don't bash me completely if you didn't like it, but constructive feedback is always good. There are some things we learned, though. If you are running on a hybrid worker, for example, and you're outputting these objects and trace logs, they can get quite big, and there's actually an issue if you get over around one megabyte. I tried to make a script to find the exact number, which didn't work very well, but it seems to be around one megabyte. So if you are iterating through a hundred thousand users, you probably don't want to write a trace log entry for each user; maybe you should only write special cases, like errors or warnings, stuff like that.
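The init template's double Write-Error trick, as described, might look roughly like this sketch (helper shapes assumed, untested):

```powershell
# Sketch of the init template's error handling; $ReturnObject follows
# the CTReturnObject shape described earlier.
try {
    # ... main code, as in the control template ...
}
catch {
    $CurrentError = $Error[0]
    $ReturnObject.Status = 'Failed'
    $ReturnObject.Error  = $CurrentError.Exception.Message

    # Write the error once as non-terminating, so it lands in the error
    # stream (and therefore in Log Analytics) as a record of its own...
    Write-Error -Message $ReturnObject.Error -ErrorAction Continue
    # ...then again as terminating, because the runbook's general
    # error action is Stop
    Write-Error -Message $ReturnObject.Error -ErrorAction Stop
}
finally {
    # Output as JSON so Log Analytics expands each property into a field
    $ReturnObject | ConvertTo-Json -Depth 5
}
```

The JSON output in the finally block is what later lets reports filter on individual properties of the return object.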
Source control. How many use source control for the scripts? Yes, after VS Code arrived, I think that went up by a few hundred percent or more. We have a rule: everything goes to a script repository, a source control repository. Git is great, but I've heard a lot of people, when they hear about Git for the first time (or multiple times), think about GitHub, and just to be clear: it is not GitHub. Git is a way to make a repository, which works with GitHub or any other host, and it does not have to be very hard or very advanced; there are just a few things you need to learn. We use Visual Studio Team Services and are pretty happy about it, actually; no real issues for us.

A good thing about Visual Studio Team Services and Azure Automation is that there is a script Microsoft has made that can, on the event of checking in code or merging it with a pull request, trigger a runbook that then pulls the new changes into your automation account. Of course, you can set up a more advanced pipeline if you want to, but there's no official documentation from Microsoft describing that yet, at least. I would expect that at some point we'll get some tasks we can put in the pipeline, or if anybody wants to make these tasks, you can actually extend VSTS yourself and make some nice steps to put into your pipeline.

If we look at that, just a quick look here, this is one of them. What you set up is called a service hook, and a service hook means that when some event happens, it triggers a call to a webhook. We can then use webhooks in Azure Automation to trigger a runbook that reads what it receives: it contains everything about the changes, which files have changed and so on, and this runbook then goes in and pulls that back into the account.
So in here, for instance, we have this folder called Runbooks, and this is the folder that we check; I've set it up to only check that folder. If I write some kind of test script, a PSConf demo .ps1, fat-finger something into it, and check it in... a good idea, by the way, is not to just accept the default commit message; write something like "PSConf demo", because you can see that in the history. As soon as I push this, it triggers the service hook, which triggers the job; you can see it started right now. Of course, this is just a demo; we could have multiple branches and use a pull request, and it then tracks and imports whatever has been added. Don't mind all the failed runbooks, this is a demo account, and soon we should have the actual PSConf script down here.

But we found an issue with the script from Microsoft, because it only supported checking in directly to the branch, and not using pull requests, so we had to update that script. I must admit I haven't submitted a pull request to their open source project yet, but I will soon, I promise. For now, our version with pull request support can be downloaded from this middle link here, and the guide for setting it up is in the official documentation. You can take pictures, but you can also download the slides afterwards; I promise to upload them before the conference is over, maybe even today.

All right, logging. Generally in logging, I learned that it's better to log before you do something than after, because when something fails, you can then see where it stopped; if you only log afterwards, you won't get that. Of course, it's also nice to log the result of something, but it's very important to log to the trace log before you do something, and optionally log after. Use the trace log variable, and that's why we made that little function in our module.
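The log-before principle can be sketched like this; Add-CTTraceLog here is a minimal stand-in for the CTToolkit helper, whose exact name and behavior I'm assuming:

```powershell
# Minimal stand-in for the trace log helper (name assumed); the real
# CTToolkit function can also flush entries to the local event log.
$script:TraceLog = @()
function Add-CTTraceLog {
    param([Parameter(Mandatory = $true)][string] $Message)
    $script:TraceLog += '{0:u} {1}' -f (Get-Date), $Message
}

# Log BEFORE the action, so a failure still shows how far you got
Add-CTTraceLog -Message "Creating mailbox for $UserName"
New-Mailbox -Name $UserName
# Optionally log the result afterwards as well
Add-CTTraceLog -Message "Mailbox created for $UserName"
```

If New-Mailbox throws, the trace log still contains the "Creating mailbox" entry, which is exactly the point being made.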
Log Analytics. We are using Log Analytics, and in general I recommend it if you are doing Azure Automation stuff. Log Analytics was formerly part of OMS; it still is, but now it's also part of the normal Azure portal. We can very easily forward the logs that we have from the runbooks into Log Analytics. What does that help? Well, we can actually keep up to two years of logs at minimal cost. Then again, if you are logging the verbose messages, you are paying for, I don't know, five million lines every time you run the runbook; maybe that's a bit of an exaggeration, but you will pay for much more if you have verbose enabled, because that gets sent over there too. It has an advanced query language, so we can query our logs, expand things like JSON, and use it for a lot.

What we get from Automation is something called JobLogs, which is the general result and status of a job (started, stopped and so on), but we also get JobStreams, which is whatever we output to the output stream. This means that we can actually make alert rules (which I'll show afterwards) on stuff that is output from the runbook. And as I said, if the output is JSON, it actually expands the JSON automatically, so each property in the JSON will be a field in Log Analytics, and we can filter on that field directly. If you are using DSC, you also get the node status and other events from DSC. When you configure the forwarding, you can set which categories you want, so if you don't use DSC, or if you only want the JobLogs, you can enable just that.

So this is one of the examples here. If we have the category JobLogs, we always have a field with the runbook name, and for a JobLogs entry, ResultType is the status that you see up here. If we have a JobStreams entry, we have to look at ResultDescription to see the output that we sent, and that is if it's not JSON output, because if it's JSON, ResultDescription will be empty, and each JSON field will be directly available as a field named ResultDescription, underscore, and then whatever field name your JSON property had.

All right, let's look at that. First of all, I hope we don't lose the connection; luckily in this session I don't have Lego robots and whatnot. If anybody saw my session yesterday, I wasn't very lucky with the demo gods, but this one is probably safer. If you go to the Azure portal here, you can find the Log Analytics workspace and go to Log Search. There is a search box there, but there's actually a much better interface if you click Analytics and go to the advanced portal instead. The advanced portal looks like this, and I'll zoom in a little bit. You can search, and it has pretty nice IntelliSense, so you can type, for example, Category == "JobLogs" (did you see it actually helped me there?), and you can press Shift+Enter to run it, or click the Run button. It searches the data in AzureDiagnostics, because it's the same table as all other diagnostics from VMs and so on, but it has a resource provider for Automation, so that part of the query means all the stuff from Automation, and then we can do whatever extra filtering we want. We can see we got the results here, and if I remove some of the filtering, we see everything. You can then go into the columns (it's a little bit difficult at this zoom level), and in here I can choose columns like the input, the output, or the trace log, and if I select the trace log column, there you go.
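A query along the lines of what the demo shows could be kept in a script as a here-string and pasted into the advanced Analytics portal; the field names follow the AzureDiagnostics schema described above, but check what your own workspace actually contains:

```powershell
# Untested sketch: a Log Analytics query for failed Automation jobs,
# stored as a here-string for reuse (paste into the Analytics portal).
$query = @'
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.AUTOMATION"
| where Category == "JobLogs"
| where ResultType == "Failed"
| project TimeGenerated, RunbookName_s, ResultType, JobId_g
'@
```

The same `Category == "JobStreams"` filter with `ResultDescription` (or the expanded `ResultDescription_*` fields for JSON output) gets you at the runbook's actual output.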
The trace log is now output, see, right here. Of course, there are a lot of executions in this demo account that don't use our best practices, because that's stuff that's been running for ages. So this means we can actually use Log Analytics. To actually enable the forwarding, it's in the documentation of course, but I've put a little script here: you connect to your Azure account, then you need to find the resource. You might wonder why I search for the name "automation" and then do a Where-Object that finds the exact name. Well, it's because they chose, when they made that cmdlet, that it always does a wildcard search. I have two accounts, one called "automation" and one called "demoautomation", and if I pass the name "automation" to the cmdlet, it gives me both accounts. They have not followed the normal practice in PowerShell, which is kind of annoying, something that destroys your Friday night, right? So you just add a Where-Object on top; it's not that bad. Then you can either get the diagnostic setting, if you want to see it, or use the set cmdlet instead with that resource ID, and you set up the connection there. It also takes a workspace ID, which is the workspace ID you can find in the properties of the Log Analytics workspace in the Azure portal.

All right, so far so good. This means we can now have two years of logs, and you pay in Log Analytics per gigabyte; it costs around three dollars, I think, per gigabyte, so unless you are running a lot of runbooks, logging this stuff is probably not going to cost you much. Just a similar example: Office 365 auditing and SharePoint auditing, which you can enable and track in there. A customer of mine with 40,000 users and a thousand SharePoint sites, with all their audit logging from Azure AD, which files are opened in SharePoint, and so on, pays around $10 a month, and that is with six months retention.
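The little enable-forwarding script described above might look roughly like this, using the AzureRM cmdlets current at the time of the talk; account and workspace names are placeholders:

```powershell
# Untested sketch of enabling diagnostics forwarding for an Automation
# account; replace the placeholder names with your own resources.
Login-AzureRmAccount

# The resource lookup does a wildcard match on the name, so filter
# down to the exact account name afterwards with Where-Object
$account = Get-AzureRmResource `
    -ResourceType 'Microsoft.Automation/automationAccounts' |
    Where-Object { $_.Name -eq 'automation' }

# The workspace ID comes from the Log Analytics workspace properties
$workspace = Get-AzureRmOperationalInsightsWorkspace `
    -ResourceGroupName 'rg-logs' -Name 'my-workspace'

# Forward the JobLogs and JobStreams categories to the workspace
Set-AzureRmDiagnosticSetting -ResourceId $account.ResourceId `
    -WorkspaceId $workspace.ResourceId -Enabled $true `
    -Categories 'JobLogs','JobStreams'
```

Use Get-AzureRmDiagnosticSetting with the same resource ID first if you just want to inspect the current setting.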
Each time you add another month of retention, you pay $0.10 more per gigabyte, but usually this is almost no cost at all. I mean, I have talked with people who said, "yeah, but we are running a million jobs a day", or a month, and then of course it can add up. But you can save it for as long as you want, and I think it can actually do more than two years; officially, in the GUI, you can do two years, and if you need more, I think you can talk to Microsoft about that, but I'm not sure.

All right, so when we have these logs, we can make alerts. Previously we were using OMS alerts, which meant you had to have an account that was linked to your OMS workspace, that is, your Log Analytics workspace, which meant the workspace had to run on per-node pricing. That's a different pricing model, and it became more expensive in many cases. There was also stuff like the account having to be in the same resource group, the same location, and whatnot. With the new Azure alerts, which went GA like a month ago or something, we can now create alerts based on Automation or on Log Analytics. These new alerts have no requirements for linking anything, and for each rule you can use whatever account you have in a subscription. I think the GUI at least expects them to be in the same directory; I don't know if it works across directories, but otherwise there are no requirements at all, you can just select a runbook and so on.

There are two ways to do it. The Azure Monitor one is actually really new, it came a few weeks ago or something, where we can trigger on the actual job result itself, directly on the runbook, but it is a first iteration, so it isn't very flexible: you have one rule for one runbook; actually, from that one rule you can trigger up to ten runbooks
and do different things, but you have one rule per runbook. Currently, it only works on runbooks executed in Azure, something I discovered a few weeks ago, and I was allowed to tell you that that is not on purpose; it's something they are working on. It was actually a regression, as it's called, where it was working, but then something suddenly made it fail, so it should be fixed soon, I think.

The other way is a Log Analytics query alert. The first one triggers in about five minutes in my tests; the other one takes at least fifteen minutes, because it takes maybe ten minutes for the logs to get into Log Analytics, and then you can only set those rules to evaluate every five minutes, so at least fifteen minutes in total. But it is FAF. Does anybody know that acronym? Actually, I just created it myself: "flexible as, let's say, fig". If you don't know it, that's probably because I came up with it two days ago for this slide. It means you can do anything: you can combine things, you can do crazy things in Log Analytics, which means you can make very flexible rules that call whatever you need. Very handy. It means you can make one rule for all your runbooks, for any result or any error sent from a runbook, for instance. Even errors that are non-terminating can be seen in the log, and you can make rules on those and trigger stuff to fix it, if that's what you want.

All right, so the Azure alerts. There are a few things you need to know when you try to create them. First, you go to Azure Monitor here, and you go to Alerts, and you create a new rule; you can also see the existing rules in here. You select the target, and the target is the Automation account, so you select resource type Automation account and pick the account there. Then you go to Criteria, and this is the important part,
And I have a slide that describes it afterwards — let me zoom in. The signal type is Metric, and there is one called Total Jobs. If I select that, it looks at my data for the last six hours — you can change that to 24 hours or one week — and down here you can select a runbook. But as you can see, I can't select all runbooks, only the ones that have actually run within the last six hours (again, I think that is being worked on), and likewise you can only select statuses that actually occurred in that window. This means you have to run your runbook at least once, and you need a failed run — so create a test runbook that fails on purpose, run it, and then you can select it here together with the Failed status. Then you set how many failures should trigger the alert — more than zero — and how often to check it, say every minute, and now we have an alert that actually fires. In these new alerts you then attach something called an action group, and an action group can for example trigger a specific runbook — this one of mine actually sends Google notifications, so the Google Home can say "hey, your runbook failed" and get you out of bed on a Saturday night. A good little tip, at least for now: put the runbook name in the rule name, because the result you get from the alert contains the rule name but might not contain all the other information you want. So that is how you create an alert, and when it fires it can trigger a runbook. Of course I made a little example of how to handle the result — here is an example payload that I pasted into my script.
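A minimal sketch of such an alert-handling runbook, under stated assumptions: the `data.context.*` layout follows the common Azure alert webhook schema as it looked at the time of the talk, and may well change — inspect a real payload from your own alert before relying on it.

```powershell
# Sketch: a runbook started by an alert's action-group webhook.
# The payload layout (data.context.*) is an assumption based on the
# common Azure alert schema - inspect a real payload first.
param(
    [object]$WebhookData
)

if ($null -eq $WebhookData) {
    throw "This runbook is meant to be started by a webhook."
}

$payload = $WebhookData.RequestBody | ConvertFrom-Json
$context = $payload.data.context

$ruleName      = $context.name               # tip: put the runbook name in the rule name
$resourceName  = $context.resourceName       # the Automation account
$resourceGroup = $context.resourceGroupName
$firedAt       = $context.timestamp

Write-Output "Alert '$ruleName' fired at $firedAt for '$resourceName' in '$resourceGroup'"
```

From here you can branch on the rule name to decide which remediation to run.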
The runbook gets this webhook data like any other webhook, and then we can read it: we have the time it happened, the rule name, the resource name — which is the Automation account — and the resource group name. Looking more closely, we can actually get at the rule we made, so we can extract the runbook name and the status and go from there. That might be subject to change, since there is clearly room for improvement, but you can use it as a template for now. One little warning: I tried to update the documentation, because it doesn't mention the data field, and the reply was "well, how did you get that data field? It isn't usually there in Azure alerts" — so something may need straightening out there. All right: we have the logs, we have monitoring, now we just need some reporting. The demo showed what I just told you, so you can easily download these slides and do the same — there is also a runbook example for Log Analytics results, because that one is different; it comes from the official documentation. For reporting, you can use PowerShell to generate an HTML file or an Excel sheet. I don't like that particularly much, but if you have to automate sending something like a status message after a night of patching, it is really great for that — PowerShell for the win, of course. Still, sometimes it can be nice with a GUI — sorry about that, I'll go hide in the corner while you throw stuff at me. If you really want a nice surface for looking at your data, Power BI is great. How many of you have used Power BI? With a big Power BI project you sometimes feel like — has anybody seen the movie A Beautiful Mind,
with the guy who sees patterns everywhere? If you have a really complex model you get these crazy pattern diagrams, and you can join things from different sources — an API, a database, another database — into one table and show it. It is great, and you can get quite advanced. Another thing is that Power BI also helps with the JSON I talked about: we can show the inputs and outputs separately, along with the trace log, error messages and so on. You make a query in Log Analytics — either for everything or for something specific — and then, when you have your query, you click "Export to Power BI" up here. For some reason it isn't showing right now... there it is. But notice the time-range picker here: last 24 hours, last two days, seven days, or a custom range. If we want a report with all our data — say the last two years — a custom time range doesn't work, because it is a fixed start date and end date, and we need it to be dynamic. So what we do instead is put the time filter into the query itself: pipe into a where clause saying TimeGenerated is newer than 730 days ago — that is two years.
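The dynamic window he describes can be sketched like this (hedged: AzureDiagnostics is the table Automation diagnostics typically land in — check your own workspace):

```kusto
// Put the time window in the query itself, so the export always covers
// a rolling two years instead of the portal's fixed custom range.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.AUTOMATION"
| where TimeGenerated > ago(730d)
```

With the filter inside the query, the refreshed Power BI dataset keeps sliding forward in time on its own.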
You'll see that when I do that, the GUI actually changes — let me refresh... see, now it says "set in query". So we no longer use the picker to select the time range; we write it in our query, which means we get all our data here. Well, it only shows the first 10,000 rows, but it loads pretty quickly. To use that in Power BI, we click here and get what is called an M query — a piece of text you can copy into Power BI so it can show all of it. All right, now we are ready for the demo of the beautiful part, the Power BI stuff. You create the report in Power BI Desktop, which is not the place you want people actually looking at the data — at least not end users or anybody who is not into this, because you can easily mess things up; it is the designer, you could say. You make the cool stuff there and then upload it to your Power BI workspace, like this — and I can make it bigger, like that. Now we can go in here and do things like listing the jobs. If I select, for example, my demo runbook from the template, or the robot one, we can see the robot jobs — whoops — we can select just this one and see the logs down here. We can filter on dates, so we only see the jobs from a given day or whatever range you choose — right now I have it from the start of March. Another cool thing is that you can add filters like this, so we see only the failed runs, or the failed and the completed, or both — it is very flexible. You can restrict to a specific account or subscription. And we can easily sort out the job streams that have type Error and mark them
red, so we can see any errors that happened in any time period. I also made a new table that calculates, from the start event to the end event, how many minutes and seconds each runbook was using, and from that we can calculate how many minutes we used in total. The last thing — you might not be able to see it — is right here: a link. You can make a column that contains a link, and this link goes to an Azure Function I made, which in turn triggers a runbook. If you click one of these links on a job, it re-triggers the job with the same parameters. So if runbooks failed because, say, credential passwords had expired, somebody can just click here and re-run the job with the same input. I don't have time to show all of it, but it is in the content you can download: there is the Azure Function code — you send a job ID to the function, it triggers the webhook, and the runbook behind it logs in to the account, gets the job, figures out the job's parameters, converts them into a hashtable, and starts the runbook again. For instance, we sell a product where people can create trial accounts, and if there is already a trial account from the same company we will not create a new one on top of it; instead we get an incident saying "these people tried to get a trial account, but their company already has one". What do you do then? You just click the link, and it re-runs the job with a parameter set that says "overwrite the old account" — if that is what you want. Very powerful. In Power BI you can also show different things, nice graphs, and you can do drill-through on the jobs.
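A minimal sketch of the re-trigger runbook described above, assuming the Az.Automation module; the resource names are placeholders, and the exact shape of the job object's parameter collection should be verified against your own account.

```powershell
# Sketch: re-run a finished Automation job with its original parameters.
# The resource group and account names below are placeholders.
param(
    [Parameter(Mandatory = $true)]
    [guid]$JobId
)

$resourceGroup = "MyResourceGroup"        # placeholder
$account       = "MyAutomationAccount"    # placeholder

# Look up the original job and which runbook it ran
$job = Get-AzAutomationJob -ResourceGroupName $resourceGroup `
    -AutomationAccountName $account -Id $JobId

# Copy the job's input parameters into a plain hashtable
$params = @{}
foreach ($key in $job.JobParameters.Keys) {
    $params[$key] = $job.JobParameters[$key]
}

# Start the same runbook again with the same input
Start-AzAutomationRunbook -ResourceGroupName $resourceGroup `
    -AutomationAccountName $account `
    -Name $job.RunbookName -Parameters $params
```

In his setup this sits behind an Azure Function so it can be triggered from a link in the Power BI report.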
Right-click here and drill into a job's details to get another kind of view that shows them — for example this one; you can see that, and put in whatever you want. You can also do things like a chart of how many minutes we are using: if you just drag in the field I made for total minutes, it automatically builds a drill-down, so I can drill from quarters into months into days of the month. Pretty neat. You can also look at the usage of minutes per runbook, so we can see which ones really consume, and I put a date filter there so we can go to a specific range and see which runbooks used what. These are just some examples I've made so you can see different things. Another thing: if you output JSON — this one, for instance, "test output JSON", outputs a list of processes from Get-Process — it actually gets expanded by Log Analytics, so I can read that result into a custom table if I want to. Taking that JSON trick — the expansion of JSON — and combining it with the return object we made, which we output as JSON, means I get a separate column for each property in the output. So I can make a view like this, with runbooks here, my own trace log here, inputs up here and outputs here, and of course a drill-through that shows all the other streams. I can drill in here and get outputs and inputs — this one even has verbose added, so I can show only the verbose, or everything except progress and verbose. You can really easily manipulate the data, so I think this is a very nice surface for showing the results of the runbooks. It is of course available in the content I will share — I just need to remove some of the links to my own function first. And you see here what I mean by the relationships.
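Going back to the JSON output: the trick can be sketched like this. The property names on the return object are placeholders for whatever your template actually carries; the column expansion itself is done by Log Analytics on ingestion, not by this snippet.

```powershell
# Sketch: emit a structured result as JSON from a runbook, so Log Analytics
# can expand each property into its own column. The property names here
# are placeholders, not the actual template's fields.
$return = [pscustomobject]@{
    RunbookName = "Demo-Runbook"
    Result      = "Success"
    TraceLog    = @("step 1 ok", "step 2 ok")
    Output      = (Get-Process | Select-Object -First 3 Name, Id)
}

# -Depth makes nested objects survive; -Compress keeps it on one line
$return | ConvertTo-Json -Depth 5 -Compress | Write-Output
```

Each top-level property then shows up as its own column in the log query, which is what makes the separate inputs/outputs/trace views in Power BI possible.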
With something like a hundred tables it gets pretty crazy, but it is actually really powerful. So, what did we learn from using Power BI? The JSON output is automatically expanded — when I discovered that I thought "what? Why are you doing that? I didn't ask for it, can I disable it?" — and then I thought, maybe I can use it for a good cause and take advantage of the feature; that is why we output the return object as JSON, so we get all the data in separate columns. And you can set the time range in the query, so you are not limited to the last seven days or whatever the picker offers. That is about it for reporting. Since I have a little time left: last year I started a project alone, put it on GitHub, and somehow total strangers have helped. It is a module to import and export runbooks: when you export a runbook, it automatically exports every runbook it depends on, plus all the assets it uses, into a folder — a package you can take to another customer, or from your test environment to your production environment, and easily import everything. If you want to help continue the work on it, you are of course welcome to join. All right, summary. Authoring: VS Code rules, but ISE is still useful. Structure: use templates — they make people much better at logging because it is available right there, and one thing I've learned is not to wait until you are done with your script to add logging, because the logging is what you need to troubleshoot the script while you are developing it, so use templates with logging right away. Administration: use source control. Logging: Log Analytics. Alerting: Azure Monitor rules. And then Power BI for the reporting in the end. That was my 360-degree tour of how we do Azure Automation and how we will do it in our company in the future,
and we will keep iterating and making it better. Anybody who wants to change a template or come up with new stuff is very welcome to send it to me — I might put it on the GitHub repo too, if anybody wants to join in. So, any questions before we end? You are also welcome to ask me afterwards, if my voice doesn't break completely. Otherwise I'll just say thank you for coming, and I hope it was useful — or at least inspirational — for what you need to do. [Music] [Applause]
Info
Channel: PowerShell Conference EU
Views: 338
Rating: 5 out of 5
Keywords: PowerShell, PSConfEU2018, Monad
Id: NvI8lfhdBUQ
Length: 58min 30sec (3510 seconds)
Published: Sun May 13 2018