[DAX] Best Practices 101 for Optimization & Performance (with Alberto Ferrari)

Captions
Good morning for me, good evening to you. Pleasure to see you again, and thank you for joining. It's morning here; for you it's about 6:30 pm, early evening, so thanks for giving up part of your Friday. Of course, you're welcome.

I think the last time we saw each other was at Next Step in Denmark, and we already did a video together there. I was at PASS as well, but I don't think we ran into each other; honestly I didn't meet a lot of people at that one. It was not so crowded, and people were still not used to travelling that much. Will you be at SQL Bits? Of course. SQL Bits and PASS have historically been two of the biggest events of the year, so we'll be bumping into each other again in just a few weeks; it's only about four weeks from now. I actually just finished booking all my tickets. I need to do that too, I don't have mine yet. If I wait too long, coming in from Seattle, the prices go up by three to five hundred dollars.

One final question on that: do you have an outfit or costume picked out for the themed party at SQL Bits this year? Sorry, say that again? You know, SQL Bits has that little themed party they do every year. I'm actually bringing my wizard outfit; it's my avatar from my YouTube channel, so I'll have a wizard hat and a cape to go with it. I don't know if you plan to dress up. That is a cool costume. I never wear anything strange at those events, because there's already a lot of stuff to bring along with all the gadgets, so I keep it simple. Fair enough. Since this year the theme is fantasy, they should do Star Wars next year, because I just bought, how do you call it, a laser blade? A lightsaber. Exactly, I just bought one. It's amazing, I love it. Does it actually light up when you turn it on? Absolutely. Force FX lightsabers is what they call them: if you tap it, it makes the sound and it lights up and everything. And you can also use it for doodling if you want. Somebody in the chat says you're speaking his language right now, so I'm sure he has plenty he could show you. I actually have a saber as well. Honestly, depending on the outfit, a Jedi robe with a lightsaber is technically fantasy themed; if you showed up like that, I don't think anybody would be upset. Can you picture Gandalf fighting Darth Vader? That would be a nice fight. That would be pretty cool to watch, and a great photo opportunity for sure. I did show up as Chewbacca last year; that was my costume for the previous one, because I had to buy it last minute at a store in London.
I did not realize there was going to be a costume party, so it was a last-minute purchase. I'm enjoying some of the comments in the chat.

Decorations aside, and thank you for the suggestion for the title: we're on our 100th and 101st live stream today, so "Optimizing DAX 101" is a very appropriate title. Marco was on back on the 98th stream, and you get to usher me past the 100 mark. I appreciate you joining for this, and I'm looking forward to some deep dives and conversations around DAX in general. There have been updates and evolution in the language over the last year, which has been nice in terms of new functions and other things. Yes, it's nice when we get new Christmas presents to play with. You're definitely one of the top experts worldwide when it comes to understanding the language, and you have a strong influence on how it grows through your interactions with the product team, Jeffrey, and everyone else there, so I'm very interested to see where our conversations get a chance to go today.

Cool. Do we have questions already? There was one on CALCULATE. My hope is to have some conversations around existing thoughts you have, but also to keep this as organic as possible, so ask questions in the chat as they come up. Matthias had a question we can start the ball rolling with: when you have a very nested measure, how deep can you change the context with CALCULATE without losing performance?

That's interesting, because it looks scary but actually it is not. The question is very precise, how many times can you change the context, and the answer is quite easy: as many times as you want. It doesn't really matter, because even though you have a lot of nested CALCULATE calls one inside the other, the filter context is not changed every time. It is lazily evaluated: nothing is computed until the engine reaches the bottom of the measure, and only then does it check which filter context actually needs to be evaluated. So if you place a filter, remove it, place it again, remove it, and place it once more, the engine is not going to compute all of those steps; it only works out the final filter that needs to be applied, and that is when the calculation actually happens. Theoretically, of course, you always need to keep in mind that the optimizer does its best but cannot always figure out exactly what you want, so sometimes you end up with complex calculations. But you shouldn't be scared of using CALCULATE multiple times; it just works nicely.
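To make that point concrete, here is a minimal sketch of deeply nested CALCULATE calls; the [Sales Amount] measure and the 'Product'[Color] column are illustrative, not from any particular model discussed here. The filter context is built lazily, so the nesting itself does not multiply the cost.

    Red Sales :=
    CALCULATE (
        CALCULATE (
            CALCULATE (
                [Sales Amount],
                'Product'[Color] = "Red"    -- innermost filter wins on this column
            ),
            'Product'[Color] = "Blue"       -- overwritten before anything is computed
        ),
        'Product'[Color] = "Green"          -- overwritten as well
    )

The engine resolves the final filter context once and evaluates [Sales Amount] against it; it does not apply Green, then Blue, then Red as three separate passes.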
And I think the engine does a pretty good job of automatic optimization in the back end; maybe you can speak to this. In the last 12 to 24 months, I forget exactly when, I believe they added more automatic optimization of certain bad-practice patterns that had been identified. It wasn't something the developer writing the measures had to do; the engine recognizes "this is going to perform slowly", automatically rewrites it, the number stays the same, but it runs a lot faster. I thought something came out about 16 to 18 months ago where three or four identified patterns were being rewritten inside the engine.

This is a continuous job, because with every new release they find new optimizations. I think the latest one is fusion, horizontal fusion and vertical fusion, which is just great because it saves a lot of time. To me the hardest part is that DAX is a very strange language: the way we write and describe a calculation is incredibly different from what happens inside the engine. You write an iteration and you have the feeling that some iteration is happening, but when the optimizer finds the iteration it may completely change the algorithm. You write an IF condition and you expect the engine to evaluate the condition and then choose one of the two branches, and that is not what happens; the algorithm followed by the engine has basically nothing to do with what you wrote. So the hardest part of optimizing DAX is understanding the translation between what you asked for and how the engine executes it. Intuitively, if you write "if a condition, then compute this measure, otherwise compute the other measure", we have the feeling the engine will compute the condition and then one branch or the other. Actually, a lot of the time the engine finds it easier to compute all three values, the condition, the true value, and the false value, and only choose at the end which one to use, because that is actually faster.

SWITCH statements can also be a sinkhole of performance, and variables can drastically improve some of those. I remember, back before variables came to DAX around the 2016 wave, a SWITCH statement with eight conditions could be incredibly slow; as soon as you declare a variable for the measure that is referenced multiple times, you get something like an eight to ten times increase in performance. That is true. SWITCH is basically a set of nested IF statements, so writing a SWITCH is like writing one IF for each condition you test, and if those expressions are evaluated every time, having ten different values to check can multiply the complexity of the measure by ten. If you compute the value in a variable at the beginning, you compute it once, use it through all the branches, and save time. At the same time, it always depends: depending on how the optimizer is able to read your code, it will either go for a very optimized path or a completely insane and slow one.
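As a hedged example of the variable trick just described (the measure name and the thresholds are made up), caching the value once keeps SWITCH from re-evaluating it in every branch:

    Sales Band :=
    VAR SalesValue = [Sales Amount]        -- evaluated once, reused in every branch
    RETURN
        SWITCH (
            TRUE (),
            SalesValue >= 1000000, "High",
            SalesValue >= 100000,  "Medium",
            SalesValue >  0,       "Low",
            "No sales"
        )

Without the variable, [Sales Amount] would appear in every branch, and since SWITCH unfolds into nested IFs the engine may end up evaluating it several times.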
So the engine essentially finds the path of least resistance, the shortest, most optimized route between two points of data through the relationships. Yes, but the problem is that sometimes the code is really hard, and a tiny change matters. The most common error most newbies make at the beginning is to filter a table in CALCULATE instead of filtering a column. If you write FILTER over the whole Product table, that is probably one of the worst things you could do, and FILTER over Sales is even worse, because you are creating a gigantic filter that slows down the entire query. Filter a couple of columns instead and the code immediately becomes faster; you have no idea how many times I see that pattern. The problem is the optimizer: it does an incredible job, but it's not perfect, so it needs a lot of help.
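A sketch of that anti-pattern and the fix (table, column, and measure names are illustrative; the two forms are not always semantically identical, but in simple cases like this they return the same number):

    -- Anti-pattern: filtering the whole (expanded) Product table
    Red Sales Slow :=
    CALCULATE (
        [Sales Amount],
        FILTER ( 'Product', 'Product'[Color] = "Red" )
    )

    -- Better: filter only the column you actually need
    Red Sales Fast :=
    CALCULATE (
        [Sales Amount],
        KEEPFILTERS ( 'Product'[Color] = "Red" )    -- keeps existing filters on Color, like the FILTER version
    )

The first version places a filter over every column of the Product table; the second one touches only Product[Color].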
Speaking of help, I think I can say this: we started writing the book about optimizing DAX, another project, about a year and a half ago, and we discovered there was still so much we had to learn before writing it down. Even though we know a lot of details about optimization, putting that into a book is a completely different thing.

(Brief internet hiccup.) There you go, you're back; it was just a second. Okay, we're good.

You were mentioning the book. Marco actually said something similar: there's no better way to learn something than having to write a book about it, because of the deep dives you have to do. Maybe you agree; there was a lot of extra information you learned about the language even after starting the writing process, just from making sure you would explain things correctly in book form. Yes, and besides, you also need to find examples. Maybe you have an idea that some code will be faster or not, but then you need to find an example, test it, measure it, and you discover a lot of things. I can tell you that a year after starting this process I know the engine a lot better, and I'm happy because I'm still learning a lot. It's a book I would like to read myself: it's not going to be easy, but it's going to be a lot of fun if you are really deep into optimizing DAX.

That's a nice segue, because there are a couple of related questions. Speaking of books: when will your next book be out, any timeline, or updates to the DAX optimization course? Unfortunately the only answer I can provide is the very Italian one: the book will be out as soon as it's ready, and we have no idea when it will be ready. We are around 80 percent of the way through the writing process for the book and the training, and both are coming out at the same time, but I don't have a clear picture, because we are still discovering stuff and still changing stuff. We are also doing a lot of work on DAX Studio, because we are using the tools so much that of course I need new features and new functionality, so I just go to Marco saying "hey, I need this and that", and the tools are improving just because of the book. But I don't have a clear picture about the dates right now.

I think there's a degree of that in my work too. The trainings I've focused on have been around the reporting layer, and in the last 12 months the design of the reporting interface has changed and is continuing to change, so if I release anything today based on current builds I'm hitting a moving target: how soon will it need to be updated? Content developers sometimes almost have to wait for big shifts in the product to finish before they can release books or content, especially because, unlike a blog you can update, you can't really change the content of a book once it's been published, and you don't want it to be out of date or incomplete two or three months after something ships. That is true, and that was the concern we had at the very beginning, when Microsoft first asked us to write a book about Power Pivot and then Power BI. There was this insane speed of development, new features every month, so at first we basically said no, we cannot write a book. But at some point you realize you just write it, put a full stop, and say okay, it's done; it will probably be old in a couple of years, but who cares, we'll update it. The Definitive Guide to DAX, for example, does not contain all the new functions that hit the market recently, and we will probably need a third edition, but that doesn't mean the second is bad; there's a lot you can learn there.

The nice thing with coding languages and back-end topics that are less dependent on a UI is that, in most products and languages, DAX, SQL, any of those, new functions and features get added but for the most part nothing that already exists is taken away. What you've already covered is still relevant; you're just missing the five percent that arrived later. Unfortunately, where I focus, on reporting, when the interface changes or one feature is swapped for another, the explanation I wrote of how to do something becomes useless, because the interface doesn't look the same anymore. The target moves faster once you go into UI and interfaces. That's the reason why in the Definitive Guide we didn't put anything about the UI; that was a clear decision. No buttons, just pivot tables, matrixes, and code.
Let me bring up a comment from someone: Nikolaus, I think, if I pronounce your name correctly, who was curious about using individual measures for KPIs. This was partially covered with Marco a few weeks ago, but I'd be interested in your take. To summarize: with a model, is it better to create individual calculations for time intelligence, or to use calculation groups? When Marco was on demoing the Bravo tool, his argument was that the added cost of extra measures is marginal, in terms of performance there's essentially no cost, so if you can snap your fingers and add measures, calculation groups are nice, but there are scenarios and configurations where it's nice to have individual measures for prior year, prior month, and so on. If you can click one button in Bravo and it adds 55 measures in the correct folders, that's fine. Previously, the argument for calculation groups was that it takes a lot of time to write 30 calculations by hand, but with that cost removed there's no problem shotgunning a bunch of measures into a folder and using them when necessary. I'd love to hear your thoughts.

I totally agree with that if performance is your goal. Having separate measures is far better, because you have the option of optimizing, super-optimizing, every one of them, and calculation groups, as much as they can be optimized, will never hit the same level of performance you can reach by choosing exactly the calculation you want. With that said, I don't completely agree with the idea of having hundreds of measures just because you can create them in a snap. Sometimes we forget about users: if you force a user to choose among 200 or 500 measures, you're not doing a good job, because the user will not even know where to search for the measure, or you need to find a way to name them so they carry some meaning. To be honest, here I'm probably against Marco's idea: I think the most difficult job of the analyst is choosing very carefully the tables, the columns, the measures, and the calculations to add to a model, and sometimes you need to say no. Do you really want the year-to-date of every measure? There's no point in that; you want the year-to-date of a chosen set of measures, so pick them carefully and provide only the items and KPIs that are actually needed. But that probably goes out of topic.
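For readers who have not used calculation groups, the trade-off being discussed looks roughly like this; the names and the date table are illustrative, not from any specific model. An explicit measure can be named, foldered, and tuned on its own, while a single calculation item is applied generically to whatever measure is selected.

    -- Explicit measure: can be optimized and organized individually
    Sales PY :=
    CALCULATE ( [Sales Amount], DATEADD ( 'Date'[Date], -1, YEAR ) )

    -- Calculation item "PY" in a "Time Intelligence" calculation group:
    -- the same pattern is applied to whichever measure is in the visual
    CALCULATE ( SELECTEDMEASURE (), DATEADD ( 'Date'[Date], -1, YEAR ) )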
I think that still covers it, and I just want to call out one specific thing you mentioned about users, to give a little more context. What I'm hearing is: when you're building a shared dataset, a curated, maybe certified and promoted dataset that lots of people will build thin reports on in the Power BI service, and there are hundreds of measures, you want it to be easy for people to find the correct measure. That's one reason one of my favorite hidden features, which came from the SSAS days, is the ability to put one measure in multiple display folders using a semicolon. Take a measure like Actuals minus Budget: Steve is likely to think it belongs in the Actuals folder, while Michelle would expect to find it in the Budget folder, so you can put it in both. Folder organization and model curation matter: it's not only the number of measures, but whether they're easy to find and well organized. There's a lot of art and science in building a dataset that is consumable. And I'd argue that's one downside of calculation groups: a measure named Actuals minus Budget makes sense on its own, but a calculation group column that you have to add to the visual or to the filters requires a bit more education to use properly, versus a measure where you just know it does that one thing. It's a double-edged sword; there's always a dance between a simple model with standard calculations and cleaner calculation groups, which are less configurable but reduce the number of fields in your model. Yes, simple is the key. And "simple" is really a cultural thing, depending on the organization and the education its users have on these tools.

I have a good number of questions queued up and I can keep bringing them up, but I also love to see things, so if you have anything interesting to show I'm happy to divert to some practical examples of common ways we can optimize. I do have a couple of demos we can run through. Let's do that.

Okay, and you can help me with this. I have this beautiful report which is kind of slow. You don't see it as slow yet, so let me open it again so we start fresh; everything is black for a moment, but that is expected. We're talking about very simple stuff, but if you look at the report while it's opening, the experience is not really great; there's something wrong here. Where would you start? Not optimizing yet, just understanding. If we met at the coffee machine and I told you "this report of mine is kind of slow, can you help me make it a little better", what is the starting point?

The very first thing I would do is open the Performance Analyzer, just to see which visual is the slowest perpetrator. I'd start by refreshing the page and then checking the individual visuals; unfortunately the cache does throw red herrings, but let's take a look. As strange as it seems, that's the thing nearly nobody does at the beginning: they say "my report is slow" and have no idea where it is slow. Learning how to use the Performance Analyzer is probably the most important first step.
If you open the Performance Analyzer, start recording, and then refresh the visuals, you get a better picture of what is slow and what is not. You get all those numbers showing the duration of each visual: this one took 11 seconds, this one took two, these two around three. An interesting detail is that if you expand one of them you see the DAX query, the visual display, and then "Other". This specific visual took 129 milliseconds in total: the DAX query was 7 milliseconds, the visual display 15, and 107 was Other. Do you know what Other is? Other can be the queuing, the time spent waiting. Exactly, and that is why I totally hate this visual; let me show you Other with a different report. This one is nicer because it has a lot of visuals. If we do the same thing, open the Performance Analyzer, start recording, and refresh, each visual looks reasonably fast, but if you sort them you see a slicer that took about a second: 993 milliseconds, of which the execution time was 16 and Other was 951. That's the very first thing to understand: the total shown there basically doesn't mean anything, because Other is the time the visual spent waiting in a queue. With so many visuals, some could not be executed immediately; there's a queue of something like four or five visuals executed in parallel, and all the others have to wait.

To go a little into that, I believe it's related to the browser. Power BI Desktop runs WebView2, the same browser engine you'd get opening Edge or Chrome, and there's a limit to how much can run in parallel inside a browser, so things wait in line to load behind each other, and that time lands in that giant Other bucket. That I didn't know; I always wondered why there was a limitation, because in my mind you could send hundreds of queries to Analysis Services and it would just answer them, so there should be no need for queuing. The queuing at the visual-container level isn't done by Analysis Services: your browser is the one actually rendering all the containers in HTML and CSS, and that can be a huge load too. If you have 500 text boxes on your page with nothing being pulled from the model, the page can still take a very long time to load, just because all those containers have to be rendered.

So the first thing you can do, the first thing everybody should do to make a report faster, is reduce the number of visuals to a reasonable number, and use small multiples where they fit. If you have a report like this, with 30 or 40 different cards, that is insane; the same thing done with small multiples would be way, way faster, because you're simply reducing the number of queries.
But again, the message is that the number you see in the Performance Analyzer has some meaning, yet it's not the number you should chase. You need to get rid of Other; the only thing you can actually optimize is the DAX query time, the time required to execute the query. That is where you can focus, and that is what you can gain by making your DAX code better. On that note, and I timed this between you and Marco, I released a tutorial two weeks ago on single-value cards, "cards with states", which is a great way to trellis a bunch of card visuals with face-value comparisons and trend lines; it lets you turn 20 visuals on a page into one, which is a massive saver on page render time. I'll put a link up for the people watching. Anyway, reducing the number of visuals is probably the first step for any report, and it also makes it more readable: whenever a report contains hundreds of visuals, you know it's wrong, just because there's too much data on it.

But if we go back to this report, we have several visuals, and the slowest one is taking 11 seconds, and those 11 seconds are almost entirely the DAX query; Other is only around 100 milliseconds, so it's not a big deal. The next step is understanding where the problem is, and for that you typically copy the query, because it gives you the option of understanding what happens under the hood. Then we forget about Power BI, go to External Tools, and launch DAX Studio, because that is where all the optimization happens. DAX Studio opens on a different monitor, so let me place it in the right spot and paste the query. That is the query that is going to be executed. I'm a geek, so I hate seeing automatically generated code; I always clean it up. The captured query always contains that TOPN that I don't like, so I reformat the code a bit to read it better. We know this query is going to take some time, and I typically launch it at least once just to see: you see this tiny number here, that's the execution time, around ten seconds.

So what next? Where's the problem? Check the timings as it runs. Right, but a lot of the time we don't even know which measure the problem is in. We have three measures here, Amount in USD, Open Orders, and Amount in EUR, so which one is causing the trouble? I always go for the easy approach: I start commenting out stuff until I find the problem. And there's a feature I want to show, because I discovered quite recently that a lot of people don't know it: this Debug Commas button. If you click it, the commas are placed at the beginning of each line instead of at the end, which makes it much easier to comment out lines wherever they are. I remember being part of the conversation when this was added; it was a Twitter thread between myself, Daryl, and a few others, because I, along with a few others, love those leading commas. That button wasn't there until about two years ago.
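A rough sketch of why the leading-comma layout helps when isolating a slow measure; the query and the grouping column are illustrative, not the exact query captured in the demo. With the comma at the start of the line, any measure can be commented out on its own line without breaking the syntax.

    EVALUATE
    SUMMARIZECOLUMNS (
        'Date'[Calendar Year]
        , "Amount in USD", [Amount in USD]
    --  , "Open Orders", [Open Orders]        -- commented out to isolate its cost
        , "Amount in EUR", [Amount in EUR]
    )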
Yes, it makes commenting super easy. And I totally remember that conversation, because if you ask me this style is insane; I would never be able to read code written like that. But I admit that for debugging purposes it makes a lot of sense, because you don't have to remember that the comma sits at the end of the previous line.

Anyway, we know we start at around 17 seconds for the full query, and the easy first step is to keep only Amount in USD, a simple measure, and you see it's instant, so the problem is not there and we can set it aside. The problem might then be one of the other two measures, so we comment out Open Orders, for example, and see what happens. This process can be tedious, because you might have 10 or 20 different measures and you need to find which one actually hurts, but it's important because it drives all the later choices. You see that Open Orders takes 10 seconds, so that's a slow measure, and Amount in EUR, which is likely some currency conversion, takes 2.6 seconds. The serious problem is in Open Orders, and that's where we should focus, but it's also the most boring, so we'll forget about Open Orders and play with Amount in EUR, because that is much more interesting.

Now we can get rid of the debug commas, go back to our code, and the first thing is taking a baseline: where do we start from? The results pane says around 2.6 seconds, but that number alone is not precise enough, so we enable Server Timings to understand the time better. The Server Timings panel shows a lot of information, and by the way, I think you are the first to see the new version of Server Timings coming with the next release of DAX Studio. Nice; some of the buttons look a little more rounded. Look here, the new waterfall is just gorgeous. It reminds me a bit of a stacked column chart. We'll probably end up changing the name from "waterfall" to something else, but it basically tells you when the storage engine was running, when the formula engine was running, and which query is which. It shows that this query was executed at this point, then this one after some formula engine work, then more formula engine later, and that gives you a very clear picture of exactly what was happening in the code. So compared with the current version, the waterfall previously just showed timings; it didn't break down formula engine versus storage engine, right? Exactly. We're writing the book, so we needed several features, and this is one of the important ones. And for those of you who appreciate it, look at the xmSQL code, which is now formatted so it can actually be read by a human being; you have no idea how much time I used to spend formatting those queries manually. Anyway, I'm not here to advertise DAX Studio; it's free anyway, so I wouldn't get anything out of it.

I can geek out on the technical stuff, so this is super cool. Looking at the waterfall, what's that little sliver of storage engine close to the right, the small strip of blue in the middle of the yellow? That's what fascinates me: what is that teeny tiny storage engine query doing there?
It must have been a small table generated at some point, not much data comparatively, and we don't know exactly which query it is, because there are a lot of tiny queries here. But there is a lot of communication between the two engines, and that part is useful: it tells you, for example, that the engine executes this query, then computes something in the formula engine, we don't know what, before executing the next one. That is typical: the formula engine is preparing a filter or something for a specific query and then running the next query, so queries executed later probably depend on calculations that happened before, and sometimes seeing that is extremely useful. We changed some other details too, but they're less important, and as soon as the new version is out we'll do training material about it.

Anyway, now we need to understand where the problem is. All of this is useful to establish that we start from 2.6 seconds, and that's the number to beat; we want to go faster than 2.6 seconds. I always note the starting point, because as we change the DAX we need to check whether it's faster or not, and I'll forget this timing quite soon. So Amount in EUR is our enemy right now, and we want to make it faster. We could go back to Power BI and look at the code, but that would take forever. What we can do instead, and it's much better, is search for Amount in EUR here, right-click, and choose Define Measure. DAX Studio puts the code of the measure right here, and any change I make is reflected only in this query, which means I can make all the modifications I want and test them quickly. And for those of you who love nesting CALCULATE, that was the first question at the beginning, you can also use Define Dependent Measures. For some reason it isn't working for me right now (I need to tell Marco; sometimes it works, sometimes it doesn't), so I won't demo it, but when it works it shows not only the definition of the current measure but also, because this measure uses Amount in USD, the definition of Amount in USD, and so on: the entire tree of measures. You can start from one measure and end up with the definitions of maybe a hundred, which gives you a clear picture of the whole complexity you're looking at and the option of changing code anywhere in the chain. So you can basically backtrack and test everything in that dependency chain without having to go and find every measure it references. Exactly. At the end, of course, you need to put the measure back where it belongs, but at least you can iterate very quickly.

So here is the code, and now the real fun starts: how do we optimize it? This calculation computes the amount in euros. It iterates over the Orders table, computes the amount in dollars, and then there's an IF, which is needed because in this database we do not have the currency exchange rate for every date: the rate is only available on the first day of each month. LOOKUPVALUE searches for the rate on a date built from the year of the order, the month of the order, and day one. The measure then checks whether that rate exists, because it might not; if there is no exchange rate, it multiplies by the average exchange rate over all time, just to obtain some number, otherwise it returns the amount multiplied by the rate it found.
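Reconstructed from that description (the real model isn't shown, so every table, column, and measure name here is a guess), the starting measure looks roughly like this:

    Amount in EUR :=
    SUMX (
        Orders,
        IF (
            ISBLANK (
                LOOKUPVALUE (
                    'Exchange Rate'[Rate],
                    'Exchange Rate'[Date],
                        DATE ( YEAR ( Orders[Order Date] ), MONTH ( Orders[Order Date] ), 1 )
                )
            ),
            -- no rate for that month: fall back to the all-time average rate
            [Amount in USD]
                * CALCULATE ( AVERAGE ( 'Exchange Rate'[Rate] ), ALL ( 'Date' ) ),
            -- otherwise use the rate found for the first day of the month
            [Amount in USD]
                * LOOKUPVALUE (
                    'Exchange Rate'[Rate],
                    'Exchange Rate'[Date],
                        DATE ( YEAR ( Orders[Order Date] ), MONTH ( Orders[Order Date] ), 1 )
                )
        )
    )

Note the two identical LOOKUPVALUE calls and the [Amount in USD] measure reference inside the iterator; both come up in the steps that follow.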
So, where would you go next? How can we start optimizing this code? Throw out ideas. Well, LOOKUPVALUE is an incredibly painful function, so I'd be curious whether there's any way to get rid of it and leverage relationships instead. It's basically the equivalent of a VLOOKUP: it isn't using the defined paths in the model, it's simply looking up table A against table B. Yes, but look at the exchange rate table: it contains only about 700 rows, so it's tiny. Still, you are right that LOOKUPVALUE is not fast, and we are doing it twice. Variables would be another thing: now that I start to see something repeating, could we cache that value in a variable and then reference it? Okay, let's do that. We move all of that into a variable, and then both the "is the exchange rate blank" test and the multiplication use the variable instead of repeating the lookup. Cross your fingers, let's format the code. It works, cool; I'm always scared whenever I touch live code. So we created a variable to store the LOOKUPVALUE result, and if our guess was right we should see an improvement. It was 2.6 seconds; we run it with the variable and it's 2.3. That didn't change much. No, not really.
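That first change, as described, just caches the lookup in a variable so it is computed once per row instead of twice; the row expression inside the SUMX becomes something like this (same hypothetical names as in the sketch above):

    VAR MonthlyRate =
        LOOKUPVALUE (
            'Exchange Rate'[Rate],
            'Exchange Rate'[Date],
                DATE ( YEAR ( Orders[Order Date] ), MONTH ( Orders[Order Date] ), 1 )
        )
    RETURN
        IF (
            ISBLANK ( MonthlyRate ),
            [Amount in USD] * CALCULATE ( AVERAGE ( 'Exchange Rate'[Rate] ), ALL ( 'Date' ) ),
            [Amount in USD] * MonthlyRate
        )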
Besides, keep in mind (it's a good habit I skipped) to look at the metrics of the model, just to have a picture of the size we're talking about: 1.6 million rows in the Orders table. The model is tiny; I would expect milliseconds for any answer. Are there any CallbackDataIDs happening between passes? We do have one. And if you look at the timings, that's half a second for the worst storage engine query: it's computing the "if the rate is blank" logic, so the IF statement is being pushed into a CallbackDataID. The middle one looks like the worst: the scan on line four, with a scan duration of about 610 milliseconds, and it's pulling all these columns from Orders, like order number, amount, order date, delivery date. Are you even using delivery date? Why is it fetching the delivery date, the amount, the order date, and the order number? Who cares about the order number? You will not see the order number anywhere, and that's a very interesting symptom; it's nice that you noticed it. There's another symptom too, and I'll give you a hint: the table contains 1.6 million rows.

The most common place I see this, and you and Marco have both blogged about it many times, is with the FILTER function: filter a column, not an entire table, because the table form pulls in the whole thing. So I'm wondering whether there's a table reference somewhere that forces the engine to fetch everything. You're iterating the Orders table; would it be in the SUMX part, because you're referencing the entire Orders table there? Yes, but that's actually needed: I want to scan the Orders table row by row and return the amount multiplied by the rate, or by the variable if the rate is blank. Well, the ALL function on the Date table returns all the columns of the Date table; does that not matter? No: inside CALCULATE, ALL acts as a modifier, so it doesn't really evaluate its argument, it just removes filters. Who cares about that one.

I like this, because for the people tuning in, this is the first time I'm seeing the code, and it's fun to go on the hunt. Everything seems to reference a particular column, so I'm wondering where it's pulling things that are outside the scope of what's needed for a sum of the orders amount. Usually, I've found, it's when you add filter context through a CALCULATE that extra stuff starts getting pulled in unnecessarily, and the only CALCULATE I see is this one. Right, we only have this CALCULATE, and if you look at it, it does not depend in any way on the current order: it's the average exchange rate over all dates. So we can get rid of it there and compute it outside. We define a variable outside of everything, let's call it AverageXRate, and then use that variable inside. Let's format the code, because I cannot read it otherwise. Now, if that was the problem, I'm computing it outside of everything, no longer inside the iteration over Orders. We were at 2.6, then 2.3; we run it again and it didn't really change. Do we have time? We don't have a lot, and we need to save some for Q&A. You're right, but we don't have a hard stop unless you do, so we're fine to go over a little to finish this; I always build in a half-hour buffer just in case.
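The second change, described a moment ago, hoists the all-time average rate outside the iteration, since it does not depend on the current order at all (again a sketch with the same hypothetical names):

    Amount in EUR :=
    VAR AverageXRate =
        CALCULATE ( AVERAGE ( 'Exchange Rate'[Rate] ), ALL ( 'Date' ) )   -- computed once, not per row
    RETURN
        SUMX (
            Orders,
            VAR MonthlyRate =
                LOOKUPVALUE (
                    'Exchange Rate'[Rate],
                    'Exchange Rate'[Date],
                        DATE ( YEAR ( Orders[Order Date] ), MONTH ( Orders[Order Date] ), 1 )
                )
            RETURN
                IF (
                    ISBLANK ( MonthlyRate ),
                    [Amount in USD] * AverageXRate,
                    [Amount in USD] * MonthlyRate
                )
        )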
Anyway, the symptoms, the signals we need to monitor to find the solution, are basically two. First, this number: the engine is retrieving 1.6 million rows, and it's doing that twice, by the way. So it's fetching the same table twice. Yes, and not only scanning it but returning it: the full table, with all the columns. Quantity and customer key, which we are not using anywhere; the order number, which we are not using anywhere; the amount; the order date. Why on earth should the engine need all those columns that are not used anywhere here? The reason is that somewhere in our code we are placing a filter on all of those columns. Can you spot where?

If anybody in the audience spots it, I'll throw your suggestions up on the screen, because it's fun to play investigator. A lot of people are thinking it's the ALL function, but we've discussed that ALL over the Date table is not the problem; that was already accounted for. And just to confirm, the SUMX over the Orders table is not doing it either. I know that depending on where you apply a filter it can pull in related tables too, not only the columns of the table itself but columns related to it via the joins, so I'll admit this is one of the times where I'm a little stumped about where it's coming from.

And somebody already found it: context transition, on line 21. Here it is: Amount USD is a measure, and a measure is automatically surrounded by CALCULATE, so it's as if we had a CALCULATE right here just because it's a measure. What does that CALCULATE do? It says: compute Amount in USD with a filter on the current row, and the current row, while iterating the Orders table, means all the columns of the Orders table. So this hidden CALCULATE is placing a filter on every column of the table. And if you look at Amount in USD, it just computes the sum of the Orders amount column; but because we're scanning the Orders table, the sum of that column for a single row of Orders is simply that row's amount. There's no point in using a measure inside an iteration unless you actually want the context transition. To make it faster, it's enough to write the Orders amount column there. Simple: it's already going one row at a time, so just use the original column rather than the measure sitting on top of the column. Exactly, also because that SUM is equivalent to a SUMX over Orders of the amount column, so it's another iteration happening inside the first one: an outer iteration over 1.6 million rows, and inside it another iteration that will find only one row but still needs to fetch the entire table.

We run it again, and now that's an improvement: it moved from 2.6 seconds to 242 milliseconds, roughly ten times faster. We can actually make it even faster, but there's an idea I don't want to pass along by accident. Whenever I show this demo the path is different, because I follow your suggestions, and I don't want people to believe that context transition is always bad. In this specific scenario it is bad because we are iterating over Orders, so when that expression was a measure it was taking a huge amount of time, filtering all the columns; but context transition in itself is absolutely useful. The only problem is that you need to reduce the number of iterations.
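The fix itself is a one-line change in the row expression: the measure reference becomes a plain column reference, so there is no hidden CALCULATE and no filter over every column of Orders (same hypothetical names as above):

    -- before: [Amount in USD] triggers a context transition on every row
    IF ( ISBLANK ( MonthlyRate ), [Amount in USD] * AverageXRate, [Amount in USD] * MonthlyRate )

    -- after: a plain column reference, no context transition
    IF ( ISBLANK ( MonthlyRate ), Orders[Amount] * AverageXRate, Orders[Amount] * MonthlyRate )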
An interesting question on that, similar to the one from earlier: any time you reference a table it can pull many columns from that table, so could you use VALUES over a single column inside the SUMX instead of the whole table, without any extra cost? That would be dangerous. The optimizer does a great job of reducing the number of columns retrieved, unless you have a context transition, as was the case here. If you use VALUES over a set of columns, what you obtain are the distinct values, and each combination can be repeated multiple times, so you would also need a count of how many times each value appears; otherwise the algorithm is completely different. Just picture the difference between a SUMX over Customer and a SUMX over VALUES of Customer[Continent]: the continent returns at most five rows, whereas Customer might return millions, and the algorithm has to take that into account. Right, you need the original row context from that table and you don't want to accidentally duplicate or lose anything. And at that point, if you need VALUES or a summarized table, you have to add back enough columns for the calculation, which might actually make it slower than just iterating the original Orders table.

The answer is always "it depends". Look at this code: is it a smart idea to iterate over Orders? It's very natural, everybody does it at the beginning, but Orders contains 1.6 million rows, and does the calculation actually depend on all those rows? We are searching for the exchange rate at the beginning of the month. So far it's pretty much just the order amount and the date, multiplied by the rate. But if you think about it, we don't even need the full date, because we only ever look up the first of the month. With a year of data I might have 1.6 million rows, but I can compress them to only 12: there's no point in computing month, year, and amount row by row, order by order, when I can reduce the grain from 1.6 million rows to the grain I actually need, likely 12 or 100, but a much smaller number. So instead of iterating over Orders, let me summarize Orders. Let me check whether the Date table has a year-month column; it does, but otherwise, you could use STARTOFMONTH off the original date, which would still give you one row per month. Right, but here I can simply group Orders by year and month, and that's almost all I need, because from that I can build the first day of the month. Do I also need the currency key for the lookup? No, that lookup always goes from US dollars to euro, so I don't need it; I just need the amount. So I add a column, say the amount in USD, and you see I'm reviving the context transition again, but now I'm calling it 12 times per year, which is much, much smaller. The result of this ADDCOLUMNS is a table containing the year, the month, and the amount in US dollars. For each of those rows I have the year and the month; let's keep the Date year and the Date month number as two separate columns, it's easier. Then I compute the exchange rate using that year and that month number as the lookup date, and the remaining part of the formula stays the same, except that the orders amount now lives in the new column I created, @Amount.
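Putting the granularity idea together, the final shape of the measure is roughly this (still with hypothetical names; the real demo groups by year and month columns on the Date table):

    Amount in EUR :=
    VAR AverageXRate =
        CALCULATE ( AVERAGE ( 'Exchange Rate'[Rate] ), ALL ( 'Date' ) )
    VAR OrdersByMonth =
        ADDCOLUMNS (
            SUMMARIZE ( Orders, 'Date'[Year], 'Date'[Month Number] ),
            "@Amount", [Amount in USD]   -- context transition, but only ~12 times per year
        )
    RETURN
        SUMX (
            OrdersByMonth,
            VAR MonthlyRate =
                LOOKUPVALUE (
                    'Exchange Rate'[Rate],
                    'Exchange Rate'[Date],
                        DATE ( 'Date'[Year], 'Date'[Month Number], 1 )
                )
            RETURN
                IF ( ISBLANK ( MonthlyRate ), [@Amount] * AverageXRate, [@Amount] * MonthlyRate )
        )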
Fingers crossed, let's format the code. Okay, we have an error somewhere; now the task is finding it. There it is, a missing comma, and that was likely the whole error. Now you can see that we completely changed the algorithm: instead of iterating over Orders, we iterate over the year and month of the orders, because we recognized that the calculation only depends on the year and the month. Instead of 1.6 million iterations we do 12 per year; 12 times we use the context transition, which is fine; we compute the exchange rate, which depends on the year and the month and always falls on the first day of the month; and then we multiply the amount computed per month by the rate. We started at 2.6 seconds and our best so far was 242 milliseconds, so let's see what happens now. It should be a lot faster. Twenty milliseconds. That is good: we started from 2.6 seconds and ended at 20 milliseconds, and that is actually the sweet spot. If you are scanning a table with 1.6 million rows and you compute the result in 20 milliseconds, that is about the best you can obtain.

It also shows the impact of granularity: the level, the context at which you calculate something can make a huge difference in performance while producing the same number. I think this was a nice walkthrough, and it hit three really good optimization points. First, declaring variables; for people who know SQL, I like to compare them to a temp table: you cache the data, calculate it once, and reference it as many times as needed without recalculating. Second, context transition: using a measure inside an iteration implicitly adds a CALCULATE, which was complicating the row-by-row iteration over this table. And third, the idea that there's no reason to calculate every one of 1.6 million rows when the currency rate only changes once a month; pre-aggregate the sales amount per month, multiply by the exchange rate, and sum that up. We went from 2.6 seconds, down to a couple of hundred milliseconds, down to 20 milliseconds. That's super awesome. Besides, to me the most important thing is that it's incredibly fun: you start playing with it, you see the numbers change, and I love it.

And as you and Marco have both pointed out, that new waterfall chart, the formula engine versus storage engine breakdown, whatever it ends up being called, is going to be really helpful. I'm a visual learner, and knowing the timings matters ("500 milliseconds is slower than 20"), but actually seeing at what point which engine is doing what is honestly one of the bigger upgrades I've seen. It excites me to be able to tell at a glance that the scan on line six is eating up 80 percent of my storage engine time, so I should probably look at that table.
Then you can dig into the code to see why those columns are being called upon. It's putting on your detective hat and walking through it, and I think this upcoming add-on, and DAX Studio in general, really help you reverse-engineer how you wrote a measure versus how it's actually being computed. So this was a really cool example, well done. And no, we're not going to try to push it faster than 20 milliseconds.

One final point on that: I believe I've heard you and many others mention that anything less than about 20 milliseconds is basically noise for the CPU and RAM; it can go up and down, and there's not much point optimizing past that, because the computer itself has variability in its timings.

Yes: run the query 10 times and a difference of a few milliseconds doesn't mean anything at all. Besides, there's also human perception: going from 50 milliseconds to 20 milliseconds, you are not actually changing anything. The low-hanging fruit is the measures that take seconds and that you move down to tens of milliseconds; that is what matters, because it gives the user the perception that the code is fast and the report is "clicky-clicky, draggy-droppy" (Christian Wade created that phrase, it's beautiful). Chasing single milliseconds is useless.

Let me bring up some of the questions. First, I just want to show this wonderful message from Billy Bob, a nice thank-you to both you and Marco for the amount of learning you two have provided him over the last few years; he wanted to share that with you. I do have a few questions queued up, so I'll bring them up in the order they were asked during the session.

Question one: they're creating a Top N and Others filter using calculation groups; however, they've also created category and subcategory selections using field parameters, and now the SELECTEDVALUE used to check for "Others" is broken. I can actually answer this one: field parameters do not like SELECTEDVALUE; you have to use MAX or MIN. SELECTEDVALUE will throw an error if you try it against a field parameter column, because of the extra group-by.

If SELECTEDVALUE is the only problem, then good, because Top N and Others is one of the most popular videos we made; it's a very popular calculation, and it's something that I really believe Microsoft should implement at the visual level. Everybody wants a top 10 and then a row with the others; if they added filtering by top N, why not add the option of having the Others row? It's fun to solve the problem with DAX, but you could spend that effort on better things than solving a problem the visual could handle for you.

Honestly, at a visual level, as a visualization feature, I do think there should be an option to just check a box and get an Others row, as you're saying. I've built that for customers, but it requires a whole lot of modeling to do it. It's a feature that would be really nice to have, for sure.
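For reference, the workaround mentioned for the field-parameter question looks roughly like the sketch below. It is only the substitution, not the full Top N and Others pattern, and the parameter table ('Parameter'), its column, and the "Others" label are hypothetical names.

-- SELECTEDVALUE over a field-parameter column can error out because of the extra
-- group-by columns the field parameter carries; MAX (or MIN) over the single
-- visible label works for the "is this the Others row?" test.
Is Others Row :=
VAR CurrentLabel = MAX ( 'Parameter'[Parameter] )   -- instead of SELECTEDVALUE ( 'Parameter'[Parameter] )
RETURN
    CurrentLabel = "Others"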
Let's pull up the next question: what is your best and simple explanation for apply semantics, the topic behind the DAX window functions? Okay, that's a very simple question: I don't have one. Apply semantics is the elephant in the room, because it's a very complex topic and I don't completely understand how it works right now. I have written my best guesses, and what I have discovered so far, in the whitepaper we published on SQLBI+, but it's not yet complete and it's not perfect: I know the explanation we are giving does not cover all the possible aspects, and there are some pieces of DAX code that I do not entirely understand. As you can guess, it's incredibly hard to try to understand how a feature works while it is still in preview, because any result might be wrong because of a bug, or might be wrong because you are not understanding exactly how it works.

The idea of apply semantics is that you can have a function that returns a table with only one row and, instead of returning only that one row, the function is executed multiple times, once for each of the current rows that exist in that table, and that happens in a completely automatic way by inspecting both the row context and the filter context. So apply semantics is the first scenario where the row context and the filter context are used at the same time: you might have filters in the row context, which is not something that should exist at all, or you can have a current row defined by the filter context. All of this gets mixed together by apply semantics. Apply semantics is being introduced with the window functions, and then we will see: it might be the case that everybody loves apply semantics and DAX evolves with it over time, or it might completely mess up all the calculations and people hate it, in which case apply semantics stays where it is and we try to avoid it. We will provide an explanation, of course, as soon as we know it, but so far I do not have a clear definition of apply semantics. Jeffrey tried to do that in his blog posts; I honestly read them, but to say that I understood everything — no, it takes some time to check all the details.

I'll see if I can pull up his article while I queue up the next question. When I read Jeffrey's posts my bar is even lower: I'll get some of it, but there are still parts I don't quite get. And any time either you or Marco writes a post, it's one of those where I can't just read it in five seconds; I have to take my time and walk through each piece, because you both provide such good, lengthy explanations and deep dives into whatever you take the time to write about. I don't think you even covered the window functions until about four to six weeks after they came out, because, I'm guessing, you wanted to take enough time, as you described, to thoroughly understand them and understand their performance.

Something that surprised me is that as soon as the window functions hit the market, everybody was greeting them as the solution to all performance issues. They are not: they are a tool; they are powerful; they can be used in some scenarios. Sometimes they are faster, and that is good; sometimes they are slower; sometimes they do not solve the problem. So I wanted to spend time understanding them, and believe me, it was hard just to wait, because everybody was writing about window functions and I was reading a lot of articles and thinking "yes, but that's not entirely correct" or "that part is wrong". I think waiting is a good thing: let things settle down, digest slowly, and then publish something that actually makes sense.
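As a point of reference for the window-function discussion, this is the kind of calculation they enable: a previous-period value without explicit time-intelligence functions. It is only a sketch — these functions were still in preview at the time of the recording, the column and measure names ('Date'[Year Month Number], [Sales Amount]) are hypothetical, and the exact signature should be checked against current documentation.

Previous Month Sales :=
CALCULATE (
    [Sales Amount],
    OFFSET (
        -1,                                          -- one "row" back in the ordered set of months
        ALLSELECTED ( 'Date'[Year Month Number] ),   -- the set of months to slide over
        ORDERBY ( 'Date'[Year Month Number], ASC )
    )
)

The part relevant to the apply-semantics discussion is that OFFSET works out the "current" month from the surrounding context rather than from an explicit argument, and when there are many current rows it is re-evaluated for each of them automatically.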
Well, they also came out in a weird release, because I think some of the functions appeared last fall when IntelliSense didn't even recognize them. I don't think they were actually supposed to be public yet, but people stumbled across them and started writing about them even with zero documentation. That's one of the reasons I avoided them to begin with: if I'm going to discuss anything, I'd like to be able to point to a Microsoft link, "here is more information", and since nothing existed online yet, I didn't want people to go and implement something that really wasn't ready for production, so to speak. Sometimes you do want to wait, to make sure that you're not only explaining something properly but that you also have the right resources to point people to so they can implement it on their own.

Yeah, that makes a lot of sense. I have a good question from Ricardo: how many hours do you work in a normal week? I honestly have no idea, but trust me, if you were watching me work you would be surprised at how much less I work than it looks, because most of my time is actually spent walking around my house, or looking at the ceiling, thinking and trying to figure out a way to describe something. When I'm writing, I would say that no more than 10 percent of my time is spent actually writing; the rest is spent thinking about what to write, finding the examples, finding whatever is needed. And I'm incredibly lucky, because my job is my primary passion; it's exactly what I want to do, so I don't consider it working. I wake up when my brain wakes up and I go to sleep when it's late and I'm tired; sometimes I work until 2 AM, sometimes I stop working at 2 PM just because I'm tired and don't want to work anymore. So I don't have a clear answer; I really don't know.

Well, if you love what you do, it's hard to define what truly counts as work and what doesn't. And like you, with the pacing and walking around, just giving yourself time to think: I've had times where I know that if I keep staring at my computer I can feel my brain slowing down, and when I go for a walk or a run, or exercise, part of my brain is still dedicated to thinking about the problem, and I'll have that epiphany moment, while I'm in a more relaxed state, where the idea just comes up, and then I come back and work. Do I call that working? I wasn't "working", but my brain was still processing something I needed to do. A lot of those moments of realization don't come when I'm at my computer thinking about the problem; they come when I'm doing other stuff and my brain is multitasking in the background, trying to find a solution.

Yes, and the problem happens when you are having dinner with your girlfriend and she's chatting about something and you suddenly realize: oh, now I can solve
that problem, let me write it down. She will not like it, of course, but it happens.

Yeah: "wait, I was listening, but part of my brain was doing something else." I at least have the excuse of having ADHD, so my brain is usually doing a couple of things at once even when I'm not aware of it. I've been in that scenario where, not necessarily that I was thinking about something, but I will just randomly remember that I need to do something: give me one second, I've got to make a note to do this later, because otherwise I'm going to forget it again for another week. I live and die by a task list on my phone; otherwise I would forget to do a thousand things.

But digressions aside, I want to pull up a couple of other questions as we're wrapping up. A good one from Greggy B: have you noticed changes in the way you write DAX or design models over the course of writing the books, and do you have any examples of a pattern you changed because of it?

I honestly don't know. I definitely changed the way I build models, not because of writing books but over time: I think I was much more creative at the beginning, and over the years I discovered that just following the rules that somebody studied before you is the easiest way to avoid a lot of mistakes. In writing DAX code, no, I don't think I changed much from the very beginning; although at the beginning we didn't have variables, and I tended to do everything in one gigantic piece of DAX code, whereas now with variables you can split it into simple steps. But again, that's not because of writing books. What writing books does is force you to explain things in the simplest way you can figure out — which is probably still not the simplest way there is, but you struggle a lot to find a good way to explain things simply, and that makes you write DAX code that is simpler. I no longer like the "genius moment", where you write code that nobody can understand so it looks like you're a genius. Actually, no: you're doing it wrong, because for something to be good it needs to be understandable, and if it isn't, the next person coming after you, inheriting your job, will totally hate you, because they will not be able to understand what you wanted to write. So just keep it as simple as possible.

One anecdote on that, since you mentioned that more lines of code is not better and more complex is not better: at the end of the day you want it fast, but if you can have fast with simple code, that's the perfect scenario. It reminds me of the recent story — and there's a whole separate conversation about Musk and Twitter — that after the takeover he was doing a performance review of employees partially based on how many lines of code they had written, not on how good the code was. In any language, the number of lines of code you write is not a measure of how good a coder you are; if anything, somebody who writes less code that still works should be rated better than somebody who writes more. I remember reading that and thinking: does this person understand what coding means? You want simple and good, not long and complex.
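As a small illustration of the "one gigantic expression versus simple steps" point, here is a sketch with hypothetical Contoso-style names (Sales[Quantity], Sales[Net Price], Sales[Unit Cost]):

-- Everything inlined in one expression:
Margin % (dense) :=
DIVIDE (
    SUMX ( Sales, Sales[Quantity] * ( Sales[Net Price] - Sales[Unit Cost] ) ),
    SUMX ( Sales, Sales[Quantity] * Sales[Net Price] )
)

-- The same logic split into named steps with variables, each evaluated at most once:
Margin % (steps) :=
VAR SalesAmount = SUMX ( Sales, Sales[Quantity] * Sales[Net Price] )
VAR TotalCost   = SUMX ( Sales, Sales[Quantity] * Sales[Unit Cost] )
VAR Margin      = SalesAmount - TotalCost
RETURN
    DIVIDE ( Margin, SalesAmount )

The second version is no slower, and the next person reading it can follow each step by name.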
Let me pull up one more, from Matthias, and I think we'll use it to wrap up: do you think the development of DAX has slowed down in recent years, or did things like INDEX and the other window functions help kick-start something he might have missed?

I mean, Power BI is not only about DAX; Power BI is a complex system that consists of a lot of different parts that need to be engineered together. Speaking specifically about DAX, I think the last really important change was the advent of variables, and that was many, many years ago. Since then we have had a lot of new functions, but nothing really fundamental. We had calculation groups, and that was another important change, but the language is still the same.

There was one more recent one too, I think: the basic filters, the kinds of filter conditions you can add to CALCULATE without having to use the FILTER function. Twelve to sixteen months ago they made that a bit more robust in terms of the conditions you can put in there, so they did make filtering simpler without having to call on more complex functions. It's nowhere near as big as variables, but I do think it helps people who are beginner to intermediate in DAX write their CALCULATE expressions more cleanly.

Yes, I agree, but I don't have the feeling that development has slowed down. They spent a lot of time optimizing, making things better, making them work in a nicer way, removing some of the issues that were there, making it a more sound language. There are still details that I would really love to see in DAX: I would like to see measures that can return tables — that, to me, is extremely important — and I would like to see functions, the ability to define a function in DAX that can be called and parameterized. These are important changes, in my opinion, to make DAX a better language. But I don't think it should grow every year, or even every month, with new functionality, because as much as I like technology, I always have the feeling that we're running too fast and things are changing too frequently. People don't have time to adapt and to use the tools they have; they always have the idea that the next version will be better, but there's a lot you can do today with what you have: just learn to use it, learn the details, become an expert in it, and then the new features will be useful. DAX as of today is already really complex to learn, really hard, and extremely, incredibly powerful. I don't personally feel the need for new functionality in DAX apart from measures returning tables and functions; those would be my favourite things.

At this point — I think it was from you two that I learned the term "syntactic sugar" — the language can already compute every calculation you would ever need; there's nothing left that can't be computed with the right formula. It's about simplifying the formulas or, as in your case, adding functions and measures-as-tables, which would be really useful and would let you declare those things once. The language has been out for nearly 10 years, and almost everything that needed to be added is already there; now it's about making it easier and simpler to write. And the team is still working super hard on other things: the DAX team was heavily involved in that super-long-named feature, "DirectQuery for Analysis Services and Power BI datasets" — composite models, once again. I like to use the full Microsoft name because it's such an annoyingly long title.
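For completeness, the "basic filters" mentioned above are the boolean shorthand inside CALCULATE, shown here with hypothetical names ('Product'[Color], [Sales Amount]):

-- Shorthand predicate filter:
Red Sales :=
CALCULATE ( [Sales Amount], 'Product'[Color] = "Red" )

-- What the shorthand expands to under the hood:
Red Sales (explicit) :=
CALCULATE (
    [Sales Amount],
    FILTER ( ALL ( 'Product'[Color] ), 'Product'[Color] = "Red" )
)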
Sorry, I really do need to go: I only have five minutes left in my schedule.

Okay, perfect, then I'll use this as a good point to finish up. The conversation has been great; I loved the demos, which walked through a lot of optimization — the variables, the context transition, and also the level of granularity at which to calculate — so a fantastic example today. Thanks, everyone, for tuning in: we had about 85 viewers at the peak of the stream, a good healthy number. Alberto, thank you so much for joining today; enjoy the rest of your Friday night and your weekend, and everyone else, enjoy your weekend as well. Enjoy DAX! Exactly.

Thank you so much for watching. Please consider hitting that like and subscribe button, and if you want to help support this channel, take a look at our channel memberships or our merchandise store for cool swag. Last but not least, please consider sharing this video on your social media platform of choice to help our channel grow. Until next time!
Info
Channel: Havens Consulting
Views: 5,731
Keywords: Power BI, PowerBI, PBI, DAX, Data Modeling, Visualizations, Tips & Tricks, Power Platform, Power Query, Power BI for Beginners, Power BI Training, Power BI Desktop, Power BI Best Practices, Power BI Relationships, Power BI Dashboard, Power BI Tutorial, Power Query Excel, Power BI Versus Excel, Power Query Tutorial, Power Query Functions, Power Query Parameters, Power Query Editor, Power BI Service
Id: DSiRHOcI-es
Length: 86min 41sec (5201 seconds)
Published: Fri Feb 10 2023