Profiling Angular Applications

Captions
Hello everyone, my name is Minko Gechev and I'm working on Angular at Google. Today I want to share with you a couple of insights on how you can optimize your application's runtime performance. First we're going to look into how to diagnose a few common performance problems using Chrome DevTools; I'll explain what flame charts are and how we can use them to find performance pitfalls. As the next step, we're going to discuss how to optimize our apps and make them faster. Finally, we're going to look into the JavaScript virtual machine runtime and explore how it can impact our app's performance.

I've been doing a lot of work in this space over the past couple of years, and often, at events or on the internet, folks ask me: how can I make my application run faster? Well, the high-level answer to this question is pretty simple: just do less. This advice is valid not only in the context of Angular but for any framework or programming language out there. To make our apps run faster, we should just do fewer things. At ng-conf 2018 I gave the talk "Optimizing an Angular Application", where I explained several practices that can make Angular do less and improve our app's performance. Things haven't changed much over the past few years and these practices are still valid; in fact, I would recommend watching that talk to get more value out of this video. We can memoize calculations using pure pipes or by storing the results of calculations, we can skip change detection using OnPush or by running code outside of the Angular zone, and clearly we can render fewer components using virtual scrolling or pagination.
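As a rough illustration of the first two practices, here is a minimal sketch of a pure pipe combined with an OnPush component. The names (TotalPipe, ScoreWidgetComponent) are hypothetical and not taken from the dashboard shown in the video; both would still need to be declared in an NgModule.

```typescript
import { ChangeDetectionStrategy, Component, Input, Pipe, PipeTransform } from '@angular/core';

// A pure pipe: Angular re-runs `transform` only when the input reference changes,
// so the potentially expensive calculation is effectively memoized per input.
@Pipe({ name: 'total', pure: true })
export class TotalPipe implements PipeTransform {
  transform(values: number[]): number {
    return values.reduce((sum, v) => sum + v, 0);
  }
}

// An OnPush component: change detection runs only when an @Input reference changes,
// an event originates from this template, or an async pipe it uses emits.
@Component({
  selector: 'app-score-widget',
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<span>{{ values | total }}</span>`,
})
export class ScoreWidgetComponent {
  @Input() values: number[] = [];
}
```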
In this video, we're going to classify common performance problems into several categories, learn how to recognize them using the Chrome DevTools profiler, and apply the practices we already know to speed up our apps.

Let us first look into how we can profile an application. For this video I have built this dashboard: we have a few different charts, a widget showing an overall score for the data, a table, and at the bottom just a bunch of links. To profile this app, we need to keep in mind the following three essential preconditions. When building the project, we need to ensure the CLI is using its production environment. Running a production build is required because otherwise the CLI will not remove code that Angular uses only during development to guard us against common mistakes, such as circular bindings, for example. Next, we need to make sure we're not mangling the output of the CLI. This precondition is not as critical as the first one, but ensuring we have readable method and property names will help us identify the cause of the issues we find. Finally, we need to make sure we're profiling the app without any browser extensions enabled. Extensions can add extra noise to the profiler and even skew the results if they plug into the app's execution lifecycle; the easiest way to avoid this is to open the app in an incognito window.

Alright, now let me prepare our dashboard for debugging, making sure it follows these three preconditions. We can disable mangling by setting the NG_BUILD_MANGLE environment variable to false; after that, we need to invoke ng build with --prod to build the app in the production environment. Look at this beautiful output from the Angular CLI. Notice we are exceeding the maximum bundle budget here; that is because we are using strict mode, so we have lower thresholds, and we also disabled mangling, so our bundles will be larger because of that. Disabling mangling can negatively impact the profiler's output because the JavaScript virtual machine needs to parse more code, but it shouldn't skew the metrics dramatically.

Next, we can go to the dist directory and start a static file server. I really love using serve, since it is aware of client-side routing, and when it starts a server it automatically puts the URL of the app into the clipboard. Now, to preview the app, we can open an incognito Chrome window and paste the URL in the address bar. To profile the application, first go to the Performance tab and after that click on the record button. We can start interacting with the app to capture application usage scenarios in the profiler. Once we are ready, we can stop the profiling and preview the flame chart. Here it is important to notice that Chrome DevTools shows us the estimated frame rate over time; see how, where the rate is lower, there is a red line on top. DevTools follows the RAIL model: it indicates risks that the frame rate drops to a level that would not allow the UI to respond to user interaction within 50 milliseconds.

As the next step, let us look at what flame graphs are and how we can read them. Here is an example of a flame graph. It visualizes the execution of a program over some period of time; each rectangle's size is proportional to the number of times the corresponding call ended up being part of the call stack during the profiler's sampling. Brendan Gregg, a performance engineer at Netflix, originally developed this method of visualizing a profiler's output. Alright, so now let us trace the execution of a program and sample it to preview it with a flame graph, to get a better understanding of this visualization. Here we have a few functions: a, which calls b and then a1; b, which does some work and right after that calls d; d, which calls e; and the functions a1 and e, which just do some work. At the beginning we'll first call the function a; when the profiler takes a sample, it will find a in the call stack and record this fact. After that, a will call b, so we'll have a and b on the call stack in the next sample. Continuing, we'll get a, b, and d, and at the following sample d will invoke e. Once that execution completes, e, d, and b come off the call stack and a invokes a1, so the profiler will capture a1 on the call stack. The primary purpose of a flame graph is to capture how many samples a given function occurred in; since this could potentially be a multi-threaded environment, the order of execution is not something we can express accurately with just a single graph. To improve the visualization, we can sort the samples in alphabetical order and merge the rectangles corresponding to a specific function call into one. We can see that we spent a decent amount of time in b, so there might be a place for optimization there.

Well, enough about flame graphs; now let us talk about flame charts, which are something different. When the Chrome DevTools team worked on their profiler, they decided to reuse the flame graph visualization because they found it particularly useful. However, since their main focus was the main JavaScript thread, they changed the format a little bit to also show the execution over time. Let us look into the flame chart from the profiling we did just a few minutes ago. Notice the calls from the Angular runtime, for example refreshComponent, refreshView, etc. At the bottom we can find the execution of the component's template function. When we select this call and drag the bottom bar up, we can see a link to the template function's exact location within the formatted source file; clicking on it will take us directly to the right spot, where we can find all the Ivy instructions rendering this template. Clicking on the Bottom-Up tab, we can preview all the functions the template function called and see how much time we spent in them, which corresponds to the number of samples the profiler captured them in.

Now let us use this knowledge to understand what triggers change detection and find redundant calls. Based on the many apps I've profiled, some of the most frequent redundant change detection triggers come from setTimeout, setInterval, and requestAnimationFrame. Often these calls are in third-party libraries, so it is not immediately apparent that they occurred. Notice at the bottom here, before we even get into the Angular runtime, there is a rectangle that says "Event: click". This event is what triggered the change detection cycle; it maps directly to our click on the hamburger menu toggling the side navigation. Scrolling down, we can see the detectChanges call that will later indirectly invoke the components' template functions. Zooming out, however, notice that we have many similar change detection calls, many more than the clicks we did. Zooming in, we can see a timer event; judging by the equal intervals at which we rerun change detection here, this seems like a leaked setInterval. If this behavior was not intended, we can just wrap the invocation inside NgZone.runOutsideAngular to remove the redundant change detection calls and optimize our app.
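If the leaked timer turns out to be in our own code rather than in a third-party library, the fix could look roughly like the following sketch; the component name and the one-second polling callback are made up for illustration.

```typescript
import { Component, NgZone, OnDestroy, OnInit } from '@angular/core';

@Component({
  selector: 'app-poller',
  template: '',
})
export class PollerComponent implements OnInit, OnDestroy {
  private intervalId?: number;

  constructor(private ngZone: NgZone) {}

  ngOnInit(): void {
    // Run the interval outside the Angular zone so each tick no longer
    // triggers a change detection cycle for the whole application.
    this.ngZone.runOutsideAngular(() => {
      this.intervalId = window.setInterval(() => this.poll(), 1000);
    });
  }

  private poll(): void {
    // ...do the periodic work; if the UI must be updated afterwards,
    // re-enter the zone explicitly with this.ngZone.run(() => { ... }).
  }

  ngOnDestroy(): void {
    window.clearInterval(this.intervalId);
  }
}
```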
Okay, as a next step, let us look into how we can detect long calls. Long calls can be particularly harmful to our application's performance, especially if they are in templates or in lifecycle hooks that Angular invokes during change detection. Going back to the flame chart, we can see that we have a getter called aggregate at the bottom of one of the calls. Clicking on the Bottom-Up tab, we can find this piece of code's exact location in the Sources tab. To see whether we're spending significant time in the aggregate getter as part of change detection, we can just go back to the top of the flame chart, click on any of the calls there, and explore the Bottom-Up tab again. Here we can see that we have spent over 50 percent of the execution time in the aggregate getter alone. Well, that is a lot of time.

Here we have a couple of options to optimize the code. Clearly, we can use memoization, for example; since the call occurs in the template, we can even use a pure pipe. All of these approaches are definitely valid. At the same time, however, the call seems to be quite expensive, so even if we apply memoization or pure pipes, we will still have to perform the calculation at least once, which will hurt the initial rendering performance of our app. What we could do instead is move the calculation into a web worker. Let us go to the terminal and run ng generate web-worker, specifying the worker's name. Now open the worker file and let us replace its content; here I'm using a snippet, but let me quickly go through the code. We declare a message listener, and in the callback we get a message id and an array over which we're going to perform the calculation. We use the id to ensure we return the result associated with the correct worker message. At the bottom of the function we just post the result back, associating it with the message id we received earlier. To use the worker, I'm going to create a very simple service; this way we can quickly mock it and cache different calls. Here we first instantiate the worker, after that we add an event listener to process the response with the calculated result, and at the bottom we send a message to the worker, first ensuring that there are no other pending calls. Finally, we can just update the getter to use the service that communicates with the worker: first we inject it into the constructor of the home component, then we invoke its calculate method, passing the required parameters. If we get a number, we just return it; otherwise we return the string "Calculating", since, well, this is an asynchronous calculation. Here we rely on the fact that Angular, via zone.js, will run change detection when the microtask queue of the browser is empty; this way the aggregate getter will return the numeric value during a later change detection call, and we make sure that the view stays in a consistent state. We can now preview the results: notice that we get a "Calculating" label for a bit, until it changes to the computed result in just a few milliseconds.
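What follows is my own rough reconstruction of that worker-plus-service setup, not the exact code from the video. The file and class names (calculation.worker.ts, CalculationService, HomeComponent) and the shape of the message payload are assumptions, and the worker constructor syntax depends on the CLI version you use.

```typescript
// calculation.worker.ts (scaffolded with `ng generate web-worker calculation`)
/// <reference lib="webworker" />

addEventListener('message', ({ data }) => {
  const { id, values } = data as { id: number; values: number[] };
  // The expensive aggregation now runs off the main thread.
  const result = values.reduce((sum, v) => sum + v, 0);
  // Post the result back, tagged with the id of the message that requested it.
  postMessage({ id, result });
});

// calculation.service.ts
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class CalculationService {
  // Older CLI versions generate `new Worker('./calculation.worker', { type: 'module' })` instead.
  private worker = new Worker(new URL('./calculation.worker', import.meta.url));
  private result?: number;
  private pendingId?: number;
  private nextId = 0;

  constructor() {
    // The listener is registered inside the Angular zone, so when the reply arrives
    // a change detection cycle runs and the getter below re-evaluates.
    this.worker.addEventListener('message', ({ data }) => {
      if (data.id === this.pendingId) {
        this.result = data.result;
        this.pendingId = undefined;
      }
    });
  }

  // Starts the calculation on the first call; returns undefined until the worker replies.
  calculate(values: number[]): number | undefined {
    if (this.result === undefined && this.pendingId === undefined) {
      this.pendingId = this.nextId++;
      this.worker.postMessage({ id: this.pendingId, values });
    }
    return this.result;
  }
}

// home.component.ts (only the part that changes)
import { Component } from '@angular/core';
import { CalculationService } from './calculation.service';

@Component({ selector: 'app-home', template: `{{ aggregate }}` })
export class HomeComponent {
  data: number[] = [];

  constructor(private calculationService: CalculationService) {}

  get aggregate(): number | string {
    const value = this.calculationService.calculate(this.data);
    return value ?? 'Calculating';
  }
}
```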
Let us now look into the final pattern that we're going to describe in today's video. In this scenario, we have a really large component tree with many cheap calculations, for example very simple templates and lifecycle hooks without any heavy computation. Here is one such flame chart: we can see that there is still a frame drop that can impact the user experience, but most calls here take less than one millisecond. So what could we do? When Angular runs the app's change detection, it will start from the parent component and check its children. It is also essential to notice that, depending on the change detection strategy, components using OnPush can be cheaper to check than others: having a parent component with many children using OnPush can be relatively cheap, as long as no change in the children's inputs triggers change detection. In contrast, if many children are using the default change detection strategy, the execution can be much slower. A refactoring we can use here to improve the performance is to create a new parent component that uses OnPush and move as many of the components using the default change detection strategy as possible under it as children. This way we prevent change detection from running in entire component subtrees and get faster execution, since we're going to do less. Keep in mind, however, that this brings improvements during change detection, but not necessarily at initial rendering: Angular will still have to render all the components, and the more components we have, well, the slower the rendering will be. The way to fix this is to render fewer components. Virtual scrolling is one way to achieve this: if we have thousands of items in a list, virtual scrolling can help us render fewer components. Pagination is clearly another alternative. A more advanced strategy is implementing on-demand rendering depending on what is currently visible in the viewport; for that purpose, we can use the Intersection Observer API.
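To make the Intersection Observer idea concrete, here is one possible sketch of my own, not code from the video: a small attribute directive, with a made-up appVisible selector, that emits once its host element scrolls into view so a heavy component can be guarded behind an *ngIf.

```typescript
import { Directive, ElementRef, EventEmitter, NgZone, OnDestroy, OnInit, Output } from '@angular/core';

// Emits a single event when the host element first becomes visible in the viewport.
@Directive({ selector: '[appVisible]' })
export class VisibleDirective implements OnInit, OnDestroy {
  @Output() appVisible = new EventEmitter<void>();
  private observer?: IntersectionObserver;

  constructor(private host: ElementRef<HTMLElement>, private ngZone: NgZone) {}

  ngOnInit(): void {
    this.observer = new IntersectionObserver(entries => {
      if (entries.some(entry => entry.isIntersecting)) {
        // Re-enter the Angular zone so the emission reliably triggers change detection.
        this.ngZone.run(() => this.appVisible.emit());
        this.observer?.disconnect();
      }
    });
    this.observer.observe(this.host.nativeElement);
  }

  ngOnDestroy(): void {
    this.observer?.disconnect();
  }
}
```

A placeholder such as `<div appVisible (appVisible)="showTable = true"></div>` can then stand in for an expensive widget, which is rendered with `*ngIf="showTable"` only once the placeholder becomes visible.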
Well, the chances are that you'll be able to speed up your application's runtime performance by following the practices we already mentioned, especially during initial rendering. However, there are occasions when the JavaScript virtual machine runtime can bring some extra weight and make things more difficult. Instead of interpreting all the source code we provide, the JavaScript virtual machine compiles it to native code to improve performance; this technique is known as just-in-time compilation, or JIT. JIT often relies on assumptions about the source code, and when these assumptions turn out to be incorrect, the VM needs to deoptimize the code. Well, we have optimized the internals of Angular for such situations, but JIT on its own can bring extra cost during execution, especially for cold code that hasn't been compiled yet.

Now let us visualize this in practice. To do that, we need to enable an experimental setting in Chrome DevTools: go to the gear icon, select Experiments, and enable "Timeline: V8 Runtime Call Stats on Timeline". Enabling the setting will require a restart of DevTools. Now, when we go to the Performance tab and profile the app, we're going to see something interesting. Let us zoom in on the first part of the timeline: when we magnify further, we see many Compile and Parse calls in the flame chart. These are all places where the JavaScript VM compiles code during execution. Until JIT happens, some functions can take 5x or even 10x the time they will take once the JavaScript virtual machine compiles them. We can see that when we move towards the end of the timeline: notice how we have almost zero Compile calls and all the functions take much less time. Here is one Compile call later on; because the JavaScript virtual machine performs JIT on demand and this function hasn't been called in the past, it needs to be compiled right here.

Well, that was pretty much everything I had for today. I hope this presentation clarified what's happening under the hood of your app's runtime and how you can diagnose typical performance issues. We explained three main patterns: identifying redundant change detection triggers, detecting and optimizing expensive calls using web workers, and refactoring applications with large component hierarchies. In the end, we peeked into the JavaScript virtual machine runtime and saw how function calls can be much more expensive before the JavaScript virtual machine compiles them. Thank you very much for watching this video, see you next time, and happy coding.
Info
Channel: Angular
Views: 25,107
Rating: 4.9672132 out of 5
Id: FjyX_hkscII
Length: 19min 33sec (1173 seconds)
Published: Wed Dec 09 2020