[MUSIC PLAYING] TOMAS MLYNARIC: Hello
everyone, and welcome to the Practical
Performance Problem-Solving in Jetpack Compose workshop. I'm Tomas Mlynaric, a
developer relations engineer, and I'm joined by my
colleague Ben Trengrove. BEN TRENGROVE: Hello. Thanks for staying. We saved the best
for last, I promise. So in this workshop,
we've got a sample app. We've got some
performance issues in it. And we're going to use
a scientific approach to first measure. Then we'll debug
what's going on. And then we're going
to optimize and fix those performance issues. And just to call out, this is a
slightly more advanced Codelab that we're following
where we don't explain the basics of Compose. So if anything seems
a little weird, we've got some extra Codelabs
at the end that take you over some more of the basics. TOMAS MLYNARIC: All right. BEN TRENGROVE:
Let's get started. TOMAS MLYNARIC: To get started,
follow the link visible on the screen, which is
goo.gle/compose performance codelab. Go to the Get Set Up page
and download the code base for this project. You can either do it as the
starting point ZIP file, or you can do it with the git
command to clone the repository. This repository also
contains other Codelabs. So once downloaded, open the
Performance Codelab project in the latest stable
Android Studio version. And if you have an
Android device handy, connect it to your laptop
because we'll be using it for running benchmarks. But don't worry if you don't. For this workshop, you
can use also an emulator. We don't recommend doing
that for real performance measurements, but for
here, we can do that. And also, if you wouldn't
be able to do any of that, we have system traces
that we'll need for analysis available in
the same Codelab website. And while you get set up, we'll
have some theory to start with. BEN TRENGROVE: So spotting slow,
non-performant UI is normally possible just by opening up your
app, having a scroll around, and going, oh, that
looks a little janky. But you have to watch out
for, especially with Compose apps: debug performance is not the
same as release performance. So it's not really what
your users would see. And you might not
have a problem at all. So the first thing we say is
always build in release, ideally with [INAUDIBLE] enabled. After that, another
thing to consider is adding a baseline
profile to your app. Because Compose is
an unbundled library, it's treated as
user code, and that means it can be just-in-time
compiled when you open your app. That has a performance
price to pay, but a baseline profile
helps with that problem. So baseline profiles pre-compile
library code, and actually your code too, at install time of the app rather than at first launch. So we've got a graph here that shows the amount of time spent just-in-time compiling-- JITting, as we call it-- with no baseline profile versus with an old baseline profile. We also think you should try to keep them up to date-- maybe with each major release, update your profile. And then an up-to-date baseline profile, we can see, is much better.
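[EDITOR'S NOTE: As a rough sketch, generating a baseline profile is itself just an instrumented test using the Macrobenchmark libraries. The package name and the journey recorded below are assumptions, not the workshop's code:]

    import androidx.benchmark.macro.junit4.BaselineProfileRule
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import org.junit.Rule
    import org.junit.Test
    import org.junit.runner.RunWith

    @RunWith(AndroidJUnit4::class)
    class BaselineProfileGenerator {
        @get:Rule
        val baselineProfileRule = BaselineProfileRule()

        @Test
        fun generate() = baselineProfileRule.collect(
            packageName = "com.example.app" // assumption: replace with your app's package
        ) {
            // Record the critical user journey you want pre-compiled at install time.
            pressHome()
            startActivityAndWait()
        }
    }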
Cool. OK, so you've done all that, and you're like, it's still no good. What do I do now? You might be tempted to
just dive in and start. You read some blog somewhere
or something, and you're like, let's just do some coding. But before that, it's important
to follow a scientific approach, or else you're just going
to add a bunch of complexity to your app that might
not be doing anything. So the first thing to do
is establish the baseline by measuring your
current performance. After you've done
that, you can have a look at that baseline,
or that benchmark, and work out what's actually
causing the problem. It's then you can go and
start modifying the code based on what you've learned. And then after you've
implemented the fix, we remeasure, and we make sure
we actually improve performance, and repeat. Performance optimization
can kind of never end. It's up to you when you
think it's going fast enough. But you could keep going
forever, so don't, basically. And so if you don't follow
a scientific process, you might just find
that something improves, something actually
made it worse, and you never really know what
made the fix in the first place. So to do the measuring,
we have a library called Jetpack Macrobenchmark. Basically, benchmarks are just integration tests. They have some actions you run on your app-- we'll see it in this Codelab-- and they output performance data.
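[EDITOR'S NOTE: A minimal sketch of what such a benchmark looks like. The package name, class name, and scroll action are assumptions; the workshop's real benchmarks already live in the measure module:]

    import androidx.benchmark.macro.CompilationMode
    import androidx.benchmark.macro.FrameTimingMetric
    import androidx.benchmark.macro.StartupMode
    import androidx.benchmark.macro.junit4.MacrobenchmarkRule
    import androidx.test.ext.junit.runners.AndroidJUnit4
    import org.junit.Rule
    import org.junit.Test
    import org.junit.runner.RunWith

    @RunWith(AndroidJUnit4::class)
    class ExampleScrollBenchmark {
        @get:Rule
        val benchmarkRule = MacrobenchmarkRule()

        @Test
        fun scrollHeavyScreen() = benchmarkRule.measureRepeated(
            packageName = "com.example.app",          // assumption: your app's package
            metrics = listOf(FrameTimingMetric()),    // frame duration and frame overrun
            iterations = 10,                          // the workshop uses 1 only to save time
            compilationMode = CompilationMode.Full(), // fully pre-compile to focus on runtime perf
            startupMode = StartupMode.WARM,
        ) {
            startActivityAndWait()
            // Drive the UI here, for example scroll the list with UiAutomator.
        }
    }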
In the workshop today, we've already written all the
benchmarks for you. So we won't cover how
to write a benchmark, but we'll definitely run some
so you can see how they work. So with all of that,
hopefully, you're all set up and we can get started. TOMAS MLYNARIC: OK. So I have the project here in
Android Studio already opened. And I'm going to
build it so that you can see how the project
looks, and also give a brief overview of it from the top. So I use the Project
View in the Project pane so that I can see the
modules and the build.gradle files because we'll need them. And I have the app module here, where we'll be making the code changes. So this is basically the
Compose app that we built. And then we have
a measure module, which contains the benchmarks
that we'll be running to measure the performance. And the app looks like this. So it contains several tasks. Each task screen has a
different performance issue, and we'll basically
go one by one and start fixing
them in this Codelab. The first task basically is a list of items in a lazy grid, and each item has a title, an image, and then a published date that's formatted in the current time zone-- or in the Prague time zone on my device. And we will see how we fix that in action later. All right, so to
analyze performance, we can use system tracing. And so Compose automatically
gives you some information on what phases it runs
in, but that is not enough to get a full analysis. So to help with that, we need
to enable composition tracing. And so you do that by adding
the runtime-tracing dependency to the app module. So I'll go and add it into the
build.gradle file in my app module. I will add it here. And at this point, if I would
run a system tracing profiler from Android Studio,
I would already get the composition
tracing information. But we want to use benchmarks
to also measure the performance. We need to add two more
dependencies, which is tracing-perfetto and
tracing-perfetto-binary, to the measure module. Don't add this to
the app module. BEN TRENGROVE: Yeah,
these dependencies are massive, so
don't accidentally ship them with your app. Only add them to
the measure module. TOMAS MLYNARIC: All
right, so I added it here. And the last thing to
have composition tracing enabled with benchmarks is
add this test instrumentation argument to the measure module. So I just copied it here as well, and I'll add it to the defaultConfig block.
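[EDITOR'S NOTE: Roughly, the build changes described here look like this. Kotlin DSL is shown and the version numbers are placeholders, so treat the exact coordinates for your own project as an assumption to verify:]

    // app/build.gradle.kts -- composition tracing for the app
    dependencies {
        implementation("androidx.compose.runtime:runtime-tracing:1.0.0-beta01")
    }

    // measure/build.gradle.kts -- benchmark module only; never ship these with the app
    android {
        defaultConfig {
            // Lets Macrobenchmark capture composition tracing in its system traces.
            testInstrumentationRunnerArguments["androidx.benchmark.fullTracing.enable"] = "true"
        }
    }

    dependencies {
        implementation("androidx.tracing:tracing-perfetto:1.0.0")
        implementation("androidx.tracing:tracing-perfetto-binary:1.0.0")
    }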
Then I'll sync the project and I'll be able to run the benchmarks. So now it's syncing. No, no, no, no, no. All right, it's synced now. And so to run the benchmarks,
we'll go to the measure module and open the accelerate
heavy screen benchmark. And from here, we can
basically just tap the Run icon in the gutter action and run it. And this will run it on
the device I have here physically ready. And in the meantime, we-- BEN TRENGROVE: Yeah, so the
benchmarks in the project, just to save time,
are only set to run with one iteration, which is
not what we normally recommend. We think you should probably
run them with at least 10. And what that's doing is
averaging out the results, because a lot of things
can affect performance. The device might get too hot. You might get a
push notification in the middle of the test. All of those things have
an effect on the results. So by running the test over and
over again and averaging it out, you'll get a more
reliable result. The other thing is we're
using compilation mode full for these benchmarks
today because we're interested in
runtime performance. What that does is just fully
compiles the app in advance. So we're not going to get caught
out by just-in-time compilation. But actually, if this
was a startup benchmark, that would slow
things down because we would have to load all that
pre-compiled code from disk, and that would actually slow
down the performance at startup. TOMAS MLYNARIC: OK. BEN TRENGROVE: So it
looks like that finished? TOMAS MLYNARIC:
Yep, we're finished. We have the benchmarks. So whenever the benchmarks
are run from Android Studio, you'll get the outputs in the
Android Studio Output pane, and you'll get some metrics
on what it measures. So the interesting things here
to measure runtime performance-- that means how smooth it is-- are the frame duration
and frame overrun. So the frame duration
metric basically tells you how long it takes for
your app to produce the frames, and it gives the results in
a statistical distribution. Basically, the lower the number, the smoother your app is. The frame overrun metric,
on the other hand, tells you how much time was left until the frame limit. So a negative number
is basically good. There was still some time
until the frame limit. But a positive
number, like we have here in the 66
milliseconds, that means that this went 66
milliseconds over the limit that the device
expected, and therefore this screen definitely janked
and was not smooth enough. And so there are some
other metrics we have here. You can add as many metrics
as you want, and all of these basically are here to
help you understand the performance of your app. So I have the image placeholder section added here with a TraceSectionMetric, and basically, that gives me a better idea of whether I'm going in the right direction.
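[EDITOR'S NOTE: A sketch of how those metrics are declared inside measureRepeated in the benchmark. The "ImagePlaceholder" label is an assumption and has to match whatever label the app actually traces:]

    metrics = listOf(
        FrameTimingMetric(),                    // frameDurationCpuMs and frameOverrunMs
        TraceSectionMetric("ImagePlaceholder"), // duration of a custom trace section
    )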
OK, so once we have the benchmarks ready, the benchmarks actually
produce system traces. And we'll use the system
traces to analyze performance. You can analyze it either with
Android Studio Profiler or with a browser-based tool-- Perfetto. And we'll use that here. So to do that, you just go into
your favorite browser and go to ui.perfetto.dev-- that's Perfetto with a double T-- and it will load the tool
to analyze performance. And system trace, if you
go back to Android Studio, is located in the Build
folder of the measure module. I'll go there: outputs, then connected_android_test_additional_output. Then there's the build type that
it was run and the device name. So my device is Pixel 6. And here, you would
see all the system traces for each iteration. So here, we run
only one iteration, so we have only one
Perfetto trace here. And from here, I'll just drag
it and drop it into Perfetto, and Perfetto will load all
the information in browser so we can analyze it there. At this point, if you can't
really run the benchmarks, you could go to the
Get Set Up page, and here are all the traces. So here's the step one. This is the first benchmark. You could download it
here, and just drag it into Perfetto for analysis. BEN TRENGROVE: So that's
a lot of information. TOMAS MLYNARIC: That's
a lot of information. But don't worry,
Perfetto basically records everything that's
happening on your device during the trace,
but you don't need all this information to
analyze performance of your app. So when you load
Perfetto, first, you would see the state of
the CPU, state of GPU, and stuff like that. And then you would see
a list of processes that were running during the trace. But when you analyze
your performance, what you're interested
is in just your process. So in our case, that's
com.compose.performance. And so I can just
click it and unroll a list of threads that were
running inside my process. So it starts first
with a main thread, and then rendered
thread, and then rest of the threads that were
running during the execution of the app. So to navigate within Perfetto, you can use the WASD keys to zoom in, zoom out, and go right and left, because we think
performance analysis is like playing a game. And so that's how you navigate. So to start with
analysis, you need to understand the expected
and actual timelines. So the expected timeline
here, the first one, tells you basically when the
system expected the frames to be produced. So here, we can see
that each frame takes 16.6 milliseconds, which means
that this device runs at 60 FPS. If you have a device that runs at a higher refresh rate, you could see a shorter
duration here. The actual timeline,
on the other hand, tells you the, well, actual
duration of the frames that the app produced. There might be different colors
here, like a green frame here, which means it takes three
milliseconds to produce this frame. That's all good. There was no jank. Everything's fine. But you might see red
frames, like here and here-- these are the frames that took longer to produce than the device expected. And these are the frames where you should really investigate what's going on. The first red frame here
is the startup frame. And so the app
startup usually takes longer than 16 milliseconds,
so we will not go down-- we will not go there. But then there's
another red frame that takes 89 milliseconds,
and that's during the execution and that's our janky frame. All right, so to analyze,
we found the janky frame. We need to go there to
analyze what's going on there. So to first do that, we need
to understand how Perfetto organizes the sections. Basically, each bar from the top down-- like this Compose recompose section here, which is populated by the Compose framework, by the way-- is the total time of the bars below it. So if I would zoom
in a bit more, we can see that there's
some lazy item creation. And then in this
item call, we see that we would have some
published date format and image placeholder, some async
image, and other stuff. BEN TRENGROVE: Yeah, so
basically, it's a stack trace. That's just showing you all the
functions in the order they were called and the time it took. But what about gaps? I can see a few gaps
in the trace as well. TOMAS MLYNARIC: Oh,
yeah, here are some gaps. So the gaps mean basically that
there was some code running-- untraced code running. That just means that
the system trace doesn't have enough information. It doesn't mean that no code was running. In system tracing, a gap always means something was still executing. It's just that the trace
doesn't have enough information to tell you what was
running in the gap. BEN TRENGROVE: OK, cool. But how do we use all
these fancy-colored bars to work out what's
actually wrong? TOMAS MLYNARIC: Basically,
as a first step in analysis, you need to look
out for anything that's suspiciously long-- any section that takes longer
than maybe some others-- and you start there. So in our case, it looks obvious
that the image placeholder takes 19 milliseconds, which is-- it takes longer
than even one frame. This is where we'll start. And so the image placeholder uses painterResource, which loads some JPEG, decodes it, and shows it on screen. BEN TRENGROVE: Yeah,
so it's well known that you should always load network
images off the main thread, but watch out for the
placeholder as well. Your placeholder is generally
loaded on the main thread, and it might just be way too
big and causing just as much jank as if we'd
loaded the network image from the main thread. So possible fixes. Well, we'd have a look,
make sure if it's too big, we could shrink it. Or maybe we could use a
vector drawable instead. TOMAS MLYNARIC: All right,
so let's fix it in our code. So I'll go into the app module
and into the accelerate package, accelerate heavy screen. And here, basically,
the screen collects some items from the ViewModel and loads the content. It shows the lazy
vertical grid of items. And then I have the
heavy item, which contains some published text-- this is what the UI shows-- and an AsyncImage. And here's the placeholder. And so, basically, the placeholder is a 1,600 by 1,600 pixel image. So this is clearly too big for what it shows. And so the fix here is, well, easy. We can just use a vector drawable here, save it, and that's done.
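[EDITOR'S NOTE: A sketch of the fix, assuming Coil's AsyncImage and hypothetical resource names:]

    import androidx.compose.runtime.Composable
    import androidx.compose.ui.res.painterResource
    import coil.compose.AsyncImage

    @Composable
    fun ItemImage(imageUrl: String) {
        AsyncImage(
            model = imageUrl,
            contentDescription = null,
            // Before: placeholder = painterResource(R.drawable.placeholder_large),
            // a 1,600 x 1,600 bitmap decoded on the main thread during composition.
            // After: a small vector drawable, which is cheap to load.
            placeholder = painterResource(R.drawable.placeholder_vector),
        )
    }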
So let's verify it. To verify that we actually fixed it, we'll run the benchmark again. So we go into the accelerate
heavy screen benchmark, run it from the gutter action
icon, and wait for the results. BEN TRENGROVE: Yeah, so watch
out for image placeholders. They generally are loaded
on the main thread. But this was just an example. We wanted to show you what a
really heavy bar in a trace looks like. You might spot something
completely different, but this is kind of how
analyzing traces work. You just got to dig in, look
for things that look suspicious, and go and read
the code and make-- come up with a hypothesis
and make a change, and see if it fixes the problem. Cool, looks like it's loaded. TOMAS MLYNARIC: Yeah,
it's now running, so it's swiping the content. And I've got the results. And so, again, as before,
I have the same results in the same folder in the
build folder of the module. So now I'll go drag and
drop it into Perfetto and see how it helped. So I'll find my process name,
com.compose.performance. I'll zoom into the second
janky frame, second red frame. And I can see that the
image placeholder now takes 700 microseconds. So it's clearly a 20
milliseconds improvement. BEN TRENGROVE: All right, so
we definitely made it better, but that frame is still
looking pretty janky to me. TOMAS MLYNARIC: Yeah, it
still takes 92 milliseconds. All right, what's
happening here? Another thing-- same process: find something that's taking time on the main thread that doesn't have to be there. And so in our case, there's a
suspicious binder transaction that appears-- 1, 2, 3, 4... 6 times--
and it takes a millisecond. What's a binder transaction? BEN TRENGROVE: So
binder transactions are just when your app
has to talk to something outside of its own process. Basically, it's
talking to the system. They can be lots of
different things. Say you're requesting
a runtime permission. I think even Logcat is
a binder transaction. Just any time your app
has to talk to the system, it goes through one of these. Unfortunately, they don't really
tell you what exactly they are. You just get this label
binder transaction. So what you have to do is look
at where it is in the trace, and come up with an idea
of where that roughly might be in your code,
and then you can go to it and wrap it in a
custom trace section. And that will give
it a label, and you can use that to
verify you've actually found the right transaction. TOMAS MLYNARIC: All
right, so let's go and fix it and get more information. So I'll just hide
all the panes and go into the accelerate
heavy screen as before. And, well, I need to look
into something that's calling a system service or something. And, bam, we have it here: registerReceiver. Who would have guessed? So to verify that this is actually
the cause, I can add a trace call and wrap it with a label. I have the trace
section metric here prepared for the same label,
so I'll use that here. And if I would
rerun the benchmark, I would basically see that these
binder transactions actually correlate with this call. To save some time, I'll
not go through that path and go directly to fixing that. So this is on a main thread. So to offload it off the main
thread, we can use coroutines. So let's go and get a scope-- rememberCoroutineScope--
and then we'll just wrap this in scope.launch. And we'll use a
different dispatcher than the main dispatcher. So let's use Dispatchers.IO here because it's talking to the system.
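[EDITOR'S NOTE: A sketch of the two steps described here -- labeling the suspect call with a custom trace section and then launching it off the main thread with a coroutine. The receiver, the intent action, and the "PublishDate.registerReceiver" label are assumptions; the label just has to match the TraceSectionMetric in the benchmark. The disposal handling is simplified:]

    import android.content.BroadcastReceiver
    import android.content.Context
    import android.content.Intent
    import android.content.IntentFilter
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.DisposableEffect
    import androidx.compose.runtime.rememberCoroutineScope
    import androidx.compose.ui.platform.LocalContext
    import androidx.tracing.trace
    import kotlinx.coroutines.Dispatchers
    import kotlinx.coroutines.launch

    @Composable
    fun RegisterTimeZoneReceiver(onTimeZoneChanged: () -> Unit) {
        val context = LocalContext.current
        val scope = rememberCoroutineScope()
        DisposableEffect(context) {
            val receiver = object : BroadcastReceiver() {
                override fun onReceive(context: Context, intent: Intent) = onTimeZoneChanged()
            }
            // The trace label makes the binder call visible in the system trace,
            // and Dispatchers.IO moves it off the main thread.
            scope.launch(Dispatchers.IO) {
                trace("PublishDate.registerReceiver") {
                    context.registerReceiver(receiver, IntentFilter(Intent.ACTION_TIMEZONE_CHANGED))
                }
            }
            onDispose { context.unregisterReceiver(receiver) }
        }
    }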
So if we do that, we will basically offload it from the main thread,
but it was there six times. There's still unnecessary
work happening here. And we're using it for
time zone changes, and the time zone is the same across the whole device, so we can hoist it. And so to hoist it,
there are two ways. We could either pass the
parameter along the tree. But in this case, it
might actually make sense to use a composition
local because it can have a very
well-defined default value, and it can be used only in
the place that we need it. We'll use the composition local. So I'll help myself and
go to step six of the Codelab, where we have the composition
local here so that I don't have to remember the exact syntax. So I'll paste it. So basically, I'm creating
a compositionLocalOf with a default value of the current system default time zone, and then I need to provide it. So I have a prepared provide
current time zone method here. So I'll use the composition
local provider that comes from the Compose framework. I have the inputs. And I don't have the value. So now I'll just go to the
published text composable that is in every single item, and I'll just take that
from here and put it here. Conveniently, it's
named the same. And so I have that, but
I have an error here. So here, I'll just use the newly
created LocalTimeZone.current. So whenever that composition
local is changed, basically, this composable
will be recomposed with the current value. And the last step-- this provider isn't used anywhere yet. Composition locals only work for the subtree below where you call the provider, so you would use it in a root
composable of your screen. So in my case, I'll do it in
this accelerate heavy screen composable. And that's it. And basically, in
these composables, this composition local will
exist, nothing will complain, and we can verify whether we fixed it.
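[EDITOR'S NOTE: A sketch of the composition local being described. The names mirror what's said in the talk, but the time zone type, the formatter, and the exact signatures are assumptions rather than the workshop's code:]

    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.CompositionLocalProvider
    import androidx.compose.runtime.compositionLocalOf
    import androidx.compose.runtime.remember
    import java.text.SimpleDateFormat
    import java.util.Date
    import java.util.Locale
    import java.util.TimeZone

    // A default value, so the local can be read even without an explicit provider.
    val LocalTimeZone = compositionLocalOf { TimeZone.getDefault() }

    @Composable
    fun ProvideCurrentTimeZone(content: @Composable () -> Unit) {
        // Assumption: in the workshop, the value is kept up to date by the
        // broadcast receiver registered above; here we just use the system default.
        CompositionLocalProvider(LocalTimeZone provides TimeZone.getDefault(), content = content)
    }

    @Composable
    fun PublishedText(published: Date) {
        val timeZone = LocalTimeZone.current // only this composable recomposes when it changes
        val formatted = remember(published, timeZone) {
            SimpleDateFormat("d MMM yyyy, HH:mm", Locale.getDefault())
                .apply { setTimeZone(timeZone) }
                .format(published)
        }
        Text(formatted)
    }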
So we'll go back into the accelerate heavy screen benchmark and run it
from the gutter action. BEN TRENGROVE: Yeah,
so binder transactions are perfectly normal. There's no need to be
scared if you see one. But a lot of them can actually be offloaded off the main thread. The way you can tell is
if they use an activity context, they have to stay
on the main thread. But all the other
ones in general can be moved to a
background thread. So we'll wait for
that to finish. TOMAS MLYNARIC:
Starting the benchmark. Now it's running. And we've got the results. So same process. We have the system trace here. Let's drag and drop
it into Perfetto, follow the same process,
and find out here. Zoom in. And we can see that there are
no binder transactions here anymore. BEN TRENGROVE: But we still
used it, so where did it go? TOMAS MLYNARIC:
Yeah, that's true. We can use the search bar here. So I named it as a publish
date register receiver. Just tap Enter. And as you can see here,
we have just one occurrence of this section, which is here. We still see that this is the
binder transaction, but only one during the execution. And we can see that it uses a
default dispatch thread that is created by the coroutines. So we offloaded it
from the main thread. BEN TRENGROVE: Nice. If we scroll back
up and had a look, we could see the frame
is actually still janky. There's one more performance
issue in that task. But just to save time
in this workshop today, we're going to leave that
one as homework for you. That's step seven
of the code lab. TOMAS MLYNARIC: All right. And also, it might make sense to
not do that in the composable at all and instead move it to a data layer that registers the broadcast receiver. But keep in mind, even though it's in a data layer, it'd still be running on the main thread. So just to call that out. All right, so since we skipped
the unnecessary subcomposition task, let's go and
compare the result and finish the cycle of
measuring initial performance, fixing it, and now comparing if
we actually improved it or not. So in this code lab, we'll
just do a very simple comparison and take the results here. I'll just copy-paste them here into the code so I can see them
in the same space. But I can use the
test history and find the result, the first
benchmark that I was running. So that's my before
state and after state. And I can see that, at first glance, the before run actually looks faster in many cases. But in the P99, it
got pretty close. BEN TRENGROVE: Yeah, we've been
caught up by the one iteration. So if we ran this
benchmark 10 times, you would see that
it did get faster. TOMAS MLYNARIC: Yes. BEN TRENGROVE: Which we
also saw in the trace. That's why we say don't
run them just once. Run them 10 times. TOMAS MLYNARIC: Exactly. But, yeah, that's how you
can simply compare and verify that it actually helped
with the performance. BEN TRENGROVE: Cool. So for the rest of the
workshop, just to save time, we're not going to go
through the full cycle of running the benchmark,
working out what's wrong, and stuff like that. We're definitely
convinced that that's the best way to optimize
performance, though. So if you're doing
this at home, we encourage you to run
the benchmark first. We've written benchmarks for
every task in this Codelab so that you don't have to
do it and you can run them. All right, cool. Moving on. So the next thing
we're going to look at is preventing unnecessary
recompositions. And while you might
get set up, I'll just go over a
quick bit of theory that will help
understanding this section. So Compose has three phases. Composition determines
what to show. Layout determines
where to show it. And drawing, I'm
sure you can guess, draws it all to the screen. The cool part is that
Compose can actually skip a phase if it doesn't
have to do anything. So if we can avoid, say,
changing our composition tree, we can skip
composition altogether. And composition generally is the
most expensive phase of Compose. That same thing applies to
layout and drawing, though. If you don't invalidate the
layout, layout can be skipped. You can even do composition
and not require a redraw. They're all skippable. So let's have a look at how you
might actually go and do that. So we'll open the
PhasesComposeLogo file in the app module. So it's under Phases,
PhasesComposeLogo. I'll close that so we can see. And I'll run the app so we
can see what's going on. So this is task two. I'll open the Layout Inspector. The Layout Inspector
is a tool that shows you what in your
screen is recomposing. And we can see here that
whenever we see this blue box, that's a recomposition. So this task is just that logo
is bouncing around the screen, but we're not actually
changing what's being shown on the screen. We're just moving something. So surely, we can do
that just using layout. If I expand the
Layout Inspector, we can see that this
phases compose logo is just going to the moon, straight up. We can definitely
do better here. We're wasting a
lot of performance. So let's have a look at
how we might do that. Having a look at the
code, this is the function that we saw recomposing a lot. And we have this logo position. You don't have to dive into
the logo position function, but we just need to
know that it returns the next position for
the logo, and it does it with a Compose state object. And now anytime a
Compose state is read, that's how Compose
knows when to recompose. So we're reading
this state here. And so anytime logo
position changes, Compose will recompose
the nearest recomposition scope, which in this case
is this whole function. And so all of that code is
being rerun for every time logo position changes. So to fix that, we need to defer
when the state is being read. And there's actually
a really easy way to do that in this case. We can use the offset modifier
that takes in a lambda, and we can just actually
pass the state straight in because it's an offset. And now, this lambda
here will only be called in the layout phase. And that's where
our state read is. And so now we're not
going to recompose. We're just going to invalidate layout instead of invalidating composition.
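[EDITOR'S NOTE: A sketch of the deferred read. The logoPosition name mirrors the talk and is assumed to be an IntOffset backed by Compose state; the resource name is an assumption too, and the workshop file's real code differs in detail:]

    // The lambda runs in the layout phase, so changes to logoPosition
    // only re-run layout instead of recomposing the composable.
    Image(
        painter = painterResource(R.drawable.compose_logo), // assumption
        contentDescription = null,
        modifier = Modifier.offset { logoPosition },        // logoPosition: IntOffset from state
    )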
So I'll rerun the app, and we'll see if it worked. So we can already see that
the blue box isn't showing up anymore. And if I expand here, we
can see no more numbers sky rocketing to the moon. We're just relaying out now. We're not actually recomposing. So the only code being rerun, in
our file at least, is just that. TOMAS MLYNARIC: All
right, so there are also other lambda-based modifiers. So, for example, if you want to
animate background or something like that, you can
just call drawBehind with draw [INAUDIBLE]
and the color. This way, you would also
skip the layout phase and only redraw. Same thing if you want to change
alpha, or rotate, or scale. You can use the
graphicsLayer modifier with the lambda version. And you can even combine all of these into just one modifier.
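[EDITOR'S NOTE: A sketch of those lambda-based modifiers. animatedColor, animatedAlpha, and angle are assumed animation/state values, not the workshop's code:]

    Modifier
        // State read inside the draw phase: only redraws, skipping composition and layout.
        .drawBehind { drawRect(animatedColor) }
        // Reads inside graphicsLayer only update the layer properties,
        // again skipping composition and layout.
        .graphicsLayer {
            alpha = animatedAlpha
            rotationZ = angle
        }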
But what if there isn't a lambda modifier? What if I want to change
size or something? BEN TRENGROVE: Yeah,
so there won't always be a lambda version of the
modifiers you're using. And in that case,
we can fall back to a lower level API,
which just happens to be what we do in
the next step, Step 10. So you can implement
a custom layout. And that means any
state reads inside that layout composable will
only invalidate layout. And you can even go
directly to draw. You can implement
a custom canvas. If you can get away with
just using draw calls, that would be very performant. So that is also a possibility. In this case, we'll just
do the layout, so step 10. And we'll open up the
PhasesAnimatedShape file. So I'll close this
for the moment just so we can see the code. So this is task three. And this one, we have this
button here that when we tap it, there's just this
animating circle. It has a nice spring effect. But we can see the
same problem as before. We're recomposing
on every frame. Again, we're not
changing what's actually being displayed on the screen. We're just resizing. So we know we can do better. So let's give that a go. All right, we'll have
a look at the code. This is the MyShape composable,
which is the circle. And it takes the
size input here. And that's what's changing. So we'll have a look
at how size is created. And size is an animated dp. So anytime size
changes, which will happen every time
we click the button, this whole function
has to recompose because we're reading it here. The nearest scope is here. So all of that's rerunning, and
then my shape gets recomposed. So the first thing we can
do is just defer that read. And we can do that by
switching size to be a lambda. And I'll update my callsite. But here, I'm like, OK, cool. Maybe I can just switch
that to a lambda. But, no, a lambda version of that modifier doesn't exist. So in this case, we're going
to have to implement a custom layout instead. And just to save
myself a bit of typing, I'm going to copy
it from the Codelab. So I'll paste in the new
implementation of MyShape, which uses the layout modifier. And we're using that because we
only have one thing to lay out. You can use the layout
composable instead. The same theory applies. And you do that if you had
multiple composables to layout at the same time. And so now, this is where
we call the size lambda. And that means that's where
our state read is happening. And because we're doing
that inside this layout, this is now our nearest scope. And so only this will be
rerun when size changes. We then just take the size, make some constraints, measure, and place.
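[EDITOR'S NOTE: A sketch of what that pasted implementation roughly looks like -- a size read deferred into a layout modifier. The shape styling and names are assumptions; the workshop's code may differ in detail:]

    import androidx.compose.foundation.background
    import androidx.compose.foundation.layout.Box
    import androidx.compose.foundation.shape.CircleShape
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.graphics.Color
    import androidx.compose.ui.layout.layout
    import androidx.compose.ui.unit.Constraints
    import androidx.compose.ui.unit.Dp

    @Composable
    fun MyShape(size: () -> Dp, modifier: Modifier = Modifier) {
        Box(
            modifier
                .background(Color.Blue, CircleShape)
                .layout { measurable, _ ->
                    // The size() state read happens here, inside the layout phase,
                    // so the animated size only re-runs this block.
                    val sizePx = size().roundToPx()
                    val constraints = Constraints.fixed(sizePx, sizePx)
                    val placeable = measurable.measure(constraints)
                    layout(placeable.width, placeable.height) {
                        placeable.place(0, 0)
                    }
                }
        )
    }

    // Call site: pass the animated value as a lambda so nothing reads it during composition,
    // for example MyShape(size = { animatedSize }).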
You can learn about making custom layouts in our documentation. Cool. So I'll rerun it. We'll see if it worked. Oh, be nice to me. TOMAS MLYNARIC: Layout
inspector is tired. BEN TRENGROVE: Yeah,
it's been a long day. Hey, there it is. So now I'll tap the button. And we can see, yeah,
problem solved again. We've still got the blue
box around the whole thing because the button
still recomposes on tap, but we can see the circle now. No number. It's just being
skipped every time. Cool. TOMAS MLYNARIC: Great. So remember, you shouldn't
have to recompose when you're only adjusting layout on a screen, especially during scroll or animation, because that can lead to janky frames. Recompositions that occur during scroll are almost always unnecessary and can be avoided. So that's how you
fix performance. All right. So we fixed unnecessary
recompositions by deferring the state reads, but we can also fix unnecessary recompositions
with stable classes. And so that's what
we're going to do in the last part
of this workshop. So we will not use Layout
Inspector in this case. We go into task four, and
we see a list of items here. So when I interact
with the screen, we're using a
modifier that shows a border whenever something
recomposes on a screen. So we can see here that
when I'm adding more items, this recomposition goes through
the roof, everything recomposes. And we have different
types on the screen here. So we have items marked
with the ref icon-- with the blue ref icon. And these items basically
simulate a situation that you might have
in a code base, like when SQL creates new entities, as with the Room database, or when API entities are recreated from Firestore, or whenever JSON parsing or something similar occurs, because that is where a new instance is created and provided. So that's the ref icon. And the EQ icon-- items
marked with these, these usually stay the
same during the execution. Only when I actually change that instance is it recreated-- it uses the copy method, just to simplify it here on the screen. And, yeah. BEN TRENGROVE: So the way
skipping works in Compose is that Compose is
actually generating a whole bunch of code inside
each of your composables. And what that code
is doing is it's looking at the current value of
each parameter that's passed in, and it's comparing it
to the previous value. And that's how it knows if
it can be skipped or not. If all the parameters are
equal, then that composable won't actually be rerun. But the trick is it only does
this if all of the parameters are what we call stable. So a stable type, what is that? It's when it's either
completely immutable or Compose can tell
when that object has changed, which normally
happens via MutableState-- mutableStateOf. If it's not one of those,
we consider it unstable, and that means we won't
generate any skipping logic in that composable. It will always be rerun. So unstable classes tend
to happen when the class itself is just
straight up mutable. You might have a var property,
but more commonly happens because it's in an
external module. So we can only determine
the stability of classes in modules where the Compose
compiler is actually run. And this includes things like
Java standard library functions, which we'll see soon. But also, maybe, you've just
got a generic Kotlin data module or something. Those classes will be
considered unstable. The other thing is nesting. So if you have a class that contains an unstable property, well, that class, that parent class, will also be considered unstable.
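[EDITOR'S NOTE: An illustrative sketch of what makes a class stable or unstable. These classes are made up for the example, not taken from the workshop app:]

    import androidx.compose.runtime.getValue
    import androidx.compose.runtime.mutableStateOf
    import androidx.compose.runtime.setValue
    import java.time.LocalDateTime

    // Unstable: a plain var means Compose can't tell when the value changes.
    class MutableUser(var name: String)

    // Unstable by nesting: LocalDateTime comes from a module the Compose compiler
    // doesn't process, so this otherwise immutable data class is treated as unstable too.
    data class Article(val title: String, val published: LocalDateTime)

    // Stable: immutable properties of types Compose can reason about.
    data class Tag(val label: String)

    // Also stable: the mutation goes through Compose state, so Compose is notified.
    class Counter {
        var value by mutableStateOf(0)
    }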
So the first thing you can do to help with all of this stuff is just to enable
a new thing we're calling Strong Skipping Mode. And with Strong
Skipping Mode enabled, even composables with
unstable parameters are now able to be skipped. So that should just
radically simplify dealing with these sort of issues. So definitely recommend
doing that first. The other thing is if
you've ever found yourself-- you've already been dealing
with this in the past-- and you're wrapping lambda
calls with remember, well, Strong Skipping Mode
fixes that as well. All lambdas and composables
are now remembered for you. Basically, we're
just wrapping them all with remember ourselves. How does the skipping work? Unstable classes are compared
using instance equality. So if the unstable class
is the same exact instance as the last time that
composable was run, then the composable
will be skipped. Stable classes
are still compared using the equals method. TOMAS MLYNARIC: All right,
so let's go and enable it in our code base. BEN TRENGROVE: Nice. TOMAS MLYNARIC: Nice. So we go into the build.gradle
file of our app module. I'll just close this here. And we can enable it in the
new Compose compiler Gradle plugin that comes with Kotlin 2.0. And basically, this plugin will simplify all the setting up of Compose. It will prevent all the juggling
between Compose compiler version and Kotlin version. And therefore, it will
always be compatible. So you can also use strong skipping mode before Kotlin 2.0. It's just named a bit
differently and set up a bit differently. So to enable it here, I'll just set it to true.
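[EDITOR'S NOTE: Roughly what that setting looks like with the Kotlin 2.0 Compose compiler Gradle plugin. The property name below is the one used around the time of this workshop; newer plugin versions enable strong skipping by default, so check your plugin's documentation:]

    // app/build.gradle.kts
    plugins {
        id("org.jetbrains.kotlin.plugin.compose")
    }

    composeCompiler {
        enableStrongSkippingMode = true
    }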
I'll sync the project and rebuild my app to see what effect it
has on our list of items. So now I'll just go and
add a bunch more items. And I can see that,
conveniently, only every other item is recomposing. That's just from enabling Strong Skipping Mode. And in our case, the
items marked with ref are still recomposing, because
they use a new instance with every change. So let's see what item it is
and what the reason for that is. So we'll go into the
Stability package here, and I'll go into
StabilityViewModel. And basically, the stability
item data class here is what is shown on the screen. So it uses primitive-type parameters, like Int, String, and Boolean, and it uses the stability item type, which is an enum. An enum is considered stable, but the LocalDateTime is coming from the java.time package, and that's the Java
standard library. And that's a module
that doesn't have a dependency on the Compose compiler. So in this case, to make this class stable and have it compared with the equals call instead of the instance check-- to basically help with the stability here-- I'll just annotate it with @Immutable because, well, this class is basically immutable.
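[EDITOR'S NOTE: A sketch of the annotated data class. The property names are assumptions based on what's described on screen:]

    import androidx.compose.runtime.Immutable
    import java.time.LocalDateTime

    @Immutable // a promise to Compose: instances never change after construction
    data class StabilityItem(
        val id: Int,
        val type: StabilityItemType, // enum: already stable on its own
        val name: String,
        val created: LocalDateTime,  // from java.time, which the Compose compiler can't analyze
    )

    enum class StabilityItemType { REFERENCE, EQUALITY }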
I'll rebuild the project and see if that helped. Yep, rebuilding now. OK. And now when I go and
add a bunch more items, we can see that
nothing recomposes. BEN TRENGROVE: No, there's still
one thing recomposing there. TOMAS MLYNARIC: My sight is bad. Yeah, the latest change. OK, all right. So the latest change
here, let's figure out why it's still recomposing. So this is the
stability screen file. I'll go here. And so the UI is
straightforward. Some items from ViewModel, some
lazy column with the items. And here is the latest
change composable. It shows the change, the text. And the input parameter
is a LocalDate. And guess what? It's still from the standard Java time package. But the problem here is I can't
really annotate this class. Oh, wow. So in the past, we
were recommending that you would need to wrap
this class with your own class, and annotate that class
as stable or immutable. But there's a new
way to do that. And we can do it
with a thing that we call stability configuration
file, which you can enable in the build.gradle as well. So we'll go to the build.gradle
file of the app module or of the module that
you want to enable it in. And I'll just add stability
configuration file. And because I don't
remember the syntax of resolving a path
in Gradle, I'll just copy-paste it
from the Codelab. And so basically, what it sets is the path to a file in your code base that contains information about which classes you want to consider stable. So I have the stability config
file already created here in the root folder. And now I just need
to add the class that I want to consider stable, which is java.time.LocalDate.
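[EDITOR'S NOTE: Roughly what the two pieces look like. The file name and the Gradle property reflect the Compose compiler Gradle plugin around the time of this workshop (newer versions use a list property named stabilityConfigurationFiles), so treat the exact syntax as an assumption to verify:]

    // app/build.gradle.kts
    composeCompiler {
        stabilityConfigurationFile = rootProject.layout.projectDirectory.file("stability_config.conf")
    }

    // stability_config.conf -- one fully qualified class name or wildcard pattern per line
    java.time.LocalDate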
All right, I'll rebuild the project and run it. And I'll see that this
should fix the problem. BEN TRENGROVE: So you can also
use wildcards in these stability configuration files. So we could have just
written java.time.* here. That would have also fixed
the last issue as well. But maybe you could consider if
you've got a generic Kotlin data module, you might want to put something like your app's package name plus .model.* in there, and that would declare your whole module stable. Just keep in mind, it's not actually making it stable. It's just that you're opting into the contract to say, we write immutable data classes, so please just consider the whole thing stable. TOMAS MLYNARIC: When I
interact with the screen, I see that only the item I check changes. But when I add new items or remove
items, nothing recomposes. BEN TRENGROVE: Cool. So it looks like
it's totally solved. So, yeah, a quick recap. First thing to do, enable
Strong Skipping Mode. Hopefully, it makes
this all much easier. We're actually going to switch
it on by default really soon, so you might not even
have to enable it. And then the other
thing is check out stability configuration files. And with that, that brings
us to the end of the Codelab. So we only showed
you a small portion of the things you might
run into in the real world, but we hope the
techniques we showed are generic enough to apply
it to whatever situation you end up in. If you found this interesting,
here's some more Codelabs. We've got how to
write benchmarks if you want to learn how
to write those benchmarks, how to add a baseline
profile to your app, and, of course, just our Compose
performance documentation. TOMAS MLYNARIC: All right,
and that being said, we thank you for your attention. And now go and compose
a performant app. BEN TRENGROVE: Thank you. [MUSIC PLAYING]