RICHARD UHLER: My
name is Richard Uhler. I'm a software engineer on
the Android Runtime team. I spent the last three years
trying to better understand memory use on Android. And more recently, I've been
working with first party app developers to tackle the
challenges of evaluating and improving
Android memory use. It's great to see
so many of you here, interested in
Android memory use. To start off, why should
you, as an app developer, care about memory use? For me, it's really about
the Android ecosystem, the ecosystem of
applications, of devices, and of the users
of those devices. And where memory
comes into play is not so much for the
premium devices where you have a lot of memory
available, but much more for the entry level devices. Because these devices need a
decent selection of low memory apps to work well. If application memory use
and requirements grow, then the entry level
devices won't work as well. If they don't work as
well, OEMs will not want to produce these devices. And if they don't
produce these devices, well, we're kind of
excluding a bunch of users from our Android ecosystem. And that's a bad thing. So app developers
have a role to play when they're developing their
applications to do whatever they can to be efficient in their memory use, to reduce it and keep it from growing too much, so that we have a nice selection of low memory applications, so that entry level devices behave well. And if that happens, OEMs
will produce these devices and we can put them
in the hands of users to use our applications. So in this talk, I'm
going to talk about three broad categories, three areas. First, I'll talk about
the mechanisms that come into play on
an Android device when it's running low on memory,
and how that impacts the user. I'll talk about how we evaluate
an application's memory impact, and in particular, some
very important factors to be aware of that come
into play with that. And third, I will
give you some tips for how to reduce your
application's memory impact, especially given that a
lot of the allocations going on in your application originate
deep within the Android stack on which it's running. Let's start. What happens on a device when
it's running low on memory? Well, memory on device,
physical memory on device, is organized or
grouped into pages. And each page is typically
around four kilobytes. Different pages can be
used for different things. Pages can be used pages: these are pages
that are actively being used by processes. They can be cached pages. These are pages of memory that
are being used by processes, but the data they contain also lives somewhere on device storage, which means we can
sometimes reclaim these pages. And then there
might be free pages of memory sitting on the
device that you're not using. So what I have done is I
took a two gigabyte device, and I started it doing nothing. So at the very
beginning of time, the runtime is not running. And then I started
using it more and more. So lots of different
applications, exercising them, which has the effect of
using more and more memory on the device over time. So we can see at the beginning,
the flat line there is before I started the runtime. Then I start the runtime up. There's plenty of free memory
available on the device. And this is a happy device,
because if an application needs more memory, the kernel can
satisfy that request right away from the free memory.
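If you want to see where a device stands relative to these thresholds from code, ActivityManager exposes the numbers the kernel is working with. Here's a minimal sketch; the class name and log tag are just for illustration:

```java
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

public class MemoryStatus {
    // Logs how much memory is currently available, the threshold at which
    // the system considers memory low and starts killing processes, and
    // whether the device is already in that low memory state.
    public static void logMemoryStatus(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(info);
        Log.i("MemoryStatus", "availMem=" + info.availMem / (1024 * 1024) + "MB"
                + " threshold=" + info.threshold / (1024 * 1024) + "MB"
                + " lowMemory=" + info.lowMemory);
    }
}
```

Over time, as you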
use more memory, the free memory gets exhausted. It goes down. And to avoid very bad
things from happening, the Linux kernel
has this mechanism that kicks in called kswapd. And kswapd's job is to
find more free memory. This kicks in when
the free memory goes below what I'm calling
here the kswapd threshold. And the main
mechanism that kswapd uses to find more free memory
is to reclaim cached pages. Now, if an app goes to access memory that was on a cached page that has since been reclaimed, it's going to take a
little extra time to reload that data from device storage. But probably, the user is not
going to be noticing this. So that's OK. Now, as I exercise more and more applications, they use more memory. The number of cached pages is
going to fall as kswapd starts to reclaim them. If it gets too low, there's
too few cached pages, then the device can
start to thrash. And this is a very bad thing
because, basically, the device will completely lock up. So on Android, we
have a mechanism which is called the
low memory killer that kicks in when the amount of
cached memory falls too low. And the way this works
is low memory killer is going to pick a
process on the device, and it's going to kill it. And it's going to get
back all the memory that that process was using. Now, this is an
unhappy state to be in, especially if the low memory
killer kills a process that the user cares about. So let me go through and
tell you a little bit more about how the low memory
killer decides what to kill. Android keeps track
of the processes that are running on the device. And it keeps them
in a priority order. So the highest priority
processes are native processes. These are ones that
come with Linux, things like init, kswapd,
which I told you about, daemons like netd and logd, and Android-specific daemons like adbd and installd-- basically any native process that's running is categorized into this group. The next highest
priority process we have is the system server,
which is maintaining this list, followed by what are known
as persistent processes. These are kind of
core functionality, so [INAUDIBLE], NFC, SMS,
those kinds of things. Next, we have the
foreground app. So this is going to be the
application that the user is directly interacting with. In this case, perhaps the
user's viewing a web page, so they're interacting
with the Chrome app. Next in priority are what are
called perceptible or visible processes. These are not processes
the user's directly interacting with, but
perceptible in some way. So for instance, if you
have a search process, maybe it has a little
bit of UI on the screen. Or if the user is listening
to music in the background, then they can hear that music
through their headphones. They can perceive it. After the perceptible
apps, we have services. These are services started
by applications for things like syncing, or uploading and downloading from the cloud. And then we have the Home app. This is what you get when
you hit the Home button. It often hosts your wallpaper
if you have something there. So in addition to these
running processes, we also keep track of what
the previous application the user used was. So maybe they're using
this red app, your red app, and it brings them to
Chrome with a link. Then, when they
switch to Chrome, that app is going to
be the previous app. And we also keep in memory
a bunch of other processes which are cached applications
the user used before. Some of them may have been used recently, some not for a while.
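As an aside, if you're curious which of these buckets your own process is in at a given moment, ActivityManager will tell you its importance. A minimal sketch; note that on recent Android versions this call mainly reports your own app's processes:

```java
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;
import java.util.List;

public class ProcessPriority {
    // Logs the importance of each of our processes. The importance constants
    // roughly correspond to the foreground / visible / service / cached
    // buckets described above; lower values are less likely to be killed.
    public static void logImportance(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        List<ActivityManager.RunningAppProcessInfo> procs = am.getRunningAppProcesses();
        if (procs == null) {
            return;
        }
        for (ActivityManager.RunningAppProcessInfo proc : procs) {
            Log.i("ProcessPriority", proc.processName + " importance=" + proc.importance);
        }
    }
}
```

I want to point out here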
that these cached processes-- when I use the term cached,
this is a different use of the term cached than
the cached memory pages I was talking about previously. OK, so the reason we keep around
previous and cached processes is because if a
user wants to switch to one of these
applications, and say, they want to switch to
the previous application, it's very quick
to switch to that. I should say this is for a
device in a normal memory state. So if you want to switch to the previous application, that's very quick. But also, if you want to
switch to an application that happens to be in
a cached process, that's very quick to do
because it's already in memory. If we step back,
though, and say, well, what happens when the
device is low on memory? In that case, we could imagine
the memory used by the running applications is growing. The number of cached pages drops
below the low memory killer threshold. The low memory killer now has
to come in and kill something to free up some memory. Well, it's going
to start killing from the bottom of this list. So maybe it kills
this blue application. That's gone. We get some more memory
back for the applications that are still running. But if the user
now wants to switch and start using that
blue application, it's not cached any longer. It means it's going to take a
noticeably long time to launch that application. It could be two
or three seconds, and maybe it's lost some state. So this is where the user first
starts to really feel, oh, something's going on here
that's making things slower. If the processes
that are running continue to use more memory,
so we get under more memory pressure, low memory
killer is going to start to kill more
cached processes. If they continue to grow more and more, eventually there are only a few cached processes left. At this point, we say the device memory
status is critical. This is a very bad place to be. If the running processes
continue to use more memory, though, low memory
killer is going to have to kill more processes. Eventually, it's going to end up killing the Home app. At this point, the user's
going to ask, well, hey, what just happened
to my wallpaper? Because when they
go Home it's going to be a black screen
for a few seconds before the wallpaper
starts up again. If it's even worse, maybe a perceptible process is killed. The user is going to say, hey,
what just happened to my music? I was listening and
it just stopped. In a really bad case, the foreground app is killed, and to the user it just looks like the app crashed. And in the most extreme case, the low memory killer needs to kill the system server. This looks like your
phone has rebooted. So these are all
very visible impacts of what happens when a device
is running low on memory. And it's not a good
user experience when this is happening
on your device.
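There is something you can do about this as an app developer: the platform sends your process trim signals as memory pressure rises, and if you drop what you can rebuild, you make your process cheaper to keep cached and less tempting to kill. A minimal sketch, where the cache is just a stand-in for whatever your app can recreate on demand:

```java
import android.app.Application;
import android.content.ComponentCallbacks2;
import android.util.LruCache;

public class MyApplication extends Application {
    // Hypothetical in-memory cache of up to 64 entries that can be rebuilt.
    private final LruCache<String, byte[]> cache = new LruCache<>(64);

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        if (level >= ComponentCallbacks2.TRIM_MEMORY_BACKGROUND) {
            // We're on the cached list; drop everything we can so the
            // low memory killer has less reason to pick us.
            cache.evictAll();
        } else if (level >= ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW) {
            // Still running, but the device is under pressure; trim halfway.
            cache.trimToSize(cache.size() / 2);
        }
    }
}
```

I want to go back to this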
graph that I was showing before about what happens to the
memory pages on the device as you use more memory. This was a two gigabyte device. What do you think this graph looks like for a 512 megabyte device? I'm going to give you a few
seconds to think about that. Do you have an idea of what it looks like? So I tried it for a
512 megabyte device. Same thing, start the
Runtime, use more memory, and it looks
something like this. So because there's
so little memory available at the
beginning, there's very few free pages that we can
use up before kswapd has to kick in. And then there's
very few cached pages we can reclaim before
the low memory killer is needed to start killing things. And so you can imagine
if you have this device and the low memory
killer is always active, it's always killing processes,
and it leads to this bad user experience, then
maybe OEMs are not going to be too interested in
shipping this device because, well, it just doesn't work well. And that gets back to
the ecosystem challenges I mentioned in the beginning. So this is why we
care about memory. Now, how do we figure out how much memory an application is using? How do we know your application's memory impact? I told you that memory on device
is broken down into pages. The Linux kernel is
going to keep track, for each process running on the device, of which pages it's using. So maybe we have a system
process, Google Play services process, couple apps
running on a device. We want to know each of
their memory impacts. Well, just count up the number
of pages that it's using. It's a little bit
more complicated than this because of sharing,
because multiple processes on the device can
be sharing memory. So for instance, if
you have an app that's calling into Google
Play services, it's going to be sharing some
memory, perhaps code memory, or other kinds of memory,
with the Google Play services process. And then we can
ask, how should we account for this shared memory? Is that part of
the responsibility of the application? Is that memory impact something
that we should care about? And there's a few different
ways that you can approach this. One is to use what we call
resident set size, or RSS. And what this means is when we're counting an app's RSS, we're saying the application
is fully responsible for all the pages of memory
that it's sharing with other applications. Another approach is called
proportional set size, PSS. And in this case,
we're going to say the app is responsible
for those shared pages, proportional to
the number of processes that are sharing them. So in this case, two applications, or processes, are sharing these pages. The application, we'll say, is
responsible for half of them. If there were three processes
sharing the same memory, we would say the application
is responsible for a third of them, and so on. And then a third
approach you can take is called unique
set size, where we say the application is
not responsible for any of its shared pages. Now, in general, which
approach to take really depends on the context. So for instance, if those
shared pages were not being used in the
Google Play services app until your app called into
Google Play services, then maybe it makes sense to say
the app is responsible for all of those pages. We want to use RSS. On the other hand,
if those pages were sitting in memory in the
Google Play services process before the app called
into Google Play services, they were always
there, the app is not bringing them into memory, then
we wouldn't want to count them. USS would be more appropriate. In general, we don't have access
to this high level context to know, at least at the system
level, so the approach we take is the most straightforward
one, which is proportional set size with equal sharing. And one benefit of using PSS
for evaluating an application's memory impact, especially when
looking at multiple processes at the same time,
is it will avoid overcounting or undercounting
of shared pages.
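To make the difference between the three metrics concrete, here's a toy calculation with invented numbers:

```java
public class SetSizeExample {
    public static void main(String[] args) {
        int pageSizeKb = 4;    // typical page size, as mentioned earlier
        int uniquePages = 300; // pages mapped only by this process
        int sharedPages = 100; // pages shared with one other process
        int numSharers = 2;    // processes sharing those pages

        int rssKb = (uniquePages + sharedPages) * pageSizeKb;              // 1600 KB
        int pssKb = (uniquePages + sharedPages / numSharers) * pageSizeKb; // 1400 KB
        int ussKb = uniquePages * pageSizeKb;                              // 1200 KB

        System.out.println("RSS=" + rssKb + " PSS=" + pssKb + " USS=" + ussKb + " (KB)");
    }
}
```

So use PSS for your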
application's memory impact. And you can run this command,
adb shell dumpsys meminfo -s. Give it your process name, com.example.richard or whatever it is, or
you can give the process ID if you happen to know that. And it's going to
output something like this, an app summary view
of the application's memory. And at the very bottom,
there's a total. And that number is
the application's PSS. This is adb shell
dumpsys meminfo -s.
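If you'd rather read that number from code, say inside a test, the android.os.Debug API reports the same kind of summary for your own process. A minimal sketch; the class name and tag are just for illustration:

```java
import android.os.Debug;
import android.util.Log;

public class PssReader {
    // Reads the calling process's total PSS, in kilobytes. This should
    // correspond to the total that dumpsys meminfo -s prints at the bottom.
    public static void logPss() {
        Debug.MemoryInfo info = new Debug.MemoryInfo();
        Debug.getMemoryInfo(info); // fills in stats for the current process
        Log.i("PssReader", "totalPss=" + info.getTotalPss() + " KB");
    }
}
```

Now, let's say you do this. You figure out what the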
PSS of your application is. There's a very interesting
question to ask. How much memory should
your application be using? Because, as I said earlier, if we use a lot of memory, that's bad, because the low
memory killer kicks in. But we're actually using
memory for a reason. We're using it to
provide features, to provide user value, to
provide delightfulness. Everything that
makes our app great is going to be taking up memory. So we have this tradeoff
between user value and memory. That's what I'm showing here in
this graph, the tradeoff space. And in an ideal world, we're
kind of up and to the left on the graph, where we're
providing a lot of user value without very much
memory impact at all. But in practice, this
is probably going to be technically infeasible
because you need memory to provide value. And there's only
so much value you can provide with a
limited amount of memory. On the other hand,
the other extreme would be if you're
using a lot of memory to provide not much value. And I think it's safe to say this is not a great app, because it's using too much memory for the value it provides. Unfortunately, my slides
are not showing up right. But imagine a curve above which an app is using too much memory to be worth it to the user. Ah, there they go. Wonderful. Next, we can look at
this corner of the graph where we're not providing
too much user value. We're not using too much memory. We can say this is a
small application, maybe your desk clock app. And at the other
end, we can have apps that use a lot of memory
to provide a lot of value. These are large applications,
maybe a photo editor, or something like that. And we can say,
well, what's better? A small app or a large app? In this case, they
can both be useful, except that when I've said
that an application is using too much memory, that really
depends on what kind of device you're running on. If you're running
on a premium device, it can support much
larger applications. But on a smaller device,
an entry level device, maybe this large app
uses too much memory to make sense there. So really, I should be drawing a line for each device tier, because how much memory is too much depends on the device: premium, mid-tier, and entry level, and the entry level tier might not support that large app. For better or for worse,
what I see happening often is over time as you
develop your application, you tend to add more features. It tends to take more memory. So you tend to go up and
to the right in this graph. Now, this is actually good
for mid-tier and premium users because they're getting
more value, more bang for their buck,
memory-wise, but in this case, it's a little bit unfortunate
for the entry level device user, because while they could use the older version of your app, you've now added
so many features and it's using so much
memory that it just doesn't work as well on their device. So the points that
I want to say here, the takeaways,
anything you can do to improve your application's
memory efficiency is good. So if you can move to
the left on this graph, so less memory use without
sacrificing user value, that's great. And just be aware
that when you're adding new features, while
it can be good for mid-tier and premium device users, there
might be a negative consequence for these entry level devices. There's something
wrong with this graph. Does anyone know what it is? Well, let me see. The problem with this graph is that it's suggesting that an application's
memory use is one number. So you give me this application,
and I can tell you its PSS. But in practice, that's
far from the case, because an application's
memory impact depends on a whole bunch
of different things, such as the application
use case, the platform configuration, and
device memory pressure. And so this is important to be
aware of when you're testing your application's
memory, perhaps testing for regressions, or to see if
an optimization is working, to make sure that you're
testing the application use case you care about, and
you're controlling all of the other
parameters so that you're doing a proper apples
to apples comparison. Let me go into a
little bit more detail. So how does an application
use case impact memory? What I've done here is
I started using Gmail. And I've switched to
different use cases in the application over time. So every 20 seconds I switch. I started by viewing the
inbox, using just a little over 100 megabytes of PSS. Then I switched to looking at
an email that had some text, using a little bit more memory. I switched to looking
at a different email, this time with pictures. It uses more memory. Then I started to compose an
email, used a little bit less. I stopped using the app, and
then it used less memory. So you can see here that
depending on the application use case, memory impact
varies quite significantly. And it doesn't
necessarily make any sense to compare your application's
memory from point A to point B, because these
are different use cases. Application use case is a
pretty straightforward factor. Something that's less
obvious is that your memory will change a lot
depending on what your platform configuration is. So what I was
showing in this graph is I picked one of
those application use cases from the previous
slide, Gmail, looking at an email with pictures. And I've run it on a bunch
of different devices. So a Nexus 4, a Nexus 5X,
a Nexus 6P, a Pixel XL, and also on a number of different
platform versions, even within the same device. So for instance,
for the Nexus 5X, I ran it on Android
M, N, and O. And you can see that there's quite a
variation in how much memory this application use
case is taking up. This comes about because,
well, for different devices we have different screen
resolutions, different screen sizes, which means bitmaps take
up different amounts of memory. You might have different
platform optimizations on the different devices. You might have a different
zygote configuration, different runtime
configuration that's running your code differently. And so there's a lot of
different factors going on here, which, when you
switch to a different platform configuration, you're going
to get different memory use. So I would say, when you're
testing your application's memory use, try
as hard as you can to use a consistent platform setup: the same kind of device, the same platform version,
and the same scenario of what's running on device. And now there's a third
case I want to talk about, which is pretty interesting because it's a little bit counter-intuitive: an application's memory impact depends on the memory pressure on the device. So here, what I've done is I've taken the Chrome application and started running
it on a device that had plenty of free memory. And then I set up
some native process in the background
that's going to slowly use up more and more
memory on the device so that I can see what happens
to Chrome when the device gets under medium memory pressure
or high memory pressure. And we can see,
when there's plenty of free memory on the device,
so low memory pressure, Chrome's PSS is pretty flat except
for that little spike, which is probably some
variation in app use case [INAUDIBLE] the platform. When the device gets under
enough memory pressure that kswapd kicks in and starts
to reclaim cached pages, well, some of those
pages that it reclaims are going to be from
the Chrome process. And that's going to
cause Chrome's memory impact to go down. Its PSS is going to go
down until eventually, if the device has so
much memory pressure, the low memory killer
is active and it decides it wants to kill Chrome,
then PSS for Chrome is going to very
quickly drop to zero. So what you can see here is that
even for the same application use case, the same
platform configuration, we have a wide range of
PSS values we might get. And so you have to be
a little bit careful. Imagine I've come up with this
optimized version of the Chrome APK. And it has this kind of lighter
blue line for the memory profile. I'm confident that this
is an optimized version of the APK from a
memory standpoint because for every level
of device memory pressure, it uses less memory. But if I'm doing a
test and I sample the PSS of the original
Chrome version at point A, but I sample the PSS of the
supposedly optimized Chrome version at point B and I compare, and I say, oh, well, A is less
than B, so A has less memory, I might falsely conclude that
the original version of Chrome is better than my
optimized version. So you really have to be careful
when comparing PSS values to make sure that the device
memory pressure is the same. Otherwise, you can get
these funny results. My advice, because
it's pretty hard to control for device
memory pressure, is to run your tests on a device
that has plenty of free RAM, so that there's low device memory pressure, and, as you can see, the PSS
numbers will be much more stable in that area.
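In that kind of setup, one way to make comparisons more trustworthy is to sample PSS several times and compare medians rather than single readings. A sketch of what I mean; the sample count and sleep interval are arbitrary choices:

```java
import android.os.Debug;
import java.util.Arrays;

public class PssSampler {
    // Samples this process's PSS a number of times and returns the median,
    // which is less sensitive to one-off spikes than a single reading.
    public static long medianPssKb(int samples, long sleepMillis)
            throws InterruptedException {
        long[] readings = new long[samples];
        for (int i = 0; i < samples; i++) {
            readings[i] = Debug.getPss(); // PSS of the calling process, in KB
            Thread.sleep(sleepMillis);
        }
        Arrays.sort(readings);
        return readings[samples / 2];
    }
}
```

So we talked about why you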
want your applications not to take up too much
memory, how you can evaluate your
application's memory impact. Let me now give you
some tips for how to reduce your
application's memory impact. And the first tip is, check
out Android Studio's memory profiler. Profile your
application's Java heap. This is going to give you
a ton of useful information about the Java
objects on your heap. So where they're allocated,
what's holding onto them, how big they are,
pretty much anything you want to know about the Java
heap, you can see from this.
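You can also capture the same kind of heap dump from code, which is handy for grabbing a snapshot at exactly the moment you care about. A minimal sketch; the file name is just an example:

```java
import android.content.Context;
import android.os.Debug;
import java.io.File;
import java.io.IOException;

public class HeapDumper {
    // Writes an HPROF dump of this process's Java heap to app-private
    // storage; it can then be pulled off the device and opened in
    // Android Studio's profiler.
    public static File dumpHeap(Context context) throws IOException {
        File out = new File(context.getFilesDir(), "dump.hprof");
        Debug.dumpHprofData(out.getAbsolutePath());
        return out;
    }
}
```

My tip for you is to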
focus on the app heap. So if you open this
up in Android Studio, you'll see three heaps. One is the zygote heap, one the image heap, and one the app heap. The image and the
zygote heap are inherited from the system
when your application first launches. So there's not much
you can do about that. But definitely, you can
do a lot on the app heap. I'm not going to go
into a ton of detail on how you would use this-- or actually not very much
at all, because Esteban is going to be giving a
talk tomorrow at 12:30 on exactly how to use this tool. His team built the tool. He's going to be talking about
how to do live allocation tracking and heap analysis. So I highly recommend you go
check out that talk tomorrow at 12:30. So you say, Richard,
you've told us that we should care about PSS. That's our application's
memory impact. You just told us we should use
Android Studio's memory profiler to profile
the Java heap. But if we look here, we
see, well, the Java heap's not actually all that
much of the overall memory impact of the application. What about all the
rest of this memory? What should we do here? This is tricky because
most of these applications, or allocations, sorry,
are originating deep within the platform
stack, the Android stack. So if you want to
know about them and really understand
them, it helps to know a lot more about how the framework implements the view system and resources, how native libraries like fonts, SQLite, and WebView work, how the Android Runtime runs your code, how graphics works at the hardware abstraction layer, all the way down to virtual memory management in the Linux kernel. By the way, I live in the
orange block in the middle, the Android Runtime. That's where I am in the stack. So you might ask,
OK, so this memory is coming from the platform,
or within the platform. Should we be using platform
tools to diagnose this memory? For instance, if dumpsys meminfo -s, the summary view, isn't enough, you could try running
dumpsys meminfo with -a, to show, basically,
everything you can see from a
platform perspective about your application's
memory use. This will give you a much
more detailed breakdown. For instance, instead of seeing
your code memory regress, you can see: is it because my .so memory mappings have regressed, or my .apk or [INAUDIBLE] memory mappings have regressed? It will also show
you a breakdown of the different
categories of memory, so private, shared, clean, dirty, and so on. Private dirty memory is like the used memory I was talking about at the beginning. Private clean memory, as the 'clean' suggests, is like the cached memory that also lives on device storage. So you could use
dumpsys meminfo. If that's not enough
detail, maybe you see, OK, my .apk mmap regressed. There's this tool called showmap you can run on your application, and it's going to give you an even
more fine grained breakdown of your memory mappings,
and it will actually give you specific files
that are being memory mapped in your application. And this can help
pinpoint what files might have led to regressions. In the platform, we have an experimental heap dump viewer that I've developed, called [INAUDIBLE], that tries to surface more
platform specific things. You could try using that to
learn more about your Java heap, though Android
Studio's memory profiler will have all the same information. And then we also have on
the platform something called malloc debug. This is where you can instrument your application so that every native allocation it makes is going to save a stack trace for that allocation. You take what we call a native heap snapshot of your app when it's running instrumented, and if you have the symbols, you can symbolize the stack traces, and you can get native
stack traces for all of your native allocations. This has quite a bit of overhead at runtime, and so it can be a little bit tricky to work with, but it provides a lot of insight into the native heap.
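A cheaper first step, before reaching for malloc debug, is to watch the coarse native heap counters the runtime already exposes; they won't tell you where allocations came from, but they will show growth. A minimal sketch:

```java
import android.os.Debug;
import android.util.Log;

public class NativeHeapStats {
    // Logs coarse statistics for this process's native (malloc) heap.
    public static void logNativeHeap() {
        long sizeKb = Debug.getNativeHeapSize() / 1024;               // total heap
        long allocatedKb = Debug.getNativeHeapAllocatedSize() / 1024; // in use
        long freeKb = Debug.getNativeHeapFreeSize() / 1024;           // unused
        Log.i("NativeHeapStats",
                "size=" + sizeKb + "KB allocated=" + allocatedKb + "KB free=" + freeKb + "KB");
    }
}
```

So we have these platform tools. Should we use them? Can we use them? Well, certainly you could. They're all available. But some caveats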
with these tools. They tend not to
be well supported. They have very clumsy
user interfaces, as you just witnessed
from my snapshots. This approach requires quite a bit of deep platform expertise to understand, for instance, what's the difference between a .dex mmap, a .vdex mmap, and an [INAUDIBLE] mmap. Where are these things
coming from, for instance? You might need to have a rooted device, such as in the case for showmap and malloc debug. You might have to build the platform yourself if you want to get your hands on [INAUDIBLE] or the symbols that malloc debug needs to symbolize. The numbers tend to be pretty
noisy, because you are looking at memory at a page level. And a lot of the memory you'll see from these tools is kind of outside of
your control anyway. So you might see
zygote allocations, Runtime allocations, that
aren't related to your code. So I don't think that this
is the best use of your time, to try and use these tools. Though by all means, go
ahead and try them out. I'm going to give a bit
of a different suggestion, which is if you want to improve
your overall memory use, do two things. One, profile your Java heap
using Android Studio's memory profiler like I showed before. And two, reduce your APK size. And let me tell you why. I think this is a
reasonable approach for you to take to reduce your
overall memory impact. First is that allocations that
are outside of the Java heap, many of those are tied
to Java allocations. So your application is calling
into the Android frameworks, which is calling
into native libraries under the covers, which are doing
native allocations or even graphics allocations,
whose lifetime is tied to Java objects. For instance, just to
give you a sampling: on your Java heap, if you see these kinds of objects, so
a SQLite database, web views, patterns, those all
have native allocations associated with them. If you see a DexFile object,
that's going to have .dex mmap, .vdex mmap, [INAUDIBLE] mmap
associated with it. If you have thread
instances on your Java heap, that's going to be
associated with stack memory. And if you're using bitmaps
or sometimes surface views, or texture views, that can
lead to graphics memory use. And there are many others.
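Because those native allocations live and die with their Java objects, releasing the Java object promptly releases the native memory too. For instance, here's a hypothetical helper that closes its cursor and database deterministically instead of waiting for the garbage collector:

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public class RowCounter {
    // Counts the rows in a table. The try-with-resources blocks guarantee
    // the cursor and database are closed promptly, releasing the native
    // allocations backing them.
    public static long countRows(String dbPath, String table) {
        try (SQLiteDatabase db = SQLiteDatabase.openDatabase(
                dbPath, null, SQLiteDatabase.OPEN_READONLY)) {
            try (Cursor cursor = db.rawQuery("SELECT COUNT(*) FROM " + table, null)) {
                cursor.moveToFirst();
                return cursor.getLong(0);
            }
        }
    }
}
```

So if you're focusing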
on your Java heap and you're worried it's not going to help anywhere else, that's not true. Optimizations on
your Java heap are going to help with other
memory categories as well. I am trying, as part of my job, to better surface this information about these non-Java heap allocations. And you can start to see that. If you look at Android Studio's memory profiler, it'll report a
number called native. And I just want to
let you know this is an approximation or
a suggestion of some of the non-Java memory that
might be associated with a Java object. Take it with a little
bit of a grain of salt, but it works really well
for surfacing the memory impacts of, say, bitmaps. My second suggestion was
reduce your APK size. And why would you do this? Because a lot of things that
take up space in your APK take up space in memory
at runtime as well. For instance, your
classes.dex file is going to take up
space on the Java heap in terms of class objects. It's going to take up code memory for the memory-mapped DexFile. It's also going to take up runtime metadata, which shows up in Private Other in the app summary view: representations for your
fields, methods, and strings, and so on. If you have bitmaps
in your APK, when those are loaded at
runtime, the pixel data is going to take up space
depending on the platform version or how
you've loaded them, either in the Java heap, the
native heap, or as graphics.
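Bitmaps are often the single biggest win here. If an image will be displayed smaller than its stored size, decoding it downsampled cuts the pixel memory by the square of the factor. A minimal sketch:

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class BitmapLoader {
    // Decodes a bitmap resource at 1/4 of its width and height, so the
    // pixel data takes roughly 1/16 the memory of a full-size decode.
    public static Bitmap decodeQuarterSize(Resources res, int resId) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inSampleSize = 4; // the decoder rounds this down to a power of two
        return BitmapFactory.decodeResource(res, resId, options);
    }
}
```

Resources in your APK take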
up space on the Java heap, so you have an asset
manager object. Also, on the native heap,
you have a parsed zip file structure that shows up there. And you're going to have code
memory for your APK on top of that. And .so files, if you're shipping JNI native libraries
with your application, when you're accessing
those libraries at runtime, it's going to take up space. So all of these things,
if you can shrink them, you reduce your APK size,
you reduce your memory size. And I will tell you
that measuring APK size reliably is much easier than
memory, because for an APK you actually do have
one number for the size. If you measure the APK size
for a single APK repeatedly, you will get the same result,
very much unlike memory. There was a talk at
Google I/O last year called Best Practices to
Slim Down Your App Size. I recommend you check that out. That'll give you more
advice, more concrete action items you can take to
shrink these things. Let me do a quick recap of
why we care about memory, and what I suggest you do to improve your application's memory use. So first, I talked about how, as
we use more memory on device, the low memory killer is
going to eventually kick in. It's going to kill processes. If the user cares about
these processes, that's bad. If the device has the low memory killer running too much, then OEMs won't want
to produce the entry level device. Then we lose those devices. We lose those users. To evaluate your application's
memory impact, use PSS. Anything you can do to improve
your memory efficiency is good. When you're testing for memory
regressions or optimizations, make sure you're targeting
the application use case you care about and controlling for the platform configuration. Test on a device that
has plenty of free RAM to help control for
device memory pressure. And to reduce your
application's memory use, do try out Android
Studio's memory profiler, focus on the app heap,
go to the session that Esteban's giving tomorrow
at 12:30 to learn more about how to do that. Do what you can to
reduce your APK size, and check out the talk from last
year on how you can do that. So thank you all for coming. I would love to chat with
you more and hear more about the memory
challenges you're facing. You'll find me hanging out for a little while outside the stage after this, and you can also find me at the Android Runtime office hours, which are at 5:30, just a couple of hours after this talk. Thank you very much.