[MUSIC PLAYING] SUMIR KATARIA:
Thanks for coming. Welcome to the Working with WorkManager session. My name is Sumir Kataria. I'm a software
engineer at Google. RAHUL RAVIKUMAR: Hi. My name is Rahul
Ravikumar, and I'm also a software engineer at Google. SUMIR KATARIA: And we both
work on Android architecture components, particularly
WorkManager. Let me grab my-- All right. So what are we going
to talk about today? We're going to give you a State
of the Union about WorkManager. It was released at
Google I/O. I want to give you an abbreviated
guide to WorkManager. So for those of you who haven't
used WorkManager before, we want to go over
some of the basic APIs. And we're going to spend most
of the time talking about WorkManager, the
questions you've asked, the things that have been
frequently misunderstood, and also, the new changes
that we've made since I/O. So let's start with
the State of the Union. There have been 11 releases of
WorkManager since Google I/O, and these are alpha releases. Today was the 11th one. And beta's coming soon. So those of you who watched
the keynote yesterday may have heard that
beta's coming this month. This was news to us, too. But it's coming this month. [LAUGHTER] We were very close, but I guess
we're doing it this month now. [LAUGHTER] So we want to give you
an abbreviated guide to WorkManager. And what is it, for
those of you who are just completely new to this? It's a library for managing
deferrable background work. It wraps JobScheduler, AlarmManager, and BroadcastReceivers. It's backwards
compatible to API 14. And those of you who've
used JobScheduler will find many of the
concepts very familiar. So let's talk about work. You've got a unit
of background work-- and we'll talk a little bit more
about how we create it and how we enqueue it later-- but you've got it, and how does
WorkManager execute this work? This is a graph
that might help you. While your process
is up and running, we'll feed it to
this executor that we have that does the work. It could be a thread pool. You can customize this thing. But your process may be killed. And WorkManager is
guaranteed to do work. It may defer it, but
it will do that work when the constraints
for that work are met. So we also enqueue
it in the background. So if you're API 23+,
we use JobScheduler. And before that, we
use an AlarmManager and BroadcastReceiver
implementation. So whenever the
signals are met-- let's say that you've put in
some work for a two-hour time delay when you have network-- When all of those
conditions are met, it'll still go to
the same executor. All right. RAHUL RAVIKUMAR: Let's do a
quick API walkthrough for those who have not seen the API. So the fundamental unit of work
in WorkManager is a Worker. Here I'm defining a
calculation Worker. It extends the Worker type. So now, the only
thing that you need to do when you define a new
Worker is to override doWork. And doWork returns a result.
Here I'm returning a success. You can also return
a retry or a fail. I'm doing some
expensive calculation on background threads. So you don't have to worry
about threading here, because WorkManager
is guaranteed to schedule your work
on a background thread. And here I am returning a result of success synchronously. So that's it. I'm pretty much good to go.
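[Editor's note: a minimal sketch of what such a Worker might look like in Kotlin. The class name CalculationWorker and the calculation itself are illustrative, and this follows the API shape of the later stable releases, so exact names may differ slightly in the alphas.]

```kotlin
import android.content.Context
import android.util.Log
import androidx.work.Worker
import androidx.work.WorkerParameters

class CalculationWorker(
    context: Context,
    params: WorkerParameters
) : Worker(context, params) {

    override fun doWork(): Result {
        // WorkManager calls doWork() on a background thread, so blocking,
        // expensive work is fine here.
        val answer = (1L..1_000_000L).sum()
        Log.d("CalculationWorker", "answer = $answer")

        // Result.retry() and Result.failure() are the other options.
        return Result.success()
    }
}
```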
So now, you've defined a Worker. How do you actually make that work run? So for that you need to
enqueue a work request. So there are two types of work
requests-- one is a OneTimeWorkRequest, and the other one is a PeriodicWorkRequest. Here I am using a OneTime
work request builder. And I'm building it with
the calculation Worker that I just defined
in the previous slide. So I'm setting an initial delay. And this is a
timing-based constraint. And this tells
WorkManager to only run the work after two
hours have passed since the point of enqueue. And I'm also setting
another constraint, which is the charging constraint. This tells WorkManager
that this work is only eligible to run when the device
is actually connected to-- when the device is
actually charging. I add a tag. I'll talk about
tags in more detail. And I'm finally
calling .build(). Now, all I need to do is to call WorkManager.getInstance() and enqueue the work. And that's it. Now the work is scheduled.
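[Editor's note: as a sketch, the request described here might be built and enqueued like this. The tag string is illustrative, and CalculationWorker is the hypothetical Worker from the earlier sketch.]

```kotlin
import androidx.work.Constraints
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import java.util.concurrent.TimeUnit

val request = OneTimeWorkRequestBuilder<CalculationWorker>()
    // Timing-based constraint: eligible two hours after enqueue.
    .setInitialDelay(2, TimeUnit.HOURS)
    // Only eligible while the device is charging.
    .setConstraints(
        Constraints.Builder()
            .setRequiresCharging(true)
            .build()
    )
    .addTag("calculation")
    .build()

// Enqueue it; WorkManager takes over from here.
WorkManager.getInstance().enqueue(request)
```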
So now you've scheduled your work. You've defined a
bunch of constraints. And it's hard to keep track. And if you want to keep track
of what state your work is in, then you want to call
getWorkInfoByIdLiveData. And this returns a LiveData of WorkInfo. And WorkInfo is the type that describes the state of your work. And LiveData here is a lifecycle-aware observable. So once you attach it to a LifecycleOwner and you define an
observer, you can observe the state transitions
of the work request. So it'll go something like-- once you enqueue the work, it'll go into ENQUEUED. Once the constraints are met, it'll go into RUNNING. And then finally, because you returned success, it's going to go into the SUCCEEDED state.
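[Editor's note: a sketch of what that observation might look like from inside an Activity or Fragment (so this is a LifecycleOwner), using the request built in the earlier sketch. Observer here is androidx.lifecycle.Observer.]

```kotlin
WorkManager.getInstance()
    .getWorkInfoByIdLiveData(request.id)
    .observe(this, Observer { workInfo ->
        if (workInfo != null) {
            // Typically ENQUEUED -> RUNNING -> SUCCEEDED for this example.
            Log.d("WorkStatus", "state = ${workInfo.state}")
        }
    })
```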
So remember that tag we added when we built the OneTimeWorkRequest? You can also get work infos by tag. A tag is something that you associate with a work request, and the relationship can be one-to-many, so we can associate the same tag with multiple work requests. And here I'm calling getWorkInfosByTagLiveData, which returns a LiveData again. Notice how it returns
a list of WorkInfo and not a single WorkInfo. And again, I can
do the same things that I did in the
previous slide. So one of the coolest
features of WorkManager is the ability to chain work. So that helps you define
a directed acyclic graph of work. And here I'm asking WorkManager
to begin with a, b, and c. And here, a, b, and
c are work requests. And I'm saying that
d and e are only eligible to run once all
a, b, and c are done. And when I used
to begin with API, I'm asking WorkManager to
run a, b, and c potentially in parallel. Whether they actually
run in parallel is determined by the
capabilities of the device and the size of
your thread pool. And finally, I'm calling
[INAUDIBLE] on f, g, and h again. Now, f, g, and h will only run
once all the preceding works are done. Finally, don't forget
to call enqueue. That's when all
the magic happens.
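[Editor's note: expressed as code, the chain just described might look like this sketch, where a through h are assumed to be OneTimeWorkRequests built elsewhere.]

```kotlin
WorkManager.getInstance()
    .beginWith(listOf(a, b, c))   // a, b, and c may run in parallel
    .then(listOf(d, e))           // d and e run only after all of a, b, and c are done
    .then(listOf(f, g, h))        // f, g, and h run last
    .enqueue()                    // nothing runs until enqueue() is called
```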
So now you've called enqueue. Now, you might be wondering what beginWith returns. So beginWith actually returns an instance of WorkContinuation. And a WorkContinuation is a
node in your [INAUDIBLE] graph. And this lends itself to
a very fluent API. So every time you
call beginWith, it returns a new continuation. And every time you call .then(), it returns a new instance
of another continuation. Finally, don't forget
to call enqueue. The one important thing
that you need to remember is, when you chain work, the
outputs of the parent work request become inputs
to your descendant work requests, or your children. This helps you manage state and pass state from parent work to the descendant work. Now, finally,
WorkManager also exposes the APIs to cancel work. So for some reason if
you want to do that, you can cancel work by ID. And every work request
has a unique ID. So here we are canceling
work by that ID. And we can also cancel
all work by tag. So those are the two APIs. And that's it.
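[Editor's note: as a sketch, using the request and the tag from the earlier examples.]

```kotlin
WorkManager.getInstance().cancelWorkById(request.id)         // cancel one request by its unique ID
WorkManager.getInstance().cancelAllWorkByTag("calculation")  // cancel everything carrying a tag
```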
SUMIR KATARIA: And I want to point out that all the APIs we've been showing are for alpha 11, so you may notice
some slight changes, for those of you who have
been using this API before. But everything is live today. So let's talk a little bit about
how you can get the most out of WorkManager. Manager And these also include
questions like, how do I do a certain type of task? Why does this work
a certain way? And the biggest one that we get
a question about is threading. How does threading
work in WorkManager? So we talked about
a work request. Rahul just mentioned that. And you enqueue it. What happens to it then? So we have an internal
task executor. You can think of it as a
single-threaded executor. And the enqueue
goes to that, which stores it in a local database. So every app that
uses WorkManager has a WorkManager database. This is the source of truth. This is where we keep track
of the state of your work-- your inputs and
outputs, everything with the dependency chains. Everything goes
in this database. So after it's been enqueued,
sometime later your constraints are met. And the OS tells you that. If you have no constraints,
it's eligible to run right away. If there are constraints,
the OS will tell you, oh. You have network. You are charging right now. Whatever it is. Same task executor--
uses a WorkerFactory to create a Worker. WorkerFactory is
exactly what it is. It's a factory for Workers. And you can make your own. You can customize and
do things with that. We'll talk more
about that later. After the Worker's been created,
we execute it on an executor. And this is also a thing that
you can actually customize. We'll talk more about
that later as well. But what if you don't
want to execute something on that executor? We give you a default one. You can specify
your own, but what if you're using RxJava? What if you're using coroutines? What if you have your
own bespoke solution that you want to use to run
things in the background? This was a request that
came up quite a lot when we first
released WorkManager. So to do this, we
want to provide you an API to let you
do work on your own and just tell us when it's done. So you want to signal async
work completion to us. And for that, we use a class
called ListenableFuture. Those of you who use Guava will
be very familiar with this. But we split it out-- And the Guava team has
helped us with this, so everybody's
collaboratively worked on splitting this out to its
own lightweight artifacts. So you don't need a full
Guava dependency for this. It's literally one
or two classes now. And it's a future that can
have one or more listeners, and those listeners
can be invoked on a specified executor. That's all it is. It's very simple. So using this
ListenableFuture, we made a class called
ListenableWorker, which only has one method
that you need to override. It's called startWork. We'll call startWork on
the main thread for you. You give us back a
ListenableFuture. You do whatever work you want
on whatever thread you want, and when you're done, just
set the result on the future. And we'll be able
to listen for it, and we'll be able
to react to it. So the actual threading
model in WorkManager is, after your
constraints are met, it goes to the task
executor, which uses the WorkerFactory to
create a ListenableWorker. We call startWork on it and
we also attach a listener. So we can listen to
whenever you're done. What this means
is that the Worker class, which is still around,
is a simple ListenableWorker. So we've got the doWork
method that we talked about. We override the
startWork for you. We create a future. On that background
executor that I talked about-- the one
we provide by default-- we execute the work. And we, of course,
return the future. So now, we have two classes-- Worker and ListenableWorker. What's the differences
between them? Workers, we consider
those a simple class. For most use cases, we
think that's sufficient, is a class that
runs synchronously and on a pre-specified
background thread. ListenableWorker
runs asynchronously on an unspecified
background thread. So Worker dot doWork
is a synchronous API. You're expected to finish
what you're doing there. If you're trying to
create a ListenableWorker, you may need to return a
ListenableFuture, which is an interface. If you have access to
Guava, you have access to many kinds of
ListenableFutures. If you don't have
access to Guava, or don't want to
add the dependency, ResolvableFuture is a
lightweight implementation that we provide in the androidx.concurrent:concurrent-futures artifact. So you can use that. So let's look at an example. One of the things
that a lot of people are trying and doing
incorrectly with Workers is that they're trying
to get locations. And we're going to specifically
use the FusedLocationProviderClient. So if you listened to the Kotlin Suspenders talk yesterday, they also used this. It's a GMS Core API
to asynchronously get your location. Remember that a Worker
class is synchronous. So if you attach a callback to get informed later, but return success right away, your work is already considered completed. It's not going to work
the way you think it is. So first thing we do here-- we
are using a ListenableWorker-- is we create a ResolvableFuture. So this is the future
that we'll return and do all our bookkeeping on. In the startWork
method we'll check to see if we have permissions. If we don't, we'll set
a failure on the future. Otherwise, what
we'll do is, we'll get that Fused Location
provider's last location. This is kind of like
a future in the GMS Core-- or Google Play services-- world. We'll call this getLocation
method, which I'll go into. And then we return the future. So that's the
high-level startWork. In the getLocation method,
we'll use that task and we'll add a listener to it. If the task is successful,
we'll pass that location back with a success status. Otherwise, we'll set an
exception on the future. So that's it. We've basically
addressed the three cases where we want to
have a successful or an unsuccessful task, or
if we don't have permissions. And WorkManager will
attach that listener. It'll listen to the success
or failure of your task and do the bookkeeping
as necessary. All right.
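[Editor's note: a rough sketch of the pattern just walked through, using ResolvableFuture with the FusedLocationProviderClient. The class name, the getLocation helper, and the permission check are illustrative, and the signatures follow the stable API shape rather than the exact alpha ones.]

```kotlin
import android.Manifest
import android.annotation.SuppressLint
import android.content.Context
import android.content.pm.PackageManager
import android.location.Location
import androidx.concurrent.futures.ResolvableFuture
import androidx.core.content.ContextCompat
import androidx.work.ListenableWorker
import androidx.work.WorkerParameters
import com.google.android.gms.location.LocationServices
import com.google.android.gms.tasks.Task
import com.google.common.util.concurrent.ListenableFuture

class LocationWorker(
    context: Context,
    params: WorkerParameters
) : ListenableWorker(context, params) {

    // The future we return from startWork() and do all the bookkeeping on.
    private val future = ResolvableFuture.create<Result>()

    @SuppressLint("MissingPermission")
    override fun startWork(): ListenableFuture<Result> {
        if (ContextCompat.checkSelfPermission(
                applicationContext, Manifest.permission.ACCESS_FINE_LOCATION
            ) != PackageManager.PERMISSION_GRANTED
        ) {
            // No permission: fail the work by setting a failure on the future.
            future.set(Result.failure())
            return future
        }
        val client = LocationServices.getFusedLocationProviderClient(applicationContext)
        getLocation(client.lastLocation)
        return future
    }

    private fun getLocation(task: Task<Location>) {
        task.addOnCompleteListener { completed ->
            if (completed.isSuccessful) {
                // WorkManager is listening on the future, so this completes the work.
                future.set(Result.success())
            } else {
                future.setException(completed.exception ?: RuntimeException("No location"))
            }
        }
    }
}
```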
RAHUL RAVIKUMAR: Let's talk about operations. Now, remember, Sumir
mentioned that WorkManager uses the database as
a source of truth. So any time you enqueue
or you cancel work, we have to do some
bookkeeping, and we have to make sure we keep track
of what your intentions were. So these involve
writes to a database, and because there are writes to a database, we have to do them on
a background thread. So as a result,
they're asynchronous. What if you actually
wanted to do something after the enqueue happened,
or the cancel happened? So you want to make sure that
those operations completed before you want to
do some more stuff. So for that, you've
introduced a new API. So now, enqueue and cancel-- actually return a new
type called operation. Operation is a very
simple interface. It has two methods-- so it has a getState API,
which returns a live data of an operation dot state. If you attach an observer
to this live data, you will see that the operation
transitions from an in progress to a successful or a failure. You can also call
getResult, and this returns the familiar
ListenableFuture type. And remember that this API
will only return the terminal state of the operation. It won't give you the
intermediate state. So if you're
attaching a listener, you'll only ever get a
success or a failure, with an exception telling you why the failure happened.
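[Editor's note: a sketch of both Operation APIs, observed from inside an Activity. The request is the one from the earlier examples, and Observer is androidx.lifecycle.Observer.]

```kotlin
val operation = WorkManager.getInstance().enqueue(request)

// getState(): a LiveData that moves from IN_PROGRESS to SUCCESS (or FAILURE).
operation.state.observe(this, Observer { state ->
    Log.d("Operation", "state = $state")
})

// getResult(): a ListenableFuture that only delivers the terminal state.
operation.result.addListener(
    { Log.d("Operation", "enqueue finished") },
    ContextCompat.getMainExecutor(this)
)
```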
SUMIR KATARIA: Another question a lot of people have is, when is work stopped? What happens when you stop
work on behalf of WorkManager? All right. So there's three cases
when work is stopped. The first is very simple-- your
constraints are no longer met. So you said, for
example, that I need a network to do
this upload task, but your network got lost. So we both stop your
work at that point. A second case is
that they always preempted your work for
some reason-- for example, you exceeded the
10-minute time limit that the OS gives
you to do your work. Or the device decided to go into
doze mode for battery savings, or something like that. And the third reason is that
you just canceled your work somewhere else in your app. How do we stop work? There is a method
ListenableWorker.onStopped. We call this method
when we stop. So just override this. Then you get your stop
signal right there. We also cancel [INAUDIBLE]
future that we talked about. So you can also just add your
own listener and look for that. So this is when one
of these two things happens for you, whichever
one you're looking at. This is your signal to be a
good citizen and clean up. Because after this is
called, the process may be killed by the OS. So if the OS woke up your app's
process just to run this work, it could actually kill
it when it decides that the work should stop. And if you happen to return
something after this signal-- say you return a success--
we ignore it, because as far as we're concerned, your
work has been stopped. Whatever you are doing is no
longer necessary to be done. You can also poll for
stoppages in your Worker. So you can also call
the isStopped method. And that will tell
you whether you've been signaled for stopping. So let's look at how you can
be a good citizen and clean up. So let's say in
this example we're parsing a file asynchronously. Let's say it's a huge file, so
you're doing it in a Worker. And you've got that familiar
ResolvableFuture thing that I showed you earlier. You've also got
this input stream, so you're reading
the file, right? So in startWork
you say, parseFile. This is doing something
asynchronously, and you return to future. Here's parseFile. So you've got some executor
or whatever-- a coroutine, it doesn't matter-- You're asynchronously doing
that runnable that follows. So the first thing
you might be doing is, you're opening the file. You're reading each
byte out of the file, doing something with that byte-- and then when you're
done, you set a success because you're done. And then you have the necessary
try catch, finally, after that, so that you can clean
up after yourself. So how do we handle when
your work gets stopped while this is executing? So like I said, we override
the onStopped method. Let's say that we want to
just finish what we're doing. We could easily just close
the input stream. OK. So what happens now that
you've done that? Let's go back to the code. Well, if you're in the
middle of that read loop and you close a file, it
throws an exception, right? Next time you try to read
something-- well, fortunately, you're already handling
that exception right there. So it looks like
you're good, right? You're done. Or are you? There's one more
case that you forgot about here, which is that,
what if the onStopped happens before you even open the file? So now you're no longer
looking for that stop signal, because you never got it. You'll open and read that file, because it opened after you tried to close it-- so the close didn't do anything. So you'll do all of this work anyway. And how you fix it is,
use that isStopped method. So basically, while
you're in that loop, you can always just
make sure that you're looking for stoppages. So this is a good
example of how you would honor the OS's signal to
you, or WorkManager's signal to you, that you should
stop and be a good citizen. All right.
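[Editor's note: a sketch of the file-parsing Worker just described, with the onStopped() cleanup and the isStopped check inside the read loop. The file name and the plain-thread threading are illustrative, and the API shape follows the stable release.]

```kotlin
import android.content.Context
import androidx.concurrent.futures.ResolvableFuture
import androidx.work.ListenableWorker
import androidx.work.WorkerParameters
import com.google.common.util.concurrent.ListenableFuture
import java.io.File
import java.io.FileInputStream
import java.io.IOException
import java.io.InputStream

class FileParsingWorker(
    context: Context,
    params: WorkerParameters
) : ListenableWorker(context, params) {

    private val future = ResolvableFuture.create<Result>()
    @Volatile private var input: InputStream? = null

    override fun startWork(): ListenableFuture<Result> {
        parseFile()
        return future
    }

    private fun parseFile() {
        // Any executor or coroutine works; a plain thread keeps the sketch small.
        Thread {
            try {
                val stream = FileInputStream(File(applicationContext.filesDir, "huge-file"))
                input = stream
                var byteRead = stream.read()
                // Checking isStopped here also covers the case where onStopped()
                // ran before the file was even opened.
                while (byteRead != -1 && !isStopped) {
                    // ... do something with the byte ...
                    byteRead = stream.read()
                }
                future.set(Result.success())
            } catch (e: IOException) {
                // A stream closed by onStopped() lands here; results returned
                // after a stop are ignored anyway.
                future.setException(e)
            } finally {
                input?.close()
            }
        }.start()
    }

    override fun onStopped() {
        // Good-citizen cleanup: release the stream as soon as we're told to stop.
        input?.close()
    }
}
```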
RAHUL RAVIKUMAR: So every time you enqueue a work request, the work request goes through
several state transitions. And I'll talk about
how they look. So let's look at the life
of a OneTime work request. So when you enqueue
a new work request, it can end up in one of two
states-- it can be blocked, if it's blocked on
another work request, given it's part of a chain of work-- or it can be enqueued. Once the constraints are
met, it goes into running. And this is the point
at which the Worker's being actively executed. Depending on the result
that you return on how you signal work
completion, we'll take it to one of
the terminal states. So if the Worker
returns success, then we'll terminate it
with a succeeded state. If the Worker returns
a failure, then we'll mark it as a failure. And that also is
a terminal state. At any point in time while the
Worker was in a non-terminating state if you call cancel, then-- sorry. If you actually
retry, then we'll apply the back-off policy,
and then we'll retry. So the Worker will go back
to the enqueued state. At any point in time if you
have a non-terminating state and you call cancel,
it'll end up in canceled. So this is what a life
of a OneTime work request looks like. Let's look at PeriodicWork. PeriodicWork is almost the same. Because periodic work
can't be chained, there is no blocked state. So it all ends up in
the enqueued state. Once the constraints are
met, it will go into running. So whether you
succeed or you retry, it will go back to
the enqueued state. Now, this might seem confusing. If you succeed, then the
work really isn't done, because it's a PeriodicWork. We'll just wait for
the next interval. If you fail and you
ask us to retry, then we'll apply the
appropriate back-off policy, and then we'll increment
the run attempt count. So we'll tell you that
this is the second time that you're trying to run
it for that last period. If you mark your work
as failed, then we'll transition into
the failed state. At that point, your
PeriodicWork won't run again. And again, any time your work
is in a non-terminal state, and if you ask us to cancel
it, we'll mark it canceled. So let's apply those rules to
what the life of a chain of work looks like. So here, a is the parent of all the Workers. So when you enqueue
this chain of work, only a is in the enqueued
state, and all descendants of a are actually blocked now. Right? So let's assume a's
constraints are met and it goes into
[INAUDIBLE] running state. Once it's done-- let's
assume it succeeded-- So it unblocks b and c now. So b and c now become enqueued
and then they go into running. And for the sake
of the argument, let's assume that b
succeeds and c fails. What happens now? Because b succeeded,
it unblocks d. So d goes into the
enqueued state. But notice what happened
to e, f, and g-- They all failed. So the key takeaway is that,
if a unit of work fails, then all descendant is
also marked as failed. If a unit of work is canceled,
then all of its descendants are also marked canceled. SUMIR KATARIA: So I want
to talk about something called unique work. And let's start with a
little question here-- what's wrong with this code? So you've got an application
object in the onCreate. You're enqueuing
some PeriodicWork. The problem here-- and I've
seen this in a few bugs-- is that this enqueues
PeriodicWork every time your app process starts. And that's probably not
what you're trying to do. You're trying to
set up a sync here. Let's say it's syncing
your data every day. If you call this code
every time the app starts-- every time you've
got another thing that's syncing your data-- you
really only want one of them. So unique work,
basically, is something that lets you specify conflict
policies for a database insert. Think of it like that. If you insert the same
key into a database again, what do you want to happen? Do you want to overwrite
what's already in there? You want to ignore what
you're trying to do? That's what unique work does. It basically is a conflict
policy for WorkManager. And here's the syntax for it. It's pretty simple. UniqueName is that key. It's that something
that uniquely identifies that chain of work. Policy is the existing
work policy, or what we call the conflict policies. And then you obviously
have your requests. So the existing work policies
are the interesting things. There's three of them. The first one is keep,
which is, it keeps the existing unfinished work. So if you have things that
are in blocked, running, or enqueued, it will keep them. And if the work is
already finished or it's not there
to begin with, it will enqueue whatever you just
sent along with that call. The next one is replace. It always replaces the work
requests in the database. If your work is
running, it'll get stopped, as I described
a few minutes earlier. It cancels the old
work and it stops it. That's what it does. Append is a special one. It basically appends
to that chain of work. So for example, this
is useful if you are trying to do something
in order-- for example, you're trying to
build a messaging app, and you're sending
messages in order. So you may have a unique chain
of work for sending messages, and you want to append new
messages to the end of it. So it's basically
creating a tree for you. All right.
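[Editor's note: as a sketch, the fix for the onCreate problem described above is to enqueue the periodic sync as unique work. SyncWorker and the "daily-sync" name are illustrative.]

```kotlin
val sync = PeriodicWorkRequestBuilder<SyncWorker>(1, TimeUnit.DAYS).build()

WorkManager.getInstance().enqueueUniquePeriodicWork(
    "daily-sync",                        // the unique name: the "key"
    ExistingPeriodicWorkPolicy.KEEP,     // conflict policy: keep existing unfinished work
    sync
)
```

For unique one-time chains there is the analogous enqueueUniqueWork / beginUniqueWork, which takes an ExistingWorkPolicy of KEEP, REPLACE, or APPEND.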
RAHUL RAVIKUMAR: Remember, Sumir mentioned in one of his previous slides [INAUDIBLE] how one can customize WorkManager. So let's look at all the
things that you can do. So you can actually
specify a WorkerFactory that can be used to
instantiate your Workers and ListenableWorkers. And this is especially
useful in the context of dependency injection. So if you're using Dagger and
you want to inject something into your Worker before
the Worker starts, this is a good place to do that. You can also specify
the default executor that you want all
Workers to use. You can specify the
logging verbosity if you want to distinguish
between a build and a release
build, and you want to make sure you
log more information to diagnose your problems. And then you also specify
various other JobScheduler parameters, like
number of jobs that you want us to send to
JobScheduler, the IDs of jobs that you want us to use in case you were already using JobScheduler before. So if you want to
customize WorkManager, then you have to disable the
default WorkManager Initializer first. So for that, you have to add
this entry to your Manifest. And note the tools:node="remove". That means you
are removing this. You don't want this
entry to get merged. So now that you have disabled the default WorkManager initializer, the next step is to actually create a new instance of Configuration. So here I am using the
configuration builder, and I'm overriding it to
specify my own custom executor. So now that I've
done that, I can just call WorkManager.initialize. I specify the application context and the configuration, and I'm done.
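[Editor's note: a sketch of that custom initialization, assuming the default initializer has already been removed from the manifest as described. The executor and logging level here are illustrative.]

```kotlin
import android.app.Application
import android.util.Log
import androidx.work.Configuration
import androidx.work.WorkManager
import java.util.concurrent.Executors

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        val config = Configuration.Builder()
            .setExecutor(Executors.newFixedThreadPool(4))  // custom executor for Workers
            .setMinimumLoggingLevel(Log.DEBUG)             // more verbose logging
            .build()
        WorkManager.initialize(this, config)
    }
}
```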
So make sure you do this in your Application#onCreate or a ContentProvider#onCreate, because the operating system
can actually invoke job services on your behalf. And when job services
are being invoked, WorkManager needs
to be initialized. SUMIR KATARIA:
Finally, the last thing we want to talk about
before we wrap up is some tips for all the
library developers out there. If you're using WorkManager
in your library, you have some special use cases
that we want to think about. So the general advice we
give for library developers is, because WorkManager
is a singleton and the application initializes
it, as Rahul just showed you, you are not really in
control of what's there. Rely on the defaults. So use a default WorkerFactory. The default WorkerFactory
that we provide creates Workers and
ListenableWorkers using reflection. Rely on that, because you don't
know what else is happening. If you need some particular
dependency injection or anything else of
that sort, you'll have to have a
contract with the app. Silo your work with tags. So Rahul also showed you
how to tag your work. If you silo all your work--
meaning you put your prefix, or your package name, or your
library name in your tags-- you can easily identify
all the work that's yours. You don't have to worry
about other people's work. You don't have to
deal with any of that. You can just get your work
and operate on just that, if you don't know the IDs. Finally, we do provide
the ability for apps to wipe away all work. And this is generally
for privacy reasons. This is not something
we expect to be called, but it's for that
critical use case where you have to wipe
user data for some reason. So as a library
developer, how do you find out if your work's been
wiped or gone from under you? You should look at
getLastCancelAllTimeMillis. It's a very confusing name. OK. Next steps. So get WorkManager if
you haven't already. And for those of you who
have, thank you very much. Your feedback's been invaluable. We're up to alpha 11, so
there's three general categories of artifacts here. There's the runtime. There's the KTX stuff, which
includes a CoroutineWorker that we just put in. So you don't have
to write your own. And then there's
testing support as well. These are some helpful links. Schedule tasks with WorkManager
is the developer.android.com section for WorkManager. On YouTube, there's
the Google I/O 2018 talk, which talks about all the
basics in much greater length. Some of those APIs have
changed a little bit, but it's still broadly a good
thing to read or to listen to. Also, please file your feedback
at our public issue tracker. Beta is coming. We were told that this morning-- or last morning as well-- So we're kind of-- [LAUGHTER] We have to get
back to work, but-- [LAUGHTER] Thanks a lot. Please visit all the Jetpack
libraries on the web, and we'll be outside for any
questions that you might have. Thanks. RAHUL RAVIKUMAR: Thanks. [MUSIC PLAYING]