SPEAKER: Hi, and welcome to
Practical Magic with Animations in Compose. My name is Rebecca, and I'm a
Developer Relations Engineer at Google. Have you ever been
given a motion design and taken one look
at it and thought, there's no way I'll be
able to implement this, or you look at aspirational
things on Dribbble and think, wow, I have no idea
where to start? I'll tell you a secret. I was this person, too. And I sometimes
still am this person. But in my journey to learning
how to create fun animations, like the ones you're
seeing on screen now, I've come to realize that
there are some practical steps to take when trying to
implement a complex animation. In this presentation,
we're going to be looking at this animation
that a designer has given us. It's a non-standard
navigation drawer. We'll use the implementation
process of this navigation drawer to understand
how to think about animations in general. So let's dive into how we
can analyze and implement a motion design. After implementing
many motion designs, I realized that there
are some questions that I end up asking myself for
each animation, which helps me either pick a specific API, or
figure out the complex details of an animation. Let's take a look at some
key principles of animation. The three questions that I
ask myself for each animation are the following. What property am I animating? When does this property animate? Is it on click,
or is it forever? And how should this
property animate? Should it have a bit
of bounce, or should it be a static animation
over a set period of time? When we think of
these three things, it gets a lot easier to
break down our animation. So looking at these
questions, let's think about the navigation
drawer that we are implementing, and figure
out how to answer the questions. Firstly, let's talk about what. Most animations are made up of
changing different properties or values of a
composable on screen. Usually these are properties
such as scale, translation, rotation, alpha, or color. The best way to
approach an animation is to break it down
and try to identify individual properties
of a composable that are changing over time. Sometimes breaking
down an animation is easy because your
designer has given you access to the
tooling that they've used to make the design. This is the best case scenario. At this point, there aren't
many questions to answer. You just need to figure out how
to translate from the tooling into your code. But if you don't
have this luxury, you might need to
analyze a design or video and break it down yourself. So in this example, you
might notice that we have two different states. And this is helpful for
analyzing our animation: we can look at the
different settled states that the animation
will take and try to
fill in the blanks between them. The open state of
the animation shows that the graph content is drawn
over our navigation items. And the top content is
smaller in the open state, as well as the corners
of the sheet are rounded. In the closed state, there's no
visible sign of the navigation drawer at all. So we can identify that there
are at least three properties in this transition
that are changing: the translationX; the scale in
both the x and y directions; and the shape corners,
which get rounded as it moves into the open state. So we have our app defined with
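The analysis can be sketched as a small pure-Kotlin helper that maps a single "open fraction" onto the three changing properties. The helper name, the 0.8 scale, and the corner value here are illustrative assumptions, not code from the talk:

```kotlin
// Hypothetical helper: maps an open fraction (0 = closed, 1 = open) onto the
// three animating properties identified above.
data class DrawerTransform(
    val translationX: Float,
    val scale: Float,
    val cornerRadius: Float,
)

fun drawerTransform(fraction: Float, drawerWidth: Float): DrawerTransform {
    // Linearly interpolate each property between its closed and open value.
    fun lerp(start: Float, stop: Float, t: Float) = start + (stop - start) * t
    return DrawerTransform(
        translationX = lerp(0f, drawerWidth, fraction), // slide to the right
        scale = lerp(1f, 0.8f, fraction),               // shrink to 80%
        cornerRadius = lerp(0f, 32f, fraction),         // round the corners
    )
}
```

Deriving every property from one fraction like this is what makes the later gesture code simple: the gesture only has to produce that fraction.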
an outer box and two custom composables-- HomeScreenDrawer, which
is the bottom layer that contains the drawer
contents, and ScreenContents, which will change based on
which drawer item is chosen. We also have our drawerState
stored in a mutable state. So with this code, we already
have our starting closed state in the correct position. When it's closed,
the ScreenContents draws over the
navigation items. To apply transition
between the two states, we can use modifier
.graphicsLayer. This is a great modifier
to use for animations. It allows us to adjust
properties, such as scaleX, scaleY, et cetera. The other great part
about this modifier is that it runs in the
draw phase of Compose. So any animation changes here
won't trigger a recomposition, as it doesn't need to go
through all the phases. In this example,
the open state, we set the translationX to the
drawerWidth, and the scaleX and scaleY to 0.8 to make
it a little bit smaller, and the shape of the layer
to be a rounded corner shape, which gives us
the correct Open state. Now this won't
automatically animate between these two states. We've just defined the start
and end state for each property. We can see how this looks. We have the start and end
state of the properties. But it's not animating
between these two states. Great. So we know the properties of
our animation that we're going to work with, and we know
that we are going to be using modifier .graphicsLayer. And this is a great way to
apply these transformations to our composables. Let's talk about when the
animation should occur and triggering these animations. There are a couple of use
cases for when an animation is normally triggered. For example, an animation can
be triggered on a state change, on launch of a composable,
infinitely, or with a gesture. Let's talk about each of
these different options. On a state change--
what does that mean? A state change can
either be driven from a ViewModel or an
action that a user takes, for instance, the
click of a button or the toggle of a
switch, for example. Typically, for these
kinds of state animations, we can use the
animate*AsState APIs. There are a bunch
of built-in options, such as animateColorAsState,
et cetera. These store and manage the
value change over time for you. They are typically used
for one-shot state changes. Let's look at an example. In this example, we
have a Boolean clicked. We will adjust this
value to true or false. Then, using the
animateIntOffsetAsState value, we will change the offset
based on if the item is clicked or not, giving the
new value of an offset that should occur on click. The animateIntOffsetAsState
function will automatically
update on each frame, interpolating between the
current state and the target value. We can then use
the offset modifier to adjust the offset of the
item that's visible on screen. This would result
in the following as part of that state change. The other option
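What the animate*AsState family does each frame can be sketched conceptually in plain Kotlin. The fixed-step rule below is a simplification for illustration only, not the real frame clock or interpolation:

```kotlin
// Conceptual sketch: each frame, move the current value one step toward the
// target, snapping onto it once it is within reach.
fun stepToward(current: Float, target: Float, maxStep: Float): Float {
    val delta = target - current
    return if (kotlin.math.abs(delta) <= maxStep) target
    else current + maxStep * kotlin.math.sign(delta)
}

// Collect every intermediate "frame" value from start to target.
fun animateToTarget(start: Float, target: Float, maxStep: Float): List<Float> {
    val frames = mutableListOf(start)
    while (frames.last() != target) {
        frames += stepToward(frames.last(), target, maxStep)
    }
    return frames
}
```

The key property to notice is that the value always interpolates from wherever it currently is, so changing the target mid-flight never causes a jump.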
for state changes is to use a transition object. This concept allows for
coordinated state updates that will happen at the
same time based on states. The animations are not
independent of each other. They happen at the same time. Let's take a look at an example
of how transition works. In this example, we
create a transition based on the current state. We then create two
properties that will be animated at the same time. The first is the Rect size,
and the second is the rotation. These will be animated
between the two values when the state changes. This would result in the two
properties being animated at the same time, in
this example, the scale and the rotation. The other option for
running an animation is on launch of a composable
when it comes into view. This is typically when you
start a screen, for example, if you had a
placeholder that needs to be shown before the
real content is displayed. To start an animation on
launch of a composable, we can use Animatable. This animation object
allows us to have control of our animation state. And we can then
use LaunchedEffect, which runs on the
first composition or when the key value changes. In this example, we could use
LaunchedEffect side effect to call animateTo on
our Animatable object. This will invoke our
animation when the item is composed for the first time. Another option is to animate our
content infinitely, or forever and ever. These kinds of animations
can be created using rememberInfiniteTransition. In this example, we
create an animating scale that the text composable uses
to scale content up and down for a pulsing animation. As you can see
from this example, the Hello text is repeatedly
animating bigger and smaller. The other option for when
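A repeating pulse like that can be pictured as a pure function of elapsed time. The triangle wave below is a rough stand-in for what rememberInfiniteTransition drives per frame; the period and scale range are assumptions for illustration:

```kotlin
// Map elapsed time onto a triangle wave so a scale ping-pongs between
// 1f and 1.2f forever, like a pulsing animation.
fun pulseScale(timeMs: Long, periodMs: Long = 1000L): Float {
    val phase = (timeMs % periodMs).toFloat() / periodMs        // 0..1, repeating
    val pingPong = if (phase < 0.5f) phase * 2f else (1f - phase) * 2f
    return 1f + 0.2f * pingPong                                 // 1f..1.2f
}
```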
an animation could be run is as part of a gesture. And this is how this particular
design is implemented. A gesture drives how this
animation is performed. When a user drags
the sheet on top, it should perform the
translation and scale. But this pattern can be applied
to other components, too. Gesture-based animations
are interesting because they need to be interruptible
when a touch begins, and they need to continue the
animation after a gesture is finished, either decaying or
snapping to a certain position. And it needs to feel
like a natural stop. So let's get back
to our example. Let's make the transition
between the two states animate between with a gesture. Let's first start
with the gesture, and then we can add the
animation between the two steps. Firstly, let's look
at how handling dragging of a composable works. To create a gesture
between the two states, we can introduce an Animatable
object, translationX. This will store the
translationX that we can use to translate between
the open and closed state of the drawer. We will update this
value with a gesture. We can then use this value
to set the translationX on the graphics
layer, as well as use the same value as a fraction
to calculate the scale. In this example, we lerp,
or linearly interpolate, between 1 and 0.8,
using the translationX divided by the drawer width as
the fraction between the two values. Lerp is a way to calculate
where the fraction would lie between these two
numbers, 1 and 0.8. So if we gave it a fraction of
0.5 between these two numbers, it would give us
0.9 as the answer. The next part we
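Lerp itself is one line of Kotlin, and the worked example checks out:

```kotlin
// Linear interpolation: for a fraction t in [0, 1], return the value that
// fraction of the way from start to stop.
fun lerp(start: Float, stop: Float, fraction: Float): Float =
    start + (stop - start) * fraction

fun main() {
    // Halfway between 1f and 0.8f, as in the drawer's scale calculation.
    println(lerp(1f, 0.8f, 0.5f)) // prints 0.9
}
```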
want to introduce is the draggable concept. We will use
rememberDraggableState, and when the delta
changes, we will make sure the animation snaps
to the changed drag amount. We use snapTo because we
don't want any animation delay when a user is interacting
with the gesture on screen. The change to the translationX
should be instant. Next step, we add the
draggable modifier onto our screen
content composable. And this adds drag gesture
detecting onto our content. We say that we want
the content to be dragged in a horizontal
direction only for this interaction. And we pass the draggableState
into the modifier. For a visual representation,
we can see the start position of the x-axis, 0. When we drag on the screen,
the drag amount is returned in the callback, and the
overall translationX value is increased, which we then use
with modifier .graphicsLayer to move the composable
horizontally. So running this now, we can see
that the gesture is changing the translation of the contents
on top of the other contents. This is looking good,
except there are two issues. The first one is that the
drawer can be completely dragged off screen. And the second issue is
that the drawer dragging doesn't continue if
I lift up my finger. We only drag when your finger
is touching the screen. It doesn't continue
if I perform a gesture and lift up my finger. So back to our code. To solve the first
issue, we can either set bounds on the
Animatable object, or we could ignore
drag amounts that get us above or below
the threshold of what we want to allow. We will set a bounds on
the Animatable object. This means that the animation
cannot progress past these points. We will set it to start at
0 and end at the drawerWidth. It's worth keeping in mind
that this caps the animation at those two points. So if we wanted to overshoot
the animation at some point, we shouldn't use this
mechanism, and instead, check the thresholds when we
are translating with snapTo. Great. We've solved the first issue. But for the second
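The bounds idea can be sketched without any Compose types: clamp the accumulated drag, which is what setting bounds on the Animatable achieves for us. The helper below is an illustrative assumption, not the library's code:

```kotlin
// Clamp the accumulated drag so the drawer's translationX can never leave
// [0, drawerWidth]. coerceIn plays the role of the Animatable's bounds here.
fun accumulateDrag(current: Float, delta: Float, drawerWidth: Float): Float =
    (current + delta).coerceIn(0f, drawerWidth)
```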
issue, we need to think about what
should happen when someone lifts their finger. Ideally, we want the
animation to snap back into either one of the two
states, open or closed. We need to take into
account the current velocity of the current
gesture and use that to determine whether the
animation should continue or bounce back to its origin. So let's talk about
the case of handling flinging of the content. We've already dealt
with the drag, but now we need to deal
with the fling gesture. What should happen after
the user lifts their finger is called a fling gesture. So back to our code. To do this, we need to
calculate the current velocity of the gesture and
use it to determine what to do when the drag ends. We first want to create
a SplineBasedDecay animation spec. This will be used to help us
calculate the current velocity of a dragging gesture. How fast your finger
moves on screen is the velocity of your gesture. SplineBasedDecay is
just a fancy term for a natural looking
animation that will eventually come to rest at a
certain position given a current velocity. So to use this, we add
onDragStopped callback to the draggable modifier. This gives us the velocity
with which the gesture finished or the user lifted their finger. In here, we can
now use the decay we've created to calculate
the decay value given the current translationX
values and the velocity of the gesture. This will give us an estimate
of where the gesture would naturally stop
should it continue with the current velocity. The next bit that
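As a rough mental model (not SplineBasedDecay's actual math), a friction-style decay comes to rest a distance proportional to the release velocity away from where you let go; the friction constant here is an illustrative assumption:

```kotlin
// Estimate where a friction-style decay from the current position would come
// to rest, given the release velocity (px/s) and a friction coefficient.
fun estimateDecayTarget(currentX: Float, velocity: Float, friction: Float = 4f): Float =
    currentX + velocity / friction
```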
we need to determine is the actual targetX that
we want to head towards. We use the midpoint of the
drawerWidth as the decider, but this could be up
to your implementation. In our case, if the potential
landing point, i.e., the decayX is greater than the
midpoint, we want to target the open state
or the drawerWidth. If it's not, then we want
to target the closed state, or the 0 point. Let's go back to our diagram. So we understand that whilst
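The midpoint rule can be written as a tiny pure function; the names mirror the talk's variables, but the function itself is an illustrative sketch:

```kotlin
// If the estimated decay landing point passes the midpoint, target the open
// state (drawerWidth); otherwise target the closed state (0).
fun chooseTarget(decayX: Float, drawerWidth: Float): Float =
    if (decayX > drawerWidth / 2f) drawerWidth else 0f
```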
the user keeps their finger on screen and drags, we update
our Animatable using snapTo. Now when a user
lifts their finger, say they perform a fling,
we use the current velocity of the gesture to determine
the estimated decayX value, or where it would
naturally end up. If the point reaches
past the midpoint, we know we are targeting
the drawerWidth as the final destination. In this case, the ends
up past the drawerWidth. So we can naturally reach the
destination using a decay. We call animateDecay
in this case. If we can't naturally reach the
target, say in this example, you can see if we
naturally decayed, we'd end up just past
the middle of the screen. In this case, we want
to just call animateTo with our current
velocity, and this will take care of
increasing the velocity and decreasing it to get to the
targetX that we want to get to. So back to our code. We can now introduce a Boolean
called canReachTargetWithDecay, which checks if the decayX
is greater than the targetX, and the targetX is
the drawerWidth, or decayX is less than the
target, and the target is 0. So if we can reach
the target with decay, we call animateDecay. Otherwise, we call
animateTo with the velocity. And finally, we set the
current drawer state to the correct state
that it is currently in after the animation. Additionally, we can
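The whole release decision reduces to a Boolean like the one just described; this sketch mirrors the description rather than quoting the talk's exact code:

```kotlin
// Can a natural decay carry us all the way to the target? True if the decay
// overshoots the open target (drawerWidth), or undershoots the closed target (0).
fun canReachTargetWithDecay(decayX: Float, targetX: Float, drawerWidth: Float): Boolean =
    (decayX > targetX && targetX == drawerWidth) ||
        (decayX < targetX && targetX == 0f)
```

When this returns true we can let the decay run (animateDecay); when it returns false we push toward the target with the remaining velocity (animateTo).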
also add a click function on click of the menu header that
will trigger an animation that opens or closes the menu. In this case, we don't
decay because we don't have any current velocity in place. So as you can see, using
Animatable in this example allows us to control
the value via a gesture or via a normal animation,
making it a really powerful API to use. So let's take a look at
what it looks like now. Now we have a drawer that snaps
to the closest position based on the gesture and the velocity
of the initial gesture. If we fling the
drawer, we can see it happens a lot
faster than without it. Great. We've basically implemented
the spec that we had in mind. But there's always a way
to make this even simpler. From this example,
we've essentially created what AnchoredDraggable
was designed for. It's very similar to
the draggable modifier, but it allows you to specify
anchor points of the drag gesture. It's part of foundation and was
added in Compose 1.6.0 alpha03. In this example, our anchors
would be the open and closed states of the drawer. First, we defined our
draggable anchors, open at the drawerWidth,
and closed at 0. Next, we create the
AnchoredDraggableState with an initial value of Closed
and passing in the anchors. Setting the positionalThreshold
to the midpoint of the drag, we can then change
our modifier to use anchoredDraggable
instead of draggable, and use the state.requireOffset
to get the translationX of the drag gesture. And this gives us a
pretty similar interaction to what we manually defined
in a more generic way. So we have our animation defined
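What AnchoredDraggable settles for us can be pictured as nearest-anchor selection. This is a simplification that ignores velocity and thresholds; it is not the library's implementation:

```kotlin
// Settle a released drag offset onto the nearest anchor position, e.g.
// closed = 0f and open = drawerWidth.
fun settle(offset: Float, anchors: List<Float>): Float =
    anchors.minByOrNull { kotlin.math.abs(it - offset) }!!
```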
and we are happy with it. But what if we wanted to
customize it a little bit more, maybe make it a little
bit more exciting? The last question
we need to answer is, how should our
animation be performed? We should be
thinking, how playful should our animations be, or how
long should an animation take in milliseconds? Once we figured out how
our animation should play, we could look into
the APIs we have that can help us achieve this. Looking at an example using
Animatable, in this example, we are animating from 0 to
360 on launch of the screen. We have our animation
setup and ready to go. But we want to customize
how it runs, not just when or what it runs on. This brings us to talking
about animation specs. Each animation is controlled
by an AnimationSpec, which stores information about
the type of transformation, for example, if we
are transitioning between two integers or two
floats and the configuration. The following are
AnimationSpecs that are available at the moment-- tween, spring, snap,
keyframes, and repeatable. The default that most
Compose animations use is the spring
animation spec. But let's talk a bit
about tween specs. This spec allows
you to specify how to animate between two values. Tween is just a shortened
version of the word between. Nothing fancy here. This spec is useful
for when you want to set a duration in
milliseconds for how long your animation should take. In this example, we've set
it to take 300 milliseconds. So this is great to
be able to customize. But what if we wanted to
change how the value transforms between 0 and 300 milliseconds? Enter the concept of easing. Easing describes
the rate of change of your animation over time. Easing is an interface
that takes a fraction and produces another value. These easing functions
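That interface is tiny. Here is a self-contained sketch with a linear curve and an accelerating curve for contrast; the names are illustrative, not Compose's own types:

```kotlin
// An easing is just a function from an input fraction (elapsed time, 0..1)
// to an output fraction (progress, 0..1).
fun interface Easing {
    fun transform(fraction: Float): Float
}

val Linear = Easing { it }               // no acceleration or deceleration
val AccelerateQuad = Easing { it * it }  // starts slow, ends fast
```

For example, at the halfway point the linear curve reports 0.5 progress while the accelerating curve reports only 0.25, which is exactly the "rate of change" difference described above.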
are then used by tween to determine where a value
should be in its execution. By default, tween spec
transforms your values using FastOutSlowInEasing. A good example of how
to think about easing and what makes a
good easing function is like a roller coaster. A good roller coaster doesn't
travel at the same speed throughout the whole ride. At portions of the ride
you're going faster, and hopefully at the end
you're coming to a stop. Similarly, with
easing functions, using a linear function often
isn't a great experience. It's jarring, and dare I
say it, a little bit boring. Now let's learn a little bit
more about linear easing. This is when you transform
the values linearly. There's no acceleration
or deceleration. In this example, when we
change the easing curve to LinearEasing, the
animation will run as follows. At 0 milliseconds,
the xOffset will be 0. At the midpoint, it'll
be exactly halfway between the start
and end values. And at the end of the
animation, the xOffset will be at its
final destination. Another implementation
of easing that looks a bit more natural than linear
easing is CubicBezierEasing. CubicBezierEasing allows you
to follow a bezier curve, where you can specify
the first and second control points, the two points that
will determine the bezier curve that the animation will follow. The example on the right
has the easing curve defined by these two points that
the animations will go between. A new addition to our
APIs is the ability to create an easing
curve from a path. We provide a path
between 0 and 1. And in this example, we combine
together two cubic curves to form a path that the
animation will follow. We can then use
these easing curves by specifying the easing
parameter on a tween spec. This diagram shows
the difference between applying different
easing functions, LinearEasing and
FastOutSlowInEasing applied to the same animation. As you can see, there is a big
difference between the two. FastOutSlowIn seems more natural
than using a LinearEasing function. So there are a bunch of built-in
easing functions in Compose that should take care of
most of your requirements. But if they don't,
there are plenty of ways to define your own. So we've talked
a lot about tween and setting the different easing
specs to customize our tween animations. But at the start I mentioned
that by default Compose uses spring animations for most
of the built-in animations. Why is this? Well, spring animations are
physics-based animations based on a stiffness
and a damping ratio. They generally look and
behave in a more natural way than tween animations do. Let's look at how spring
animations work and then understand why. Let's look at the problem with
using tween compared to spring. Specifying a duration for an
animation doesn't always work. For example, if
we have two boxes, the first one only needs to
travel a smaller distance than the other. But applying the
same duration to both produces a very different
look and feel for both. What if your device
is larger, say if you have a very
large tablet, and you're animating from the
top of the screen to the bottom of the screen? You'd see a massive
difference in the animation between devices, as
both of them need to complete the animation
in a set amount of time but with variable
distances between them. The same can be said for
interrupting an animation. For example, when I
interrupt a tween animation and set a new target
value, using tween doesn't work as fluidly as using
spring, as spring animations take into account
the current velocity, making it feel more natural. You can see from
these two videos. In the first video,
using the same duration for each of the same
changes in position here just doesn't look as natural
compared to the second video with a spring spec. Now if we look at what we
can configure with spring, we can set two things,
stiffness and dampingRatio. This allows us to configure
how bouncy an item will appear and how long an animation
will take to come to rest. As you can see, using
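The physics can be sketched with a frame-by-frame damped-spring integrator. The constants, step size, and integration scheme below are illustrative assumptions, not Compose's actual solver:

```kotlin
// Integrate a damped spring toward a target, frame by frame. Stiffness pulls
// toward the target; the damping term (derived from dampingRatio) bleeds off
// velocity so the motion comes to rest.
fun springSettle(
    start: Float,
    target: Float,
    stiffness: Float = 200f,
    dampingRatio: Float = 1f, // 1 = critically damped, < 1 = bouncy
    dt: Float = 1f / 60f,     // one 60fps frame
): Float {
    var x = start
    var v = 0f
    val damping = 2f * dampingRatio * kotlin.math.sqrt(stiffness)
    repeat(600) { // ~10 seconds of frames
        val accel = -stiffness * (x - target) - damping * v
        v += accel * dt
        x += v * dt
    }
    return x
}
```

Because the acceleration always depends on the current position and velocity, an interrupted spring simply continues from wherever it is, which is why it handles retargeting so gracefully compared to a fixed-duration tween.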
spring configurations allows us to create a
whole different look and feel for an application. If we wanted something
playful and fun, adding a little bit more
bounciness to the animations can communicate this to users. Alternatively, if it's a
bit more of a serious app, then removing the
bounciness will ensure it communicates
this more fluidly. Here is an example of
using different spring configurations. Looking at each
of these, you can see how applying a
different spec configuration can communicate a
completely different look and feel to your app,
where the bouncy spec looks more playful than the others. So back to our example. What if we wanted an item to be
a bit more playful and bouncy? We can change the animation spec
used to be a spring animation, and remove the decay
animation to get a bit more of a playful effect on the app. We've covered some of the
more basic APIs and principles of animations. And whilst most
animations can be built up from simple
primitives, luckily you don't always need to
build from scratch for all kinds of animations. There are some predefined
composables and modifiers that are built into
Compose that take away a lot of the low level work for
you and complete it for you. The first one is
AnimatedVisibility. It allows you to toggle the
visibility of a composable with a pretty animation. For example, when
used with a column, the items will automatically
be removed with an animation. The next example is
animateContentSize. This example will animate
size changes of content. Be sure to set the
height and width after the animateContentSize
modifier for the changes to have an effect. The result of applying this to
content can be seen as follows. A smoother transition between
the two is now more apparent. AnimatedContent is another
built-in composable that allows for
easier animations between different composables. It allows you to
customize the transition between different
states and customize the Z order of the animation. For example, we
typically can wrap when conditionals
with AnimatedContent for more seamless transitions. So in this example, we wrap
our content in AnimatedContent to provide a more
seamless animation. Our transition between
the different composables is now fading and scaling
between each other, which looks a lot better than before. With AnimatedContent, and a
lot of the animation APIs, too, there's this concept
of an enter and exit transition for content. We've recently
introduced new options for this, scaleInToFitContainer,
scaleOutToFitContainer, which animates the scale
between the content based on the container size, as
well as ExitTransition.Hold. This allows for holding
the content in place until both transitions
are finished. So back to the code. Changing the transition
spec of AnimatedContent, we can customize how the new
composable coming on screen will enter the screen, and
how the old composable will exit the screen. In this example, we are
using slideIntoContainer to slide the content towards
an upwards direction, and slide out of the container
in a downwards direction. We have also customized
the animationSpec to set a custom duration
and easing function. Let's take another look
at what this looks like. Our new content slides in
from the bottom of the screen, and the old composable
slides out from the bottom. Of course, this is customizable. So check out the different
enter and exit transitions available to you in the docs. There are plenty of more
built-in composables that you can use. If you're looking
for a quick guide on getting started with
animations in Compose, take a look at this link. Right, onto the last
section of the talk. How do I pick the
correct API for the job? When thinking about
choosing an API for the job, there are many different
options to pick from. And looking at the docs
might have you overwhelmed. And whilst over time
you'll eventually get a feel for what
API can help you, we do have many
options to think about. Pretty much most
animations can be built using a single Animatable,
or using the animate*AsState functions, as we've seen
explained here already. But there are more handy
options that you can use for your specific use case. We've recently released a new
version of the decision tree diagram for Compose animations. It should help you
getting started with thinking of which API
would suit your use case. It's not meant to be
strictly followed, as many APIs can be retrofitted
for different purposes. But it should help you
narrow your options and think of different
questions to ask yourself about animations. For example, if we are thinking
about animating the text color when an error occurs,
let's follow the diagram to see where we would end up. First, we ask
ourselves the question, is the animation like art? Is there a lot of SVGs
or images being animated? In this case, no,
because it's just text. Then, does it need
to repeat forever? In this case, no, it
doesn't, because it goes based on the state of if
we are in an error state or not. Is this a layout animation? No, because it's the
color of a composable, not the layout property. Next, we ask
ourselves, do we need to animate multiple properties? In this case, no, it's
just a single text color. Does the animation have a set
of predefined target values? In this case, yes. So the animate*AsState
API is what we should use. And in this case, we have
text, so we should also set TextMotion.Animated
on the style property. The full diagram can
be found at this link. And it should help
you get started with thinking about which
animation to use where. In summary, today
we've covered a bunch of concepts for animations. But most importantly, try
to analyze your animations with three questions in mind-- what, when, and how? We've also learned that
gesture-driven animations are typically made
up of two parts-- dragging on screen
and flinging, and how to handle these situations. And finally, use the
built-in components to make common
animations easier. And if you don't know
which animation API to use, use the decision tree
diagram to help you decide. And that's it. Wait, what's that I hear? You wanted more? Let's do a quick
bonus animation then. Let's talk a bit
about ImageVectors and how to animate them. You might remember
animated vector drawables. What if I told you
that you don't need to create your SVG animations
in an XML file either? Let's take a look at this
fun example of a jellyfish. Would you believe that
this jellyfish is actually an SVG that we can break down
and manipulate with code? Let's take a quick look
at how we can do this. If we inspect what the SVG file
for this jellyfish contains, we can see that there
are a bunch of paths in this SVG for each
bit of the image. We can take these path
commands defined inside the SVG and build up the image
in code in a similar way. For those unfamiliar,
a path is just a set of mathematical
commands instructing how to draw something. For example, inside
the path string we can see these
letters that correspond to a certain command
that performs something, such as a line to a certain
point or a cubic curve. We want to render this vector
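To make the command idea concrete, here is a toy parser for just the move-to and line-to commands. Real SVG path data has many more commands (C, Q, Z, and so on), and this is not how Compose parses it:

```kotlin
// A toy parser for two SVG path commands: M (move-to) and L (line-to).
data class Point(val x: Float, val y: Float)

fun parsePath(d: String): List<Point> {
    val points = mutableListOf<Point>()
    val tokens = d.trim().split(" ")
    var i = 0
    while (i < tokens.size) {
        when (tokens[i]) {
            "M", "L" -> {
                points += Point(tokens[i + 1].toFloat(), tokens[i + 2].toFloat())
                i += 3
            }
            else -> i++ // skip commands this toy parser doesn't understand
        }
    }
    return points
}
```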
programmatically with code. It's worth noting that we could
use something like Lottie, or load this up with AVDs. But for the sake
of this example, we're going to be rendering
this with an image vector. So to do this, first we
take the individual paths defined in the SVG. We then create a vector
using rememberVectorPainter, and create an image composable
passing in the image vector that we are creating. We then add the paths
to the image vector by creating groups that take
in the path data and color. We repeat this process for each
part of the vector of the SVG. And we now have a vector that
is rendered in code instead of being loaded up from assets. This is useful because
now we can animate parts of the image vector. So using what we've learned
before, we can take the example and examine what
properties are changing. We can see that the whole
jellyfish moves up and down infinitely over time. So we should use translationY
and rememberInfiniteTransition. And the eyes blink
when you tap on them. So we should animate
the scale and alpha, and use Animatable to have
fine control over these touch events. Let's take a look
at how to implement the blinking animation. We can create two
Animatable objects, one that represents
the alpha, and one that represents the scale. We can then create
a suspend function that we will call when
the jellyfish is touched. We call launch, animating the
alpha from 1 to 0, and then from 0 back to 1 after
150 milliseconds. The way we are
calling animateTo here means that these two calls
will run one after the other. The next part is that
we can call launch again in this coroutineScope. This will ensure
that the animation is run at the same time
as the other launch block. And the scale is what
we will animate here. We will animate from 1
to 0.3 and back again. Great. We've got our two variables. Now we can apply them
to the actual vectors. First, we change the
scaleY of the eye so we can apply the animation
value to the scaleY property of the eye group. Then, we apply the
alpha animation value to the path fillAlpha. The same can be done
for the translationY, applying a variable
that changes over time. And there we have a
blinking jellyfish. And that's all. Building animations in
Compose should be fun. For more information on the
topics in this presentation, check out the links below. Bye, for now.