DAN SANDLER: This is
what my team does-- System UI. We see a UI paradigm
taking hold in the wild. We take it. We learn about it. We make it safer, more reusable. And then we contribute it back
to the framework for everybody to use on Android. So in API 29, we have a
developer preview of something that we call Bubbles. Bubbles is an implementation
for any app that wants to use it of a
floating chat bubble that you can use to
connect to your app. And effectively, you tie it to a notification. You add in some bubble metadata, including an icon and an intent to launch an activity, and we'll put your activity inside a floating window when the user touches that bubble. So this is something we've been working on for a while. But we really need feedback. We're excited for you to use it. Let us know how it works.
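To make that concrete, here's a minimal sketch using the API names from the Q developer preview; the activity, channel ID, and drawable here are hypothetical:

```kotlin
import android.app.Notification
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.graphics.drawable.Icon

// A sketch of attaching bubble metadata to a notification (Q preview API).
fun buildBubbleNotification(context: Context): Notification {
    val bubbleIntent = PendingIntent.getActivity(
        context, 0,
        Intent(context, ChatActivity::class.java), // hypothetical activity
        PendingIntent.FLAG_UPDATE_CURRENT
    )
    val metadata = Notification.BubbleMetadata.Builder()
        .setIntent(bubbleIntent) // shown inside the floating window
        .setIcon(Icon.createWithResource(context, R.drawable.ic_chat))
        .setDesiredHeight(600)
        .build()
    return Notification.Builder(context, CHANNEL_ID) // hypothetical channel
        .setContentTitle("New message")
        .setSmallIcon(R.drawable.ic_chat)
        .setBubbleMetadata(metadata)
        .build()
}
```

The way this is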
going to work is, in Q, you can go turn this
on in developer settings and start implementing
support for it in your app. Or, as a user, you can
start seeing which apps have moved over to this new system. In the future, we'll
move that opt-in up to a full user
setting so that users can turn on Bubble support. And then, eventually, the
system alert window (SAW) APIs will be fully deprecated, and all apps using that paradigm
to do Bubbles will move over to the Bubbles API. There is a talk
about this Thursday, 9:30, right after breakfast-- What's New in the Android
OS User Interface. It's either here or
Stage 1, one of those. It'll be on YouTube, too. Now that we're done
talking about that, I want to bring the
lights down a little bit, and talk about Dark Theme. From the Google
Finally department, it is finally an end-user
feature in core Android. It's no longer tied to
time of day, by the way. So this whole MODE_NIGHT_AUTO
thing, that's done. Forget it. It is just, user wants
it on, user wants it off. That's how Dark Theme works. So if you want to
Dark Theme your app, you have a few options. Option A, and this
is the easiest, is just to use
the Android themes that we already have
that support this. So AppCompat.DayNight--
that's been there for a while. We now have
DeviceDefault.DayNight if you're trying to
match the device's style, you're making more of
a utility kind of app. And then you can
actually pull out just elements of the
day-night current theme like colors and things like that
to just pull in accent colors. Your other option is-- this is actually kind of wild. We have this thing in
Q called Force Dark that you can turn on. This is kind of some
eccentric wizardry that the graphics
team has put together. If you want, the OS will
just invert your app for you. Just flip it into dark mode. You don't have to
do anything else-- except that it's actually
kind of terrifying when some things, like
photos, are inverted. So you can actually opt individual views out of being put into Force Dark, if you're using that in your app at all. So you should try that.
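The per-view opt-out is a one-liner -- a minimal sketch, assuming the Q preview API (there's also a theme attribute for the app-wide switch):

```kotlin
import android.view.View

// Opt a single view out of Force Dark (Q preview API) -- for example,
// a photo that would look wrong when inverted.
fun excludeFromForceDark(photoView: View) {
    photoView.isForceDarkAllowed = false
}
```

It may actually save you quite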
a bit of development time. Option C-- you've always been able to do this. Just do it yourself. Look at the UI mode, change all of the themes in your app, and switch it around. That's always fine, as long as you're looking at the current configuration and know whether you should be doing dark theme or light theme.
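A minimal sketch of that check, reading the night bits out of the current Configuration; the theme names are hypothetical:

```kotlin
import android.app.Activity
import android.content.res.Configuration

// Pick a theme based on the current night mode, e.g. before setContentView().
fun Activity.applyThemeForUiMode() {
    val nightMode = resources.configuration.uiMode and Configuration.UI_MODE_NIGHT_MASK
    val isDark = nightMode == Configuration.UI_MODE_NIGHT_YES
    setTheme(if (isDark) R.style.AppTheme_Dark else R.style.AppTheme_Light) // hypothetical themes
}
```

And then, of course, option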
D, which I do not recommend, is just ignore it. And then you'll be the only
app with a searingly white UI in between other
neatly dark themed apps on the user's device. So don't do that. We have another talk about that. That is Thursday, also, at-- how
did we end up with both system UI talks at the same time
on different-- anyway. CHET HAASE: We're
really into concurrency. [LAUGHTER] DAN SANDLER: You know, if we
had just used co-routines-- Thursday at 9:30, there's a
separate talk about Dark Theme and gestures in Android Q,
which I encourage you to attend. Next up in system UI,
let's talk about sharing. And as we all know,
sharing is caring. And in this particular
context, what that means is, we,
my team, Android-- we care about performance,
custom share targets coming from your app, content
previews, copy to clipboard. Did I mention performance? These are things that
we really care about. But the share sheet in
P needs a little more of that caring, a
little more TLC. So let me present to you the
share sheet in Q. It's awesome. So this bad boy can
fit so many icons. Look. We've got content preview that
does images and text depending on what you put
into the clip data. We have a new sharing shortcut API that does not synchronously and serially launch every app and ask it, hey, what you got? It's lightning fast.
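Here's a rough sketch of publishing one of those sharing shortcuts, assuming the platform ShortcutManager path; the ID, label, and category string are illustrative, and the category has to match a share-target declaration in your shortcuts XML:

```kotlin
import android.content.Context
import android.content.Intent
import android.content.pm.ShortcutInfo
import android.content.pm.ShortcutManager
import android.graphics.drawable.Icon

// Publish a dynamic shortcut the Q share sheet can show directly,
// instead of querying your app synchronously at share time.
fun publishShareTarget(context: Context) {
    val shortcut = ShortcutInfo.Builder(context, "contact_alice")
        .setShortLabel("Alice")
        .setIcon(Icon.createWithResource(context, R.drawable.ic_alice))
        .setCategories(setOf("com.example.category.TEXT_SHARE_TARGET"))
        .setIntent(Intent(Intent.ACTION_VIEW)) // dynamic shortcuts need an intent
        .build()
    context.getSystemService(ShortcutManager::class.java)
        .setDynamicShortcuts(listOf(shortcut))
}
```

We've got copy to clipboard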
that's just a button. It's just a button
right there at the top. Like, why did we not
think of this before? And it is really, really fast. Beautiful animations. I'm so excited
for you to try it. It's in beta 3. Check it out. And we're talking about
that-- guess what? On Thursday at 9:30. [LAUGHTER] Now, let's talk-- CHET HAASE: Interruptions. [LAUGHTER] DAN SANDLER: --about
interruptions. Thank you, Chet. CHET HAASE: Yeah. DAN SANDLER: Interruptions--
notifications that ask for your
attention urgently. We did a little back
of the envelope math. Most of our users get at
least one of these interrupt-- something beeping,
popping up, whatever-- at least one per waking hour,
but a substantial fraction, every 10 minutes
or more frequently. And our user research says
that this is too much. People feel like they're
getting too many interruptions. A great deal of that is
due to people's lives being full of interruptions. But your device, your phone
should help you manage that. People want to be interrupted,
but only for what matters. We want notifications
to just chill out. What's a good word? Calm? Quiet? [SPEAKING FRENCH] Gentle. Gentle is the word [INAUDIBLE]. CHET HAASE: I think
he was chilling out. ROMAIN GUY: Were you
trying to speak French? DAN SANDLER: No. ROMAIN GUY: That was bad. [LAUGHTER] DAN SANDLER: So we're going to
take those notifications that aren't trying to
get your attention, and we're going to put them down
here, in a separate section. We'll call this
the gentle section. That leaves the urgent stuff,
the things that can't wait, and the things that make noise
to get your attention-- those go at the top. And we call those priority. So the question, of
course, as developers is, what takes priority? Because if everything
is priority, nothing is. First of all, apps
can still say, hey, this is an important thing. And that causes a popup-- a heads-up notification,
we call it internally. You know this. So you use that sparingly
to not annoy the user. We've also always
applied, and we're continuing to tune our
OS-level heuristics for how to adjust these things
to try to figure out, what is the right
actual presentation for this notification
in the moment? So we'll actually make sure
that things like basic device functions like, hey, you're about to run out of battery, or, here's an
incoming phone call-- those things are a priority. We're also going to continue
to prioritize communication from people, which we've
done for a few releases now. And also, events, alarms--
things that you really don't want to miss. And then, as always-- you
should know this by now. My team feels very strongly. The user always
gets the final say. So you'll be able to long
press on a notification, and very easily switch it
from priority to gentle and back to decide
which bucket it fits in. In beta 3, we're trying
something a little exciting where we don't
actually show the icons for those gentle notifications
in the status bar. Let us know how you
feel about that. We're still trying to decide how we want
versions of the beta software. While we're talking
about notifications-- I think this was
mentioned in at least one of the other two keynotes today. We talked about-- we call
them notification actions. But these smart
replies actually are something that the
OS is now supplying to all messaging
style notifications. We're just going to
throw them in there. And you've seen this in
Google Apps up until now. Now every app on
the device is going to have the ability to have
this in messaging style. And you can even opt into it for other notification types as well, if you want. And this is automatically generated based on the content of that notification.
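The opt-in looks roughly like this -- a sketch assuming the Q Notification.Builder API, with a hypothetical channel and icon:

```kotlin
import android.app.Notification
import android.content.Context

// MessagingStyle notifications get system-generated smart replies and
// actions automatically in Q; other types can opt in explicitly.
fun buildNotification(context: Context): Notification =
    Notification.Builder(context, CHANNEL_ID) // hypothetical channel
        .setContentTitle("Package delivered")
        .setSmallIcon(R.drawable.ic_package)
        .setAllowSystemGeneratedContextualActions(true) // explicit opt-in
        .build()
```

So that's the last bit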
of notifications news. And we're talking
about it-- guess when? Thursday at 9:30. That's right. Last thing, and
then contractually, I'm required to
give up the clicker. I want to talk to you
about gesture navigation. So this is something,
as you know, we've been working on
for a little while. We started hacking away at it
in P sort of in baby steps. But it's important
to talk about just briefly, why is
this even important? Why are we going to
gesture navigation overall in the operating system? The basic idea is to just
get less system UI in the way of your beautiful
content that's-- it's not really-- can we
do more beautiful-- what would be the most beautiful
thing that we could put here? [LAUGHTER] Right? Right? ROMAIN GUY: I
disagree with this. [LAUGHTER] DAN SANDLER: So in the case of
gesture navigation at the OS level, we noticed a UI
paradigm in the wild. And we took a look
at it, understood it, made it safer, more usable,
and added it to the framework. Does this sound familiar to you? This is what we do at Android. And in this case,
it was actually devices that were pioneering
more immersive experiences, but all with different gestures,
where you would slide and poke the screen in order to get
it to do different things. So we actually worked
with OEMs on this. ROMAIN GUY: [INAUDIBLE]. [LAUGHTER] Is that what I'm supposed to do? DAN SANDLER: Be careful. Be gentle. Gentle is the word
of the day, remember? We've actually worked
with OEMs on this and came up with a standard
that is much simpler, and that we can start to see across the ecosystem. So as app developers, you
need to prepare yourselves. One of the things you want to
do is simply draw edge to edge. The whole point of this is
to give you more Chet face. So go ahead and actually draw
underneath the status bar, underneath the navigation bar,
where you might not ordinarily have done. But you want to avoid putting
clickable things there, because those clicks
are potentially going to get captured by,
for example, the status bar. You also want to figure out
where the gesture areas are. And we're standardizing what these gestures are, but you'll actually want to look at the insets to figure out where there are regions where a swipe from the edge is actually going to be taken over by the operating system.
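A minimal sketch of reading those regions, assuming the Q WindowInsets API:

```kotlin
import android.view.View
import android.view.WindowInsets

// Listen for insets and read the system gesture insets (Q API); the
// left/right values tell you where edge swipes belong to the OS.
fun observeGestureInsets(view: View) {
    view.setOnApplyWindowInsetsListener { _, insets: WindowInsets ->
        val gestures = insets.systemGestureInsets
        // Keep draggable UI out of gestures.left / gestures.right.
        insets
    }
}
```

Obviously, we've had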
that for a long time from the top of the
screen and the bottom. And now, it's happening
from the left and right. So you don't want to put
draggable things there. But of course, clicks will
still work-- touches, taps. If you have a situation in your
app where you really, really, really need to be able to
drag in from the sides-- you've got a game or you've got
a sidebar navigation pattern or you've got a drawing app-- I hear those are popular-- you can actually use things
called exclusion rectangles to allocate a part of
the screen to say, hey, I actually need side gestures
in this particular area. So if you've got a horizontally scrolling thing or, again, that drawer, you could take away just one of those system gesture areas and bring it to your app.
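A minimal sketch using the Q exclusion-rect API; excluding the view's full bounds is just for illustration, and the system caps how much of the edge you can actually claim:

```kotlin
import android.graphics.Rect
import android.view.View

// Ask the system not to intercept edge swipes over this view's bounds.
// Re-set this whenever layout changes.
fun reserveEdgeGestures(view: View) {
    view.systemGestureExclusionRects =
        listOf(Rect(0, 0, view.width, view.height))
}
```

But remember, if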
the user is going to be relying on
this to go back, it needs to really, really
make sense that it's not going to do that in your app. And we are talking
about this, once again, on Thursday at 9:30, in a talk
about Dark Theme and gestures. With that, I'll hand
it back to Chet. CHET HAASE: Thanks. We're going to do a
really quick drive by in some of the random
platform improvements that we've had recently. WebView had a recent change to
introduce Trichrome, which is separating WebView from Chrome. They used to be in lockstep-- so you update one, you update the other. Now, they are more separate. That makes it a little bit easier
to deal with that as a user. We also detect when
the renderer is hung. So it's easy to set a callback, get the callback, and do something about it if you need to. Accessibility-- a couple
of important changes. It's way easier to set
accessibility actions now. That's a one-liner. And also, sometimes your UI-- the amount of time it
needs to stay on the screen depends on situations that
are related to the user. You can actually query from the accessibility manager how long that transient UI should stick around.
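A minimal sketch of that query, assuming the Q AccessibilityManager API:

```kotlin
import android.content.Context
import android.view.accessibility.AccessibilityManager

// Ask how long transient UI should stay visible, given the user's
// accessibility settings (Q API).
fun recommendedTimeoutMillis(context: Context, baseMillis: Int): Int {
    val am = context.getSystemService(AccessibilityManager::class.java)
    return am.getRecommendedTimeoutMillis(
        baseMillis,
        AccessibilityManager.FLAG_CONTENT_CONTROLS or
            AccessibilityManager.FLAG_CONTENT_TEXT
    )
}
```

There's a talk on accessibility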
Wednesday at 9:30. Go to that for more. ROMAIN GUY: We've also
been making changes to text. So in API 23, we
introduced hyphenation. And it looked beautiful. So we turned it on
by default. However, we then measured the
performance of that thing. And it turns out
it's 2 to 2.5 times slower than not
using hyphenation. It also turns out that most
layouts are slow because of text measurements. So in Q, it's now
off by default again. So sorry about that. [LAUGHTER] DAN SANDLER: Dash it. ROMAIN GUY: We have
Jetpack libraries to help you with this. But re-enable it if you really need hyphenation.
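Re-enabling it per TextView looks like this -- a sketch using the long-standing hyphenation API:

```kotlin
import android.text.Layout
import android.widget.TextView

// Hyphenation is off by default in Q; turn it back on per view if you
// need it, and accept the extra measurement cost.
fun enableHyphenation(tv: TextView) {
    tv.hyphenationFrequency = Layout.HYPHENATION_FREQUENCY_NORMAL
}
```

We also made it easier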
to find system fonts. Until now, you had to parse an XML file that was somewhere in the system. Don't do that anymore. Use this new API.
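A minimal sketch of the new enumeration API in Q:

```kotlin
import android.graphics.fonts.SystemFonts

// Enumerate installed system fonts instead of parsing fonts.xml by hand.
fun listSystemFonts() {
    for (font in SystemFonts.getAvailableFonts()) {
        println("${font.file?.name} weight=${font.style.weight}")
    }
}
```

We also have an API for this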
for native applications. So if you write a game or
anything with native code, you can use this new API
to find all the fonts. We also have a bunch
of new features that I barely understand, but
I'm sure they are amazing. So we finally have
implementations for LineBackgroundSpan
and LineHeightSpan. Those were two interfaces
that we shipped in the SDK without implementations. Sounds like a pretty
stupid thing to do. So we fixed it. TextAppearanceSpan used to
not read all the attributes that there were available
in TextAppearance. We also fixed that. We have new APIs,
LineBreaker and MeasuredText, to let you break lines
the way you want. And we have Zawgyi
support, font and encoding. So this is useful
for users in Myanmar. If you want to know
more about text, tomorrow at 6:30, there's a-- sorry-- a talk
called Best Practices for Using Text in Android. And really, slightly
related to text, so in P, we introduced this
new magnifier API that we use when we're
doing text selection. And you can use it to magnify
pretty much anything you want. We added new capabilities to that API in Q. So now you can change the corner radius of the window, the position, the zoom of the magnification, and the elevation to control the shadow.
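A sketch of those knobs, assuming the Q Magnifier.Builder API; the values are arbitrary:

```kotlin
import android.view.View
import android.widget.Magnifier

// Build a customized magnifier and show it around a point in the view.
fun showMagnifier(view: View, x: Float, y: Float) {
    val magnifier = Magnifier.Builder(view)
        .setCornerRadius(16f)  // corner radius of the window
        .setInitialZoom(2.5f)  // zoom of the magnification
        .setElevation(8f)      // controls the shadow
        .build()
    magnifier.show(x, y)
}
```

CHET HAASE: So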
private APIs-- this is a project that we
started in the P release, where we started limiting
access to private APIs. We really don't want
application developers calling these
things, because they can change between releases. So let's not do the
private API thing anymore. ROMAIN GUY: Don't
touch our privates. Wait. [LAUGHTER] CHET HAASE: Instead, we're
trying to point you to-- in a lot of cases,
there's no good reason to be calling that private API. Maybe we introduced a
public method in the time since you first introduced that
code into your application. Maybe you just missed
it in the docs. Whatever the reason
is, often, there's actually a much
better public way to do the thing
you're trying to do. So we're enhancing
the documentation to point you to the right thing. Or, in cases where there
isn't a public method and there's no particularly good
reason why there shouldn't be, we're introducing a
new public method. Or if there can't be
simply a new method for it, maybe there's a different
approach to do this. So we're also documenting
that, instead. And then we're
restricting these things by putting them on a gray
list which says, yep, it'll still work
through this release. But as of future releases,
this will no longer work. So it's time for you
to migrate your code. This is all about fixing
crashes for the user where it's going
to crash for them, because we end up changing
an internal method without realizing
that you're using it in a completely unexpected way. ROMAIN GUY: And we've
seen applications that use reflection to access
fields that have had getters and setters since API level 1. So if you're doing that,
please come tell me why. I really, really don't get it. [LAUGHTER] CHET HAASE: There's
good documentation on all of the changes
that we've made there and what we're doing about this. So go to the
developer.Android.com site for more information. There's some great
changes in ART. A couple of things we want
to talk about-- app profiles in the cloud. So this gives better startup for a recently installed application. Here's how it works. Nice diagram I stole
from their slides. Basically, all these
users in the world using this application. We're doing the
ahead-of-time compiles. We JIT the code. We figure out what needs to
be compiled on the device. We create that device code. We upload all of
these into the cloud and we figure out, what's
the base set of stuff that needs to be pre-compiled for
the apps to launch faster? And now, when you
install that application, you also install pre-compiled
code for your device. So you don't have to go through
the compilation step yourself, because other people did it
across the world for you. The team has also worked a
lot on startup improvements in general in addition
to that other stuff, and also worked on a
generational garbage collector. After ART came out, they had
a generational collector, but that was turned off, I
believe, in the O release. ROMAIN GUY: We gave
a talk on that. CHET HAASE: Yes, we did. And I'm sure we
said O. And it was because they came out with
a new concurrent collector, and that didn't work out to
introduce that at that time. So now we have a GC,
Generational Collector, again. It collects new objects first. So it turns out that a lot
of the garbage being created are things that are
really transient objects. So if we can look
through that list first, it's much smaller, much quicker. We can get all the space that
you need back, in most cases, without doing a full GC. Cheaper, faster, better. And that's what we're all about. There's a GC or ART
talk Wednesday at 11:30 you should go to for
more information. ROMAIN GUY: Kotlin--
ever heard of that? How many of you
are using Kotlin? All right. DAN SANDLER: Yay. ROMAIN GUY: So in the words
of Chet in the keynote this morning, we are
increasingly Kotlin first, whatever that means. So what this means-- that
all the new APIs that were introduced in Q come
with nullability annotations. So that's very
useful for Kotlin. It's also useful if you use
the Java programming language. And speaking of the
annotations, now, we enforce nullability as an
error and not as a warning anymore when you target
Q. So we will help you build better applications. Someone is really happy
about this over there. [LAUGHTER] Kotlin 1.3.30 also brought
incremental annotation processing in kapt. [APPLAUSE] All right. That helps all of you using annotation processors. And so there's a talk
tomorrow at 12:30, What's New in Kotlin on Android. And we're also adding
support for coroutines in various Jetpack
libraries like Room. And there is a
talk on Thursday-- Understanding Kotlin
Coroutines on Android. And if you attend the Jetpack
sessions on the Architecture Components, you will
hear more about this. Security-- this is where I'm out
of my depth, so bear with me. TLS 1.3 is now
enabled by default. And thanks to the 1-RTT handshake,
connections are 40% faster. I think that was right. We made improvements
to the biometric dialog so you can request implicit
confirmation instead of explicit confirmation. And if there's an issue with
the biometric recognition, you can also use a
passcode fallback. So the user can use their passcode instead. And finally, we have a
Jetpack security library that I know almost
nothing about. So please go attend this
talk, Security on Android-- What's Next? on Thursday. Finally, in PowerManager,
we introduced a new API. You can register
a listener to be notified when the device goes
under thermal throttling. So if you're doing too
much work or the device is getting hot for
whatever reason, you can use that feature
to back off, do less work, or adapt what your
application is doing. For instance, in a game, you can change the quality of the graphics to avoid the device running too hot and maybe turning off or using too much battery.
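A minimal sketch of that listener, assuming the Q PowerManager API:

```kotlin
import android.content.Context
import android.os.PowerManager

// Register for thermal status changes and back off when throttled.
fun watchThermals(context: Context) {
    val pm = context.getSystemService(PowerManager::class.java)
    pm.addThermalStatusListener { status ->
        if (status >= PowerManager.THERMAL_STATUS_SEVERE) {
            // e.g. lower rendering quality or defer background work
        }
    }
}
```

CHET HAASE: NN API has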
had some improvements. Now, 60 new ops to take
advantage of as well as massive latency reduction. And there is a talk
today at 6:00 PM. So go to that for more
ML, NN API goodness. ROMAIN GUY: And
that's the extent of what you know about
machine learning. CHET HAASE: Yes. Sure. Preferences-- so you
may already know this. This is not news. But if you're using Android
preference, you shouldn't. That is now deprecated. We want to move everybody
away from the platform onto the Jetpack
version of that, which is androidx.preference. We have some sample code here. You simply load in this XML, and it populates this UI. So you have these categories. Easy to set up. We've got built-in widgets for you: check boxes and switches. It all looks good, and it's all pretty easy to set up. So you should use that.
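The entry point looks like this -- a minimal sketch with a hypothetical XML resource name:

```kotlin
import android.os.Bundle
import androidx.preference.PreferenceFragmentCompat

// Loads R.xml.preferences (hypothetical) into a settings screen.
class SettingsFragment : PreferenceFragmentCompat() {
    override fun onCreatePreferences(savedInstanceState: Bundle?, rootKey: String?) {
        setPreferencesFromResource(R.xml.preferences, rootKey)
    }
}
```

And if you want to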
know more about that, there was a quick talk at ADS
in the fall which is on YouTube. So go check out that talk. Architecture components--
lots of things going on there all the time. WorkManager just went
stable, 1.0.1 and 2.0.1. And they continue to work
on more capabilities there. Same thing with
Navigation Controller-- just went stable recently. There's a talk on both of these. Well, there's a talk at IO. There's also an older
ADS talk on WorkManager. SavedState for ViewModel
just went alpha this week. Easier handling of
process restarts. Benchmarking-- new Jetpack
API for handling performance testing in your codes. And there's a talk on
that Thursday at 1:30. And finally, Lifecycles,
Livedata, and Room-- now deep co-routine integration,
as well as more capabilities there that will
be covered in the What's New in Architecture
Components talk tomorrow at 10:30. CameraX library-- we
touched on this briefly in the technical
keynote earlier. Easy to use camera library. It works around
the issues that you have of having device-specific
implementation of some of the platform APIs. Puts that work-around directly
in the library itself, so it's much easier
for you to use. It's backwards compatible all
the way to Android Lollipop. And it's a much
more concise API. So your code should
be much smaller to do all the right things
that you need to do. And there's an
extensions add-on. We're working with
manufacturers on this to provide access to
device-specific functionality like HDR. And you'll soon see some of
those extensions coming out for those manufacturers--
not just their new devices, but also existing devices. ROMAIN GUY: So we have a
lot of libraries in Jetpack to make your life easier. But there's one area where we have not done enough: building Android UI remains pretty difficult. And we designed our UI
tool kit a long time ago, over 10 years ago. And the state of the art
has evolved a little bit since then. So we took inspiration
from what you've told us, from things like
React, Vue, Litho, and even Flutter. And we decided to inven-- to invest, sorry--
in a declarative approach to reactive
programming. So, like Chet mentioned
earlier, in the keynote, we're working on a new tool
kit called Jetpack Compose. It's our next-generation
UI tool kit. It's unbundled, so
you'll be able to put it inside your application. We'll be able to deliver
updates whenever we want. It is reactive. And it's also entirely
written in Kotlin. We rely on the Kotlin compiler
plugin to make this work. It's to make it as
efficient as possible. We try to do as little as possible at run time. And what we're doing
today is, we're not delivering you binaries
you can use in production, but we're open
sourcing all the code. We're putting
everything in the OSP. And as of today, the entire team
will start working in the OSP directly. So you can join us. You can look at the source code. You can play with it. You can give us feedback. And you can even
contribute, if you want. So I want to show you a demo. So can we please switch
to the demo machine? DAN SANDLER: Can we just go
awkwardly stand behind him while he does this? [INAUDIBLE] [LAUGHTER] ROMAIN GUY: Yes. So-- that's bizarre. So I do work a lot with Chet. And he does make a
lot of good jokes. He also makes a lot of bad ones. So I built myself
a small app to be able to count those bad jokes. And I'm going to show you
how to build that app. So this is an app that's
using Jetpack Compose. One of the things
that we want to do is, we want to make it
a lot easier to create new widgets or new components. And with Jetpack
Compose, a component is just a single function. So you have an annotation here. But we're not using an
annotation processor. We're using a Kotlin
compiler plugin. So all you have to do is,
you create this function. Every parameter of the
function is effectively a parameter or a property
of your component. Here, I'm going to
add some states. So I'm not going to
go into the details, but I'm just creating a counter. It's initialized to 0. It's part of the internal state of the component. Then I'm going to
create a column, which is the equivalent of a
vertical linear layout. I'm going to put a button in it. And you can see that
the button, I just reuse the title parameter I received. And in the onClick
property, I can just increase the value
of my counter. Then, I want to display the value of that counter. And so I use a label. I'm just going to use, again, my state. Bad jokes. Sorry. And that's it.
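For reference, the demo he's typing looked roughly like this in the dev-preview Compose syntax; this is a reconstruction, imports omitted, and the API is changing rapidly:

```kotlin
// Reconstructed sketch of the counter demo (2019 dev-preview syntax).
@Composable
fun JokeCounter(title: String) {
    val badJokes = +state { 0 }  // component-local state, initialized to 0
    Column {
        Button(text = title, onClick = { badJokes.value++ })
        Text("Bad jokes: ${badJokes.value}")
    }
}
```

Now let's run this. Oh, I forgot one thing. I need to instantiate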
my component. Here we go. Let's cancel this. Let's run. We are also working hard
on making the compilation step as fast as possible. This is still heavily
under development. So don't worry. It will get better. But here, you have an entire
app that's fully reactive. As you can see, I did
not have to write an XML. So there's no XML layout. There's no annotation processor. There's no data binding. There's no listeners. And when the app
finally shows up, I'm going to blame Gradle. [LAUGHTER] It always works. It's very unfair,
but it always works. This is a build from last week,
so please cross your fingers for me. It should work. It worked until now. [LAUGHTER] It worked backstage
just before the talk. CHET HAASE: You know, we weren't
standing behind you last time. ROMAIN GUY: I'll try again. [CLAPPING] Thank you for your support. [APPLAUSE] Like I said, it's not
ready for production yet. Those are the early days. So if someone could
please fix this for us, the source code is available. Oh. Here we are. CHET HAASE: Yay. DAN SANDLER: Yay. ROMAIN GUY: And when I
click-- look at that. It increases the label. [APPLAUSE] No listeners. CHET HAASE: Nice. Thanks. I think that demo
really-- clicked. [LAUGHTER] ROMAIN GUY: Can we get
back to the slides, please? So if you want to know more,
you can check out this website, d.android.com/jetpackcompose. You will find all
the information on how to get to
the source code. We have written
it, so you can use the special version of
Android Studio that's required to make this work. And there is a talk
tomorrow at 3:30 where we're going
to go into more details about why we're doing
this and how these things work. CHET HAASE: But the UI tool kit
is not just about the new stuff that we're working on that
will be available sometime in the future. We're also still supporting
and enhancing all the stuff that you're using right now. A good example of
that is ViewPager 2. We've been asked for fixes
as well as more functionality in ViewPager for many years. We were sort of painted into
a corner with what we had. What we really wanted
was a ViewPager that was built around a
much more flexible model. So we built this
around RecyclerView. So RecyclerView capabilities,
but the ViewPager API. So it should be very familiar and easy to port to. We support RTL-- something that was missing for years. We also support vertical paging. Who knew? People actually want to scroll the other way. And much better support
for notifications from data set changes, as well. So please check that out. Alpha 4 shipped very recently, so development is ongoing, but it's hopefully usable in the meantime.
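A minimal sketch of the API, assuming the alpha; the adapter is any RecyclerView.Adapter:

```kotlin
import androidx.recyclerview.widget.RecyclerView
import androidx.viewpager2.widget.ViewPager2

// ViewPager2 is driven by a plain RecyclerView.Adapter, and vertical
// paging is one property away.
fun setUpPager(pager: ViewPager2, pagerAdapter: RecyclerView.Adapter<*>) {
    pager.adapter = pagerAdapter
    pager.orientation = ViewPager2.ORIENTATION_VERTICAL
}
```

ROMAIN GUY: And continued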
investment in our existing UI tool kit-- we're going to introduce soon
something called ViewBindings. Basically, we want
you to not have to call findViewById anymore. We also don't want you to
use annotation processors with this, because they tend
to slow down your build. So instead, we generate binding
code from the XML layouts directly. It is null-safe and
it is type-safe. And I'm going to
show you an example. So there is a talk tomorrow
on architecture components, where [INAUDIBLE] is going to talk about this in a little more detail. But here's an example. We have a file called
search_item.xml. From that file, at compile time,
using the similar technology that we use for DataBinding,
we generate a class called SearchItemBindings. And SearchItemBindings contains
a bunch of fields or properties that you can use to
access the views directly. So, for instance, the
root view of this layout is in a property
called rootView. So you can just say
binding.rootView and that's it-- you have
access to that top-level view. And as you can see, there's also
a text view called packageName. And again, we have a
property called packageName. It has the type TextView, so you can call .text on it directly.
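Putting that together, a minimal sketch using the generated class; the names follow the example from the talk, and the final API may differ:

```kotlin
// Generated at compile time from search_item.xml (per the talk).
val binding = SearchItemBindings.inflate(layoutInflater)
setContentView(binding.rootView)          // null-safe access to the root
binding.packageName.text = "com.example"  // typed as TextView
```

And this thing is really smart. It looks at all the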
configuration for that XML file and finds the type that works
across all the configurations. Graphics and media-- we're going
to go through this a little bit quickly. So we had this classical
PorterDuff.Mode on Android that was not documented
for the longest time. It's very confusing,
because it contains modes that have nothing to
do with porter and duff. So in Q, we're introducing
a new classical blend mode that basically
replaces PorterDuff.Mode. PorterDuff.Mode still works. But we're also
adding missing modes like HARD_LIGHT and SOFT_LIGHT. So basically, now, we have all the modes that you can find in Sketch or Adobe Photoshop and applications like those. So that should make your designers happy.
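Using it is a one-liner on Paint -- a sketch assuming the Q API:

```kotlin
import android.graphics.BlendMode
import android.graphics.Paint

// The new blend-mode API in Q; SOFT_LIGHT wasn't possible with PorterDuff.Mode.
val paint = Paint().apply {
    blendMode = BlendMode.SOFT_LIGHT
}
```

We're also exposing an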
API called RenderNode. This is what we use for hardware
acceleration inside the UI tool kit. So it's used by the
Views internally. And it contains a display list-- all the drawing commands that you issue on a canvas in your onDraw function. It also has properties such as rotation, alpha, scale, and things like this. And a render node can
contain other render nodes. So you can build a
tree of render nodes for super efficient rendering. So here's how you use them. You just create a new instance. You give it a
position and a size. Then, by calling beginRecording,
you get a special canvas that you can use to record
commands on the render node itself. And when you're done,
you call endRecording. Then you can manipulate
the properties. So if you want to fade the
content of the render node, you don't have to rerecord
all your drawing commands. You can just change
the alpha property. And again, that is
exactly what views do. Finally, when you're done with it, inside your onDraw method, you have to check that the canvas you're going to render onto is hardware accelerated. And if it is, you can call drawRenderNode and pass the RenderNode you just created.
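A sketch of that whole flow, assuming the Q RenderNode API; the shape and values are arbitrary:

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.RenderNode
import android.view.View

// Record once into a RenderNode, then draw it cheaply from onDraw.
class NodeView(context: Context) : View(context) {
    private val paint = Paint().apply { color = 0xFF2196F3.toInt() }
    private val node = RenderNode("demo").apply {
        setPosition(0, 0, 400, 400)   // position and size
        val canvas = beginRecording() // special recording canvas
        canvas.drawCircle(200f, 200f, 100f, paint)
        endRecording()
        alpha = 0.5f                  // fade without re-recording
    }

    override fun onDraw(canvas: Canvas) {
        if (canvas.isHardwareAccelerated) { // required for drawRenderNode
            canvas.drawRenderNode(node)
        }
    }
}
```

You can use RenderNode to create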
shadows without using a view. A RenderNode can
have an outline. So now, you don't need to
create a brand new View, which is a very heavy
object, just to create a shadow inside your
custom view, for instance. We're also exposing
something called a HardwareRenderer,
which is what we use to render the RenderNodes. So again, used by the
tool kit internally. It renders all those
nodes onto a surface. The cool thing about
HardwareRenderer is that it lets you
control the light source that we use in material design. So with material design,
we cast dynamic shadows on all the objects. But the position
of the light source has been fixed by the designers. With HardwareRenderer, you can
move that light source around for pretty interesting effects. So here's how it works. You create the HardwareRenderer. You give it a surface. Then you set the content root of
the renderer as the RenderNode. And then you just request
for a frame to be issued. Here are the methods you can
use to control the light source. You can change the alpha of the ambient shadows and the direct shadows. And you can also change the position and the size of the light source.
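A sketch of driving it by hand, assuming the Q HardwareRenderer API; the surface and root node come from elsewhere in your app, and the light values are arbitrary:

```kotlin
import android.graphics.HardwareRenderer
import android.graphics.RenderNode
import android.view.Surface

// Render a RenderNode tree onto a surface with a custom light source.
fun renderFrame(surface: Surface, rootNode: RenderNode) {
    val renderer = HardwareRenderer().apply {
        setSurface(surface)
        setContentRoot(rootNode)
        setLightSourceGeometry(200f, -100f, 800f, 20f) // x, y, z, radius
        setLightSourceAlpha(0.2f, 0.5f)                // ambient, spot shadows
    }
    renderer.createRenderRequest().syncAndDraw()
}
```

So here's a quick demo that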
shows you what you can do. Let's try this again. The shadows should be
moving around and changing. [LAUGHTER] Oh, here we go. CHET HAASE: There we go. ROMAIN GUY: I have no
luck with demos today. As you can see, we're
not moving the object. And the shadow is only
moving and changing blur because the light source
itself is moving around. Hardware bitmaps--
that is not a new API, but now a hardware bitmap
can wrap a HardwareBuffer. And a HardwareBuffer
is something that can be generated by
OpenGL, by the camera, or various other hardware units. They are useful
because that lets you use something that
changes frequently as a bitmap without incurring the cost
of a texture upload, which is something that happens
every time you modify a regular bitmap. And basically, you can
use it to wrap surfaces. So here, I have an example. It's a very silly
example, but it's a Lottie animation that's
using hardware rendering to render into a surface. Then it's wrapped into a bitmap
that's set as a bitmap shader on the paint that's
used to draw the text. It could be a video. It could be OpenGL renderings. So you could do a lot
of really cool things in your application. And I'm going to keep the code itself brief. Basically, we use
an image reader. You have to create
a new instance. An image reader will
give you a surface that you can use to render onto
either by loading in the canvas or giving it to the
hardware renderer. And every time an
image is available, you can acquire a
hardware buffer. And then you can call
Bitmap.wrapHardwareBuffer to get an actual bitmap. And then, finally, we can use that bitmap inside a bitmap shader.
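A sketch of that pipeline; the size, format, and usage flags here are illustrative:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapShader
import android.graphics.Paint
import android.graphics.PixelFormat
import android.graphics.Shader
import android.hardware.HardwareBuffer
import android.media.ImageReader

// Wrap frames produced into an ImageReader's surface as bitmaps,
// without a texture upload, and feed them to a shader.
fun attachLiveShader(paint: Paint): ImageReader {
    val reader = ImageReader.newInstance(
        512, 512, PixelFormat.RGBA_8888, 3,
        HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE or HardwareBuffer.USAGE_GPU_COLOR_OUTPUT
    )
    // Render into reader.surface (e.g. with a HardwareRenderer), then:
    reader.setOnImageAvailableListener({ r ->
        val image = r.acquireLatestImage() ?: return@setOnImageAvailableListener
        image.hardwareBuffer?.let { buffer ->
            Bitmap.wrapHardwareBuffer(buffer, null)?.let {
                paint.shader = BitmapShader(it, Shader.TileMode.CLAMP, Shader.TileMode.CLAMP)
            }
        }
        image.close()
    }, null)
    return reader
}
```

So in summary, this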
is all the classes you need to make this work. It's fairly simple
and self-explanatory. CHET HAASE: Obvious. ROMAIN GUY: So I'm
going to move on. [LAUGHTER] I think those are all
the graphics classes we have in the
platform, pretty much. Surface control is a way to
talk to surface [INAUDIBLE] directly. Again, we're
running out of time, so I'm going to skip that. There are new NDK APIs, and they're super cool. Vulkan-- all new 64-bit devices
that will ship from now on will be required to
ship with Vulkan 1.1. So if you have
questions about Vulkan, please come to the Graphics
Office Hours tomorrow afternoon. ANGLE-- this is
something experimental. We have OpenGL ES
running on top of Vulkan. This will allow us to deliver
updates to the OpenGL ES driver effectively as
just a simple APK. Right now, it's not
inside developer options. And inside your APK,
you also have to opt in. If you want to know
more about this feature, again, please come
to the office hours. In Android O, we introduced
wide color gamut rendering. We were using
16-bit color depth. Unfortunately, it was
using a lot of memory. It was also using
a lot of battery. So now we use 8-bit color depth,
and it solves these issues. And we have a bunch of APIs. So ColorLong is just like
the old color ints that you're used to. Those are a little
more powerful, because they let you
choose the color space. And now, we have a bunch
of new APIs on Canvas. So before, you could
render wide gamut bitmaps. Now, you can draw anything
with Canvas and wide gamut. So again, if you have
questions, office hours, please. Audio playback capture-- now you can capture the audio that other apps play on the device. There's a new class for that. You have to opt in or out. If your app doesn't
target Android Q, you're automatically opted out. If your app does target Q, you
will be opted in by default. So in case you are playing something that should not be recorded, please make sure to check out those APIs and turn it off.
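The per-stream opt-out looks like this -- a sketch assuming the Q AudioAttributes API (there's also a manifest-level switch):

```kotlin
import android.media.AudioAttributes
import android.media.MediaPlayer

// Opt this player's output out of playback capture (Q API).
fun uncapturablePlayer(): MediaPlayer = MediaPlayer().apply {
    setAudioAttributes(
        AudioAttributes.Builder()
            .setUsage(AudioAttributes.USAGE_MEDIA)
            .setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_NONE)
            .build()
    )
}
```

CHET HAASE: Really quick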
run through-- there are a lot of very
important privacy changes that most developers
probably need to understand. We're not going to have a lot
of time to cover them in depth, so it's very fortunate that
we have at least three privacy talks at IO that you
should check out. There's also really
good documentation on the developer's
side for the preview. External storage-- this
is about making sure that apps can't, by
default, access files created by all these other apps. targetSdk = P-- this
is to help migration. There is no change
from previous behavior. But if you're targeting Q, then
you are sandboxed by default. You'll only have access
to your own files. You can get access
to media files by using the traditional
storage permission, as well as the media store APIs. If you want access
to photo metadata, you need to use the storage
permission plus a location permission. And if you still need
access to all files, maybe you are actually
a file browser. There is a manifest
flag that you can look into for more information. And please go to the
talk Wednesday at 1:30 for more information on that. Changes to location-- we
now have a new model for location. The user needs to
give you permission to get access to
location if you're running in the background. An app can say, I want
location at all time. And the user can say,
no, you'll only get it when you're in the foreground. So we leave that up to the user. And there's a new permission
you have to request for background access. Talk on that tomorrow morning. So please go to that.
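Requesting it looks like this -- a sketch pairing the new Q permission with fine location; the request code is arbitrary:

```kotlin
import android.Manifest
import android.app.Activity

// Ask for foreground plus background location access (Q permission).
fun requestBackgroundLocation(activity: Activity) {
    activity.requestPermissions(
        arrayOf(
            Manifest.permission.ACCESS_FINE_LOCATION,
            Manifest.permission.ACCESS_BACKGROUND_LOCATION
        ),
        /* requestCode = */ 42
    )
}
```

We are restricting applications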
from launching activities when they're in the background. So in order to actually
start an activity, if you are used
to doing this, you need to either be in the
foreground to begin with, or you need to get pending
intent from a foreground app or from the system or a
broadcast from the system. Or, the best practice is
to launch a foreground notification. Then that is in the foreground,
and it can launch the activity. Basically, the user
needs to be involved. They need to understand
what's actually going on. Talk on this tomorrow
morning, early. So go to that. Camera-- if you're used to
getting camera characteristics, we restrict access to
device-specific data. So if you're getting these
kinds of pieces of information, you now need camera
permission to get those. And there's more information,
again, tomorrow at 8:30. Connectivity-- we're whipping through this really quickly. You cannot disable and
enable Wi-Fi on the fly from your application anymore. Instead, the approved practice
is to use a settings panel. If you really need
the user to toggle this for some
specific capability, there's a new
settings capability that we give you
where you can pop up a settings panel directly
inline in your app for specific capabilities
of internet, Wi-Fi, NFC, and volume. And it works like this.
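A minimal sketch, assuming the Q Settings.Panel actions:

```kotlin
import android.app.Activity
import android.content.Intent
import android.provider.Settings

// Pop the inline connectivity panel instead of toggling Wi-Fi yourself.
fun Activity.showConnectivityPanel() {
    startActivity(Intent(Settings.Panel.ACTION_INTERNET_CONNECTIVITY))
}
```

It will basically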
ask for that thing, and it pops up the panel
as your application is running in the foreground. Works like a charm. And with that-- DAN SANDLER: Oh,
I get to finish? CHET HAASE: --I think
Dan wants to say-- DAN SANDLER: Oh, I thought-- wow. Thank you. Thank you all so much
for joining us here.