[MUSIC PLAYING] CHET HAASE: Hello, everyone. Welcome to Kotlin
Under the covers? Under the Hood. ROMAIN GUY: Under the Hood. CHET HAASE: We've called
it many, many names. We'll call it this
one right now. We want to look at the way
some of the features in Kotlin work. I'm Chet Haase. ROMAIN GUY: And I'm Romain Guy. CHET HAASE: And
let's do this thing. ROMAIN GUY: Yeah, and we
gave this talk before, like, a few weeks ago in France. The difference is,
today we have people from JetBrains in the audience. So we're probably going to
say something very stupid. So feel free to yell
at us, it's fine. CHET HAASE: Yeah, so we'll see
if we were actually correct when we wrote the talk. So Kotlin, very
awesome language. Very cool. Lots of interesting features. ROMAIN GUY: You're doing it
again, you're taking my slide. CHET HAASE: Are you going to-- go ahead. ROMAIN GUY: So Chet
has this issue. In the speaker notes right
here, it says my name and he starts talking. And he keeps stealing
my slides every time. All right, so Kotlin is awesome. Who's using Kotlin here? AUDIENCE: Whoo! AUDIENCE: Whoo! ROMAIN GUY: All right,
so we don't really have to convince you
that Kotlin is great. But just in case, it's concise,
you have less boilerplate, you get powerful extension
libraries, we write our own. There's a bunch of AndroidX or Jetpack extension libraries for various platform APIs. It's fully compatible with the Java programming language, and therefore the Android APIs, which makes it very easy for you
to adopt in your application. And it has a lot of
modern language features, like coroutines. Chet, what are coroutines? CHET HAASE: Things. ROMAIN GUY: All right. And it's always evolving. CHET HAASE: But some of these
magical things in Kotlin really are magical
and mysterious. How do those things
work, especially since we're compiling
down to the same byte code that runs other languages
that don't have these features. So how does that stuff work? So that's the whole
point of the talk. Take a look on the inside
and see what's going on. We're going to take a look at
two different things right now. Today, we're going to look
at how these things work. And we're also going
to show you some tools to use to get this information. Basically, the tools that we
used to discover this stuff. We're not going to
do it in that order, we're going to do it
in this order instead. So first, we'll talk
about the tools. And then we'll talk
about the features. So tools. ROMAIN GUY: All right. So there are two
tools you can use. The first one is looking
at the Kotlin bytecode, there's a special tool for that in IntelliJ and Android Studio. And the second one,
part of the same tool, is you can convert the bytecode
back to the Java programming language, which sometimes is
very helpful to understand what's going on. So you don't have to
decipher bytecode yourself. Finally, the Memory Profiler. So it's not about the bytecode itself, but sometimes some of these language features
may create allocations, which may or may
not matter depending on what you're doing, what
kind of code you're writing. It's a useful tool to know
about in general, not just in the context of this talk. So here's an example,
primitive types are handled a little
bit differently. So in Kotlin when you say
this, var i0 equals 5, obviously you're declaring an Int, capital I. It's non-nullable, so it cannot be null. And it's equivalent to the lowercase "int" in the Java programming language. So it is still a primitive type. Now if you specify the
types yourself, same thing. It's an int, it's a primitive
type in the Java programming language. But if you declare
it as nullable, the only way we can handle that,
with the current runtimes, is that it has to become the Int? type in Kotlin, which
becomes the capital I, Integer type in the Java
programming language. And certainly, it's not a primitive type anymore, it's a full-blown object. And this can have consequences.
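[The snippet being discussed, reconstructed from the talk rather than copied from the slides:]

    fun main() {
        var i0 = 25          // type inferred as Int: a primitive int in the bytecode
        val i1: Int = 78     // explicit Int: still a primitive int
        var i2: Int? = 14    // nullable Int: boxed to java.lang.Integer via Integer.valueOf()
        println("$i0 $i1 $i2")
    }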
CHET HAASE: So how do we find out stuff like that? Well, as he said, one of the
ways that we can do that is by looking at bytecodes. So let's look at some now. [LAUGHTER] That's pretty self-explanatory,
so let's move on to something else. So in IntelliJ, there
are different ways to look at bytecode,
but fortunately, we have tools integrated
directly into Android Studio, into the IDE. If you go up in the
Menu, you can say, yeah, show me that bytecode, and then
you get this viewer over here. So if you'll look carefully
in the editor on the left, you'll see that the cursor is
on a particular line of code, and then it highlights
the equivalent bytecode on the right to show
you what's going on. Some of the information
in the bytecode, it gets a little noisy in there. There's metadata to tell
you what line that's associated with, a
little extra information. But there's a couple
of instructions, in particular, which are the
actual bytecode instructions. And then there is another
way of doing this. On the command line, you
can use javap instead and that'll spit out a file
that you can take a look at, nicely formatted bytecode. Just a different way
to do it that's not integrated into the IDE. So for the code that we
were looking at before, you have this var i0. We haven't given it a type, it's going to use type inference, but it's set to be
this integer value. So what's going on? We have a little bit of
metadata about the line numbers. And then it says, bipush
of the value of 25. So it's going to take
that value, 25, extended to an integer, push
it onto the stack, and then it's going to
pull it back off the stack and it's going to
store an integer into the first variable. So integer store, and then
we go to the second line, we're not using type inference. We've actually said, no, we want you to be an int. We're going to assign
you the value 78. Same thing happens. We're going to push a 78
extended to an integer onto the stack. We're going to pull
that off, store it into the second
integer variable. And then for the
third case, where we have this
nullable type, we've got i2, that's going to be a
nullable int of a value of 14. First step is going
to be the same. We're going to
push byte extended to integer onto the stack. But then, we're going
to call the method, we're going to call
Integer.valueOf, which looks like this. It takes in a primitive int
type and it returns an integer. So in the middle
of that, it's going to box it into an
integer type, creating that object on the fly. And then return
the integer type. And then instead of storing
an integer into a variable, it's going to store a
reference to the object that we've created. So if you don't enjoy
reading bytecode it's really not
that complicated. ROMAIN GUY: You're
doing it again. CHET HAASE: What's that? ROMAIN GUY: This is my
name on the speaker notes. CHET HAASE: That's why
I was introducing you. Romain is going to
tell you what to do. ROMAIN GUY: Keep going. You started it. CHET HAASE: So if you don't want
to read all of that bytecode, it's actually pretty
straightforward. There's some simple
reference docs out there that you can take a look at. You're pushing, your
popping, you're setting. But if you don't want
to deal with that, there is an easier way,
especially a more concise way, to see what's actually going on. And for that you would
use the bytecode-- ROMAIN GUY: No, keep going. CHET HAASE: --Decompiler. ROMAIN GUY: It's fine. CHET HAASE: You want me to keep? ROMAIN GUY: No, keep going. CHET HAASE: All right. Rest of the
presentation is mine. Here we go. ROMAIN GUY: I'll go over there. CHET HAASE: What's that? ROMAIN GUY: I'll be over there. CHET HAASE: All right. So [LAUGHTER] let's say you've got
this code-- could you go a little bit further, actually? [LAUGHTER] So you've got this code. You have the bytecode
representation over there. And if you look at the
top of that window, you've got this button
that says Decompile. So you click on that thing,
and then in your editor window, you'll be shown some
Java programming language code that looks more like this. So you got the bytecode
fairly verbose. All the things going on, each
of the lines of Kotlin code may expand to several
lines of bytecode along with the metadata. Or you can see this fairly
terse Java code instead. You see, basically,
straightforward things going on there. You have some int values, you
have some integer values that got auto boxed, and then we're
printing out the values there. Just like that. You want to talk about
the Memory Profiler? ROMAIN GUY: No, 'cause
it says your name. CHET HAASE: It does. ROMAIN GUY: OK, so-- CHET HAASE: I'm going to talk
about the Memory Profiler, there. The third approach that we have,
using the tools to find out what's going on,
is to actually see what's going on with allocations
in collections in the system. So one of my favorite tools,
when I joined the team and for many years
after that, was-- oh, is this proof? ROMAIN GUY: I'm
going to tweet that. [LAUGHTER] CHET HAASE: Allocation Tracker. Allocation Tracker was a tool
that you would run in DDMS and you would start it
at some point in time and you would use
the app for a while. And then you would stop
it and it would say, here are all the objects that
were allocated on the fly. And then you could click
on any of those instances, and it will show
you the call stack. Really powerful, really useful,
wasn't integrated in with any of the rest of the tools. So, kind of a pain to
get to, a lot of people didn't even know it was there
because actually finding it was a little bit tricky. So now we have Memory Profiling
directly integrated into the ID with the rest of the profilers
that have come online in Studio in the last few years. So this also allows you to
track memory usage over time, you can see how big the
heap is at any given time, you can see when Garbage
Collections happen, and what happened
because of those, and you can catch leaks,
which is really powerful. This is another thing
that, yes, we allowed you to do it on Android. But oh man, did we require you
to jump through a lot of heaps. You would dump a heap and
you would transcode that into a different format. And you would use
some external tool and then you'd walked through
this amazingly complex set of information. Now, all of that stuff
is integrated directly into the ID. And after all of this, it
allows you to track allocations, and that's what we
care about today. So I wrote this
code all by myself. Really simple thing. We've got this nullable int value. You've got a tight loop in there, in the method, we're going 0 to 10,000. So 10,001 times it's going to whip through the loop and it's going to set that value. We know that that's going to end up being a capital I Integer value set from the primitive int loop value that we've got.
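[The code being profiled, reconstructed from the description; the class and property names are placeholders:]

    class Example {
        private var value: Int? = null      // nullable, so every assignment boxes to a java.lang.Integer

        fun run() {
            for (i in 0..10000) {           // 10,001 iterations
                value = i                   // values in the Integer cache (-128..127) are reused, hence 9,873 allocations
            }
        }
    }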
So we can click on this in the lower left of the IDE. You've got this little toolbar
of little tools you can run. If you click on the
Profiler, you'll get something that
looks like this. It shows you all the profilers
that we currently offer. You've got CPU, you got the
Memory that we'll look into, you've got Networking, and
you've got Battery Power Usage. So if you click
on the Memory one, it expands to take over
the whole screen there. And you can see heap
usage over time. Now, what we really care about
here is, what happened there? Right? I don't care when
it's not changing. What I care is, what caused it
to actually bump up allocations in heap usage and why was that? So you can drag the
cursor along there. You can select this window of
time to see what's going on. And just like
Allocation Tracker, you can see all of the things
that were allocated down below. And you can see
that, in fact, there are a bunch of capital I
integers they got allocated. In fact, there
are exactly 9,873. Which is weird. We're in a loop going
through 10,001 times, why do we not have 10,001
allocations going on? So there's the loop and
that, for some reason, didn't equal 10,001
allocations and it's because of this caching
logic that we have. When the runtime
starts up, it knows that most applications are going
to need some integer values. And so it caches values
from negative 128 up to 127, puts them in a cache. And now when anybody asks
for a value for one of those, it's just going to return that. It's already been
created as an integer, it's not being
boxed and allocated on the fly, which means
the only values that are going to be allocated
are outside of that region. Which turns out 10,001 minus
128 is going to be 9,873. Has nothing to do
with this talk, I thought it was interesting. So the one that we care
about are the capital I integer allocations. So we're going to click on
that, and that brings up another window that shows
each of those individual allocations, when they
happen, and what was going on. We can click on one of those and
just like Allocation Tracker, it pops up the call stack. And from that, we determine
that, in fact, that was being allocated because of
the boxing operation that was happening in that tight loop. ROMAIN GUY: Let's talk
about language feature. AUDIENCE: Whoo! [APPLAUSE] ROMAIN GUY: I appreciate this. Thank you for the cheers. But maybe wait until the end. [LAUGHTER] CHET HAASE: Do you mind
if I take this one? ROMAIN GUY: Yeah, I do actually. I can just leave if you want. Anyway, enums, your
favorite topic. So please use enums
but we're going to talk about them anyway. So here's an enum I wrote,
all by myself as well, called Blend Mode. We have some values, it doesn't
really matter what they are. And here's how I'm
using the enum. I'm using a when
statement, effectively the equivalent of a switch in
the Java programming language. And for every
value of the enum I call a function, that doesn't
really matter what we do.
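[A sketch of the kind of code being shown; the enum values and the called function are placeholders, and the file is assumed to be Blending.kt:]

    // Blending.kt
    enum class BlendMode { OPAQUE, TRANSPARENT, FADE }    // placeholder values

    fun handle(name: String) = println(name)              // placeholder for whatever each branch calls

    fun blend(mode: BlendMode) = when (mode) {
        BlendMode.OPAQUE      -> handle("opaque")
        BlendMode.TRANSPARENT -> handle("transparent")
        BlendMode.FADE        -> handle("fade")
    }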
What's interesting is what happens when we look at the bytecode. So here's what it looks like. The first thing it does is call this GETSTATIC opcode. It accesses an int array, so that's the square bracket with the capital I at the end. It's a field in a class called BlendingKt$WhenMappings.$EnumSwitchMapping$0. I never wrote a class called this in my code, but there it is. And then what's interesting is, after it gets this array, it invokes the ordinal method on the enum value itself. And then it does an IALOAD. So IALOAD takes the output of the ordinal method call and uses that as an index in the array that we just fetched. And finally, it does the actual switch. So it doesn't switch on the enum value itself, it first goes through
another intermediate array. And we're going to look at
what this array looks like, and then that's our code. So here's what the
generated mapping class looks like. So what's interesting
here, is that this is not specific to Kotlin. The Java programming
language compiler will also do the same thing. But it's still
interesting to look at it. So the code that gets
generated does this. There's this special
mapping class that contains an array that has
the same length as the number of enum values that you
have in the enum class that you declare. And then, the static
[INAUDIBLE] code, this array is populated with the
values of the enums and some special constants
generated by the compiler. So what happens if we create
another method in our code that does another
when on the enum? So we do a when
the same order, we switch over all the
values of the enum. If you go back to
the generated class, we can see there's a second
array that was generated and it's also
populated with enum values in some magic
constants by the compiler. And actually, you can
see that those two arrays are exactly the same. They're the same
length and they contain exactly the same mappings. But if we use the enum once again in a when, and this time we list the enum values in a different order, and we look at the generated code, we have a third array, as expected, also the same length, but the mappings are now different. I believe the
reason the compilers do this is that if you change
the enum, the code that was compiled before with
the old version of the enum, will still work so they
need this indirection. What becomes interesting is,
obviously most of the time, you will never need
to care about this. Unless you have a lot of when on enums in performance-critical code: you're going to have extra code that gets generated and runs at class initialization time, and you have this extra heap memory. But most of the time, you
care about this. Use enums. CHET HAASE: All right,
let's talk about laziness. I know a lot about this. It's a very common pattern
in software to say, I may need to
allocate this thing. But it's going to
cause a lot of work and maybe I don't want those
allocations in the background work to happen to do that
because maybe the code won't need it. So we'll do it lazily. We'll do it sometime later. So the manual approach
looks like this. So caveat, really
stupid example. Nobody should ever do this just
to avoid allocating an int, no matter how it's implemented. ROMAIN GUY: It is a
very stupid example. CHET HAASE: It is a
very stupid example, there is a very real example
that is almost as stupid, which is code that
we actually saw. Which did lazy allocation
for a Rect object, which is just four times
less stupid than this. So allocating 4 integers,
but for some reason, they wanted to do it lazily. Because why allocate
if you don't need to? Right? So, bear with me,
so we have this int. We're going to set
it equal to 573. But, maybe nobody is
going to need this, so we're going to
make it lazy instead. So we're going to
have a private member variable that's nullable. Now, part of the problem here, as we know from the earlier example, is that now we have something that could have been a primitive, but all of a sudden we've made it an object type instead, just because we want to set it to null. But anyway, you've got this
private member variable and then you have
the actual thing that they're going to access. And you say, OK, when they
actually call the getter, we'll see if it's null. And that's the
trigger to say, OK, now actually set the value of
this thing and return the value and you're done. Next time they ask for it,
it's already allocated and set and you don't have
to worry about it. That's the general
approach, I'm sure we've all done this hundreds
of times, hopefully, for less stupid
examples than this one. All right, this is what it
looks like at the Java level, this is essentially
what it compiles to. If we look at the
decomplied bytecode, it looks exactly as the
way we would expect it to. You've got the same thing
going on with ints and integers and it's doing the
right thing there. But, there's a better way
to do this with Kotlin. There's an automatic
approach that takes exactly one line of code. Isn't that much better to use? Say, by lazy, we'll set it to 574. So it doesn't get set to anything until someone asks for it the first time. And then it goes through some operations to actually allocate and return this value of 574.
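[The two versions side by side, sketched; the property names are placeholders, the values 573 and 574 are from the slides:]

    class Holder {
        // The manual version from the previous slides:
        private var _value: Int? = null
        val manualValue: Int
            get() {
                if (_value == null) {
                    _value = 573
                }
                return _value!!
            }

        // The automatic version, one line:
        val lazyValue: Int by lazy { 574 }
    }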
And the question is, what operations does it go through? So, I should explain
first, lazy is implemented using property
delegates in Kotlin and it's this generic mechanism
they have for delegating logic somewhere else. So when someone asks
for this variable, I want you to run this
logic over here and return other information from there. The other background piece
of information is properties. Properties in
Kotlin do much more than properties in the Java
programming language, which means, if you're going to access
information on a property using reflection, well, how does
that work through the bytecode since Java programming language
doesn't have that capability? Well, Kotlin needed
to add capabilities so that when you
use a reflection it will go through
their additional stuff to actually get that
extra bit of information from their property. So they created this
class called K Property. And now if you use reflection
on a Kotlin property, it can go through the K Property
and get that extra information that it needs. So that's the background, here's
what's going on inside of lazy. First of all, when you say,
by lazy, it automatically creates an array. Allocation brain should say,
oh, there's an allocation. So you've got this
K Property array where it's going to set up the
information for getting things from this Kotlin specific
property reflection mechanism that they have. So we've got the
array, and the array consists of one item, which is
this K Property that they've set up specifically for
this lazy property you've told it about. You've got the class name. You've got the type. And that's basically
it, you've got the name of the thing, right? So it sets us up array plus an
allocation of the K Property inside of it. Then it sets up this
call internally, this is, sort of,
initialization time code. It says, OK, there's
this lazy thing that's going to return
a class of type lazy, and from that, we
can get the value. So when you call the
getter on that property, it's going to call
into this code, which is going to call a get
value on that lazy object. And then it's going to
call int value on that. All of this could bottom
out in reflection code to go get it from
K Property, which is even worse than the
allocations we've seen so far. At the end of all of this, you
may end up in reflection code just to get an integer value. However, because of
extension methods, they're more clever than that
and they actually spit out this extension
method that says, oh, if they're using the
lazy class then we're just going to call
this function instead. So no need for reflection
even though K Properties stuff was set up and has
reflection capabilities, we're not using
that aspect of it. Instead, it will just
call into this method and get the value,
which is this. Simpler than
reflection, however, couple of conditions to
check, and a synchronized block in there, just
to get this int value from this very stupid
example I've written. In addition to that,
when all of this is setup we go through initialization
code, which in bytecode looks something like this. So, basically, whenever
you get into this and initialize this
object, this is what gets emitted on your
behalf under the hood just to save an allocation
of an integer object. As I said, stupid
example, but real example is people doing it for small
data structures like Rect. So lazy is awesome. It's really cool that you can
do this in one line of code. And if you have a
complicated data structure, or a lot of complicated
object, when it gets set up and if it is not going to be
used very often, totally worth considering. For avoiding the allocation
of a simple int, probably not. ROMAIN GUY: So now let's take
a look at unsigned numbers. So this is a new
experimental feature that was introduced
in Kotlin 1.3. So Java has mostly signed
numbers and Kotlin is finally bringing unsigned numbers,
which can be very useful when you do graphics code. For instance, when you
have a width and a height, unsigned is very useful, as we know, as a type check that you're not going to get a negative value, which would make no sense for a dimension. So if you enable this
experimental feature, here's how you can use
the unsigned numbers. You can use the U suffix to
declare a type to be unsigned. And under the hood,
they're implemented using another experimental
feature called Inline Classes and we're going to take a
closer look at Inline Classes in a little bit. But first, let's take
a look at this example. So I declared two
unsigned numbers, and then I just add them. So what happens? So every unsigned number
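[A minimal sketch, assuming the experimental unsigned types of Kotlin 1.3; the actual values from the slide aren't shown here:]

    val a = 2u       // UInt: stored as a plain int on the JVM
    val b = 3u
    val c = a + b    // compiles down to an ordinary integer IADD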
So every unsigned number is just an integer that's pushed on the stack and stored in a variable. So here, there's nothing
interesting to see. And when we add them, we just
load those two variables. We call the IADD up
code in bytecode, which is just the addition
of two integers, which are normally signed. But they were very clever and they realized that, because of the way that signed integers are encoded, addition actually works with unsigned as well. So they didn't have to
create anything new, you just use the
existing bytecode, everything happens at
their language level. The only thing that's
really bizarre here, is that we invoke a
static function called UInt.constructor-impl. And you can see that it takes an
integer and returns an integer. So if we look at the constructor-impl function to see what it does, that's where things get a little surprising. And I'm sure the JetBrains folks
would know why it does this. I have no idea. So we have this function, takes
an integer, returns an integer, and here's the implementation. It loads the perimeter
that you give it, and then it returns it. And that's it. So I don't know why it's there,
it's maybe for the debugger so you can break on it. But other than that, I know that the [INAUDIBLE]
or the ahead of time compiler will get rid of it or
probably [INAUDIBLE].. So we don't need to
worry about this. It's just a little strange that
we have this extra bytecode for no reason. So if we look at
the other operators, we saw that the addition
is just implemented with the existing opcode, add, same thing for the subtraction, same thing with the multiplication. It's only when you do a division that Kotlin has to invoke a special static method, because unsigned division works differently. So there's more
code that runs here. So most of the time, unsigned
numbers are basically free. So you should feel
free to use them. Now let's look at what
happens when you try to print one of those numbers. Basically, when you try
to call toString on it. So here I have a signed
number and an unsigned number and I print them both. For first, let's look
at the sign number. So we have this variable
called z, it's equal to 42 and we call println on it. So that's exactly
what we expect. But when we use the unsigned
number, instead of printing it directly, because println in the runtime does not know about unsigned numbers, we have to go through something else. And because the unsigned numbers are implemented using inline classes, we have to box. So we call this
function called box-impl. It takes our unsigned
integer and wraps it into an instance of a class called UInt. And that gets passed
to println because UInt has a toString method
that's implemented and does the right thing. So even if you use
unsigned numbers and they are most
of the time free, you can end up doing
boxing and therefore, allocation fairly easily if
you just add them to a string, for instance. CHET HAASE: Let's
talk about ranges. One of the curious things
that a lot of programmers hit when they see Kotlin for
the first time is the for loops. They, kind of, look for the
syntax that they always knew from Java or C or C++
or other languages. And it's not that. Instead all of the for
loops work with ranges. And there are many
ways to do this. So you could say,
for i in this range of 0 to 10 inclusive or
exclusive until you get to 10. Or you could just say, repeat
the following operation, give it the lambda there. Or you could put the
range on the front. These all look the same to me, modulo that off-by-one error that we have in the second one.
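[Roughly the four variants being compared; doSomething() is a placeholder:]

    fun doSomething(i: Int) = println(i)   // placeholder

    fun main() {
        for (i in 0..10) doSomething(i)          // inclusive
        for (i in 0 until 10) doSomething(i)     // exclusive: the off-by-one one
        repeat(10) { doSomething(it) }
        (0..10).forEach { doSomething(it) }      // range in front: builds an IntRange and iterates it
    }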
It turns out they're kind of different under the hood. So that first one works the
way that we would think. It's, basically,
equivalent of the for loop we would expect in Java. Second one same
thing, except we're not going all the way
to the last value there. Same thing for the third, like
all straightforward iterations. But the way the last
one is implemented is a little different. Instead, we're going to
create an iterable object. We're going to create
an int range object, and then going to use the
iteration mechanism, which seems a little bit heavyweight
for just having flipped the range to the front there. Not a big deal,
but kind of curious that they're all quite
different under the hood. ROMAIN GUY: So in the
traditional for loop with other languages
you can easily increment the counter by more than one. So instead of plus plus, you can say plus-equals 2. And the way you do
this in Kotlin is you can use the step operator
or infix function. So in that case, you're going to go from 0 to 10 inclusive and we want to increment by 2, so we say step 2.
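[The step variant, sketched:]

    fun main() {
        for (i in 0..10 step 2) println(i)   // becomes an IntProgression plus a while loop
    }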
And I would expect the code to look exactly the same as before, but we get this instead. So just by adding step, the simple translation by the compiler basically disappears. And instead we have
to create a range. We have to create
this IntProgression, and then the compiler calls getFirst, getLast, and getStep, which in this case are constant. So it's not entirely necessary. And then we have
a while loop that makes sure that we do it
the right number of times. And it also happens
if you said, step 1, which makes no sense to me. But assuming it's a missing
optimization in the compiler. CHET HAASE: That
first step is a doozy. ROMAIN GUY: Right,
so inline classes, I touched on those
a little bit when I talked about unsigned numbers. So inline classes are
where to wrap a type 1 field effectively. And the point of inline
classes, as the name suggests, is that they disappear
at compile time. So that with unsigned
numbers there are a classical UInts but really
when you look at the bytecode, all you see in the end
is just an integer. So here, I created my own
inline class called, Color. And wraps an integer, because
color is often defined as an integer or Android. And I've created
custom properties to be able to extract the red,
green, blue, ans alpha channels from the int without having
to do that dance myself every time. So now, if we try to use
that inline class ourselves, I created a print
function that just formats a string using
the different channels of the color. And I create and instance at
the bottom in my main function. I called the printColor
function and just declare. And here's what
the bytecode looks like in the main function. So first, we push a
constant on the stack, it's that weird long
number that corresponds to the [? Higgs ?] additional
value that you saw in the code. And then we invoke that
color, that "constructor-impl" function. And you can see
it's signature, it takes an int, which is expected
because we wrap an integer and it returns an integer. So it's exactly what was
happening with unsigned before, we saw that this
constructor actually doesn't do anything. Just takes the input and
just returns it directly. So there's nothing to see here. What's interesting is what
happened to my printColor function. So in my source code,
it takes a Color instance. Well, you can see here, now it's called printColor-something, some weird name, it's probably a hash of something. Instead of taking an
instance of color, it takes an integer directly. So the compiler
rewrote the function to work directly on the
primitive type I was wrapping. That said, sometimes inline
classes are not free, we saw that with
toString earlier. And the same thing can
happen with the == operator. So here, I create two instances of my Color class and I just print whether or not they are equal. And I do it in two different ways. First, we use == and then we do .equals. And those should be equivalent, they should be the same thing.
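[The comparison being discussed, sketched; it reuses the Color inline class from the earlier sketch, and the color values are placeholders:]

    fun main() {
        val a = Color(0x11223344)
        val b = Color(0x11223344)
        println(a == b)          // boxes both a and b, then calls Intrinsics.areEqual: two allocations
        println(a.equals(b))     // boxes only the argument and calls the static equals-impl: one allocation
    }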
However, when we look at the bytecode, here's what happens. So when I do a == b, first
we load one of the variables, I think it's b, then
we call "box-impl". And you can see that call
generates a Color instance from that integer. Then we do the same thing
again for the a variable. And then finally, we
invoke a static function called, areEqual, that's
an intrinsic in the Kotlin standard library. So just to be able to compare
our two integers that we know are there, because we saw that
the compiler sometimes it's smart enough to get
rid of the class, we just boxed it back
into actual objects. We created to allocations just
to be able to compare those two integer values. If you call a.equals(b),
things are a little different. We load one of the variables
and we box the other one. So instead of having
two allocations, we have only one allocation. And instead here, we're calling
a method called, "equals-impl" on the color class. And you can see it takes
an integer and an object. So if you're gonna use inline
classes, and you don't care about nullability, in this case you're going to allocate half as many objects if you say a.equals(b) instead of a == b. CHET HAASE: Take a
quick look at arrays. The implementation
depends very much on little subtle differences
and how you declare things. So here we're going
to call intArrayOf and we pass in these things. And it's going to
say, yep, here's a primitive array that
contains those things. Or we could say arrayOf and we're going to pass in these things; we know that Kotlin is really good at type inference and these are obviously ints, so it says, yep, here's your Integer array. Or we can say,
give me a IntArray and actually ask for
this thing specifically. And then it does the right thing
with a little bit more code. We're, basically, initializing it inside the lambda there, and it creates that primitive IntArray to whip through that.
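[The three variants, roughly; the actual values are placeholders:]

    val a = intArrayOf(1, 2, 3)     // primitive int[]
    val b = arrayOf(1, 2, 3)        // Array<Int>, backed by boxed Integer objects
    val c = IntArray(3) { it }      // primitive int[], initialized inside the lambda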
ROMAIN GUY: All right, so with lambdas there's one corner case that can
be a little bit tricky. And we just recorded a podcast this morning with the Pocket Casts folks, and Tor mentioned that.
and Tor mentioned that. CHET HAASE: You
just leaked that. ROMAIN GUY: Yeah. CHET HAASE: Uh-oh. ROMAIN GUY: So what
you're going to see is a possible
programming mistake. Thankfully, we have
a lint check in Android Studio that warns you against it. So for instance, a class
written in the Java programming language, it's a
widget of some kind. And you can register listeners,
so we have this interface, it has a single abstract
method, it's a SAM interface. You can add a listener, you can remove the listener, and you can ask how many
listeners are registered. Now let's try to use
this from Kotlin. So first of all, we
instantiate our widgets. Then I create my
listener as a lambda. And if we go back, our single
abstract method takes a widget instance as a parameter.
So we create a lambda that matches this signature,
it takes a widget and then we do something
with it, we print it. Then I call addListener
on my widget and then I print the number of
listeners that are registered. And finally, I try to remove
that listener and I print the number of listeners
that are registered. So what's going to
happen is that, when we print the number of listeners after adding it, it's going to say there's 1 listener, that's completely expected. But after calling remove, the number of listeners is still 1. The remove did not work.
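[A sketch of the Kotlin side, reconstructed from the description; it assumes the Java Widget class and Listener interface from the previous slide, and listenerCount() is a hypothetical name for the count query:]

    fun main() {
        val widget = Widget()
        val listener = { w: Widget -> println(w) }   // inferred as (Widget) -> Unit, a Function1, not a Widget.Listener

        widget.addListener(listener)       // the lambda gets wrapped in a new Widget.Listener instance
        println(widget.listenerCount())    // prints 1
        widget.removeListener(listener)    // wrapped again, in a *different* instance
        println(widget.listenerCount())    // still 1: the remove did not work
    }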
And to understand why it doesn't work, we have to look
at the code that's generated by the compiler. So here's what it looks like. Our listener, because
it's a lambda, it becomes a Function1 type internally. And Function1 is effectively your lambda that has one parameter. But because that type is Function1, it is not the Listener type that addListener expects. So the compiler generates
this extra class called Widget_Listener that
is of the type listener that we expect. And it passes it our
function 1, our lambda, so it wraps it into
something else. And it's that other object that
gets passed to addListener. And you can probably guess
what's coming up next, when we remove listener
with our lambda, it gets wrapped again
with a different instance. So we're trying to remove
a different listener, and so we actually
leaked our listener. And this is the
kind of stuff that Android Studio will
warn you against. And the fix is fairly easy. You just have to be specific
about the type of your lambda. Just say it is a Widget.Listener and don't just use the naked lambda form.
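[The fix, sketched with the same assumed Widget class:]

    fun main() {
        val widget = Widget()
        val listener = Widget.Listener { w -> println(w) }  // SAM constructor: one concrete Listener instance
        widget.addListener(listener)
        widget.removeListener(listener)                      // removes the same instance, so it works
    }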
CHET HAASE: So let's take a look at how extension functions actually work under the hood. So you have this simple
class that I've defined. Again, I wrote this
code all by myself. We have a Superclass and
then we have a Subclass, which extends that Superclass. Awesome. Then we have a couple
of extension methods that we've defined,
one on the Superclass, one on the Subclass. And they print out
this value, or they return the string
value that indicates which one was actually called. So the superInstance,
we say, Yep, give me one of those
Superclass objects. Subclass, give me one of
those Subclass objects. And then we have one
where it is a Subclass but we cast it to a Superclass. And then the question
is, what happens when we call getIdentifier
on each of these things? And then we have
one more example, where instead of pre-casting
it, we're casting it at runtime through a Superclass. So the question is, when we call
getIdentifier on the SuperVal we get-- anybody awake? AUDIENCE: Super! CHET HAASE: There we go! What a super answer. All right, how
about the next one? AUDIENCE: Sub. CHET HAASE: All
right, thank you. And number three? AUDIENCE: Sub. CHET HAASE: Wrong. Sorry. Super. So this is the one that's
a little bit surprising. And finally, the
last one is the same. So it is a Subclass, what
is actually going on? Like, shouldn't it be calling
that method on the Subclass? It's because of the
implementation these things. So if we look at the
decomplied bytecode, this is what we've
got. getIdentifier is now a static
method and it takes an instance of Superclass. Same thing for sub, the
getIdentifier on the Subclass object. So that means that when we call
it with a Superclass object, we end up in the one that
takes a Superclass type. Same thing for sub, and same
thing for sub cast as super, we have told the type system
this is a Superclass no matter how we actually
created it, so it's going to call getIdentifier
that takes a Superclass instead. And same thing for the thing that's cast at runtime.
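[The example, reconstructed; the names approximate what was on the slides:]

    open class Superclass
    class Subclass : Superclass()

    fun Superclass.getIdentifier() = "Super"
    fun Subclass.getIdentifier() = "Sub"

    fun main() {
        val superVal: Superclass = Superclass()
        val subVal: Subclass = Subclass()
        val subAsSuperVal: Superclass = Subclass()

        println(superVal.getIdentifier())                    // "Super"
        println(subVal.getIdentifier())                      // "Sub"
        println(subAsSuperVal.getIdentifier())               // "Super": resolved statically on the declared type
        println((Subclass() as Superclass).getIdentifier())  // "Super"
    }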
ROMAIN GUY: Default parameters. It's a very handy feature in Kotlin. You can specify default
value for your parameters and then you can use
either name parameters to invoke them or just
omit the parameters and rely on the compiler
to do the right thing. So here, I have a
very useful method. It takes two floats
and just adds them and returns the results. Chet, where the rest of the-- oh, sorry. Wrong side. All right. So here's the code
that gets generated. So from that code that we have
that take default parameter values, we have a
new static function that was created that just takes
our two floats with our default values, because we
generated bytecode, and it just adds them
and returns the results. But there's another
method that was created. So it takes two
floats, as we expect, and then it also takes
an integer and an object. As far as I can tell, the
object is always set to null, so I'm not sure why it's there. It's probably to tag those
methods in a way or another. I should ask JetBrains, they
will probably know the answer. And the integer is
actually a bit field, where every bit tells the method
which parameters you did not specify at the call site. So you can see here, when
we check the first bit var 2 and 1, if it's set,
that means that we did not specify a value. So we use the
default value that we specified on the Kotlin site. And then we do this for
everybody parameter, and at the end we invoke
the actual function that's at the top. What's interesting, because it's
an integer I was wondering-- there's only 32
bits, so what happens when you have 33 parameters? And we're going to take a
look at that in a moment. But just here are some
examples of how it works. So if I called my
function directly without specifying any
values, the Kotlin compiler would just use the default value, 0. And then for the bit field, it passes the value 3, so bits 1 and 2 are set. And it's going to use
the default values. If I specified the
first parameter, we see our value as the first parameter and 0 for the second one. The bit field is set to 2 because the
second parameter is not set. Same thing when you specify
only the second parameter. And then, of course, if you specify both parameters, we call the function that does the actual work directly and we skip all those checks.
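[A sketch of the example and the masks the compiler passes; the function name is a placeholder, the defaults of 0 are from the slide:]

    fun add(a: Float = 0f, b: Float = 0f) = a + b

    fun main() {
        add()          // synthetic add$default is called with mask 3 (0b11): both defaults used
        add(1f)        // mask 2 (0b10): only the second parameter falls back to its default
        add(b = 2f)    // mask 1 (0b01): only the first parameter falls back to its default
        add(1f, 2f)    // calls the real function directly, no mask check
    }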
So we have only 32 bits, what happens when we create a function
that has 33 parameters? You should probably not
do that in your code. But just in case, I double-checked for you what happens. Honestly, I was
expecting the compiler to just say nope,
too many parameters. I can't do that. What it does, it
creates two integers, so it just adds a
second bit field and it's going to look at
the bits in both bit fields. CHET HAASE: Because what's
better than a bit field? Two bit fields. [LAUGHTER] ROMAIN GUY: All right. Finally, coroutines. We only have three
more slides, so we're going to go through
this very quickly. How many of you use coroutines? OK. So sorry for those of you who
don't know how coroutines work, I'm not going to
explain that too much. But, basically, so you
have a suspend function, that means you can do
heavy work in there and here I call a
function called delay. We are assembling heavy work. Then I launch
coroutine, so we print that I'm launching a coroutine
and then I called launch. I call my suspend function
twice, so compute and compute. And finally, I print
"Exiting coroutine". So if we don't run
this program we're going to see the
falling out [INAUDIBLE].. We're launching the
coroutine with computing, that's our suspend function,
we're computing again. That's our suspend
function, we exit. Coroutines are implemented
using state machines. So if you look at the
code that's generated, it's going to look
something like this. There's going to be an
invokeSuspend function generated somewhere. And at the top there's a
switch on an internal field, so that function is
inside the class. And switches on the
field called label. And label is basically where you
are in coroutine state machine. So every suspend function
you call in the coroutine will be a state in
the state machine. So here you can see, we switch
on label, and in case 0, we set the label
to 1, that means we advance to the next state. And we call our first instance
of the compute function. When those suspend functions
return a magic value called suspended, we return
from the state machine, that means that the
coroutine has to be paused and we have to come back later
when we can resume execution. And again, you look at the
other states: when the label is 1, we advance the state to 2, we call compute a second time and we return if we are suspended. So what I did is, I took
the bytecode of that, and the bytecode
starts like this, and I hacked it to add this. You don't need to
understand what it's doing, but, basically,
all I wanted to do was print the current state
of the coroutine state machine and rerun the program
to see what's going on. So this was the original
output, and when I print this label field,
this is what it looks like. So we launched the coroutine,
we enter the state machine. The state is 0. We call compute after
suspend function. We return because
we're suspended. And then, at some point later,
we reenter the state machine, now the state is 1, we invoke
compute, we return again, we come back into the state machine with the last state and then finally, we
exit the coroutine. So again, you should not
worry about this too much in your code. You use coroutines,
they're amazing. It's just very
interesting to go look at the bytecode
and the decompiled code to better understand how they work, how what looks magical works under the hood. And with that, you have 30
seconds, Chet, so [INAUDIBLE].. CHET HAASE: i guess
I'm gonna wrap up. So the question is, should you
actually care about this stuff? We've given a couple of talks
in the last couple of years and said, you know what? The runtime has
gotten so much better, stop worrying about all the
allocation and deallocation stuff. We told you early days of
Android, avoid allocations. Now, do the right thing for
your code and your APIs. So the question is,
did we lie to you? And the answer is, no. Actually, we were
telling the truth, that is still good advice. However-- oh, yeah. And there is the advice,
allocation, collections, always fine. But it's good to
actually understand what's going on in the hood. If you are trying to be
lazy about allocating even a Rect object, that's
not the right approach. Right? It probably doesn't matter. But isn't it nice to actually
know what's going on? And especially if you're
in an inner loop somewhere, maybe you actually don't
want that overhead, maybe it doesn't matter. On the other hand, if
you can save it in a case where it actually
matters in a loop, that's probably a good thing. And that is it. Thank you. [APPLAUSE] [MUSIC PLAYING]