[MUSIC PLAYING] KELLY COX-JORRIE: Good
morning, everyone. I'm Kelly. And I work with music
and audio app developers to help them launch and
build successful businesses on Google Play. We're really excited to be here
today to share with you both technical and business best
practices for developers that require high-quality audio. So our session is
divided into three parts. I'm first going to
share with you tips from leading audio developers
who have successfully built businesses on Android. Then Don is going
to join me on stage to share common technical
hurdles that audio developers face and ways to overcome them. Then Phil will be announcing
our latest AAudio framework that we think will
help developers meet your performance needs. Then to wrap things up, we'll have ROLI's CEO join us on stage for a
really cool live music demo. So stick around. So I grew up in Los
Angeles, California in a very musical family. And I remember the first
time I stepped foot into a music studio. It was really exciting. But I was also a bit
overwhelmed by the amount of professional tools
and expensive equipment, and not to mention that
renting the music studio itself wasn't exactly affordable. As a teenager,
these DJ turntables were my favorite
piece of equipment, because I thought
it was pretty cool being able to mix my favorite
beats for my friends. But as cool as these
turntables are, having access to
expensive equipment or an expensive
studio isn't really feasible for most people. So luckily, for the
billions of people around the world who
want to produce music, music creation has
changed drastically. Now with the
accessibility of mobile, you can simply download an app
and have a pro-quality music instrument and a music studio
directly in your pocket, making these DJ turntables
accessible to everyone on their phones. And that's why we've seen
tremendous popularity in apps like Edjing with over
45 million installs. But of course, it's
not just DJ apps that have grown in popularity. It's apps that allow
you to sing karaoke, such as Smule Sing with over 52
million monthly active users, and apps like Korg
KAOSSILATOR that recently released on Android. And while high-quality audio
is important to music creation apps, it's also important to
a whole host of other apps, such as VR or voice-reliant
apps or games. And the market for high-quality
audio apps is only expanding. So that's why I'd like
to now dive deeper into the first part
of our agenda, which is sharing with you
three best practices from leading audio developers
who have successfully launched and built their
businesses on Android. These best practices are
launch smart, think global, and, everyone's
favorite, make money. So first, launch smart-- so we know that device
fragmentation on Android can be a major challenge
for developers that require high-performance audio. In order to deliver the
best user experience, it's important to understand
what devices work well for your performance needs. By performance here,
I'm specifically talking about a device's
CPU and latency, with the lower the latency,
the better the user experience. So let's take a look now at how
Joy Tunes, an app that requires low latency, approached
launching on Android in order to maximize their reach
but still achieve a good user experience. So Joy Tunes is the
creator of Simply Piano, a subscription-based
music app that helps users learn how to play piano. In fact, my daughter
here has learned how to play piano almost
exclusively on this app. So we've had a piano in my
household her entire life. And it wasn't until
I introduced her to this app that she thought playing the piano was all of a sudden fun. So what she does is she
simply follows along to the guided
lessons on our piano. And the app
immediately recognizes what she's playing and
then provides feedback to her to help her improve. So the key here is that when
my daughter presses down on the piano keys, she
expects that Simply Piano will immediately and accurately
hear what she's playing and provide feedback. But in order to have
this experience, Simply Piano
requires low latency of playback and recording
of musical tracks. So to figure out what devices
meet their low latency and computational
performance needs, they first launched a
beta version of their app in our Early Access collection. So Early Access is a collection
on Google Play for new selected apps that are still in beta. Inclusion in this
collection enabled Joy Tunes to build a beta audience and to
be able to collect private user feedback. This enabled them to identify
both problematic devices where users were having a poor
quality audio experience, but also problematic
regions that they were then able to exclude from
their production version. They also developed a way
to have their app work on lower-end devices by making automatic adjustments to certain features, such
as changing and optimizing the graphics so that a
user on a particular device would still have
a good experience. So with all of these
learnings, Joy Tunes was able to then launch
to a public audience with a very high user
rating of over 4.3 and to also launch to a
larger number of devices than they had originally
thought possible. So second, you want
to think global when you think about
launching your app or growing your user base. And Android has
tremendous strength here. And we continue to see enormous
growth of music creativity apps in both developed and
emerging countries. When expanding to
developing regions, though, you want to optimize
your app for the specific needs of users in that market. For instance, you
may find that there's a higher number of users
that are on low-end devices. And you may find that more
users are on lower-bandwidth connections. But as mentioned
with Simply Piano, you don't have to limit your
app to particular devices. There's a number
of technical things that you can do to
optimize your app for both high and
low-end devices. So let's take a look now at
another leading audio developer that has been
distributed globally but has seen enormous
success in emerging markets. So Smule is a leading
developer of mobile music apps, including Smule Sing
that allows users to sing along to
their favorite songs in a karaoke-style fashion. So for instance, if I wanted
to sing, or if any of you wanted to sing, Disney
"Frozen's" "Let It Go," the app would match up what
I'm singing in real time with the song. And then I'd be able to
overlay audio graphics to make my humble voice
sound like a pop star. If Disney sing-alongs aren't
popular in your household, they have a ton of
other genres, too. So Smule has seen
phenomenal growth. But last year, the app
saw over 10x active user install growth in the Southeast
Asia region, with over 40% of their user base now coming from this region and Indonesia being one of their
fastest-growing countries. But not only have they seen
enormous active user install growth, they've also been
monetizing well there-- with over a 7x increase in revenue over this same year in the Southeast Asian region. So in addition to some of
the technical optimizations that they've done, another
reason for this viral uptick can be attributed
to Smule offering locally relevant content. For instance, a user in
Indonesia, pictured here on the left with the
headphones, can sing along to one of the world's
top hits, or she can choose to sing in
a duet-style karaoke with one of her favorite
regional artists, such as [? Seeta ?]
pictured on the right, who's a very popular
Indonesian singer. So as you think about
expanding your reach, you want to identify
areas of growth and then create a localized
experience for that market. I also encourage
you to check out our Building for
Billions guidelines online if you're interested
in more tips on building for emerging markets. Lastly, you want to make sure that you're testing your monetization
strategy to achieve the best business results. So many music creation
apps have historically required that users pay
a premium price in order to access their app. But because of the
variety of payments that are available on mobile,
user consumption habits have changed. And in fact, on Play, our
fastest growing business model comes from subscriptions
where we've seen both subscribers
and revenue double over the last year. So while the last two
apps that I mentioned, Simply Piano and
Smule, allow users to download the app for free
and test it out and then sign up for a subscription,
the developers of Ultimate Guitar
Tabs have tested an interesting hybrid model. So Ultimate Guitar
Tabs allows you to learn how to play guitar
through in-app lessons. Or you can just jam
to your favorite song. So Ultimate Guitar started
off as a premium app. But rather than charging
a high price point, they experimented with a lower
price point for a paid app, allowing users to download
the app for $2.99, as you can see on the left. This essentially lowered
the barrier to entry. Then they up-sold their users
once they were in the app, allowing them to download
the full version for $9.99, as you can see on the right. So this hybrid approach of
using paid and in-app purchases turned out to be an
effective monetization model for Ultimate Guitar Tabs. They not only increased
their revenue overall, but in-app purchases now account
for 65% of their revenue, which is a pretty striking stat, given
they're already a premium app. So while a hybrid
approach worked for them, I encourage you to test out
different monetization models beyond premium and see where
you have the best conversion results. So these are just a few
examples of audio developers that are seeing tremendous
success on Android. There is a tremendous appetite
for music creativity apps. And so we think if you follow
the business tips that I just gave, and also some
of the technical tips that Don and Phil
will be sharing, that there's a big opportunity
for developers in this space. I would now like to
introduce Don to the stage to discuss common
technical hurdles and how developers
can overcome them. Thank you. DON TURNER: Hello. Hello. [APPLAUSE] Thank you, Kelly. Hi, I'm Don. I'm a developer advocate. And I lead our developer
efforts for the Android high-performance
audio framework. What that means is I
help you guys create amazing audio
experiences on Android. What it actually means is I
spend virtually all my time listening to sine waves. Outside of Google, I
do a bit of DJ-ing. And in my head, I like to
DJ in places like this. But the reality is actually
probably closer to this. More people definitely
turned up later, I promise. OK. So today, I'm going to give
you two best practices, which you can use in
your apps to create amazing audio experiences. These are obtain low
latency audio paths and meet your audio
deadlines so you don't give your users
headaches by putting audio glitches into their ears. So starting with obtain
low latency paths-- I'm going to talk
about two signal paths through the Android system-- number one, recording,
and number two, playback. Now one of the first
questions that I often get from developers is, what
is the latency of this path? And this is the thinking behind
the android.hardware.audio.pro hardware feature flag. And if a device reports
support for this flag, it means that this particular
path is less than 20 milliseconds over the headset. You can use this flag in your
app to enable certain features, such as live monitoring,
or you can only distribute to these devices-- these pro audio devices.
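As a rough sketch (this isn't from the session itself), restricting distribution that way is a single manifest declaration-- android.hardware.audio.pro is the feature flag being described here:

```xml
<!-- Sketch: with required="true", only devices that declare the pro audio
     feature will see the app on Google Play. Set it to "false" (or omit it)
     if you'd rather ship everywhere and just check the flag at runtime with
     PackageManager.hasSystemFeature() to enable features like live monitoring. -->
<uses-feature
    android:name="android.hardware.audio.pro"
    android:required="true" />
```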
There are now tens of devices in the market supporting this
particular standard. So it's not just Pixel
and Nexus devices. We're seeing good uptake
from OEMs as well. So audio recording--
this is the path through the Android
audio framework when you're recording. You have an analog
signal into a microphone, goes through an analog
to digital converter, through some effects to kind
of clean up the signal-- this can be things
like noise cancellation and echo cancellation. And then the digital data
is delivered to your app. Now effects can add latency. So if we're talking about low
latency apps, what we want is the lowest possible latency. And there is a route through
the system which allows us to avoid adding this latency. And this is obtained using
the Voice Recognition preset. The other thing we
need to remember here is to use PCM 16 format. And this essentially
allows the audio framework to avoid doing any format conversion, which could potentially add latency.
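As a rough sketch of how those two choices look in code (my illustration, not the session's slides), here's an OpenSL ES recorder configured with the voice recognition preset and 16-bit PCM; the engine interface, the mono 48 kHz format, and the function name createLowLatencyRecorder() are assumptions you'd adapt to your app:

```cpp
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

// Sketch: an OpenSL ES recorder using the VOICE_RECOGNITION preset (to bypass
// the input effects) and 16-bit PCM (to avoid format conversion).
// 'engineItf' is assumed to be an already-realized engine interface.
SLObjectItf createLowLatencyRecorder(SLEngineItf engineItf) {
    // Audio source: the default audio input device (the microphone).
    SLDataLocator_IODevice micLocator = {
        SL_DATALOCATOR_IODEVICE, SL_IODEVICE_AUDIOINPUT,
        SL_DEFAULTDEVICEID_AUDIOINPUT, nullptr};
    SLDataSource audioSource = {&micLocator, nullptr};

    // Audio sink: a buffer queue delivering mono, 16-bit PCM at 48 kHz.
    SLDataLocator_AndroidSimpleBufferQueue bufferQueue = {
        SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM pcmFormat = {
        SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_48,
        SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
        SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSink audioSink = {&bufferQueue, &pcmFormat};

    const SLInterfaceID ids[] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE,
                                 SL_IID_ANDROIDCONFIGURATION};
    const SLboolean required[] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};

    SLObjectItf recorder = nullptr;
    (*engineItf)->CreateAudioRecorder(engineItf, &recorder, &audioSource,
                                      &audioSink, 2, ids, required);

    // Request the voice recognition preset *before* realizing the recorder.
    SLAndroidConfigurationItf config;
    (*recorder)->GetInterface(recorder, SL_IID_ANDROIDCONFIGURATION, &config);
    SLuint32 preset = SL_ANDROID_RECORDING_PRESET_VOICE_RECOGNITION;
    (*config)->SetConfiguration(config, SL_ANDROID_KEY_RECORDING_PRESET,
                                &preset, sizeof(preset));

    (*recorder)->Realize(recorder, SL_BOOLEAN_FALSE);
    return recorder;  // error handling omitted for brevity
}
```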
So that's all I'm going to say about audio recording. Audio playback is a
little more complicated. So every phone that
produces audio in the world has a digital to
analog converter in it. This takes ones and
naughts and converts into a voltage which is used to
drive headphones or a speaker. Now I like to think of this
as kind of a character which is chomping down on this audio
data and producing the signal. In fact, I even have
a name for him-- Dac Man. Now Dac Man has very specific
requirements for his food. He wants it served to
him at a certain rate, and he also wants it served
to him in bite-sized chunks of a very specific size. Now for this analogy
to work, Dac Man also includes DMA controller
and all the other hardware required to consume audio. So just bear with it. So this is how Dac Man fits into
the Android audio architecture. So your app is at the top here. And it's your job to get your
audio data to the output as quickly as possible. The default path through the
system is to go through a resampler, through
some effects-- again, to improve the acoustic
quality of the signal-- and then through a mixer,
and out to Dac Man. Now as with the recording
path, the resampler and effects will add latency. And we can obtain a
lower latency path called the fast mixer
path if our app conforms to certain requirements. So number one, we need to
obtain the correct sample rate. So remember I said Dac Man wants
his food at a specific rate? We can use the Audio
Manager API to find out exactly what that rate is. And that will enable us
to create an audio stream on this fast path.
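To make that query concrete, here's a rough sketch (my own illustration, not the slide code) of reading AudioManager's output properties from native code over JNI. The helper name getAudioProperty() and the env/context parameters are assumptions; the two property strings are the values behind AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE and PROPERTY_OUTPUT_FRAMES_PER_BUFFER-- the second one is the optimal chunk size that comes up in a moment:

```cpp
#include <jni.h>
#include <cstdlib>

// Sketch: call AudioManager.getProperty() from native code via JNI.
// 'env' and 'context' (an android.content.Context) are assumed to be handed
// in from the Java side, for example during your app's startup JNI call.
static int getAudioProperty(JNIEnv* env, jobject context, const char* propertyName) {
    jclass contextClass = env->GetObjectClass(context);
    jmethodID getSystemService = env->GetMethodID(
        contextClass, "getSystemService", "(Ljava/lang/String;)Ljava/lang/Object;");
    jobject audioManager = env->CallObjectMethod(
        context, getSystemService, env->NewStringUTF("audio"));

    jclass audioManagerClass = env->GetObjectClass(audioManager);
    jmethodID getProperty = env->GetMethodID(
        audioManagerClass, "getProperty", "(Ljava/lang/String;)Ljava/lang/String;");
    auto value = (jstring) env->CallObjectMethod(
        audioManager, getProperty, env->NewStringUTF(propertyName));
    if (value == nullptr) return 0;  // the device doesn't report this property

    const char* utf = env->GetStringUTFChars(value, nullptr);
    int result = atoi(utf);
    env->ReleaseStringUTFChars(value, utf);
    return result;
}

// The native sample rate and the optimal buffer (burst) size:
// int sampleRate = getAudioProperty(env, context, "android.media.property.OUTPUT_SAMPLE_RATE");
// int framesPerBurst = getAudioProperty(env, context, "android.media.property.OUTPUT_FRAMES_PER_BUFFER");
```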
The other thing we need to remember is not to add any effects. So once we've created this
audio stream to Dac Man, we need to start
supplying audio data. And we need to do it in
this specific chunk size. And again, we can use the AudioManager API to obtain this optimal size-- that's the frames-per-buffer property from the sketch earlier. So after this first
chunk of audio data is consumed by Dac Man,
he sends us a callback. He is basically saying, I've
run out of food, feed me more. And we get this callback
on a high-priority thread. And this allows you to
do your audio processing work without being preempted
by other parts of the system. Now this is a fairly critical
part of any audio app. So let's take a
closer look at what happens inside this callback. So every callback
has a deadline. Remember that you have to send
these chunks of audio data at very specific intervals. So the amount of time you
spend in this callback is going to vary based
on the computations that you're performing, like the
complexity of the audio data, but also CPU frequency and
device that you're running on. If you miss this
deadline, Dac Man is going to be very unhappy. And he's going to output
silence in protest. So it's very important that
we don't miss these deadlines. So for the next part, I wanted
to talk about some common reasons why you might miss
these audio deadlines, starting with blocking. So inside your callback,
there are various reasons you might block. And here I have a
code sample which does a whole lot of
bad things-- things you shouldn't do
in your callback. So number one, logging-- instead of logging
inside the callback, you should use ATrace and Systrace. It's a much better tool
for debugging the callback. Don't do memory allocation. If you need to use memory inside
the callback, which invariably you do, you should
allocate the memory up-front when you
instantiate your audio stream and then just use
it inside here, rather than trying to
allocate new memory. Don't wait on other threads. Bear in mind, this is a
high-priority callback. So if you're waiting on
a lower priority thread, you have priority inversion. Don't do file I/O. If you
need to read from a file, use another thread and then use a non-blocking queue or a circular buffer to transfer data into the callback. And don't sleep. There should never be any need to sleep inside here. So we've dealt with blocking.
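For the file I/O case in particular, the usual pattern is a lock-free, single-producer single-consumer queue between the reader thread and the callback. Here's a minimal sketch of the idea (my own illustration, not code from the session); SpscSampleQueue is just an illustrative name:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Sketch: a minimal single-producer, single-consumer lock-free FIFO.
// A file-reading thread pushes samples in; the audio callback pops them out
// without taking any locks, allocating, or blocking. Capacity is fixed
// up-front, when you set up your audio stream.
class SpscSampleQueue {
public:
    explicit SpscSampleQueue(size_t capacity)
        : buffer_(capacity), capacity_(capacity) {}

    // Called from the file I/O thread (the producer).
    bool push(float sample) {
        const size_t head = head_.load(std::memory_order_relaxed);
        const size_t next = (head + 1) % capacity_;
        if (next == tail_.load(std::memory_order_acquire)) return false;  // full
        buffer_[head] = sample;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Called from the audio callback (the consumer). Never blocks.
    bool pop(float* sample) {
        const size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return false;  // empty
        *sample = buffer_[tail];
        tail_.store((tail + 1) % capacity_, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buffer_;
    const size_t capacity_;
    std::atomic<size_t> head_{0};  // written only by the producer
    std::atomic<size_t> tail_{0};  // written only by the consumer
};
```

The reader thread keeps the queue topped up with push(), and the callback calls pop() for each sample it needs, filling with silence if the queue runs dry-- no locks, no allocation, no blocking.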
The next reason why you might miss your deadlines is core migration. Now when you create an
audio app on Android, the CPU scheduler will
assign your audio thread to a particular core. And here, I have
a Systrace, which is showing the audio
thread running on CPU1. And we have four
callbacks marked-- those green rectangles
there are the callbacks. The other row of interest
is F ready 1 here, which shows us the state
of our audio buffer. And we have four callbacks. And then the CPU scheduler
shifts our thread over to CPU0. Now this core migration can
incur a slight time penalty, on the order of a
few milliseconds. And this can cause our callback
to start late and, therefore, run over. And sure enough, we have an
audio glitch occurring there. So the solution to this is
to set thread affinity, which means that we bind our audio thread either to the current core we're assigned to-- that's an OK way of doing it-- or we can use getExclusiveCores() on API 24 or above, in order to get the cores which are reserved for the current foreground application.
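Here's a rough sketch of the first option-- pinning the audio thread to the core it's currently on-- using the standard Linux scheduling calls (my illustration, not the session's slide code; getExclusiveCores() itself is a Java API you'd reach over JNI if you go that route):

```cpp
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

// Sketch: pin the calling thread (your audio thread) to the core it is
// currently running on, so the scheduler can't migrate it mid-stream.
// If you use getExclusiveCores() instead, fetch those core IDs via JNI
// and CPU_SET() each of them here.
static bool pinCurrentThreadToCurrentCore() {
    int cpuId = sched_getcpu();
    if (cpuId < 0) return false;

    cpu_set_t cpuSet;
    CPU_ZERO(&cpuSet);
    CPU_SET(cpuId, &cpuSet);

    // gettid() is the kernel thread id of the calling thread, so the
    // affinity set here applies only to this one thread.
    return sched_setaffinity(gettid(), sizeof(cpuSet), &cpuSet) == 0;
}
```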
Lastly, CPU frequency scaling-- so this is a process which is used to give users great performance and great battery life. It's like a power-
performance trade-off. CPU frequency is high when users
need good performance and low when they don't need
good performance, but they do want
to conserve power. So this is great for
most applications. But for real-time
audio applications, it can cause a problem. So imagine you have
a synthesizer app. And every time you press
a key, the synthesizer app generates a voice. This is how the
computational graph might look for an app like this. So we start off. We have 10 fingers
down on our keyboard. And the bandwidth our app requires is fairly high. Now we take our fingers
off the keyboard. Our bandwidth
required drops down. And the CPU governor sees
that actually our app doesn't need as much
bandwidth, so it drops down the CPU frequency. Everything's fine so far. Now we put our fingers
back on the keyboard. So our bandwidth rises
to its previous level. But the governor takes a while
to ramp the CPU frequency back up to the level that we need. So unfortunately, during this
time, glitches are occurring. So the solution to this-- well, the title of
this talk is "Best Practices for Android Audio." But for this
section alone, let's just call this "Don's
Practices for Android Audio." And this is from working
with top partners like ROLI. This is what actually
works in the real world. So what you can do is
you can use something called stabilizing load. Now the idea here
is that instead of having a varying amount of
time spent in your callback, you have a fixed amount of time. And the stabilizing load can
be things like gating voices on and off, or you can use assembly no-operation instructions, basically to keep the CPU spinning. So the result of that is that you basically have a fixed load, a fixed CPU frequency, and you always have the bandwidth you require in order to generate audio data.
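Here's a rough sketch of that padding idea (my own illustration of the technique, not ROLI's or Google's code); renderAudio() and kTargetLoadNanos are assumptions you'd replace with your own synthesis call and a value tuned to stay well under the callback deadline:

```cpp
#include <chrono>
#include <cstdint>

void renderAudio(float* audioData, int32_t numFrames);  // your real synthesis (assumed)

// Assumption: roughly how much CPU time each callback should always appear to
// use. Must stay comfortably below the callback deadline.
constexpr int64_t kTargetLoadNanos = 1000000;  // ~1 ms

static int64_t nowNanos() {
    using namespace std::chrono;
    return duration_cast<nanoseconds>(steady_clock::now().time_since_epoch()).count();
}

// Pad each callback to a near-constant duration so the governor sees a steady
// load and keeps the CPU clock up, even while no keys are pressed.
void renderWithStabilizedLoad(float* audioData, int32_t numFrames) {
    const int64_t start = nowNanos();
    renderAudio(audioData, numFrames);
    while (nowNanos() - start < kTargetLoadNanos) {
        asm volatile("" ::: "memory");  // no-op; stops the compiler removing the loop
    }
}
```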
This is best used with sustained performance mode on API 24 and above, as it will help you avoid running into thermal throttling issues. So in summary, obtain
low latency audio and always meet your
audio deadlines. I'd now like to
hand over to Phil who's going to talk about
a fantastic new audio API in Android. Welcome, Phil. [APPLAUSE] PHIL BURK: Thank you. Hello. My name is Phil Burk. And I work in the
Android audio group, mostly on MIDI and pro
audio applications. My background is in
experimental music. So my personal goal is to make
the Android platform really a great platform for
making strange kinds of new musical instruments. So that's sort of
what motivates me. What I'll be talking
about is a new audio API called AAudio, which
we're very excited about. And then I'll show you how to
do callbacks using that API. And then I'll also show
you how to optimize your latency on any
particular device you happen to be running on. So AAudio is a C API. So this is a native API. You may be wondering,
why a new API? We already have
OpenSL ES and Java. And the reason is that AAudio
is, we think, easier to use. And if you've used OpenSL ES and compared them, I think you'll see why. Also, it's a platform where
we can make improvements. And this will show
you how we do that. These three APIs can all go
through the existing Audio Flinger framework. But if we make radical
changes in the Audio Flinger, we could potentially break thousands of existing apps. So what we do is we add
a new AAudio service where we can do some
pretty radical things and not have to worry about
breaking existing stuff. So we can do some big
performance enhancements in the AAudio service. So AAudio uses the
concept of streams of audio flowing from
the mic, to the app, back down to the headphones. So how do you create
a stream using AAudio? We use a builder design pattern. So in the Builder, you can set
your parameters that you want. You could leave everything
just the default. And you'll probably get
a stereo output stream. But if you need
a specific sample rate or a specific
format, you can set that. Once the Builder's set up, you
can use it like a rubber stamp to create multiple streams. So this is what it
looks like in the code. So we have AAudio_createStreamBuilder()-- pretty straightforward. And if you want, here's how
you set different parameters on the stream builder. Once you've set up the stream,
you call AAudioStreamBuilder_openStream()-- again, pretty straightforward. And then if you
didn't specify things like the sample
rate or the format, then you'll need to query
it to find out what you got. Don't just assume that
it's 48,000 hertz, because on some devices, particularly USB devices, it might be at 96,000 hertz or something. So it's important to query to find out what you really got after you opened the stream.
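Putting those steps together, a rough sketch of the create, open, and query sequence might look like this (my illustration rather than the exact slide code; openOutputStream() is just an illustrative name):

```cpp
#include <aaudio/AAudio.h>

// Sketch: create a low-latency output stream with the builder, then query
// what we actually got. Error handling is reduced to a single check.
AAudioStream* openOutputStream() {
    AAudioStreamBuilder* builder = nullptr;
    AAudio_createStreamBuilder(&builder);

    // Ask for what we'd like; anything left unset keeps its default.
    AAudioStreamBuilder_setDirection(builder, AAUDIO_DIRECTION_OUTPUT);
    AAudioStreamBuilder_setFormat(builder, AAUDIO_FORMAT_PCM_FLOAT);
    AAudioStreamBuilder_setChannelCount(builder, 2);
    AAudioStreamBuilder_setPerformanceMode(builder, AAUDIO_PERFORMANCE_MODE_LOW_LATENCY);

    AAudioStream* stream = nullptr;
    aaudio_result_t result = AAudioStreamBuilder_openStream(builder, &stream);
    AAudioStreamBuilder_delete(builder);  // the builder is only a rubber stamp
    if (result != AAUDIO_OK) return nullptr;

    // Don't assume: query what the device actually gave us.
    int32_t sampleRate     = AAudioStream_getSampleRate(stream);
    int32_t framesPerBurst = AAudioStream_getFramesPerBurst(stream);
    (void) sampleRate; (void) framesPerBurst;  // use these to size your synth and buffers

    return stream;
}
```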
Another important value is this frames per burst. And this correlates
with the chunk sizes that Dac Man was
consuming in Don's slides. So what is a burst
versus what is a buffer? This can be very confusing. So when we say
"buffer" in AAudio, we're talking about
the whole array where the audio data is stored
for a particular stream. And in that buffer, there
can be multiple bursts. So in this case, Dac Man has
two bursts that it can consume. And we're writing in units of one burst's size. You have to start your stream. You can pause it. You can flush the
stream, stop it. These are asynchronous calls. And normally, you don't
have to worry about that. But if you have
to synchronize, we do have a function
that will allow you to synchronize with the
state machine inside AAudio. The reading and writing-- so
we have to get data in and out of these streams. So there are two ways. If your application doesn't
need super low latency, the easiest thing
is just to read or write using blocking writes. And so here we're in a loop. And we're doing a write. And you notice we have a timeout
there as a last parameter. When we do a
blocking write, we'll get back either an error code or the number of frames written. And if it times out, or if we use a timeout of zero, we may get a partial transfer.
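A rough sketch of that blocking-write loop (my illustration, not the slide code; renderAudio(), keepRunning, and the 100-millisecond timeout are assumptions):

```cpp
#include <aaudio/AAudio.h>
#include <vector>

// 'stream' is assumed to be an already-started output stream, and
// renderAudio() stands in for your own synthesis code.
void renderAudio(float* buffer, int32_t numFrames, int32_t channelCount);  // assumed

void blockingWriteLoop(AAudioStream* stream, bool& keepRunning) {
    const int32_t channelCount   = AAudioStream_getChannelCount(stream);
    const int32_t framesPerBurst = AAudioStream_getFramesPerBurst(stream);
    std::vector<float> buffer(framesPerBurst * channelCount);

    const int64_t timeoutNanos = 100 * 1000 * 1000;  // 100 ms: block until written

    while (keepRunning) {
        renderAudio(buffer.data(), framesPerBurst, channelCount);
        aaudio_result_t result = AAudioStream_write(
            stream, buffer.data(), framesPerBurst, timeoutNanos);
        if (result < 0) break;  // a negative value is an error code
        // result >= 0 is the number of frames actually written; with a
        // timeout of zero it may be fewer than requested (a partial transfer).
    }
}
```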
OK. The second technique is when you need the lowest latency. And to do that, you'd need a
high-priority thread that's maybe running with a
[INAUDIBLE] scheduler and hopefully at a higher priority as well. So the way to do that is
to write your own callback function. So this is a function that you would write. And AAudio will pass you a stream parameter; a user data pointer, which could be an object or a structure pointer; an audio data pointer, which points to the array you render into; and the number of frames. And then you can render
directly into that audio buffer and then return. Once you have your
callback function and you know what
data you want to pass, you give it to the builder. You set the data
callback on the builder. And then when you later create a stream, it will use those values.
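Here's a rough sketch of what such a callback and its registration might look like, assuming a float output stream; MySynth, dataCallback(), and the 440 Hz tone are illustrative stand-ins for your own code:

```cpp
#include <aaudio/AAudio.h>
#include <cmath>

// Sketch: a data callback rendering a 440 Hz sine tone into a float stream.
// 'MySynth' stands in for whatever state you pass through the userData pointer.
constexpr float kTwoPi = 6.283185307f;

struct MySynth {
    float phase = 0.0f;
    float phaseIncrement = kTwoPi * 440.0f / 48000.0f;  // recompute from the real sample rate
};

aaudio_data_callback_result_t dataCallback(AAudioStream* stream, void* userData,
                                           void* audioData, int32_t numFrames) {
    auto* synth = static_cast<MySynth*>(userData);
    auto* out = static_cast<float*>(audioData);
    const int32_t channelCount = AAudioStream_getChannelCount(stream);

    for (int32_t i = 0; i < numFrames; ++i) {
        const float sample = 0.2f * sinf(synth->phase);
        synth->phase += synth->phaseIncrement;
        if (synth->phase > kTwoPi) synth->phase -= kTwoPi;
        for (int32_t ch = 0; ch < channelCount; ++ch) {
            out[i * channelCount + ch] = sample;
        }
    }
    return AAUDIO_CALLBACK_RESULT_CONTINUE;
}

// Registered on the builder before the stream is opened:
//   AAudioStreamBuilder_setDataCallback(builder, dataCallback, &mySynth);
```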
Sometimes people need to combine multiple inputs. Maybe you're taking
two input sources and mixing them and
sending them to an output. So what's the best way
to do that with AAudio? We recommend using
one stream as a master and doing your callback from
that master stream, which ideally should be
an output stream. And then what you
do in the callback-- see, here we're being passed
the output stream pointer. So what we do in the callback
is we do a read from the input stream. And we set the time-out to zero. So this is a non-blocking call. As Don mentioned, you don't want
to block inside the callback. Now initially, you may
not get all the data that you're expecting. But pretty soon, these two
streams will synchronize-- like very quickly, within
a couple of buffer callbacks-- and then you'll have nice back and forth between these two streams. And you can do echo or guitar effects-- things like that.
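A rough sketch of that callback pattern, assuming both streams were opened with float samples and the same channel count (gInputStream and duplexCallback() are illustrative names, not part of the AAudio API):

```cpp
#include <aaudio/AAudio.h>
#include <cstring>

// Sketch: the "master" output stream's callback pulls from the input stream
// with a zero timeout, so the read never blocks. 'gInputStream' is assumed to
// be an already-started input stream with the same format and channel count.
extern AAudioStream* gInputStream;

aaudio_data_callback_result_t duplexCallback(AAudioStream* outputStream, void* userData,
                                             void* audioData, int32_t numFrames) {
    (void) userData;
    // Non-blocking read: timeout of zero. It may return fewer frames than
    // asked for at first, until the two streams settle into sync.
    aaudio_result_t framesRead =
        AAudioStream_read(gInputStream, audioData, numFrames, /*timeoutNanos=*/0);
    if (framesRead < 0) framesRead = 0;  // error: treat as no input this time

    const int32_t channelCount = AAudioStream_getChannelCount(outputStream);
    // Zero out whatever we didn't get, so stale data is never played.
    std::memset(static_cast<float*>(audioData) + framesRead * channelCount, 0,
                (numFrames - framesRead) * channelCount * sizeof(float));

    // ... apply echo / guitar effects to audioData in place here ...
    return AAUDIO_CALLBACK_RESULT_CONTINUE;
}
```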
The other topic I want to discuss is dynamic latency tuning. So it's very difficult
to predict ahead of time what the exact number
of buffers that you need. And the number of buffers determines your latency. If you have too few buffers-- too few bursts, I guess, in your buffer-- then if your thread is preempted, you may glitch. So what you want to do-- if you look at this
diagram from before-- right now, we only
have two bursts that are valid in this
very large buffer. So our latency is two times the burst size for this buffer. So if we are unable to
write to the buffer, Dac Man will run out of
data after two bursts. So if we have a
glitch, we may wish that we have three
bursts in the buffer. So we have a little bit more
cushion if we get preempted-- if our thread gets preempted. So we can adjust this value. The way you do that
in code is that you can query to find out how
many overruns or underruns you've had on that
output stream. And if it's changed since
the last time you checked, that means that you
just had a glitch. So what you can then
do is you can query to see what the size is of the
buffer, how much of the buffer is being used, which
determines your latency, and then bump it up and
say, well, let's just add one more burst
in here, so instead of being double-buffered,
I'll be triple-buffered. And then you set that value back in-- you reset your buffer size.
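In code, a rough sketch of that tuning step might look like this (illustrative, not the session's slide code; tuneLatency() and previousXRunCount are assumed names):

```cpp
#include <aaudio/AAudio.h>

// Sketch: grow the usable buffer by one burst whenever a new underrun shows
// up. Call this periodically (for example once per callback) from your audio code.
void tuneLatency(AAudioStream* stream, int32_t& previousXRunCount) {
    int32_t xRuns = AAudioStream_getXRunCount(stream);
    if (xRuns > previousXRunCount) {
        previousXRunCount = xRuns;  // we just glitched: add one more burst of cushion
        int32_t newSize = AAudioStream_getBufferSizeInFrames(stream)
                        + AAudioStream_getFramesPerBurst(stream);
        if (newSize <= AAudioStream_getBufferCapacityInFrames(stream)) {
            AAudioStream_setBufferSizeInFrames(stream, newSize);
        }
    }
}
```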
So this is sort of a simplification. You may find that you want to do some timing
analysis and maybe lower the latency again
later, if you haven't glitched for a long time. But that's up to the
application to do those kinds of smart analysis. But this is the basic technique. So in summary, a minimal AAudio
program-- you create a builder, you open a stream, you start
the stream, and then in a loop-- in this case, we're doing
the blocking writes, synthesizing audio, and
writing it to the stream. And then we close
it when we're done. So pretty simple. Just for comparison, this is
sort of an equivalent OpenSL ES program. It's probably a little harder to read. But as you can see, the AAudio version is fewer lines of code and a little more straightforward,
if you want to use audio. So now AAudio-- you're probably
thinking, that sounds great, but it's only in the
O release, so how does that help me if I'm
writing for Marshmallow or Nougat or Lollipop? So what we're doing
is we're developing a wrapper which is basically
like the AAudio API and AAudio features. But it's in C++, so it just
looks slightly different. And what we do is we dynamically
link to the AAudio library using runtime linking. So your program can run and
link on previous versions of Android. But AAudio won't be there. And so what we do is we just dynamically switch over to using OpenSL ES.
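The runtime-linking part boils down to a dlopen()/dlsym() check-- here's a rough sketch of the idea (my illustration, not the wrapper's actual code):

```cpp
#include <dlfcn.h>
#include <cstdint>

// Sketch: try to load libaaudio.so at runtime and look up one of its entry
// points. If the library isn't there (pre-O devices), fall back to OpenSL ES.
// The signature here matches AAudio_createStreamBuilder, with the builder
// handle kept as an opaque pointer so we don't need the AAudio header.
using CreateStreamBuilderFn = int32_t (*)(void** builder);

CreateStreamBuilderFn loadAAudioCreateStreamBuilder() {
    void* lib = dlopen("libaaudio.so", RTLD_NOW);
    if (lib == nullptr) return nullptr;  // AAudio not present: use the OpenSL ES path
    return reinterpret_cast<CreateStreamBuilderFn>(
        dlsym(lib, "AAudio_createStreamBuilder"));
}

// Usage:
//   auto createBuilder = loadAAudioCreateStreamBuilder();
//   if (createBuilder) { /* AAudio code path */ } else { /* OpenSL ES path */ }
```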
So if you write your program to this new API-- which will be an open source thing; it's not quite out yet, but will be out soon-- then you'll be able to
use AAudio or OpenSL ES transparently and run
on old or new platforms. OK. I'm excited about
what's coming up next. ROLI's going to give us a demo. And ROLI is a company
that's been taking advantage of a lot of these tricks. They've figured
out a lot of stuff, and they've been a
great partner for us. So I'd like the CEO of ROLI, Roland Lamb, to come up and talk about
some of the programs they've been
developing on Android. Thank you. [APPLAUSE] ROLAND LAMB: Good
morning, everybody. It's such a pleasure to be here. As Phil said, I'm Roland Lamb. I'm the founder and CEO
of ROLI, a company that is developing new musical
hardware and software. And I'm very pleased to have
Marco and Jack Parisi with me, who are virtuoso musicians,
who are kind of on this cutting edge of new hardware, new
software, and expression. So just to give a little bit
of background, starting out, I felt really passionate about
creativity and about the joy that comes from creation. And in particular,
we thought, we want to empower everyone to
be creators, but particularly in music. And the reason for music
being kind of the center point for us is that there's
this huge opportunity for expression that is untapped. And the way we think about that
is that musical instruments are tremendously expressive. But they're still quite
difficult to learn. And on the other
side, electronic music has such versatility
associated with it, but then it's relatively
technical still and complicated to set
up your own home studio. So we thought, what if we
could create instruments that were deeply expressive
but also easy to learn, had the versatility
of electronic music, but then didn't have all of
the extra technical set-up? But to solve that problem,
we thought, first of all, we need these high-resolution
control devices for digital. So if you just have simple,
one-dimensional electronic controllers, you can't get
to that depth of expression that you have with all of
the physical gestures you can create with
acoustic instruments. So I invented this instrument
called the Seaboard. And the Seaboard
is the evolution of the piano keyboard. You can play it just the
way that you play a piano. But then you can modulate
all of the sounds in real time using very
intuitive gestures. So as Marco will show, you
could play the Seaboard, first of all, just like a piano. Maybe if you guys can
bring up the audio-- sounds like maybe
they just did, Marco. So we're running the
Seaboard now on Pixel. [MUSIC PLAYING] So it's kind of like
electronic piano patch. He's just playing
it like a piano. But if he wants to, he can play
it, for example, like a guitar. And he would just be able to
bend these soft, silicon keys left to right, as you'll see. [MUSIC PLAYING] So those kinds of
bends that usually you would associate with another
kind of acoustic instrument, you can create in this context. And there's many, many sonic
possibilities with something like the Seaboard. So we thought, wow,
this is awesome, we have this new
physical technology, but we want to make it
as accessible as possible to reach many, many more
people around the world. So we built a new
product called BLOCKS. And BLOCKS takes the
technology of the Seaboard and puts it into a format of
a small, pocket-sized music controller. And you see it there. You can use it to just
play beats or play expressive melodies. And when we launched BLOCKS,
we initially launched on iOS. But the idea was always to
make it go far and wide. And so the issue for us was
really about the latency-- all the stuff that
Don and Phil have been talking about--
because to power these new expressive
instruments, we developed a
professional-grade synthesizer called Equator. And with Equator, you're running
many, many different channels of synthesis at the same time. And you're controlling
them with all of these different
multi-parameter gestures. It's a professional
audio application that's used in studios
all around the world. So to run that on a phone, we
had to do quite a bit of work. But the recent developments
in the last few versions of Android have made
a big difference. And all of the stuff
that's just been discussed has actually made it so now
we can run all the sounds in Equator on Android devices. Noise, the application, is
available in Early Access in the Google Play Store. And Marco and Jack, some
of you may have noticed, opened up Google
I/O two days ago with a performance
that was performed just on four Pixel phones. So they're going to just
play a minute from that. So it's Seaboard plus BLOCKS
plus four Pixel phones, three instances of Noise, and
they're also using DJ Pro2. So let's take a look. [MUSIC PLAYING] [APPLAUSE] Thank you so much. So one of the reasons why
we were able to do this was that we developed a
coding framework called JUCE-- J-U-C-E. And it's a C++
cross-platform framework that's built for audio, and
it's really built for speed. And so we've been working not
only using JUCE for Noise, but we work with thousands of
developers around the world who are creating audio applications
that are cross-platform. And what we're finding is
with these recent improvements in Android, it's not just
for our applications. But for many of our
developers, they can take applications that
were audio applications that were developed for
iOS, for example, and now port those
over to Android. And we're also seeing, this
is an interesting opportunity for a lot of other
developers out there who want to create
low-latency audio applications but don't necessarily have
the resources to learn all of the different
systems associated with a different platform. So that's something to check
out, if you're interested, at JUCE.com. We also organized
something called ADC-- not the ADC that Phil
was talking about. But it's called Audio
Developer Conference in London, which is on the 13th
and 15th of November, which deals with all of these issues. So check that out at JUCE.com. But just thank
you for tuning in. And we thought we'd
leave you with one more little performance
from Marco and Jack. [MUSIC PLAYING] [APPLAUSE] So Marco and Jack Parisi,
everyone, and also, check out their work. Parisi is doing
some amazing things and releasing some great work. So I believe that's
all for this session. And there's a Sandbox
that will follow. So come check it out. And thank you all so very
much for coming today. [APPLAUSE] [MUSIC PLAYING]