[MUSIC PLAYING] THOMAS STEINER: Who here is
excited about web capabilities and Project Fugu? Whoa! Great, awesome. So my name is Tom. And I'm a developer advocate
at the web team at Google. And I see the web as the
universal development platform. And when I describe
my role, I tend to say, my job is
to enable developers to do anything they
want to do on the web -- anything they expect to be able to do on native. PETE LEPAGE: And I'm Pete. I'm also a developer
advocate on the web team. I love my job because
I kind of look at it as, I get to take all of the
sharp edges of new web tech and make it easier
for you to use. Now I'll admit, I am
a little bit biased. I kind of love the web. I think the web is the
best platform out there. What I love is the scale, right? It reaches users all
around the world, essentially on any device. It's easy to use
and easy to share. There's nothing to install. It just works. But most importantly,
it's an open ecosystem that anyone can use or build on. Almost everything I do
today, I do on the web. I've got a tab open to Gmail, Calendar, and Drive. I do all my code
reviews in GitHub. I write all of my
samples and stuff to make your lives a little
bit easier on Glitch to try and just-- it's all on the web. It's kind of awesome. And in the last couple of years,
the capabilities of the web have grown tremendously
with things like Web USB, camera controls,
media recording, and so much more. But let's be honest. The web does have
some limitations. And there are some
apps that are just not possible to build
on the web today. If you want to
build something that interacts with local
files on the user's system? Nope, can't do it. Want to do something that
interacts with the user's contacts on their phone? Sorry. Nope. Want to register
as a share target so you can receive shares? Sad panda. And as cute as it is, it's
still a sad panda, right? We call this the app gap. The gap between what's
possible on the web today and what's possible on native. And on the Chrome team, we
want to change that gap. We think web apps should
be able to do anything that native apps can. We want to remove-- yeah! Who was that? Who-- Yeah! All right. We want to remove
those limitations so that you can
build and deliver apps that were never possible
before on the open web without requiring some kind
of framework or anything like that. Just straight on the open web. We've been working to close
this gap for some time now. And with things like the web app
manifest, which makes it easier to create an
installed experience; with service workers that allow
you to get that performance and reliability
that users expect from an installed experience. WebAssembly, that gets
you closer to the medal. And so much more. But some of these
features that we're going to talk about
over the next 40 minutes or so seem a little
scary at first, especially when you consider how
they're implemented on native. But the web is inherently
safer than native. Opening a web page should
never be scary, right? Adding these new
features needs to be done in a way that respects
user privacy, trust, security, and the core tenets of the web. Our internal code name
for this project kind of alludes to that. Project Fugu. Fugu is a type of fish
that when it's cut properly is considered a delicacy. And if you've ever gotten to
try it, it is pretty delicious. But it's also poisonous. You cut that sucker the
wrong way and, you're dead. Like, that's it. We look at these
features as the same way. We need to do them in a way
that is absolutely right. So nothing should ever be
granted access by default. Instead, they should rely on
a permission model that puts the user in control, right? And is easily revokable. It needs to be crystal
clear when all of these APIs are being used and how
they're being used. We've got a good
white paper there that dives deeper into some
of our thinking around this. So you can check that out. Now obviously, there's a
ton more I could talk-- I could talk this whole
session about the security model for all these things. But that's not
all that exciting. So every one of these
APIs needs to be done in a way that balances the
needs of you, the developer, with the needs of keeping our
users and their data safe. Users should never
be put at risk simply by going to a web page. Now we've identified
about 75 features that we think are critical
to closing that app gap. And I'm sure there are
probably many more. You can see the stuff
that we're currently looking at by
going to CRbug.com, and looking for bugs that
are tagged proj-fugu. Now, guiding a feature from sort
of that idea to a standardized shipped API is important to us. And I showed this quote earlier. But I want to show
it again because I think it's really important. As we work on these,
it's really important that we maintain the
core tenets of the web. That means that we're designing
these and building them as web standards. Everything I'm going to talk
about is a web standard. And I'm really kind of
psyched about the fact that other browser vendors
are getting involved in this. Microsoft is actively working
on several new features, including app icon shortcuts. And Intel implemented
the generic sensors API. And they're currently working
on wake lock and web NFC. So how do we do this? Well, the first step is we
identify a need, all right? Specific use case. Now many of these are
things that you've told us about either by
talking to us directly or by filing a feature
request on CRbug.com. As we start to
think about the API, we put our initial
thinking into an explainer. Essentially a design doc that
is meant to explain the problem and show some sample code about
how we think this problem is going to get solved. Once the explainer has a
reasonable level of clarity, it's time to
publicize it and start to solicit feedback
from you, so that we can iterate on the design. We need to hear from you to
make sure that the use cases are actually covered properly. Is there anything
that we're missing? Or like, did we maybe
just completely biff up the whole API implementation? And during this time, as
we get feedback from you, we keep iterating
on that explainer to get it better and better. But we're also looking for
public support at this point. To get public
support from you can be as simple as a tweet from
an engineer saying, hey, that's really cool. We'd use that in our app
if it were available. Your public support, A, helps
us to prioritize the APIs so we know how many
people are going to use it and how important it is to you. But it also tells
other browser vendors how important it is to
you so they can go, ooh. Lots of people want that. Maybe we should do that, right? So once the explainer's
in a good state, we transition from the explainer
to the spec, all right? And where we continue
to work with developers to get your feedback and
other browser vendors to keep iterating on it. As the design
starts to stabilize we typically use an
origin trial to experiment with the implementation. Origin trials allow us to try
new features with real users and get feedback on
that implementation. This real world
feedback helps us shape and validate the design
before it becomes a standard. Finally, once the
origin trial's complete, the spec's been finalized,
and all the other launch steps are completed, it's
time to ship it. So enough background. I've talked too long. Let's look at some real stuff. Tom, you want to
do the first one? THOMAS STEINER: Sure. PETE LEPAGE: All right. THOMAS STEINER: Thank you Pete. So Bluetooth Low Energy, or just
BLE, is a wireless technology that you can find
in many gadgets like [INAUDIBLE] sensors,
beacons, lights, and also toys. But many times when you want
to interact with a BLE device, they make you download
yet another app. And actually, you should just
be able to use a web app. Web Bluetooth can
be used to control all sorts of devices
after pairing them, which is painless and
quick and works via a picker. For example, the
Batmobile from LEGO can be controlled from a
web app without the need to download a native app. We actually have the web
mobile at-- the Batmobile. Not the web mobile,
the Batmobile-- over in the Web
Sandbox over there. So please come over
and play with it. Before connecting to a
Web Bluetooth device, like the Batmobile, I
first need to know its ID. The Bluetooth
service UUID that you can see here is a unique
identifier that LEGO gave to the Batmobile. I can optionally also provide
a name or a name prefix parameter. You can think of these service filters as a way to not only look
for certain devices, but also for certain
categories of devices. For example, to just look
for [INAUDIBLE] sensors or just for devices
whose name starts with hub, which is
the name that LEGO gave to the Bluetooth controller
inside of the Batmobile. When I then request a
device, I will see then the connection picker
where I can then pair it and start using it. Web Bluetooth is a very powerful API, so it requires a secure connection, and the connection picker only shows after a user gesture. After the pairing
step, I can then connect to the device's
server, which provides the services of the device. In a concrete case, a Batmobile
remote control service. How cool is that? Web Bluetooth is an
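That filter-pick-connect flow might be sketched like this. This is a hedged, hypothetical example: the service UUID below is a placeholder, not LEGO's real identifier, and the helper name is invented for illustration.

```javascript
// Placeholder UUID -- a real app would use the UUID the device
// vendor (here, LEGO) assigned to its Bluetooth service.
const HUB_SERVICE_UUID = '0000aaaa-0000-1000-8000-00805f9b34fb';

// Show the device picker, pair, and connect to the device's
// GATT server to get the remote-control service.
async function connectToHub(bluetooth = globalThis.navigator?.bluetooth) {
  if (!bluetooth) throw new Error('Web Bluetooth not available');
  const device = await bluetooth.requestDevice({
    // Only list devices advertising the hub service, or whose
    // name starts with "HUB" (the name LEGO gave the controller).
    filters: [{ services: [HUB_SERVICE_UUID] }, { namePrefix: 'HUB' }],
    optionalServices: [HUB_SERVICE_UUID],
  });
  const server = await device.gatt.connect();
  return server.getPrimaryService(HUB_SERVICE_UUID);
}
```

Because Web Bluetooth is powerful, requestDevice only works on a secure origin and in response to a user gesture, as Tom notes.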
established standard that has been in Chrome for a while. It's fully shipped and
you can use it today. If you have a great
use case, or have built a cool demo like
[INAUDIBLE] with the Batmobile controller app that we've showed
at Sandbox, do let us know. PETE LEPAGE: Awesome. Sharing is a critical
feature for many apps. And it means
whatever that means-- let me try that again. Sharing is really critical. Whether that means
you're sharing out or you want to be able
to receive shares, many native apps can plug
into the system sharing APIs. And the web should be
able to do this too. With the Web Share API, you can. It allows your site to
share links and text via the system share picker. Working with the share picker
is pretty straightforward. It's best to use
feature detection because it's only available
on some devices, right? Like your Windows or macOS device doesn't have a system share picker. So you want to fall back to whatever other sharing mechanism makes sense. But if it is available, you just provide
the text, title, and the URL that you want to share. And then call navigator.share. And that's it. It's pretty easy. We just also added-- I think this one is
pretty cool-- the ability to share files as well. So you can share not only
links, but you can share files. The other side of the
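A feature-detected share call might look roughly like this. This is a sketch: the helper name and fallback behavior are assumptions, but navigator.share with title, text, and url is the API surface Pete describes.

```javascript
// Share via the system picker if the Web Share API exists,
// otherwise report that the caller should fall back.
async function shareLink(data, nav = globalThis.navigator) {
  if (!nav || typeof nav.share !== 'function') {
    return false; // no Web Share API (e.g. most desktop browsers)
  }
  try {
    await nav.share(data); // opens the system share picker
    return true;
  } catch (err) {
    return false; // user dismissed the picker, or sharing failed
  }
}
```

You would call it from a click handler, for example shareLink({ title: 'Web Capabilities', text: 'Project Fugu talk', url: 'https://example.com' }).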
equation is the ability to tell the system
that you are a share target, that you want to
receive shares from other apps. And the Web Share Target API
allows an installed web app to register with the OS as a share target so that users can easily share links and text to your app. In fact, Twitter just
recently implemented this. If you've installed Twitter, you
can try sharing and see, hey, I want to share a link over
to Twitter's installed app. To register as a
share target, you need to add the share target
entry to your web app manifest. And when the user chooses
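A share_target entry in the manifest might look something like this sketch. The action path and parameter names here are placeholders for whatever your site uses:

```json
{
  "share_target": {
    "action": "/share-action",
    "method": "GET",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  }
}
```

When someone shares to the installed app, the browser opens the action page with the shared data passed in those parameters.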
to share to your site, it opens a link
to the action page that I've got linked
there, and sends over the required information. So for Twitter, what
they do is they just compose a new tweet with the
information that you provide. Just one pro tip here. If you're going to do
this, make sure you precache your share target
page so that it loads instantly and works reliably. Otherwise if you do it and
it's like, not loading, users are going to
get a little cranky and they're not going to use it. So Web Share and Web Share
Target are available today. And they provide a nice way
to integrate with the system. But we've also been working
on Web Share Target V2, which allows you to receive files. You can now send files, but
you can also receive files. And so I suspect
that'll probably land a little bit later this
year, around Chrome 76 or so. THOMAS STEINER: So on
your computer, your laptop or your desktop
computer, oftentimes you have media keys that you
can use for playing or pausing your media content. Or for skipping
forward or backward. For web apps, these keys
typically are not accessible. But there's absolutely
no reason why they shouldn't be accessible. When you build a
media player app, you could for quite
a while now use Android's media controls in
the control panel via the Media Session API. And not only use
rich album artwork, but also obviously use the
media controls to skip forward and backward, pause,
play, and so on. What's new is that now
you can finally also use media keys on physical
hardware keyboards to control playing,
pausing, skipping and so on of your media. And this also works while the
media tab is in the background or in full screen mode. To make this happen, I
set up action handlers in the navigator.mediaSession interface. For the previous track action,
yeah, you can applaud for that. It's really cool. It's a great API. So you set up action handlers
in the media session interface for the previous track action,
for the next track action, for play, and for pause. The Media Session API
is fully shipped now. And it works on Android, Mac
OS, Windows, and Chrome OS. And support for Linux
is under development. PETE LEPAGE: Sweet! All right. With get user media,
getting a video stream from the user's web
cam's pretty easy. And you can use JavaScript
libraries to do face detection, or you can do QR codes
and that kind of thing. In fact, a couple of years ago,
in the Chrome experiment movie, "Conte Revo," that's
how it worked. You navigated by
moving your head to move through the experience. But doing shape
detection in JavaScript is kind of expensive. And most native platforms
have APIs for this built in. So the Shape Detection
API is something that is about to land. And it's got three interfaces. A face detector, a bar code
detector, and a text detector. The face detector
does exactly that. It can find a face. But it's just finding your face. There's no facial
recognition there. And sometimes it
can find landmarks. So like, it could find
your eyes, your nose, and your mouth. So you could use
that for maybe taking a picture when everybody's
looking at the camera. Or imagine trying on a pair
of sunglasses, or glasses, before you even try
them in your hand. And you guys all look better
when I have my glasses on. So that's a good thing. The bar code detector
does exactly that, right? It can scan all
kinds of bar codes. And Tom's going to dive
into that a little deeper in a minute. But the text detector. What do you think it does? It finds text. Now each detector
has a detect method, which takes an image
or a video or a blob. It'll check that image
and asynchronously return whatever it finds. We just finished an origin trial
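The detect flow for the bar code detector might be sketched like this. This is a hedged example: the API was still in an origin trial and the spec was being revised, so feature-detect first and treat the exact result shape as an assumption.

```javascript
// Detect bar codes in an image and return their raw values.
async function scanBarcodes(image, Detector = globalThis.BarcodeDetector) {
  if (!Detector) return []; // Shape Detection not available
  const detector = new Detector();
  // detect() asynchronously returns whatever it finds; each
  // result carries a rawValue (and a bounding box).
  const barcodes = await detector.detect(image);
  return barcodes.map((barcode) => barcode.rawValue);
}
```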
with the Shape Detection API. And one of the key reasons that
we do an origin trial is it allows us to
validate the design. In this case, we
realized that there were some things that we
needed to change and make some updates. So we're updating the
spec a little bit, and making some of the
methods a little bit simpler. We also realized that text
detection wasn't at a level that we were happy with. So we probably won't ship
that one for a little while until we get the
quality level up. THOMAS STEINER: So as you
could see in Pete's demos, there's some really neat use
cases for the Shape Detection API. But putting all of
that together so that it will work across
browsers is a bit tricky. The Shape Detection API is coming out of an origin trial. But not all mobile
browsers support it yet. But fortunately, this
is a really good example of where JavaScript
library can help and make lives easier for you. Using platform APIs
where available, and JavaScript and
WebAssembly where they aren't. We're happy today to announce an
early version of the Perception Toolkit, a library that
allows people to navigate your site with a camera. This idea of
navigating a website by pointing the camera at
physical things in the world has a lot of power, actually. And it can help users complete
tasks faster and easier. For example, if you want to
learn more about a product, sometimes instead of
typing in a search box, it can be way easier
and way faster to just scan its bar
codes to find the page that you're looking for. Another use case
is museum tours, where rather than forcing
your visitors to download yet another app that
they will never use again after the museum
visit, you can just scan the actual
artworks from the web and directly deliver a rich online experience. The Perception Toolkit does
planar image detection, which means if museums
have digitized their art works already, it is a
really small step for them to make these artworks
recognized by the toolkit. The Perception Toolkit
does three things. First, it manages a
camera session for you, and can detect bar codes, QR
codes, and 2D image targets. Using native [INAUDIBLE]
when available. And, as I said earlier,
WebAssembly when not. Next, it allows you to link real
world targets like bar codes with pages on your website. And this is done
using structured data. So it's easy to manage. And finally, it makes the
user interface really easy, from onboarding the
user to rendering cards. You have full control
over the user interface. But it takes care of
the heavy lifting. The whole thing is open source. So there's a lot of
flexibility to customize it however you want. But the general idea
is that you can get up and running really quickly to
add camera based navigation to your site. If you want to try this
out, we have a live demo over in the Web Sandbox. You can do this on
your own device. So just go to the URL that
you can see on the screens. And yeah, once you are
in the Web Sandbox area, try pointing a camera
at physical things-- signs, logos, and so on-- and see what's coming up. PETE LEPAGE: Sweet. Installed apps can
frequently alert users about new information,
either through notifications or through a badge on the icon. Badging makes it really
easy to subtly notify the user that something might
require their attention. Or it can indicate a small
amount of information. Badges are less intrusive
than notifications, and they don't
interrupt the user, which means that they can be
updated much more frequently. And because they don't
interrupt the user, they don't require any special
permissions to use them. There are plenty of great
use cases for badges. Chat and email could
indicate progress. Games could let users
know it's their turn. And plenty more. Just one small thing about them. They only work when the
app is installed, right? Because if it's not
installed, there's nothing for us to badge. So setting the badge
is pretty easy. Call window.badge.set. It takes an optional number. If you provide a number, it uses that number. If you don't provide a number, it just sets a dot. And then you can
clear it with Clear. It's available as
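A small helper around that might be sketched like this. The names follow the talk (window.Badge with set and clear); the origin-trial surface uses the experimental name mentioned below, so treat this as an assumption.

```javascript
// Show an unread count on the installed app's icon, or clear
// the badge when there's nothing to show.
function updateBadge(unreadCount, badge = globalThis.Badge) {
  if (!badge) return; // Badging API not available
  if (unreadCount > 0) {
    badge.set(unreadCount); // number shown on the icon
  } else {
    badge.clear(); // remove the badge entirely
  }
}
```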
an origin trial right now on Windows and Mac. And we're working on
Chrome OS support. So you can start using it today. During the origin
trial, though, you'll need to call
window.experimental.badge.set. If everything goes
smoothly, this should be available and
stable around Chrome 78. We're still looking for signs
of public support for this. So if this is something
you think you might use, please tweet us or let us know. Because this is a pretty
cool one, and we're getting close and want to
see some use on that. Yeah! All right! Woo hoo! So the last talk had
four rounds of applause. I think we're at four. So like, let's keep going. This is good. THOMAS STEINER: Well crap. PETE LEPAGE: Uh-oh. Uh-oh. THOMAS STEINER: My
screen just went off. Oh! Luckily it's back. So a lot of devices can keep-- yeah, the screen on
when they're running. For example, a presentation
app like PowerPoint can prevent a screen
from falling asleep, from turning off. Luckily, we can sort of do
the same pretty soon again on the web as well. Because presentation
apps like Google Slides have very much
the same use cases. So why shouldn't they be
able to request something that is called a Wake Lock,
that prevents from happening what just happened? So to avoid draining
the battery, most devices can
quickly go to sleep and turn the screen off when they're idle. This is fine most of the time. But some applications do need to
keep the screen on, or prevent the device from
sleeping so that they can continue doing the work. I mentioned keeping a screen
awake during presentations. But there are plenty other
really good use cases. For example, a run tracking
app turns the screen off. But it needs to
keep tracking where you are to record your route. Or maybe a game where you focus
on the screen while thinking about the next turn. And you don't want the
screen to turn off when you think about your next turn. The Wake Lock API enables you
to support use cases like that. And new experiences
that, until now, required a native application. The API also aims to reduce the
need for hacky and potentially power hungry workarounds like
looping an invisible video that people use nowadays
to keep the screen on. The Wake Lock API provides
two types of wake lock. Screen and system. A screen Wake Lock
prevents the device from turning the screen
off so that the user can see the information that
is displayed on the screen. And a system Wake lock
prevents the device's CPU from entering the standby
mode so that your app can continue running. And while they're
treated independently, one may imply the
effect of the other. For example, a screen
wake lock implies that the app, and
therefore, the system, should continue running. In order to use
the Wake Lock API, I need to create and
initialize a Wake Lock object for the type of
wake lock I want. Here, I'm requesting
a screen wake lock. Once created, the promise resolves with the wake lock object. But note, the wake lock isn't active yet. Before you can use it, you need to activate it. Before activating
the wake lock, let me first set up
an event listener so that I can receive
notifications when the wake lock state is changed. To activate the wake lock,
I need to create a request. And now finally, the screen
won't turn off ever again. Until, of course, I
cancel the wake lock. You probably have noticed the
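Put together, the request-listen-release flow might be sketched like this. Heavy hedging applies: the spec was being rewritten at the time of this talk, so the surface used here (navigator.wakeLock.request and a release event) is an assumption and differs from the slides.

```javascript
// Request a screen wake lock so the display stays on, and log
// when the lock is released (e.g. the tab goes to the background).
async function keepScreenAwake(nav = globalThis.navigator) {
  if (!nav?.wakeLock) return null; // Wake Lock API not available
  const lock = await nav.wakeLock.request('screen');
  lock.addEventListener('release', () => {
    console.log('screen wake lock was released');
  });
  return lock; // call lock.release() when you're done
}
```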
red ribbon in the upper right corner. This means that the
spec is currently being rewritten based
on early feedback and the code that you
can see here will change. But what you can
see here is also what you can actually
play with today. Wake Lock is moving
along really well. The folks at Intel have been busy rewriting the implementation. And it's available
behind the flag. But again, please note that
the current implementation will change. I suspect we'll start
something in an origin trial probably around
Chrome 76, maybe 77. But it could change depending
on how we make progress with the API and so forth. PETE LEPAGE: Sweet! All right. So apps that read
and write local files are essentially impossible to
do on the web today, right? You need to like,
upload something. If you want to
start working on it, you need to download it and put
it back in the downloads folder and save it back. It just doesn't work very well. The native file system
API aims to change that, making it possible for
users to choose files or directories
that a web app can interact with on the native file system. Single file editors
like Squoosh, which is a PWA our team
built earlier this year, are an ideal use case for this. Control O to open a
file, compress it, control S to save it. And then just repeat. And of course, there are
plenty of other examples. Notepad for editing text. Sketch Up for doing drawings. Figma for doing prototypes. All that kind of thing. Beyond single file
editors, we also want to enable the ability to
work with collections of files. So think like photo
galleries or media players. And the one that I'm most
excited about personally? I dream of the day
when VS code just works as a progressive
web app and I just use it straight in my browser. That would be pretty amazing. The one key design
element of the API is that its primary
entry point is a file picker that will require
a user gesture to invoke. This ensures that the user
is in full control over what the web app has access to. And we expect that some
files and folders, web apps just won't have
access to at all. As I said earlier,
users should never be scared about visiting
a website or web app. We're still iterating
on the spec for this. So this code might change. But grab the file
I want to open. Get the file. Read it into an array buffer. Make any changes I want. Then to save it, get a
FileWriter and save it to disk. There's an early draft
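That open-edit-save loop might be sketched like this. Heavily hedged: the spec was an early draft, so the names used here (chooseFileSystemEntries, createWriter) follow the proposal at the time and are expected to change.

```javascript
// Let the user pick a file, transform its contents, and write
// the result back to the same file on disk.
async function openAndSave(transform, win = globalThis.window) {
  if (!win?.chooseFileSystemEntries) return false; // API not available
  // Requires a user gesture: shows the native file picker, which
  // keeps the user in control of what the web app can touch.
  const handle = await win.chooseFileSystemEntries({ type: 'open-file' });
  const file = await handle.getFile();
  const buffer = await file.arrayBuffer();
  const updated = transform(buffer); // make any changes
  const writer = await handle.createWriter();
  await writer.write(0, updated);
  await writer.close();
  return true;
}
```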
of the spec in the works. But there's no code that
you can try right now. So this is still one
of these ones that's a little bit future looking. As with our other
stuff, if you think this is cool and
want to get involved, please go check out
some of these links. THOMAS STEINER: Thank you Pete. So many devices such as micro
controllers or 3D printers communicate through serial data. Up until now,
these devices couldn't be integrated with web apps. But what if there were a way to
bridge the physical and the web world by allowing them
to communicate together? Even though modern PCs rarely
include the classic nine pin serial port, a lot of the
devices driven by microcontrollers, such as
robots and 3D printers, communicate using serial data. But nowadays, commonly over
USB or maybe Bluetooth. So let's have a look at
how the Serial API will work in practice. First, I need to filter
for known vendor IDs in order to request a port
to the device in question. Here, I'm filtering on
the Arduino USB vendor ID. Remember, it's all serial data. So I need to agree
on a baud rate when opening the port, which determines the communication speed that is used when communicating
over the channel. Finally, I can then
read from the reader until there's no more data,
which the reader signals with a down flick. The actual code will
look similar to what is defined in the specification. But some of the details
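One plausible sketch of that flow, with the caveat that names like requestPort and open were still draft-level assumptions at this point:

```javascript
const ARDUINO_VENDOR_ID = 0x2341; // Arduino's USB vendor ID

// Request an Arduino port, open it at an agreed baud rate, and
// read until the reader reports it is done.
async function readFromArduino(serial = globalThis.navigator?.serial) {
  if (!serial) return ''; // Serial API not available
  const port = await serial.requestPort({
    // Filter the picker to known Arduino devices.
    filters: [{ usbVendorId: ARDUINO_VENDOR_ID }],
  });
  await port.open({ baudRate: 9600 }); // agree on the speed
  const reader = port.readable.getReader();
  let received = '';
  for (;;) {
    // The reader signals the end of the data with a done flag.
    const { value, done } = await reader.read();
    if (done) break;
    received += value; // real chunks are Uint8Arrays; string
                       // concatenation here is illustrative
  }
  return received;
}
```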
still need to be worked out. The permission model will work similarly to that of WebUSB. We started the Serial API
way before the capabilities project. And before its processes
came into being. So some of the details
still need to be defined. There's an implementation
behind a flag. But most of the
work has been so far on getting the
permission UI right. While you can request permission to access a serial port today, the object that your script gets
back isn't actually useful yet. But we're aiming to
have proper support by the end of the quarter. PETE LEPAGE: So there
are a whole bunch of peripherals that communicate
with your computer using HID. The most common classes of HID
devices are keyboards and mice. But there are a whole bunch
of devices that are not well supported by the HID driver, and
are often inaccessible to web pages. WebHID would make these
devices accessible and provide better
support for devices that have limited functionality. So many game pads, for
example, have decent support in the browser. But they frequently miss out on
some of the cool, neat features that they have. With WebHID, the
device specific logic can be moved into JavaScript. So providing full support. And there are plenty
of other use cases. Extra buttons on, like,
Bluetooth, or on the telephony headsets could easily control
video conferencing gaps. You could control virtual
reality experiences with spatial controllers. And just one note, though. Like, WebHID isn't intended
to get high level input where we already may have a high
level API like key presses and pointer events. Those are going to continue
to be used with the high level APIs. This sample opens
up a HID device, grabs that first
matching device, opens up a connection
to it, sets up a listener to be like, oh
hey, you moved the joystick. And since HID is
bi-directional, it sends out a report to
activate some of the lights on the device,
letting the user know that the device is ready to go. There is a draft of
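A sketch of that sample might look like the following. Only a draft spec existed, so requestDevice, inputreport events, and sendReport are assumptions here, and the report ID and payload are placeholders.

```javascript
// Pick a HID device, open it, listen for input reports (like
// joystick movement), and send a report back to light it up.
async function setUpJoystick(hid = globalThis.navigator?.hid) {
  if (!hid) return null; // WebHID not available
  const [device] = await hid.requestDevice({ filters: [] });
  if (!device) return null; // user picked nothing
  await device.open();
  device.addEventListener('inputreport', (event) => {
    console.log('input report', event.reportId);
  });
  // HID is bi-directional: send an output report to activate
  // the lights, letting the user know the device is ready.
  await device.sendReport(0x01, new Uint8Array([1]));
  return device;
}
```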
the spec available now. But there's nothing to
play with code wise. So we'll see that probably
around Chrome 78 or so. THOMAS STEINER: Thank you. PETE LEPAGE: This one's cool. THOMAS STEINER: So access
to the user's contacts
of native apps since almost the dawn of time. And it's one of the most
common feature requests that I hear from app developers. And this is often the
key reason why they still build a native application. Getting access to the contacts. There are many great use
cases for a contacts picker API on the web. But at the core, it is
designed to help you reach your friends and family easier. For example, in a web
based email client, the picker can be used
to select the recipients of an email, or
someone to connect with on social networks. But instead of providing a site
with complete and continuous access to the user's contacts,
the Contact Picker API is just that. An on demand picker. When invoked, it shows a
list of a user's contacts and makes it easy for them
to pick only the contacts that they want to share. And users can provide
access to as few or as many of the contacts as they choose. And because access is on demand,
each subsequent call to the API will show the picker again. In designing the API, we want
to give developers the features that they want. But we also want to
ensure that their contacts and their own
information is safe and they clearly understand
what they are sharing. That means that the Contacts
Picker will only work when served from a secure host. And it can only be shown
after a user gesture. Users can also
explicitly choose which contacts they want to share
and see exactly what is being shared before they share it. Opening the contacts
picker starts with a call to navigator.contacts.select. Then I need to pass
an options object with the information I'd like. So here, I just
want one contact. And from this contact, I
want the name, the email, and the telephone numbers. The call returns a
promise that resolves with an array of the contacts
that were selected by the user. So let's say the user
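With feature detection added, that call might be sketched like this. The helper name is invented, and since the API was an early experiment, this surface may change.

```javascript
// Show the on-demand contact picker (requires a user gesture)
// and return the single contact the user chose, or null.
async function pickRecipient(contacts = globalThis.navigator?.contacts) {
  if (!contacts) return null; // Contact Picker not available
  const results = await contacts.select(
    ['name', 'email', 'tel'], // properties we'd like to receive
    { multiple: false }       // just one contact
  );
  return results[0] ?? null;
}
```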
selected Sundar Pichai. Why not? Each contact will include an
array of all the properties that were requested. For example, our address
book entry for Sundar has his name, no phone numbers. But it does have an
email address for him. So go email Sundar. We're in the early
phases of experimentation with the contacts picker API. The explainer has been written
and a very first implementation is available behind a flag. But a lot of the details
still need to be discussed. For example, whether
addresses should be accessible via the API. Like the other APIs
that we've talked about, we're looking for
public support. So if this is
something you think that you could use in your
website, do let us know. PETE LEPAGE: All right. Well, that didn't get
a round of applause. OK, there we go. Yeah! That one's cool. All right. So I'm going to
cover this last one. And then we'll
wrap up real quick. But designers frequently
have custom fonts installed on their system that
they use for creating apps. Company standard fonts,
specialty license fonts, that kind of thing. Browsers can use them
if they know about them. But there's no way to get
a list of those fonts. The Font Access API
has two components. First, it allows
apps to enumerate the list of locally
installed fonts so that it's easy to use
them within your app. And second, it allows access
to the details of the font, such as the individual
glyphs and ligature tables so that web apps can take fine
grained control of rendering, using things like HarfBuzz. Getting access is pretty simple. So we just call
navigator.fonts.query. And give it a query parameter. In this particular case, I've
asked only for local fonts. I can then loop over
the list of fonts and add them to a font selector. We're still working on an
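That enumeration loop might be sketched like this. The explainer was still an early draft, so navigator.fonts.query() and the shape of its results are assumptions.

```javascript
// Enumerate locally installed fonts and collect their family
// names, e.g. to populate a font selector in an app.
async function listLocalFontFamilies(fonts = globalThis.navigator?.fonts) {
  if (!fonts) return []; // Font Access API not available
  const families = [];
  // query() yields metadata for each locally installed font.
  for await (const font of fonts.query()) {
    families.push(font.family);
  }
  return families;
}
```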
early draft of the explainer. And we're looking for feedback
to make sure that we've got all the use cases covered. So take a look at
this and let us know, especially if this
is something that you're really interested in. THOMAS STEINER: So as Pete said,
in the last couple of years, the capabilities of the web
have grown tremendously. Today, we've only covered
a handful of the APIs that we've shipped or
that we are planning to ship later this year. That will all help
close the app gap. But there are a
few other features that we are working on that I
want to at least quickly call out. We're adding support for
programmatically copying images to the clipboard, which is
the most popular bug in Chrome's bug tracker. Actually more than 1,000-- sorry, 1,800 stars. Currently, the asynchronous clipboard API only supports basic reading and writing of text. But we will be rounding out the existing asynchronous clipboard API with support for images. Again, helping to
unlock a couple of really new
creative use cases. Then we have the
SMS receiver API that gives developers
the ability to be notified when specially
formatted SMS messages are delivered to the user's device
so that they can parse out OTP tokens, for example. This can help
reduce the friction when verifying phone numbers
and can be used as a mechanism to limit abuse and
account creation. It can ease account recovery. Or it can be used to
confirm critical operations. Web developers have
the ability to display notifications using the Web
Push and notifications API. Notification Triggers
is a mechanism for preparing notifications
that are to be shown when certain conditions are met. For example, the trigger
could be time based. Or it could be location
based or something else. This makes it possible
to display notifications at a particular time
without involving a server, and improves the reliability by removing the dependency
on a network connection. This is essential for certain
types of applications. Like for example, calendars. So while most of the APIs
that we described today aren't available yet, some, like
the Web Share and the Web Share
are available. And there are a few others
that are in origin trial that you can all start
experimenting with today. I give you two, three
seconds for taking pictures of that slide. All right. Cool. So if you want to participate in
an origin trial with your site, the list of the currently
active origin trials can be found at developers.chrome.com/origintrials. And finally, if
there are any features that you think we're missing-- we mentioned about 75, but there might be more. Who knows? We want to hear about them. Please file a feature request
on bit.ly/new-fugu-request. So after all this,
we have just two links
the code lab, where you can experiment with other
ready to play with APIs. Not everything works
on all platforms yet. And sometimes you may have
to set a flag to use them. But as the admittedly biased
author of the code lab, I do think it's a lot of fun. And the second link is for the
capabilities project landing page. We will keep it up to date. And it links out to all the
currently in-flight. From there, we also
link to the full list of all the upcoming
capabilities that we will get to in the future. The web is an amazing place. And we're really working
hard to close that app gap. So thank you very much for
being part of our journey.