JAKE ARCHIBALD: Hello, everyone. [CHEERS AND APPLAUSE] Welcome to Building Instant
Loading Offline-first Progressive Web Apps. It turns out if you put all
the buzz words in your title, they let you have the big
stage, which is great. I'm Jake Archibald. I'm one of the designers
of service workers and one of the editors
of the spec as well. What I actually want
to talk to you about is spam phone calls,
which I get a lot of. You know the sort. It's kind of like have
you been in an accident in the last 200 years? Would you like car insurance
for all of your pets, et cetera, et cetera. I actually get
enough of these phone calls that I've started
inventing games to play. And my favorite game is Um. And in this game, I become
the most indecisive person that has ever existed. It goes like this. Hello, Mr Archibald, are you
interested in saving money on your mobile phone bill? And I reply, um, well, I--
well, I-- I suppose if um-- and at this point, the caller
usually tries to hurry things along. But it's very important
that you do not let them. Well, we've got
a great deal on-- Wait, wait, wait,
do you-- um, is it one of the-- um, um-- and
you get a point for every second that they aren't talking. And it's a really difficult
game because the caller gets frustrated pretty quickly
because you're directly blocking a conversation,
which is a very synchronous transaction
where each person wants an instant response. And you're breaking that model. We expect a similar
model when it comes to getting
data from a computer, but that wasn't always the case. Like, 25 years ago, our
expectations were pretty low. If you wanted to know
the direction somewhere, first, you would have
to go to the room with the computer in the room. And you turn it on, then the
fans would start whirring. You get that static
crackle as the CRT monitor whirs into
action, Windows 3.1 would start booting up. And then eventually you
would get your desktop, and then it did this-- [TA-DA! SOUND] --because this was an era
where booting up successfully was a fanfare worthy moment. [LAUGHTER] But even after that, you had
to find and insert the map CD and print out directions,
then off you went. These days, we don't
need to boot up the one computer
we own because we have a computer in our pocket
that's already booted up. We can ask it for directions. But that [INAUDIBLE] often
comes from the internet. If you have zero
connectivity, and you ask the internet for something,
the web's answer is often no. I remember when I first realized
how problematic this is. A few years ago, I was working
in an agency, a web agency. And I found myself needing
to go to the toilet following a lunch that my stomach
was unhappy about. There were five
cubicles to choose from, but in this instance, the
first four were occupied. That's usually OK. Even in this situation,
I felt like one cubicle would be enough for me. But from previous
experience, I knew that mobile connectivity
and the office Wi-Fi only extended to the
first four cubicles. [LAUGHTER] And I thought for a
moment and decided no. This is not acceptable. And I returned to my desk,
and I waited until later, despite being in
some discomfort. That's the day that I discovered
that as a human being, I required an internet
connection in order to take a dump. [LAUGHTER] So this is a problem
worth solving. And until recently,
there was nothing you could do about
it, especially during the initial page load. But that all changes
with the service worker. I was told that this slide
wasn't impactful enough. I don't know. It's one of those
management buzz words I don't really understand. But I did give it another
go, came up with this. [LAUGHTER] Apparently, this
has branding issues. It's got loads of brands. I don't understand. Bruce Lawson, the
deputy CTO of Opera. He gave it a go as well. He came up with this. But it's kind of freaky. And if you stare
at it, it almost looks like that the
colors are changing. And the colors are changing. It's got a filter on it. But Ben Jaffe, who, at the time,
worked at Udacity, he had to go and came up with this. [VIDEO PLAYBACK] [DRAMATIC MUSIC PLAYING] -For too long, users have been
left staring at a white screen. For too long,
they've been let down by the cruel seas of
network connectivity. And for too long, we've
been powerless to help. We've been left waiting. But no longer, a new
browser feature has arrived. A total game changer. A feature that lets you control
the network rather then letting the network control you. Who is this new feature? And what promises does it bring? Introducing the Service Worker. [END PLAYBACK] [CHEERS AND APPLAUSE] JAKE ARCHIBALD: It's a
bit much, isn't it really? I prefer mine. It's got a TIE Fighter
with a cat's head in it. But what does this all mean. What can Service
Worker actually do? Well, we're going to take a
look at an Emojoy, which is a little progressive web app. You can find it at this URL. But it's basically like a
simple version of Hangouts. So I guess it's
already out of date. It should be Allo. But anyway, it's like
Hangouts, but it only lets you enter an emoji. It started life
as a mere website, but over time it was built up to
become a fully progressive web app. And it didn't require a
full rewrite to do this. It was something that happened
incrementally bit by bit. This here is V1. It runs at 60 frames a second. It's really simple. So it was only 25k, all
in, so really fast to load unless, of course,
you're offline. I mean, it's fast, but user
experience is lacking somewhat. At least this is what it was
like when I first launched it. Here's how I fixed it. To begin with, I registered
for a service worker. Now this isn't some magic
manifest or a config file. It's just JavaScript
because why should we reinvent some new thing when we
have a world full of JavaScript developers and loads of
tooling already out there. Oh, but of course, we should
wrap our register call in a basic feature detect because there are older browsers out there that don't support Service Worker, and a simple feature detect prevents them from hurting themselves and others around you. [LAUGHTER] But, anyway, in that script, I'm just going to put a simple log for now.
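As a rough sketch of what that looks like (the sw.js file name here is just an assumption, not necessarily what Emojoy uses):

```js
// In the page — register the service worker, behind a feature detect
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
```

```js
// sw.js — nothing but a log for now
console.log('Hello?');
```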
So now if I load the page and open Chrome's DevTools Console, there it is. Cool. Also, let's have a look at
this new application tab. There is the Service
Worker section. And there's our service
worker in there. As you can see, it was
last modified in 1970, meaning this service worker
predates the internet, which is pretty cool. That's a bug. We're going to fix that
in a couple of days. So how has this
actually changed things? So to find out,
we're going to pit the original online only sites
against that shiny new service worker version. To do that, let's go
to the comparinator. MALE SPEAKER: Fight. JAKE ARCHIBALD: First up,
the online experience. So, OK, it's pretty
much the same. Both load reasonably quick. I recorded these with a
[INAUDIBLE] internet connection or as most people in
the world call it, their internet connection. Also, I did close the browser
before clicking the shortcut, so the load time
you're seeing, that includes the browser opening. What about the
offline experience? OK, so both are failing. Now you've probably noticed,
not much has changed. Well, nothing's changed. OK, one thing happens. It logged "Hello?" because
that's all we told it to do. And that's how
Service Worker works. It only does what you
tell it, and that's great because I'm sick
of these magic APIs. Compared to AppCache, if we
gave Emojoy an AppCache manifest and that manifest just
contained the words CACHE MANIFEST, which is
required to make it valid, even online, that would
turn the render of Emojoy from this to this. And I can't help feeling that's
not what I told it to do. AppCache is a bit of a disaster. It has a very simple format, but a massively complicated rule book. And if you didn't like
any of those rules, tough. You were restricted to the way
that the designers of AppCache wanted you to work,
and those designers had not created many
offline web experiences. So this whole thing
kind of gave rise to the Extensible Web
Manifesto, which most browser vendors are now fully behind. Here we acknowledge that
browser developers and standards developers are not better
at building websites than web developers. And we should stop tossing out
scraps from our ivory towers like AppCache, like CSS that does reflection in one particular way. And, instead, we should give
developers full control, give you as much
information as we can, as many hooks as we can. By providing you with this
kind of low level access, you can create things
we didn't consider. You can use patterns
we didn't invent. And those become evidence for
us as well for higher level features, so we can make the
common stuff easier or faster. So Service Worker was
built to this model. So the things I show you today
are just the kind of patterns that I have. But you'll find
your own patterns, they'll probably be
loads better than mine. The service worker is driven
by events and one of these is fetch. So we've got to listen
for that there, just with a debugger statement. So before I run that, I'm going
to check this checkbox update on Reload. And I'll cover why
in a little bit, but now if I refresh
the page, I'm going to hit that breakpoint. So on the event object there,
it's got a request property. And this is representing the
request for the page itself. You can get the URL, the
headers, the type of requests. But I also get one of
these for every request that the page makes, so the CSS,
the JavaScript, fonts, images. I get the event for
these avatar images, even though they're
on another origin. So you get all of the requests. So by default, requests go
from the page to the network. And there's not a lot you
can do about it, really. But once you introduce
a service worker, it controls pages and
requests go through it. But like other events, you
can prevent the default and do your own thing. So instead of
triggering the debugger, I'm going to call this
event.respondWith. And this is me telling
the service worker, hey, I'm actually going to take
control of this fetch. And I'm going to
respond with a response. It says, Hello. So let's give that a spin. And I refresh the
page, and there it is. So instead of going
to the network, the service worker
just took care of it. So this example works offline. I mean, it's rubbish, but it does work offline.
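A minimal sketch of that handler, assuming it lives in the same sw.js:

```js
// sw.js — take over every fetch and reply with a hand-made response
self.addEventListener('fetch', event => {
  event.respondWith(new Response('Hello!'));
});
```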
You don't have to respond to every URL the same either. You can parse the URL. You can pick out
the component parts. If the path name ends in .jpg,
you can respond with a network fetch for a cat.jpg. event.respondWith takes a response object or a promise that resolves with a response. Fetch returns a promise for
a response from the network. So these compose together
really, really well. So now if you refresh the page, it's back, but all the avatars are cats.
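Something along these lines, where the cat image URL is obviously made up:

```js
// sw.js — serve a cat for anything that looks like a JPEG
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);

  if (url.pathname.endsWith('.jpg')) {
    event.respondWith(fetch('/imgs/cat.jpg'));
    return;
  }
  // ...everything else carries on as normal
});
```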
Instead of doing something special based on the request, we can do something based
on the response as well. So here I'm going to respond
with a fetch for event.request. This is telling the
browser to just do the thing it would
have done anyway, but because it's
a JavaScript API, we can actually get
hold of the response before we send it
on to the browser, and we can take a look at it. So if the status
is 404, we could respond with something
else, some SVG or whatever. Otherwise, we'll
return the response we got from the network. So now if we refresh the
page, the avatars are back. But if I navigate to some
sort of nonsense URL, we get the 404 message. So Service Worker lets you
intercept requests and provide a different response. And you could do that based
on the request, the response, or the response of a
completely different request. You can do what you want. But this stuff is just
playing around, really. You probably wouldn't use a
service worker for a 404 page. You'd let your server do that. So let's do something
a bit more practical. A good way to dip your foot
into the service worker pool is to make an offline
fallback page, something to show the user
if the page fails to load because the current
state of things is pretty bad. The user comes to us, comes
to our site, wanting content. And this is our moment to shine. But without a connection, we
crap ourselves to the extent that mama browser has to
step in and defend us. And it does this by
blaming the user. Chrome can't display this page
because your computer is not connected to the internet. If we're going to be
competing with native, this is like an
operating system error. We can do better than this. So I created a custom error page. It's still an error,
but at least we're owning it this time. And it's something we
can build on later. I want to show this when
there's no connection, so it has to work offline. And I need somewhere
to save it, and I need to do that upfront when
the user does have a connection. The service worker has an
event for this install. And I pass a promise
to event.waitUntil to let it know how long
the install is taking and whether it works or not. The install event is fired when
the browser runs the service worker for the first time. It's your opportunity
to get everything you need from the network, CSS,
JavaScript, HTML, and stuff. For storing these
requests and responses, there's a new storage
API, the cache API. This specializes in request
and response storage. But unlike the
regular browser cache, stuff isn't removed
at the browser's whim. So we put all of that stuff in
there, and once that's done, the service worker can
start controlling pages. So let's do that. I'm going to open a cache. And you can call it
whatever you want. I'm going to call it static-v1. And then we add the offline
page and the CSS it needs. So if the cache fails to
open or it runs out of space, or any of these fetches fail or return a 404, the promise rejects. And that signals to the browser that the install failed. And if that happens, this service worker will be discarded. It will never control pages.
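A sketch of that install step; the cache name matches the talk, but the exact file URLs are placeholders:

```js
// sw.js
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('static-v1').then(cache => cache.addAll([
      '/offline.123.html', // versioned offline page (placeholder name)
      '/offline.123.css'   // the CSS it needs (placeholder name)
    ]))
  );
});
```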
Note that both the offline page and the CSS have a version number in the URL. This means we can give them
good HTTP caching headers. And we just change the URL
when we change content. You can actually work
around bad caching headers with Service Worker,
but it's much better to work with good caching, so
that's what I'm doing here. But now we need
to use this cache. So over in the fetch
event, I'm going to respond with a
match in the cache, one that matches this request. Matching is done
similarly to HTTP, so it matches on URL, Vary headers, and method, but it ignores the
freshness headers. Match returns a
promise for a response. So if we request the
offline page directly, or its CSS is going to come
straight from the cache, and that's great. But if there's no match found,
it results in undefined. So we need to deal with that. So if response is falsey,
which undefined is, we're going to fetch the
request from the network. If fetch fails, which
it's going to do offline, the promise rejects. So we'll catch that, and if
the request was a navigation, we'll return the offline page. We only want to return this
offline page for navigations because it doesn't
make sense to return this HTML page in response to
a request for some JavaScript or an image or something
like that, but that's it.
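Putting that together, the fetch handler ends up looking roughly like this (again, the offline page URL is a placeholder, and checking request.mode is just one way to spot navigations):

```js
// sw.js
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(response => {
      // Cache hit? Use it. Otherwise go to the network.
      return response || fetch(event.request);
    }).catch(() => {
      // Network failed. Only fall back to the offline page for navigations.
      if (event.request.mode === 'navigate') {
        return caches.match('/offline.123.html');
      }
      throw Error('offline, and not in the cache');
    })
  );
});
```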
We can now give the page a refresh. It doesn't look like
anything's changed. Great, but over an
application tab, there's this cache
storage section, and we can see static-v1. There it is. So if we simulate offline, which
we can do in the Service Worker panel-- there's a little
offline toggle there. You can also do it
in the network panel. But now if I refresh
the page, there we go. There's the offline page. We can ship this. This is shippable. And that's what
the guardian did. On the developer blog, if you
don't have any connectivity, it serves you this
custom sorry page. But as a rather nice touch,
it gives you this crossword that you can do in the
meantime, which I think is a really nice Easter egg. I'm no good at crosswords. Actually I didn't even look
when I recorded this screencast, clue one across, a Californian
city, three letters, nine letters. I probably should have
managed to get that one. [LAUGHTER] Of course, we may want to make
changes to this in the future, add some things like
a Refresh button or some JavaScript which keeps
checking for a connection. And to do that, we need to work
with Service Worker's update system, which we've been
avoiding so far thanks to this checkbox. So let's take those
training wheels off. Say we want to
change this text here to be No Connectivity
rather than No Connection, just a simple change. Well, we change the
HTML, of course, but we need to update
the service worker too. The URLs are generated
from the file's contents, so we'll need to update that. We'll make the same
change in our fetch event, so we're returning the
correct page from the cache. I'm also going to change the
version number of this cache from v1 to v2, and I'll
show you why in a moment. But let's give that a spin. I'm going to reload the page
to pick up those changes. And once again, I'm going
to change the network state to Offline and
reload the page again. But you can see here that
the text hasn't changed. It still says Connection,
not Connectivity. Here's what happened. We reloaded the page which
triggered the browser to go and check the
service worker for updates. It fetched the service
worker and went, this one is different
to the one I have. It's quite different, and
it gets set up as version 2, running alongside
the old version. The old one remains because
the new one isn't ready yet. And it's because the
new one has to go through its install event,
so it gets everything it needs from the network,
including the new offline page that we've changed and then
puts them in the cache. And this is why we gave
the cache a different name, so it wouldn't overwrite the
stuff that the version 1 was still using. Now by default, the
new version waits. It doesn't take over while the
old version is still in use, and that's because
having multiple tabs open to the same site running
different versions, that's the source of some
really nasty bugs that a lot of us as
web developers very rarely cater for. We can actually see this
happening in DevTools, so here we've got the
activated service worker. But below it, there's
one there waiting to activate with a
different version number. We can also see in the
cache storage static-v2 will be there as well. This new service
worker will stay there until the old one
is no longer in use. So when this page navigates
away or closes or whatever, there's nothing
left to control now. The old version
isn't needed anymore. It becomes, well,
redundant, and it goes away. [MUSIC PLAYING] [LAUGHTER] [MUSIC ABRUPTLY STOPS] But that means the new
version can move in and start controlling pages. We can make that happen
by navigating away. I'm still in offline mode. Just navigate to About Blank. Then I'm going to click Back. And there we go. The text has changed. We still have that old cache
hanging around, though, but we can deal with that. Once the old service
worker is gone, the service worker
gets an activate event. And we can use that
to perform cleanup because we know that the
old version is gone now. We can migrate
databases, delete caches. So I tend to have an
array of all the caches that I expect to be there. And then I use
the activate event to go through all of the caches,
and delete the ones that I don't expect to be there. It's a slightly
ugly piece of code. It's kind of a bit of
boilerplate right now. I think it's one of those things
that we'll develop a higher level API for pretty soon. This behavior of
one service worker waiting behind another,
that's the default, but it doesn't have
to be that way. Your new service
worker called skip waiting which means it doesn't
want to wait behind the older version. When you do this that just
kicks the old version out, and then it takes
off straightaway. But when you do this,
you need to be aware that you are now
controlling a page that was loaded with some older
version of your service worker, not necessarily the last
one, some older version. You can track this
from your page as well. So when you detect
this happening, you can show a message
to the user like refresh for the new version. Or maybe you could
just trigger the page to refresh automatically
if that's going to be an OK user experience. But for most of
development, I really recommend this update on
reload thing, this checkbox in the service worker panel. This changes the update
flow to speed things up. With update on reload,
you hit Refresh. The browser fetches the service
worker from the network. It treats it as a new version
even if it hasn't actually changed, so it goes through an
install, picks all the latest stuff up from your server, and
then it puts them in a cache. And once that's done, it
kicks out the old version, moves in, and then
the page refreshes. And that's kind of a lot,
but the product to this is you just hit refresh, and
you get the latest service worker and your latest
assets on every load. OK, that was a lot to take in. But how are we doing? Well, to find out, we must
return to the comparinator. [MUSIC PLAYING] MALE SPEAKER: Fight. JAKE ARCHIBALD:
First up, online. OK, so we haven't
really changed anything. Content is still arriving in
a reasonable amount of time. How about the
offline experience? We're now taking responsibility
for network failures. We're catching that error. This is something we can ship. But we're not quite a
progressive web app yet, so we need to tell the browser
that we're ready to offer a native-like experience. So in the head of
your page, you can declare a theme
color which Chrome uses to style the location bar. So that's a quick
win, it kind of integrates with the operating
system a lot better. We want users to be able to
add this to the home screen. There's a lot of ways you can do
this to cover all the browsers. You'll need to specify an icon,
an icon, probably another icon. Did I mention the icon? But most of this meta
crap isn't needed until the user opts in to adding
the site to your home screen. And meanwhile, it's being
sent down with every page, increasing the time
to first render. So we got rid of all
this and replaced it with a single reference
to a manifest. And that's only downloaded
when it's needed. It no longer clogs up
the load of every page. Furthermore, it was a great time
to standardize all that meta crap. The manifest looks like this. And once you have one
and the service worker, Chrome will start
asking the user if they want to add the
site to their home screen, if the browser thinks they're
engaged enough with this site. This icon here comes
from the icon field. You can specify many
icons of different sizes, and Chrome will pick the nearest
one that it wants to use. Here I'm just serving
one 512 by 512 icon and letting Chrome
do the scaling. You should serve smaller
icons if your large icon is a big download. But in this case, the big icon
is 5k, so I wasn't too fussed. The name comes from
the name property. You see how this works. If the user taps on Add to Home Screen, then they get an icon
on the home screen. We've already seen where
the icon comes from. But this name here, that
comes from the short name. If your short name and
your name are the same, you can omit the short name. It doesn't need to be there. Later when the user
launches your app, they get a splash screen. And that's displayed while
the browser is spinning up and while the page
hasn't rendered. This icon comes
from the icon set. It will go for a bigger
icon for the splash screen. The name comes from the name. The background color comes
from background color. And the color of
the status bar here, that's from the theme color. Then once the page is ready,
the splash screen goes away. And the page that is loaded is
this one listed in start_url, so it doesn't have
to be the same page that the user was on when
they added to home screen. But you also notice that
the URL bar isn't there, and that's because the web app's display is "standalone".
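Pulling those fields together, the manifest ends up along these lines (the icon path and colors here are made up for illustration):

```json
{
  "name": "Emojoy",
  "short_name": "Emojoy",
  "icons": [{
    "src": "/icon-512.png",
    "sizes": "512x512",
    "type": "image/png"
  }],
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ff0069",
  "theme_color": "#ff0069"
}
```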
And all this adds together to make the whole experience feel like a native app, and
that's why it becomes so important that we,
at the very least, own our connection
errors because we don't want the default
browser error breaking this native feel. Once again, this is
something we can ship. It's an incremental improvement. When developing
offline capable sites, it's a common error to
start iterating on this. For example, make it
show cached messages, make it a fuller
offline experience. But that's an online
first approach. And that works fine when
the user is truly offline, but zero connectivity is not the
worst thing we face, this is. I call it Lie-Fi. [LAUGHTER] This is when your phone
says it has connectivity, but it doesn't. If you have Lie-Fi and
you ask for content, the web says, um-- well,
uh-- um-- and that's it. This is worse than offline. With offline, you
get a quick answer. It's no, but it's an answer. But here you're
just left waiting. And I'm sure you've had
this before yourselves. You don't want to give up. You keep thinking,
well, maybe if I just wait a few seconds, a few more
seconds, the page will arrive, but does it? No. You're forcing the user to
stare at this or give up. And with every
passing second, they hate the experience
a little bit more. Our current online first
pattern works great when the user has
a good connection. They get the latest
messages pretty fast. It's great when user is offline
because they get some cache data or a Failure page. But with Lie-Fi, that's it. Chrome removes the
splash screen when the page gets the first render,
and with Lie-Fi, that never happens. So we've improved things
for offline users, but Lie-Fi users are in the
same hell as they were before. This is the problem
with online first. We're giving users with some
connectivity a worse experience than those with no connectivity. And this isn't always
just down to poor connectivity on
the user's device. I don't know if you've
had this before, but I get it all the time. My phone is reporting
full signal, but I cannot get a
byte down it at all. A lot happens to get
data from the web. The phone sends a request off
to the Wi-Fi router or the cell tower, then on to the ISP
through intermediate proxies, potentially across to the
other side of the world. And eventually, the request
reaches the destination server. But that's only half the journey
because the server responds, and the response has to go all
the way back across the world through proxies, through
ISPs, over the air, and land safe and sound,
hopefully, on your phone. But if something along the
way goes wrong or runs slowly, the whole thing runs slow. And therein lies Lie-Fi. And you don't know how good the
network connection is until you give it a go, until you try it. And that takes time,
there are a couple of APIs on the web that
attempt to predict the network such as-- that's interesting. There's supposed to
be a slide there. This isn't part of the
act, by the way. This is just some fun. [CHEERS AND APPLAUSE] Oh, yeah. AUDIENCE: Lie-Fi. JAKE ARCHIBALD: Lie-Fi, no
this entire presentation works offline. Maybe I'll unplug it and
plug it back in again. Do we have a dodgy
connection here? Don't present [AUDIO OUT]. I do want to present
from my own laptop. How are we for the-- oh, my god. It's starting to work. This is amazing. So my colleagues
always tell me I'm stupid for doing my
own slide framework. There we go. Let's see if this
continues to work. Oh, my god. It's working again. Yes. [CHEERS AND APPLAUSE] OK, here we go. Where was I? OK, so there were a couple
of APIs on the web that attempt to predict the network. And these are things
like navigator.online and navigator.connection.type. But these are weak signals. Those APIs, they only
know about that bit. They don't know about
any of the rest. For instance, when
navigator.online is false, you have no connection. That much is certain. When navigator.online is true,
you have not no connection. Navigator.online
is true when you're connected to a cell
tower or a router, though that router may only
be plugged into some soil. Navigator.online
will still be true. Anything after that first
hop cannot be predicted. You have to make a connection,
and see, and that takes time. If the user wanted to
come to our chat app and look at past
messages, why should they need a connection for that? Why should the user have
to wait for a connection to fail just to see stuff
that's already on their phone. The great thing
about local data is you don't need to make an
internet connection for it. This is why the gold
standard is Offline-first. Offline-first solves
these problems. With Offline-first, we get
stuff from the cache first, and then we try and get
content from the network. And the more you get to
render without a connection, the better. You should think of
the network as a piece of progressive enhancement,
an enhancement that might not be there. So we need to rethink
our approach a bit here. I'm going to create
an application shell, and that's just a
site without messages. And we'll leave it to the
JavaScript to populate it. So we'll change
the install event so it caches the app shell,
the CSS, and the JavaScript. Meanwhile, over in
our fetch event, we're going to start by parsing the URL so we can read its component parts. And then if the request is to the same origin as the service worker and the pathname is just slash-- so it's the root page-- we're going to respond with the app shell from the cache, done. Otherwise, we'll try and respond with cached content and fall back to the network.
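A sketch of that routing, assuming the app shell is cached under a URL like /shell.html:

```js
// sw.js
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);

  // Same origin, root page? Serve the app shell straight from the cache.
  if (url.origin === location.origin && url.pathname === '/') {
    event.respondWith(caches.match('/shell.html'));
    return;
  }

  // Everything else: cache first, falling back to the network
  event.respondWith(
    caches.match(event.request).then(response => response || fetch(event.request))
  );
});
```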
So altogether, we're going to fetch the HTML, CSS, and JavaScript from the cache, and that gets us a first render. And then the page's JavaScript, that's going to go
off to the network and get the messages
for us, which gets our content under way. If that fails,
the JavaScript can show some kind of connection
error message as well. So by doing all
that, what do we win? It is time to return
to the comparinator. [MUSIC PLAYING] MALE SPEAKER: Fight. I enjoyed that jingle
the first time, but it's feeling like
diminishing returns now. Anyway, how are we doing? The online experience,
look at that. We've massively
improved the render time by getting to first render
without the network. And the messages
are still coming from the same [INAUDIBLE]
network connection, but they get on screen faster
because the download starts much earlier. What about offline? Great, the app shell loaded,
and the page's JavaScript showed the no connection error. But what about Lie-Fi? So we've defeated the
blank screen at least. I mean, our JavaScript
could do better here. It could show a
spinner or something. But things are
looking loads better. You can see the benefits
of Offline-first versus Online-first. Rather than improving things
for one connection type, we've improved things
across the board. We're back in control
of the user experience. And it's taken us very
little code to get there. We can ship this. What about caching chat
messages so we can display those before connectivity too? Aside from the initial pageload,
messages arrive one by one. This continual feed
of data doesn't really map well to the cache API, which
is request and response based. Instead, we want to have a
store that we can add and remove messages from. The web platform
has such a thing. It is called indexedDB. IndexedDB has a bit of a bad
reputation among developers, I think it's it fair to say. But that's only because it's
the worst API ever designed in the history of
computer science. [LAUGHTER] Other than that,
it's pretty good. But seriously, 60%
of the awful comes from this weird
event system it uses because it predates Promises. If it was invented today,
it would use Promises. And there is an effort
underway to patch it up as best we can without
breaking compatibility. I much prefer teaching the web
platform rather than libraries or frameworks, but I am going
to make an exception here. idb is a little library
that I threw together that mirrors the
indexedDB API, but it uses Promises where IndexedDB
should have used Promises. Other than that, you're still
using idb, all the same method names and everything just with
60% of the awful eliminated. It's 1.2k, so it's really small. There are bigger, higher
level APIs out there, which you may want to consider,
things like Dexie, PouchDB. This library only eliminates
the very worst of idb. But let's use it. Let's create a database
for our messages. The messages look like this. They're in a JSON format. So here's how we build
a database for it. I'm going to start by
opening the database, giving it a name,
Emojoy version number 1. And then we get a
callback to define the schema of the database. So we need somewhere to
actually store the messages. Relational databases
call these tables. idb calls them object stores. I created one called messages. And I'm going to tell it
the primary key is ID. We're also going to look
at messages in date order, so I'm going to create an
index for that called by-date, and that's it. Not too painful. Now we can take this db
promise and add messages to it as they arrive. So say we had a
function like this that gets called every
time a new message arrives and it's added to the page. We get our database
from the Promise, create a readwrite transaction
on the messages store and then add it. I don't expect you to
remember all this code. I'm just trying to, I
guess, try to convince you that indexedDB can be a
little bit less scary when you involve Promises. Getting messages, not too
bad either: transaction, go to the object store, get the index where everything is date ordered, get them all, done.
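Roughly, using the idb wrapper as it was at the time (idb.open, tx.complete and friends; newer releases of the library have a different surface, and the id/date property names here are assumptions):

```js
const dbPromise = idb.open('emojoy', 1, upgradeDb => {
  const store = upgradeDb.createObjectStore('messages', { keyPath: 'id' });
  store.createIndex('by-date', 'date');
});

// Called whenever a new message arrives and gets added to the page
function storeMessage(message) {
  return dbPromise.then(db => {
    const tx = db.transaction('messages', 'readwrite');
    tx.objectStore('messages').put(message);
    return tx.complete;
  });
}

// All messages, in date order
function getMessages() {
  return dbPromise.then(db => {
    return db.transaction('messages')
      .objectStore('messages')
      .index('by-date')
      .getAll();
  });
}
```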
Of course, we can't just keep adding messages to the database. We need to perform some
cleanup at one point. Say we wanted to delete
everything but the newest 30 messages. We're going to
create a transaction, get the date index. I'm going to open a cursor so I
can go through them one by one. The prev here means
we're going to go through the index
backward starting with the newest message. I'm going to advance
past the first 30. We want to keep those. And then I'm going to
loop through the rest, calling delete on each one.
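The cleanup, sketched against the same older idb wrapper API:

```js
function trimMessages() {
  return dbPromise.then(db => {
    const tx = db.transaction('messages', 'readwrite');
    return tx.objectStore('messages').index('by-date').openCursor(null, 'prev');
  }).then(cursor => cursor && cursor.advance(30)) // skip the newest 30 (we keep those)
    .then(function deleteRest(cursor) {
      if (!cursor) return;
      cursor.delete();
      return cursor.continue().then(deleteRest);
    });
}
```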
OK, so this code example isn't quite as pretty as the others. Like I say, the library
only adds Promises in. You're still exposed to
the rest of idb's ugliness. But this is loads
cleaner than it is with just straightforward IndexedDB. Having a database
full of messages means we can fetch the
app shell, the JavaScript, the CSS from the cache. That gets us a first
render, but then we can render with messages
from the database as well. We get content on the screen
without going to the network. And then we go to the network
for new messages and avatars. And we can update the page. So if this network request
fails, that's not a big deal. We're still displaying
content, that's pretty good. That's a great
offline experience. If the network request is
slow, that's OK as well. If the user is just coming
back to check past messages, that's fine. To see the benefits of
this, we must once again gaze upon the comparinator. [MUSIC PLAYING] MALE SPEAKER: Fight. JAKE ARCHIBALD: OK,
the online experience. Check out that
performance difference. That's huge. What about the
offline experience? We get content, and
we get it quickly. OK, the avatars have failed. We'll deal with
that in a moment. But this is loads better. This is way better than the
sorry message we had before. How about the
Lie-Fi experience? We've gone from the most
frustrating experience in the world, the white
screen of eternal misery, to instant content. We can ship this. The only thing missing, in
terms of a full offlining first experience is the avatars. But yeah, we can fix that. Here's our current fetch code
just like we wrote before. We want to do something
special for the avatar, so let's rewind a bit. If the request is
to gravatar, which is where I'm getting the avatars
from, I'm going to call out into another function,
handleAvatarRequest, otherwise we'll
just carry on doing what we were doing before. So what does
handleAvatarRequest do? Well, we could fetch the
avatar from the network. And if that fails, we could
serve some kind of default from the cache. And we'd cache that as
part of the Install Event. That's cool. We can ship that. It's good enough. But later we could do
something even better. When we get the
request for the avatar, we can try and get
it from the cache. And if we get a response,
we'll send it back to the page. That gives us the avatar
without going to the network, but we should go to the network
too not only in the case that we didn't have something
in the cache but also to update the one
that is in the cache if there is one because users
change their avatars a lot. I'm showing an old
avatar is great. That's fine. But we should update
it for next time. So off to the network we go,
and if we get a response back, we put it in the cache. And that's how it's done. Unless we were unable to
give an avatar to the page from the cache, in
which case, we'll send them the one
from the network. And that's done. This is what HTTP calls
stale-while-revalidate. It's one of the cache
control options. It's an experimental
feature in Chrome right now. We're busy implementing it. It's behind a flag. But we don't need to
wait for it to ship. We can emulate it inside
the service worker. People can have it today. Thankfully, the code
for this is actually, I think, a lot
simpler than trying to describe it with a diagram. We're going to start by
making a network request because we always
want to do that, sometimes just update the cache. Sometimes it's to send
back to the page as well. We use waitUntil to say, hey, we're going to do some
additional work as well as providing a response. And in here, we're going
to take the response that we get from the network. We're going to clone it. And the reason we clone it is
because a response can only be read once. The body of the response
can only be read once. And this is how
the browser works. This is how you can receive
a full gigabyte video and watch it, but that gigabyte never needs to be on disk all at once. It never needs to be in memory all at once. We're going to clone it
because we might use it twice. We're going to send
it back to the page, and we're going to
put it in the cache. We're going to open the
cache called avatars. Unlike our static
cache, we're going to preserve this one
between versions. We're not going to change
the version number. And then put the
avatar in the cache. Meanwhile, we're going to return a response from the cache. And if the cache doesn't have one, we're going to fall back to the network one. And that's it.
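Here's the shape of it; handleAvatarRequest is the helper name from a moment ago, and the gravatar hostname check is just one way of routing to it:

```js
// sw.js
function handleAvatarRequest(event) {
  // Always hit the network, and put a clone of the result in the cache
  const networkFetch = fetch(event.request).then(response => {
    const copy = response.clone();
    return caches.open('avatars')
      .then(cache => cache.put(event.request, copy))
      .then(() => response);
  });

  // Keep the service worker alive while that happens
  event.waitUntil(networkFetch.catch(() => {}));

  // Reply from the cache if we can, otherwise wait for the network
  event.respondWith(
    caches.match(event.request).then(cached => cached || networkFetch)
  );
}

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.hostname.endsWith('gravatar.com')) {
    handleAvatarRequest(event);
    return;
  }
  // ...otherwise carry on with the handling we already had
});
```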
I mean, sooner or later, you're going to have to write some
code to go into the cache and look for avatars you don't
need anymore and delete them. But this helps a lot. So to see how this
affects things, for the final time I promise,
we approach the comparinator. [MUSIC PLAYING] MALE SPEAKER: Fight. JAKE ARCHIBALD: First up,
the online experience. Quick, full content,
the offline experience. Quick, full content. What about Lie-Fi? Once again, quick, full content. In fact, the
experience is the same. [CHEERS AND APPLAUSE] Thank you. The experience is the same
with every connection type. The network only matters when it
comes to fetching new content. We can ship this. So we've achieved net
resilience, right? Well, we're doing
great when it comes to sending data to the user
but not as great when it comes to the user sending data to us. I really hate this because,
for me, the user's transaction is complete. They have said,
here are some smiley faces, please send
them to people, done. That's all they have
to say about it. But no, we're requiring them to
watch it through to completion. We can do better than this. Background sync landed in
Chrome a couple of months ago. It's a Service Worker
event that you request. You're asking to do some
work when the user has a connection, which
is straight away if they already
have a connection, or some time later when they do. So say we had a
function that was called whenever the user
typed a message and hit Send. So we'll add a
message to the outbox using idb or whatever-- it's a function we'd write ourselves. But then we'd get the
service worker registration and register for a
sync event, giving it whatever name we want. That can fail, of course, if
the browser doesn't support it or the user has disabled it
or whatever, so in which case, we'll catch it and just
send the message normally where the user has to
stare at their phone. Otherwise, over in the service
worker, we get this sync event. And we can check the tag
name so we know what we're supposed to be doing here. And we use our old
friend, waitUntil, to let the browser
know how long we're going to be doing work for. So then we get messages from
the outbox from idb or whatever and send them to the server and
remove them from the outbox.
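As a sketch, with the outbox helpers and the tag name as stand-ins for whatever the app actually uses:

```js
// In the page
function send(message) {
  return addToOutbox(message) // hypothetical helper that writes to IDB
    .then(() => navigator.serviceWorker.ready)
    .then(reg => reg.sync.register('send-messages'))
    .catch(() => sendMessageNow(message)); // no sync support? send it the old way
}
```

```js
// sw.js
self.addEventListener('sync', event => {
  if (event.tag === 'send-messages') {
    event.waitUntil(
      getOutbox()                                  // hypothetical: read queued messages
        .then(messages => sendToServer(messages))  // hypothetical: POST them
        .then(() => clearOutbox())                 // hypothetical: empty the outbox
    );
  }
});
```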
The effect of this is the user can be on Lie-Fi or totally offline, but
they can use the app as if it were entirely online. When they send a
message-- so here, I'm just going straight
to airplane mode, so it's completely offline. And now I can type
some sort of message, some pictures of some cats. And as soon as I hit send,
we can add it to the flow even though they
have no connection. It does say "sending"
there in tiny letters, but it doesn't have
a lot of emphasis because the user is
free to lock their phone and go about their day. They can even close the
browser if they want. It doesn't matter. Then at some point later,
when they regain connectivity, the message will be
sent in the background. There it goes. And the user didn't even know that this happened. They didn't know that it was
sent in the background because from their
point of view, that transaction was complete. They had already said,
please send this. The time they get
to know about it is when they receive
a push notification with a reply from another user. By using Background
Sync and Push Messaging, we get out of the user's way. They don't have to stare at
the screen while stuff sends. They don't have to
check for new messages. We tell them about that. All of this massively
improves the user experience. And if you do this
sort of stuff, I think it's totally
cool to brag about it. This is something the
[? I/O ?] web app does. So it says, caching complete. This now works offline. I think this is
great, but I do hope that it goes out of fashion. I remember those
little site badges that we all used
to use that said, this site was built using CSS2. And now when you
see them, you're like, oh, CSS, well done you. [LAUGHTER] I hope one day, this
will seem as ridiculous. But before that, we do need
to build up user trust. I don't know if anyone's seen
something like this before. This is what I'm greeted
with in the bathroom on board the trains I commute to work in. First you have to press
D to close the door, and then you press L when it's
flashing to lock the door. Note that there's
Braille there as well. So even blind people
know they have to wait for the flashing light. [LAUGHTER] But the buttons aren't
proper buttons either. They're the kind of flat
touch-sensitive things there. That's horrible. I don't trust this. I don't trust this because
once it failed on me, and I was slowly revealed to the
carriage like a bad game show prize. [LAUGHTER] Similarly, users
don't trust the web to work offline because
it's failed them before, so messages like this do help. And, yes, all epiphanies I've
had about user experience happened in bathrooms. But this is why
Chrome requires there to be a service worker
before it will show the add to home screen banner. In the future, we're going
to tighten the rules there to try and detect some kind
of offline capable experience. We want everything that
ends up on the home screen to be competitive
with native apps. We want to make the web a first
class part of the operating system in the user's mind. So on that note, I
wanted to compare the launching of a native
app, Google Photos, quite a well built and
well optimized one, to Emojoy and launch
them at the same time. Oh, it's really close. Like Emojoy is
0.2 seconds slower to show content to
that, 200 milliseconds. It's almost nothing. And that's a
well-built native app. But that's also starting from
cold, the browser not in memory at all. If the user had looked at the
browser at some point recently, let's face it, the browser,
fairly popular app, so that's quite
likely this happens. This is a progressive web. [APPLAUSE] That's beating a native
app to content render by almost half a second. That is the power
of Service Worker. And the power of Offline-first. And achieving that
wasn't a matter of rebuilding the entire app. It was incremental,
improving the experience at every step for everyone. Things get faster for users
with decent connections. Things stop being frustrating
for users with Lie-Fi. And things become possible
for users that are offline. A few people today asked me
about Android Instant Apps and what that means for the web. Well, progressive apps
are possible today, one app across thousands of
devices, operating systems and browsers already beating
a pre-installed native app to render. Service Worker is in the stable
versions of Chrome, Firefox, and Opera right now. And it's a high
priority implementation for Microsoft Edge. It's under
consideration by Apple, but progressive
enhancement means you can use it today, as
we've seen people talking about on the stage earlier. And if you use Service
Worker and sites suddenly become way faster in Chrome
and Firefox than Safari, that will give Apple more reason
to implement Service Worker. As web developers, you are
in a position of power here. You get to guide the future
of the extensible web. Service Worker lets us create
great user experiences, from becoming faster network
resilient to polyfilling new network features. And I know I've gone
through a lot of stuff at lightning speed, but there
is a free Udacity course, which is fully interactive where
you take a website from online only to fully Offline-first,
covering everything I've spoken about in more detail and more. Don't worry about
remembering the URL. Just google for Udacity offline. It shows up. But with that, it's
been another pleasure. Thank you very much. [CHEERS AND APPLAUSE] [MUSIC PLAYING]
TLDR Service Workers exist now, you're going to use them, because everyone else uses them, and their web app is now way better than your web app.
Kinda long-winded, but there's an interesting point in that - since the browser is used frequently and probably already in memory, a progressive web app's startup can beat a native one.
The question is how long will Apple stall on implementing these features.
Interesting how he's using a Wii remote to control the slides.
I was thinking "wow, buzzwordy and flashy, I hope there is some substance" Oh boy, it's like it read my mind for what I wanted to do on the web. I haven't been this excited since webassembly, but this is actually released!
Having a cache to screw around with, fetching immediately from cache to display something quickly, focusing on offline ( offline first as they call it ) with some more things working if you have online.
I'm excited.
Ideally, this is a really nice technology. But I can't help but feel like there's a minefield of potential issues. Having to manually manage caches will be extremely error prone. The semantics for updating the service worker are a little weird; users could easily go very long periods without getting updated with the new code. Needing to maintain old versions of HTML and such and having to move to a completely different cache for each version to support users running old code is a bit of a mess. I'm excited to see where it goes, but I think we're going to need to see some higher level frameworks to abstract these problems away.