[TITLE MUSIC PLAYING] ADDY OSMANI: Hey, folks. My name is Addy. This is Ewa. We work on Chrome. And today we're
going to give you a whirlwind tour
of tips and tricks for making your sites load fast. So let's get started. EWA GASPEROWICZ:
The internet gets heavier and heavier every year. If we check in on the
state of the mobile web, we can see that a median mobile page weighs just about 1.5 megs,
with the majority of that being JavaScript and images. That's quite a lot. Apart from the sheer
size, though, there are other reasons why a web page
might feel sluggish and heavy. Third-party code,
network latency, CPU limitations, parser-blocking patterns. All of these contribute to the
complicated performance puzzle. We've been pretty busy over the
past year trying to figure out how to fit this rich
content of the mobile web into the smallest
package possible. In this talk, Addy
and I are going to show you what you can
do in this regard today, but also we'll take a
peek into the future and give you a highlight of
what might be possible really, really soon. ADDY OSMANI: So I'm not the
biggest fan of suitcases. The last time I
was at the airport waiting at the baggage
carousel for my bags, I looked around for my suitcase,
and all I could find was this. I, too, laughed the first
four times this went around. I was really happy to get this back. I just wish there was more of it. Now, the experience that I
had at a baggage carousel is a lot like the
experience our users have when they're waiting
for a page to load. In fact, most users
rate speed as being at the very top of the UX
hierarchy of their needs. And this isn't too surprising,
because you can't really do a whole lot until a
page is finished loading. You can't derive
value from the page. You can't admire
its aesthetics. Now, we know that
performance matters, but it can also sometimes
feel like a secret, discovering where to start optimizing. And so we've been
working on trying to make this problem a
little easier for a while. And today we have an exciting
announcement to share with you. We like to call it a Paul
Irish in your pocket. EWA GASPEROWICZ: Well,
we're happy to announce a newly expanded set
of web performance audits in Lighthouse. Lighthouse, as you know,
is part of the developer tools and allows you to run an audit of your website, and also gives you hints
on how to make it better. It's been around for a while,
right, but as some of you heard this morning, it's also been actively worked on. And today, I'm happy
to share with you some of the newest audits that
landed in Lighthouse over the past quarters. We'll try to show
them to you in action during the rest of the talk. ADDY OSMANI: Oh,
next one's yours. EWA GASPEROWICZ: As the
internet saying goes, fixing web performance is as
easy as drawing a horse, right. You just need to follow
some steps carefully. But even though there is some
truth in the picture above, the good news is you
can get pretty far by following some simple steps. In order to show you the steps,
Addy prepared something special for you. ADDY OSMANI: So Ewa
and I are really big fans of Google Doodles. Google's been doing
them since 1998, and we thought it would be fun
to create a nice little app where we can show you some
of our favorite interactive doodles from over the years. We call it the Oodle Theater. Here it is in action. [VIDEO PLAYBACK] [TREE FROGS CHIRPING] [CAR ENGINE RUNNING] [EERIE STRING MUSIC PLAYING] [SCREAMS] [MUTED TRUMPET PLAYING] [VIDEO PLAYBACK ENDS] ADDY OSMANI: So we kick it
off at the drive-in, and as we can see,
we can now start browsing through the
application looking at doodles over the years. We can click through to
one, and even start playing. [VIDEO PLAYBACK BEGINS] [BRASS FANFARE PLAYING] [VIDEO PLAYBACK ENDS] ADDY OSMANI: We're not
going to ruin the surprise. You can go and you can play
some of these in your own time. Just not during
this talk, please. So let's begin our
journey into the Wild West of web performance. And it all begins
with one tool-- EWA GASPEROWICZ: --of
course, Lighthouse. The initial
performance of our app, as you can see on this
Lighthouse report, was pretty terrible. On a 3G network, the
user needed to wait for 15 seconds for the
first meaningful paint or for the app to
get interactive. Lighthouse highlighted a
ton of issues with our site, and the overall
performance score of 23 mirrored exactly that. The page weighed in
at about 3.4 megs. We desperately needed
to cut some fat. This started our first
performance challenge-- find things that we
can easily remove without affecting the overall experience. So what is the easiest
thing to remove? Usually, nothing. And by nothing, I
mean things that do not really contribute to
the code, like whitespace or comments. Lighthouse highlights this
opportunity in the Minify CSS and Minify JavaScript audits. We were using Webpack
for our build process, so in order to get
minification, we simply used the UglifyJS plugin. Minification is a
common task, so you should be able to find
a ready-made solution for whichever build
process you happen to use.
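For reference, here's a minimal sketch of that setup, assuming webpack 3 with the standalone uglifyjs-webpack-plugin package (webpack 4 and later minify production builds by default); this is illustrative, not our exact config:

```js
// webpack.config.js -- a minimal minification sketch, assuming webpack 3
// and the standalone uglifyjs-webpack-plugin package.
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  entry: './src/index.js',
  output: { filename: 'bundle.js' },
  plugins: [
    // Strips whitespace, comments, and dead code from the output bundle.
    new UglifyJsPlugin()
  ]
};
```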
Another useful audit in that space is Enable Text Compression. There is no reason to send uncompressed files, and most CDNs support this out of the box these days. We were using Firebase Hosting to host our code, and Firebase actually enables gzip by default. So by the sheer virtue of hosting our code on a reasonable CDN, we got that for free. And while gzip is a very
popular way of compressing, other mechanisms like
Zopfli and Brotli are getting traction as well. Brotli enjoys support in most of the browsers these days, and you can use its binary to pre-compress your assets before sending them to the server.
OK. In these two ways, we made sure that our code is nice and compact and
ready to present to the user. Our next task was to avoid sending it twice if not necessary. The inefficient cache
policy audit in Lighthouse helped us notice that we could
be optimizing our caching strategies in order to
achieve exactly that. By setting a max stage
expiration header in our server, we make sure
that on the repeated visit, the user can reuse the resources
they have downloaded before. Ideally, you should aim at
caching as many resources as securely possible for
the longest possible period of time, and provide
validation tokens for efficient re-validation. All the changes we made so
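A minimal sketch of such a policy, shown here with Express purely for illustration (not how our Firebase-hosted app actually configures it):

```js
// Long-lived, immutable caching for fingerprinted assets; revalidation for HTML.
const express = require('express');
const app = express();

// Hashed filenames never change, so they can safely be cached for a year.
app.use('/static', express.static('dist/static', { maxAge: '1y', immutable: true }));

// HTML should always be revalidated so users pick up new deployments.
app.use(express.static('dist', {
  setHeaders: (res) => res.setHeader('Cache-Control', 'no-cache')
}));

app.listen(8080);
```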
All the changes we made so far were very straightforward, right? They required no
code whatsoever. It was really low-hanging
fruit with very little risk of breaking anything. So remember: always minify your code, preferably automating it with build tools; always compress your assets, by using the right CDNs or by adding optimization modules to your own servers; and use efficient cache policies to optimize repeated visits. OK, this may remove
the obvious part of the unnecessary downloads. But what about the
less obvious part-- for example, unused code? As it happens, unused code
can really surprise us. It may linger in the dark
corners of your codebase, idle and long forgotten,
and yet eating into your users' bandwidth each time the app is loaded. This can happen especially if you
work on your app for a longer period of time. Your team or your
dependencies change, and sometimes an orphan
library gets left behind. That's exactly what
happened to us. At the beginning, we were using
the Material Components library to quickly prototype our app. In time, we moved to a
more custom look and feel, and of course we forgot
entirely about that library. Fortunately, the
Code Coverage check helped us rediscover
it in our bundle. You can check your Code
Coverage stats in DevTools, both for the runtime and load
time of your application. You can see the
two big red stripes in the bottom screenshot. We had over 95% of CSS unused,
and a big bunch of JavaScript as well. Lighthouse also picked up this
issue in the unused CSS rules audit. It showed a potential savings
of over 400 kilobytes. So we corrected our code, and we
kicked out both the JavaScript and CSS part of that library. This brought our CSS
bundle down 20-fold, which is pretty good for a
tiny two-line commit, right? Of course it made our
performance score go up, and also the time to
interactive got much better. However, with changes
like this, it's not enough to check your
metrics and scores alone. Removing actual code
is never risk-free, so you should always look out
for potential regressions. Remember that our code
was 95% unused? Well, there's still that 5%, right. Apparently, one of our components
was still using the slider from that library-- the little arrows in the middle of the slide there. Because it was so small, though, we could just go in manually and incorporate those styles back into the component. So if you remove code, just make
sure you have a proper testing workflow in place
to help you guard against potential
visual regressions. So remember, Code Coverage, both
in DevTools and in Lighthouse, is your friend when it comes
to spotting and removing unused code. Do this check regularly
throughout the development of your app to keep your
code base clean and tidy, and to test your changes
thoroughly before deployment. Well, so far, so good. All of those changes made
our app a little bit lighter. It was still too slow for Addy, though, so he took it a bit further. Addy, how did it go? ADDY OSMANI: It
went so, so good. Now, some web pages
are like heavy suitcases. You have some stuff
that's really important, and then you have crap
and even more crap. We know that large resources can slow down web page loads. They can cost our users money, and they can have a big impact
on their data plans, so it's really important
to be mindful of this. Now Lighthouse
was able to detect that we had an issue with
some of our network payloads using the enormous
network payload audit. Here we saw that we had
over 3 megs worth of code that was being shipped
down, which is quite a lot, especially on mobile. At the very top of
this list, Lighthouse highlighted that we had a
JavaScript vendor bundle that was 2 megs of
uncompressed code that we were trying to ship down. This is also a problem
highlighted by Webpack. Now what we say is that
the fastest request is the one that's not made. Ideally, you should be measuring
the value of every single asset you're serving
down to your users, measuring the performance
of those assets, and making the call on whether
it's worth actually shipping down with the
initial experience, because sometimes these assets
could be deferred, or lazily loaded, or processed
during idle time. In our case, because
we're dealing with a lot of
JavaScript bundles, we were fortunate because
the JavaScript community has a rich set of JavaScript
bundle auditing tools. We started off with
Webpack bundle analyzer, which informed us that we were
including a dependency called Unicode, which is 1.6 megs
of parsed JavaScript, quite a lot.
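If you want to try this yourself, here's a rough sketch of wiring up webpack-bundle-analyzer:

```js
// webpack.config.js -- emits an interactive treemap of everything in your bundles.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...the rest of your webpack config
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static',               // write an HTML report instead of starting a server
      reportFilename: 'bundle-report.html',
      openAnalyzer: false
    })
  ]
};
```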
We then went over to our editor, and using the Import Cost plugin for Visual Studio Code,
we were able to visualize the cost of every module
we were importing. This allowed me
to discover which component was
including code that was referencing this module. We then switched over to
another tool, BundlePhobia. This is a tool which allows you
to enter in the name of any NPM package and actually see
what its minified and gzipped size is estimated to be. We found a nice
alternative for the slug module we were using that
only weighed 2.2 kilobytes, and so we switched that up. This had a big impact
on our performance. Between this change and
discovering other opportunities to trim down our
JavaScript bundle size, we saved 2.1 megabytes of code. We saw 65% improvements
overall once you factor in the gzipped and
minified size of these bundles, and we just found
that was really worth doing as a process. So in general, try to
eliminate unnecessary downloads in your sites and apps. In the case of the Oodle
Theater, an app that already has games and a lot of
interactive multimedia content, it's important for us to
keep the application shell as lightweight as possible. We found that
inventorying our assets and measuring their
performance impact made a really big difference. So just make sure that you're
auditing your assets fairly regularly. Now, although large
network payloads can have a big
impact on our app, there's another thing that
can have a really big impact, and that is JavaScript. We all love JavaScript,
but as we saw earlier, the median page includes a
little bit too much of it. JavaScript is your
most expensive asset. On mobile, if
you're sending down large bundles of
JavaScript, it can delay how soon your users are
able to interact with the user interface components. That means they can be
tapping on UI without anything meaningful actually happening. So it's important
for us to understand why JavaScript costs so much. This is how a browser
processes JavaScript. We first of all have to
download that script. We have a JavaScript
engine which then needs to parse that
code, needs to compile it and execute it. Now, these phases
are something that don't take a whole lot of
time on a high-end device like a desktop machine,
or a laptop, maybe even a high-end phone. But on a median mobile
phone, this process can take anywhere between
five and 10 times longer. This is what delays
interactivity. So it's important for us
to try trimming this down. Now, to help you discover
these issues with your app, we introduced a new
JavaScript boot-up time audit to Lighthouse. And in the case
of the Oodle app, it told us that we had 1.8
seconds of time which was being spent in JavaScript boot-up. The way that this was happening
was that we were statically importing all of our routes and components into one monolithic
JavaScript bundle. One technique for working around
this is using code-splitting. So code-splitting is this notion
of instead of giving your users a whole pizza's worth
of JavaScript, what if you only gave them one
slice at a time as they needed it? Code-splitting can be applied
at a route level or a component level. It works great with React and
React Loadable, Vue.js, Angular, Polymer, Preact, and
multiple other libraries. So we incorporated
code-splitting into our application. We switched over from static
imports to dynamic imports, allowing us to
asynchronously lazy load code in as we needed it.
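A sketch of the pattern (the route name and file here are illustrative, not our exact code):

```js
// Route-level code-splitting: webpack turns each import() into its own chunk
// that is fetched only when the route is first visited.
const routes = {
  '/doodles': () => import(/* webpackChunkName: "doodles" */ './pages/doodles.js')
};

async function navigate(path) {
  const page = await routes[path]();            // lazily loads the chunk on demand
  page.render(document.querySelector('#app'));  // assumes each page module exports render()
}
```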
And the impact this had was both shrinking down the size of our bundles, but
also decreasing our JavaScript boot-up time. Took it down to 0.78 seconds,
making the app 56% faster. So in general, if
you're building a JavaScript-heavy
experience, be sure to only send code to
the user that they need. Take advantage of concepts
like code-splitting, explore ideas like
tree-shaking, and check out this repo we have of a few ideas
around how you can trim down your library size if you
happen to be using Webpack. Now as much as JavaScript
can be an issue, we know that un-optimized images
can also be a problem, too. So over to Ewa to talk
about optimizing images. EWA GASPEROWICZ: Images. The internet loves images, and so do we. In the Oodle Theater, we're using
them in the background, in the foreground, and
pretty much everywhere. Unfortunately, Lighthouse was
much less enthusiastic about it than we were. As a matter of fact,
we failed on all three image-related audits. We forgot to
optimize our images, we were not sizing
them correctly, and also, we could get some
gain from using other image formats as well. So we started
optimizing our images. For one-off optimization, you can use visual tools like ImageOptim or XnConvert. A more automated approach is to add an image optimization step to your build process, with libraries like, for example, imagemin.
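A build-script sketch, assuming the imagemin and imagemin-mozjpeg packages (the exact API varies a little between versions, and the paths are illustrative):

```js
// Compress every JPEG in src/images as part of the build.
const imagemin = require('imagemin');
const imageminMozjpeg = require('imagemin-mozjpeg');

imagemin(['src/images/*.jpg'], {
  destination: 'dist/images',
  plugins: [imageminMozjpeg({ quality: 75 })]
}).then((files) => console.log(`Optimized ${files.length} images`));
```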
This way, you make sure that images added in the future also get optimized automatically. Some CDNs-- for example, Akamai,
or third-party solutions like Cloudinary or Fastly-- offer you comprehensive
image optimization solutions. So to save yourself
some time and headache, you can simply host your
images on those services. If you don't want to do that
because of the cost or latency issues, projects like
Thumbor or Imageflow offer self-hosted alternatives. Here you can see a single
image optimization outcome. Our background PNG
was flagged in Webpack as big, and rightly so. After sizing it correctly
to the viewport, and running it
through ImageOptim, we went down to 100 kilobytes,
which is acceptable. Repeating this for
multiple images on our site allowed us to bring down
the overall page weight significantly. That was pretty easy
for static images, but what about the
animated content? As much as we all love
GIFs, especially the ones with the cats in them, they
can get really expensive. Surprisingly, the
GIF format was never intended as an animation
platform in the first place. Therefore, switching to a more suitable video format offers you large savings in terms of file size. In the Oodle Theater, we
were using a GIF you saw earlier as an intro
sequence on the home page. According to Lighthouse,
we could be saving over seven megabytes by switching to
a more efficient video format. Our clip weighed
about 7.3 megs-- way too much for any
reasonable website. So instead, we turned
it into a video element with two source files-- an MP4 and WebM for
wider browser support.
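The markup looked roughly like this (file names illustrative); browsers pick the first source they can play, so the smaller WebM goes first:

```html
<!-- Replaces the heavyweight GIF; muted + playsinline allow autoplay on mobile. -->
<video autoplay muted loop playsinline>
  <source src="/assets/intro.webm" type="video/webm">
  <source src="/assets/intro.mp4" type="video/mp4">
</video>
```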
Here you can see how we used the FFmpeg tool to convert our animated GIF into an MP4 file. The WebM format offers
you even larger savings, and, for example, the ImageOptim API can do such a conversion for you. We managed to save over
80% of our overall weight thanks to this conversion. This brought us down
to around 1 megabyte. Still, 1 megabyte
is a large resource to push down the
wire, especially for a user on restricted bandwidth. Luckily, we could use
the effectiveType property of the Network Information API to detect that they're on slow bandwidth, and give them a much, much smaller JPEG instead. This interface uses the effective round-trip time and downlink values to estimate the network type the user is using. It simply returns a string: slow-2g, 2g, 3g, or 4g. So depending on this value,
if the user is on anything below 4G, we could replace the video element with the image.
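A sketch of that check, assuming a video element with an id of intro and an illustrative JPEG path:

```js
// Swap the intro video for a small still image on constrained connections.
// navigator.connection is not available in every browser, so feature-detect it.
const connection = navigator.connection;
const effectiveType = connection ? connection.effectiveType : '4g';

if (effectiveType !== '4g') {
  const img = document.createElement('img');
  img.src = '/assets/intro-small.jpg'; // illustrative path
  img.alt = 'Oodle Theater intro';
  document.getElementById('intro').replaceWith(img);
}
```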
It does remove a little bit from the experience,
on a slow connection as well. Last but not least, there
is a very common problem of off-screen images. Carousels, sliders, or really
long pages often load images even though the user cannot see
them on the page straightaway. Lighthouse will
flag this behavior in the off-screen images
audit, and you can also see it for yourself in the
network panel of DevTools. If you see a lot of images
incoming, while only a few are visible on
the page, it means that maybe you could consider
lazy loading them instead. Lazy loading is not
yet supported natively in the browser, so we
have to use JavaScript to add this capability. Here you can see how we use
LazySizes to add lazy loading behavior to our Oodle covers.
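The markup pattern is simple: the real URL goes into data-src, and the lazyload class hands the image over to the library (the image path here is illustrative):

```html
<!-- LazySizes swaps data-src into src as the image approaches the viewport. -->
<img data-src="/assets/doodle-cover.jpg" class="lazyload" alt="Doodle cover">
<script src="lazysizes.min.js" async></script>
```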
LazySizes is smart because it not only tracks the visibility
changes of the element, but it also proactively
pre-fetches elements that are near the view for
the optimized user experience. It also offers an
optional integration of the
IntersectionObserver, which gives you very efficient
visibility lookups. As you can see,
after this change, our images are being fetched on
demand instead of straightaway. OK. That's a lot of good
stuff about images. So just remember,
always optimize images before pushing them to the user. Use responsive image techniques to serve the right size of image. Use lighter formats
wherever possible, especially for animated content. And finally, lazy
load whatever is not immediately visible to the user. If you want to dig
deeper into that topic, here's a present for you-- a very handy and comprehensive
guide written by Addy. So you can access it
at the images.guide URL. ADDY OSMANI: Cool. So next let's talk
about resources that are discovered and
delivered late by the browser. Now, not every byte that's
shipped down the wire to the browser has the
same degree of importance, and the browser knows this. A lot of browsers
have heuristics to decide what they
should be fetching first. So sometimes, they'll fetch
CSS before images or scripts. Now something that
could be useful is, as authors of
the page, informing the browser what's actually
really important to us. Thankfully, over the
last couple of years, browser vendors have been
adding a number of features to help us with this, so things
like link rel-preconnect, or preload, or prefetch. These capabilities, which have landed across the web platform, help the browser fetch the right thing at the right time, and they can be a little bit more efficient than some of the custom loading-logic approaches that are done using script instead. So let's see how
Lighthouse actually guides us towards using some
of these features effectively. So the first thing
Lighthouse tells us to do is avoid multiple costly
round trips to any origin. Now, in the case
of the Oodle app, we're actually heavily
using Google Fonts. Whenever you drop in a
Google Fonts style sheet into your page, it's going to
connect to up to two subdomains. What Lighthouse is telling us is
that if we were able to warm up that connection, we could save
anywhere up to 300 milliseconds in our initial connection time. Now taking advantage
of link rel-preconnect, we can effectively mask
that connection latency. Especially with something
like Google Fonts where our font face CSS is
hosted on googleapis.com and our font resources
are hosted on gstatic, this can have a
really big impact. So we applied this optimization
and we shaved off a few hundred milliseconds.
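For reference, the markup for warming up those two origins looks like this:

```html
<!-- Warm up DNS, TCP, and TLS for both font origins while the HTML is still parsing.
     crossorigin is needed on gstatic because fonts are fetched in anonymous CORS mode. -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```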
The next thing Lighthouse suggests is that we preload key requests. Now link rel-preload
is really powerful. It informs the browser
that a resource is needed as part of
the current navigation, and it tries to get
the browser fetching it as soon as possible. Now here, Lighthouse
is telling us that we should be going and
preloading our key webfont resources because we're
loading in two web fonts. Preloading a web font looks
like this: specifying rel equals preload, you pass in "as" with a value of font, and then you specify the type of font you're trying to load in, such as woff2.
The impact this can have on your page is quite stark. So normally, without
using link rel-preload, if web fonts happen to be
critical to your page, what the browser has to
do is it first of all has to fetch your HTML, it
has to parse your CSS, and somewhere much
later down the line, it'll finally go and
fetch your web fonts. Using link rel-preload, as
soon as the browser has parsed your HTML, it can actually
start fetching those web fonts much earlier on. In the case of our app, this
was able to shave a second off the time it took
for us to render text using our web fonts. Now it's not quite
that straightforward if you're going to try
preloading fonts using Google Fonts. There is one gotcha. You see, the Google Font
URLs that we specify in our font faces in our style
sheets happen to be something that the fonts team
update fairly regularly. These URLs can expire or get
updated on a regular frequency, and so what we
suggest you do if you want complete control of
your font loading experience is to self-host your web fonts. This can be great
because it gives you access to things like
link rel-preload. In our case, we found the
tool Google Web Fonts Helper really useful in just helping us
offline some of those web fonts and set them up locally,
so check that tool out. Now whether you're
using web fonts as part of your
critical resources or it happens to
be JavaScript, try to help the browser deliver
your critical resources as soon as possible. If you're connecting up
to multiple origins that are critical, consider
using link rel-preconnect. If you have an asset for
the current page that's really important,
use link rel-preload. We also have a good preload
plugin available for Webpack. And if you have an asset
for future navigation, consider using prefetch. This is something that has
good support now in the latest version of Webpack,
as well as preload. Now we've got something special
to share with you today. In addition to features
like resource hints as well as preload,
we've been working on a brand-new experimental
browser feature we're calling Priority Hints. This is a new
feature that allows you to hint to the browser
how important a resource is. It exposes a new attribute,
importance, with the values low, high, or auto. This allows us to convey a lower priority for less important resources, such as non-critical styles, images, or fetch API calls, to reduce contention.
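As a sketch of the proposed syntax (this is experimental, so the attribute may change; paths are illustrative):

```html
<!-- Hint the browser to fetch the visible hero image first
     and deprioritize its offscreen carousel neighbors. -->
<img src="/carousel/center.jpg" importance="high">
<img src="/carousel/edge.jpg" importance="low">
```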
We can also boost the priority of more important things, like our hero images. In the case of our
little app, this actually led to one practical place
where we could optimize. So before we added lazy
loading to our images, we had this image carousel with all of our doodles, and the browser was fetching all the images at the very start of the carousel with a high priority early on. Unfortunately, it was
the images in the middle of the carousel that were
most important to the user. So what we did was we set the
importance of those background images to very low, the
foreground ones to very high, and this had a
two-second impact over slow 3G in how quickly we were able to
fetch and render those images. So a nice, positive experience. We're hoping to
bring this feature to Canary in a few weeks,
so keep an eye out for that. The next thing I want to
talk about is typography. So typography is
fundamental to good design. And if you're using
web fonts, you ideally don't want to block
rendering of your text, and you definitely don't
want to show invisible text. We highlight this
in Lighthouse now with the avoid invisible
text while web fonts are loading audit. If you load your web fonts
using an @font-face block, you're letting
the browser decide what to do if it takes a
long time for that web font to fetch. Some browsers will wait anywhere
up to three seconds for this before falling back
to a system font, and they'll eventually
swap it out to the font once it's downloaded. We're trying to avoid
this invisible text. So in this case,
we wouldn't have been able to see this
week's classic doodles if the web font
had taken too long. Thankfully with a new
feature called font-display, you actually get a lot more
control over this process. Font-display helps you
decide how web fonts will render or fall back based on how
long it takes for them to swap. Now in this case, we're
using font-display swap. Swap gives the font face
a zero-second block period and an infinite swap period.
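In CSS, that's a one-line addition to the @font-face block (the family name and path here are illustrative):

```css
/* font-display: swap = zero-second block period, infinite swap period. */
@font-face {
  font-family: 'Oodle Sans';                       /* illustrative family name */
  src: url('/fonts/oodle.woff2') format('woff2');
  font-display: swap;  /* draw fallback text immediately, swap in the web font later */
}
```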
This means the browser is going to draw your text pretty
the fallback font if the font takes
a while to load, but it's going to swap it once
the font face is available. In the case of our app,
why this was so great is that it allowed us to display
some meaningful text very early on, and transition over to the
web font once it was ready. So in general, if you happen
to be using web fonts, as a large percentage
of the web does, have a good web font
loading strategy in place. There are a lot of
web platform features you can use to optimize your
loading experience for fonts. And also check out Zach
Leatherman's web font recipes repo, because it's really great. Next up is Ewa to talk about
render-blocking scripts. EWA GASPEROWICZ: So
displaying visible text as early as possible
is really important, but we can go farther than that. There are other parts
of our application that we could push earlier
in the download chain to provide at least some basic
user experience a little bit earlier. Here on the Lighthouse
timeline strip, you can see that during the
first few seconds when other resources are loading, the user
cannot really see any content. Downloading and processing
external style sheets is blocking our
rendering process from making any progress. So what can we do about it? Well, we can try to optimize
our critical rendering path by delivering some of
the styles a bit earlier. If we extract the
styles that are responsible for
this initial render, and inline them in
our HTML, the browser is able to render
them straight away without waiting for the
external style sheets to arrive. In our case we
used an NPM module called Critical to inline our critical-path content in index.html during our build step.
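A build-step sketch, assuming the critical package (option names vary a little between versions; dimensions and paths are illustrative):

```js
// Extract above-the-fold CSS for a mobile viewport and inline it into index.html.
const critical = require('critical');

critical.generate({
  base: 'dist/',
  src: 'index.html',
  target: 'index.html',  // overwrite the page with critical CSS inlined in <head>
  inline: true,
  width: 375,            // viewport used to decide what counts as above the fold
  height: 667
});
```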
While this module did most of the heavy lifting for us,
to get this working smoothly across different routes. The truth is, if
you're not careful, or your site structure
is really complex, it might be really difficult to
introduce this type of pattern if you did not plan
for it in your app architecture from the beginning. This is why it's so important to
take performance into consideration early on. If you don't design for
performance from the start, there is a high chance you'll
run into issues doing it later. In the end, our risk paid off. We managed to make it
work, and the app started delivering
content much earlier, improving our first meaningful
paint time significantly. So to sum up, to unblock
the rendering process, consider inlining
critical styles in the head of your
document, and preloading or loading asynchronously
the rest later on. For non-critical scripts, consider marking them with the defer attribute, or lazy loading them later during the app lifecycle. OK. So that's the whole story
of how we drew the horse and tried to put
it in a suitcase. So let's take a
look at the outcome. This is how our app loaded
on a median mobile device on a 3G network, before
and after the optimization. That's a pretty nice progress. All of this progress was
fueled by us continuously checking and following
the Lighthouse report. All of the hints
you've seen today are linked in Lighthouse
tool so that you can check them in the context
of your own application. If you would like to check out
how we technically implemented all of the changes, feel free
to take a look at our repo, especially at the PRs
that landed there. The performance score
went up from 23 to 91. That's pretty nice, right? However, our goal was never
to make Lighthouse happy. We wanted to make
the user happy. And high-level metrics
included in the report like time-to-interactive
or perceptual speed index are a good proxy for
improved user experience. Also the snapshots
timeline gives you nice visual feedback about
how much shorter the waiting time for our app became. This is the full story
of our little Oodle Theater. Now let's take a look at
some much bigger businesses. ADDY OSMANI: So Ewa
and I had the benefit of being able to think
about performance very early on, but how does this
apply at scale to much larger businesses? Well, let's take
a look at Nikkei. Nikkei are Japan's
largest media company. They have a site that's
got 450 million users that are accessing it, and
they spent a lot of time optimizing their old mobile site
and turning it into a new PWA. The impact of performance
optimizations was huge. They were able to get 14
seconds faster on interactivity. They saw a 230% increase
in their organic traffic, 58% increase in
conversion rates, and their daily active users
and page views also went up. But what did they actually do
to optimize their performance? Well, if we take a look
at the full list of things that Nikkei did, they'll
look a little bit familiar. That's because many of these are
things that we covered today. Nikkei did things like optimize
their JavaScript bundles. They were able to
shrink them down by 43%. One big change they
made was they actually used Webpack to
optimize the performance of their third-party
scripts in addition to their first-party ones. They were able to
use link rel-prefetch on their daily edition pages,
improving next-page loading performance by 75%. In addition to this, they
also took advantage of the tip that Ewa was just
walking through-- critical path CSS optimizations. This is that
optimization where you're sending down 14 kilobytes of content in your first round trip. If you're able to squeeze enough of your critical styles in there, you can actually improve first meaningful paint by quite a lot. So here they were
able to shave off a second on their
first meaningful paint by taking advantage
of this optimization. And finally, on
Nikkei, they took advantage of the PRPL pattern. Now, PRPL is a pattern that was
first discovered by the Polymer team a few years ago, and
it stands for Push, Render, Pre-cache, and Lazy-load. So what Nikkei are
doing is that they're pushing their critical
resources using link rel-preload and server push. They're rendering their main
article content quite quickly. They're pre-caching their top
stories using Service Worker. So this gives them
offline access to read articles as well. And they're lazy loading code. So they're lazy loading
code, but they're also using skeleton screens
to improve the perceived performance of these pages. So we talked about how
performance can help businesses to give users a
better experience, but for users, we have one more thing that we think could help in the future. EWA GASPEROWICZ: Well, as
you've heard during the keynote, we believe that machine
learning represents an exciting opportunity
for the future in many areas. So what if we could
take these two worlds of machine learning
and web performance and blend them together? Maybe it could lead us to some
really, really interesting solutions. Today we want to tell you about
our experience with machine learning, and explain why we
think it has a large potential. Here's an idea that we hope
will spark more experimentation in the future-- that real data can really
guide the user experiences we're creating. Today we make a lot
of arbitrary decisions about what the user
might want or need, and therefore, what is being
prefetched, or preloaded, or pre-cached. If we get it right, we
are able to prioritize a small amount of resources,
but it's really hard to scale it to
the whole website. At the same time,
we have a wealth of data about typical user
behavior readily available. So Addy, how can
we use that data? ADDY OSMANI: So we actually
have data available to better inform optimization today. Using the Google
Analytics reporting API, we can take a look
at the next top page and exit percentages
for any URL on our site. In fact, we have a little tool
for this that anybody can check out. Here it is for
developers.google.com/web. And as we can see here,
a lot of the users that land on the Chrome
DevTools documentation actually end up going
over to Lighthouse. So we could potentially
prefetch that page. Folks that land on the PWA docs usually check out the PWA checklist. And we could use this data
to improve our page load performance. This gives us the notion
of data-driven loading for improving the
performance of websites. But there is one
piece missing here. Having a good probability
model is important because we don't want to waste
our users' data by aggressively over-prefetching content. We can take advantage of that
Google Analytics data and use machine learning and
models like Markov chains or neural networks in order
to implement such probability models. This is a lot less subjective
and error-prone than manually deciding what we should be
prefetching or preloading for our sites. We can then wire this all up
using link rel-prefetch inside of our sites so that as the
user browses through the site, they're able to actually
fetch and cache things that they need ahead of
time, improving our page load performance.
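A hedged sketch of the idea: given per-URL navigation probabilities derived from your Analytics data (the URLs, numbers, and threshold here are made up), inject prefetch hints only for high-confidence pages:

```js
// Illustrative probabilities -- in practice these come from your trained model.
const nextPageProbabilities = {
  '/lighthouse': 0.42,
  '/pwa-checklist': 0.31,
  '/about': 0.04
};

for (const [url, probability] of Object.entries(nextPageProbabilities)) {
  if (probability >= 0.25) {                 // skip low-confidence pages to save data
    const link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = url;
    document.head.appendChild(link);
  }
}
```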
Now we can actually go further than this. Earlier, we talked about
code-splitting and lazy loading. But in a single-page
app, we're often dealing with routes and
chunks in a bundler. So instead of just
prefetching pages, we could actually
go more granular. What if we could prefetch
individual resources for a page or number of pages? Well, across all of
these ideas, we've been focused on
trying to make some of these a little bit more
low-friction for web developers to adopt. And so today we're happy to
announce a new initiative we're calling Guess.js. Guess.js is a project
focused on data-driven user experiences for the web. We hope that it's going
to inspire exploration of using data to improve
performance and go beyond that. It's all open source, and
available on GitHub today. This was built in collaboration
with the open source community by [? Miko ?] [? Geshav, ?]
Kyle Matthews from Gatsby, Katie [? Empenius, ?]
and a number of others. Let's take a look at what
Guess.js provides us out of the box. The first thing it provides us
is a Google Analytics module for analyzing our
user navigations. It has a parser for
popular frameworks, allowing us to map those URLs
back to your framework's router. We then have a comprehensive
Webpack plug-in that's able to do machine learning training on some of that data and determine all of our probabilities. It's able to bundle the JavaScript for your routes and chunks, with the potential for future clustering of those chunks, and it then wires it all together for you so that we can prefetch those chunks of JavaScript as the user navigates through the site. We've also got
experimental support for applying these
concepts to static sites.
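Wiring it up is meant to be a few lines; a sketch with a placeholder Analytics view ID:

```js
// webpack.config.js -- GuessPlugin trains on your Google Analytics navigation
// data and drives prefetching of the chunks users are likely to need next.
const { GuessPlugin } = require('guess-webpack');

module.exports = {
  // ...the rest of your webpack config
  plugins: [
    new GuessPlugin({ GA: 'GA_VIEW_ID' }) // placeholder Analytics view ID
  ]
};
```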
Now, that's enough talk. What about showing you how this works in practice? So here we have a demo
of Guess.js in action. What we're first going to do is
we're going to load up our app. We're in DevTools. We're in low-end mobile
emulation over slow 3G. This app's rendering
very quickly, but what we can see
in the network panel is a number of routes
that have already started to be prefetched because
we have high confidence they're going to be used. We can toggle on a
visualization of this. So in pink, we
have pages we have high confidence the
user is going to want, in green, we have
low confidence, and in yellow, we
have mild confidence. We can actually
start to navigate through this application. So let's say we go to cheese. We can visualize this again. We can see that cheesecake is
a page the users will often navigate through. As you can see, it
loaded instantly because it's already
in the user's cache by the time they go to it. Contextually we're able to
display to you in this demo all the visualizations
of confidence levels we have for different pages as
we navigate through the site. Even this last
page, the custard, loaded really, really quickly
using these techniques. Now prefetching is a
great idea in practice, but we wanted to be mindful
of the users' data plans. And this is where we
use navigator.connection.effectiveType to
make sure that we're only prefetching things when we think
that your connection can handle it. So this is our demo of
Guess.js using Gatsby. By the way, this
also happens to be a PWA with a great
performance score, so thank you to both Minko
and Kyle for helping with this. Check out Guess.js, let
us know what you think. So today we talked about
quite a few things. But at the end of
the day, performance is about inclusivity. It's about people. It's about all of us. We've all experienced
slow page loads on the go, but we've gotten an
opportunity today to consider trying to
give our users more delightful experiences that
load really, really quickly. So we hope that you took
something away from this talk. Remember that improving
performance is a journey. Lots of small changes can lead
to really big gains. So check out some of the
things we talked about today. Talk to us in the web sandbox
if you've got any questions. That's it from us. Thank you. [APPLAUSE] [TITLE MUSIC PLAYING]