[GOOGLE LOGO MUSIC] KATIE HEMPENIUS: Hi, everyone. My name's Katie Hempenius. ADDY OSMANI: And
I'm Addy Osmani. KATIE HEMPENIUS: We
work on the Chrome team, trying to keep the web fast. Today we're going to talk about
a few web performance tips and tricks from real
production sites. ADDY OSMANI: But, first,
let's talk about buttons. Now, you've probably had to
cross the street at some point, and may have had to press a
pedestrian beg button before. Now, there are three types
of people in the world. There are people who
press the button once, there are people who
don't press the button, and then people who press it
100 times because, of course, that makes it go faster. KATIE HEMPENIUS: The frequency
of pushing these buttons increases in proportion to the
user's level of frustration. Want to know a secret? ADDY OSMANI: Sure. KATIE HEMPENIUS: At least in
New York, most of these buttons aren't even hooked up. ADDY OSMANI: So your new
goal is to have a better time to interactive than this. Now, this experience of
feeling frustrated with buttons just not working is
actually something that applies to the web as well. According to a US study that
was done by Akamai in 2018, users expect experiences to be
interactive at about 1.3 times the point when they look
visually ready. And if they're not, people
end up rage clicking. KATIE HEMPENIUS: Right. It's important for sites to be
visually ready and interactive. It's an area where we still
have a lot of work to do. Here we can see page
weight percentiles on the web, both overall
and by resource type. If one of these categories is
particularly high for a site, it typically indicates that
there's room for optimization. ADDY OSMANI: And, in
case you were wondering what this looks
like visually, it looks a little bit like this. You're sending just way too many
resources down to the browser. KATIE HEMPENIUS:
Delightful user experiences can be found across the world. So today we're
going to deep dive into performance learnings from
some of the world's largest brands. Let's start by talking about
how sites approach performance. This probably looks familiar. For many sites,
maintaining performance is just as difficult,
if not more difficult, than getting fast
in the first place. In fact, an internal
study done by Google found that 40% of large
brands regress on performance after six months. One of the best ways to
prevent this from happening is through performance budgets. Performance budgets
set standards for the performance
of your site. Just like how you might
commit to delivering a certain level of
uptime to your users, you could commit to delivering
a certain level of performance. There are a couple different
ways that performance budgets can be defined. They can be based on
time, for example, having less than a two-second
time to interactive on 4G, they can be based
on page resources, for example, having less than
150 kilobytes of JavaScript on a page, or they can be
based on computed metrics, such as Lighthouse scores,
for example, having a budget of a 90 or greater
Lighthouse performance score. While there are many ways
to set a performance budget, the motivation and benefits
of doing so remain the same. When we talk to companies
who use performance budgets, we hear the same
thing over and over. They use performance
budgets because it makes it easy to identify
and fix performance issues before they ship. Just as tests catch code
issues, performance budgets can catch performance issues. Walmart Grocery does
this by running a custom job that checks the size of the
builds corresponding to all PRs. If a PR changes the size of
a key bundle by more than 1%, the PR automatically
fails and the issue is escalated to a
performance engineer. Twitter does this by running a
custom build tracker against all PRs. This build tracker
comments on the PR with a detailed breakdown
of how that PR will affect the various parts of the app. Engineers then use
this information to determine whether a
PR should be approved. In addition, they are working on
incorporating this information into automatic checks that
could potentially fail a PR. Both Walmart and Twitter
use custom infrastructure that they built themselves to
implement performance budgets. We realize that not everybody
has the resources and time to devote to doing that. So today we're really excited
to announce LightWallet. LightWallet adds support
for performance budgets to Lighthouse. It is available
today in the command line version of Lighthouse. [APPLAUSE] The first and only step
required to set up LightWallet is to add a budget.json file. In this file, you'll define
the budgets for your site.
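As a sketch, a minimal budget.json might look like this (the resource types and numbers here are illustrative, not a recommendation):

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 500 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

The size budgets are expressed in kilobytes. Once that's set up, run the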
newest version of Lighthouse from the command line, and make
sure to use the --budget-path flag to indicate the path
to your budget file. If you've done this
correctly, you'll now see a Budgets section
within the Lighthouse report. This section will
give you a breakdown of the resources on your
page, and, if applicable, the amount that your
budgets were exceeded by. LightWallet was officially
released yesterday, but some companies have already
been using it in production. Jabong is an online
retailer based in India who recently went
through a refactor that dropped the size of their app by 80%. They didn't want to lose
these performance wins, so they decided to put
performance budgets into place. Up on the screen, you can see
the exact budget.json file that Jabong is using. Jabong's budgeting is
based on resource sizes. But, in addition to
that, LightWallet also supports a resource
count-based budgets. Jabong used the current
size of their app as the basis for determining
what their budgets would be. This worked well for them,
because their app was already in a good place. But what if your app
isn't in a good place? How should you set your budgets? Well, one way to
approach this problem would be to look at
HTTP Archive data to see what breakdown
of resources correspond with your
performance goals. But speaking from
personal experience, that's a lot of
SQL code to write. So to save you the effort,
we're making that information directly available today in what
we're calling the Performance Budget Calculator. Simply put, the Performance
Budget Calculator allows you to forecast
time to interactive based on the breakdown of
resources on your page. In addition, it can also
generate a budget.json file for you. For example, a site with
100 kilobytes of JavaScript and 300 kilobytes
of other resources typically has a four-second
time to interactive. And for every additional
100 kilobytes of JavaScript, that time to interactive
increases by one second. No two sites are alike. So in addition to
providing an estimate, the calculator also provides
a time to interactive range. This range represents the
25th to 75th percentile TTI for similar sites. ADDY OSMANI: So
one of the things that can end up impacting
your budgets are images. So let's talk about images,
starting off with lazy loading. Now, we currently send down
a lot of images to our pages, and these aren't the
best for limited data plans or particularly
slow network connections. At the 90th percentile,
HTTP Archive says that we're shipping
almost 5 megabytes' worth of images down on mobile and desktop. And that's perhaps not the best. Now, lazy loading is a
strategy of loading resources as they're needed, and
this applies really well to things like
off-screen images. There's a really big
opportunity here. Once again, looking
at HTTP Archive, we can see that there's
actually an opportunity. At the 90th percentile, folks are currently
shipping down anywhere up to three megabytes of images
that could be lazy-loaded, and at the median,
416 kilobytes. Now, luckily, there are
plenty of JavaScript libraries available for adding
lazy-loading to your pages today-- things like
lazySizes or React Lazy Load. Now the way that
these usually work is that you specify a data-src attribute instead of a src, as well as a class, and then the library will upgrade the data-src to a src as soon as the image comes into view.
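With lazySizes, for example, the markup looks something like this (the paths are hypothetical):

```html
<!-- lazySizes upgrades data-src to src when the image nears the viewport -->
<script src="/js/lazysizes.min.js" async></script>
<img data-src="/images/product.jpg" class="lazyload" alt="Product photo">
```

Now you can build on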
this with patterns like optimizing
perceived performance and minimizing reflow just
to let your users know that something's happening as
these images are being fetched. Now we're going to
walk through some case studies of people who've
been able to use lazy-loading effectively. So chrome.com is our
browser consumer site, and recently, we've been
very focused on optimizing its performance. We'll cover some of those
techniques in more depth soon, but these resulted in a 20%
improvement in page load times on mobile, and a 26%
improvement on desktop. Lazy-loading was one of the
techniques the team used to get to this place. They used an SVG placeholder
with image dimensions specified to avoid reflow. They're using
Intersection Observer to tell when images are
in or near the viewport, and a small, custom JavaScript
lazy-loading implementation.
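A minimal sketch of that kind of Intersection Observer approach (not chrome.com's actual code) might look like this:

```js
// Observe images that declare a data-src; swap it in when they near the viewport.
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // upgrade data-src to src
      observer.unobserve(img);   // each image only needs loading once
    }
  });
}, { rootMargin: '200px' });     // start fetching slightly before it's visible

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
```

The win here was 46% fewer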
image bytes on initial page load, which was a nice win. We can also look at
more advanced uses of image lazy-loading. So here's Shopee. Shopee are a large e-commerce
player in Southeast Asia. Recently, they adopted
image lazy-loading and were able to save one megabyte of images on initial load. Now, the way the Shopee implementation
works is that they're displaying a placeholder
by default here, and, when the image is
inside the viewport, once again using
Intersection Observer, they're able to trigger a
network call for the image to download it in
the background. Once the image is
either decoded, if a browser supports the image
decode API, or downloaded if it doesn't, the
image tag is rendered. And they're able to
do things like have a nice fade-in animation when
that image appears which, overall, looks quite pleasant. We can also take
a look at Netflix. So, as Netflix's
catalog of films grows, it can become
challenging for them to present their members
with enough information to decide what to watch. So they had this goal of
creating a rich enjoyable video preview experience so
members could have a deeper idea of what was on offer. Now, as part of
this, Netflix wanted to optimize their homepage to
reduce CPU load and network traffic to keep
the UX intuitive. The technical goal was to
enable fast vertical scrolling through 30-plus rows of titles. The old version
of their homepage would render all of the tiles
at the highest priority, and that would include data
fetching from the server, creating all of the DOM,
fetching all their images. And they wanted the new
version to load much faster, minimize memory overhead,
and enable smoother playback. So here's where they ended up. When the page now
loads, they first render billboard images and the very top three rows on the server. Once they're on the
client, they make a call for the rest of
the page, and then render the rest of the rows and
then load all the images in. So they're effectively
simply rendering the first three rows
of DOM and lazy-loading the rest as needed. The impact of this was
decreased load time for members who don't
scroll quite as far. And this is effectively a
summary where they ended up. Overall, faster startup
times for video previews and fullscreen playback. So, before, there was
a CPU load required to generate all of their DOM
nodes and get images to load. Now they don't saturate quite
as much member bandwidth, and they pull in four times
fewer images on initial load. So their video previews
now have faster load times, they've got less bandwidth
consumption and lower memory overall. So, from our tests,
image lazy-loading has helped many brands
shave an average of 76% off of their image bytes
on initial load as a result of using this optimization. These include the likes
of Spotify and Target. So it looks like there
could be something here we could bring
into the platform. So, today, we're
happy to announce that native image
lazy-loading is coming to Chrome this summer. Now, the idea here is that
with just one line of code using the brand new
loading attribute, you'll be able to add
lazy-loading to your pages. So this is a big deal--
very excited about it. This will hopefully
work with three values: the lazy value; eager, if an image is not going to be lazy-loaded; and auto, if you want to defer
it to the browser. [APPLAUSE] Thank you. So we're also happy to announce
that this capability is also coming to iframes. So the exact same attribute--
the loading attribute-- is going to be possible
to use on iframes, and I think that this introduces
a huge opportunity for us to optimize how we address
loading third party content. Now, here is an example
of the brand new loading attribute working in practice.
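In markup, it's just the one attribute (the URLs here are hypothetical):

```html
<!-- In-viewport images can stay eager; below-the-fold images opt into lazy-loading -->
<img src="hero.jpg" loading="eager" alt="Hero">
<img src="gallery-1.jpg" loading="lazy" alt="Gallery photo">
<!-- The same attribute is coming to iframes -->
<iframe src="https://example.com/embed" loading="lazy"></iframe>
```

So the way this is going to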
work is that on initial load, we're actually just going
to fetch the images that are in or near the viewport. We're also going to fetch the
first two kilobytes of all of our images, as that will
give us dimension information and help us avoid reflow-- it'll give us the
placeholders that we need. And then we start loading
these images on-demand, and what this leads to is
some quite nice savings. So we're only loading up
548 kilobytes of images rather than those 2.2 megabytes. Now Chrome's implementation
of lazy-loading is doing a few other
things under the hood. We actually factor in the
user's effective connection type when we decide what
distance from viewport thresholds we're going
to use, and those can differ depending on whether you're on 4G or on 2G. Now the loading
attribute can either be treated as a
progressive enhancement, so only using it in
browsers that support it, or you can load a JavaScript
lazy-loading library as a fallback. So here, we're checking for
the presence of the loading attribute on HTMLImageElement. If it's present, we'll just
use the native attribute and we'll upgrade our
image data sources. And if it's not, we can fetch
in something like lazySizes and apply it to get
the same behavior.
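A sketch of that hybrid check (the fallback script path is a placeholder):

```js
if ('loading' in HTMLImageElement.prototype) {
  // Native lazy-loading is supported: upgrade data-src to src and let the browser handle it.
  document.querySelectorAll('img[data-src]').forEach((img) => {
    img.src = img.dataset.src;
  });
} else {
  // Otherwise, fall back to a JavaScript library like lazySizes.
  const script = document.createElement('script');
  script.src = '/js/lazysizes.min.js'; // placeholder path
  script.async = true;
  document.body.appendChild(script);
}
```

So here it is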
working in Firefox, where we've applied
this exact same pattern, and so we're able
to get to a place where we have
cross-browser image lazy-loading with a
hybrid technique that works quite well. KATIE HEMPENIUS:
Users expect images to look good and be performant
across a wide variety of devices. This is why responsive images
are an important technique. Responsive images
are the practice of serving multiple
versions of an image so that the browser can
choose the version that works best for the user's device. Responsive images
can either be based on serving different
widths of an image, or based on different
densities of an image. Density refers to the
device pixel ratio or pixel density of the device that
the image is intended for. For example,
traditional CRT monitors have a pixel density of
one, whereas retina displays have a pixel density of two. However, these are just
two of the many pixel densities in use on devices today. And what Twitter
realized was that it was unnecessary to serve
images beyond a retina density. This is because the human eye
cannot distinguish between images beyond that density. This is an important realization
because it decreased image size by 33%. The one exception
to this is that they do continue to serve higher
density images in situations where the image is
displayed fullscreen and the user can pinch
zoom on the image. Responsive images are just one
of the many techniques that go into a fully optimized image. When we're talking
with large brands, those optimizations
not only include the usual suspects like
compression or resizing, but also more advanced
techniques like using machine learning for automated
art direction, or using A/B testing to evaluate
the effectiveness of an image. And this is where
image CDNs come in. You can think of image CDNs as image
optimization as a service, and they provide a
level of sophistication and functionality that can often
be difficult to replicate on your own with local script-based image optimization. At a high level, image
CDNs work by providing you with an API for accessing
and, more importantly, manipulating your images. An image CDN can be something
that you manage yourself or leave to a third party. Many companies do decide
to go with a third party because they find that it
is a better use of resources to have their engineers
focus on their core business rather than the
building and maintenance of another piece of software. Trivago is a travel
site based in Europe who switched to Cloudinary,
and this was exactly their experience. When Trivago switched
to an image CDN, they found that overall
image size decreased by 80%. Those results are very
good, but they're not necessarily unusual. When talking with brands
who've switched to image CDNs, we've found that they
experience a drop in image size of anywhere from 40% to 80%. I personally think
part of the reason for this is that
image CDNs can often provide a level of
optimization and specialization that can be difficult to
replicate on your own, if only due to lack
of time and resources. Images are the single largest
component of most websites, so this translates into
a significant savings in overall page size. ADDY OSMANI: So, next,
let's talk about JavaScript, starting off with deferring
third party script and embeds-- things like ads,
analytics, and widgets. Now third party code
is responsible for 57% of JavaScript execution
time on the web. That's a huge number! This is based on
HTTP Archive data, and this represents
a majority chunk of script execution time across
the top four million websites. This includes everything
across ads, analytics, embeds, and a lot of these CPU intensive
scripts can block the main thread and can delay your
user interaction. So we need to
exercise a lot of care when we're including third
parties in our pages. When I ask folks how their
JavaScript diet is going, it usually isn't very great. Tag managers, ads,
libraries-- maybe there is an opportunity for
us to defer some of this work to a smarter point in time. Let's talk about a site that
actually did this for real-- The Telegraph. So The Telegraph
knew that improving the performance of third
party scripts would take time, and it benefits from
instilling a performance culture in your organization. They say that everybody
wants that tag on their page that's going to make
the organization money, and it's very important to
get the individuals in a room to educate, challenge, and
work together on this problem. So what they did was they set
up a web perf working group across their ads, marketing,
and technology teams to review tags so that
non-technical stakeholders could actually understand
what the opportunity here was. What they discovered
led to a change. The single biggest
improvement at The Telegraph was deferring all JavaScript,
including their own, using the defer attribute. This hasn't skewed
analytics or advertising based on their tests.
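The attribute itself is a one-word change (the script URL here is hypothetical):

```html
<!-- Deferred scripts download in parallel but execute only after the document is parsed -->
<script src="/js/analytics.js" defer></script>
```

This is a really huge deal,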
especially for a publisher, because usually, you
see a lot of hesitation from marketing folks,
from advertising, from analytics because
there's this fear that you're going to end up losing
revenue, or not quite tracking as many users as you
want to be able to track. But through collaboration,
through building that performance culture,
they were actually able to get to a
place with the org where they kept
building on top of this, including leading to
changes such as a six second improvement in
their time to interactive. So they still have
work to do, but this is a really solid start. We can also talk
about TUI, who are a travel operator in Europe. They were looking at how to
be more customer-centric, and realized that just
adjusting prices wasn't going to cut it if visitors
were leaving their site because of slow speed. Now for speed projects
at their organization to get off the ground, they had
to get organizational buy-in from management all the
way up to their CEO. And, through a test
and learn mindset, they were able to
discover that when load times decreased by 71%,
bounce rates decreased by 31%. Now part of the things that
allowed them to get to a place where they could
improve performance were these two optimizations. TUI were using Google Tag
Manager in the document head-- in their case, they were using it
to inject tracking scripts and things like that. So they moved the execution
of Google Tag Manager to after the load event.
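A sketch of deferring a tag manager that way (the container ID is a placeholder, and this is not TUI's actual code):

```js
// Inject the Tag Manager snippet only once the page has finished loading.
window.addEventListener('load', () => {
  const script = document.createElement('script');
  script.src = 'https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX';
  script.async = true;
  document.head.appendChild(script);
});
```

They didn't see any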
meaningful drop in tracking as a result of this, and the result was great from their perspective. They had a 50% reduction
in domComplete. TUI also had a third
party A/B testing library that they were
using that weighed 100 kilobytes of gzipped
and [INAUDIBLE] script. They realized that even if
they were to push this to after the onload event, it could
potentially have some issues. They noticed some flickering as
it would switch from one A/B
they completely threw that dependency out and they rewrote
their A/B testing as something custom-- part of their CMS-- in under 100 lines
of JavaScript. The impact was being
able to throw away that dependency
and a 15% reduction in homepage JavaScript. Let's also talk about embeds. Now we noticed how
flags chrome.com is having a high
JavaScript execution time despite it looking like
it's mostly a static site. This would delay
how soon users could interact with the experience. Now what we saw was
that chrome.com actually had this Watch
Video button on it where they'd show
a YouTube promo if you clicked on the button. Unfortunately, they dropped
in YouTube's default embed into their HTML,
and this was pulling in all of the YouTube video
player, all of its scripts and resources, on
initial page load, bumping up their time to
interactive to 13.6 seconds. Now, the solution
here was that instead of loading those YouTube
embeds and their scripts eagerly on page load, they switched
to doing it on interaction. So now when a user clicks
to watch that video, that's the point when we load in
all those resources on-demand, because the user signaled
an intent that they're interested in watching.
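A sketch of that load-on-interaction pattern (element IDs and the video URL are hypothetical):

```js
// Don't create the YouTube iframe until the user actually asks for the video.
document.querySelector('#watch-video').addEventListener('click', () => {
  const iframe = document.createElement('iframe');
  iframe.src = 'https://www.youtube.com/embed/VIDEO_ID?autoplay=1';
  iframe.allow = 'autoplay; encrypted-media';
  document.querySelector('#video-container').appendChild(iframe);
}, { once: true });
```

This led to a 69 point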
improvement in their Lighthouse performance score, as well
as a 10 second faster time to interactive, so
a really big change. KATIE HEMPENIUS: Now,
no performance talk is complete without a discussion
of the cost of libraries and how you should just
remove all of them. But since that topic has
been done so, so many times, I wanted to take a little
bit different angle and instead talk about what are
some alternatives to removing expensive libraries. In other words, if that's
not an option for you, what are some other
things you can look into? First, deprecating expensive libraries-- taking steps to eventually remove that library. Second, replacing the library with something less expensive. Third, deferring the use of an expensive library until after the initial page load. And fourth, updating a library
to a newer version. When replacing libraries,
there are generally two things you want to look for. One, that the library is
smaller, but also maybe more importantly, that
its tree-shakeable. By only using
tree-shakeable dependencies, you're ensuring that
you're only paying the cost for the parts of the
library that you actually use. You can also defer
the loading and use of expensive dependencies until
after the initial page load. Tokopedia is an online
retailer based in Indonesia, and they're using this
technique on their landing page. They really wanted their
initial landing page experience to be as fast as possible,
so they rewrote it in Svelte. The new version only takes
37 kilobytes of JavaScript to render
above-the-fold content. By comparison, their existing
React app is 320 kilobytes. I think this is a really
interesting technique because they did not
rewrite their entire app. Instead, they're still
using the React app; they just lazy-load
it in the background using service workers. This can be a really
nice alternative to rewriting an
entire application. As I mentioned, Tokopedia used
Svelte for their landing page. In addition to Svelte,
Preact and lit-html are two other very lightweight
frameworks to look into. And last, consider
updating your dependencies. As a result of using
newer technologies, newer versions of libraries
are often much more performant than their predecessors. For example, Zalando is a
European fashion retailer, and they noticed that their
particular version of React was impacting page
load performance. They A/B tested this and found
that by updating from React 15.6.1 to 16.2, they were able
to improve load time by 100 milliseconds and lead to 0.7%
uplift in revenue per session. ADDY OSMANI: Now another
useful optimization to consider is code splitting. When we're thinking about
loading routes and components, we ideally want to
do three things. We want to let the user know
that something is happening, we want to load the minimal
code and data really fast, and we want to render
as quickly as possible. Now code splitting enables
us to do this more easily by breaking our larger
bundles into smaller ones that allow us to load them on-demand.
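With dynamic import(), for example, a route or feature can be split out and fetched only when it's needed (module path hypothetical):

```js
// The details chunk is only downloaded, parsed, and executed on first use.
document.querySelector('#details-link').addEventListener('click', async () => {
  const { renderDetails } = await import('./routes/details.js');
  renderDetails();
});
```

This enables all sorts of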
interesting loading patterns, including progressive
bootstrapping. Now when it comes to
JavaScript, it actually does have a real cost,
and those two costs are download and
JavaScript execution. Download times are critical for
really slow networks, so things like 2G and 3G. And JavaScript
execution time ends up being critical for
devices with slow CPUs because JavaScript is CPU bound. This is one of those places
where small JavaScript bundles can be useful for
improving your download speeds, lowering memory usage, and
reducing your overall CPU costs. Now, when it comes
to JavaScript, our team has a motto-- if JavaScript doesn't
bring users joy, thank it and throw it away. I believe that this was in an
extended special of the Marie Kondo show. Now one site that
breaks up JavaScript pretty well is Google Shopping. They were interactive in
under five seconds over 3G, and they have this goal of
loading very, very quickly, including their
product details page. Now, Shopping have at least
three JavaScript chunks. One for
above-the-fold rendering, one for code to respond to
those user interactions, and one for other features
that are supported by search. Now their work to
get to this place involved drawing a
new template compiler, producing smaller
code through it, and also looking at
things like a lighter experience for folks who were
on the slowest of connections. They actually ship
a version that's under 15 kilobytes
of code for users in those types of markets. Another good example
are Walmart Grocery. So Walmart Grocery is a
single page application that loads as much
as possible up front. And they've been focused
on cleaning up their code, removing old, duplicate
dependencies, anything that's unnecessary, and they
split up their core JavaScript bundles using code splitting. They've also been doing
things that Katie's been suggesting
earlier like moving to smaller builds of
libraries, like moment.js. And the impact of this
iterative work has been great. A 69% smaller JavaScript
bundle and a 28% faster TTI. Now they continue to work
on shaving JavaScript off their experience to
improve it as much as possible. We can also talk about Twitter. So Twitter is a popular
social networking site. 80% of their customers
are using mobile every day, and they've been focused on unlocking a user experience for the web that lets
users access content pretty quickly regardless
of their device type or their connection. Now when Twitter Lite first
launched a few years ago, the team invested in
many optimizations to how they load JavaScript. They used route
based code splitting and 40 on-demand
chunks for breaking up those large
JavaScript bundles so that users could hit interactive
in just a few seconds over 4G. Between this and smart
usage of resource hints, they're able to prioritize
loading their bundles pretty early. So what did the team
focus on next after that? Well, Twitter is a global site
that supports 34 languages. Now supporting this
required a tool chain of libraries and plugins
for handling things like locale strings. Now after choosing a set
of open source tools, they discovered
that on every build, they were including
internationalization strings in those builds that
were invalidating file hashes across the entire app. Each deploy would end up
with an invalidated cache for their users, and this
meant that their service worker had to go and
redownload everything. This is a really hard
problem to solve, and they ended up
actually rewriting their internationalization
pipeline and revamping it. The impact of this was
that it enabled code to be dropped from
all of their bundles and for translation
strings to be lazy-loaded. The impact was 30 kilobytes
of reduction in overall bundle size, and it also unlocked
other optimizations such as the emoji picker
in Twitter being loaded on-demand-- that saves 50
kilobytes from their core bundles having to include it. The changes in their
internationalization pipeline also led to an 80% improvement
in JavaScript execution, so some nice wins all around. We can also take a
look at JavaScript for your first-time
users, so those people who are coming to your experience
for the first time. Looking at Spotify-- so Spotify
started serving their web player to users
without an account, and they would show
an option to sign up to play as soon as users
would click on a song. For first-time users
that didn't need to use their playback
library or their core logic, they would actually just
keep first-time page loads very, very low with just
60 kilobytes of JavaScript to get it interactive
really quickly. Once users actually
authenticate and they log in, they then lazy-load the web
player and their vendor chunk, meaning that you as
a first-time user get a really quick experience,
and then an OK experience for the rest of
your navigations. Now Spotify recently also
rewrote their web player in React and Redux, and one
decision that they made was to improve performance
of navigation in the player. Previously, they
would load an iframe for every view which
was bad for performance. They discovered that Redux was
pretty good for storing data from REST APIs in a
pretty normalized shape and making use of it to
start rendering as soon as a user clicks on a link. This enabled them to have
quick navigations between pages even on really slow
connections, because you were reducing overall API calls. And finally, we can
take a look at Jabong. So Jabong, as Katie
mentioned earlier, they're a popular fashion
destination in India. They decided to rewrite one
of their experiences as a PWA. And to keep that
experience fast, they used the PRPL pattern-- so Push, Render,
Precache, and Lazy-load. This allowed them
to get interactive in just 18 kilobytes
of JavaScript. So they were using
HTTP/2 server push. They trimmed their vendor
bundles to 8 kilobytes, and they're precaching scripts
for future routes using service workers which, overall,
led to a TTI improvement of 82% with some good business
wins off the back of it. KATIE HEMPENIUS: Performant
sites display text as soon as possible. In other words,
they don't hide text while waiting for
a web font to load. By default, browsers hide
text if the corresponding font is not loaded, and the length of
time that they will do this for depends on the browser. It's simple to see
why this is not ideal. The good news is that
the fix is also simple. Wherever you declare
a font face, simply add the line font-display: swap.
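For example (the font name and file path are hypothetical):

```css
@font-face {
  font-family: 'MyWebFont';
  src: url('/fonts/mywebfont.woff2') format('woff2');
  font-display: swap; /* show fallback text immediately, swap in the web font when it loads */
}
```

This tells the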
browser to use the default
system font initially, and then swap it out
for a custom web font once it arrives. ADDY OSMANI: Although,
you do currently have to self-host web fonts to
add font display to your pages, right? KATIE HEMPENIUS: Yes, but we
have a special announcement today. ADDY OSMANI: So developers have
been asking us to do something with Google Fonts for
about a year and a half, and today, we're
happy to announce that Google Fonts is soon
going to support font display. So you'll be able to set
things like font display: swap, optional, and
a full set of values. [APPLAUSE] We're very excited
about this change, and this actually just
came in like last minute, so we've got some-- KATIE HEMPENIUS: Last
night, last night. ADDY OSMANI:
[LAUGHS] Last night, so we've got some
docs to update. Let's also talk
about resource hints. So browsers do their best to
prioritize fetching resources they think are
important, but you as an author know more
about your experience than anybody else. Now thankfully, you can use
things like resource hints to get ahead of that. Here are some examples. So Barefoot is an award
winning wine business. They recently used
a library called Quicklink which is under
a kilobyte in size, and what they do
is they prefetch links that are in viewport
using Intersection Observer. And what they saw
off the back of this was a 2.7-second faster
TTI for future pages off the back of it. Jabong are a site that are very
heavily dependent on JavaScript for their experience,
so they used link rel preload to preload
their critical bundles and saw a 1.5-second faster time
to interactive off the back. And chrome.com,
it was originally connecting to nine different
origins for their resources. They used link rel preconnect
and saw a 0.7-second decrease in latency off the back.
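Those three hints look like this in markup (URLs hypothetical):

```html
<link rel="preconnect" href="https://cdn.example.com">        <!-- warm up a connection early -->
<link rel="preload" href="/js/critical-bundle.js" as="script"> <!-- fetch a critical resource sooner -->
<link rel="prefetch" href="/next-page.html">                   <!-- fetch a likely next navigation at idle priority -->
```

What are other folks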
doing with prefetching? So eBay is one of the world's
most popular e-commerce sites, and to help them speed up how
soon users can view content, they've started to
prefetch search results. So eBay now prefetch
the top five items on a search result page for
faster subsequent loads. This led to an improvement of
759 milliseconds for a custom metric called
above-the-fold time-- it's a lot like First
Meaningful Paint. eBay shared that
they're already seeing a positive impact on
conversions through prefetching. But the way that this
works is that they effectively do their prefetching after
kind of settles. And this is rolling out to a
few different regions right now. It's shipped to eBay
Australia and it's coming soon to the US and UK. Now, as part of eBay's
site speed initiative, they're also doing predictive
prefetching of static assets. So if you're on
the homepage, it'll fetch the assets
for the search page. If you're on the search page,
it'll do it for the item page and so on. Right now, the way that they're
doing predictive prefetching is a little bit static, but
eBay are excited to experiment with how to use machine
learning and analytics in order to do this a
little bit more smartly. Now, another site that's using
a very similar technique to this is Virgilio Sports. They're a sports news
website in Italy, and they've been improving
the performance of their core journeys. They actually track
impressions and clicks from users who were navigating
around the experience, and they were actually able
to use link rel prefetch and service workers to prefetch
the most clicked article URLs. Then, every seven minutes,
their service workers will go and fetch
the top articles picked by their algorithms,
except if you're on a slow-2G or 2G connection. The impact of this was a 78%
faster article fetch time, and they've also seen that
article impressions have been on the rise too. After just three weeks of
using this optimization, they saw a 45% increase
in article impressions. KATIE HEMPENIUS:
Critical CSS is CSS necessary to render
above-the-fold content. It should be inlined, and the initial document, with that CSS inlined, should be delivered
in under 14 kilobytes. This allows the browser to
render content to the user as soon as the first
packet arrives.
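The shape of the technique, sketched (the onload trick here is one common way to pull in the remaining styles without blocking rendering):

```html
<head>
  <style>
    /* Inlined critical CSS: just what above-the-fold content needs */
    header { /* ... */ }
  </style>
  <!-- Load the full stylesheet asynchronously, then apply it -->
  <link rel="preload" href="/css/full.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
</head>
```

In particular,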
Critical CSS tends to have a large impact
on First Contentful Paint. For example, TUI is a
European travel site, and they were able to improve
their First Contentful Paint from 1.4 seconds down to one
second by inlining their CSS. Nikkei is another site
using Critical CSS. They're a large Japanese
newspaper publisher and one of the issues they ran
into when implementing this was that they had a
lot of Critical CSS-- 300 kilobytes to be specific. And part of the
reason for that was there were a lot of differences
in styles between pages, but also due to factors like
whether a user was logged in, whether a paywall was
on, whether a user had a paid or free
subscription, and so on. Once they realized
this, they decided to create a critical
CSS server that took in all these
variables as inputs and returned the correct
critical CSS for a given situation. The application server then
inlines this information and it's returned to the user. They're now taking this
optimization a step further and trying
out a technique known as edge side inclusion. Edge side inclusion
is a markup language that allows you to
dynamically assemble documents at the CDN level. Why this is exciting
is that it allows Nikkei to get the
benefits of Critical CSS while also being able
to cache the CSS-- granted, they're caching at the
CDN level and not the browser level. In the event that the necessary
CSS isn't already cached on the CDN, it simply falls
back to serving the default CSS and that requested CSS
is cached for future use. Nikkei is still testing out
the use of edge side inclusion, but just through
dynamic CSS alone, they were able to decrease
the amount of inline CSS in their application by
80% and improve their First Contentful Paint
by a full second. Brotli is a newer
compression algorithm that can provide better
text compression than gzip. OYO is an Indian
hospitality company, and they used Brotli to
compress CSS and JavaScript. This has decreased the
transfer size of their JavaScript by 15%, which translated into
a 37% improvement in latency. Most companies are only
using Brotli on static assets at the moment. The reason for this
is, particularly at high compression
ratios, Brotli can take longer and sometimes,
much, much longer than gzip to compress.
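For static assets, that cost can be paid once at build time. A sketch using Node's built-in zlib (Node 11.7+), with an assumed file name:

```js
const fs = require('fs');
const zlib = require('zlib');

// Compress a static asset at maximum quality during the build.
const source = fs.readFileSync('dist/app.js');
const compressed = zlib.brotliCompressSync(source, {
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 }, // 11 = max (slowest, smallest)
});
fs.writeFileSync('dist/app.js.br', compressed);
```

But that isn't to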
say that Brotli can't be used on dynamic
content and used effectively. Twitter is currently using
Brotli to compress their API responses, and on P75
payloads-- so this would be some of their
larger payloads-- they found that using Brotli
decreased the size by 90%. This is really large,
but makes sense when viewed in
context of the fact that compression algorithms
are going to be more effective on larger payloads. ADDY OSMANI: And our last
topic is adaptive serving. So loading pages can be
a different experience depending on whether you're on
a slow network or a slow device, or you're on a high-end device. Now the network information API
is one of those web platform features that give you
a number of signals, such as the effectiveType of
the user's connection, or the user's Save-Data preference, so you can adapt.
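A sketch of adapting on those signals (navigator.connection isn't available in every browser, and the load functions here are hypothetical):

```js
const connection = navigator.connection;
const isConstrained =
  connection && (connection.saveData || /2g/.test(connection.effectiveType));

if (isConstrained) {
  loadLiteExperience(); // e.g., smaller images, fewer features
} else {
  loadFullExperience();
}
```

But really, loading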
is a spectrum, and we can take a look
at how some sites handle this challenge. So for low-end users on mobile,
Facebook actually-- for users on low-end mobile
devices, Facebook actually offers a very basic
version of their site that loads very fast. It has no JavaScript, it
has very limited images, and it uses minimal CSS with
tables mostly for layout. What's great about
this experience is that users can view
and interact with it in under two seconds over 3G. What about Twitter? So cross-platform,
Twitter is designed to minimize the amount
of data that you use, but you can further
reduce data usage by enabling data saver mode. This allows you to control what
media you want to download, and, in this mode,
images are presented to users when they tap on them. So on iOS and on
Android, this led to a 50% reduction in data
usage from images, and on web, anywhere up to 80%. These savings add
up, and users still get an experience that's
pretty fast with Twitter on limited data plans. Now, as part of looking into
how Twitter are handling their usage of
effectiveType, we discovered they're doing something
really fascinating. They're handling image
uploads in an interesting way. So, on the server,
Twitter compresses images to 85% JPEG and a max
edge of 4,096 pixels. But what about when you've
got a phone out and you're taking an image-- you're taking a picture, but
you're on a slow connection and may not be
able to upload it? Well, on the client,
what they now do is that they check
if images appear to be above a certain threshold,
and if so, they draw it to the canvas,
output it at 85% JPEG and they see if
that improved the size.
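A sketch of that client-side re-encode (the size threshold is assumed; this isn't Twitter's actual code):

```js
async function recompressIfLarge(file) {
  if (file.size < 1_000_000) return file; // assumed 1 MB threshold
  const bitmap = await createImageBitmap(file);
  const canvas = document.createElement('canvas');
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext('2d').drawImage(bitmap, 0, 0);
  // Re-encode as 85% JPEG and keep whichever version is smaller.
  const blob = await new Promise((resolve) =>
    canvas.toBlob(resolve, 'image/jpeg', 0.85));
  return blob && blob.size < file.size ? blob : file;
}
```

Often, this can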
decrease the size of phone-captured images
from four megabytes down to 500 kilobytes. The impact of this was 9.5%
reduction in canceled photo uploads, and they're
also doing all sorts of other interesting things
depending on the effectiveType of the user's connection. And finally, we've got eBay. So eBay are experimenting
with adaptive serving using effectiveType. If a user is on a
fast connection, they'll lazy-load features
like product image zooming. And, if you're on
a slow connection, it isn't loaded at all. eBay are also looking at
limiting the number of search results that are
presented to users on really slow connections. And these strategies allow
them to focus on small payloads and really give users
the best experience based on their situation. So those are just a few
things that people are doing with adaptive serving. It's almost time for us to go. KATIE HEMPENIUS: It is. We hope you found
our talk helpful. Remember that you can get fast
with many of the optimizations that we talked
about today, and you can stay fast using performance
budgets in LightWallet. That's it from us. Thank you. ADDY OSMANI: Thank you. [APPLAUSE] [GOOGLE LOGO MUSIC]