[MUSIC PLAYING] ADDY OSMANI: Hey, folks. So today, we're going to talk
about loading performance on the web. Mobile has changed everything. It challenges the way that we
deliver modern user experiences on the web. And the shape of success
over the next year is going to be whatever
lets us ship the least amount of code while
still delivering value to our user experiences. Now, what actually
impacts loading? There's a number of things
on mobile that can impact it. It could be slow networks. It could be thermal
throttling, parsing JavaScript, cache eviction. In fact, there
are so many things that can impact how slowly
a page loads that we simply don't have enough time to cover
all of them in a single talk today. Some of the things that
we've seen teams successfully use to ship fast and deliver fast experiences include things like shipping
less JavaScript down the wire, caching effectively using HTTP
caching and Service Workers to be resilient
against the network, pre-loading critical resources. But what end goal are
we actually trying to accomplish using
these best practices? Well, it's a lot to do
with user expectations. Now, to talk a little
bit more and illustrate user expectations, I'd like
to introduce you to Gary. [SAD MUSIC PLAYING] So Gary is trying to load
up a web page on slow 3G on an average phone. He's been waiting a few seconds. And he hasn't got any meaningful
content on there just yet. He can't even read the text
of this article just yet. Poor Gary. At this point, he's starting
to question his life choices. He's wondering if he should
have tried loading this page up on a slightly more
capable device, like a Tamagotchi or
maybe a Fisher-Price My First Laptop or
maybe even an abacus. Poor Gary. So talking about expectations a
little bit more, back in 2015, we introduced RAIL, a
user-centric performance model. RAIL had this idea
for load where we try to encourage folks to get the main content for the page rendered in under 1,000 milliseconds. But the reality is
that on slow 3G, that's really hard
to accomplish. But it also doesn't talk
too much about this idea that loading is
kind of a journey. And so over the last year and a bit, we've been focused on a newer
set of user happiness metrics that culminate in
time to interactive. This is the point during the loading of the page where we think the user is probably going to be able to accomplish
useful actions, things like being able to tap around,
hit menus, hit buttons, actually have something
useful happen. So a lot of things
we talk about today are going to be focused on
this idea of improving time to interactivity. Now, Alex Russell says of developing for mobile that networks, CPUs, and disks are not our best friends. The reality is that as we shift to more client-heavy architectures, we can
end up paying for the things that we send down in ways that
are not always that obvious. In the traces that we see today, as we profile different teams' sites, JavaScript ends up
being one of the heaviest costs that we experience. In fact, the cost of parsing
JavaScript is quite heavy. Here's a breakdown of the time
it takes to parse JavaScript on modern devices, so high-end
devices at the very top, average devices in
the middle, and then slightly lower-end
devices all the way down. This is for a meg of
decompressed script. Take a look at the
delta and how long it takes to parse script on
a very, very high-end phone versus something more average, like the Moto G4 that your users probably
have out in the wild. We can zoom in on this. We can actually take a look
at a real site, like CNN. And if we compare
the performance of processing
script on something like the A11 Bionic chip in the iPhone 8, it takes about four seconds. The Moto G4 takes an additional nine. Imagine how much that's going to push out how quickly you're able to get interactive. So there are still opportunities
for us to do better here. Whenever you are developing
a modern mobile experience, it's very important
to be testing on representative
average hardware. Over the last year, we've
seen some teams have success with the PRPL pattern. PRPL is a pattern that shows
you through aggressive code splitting how you can actually
get interactive really quickly. So it has this idea of pushing
the minimal code needed to get interactive. You try to render
that really quickly. You then use Service Worker
for pre-caching resources. You don't have to keep going
back out to the network. And then you lazy load
routes as they're needed. This is a pattern that's been used by sites like Wego and is baked into modern toolkits, like Polymer App Toolbox and Preact CLI.
runtime call stats. This is basically
a granular look at where JavaScript
engines like V8 spend their time across 10 popular progressive web apps and mobile sites. What we can see in
orange is that parse dominates in many cases
the amount of time that we're spending here. And all the way down at the
bottom, we see sites like Wego using the PRPL pattern. They're actually not
spending as long in parse and are able to get
interactive much more quickly. So opportunities again for us to make sure that the tooling that we use these days prescribes best practices for performance out of the box. Earlier, I talked about caching. Something that we haven't
shared before actually is Chrome's cache hit rates. This is what it looks like. So here, we have a breakdown
of cache rates for CSS, JavaScript, fonts, and images. Now, we have the memory and HTTP caches shown in this table. In most cases, when a web
page needs a resource, Chrome starts by looking
it up in the memory cache. If that cache doesn't
have it, Chrome's then going to go out
to the network stack and eventually try getting
it from the HTTP cache. What we can see is that CSS has
got a relatively decent cache hit rate. But take a look at JavaScript. What we can see there
is that we've got a pretty poor cache hit rate. And it could be for a number of reasons. It could be because we're pushing out new releases too often and invalidating those cached JavaScript bundles. It could be because we just don't have decent enough HTTP caching headers set.
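The talk doesn't show a config for this, but as a sketch, if you're serving hashed, fingerprinted bundles (the paths and filenames here are hypothetical), long-lived caching headers with something like Express might look like this:

```js
// A sketch: aggressive caching for fingerprinted assets.
// Bundles like app.3b2f61.js get a new name on every release,
// so it's safe to cache them for a year and mark them immutable.
const express = require('express');
const app = express();

app.use('/static', express.static('dist', {
  maxAge: '1y',     // Cache-Control: public, max-age=31536000
  immutable: true   // adds the immutable directive in supporting browsers
}));

app.listen(8080);
```

So opportunities for us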
to do much better there. Now, when it comes to being
successful at optimizing your load performance,
we've seen teams have great
success by making sure that the entire team owns
performance as a topic. And setting performance
budgets can really, really have a big impact here. So let's talk about budgets
for things like time to interactive. If we set a budget of about 5 seconds or under for time to interactive on first load, and let's say that we take a global baseline of a $200 Android device on a 400 kilobit link with a 400 millisecond round trip time, this can end up translating into a budget of about 160 to 170 kilobytes for our critical resources. (Roughly speaking: a 400 kilobit link moves about 50 kilobytes a second, and once you subtract DNS, TLS, and round-trip overhead from those 5 seconds, only around three and a half seconds of transfer time are left -- which works out to about 170 kilobytes.) We can zoom in on this
a little bit more. What we see is that
that's composed of a lot of different things. That budget includes
your application logic; your framework, which could be anywhere between 4 and 40 kilobytes; your ecosystem pieces, like your router or your state management; your utilities. And a question that you
have to ask yourself is, how much headroom do
these ecosystem choices end up leaving you for your
actual application code? Now, as you're trying to
decide on these things, it's very important to carefully
evaluate those libraries, those frameworks that
you're using when you're trying to build for mobile. Take a look at their network
transfer costs, their parse and compile times, and
whether they introduce any additional runtime
costs, like long tasks being added into the page that can end
up janking the user experience. So if you're trying to be successful on mobile, what we suggest is this. This is a good recipe. Develop on an average
phone so that you can feel those CPU and GPU limits. Keep your JavaScript parse and
compile time relatively low. And have a good performance
budget in place-- so five seconds for first load,
under 2 seconds for repeat visits. Now, there are a number of good
tools available for performance budgeting. Some that we've tried out and really enjoy are Calibre, SpeedCurve, and bundlesize. We've seen teams use these, as well as many other tools, like Webpack's performance budgets, with some great success.
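As a sketch, Webpack's built-in performance hints can fail your build when a bundle blows past the kind of budget we just talked about -- the exact numbers here are illustrative:

```js
// webpack.config.js -- a sketch of enforcing a ~170 KB budget per asset.
module.exports = {
  performance: {
    maxEntrypointSize: 170000, // bytes
    maxAssetSize: 170000,      // bytes
    hints: 'error'             // fail the build instead of just warning
  }
};
```

And if you're interested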
in learning more about this topic, Alex
Russell just published a really awesome article on this topic where he asks, can you afford it? And he talks about real-world performance budgets. So check that out. Next, let's switch it up
and talk about the health of the web as a whole. Now, over the last
few years, we've given you good tools
for understanding the state of your
performance in synthetic lab conditions, things like Chrome DevTools, Lighthouse, and WebPageTest, as well as suggesting RUM for understanding the performance your users experience out there in the wild. We've also given you good
tools for understanding trends, so things like HTTP Archive. So you can take a look at
how the web is constructed and get insights, like what is
the average size of the images folks are sending down? What is the median size of
those types of resources? Now, if you've checked out HTTP Archive before, you might know that it doesn't
include a lot of those modern metrics that I was
talking about earlier. It also doesn't include
some of those graphs that we were highlighting. And so today, we'd
like to change that. I'm happy to introduce a new
version of HTTP Archive, HTTP Archive Beta. This is available at
beta.httparchive.org. And I'm pretty stoked
actually about this release, because it gives you access
to a lot more data and a lot more power to get insight. It includes things like
response bodies for CSS, HTML, and JavaScript; Lighthouse reports for hundreds of thousands of sites; Blink feature counters; and those
newer performance metrics. And all of this is queryable. What does this mean for you? Well, it means that
you are able to get insights such as the state
of JavaScript on mobile. So what we can learn from this
is that at the 90th percentile, sites are shipping down about a megabyte of gzipped JavaScript. Decompressed, that's going
to be even larger when it comes to parse costs. And we're seeing
that sites end up spending four seconds parsing
and compiling that code. Are sites using a meg of JavaScript up front? Well, we actually took a look at the top 50 sites. And using the Chrome remote debugging protocol, the DevTools remote debugging protocol, we actually discovered that
most of those sites consistently only
used 40% of the code that they loaded on load. We also took a look at this 30 seconds into the page and discovered that the
situation didn't really change. And what this highlights
is opportunities for us to be shipping less
JavaScript down to our users, taking advantage of patterns
like code splitting, and just ensuring that
we're reducing our network transmission costs as well
for this type of code. We can also take a look at the
state of the web on mobile. And we can see at
the 90th percentile, sites are shipping down almost 5
and 1/2 megabytes of resources. 70% of this is images. So still opportunities
there for us to be compressing things better, using things like MozJPEG or WebP to reduce how much we're actually sending down over the wire. And we can also take a
look at web speed metrics, like time to interactive. And at the 90th
percentile, sites are taking 35 seconds
before they're interactive. That's 30 seconds
longer than the budgets that we're prescribing today. So we still have
some work to do there if we don't want
to make Gary sad. So that's HTTP Archive Beta. The reality is that out in the wild, demographics can vary pretty wildly
for your real users. Some users are going to
have a crappy device. Some are going to
have a crappy network. And your competitors may have a
faster experience than you do. Wouldn't it be useful if we
had something like HTTP Archive but which gave us
queryable RUM for the web? Now, to talk about a new initiative here that's going to help, I'd like to introduce to the stage Ilya Grigorik and Bryan McQuade. [MUSIC PLAYING] ILYA GRIGORIK: Thanks, Addy. So as I'm sure you've
experienced yourselves, scanning the headlines
on any given day, it's inspiring to see examples
of well-optimized sites delivering great user experiences. But at the same
time, there are also definitely pockets on
the web where we all know we need to do better. And honestly, sometimes
it's a little bit hard to tell looking at the headlines
whether we're making progress overall on the web. Are we improving
the user experience? And therein is actually
one of the big challenges that we have both as site
developers and browser developers. How do we understand
the macro trends of where the web is heading? How do we find examples, beyond just the ones we're highlighting here at CDS, of great user experiences that we should learn from? And similarly, where do
we focus our attention to improve the overall
experience on the web? So to address that
question, we're actually announcing the Chrome
User Experience Report today, which is a public data set
that we're hosting on BigQuery. And the data set provides a set
of key user experience metrics. And initially, we're focusing
on loading performance. And, of course, Addy mentioned
a lot of other metrics. And we're hoping to add
more metrics in the future. The report will also provide a
sample of 10,000 origins, which is something that
we're also hoping to improve in the future. And I know what you're thinking. Show me that data. So let's actually take
a look at the schema. So, first of all, a
high level overview. The report itself is aggregated
by origin and keyed by origin. So you'll have example.com. And we're providing
two key dimensions that we found to be critical
when actually working with this data ourselves. The first one is form factor. So you can segment this data by tablet, phone, or desktop. The second one is the
effective connection type, which is determined by the
network information API. And this one is actually
powered by real user measurement data based on the round trip
time and the download speeds on the client. So you can tell if the
connection is fast or slow based on the actual
user experience. You could be on a Wi-Fi connection that feels very slow, and it will say so in this case. And then finally, we
have a set of metrics. And as I mentioned, we're
focusing on loading metrics to start. So we're starting with the four that we have here. First of all, the paint API, so first paint and first contentful paint. Getting stuff on the screen is important just to give the user the perception that stuff is happening. And, of course, DOMContentLoaded and onload, defined by the HTML standard. So those are the four metrics. Finally, we have the
actual histograms. So all the data is
split into time slices. And each slice has
a start and an end and a density value, which
is a fraction of page loads that fall into that range. And with that I'll let Bryan
pull back the curtain a little. BRYAN MCQUADE: All
right, thanks, Ilya. I'm Bryan McQuade. I am a software
engineer at Google. And I work on making
Chrome and the web faster. So let's dig into the
data in the BigQuery table for the Chrome User
Experience Report. We'll see what kinds of
questions we can answer and what kind of
insights we can gather from digging into the data. So here's our first query. We'll look at a few queries and work through things here in the BigQuery web UI. And so what we're doing
here is this query will help us answer the question: what percent of page loads on the
www.google.com origin result in a fast first
contentful paint. And so we're defining a fast first contentful paint here as a first contentful paint that happens within 1 second. If you've seen some of Ilya's past talks or you're familiar with Jakob Nielsen's work, you may know that 1 second is
that threshold of time where a user has their
train of thought typically interrupted
after that point. So ideally we'd like to keep
as many of our page loads under that threshold
as possible. So let's go ahead and run the
query and see how we're doing. So here, we can see at the
bottom, we've got our results. And we can see that 81% of
the loads on www.google.com are below the threshold. So generally, we're doing really
well here, which is great. So let's take a look just at a
couple of pieces of the query. So first, here, we've got the name of the table we're querying. So this is the Chrome User Experience Report table for October 2017. It's the initial release. We're querying the first contentful paint metric. And what we're doing with that metric is we're summing all of the density values that Ilya talked about for the histogram bins for that metric, but only where they represent samples that were recorded in less than a second, or less than 1,000 milliseconds.
about that the data set enables is also drilling down
on certain dimensions. So let's take a look at
performance broken down by phone versus desktop
for Google News. So here, we've got our query. We're going to update it to group by and aggregate on form factor, which breaks it up by phone and desktop. We do have to add a little bit extra to the query here, because since we're no longer aggregating at the origin level, we have to normalize. So we're sort of dividing the bins that meet our threshold by the total bins in the aggregation criteria.
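A sketch of that breakdown, under the same table-name assumption as before:

```sql
-- Fraction of fast first contentful paints, split by form factor.
-- Densities are divided by the per-group total to normalize.
SELECT
  form_factor.name AS form_factor,
  ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4) AS fast_fcp
FROM
  `chrome-ux-report.all.201710`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  origin = 'https://news.google.com'
GROUP BY
  form_factor;
```

And you can learn more about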
that in the documentation. But let's go ahead, and
we'll dive in and run this. And let's see how
we're doing broken down by phone versus desktop. And so we can see here
that, well, on desktop we're doing reasonably well. Almost half of page loads are
completing under our target threshold, our fast threshold. We've got a little bit of work to do, it looks like, on phone. So this breakdown really helps to give insight into the differences in performance between phone and desktop. And we see this pretty commonly
for origins on the web. So we definitely recommend that
as you're digging into the data and analyzing origins, you
do these kinds of breakdowns to see if the performance differs across those two dimensions, the phone
and desktop breakouts. So one more query. One of the things that
the Chrome User Experience Report enables us to do
is to compare performance across different origins. So let's finish up with an
analysis of the google.com origins in the data set. So we've got a wildcard query here.
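A sketch of that wildcard query, again with the table name assumed:

```sql
-- Compare the fraction of fast first contentful paints
-- across every google.com origin in the data set.
SELECT
  origin,
  ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4) AS fast_fcp
FROM
  `chrome-ux-report.all.201710`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  origin LIKE '%google.com'
GROUP BY
  origin
ORDER BY
  fast_fcp DESC;
```

And we can run that. And we get more data. And here, because we're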
sorted by fastest, we can see sort of our
fastest performing origins at the top of the set here. And then if we were to
page through the data, we would see areas where
we can improve as well. So these are the
kinds of insights that the data set enables. We definitely encourage
you to dig in and see what you can find and
share feedback with us to let us know how we
can make it more useful. ILYA GRIGORIK: And the last
example that Bryan gave here is actually a great
demonstration of the underlying power of the data set,
where you can actually look across the web and figure
out what are the trends? How is the user experience changing? And let's go to the
next slide here. We're going backwards. But one of the things
that we discovered, as we have been looking
at the data set itself, is you have to be careful when
you are working with real user measurement data, because
the population of users that visits the website actually
affects the performance, which should be intuitive. But just as an example,
I can have a small site that is visited by users that happen to be on fast hardware and on fast networks. And the site may not be well optimized, and it may appear fast. And vice versa, you
can have a big service, like say, Google
News, which is visited by a very diverse set
of users with a wider distribution of hardware
and on slower networks. And that will be
reflected in the data. So when you're
comparing origins, you should be careful
with drawing conclusions and kind of try to
control for those things. So as Bryan
mentioned, we document some of these best practices
in our documentation. And on that note, to
get started, please check out our blog post, which has more details on the announcement
and how to access the data set. And it has a link to the
developer docs, which have a walkthrough guide for
how to get started with BigQuery if you have not used it before,
plus some sample queries that you can run, similar
to what you've seen here. And you can start getting
a feel for the data itself. And with that I'm super keen
to see what you guys will build with this data. And let's welcome
Addy back on stage. [MUSIC PLAYING] ADDY OSMANI: So that was the
Chrome User Experience Report. As a browser, what's Chrome
doing to give you as developers more power to control
your loading experience? Well, over the last
year, we've been working on a few new features. The first of these
is font display, which we introduced in
Chrome 60 and is now available as a work in
progress in Safari and Firefox. font-display as a descriptor allows you as a developer to decide how your web fonts are going to render or fall back depending on how long it takes for them to load. I personally love using font-display: optional, because it basically says if a web font can't load quickly, don't load it at all. If it happens to be in the user's cache, the next time that they come and visit the experience, we can then consume it. But otherwise, we don't end up blocking for it.
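In CSS, that looks something like this -- the font name and URL are hypothetical:

```css
/* A sketch of font-display: optional. If the font isn't available
   almost immediately, the fallback font is used for this page view,
   and the web font is fetched in the background for next time. */
@font-face {
  font-family: 'Example Sans';
  src: url('/fonts/example-sans.woff2') format('woff2');
  font-display: optional;
}
```

You've also asked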
for the ability to adapt the content
that you serve down to users based on the
estimated network quality. Now Chrome's had the network
information API for a while. But it kind of only provided
you theoretical network speeds. Imagine being on Wi-Fi, but
connected to a cellular hotspot and only getting 2G speeds. Well, navigator.connection.type
would have effectively given you that. It would have told you
that you're on Wi-Fi. And we'd end up shipping you
down a much, much larger file in this case. Over in Chrome 62, we introduced effectiveType, a newer property. And this uses the new network quality estimation work that we've been doing in Chrome. This uses RTT and downlink values. And effectively, what this allows you to do is get a much clearer picture of the actual effective connection type that the user has. So you can make sure that you give them a slightly more accurate representation of the data that their connection can handle.
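As a sketch, adapting what you serve might look like this -- the asset names and element are hypothetical:

```js
// A sketch of adapting to estimated network quality.
// effectiveType is derived from measured RTT and downlink values,
// so a slow hotspot behind Wi-Fi reports as '2g', not 'wifi'.
const connection = navigator.connection;
const effectiveType = connection ? connection.effectiveType : '4g';

const imageSrc = effectiveType === '4g'
  ? '/images/hero-large.jpg'   // plenty of bandwidth
  : '/images/hero-small.jpg';  // 'slow-2g', '2g', or '3g'

document.querySelector('#hero').src = imageSrc;
```

For many of us in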
this room, we're used to building single
page applications. And our waterfalls can end up
looking a little bit like this. We push down some HTML, which
then requires some JavaScript to be fetched before
we go and query an API for some JSON responses. Now, the way that we've given you to control a little bit more of your loading in the past, back in Chrome 50, is link rel=preload, something that's making its way to other browsers. And this basically allows
resources that are pretty critical to your experience. And can it try loading
those up much earlier on? Now, in Chrome up until Chrome 62, you weren't able to use the Fetch API with this. The general goal of preload, once again, is starting the load of that resource without having to wait for the scripts or elements that request them. And you can now use this consistently with the Fetch API as well. So the Fetch API and preload work together now.
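A sketch of what that pairing might look like -- the URL and render function are hypothetical:

```html
<!-- Start loading a late-discovered API response early... -->
<link rel="preload" href="/api/feed.json" as="fetch" crossorigin>

<script>
  // ...and the matching fetch() later picks up the preloaded
  // response instead of starting a second request.
  fetch('/api/feed.json')
    .then(response => response.json())
    .then(data => renderFeed(data)); // hypothetical render function
</script>
```

And for folks that have been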
building progressive web apps, there have been some situations
where your Service Worker bootup time can end up
delaying a network response. Navigation preload,
something that we introduced in Chrome 59, allows
you to fix this by allowing you to make
the request in parallel with your Service
Worker bootup time. So in cases where, on particularly slow connections or slow devices, you can end up with a few hundred milliseconds delaying overall Service Worker bootup, this can now improve things.
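A minimal sketch of the Service Worker side of navigation preload:

```js
// sw.js -- a minimal sketch of navigation preload.
self.addEventListener('activate', event => {
  if (self.registration.navigationPreload) {
    // Start navigation requests in parallel with Service Worker bootup.
    event.waitUntil(self.registration.navigationPreload.enable());
  }
});

self.addEventListener('fetch', event => {
  if (event.request.mode === 'navigate') {
    event.respondWith((async () => {
      // Use the response that was already started, if there is one.
      const preloaded = await event.preloadResponse;
      return preloaded || fetch(event.request);
    })());
  }
});
```

And at the 95th percentile,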
our current estimates are that this can end up
saving folks anywhere up to 20% on their page load times. So exciting work
being done there. What's up for the future? So we're working
on a few things. We're working on
trying to improve the performance of our ES
modules implementation. Today, you currently still
need to bundle for most cases in production. On the Service Worker front, we
are working on off-main-thread fetch and script streaming. And we're also working
on a new navigation architecture for loading. That should hopefully lead
to some improvements in time to first contentful paint. Now, over the last
year, we've talked a lot about progressive web
apps and how in many cases they're becoming the new normal
for new mobile web experiences. And today, we've got
some new ones to share. So please join me in welcoming
to the progressive web app family, two new sites. [DRAMATIC MUSIC PLAYING] So let's start off
with Pinterest. Pinterest spent three
months building out the logged-in experience
for their progressive web app, which has now rolled
out to 100% of users. This started because
they were focused on international growth. And when they took a look
at their old mobile web experience, which often pointed
folks to the native app, they discovered that not as
many people were actually clicking through and installing
that, so it kind of made sense for them to
explore mobile web as an opportunity for improving
their conversion rates. So they ended up
building this experience. It didn't take too long to
get the initial version out. This is based on React,
React Redux, and React Router with Webpack. I kind of love Pinterest. I'm a heavy Pinterest
user myself. It allows me to take a look at
some really beautiful crayon arts that people
end up creating. And it also saves me
time, because it shows me what my version of this
would also look like, so I don't have to do it myself. But thank you, Pinterest. Taking a look at the performance
of the old Pinterest site, what we can see on first
load is that they used to get interactive
in over 20 seconds. It would often take 23 to 30 seconds before you could actually interact
with those pages and start saving your pins. I'm happy to say that with
the new progressive web app experience that
they've just shipped, this changes quite a lot. They're now able
to get interactive in under 5.6 seconds. So really nice boost there. They've also
managed to drop down the sizes of their JavaScript
bundles, all the way down to 150 kilobytes. They've reduced the sizes
of their CSS bundles. At the 90th percentile, the time
it takes to load up pin pages is also down. And on repeat loads, thanks
to Service Worker caching, they're actually able to
boot up and get interactive in under 4 seconds on average
mobile hardware, which has been great see. And we can compare this to some
of their native applications as well. So this isn't necessarily an
apples to apples comparison. But I will say that for the
core home feed experience, what you're able to get in
under 150 kilobytes is reflective of the
same experience delivered in 56 megs of their native app
on iOS, 9.6 megs on Android. Now, you could say that,
yeah, as you navigate through this experience,
you are going to end up fetching more data. But this cost is amortized
over the lifetime of the application, as
those subsequent navigations don't end up costing
quite as much data as the native apps do. We can take a look at
the business metrics off the back of
this-- and I was quite happy to see these as well. We can see, comparing the old mobile website to the new PWA, that the time spent in the application is up 40%. User-generated ad
dollars are up by 44%. Core engagements are up. But we can also see
compared to the native app, the time spent is
also up in the PWA compared to that baseline,
as well as user-generated ad dollars. If you're a developer
like me, you probably care more about their
JavaScript serving strategy. So let's talk a
little bit about that. Pinterest are using an
interesting bundle splitting strategy where they have a vendor chunk, which is used for their framework and library code, so their React, their Redux, their React Router. They have entry chunks for their core logic. And then they have asynchronous chunks for anything that's lazily loaded in later on. Their Webpack configuration looks a little bit like this. It's using the CommonsChunkPlugin. They maintain a list of all the different frameworks and libraries that end up getting squashed into that vendor bundle.
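A sketch of that kind of configuration in the Webpack 3 era -- the module list here is illustrative, not Pinterest's actual config:

```js
// webpack.config.js -- a sketch of splitting framework code into
// a long-lived vendor chunk with the CommonsChunkPlugin.
const webpack = require('webpack');

module.exports = {
  entry: { app: './src/index.js' },
  plugins: [
    new webpack.optimize.CommonsChunkPlugin({
      name: 'vendor',
      // Pull anything from these framework packages out of the app chunk.
      minChunks: module =>
        /node_modules[\\/](react|react-dom|redux|react-redux|react-router)/
          .test(module.resource || '')
    })
  ]
};
```

They're using things like React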
Router for their overall code splitting and lazy
loading story. So in this case, they're using Webpack's magic comments. They're creating a loader, registering it to a particular pin route, rendering the route with React Router 4, asynchronously loading route bundles with pure components as needed, and rendering components as they're needed, which has been cool to see.
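A sketch of that pattern -- the component and route names are hypothetical:

```js
// A sketch of an async route with react-loadable and a
// Webpack magic comment to name the resulting chunk.
import React from 'react';
import Loadable from 'react-loadable';
import { Route } from 'react-router-dom';

const PinPage = Loadable({
  loader: () => import(/* webpackChunkName: "pin" */ './routes/PinPage'),
  loading: () => null // hypothetical loading placeholder
});

// Rendered with React Router 4; the chunk loads when the route matches.
const pinRoute = <Route path="/pin/:id" component={PinPage} />;
```

Something of a trend that I keep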
seeing with teams that we work with is the value that
they've gotten out of close bundle analysis. So this is what their Webpack Bundle Analyzer output looks like. And what you can
notice in the purples, the blues and the
pinks is that this represents asynchronous
chunks of code that included some duplicates. So they had duplicate
logic across a lot of these different chunks. And through using
Webpack Bundle Analyzer, they were actually able
to discover opportunities where they could move a
lot of that common code all the way into their
entry chunk, which increased the size
of that chunk by 20%, but actually decreased
the size of all the asynchronous chunks
by anywhere up to 90%, which is really great to see. On the Service
Workers front, they were able to explore a
very iterative approach to adopting Service Workers. So they initially started off by just runtime caching asynchronous chunks of JavaScript so that they could be opted in to V8's bytecode cache. They then moved on to doing this for vendor chunks, their most popular routes, and their global chunks. They also did this for their locale bundles. And eventually, they ended up using the application shell pattern and a cache-first approach to their JavaScript and CSS.
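A minimal sketch of that first step, runtime caching JavaScript chunks cache-first -- the cache name is hypothetical:

```js
// sw.js -- a minimal sketch of cache-first runtime caching for JS chunks.
self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.origin === location.origin && url.pathname.endsWith('.js')) {
    event.respondWith(
      caches.open('js-chunks').then(async cache => {
        const cached = await cache.match(event.request);
        if (cached) return cached; // serve from cache, skip the network
        const response = await fetch(event.request);
        cache.put(event.request, response.clone());
        return response;
      })
    );
  }
});
```

Now Pinterest are planning a few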
other additions in the future. They're working on web
push notification support, trying to fix some desktop-era decisions that lead to some slower API responses than they'd like, and also adding link rel preload for preloading their JavaScript bundles. Next up, we've got Tinder. So Tinder swiped right on the mobile web, which was cool to see. It has support for things like Service Workers, add to home screen, and push
notifications for chat. And the original MVP of
this took about six weeks to build out and
then three months to actually initially launch. This has been something they've
been building as an opportunity to explore other markets. And it's also built
on React and Redux. It's rolled out globally
to 100% of users right now. And initial signs are positive. Also taking a look at
sort of the amount of code that's necessary to ship
down their core experience, Tinder are able to deliver that
core experience in about 10% of the data investment
for someone in a data costly or data scarce
market compared to the Android data that. So metrics at the moment
are looking positive. And I'm looking
forward to Tinder sharing a few more
concrete details about this in the near future. Let's take a look at
their performance, so some Lighthouse reports. Before they started
work on this, they were getting interactive
in about 7.7 seconds. After it, they managed to
shave off about 1.5 seconds. And one of the ways that
Tinder accomplished this was by adopting some really,
really concrete performance budgets. Remember, we were talking
about the importance and the need of performance
budgets earlier on. And so they have
performance budgets for all their different
types of chunks. They have a 155 kilobyte budget
for their vendor chunks, so their framework code. Asynchronous chunks also have
a budget, as well as CSS. The approach that they took to
code splitting was moving away from statically importing
in everything in one go over to using things like
React Router React Loadable and the commons chunk plug-in. So they were only
including in code that they needed when the
user would actually need it. They also took advantage of React Loadable's support for preloading scripts. And this just meant that there were opportunities to preload scripts for additional views, so they were probably in the cache when the user needed them at a future point in time.
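As a sketch, react-loadable exposes a static preload() for exactly this -- the component and idle-time trigger here are hypothetical:

```js
// A sketch: warm up the code for a likely next view during idle time.
import { MatchesPage } from './routes'; // hypothetical Loadable component

requestIdleCallback(() => {
  // Fetches the chunk now so it's cached before the user navigates.
  MatchesPage.preload();
});
```

Taking a look at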
the impact of this, adopting code
splitting for Tinder ended up taking load time
from about 12 seconds all the way down to 4.69. From their Google
Analytics, we can also see that the average user is
able to load up this experience in under 6 seconds,
which is a lot better than the older experience that
they'd previously shipped. Tinder also adopted link rel preload. They previously had a situation
where some of their scripts were being loaded early on. Some of them were
being discovered late. And so using link
rel preload, they were actually able to push
all of this work much, much closer to parse time. And this actually gave
them an opportunity to reduce first paint
by 500 milliseconds and load time by 1 second. We were talking about Webpack
Bundle Analysis earlier. And Tinder were no different. They actually found
great value in closely looking at their dependency
graph for areas of opportunity to reduce. They found that they
were shipping down a lot of unused polyfills. And so they used
babel-preset-env to address that situation. They used the lodash-webpack-plugin to strip away parts of lodash that they didn't actually need to be shipping down to their users.
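A sketch of those two fixes -- the browser targets are illustrative:

```js
// .babelrc -- only include the polyfills the target browsers need:
// {
//   "presets": [
//     ["env", { "targets": { "browsers": ["> 1%", "last 2 versions"] },
//               "useBuiltIns": true }]
//   ]
// }

// webpack.config.js -- drop the parts of lodash you don't use.
const LodashModuleReplacementPlugin = require('lodash-webpack-plugin');

module.exports = {
  plugins: [new LodashModuleReplacementPlugin()]
};
```

They replaced localForage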
with raw IndexedDB, as well as a number of optimizations for CSS that as a whole actually dropped load time further, at this point in time, to 4.5 seconds. They also adopted a
CSS loading strategy. So they now use atomic CSS
to create highly reusable CSS styles. And the idea here is
that if most of the style has already been sort of fetched
and it's in the HTTP cache, then you can cache
it for longer, and it doesn't have to be
refreshed for every release, because you're doing it in a much more granular way. So this also led to decreases
at that point in time in overall page load
time, which is awesome. And finally-- well,
almost finally-- they updated to
Webpack 3 very recently and saw a reduction in
JavaScript parsing time of 8%. So they're using the ModuleConcatenationPlugin in there as well. They also recently updated to
the latest version of React, React 16, and saw a
reduction of almost 7% in their vendor chunk sizes. We're going to be talking
a little bit about Workbox in the next talk,
but Tinder were also using Workbox for their offline caching, their Service Worker story. Jeff Posnick is going
to talk a little bit about this in his talk. But that's it for Tinder. Improving performance
is a journey. It's not something that you
just do in a single sprint and then leave alone. It's something that you
iterate on over time. And lots of small changes
can actually end up leading to large gains. What I'd like you to take
away from this talk is not like going back to your boss
later on today and saying, I sat through 3,000
Addy slides and now we have to rewrite everything. Instead, if you're
starting a new project, just consider picking a
set of tools that give you a strong performance baseline. And if you've got an
existing experience that could use some
work, just remember MOM, measure,
optimize, and monitor, because there are probably
opportunities there for you to do better
than the baseline that you're shipping to users today on mobile. That's it for me. I hope that you
found this useful. Thank you. [MUSIC PLAYING]