[MUSIC PLAYING] PAUL IRISH: Hi, everyone. My name is Paul Irish. I am a performance engineer
working on developer tooling for Chrome. ELIZABETH SWEENY: And
I'm Elizabeth Sweeny. I'm a product manager working
on Developer Insights products on the web platform in Chrome. So today, our goal
is to make sure that we can all measure,
optimize, and monitor our site speed like pros. PAUL IRISH: Yep. ELIZABETH SWEENY:
And we're not here to espouse best practices just
for the sake of it, right? But we know performant sites are profitable sites, and that's the core of it. So we know it can be difficult
to know where to start, and there are a lot of things
that can torpedo your ability to make your sites fast. PAUL IRISH: That's right. So we came up with a blueprint
for your performance success. ELIZABETH SWEENY: And
before we dive in, let's remind ourselves just
how important site speed is. PAUL IRISH: Yeah. I love this. No, I do not like this. ELIZABETH SWEENY:
No, this is terrible. But I know something even worse. PAUL IRISH: Um. Yeah. Like, hello? Like, web page, just
give me a paint. Please, show me something. ELIZABETH SWEENY: We know it's
bad, but just how bad is this? The impact on user
experience is not minimal. In fact, the speed at which a page loads has been revealed to be the most important factor in a user's mobile experience. It's more important
than how easy it is to find what
they want, it's more important than the
simplicity of using the site, and interestingly enough, it
is three times more important than what a site looks like. So the takeaway is
performance is critical. PAUL IRISH: Yeah. ELIZABETH SWEENY: And we
know that's hard to believe, but we actually are that
impatient, I promise. When overall page load time
goes from one to three seconds, the probability of
bounce increases by 32%. And when you go from
one to 10 seconds, then that nine-second delta
increases your chance of bounce by 123%. PAUL IRISH: Wow. So, yeah. Like Elizabeth said,
this isn't just about speed for
the sake of speed, although as developers, it
does feel really good to get a nice TTI or a nice FCP-- feels good-- but that
investment that we make as developers on site
speed can have direct impacts on business success. ELIZABETH SWEENY:
That's absolutely right. And we've seen these investments
pay off time and time again for our partners. When Pinterest revamped
their mobile web experience to focus on performance,
they saw an uplift in both user sentiment
and engagement, and that net effect was a 44%
increase in their revenue. Their website is now their
top platform for sign-ups. Tinder, after
implementing and enforcing an aggressive
performance budget, now sees more swipes
on the web than they do on their mobile app. So we'll be-- yes-- we'll be talking more about
how performance budgets come into the equation a little bit
later, but over and over again, we see the exact same pattern. Those who know how to design
and implement fast sites get more satisfaction from
their users, higher conversion rates, more time spent on
pages, and higher revenue. PAUL IRISH: OK. So all of this is great. But it is difficult
to know where to start and how to prioritize
when you're trying to improve your site speed. So we created this
blueprint to set up teams for performance success. So within this blueprint, we
have 15 recommended actions for you, starting from
the very basics scaling up to what we would consider
to be a very mature web performance culture. ELIZABETH SWEENY: So let's
make sure that we all start on even footing. What are the things that
absolutely everybody should feel comfortable with? These are the table
stakes for performance. We start with wanting to know
the current status of our page. Are we doing well? Are we doing poorly? And you can get this snapshot
by using the PageSpeed Insights web app. You can run any URL
through the tool, and it'll provide
you with both the lab and field data necessary to
benchmark your page's speed. And I want to take a minute to
break down the elements of what you get back in that report. First, you see the score gauge
at the very top of the report. And this is a high
level indication of how your page is doing. It's the same score as
you'd find in Lighthouse. And the score is calculated with
weighted performance metrics that Lighthouse
measures, including things like First Contentful
Paint and Time to Interactive. What's really special
about the PSI tool is that it provides you with
both lab and field data in one fell swoop. The field data is sourced from
the Chrome User Experience Report, or CrUX, which I'll be
talking about a little bit more later. And the lab data, including
both the performance metrics, as well as the opportunities
and diagnostics that you see beneath it, those
are all powered by Lighthouse. So you get the same results
that you would from Lighthouse within the DevTools Audit
panel, for instance, but it's running on our servers
instead of your local machine. PAUL IRISH: So
Action 2 is, OK, I saw what PageSpeed Insights is telling me, and it gave me some suggestions. And I want to go try those out--
implement those suggestions on my machine, so I'm going to
need to iterate a little bit. So now it makes sense to
go to your localhost in your development environment, open up DevTools, and here, you open
up the Audits panel. So you're faced with
something like this, and then you can go off to
the races and try it out. And actually, since it's-- well, since it's the I/O talk,
we did have some changes-- some new stuff. We actually shipped a brand-new
version of Lighthouse, version 5.0 today. And there's some new features
that we're going to talk about. Yeah. One of those was actually mentioned yesterday in the keynote. It's called Lighthouse
Stack Packs. Stack Packs are really cool. It's a feature that
allows Lighthouse to include specific
recommendations based on the stack that you're using. So Lighthouse detects
what kind of platform, what software your
site is built on, and instead of
just surfacing the generalized
recommendation, we add additional messages that are
just for you and that platform. We're working closely
with the community to make sure that all the
recommendations are coming from experts that
know this stuff, and make sure that the advice
is tailored to the platform. The first Stack Pack
is for WordPress. That's available today. You'll see that in
PageSpeed Insights, you'll see that
in Chrome Canary, and as new ones are created
by community experts, we'll be adding those in. ELIZABETH SWEENY:
So, hold on a minute. Can we go back, because
that looks new-- the logo, the report. PAUL IRISH: Yeah, that's true. Yeah. Some new stuff. OK. So, I'll get into that. All right. So some of the new stuff. We've got a new kind of
refreshed Lighthouse UI. We want to make
sure with the report that it's clear and actionable. So we've done a UX
and visual refresh, just to prioritize
the right data. So you'll see this new
design on PageSpeed Insights. You'll see it in Chrome
Canary next week. It's good stuff. Also, we had to kind
of like hop on the hype train for one feature. I mean, it is arguably
the must-have feature of any modern UI
in 2019, Dark Mode. Oh, yeah. Oh, yeah. We just had to. Oh, thank you. Oh. Whoo! Yeah, Dark Mode. It's good. It's good. It's nice. You can flip it on in the menu in the top right. And we also do that cool thing where, if you set the operating system preference, we pick it up through that media query. You know, we had to do it. So, OK. Anyways, sorry, you can go back.
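For the record, the media query Paul is alluding to is prefers-color-scheme; here's a minimal sketch of the pattern, with placeholder colors:

```css
/* Honor the OS-level dark mode preference; the colors here are placeholders. */
@media (prefers-color-scheme: dark) {
  :root {
    background: #202124;
    color: #e8eaed;
  }
}
```

ELIZABETH SWEENY: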
So you're telling me we get Stack Packs and
a new UI with Dark Mode? PAUL IRISH: Yeah, yeah, yeah. Yeah. ELIZABETH SWEENY: OK. So that's all great. But onto Action 3. It's great to get a snapshot
of your field data in PSI, but you want to see
how your site actually evolves over time. So as we mentioned
briefly before, the Chrome User
Experience Report provides user-experience metrics
for how real-world Chrome users experience popular
destinations on the web. It is a dataset that is
powered by real users, and the metrics collected
are aggregated anonymously from users who have opted in. As of this past month,
the dataset coverage has expanded to over
5 million origins, and if you don't see field
data for an origin yet, just know that it's coming,
because we're always working to expand our origin coverage. And the CrUX dashboard, built
by this fine gentleman over here, Rick Viscomi, allows
you to better understand how an origin's
performance evolves. It's built on Data Studio,
and it automatically syncs with the latest datasets and can
be easily customized and shared with your team online. So right now you're seeing
FCP being drilled down into, but you can easily go more
in-depth into other field metrics, or things like
proportion of device usage, and network connection types. PAUL IRISH: All right. Action 4 is we want to quantify
the experience that users are having on our site. For this, we're just going
to dig into the metrics and understand what's going on in the definitions of these metrics. Loading is-- well, actually--
it comes right from the W3C spec for paint timing. Load is not a single
moment in time. It's an experience that no
one metric can fully capture. So there's multiple
moments that really contribute to quantifying
what that experience is like. We've talked about some of
these metrics previously-- things like First
Contentful Paint, Time to Interactive,
the First Input Delay-- these are great metrics
and really do a good job of capturing some things.
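As an aside, First Contentful Paint comes from that same Paint Timing spec, so you can read it in the field yourself with a PerformanceObserver; a minimal sketch:

```js
// Log First Contentful Paint as the Paint Timing API reports it.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP (ms):', entry.startTime);
    }
  }
}).observe({ type: 'paint', buffered: true });
```

But we wanted to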
introduce you to kind of the new kids on the block. There's a few metrics
that are in development, and I wanted to introduce
them to you today. So the first one is
layout stability. And to explain this, it's best
to start with an example. Now Elizabeth and I,
we made this website. It's a cute cat, and it
wants to be clicked on. It seems good. ELIZABETH SWEENY:
Can I click on it? PAUL IRISH: You certainly can. But the other day, I actually
had to add some monetization to the site. Oh, I don't know. But I did. And unfortunately, I didn't do it in a nice way. So you might try to click on it, but then an ad loads in and shifts it down. And you know when this happens. It's so annoying. So it might be ads, but
it might just be an image. Lots of things can kind
of move things around as the page loads. And it's frustrating, as
users, when it moves around. Layout stability
is a metric that's all about quantifying
this experience, taking a look at the
elements, their dimensions, and their movements, and
putting that into a score. There's a bunch more
details, but you can read about them in
this explainer here. The second metric that I
want to introduce you to is not First Contentful
Paint, but Largest. So the Largest Contentful
Paint here, well, in this load, we start out blank-- just the text-- and
then finally, this image finishes downloading,
and that's good. Usually, if it's
like the big image, we call it the hero image,
the hero content, right? And so in this case, we're
interested in this moment in time, when the
big content is done. Now this content may be
an image, it may be text, and there's a few things
to figure out there. So in the explainer
for this one, you can see some of the details there. So there's a big section
on what is largest? How do we quantify that? Figuring out what Contentful
means, figuring out Paint. So, for instance,
figuring out the details with foreground images
versus background images, with text, handling web
fonts, things like that. So the details are in there. You can dig into that. Again, these are metrics that
are still in development, but you'll be seeing
more about these two.
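Both proposals are meant to be observable from JavaScript, so here's a loose sketch of watching for them with a PerformanceObserver, using the entry types from the explainers; since the metrics are still in development, the names and fields may change:

```js
// Layout stability: each layout-shift entry scores how much content moved.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('layout shift score:', entry.value);
  }
}).observe({ type: 'layout-shift', buffered: true });

// Largest Contentful Paint: the most recent entry is the current candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log('LCP (ms):', latest.renderTime || latest.loadTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });
```

All right. Wow. OK. Whoo! So, before we go any further,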
we want to take a brief respite and examine a taxonomy
of speed tooling. Yes, yes, nice. So we're going to
cover a few tools in this talk, and all these tools make sense for different situations. But one thing I just
want to make clear is that they're all based
on the same core engine and using the same data sources. So in particular, most lab
tools are powered by Lighthouse at the base, whereas WebPerf
APIs and Chrome usage statistics are what power pretty much all the field data and RUM solutions, things like that. All right. So that captures kind of
the performance basics. Let's move on to
the good stuff, some of the more intermediate items. So we're getting
into these third-- sorry, the second of three
blueprints, the plumbing. These are professional
performance techniques. And actually for step five,
I'd like to introduce-- ELIZABETH SWEENY: All right. Hold on. You skipped a step. PAUL IRISH: Step four to five. ELIZABETH SWEENY: Sorry. PAUL IRISH: We just did four. ELIZABETH SWEENY: No,
there's one in between. PAUL IRISH: There's a-- ELIZABETH SWEENY: Yeah, it's
four and three-quarters. PAUL IRISH: Oh! ELIZABETH SWEENY: And it's one
of the most important ones. PAUL IRISH: Four and
three-quarters, obvious. What was four and
three-quarters? ELIZABETH SWEENY: Yeah,
you can't skip that one. PAUL IRISH: Oh, OK. My bad. ELIZABETH SWEENY: So if
you don't have buy-in from all of your stakeholders
that speed is important, and that it's not
just your best friend but it's everybody's
best friend, everything else in the blueprint
kind of becomes a moot point. And people get excited about
the shiniest new feature, and performance gets
put on the back burner. We've all been there. So this means that
you want to make sure that you have support from
all parts of your organization to execute against the
performance blueprint that we're sharing
with you today. There's nothing more
painful than having to layer performance on top of a fundamentally non-performant site. That's just-- it's painful. But this can be seen in
how organizations often design their web apps. Performance is an afterthought,
and often, it only becomes a priority in
the heat of an emergency. So users are
complaining, businesses are losing money, and
then panic ensues. But like we said
earlier, performance at the end of the day is about
solving business problems. And understanding that the conversations "we need to increase conversions" and "we need to lower our FCP" are effectively the same conversation is a really good way to get
both business and engineering stakeholders excited
about solving problems that lend themselves
towards increasing quality for your users. OK. Now you can introduce Amir. PAUL IRISH: Oh, great. All right. Everyone please welcome
Amir Rachum, engineer on the Google Search Console. [APPLAUSE] AMIR RACHUM: Thanks, Paul. Hi, everyone. My name is Amir. And for those of
you who don't know, Search Console is a tool
that gives you insight into how your website is
performing on Google Search-- what queries bring
users to your site, how many users see your
website in the search results, how many click
through, and so on. It also provides reports
on your website's coverage on the Google Search
Index, as well as help you fix any
issues you might have relating to
search features, like AMP or structured data. But getting users to
your site is not enough. Like you've just heard, faster means more conversions. So with Search Console, we want
to help website owners provide users with an amazing experience
that loads fast and keeps the bounce rates low. That's why I am
happy to announce that we've been
working on a new Speed Report for Search Console. It's still in beta,
but today we'll take a sneak peek
at the new report. Now this report is pivoted
around field metrics. So that's First Contentful
Paint and First Input Delay, based on the Chrome User
Experience Report data. And the goal here is to
get an overview of how all of the pages in your website
are doing, based on real-user measurements, then zooming
in on a particular metric and device that's problematic,
and getting examples from misbehaving pages, and
then taking those examples, fixing them, iterating
on them with developer tools like Lighthouse
and PageSpeed Insights. So let's take a look. So what you see here
is the breakdown for the Google
Developer website, based on actual user
measurements aggregated over the last 28 days. And the first thing
you can see here is that we classify all the
URLs into three buckets-- slow, average, and fast. So right off the
top, you can get an overview of how your site
is doing on a per-URL basis. So for the Developer
site, we have about 4,000 slow URLs,
about 33,000 average pages, and 800 fast pages. And we classify a
page as slow if it's considered slow on any metric
on either desktop or mobile. So if you have a page that
has a slow First Input Delay, for example,
on mobile, it'll count towards the slow bucket,
even if the other metrics are doing well. So that's kind of a
strict definition. Fast URLs are fast on all
metrics, across all devices. So that's a really
good place to be. And the rest of your
pages are labeled average. And that's usually the biggest
bucket, as you can see here. Now under the summary counts, you can see an over-time graph of these performance buckets, so you can get a feel for the
trend of your speed performance over the last three months. This is where you'll see the
effects of any performance fixes you implement as
they reach actual users. So now that you know how
your website is doing, we can drill down
to a specific issue. And because we know
it's unrealistic to fix an entire website
in one go, we really wanted to help website
owners figure out where to spend their resources
when fixing speed issues. So with the Speed Report, we are
also introducing page grouping. Instead of just a
list of URLs, we take all the pages in
your site and group them with pages that have
a similar experience, and that we think will have
the same underlying technical issues. This way, if your
issue is caused by a common template
or a slow resource, you can fix them all at once. You can see that
each URL actually represents a bunch
of similar URLs, and we aggregate the performance
metric for the entire group. So in this example, the first page group has about 1,000 URLs and
an aggregated First Contentful Paint value of 3.2 seconds. And if you click
on one of those, you can see more examples
of pages in that group. This allows you to focus on the
pages you care about the most. And for the Developer site we
have, for example, a page group for all the pages describing
structured data items you can implement on your site. And all of these pages, they
have a similar structure. And it's very likely
that the technical issues will be the same or
similar on all of these, so you can fix
them all in one go. And after you decide on
what to fix, it's time to take the examples
here back to developer tools like PageSpeed Insights. And you can see
there's a direct link to PageSpeed Insights in the
panel for the [INAUDIBLE] selected, and iterate on a fix using lab data. And when you're done, you can come back to Search Console and see the effects of your fix on your website as a whole. And that's it. So as I've said, this report
on your website as a whole. And that's it. So as I've said, this report
is still being beta tested. But you can help. You can sign up to register
for the beta at this link. And we'll be adding
more participants over the next few weeks. And as always, we appreciate
any feedback you have. And if you've never used Search
Console, have any questions, be sure to visit us
in the Sandbox area to get a demo of Search
Console in action. And with that, I'll bring it
back to Elizabeth and Paul. Thank you. [APPLAUSE] PAUL IRISH: All right. ELIZABETH SWEENY:
Thank you, Amir. So we are so excited by the
speed report and new features like being able to dissect the
CrUX data by page groupings. That's super cool. PAUL IRISH: That's rad. ELIZABETH SWEENY: And what's
great is that by step 6 we are comfortable with what
we want to be measuring. We know what speed
metrics we want to be tracking both in
the lab and in the field, and what tools we
want to use to do so. But we still haven't
defined success. So my TTI is seven seconds. Am I happy about this? I don't really know, because
we haven't set our goals yet. So it's time to define
a performance budget. And you are in control and
able to define what budget feels reasonable for your team. However, setting
reasonably aggressive goals will allow you to maintain
optimal performance when new features
are introduced, as the team changes, and when
day-to-day priorities devour your bandwidth, which we
know happens all the time. So there are three kinds of
budgets that you can set, including resource
quantity, like the weight of your JavaScript or the number
of network requests, milestone or metric budgets, like
a maximum threshold for your interactivity
or load metrics, or score budgets
based on Lighthouse. And just for the record, if
you set a score budget that is 100 for all of your audit
categories, I bow to you, and more power to you. Just know that there's an
awesome Easter egg in there somewhere. But you didn't hear it from me. PAUL IRISH: Hey, don't tell
them about the fireworks. - I made him say it. - They look cool. All right. All right. - So just an hour ago during
the Speed at Scale talk, Katie and Addy went into depth
about incorporating performance budgets into your workflow. And we're so excited to have
Lighthouse's new performance budgeting feature,
LightWallet, announced this I/O. Lighthouse now
supports your resource quantity budgets within the Report
UI itself so that you and your team can evaluate how
well your site is performing against the goals
that you've set. To get started, you
define a budget file. This example sets a budget of
125 kilobytes for all scripts, 50 for all style sheets, and
35 network requests total. Then you use your
budget in Lighthouse by passing the budget path
flag, followed by the path to your budget file in
order to calculate whenever a category is over budget. And if you're not
sure where to start, you can check out Katie's
performance budget calculator to give you a good sense
of what a good default budget is for you and your
team based on your goals. PAUL IRISH: All right. Action 7. You want to diagnose specific
aspects of what in particular is affecting page load. And for this, it's the DevTools Performance panel. Yeah. My buddy. I like the Performance panel. There's some good stuff. I mean, in the Performance
panel we're all about getting into the details. And really, in order to show
kind of what this is about-- ELIZABETH SWEENY:
You should do a demo. PAUL IRISH: Do I have to? ELIZABETH SWEENY: Yeah. PAUL IRISH: OK. I'll do a demo. All right. Cool. So what we're going to look
at is the Wikipedia page for "Cat." Great little page. And I want to
understand how it loads. So what we're going
to do is, first I'm just going to start it
off from about:blank and hit record. This is a non-throttled
run by the way, but we'll just keep it easy. All right. I navigate. I hit stop. That seems good. All right. So a lot of things
going on here, right? We got all this stuff down here. But really I'm just interested
in when we get that content on the top of the screen. So I'm just going to kind of
scrub in the top and see, OK. Yeah. Well looks like the icons
on the top and the logo were a little late to come in. But we had the
content pretty early. So I'll just select that area. Oh, wow. OK. In this case, none of that
stuff on the main thread was even there. So what we have is we have
this network track and then the main thread. One thing that I kind of notice is this little gap in the middle. And what happens is we have
the HTML downloading over here. And then our end point
is actually here. It's that first paint,
first contentful paint, first meaningful paint, all on
the exact same point in time. And in fact, we can
open up frames just to see exactly what the screen
looked like at that point. All right? So why did we have this gap
here on the main thread? Well I don't know. It's kind of interesting. So we finish the
HTML download, right? We were parsing the HTML here. We parse it again over here. But here the main thread's
not doing really anything. Looks like we're
downloading some images, downloading a script. But the priority is low. So that means it's
not render blocking. So that shouldn't be a problem. But the purple is style sheet. Priority of highest. And highest indicates
that it's render blocking. And so what happened
is we download the HTML but then we find a render
blocking style sheet. So we've got to go fetch that. Then, once that finishes, you see we come down here and we finish parsing
the rest of the HTML. We recalculate
style, layout, paint, and then pretty soon
we finish it off. So this is kind of cool. And in fact, it's
interesting because Wikipedia has one of the best
web performance teams that there is. But still, even they
have an opportunity. They could take the styles that are in this style sheet, the kind of critical CSS thing, and inline them in the HTML. That would win them, in this case, something around 30 milliseconds on my unthrottled run. But there's a lot
of opportunities. That's probably where I'd start. And then afterwards
I'd start to get into what is happening
in the main thread over here, because
it looks like there might be some opportunities
for improvement. So that's what we can do
with the DevTools Perf panel. Back to our slides, I guess.
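The fix Paul is describing usually looks something like the sketch below; file names are placeholders, and the non-blocking stylesheet load shown is one common approach, not the only one:

```html
<head>
  <style>
    /* Critical, above-the-fold rules inlined directly into the HTML,
       so the first paint never blocks on a stylesheet fetch. */
    body { margin: 0; font-family: sans-serif; }
  </style>
  <!-- The full stylesheet loads without blocking render: it starts as
       non-matching "print" media and swaps to "all" once it has loaded. -->
  <link rel="stylesheet" href="/full.css" media="print" onload="this.media='all'">
</head>
```

- And as much as we'd all like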
to, we can't sit our entire lives in front of DevTools
re-running Lighthouse over and over again, as ideal a scenario as that sounds. PAUL IRISH: Yeah. ELIZABETH SWEENY: So
we're at the point where we need to automate as much
of our performance story as possible. And that's where production
monitoring comes in. There are a lot of third
party production monitoring solutions that are built on
top of Lighthouse's engine. As Paul mentioned
earlier, a lot of the web performance tools that you
see are based on the same core technologies and data. And I really like that
production monitoring can be done with
web.dev's measure tool and the API that it runs
on, PageSpeed Insights v5 API. With web.dev you're able
to run Lighthouse and track your page's
performance over time as well as other audit
categories like accessibility and search engine optimization. When you run Lighthouse
within web.dev you can easily find
the guidance that you need to optimize your
site's performance. That's one of the joys
of it, is that it marries documentation with tooling. So stay tuned for more
feature build outs there. If you plan on using the
PSI API in an automated way with regularly
scheduled queries, you can get an API
key at the URL here. And it's a great way to
scale your monitoring over multiple pages or origins. And by default, the API runs
just the performance category and on desktop, but you
can adjust it for mobile and expand it to include
the other Lighthouse categories as well. And you can get CrUX
data from the API too. So if you're looking to
build out your own production monitoring solution, this is a really great place to start.
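For reference, a scheduled query against the PSI v5 API is a plain HTTP GET; a minimal sketch, where the page URL and API key are placeholders:

```js
// Query the PageSpeed Insights v5 API and pull out one lab and one field metric.
// Assumes a runtime with fetch available (a browser, or a recent Node).
const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
const params = new URLSearchParams({
  url: 'https://example.com', // page to test (placeholder)
  strategy: 'mobile',         // the API defaults to desktop
  category: 'performance',    // the default; add more via params.append('category', ...)
  key: 'YOUR_API_KEY',        // placeholder: get one at the URL on the slide
});

fetch(`${endpoint}?${params}`)
  .then((res) => res.json())
  .then((data) => {
    // Lab data: the Lighthouse performance score, from 0 to 1.
    console.log('perf score:', data.lighthouseResult.categories.performance.score);
    // Field data: the CrUX First Contentful Paint percentile, when available.
    console.log('field FCP (ms):',
        data.loadingExperience.metrics.FIRST_CONTENTFUL_PAINT_MS.percentile);
  });
```

PAUL IRISH: So at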
this point we need some more details on the user
behavior that's on the site. And there's a lot of
great solutions for this. But I want to call
out one in particular that was launched
just yesterday, and that is the new web
performance monitoring solution from Firebase. There's some really
good stuff in there. And since there weren't
so many details yesterday I want to show a little
bit of what it looks like in the real experience. So this is what you'll see
in kind of the dashboard that welcomes you. We see a bunch of key
performance metrics and the full distribution
of those measurements from all of your users. And that's really nice
because in other tools like Google Analytics
you only get like that one average number. It's not very
indicative of what is happening to all of your users. So it's great to get
the full picture here. You also see metrics like
first contentful paint and first input
delay in there, too. And then you can also dig
into some of these metrics. Look at one in particular, see how it changes over time, and then pivot the data based on a few different variables. It's cool.
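For reference, wiring this up on the web takes a few lines with the Firebase JS SDK; a sketch with placeholder config. Once initialized, the SDK collects the page-load metrics shown on that dashboard automatically:

```js
import firebase from 'firebase/app';
import 'firebase/performance';

// Your Firebase project config goes here (placeholder values).
firebase.initializeApp({ /* apiKey, projectId, appId, ... */ });

// Enables automatic page-load and network request monitoring.
const perf = firebase.performance();

// You can also measure custom traces around code you care about.
const trace = perf.trace('load_cat_image');
trace.start();
// ... the work you want to measure ...
trace.stop();
```

ELIZABETH SWEENY: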
That is really cool. But now you know how and where to collect field and lab data, in aggregate and on a page level, and how to compare how
you've performed over time against your benchmarks. But how are you
doing in relationship to your competition? Here we recommend that you
leverage the full power of CrUX with BigQuery to dig really
deeply into the data sets. Not only can you compare
one competitor's metrics, but you can compare all
competitors across the board within an industry to
see where you fall. And you can visit
the CrUX GitHub repo to discover useful recipes
for extracting insights. And also if you have a recipe
that you like, submit a PR and share it with everybody.
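A typical recipe is a short query over the public chrome-ux-report dataset; here's a sketch that compares the share of fast First Contentful Paint experiences across two origins (the origins and the table month are placeholders):

```sql
-- Share of page loads with FCP under one second, per origin.
SELECT
  origin,
  ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4)
    AS fast_fcp_share
FROM
  `chrome-ux-report.all.201904`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp
WHERE
  origin IN ('https://example.com', 'https://example.org')
GROUP BY
  origin
```

So you have your foundation. And now you have your plumbing. But there's something missing. I can take a shower, but I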
can't turn on the lights. PAUL IRISH: Shower in the dark. Obviously. Got to do that. ELIZABETH SWEENY: OK. So now for the stuff that can
really light up your world. OK. I can't help myself. So this part of the blueprint
for performance success is one of the most valuable
things that you can do. Being able to correlate the
speed with which your users interact with your site
and your conversion, bounce, and engagement rates
is a goldmine of insight. There are a few steps to
get started with this. Drawing this graph
is the first one. But step 1, actually, is to
choose representative pages that you can track
and compare over time. This is for both your business and your performance metrics. With this in place,
then you can reasonably evaluate the correlations
of your top performance and business metrics to
one another over time. This will eventually allow you
to estimate the impact of a new feature prior to deployment--
and this is on your revenue-- and quote the cost of a feature
implementation during design. - So by this point you've surely noticed that the third
parties on your site are bringing you some
performance pain. So we need to sort that out. There's a few tools here
I want to shout out. Request Map gives you a nice
view of your third party situation, their network
costs, their dependencies. And if you're interested
in the web scale impact of third parties
check out Third Party Web. This was actually built by
one of the core Lighthouse engineers, Patrick Hulce. And it summarizes
the runtime cost, the JavaScript cost of third
parties across the web. That's really helpful for just
comparing different competitors in a space based on
what sort of impact they're going to have to
your web page's performance. And it's also cool
because the data behind it is all completely open source. In addition, ultimately solving
your third party situation requires working as a team. So one recommendation
is bringing together representatives from different
parts of the company and kind of establishing a shared goal: we are going to make faster web pages. At that point you can then
review all the third-party tags together, understand what
their perf impact is, and evaluate what's
absolutely required and what we can do about things. All right. Action 13. Your site, you may feel
is not like other sites. And you might want to define
performance success that is completely custom to you. For example, on me and
Elizabeth's cat site, well, what is success? - Time to first cat. - Time to first cat. Yeah. We want this kitty cat in
front of the user as soon as possible. So let's create a
custom metric for that. All right, cool. There's a new API just kind of on the way. You can put an elementtiming attribute on the image tag. And then you'll set up a PerformanceObserver, which, down at the last line, you
observe element entry types. And then inside the
callback you get data. And you get a time
stamp that represents not when that image was
finished downloading but when it was
rendered to the screen. And that difference can
be significant, important. So now we have, yeah,
our time to first cat. Pretty cool. Element timing is currently
in an origin trial. So this is kind of cool. You just go-- it looks kind of scary, but it's really straightforward. Sign up via the form, say which origins you want to use it on, and you put in a header or a meta tag, and you're good to go. All right.
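Putting those pieces together looks roughly like this; in the sketch below, the image, the attribute value, and the origin-trial token are all placeholders:

```html
<!-- Origin-trial token from the signup form (placeholder). -->
<meta http-equiv="origin-trial" content="YOUR_TOKEN_HERE">

<!-- Annotate the element you want timed. -->
<img src="cat.jpg" elementtiming="first-cat">

<script>
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.identifier === 'first-cat') {
        // renderTime is when the image hit the screen,
        // not when it finished downloading.
        console.log('time to first cat (ms):', entry.renderTime || entry.loadTime);
      }
    }
  }).observe({ entryTypes: ['element'] });
</script>
```

ELIZABETH SWEENY: And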
with a custom perf metric you know what you
want to measure. That's awesome. Time to first cat. Fantastic. But we're on Action 14. And this is a mere one
step away from being, like, performance pros. So we need to automate the
measurement of your custom KPIs. And in Lighthouse, we-- just kind of taking a step
back and taking stock of this-- we really try to make sure
that the audits we incorporate into the core report
itself are universally actionable and impactful
for all developers regardless of their tech
stack, what browser they're in, or their industry. So we know there
are valuable audits, though, that are entirely
valid for use cases that don't necessarily
meet the criteria for universal applicability. And we want to leverage
the power of Lighthouse as a platform to measure
what you care most about. And that's why I'm happy to
announce for the first time, Lighthouse Plugins. It's a brand new feature
that allows domain experts like yourselves to extend the
functionality of Lighthouse for your specific needs. At its core, a Lighthouse
plugin is a node module that implements a set of checks
that will be run by Lighthouse and added to the report
as a new category.
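As a loose sketch of that shape, based on the plugin handbook (the plugin name, audit, and scoring logic here are invented for illustration):

```js
// lighthouse-plugin-cats/plugin.js (hypothetical plugin module)
module.exports = {
  // Each audit is a check Lighthouse will run.
  audits: [{ path: 'lighthouse-plugin-cats/audits/has-cat-image.js' }],
  // Results appear in the report as a new category.
  category: {
    title: 'Cats',
    description: 'Checks that cats are front and center.',
    auditRefs: [{ id: 'has-cat-image', weight: 1 }],
  },
};
```

```js
// lighthouse-plugin-cats/audits/has-cat-image.js (hypothetical audit)
const { Audit } = require('lighthouse');

class HasCatImage extends Audit {
  static get meta() {
    return {
      id: 'has-cat-image',
      title: 'Page is about cats',
      failureTitle: 'Page is catless',
      description: 'Pages should put cats front and center.',
      requiredArtifacts: ['URL'],
    };
  }

  static audit(artifacts) {
    // A toy check for illustration: pass if the final URL mentions cats.
    const isCatPage = artifacts.URL.finalUrl.includes('cat');
    return { score: isCatPage ? 1 : 0 };
  }
}

module.exports = HasCatImage;
```

The Google AdSpeed team created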
Lighthouse's first plugin, which is already
available for CLI users. And this plugin seeks
to provide ad managers with detailed, actionable
recommendations to improve ads loading
on their web pages. Soon we will be
supporting selecting which plugins you
want to run via our UI itself so that you
can easily share the functionality that you've
built with other Lighthouse users. To learn more about Lighthouse
plugins, check out our plugin handbook in the
Lighthouse GitHub repo. And OK. I'm excited. PAUL IRISH: Yeah. ELIZABETH SWEENY:
We're almost there. And what's the last
action to becoming a master of performance? PAUL IRISH: Well,
I'm glad you asked. So when you're developing,
at least when I'm developing, I want to know for each
and every pull request if that's going
to impact my TTI, if that's going to make things
two seconds slower, or two seconds faster. I want to know that
before the pull request is merged into master
and then deployed out live, right? So implementing
performance measurement in continuous integration is
one of the most robust methods that you can employ to
defend against regressions. In fact, the fantastic web
performance team at Wikipedia recently blogged
about their success using a combination of both
RUM data and lab performance tooling in CI. Here they were seeing
their first paint numbers rising in their RUM
data and they didn't have a good explanation for it. But they had a really robust CI setup and they were able to see
what was actually happening. In fact, as users were
switching over to Chrome 69, the numbers went up quite a bit. They investigated this and filed some bugs with the Chromium team, and we were also like, yeah, we noticed this too. And there was a change in
how things were measured. But this gave a lot more
confidence as far as what is happening in their performance, so that they know, in this case, they weren't at fault. It was a change on our side. We're excited. We want to make sure that
you have the confidence to know how each and
every change that you make impacts your web performance. So we're working
on a new project, it's called Lighthouse CI. And Lighthouse CI is
going to be really cool. But it is early. It's all open source though. It's on GitHub. You can look it up. The curious people can
certainly take a look. And we're excited
about making sure that you have some of
this more control and data to understand how things move. - And you're telling me now
that I can get Lighthouse pre-prod and post. - Yep. - That's cool. - It's good. All right. So that takes care of
the last blueprint. So all right. - So I'm going to recommend
the next few slides are phones out slides. PAUL IRISH: Oh, yeah,
that's good stuff. ELIZABETH SWEENY: If you would
like to get summaries of stats. PAUL IRISH: I mean,
it's the full summary. Yeah. It's nice. ELIZABETH SWEENY: I can
do better than that one. PAUL IRISH: Oh, yeah? ELIZABETH SWEENY: Mhm. PAUL IRISH: Oh. ELIZABETH SWEENY: I
can give you the tools. PAUL IRISH: Yeah, it's good. That's a good one. I'd totally photosnap that. ELIZABETH SWEENY:
So we know it's hard to know which tools to
use when and for what purpose. But as Paul said
earlier, most of them are built on the
same foundation. So each one brings its
own value proposition. But remember, they're kind of all built off the same thing. So this toolbox can
be seen as everything you need in order to implement
your blueprint for performance success. We really want to have your
back every step of the way. But do know unfortunately
for step four and a half-- PAUL IRISH: Oh, four and a half. Can we go back a second? ELIZABETH SWEENY: Yeah. Because you are going to have
to bring your own whiskey. PAUL IRISH: Oh, but can
I just get some whiskey? You can help me out with the-- - Maybe. PAUL IRISH: That'd be great. - Maybe. - Would love it. All right. - Here are all of the
links that we shared over the course of the entire
presentation, just curated for ease of reference. - Good stuff. And that's it. - Thank you. [APPLAUSE] [MUSIC PLAYING]