[MUSIC PLAYING] TAYLOR SAVAGE: My
name is Taylor Savage. And I'm a product manager on
Chrome's Developer Experience team, which encompasses the
many different open source web developer products that
we build here on Chrome. So I'll be talking
today primarily about one product in particular,
which is the Polymer Project, and about how the
Developer Experience team has been thinking
about leveraging all the different features
you're hearing about on the modern web
platform, in order to build fully end-to-end applications. But like a typical
product manager, I want to start today
by taking a step back, taking a 10,000-foot view, about
why we build developer-facing tools and products and libraries
in the first place on Chrome and the role that we see our
products playing in the broader web development ecosystem. So our biggest product that
we build on the Chrome team is, as you might guess,
Chrome, the browser. And our focus, when
building Chrome the browser, is to provide the absolute
best user experience possible. And so we've built
many, many features within Chrome to
make this happen, things like Chrome
Sync and Autofill, tons of security work,
countless optimizations to make page loading
faster and more efficient that you've heard about
in the last two days, things that we're talking
about at Chrome Dev Summit. But at the end of
the day, a browser is really only as useful as what
exists on the web to browse. So we don't really
make Chrome great. You really make Chrome great,
as web developers, with all the things that you build. So Chrome is just a window onto
the applications and websites that you create. So the quality of
the user experience on Chrome the browser
product is very fundamentally tied to the quality of the
sites that get put on the web. So on the Chrome
team, in addition to building Chrome
the browser, we're looking for opportunities
to also build products that help you
web developers create and distribute really
high-quality sites. And the trick,
though, for our team is to figure out where
our effort is best applied for maximal leverage. Because as everybody
here is very well aware, the open source web developer
ecosystem, on its own, is an amazing massive place. You don't really need us. New products for web
developers are coming out every single day, are being
built and being open sourced. There is certainly no
lack of innovation. And there's certainly no lack of diversity either, in terms of the tools that
are available for us to use. Today's open source web
development world, though, is subject to a dominating
force, which I affectionately like to call the JavaScript
Industrial Complex. And this is the
positive feedback loop between all the different types
of players in our ecosystem, the open source
projects themselves, tooling that's adjacent to these
projects, content creators who write blog posts, who tweet on
social networks, conferences, trainings that people pay for
to learn more about these tools. All these different
aspects are all really critical to generate all
the projects and documentation and education and support
that we rely on every day to do our job. But the dynamics of this
JavaScript Industrial Complex will sometimes reward
shorter-term optimal solutions at the expense of what might
be optimal in the slightly longer term. Now, we all want to
build tools and we all want to use tools
that solve problems that we're hitting right this instant on the web platform. And so it's these sorts of
tools that solve our problems today that tend to benefit
the most from this system, as you'd expect. Now, fortunately,
the web has been designed to be an incredibly
flexible platform. And so many of the problems
in web development that we hit can totally be papered over,
in the short-term, with tools. But unfortunately, with
the continual application of these tools, we
risk, one, adding a ton of extra complexity
to our workflows and, two, ultimately ossifying ourselves
at, sort of, a local maximum. And we all know that getting
stuck micro-optimizing at a local maximum is a
sure path to obsolescence. So on the Developer
Experience team on Chrome, we try to focus
specifically on solving the problems that we have a
unique opportunity to solve. So one big unique
angle that we have is our proximity to
Chrome, the product. So we have particular
expertise, when it comes to how the
browser works, for example. We have an ability to try
to influence the overall web platform. And so we look for
gaps in the ecosystem where our particular
context and skill set will be particularly valuable. So we work on products like
Chrome Dev Tools, which we can bundle
directly with Chrome itself and which communicates with Chrome at a very low level to help you inspect
and debug your website. We built things like Lighthouse,
which can seamlessly and deeply integrate with the
Dev Tools protocol to express our
particular vision of what makes a good, fast,
high-quality site and help you measure your
own site against that bar. But another unique position that
we have on the Chrome team that helps inform what
developer products we want to focus our time on is
our inherently long-term view. So the web has a very
long time horizon. And we, here on the Chrome team, are fundamentally tied to that long time horizon. We're in a very
fortunate position in that sense in that we're very
deeply committed to the very long-term health of the web. So Chrome is going to
be here a long time. And we can't really
afford to get caught up in short-term hot trends,
because we're playing a very, very, very long game. And often, what this means
is that the best investment that we can make on the
Developer Experience team on Chrome is an
investment in improving the underlying platform itself. So now, here's the rub
with that strategy. Which is, the web platform
moves extremely deliberately. It takes years for a new
feature in the web platform to get designed, agreed
to, and shipped across browsers. And there is a huge cost,
also, to taking features out of the web platform, if
it's even possible to do. So we have to be
extremely careful and thoughtful and
deliberate when considering what new features
we can add to this platform. The road to changing the web
platform is a very long one. You've got to get a
spec written and then get it generally agreed to
across the different browser vendors. And then you've got to
implement it in a browser. And then you've got to
ship it in that browser. And then you've got
to, inevitably, fix the bugs that come up
after you've shipped it. And even then it's not that useful yet, because you've got to wait until it's in enough browsers, shipped
and available for developers to use, that the developer
ecosystem will actually start taking advantage of it. So for a fundamentally new
feature, a fundamental change to the web platform
itself, to get baked in, it requires a dedicated
group of people fighting for that feature, in it for
the very, very long haul, for years, maybe even a decade. And so we dedicate a part of our
Chrome platform team to fight, specifically, this fight,
to analyze the ecosystem and then work with
other browser vendors to come up with
new features that plug holes in the
platform, and then fight in the trenches for
years, to actually see these new features come to life. So this is where web components
and the Polymer Project come in. So we noticed, a few years
back, that web development was getting more and more complex
and that the ecosystem was getting increasingly siloed
and locked into frameworks, many of which, at their core,
were solving fundamentally the same problem, which was
providing a sane component model on top of
the web platform. And again, the lack of a sane
web-native component model is exactly the kind
of problem that can totally be solved in
the short-term with tools. But in the long term,
we'll only really be able to drive
toward simplicity with a fundamental
change to the platform. And so we set out to
create web components. Now, a long story short--
a long story short-- after many years, the
web component standards have finally crossed
a major finish line. Web components
have been natively supported in Safari and
Chrome for a while now, meaning that there are over
1 billion mobile devices out there right now,
in users' pockets, that have native support
for web components.
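To make that concrete, here's a minimal sketch of a native web component, using only the custom elements and Shadow DOM APIs that ship in the browser. The element name and contents are purely illustrative:

```js
// A minimal, hypothetical custom element using only built-in browser APIs:
// customElements.define() registers the tag, and attachShadow() gives it
// encapsulated DOM and styles via Shadow DOM.
class HelloCard extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({mode: 'open'});
    shadow.innerHTML = `
      <style>p { font-family: sans-serif; color: #3367d6; }</style>
      <p>Hello from a native web component!</p>`;
  }
}
customElements.define('hello-card', HelloCard);
// Usage in markup: <hello-card></hello-card>
```

So web components are a reality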
of today's web platform. And we're starting to
see the ecosystem adapt to these new, powerful,
web platform features. We're starting to get to that last phase of the flowchart. New development frameworks, like Ionic's Stencil, have really exploded onto
the scene, based entirely around web components. Other frameworks, like
Glimmer and Svelte, allow you to output
components as web components. And existing frameworks, like
Angular and Vue and Preact, now provide first-class
support for web components. So you can check out
Custom Elements Everywhere to see the latest progress
on first-class framework support for web components. And web components
usage in the wild is actually really taking
off, kind of under the radar. But it's really taking off. So we just had a fun realization
on the team the other day, after seeing another team
at another massive company tweet about one of their web
components-based launches. Which is, if you go
and if you look up the top 16 global
brands in the world, so the 16 most recognizable
companies in the world, nine of those companies-- over
half of those companies-- have a product that is using
web components in production, many of which are using Polymer. So web components might not be hitting the top of Hacker News every day yet. But the adoption
is very, very real. In fact, if I were
starting a company that was trying to make money
on web development, I would absolutely pick web
components as my technology. So on the Polymer
Project, we're continually trying to evolve the Polymer
library, as these web component standards evolve and reach
different phases of maturity, to make it as easy as
possible for the standards to cross the finish
line and, also, for developers to take advantage
of components in production. Earlier this year, we held our
third annual Polymer Summit in Copenhagen, where we heard
from 25 different speakers about the state of Polymer and
also innovations more generally in the web component ecosystem. So we heard from
major companies, like USA Today and
Electronic Arts, who are using Polymer to
be able to quickly spin up new pages and web sites with
a consistent design language and with minimal extra
engineering effort. And we also gave
a sneak preview of the forthcoming next major
version of the Polymer library, which is Polymer 3.0. So Polymer has always been
about making it easier to build web components
and, specifically, making it easier to build web components while staying as close to the platform as possible. And so Polymer 3.0 is
a very small evolution of the library to come
even closer to realizing this ultimate goal. There are two major changes
to Polymer with Polymer 3.0. The first is we'll be moving
from Bower to NPM, in order to distribute Polymer
and the elements built using the Polymer library. [APPLAUSE] And the second is
we'll be switching from HTML imports to
ES Modules in order to load Polymer and
Polymer-based elements. So the reasoning behind
these two changes is fairly straightforward,
as everyone seems to intuitively get. Although other major browser vendors have agreed to and shipped custom elements and Shadow DOM-- we're already seeing those
supported in the wild-- we haven't been able to reach
consensus around HTML imports. And today's
close-to-the-platform way to load code is via ES Modules. So on the Polymer Project, we're
going to follow our own motto. We're going to use
the platform and move to using ES Modules in
order to load Polymer. And there are some really,
really big exciting advantages with this switch. For one, Polymer will
become much more compatible with the workflow and
tools and other libraries that JavaScript developers
are already familiar with. And Polymer elements
and applications will also be able to run
without any polyfills at all on Chrome,
Opera, and Safari. And when Edge and Firefox ship custom elements and Shadow DOM, Polymer will run completely
polyfill-free on those browsers as well. And on the Polymer
Project, we also really, really care about making these
transitions between versions of the library as
easy as possible. As you've seen, we have some
really, really big users inside Google and
outside Google who have thousands of
elements that can't do a major one-off transition. And so we're working on building
an auto upgrader tool that will mechanically and
automatically upgrade your 2.0 elements--
and even back to 1.0 hybrid mode, if
you're familiar with that-- to Polymer 3.0. So it will be an
automatic transition. It even upgrades
your tests for you. So you can learn much more about
Polymer 3.0 by checking out polymer-project.org and
the blog for updates. We're still working on a bunch
of tooling and support for 3.0. And we expect to have a
stable release sometime early next year. So keep an eye out for that. So 3.0 Polymer is
really the culmination of what we've been trying to
do on the Polymer Project-- make it possible to
build platform native components as close to
the web platform itself. And we really think that the
web component technologies are a transformative change
to the way that the web platform works. But components are really
only half the ballgame. It should also be trivially
easy to take these components and assemble full-fledged
end-to-end applications. And when it comes to
today's ergonomics of assembling web
components into apps, we tend to agree with Sam here. We think there is
a lot left to do, in terms of improving
the developer experience of building
end-to-end applications, taking advantage
of web components. And we also think they're
some really interesting opportunities to take some other
cutting-edge changes to the web platform, along
with web components, marry them together, and
be able to build really blazing fast end-to-end
apps that you can deliver to users on
mobile seamlessly and quickly. So I don't have a product,
per se, to announce today. But I do want to take the
second half of this talk to throw out some
of the key ideas that we've been kicking around
on the Developer Experience team for the shape of what an
app-building solution might look like. So Kevin Schaaf, an engineer
on the Polymer team, went into detail of many
of these ideas in his talk at the recent Polymer Summit. So I encourage you
to check that out if you're looking to
dive in a little more. So really, there are
four main problems that we are thinking
about when it comes to going
from web components to entire applications. The first is how to
structure your application for maximum performance, how to
factor your UI appropriately, how to manage state
within your app, and then how to actually serve
your application in production. So we'll dive into
each of these and see how we're thinking about
marrying a bunch of the web platform features
into doing each of these steps of building an
application really effectively. So first, structuring
for performance-- the number one way to make sure
that your application misses its performance
targets is to start thinking about performance
after you've already finished building your app. And I think we've hammered
this concept home quite a bit at this Chrome Dev Summit. So there's one
overarching principle when it comes to structuring for performance, which we find consistently invaluable. Which is to minimize overhead. Every single byte
of your web app, as you've heard
again and again, has to go through this epic journey
before it finally gets rendered on a user's mobile device. Every byte runs into
so many opportunities for bottlenecks-- flaky
networks, slow devices. The only guaranteed way to
achieve good performance on mobile is to do less. And unfortunately,
what we see again and again is an attempt
to improve performance by doing more. So a lot of the
front-end world today is enamored by the concept
of server-side rendering, as a means specifically to good
performance, where we send down server-rendered static HTML
to get the UI on screen, while the user is waiting for
the rest of the app bundle to download. But unless your application
is mostly just static, kind of passive content, what
the user actually wants to do is interact with your app. They want to select
a departure date or sign up for a newsletter
or bookmark a house. And server-side
rendering doesn't really help with any of
this interactivity. It just gives them
something to look at while the rest
of their code loads, so they can actually
do what they came to your app or
your website to do. So if you still have to send
this large bundle of JavaScript down to transform that initial
rendering into something that's interactive, the user is
still going to be frustrated. And if you don't
believe me, here are a couple of real-life
examples of how popular server-side rendered
applications perform on relatively slow 3G networks. And as you can see, server-side rendering gets the initial view up really, really quickly. But the problem is,
on a slow network, it can take a really long time
for the JavaScript to load. And that leaves the user
looking at a screen that looks like they can
interact with it, but this can be an incredibly
frustrating experience. So I want to make the
point that this is not to say that server-side
rendering is wrong. Absolutely not, it can certainly
improve the user experience by getting pixels
on screen quickly. Definitely a good thing to do. Rather, it just says that there
are no shortcuts to delivering a good user experience. We need to focus on the right
metrics from the beginning. And so we think, on the
Developer Experience team here, that for a lot of apps the right metric should not be first paint but, rather,
time-to-interactive. And the best way to ensure
a good time-to-interactive is this-- don't make the
user wait on anything that they haven't asked for. So what this means is,
only send exactly the code that a particular
route requires in as few round trips
as possible, sending as little duplicate
information as possible. And this sounds easy enough. It sounds fairly intuitive. But in practice,
this has historically been very difficult,
given the bias of existing front-end tooling. So this is why we've developed
and have spent so much time evangelizing the
PRPL pattern, which gives a straightforward pattern
for factoring an application for optimal delivery. So with PRPL, start
by factoring code around de-coupled
routes that fit together into an interactive experience. Use server logic to push down
only the components or the data that a given route needs,
and eliminate round trips. We render and make that
initial route interactive as quickly as possible. We use Service Worker to
pre-cache the next parts of the app in the background. And then we lazily import
this pre-cached code that's needed for
subsequent routes from the Service Worker cache. So we summarize this
pattern as PRPL-- Push, Render,
Pre-cache, Lazy import.
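As a rough illustration of the lazy-import piece, an app shell can map each route to a dynamic import(), so a view's code is only fetched when the user navigates to it. The file names, tag names, and the loadRoute helper here are purely illustrative, not part of any Polymer API:

```js
// Hypothetical app-shell routing sketch: each route maps to a dynamic
// import(), so a view's code is only fetched (ideally from the Service
// Worker cache, where it was pre-cached) when the user navigates to it.
const routes = {
  '/explore': {tag: 'explore-page', load: () => import('./views/explore-page.js')},
  '/profile': {tag: 'profile-page', load: () => import('./views/profile-page.js')},
};

async function loadRoute(path) {
  const route = routes[path] || routes['/explore'];
  await route.load();  // lazy import of just this route's components
  const outlet = document.querySelector('main');
  outlet.innerHTML = '';
  outlet.appendChild(document.createElement(route.tag));
}

window.addEventListener('popstate', () => loadRoute(location.pathname));
loadRoute(location.pathname);
```

So PRPL gives us a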
really nice pattern to ensure we're giving
the user exactly what they need for a particular
route and no more. But to ensure we're delivering
the ultimate best user experience we can, we want
to measure iteratively as we develop. And so we recommend using
WebPageTest for this. And WebPageTest recently
introduced a new Easy mode that you can go to
that's preconfigured for testing on mobile
devices and on 3G networks. So make sure that you've
enabled Lighthouse on webpagetest.org/easy. And then just enter the
URL that you want to test. And click Start Test. And once testing
is finished, you can click this Lighthouse
Score button here. And then under
Performance, you can see the time-to-interactive number. And this is the number
that we want to optimize. So there is a lot
of different advice out there for what to target
for time-to-interactive. And what we like to
say on the Chrome team, and I think you've heard in a
few talks at this conference, we talk about
aiming for 5 seconds for time-to-interactive. We think this is generally
a really strong target for a good solid
user experience. We think, though, that the
absolute highest quality sites can do even better, down to
3.5 seconds to interactive. And this is the target that
we'd shoot for on the products that we're building on the
Developer Experience team, specifically so we can leave
as much headroom as possible for you, the developer,
to build more complex apps and still get a really, really
fast time-to-interactive. So in these settings,
a first byte from a good edge-caching
server, after SSL negotiation, will be around 2 seconds. So this leaves us with
about a second and a half to get the route's payload
downloaded, rendered, and ready for the user to interact with. And we found that
this translates to roughly 50 kilobytes
of code and data that you can send for the
initial critical section of your route. So now, Polymer 2.0 starts
at around 12 KG zips, leaving you with
roughly 40K of budget for the critical components
that you need for each route. And we found that the best way
to get these recommendations across and the best way for us
to internalize them as a team is to give them a name. And so we're calling
this one PRPL-50. So for building fast
apps on the modern web, you get a really
big head start when relying on web components
for your component model. Because you don't have to
download any extra code to provide that component model. It's already baked in
there with the browser. We also think that any
modern app-building framework should do everything possible
to help developers stay underneath this PRPL-50 rule
for any particular route in their application. So that's how we are
thinking about structuring for performance by minimizing
the amount of code that has to run for any particular view. The next step in building an
app will be to factor your UI and actually assemble
these different views into a full-fledged application. So again, web components
really, really help us here. So we'll want to leverage
reusable components wherever possible. Because the best line
of code is the one that you didn't have to write. So just like NPM is the
go-to source for JavaScript libraries, web components
is your go-to source for reusable web components. So for a lot of
our app UI, we can stand on the shoulders of
giants in the community and stop reinventing
the wheel, and just use web components that others
have already created for us. Now, we talk a lot, when we're
talking about web components, about leaf node
elements, in terms of the sorts of UI widgets,
like buttons and drop-downs and sliders, that the
user directly interacts with and are, sort of, the
lowest nodes on your tree. But we also think
that web components can be hugely valuable
for app-level structure and organization as well. Using the standard web component
model for app components in your app can have a
lot of different benefits. So we can achieve
a smaller payload and get to that PRPL-50 number
by using built-in browser features, rather than
having to download our own extra code on top. We can get strong
encapsulation for free, which is hugely useful when
scaling up to a large team, all working on the
same code base. We get great built-in
developer tool support for web components,
via Chrome's dev tools. And most importantly, we
get the full flexibility, in terms of being able to reuse
our structural components. So as long as you're using
custom elements and properties and events as your
component interface, and as long as you're using
Shadow DOM to encapsulate the rendering of the
component, how a component does its rendering is totally
just an implementation detail. So with a web component, you
can extend from whatever web component-based class
you like, without losing interoperability. How the component actually works
is an implementation detail. Its dependencies are an
implementation detail. So this inverts the
traditional model that we're used to when
building web applications. We can build, for example,
an entire application out of Polymer elements. But that's just one choice. Down the line, you could
switch some of your components over to using a much
simpler element base class, for example, than
what Polymer provides. And those can work side-by-side
in your application with Polymer elements,
one at a time. We can try out using
SkateJS in our application, without changing any
other part of our app. And someone is
bound, in the future, to make an even better
web component base class. And we can introduce
improvements to our app incrementally,
without throwing the whole thing away each time. So think about that. If you wanted to change an app
from one framework to another, there's no incremental path
to make a change like that. Yet, this is entirely
possible when we're using a standard
component model. So this is one of the very real
benefits of web components, even for app-level views. We can dramatically lower
switching costs for us without sacrificing our
ability to innovate and change incrementally over time. We don't lock ourselves
in at all when we're assembling our application. So now that we've
built our UI, we need to bring our application
to life by loading it with data and dealing with user
interactions that will change that data. So application state
management is, perhaps, an area of web development
where the platform has the least to say. And so we get a lot of questions
about how we do managing state on the Polymer Project. So two years ago, at our
first Polymer Summit, Kevin gave a talk called
"Thinking in Polymer" that put forth the concept
of the mediator pattern for how we think about
coordinating state changes between web components. So in short, a mediator
in the mediator pattern owns a scope of other
components and is responsible for propagating
data to those components, listening to events
from those components, mutating state and
propagating changes to that state back down to
components in its scope, but also up, via
events, to any owner of this particular component. And the mediator
pattern is really useful for creating reusable
standalone elements that can handle their own complex
state changes internally but also communicate those
state changes externally to anyone that
might be interested. And this standalone
state management ensures that reusable
web components are easily portable between
any application context and work just like any other DOM element that you might be used to.
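Here's a small, hypothetical sketch of that shape -- a mediator element that owns two child components, pushes state down as properties, listens to their events, and propagates changes both back down and up:

```js
// Hypothetical mediator element. The child tags, property names, and event
// names are illustrative; the point is the shape: properties down, events up.
class CheckoutMediator extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({mode: 'open'}).innerHTML = `
      <cart-summary></cart-summary>
      <payment-form></payment-form>`;
    const cart = this.shadowRoot.querySelector('cart-summary');
    const form = this.shadowRoot.querySelector('payment-form');
    // Listen to events from the components this mediator owns...
    form.addEventListener('coupon-applied', (e) => {
      // ...propagate the state change back down into its scope...
      cart.discount = e.detail.amount;
      // ...and also up, via an event, to whoever owns this mediator.
      this.dispatchEvent(new CustomEvent('total-changed', {
        detail: {total: cart.total},
        bubbles: true,
        composed: true,
      }));
    });
  }
}
customElements.define('checkout-mediator', CheckoutMediator);
```

So we often will build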
full applications by composing this
simple pattern together over and over and over and over. And eventually, you have
a top-level mediator, which is controlling the whole
nested tree of components. And you have an application. However, the community
has also shown that there can be
lots of benefits to having less granular and
even global mediators of state. So particularly, as
components become more app-specific and as
generic components come together to hold application
logic, having one mediator for
all application data can make your app much
easier to reason about. And it also opens
up a whole suite of nice developer workflows
that we'll get to in a minute. So there are lots of these
global mediator patterns. And those mediator
patterns, like Flux, formalize this concept
of one central place to put application state that's
passed down to components and one place to dispatch
events that cause application data to be mutated
and passed back down. So we can really just think
about this global mediator of state as a generalized
global mediator pattern, like I described before,
for your entire application. So now, there are
lots of choices out there to implement
this global mediator pattern that work just
fine with custom elements, too many to go into. But we purposefully made
Polymer very low-level and very flexible, precisely
so that you could have many different options for
how you want to manage state in your application. But a lot of times, developers
will say, just show me one way to manage state that works. And if you're that person,
we do think that Redux is a really good choice. And a lot of people have had
a lot of success with it. So the Redux library
is very simple, with very little magic and a
relatively small footprint. It follows a very
easy-to-understand mediator pattern. And as complexity of
your application grows, there's a large ecosystem
of add-ons to Redux that can help your application scale in complexity. So these usually come
in the form of ways to abstract and streamline
async flows in your application. It is also, fortunately,
very, very simple to integrate Redux
with web components. So let's go back to our
global mediator diagram and make it specific to Redux. So the Redux term for the global
mediator that manages state is called the Store. Elements then
subscribe to state that is passed down into an
element's properties via a subscribed callback. And in place of
events, the elements dispatch what are called
Actions to the Store. And we write functions
called Reducers that then return a new
state object with changes to the data that
have happened based on actions that have occurred.
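In code, that mapping is small. Here's a minimal sketch using Redux's real createStore, subscribe, and dispatch APIs, with a hypothetical reducer for the "bookmark a house" example from earlier:

```js
import {createStore} from 'redux';

// A Reducer: given the current state and an Action, return a new state object.
function app(state = {savedHouses: []}, action) {
  switch (action.type) {
    case 'BOOKMARK_HOUSE':
      return {...state, savedHouses: [...state.savedHouses, action.houseId]};
    default:
      return state;
  }
}

const store = createStore(app);                // the Store: the global mediator
store.subscribe(() => {
  console.log('new state:', store.getState()); // elements set this into their properties
});
store.dispatch({type: 'BOOKMARK_HOUSE', houseId: 42}); // an Action, dispatched by an element
```

So there's a bit of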
a trade-off involved, going from localized state
management to global state management. But one of the key benefits
is that it opens up a really nice set of
developer workflows, like the dev tools
that ship with Redux. Since actions that change
state are centralized, it's trivial to log every action
that happens in an application and to see the entirety of
application state all at once. So there are lots
of ways to connect custom elements to Redux. And it's fairly simple to do. But one approach
that we really like is to build your views
as generic elements that accept the properties
and fire changes based on user interactions,
just like any other reusable web component that you might
create, and then subclass that generic element
to create a more application-specific
version of the element that is connected to the Store-- by subscribing to the Store and setting properties into the element, and then listening for DOM events and dispatching Redux actions as a result.
So a key pro tip, also, to point out-- if you're looking
into a global state management technique, most of them don't
come out-of-the-box with a way to separate all your different
state management code. They lead you towards one big
blob of global state management logic. And this is in opposition
to our PRPL concept, only loading the code that you
need for a particular route. Now, this is totally possible
to achieve with Redux. But it definitely is something
that we want to pay attention to in our implementation. So I'll give a quick
example for code for how we'd make a
Redux-connected custom element. So let's say we're building
a browsing-style explore view in our application. And we've built our Explore
page component, which takes properties and events. And then we're going to make a
subclass of this Explore page that will be specific to
our application and hook into our global Redux store. So in the Constructor,
we can call Redux' subscribe method
and dereferent state out of the Store and then set it
into our elements property interface. And here, we are using Polymer
2.0's setProperties API, which provides a really efficient way to set a batch of multiple properties into an element. And next, we can
add event listeners for any custom DOM
events that will be fired from that element. And then we can call
Redux's dispatch method to notify the Store of actions that have taken place. And we'll do this by using functions that create action objects. So Redux calls these Action Creators. And we'll do that for any of the events that our component will emit that need to update global state.
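Putting those pieces together, a sketch of that connected subclass could look like this. The store, ExplorePage base class, and updateQuery action creator are hypothetical app modules, while subscribe, dispatch, and Polymer 2.0's setProperties are the real APIs described above:

```js
import {store} from './store.js';              // hypothetical: the app's global Redux Store
import {ExplorePage} from './explore-page.js'; // hypothetical: the generic, reusable view
import {updateQuery} from './actions.js';      // hypothetical Action Creator

class MyExplorePage extends ExplorePage {
  constructor() {
    super();
    // Subscribe to the Store and set state into the element's property interface.
    store.subscribe(() => {
      const state = store.getState();
      // Polymer 2.0's setProperties sets a batch of properties efficiently.
      this.setProperties({items: state.explore.items, query: state.explore.query});
    });
    // Listen for DOM events from the view and dispatch Actions in response.
    this.addEventListener('query-changed', (e) => {
      store.dispatch(updateQuery(e.detail.query));
    });
  }
}
customElements.define('my-explore-page', MyExplorePage);
```

And remember, this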
Explore page JS is lazy-loaded, only with the /explore route, for example. So rather than loading all
of the state management code with the app, we want to
load it and install it alongside only the
components that need it. So you can add a little
code to enhance Redux with the ability to
incrementally build up global logic in the Store. So that logic that manages
the part of the state that a component depends on,
can be lazily loaded along with the component and
then added into the store. Likewise, any non-trivial logic should also be separated out and loaded along with only the components that need it, to ensure that you're achieving optimal delivery.
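One possible sketch of that enhancement, built on Redux's real replaceReducer and combineReducers APIs -- the addReducer helper and module names are illustrative, not a Redux feature:

```js
// reducer-registry.js (hypothetical): lets lazily loaded route code register
// the slice of state logic it depends on, instead of bundling every reducer
// into the app shell up front.
import {combineReducers} from 'redux';
import {store} from './store.js'; // hypothetical: the app's global Store

const reducers = {};
export function addReducer(name, reducer) {
  reducers[name] = reducer;
  store.replaceReducer(combineReducers(reducers));
}
```

A lazily loaded view like explore-page.js would then call addReducer('explore', exploreReducer) when it's first imported, so the Explore state logic only ships with the /explore route.

So we're going to continue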
experimenting with patterns for state management and how we
think they can fit into a web component-based application. And I also want to give a
shout-out to the Polymer Redux library, which is a community
library that approaches binding Polymer components
to Redux in a much more declarative fashion. So like I said before,
there's a lot of innovation happening in this space. And Redux is really
just one choice. And virtually any of
these global patterns can work totally well with web
components really seamlessly. So finally, once development
of our app is complete and we're ready to
deliver it to our users, we need to host it and
serve it to clients. And although a lot can be
accomplished by statically serving our client
application, there are a few minimal
features that we feel are required to be
implemented on the server to really achieve this
optimal user experience. So we want to be able to serve our app shell for all valid routes, in order to enable client-side routing. We want to serve either
route-based dependencies using HTTP Push or route-based bundles
for non-Push capable browsers. We want to serve different
builds, optimized to target different user agents. And we want to serve
static content also, for search engine crawlers that
might not execute JavaScript. So we've been
doing a lot of work on a reference server that does
all of these things, called PRPL Server Node,
which is designed to work hand-in-hand with build
output from the Polymer CLI. So a lot of the things you heard
Sam talk about a little earlier today, we're trying to
make much easier to achieve with PRPL Server Node. So this is a node-based
server that's set up for client-side routing. And it also has
built-in presets, in order to serve
optimal code, depending on the browser capabilities
of a particular client. So it has presets that know
which browsers have ES6 support and can take advantage of
custom element subclassing and which need
ES5-compiled code. It can also differentiate
between those that take advantage
of HTTP/2 Push support to serve granular components
for better efficiency and caching and those that
will need bundled code. And it will also serve
the optimal set of code for a client, given what the
client can actually execute. So last, it will also
leverage a new project that we're working on called
Rendertron, for those bots and crawlers that don't execute
JavaScript and social network crawlers, like
Facebook and Twitter, in order to serve a
fully-rendered HTML for optimal SEO and social
snippet generation. So stay tuned for more on that. And you can check out more about
the PRPL Server Node beta here on GitHub. So that's it. Those are the key
aspects that we see in taking advantage
of the modern web platform to ship fast and
high-quality apps and also reduce your pain
during web development. So you can really start to see how the interplay of these different web platform features-- web components and Service Worker and HTTP/2-- achieves a result that's greater than just the sum of the parts. So we're going to keep
exploring, on the Developer Experience team, ways to
package up these concepts and make them easier
for you and your teams to take direct advantage of. But in the meantime,
I encourage you to take a look at some of the
latest with web components and start applying some of
these concepts and ideas to your current
web applications. So thank you so much. And stay tuned for more
coming from Polymer. [MUSIC PLAYING]