[MUSIC PLAYING] MATHIAS BYNENS: Hi, everyone,
and welcome to this session, “Building the Future Web
with Modern JavaScript.” This is Sathya. My name is Mathias. We work on the V8
team at Google. And V8 is the JavaScript engine
that’s used in Google Chrome and in other embedders, such
as Node.js and Electron, for example. SATHYA GUNASEKARAN:
JavaScript started out as the scripting
language of the web. And to this day, it’s still a
crucial piece of technology. But today, JavaScript’s reach
extends far beyond the web to the entire
surrounding platform. MATHIAS BYNENS: So if
we take a step back from just V8 and Chrome, we
can see that JavaScript allows you to build engaging
web experiences that work in a wide variety of browsers. And with Node.js,
JavaScript can be used to build powerful
command-line tools, servers, and other applications. React Native enables building
cross-platform mobile apps also using JavaScript. With Electron, JavaScript
and other web technologies can be used to build
cross-platform desktop apps. JavaScript can even run on
low-cost IoT devices nowadays. SATHYA GUNASEKARAN:
JavaScript is everywhere. As a developer,
learning JavaScript, or improving your
JavaScript skills, is an excellent time investment. MATHIAS BYNENS:
At Google, we want to enable this enormous
JavaScript ecosystem, and we do so in multiple ways. First, there’s TC39. This is the committee
that governs the evolution of
ECMAScript, which is the standard behind
the JavaScript language. We actively participate in
these standardization efforts together with other browser
vendors and industry stakeholders, and we bring
feedback from ourselves, as implementers, and from
you, the developer community. SATHYA GUNASEKARAN: On
top of our participation in the standards process,
we constantly improve our own JavaScript engine, V8. We boost its
runtime performance, reduce its memory usage, and
implement new language features as they are being standardized. MATHIAS BYNENS: The
V8 project’s mission is to lead real-world
performance for modern JavaScript
and to enable developers to build a faster future web. Now, that’s quite a mouthful —
that’s a lot of words. But don’t worry, because
in this presentation, we’re going to zoom in on
some aspects of this mission. First, let’s discuss
what we mean when we say “real-world performance.” We’ve stepped away from
synthetic benchmarks that we used in the past. Instead, we now focus on
optimizing real websites and JavaScript applications. Of course, we need a way
to measure our efforts in this area, and luckily,
some modern benchmarks are good approximations
of real-world performance. We consider the
Speedometer 2 benchmark to be a good example of that. It is a good proxy for
real-world performance, because it uses common
frameworks, libraries, and build tooling
patterns that are being used on
production websites that we see in the wild. And over a time span of
just four Chrome releases, we achieved a 21% overall
improvement on Speedometer 2. And one key highlight here
is that we more than doubled the runtime performance of the React framework. That’s a pretty big deal. And this is not just
optimization in isolation. Our data shows
that optimizations to apps in Speedometer
2 translate to performance improvements
in real-world apps. Now, Speedometer focuses
on web apps quite heavily. But if you think about it, a
lot of web development tooling is built on top of Node.js. And performance
matters there as well. So we decided to optimize
for popular real-world tools, such as TypeScript, Babel, and
Prettier, as well as the output that they generate by using
the Web Tooling Benchmark. And we’ve had some nice
improvements so far. New features are being added
to the JavaScript language constantly. We’re working on making them
available in V8 and Chrome as soon as possible. Modern JavaScript features
have good baseline performance, and we’re continuously working
on making them even faster. So as a developer,
please don’t hesitate to use these modern
JavaScript features. They’re often more performant
than handwritten alternatives. So this means that
you can just focus on writing idiomatic code
that makes sense for you. And we’ll worry
about making it fast. SATHYA GUNASEKARAN: On top
of runtime performance, we’re constantly focusing on
reducing V8’s memory footprint and improving developer tooling
for debugging memory leaks. This is especially important
for low-power devices, such as mobile phones
or even IoT devices. Recently, by changing
V8, we made it easier to find detached DOM trees or
leaks in event listeners using the Chrome DevTools. Better memory tooling ties
directly into our mission, as it helps developers
build better apps. Well, we talked about
real-world performance. We talked about tooling. I think we skipped the
middle part of our mission. Yes, it’s modern JavaScript. For the rest of this
talk, we’re going to be taking a look at some
of the cutting-edge JavaScript language features that
we’ve been working on, both on the standardization
side and on the implementation side in V8. MATHIAS BYNENS: Let’s start
with ECMAScript modules. Modules are a major new
feature, and actually a collection of
features in JavaScript. You may have used a
userland JavaScript module system in the past. Maybe you used CommonJS in Node,
or maybe you used AMD, or maybe something else. But one thing all these
module systems have in common is that they allow you to
import and export values. And JavaScript now has
standardized syntax for exactly that. Within a module, you can
use the export keyword to export just about anything. You can export a const, or a
function, or any kind of value. All you have to do is prefix
the variable statement or the declaration with
the export keyword. And that’s it — you’re all set. Now, you can use the
import keyword to import the module from another module. So here we’re importing the
repeat and shout functionality from the lib module, and we’re
using it in our main module.
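(A minimal sketch of what those two modules might look like; the function bodies here are assumptions, not the exact slide code:)

```js
// lib.mjs
export const repeat = (string) => `${string} ${string}`;
export function shout(string) {
  return `${string.toUpperCase()}!`;
}
```

```js
// main.mjs
import { repeat, shout } from './lib.mjs';
repeat('hello');
// → 'hello hello'
shout('Modules in action');
// → 'MODULES IN ACTION!'
```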
Modules are a little bit different from regular scripts. For example, modules have strict
mode enabled by default. Also, this new static import
and export syntax is only available in modules. It doesn’t work in
regular scripts. And because of
these differences, the same JavaScript code
might behave differently when treated as a module
versus a regular script. As such, the JavaScript
runtime needs to know which
scripts are modules. On the web, you can
tell the browser that a script is a
module by setting the type attribute to module. Browsers that understand type=module also understand the nomodule attribute, and they ignore scripts that have it. This means that you can serve
a module tree to module-supporting browsers while still providing a fallback for other browsers.
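(The pattern looks like this:)

```html
<!-- Module-supporting browsers load this and ignore the nomodule script. -->
<script type="module" src="main.mjs"></script>
<!-- Legacy browsers load only this fallback. -->
<script nomodule src="fallback.js"></script>
```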
And the ability to make this distinction is pretty amazing by itself,
if only for performance. Just think about it: only modern browsers
support modules. That means if a browser
understands your module code, it also supports
features that were around before modules, such as
arrow functions, for example. So you don’t have to
transpile those features in your module bundle anymore. You can serve smaller
and largely untranspiled bundles using modules
to modern browsers, and only legacy browsers need
that nomodule payload. Now, you can optimize delivery
of your modules even further by using <link
rel="modulepreload">. This way, browsers
can preload and even preparse and precompile
modules and their dependencies.
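(A sketch, assuming an entry point named main.mjs with a dependency lib.mjs:)

```html
<!-- Hint the browser to fetch, parse, and compile these modules early. -->
<link rel="modulepreload" href="lib.mjs">
<link rel="modulepreload" href="main.mjs">
<script type="module" src="main.mjs"></script>
```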
Now, you may have noticed that we’re using the mjs file extension for modules. On the web, the file
extension doesn’t really matter as long as the file is
served with a JavaScript MIME type, which is text/javascript. The browser already
knows that it’s a module because of
the type attribute that you put on
the script element. But still, we
recommend using mjs for modules, for consistency with Node.js, where mjs is actually required. Node has experimental module
support, but only for files with the mjs extension. The Node.js team
is still working out some issues around
interoperability here, so expect more
updates on this soon. We recommend using the js extension for regular scripts and the mjs extension for modules. Node.js needs mjs to make
modules work, and on the web, the file extension
doesn’t matter anyway. So you might as well
be consistent with Node and use mjs. That way, you can potentially
run the exact same modules, the exact same files, both
on the web and in Node.js. So far, we’ve only
used static import. With static import,
your entire module graph needs to be downloaded
and executed before your main code can run. Sometimes, you don’t
want to load a module up front, but rather on demand,
only when you really need it — when the user clicks a link
or a button, for example. This improves the initial
load-time performance. Well, we recently added
support for dynamic import(), which also works in regular
scripts, unlike static import. So let’s say you have some
navigation links in an HTML document. And when the user clicks
or taps on them, instead of navigating to another page,
you want to download a module and run some code from it,
which instantiates the module and then updates the page. We can do that as follows: we add click event listeners to the links that cancel the navigation and then dynamically import the corresponding module, as specified through the data-entry-module attribute. Once the module is loaded, we can use whatever it exports. In this case, we call the loadPageInto export, which is a function that updates the page.
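(A rough sketch of that pattern; the selectors and error handling are assumptions:)

```js
const links = document.querySelectorAll('nav > a');
for (const link of links) {
  link.addEventListener('click', async (event) => {
    // Cancel the regular navigation…
    event.preventDefault();
    try {
      // …and load the module named in data-entry-module on demand.
      const module = await import(link.dataset.entryModule);
      // Use whatever the module exports.
      module.loadPageInto(document.querySelector('main'));
    } catch (error) {
      console.error(error);
    }
  });
}
```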
Another new module-related feature is import.meta, which
gives you metadata about the current module. The exact metadata
you get is not specified as part
of the standard. It depends on the
host environment. This means that
in a browser, you might get different metadata
than in Node.js, for example. Here’s an example of
import.meta on the web. By default, images are loaded relative to the current URL of the HTML document. But import.meta.url makes it possible to load an image relative to the current module instead. It gives you much more freedom.
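(A sketch of that technique; the helper name and image path are assumptions:)

```js
function loadThumbnail(relativePath) {
  // Resolve the path against the module’s own URL,
  // not against the containing HTML document’s URL.
  const url = new URL(relativePath, import.meta.url);
  const image = new Image();
  image.src = url;
  return image;
}

const thumbnail = loadThumbnail('../img/thumbnail.png');
document.body.appendChild(thumbnail);
```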
With modules, it becomes possible to develop websites without using bundlers,
such as Rollup or Webpack. However, the loading performance of bundled applications is still better than that of unbundled ones. And one reason for this is that
the import and export syntax is statically
analyzable, which means it can help these bundler
tools to optimize your code by eliminating unused exports. Static import and export
are more than just syntax. They are a critical
tooling feature. Of course, we are
working to improve the performance of native
modules out of the box, but please continue
to use bundlers before deploying to production. It’s very similar to minifying
your code before deploying. That’s always going to result
in a performance benefit, because you end up
shipping less code. And bundling has the same
effect, so keep bundling. All modern browsers
already support modules as of this morning
when Firefox 60 was released. As we discussed, Node.js has
an experimental implementation behind a flag. SATHYA GUNASEKARAN:
Woah, that was a lot of cool features
involving modules. Now, let’s take a look at
some of the smaller, more incremental features
coming to JavaScript. Let’s start off
with a little quiz. Quick, what’s this number? Is this a billion? 10 billion? What about this number? What’s the order of
magnitude for this number? I’m sure you can answer
this given enough time, but large numeric literals are
difficult for the human eye to parse quickly,
especially when there are lots of
repeating digits, like in the first example. That’s why a new
JavaScript feature enables underscores as
separators in numeric literals. So instead of writing
these like this, you can now write
them like this, grouping the digits per thousand. Now it’s easier to tell
that the first number is in fact a trillion,
and the second number is in the order of a billion. This small feature helps
improve readability. And it’s not just for
numeric literals in base 10. For binary numeric literals,
you may want to group the bits by octet or by nibble,
depending on the use case. For hexadecimal
numeric literals, you may want to group
the digits per byte.
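(For example; the exact values here are hypothetical stand-ins for the numbers on the slides:)

```js
// Hard to parse at a glance:
1000000000000;
1019436871.42;

// With separators, grouped per thousand:
1_000_000_000_000; // a trillion
1_019_436_871.42;  // roughly a billion

// Binary, grouped per nibble:
0b0101_0110;

// Hexadecimal, grouped per byte:
0x40_76_38_6A;
```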
This feature is so cutting-edge that it’s not shipping anywhere yet. [LAUGHTER] It’s still being
standardized, and there are some open spec issues
that need to be resolved. We have an implementation
behind a flag in Chrome and V8. And as soon as
the spec is ready, we will update our
implementation and ship it. MATHIAS BYNENS:
Another new feature involving numbers
is BigInt, which brings arbitrary-precision
integer support to JavaScript. Let’s take a look at
what that means exactly. Numbers in JavaScript are
represented as double-precision floats. This means that they
have limited precision. The MAX_SAFE_INTEGER
constant gives the greatest possible integer value that
can safely be incremented, and you can see that here. If we increment it once, we
still get the correct result. But if we then increment
it a second time, we get the same
number as before. That’s not the right result
mathematically speaking. And this happens because the result is no longer exactly representable as a JavaScript number.
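(You can try this in any JavaScript console:)

```js
const max = Number.MAX_SAFE_INTEGER;
// → 9007199254740991

max + 1;
// → 9007199254740992 ✔

max + 2;
// → 9007199254740992 ✘ (same result as before)
```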
BigInts are a new numeric primitive in JavaScript that can represent integers
with arbitrary precision. With BigInts, you can
safely store and operate on large integers even
beyond the safe integer limit for regular numbers. To create a BigInt, just add the n
suffix to any integer literal. The global BigInt
function can be used to convert a number
into a BigInt dynamically. And here, we’re combining
those two techniques to calculate the sum. And this time, we do
get the correct result. Here’s another example. We’re multiplying
two regular numbers. And if we take a look at
the least significant digits here, 9 and 3, we know that the
result of the multiplication should end in 7,
because 9 times 3 is 27. However, the result ends
in a bunch of zeros. That can’t be right. So let’s try this again
with BigInts instead. And indeed, this time, we do get the correct result.
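(A sketch of both examples; the factors in the multiplication are illustrative:)

```js
// The n suffix creates a BigInt literal; BigInt() converts a number:
BigInt(Number.MAX_SAFE_INTEGER) + 2n;
// → 9007199254740993n ✔

// Regular numbers silently lose precision for big products…
1234567890123456789 * 123;
// → 151851850485185200000 ✘ (ends in zeros)

// …BigInts don’t:
1234567890123456789n * 123n;
// → 151851850485185185047n ✔ (ends in 7, since 9 × 3 = 27)
```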
BigInts make it possible to accurately perform integer arithmetic. And most importantly, it’s now
possible to safely represent a “googol” in JavaScript. This is going to help me in
my day-to-day programming for sure. On a more serious note,
why would you need BigInts? Arithmetic on large
numbers is one use case, but what else is there? Well, it turns out that
without BigInts, it’s very easy to run into bugs
that are very hard to debug. Tweet IDs, for example,
are 64-bit integers, meaning they cannot safely be
represented as a JavaScript number. So if you tried, you would
run into very sneaky bugs where some IDs would suddenly
turn into other numbers. And until now, developers had
to represent such IDs as strings instead. But with BigInt,
you can represent them using a numerical type
when it makes sense to do so. Here’s another example
of a real-world bug that occurs in Node.js. Sometimes different
files or folders appear to have the same
inode, which is supposed to be a unique number per file. The fundamental
cause of this bug is the imprecision of
JavaScript numbers. BigInts don’t have this problem. Another use case is representing
high-precision timestamps accurately. Using regular
numbers, this is not possible without
losing accuracy. But with BigInts, the
precision is guaranteed. That’s why Node.js
is currently looking into exposing BigInts
instead of numbers for some of its built-in
APIs where it makes sense. Historically, developers
have been working around the lack of precision by using
BigNum libraries implemented in JavaScript. And when we benchmarked our
native BigInt implementation, we found that it
consistently outperforms userland alternatives. So once BigInts become
widely available, applications will be able to
drop these runtime dependencies in favor of native BigInts. This will help reduce load
time, parse time, compile time. And on top of all
that, it will offer significant runtime
performance improvements. I can’t wait for that to happen. But right now it’s still
early days for BigInt. Chrome 67 ships
with BigInt support. This means that Chromium-based
browsers such as Opera support it as well. But other browsers are
still actively working on their implementations. You can expect wider
BigInt support soon. SATHYA GUNASEKARAN: You may
be familiar with for-of loops and the iteration protocol. And more recently,
JavaScript gained support for async functions. A new set of features
combines these together to form async iterators
and generators. Let’s say you want to find out
the response size of a request. Currently, async
functions make it easy to work with the Fetch
API and the response stream. But there’s still a bit
of boilerplate required to iterate each chunk,
await each chunk, and then manually check if
you’re done with the stream. With async iterators,
the code is much cleaner and more ergonomic. There’s no manual
awaiting of each chunk or checking to see if
the stream is done. The for-await-of loop
just does all that.
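(A sketch of both versions; the function names are assumptions, and, per the caveat that follows, the for-await-of version assumes response.body is async iterable:)

```js
// With async functions and a manual read loop:
async function getResponseSize(url) {
  const response = await fetch(url);
  const reader = response.body.getReader();
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) return total;
    total += value.length;
  }
}

// With async iteration, the boilerplate disappears:
async function getResponseSizeWithForAwait(url) {
  const response = await fetch(url);
  let total = 0;
  for await (const chunk of response.body) {
    total += chunk.length;
  }
  return total;
}
```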
Streams aren’t async iterable quite yet, but there’s ongoing
standardization work being done to make that happen. Similarly, you can
expect more and more APIs to add async iteration
support in the future. And this is already
happening in Node. So Node.js has built-in
streaming APIs such as ReadableStream that can be used with the
old-school callback style. In this example, we’re
collecting all data chunks until the stream
ends, at which point we print the
result. To do so, we need two different callbacks. With Node.js version 10 now,
you can use async iterators out of the box. So instead of
using callbacks, we can now write a single
for-await-of loop to iterate over all the chunks.
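(A sketch, using a file read stream as the example source; the talk’s exact sample may differ:)

```js
const fs = require('fs');

// Old-school callback style: two separate callbacks.
function readFileOldSchool(path, callback) {
  const stream = fs.createReadStream(path);
  const chunks = [];
  stream.on('data', (chunk) => chunks.push(chunk));
  stream.on('end', () => callback(Buffer.concat(chunks)));
}

// Node.js 10+: readable streams are async iterable.
async function readFile(path) {
  const chunks = [];
  for await (const chunk of fs.createReadStream(path)) {
    chunks.push(chunk);
  }
  return Buffer.concat(chunks);
}
```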
Browser support for async iterators and generators, including the for-await-of
syntax, is pretty good already. Chrome, Firefox, Opera,
and Safari Tech Preview already have support,
as does Node.js. For other browsers,
Babel is your friend. It supports transpiling
the for-await-of syntax. MATHIAS BYNENS: Regular
expressions also recently got some much-needed love with
multiple new features that help improve their readability
and their maintainability. And one such feature
is called dotAll mode. Let’s say you have
some input text, and you want to apply a
regular expression to it. In this case, we can see that
the text contains the phrase "hello world" right there. So let’s write a
regular expression that checks whether the
input contains "hello" followed by any character,
followed by "world". In regular expressions,
the dot can be used to match arbitrary characters. So this regular expression
should match, right? Well, it turns out
that it doesn’t match. The problem is that
the dot, in fact, does not match all characters. It only matches characters that
JavaScript does not consider to be line terminators. And there happens to be a line
terminator in the input string, right there — that’s a new line. Sometimes you really do want to
match any character, including new lines. This problem is so common
that developers have started to use workarounds like this. So here we’re matching
any whitespace character or any non-whitespace
character, which effectively matches any character. Another workaround is to use a
negated empty character class. This matches any
character that is not nothing, which effectively
matches any character. So these two techniques
work, but they’re not very intuitive or readable. And stuff like this is
why regular expressions get a bad reputation for
being hard to decipher. But it doesn’t have
to be this way. A new regular
expression flag makes the dot truly match any
character, including line terminators. You can use the `s` flag to enable dotAll mode.
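(In code:)

```js
/hello.world/.test('hello\nworld');
// → false: the dot skips line terminators

/hello[\s\S]world/.test('hello\nworld');
// → true, but unintuitive

/hello[^]world/.test('hello\nworld');
// → true, but unintuitive

/hello.world/s.test('hello\nworld');
// → true: dotAll mode
```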
This is one of those flags that you’ll probably want to enable for every
single regular expression you write from now on, just like
the `u` flag for Unicode mode. These two flags
make the behavior of JavaScript regular
expressions less confusing and more sensible. And here’s an easy trick
to remember these flags. Instead of su, you
can just write us. This makes your code
more America-friendly and your regular expression
behavior less surprising. dotAll mode is currently
supported in Chrome, Opera, Safari, and Node.js. But Babel can transpile
these features, so you can use this
while still supporting other browsers as well. SATHYA GUNASEKARAN:
Another new feature improves capture groups
in regular expressions. Let’s take a look at an example. This regular expression
matches four digits followed by a hyphen, followed
by two digits, followed by a hyphen, followed
by another two digits. Each set of digits is
wrapped in parentheses to create a capture group. Each capture group in
a regular expression automatically gets
an index and can be referenced using that index. There are three capture
groups in this example. So the first one maps to index 1 in the match object. The second capture
group maps to index 2. And the third capture
group maps to index 3. You get the idea. Capture groups are useful,
but referencing them by index is not very readable
or maintainable. Imagine adding a new capture group to the start of the pattern: all the existing indices shift, and all the code relying on the ordering has to be updated. A new feature makes
it possible to assign a name to each capture group. So this makes the
regular expression itself more readable,
because it clarifies what each group stands for. Instead of using
indexing, we can now refer to each capture
group by name. So this improves the
readability and maintainability of any code that uses the
regular expression as well. So now we can refer
to the first capture group with the identifier year, the second capture group with the identifier month, and the third capture group with the identifier day.
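(In code, using a date pattern like the one described; the input string is a made-up example:)

```js
// Numbered capture groups:
const re = /(\d{4})-(\d{2})-(\d{2})/;
const match = re.exec('2018-05-09');
match[1]; // → '2018'
match[2]; // → '05'
match[3]; // → '09'

// Named capture groups:
const namedRe = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/;
const namedMatch = namedRe.exec('2018-05-09');
namedMatch.groups.year;  // → '2018'
namedMatch.groups.month; // → '05'
namedMatch.groups.day;   // → '09'
```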
Named capture groups are currently supported in Chrome,
Opera, Safari, and Node.js. For wider browser
support, we recommend using Babel to transpile
and polyfill this feature. MATHIAS BYNENS: Another
regular expression feature involves Unicode properties. The Unicode standard
assigns various properties and property values to every
symbol you can think of. Unicode property escapes
make it possible to access these Unicode character
properties natively in regular expressions. For example, the pattern
\p{Script_Extensions=Greek} matches every symbol that is used in the Greek script. Previously, we had to resort
to large runtime dependencies or build scripts to just
mimic this functionality. But those solutions lead to
performance and maintainability issues. But with built-in support
for Unicode property escapes, creating regular expressions
based on Unicode properties couldn’t be easier. And this is not just
useful for complex Unicode-specific properties;
it also helps with some more common
tasks, such as matching ASCII characters, for example. You could just spell out the
ASCII range from code points 0x00 to 0x7F yourself. But why would you
bother when you can just refer to it by name like this. It’s easier, and your
regular expression instantly becomes more readable
and more maintainable. Another example of
Unicode property escapes is matching all math
symbols according to the Unicode standard. This would match the not-equal-to sign (≠), for example. And here’s an example that
combines multiple property escapes. This pattern matches any letter,
including non-ASCII letters and whitespace characters, according to the Unicode standard. And indeed, if we apply it to a piece of text that is written in Greek, it matches.
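(A few sketches; the Greek sample string is a made-up example:)

```js
// Every symbol used in the Greek script:
/\p{Script_Extensions=Greek}/u.test('π');
// → true

// ASCII, by name instead of spelling out the range:
/^\p{ASCII}+$/u.test('abc123');
// → true

// Math symbols, per the Unicode standard:
/^\p{Math}$/u.test('≠');
// → true

// Any letters (ASCII or not) and whitespace:
/^[\p{Letter}\p{White_Space}]+$/u.test('Γειά σου κόσμε');
// → true
```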
This regular expression feature is not just useful in JavaScript; it
can be used in HTML as well. Input and text area elements
have a pattern attribute that is used for
client-side validation. Unicode property escapes
work there as well. So when this HTML
document loads, the input field gets a
light green background, because the pattern
matches the input value. As soon as you enter any
character that is not a letter or a space, the input state switches to invalid and the background becomes pink.
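(A sketch of such a document; the styling hooks are assumptions:)

```html
<style>
  input:valid { background: lightgreen; }
  input:invalid { background: pink; }
</style>
<!-- The pattern attribute is compiled with Unicode mode enabled,
     so property escapes work here, too. -->
<input pattern="[\p{Letter}\p{White_Space}]+" value="Γειά σου κόσμε">
```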
Unicode property escapes are already implemented in Chrome,
Opera, Safari, and Node.js. But Babel can
transpile the feature so you can use it while still
supporting all browsers. SATHYA GUNASEKARAN:
Another new feature involves looking around
in a regular expression. Lookarounds are assertions
that match a string without consuming anything. JavaScript supports lookaheads in regular expressions. This is not new. Positive lookahead ensures a pattern is followed by another pattern. So in this example, we’re
matching ASCII numbers, but only if they’re followed by
a space, followed by the word dollars. So for the input
string "42 dollars" the match contains
the numeric part "42". And similarly,
negative lookahead ensures a pattern is not
followed by another pattern. In this example, we’re
matching ASCII numbers, but only if they’re
followed by something other than a space, followed
by the word dollars. So for the input
string "42 rupees" the match only contains
the numeric part "42". Lookaheads are not
new in JavaScript. What’s new is a very
similar feature. Regular expression patterns now
support lookbehinds as well. Positive lookbehind
ensures a pattern is preceded by another pattern. In this example, we’re
matching ASCII numbers, but only if they are
preceded by a dollar sign. So for the input
string $42, the match contains the numeric part 42. And similarly,
negative lookbehind ensures a pattern is not preceded by another pattern. In this example, we’re
matching ASCII numbers, but only if they’re preceded by
something other than a dollar sign. So for the input
string "₹42" the match contains
the numeric part "42". Lookbehinds are just as
useful as lookaheads.
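(All four lookaround flavors in code:)

```js
// Positive lookahead: digits followed by ' dollars'.
/\d+(?= dollars)/.exec('42 dollars')[0];
// → '42'

// Negative lookahead: digits not followed by ' dollars'.
/\d+(?! dollars)/.exec('42 rupees')[0];
// → '42'

// Positive lookbehind: digits preceded by '$'.
/(?<=\$)\d+/.exec('$42')[0];
// → '42'

// Negative lookbehind: digits not preceded by '$'.
/(?<!\$)\d+/.exec('₹42')[0];
// → '42'
```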
Until recently, there was no built-in way of using lookbehinds
in JavaScript. A workaround I’ve
seen developers use in the absence of lookbehind
support is to reverse a string and then apply a
lookahead instead. This indeed mimics the
lookbehind behavior, kind of, but it’s overly complicated
and confusing. Thanks to native
support for lookbehind, we no longer need
ugly hacks like that. And lookbehinds are
already supported in Chrome, Opera, and Node.js. MATHIAS BYNENS:
One feature that’s related to regular
expressions is String#matchAll. It’s common to repeatedly apply
the same regular expression on a string to get
all the matches. And this is already possible
by writing your own loop and keeping track of the
match objects yourself. But it’s a little bit
annoying to do so. So String.prototype.matchAll
simplifies this common scenario and allows you to more easily
iterate over all the matches. The idea is that you just
write a simple for-of loop, and String#matchAll takes
care of the rest for you.
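(A sketch; the pattern and input are made-up examples:)

```js
const input = 'Magic hex numbers: DEADBEEF CAFE';
const regex = /\b\p{ASCII_Hex_Digit}+\b/gu;
for (const match of input.matchAll(regex)) {
  console.log(`${match[0]} found at index ${match.index}`);
}
// → DEADBEEF found at index 19
// → CAFE found at index 28
```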
String#matchAll is still being standardized, but we have an implementation
behind a flag in V8 and Chrome. But we won’t ship it until some
open spec issues are resolved. So expect this to come to a
browser near you very soon. SATHYA GUNASEKARAN:
Another new proposal avoids the creation of an unused
binding for try-catch blocks. It’s common to use
try-catch blocks to catch exceptions and then
handle them accordingly. But sometimes you
don’t really care about the exception
object that is available in the catch block. Still, previously, you
had to bind the exception to a variable, otherwise
you’d get a syntax error. Luckily, this binding
is now optional. In cases where you don’t
need the exception object, you can now omit the parentheses
and the identifier following "catch". The code now looks a little simpler.
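(Before and after; the function names are placeholders:)

```js
// Before: the binding was required, even when unused.
try {
  doSomethingThatMightThrow();
} catch (unused) {
  handleException();
}

// Now: the binding can simply be omitted.
try {
  doSomethingThatMightThrow();
} catch {
  handleException();
}
```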
This small feature makes patterns that don’t use a catch
variable play nicely with linters that complain when
there’s an unused variable. The catch binding is already
optional in Chrome, Firefox, Opera, Safari, and Node. For wider support, use Babel
to transpile this feature. MATHIAS BYNENS:
Another new feature has to do with trimming. You may already know
about the `trim` method on strings, which
trims whitespace on both sides of the string. Now, there’s also the
`trimStart` and `trimEnd` methods. These methods allow
you to trim whitespace from only the start or
the end of the string — a very small but very useful
piece of new functionality.
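(For example:)

```js
const string = '  hello world  ';
string.trim();      // → 'hello world'
string.trimStart(); // → 'hello world  '
string.trimEnd();   // → '  hello world'
```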
One-sided string trimming already works in Chrome, Firefox,
Opera, Safari, and Node. But for wider support,
the functionality can trivially be polyfilled,
including by Babel. SATHYA GUNASEKARAN: Another
new JavaScript feature is the `finally` method on
the Promise prototype. Let’s say we want to implement a
`fetchAndDisplay` function that takes a URL, loads
some data from it, and then displays the
result in a DOM element. Let’s walk through the
implementation together. First, we display a
loading spinner in case the request we’re about
to kick off takes a while. Then we start the request. If the request fails, we end up
in the promise’s `catch` handler, we display an error message,
and hide the spinner. If not, we continue. We get the HTTP response
body in text form. We then show the text
in the DOM element that was passed
into the function and then hide the spinner. So if the request succeeds,
we display the data. If something goes wrong,
we display an error message instead. Either way we need to
call `hideLoadingSpinner`. Until now, we had no choice but
to duplicate this call in both the `then` and the `catch` handlers. But in modern JavaScript,
we can do better thanks to Promise.prototype.finally. Not only does this
reduce code duplication, it also separates the
success/error handling from the cleanup
phase more clearly. Pretty neat.
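(A sketch of the final version; the spinner helpers described above are assumed to exist:)

```js
function fetchAndDisplay({ url, element }) {
  showLoadingSpinner();
  fetch(url)
    .then((response) => response.text())
    .then((text) => {
      element.textContent = text;
    })
    .catch((error) => {
      element.textContent = error.message;
    })
    .finally(() => {
      // Runs on success and on failure: no duplicated call.
      hideLoadingSpinner();
    });
}
```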
In this particular example, when you have full control
over the code, you can also use async-await, which
is my personal preference. You could wrap that up in
a try-catch-finally block.
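(The async/await flavor of the same sketch:)

```js
async function fetchAndDisplay({ url, element }) {
  showLoadingSpinner();
  try {
    const response = await fetch(url);
    element.textContent = await response.text();
  } catch (error) {
    element.textContent = error.message;
  } finally {
    hideLoadingSpinner();
  }
}
```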
Promise.prototype.finally is still useful if you don’t control the
code surrounding the promise, or if you only have
the promise object. Promise#finally is currently
supported in Chrome, Firefox, Opera, Safari, and Node. And this functionality
can be polyfilled. MATHIAS BYNENS: The next
feature I want to talk about is called object
rest and spread. And I hear you all thinking,
rest and spread, that’s not new in JavaScript, right? Arrays have had this for years. And you’re absolutely right. Here’s how rest elements
work with arrays. We’re using array
destructuring here, and we have an array
of prime numbers. And we want to extract the
first and second elements into their own variables. We then store all the remaining
array elements into an array that we call `rest`. And here’s array
spread in action. I know the syntax
looks exactly the same, but they are two different
features technically. If it’s on the left-hand
side of an equal sign, then it’s rest elements. If not, it’s spread. Anyway, here we take the
parts from the first example, and we put them back together into a copy of the original array.
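(Both array features in code:)

```js
const primes = [2, 3, 5, 7, 11];

// Rest elements, on the left-hand side of the =:
const [first, second, ...rest] = primes;
// first → 2, second → 3, rest → [5, 7, 11]

// Spread elements put the parts back together:
const copy = [first, second, ...rest];
// → [2, 3, 5, 7, 11]
```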
These two features allow you to rest or spread array elements. But JavaScript now offers
rest and spread functionality for object properties as well. This example shows how rest
properties come in handy when using object destructuring. We have a `person` object, and
we extract the `firstName` and `lastName` properties
into their own variables. We then store the remaining
properties in an object that we call `rest`. Spreading objects is now possible in object literals, too. So in this example, we use the
destructured parts from before, and we put them back together
into an equivalent object that is effectively a copy of
the original person object that we started out with.
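(In code, with a hypothetical person object:)

```js
const person = {
  firstName: 'Ada',
  lastName: 'Lovelace',
  born: 1815,
};

// Rest properties:
const { firstName, lastName, ...rest } = person;
// firstName → 'Ada', lastName → 'Lovelace'
// rest → { born: 1815 }

// Spread properties put them back together:
const personCopy = { firstName, lastName, ...rest };
// → { firstName: 'Ada', lastName: 'Lovelace', born: 1815 }
```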
Spread properties offer a more elegant alternative to `Object.assign`
in many situations. Shallow-cloning an
object is one example. This is already possible
using `Object.assign`, but it’s a little
awkward because you have to pass in an empty new object. With spread properties,
you can just perform the spread as part
of the object literal itself. It’s much more
elegant, and it even offers greater optimization
potential in JavaScript engines compared to `Object.assign`. Merging objects is
another example. Once again, `Object.assign`
can already do this. But if you forget to pass
in a new empty object as the first argument,
then you end up mutating the `defaultSettings`
object, which is probably not what you want. It’s just a little awkward.
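(Both use cases side by side:)

```js
// Shallow clone:
const data = { x: 42, y: 27 };
const awkwardClone = Object.assign({}, data); // works, but clunky
const clone = { ...data };                    // elegant

// Merge:
const defaultSettings = { logWarnings: false, logErrors: false };
const userSettings = { logErrors: true };
// Forgetting the {} here would mutate defaultSettings:
const merged = Object.assign({}, defaultSettings, userSettings);
// With spread, there is nothing to mutate by accident:
const settings = { ...defaultSettings, ...userSettings };
// → { logWarnings: false, logErrors: true }
```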
So spread properties help here, too. By using them, there is no way
to accidentally mutate objects. On top of that, the code is
very nice and elegant and easy to read. Object rest and
spread properties are currently supported
in Chrome, Firefox, Opera, Safari, and Node. And Babel supports
transpiling this feature. SATHYA GUNASEKARAN:
Recently, JavaScript introduced classes that built
upon the prototypal object model of JavaScript. A bunch of new proposals evolve the declarative API of classes. Here’s a simple class for
an increasing counter. There’s a constructor that creates an instance property _count and sets its default value to 0. We use an underscore to denote that
_count is a private property. But that’s just a convention. The language doesn’t enforce it. We also install a getter
and an `increment` method on the prototype. The new class fields proposal
introduces public instance fields in classes. So this moves the
instance fields from the constructor
to the class body. A constructor function
is no longer required. This new class fields proposal
also introduces private fields. Private fields enforce encapsulation at the language level. Private fields cannot be
accessed outside the class body. So instead of using an underscore to denote a private property, we use the hash sign.
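(A sketch of the counter in all three styles; the class names are assumptions:)

```js
// Privacy by convention, via the constructor:
class IncreasingCounter {
  constructor() {
    this._count = 0;
  }
  get value() {
    return this._count;
  }
  increment() {
    this._count++;
  }
}

// Public instance field: no constructor needed.
class FieldCounter {
  _count = 0; // still just a convention
  get value() {
    return this._count;
  }
  increment() {
    this._count++;
  }
}

// Private field: enforced by the language.
class PrivateCounter {
  #count = 0; // inaccessible outside the class body
  get value() {
    return this.#count;
  }
  increment() {
    this.#count++;
  }
}
```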
This is another bleeding-edge feature that is still being standardized. We have an implementation behind a flag in V8, and we’re looking forward
to shipping this soon, once the spec
issues are resolved. MATHIAS BYNENS: So
we’ve looked at a lot of cutting-edge
JavaScript features today. But what’s the main
takeaway for developers? The JavaScript language
is continuously evolving, and we’re working on
making new language features available in V8 and
Chrome as soon as we can. Developers like
yourself shouldn’t be afraid to use these
modern JavaScript features. They have decent
baseline performance, and in many common
cases, they’re even faster than handwritten
or transpiled equivalents. We understand that
you do need to support a wide range of browsers
and environments, and transpiling your
code helps with that. But we ask you to consider
transpiling selectively to keep bundle size small
and to avoid transpiling away the potential optimizations
that these modern language features hold. @babel/preset-env allows you to
transpile only those features that you really need to
transpile, based on the browsers and environments that you support.
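(A minimal sketch of such a configuration; the target list is an assumption:)

```js
// babel.config.js
module.exports = {
  presets: [
    ['@babel/preset-env', {
      targets: ['last 2 versions', 'not dead'],
    }],
  ],
};
```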
Similarly, we recommend continuing to use existing bundlers
such as Webpack or Rollup. They work very well together
with modern language features, especially with
modules, and they help you provide a performant
experience to your users. We’re interested in your
feedback on this presentation, so please go to google.com/io/schedule
and let us know what you think. And don’t forget to
follow us on Twitter and follow @v8js as well. We’ll be tweeting links
with more information about the features
we just discussed. And after this talk, we will be
hanging out at the Web Sandbox, so just come over and talk to
us if you have any feedback or questions at all. Thanks for listening, and
enjoy the rest of I/O! [APPLAUSE] [MUSIC PLAYING]