[MUSIC PLAYING] SURMA SURMA: I am Surma. I am a developer advocate
for the open web platform and work with a Chrome
team for Google in London. And I have the great pleasure
today to talk about one of my newly-found passions,
which is WebAssembly. You have probably heard
of it a little bit. And you can contact
me on Twitter if you have any
questions in the future. And later on my colleague
Deepti from the [INAUDIBLE] engineering team is going to
talk a bit about the future of WebAssembly. Before we start though,
I wanted to bring us all onto the same page. Because WebAssembly is often
associated very tightly with C++. So much so that a lot of people
think it is all about C++ when in fact, WebAssembly is
so much more than that. Many of the demos you can
find online are about C++ and emscripten. And that makes sense
because emscripten is an absolutely amazing tool. But it is very important for
web developers to realize that it's not just C++, and that
WebAssembly by itself is actually a really useful tool
to have in your back pocket. And that's what I want to
talk about in this talk. I want to show some
other languages that support WebAssembly, how you
can use WebAssembly maybe without learning a new language. And then, as I said,
Deepti is going to talk a bit about what
the future of WebAssembly might hold. So to just make
sure everybody knows what we're here for
this is WebAssembly.org, where we can explain
what WebAssembly is. And it is a stack-based
virtual machine. And if you don't know what a
stack-based virtual machine is, that is absolutely OK. What is important,
though, is that you realize that it is a
virtual machine, meaning it's a processor, it
doesn't really exist, but something that has been
designed to easily compile to a lot of actual,
real architectures. And that is called portability. So it is a virtual
machine designed to prioritize portability. So when you write some
code in whatever language and you compile
it to WebAssembly, that code will get
compiled to the instruction set of that virtual machine. And then those instructions
get stored in a binary format, usually in a .wasm file. And because that virtual machine
is designed to easily compile to real processors,
this file can now be ingested by a runtime,
which in our context here is most likely the browser. The browser can turn that .wasm
file into actual machine code off the actual machine the
browser is running on, and then execute that code. And from the very
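As a rough sketch of that last step, here is a tiny hand-assembled .wasm module being compiled and run from JavaScript. (In a real app you would fetch a .wasm file and use WebAssembly.instantiateStreaming instead of inlining bytes like this.)

```javascript
// A minimal .wasm binary, hand-assembled: it exports one function
// `add(a, b)` that returns a + b as a 32-bit integer.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// The runtime compiles the virtual-machine instructions to real machine code...
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);

// ...and then the exported function can be called like any JavaScript function.
console.log(instance.exports.add(2, 40)); // 42
```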
start, WebAssembly was designed to make
that process secure. So yes, you are running
code on your bare metal. But it's not an insecure thing. We already talked
about WebAssembly at the last I/O,
actually quite a bit. It's a technology that's growing
super quickly and actually maturing at a very
impressive pace. And we talked about how some big
companies are using WebAssembly to bring their
existing products, that they probably wrote in
C++ for example, to the web. So for example,
that was AutoCAD, who had been working
on AutoCAD for years and it's a well-known product. But now they've
put in the effort of compiling it to WebAssembly. And suddenly it was
running in the browser, which is kind of mind-blowing
when you think about it. Another example would be the
Unity game engine or the Unreal game engine, which now
support WebAssembly. So these game
engines often already have a kind of
abstraction built in. Because you build
your game and then you compile it to PlayStation,
or to Xbox, or other systems. But now WebAssembly is
just another target. And what is impressive is that
the browser and WebAssembly are able to deliver the
performance necessary to run these kind of games. And I find that amazing. And these things are going
to continue to happen. As you have already
seen in the Web keynote, that my colleague Paul
Lewis built a perception toolkit, which helps you build
kind of immersive experiences. And they wanted to build
links into the real world in the form of QR codes
and image detection. So QR codes can already
be detected by browsers using the Shape Detection API. But not every browser
has that implemented. So what they're doing is they're
using that Shape Detection API. But if it's not available,
they have cross-compiled Zebra Crossing, a
QR code library, to WebAssembly and load that
on-demand to fill the gap if they find it. And image detection
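That detect-or-fill-the-gap decision can be sketched like this; the function and return values here are my own naming, not the perception toolkit's API:

```javascript
// Hypothetical sketch: prefer the native Shape Detection API, and fall
// back to a wasm-compiled library (like ZXing) only when it's missing.
function pickQRBackend(scope = globalThis) {
  if ('BarcodeDetector' in scope) {
    return 'native'; // the browser implements the Shape Detection API
  }
  return 'wasm-fallback'; // load the wasm QR library on demand instead
}

console.log(pickQRBackend({ BarcodeDetector: class {} })); // 'native'
console.log(pickQRBackend({})); // 'wasm-fallback'
```

Because the wasm file is only fetched on the fallback path, browsers with the native API pay no download cost.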
isn't on the web at all. So they built that themselves
and used WebAssembly to give a new capability to the browser. The UI toolkit Qt
announced that they're also supporting WebAssembly now. So that kind of means
that you can now actually take an old libqt app
and compile it to WebAssembly. And then you have this weird
window in a browser tab experience, which is not ideal. But it just shows that
it works. But libqt is a very powerful and
generic UI library. So there's loads of demos on
the website at the bottom where they actually build a kind
of good and native-looking UI using libqt and WebAssembly. So if you don't know that
much about WebAssembly, you might be asking,
how did they do that? And the answer in these
cases is emscripten. Emscripten's goal is to be a
drop-in replacement for your C or C++ compiler. And instead of compiling
to your machine code, it spits out WebAssembly. And they really try to
be a drop-in replacement. So whatever code you
wrote to run on a system should just magically
run on the web. And that's very important
as a distinction to make. Because emscripten does a lot of
heavy lifting behind the scenes to make that happen. And I think because it works
so well in these scenarios is why it is so tightly associated
with WebAssembly currently. Because originally, emscripten
was an asm.js compiler. This was an idea
by Mozilla where they wrote this compiler
that takes C code and turns it into JavaScript. So what you see on
the right is asm.js. It's just normal JavaScript. And every browser that
can execute JavaScript can execute asm.js. But the plan was to give some
browsers an understanding of asm.js so that they
have a dedicated fast path to make these kind of
programs run faster. So you have a chunk of memory
and you have some variables. And suddenly, your C++ code can
run in your JavaScript engine. But C and C++ often use
other APIs like file opening and maybe open GL. So emscripten made
that work by using WebGL to pretend to be OpenGL
and to emulate the file systems so it can pretend
to be working on real files. So they're basically emulating
an entire POSIX operating system to make code
run on the web that was never written for the web. So they actually
made that happen. So when WebAssembly
came along, emscripten just added a new output
format but kept all the work they had put into
making that emulation. So emscripten was basically
able to use all the experience that they had with making
POSIX code work on the web, and apply to WebAssembly. So they were able to
deliver, extremely fast, some really impressive
and actually mature demos and tools
around WebAssembly. And they deserve a lot of credit
for taking everybody along with them and leveling
out the playing field for all the other languages
that have come along since. And I think that's why
WebAssembly is so [INAUDIBLE] to C++, because of that
quick maturity of emscripten. But what about web developers? How about you, who
might be working at a web agency or maybe
even a freelance developer? How can WebAssembly
be useful to you? Do you have to learn C++? Spoiler alert, no. When you are a web
developer and you think oh, I should learn C++ so
I can use WebAssembly, many people end up like this
because what even is C++ when you know JavaScript? And fun fact, it works the
other way around as well. When I see a C++ developer
see or write JavaScript for the first time, they
make the exact same face. And I'm not saying that
because one language is better than the other, but just because
they require such drastically different mindsets to write. I have written both
professionally. And whenever I switch,
I twitch a little bit. It takes some time. It's just very different
to think about. What I'm saying is
there was, so far, no incentive for a web
developer to learn C++. And so the amount of people who
are comfortable in both worlds is fairly small. And as a result, WebAssembly
seemed like a very niche technology when in fact, it's
actually a really useful tool to have in your back pocket. And so what I want to talk
about is the two main use cases that I usually see when
I think about WebAssembly. On the one hand, I want to talk
about the surgical replacement of tiny modules in your
JavaScript app, the hot paths, the bottlenecks,
with WebAssembly. And I want to talk
about the myth that WebAssembly is
faster than JavaScript. But first, I want to talk
about the other facet, about ecosystems. It might seem a bit weird
because nobody will probably disagree when I say that the
JavaScript ecosystem is pretty huge. I mean, just look at NPM. It's massive. But it's just a fact
that not for every topic
JavaScript is the first choice while other languages might be. So sometimes you are
faced with a problem and you're looking for libraries
to solve these problems. And you'll find them in C or
in Rust but not in JavaScript. So you can either sit down and
write your own JavaScript port. Or your new option is to tap
into other language's ecosystem using WebAssembly. And that's exactly what
we did with Squoosh. Squoosh is an image
compression app that runs completely in the
browser and offline, no server side. And you can drop
in images and you can compress them
with different codecs and then visually inspect how
these different codecs have different effects on the
visual quality of your image. Now, the browser already
offers that, as you may know. Because with Canvas, you can
decide in what image formats you want to encode an image. And you even get control
over the quality. But it turns out that
these codecs by the browser are optimized for compression
speed rather than compression quality or visual quality. So that was a bit
lackluster, to be honest. And also, you're kind
of bound to the codecs that the browser supports. So until recently,
only Chrome could encode to WebP, and none
of the other browsers. So that wasn't enough for us. And so we googled a bit, and
we found some codec encoders written in JavaScript. But they were kind of weird. And also we didn't find a single
encoder in JavaScript for WebP. So we thought we'd have to look
at something else and we found loads of encoders in C
and C++. So: WebAssembly. So what we did is we
compiled, for example, the library called
MozJPEG into WebAssembly, loaded it in the browser, and
replaced the browser's JPEG encoder with our own. And what that allowed
us is that we actually got much smaller images
at the same visual quality setting, which is kind of cool. But not only that,
it also allowed us to expose loads of expert
options that the library had but that the browser
obviously didn't expose. So things like
configuring the chroma subsampling or different
quantization algorithms are not only valuable to squeeze
out the last couple of bytes from an image, but also
just as a learning tool to see how these options
actually affect your image visually and the file size. The point here really is that
we took an old piece of code-- MozJPEG goes back to 1991. It was written definitely
not with the web in mind, but we are using it on
the web anyway, and using it to improve the web platform. And we used emscripten. So with emscripten, to
show you how that works, I usually find myself
in a 2-step process. The first step is compiling
the library, something that you can link against later. Often, image codecs
make use of threads and SIMD, because
image compression is a highly parallelizable task. But neither JavaScript
nor WebAssembly have support for threads
or SIMD just yet. Deepti will talk a bit later on
what is coming on that front. But for now, we disable
SIMD to make sure we don't run into any problems. And in the second step,
you have to write a piece of what I called bridge code. This is the function that I want
to call from JavaScript later on. So it takes the image, it
takes the image dimensions, and then uses MozJPEG
to compress it and returns a typed
array buffer, which contains the JPEG image. And once you have written this
bridge code, we call emcc,
the emscripten compiler, with our
provided I didn't make any mistakes, we get this output,
a Java script file that sets everything up for us and
the WebAssembly file. Now, here is something
to keep in mind. Because emscripten is
a drop in replacement and does all these emulation and
all that heavy lifting for you, it is always a good idea to
keep an eye on your file sizes, because these kind of
file system emulations and API tunneling, that's code
that needs to get shipped. So if you use a lot
of CAPIs, these files can become quite big,
especially the JavaScript file. We have been working
with the emscripten team quite intensely to help
them keep it at a minimum, but there is only
so much you can do if you want to be a
drop and replacement, so keep an eye on your file sizes. Another example of
WebAssembly in squoosh is image scaling,
because it turns out making an image
bigger or smaller, there's many ways
to achieve that with many different visual
effects and visual outputs. So with the browser,
if you just use the browser to scale an image,
you just get what you get. It will probably be fast,
it will probably look good, but sometimes having control
over the different variants of scaling an image can really
make a big visual impact. So on this video, you can see
me switching back and forth between the Lanczos3 algorithm
and whatever the browser has. And you can see that
with Lanczos3, I actually have a linear rgb
color space conversion. I actually have a much
more real perception of brightness in this picture. So in this case, it's actually
a really valuable piece of code to have running. And so these image
scaling algorithms that we are using in
squoosh, we actually took from the Rust ecosystem. Mozilla has been
heavily investing in the Rust ecosystem, and their
team writes WebAssembly tools for Rust. But the community also
abstracts those away to just generic tools. One of these tools is wasm-pack,
which really takes you by the hand and turns your
Rust code into WebAssembly modules in modern
JavaScript and really small, and I think it's really
fun to play around with. So with Rust, same
kind of principle. We have a library, and we want
to write our little bridge code. So in this case,
the resize function is what I want to
call from JavaScript. It takes the image and my
input size and the output size, and then I just return the
resized image afterwards. And then you use wasm-pack
to just turn all of that into a WebAssembly
module that you can use. Now, that size comparison
is not quite fair, because it's a
different library, and it's a smaller library. So don't compared
it bite by bite, but on average, Rust tends
to generate much smaller glue code. Which kind of makes
sense, because Rust doesn't do anything of the
POSIX file system emulation. You can't use a file
function in Rust, because it doesn't do
file system emulation. They have some crates
that you can pull in if you want to have
that, but it's much more of an opt in approach. So the bottom line is
that with squoosh, we are using four different
libraries at least from two different languages that have
nothing to do with the web, but we still proceeded
to use them on the web. And that's really what
I want you to take home from this entire thing is that
if you find a gap in the web platform that has been filled
many times in another language but not on the web,
or not in JavaScript, WebAssembly might be your tool. But now let's talk about
the surgical replacement of hot path in your
JavaScript and the myth that WebAssembly is
faster than JavaScript. Now it's really important
to me, and that's why I came up with this really
far fetched visual metaphor. Both JavaScript and WebAssembly
have the same peak performance. They are equally fast. But it is much easier to stay on
the fast path with WebAssembly than it is with JavaScript. Or the other way around. It is way too easy sometimes to
unknowingly and unintentionally end up in a slow path in
your JavaScript engine than it is in the
WebAssembly engine. Now that being
said, WebAssembly is looking into shipping
threads and SIMD, things that JavaScript will
never get access to. So once these things
ship, WebAssembly will have a chance to
actually outperform JavaScript quite a bit. But at the current
state of things, the peak performance
is the same. To understand how this whole
falling off the fast path happens, let's talk a bit
about V8, Chrome's JavaScript and WebAssembly engine. JavaScript files and WebAssembly
files have two different entry points to the engine. JavaScript files get
passed to Ignition, which is V8's interpreter. So it reads the
JavaScript file as text and interprets it and runs it. While it's running it, it
collects analytics data about how the code is
behaving, and that is then being used by TurboFan,
the optimizing compiler, to generate machine code. WebAssembly, on the other
hand, gets passed to Liftoff, the streaming WebAssembly compiler. And once that compiler
is done, TurboFan kicks in and generates
optimized code. Now, there's some
differences here. The first obvious difference
is that the first stage has a different name
and a different logo. But there's also a
conceptual difference. Ignition is the
interpreter, and Liftoff is a compiler that
generates machine code. So it would be an
overgeneralization to say that machine code is
always faster than interpreted code, but on average,
it's probably true. So here's already
the first difference in terms of speed perception. But more importantly
is this difference. For JavaScript, the
optimizing compiler only kicks in eventually. So this code has to
run and be observed before it can be optimized,
because certain assumptions are made from the observations. Machine code is generated,
and then the machine code is running. But once these assumptions
don't hold anymore, you have to fall back
to the interpreter, because we can't guarantee
that the machine code does the right thing anymore. And that's called a
de-opt, a de-optimization. With WebAssembly, TurboFan
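A de-opt isn't directly observable from JavaScript, but here is a sketch of the kind of code that can trigger one; the engine may specialize a function for the number arguments it has seen, and a later call with a different type breaks that assumption:

```javascript
// The engine observes this function being called with numbers thousands
// of times, so the optimizing compiler may specialize it for numbers.
function square(x) {
  return x * x;
}

for (let i = 0; i < 10000; i++) {
  square(i);
}

// A string argument breaks the "x is a number" assumption. The result is
// still correct (JavaScript coerces '3' to 3), but an engine that had
// specialized for numbers may have to de-opt back to slower code.
console.log(square('3')); // 9
```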
always kicks in right after the Liftoff
compiler, and you always stay on the TurboFan output. So you always stay
on the fast path, and you can never get de-opted. And I think that's where
the misconception comes from that WebAssembly is faster. You can easily get
de-opted in JavaScript, and you cannot in WebAssembly. And Nick Fitzgerald from
the Rust WebAssembly team actually did a really
nice benchmark: he wrote the same benchmark
in both JavaScript and WebAssembly. JavaScript is red,
WebAssembly is blue. And ran it in
different browsers. And what you can see here is
yes, OK, WebAssembly is faster. But the main
takeaway really here is that JavaScript
has a spread. It is kind of unpredictable
in how long it takes, while WebAssembly
is like spot on. Always the same time,
even across browsers. And I think that is really
the key line I would like you to take home with you. WebAssembly gives you more
predictable performance. It delivers more predictable
performance than JavaScript. And that's actually a story I
can tell as well from Squoosh. We wanted to rotate an image. So we thought, OK,
let's use Canvas. But we couldn't, because
Canvas is on the main thread. OffscreenCanvas was barely
in Chrome at that point, so we actually ended up writing
a piece of JavaScript by hand to rotate or just reorder the
pixels to rotate the image. And it worked really well. It was very fast,
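A hand-written rotation like that is essentially index arithmetic over the pixel buffer. A minimal sketch of a 90-degree clockwise rotate (one value per pixel for brevity, where real RGBA data has four bytes per pixel):

```javascript
// Rotate a width x height image 90 degrees clockwise by reordering pixels.
function rotate90(pixels, width, height) {
  const out = new Uint8Array(pixels.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Pixel (x, y) in the source lands at (height - 1 - y, x) in the target.
      out[x * height + (height - 1 - y)] = pixels[y * width + x];
    }
  }
  return out;
}

const img = Uint8Array.of(
  1, 2,
  3, 4,
); // a 2x2 test image
console.log([...rotate90(img, 2, 2)]); // [ 3, 1, 4, 2 ]
```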
but it turns out the more we tested
in other browsers, the more it became a bit weird. So in this test case, we are
rotating a 4K by 4K image. And this is not about
comparing browsers. This is about
comparing JavaScript. The fastest browser
took 400 milliseconds. The slowest browser
took eight seconds. Even off the main thread,
that's way too long for a user pressing a button
to rotate an image. And so what you
can see here really is that we clearly stayed on
the fast path in one browser, but we fell off the
fast path in another. And the browser wasn't
usually a fast browser, just some browsers
optimize differently. And so we wrote our rotate
code in WebAssembly, or in a couple of languages
that compile to WebAssembly to compare how that performs. And what you can see here
is that pretty much all the WebAssembly languages bring
us somewhere around the 500 millisecond mark. I would call that predictable. I mean, there's still
a bit of variance, but nothing compared to
the variance of JavaScript. And that's a logarithmic scale. And also what you can
see here, I just noticed, is that the head-to-head
performance of WebAssembly, the peak performance of
WebAssembly and JavaScript is pretty much the same. So if you look at the graph,
you might be wondering what AssemblyScript is. And if you haven't, I'm
really excited about this, because AssemblyScript
really brings me back to the title of my talk,
which is WebAssembly for web developers. AssemblyScript is a TypeScript
to WebAssembly compiler. Now, that might mislead
you, because you can't just throw your existing
TypeScript at this compiler and get WebAssembly out of it,
because in WebAssembly, you don't have the DOM APIs, so you
can't just use the same code. But what they're using
is the TypeScript syntax with a different type library. So that means you
don't have to learn a new language to write
WebAssembly, which I think is kind of amazing. So here's what it looks like. It's like TypeScript, just with
a couple of minute differences, in something like i32. It's not a type
that JavaScript has, but it is a type
that WebAssembly has. And then there are
these built-in functions like load and store that
put values into the memory or read them from memory. And the AssemblyScript
compiler turns those into WebAssembly modules. So you are now able
to write WebAssembly without learning a new language
and harness all these benefits that WebAssembly
might offer to you. And I think that's
kind of powerful. Something to keep in mind
is that unlike TypeScript, WebAssembly doesn't have
a garbage collector. At least not yet. And Deepti will talk about
this a bit more later. So at least for now, you have to
do memory management yourself. So AssemblyScript offers a
couple of memory management modules you can just pull in. And then you have to do
these C style allocations. It's something to
get used to a bit. But it's very much
usable right now, and once WebAssembly does
get garbage collection, it could get even better. So I just want to
give full disclosure. AssemblyScript is a fairly
young and small project. It has a group of extremely
passionate people behind it. It has a couple of sponsors,
but nothing compared to a Mozilla behind Rust or behind emscripten. And that all being said,
it is absolutely usable and very enjoyable. My colleague Aaron Turner
wrote an entire emulator in AssemblyScript. And so if you're
interested in that, you should look him up on GitHub
and take a look at the code. Now, one thing that
I want to make sure that I say out loud is at
the current state of affairs, putting everything in
wasm is not a good idea. JavaScript and WebAssembly
are not opponents. There is synergy between them. Use them together. One doesn't replace the other. Like debugging is
going to be harder, and code splitting is much
harder with WebAssembly currently than it
is with JavaScript, and you have to
call back and forth. It's just not going to
be a great experience. I had some people tweet at me
that they want to write web components in C++. I don't know why they would want
to do that, but apparently they want to do that, but I
wouldn't recommend it. What I would like to say is
use WebAssembly for the right things. Do some performance
audits, do measurements. Where are your bottlenecks? And see if WebAssembly
can help you. Did you find a gap in the
platform and you can fill it from a different language? Again, WebAssembly is your tool. But, now to talk a bit about
the future of WebAssembly and the upcoming
future, I would like to welcome Deepti to the stage. [APPLAUSE] DEEPTI GANDLURI: Thanks, Surma. Hi, everyone. I'm Deepti. I'm a software engineer
on the Chrome team, and I work on standardizing
WebAssembly features as well as implementing
them in V8. So most of what you've seen
in this presentation so far has landed and shipped
in all major browsers, which is the MVP or the Minimum
Viable Product of WebAssembly. And we've been working
hard on adding capabilities to make sure that we
get closer and closer to native performance. The MVP itself
unlocks a whole set of new applications on the web. But this is not the
end goal, and there are a lot of new,
exciting features that the community group
and the implementers are working to enable. The first one of these is the
WebAssembly Threads proposal. The threading proposal
introduces primitives for parallel computation. Concretely, that means
that it introduces the concept of a shared
linear memory between threads and semantics for
atomic instructions. Now, why is this necessary? There are many existing
libraries that are written in C or C++ that use Pthreads, and
those can be compiled to wasm and run in multi-threaded mode,
allowing different threads to work on the same
data in parallel. Aside from just enabling
new capabilities for applications that benefit
from multi-threaded execution, you would see performance scale
with the number of threads. So the threading proposal builds
on primitives that already exist in the web platform. The web has support for
multi-threaded execution using Web Workers, and
that's exactly what's used to introduce multi-threaded
execution to WebAssembly. The downside of Web Workers
is that they don't share mutable data between them. Instead, they rely on message
passing for communication through post message. So they'd rely on message
passing for communication. So each of these WebAssembly
threads runs in a Web Worker, but their shared
WebAssembly memory allows them to work
on the same data, making them run close
to native speeds. The shared linear memory here
is built on the JS shared array buffer. So if you look at this
diagram, each of these threads is running in a
Web Worker and can have a WebAssembly instance
that's instantiated with the same linear memory. This means that the instances
operate on the shared memory but have their own
separate execution stacks. So the API to create
a WebAssembly memory remains almost the same. If you look at the
first line there, you create a WebAssembly
memory with a shared flag and a mandatory maximum. This creates a
shared array buffer underneath with the initial
size that we've specified there, which is one page of memory. So with all of these threads
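That first line looks roughly like this, with sizes given in 64 KiB wasm pages:

```javascript
// A shared WebAssembly memory: note the `shared` flag and the mandatory
// `maximum`. `initial: 1` means one 64 KiB page.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });

// The backing buffer is a SharedArrayBuffer, so it can be posted to a
// Web Worker and mutated from both sides at once.
console.log(memory.buffer instanceof SharedArrayBuffer); // true
console.log(memory.buffer.byteLength); // 65536
```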
operating on the same memory, how do we ensure that
the data is consistent? Atomic modifications
allow us to perform some level of synchronization. So when a thread performs
an atomic operation, the other threads see it as
happening instantaneously. But full synchronization
often requires actual blocking of a thread until another
is finished executing. So the proposal has an example
of mutex implementation, and I pulled out how you would
use this in a JavaScript host. If you look at it
closely, there's subtle differences
between what you would do in a worker
versus what you would do on the main thread. So on the main thread,
the tryLockMutex method is called, which tries to lock
a mutex at the given address. It returns one if the mutex
is successfully locked, or zero otherwise. And on the worker thread,
it will lock a mutex at the given address, retrying
until it's successful. So basically, the reason it is the
way it is is that on the web, you can't actually
block the main thread. And this is something that's
useful to keep in mind when using the threading primitives. So what is the current
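Here is a plain-JavaScript analog of that mutex idea (the names mirror the proposal's example, but this particular sketch is mine): the try-lock variant never blocks, so it is safe on the main thread, while a worker could combine it with Atomics.wait to block until the lock is free.

```javascript
// 0 = unlocked, 1 = locked, stored in shared memory visible to all threads.
const sab = new SharedArrayBuffer(4);
const mutex = new Int32Array(sab);

// Try to lock without blocking: returns 1 if we won the lock, 0 otherwise.
function tryLockMutex(m, index) {
  return Atomics.compareExchange(m, index, 0, 1) === 0 ? 1 : 0;
}

// Unlock and wake one waiter (a worker would use Atomics.wait to block).
function unlockMutex(m, index) {
  Atomics.store(m, index, 0);
  Atomics.notify(m, index, 1);
}

console.log(tryLockMutex(mutex, 0)); // 1: we got the lock
console.log(tryLockMutex(mutex, 0)); // 0: already locked
unlockMutex(mutex, 0);
console.log(tryLockMutex(mutex, 0)); // 1: lockable again
```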
status of this proposal? The proposal itself
is fairly stable. But there is ongoing
work to formalize the memory model that's used
by the shared array buffer. The thing I'm really
excited about here is that this is shipped in
Chrome 74 and is on by default. So Surma mentioned Qt
earlier in the presentation, and Qt uses full thread support. So this is something that you
can use in your applications today. As the shared memory
primitive here is the JavaScript shared array
buffer and that's temporarily disabled on some browsers,
WebAssembly threads is not currently available
by default on all browsers. But you can still try this
out in Firefox Nightly behind the flag. One of the goals
of WebAssembly is to be a low level abstraction
over modern hardware. This is especially true of
the WebAssembly SIMD proposal. SIMD is short for Single
Instruction, Multiple Data. It lets one instruction
operate at the same time on multiple data items. So most modern CPUs support
some subset of vector operations. So this proposal is
trying to take advantage of capabilities that already
exist in hardware that you use every day. The challenge here is
to find a subset that is well supported on
most architectures but also is still performant. Currently, this subset is
limited to 128-bit SIMD. There are a couple of different
ways to use the SIMD proposal. By using auto
vectorization, you can pass in a flag to enable
SIMD while compiling, and the compiler would auto
vectorize your program for you. On the other hand,
many SIMD use cases
performance and use hand coded assembly, so these
would be using clang built-ins or intrinsics
that generate machine code that is tuned for performance. Now, simd can be used for a
large variety of applications. So you can use it for image
or audio, video codecs, applications like Google
Earth and Photoshop, or even machine learning
applications on the web. We've had a lot of interest for
webml and simd collaborations as well. So let's take a closer look at
how this data is operated on. Here, you see a simple
example of an add instruction on an array. Let's say this is an
array of integer values. So on the left side is what
a scalar operation would look like, where you add
each number to the other and store the result.
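In JavaScript terms, that scalar side is just a loop, one add per element:

```javascript
// Scalar version: four separate additions, one loop iteration each.
function addScalar(a, b) {
  const out = new Int32Array(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = a[i] + b[i];
  }
  return out;
}

const a = Int32Array.of(1, 2, 3, 4);
const b = Int32Array.of(10, 20, 30, 40);
console.log([...addScalar(a, b)]); // [ 11, 22, 33, 44 ]

// With 128-bit SIMD, those four i32 additions would be a single
// i32x4.add instruction instead of four scalar adds.
```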
The vector version of this would just boil down
to one hardware instruction. For example, a PADD or a VPADD
on some Intel architectures. So SIMD operations
work by allowing multiple pieces of data to be
packed into one data word, and enabling the instruction
to act on each piece of data. This is useful for cases
where the same operation has to be performed on
large amounts of data. For example, take
image processing. Say you want to compress
an image in Squoosh or reduce the amount of color of
an image in Photoshop by half. SIMD operations would
actually make this a lot more performant. So we've talked about making
use of underlying hardware capabilities and
OS capabilities to make your
applications perform. Now, let's look at what
happens on the other side. What are we doing for better
interaction with the host? One of the proposals
that's being implemented by multiple
browsers is the reference types proposal. With the reference
types proposal, WebAssembly code can pass around
arbitrary JavaScript values using the anyref value type. These values are
opaque to WebAssembly, but by importing JavaScript
built in functions, WebAssembly modules can perform
many fundamental JavaScript operations without actually
requiring JavaScript glue code. So the WebAssembly table
object is a structure that stores function
references at a high level. So this reference
types proposal also adds some table instructions
for manipulation of tables inside of wasm. The neat thing about this
is that the reference types proposal is actually
setting the stage for really useful future proposals. So for efficient interop
with the host for the Web IDL bindings proposal, or
exception references for exception handling, and it
also enables a smoother path to having garbage
collection, and I'll be talking about all of
this in the next few slides. So a proposal that our
team will be focusing on in the near future is a
web IDL bindings proposal. Web IDL is an interface
definition language, and it can be used to
define interfaces that are implemented in the web. We touched on this a little
bit with the reference types. The basic idea here is that
this proposal describes adding a new mechanism to
WebAssembly for avoiding unnecessary overhead when
calling or being called through the web IDL interface. The web IDL bindings
proposal would allow compilers to optimize
calls from WebAssembly into existing web APIs and
browser environments today as well as other APIs that
may use web IDL in the future. So let's take a
closer look at this. So when you have
a wasm function, you would call JavaScript
through the JS API. The JS API goes through
the binding layer that facilitates communication
between the JS API and the web APIs for Dom access. This adds a lot of glue code
and additional overhead. The goal of the web
IDL bindings proposal is to reduce this overhead and
optimize calls from WebAssembly into existing web APIs. So effectively, this would not
have to go through the JS API, and the bindings would
also be optimized to reduce the
overhead, so you would have streamlined calls between
WebAssembly and web APIs. So currently when we
talked about C, C++, Rust, and putting these languages
to WebAssembly is very well supported. And there's a lot
of work ongoing to bring different classes of
other languages to the web. One such feature is
garbage collection, which is necessary for
efficient support of high level languages. That means faster
execution, smaller modules, and outside of C, C++, this is
really a requirement for being able to support a vast
majority of modern languages. This is also a large
and open ended problem, but we have been making
progress by carving out smaller proposals and honing
in on the exact design constraints. So currently, wasm is designed
explicitly for bits tail call optimizations. In the future, we want to enable
correct and efficient tail call implementations of languages
that require tail cal emulations, so these would
be functional languages like Haskell. V8 already has an
implementation for this, and this is actually
moving along quite well. So for full C++ support,
we need exception handling. In a web environment,
exception handling can be emulated
using JavaScript exception handling, which can
provide the correct semantics but really isn't fast. So post-MVP, WebAssembly
will gain support for zero cost
exception handling, and this is something that
is being actively worked on as well. We're also working on a
number of other proposals, so feel free to check out the
future features documentation on the WebAssembly GitHub page. The other thing I
want to emphasize is that a lot of these
are in the design phase, and if you're interested
in wanting to participate, all of the development is done
in an open community group, so contributions
are always welcome. We also talked
about performance. So if you have
performance bottlenecks and you've used wasm to
elevate some of the concerns, we'd love to hear from you. Surma and I are going to be
hanging out here and later in the web sandbox, so
if you have questions, please come find us there
or obviously find us online. Thank you. [APPLAUSE] [MUSIC PLAYING]