[MUSIC PLAYING] STEVE KLABNIK: Hi, everybody. I'm Steve. Thank you so much for having me. It's been really cool. This is my first time at this
conference and in Toulouse. So, OK. This talk, this title, might
seem like a pile of buzz words, but I swear it's coherent. We'll get there, eventually. I got the idea-- and it is if I can, like,
click the thing right. What is going on? Oh, no. All right. I'm using the presenter view,
which I don't often use. I'm trying to use it more. Anyway, the point is, I
got the idea for this talk when I saw this tweet. I think this tweet
is very fascinating. It's by Solomon Hykes. And he said, if Wasm and
WASI existed in 2008, we wouldn't have needed
to create Docker. If you don't know who
Solomon is, he made Docker. That's how important it is. WebAssembly on the server
is the future of computing. A standardized system
interface was the missing link. Let's hope WASI
is up to the task. And this is a retweet of
Lin Clark announcing WASI. We'll talk more about
what WASI is, later. But I find this
tweet fascinating because there's two
kinds of people-- people who saw this
tweet and went, oh, yeah, that makes perfect sense. And people who saw this
tweet and were like, wait, what does Wasm
have to do with Docker? Like, this is just
completely, totally nonsense. And so, in some sense, this
talk is like a 30 minute version of this tweet. Luckily, tweets cannot
be half an hour long. I think they used
to call those blogs. But, anyway, this is
kind of the subject of this talk, and some
interesting stuff about Rust and WebAssembly, and
where it's going, and all those kind of things. All right. So part one, we're going
to talk about Rust. If you weren't at
Alexi's talk earlier, you know, I'm assuming
maybe you might have heard of Rust by now. But Rust is this
programming language that I've worked on for
the last couple of years, along with a whole
pile of other people. And the current language
we're using to describe it is, a programming language
that's empowering everyone to build reliable and
efficient software. Hilariously, I noticed
recently that this is almost the same tag line that Go uses. The Go and the Rust
teams are actually very friendly with each other. And unlike people
on the internet, we don't, like, argue
about which one is better. So it's kind of funny that it's,
like, almost word for word. So I think we might need to
change it because we don't want to steal their thunder. But, anyway, the point is
that Rust is a language that lets you go very fast. If you need speed and you
need to do it correctly, that's kind of like
where Rust exists. And so sort of how
Rust came about was, there was this idea for a
very long time in programming languages that there
was this trade-off between speed and safety. And a lot of stuff in our
discipline is trade-offs. And sort of your job
is to pick which thing do I actually sort of need? And so a version of
this that does not use Sonic kind of looks like this. You sort of had two options. You had C++ which is extremely
fast, but not very safe at all. And then you have Ruby,
which is very safe, but is definitely not fast. I actually have a Ruby tattoo. I'm not, like,
talking trash on Ruby, but it's just not--
it's not fast. It's just not why Ruby was made. And so you kind of had
these two options-- do you want to be fast-- live fast and die young? Or do you want to not
cause terrible security vulnerabilities? Like, which one? Good job. And so, as part
of that, Microsoft released these figures recently. And I will note that a lot
of people who saw this chart misunderstood it. This is a chart of all
of Microsoft's products and all their security
vulnerabilities. And the big blue
section on the bottom is vulnerabilities that were
related to something called memory safety, which basically
means you used a pointer wrong and now you're owned. The other vulnerabilities
on the top are every other category--
like everything that's not related to memory safety. And so this line
is hovering at 70% because, it turns out,
from 2006 to 2018, roughly every year about
70% of the vulnerabilities across all of Microsoft products
were related to memory safety. Now, a lot of people
said, like, this means that C is
terrible or something. But notice this is not
actually categorized by programming language. It's categorized by what caused
the problem in and of itself. So you can get these kind of
vulnerabilities in languages other than C and C++,
but usually that's what they're generally
associated with. So the idea of Rust is, what
if you could be safe, as well? The text on here, if
you can't read it, says, I'll be your
designated driver tonight. And someone off on the
sides says, we know. Don't talk while you're eating
or you could choke and die-- ha-ha. Rust kind of gives
you this armor that lets you do things
in a much safer way. This is from a really
great webcomic. I have links to all
of the media that I'm using at the end of the talk. But there are a bunch of these
for a bunch of other languages. So if you go look up
your favorite language, they're all very well done,
and it's super amusing. But sort of this
idea is, what if we could have the same
protections that we have in high-level languages,
but do super low-level things at the same time? And could we eliminate that trade-off? Obviously, there's trade-offs
in many directions, not just speed and safety. And so in Rust, you
get speed and safety, but you pay for it in compile times and it's not as easy to program in as other languages. So everything is
always a trade-off. Rust is not perfect
or the be-all end-all of programming languages. There may be a Rust++ someday. We'll see. Who knows? But the point is to improve
the status quo as it exists. You know, it's not the end of
programming language history. And one of the things that
makes Rust work this way and makes Rust a lot different
from other programming languages that you may be using
is this thing called a runtime. Now, every non-assembly
language has a runtime, including C and C++. But people often say
no runtime to mean a tiny one and a runtime to
mean a big one because words are hard, and we can't just
actually say what things mean. So we have to invent
all these terms. And so this is a completely
super scientific diagram of a program and how big
the runtimes are in them. The light gray is
like your code, and the dark gray
is the runtime. So Rust has a really tiny
one; compare that to, say, JavaScript with the V8 engine. That's not the V8 engine's logo, by the way. That's the V8 juice logo. V8 devs either love or hate
when you make that joke. I choose to listen to
the ones that love it. But, anyway, like V8
and all the other stuff that makes JavaScript work
is part of your program when you're running
a JavaScript program. And so this will
become more relevant a little bit later in the talk. You'll see this slide again. But the only actual
bit of Rust code that I have in this
presentation-- because this is largely this sort of
big overview of things. As I said earlier,
the trade-off is, things get harder to
actually program in. But it's actually pretty
cool, in some ways. Maybe this is just like
Stockholm syndrome-- like, you know,
you grow to enjoy the thing that beats you up. But it's sort of like working
with a pair programmer. Like, you write some code
and the Rust compiler is like, uh-uh. You made a mistake here, and
here, and here, and here. But, luckily, computers
never get tired, and they never make
mistakes-- sort of-- and they're there
to check your work. So once you get a
little better at Rust and you learn to work with
this kind of development instead of against it,
it becomes much more fun. So when you start out,
you're like, oh my god. It's yelling at me all the time. This is terrible. And now, I just kind of
like crap out some code and then the compiler
tells me where I got all the things
wrong, and then I fix it, and it's way easier. So there's sort of this
weird part at the start where it feels like a lot of
mental overhead and work, but when you get better
at it, it becomes easier, which is a very strange feeling. So, for example, what's wrong with this code? There's this function, add_one, and it takes a mutable reference to an integer, and it adds one to it. Not very super fancy-- it does what it says on the tin. And then in the main function, we create a variable called y, but we don't give it any sort of value. And then we try to pass that to add_one. So the problem here is that we don't know what's in y, so anything can happen. This is what's often called undefined behavior. And, specifically, we're accessing uninitialized memory, which is a memory safety issue.
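Reconstructed from that description-- the exact types and names on the slide are my guess-- the code looks roughly like this, and it intentionally does not compile:

```rust
// add_one takes a mutable reference to an integer and adds one to it.
fn add_one(x: &mut i32) {
    *x += 1;
}

fn main() {
    // y is declared, but we never give it any sort of value.
    let mut y: i32;

    // Handing out a reference to uninitialized memory is exactly what
    // Rust refuses to allow:
    // error[E0381]: borrow of possibly uninitialized variable: `y`
    add_one(&mut y);
}
```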
And so the Rust compiler will give you an error, and it'll say, hey, you're trying to borrow a possibly uninitialized variable. It says possibly here because it's a little humble. Like, even though it can see it's definitely not initialized, it's like, maybe that's not initialized, I don't know. And then it's like, you're using it on this line, and it shows you where the thing actually is. So, yeah. So you get these kinds
of error messages. This is the only-- this is a pretty small one that
was going to fit on the slide so it's readable. But you'll actually get
really in-depth messages where it will be like, hey, you
tried to use this thing here. You also tried to do this
other thing over here, and this third thing over
here, and that doesn't work. And it'll actually take
your code and point to all the bits of it. And, like-- there's some people
who've put in a lot of work to make these really,
really good and useful, and so they generally are. Which is funny because once
you get used to them being good it makes the bad
ones feel even worse. Like, you're like, come on,
most of the time this helps me and now this one doesn't. If you use a language where
the error messages are all terrible, you're just kind
of like, yeah, whatever. I can't understand this error. That's normal. But when you make like
most of them good, the bad ones suddenly become
this terrible situation. Anyway, we have people who are
just working on error messages and making them understandable,
and it's pretty great. So that's Rust. Next up, we're going to talk a
little bit about WebAssembly. So the web has been very
ambitious the whole time that it's existed. It turns out that, initially,
we had just plain old HTML with a little bit of JavaScript. And then, pretty
soon, CSS came along, then we started adding images. Like, there's this sort of-- like, a lot of people say that
the web has been moving really fast in the last
five or 10 years, but it's been moving fast,
basically, since it existed, and sometimes too fast,
and in the wrong direction, and in, like, five
directions all at once. But that's because
we're web developers, and we love the
web, and we want it to grow and be able to use it
in even more and more different places. And so, at first, it was
good for just documents and hyperlinks. And don't get me wrong,
I love me some documents and hyperlinks. But eventually, people
realized that JavaScript is a real programming
language and you could write real programs
in it, and, all of a sudden, this became way more serious. And we started building
everything in web browsers because a web browser is
basically an operating system, but that's a whole
separate talk, and I don't have time
to get into that. But we want to build
these really ambitious web applications that can do all
sorts of interesting stuff. And so I used to
work at Mozilla, and some of my former
colleagues there came up with this thing called asm.js. Who's heard of asm.js before? Cool. So, basically, as
it says on the tin here, an extraordinarily optimizable, low-level subset of JavaScript. The idea here was,
we saw this rise of languages and
other stuff that would compile to JavaScript. So a great early
example is CoffeeScript. You'd write CoffeeScript, it
would spit out some JavaScript, and the JavaScript would
be what it would actually run because the only thing
we had in the browser was JavaScript. And if you wanted to use
a different language, compiling to JavaScript was basically the only way to do it. But as people started to build
more and more monstrosities of compile to
JavaScript shenanigans, people started to
think, like, hey, maybe we could make this
a little better somehow. And so asm.js is this subset. It's still JavaScript, but it's
got some tricks up its sleeve. And it makes it a great way to
compile stuff to JavaScript. Here's what I mean. This is some asm.js code-- it computes some calculation, doesn't really matter. It calls a function f and then bitwise ORs the result with 0. And then it calls g, and does it again. And then it returns the same thing, also OR'd with 0. And you're like, why
is this happening? If you're familiar
with Boolean logic-- which maybe you're not-- doing this with zero-- something OR zero is
always just something. So this should be a no-op. So why are we doing this? Well, the comments kind of give
it away a little bit, here. The trick is that JavaScript
doesn't have integers that you, as a programmer, can write directly, but it does have integers in the semantics of the spec. And so when you OR something with zero, it's a no-op in number terms,
but you get an integer instead of a floating point number. And integers are very
fast and very accurate, and floating point numbers
are kind of a little slow and they have some
accuracy issues. Let's put it that way. And so calculations
love integers, so we want to get an integer. So this is like a way
to trick the JavaScript engine into using integers
when doing your computation. And so this code will run a
lot faster than this code, even though, semantically,
they mean the same thing. So asm.js was
basically, like, can we define a bunch of JavaScript
that does all these tricks that you could compile to? Because you, as a
human, don't want to write all those OR zeros,
although some people did. We should make a
compiler do that instead. And so this was the idea. If you're interested in
the technical details, here's an example
from the ECMAScript spec about why this works. So this is the
runtime semantics that are required by the
language definition for implementing any of
the binary bitwise operators, not just, like, the OR and stuff. So any A @ B, where the @ is
one of the bitwise operators, this is what it does. So the left-hand side is
the result of evaluating A. And then we evaluate B. And
then we call ToInt32 on the left and ToInt32 on the
right, and then the result is a 32-bit integer. So even though you can't
write integers in JavaScript, it does have integers,
sort of, if you, like, write bad code, which
is really interesting. But you can do really
cool stuff with this. This is a screenshot of a video. I will not play the video
because it's actually removed from YouTube
sort of, or whatever. I don't remember and also
playing videos in presentations is always complicated. So this is a screenshot. And I remember this
coming out, actually. This is like the
Unreal Engine running in a web browser at the time. These graphics--
this is a little blurry from some
compression artifacts, but this level of graphics at the time was mind blowing. Like, whoa, I've got 3D
graphics running on my browser? And if you saw the earlier
talk about WebAssembly, we talked a little
bit about-- well, it wasn't about WebAssembly. It was about web GPU. But we've gone a lot
farther than this in terms of graphics in the browser. Like, 3D graphics is
kind of like old hat now. But for a while, it was
super, super impressive. So this is actually-- the Unreal folks
worked with Mozilla to port the Unreal
Engine to asm.js and compile it to JavaScript. And so this is actually, technically, all JavaScript code running in your browser to do this, which
is really cool. But we started looking at
these giant piles being built on top of this weird subset of JavaScript, and it had some holes, because this isn't really the best way to do it, but it is, like, a thing,
and it is pretty cool. So the folks behind
all this decided, hey, we can do something better. Let's just actually
build a real thing that's designed
for this purpose, instead of hacking yet another
layer on top of JavaScript. And so this is how
WebAssembly was born. WebAssembly is, basically,
all the dreams of asm.js, but done in a reasonable
way that people might do if they
designed a thing instead of just taking an
existing system and turning it until it breaks. Here is why Wasm matters-- well, one of the reasons
why Wasm matters. So this is an example of
asm.js code, on the top. You can see there's actually
this string use asm. That was intended to be a
thing that asm.js would do. And the idea was that web
execution engines would look for that special string. Because that's also
like a no-op, right? You're creating a string and
not assigning it anything. So they'd look for the special
string and then be like, whoa, I'm in asm.js, I should switch
into super fast asm mode. And so Firefox did all this work
to detect asm.js and compile it specially. The Chrome folks went,
like, asm.js is still just JavaScript. Why don't we just make
JavaScript faster? And so they actually never
even looked for this. They just like made all of
JavaScript faster, instead. And so it was sort of a silly
idea, but, like, whatever. It happens. So here's the Fibonacci function
written in JavaScript using asm.js. So you can see, we take
n and we OR it with zero. Same thing in the recursive fib calls-- we have all these extra OR zeros, and we return it. And so you'll see, up at the
top there, it says 185 bytes. So this is like
185 bytes of code. If you were to send
this over the wire to a web browser, obviously, you would want to
totally run this through Babel and whatever else, and minify
it, and obscure all the things, or whatever. But naively written,
it's like 185 bytes. On the bottom is a bunch
of unprintable shenanigans that doesn't display well
because it's binary code. And that is actually the Wasm
version of this compiled. So it is 62 bytes. You can see,
hilariously, it actually starts with the letters asm. The magic number built into the Wasm binary format actually puts asm at the start of it. So that's kind of fun. But, anyway, the point is that
the same thing is actually much smaller, in general. And also, because it's smaller
and because it's more regular, it's designed to be
parsed real easily. So part of the problem with
this approach of, like, we learn what the JavaScript does and we make it faster, is you still have to be able
to parse the full JavaScript language, even though we're
not using the full JavaScript language anymore. And so that's--
turns out it's slow. It's much easier
to actually take a binary file that's intended
to be executed as byte code and then just like run it than
it is to turn a language that was written for
humans into byte code and then turn that
into whatever. So it's smaller in size
so it downloads faster, and it's got all
these special tricks to make compiling
it into something really fast, really nice. And so that's the way
that WebAssembly really improves on this. And so, basically,
all of the web browsers now, except
for original IE-- somehow, even though Microsoft
is doing great things with Edge these days, old IE is
still like haunting us like around the corner. It's like, oh, I gotcha. Old IE, like non-Edge IE
doesn't support WebAssembly, but, basically, every
other browser does. So it depends on if you
care about them or not. Maybe in 15 years we can stop-- maybe. So, yeah. So that support is in
all the browsers now and they know how to execute
WebAssembly sort of natively. And so this means we now have
options for all sorts of things to compile to Wasm and then use
them inside of your browser. The way this works in like
a compiler-ish sort of way-- so you wouldn't want to write
that binary code by hand, unless you're into
that sort of thing. You probably want to write a
regular programming language and compile it into WebAssembly. So this is an image made
by Lin Clark at Mozilla who makes these great cartoons
for explaining topics. Her blog posts are awesome. This is one of her blog posts,
and this is the image from it. Basically, you take a language
like C, or C++, or Rust, and you compile it into
this thing called IR, which is intermediate
representation. In this case, it's LVM. It doesn't matter at the moment. And you turn that
into WebAssembly, and then your browser
takes that and it turns it into native code on whichever
platform you're using. As I mentioned before,
everything but IE works at this point, basically. And that's pretty cool. So this has been true for
about a year and a half now. So all the browser
vendors are still working on making WebAssembly
super fast and performant. There's lots of-- like,
you'll test something one week and it'll be slow. And you'll test it the next
week and it'll be, like, three times faster. And you're like, I guess I
updated Chrome in the meantime. But people have been really,
really working on this a lot, and it's been really
neat to see it evolve. However, I've been talking a
lot about Wasm in the browser, but that's not actually what
this talk is sort of about. See, the people who
work on WebAssembly often joke that
WebAssembly is neither web nor assembly, actually. Like, it's not an assembly
language, it's a binary format. And it turns out that,
increasingly, people are interested in using it
outside of the web browser context, even though that
was what it was made for. When I think about this,
I go back to this talk. This talk is Gary
Bernhardt in 2014. It's called The Birth and Death of JavaScript. Has anyone seen this talk? Yeah. You should go watch it. It's a trip. I was lucky enough
to be physically present for this talk,
and it, basically, determined the next like
six years of my career, in some sense. Gary is a wonderful presenter. The idea is that
it's set in 2050, and Gary is giving you a
history of what's happened to JavaScript since 2010. And so it's in this
alternate universe. He has this thing called
Metal, which is basically what WebAssembly is because this
was made up before WebAssembly was invented. But he talks about how, once you can compile things to a thing that runs in the browser, a browser is itself a thing you can compile. So, like, this is
a screenshot of-- if you'll notice,
you have Chrome running GIMP inside of it. But, also, Chrome is
running inside of Firefox. And, like, OK, you may say,
this is kind of a dumb example and it's out there. Well, 2017, another Mozilla
ex-colleague of mine, Dan Callahan, compiled
DOSBox to Wasm and then ran Netscape Navigator
inside of DOSBox, inside of Firefox. So you actually were running,
like, old Firefox inside of then current Firefox. Is this a good idea? Probably not. Nobody's like downloading old
Netscape Navigator to use it, but the point is that when
I say a browser is an OS, it's really actually an OS
now, even more so than before. And you can run really
arbitrary things inside of it, including
other browsers, depending. So now we've talked about Rust
and we've talked about Wasm. It's time to talk about
Rust and Wasm together. I showed you this code earlier
with JavaScript and the Wasm stuff. You'll notice these
byte sizes, as I talked about before. There are other languages,
other than Rust, that work with WebAssembly. So Rust does work with
WebAssembly for a bunch of different reasons. This is a thing
called AssemblyScript, which is a subset of TypeScript
that compiles to WebAssembly. And so if you've
done some TypeScript, this should look
relatively familiar to you. This is a slightly
different example of a package that turns-- it gives you 64-bit
integers in JavaScript by implementing them in
WebAssembly, which, again, is kind of silly, but whatever. The point is that you
can write this code that looks like this TypeScript,
and then compile this to Wasm, and then you-- it works. But if you remember our
slide from earlier-- AssemblyScript is
a great project, but there's sort of this weird
rift happening in WebAssembly where, if you remember
back to this thing where languages have almost
no runtime versus languages have a big runtime,
WebAssembly doesn't give you a runtime at all. So if you're using a
language that requires one, like, say,
AssemblyScript, you have to compile the runtime
to WebAssembly as well and ship it to your end users. So, for example, like Rust-- you can make a
binary in WebAssembly that's, like, 151 bytes I think
is the smallest one we made-- 200 bytes-- something like that. I tried the new
C# Blazor release, which is really awesome
in every respect. But it has a megabyte in size
off the top by default. Like, HelloWorld is a megabyte. They're working on
that, and they're doing a bunch of great work. AssemblyScript
actually has options for four different
runtimes that give you various things, ranging from no garbage collection, to bad
garbage collection, to fairly good
garbage collection, with varying ranges of sizes. But one of the
interesting things here that's kind of
happening with Wasm is, these languages that
have these bigger runtimes, they're better for full applications. Because you don't want a runtime all the time-- imagine you have, like, npm packages implemented in Rust, and because they're npm packages, you're now depending on
55 Rust Wasm packages. You don't want 55 copies
of the runtime involved. And so solving that problem
is one of the big things that the Wasm folks are
working on right now. But it means that languages
like C, C++, and Rust, that have a small
runtime, are, like, extra well suited for this Wasm
world because the binaries are smaller. And that's cool. So we recognize
this kind of thing. And back in February
of 2018, we decided to start a working
group within Rust to build out
WebAssembly support. And there's been a number of
different really great projects that have come out of here. You have wasm-bindgen, which gives you the ability to call into arbitrary JavaScript and browser APIs, and other stuff. You have stuff like wasm-pack, which lets you build npm packages that are written in Rust and compiled to WebAssembly, and your users don't even need to know that they're secretly using WebAssembly.
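As a rough sketch of what that looks like-- the wasm_bindgen attribute and crate are real, but this particular function is an invented example, not one from the talk:

```rust
use wasm_bindgen::prelude::*;

// An ordinary Rust function exposed to JavaScript. wasm-pack compiles this
// to WebAssembly and wraps the .wasm plus the generated JS glue up as an
// npm package, so JavaScript callers just import it like any other module.
#[wasm_bindgen]
pub fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}
```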
And there are all these other tools, and we put a lot of work into making the user experience really pleasant. So we've still been pursuing
this, ever since 2018, and are, in many
ways, at the forefront of a lot of these
kind of things. There was a mention earlier,
in the talk about WebGPU, that there is a Rust implementation of the WebGPU spec that compiles
to Wasm and you can use it to build your
apps on the desktop, and also on the web, and
all this kind of cool stuff. And so Rust and WebAssembly
are like a thing. Like, we're super into it. So that's cool. But now it's on to the last part-- sort of. So we're going to
talk about serverless. Before I talk
about serverless, people like to make this joke-- it's not a funny joke. You can stop tweeting. Everyone knows serverless
still has servers. But it's about what is being
done where and in what way. So this is not my image,
but it's beautiful. Again, I have a link at
the end of the slide. This is a blog post about
the history of the cloud. But we went from
having data centers where you would buy big iron
and put it in a data center somewhere. And then we sort of moved
into this infrastructure as a service world, where you
got like virtual machines, like a VPS, and so
you're able to spin up a virtual machine,
and then SSH into it, and then do all sorts
of things instead. And that was like
a big step forward. And then, eventually, we moved
onto platform as a service where, instead of you
needing to SSH in a machine, you just kind of said,
hey, I have a Rails app. Please deploy it. And then it would get
deployed, and then you didn't have to worry
about things anymore. Serverless is kind of the latest
evolution of these ideas, where instead of you
deploying an entire app, you deploy individual functions. And you're able to scale
them independently based on-- there's a big sale
today, and so everyone is using the log in and
send-Steve-money parts of the website, but nobody's
using the read-the-blog part of the website. So let's just spin up
only the parts that matter and then leave the
parts that don't down. And that saves you
money, in theory. In the end, all
your money is going to Amazon, no matter which
one of these that you use. But I think that we sort
of missed something. So lots of people
talk about this and they're like, OK, physical
machine, virtual machine, set of virtual machines, sort
of, and then now it's functions and not even a virtual machine. But I think we sort of missed a
really interesting shift here. And that is this question of-- what is the API that your
hosting platform offers to you, as a user? And, most of the
time, when people think about APIs
for the web, they think about like JSON
being posted somewhere, but that's not
necessarily what I mean. I mean, like, what is the
interface by which you give your application
to a provider to have them actually host it? Because there's actually a
really big and interesting shift that happened
between infrastructure as a service and
platform as a service that we kind of lost a little
something in the shuffle here. And that is this part. So in infrastructure
as a service, it says, the operating
system is the unit of scale. That is, the API the
VPS gives you is Unix. That is a lot of
acronyms in one sentence. Jeez. Maybe this is a buzzword
talk, after all. But the point is that you could
deploy anything that implements the Unix API, if you will. Like, anything that
compiles on Unix, you could put into
a virtual machine. But when we got to
platform as a service, it's now like you have a Rails
app and you just deploy it. And the difference
there is, a Rails app is like a subset of Unix-y stuff. There are many more things
that run on Unix than run on, for example, Rails. I'm using Rails as an example
here because of my next slide, but this is true of any
of these hosting services. You need like, now we support
Django, now we support Node, and now we support
Go, and whatever else. And so they had to
offer specialized APIs because the application was
now what you would give them. It wasn't like, here's a box. Go crazy. Like, they had to know that
you were running a Rails app and how you would
deploy it, which meant that now you
would have to build specialized ones for each kind
of thing you wanted to deploy. And that was much harder to do
because of the number of things you needed to do as the provider
of the deployment platform, whereas it was just Unix before. So I went onto archive.org and
I dug out the Heroku initial web page. This is some choice-- what is this, 2008? Yeah. October of 2008-- this is like
the pinnacle of web design. You got the beta tag up there--
the little ribbon, old Amazon Web Services logos,
and Ruby, and Rails, and all this kind of stuff. This is great. I love this. But, yeah, Heroku was originally
just a Ruby on Rails platform. And that's because they only
had the time, as a startup, to implement one
particular kind of API. And they picked Rails
because, at the time, it was like the biggest one. Rails was super hot
stuff back then. And so this was
this weird problem where all these companies
that started in this era had to figure out which
languages they were supporting and at what time. And if you were a user of a
slightly more obscure language, you'd have to go beg your
favorite hosting provider to add support for your language
so you could continue using it. And that didn't really work. So there's this
company that existed sort of contemporaneously,
a couple of years later, called dotCloud. And their idea
was, what if we had all the benefits of
the scaling sliders and stuff that Heroku
offers, but instead of being a Rails platform,
we're like an anything platform. And this is probably
one of the companies that succeeded most massively
that you probably haven't heard of or forgot even existed. The reason why this
is so blurry is that I had to go dig out
images because like their web presence is totally gone now. But you may know these
guys by this logo-- Docker. DotCloud needed a way to be able
to support any kind of thing while not getting
into the specifics. And so they built
out some technology to do that for their
hosting platform. And whenever they finally
decided to release it, they decided to give it its
own name and call it Docker. And it turns out that
Docker was way more useful than a particular
way to host your websites, and so it kind of
grew super massively. And then other
people recognized, wait, I have a way that
I can manage my hardware and not care about the specifics
of the kinds of applications I'm running, and
that's super useful. And that's how we got this
giant thing of Kubernetes to start managing this stuff. And all this stuff
around DevOps things started happening because now
we have all these tools where we've sort of abstracted-- we've gone back
to where before it was like an OS-ish thing as
the sort of like unit of scale again, but with all the benefits
of being able to arbitrarily scale different sort of things. And so then we had to figure
out ways of managing them. And this is when
service meshes started happening and all this stuff. And you can make an
entire career just out of managing Docker containers,
which is pretty neat. And so if you sort of think
about the way that this works, you can take your whatever
web app you want to write, and whatever thing you
want, and you put it into a Docker container. And the person that's selling
the servers or the server capacity, they don't have
to worry about whatever is in the container. They just like know how
to run Docker containers. And so we sort of found
ourselves in this Docker-ish sort of ecosystem. Part four-- the
future of serverless. So about Docker-- and
as I mentioned before, Docker is this, like, container. The reason that Docker is cool and why containers are cool is, they limit the ability of the thing inside the container to affect the stuff
outside of the container. That's why it's
called a container. Right? Like, the idea is that
you just pack it up, and you ship it off, and not
care what's inside it all. What's inside can't come out. Just like a real
shipping container, until you hit a storm
and stuff breaks. But the idea was
that you could know-- because the outside
of the container would set the rules
for what was allowed to happen inside the container. You could be sure that your
customers weren't uploading like break into my
credit card database and steal everyone's
credit cards dot rb. And it would be safe for the
people outside of the container to run the arbitrary stuff
inside the container. But you know what
other kind of thing runs arbitrary
code in a safe way? Web browsers. Like, I made a joke
about web browsers being operating systems
before, and, obviously, Docker is not an
operating system, but actually a
better analogy would be that Firefox is Docker, just
in a different kind of way. You download and run
arbitrary JavaScript code inside of your
browser, and you know that it can't break
outside of your browser and do a bunch of other
shenanigans on your computer. In the same way that
your hosting provider can run a Docker container with
your arbitrary code inside of it and know that you're not going
to break outside the container and do all sorts of other stuff. And browsers have been around
a lot longer than Docker has, and they've had to
deal with a lot of way worse kinds of attacks, maybe. One of the things that
I realized at one point when a lot of stuff was
happening with Firefox and security vulnerabilities and
stuff-- and I was like, there are governments trying to
hack Firefox and Chrome to do spy stuff with nation states. This isn't like I
put a little repo up on GitHub and it's open source,
please pay attention to me. This is like other governments
are sending their best computer hackers to mess up your stuff. It's like a really
adversarial environment. And so browsers have
gotten very, very good at containerizing JavaScript and
making sure that it doesn't do things it's not supposed to do. So that kind of takes
me back to this diagram from earlier about how
WebAssembly kind of like works. You compile arbitrary
programs to WebAssembly and then that runs
inside your browser. And so this is kind of what
that original tweet was about. Like, if you put all
these things together, all these trends that
have been happening over the last couple of
years, and you, like, squint in the right
way, WebAssembly is basically Docker, in
a very strange sense, but inside your browser,
not on your computer. And once people
started realizing that, they started asking
themselves, well, why shouldn't it be
on your computer? And that's how we got WASI. This is the actual logo
for WASI, by the way. Somebody opened an issue
on the repo and was like, could you get a better logo? And they closed it, being like, sorry, we're trying to do real technical work here. This logo is fine for now. It doesn't actually matter. I was like, that
seems very fair. So WASI is short for WebAssembly System Interface. It's sort of like, if you think of WebAssembly as only letting you run stuff inside of the container, WASI is sort of like an API that implementations of Wasm can implement that lets you set permissions around what is allowed to break outside of the container. So, do you want your program running in WebAssembly to be able to access the network? Do you want to let it be able to access the file system? And you can control this on a per-WebAssembly-module basis. So it's got this capability system, and that's really cool.
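Here's a minimal sketch of that idea from the Rust side-- the wasm32-wasi target is real, but the file name and what any particular runtime decides to grant are assumptions on my part:

```rust
use std::fs;

// Ordinary Rust, compiled for WASI with:
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
//
// When a WASI runtime executes the resulting .wasm, this read only succeeds
// if the host explicitly granted the module access to the directory
// (runtimes usually call this "preopening"). With no capability granted,
// the open fails -- the module can't reach outside its container on its own.
fn main() {
    match fs::read_to_string("data.txt") {
        Ok(contents) => println!("read {} bytes", contents.len()),
        Err(e) => println!("no file system capability here: {}", e),
    }
}
```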
When I think about breaking out of containers, this is the image that
always pops up in my head. In what way do you want to let
it bust out of what's going on? And so WASI has led to
this situation where you can write desktop
WebAssembly applications where you compile your C
code to WebAssembly and then run it natively on the
desktop because you could never run C programs natively
on your desktop before. But now you have this
benefit of the sandbox, where it won't let the stuff
break out into your system. So if you would just
compile your C natively and it had a big pointer
error in it somewhere, maybe that would be able
to break into your computer and steal your stuff. But if you compile it into
WebAssembly instead and then run that on your computer,
it's slightly slower, but it'll just crash whenever
that pointer goes wrong, instead of causing
a catastrophic error. So that's the reason that
you may want to do this, is that you gain the sandboxing
ability, not exactly for free, but pretty cheaply,
and in a way that's been tested inside of
web browsers for a really long time. And so that's kind of cool. Last super big buzzword of the day-- I'm almost done, I swear-- edge computing. This is a term that's popping
up more and more lately in sort of the web world. And I definitely do
not want this talk to be a Cloudflare pitch, even
though I work at Cloudflare. This is a thing
that we're doing, and I'm trying to explain to
you why we're interested in it. Fastly is also doing
this kind of stuff. And Cloudflare and Fastly
are sort of but not really competitors of each other. And so I'm going to talk
about the awesome things that are going on at
both of these places because they're doing really
cool stuff with Wasm and edge computing stuff. So this is a map of
Amazon Web Services and where all the
locations are in the world. I think the red ones are
planned expansions that don't exist yet. As we all know, us-east-1-- actually, I don't know if you
all in Europe use us-east-1 as much as people in the US
do, but that's where like everything-- I knew someone that had a thing
set up in their Slack channel where if there was
a bad thunderstorm in northern Virginia,
it would alert them. And they're like, I'm
doing cloud monitoring. Because if there's
like a bad thunderstorm and it shuts off the
Amazon data center, our stuff is going to go down. And I was like, that's
a terrible joke-- monitoring the clouds
above your clouds. But if you're
deploying stuff on AWS, these are the locations in the
world that you can put stuff, at least as of two
or three weeks ago, whenever I made this slide. And so that's cool, and
there's a lot of them. But if you compare this to,
say, Fastly's map of points of presence around the
world, there is a bunch more. And if you look
at Cloudflare's-- I made this a couple weeks ago. This is actually wrong. We've added like
five since then. But, like, we've
got a lot of them. And so what's interesting
about these CDN-ish companies is that they're building this
global network of CDN stuff, but they didn't really
think about the fact that building a bunch of
data centers for CDN purposes is actually not that different
than building a bunch of data centers for EC2-ish purposes. And so everyone sort
of in the CDN game realized, like, wait a minute. We have piles of
servers everywhere all around the world. What would happen if we let
people run code on them, instead of just caching
your images or whatever? And so now it's time
for an old buzzword. I don't know if everybody-- I know a lot of you
are Java programmers, so you're probably
more familiar with this than the last audience
I talked to about this. But there was this
idea with Java that you'd write
once, run anywhere. And at the time, in the late
'90s, when this was a thing, it was like a real thing
where it was like, whoa, I can compile it to the JVM. And then it runs on any
platform the JVM is on. And so it's sort of-- this idea
is like similar with the CDN thing, but what if
run anywhere didn't mean like Windows and
Linux, but it meant the whole way around the world? And so WebAssembly is
letting people do that. And the reason why is that the
edge has a bunch of things. Like, all of these data
centers are not as powerful as Amazon's data centers, and so
you need to do certain things. And I don't have time
to get into that; I just wanted to stay at the broad-strokes level here. But we can talk about it
later if you're curious. But all of the stuff
going on in edge compute is WebAssembly focused. So Fastly has this project
called Terrarium and Cloudflare has this thing called Workers. And they basically both let you
run WebAssembly at the edge, like on the server that
is living somewhere around the world. And what's interesting
about these things is that they both have
significant components of Rust inside of them. So because we invested with Rust
into making WebAssembly tooling great, when people need to start
building WebAssembly stuff, they're increasingly
picking Rust to do it. And so I think in the same
way that Go got really big after Docker existed and
it's mostly written in Go, and everyone was
like, it just makes sense that cloud native tooling
is written in the same thing that the Docker is written in. WebAssembly is kind of
doing the same thing. So Lucet is Fastly's
WebAssembly runtime. And Wrangler is the sort
of command line tool that you upload your code and
stuff with Cloudflare stuff. And so these are
both written in Rust. And there's a whole bunch
of other Rust shenanigans. Most of them have much less cool logos, so I just left
these two up here. Logos look great on slides. But, increasingly,
we're seeing people that are doing Wasm
stuff build it in Rust because they just kind of
mutually reinforce each other. And that's really cool because
then, as other people start seeing that momentum build,
they're also like, oh, WebAssembly and Rust are this,
like, cool thing together. So this is sort of
the common denominator with a lot of this WebAssembly
shenanigans that are going on. And we've seen an
increasing number of people want to program in Rust
because they get interested in WebAssembly and vise versa. And so that's really cool. I am actually out of time. So I have this very last
slide, this sort of a summary of the points of this talk. The first line is Rust
loves WebAssembly. Like, WebAssembly stuff
in Rust is really cool. And we're working
super hard on it. And if you're
interested in Wasm, we'd love to have you learn some
Rust to help build its tooling and do all that kind of stuff. The second one is
that WebAssembly and serverless computing
are kind of like-- it's not totally clear
it's a super great fit yet, but it's like an
interesting thing that is happening where it turns out
that it seems like WebAssembly and serverless are surprisingly
and increasingly becoming part of each other's worlds. And that's a thing that
I would have not thought of like four years
ago when I first started getting involved
in the shenanigans. Like, I thought everything--
like WebAssembly was purely in the browser. And now, increasingly, it
seems like in the WebAssembly ecosystem, no one cares
about the browser. They're talking about
desktop applications. And they're talking about
serverless stuff, and all this other kind of shenanigans. So that's a really interesting
thing to keep an eye on. And then, finally, edge
computing is a cool idea. I don't want to talk about it
a whole lot, but it's neat. So with that, I'm going to go. Thank you, so much. Here's a bunch of links. Yeah. [APPLAUSE] [MUSIC PLAYING]