MALE SPEAKER: Please
take your seats. JUSTIN BECKWITH:
Greetings, folks. How's it going? Excited to be here today. We're going to be talking about
building Node.js applications on Google Cloud Platform. We're going to be
building some apps, deploying them to
App Engine, talking about taking those
same applications, running them on Kubernetes
and Container Engine. We're going to use the
gcloud NPM module to connect our app with some
of the cool services we heard about this week, things
like the Google Cloud Vision API, Google Cloud Storage. And then finally, if we
have some time at the end, we're going to dig into
the way that JavaScript is helping change the face
of the internet of things. We've got a lot to do today. So before we dig in, a
little bit about myself. My name is Justin Beckwith. I'm a product manager
here at Google. And before that, I was a web
developer at a bunch of places. If you want to
take a look at any of the code we're going
to do today, all of it's available on GitHub. Feel free to go check
it out, clone it. If you want to
ask any questions, reach out to me on
Twitter @justinbeckwith. And for me, this
is really exciting because Node.js has been a
big part of my development experience for the
last five or six years. I was originally on an
engineering team back in 2010. And we kind of
stumbled upon Node.js because we were primarily a
front end focused development team. And what was really attractive
was that finally, we were able to take all these
years of experience we had building front end
JavaScript web applications and apply that same knowledge,
all that expertise we had built, on the
server side as well. We didn't have to split
between front end and back end. We had one team
that homogeneously could learn one
language, and then use all of that
collective knowledge. And one of the things we
were really surprised by when we started experimenting with
it was that, without doing very much work at all, we were
able to get great performance out of our application. And you'll see that today. Just kind of the way
that Node.js is built makes it really easy to
build performant web apps. When we started
digging in, we started using Express as our
web application server. And we were a little
bit surprised by this because most of the
frameworks we'd used before were part of the core
of whatever language we were using. But most of the packages
out there for Node.js, they're really not in the core. They're provided by this third
party module system called NPM. And it's wildly popular. And there's just about anything
you could ever want out there with it. Just last month, there were
nearly four billion downloads of NPM packages by
authors of Node.js apps who were building and
using these dependencies. And then over the last
six years, a lot of things have changed. This started up as
a technology where maybe if you were
building a new app and you wanted to experiment,
or you were a startup, you might decide to use it. But Node.js, it's ready
for the enterprise now. There are real companies
taking a big bet. And with things like
the Node Foundation and an 18-month
support cycle, it's finally ready to use for
enterprise web applications. So in that same
time, a lot of things have changed on
Google Cloud as well. And it's now also a great
place to run web apps. As it would turn
out, Node.js is built on top of V8, the JavaScript
execution engine that's used by Google Chrome. So we have years and
years of experience fine tuning JavaScript to
run great on the client. And we're bringing
that same expertise to helping build performant
applications on the server. And when you are ready
to bring them to us, you can take advantage of
Google's infrastructure so you're not
worried about uptime. You're not worried
about scaling. We're just going to
manage all that for you. And then lastly, when you want
to extend your application to use things like Cloud
ML, Vision API, BigQuery, some of the cool stuff
we've talked about all week, you can take advantage
of our APIs and services. So getting started,
when you want to deploy your application
to Google Cloud, really three places that
you're going to look at. The first is App Engine. App Engine's been
around for a while. It's the most storied
part of Google Cloud. And the idea is
like I mentioned. You just bring your
code, deploy it to us. We'll scale it. We'll manage it for you. If you're an organization
that's already using Docker, Container Engine is a
great option as well. It's essentially the
Google Cloud hosted version of Kubernetes. And it's a great
option for those of you that are already into
the container revolution. And then finally, if you want
to have complete control, you want to run a Windows VM,
you want to run a Linux VM, and you want to kind
of own that end to end, you can always create a virtual
machine on Compute Engine. So first thing I
want to do, before we dig into the first demo, is we
had an announcement on Monday. And Node.js on App
Engine just entered beta. So everything that
we're going to do today, you're going to be able
to go home and do as well. [APPLAUSE] Thank you. So for the first
demo, we're going to build a simple web
application with Express.js, the most popular web
framework for Node, and take a look at what
deploying that to App Engine looks like. So I'm just going to use the
Express tool, nothing App Engine specific here. And we're going to say
express nodejs-next. And this is just
creating an application on my local machine. So I'm going to do what
the instructions tell me to do here, cd into the
directory and run npm install. Now, I'm installing these
dependencies locally. But when you deploy it into
Google Cloud into App Engine, we're also going to run
the NPM install for you. So if you have any
native dependencies, they're going to get built
on the right architecture. So just to show the standard
app running locally. We're going to do npm start. Our app is up and running. And let's take a
look at this locally. So nothing fancy. We've all probably
done this 100 times, getting started
with our projects. So let's make a
simple change to it and then deploy
it to App Engine. So I'm going to cancel this
out, open up the project in my editor of choice. And all we're going
to do here is instead of it saying
"Express," we're going to say "Express Now
on Google App Engine." All right. So when you're ready to
deploy your application, you're going to want to go
download the Google Cloud SDK. That's at cloud.google.com/sdk. And this provides the
GCloud command line tools that you can use to
orchestrate your deployments, along with a lot
of the other things we're going to see in the
UI in a little bit here. So to deploy, I'm just going to
do gcloud preview app deploy. First thing it's going to do is
take a look at our application and figure out, OK,
this is Node.js. Let's generate an app.yaml,
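Its contents for a Node.js app on the flexible environment look roughly like this (a sketch; the generated file may differ):

```yaml
runtime: nodejs
vm: true
```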
a small configuration file that lets us know what type
of application you're running. And so I'm OK with
generating that. And finally, it's
going to tell us, we're about to deploy our
version and we're off. And you see here we've got
nodejs-next.appspot.com. This is a project that
I'd set up earlier. I'm deploying to
a specific version because I don't want this to
replace the version I have deployed out there right now. And behind the
scenes, we're going to be going off and creating
our app in the App Engine flexible environment. So this is going to take
a little bit to deploy. While that's
happening, let's take a look at a version of
this that I've already deployed out to App Engine. So this is the Google
Cloud Developer console. And this is the Node.js
project that we had just created and deployed into. The dashboard's going to
give us a good oversight of the performance
of our application, let us take a look at requests. We can also dig into
things like latency, traffic utilization, lots
of interesting statistics available from the dashboard. What we're going
to take a look at are the versions
of our application that are actually available. So we just kicked
off a new deployment. Every deployment
into App Engine is going to create what
we call a new version. Now, you'll notice that when
we clicked on that link, or when we started
our deployment, it didn't say that
it was just going to nodejs-next.appspot.com. It had this version
number in front of it. And the reason for that
is that every deployment that you do in App Engine
creates a new version. And we want to give
you the opportunity to go check it out,
make sure it works, validate everything
with your application, your deployment is working
exactly like you expected. And then only when you're ready,
reroute all of your traffic over to this new version. Now of course, during
development time, you can set it up so that
it does this automatically. But it's a really
great feature if you're building an application
that has to have five nines of
reliability and you want to be extra, extra
sure that you're not going to deploy a bad build. But first, let's check
out the version of this that I've already deployed. So I'm going to click
here on the version V1, open it up in a new window. And we have essentially
the same thing that we just started the
deployment for right now. Now, as well as this
V1 version that's getting 100% of
my traffic, you're going to notice we have
yellow, green, red, blue. These are just four
different versions of my app that I deployed that have a
different background color. Now, the reason I'm
bringing this up is because this
versions feature, it's not just useful
for validating my build before I take a new
deployment into production. It's also great if I want to
do things like A/B testing. What if we wanted to
know which background color of our application is
going to provide the best experience for our users? And we want to be able to
go measure that and use that to make decisions
about the product that we're deploying into cloud. What I can do is I can
actually split the traffic across those four versions. So let's try doing that. I'm going to take 25%
of blue, 25% of green, and so on until
we get up to 100%. Finally, 25% of yellow. We're going to split on
cookie because otherwise, this would be a difficult demo. That happens sometimes, so we'll
just give it a quick reload. Lots of new things getting
deployed this week. Some of this stuff you're
seeing for the first time, so we're working out a
few of the kinks as we go. Great. So it's up. And look at that. Our new version is
actually available. So this is the version
we just deployed. Let's do our traffic
split one more time. I'm feeling good
about it this time. So we have red, 25% to blue,
25% to green, and 25% to yellow. Save and-- well, we'll skip
past that at this stage. That's OK. These things happen. New code. We just had to change it out
to the flexible environment. But the idea is that
as I come back to this, what I'm going to see now is
that every time traffic comes in, one out of four users
will get red, one out of four will get yellow,
green, and so on. So one of the things
you might notice up here is that each of these
versions that's running mentions that we
have two instances. Now, with the App Engine
flexible environment, you can define how many virtual
machines your application is going to run on. And you can limit
that to one, two. You can tell us to auto scale
it, which it does by default, going up to 20
instances. But one of the things
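That instance behavior is configured in app.yaml. A sketch, with keys as they existed in the flexible environment beta (names may have changed since):

```yaml
runtime: nodejs
vm: true
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 20
```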
that's interesting is that, unlike a lot
of platform as a service systems you might
have used, we actually expose the underlying
infrastructure that your application's
running on. So if we want to get
an idea of what's happening with the V1
that's deployed over here, we can click in,
and we can actually get the list of the virtual
machines that are available. So now, let's say that I
want to actually connect to this and SSH. You could SSH
directly into that VM. Another useful thing
that you can do from here is let's say that
we wanted to go and I'm going to try to do
the set traffic command. Only this time, I'm not
going to use the UI. I'm going to try to
use the command line. You can also use this tool we
have called the Cloud Shell. Cloud Shell is an interactive
shell environment that includes the Google Cloud SDK. All the commands that I was
running locally on my terminal you can run from here as well. So I'm going to do gcloud
preview app services-- or let's do versions list. This is going to tell us all
the versions of our application that are deployed. And what I want to do
is I want to set traffic so that we're going to
route to the new version that we just went and created. You can see it has
a time stamp on it. So to do that, I'm going to say
gcloud preview app services. And we're going to
say set traffic. And when we do that, we could,
of course, do a split here. But instead, what
I'm going to do is just send all of my
traffic to that version. We'll say yes. And in just a moment-- good. It looks like this
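The commands look roughly like this (the service and version names here are placeholders from the demo; exact flags may vary by SDK release):

```shell
# list the deployed versions of the app
gcloud preview app versions list

# send 100% of traffic to a single version
gcloud preview app services set-traffic default --splits v20160324=1.0

# or split traffic across versions by cookie, as in the UI demo
gcloud preview app services set-traffic default \
    --splits red=0.25,blue=0.25,green=0.25,yellow=0.25 --split-by cookie
```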
one actually set. So now if I come
back over here and go to nodejs-next.appspot.com,
we're going to see "Express Now on Google App Engine," the
version that we just deployed up and running. And we were able
to do that safely in a way that let us try it
before we actually sent traffic to it. So this was a pretty
simple application that we just made, right? But let's say that we
decided we don't want to run this on App Engine anymore. Or we want to run it
in multiple places. What if you wanted to take
that same application you just built and you want to
go run it on Kubernetes? When you're ready to do
that, behind the scenes, all we're really doing with the
App Engine flexible environment is creating a
Docker image for you when you do your
deployment that is run on your virtual machines. So you can actually
dig in and take control of the Docker container. You can customize it,
redeploy it to App Engine. Or you can just
take the whole thing and run it on Container
Engine, Kubernetes, your own local machine,
wherever you want to do that. So let's take a look
at how this works. Right now, we create the Docker
file automatically for you. But if you want to
get a hold of it, you can say gcloud preview
app gen-config --custom. And what we're going to do is
generate a Docker file for you. So quick show of hands. How many people here
have used Docker before? So a fair number. That's actually fantastic. I was a little bit
worried I was going to start talking about
Docker and everyone was going to get bored and
get in that post-lunch phase, everyone taking a nap. I don't want any of that. So this is the Docker file that
we're using behind the scenes when we're actually
building your image and deploying your application. But let's say we wanted
to do something special. We want to do
something different. I want to use ImageMagick. I don't know if
anyone else has tried to use ImageMagick on other
PaaSes, but it's kind of hard. Or I want to do
something like use FFmpeg to do video encoding. All you have to do is come
in, and just like you would normally do with a
Docker container, do your apt-get-- well,
first we'll do our update, and then your apt-get
install imagemagick. Now, when we do this,
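The customized Dockerfile looks something like this (a sketch; the exact base image and steps that gen-config emits may differ):

```dockerfile
# generated base for the Node.js flexible runtime
FROM gcr.io/google_appengine/nodejs
# our customization: add ImageMagick to the image
RUN apt-get update && apt-get install -y imagemagick
COPY . /app
RUN npm install
CMD npm start
```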
a standard Docker build is going to
work just fine. Docker run. It's going to work. Deploy to Kubernetes. It's all going to work. And it's the same
application that we're deploying to App Engine. There's no lock in. We're not worried about
it being vendor specific. It's just going to be
portable across any of-- this code will work in
any of the engines that we have and on your machine
and externally as well. So a few of the things
that we just covered. The big thing I want
to drive home with this is you can run your Node.js
application as it is today. There's no lock in. You take the code with you,
run it wherever you want. As far as NPM modules are
concerned, use any NPM module you want, within reason. I happened to use Express
for this application. Hapi.js, we're going to see
an example of that later. It'll work fine. Jade, any of the modules
that you're using today are going to work just fine. We can automatically
scale this to millions of requests for you. Out-of-the-box, we're going
to scale up to 20 instances. And you can bump that
up as you need to. The versioned upgrades and
traffic splitting features are really great, especially
once we figure out why that wasn't working. And lastly, and
most importantly, you can bring your own runtime
with the Docker support. And it's not just for Node.js. This comes with all the App
Engine flexible runtimes. You have complete control
over that environment and it's transparent. You can look in and see
what's happening on the VM. So that was a really
simple application. I mean, that was the classic
kind of hello world deployment. But it's pretty rare that
our apps in production are actually going
to look like that. They're typically
not just App Engine and one client hitting them. Usually, they look
something like this, right? Things start to get complex. We have multiple modules
or multiple micro services that we're managing. We're using things like scalable
pub/sub and message queues to get messages between them. We're using a
scalable file system in the cloud to store
all of our files and maybe serve them over a CDN. We need databases. We need things like Bigtable,
Datastore, or MongoDB, or CouchDB, or
RethinkDB, whichever database you're using today. We need to be able to run these. And usually our
architectures end up looking something
a little bit more like this by the
time we're done. When you're ready
to take advantage of all of these services
that we provide-- things like Cloud Storage,
Cloud Datastore, BigQuery-- that's when you
use the gcloud NPM module. Saying npm install
--save gcloud in your project opens up access to
all the APIs and services that we have on Google Cloud
so that you can expand and take advantage of the other
services beyond just App Engine or Container Engine
that we make available. And really, these come
in these big buckets. There's storage for things
like using your databases. You can use the
databases we provide. You can use MySQL with Cloud
SQL or GCS or Datastore. You can do that. Or you can bring your own
with things like Mongo DB, on Compute Engine,
or on Kubernetes. We have all these cool
things with big data. We've got BigQuery for
accessing millions, billions of records in near real time. We had some announcements
around Datalab. Pub/sub for message queuing. And then you get into the
APIs, which in my opinion, those APIs, these are the real
magic of Google Cloud, right? These are bits of
technology we've been incubating here
internally at Google for years and years and years. And finally, we're able
to take some of these, wrap a simple API around them,
and then make them available for you to use. Stuff like Cloud ML,
speech recognition. I know that's something
I've struggled with. The Vision API,
detecting if we're looking at a chair or an
airplane automatically for you. These are the real
magic of Google, and they're all exposed. I mean, think about the types of
apps you could build with this. You could reach
billions of people. You could change the
world with the types of applications you could
build with these services. I used these services to
build an app about cats. That is actually my cat. So I'm sure, like many of
you, I frequent Reddit. One of the subreddits
that I enjoy is r/aww. And one of the things I
noticed while I was on Reddit was that Reddit likes to talk
a lot about how they like cats. This comes up on the internet
as well as among developers. We're all cat friendly. Yet every time I
went to r/aww, I observed that I felt
like I saw more dogs. I have a dog and a cat. And being a kind of-- I
needed answers to this, right? I didn't want to just
leave it to perception. I needed to do this analysis. And I wanted to know, does
Reddit like dogs or cats more? So a casual observation was
not going to stand here. So what I did is I
built an application. And what this does
is it goes, it loads the first
three pages of r/aww. You can actually go look at
this right now if you want to. You're about to see
the same pictures. It loads them up, sends them
into Google Cloud Storage, runs the Cloud
Vision API against it to determine if it's
a dog, a cat, or both, or maybe it's a turtle. I don't know. It figures out what it is. Publishes those results
back through Cloud Pub/sub, and then finally through
to the front end module. Now you notice here I separated. We haven't talked
about modules yet. I have an App Engine front end
and an App Engine back end. So I've separated the concerns. You can do that inside
of one application. And that way they can
scale independently. I'm going to have a lot more
load on the back end process than on the front end process. It's important, when
you're answering questions like this, that you can scale,
because I don't want to wait. I've got to know. All right. So let's try it out before
I keep talking about it. So this is Cloud Cats. And this is going to go do this
analysis for us in real time. So right now, we're
going to Reddit. I hope it's up. And there we go. We're starting to get
the first images in. And of course, the first
one-- is that a bear, or is that actually a dog? Either way, it's
counting as a dog. That's all that matters. All right. The cats are getting back in it. We've got some cute kittens
hiding behind there. That one looks like a tiger. OK, it's fine. Aww, we have cuddling
dogs, funny-eared dog. But you can see dogs on the
right, cats on the left, and we're counting
them all the way. And this analysis is
happening in real time with the Google Cloud
Vision API and Pub/sub. All right. I feel like I've been
seeing more dogs than cats. We've got about 75
images to process. That should count
as, like, four dogs. Look at that. There are four dogs
in that picture. And look at that. My casual observation
confirmed with data. [APPLAUSE] Organizing the world's
information, people. This is what we
do here at Google. All right. So let's take a
look at the code. Or maybe not. We'll get there. All right. Into cloud cats. So first thing
you'll notice here is we've got the web module and
the worker module separated out as two independent
Node.js applications. I'm going to dig
into the worker. First thing I want to call
out, as I mentioned earlier, this is using Hapi
instead of Express. So any web framework you want
to use-- no web framework, I hear that's popular-- you
can just use whatever you like. We have no restrictions on the
NPM modules you can import. The real magic here, though, is
happening inside of vision.js. This is on GitHub. You can go check this out. You can see here the
first thing we do is import the gcloud NPM module. Now, this NPM module, it has
access to all of our services. So I just happen to be using two
in this part of the code base. I'm using the Vision
API and the Storage API. Now, the real interesting piece
of this code is right here. It's these four lines of code. So lines 22 to 25. In those four lines of code,
we're downloading a picture from Reddit. We're saving it to
Google Cloud Storage. This right here
is using Node.js. It's just using the
standard Streams API, just like you would
with FS, to pipe the image after we
download it directly to Google Cloud Storage. And then, once
that's done, we're going to run it through
image analysis that tells us if it's a dog, a
cat, or neither, or both. That's magic. That's four lines of code. We just deployed something
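The dog-or-cat decision over the returned labels is then straightforward. An illustrative sketch (this function and these label values are my own, not the repo's actual code):

```javascript
// decide what an image is from Vision API-style labels
function classify(labels) {
  const isDog = labels.indexOf('dog') !== -1;
  const isCat = labels.indexOf('cat') !== -1;
  if (isDog && isCat) return 'both';
  if (isDog) return 'dog';
  if (isCat) return 'cat';
  return 'other';
}

console.log(classify(['dog', 'mammal']));  // 'dog'
console.log(classify(['cat', 'dog']));     // 'both'
console.log(classify(['turtle']));         // 'other'
```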
into a scalable file system in the cloud and
ran an image analysis of it that I could never-- it would
have taken any of us years and years and years of
collective knowledge to go build. And that's really the magic
that's behind this thing. So after we've evaluated all
of these images, we're then, I mentioned, publishing it
up through Cloud Pub/sub. And that's what lets our front
end nodes know that the image analysis on one record is done. Let's take a quick
peek at that code. Again, using the
gcloud NPM module. And Google Cloud Pub/sub has
this concept called a topic. Think of it kind
of like a channel. And the idea is that I
can publish many things into a topic, and then
consumers will all create subscriptions to that
topic to read it out later. You'll see this in a minute. So we're going to get a topic. And then we're going
to publish our events as they get published
from the back end service. So we're going to
say, is it a dog? Is it a cat? Is it both? It's important to be fair. And then finally, we're
going to take that data and publish it out
to the front end. So now let's take a look at
the front end web process. This one also happens
to be Hapi.js. But the real interesting code
is in here inside of Cat Relay, probably the first and
last time we'll ever create a node module called Cat Relay. And here you're going
to see similar code importing the module. I'm using PubNub to
send messages out to the connected web
clients, but you could use Firebase for this as well. I just happened to know PubNub. Grab the same topic,
grab the subscription. And then here's where the
real magic is happening. Get the topic, the subscription. Finally, once each of
these are published, we're going to send
them out through PubNub with the final message letting
us know if it's a dog or cat. Now, while I was building
this application, one of the problems
I ran into was I kept getting some data
inside of this message object that I didn't
really expect. And these kinds of things can
be hard to debug, especially when you're dealing with live
data coming from something like Reddit, where the shape
of the JSON kind of moves around a little bit. And traditionally,
debugging this locally is something that I could
do, but debugging it once you're in cloud
gets kind of hard. You have to go sifting
through all the logs. So one of the tools that
I had at my disposal that I want to
show you guys next is called Google Cloud Debugger. So check this out. I'm going to come back over
to the Developer console. We don't need this right now. And I'm going to
switch my projects over to the Cloud Cats project. And then I'm going to go into
the Google Cloud Debugger. Now, I happen to have
published all of my code into Google Cloud
source repositories so that I have access
to it for debugging. And you can see over
here the same code that we were looking at
on my machine is available here inside of the
Developer console. This is where it
gets really cool. I can come into Cat Relay, find
that one line of code where I wanted to know exactly what
was inside of that message object that I was passing
through the PubNub, and we can set a breakpoint. Now, this breakpoint, it's
not a traditional debugger breakpoint, if you've used
Eclipse or Visual Studio. It's a passive debugger. What this means
is that people can be running against our
application in production, and we're able to capture call
stack and variable information without affecting performance,
without stopping execution. So we can actually
come back over here. I can start to run the
simulation again, get a few more pages of data. And as soon as it hits
this-- look at that. It just hit it. That code got hit in real
time inside of App Engine. I'm able to go in, take
a look at the call stack. And most importantly,
I can come and debug the variables that are
in the local scope. So I can see, OK, this message. I want to know what
data is in there. Here's the ID. Here's the data object. Here's the URL that is
going out to Reddit. And the type wasn't dog or cat. It was other. So this is a pretty
incredible experience if you've ever had a
problem in production that you needed to debug. Another useful thing
we have out there for debugging production issues. Let's say you're noticing
a decrease in performance. We can also use
Google Cloud Trace. Now, what Trace does is it
measures the amount of time during my HTTP request
and breaks it down into each other network
request that I'm making during the initial one. So for something
like Cloud Cats, we were talking to Reddit,
we were downloading images, we were uploading them
to Google Cloud Storage, we were calling APIs,
we were using Pub/sub. We had network requests
all over the place. I want to measure them. I want to know, what's
the long pole in the tent? What's making it slow? How do I debug that? And using Cloud
Trace, I can do that. Here you can see there's our
sample we ran one minute ago. The Go endpoint is the
one that I was using to measure all this stuff. And here we have it. So the request was
10 seconds long. It was doing a lot of stuff,
so I don't feel bad about that. We made first request to
Reddit, second request. I asked for three pages of data. They have to be serial
because of the way they do their next token on it. And then in parallel, we
go download all the images off Reddit Media. We upload them all to
Google Cloud Storage. You can see this
going into the bucket. And then you can see us
starting to analyze them with the Vision API. So it turns out in
this case, the thing that was taking a long time
was actually calling the API. But let's say that
we found something where we needed to parallelize
something that was serial. This is a great tool
to tell us what's taking time during our
requests for our applications with barely a performance
hit in production. So when we're building our
applications for the cloud and we're kind of converting
from something that we have full control over, we
need to really understand how it's performing. We need these types of tools. We need Cloud Debugger. We need Trace tools. We need better logging. And when it comes to
building Node.js applications in the enterprise,
there's no one better to talk to than Joe McCann,
CEO, cofounder of NodeSource. [APPLAUSE] JOE MCCANN: Thanks, Justin. There's nothing like talking
about enterprise application development right
after a cat demo. So thanks for that. So I'm Joe McCann and
I work at NodeSource. Who is NodeSource? We are the enterprise Node.js
company currently offering the only commercial version
of Node.js explicitly targeted at the enterprise
use cases of Node. So a rough analogy
is to think of us as doing for Node.js
what Red Hat does for Linux. But how did we even
earn sort of the right to have a commercial
version of Node.js? Well, at NodeSource,
we love Node. In fact, we have the
most core committers and contributors to the open
source project itself, including the Node.js
lead, Rod Vagg, who also represents
the Technical Steering Committee on the Node.js
Foundation Board of Directors. So at NodeSource, we know Node. We love it. We love the open source project. But we also really know
Node in the enterprise. Here's a handful
of our customers that we've worked with
in the past, where we've seen Node at scale
over the past few years. And a lot of folks, some
on this slide, for example, have had challenges
that they needed met. But even with these
folks on this slide, I still get this question a lot. Is Node really ready
for the enterprise? Now, I might be a
little bit biased because I have an enterprise
Node.js company that I work at. But I think the answer
really is, in fact, yes. We are ready. There are hundreds,
if not thousands, of enterprises
out there actually using Node, from folks
like Intuit to Netflix to Apple, BMW, Capital
One, NASA, et cetera. It's actually more
difficult to find a company in the
Global 2000 that's not using Node in some capacity. So we believe, and we've
seen it empirically, that Node is actually really
ready for the enterprise. The thing, though, about an open
source project, or even Node specifically, is when you
get it into the enterprise, there are some challenges
that face its adoption and integration. And the obvious ones are
things around security. Is Node secure? Of course it is. But there are some
additional demands brought by certain
industries around how to harden that runtime. Who do we call if
something's broken? Who's the entity backing
this open source project? And then more
specifically with Node, how do we get better control
over the runtime and better insight into things like
finding performance bottlenecks and debugging the application? Well, this is precisely why
we built our product N Solid. N Solid is, in fact, the
enterprise-grade Node.js platform. And to be clear, it's
not a fork of Node. It actually is Node.js with an
additional set of capabilities wrapped around it so it
doesn't break interoperability. In fact, you don't even have
to touch your application code whatsoever. And the capabilities that N
Solid actually provides-- well, they're the things that
the enterprises told us they really need. Things like enhanced
security, guard rails, production monitoring
of these processes, turnkey performance analysis. We don't want to have to use
a bunch of system tools just to profile this. How do we get real time
performance analysis? Of course, I mentioned things
like support and stability from a commercial vendor. And I think one of the
best parts, as a developer, and how we architected
this early on, is there's no code
modification required. So the application that you've
currently written in Node.js will work with N
Solid by default. And I'm really pleased. Earlier this week, we actually
announced our partnership with Google Cloud. So N Solid is, in fact,
available on Google Cloud. So let me show you
a couple of demos. So let's change our project
to NodeSource first. And we are actually
running on Compute Engine. And so if you look
here, what you'll see is a handful of VMs with various Node.js applications. Now, in production,
you wouldn't do this. You would actually run an
autoscaling managed instance group as opposed to
just a handful of VMs. But it's a demo, so we
can deal with this now. What you see here is
a handful of apps. And this one in particular,
this is Dillinger. This is actually a
real application. It's an online markdown editor. And we see here there's some
interesting CPU utilization. It's kind of hovering
a bit high there, which is odd for a
Node.js application. Node.js has a low
memory footprint and typically has a
low CPU utilization. So something
interesting is going on. In the Google Cloud
Developer console, we can see it from a high level. But what N Solid does is it
provides us deeper insight from a bottom-up approach. So just so that
you believe me, let me show you that Dillinger
is, in fact, a real app. So we click on the URL. There it is running. Yes, it is, in fact, a
real markdown editor. And then, if we want to go
dive in deeper into Dillinger to kind of triage what the issue
is-- why is that CPU running so high-- we're actually going
to launch into the N Solid console. So all those applications
that I showed previously on the Developer
console for Google Cloud are all right here in the
N Solid console as well. And lo and behold,
we're going to dive in deeper into Dillinger. And we can see very
quickly that there are a couple processes that
are kind of scattered out there towards the right. Now, it might be a little
difficult to see from here, but the x-axis is
measuring CPU utilization and the y-axis is memory. And what we've
noticed from the folks that we've worked with
in the past at NodeSource is that the two major issues that people have with Node.js development are memory leaks and determining where the bottlenecks are in our code. So we already can
see quickly there is something kind of going
wrong or interesting-- something worth inspecting, I
guess I should say-- with this particular process. So if we look, we get a little
bit more detailed view here. But it's not really
telling us much because there's not any
load actually hitting it. So why don't we go ahead
and trigger some load with ApacheBench. This is an open source
load testing tool. Actually, Google has a really
awesome distributed load testing tool. But just for ease
of use, we'll just go with ApacheBench for now. So given the
latency of the Wi-Fi and anything else, what we
should see in near real time is these processes actually
reacting to the traffic. And what we want to do, with
a few clicks of a button, is actually profile
this application and generate what's
called a flame graph. And if you've never generated
a flame graph, it's OK. Basically, what you're looking for in a flame graph is the width of each function call, which shows how long it's sitting on CPU. And what we can see
very quickly here is this average time function. It's taking up 77% of
the CPU utilization for this particular process. Now, what we're doing
here is somewhat magical, but it's meant to actually help
drive towards issue resolution faster. We can see from this line
here, this is actually where that function is defined. And we can immediately
go and figure out what went wrong with the
writing of that function to make it so slow. This is only a snapshot
of the capabilities that N Solid provides. We do have a booth out front. I recommend coming by to see some of the enhanced security guard rails that we have, and we can show you some of the heap snapshot capabilities. But in the meantime,
I will say I'm really excited to be working
with Justin and the team because the scale of Google
Cloud and the history that they have behind compute-intensive tasks, coupled with something like N Solid,
really can help drive home Node.js apps in the enterprise. Thank you. [APPLAUSE] JUSTIN BECKWITH: Great job. Thank you, Joe. So we've done a lot today. We run JavaScript on the client. We obviously had
a few demos there. We ran it on the server. That's one of my favorite
things about Node.js and about JavaScript specifically: it runs everywhere. We're running it on our phones,
on our desktops, on servers. And now we're even
seeing it start to run on other
alternative devices. One of my favorite
quotes out there comes from Jeff Atwood
of Stack Overflow fame, which is "Anything that can
be written in JavaScript will eventually be
written in JavaScript." And I think where
we're really starting to see that claim come to life
is with the Internet of Things. So IoT is exploding right now. And one of the top languages
people are choosing is JavaScript. And a big reason for that
is that it's ubiquitous. It's everywhere. People already know it. All those skills that we've
learned throughout the last 10 or 15 years building
front end JavaScript apps, and now the last six, seven
years building back end apps, we can apply to devices
and the cloud as well. There are a lot of
great options out there. If you have something
like an Arduino, or if you have something
like a Raspberry Pi, there are some NPM
modules-- Johnny-Five, Cylon.js-- that make using these
devices really, really simple. And Google is getting
involved here, too. We have things like
Brillo and Weave, new protocols and tools that are out there to help, and even connect these devices with Android. Now, when you do
build something that's IoT-based-- let's
say that we go out and we have a car or a
washing machine that's internet connected--
you're going to generate a lot of data. BigQuery is a great
tool that we can use when it's time to
actually take that data and try to gather some
insights out of it. But all that sounds
great and exciting. We have a little bit of time left today, so I want to have some fun. And what I want to do is
I'm going to show a toy. Now, these toys
right here-- and they may be kind of hard to see--
these are called Little Bits. And Little Bits are tiny electronic building-block components. They're great for the young
or the young at heart. And really, they're about
teaching children electronics and about getting excited
about making things. I know that's something
that I wanted to share. And so the next part
we're going to dig into is how we can use Little Bits,
connect them to the internet, and then actually control them
with Node.js on Google Cloud. So before we dig
into the actual demo, I'm going to take a look
at a little bit of code. There we go. So this is the Hat Spin project. I'm going to open up Server.js. You can see I'm back
to using Express again. I just want to walk through
a few of the NPM modules I'm using and talk about how
they're going to communicate with these devices. So here we have the
Little Bits API. And that's using
Little Bits Cloud HTTP. And they have an
NPM module that you can install that will
actually communicate with this thing over the cloud. Pretty simple to use. Again, I'm using
PubNub to connect to an internet connected client. So we're going to see a web
page here in a little bit. In this case, I
happen to be using Redis to store a few results. So you could use Google Cloud. You could use Memcache. But if you want to come use
Redis, that's perfectly fine. I'm using Redis
Labs as an example right here just because
it was easy to set up. And then let's get down to
the part that's actually doing something interesting. When you guys click
on a button in a link that I'm going to send
you out in a little bit-- I'm having a little bit of
a problem with my device. I have to reset it over here. Give me one second. Make sure that my phone is
serving everything the way it's supposed to. All right. Sorry for the interruption. And so with a
little bit of luck, assuming the technology
doesn't fail us today-- this is all very new-- every
time we come in here and click, we're going to count. We're going to try to
get up to 100 clicks. And once we get to
100, we're going to send a 20-second high
edge signal to a device. So let's take a look
at what type of device we're going to use. Some of you may
have seen this movie where there's a
bunch of guys that try to get a job at Google. And in the end,
they do get a job. And when they're
finished with that, they get one of these
fancy hats, right? Now, I was very excited
when I got my Noogler hat. And the very first thing
I thought to myself when I got it was,
you know what? I'd better connect this
thing to the internet because I'm at Google now
and that's what we do. And so-- oh my gosh. It's not starting up. And so I went and we built this. And what I want
everyone to do is I want you to go out
to hatspin.net and see if we can coax my hat
to wake up and activate. Unfortunately, right
now, it's giving me a little bit of trouble. But I'm going to
hope for the best. And worst case, I look
like a fool in front of a roomful of people. Whatever. Wouldn't be the first time. All right. So I'm running Hat
Spin over here. And what I can do is I
can actually watch-- here, I'm going to reset it. All right. If you voted-- oh my gosh. You guys are voting so fast. [LAUGHING] All right. Unfortunately, it's
having a little bit of trouble with the Wi-Fi, even
though it was working earlier. I apologize, folks. I'm going to get this
working after the event if you want to
come up and see it. And we'll give it a try
again in a little bit. So we covered a
lot of stuff today. We had some ups, we had some
downs, we had some hats. App Engine is making it easy
to operate at Google scale. For the applications that you're writing today, we want to make it as simple as possible for you to focus on building the app, and not have to worry about the infrastructure. You can use the
the things that you know, the things that you love. When you are ready, if
you want to take advantage of our amazing
APIs and services, that's available to you. And that's really the magic
that's behind Google Cloud. And finally, the big thing is
that Node.js and Google Cloud together, we're all
ready for the enterprise. We're ready for serious
applications, not just hats and cats. But we're ready for
the real applications that are needed out there
today to change the world. Everything we looked
at today, end to end, is open source-- the
Docker images we're using, the GCloud NPM library, Trace
and Debugger tools, the API client, all the samples. Everything we're doing is in
the open and in the clear, and we want your
feedback on this. Get involved. Open issues. Tell us if something
doesn't work. Tell us if it does
work and you like it. We want to make sure
that you get involved. And lastly, thank you. [APPLAUSE]
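A minimal sketch of the Hat Spin flow from the demo above: count incoming clicks and, once 100 arrive, send a 20-second high signal to the device through the Little Bits cloud API. This is an illustration, not the actual Hat Spin source; the endpoint URL shape, device ID, token, and function names here are all assumptions.

```javascript
// Sketch of the Hat Spin click-counting flow. All names and the
// endpoint shape are hypothetical, for illustration only.

// Pure counter: calls onThreshold() exactly once when `threshold`
// clicks have arrived, and returns the running count.
function makeClickCounter(threshold, onThreshold) {
  let count = 0;
  let fired = false;
  return function click() {
    count += 1;
    if (!fired && count >= threshold) {
      fired = true;
      onThreshold();
    }
    return count;
  };
}

// Builds the HTTP request we'd send to a Little Bits-style cloud
// endpoint to drive the output high for durationMs. Returning plain
// data keeps this testable without any network access.
function buildOutputRequest(deviceId, accessToken, durationMs) {
  return {
    method: 'POST',
    // assumed URL shape, not the documented endpoint
    url: 'https://api-http.littlebitscloud.cc/devices/' + deviceId + '/output',
    headers: {
      Authorization: 'Bearer ' + accessToken,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ percent: 100, duration_ms: durationMs }),
  };
}

// Wire the two together: at 100 clicks, spin the hat for 20 seconds.
const requests = [];
const click = makeClickCounter(100, () => {
  requests.push(buildOutputRequest('my-hat-device', 'fake-token', 20000));
});

for (let i = 0; i < 150; i++) click(); // simulate the audience clicking
console.log(requests.length); // the signal is queued exactly once
```

Keeping the counter and the request builder as pure functions makes the flow easy to test without touching the network; the real app would hand the built request to an HTTP client and let PubNub deliver the clicks from the web page.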