[MUSIC PLAYING] TERRY RYAN: Hi, how's
everybody doing? [CHEERING] Wow, you guys are
very enthusiastic. Fantastic. My name is Terry Ryan and
I'm a Developer Advocate for Google Cloud Platform. And my mission today is
to kind of walk people through what is
a typical journey to taking an app that
you've had somewhere else, then bringing it
to Google Cloud. And my focus is on virtual
machine technology. So a pattern we see
from a lot of people is that when they
first come to GCP, they want to do what they've
done elsewhere but on us. So how do you do that? How do you get started? That's my goal today. So if that's what you
signed up for, great. If not, well,
hopefully you'll stay. So before I get started,
it's really important for me to know who everybody
in the audience is. And I'll probably do this
with just a show of hands. So who here is a
developer all the time? OK. Web developers? Mobile developers? Back end developers? OK. How many people here
do systems work? I see some overlap
with the developers. Is it because you
love DevOps or is it because your company
won't hire more people? Which is it? OK. Anybody else? Are there designers or
UX people, UI people? All right, fantastic. Anybody who tells any of
these other people what to do, any managers in the room? OK. So there's a lot more of us
than there are of you, so one, manager jokes fly. And two, I hear
you systems people could use some more people. Just I hear that from the crowd. All right. So I know what jokes I can
make, I know who you guys are. What are our goals? Our goals are to take someone
from a journey, I've got an app and I want to get it on
Google Cloud Platform. That's my primary goal. My secondary goal
is to sort of point out a whole lot of things
about GCP along the way, but that's a secondary goal. So I'm going to fall
way short on that. There's a lot of GCP that
I'm not going to talk about. Hopefully, I'll point you
in directions where, if you like it, you can start
learning more and go that way, but just be aware that this
is not super comprehensive. OK. So you have an app, you
have it running someplace. We usually do what we
call lift and shift, where we pick something up
and just plop it on our cloud instead of running it on
your own data centers. So I want to show an app here. So I'm going to switch
over to my laptop. And this is a super, super
simple app called Tagger that I wrote. It's minimum viable product. It's just to show what
I'm trying to do up here. So you see I have a bunch
of tags and a bunch of kids because you always get
points from the audience for showing cute kids. I'm going to go ahead
and just show you that this is a real thing. It's running
locally on my laptop but it can also be
running in a data center somewhere, in my
own on-prem data center. I'll hit Gladiator,
I'll hit Open. Nothing up my sleeve, I've
got a real app here running. I'll upload and there
we are, app is working. Now, I'm going to
switch back and talk a little bit about the
architecture in slides. So very, very basic, simple app. Again, written to
illustrate this stuff. I've got one single box
which may or may not be the way you want
to run your stuff. If you do, it's OK. If you don't, please
don't judge me too harshly. So I have a single
server here and it's running the application code,
it's running the file system, and it's running
the images, right? All of that is on
one space and I want to move this to make it cloudy. So our eventual
destination with this will look like this, where
my database is in the cloud and my files are in the
cloud but my app still sits on a VM, which also is
in the cloud, but you know, it's still a virtual machine. I'm not trying to get you to
move to Kubernetes or serverless or anything like that. Well, I'll talk
about those topics but I'm not going to
move to them in this app. OK, so when we talk about virtual
machines on Google Cloud Platform, we call
it Compute Engine. We have several engines. Compute Engine is
our virtual machines that has some properties
and some things I'm going to talk about. You can dial in
how many processors you need from 1
to 96 and then you have a range of memory based
on that, from 0.6 to 624 gigs. You have disks. You can install 64 terabytes
of persistent disk, three terabytes of SSD, and you can
go up to 208 gigs of RAM disk with the memory on your system. Now we have images to go
with these machines already that have operating
systems already set up, but you can configure your own. And actually, I'm just going to
show that off in the interface. So I'm going to switch to do
another demo and you should-- it was working two seconds ago. No, all right. It's working. All right. So I want to go ahead
and create an instance. I'm going to create
a new instance that I want to run my stuff on. So I'm going to hit what's
labeled here Create Instance, kind of a little on the nose. So we go ahead and
create Instance. When I do that, I
can give it a name and I can put it
in a data center. So why don't we do US West,
because that's where we are. But note we can go all
over Asia and Europe and South America and
Australia, but I'll stick to being close to here. Now when I go to
set up my machine, there's a whole bunch
of preset machine types, like 16 core and
32 core, but one of the cool things
about our platform is that you can
actually dial in. If you have a use case
where you really only need let's say 44
processors to really do the work you need to do,
with other kinds of choices, you have to go to
32 and be under spec or go 64 and waste all
that money on processors you don't need. With us, you can dial in exactly
to what you want and you could see exactly, on the right, what
the impact of those choices are, right? And I got a little Warning
at some-- there you go. It came up, I
didn't hit it right, but it's basically a little
warning that said hey, you could probably
get a better price if you could figure it this
way, which is kind of cool. But I'm going to go and
stick with our set images because I don't want to
pay that much and then I'll go to the boot disks. With boot disks, I
have a whole bunch of choices of different
operating systems-- Container OS and Red
Hat and Windows even. Yes, you can do Windows
on Google Cloud Platform. But I'm just going to go
ahead and keep the default. So I'm going to hit Select. And you notice I have
another couple of options. I can activate HTTP, I
can open up the firewall. I'm not going to do
it this time around but one of the favorite
things about this interface is down at the bottom, there's
this equivalent REST or command line link. So after I configured this
whole thing, I click on this and I get this little
box that says here's exactly how to do this
on the command line. So I can then configure
it once in the interface and then take this and
automate it and build scripts and I don't have to write
these things from scratch, which is really helpful. But I'm going to go
ahead and create this. We like to say these spin
up in tens of seconds, meaning when I'm in a
hotel room practicing this, because I'm doing it at IO,
it takes like 10 seconds. When I'm in front of
people, it takes longer. I don't know how it knows
but somehow it knows. So hopefully, oh
all right, good. You don't register so I'm happy. It doesn't know you're here. Shh. All right. So I'm going to
hit SSH and when I do, instead of
having to pull down certs to connect to
this VM, I can do it right through this
interface, right through the web, which
I love for like-- I can travel without my laptop. As long as I get
to a browser, I can take care and manage my stuff. First time you SSH into a box,
it takes a little bit of time because it is doing that
manual work you used to do. It's doing it
automatically for you. And I wouldn't worry about
the text here too much, but I will blow it
up a little bit. At this point what I would do is
go through and run apt-get update. And then it will fail because
I should have sudoed it. So I'll do sudo bang bang
to run the same command. And there, I would go through,
I would do apt-get update, and then I would start
rolling through and installing software. I feel like that would strain
our relationship as an audience and speaker if I were to
just sit here and install LAMP, because that's what's
running on this other box. If I sat there and installed
all those things over and over again, I feel like
I would lose you. So I have the solution for
that, which is that I've already done that on another machine. I've then taken that
machine and created what we call a disk image from it. I said, you could use ours,
but you can also use your own. So I'm going to create
a second instance here. And on this one, I'm going
to open up the firewall. I'm going to open up port-- the HTTP port. And when I go to select images,
instead of using OS images, I'm going to use custom images. It's going to take a second. And you'll see here
that I have one called tagger-latest,
which I already built. And I'll hit Select. And now, when this
app comes up, it will be running completely
the whole software stack. None of my software is on there. None of my app is on there. But all the supporting
software I need is there. So hopefully, in a second,
this will come up-- hopefully, it has not
identified that you're here. And when it comes
up, you'll notice that I have these external
IP addresses here. And some of them are linked,
and some of them aren't. The one I just
created wasn't a link, because it didn't open up HTTP. But on this one, I did. So when I click on it, it should
go right to the Apache Debian default page. So I've installed. I have software running on a VM. It's all set, ready to go. So one last thing
I want to show you before I switch
back to presenting is, up here in the
upper right, I've got this little command prompt. It's another one of
my favorite features. Instead of having to install
all the SDK and all the stuff it takes to manage Google
Cloud on my laptop, I can just use Cloud Shell. It spins up a little
VM under the covers that I don't get charged for. It's got a home directory. So I can use it
over and over again. I can save scripts there. But now, I have a way-- I have access to this
without having to install all the stuff on my computer. So really helpful, really,
really great for trying stuff out. So with that, I'm going to
switch back to presenting and talk through a little
bit of what I did here. So if you want to start
doing this programmatically and repeatably, and kind of
put it into your build cycles, you could do all of what I did
here with a gcloud command. So this one is gcloud
compute instances create. That's how I create an instance. And then I would probably want
to run commands on those boxes. So I can do that-- gcloud
compute ssh virtual machine name, and the command
I want to run locally-- or remotely. And then finally,
I have files that I want to get onto those VMs. I can do that with
gcloud compute scp. So between these
three, I can completely automate the creation
and configuration of a machine repeatably. And I do a lot of this. I do a lot of scripting
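For instance, a sketch of that kind of build script -- it assumes an installed, authenticated Cloud SDK, and the instance name, zone, machine type, and paths here are placeholders, not the ones from the demo:

```shell
# Create a VM (name, zone, and machine type are made up for this sketch).
gcloud compute instances create tagger-vm \
    --zone us-west1-a \
    --machine-type n1-standard-2

# Run a command on the new VM remotely.
gcloud compute ssh tagger-vm --zone us-west1-a \
    --command "sudo apt-get update && sudo apt-get install -y apache2"

# Copy the application files up to it.
gcloud compute scp --recurse ./tagger tagger-vm:/var/www/html --zone us-west1-a
```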
of this stuff, because I don't want to do it by
hand, because I make mistakes. So we talked about virtual
machines and disk images. So yup, you can start
with ours, like I did. You create and put all the
software on it, make a disk, and then save it as an image. But you can also make
them from scratch. You could use VirtualBox
to make images from scratch and upload them to us. Networking, I talked about-- little bit about that. You get external IPs. But one of t the things
I didn't talk about is that you can always
get an IP address, and by default it'll be
ephemeral, meaning, if the machine goes down,
when it comes back up, it'll get another IP address. But any of those
IP addresses, you can grab and say,
make static and keep. And so if you start
fooling around with a box, and you set it to ephemeral,
and you're now worried, because stuff is now depending
on the IP address of this box, you can grab it, hold onto it. We don't charge you for those. The only time we charge you--
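If you'd rather do that grab from the command line than the console, a sketch of it -- the address, name, and region here are made up:

```shell
# Promote the ephemeral external IP a VM was handed into a static
# address you keep. The IP and region are placeholders.
gcloud compute addresses create tagger-ip \
    --addresses 203.0.113.10 \
    --region us-west1
```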
and this is something that members of my team
just learned-- is, we only charge you for
it if you don't use it. So if you have an IP address,
and it's in use statically, we don't charge you for it. It's when you park on it that we
have a little, nominal charge. Just a couple of
cents-- it's just enough to kind of
disincentivize you from holding onto IP
addresses for a long time, or holding onto a lot of IP
addresses for a long time. Showed off the firewall,
that you can create rules. I just did the HTTP rule,
which we've kind of automated. But you can create
arbitrary rules, and they get applied
the same way. They get this tag that gets
applied to your machine. And that's what will determine
whether or not your firewall is open on the machine. One thing I didn't
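A sketch of what one of those arbitrary rules might look like on the command line -- the rule name, port, and tag are all hypothetical:

```shell
# Allow TCP 8080 in to any instance carrying the tagger-app tag.
gcloud compute firewall-rules create allow-tagger-8080 \
    --allow tcp:8080 \
    --target-tags tagger-app
```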
show off is something called Cloud Launcher. Our VMs, our images, start
at the software layer, the OS layer. But if you want, I
don't know, like, Cassandra, or WordPress,
or Drupal, or something like that on your
machine, you can use one of these and shortcut
having to do the whole build. Now, it's important to know
that we don't manage it once you build it. We just help you build it. All right, I'm going to take a
slight detour and talk about the Cloud
Shell for a second. That's really important. Like I said, it's a little VM. And you have a home directory
that follows you, which is-- like, you shove scripts in
there that you use all the time, and they'll be there the next
time you go to Cloud Shell. And it basically means,
I never have to-- while I love having
my laptop, I never actually have to have
it somewhere when I might need to log in
to Google Cloud Platform. All right, we're back from
that little diversion. So now, let's talk
about storage, right? Because that's the crux
of what this app does. It stores files,
and it stores data. I'm going to start with the
files part of it, the images. So for images, we call that
blob data, because I don't know, it sounds gross, I guess. Cloud Storage is
the tool for that. So it's for files, for
videos, for pictures, for unstructured documents. Now, there are four types,
and I'll talk about them, but they all have
the same interface. And the metaphors you've seen
with other cloud providers, the idea of a bucket and
objects in that bucket, that's what cloud files-- that's how Cloud Storage works. So we have the four types. We've got multi-regional,
regional, nearline and coldline. And they all have different
use cases and different costs associated with them. Long story short, multi-regional
is for global data. It's for you pushing the
data as close to the people that are going to be using it. So it's for people
outside your company. It's for web presence. It's for web apps. It's for mobile apps. It's for streaming video. Regional, in
contrast, is, you want to keep the data close
to where it's being used, just like in global. But for this, it's being
used for internal purposes. It's for running data
jobs and doing big data analysis on large data sets. You want that data
near the computers that are going to be doing
that, but don't necessarily have to be near the public. Now, another thing
to note here is, just because it's
regional doesn't mean it's only accessible
from the region it's in. It just means that it is
fastest in that region. And you could still
pull this data from other places
around the world. So you're not restricted of
where you can use data from. Nearline and coldline are
long-term storage options. Therefore, in the
case of nearline, it's for backups and
long tail media-- stuff that's going to be used
regularly, but infrequently. And for coldline, it's things
like compliance and disaster recovery. So if you're pulling
down data from coldline, you're probably also loading
your resume in an editor, just in case, right? So that's sort of the
distinction we use for these. Now, as we go to
the right, the price goes down, because it's
less accessible and less easy to get to. The time to first byte for
global or multi-regional versus coldline-- coldline is a little bit
slower, time to first byte. And so accordingly,
the costs go down. But for nearline and coldline,
when you retrieve the data, you get charged for it. And in this case, it's
one gigabyte per-- I'm sorry, all those
prices were per gigabyte per month. This is per gigabyte. So when you pull down
data out of nearline, it's going to cost
you $0.01 per gigabyte. And out of coldline,
$0.05 per gigabyte. The idea here is, we want
to encourage you, like, to store stuff, and hope
you never need to use it. And that's why the pricing
is set up that way-- to encourage that sort of use. Now, I'm going to do a
quick demo of Cloud Storage. So I'm going to switch
back to my laptop here. I'm going to show how easy
it is to set one of these up. I'm going to go ahead, and
like I said, it's a bucket. So I'm going to create a bucket. I'm going to call it
tagger-102, because the names have to be unique. I'm going to just go
ahead and create it. It's multi-regional. So I'm now creating a
multi-regional bucket. It's already ready for me. I'm going to upload the files. I'm going to take my files
from here that I already had. I'm going to upload them. Let's see how our
network is doing. OK, Wi-Fi network is-- there we go, all right. So you'll see, I have a bunch
of images here, now, loaded in. Now, I can't get
to any of these. I mean, Google Cloud can
see these and share them among Google Cloud apps. But the public can't see them. The way I enable them
to the public is, I can just click this
little thing here and say, I want to share publicly. And when I do that,
I get a public link. So even though I
just uploaded it, it's now globally available
and able to pull down. If I were to URL
hack and change it to-- one of the other images
was that kid in the Ewok costume, which is
ewok.jpg, I hit ewok.jpg, and it's not there. So you can be very granular
with what you share and what you don't share. So I'm now going to
switch back to presenting. Oh, wait. I'm not going to switch back. It'll switch back on this. And let's talk about a
little bit of what I did. I wanted to upload files. And you could do that with the
gsutil command on the command line. That's how I do
most of this, right? Because I don't want to manually
put all my files through a web interface. I can also share
publicly using gsutil. I can set up the settings. You can do them per file. You can also do them per bucket. That's how I tend to do it. I share the bucket,
and then I try not to have shared and
non-shared stuff in the same bucket, even though you can. Just saying, for me, I don't
want to accidentally share something. So I usually try to
make that distinction at the bucket level. Now, my app was writing to
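Strung together, the gsutil side of what I just did looks roughly like this -- the bucket name is hypothetical, and it assumes an authenticated Cloud SDK:

```shell
# Create a multi-regional bucket (names are global, so this one is made up).
gsutil mb -c multi_regional gs://tagger-demo-bucket

# Upload a directory of images in parallel.
gsutil -m cp -r ./images gs://tagger-demo-bucket/

# Share a single object publicly...
gsutil acl ch -u AllUsers:R gs://tagger-demo-bucket/images/gladiator.jpg

# ...or share the whole bucket, which is the distinction I make above.
gsutil iam ch allUsers:objectViewer gs://tagger-demo-bucket
```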
file storage on the server. I now need it to
write a Cloud Storage. So for that, I want to
use our client libraries. Now, our client
libraries, you'll see, there's a whole
list of languages they're available for. If your favorite language is
not here, one, I apologize. Two, we have a REST API, and
it's pretty well documented. So you can do this stuff without
needing the client library. But this app was PHP. So I don't have a problem. So let's walk through this. Please, don't judge me for PHP. I figured it was very readable. move_uploaded_file,
source, destination. So something has been
uploaded to the server. We move it to a file location. To do the same thing, but
point it to Cloud Storage, I have to pull in
the storage library and then create a storage
client and a bucket and upload it to that bucket. It's a little bit more code,
but not tremendously more. And dropping it
in was relatively simple and painless to do. So that is client library. So now, here's where we are. We've carved off images. We now have our application
code and our database still together. We need to fix that. So let's talk about databases. With that, we'll go to
Application Storage, and we get the SQL versus NoSQL. And that is not anything
that I want to wade into. I'm sure we all have beliefs
about SQL versus NoSQL. It's OK. I'm not going to
question any of them. But I'll talk about what
we have for these two places, or these two types. So we have Cloud SQL, which
is a traditional SQL server. It's MySQL or Postgres. And it's what we call
vertically scalable. You can keep making
the machine it's running on bigger and bigger. But at a certain point,
when you need to scale out, these don't scale
out perfectly well. You can do replica sets,
and we make that easy to do. But it's still just good,
old-fashioned vanilla MySQL and Postgres under the covers. We just manage the backups. And we make it easy
to set up replicas. And we take a lot of the pain
out of running these machines. And we also have this other
thing called Cloud Spanner, which-- let me put it in context-- you
would never use on this app. It's for gigantic,
worldwide companies that have very high rates
of queries per second. It is not for this tagger app. But that's also
available to you. It has a SQL interface. It's structured data, highly
available, strongly consistent. Our network makes that work,
where we can make those claims that it is both those things. And it's horizontally scalable. But keep in mind, it is
only cost effective when you have the scale for it. For this small app,
it would be overkill. So now, let's talk about NoSQL. NoSQL, we've got Cloud
Datastore and Firestore; I'll start with Firestore first. Anybody here use Firebase?
of integration with Firebase. And there's a similar product
called Cloud Firestore for-- I forget the branding of it. But it's the same thing. And basically, it allows
you to use Firebase's APIs to write to a gigantic
NoSQL store in the back end. It's really great when you're
writing mobile clients and web clients that you don't
want a back end at all. You don't want application
structure at all. You just basically
want the front-end app to talk directly
to the data store. That's what Cloud Firestore is for. Cloud Datastore
is more for, I've got an app that I've
written that I also want to store stuff to. And that's running
on the back end. And that communication would
happen between Cloud Datastore and your app. And finally, Bigtable, you
don't see a lot in applications. Very rarely, someone
will come out and use this for an
application, because it's got very low latency. But where it really
excels is big data jobs. You put a whole bunch
of logs in there, and store them, and run
through stuff in that. So that's Cloud Bigtable. It's another one that's only
cost effective at scale. So which one should we use? Well, I kind of set
this up so that there is an obvious winner. I'm not going to rewrite
all my logic from SQL to NoSQL. So I'm not going to use
any of the NoSQL solutions. And Cloud Spanner is overkill. So I'm going to use Cloud SQL. We go to the next one. And I'm going to use
MySQL instead of Postgres, because again, I don't
want to rewrite stuff. So I'm going to do
a quick demo of this to show you what this
interface looks like. I've got SQL here. I already have a
database set up. But I'm going to
create a new one. And you'll see, I got some
options, MySQL versus Postgres. And we actually
kind of help you-- like, what are you
going to use this for? Are you going to use it for
development, for staging, for production? I'm going to use
staging, because among its many features,
it has this, which is automatic increase in storage. So when I set up my
disks on the back end, it will automatically
make my disk bigger if I start to run afoul of
the size of the disk, up to 64 terabytes. So I'm going to go ahead
and configure one of these. I can choose version. I can choose machine type. So I can make the
machine bigger. They're based on our VM images. You can't customize them. But they kind of fit with
the VM standard images. You can choose SSD
or standard disk. But one of the cool
things you need to know is that if you need higher
read-write performance to your disks, you need
to get bigger drives. And you'll see, when I
change this number over here from 20 to 200,
all of a sudden, I get more IOPS and
disk throughput. So keep that in mind. You might need to
make bigger disks to get better performance. OK, so I think
that's what I want to show off about Cloud SQL. So I'm going to switch
back to present mode. I'm going to switch
back to present mode. There you go. Oh, I'm pressing
the wrong button. I'm all over the place. All right, so how do I get
my stuff into Cloud SQL? This is relatively simple. I take it. I upload it to Cloud Storage. And then, from Cloud
Storage, I ingest it into Cloud SQL, which brings
up something kind of important, which is, this workflow,
move it to Cloud Storage and then pull it somewhere
else, is actually pretty common. You have a whole
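The dump-stage-ingest workflow for this app might look like the following sketch -- the instance, bucket, and database names are placeholders:

```shell
# Dump the database. --set-gtid-purged=OFF helps avoid the
# super-permission problem I mention in a minute.
mysqldump --databases tagger --set-gtid-purged=OFF > tagger.sql

# Stage the dump in Cloud Storage...
gsutil cp tagger.sql gs://tagger-demo-bucket/tagger.sql

# ...then ingest it into the Cloud SQL instance.
gcloud sql import sql tagger-db gs://tagger-demo-bucket/tagger.sql \
    --database=tagger
```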
bunch of log data. You put it in Cloud Storage,
ingest it into BigQuery. You have images that you want
to analyze with our machine learning APIs, put
it into Cloud Storage and have them analyze it. And you'll see two tools
here in the center. Cloud Storage is one of
those that acts as a hub. The other one is something
called Cloud Pub/Sub. And there's actually
a really cool demo of Pub/Sub in the Cloud Dome,
in case you're interested. But what Pub/Sub does
is, it's a message bus. You say, I'm a
publisher, and there's a whole bunch of people who
subscribe to the channel that you publish to. You send a message, a
structured piece of data, that does something. And it gets navigated--
it gets communicated out to all the people that
are-- all the subscribers of that service. It's very high throughput. So basically, it
allows you to basically offload stuff to other pieces
of the cloud when you need to, and do it with a queuing system
that's really, really robust and fast. OK, we'll segue back and talk
about some common problems people have when they start
running database servers on us. So one problem,
and I filed a bug, but I feel like I should
communicate it to you, is that when you run gcloud
sql to ingest the SQL, you might get an error. And the error is going to look
like this, INTERNAL_ERROR. And then if you
raise the verbosity of the gcloud command,
you get that it's a SQL error, which is
helpful, but not really, because we kind
of knew, it's SQL. So what you do is,
in the SQL interface, there's this tab
called Operations. And in Operations, we can
see that there was an error. We can actually see
the line number. We see all of the SQL data that
we need to troubleshoot this. For this particular
error, it was that I had done a MySQL
dump, like you normally do. But I had some super
permissions, which you can't have on Cloud SQL. And so it balked at that. So just a couple things that, if
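So when you hit that INTERNAL_ERROR, the two commands I reach for are roughly these -- instance and bucket names are placeholders:

```shell
# Re-run the failing import with more verbose output.
gcloud sql import sql tagger-db gs://tagger-demo-bucket/tagger.sql \
    --database=tagger --verbosity=debug

# List recent operations to see the underlying MySQL error.
gcloud sql operations list --instance=tagger-db --limit=5
```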
you're switching over to this, you might want to know
off the top of your head before you headdesk trying
to track down what this is. Now, what happens
if we, Google Cloud, don't offer something
that you need? Like, we have
MySQL, and Postgres, and our NoSQL
solutions, but you want Cassandra, or Mongo, or one of
the other storage solutions. Well, you do two options. One, you can build it
yourself from hand-- build it yourself from scratch-- mixing metaphors. Or you could use Cloud Launcher
to build one of our existing solutions, like
Cassandra, Redis, Mongo, CouchDB and a whole bunch more. In each of these cases, it is
important to note that we don't maintain the software for you. We maintain the OS. And so in all the cases
with Compute Engine, I didn't mention
this earlier, but we do something called
live migration. So when we update
the machine, it will continue running,
even through the update. You'll never have
downtime because we're updating the hardware. We migrate and
solve that for you, so you don't have
to deal with that. So in each of these cases, you
get that with whatever solution you build. But we won't update the
software or run backups for you for them if we don't say we
do, like we do with Cloud SQL. All right, so now, here we are. We have taken our app. We've moved to the cloud. We've put it on a VM. And then we've offloaded
database and storage to different parts of the cloud. So now, I'm free
to start thinking, like, well, since
all of the data, of the state of this
application is elsewhere, then I can start making
interesting decisions about how I run this. Do I want to run a group of
VMs behind a load balancer to make sure that I've
always got redundancy? I can do that. If I want to go to a serverless
solution, I can do that. If I want to go to
Kubernetes, I can do that. And I'll talk a little bit
about how to do this with a VM. So how do I scale the system? So I'm going to switch to demo
one last thing, I believe. And that's going to be something
we call managed instance groups. So another error, just refresh. And there we go. So these take minutes to get
rolling instead of seconds. So I'm not, again, not going
to strain our relationship and do things that take
that long in front of you. But I will kind of walk you
through creating a managed instance group. So basically, I define a group. I say, I want this to be a
group of tagger machines. I want them all to
have tagger on it. I want them all to run
with the same software. And so what I do
is, I could say-- well, I could put this in
a single location or multi. So I can have these groups
all around the world. I put it in a particular zone. And then I can be
managed or unmanaged. Managed means that every
machine is based on a template. And they look exactly the same. So we can do things
like, say, autoscale. So all the machines
are exactly the same. If the CPU utilization ever gets
above 60, spin up another one, and then spin up another one,
until it gets down below 60, and it gets scaled down. And so I can say
things like, make sure there are always
three, no more than 10, and scale with 60%
processor utilization, which is what I have for
the one that I have there. And so I can build that. And you'll see that I did
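That three-to-ten, 60% CPU policy can also be set up with gcloud; a sketch, assuming an instance template named tagger-template already exists:

```shell
# Create a managed group of three machines from the (assumed) template.
gcloud compute instance-groups managed create tagger-group \
    --zone us-west1-a \
    --template tagger-template \
    --size 3

# Autoscale between 3 and 10 instances at 60% target CPU utilization.
gcloud compute instance-groups managed set-autoscaling tagger-group \
    --zone us-west1-a \
    --min-num-replicas 3 \
    --max-num-replicas 10 \
    --target-cpu-utilization 0.60
```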
here in this interface. So I have these three machines. And if I click
through, I can see that each one of these
machines is running. Each one of these machines
is open to the internet. And each one of these
machines is running Apache, just like I want it to. So we're all good. So where'd I go? So the next thing
I want to do is take this and make it
redundant, basically share it with the world. To share it with the world,
I need a load balancer. So I'm going to pull in
tagger load balancer, which I've already set up. And again, this takes
a little while to go. So I'm just going to walk you
through the way it's set up. It's pretty simple. I say, give me a front end. It's HTTP. It's on port 80 and serve
it up an IP address. And then I say, from the back
end, serve this instance group. So this instance group,
everything is open on port 80. You're listening on port 80. Combine them and
make that redundant. So once I do that, I have
the setup the way it is. Each one of those machines
is answering calls to the internet. But if I go to this
machine, this IP address that I get here, it's
also up and running. And so like I said,
these will scale. All of that scaling
can happen, usually, within a minute to two minutes. So if you start
getting a load spike, we can scale up load relatively
quickly, but not necessarily instantaneously, OK? So that is scaling
out this application with a load balancer. So with that, I'm going to
switch back to presenting and take you through
a couple more things. So we talked about instance
groups, which are-- they can be managed
or unmanaged. Why would they be unmanaged? Well, maybe you want to combine
a two-CPU box and a 60-CPU box. I don't know why,
but you want to. That's cool-- no judgment. You're responsible for
scaling and all that, because we can't figure
out CPU utilization-- like, what should we add? We don't have any
of that set up. We're not using a template. So you can manage that. But if you use managed,
we can auto scale. And we can also auto heal. If you have health checks on
your app, and it goes down, we can rebuild a VM for
you and bring it back into the instance group. Networking, we talked
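A managed instance group with both of those behaviors, auto scaling and auto healing, might be set up like this. It's a sketch under assumptions: the template name, group name, zone, and thresholds are all illustrative.

```shell
# Create a managed group of 3 VMs from an existing instance template.
gcloud compute instance-groups managed create frontend-group \
    --template frontend-template --size 3 --zone us-central1-a

# Auto scale: keep between 3 and 10 VMs, targeting 60% CPU utilization.
gcloud compute instance-groups managed set-autoscaling frontend-group \
    --zone us-central1-a --min-num-replicas 3 --max-num-replicas 10 \
    --target-cpu-utilization 0.60

# Auto heal: recreate any VM that fails this HTTP health check,
# waiting 300 seconds after boot before the first check counts.
gcloud compute health-checks create http app-health-check --port 80
gcloud compute instance-groups managed update frontend-group \
    --zone us-central1-a --health-check app-health-check \
    --initial-delay 300
```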
about load balancers. We have HTTP and
HTTPS load balancing, which is the equivalent of an
L7 load balancer, if you're used to that. We also do TCP and
UDP load balancing. And we have internal-only
load balancers that are accessible only within your project. Now, that's how I scale VMs. But I do work for Google Cloud. And we use the
buzzwords "containers" and "serverless" a lot. So I felt like it was necessary
to at least talk about these. How many people are
already using Docker? OK, how many people
are using Kubernetes? OK. All right, for those of
you that aren't, I'll give just a quick
"what is a container" at a very, very high level. And watch the faces
of the people that raised their hands as
they squirm and scream through this really, really
oversimplified definition. Containers are
basically a package that contains all
the code and runtime components to run a
particular process, your app. And then there's an environment,
Docker or Kubernetes, that will trick the
process into thinking it's the only one
running on the machine. And so you can do
crazy things, like have multiple versions
of the same software running on the same machine. And you can stack many more
processes on a machine than you can VMs
on the same hardware. So basically, there are a
lot of efficiencies you get from switching to this model. We have two products
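The "multiple versions of the same software on one machine" trick can be seen with two docker run commands. This is only an illustration; the nginx image tags and host ports are arbitrary choices, not anything from the talk.

```shell
# Two versions of the same server on one box -- painful without
# containers, trivial with them. Each container thinks it's alone.
docker run -d --name web-old -p 8080:80 nginx:1.24
docker run -d --name web-new -p 8081:80 nginx:1.25

# Both answer on the same machine, on different host ports.
curl -s -o /dev/null localhost:8080 && echo "old version up"
curl -s -o /dev/null localhost:8081 && echo "new version up"
```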
that we say are container products on Google Cloud. There's Kubernetes
Engine, which is managed Kubernetes, which, if
you don't know what Kubernetes is, it's basically, you
have dozens, or hundreds, or thousands of containers
all doing different things. You want to run them in
a way that you're not managing individual containers. You put them into a
Kubernetes cluster with instructions
on how to run them. And it'll keep them
running, whether they're a microservice, a web app,
or a batch job somewhere. App Engine flexible is for when
you have one app that
you just want to scale, and it's already in
a Docker container, you can push it to
App Engine flexible, and we will scale
it for you, from one to as many instances
of it you need. What is serverless? So serverless is
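The two workflows just described might look like this at the command line. It's a hedged sketch: the image name, replica count, and project are placeholders, and it assumes you already have a GKE cluster (for the first part) or an app with a Dockerfile and an app.yaml (for the second).

```shell
# Kubernetes Engine: hand the cluster instructions -- image and
# replica count -- and it keeps those containers running for you.
kubectl create deployment web --image=gcr.io/my-project/my-app:v1
kubectl scale deployment web --replicas=3
kubectl expose deployment web --type=LoadBalancer --port=80

# If a container dies, Kubernetes replaces it; watch with:
kubectl get pods

# App Engine flexible: deploy the containerized app and let it scale
# from one instance to as many as you need.
gcloud app deploy
```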
a marketing term. It is, right? Like, there is a server there. There's not some beast
living beyond space and time running your server--
your load for you. But it describes, I think, a
really important distinction, a different type of app,
which is, you pay as you go. Whatever you use, you pay for. If you don't use it,
you don't pay for it. And you don't care about the
underlying infrastructure. You don't have to know
about number of servers, or number of nodes, or anything. We will just run it for you as
much or as little as you need. So when we talk about serverless
on Google Cloud Platform, we're talking about two things. We're talking Cloud Functions,
where the unit of abstraction is a function, right? And you attach that function
to some sort of event. Someone drops a file
somewhere into a bucket. Someone sends a
message to a Pub/Sub queue, and you respond to it with
that simple Cloud Function. We also have a serverless
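Wiring a function to each of those two event types is a one-line deploy apiece. This is a sketch with made-up names (on_upload, on_message, the bucket, the topic) and an assumed Python runtime; the trigger flags are the ones first-generation Cloud Functions use.

```shell
# Fire on_upload whenever a new object lands in the bucket.
gcloud functions deploy on_upload \
    --runtime python39 \
    --trigger-resource my-photo-bucket \
    --trigger-event google.storage.object.finalize

# Fire on_message whenever a message arrives on the Pub/Sub topic.
gcloud functions deploy on_message \
    --runtime python39 \
    --trigger-topic my-topic
```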
solution that's app based. Now, it's very restricted
in terms of what you can do. You can only use
certain languages. You can't use native third-party code. You can only use code
in that language. But what you get
for that restriction is incredible scaling. App Engine can scale from
zero to pretty much infinite in a relatively
short period of time. I have a demo where I
will scale an app that's cold, like I didn't use
it, to about 8,500 QPS-- Queries Per Second--
in about five minutes. And to put that in
perspective, 8,500 QPS is what Wikipedia gets. So I'm able to go
from nothing, cold, to a Wikipedia-scale application
in five minutes with App Engine. But again, it comes at a cost-- restricted abilities
to do things. So the question is, should I
switch over to one of these? That's like a whole talk-- that's a whole separate
talk, of whether I should use serverless, virtual
machine, or containers. But I'll say, for this app,
without any context, no. It's working where it is. I'm bringing it over
to someplace else. I don't necessarily want to
put a lot of effort into it. Lift and shift is
absolutely fine. If I have plans to
grow this app bigger, or to grow it with a whole
bunch of other applications, maybe, it might make
sense to containerize. But again, that would
depend on the context. And if I'm going to have
very inconsistent use, then maybe serverless
would work out, too. But for this particular
app, with no context, I'm not switching it over. I'm staying on VMs. OK, we've come to the end. I've taken you through a pretty
good tour of what we have here. We started here. We started with a single app,
all running on one machine. But we ended up here,
where we've basically taken all the state,
pushed it elsewhere. And we can now grow
and expand, have a lot of different options
with the computing level. Now, there was so
much I couldn't cover. I was pretty vicious
with my editing pen, because I wasn't getting
under 40 minutes. So I didn't talk about our
ML APIs, which are awesome. So what you could do with our
ML APIs is the same sort of thing-- inserting the code the way we
did with Cloud Storage. You can grab-- within Vision
API, you can take this-- I'm not going to switch-- don't switch over. But you'll notice
there that I have tags that I got from
Cloud Vision API instead of making people do it. So this app, I could
just completely replace data entry on
this app with Cloud Vision if I wanted to. I think it's funny. A little detail
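Replacing manual data entry with Vision API tags can even be done from the CLI. A minimal sketch, assuming a local image file (ewok.jpg is a stand-in for one of the demo's photos):

```shell
# Ask the Vision API for labels on a local image. The response is
# JSON: one labelAnnotation per tag, each with a description and a
# confidence score you can threshold before storing.
gcloud ml vision detect-labels ewok.jpg
```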
is that it thinks that the Ewok-- the kid
in the Ewok costume-- is carnivoran, which
is kind of scary, but I don't know
how he's raised. I don't know. So just if you see a kid in
an Ewok costume, be afraid. But Cloud Vision was
able to do that for me. It also picked out
skin, and children, and picked a whole bunch of
details from those images. I didn't talk about
the API library. So if you're using any
of the other Google APIs, like Maps, or
YouTube, or Ads, you can manage them through here and
use them in your Google Cloud Platform project with
default credentials so that you don't have
to set up a lot of
credentialing information. You can just run it with
very minimal authentication, but securely. Didn't talk about Stackdriver. If you need your logs, or
monitoring, and all that stuff, Stackdriver can do that for you. All the stuff I talked about-- and even the stuff I didn't get a chance
to talk about-- will all help you run in this scenario. So if we look at Google
Cloud as a whole, I talked about the
Compute platform. I think we covered a lot of it. We covered storage
and databases. We covered a little
bit of networking. We covered a little-- the
tiniest little bit of big data in that I mentioned Pub/Sub. Mentioned the machine
learning APIs, mentioned developer tools, and
mentioned management tools. But if you look at
this whole list, there's a lot missing,
because there's just a lot more to the platform. So I want to ask, if
you have any questions, or if this has ignited anything
in you, please make sure you stop by the Cloud booth. There's everybody from
Google-- not everybody, but a lot of people
from Google Cloud are there and can answer
your questions. And hopefully, I showed
you how you can onboard, how you could get
started on Google Cloud. Keep in mind that
if you sign up, you get a $300 free
credit for 12 months that you can use
to kick the tires. And hopefully, you go from
here, discovering more about what's on the platform. So thank you very much. I've been Terry Ryan. Please feel free to heckle
me on Twitter, @tpryan. If you have any
questions, I'll be around. But I think I'm
running short on time. So have a good rest of your I/O. [MUSIC PLAYING]