>> URS HOLZLE: Good
morning, everyone. Good morning and welcome
to the second day of Next. So, as you know, we've
been building and operating a hyperscale Cloud for a very long
time, and our goal throughout this time actually has remained
the same, namely to make it possible for you to build great
applications and to be very productive as a developer or as
IT while we deliver this on a global scale with
unprecedented reliability, performance, and security. And this Cloud powers all of
Google's services, including our seven billion-user services - YouTube, Gmail, Search, and so on - and now it also powers GCP, which is itself a billion-user application, meaning that the customers of GCP collectively connect to over a billion individual end users every single day. And what makes this possible
is that we started from first principles. We designed every element of our
infrastructure so that you can be uniquely productive and enjoy the performance that we created. Now when you design from first principles, you have to go and actually optimize every single element - from efficient data centers to custom servers to custom networking gear to a software-defined global backbone to specialized ASICs for machine learning. We've been building
this Cloud at scale. And, in fact, in just the last
three years, we spent almost $30 billion on capital
expenditures alone. Now one example where this
all shows is our network. It's probably the largest
global network today. Analysts put its traffic at
between 25 and 40% of global Internet user traffic. And as a GCP or G Suite
customer, you benefit from this network because your traffic
travels on our private ultra-high-speed backbone
for minimum latency. And 98% of the time we hand off
your packets directly to the end-user ISP because we
interconnect directly with almost any ISP
everywhere in the world. So you won't see any congestion. And fewer handoffs mean more
throughput, lower latency, and better security. So, in fact, we have a
global network presence in 182 countries or
territories in the world. For comparison, the
United Nations today has 193 Member States. And so to carry this traffic to
pretty much everywhere in the world, we also
need to cross oceans. And so nine years ago Google
became the first non-telecom company to build
an undersea cable. That was the Unity
cable from the U.S. to Japan. And since then, we've built
or acquired fiber capacity, submarine fiber capacity, pretty
much anywhere in the world so we have a redundant backbone
to pretty much any place. So, for example, last year we
turned up what today is the highest capacity submarine
cable in the world from the U.S. to Asia. And in just a few months, we'll
turn up an even higher capacity cable between the U.S. and South America. Now one of the ways we use
this network, of course, is to connect GCP
regions to each other. Last year we announced
that we're building 11 new Cloud regions. Of these 11, Oregon and Tokyo
are already live, Singapore and Northern Virginia will go live
in just the next few months, and Sydney and London will
come on shortly after that. And today I'm excited to
announce that we are adding three more regions to the set,
namely the Netherlands, Canada, and California. And all of these will come
online either this year or next and bring the total count of GCP
regions globally to 17 and the total number of zones to 50. Now our technology
leadership extends from hardware to software. If you think about Containers,
MapReduce, NoSQL, TensorFlow, Serverless, Kubernetes, they all
originated at Google and they've been powering our services,
including GCP, for a long time. And recently we added yet
another of these ground-breaking new systems to our lineup so
you can use it on GCP, and I'm talking about Cloud Spanner. Spanner solves a long-standing
problem in databases. Until now, you could use a SQL database with nice transactional semantics, but it was really hard - and very complicated and expensive - to scale it beyond one or just a few machines. Or you could use a NoSQL system like Bigtable and get nearly infinite horizontal scalability, but you no longer had strong consistency guarantees, so you were pushing the complexity into your application. Neither of these two choices
gave you a truly global system because your replicas needed to
be pretty close to each other in order to get good performance. And Cloud Spanner, which is a
globally distributed database service, solves these problems. So it scales horizontally like
a NoSQL system and thus can handle pretty much any transaction load, but it also has the strong consistency guarantees and the relational SQL semantics of traditional databases. And as an extra bonus, it can also be globally distributed while maintaining high performance. We use it for hundreds of
mission critical applications inside of Google, including
AdWords, and now it's available on GCP as well. So how does Spanner do that? I don't really have time to go
into that but I'll show you one example. So Spanner uses a timestamp
service, a highly accurate timestamp service, to keep the
copies of your data in sync. This timestamp service is
powered by atomic clocks because it needs to be really
very highly accurate. So that means we literally run
atomic clocks in every one of our data centers to
make Spanner possible. But with GCP, since Spanner is
a service, you see none of that complexity. In fact, Cloud Spanner is fully
managed and serverless so that you don't have to worry
about machine types, capacity planning, replication, software
updates, really anything. And that's what we try to
provide with all GCP services - as much value as possible
with as little administration as possible. And to show you how Cloud
Spanner really works, I am pleased to
introduce Greg DeMichillie. >> GREG DEMICHILLIE:
Thanks, Urs. Thanks, everyone. I hope you like live demos
because this is the first of four live GCP demos you're going
to see in the keynote today. And I couldn't be more
excited to have Spanner be the first one. To give you a picture of how
amazing Spanner is, let's start at the beginning, just
creating the database. Those of you who have actually
tried to deploy a multi-region database that's highly scalable,
highly reliable, and highly resilient, know that
there's like a zillion steps. You've got to configure
machines, you've got to set up replication, you've got to
worry about failover, you've got sharding, what happens if I have
network issues, and then there is the operational side of
just keeping the thing running, having to have whole staffs of
people who do nothing but take care of the machinery. With Cloud Spanner, we turned that into one screen. Robert is going to bring
up the dialogue to do that. He names the instance, he
chooses whether he wants it to be a single or a multi-regional
database, and then he says how many nodes. And in this case, he picks
42 because that's always the right answer. So that's it. We've now created a multi-regional database that has amazing capabilities. But that's just getting started.
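For reference, here is a rough sketch of what that creation step looks like through the Cloud Spanner Python client library; the project, instance, and configuration names are placeholders I've invented, not values from the demo.

```python
# Hypothetical sketch: creating a multi-regional Cloud Spanner instance with
# the Python client library. All IDs and names below are illustrative.
from google.cloud import spanner

client = spanner.Client(project="my-project")
instance = client.instance(
    "ticket-shop",                                   # instance ID (made up)
    configuration_name="projects/my-project/instanceConfigs/nam-eur-asia1",
    display_name="Ticket shop",
    node_count=42,                                   # "always the right answer"
)
operation = instance.create()                        # long-running operation
operation.result()                                   # block until the instance is ready
```

The multi-region configuration name is the only part that makes it span continents; everything else looks like creating any other instance.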
Let's look at Spanner in an actual real-world scale example. Imagine you're a ticket seller. You want to sell
tickets to events worldwide to customers worldwide. Now obviously it's pretty
important that you sell a ticket once and only once, right? And it's also pretty important
that customers get a good performance experience wherever
in the world they are buying their tickets. In this example, we've deployed
Cloud Spanner into three Google regions - U.S., Europe, and Asia
- all connected by that private network that Urs told you about. We're also running workloads
that emulate customers buying billions of tickets. In this case, we're selling at
about half a million tickets per minute, all through a system that is maintaining consistent version snapshots, full ACID support for transactions, and full, strong consistency. That's the power of Spanner as a distributed database. Our customers get a low latency
experience; we get good old traditional SQL consistency. Now to prove that this really is
a database of some size, let me show you the schema
behind this database. You can see it
actually is a SQL database. There are tables that are joined together. And if you look at that ticket
table, it has three billion rows in it. So while we're doing this, in
fact, we're running all this traffic against a
real SQL database. In fact, we can actually run SQL
queries against that database while it's being hammered
with all that traffic. Here we have a query. And if you look at the bottom part of the WHERE clause, you will see that we are looking for tickets in the U.S. that are still available as of 10:00 PM and how many seats they have. So these are our top five events with unsold tickets.
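As a sketch of what running a query like that looks like through the Python client - the table and column names here are invented for illustration, since the demo doesn't spell out the actual schema:

```python
# Hypothetical sketch of a demo-style query. The schema is invented; only the
# shape of the call mirrors what the console demo is doing.
import datetime

from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("ticket-shop").database("ticket-db")

query = """
    SELECT e.EventName, COUNT(t.TicketId) AS AvailableSeats
    FROM Events AS e
    JOIN Tickets AS t ON t.EventId = e.EventId
    WHERE e.Country = 'US'
      AND t.Sold = FALSE
      AND e.StartTime > @cutoff
    GROUP BY e.EventName
    ORDER BY AvailableSeats DESC
    LIMIT 5
"""

cutoff = datetime.datetime(2017, 3, 9, 22, 0, tzinfo=datetime.timezone.utc)

with database.snapshot() as snapshot:                # strongly consistent read
    rows = snapshot.execute_sql(
        query,
        params={"cutoff": cutoff},
        param_types={"cutoff": spanner.param_types.TIMESTAMP},
    )
    for event_name, available_seats in rows:
        print(event_name, available_seats)
```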
Now what's important here is what you don't see. What's remarkable is how
unremarkable this SQL looks. If you have deployed a sharding
system, your SQL is a mess because you're trying to have to
do joins across multiple shards of your database. In fact, it's probably not even
human writable SQL when you use those systems. Spanner takes care of all that. But you know real-world
applications aren't static; they change over time. What happens when
your needs change? Well, first of all, Spanner
lets you make transactionally consistent schema updates with
no downtime, which is critically important when you're running
systems that your business depends on. And what happens if you get a
wonderful success and your needs grow? Well, with a traditional SQL
database, you better put a PO in for a brand new server, you
better figure out how to configure the new server, you
better figure out how to migrate the data, you better
figure out how to do cutover. Or if I'm doing scale out, well
now I've got to add a whole new complex middleware layer
that's incredibly difficult to maintain. Even that takes time
and adds operational risk. In Cloud Spanner, Robert is
going to go back to the console and show you how we add
capacity to a Spanner database. We simply go from
70 nodes to 99 nodes. That's it. We've scaled out.
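A minimal sketch of the same resize with the Python client library, assuming the instance from the earlier sketch; the node counts simply mirror the demo:

```python
# Hypothetical sketch: adding capacity to a running Spanner instance.
from google.cloud import spanner

client = spanner.Client(project="my-project")
instance = client.instance("ticket-shop")
instance.reload()                  # fetch the current configuration

instance.node_count = 99           # scale out; no resharding or downtime on our side
operation = instance.update()
operation.result()                 # wait for the resize to complete
```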
That's the power of a horizontally scalable SQL database. And, of course, since we use Spanner so much internally, we made sure it handles large databases. Robert is going to switch over
to the monitoring console here and what I want you to
look at is that graph on the bottom left. We've been running this
whole demo against an 80 terabyte database. And if you look at the top right
graph, when we loaded the data in, we loaded the data at an
average of 800 megabytes per second, and we peaked at about a
gig per second of loading data. That's amazing. That's crazy fast. So that's Cloud Spanner. You create a highly reliable, performant, fully managed, multi-region database in seconds. Cloud Spanner is SQL. You get interoperability and the power of SQL queries, you get ACID transactions, plus you get advanced features like on-the-fly schema changes. Cloud Spanner is fully managed. We never had to worry
about setting up replication, sharding, failover. The system actually even tunes
itself automatically over time. So, as you use it, it learns
your query patterns and it optimizes its own
performance characteristics. And Cloud Spanner
is built for scale. We've been running our largest
production systems on this for years. So with that, I'll turn it back
over to Urs for a little bit more of the keynote. Thanks. >> URS HOLZLE: Thank you, Greg. So if you're used to traditional
databases, it takes a while to absorb that. I just want to summarize because
Spanner is truly something new. It is a SQL database, a
traditional SQL database, with the same semantics, that just
scales by changing a value on the admin interface. It scales through thousands of
nodes and has the same strong consistency guarantees that
your applications have come to depend on. And, of course, as you saw, easy
to administer, does replication automatically, multiple
locations worldwide, all without any setup. It has been immensely popular
inside of Google, and so we're very, very excited that we
have it now available on GCP. Now from databases to compute. Many of our customers have
demanding workloads, financial risk models, movie rendering,
scientific computing, large ERP systems, and so on. And so starting today, GCP VMs will come with up to 64 cores and 416 gigabytes of memory. And like our other VMs,
these VMs are available as preemptible VMs. So if you have some time flexibility in your computation, you can use them for batch
workloads and get much lower prices. We're not stopping here. So, later this year, you will
see even higher core counts and memory sizes of a
terabyte or more. In fact, when it comes to
hardware innovation and to optimizing the hardware for the
best performance in the Cloud, we've been working very closely with partners throughout the industry to
specialize these systems and to accelerate the
innovation in hardware. One of these
partners of course is Intel. Please welcome Raejeanne Skillern from Intel to talk about our partnership. Raejeanne. >> RAEJEANNE SKILLERN: Hello. >> URS HOLZLE: Thank you
for joining us, Raejeanne. It's great to have you here. >> RAEJEANNE SKILLERN:
Yes. Thank you. >> URS HOLZLE: We started to work together quite a long time ago. In fact, I don't quite remember
even how long ago it was. Can you tell us
about the history? >> RAEJEANNE SKILLERN: Yeah. It actually started in 2003
with our Intel Core 2 processor. Google was able to give us
in-depth performance analysis and real-world benchmarks and
enabled us to tune our processor for their unique workload. Fast forward to 2008, we started
working on the Intel Xeon processor, the 5600 at the time. And since then, we have
optimized six generations of Xeon processors specifically
for Google's environment. Now in November of 2016, we
took that collaboration and we expanded it. Your Diane Greene and our Diane
Bryant announced a strategic alliance between
our two companies. We are now collaborating on
Hybrid Cloud orchestration, security, machine and deep
learning, and IoT edge to Cloud solutions. >> URS HOLZLE: That's right. We have had six generations
of custom processors and that required a lot of work. We actually
learned a lot from that. Can you tell us how we worked
together on this and how this actually works in practice? >> RAEJEANNE SKILLERN: I can. And it starts early. It starts in the absolute
earliest phases of our architecture and CPU
process development. We take the Google feedback
every step of the way and we incorporate it and we iterate
and modify our processors. We know that to truly create
the best technology that is performance optimized, it really
relies on the software as well. That's why both of our companies
are investing in the TensorFlow and Kubernetes
open source projects. Intel is actually going to
contribute code to both of those projects so we'll dramatically
improve the performance of both Kubernetes and TensorFlow on our
Intel Xeon processors and our Intel Xeon Phi processors. >> URS HOLZLE: So three weeks
ago we launched the newest Intel Xeon server processor, codenamed Skylake, on GCP. But you can't yet buy this processor anywhere, and Intel is not planning to launch it for quite a while. >> RAEJEANNE
SKILLERN: For a while. >> URS HOLZLE: How
does that work? >> RAEJEANNE SKILLERN:
Well, first I would like to congratulate Google on being
the absolute first to have Cloud services on our next generation
Xeon processor, Skylake. >> URS HOLZLE: Great.
Thank you. Thank you. >> RAEJEANNE SKILLEANER: There
was a trick to doing this. First we had to accelerate
production readiness of a targeted set of features that
are a little bit different than our broad feature set, all the
systems in the software that we have to qualify and validate for
the broad general availability. Intel and Google started an
early definition on a custom skew all the way through
significant joint investment in the validation of those
systems into production in your environment. It was a team effort many steps
of the way, at massive scale and in a very aggressive
timeline for both of us. >> URS HOLZLE: Well thank you
for being such a great partner. We really enjoyed
working with you. >> RAEJEANNE
SKILLERN: Thank you. We enjoy it. We look forward to tomorrow. >> URS HOLZLE: We're very happy
that our customers benefit from this decade of collaboration
between us and Intel. Because we're pushing the
envelope in so many directions on performance that we have
to really, really work very differently with vendors. Skylake offers great performance
for compute-intensive workloads. We are very, very happy that
Skylake is available at first on Google Cloud. Now let's go from
performance to economics. Clouds, as you know, are
supposed to be elastic and free from capex or capacity planning,
yet some Cloud providers force you to pay up front for three
years to get the best price. But if you have to buy a VM for
three years, then how is that better than buying your
own server for three years? There's no flexibility. And, in fact, a recent study
showed that Cloud users waste on average 45% of their spend on
resources that they bought but can't use. That's caused by many factors. One is these three year leases
that force you to predict your future perfectly. And let's face it;
none of us can do that. We end up with stranded
resources that we actually can't use. Similarly, we are forced to
buy - like on premise - we are forced to buy servers in
fixed sizes, and so we end up overbuying on some dimension,
and we still pay for it even though we can't use it. And last but not least, you pay
for the full hour even when your test run runs for
only ten minutes. So you pay for compute time
that you're not even using. All of that adds
up to 45% waste. Now we believe that it should be
easy to get the best price, and so we have solved
these sources of waste. In fact, in 2014 we introduced
automatic sustained use discounts that give
you discounts without any long-term agreement. So as soon as you use a VM for
more than a quarter of a month, you start seeing savings. And if you use the VM for
the entire month, you get a 30% discount.
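To make that arithmetic concrete: the published schedule at the time billed each successive quarter of a month of use at 100%, 80%, 60%, and 40% of the base rate, which is where the 30% figure comes from. A small sketch, using an illustrative base price; the tier percentages come from the public pricing documentation of that era, not from the keynote itself:

```python
# Sketch of how the 30% full-month sustained use discount arises from the
# published tier schedule (100/80/60/40% of base rate per quarter of the month).
BASE_HOURLY = 1.00                      # illustrative base price per VM-hour
HOURS_IN_MONTH = 730
TIER_RATES = [1.00, 0.80, 0.60, 0.40]   # one rate per quarter of the month

def sustained_use_cost(hours_used):
    """Cost of one VM used for hours_used hours within a single month."""
    cost, remaining = 0.0, hours_used
    for rate in TIER_RATES:
        in_tier = min(remaining, HOURS_IN_MONTH / 4)
        cost += in_tier * BASE_HOURLY * rate
        remaining -= in_tier
    return cost

full_month = sustained_use_cost(HOURS_IN_MONTH)
print(full_month / (HOURS_IN_MONTH * BASE_HOURLY))   # ~0.70, i.e. a 30% discount
```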
But you're still free to stop using a VM at any time. So sustained use discounts
brought the Cloud back into Cloud. You can change your mind
at any time but you still get great prices. And you don't need to accept
VMs whose shapes come in powers of two. So with other providers, if your
application needs say 20 cores and 50 gigabytes of RAM, you
might have to buy the next larger machine size that might
have 40 cores and 160 gigabytes of RAM. And so you end up paying for literally twice the resources that you need. But not on GCP, because with custom machine types, you can dial in exactly the configuration you need and pay for exactly the resources you need.
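As a sketch of how that looks through the Compute Engine API - a custom shape is simply encoded in the machine type URL as custom-&lt;vCPUs&gt;-&lt;memory in MB&gt;; the project, zone, and image here are placeholders:

```python
# Hypothetical sketch: creating a VM with a custom machine type of 20 vCPUs
# and 50 GB of RAM (custom-20-51200) via the Compute Engine API.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

project, zone = "my-project", "us-central1-b"        # placeholders
config = {
    "name": "custom-shape-vm",
    "machineType": f"zones/{zone}/machineTypes/custom-20-51200",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(
    project=project, zone=zone, body=config).execute()
print(operation["name"])                              # the create operation's name
```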
Today, over 20% of our core hours on GCP are for custom machine types, and our users save an average of 19% through that customization. And we even help you
save money with our right sizing recommendations. That's a service that looks at
your memory and CPU usage and then suggests the best
virtual machine size. And last but not least, GCP
has permanent billing for VMs. So if you use a VM for 11
minutes, you pay for 11 minutes. It seems pretty logical. So GCP and only GCP is
truly an elastic Cloud. You only buy what you need,
you only pay for it when you actually use it, you get
automatic discounts if you use it for extended periods of time,
and we even alert you when you appear to be wasting resources. And when you put all of this
together, you save an average of 60% relative to what you
would pay on other Clouds. So given the complexity of Cloud
pricing we just went through, it's not surprising that the
same study I quoted says that 53% of Cloud users say that
optimizing and controlling spend is their top
problem in the Cloud. But not with GCP. Because with GCP, you don't need
to create an entire new ministry in your company just to get the
best price because our flexible pricing structure lets you enjoy
a Cloud as it was meant to be - on demand, pay as you go,
paying only for what you need. But today it gets even better
because we're introducing another way for you to save
- committed use discounts. So in exchange for a one or
three-year commitment, you receive a discount of
up to 57% billed monthly, no upfront payment. But these are not inflexible
reserved instances that lock you into a particular instance type
or family that force you to pay up front. No. They only commit you to an
overall volume for your compute and memory. So you're not locked into
any particular VM size. You can change individual
machine types and VM shapes at will. You can change
the number of VMs. You're only committing to
the aggregate volume and not the details. And if you're not sure how much
commitment to make, you can start smaller because you still
get sustained use discounts and all the other benefits on
any of the usage above your committed usage. And so you're not facing a huge
unit cost cliff when you exceed your forecast. Because we believe that when
you move to the Cloud, capacity planning and cost planning
should really become a distant memory and not your
number one headache. So on GCP you are saving money
automatically with no regrets, no spreadsheets, and no PhD in
economics needed to manage it all. In fact, one way where you see
this flexibility is with our GPUs. Many of our customers already use GPUs for a variety of applications to speed up
simulations, transcoding, deep learning, computational
chemistry, finance, and many more. But on GCP, you can add a GPU to
any VM configuration and start it up in under a minute, much
faster than our competitors. So that means that if you want a
GPU with lots of memory, you can get that. Or if you want a GPU with little
memory but lots of cores and an SSD, you can get that, and you
don't waste money on your GPU because you have to buy some other stuff with it.
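A sketch of the extra request-body fields that attach a GPU to an arbitrary VM shape, assuming the same Compute Engine API call as the custom machine type example above; the accelerator type and zone are illustrative:

```python
# Hypothetical sketch: the additional fields that turn an ordinary instance
# config into a GPU instance. Zone and accelerator type are placeholders.
zone = "us-east1-d"

gpu_fields = {
    "guestAccelerators": [{
        "acceleratorType": f"zones/{zone}/acceleratorTypes/nvidia-tesla-k80",
        "acceleratorCount": 1,
    }],
    # GPU instances cannot live-migrate, so host maintenance must terminate them.
    "scheduling": {"onHostMaintenance": "TERMINATE", "automaticRestart": True},
}

# Merge into the same config dict passed to compute.instances().insert()
# in the previous sketch.
config = {"name": "gpu-vm", "machineType": f"zones/{zone}/machineTypes/n1-standard-8"}
config.update(gpu_fields)
print(config)
```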
And, of course, GPUs also have per-minute billing, so you don't end up paying when you're not actually using them. So to help you learn more about
this, I want to introduce one of our customers who is using
GPUs as a critical part of their business. Please welcome Ashok
Belani from Schlumberger. Hi, Ashok. Great to have you here. >> ASHOK BELANI: Thank
you for having me. >> URS HOLZLE: So Schlumberger
moved to Google platform last year. What drove you to adopt Google? >> ASHOK BELANI: At
Schlumberger, we've been leaders in high performance
computing for many decades. You see the size of our compute
clusters growing on the slide. We bought the first Cray back, I think, in the mid '80s, and we moved to massively parallel PCs in the '90s. And then we were actually one of
the first companies to jump onto very high GPU to CPU
ratios back in 2007 or 2008. What we think is that we can
go to the next level of compute possibilities working
together with Google. >> URS HOLZLE: So you clearly
use massive amounts of compute. Tell us what you're
actually doing with compute and those GPUs. >> ASHOK BELANI: So we are
looking in the subsurface for our geologists who are exploring
for oil and gas deposits. So we look very deep into the
subsurface, tens of thousands of feet. We use acoustic data and
dynamic data of different kinds. Here you see on the slide behind
me a vessel which has a huge spread of sensors which
are being pulled behind it. There are millions of sensors on
a spread which is ten kilometers by two kilometers. Sometimes we use two of these
boats which would cover half the surface area of San Francisco. And, as you see, the amount of
data that we generate has been increasing over the
years significantly. And then we use high performance
computing to create images of the subsurface for
our exploration efforts. >> URS HOLZLE: So tell us
how your experience has been. How has it been
working with Google? And tell us actually
what we're seeing here. >> ASHOK BELANI: So we see an
image here which actually has been - the data has been
acquired over about five months. This data was actually processed
in Google over a period of three months. And you see the subsurface which
is actually 20,000 feet into the ground and it is 50,000 square
kilometers off the shore of Mexico - bigger than
the size of Switzerland. >> URS HOLZLE: Our
mountains are taller. No need to go on the ground. >> ASHOK BELANI: I think the
interesting thing about working with the Google Cloud is that we
are able to create basically a cluster which is suited for that
particular problem because of the algorithms we are going
to use on that problem. Within a matter of minutes, we
are able to mix the CPUs, GPUs, high GPU ratios, memory close
to the processor the way we need for particular algorithms
to work in a very good way. This is a big advantage
that we get out of Google. But I would say another
advantage is Urs Holzle himself. I think Google should be
very proud of having someone like this. >> URS HOLZLE: Okay. Yeah. Let that be
stricken from the record. That was not in the script. >> ASHOK BELANI: That
wasn't in the script. We certainly cooperate very well
with the engineers in Google. We actually created a center
here in Menlo Park so that our engineers could work together
with Google very closely. And we are only at say 10-15% of
the way on this journey to make Cloud very efficient
on our applications. We think we will be able to
weave in things like big data type of technologies or
analytics or machine learning. In the future, they will be
woven into our applications and I think we will definitely be
able to achieve the next level of computing in oil and gas. And I think we serve all the
oil companies in the world. So as we go on this journey,
they will come together with us. >> URS HOLZLE:
Thank you very much. It's really been
great working with you. >> ASHOK BELANI:
Thank you very much. >> URS HOLZLE: Thank you, Ashok. All right. So when you use Google, you use
the same security infrastructure that Google uses. And we've been investing an
enormous amount of effort into that security. You can see an overview
of the many layers here. I don't really have time
to talk about that today. Today I would like to show
you just a few highlights. But we have a detailed security design white paper and a one-hour session that do real justice to this topic. So we put a lot of effort
into security starting with physical security. For example, in this single
Google data center campus we have over 175 security guards
on staff 24/7 in addition to countless cameras, motion
sensors, iris scanners, and so on. Many of our locations are not
individual data centers but data center campuses. And so this very high level of
physical security is amortized over hundreds of
thousands of servers. And thus on GCP,
low prices don't mean low security standards. In fact, to protect the security
of our hardware, we put a security chip on all our new
machines to serve as the basis of trust for that
machine's identity. So this is a custom chip
designed by Google so we know exactly what it does and it
helps us protect servers against tampering even at the BIOS level; or in this particular example, it helps protect the BIOS of the networking card that we built. The chip, in fact, is so small
that I'm actually wearing one on my earring here. So here it is. So now you know that you
are watching the authentic GCP keynote. So this hardware chip helps
us authenticate the hardware. And then on top of that, it
helps us authenticate the services that we run. When services call each other,
that is, the services that implement GCP, they must mutually prove their identity to each other using certificates, and the binaries are cryptographically signed so that we can verify we are running the right binary. And then on the storage side,
GCP's durable storage services encrypt all data before it is
written to the physical media. And on the networking side, all Internet traffic to and from G Suite or GCP is protected with strong encryption and multiple layers of protection. Now that covers
the data center side. But a system is only as
secure as its user accounts. Today, phishing probably is the
number one security problem for enterprises, meaning someone is
trying to trick your users into providing their password and
perhaps their one-time token. But your G Suite and GCP account
is already protected by a sophisticated abuse detection
system to thwart those attackers that are trying to guess your
password or trying to use a stolen password. But with our optional
phishing-resistant second factor, we can provide a very
strong defense against phishing. No other Cloud today provides
you this protection against what is probably the number one
security problem in enterprises. For further end-to-end security,
we also ensure that the user's client device is secure. So Chrome, Chromulus, and
Android are designed ground-up for security, all featured
Cloud management and frequent over-the-air security updates. And our client operating
systems feature a hardened boot. So they actually have a chip
similar to this to verify that they're booting
the correct software. They have encryption of storage
by default and enterprise grade management. And with such a strong
security stack and nearly zero administration cost and a wide
range of models to choose from, it's no surprise that Chromebooks outsold Macs last year. >> URS HOLZLE: Yes, thank you. So today we are announcing
several new security features that we're adding
to GCP and G Suite. First, some tools to
let you secure your data. The Data Loss Prevention API is now available; it lets you discover PII and other sensitive data in your content and take appropriate actions such as redacting it. You will see a demo in a minute. The engine behind this API is the same engine that powers the data loss prevention feature in Gmail and Drive, so you get consistent results everywhere.
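For a feel of what calling it looks like, here is a rough sketch against the DLP API's content:deidentify method as it exists today - the keynote-era API was an earlier beta - with a made-up project ID and sample text:

```python
# Hypothetical sketch: redacting sensitive values in a chat transcript with
# the DLP API (v2 REST surface). Project ID and text are placeholders.
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
session = AuthorizedSession(credentials)

project = "my-project"
body = {
    "item": {"value": "Sure, my SSN is 123-45-6789 and my card is 4111 1111 1111 1111."},
    "inspectConfig": {
        "infoTypes": [
            {"name": "US_SOCIAL_SECURITY_NUMBER"},
            {"name": "CREDIT_CARD_NUMBER"},
        ],
    },
    "deidentifyConfig": {
        "infoTypeTransformations": {
            "transformations": [
                # Replace each finding with its info type name, e.g. [CREDIT_CARD_NUMBER].
                {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}},
            ],
        },
    },
}

resp = session.post(
    f"https://dlp.googleapis.com/v2/projects/{project}/content:deidentify",
    json=body)
print(resp.json()["item"]["value"])   # the text with SSN and card number redacted
```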
Next, Cloud Key Management Service is now generally available. It lets you manage encryption for your Cloud services: Cloud KMS protects your data and secrets stored at rest, and it can automatically rotate your keys for you.
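A minimal sketch of the encrypt call, assuming a key ring and key already exist; every resource name here is a placeholder:

```python
# Hypothetical sketch: encrypting a secret with an existing Cloud KMS key.
import base64
from googleapiclient import discovery

kms = discovery.build("cloudkms", "v1")

key_name = ("projects/my-project/locations/global/"
            "keyRings/app-secrets/cryptoKeys/db-password-key")

plaintext = b"s3cr3t-db-password"
response = kms.projects().locations().keyRings().cryptoKeys().encrypt(
    name=key_name,
    body={"plaintext": base64.b64encode(plaintext).decode("utf-8")},
).execute()

print(response["ciphertext"])   # store this; only KMS can decrypt it with that key
```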
And then we also added tools that let you safely access that content once it's been secured. The first one is the
Identity Aware Proxy or IAP. It is available in beta and it
enables you to configure secure controlled access
to your applications. So you can enforce "who can see what" access control at the application layer. So you don't need client
software, remote access VPNs, firewalls,
network configurations. You just deploy IAP with a
single click and you're going to see that in a second. So IAP acts as a smart
front-end in front of all of your Cloud applications. It's built on top of the GCP
load balancer so you benefit from the transport security and
the scalability of the Google front-end service. And last but not least, security
key enforcement is now available to GCP users as part
of G Suite Enterprise. So this feature lets you enforce
security key use for all members of your domain. That means that access to all of your applications is now strongly phishing-resistant. Here to demonstrate some
of these features are Greg DeMichillie and team. >> GREG DEMICHILLIE:
Thanks again, Urs. You know, today the task of
taking a corporate application and making it available to
your employees outside of your network is just painful. You've got to set up VPNs
and nobody likes those. They're hard to set up. They're hard to configure. And in the end, you end up with
this perimeter-based security, right, with a semi-hard exterior
and a soft underbelly, which isn't nearly as
effective as modern device multi-factor authentication. So we're going to give you an
example of how easy it is to use the Identity Aware Proxy to take
a typical corporate enterprise app and make it
available to your users. Now the specific app we're
going to use in this case is as enterprise as it comes. It's Oracle E-Business
Suite running on Google Cloud platform. And as a company, we want our
users to access this application whether they are in the office
or they are out on the road. So let's see how simple
that is to do with IAP. Neil is going to click
the button to turn on IAP. He's then going to tell us what
domain he wants to publish that application for. And now users in that domain
and only that domain can access the application. So let's test it out. He's going to flip
over to a browser tab. Now the first thing is there
is no VPN software installed on this machine. We are just
navigating to the URL. Now first he's going to try his
personal Gmail account which has not been authorized
for this application. Sure enough, he is denied. Now he's going to try it with
his corporate account which has been authorized and he gets the
username, he gets the password, but now he will get
prompted to enter his second key authentication. So if you look on the sides,
I think we're going to project that up in a second. Well, okay, trust me,
he pushed - there it is. He pushed the security key. I think you know
what that looks like. And now he's taken directly into
the application without having to have another
additional login step needed. Not only is this easier for the
developer, it's easier for the admin, and lord knows it's
easier for the end user rather than hassling with a bunch of
VPN software, and it's more secure because we're not
trusting anybody who just happens to come in
through a VPN tunnel. So that's IAP. The other thing Urs talked
about was data loss prevention. Let me show you how Google can
help you secure some of your most sensitive data. Most companies today have a
policy around data minimization, the goal being to minimize the
amount of data that you collect and store to only the data
that's actually needed to run your business. But that's easy to do in words
but hard to do in practice. Data has a way of leaking
out all over the place. So let's look at a
concrete example. We have an enterprise
application that a support agent might use to chat
with your end users. And in the chat you can see
Neil is a support agent and he's asked Alice for some information
to verify her account. Well, Alice has way over-shared. We just wanted the
last four of her Social. She gave us her full Social, her
phone number, and a picture of her Social Security card. Now obviously we want to store
this chat, right, to see how our conversations are going or how
the agents are working, but I can't store that in a database. I'm going to have half my
company now knowing Alice's Social Security Number. So how do we do that? DLP makes it easy. When he ends the chat, we will
use Data Loss Prevention to identify and redact
sensitive information. So now you'll see
that we've - yes. We recognize over 40 different sensitive info types. In this case, we've
identified Alice's name. If he scrolls down a little bit,
you can see we found her phone number, her Social Security
Number, we replaced all those with red dots, and we even went
into the picture of the Social Security card, found the Social
Security Number there and redacted it. Now this chat can be
used for analysis. Now just to prove this is real,
Neil has a webcam over there and we're going to
try one more example. I was in the green room a while
ago and I found this credit card sitting there and
it said Urs Holzle. I'm just kidding. Neil has a sample credit card
with a valid number but it's not Urs' card. I trust DLP. I don't trust 10,000 of
you with cell phone cameras. So he's going to put the credit
card number on it, he's going to hit the button, we're going to
use DLP, and it will find and redact the credit card. Now this isn't just dumb OCR. You'll notice it didn't block
out the expiration date or the name because we told the API
that the only information we wanted to eliminate
was credit card numbers. So you can control your
definition of PII, what matters to you. So there you have it. Identity Aware Proxy. We made it super simple to take
a corporate application and make it available to our end users
without the hassle of complex network configurations. And with Data Loss Prevention,
we were able to help make sure that you minimize the
amount of data that you collect so that's one less headache you
have in terms of compliance or regulatory or really
just running your business. And with that, Neil and I will
turn it back over to Urs again. Thanks, everybody. >> URS HOLZLE: Thank you. Thank you, Greg. So, as you just saw, it's really
easy to use the IAP Proxy to control access to
your applications. But we're already working on
the next version based on a principle that we've been
applying to our own corporate users for a while. Because we view every access
decision to a resource as not something that's just about the
user credentials and maybe their second factor but really about
something that should be based on the context around it; for
example, the state of the user's device, their
location, and so on. So we call this
Context Aware Security. The user's context determines
access, not just the network they're on or who they are. The context that we have today
with the IAP Proxy is just the user identity and the
security key - so the strength of authentication. But you can expect our future
versions to use a richer and richer context over time to
better secure access to your Cloud applications
or your G Suite. Now I am thrilled to introduce
Brian Stevens to tell you more about how customers
adopt GCP. Brian. >> BRIAN STEVENS: Good morning. Thanks, Urs. So, public Cloud has absolutely
exited the early adopter stage. It's now a shared platform,
available to everyone from start-ups to the
world's largest enterprises. And it's quickly gone pan
vertical from financial services, healthcare,
industrial, government, everybody is choosing the Cloud. And when they pick GCP,
why do they come to GCP? A lot in common. They want the
world's best security. They want to be on a flywheel
of continued innovation, all available through an API. They want the world's best data and analytics. And the most important thing,
they want the tools and the platforms that
their developers love. I've spent a lot of time
working directly with customers. And when you actually zoom out
and you try to look at patterns, we're actually seeing three
distinct things when they come to GCP. The first is wholesale
migrations of their existing workloads from on-premise to
GCP, which, to be honest, was a little bit of a surprise. The second is building
Cloud native applications. That includes start-ups as well
as some of the world's largest enterprises now. And the third is just coming to
Cloud to get the richest set of data analytics and
machine learning. So our first Cloud service was
back in 2008, Google App Engine, almost nine years ago, and
it was an incredibly powerful platform as a service, possibly
too advanced for its time. We're going to talk about
that a little bit more soon. But most enterprises either
can't or don't want to rewrite their application
architecture just to move to the public Cloud. And what's amazing is that
even without that rewrite, the drivers for moving to GCP are
still incredibly compelling. They get the amazing security
that you just saw, they get to reduce their capex, they get
incredible reliability - Google's pretty good about reliability - they get performance for free (look at what you just saw with Skylake and what's happening in the network), and they get improved efficiency. So I actually kind of loathe, to
be honest, industry conferences, but there's this one CIO
conference that I go to every year, and it's put on by Accel,
and it's a small, little, intimate venue
with about ten CIOs. And two years ago, when we went around the room, the top things they cared about - what kept them awake at night - were security and mobile handsets. The two are really related. A month ago when we went back
again, same group of CIOs, they went around the table and the
top two things that were keeping them up, security still, and
then shutting down data centers. Every one of them was shrinking
their data center footprint. I felt a little weird because
- and then it was my turn and I was talking about
we're doing the opposite. Like look at what
Urs just showed you. So it's kind of clear where
these workloads are going. So, ideally, you want to be able
to shift the workloads with as little change as possible
because you don't want it to be a lot of work just to move to
Cloud, and the more change you introduce, the more risk there is. But that's just
the first chapter. Once you get to GCP, often what
happens is chapter two, and that's when they actually
look at how do we re-factor? How do we use things
like a managed database? How do we use
something like Spanner? So our call to action at Google
is how do we make that as simple and easy as possible for them,
and that includes technology, processes, and people. One thing that we did is we
recently added right from the Cloud console the ability to
migrate a virtual machine to GCP. This is more than
transcoding an offline image. It's actually the live migration
of a running server to GCP. What I love about it is that
it's hypervisor agnostic, so it works on bare metal, it works
on Hyper-V, ESX, KVM, even from another Cloud. Also, we've been making big
investments in Windows because we want to make GCP as great
for Windows developers as it has been for Linux and
Open Source developers. Our goal wasn't just to be
an okay Windows platform. We want to be a great Windows
platform, perhaps the best Windows platform. So we already have support
and pre-built images for many flavors of Windows Server, as well as SQL Server. And we also support Active Directory running in the Cloud, and you can integrate
that with your on-premise domain controller. But it's really important for us
that for developers and Windows developers that we meet
them exactly where they are. We don't want them to have to
change how they do things just to be able to take
advantage of Cloud. That's why we did the Visual
Studio integration that we've done with GCP so that you can
actually deploy .NET apps from Visual Studio and then
just manage all of your Cloud resources. And we've also integrated
with hundreds of cmdlets for PowerShell right
into our Cloud SDK. And so now they can be very
comfortable managing all of their Windows-based
Cloud projects. So today we're announcing the
general availability of SQL Server Enterprise and that
actually includes support for high availability as well as clustering. Also, .NET support is now in beta and will be available in both App Engine and Container Engine. And to help people on this
journey, we're announcing a new Windows partner program. And so we've partnered up with
a number of top specialists that actually have great
Windows expertise as well as GCP expertise. So they can help customers on
their journey to move Windows environments to GCP. So now, whether it's Linux or Windows, MySQL, Oracle, or SQL Server - and, as of today, also a beta of managed PostgreSQL - developers can really easily migrate to GCP with
minimal refactoring of their application stacks. So what's the best test
for moving to the Cloud with minimal friction? Probably to be able to
do that with zero downtime. It seems pretty impossible,
but Evernote actually did it last year. They moved their entire software
infrastructure from their on-premise data centers to GCP -
200 million customers depend on Evernote every day - and they
did this migration in 89 days with zero downtime. And so Lush, a British cosmetics
company, they began their migration to GCP. They started last year in
September and it was critical that they finish in time for the
holiday season and they did it. And so here, I would like to
introduce Jack Constantine from Lush to tell us
about their journey. Hi, Jack. >> JACK CONSTANTINE: Hi, Brian. >> BRIAN STEVENS: So 22 days. I don't think anybody
is going to believe you. It seems impossible. How is that even possible? >> JACK CONSTANTINE: Yeah. So in September we
started discussing doing the whole migration. The actual migration was from
December the 1st until December the 22nd. So it's not just a little bit
business critical; we're talking peak trade time. Yeah, so it was a huge deal. I mean one of the key things,
I couldn't have done it without the in-house engineering team
that we have, some of the guys in the audience. I wouldn't be up here if it
wasn't for all the hard work those guys put in, the hours,
the dedication, the focus. I think they deserve
a round of applause. We found ourselves in
a bit of a tricky spot. We were in a contract that we
weren't really comfortable with. We wanted to be able to actually
have a look and see what else we could go for. The contract ran out on the 22nd
of December, hence the reason that we had that hard deadline. >> BRIAN STEVENS: Lucky us. >> JACK CONSTANTINE: Yeah. I'm a little bit of a risk-taker
myself, as you can tell. Basically there is very little
bureaucracy in Lush, so the ability for me to be able to
actually make that decision was quite fast and then the
team just powered through. It was a really exciting time. We were really, obviously,
so pleased with the result. >> BRIAN STEVENS: So were the
like any sort of challenges with the migration,
technical inhibitors, process? >> JACK CONSTANTINE: Well I
think, like with any migration, there are always
technical challenges. You've got to worry about
getting your data from one place to the other, and the amount of it, and making sure you've got consistency. We moved 17 websites from all
over the world with customer data, with order
data, product data. Obviously you want all of that
to be completely up and stable. But I think one of the
main things really from my perspective was the
kind of commitment to actually achieving it. I think sometimes people can -
obviously when you're going to throw a kind of hard deadline
like that, it can sound a bit unachievable. But I always like to think it
depends on kind of which reality you're looking from. I like to think that it's
realistic; you may think it's not realistic. Right? I think it's realistic so I
think we should try and do it. So keeping that focus on people
actually kind of believing that this is a goal we can achieve
in the timeframe and not letting people start to put the blockers
up and go, oh, we're going to have to delay this, we're
going to have to delay that. All of a sudden you watch
everyone - I mean, yeah, everyone gets very tense,
but also you achieve a lot. You actually get through it and
the things you need to get done get done. >> BRIAN STEVENS: Anything that
could have made it a little bit easier? >> JACK CONSTANTINE:
Oh, definitely. I think the awkwardness of our
previous supplier and the fact that it was a very closed
environment made it very difficult for us to get
visibility of everything we wanted to be able to move over. >> BRIAN STEVENS:
Open wins again. >> JACK CONSTANTINE: Exactly. Openness is something that we
absolutely cherish in Lush. Obviously you guys do at Google,
which has been great for us. And the other thing that we had,
we had this great - one of my colleagues had this great
conversation with a Google partner on the phone and it was
about a week before and they were obviously getting a bit
scared, oh, is this going to happen, and they
were on the phone. The partner said what is Plan B? My colleagues said Plan B
is to make Plan A happen. >> BRIAN STEVENS:
That's a great line. >> JACK CONSTANTINE:
That's it. That's it. And we made it happen. >> BRIAN STEVENS: That's cool. Urs, the last speaker, was the
one who recruited me to Google. I was already sold because I
believed the future is public Cloud and just fascinated by it. You have to be a technology
company to win this and just everything that Google has
been investing in for years in technology. But the surprise for me when
I actually got here was the culture, right, the ethics,
the diversity, the focus on inclusion, sustainability. And in our chat yesterday, it
sounds like there are a few similarities in your culture
that you and your parents have. >> JACK CONSTANTINE: Yeah. Yeah. So my parents founded Lush over
20 years ago and there are a lot of ethical values that
we've built throughout the organization, and we pride
that in kind of everyone we go through, supply chain when we're
buying ingredients, and fair trade, and looking at the best
quality ingredients, we look at that with our packaging. I represent the more digital
side of the business, obviously. It's a very interesting time and
Lush is looking at its digital future and understanding where
we go and how we navigate through the landscape. I think there are a lot of
similarities with Google in terms of the openness versus
closed, the cultural elements. It's been great to be able to
- one of the reasons we were so eager to do the migration in
that time period was because it felt like by moving over to the
Google Cloud platform we would be aligning with our ethical
values on a much higher level. Things like the renewable energy
and the open mentality, all of those things, we're looking
at the moment around digital ethics policy. Actually, just before we did the
migration in November, we did a global campaign to support
keeping the Internet on, especially in countries where
the government may shut the Internet down because they don't
want to encourage communication. Obviously kind of the reverse
of what we wanted to happen with our migration. Thankfully we also kept the
Internet on when we migrated our websites. But, yeah, the synergy between
Google and Lush is great. I'm really excited about even
the things you've been showing today and being able to
work together much more on prototyping new ideas,
having that flexibility. I spoke to Urs earlier and he
was saying about that whole kind of engineer-to-engineer dynamic. And my team straight away,
they were absolutely buzzing. We've only been working with you
guys for the last 3-4 months but the energy is huge. It's great. >> BRIAN STEVENS: Well, my wife
and daughter have always been big fans, but I'll say
I'm a convert now as well. Thanks, Jack. >> JACK CONSTANTINE:
Thanks very much. >> BRIAN STEVENS:
That's a good story. So Evernote and Lush are great
examples of a complete migration to GCP from either another
Cloud or on-premise. But for the largest enterprises,
the move to Cloud isn't a point in time. It's going to be a
perpetual state to run in a hybrid environment. And what we don't want is a steel curtain between public Cloud and private. It should really feel like this
data center extension, albeit this really amazing
data center extension. So a cornerstone of that is Virtual Private Cloud, VPC. That really allows enterprises
to build these really nice integrated hybrid environments. It gives them a completely
private virtual network running inside of GCP. It used to be that everything
running on Cloud had public IP addresses. Now with VPCs you can have a
completely private environment, private IP addresses,
private DNS, and full control. And you want to be able to
also control what applications outside of your VPC can actually
have ingress in, as well as to make sure you have full control
of anything running inside your VPC that you allow to
talk to the outside world. And on top of all that, you
always want full auditability and full telemetry so that you
really see everything that's going on, even inside of
these managed services. They should not be opaque. Also, you don't want application
and data silos forced on you just because of this
move to public Cloud. That was really one of several
drivers behind the acquisition of Apigee. So with Apigee, you can actually
put really elegant APIs in front of your technology so you turn it into consumable services, and that allows you to integrate in both directions - whether applications are built in the Cloud and integrated into your on-premise enterprise, or built on-premise and integrated into your Cloud applications. So soon Chet Kapoor, the VP of
Apigee, is going to be on stage and he's going to go into far
more detail on how they're helping in building
connected business platforms. So Docker has been
this amazing thing. It's been this amazing gift to
the industry what Docker has created because it's really the
first time that you've actually been able to build these
consistent application stacks that run across hypervisors,
different operating systems, and different Clouds. It's not perfect in terms of
compatibility but it's really setting us free to do
some amazing things. And it's because of Docker that
Kubernetes is so successful. Because now you need a control plane - you need to be able to manage and orchestrate these container-based environments - and that's what Kubernetes does. So it's really quickly become
that de facto operational model. And because it's open source,
we're seeing customers, enterprises run Kubernetes
on-premise and then they're running our managed service
for Kubernetes, Container Engine, in the Cloud. It gives them this single
operational model that's entirely consistent across
hybrid environments or they can integrate them all together and
run a single control plane on GCP to even manage
the on-premise world. Serverless is a
really important concept. Developers shouldn't
need to think about managing infrastructure. Servers should be provisioned
automatically and just sized to the workload. And engineered right, it's more
reliable, it's easier, it's more efficient. The spoiler alert is the
servers are still there. When you need them,
they become plentiful. And when you don't need
them, they just go away. It's really been
a design approach. Serverless is not new to Google. It's been a design
approach across many of our major services. On the compute side, App
Engine and Container Engine are serverless. On the database side, with Datastore and Spanner you don't see infrastructure, and the same goes for BigQuery. Each service consumes no servers
when there is no load and they all scale out horizontally for
when you need more horsepower. And today we're announcing the
beta of our newest serverless offering - Cloud Functions. So functions are simply
fragments of code and they get applause. That's great. But what developers do with
these fragments of code is they connect services together and
they plug them into a growing corpus of events across GCP and
that's how they tether it in. But what it does is powerful. It lets you take this generic
Cloud that's good for everybody and you can personalize it in a
way that is meaningful to you. During the alpha, we saw people
do some pretty amazing things. One example that I love was one
company actually looked at event logs and then they keyed off
certain events in the event logs and they actually automatically filed bugs in JIRA. I think that was pretty cool. And then we also see
people doing PII scanning. They plug in the PII types they care about, and any time an object comes into GCS, they can scan it automatically.
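As a sketch of that GCS-triggered pattern - Cloud Functions launched with a Node.js runtime, so this uses the Python runtime that came later, and scan_text_for_pii is an invented stand-in for a real DLP API call:

```python
# Hypothetical sketch: a background Cloud Function that fires on every new
# object in a Cloud Storage bucket and scans it for PII.
from google.cloud import storage

def scan_text_for_pii(text):
    """Invented placeholder for a real DLP inspection call."""
    return "123-45-6789" in text          # toy check only

def on_object_finalize(event, context):
    """Triggered by a google.storage.object.finalize event."""
    bucket_name = event["bucket"]
    object_name = event["name"]

    blob = storage.Client().bucket(bucket_name).blob(object_name)
    contents = blob.download_as_bytes().decode("utf-8", errors="ignore")

    if scan_text_for_pii(contents):
        print(f"PII detected in gs://{bucket_name}/{object_name}")
```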
That ability to extend GCP is valuable both for developers and for system administrators, and the possibilities are virtually endless. So I mentioned earlier that back
in 2008 Google App Engine was really this pioneering serverless platform, and it was ahead of its time. The core promise to developers
still remains the same - bring your code and Google will
handle everything else. It empowers applications to
scale from one request per day up to millions of
requests per second. And so when you actually
liberate developers from managing and patching servers,
dealing with scale, dealing with load balancers, version
rollouts, managing databases, they can create great things. That's what's been
happening at Snap, Home Depot, and Philips Lighting. And internally to Google,
App Engine has been around a long time. So our corporate IT, we have
thousands of App Engine-based apps in production that Googlers
depend on each and every day. So beginning today, we're delivering on the promise of Google App Engine and making it available to an entirely expanded developer community. It's a focus on more
openness, developer choice, and expanded portability. So out of the box, we now
support seven popular languages - Java, Ruby, Go, PHP, Python,
C#, and Node - or you can now, for the first time, bring your
own runtime, bring your own framework to App Engine
Flexible Environment. As long as it runs in the
Docker Container, it now runs on App Engine. Thank you. So I can't think of where
serverless would matter more but to mobile developers. Do you know a mobile
developer that wants to manage operating systems? What they want to focus on is
building a great user experience for their users. So we've been working really
closely with the Firebase team, which is Google's mobile
application development platform, which supports
iOS, Android, and Web. They actually just surpassed
a milestone last month. In the last 11 months, they now
have one million active projects on Firebase. So, really amazing momentum. Today we're actually making
Firebase even more powerful because we're integrating it
closer so that you can access GCP resources
right from Firebase. The first thing we did is we
integrated Cloud Storage into the Firebase SDKs, so now
you can access any GCS bucket anywhere in the
world right from Firebase. So you get this direct to mobile
upload and download for every Cloud storage user
from a mobile platform. One thing that people love about
Firebase is it has this amazing built-in analytics capability
and that's what developers use to understand their users. Now we've integrated that so you can take the analytics data in Firebase and drive it into BigQuery, which allows even more analysis and a real understanding of your users in realtime. And we've also integrated the
Cloud Functions that we just talked about with Firebase,
and so that allows Firebase developers to extend their
backend logic in crazy ways. They can send push
notifications, transform data with machine learning, define custom business logic, all with just a few lines of code.
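A minimal sketch of that kind of backend extension, using the firebase-functions SDK in TypeScript; the database path, topic name, and payload are hypothetical.

```typescript
// Hypothetical: when a new note is written under a claim in the Realtime Database,
// push a notification to everyone subscribed to the 'adjusters' topic.
import * as functions from 'firebase-functions/v1';
import * as admin from 'firebase-admin';

admin.initializeApp();

export const notifyOnNewNote = functions.database
  .ref('/claims/{claimId}/notes/{noteId}')
  .onCreate(async (snapshot, context) => {
    await admin.messaging().sendToTopic('adjusters', {
      notification: {
        title: 'New claim note',
        body: `Claim ${context.params.claimId} was just updated`,
      },
    });
  });
```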
In fact, the integration with Cloud Functions has been the number one feature request of Firebase developers over the last year. And then finally, we love
lawyers almost as much as we love developers, so we're
actually working to extend GCP's terms of service to cover
many of the Firebase products. So we're not there yet but soon
you'll be able to bundle much of Firebase into essentially the
same contract that you use for GCP. So we'll be making a lot
more integrations over the coming months. Greg is coming back now and he's
going to show you how we use GCP and Firebase together
to really modernize corporate IT apps. Greg. >> GREG DEMICHILLIE:
Thanks, Brian. Brian told you we're going to
take an example of an existing on-premise application and
we're going to modernize it with Google Cloud. Now in this case, the app that
Chris and I are going to work with is an old
ASP.net application. Well that's not an
ASP.net application. That's an ASP.net application. This is an application used
by an insurance company. So the adjusters go out in the
field, they take their digital camera, they take pictures of
the accidents, they put it in their SD cards, and then
they upload the images into the application. It's also got an API on it so
that you can manage all the images that are
associated with a given claim. Now we're going to start
bringing this application out of the '90s and
into the modern era. We're going to start by giving
our agents a good mobile application so they don't
have to lug a laptop around. Firebase makes that super easy. You can build a mobile
application without having to be a backend expert. In this case, what Chris is
showing you is the line of code that integrates
Firebase with Cloud Storage. So this line of code allows the
camera to take a picture, that picture then automatically gets
uploaded into a Google Cloud Storage bucket.
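The actual demo code isn't shown in the transcript, but an upload of roughly that shape, using the Firebase Web SDK in TypeScript, might look like this; the config, path, and function name are placeholders.

```typescript
// Upload a captured photo straight from the client into the project's Cloud Storage bucket.
import { initializeApp } from 'firebase/app';
import { getStorage, ref, uploadBytes } from 'firebase/storage';

const app = initializeApp({ /* your Firebase project config */ });
const storage = getStorage(app);

export async function uploadClaimPhoto(photo: Blob, claimId: string): Promise<void> {
  // Objects written here land in a GCS bucket, so server-side tools
  // (PowerShell, Cloud Functions, and so on) see them immediately.
  const photoRef = ref(storage, `claims/${claimId}/${Date.now()}.jpg`);
  await uploadBytes(photoRef, photo);
}
```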
So to show how this is going to happen, Chris is going to take a quick photo
with the phone. Yep. >> CHRIS: All right,
everyone, say cheese. I love having audiences do that. >> GREG DEMICHILLIE: So that
photo now is being uploaded and it's being stored into a
Google Cloud Storage bucket. Now to show that it really is
in a GCS bucket, we're going to switch over and we're going
to list the contents of this bucket. Now the Windows users among you
are going to notice that he's using PowerShell. That's because he's using
Cloud Tools for PowerShell. It gives Windows developers a first-class experience using Google Cloud Platform with
the services that they love. And sure enough, there
is the picture that was just uploaded today. So now we've got a claim picture
in GCS, how do we get it to this old legacy application? Apigee and Cloud
Functions make that simple. The good news is our legacy
app has an API, as I mentioned. The bad news is the app was
written over a decade ago so it's a SOAP API. So it's not really very friendly
to a modern RESTful developer like Chris. Apigee, however,
is an Enterprise API Management platform. We're going to start by using
it to create a proxy to convert this legacy
SOAP XML into modern JSON. So here is the Apigee console. We've connected it to
the WSDL of the service. I can't believe I'm
saying that in 2017. And on the left side, you see
the SOAP XML, and on the right side, Apigee has automatically
created a much friendlier, easier for the developer
to use, JSON version of it. Now Apigee has a lot
of other capabilities. So, in addition to doing that,
Chris has applied a quota so that we don't flood this poor
service with millions of mobile agents all trying to upload
photos at the same time. It automatically
provides throttling. We've also put authentication in
so that only our agents and only those mobile applications
can upload the data. Now Apigee has got tons
more capabilities; I am only scratching the surface. But for our purposes,
that's good enough for this application. So we've got images in a GCS
bucket, we got our application that's now got a nice modern
API on it, how do we connect the two? Well I could deploy a VM,
but that's overkill, right? All I want to do is copy a
little file from GCS to an API. Why should I pick an operating
system and have to patch the OS and deploy a big,
heavy VM image? Cloud Functions allows me
to just write a snippet of JavaScript. This JavaScript is
listening to that GCS bucket. So any time a file gets added,
this snippet gets invoked. And if he scrolls down, you'll see he is using the POST method, that nice, RESTful interface onto our web service.
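A reconstruction of that glue snippet, not the demo's actual code: a TypeScript Cloud Function that fires on new GCS objects and POSTs them to the legacy app through the Apigee proxy. The proxy URL and JSON field names are made up for the example.

```typescript
// Hypothetical glue function: forward each new claim image to the legacy app's JSON API.
// Assumes the global fetch available in Node 18+.
import { Storage } from '@google-cloud/storage';

const storage = new Storage();
const CLAIMS_API = 'https://example-org.apigee.net/claims/v1/images'; // placeholder proxy URL

interface GcsEvent {
  bucket: string;
  name: string;
}

export async function forwardClaimImage(event: GcsEvent): Promise<void> {
  const [contents] = await storage.bucket(event.bucket).file(event.name).download();

  await fetch(CLAIMS_API, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fileName: event.name,
      imageBase64: contents.toString('base64'),
    }),
  });
}
```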
So now when Chris takes a picture, we should see the picture go all the way through. Now I don't have a car
crash on stage, Chris. I think you've got some sort of next best thing for us, though? >> CHRIS: Actually, what I am
prepared to do, Greg, is I've got some scale model vehicle
replicas here that I am going to use to simulate a high
speed collision scenario. >> GREG DEMICHILLIE: This is
scaled down in a digital scale up. Yeah, very good. >> CHRIS: You ready? >> GREG DEMICHILLIE:
Yeah, go for it. >> CHRIS: Vroom, crash. Ah! >> GREG DEMICHILLIE: Please
tell me you didn't misplace the phone. So he's going to take the
picture of, oh, the humanity, and it gets uploaded, and now
he's going to switch back to our application, our corporate app. He's going to hit F5 because
it's the '90s and we have to refresh. And there we go,
there is our application. We started with a legacy on-prem
app, we used Firebase to build a mobile application that didn't
even require a backend expert, we wrapped the old SOAP API in
modern JSON, and Cloud Functions was this wonderful glue layer
to connect everything together. But one more thing. This application started as an
on-prem application and we've wrapped it with all this amazing
Cloud stuff, but it's still sitting on-prem. Let's fix that bug, too. With just a couple clicks, Chris
will start the Migration Wizard. In about 45 minutes, we will
live migrate this running service into Google Cloud
Platform where it runs on Google's amazing infrastructure, network, operations,
and reliability. And now we've really taken this
app and brought it into the modern era. Now I'll turn it
back over to Brian. Thanks. >> BRIAN STEVENS: Thanks, Greg. So, great companies use data
to react faster to the market, build novel new products, it
even changed culturally how they work. Breaking down data silos to see
bigger and better pictures of what's going on in realtime
is really super important. And so for GCP to enable that,
we deliver an end-to-end managed analytics platform. It now spans storage, data warehousing, ingestion - you saw PII cleaning - ETL, batch and streaming modes, visualization, and even machine learning. It is expanded and extended
by a great set of partners. So Google is actually pretty
good, right, about being a data-driven company. But to support that,
it was all about Dremel. Dremel was the purpose-built
analytics engine inside of Google that we depend on. There is a white
paper out on it. It surfaced as Google Cloud
BigQuery for users outside of Google. It's great because it scales
horizontally in realtime. It can ingest millions of rows. It can process
trillions of rows a second. And you really need that for a
lot of new applications to have realtime decision-making. Think about what's
happening with data ingestion and IoT and social. So it's great for realtime
results across ever-changing data sets. As part of that, it's a big part
of our data analytics platform. What we've been doing is we've
been actually connecting this sophisticated data analysis
that's inside of BigQuery with rich data sources such
as advertising platforms. So today, to make that easier, we're announcing the Google BigQuery Data Transfer Service. What that does is it automates the transfer of data from SaaS applications into BigQuery
but it does it on a scheduled, managed basis. So today we have
connectors for AdWords, DoubleClick, YouTube Analytics. And then once the data is
inside of BigQuery, that's where further enrichment happens. You can integrate the ads data
with weather data, geo data, your sales data. So it makes it really easy
for marketing teams to become empowered and they can build
marketing analytics warehouses on GCP. But the process of actually
bringing data into a data warehouse can
still be cumbersome. I've seen stats that data scientists spend 75% of their time just dealing with these disparate data sources and cleaning them up so that they can be homogeneous enough to relate to each other. And we want to make that easier. That's why today we're
introducing Cloud Dataprep. Dataprep is this intelligent
data service that allows you to visually explore and clean
your data so that it can be integrated into a
BigQuery environment. It's as simple as
using a mouse cursor. You can hover over an attribute
in a JSON object and you can decide that you want that
attribute to be a top level column in BigQuery. Or we've seen people take a
single field that has a location address and the Dataprep service
allows them to break that apart and say I want separate fields,
separate columns for state and Zip Code. And it's really smart. It actually uses machine
learning itself, because often in data, there are data quality issues as well. And so based on machine
learning, it suggests transformations to your
data to make it cleaner. So all of these need not be one-time ingestions and transformations; orchestrated under Cloud Dataflow, the aim is that you actually get out of batch and you're building a streaming data analytics pipeline. We've integrated with
Data Studio out of Google's analytics team. What that now allows is you can
actually visualize, create these really rich dashboards and
charts almost automatically. Or you can integrate all your
data that's now in Cloud with a lot of our machine learning. So you can actually train whole
new models and then use them for recommendations and predictions. Oh, with all of this, never will
you be managing infrastructure. You won't be
installing software. You won't be
integrating software. You won't be managing
performance and scalability. You'll just get right
to the mission at hand - analyzing data. So, enough talk. Greg is actually back to live
demo again how all this works. >> GREG DEMICHILLIE:
Thanks, Brian. This is my last of the
four promised live demos. Before we start, I would be
remiss if I didn't thank Chris, Neil, Martin, and Robert who
have been helping with building these demos. Would you thank them for me
before we get too far into this? Thank you. So to give you a picture of
how this data platform fits together, we're going to use a scenario of an advertising agency that puts ads on those screens in the back of New York City taxis, if you've ever been in a cab in New York recently. There's an ad that plays and
you can interact with it. To give you the picture
first, this is a simplified architecture of
this data solution. The taxis are sending their
position in realtime, where they are in the city, as well as
their destination, how many passengers they have, and
what sort of interactions the customers are doing. Pub/Sub is then
ingesting that data. We're using Dataflow to run a pipeline that handles and processes all that data. We're also archiving the entire historical data in BigQuery, which is a really powerful tool for you to do ad hoc analysis of all of your historical data.
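To make the ingestion side concrete, here's a small sketch of how a taxi client might publish its updates with the Pub/Sub Node.js library in TypeScript; the topic name and message fields are invented for the example.

```typescript
// Publish one taxi position update to a Pub/Sub topic.
import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub();
const topic = pubsub.topic('taxi-positions'); // hypothetical topic name

interface TaxiUpdate {
  taxiId: string;
  lat: number;
  lng: number;
  passengerCount: number;
  destination: string;
  timestamp: string; // ISO 8601
}

export async function publishUpdate(update: TaxiUpdate): Promise<string> {
  // Resolves to the Pub/Sub message ID once the update has been accepted.
  return topic.publishMessage({ json: update });
}
```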
And at the end, we want to make a visualization. So why don't we actually
see the pipeline in action? So this is the Dataflow
visualization tool. You see Robert has hovered. We are ingesting a little over
15,000 new elements per second into this database of
all of this information. But how do you
visualize all that? Google Maps is super powerful
but I would hate to point 15,000 updates per second
at my Chrome browser. So Dataflow actually allows us to downsample that event stream to give us a realtime
visualization of taxi positions. This is that visualization. This is showing where all these
taxis are in New York City at any given time. Now what if I want to look at
one particular customer who has placed ads with us,
a particular agency? We can filter that dataflow
and now the visualization will update so that we're only seeing
ads that are being placed for a specific store. In this case, the Acme store,
it looks like, is on the Upper East Side. And sure enough, we see that
we're placing ads roughly geographically near the store. But right now we're just
using a dumb rule for this. It's just a very simple rule. Where is the store? Where is the taxi? Place the ads. Can we do better? Can we automate this to start
to take into account all the various data that
we have about this? Let's start by using BigQuery
to sort of explore this dataset. The query Robert has here looks
at all of the taxis coming from the airports and tries to see
how many of them are traveling near restaurants.
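The demo's actual SQL isn't reproduced in the transcript; as a hedged stand-in, a query of roughly that shape via the BigQuery Node.js client could look like this, with a made-up dataset and schema.

```typescript
// What share of rides that started at an airport passed near a restaurant?
import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

const QUERY = `
  SELECT COUNTIF(near_restaurant) / COUNT(*) AS share_near_restaurant
  FROM \`my-project.taxi_demo.rides\`
  WHERE pickup_zone = 'airport'
`;

export async function airportRidesNearRestaurants(): Promise<number> {
  const [rows] = await bigquery.query({ query: QUERY });
  return rows[0].share_near_restaurant; // e.g. roughly 0.17 in the demo's narration
}
```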
Now note here he's querying this historical data even as we're ingesting new data at
15,000 events per second. So BigQuery gives him the
ability to have an always up-to-date realtime
view into his data. In this case, we see that
roughly 17% of our taxis are from the airport
and near a restaurant. That sounds like an insight that
maybe we ought to do some ad targeting there. And if you look at this query,
you can see it really is a non-trivial query and BigQuery
is handling all that for us. So right now we are
manually configuring these ads. I think what we want to do is
look at a way that we can do this in an automatic fashion. Now we could similarly search
for every different customer, but how do we
build a generalizable and flexible model? That sounds like a
job for machine learning. Machine learning is tailor-made
for taking large chunks of data and building models that give
you insights out of that. So we're going to build a model
that takes into account all of the data we have. Where did the taxi start? Where did it end? How many people were in it? What time of day is it? Where is it going? First, we're going to start
by using, as you see in this picture, Dataprep to take all that historical data, clean it up, make sure that it's ready to use, and then we're going to use it with ML Engine. So this is Dataprep. The first thing to notice is,
across the top, Dataprep has automatically inferred
the schema from my table. I didn't have to tell it that. Now if you look at that
passenger count column, you'll notice that it
looks a little funny. The histogram up at the top
includes some elements where it says there are 10 or 11 people. Now I don't know about you, but
I've been in a New York City cab and I'm pretty sure you can't
actually put 10 or 11 people in a cab. So that's a common data bug. Dataprep allows me to fix
that in a number of ways. I could delete the data. I could replace the 11 with a
1 on the theory that it's a fat finger mistake. And Dataprep allows me to build
simple recipes that clean up my entire data at scale. It also can do things like if
you look at our location column there, it's a composite column. It's longitude and
latitude comma separated. I would really like to
have a latitude column and a longitude column. Dataprep allows me to build a
recipe that splits columns with composite data into separate
columns so that I now have a better schema to work with.
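Dataprep builds these recipes visually; purely for illustration, the same split expressed in plain TypeScript (with invented field names) would be:

```typescript
// Split a composite "longitude,latitude" string column into two numeric columns.
interface RawRide {
  location: string;        // e.g. "-73.9857,40.7484"
  passenger_count: number;
}

interface CleanRide {
  longitude: number;
  latitude: number;
  passenger_count: number;
}

export function splitLocation(row: RawRide): CleanRide {
  const [longitude, latitude] = row.location.split(',').map(Number);
  return { longitude, latitude, passenger_count: row.passenger_count };
}
```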
Now this just scratches the surface of how Dataprep can take this really tedious job of
preparing data for analytics and make it much faster
and much more reliable. Once we've done this, machine learning, as you heard from Fei-Fei, allows me to build a
model on my laptop, upload it to Google, and deploy it at scale. Let's see how
that actually works. If you scroll down in the
Dataflow pipeline, you'll see we've added an ML Engine call. So, now as our taxi data is
coming in, we're calling to machine learning with the data
and saying: help us choose the best ad to target for this particular user.
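In the demo that call is made from inside the Dataflow pipeline; as a standalone sketch, an online-prediction request to a deployed model through the googleapis client might look like this, with a hypothetical project and model name.

```typescript
// Ask the deployed model which ad to show for one taxi's current features.
import { google } from 'googleapis';

export async function chooseAd(features: Record<string, unknown>): Promise<unknown> {
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const ml = google.ml({ version: 'v1', auth });

  const res = await ml.projects.predict({
    name: 'projects/taxi-demo/models/ad_targeting', // hypothetical model
    requestBody: { instances: [features] },
  });
  return res.data; // the model's predictions, e.g. the ad to place
}
```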
So let's switch back over to the map with the deployed machine learning model and now you'll
see that we start seeing some interesting combinations. When there is one passenger
coming in towards Midtown, the taxi is currently in Chelsea
and it's on its way to Midtown, we're presenting an offer for
Manhattan Bagels because it's a single person, maybe
he or she wants a bite to eat. If we pick a taxi that has more
than one person in it - if you have a taxi with more than one
person in it - there it is - in this case -
that's still not right. That's a one person one. But they're heading into a
different location and instead we've chosen a winery. If you pick a taxi with three
people, the system would automatically recommend
discounted tickets to Broadway, for example, as multiple
people coming to Midtown in the evening. So that's Cloud ML making
suggestions based on all our available data. We used Pub/Sub to ingest a
huge stream of data at scale. We used Dataflow to build a
pipeline to give us a good analytics system. We used BigQuery so that
we always had the complete historical data available. We never had to
deal with sample data. And at the end, we used Cloud ML and Dataprep to build and train a model at scale. What's important here
is what you didn't see. I didn't deploy
a virtual machine. I didn't deploy patches. I spent my time
actually in the data. That's the point of Big Data:
to spend the time in the data and not in the machines
that take care of the data. That's what Big Data on
the Google platform does. So I want to thank you all and
I'll turn it back over to Brian. >> BRIAN STEVENS: Thanks, Greg. So people that know me know that
I loathe buzzwords, and so I'm probably going to offend a few
people in the audience, but digital transformation is
absolutely one of the ones that I hate. I think in part it is because I've never seen even two people agree on what it means. But what I do know is this,
is that markets are incredibly competitive, each year more so
than past, and the new companies that actually take advantage of
new technology without having any legacy move really quickly
to either disrupt or create whole new lines of business. The companies that win are going
to be the ones that shed the mundane, take advantage of
state-of-the-art technology, and actually put people to work,
their intellectual horsepower, and they're creating
new end-user value for their customers. That's what we want to do at GCP: enable you on that journey. So thank you and it's my
pleasure to introduce Prabhakar Raghavan, the leader
of our G Suite team. >> PRABHAKAR RAGHAVAN:
Thanks, Brian. It's so exciting to be here. What is productive work? It's a systematic progress from
a huge multitude of choices down to a decision or an outcome. Whether it's figuring out where
to file the document that I just got, choosing the words for my
next email, or at the epic end, choosing all the pieces that
come together in a grand symphonic work, in every
instance we are going from chaos and ambiguity
down to an outcome. You will notice that the three
examples I just gave you range from the mundane to the sublime. The mundane end was where
do I file the document. The sublime end was
the grand symphonic work. Machines are really, really
good at taking care of the humdrum down there, but it
takes humans to do the truly creative work. At G Suite, we are obsessed with
the idea that computers should constantly raise the bar on what
they can get done at the mundane so that our users, your
employees, have more and more time to focus on
truly creative work. In a study of several million of
our Gmail users today, we find that one in eight of their
email replies is actually machine-generated and they take
those machine-generated replies and send them off
and they're good. So that's a case where we
are raising the bar on what computers couldn't do five years
ago but today they can take care of that and leave humans to
focus on truly creative work. And so what that means is while
our computers may not have written Beethoven's Ninth
Symphony, maybe we could have freed him up to write
nine more symphonies. At G Suite, this has been an
obsessive pursuit for over ten years. Today, three of our apps are
on over a billion smart phones. We have over three million
paying businesses that use the G Suite. But I will say this has been
an evolution, a journey for us, because we have gone from
applications crafted for consumers and then outfitted for
enterprises after the fact to having an
enterprise first focus. I'm going to give you a
couple of data points for that. One thing, we have begun to do
early adopter programs with our best enterprise customers so
that we don't just build a product and push
it out the door. We work closely in the final
months of development with our top customers and partners. We take their feedback and
refine the product and get it just right before we deliver it. Here are three examples of
recent early adopter programs that we have done. I call these out
for a specific reason. Each of these apps is
something that was built for enterprises only. These were not consumer
apps ported to enterprises. There is no
consumer pedigree here. The Jamboard, App Maker, Google
Cloud Search, every one of these is an enterprise-only product. You will see the
Jamboard in a bit. We don't expect too many
consumers to be buying those. Now at the center of productive,
collaborative work is content. I am thrilled to share a
statistic with you today. Google Drive today has over 800
million monthly active users and it's on a tear to
hitting a billion. It will soon become Google's
latest billion user product. We think of Google Drive as
the premiere personal storage solution, but we also see an
opportunity there where it can evolve from being personal
storage to serving the needs of the enterprise for file sharing. And as we go through this
journey, I will be making five announcements now to
represent steps along the way. First, Team Drives goes into
general availability today. As you can tell, this has
been one of the most demanded features from our customers. It lets teams easily share
content in their drive and manage the sharing. Now once the content is in a
drive or in a team drive, we would like to make sure that the
enterprise needs of compliance, of archiving, litigation holds,
e-discovery, all of that stuff is taken care of. So I am pleased to announce
general availability for Google Vault for Drive content. So once the content is in the
drive - okay, I see a few people that are excited
about that as well. Thank you. Google Vault for Drive has
arrived, general availability. All right, you say. But if you're a prospect, you
look at it and go that's all very well for content in the
Cloud but most of my content sits in on-premise file servers. What do I do about that? I'm excited to
announce that we are acquiring Vancouver-based AppBridge. AppBridge is a company
that builds connectors from on-premise file servers to
siphon the content up into Google Drive. The fourth announcement I am
making today around Google Drive has to do with the following. Once the content is up there,
how do you make it accessible and easily manageable for
someone who has got a Windows laptop or a Mac? No more sync clients. No more checkboxing
which files you want to save. No more worrying about how much
hard drive space do you have. Drive File Stream takes care
of all of that seamlessly and obliviously so you don't have
to worry about all of these minutiae and can work as if
Google Drive, the entire Cloud, is connecting
to your laptop. Great. Now once all of that content is
up in the Cloud, it opens up the potential for Google's
machine learning magic. Think of the following. You come to your favorite file
repository and you know there is a file you're looking for but
you're scratching your head did I create it, did Joe share it
with me, what key words does it have? In our studies, you spend
something like 40 seconds on average trying to figure out the
right filters and restrictions and browsing and keywords before
you actually get to the file. You shouldn't have to. There is one file
you are looking for. Google's machine learning magic
will build a predictive model for who you are and your
activity and serve up that file before you even ask for it. We call this Quick Access and
today it is generally available for both Drive and Team Drives,
Android, iOS, and the Web. So that is the cache of
announcements I had to make around Google Drive. Now that's all about content,
but really where do the people come in here? So for this next segment of
announcements, rather than my keep talking, we're going to do
a bunch of demos around teams and meetings for you. I'm going to call on stage my
colleagues Scott Johnston and Jonathan Rochelle. >> SCOTT JOHNSTON: I'm Scott
Johnston and I'm going to show you how we rebuilt Google
Hangouts with a focus on making teams productive. And to do this, we're going to
visit a company called Cloudy Coffee which is in the midst
of launching a coffee bean with 100X the caffeine
of a normal bean. This is a bean that I
desperately needed this morning and I think will be very
popular in the market. We're going to start by looking
at the new Hangouts chat, completely rebuilt to be an
intelligent messaging app for teams. Now, Cloudy and our G Suite
customers already use Hangouts' direct messages every day
to keep work moving forward. But in the new Hangouts
chat, we've added rooms. Rooms are a central place for
team and project discussions. To help look at this and help
us walk through this, I've got Mandy at the controls. >> MANDY: Fired up
and ready to go. So I'm going to bring up
Cloudy Coffee Room in the new Hangouts chat. >> SCOTT JOHNSTON: So the first
thing you will notice in this room is that discussion,
realtime discussion is threaded, and this allows me to separate
the lunch conversation from the work and allows teams to dive
deeply into discussion without fragmenting other discussions. Mandy, why don't we move a bit
lower in this room and look at some other stuff? So here we see Nicole has posted
a brochure for our launch. That brochure is in Docs. In the new Hangouts chat, Docs
and Drive are deeply integrated and the room manages the
permissions for you, so anybody who is a member of the room
now or in the future always has access to work with that file. So now we have a central
place to discuss project and team information. We know that search is critical,
and so we have built a powerful search directly into
the Hangouts interface. >> MANDY: So on top of free text
search, you can also filter by people or types. So you can always find
the content in your rooms. Let me take a look at slides. Ah, and there's the sales
forecast that Patrick put in. >> SCOTT JOHNSTON: Perfect. Teams work with a myriad of
tools today, and so we have created the Hangouts platform
that lets third parties deeply integrate with
Hangouts and Team rooms. The platform supports a wide
range of capabilities from lightweight scripting with our
Google Apps script so you can automate team workflows
quickly all the way through to intelligent bots. So let's look at an example. Cloudy Coffee uses Asana, a work-tracking product, to stay on top of their launches and know who is working on what and when. >> MANDY: Okay. So right from inside the room, I
can assign a task to Mike, who I like to assign all my tasks to. And with one simple
click, task is created. >> SCOTT JOHNSTON: Great. We've teamed up with a number
of companies like Zendesk, ProsperWorks, Box, and more, to
deeply integrate their products into the Hangouts platform. And let me show you actually how
we're using the platform itself to integrate our own products. We've built an
intelligent bot we call Meet. Meet uses natural language
processing and machine learning to automatically schedule
meetings for your team. >> MANDY: I'll ask Meet to find
a time for us - for the people in the room today. >> SCOTT JOHNSTON: Perfect. >> MANDY: Oops, maybe I'm -
I may have had too much of that coffee. >> SCOTT JOHNSTON: Too
much of the coffee? Yeah. >> MANDY: Oops. >> SCOTT JOHNSTON: So what's
happening here is Meet is going to go out, it's going to look at
all our calendars for members in the room, it's going to find the
optimal time for us to meet this afternoon, and automatically
book it in Google Calendar. >> MANDY: Actually, let's
move that meeting to tomorrow. >> SCOTT JOHNSTON: Okay. So here we are seeing with
simple conversational commands we can do something that
otherwise used to take many, many steps. Wait, if you're moving that
meeting to tomorrow, that means all of you are going to have
to stay until tomorrow for the demo. Is that okay? Are you guys good with that? I can lead a sing-along. We have snacks. No? >> MANDY: Good point, Scott. Let me just move
that to right now. >> SCOTT JOHNSTON: Okay. So we're going to ask Meet
to schedule a team meeting right now. And before we jump into this
meeting, let me talk about meeting technology today. I look around and I see us
landing rockets on rafts in the ocean and it's still so hard
to get people into a meeting. Our customers sometimes
spend ten minutes getting a meeting ready. Somebody doesn't have an
account, the meeting system sent them 72 phone numbers and
you can't find the code. And so this is what we obsessed
about when we rebuilt the Hangouts meeting experience. What does it mean
to obsess about it? It means no plug-ins required. One click and you're in. We dramatically reduced the
code size and optimized the experience so that you're
instantly in the meeting, you have less CPU fan, and your
video and audio quality are dramatically improved. All right, I'm
done with my rant. >> MANDY: Great. I think we have the
entire team already on. >> SCOTT JOHNSTON:
All right, great. >> MANDY: So when you click in,
you're instantly taken to what we call the green room where
you can check out and make sure you're ready for the
meeting and then join. Hey, guys. Welcome to the keynote. >> TEAM: Hey. >> SCOTT JOHNSTON: Here we are. So what you're seeing here
is the new Hangouts Meet, our enterprise solution
for video meetings. So, sure, it loads fast. All right, yeah. So it loads fast, it performs
well, and I'm sure you'll get to try that when we
launch it today. But there's also friction
getting into meetings in other ways. What about that consultant that
doesn't work for the company, doesn't use G Suite? I don't know why you wouldn't
use G Suite, but some people don't I hear. No problem. Vroon is a consultant that is
helping us with this coffee campaign, and with a
single link he has jumped in. >> MANDY: So I've just
accepted Vroon's knock and he's instantly there. Hi, Vroon. It also looks like Mike
has joined from the road. >> SCOTT JOHNSTON: And that's
because Hangouts Meet with every meeting now can contain a dial
in code so that you can connect and participate in the meeting
even when you don't have a data connection. So there is a lot more I could
talk about with Hangouts Meet, but what I want to stress is
that the focus was to really, really get you into the meeting
as quickly as possible so the technology would get out of
the way and you could focus on real work. I don't want to spend more
time taking because we're in a meeting. So now that we're actually in
the meeting, why don't we talk about the ad
campaign for this new bean? >> FEMALE SPEAKER: Yeah, I have
some ideas that I can white board but I don't know if you
guys are going to be able to see it. >> JONATHAN ROCHELLE: Stop. Please stop. You said white board? >> SCOTT JOHNSTON: Yeah,
is there a problem? >> JONATHAN ROCHELLE: And you're
trying to reduce friction in the meeting? That's a great idea. >> SCOTT JOHNSTON:
What's the issue? >> JONATHAN
ROCHELLE: No, really. I mean, it's 2017. Nobody's going to
be able to see it. And a five person
meeting is bad enough. We've got 5,000 people here;
they're not going to be able to see the white board. Oh, wait. Actually, we can just have you
point the camera to it, right? Yeah, we'll do that. >> SCOTT JOHNSTON: Yeah. >> JONATHAN ROCHELLE:
And you can't save it. There's no way you're going
to be able to save it. >> SCOTT JOHNSTON: Can't
I write don't erase? >> JONATHAN ROCHELLE:
Yeah, how about that? Write do not erase on it. That will work. >> SCOTT JOHNSTON: Let me guess. >> JONATHAN ROCHELLE:
Or snap a picture. >> SCOTT JOHNSTON: This thing
they rolled out here, do you have a better solution? >> JONATHAN ROCHELLE:
I think we might. Yeah. Let's talk about that. So what we need is not to waste
the time once we're in the meeting, right? We got there really easily. Let's not waste that. Let's get work done. That's why we meet, to
collaborate and to work. But there's never been
a great tool for that. The white board is close. What we need is all the
virtues of the white board. It's fast, it's simple, and it's
freeform, but we want that with something where all
the friction is removed. That's why we created Jamboard. Jamboard is a white board in the
Cloud, in the meeting room, and beyond your meeting rooms. You just pick up the
stylus and you start thinking, communicating, and
working with your team. So let's see what
that really means. So T.J. is going to write some ideas that we've
talked about for this new Cloudy Coffee
ad campaign, but T.J. won't be working alone here. >> FEMALE SPEAKER: Hey, guys. Now we can't see the Jamboard. >> JONATHAN ROCHELLE: All right. I think we've got a
solution for that, though. So wait for it. The Jamboard knows that there
is a meeting going on and automatically presents a
prompt, and with one click T.J. can join the meeting and present
the Jamboard to the meeting. So everybody remote, everybody
on the meeting can see what he's doing on the Jamboard. Hey, Patrick. Patrick, you guys can see
the Jamboard now, right? >> PATRICK: Yeah. >> FEMALE SPEAKER: Yep, thanks. >> MALE SPEAKER: We're all good. >> JONATHAN ROCHELLE: And by the
way, we have an opinion on the stylus that T.J. is using. The Jamboard stylus is passive. You don't have to charge it. You don't have to pair it. You don't have to dock it. And when you lose it, it won't
cost an arm and a leg because it's passive. It's intuitive and simple. Most importantly, the Jamboard
still knows the difference between that stylus and your
finger so you can write with the stylus and erase with
your finger naturally and intuitively. So, guys, why don't
you help us out here? Help T.J. out and take a minute and let's
brainstorm some activity. So you know the ad campaign that
they're working on, sometimes you use those sticky notes
if you're in the same room. So the team is going to use
the Jamboard companion apps and anyone on the team can now
add content and help T.J. with his brainstorming. So whether it's from their lap
or their phone or their tablet, from an Android or an iOS
device, the companion apps let them participate and get work
done to add their ideas, to make a point, to organize
what is already there. It's the same magical
collaboration experience hopefully you've gotten
used to in Google Docs. Now we're all at
the white board. So Lucy in Liverpool or in the
room over there, or Bereen in Brazil, or Tae in
Taiwan, this is teamwork. This is actual work happening. Progress here and now
while we're in this meeting. So let's keep working. We're looking at where to
target this ad campaign. So let's see another feature. We're going to look at
a billboard location. So how about New York City? Let's go big. I've seen a lot of
billboards in San Francisco this week, actually. But we're going to go big
and go in Times Square. So the power of the Web, the
power of Google is right there for T.J. or anybody using
the companion apps, finding relevant, functional, beautiful
information to add to the Jam and your Jam comes to life. So T.J. is using Search and Maps on the
Jamboard to add useful content for the team, and Bereen and Tae
and Lucy and others are adding other content. So they're working on visuals. What's the branding? What is the branding feel we
want for 100X caffeine coffee? I'm afraid to ask. So they're all using Search. And Jamboard is also
integrated with Google Drive. So let's try something else. You see somebody actually added
using the companion app, a Slides deck that we're working on - the financial model for Cloudy Coffee. So spreadsheets, presentations,
and documents, and anything from Drive can be added to the
Jamboard from the companion apps so the team can
see it, work on it, and keep it. Excellent. Now imagine we had
more than one Jamboard. Imagine we had more than one. And we actually do. We have actually somebody in New
York on this meeting, one of our best graphic designers. I think we'll work on a custom
logo for this Cloudy Coffee. So Elon is working on a Jamboard
out of our New York office. You can see actually the avatars
- T.J., if you could open up that frame organizer - you
can see the avatars of where everybody is while they're
working on the different frames in the Jam. And Elon is working on something
and we're seeing it change live here. Everyone that's on the companion
apps is seeing the same thing. Thanks, Elon. Awesome work. So now I think we're pretty much
ready to send this off to the ad agency. So T.J.'s handwriting is
not actually the best. That's probably the
best I've ever seen, T.J. But let's make
it a little better. The power of the Web and
Google and the machine learning capabilities of Google
are available for T.J. to make his
handwriting really good too. Excellent. And when you're done, please don't snap
a picture of this. Okay? You don't need to. Everything from Jam is saved in
Google Drive so you can pick up the progress next time. That was just a brief overview
of the new Hangouts chat, Hangouts Meet, and the
new Jamboard, built with the enterprise power of security,
intelligence, and scale that you would expect from G Suite and
designed to bring teams together in several ways. If you want to learn more,
please participate in our breakout sessions or come
see us in the sandbox. Back to you, Prabhakar. >> PRABHAKAR RAGHAVAN:
Awesome job, Jon and Scott. Thank you for that. The final announcement we are going to make today comes from thinking of G Suite in the same way you think of your teams. They work really well together
but they also need to work outside of the team. We decided that it was time for
Gmail to have add-on capability because people were wasting
simply too much time going from their primary work surface,
which could be Gmail, digressing over to a distraction, and then
by the time they come back 20 minutes are gone. Rather than my doing the talking
and explaining this, I'm going to bring on stage a partner,
Intuit, who will explain what they did with Gmail add-ons. Please welcome to the
stage the EVP and CTO of Intuit, Tayloe Stansbury. Welcome, Tayloe. Take it away. >> TAYLOE STANSBURY: Thank you, Prabhakar. And good morning, everyone. I am delighted to be here. If I could just say a word or
two about Intuit, our primary brands are
TurboTax and QuickBooks. And with QuickBooks, we have
some 1.8 million online and mobile users around the world. Now it turns out that about a
half million of those users also use Gmail. It seemed natural that with this
add-on capability we would want to integrate the two so that
those users could have a more smooth flow between
those applications. So let me give an example. Here we have a
customer whose name is Craig. He's a gardener. He really loves making
people's gardens beautiful. And he's sitting in a cafe and
he's reading his Gmail on his phone and up comes a message
from his customer Sarah who says she's delighted with the work
that he's done on her garden and she's wondering how
she should pay him. He's thinking, wow, this is an
opportune time to invoice her. Now, what used to happen is that
he would have to jump out of Gmail, go into QuickBooks,
log in, find the place to do an invoice. But instead what we did with
this add-on integration is that we have a QB icon that is
at the bottom of his Gmail. He simply clicks on that, it
infers context, single signs him on to QuickBooks, drops him
right into the place where he can fill out an invoice. And as you can see, it has excerpted from the email the name and address of his customer. Now all he has to do is fill in
the rest of the details for his invoice and off he goes. Remember, he's still in Gmail. He's never left the application
and he's been able to fill out the invoice entirely in there. Boom. He sends
the invoice and he's done. It was that easy. Now when he's done with that, he
can actually quickly look at how his invoices are doing with his
other customers in QuickBooks still inside the Gmail app, see
how that's going, and get right back into Gmail so he can finish
his coffee and go off to make other people's lawns beautiful. So, to summarize, it was really
easy to do this integration. We wrote it once and it deploys across Android, iOS, and the Web. We can't wait to
get this to market. It will be later this year. And this is just one of many
integrations that we expect to do between
QuickBooks and the G Suite. We already announced one earlier
around the calendar and we expect to have more over time. Thank you. >> PRABHAKAR RAGHAVAN: Outstanding. Outstanding, Tayloe. Thank you for being such
a great partner in this. All right. I'm just going to finish with a
wrap-up of the announcements we saw today. So here are the announcements
we went through today. Team Drives, general
availability; Vault for Drive, GA; AppBridge, Vancouver-based
company, acquired for building connectors; Drive File Stream
and early adopter program; and Quick Access for Team
Drives across all platforms. We saw the new Hangouts chat
that Scott demonstrated. That goes into early
adopter program today. Hangouts Meet in
general availability. That's the video
conferencing piece. The Jamboard. We will begin taking orders
soon and we should be generally available in May at a price
point just below $5,000 for the board with an annual
subscription fee of $600 for the service. And finally, you saw Tayloe
showcase the Gmail add-ons, and we've been working with a bunch
of other partners to build out exciting new add-ons. That goes into development
preview and we hope many of you will come and join us
on that exciting journey. With that, I want to thank
you for all your attention. Next, welcome my
colleague Chet Kapoor. >> CHET KAPOOR:
Thanks, Prabhakar. Hello, how is it going? >> AUDIENCE: Good. >> CHET KAPOOR: That
was a little weak. I just wanted to let you
know that we have extended the keynotes. We are going to have another two
hours, so you should just get really comfortable. This is going to take a while,
so you might as well enjoy the show. My name is Chet Kapoor. It is tough to follow Prabhaker. It always is. Until recently, I was the CEO
of Apigee and have been now at Google for 3 1/2 months
and it's been great. I want to do a special shout-out
to a badass woman pioneer in the tech space, Diane Greene. It is great to work with her and
for her and the opportunity to continue to do it for
quite some time to come. Oh, she's
actually in the audience. Sorry, I didn't realize that. It is awesome to see so many
customers and partners here and so many others that we will
soon have an opportunity to work with. We thank you very much for
giving us the opportunity to serve you, and we hope to do
that for many, many, many years to come. In addition to all the great
innovation, all the great innovation you've heard, one of
the conversations that we have with the G Suite for the
companies that you work for and board members is about how
the Cloud enables innovative business models, new ways of
you creating and changing your business that you wouldn't
have been able to do before. And we think a large part
- a large part of creating innovative business models is
actually working on ecosystems. In fact, Gartner, in its 2017 CIO Agenda, talked about how some of the top-performing companies that they track participate in many digital ecosystems. Interestingly enough, not
all ecosystems are the same. There are many different
kinds of ecosystems. And generally when we think
about ecosystems, we say, you know, it must be
public ecosystems. But there are many different
kinds and all of them affect innovative business models. So let's spend a couple of
minutes talking about them. T-Mobile is a great example - great example
of an internal ecosystem. Cody Sanford, the CIO, has
taken their core components like billing, like customer, like
product, and assigned senior leaders to them. These senior leaders are
responsible for making sure that these core components are
available as APIs with SLAs; by the way, only for the
internal folks at T-Mobile. In addition to making them
available as APIs, they also have to modernize the
stack underneath the APIs. And so what this is helping
T-Mobile do is deliver products and services at a pace that
they have never done before. We're going to see so much
more from this team soon. Avnet is one of the largest distributors of electronic components and solutions. Recently, they wanted
to get into the Cloud solutions business. They set up a Cloud ecosystem. They now have 5,000
resellers - 5,000 resellers. And their Cloud business is growing at a pace of 1,000% year-over-year. Industry ecosystems are
actually well understood. The stakeholders in healthcare
IT have all come together. These are the providers, these are the payers, these are the device makers, and they are all coming together to create FHIR, which is an interoperability
standard to securely exchange patient information. FHIR is picking up momentum. I love the Ticketmaster story. Ticketmaster actually has a billion-dollar API internally - an API that processes over a billion dollars on an annual basis. The aspirations are to become
the operating system for live entertainment worldwide. They want to create a
billion dollar business in a public ecosystem. They want all of you to be
able to use Ticketmaster functionality in the apps that
you create; any apps, whether it's in your car, whether it's
your mobile device, or any other kind of app that you create. We think they are
well on their way. What's common across all these
ecosystems is they are all focused on one common goal. That is to create
innovative business models. We think to create innovative
business models you need three different parts. There are three pillars
to creating innovative business models. You need to create and join
ecosystems, you need to be able to connect to apps, to data,
and to devices internally and externally, and be able
to leverage all the great innovation happening around you. So let's spend a couple of
minutes talking about each one of these. The question that all of you have to ask, every C-suite has to ask, and every board member has to ask is: is your enterprise ecosystem ready? Because the right ecosystem
platform can help your enterprise with keeping an
inventory of products or solutions if that's what you do,
apply pricing models to these products, different
kinds of pricing models, consumption-based, many others,
be able to work through multiple tiers of partners because it's
going to be B to B to C in some cases, so multiple tiers of
partners, and the most important thing, to make your product very
easy to discover and easy to buy for end-user customers. Orbitera is a multi-tier Cloud
ecosystem that helps ISVs and enterprises take their
Cloud-ready solutions and help them distribute and sell them. We've had great momentum
with Orbitera recently. We are now over three billion
transactions per month and the growth continues. APIs are well understood and
everybody knows that it's the cornerstone of every digital
transformation happening in any and every industry. APIs come in different types. There are APIs as a service. This is how
software talks to software. And now with microservices,
you're going to have thousands of APIs in your enterprises. And then there are APIs as interactions. This is how your software talks
to the physical world, whether it's a mobile device, whether
it's Google Home, or whether it's your car,
all through one API. APIs as products is
just getting understood. Technology companies
actually understand it well. Now enterprises are
starting to get it as well. It is about taking a
business focus to your API. Thinking about your API as a
business and bringing everything that you can think about,
product management, revenue, channels, thinking about how
the usage patterns are, thinking about having a road map for the
API that you publish, thinking about all the different versions and every engineering schedule that goes with it. So there is a lot happening with
APIs as a product with some of our enterprise
customers as well. Obviously an API platform needs
to cater to the entire spectrum of APIs. And in addition, it needs to be
very secure, it needs to scale, it obviously needs to be
multi-Cloud, it needs to work in your private Cloud as well, and
it needs to be able to support a hybrid architecture. At Apigee, we know a
thing or two about APIs. We process billions of
API calls in our Cloud. You process billions of API
calls using our technology in your Cloud. Thank you very much for the
customers that have bet on us. Hopefully we get a chance
to serve more of you soon. Leveraging innovation is
something that start-ups get because they're
born with constraints. They start every day and
say what is our core value? What am I going to do that
is going to be world class? Everything else, they don't
outsource but they go and partner for. It's something that happens
really, really well in the tech industry. And now it's good to see
enterprise companies do that as well. Audi, Crate and Barrel, and many
others are taking Maps APIs and making them part of the customer experience that they are delivering every day. It's not just
about the Maps API. There are thousands of APIs. AccuWeather's weather API,
Twilio's telephony API, and on and on that are available that
you should try to leverage to accelerate your ecosystem. As Google, we pioneered
the product API space. Google Maps has been
around for over a decade. It now has close to three
million daily active users. And you've probably heard in the
last two days the number of APIs that we are going to bring to you, all with one simple goal: to accelerate your journey as you think about more innovative business models. So we take these three pillars
and bring them together and call it the Connected
Business Platform. A Connected Business Platform
is about creating and joining ecosystems, internal and
external ecosystems, all kinds of ecosystems. It's about connecting to apps,
data, and devices in your data centers, in your firewall,
as well as externally. And it's about leveraging all
this great innovation that is happening in the bazaar. Digital is happening today. You work for a company that
has either been disrupted, is getting disrupted, or is
going to get disrupted. It's going to happen
with every industry. Every analyst is writing about
it, every strategy group is writing about it, and more
importantly, you are living it. We think the time to act and to
go off and think about creative new business models is now. As Google, we have seven one-billion-user apps that we run every day. We have created multiple
ecosystems and have joined many, many more. We want to take our experience
and meet your requirements for today and partner with
you for the future. Please come by our showcase. It's right here outside
the third - on the third floor itself. Come and take a look at
the great examples we have. We have many enterprises that
are building different kinds of ecosystems and we are happy to
talk to you more about helping you with your journey. Accenture has
been a great partner. We've participated in many
Accenture tech vision reports. The one in 2017 is phenomenal. It talks a lot about ecosystems,
AI, and many other things. But I thought this quote was
really interesting because it talks about how companies need
to partner with their customers - a long-time pet peeve of mine - because the best product managers are your customers and your employees, and this requires a cultural shift. So to discuss this, I would
like to invite Gene Reznik to the stage. Gene, take your time. They already know we're going
to take another two hours. >> GENE REZNIK: Twenty minutes. Twenty minutes. >> CHET KAPOOR: That's
all right we said, right? Gene, thank you very
much for joining us. >> GENE REZNIK: Pleasure. >> CHET KAPOOR: Would
you introduce yourself? Tell us a little bit
about how Accenture thinks about ecosystems. And then most importantly, what
are you seeing in the market? What are enterprises thinking
about and doing with ecosystems? >> GENE REZNIK: Yeah. Yeah. So I'm responsible for
Accenture's ecosystem and ventures. What that really means to us
- Chet, as you said - every industry, every enterprise
customer is being disrupted or is doing a disruption. What we really wanted to do
is, coming from the strength of Accenture, really to help our clients on that journey. And we have a partnership with Apigee that we started five to seven years ago, and now we're very excited to continue it as part of the Google family. And really what we see is this
concept that you brought up, externalize innovation. To compete, you need to
work much more broadly. And fundamentally, innovation is
the operative word with many of our clients. Now innovation is a culture,
innovation is a set of business processes, and I think what our
clients are really demanding for is for innovation to be also
not hindered by integration. I think this is where the technology, and what technology enables, really needs to reinforce the velocity, the creativity, and the business models that a lot of the Fortune 100 and Fortune 1000 really want to set up to take their business to the next level. That's really the
excitement for us. We're really disrupting with
them and helping them evolve and really empower their
businesses in the new. >> CHET KAPOOR: That's awesome. So the one thing that is
different about the 2017 tech vision report than the others
that we participated in and work with you on is that you come
up with this concept called people first. >> GENE REZNIK: Right. >> CHET KAPOOR: As we talked
to the C-suite about their journeys, the
cultural ramifications are quite significant. Can you tell us a little bit
more about this people first concept and how you're thinking
about it and how you're talking to your clients about it? >> GENE REZNIK: Yeah. I mean, over the past couple
of years, I think we've all realized that it's really about
the people, the employees, the innovation, the culture, but
fundamentally it's probably the hardest thing to transform. I think a lot of us work to
really reinvent our teams. A lot of us work, even us
consultants, to reinvent and be relevant and ultimately embrace
things like design thinking and storytelling and a lot of
the things that we saw here earlier today. And again, how did that really
translate into you don't want to come out of that and then go
into the standard waterfall development lifecycle that takes
six months to do the integration that you're trying to do. You want it realtime. You want to drag and drop. You really want to continue and
extend the culture that you're fostering through your design
studios, through your innovation centers, all the way through
the entire lifecycle and really create those experiences for
your customers and the ways that you do your business. So it's really an important
part of what we're trying to get right. >> CHET KAPOOR: Awesome. Thank you very much, Gene. I look forward to
working with you more. >> GENE REZNIK: Absolutely. Thank you, Chet. Thank you. >> CHET KAPOOR: Next I
would like to invite three of our customers. These are customers that are already doing ecosystems: one company that has been doing ecosystems for quite some time and is now thinking about it differently, and a couple of others that are new to this and are changing the way their companies work. So please welcome
our customer panel. >> LYNN LUCAS: Hello. >> CHET KAPOOR: You're ready
to extend this for another two hours? >> LYNN LUCAS: I'm
getting hungry. >> CHET KAPOOR: All right. So why don't each of you start? And maybe I'll
start with you, Lynn. Introduce yourself. Veritas and Google had
an announcement yesterday. Tell us a little bit about that. And then tell us a little bit
about how Veritas thinks about ecosystems because
you're not new to this. I mean, you've been a tech
company for a long time. How are you thinking about
ecosystems differently now? >> LYNN LUCAS: Great. So thank you very much. Lynn Lucas. I'm CMO and I lead
marketing at Veritas. We made a major announcement
and a strategic partnership with Google yesterday. It's really all about what's
been talked about here, which is the importance of data and
how it's changing businesses. How do you have
visibility into your data? How are you moving the right
data to the Google Cloud? How do you protect it there? And then for you G Suite users,
how are you ensuring that you remain compliant with increasing regulation? Now, Chet, what you said is that
Veritas has long understood the importance of ecosystems. We've invested
heavily in it for years. I mean, IT is built on the fact
that you have to have ecosystems to make it easier for you. This couldn't be more important
in the era of the Cloud. Your data is spread across Google Cloud and, as Eric said, probably more than just that. Many of you probably have your CRM, your HR applications, and other workloads in multiple clouds. We're building ecosystems with
Google Cloud and with many others to ensure that it's
easier for you to manage and protect that data. >> CHET KAPOOR: Awesome. Thanks, Lynn. Roger, same for you. Give us an introduction. And tell us - Pitney Bowes, you know, has been around for 100 years. What are you guys doing with ecosystems? >> ROGER PILC: Yeah, absolutely. So, I'm the Chief Innovation Officer of Pitney Bowes. We are a global technology
company that powers commerce. We have been around for 100
years and we've been undergoing a very, very
exciting transformation. That transformation has been
powered by what we call the Pitney Bowes Commerce Cloud. Apigee, as Chet knows, has been an absolutely outstanding partner in helping us digitize everything we do for clients as we discover, identify, locate, communicate, ship, and pay. So the Commerce Cloud, Apigee, and APIs have allowed us to power a very exciting commerce ecosystem with many millions of sellers, many millions of buyers, multiple e-commerce marketplaces, and tens of financial services companies, and it's really helped
transform our company. More recently, we also engaged with Google around the Android operating system for our SMB devices and our SMB ecosystem, and we recently chose Orbitera as well to bring apps to our technology and our Pitney Bowes Commerce Cloud for SMB. So it's been a great journey. >> CHET KAPOOR: Awesome. Fatala, I was really moved by
Fred, your CEO's letter in the annual report, where he talked about how you're going to stop being a retailer with a digital arm. You're going to become a digital company with some physical space and a human touch. >> ANDRE FATALA: Right. >> CHET KAPOOR: You've been
doing this for a while. There were two people and a dog
in a small room when you first started out. Can you tell us - introduce
yourself and tell us a little bit about your journey as you've
expanded Magazine Luiza's perspective on how to
think about ecosystems. >> ANDRE FATALA: Okay, right. So my name is Andre Fatala. I am from Brazil. So I am glad to be here
talking about technology and not carnival. We started our journey in 2012. We were a really small R&D team trying to crack the company's corporate enterprise systems. After two or three years, we took over all of the point-of-sale development inside the company, using a lot of lean methodologies. Since then, we've moved everything to the Cloud, starting with the APIs, as we got together and talked about how to leverage innovation and decouple the big legacy systems. Then our CEO had this idea to try to be a digital platform with physical stores, and we've turned the entire company toward doing this. We are seeing really good results. Last year our digital business grew 35%. I don't know if you guys know,
but in Brazil we are facing a really bad economic crisis. >> CHET KAPOOR: For sure. That was great. Fatala, one follow-up
question for you. I'm sure this was
not easy, right? I mean, yeah, you started out in a small office with a dog and everything, and you've transformed quite a bit. What was the hardest
part of the journey? >> ANDRE FATALA: I think the
hardest part of the journey was getting everybody to understand this change in the company's mindset. After that, we had the challenge of changing the entire technology stack to provide the flexibility and agility the business needs. But I think that we are doing a great job on that. >> CHET KAPOOR:
And so - go ahead. >> LYNN LUCAS: If I could add to that, I think that with these transformations - and Veritas is undergoing that transformation as well - the hardest part is
the people and the culture. It seems like that's what
you've experienced as well. The ability to move forward with
the people and culture makes all the technology come to life. >> ANDRE FATALA: That's right. Technology is there. It can be used for everything. We need to get the
right people to do this. >> CHET KAPOOR: So, Roger,
how do you - I will ask a very direct question - how do
you deal with antibodies? Right? I mean,
it's a cultural shift. It's understood in the C-suite, the board, everybody - we're going to make the transformation - but not everybody gets it. How do you deal with
those cultural aspects of this transformation? >> ROGER PILC: Yeah. It's similar to
the other panelists. For us, culture has been
the biggest positive in our transformation. We've had a company that for 100
years has valued our employees, has had great employee
engagement, has had a culture of innovation, and most
importantly collaboration. For us, we were
able to harness that. That was a great surprise, having been there only three years. So the key for us has been to have a
very clear vision of where we're going, be very consistent with
it, focus on operational execution and the discipline of getting things done, and just maintain belief. There are pockets of concern where maybe there were past failings, so maintaining that vision and that constant belief, and sharing that sense of
belief has been critical. >> CHET KAPOOR: And the sense
that it is okay to move the goalposts a bit. >> ROGER PILC: Right. Absolutely. We moved them very far. >> CHET KAPOOR: Final
question for all three of you. We'll start with you, Fatala. If you were in the audience a
year ago, what advice would you have liked to give yourself? >> ANDRE FATALA: Try to focus
on our customers, be creative in solving their problems and not ours, and I think when you decide to do something, execute like crazy. You need to try a lot. Get data, get some insights from that data, and then evolve the product. >> CHET KAPOOR: Awesome. How about you, Roger? >> ROGER PILC: The key is to
believe that great things and great transformation are possible, with the right vision, recognizing the availability of great technologies today, and picking the absolutely right technology partners - we couldn't afford to misstep on that. So choosing the right technology
partners and executing against a vision has been critical. >> CHET KAPOOR: I love that. You're starting with believing. >> ROGER PILC: Right. >> CHET KAPOOR: And
Lynn, how about you? >> LYNN LUCAS: If it's your
digital transformation, you absolutely have to focus on
people and culture first. When it comes to data, which is at the heart of how we build our businesses: get visibility into what you have - 30% of it is junk or ROT - then move it quickly to Google Cloud, rethink your storage, reduce waste, and create a more sustainable environment. >> CHET KAPOOR: Awesome. Thank you very much. I would like to now
invite Urs back on stage. >> URS HOLZLE: All right. So you see a lot of innovation
at this conference behind me. You see actually just a subset
of what we announced today because we don't really - we can't really fit everything on one slide. We showed how Google Cloud has
the best infrastructure, the best security, the best
productivity suite, and we showed a host of new
collaboration features, including the awesome Jamboard. So I'm really
looking forward to that. And then, of course, there is a
lot of help to get you started where you are, and we also
showed you yesterday how we're building our ecosystem with
global partnerships with companies like
Pivotal, SAP, Rackspace. And we understand it's not just
about the technology, it's about helping everyone be
successful in using it. So that's what this
conference is all about. So thank you very much for
coming, and thank you for the trust that you've placed in us. Have a great day.